How do I do my Perl homework?

Posted on 2024-12-09 15:48:32

Given a perl hash structure

{
    'A' => { 'B' => 'C', 'D' => 'E' },
    'F' => { 'B' => 'G', 'D' => 'H' },
    'I' => { 'B' => 'G', 'D' => 'H' },
    'J' => { 'B' => 'C', 'D' => 'F' },
}

I need to detect that F and I are duplicates based on their inner pairs: both map B => G and D => H, so the (G, H) pair is common to F and I, and they form one duplicate pair that should be counted only once.

The final output count structure is like this:

{
   'B' => { 'C' => 2, 'G' => 1 },            # G => 1: the (G, H) pair from F and I counts only once; C => 2 because (C, E) and (C, F) are distinct pairs
   'D' => { 'E' => 1, 'H' => 1, 'F' => 1 },  # likewise H => 1
}

Is there any fast way in Perl to do this?

小…红帽 2024-12-16 15:48:32

Assuming you want to prune duplicates from $hoh and the two-level structure isn't accidental, you could use something like:

  use Data::Dumper;

  my %pruned;                       # resulting pruned/uniq HoH
  my %vs;                           # store/count uniq values
  my @k0 = keys %$hoh;              # top level keys
  my @k1 = keys %{$hoh->{$k0[0]}};  # keys common to every inner hash
  for my $k0 (@k0) {
    # add item to pruned if its values are seen for the first time
    $pruned{$k0} = $hoh->{$k0} if (1 == ++$vs{join "\t", map {$hoh->{$k0}{$_}} @k1} );
  }
  print Dumper( \%pruned ), "\n";

output:

$VAR1 = {
          'A' => {
                   'D' => 'E',
                   'B' => 'C'
                 },
          'F' => {
                   'D' => 'H',
                   'B' => 'G'
                 },
          'J' => {
                   'D' => 'F',
                   'B' => 'C'
                 }
        };
書生途 2024-12-16 15:48:32

First create a way to tell whether two hashes are the same. Rather than writing this myself, I'll pull it out of another module: eq_hash from Test::More. Then all we need is a little bit of Perl code.

## Set up the hash of hashes
my $hoh = {
    'A' => { 'B' => 'C', 'D' => 'E' },
    'F' => { 'B' => 'G', 'D' => 'H' },
    'I' => { 'B' => 'G', 'D' => 'H' },
    'J' => { 'B' => 'C', 'D' => 'F' },
};

use Test::More;
use Data::Dumper;

# Collect each duplicate pair of top-level keys exactly once.
my @del;
foreach my $h1 ( keys %$hoh ) {
  INNER: foreach my $h2 ( keys %$hoh ) {
    if ( $h1 ne $h2 && Test::More::eq_hash( $hoh->{$h1}, $hoh->{$h2} ) ) {

      my @sort = sort ($h1, $h2);
      foreach my $r ( @del ) {
        next INNER if $r->[0] eq $sort[0] && $r->[1] eq $sort[1];
      }
      push @del, \@sort;

    }
  }
}

# Drop one key from each duplicate pair.
delete $hoh->{$_->[0]} for @del;

# Tally the remaining inner key/value pairs.
my $o;
foreach my $h1 ( values %$hoh ) {
  while ( my ($k, $v) = each %$h1 ) {
    $o->{$k}{$v}++;
  }
}

print Dumper $o;

And, that's it!

罪#恶を代价 2024-12-16 15:48:32

Very simple and straightforward solution:

sub Count {
  my $input = shift;
  my (%output,%seen);
  for my $bunch (values %$input) {
    # Build the seen-signature from sorted keys so identical inner
    # hashes always produce the same string.
    next if $seen{join '|', map { $_, $bunch->{$_} } sort keys %$bunch}++;
    $output{$_}{$bunch->{$_}}++ for keys %$bunch;
  }
  return \%output;
}