Need help with peak signal detection in Perl

Posted 2024-09-15 11:01:48

Hi everyone, I have some intensity values from images of yeast colony plates. I need to be able to find the peaks in the intensity values. Below is an example showing how the values look when graphed.

Example of some of the values

5.7
5.3
8.2
16.5
34.2
58.8
**75.4**
75
65.9
62.6
58.6
66.4
71.4
53.5
40.5
26.8
14.2
8.6
5.9
7.7
14.9
30.5
49.9
69.1
**75.3**
69.8
58.8
57.2
56.3
67.1
69
45.1
27.6
13.4
8
5

These values show two peaks, at 75.4 and 75.3: you can see that the values increase and then decrease. The size of the change is not always the same.

Graph of intensity values

http://lh4.ggpht.com/_aEDyS6ECO8s/THKTLgDPhaI/AAAAAAAAAio/HQW7Ut-HBhA/s400/peaks.png

One of the things that I am thinking of doing is to store each of the groups (i.e. the mountains) in a hash and then look for the largest value in each group. One of the issues that I am seeing, though, is how to determine the boundaries of each group.
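The "increase then decrease" pattern can be made concrete: in the simplest case, a peak is a point strictly higher than both of its neighbours. A minimal Perl sketch of that idea, using made-up toy values rather than the real data set:

```perl
use strict;
use warnings;

# Toy values shaped like the two "mountains" in the question (not real data).
my @data = (5, 8, 16, 34, 75, 65, 40, 14, 6, 8, 30, 70, 75.3, 69, 27, 8, 5);

my @peaks;
for my $i (1 .. $#data - 1) {
    # A peak is any interior point strictly higher than both neighbours.
    push @peaks, $data[$i]
        if $data[$i] > $data[$i - 1] && $data[$i] > $data[$i + 1];
}

print "@peaks\n";    # prints "75 75.3"
```

On the real data this naive rule would also flag the smaller shoulder bumps (71.4 and 69), which is exactly why the group boundaries matter.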

Here is a link to the code that I have so far:
http://paste-it.net/public/y485822/

Here is a link to a complete data set:
http://paste-it.net/public/ub121b4/

I am writing my code in Perl. Any help would be greatly appreciated. Thank you.

Comments (3)

淡淡の花香 2024-09-22 11:01:48

You need to decide how local you want the peaks to be. The approach here finds peaks and troughs within broad regions of the data.

use strict;
use warnings;

my @data = (
    5.7, 5.3, 8.2, 16.5, 34.2, 58.8, 75.4, 75, 65.9, 62.6,
    58.6, 66.4, 71.4, 53.5, 40.5, 26.8, 14.2, 8.6, 5.9, 7.7,
    14.9, 30.5, 49.9, 69.1, 75.3, 69.8, 58.8, 57.2, 56.3, 67.1,
    69, 45.1, 27.6, 13.4, 8, 5,
);

# Determine mean. Or use Statistics::Descriptive.
my $sum;
$sum += $_ for @data;
my $mean = $sum / @data;

# Make a pass over the data to find contiguous runs of values
# that are either less than or greater than the mean. Also
# keep track of the mins and maxes within those groups.
my $group = -1;
my $gt_mean_prev = '';
my @mins_maxs;
my $i = -1;

for my $d (@data){
    $i ++;
    my $gt_mean = $d > $mean ? 1 : 0;

    unless ($gt_mean eq $gt_mean_prev){
        $gt_mean_prev = $gt_mean;
        $group ++;
        $mins_maxs[$group] = $d;
    }

    if ($gt_mean){
        $mins_maxs[$group] = $d if $d > $mins_maxs[$group];
    }
    else {
        $mins_maxs[$group] = $d if $d < $mins_maxs[$group];
    }

    $d = {
        i       => $i,
        val     => $d,
        group   => $group,
        gt_mean => $gt_mean,
    };
}

# A fun picture.
for my $d (@data){
    printf
        "%6.1f  %2d  %1s  %1d  %3s  %s\n",
        $d->{val},
        $d->{i},
        $d->{gt_mean} ? '+' : '-',
        $d->{group},
        $d->{val} == $mins_maxs[$d->{group}] ? '==>' : '',
        '.' x ($d->{val} / 2),
    ;

}

Output:

   5.7   0  -  0       ..
   5.3   1  -  0  ==>  ..
   8.2   2  -  0       ....
  16.5   3  -  0       ........
  34.2   4  -  0       .................
  58.8   5  +  1       .............................
  75.4   6  +  1  ==>  .....................................
  75.0   7  +  1       .....................................
  65.9   8  +  1       ................................
  62.6   9  +  1       ...............................
  58.6  10  +  1       .............................
  66.4  11  +  1       .................................
  71.4  12  +  1       ...................................
  53.5  13  +  1       ..........................
  40.5  14  -  2       ....................
  26.8  15  -  2       .............
  14.2  16  -  2       .......
   8.6  17  -  2       ....
   5.9  18  -  2  ==>  ..
   7.7  19  -  2       ...
  14.9  20  -  2       .......
  30.5  21  -  2       ...............
  49.9  22  +  3       ........................
  69.1  23  +  3       ..................................
  75.3  24  +  3  ==>  .....................................
  69.8  25  +  3       ..................................
  58.8  26  +  3       .............................
  57.2  27  +  3       ............................
  56.3  28  +  3       ............................
  67.1  29  +  3       .................................
  69.0  30  +  3       ..................................
  45.1  31  +  3       ......................
  27.6  32  -  4       .............
  13.4  33  -  4       ......
   8.0  34  -  4       ....
   5.0  35  -  4  ==>  ..
伪装你 2024-09-22 11:01:48

my @data = ...;

# filter out sequential duplicate values
my @orig_index = (0);
my @deduped    = ($data[0]);
for my $index ( 1..$#data ) {
    if ( $data[$index] != $data[$index-1] ) {
        push @deduped, $data[$index];
        push @orig_index, $index;
    }
}

# add a sentinel (works for both ends)
push @deduped, -9**9**9;

my @local_maxima_indexes;
for my $index ( 0..$#deduped-1 ) {
    if ( $deduped[$index] > $deduped[$index-1] && $deduped[$index] > $deduped[$index+1] ) {
        push @local_maxima_indexes, $orig_index[$index];
    }
}

Note that this considers the first value a local maximum, and also the values 71.4 and 69. I'm not sure how you are distinguishing which ones you want included.
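One way to suppress those extra maxima is to demand a minimum prominence: a candidate maximum only counts as a peak once the signal has fallen at least some threshold $delta below it. This is the idea behind Eli Billauer's well-known peakdet routine; the sketch below is an independent Perl rendering of it, and $delta = 20 is an assumed tuning value, not something taken from the question:

```perl
use strict;
use warnings;

my @data = (
    5.7, 5.3, 8.2, 16.5, 34.2, 58.8, 75.4, 75, 65.9, 62.6,
    58.6, 66.4, 71.4, 53.5, 40.5, 26.8, 14.2, 8.6, 5.9, 7.7,
    14.9, 30.5, 49.9, 69.1, 75.3, 69.8, 58.8, 57.2, 56.3, 67.1,
    69, 45.1, 27.6, 13.4, 8, 5,
);

my $delta = 20;            # assumed: required drop before a peak is confirmed
my ($mn, $mx) = (9**9**9, -9**9**9);   # running min/max (same infinity trick)
my $mxpos;
my $look_for_max = 1;
my @peaks;                 # each element: [index, value]

for my $i (0 .. $#data) {
    my $v = $data[$i];
    ($mx, $mxpos) = ($v, $i) if $v > $mx;
    $mn = $v if $v < $mn;

    if ($look_for_max) {
        # Confirm the running max as a peak once we drop $delta below it.
        if ($v < $mx - $delta) {
            push @peaks, [$mxpos, $mx];
            $mn = $v;
            $look_for_max = 0;
        }
    }
    elsif ($v > $mn + $delta) {
        # Risen $delta above the trough: start hunting for the next peak.
        ($mx, $mxpos) = ($v, $i);
        $look_for_max = 1;
    }
}

printf "peak %.1f at index %d\n", $_->[1], $_->[0] for @peaks;
# peak 75.4 at index 6
# peak 75.3 at index 24
```

With $delta = 20 this reports exactly the two peaks at 75.4 and 75.3 and skips the 71.4 and 69 shoulders; a smaller $delta would start picking those up again.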

长伴 2024-09-22 11:01:48

Do you have a control data set? If so, I'd recommend normalizing your data using, say, a simple log ratio between the yeast intensities and the control images.

You could then use the Perl port of ChiPOTle to grab the significant peaks, which sounds much more robust than searching for local/global maxima, etc.

ChiPOTle "is a peak-finding algorithm used to analyze ChIP-chip microarray data", but I've used it successfully in many other applications (like ChIP-seq, which admittedly is closer to its original purpose than your case is).

The resulting log(yeast/control) negative values would be used to build a Gaussian background model for significance estimation. The algorithm then uses the false-discovery rate for multiple testing correction.
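The log-ratio step itself is straightforward. A small sketch, where @yeast and @control are hypothetical paired intensities invented for illustration (not data from the question):

```perl
use strict;
use warnings;

# Hypothetical paired measurements for the same spots (invented values).
my @yeast   = (75.4, 40.5, 14.2, 75.3);
my @control = (37.7, 40.5, 28.4, 18.825);

# log2(yeast/control): 0 = no change, +1 = doubled, -1 = halved.
my @log_ratio = map { log($yeast[$_] / $control[$_]) / log(2) } 0 .. $#yeast;

printf "%6.2f\n", $_ for @log_ratio;
```

The negative values in the result would then feed the Gaussian background model described above.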

Here's the original paper.
