How to replace and count the frequency of words or word sequences?

Published 2024-09-27 07:10:35


I need to do two things. First, find, in a given text, the most used words and word sequences (limited to length n).
Example:

Lorem *ipsum* dolor sit amet, consectetur adipiscing elit. Nunc auctor urna sed urna mattis nec interdum magna ullamcorper. Donec ut lorem eros, id rhoncus nisl. Praesent sodales lorem vitae sapien volutpat et accumsan lorem viverra. Proin lectus elit, cursus ut feugiat ut, porta sit amet leo. Cras est nisl, aliquet quis lobortis sit amet, viverra non erat. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Integer euismod scelerisque quam, et aliquet nibh dignissim at. Pellentesque ut elit neque. Etiam facilisis nisl eu mauris luctus in consequat libero volutpat. Pellentesque auctor, justo in suscipit mollis, erat justo sollicitudin ipsum, in cursus erat ipsum id turpis. In tincidunt hendrerit scelerisque.

(some words may have been omitted, but it's an example).

I'd like the result to be sit amet, and not sit and amet separately.

Any ideas on how to start?
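One way to start is a sliding-window n-gram count (a minimal sketch; the function name ngramFrequencies is illustrative, not from any library): split the text into lowercase words, slide a window of n over the list, and count each joined phrase.

```php
<?php
// Minimal n-gram frequency sketch: strip non-letters, split into
// lowercase words, then slide a window of $n across the word list
// and count each joined phrase.
function ngramFrequencies($text, $n) {
    $clean = strtolower(preg_replace('/[^\p{L}\s]/u', ' ', $text));
    $words = preg_split('/\s+/', trim($clean), -1, PREG_SPLIT_NO_EMPTY);
    $counts = array();
    for ($i = 0; $i + $n <= count($words); $i++) {
        $phrase = implode(' ', array_slice($words, $i, $n));
        $counts[$phrase] = isset($counts[$phrase]) ? $counts[$phrase] + 1 : 1;
    }
    arsort($counts); // most frequent phrases first
    return $counts;
}

$freq = ngramFrequencies('lorem ipsum dolor sit amet porta sit amet leo', 2);
// 'sit amet' occurs twice here; every other bigram occurs once.
```

With $n = 2 this counts "sit amet" as one phrase rather than two separate words.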

Second, I need to wrap, in a given file, every word or word sequence that matches an entry from a given list.

For this, I'm thinking of ordering the list by descending length and then processing each string in a replace function, to avoid wrapping just the sit of sit amet when my list also contains a standalone sit.
Is this a good approach?
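The descending-length idea can be sketched like this (wrapPhrases and the <strong> markers are illustrative assumptions, not from any library): sorting longest-first makes "sit amet" get wrapped before a bare "sit" is considered, and a negative lookahead skips matches that already sit inside an inserted tag.

```php
<?php
// Hypothetical helper: wrap every phrase from $phrases found in $text,
// processing longer phrases first so that "sit amet" is wrapped before
// a standalone "sit" can match inside it.
function wrapPhrases($text, array $phrases, $open = '<strong>', $close = '</strong>') {
    // Longest phrases first.
    usort($phrases, function ($a, $b) {
        return strlen($b) - strlen($a);
    });
    foreach ($phrases as $phrase) {
        // \b keeps "sit" from matching inside "visit";
        // (?![^<]*<\/) skips text that is already inside an inserted tag.
        $pattern = '/\b' . preg_quote($phrase, '/') . '\b(?![^<]*<\/)/i';
        $text = preg_replace($pattern, $open . '$0' . $close, $text);
    }
    return $text;
}

echo wrapPhrases('Lorem ipsum dolor sit amet, porta sit leo.', array('sit', 'sit amet'));
// prints: Lorem ipsum dolor <strong>sit amet</strong>, porta <strong>sit</strong> leo.
```

The lookahead trick only guards against the tags this function itself inserts; for HTML input with pre-existing markup a real parser would be safer.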

Thank you


Comments (1)

浊酒尽余欢 2024-10-04 07:10:35


This is a functional solution that could still use some cleaning up. My general algorithm is this:

  1. Explode all words into a list w, stripping excess whitespace and punctuation
  2. Find the array of all n-length chunks of w starting at offset 0
  3. Find the array of all n-length chunks of w starting at offset 1
     • ... continue until you've found the array of n-length chunks starting at offset n-1
     • Note: if the last chunk of w is not n-length, do not include it as part of the chunk array
  4. Concatenate all chunk arrays as c
  5. Find the frequency of every value in c

$sample = 'Lorem *ipsum* dolor sit amet, consectetur adipiscing elit. Nunc auctor urna sed urna mattis nec interdum magna ullamcorper. Donec ut lorem eros, id rhoncus nisl. Praesent sodales lorem vitae sapien volutpat et accumsan lorem viverra. Proin lectus elit, cursus ut feugiat ut, porta sit amet leo. Cras est nisl, aliquet quis lobortis sit amet, viverra non erat. Vestibulum ante ipsum  primis in faucibus orci luctus et ultrices posuere cubilia Curae; Integer euismod scelerisque quam, et aliquet nibh dignissim at. Pellentesque ut elit neque. Etiam facilisis nisl eu mauris luctus in consequat libero volutpat. Pellentesque auctor, justo in suscipit mollis, erat justo sollicitudin ipsum, in cursus erat ipsum id turpis. In tincidunt hendrerit scelerisque.';

function buildPhrases($string, $length) {

    // Strip punctuation, then split the text on runs of whitespace.
    $onlyWords = preg_replace('/\p{P}/u', '', $string);
    $wordArray = preg_split('/\s+/', $onlyWords, -1, PREG_SPLIT_NO_EMPTY);

    // Recursive closure rather than a nested named function: a nested
    // function is declared globally on first call, so calling
    // buildPhrases() twice would raise a redeclaration error.
    $buildPhraseChunks = function ($wordArray, $length, $offset = 0) use (&$buildPhraseChunks) {
        if ($offset >= $length) {
            return array();
        }
        $offsetWordArray = array_slice($wordArray, $offset);
        return array_merge(
            array_chunk($offsetWordArray, $length),
            $buildPhraseChunks($wordArray, $length, $offset + 1)
        );
    };

    // Keep only chunks of exactly $length words; the last chunk of
    // each pass may be shorter.
    $onlyLengthN = function ($n) {
        return function ($a) use ($n) {
            return count($a) == $n;
        };
    };

    $concatWords = function ($a, $b) {
        return $a . ' ' . $b;
    };

    // Join each chunk's words back into a single phrase string.
    $reduce = function ($a) use ($concatWords) {
        return array_reduce($a, $concatWords);
    };

    $format = function ($a) {
        return strtolower(trim($a));
    };

    $chunks = array_filter(
        $buildPhraseChunks($wordArray, $length),
        $onlyLengthN($length)
    );
    $phrases = array_map($reduce, $chunks);

    return array_map($format, $phrases);
}

// Count single words; pass 2, 3, ... as the second argument for word sequences.
$phrases = buildPhrases($sample, 1);

// Keep only phrases that occur more than once.
$dropOnes = function ($a) {
    return $a != 1;
};
$freqCount = array_filter(
    array_count_values($phrases),
    $dropOnes
);

arsort($freqCount);

print_r($freqCount);