检测和比较短语的算法

Posted on 2024-11-17 21:39:02


I have a couple of non-English texts. I would like to perform stylistic comparisons on them.

One method of comparing style is to look for similar phrases. If I find in one book "fishing, skiing and hiking" a couple of times and in another book "fishing, hiking and skiing", the similarity in style points to a single author. I also need to be able to find "fishing and even skiing or hiking", though. Ideally I would also find "angling, hiking and skiing", but because they are non-English texts (Koine Greek), synonyms are harder to allow for and this aspect is not vital.

What is the best way to (1) go about detecting these sorts of phrases and then (2) searching for them in a way that is not overly rigid in other texts (so as to find "fishing and even skiing or hiking")?

Comments (3)

暮凉 2024-11-24 21:39:02

  • Take all your texts and build a list of the words. Easy way: take all the words. Hard way: take only the relevant ones (e.g. in English, "the" is never a pertinent word because it is used too often). Let's say you have V words in your vocabulary.
  • For each text, build an adjacency matrix A of size V×V. Row A(i) states how close the words in your vocabulary are to the i-th word V(i). For example, if V(i)="skiing", then A(i,j) is how close the word V(j) is to the word "skiing". You'd prefer a small vocabulary!

Technical details:
For the vocabulary, you have several possibilities for getting a good one. Unfortunately, I can't remember the names. One of them consists of deleting words that are present often and everywhere; on the contrary, you should keep rare words that are present in a few texts. However, there is no use in keeping words present in exactly one text.
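A minimal sketch of that document-frequency filtering, assuming whitespace tokenization; the function and parameter names (build_vocabulary, min_docs, max_doc_ratio) and the threshold values are illustrative, not from the answer:

```python
from collections import Counter

def build_vocabulary(texts, min_docs=2, max_doc_ratio=0.9):
    """Keep words that occur in at least `min_docs` texts (drops words
    unique to one text) but not in nearly every text (drops ubiquitous
    words like "the"). Thresholds are illustrative assumptions."""
    tokenized = [set(text.lower().split()) for text in texts]
    # Document frequency: in how many texts does each word appear?
    doc_freq = Counter(word for words in tokenized for word in words)
    n_texts = len(texts)
    return sorted(word for word, df in doc_freq.items()
                  if df >= min_docs and df / n_texts <= max_doc_ratio)
```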

For the adjacency matrix, adjacency is measured by counting how far apart the words you are considering are (counting the number of words separating them). For example, let's use your very text =)

One method of comparing style is to look for similar phrases. If I find in one book "fishing, skiing and hiking" a couple of times and in another book "fishing, hiking and skiing" the similarity in style points to one author. I need to also be able to find "fishing and even skiing or hiking" though. Ideally I would also find "angling, hiking and skiing" but because they are non-English texts (Koine Greek), synonyms are harder to allow for and this aspect is not vital.

These are entirely made-up values:
A(method, comparing) += 1.0
A(method, similarity) += 0.5
A(method, Greek) += 0.0

You mainly need a "typical distance". You can say, for example, that after 20 separating words, two words can no longer be considered adjacent.
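A sketch of building the adjacency matrix with such a cutoff; the 20-word window is from the answer, while the linear decay of the weight with distance is an invented choice (any decreasing scheme would fit the made-up values above):

```python
import numpy as np

def adjacency_matrix(tokens, vocab, window=20):
    """V*V matrix where A[i, j] accumulates weight each time vocab word
    i occurs within `window` words of vocab word j; nearer pairs weigh
    more. The linear decay is an illustrative assumption."""
    index = {word: i for i, word in enumerate(vocab)}
    A = np.zeros((len(vocab), len(vocab)))
    for i, left in enumerate(tokens):
        if left not in index:
            continue
        # Only pairs within `window` words count as adjacent.
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            right = tokens[j]
            if right in index:
                weight = 1.0 - (j - i) / (window + 1)
                A[index[left], index[right]] += weight
                A[index[right], index[left]] += weight
    return A
```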

After a bit of normalization, just compute the L2 distance between the adjacency matrices of two texts to see how close they are (see the sketch after the example below). You can do fancier stuff afterwards, but this should yield acceptable results. Now, if you have synonyms, you can update the adjacency in a nice way. For example, if the input contains "beautiful maiden", then
A(beautiful, maiden) += 1.0
A(magnificent, maiden) += 0.9
A(fair, maiden) += 0.8
A(sublime, maiden) += 0.8
...
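Putting the comparison step into code, a sketch of the normalize-then-L2 idea; dividing by the Frobenius norm is one reasonable reading of "a bit of normalization", not the answer's prescription:

```python
import numpy as np

def style_distance(A1, A2):
    """L2 (Frobenius) distance between two normalized adjacency
    matrices built over the SAME vocabulary; smaller = closer style."""
    n1 = A1 / (np.linalg.norm(A1) or 1.0)  # guard against all-zero matrices
    n2 = A2 / (np.linalg.norm(A2) or 1.0)
    return float(np.linalg.norm(n1 - n2))
```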

探春 2024-11-24 21:39:02


You should probably use some string-similarity measure such as Jaccard, Dice, or cosine similarity. You could try these either on words, on (word- or character-level) n-grams, or on lemmas. (For a highly inflected language such as Koine Greek, I would suggest using lemmas if you have a good lemmatizer for it.)
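For instance, a minimal sketch of Jaccard similarity on word sets; swapping in n-grams or lemmas (as suggested above) is a small change, and all names here are illustrative:

```python
import re

def word_set(text):
    # Lowercased word tokens; for Koine Greek you would put the
    # lemmatizer's output here instead of raw tokens.
    return set(re.findall(r"\w+", text.lower()))

def jaccard(a, b):
    # |A ∩ B| / |A ∪ B|
    return len(a & b) / len(a | b) if (a | b) else 0.0

s1 = word_set("fishing, skiing and hiking")
s2 = word_set("fishing and even skiing or hiking")
print(jaccard(s1, s2))  # 4 shared words / 6 distinct words = 0.666...
```

Note how the looser phrasing still scores well, which addresses the "not overly rigid" requirement.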

Catching synonyms is hard unless you have something like WordNet, which maps synonyms together.

鲜肉鲜肉永远不皱 2024-11-24 21:39:02


I would follow two guidelines:

  • Beware of premature optimisation in the matching algorithm. Start from a broad approach and then refine it as needed (i.e. check whether a simple "proximity" test gives good-enough results on a dataset you know the answers for, and if not, tweak it until it does). In many cases you will find that a highly optimised solution won't give results considerably different from your first rough attempt.
  • Use some sort of self-learning algorithm. This way you could feed the AI a number of texts to make it smarter. Taking inspiration from your example: before trying to compare the two target texts, I would feed it a text on outdoor life. This way the AI would most probably learn by itself that angling is a very close match for fishing.

As a self-learning AI, I would use (at least for a start) a neural network. There is an easy and fully working example (in Python) that can be found here, and it targets precisely "data mining". You might wish to implement it in some other language, of course.

About your two specific questions:

What is the best way to go about detecting these sorts of phrases

Other answers to your question have gone into detail about this (and their authors seem to know way more than I do on the subject!), but again: I would start simple and just use a neural network that tells you how close two terms are. Then I would proceed with "waves" of optimisation (for example, if it were an English text, using only the root of each word; or tweaking the score according to other metadata of the text such as year, author, or geographical origin; or changing the matching algorithm altogether...) until you are satisfied with the outcome.

What is the best way to search for them in a way that is not overly rigid in other texts (so as to find "fishing and even skiing or hiking")

I would say this is equivalent to asking the AI to return all phrases whose "proximity score" is above a given threshold.
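A sketch of that idea, assuming some proximity function already exists (e.g. the Jaccard measure from the answer above, or a learned model); the names and the threshold value are illustrative:

```python
def similar_phrases(target, candidates, score, threshold=0.5):
    # Keep every candidate phrase whose proximity score against the
    # target clears the threshold (requires Python 3.8+ for `:=`).
    return [(cand, s) for cand in candidates
            if (s := score(target, cand)) >= threshold]
```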

HTH!
