Spell checker with an algorithm for correcting fused-word misspellings
Recently I've looked through several spell checker algorithms, including simple ones (like Peter Norvig's) and much more complex ones (like Brill and Moore's). But there's a type of error that none of them can handle. If, for example, I type stackoverflow instead of stack overflow, these spell checkers will fail to correct the mistype (unless stack overflow is in the dictionary of terms). Storing all pairs of words is too expensive (and it would not help if the error is three single words run together without spaces).

Is there an algorithm that can correct this type of error, in addition to the usual mistypes?

Some examples of what I need:

- spel checker -> spell checker
- spellchecker -> spell checker
- spelcheker -> spell checker
3 Answers
I sometimes get such suggestions when spell-checking in Kate, so there certainly is an algorithm that can correct some such errors. I am sure one can do better, but one idea is to split the candidate at likely places and check whether close matches for the components exist. The hard part is to decide what the likely places are. In the languages I'm sort of familiar with, there are letter combinations that occur rarely inside words. For example, the combinations dk or lh are, as far as I'm aware, rare in English words. Other combinations often occur at the start of words (e.g. un, ch), so those would be good guesses for splitting too. In the example spelcheker, the lc combination is not too widespread, and ch is a common start of words, so the split spel and cheker is a prime candidate, and any decent algorithm would then find spell and checker (but it would probably also find spiel, so don't auto-correct, just give suggestions).
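A minimal sketch of that splitting idea (not the answerer's actual code): it assumes a plain word list in a hypothetical words.txt, and the RARE_INSIDE / COMMON_START bigram sets are illustrative guesses rather than measured statistics. It ranks split points by those heuristics and then looks for close dictionary matches for each half with difflib.

```python
import difflib

# Word list: one word per line (e.g. /usr/share/dict/words). Kept as a list
# because it is fed straight to difflib, which scans it linearly (fine for a demo).
WORDS = [w.strip().lower() for w in open("words.txt") if w.strip()]

RARE_INSIDE = {"lc", "dk", "lh"}    # bigrams rarely seen inside a single word
COMMON_START = {"un", "ch", "sp"}   # bigrams that commonly start a word

def split_candidates(token):
    """Yield (left, right) splits of token, most promising first."""
    scored = []
    for i in range(2, len(token) - 1):          # keep both halves >= 2 chars
        left, right = token[:i], token[i:]
        score = 0
        if token[i - 1:i + 1] in RARE_INSIDE:   # boundary bigram rare inside words
            score += 2
        if right[:2] in COMMON_START:           # right half starts like a word
            score += 1
        scored.append((score, left, right))
    for _, left, right in sorted(scored, key=lambda t: -t[0]):
        yield left, right

def suggest_split(token, n=3):
    """Suggest 'word word' corrections for a run-together, misspelled token."""
    out = []
    for left, right in split_candidates(token):
        l = difflib.get_close_matches(left, WORDS, n=1, cutoff=0.8)
        r = difflib.get_close_matches(right, WORDS, n=1, cutoff=0.8)
        if l and r:
            out.append(f"{l[0]} {r[0]}")
        if len(out) >= n:
            break
    return out

# suggest_split("spelcheker") should propose something like "spell checker"
# (and possibly "spiel checker"), so treat these as suggestions, not auto-fixes.
```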
I hacked up Norvig's spell corrector to do this. I had to cheat a bit and add the word 'checker' to Norvig's data file because it never appears. Without that cheating, the problem is really hard.
Basically you need to change the code so that:

- the candidate corrections generated for a token include splitting it into two words, and
- a multi-word candidate can be scored, not just a single word.

The latter is the trickiest, and I use a braindead independence assumption for phrase composition: the probability of two adjacent words is the product of their individual probabilities (here done with a sum in log-probability space), with a small penalty. I am sure that in practice you'll want to keep some bigram stats to do that splitting well.
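As a rough illustration of that scoring step (a sketch of the independence assumption described above, not the answerer's actual patch to Norvig's code): it assumes a Norvig-style unigram counter built from his big.txt corpus, and SPLIT_PENALTY is an illustrative constant, not a tuned value.

```python
import math
import re
from collections import Counter

# Norvig-style unigram counts from his big.txt corpus.
WORDS = Counter(re.findall(r"[a-z]+", open("big.txt").read().lower()))
TOTAL = sum(WORDS.values())
SPLIT_PENALTY = math.log(1e-3)   # small log-space penalty per inserted space

def log_p(word):
    """Log unigram probability, with a small floor for unseen words."""
    return math.log(WORDS.get(word, 0.5) / TOTAL)

def phrase_log_p(words):
    """Independence assumption: the log probability of a phrase is the sum of
    the word log probabilities, penalized for each split we introduced."""
    return sum(log_p(w) for w in words) + SPLIT_PENALTY * (len(words) - 1)

# Pick whichever candidate reading scores higher, e.g.:
# phrase_log_p(["spellchecker"]) vs phrase_log_p(["spell", "checker"])
```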
This problem is very similar to the problem of compound splitting as applied to German or Dutch, but also to noisy English data. See Monz & De Rijke for a very simple algorithm (which I think can be implemented as a finite state transducer for efficiency), and Google for "compound splitting" and "decompounding".
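For reference, a minimal dynamic-programming decompounder in that spirit (not Monz & De Rijke's actual algorithm) could look like the sketch below. It only handles exact splits against a toy dictionary, so it would still need to be combined with a fuzzy matcher to cope with misspelled components.

```python
from functools import lru_cache

DICT = {"stack", "overflow", "spell", "checker"}   # toy dictionary

def decompound(token):
    """Split token into the fewest dictionary words, or return None."""
    @lru_cache(maxsize=None)
    def best(i):
        # Fewest-parts segmentation of token[i:], or None if impossible.
        if i == len(token):
            return ()
        candidates = []
        for j in range(i + 1, len(token) + 1):
            if token[i:j] in DICT:
                rest = best(j)
                if rest is not None:
                    candidates.append((token[i:j],) + rest)
        return min(candidates, key=len) if candidates else None
    result = best(0)
    return list(result) if result is not None else None

# decompound("stackoverflow") -> ["stack", "overflow"]
```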