Tool or API needed to search text for any word from a large dictionary of words
I'm looking for a tool (ideally) or failing that an API to search text for instances of any word from a large dictionary of words in a large number of text files. "Words" in my case are actually file names but won't contain spaces.
A fast algorithm might perhaps build a DFA (deterministic finite automata) by reading the dictionary and then be able to use a single pass to find instances of the dictionary words over any number of files.
Note: I want exact text matching, not fuzzy matching as in this SO question:
- Algorithm wanted: Find all words of a dictionary that are similar to words in a free text
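The DFA idea described above is essentially the Aho-Corasick algorithm: build a trie of the dictionary, add failure links, and then scan each file in a single pass, reporting every dictionary word at the position where it ends. A minimal sketch (function names are illustrative, not from any particular library):

```python
from collections import deque

def build_automaton(words):
    """Build an Aho-Corasick automaton from the dictionary of words."""
    goto = [{}]       # goto[state][char] -> next state (trie edges)
    output = [set()]  # dictionary words that end at each state
    for word in words:
        state = 0
        for ch in word:
            if ch not in goto[state]:
                goto.append({})
                output.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        output[state].add(word)
    # BFS from the root to compute failure links (longest proper
    # suffix of the current path that is also a prefix of some word).
    fail = [0] * len(goto)
    queue = deque(goto[0].values())
    while queue:
        state = queue.popleft()
        for ch, nxt in goto[state].items():
            queue.append(nxt)
            f = fail[state]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            output[nxt] |= output[fail[nxt]]  # inherit suffix matches
    return goto, fail, output

def find_words(text, goto, fail, output):
    """Single pass over text; returns (start_index, word) for every hit."""
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for word in output[state]:
            hits.append((i - len(word) + 1, word))
    return hits
```

Building the automaton is linear in the total size of the dictionary, and each file is then scanned in time linear in its length regardless of how many dictionary words there are, which is exactly the single-pass behaviour the question asks for.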

2 Answers
Have you looked at Lucene? There are Java and .NET versions:
http://lucene.apache.org/java/docs/index.html
I'd load the dictionary of words into a HashMap (or a .NET "Dictionary"), then read each file line by line or word by word, checking whether the map contains each word.
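Since only membership matters here (not a value per word), a hash set is enough. A minimal sketch of this answer's approach, assuming the file names appear as whitespace-separated tokens (the function name and tokenizing regex are illustrative):

```python
import re

def find_dictionary_words(path, dictionary):
    """Scan one file, reporting (line_number, token) for every token
    that appears in the dictionary. Tokens are split on whitespace,
    which matches the question's note that the "words" (file names)
    contain no spaces."""
    words = set(dictionary)  # O(1) average-case membership tests
    hits = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            for token in re.findall(r"\S+", line):
                if token in words:
                    hits.append((lineno, token))
    return hits
```

The trade-off versus the DFA approach: this is simpler and each lookup is O(1) on average, but it only finds file names that stand alone as tokens; a name embedded inside a longer run of text (e.g. glued to punctuation) would be missed unless the tokenizer is adjusted.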