Algorithm for autocomplete?
I am referring to the algorithm that is used to give query suggestions when a user types a search term in Google.
I am mainly interested in:
1. Most important results (most likely queries rather than anything that matches)
2. Match substrings
3. Fuzzy matches
I know you could use a trie or a generalized trie to find matches, but it wouldn't meet the above requirements...
A similar question was asked earlier here.
Comments (9)
For (heh) awesome fuzzy/partial string matching algorithms, check out Damn Cool Algorithms:
These don't replace tries, but rather prevent brute-force lookups in tries - which is still a huge win. Next, you probably want a way to bound the size of the trie:
Finally, you want to prevent lookups whenever possible...
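To make the "avoid brute-force lookups" point concrete, here is a minimal Python sketch (not from the linked post, which builds Levenshtein automata; this uses the simpler row-pruning approach, and the word list is made up). It walks a trie while maintaining one row of the Levenshtein DP table and abandons any branch whose best possible distance already exceeds the threshold:

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.word = None  # set only on nodes that end a stored word

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.word = word

def fuzzy_search(root, query, max_dist):
    """Find stored words within max_dist edits of query, pruning whole
    trie branches instead of brute-forcing every stored word."""
    results = []
    first_row = list(range(len(query) + 1))

    def walk(node, ch, prev_row):
        row = [prev_row[0] + 1]
        for c in range(1, len(query) + 1):
            insert_cost = row[c - 1] + 1
            delete_cost = prev_row[c] + 1
            replace_cost = prev_row[c - 1] + (query[c - 1] != ch)
            row.append(min(insert_cost, delete_cost, replace_cost))
        if node.word is not None and row[-1] <= max_dist:
            results.append((node.word, row[-1]))
        if min(row) <= max_dist:  # prune: this branch can never get closer
            for next_ch, child in node.children.items():
                walk(child, next_ch, row)

    for ch, child in root.children.items():
        walk(child, ch, first_row)
    return results

root = TrieNode()
for w in ["hello", "help", "hallo", "world"]:
    insert(root, w)
print(fuzzy_search(root, "helo", 1))  # [('hello', 1), ('help', 1)] in some order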
I'd just like to say...
A good solution to this problem is going to incorporate more than a Ternary Search Tree.
Ngrams, and Shingles (Phrases) are needed. Word-boundary errors also need to be detected. "hell o" should be "hello" ... and "whitesocks" should be "white socks" - these are pre-processing steps. If you don't preprocess the data properly you aren't going to get valuable search results.
Ternary search trees are a useful component in figuring out what is a word, and also for implementing related-word guessing when a word typed isn't a valid word in the index.
The Google algorithm performs phrase suggestion and correction.
The Google algorithm also has some concept of context... if the first word you search for is weather-related and you combine terms ("weatherforcst" vs. "monsoonfrcst" vs. "deskfrcst"), my guess is that behind the scenes the suggestion rankings are adjusted based on the first word encountered: forecast and weather are related words, so forecast gets a high rank in the did-you-mean guess.
In short: word partials (ngrams), phrase terms (shingles), word proximity (a word-clustering index), and a ternary search tree (word lookup).
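A rough Python sketch of the pre-processing pieces mentioned above (word partials, shingles, and a naive word-boundary repair pass); the tiny vocabulary is only for illustration:

```python
def char_ngrams(word, n=3):
    """Word partials: overlapping character n-grams of a single word."""
    return [word[i:i + n] for i in range(len(word) - n + 1)]

def shingles(text, k=2):
    """Phrase terms: overlapping runs of k consecutive words."""
    words = text.split()
    return [" ".join(words[i:i + k]) for i in range(len(words) - k + 1)]

def fix_word_boundaries(text, vocabulary):
    """Naive boundary repair: join split words ("hell o" -> "hello") and
    split run-together words ("whitesocks" -> "white socks")."""
    words = text.split()
    out, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and words[i] + words[i + 1] in vocabulary:
            out.append(words[i] + words[i + 1])
            i += 2
            continue
        if words[i] not in vocabulary:
            for j in range(1, len(words[i])):
                left, right = words[i][:j], words[i][j:]
                if left in vocabulary and right in vocabulary:
                    out.extend([left, right])
                    break
            else:
                out.append(words[i])
        else:
            out.append(words[i])
        i += 1
    return " ".join(out)

vocab = {"hello", "white", "socks", "weather", "forecast"}
print(char_ngrams("hello"))                      # ['hel', 'ell', 'llo']
print(shingles("white socks weather forecast"))  # ['white socks', 'socks weather', ...]
print(fix_word_boundaries("hell o whitesocks", vocab))  # 'hello white socks'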
Google's exact algorithm is unknown, but it is said to work by statistical analysis of users' input, an approach not suitable for most cases. More commonly, autocompletion is implemented using one of the following:
Take a look at completely, a Java autocomplete library that implements some of the latter concepts.
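As a toy illustration of the frequency-driven idea (not how completely or Google actually work), a query log can be counted and prefix matches ranked by popularity; the log below is invented:

```python
from collections import Counter

# A hypothetical query log; in practice this would come from real user input.
query_log = ["tomcat tutorial", "tomcat tutorial", "tomcat tuning", "tomato soup"]
frequencies = Counter(query_log)

def complete(prefix, k=3):
    """Rank stored queries that start with the prefix by how often they were issued."""
    matches = [(q, n) for q, n in frequencies.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda x: -x[1])[:k]]

print(complete("tomcat tu"))  # ['tomcat tutorial', 'tomcat tuning']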
There are tools like Soundex and Levenshtein distance that can be used to find fuzzy matches within a certain range.
Soundex finds words that sound similar, and Levenshtein distance finds words that are within a certain edit distance of another word.
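For illustration, here is a simplified Soundex sketch in Python (it skips some edge cases of the official algorithm, such as the H/W separator rule); words that produce the same code "sound similar" and can be grouped into the same fuzzy bucket:

```python
def soundex(word):
    """Simplified American Soundex: keep the first letter, encode the rest
    as digits, drop vowels, collapse repeated codes, pad/trim to 4 chars."""
    codes = {}
    for letters, digit in [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")]:
        for ch in letters:
            codes[ch] = digit
    word = word.upper()
    encoded = word[0]
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            encoded += code
        prev = code
    return (encoded + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))  # R163 R163 -> treated as similar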
Take a look at Firefox's Awesome Bar algorithm.
Google Suggest is useful because it takes millions of popular queries plus your own past related queries into account.
It doesn't have a good completion algorithm / UI, though. For example, try
tomcat tut
--> it correctly suggests "tomcat tutorial". Now try
tomcat rial
--> no suggestions )-:
For substrings and fuzzy matches, the Levenshtein distance algorithm has worked fairly well for me, though I will admit it does not seem to be as perfect as industry implementations of autocomplete/suggest. Both Google and Microsoft's IntelliSense do a better job, I think because they've refined this basic algorithm to weigh the kinds of edit operations it takes to match the dissimilar strings. E.g. transposing two characters should probably only count as 1 operation, not 2 (an insert and a delete).
But even so, I find this is close enough. Here is its implementation in C#...
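This isn't the author's C# code, but a Python sketch of the variant described (the optimal-string-alignment form of Damerau-Levenshtein, where a transposition of adjacent characters costs 1 instead of 2):

```python
def osa_distance(a, b):
    """Edit distance where swapping two adjacent characters counts as one
    operation (optimal string alignment / restricted Damerau-Levenshtein)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

print(osa_distance("tutorail", "tutorial"))  # 1 (one transposition, not 2 edits)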
If you are looking for an overall design for the problem, try reading the content at https://www.interviewbit.com/problems/search-typeahead/.
They start by building autocomplete through a naive approach of using a trie and then build upon it. They also explain optimization techniques like sampling and offline updates to cater to specific use cases.
To keep the solution scalable, you would have to shard your trie data intelligently.
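A very small sketch of one sharding idea (the shard count and hashing scheme here are made up): route each query by the first couple of characters of its prefix so that all completions for that prefix live on one shard; real systems also need to rebalance hot prefixes.

```python
import hashlib

NUM_SHARDS = 8  # assumed cluster size, purely illustrative

def shard_for_prefix(prefix, width=2):
    """Route a query prefix to a shard. Sharding on the first couple of
    characters keeps all completions for a given prefix on one node."""
    key = prefix[:width].lower()
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

print(shard_for_prefix("tomcat tut"))  # same shard as any other "to..." query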
I think that one might be better off constructing a specialized trie, rather than pursuing a completely different data structure.
I could see that functionality manifested in a trie in which each leaf had a field that reflected the frequency of searches of its corresponding word.
The search query method would display the descendant leaf nodes with the largest values, calculated by multiplying the distance to each descendant leaf node by the search frequency associated with it.
The data structure (and consequently the algorithm) Google uses is probably vastly more complicated, potentially taking into account a large number of other factors, such as search frequencies from your own specific account (and time of day... and weather... and season... and lunar phase... and...).
However, I believe that the basic trie data structure can be expanded to support any kind of specialized search preference by including additional fields in each of the nodes and using those fields in the search query method.
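A hedged sketch of that idea (the query strings and frequencies below are invented): each terminal node carries a search frequency, and the query method collects the descendants of the prefix node and returns the top-ranked ones; the distance weighting mentioned above could be folded into the score where noted.

```python
import heapq

class Node:
    def __init__(self):
        self.children = {}
        self.freq = 0  # > 0 only on nodes that terminate a stored query

class SuggestionTrie:
    def __init__(self):
        self.root = Node()

    def add(self, query, freq):
        node = self.root
        for ch in query:
            node = node.children.setdefault(ch, Node())
        node.freq += freq

    def suggest(self, prefix, k=3):
        # Walk down to the prefix node, then collect descendant terminals.
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        candidates = []

        def collect(n, suffix):
            if n.freq:
                # Rank mainly by stored search frequency; a depth/distance
                # weight could be multiplied in here as described above.
                candidates.append((n.freq, prefix + suffix))
            for ch, child in n.children.items():
                collect(child, suffix + ch)

        collect(node, "")
        return [q for _, q in heapq.nlargest(k, candidates)]

trie = SuggestionTrie()
for q, f in [("tomcat tutorial", 120), ("tomcat tuning", 40), ("tomato soup", 300)]:
    trie.add(q, f)
print(trie.suggest("tomcat tu"))  # ['tomcat tutorial', 'tomcat tuning']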
I don't know if this will answer your question, but I made a very simple input-autocomplete program in C a while back. I haven't implemented machine learning or neural networks in it, so it won't make probability calculations and whatnot. What it does is check the very first index that matches the input, using a substring-checking algorithm.
If you're referring to a program that starts with no matches, saves the user's inputs into an array or file, and then, when the user types the same word again, matches it against those previous inputs, maybe I can work on that.
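A rough Python equivalent of what's described (the original was in C; the history here is just example data): remember past inputs and return the first remembered entry that contains the newly typed substring.

```python
# A hypothetical history of past inputs; the original idea persisted it to a file.
history = []

def remember(text):
    history.append(text)

def first_substring_match(typed):
    """Return the first remembered input that contains what the user typed,
    mirroring the 'first index that matches' behaviour described above."""
    for previous in history:
        if typed in previous:
            return previous
    return None

remember("hello world")
remember("help wanted")
print(first_substring_match("elp"))   # 'help wanted'
print(first_substring_match("hell"))  # 'hello world'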