If I were writing a piece of software that attempted to predict what word a user was going to type next, using the two previous words the user had typed, I would create two tables.
Like so:
== 1-gram table ==
Token | NextWord | Frequency
------+----------+-----------
"I" | "like" | 15
"I" | "hate" | 20
== 2-gram table ==
Token | NextWord | Frequency
---------+------------+-----------
"I like" | "apples" | 8
"I like" | "tomatoes" | 12
"I hate" | "tomatoes" | 20
"I hate" | "apples" | 2
Following this example implementation, the user types "I" and the software, using the above database, predicts that the next word the user is going to type is "hate". If the user does type "hate", then the software will predict that the next word is "tomatoes".
However, this implementation would require a table for each additional n-gram that I choose to take into account. If I decided to take the 5 or 6 preceding words into account when predicting the next word, I would need 5-6 tables, and the space required grows exponentially with each additional n-gram order.
What would be the best way to represent this in only one or two tables, with no upper limit on the number of n-grams I can support?
Comments (3)
You can actually just leave it the way you have it and use only one table. A two-gram cannot be equal to a one-gram, because the two-gram will have a space in it. Similarly, no three-gram will be equal to any two-gram, because the three-gram will have two spaces. Ad infinitum.
So you can put all the 1-grams, 2-grams, etc. into the Token field and none will ever collide.
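As a minimal sketch of this single-table approach (assuming Python and SQLite, neither of which the question specifies), the question's own example data fits in one table:

```python
import sqlite3

# One table holds n-grams of every order; because an n-gram key
# contains exactly n-1 spaces, keys of different orders never collide.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ngrams (
        token     TEXT,     -- e.g. "I" (1-gram) or "I like" (2-gram)
        nextword  TEXT,
        frequency INTEGER,
        PRIMARY KEY (token, nextword)
    )
""")

rows = [
    ("I", "like", 15), ("I", "hate", 20),
    ("I like", "apples", 8), ("I like", "tomatoes", 12),
    ("I hate", "tomatoes", 20), ("I hate", "apples", 2),
]
conn.executemany("INSERT INTO ngrams VALUES (?, ?, ?)", rows)

def predict(context):
    """Return the most frequent next word for the given context, or None."""
    row = conn.execute(
        "SELECT nextword FROM ngrams WHERE token = ? "
        "ORDER BY frequency DESC LIMIT 1",
        (context,),
    ).fetchone()
    return row[0] if row else None

print(predict("I"))       # "hate"
print(predict("I hate"))  # "tomatoes"
```

Supporting a higher-order n-gram is then just another INSERT with a longer token; no schema change is needed.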
Try a two column table -
One optimisation would be to "normalise" some words in the phrase, e.g. "isn't" to "is not".
A second optimisation would be to use an MD5, CRC32, or similar hash of the phrase as the key.
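A hedged sketch of those two optimisations together (using Python's standard hashlib; the single normalisation rule shown is only a toy example):

```python
import hashlib

def phrase_key(phrase):
    """Normalise the phrase, then hash it to a fixed-width key.

    Normalisation here is deliberately minimal: lower-casing and
    expanding "isn't" to "is not". A real system would need a
    fuller rule set.
    """
    normalised = phrase.lower().replace("isn't", "is not")
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

# Phrases that normalise identically share one key...
assert phrase_key("Isn't it") == phrase_key("is not it")
# ...and the key has a fixed width regardless of phrase length.
assert len(phrase_key("I hate writing very long phrases")) == 32
```

The fixed-width key keeps the index compact even for long contexts, at the cost of needing the original phrase stored elsewhere if you ever want to read it back.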
Why not just store them all in the one table?
It'd then be up to your software to decide what you pass in for 'Token', and also when you insert new values (i.e. don't insert a partially-typed word). If you want to get tricky, you can have an extra column for the number of words, but I don't think that would actually be required (the number of spaces + 1 is the number of words).
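A minimal sketch of that decision logic (plain Python; the backoff strategy of trying the longest available context first is my assumption, not something this answer prescribes):

```python
# One flat dict stands in for the single table: the key is the n-gram
# 'Token', and the value maps each next word to its frequency.
table = {
    "I": {"like": 15, "hate": 20},
    "I like": {"apples": 8, "tomatoes": 12},
    "I hate": {"tomatoes": 20, "apples": 2},
}

def order(token):
    """The n-gram order needs no extra column: spaces + 1."""
    return token.count(" ") + 1

def predict(history, max_n=5):
    """Try the longest context first, then back off to shorter ones."""
    for n in range(min(max_n, len(history)), 0, -1):
        token = " ".join(history[-n:])
        if token in table:
            candidates = table[token]
            return max(candidates, key=candidates.get)
    return None

print(order("I hate"))               # 2
print(predict(["I", "hate"]))        # "tomatoes" (2-gram hit)
print(predict(["you", "and", "I"]))  # "hate" (backs off to the 1-gram "I")
```

Raising `max_n` is the only change needed to consider longer contexts, which is exactly the "no upper limit" property the question asks for.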