Efficient database for an autosuggest over millions of records
I need to know which would be the best DB for an autosuggest database with some 80 million records...
1) Redis
2) Tokyo Cabinet
3) Kyoto Cabinet
2 Answers
This site may have what you're looking for: http://perfectmarket.com/blog/not_only_nosql_review_solution_evaluation_guide_chart
You have several things to consider.
I know it isn't on your list, but I would go with MongoDB. If you can't, then I would go with Redis, simply for the speed factor.
Redis is a great fit for autosuggest because of its sorted sets (implemented as a skip list). The schema I've used with success basically has each partial word as a key (so "python" would map to the keys "py", "pyt", "pyth", "pytho", and "python"). The data associated with each key is a sorted set, where the score provides lexical ordering of the original phrase (so results come back sorted) and the member is an id mapping to the data you wish to return. I then store the ids and data in a hash.
Here is a sample project written in Python, with more details: https://github.com/coleifer/redis-completion
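A minimal sketch of that prefix-key / sorted-set scheme, using redis-py. The key names ("ac:<prefix>", "ac:data") and the score function are illustrative assumptions for this sketch, not the exact layout used by the linked redis-completion project.

```python
# Sketch of a Redis autosuggest index: every prefix of a phrase becomes a
# sorted-set key, the member is the phrase id, the score approximates the
# phrase's lexical order, and the full data lives in one hash.
import redis

r = redis.Redis()  # assumes a Redis server on localhost:6379


def score_for(phrase: str) -> float:
    """Derive a score from the first few characters so that ZRANGE
    returns members in rough lexical order of the original phrase."""
    score = 0.0
    for i, ch in enumerate(phrase[:6].lower()):
        score += ord(ch) / (256.0 ** (i + 1))
    return score


def index_phrase(phrase_id: str, phrase: str) -> None:
    """Store the phrase under every prefix key and keep the full text in a hash."""
    r.hset("ac:data", phrase_id, phrase)  # id -> full phrase
    for end in range(1, len(phrase) + 1):
        prefix = phrase[:end].lower()
        # member is the id; score gives approximate lexical ordering
        r.zadd(f"ac:{prefix}", {phrase_id: score_for(phrase)})


def suggest(prefix: str, limit: int = 10) -> list:
    """Return up to `limit` phrases starting with `prefix`, roughly sorted."""
    ids = r.zrange(f"ac:{prefix.lower()}", 0, limit - 1)
    if not ids:
        return []
    return [p.decode() for p in r.hmget("ac:data", ids) if p is not None]


# Example usage:
# index_phrase("1", "python")
# index_phrase("2", "pyramid")
# print(suggest("py"))   # -> ['pyramid', 'python']
```

At 80 million records this trades memory for read speed: each phrase is duplicated once per prefix, but a lookup is a single ZRANGE plus one HMGET.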