Locality-Sensitive Hashing - probabilities and the value of R
Thanks to those who've answered my previous questions and gotten me this far.
I have a table of about 25,000 vectors, each with 48 dimensions, with values ranging from 0-255.
I am attempting to develop a Locality-Sensitive Hashing (http://en.wikipedia.org/wiki/Locality-sensitive_hashing) algorithm for finding near-neighbor or nearest-neighbor points.
My current LSH function is this:
    from random import normalvariate, uniform
    from math import floor

    def lsh(vector, r=1.0, a=None, b=None):
        # If the caller does not supply a projection vector a and
        # offset b, draw fresh ones for this call.
        if not a:
            a = [normalvariate(10, 4) for i in range(48)]
        if not b:
            b = uniform(0, r)
        # Project the vector onto a, shift by b, and quantize into
        # buckets of width r.
        hashVal = floor((sum([a[i] * vector[i] for i in range(48)]) + b) / r)
        return int(hashVal)
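As an aside, one pitfall in the function above is that a fresh a and b are drawn on every call unless the caller passes them in, so hash values from separate calls are not comparable. Below is a minimal sketch of one way to fix a single hash function up front (the factory name is my own, and drawing entries from normalvariate(0, 1), the standard Gaussian commonly used in p-stable LSH for Euclidean distance, is an assumption, not the original author's choice):

```python
from random import normalvariate, uniform
from math import floor

def make_lsh(dims=48, r=1.0):
    # Draw one (a, b) pair up front so that every vector is hashed
    # with the same function; regenerating them per call would make
    # hash values incomparable across vectors.
    a = [normalvariate(0, 1) for _ in range(dims)]
    b = uniform(0, r)
    def h(vector):
        return int(floor((sum(ai * vi for ai, vi in zip(a, vector)) + b) / r))
    return h

h = make_lsh()
```

With this, h(v) is deterministic for a given v, so identical inputs always land in the same bucket.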
My questions at this point are:
A: Are there optimal values for the "normalvariate(10, 4)" portion of my code? This is Python's built-in random.normalvariate (http://docs.python.org/library/random.html#random.normalvariate) function, which I am using to produce the "d-dimensional vector with entries chosen independently from a stable distribution". From my experimenting, the values don't seem to matter too much.
B: At the top of the wikipedia article it states:
if d(p,q) <= R, then h(p) = h(q) with probability at least P1
if d(p,q) >= cR, then h(p) = h(q) with probability at most P2
Is the R value mentioned here also the R value mentioned under the Stable Distributions section (http://en.wikipedia.org/wiki/Locality-sensitive_hashing#Stable_distributions)?
C: Related to my previous question (B): I've discovered that using higher values of R in my hashing function maps my vectors into a smaller range of hash values. Is there a way to optimize my R value?
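The effect described in (C) is easy to reproduce: since the hash is floor((a·v + b)/r), widening the bucket width r collapses more projections into the same integer, so the number of distinct hash values shrinks. A small demo of this (the data and r values here are made up for illustration):

```python
from random import seed, normalvariate, uniform
from math import floor

seed(0)  # reproducible sketch
dims = 48
a = [normalvariate(0, 1) for _ in range(dims)]
vectors = [[uniform(0, 255) for _ in range(dims)] for _ in range(1000)]

def lsh(vector, r, b=0.0):
    # floor((a . v + b) / r): larger r means coarser buckets.
    return int(floor((sum(ai * vi for ai, vi in zip(a, vector)) + b) / r))

for r in (1.0, 100.0, 10000.0):
    buckets = {lsh(v, r) for v in vectors}
    print("r =", r, "-> distinct hash values:", len(buckets))
```

Running this shows the count of distinct hash values falling as r grows, which matches the observation in the question.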
D: Approximately how many tables might one use?
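For context on question (D), the usual construction (e.g. in the p-stable LSH literature) concatenates k hash values per table and keeps L independent tables, taking the union of the query's buckets across tables as the candidate set. A hedged sketch of that structure, with k, L, and the bucket width R chosen purely for illustration:

```python
from collections import defaultdict
from random import normalvariate, uniform
from math import floor

R = 4.0  # bucket width; illustrative, not tuned

def _hash(v, a, b):
    return int(floor((sum(ai * vi for ai, vi in zip(a, v)) + b) / R))

def build_index(vectors, k=4, L=8, dims=48):
    # L independent tables, each keyed by a tuple of k hash values.
    tables = []
    for _ in range(L):
        funcs = [([normalvariate(0, 1) for _ in range(dims)], uniform(0, R))
                 for _ in range(k)]
        table = defaultdict(list)
        for idx, v in enumerate(vectors):
            key = tuple(_hash(v, a, b) for a, b in funcs)
            table[key].append(idx)
        tables.append((funcs, table))
    return tables

def query(index, q):
    # Union of candidates from the query's bucket in every table.
    cands = set()
    for funcs, table in index:
        key = tuple(_hash(q, a, b) for a, b in funcs)
        cands.update(table.get(key, []))
    return cands
```

Raising k makes each table more selective (fewer false positives per bucket); raising L recovers recall by giving near neighbors more chances to collide.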
You might want to check out "MetaOptimize" -- like Stack Overflow for machine learning.
http://metaoptimize.com/qa
Your question isn't really a Python or programming question, and that community might be able to help a bit more.
To those interested: I've found this document (http://web.mit.edu/andoni/www/papers/cSquared.pdf), which has a very detailed, albeit complicated, explanation of how to use LSH for high-dimensional spaces.