Clustering ~100,000 short strings in Python
I want to cluster ~100,000 short strings by something like q-gram distance or simple "bag distance" or maybe Levenshtein distance in Python. I was planning to fill out a distance matrix (100,000 choose 2 comparisons) and then do hierarchical clustering with pyCluster. But I'm running into some memory problems before even getting off the ground. For example, the distance matrix is too large for numpy.
aa = numpy.zeros((100000, 100000))
ValueError: array is too big.
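For scale, a rough back-of-envelope (numpy.zeros defaults to float64, so 8 bytes per cell):

n = 100_000
full_matrix = n * n * 8            # dense n x n matrix: ~8.0e10 bytes, about 74.5 GiB
condensed = n * (n - 1) // 2 * 8   # just the "100,000 choose 2" pairs: still roughly 37 GiB
print(full_matrix / 2**30, condensed / 2**30)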
Does this seem like a reasonable thing to do? Or am I doomed to memory problems in this task? Thanks for your help.
4 Answers
100,000 * 100,000 * 32 bits = 40 GB, which would be a lot of RAM, so yes, you need to find another way. (And even if you could fit this data into memory, the calculation would take too long.)
One common and easy shortcut is to cluster a small random subset of the data, and after you find the clusters of this subset, just put the rest of the points into the clusters where they fit best.
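A minimal sketch of that shortcut, assuming scipy's hierarchical clustering in place of pyCluster and a plain Levenshtein distance; the subset size, cluster count (t=10) and `strings` data below are illustrative stand-ins, not values from the question:

import random
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def levenshtein(a, b):
    # classic dynamic-programming edit distance, computed row by row
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

strings = ["item%02d" % random.randrange(30) for _ in range(500)]  # stand-in for the ~100,000 real strings
subset = random.sample(strings, 100)   # a few thousand is still fine for a dense matrix

# dense distances only for the small subset, condensed form for scipy
dist = np.array([[levenshtein(a, b) for b in subset] for a in subset], dtype=float)
labels = fcluster(linkage(squareform(dist), method="average"), t=10, criterion="maxclust")

# one representative string per cluster, then slot every remaining string
# into the cluster whose representative it is closest to
reps = {lab: subset[int(np.flatnonzero(labels == lab)[0])] for lab in set(labels)}
assignments = {s: min(reps, key=lambda lab: levenshtein(s, reps[lab])) for s in strings}

The point is that the pass over all ~100,000 strings only ever compares each string against a handful of cluster representatives, never against everything else.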
10 billion elements is an awful lot. I don't know from q-grams, but if that matrix is sparse, you could use a 200,000-ish element dict.
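As an illustration (not from the answer itself), here is a sparse dict of pairwise distances, using the cheap "bag distance" the question mentions, which is a lower bound on edit distance; the cutoff and sample strings are made up:

from collections import Counter

def bag_distance(a, b):
    # max of the two one-sided multiset differences; a fast lower bound on Levenshtein
    ca, cb = Counter(a), Counter(b)
    return max(sum((ca - cb).values()), sum((cb - ca).values()))

strings = ["foo", "food", "bar", "barn", "bazaar"]   # stand-in data
threshold = 2
sparse = {}                                          # only "close" pairs are stored
for i in range(len(strings)):
    for j in range(i + 1, len(strings)):
        d = bag_distance(strings[i], strings[j])
        if d <= threshold:
            sparse[(i, j)] = d

This only helps with memory, though: all ~5 billion pairs still have to be computed once to decide which entries are kept.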
Do you need the matrix? I assume you want to use a matrix for speed?
I have a k-means cluster algorithm (rather than a hierarchical cluster algorithm) and this calculates node distances as required. Probably only viable for fast distance metrics, though. And you have more data than I do - but you are bound by memory limitations.
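This is not the answerer's code, but a generic sketch of the same idea: a k-medoids-style loop that computes string distances on demand (difflib from the standard library stands in for a real metric), so no n x n matrix is ever stored:

import difflib
import random

def dist(a, b):
    # 0.0 for identical strings, approaching 1.0 for very different ones
    return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

def kmedoids(strings, k, iterations=5):
    medoids = random.sample(strings, k)
    for _ in range(iterations):
        # assignment step: every distance is computed lazily, nothing precomputed
        clusters = {m: [] for m in medoids}
        for s in strings:
            clusters[min(medoids, key=lambda m: dist(s, m))].append(s)
        # update step: the new medoid is the member closest to the rest of its cluster
        medoids = [min(members, key=lambda c: sum(dist(c, x) for x in members))
                   for members in clusters.values() if members]
    return clusters

clusters = kmedoids(["cat", "cap", "cop", "dog", "dig", "dot"], k=2)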
There is a method in Machine Learning called Embedding which can, in principle, search for a solution for this problem using O(n+m) memory instead of O(n*m) (n=10^5 items, m=10^5 features). Unfortunately, I don't know of an available source code that is implemented in O(m+n). See:

Euclidean Embedding of Co-occurrence Data. Amir Globerson, Gal Chechik, Fernando Pereira and Naftali Tishby. Journal of Machine Learning Research, JMLR, 8 (Oct), 2007.
There could be other solutions. I think you should ask this question on a forum of Machine Learning people, e.g., https://stats.stackexchange.com/, or one more specific to language processing: http://metaoptimize.com/qa/.