Clustering ~100,000 short strings in Python

Posted on 2024-10-03 07:33:22

I want to cluster ~100,000 short strings by something like q-gram distance or simple "bag distance" or maybe Levenshtein distance in Python. I was planning to fill out a distance matrix (100,000 choose 2 comparisons) and then do hierarchical clustering with pyCluster. But I'm running into some memory problems before even getting off the ground. For example, the distance matrix is too large for numpy.

import numpy
aa = numpy.zeros((100000, 100000))
# ValueError: array is too big.

Does this seem like a reasonable thing to do? Or am I doomed to memory problems in this task? Thanks for your help.
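
To make the memory problem concrete, here is a minimal sketch of the planned pipeline at toy scale. It uses scipy's hierarchical clustering in place of pyCluster and the python-Levenshtein package for the distance; both substitutions are illustrative assumptions, not necessarily what the original setup used. Even in scipy's condensed (upper-triangle) form, the distances for 100,000 strings are roughly 5 billion entries, about 20 GB in float32.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from Levenshtein import distance as lev  # pip install python-Levenshtein

strings = ["apple", "appel", "banana", "bananna", "cherry"]  # toy data
n = len(strings)

# Condensed distance vector: n*(n-1)/2 entries instead of an n x n matrix.
condensed = np.array([lev(strings[i], strings[j])
                      for i in range(n) for j in range(i + 1, n)],
                     dtype=np.float32)

# Average-linkage hierarchical clustering, cut at distance 2.
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="distance")
print(labels)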

Comments (4)

帅哥哥的热头脑 2024-10-10 07:33:22

100,000 * 100,000 * 32 bits = 40 GB, which would be a lot of RAM, so yes, you need to find another way. (And even if you could fit this data into memory, the calculation would take too long.)

One common and easy shortcut is to cluster a small random subset of the data, and after you find the clusters of this subset, just put the rest of the points into the clusters where they fit best.
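
A rough, self-contained sketch of that shortcut (not the answerer's code): cluster a random sample hierarchically, pick one representative string per cluster, then assign every string to its nearest representative. Levenshtein distance, scipy, and the sample size are illustrative assumptions.

import random
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from Levenshtein import distance as lev  # pip install python-Levenshtein

def cluster_with_sample(strings, sample_size=1000, cut_distance=3):
    # 1. Cluster only a random sample; its pairwise distances fit in memory.
    sample = random.sample(strings, min(sample_size, len(strings)))
    m = len(sample)
    condensed = np.array([lev(sample[i], sample[j])
                          for i in range(m) for j in range(i + 1, m)],
                         dtype=np.float32)
    labels = fcluster(linkage(condensed, method="average"),
                      t=cut_distance, criterion="distance")

    # 2. One representative string per sample cluster (first member for
    #    simplicity; a true medoid would be a better choice).
    reps = {}
    for s, lab in zip(sample, labels):
        reps.setdefault(lab, s)
    rep_list = list(reps.values())

    # 3. Assign every string, sampled or not, to its nearest representative.
    return {s: min(range(len(rep_list)), key=lambda k: lev(s, rep_list[k]))
            for s in strings}

print(cluster_with_sample(["cat", "cap", "dog", "dot", "fish"], sample_size=5))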

残花月 2024-10-10 07:33:22

10 billion elements is an awful lot. I don't know from q-grams, but if that matrix is sparse, you could use a 200,000-ish element dict.
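
For example, a sparse mapping could look like the sketch below, which keeps only pairs whose distance falls under a threshold. The threshold and the candidate-pair generator are assumptions; in practice you would need some blocking or indexing scheme so you are not still doing all 100,000-choose-2 comparisons.

from Levenshtein import distance as lev  # assumed string distance

def sparse_distances(strings, candidate_pairs, threshold=2):
    # (i, j) -> distance, stored only when within the threshold
    dist = {}
    for i, j in candidate_pairs:
        d = lev(strings[i], strings[j])
        if d <= threshold:
            dist[(i, j)] = d
    return dist

words = ["cat", "cap", "dog", "dig", "zebra"]  # toy data: every pair is a candidate
pairs = [(i, j) for i in range(len(words)) for j in range(i + 1, len(words))]
print(sparse_distances(words, pairs))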

Bonjour°[大白 2024-10-10 07:33:22

Do you need the matrix? I assume you want to use a matrix for speed?

I have a k-means clustering algorithm (rather than a hierarchical one) that calculates node distances as required. It is probably only viable for fast distance metrics, though. And you have more data than I do - but you are bound by memory limitations.
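
The answerer's code isn't shown, but the idea of computing distances only when needed might look like the following k-medoids-style sketch (medoids instead of means, since strings have no meaningful average). Memory stays around O(n + k), at the cost of recomputing distances each iteration; Levenshtein is again an assumed metric.

import random
from Levenshtein import distance as lev  # assumed string metric

def kmedoids_strings(strings, k=3, iters=10, seed=0):
    rng = random.Random(seed)
    medoids = rng.sample(strings, k)
    for _ in range(iters):
        # Assignment step: nearest medoid, distances computed on demand.
        clusters = {i: [] for i in range(k)}
        for s in strings:
            clusters[min(range(k), key=lambda i: lev(s, medoids[i]))].append(s)
        # Update step: new medoid minimizes total distance within its cluster.
        for i, members in clusters.items():
            if members:
                medoids[i] = min(members,
                                 key=lambda c: sum(lev(c, m) for m in members))
    return medoids

print(kmedoids_strings(["cat", "cap", "can", "dog", "dot", "dig"], k=2))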

半仙 2024-10-10 07:33:22

  1. There is a method in machine learning called embedding which can, in principle, find a solution to this problem using O(n+m) memory instead of O(n*m) (n=10^5 items, m=10^5 features). Unfortunately, I don't know of any available source code that implements it in O(m+n); a loosely related, memory-light sketch is given after this list. See:

    Euclidean Embedding of Co-occurrence Data.
    Amir Globerson, Gal Chechik, Fernando Pereira, and Naftali Tishby.
    Journal of Machine Learning Research (JMLR), 8(Oct), 2007. (pdf / Matlab code)

  2. There could be other solutions. I think you should ask this question on a machine learning forum, e.g., https://stats.stackexchange.com/, or one more specific to language processing: http://metaoptimize.com/qa/.
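
The paper's embedding method itself is not sketched here. As a loosely related, memory-light alternative, each string can be represented by sparse character q-gram features and clustered with mini-batch k-means, keeping memory near O(n + nonzeros) rather than O(n*m). The scikit-learn classes and parameters below are assumptions for illustration, not something suggested in this answer.

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.cluster import MiniBatchKMeans

strings = ["apple", "appel", "banana", "bananna", "cherry", "cherri"]

# Sparse character 2-gram features; n_features bounds the hashed dimension m.
X = HashingVectorizer(analyzer="char", ngram_range=(2, 2),
                      n_features=2**12).transform(strings)

labels = MiniBatchKMeans(n_clusters=3, n_init=3, random_state=0).fit_predict(X)
print(labels)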
