Clustering on large datasets
I'm trying to cluster a large (gigabyte) dataset. In order to cluster, you need the distance from every point to every other point, so you end up with an N^2-sized distance matrix, which in the case of my dataset would be on the order of exabytes. Pdist in Matlab blows up instantly, of course ;)
Is there a way to cluster subsets of the large data first, and then maybe do some merging of similar clusters?
I don't know if this helps any, but the data are fixed-length binary strings, so I'm calculating their distances using the Hamming distance (distance = number of set bits in string1 XOR string2).
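For concreteness, here is a minimal sketch of that distance in Python (my choice of language, since the question only mentions Matlab), assuming each fixed-length binary string is stored as an integer:

    def hamming(a: int, b: int) -> int:
        # XOR leaves a 1 bit exactly where the two strings differ;
        # the Hamming distance is the count of those 1 bits.
        return bin(a ^ b).count("1")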
3 Answers
A simplified version of the nice method from Tabei et al., Single versus Multiple Sorting in All Pairs Similarity Search, say for pairs with Hammingdist 1: sort all the strings on their first 32 bits, then look at blocks of consecutive strings whose first 32 bits are identical; these blocks will be relatively small, so you can afford to compare all pairs within each block.
This misses a fraction, e.g. 32/128, of the nearby pairs: those with Hammingdist( left 32 ) 1 + Hammingdist( the rest ) 0.
If you really want these, repeat the above with "first 32" -> "last 32".
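A minimal sketch of that idea in Python, assuming 64-bit strings stored as integers and split into two 32-bit halves; bucketing on the first half is my simplification, but it groups exactly the strings that sorting would place in one block:

    from collections import defaultdict

    def hamming1_pairs(strings):
        # Bucket on the first 32 bits; any pair whose single differing
        # bit lies in the last 32 bits lands in a shared bucket.
        buckets = defaultdict(list)
        for i, s in enumerate(strings):
            buckets[s >> 32].append(i)
        pairs = []
        for idxs in buckets.values():
            for a in range(len(idxs)):
                for b in range(a + 1, len(idxs)):
                    i, j = idxs[a], idxs[b]
                    if bin(strings[i] ^ strings[j]).count("1") == 1:
                        pairs.append((i, j))
        return pairs

Repeating the run with buckets keyed on strings[i] & 0xFFFFFFFF picks up the pairs whose differing bit is in the first half.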
The method can be extended.
Take for example Hammingdist <= 2 on 4 32-bit words; the mismatches must split across the words like one of
2000 0200 0020 0002 1100 1010 1001 0110 0101 0011,
so at least 2 of the 4 words must match exactly; sort on those in the same way.
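A sketch of that extension under the same storage assumption, here with each string held as a tuple of four 32-bit words; since every pair at distance <= 2 agrees on at least two word positions, bucketing on each of the six position pairs finds all of them:

    from collections import defaultdict
    from itertools import combinations

    def hamming2_pairs(words):
        # words: list of 4-tuples of 32-bit ints, one tuple per string.
        found = set()
        for p, q in combinations(range(4), 2):
            buckets = defaultdict(list)
            for i, w in enumerate(words):
                buckets[(w[p], w[q])].append(i)   # agree on words p and q
            for idxs in buckets.values():
                for a, b in combinations(idxs, 2):
                    d = sum(bin(x ^ y).count("1")
                            for x, y in zip(words[a], words[b]))
                    if 0 < d <= 2:
                        found.add((a, b))
        return found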
(Btw, sketchsort-0.0.7.tar is 99% src/boost/, build/, .svn/.)
How about sorting them first? Maybe something like a modified merge sort? You could start with chunks of the dataset that fit in memory and perform a normal sort on each.
Once you have the sorted data, clustering could be done iteratively. Maybe keep a rolling centroid of the first N-1 points and compare it against the Nth point being read in. Then, depending on your cluster distance threshold, you could pool that point into the current cluster or start a new one.
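A one-pass sketch of that idea in Python; the threshold, the leader-style representative in place of a true rolling centroid (a running centroid of bit strings would need per-bit counts), and the function names are all my assumptions:

    def leader_cluster(strings, threshold, dist):
        # Assign each point to the first cluster whose representative
        # is within `threshold`; otherwise it starts a new cluster.
        reps, members = [], []
        for i, s in enumerate(strings):
            for c, r in enumerate(reps):
                if dist(s, r) <= threshold:
                    members[c].append(i)
                    break
            else:
                reps.append(s)
                members.append([i])
        return reps, members

With sorted input, nearby strings arrive together, so the current cluster tends to absorb its neighbors before a new one starts.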
The EM-tree and K-tree algorithms in the LMW-tree project can cluster problems this big and larger. Our most recent result is clustering 733 million web pages into 600,000 clusters. There is also a streaming variant of the EM-tree where the dataset is streamed from disk for each iteration.
Additionally, these algorithms can cluster bit strings directly where all cluster representatives and data points are bit strings, and the similarity measure that is used is Hamming distance. This minimizes the Hamming distance within each cluster found.
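The LMW-tree code itself is the authoritative reference; purely as a self-contained illustration of that last point (not the EM-tree or K-tree algorithm), here is a k-means-style sketch over 0/1 vectors: the representative that minimizes the summed Hamming distance to a cluster's members is the bitwise majority of those members.

    import numpy as np

    def kmeans_hamming(X, k, iters=10, seed=0):
        # X: (n, d) array of 0/1 values, one row per bit string.
        rng = np.random.default_rng(seed)
        reps = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # Assignment: nearest representative by Hamming distance.
            # (The broadcasted (n, k, d) comparison is for clarity, not scale.)
            d = (X[:, None, :] != reps[None, :, :]).sum(axis=2)
            labels = d.argmin(axis=1)
            # Update: each representative becomes the bitwise majority
            # of its members, which stays a bit string.
            for c in range(k):
                m = X[labels == c]
                if len(m):
                    reps[c] = (2 * m.sum(axis=0) > len(m)).astype(X.dtype)
        return labels, reps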