Can k-means be used to help separate an image based on pixel values?

Posted 2024-10-26 03:44:02


I'm trying to separate a grey-level image based on pixel value: say pixels from 0 to 60 in one bin, 60-120 in another, 120-180 in the next, and so on up to 255. The ranges are roughly equispaced in this case.
However, by using K-means clustering, would it be possible to get a more realistic measure of what my pixel-value ranges should be? The goal is to group similar pixels together rather than waste bins where there is only a low concentration of pixels.

EDIT (to include obtained results):

[images: obtained results]

k-means with number of clusters = 5

Comments (2)

情深缘浅 2024-11-02 03:44:02


Of course K-Means can be used for color quantization. It's very handy for that.

Let's see an example in Mathematica:

We start with a greyscale (150x150) image:

[image: the original 150x150 greyscale image]

Let's see how many grey levels there are when the image is represented in 8 bits:

ac = ImageData[ImageTake[i, All, All], "Byte"]; (* i is the imported greyscale image *)
First@Dimensions@Tally@Flatten@ac
-> 234

Ok. Let's reduce those 234 levels. Our first try will be to let the algorithm determine on its own how many clusters there are, using the default configuration:

ic = ClusteringComponents[Image@ac];
First@Dimensions@Tally@Flatten@ic 
-> 3

It selects 3 clusters, and the corresponding image is:

[image: the 3-cluster segmentation]

Now, whether that is OK or you need more clusters is up to you.

Let's suppose you decide that a more fine-grained color separation is needed. Let's ask for 6 clusters instead of 3:

ic2 = ClusteringComponents[Image@ac, 6];
Image@ic2 // ImageAdjust  

Result:

[image: the 6-cluster segmentation]

and here are the pixel ranges used in each bin:

Table[{Min@#, Max@#} &@(Take[ac, {#[[1]]}, {#[[2]]}] & /@ 
    Position[ic2, n]), {n, 1, 6}]
-> {{0, 11}, {12, 30}, {31, 52}, {53, 85}, {86, 134}, {135, 241}}

and the number of pixels in each bin:

Table[Count[Flatten@ic2, i], {i, 6}]
-> {8906, 4400, 4261, 2850, 1363, 720}

So, the answer is YES, and it is straightforward.

Edit

Perhaps this will help you understand what you are doing wrong in your new example.

If I cluster your color image and use the cluster number to represent brightness, I get:

[image: result when cluster numbers are used as brightness]

That's because the clusters are not being numbered in an ascending brightness order.

But if I calculate the mean brightness value for each cluster, and use it to represent the cluster value, I get:

[image: result when the mean brightness of each cluster is used]

In my previous example that was not needed, but that was just luck :D (i.e. the clusters happened to be found in ascending brightness order).
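
A minimal sketch of that relabelling step (img and k are assumed names here for the clustered image and the chosen number of clusters; they do not come from the post above):

(* relabel each cluster by the mean brightness of its pixels *)
grey = ImageData[ColorConvert[img, "Grayscale"], "Byte"];  (* brightness per pixel *)
labels = ClusteringComponents[img, k];                     (* cluster label per pixel *)
means = Table[Mean[N@Extract[grey, Position[labels, n]]], {n, k}];
Image[Round[labels /. Thread[Range[k] -> means]], "Byte"]

Rendering the labels this way makes brighter clusters come out brighter, regardless of the order in which ClusteringComponents happens to number them.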

秋日私语 2024-11-02 03:44:02


k-means could be applied to your problem. If it were me, I would first try a basic approach borrowed from decision trees (although whether it is "simpler" depends on your precise clustering algorithm!).

Assume one bin exists and begin stuffing the pixel intensities into it. When the bin is "full enough", compute the mean and standard deviation of the bin (or node). If the standard deviation is greater than some threshold, split the node in half. Continue this process until all intensities have been processed, and you will have a more efficient histogram.
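
A rough, batch-style sketch of that idea (splitBin, maxSD, minCount and img are illustrative names and thresholds, not something given in this answer): sort the intensities, then recursively halve any bin whose standard deviation is still too large.

(* recursively halve a sorted intensity list while its spread exceeds maxSD *)
splitBin[vals_List, maxSD_, minCount_] :=
  If[Length[vals] <= minCount || StandardDeviation[N@vals] <= maxSD,
   {vals},
   With[{mid = Quotient[Length[vals], 2]},
    Join[splitBin[Take[vals, mid], maxSD, minCount],
     splitBin[Drop[vals, mid], maxSD, minCount]]]]

bins = splitBin[Sort@Flatten@ImageData[img, "Byte"], 15, 100];
{Min@#, Max@#} & /@ bins   (* the resulting intensity ranges *)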

This method can be improved with additional details of course:

  1. You might consider using kurtosis as a splitting criterion.
  2. Skewness might be used to determine where the split occurs.
  3. You might cross all the way into decision-tree land and borrow the Gini index to guide splitting (some split techniques rely on more "exotic" statistics, like the t-test).
  4. Lastly, you might perform a final consolidation pass to collapse any sparsely populated nodes (see the sketch after this list).
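
For item 4, a minimal sketch of such a consolidation pass over the bins produced above (consolidate and minCount are again illustrative names):

(* merge a bin into its left neighbour whenever either of them is sparsely populated *)
consolidate[bins_List, minCount_] :=
  Fold[
   If[#1 =!= {} && (Length[Last[#1]] < minCount || Length[#2] < minCount),
     Append[Most[#1], Join[Last[#1], #2]],  (* one side is sparse: merge them *)
     Append[#1, #2]] &,                     (* otherwise keep the bin on its own *)
   {}, bins]

consolidate[bins, 200]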

Of course, if you've applied all of the above "improvements", then you've basically implemented one variation of a k-means clustering algorithm ;-)

Note: I disagree with the comment above - the problem you describe does not appear closely related to histogram equalization.
