Measuring the distance between images
Regarding my question about Gaussian noise reduction, I would like to know of a simple method to quantify the success of a noise reduction filter.
I've attempted a few methods of noise reduction and I want some way to determine which one works best. I have the original image, a noisy version, and a few versions created from attempts to reduce the noise. I thought about trying some matrix distance measurement between the enhanced image and the original image, in order to compare the noise reduction methods. Will this work, or is there some other common method besides just looking at the pictures?
Comments (3)
The problem with the mean-square error metric is that it doesn't represent the visual quality of the restored image well. To address that, other metrics have been developed. One that is quite popular now is called Structural Similarity (SSIM). The source code for it can be found here.
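As a rough illustration, here is a minimal sketch of computing SSIM with scikit-image (my own assumption for the tooling; it is not the source code the answer links to). It assumes grayscale images stored as NumPy arrays.

```python
# Minimal SSIM sketch, assuming grayscale images as NumPy arrays and that
# scikit-image is installed; this is not the implementation linked above.
import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(original, denoised):
    """SSIM between the original and a denoised image (1.0 means identical)."""
    original = original.astype(np.float64)
    denoised = denoised.astype(np.float64)
    return structural_similarity(
        original, denoised,
        data_range=original.max() - original.min(),  # value range of the reference image
    )
```

A higher SSIM (closer to 1) indicates that the denoised image is structurally closer to the original.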
My colleagues working on noise reduction always use the Signal-to-Noise Ratio (SNR) to compare the quality of the denoising:
http://en.wikipedia.org/wiki/Signal-to-noise_ratio
Here are some scientific articles by my colleague Julien Mairal on state-of-the-art noise reduction:
http://www.di.ens.fr/~mairal/index.php
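Since the clean original is available in this case, one straightforward way to apply this idea (a sketch under my own assumptions, not necessarily the colleagues' exact procedure) is to treat the residual between the denoised result and the original as noise and report the SNR in decibels:

```python
# SNR sketch, assuming the clean original is known and the residual
# (denoised - original) is treated as the remaining noise.
import numpy as np

def snr_db(original, denoised):
    """Signal-to-noise ratio in decibels; higher means better denoising."""
    original = original.astype(np.float64)
    denoised = denoised.astype(np.float64)
    signal_power = np.mean(original ** 2)
    noise_power = np.mean((denoised - original) ** 2)
    return 10.0 * np.log10(signal_power / noise_power)
```

Running this for each denoised version gives a single number per filter, which makes ranking the methods easy.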
The obvious distance to use is the sum of the squares of the pixel errors. The squared pixel error is (p1 - p2)^2 for a grayscale image (where the intensities of the two pixels are p1 and p2), or (r1 - r2)^2 + (g1 - g2)^2 + (b1 - b2)^2 for an RGB image (where the colors of the two pixels are (r1, g1, b1) and (r2, g2, b2)). You can refine this a bit by scaling the RGB components differently, to compensate for the fact that the human eye responds less strongly to blue than to green and red.
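A minimal sketch of this weighted sum of squared errors, assuming RGB images stored as (H, W, 3) NumPy arrays; the luma-style weights (0.299, 0.587, 0.114) are just one common choice for the per-channel scaling, not something prescribed by the answer:

```python
# Weighted sum-of-squared-errors sketch for RGB images stored as (H, W, 3) arrays.
# The channel weights are an illustrative luma-style choice (red, green, blue).
import numpy as np

def weighted_sse(img1, img2, weights=(0.299, 0.587, 0.114)):
    """Sum over all pixels of the weighted squared per-channel differences."""
    diff = img1.astype(np.float64) - img2.astype(np.float64)
    return float(np.sum(np.square(diff) * np.asarray(weights)))
```

The smaller the value, the closer the denoised image is to the original under this metric.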