Here's what I would like to do:
I'm taking pictures with a webcam at regular intervals. Sort of like a time lapse thing. However, if nothing has really changed, that is, the picture pretty much looks the same, I don't want to store the latest snapshot.
I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold.
I'm looking for simplicity rather than perfection.
I'm using Python.
General idea

Option 1: Load both images as arrays (scipy.misc.imread) and calculate an element-wise (pixel-by-pixel) difference. Calculate the norm of the difference.

Option 2: Load both images. Calculate some feature vector for each of them (like a histogram). Calculate the distance between feature vectors rather than between the images.
However, there are some decisions to make first.
Questions
You should answer these questions first:
Are images of the same shape and dimension?
If not, you may need to resize or crop them. The PIL library will help to do this in Python.
If they are taken with the same settings and the same device, they are probably the same.
Are images well-aligned?
If not, you may want to run cross-correlation first, to find the best alignment. SciPy has functions to do this.
If the camera and the scene are still, the images are likely to be well-aligned.
Is exposure of the images always the same? (Is lightness/contrast the same?)
If not, you may want to normalize images.
But be careful: in some situations this may do more harm than good. For example, a single bright pixel on a dark background will make the normalized image very different.
Is color information important?
If you want to notice color changes, you will have a vector of color values per point, rather than a scalar value as in a gray-scale image. Writing such code requires more care.
Are there distinct edges in the image? Are they likely to move?
If yes, you can apply an edge detection algorithm first (e.g. calculate the gradient with a Sobel or Prewitt transform, apply some threshold), then compare the edges on the first image to the edges on the second.
Is there noise in the image?
All sensors pollute the image with some amount of noise. Low-cost sensors have more noise. You may wish to apply some noise reduction before you compare images. Blurring is the simplest (but not the best) approach here.
What kind of changes do you want to notice?
This may affect the choice of norm to use for the difference between images.
Consider using the Manhattan norm (the sum of the absolute values) or the zero norm (the number of elements not equal to zero) to measure how much the image has changed. The former will tell you how much the image is off, the latter will tell you only how many pixels differ.
Example
I assume your images are well-aligned, the same size and shape, possibly with different exposure. For simplicity, I convert them to grayscale even if they are color (RGB) images.
You will need these imports:
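For example, on an older SciPy where scipy.misc.imread still exists (newer installs would use imageio.imread instead):

```python
import sys

from scipy.misc import imread   # on newer installs: from imageio import imread
from scipy.linalg import norm
from scipy import sum, average  # older SciPy re-exported these from NumPy
```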
Main function: read two images, convert to grayscale, compare, and print the results.
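A sketch consistent with that description (file names come from the command line):

```python
def main():
    file1, file2 = sys.argv[1:1 + 2]
    # read images as 2D arrays, converted to grayscale for simplicity
    img1 = to_grayscale(imread(file1).astype(float))
    img2 = to_grayscale(imread(file2).astype(float))
    # compare and report both norms, total and per pixel
    n_m, n_0 = compare_images(img1, img2)
    print("Manhattan norm:", n_m, "/ per pixel:", n_m / img1.size)
    print("Zero norm:", n_0, "/ per pixel:", n_0 * 1.0 / img1.size)
```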
How to compare: img1 and img2 are 2D SciPy arrays here.
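A minimal sketch using the Manhattan and zero norms discussed above:

```python
def compare_images(img1, img2):
    # normalize to compensate for exposure difference
    img1 = normalize(img1)
    img2 = normalize(img2)
    # calculate the difference and its norms
    diff = img1 - img2              # element-wise for SciPy arrays
    m_norm = sum(abs(diff))         # Manhattan norm
    z_norm = norm(diff.ravel(), 0)  # zero norm: count of non-zero elements
    return (m_norm, z_norm)
```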
If the file is a color image, imread returns a 3D array; average the RGB channels (the last array axis) to obtain intensity. There is no need to do this for grayscale images (e.g. .pgm).
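For instance:

```python
def to_grayscale(arr):
    "If arr is a color image (3D array), convert it to grayscale (2D array)."
    if len(arr.shape) == 3:
        return average(arr, -1)  # average over the last axis (color channels)
    else:
        return arr
```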
Normalization is trivial; you may choose to normalize to [0,1] instead of [0,255]. arr is a SciPy array here, so all operations are element-wise.
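One way to write it:

```python
def normalize(arr):
    rng = arr.max() - arr.min()
    amin = arr.min()
    return (arr - amin) * 255 / rng
```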
Run the main function when the script is executed, e.g. with the usual if __name__ == "__main__": main() guard. Now you can put this all in a script and run it against two images. If we compare an image to itself, there is no difference:
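With the main function sketched above, for identical inputs both norms come out exactly zero:

```
$ python compare.py one.jpg one.jpg
Manhattan norm: 0.0 / per pixel: 0.0
Zero norm: 0 / per pixel: 0.0
```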
If we blur the image and compare to the original, there is some difference:
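(Illustrative invocation only; the actual values depend on the image and the amount of blur, so they are elided here.)

```
$ python compare.py one.jpg one-blurred.jpg
Manhattan norm: ... / per pixel: ...
Zero norm: ... / per pixel: ...
```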
P.S. The entire compare.py script is the pieces above combined into one file.
Update: relevant techniques
As the question is about a video sequence, where frames are likely to be almost the same, and you look for something unusual, I'd like to mention some alternative approaches which may be relevant:
I strongly recommend taking a look at the “Learning OpenCV” book, Chapters 9 (Image parts and segmentation) and 10 (Tracking and motion). The former teaches the background subtraction method, the latter gives some info on optical flow methods. All methods are implemented in the OpenCV library. If you use Python, I suggest using OpenCV ≥ 2.3 and its cv2 Python module.

The simplest version of background subtraction:
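A minimal NumPy sketch of the idea (learning the per-pixel mean and standard deviation from an initial warm-up run is an assumption of this sketch, not a prescription from the book):

```python
import numpy as np

def foreground_fraction(frames, k=2.0, warmup=30):
    """Yield (index, fraction-of-changed-pixels) for frames after a warm-up.

    Learns a per-pixel mean and standard deviation over the first `warmup`
    grayscale frames, then flags pixels outside mean +/- k*sigma.
    """
    history = []
    mu = sigma = None
    for i, frame in enumerate(frames):
        frame = frame.astype(float)
        if i < warmup:
            history.append(frame)
            if i == warmup - 1:
                stack = np.stack(history)
                mu = stack.mean(axis=0)           # per-pixel background mean
                sigma = stack.std(axis=0) + 1e-6  # avoid division by zero
            continue
        changed = np.abs(frame - mu) > k * sigma
        yield i, changed.mean()
```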
More advanced versions take into account the time series of every pixel and handle non-static scenes (like moving trees or grass).
The idea of optical flow is to take two or more frames, and assign velocity vector to every pixel (dense optical flow) or to some of them (sparse optical flow). To estimate sparse optical flow, you may use Lucas-Kanade method (it is also implemented in OpenCV). Obviously, if there is a lot of flow (high average over max values of the velocity field), then something is moving in the frame, and subsequent images are more different.
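For example, dense optical flow with OpenCV's Farneback implementation (a sketch; the parameter values are the common ones from the OpenCV documentation examples, not tuned):

```python
import cv2
import numpy as np

def mean_flow(prev_gray, next_gray):
    # dense optical flow: one (dx, dy) velocity vector per pixel
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.linalg.norm(flow, axis=2).mean()  # high mean flow => motion
```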
Comparing histograms may help to detect sudden changes between consecutive frames. This approach was used in Courbon et al., 2010.
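A minimal sketch of the histogram idea (L1 distance between normalized intensity histograms; the bin count is an arbitrary choice here):

```python
import numpy as np

def histogram_change(frame1, frame2, bins=64):
    h1, _ = np.histogram(frame1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(frame2, bins=bins, range=(0, 256))
    h1 = h1.astype(float) / h1.sum()
    h2 = h2.astype(float) / h2.sum()
    return np.abs(h1 - h2).sum()  # 0 for identical histograms, up to 2
```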
A simple solution:
Encode the image as a jpeg and look for a substantial change in filesize.
I've implemented something similar with video thumbnails, and had a lot of success and scalability.
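A sketch of that idea with Pillow (the quality setting and threshold are illustrative assumptions):

```python
import io
from PIL import Image

def jpeg_size(img, quality=75):
    # encode in memory and measure the compressed size
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

def changed_substantially(img1, img2, rel_threshold=0.05):
    s1, s2 = jpeg_size(img1), jpeg_size(img2)
    return abs(s1 - s2) / float(max(s1, s2)) > rel_threshold
```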
You can compare two images using functions from PIL.
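For example (the file names here are placeholders):

```python
from PIL import Image, ImageChops

im1 = Image.open("snapshot1.png")
im2 = Image.open("snapshot2.png")
diff = ImageChops.difference(im1, im2)
```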
The diff object is an image in which every pixel is the result of subtracting the color values of that pixel in the second image from the first image. Using the diff image you can do several things. The simplest one is the diff.getbbox() function. It will tell you the minimal rectangle that contains all the changes between your two images.

You can probably implement approximations of the other stuff mentioned here using functions from PIL as well.
Two popular and relatively simple methods are: (a) the Euclidean distance already suggested, or (b) normalized cross-correlation. Normalized cross-correlation tends to be noticeably more robust to lighting changes than simple cross-correlation. Wikipedia gives a formula for the normalized cross-correlation. More sophisticated methods exist too, but they require quite a bit more work.
Using numpy-like syntax, assuming that i1 and i2 are 2D grayscale image arrays:
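A sketch of both measures (the normalization choices follow the usual textbook definitions, not any particular source):

```python
import numpy as np

def euclidean_distance(i1, i2):
    return np.sqrt(((i1 - i2) ** 2).sum()) / i1.size

def normalized_cross_correlation(i1, i2):
    a = i1 - i1.mean()
    b = i2 - i2.mean()
    # close to 1.0 for similar images, lower when they differ
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
```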
A trivial thing to try:
Resample both images to small thumbnails (e.g. 64 x 64) and compare the thumbnails pixel-by-pixel with a certain threshold. If the original images are almost the same, the resampled thumbnails will be very similar or even exactly the same. This method takes care of noise that can occur especially in low-light scenes. It may even be better if you go grayscale.
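A sketch with Pillow (the thumbnail size and threshold are values you would tune empirically):

```python
import numpy as np
from PIL import Image

def thumbnails_differ(path1, path2, size=(64, 64), threshold=10.0):
    t1 = np.asarray(Image.open(path1).convert("L").resize(size), dtype=float)
    t2 = np.asarray(Image.open(path2).convert("L").resize(size), dtype=float)
    return np.abs(t1 - t2).mean() > threshold
```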
Another nice, simple way to measure the similarity between two images:
If others are interested in a more powerful way to compare image similarity, I put together a tutorial and web app for measuring and visualizing similar images using Tensorflow.
I had a similar problem at work, I was rewriting our image transform endpoint and I wanted to check that the new version was producing the same or nearly the same output as the old version. So I wrote this:
https://github.com/nicolashahn/diffimg
Which operates on images of the same size, and at a per-pixel level, measures the difference in values at each channel: R, G, B(, A), takes the average difference of those channels, and then averages the difference over all pixels, and returns a ratio.
For example, take a 10x10 image of black pixels and the same image with one pixel changed to red: the difference at that pixel is 1/3, or 0.33... (RGB 0,0,0 vs 255,0,0), and at all other pixels it is 0. With 100 pixels total, 0.33.../100 gives a ~0.33% difference for the image.
I believe this would work perfectly for OP's project (I realize this is a very old post now, but posting for future StackOverflowers who also want to compare images in python).
I am addressing specifically the question of how to compute if they are "different enough". I assume you can figure out how to subtract the pixels one by one.
First, I would take a bunch of images with nothing changing, and find out the maximum amount that any pixel changes just because of variations in the capture, noise in the imaging system, JPEG compression artifacts, and moment-to-moment changes in lighting. Perhaps you'll find that 1 or 2 bit differences are to be expected even when nothing moves.
Then for the "real" test, you want a criterion like this:
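Something along these lines (a sketch; E is a per-pixel change limit as a fraction of full scale, and P caps how many pixels may change at all):

```python
import numpy as np

def images_differ(img1, img2, E=0.02, P=1000, full_scale=255.0):
    diff = np.abs(img1.astype(float) - img2.astype(float)) / full_scale
    # different if any pixel moved a lot, or too many pixels moved at all
    return (diff > E).any() or np.count_nonzero(diff) > P
```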
So, perhaps, if E = 0.02, P = 1000, that would mean (approximately) that it would be "different" if any single pixel changes by more than ~5 units (assuming 8-bit images), or if more than 1000 pixels had any errors at all.
This is intended mainly as a good "triage" technique to quickly identify images that are close enough to not need further examination. The images that "fail" may then move on to a more elaborate/expensive technique that wouldn't have false positives if the camera shook a bit, for example, or that is more robust to lighting changes.

I run an open source project, OpenImageIO, that contains a utility called "idiff" that compares differences with thresholds like this (even more elaborate, actually). Even if you don't want to use this software, you may want to look at the source to see how we did it. It's used commercially quite a bit, and this thresholding technique was developed so that we could have a test suite for rendering and image processing software, with "reference images" that might have small differences from platform to platform or as we made minor tweaks to the algorithms, so we wanted a "match within tolerance" operation.
Most of the answers given won't deal with lighting levels.
I would first normalize the image to a standard light level before doing the comparison.
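For instance, a crude normalization to a common mean brightness (the target level is an arbitrary assumption):

```python
import numpy as np

def normalize_lighting(img, target_mean=128.0):
    scale = target_mean / (img.mean() + 1e-6)  # avoid division by zero
    return np.clip(img.astype(float) * scale, 0, 255)
```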
Have you seen the Algorithm for finding similar images question? Check it out to see suggestions.
I would suggest a wavelet transformation of your frames (I've written a C extension for that using Haar transformation); then, comparing the indexes of the largest (proportionally) wavelet factors between the two pictures, you should get a numerical similarity approximation.
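The same idea can be sketched in Python with PyWavelets instead of a C extension (an assumption of this sketch, not the author's code):

```python
import numpy as np
import pywt

def wavelet_similarity(img1, img2, keep=100):
    def top_indices(img):
        # single-level 2D Haar transform; collect all coefficient magnitudes
        cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
        coeffs = np.abs(np.concatenate([c.ravel() for c in (cA, cH, cV, cD)]))
        return set(np.argsort(coeffs)[-keep:])
    # fraction of the largest coefficients that sit at the same positions
    return len(top_indices(img1) & top_indices(img2)) / float(keep)
```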
I had the same problem and wrote a simple python module which compares two same-size images using pillow's ImageChops to create a black/white diff image and sums up the histogram values.
You can get either this score directly, or a percentage value compared to a full black vs. white diff.
It also contains a simple is_equal function, with the option to supply a fuzzy threshold under which (inclusive) the images pass as equal.

The approach is not very elaborate, but maybe it is of use for others out there struggling with the same issue.
https://pypi.python.org/pypi/imgcompare/
I apologize if this is too late to reply, but since I've been doing something similar I thought I could contribute somehow.
Maybe with OpenCV you could use template matching. Assuming you're using a webcam as you said:
Tip: max_val (or min_val depending on the method used) will give you numbers, large numbers. To get the difference in percentage, use template matching with the same image -- the result will be your 100%.
Pseudo code to exemplify:
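A sketch of that workflow with the cv2 API (reading files stands in for the webcam capture here; both images are assumed to be the same size):

```python
import cv2

reference = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("latest.jpg", cv2.IMREAD_GRAYSCALE)

# matching the reference against itself defines the score for "100%"
perfect = cv2.matchTemplate(reference, reference, cv2.TM_CCOEFF_NORMED).max()
score = cv2.matchTemplate(frame, reference, cv2.TM_CCOEFF_NORMED).max()
print("similarity: %.1f%%" % (100.0 * score / perfect))
```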
Hope it helps.
You can compute the histogram of both images and then calculate the Bhattacharyya coefficient. It is a very fast algorithm, and I have used it to detect shot changes in a cricket video (in C, using OpenCV).
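A NumPy sketch of the coefficient on grayscale histograms:

```python
import numpy as np

def bhattacharyya_coefficient(img1, img2, bins=256):
    h1, _ = np.histogram(img1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(img2, bins=bins, range=(0, 256))
    p = h1.astype(float) / h1.sum()
    q = h2.astype(float) / h2.sum()
    return np.sqrt(p * q).sum()  # 1.0 for identical distributions
```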
Output:
False
True
image2\5.jpg image1\815.jpg
image2\6.jpg image1\819.jpg
image2\7.jpg image1\900.jpg
image2\8.jpg image1\998.jpg
image2\9.jpg image1\1012.jpg
The example pictures:
815.jpg
5.jpg
Earth mover's distance might be exactly what you need. It might be a bit heavy to implement in real time, though.
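For one-dimensional intensity distributions, SciPy ships an implementation (treating each image's pixel values as a sample; a simplification of the full 2D problem):

```python
from scipy.stats import wasserstein_distance

def emd_score(img1, img2):
    # earth mover's (Wasserstein) distance between intensity distributions
    return wasserstein_distance(img1.ravel(), img2.ravel())
```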
What about calculating the Manhattan distance of the two images? That gives you n*n values. Then you could do something like a row average to reduce to n values, and a function over that to get one single value.
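A sketch of that reduction:

```python
import numpy as np

def manhattan_score(img1, img2):
    d = np.abs(img1.astype(float) - img2.astype(float))  # n*n values
    row_means = d.mean(axis=1)  # reduce to n values
    return row_means.mean()     # reduce to a single value
```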
I think you could simply compute the Euclidean distance (i.e. sqrt(sum of squares of differences, pixel by pixel)) between the luminance of the two images, and consider them equal if this falls under some empirical threshold. And you would be better off wrapping it in a C function.
I have been having a lot of luck with jpg images taken with the same camera on a tripod by
(1) simplifying greatly (like going from 3000 pixels wide to 100 pixels wide or even fewer)
(2) flattening each jpg array into a single vector
(3) pairwise correlating sequential images with a simple correlate algorithm to get correlation coefficient
(4) squaring correlation coefficient to get r-square (i.e fraction of variability in one image explained by variation in the next)
(5) generally in my application if r-square < 0.9, I say the two images are different and something happened in between.
This is robust and fast in my implementation (Mathematica 7).
It's worth playing around with the part of the image you are interested in and focussing on that by cropping all images to that little area, otherwise a distant-from-the-camera but important change will be missed.
I don't know how to use Python, but am sure it does correlations, too, no?
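It does. A Python translation of the recipe above might look like this (the thumbnail size and the 0.9 cut-off follow the steps described; NumPy provides the correlation):

```python
import numpy as np
from PIL import Image

def images_similar(path1, path2, size=(100, 100), r2_threshold=0.9):
    v1 = np.asarray(Image.open(path1).convert("L").resize(size), float).ravel()
    v2 = np.asarray(Image.open(path2).convert("L").resize(size), float).ravel()
    r = np.corrcoef(v1, v2)[0, 1]  # correlation coefficient of the vectors
    return r ** 2 >= r2_threshold  # r-square below 0.9 => something happened
```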
Check out how Haar wavelets are implemented by isk-daemon. You could use its imgdb C++ code to calculate the difference between images on the fly.
A somewhat more principled approach is to use a global descriptor to compare images, such as GIST or CENTRIST. A hash function, as described here, also provides a similar solution.
There's a simple and fast solution using numpy by calculating mean squared error:
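For example:

```python
import numpy as np

def mse(img1, img2):
    # mean squared error over all pixels; 0.0 for identical images
    return np.mean((img1.astype(float) - img2.astype(float)) ** 2)
```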
Here is a function I wrote, which takes 2 images (file paths) as arguments and returns the average difference between the two images' pixels' components. This worked pretty well for me to determine visually "equal" images (when they're not ==-equal).

(I found 8 to be a good limit to determine if images are essentially the same.)
(Images must have the same dimensions if you add no preprocessing to this.)
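A sketch matching that description:

```python
from PIL import Image

def average_pixel_difference(path1, path2):
    im1 = Image.open(path1).convert("RGB")
    im2 = Image.open(path2).convert("RGB")
    assert im1.size == im2.size, "images must have the same dimensions"
    total = 0
    for p1, p2 in zip(im1.getdata(), im2.getdata()):
        total += sum(abs(c1 - c2) for c1, c2 in zip(p1, p2))
    n_components = im1.size[0] * im1.size[1] * 3
    return total / float(n_components)  # ~0 for visually equal images
```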
There are many metrics out there for evaluating how much two images look alike.

I will not go into code here, because I think it is a scientific problem rather than a technical one.

Generally, the question is related to human perception of images, so each algorithm is grounded in traits of the human visual system.

Classic approaches are:
Visible differences predictor: an algorithm for the assessment of image fidelity (https://www.spiedigitallibrary.org/conference-proceedings-of-spie/1666/0000/Visible-differences-predictor--an-algorithm-for-the-assessment-of/10.1117/12.135952.short?SSO=1)
Image Quality Assessment: From Error Visibility to Structural Similarity (http://www.cns.nyu.edu/pub/lcv/wang03-reprint.pdf)
FSIM: A Feature Similarity Index for Image Quality Assessment (https://www4.comp.polyu.edu.hk/~cslzhang/IQA/TIP_IQA_FSIM.pdf)
Among them, SSIM (Image Quality Assessment: From Error Visibility to Structural Similarity ) is the easiest to calculate and its overhead is also small, as reported in another paper "Image Quality Assessment Based on Gradient Similarity" (https://www.semanticscholar.org/paper/Image-Quality-Assessment-Based-on-Gradient-Liu-Lin/2b819bef80c02d5d4cb56f27b202535e119df988).
There are many more approaches. Take a look at Google Scholar and search for something like "visual difference" or "image quality assessment" if you are interested in, or really care about, the state of the art.
Use SSIM to measure the Structural Similarity Index Measure between 2 images.
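One readily available implementation is in scikit-image (a sketch; the file names are placeholders and the images are loaded as grayscale arrays):

```python
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

img1 = np.asarray(Image.open("snapshot1.png").convert("L"))
img2 = np.asarray(Image.open("snapshot2.png").convert("L"))
score = structural_similarity(img1, img2)  # 1.0 for identical images
```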
Check out this quite useful Python package if you need image quality metrics: project sewar.