How can I compare one image against another on the iPhone to see whether they are similar to a certain percentage?

Posted on 2024-11-17 10:19:10


I basically want to take two images captured with the camera on the iPhone or iPad 2 and compare them to each other to see if they are pretty much the same. Obviously, due to lighting and so on, the images will never be exactly the same, so I would like to check for around 90% similarity.

All the other questions like this that I have seen on here were either not for iOS or were about locating objects within images. I just want to see if two images are similar.

Thank you.


旧故 2024-11-24 10:19:10


As a quick, simple algorithm, I'd suggest iterating through about 1% of the pixels in each image and either comparing them directly against each other or keeping a running average and then comparing the two average color values at the end.

You can look at this answer for an idea of how to determine the color of a pixel at a given position in an image. You may want to optimize it somewhat to better suit your use-case (repeatedly querying the same image), but it should provide a good starting point.
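The linked answer is not reproduced here, but the core idea is to read a pixel's bytes out of a raw bitmap buffer. A minimal C sketch, assuming a tightly packed 8-bit RGBA buffer (the packing is my assumption, not stated in the original answer):

```c
#include <stdint.h>

/* Return a pointer to the 4 bytes (R, G, B, A) of the pixel at (x, y)
 * in a tightly packed 8-bit RGBA buffer of the given width. */
static inline const uint8_t *pixel_at(const uint8_t *rgba, int width,
                                      int x, int y)
{
    return rgba + 4 * (y * width + x);
}
```

For repeated queries against the same image, keeping one decoded buffer alive and indexing into it like this is far cheaper than re-rendering the image for every pixel.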

Then you can use an algorithm roughly like:

// Assumes a helper -getRGBForX:andY: (see the linked answer) that returns
// a pointer to the pixel's RGB components. Stepping by 10 in each
// direction samples roughly 1% of the pixels.
float numDifferences = 0.0f;
float totalCompares = width * height / 100.0f;
for (int yCoord = 0; yCoord < height; yCoord += 10) {
    for (int xCoord = 0; xCoord < width; xCoord += 10) {
        int *img1RGB = [image1 getRGBForX:xCoord andY:yCoord];
        int *img2RGB = [image2 getRGBForX:xCoord andY:yCoord];
        if (abs(img1RGB[0] - img2RGB[0]) > 25 || abs(img1RGB[1] - img2RGB[1]) > 25 || abs(img1RGB[2] - img2RGB[2]) > 25) {
            // one or more color components differs by more than ~10% (25/255)
            numDifferences++;
        }
    }
}

if (numDifferences / totalCompares <= 0.1f) {
    // at least 90% of the sampled pixels match
}
else {
    // fewer than 90% of the sampled pixels match
}
淡墨 2024-11-24 10:19:10


Based on aroth's idea, this is my full implementation. It checks if some random pixels are the same. For what I needed it works flawlessly.

- (bool)isTheImage:(UIImage *)image1 apparentlyEqualToImage:(UIImage *)image2 accordingToRandomPixelsPer1:(float)pixelsPer1
{
    if (!CGSizeEqualToSize(image1.size, image2.size))
    {
        return false;
    }

    int pixelsWidth = (int)CGImageGetWidth(image1.CGImage);
    int pixelsHeight = (int)CGImageGetHeight(image1.CGImage);

    int pixelsToCompare = pixelsWidth * pixelsHeight * pixelsPer1;

    // Two 1x1 bitmap contexts; drawing an image offset by (-x, -y) renders
    // exactly the pixel at (x, y) into the backing uint32_t.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    uint32_t pixel1;
    CGContextRef context1 = CGBitmapContextCreate(&pixel1, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipFirst);
    uint32_t pixel2;
    CGContextRef context2 = CGBitmapContextCreate(&pixel2, 1, 1, 8, 4, colorSpace, kCGImageAlphaNoneSkipFirst);
    CGColorSpaceRelease(colorSpace); // the contexts retain it

    bool isEqual = true;

    for (int i = 0; i < pixelsToCompare; i++)
    {
        // arc4random_uniform avoids the modulo bias of arc4random() % n
        int pixelX = arc4random_uniform(pixelsWidth);
        int pixelY = arc4random_uniform(pixelsHeight);

        CGContextDrawImage(context1, CGRectMake(-pixelX, -pixelY, pixelsWidth, pixelsHeight), image1.CGImage);
        CGContextDrawImage(context2, CGRectMake(-pixelX, -pixelY, pixelsWidth, pixelsHeight), image2.CGImage);

        if (pixel1 != pixel2)
        {
            isEqual = false;
            break;
        }
    }
    CGContextRelease(context1);
    CGContextRelease(context2);

    return isEqual;
}

Usage:

[self isTheImage:image1
    apparentlyEqualToImage:image2
    accordingToRandomPixelsPer1:0.001]; // use a value between 0.0001 and 0.005

According to my performance tests, 0.005 (0.5% of the pixels) is the maximum value you should use with this approach; if you need more precision than that, just compare the whole images instead. 0.001 seems to be a safe, well-performing value. For large images (say between 0.5 and 2 megapixels), I'm using 0.0001 (0.01%); it works great, is incredibly fast, and never makes a mistake.

But of course the error rate will depend on the type of images you are using. I'm using UIWebView screenshots and 0.0001 performs well, but you could probably use much less when comparing real photographs (in fact, even comparing a single random pixel might do). If you are dealing with very similar computer-designed images, you will definitely need more precision.

Note: I'm always comparing ARGB images without taking the alpha channel into account. You may need to adapt this if that's not exactly your case.
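If your pixel values do carry a meaningful alpha byte and you want to keep ignoring it, one simple option is to mask it off before comparing. This assumes 32-bit ARGB pixels with alpha in the high byte, matching the `uint32_t` values the bitmap contexts above produce; the helper itself is my addition, not part of the original answer:

```c
#include <stdint.h>

/* For 32-bit ARGB pixels (alpha in the high byte), mask off the alpha
 * channel so only the RGB bytes take part in the comparison. */
int rgb_equal_ignoring_alpha(uint32_t pixel1, uint32_t pixel2)
{
    const uint32_t rgb_mask = 0x00FFFFFFu;
    return (pixel1 & rgb_mask) == (pixel2 & rgb_mask);
}
```

With `kCGImageAlphaNoneSkipFirst` the skipped byte is undefined rather than real alpha, so masking it out also makes the comparison robust to whatever value Core Graphics leaves there.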
