Binary-quantizing a CGImage (is the image mostly bright or mostly dark?)

Published 2024-12-27 05:19:01

I have a CGImage and I want to determine whether it is predominantly bright or predominantly dark. I could, of course, iterate over the pixel matrix and check whether a sufficient number of pixels exceed a chosen threshold. However, since I am new to image processing, I assume CoreGraphics or Quartz must have built-in functions that are better suited, and maybe even accelerated.
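The brute-force loop described above can be sketched in plain C. The function and buffer names are illustrative; with a real CGImage you would first draw it into an 8-bit buffer via a CGBitmapContext (not shown here):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Decide whether an 8-bit grayscale buffer is "mostly bright":
 * count the pixels above `threshold` and compare against half the
 * total pixel count. */
bool is_mostly_bright(const uint8_t *pixels, size_t count,
                      uint8_t threshold)
{
    size_t bright = 0;
    for (size_t i = 0; i < count; i++) {
        if (pixels[i] > threshold) {
            bright++;
        }
    }
    return bright * 2 > count; /* more than half the pixels are bright */
}
```

This is O(n) over the pixels, which is exactly the cost the question hopes a built-in, vectorized routine could reduce in practice.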

Comments (3)

明月松间行 2025-01-03 05:19:01

CoreGraphics (aka Quartz 2D) doesn't have any functions for this. CoreImage on Mac OS X has CIAreaAverage and CIAreaHistogram, which might help you, but I don't think iOS (as of 5.0.1) has those filters.

iOS does have the Accelerate framework. The vImageHistogramCalculation_ARGBFFFF function and related functions might help you.
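To make the histogram route concrete: vImageHistogramCalculation fills 256-bin per-channel histograms using vectorized code. The sketch below computes the histogram by hand in plain C as a stand-in for the vImage call, so the bright/dark decision step is visible; the function name and threshold are illustrative:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Histogram-based bright/dark decision over an 8-bit luminance buffer.
 * vImageHistogramCalculation would fill the histogram far faster; the
 * decision logic afterwards is the same: the image is "mostly bright"
 * when more than half of the histogram mass lies above the threshold
 * bin. */
bool histogram_mostly_bright(const uint8_t *luma, size_t count,
                             uint8_t threshold)
{
    size_t histogram[256] = {0};
    for (size_t i = 0; i < count; i++) {
        histogram[luma[i]]++;
    }
    size_t above = 0;
    for (int bin = (int)threshold + 1; bin < 256; bin++) {
        above += histogram[bin];
    }
    return above * 2 > count;
}
```

For a single bright/dark decision the histogram carries more information than needed (a running sum would do), but it becomes worthwhile if you also want percentiles or a full tonal distribution.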

独守阴晴ぅ圆缺 2025-01-03 05:19:01

Here's how to use CIAreaAverage in an iOS app:

    CGRect inputExtent = [self.inputImage extent];
    CIVector *extent = [CIVector vectorWithX:inputExtent.origin.x
                                           Y:inputExtent.origin.y
                                           Z:inputExtent.size.width
                                           W:inputExtent.size.height];
    // CIAreaAverage reduces the whole extent to a single average pixel.
    CIImage *inputAverage = [CIFilter filterWithName:@"CIAreaAverage"
                                       keysAndValues:kCIInputImageKey, self.inputImage,
                                                     kCIInputExtentKey, extent,
                                                     nil].outputImage;

    //CIImage *inputAverage = [self.inputImage imageByApplyingFilter:@"CIAreaMinimum" withInputParameters:@{@"inputExtent" : extent}];
    EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    NSDictionary *options = @{ kCIContextWorkingColorSpace : [NSNull null] };
    CIContext *myContext = [CIContext contextWithEAGLContext:myEAGLContext options:options];

    // The averaged image is 1x1, so one RGBA8 pixel (4 bytes) suffices.
    size_t rowBytes = 4;       // RGBA8: 4 bytes per pixel
    uint8_t byteBuffer[4];     // Buffer to render into

    [myContext render:inputAverage
             toBitmap:byteBuffer
             rowBytes:rowBytes
               bounds:[inputAverage extent]
               format:kCIFormatRGBA8
           colorSpace:nil];

    const uint8_t *pixel = &byteBuffer[0];
    float red   = pixel[0] / 255.0;
    float green = pixel[1] / 255.0;
    float blue  = pixel[2] / 255.0;
    NSLog(@"%f, %f, %f\n", red, green, blue);

一个人的旅程 2025-01-03 05:19:01

There are faster, more effective ways to measure that specific metric than an intensity histogram, if in fact all you intend to do with it is measurement.

Image keying is one such metric. It is determined by the sum-average of the intensities, but does not require binning them. The value returned by an image-keying formula (which I can share if you need it) can be used for local adaptive tonal-range mapping (what you want) via a simple gamma adjustment: each pixel's intensity is raised to the power of one over the image key.

This is not hard, and it's clear that you have the skills and experience to employ this faster and more effective way of differentiating between a light and dark image.

What's more, you should establish a pattern and practice of using image-metric formulas instead of histograms wherever you can. They are designed to interpret information, not just collect it. Not only that, but they are often interoperable, meaning they can be stacked one on top of the other, just like Core Image filters.

For specifics, read:

Gamma Correction with Adaptation to the Image Key, on page 14 of Tone Mapping for High Dynamic Range Images by Laurence Meylan.
