Draw standard NSImage inverted (white instead of black)

Posted on 2024-08-18 19:12:43

I'm trying to draw a standard NSImage in white instead of black. The following works fine for drawing the image in black in the current NSGraphicsContext:

NSImage* image = [NSImage imageNamed:NSImageNameEnterFullScreenTemplate];
[image drawInRect:r fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];

I expected NSCompositeXOR to do the trick, but no. Do I need to go down the complicated [CIFilter filterWithName:@"CIColorInvert"] path? I feel like I must be missing something simple.

Comments (3)

无敌元气妹 2024-08-25 19:12:43

The Core Image route would be the most reliable. It's actually not very complicated; I've posted a sample below. If you know none of your images will be flipped, you can remove the transform code. The main thing to be careful of is that the conversion from NSImage to CIImage can be expensive performance-wise, so you should cache the CIImage if possible and not re-create it during each drawing operation.

// Convert the NSImage to a CIImage via its TIFF data (cache this if possible).
CIImage* ciImage = [[CIImage alloc] initWithData:[yourImage TIFFRepresentation]];
if ([yourImage isFlipped])
{
    // Flip the CIImage vertically so it matches the NSImage's coordinate system.
    CGRect cgRect    = [ciImage extent];
    CGAffineTransform transform;
    transform = CGAffineTransformMakeTranslation(0.0, cgRect.size.height);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    ciImage   = [ciImage imageByApplyingTransform:transform];
}
// Invert the colors with Core Image, then draw the result into the current context.
CIFilter* filter = [CIFilter filterWithName:@"CIColorInvert"];
[filter setDefaults];
[filter setValue:ciImage forKey:@"inputImage"];
CIImage* output = [filter valueForKey:@"outputImage"];
[output drawAtPoint:NSZeroPoint fromRect:NSRectFromCGRect([output extent]) operation:NSCompositeSourceOver fraction:1.0];

Note: release/retain memory management is left as an exercise; the code above assumes garbage collection.
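
The caching the answer recommends can be as simple as keeping the converted CIImage in an instance variable of whatever object does the drawing. Below is a minimal Swift sketch of that idea (the class and property names are illustrative, not from the answer):

import AppKit
import CoreImage

final class InvertedImageView: NSView {
    var sourceImage: NSImage?            // the image to draw inverted
    private var cachedCIImage: CIImage?  // built once, reused on every draw

    // Convert lazily and cache, so the TIFF round-trip happens only once.
    private func ciImage(for image: NSImage) -> CIImage? {
        if let cached = cachedCIImage { return cached }
        guard let tiff = image.tiffRepresentation else { return nil }
        cachedCIImage = CIImage(data: tiff)
        return cachedCIImage
    }
}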

If you want to render the image at an arbitrary size, you could do the following:

NSSize imageSize = NSMakeSize(1024,768); //or whatever size you want
[yourImage setSize:imageSize];
[yourImage lockFocus];
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, imageSize.width, imageSize.height)];
[yourImage unlockFocus];
CIImage* image = [CIImage imageWithData:[bitmap TIFFRepresentation]];

苦笑流年记忆 2024-08-25 19:12:43

Here is a solution using Swift 5.1, somewhat based on the above solutions. Note that I am not caching the images, so it likely isn't the most efficient, as my primary use case is to flip small monochrome images in toolbar buttons based on whether the current color scheme is light or dark.

import os
import AppKit
import Foundation

public extension NSImage {

    func inverted() -> NSImage {
        guard let cgImage = self.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
            os_log(.error, "Could not create CGImage from NSImage")
            return self
        }

        let ciImage = CIImage(cgImage: cgImage)
        guard let filter = CIFilter(name: "CIColorInvert") else {
            os_log(.error, "Could not create CIColorInvert filter")
            return self
        }

        filter.setValue(ciImage, forKey: kCIInputImageKey)
        guard let outputImage = filter.outputImage else {
            os_log(.error, "Could not obtain output CIImage from filter")
            return self
        }

        guard let outputCgImage = outputImage.toCGImage() else {
            os_log(.error, "Could not create CGImage from CIImage")
            return self
        }

        return NSImage(cgImage: outputCgImage, size: self.size)
    }
}

fileprivate extension CIImage {
    func toCGImage() -> CGImage? {
        let context = CIContext(options: nil)
        if let cgImage = context.createCGImage(self, from: self.extent) {
            return cgImage
        }
        return nil
    }
}
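
As a usage sketch (not part of the answer), the extension above could drive a toolbar button icon that follows the system appearance; the asset name and the button here are illustrative stand-ins:

import AppKit

let toolbarButton = NSButton()                  // stand-in for a real toolbar item's button
let baseIcon = NSImage(named: "ToolbarIcon")!   // illustrative asset name

// Show the inverted icon only when the effective appearance is dark.
let isDark = NSApp.effectiveAppearance.bestMatch(from: [.darkAqua, .aqua]) == .darkAqua
toolbarButton.image = isDark ? baseIcon.inverted() : baseIcon
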
爱冒险 2024-08-25 19:12:43

Just one note: I found that the CIColorInvert filter isn't always reliable. For example, if you try to invert back an image that was inverted in Photoshop, the CIFilter will produce a much lighter image. As far as I understand, this happens because of the difference between the gamma value CIFilter assumes (a gamma of 1) and the gamma of images that come from other sources.

While I was looking for ways to change the gamma value for CIFilter, I found a note that there's a bug in CIContext: changing its gamma value from the default of 1 will produce unpredictable results.

Regardless, there's another way to invert an NSImage that always produces correct results: inverting the pixels of an NSBitmapImageRep.

I'm reposting the code from etutorials.org (http://bit.ly/Y6GpLn):

// srcImageRep is the NSBitmapImageRep of the source image
int n = [srcImageRep bitsPerPixel] / 8;           // Bytes per pixel
int w = [srcImageRep pixelsWide];
int h = [srcImageRep pixelsHigh];
int rowBytes = [srcImageRep bytesPerRow];
int i;

// Destination image plus a bitmap rep matching the source's geometry and format.
NSImage *destImage = [[NSImage alloc] initWithSize:NSMakeSize(w, h)];
NSBitmapImageRep *destImageRep = [[[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:w
                      pixelsHigh:h
                   bitsPerSample:8
                 samplesPerPixel:n
                        hasAlpha:[srcImageRep hasAlpha]
                        isPlanar:NO
                  colorSpaceName:[srcImageRep colorSpaceName]
                     bytesPerRow:rowBytes
                    bitsPerPixel:0] autorelease];

unsigned char *srcData = [srcImageRep bitmapData];
unsigned char *destData = [destImageRep bitmapData];

// Invert every byte of the bitmap (note: this also inverts the alpha bytes
// if the image has an alpha channel).
for ( i = 0; i < rowBytes * h; i++ )
    *(destData + i) = 255 - *(srcData + i);

[destImage addRepresentation:destImageRep];
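
The snippet assumes srcImageRep already exists. One way to obtain a bitmap rep for an arbitrary NSImage (shown here as a Swift sketch with an illustrative helper name, not part of the reposted code) is to reuse an existing bitmap representation or render one from the image's CGImage:

import AppKit

// Illustrative helper: return a bitmap representation of an NSImage.
func bitmapRep(for image: NSImage) -> NSBitmapImageRep? {
    // Reuse an existing bitmap rep if the image already carries one.
    if let rep = image.representations.compactMap({ $0 as? NSBitmapImageRep }).first {
        return rep
    }
    // Otherwise render the image's CGImage into a fresh bitmap rep.
    guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
        return nil
    }
    return NSBitmapImageRep(cgImage: cgImage)
}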