NSImage loses quality when using writeToFile

Posted 2024-12-10 15:16:53


Basically, I'm trying to create a program for batch image processing that will resize every image and add a border around the edge (the border will be made up of images as well). Although I have yet to get to that implementation, and that's beyond the scope of my question, I ask it because even if I get a great answer here, I still may be taking the wrong approach to get there, and any help in recognizing that would be greatly appreciated. Anyway, here's my question:

Question:
Can I take the existing code I have below and modify it to save higher-quality images to file than it currently outputs? I literally spent 10+ hours trying to figure out what I was doing wrong; "secondaryImage" drew the high-quality resized image into the Custom View, but everything I tried to do to save the file resulted in an image that was substantially lower quality (not so much pixelated, just noticeably more blurry). Finally, I found some code in Apple's "Reducer" example (at the end of ImageReducer.m) that locks the focus and gets an NSBitmapImageRep from the current view. This made a substantial increase in image quality; however, the output from Photoshop doing the same thing is a bit clearer. It looks like the image drawn to the view is of the same quality that's saved to file, and so both are below Photoshop's quality for the same image resized to 50%, just as this one is. Is it even possible to get higher-quality resized images than this?

Aside from that, how can I modify the existing code to be able to control the quality of the image saved to file? Can I change the compression and pixel density? I'd appreciate any help with either modifying my code or pointing me toward good examples or tutorials (preferably the latter). Thanks so much!

- (void)drawRect:(NSRect)rect {

// Getting source image
NSImage *image = [[NSImage alloc] initWithContentsOfFile: @"/Users/TheUser/Desktop/4.jpg"];

// Setting NSRect, which is how resizing is done in this example. Is there a better way?
NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);

// Used as a sort of offscreen image or palette to draw onto; in the future I will use it to group several images into one.
NSImage *secondaryImage = [[NSImage alloc] initWithSize: halfSizeRect.size];
[secondaryImage lockFocus];

[[NSGraphicsContext currentContext] setImageInterpolation: NSImageInterpolationHigh];

[image drawInRect: halfSizeRect fromRect: NSZeroRect operation: NSCompositeSourceOver fraction: 1.0];

[secondaryImage unlockFocus];
[secondaryImage drawInRect: halfSizeRect fromRect: NSZeroRect operation: NSCompositeSourceOver fraction: 1.0];

// Trying to add image quality options; does this usage even affect the final image?
NSBitmapImageRep *bip = nil;
bip = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL pixelsWide: secondaryImage.size.width pixelsHigh: secondaryImage.size.height bitsPerSample:8 samplesPerPixel:4 hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace bytesPerRow:0 bitsPerPixel:0];

[secondaryImage addRepresentation: bip];

// Four lines below are from aforementioned "ImageReducer.m"
NSSize size = [secondaryImage size];
[secondaryImage lockFocus];
NSBitmapImageRep *bitmapImageRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, size.width, size.height)];
[secondaryImage unlockFocus];

NSDictionary *prop = [NSDictionary dictionaryWithObject: [NSNumber numberWithFloat: 1.0] forKey: NSImageCompressionFactor];
NSData *outputData = [bitmapImageRep representationUsingType:NSJPEGFileType properties: prop];
[outputData writeToFile:@"/Users/TheUser/Desktop/4_halfsize.jpg" atomically:NO];

// release from memory
[image release];    
[secondaryImage release];
[bitmapImageRep release];
[bip release];
}

Comments (2)

乖乖哒 2024-12-17 15:16:53


I'm not sure why you are round tripping to and from the screen. That could affect the result, and it's not needed.

You can accomplish all this using CGImage and CGBitmapContext, using the resultant image to draw to the screen if needed. I've used those APIs and had good results (but I do not know how they compare to your current approach).

Another note: Render at a higher quality for the intermediate, then resize and reduce to 8bpc for the version you write. This will not make a significant difference now, but it will (in most cases) once you introduce filtering.
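A minimal sketch of the CGBitmapContext approach this answer describes, assuming CoreGraphics/ImageIO on macOS; the function name and the fixed 50% scale are illustrative, not part of the original post:

```objc
#import <Foundation/Foundation.h>
#import <ApplicationServices/ApplicationServices.h>

// Downscale an image entirely offscreen with CGBitmapContext --
// no view, no lockFocus, no round trip through the screen.
CGImageRef CreateHalfSizeImage(CFURLRef sourceURL) {
    CGImageSourceRef source = CGImageSourceCreateWithURL(sourceURL, NULL);
    CGImageRef fullImage = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    size_t width  = CGImageGetWidth(fullImage) / 2;   // pixel dimensions,
    size_t height = CGImageGetHeight(fullImage) / 2;  // not points

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), fullImage);

    CGImageRef scaled = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(fullImage);
    CFRelease(source);
    return scaled; // caller is responsible for CGImageRelease
}
```

The resulting CGImage can be wrapped in an NSBitmapImageRep (via initWithCGImage:) for encoding to disk, or drawn to the screen if needed.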

南…巷孤猫 2024-12-17 15:16:53


Finally, one of those "Aha!" moments! I tried using the same code on a high-quality .tif file, and the resultant image was 8 times smaller (in dimensions), rather than the 50% I'd told it to do. When I tried displaying it without any rescaling of the image, it wound up still 4 times smaller than the original, when it should have displayed at the same height and width. I found out the way I was taking the NSSize from the imported image was wrong. Previously, it read:

NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);

Where it should be:

NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData: [image TIFFRepresentation]];
NSRect halfSizeRect = NSMakeRect(0, 0, [imageRep pixelsWide]/2, [imageRep pixelsHigh]/2);

Apparently it has something to do with DPI and that jazz, so I needed to get the correct size from the BitmapImageRep rather than from image.size. With this change, I was able to save at a quality nearly indistinguishable from Photoshop.
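Putting this corrected size calculation together with the compression question from the original post, a sketch of the full pipeline might look like the following (the paths are from the question; the 0.9 quality factor is an illustrative choice, with NSImageCompressionFactor for JPEG ranging from 0.0 to 1.0):

```objc
NSImage *image = [[NSImage alloc] initWithContentsOfFile:@"/Users/TheUser/Desktop/4.jpg"];

// Use pixel dimensions from a bitmap rep, not image.size (points),
// so DPI metadata can't shrink the output.
NSBitmapImageRep *srcRep = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
NSRect halfSizeRect = NSMakeRect(0, 0, [srcRep pixelsWide] / 2.0, [srcRep pixelsHigh] / 2.0);

NSImage *secondaryImage = [[NSImage alloc] initWithSize:halfSizeRect.size];
[secondaryImage lockFocus];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[image drawInRect:halfSizeRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];

// Capture the rep while focus is still locked.
NSBitmapImageRep *outRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:halfSizeRect];
[secondaryImage unlockFocus];

// A factor below 1.0 trades file size for quality.
NSDictionary *props = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9]
                                                  forKey:NSImageCompressionFactor];
NSData *jpegData = [outRep representationUsingType:NSJPEGFileType properties:props];
[jpegData writeToFile:@"/Users/TheUser/Desktop/4_halfsize.jpg" atomically:YES];

[image release];
[secondaryImage release];
[outRep release];
```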
