Maximum image size in CIFilter / CIKernel?
Does anyone know what the limitations are on image size with custom CIFilters? I've created a filter that performs as expected when images are up to 2 megapixels, but it produces very strange results when the images are larger. I've tested this both in my Cocoa app and in Quartz Composer. The filter I've developed is a geometry-type distortion filter that (I think) requires an ROI and a DOD spanning the entire input image. I created this filter for remapping panoramic images, so I'd like it to work on very large (50-100 megapixel) images.
As a simple test, consider the following CIFilter (it can be used in Quartz Composer), which simply translates the image so that its lower-left corner lands at the center (I know this could be done with an affine transform, but I need to perform such an operation in a more complex filter). This filter works as expected when the image is 2000x1000 but produces odd results when the input image is 4000x2000 pixels: either the translation does not move the corner exactly to the center, or the image output disappears entirely. I've noticed other odd problems with more complicated filters on large images, but I think this simple filter illustrates my issue and can be replicated in Quartz Composer.
kernel vec4 equidistantProjection(sampler src, __color color)
{
    vec2 coordinate = samplerCoord(src);
    vec2 result;
    vec4 outputImage;

    result.x = (coordinate.x - samplerSize(src).x / 2.0);
    result.y = (coordinate.y - samplerSize(src).y / 2.0);

    outputImage = unpremultiply(sample(src, result));
    return premultiply(outputImage);
}
The same odd behavior appears when using the working coordinates instead of the sampler coordinates, but in this case the error occurs for images of size 2000x1000, while images of size 1000x500 work fine:
kernel vec4 equidistantProjection(sampler src, __color color, vec2 destinationDimensions)
{
    vec2 coordinate = destCoord();
    vec2 result;
    vec4 outputImage;

    result.x = (coordinate.x - destinationDimensions.x / 2.0);
    result.y = (coordinate.y - destinationDimensions.y / 2.0);

    // destCoord() is in working-space coordinates, so convert to the
    // sampler's coordinate space before sampling.
    outputImage = unpremultiply(sample(src, samplerTransform(src, result)));
    return premultiply(outputImage);
}
For reference, I have added the following to the Objective-C portion of my filter's - (CIImage *)outputImage method to set the DOD to be the entire input image.
- (CIImage *)outputImage
{
    CISampler *src = [CISampler samplerWithImage: inputImage];
    NSArray *outputExtent = [NSArray arrayWithObjects:
        [NSNumber numberWithInt: 0],
        [NSNumber numberWithInt: 0],
        [NSNumber numberWithFloat: [inputImage extent].size.width],
        [NSNumber numberWithFloat: [inputImage extent].size.height], nil];

    return [self apply: filterKernel, src, inputColor, zoom, viewBounds, inputOrigin,
        kCIApplyOptionDefinition, [src definition], kCIApplyOptionExtent, outputExtent, nil];
}
Additionally, I added the following method to set the ROI, which I register in my - (id)init method with: [filterKernel setROISelector:@selector(regionOf:destRect:userInfo:)];
- (CGRect)regionOf:(int)samplerIndex destRect:(CGRect)r userInfo:(id)obj
{
    return r;
}
Any help or advice on this issue would be greatly appreciated. I'm sure that CIFilters can work with larger images, as I've used CIBumpDistortion with images greater than 50 megapixels, so I must be doing something wrong. Any ideas?
Working with Core Image, I discovered that it cuts big images into parts. For example, in your case a 4k x 2k image can be split into four 2k x 1k tiles and rendered separately. Unfortunately, this optimization trick affects samplerCoord, and some coordinate-dependent filters work incorrectly on big images.
My solution was to use destCoord instead of samplerCoord. Of course, you should keep in mind that an image can be rendered at a non-zero origin, and destCoord reflects that. I wrote my own filter, so I was able to pass the whole extent as a vec4 parameter.
Example: try generating an image with a CIFilter, something like this:
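(The original snippet is missing here; a minimal sketch of such a kernel, assuming it builds a gradient from samplerCoord normalized by samplerSize, could be:)

kernel vec4 gradient(sampler src)
{
    // Intended result: black at the lower-left corner, white at the
    // upper-right. Under tiled rendering, samplerCoord()/samplerSize()
    // are relative to each tile, so every tile renders its own gradient.
    vec2 p = samplerCoord(src) / samplerSize(src);
    float v = (p.x + p.y) / 2.0;
    return vec4(v, v, v, 1.0);
}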
This output should give us black at (0,0) and white at (1,1), right? However, for big images you'll see several separate quads instead of a single gradient. This happens because the Core Image engine optimizes rendering by tiling; I haven't found a way to bypass it, but you can rewrite the kernel this way:
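(Again a sketch, not the answer's original code, assuming rect is packed as (origin.x, origin.y, width, height):)

kernel vec4 gradient(sampler src, vec4 rect)
{
    // destCoord() is a working-space (global) position, so it stays
    // consistent across tiles; shift by the extent's origin and
    // normalize by its size to recover a single 0-1 gradient.
    vec2 p = (destCoord() - rect.xy) / rect.zw;
    float v = (p.x + p.y) / 2.0;
    return vec4(v, v, v, 1.0);
}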
Here rect is the real extent of the sampler, which you must pass in yourself. I used [inputImage extent] for this purpose, but it depends on the filter and could be something else in your case.
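(As an illustration only, not from the original answer: the extent could be packed into a CIVector in the Objective-C outputImage method and passed as the kernel's vec4 rect argument:)

CIVector *rect = [CIVector vectorWithX: [inputImage extent].origin.x
                                     Y: [inputImage extent].origin.y
                                     Z: [inputImage extent].size.width
                                     W: [inputImage extent].size.height];
// Pass rect in the apply: call alongside the sampler.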
Hope this explanation makes it clear. By the way, it looks like the system kernels work just fine even with big images, so you only need to worry about this trick in your custom kernels.