Core Image, iOS 5, and speed: why is Core Image so slow?

Posted 2024-12-20 11:39:13


I'm finally sitting down to convert some of our graphics libraries to use Core Image to render images (for things like blending two images). I've got it working, but it's slow (iOS 5.0.1, iPhone 4S). I thought the promise of Core Image was hardware acceleration. Here is what I'm doing:

    CIImage *inputBackgroundImage = [[CIImage alloc] initWithCGImage:backgroundImageRef];
    CIImage *inputImage = [[CIImage alloc] initWithCGImage:inputImageRef];

    CIFilter *multiply = [CIFilter filterWithName:@"CIMultiplyBlendMode"];
    [multiply setDefaults];

    [multiply setValue:inputBackgroundImage forKey:@"inputBackgroundImage"];
    [multiply setValue:inputImage forKey:@"inputImage"];

    CIImage *result = multiply.outputImage;
    CIContext *ctx = [CIContext contextWithOptions:nil];

    CGImageRef resultRef = [ctx createCGImage:result fromRect:CGRectMake(0, 0, imgFrame.size.width, imgFrame.size.height)];
    UIImage *resultImage = [UIImage imageWithCGImage:resultRef];
    CGImageRelease(resultRef);

It works: I get an image with the inputImage blended into the backgroundImage, but it actually takes longer than if I had just done it with Core Graphics and a CGContext. Is there a way to check whether this work is being done on the GPU, and is there a reason it wouldn't be?
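One way to investigate (a sketch, not from the original post): on iOS 5, a `CIContext` created with `contextWithOptions:nil` should use the GPU by default, but you can make the renderer explicit in both directions and time them against each other. `kCIContextUseSoftwareRenderer` forces the CPU path, and `contextWithEAGLContext:` (available since iOS 5.0) creates an explicitly GPU-backed context. If both timings come out similar, most of the cost is likely the CPU-GPU transfer rather than the filtering itself.

    #import <CoreImage/CoreImage.h>
    #import <OpenGLES/EAGL.h>

    // Force the software (CPU) renderer, for comparison.
    NSDictionary *swOpts = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                       forKey:kCIContextUseSoftwareRenderer];
    CIContext *cpuCtx = [CIContext contextWithOptions:swOpts];

    // Explicitly GPU-backed context.
    EAGLContext *eagl = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    CIContext *gpuCtx = [CIContext contextWithEAGLContext:eagl];

Render the same `CIImage` through each context with `createCGImage:fromRect:` and compare wall-clock times; the difference (or lack of one) tells you where the time is going.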

Thanks,


1 Answer

私藏温柔 2024-12-27 11:39:13


Core Image is the best fit for things you need to re-render repeatedly, or things that require chaining multiple filters.

It's currently slower because the images have to be copied from CPU memory to GPU memory (there is no OpenGL unified-memory support on iOS).

This means that since you are only applying a single effect, it takes more time because of the double copy: one upload to the GPU, then one readback.
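The amortization point above can be sketched as follows (the filter names are real Core Image built-ins, but the chain itself is a hypothetical illustration): apply several filters to the same `CIImage` and render once through a single, reused `CIContext`, so the upload and readback happen once for the whole chain rather than once per effect.

    // Sketch: one upload, several filters, one readback.
    // Reuse one context across renders; creating a CIContext is expensive.
    static CIContext *sharedCtx = nil;
    if (sharedCtx == nil) {
        sharedCtx = [CIContext contextWithOptions:nil];
    }

    // Filter 1: the multiply blend from the question.
    CIFilter *multiply = [CIFilter filterWithName:@"CIMultiplyBlendMode"];
    [multiply setValue:[CIImage imageWithCGImage:backgroundImageRef]
                forKey:@"inputBackgroundImage"];
    [multiply setValue:[CIImage imageWithCGImage:inputImageRef]
                forKey:@"inputImage"];

    // Filter 2: chained on the GPU with no intermediate readback.
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:multiply.outputImage forKey:@"inputImage"];
    [sepia setValue:[NSNumber numberWithFloat:0.8f] forKey:@"inputIntensity"];

    // Single render: the whole chain executes before pixels return to the CPU.
    CIImage *out = sepia.outputImage;
    CGImageRef resultRef = [sharedCtx createCGImage:out fromRect:out.extent];

With only one effect, this structure buys you nothing over Core Graphics; the more filters you add to the chain, the more the fixed transfer cost is amortized.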
