Can't post-process images from the iPhone camera?

Posted on 2024-08-05 17:53:41

On an iPhone 3GS, the image captured by the camera has 2048x1536 pixels. If my math is correct, opening this image on a CGLayer will consume 12.5 MB.
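
(For reference, assuming the layer is backed by a standard 4-bytes-per-pixel RGBA bitmap: 2048 × 1536 × 4 = 12,582,912 bytes, i.e. roughly 12 MB.)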

SpringBoard will terminate any application whose memory use goes beyond 12 MB (at least, that is what happens to me).

Manipulating this image with a function like CGContextDrawLayer will consume at least another 12 MB.

This is 24 MB.

How can one manipulate such images on iPhone without having the program terminated?

Is there any way to reduce the footprint of the image taken by the camera without reducing its dimensions?

Any clues? Thanks.

Comments (2)

尘曦 2024-08-12 17:53:41

You should consider using NSInputStream in order to process your image in chunks of whatever size makes sense. For example, you might read 1 MB of data, process it, write results out to an NSOutputStream, and then repeat 11 more times until EOF.

More likely than not, your image processing algorithm will determine the optimal chunk size.
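
A minimal sketch of that read/process/write loop, assuming the raw image data is already on disk; the file paths, the processChunk step, and the helper name streamImage are placeholders, not something taken from the answer:

    #import <Foundation/Foundation.h>
    #include <stdlib.h>

    // Hypothetical per-chunk processing step -- whatever the algorithm actually does.
    static void processChunk(uint8_t *bytes, NSInteger length) {
        // ... operate on 'length' bytes in place ...
    }

    // Stream the image file through a fixed-size buffer instead of loading it whole.
    static void streamImage(NSString *inPath, NSString *outPath) {
        NSInputStream *input   = [NSInputStream inputStreamWithFileAtPath:inPath];
        NSOutputStream *output = [NSOutputStream outputStreamToFileAtPath:outPath append:NO];
        [input open];
        [output open];

        NSUInteger chunkSize = 1024 * 1024;   // 1 MB per pass, as suggested above
        uint8_t *buffer = malloc(chunkSize);
        NSInteger bytesRead;
        while ((bytesRead = [input read:buffer maxLength:chunkSize]) > 0) {
            processChunk(buffer, bytesRead);
            [output write:buffer maxLength:(NSUInteger)bytesRead];
        }

        free(buffer);
        [input close];
        [output close];
    }

Only one chunk-sized buffer is live at any time, which is what keeps the working set far below the 12 MB image size.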

仅一夜美梦 2024-08-12 17:53:41

Your screen only has 320 x 480 pixels, so putting anything more on the layer seems to be a waste of memory.

So you might as well translate the origin of the original image and scale it from 2048 x 1536 pixels down to 320 x 480 pixels before putting it onto a layer.
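
A minimal sketch of that pre-scaling step, assuming the capture arrives as a UIImage; the 320 x 480 target is taken from the answer, while the helper name scaledForScreen and the use of UIGraphicsBeginImageContext are assumptions:

    #import <UIKit/UIKit.h>

    // Render the full-resolution capture into a screen-sized offscreen bitmap,
    // so only the small image has to stay in memory afterwards.
    static UIImage *scaledForScreen(UIImage *fullImage) {
        CGSize target = CGSizeMake(320.0f, 480.0f);
        UIGraphicsBeginImageContext(target);
        [fullImage drawInRect:CGRectMake(0.0f, 0.0f, target.width, target.height)];
        UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return scaled;   // 320 * 480 * 4 bytes is roughly 600 KB instead of ~12 MB
    }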

If you use a UIScrollView to present the layer, for example, you would write code so that pinching and stretching would calculate a new 320 x 480 pixel representation based on the current zoom level, determined from the frame and bounds of the view. In your code, tapping-and-dragging would translate the origin and recalculate the missing bits.
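
As a sketch of what that recalculation might look like, something along these lines could redraw just the visible portion of the full-size image into a fresh 320 x 480 bitmap whenever the zoom or origin changes. Here visibleRect (the visible area in full-image coordinates, derived from the scroll view's zoomScale and contentOffset) and the helper name renderVisibleRegion are assumptions, and drawing still decodes the full image transiently:

    #import <UIKit/UIKit.h>

    // Re-render only the region of the full image that is currently on screen.
    // 'visibleRect' is that region expressed in full-image pixel coordinates.
    static UIImage *renderVisibleRegion(UIImage *fullImage, CGRect visibleRect) {
        CGSize screen = CGSizeMake(320.0f, 480.0f);
        UIGraphicsBeginImageContext(screen);
        CGContextRef ctx = UIGraphicsGetCurrentContext();

        // Map the visible region of the full image onto the 320 x 480 context.
        CGContextScaleCTM(ctx, screen.width / visibleRect.size.width,
                               screen.height / visibleRect.size.height);
        CGContextTranslateCTM(ctx, -visibleRect.origin.x, -visibleRect.origin.y);

        [fullImage drawInRect:CGRectMake(0.0f, 0.0f,
                                         fullImage.size.width, fullImage.size.height)];

        UIImage *visible = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return visible;
    }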

You can see this effect with Safari, when you zoom into the document. It goes from blurry to sharp as the new view is rendered. Likewise, as you drag the view, its newly missing parts are calculated and added to the view.

Regardless of the touch event, you would probably only want to put a 320 x 480 pixel representation on the layer.
