Camera digital zoom in iOS 4.0 and later



How can I implement a digital zoom slider for the camera?
I am using the following APIs: AVCaptureVideoPreviewLayer, AVCaptureSession, AVCaptureVideoDataOutput, and AVCaptureDeviceInput.

I would like to have the same slider that is available in the iPhone 4 camera app.

Thanks in advance for any tips and examples!
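
For context, here is a rough sketch of the pipeline the question describes (session, device input, video data output, preview layer), written with current Swift/AVFoundation naming rather than the iOS 4-era Objective-C API; the class and method names are illustrative, not from the original post.

    import AVFoundation
    import UIKit

    // Illustrative wiring of the four APIs mentioned in the question.
    final class CameraController: NSObject {
        let session = AVCaptureSession()
        let videoOutput = AVCaptureVideoDataOutput()
        private(set) var previewLayer: AVCaptureVideoPreviewLayer?

        func configure(previewIn containerView: UIView) throws {
            // Input: the default camera wrapped in a device input.
            guard let camera = AVCaptureDevice.default(for: .video) else { return }
            let input = try AVCaptureDeviceInput(device: camera)
            if session.canAddInput(input) { session.addInput(input) }

            // Output: raw frames, in case you want to process them yourself.
            if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }

            // Preview: a CALayer that shows the live feed inside the container view.
            let layer = AVCaptureVideoPreviewLayer(session: session)
            layer.frame = containerView.bounds
            layer.videoGravity = .resizeAspectFill
            containerView.layer.addSublayer(layer)
            previewLayer = layer

            session.startRunning()
        }
    }

The zoom slider itself is what the answer below discusses; one possible way to drive it is sketched there.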


Comments (1)

美人如玉 2024-10-12 07:02:28


I'm a newbie, and I have tried doing a zoom with the AVFoundation framework only, using an AVCaptureVideoPreviewLayer, and I can't make it work either. I think it's because that layer has its own AVCaptureSession which controls its own output, and even though I added it as a sublayer of a UIScrollView, it still runs on its own and the scroll view can't affect the preview layer.
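
One way around this, sketched below under the assumption that a preview layer is already attached to a running AVCaptureSession (for example, as wired up after the question above), is to drive a Core Animation scale transform on the preview layer from a UISlider instead of going through a UIScrollView; the class and property names are illustrative.

    import UIKit
    import AVFoundation

    // Illustrative slider-driven digital zoom: scaling the preview layer magnifies
    // (and crops) the live feed, with no changes to the capture session itself.
    final class ZoomSliderView: UIView {
        var previewLayer: AVCaptureVideoPreviewLayer?   // set by whoever owns the session
        let slider = UISlider()

        override init(frame: CGRect) {
            super.init(frame: frame)
            slider.minimumValue = 1.0      // 1x = no zoom
            slider.maximumValue = 4.0      // 4x digital zoom
            slider.value = 1.0
            slider.addTarget(self, action: #selector(zoomChanged(_:)), for: .valueChanged)
            addSubview(slider)
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

        override func layoutSubviews() {
            super.layoutSubviews()
            slider.frame = bounds.insetBy(dx: 20, dy: 0)
        }

        @objc private func zoomChanged(_ sender: UISlider) {
            let scale = CGFloat(sender.value)
            CATransaction.begin()
            CATransaction.setDisableActions(true)   // no implicit animation while dragging
            previewLayer?.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
            CATransaction.end()
        }
    }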

From WWDC session 419, "Capture from camera using AVFoundation in iOS 5", Brad Ford said: "AVCaptureVideoPreviewLayer does NOT inherit from AVCaptureOutput (like AVCaptureVideoDataOutput does). It inherits from CALayer, so it can be inserted into a Core Animation tree (like other layers). In AVFoundation, the AVCaptureSession owns its outputs, but does NOT own its layers. The layers own the session. So if you want to insert a layer into a view hierarchy, you attach a session to it and forget about it. Then when the layer tree disposes of itself, it will clean up the session as well."

I have seen Brad Larson use a combination of OpenGL ES and the AVFoundation framework at
http://www.sunsetlakesoftware.com/2010/10/22/gpu-accelerated-video-processing-mac-and-ios
together with an AVCaptureVideoPreviewLayer, where he can adjust the raw data from the camera, so I assume that's the place to start. Check out his ColorTrackingCamera app. It uses shaders, which you (and I) don't need for zooming, but I think a similar mechanism can be used to zoom.

Oh, I forgot to mention that Brad Larson does NOT attach the AVCaptureInput to the AVCaptureSession. I can see that he is also using the main thread for his queue instead of creating his own queue on another thread. His OpenGL ES method drawFrame is also how he renders the image; the capture session itself is not doing that. So if you understand more, or my assumptions are wrong, please let me know too.
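
To make the queue point concrete, here is a small, hypothetical sketch of attaching a sample-buffer delegate to an AVCaptureVideoDataOutput on a dedicated serial queue; passing DispatchQueue.main instead would mirror the main-thread approach described above. The function and label names are illustrative.

    import AVFoundation

    // Illustrative frame-delivery wiring: frames arrive on the chosen queue,
    // not necessarily on the main thread.
    func wireFrameDelivery(session: AVCaptureSession,
                           delegate: AVCaptureVideoDataOutputSampleBufferDelegate) {
        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true   // drop frames rather than let them pile up

        // A dedicated serial queue keeps per-frame work off the main thread.
        let frameQueue = DispatchQueue(label: "camera.frame.processing")
        output.setSampleBufferDelegate(delegate, queue: frameQueue)

        if session.canAddOutput(output) {
            session.addOutput(output)
        }
    }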

Hope this helps, but since I am new to all of this and to OpenGL ES, I am assuming that library can be used to zoom if we can capture each frame and turn it into a UIImage with a different resolution and/or frame size.
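
As a rough illustration of that frame-by-frame idea (and only that; this is not taken from ColorTrackingCamera), the sketch below uses Core Image to center-crop each sample buffer by a zoom factor and convert it into a UIImage; the class name and zoomFactor property are hypothetical, and zoomFactor would be driven by the slider.

    import AVFoundation
    import CoreImage
    import UIKit

    // Illustrative per-frame digital zoom: center-crop each frame and render it
    // as a UIImage. This delegate would be attached as in the wiring sketch above.
    final class FrameZoomer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        var zoomFactor: CGFloat = 2.0                 // 1.0 = no zoom
        private let ciContext = CIContext()

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

            // Center-crop the frame by the zoom factor: this is the digital zoom.
            let full = CIImage(cvPixelBuffer: pixelBuffer)
            let extent = full.extent
            let cropSize = CGSize(width: extent.width / zoomFactor,
                                  height: extent.height / zoomFactor)
            let cropRect = CGRect(x: extent.midX - cropSize.width / 2,
                                  y: extent.midY - cropSize.height / 2,
                                  width: cropSize.width,
                                  height: cropSize.height)
            let cropped = full.cropped(to: cropRect)

            // Render the cropped region into a UIImage (display it, save it, etc.).
            guard let cgImage = ciContext.createCGImage(cropped, from: cropRect) else { return }
            let zoomedFrame = UIImage(cgImage: cgImage)
            _ = zoomedFrame   // hand this off to a UIImageView, movie writer, etc.
        }
    }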

Jeff W.
