How can I track motion using the iPhone's camera?

Asked 2024-09-27 19:09:49

I saw that someone has made an app that tracks your feet using the camera, so that you can kick a virtual football on your iPhone screen.

How could you do something like this? Does anyone know of any code examples or other information about using the iPhone camera for detecting objects and tracking them?


久夏青 · answered 2024-10-04 19:09:49

I just gave a talk at SecondConf where I demonstrated the use of the iPhone's camera to track a colored object using OpenGL ES 2.0 shaders. The post accompanying that talk, including my slides and the sample code for all of the demos, can be found here.

The sample application I wrote, whose code can be downloaded from here, is based on an example produced by Apple for demonstrating Core Image at WWDC 2007. That example is described in Chapter 27 of the GPU Gems 3 book.

The basic idea is that you can use custom GLSL shaders to process images from the iPhone camera in realtime, determining which pixels match a target color within a given threshold. Those pixels then have their normalized X,Y coordinates embedded in their red and green color components, while all other pixels are marked as black. The color of the whole frame is then averaged to obtain the centroid of the colored object, which you can track as it moves across the view of the camera.
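To make the pipeline concrete, here is a minimal CPU sketch of the same idea in Python rather than GLSL. It reproduces each stage described above: threshold every pixel against a target color, encode the normalized (x, y) coordinates of matching pixels into their red and green channels (with a match flag in blue), and average over the matches to recover the centroid. The function name and frame layout are illustrative, not part of the original sample code.

```python
def track_colored_object(frame, target, threshold):
    """frame: 2D list of (r, g, b) float tuples in [0, 1].
    Returns the (cx, cy) centroid of matching pixels, normalized to [0, 1],
    or None if no pixel matched the target color."""
    height, width = len(frame), len(frame[0])
    processed = []
    for y, row in enumerate(frame):
        out_row = []
        for x, (r, g, b) in enumerate(row):
            # Per-pixel color test, as the fragment shader would perform it.
            if (abs(r - target[0]) < threshold and
                    abs(g - target[1]) < threshold and
                    abs(b - target[2]) < threshold):
                # Matching pixel: embed normalized X, Y in the R and G channels.
                out_row.append((x / (width - 1), y / (height - 1), 1.0))
            else:
                # All other pixels are marked black.
                out_row.append((0.0, 0.0, 0.0))
        processed.append(out_row)

    # Averaging stage: summing the embedded coordinates and dividing by the
    # number of matches (carried in the blue channel) yields the centroid.
    total_x = sum(p[0] for row in processed for p in row)
    total_y = sum(p[1] for row in processed for p in row)
    count = sum(p[2] for row in processed for p in row)
    if count == 0:
        return None
    return (total_x / count, total_y / count)
```

On the GPU the averaging step is done by repeatedly downsampling the processed frame rather than looping over pixels, but the arithmetic is the same.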

While this doesn't address the case of tracking a more complex object like a foot, it should be possible to write shaders along these lines that pick out such a moving object.

As an update to the above, in the two years since I wrote this I've developed an open source framework that encapsulates OpenGL ES 2.0 shader processing of images and video. One of the recent additions to it is a GPUImageMotionDetector class that processes a scene and detects any kind of motion within it. It will give you back the centroid and intensity of the overall motion it detects as part of a simple callback block. Using this framework should be a lot easier than rolling your own solution.
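GPUImageMotionDetector itself is an Objective-C class that runs its processing on the GPU; the sketch below only reproduces the underlying idea on the CPU in Python: diff each frame against the previous one, treat pixels whose intensity change exceeds a threshold as moving, and report the motion's centroid and overall intensity through a callback, mirroring the callback-block style described above. The function name and parameters here are illustrative assumptions, not the framework's API.

```python
def detect_motion(prev_frame, frame, threshold, callback):
    """Frames are 2D lists of grayscale floats in [0, 1]. Invokes
    callback(centroid, intensity) when any motion is detected."""
    height, width = len(frame), len(frame[0])
    sum_x = sum_y = 0.0
    moving = 0
    for y in range(height):
        for x in range(width):
            # A pixel counts as "moving" if it changed more than the threshold.
            if abs(frame[y][x] - prev_frame[y][x]) > threshold:
                sum_x += x / (width - 1)
                sum_y += y / (height - 1)
                moving += 1
    if moving:
        centroid = (sum_x / moving, sum_y / moving)
        # Intensity: the fraction of the frame's pixels that moved.
        intensity = moving / (width * height)
        callback(centroid, intensity)
```

In practice you would feed this consecutive frames from the camera and low-pass filter the previous frame to reduce noise, which is roughly what the GPU implementation does with shaders.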
