Rendering video from GStreamer in VR (Oculus Quest 2)
I'm working on a robot that is controlled via a VR headset and sends a real-time video feed back to the headset.
I've chosen to go the native way on Android and now have everything I need to receive the video stream and encode it (using GStreamer) and also to send the control data to the robot via UDP.
The last thing to do (and the one I struggle with most, as I have no prior experience with computer graphics) is to draw the image (the decoded camera feed) to the screen. Over the last few days I've been reading about how Vulkan and OpenGL work, and I've also gone through the examples provided in the Oculus Mobile SDK (mainly VRCubeWorld_SurfaceView), but that's way too complex for what I need. I tried to simplify it so I could just draw two images, but then I thought:
Do I even need any of that? This question might sound stupid, but I really don't have any prior experience doing this.
I mean, the example uses OpenGL to basically compute all the layers of the 3D scene, apply colors, and then fuse them together into a final frame that is passed to the VrApi via the function:
vrapi_SubmitFrame2(appState.Ovr, &frameDesc);
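For context, the per-frame submission in the sample boils down to something like the following (heavily simplified from VRCubeWorld_SurfaceView and written from memory, so the details may be slightly off). The scene is first rendered into the eye swapchain textures with OpenGL ES, and only then handed to the compositor:

#include "VrApi.h"
#include "VrApi_Helpers.h"

/* Simplified shape of the sample's frame submission: the eye images have
 * already been rendered into colorSwapChain with OpenGL ES at this point. */
static void SubmitEyeBuffers(ovrMobile *ovr,
                             const ovrTracking2 *tracking,
                             ovrTextureSwapChain *colorSwapChain[2],
                             int swapChainIndex,
                             long long frameIndex,
                             double displayTime)
{
    ovrLayerProjection2 layer = vrapi_DefaultLayerProjection2();
    layer.HeadPose = tracking->HeadPose;
    for (int eye = 0; eye < VRAPI_FRAME_LAYER_EYE_MAX; eye++) {
        layer.Textures[eye].ColorSwapChain = colorSwapChain[eye];
        layer.Textures[eye].SwapChainIndex = swapChainIndex;
        layer.Textures[eye].TexCoordsFromTanAngles =
            ovrMatrix4f_TanAngleMatrixFromProjection(&tracking->Eye[eye].ProjectionMatrix);
    }

    const ovrLayerHeader2 *layers[] = { &layer.Header };

    ovrSubmitFrameDescription2 frameDesc = { 0 };
    frameDesc.SwapInterval = 1;
    frameDesc.FrameIndex = frameIndex;
    frameDesc.DisplayTime = displayTime;
    frameDesc.LayerCount = 1;
    frameDesc.Layers = layers;

    vrapi_SubmitFrame2(ovr, &frameDesc);
}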
Can I just take those images and somehow force them into the frameDesc structure to skip the whole OpenGL pipeline? If so, can anyone knowledgeable enough point me to a working solution?
I don't need any kind of panning over the images, just to render them. Later I'll be using head sensor data, but it won't actually do anything with the "scene".
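To make the question more concrete, this is roughly what I'm hoping is possible. I pieced the function names together from the VrApi headers (vrapi_CreateAndroidSurfaceSwapChain, vrapi_GetTextureSwapChainAndroidSurface, the cylinder layer helpers), so please treat it as a sketch of what I'm asking about, not as something I know to be correct:

#include <jni.h>
#include "VrApi.h"
#include "VrApi_Helpers.h"

/* Idea: let the video decoder write into an Android Surface that backs a
 * VrApi swapchain, then only reference that swapchain from a compositor
 * layer in frameDesc, so I never touch OpenGL myself. Unverified. */
static ovrTextureSwapChain *videoSwapChain = NULL;

static jobject CreateVideoSurface(int width, int height)
{
    videoSwapChain = vrapi_CreateAndroidSurfaceSwapChain(width, height);
    /* Java android.view.Surface to hand to GStreamer / MediaCodec via JNI. */
    return vrapi_GetTextureSwapChainAndroidSurface(videoSwapChain);
}

static void SubmitVideoFrame(ovrMobile *ovr, const ovrTracking2 *tracking,
                             long long frameIndex, double displayTime)
{
    /* A cylinder layer seems to be the usual way to show a video panel
     * without drawing any geometry yourself; the transform/texture-rect
     * setup that places the panel in front of the viewer is exactly the
     * part I don't know how to do, so it is omitted here. */
    ovrLayerCylinder2 layer = vrapi_DefaultLayerCylinder2();
    layer.HeadPose = tracking->HeadPose;
    for (int eye = 0; eye < VRAPI_FRAME_LAYER_EYE_MAX; eye++) {
        layer.Textures[eye].ColorSwapChain = videoSwapChain;
        layer.Textures[eye].SwapChainIndex = 0;
    }

    const ovrLayerHeader2 *layers[] = { &layer.Header };

    ovrSubmitFrameDescription2 frameDesc = { 0 };
    frameDesc.SwapInterval = 1;
    frameDesc.FrameIndex = frameIndex;
    frameDesc.DisplayTime = displayTime;
    frameDesc.LayerCount = 1;
    frameDesc.Layers = layers;

    vrapi_SubmitFrame2(ovr, &frameDesc);
}

If something along these lines is viable (or if there's a better-suited layer type), that's essentially what I'm asking for.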