Better AR on Android
I am trying to create a small Android app with reasonably simple AR functionality - load a few known markers and render known 2D/3D objects on top of the video stream when those are detected. I would appreciate any pointers to a library for doing this, or at least a decent example of doing it right.
Here are some leads I have looked into:
AndAR - https://code.google.com/p/andar/ - This starts out great: the AndAR app works well enough to render a cube on a single pattern over a real-time video stream, but the project looks effectively abandoned, and extending it would mean going heavily into OpenGL land - not impossible, but very undesirable. The follow-up AndAR Model Viewer project, which supposedly lets you load custom .obj files, doesn't seem to recognize the marker at all. Once again, this looks very much like abandonware, and it could have been so much more.
Processing - The previously mentioned NyARToolkit works great with Processing on a PC - see the example usage, which handles the 'here's a pattern, here's an object, just render it there' functionality perfectly - but it all breaks down on Android: GStreamer for Android is at a very early, hacky stage, and video functionality in general seems to be a rather low priority for the Android Processing project. Right now, import processing.video.*; just fails.
Layar, Wikitude, etc. all seem to focus more on interactivity, location and whatnot, which I absolutely don't need, and somehow miss this basic usage.
Where am I going wrong? I would be happy to code some parts of the video capture/detection/rendering myself - I don't need a drag-and-drop library - but the sample code from AndAR just fills me with dread.
1 Answer
I suggest taking a look at the Vuforia SDK (formerly QCAR) by Qualcomm, plus jPCT-AE as the 3D engine. The two work very well together, and no pure OpenGL is needed. However, you will need some C/C++ knowledge, since Vuforia relies on the NDK to some degree.
It basically boils down to getting the marker pose from Vuforia via a simple JNI function (the SDK contains fully functional, extensive sample code) and using it to place the 3D objects with jPCT (the easiest way is to set the pose as the object's rotation matrix, which is a bit hacky but produces quick results).
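To make the pose hand-off concrete, here is a minimal plain-Java sketch of the matrix plumbing involved. It assumes the native side reports the pose as a 3x4 row-major matrix (rotation in the first three columns, translation in the fourth), which is how Vuforia's sample code exposes it; the class and method names are my own, not Vuforia or jPCT API. Since matrix conventions differ between libraries (jPCT keeps translation in the bottom row of its row-major matrices), a transpose helper is included - check against your actual data before trusting either orientation.

```java
// Sketch only: expand a 3x4 row-major marker pose into a 4x4 matrix
// suitable for handing to a 3D engine. Names here are illustrative,
// not Vuforia/jPCT API.
public class PoseConversion {

    /** Append the homogeneous row [0, 0, 0, 1] to a 3x4 row-major pose. */
    public static float[] toMatrix4x4(float[] pose3x4) {
        if (pose3x4.length != 12) {
            throw new IllegalArgumentException("expected 12 floats (3x4 row-major)");
        }
        float[] m = new float[16];
        System.arraycopy(pose3x4, 0, m, 0, 12);
        m[12] = 0f; m[13] = 0f; m[14] = 0f; m[15] = 1f; // homogeneous row
        return m;
    }

    /** Transpose a 4x4 matrix (row-major <-> column-major conventions). */
    public static float[] transpose(float[] m) {
        float[] t = new float[16];
        for (int row = 0; row < 4; row++) {
            for (int col = 0; col < 4; col++) {
                t[col * 4 + row] = m[row * 4 + col];
            }
        }
        return t;
    }

    public static void main(String[] args) {
        // Identity rotation with a translation of (5, 6, 7).
        float[] pose = {
            1, 0, 0, 5,
            0, 1, 0, 6,
            0, 0, 1, 7
        };
        float[] m = toMatrix4x4(pose);
        System.out.println("m[3]=" + m[3] + " m[15]=" + m[15]);
        // On the jPCT side you would then load this into a jPCT Matrix and
        // set it as the object's rotation matrix, transposing first if the
        // conventions disagree - verify against the jPCT-AE docs.
    }
}
```

The quick-and-dirty route the answer describes (pose straight into the rotation matrix) skips proper decomposition into rotation and translation, which is exactly why it is hacky but fast.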
jPCT-AE supports 3D-model loading for some common formats. The API docs are good, but you may need to consult the forums for sample code.
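For reference, one of the common formats jPCT-AE loads is Wavefront .obj. Since the jPCT loader call itself needs the library jar, here is a plain-Java sketch of the format instead (not jPCT code): 'v' lines define vertex positions and 'f' lines define faces by 1-based vertex indices, which is all a basic loader needs to build a mesh.

```java
// Minimal illustration of the Wavefront .obj text format that mesh
// loaders (such as jPCT-AE's) consume. Plain Java, no jPCT dependency.
public class ObjFormatSketch {

    /** Count vertex ('v') and face ('f') records in .obj text. */
    public static int[] countElements(String objText) {
        int vertices = 0;
        int faces = 0;
        for (String line : objText.split("\n")) {
            String trimmed = line.trim();
            if (trimmed.startsWith("v ")) {
                vertices++;       // vertex position: "v x y z"
            } else if (trimmed.startsWith("f ")) {
                faces++;          // face: "f i j k" (1-based indices)
            }
        }
        return new int[]{vertices, faces};
    }

    public static void main(String[] args) {
        String triangle =
            "v 0.0 0.0 0.0\n" +
            "v 1.0 0.0 0.0\n" +
            "v 0.0 1.0 0.0\n" +
            "f 1 2 3\n";
        int[] counts = countElements(triangle);
        System.out.println("vertices=" + counts[0] + " faces=" + counts[1]);
    }
}
```

If your exporter emits normals, texture coordinates, or a separate .mtl material file, check the jPCT-AE documentation and forums for what the loader actually expects.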