Android augmented reality: converting CameraView coordinates to screen coordinates
I'm currently developing my own augmented reality app. I'm trying to write my own AR engine, since all the frameworks I've seen so far only work with GPS data.
It's going to be used indoors; I'm getting my position data from another system.
What I have so far is:
// Uses android.hardware.SensorManager and android.opengl.Matrix.
float[] vector = { 2, 2, 1, 0 };        // world-space point to project
float[] transformed = new float[4];     // projected result {x, y, z, w}
float[] R = new float[16];              // rotation matrix from the sensors
float[] I = new float[16];              // inclination matrix (unused here)
float[] r = new float[16];              // remapped rotation matrix
// S: scale from normalized device coordinates to pixels,
// B: bias to move the origin to the screen centre
// (column-major; screen size hardcoded to 800x480)
float[] S = { 400f, 1, 1, 1, 1, -240f, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 };
float[] B = { 1, 1, 1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 400f, 240f, 1, 1 };
float[] temp1 = new float[16];
float[] temp2 = new float[16];
// hand-built perspective projection matrix
float[] frustumM = { 1.5f, 0,     0,     0,
                     0,    -1.5f, 0,     0,
                     0,    0,     1.16f, 1,
                     0,    0,    -3.24f, 0 };

// Rotation matrix giving the device-to-world transformation
SensorManager.getRotationMatrix(R, I, accelerometerValues, geomagneticMatrix);
SensorManager.remapCoordinateSystem(R, SensorManager.AXIS_X, SensorManager.AXIS_Z, r);
// Invert to get the world-to-camera transformation
Matrix.invertM(R, 0, r, 0);

// Compose: transformed = B * S * frustumM * R * vector
Matrix.multiplyMM(temp1, 0, frustumM, 0, R, 0);
Matrix.multiplyMM(temp2, 0, S, 0, temp1, 0);
Matrix.multiplyMM(temp1, 0, B, 0, temp2, 0);
Matrix.multiplyMV(transformed, 0, temp1, 0, vector, 0);
I know it's ugly code, but I'm just trying to get the object "vector" painted correctly, with my position being (0, 0, 0) for now.
My screen size is hardcoded into the matrices S and B (800x480).
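For comparison, the scale-plus-bias pair corresponds to the textbook viewport transform; built from the screen size at runtime instead of hardcoded, it would look roughly like this (a minimal sketch, assuming the same 800x480 layout; android.opengl.Matrix stores matrices in column-major order, and in a textbook viewport matrix every entry apart from the diagonal and the translation column is zero):

// Minimal sketch of a standard viewport matrix (assumes 800x480).
// Column-major, as android.opengl.Matrix expects; this single matrix
// replaces the separate S (scale) and B (bias) steps.
int width = 800, height = 480;
float[] viewport = new float[16];
Matrix.setIdentityM(viewport, 0);
viewport[0]  = width / 2f;    // scale NDC x into pixels
viewport[5]  = -height / 2f;  // flip y, since screen y grows downwards
viewport[12] = width / 2f;    // move the origin to the screen centre
viewport[13] = height / 2f;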
The result should be stored in "transformed" and should have the form transformed = {x, y, z, w}.
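For reference, that result is still in homogeneous coordinates, so x and y have to be divided by w before they are usable as pixel positions. A sketch of the read-back (the screenX/screenY names are just for illustration):

// Sketch of the perspective divide after the matrix chain above.
float w = transformed[3];
if (w > 0f) {  // only points in front of the camera project sensibly
    float screenX = transformed[0] / w;
    float screenY = transformed[1] / w;
    // draw the marker at (screenX, screenY)
}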
For the math I've used this link: http://www.inf.fu-berlin.de/lehre/WS06/19605_Computergrafik/doku/rossbach_siewert/camera.html
Sometimes my graphic gets painted, but it jumps around and isn't at the correct position. I've logged the orientation angles with SensorManager.getOrientation and they look OK and stable.
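The logging looks roughly like this (a sketch; the log tag and variable names are mine, and SensorManager.getOrientation reports azimuth, pitch and roll in radians):

// Sketch of logging the orientation derived from the remapped matrix r.
float[] orientation = new float[3];
SensorManager.getOrientation(r, orientation);
Log.d("AR", "azimuth=" + orientation[0]
        + " pitch=" + orientation[1]
        + " roll=" + orientation[2]);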
So I think I'm doing something wrong with the math, but I couldn't find better sources on the math for transforming my data. Could anyone help me, please?
Thanks in advance
martin