Using orientation information to get the downward direction

Posted 2024-10-19 13:07:46

I am working on an app that involves drawing a line on my Android screen based on its orientation, and could use some help or pointers.

The line is drawn in the following way: if the phone is held flat, the line shrinks and becomes a dot, and as the phone is tilted the line becomes bigger - i.e. with the phone stood upright the line points down and has a maximum magnitude of 9.8, and held flat it is a small dot. Crucially, no matter what angle the phone is held at, the line always points down - i.e. along the line of gravity.

Now I have figured out how to calculate the yaw, pitch and roll angles of the phone, but mathematically I am a bit lost on how to derive the vector of this line from this information - any pointers would be most welcome.

Thanks

Answered by 巷雨优美回忆 on 2024-10-26 13:07:46

OK, so I figured this out with a lot of help from the Replica Island source and, in turn, an Nvidia paper.

Once you have the pitch, roll and yaw from the TYPE_ORIENTATION sensor reading (note that TYPE_ORIENTATION has since been deprecated; SensorManager.getRotationMatrix() plus SensorManager.getOrientation() is the modern route to the same angles):

@Override
public void onSensorChanged(SensorEvent event)
{
    synchronized (this)
    {
        // TYPE_ORIENTATION reports degrees: values[0] = azimuth (yaw),
        // values[1] = pitch, values[2] = roll. Feed roll and pitch in as
        // the canonical x and y tilts for the axis swap below.
        m_orientationInput[0] = event.values[2]; // roll:  left/right tilt
        m_orientationInput[1] = event.values[1]; // pitch: forward/back tilt
        m_orientationInput[2] = event.values[0]; // azimuth (yaw)

        canonicalOrientationToScreenOrientation(m_rotationIndex, m_orientationInput, m_orientationOutput);

        // Now we have screen-space rotations around x, y, z.
        // Normalise to [-1, 1] by dividing by the 90-degree maximum tilt.
        final float horizontalMotion = m_orientationOutput[0] / 90.0f;
        final float verticalMotion = m_orientationOutput[1] / 90.0f;

        // send details to renderer....
    }
}
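The "send details to renderer" step is not shown above. As a sketch of what the renderer might do with the two normalised values (the class and parameter names are hypothetical, not from the Replica Island source): draw the line from the screen centre to the centre plus the tilt vector scaled by a maximum length.

```java
// Hypothetical helper: maps the normalised tilt values onto the line's
// endpoint in screen pixels. Screen y grows downward, so a positive
// verticalMotion (device tilted upright) draws the line toward the
// bottom of the screen.
public final class LineMapper {
    // cx, cy: screen centre; maxLen: line length in pixels at full tilt.
    public static float[] endpoint(float cx, float cy,
                                   float horizontalMotion,
                                   float verticalMotion,
                                   float maxLen) {
        return new float[] {
            cx + horizontalMotion * maxLen,
            cy + verticalMotion * maxLen
        };
    }

    public static void main(String[] args) {
        // Device fully upright: verticalMotion = 1, so the endpoint is
        // maxLen pixels straight below the centre.
        float[] p = endpoint(540f, 960f, 0f, 1f, 300f);
        System.out.println(p[0] + "," + p[1]); // 540.0,1260.0
    }
}
```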

Here is the canonicalOrientationToScreenOrientation function:

// From NVIDIA http://developer.download.nvidia.com/tegra/docs/tegra_android_accelerometer_v5f.pdf
// displayRotation is the current Display.getRotation() value
// (Surface.ROTATION_0 .. ROTATION_270, i.e. 0..3).
private void canonicalOrientationToScreenOrientation(int displayRotation, float[] canVec, float[] screenVec)
{
    // Each row: { x sign, y sign, x source index, y source index }
    final int[][] axisSwap =
    {
        { 1, -1, 0, 1 },   // ROTATION_0
        {-1, -1, 1, 0 },   // ROTATION_90
        {-1,  1, 0, 1 },   // ROTATION_180
        { 1,  1, 1, 0 }    // ROTATION_270
    };

    final int[] as = axisSwap[displayRotation];
    screenVec[0] = (float) as[0] * canVec[as[2]];
    screenVec[1] = (float) as[1] * canVec[as[3]];
    screenVec[2] = canVec[2];
}
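To sanity-check the swap table, the same logic can be run as plain Java with no Android dependency (integer rotation indices 0..3 stand in for Surface.ROTATION_0 .. ROTATION_270):

```java
// Standalone sanity check of the NVIDIA axis-swap table above.
public final class AxisSwapCheck {
    static final int[][] AXIS_SWAP = {
        { 1, -1, 0, 1 },   // ROTATION_0
        {-1, -1, 1, 0 },   // ROTATION_90
        {-1,  1, 0, 1 },   // ROTATION_180
        { 1,  1, 1, 0 }    // ROTATION_270
    };

    static float[] swap(int displayRotation, float[] canVec) {
        final int[] as = AXIS_SWAP[displayRotation];
        return new float[] {
            as[0] * canVec[as[2]],
            as[1] * canVec[as[3]],
            canVec[2]
        };
    }

    public static void main(String[] args) {
        float[] can = { 10f, 20f, 30f };
        // ROTATION_0 keeps x and flips y: (10, -20, 30)
        System.out.println(java.util.Arrays.toString(swap(0, can)));
        // ROTATION_90 swaps the axes and flips both signs: (-20, -10, 30)
        System.out.println(java.util.Arrays.toString(swap(1, can)));
    }
}
```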