Reconstructing a frame with glReadPixels() on Android

Posted 2024-10-21 20:34:45

I am working on an augmented reality project on Android. The code captures the camera video, finds a marker and displays a cube on top of it. After this, a motion vector (in the form of pixels moved in the x and y directions) is computed. What I need to do is read the pixels from the GL layer and draw them again after moving them by the distance specified by the motion vector.
The GL layer is a transparent layer implemented with the GLSurfaceView class. The problem I am facing is that when I use glReadPixels() to read the pixels and convert them into a 480x800 array (the Nexus One screen resolution), I get three different portions of the cube instead of one.
I intend to move the pixels by the motion vector after this and use glDrawPixels() to put them back into the framebuffer.
Please help me interpret this. Is there something I am missing when using glReadPixels(), and is there some other function that would help me achieve the same thing? I was thinking of using glBlitFramebuffer(), but that is not supported by the Android GL10 class.
I have attached the part of the code where I read the pixels and convert them to a 2D matrix, along with the image of the pixels I reconstructed using MATLAB.
Sorry, I am not allowed to post an image. In case someone can help me after seeing the image, please post here and I can mail you the picture. Thanks a ton!

Any help will be greatly appreciated.

try {
    // Read back the whole GL surface (hard-coded to the 800x480 Nexus One resolution).
    gl.glPixelStorei(GL10.GL_PACK_ALIGNMENT, 1);
    IntBuffer pixels = IntBuffer.allocate(800 * 480);   // one packed RGBA int per pixel
    gl.glReadPixels(0, 0, 800, 480, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixels);

    File f = new File("./mnt/sdcard/Sahiba_PixelData_" + flag + ".txt");
    if (!f.exists())
        f.createNewFile();
    FileWriter fw = new FileWriter(f);

    // Copy the flat buffer into a 480x800 (rows x columns) array and dump it to the file.
    int[][] pixelsArr = new int[480][800];
    int n = 0;
    for (int i = 0; i < 480; i++) {
        for (int j = 0; j < 800; j++) {
            pixelsArr[i][j] = pixels.get(n);
            fw.write(pixelsArr[i][j] + " ");
            n++;
        }
    }
    fw.close();

    Log.i("GLLayer", "Pixels reading and storing finished !!!");
} catch (Exception e) {
    Log.i("GLLayer", "Exception = " + e);
}
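
For reference, the planned "move by the motion vector" step is plain array manipulation once the pixels are in a 2D array. A minimal sketch, assuming a height x width int array of packed RGBA pixels and hypothetical offsets dx and dy (cells with no source pixel are left as 0, i.e. transparent black in RGBA):

// Shift a height x width pixel array by a motion vector (dx columns, dy rows).
// dx, dy and the array layout are assumptions for illustration only.
static int[][] shiftPixels(int[][] src, int dx, int dy) {
    int height = src.length;
    int width = src[0].length;
    int[][] dst = new int[height][width];
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int srcRow = row - dy;
            int srcCol = col - dx;
            if (srcRow >= 0 && srcRow < height && srcCol >= 0 && srcCol < width) {
                dst[row][col] = src[srcRow][srcCol];
            }
        }
    }
    return dst;
}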


稀香 2024-10-28 20:34:45

First of all, you shouldn't use hard-coded values, because the system is allowed to change the actual size, which may lead to exceptions. Try using the viewport dimensions:

int[] viewportDim = new int[4];
gl.glGetIntegerv(GL11.GL_VIEWPORT, viewportDim, 0);   // GL_VIEWPORT is defined on the GL11 interface

viewportDim[2] is the width and viewportDim[3] is the height.

gl.glPixelStorei(GL10.GL_PACK_ALIGNMENT,1);

I'm not sure whether this is correct, but I think it should be 4 for RGBA.

You read the image as 800*480 and then save it as 480*800? I think that should be 800*480 as well.
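
Putting those three points together, a minimal sketch of a read sized from the queried viewport instead of hard-coded values (assuming a GL10 gl instance inside onDrawFrame, and GL_VIEWPORT taken from the GL11 interface; glReadPixels returns rows bottom-to-top, so the sketch also flips the image while building the 2D array):

int[] viewportDim = new int[4];
gl.glGetIntegerv(GL11.GL_VIEWPORT, viewportDim, 0);   // x, y, width, height
int width = viewportDim[2];
int height = viewportDim[3];

gl.glPixelStorei(GL10.GL_PACK_ALIGNMENT, 1);
IntBuffer pixels = IntBuffer.allocate(width * height);   // one packed RGBA int per pixel
gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixels);

// glReadPixels fills the buffer bottom row first, so flip vertically while
// building the height x width array.
int[][] pixelsArr = new int[height][width];
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        pixelsArr[height - 1 - row][col] = pixels.get(row * width + col);
    }
}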
