How do I call glReadPixels on a different thread?

When I call glReadPixels on another thread, it doesn't return any data. I read somewhere that I need to create a new context in the calling thread and copy the memory over. How exactly do I do this?

This is the glReadPixels code I use:

pixels = new BYTE[ 3 * width * height];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
image = FreeImage_ConvertFromRawBits(pixels, width, height, 3 * width, 24, 0xFF0000, 0x00FF00, 0x0000FF, false);
FreeImage_Save(FIF_PNG, image, pngpath.c_str() , 0);

Alternatively, in this thread they suggest using another piece of code (see the end), but I don't understand what origX, origY, srcOrigX and srcOrigY are.

Comments (3)

氛圍 2024-12-11 02:00:51

You have different options.

You can call ReadPixels from the rendering thread. In this case the returned data should be stored in a buffer that can be handed off to a thread dedicated to saving pictures. This can be done easily with a buffer queue, a mutex and a semaphore: the rendering thread gets the data using ReadPixels, locks the mutex, enqueues the (system memory) pixel buffer, unlocks the mutex and increments the semaphore; the worker thread (blocked on the semaphore) is signaled by the rendering thread, locks the mutex, dequeues the pixel buffer, unlocks the mutex and saves the image.
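A minimal sketch of that first scheme, using a std::condition_variable in place of a raw semaphore. SaveImage is a placeholder for the FreeImage code from the question (the name and the exact queue layout are illustrative, not part of the original answer), and the GL calls assume the rendering thread has a current context:

#include <windows.h>            // needed before GL/gl.h on Windows
#include <GL/gl.h>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

struct Frame {
    int width = 0, height = 0;
    std::vector<std::uint8_t> rgb;   // 3 * width * height bytes, bottom-up rows
};

// Placeholder: put the FreeImage_ConvertFromRawBits / FreeImage_Save code
// from the question here. No GL calls belong in this function.
void SaveImage(const Frame& /*frame*/) { /* ... */ }

std::queue<Frame>       g_frames;
std::mutex              g_mutex;
std::condition_variable g_ready;     // plays the role of the semaphore
bool                    g_quit = false;

// Rendering thread: call after the frame has been drawn.
void CaptureFrame(int width, int height)
{
    Frame f;
    f.width  = width;
    f.height = height;
    f.rgb.resize(3u * width * height);

    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // rows tightly packed, matches a 3*width pitch
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, f.rgb.data());

    {
        std::lock_guard<std::mutex> lock(g_mutex);
        g_frames.push(std::move(f));
    }
    g_ready.notify_one();                  // "increment the semaphore"
}

// Worker thread: dequeues and saves frames; no GL calls here.
void SaveWorker()
{
    for (;;) {
        Frame f;
        {
            std::unique_lock<std::mutex> lock(g_mutex);
            g_ready.wait(lock, [] { return !g_frames.empty() || g_quit; });
            if (g_frames.empty())
                return;                    // quitting and nothing left to save
            f = std::move(g_frames.front());
            g_frames.pop();
        }
        SaveImage(f);                      // slow disk/encode work happens off the render thread
    }
}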

Otherwise, you can copy the current framebuffer into a texture or a pixel buffer object. In this case you need two different threads, each with its own OpenGL context made current (via MakeCurrent) and sharing their object space with each other (as suggested by user771921). When the first (rendering) thread calls ReadPixels (or CopyPixels), it notifies the second thread about the operation (using a semaphore, for example); the second thread then maps the pixel buffer object (or fetches the texture data).
This method has the advantage of allowing the driver to pipeline the first thread's read operation, but it effectively doubles the memory copies by introducing an additional staging buffer. Moreover, the ReadPixels operation is flushed when the second thread maps the buffer, which (most probably) happens just after the second thread is signaled.
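A rough sketch of the GL calls involved in this second option. It assumes the buffer-object entry points are loaded (e.g. via GLEW), that the two contexts already share their object space, and that NotifyWorker()/WaitForRenderThread() stand in for whatever signaling primitive you use (they are hypothetical, which is why they are left commented out):

#include <GL/glew.h>

GLuint g_pbo = 0;

// Render thread, once at startup (its context is current).
void CreateReadbackBuffer(int width, int height)
{
    glGenBuffers(1, &g_pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, 3 * width * height, nullptr, GL_STREAM_READ);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Render thread, per captured frame: start an asynchronous read into the PBO.
void StartReadback(int width, int height)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbo);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, nullptr); // offset 0 into the PBO
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glFlush();            // make sure the read is submitted before the other context looks at it
    // NotifyWorker();    // tell the second thread the read has been issued
}

// Worker thread (its own shared context is current via MakeCurrent).
void ConsumeReadback(int width, int height)
{
    // WaitForRenderThread();  // block until the render thread has issued the read
    glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbo);
    const void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY); // waits for the transfer
    if (pixels) {
        // copy or save 3 * width * height bytes of RGB data here
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}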

I would suggest the first option, since it is much cleaner and simpler. The second one is overcomplicated, and I doubt you gain anything from it: saving the image is a lot slower than ReadPixels.

Even if ReadPixels is not pipelined, does your FPS really drop? Don't optimize before you have profiled.

The example you linked uses GDI functions, which are not OpenGL related. I think that code forces a form repaint event and then captures the window's client area contents. It seems much slower than ReadPixels, even though I haven't actually profiled it.

瑾夏年华 2024-12-11 02:00:51

You can create shared contexts, and this will work as you intended. See wglShareLists (the name is badly chosen; it shares more than just display lists). Or use WGL_ARB_create_context, which directly supports sharing contexts too (you have tagged the question "windows", but similar functionality exists outside WGL as well).
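A rough sketch of the wglShareLists route, assuming an hDC and an existing render context hglrcRender are already set up:

#include <windows.h>
#include <GL/gl.h>

// Create a second context that shares objects (textures, buffers, ...)
// with the existing render context. Do this before the worker context
// owns any objects of its own, or wglShareLists may fail.
HGLRC CreateSharedContext(HDC hDC, HGLRC hglrcRender)
{
    HGLRC hglrcWorker = wglCreateContext(hDC);
    if (!hglrcWorker || !wglShareLists(hglrcRender, hglrcWorker)) {
        if (hglrcWorker) wglDeleteContext(hglrcWorker);
        return nullptr;
    }
    return hglrcWorker;
}

// On the worker thread, before issuing any GL calls:
//   wglMakeCurrent(hDC, hglrcWorker);
//   ... GL work (e.g. mapping a shared pixel buffer object) ...
//   wglMakeCurrent(nullptr, nullptr);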

However, it is much, much easier to use a pixel buffer object instead. That has the same net effect as multithreading (the transfer runs asynchronously without blocking the render thread) and is far less complex.
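A sketch of that single-context PBO approach, double-buffered so each frame maps the PBO that was filled on the previous frame; the readback then overlaps with rendering instead of stalling it. An extension loader such as GLEW is assumed, and the very first call returns undefined data because nothing has been read into that buffer yet:

#include <GL/glew.h>
#include <cstring>
#include <vector>

GLuint g_pbos[2] = {0, 0};
int    g_index   = 0;   // which PBO receives this frame's readback

void InitAsyncReadback(int width, int height)
{
    glGenBuffers(2, g_pbos);
    for (GLuint pbo : g_pbos) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, 3 * width * height, nullptr, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

// Call once per frame, after rendering. Returns the pixels read back on the
// PREVIOUS frame, so the copy does not stall the current frame.
std::vector<unsigned char> ReadbackAsync(int width, int height)
{
    const int cur  = g_index;
    const int prev = 1 - g_index;
    g_index = prev;

    // Kick off this frame's transfer; glReadPixels returns immediately because
    // the destination is the bound pixel-pack buffer, not client memory.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbos[cur]);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, nullptr);

    // Map last frame's buffer; by now its transfer has usually completed.
    std::vector<unsigned char> pixels;
    glBindBuffer(GL_PIXEL_PACK_BUFFER, g_pbos[prev]);
    if (const void* src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
        pixels.resize(3u * width * height);
        std::memcpy(pixels.data(), src, pixels.size());
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    return pixels;
}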

远山浅 2024-12-11 02:00:51

Well, using OpenGL across multiple threads is a bad idea, especially if you call OpenGL functions from a thread that has no current context.

Apart from that, there is nothing wrong with your code example.
