I have a program which simulates a physical system that changes over time. I want to, at predetermined intervals (say every 10 seconds) output a visualization of the state of the simulation to a file. I want to do it in such a way that it is easy to "turn the visualization off" and not output the visualization at all.
I am looking at OpenGL and GLUT as graphics tools to do the visualization. However, the problem seems to be that, first of all, it looks like it can only output to a window and can't output to a file. Second, in order to generate the visualization you have to call glutMainLoop, and that stops the execution of the main function - the only functions that get called from then on are callbacks from the GUI. However, I do not want this to be a GUI-based application - I want it to just be an application that you run from the command line, and that generates a series of images. Is there a way to do this in GLUT/OpenGL? Or is OpenGL completely the wrong tool for this and should I use something else?
Runnable glReadPixels + PBO example

The example below generates either:
on a ramfs. The better the compression, the larger the FPS, so we must be memory IO bound.
FPS is larger than 200 on my 60 FPS screen, and all images are different, so I'm sure that it's not limited to the screen's FPS.
The GIF in this answer was generated from the video as explained at: https://askubuntu.com/questions/648603/how-to-create-an-animated-gif-from-mp4-video-via-command-line/837574#837574
glReadPixels is the key OpenGL function that reads pixels from the screen. Also have a look at the setup under init().

glReadPixels reads the bottom row of pixels first, unlike most image formats, so converting that is usually needed.

offscreen.c on GitHub.
Compile with:
Run 10 frames "offscreen" (mostly TODO, works but has no advantage), with size 200 x 100 and all output formats:
CLI format is:
and output_formats is a bitmask.

Run on-screen (does not limit my FPS either):
Benchmarked on Ubuntu 15.10, OpenGL 4.4.0 NVIDIA 352.63, Lenovo Thinkpad T430.
Also tested on Ubuntu 18.04, OpenGL 4.6.0 NVIDIA 390.77, Lenovo Thinkpad P51.
TODO: find a way to do it on a machine without a GUI (e.g. X11). It seems that OpenGL is just not made for offscreen rendering, and that reading pixels back to the CPU is implemented on the interface with the windowing system (e.g. GLX). See: OpenGL without X.org in Linux
TODO: use a 1x1 window, make it un-resizable, and hide it to make things more robust. If I do either of those, the rendering fails, see code comments. Preventing resize seems impossible in GLUT, but GLFW supports it. In any case, those don't matter much, as my FPS is not limited by the screen refresh frequency even when offscreen is off.

Other options besides PBO
Pixelbuffer object (PBO)

Framebuffer and Pixelbuffer are better than the backbuffer and texture since they are made for data to be read back to the CPU, while the backbuffer and textures are made to stay on the GPU and show on screen.

PBO is for asynchronous transfers, so I think we don't need it; see: What are the differences between a Frame Buffer Object and a Pixel Buffer Object in OpenGL?
Maybe off-screen Mesa is worth looking into: http://www.mesa3d.org/osmesa.html
Vulkan
It seems that Vulkan is designed to support offscreen rendering better than OpenGL.
This is mentioned on this NVIDIA overview: https://developer.nvidia.com/transitioning-opengl-vulkan
This is a runnable example that I just managed to run locally: https://github.com/SaschaWillems/Vulkan/tree/b9f0ac91d2adccc3055a904d3a8f6553b10ff6cd/examples/renderheadless/renderheadless.cpp
After installing the drivers and ensuring that the GPU is working I can do:
and this immediately generates an image headless.ppm without opening any windows.

I have also managed to run this program on an Ubuntu Ctrl + Alt + F3 non-graphical TTY, which further indicates that it really does not need a screen.
Other examples that might be of interest:
Related: Is it possible to do offscreen rendering without Surface in Vulkan?
Tested on Ubuntu 20.04, NVIDIA driver 435.21, NVIDIA Quadro M1200 GPU.
apitrace
https://github.com/apitrace/apitrace
Just works, and does not require you to modify your code at all:
Also available on Ubuntu 18.10 with:
You now have a bunch of screenshots named as:
TODO: screenshots of the working principle.
The docs also suggest this for video:
See also:
WebGL + canvas image saving
This is mostly a toy due to performance, but it kind of works for really basic use cases:
Bibliography
How to use GLUT/OpenGL to render to a file?
How to take screenshot in OpenGL
How to render offscreen on OpenGL?
glReadPixels() "data" argument usage?
Render OpenGL ES 2.0 to image
http://www.songho.ca/opengl/gl_fbo.html
http://www.mesa3d.org/brianp/sig97/offscrn.htm
Render off screen (with FBO and RenderBuffer) and pixel transfer of color, depth, stencil
https://gamedev.stackexchange.com/questions/59204/opengl-fbo-render-off-screen-and-texture
What are the differences between a Frame Buffer Object and a Pixel Buffer Object in OpenGL?
MuJoCo has a functionality to save video with the mjr_readPixels function:

This sample can be compiled in three ways, which differ in how the OpenGL context is created: using GLFW with an invisible window, using OSMesa, or using EGL.

FBO larger than window:

No Window / X11:
You almost certainly don't want GLUT, regardless. Your requirements don't fit what it's intended to do (and even when your requirements do fit its intended purpose, you usually don't want it anyway).
You can use OpenGL. To generate output in a file, you basically set up OpenGL to render to a texture, and then read the resulting texture into main memory and save it to a file. At least on some systems (e.g., Windows), I'm pretty sure you'll still have to create a window and associate the rendering context with the window, though it will probably be fine if the window is always hidden.
Not to take away from the other excellent answers, but if you want an existing example: we have been doing offscreen GL rendering for a few years now in OpenSCAD, as part of the test framework that renders to .png files from the command line. The relevant files are the Offscreen*.cc files under https://github.com/openscad/openscad/tree/master/src
It runs on OSX (CGL), Linux X11 (GLX), BSD (GLX), and Windows (WGL), with some quirks due to driver differences. The basic trick is to forget to open a window (as Douglas Adams says, the trick to flying is to forget to hit the ground). It even runs on 'headless' Linux/BSD if you have a virtual X11 server running, like Xvfb or Xvnc. There is also the possibility of using software rendering on Linux/BSD by setting the environment variable LIBGL_ALWAYS_SOFTWARE=1 before running your program, which can help in some situations.
This is not the only system to do this, I believe the VTK imaging system does something similar.
This code is a bit old in its methods (I ripped the GLX code out of Brian Paul's glxgears), especially as new systems come along: OSMesa, Mir, Wayland, EGL, Android, Vulkan, etc. But notice the OffscreenXXX.cc filenames, where XXX is the GL context subsystem; in theory it can be ported to different context generators.
Not sure OpenGL is the best solution.
But you can always render to an off-screen buffer.
The typical way to write OpenGL output to a file is to use glReadPixels to copy the resulting scene, pixel by pixel, to an image file.
You could use SFML http://www.sfml-dev.org/. You can use the image class to save your rendered output.
http://www.sfml-dev.org/documentation/1.6/classsf_1_1Image.htm
To get your rendered output, you can render to a texture or copy your screen.
Rendering to a texture:
Copying screen output:
1. Rendering offscreen using EGL with a PbufferSurface instead of a WindowSurface
2. Saving the images
There are several ways to achieve this. Actually, the above code is part of a demo (https://github.com/kallaballa/GCV/blob/main/src/tetra/tetra-demo.cpp) I wrote that uses OpenCL/OpenGL/VAAPI interop in conjunction with OpenCV to write a video of what is rendered in OpenGL.
Please mind the README if you want to build the demo.