SDL 1.3: How do I render video without displaying it?
So what I need is simple: imagine we have no GUI at all, just SSH access to some Linux box where we're going to build and host our app. That app will generate a video stream. We have an SDL app with an OpenGL shader in it. All we want is to get the rendering (what we would normally see in the SDL window) as a char* (of size W*H*3). How do we do such a thing? How do we make SDL render not onto its GUI window but into some swappable pointer?
To be of any use, OpenGL should be hardware accelerated, so first check if your server does have a GPU that meets your requirements. If you're on a rented virtual server or some standard root server, then you very likely don't have a GPU.
If you have a GPU, then there are two possible methods:
Method 1 -- the easy one
You'll (unfortunately) have to configure and start an X server for it, and this X server must also be the current virtual terminal (i.e. it must be the active thing on the graphics card). Then you give the user who'll be running that video generator access to that X display (read man xauth and what it references).
The next step is independent of SDL; it's an OpenGL thing: create a Framebuffer Object onto which the desired graphics are rendered. A PBuffer would work as well, and actually I'd prefer it in this situation; however, I've found Framebuffer Objects to be more reliable than PBuffers on current Linux and its drivers.
Then render to this Framebuffer Object or PBuffer as usual and retrieve the content using glReadPixels.
Method 2 -- the flexible one
On the low level this is quite similar to Method 1, but things get abstracted for you: get VirtualGL (http://www.virtualgl.org/) to perform the actual OpenGL rendering on the GPU. Instead of starting your application on a secondary X server, you make direct use of the provided VirtualGL server, sending it the GLX stream and getting a JPEG image stream back. You could also use a secondary X server running a virtual framebuffer and take a continuous screen capture of that. Or, probably most elegant: write your own X.Org video driver that passes the video to the video streamer directly.
You cannot directly render to a byte array in OpenGL.
There are two ways to work with this. The first way is the simplest and doesn't require context gimmickery, and the second way does.
So first, the simple way.
In order for OpenGL to work, you need to have a window. That doesn't mean the window needs to be visible, but you need to create one to get a valid OpenGL context. Therefore Step 1: Create a window and minimize it.
Now, in order to get valid rendering, the pixels in the framebuffer must pass the "pixel ownership test." When rendering to the framebuffer that holds the screen itself, pixels of the window that are not actually visible on screen fail the pixel ownership test. So the values of those pixels are undefined if you use glReadPixels.
However, this only pertains to the default framebuffer that is associated with the window. Framebuffer objects always pass the pixel ownership test. Therefore, Step 2: Create a framebuffer object and the associated renderbuffers for your needs.
From there, it's pretty simple. Just render as normal and do a glReadPixels when you want to get the data. Pixel buffer objects can be used to transfer pixel data asynchronously, if performance is a concern. Step 3: Render and use glReadPixels to get the data.
The second way is more widely available (FBOs require extension support or OpenGL 3.0), but more platform-specific.
Instead of creating an FBO in step 2, you instead have Step 2: use glXCreatePbuffer to create a pbuffer. A pbuffer is an off-screen render target that acts like the default framebuffer. You use glXMakeContextCurrent to tell OpenGL to render to the pbuffer instead of the default framebuffer.
Steps 1 and 3 are the same as above.