Gstreamer - streaming an image overlay to YouTube
Trying to stream from my Jetson Nano with a Pi Camera 2 to YouTube with GStreamer.
Streaming video only works, but I need to overlay the video with an image using multifilesrc (the image will change over time).
After many hours I was not successful in incorporating multifilesrc into the pipeline.
I have tried compositor and videomixer, but both failed. Maybe using nvcompositor?
Any ideas?
This is what I have so far:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
"video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1" ! omxh264enc ! \
'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! queue ! flvmux name=muxer alsasrc device=hw:1 ! \
audioresample ! "audio/x-raw,rate=48000" ! queue ! \
voaacenc bitrate=32000 ! aacparse ! queue ! muxer. muxer. ! \
rtmpsink location="rtmp://a.rtmp.youtube.com/live2/x/xxx app=live2"
EDIT: I tried this, but it is not working:
gst-launch-1.0 \
nvcompositor name=mix sink_0::zorder=1 sink_1::alpha=1.0 sink_1::zorder=2 ! nvvidconv ! omxh264enc ! \
'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! queue ! flvmux name=muxer alsasrc device=hw:1 ! \
audioresample ! "audio/x-raw,rate=48000" ! queue ! \
voaacenc bitrate=32000 ! aacparse ! queue ! muxer. muxer. ! \
rtmpsink location="rtmp://a.rtmp.youtube.com/live2/x/xxx app=live2" \
nvarguscamerasrc sensor-id=0 ! \
"video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1" ! \
nvvidconv ! video/x-raw, format=RGBA, width=1920, height=1080, framerate=30/1 ! autovideoconvert ! queue ! mix.sink_0 \
filesrc location=logo.png ! pngdec ! alphacolor ! video/x-raw,format=RGBA ! imagefreeze ! nvvidconv ! mix.sink_1
Although it may work in some cases without these, for nvcompositor I'd advise using RGBA format in NVMM memory with pixel-aspect-ratio=1/1 for both inputs and for the output. Use caps after nvvidconv to be sure about the input pipelines, and use nvvidconv to convert the nvcompositor output into NV12 (still in NVMM memory) before encoding.
You may also add a queue on the 2nd input (the logo) before the compositor. Probably not mandatory, but safer. You may also set a framerate in the caps after imagefreeze.
Last, you may have to set xpos, ypos, width, and height for all sources for more reliable behavior.
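Putting those suggestions together, the asker's EDIT pipeline might look like the sketch below. This is untested: the element names, ALSA device, and RTMP URL are taken from the question; the logo position/size values (xpos=50, ypos=50, 320x180) are placeholder assumptions you would adjust for your overlay.

```shell
# Untested sketch: RGBA in NVMM with pixel-aspect-ratio=1/1 on both compositor
# inputs, explicit xpos/ypos/width/height per sink, a queue and framerate caps
# on the logo branch, and conversion to NV12 (still NVMM) before encoding.
gst-launch-1.0 \
  nvcompositor name=mix \
    sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1920 sink_0::height=1080 sink_0::zorder=1 \
    sink_1::xpos=50 sink_1::ypos=50 sink_1::width=320 sink_1::height=180 sink_1::zorder=2 sink_1::alpha=1.0 ! \
  'video/x-raw(memory:NVMM), format=RGBA' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! omxh264enc ! \
  'video/x-h264, stream-format=byte-stream' ! h264parse ! queue ! \
  flvmux name=muxer ! \
  rtmpsink location="rtmp://a.rtmp.youtube.com/live2/x/xxx app=live2" \
  nvarguscamerasrc sensor-id=0 ! \
  'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA, pixel-aspect-ratio=1/1' ! \
  queue ! mix.sink_0 \
  filesrc location=logo.png ! pngdec ! alphacolor ! imagefreeze ! \
  'video/x-raw, format=RGBA, framerate=30/1, pixel-aspect-ratio=1/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA, pixel-aspect-ratio=1/1' ! \
  queue ! mix.sink_1 \
  alsasrc device=hw:1 ! audioresample ! 'audio/x-raw,rate=48000' ! queue ! \
  voaacenc bitrate=32000 ! aacparse ! queue ! muxer.
```

Note that filesrc+imagefreeze loads the logo once; to pick up a changing image over time (the original multifilesrc goal), the logo branch would need a source that re-reads the file, which is a separate problem from getting the compositor caps right.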