Python + Qt + GStreamer
I'm working with PyQt and trying to get video from a webcam to play within a Qt widget. I've found tutorials for C and Qt, and for Python and GTK, but nothing for this combination of PyQt and GStreamer. Has anybody gotten this working?
This plays the video fine, but in a separate window:
self.gcam = gst.parse_launch('v4l2src device=/dev/video0 ! autovideosink')
self.gcam.set_state(gst.STATE_PLAYING)
What I need is to get the overlay working so the video is displayed within a widget on my GUI. Thanks, gurus of the internet!
OK, so I've gotten a lot farther, but I'm still in need of some help. I'm actually writing this for Maemo, but the following code works fine on my Linux laptop:
class Vid:
    def __init__(self, windowId):
        self.player = gst.Pipeline("player")
        self.source = gst.element_factory_make("v4l2src", "vsource")
        self.sink = gst.element_factory_make("autovideosink", "outsink")
        self.source.set_property("device", "/dev/video0")
        self.scaler = gst.element_factory_make("videoscale", "vscale")
        self.window_id = None
        self.windowId = windowId

        self.player.add(self.source, self.scaler, self.sink)
        gst.element_link_many(self.source, self.scaler, self.sink)

        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect("message", self.on_message)
        bus.connect("sync-message::element", self.on_sync_message)

    def on_message(self, bus, message):
        t = message.type
        if t == gst.MESSAGE_EOS:
            self.player.set_state(gst.STATE_NULL)
        elif t == gst.MESSAGE_ERROR:
            err, debug = message.parse_error()
            print "Error: %s" % err, debug
            self.player.set_state(gst.STATE_NULL)

    def on_sync_message(self, bus, message):
        if message.structure is None:
            return
        message_name = message.structure.get_name()
        if message_name == "prepare-xwindow-id":
            # Tell the video sink to render into our widget's native window.
            win_id = self.windowId
            assert win_id
            imagesink = message.src
            imagesink.set_property("force-aspect-ratio", True)
            imagesink.set_xwindow_id(win_id)

    def startPrev(self):
        self.player.set_state(gst.STATE_PLAYING)
        print "should be playing"

vidStream = Vid(wId)
vidStream.startPrev()
where wId is the window id of the widget I'm trying to display the output in. When I run this on the N900, the screen goes black and blinks. Any ideas? I'm dying here!
EDIT: I've been asked to post the full code, and though I still need to clean it up a bit, here's the relevant part:
self.cameraWindow = QtGui.QWidget(self)
self.cameraWindow.setGeometry(QtCore.QRect(530, 20, 256, 192))
self.cameraWindow.setObjectName("cameraWindow")
self.cameraWindow.setAttribute(0, 1)  # AA_ImmediateWidgetCreation == 0
self.cameraWindow.setAttribute(3, 1)  # AA_NativeWindow == 3

global wId
wId = self.cameraWindow.winId()

self.camera = Vid(wId)
self.camera.startPrev()
class Vid:
    def __init__(self, windowId):
        self.player = gst.Pipeline("player")
        self.source = gst.element_factory_make("v4l2src", "vsource")
        self.sink = gst.element_factory_make("autovideosink", "outsink")
        self.source.set_property("device", "/dev/video0")
        #self.scaler = gst.element_factory_make("videoscale", "vscale")
        self.fvidscale = gst.element_factory_make("videoscale", "fvidscale")
        self.fvidscale_cap = gst.element_factory_make("capsfilter", "fvidscale_cap")
        # Force the scaled output to match the widget's size (256x192).
        self.fvidscale_cap.set_property('caps', gst.caps_from_string('video/x-raw-yuv, width=256, height=192'))
        self.window_id = None
        self.windowId = windowId
        print windowId

        self.player.add(self.source, self.fvidscale, self.fvidscale_cap, self.sink)
        gst.element_link_many(self.source, self.fvidscale, self.fvidscale_cap, self.sink)

        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect("message", self.on_message)
        bus.connect("sync-message::element", self.on_sync_message)

    def on_message(self, bus, message):
        t = message.type
        if t == gst.MESSAGE_EOS:
            self.player.set_state(gst.STATE_NULL)
        elif t == gst.MESSAGE_ERROR:
            err, debug = message.parse_error()
            print "Error: %s" % err, debug
            self.player.set_state(gst.STATE_NULL)

    def on_sync_message(self, bus, message):
        if message.structure is None:
            return
        message_name = message.structure.get_name()
        if message_name == "prepare-xwindow-id":
            # Bind the video sink's output to the Qt widget's window id.
            win_id = self.windowId
            assert win_id
            imagesink = message.src
            imagesink.set_property("force-aspect-ratio", True)
            imagesink.set_xwindow_id(win_id)

    def startPrev(self):
        self.player.set_state(gst.STATE_PLAYING)

    def pausePrev(self):
        self.player.set_state(gst.STATE_NULL)
This is piecing together a few bits, and I can't test it right now, but maybe it will be helpful to someone. Good luck!
Comments (4)
If you happen to be using PySide instead of PyQt on a platform other than Linux, winId() returns a PyCObject, which can't be used directly with native functions or other modules. In my case this came in handy when using GStreamer (pygst) with PySide on Microsoft Windows:
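A minimal sketch of that conversion, assuming Python 2 and pygst 0.10: the PyCObject is unwrapped with ctypes into a raw pointer value before being handed to the sink (cameraWindow and imagesink are the names used in the question's code above).
import ctypes

# Sketch (assumption: Python 2 + PySide on Windows): winId() gives a PyCObject,
# so unwrap it into a plain pointer value that set_xwindow_id() will accept.
ctypes.pythonapi.PyCObject_AsVoidPtr.restype = ctypes.c_void_p
ctypes.pythonapi.PyCObject_AsVoidPtr.argtypes = [ctypes.py_object]

win_id = ctypes.pythonapi.PyCObject_AsVoidPtr(self.cameraWindow.winId())
imagesink.set_xwindow_id(win_id)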
Got it! It appears I needed to force the resolution of the pipeline to match the resolution of the widget where I was pumping the video to:
self.fvidscale_cap = gst.element_factory_make("capsfilter", "fvidscale_cap")
self.fvidscale_cap.set_property('caps', gst.caps_from_string('video/x-raw-yuv, width=256, height=192'))
Then just add those to the pipeline like the other elements, and it works great. Man, it seems so easy looking at it now, but when I was pounding my head against the wall for a few days it wasn't so obvious...
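For completeness, a short sketch of how those two elements slot into the pipeline, mirroring the full code posted in the question's edit:
# Link the capsfilter between the scaler and the sink so the negotiated
# caps match the 256x192 widget.
self.player.add(self.source, self.fvidscale, self.fvidscale_cap, self.sink)
gst.element_link_many(self.source, self.fvidscale, self.fvidscale_cap, self.sink)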
Ptterb, can you post your full code please?
I copied your code.
Added fvidscale_cap to the pipeline, with:
From the main program I create a new QWidget and pass its winId() to the Vid constructor.
The widget starts loading, but crashes.
The output says:
should be playing
Segmentation fault
The pasted code doesn't show the gobject loading, which can't be left out.
It took me a good while to figure out what was missing. Thanks, Jun, for having a working audio example.
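For reference, a minimal sketch of the gobject loading being referred to, assuming Python 2 and GStreamer 0.10; without gobject.threads_init() the bus callbacks are prone to crashing when run alongside a Qt event loop:
# Sketch (assumptions: Python 2, pygst 0.10): initialise GObject threading
# before building any pipelines, then pull in the 0.10 GStreamer bindings.
import gobject
gobject.threads_init()

import pygst
pygst.require("0.10")
import gst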