Streaming GStreamer video to a different address using OpenCV and Python

In my last question I was struggling to open a GStreamer pipeline to stream a webcam video feed, but with help I managed to build my video sender and video receiver on the same computer. A new problem arose when trying to host that video stream on a different computer.
Both computers are connected over Ethernet and are on the same subnet (only the last octet of the IP differs, of course).
My sender code:

sender.py

import cv2

camset='v4l2src device=/dev/video0 ! video/x-raw,width=640,height=360,framerate=52/1 ! \
    nvvidconv flip-method=0 ! video/x-raw(memory:NVMM), format=I420, width=640, height=360 ! \
        nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! queue ! appsink drop=1'



gst_str_rtp = "appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv !\
     video/x-raw(memory:NVMM),format=NV12,width=640,height=360,framerate=52/1 ! nvv4l2h264enc insert-sps-pps=1 \
        insert-vui=1 idrinterval=30 ! h264parse ! rtph264pay ! udpsink host=169.254.84.12 port=5004 auto-multicast=0"
    

# Frame size must match the 640x360 caps used in the pipelines above
frame_width, frame_height = 640, 360
out = cv2.VideoWriter(gst_str_rtp, cv2.CAP_GSTREAMER, 0, float(52), (frame_width, frame_height), True)

cap = cv2.VideoCapture(camset,cv2.CAP_GSTREAMER)

if not cap.isOpened():
    print("Cannot capture from camera. Exiting.")
    quit()

# Check writer
if not out.isOpened():
    print("Cannot write. Exiting.")
    quit()

# Go
while True:
    ret, frame = cap.read()
    if not ret:
        break

    out.write(frame)
    cv2.imshow("sender", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

In gst_str_rtp I've set host to the IP of the computer I want to send my stream to.
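
A quick way to rule out the camera is to push a synthetic test pattern through the exact same writer pipeline. This is only a sketch, assuming numpy is available and OpenCV was built with GStreamer support; gst_str_rtp is the same string as above:

import time
import cv2
import numpy as np

# Same output pipeline as above; destination host/port unchanged
gst_str_rtp = "appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! \
    video/x-raw(memory:NVMM),format=NV12,width=640,height=360,framerate=52/1 ! nvv4l2h264enc insert-sps-pps=1 \
    insert-vui=1 idrinterval=30 ! h264parse ! rtph264pay ! udpsink host=169.254.84.12 port=5004 auto-multicast=0"

out = cv2.VideoWriter(gst_str_rtp, cv2.CAP_GSTREAMER, 0, 52.0, (640, 360), True)
if not out.isOpened():
    raise RuntimeError("VideoWriter did not open; check GStreamer support in OpenCV")

frame = np.zeros((360, 640, 3), dtype=np.uint8)
for i in range(520):                      # roughly 10 seconds at 52 fps
    frame[:] = 0
    x = (i * 5) % 620
    frame[:, x:x + 20] = 255              # moving white bar, easy to spot on the receiver
    out.write(frame)
    time.sleep(1.0 / 52)

out.release()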

and my receiver:

server.py

import os
import cv2

def main():
    global video_frame

    camSet='udpsrc address=169.254.84.12 port=5004 auto-multicast=0 ! application/x-rtp,media=video,encoding-name=H264 ! \
        rtpjitterbuffer latency=0 \
    ! rtph264depay ! decodebin ! nvvidconv ! video/x-raw,format=BGRx ! \
        videoconvert ! video/x-raw,format=BGR ! appsink drop=1'
    print("before capture")
    try:
        try:
            cap = cv2.VideoCapture(camSet)
        except:
            print("capture fail")
        while (cap.isOpened()):
            print("in loop")
            ret, frame = cap.read()
            try:
                cv2.imshow('stream',frame) 
            except:
                print("fail")

            #outvid.write(frame)
            if cv2.waitKey(1)==ord('q'):
                break
        cap.release()
    except:
        print("bad capture")
    
    cv2.destroyAllWindows()
    os._exit(0)
    exit()

if __name__ == '__main__':
    main()

I've tried with and without the address; neither worked.
If I launch both server.py and sender.py on the same computer (Ubuntu 18.04 on the Jetson NX, which will always be the sender) and change the host and address to localhost, the stream works perfectly, but when trying to stream over the network with different IP addresses I keep getting None from my VideoCapture on the server side.

All my firewalls are down.
Wireshark shows that the sender side works:

source           destination   protocol length        info

169.254.84.2    169.254.84.12   UDP      1442         57170 → 5004 Len=1400

169.254.84.2    169.254.84.12   UDP      1442         57170 → 5004 Len=1400

169.254.84.2    169.254.84.12   UDP      1442         57170 → 5004 Len=1400
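
Since this capture is taken on the sending machine, it only proves that packets leave the sender. To confirm they actually arrive on the receiving computer, independently of GStreamer and OpenCV, a bare UDP listener can be run there first (a minimal sketch; port 5004 matches the udpsink settings above):

import socket

# Listen on the port udpsink sends to, on all local interfaces
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5004))
sock.settimeout(5.0)

try:
    data, addr = sock.recvfrom(2048)
    print("Got", len(data), "bytes from", addr)   # packets are reaching this host
except socket.timeout:
    print("No packets within 5 s - check interface, routing and firewall")
finally:
    sock.close()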

I'm not really sure what more I can try; maybe there is some configuration that doesn't work for some reason.
Any help would be appreciated.

1 Answer

回忆躺在深渊里 answered on 2025-01-27:


This may not be the solution for your case, but the following may help locate the issue:

  1. First, check from a terminal with:
gst-launch-1.0 -v udpsrc address=169.254.84.12 port=5004 auto-multicast=0 ! application/x-rtp,media=video,encoding-name=H264 ! rtpjitterbuffer latency=0 ! rtph264depay ! decodebin ! autovideosink

# For Windows add .exe
gst-launch-1.0.exe -v udpsrc address=169.254.84.12 port=5004 auto-multicast=0 ! application/x-rtp,media=video,encoding-name=H264 ! rtpjitterbuffer latency=0 ! rtph264depay ! decodebin ! autovideosink

If this doesn't work, post the command and its output.

If it works, you have GStreamer installed and are able to receive and decode the RTP/H264 stream over UDP, so the next step would be to try it from OpenCV.

  2. Check that your OpenCV build has GStreamer support. In Python, try:
import cv2
print(cv2.getBuildInformation())

If in the output you see something like:

    GStreamer:                   NO

then your OpenCV library has no GStreamer support (you may be able to read the stream with another backend such as FFMPEG, if available).

  3. Only use nvvidconv when running on NVIDIA hardware. On a general computer you may just use:
udpsrc address=169.254.84.12 port=5004 auto-multicast=0 ! application/x-rtp,media=video,encoding-name=H264 ! rtpjitterbuffer latency=0 ! rtph264depay ! decodebin ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1
  4. Specify the backend when creating the capture (a combined sketch of points 2 to 4 follows after this list):
cap = cv2.VideoCapture(camSet, cv2.CAP_GSTREAMER)
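
Putting points 2 to 4 together, a minimal receiver script could look like the sketch below. It assumes an OpenCV build with GStreamer support and uses the generic (non-NVIDIA) pipeline from point 3; adjust the address to your setup:

import cv2

# Point 2: confirm this OpenCV build reports GStreamer support
info = cv2.getBuildInformation()
gst_line = next((line for line in info.splitlines() if "GStreamer" in line), "GStreamer: ?")
print(gst_line.strip())                 # should say YES; if NO, rebuild OpenCV with GStreamer

# Point 3: generic receive pipeline, no NVIDIA-specific elements
camSet = ("udpsrc address=169.254.84.12 port=5004 auto-multicast=0 ! "
          "application/x-rtp,media=video,encoding-name=H264 ! "
          "rtpjitterbuffer latency=0 ! rtph264depay ! decodebin ! "
          "videoconvert ! video/x-raw,format=BGR ! appsink drop=1")

# Point 4: explicitly request the GStreamer backend
cap = cv2.VideoCapture(camSet, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("VideoCapture could not open the GStreamer pipeline")

while True:
    ret, frame = cap.read()
    if not ret:                         # no decoded frames are arriving
        print("No frame received")
        break
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()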