How can I consume a video stream received by an nginx server?

Posted 2025-01-11 03:55:15


I have three nodes in my network:
dataServer --- node1 --- node2.
My video file "friends.mp4" is stored on dataServer. I run both dataServer and node2 as rtmp-nginx servers. On node1 I use ffmpeg to pull the stream from dataServer and push the converted stream to the "live" application on node2.
Here is my nginx.conf for node2:

worker_processes  1;

events {
    worker_connections  1024;
}

rtmp {
    server {
        listen 1935;
        chunk_size 4000;

        application play {
            play /usr/local/nginx/html/play;
        }

        application hls {
            live on;
            hls on;
            hls_path /usr/local/nginx/html/hls;
            hls_fragment 1s;
            hls_playlist_length 4s;
        }

        application live {
            live on;
            allow play all;
        }
    }
}

I want to run this Python code to recognize the faces in friends.mp4:

import cv2

vid_capture = cv2.VideoCapture("rtmp://127.0.0.1:1935/live")
face_detect = cv2.CascadeClassifier('./haarcascade_frontalface_default.xml')
if not vid_capture.isOpened():
    print("Error opening the video stream")
else:
    fps = vid_capture.get(cv2.CAP_PROP_FPS)
    print("Frames per second :", fps, 'FPS')
    frame_count = vid_capture.get(cv2.CAP_PROP_FRAME_COUNT)
    print('Frame count :', frame_count)

while vid_capture.isOpened():
    ret, frame = vid_capture.read()
    if ret:
        gray = cv2.cvtColor(frame, code=cv2.COLOR_BGR2GRAY)
        face_zone = face_detect.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
        for x, y, w, h in face_zone:
            cv2.rectangle(frame, pt1=(x, y), pt2=(x + w, y + h), color=[0, 0, 255], thickness=2)
            cv2.circle(frame, center=(x + w // 2, y + h // 2), radius=w // 2, color=[0, 255, 0], thickness=2)
        cv2.imshow('Frame', frame)
        key = cv2.waitKey(50)
        if key == ord('q'):
            break
    else:
        break
vid_capture.release()
cv2.destroyAllWindows()

But this fails: cv2.VideoCapture cannot open the stream at "rtmp://127.0.0.1:1935/live". Maybe that is because this path is not a file. How can I get the video stream received by the nginx server and feed it into my OpenCV model? Is there a way to access the stream the nginx server receives and turn it into a Python object that OpenCV can consume?
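A likely cause of the failure: with nginx-rtmp, a playback URL must name both the application and the stream key the publisher used, in the form rtmp://host:port/app/key, and "rtmp://127.0.0.1:1935/live" names only the application. A minimal sketch of building a correct URL (the host "node2" and key "livestream" are illustrative assumptions, not values from the config above):

```python
def rtmp_play_url(host, app, stream_key, port=1935):
    """Build a full RTMP playback URL for nginx-rtmp.

    nginx-rtmp routes play requests by application name AND stream key,
    so both parts must appear in the URL.
    """
    return f"rtmp://{host}:{port}/{app}/{stream_key}"

# "rtmp://127.0.0.1:1935/live" lacks the stream key, so the play
# request cannot be matched to any published stream.
print(rtmp_play_url("node2", "live", "livestream"))
# → rtmp://node2:1935/live/livestream
```

The stream key must match whatever name ffmpeg publishes with on node1.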


Answer from 嘴硬脾气大 (2025-01-18 03:55:15):


Try converting the file to a live stream, then use cv2 to process the stream:

DataServer --> Node1(FFmpeg MP4 to RTMP) --> Node2(Media Server)
Node2 ---> Node1(cv2 process RTMP)

For Node1, you could run a command like:

ffmpeg -re -i friends.mp4 -c copy -f flv rtmp://node2/live/livestream

Then you get an RTMP stream, which you can process on Node1 again:

cv2.VideoCapture("rtmp://node2:1935/live/livestream")

Please note that the RTMP server is not on Node1, so you should never use localhost or 127.0.0.1 in the URL that cv2 consumes.
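The relay step above can also be launched from Python, which keeps the whole pipeline in one script. A minimal sketch, assuming ffmpeg is on PATH and the hostname "node2" resolves (both are assumptions matching the command above):

```python
import subprocess

def build_relay_cmd(input_file, rtmp_url):
    # -re: read the file at its native frame rate (simulates a live source)
    # -c copy: pass audio/video through without re-encoding
    # -f flv: FLV is the container format RTMP expects
    return ["ffmpeg", "-re", "-i", input_file,
            "-c", "copy", "-f", "flv", rtmp_url]

cmd = build_relay_cmd("friends.mp4", "rtmp://node2/live/livestream")
print(" ".join(cmd))
# → ffmpeg -re -i friends.mp4 -c copy -f flv rtmp://node2/live/livestream
# To actually start the relay: subprocess.Popen(cmd)
```

Passing the arguments as a list (rather than a shell string) avoids quoting issues if the input path contains spaces.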
