How can I use the video data stream received by an nginx server?
I have three nodes in my network:
dataServer --- node1 --- node2.
My video data "friends.mp4" is saved on dataServer. I started both dataServer and node2 as rtmp-nginx servers. I use ffmpeg on node1 to pull the data stream from dataServer and push the converted stream to the application "live" on node2.
Here's my configuration of nginx.conf for node2.
worker_processes 1;
events {
    worker_connections 1024;
}
rtmp {
    server {
        listen 1935;
        chunk_size 4000;

        application play {
            play /usr/local/nginx/html/play;
        }
        application hls {
            live on;
            hls on;
            hls_path /usr/local/nginx/html/hls;
            hls_fragment 1s;
            hls_playlist_length 4s;
        }
        application live {
            live on;
            allow play all;
        }
    }
}
I want to run this Python code to recognize the faces in friends.mp4:

import cv2

vid_capture = cv2.VideoCapture("rtmp://127.0.0.1:1935/live")
face_detect = cv2.CascadeClassifier('./haarcascade_frontalface_default.xml')
if not vid_capture.isOpened():
    print("Error opening the video file")
else:
    fps = vid_capture.get(cv2.CAP_PROP_FPS)
    print("Frames per second : ", fps, 'FPS')
    frame_count = vid_capture.get(cv2.CAP_PROP_FRAME_COUNT)
    print('Frame count : ', frame_count)

while vid_capture.isOpened():
    ret, frame = vid_capture.read()
    if ret:
        gray = cv2.cvtColor(frame, code=cv2.COLOR_BGR2GRAY)
        face_zone = face_detect.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
        for x, y, w, h in face_zone:
            cv2.rectangle(frame, pt1=(x, y), pt2=(x + w, y + h), color=[0, 0, 255], thickness=2)
            cv2.circle(frame, center=(x + w // 2, y + h // 2), radius=w // 2, color=[0, 255, 0], thickness=2)
        cv2.imshow('Frame', frame)
        key = cv2.waitKey(50)
        if key == ord('q'):
            break
    else:
        break

vid_capture.release()
cv2.destroyAllWindows()
But I can't do it, because cv2.VideoCapture cannot get the data stream from "rtmp://127.0.0.1:1935/live". Maybe it is because this path is not a file. How can I get the video stream received by the nginx server and feed it into my OpenCV model? Is there a way to access the data stream received by the nginx server and turn it into a Python object that OpenCV can use?
Comments (1)
Try to change the file to a live stream, then use cv2 to process the stream:
For Node1, you could run a command like:
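The original answer's command was not captured on this page. A minimal sketch of what it could look like, assuming dataServer is reachable at 192.168.1.1 and node2 at 192.168.1.3 (both addresses, and the stream key "friends", are hypothetical, and dataServer is assumed to expose the file through a VOD "play" application like the one shown in the config above):

```shell
# Sketch only: 192.168.1.1 (dataServer) and 192.168.1.3 (node2) are
# hypothetical addresses; "friends" is an assumed stream key.
# Pull the source from dataServer's "play" application at its native
# frame rate (-re), and republish it without re-encoding (-c copy)
# as FLV to the "live" application on node2.
ffmpeg -re -i rtmp://192.168.1.1:1935/play/friends \
       -c copy -f flv rtmp://192.168.1.3:1935/live/friends
```

If node2 rejects the copied codecs, re-encoding with `-c:v libx264 -c:a aac` instead of `-c copy` is a common fallback.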
Then you get an RTMP stream, which you can process on Node1 again:
Please note that the RTMP server is not on Node1, so you should never use localhost or 127.0.0.1 for cv2 to consume it.