Reading frames via an FFmpeg command and displaying them with OpenCV's cv2.imshow
I am trying to grab frames with an ffmpeg command and display them with the OpenCV function cv2.imshow(). On an RTSP stream link, this snippet produces a garbled black-and-white image. The output is shown below the link [output of FFmpeg link]. I have also tried the ffplay command, but it displays the image directly; I cannot access the frames or apply any image processing.
import cv2
import numpy
import subprocess as sp

command = ['C:/ffmpeg/ffmpeg.exe',
           '-i', 'rtsp://192.168.1.12/media/video2',
           '-f', 'image2pipe',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo', '-']

pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
while True:
    raw_image = pipe.stdout.read(420*360*3)
    # transform the bytes read into a numpy array
    image = numpy.fromstring(raw_image, dtype='uint8')
    image = image.reshape((360, 420, 3))
    cv2.imshow('hello', image)
    cv2.waitKey(1)
    # throw away the data in the pipe's buffer
    pipe.stdout.flush()
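As an aside, the 420×360 frame size above is hardcoded. If the reshape fails or the image looks sheared, the stream's real resolution may differ; one way to check is to ask ffprobe first. A sketch (the helper names are illustrative; the RTSP URL is the one from the question):

```python
import subprocess as sp

def parse_wh(csv_line):
    # ffprobe with '-of csv=p=0' prints e.g. "420,360"
    w, h = csv_line.strip().split(',')
    return int(w), int(h)

def probe_resolution(url, ffprobe='ffprobe'):
    # Query the width and height of the first video stream.
    out = sp.check_output([
        ffprobe, '-v', 'error',
        '-select_streams', 'v:0',
        '-show_entries', 'stream=width,height',
        '-of', 'csv=p=0',
        url,
    ])
    return parse_wh(out.decode())

# e.g. width, height = probe_resolution('rtsp://192.168.1.12/media/video2')
```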
2 Answers
You're using the wrong output format; it should be -f rawvideo. That should fix your primary problem. The current -f image2pipe wraps the RGB data in an image container (not sure which, maybe BMP, since the rawvideo codec is being used?), so the frames are not displayed correctly.

Other tips:

- If grayscale is acceptable, use -pix_fmt gray and read 420*360 bytes at a time.
- Use np.frombuffer instead of np.fromstring.
- pipe.stdout.flush() is a dangerous move IMO, as the buffer may hold a partial frame. Consider setting bufsize to an exact integer multiple of the frame size in bytes.
- Add -r to match your processing rate (to avoid extraneous data transfer from ffmpeg to Python).

---

The value of pix_fmt should be bgr24. The default pixel format of OpenCV is BGR, and the equivalent format in ffmpeg is bgr24. Also, fromstring is deprecated now; you should use frombuffer instead.