How can I capture an RTSP stream to disk based on a trigger?
I think what I'm asking about is similar to this ffmpeg post about how to capture a lightning strike (https://trac.ffmpeg.org/wiki/Capture/Lightning).
I have a Raspberry Pi with an IP cam over RTSP, and what I'm wondering is how to maintain a continual 5 second live video buffer, until I trigger a "save" command which will pipe that 5 second buffer to disk, and continue streaming the live video to disk until I turn it off.
Essentially, Pi boots up, this magic black box process starts and is saving live video into a fixed-size, 5-second buffer, and then let's say an hour later - I click a button, and it flushes that 5-second buffer to a file on disk and continues to pipe the video to disk, until I click cancel.
In my environment, I'm able to use ffmpeg, gstreamer, or openRTSP. For each of these, I can connect to my RTSP stream and save it to disk, but I'm not sure how to create this ever-present 5 second cache.
I feel like the gstreamer docs are alluding to it here (https://gstreamer.freedesktop.org/documentation/application-development/advanced/buffering.html?gi-language=c), but I guess I'm just not grokking how the buffering fits in with a triggered save. From that article, I get the impression that the end-time of the video is known in advance (I could artificially limit mine, I guess).
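For reference, the pre-record pattern people usually reach for in GStreamer is a leaky `queue` (which drops its oldest buffers once full, so it always holds roughly the last 5 seconds) followed by a `valve` that blocks output until you trigger the save. This is an untested sketch, not a working solution: the camera URL and output filename are placeholders, `gst-launch-1.0` alone can't flip the valve at runtime (an application using the GStreamer API would have to set `drop=false`), and keyframe/GOP alignment would still need handling so the saved file starts on a decodable frame.

```shell
# Sketch only: hold ~5 s of H.264 in a leaky queue, gate output with a valve.
gst-launch-1.0 rtspsrc location=rtsp://CAMERA/stream ! rtph264depay ! h264parse \
  ! queue leaky=downstream max-size-time=5000000000 max-size-buffers=0 max-size-bytes=0 \
  ! valve name=gate drop=true \
  ! h264parse ! mp4mux ! filesink location=capture.mp4
# On "save", an application would set gate's drop property to false
# (e.g. g_object_set(gate, "drop", FALSE, NULL)) to start flushing to disk.
```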
I'm not in a great position to post-process the file, so using something like openRTSP, saving a whole bunch of video segments, and then merging them isn't really an option.
Note: After a successful save, I wouldn't need to save another video for a minute or so, so that 5-second cache has plenty of time to fill back up before the next one.
This is the closest similar question that I've found: https://video.stackexchange.com/questions/18514/ffmpeg-buffered-recording
Comments (2)
Hey,
I don't know if you have any experience with Python, but there is a library called PyAV that is a convenient Python wrapper/interface for ffmpeg.
With it you can read frames from an RTSP source and handle those frames however you want.
Here is just an idea/hack implementation of what you describe; you would need to design the frame buffer yourself. If you know you get 25 FPS from your camera, you can restrict the queue size to 125 (5 s × 25 FPS).
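To make the frame-buffer idea concrete, here is a minimal sketch of just the buffering logic. Frames are opaque objects here (in practice they would be packets or frames pulled from a PyAV RTSP container, which this sketch deliberately omits), and `sink` is a stand-in for whatever muxer/file writer you use; the class names are made up for illustration.

```python
from collections import deque

class PreRecordBuffer:
    """Keep the most recent fps * seconds frames; flush them on a trigger.

    Before the trigger, frames circulate in a fixed-size ring buffer.
    After trigger_save(), the buffered history is written out first and
    live frames then go straight to the sink until stop() is called.
    """

    def __init__(self, fps=25, seconds=5):
        # deque with maxlen silently discards the oldest entry when full,
        # so this always holds at most the last `seconds` of video.
        self.buffer = deque(maxlen=fps * seconds)
        self.recording = False
        self.sink = []  # stand-in for a muxer / file writer

    def push(self, frame):
        if self.recording:
            self.sink.append(frame)    # live frames go straight to "disk"
        else:
            self.buffer.append(frame)  # otherwise only the ring buffer grows

    def trigger_save(self):
        # Flush the 5-second history first, then keep recording live.
        self.sink.extend(self.buffer)
        self.buffer.clear()
        self.recording = True

    def stop(self):
        self.recording = False

if __name__ == "__main__":
    rec = PreRecordBuffer(fps=25, seconds=5)
    for i in range(1000):           # 40 s of "video" before the trigger
        rec.push(i)
    rec.trigger_save()              # flushes frames 875..999, keeps going
    for i in range(1000, 1100):     # 4 more seconds of live video
        rec.push(i)
    rec.stop()
    print(len(rec.sink))            # 125 buffered + 100 live = 225
    print(rec.sink[0])              # 875: oldest frame still in the ring
```

In a real PyAV loop you would call `push()` for each demuxed packet and wire `trigger_save()` to your button; the deque's `maxlen` does the "fixed-size 5-second cache" work for you.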
iSpy/AgentDVR can do exactly what you want: https://www.ispyconnect.com/userguide-recording.aspx
Edit:
iSpy runs only on Windows, unlike AgentDVR, which also has versions for Linux/OSX/RPi.