Understanding /dev/video output
I've tried writing a simple application that's supposed to detect pixel differences from the /dev/video device, like motion does.
I don't know how the /dev/video device works, so most of this was guesswork. What I found is that the data (from this particular webcam) seems to come in sections of 8192 bytes, and I assume each one represents a frame. The first ±600 bytes of each "frame" are identical to those of the previous frame.
How can I interpret that data as an understandable matrix of pixels?
The program code, for reference:
#!/usr/bin/ruby

# Observed chunk size from the device; assumed here to be one frame.
NUM_OF_BYTES_PER_FRAME = 8192

# Calculates the fraction of elements that differ between two arrays.
def percentage_difference( arrayA, arrayB )
  top_size = arrayA.size > arrayB.size ? arrayA.size : arrayB.size
  diff_elements = 0
  0.upto( top_size - 1 ) do |i|
    diff_elements += 1 unless arrayA[i] == arrayB[i]
  end
  ( 1.0 * diff_elements ) / top_size
end

cam = File.open( '/dev/video', 'rb' )
lastframe = []
loop do
  # Read one chunk from the open video device and convert it to an array of bytes.
  newframe = cam.readpartial( NUM_OF_BYTES_PER_FRAME ).bytes
  # Print the percentage difference between the two frames.
  puts percentage_difference( lastframe, newframe )
  lastframe = newframe
end
Comments (2)
Reading from /dev/video is not straightforward.
I suggest using a dedicated library for this.
Maybe you can try the OpenCV library. It has an easy interface to the raw pixels from webcams and cameras.
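As a rough illustration of the same frame-differencing idea, here is a minimal sketch using OpenCV's Python bindings (the Ruby bindings are far less common). It assumes the webcam shows up as device index 0 (typically /dev/video0); that index and the rest of the setup are assumptions, not details from the original post.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)            # open the first video device (assumed /dev/video0)
ok, last = cap.read()                # decoded frame as a height x width x 3 array
if not ok:
    raise RuntimeError("could not read from the camera")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Per-pixel absolute difference, then the fraction of pixels that changed.
    diff = cv2.absdiff(frame, last)
    changed = np.count_nonzero(diff.any(axis=2))
    print(changed / float(frame.shape[0] * frame.shape[1]))
    last = frame

The advantage over reading /dev/video directly is that OpenCV (and the V4L2 layer underneath it) handles the format negotiation, so you work with decoded pixel values instead of a raw byte stream.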
I know nothing about the topic, but maybe this applies?
There is documentation explaining what each byte means, along with code samples in C:
http://v4l2spec.bytesex.org
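For the original question of turning the raw bytes into a pixel matrix: the V4L2 spec linked above describes how to query the negotiated width, height, and pixel format (the VIDIOC_G_FMT ioctl, or the v4l2-ctl --get-fmt-video command). Once those are known, a raw frame can simply be reshaped. Below is an illustrative Python/NumPy sketch; the 320x240 size and the YUYV 4:2:2 format are assumptions, not values read from the device, and many drivers only support the mmap streaming interface rather than plain read().

import numpy as np

width, height = 320, 240                 # assumed frame size
frame_size = width * height * 2          # YUYV 4:2:2 packs 2 bytes per pixel

with open('/dev/video', 'rb') as cam:
    raw = b''
    while len(raw) < frame_size:         # reads may come back in short chunks
        raw += cam.read(frame_size - len(raw))

# In YUYV every pixel carries one luma (Y) byte plus a shared chroma byte;
# the Y plane alone is a usable grayscale matrix for motion detection.
yuyv = np.frombuffer(raw, dtype=np.uint8).reshape(height, width, 2)
gray = yuyv[:, :, 0]                     # height x width matrix of brightness values
print(gray.shape, gray.mean())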