Digital camera algorithms
I'm working on a simple video device and I'd like to introduce some standard cool camera features. Among others, I'd like to introduce:
- Focus indicator
- Auto focus
- Auto exposure (ideal exposure time estimation)
Right now I'm looking for some examples of how these features can be implemented. Do you have any useful links?
EDIT:
OK, I will use a standard CCD camera which can provide ~20 fps at ~1 MPix resolution. I'm planning to write it in C#; in case of performance issues, I'll switch to C++. I'll have a lens + CCD camera + motor.
EDIT:
I'd like to see some more detailed algorithm descriptions. I'm sure some of these are taught in university courses, but I'm having trouble finding them. For the focus indicator I've tried a primitive approach, but it fails in some cases.
int verticalDiff = 0, horizontalDiff = 0;

// Calculate the vertical differences
// (assumes an 8-bit grayscale bitmap locked with LockBits: data.Scan0, stride)
for (int x = 0; x < toAnalyze.Width; x++)
{
    for (int y = 1; y < toAnalyze.Height; y++)
    {
        byte* pixel = (byte*)data.Scan0 + y * stride + x;
        verticalDiff += Math.Abs(*pixel - *(pixel - stride));
    }
}
verticalDiff /= toAnalyze.Width * (toAnalyze.Height - 1);

// Calculate the horizontal differences
for (int y = 0; y < toAnalyze.Height; y++)
{
    for (int x = 1; x < toAnalyze.Width; x++)
    {
        byte* pixel = (byte*)data.Scan0 + y * stride + x;
        horizontalDiff += Math.Abs(*pixel - *(pixel - 1));
    }
}
horizontalDiff /= (toAnalyze.Width - 1) * toAnalyze.Height;

// And return the average of the two directional measures
return (verticalDiff + horizontalDiff) / 2;
Thanks
Just to inform you: I am working on professional forensic 5-megapixel digital camera software in WPF, in .NET rather than C++. There are some threading issues to be aware of, but it works perfectly fast, and it is even more performant because the GPU is used.
Jerry did good work with his answer.
Focus detection is contrast detection based on time/frames. The logic is simple; keeping it performant is not.
Auto Focus detection
Checking the exposure time is easy if you have created the histogram of the image. Image histogram
In any case you need to do it for each channel of the RGB digital image. This mix makes it a bit more complicated, because you can also use the colour gain channels to increase the brightness of the image: luminance can be raised the same way with "gain" as with "exposure" time.
If you calculate the exposure time automatically, keep in mind that you need a frame to calculate it, and the shorter the exposure time, the more frames you get. That means that if you want a good algorithm, always start with a very short exposure time and increase it slowly. Do not use a linear algorithm where you start high and decrease the value slowly.
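A minimal sketch of that idea, assuming an 8-bit grayscale frame and a camera whose exposure is set in milliseconds (the target value, the 50 ms cap and the 1.1/0.7 step factors are my assumptions, not from the answer):

// Sketch: ramp the exposure up from a short value until the mean brightness
// of the histogram reaches a target (~middle gray on an 8-bit scale).
static double AdjustExposure(byte[] frame, double currentExposureMs)
{
    const double targetMean = 118.0;      // desired average brightness
    const double maxExposureMs = 50.0;    // stay at or above 20 fps

    if (frame.Length == 0)
        return currentExposureMs;

    // Build the histogram and compute the mean brightness of the frame.
    var histogram = new int[256];
    foreach (byte p in frame)
        histogram[p]++;

    long sum = 0;
    for (int level = 0; level < 256; level++)
        sum += (long)level * histogram[level];
    double mean = (double)sum / frame.Length;

    // Start short and increase slowly; back off quickly if the image is too bright.
    if (mean < targetMean)
        return Math.Min(currentExposureMs * 1.1, maxExposureMs);
    if (mean > targetMean * 1.15)
        return currentExposureMs * 0.7;
    return currentExposureMs;             // close enough, keep the current value
}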
There are also more methods available on digital cameras, like Pixel binning, to increase the frame rate and get quick focus results.
Here is a sample of how focus measurement can work to generate a focus intensity image:
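A rough sketch of how such a focus-intensity image could be generated, scoring each block by local contrast (the 16x16 block size and the scaling factor are my assumptions):

// Sketch: build a "focus intensity" image by scoring each 16x16 block with
// the average absolute difference between neighbouring pixels, then writing
// that score back as the block's brightness. Sharp regions come out bright.
static byte[] FocusIntensityImage(byte[] gray, int width, int height)
{
    const int block = 16;
    var result = new byte[gray.Length];

    for (int by = 0; by < height; by += block)
    {
        for (int bx = 0; bx < width; bx += block)
        {
            long contrast = 0;
            int count = 0;
            int yEnd = Math.Min(by + block, height);
            int xEnd = Math.Min(bx + block, width);

            for (int y = by + 1; y < yEnd; y++)
            {
                for (int x = bx + 1; x < xEnd; x++)
                {
                    int i = y * width + x;
                    contrast += Math.Abs(gray[i] - gray[i - 1]);       // horizontal
                    contrast += Math.Abs(gray[i] - gray[i - width]);   // vertical
                    count += 2;
                }
            }

            // Scale the average difference up so the map is clearly visible.
            byte value = (byte)Math.Min(255, count == 0 ? 0 : contrast * 8 / count);
            for (int y = by; y < yEnd; y++)
                for (int x = bx; x < xEnd; x++)
                    result[y * width + x] = value;
        }
    }
    return result;
}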
AForge.NET has a lot of stuff for doing image processing, including edge detection and convolution filters. Another (larger) library you might want to look at is OpenCV, but it only has wrappers for .NET, whereas AForge is written in C# directly.
Starting from the end, so to speak:
Auto-exposure is pretty simple: measure the light level and figure out how long of an exposure is needed for that average light to produce ~15-18% gray level. There are lots of attempts at improving that (usually by metering a number of sections of the picture separately, and processing those results), but that's the starting point.
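As a rough sketch of that starting point (it assumes brightness responds roughly linearly to exposure time, which holds away from clipping; the exact target value is an assumption):

// Sketch: scale the current exposure so the measured mean brightness lands
// near 18% gray (about 46 on an 8-bit scale).
static double ExposureForMiddleGray(double currentExposureMs, double measuredMean)
{
    const double targetMean = 0.18 * 255.0;        // ~46 on an 8-bit scale
    measuredMean = Math.Max(measuredMean, 1.0);    // guard against a black frame
    return currentExposureMs * targetMean / measuredMean;
}

The metered-sections refinement mentioned above would simply replace measuredMean with a weighted combination of per-zone means before applying the same scaling.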
There are two separate types of autofocus. Most video cameras use one based on detecting contrast -- look at the input from the sensor, and when the differences between adjacent pixels are maximized, you consider that "in focus."
Contrast detection autofocus does make it a bit difficult to do focus indication though -- in particular, you never really know when you've achieved maximum contrast until the contrast starts to fall again. When you're doing autofocus, you focus until you see a peak and then see it start to fall again, and then drive it back to where it was highest. For manual focus with an indicator, you can't recognize maximum contrast until it starts to fall again. The user would have to follow roughly the same pattern, moving past best focus, then back to optimum.
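A sketch of that peak-seeking behaviour, assuming a focus measure like the one in the question and hypothetical motor/capture callbacks (all names here are mine):

// Sketch: step the focus motor while the contrast measure keeps rising;
// once it has clearly dropped past the peak, drive back to the best position.
static void ContrastDetectAutofocus(
    Action<int> moveFocus,              // move motor to an absolute position
    Func<double> captureFocusMeasure,   // grab a frame and return its sharpness
    int minPos, int maxPos, int step)
{
    int bestPos = minPos;
    double bestScore = double.MinValue;

    for (int pos = minPos; pos <= maxPos; pos += step)
    {
        moveFocus(pos);
        double score = captureFocusMeasure();

        if (score > bestScore)
        {
            bestScore = score;
            bestPos = pos;
        }
        else if (score < bestScore * 0.9)
        {
            // Contrast has clearly started falling again: we passed the peak.
            break;
        }
    }

    moveFocus(bestPos);   // drive back to where contrast was highest
}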
Alternatively, you could use phase detection. This uses the alignment of the "pictures" coming through two prisms, much like the split-image viewfinders that were used in many (most?) SLRs before autofocus came into use.