Correcting fisheye distortion programmatically
BOUNTY STATUS UPDATE:
I discovered how to map a linear lens, from destination coordinates to source coordinates.
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
1). I actually struggle to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code in the style of the converting functions I posted?
2). I also see that my undistortion is imperfect on some lenses - presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination coordinates for those lenses? Again, more code than just mathematical formulae please...
Question as originally stated:
I have some points that describe positions in a picture taken with a fisheye lens.
I want to convert these points to rectilinear coordinates. I want to undistort the image.
I've found this description of how to generate a fisheye effect, but not how to reverse it.
There's also a blog post that describes how to use tools to do it; these pictures are from that:
(1) SOURCE (original photo link)
Input: original image with fisheye distortion to fix.
(2) DESTINATION (original photo link)
Output: corrected image (technically also with perspective correction, but that's a separate step).
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
My function stub looks like this:
Point correct_fisheye(const Point& p, const Size& img) {
    // to polar
    const Point centre = {img.width/2, img.height/2};
    const Point rel = {p.x-centre.x, p.y-centre.y};
    const double theta = atan2(rel.y, rel.x);
    double R = sqrt((rel.x*rel.x) + (rel.y*rel.y));
    // fisheye undistortion in here please
    //... change R ...
    // back to rectangular
    const Point ret = Point(centre.x+R*cos(theta), centre.y+R*sin(theta));
    fprintf(stderr, "(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n", p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
    return ret;
}
Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
Comments (7)
The description you mention states that the projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by
R_u = f * tan(theta)
and the projection by common fisheye lens cameras (that is, distorted) is modeled by
R_d = 2 * f * sin(theta / 2)
You already know R_d and theta, and if you knew the camera's focal length (represented by f) then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,
R_u = f * tan(2 * asin(R_d / (2 * f)))
is the formula you're looking for. Estimating the focal length f can be solved by calibrating the camera, or by other means such as letting the user provide feedback on how well the image is corrected or using knowledge from the original scene.
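In code, in the style of the stub in the question (a sketch of the formula above only; the helper name and the way f is supplied are just for illustration, not part of the original answer):

#include <cmath>

// Map a distorted (fisheye) radius R_d to the undistorted (rectilinear) radius R_u,
// assuming the models R_d = 2*f*sin(theta/2) and R_u = f*tan(theta) quoted above.
double undistort_radius(double R_d, double f) {
    const double theta = 2.0 * std::asin(R_d / (2.0 * f)); // invert the fisheye model
    return f * std::tan(theta);                            // apply the pinhole model
}

// Inside correct_fisheye() from the question, this would be the "change R" step:
//     R = undistort_radius(R, f);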
In order to solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (don't forget to check the correction). Then you can use a program such as this one (written with the Python bindings for OpenCV) in order to reverse lens distortion:
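The script itself is not reproduced in this copy. As a rough equivalent, here is a sketch using OpenCV's C++ API (rather than the Python bindings), assuming the camera matrix K and distortion coefficients distCoeffs have already been obtained from calibration:

#include <opencv2/opencv.hpp>

// One-off undistortion of a single image.
cv::Mat undistort_image(const cv::Mat& img, const cv::Mat& K, const cv::Mat& distCoeffs) {
    cv::Mat out;
    cv::undistort(img, out, K, distCoeffs);
    return out;
}

// For a live video feed, compute the remap tables once and reuse them per frame;
// this is much faster than calling cv::undistort on every frame.
void make_undistort_maps(const cv::Mat& K, const cv::Mat& distCoeffs,
                         const cv::Size& size, cv::Mat& map1, cv::Mat& map2) {
    cv::initUndistortRectifyMap(K, distCoeffs, cv::Mat(), K, size, CV_32FC1, map1, map2);
}
// Per frame:  cv::remap(frame, out, map1, map2, cv::INTER_LINEAR);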
Also note that OpenCV uses a very different lens distortion model to the one in the web page you linked to.
(Original poster, providing an alternative)
The following function maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates. (I'd appreciate help in reversing it)
I got to this point through trial and error: I don't fundamentally grasp why this code works; explanations and improved accuracy would be appreciated!
When used with a factor of 3.0, it successfully undistorts the images used as examples (I made no attempt at quality interpolation). The result image link is now dead. (There was also a comparison image from the blog post, likewise a dead link.)
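The function listing itself has not survived in this copy. For illustration only (this is not the poster's original code; it uses the sin/tan lens model from the answer above in place of the poster's empirical factor), a destination-to-source mapper with the same shape as the stub in the question might look like this:

// Sketch only: map a destination (rectilinear) pixel to the source (fisheye)
// pixel it should be sampled from. f is the focal length in pixels; the radial
// model here is an assumption, not the poster's formula.
Point undistorted_to_source(const Point& p, const Size& img, double f) {
    const Point centre = {img.width/2, img.height/2};
    const double dx = p.x - centre.x;
    const double dy = p.y - centre.y;
    const double theta = atan2(dy, dx);                   // angle is unchanged by the lens
    const double Ru = sqrt(dx*dx + dy*dy);                // radius in the output image
    const double Rd = 2.0 * f * sin(atan(Ru / f) / 2.0);  // radius in the fisheye image
    return Point(centre.x + Rd*cos(theta), centre.y + Rd*sin(theta));
}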
If you think your formulas are exact, you can compute an exact formula with trig, like so:
Rout = f * tan(2 * asin(Rin / (2*f)))
However, as @jmbr says, the actual camera distortion will depend on the lens and the zoom. Rather than rely on a fixed formula, you might want to try a polynomial expansion:
Rout = Rin * (1 + A * Rin^2 + B * Rin^4 + ...)
By tweaking first A, then higher-order coefficients, you can compute any reasonable local function (the form of the expansion takes advantage of the symmetry of the problem). In particular, it should be possible to compute initial coefficients to approximate the theoretical function above.
Also, for good results, you will need to use an interpolation filter to generate your corrected image. As long as the distortion is not too great, you can use the kind of filter you would use to rescale the image linearly without much problem.
Edit: as per your request, the equivalent scaling factor for the above formula:
If you plot the above formula alongside tan(Rin/f), you can see that they are very similar in shape. Basically, distortion from the tangent becomes severe before sin(w) becomes much different from w.
The inverse formula should be something like:
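The scaling-factor and inverse formulas did not survive in this copy. As a sketch of the ideas above in code, under the assumption that the lens follows the sin/tan models quoted in the earlier answer (A and B below are coefficients to tune by hand, not values from the original answer):

#include <cmath>

// Exact mapping via trig, assuming Rin = 2*f*sin(w/2) (fisheye) and
// Rout = f*tan(w) (rectilinear) for the same ray angle w.
double exact_rout(double Rin, double f) {
    return f * std::tan(2.0 * std::asin(Rin / (2.0 * f)));
}

// Exact inverse of the above (rectilinear radius back to fisheye radius).
double exact_rin(double Rout, double f) {
    return 2.0 * f * std::sin(std::atan(Rout / f) / 2.0);
}

// Polynomial approximation: only even powers of Rin appear because the
// distortion is radially symmetric. Tune A first, then B.
double poly_rout(double Rin, double A, double B) {
    const double r2 = Rin * Rin;
    return Rin * (1.0 + A * r2 + B * r2 * r2);
}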
I blindly implemented the formulas from here, so I cannot guarantee it would do what you need.
Use auto_zoom to get the value for the zoom parameter.
I took what JMBR did and basically reversed it. He took the radius of the distorted image (Rd, that is, the distance in pixels from the center of the image) and found a formula for Ru, the radius of the undistorted image.
You want to go the other way. For each pixel in the undistorted (processed) image, you want to know what the corresponding pixel is in the distorted image.
In other words, given (xu, yu) --> (xd, yd). You then replace each pixel in the undistorted image with its corresponding pixel from the distorted image.
Starting where JMBR did, I do the reverse, finding Rd as a function of Ru. I get:
Rd = f * sqrt(2) * sqrt( 1 - 1 / sqrt(r^2 + 1) )
where f is the focal length in pixels (I'll explain later), and r = Ru/f.
The focal length for my camera was 2.5 mm. The size of each pixel on my CCD was 6 um square. f was therefore 2500/6 = 417 pixels. This can be found by trial and error.
Finding Rd allows you to find the corresponding pixel in the distorted image using polar coordinates.
The angle of each pixel from the center point is the same:
theta = arctan( (yu-yc)/(xu-xc) )
where xc, yc are the center point. Then
xd = xc + Rd * cos(theta)
yd = yc + Rd * sin(theta)
Make sure you know which quadrant you are in.
Here is the C# code I used
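The C# listing has not survived in this copy. A sketch of the same reverse-mapping idea (in C++ with OpenCV rather than the author's original C#; nearest-neighbour sampling only, 8-bit BGR input assumed):

#include <cmath>
#include <opencv2/opencv.hpp>

// Build the undistorted image by asking, for every output pixel, which input
// (fisheye) pixel it comes from. Uses Rd = 2*f*sin(atan(Ru/f)/2), which is the
// same relation as the formula above, just written with trig functions.
cv::Mat undistort(const cv::Mat& src, double f) {
    cv::Mat dst(src.size(), src.type(), cv::Scalar::all(0));
    const double xc = src.cols / 2.0;
    const double yc = src.rows / 2.0;
    for (int yu = 0; yu < dst.rows; ++yu) {
        for (int xu = 0; xu < dst.cols; ++xu) {
            const double dx = xu - xc;
            const double dy = yu - yc;
            const double Ru = std::sqrt(dx * dx + dy * dy);
            const double Rd = 2.0 * f * std::sin(std::atan(Ru / f) / 2.0);
            // atan2 keeps the quadrant right, so cos/sin give signed offsets directly.
            const double theta = std::atan2(dy, dx);
            const int xd = static_cast<int>(std::lround(xc + Rd * std::cos(theta)));
            const int yd = static_cast<int>(std::lround(yc + Rd * std::sin(theta)));
            if (xd >= 0 && xd < src.cols && yd >= 0 && yd < src.rows)
                dst.at<cv::Vec3b>(yu, xu) = src.at<cv::Vec3b>(yd, xd);  // 8-bit BGR assumed
        }
    }
    return dst;
}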
I found this PDF file and I have proved that the maths are correct (except for the line vd = xd*fv + v0, which should say vd = yd*fv + v0):
http://perception.inrialpes.fr/CAVA_Dataset/Site/files/Calibration_OpenCV.pdf
It does not use all of the latest coefficients that OpenCV has available, but I am sure it could be adapted fairly easily.
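For reference, here is a sketch of the forward model that PDF describes (OpenCV's classic k1, k2, p1, p2 distortion model; as noted, newer coefficients such as k3 are left out here too), including the corrected vd line:

// (xu, yu) are normalised, undistorted image coordinates (already divided by the
// focal length, relative to the principal point). Outputs are pixel coordinates.
void distort_point(double xu, double yu,
                   double k1, double k2, double p1, double p2,
                   double fu, double fv, double u0, double v0,
                   double& ud, double& vd) {
    const double r2 = xu * xu + yu * yu;
    const double radial = 1.0 + k1 * r2 + k2 * r2 * r2;
    const double xd = xu * radial + 2.0 * p1 * xu * yu + p2 * (r2 + 2.0 * xu * xu);
    const double yd = yu * radial + p1 * (r2 + 2.0 * yu * yu) + 2.0 * p2 * xu * yu;
    ud = xd * fu + u0;
    vd = yd * fv + v0;   // the line the correction above refers to: yd, not xd
}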
This can be solved as an optimization problem. Simply mark curves in the image that are supposed to be straight lines, and store the contour points for each of those curves. Now we can solve for the fisheye matrix as a minimization problem: minimize the deviation of those point sets from straight lines, and that gives us the fisheye matrix. It works.
It can also be done manually, by adjusting the fisheye matrix using trackbars. Here is a fisheye GUI code using OpenCV for manual calibration.
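A sketch of what that cost function might look like (the distortion model being fitted is left abstract here; only the straight-line error term is shown, and each curve is assumed to have at least two points):

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Sum of squared perpendicular distances of the points to their best-fit line,
// computed as the smallest eigenvalue of the 2x2 scatter matrix.
double straightness_error(const std::vector<Pt>& pts) {
    double mx = 0, my = 0;
    for (const Pt& p : pts) { mx += p.x; my += p.y; }
    mx /= pts.size(); my /= pts.size();
    double sxx = 0, syy = 0, sxy = 0;
    for (const Pt& p : pts) {
        const double dx = p.x - mx, dy = p.y - my;
        sxx += dx * dx; syy += dy * dy; sxy += dx * dy;
    }
    const double tr = sxx + syy, det = sxx * syy - sxy * sxy;
    return tr / 2.0 - std::sqrt(tr * tr / 4.0 - det);  // smallest eigenvalue
}

// Cost for a candidate set of distortion parameters: undistort every marked
// curve with those parameters and add up how non-straight the results are.
// undistort_points is whatever model you are fitting (hypothetical here).
template <typename UndistortFn>
double total_cost(const std::vector<std::vector<Pt>>& curves, UndistortFn undistort_points) {
    double cost = 0.0;
    for (const auto& curve : curves)
        cost += straightness_error(undistort_points(curve));
    return cost;
}
// Feed total_cost to any generic optimiser, or adjust the parameters by hand
// with trackbars as suggested, and keep the parameters with the lowest cost.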