How to perform perspective correction in Matlab given known intrinsic and extrinsic parameters?


I'm using Matlab for camera calibration with Jean-Yves Bouguet's Camera Calibration Toolbox. I have all the camera parameters from the calibration procedure. When I use a new image that is not in the calibration set, I can get its transformation equation, e.g. Xc = R*X + T, where X is a 3D point of the calibration rig (planar) in the world frame and Xc is its coordinates in the camera frame. In other words, I have everything (both extrinsic and intrinsic parameters).

What I want to do is perform perspective correction on this image, i.e. I want to remove the perspective and see the calibration rig undistorted (it's a checkerboard).

Matlab's new Computer Vision toolbox has an object that performs a perspective transformation on an image, given a 3x3 matrix H. The problem is, I can't compute this matrix from the known intrinsic and extrinsic parameters!
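
For reference, when the calibration rig is planar (world points have Z = 0), the pinhole projection x ~ KK*(R*X + T) collapses to exactly such a 3x3 matrix. A minimal sketch of that composition, assuming the Bouguet toolbox variables KK (intrinsics), Rc_ext and Tc_ext (extrinsics of the new image) are in the workspace:

% for rig points [X; Y; 0] the third column of Rc_ext drops out, so the map
% from plane coordinates [X; Y; 1] to pixel coordinates is a 3x3 homography
H = KK * [Rc_ext(:,1) Rc_ext(:,2) Tc_ext];  % rig plane (mm) -> image pixels
H = H / H(3,3);                             % fix the overall scale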

Comments (2)

我ぃ本無心為│何有愛 2024-11-14 04:10:15

To all who are still interested in this after so many months: I've managed to get the correct homography matrix using Kovesi's code (http://www.csse.uwa.edu.au/~pk/research/matlabfns), specifically the homography2d.m function. You will, however, need the pixel coordinates of the four corners of the rig. If the camera is fixed in place, you only need to do this once. See the example code below:

%get corner pixel coords from base image
p1=[33;150;1];
p2=[316;136;1];
p3=[274;22;1];
p4=[63;34;1];
por=[p1 p2 p3 p4];
por=[0 1 0;1 0 0;0 0 1]*por;    %swap x-y <--------------------

%calculate target image coordinates in world frame
% rig is 9x7 (X,Y) with 27.5mm box edges
XXw=[[0;0;0] [0;27.5*9;0] [27.5*7;27.5*9;0] [27.5*7;0;0]];
Rtarget=[0 1 0;1 0 0;0 0 -1]; %Rotation matrix of target camera (vertical pose)
XXc=Rtarget*XXw+Tc_ext*ones(1,4); %go from world frame to camera frame
xn=XXc./[XXc(3,:);XXc(3,:);XXc(3,:)]; %calculate normalized coords
xpp=KK*xn;  %calculate target pixel coords

% get homography matrix from original to target image
HH=homography2d(por,xpp);
%do perspective transformation to validate homography
pnew=HH*por./[HH(3,:)*por;HH(3,:)*por;HH(3,:)*por]; 

That should do the trick. Note that Matlab treats the x axis of an image as the row index and y as the column index, so x and y must be swapped in the equations (as you can see in the code above). Furthermore, I had managed to compute the homography matrix from the parameters alone, but the result was slightly off (maybe round-off errors in the calibration toolbox). The best way to do this is the approach above.

If you want to use just the camera parameters (that is, without Kovesi's code), then the homography matrix is H=KK*Rmat*inv_KK, where inv_KK is the inverse of the intrinsic matrix KK. In that case the code is:

% corner coords in pixels
p1=[33;150;1];
p2=[316;136;1];
p3=[274;22;1];
p4=[63;34;1];
pmat=[p1 p2 p3 p4];
pmat=[0 1 0;1 0 0;0 0 1]*pmat; %swap x-y

R=[0 1 0;1 0 0;0 0 1];  %rotation matrix of final camera pose
Rmat=Rc_ext'*R;  %rotation from original pose to final pose
H=KK*Rmat*inv_KK; %homography matrix
pnew=H*pmat./[H(3,:)*pmat;H(3,:)*pmat;H(3,:)*pmat]; %do perspective transformation

H2=[0 1 0;-1 0 0;0 0 1]*H;  %swap x-y in the homography matrix to apply in image
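
To actually warp an image with the resulting matrix, one option is imwarp with a projective2d transform from the Image Processing Toolbox (a minimal sketch rather than the Computer Vision toolbox object mentioned in the question; it assumes H2 maps original pixel coordinates, x right and y down, to the rectified view):

% projective2d uses the row-vector convention [u v 1] = [x y 1]*T,
% so pass the transpose of the column-vector homography H2
tform = projective2d(H2');   % forward map: original image -> rectified image
Irect = imwarp(I, tform);    % I is the original image
imshow(Irect);
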
妄断弥空 2024-11-14 04:10:15

Approach 1:
In the Camera Calibration Toolbox you should notice that there is an H matrix for each image of your checkerboard in your workspace. I am not familiar with the Computer Vision toolbox yet, but perhaps this is the matrix you need for your function. It seems that H is computed like so:

KK = [fc(1) fc(1)*alpha_c cc(1);0 fc(2) cc(2); 0 0 1];
H = KK * [R(:,1) R(:,2) Tc]; % where R is your extrinsic rotation matrix and Tc the translation matrix
H = H / H(3,3);
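
A quick way to sanity-check this H is to push a known rig corner through it and compare with the pixel you observe in the image (a sketch assuming, as a hypothetical example, the 9x7 rig with 27.5 mm squares from the first answer):

Xw = [27.5*7; 27.5*9; 1];   % a rig-plane corner (Z = 0) in homogeneous coordinates
x  = H * Xw;                % project into the image
x  = x(1:2) / x(3);         % de-homogenize to get pixel coordinates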

Approach 2:
If the Computer Vision toolbox function doesn't work out for you, then to find the perspective projection of an image I have used the interp2 function like so:

[X, Y] = meshgrid(0:size(I,2)-1, 0:size(I,1)-1);
im_coord = [X(:), Y(:), ones(numel(I), 1)]';   % homogeneous pixel coordinates, one column per pixel
% Insert projection here for X and Y to XI and YI
ZI = interp2(X, Y, double(I), XI, YI);   % sample the image at the projected coordinates

I used perspective projections in a project a while ago, and I believe you need to use homogeneous coordinates. I think I found this Wikipedia article quite helpful.
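
To fill in the projection step in the skeleton above, here is a fuller sketch of the interp2-based inverse mapping. It assumes H is a homography that maps rectified (output) pixel coordinates back into the original image I (for instance the inverse of a homography like HH from the first answer, modulo that answer's x-y swap), that I is a grayscale image, and that the output size outH x outW is chosen freely:

[X, Y]   = meshgrid(0:size(I,2)-1, 0:size(I,1)-1);   % source-image pixel grid, as above
[Xo, Yo] = meshgrid(0:outW-1, 0:outH-1);             % pixel grid of the rectified output
q  = H * [Xo(:)'; Yo(:)'; ones(1, numel(Xo))];       % map each output pixel back into the source
XI = reshape(q(1,:) ./ q(3,:), size(Xo));            % de-homogenize: source x coordinates
YI = reshape(q(2,:) ./ q(3,:), size(Yo));            % source y coordinates
ZI = interp2(X, Y, double(I), XI, YI, 'linear', 0);  % sample; 0 fills pixels that fall outside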
