I'm trying to rotate a 1296x968 image by 90 degrees using the C++ API of OpenCV and I'm facing a few problems.
Input:
Rotated:
As you can see, the rotated image has a few problems. First, it has the same size as the original, even though I specifically created the destination Mat with the inverted dimensions of the original. As a result, the destination image gets cropped.

I suspect this happens because I'm calling warpAffine() and passing the size of the original Mat instead of the size of the destination Mat. I did this because I was following this answer, but now I suspect that answer may be wrong. So this is my first doubt/problem.

The second is that warpAffine() writes to the destination at a certain offset (probably to place the rotated data in the middle of the image), and this operation leaves a horrible, large black border around the image.

How do I fix these issues?

I'm sharing the source code below:
#include <cv.h>
#include <highgui.h>
#include <iostream>

using namespace cv;
using namespace std;

void rotate(Mat& image, double angle)
{
    Point2f src_center(image.cols/2.0F, image.rows/2.0F);
    Mat rot_matrix = getRotationMatrix2D(src_center, angle, 1.0);
    Mat rotated_img(Size(image.size().height, image.size().width), image.type());
    warpAffine(image, rotated_img, rot_matrix, image.size());
    imwrite("rotated.jpg", rotated_img);
}

int main(int argc, char* argv[])
{
    Mat orig_image = imread(argv[1], 1);
    if (orig_image.empty())
    {
        cout << "!!! Couldn't load " << argv[1] << endl;
        return -1;
    }

    rotate(orig_image, 90);
    return 0;
}
I've found a solution that doesn't involve warpAffine(). But before that, I need to state (for future reference) that my suspicion was right: you need to pass the size of the destination when calling warpAffine(). As far as I can tell, the black border (caused by writing at an offset) drawn by this function seems to be its standard behavior. I've noticed this with both the C and the C++ interfaces of OpenCV running on Mac and Linux, using versions 2.3.1a and 2.3.0.

The solution I ended up using is much simpler than all this warp business. You can use cv::transpose() and cv::flip() to rotate an image by 90 degrees. Here it is:
A lot of people have had problems with rotating images or image chunks due to offsets etc. So I'm posting a solution that lets you rotate a region (or the whole) of an image and stick it into another image, or have the function compute an image in which everything will just fit.
Maybe this can help someone.

The variables are:
img : original image
angle : degrees
scale : scale factor
dst : destination image
I realize you've found other, faster solutions (a 90-degree rotation should be very fast, and doesn't need all the machinery of warpAffine), but I want to address the black border problem for anyone else who runs across this.

What else could warpAffine do? The destination image was specified to be wider than it was tall, and the affine transform only specified rotation (around the center of the image), not scaling. That is exactly what it did. There is no information anywhere to tell warpAffine what should be drawn in those black borders, so it left them black.

Direct physical experiment: put a sheet of paper on the table. Draw an outline around it (this is what you did when you specified that you wanted the result to be the same shape/size as the original). Now rotate that sheet 90 degrees around its center. Look at the region bounded by the outline on the table. If it were a black table, it would look exactly like your result.
One issue I found is that the destination image size for warpAffine is set to image.size() instead of rotated_img.size(). However, after the warp, the result is still translated too far in x and y... I tried the exact same warp from OpenCV's getRotationMatrix2D in Matlab, and it worked perfectly. I'm starting to smell a possible bug in warpAffine...