OpenCV: resize and crop an image based on pixel values
#include "iostream"
#include "cv.h"
#include "highgui.h"
#include "cvaux.h"
#include "cxmisc.h"
#include "math.h"
using namespace cv;
using namespace std;
int main(){
int height, width, x, y, i, minX, minY, maxX, maxY;
char imgFileName[100];
IplImage *origImage = cvLoadImage("BaybayinMark/b9.jpg", -1);
height = origImage->height;
width = origImage->width;
IplImage *grayImage = cvCreateImage(cvSize(width, height), 8, 1);
IplImage *binImage = cvCreateImage(cvSize(width, height), 8, 1);
//Pre-processing phase
cvCvtColor(origImage, grayImage, CV_BGR2GRAY);
cvDilate(grayImage, grayImage, NULL, 1);
cvSmooth(grayImage, grayImage, CV_GAUSSIAN, 21, 21, 0, 0);
cvThreshold(grayImage, binImage, 120, 255, CV_THRESH_BINARY);
cvNormalize(binImage,binImage,0,1,CV_MINMAX);
minX = width;
minY = height;
maxX = 0;
maxY = 0;
CvScalar s;
for (x=0; x<width-1; x++){
for(y=0; y<height-1; y++){
s = cvGet2D(binImage, y, x);
//printf("%f\n", s.val[0]);
if (s.val[0] == 1){
//printf("HELLO");
minX = min(minX, x);
minY = min(minY, y);
maxX = max(maxX, x);
maxY = max(maxY, y);
}
}
}
cvSetImageROI(binImage, cvRect(minX, minY, maxX-minX, maxY-minY));
IplImage *cropImage = cvCreateImage(cvGetSize(binImage), 8, 1);
cvCopy(binImage, cropImage, NULL);
cvSaveImage("crop/cropImage9.jpg", cropImage);
cvResetImageROI(binImage);
cvReleaseImage(&origImage);
cvReleaseImage(&binImage);
cvReleaseImage(&grayImage);
cvReleaseImage(&cropImage);
}
Hi! I just want to ask about this code. I am trying to identify the outermost edges of an image and crop the image according to them. All I got after running it was a black image of the same size. Am I going about this the wrong way? Please enlighten me; I'm a beginner with OpenCV.
#include "iostream"
#include "cv.h"
#include "highgui.h"
#include "cvaux.h"
#include "cxmisc.h"
#include "math.h"
using namespace cv;
using namespace std;
int main(){
int height, width, x, y, i, minX, minY, maxX, maxY;
char imgFileName[100];
IplImage *origImage = cvLoadImage("BaybayinMark/b9.jpg", -1);
height = origImage->height;
width = origImage->width;
IplImage *grayImage = cvCreateImage(cvSize(width, height), 8, 1);
IplImage *binImage = cvCreateImage(cvSize(width, height), 8, 1);
//Pre-processing phase
cvCvtColor(origImage, grayImage, CV_BGR2GRAY);
cvDilate(grayImage, grayImage, NULL, 1);
cvSmooth(grayImage, grayImage, CV_GAUSSIAN, 21, 21, 0, 0);
cvThreshold(grayImage, binImage, 120, 255, CV_THRESH_BINARY);
cvNormalize(binImage,binImage,0,1,CV_MINMAX);
minX = width;
minY = height;
maxX = 0;
maxY = 0;
CvScalar s;
for (x=0; x<width-1; x++){
for(y=0; y<height-1; y++){
s = cvGet2D(binImage, y, x);
//printf("%f\n", s.val[0]);
if (s.val[0] == 1){
//printf("HELLO");
minX = min(minX, x);
minY = min(minY, y);
maxX = max(maxX, x);
maxY = max(maxY, y);
}
}
}
cvSetImageROI(binImage, cvRect(minX, minY, maxX-minX, maxY-minY));
IplImage *cropImage = cvCreateImage(cvGetSize(binImage), 8, 1);
cvCopy(binImage, cropImage, NULL);
cvSaveImage("crop/cropImage9.jpg", cropImage);
cvResetImageROI(binImage);
cvReleaseImage(&origImage);
cvReleaseImage(&binImage);
cvReleaseImage(&grayImage);
cvReleaseImage(&cropImage);
}
Hi! i just want to ask about this code. I am trying to identify the outermost edges of an image and crop the image according them. All I was having after running was a black image with the same size. Am I trying to do it the wrong way? Please enlighten me I'm a beginner with OpenCV.
Comments (1)
In the rush of finding what-the-heck-is-the-problem, people tend to forget a more important question: how-the-heck-do-I-find-the-problem.
With image processing applications, the how can be answered by the poor man's debugger in OpenCV, which is adding cvSaveImage() calls through the code to visualize what every step of the way is doing. A minimal sketch of that instrumentation follows (the dbg/*.jpg output names are hypothetical, chosen here just for illustration):
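// Sketch only: the question's pre-processing with a cvSaveImage() call
// after every step; the dbg/*.jpg paths are made up for this example.
#include "cv.h"
#include "highgui.h"

int main(){
    IplImage *origImage = cvLoadImage("BaybayinMark/b9.jpg", -1);
    IplImage *grayImage = cvCreateImage(cvGetSize(origImage), 8, 1);
    IplImage *binImage = cvCreateImage(cvGetSize(origImage), 8, 1);

    cvCvtColor(origImage, grayImage, CV_BGR2GRAY);
    cvSaveImage("dbg/1_gray.jpg", grayImage);       // looks fine
    cvDilate(grayImage, grayImage, NULL, 1);
    cvSaveImage("dbg/2_dilated.jpg", grayImage);    // looks fine
    cvSmooth(grayImage, grayImage, CV_GAUSSIAN, 21, 21, 0, 0);
    cvSaveImage("dbg/3_smooth.jpg", grayImage);     // looks fine
    cvThreshold(grayImage, binImage, 120, 255, CV_THRESH_BINARY);
    cvSaveImage("dbg/4_binary.jpg", binImage);      // looks fine
    cvNormalize(binImage, binImage, 0, 1, CV_MINMAX);
    cvSaveImage("dbg/5_normalized.jpg", binImage);  // black: values are now only 0 and 1

    cvReleaseImage(&origImage);
    cvReleaseImage(&grayImage);
    cvReleaseImage(&binImage);
}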
This reveals that the resulting image is already black before your custom for loop, and the call responsible for that is cvNormalize().
But it makes sense, right? You are converting pixels which are in the range [0..255] to values of 0 and 1. So the problem is that at the end of your processing, when you save the resulting image to disk, you forgot to normalize the values back to the original range. A sketch of the fix, reusing the question's variables:
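// Fix (sketch): scale the 0/1 values back to [0..255] before saving.
// This fragment replaces the cropping/saving tail of the question's code.
cvSetImageROI(binImage, cvRect(minX, minY, maxX-minX, maxY-minY));
IplImage *cropImage = cvCreateImage(cvGetSize(binImage), 8, 1);
cvCopy(binImage, cropImage, NULL);
cvNormalize(cropImage, cropImage, 0, 255, CV_MINMAX); // back to the original range
cvSaveImage("crop/cropImage9.jpg", cropImage);
cvResetImageROI(binImage);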
And that solves the problem.