Object area using cvMoments
I am working on a walking-motion recognition project involving OpenCV and C++. I have reached the stage in the algorithm where I need to find the area of the human blob. I have loaded the video, converted it to grayscale, and thresholded it to obtain a binary image in which a white region shows the human walking, alongside other white regions. I need to find the area of each white region so I can pick out the human blob, since its area is larger than that of the other white regions. Please look through my code and explain the output to me: I am getting an area of 40872 and I do not know what that means. Here is my code. I would like to upload the video I used but I do not know how to; if someone can tell me how to upload it, please do, because that is the only way I will be able to get help with this particular video. I really hope someone can help me.
#include "cv.h"
#include "highgui.h"
#include "iostream"
using namespace std;
int main( int argc, char* argv ) {
CvCapture *capture = NULL;
capture = cvCaptureFromAVI("C:\\walking\\lady walking.avi");
if(!capture){
return -1;
}
IplImage* color_frame = NULL;
IplImage* gray_frame = NULL ;
int thresh_frame = 70;
CvMoments moments;
int frameCount=0;//Counts every 5 frames
cvNamedWindow( "walking", CV_WINDOW_AUTOSIZE );
while(1) {
color_frame = cvQueryFrame( capture );//Grabs the frame from a file
if( !color_frame ) break;
gray_frame = cvCreateImage(cvSize(color_frame->width, color_frame->height), color_frame->depth, 1);
if( !color_frame ) break;// If the frame does not exist, quit the loop
frameCount++;
if(frameCount==5)
{
cvCvtColor(color_frame, gray_frame, CV_BGR2GRAY);
cvThreshold(gray_frame, gray_frame, thresh_frame, 255, CV_THRESH_BINARY);
cvErode(gray_frame, gray_frame, NULL, 1);
cvDilate(gray_frame, gray_frame, NULL, 1);
cvMoments(gray_frame, &moments, 1);
double m00;
m00 = cvGetSpatialMoment(&moments, 0,0);
cvShowImage("walking", gray_frame);
frameCount=0;
}
char c = cvWaitKey(33);
if( c == 27 ) break;
}
double m00 = (double)cvGetSpatialMoment(&moments, 0,0);
cout << "Area - : " << m00 << endl;
cvReleaseImage(&color_frame);
cvReleaseImage(&gray_frame);
cvReleaseCapture( &capture );
cvDestroyWindow( "walking" );
return 0;
}
Comments (2)
The function cvGetSpatialMoment retrieves the spatial moment, which in the case of image moments is defined as

m_ji = sum over all pixels (x, y) of I(x, y) * x^j * y^i

where I(x, y) is the intensity of the pixel (x, y). The spatial moment m00 is like the mass of an object; it contains no x or y information. The average x position is average(x) = sum(density(x_i) * x_i) over all i. Here I(x, y) plays the role of the density function, except that it is the intensity of the pixel. If you don't want your result to change with the lighting, you probably want to make the matrix binary: a pixel is either part of the object or it is not. Feeding in a grayscale image of the object essentially uses the grey level as the density in the formula above. So the m00 you get is basically the sum of the grey levels over all the pixels in the image; it has no spatial meaning. If you don't convert your image to binary, though, you may want to divide by m00 to "normalize" it.
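To make the two points above concrete, here is a minimal sketch in the same legacy C API as the question; the helper name report_blob_areas and the per-contour step are illustrative additions, not part of the original post. Note that the question's call passes binary = 1, so every nonzero pixel is treated as 1 and m00 there is the white-pixel count of the processed frame (that is what the 40872 measures) rather than a grey-level sum. To get the area of each white region separately, find the contours and measure each one; the human blob should then be the largest.

#include "cv.h"
#include "highgui.h"
#include <cmath>
#include <cstdio>

// binary_img: single-channel 8-bit image containing only 0 and 255
void report_blob_areas(IplImage* binary_img)
{
    // 1) With binary = 1, m00 of the whole image equals the number of white pixels.
    CvMoments m;
    cvMoments(binary_img, &m, 1);
    printf("total white area (m00)      : %.0f\n", cvGetSpatialMoment(&m, 0, 0));
    printf("total white area (verified) : %d\n", cvCountNonZero(binary_img));

    // 2) Per-blob areas: find each white region and measure it individually.
    //    cvFindContours modifies its input, so work on a copy.
    IplImage* work = cvCloneImage(binary_img);
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contours = NULL;
    cvFindContours(work, storage, &contours, sizeof(CvContour),
                   CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));

    double largest = 0;   // the human blob should be the largest white region
    for (CvSeq* c = contours; c != NULL; c = c->h_next) {
        double area = fabs(cvContourArea(c, CV_WHOLE_SEQ));
        printf("blob area: %.0f pixels\n", area);
        if (area > largest) largest = area;
    }
    printf("largest blob: %.0f pixels\n", largest);

    cvReleaseMemStorage(&storage);
    cvReleaseImage(&work);
}

This could be called with gray_frame right after the erode/dilate step in the question's loop.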
You can use MEI and MHI images to recognize motion. Update the MHI image once every 50 frames, get the segmented motion, and create the motion templates with cvMotions; after that you need to compare them against your training data with a Mahalanobis distance. I'm Vietnamese, and my English is very bad.
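For readers who want to try the motion-template approach this answer hints at, here is a rough fragment using the motion history image (MHI) functions from the same legacy C API, which appear to be what "cvMotions" refers to (cvUpdateMotionHistory, cvSegmentMotion). The function name update_motion_history and the constants are assumptions chosen for illustration, not from the original answer; the silhouette would typically be a thresholded frame-difference mask.

#include "cv.h"

void update_motion_history(IplImage* silhouette,  // 8-bit binary motion mask
                           IplImage* mhi,         // 32-bit float, single channel
                           double timestamp)      // current time in seconds
{
    const double MHI_DURATION = 1.0;   // how long (s) a motion trace persists (assumed value)
    const double SEG_THRESH   = 0.5;   // threshold for splitting motion into segments (assumed value)

    // Fold the current silhouette into the motion history image.
    cvUpdateMotionHistory(silhouette, mhi, timestamp, MHI_DURATION);

    // Split the MHI into separately moving regions (one per moving object).
    CvMemStorage* storage = cvCreateMemStorage(0);
    IplImage* seg_mask = cvCreateImage(cvGetSize(mhi), IPL_DEPTH_32F, 1);
    CvSeq* segments = cvSegmentMotion(mhi, seg_mask, storage, timestamp, SEG_THRESH);

    // 'segments' holds one CvConnectedComp per moving region; features extracted from
    // these regions can then be compared against training data with a distance measure
    // such as the Mahalanobis distance mentioned above.
    (void)segments;

    cvReleaseImage(&seg_mask);
    cvReleaseMemStorage(&storage);
}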