Count bright pixels and sum them. Medical image, C++

Posted on 2024-12-14 21:32:01


Currently, I'm working on a project in medical engineering. I have a big image containing several sub-images of cells, so my first task is to divide the image.

This is what I had in mind:

1. Convert the image to binary.

2. Project the bright pixels onto the x-axis, so I can see where there are gaps between the brightness values, and then divide the image at those gaps.

The problem comes when I try the second part. My idea is to use a vector as the projection and to sum all the brightness values along each column, so that position 0 of the vector holds the sum of all the brightness values in the first column of the image, and so on up to the last column; at the end I have the projection.
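As an aside, OpenCV can compute exactly this column projection in one call with cv::reduce. A minimal sketch, assuming an 8-bit single-channel image; the function name columnProjection is illustrative, and CV_REDUCE_SUM is the OpenCV 2.x spelling of the flag (newer versions use cv::REDUCE_SUM):

#include <opencv2/opencv.hpp>

// Column projection: one sum per column, collected into a 1 x cols vector.
// CV_64F keeps the sums exact even when white pixels are stored as 255.
cv::Mat columnProjection(const cv::Mat& binary)
{
    cv::Mat projection;
    cv::reduce(binary, projection, 0 /* dim 0: collapse the rows */, CV_REDUCE_SUM, CV_64F);
    return projection;
}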

This is what I have tried:

void calculo(cv::Mat &result,cv::Mat &binary){  //result=the sum,binary the imag.

    int i,j;

    for (i=0;i<=binary.rows;i++){
        for(j=0;j<=binary.cols;j++){
                cv::Scalar intensity= binaria.at<uchar>(j,i);
                result.at<uchar>(i,i)=result.at<uchar>(i,i)+intensity.val[0];
        }
        cv::Scalar intensity2= result.at<uchar>(i,i);
        cout<< "content" "\n"<< intensity2.val[0] << endl;              
    }
} 

When executing this code, I get an access violation error. Another problem is that I cannot create a matrix with a single row, so... I don't know what else I could do.
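For reference, the access violation most likely comes from two things: the loop conditions use <=, which runs one step past the last row and column, and at<uchar>(j, i) swaps the arguments, since Mat::at takes the row index first (binaria is presumably a leftover name for binary). As for the single-row matrix, one can be created directly; a minimal sketch, where CV_32S is an assumption chosen so that column sums of a 0/255 image do not overflow a uchar:

// 1 row by binary.cols columns, zero-initialized, 32-bit signed sums.
cv::Mat result = cv::Mat::zeros(1, binary.cols, CV_32S);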

Any ideas?! Thanks!


In the end, it does not work; I need to sum all the pixels in one COLUMN. This is what I did:

cv::Mat suma(cv::Mat& matrix){

    int i;

    cv::Mat output(1, matrix.cols, CV_64F);

    for (i = 0; i <= matrix.cols; i++){
        output.at<double>(0, i) = norm(matrix.col(i), 1);
    }
    return output;
}

but it gave me an error:
Assertion failed (0 <= colRange.start && colRange.start <= colRange.end && colRange.end <= m.cols) in Mat, file /home/usuario/OpenCV-2.2.0/modules/core/src/matrix.cpp, line 276
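That assertion is an out-of-range column access: the loop condition i <= matrix.cols lets the final iteration call matrix.col(matrix.cols), one column past the end. A second detail: norm(matrix.col(i), 1) passes 1 as the norm type, which selects NORM_INF rather than the L1 norm (the flag values are NORM_INF == 1, NORM_L1 == 2). A corrected sketch of the same function:

cv::Mat suma(cv::Mat& matrix)
{
    cv::Mat output(1, matrix.cols, CV_64F);

    // '<' rather than '<=': valid column indices run from 0 to cols - 1.
    for (int i = 0; i < matrix.cols; i++){
        // Name the norm type explicitly; the L1 norm of a non-negative
        // column is exactly the sum of its pixels.
        output.at<double>(0, i) = cv::norm(matrix.col(i), cv::NORM_L1);
    }
    return output;
}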

I don't know; any idea would be helpful. Anyway, many thanks mevatron, you really set me on the right path.


Comments (1)

倾其所爱 2024-12-21 21:32:01


If you just want the sum of the binary image, you could simply take the L1-norm. Like so:

Mat binaryVectorSum(const Mat& binary)
{
    Mat output(1, binary.rows, CV_64F);
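    // One output entry per row: entry i will hold the L1 norm (the sum) of row i.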
    for(int i = 0; i < binary.rows; i++)
    {
        output.at<double>(0, i) = norm(binary.row(i), NORM_L1);
    }

    return output;
}

I'm at work, so I can't test it out, but that should get you close.

EDIT: Got home. Tested it. It works. :) One caveat... this function works if your binary matrix is truly binary (i.e., 0's and 1's). You may need to scale the norm output by the maximum value if the binary matrix is, say, 0's and 255's.
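For instance, with the usual 0/255 threshold output, dividing once at the end turns the sums into plain pixel counts; a sketch using the function above, where counts is an illustrative name:

cv::Mat counts = binaryVectorSum(binary);
counts = counts / 255.0;  // each entry becomes the number of white pixels in its row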

EDIT: If you don't have using namespace cv; in your .cpp file, then you'll need to qualify NORM_L1 with its namespace, like this: cv::NORM_L1.

Have you considered transposing the matrix before you call the function? That turns the row sums into column sums, which is the x-axis projection you described. Like this:

sumCols = binaryVectorSum(binary.t());

vs.

sumRows = binaryVectorSum(binary);

EDIT: A bug with my code. :)
I changed:

Mat output(1, binary.cols, CV_64F);

to

Mat output(1, binary.rows, CV_64F);

My test case was a square matrix, so that bug didn't get found...

Hope that is helpful!
