Background subtraction in OpenCV (C++)

I want to implement a background averaging method. I have 50 frames taken within one second, and some of them contain lightning, which I want to extract as the foreground. The frames are taken with a stationary camera and are captured in grayscale. What I want to do is:

  1. Get the background model.
  2. Then compare each frame to the background model to determine whether there is lightning in that frame or not.

I read some documents on how this can possibly be done using cvAcc(), but I am having difficulty understanding how. I would appreciate a piece of code to guide me, and links to documents that can help me understand how to implement this.
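For reference, the averaging idea maps onto the modern C++ API roughly as sketched below; cv::accumulate() is the C++ counterpart of the C-style cvAcc(), and the file name, frame count, and threshold here are placeholders rather than anything from the question:

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap("frames.avi");              // placeholder input source
    cv::Mat frame, acc;

    // Sum the first 50 frames into a floating-point accumulator
    for (int i = 0; i < 50 && cap.read(frame); ++i) {
        cv::cvtColor(frame, frame, cv::COLOR_BGR2GRAY);
        if (acc.empty())
            acc = cv::Mat::zeros(frame.size(), CV_32FC1);
        cv::accumulate(frame, acc);                  // C++ counterpart of cvAcc()
    }
    if (acc.empty()) return 1;                       // no frames were read

    // Background model = accumulated sum / frame count, converted back to 8-bit
    cv::Mat background;
    acc.convertTo(background, CV_8UC1, 1.0 / 50.0);

    // Compare a frame (here simply the last one read) against the background model
    cv::Mat diff;
    cv::absdiff(frame, background, diff);
    cv::threshold(diff, diff, 30, 255, cv::THRESH_BINARY);
    return 0;
}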

Thank you in advance.

2 Answers

百合的盛世恋 2024-12-16 17:07:12

We had the same task in one of our projects.

To get the background model, we simply create a class BackgroundModel, capture the first (let's say) 50 frames, and calculate the average frame to avoid pixel errors in the background model.

For example, if you get an 8-bit greyscale image (CV_8UC1) from your camera, you initialize your model with CV_16UC1 to avoid clipping (summing 50 frames of values up to 255 gives at most 12,750, which still fits in 16 bits).

cv::Mat model = cv::Mat(HEIGHT, WIDTH, CV_16UC1, cv::Scalar(0));

While collecting the first frames to build your model, just add every frame to the model and count the number of frames received.

void addFrame(cv::Mat frame) {
    cv::Mat convertedFrame;
    frame.convertTo(convertedFrame, CV_16UC1);   // widen to 16 bit so the running sum does not clip
    cv::add(convertedFrame, model, model);
    if (++learnedFrames >= FRAMES_TO_LEARN) {    // FRAMES_TO_LEARN = 50
        createMask();
    }
}

The createMask() function calculates the average frame that we use as the background model.

void createMask() {
    // scale the accumulated sum by 1/learnedFrames to get the average frame
    cv::convertScaleAbs(model, mask, 1.0 / learnedFrames);
    mask.convertTo(mask, CV_8UC1);   // convertScaleAbs already yields 8-bit; kept as a no-op safeguard
}

Now you just route every frame through the BackgroundModel class to a function subtract(). If the result is an empty cv::Mat, the mask is still being calculated; otherwise you get a background-subtracted frame.

cv::Mat subtract(cv::Mat frame) {
    cv::Mat result;
    if (learnedFrames >= FRAMES_TO_LEARN) {   // FRAMES_TO_LEARN = 50; addFrame() already increments the counter
        cv::subtract(frame, mask, result);    // model is ready: remove the background
    }
    else {
        addFrame(frame);                      // still learning: feed the frame into the model
    }
    return result;
}

Last but not least, you can use cv::sum() (signature: Scalar sum(InputArray src)) to calculate the pixel sum of the subtracted frame and decide whether it is a frame with lightning in it.
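A minimal sketch of that last step, in the same fragment style as the snippets above; the threshold value is an assumed tuning constant, not something from the original answer:

// Decide whether a background-subtracted frame contains lightning.
// LIGHT_SUM_THRESHOLD is a made-up value; calibrate it on your own footage.
const double LIGHT_SUM_THRESHOLD = 500000.0;

bool hasLightning(const cv::Mat& subtracted) {
    double total = cv::sum(subtracted)[0];   // cv::sum() returns a per-channel Scalar; [0] is the grayscale channel
    return total > LIGHT_SUM_THRESHOLD;
}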

淡淡離愁欲言轉身 2024-12-16 17:07:12

The MyPolygon function masks the ROI; after that, the code computes the absolute pixel difference against the reference and counts the white pixels.
srcImage: the reference image.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <random>


using namespace std;
using namespace cv;

// Keeps only the pixels inside the hard-coded ROI polygon (everything outside is zeroed in place)
cv::Mat MyPolygon( Mat img )
{
  int lineType = 8;
// [(892, 145), (965, 150), (933, 199), (935, 238), (970, 248), (1219, 715), (836, 709), (864, 204)]

  /** Create some points */
  Point rook_points[1][8];
  rook_points[0][0] = Point(892, 145);
  rook_points[0][1] = Point(965, 150);
  rook_points[0][2] = Point(933, 199);
  rook_points[0][3] = Point(935, 238);
  rook_points[0][4] = Point(970, 248);
  rook_points[0][5] = Point(1219, 715);
  rook_points[0][6] = Point(836, 709);
  rook_points[0][7] = Point(864, 204);

  const Point* ppt[1] = { rook_points[0] };
  int npt[] = { 8 };

  cv::Mat mask = cv::Mat::zeros(img.size(), img.type());

  fillPoly( mask,
            ppt,
            npt,
            1,
            Scalar( 255, 0, 0 ),
            lineType
            );

    cv::bitwise_and(mask,img, img);
    
    return img; 
 }

 int main() {
    // Load the reference image in grayscale
    cv::Mat srcImage = cv::imread("/home/gourav/Pictures/L1 Image.png", cv::IMREAD_GRAYSCALE);
    if (srcImage.empty()){
        std::cerr << "Ref Image not found\n";
        return 1;
    }
    resize(srcImage, srcImage, Size(1280, 720));
    // cout << " Width : " << srcImage.cols << endl;
    // cout << " Height: " << srcImage.rows << endl;
    cv::Mat img = MyPolygon(srcImage);   // masks srcImage in place (img and srcImage share the same data)
    
    Mat grayBlur;
    GaussianBlur(srcImage, grayBlur, Size(5, 5), 0);

    VideoCapture cap("/home/gourav/GenralCode/LD3LF1_stream1.mp4"); 
    Mat frames;
    if(!cap.isOpened()){

        std::cout << "Error opening video stream or file" << endl;

        return -1;
    }
    while (1)
    {
        cap >> frames;
        if (frames.empty())
            break;
        
        // Convert current frame to grayscale
        cvtColor(frames, frames, COLOR_BGR2GRAY);

        // cout << "Frame Width : " << frames.cols << endl;
        // cout << "Frame Height: " << frames.rows << endl;

        Mat imageBlure;
        GaussianBlur(frames, imageBlure, Size(5, 5), 0);

        cv::Mat frame = MyPolygon(imageBlure);

        Mat dframe;
        absdiff(frame, grayBlur, dframe);
        
        // imshow("grayBlur", grayBlur);

        // Threshold to binarize
        threshold(dframe, dframe, 30, 255, THRESH_BINARY);        
        
        //White Pixels
        int number = cv::countNonZero(dframe);
        cout<<"Count: "<< number <<"\n";
        if (number > 3000)
        {
            cout<<"generate Alert ";
        }
        // Display Image
        imshow("dframe", dframe);

        char c=(char)waitKey(25);
        if (c==27)
            break;
    }
    cap.release();
    return 0;

 }