How to do white blob tracking on video or camera capture with Emgu?

Posted 2024-08-29 06:13:25


I want to make a program using C# with Emgu that can detect white blobs on images from a camera and also track them. The program should also return the IDs of the tracked blobs.

Frame1: http://www.freeimagehosting.net/uploads/ff2ac19054.jpg

Frame2: http://www.freeimagehosting.net/uploads/09e20e5dd6.jpg


3 Answers

半暖夏伤 2024-09-05 06:13:25


The Emgu sample project "VideoSurveilance" in the Emgu.CV.Example solution (Emgu.CV.Example.sln) demonstrates blob tracking and assigns IDs to the tracked blobs.

I'm a newbie to OpenCV, but it seems to me that tracking only "white" blobs may be harder than it sounds. For example, the blobs in your sample pictures aren't really "white", are they? What I think you are really trying to do is get the blobs that are brighter than the background by a certain amount, i.e. find a gray blob on a black background or a white blob on a gray background.
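The "brighter than the background by a certain amount" idea can be sketched as a relative threshold. The snippet below is plain Python rather than Emgu, purely to illustrate the logic; the median-as-background estimate and the margin of 60 are assumptions you would tune for your own footage.

```python
def relative_threshold(gray, margin=60):
    """Return a binary mask of pixels brighter than the background estimate by `margin`.

    `gray` is a 2D list of 0-255 intensities; the median intensity stands in
    for the background level (a reasonable guess when blobs are small).
    """
    flat = sorted(p for row in gray for p in row)
    background = flat[len(flat) // 2]  # median as a crude background estimate
    return [[1 if p - background >= margin else 0 for p in row] for row in gray]

# A tiny frame: dim background (~10) with a bright blob (200+)
frame = [
    [10, 12, 200, 11],
    [11, 210, 220, 10],
    [12, 10, 11, 13],
]
mask = relative_threshold(frame)
# mask marks only the bright region, regardless of the absolute background level
```

This keeps working if the whole scene brightens or darkens, which a fixed absolute threshold would not.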

哆啦不做梦 2024-09-05 06:13:25


It depends on what your background is like. If it is consistently dark, like in the images you attached, then you should be able to extract those "white" blobs with a simple threshold. For any smarter segmentation you'll need to use some other features as well (e.g. correlation, if your object's color is consistent).
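The fixed-threshold idea above, combined with ID assignment via connected components, can be sketched without any CV library. This is illustrative plain Python, not Emgu; the threshold of 128 is an assumed value you would tune for your camera.

```python
from collections import deque

def label_blobs(gray, thresh=128):
    """Threshold a 2D intensity grid, then assign an integer ID to each
    4-connected bright blob via breadth-first flood fill.

    Returns (labels, blob_count), where labels[y][x] is 0 for background
    or the blob's ID.
    """
    h, w = len(gray), len(gray[0])
    labels = [[0] * w for _ in range(h)]
    next_id = 0
    for y in range(h):
        for x in range(w):
            if gray[y][x] >= thresh and labels[y][x] == 0:
                next_id += 1                     # new blob found: assign a fresh ID
                labels[y][x] = next_id
                q = deque([(y, x)])
                while q:                         # flood-fill the connected region
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and gray[ny][nx] >= thresh and labels[ny][nx] == 0:
                            labels[ny][nx] = next_id
                            q.append((ny, nx))
    return labels, next_id
```

On a frame with two separated bright regions, this yields two distinct IDs; Emgu's `BLOB_DETECTOR_TYPE.CC` does essentially the same connected-component grouping on the foreground mask.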

想你只要分分秒秒 2024-09-05 06:13:25


I cannot say the code will work because I haven't tested it.

The general idea is to take the captured frame (assuming you're capturing frames) and filter out noise by thresholding the saturation and value (brightness) channels. The modified HSV image is then processed as a grayscale image. Blobs can be labeled by looping through the blob collection generated by the tracker and assigning IDs and bounding boxes.
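The HSV filtering step can be illustrated per pixel with the standard library's `colorsys`, independent of Emgu. A "white" pixel is one that is bright (high value) and desaturated (low saturation). The cutoffs, saturation ≤ 80 and value ≥ 200 on a 0-255 scale, mirror the code below and are assumptions to tweak:

```python
import colorsys

def is_white_blob_pixel(r, g, b, max_sat=80, min_val=200):
    """True when an RGB pixel (0-255 per channel) is bright and desaturated,
    i.e. near white in HSV terms."""
    # colorsys works in 0..1 ranges and returns (h, s, v) each in 0..1
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return s * 255 <= max_sat and v * 255 >= min_val
```

Pure white and bright near-white pass the filter, while saturated or dim pixels are rejected, which is exactly what the `InRange` calls on the saturation and value channels below achieve image-wide.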

Also, you may be interested in AForge.NET and the related article, Hands Gesture Recognition, which covers the mechanics and implementation of using histograms for computer vision.

This is a modified version of custom tracker code found on the nui forums:

static void Main()
{
    Capture capture = new Capture(); // create a camera capture
    Image<Bgr, Byte> img = capture.QuerySmallFrame();

    Image<Gray, Byte> processed = OptimizeBlobs(img);

    BackgroundStatisticsModel bsm = new BackgroundStatisticsModel(img, Emgu.CV.CvEnum.BG_STAT_TYPE.FGD_STAT_MODEL);
    bsm.Update(img);

    BlobSeq oldBlobs = new BlobSeq();
    BlobSeq newBlobs = new BlobSeq();

    ForgroundDetector fd = new ForgroundDetector(Emgu.CV.CvEnum.FORGROUND_DETECTOR_TYPE.FGD);
    BlobDetector bd = new BlobDetector(Emgu.CV.CvEnum.BLOB_DETECTOR_TYPE.CC);
    BlobTracker bt = new BlobTracker(Emgu.CV.CvEnum.BLOBTRACKER_TYPE.CC);

    BlobTrackerAutoParam btap = new BlobTrackerAutoParam();
    btap.BlobDetector = bd;
    btap.ForgroundDetector = fd;
    btap.BlobTracker = bt;
    btap.FGTrainFrames = 5;

    BlobTrackerAuto bta = new BlobTrackerAuto(btap);

    Application.Idle += new EventHandler(delegate(object sender, EventArgs e)
    {   // run this until the application is closed (close button click on the image viewer)

        // capture and pre-process the next frame
        img = capture.QuerySmallFrame();
        processed = OptimizeBlobs(img);

        bd.DetectNewBlob(processed, bsm.Foreground, newBlobs, oldBlobs);

        // BlobTrackerAuto is enumerable over its tracked blobs
        List<MCvBlob> blobs = new List<MCvBlob>(bta);

        MCvFont font = new MCvFont(Emgu.CV.CvEnum.FONT.CV_FONT_HERSHEY_SIMPLEX, 1.0, 1.0);
        foreach (MCvBlob blob in blobs)
        {
            // draw each blob's bounding box and its ID at the blob's center
            img.Draw(Rectangle.Round(blob), new Bgr(255, 255, 255), 2);
            img.Draw(blob.ID.ToString(), ref font, Point.Round(blob.Center), new Bgr(255, 255, 255));
        }

        Image<Gray, Byte> fg = bta.GetForgroundMask();
    });
}

public static Image<Gray, Byte> OptimizeBlobs(Image<Bgr, Byte> img)
{
    // histogram equalization can improve contrast, but is expensive for real-time capture
    img._EqualizeHist();

    // convert the frame to a temporary HSV image
    Image<Hsv, Byte> imgHSV = img.Convert<Hsv, Byte>();

    // split HSV into its channels
    Image<Gray, Byte>[] channels = imgHSV.Split();
    Image<Gray, Byte> imgHSV_saturation = channels[1];   // saturation channel
    Image<Gray, Byte> imgHSV_value      = channels[2];   // value channel

    // use the saturation and value channels to filter noise [you will need to tweak these values]
    Image<Gray, Byte> saturationFilter = imgHSV_saturation.InRange(new Gray(0), new Gray(80));
    Image<Gray, Byte> valueFilter = imgHSV_value.InRange(new Gray(200), new Gray(255));

    // combine the filters to get the final image to process
    Image<Gray, Byte> imgTarget = valueFilter.And(saturationFilter);

    return imgTarget;
}
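Since the question also asks for the IDs of tracked blobs, here is a minimal, framework-free sketch of one common ID-assignment strategy: greedily match each blob centroid in the new frame to the nearest unclaimed centroid from the previous frame, keeping its ID, and give unmatched blobs fresh IDs. Emgu's `BlobTrackerAuto` handles this internally; this plain-Python version is only to show the idea, and `max_dist` is an assumed tuning parameter.

```python
import math

def track_ids(prev, curr, next_id, max_dist=50.0):
    """Carry blob IDs across frames by nearest-centroid matching.

    prev: {id: (x, y)} centroids from the previous frame
    curr: list of (x, y) centroids detected in the current frame
    next_id: highest ID issued so far
    Returns ({id: (x, y)} for the current frame, updated next_id).
    """
    assigned = {}
    free = dict(prev)  # previous-frame blobs not yet matched
    for c in curr:
        best_id, best_d = None, max_dist
        for bid, p in free.items():
            d = math.dist(c, p)
            if d < best_d:
                best_id, best_d = bid, d
        if best_id is None:
            next_id += 1          # no previous blob nearby: new blob, new ID
            best_id = next_id
        else:
            del free[best_id]     # claimed: cannot match another centroid
        assigned[best_id] = c
    return assigned, next_id
```

Blobs whose IDs vanish from the result (those left in `free`) have left the scene; a production tracker would typically keep them alive for a few frames before retiring the ID.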