Emgu CV - How can I get all occurrences of a pattern in an image

Posted 2024-12-14 16:03:11

Hi, I already have a working solution, but there is one issue:

            // The screenshot will be stored in this bitmap.
            Bitmap capture = new Bitmap(rec.Width, rec.Height, PixelFormat.Format24bppRgb);
            using (Graphics g = Graphics.FromImage(capture))
            {
                g.CopyFromScreen(rec.Location, new System.Drawing.Point(0, 0), rec.Size);
            }

            MCvSURFParams surfParam = new MCvSURFParams(500, false);
            SURFDetector surfDetector = new SURFDetector(surfParam);

            // Template image 
            Image<Gray, Byte> modelImage = new Image<Gray, byte>("template.jpg");
            // Extract features from the object image
            ImageFeature[] modelFeatures = surfDetector.DetectFeatures(modelImage, null);

            // Prepare current frame
            Image<Gray, Byte> observedImage = new Image<Gray, byte>(capture);
            ImageFeature[] imageFeatures = surfDetector.DetectFeatures(observedImage, null);


            // Create a SURF Tracker using k-d Tree
            Features2DTracker tracker = new Features2DTracker(modelFeatures);

            Features2DTracker.MatchedImageFeature[] matchedFeatures = tracker.MatchFeature(imageFeatures, 2);
            matchedFeatures = Features2DTracker.VoteForUniqueness(matchedFeatures, 0.8);
            matchedFeatures = Features2DTracker.VoteForSizeAndOrientation(matchedFeatures, 1.5, 20);
            HomographyMatrix homography = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(matchedFeatures);

            // Merge the object image and the observed image into one image for display
            Image<Gray, Byte> res = modelImage.ConcateVertical(observedImage);

            #region draw lines between the matched features

            foreach (Features2DTracker.MatchedImageFeature matchedFeature in matchedFeatures)
            {
                PointF p = matchedFeature.ObservedFeature.KeyPoint.Point;
                p.Y += modelImage.Height;
                res.Draw(new LineSegment2DF(matchedFeature.SimilarFeatures[0].Feature.KeyPoint.Point, p), new Gray(0), 1);
            }

            #endregion

            #region draw the project region on the image

            if (homography != null)
            {
                // draw a rectangle along the projected model
                Rectangle rect = modelImage.ROI;
                PointF[] pts = new PointF[] { 
                    new PointF(rect.Left, rect.Bottom),
                    new PointF(rect.Right, rect.Bottom),
                    new PointF(rect.Right, rect.Top),
                    new PointF(rect.Left, rect.Top)
                };

                homography.ProjectPoints(pts);

                for (int i = 0; i < pts.Length; i++)
                    pts[i].Y += modelImage.Height;

                res.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Gray(255.0), 2);
            }

            #endregion

            pictureBoxScreen.Image = res.ToBitmap();

the result is:

*(screenshot: the template matched against the captured screen, with the detected occurrence outlined by a white rectangle)*

And my problem is that the function homography.ProjectPoints(pts); only projects the first occurrence of the pattern (the white rectangle in the picture above).

How can I project all occurrences of the template, i.e. how can I get every occurrence of the template's rectangle in the image?


Answered by 在你怀里撒娇 on 2024-12-21 16:03:14


I faced a problem similar to yours in my master's thesis. Basically you have two options:

  1. Use a clustering algorithm such as hierarchical k-means, or a point-density one such as DBSCAN (it depends on two parameters, but you can make it threshold-free in the bidimensional R^2 space).
  2. Use a multiple robust model fitting estimation technique such as J-Linkage. With this more advanced technique you cluster points that share a homography, instead of clustering points that are close to each other in Euclidean space.

Once you have partitioned your matches into "clusters", you can estimate a homography from the matches belonging to each cluster.
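The first option can be sketched with a minimal, plain-Python DBSCAN over the (x, y) locations of the matched features: each resulting cluster stands for one occurrence of the template, and you would then estimate a separate homography from the matches of each cluster (in Emgu CV, by calling GetHomographyMatrixFromMatchedFeatures once per cluster). This is an illustration only, not the question's C# pipeline; the point coordinates, eps, and min_pts values below are made up.

```python
# Minimal DBSCAN over 2-D match coordinates (plain Python, no sklearn).
# Hypothetical data: three tight groups of matched-feature locations
# stand in for three occurrences of the template in the screenshot.
import math

def dbscan(points, eps, min_pts):
    """Return one cluster label per point; -1 marks noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1           # noise (a cluster may still claim it later)
            continue
        cluster += 1                 # i is a core point: start a new cluster
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbors(j)) >= min_pts:
                queue.extend(neighbors(j))  # j is also core: expand further
    return labels

# Three synthetic "occurrences" of the template.
matches = [(10, 12), (14, 9), (11, 15),         # occurrence 1
           (200, 205), (204, 201), (199, 208),  # occurrence 2
           (400, 40), (396, 44), (402, 38)]     # occurrence 3

labels = dbscan(matches, eps=20, min_pts=2)
print(len({c for c in labels if c != -1}))  # → 3 clusters, one per occurrence
```

With the matches split this way, running the homography estimation per cluster yields one projected rectangle per occurrence instead of a single one.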
