How do I implement an optical flow tracker?

Posted 2024-09-25 05:28:15

I'm using the OpenCV wrapper - Emgu CV, and I'm trying to implement a motion tracker using Optical Flow, but I can't figure out a way to combine the horizontal and vertical information retrieved from the OF algorithm:

flowx = new Image<Gray, float>(size);
flowy = new Image<Gray, float>(size);

OpticalFlow.LK(currImg, prevImg, new Size(15, 15), flowx, flowy);

My problem is that I don't know how to combine the vertical and horizontal movement information in order to build a tracker for moving objects. Should I build a new image?

By the way, is there an easy way to display the flow info on the current frame?

Thanks in advance.

Comments (3)

眉目亦如画i 2024-10-02 05:28:15

Here is the function I have defined in my YouTube head movement tracker video tutorial. You can find the full source code attached to the video.

void ComputeDenseOpticalFlow()
    {
        // Compute dense optical flow using the Horn-Schunck algorithm
        velx = new Image<Gray, float>(faceGrayImage.Size);
        vely = new Image<Gray, float>(faceNextGrayImage.Size);

        OpticalFlow.HS(faceGrayImage, faceNextGrayImage, true, velx, vely, 0.1d, new MCvTermCriteria(100));

        #region Dense Optical Flow Drawing
        // Sample the flow field on a coarse grid, one vector per 10x10 window
        Size winSize = new Size(10, 10);
        vectorFieldX = (int)Math.Round((double)faceGrayImage.Width / winSize.Width);
        vectorFieldY = (int)Math.Round((double)faceGrayImage.Height / winSize.Height);
        sumVectorFieldX = 0f;
        sumVectorFieldY = 0f;
        vectorField = new PointF[vectorFieldX][];
        for (int i = 0; i < vectorFieldX; i++)
        {
            vectorField[i] = new PointF[vectorFieldY];
            for (int j = 0; j < vectorFieldY; j++)
            {
                // The Image indexer is [row, column] (i.e. [y, x]), so both
                // components must be sampled at the same grid point.
                Gray velx_gray = velx[j * winSize.Height, i * winSize.Width];
                float velx_float = (float)velx_gray.Intensity;
                Gray vely_gray = vely[j * winSize.Height, i * winSize.Width];
                float vely_float = (float)vely_gray.Intensity;

                // Accumulate the components to estimate the dominant motion of the region
                sumVectorFieldX += velx_float;
                sumVectorFieldY += vely_float;
                vectorField[i][j] = new PointF(velx_float, vely_float);

                // Mark the grid point with a small red cross...
                Cross2DF cr = new Cross2DF(
                    new PointF((i * winSize.Width) + trackingArea.X,
                               (j * winSize.Height) + trackingArea.Y),
                               1, 1);
                opticalFlowFrame.Draw(cr, new Bgr(Color.Red), 1);

                // ...and draw the flow vector as a yellow line segment
                LineSegment2D ci = new LineSegment2D(
                    new Point((i * winSize.Width) + trackingArea.X,
                              (j * winSize.Height) + trackingArea.Y),
                    new Point((int)((i * winSize.Width) + trackingArea.X + velx_float),
                              (int)((j * winSize.Height) + trackingArea.Y + vely_float)));
                opticalFlowFrame.Draw(ci, new Bgr(Color.Yellow), 1);
            }
        }
        #endregion
    }
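
The function above only accumulates the per-window flow components; a minimal sketch of how those sums could then drive the tracker, assuming trackingArea is a System.Drawing.Rectangle field and treating the 1-pixel threshold as an arbitrary noise cutoff:

// Hypothetical follow-up: average the accumulated flow to estimate the
// dominant motion of the tracked region, then shift the window by it.
float cellCount = vectorFieldX * vectorFieldY;
float meanDx = sumVectorFieldX / cellCount;
float meanDy = sumVectorFieldY / cellCount;
if (Math.Abs(meanDx) > 1f || Math.Abs(meanDy) > 1f)   // ignore sub-pixel jitter
{
    trackingArea.Offset((int)meanDx, (int)meanDy);    // assumes trackingArea is a Rectangle
}
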
冰之心 2024-10-02 05:28:15

Optical flow visualization. The common approach is to use a color-coded 2D flow field: display the flow as an image where pixel intensity corresponds to the magnitude of the flow at that pixel, while the hue reflects its direction.
See Fig. 2 in [Baker et al., 2009].
Another way is to draw the flow vectors on a grid over the first image (say, every 10 pixels).
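
As an illustration of the color-coded visualization with Emgu CV, here is a minimal sketch using the flowx/flowy images from the question (the per-pixel loop is written for clarity rather than speed, and the 0-179 hue range follows OpenCV's 8-bit HSV convention):

// Build an HSV image where hue encodes flow direction and value encodes magnitude.
Image<Hsv, byte> flowColor = new Image<Hsv, byte>(flowx.Size);
float[,,] fx = flowx.Data;
float[,,] fy = flowy.Data;

// First pass: find the largest magnitude so values can be normalized to 0..255.
double maxMag = 1e-6;
for (int y = 0; y < flowx.Rows; y++)
    for (int x = 0; x < flowx.Cols; x++)
        maxMag = Math.Max(maxMag, Math.Sqrt(fx[y, x, 0] * fx[y, x, 0] + fy[y, x, 0] * fy[y, x, 0]));

// Second pass: direction -> hue, normalized magnitude -> value.
for (int y = 0; y < flowx.Rows; y++)
{
    for (int x = 0; x < flowx.Cols; x++)
    {
        double dx = fx[y, x, 0], dy = fy[y, x, 0];
        double mag = Math.Sqrt(dx * dx + dy * dy);
        double angle = Math.Atan2(dy, dx);                      // -pi .. pi
        double hue = (angle + Math.PI) / (2 * Math.PI) * 179.0; // map to 0..179
        flowColor[y, x] = new Hsv(hue, 255, 255.0 * mag / maxMag);
    }
}
Image<Bgr, byte> flowBgr = flowColor.Convert<Bgr, byte>();      // ready to show or overlay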

Combining x and y. It is not clear what you mean here. The pixel (x, y) in the first image moves to (x + flowx, y + flowy) in the second one. So, to track an object, you fix its position in the first image and add the flow value at that position to get its position in the second image.
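
A minimal sketch of that idea with the flowx/flowy images from the question; objectPos is a hypothetical tracked point, and the flow is simply sampled at its current (integer) position each frame:

// Advance a tracked point by the flow sampled at its current position.
// Assumes flowx/flowy come from the OpticalFlow.LK call in the question
// and that the point stays inside the image bounds.
PointF objectPos = new PointF(120f, 80f);                  // hypothetical starting position
int px = (int)Math.Round(objectPos.X);
int py = (int)Math.Round(objectPos.Y);
float dx = (float)flowx[py, px].Intensity;                 // Image indexer is [row, column]
float dy = (float)flowy[py, px].Intensity;
objectPos = new PointF(objectPos.X + dx, objectPos.Y + dy);   // position in the next frame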

旧人哭 2024-10-02 05:28:15

There are several well-known optical flow algorithms. One that may be good for you is Lucas-Kanade. You can find a MATLAB source here.
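
Since the question uses Emgu CV rather than MATLAB, here is a minimal sparse Lucas-Kanade sketch; it assumes the Emgu 2.x OpticalFlow.PyrLK overload, and the feature points, window size, pyramid level, and termination criteria are arbitrary example values:

// Track a set of feature points from prevImg to currImg with pyramidal Lucas-Kanade.
// In practice prevFeatures would come from a corner detector such as GoodFeaturesToTrack.
PointF[] prevFeatures = new PointF[] { new PointF(120f, 80f), new PointF(200f, 150f) };
PointF[] currFeatures;
byte[] status;        // 1 where the corresponding point was found in currImg
float[] trackError;

OpticalFlow.PyrLK(prevImg, currImg, prevFeatures, new Size(15, 15), 3,
                  new MCvTermCriteria(20, 0.03d), out currFeatures, out status, out trackError);

for (int k = 0; k < currFeatures.Length; k++)
{
    if (status[k] == 1)
    {
        // currFeatures[k] is the new position of prevFeatures[k];
        // the difference between them is that point's motion vector.
    }
}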
