Fiducial marker detection in the presence of camera shake

Published 2025-01-06 14:13:13


I'm trying to make my OpenCV-based fiducial marker detection more robust when the user moves the camera (phone) violently. Markers are ArTag-style with a Hamming code embedded within a black border. Borders are detected by thresholding the image, then looking for quads based on the found contours, then checking the internals of the quads.
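For reference, a minimal sketch of that pipeline using OpenCV's Python bindings; the adaptive-threshold parameters and the minimum-area cut-off are illustrative placeholders, not the settings actually in use:

```python
import cv2

def find_quads(gray):
    # Threshold (inverted, so the black border becomes foreground),
    # then keep convex 4-vertex contours large enough to decode.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 15, 7)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.03 * peri, True)
        if len(approx) == 4 and cv2.isContourConvex(approx) \
                and cv2.contourArea(approx) > 100:
            quads.append(approx.reshape(4, 2))
    return quads  # candidate borders; decoding the interior happens afterwards
```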

In general, decoding of the marker is fairly robust if the black border is recognized. I've tried the most obvious thing, which is downsampling the image twice and performing quad-detection on those levels as well. This helps with camera defocus on extreme nearground markers, and with very small amounts of image blur, but doesn't hugely help the general case of camera motion blur.
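Roughly, the pyramid variant looks like this; `find_quads` is the helper from the previous sketch, and the number of levels is arbitrary:

```python
import cv2
import numpy as np

def find_quads_pyramid(gray, levels=3):
    # Run the same quad search on progressively halved images and map
    # the corners back into full-resolution coordinates.
    quads, img, scale = [], gray, 1.0
    for _ in range(levels):
        quads += [q.astype(np.float32) * scale for q in find_quads(img)]
        img = cv2.pyrDown(img)
        scale *= 2.0
    return quads
```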

Is there available research on ways to make detection more robust? Ideas I'm wondering about include:

  1. Can you do some sort of optical flow tracking to "guess" the positions of the marker in the next frame, then some sort of corner detection in the region of those guesses, rather than treating the rectangle search as full-frame thresholding? (See the sketch after this list.)
  2. On PCs, is it possible to derive blur coefficients (perhaps by registration with recent video frames where the marker was detected) and deblur the image prior to processing?
  3. On smartphones, is it possible to use the gyroscope and/or accelerometers to get deblurring coefficients and pre-process the image? (I'm assuming not, simply because if it were, the market would be flooded with shake-correcting camera apps.)
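A hedged sketch of what idea 1 could look like with pyramidal Lucas-Kanade flow; `prev_corners` is assumed to be the 4x2 corner array from the last frame where the marker decoded, `find_quads` is the helper from the first sketch, and the window size and ROI padding are guesses:

```python
import cv2
import numpy as np

def relocalise(prev_gray, curr_gray, prev_corners, pad=20):
    # Propagate last frame's corners with LK flow to get a coarse guess.
    pts = prev_corners.reshape(-1, 1, 2).astype(np.float32)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None, winSize=(21, 21), maxLevel=3)
    if status.sum() < 4:
        return None  # flow lost: fall back to a full-frame search
    guess = next_pts.reshape(4, 2)
    # Re-run the quad search only in a padded box around the guess.
    x, y, w, h = cv2.boundingRect(guess.astype(np.int32))
    x0, y0 = max(x - pad, 0), max(y - pad, 0)
    roi = curr_gray[y0:y + h + pad, x0:x + w + pad]
    return [q + np.array([x0, y0]) for q in find_quads(roi)]
```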

Links to failed ideas would also be appreciated if it saves me trying them.


Comments (2)

挽梦忆笙歌 2025-01-13 14:13:13

  1. Yes, you can use optical flow to estimate where the marker might be and localise your search, but that's only relocalisation; your tracking will have broken for the blurred frames.
  2. I don't know enough about deblurring except to say it's very computationally intensive, so real-time might be difficult.
  3. You can use the sensors to guess the sort of blur you're faced with, but I would guess deblurring is too computationally expensive for mobile devices in real time.

Then some other approaches:

There is some really smart stuff in here: http://www.robots.ox.ac.uk/~gk/publications/KleinDrummond2004IVC.pdf where they're doing edge detection (which could be used to find your marker borders, even though you're looking for quads right now), modelling the camera movements from the sensors, and using those values to estimate how an edge in the direction of blur should appear given the frame-rate, and searching for that. Very elegant.
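As a rough, generic illustration of that idea (not the paper's actual model): if the sensors give an estimated blur length and direction for the frame, you can build a linear motion-blur kernel and predict the ramp a previously sharp edge should now have, then search for that profile instead of a crisp step:

```python
import cv2
import numpy as np

def motion_blur_kernel(length_px, angle_deg):
    # Line-shaped PSF of the given length, rotated to the blur direction.
    k = np.zeros((length_px, length_px), np.float32)
    k[length_px // 2, :] = 1.0
    centre = ((length_px - 1) / 2.0, (length_px - 1) / 2.0)
    rot = cv2.getRotationMatrix2D(centre, angle_deg, 1.0)
    k = cv2.warpAffine(k, rot, (length_px, length_px))
    return k / max(k.sum(), 1e-6)

def predicted_edge_patch(length_px, angle_deg, size=64):
    # A synthetic vertical step edge blurred by the estimated PSF: roughly
    # what the marker border should look like in the blurred frame.
    step = np.zeros((size, size), np.float32)
    step[:, size // 2:] = 255.0
    return cv2.filter2D(step, -1, motion_blur_kernel(length_px, angle_deg))
```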

Similarly here http://www.eecis.udel.edu/~jye/lab_research/11/BLUT_iccv_11.pdf they just pre-blur the tracking targets and try to match the blurred targets that are appropriate given the direction of blur. They use Gaussian filters to model blur, which are symmetrical, so you need half as many pre-blurred targets as you might initially expect.
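A minimal sketch of that pre-blurring idea, using isotropic cv2.GaussianBlur and cv2.matchTemplate as stand-ins for the paper's directional blur model and matching; the sigma values are arbitrary, and directional kernels would only need to cover 0-180 degrees, which is where the "half as many" saving comes from:

```python
import cv2

def build_blur_bank(template, sigmas=(1.5, 3.0, 4.5)):
    # Pre-blur the known marker template at a few blur levels.
    return [(0.0, template)] + [
        (s, cv2.GaussianBlur(template, (0, 0), s)) for s in sigmas]

def best_blur_match(region, bank):
    # Match the candidate region against each blurred template and keep the
    # best normalised-correlation score (region must be >= template in size).
    best_score, best_sigma = -1.0, None
    for sigma, tmpl in bank:
        score = float(cv2.matchTemplate(region, tmpl,
                                        cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_score, best_sigma = score, sigma
    return best_score, best_sigma
```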

If you do try implementing any of these, I'd be really interested to hear how you get on!

深陷 2025-01-13 14:13:13


From some related work (attempting to use sensors/gyroscope to predict the likely location of features from one frame to the next in video) I'd say that 3 is likely to be difficult, if not impossible. I think at best you could get an indication of the approximate direction and angle of motion, which may help you model blur using the approaches referenced by dabhaid, but I think it unlikely you'd get sufficient precision to be of much more help.
