Dense pixel inverse projection

Posted 2024-07-04 04:28:18


I saw a question about back-projecting 4 2D points to derive the corners of a rectangle in 3D space. I have a somewhat more general version of the same problem:

Given either a focal length (which can be converted into an angular resolution in arcseconds per pixel) or the intrinsic camera matrix (a 3x3 matrix that defines the properties of the pinhole camera model being used; it's directly related to focal length), compute the camera ray that goes through each pixel.
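A minimal sketch of that per-pixel back-projection, assuming a standard 3x3 pinhole intrinsic matrix K and numpy (the function name, the example K, and the pixel-centre convention are illustrative, not from the question):

```python
import numpy as np

def pixel_to_ray(K, u, v):
    """Unit-length camera-frame ray through pixel (u, v) for a pinhole
    camera with 3x3 intrinsic matrix K. The camera centre is the ray origin."""
    pixel = np.array([u + 0.5, v + 0.5, 1.0])   # homogeneous coordinates of the pixel centre
    ray = np.linalg.inv(K) @ pixel              # back-project through the pinhole model
    return ray / np.linalg.norm(ray)            # normalise to a unit direction

# Example: a 640x480 sensor, ~500 px focal length, principal point at the image centre.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
print(pixel_to_ray(K, 10, 20))
```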

I'd like to take a series of frames, derive the candidate rays from each frame, and use some sort of iterative solving approach to derive the camera pose for each frame (given a sufficiently large sample, of course)... All of that is really just a massively-parallel implementation of a generalized Hough algorithm... it's getting the candidate rays in the first place that I'm having trouble with...
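As a concrete (if simplified) stand-in for "some sort of iterative solving approach" - not the generalized-Hough accumulation described above - here is a sketch that fits a pose to candidate rays with scipy's least_squares, assuming hypothetical known 3D correspondences for each ray; every name here is illustrative:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_pose_from_rays(rays_cam, points_world, pose0=None):
    """Fit a camera pose so that per-pixel rays (camera frame) point at known
    3D world points. rays_cam: (N, 3) unit directions; points_world: (N, 3)
    corresponding points; pose0: optional initial guess [rotvec (3), centre (3)]."""
    if pose0 is None:
        pose0 = np.zeros(6)

    def residuals(pose):
        R = Rotation.from_rotvec(pose[:3]).as_matrix()   # camera-to-world rotation
        C = pose[3:]                                     # camera centre in world coordinates
        d = (points_world - C) @ R                       # R.T @ (X - C) for each point
        d /= np.linalg.norm(d, axis=1, keepdims=True)
        return (d - rays_cam).ravel()                    # mismatch between predicted and observed rays

    result = least_squares(residuals, pose0)
    return Rotation.from_rotvec(result.x[:3]).as_matrix(), result.x[3:]
```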

Comments (3)

生生漫 2024-07-11 04:28:18


A friend of mine found the source code from a university for the camera matching in PhotoSynth. I'd Google around for it, if I were you.

浪荡不羁 2024-07-11 04:28:18


After a little poking around, isn't it the extrinsic matrix that tells you where the camera actually is in 3-space?

I worked at a company that did a lot of this, but I always used the tools that the algorithm guys wrote. :)
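For reference, the extrinsics do encode that: with the usual convention x_cam = R @ x_world + t, the camera centre in world coordinates is C = -R^T t. A tiny numpy sketch of that (the function name is mine, not from any particular library):

```python
import numpy as np

def camera_center(extrinsic):
    """Camera centre in world coordinates from a 3x4 extrinsic matrix [R | t],
    under the convention x_cam = R @ x_world + t, so C = -R.T @ t."""
    R, t = extrinsic[:, :3], extrinsic[:, 3]
    return -R.T @ t
```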

素手挽清风 2024-07-11 04:28:18


That's a good suggestion... and I will definitely look into it (PhotoSynth kind of re-sparked my interest in this subject, but I've been working on it for months for RoboChamps) - but it's a sparse implementation: it looks for "good" features (points in the image that should be easily identifiable in other views of the same scene), and while I certainly plan to score each match based on how good the matched feature is, I want a fully dense algorithm that derives a ray for every pixel... or should I say voxel, lol?
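For the dense case, the single-pixel helper sketched above extends naturally to every pixel at once; a vectorised sketch, again assuming numpy and a 3x3 intrinsic matrix K (names and conventions are illustrative):

```python
import numpy as np

def dense_ray_map(K, width, height):
    """Back-project every pixel of a width x height image through the 3x3
    intrinsic matrix K, returning a (height, width, 3) array of unit ray
    directions in the camera frame."""
    u, v = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1)         # homogeneous pixel centres
    rays = pixels @ np.linalg.inv(K).T                          # apply K^-1 to every pixel
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)  # normalise to unit length
```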
