Surface reconstruction from a 3D point cloud: handling unwanted overlapping surfaces?
After reading the two nice posts
Algorithm for simplifying 3d surface?
https://stackoverflow.com/questions/838761/robust-algorithm-for-surface-reconstruction-from-3d-point-cloud
I still have a question about surface reconstruction.
I have some 3D point cloud data from a range camera. This means the point cloud data is noisy, has only coordinate (x, y, z) information, and represents only a partial surface of the scanned scene (a.k.a. 2.5D data).
Before trying to mesh them, I run an alignment algorithm (e.g. ICP) to merge multiple range scans into one. The alignment is not perfect, though: it leaves the merged data set with poorly overlapping surface artifacts, and the whole data set becomes even noisier!
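For concreteness, the merge step I mean is roughly like the following sketch. Open3D is used here purely for illustration; the file names and parameters are placeholders, not my actual pipeline:

```python
import numpy as np
import open3d as o3d

# Hypothetical scan files; the real input comes from the range camera.
scans = [o3d.io.read_point_cloud(f"scan_{i}.ply") for i in range(3)]

merged = scans[0]
for scan in scans[1:]:
    # Plain point-to-point ICP of each new scan against the growing merged cloud.
    result = o3d.pipelines.registration.registration_icp(
        scan, merged,
        max_correspondence_distance=0.02,   # placeholder, tune to the scan resolution
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    scan.transform(result.transformation)
    merged += scan   # residual misalignment shows up here as "onion shell" layers
```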
Here is an illustration:
Here are points representing a surface (shown as a line):
.....................................................
Here are points that actually represent the same surface as the one above,
but due to imperfect alignment of the multiple data sets they appear as overlapping layers, like an onion shell:
............................
.............................
...............................
.......................................
Can the reconstruction algorithms (e.g. ball pivoting, Poisson, marching cubes) handle such a situation?
Or do I need some preprocessing to thin out the data set and reduce the overlapping surfaces?
By the way, I have tried MeshLab with just ball pivoting to reconstruct a surface from such data sets.
It works, but some of the surface normals are generated in the wrong direction. I think the overlapping points cause this problem.
The surface generated in MeshLab; the white and black regions have normals pointing in different directions.
Thanks for any suggestion and possible answer.
1 Answer
I hope you're still interested in an answer. One thing you can try is to define your ICP using point-to-plane distance instead of point-to-point distance. With point-to-point distance, a and b are points in the target point set and p is a point from the set you are registering with ICP; the closest point is a and the distance is |a-p|.
With point-to-plane distance, c is the projection of p onto the line ab, and the distance is |c-p|.
The reason point-to-plane can be advantageous is in situations where the '.' points come from one scan and the 'o' points come from another. ICP can get stuck in a local minimum where the .'s and o's along a horizontal line are matched perfectly but the ones along a vertical line are not: it can't "move" the 'o's to the left, because doing so would increase the misalignment of the horizontal points too much.
With a point-to-plane distance, you would incur no error from the horizontal points as you slide the 'o's to the left, so you wouldn't get stuck in that local minimum. I have seen the onion-like error you described arise precisely from using point-to-point distance with ICP.
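Here is a minimal sketch of what point-to-plane ICP looks like in code, using Open3D purely as an example; the file names, correspondence distance, and normal-estimation radius are placeholders you would tune for your data:

```python
import numpy as np
import open3d as o3d

# Hypothetical file names; any two overlapping range scans will do.
source = o3d.io.read_point_cloud("scan_1.ply")
target = o3d.io.read_point_cloud("scan_0.ply")

# Point-to-plane ICP needs normals on the target cloud.
target.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Point-to-plane minimizes sum_i ((R*p_i + t - q_i) . n_i)^2, i.e. the residual
# measured along the target's surface normal, instead of sum_i |R*p_i + t - q_i|^2.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.02,   # placeholder, tune to the scan resolution
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

source.transform(result.transformation)
```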
Another thing you can try, if you can live with the reduction in resolution, is clustering your points. MeshLab has a filter that will do this: "Filters -> Sampling -> Clustered vertex Subsampling". That might be able to reduce the onion-like layering.
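If you would rather script that thinning step than run it in MeshLab, a grid-based downsampling is similar in spirit to the clustered vertex subsampling filter. A sketch with Open3D's voxel_down_sample, where the file name and cell size are placeholders:

```python
import open3d as o3d

# Hypothetical merged cloud produced by the ICP step above.
merged = o3d.io.read_point_cloud("merged_scans.ply")

# Keep one representative point per grid cell; the 5 mm cell size is a placeholder
# that trades off how much the onion layers collapse against how much detail is lost.
thinned = merged.voxel_down_sample(voxel_size=0.005)
```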
Regarding the inconsistent normals you are getting out of MeshLab: if all you care about is visualizing the mesh in MeshLab, Ctrl-D will turn on "double side lighting" and eliminate the black areas. If you really do need consistent normals, MeshLab has an enticingly named filter, "Normals, Curvatures and Orientation -> Reorient all faces coherently", which unfortunately didn't work for me. Depending on the kind of data you have, and especially if it comes from a range sensor, you already know that the normal of each mesh face should point toward your sensor, so it is easy to post-process the mesh and flip the faces pointing the wrong way (look at the sign of the dot product of the face normal and the viewing/measuring direction).
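As a sketch of that post-processing step, here is one way to flip wrongly oriented faces by the sign of that dot product. Open3D is again used only for illustration; the mesh file name and sensor position are placeholders for your own data:

```python
import numpy as np
import open3d as o3d

# Placeholders: the reconstructed mesh and the sensor position in the same frame.
mesh = o3d.io.read_triangle_mesh("ball_pivoting_result.ply")
sensor_pos = np.array([0.0, 0.0, 0.0])

mesh.compute_triangle_normals()
verts = np.asarray(mesh.vertices)
tris = np.asarray(mesh.triangles).copy()
normals = np.asarray(mesh.triangle_normals)

# Direction from each face centroid toward the sensor.
centroids = verts[tris].mean(axis=1)
to_sensor = sensor_pos - centroids

# A face points away from the sensor if normal . to_sensor < 0; flip its winding.
flip = np.einsum('ij,ij->i', normals, to_sensor) < 0
tris[flip] = tris[flip, ::-1]

mesh.triangles = o3d.utility.Vector3iVector(tris)
mesh.compute_triangle_normals()   # recompute so normals match the new winding
o3d.io.write_triangle_mesh("ball_pivoting_result_fixed.ply", mesh)
```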