I hope you guys are doing well.
I have a Livox Mid-70 LiDAR, which has a scan pattern like this:
scan_pattern (the pattern depends on time and builds up the whole scene over several sweeps).
I used ROS to fetch the data from a particular topic and create a NumPy array:
import numpy as np
import ros_numpy as rNp

def callback(data):
    # numpify turns a PointCloud2 message into a structured array
    pc = rNp.numpify(data)
    points = np.zeros((pc.shape[0], 4))
    points[:, 0] = pc['x']
    points[:, 1] = pc['y']
    points[:, 2] = pc['z']
    points[:, 3] = pc['intensity']
    po = points.astype(np.float32)
Then I create an (x, y) array which contains the X and Y coordinates of that point cloud data, and I try to scale it like this:
p = (arr/np.max(arr)*255).astype(np.uint8)  # arr = (x, y) numpy array
But unfortunately it doesn't give me any understandable picture.
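A likely reason the picture is unreadable: casting the coordinate values themselves to uint8 turns positions into pixel brightness instead of pixel locations. The usual fix is to bin each point into a cell of a fixed top-view grid. A minimal sketch (the window size, metric ranges and resolution here are illustrative values I chose, not Mid-70 specifics):

```python
import numpy as np

def points_to_image(xy, img_size=400, side_range=(-10.0, 10.0), fwd_range=(0.0, 20.0)):
    """Rasterize scattered (x, y) LiDAR points into a top-view uint8 image."""
    x, y = xy[:, 0], xy[:, 1]
    # Keep only points inside the chosen metric viewing window
    keep = ((x >= fwd_range[0]) & (x < fwd_range[1]) &
            (y >= side_range[0]) & (y < side_range[1]))
    x, y = x[keep], y[keep]
    # Map metric coordinates to pixel indices
    res = (fwd_range[1] - fwd_range[0]) / img_size   # meters per pixel
    row = img_size - 1 - ((x - fwd_range[0]) / res).astype(np.int32)  # forward = up
    col = ((y - side_range[0]) / res).astype(np.int32)
    img = np.zeros((img_size, img_size), dtype=np.uint8)
    img[row, col] = 255  # each occupied cell becomes a white pixel
    return img
```

Each occupied grid cell becomes a white pixel; with a non-repetitive pattern like the Mid-70's, the image fills in as more points accumulate.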
Then I tried the ROS command:
rosrun pcl_ros convert_pointcloud_to_image input:=/livox/lidar output:=/img
but the error msg is:
[ERROR] [1651119689.192807544]: Input point cloud is not organized, ignoring!
I saw a technique in MATLAB, i.e. pcorganize, but to use it I need to give it some parameters, like:
params = lidarParameters(sensorName,horizontalResolution)
params = lidarParameters(verticalResolution,verticalFoV,horizontalResolution)
params = lidarParameters(verticalBeamAngles,horizontalResolution)
params = lidarParameters(___,HorizontalFoV=horizontalFoV)
But this LiDAR doesn't have any fixed horizontal or vertical resolution or beam angles, so maybe I can't use this function to organize this point cloud data.
My questions:
- How can I organize this unorganized point cloud data and create an image from it?
- Is it possible to view this image with cv2.imshow()?
Answers:
Buffer the PointCloud2 messages from the LiDAR in a custom node (best in Python), if it's needed (only when the scan pattern needs time to complete a whole scan, e.g. 2 s for a full scan). Take a look at Velodyne LiDARs: they don't need it, because they are plane-scanning LiDARs, i.e. they deliver a complete, detailed image from the first second of running. As I mentioned, buffering is only necessary when your LiDAR has a Lissajous scan pattern (https://www.nature.com/articles/s41598-017-13634-3) or another pattern which needs time to do a complete scene scan.
Case 1: the scan pattern needs time: use a custom buffering time (1 s, 2 s or 4 s..., chosen to match your attached scan pattern). Then you have a whole scan.
Case 2: the scan pattern doesn't need any buffering time: you get a rich, detailed scan from the beginning, and buffering normally isn't necessary.
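The buffering idea for Case 1 can be sketched like this (the class is my own illustration, not part of any ROS package; in a real node you would call add() from the PointCloud2 callback with the (N, 4) array from the question, and read full_cloud() on a timer matching your chosen buffer time):

```python
import numpy as np

class CloudBuffer:
    """Accumulates partial Lissajous-pattern scans into one dense cloud."""

    def __init__(self):
        self._chunks = []

    def add(self, points):
        """Append one (N, 4) array of x, y, z, intensity from a callback."""
        self._chunks.append(np.asarray(points, dtype=np.float32))

    def full_cloud(self):
        """Concatenate everything received since the last clear()."""
        return np.concatenate(self._chunks, axis=0)

    def clear(self):
        """Start a new buffering window."""
        self._chunks = []
```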
In the next step you should use this node:
https://github.com/mjshiggins/ros-examples/blob/master/src/lidar/src/lidar_node.cpp
This node takes your PointCloud2 message and generates an image from a bird's eye view. I used this node in my thesis and changed a few things in it for easier understanding and a better solution. I can share this file with you, or you can explore this node yourself and ask for help from time to time, for example with changing the LiDAR position, changing cell_resolution (the zoom of the top view in your image), the image color and so on. I also added encoding of the image color channels with the data the LiDAR provides: intensity, density and height (the original code only encodes height).
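The channel-encoding idea can be sketched in Python as well (this is my own approximation of the scheme, not the node's actual C++ code, and the metric ranges and resolution are illustrative values): height, intensity and point density each fill one channel of the bird's-eye-view image.

```python
import numpy as np

def bev_rgb(points, img_size=400, side_range=(-10.0, 10.0),
            fwd_range=(0.0, 20.0), z_range=(-2.0, 2.0)):
    """Build a 3-channel bird's-eye-view image from an (N, 4) cloud:
    channel 0 = height, channel 1 = intensity, channel 2 = density."""
    x, y, z, inten = points[:, 0], points[:, 1], points[:, 2], points[:, 3]
    # Keep only points inside the chosen metric window
    keep = ((x >= fwd_range[0]) & (x < fwd_range[1]) &
            (y >= side_range[0]) & (y < side_range[1]))
    x, y, z, inten = x[keep], y[keep], z[keep], inten[keep]
    res = (fwd_range[1] - fwd_range[0]) / img_size   # meters per pixel
    row = img_size - 1 - ((x - fwd_range[0]) / res).astype(np.int32)
    col = ((y - side_range[0]) / res).astype(np.int32)
    img = np.zeros((img_size, img_size, 3), dtype=np.uint8)
    # Height: clip z into z_range, scale to 0..255, keep the max per cell
    h = np.clip((z - z_range[0]) / (z_range[1] - z_range[0]), 0, 1) * 255
    np.maximum.at(img[..., 0], (row, col), h.astype(np.uint8))
    # Intensity: keep the strongest return per cell
    np.maximum.at(img[..., 1], (row, col), np.clip(inten, 0, 255).astype(np.uint8))
    # Density: log-scaled point count per cell
    counts = np.zeros((img_size, img_size), dtype=np.float32)
    np.add.at(counts, (row, col), 1.0)
    img[..., 2] = (np.clip(np.log1p(counts) / np.log(64), 0, 1) * 255).astype(np.uint8)
    return img
```

np.maximum.at and np.add.at are used because several points usually fall into the same cell, and plain fancy-index assignment would silently keep only the last one.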
With OpenCV you can look at this image in an open window (cv2.imshow), or visualize it as a published topic in RViz.