Finding the edges of a 2D point cloud and painting the area inside
I am currently building a car simulator environment in Carla, and I'm trying to create a bird's-eye view of the surroundings using semantic camera segmentation and lidar.
Here you can find images that more easily explain what I want:
The green dots represent the road, while the red dots are road lines. The blue dots are cars, and you can notice empty space between them, since those are positions the lidar did not pick up.
My question is: what would be the best way to connect these dots, or to find the edges and "paint" the area inside? In other words, convert all [0,0,0] pixels enclosed by the green points to the same pixel value as the green points, thus painting the whole road, for instance.
I'd also like to do the same with the red values, which are the lanes: intensify them and connect them to form better visible lines.
I have the data as a list of coordinates relative to the center, or simply as an image matrix. This is the snippet of code that transforms the lidar data into this image.
import numpy as np
import cv2

LIDAR_RANGE = 100
lidar_data = ...  # lidar points of shape [N_points, 4]:
                  # x, y, z plus a semantic tag per point
                  # (the loop below checks tags 4, 6, 7, 10, 18)
disp_size = [400, 500]
lidar_range = float(LIDAR_RANGE) * 2.0
points = np.array(lidar_data, dtype=np.float64)  # copy, so the raw lidar data is not modified in place
points[:, :2] *= min(disp_size) / lidar_range
points[:, :2] += (0.5 * disp_size[0], 0.5 * disp_size[1])
points[:, :2] = np.fabs(points[:, :2])
points = points.astype("int32")
lidar_img_size = (disp_size[0], disp_size[1], 3)
# uint8, not int8: color values such as 220 overflow a signed byte,
# and cv2.imshow expects uint8 images
lidar_img = np.zeros(lidar_img_size, dtype=np.uint8)
for point in points:  # TODO: replace this loop with vectorized numpy indexing
    if point[3] == 4:
        lidar_img[point[0]][point[1]] = [220, 20, 60]
    elif point[3] == 6:
        lidar_img[point[0]][point[1]] = [157, 234, 50]
    elif point[3] == 7:
        lidar_img[point[0]][point[1]] = [128, 64, 128]
    elif point[3] == 10:
        lidar_img[point[0]][point[1]] = [0, 0, 142]
    elif point[3] == 18:
        lidar_img[point[0]][point[1]] = [250, 170, 30]
    else:
        lidar_img[point[0]][point[1]] = [0, 0, 0]
lidar_img = np.flip(lidar_img, axis=0)
cv2.imshow("3", lidar_img)
cv2.waitKey(1)
return lidar_img
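As an aside, the per-point loop above (flagged "need to transform this to numpy") can be vectorized with fancy indexing: group the points by semantic tag and write each group's pixels in one assignment. A sketch under the same assumptions as the snippet; `render_lidar` and the tag-to-color table are illustrative names:

```python
import numpy as np

# Tag -> color lookup, taken from the if/elif chain above.
COLORS = {
    4: (220, 20, 60),
    6: (157, 234, 50),
    7: (128, 64, 128),
    10: (0, 0, 142),
    18: (250, 170, 30),
}

def render_lidar(points, disp_size=(400, 500)):
    img = np.zeros((disp_size[0], disp_size[1], 3), dtype=np.uint8)
    # Keep only points that land inside the image, to avoid index errors.
    inside = ((points[:, 0] >= 0) & (points[:, 0] < disp_size[0]) &
              (points[:, 1] >= 0) & (points[:, 1] < disp_size[1]))
    pts = points[inside]
    # One vectorized write per semantic tag instead of one write per point.
    for tag, color in COLORS.items():
        sel = pts[pts[:, 3] == tag]
        img[sel[:, 0], sel[:, 1]] = color
    return np.flip(img, axis=0)
```

Unrecognized tags simply stay black, matching the `else` branch of the original loop.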