ROS Noetic Python: subscriber queueing delay, changing the subscriber queue_size has no effect
I am writing an object detection node in ROS 1 Noetic using Python to annotate images recorded from a Realsense D455 camera. Object detection is performed by a neural network which takes ~60-70 ms to run. When I run the node, the delay reported by rostopic delay quickly increases to ~700 ms, as shown below.
# rostopic delay /obj_det/annotations
subscribed to [/obj_det/annotations]
average delay: 0.331
min: 0.173s max: 0.484s std dev: 0.09673s window: 14
average delay: 0.501
min: 0.173s max: 0.685s std dev: 0.16728s window: 31
average delay: 0.562
min: 0.173s max: 0.706s std dev: 0.15640s window: 48
average delay: 0.585
min: 0.173s max: 0.706s std dev: 0.14014s window: 65
average delay: 0.599
min: 0.173s max: 0.706s std dev: 0.12700s window: 83
This looks like a classic case of messages being published to the input topic faster than the node can consume them, so that delay builds up as the queue fills. Since the Python code takes 60-70 ms to run, a queue of size 10 (the ROS default) would quickly fill up, and each image would then wait roughly 10x the processing time before the node gets to it (with ~65 ms per image that is about 650 ms, which matches the ~700 ms plateau above). However, I clearly set the subscriber queue size to 1, so no such delay should occur.
class ObjectDetectionNode():
    def __init__(self):
        # ... other code here
        self.image_output_topic = "/obj_det/annotated"
        self.detector = Detector(options)
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber(self.image_input_topic, Image, self.callback, queue_size=1)
        self.obj_det_pub = rospy.Publisher(self.track_output_topic, ObjDet3DArray, queue_size=1)
        self.output_image_pub = rospy.Publisher(self.image_output_topic, Image, queue_size=1)
To test the queue delay, I wrote a dummy callback method that simply re-publishes the same image:
def callback(self, img_msg):
    img = self.bridge.imgmsg_to_cv2(img_msg, 'bgr8')
    output_img_msg = self.bridge.cv2_to_imgmsg(img, encoding='bgr8')
    output_img_msg.header.stamp = img_msg.header.stamp
    self.output_image_pub.publish(output_img_msg)
    time.sleep(1)
    return
With a queue size of 1, I would not expect this callback to accumulate any additional delay, even though it sleeps for a whole second: the stale messages that arrive while it sleeps should simply be dropped. Nevertheless, ROS accumulates delay as if the subscriber queue size were 10.
# rostopic delay /obj_det/annotated
subscribed to [/obj_det/annotated ]
average delay: 1.985
min: 1.985s max: 1.985s std dev: 0.00000s window: 2
average delay: 2.469
min: 1.985s max: 2.953s std dev: 0.48428s window: 3
average delay: 2.954
min: 1.985s max: 3.923s std dev: 0.79124s window: 4
average delay: 3.438
min: 1.985s max: 4.891s std dev: 1.08329s window: 5
average delay: 3.923
min: 1.985s max: 5.861s std dev: 1.37044s window: 6
average delay: 4.407
min: 1.985s max: 6.830s std dev: 1.65510s window: 7
average delay: 4.892
min: 1.985s max: 7.799s std dev: 1.93824s window: 8
average delay: 5.376
min: 1.985s max: 8.769s std dev: 2.22065s window: 9
average delay: 5.787
min: 1.985s max: 9.072s std dev: 2.39428s window: 10
average delay: 6.213
min: 1.985s max: 10.042s std dev: 2.60547s window: 11
average delay: 6.564
min: 1.985s max: 10.079s std dev: 2.72151s window: 12
average delay: 6.857
min: 1.985s max: 10.082s std dev: 2.78118s window: 13
average delay: 7.028
min: 1.985s max: 10.082s std dev: 2.73693s window: 14
average delay: 7.244
min: 1.985s max: 10.082s std dev: 2.74971s window: 15
average delay: 7.433
min: 1.985s max: 10.082s std dev: 2.74871s window: 16
It looks like setting the subscriber queue size to one with the line
self.image_sub = rospy.Subscriber(self.image_input_topic, Image, self.callback, queue_size=1)
simply does not work. I even tried changing the specified queue size to 20 to see if that would raise the maximum delay reported by rostopic to around 20 s, but the output is the same as above (i.e. the value of queue_size passed to rospy.Subscriber doesn't seem to do anything). Have I messed up somewhere along the way here?
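For reference, one check I can add at the top of the callback is to print how old each message already is when rospy hands it to me, by comparing the image header stamp against the current ROS time. This is only a rough sketch and assumes use_sim_time is off (so rospy.Time.now() is wall-clock time) and that the camera driver stamps images at capture:

import rospy

def callback(self, img_msg):
    # How long the message sat in transport/queues before this callback started.
    age = (rospy.Time.now() - img_msg.header.stamp).to_sec()
    rospy.loginfo("image age on arrival: %.3f s", age)
    # ... rest of the callback ...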
1 Comment
In Python, each callback gets its own thread, so you may actually be running more than one callback at a time, and that is what creates the delay.
Try putting in a latch, something like the sketch below.
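A minimal sketch of such a latch, assuming a non-blocking threading.Lock guarding the existing callback (the self.busy attribute is just an illustrative name):

import threading

# In __init__:
self.busy = threading.Lock()

def callback(self, img_msg):
    # If the previous callback is still running, drop this frame instead of queuing behind it.
    if not self.busy.acquire(blocking=False):
        return
    try:
        img = self.bridge.imgmsg_to_cv2(img_msg, 'bgr8')
        # ... detection / annotation / publishing as before ...
    finally:
        self.busy.release()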