Does DDS have a broker?
I've been trying to read up on the DDS standard, and on OpenSplice in particular, and I'm left wondering about the architecture.
Does DDS require that a broker be running, or any particular daemon to manage message exchange and coordination between different parties?
If I just launch one process publishing data for a topic, and launch another process subscribing to the same topic, is this sufficient? Is there any reason one might need another process running?
Alternatively, does it use UDP multicast for some sort of automated discovery between publishers and subscribers?
In general, I'm trying to contrast this with traditional queue architectures such as MQ Series or EMS.
I'd really appreciate it if anybody could help shed some light on this.
Thanks,
Faheem
3 Answers
DDS doesn't have a central broker; it uses a multicast-based discovery protocol. OpenSplice has a model with a service on each node, but that is an implementation detail: if you look at RTI DDS, for example, it doesn't have one.
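To make the "no broker, just multicast discovery" point concrete, here is a toy Python sketch loosely modeled on the idea behind RTPS participant discovery (SPDP): each peer simply multicasts an announcement of who it is and which topics it offers, and everyone listening on the group learns about it. The JSON message format is purely an illustrative assumption (the real RTPS wire format is binary), though the multicast group and port arithmetic follow the RTPS defaults.

```python
# Toy sketch of multicast-based participant discovery, loosely modeled on
# the idea behind DDS/RTPS SPDP. The JSON message format is an illustrative
# assumption, not the real RTPS wire format.
import json
import socket

DISCOVERY_GROUP = "239.255.0.1"   # default RTPS discovery multicast group

def discovery_port(domain_id: int) -> int:
    # RTPS-style port mapping: PB + DG * domainId, with PB=7400, DG=250
    return 7400 + 250 * domain_id

def encode_announcement(participant_id: str, topics: list) -> bytes:
    # A real implementation announces GUIDs, locators and QoS;
    # here we just serialize a participant id and the topics it offers.
    return json.dumps({"id": participant_id, "topics": topics}).encode()

def decode_announcement(data: bytes) -> dict:
    return json.loads(data.decode())

def announce(participant_id: str, topics: list, domain_id: int = 0) -> None:
    # Periodically sending this datagram is all a peer needs to do to be
    # found -- there is no broker to register with. (Requires a
    # multicast-capable network, so it is not invoked in the demo below.)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(encode_announcement(participant_id, topics),
                (DISCOVERY_GROUP, discovery_port(domain_id)))
    sock.close()

# Demo: exercise only the wire format and the port arithmetic.
msg = encode_announcement("pub-1", ["SensorData"])
peer = decode_announcement(msg)
print(peer["id"], peer["topics"], discovery_port(0))
```

The key architectural contrast with a brokered system is visible even in this sketch: there is no registration call against a central address, only a periodic shout into a well-known multicast group.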
I think it's indeed good to differentiate between a 'centralized broker' architecture (where that broker could be, or become, a single point of failure) and a service/daemon on each machine that manages the traffic flows based on DDS QoS policies such as importance (DDS transport-priority) and urgency (DDS latency-budget).
It's interesting to notice that most people think it's absolutely necessary to have a (real-time) process scheduler on a machine managing the CPU as a critical shared resource (based on time slicing, priority classes, etc.), yet when it comes to DDS, which is all about distributing information (rather than processing application code), people often find it 'strange' that a 'network scheduler' would come in handy (at the least), one that manages the network interface as a shared resource and schedules traffic based on QoS-policy-driven 'packing' and the use of multiple traffic-shaped priority lanes.
And this is exactly what OpenSplice does when utilizing its (optional) federated-architecture mode, in which multiple applications running on a single machine share data through a shared-memory segment, and a networking service (daemon) exists for each physical network interface, scheduling the inbound and outbound traffic according to the actual QoS policies for urgency and importance. Because such a service has access to all nodal information, it can also combine samples from different topics and different applications into (potentially large) UDP frames, perhaps even exploiting some of the available latency budget for this 'packing', and thus strike a proper balance between efficiency (throughput) and determinism (latency/jitter). End-to-end determinism is further facilitated by scheduling the traffic over pre-configured traffic-shaped 'priority lanes' with dedicated Rx/Tx threads and DiffServ settings.
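The packing behavior described above can be sketched as a small simulation: each sample carries a deadline derived from its latency-budget (urgency) and a transport-priority (importance); the scheduler keeps accumulating samples while every deadline still allows it, and once any sample becomes due it flushes, filling MTU-sized frames with the most important samples first. The names, frame size, and flush policy here are illustrative assumptions, not the OpenSplice implementation.

```python
# Toy simulation of QoS-driven "packing": samples may wait up to their
# latency-budget so several can share one large frame, and higher
# transport-priority samples are sent first. Illustrative only.
from dataclasses import dataclass, field

MAX_FRAME_BYTES = 1400  # stay under a typical UDP-over-Ethernet MTU

@dataclass(order=True)
class Sample:
    deadline: float   # enqueue time + latency-budget (urgency)
    priority: int     # DDS transport-priority (importance)
    payload: bytes = field(compare=False)

def pack_frames(samples, now):
    """Flush once any sample's latency-budget is exhausted; otherwise keep
    waiting. Frames are filled highest transport-priority first."""
    if not any(s.deadline <= now for s in samples):
        return [], list(samples)       # every sample can still wait
    ordered = sorted(samples, key=lambda s: (-s.priority, s.deadline))
    frames, current, used = [], [], 0
    for s in ordered:
        if current and used + len(s.payload) > MAX_FRAME_BYTES:
            frames.append(current)     # frame full: start the next one
            current, used = [], 0
        current.append(s)
        used += len(s.payload)
    if current:
        frames.append(current)
    return frames, []

samples = [
    Sample(deadline=5.0, priority=1, payload=b"x" * 600),
    Sample(deadline=1.0, priority=9, payload=b"y" * 600),  # urgent + important
    Sample(deadline=5.0, priority=1, payload=b"z" * 600),
]
frames, waiting = pack_frames(samples, now=0.5)  # nothing due yet: keep packing
print(len(frames), len(waiting))
frames, waiting = pack_frames(samples, now=1.0)  # the urgent sample forces a flush
print([len(f) for f in frames])
```

Even this toy version shows the trade-off the answer describes: a longer latency-budget lets more samples ride in one frame (throughput), while a short one forces an early, smaller flush (latency).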
So having a network-scheduling daemon per node certainly has some advantages, also because it decouples the network from faulty applications that could be either 'over-productive' (i.e. blowing up the system) or 'under-reactive' (causing system-wide retransmissions). That aspect is often forgotten when people argue that a 'network-scheduling daemon' could be viewed as a single point of failure; the other view is that, without any arbitration, any standalone application that talks directly to the wire can be viewed as a potential system threat when it starts misbehaving as described above, for any reason.
Anyhow, it's always a controversial discussion, which is why OpenSplice DDS (as of v6) supports both deployment modes: federated and non-federated (also called 'standalone' or 'single process').
Hope this is somewhat helpful.
The DDS specification is designed so that implementations are not required to have any central daemons, but of course that is a choice each implementation makes.
Implementations like RTI DDS, MilSOFT DDS, and CoreDX DDS have decentralized, peer-to-peer architectures and do not need any daemons (discovery is done with multicast on LAN networks). This design has many advantages, such as fault tolerance, low latency, and good scalability. It also makes the middleware really easy to use, since there is no need to administer daemons: you just run the publishers and subscribers, and DDS handles the rest automatically.
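The "just run the publishers and subscribers" usage pattern can be illustrated with a deliberately tiny in-process model: each participant keeps its own table of readers, and a writer delivers samples directly to every discovered peer that subscribes to its topic. This mimics only the shape of the DDS model (writers and readers matched by topic, with no central process involved); the class names are made up for the sketch and stand in for real DDS entities and RTPS transport.

```python
# Toy, in-process model of broker-less topic matching: a writer delivers
# directly to discovered peers' readers. Not a real DDS API -- the names
# here are illustrative stand-ins for DDS entities.
class Participant:
    def __init__(self):
        self.readers = {}              # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.readers.setdefault(topic, []).append(callback)

class DataWriter:
    """Writes samples for one topic, delivering directly to every peer
    that subscribes to it (standing in for RTPS over UDP)."""
    def __init__(self, topic, peers):
        self.topic = topic
        self.peers = peers             # peers found via discovery, not a broker

    def write(self, sample):
        for peer in self.peers:
            for deliver in peer.readers.get(self.topic, []):
                deliver(sample)

# "Just run the publishers and subscribers": two participants, no daemon.
received = []
sub = Participant()
sub.subscribe("SensorData", received.append)
writer = DataWriter("SensorData", peers=[sub])
writer.write({"temp": 21.5})
print(received)
```

Contrast this with a brokered queue (MQ Series, EMS): there, the writer would hand the sample to a central server that owns the queue, whereas here data flows directly between matched endpoints.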
OpenSplice DDS used to require daemon services running on each node, but v6 added a new feature so that you no longer need the daemons. (They still support the daemon option.)
OpenDDS is also peer-to-peer, but as far as I know it needs a central daemon running for discovery.