Networking - what is the difference between "blocking" and "non-blocking" protocols?

Posted 2024-12-19 18:10:21

In computer networking - and in a lot of other fields actually - I hear a lot of reference to the terms "blocking", "non-blocking", "synchronous", and "asynchronous". I was wondering if anyone could explain in pretty straightforward/simple terms what these are supposed to mean?

Comments (2)

心头的小情儿 2024-12-26 18:10:21

A "blocking" call "blocks" the program that calls it until it completes. Your program has to wait for it to do (whatever) before the next statement runs. Most function calls are "blocking," for example, set x to 4 + 4 will not go on to the next statement until it computes the value of 8 and assigns it to x. Likewise, a blocking or synchronous network method will hold up the calling program until it completes. In the case of something like "send a packet to a remote system," this time may be measurable in seconds, or longer, instead of the microseconds (or less) that arithmetic consumes.

A "non-blocking" or asynchronous method usually, instead, either deposits its results in a "mailbox" or "queue" of some kind, or (more commonly) will call back a function that you provide when it completes. This is often/usually better for a program that does anything else (for example, displaying a user interface) while it's waiting on a relatively slow network process to complete.

When accessing relatively fast local services, like local disc I/O, inter-process communications on one computer, or sending output to a local display, blocking I/O is sometimes preferred because it's easier to write.

Example of blocking network I/O:

set web-page to (the result of) get url "http://www.google.com/"
in web-page, find <title>...</title>,
     assign this to google-title;
if not found,
     present a warning, and
     set google-title to "Google"
do something else…
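The blocking example above can be sketched in runnable Python. This is a minimal illustration, not a robust implementation: `urllib` is standard library, but the function name and the regex-based title extraction are my own choices for the sake of the example.

```python
import re
import urllib.request

def get_title_blocking(url):
    # Blocking call: the program suspends here until the whole
    # response has arrived -- seconds on a real network, versus
    # the microseconds that arithmetic costs.
    with urllib.request.urlopen(url) as resp:
        web_page = resp.read().decode("utf-8", errors="replace")
    # in web-page, find <title>...</title>
    match = re.search(r"<title>(.*?)</title>", web_page, re.S | re.I)
    if match:
        return match.group(1)
    # if not found: present a warning and fall back
    print("warning: no <title> found")
    return "Google"
```

A call like `get_title_blocking("http://www.google.com/")` holds up the caller until the page is fully downloaded; only then does the next statement run.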

versus:

get url "http://www.google.com/" and call back google-title-callback
do something else…

function google-title-callback, accepts web-page:
    in web-page, find <title>...</title>,
         assign this to google-title;
    if not found,
         present a warning, and
         set google-title to "Google"

Asynchronous I/O is almost always used at the application level for GUI programming, as well. For example, a terminal (stream)-based program might synchronously await user input, while a GUI program might be asynchronous, allowing you to choose various controls at random or perform other actions (like resizing a window) that require it to accept messages at various times, through either callback methods or event handlers, which amount to more or less the same thing as the network callback example above.
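The GUI point can be illustrated without a real toolkit: below, an `asyncio` event loop stands in for a GUI loop, and a sleeping task stands in for a slow network request. (This is a sketch; the names and delays are arbitrary.) While the slow task runs, the loop keeps servicing other callbacks, just as a GUI keeps handling clicks and resizes.

```python
import asyncio

async def slow_fetch():
    # stands in for a slow network request
    await asyncio.sleep(0.1)
    return "<title>Example</title>"

async def gui_loop():
    events = []
    fetch = asyncio.create_task(slow_fetch())
    while not fetch.done():
        # the loop keeps servicing "UI" work while the fetch runs
        events.append("handled UI event")
        await asyncio.sleep(0.02)
    events.append("got " + fetch.result())
    return events

events = asyncio.run(gui_loop())
```

The resulting `events` list interleaves several "handled UI event" entries before the fetch result arrives, which is exactly the responsiveness a synchronous wait would forfeit.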

谁许谁一生繁华 2024-12-26 18:10:21

What is the underlying issue?

IO subsystems generally have orders of magnitude greater latencies than simple instruction processing (in CPU). This latency is also non-deterministic (while of course within known bounds).

IO subsystems are also typically (all?) independent from the system processor. While this is positive in the sense that it allows concurrent actions in distinct (hw) components of the system, it also highlights the fact that the system in total also needs to couple the distinct IO and CPU components by providing data and control info transfer.

As a generalization then, we are talking about cooperating interconnected (active) components. Typically this is a master/slave relationship, but that is not the general case, e.g. this applies to peer-to-peer connectivity, as well.

+-----+                 +-----------+
| dev | <==== DATA ====>| processor |
|     | <---- ctrl -----| <master>  |
+-----+                 +-----------+

(Note that 'device' can be memory, network, disk, or another process. Further note there are 3 sub-systems here, the 'bus' or 'connection' between the two peers is also a system with latencies, bandwidth, capacity, etc.)

The terms synchronous, asynchronous, blocking, and non-blocking address and define the semantics of the communication/interconnection between the two linked components.

  • Blocking & Non-Blocking

These terms address call semantics.

In a blocking call, the component that initiates an exchange suspends all activity until the transfer of control and/or data to the other component is completed.

In a non-blocking call, the component that initiates an exchange basically performs a fire and (possibly) forget.
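The difference in call semantics shows up directly in the socket API. Here is a small sketch using a local socket pair (no real network involved): the same `recv()` call either suspends the caller or returns control immediately, depending only on the blocking mode.

```python
import select
import socket

# Two connected local sockets stand in for the dev <-> processor link.
a, b = socket.socketpair()

# Make b non-blocking: a recv() with no data ready now returns
# control immediately (raising BlockingIOError) instead of
# suspending the caller.
b.setblocking(False)
try:
    b.recv(1)            # nothing has been sent yet: would block
    call_blocked = False
except BlockingIOError:
    call_blocked = True  # the call refused to suspend us

a.send(b"x")                     # put a byte "on the wire"
select.select([b], [], [], 1.0)  # wait until b is readable
data = b.recv(1)                 # now recv succeeds at once
a.close()
b.close()
```

With `setblocking(True)` (the default), that first `recv(1)` would instead have suspended the program indefinitely, waiting for data that had not been sent.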

  • Synchronous & Asynchronous

These terms address patterns of interaction (protocol).

In a synchronous interaction, the two components will act in lock-step and the interaction is fully deterministic. Fundamentally, we can say that there is a known, deterministic, and finite set of actions that will take place in a synchronous exchange between the components.

In an asynchronous interaction, the two components are not coordinating in lock-step (thus termed asynchronous). Fundamentally, we can say that there is an unknown set of actions that can occur in either component in course of completing an exchange.

It would likely clarify to append "response" to these terms, e.g. "Synchronous-Response", as this both fully spells out the general idea and disambiguates synchrony from blocking (which is a common conceptual error).
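A toy sketch of a Synchronous-Response exchange between two components, using a thread and a pair of queues as the "connection" (all names are illustrative):

```python
import queue
import threading

requests_q, replies_q = queue.Queue(), queue.Queue()

def device():
    # the 'dev' component from the diagram: service each request,
    # post a reply, and stop on the None sentinel
    while True:
        req = requests_q.get()
        if req is None:
            break
        replies_q.put(("done", req))

threading.Thread(target=device, daemon=True).start()

# Synchronous-Response exchange: send one request, then wait in
# lock-step for its matching reply -- a known, finite sequence of
# steps. An asynchronous design would instead drain replies_q
# whenever replies happen to arrive, doing other work in between.
requests_q.put("xyz")
reply = replies_q.get()

requests_q.put(None)  # shut the device down
```

Note that the `replies_q.get()` here happens to block, but that is a property of the call, not of the protocol: the same lock-step request/reply pattern could be driven by polling with `get_nowait()`, and it would still be a synchronous interaction.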

  • Note

Per above, obviously we have a system design space that is the permutation of {block, non-block} X {synch, asynch}. E.g. a system may use non-blocking call semantics with an asynchronous protocol.

Discussion

In general, it is fair to say that we, human programmers, prefer sequential and fully deterministic models: they are simpler to conceive, develop or grok, frankly.

But, being system geeks, we also like efficiency and performance, right?

Per our diagram above, we note 3 distinct (and all independent) sub-systems. Wouldn't it be nice if 'processor' above could tell the 'bus' to 'send xyz to dev' and not wait until the bus says 'ok, I did that'? That would be a non-blocking call. (Note it does not in any way address synch or async protocol!)

Also, what if the overall system would benefit if 'processor' got to do some other work while waiting for the response to complete an exchange? That would be an asynchronous exchange.
