Asynchronous and non-blocking calls? Also between blocking and synchronous?

Published 2024-08-28 18:55:07


What is the difference between asynchronous and non-blocking calls? Also between blocking and synchronous calls (with examples please)?


Comments (16)


In many circumstances they are different names for the same thing, but in some contexts they are quite different. So it depends. Terminology is not applied in a totally consistent way across the whole software industry.

For example in the classic sockets API, a non-blocking socket is one that simply returns immediately with a special "would block" error message, whereas a blocking socket would have blocked. You have to use a separate function such as select or poll to find out when is a good time to retry.

But asynchronous sockets (as supported by Windows sockets), or the asynchronous IO pattern used in .NET, are more convenient. You call a method to start an operation, and the framework calls you back when it's done. Even here, there are basic differences. Asynchronous Win32 sockets "marshal" their results onto a specific GUI thread by passing Window messages, whereas .NET asynchronous IO is free-threaded (you don't know what thread your callback will be called on).

So they don't always mean the same thing. To distil the socket example, we could say:

  • Blocking and synchronous mean the same thing: you call the API, it hangs up the thread until it has some kind of answer and returns it to you.
  • Non-blocking means that if an answer can't be returned rapidly, the API returns immediately with an error and does nothing else. So there must be some related way to query whether the API is ready to be called (that is, to simulate a wait in an efficient way, to avoid manual polling in a tight loop).
  • Asynchronous means that the API always returns immediately, having started a "background" effort to fulfil your request, so there must be some related way to obtain the result.
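To make the distilled bullet points concrete, here is a minimal in-memory sketch of the same three contracts. All names (fetchBlocking, tryFetch, fetchAsync) are invented for illustration and belong to no real socket API; the sleeps stand in for real IO.

```java
import java.util.Optional;
import java.util.function.Consumer;

// Hypothetical in-memory "service" illustrating the three call styles
// summarized above; none of these names come from a real API.
class CallStyles {
    // Blocking / synchronous: the calling thread is suspended until the
    // answer exists, and the answer is the return value.
    static String fetchBlocking() throws InterruptedException {
        Thread.sleep(50);                     // stand-in for the slow part
        return "answer";
    }

    // Non-blocking: returns immediately; an empty result plays the role of
    // the "would block" error, and the caller must retry later.
    static Optional<String> tryFetch(boolean ready) {
        return ready ? Optional.of("answer") : Optional.empty();
    }

    // Asynchronous: returns immediately, having started background work;
    // the answer arrives later through the callback.
    static void fetchAsync(Consumer<String> callback) {
        Thread worker = new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            callback.accept("answer");
        });
        worker.start();
    }
}
```

Note how only the asynchronous variant needs "some related way to obtain the result" (here, the callback); the non-blocking variant instead needs the caller to come back and retry.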
单身狗的梦 2024-09-04 18:55:07


synchronous / asynchronous is to describe the relation between two modules.
blocking / non-blocking is to describe the situation of one module.

An example:
Module X: "I".
Module Y: "bookstore".
X asks Y: do you have a book named "C++ Primer"?

  1. blocking: before Y answers X, X keeps waiting there for the answer. Now X (one module) is blocking. Are X and Y two threads, two processes, one thread, or one process? We DON'T know.

  2. non-blocking: before Y answers X, X just leaves and does other things. X may come back every two minutes to check whether Y has finished its job? Or X won't come back until Y calls him? We don't know. We only know that X can do other things before Y finishes its job. Here X (one module) is non-blocking. Are X and Y two threads, two processes, or one process? We DON'T know. BUT we are sure that X and Y can't be one thread.

  3. synchronous: before Y answers X, X keeps waiting there for the answer. That means X can't continue until Y finishes its job. Now we say: X and Y (two modules) are synchronous. Are X and Y two threads, two processes, one thread, or one process? We DON'T know.

  4. asynchronous: before Y answers X, X leaves and can do other jobs. X won't come back until Y calls him. Now we say: X and Y (two modules) are asynchronous. Are X and Y two threads, two processes, or one process? We DON'T know. BUT we are sure that X and Y can't be one thread.

Pay attention to the two emphasized sentences above. Why does the one in 2) cover two cases ("X may come back every two minutes" or "X won't come back until Y calls him"), whereas the one in 4) covers only one ("X won't come back until Y calls him")? This is a key difference between non-blocking and asynchronous.

Let me try to explain the four words in another way:

  1. blocking: OMG, I'm frozen! I can't move! I have to wait for that specific event to happen. If that happens, I would be saved!

  2. non-blocking: I was told that I had to wait for that specific event to happen. OK, I understand and I promise that I would wait for that. But while waiting, I can still do some other things, I'm not frozen, I'm still alive, I can jump, I can walk, I can sing a song etc.

  3. synchronous: My mom is gonna cook, she sends me to buy some meat. I just said to my mom: We are synchronous! I'm so sorry but you have to wait even if I might need 100 years to get some meat back...

  4. asynchronous: We will make a pizza, we need tomato and cheese. Now I say: Let's go shopping. I'll buy some tomatoes and you will buy some cheese. We needn't wait for each other because we are asynchronous.

Here is a typical example about non-blocking & synchronous:

// thread X
while (true)
{
    msg = recv(Y, NON_BLOCKING_FLAG);
    if (msg is not empty)
    {
        break;
    }
    else
    {
        sleep(2000); // 2 sec
    }
}

// thread Y
// prepare the book for X
send(X, book);

You can see that this design is non-blocking (you could say that most of the time this loop does something pointless, but in the CPU's eyes X is running, which means X is non-blocking; if you want, you can replace sleep(2000) with any other code), whereas X and Y (two modules) are synchronous, because X can't continue to do anything else (X can't jump out of the loop) until it gets the book from Y.
Normally in this case, making X blocking is much better, because non-blocking spends a lot of resources on a useless loop. But this example is good for understanding the fact that non-blocking doesn't mean asynchronous.
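A runnable version of this pseudocode, assuming nothing beyond the JDK: Queue.poll stands in for the non-blocking recv, and a plain thread plays module Y. X is non-blocking, yet X and Y remain synchronous because X cannot leave the loop until the book arrives.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Non-blocking but synchronous: thread X polls a non-blocking receive
// in a loop, and still cannot proceed until Y's message shows up.
class PollingDemo {
    static final Queue<String> channel = new ConcurrentLinkedQueue<>();

    static String receiveSynchronously() throws InterruptedException {
        while (true) {
            String msg = channel.poll();   // non-blocking: null if nothing yet
            if (msg != null) {
                return msg;                // only now can X continue
            }
            Thread.sleep(20);              // the sleep(2000) of the pseudocode
        }
    }

    public static void main(String[] args) throws Exception {
        Thread y = new Thread(() -> channel.offer("book")); // module Y
        y.start();
        System.out.println(receiveSynchronously());
    }
}
```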

The four words are genuinely easy to confuse. What we should remember is that they serve the design of architecture; learning how to design a good architecture is the only way to distinguish them.

For example, we may design such a kind of architecture:

// Module X = Module X1 + Module X2
// Module X1
while (true)
{
    msg = recv(many_other_modules, NON_BLOCKING_FLAG);
    if (msg is not null)
    {
        if (msg == "done")
        {
            break;
        }
        // create a thread to process msg
    }
    else
    {
        sleep(2000); // 2 sec
    }
}
// Module X2
broadcast("I got the book from Y");


// Module Y
// prepare the book for X
send(X, book);

In the example here, we can say that

  • X1 is non-blocking
  • X1 and X2 are synchronous
  • X and Y are asynchronous

If you need, you can also describe those threads created in X1 with the four words.
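The X1 dispatch loop of that architecture can be sketched the same way. Everything here (the queue standing in for recv, the string messages, the names) is invented for illustration; X1 polls without blocking, and the handler threads it spawns run asynchronously with respect to the senders.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Module X1 as a non-blocking dispatch loop: it polls its inbox, spawns
// a worker per message, and exits when it sees "done".
class DispatchDemo {
    static List<String> run(BlockingQueue<String> inbox) throws InterruptedException {
        List<String> handled = new CopyOnWriteArrayList<>();
        ExecutorService workers = Executors.newCachedThreadPool();
        while (true) {
            String msg = inbox.poll();            // non-blocking receive
            if (msg != null) {
                if (msg.equals("done")) break;
                workers.submit(() -> handled.add("processed:" + msg)); // a worker per msg
            } else {
                Thread.sleep(10);                 // the sleep(2000) of the sketch
            }
        }
        workers.shutdown();
        workers.awaitTermination(2, TimeUnit.SECONDS);
        return handled;
    }
}
```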

Once more: the four words serve the design of architecture. So what we need is to build a proper architecture, rather than distinguishing the four words like a language lawyer. If you hit cases where you can't distinguish the four words clearly, forget them and describe your architecture in your own words.

So the more important things are: when do we use synchronous instead of asynchronous? when do we use blocking instead of non-blocking? Is making X1 blocking better than non-blocking? Is making X and Y synchronous better than asynchronous? Why is Nginx non-blocking? Why is Apache blocking? These questions are what you must figure out.

To make a good choice, you must analyze your needs and test the performance of different architectures. No single architecture suits every need.

太傻旳人生 2024-09-04 18:55:07
  • Asynchronous refers to something done in parallel, say is another thread.
  • Non-blocking often refers to polling, i.e. checking whether given condition holds (socket is readable, device has more data, etc.)
多情出卖 2024-09-04 18:55:07


Synchronous is defined as happening at the same time (in predictable timing, or in predictable ordering).

Asynchronous is defined as not happening at the same time. (with unpredictable timing or with unpredictable ordering).

This is what causes the first confusion, which is that asynchronous is some sort of synchronization scheme, and yes it is used to mean that, but in actuality it describes processes that are happening unpredictably with regards to when or in what order they run. And such events often need to be synchronized in order to make them behave correctly, where multiple synchronization schemes exists to do so, one of those called blocking, another called non-blocking, and yet another one confusingly called asynchronous.

So you see, the whole problem is about finding a way to synchronize an asynchronous behavior, because you've got some operation that needs the response of another before it can begin. Thus it's a coordination problem, how will you know that you can now start that operation?

The simplest solution is known as blocking.

Blocking is when you simply choose to wait for the other thing to be done and return you a response before moving on to the operation that needed it.

So if you need to put butter on toast, you first need to toast the bread. The way you'd coordinate them is that you'd first toast the bread, then stare endlessly at the toaster until it pops the toast, and then you'd proceed to put butter on it.

It's the simplest solution, and works very well. There's no real reason not to use it, unless you happen to also have other things you need to be doing which don't require coordination with the operations. For example, doing some dishes. Why wait idle staring at the toaster constantly for the toast to pop, when you know it'll take a bit of time, and you could wash a whole dish while it finishes?

That's where two other solutions known respectively as non-blocking and asynchronous come into play.

Non-blocking is when you choose to do other unrelated things while you wait for the operation to be done. Checking back on the availability of the response as you see fit.

So instead of looking at the toaster for it to pop. You go and wash a whole dish. And then you peek at the toaster to see if the toasts have popped. If they haven't, you go wash another dish, checking back at the toaster between each dish. When you see the toasts have popped, you stop washing the dishes, and instead you take the toast and move on to putting butter on them.

Having to constantly check on the toasts can be annoying though, imagine the toaster is in another room. In between dishes you waste your time going to that other room to check on the toast.

Here comes asynchronous.

Asynchronous is when you choose to do other unrelated things while you wait for the operation to be done. Instead of checking on it yourself, though, you delegate the work of checking to something else, which could be the operation itself or a watcher, and you have that thing notify (and possibly interrupt) you when the response is available, so you can proceed to the other operation that needed it.

It's weird terminology. It doesn't make a whole lot of sense, since all these solutions are ways to create synchronous coordination of dependent tasks. That's why I prefer to call it evented.

So for this one, you decide to upgrade your toaster so it beeps when the toasts are done. You happen to be constantly listening, even while you are doing dishes. On hearing the beep, you queue up in your memory that as soon as you are done washing your current dish, you'll stop and go put the butter on the toast. Or you could choose to interrupt the washing of the current dish, and deal with the toast right away.

If you have trouble hearing the beep, you can have your partner watch the toaster for you, and come tell you when the toast is ready. Your partner can itself choose any of the above three strategies to coordinate its task of watching the toaster and telling you when they are ready.

On a final note, it's good to understand that while non-blocking and async (or what I prefer to call evented) do allow you to do other things while you wait, you don't have to. You can choose to constantly loop on checking the status of a non-blocking call, doing nothing else. That's often worse than blocking though (like looking at the toaster, then away, then back at it until it's done), so a lot of non-blocking APIs allow you to transition into a blocking mode. For evented, you can just wait idle until you are notified. The downside in that case is that adding the notification was complex and potentially costly to begin with: you had to buy a new toaster with beep functionality, or convince your partner to watch it for you.
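The beeping toaster translates naturally into callback chaining. In this sketch (all names invented) the "toaster" runs in the background and the chained callback is the beep handler, so the waiter never polls and is free to wash dishes in the meantime; the final join exists only so the demo can inspect its log.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

// Evented waiting: register a handler for the "beep", then do unrelated
// work; the handler fires on its own when the background task finishes.
class EventedToaster {
    static List<String> demo() {
        List<String> log = new CopyOnWriteArrayList<>();
        CompletableFuture<Void> beep = CompletableFuture
            .supplyAsync(() -> {                         // the toaster, in the background
                try { Thread.sleep(30); } catch (InterruptedException ignored) {}
                return "toast";
            })
            .thenAccept(t -> log.add("buttering " + t)); // runs when the beep fires
        log.add("washing a dish");                       // unrelated work meanwhile
        beep.join();                                     // demo only: wait to inspect the log
        return log;
    }
}
```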

And one more thing: you need to realize the trade-offs all three provide. None is obviously better than the others. Think of my example. If your toaster is so fast that you won't have time to wash a dish, or even to begin washing one, then getting started on something else is just a waste of time and effort; blocking will do. Similarly, if washing a dish takes 10 times longer than the toasting, you have to ask yourself what's more important to get done: the toast might be cold and hard by then, so it's not worth it and blocking will also do, or you should pick faster things to do while you wait. There's more, obviously, but my answer is already pretty long. My point is that you need to think about all of that, and about the complexity of implementing each, to decide whether it's worth it and whether it will actually improve your throughput or performance.

Edit:

Even though this is already long, I also want it to be complete, so I'll add two more points.

  1. There also commonly exists a fourth model known as multiplexed. This is when, while you wait for one task, you start another, and while you wait for both, you start one more, and so on, until you've got many tasks all started; then you wait idle, but on all of them. As soon as any one is done, you can proceed with handling its response, and then go back to waiting for the others. It's known as multiplexed because, while you wait, you check each task one after the other to see if it is done, over and over, until one is. It's a bit of an extension on top of normal non-blocking.

In our example it would be like starting the toaster, then the dishwasher, then the microwave, etc. And then waiting on any of them. Where you'd check the toaster to see if it's done, if not, you'd check the dishwasher, if not, the microwave, and around again.

  2. Even though I believe it to be a big mistake, synchronous is often used to mean one thing at a time, and asynchronous many things at a time. Thus you'll see "synchronous blocking" and "synchronous non-blocking" used to refer to blocking and non-blocking, and "asynchronous blocking" and "asynchronous non-blocking" used to refer to multiplexed and evented.

I don't really understand how we got there. But when it comes to IO and Computation, synchronous and asynchronous often refer to what is better known as non-overlapped and overlapped. That is, asynchronous means that IO and Computation are overlapped, aka, happening concurrently. While synchronous means they are not, thus happening sequentially. For synchronous non-blocking, that would mean you don't start other IO or Computation, you just busy wait and simulate a blocking call. I wish people stopped misusing synchronous and asynchronous like that. So I'm not encouraging it.

Edit2:

I think a lot of people got a bit confused by my definition of synchronous and asynchronous. Let me try and be a bit more clear.

Synchronous is defined as happening with predictable timing and/or ordering. That means you know when something will start and end.

Asynchronous is defined as not happening with predictable timing and/or ordering. That means you don't know when something will start and end.

Both of those can be happening in parallel or concurrently, or they can be happening sequentially. But in the synchronous case, you know exactly when things will happen, while in the asynchronous case you're not sure exactly when things will happen, but you can still put some coordination in place that at least guarantees some things will happen only after others have happened (by synchronizing some parts of it).

Thus when you have asynchronous processes, asynchronous programming lets you place some order guarantees so that some things happen in the right sequence, even though you don't know when things will start and end.

Here's an example, if we need to do A then B and C can happen at any time. In a sequential but asynchronous model you can have:

A -> B -> C
or
A -> C -> B
or
C -> A -> B

Every time you run the program, you could get a different one of those, seemingly at random. Now this is still sequential, nothing is parallel or concurrent, but you don't know when things will start and end, except you have made it so B always happens after A.

If you add concurrency only (no parallelism), you can also get things like:

A<start> -> C<start> -> A<end>   -> C<end>   -> B<start> -> B<end>
or
C<start> -> A<start> -> C<end>   -> A<end>   -> B<start> -> B<end>
or
A<start> -> A<end>   -> B<start> -> C<start> -> B<end>   -> C<end>
etc...

Once again, you don't really know when things will start and end, but you have made it so B is coordinated to always start after A ends, but that's not necessarily immediately after A ends, it's at some unknown time after A ends, and B could happen in-between fully or partially.
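In Java terms, such a partial ordering guarantee is exactly what chaining gives you. In this sketch (names invented), A, B and C all run at unpredictable times, but B is chained so it can only start after A ends, while C is unordered relative to both:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

// Asynchronous processes with one ordering guarantee: B always after A,
// C at some unpredictable point before, between, or after them.
class OrderingDemo {
    static List<String> run() {
        List<String> log = new CopyOnWriteArrayList<>();
        CompletableFuture<Void> a = CompletableFuture.runAsync(() -> log.add("A"));
        CompletableFuture<Void> b = a.thenRun(() -> log.add("B"));              // guaranteed: after A
        CompletableFuture<Void> c = CompletableFuture.runAsync(() -> log.add("C")); // anytime
        CompletableFuture.allOf(b, c).join();
        return log;
    }
}
```

Run it repeatedly and C's position in the log moves around, but A always precedes B.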

And if you add parallelism, now you have things like:

A<start> -> A<end>   -> B<start>       -> B<end>         ->
            C<start> -> C<keeps going> -> C<keeps going> -> C<end>
or
A<start> -> A<end>         -> B<start> -> B<end>
C<start> -> C<keeps going> -> C<end>
etc...

Now if we look at the synchronous case, in a sequential setting you would have:

A -> B -> C

And this is always the order: each time you run the program, you get A, then B, then C. Even though, from the requirements, C can conceptually happen at any time, in a synchronous model you still define exactly when it will start and end. Of course, you could specify it like:

C -> A -> B

instead, but since it is synchronous, this will be the ordering every time the program is run, unless you change the code again to change the order explicitly.

Now if you add concurrency to a synchronous model you can get:

C<start> -> A<start> -> C<end> -> A<end> -> B<start> -> B<end>

And once again, this would be the order no matter how many times you run the program. Similarly, you could explicitly change it in your code, but it would be consistent across program executions.

Finally, if you add parallelism as well to a synchronous model you get:

A<start> -> A<end> -> B<start> -> B<end>
C<start> -> C<end>

Once again, this would be the case on every program run. An important aspect here is that to make it fully synchronous this way, it means B must start after both A and C ends. If C is an operation that can complete faster or slower say depending on the CPU power of the machine, or other performance consideration, to make it synchronous you still need to make it so B waits for it to end, otherwise you get an asynchronous behavior again, where not all timings are deterministic.

You'll get this kind of synchronous thing a lot in coordinating CPU operations with the CPU clock, and you have to make sure that you can complete each operation in time for the next clock cycle, otherwise you need to delay everything by one more clock to give room for this one to finish, if you don't, you mess up your synchronous behavior, and if things depended on that order they'd break.

Finally, lots of systems mix synchronous and asynchronous behavior. So if you have any kind of inherently unpredictable event, like when a user will click a button or when a remote API will return a response, but you need guaranteed ordering, you basically need a way to synchronize the asynchronous behavior so that it guarantees order and timing as needed. Some strategies to synchronize those are what I talked about previously: you have blocking, non-blocking, async, multiplexed, etc. Note the emphasis on "async"; this is what I mean by the word being confusing. Somebody decided to call a strategy for synchronizing asynchronous processes "async". This then wrongly made people think that asynchronous meant concurrent and synchronous meant sequential, or that blocking was somehow the opposite of asynchronous. Whereas, as I just explained, synchronous and asynchronous are really a different concept, relating to the timing of things as being in sync (in time with each other, either on some shared clock or in a predictable order) or out of sync (not on some shared clock, or in an unpredictable order). Asynchronous programming, by contrast, is a strategy to synchronize two events that are themselves asynchronous (happening at an unpredictable time and/or order), for which we need to add some guarantees of when they might happen, or at least in what order.

So we're left with two things using the word "asynchronous" in them:

  1. Asynchronous processes: processes that we don't know at what time they will start and end, and thus in what order they would end up running.
  2. Asynchronous programming: a style of programming that lets you synchronize two asynchronous processes using callbacks or watchers that interrupt the executor in order to let them know something is done, so that you can add predictable ordering between the processes.
星星的轨迹 2024-09-04 18:55:07


A nonblocking call returns immediately with whatever data is available: the full number of bytes requested, fewer, or none at all.

An asynchronous call requests a transfer that will be performed in its entirety but will complete at some future time.
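That non-blocking contract can be sketched in a few lines (the ByteBuffer stands in for a socket's receive buffer; the names are invented): the call copies whatever is available right now — all requested bytes, fewer, or none — and returns immediately.

```java
import java.nio.ByteBuffer;

// A toy non-blocking read: never waits, returns however many bytes
// happen to be available at this instant.
class NonBlockingRead {
    static int read(ByteBuffer available, byte[] dst, int requested) {
        int n = Math.min(requested, available.remaining()); // maybe fewer, maybe 0
        available.get(dst, 0, n);                           // copy what's there
        return n;                                           // returns immediately
    }
}
```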

下雨或天晴 2024-09-04 18:55:07


Putting this question in the context of NIO and NIO.2 in Java 7, async IO is one step more advanced than non-blocking.
With Java NIO non-blocking calls, one would set all channels (SocketChannel, ServerSocketChannel, FileChannel, etc.) as such by calling AbstractSelectableChannel.configureBlocking(false).
After those IO calls return, however, you will likely still need to control the checks yourself, such as whether and when to read/write again, etc.
For instance,

while (!isDataEnough()) {
    socketchannel.read(inputBuffer);
    // do something else and then read again
}

With the asynchronous API in Java 7, these controls can be handled in more versatile ways.
One of the two ways is to use a CompletionHandler. Notice that both read calls are non-blocking.

asyncsocket.read(inputBuffer, 60, TimeUnit.SECONDS /* 60 secs for timeout */,
    null /* attachment */,
    new CompletionHandler<Integer, Object>() {
        public void completed(Integer result, Object attachment) {...}
        public void failed(Throwable e, Object attachment) {...}
    });
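The other of the two ways is the Future-returning read: AsynchronousSocketChannel.read(ByteBuffer) returns a Future&lt;Integer&gt;, so the call itself returns at once and you decide when (or whether) to block on get(). A minimal loopback sketch (the server and client here exist only to give the read something to receive):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.util.concurrent.Future;

// NIO.2's Future-based style: the read starts immediately; blocking is
// deferred until get() is called.
class FutureReadDemo {
    static int demo() throws Exception {
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open()
            .bind(new InetSocketAddress("127.0.0.1", 0));
        AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
        client.connect(server.getLocalAddress()).get();
        AsynchronousSocketChannel peer = server.accept().get();
        peer.write(ByteBuffer.wrap("hi".getBytes())).get();   // feed the read

        ByteBuffer inputBuffer = ByteBuffer.allocate(16);
        Future<Integer> pending = client.read(inputBuffer);   // starts the IO, returns at once
        int n = pending.get();                                // we choose to block only here
        client.close(); peer.close(); server.close();
        return n;                                             // bytes actually read
    }
}
```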
鱼忆七猫命九 2024-09-04 18:55:07


As you can probably see from the multitude of different (and often mutually exclusive) answers, it depends on who you ask. In some arenas, the terms are synonymous. Or they might each refer to two similar concepts:

  • One interpretation is that the call will do something in the background essentially unsupervised in order to allow the program to not be held up by a lengthy process that it does not need to control. Playing audio might be an example - a program could call a function to play (say) an mp3, and from that point on could continue on to other things while leaving it to the OS to manage the process of rendering the audio on the sound hardware.
  • The alternative interpretation is that the call will do something that the program will need to monitor, but will allow most of the process to occur in the background only notifying the program at critical points in the process. For example, asynchronous file IO might be an example - the program supplies a buffer to the operating system to write to file, and the OS only notifies the program when the operation is complete or an error occurs.

In either case, the intention is to allow the program to not be blocked waiting for a slow process to complete - how the program is expected to respond is the only real difference. Which term refers to which also changes from programmer to programmer, language to language, or platform to platform. Or the terms may refer to completely different concepts (such as the use of synchronous/asynchronous in relation to thread programming).

Sorry, but I don't believe there is a single right answer that is globally true.

苹果你个爱泡泡 2024-09-04 18:55:07

Blocking call: Control returns only when the call completes.

Non-blocking call: Control returns immediately; later, the OS somehow notifies the process that the call is complete.

Synchronous program: A program which uses blocking calls. In order not to freeze during the call it must have two or more threads (that's why it's called synchronous: threads run synchronously).

Asynchronous program: A program which uses non-blocking calls. It can have only one thread and still remain interactive.
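These definitions can be observed without any I/O at all; a minimal sketch using java.util.concurrent.ArrayBlockingQueue, where poll() is the non-blocking call and take() the blocking one:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingVsNonBlocking {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

        // Non-blocking call: poll() returns immediately; the queue is
        // empty, so control comes back with null instead of waiting.
        System.out.println("poll on empty queue: " + queue.poll());

        queue.put("message");

        // Blocking call: take() only returns control once an element is
        // available. Because we just put one, it completes at once here;
        // on an empty queue it would suspend the calling thread.
        System.out.println("take: " + queue.take());
    }
}
```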

虚拟世界 2024-09-04 18:55:07

Non-blocking: This function won't wait while on the stack.

Asynchronous: Work may continue on behalf of the function call after that call has left the stack

起风了 2024-09-04 18:55:07

Synchronous means one operation starts only after the previous one's result, in sequence.

Asynchronous means starting together; no order is guaranteed for the results.

Blocking means something that causes an obstruction to perform the next step.

Non-blocking means something that keeps running without waiting for anything, overcoming the obstruction.

Blocking eg: I knock on the door and wait till they open it. ( I am idle here )

Non-Blocking eg: I knock on the door, if they open it instantly, I greet them, go inside, etc. If they do not open instantly, I go to the next house and knock on it. ( I am doing something or the other, not idle )

Synchronous eg: I will go out only if it rains. ( dependency exists )

Asynchronous eg: I will go out. It can rain. ( independent events, doesn't matter when they occur )

Synchronous or Asynchronous, both can be blocking or non-blocking and vice versa

叫嚣ゝ 2024-09-04 18:55:07
block + synchronous: Blocking I/O must be synchronous I/O, because it has to be executed in order (though synchronous I/O is not necessarily blocking I/O).
block + asynchronous: Does not exist.
non-block + synchronous: Non-blocking and synchronous I/O at the same time is polling/multiplexing.
non-block + asynchronous: Non-blocking and asynchronous I/O at the same time is parallel execution, such as signal-triggered I/O.

  • block/non-block describes the behavior of the initiating entity itself: what the entity does while waiting for I/O completion.
  • synchronous/asynchronous describes the behavior between the I/O-initiating entity and the I/O executor (the operating system, for example): whether these two entities can execute in parallel.
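The "non-block + synchronous = polling/multiplexing" combination is what Java's Selector implements; a runnable sketch using a Pipe in place of a network socket, so it is self-contained:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;

public class MultiplexDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();                  // stands in for a socket peer
        pipe.source().configureBlocking(false);   // synchronous non-blocking mode
        pipe.source().register(selector, SelectionKey.OP_READ);

        pipe.sink().write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));

        // The recurring status check: select() reports which channels are
        // ready, so the reads themselves never have to block.
        selector.select();
        for (SelectionKey key : selector.selectedKeys()) {
            ByteBuffer buf = ByteBuffer.allocate(16);
            int n = ((Pipe.SourceChannel) key.channel()).read(buf);
            System.out.println("readable channel delivered " + n + " bytes");
        }
        selector.close();
        pipe.sink().close();
        pipe.source().close();
    }
}
```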
埋葬我深情 2024-09-04 18:55:07

TL;DR: Synchronous/Asynchronous refers to how a task is executed, while Blocking/Non-blocking refers to the amount of effort an operation puts into performing a task.


Synchronous/Asynchronous refers to how a task is executed. An operation is considered synchronous when it is executed in the foreground without spawning any background job; otherwise, it is considered asynchronous and executed in the background concurrently*.

Blocking/Non-blocking refers to the amount of effort an operation puts into performing a task especially when the task may need multiple retries or waiting. A blocking operation puts in its best effort to complete the task in the foreground, meaning that it keeps retrying to get the final result, while a non-blocking operation is allowed to return an intermediate result after attempting but failing to complete.

Based on the definition, we can draw the following conclusions:

  1. A blocking operation is synchronous. This is because a blocking operation puts in its best effort to complete the task in the foreground, meaning that it does not spawn a background job. Examples of blocking operations include blocking send.
  2. An asynchronous operation is non-blocking. This is because an asynchronous operation spawns a background job and does not perform everything in the foreground. Examples of asynchronous operations include CLFLUSHOPT.
  3. A non-blocking operation can be either synchronous or asynchronous. Since a non-blocking operation does not put in its best effort in the foreground, it can either return the current result or spawn a background job to complete the task. Examples of non-blocking operations include non-blocking send (synchronous) and CLFLUSHOPT (asynchronous).
  4. A synchronous operation can also be either blocking or non-blocking. This is because a synchronous operation does not spawn a background job. It can either put in its best effort to complete the task in the foreground (blocking) or try once and return the result (non-blocking). Examples of synchronous operations include non-blocking send (synchronous and non-blocking) and blocking send (synchronous and blocking).

Below is a summary of the combinations.

block + synchronous: normal functions, like sum(), send(), printf().
block + asynchronous: Does not exist.
non-block + synchronous: functions that try once to do something, e.g. non-blocking send()/recv(), try_lock().
non-block + asynchronous: all asynchronous functions, which may or may not have a callback function, e.g. await in asyncio.

Note:

  • It's better to say async utilizes concurrency but not necessarily parallelism, as concurrency refers to the ability of a system to perform multiple tasks or processes at the same time, and it may or may not be parallel (e.g. multiplexing).
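The try_lock() entry above maps directly onto java.util.concurrent.locks; a minimal sketch contrasting the blocking lock() with the synchronous non-blocking tryLock() (the sleep durations are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        Thread holder = new Thread(() -> {
            lock.lock();               // blocking acquire
            try {
                Thread.sleep(200);     // hold the lock briefly
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        holder.start();
        Thread.sleep(50);              // let the holder grab the lock first

        // Synchronous and non-blocking: tryLock() attempts once and reports
        // failure instead of waiting for the holder to release.
        System.out.println("tryLock while held: " + lock.tryLock());

        holder.join();
        System.out.println("tryLock after release: " + lock.tryLock());
        lock.unlock();
    }
}
```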
心碎的声音 2024-09-04 18:55:07

The blocking models require the initiating application to block when the I/O has started. This means that it isn't possible to overlap processing and I/O at the same time. The synchronous non-blocking model allows overlap of processing and I/O, but it requires that the application check the status of the I/O on a recurring basis. This leaves asynchronous non-blocking I/O, which permits overlap of processing and I/O, including notification of I/O completion.

も星光 2024-09-04 18:55:07

To put it simply,

function sum(a, b) {
    return a + b;
}

is non-blocking, while asynchronous calls are used to execute a blocking task and then return its response.
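A sketch of the same point in Java (method names here are illustrative): sum() is synchronous and non-blocking, slowSum() is blocking, and CompletableFuture.supplyAsync() runs the blocking task asynchronously and hands back its response:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSumDemo {
    // A synchronous, non-blocking function: it computes and returns at once.
    static int sum(int a, int b) {
        return a + b;
    }

    // A blocking task: it cannot return until the (simulated) slow work is done.
    static int slowSum(int a, int b) {
        try {
            Thread.sleep(100);
        } catch (InterruptedException ignored) {
        }
        return a + b;
    }

    public static void main(String[] args) {
        // Asynchronous: hand the blocking task to another thread and get the
        // response back later, here via a callback.
        CompletableFuture<Void> done = CompletableFuture
            .supplyAsync(() -> slowSum(2, 3))
            .thenAccept(result -> System.out.println("async result: " + result));

        System.out.println("sync result: " + sum(2, 3));
        done.join();   // wait so the demo doesn't exit before the callback runs
    }
}
```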

金兰素衣 2024-09-04 18:55:07

They differ in spelling only. There is no difference in what they refer to. To be technical, you could say they differ in emphasis: non-blocking refers to control flow (it doesn't block), while asynchronous refers to when the event/data is handled (not synchronously).

心不设防 2024-09-04 18:55:07

Blocking: Control returns to the invoking process only after processing of the primitive (sync or async) completes.

Non-blocking: Control returns to the process immediately after invocation.
