Difference between POSIX AIO and libaio on Linux?

Posted on 2024-12-25 04:07:19


What I seem to understand:

The POSIX AIO APIs are prototyped in <aio.h> and you link your program with librt (-lrt), while the libaio APIs are prototyped in <libaio.h> and you link your program with libaio (-laio).

What I can't figure out:

1. Does the kernel handle either of these methods differently?

2. Is the O_DIRECT flag mandatory for using either of them?

As mentioned in this post, libaio works fine without O_DIRECT. Okay, understood, but:

According to R. Love's Linux System Programming book, Linux supports aio (which I assume is POSIX AIO) on regular files only if they are opened with O_DIRECT. But a small program that I wrote (using aio.h, linked with -lrt) that calls aio_write on a file opened without the O_DIRECT flag works without issues.
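For reference, a minimal sketch of that kind of test program; the file path, the busy-wait loop, and the payload are illustrative assumptions, not the original code:

    /* POSIX AIO write to a file opened WITHOUT O_DIRECT.
     * Build: gcc aio_test.c -lrt   (glibc's aio_* lives in librt) */
    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/aio_test", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        static char buf[] = "hello, POSIX AIO\n";
        struct aiocb cb;
        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof(buf) - 1;
        cb.aio_offset = 0;

        if (aio_write(&cb) < 0) { perror("aio_write"); return 1; }

        /* Poll until the request is no longer in progress. */
        while (aio_error(&cb) == EINPROGRESS)
            usleep(1000);

        printf("aio_write completed, %zd bytes\n", aio_return(&cb));
        close(fd);
        return 0;
    }

This runs fine without O_DIRECT, which is exactly the observation in the question.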


1 Answer

醉生梦死 (answered 2025-01-01 04:07:19):


On Linux, the two AIO implementations are fundamentally different.

POSIX AIO is a user-level implementation that performs normal blocking I/O in multiple threads, giving the illusion that the I/Os are asynchronous. The main reasons for doing it this way are:

  1. it works with any filesystem
  2. it works (essentially) on any operating system (keep in mind that GNU's libc is portable)
  3. it works on files with buffering enabled (i.e. no O_DIRECT flag set)

The main drawback is that your queue depth (i.e. the number of outstanding operations you can have in practice) is limited by the number of threads you choose to have, which also means that a slow operation on one disk may block an operation going to a different disk. It also affects which I/Os (and how many) are seen by the kernel and the disk scheduler.
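To illustrate the thread-pool limit: glibc exposes the nonstandard aio_init(3) hook for sizing that pool. A minimal sketch, assuming glibc; the values 16 and 64 are arbitrary examples:

    /* Sizing glibc's POSIX AIO thread pool via the GNU extension
     * aio_init(3); the pool size caps the effective queue depth. */
    #define _GNU_SOURCE
    #include <aio.h>

    int main(void)
    {
        struct aioinit init = { 0 };
        init.aio_threads = 16;  /* at most 16 pool threads => at most 16 ops in flight */
        init.aio_num     = 64;  /* hint: expected number of simultaneous requests */
        aio_init(&init);        /* must run before the first aio_* call */

        /* ... subsequent aio_read()/aio_write() requests beyond 16 wait
         * for a free pool thread rather than reaching the kernel ... */
        return 0;
    }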

The kernel AIO (i.e. io_submit() et al.) is kernel support for asynchronous I/O operations, where the I/O requests are actually queued up in the kernel, sorted by whatever disk scheduler you have, and presumably some of them are forwarded (in somewhat optimal order, one would hope) to the actual disk as asynchronous operations (using TCQ or NCQ). The main restrictions with this approach are that not all filesystems work well, or at all, with async I/O (and may fall back to blocking semantics), and that files have to be opened with O_DIRECT, which comes with a whole lot of other restrictions on the I/O requests. If you fail to open your files with O_DIRECT, it may still "work", in the sense that you get the right data back, but it probably is not done asynchronously and instead falls back to blocking semantics.
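A minimal sketch of this kernel path using libaio (link with -laio). The file path, the 4096-byte alignment, and the queue depth are illustrative assumptions; note the libaio wrappers return negative errno values instead of setting errno:

    /* Kernel AIO via libaio with O_DIRECT. Build: gcc kaio.c -laio */
    #define _GNU_SOURCE          /* for O_DIRECT */
    #include <libaio.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/kaio_test", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* O_DIRECT requires aligned buffer, offset, and length. */
        void *buf;
        if (posix_memalign(&buf, 4096, 4096)) return 1;
        memset(buf, 'x', 4096);

        io_context_t ctx = 0;
        int ret = io_setup(8, &ctx);            /* queue depth of 8 */
        if (ret < 0) { fprintf(stderr, "io_setup: %s\n", strerror(-ret)); return 1; }

        struct iocb cb;
        struct iocb *cbs[1] = { &cb };
        io_prep_pwrite(&cb, fd, buf, 4096, 0);  /* async write at offset 0 */

        ret = io_submit(ctx, 1, cbs);           /* request is queued in the kernel */
        if (ret != 1) { fprintf(stderr, "io_submit: %s\n", strerror(-ret)); return 1; }

        struct io_event ev;
        ret = io_getevents(ctx, 1, 1, &ev, NULL); /* wait for the completion */
        if (ret == 1)
            printf("completed, res=%ld\n", (long)ev.res);

        io_destroy(ctx);
        close(fd);
        free(buf);
        return 0;
    }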

Also keep in mind that io_submit() can actually block on the disk under certain circumstances.
