Most people do stick with poll() / select(), simply because these are well-understood, well-tested, well-documented and well-supported. If I were you I would use select() unless you have a compelling reason not to.
I can't answer your question about POSIX AIO, but I've used libev for events. Small, fast, simple. Makes a good wrapper for IO in place of poll/select.
The issues with aio depend on the platform, so a big part of your decision is what platform you are targeting. Quality varies widely and in some cases it is implemented in terms of poll/select type calls.
People do tend to use poll/select or similar interfaces like kevent/kqueue or epoll for this kind of thing on Unix platforms.
There are problems with the aio interface, and additions like aio_waitcomplete() and the integration of aio with kqueues make a difference.
Lots of threads for dealing with lots of I/O is not a good approach.
For disk IO, why do you need AIO instead of plain buffered read/write, unless you want to 1) use your own caching, 2) control the impact on dirty pages, or 3) use IO priorities?
Because if your goal is only code refactoring, then you probably go through the page cache in the current version. Changing from buffered IO to direct IO is a huge change.
For example, on ext3/SSD/noop with 1.5 GB of RAM, just 3 threads doing streamed 300 MB writes starve small writes and reads. Switching the offenders to direct IO fixes that, but the writes now take forever.
Comments (5)
Poorly documented is certainly the case.
You might consider Boost.Asio for a cross-platform asynchronous socket library. It has excellent examples and is documented extensively.