IPC performance: Named pipes vs sockets
Everyone seems to say named pipes are faster than socket IPC. How much faster are they? I would prefer to use sockets because they can do two-way communication and are very flexible, but I will choose speed over flexibility if the difference is considerable.
I would suggest you take the easy path first, carefully isolating the IPC mechanism so that you can change from socket to pipe, but I would definitely go with socket first.
You should be sure IPC performance is a problem before preemptively optimizing.
And if you get in trouble because of IPC speed, I think you should consider switching to shared memory rather than going to pipe.
If you want to do some transfer speed testing, you should try socat, which is a very versatile program that allows you to create almost any kind of tunnel.
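As a rough illustration of "isolating the IPC mechanism", the rest of the program can talk to a small transport-neutral interface, so the descriptor behind it can come from a socket, a pipe, or a FIFO. This is only a sketch; the ipc_channel type and ipc_wrap name are made up for the example.

```c
#include <stddef.h>
#include <unistd.h>

/* Hypothetical transport-neutral channel: the application only calls
 * ch.send/ch.recv, so the fd can come from socket(), pipe() or mkfifo(). */
typedef struct {
    int fd;
    ssize_t (*send)(int fd, const void *buf, size_t len);
    ssize_t (*recv)(int fd, void *buf, size_t len);
} ipc_channel;

static ssize_t fd_send(int fd, const void *buf, size_t len) { return write(fd, buf, len); }
static ssize_t fd_recv(int fd, void *buf, size_t len)       { return read(fd, buf, len); }

/* Wrap any already-connected descriptor (socket, pipe end, FIFO). */
static ipc_channel ipc_wrap(int fd) {
    ipc_channel ch = { fd, fd_send, fd_recv };
    return ch;
}
```

Swapping from sockets to pipes then only changes how the descriptor passed to ipc_wrap() is obtained.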
I'm going to agree with shodanex: it looks like you're prematurely trying to optimize something that isn't yet problematic. Unless you know sockets are going to be a bottleneck, I'd just use them.
A lot of people who swear by named pipes find a little savings (depending on how well everything else is written), but end up with code that spends more time blocking for an IPC reply than it does doing useful work. Sure, non-blocking schemes help with this, but those can be tricky. Having spent years bringing old code into the modern age, I can say that the speedup is almost nil in the majority of cases I've seen.
If you really think that sockets are going to slow you down, then go out of the gate using shared memory with careful attention to how you use locks. Again, in all actuality, you might find a small speedup, but notice that you're wasting a portion of it waiting on mutual exclusion locks. I'm not going to advocate a trip to futex hell (well, not quite hell anymore in 2015, depending upon your experience).
Pound for pound, sockets are (almost) always the best way to go for user space IPC under a monolithic kernel .. and (usually) the easiest to debug and maintain.
Keep in mind that sockets do not necessarily mean IP (and TCP or UDP). You can also use UNIX sockets (PF_UNIX), which offer a noticeable performance improvement over connecting to 127.0.0.1.
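For reference, a minimal client-side sketch of connecting over a UNIX domain socket instead of 127.0.0.1 (the path /tmp/demo.sock is just an example):

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);   /* PF_UNIX == AF_UNIX */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/demo.sock", sizeof addr.sun_path - 1);

    /* Same connect()/read()/write() calls as a TCP socket, but no IP stack. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }
    write(fd, "ping", 4);
    close(fd);
    return 0;
}
```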
As is often the case, numbers say more than feelings; here are some data:
Pipe vs Unix Socket Performance (opendmx.net).
This benchmark shows pipes being about 12 to 15% faster.
You can find a runnable benchmark here: https://github.com/goldsborough/ipc-bench
Regards
The best results you'll get are with a shared memory solution.
Named pipes are only 16% better than TCP sockets.
Results were obtained with IPC benchmarking:
Pipe benchmark:
FIFOs (named pipes) benchmark:
Message Queue benchmark:
Shared Memory benchmark:
TCP sockets benchmark:
Unix domain sockets benchmark:
ZeroMQ benchmark:
If you do not need speed, sockets are the easiest way to go!
If what you are looking for is speed, the fastest solution is shared memory, not named pipes.
You can use a lightweight solution like ZeroMQ [zmq/0mq]. It is very easy to use and dramatically faster than sockets.
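For illustration, a minimal request/reply sketch using the libzmq C API over its ipc:// transport (the endpoint name is arbitrary); a matching ZMQ_REQ client would connect to the same endpoint:

```c
#include <assert.h>
#include <zmq.h>

int main(void) {
    void *ctx = zmq_ctx_new();
    void *rep = zmq_socket(ctx, ZMQ_REP);

    /* On Linux, ipc:// is backed by a UNIX domain socket under the hood. */
    int rc = zmq_bind(rep, "ipc:///tmp/zmq-example");
    assert(rc == 0);

    char buf[16];
    zmq_recv(rep, buf, sizeof buf, 0);   /* wait for a request */
    zmq_send(rep, "pong", 4, 0);         /* send the reply */

    zmq_close(rep);
    zmq_ctx_destroy(ctx);
    return 0;
}
```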
I know this is a super old thread but it's an important one, so I'd like to add my $0.02. UDS are much faster in concept for local IPC. Not only are they faster, but if your memory controller supports DMA then UDS causes almost no load on your CPU. The DMA controller will just offload memory operations from the CPU. TCP needs to be packetized into chunks of size MTU, and if you don't have a smart NIC or TCP offload somewhere in specialized hardware, that causes quite a bit of load on the CPU. In my experience, UDS are around 5x faster on modern systems in both latency and throughput.
These benchmarks come from this simple benchmark code. Try for yourself. It also supports UDS, pipes, and TCP: https://github.com/rigtorp/ipc-bench
I see a CPU core struggling to keep up with TCP mode while sitting at about 15% load under UDS, thanks to DMA. Note that Remote DMA (RDMA) gains the same advantages over a network.
Named pipes and sockets are not functionally equivalent; sockets provide more features (they are bidirectional, for a start).
We cannot tell you which will perform better, but I strongly suspect it doesn't matter.
Unix domain sockets will do pretty much what TCP sockets will, but only on the local machine and with (perhaps a bit) lower overhead.
If a Unix socket isn't fast enough and you're transferring a lot of data, consider using shared memory between your client and server (which is a LOT more complicated to set up).
Unix and NT both have "Named pipes" but they are totally different in feature set.
One problem with sockets is that they do not have a way to flush the buffer. There is something called the Nagle algorithm, which collects small writes and can delay sending them (classically by around 40 ms when it interacts with delayed ACKs). So if it is responsiveness you care about and not bandwidth, you might be better off with a pipe.
You can disable Nagle with the socket option TCP_NODELAY, but then the reading end will never receive two short messages in one single read call.
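Disabling Nagle is a one-line socket option on an existing TCP connection; a minimal sketch:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Assumes 'fd' is an already-connected TCP socket descriptor. */
void disable_nagle(int fd) {
    int one = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}
```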
So test it. I ended up using none of this and implemented memory-mapped queues with a pthread mutex and semaphore in shared memory, avoiding a lot of kernel system calls (though system calls aren't very slow anymore these days).
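A rough sketch of that shared-memory approach, assuming POSIX shm_open and a process-shared pthread mutex (the shm_queue layout and the /ipc-demo name are illustrative; a real queue would also need a semaphore or condition variable to signal readers, plus error handling):

```c
#include <fcntl.h>
#include <pthread.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Illustrative layout of the shared region: one mutex guarding a buffer.
 * Compile with -pthread (and -lrt on older glibc). */
typedef struct {
    pthread_mutex_t lock;
    size_t len;
    char data[4096];
} shm_queue;

static shm_queue *shm_queue_create(const char *name) {  /* e.g. "/ipc-demo" */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) return NULL;
    if (ftruncate(fd, sizeof(shm_queue)) < 0) { close(fd); return NULL; }

    shm_queue *q = mmap(NULL, sizeof(shm_queue), PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    close(fd);
    if (q == MAP_FAILED) return NULL;

    /* The mutex must be marked process-shared to work across processes. */
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&q->lock, &attr);
    q->len = 0;
    return q;
}
```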
For two-way communication with named pipes:
Named pipes are quite easy to implement.
E.g. I implemented a project in C with named pipes; thanks to standard file input/output based communication (fopen, fprintf, fscanf ...) it was easy and clean (if that is also a consideration).
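A minimal writer-side sketch of that stdio-style use of a FIFO (the path is just an example; a second FIFO going the other way gives you the two-way channel mentioned above):

```c
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void) {
    /* Create the named pipe if it doesn't exist yet (EEXIST is harmless). */
    mkfifo("/tmp/demo.fifo", 0600);

    /* fopen() blocks until a reader opens the other end of the FIFO. */
    FILE *out = fopen("/tmp/demo.fifo", "w");
    if (!out) { perror("fopen"); return 1; }

    fprintf(out, "hello %d\n", 42);   /* same API as any other FILE* */
    fclose(out);
    return 0;
}
```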
I even coded them in Java (I was serializing and sending objects over them!).
Named pipes have one disadvantage: