Performance of sockets vs. pipes

Posted 2024-08-14 17:33:20

I have a Java program which communicates with a C++ program using a socket on localhost. Can I expect to gain any performance (either latency, bandwidth, or both) by moving to a native OS pipe? I'm primarily interested in Windows at the moment, but any insight related to Unix/Linux/OSX is welcome as well.

EDIT: Clarification: both programs run on the same host, currently communicating via a socket, i.e. by making a TCP/IP connection to localhost:. My question was what are the potential performance benefits of switching to using (local) named pipes (Windows), or their Unix equivalent (AF_UNIX domain socket?).
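For concreteness, a minimal sketch of what the AF_UNIX variant might look like on the Java side (this assumes Java 16+ for java.net.UnixDomainSocketAddress; the socket path and the tiny "ping" protocol are made up, and the C++ peer would listen on the same path):

    import java.net.StandardProtocolFamily;
    import java.net.UnixDomainSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;

    public class UnixSocketClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical socket path; the C++ server would bind()/listen() on the same file.
            UnixDomainSocketAddress addr =
                    UnixDomainSocketAddress.of(Path.of("/tmp/demo.sock"));

            // AF_UNIX support in NIO channels requires Java 16+.
            try (SocketChannel ch = SocketChannel.open(StandardProtocolFamily.UNIX)) {
                ch.connect(addr);
                ch.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));

                ByteBuffer reply = ByteBuffer.allocate(64);
                ch.read(reply);
                reply.flip();
                System.out.println(StandardCharsets.UTF_8.decode(reply));
            }
        }
    }

On Windows there is no dedicated named-pipe API in the JDK; one common workaround is to open an existing pipe as a file path of the form \\.\pipe\<name> (e.g. via RandomAccessFile), but that is a separate experiment.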

Comments (3)

向日葵 2024-08-21 17:33:20

Ken is right. Named pipes are definitely faster on Windows. On UNIX & Linux, you'd want a UDS or local pipe. Same thing, different name.

Anything other than sockets will be faster for local communication. This includes memory mapped files, local pipes, shared memory, COM, etc.
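To make the memory-mapped-file option concrete, here is a rough Java sketch (the file name and the 4 kB size are arbitrary; the C++ side would map the same file with mmap or CreateFileMapping/MapViewOfFile, and a real design still needs some way to signal the reader that data is ready):

    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class MappedFileWriter {
        public static void main(String[] args) throws Exception {
            // Arbitrary file name; the C++ process maps the same file.
            Path file = Path.of("ipc-demo.bin");

            try (FileChannel ch = FileChannel.open(file,
                    StandardOpenOption.CREATE,
                    StandardOpenOption.READ,
                    StandardOpenOption.WRITE)) {

                // Map a 4 kB region that the other process can also see.
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

                byte[] msg = "hello from java".getBytes(StandardCharsets.UTF_8);
                buf.putInt(msg.length);   // tiny length-prefixed "protocol"
                buf.put(msg);
                buf.force();              // flush the mapped region
            }
        }
    }

The mapping itself is the easy part; the real work with shared memory is coordinating who writes when and how the reader is notified.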

丶情人眼里出诗心の 2024-08-21 17:33:20

The first google hit turned up this, which clocked NT4 and XP and found named pipes (that's what you meant, right?) to be faster on Windows.

泛泛之交 2024-08-21 17:33:20

For local process communication, pipes are definitely faster than sockets. Here is a benchmark, or a cached copy here.

SYSV IPC vs. UNIX pipes vs. UNIX sockets

Latency test (samples: 1 million)

    Method                 Average latency (us)
    SYSV IPC msgsnd/rcv    7.0
    UNIX pipe              5.9
    UNIX sockets           11.4

Bandwidth test (samples: 1 million, data size: 1 kB, block size: 1 kB)

    Method                 Average bandwidth (MB/s)
    SYSV IPC msgsnd/rcv    108
    UNIX pipe              142
    UNIX sockets           95

Notes

msgsnd/rcv have a maximum block size: on my
system it’s about 4kB. Performance increases as block size is raised
towards the ceiling. The highest bandwidth I could achieve was 284
MB/s, with a block size of 4000 bytes and a data size of 2MB.
Performance dropped off slightly as the data size was decreased, with
4kB of data giving a bandwidth of 266 MB/s.

I don’t know what block size my system uses internally when
transferring data through a pipe, but it seems a lot higher than 4kB.
Using a block size of 32kB, I could achieve over 500 MB/s. I tested
this with various data sizes from 32kB to 32MB and each time achieved
400-550 MB/s. Performance tailed off as the data and block sizes were
decreased, similarly as the block size was raised.

Unix socket performance is much better with a higher block size than
1kB. I got best results (134 MB/s) with 2kB blocks, 4kB data size.
This is comparable with UNIX pipes.

I’m not sure if my testing methods are perfect. Bandwidth testing
seems fairly straightforward, but I kind of guessed at how to test
latency. I just sent 1 character back and forth between two processes
living at either end of a fork().
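A rough Java analogue of that ping-pong measurement, using a child process's stdin/stdout pipes instead of fork() (this assumes a `cat` binary is on the PATH; the sample count is arbitrary):

    import java.io.InputStream;
    import java.io.OutputStream;

    public class PipePingPong {
        public static void main(String[] args) throws Exception {
            // Child whose stdin/stdout are OS pipes; `cat` simply echoes each byte back.
            Process child = new ProcessBuilder("cat").start();
            OutputStream toChild = child.getOutputStream();
            InputStream fromChild = child.getInputStream();

            int samples = 100_000;
            long start = System.nanoTime();
            for (int i = 0; i < samples; i++) {
                toChild.write('x');   // one byte down the pipe
                toChild.flush();
                fromChild.read();     // wait for it to come back
            }
            long elapsed = System.nanoTime() - start;

            System.out.printf("average round trip: %.2f us%n", elapsed / 1000.0 / samples);
            child.destroy();
        }
    }

The number includes scheduling of the child process as well as the pipe itself, so it is only meaningful for comparing transports on the same machine, as in the figures above.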

One factor I didn’t test is the time taken to bind() a UNIX socket and
connect() to the server. If you keep connections open, it’s obviously
not significant.

Conclusion

On my system, UNIX pipes give higher bandwidth and lower
latency than SYSV IPC msgsnd/rcv and UNIX sockets, but the advantage
depends on the block size used. If you are sending small amounts of
data, you probably don’t need to worry about speed, just pick the
implementation that suits you. If you want to shift a huge amount of
data, use pipes with a 32kB block size.
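As an illustration of that 32 kB suggestion, a quick Java sketch that pushes data through an OS pipe in 32 kB blocks and reports throughput (it assumes a POSIX shell and /dev/null are available; the 256 MB total is arbitrary):

    import java.io.OutputStream;

    public class PipeBandwidth {
        public static void main(String[] args) throws Exception {
            // The child just drains the pipe.
            Process sink = new ProcessBuilder("sh", "-c", "cat > /dev/null").start();
            OutputStream toSink = sink.getOutputStream();

            byte[] block = new byte[32 * 1024];    // the 32 kB block size suggested above
            long totalBytes = 256L * 1024 * 1024;  // 256 MB of test data

            long start = System.nanoTime();
            for (long sent = 0; sent < totalBytes; sent += block.length) {
                toSink.write(block);
            }
            toSink.close();                        // EOF lets the child exit
            sink.waitFor();
            double seconds = (System.nanoTime() - start) / 1e9;

            System.out.printf("%.1f MB/s%n", totalBytes / (1024.0 * 1024.0) / seconds);
        }
    }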

System information

CPU : Intel Celeron III (Coppermine)

RAM : 256MB

Kernel : Linux 2.2.18

I think that even though sockets are flexible, they can also lead to bad code design. Using pipes forces you to design the architecture of your project: which process should be the parent, which should be the children, and how they cooperate (this determines how the pipes are established), and to assign different functionality to each process. A project designed this way has a hierarchical structure and is easier to maintain.

https://web.archive.org/web/20160401124744/https://sites.google.com/site/rikkus/sysv-ipc-vs-unix-pipes-vs-unix-sockets
