Fast cross-platform inter-process communication in C++

Posted 2024-08-20 18:24:00

I'm looking for a way to get two programs to efficiently transmit a large amount of data to each other. It needs to work on Linux and Windows, in C++. The context here is a P2P network program that acts as a node on the network and runs continuously; other applications (which could be games, hence the need for a fast solution) will use it to communicate with other nodes in the network. If there's a better solution for this, I would be interested.

6 Comments

等风来 2024-08-27 18:24:00

boost::asio is a cross-platform library that handles asynchronous I/O over sockets. You can combine it with, for instance, Google Protocol Buffers for your actual messages.

Boost also provides boost::interprocess for communication between processes on the same machine, but asio lets you do your communication asynchronously, and you can easily use the same handlers for both local and remote connections.
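
For illustration, here's a minimal sketch of the asio side of this suggestion: a node accepts one local TCP connection and reads from it asynchronously. The port number and buffer size are arbitrary placeholders, and real message framing (e.g. with Protocol Buffers) would go inside the read handler.

```cpp
// Minimal Boost.Asio sketch: accept one local TCP connection and keep
// reading from it asynchronously. Port 12345 and the 4 KiB buffer are
// arbitrary placeholders for the example.
#include <boost/asio.hpp>
#include <array>
#include <iostream>
#include <memory>

using boost::asio::ip::tcp;

void start_read(std::shared_ptr<tcp::socket> sock,
                std::shared_ptr<std::array<char, 4096>> buf) {
    sock->async_read_some(boost::asio::buffer(*buf),
        [sock, buf](boost::system::error_code ec, std::size_t n) {
            if (ec) return;                        // peer closed or error
            std::cout << "received " << n << " bytes\n";
            start_read(sock, buf);                 // queue the next read
        });
}

int main() {
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 12345));
    auto sock = std::make_shared<tcp::socket>(io);
    acceptor.async_accept(*sock, [sock](boost::system::error_code ec) {
        if (!ec)
            start_read(sock, std::make_shared<std::array<char, 4096>>());
    });
    io.run();   // run the event loop; all handlers execute here
}
```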

美人如玉 2024-08-27 18:24:00

I have been using ICE by ZeroC (www.zeroc.com), and it has been fantastic. It's super easy to use, and it's not only cross-platform but also supports many languages (Python, Java, etc.), and there's even an embedded version of the library.

时光瘦了 2024-08-27 18:24:00

Well, if we can assume the two processes are running on the same machine, then the fastest way for them to transfer large quantities of data back and forth is by keeping the data inside a shared memory region; with that setup, the data is never copied at all, since both processes can access it directly. (If you wanted to go even further, you could combine the two programs into one program, with each former 'process' now running as a thread inside the same process space instead. In that case they would automatically share 100% of their memory with each other.)

Of course, just having a shared memory area isn't sufficient in most cases: you would also need some sort of synchronization mechanism so that the processes can read and update the shared data safely, without tripping over each other. The way I would do that would be to create two double-ended queues in the shared memory region (one for each process to send with). Either use a lockless FIFO-queue class, or give each double-ended queue a semaphore/mutex that you can use to serialize pushing data items into the queue and popping data items out of the queue.

(Note that the data items you'd be putting into the queues would only be pointers to the actual data buffers, not the data itself... otherwise you'd be back to copying large amounts of data around, which is what you want to avoid. It's a good idea to use shared_ptrs instead of plain C pointers, so that "old" data is automatically freed when the receiving process is done using it.)

Once you have that, the only other thing you'd need is a way for process A to notify process B when it has just put an item into the queue for B to receive (and vice versa)... I typically do that by writing a byte into a pipe that the other process is select()-ing on, causing the other process to wake up and check its queue, but there are other ways to do it as well.
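
As a rough illustration of the shared-memory idea (not the exact two-queue design described above), here is a sketch using Boost.Interprocess: the payload lives in a shared segment and only a small handle travels through a message queue, so the bulk data is never copied. The segment and queue names are made up for the example, and both the sending and receiving halves are shown in one program for brevity.

```cpp
// Sketch using Boost.Interprocess: allocate the payload in a shared-memory
// segment and pass only a handle (an offset into the segment) through a
// message queue. "node_shm" and "node_queue" are illustrative names.
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/ipc/message_queue.hpp>
#include <boost/interprocess/shared_memory_object.hpp>
#include <cstring>
#include <iostream>

namespace bip = boost::interprocess;
using handle_t = bip::managed_shared_memory::handle_t;

int main() {
    bip::managed_shared_memory segment(bip::open_or_create, "node_shm", 1 << 20);
    bip::message_queue mq(bip::open_or_create, "node_queue", 64, sizeof(handle_t));

    // --- sender side: put the data into shared memory, send only the handle ---
    char* payload = static_cast<char*>(segment.allocate(256));
    std::strcpy(payload, "hello from the sender");
    handle_t handle = segment.get_handle_from_address(payload);
    mq.send(&handle, sizeof(handle), 0);

    // --- receiver side (normally running in the other process) ---
    handle_t recv_handle;
    bip::message_queue::size_type recvd = 0;
    unsigned int prio = 0;
    mq.receive(&recv_handle, sizeof(recv_handle), recvd, prio);
    char* data = static_cast<char*>(segment.get_address_from_handle(recv_handle));
    std::cout << data << '\n';
    segment.deallocate(data);          // free the buffer once it's consumed

    // cleanup so repeated runs start from a clean slate
    bip::message_queue::remove("node_queue");
    bip::shared_memory_object::remove("node_shm");
    return 0;
}
```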

卸妝后依然美 2024-08-27 18:24:00

This is a hard problem.

The bottleneck is the internet, and the fact that your clients might be behind NAT.

If you are not talking about the internet, or if you explicitly don't have clients behind carrier-grade evil NATs, you need to say so.

Because it boils down to: use TCP. Suck it up.

流星番茄 2024-08-27 18:24:00

I would strongly suggest Protocol Buffers on top of TCP or UDP sockets.
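
A small note on what that involves: protobuf messages are not self-delimiting, so each serialized message is normally prefixed with its length before being written to the socket. The sketch below assumes a hypothetical generated message type called Packet (from a packet.proto of your own design) and, for brevity, ignores byte order of the length prefix.

```cpp
// Length-prefix framing for protobuf messages over a stream socket.
// "Packet" and "packet.pb.h" are hypothetical; any generated message type
// works the same way. The 4-byte length is left in host byte order here.
#include <cstdint>
#include <cstring>
#include <string>
#include "packet.pb.h"   // hypothetical generated header

// Serialize a message and prepend its length: [4-byte length][payload].
std::string frame(const Packet& msg) {
    std::string body;
    msg.SerializeToString(&body);
    uint32_t len = static_cast<uint32_t>(body.size());
    std::string out(reinterpret_cast<const char*>(&len), sizeof(len));
    return out + body;
}

// Parse one framed message out of a buffer; returns false if incomplete.
bool unframe(const std::string& buf, Packet* msg) {
    if (buf.size() < sizeof(uint32_t)) return false;
    uint32_t len = 0;
    std::memcpy(&len, buf.data(), sizeof(len));
    if (buf.size() < sizeof(len) + len) return false;
    return msg->ParseFromArray(buf.data() + sizeof(len), static_cast<int>(len));
}
```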

浅唱々樱花落 2024-08-27 18:24:00

So, while the other answers cover part of the problem (socket libraries), they're not telling you about the NAT issue. Rather than have your users tinker with their routers, it's better to use some techniques that should get you through a vaguely sane router with no extra configuration. You need to use all of these to get the best compatibility.

First, ICE library here is a NAT traversal technique that works with STUN and/or TURN servers out in the network. You may have to provide some infrastructure for this to work, although there are some public STUN servers.

Second, use both UPnP and NAT-PMP. One library here, for example.

Third, use IPv6. Teredo, which is one way of running IPv6 over IPv4, often works when none of the above do, and who knows, your users may have working IPv6 by some other means. Very little code to implement this, and increasingly important. I find about half of Bittorrent data arrives over IPv6, for example.
