Can there be a common pointer between two different programs on the same computer?
I need 2 different programs to work on a single set of data.
I can set up a network (UDP) connection between them, but I want to avoid transferring the whole data set by any means.
It sounds a little absurd, but is it possible to share some kind of pointer between these two programs, so that when one updates the data the other can simply get the pointer and start using it?
I am using Ubuntu 9.10.
8 Answers
You're talking about IPC - Interprocess Communication. There are many options.
One is a memory-mapped file. It comes close to doing what you described. It may or may not be the optimal approach for your requirements, though. Read up on IPC to get some depth.
What you're looking for is usually called a "shared memory segment", and how you access it is platform-specific.
On POSIX (most Unix/Linux) systems, you use shm_open()/mmap() from sys/mman.h (the older System V shmget()/shmat() interface lives in sys/shm.h).
On Win32, it's done with memory-mapped files, so you'll use CreateFileMapping()/MapViewOfFile() etc.
Not sure about Macs, but you can probably use shm_*() there as well.
Shared memory can give about the highest bandwidth of any form of IPC available, but it's also kind of a pain to manage -- you need to synchronize access to the shared memory, just like you would with threads. If you really need that raw bandwidth, it's about the best there is -- but a design that needs that kind of bandwidth is often one with a poorly chosen dividing line between the processes, in which case it may be unnecessarily difficult to get it to work well.
Also note that pipes (for one example) are a lot easier to use, and still have pretty serious bandwidth -- they still (normally) use a kernel-allocated buffer in memory, but they automate synchronizing access to it. The loss of bandwidth comes from the fact that automatic synchronization requires a fairly pessimistic locking algorithm. That still doesn't impose a huge amount of overhead, though...
POSIX shared memory functions work for Unix flavors. IBM mainframes (370/XA/ESA/z/OS) can use cross-memory services at a lower level. You also have to consider whether your app will scale beyond a single processor or not.
If you really, really need to do that, it's a hint that your two programs should perhaps be merged into one program with two threads... (if you have the program sources, doing that is a piece of cake).
Perhaps using "memcached" as a broker between your two processes might be better; then each process can exchange keys with the other.
You're constrained to, I believe, 1024Kb or less per key/value pair, but the immediate benefits are interoperability, stability, and the future ability to connect multiple processes on multiple machines together.
No, sorry. I heard long ago of an experimental OS that had a very large address space, where parts of it were on one machine and other parts on other machines. It would have allowed exactly what you ask...
Note: I am assuming that the 2 programs run on different machines. If they are simply different processes, you can use named sections to share data.
Putting aside the fact that it can be done, interprocess communication should never be done by sharing resources -- let alone memory spaces. That is a recipe for disaster.
Proper IPC is done by proper means of communication, such as sockets. Sharing memory is never the way to go.