Unix: share already-mapped memory between processes
I have a pre-built userspace library that has an API along the lines of
void getBuffer (void **ppBuf, unsigned long *pSize);
void bufferFilled (void *pBuf, unsigned long size);
The idea being that my code requests a buffer from the lib, fills it with stuff, then hands it back to the lib.
I want another process to be able to fill this buffer. I can do this by creating some new shared buffer via the shm*/shm_* APIs, having the other process fill that, then copying it to the lib's buffer in the lib's local process, but this has the overhead of an extra (potentially large) copy.
Is there a way to share memory that has ALREADY been mapped for a process? eg something like:
[local lib process]
getBuffer (&myLocalBuf, &mySize);
shmName = shareThisMemory (myLocalBuf, mySize);
[other process]
myLocalBuf = openTheSharedMemory (shmName);
That way the other process could write directly into the lib's buffer.
(Synchronization between the processes is already taken care of so no problems there).
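For reference, the copy-based workaround described above might look roughly like this, assuming POSIX shm_* (the name "/fill_buf" and the helper fillViaCopy are made up for illustration, and error handling is trimmed):

/* Sketch of the copy-based workaround: another process fills a POSIX
 * shared-memory region, and this process copies it into the lib's buffer.
 * Synchronization is assumed to be handled elsewhere. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

extern void getBuffer(void **ppBuf, unsigned long *pSize);
extern void bufferFilled(void *pBuf, unsigned long size);

void fillViaCopy(void)
{
    void *libBuf;
    unsigned long size;
    getBuffer(&libBuf, &size);

    /* Map the region the other process created and filled. */
    int fd = shm_open("/fill_buf", O_RDONLY, 0600);
    void *shared = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);

    /* ...wait here for the other process to signal "buffer full"... */

    memcpy(libBuf, shared, size);   /* the extra copy the question wants to avoid */
    munmap(shared, size);

    bufferFilled(libBuf, size);
}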
There are good reasons for not allowing this functionality, particularly from the security side of things. A "share this mem" API would subvert the access permissions system.
Just assume an application holds some sort of critical/sensitive information in memory; the app links (e.g. via a shared library, a preload, or a modified linker/loader) to some outside component, and said component, for the sheer fun of it, decides to "share out the address space". It'd be a free-for-all, a method to bypass any sort of data access permission/restriction. You'd tunnel your way into the app.
Admittedly not good for your use case, but rather justified from the system/application integrity point of view. Try searching the web for /proc/pid/mem mmap vulnerability for some explanation of why this sort of access isn't wanted (in general).
If the library you use is designed to allow such shared access, it must itself provide the hooks to either allocate such a shared buffer, or use an elsewhere-preallocated (and possibly shared) buffer.
Edit: To make this clear, the process boundary is explicitly about not sharing the address space (amongst other things).
If you require a shared address space, either use threads (then the entire address space is shared and there's never any need to "export" anything), or explicitly set up a shared memory region in the same way as you'd set up a shared file.
Look at it from the latter point of view: two processes not opening it O_EXCL would share access to a file. But if one process already has it open O_EXCL, then the only way to "make it shared" (open-able to another process) is to close() it first, then open() it again without O_EXCL. There's no other way to "remove" exclusive access from a file you've opened that way other than to close it first. Just as there is no way to remove exclusive access to a memory region mapped as such other than to unmap it first - and for a process's memory, MAP_PRIVATE is the default, for good reasons.
More: a process-shared memory buffer really isn't much different from a process-shared file; using SysV-IPC style semantics, you have:
I.e. the key is the "handle" you're looking for; pass it the same way you would pass a filename, and both sides of the IPC connection can then use that key to check whether the shared resource exists, as well as access (attach to the handle) the contents through it.
A more modern way to share memory among processes is to use the POSIX shm_open() API.
Essentially, it's a portable way of putting files on a ramdisk (tmpfs). So one process uses shm_open plus ftruncate plus mmap. The other uses shm_open (with the same name) plus mmap plus shm_unlink. (With more than two processes, the last one to mmap it can unlink it.)
This way the shared memory will get reclaimed automatically when the last process exits; no need to explicitly remove the shared segment (as with SysV shared memory).
You still need to modify your application to allocate shared memory in this way, though.
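A minimal sketch of that sequence, where the name "/demo_shm" and the size are assumptions (on older glibc, link with -lrt):

/* POSIX shared memory sketch - run once with an argument (creator side),
 * then once without (consumer side). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NAME "/demo_shm"   /* made-up name; both sides must agree on it */
#define SIZE 4096

int main(int argc, char **argv)
{
    int creator = (argc > 1);   /* pass any argument to act as the creating side */

    int fd = shm_open(NAME, creator ? O_CREAT | O_RDWR : O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }

    /* Only the creator sizes the object; the other side maps it as-is. */
    if (creator && ftruncate(fd, SIZE) == -1) { perror("ftruncate"); return 1; }

    char *buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);   /* the mapping stays valid after the descriptor is closed */

    if (creator) {
        strcpy(buf, "hello");     /* the other process sees this write */
    } else {
        printf("read: %s\n", buf);
        shm_unlink(NAME);         /* last user removes the name; the memory is
                                     reclaimed once all mappings are gone */
    }

    munmap(buf, SIZE);
    return 0;
}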
In theory at least, you could record the memory address of the buffer you got from your lib and have the other process mmap the /proc/$PID_OF_FIRST_PROCESS/mem file with that address as the offset.
I haven't tested this, I'm not sure /proc/PID/mem actually implements the mmap file operation, and there are a ton of security considerations, but it might work. Best of luck :-)