Shared memory for loading the same data (a numpy array) into multiple MPI processes?
I have a long, skinny numpy array (dim=(4096*4096, 1)) that needs to be read by multiple MPI processes (using mpi4py), each of which performs some operations on it independently. However, loading such a large array in every process is heavy on memory. Is there a way to have/use shared memory (perhaps allocated once up front and not touched afterwards, with the MPI processes only reading from the same location, i.e. read-only access)? This seems possible with python-multiprocessing, but what about mpi4py? (Thanks in advance.)
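
Below is a minimal sketch of one way this can be done with mpi4py, using an MPI-3 shared-memory window (`MPI.Win.Allocate_shared`): one rank per node allocates the buffer, every rank maps it as a numpy array view, and all ranks then read it without per-process copies. The `float64` dtype and the `data.npy` filename are assumptions for illustration; the question does not state either.

```python
# Sketch: share one large read-only numpy array across MPI ranks on a node
# via an MPI-3 shared-memory window (assumes all ranks share the node).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
# Split off a communicator of ranks that can actually share memory.
node_comm = comm.Split_type(MPI.COMM_TYPE_SHARED)

shape = (4096 * 4096, 1)            # from the question
itemsize = MPI.DOUBLE.Get_size()    # assuming float64 data
# Only node rank 0 allocates the real buffer; the others request 0 bytes.
nbytes = int(np.prod(shape)) * itemsize if node_comm.rank == 0 else 0
win = MPI.Win.Allocate_shared(nbytes, itemsize, comm=node_comm)

# Every rank queries rank 0's segment and wraps it as a numpy view
# (no copy is made).
buf, _ = win.Shared_query(0)
arr = np.ndarray(buffer=buf, dtype=np.float64, shape=shape)

if node_comm.rank == 0:
    arr[:] = np.load("data.npy")    # hypothetical data file; fill once
node_comm.Barrier()                 # ensure the write is visible before reads

# From here on, every rank reads `arr` from the same physical memory.
print(node_comm.rank, arr[0, 0])
```

Run it as usual, e.g. `mpiexec -n 4 python script.py`. Since the ranks only read after the barrier, no further window synchronization is needed; if the array spans multiple nodes, one copy per node is still required.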
