Python multiprocessing with unrelated processes
I have many processes that are spawned separately, not from parent to child. The processes need to send messages to specific processes. The receiving process's address (pid) can be stored in a database, but the processes cannot share any common variables in memory.
I could not find any way to accomplish this with Python's multiprocessing package and am now looking into a socket-based server, but this issue still leaves me curious whether this kind of architecture could be achieved with multiprocessing — the advantage would be being able to easily pass picklable objects.
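For what it's worth, the stdlib does cover this case: `multiprocessing.connection` provides `Listener` and `Client`, which let entirely unrelated processes exchange picklable objects over a socket. A minimal sketch, assuming the receiver publishes its address (e.g. in your database) and both sides agree on an auth key — the key, the message contents, and the use of a thread to stand in for a second process are all illustrative assumptions:

```python
from multiprocessing.connection import Listener, Client
import threading

AUTHKEY = b"secret"          # both sides must agree on this key (an assumption)
received = []
ready = threading.Event()
address_box = []

def receiver():
    # Bind to any free localhost port; in real use, the chosen address
    # would be published (e.g. written to your database) for senders.
    with Listener(("localhost", 0), authkey=AUTHKEY) as listener:
        address_box.append(listener.address)
        ready.set()
        with listener.accept() as conn:
            received.append(conn.recv())

t = threading.Thread(target=receiver)
t.start()
ready.wait()

# A sender started independently only needs the published address:
with Client(address_box[0], authkey=AUTHKEY) as conn:
    conn.send({"msg": "hello", "payload": [1, 2, 3]})  # any picklable object

t.join()
```

Here a thread plays the sender for demonstration; a separately spawned script doing only the `Client(...)` part would behave the same way.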
1 answer
Database? Why? Everyone uses a file for this, since a file is cheap, available, and you're only storing one integer value.
Also, since you're going to use a file, you have more interesting choices.
Every process writes the message to a named pipe. The receiving process takes requests off the named pipe.
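A POSIX-only sketch of the named-pipe option, with the pipe path as an assumption; a thread stands in for the receiving process, and any independently started process that knows the path could open it for writing the same way:

```python
import os
import tempfile
import threading

fifo_path = os.path.join(tempfile.mkdtemp(), "msgfifo")  # hypothetical path
os.mkfifo(fifo_path)  # POSIX only

messages = []

def receiver():
    # Opening the FIFO for reading blocks until some writer opens it.
    with open(fifo_path) as fifo:
        for line in fifo:
            messages.append(line.rstrip("\n"))

t = threading.Thread(target=receiver)
t.start()

# A separately spawned sender would do exactly this:
with open(fifo_path, "w") as fifo:
    fifo.write("hello from pid %d\n" % os.getpid())

t.join()
os.remove(fifo_path)
```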
Every process writes the message to a file. A simple lock assures that only one process at a time has access to the file, assuring serialization. The receiving process reads from this file.
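The locked-file scheme might look like this sketch, using POSIX `fcntl.flock` for the lock; the mailbox path and line-per-message format are assumptions:

```python
import fcntl
import os
import tempfile

MAILBOX = os.path.join(tempfile.mkdtemp(), "mailbox.txt")  # hypothetical shared path

def send(text):
    # Append one line under an exclusive lock, so concurrent senders serialize.
    with open(MAILBOX, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        f.write(text + "\n")
        fcntl.flock(f, fcntl.LOCK_UN)

def receive_all():
    # Read everything, then truncate: each message is consumed exactly once.
    with open(MAILBOX, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        lines = [line.rstrip("\n") for line in f]
        f.seek(0)
        f.truncate()
        fcntl.flock(f, fcntl.LOCK_UN)
    return lines
```

On Windows you'd swap `fcntl.flock` for `msvcrt.locking` or a lock-file convention, which is where the OS-specific caveat below starts to bite.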
Every process uses HTTP to make a RESTful request to the receiving process. The receiving process uses a stripped-down HTTP server framework to process requests.
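A sketch of the HTTP option using only the stdlib `http.server`; the JSON message shape is an assumption, and a daemon thread stands in for the long-running receiver:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

inbox = []

class MessageHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body and stash it in the receiver's inbox.
        length = int(self.headers["Content-Length"])
        inbox.append(json.loads(self.rfile.read(length)))
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("localhost", 0), MessageHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A sender, started at any later time, only needs the receiver's address:
url = "http://localhost:%d/" % server.server_port
req = urllib.request.Request(
    url,
    data=json.dumps({"to": 1234, "msg": "hi"}).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)

server.shutdown()
```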
Every process uses a message queue to enqueue messages. The receiving process dequeues the messages. The queue is a file.
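One common way to make "the queue is a file" concrete is a spool directory: each message is its own file, written to a temporary name and renamed into place so readers never see a half-written message. A sketch, with the naming scheme as an assumption:

```python
import os
import tempfile
import time
import uuid

class FileQueue:
    def __init__(self, directory):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)

    def enqueue(self, text):
        # Zero-padded timestamp keeps lexicographic order == arrival order;
        # rename is atomic, so a half-written file is never dequeued.
        name = "%020d-%s" % (time.time_ns(), uuid.uuid4().hex)
        tmp = os.path.join(self.directory, "." + name)
        with open(tmp, "w") as f:
            f.write(text)
        os.rename(tmp, os.path.join(self.directory, name))

    def dequeue(self):
        # Oldest visible file first; dotfiles are still being written.
        names = sorted(n for n in os.listdir(self.directory)
                       if not n.startswith("."))
        if not names:
            return None
        path = os.path.join(self.directory, names[0])
        with open(path) as f:
            text = f.read()
        os.remove(path)
        return text
```

Usage would be `q = FileQueue("/some/shared/dir")` in every process, with senders calling `enqueue` and the receiver polling `dequeue`.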
etc. And yes, there are more. But they start to get OS-specific.