Properly disconnecting from a multiprocessing remote manager
When using multiprocessing Manager objects to create a server and connect to it remotely, the client needs to maintain a connection to the remote server. If the server goes away before the client shuts down, the client will try to connect to the server's expected address forever.
I'm running into a deadlock in client code that tries to exit after the server has gone away: my client process never exits.
If I del my remote objects and my client's manager before the server goes down, the process exits normally, but deleting the manager and remote objects immediately after use is less than ideal.
Is that the best I can do? Is there another (more proper) way to disconnect from a remote manager object? Is there a way to cleanly exit a client after a server has gone down and/or the connection is lost?
I know socket.setdefaulttimeout doesn't work with multiprocessing, but is there a way to set a connection timeout for the multiprocessing module specifically? This is the code I'm having trouble with:
from multiprocessing.managers import BaseManager
m = BaseManager(address=('my.remote.server.dns', 50000), authkey=b'mykey')  # authkey must be bytes on Python 3
# this next line hangs forever if my server is not running or gets disconnected
m.connect()
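One workaround not mentioned in the post: run the blocking connect() in a daemon thread and abandon it if it doesn't finish in time. This is a sketch, and the helper name connect_with_timeout is mine, not part of multiprocessing:

```python
import threading
from multiprocessing.managers import BaseManager


def connect_with_timeout(manager, timeout=5.0):
    """Run manager.connect() in a daemon thread; give up after `timeout`
    seconds. Returns True on success, False on timeout or error.

    Note: on timeout the worker thread may keep blocking in the
    background, but since it is a daemon thread it will not prevent
    the process from exiting.
    """
    result = {}

    def _connect():
        try:
            manager.connect()
            result['ok'] = True
        except Exception as exc:  # e.g. ConnectionRefusedError
            result['error'] = exc

    worker = threading.Thread(target=_connect, daemon=True)
    worker.start()
    worker.join(timeout)
    return result.get('ok', False)
```

This doesn't fix the underlying socket behavior, it just keeps the main thread from hanging on it.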
UPDATE: This is broken in multiprocessing. The connection timeout needs to happen at the socket level (and the socket needs to be non-blocking to do that), but non-blocking sockets break multiprocessing. There is no supported way to give up on making a connection if the remote server is unavailable.
Yes, but this is a hack. It is my hope that someone with greater python-fu can improve this answer. The timeout for multiprocessing is defined in multiprocessing/connection.py. Specifically, the way I was able to make it work was by monkey-patching the _init_timeout function so that it returns time.time() plus a shorter timeout (5 seconds, in my case) instead of the default. If there's an easier way, I'm sure someone will point it out. If not, this might be a candidate for a feature request to the multiprocessing dev team. I think something as elementary as setting a timeout should be easier than this. On the other hand, they may have philosophical reasons for not exposing timeout in the API.
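A minimal sketch of such a monkey-patch, assuming the private _init_timeout helper in multiprocessing.connection (an undocumented implementation detail that may change between Python versions; the connect retry loop that consumed it lived in Python 2's SocketClient, so newer interpreters may ignore it):

```python
import time
from multiprocessing import connection


# _init_timeout is a private helper in multiprocessing/connection.py that
# returns the deadline used when attempting a connection. Overriding it
# shortens the effective connect timeout. This is a fragile hack: it
# relies on an internal, undocumented function.
def _patched_init_timeout(timeout=5.0):
    return time.time() + timeout  # deadline: now + 5 seconds


connection._init_timeout = _patched_init_timeout
```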
Hope that helps.
Can this help you?
(Taken from section 16.6 of the Python 2 docs, i.e. the multiprocessing page.)
Usually, timeouts are tested in some while loop.
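The while-loop pattern the answer alludes to can be sketched as a polling loop that probes the server's address with a short per-attempt socket timeout until an overall deadline passes. The helper wait_for_server and its parameters are illustrative, not from the Python docs:

```python
import socket
import time


def wait_for_server(address, deadline=10.0, poll_interval=0.5):
    """Poll `address` with plain TCP connects until the server accepts a
    connection or `deadline` seconds elapse. Returns True if reachable.

    Calling this before BaseManager.connect() lets the client give up
    cleanly instead of hanging inside multiprocessing.
    """
    end = time.time() + deadline
    while time.time() < end:
        try:
            # Short per-attempt timeout so a single probe can't hang.
            socket.create_connection(address, timeout=poll_interval).close()
            return True
        except OSError:
            time.sleep(poll_interval)
    return False
```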