Properly disconnecting from a multiprocessing remote manager

Posted on 2024-11-17 14:44:01

When using multiprocessing Manager objects to create a server and connect remotely to that server, the client needs to maintain a connection to the remote server. If the server goes away before the client shuts down, the client will try to connect to the expected address of the server forever.

I'm running into a deadlock on client code trying to exit after the server has gone away, as my client process never exits.

If I del my remote objects and my client's manager before the server goes down, the process exits normally, but deleting the manager object and remote objects immediately after every use is less than ideal.

Is that the best I can do? Is there another (more proper) way to disconnect from a remote manager object? Is there a way to cleanly exit a client after a server has gone down and/or the connection is lost?

I know socket.setdefaulttimeout doesn't work with multiprocessing, but is there a way to set a connection timeout for the multiprocessing module specifically? This is the code I'm having trouble with:

from multiprocessing.managers import BaseManager

m = BaseManager(address=('my.remote.server.dns', 50000), authkey=b'mykey')  # authkey must be bytes on Python 3
# this next line hangs forever if my server is not running or gets disconnected
m.connect()

UPDATE: This is broken in multiprocessing. The connection timeout needs to happen at the socket level (and the socket needs to be non-blocking for that to work), but non-blocking sockets break multiprocessing. There is no supported way to give up on making a connection when the remote server is unavailable.
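One workaround that avoids touching multiprocessing internals is to issue the blocking connect() from a helper thread and abandon the thread if it does not finish in time. This is a sketch, not part of any API: connect_with_timeout and its parameters are made-up names, and the hung thread is abandoned rather than killed (Python offers no way to interrupt a blocking socket connect from outside), but the caller regains control and can exit.

```python
import threading

def connect_with_timeout(manager, timeout=5.0):
    """Attempt manager.connect() in a daemon thread; give up after `timeout`
    seconds.  Illustrative helper, not part of the multiprocessing API."""
    result = {}

    def _target():
        try:
            manager.connect()
            result['ok'] = True
        except Exception as exc:  # e.g. connection refused, auth failure
            result['error'] = exc

    t = threading.Thread(target=_target, daemon=True)
    t.start()
    t.join(timeout)
    if t.is_alive():
        # The connect is still blocked; abandon the daemon thread.
        raise TimeoutError('gave up connecting after %.1f s' % timeout)
    if 'error' in result:
        raise result['error']
```

Usage would be `connect_with_timeout(m, timeout=5.0)` in place of `m.connect()`; because the thread is a daemon, an abandoned connect attempt no longer prevents the client process from exiting.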

Comments (2)

稍尽春風 2024-11-24 14:44:01

is there a way to set a connection timeout for the multiprocessing module specifically?

Yes, but this is a hack. It is my hope that someone with greater python-fu can improve this answer. The timeout for multiprocessing is defined in multiprocessing/connection.py:

# A very generous timeout when it comes to local connections...
CONNECTION_TIMEOUT = 20.
...
def _init_timeout(timeout=CONNECTION_TIMEOUT):
    return time.time() + timeout

Specifically, the way I was able to make it work was by monkey-patching the _init_timeout function as follows:

import time

from multiprocessing import connection
from multiprocessing.managers import BaseManager

def _new_init_timeout(timeout=5.0):
    # Replacement deadline: give up after 5 seconds instead of 20
    return time.time() + timeout

# _init_timeout is a private helper, so this may break between Python versions
connection._init_timeout = _new_init_timeout

m = BaseManager(address=('somehost', 50000), authkey=b'secret')
m.connect()

Where 5 is the new timeout value. If there's an easier way, I'm sure someone will point it out. If not, this might be a candidate for a feature request to the multiprocessing dev team. I think something as elementary as setting a timeout should be easier than this. On the other hand, they may have philosophical reasons for not exposing timeout in the API.
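Once the connect attempt can actually fail instead of hanging forever, the client still has to handle that failure to exit cleanly. A minimal sketch of the except path (the address and authkey are placeholders; a connection to a local port with no listener typically fails fast with ECONNREFUSED, which surfaces as an OSError):

```python
from multiprocessing.managers import BaseManager

# Placeholder address: assuming nothing is listening on this port,
# connect() raises ConnectionRefusedError (an OSError) instead of hanging.
m = BaseManager(address=('127.0.0.1', 50999), authkey=b'mykey')
try:
    m.connect()
    print('connected')
except OSError as exc:
    print('could not connect:', exc)
```

Catching OSError here covers refused connections and socket-level timeouts; authentication failures raise a separate multiprocessing.AuthenticationError.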

Hope that helps.

落叶缤纷 2024-11-24 14:44:01

Can this help you?

#### TEST_JOIN_TIMEOUT

import multiprocessing
import sys
import time

def join_timeout_func():
    print('\tchild sleeping')
    time.sleep(5.5)
    print('\n\tchild terminating')

def test_join_timeout():
    p = multiprocessing.Process(target=join_timeout_func)
    p.start()

    print('waiting for process to finish')

    while True:
        p.join(timeout=1)
        if not p.is_alive():
            break
        print('.', end='')
        sys.stdout.flush()

(Taken from section 16.6, multiprocessing, of the Python documentation.)

Usually, timeouts are tested in some while loop.
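Applying that join-with-timeout pattern to the original problem: run the blocking connect() in a child process, poll join() with a timeout, and terminate the child if it never returns. A sketch under stated assumptions: connect_or_give_up and _try_connect are made-up names, the address is a placeholder, and a successful connect in the child only proves the server is reachable; the parent must still connect itself to get a usable manager.

```python
import multiprocessing

def _try_connect(address, authkey):
    # Runs in a child process so the parent can abandon it if it hangs.
    from multiprocessing.managers import BaseManager
    m = BaseManager(address=address, authkey=authkey)
    m.connect()  # may block forever if the server is gone

def connect_or_give_up(address, authkey, timeout=5.0):
    p = multiprocessing.Process(target=_try_connect, args=(address, authkey))
    p.start()
    p.join(timeout)
    if p.is_alive():          # still blocked: kill the child, report failure
        p.terminate()
        p.join()
        return False
    return p.exitcode == 0    # 0 means connect() succeeded in the child

if __name__ == '__main__':
    print(connect_or_give_up(('127.0.0.1', 50000), b'mykey'))
```

The probe costs one extra process and one extra connection, but unlike the monkey-patch it relies only on documented Process.join/terminate behavior.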
