Pause before retrying a connection in Python

Published 2024-11-05 01:52:51 · 4 views · 0 comments

I am trying to connect to a server. Sometimes I cannot reach the server and would like to pause for a few seconds before trying again. How would I implement the pause feature in Python? Here is what I have so far. Thank you.

    while True:
        try:
            response = urllib.request.urlopen(http)
        except URLError as e:
            continue
        break

I am using Python 3.2.



2 answers

双手揣兜 2024-11-12 01:52:51

This will block the thread for 2 seconds before continuing:

import time
time.sleep(2)

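Putting `time.sleep` together with the asker's loop: a minimal sketch of a bounded retry helper (the helper name, the retry count of 5, and the 2-second delay are illustrative choices, not from the original posts):

```python
import time

def retry(func, retries=5, delay=2, exceptions=(Exception,)):
    """Call func(); on a listed exception, sleep `delay` seconds and
    try again, up to `retries` attempts, re-raising the last error."""
    for attempt in range(retries):
        try:
            return func()
        except exceptions:
            if attempt == retries - 1:
                raise  # out of attempts; propagate the last error
            time.sleep(delay)  # block this thread before the next try

# For the asker's case (names taken from the question):
# response = retry(lambda: urllib.request.urlopen(http),
#                  exceptions=(URLError,))
```

Unlike the bare `while True` loop in the question, this gives up after a fixed number of attempts instead of retrying forever.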
甜点 2024-11-12 01:52:51

In case you want to run lots of these in parallel, it would be much more scalable to use an asynchronous networking framework such as Twisted, where "sleeping" doesn't mean blocking a valuable and expensive OS thread from doing some other useful work. Here's a rough sketch of how you can attempt as many requests in parallel as you wish (set to 100), with a timeout (5 seconds), a delay (2 seconds), and a configurable number of retries (here, 10).

from twisted.internet import defer, reactor
from twisted.web import client

# A semaphore lets you run up to `tokens` deferred operations in parallel
semaphore = defer.DeferredSemaphore(tokens=100)

def job(url, tries=1):
    d = client.getPage(url, timeout=5)
    d.addCallback(doSomethingWithData)
    def retry(failure):
        if tries > 10:
            return failure  # give up
        # Try again in 2 seconds: fire a fresh Deferred after the delay,
        # then chain a new attempt onto it. (Returning an already-fired
        # deferred here would let callers see success before the retry ran.)
        retryDeferred = defer.Deferred()
        reactor.callLater(2, retryDeferred.callback, None)
        retryDeferred.addCallback(lambda ignored: job(url, tries=tries + 1))
        return retryDeferred
    d.addErrback(retry)
    return d

for url in manyURLs:
    semaphore.run(job, url)

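On modern Python (3.5+; the asker's 3.2 predates it), the same non-blocking idea is available in the standard library via asyncio: `await asyncio.sleep(...)` yields the event loop instead of blocking a thread, and a semaphore bounds concurrency much like `DeferredSemaphore` above. A rough sketch under those assumptions, with the `fetch` coroutine and `manyURLs` left abstract (they are placeholders, not real APIs):

```python
import asyncio

async def job(fetch, url, tries=10, delay=2, sem=None):
    """Retry `await fetch(url)` up to `tries` times, pausing `delay`
    seconds between attempts without blocking the event loop."""
    sem = sem or asyncio.Semaphore(100)  # at most 100 jobs in flight
    async with sem:
        for attempt in range(tries):
            try:
                return await fetch(url)
            except Exception:
                if attempt == tries - 1:
                    raise  # give up after the last attempt
                await asyncio.sleep(delay)  # non-blocking pause

# Usage sketch (fetch is any coroutine, e.g. one built on aiohttp):
# sem = asyncio.Semaphore(100)
# await asyncio.gather(*(job(fetch, u, sem=sem) for u in manyURLs))
```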