Starting the Twisted reactor more than once in the same program?

Posted 2024-11-03 16:46:21

Is it possible to start the reactor more than once in the same program? Suppose you wanted to encapsulate Twisted functionality inside a method, for API purposes.

For example, mymodule.py looks like this:

from twisted.web.client import getPage
from twisted.internet import reactor

def _result(r):
    print r
    reactor.stop()

def _error(e):
    print e
    reactor.stop()

# Each helper runs the reactor for the duration of a single request.
def getGoogle():
    d = getPage('http://www.google.com')
    d.addCallbacks(_result, _error)
    reactor.run()

def getYahoo():
    d = getPage('http://www.yahoo.com')
    d.addCallbacks(_result, _error)
    reactor.run()

main.py looks like this:

import mymodule

mymodule.getGoogle()
mymodule.getYahoo()

Comments (2)

看透却不说透 2024-11-10 16:46:21

Here's another way to organize your code, exploiting the single-threaded nature of Twisted: queue up all the URLs you want to process, kick off the reactor, and decrement a counter when each request completes. When the counter reaches zero, stop the reactor, which will return the results:

from twisted.web.client import getPage
from twisted.internet import reactor

class Getter(object):

    def __init__(self):
        self._sequence = 0      # number of outstanding requests
        self._results = []
        self._errors = []

    def add(self, url):
        # Queue a request; the callbacks only fire once the reactor is running.
        d = getPage(url)
        d.addCallbacks(self._on_success, self._on_error)
        d.addCallback(self._on_finish)
        self._sequence += 1

    def _on_finish(self, *narg):
        # Stop the reactor when the last outstanding request completes.
        self._sequence -= 1
        if not self._sequence:
            reactor.stop()

    _on_success = lambda self, *res: self._results.append(res)
    _on_error = lambda self, *err: self._errors.append(err)

    def run(self):
        # Blocks until every queued request has finished.
        reactor.run()
        return self._results, self._errors

g = Getter()
for url in ('http://www.google.com', 'http://www.yahoo.com', 'idontexist'):
    g.add(url)
results, errors = g.run()
print results
print errors

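The reason this answer collects every URL before a single reactor.run() call is that Twisted's reactor cannot be started again once it has been stopped. A minimal sketch (not part of the original answer, and assuming a Twisted version that raises ReactorNotRestartable on a second run) that demonstrates the restriction:

from twisted.internet import reactor, error

# First run: schedule an immediate stop so run() returns right away.
reactor.callWhenRunning(reactor.stop)
reactor.run()

# Second run: a stopped reactor refuses to start again.
try:
    reactor.run()
except error.ReactorNotRestartable:
    print 'the reactor cannot be restarted'
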
葵雨 2024-11-10 16:46:21

A more straightforward solution, which doesn't require you to manage a counter:

from twisted.internet import reactor, defer
from twisted.web.client import getPage

def printPage(page):
    print page

def printError(err):
    print err

urls = ['http://www.google.com',
        'http://www.example.com']

# Kick off all requests; each Deferred handles its own success or failure.
jobs = []
for url in urls:
    jobs.append(getPage(url).addCallbacks(printPage,
                                          printError))

# printPage/printError consume each result, so gatherResults fires once
# every request has finished, and then the reactor is stopped.
def done(ignored):
    reactor.stop()
defer.gatherResults(jobs).addCallback(done)

reactor.run()

You should take a look at what the Deferred API provides, because it will save you a lot of time and make your code easier to debug.
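As a rough illustration of that API (my own sketch, not part of the original answer), a Deferred lets you chain callbacks and errbacks so that each stage transforms the previous result and any failure skips ahead to the next errback:

from twisted.internet import defer

def parse(page):
    # Runs only if the previous stage succeeded.
    return len(page)

def report(size):
    print 'page size:', size

def failed(failure):
    # Runs if any earlier stage failed.
    print 'request failed:', failure.getErrorMessage()

d = defer.Deferred()
d.addCallback(parse)
d.addCallback(report)
d.addErrback(failed)

# Firing the Deferred by hand, in place of a real getPage() result.
d.callback('<html>...</html>')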
