Google App Engine Python send email timeout



My script grabs the content of an RSS page, gets the URLs in that page, and saves them to a list; then it grabs the content of each URL and emails the contents of the page to me. Everything is working very well except that I can't send every link in the list (typically about 22 links). I don't want to combine the contents of multiple links into one email. If I don't add a delay I get an over-quota error like this:

<class 'google.appengine.runtime.apiproxy_errors.OverQuotaError'>: The API call mail.Send() required more quota than is available. 

After I added "time.sleep(9)" to slow it down, it gives me this error:

<class 'google.appengine.runtime.DeadlineExceededError'>: 
Traceback (most recent call last):

Here is my code. Any thoughts?

import time

from BeautifulSoup import BeautifulSoup
from google.appengine.api import mail
from google.appengine.api import urlfetch

size = len(my_tabletest)
a = 2
while a < size:
  # Build the printable version of each article URL from the RSS <link> entry.
  url = my_tabletest[a].split('html</link>')[0] + "print"
  url_hhhhhh = urlfetch.fetch(url)
  my_story = url_hhhhhh.content
  my_story = my_story.split('<div class="printstory">')[1]
  my_story_subject = my_story.split('<h1>')[1]
  my_story_subject = my_story_subject.split('</h1>')[0]
  my_story_html = my_story  # assumed: keep the raw HTML for message.html
  my_story = ''.join(BeautifulSoup(my_story).findAll(text=True))
  message = mail.EmailMessage(sender="me<[email protected]>",
                              subject=my_story_subject)
  message.to = "Jim <[email protected]>"
  message.body = my_story
  message.html = my_story_html
  message.send()
  time.sleep(9)
  a = a + 1



Welcome to Stack Overflow!

The task queue is built to solve this problem. You can leverage it with minimal change to your existing code using the deferred library:

Instead of calling message.send(), do something like this:

from google.appengine.ext import deferred

def send_email(message):
  message.send()

# Enqueue a background task instead of sending inline.
deferred.defer(send_email, message)

This will create a batch of ad-hoc tasks that send your emails in the background, after your main request handler has returned. Some of these tasks will probably fail on the first try as your app hits short term quota limits for outbound mail. That's OK; failed tasks will back off and retry automatically until they succeed.
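
For concreteness, here is a minimal sketch of the question's loop with the send deferred (imports as in the question's snippet, plus the deferred import and send_email helper above; the parsing is unchanged and the addresses are taken from the question):

a = 2
while a < len(my_tabletest):
  url = my_tabletest[a].split('html</link>')[0] + "print"
  my_story = urlfetch.fetch(url).content.split('<div class="printstory">')[1]
  my_story_subject = my_story.split('<h1>')[1].split('</h1>')[0]
  message = mail.EmailMessage(sender="me<[email protected]>",
                              subject=my_story_subject)
  message.to = "Jim <[email protected]>"
  message.body = ''.join(BeautifulSoup(my_story).findAll(text=True))
  message.html = my_story
  deferred.defer(send_email, message)  # replaces message.send() and time.sleep(9)
  a = a + 1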

Edit: Oh, and take the sleep out of your code. =)

Edit #2: You can speed things up further by moving the urlfetch into the task, so each task fetches one URL and then sends one email. Fetching 22 URLs in one request handler could be enough to cause timeouts, independent of sending mail.
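
A rough, self-contained sketch of that variant (the fetch_and_send name is made up for illustration; the parsing and addresses are taken from the question): each task fetches one URL, parses it, and sends one email, so the request handler only enqueues tasks and returns quickly.

from BeautifulSoup import BeautifulSoup
from google.appengine.api import mail
from google.appengine.api import urlfetch
from google.appengine.ext import deferred

def fetch_and_send(url):
  # Runs as its own task, with its own deadline and automatic retries.
  my_story = urlfetch.fetch(url).content.split('<div class="printstory">')[1]
  my_story_subject = my_story.split('<h1>')[1].split('</h1>')[0]
  message = mail.EmailMessage(sender="me<[email protected]>",
                              subject=my_story_subject)
  message.to = "Jim <[email protected]>"
  message.body = ''.join(BeautifulSoup(my_story).findAll(text=True))
  message.html = my_story
  message.send()

# In the request handler: just enqueue one task per link.
for entry in my_tabletest[2:]:
  deferred.defer(fetch_and_send, entry.split('html</link>')[0] + "print")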
