Python urllib2 URLError exception?


I installed Python 2.6.2 earlier on a Windows XP machine and ran the following code:

import urllib2
import urllib

page = urllib2.Request('http://www.python.org/fish.html')
urllib2.urlopen( page )

I get the following error.

Traceback (most recent call last):
  File "C:\Python26\test3.py", line 6, in <module>
    urllib2.urlopen( page )
  File "C:\Python26\lib\urllib2.py", line 124, in urlopen
    return _opener.open(url, data, timeout)
  File "C:\Python26\lib\urllib2.py", line 383, in open
    response = self._open(req, data)
  File "C:\Python26\lib\urllib2.py", line 401, in _open
    '_open', req)
  File "C:\Python26\lib\urllib2.py", line 361, in _call_chain
    result = func(*args)
  File "C:\Python26\lib\urllib2.py", line 1130, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "C:\Python26\lib\urllib2.py", line 1105, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno 11001] getaddrinfo failed>


記柔刀 2024-08-09 22:58:19

import urllib2
response = urllib2.urlopen('http://www.python.org/fish.html')
html = response.read()

You're doing it wrong.

腻橙味 2024-08-09 22:58:19

Have a look in the urllib2 source, at the line specified by the traceback:

File "C:\Python26\lib\urllib2.py", line 1105, in do_open
raise URLError(err)

There you'll see the following fragment:

    try:
        h.request(req.get_method(), req.get_selector(), req.data, headers)
        r = h.getresponse()
    except socket.error, err: # XXX what error?
        raise URLError(err)

So, it looks like the source is a socket error, not an HTTP protocol-related error. Possible reasons: you are not online, you are behind a restrictive firewall, your DNS is down, ...

All of this is aside from the fact that, as mcandre pointed out, your code is wrong.
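
To see which of the two failure modes you are hitting, a minimal sketch along these lines (assuming Python 2.6's urllib2; the fetch_page helper name is purely illustrative, not from the original answer) separates the socket-level URLError from an HTTP-level HTTPError and prints the underlying reason:

    import urllib2

    def fetch_page(url):
        # Hypothetical helper: tell an HTTP-level failure apart from a
        # socket-level one.
        try:
            return urllib2.urlopen(url).read()
        except urllib2.HTTPError, e:
            # The server was reached but replied with an error status.
            print('HTTP error: %d %s' % (e.code, e.msg))
        except urllib2.URLError, e:
            # The server was never reached; e.reason is usually the underlying
            # socket.error, e.g. (11001, 'getaddrinfo failed') when DNS fails.
            print('URL error: %s' % (e.reason,))

    fetch_page('http://www.python.org/fish.html')

Note that urllib2.HTTPError is a subclass of urllib2.URLError, so the HTTPError clause has to come first.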

埖埖迣鎅 2024-08-09 22:58:19

Name resolution error.

getaddrinfo is used to resolve the hostname (python.org) in your request. If it fails, it means that the name could not be resolved because (a direct resolver check is sketched after this list):

  1. It does not exist, or the records are outdated (unlikely; python.org is a well-established domain name)
  2. Your DNS server is down (unlikely; if you can browse other sites, you should be able to fetch that page through Python)
  3. A firewall is blocking Python or your script from accessing the Internet (most likely; Windows Firewall sometimes does not ask you if you want to allow an application)
  4. You live on an ancient voodoo cemetery. (unlikely; if that is the case, you should move out)
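
A quick way to rule cases 1-3 in or out, independent of urllib2, is to call the resolver directly; a small sketch assuming only the standard socket module:

    import socket

    # If this raises socket.gaierror (errno 11001 on Windows), the problem is
    # name resolution itself, not the urllib2 code above.
    try:
        print(socket.getaddrinfo('www.python.org', 80))
    except socket.gaierror, e:
        print('Name resolution failed: %s' % (e,))
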
唠甜嗑 2024-08-09 22:58:19

Windows Vista, Python 2.6.2

It's a 404 page, right?

>>> import urllib2
>>> import urllib
>>>
>>> page = urllib2.Request('http://www.python.org/fish.html')
>>> urllib2.urlopen( page )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python26\lib\urllib2.py", line 124, in urlopen
    return _opener.open(url, data, timeout)
  File "C:\Python26\lib\urllib2.py", line 389, in open
    response = meth(req, response)
  File "C:\Python26\lib\urllib2.py", line 502, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Python26\lib\urllib2.py", line 427, in error
    return self._call_chain(*args)
  File "C:\Python26\lib\urllib2.py", line 361, in _call_chain
    result = func(*args)
  File "C:\Python26\lib\urllib2.py", line 510, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found
>>>
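
If the missing page is something you expect, a short sketch of handling that 404 explicitly (still plain urllib2, along the lines of the session above) could look like this:

    import urllib2

    try:
        urllib2.urlopen('http://www.python.org/fish.html')
    except urllib2.HTTPError, e:
        # HTTPError is itself a file-like response, so the status code and the
        # server's error page are still available.
        print(e.code)          # 404 in the session above
        print(e.read()[:200])  # beginning of the error page body
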
何以畏孤独 2024-08-09 22:58:19

DJ

First, I see no reason to import urllib; I've only ever seen urllib2 used to replace urllib entirely, and I know of no functionality that's useful in urllib yet missing from urllib2.

Next, I notice that http://www.python.org/fish.html gives a 404 error to me. (That doesn't explain the backtrace/exception you're seeing; I get urllib2.HTTPError: HTTP Error 404: Not Found.)

Normally if you just want to do a default fetch of a web page (without adding special HTTP headers, doing any sort of POST, etc.) then the following suffices:

req = urllib2.urlopen('http://www.python.org/')
html = req.read()
# and req.close() if you want to be pedantic
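
If you do want the pedantic close, one way (a sketch assuming Python 2.6, where the object returned by urlopen is not usable directly in a with statement) is contextlib.closing:

    import urllib2
    from contextlib import closing

    # closing() guarantees req.close() even if read() raises.
    with closing(urllib2.urlopen('http://www.python.org/')) as req:
        html = req.read()
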