How to fetch a non-ASCII URL with urlopen?
I need to fetch data from a URL with non-ASCII characters, but urllib2.urlopen refuses to open the resource and raises:
UnicodeEncodeError: 'ascii' codec can't encode character u'\u0131' in position 26: ordinal not in range(128)
I know the URL is not standards compliant, but I have no chance to change it.
What is the way to access a resource pointed to by a URL containing non-ASCII characters using Python?
Edit: In other words, how can urlopen open a URL like:
http://example.org/Ñöñ-ÅŞÇİİ/
Strictly speaking URIs can't contain non-ASCII characters; what you have there is an IRI.
To convert an IRI to a plain ASCII URI:
non-ASCII characters in the hostname part of the address have to be encoded using the Punycode-based IDNA algorithm;
non-ASCII characters in the path, and most of the other parts of the address have to be encoded using UTF-8 and %-encoding, as per Ignacio's answer.
So:
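A minimal Python 2 sketch of this recipe (the helper names are illustrative, not from a library): IDNA-encode the netloc, UTF-8-and-percent-encode every other component.

```python
import re
import urlparse

def url_encode_non_ascii(b):
    # Percent-encode every non-ASCII byte of a UTF-8 byte string.
    return re.sub('[\x80-\xFF]', lambda m: '%%%02X' % ord(m.group(0)), b)

def iri_to_uri(iri):
    # Index 1 of the 6-tuple is the netloc: IDNA-encode it;
    # everything else is UTF-8 encoded and then percent-escaped.
    parts = urlparse.urlparse(iri)
    return urlparse.urlunparse(
        part.encode('idna') if index == 1 else url_encode_non_ascii(part.encode('utf-8'))
        for index, part in enumerate(parts)
    )

print iri_to_uri(u'http://example.org/\xd1\xf6\xf1-\xc5\u015e\xc7\u0130\u0130/')
# http://example.org/%C3%91%C3%B6%C3%B1-%C3%85%C5%9E%C3%87%C4%B0%C4%B0/
```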
(Technically this still isn't quite good enough in the general case, because urlparse doesn't split away any user:pass@ prefix or :port suffix on the hostname. Only the hostname part should be IDNA encoded. It's easier to encode using normal urllib.quote and .encode('idna') at the time you're constructing a URL than to have to pull an IRI apart.)
In Python 3, use the urllib.parse.quote function on the non-ASCII string:
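A minimal Python 3 sketch using the IRI from the question; safe=':/' keeps the scheme separator and path slashes unescaped (note it would also escape ? and & in a query string, so quote components separately for URLs that have one):

```python
from urllib.parse import quote
from urllib.request import urlopen

# quote() percent-escapes non-ASCII characters as UTF-8 by default.
url = quote('http://example.org/Ñöñ-ÅŞÇİİ/', safe=':/')
print(url)  # http://example.org/%C3%91%C3%B6%C3%B1-%C3%85%C5%9E%C3%87%C4%B0%C4%B0/

response = urlopen(url)  # urlopen now accepts the all-ASCII form
```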
Python 3 has libraries to handle this situation. Use urllib.parse.urlsplit to split the URL into its components, urllib.parse.quote to properly quote/escape the Unicode characters, and urllib.parse.urlunsplit to join it back together.
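A sketch of that split/quote/join sequence, escaping only the path (this assumes the hostname itself is ASCII):

```python
import urllib.parse

iri = 'http://example.org/Ñöñ-ÅŞÇİİ/'

parts = urllib.parse.urlsplit(iri)
# SplitResult is a namedtuple, so _replace() swaps in the quoted path.
parts = parts._replace(path=urllib.parse.quote(parts.path))
url = urllib.parse.urlunsplit(parts)
print(url)  # http://example.org/%C3%91%C3%B6%C3%B1-%C3%85%C5%9E%C3%87%C4%B0%C4%B0/
```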
Based on @darkfeline's answer:
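One way to turn that into a reusable helper (a sketch; the name iri2uri and the per-component choices are assumptions): IDNA for the hostname, percent-escapes for the rest.

```python
from urllib.parse import urlsplit, urlunsplit, quote

def iri2uri(iri):
    # Hostname -> Punycode via the 'idna' codec; other parts -> percent-escapes.
    # (As noted above, a user:pass@ prefix or :port suffix in the netloc
    # would need to be split off before the IDNA step.)
    scheme, netloc, path, query, fragment = urlsplit(iri)
    netloc = netloc.encode('idna').decode('ascii')
    path = quote(path)
    query = quote(query, safe='=&')  # preserve key=value&key=value structure
    fragment = quote(fragment)
    return urlunsplit((scheme, netloc, path, query, fragment))

print(iri2uri('http://example.org/Ñöñ-ÅŞÇİİ/'))
# http://example.org/%C3%91%C3%B6%C3%B1-%C3%85%C5%9E%C3%87%C4%B0%C4%B0/
```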
It is more complex than the accepted answer by @bobince suggests: the netloc must be IDNA-encoded, a non-ASCII path must be encoded to UTF-8 and then percent-escaped, and non-ASCII query parameters must be encoded to the encoding of the page the URL was extracted from (or to the encoding the server uses), then percent-escaped.
This is how all browsers work; it is specified in https://url.spec.whatwg.org/ - see this example. A Python implementation can be found in w3lib (this is the library Scrapy is using); see w3lib.url.safe_url_string:
An easy way to check whether a URL-escaping implementation is incorrect/incomplete is to check whether it provides a 'page encoding' argument.
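A sketch of w3lib usage; safe_url_string takes exactly such a page-encoding argument:

```python
# pip install w3lib
from w3lib.url import safe_url_string

# The hostname is IDNA-encoded and the rest percent-escaped; the encoding
# argument is the page encoding applied when escaping the query string.
url = safe_url_string('http://example.org/Ñöñ-ÅŞÇİİ/', encoding='utf-8')
print(url)  # http://example.org/%C3%91%C3%B6%C3%B1-%C3%85%C5%9E%C3%87%C4%B0%C4%B0/
```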
For those not depending strictly on urllib, one practical alternative is requests, which handles IRIs "out of the box". For example, with http://bücher.ch:
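A sketch of the requests version (network access required; the exact status depends on the server):

```python
# pip install requests
import requests

# requests converts the IRI internally: IDNA for the host, percent-escapes elsewhere.
response = requests.get('http://bücher.ch')
print(response.status_code)  # e.g. 200
print(response.url)          # the ASCII form that was actually requested
```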
Encode the unicode to UTF-8, then URL-encode.
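A Python 2 sketch of that two-step recipe; safe='/:' keeps the URL structure intact:

```python
import urllib
import urllib2

# Step 1: encode the unicode IRI to UTF-8 bytes.
# Step 2: percent-escape those bytes, leaving '/' and ':' unescaped.
iri = u'http://example.org/\xd1\xf6\xf1-\xc5\u015e\xc7\u0130\u0130/'
url = urllib.quote(iri.encode('utf-8'), safe='/:')
response = urllib2.urlopen(url)
```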
Use the iri2uri method of httplib2. It does the same thing as bobince's answer (is he/she the author of that?).
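A sketch with httplib2; iri2uri takes a unicode IRI and returns the ASCII URI:

```python
# pip install httplib2
import httplib2

# iri2uri IDNA-encodes the authority and percent-escapes the other components.
uri = httplib2.iri2uri(u'http://example.org/Ñöñ-ÅŞÇİİ/')
print(uri)
```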
Another option to convert an IRI to an ASCII URI is to use the furl package: gruns/furl: URL parsing and manipulation made easy. - https://github.com/gruns/furl
Examples
Non-ASCII domain
http://国立極地研究所.jp/english/ (Japanese National Institute of Polar Research website)
Non-ASCII path
https://ja.wikipedia.org/wiki/日本語 ("Japanese" article in Wikipedia)
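A sketch of furl on the two examples above; serializing through .url yields the IDNA/percent-escaped form (outputs shown are what IDNA and UTF-8 escaping produce):

```python
# pip install furl
from furl import furl

print(furl('http://国立極地研究所.jp/english/').url)
# http://xn--vcsoey76a2hh0vtuid5qa.jp/english/

print(furl('https://ja.wikipedia.org/wiki/日本語').url)
# https://ja.wikipedia.org/wiki/%E6%97%A5%E6%9C%AC%E8%AA%9E
```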
Works! Finally. I could not avoid these strange characters, but in the end I came through it.