How can I fetch a non-ASCII URL with urlopen?

Posted 2024-10-06 08:22:53

I need to fetch data from a URL with non-ascii characters but urllib2.urlopen refuses to open the resource and raises:

UnicodeEncodeError: 'ascii' codec can't encode character u'\u0131' in position 26: ordinal not in range(128)

I know the URL is not standards compliant but I have no chance to change it.

What is the way to access a resource pointed by a URL containing non-ascii characters using Python?

edit: In other words, can / how urlopen open a URL like:

http://example.org/Ñöñ-ÅŞÇİİ/

Comments (10)

智商已欠费 2024-10-13 08:22:53

Strictly speaking, URIs can't contain non-ASCII characters; what you have there is an IRI.

To convert an IRI to a plain ASCII URI:

  • non-ASCII characters in the hostname part of the address have to be encoded using the Punycode-based IDNA algorithm;

  • non-ASCII characters in the path, and most of the other parts of the address have to be encoded using UTF-8 and %-encoding, as per Ignacio's answer.

So:

import re, urlparse  # Python 2

def urlEncodeNonAscii(b):
    # Percent-escape every non-ASCII byte in a UTF-8 byte string.
    return re.sub('[\x80-\xFF]', lambda c: '%%%02x' % ord(c.group(0)), b)

def iriToUri(iri):
    parts = urlparse.urlparse(iri)
    return urlparse.urlunparse(
        # Index 1 is the netloc: IDNA-encode it; UTF-8 %-encode everything else.
        part.encode('idna') if parti == 1 else urlEncodeNonAscii(part.encode('utf-8'))
        for parti, part in enumerate(parts)
    )

>>> iriToUri(u'http://www.a\u0131b.com/a\u0131b')
'http://www.xn--ab-hpa.com/a%c4%b1b'

(Technically this still isn't quite good enough in the general case because urlparse doesn't split away any user:pass@ prefix or :port suffix on the hostname. Only the hostname part should be IDNA encoded. It's easier to encode using normal urllib.quote and .encode('idna') at the time you're constructing a URL than to have to pull an IRI apart.)
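In Python 3 terms, that hostname-only splitting can be sketched as follows. The function name iri_to_uri and its handling of the user:pass@ prefix and :port suffix are my additions, not part of the answer above:

```python
from urllib.parse import quote, urlsplit, urlunsplit

def iri_to_uri(iri):
    parts = urlsplit(iri)
    # IDNA-encode only the hostname, then re-attach userinfo and port,
    # which the idna codec would otherwise choke on.
    host = parts.hostname.encode('idna').decode('ascii') if parts.hostname else ''
    netloc = host
    if parts.username is not None:
        userinfo = quote(parts.username)
        if parts.password is not None:
            userinfo += ':' + quote(parts.password)
        netloc = userinfo + '@' + netloc
    if parts.port is not None:
        netloc += ':' + str(parts.port)
    # UTF-8 percent-encode the remaining components.
    return urlunsplit((
        parts.scheme,
        netloc,
        quote(parts.path),
        quote(parts.query, safe='=&'),
        quote(parts.fragment),
    ))

print(iri_to_uri('http://www.a\u0131b.com/a\u0131b'))
# http://www.xn--ab-hpa.com/a%C4%B1b
```

Unlike IDNA-encoding the whole netloc, this keeps credentials and port intact, e.g. `iri_to_uri('http://user@www.a\u0131b.com:8080/x')` still carries `user@` and `:8080` through.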

孤者何惧 2024-10-13 08:22:53

In Python 3, use the urllib.parse.quote function on the non-ASCII string:

>>> from urllib.request import urlopen
>>> from urllib.parse import quote
>>> chinese_wikipedia = 'http://zh.wikipedia.org/wiki/Wikipedia:' + quote('首页')
>>> urlopen(chinese_wikipedia)
碍人泪离人颜 2024-10-13 08:22:53

Python 3 has libraries to handle this situation. Use
urllib.parse.urlsplit to split the URL into its components,
urllib.parse.quote to properly quote/escape the Unicode characters,
and urllib.parse.urlunsplit to join it back together.

>>> import urllib.parse
>>> url = 'http://example.com/unicodè'
>>> url = urllib.parse.urlsplit(url)
>>> url = list(url)
>>> url[2] = urllib.parse.quote(url[2])
>>> url = urllib.parse.urlunsplit(url)
>>> print(url)
http://example.com/unicod%C3%A8
梅倚清风 2024-10-13 08:22:53

Based on @darkfeline's answer:

from urllib.parse import urlsplit, urlunsplit, quote

def iri2uri(iri):
    """
    Convert an IRI to a URI (Python 3).
    """
    uri = ''
    if isinstance(iri, str):
        (scheme, netloc, path, query, fragment) = urlsplit(iri)
        scheme = quote(scheme)
        netloc = netloc.encode('idna').decode('utf-8')
        path = quote(path)
        query = quote(query)
        fragment = quote(fragment)
        uri = urlunsplit((scheme, netloc, path, query, fragment))

    return uri
诗化ㄋ丶相逢 2024-10-13 08:22:53

It is more complex than the accepted answer from @bobince suggests:

  • netloc should be encoded using IDNA;

  • a non-ASCII URL path should be encoded to UTF-8 and then percent-escaped;

  • non-ASCII query parameters should be encoded to the encoding of the page the URL was extracted from (or to the encoding the server uses), then percent-escaped.

This is how all browsers work; it is specified in https://url.spec.whatwg.org/ - see this example. A Python implementation can be found in w3lib (this is the library Scrapy is using); see w3lib.url.safe_url_string:

from w3lib.url import safe_url_string
url = safe_url_string(u'http://example.org/Ñöñ-ÅŞÇİİ/', encoding="<page encoding>")

An easy way to check whether a URL-escaping implementation is incorrect or incomplete is to check whether it accepts a 'page encoding' argument.
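The page-encoding point can be illustrated with the standard library alone: quote accepts pre-encoded bytes, so the same character percent-escapes differently depending on the source page's encoding. This is a sketch of the idea, not w3lib's actual implementation:

```python
from urllib.parse import quote

query_value = 'é'
# A URL copied from a UTF-8 page: é is two bytes, C3 A9.
print(quote(query_value.encode('utf-8')))    # %C3%A9
# The same URL copied from a Latin-1 page: é is the single byte E9.
print(quote(query_value.encode('latin-1')))  # %E9
```

A server expecting Latin-1 query parameters will not decode `%C3%A9` as `é`, which is why a correct implementation needs to know the page encoding.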

乙白 2024-10-13 08:22:53

For those not depending strictly on urllib, one practical alternative is requests, which handles IRIs "out of the box".

For example, with http://bücher.ch:

>>> import requests
>>> r = requests.get(u'http://b\u00DCcher.ch')
>>> r.status_code
200
我的鱼塘能养鲲 2024-10-13 08:22:53

Encode the Unicode text to UTF-8, then URL-encode it.
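A minimal sketch of those two steps with the standard library (the example string is mine):

```python
from urllib.parse import quote

path = '首页'
# Step 1: encode the Unicode text to UTF-8 bytes.
utf8_bytes = path.encode('utf-8')
# Step 2: percent-escape each byte.
escaped = ''.join('%{:02X}'.format(b) for b in utf8_bytes)
print(escaped)  # %E9%A6%96%E9%A1%B5

# urllib.parse.quote performs both steps in one call.
print(quote(path))  # %E9%A6%96%E9%A1%B5
```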

遇到 2024-10-13 08:22:53

Use the iri2uri method of httplib2. It does the same thing as @bobince's answer (are they its author?).

游魂 2024-10-13 08:22:53

Another option to convert an IRI to an ASCII URI is to use furl package:

gruns/furl: URL parsing and manipulation made easy. - https://github.com/gruns/furl

Python's standard urllib and urlparse modules provide a number of URL
related functions, but using these functions to perform common URL
operations proves tedious. Furl makes parsing and manipulating URLs
easy.

Examples

Non-ASCII domain

http://国立極地研究所.jp/english/ (Japanese National Institute of Polar Research website)

import furl

url = 'http://国立極地研究所.jp/english/'
furl.furl(url).tostr()
'http://xn--vcsoey76a2hh0vtuid5qa.jp/english/'

Non-ASCII path

https://ja.wikipedia.org/wiki/日本語 ("Japanese" article in Wikipedia)

import furl

url = 'https://ja.wikipedia.org/wiki/日本語'
furl.furl(url).tostr()
'https://ja.wikipedia.org/wiki/%E6%97%A5%E6%9C%AC%E8%AA%9E'
千里故人稀 2024-10-13 08:22:53

It works, finally!

I could not get around these strange characters, but in the end I got through it.

import urllib.request
import os


url = "http://www.fourtourismblog.it/le-nuove-tendenze-del-marketing-tenere-docchio/"
# Fetch the page bytes (this URL is pure ASCII, so urlopen accepts it as-is).
with urllib.request.urlopen(url) as file:
    html = file.read()
# Decode the bytes and save them as UTF-8 text.
with open("marketingturismo.html", "w", encoding='utf-8') as file:
    file.write(html.decode('utf-8'))
# Open the saved file with the system's default handler (works on Windows).
os.system("marketingturismo.html")