Wikipedia API can't find a specific page (URL with an apostrophe)

Asked on 2025-01-13


I'm trying to retrieve pageview counts for a page, and the request fails for that one page while it succeeds for others. I get this error:

File "<unknown>", line 1
    article =='L'amica_geniale_ (serie_di_romanzi )'
                 ^
SyntaxError: invalid syntax

But there are no whitespace characters in the text. The page is: https://it.wikipedia.org/wiki/L%27amica_geniale_(serie_di_romanzi)
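
For reference, the %27 in that URL is just the percent-encoded apostrophe. A quick round-trip check with Python's standard urllib.parse:

from urllib.parse import quote, unquote

# %27 decodes to an apostrophe...
print(unquote("L%27amica_geniale_(serie_di_romanzi)"))
# -> L'amica_geniale_(serie_di_romanzi)

# ...and quoting the plain title encodes it (and the parentheses) back:
print(quote("L'amica_geniale_(serie_di_romanzi)"))
# -> L%27amica_geniale_%28serie_di_romanzi%29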

The code is:

start_date = "2005/01/01"
headers = {
    'User-Agent': 'Mozilla/5.0'
}


def wikimedia_request(page_name, start_date, end_date = None):

    sdate = start_date.split("/")
    sdate = ''.join(sdate)
    

    r = requests.get(
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/en.wikipedia.org/all-access/all-agents/{}/daily/{}/{}".format(page_name,sdate, edate),
        headers=headers
    )
    r.raise_for_status()  # raises exception when not a 2xx response
    result = r.json()
    df = pd.DataFrame(result['items'])
    df['timestamp'] = [i[:-2] for i in df.timestamp]
    df['timestamp'] = pd.to_datetime(df['timestamp'])
    df.set_index('timestamp', inplace = True)


    return df[['article', 'views']]


df = wikimedia_request(name="Random", start_date)

names = ["L'amica geniale"]

dfs = pd.concat([wikimedia_request(x, start_date) for x in names])

And the code works for every page except this one. I'm thinking it might be something to do with the apostrophe.
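
As an aside, the SyntaxError shown above does not come from requests itself: a traceback pointing at File "<unknown>", line 1 is typical of an expression string being parsed at runtime, for example by pandas' DataFrame.query, where the apostrophe in the title closes the single-quoted literal early. A minimal, hypothetical reproduction (the question doesn't show the query call, so this is an assumption):

import pandas as pd

df = pd.DataFrame({"article": ["L'amica_geniale_(serie_di_romanzi)"], "views": [499]})

# The embedded apostrophe terminates the single-quoted literal early:
# df.query("article == 'L'amica_geniale_(serie_di_romanzi)'")  # SyntaxError

# Using double quotes inside the expression avoids the clash:
print(df.query('article == "L\'amica_geniale_(serie_di_romanzi)"'))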


1 Answer

Answered by 把人绕傻吧 on 2025-01-20:


Pay attention to which URL you are using: there's a difference between 'it.wikipedia.org' and 'en.wikipedia.org'.

The page works just fine when using the correct URL. You could do something like this to account for it:

import datetime

import pandas as pd
import requests

start_date = "2005/01/01"
headers = {
    'User-Agent': 'Mozilla/5.0'
}


def wikimedia_request(page_name, start_date, end_date=None):
    # "2005/01/01" -> "20050101", the format the API expects
    sdate = start_date.split("/")
    sdate = ''.join(sdate)

    # default the end date to today (a supplied end_date is assumed
    # to be a datetime as well)
    if end_date is None:
        end_date = datetime.datetime.now()
    edate = end_date.strftime("%Y%m%d")

    url_template = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/{}.wikipedia.org/all-access/all-agents/{}/daily/{}/{}"
    try:
        # try the English wiki first...
        lang = 'en'
        r = requests.get(url_template.format(lang, page_name, sdate, edate), headers=headers)
        r.raise_for_status()  # raises an exception on non-2xx responses
    except requests.exceptions.HTTPError:
        # ...and fall back to the Italian wiki if the article isn't there
        lang = 'it'
        r = requests.get(url_template.format(lang, page_name, sdate, edate), headers=headers)
        r.raise_for_status()
    result = r.json()
    df = pd.DataFrame(result['items'])
    # timestamps come back as "YYYYMMDD00"; drop the trailing hour digits
    df['timestamp'] = [i[:-2] for i in df.timestamp]
    df['timestamp'] = pd.to_datetime(df['timestamp'])
    df.set_index('timestamp', inplace=True)

    return df[['article', 'views']]


names = ["L'amica geniale_(serie_di_romanzi)", "L'amica geniale"]

dfs = pd.concat([wikimedia_request(x, start_date) for x in names])

Output:

print(dfs)
                                       article  views
timestamp                                            
2018-11-21  L'amica_geniale_(serie_di_romanzi)    499
2018-11-22  L'amica_geniale_(serie_di_romanzi)    909
2018-11-23  L'amica_geniale_(serie_di_romanzi)    739
2018-11-24  L'amica_geniale_(serie_di_romanzi)    696
2018-11-25  L'amica_geniale_(serie_di_romanzi)   1449
                                       ...    ...
2022-03-06                     L'amica_geniale     30
2022-03-07                     L'amica_geniale     24
2022-03-08                     L'amica_geniale     15
2022-03-09                     L'amica_geniale     28
2022-03-10                     L'amica_geniale     18

[3499 rows x 2 columns]
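
A follow-up note on the design: probing en first and falling back to it costs an extra HTTP request per lookup, and the apostrophe in the title only works because requests percent-encodes the URL for you. A minimal sketch of an alternative, assuming you know each article's wiki up front (pageviews is a hypothetical name for this variant, not part of the original code):

import datetime
from urllib.parse import quote

import pandas as pd
import requests

headers = {'User-Agent': 'Mozilla/5.0'}

API = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
       "{lang}.wikipedia.org/all-access/all-agents/{page}/daily/{start}/{end}")


def pageviews(page_name, start_date, lang='it', end_date=None):
    sdate = start_date.replace("/", "")
    if end_date is None:
        end_date = datetime.datetime.now()
    edate = end_date.strftime("%Y%m%d")
    # quote() percent-encodes the apostrophe (and every other reserved
    # character, including '/') so the title is safe in the URL path
    url = API.format(lang=lang, page=quote(page_name, safe=''), start=sdate, end=edate)
    r = requests.get(url, headers=headers)
    r.raise_for_status()
    df = pd.DataFrame(r.json()['items'])
    df['timestamp'] = pd.to_datetime(df['timestamp'].str[:-2], format="%Y%m%d")
    return df.set_index('timestamp')[['article', 'views']]


df = pageviews("L'amica_geniale_(serie_di_romanzi)", "2005/01/01", lang='it')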