python mechanize follow_link fails

Posted 2024-11-04 09:29:36


I'm trying to access search results on the NCBI Images search page (http://www.ncbi.nlm.nih.gov/images) in a script. I want to feed it a search term, report on all of the results, and then move on to the next search term. To do this I need to get to results pages after the first page, so I'm trying to use python mechanize to do it:

import mechanize
browser=mechanize.Browser()
page1=browser.open('http://www.ncbi.nlm.nih.gov/images?term=drug')
a=browser.links(text_regex='Next')
nextlink=a.next()
page2=browser.follow_link(nextlink)

This just gives me back the first page of search results again (in variable page2). What am I doing wrong, and how can I get to that second page and beyond?


Comments (1)

做个少女永远怀春 2024-11-11 09:29:36


Unfortunately that page uses JavaScript to POST 2459 bytes of form variables to the server, just to navigate to a subsequent page. Here are a few of those variables (I count 38 in total):

EntrezSystem2.PEntrez.ImagesDb.Images_SearchBar.Term=drug
EntrezSystem2.PEntrez.ImagesDb.Images_SearchBar.CurrDb=images
EntrezSystem2.PEntrez.ImagesDb.Images_ResultsPanel.Entrez_Pager.CurrPage=2

You'll need to construct a POST request to the server containing some or all of these variables. Luckily if you get it working for page 2 you can simply increment CurrPage and send another POST to get each subsequent page of results (no need to extract links).
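The reparse-and-increment idea can be sketched in a few lines of modern Python using only the standard library. Note the `POSTDATA` below is a hypothetical two-field miniature of the real captured body (which has ~38 fields); `urlencode(..., doseq=True)` emits repeated keys for list values, which is what the duplicated `cPage` field needs:

```python
from urllib.parse import parse_qs, urlencode

# Hypothetical miniature of the captured POST body (the real one has ~38 vars).
POSTDATA = ('EntrezSystem2.PEntrez.ImagesDb.Images_SearchBar.Term=drug'
            '&EntrezSystem2.PEntrez.ImagesDb.Images_ResultsPanel.Entrez_Pager.CurrPage=1')

PAGER = 'EntrezSystem2.PEntrez.ImagesDb.Images_ResultsPanel.Entrez_Pager.'

def body_for_page(postdata, page):
    # Keep every captured field, overriding only the pager state.
    params = {k: v[0] for k, v in parse_qs(postdata).items()}
    params[PAGER + 'CurrPage'] = str(page)
    # cPage is sent twice in the captured request; doseq=True repeats list values.
    params[PAGER + 'cPage'] = [str(page - 1)] * 2
    return urlencode(params, doseq=True)

print(body_for_page(POSTDATA, 2))
```

Sending that body as the POST payload for each page is all the pagination amounts to.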

Update - That site is a total pain-in-the-ass, but here is a POST-based scrape of pages 2-N. Set MAX_PAGE to the highest page number + 1. The script will produce files like file_000003.html.

Note: Before you use it, you need to replace POSTDATA with the contents of this paste blob (it expires in 1 month). It's just the body of a POST request as captured by Firebug, which I used to seed the correct params:

import cookielib
import mechanize
import urllib
import urlparse

MAX_PAGE = 6      # highest page number + 1
TERM = 'drug'
DEBUG = False

base_url = 'http://www.ncbi.nlm.nih.gov/images?term=' + TERM
browser = mechanize.Browser()
browser.set_handle_robots(False)   # the site's robots.txt would otherwise block us
browser.set_handle_referer(True)
browser.set_debug_http(DEBUG)
browser.set_debug_responses(DEBUG)
cjar = cookielib.CookieJar()
browser.set_cookiejar(cjar)

# make the first GET request; this populates the session cookie
res = browser.open(base_url)

def write(num, data):
    with open('file_%06d.html' % num, 'wb') as out:
        out.write(data)

def encode(kvs):
    # urlencode the params by hand so that list values become repeated keys
    res = []
    for key, vals in kvs.iteritems():
        if isinstance(vals, list):
            for v in vals:
                res.append('%s=%s' % (key, urllib.quote(v)))
        else:
            res.append('%s=%s' % (key, urllib.quote(vals)))
    return '&'.join(res)

write(1, res.read())

# set this var equal to the contents of this: http://pastebin.com/UfejW3G0
POSTDATA = '''<post data>'''

# parse the captured request body into POST parameters
PREFIX1 = 'EntrezSystem2.PEntrez.ImagesDb.'
params = dict((k, v[0]) for k, v in urlparse.parse_qs(POSTDATA).iteritems())

base_url = 'http://www.ncbi.nlm.nih.gov/images'
for page in range(2, MAX_PAGE):
    # the pager fields are the only ones that change from page to page
    params[PREFIX1 + 'Images_ResultsPanel.Entrez_Pager.CurrPage'] = str(page)
    params[PREFIX1 + 'Images_ResultsPanel.Entrez_Pager.cPage'] = [str(page-1)]*2

    data = encode(params)
    req = mechanize.Request(base_url, data)
    cjar.add_cookie_header(req)
    req.add_header('Content-Type', 'application/x-www-form-urlencoded')
    req.add_header('Referer', base_url)
    res = browser.open(req)

    write(page, res.read())
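The script above is Python 2 (cookielib, urlparse, mechanize's old API). For readers on Python 3, here is a hedged sketch of the same approach using only the standard library; it assumes the captured POSTDATA and the NCBI form fields still behave as they did when the answer was written, which may no longer hold:

```python
import urllib.request
from http.cookiejar import CookieJar
from urllib.parse import parse_qs, urlencode

BASE_URL = 'http://www.ncbi.nlm.nih.gov/images'
PAGER = 'EntrezSystem2.PEntrez.ImagesDb.Images_ResultsPanel.Entrez_Pager.'

def page_body(postdata, page):
    """Rebuild the captured form body with the pager fields set for `page`."""
    params = {k: v[0] for k, v in parse_qs(postdata).items()}
    params[PAGER + 'CurrPage'] = str(page)
    params[PAGER + 'cPage'] = [str(page - 1)] * 2  # appears twice in the capture
    return urlencode(params, doseq=True).encode('ascii')

def scrape(postdata, term='drug', max_page=6):
    # one opener sharing a cookie jar plays the role of mechanize.Browser
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(CookieJar()))
    opener.open(BASE_URL + '?term=' + term)  # first GET seeds the cookies
    for page in range(2, max_page):
        req = urllib.request.Request(
            BASE_URL, data=page_body(postdata, page),
            headers={'Referer': BASE_URL})   # data= makes this a POST
        with opener.open(req) as resp:
            with open('file_%06d.html' % page, 'wb') as out:
                out.write(resp.read())
```

`scrape()` mirrors the Python 2 script: one GET to populate cookies, then one POST per results page, written out to file_000002.html and so on.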