Scraping multiple pages for parsing with Beautiful Soup

Posted on 2024-12-18 15:53:16


I'm trying to scrape multiple pages off of a single website for BeautifulSoup to parse. So far, I've tried using urllib2 to do this, but have been encountering some problems. What I've attempted is:

import urllib2,sys
from BeautifulSoup import BeautifulSoup

for numb in ('85753', '87433'):
    address = ('http://www.presidency.ucsb.edu/ws/index.php?pid=' + numb)
html = urllib2.urlopen(address).read()
soup = BeautifulSoup(html)

title = soup.find("span", {"class":"paperstitle"})
date = soup.find("span", {"class":"docdate"})
span = soup.find("span", {"class":"displaytext"})  # span.string gives you the first bit
paras = [x for x in span.findAllNext("p")]

first = title.string
second = date.string
start = span.string
middle = "\n\n".join(["".join(x.findAll(text=True)) for x in paras[:-1]])
last = paras[-1].contents[0]

print "%s\n\n%s\n\n%s\n\n%s\n\n%s" % (first, second, start, middle, last)

This only gives me results for the second number in the numb sequence, i.e. http://www.presidency.ucsb.edu/ws/index.php?pid=87433. I also made some attempts at using mechanize, but had no success. Ideally what I would like to be able to do is have a page, with a list of links, and then automatically select a link, pass the HTML off to BeautifulSoup, and then move to the next link in the list.
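The last step the question asks for (walk a page of links, then parse each target) can be sketched with the standard library's HTML parser alone; the index markup and the LinkCollector class below are invented for the demo, and in practice you would fetch each collected link and hand the HTML off to BeautifulSoup:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect hrefs that point at the document viewer (hypothetical filter)."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and 'index.php?pid=' in value:
                    self.links.append(value)

# Invented stand-in for the index page with the list of links.
index_html = """
<ul>
  <li><a href="http://www.presidency.ucsb.edu/ws/index.php?pid=85753">Doc 1</a></li>
  <li><a href="http://www.presidency.ucsb.edu/ws/index.php?pid=87433">Doc 2</a></li>
</ul>
"""

collector = LinkCollector()
collector.feed(index_html)
for link in collector.links:
    # Here you would urlopen(link) and pass the HTML to BeautifulSoup,
    # one page per loop iteration.
    pass
```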

Comments (3)

千寻… 2024-12-25 15:53:16

You need to put the rest of the code inside the loop. Right now you're iterating over both items in the tuple, but at the end of the iteration only the last item remains assigned to address, which subsequently gets parsed outside the loop.
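A minimal, self-contained sketch of that behaviour (Python 3 print syntax for the demo; no network access needed):

```python
# The loop body below only builds `address`; anything placed after the
# loop runs once, with `address` still holding the value from the final
# iteration -- exactly why only pid=87433 got scraped.
for numb in ('85753', '87433'):
    address = 'http://www.presidency.ucsb.edu/ws/index.php?pid=' + numb

# Outside the loop, only the last URL survives.
print(address)  # http://www.presidency.ucsb.edu/ws/index.php?pid=87433
```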

泪眸﹌ 2024-12-25 15:53:16


I think you just missed the indentation in the loop:

import urllib2,sys
from BeautifulSoup import BeautifulSoup

for numb in ('85753', '87433'):
    address = ('http://www.presidency.ucsb.edu/ws/index.php?pid=' + numb)
    html = urllib2.urlopen(address).read()
    soup = BeautifulSoup(html)

    title = soup.find("span", {"class":"paperstitle"})
    date = soup.find("span", {"class":"docdate"})
    span = soup.find("span", {"class":"displaytext"})  # span.string gives you the first bit
    paras = [x for x in span.findAllNext("p")]

    first = title.string
    second = date.string
    start = span.string
    middle = "\n\n".join(["".join(x.findAll(text=True)) for x in paras[:-1]])
    last = paras[-1].contents[0]

    print "%s\n\n%s\n\n%s\n\n%s\n\n%s" % (first, second, start, middle, last)

I think this should solve the problem.

一江春梦 2024-12-25 15:53:16

Here's a tidier solution (using lxml):

import lxml.html as lh

root_url = 'http://www.presidency.ucsb.edu/ws/index.php?pid='
page_ids = ['85753', '87433']

def scrape_page(page_id):
    url = root_url + page_id
    tree = lh.parse(url)

    title = tree.xpath("//span[@class='paperstitle']")[0].text
    date = tree.xpath("//span[@class='docdate']")[0].text
    text = tree.xpath("//span[@class='displaytext']")[0].text_content()

    return title, date, text

if __name__ == '__main__':
    for page_id in page_ids:
        title, date, text = scrape_page(page_id)
        print('%s\n\n%s\n\n%s' % (title, date, text))
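For a quick offline check of the same three lookups, the stdlib xml.etree.ElementTree accepts the same attribute-predicate paths, assuming well-formed markup; the page snippet below is invented for illustration (real pages need a forgiving parser like lxml.html):

```python
import xml.etree.ElementTree as ET

# Invented, well-formed stand-in for one of the scraped pages.
page = """<html><body>
  <span class="paperstitle">Remarks at a Ceremony</span>
  <span class="docdate">December 18, 2024</span>
  <span class="displaytext">Full text of the document.</span>
</body></html>"""

root = ET.fromstring(page)
# Same class-based lookups as the lxml version, via ElementTree's
# limited XPath support.
title = root.find(".//span[@class='paperstitle']").text
date = root.find(".//span[@class='docdate']").text
text = root.find(".//span[@class='displaytext']").text
```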