8.2 Page Analysis
First, we analyze the page for a single book. Besides the Chrome developer tools used earlier, another common tool for page analysis is the scrapy shell <URL> command, which lets the user drive a Scrapy crawler from an interactive command line. We typically use it for preliminary scraping experiments, which improves development efficiency.
Next, let's analyze the page for the first book. Run the scrapy shell command with the page's url as the argument:
$ scrapy shell http://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html
2017-03-03 09:17:01 [scrapy] INFO: Scrapy 1.3.3 started (bot: scrapybot)
2017-03-03 09:17:01 [scrapy] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter'}
2017-03-03 09:17:01 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole']
2017-03-03 09:17:01 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-03-03 09:17:01 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-03-03 09:17:01 [scrapy] INFO: Enabled item pipelines:
[]
2017-03-03 09:17:01 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2017-03-03 09:17:01 [scrapy] INFO: Spider opened
2017-03-03 09:17:01 [scrapy] DEBUG: Crawled (200) <GET http://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html> (referer: None)
2017-03-03 09:17:02 [traitlets] DEBUG: Using default logger
2017-03-03 09:17:02 [traitlets] DEBUG: Using default logger
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler
[s]   item       {}
[s]   request    <GET http://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html>
[s]   response   <200 http://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html>
[s]   settings
[s]   spider
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser
>>>
After running this command, scrapy shell constructs a Request object from the url argument and submits it to the Scrapy engine. Once the page has been downloaded, the program enters a Python shell in which a number of variables (objects and functions) have already been created. The following are the most commonly used:
request
The Request object for the most recent download.
response
The Response object for the most recent download.
fetch(req_or_url)
Downloads a page; it accepts either a Request object or a url string, and updates the request and response variables after the call.
view(response)
Displays the page contained in response in a browser.
Next, call the view function in scrapy shell to display the page contained in response in a browser:
>>> view(response)
In many cases, the page opened with the view function and the page opened by entering the url directly in a browser look identical. Be aware, however, that the former is the page as downloaded by the Scrapy crawler, while the latter is the page as downloaded by the browser; sometimes they differ. When analyzing a page, using the view function is therefore more reliable. Now let's analyze the page with Chrome's inspect-element tool, as shown in Figure 8-3.
Figure 8-3
As Figure 8-3 shows, the book name, price, and review rating can be extracted from <div class="col-sm-6 product_main">. Let's try extracting this information in scrapy shell, as shown in Figure 8-4.
>>> sel = response.css('div.product_main')
>>> sel.xpath('./h1/text()').extract_first()
'A Light in the Attic'
>>> sel.css('p.price_color::text').extract_first()
'£51.77'
>>> sel.css('p.star-rating::attr(class)').re_first('star-rating ([A-Za-z]+)')
'Three'
Figure 8-4
In addition, the product code (UPC), stock level, and number of reviews can be extracted from the <table class="table table-striped"> near the bottom of the page. Let's try extracting this information in scrapy shell:
>>> sel = response.css('table.table.table-striped')
>>> sel.xpath('(.//tr)[1]/td/text()').extract_first()
'a897fe39b1053632'
>>> sel.xpath('(.//tr)[last()-1]/td/text()').re_first('\((\d+) available\)')
'22'
>>> sel.xpath('(.//tr)[last()]/td/text()').extract_first()
'0'
Having analyzed the book page, we next analyze how to extract the link to each book page from a book list page. In scrapy shell, first call the fetch function to download the first book list page (http://books.toscrape.com/), and once the download completes, call the view function to examine the page in a browser, as shown in Figure 8-5.
>>> fetch('http://books.toscrape.com/')
[scrapy] DEBUG: Crawled (200) <GET http://books.toscrape.com/> (referer: None)
>>> view(response)
Figure 8-5
The link to each book page can be found inside each <article class="product_pod">. Use a LinkExtractor in scrapy shell to extract these links:
>>> from scrapy.linkextractors import LinkExtractor
>>> le = LinkExtractor(restrict_css='article.product_pod')
>>> le.extract_links(response)
[Link(url='http://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/soumission_998/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/sharp-objects_997/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/the-requiem-red_995/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/the-dirty-little-secrets-of-getting-your-dream-job_994/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/the-coming-woman-a-novel-based-on-the-life-of-the-infamous-feminist-victoria-woodhull_993/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/the-boys-in-the-boat-nine-americans-and-their-epic-quest-for-gold-at-the-1936-berlin-olympics_992/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/the-black-maria_991/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/starving-hearts-triangular-trade-trilogy-1_990/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/shakespeares-sonnets_989/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/set-me-free_988/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/scott-pilgrims-precious-little-life-scott-pilgrim-1_987/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/rip-it-up-and-start-again_986/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/our-band-could-be-your-life-scenes-from-the-american-indie-underground-1981-1991_985/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/olio_984/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/mesaerion-the-best-science-fiction-stories-1800-1849_983/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/libertarianism-for-beginners_982/index.html', text='', fragment='', nofollow=False),
 Link(url='http://books.toscrape.com/catalogue/its-only-the-himalayas_981/index.html', text='', fragment='', nofollow=False)]
At this point, the page analysis work is complete.