lxml - trouble parsing a stackexchange rss feed
Hi,
I am having problems parsing an RSS feed from stackexchange in Python. When I try to get the summary nodes, an empty list is returned.
I have been trying to solve this, but can't get my head around it.
Can anyone help out?
Thanks,
a
In [30]: import lxml.etree, urllib2
In [31]: url_cooking = 'http://cooking.stackexchange.com/feeds'
In [32]: cooking_content = urllib2.urlopen(url_cooking)
In [33]: cooking_parsed = lxml.etree.parse(cooking_content)
In [34]: cooking_texts = cooking_parsed.xpath('.//feed/entry/summary')
In [35]: cooking_texts
Out[35]: []
3 Answers
Take a look at these two versions:
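A sketch of the two versions, reusing the feed URL from the question, might look like this:

import urllib2
import lxml.etree
import lxml.html

url_cooking = 'http://cooking.stackexchange.com/feeds'

# Version 1: lxml.html -- the HTML parser ignores namespaces, so the plain
# XPath with bare tag names matches the summary elements.
data = lxml.html.parse(urllib2.urlopen(url_cooking))
print(len(data.xpath('.//feed/entry/summary')))   # some non-zero count

# Version 2: lxml.etree -- the XML parser is namespace-aware, so the same
# XPath matches nothing.
data = lxml.etree.parse(urllib2.urlopen(url_cooking))
print(len(data.xpath('.//feed/entry/summary')))   # 0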
As you discovered, the second version returns no nodes, but the lxml.html version works fine. The etree version is not working because it expects namespaces, and the html version works because it ignores namespaces. Part way down http://lxml.de/lxmlhtml.html, it says "The HTML parser notably ignores namespaces and some other XMLisms." Note that when you print the root node of the etree version (print(data.getroot())), you get something like <Element {http://www.w3.org/2005/Atom}feed at 0x22d1620>. That means it is a feed element with the namespace http://www.w3.org/2005/Atom. Here is a corrected version of the etree code:
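A sketch of that corrected etree code, binding an arbitrary prefix (here 'atom') to the Atom namespace, might be:

import urllib2
import lxml.etree

url_cooking = 'http://cooking.stackexchange.com/feeds'
cooking_content = urllib2.urlopen(url_cooking)
cooking_parsed = lxml.etree.parse(cooking_content)

# Qualify each step of the path with the Atom namespace via a prefix mapping.
cooking_texts = cooking_parsed.xpath(
    '//atom:feed/atom:entry/atom:summary',
    namespaces={'atom': 'http://www.w3.org/2005/Atom'})
print(len(cooking_texts))   # now non-empty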
The problem is namespaces.
Run this:
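Something along these lines, printing the root element's tag:

import urllib2
import lxml.etree

cooking_parsed = lxml.etree.parse(
    urllib2.urlopen('http://cooking.stackexchange.com/feeds'))
print(cooking_parsed.getroot().tag)
# e.g. {http://www.w3.org/2005/Atom}feed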
And you'll see that the element is namespaced as {http://www.w3.org/2005/Atom}feed. Similarly, if you navigate to one of the feed entries, they carry the same namespace.
This means the right xpath in lxml is:
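Presumably something like the following, where the prefix name is arbitrary:

import urllib2
import lxml.etree

cooking_parsed = lxml.etree.parse(
    urllib2.urlopen('http://cooking.stackexchange.com/feeds'))
cooking_texts = cooking_parsed.xpath(
    '//atom:feed/atom:entry/atom:summary',
    namespaces={'atom': 'http://www.w3.org/2005/Atom'})
print(len(cooking_texts))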
Try using BeautifulStoneSoup from the BeautifulSoup package. It might do the trick.
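A rough sketch of that approach, assuming the old BeautifulSoup 3 package (which provides BeautifulStoneSoup and ignores namespaces much like lxml.html):

import urllib2
from BeautifulSoup import BeautifulStoneSoup   # BeautifulSoup 3.x

url_cooking = 'http://cooking.stackexchange.com/feeds'
soup = BeautifulStoneSoup(urllib2.urlopen(url_cooking).read())
# Tag names are matched without any namespace handling, so 'summary' just works.
summaries = soup.findAll('summary')
print(len(summaries))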