Fast parsing of links out of a page in Python
I need to parse a large number of pages (say 1000) and replace the links with tinyurl links.

Right now I am doing this using a regex:

href_link_re = re.compile(r"<a[^>]+?href\s*=\s*(\"|')(.*?)\1[^>]*>", re.S)

but it's not fast enough.

So far I am considering:

- a state machine (the success of this will depend on my ability to write clever code)
- using an HTML parser

Can you suggest faster ways?

EDIT: You would think that an HTML parser would be faster than a regex, but in my tests it is not:
from BeautifulSoup import BeautifulSoup, SoupStrainer
import re
import time

__author__ = 'misha'

regex = re.compile(r"<a[^>]+?href\s*=\s*(\"|')(.*?)\1[^>]*>", re.S)

def test(text, fn, desc):
    # Run the parser ten times and report total time and link count.
    start = time.time()
    total = 0
    links = []
    for i in range(0, 10):
        links = fn(text)
        total += len(links)
    end = time.time()
    print(desc % (end - start, total))
    # print(links)

def parseRegex(text):
    links = set()
    for link in regex.findall(text):
        links.add(link[1])
    return links

def parseSoup(text):
    links = set()
    # SoupStrainer restricts BeautifulSoup to parsing <a> tags only.
    for link in BeautifulSoup(text, parseOnlyThese=SoupStrainer('a')):
        if link.has_key('href'):
            links.add(link['href'])
    return links

if __name__ == '__main__':
    f = open('/Users/misha/test')
    text = ''.join(f.readlines())
    f.close()
    test(text, parseRegex, "regex time taken: %s found links: %s")
    test(text, parseSoup, "soup time taken: %s found links: %s")
output:
regex time taken: 0.00451803207397 found links: 2450
soup time taken: 0.791836977005 found links: 2450
(The test input is a dump of the Wikipedia front page.)

I must be using Soup badly. What am I doing wrong?
2 Answers
LXML is probably your best bet for this task. See Beautiful Soup vs LXML Performance. Parsing links is easy in LXML and it's fast.
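For concreteness, here is a minimal sketch of the lxml approach; it assumes lxml is installed, and the function name parse_lxml (a drop-in for the question's parseRegex/parseSoup) is mine, not from the original post:

from lxml import html

def parse_lxml(text):
    # Parse the document once, then collect every href attribute on
    # an <a> tag; this XPath returns the attribute values directly.
    tree = html.fromstring(text)
    return set(tree.xpath('//a/@href'))

And since the original goal is replacing links rather than only collecting them, lxml.html also provides a rewrite_links helper that applies a callable to every link in a document; the to_tinyurl function below is a hypothetical stand-in for the actual shortening call:

from lxml.html import rewrite_links

# to_tinyurl is a placeholder: it should take a URL string and
# return the shortened URL to substitute into the document.
shortened_html = rewrite_links(text, to_tinyurl)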
Parsing with regexps is a very bad idea, both because of speed and because of the regexp exponential-time (catastrophic backtracking) problem.

Instead, you can use a parser for XHTML. The best is LXML.

Or you can write a parser specifically for this purpose with LL/LR parser generators, for example ANTLR, YAPPS, YACC, PyBison, etc.
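As a lighter-weight take on the purpose-built-parser idea, here is a minimal sketch using the standard library's event-driven HTMLParser rather than a generated LL/LR parser; the import shown is for Python 3 (on Python 2 it would be from HTMLParser import HTMLParser), and the class and function names are mine:

from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    # Collect href values from <a> start tags in a single pass.
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = set()

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag;
        # value is None for attributes written without a value.
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value is not None:
                    self.links.add(value)

def parse_stdlib(text):
    parser = LinkExtractor()
    parser.feed(text)
    return parser.links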