How to get the favicon with Beautiful Soup and Python

Posted on 2024-10-11 13:57:44

I wrote some silly code just for learning, but it doesn't work on most sites. Here is the code:

import urllib2, re
from BeautifulSoup import BeautifulSoup as Soup

class Founder:
    def Find_all_links(self, url):
        page_source = urllib2.urlopen(url)
        a = page_source.read()
        soup = Soup(a)

        a = soup.findAll(href=re.compile(r'/.a\w+'))
        return a
    def Find_shortcut_icon (self, url):
        a = self.Find_all_links(url)
        b = ''
        for i in a:
            strre=re.compile('shortcut icon', re.IGNORECASE)
            m=strre.search(str(i))
            if m:
                b = i["href"]
        return b
    def Save_icon(self, url):
        url = self.Find_shortcut_icon(url)
        print url
        host = re.search(r'[0-9a-zA-Z]{1,20}\.[a-zA-Z]{2,4}', url).group()
        opener = urllib2.build_opener()
        icon = opener.open(url).read()
        file = open(host+'.ico', "wb")
        file.write(icon)
        file.close()
        print '%s icon successfully saved' % host
c = Founder()
print c.Save_icon('http://lala.ru')

The strangest thing is that it does work for these sites:
http://habrahabr.ru
http://5pd.ru

But it doesn't work for most of the others I've checked.


Comments (5)

歌入人心 2024-10-18 13:57:44

You're making it far more complicated than it needs to be. Here's a simple way to do it:

import urllib
from BeautifulSoup import BeautifulSoup  # Python 2 / BeautifulSoup 3, as in the question

page = urllib.urlopen("http://5pd.ru/")
soup = BeautifulSoup(page)
# Look for <link rel="shortcut icon" href="...">
icon_link = soup.find("link", rel="shortcut icon")
icon = urllib.urlopen(icon_link['href'])
with open("test.ico", "wb") as f:
    f.write(icon.read())
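
On Python 3 the same approach needs urllib.request and bs4 instead, and the href is often relative, so it helps to resolve it against the page URL first. A minimal sketch of that variant (the parser choice and the urljoin step are my additions, not part of the original answer):

from urllib.parse import urljoin
from urllib.request import urlopen
from bs4 import BeautifulSoup

page_url = "http://5pd.ru/"
soup = BeautifulSoup(urlopen(page_url).read(), "html.parser")

icon_link = soup.find("link", rel="shortcut icon")
if icon_link is not None:
    # The href may be relative (e.g. "/favicon.ico"), so make it absolute first.
    icon_url = urljoin(page_url, icon_link["href"])
    with open("test.ico", "wb") as f:
        f.write(urlopen(icon_url).read())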

高跟鞋的旋律 2024-10-18 13:57:44

Thomas K's answer got me started in the right direction, but I found some websites that don't say rel="shortcut icon", like 1800contacts.com, which just says rel="icon". This works in Python 3 and returns the link. You can write it to a file if you want.

from bs4 import BeautifulSoup
import requests

def getFavicon(domain):
    # Make sure we have a full URL to request.
    if 'http' not in domain:
        domain = 'http://' + domain
    page = requests.get(domain)
    soup = BeautifulSoup(page.text, features="lxml")
    # Prefer rel="shortcut icon", fall back to rel="icon",
    # then to the conventional /favicon.ico location.
    icon_link = soup.find("link", rel="shortcut icon")
    if icon_link is None:
        icon_link = soup.find("link", rel="icon")
    if icon_link is None:
        return domain + '/favicon.ico'
    return icon_link["href"]
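
A quick usage sketch, assuming the getFavicon function and the imports above are in scope; the urljoin call to resolve a possibly relative href is my addition, not part of the answer:

from urllib.parse import urljoin

domain = "http://1800contacts.com"
href = getFavicon(domain)
# The returned href may be relative, so resolve it against the site URL.
icon_url = urljoin(domain, href)

resp = requests.get(icon_url)
with open("favicon.ico", "wb") as f:
    f.write(resp.content)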

楠木可依 2024-10-18 13:57:44

In case anyone wants to do the check in a single pass with a regex, the following works for me:

import re
from bs4 import BeautifulSoup

html_code = "<Some HTML code you get from somewhere>"
soup = BeautifulSoup(html_code, features="lxml")

# One case-insensitive check that matches both rel="shortcut icon" and rel="icon".
for item in soup.find_all('link', attrs={'rel': re.compile("^(shortcut icon|icon)$", re.I)}):
    print(item.get('href'))

This also takes care of differences in case.
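
To connect this to a live page, roughly the same pattern as above can be used; in the sketch below the requests call, the example URL, and the urljoin step are assumptions on my part:

import re
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

page_url = "http://www.facebook.com"
soup = BeautifulSoup(requests.get(page_url).text, features="lxml")

# One case-insensitive pass over rel="icon" and rel="shortcut icon".
icon_rel = re.compile("^(shortcut icon|icon)$", re.I)
for item in soup.find_all("link", attrs={"rel": icon_rel}):
    print(urljoin(page_url, item.get("href")))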


黑白记忆 2024-10-18 13:57:44

Thank you, kurd. Here is the code with some changes:

import urllib2
from BeautifulSoup import BeautifulSoup

url = "http://www.facebook.com"
page = urllib2.urlopen(url)
soup = BeautifulSoup(page.read())
icon_link = soup.find("link", rel="shortcut icon")
try:
    # The href may already be an absolute URL...
    icon = urllib2.urlopen(icon_link['href'])
except:
    # ...or a relative one, in which case prefix it with the page URL.
    icon = urllib2.urlopen(url + icon_link['href'])
# Build a filename such as "facebook.com.ico" from the hostname.
iconname = url.split('/')
iconname = iconname[2].split('.')
iconname = iconname[1] + '.' + iconname[2] + '.ico'
with open(iconname, "wb") as f:
    f.write(icon.read())
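
The bare try/except above is working around hrefs that are relative rather than absolute; urljoin handles both cases directly. A rough Python 3 rendering of the same idea (the parser, the urljoin call, and the filename derivation are my substitutions, not the answer's own code):

from urllib.parse import urljoin, urlparse
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = "http://www.facebook.com"
soup = BeautifulSoup(urlopen(url).read(), "html.parser")
icon_link = soup.find("link", rel="shortcut icon")

# urljoin works for both absolute and relative hrefs, so no try/except is needed.
icon_url = urljoin(url, icon_link["href"])

# Turn "www.facebook.com" into "facebook.com.ico", as the answer above does.
host = urlparse(url).netloc
iconname = ".".join(host.split(".")[-2:]) + ".ico"

with open(iconname, "wb") as f:
    f.write(urlopen(icon_url).read())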

孤单情人 2024-10-18 13:57:44

Thank you, Thomas.
Here is the code with some changes:

import urllib2
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen("http://5pd.ru/")
soup = BeautifulSoup(page.read())
icon_link = soup.find("link", rel="shortcut icon")
icon = urllib2.urlopen(icon_link['href'])
with open("test.ico", "wb") as f:
    f.write(icon.read())
