YouTube video_id from Firefox bookmarks.html source [almost there]
bookmarks.html looks like this:
<DT><A HREF="http://www.youtube.com/watch?v=Gg81zi0pheg" ADD_DATE="1320876124" LAST_MODIFIED="1320878745" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=Gg81zi0pheg</A>
<DT><A HREF="http://www.youtube.com/watch?v=pP9VjGmmhfo" ADD_DATE="1320876156" LAST_MODIFIED="1320878756" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=pP9VjGmmhfo</A>
<DT><A HREF="http://www.youtube.com/watch?v=yTA1u6D1fyE" ADD_DATE="1320876163" LAST_MODIFIED="1320878762" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=yTA1u6D1fyE</A>
<DT><A HREF="http://www.youtube.com/watch?v=4v8HvQf4fgE" ADD_DATE="1320876186" LAST_MODIFIED="1320878767" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=4v8HvQf4fgE</A>
<DT><A HREF="http://www.youtube.com/watch?v=e9zG20wQQ1U" ADD_DATE="1320876195" LAST_MODIFIED="1320878773" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=e9zG20wQQ1U</A>
<DT><A HREF="http://www.youtube.com/watch?v=khL4s2bvn-8" ADD_DATE="1320876203" LAST_MODIFIED="1320878782" ICON_URI="http://s.ytimg.com/yt/favicon-vflZlzSbU.ico" ICON="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABEElEQVQ4jaWSPU7DQBCF3wGo7DZp4qtEQqJJlSu4jLgBNS0dRDR0QARSumyPRJogkIJiWiToYhrSEPJR7Hptx/kDRhrNm93nb3ZXFsD4psPRwR4AzbjHxd0ru4a8kEgvbf1NePfzbQdJfro52UcS8fHQDyjWCuDz7QpJTOYLZk5nH0zmi/8Dzg/DEqgCmL1fW/PXNwADf4V7AMbuis24txrw1xBJAlHgMizoLdkI4CVBREEZWTKGK9bKqa3G3QDO2G7Z2ljqAYyxPmPgI4XHpw2A7ES+d/unZzlwM2BNnwEKb1QFGJNPabdg9GB1v2993W71BNOamNZEWpfXy5nW8/3U9UQBkqTyy677F6rrkvQDptjzJ/PSbekAAAAASUVORK5CYII=">http://www.youtube.com/watch?v=khL4s2bvn-8</A>
<DT><A HREF="http://www.youtube.com/watch?v=XTndQ7bYV0A" ADD_DATE="1320876271" LAST_MODIFIED="1320876271">Paramore - Walmart Soundcheck 6-For a pessimist(HQ)</A>
<DT><A HREF="http://www.youtube.com/watch?v=xTT2MqgWRRc" ADD_DATE="1320876284" LAST_MODIFIED="1320876284">Paramore - Walmart Soundcheck 5-Pressure(HQ)</A>
<DT><A HREF="http://www.youtube.com/watch?v=J2ZYQngwSUw" ADD_DATE="1320876291" LAST_MODIFIED="1320876291">Paramore - Wal-Mart Soundcheck Interview</A>
<DT><A HREF="http://www.youtube.com/watch?v=9RZwvg7unrU" ADD_DATE="1320878207" LAST_MODIFIED="1320878207">Paramore - 08 - Interview [ Wal-Mart Soundcheck ]</A>
<DT><A HREF="http://www.youtube.com/watch?v=vz3qOYWwm10" ADD_DATE="1320878295" LAST_MODIFIED="1320878295">Paramore - 04 - That's What You Get [ Wal-Mart Soundcheck ]</A>
<DT><A HREF="http://www.youtube.com/watch?v=yarv52QX_Yw" ADD_DATE="1320878301" LAST_MODIFIED="1320878301">Paramore - 05 - Pressure [ Wal-Mart Soundcheck ]</A>
<DT><A HREF="http://www.youtube.com/watch?v=LRREY1H3GCI" ADD_DATE="1320878317" LAST_MODIFIED="1320878317">Paramore - Walmart Promo</A>
It's a standard bookmarks export file from Firefox.
I feed it into bookmarks.py which looks like this:
#!/usr/bin/env python
import sys
from BeautifulSoup import BeautifulSoup

url_list = sys.argv[1]
urls = [tag['href'] for tag in
        BeautifulSoup(open(url_list)).findAll('a')]
print urls
This returns a much cleaner list of urls:
[u'http://www.youtube.com/watch?v=Gg81zi0pheg', u'http://www.youtube.com/watch?v=pP9VjGmmhfo', u'http://www.youtube.com/watch?v=yTA1u6D1fyE', u'http://www.youtube.com/watch?v=4v8HvQf4fgE', u'http://www.youtube.com/watch?v=e9zG20wQQ1U', u'http://www.youtube.com/watch?v=khL4s2bvn-8', u'http://www.youtube.com/watch?v=XTndQ7bYV0A', u'http://www.youtube.com/watch?v=xTT2MqgWRRc', u'http://www.youtube.com/watch?v=J2ZYQngwSUw', u'http://www.youtube.com/watch?v=9RZwvg7unrU', u'http://www.youtube.com/watch?v=vz3qOYWwm10', u'http://www.youtube.com/watch?v=yarv52QX_Yw', u'http://www.youtube.com/watch?v=LRREY1H3GCI']
My next step is to feed each of the youtube urls into video_info.py:
#!/usr/bin/python
import sys
import urlparse
import gdata.youtube
import gdata.youtube.service
youtube_url = sys.argv[1]
url_data = urlparse.urlparse(youtube_url)
query = urlparse.parse_qs(url_data.query)
youtube_id = query["v"][0]
print youtube_id
yt_service = gdata.youtube.service.YouTubeService()
yt_service.developer_key = 'AI39si4yOmI0GEhSTXH0nkiVDf6tQjCkqoys5BBYLKEr-PQxWJ0IlwnUJAcdxpocGLBBCapdYeMLIsB7KVC_OA8gYK0VKV726g'
entry = yt_service.GetYouTubeVideoEntry(video_id=youtube_id)
print 'Video title: %s' % entry.media.title.text
print 'Video view count: %s' % entry.statistics.view_count
When given the url "http://www.youtube.com/watch?v=aXrgwC1rsw4", the output looks like this:
aXrgwC1rsw4
Video title: OneRepublic Good Life Live Walmart Soundcheck
Video view count: 202
How do I feed the list of urls from bookmarks.py into video_info.py?
*Extra points for output in csv format, and extra extra points for checking bookmarks.html for duplicates before passing the data to video_info.py.*
Thanks for all your help, guys. Thanks to Stack Overflow I've gotten this far.
David
So, combined, I now have:
#!/usr/bin/env python
import sys
import urlparse
import gdata.youtube
import gdata.youtube.service
from BeautifulSoup import BeautifulSoup

yt_service = gdata.youtube.service.YouTubeService()
yt_service.developer_key = 'AI39si4yOmI0GEhSTXH0nkiVDf6tQjCkqoys5BBYLKEr-PQxWJ0IlwnUJAcdxpocGLBBCapdYeMLIsB7KVC_OA8gYK0VKV726g'
url_list = sys.argv[1]
urls = [tag['href'] for tag in
        BeautifulSoup(open(url_list)).findAll('a')]
print urls
youtube_url = urls
url_data = urlparse.urlparse(youtube_url)
query = urlparse.parse_qs(url_data.query)
youtube_id = query["v"][0]
#list(set(my_list))
entry = yt_service.GetYouTubeVideoEntry(video_id=youtube_id)
myyoutubes = []
myyoutubes.append(", ".join([youtube_id, entry.media.title.text, entry.statistics.view_count]))
print "\n".join(myyoutubes)
How do I pass the list of urls to the youtube_url variable? They need to be cleaned up further and passed one at a time, I believe.
I've got it down to this now:
#!/usr/bin/env python
import sys
import urlparse
import gdata.youtube
import gdata.youtube.service
from BeautifulSoup import BeautifulSoup

yt_service = gdata.youtube.service.YouTubeService()
yt_service.developer_key = 'AI39si4yOmI0GEhSTXH0nkiVDf6tQjCkqoys5BBYLKEr-PQxWJ0IlwnUJAcdxpocGLBBCapdYeMLIsB7KVC_OA8gYK0VKV726g'
url_list = sys.argv[1]
urls = [tag['href'] for tag in
        BeautifulSoup(open(url_list)).findAll('a')]
for url in urls:
    youtube_url = url
    url_data = urlparse.urlparse(youtube_url)
    query = urlparse.parse_qs(url_data.query)
    youtube_id = query["v"][0]
    #list(set(my_list))
    entry = yt_service.GetYouTubeVideoEntry(video_id=youtube_id)
    myyoutubes = []
    myyoutubes.append(", ".join([youtube_id, entry.media.title.text, entry.statistics.view_count]))
    print "\n".join(myyoutubes)
I can pass bookmarks.html to combined.py, but it only returns the first line.
How do I loop through each line of youtube_url?
2 Answers
You should provide a string to BeautifulSoup (read the file's contents in, rather than passing the file object). Also, separate the very fast url parsing from the much slower downloading of the stats:
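The code that accompanied these points is missing from this copy. A dependency-free sketch of the fast parsing step might look like the following; note that a regex stands in for BeautifulSoup here so the sketch is self-contained, and `urlparse`/`parse_qs` are imported from their Python 2 or Python 3 location:

```python
import re

try:
    # the question's scripts use Python 2's urlparse module
    from urlparse import urlparse, parse_qs
except ImportError:
    # the same functions live in urllib.parse on Python 3
    from urllib.parse import urlparse, parse_qs

def extract_video_ids(html):
    # Pull every HREF out of the bookmarks markup, then read the
    # v= query parameter off each watch url. This is pure string
    # work, so it runs in a blink; no network access happens here.
    ids = []
    for href in re.findall(r'HREF="([^"]+)"', html):
        query = parse_qs(urlparse(href).query)
        if "v" in query:
            ids.append(query["v"][0])
    return ids
```

With the ids collected up front, the slow per-video API calls can be done in a separate pass.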
You don't need to authenticate for read-only requests, so the developer_key line can be left out.
Save the statistics to a csv file named on the command line, and don't stop if some video causes an error:
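The snippet that went with this point is gone from this copy. A sketch of the csv-plus-error-handling idea, with a plain `fetch` callable (a hypothetical name) standing in for the gdata lookup:

```python
import csv

def save_stats(video_ids, fetch, csv_path):
    # `fetch` stands in for the gdata lookup and should return a
    # (title, view_count) pair for a video id. A failing lookup is
    # reported and skipped so one bad video doesn't kill the run.
    with open(csv_path, "w") as f:
        writer = csv.writer(f)
        writer.writerow(["video_id", "title", "views"])
        for vid in video_ids:
            try:
                title, views = fetch(vid)
            except Exception as exc:
                print("skipping %s: %s" % (vid, exc))
                continue
            writer.writerow([vid, title, views])
```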
Here's a full script; press playback to watch how it was written.
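The full script itself did not survive in this copy. A reconstruction under stated assumptions (Python 2, BeautifulSoup 3, and the gdata v2 YouTube API, which Google retired in 2015, so this is a historical sketch of the answer's shape rather than code that runs today):

```python
#!/usr/bin/env python
# Historical sketch: Python 2, BeautifulSoup 3, gdata v2 (retired).
import csv
import sys
from urlparse import urlparse, parse_qs

from BeautifulSoup import BeautifulSoup
import gdata.youtube.service

def video_ids(bookmarks_path):
    # Hand BeautifulSoup a string, parse once, and dedupe while
    # keeping the bookmark order.
    soup = BeautifulSoup(open(bookmarks_path).read())
    seen = set()
    for tag in soup.findAll('a'):
        query = parse_qs(urlparse(tag['href']).query)
        if 'v' in query and query['v'][0] not in seen:
            seen.add(query['v'][0])
            yield query['v'][0]

def main():
    bookmarks_path, csv_path = sys.argv[1], sys.argv[2]
    # Read-only requests need no developer_key.
    service = gdata.youtube.service.YouTubeService()
    writer = csv.writer(open(csv_path, 'wb'))
    for vid in video_ids(bookmarks_path):
        try:
            entry = service.GetYouTubeVideoEntry(video_id=vid)
        except Exception, exc:
            print >> sys.stderr, 'skipping %s: %s' % (vid, exc)
            continue
        writer.writerow([vid, entry.media.title.text,
                         entry.statistics.view_count])

if __name__ == '__main__':
    main()
```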
Output is written to out.csv.
Why not just combine the two files? Also, you may want to break it up into methods to make it easier to understand later.
For csv output you're going to want to accumulate your data, so maybe keep a list and append a csv line of youtube info on each pass:
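The snippet that followed is missing here; the accumulate-then-join idea might be sketched as below, with plain (video_id, title, view_count) tuples standing in for the fields read off each gdata entry:

```python
def csv_lines(videos):
    # `videos` holds (video_id, title, view_count) tuples, standing
    # in for the values pulled off each gdata entry.
    myyoutubes = []
    for video_id, title, views in videos:
        # join() needs strings, so stringify the count first
        myyoutubes.append(", ".join([video_id, title, str(views)]))
    return "\n".join(myyoutubes)
```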
For duplicates, I normally do this:
list(set(my_list))
Sets only have unique elements.
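One caveat worth noting: list(set(my_list)) drops duplicates but also scrambles the original order. If the bookmark order matters, a small order-preserving helper does the same job:

```python
def unique_in_order(items):
    # Keep the first occurrence of each item, unlike list(set(items)),
    # which dedupes but returns the elements in arbitrary order.
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result
```

Call it as `unique_in_order(urls)` before looping over the urls.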