Delicious bookmarks - which sites do I bookmark most often?
I haven't found any pre-made scripts that would help me analyze my delicious bookmarks. I want to know if there are any websites that I tend to frequently bookmark. I know I can export my bookmarks and can go from there. Has anyone done this? How have you gone about it?
On a side note - are there any RSS readers that do something similar?
2 Answers
Well, I'd suggest the easiest thing to do would be to export them all as XML (using the AJAX API) or HTML, then parse them all into an array, iterate through them to extract the domains, and then sort the list and do value counting so you end up with a hash like {"example.org" => 1, "cnn.com" => 50}, etc. Then sort by count so you can see your top ten.
How you do that would depend on which programming language and libraries you prefer to use. I'd probably use Nokogiri and Ruby. Basically, download the data using the API, parse it using the XML parsing library for your preferred programming language, use a URI library to extract the host part of the URI (or use a regular expression) and then just jiggle the array around until it does what you want.
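As a sketch of the counting step in Ruby using only the standard library (the sample URLs and the `top_domains` helper are illustrative; a real run would feed in the URLs parsed out of your Delicious export):

```ruby
require 'uri'

# Tally how often each host appears in a list of bookmark URLs and
# return the most frequent ones, highest count first.
def top_domains(urls, limit = 10)
  counts = Hash.new(0)
  urls.each do |url|
    host = URI.parse(url).host rescue next  # skip malformed URLs
    counts[host] += 1 if host
  end
  counts.sort_by { |_, n| -n }.first(limit)
end

# Illustrative sample standing in for a parsed bookmark export.
sample = [
  "http://example.org/article",
  "http://cnn.com/news/1",
  "http://cnn.com/news/2",
]

top_domains(sample).each { |host, n| puts "#{host}: #{n}" }
# cnn.com appears twice, example.org once
```

With a real export you would replace `sample` with the URLs pulled out by Nokogiri (or any XML/HTML parser), but the tallying and sorting logic stays the same.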
What about using the Export / Download Your Delicious Bookmarks page for further analysis?