How can I keep RSS feed entries available longer than they are accessible from the feed?

Posted 2024-07-08 08:31:26

My computer at home is set up to automatically download some stuff from RSS feeds (mostly torrents and podcasts). However, I don't always keep this computer on. The sites I subscribe to have a relatively large throughput, so when I turn the computer back on, it has no idea what it missed between the time it was turned off and the latest update.

How would you go about storing the feed entries for a longer period of time than they're available on the actual sites?

I've checked out Yahoo Pipes and found no such functionality. Google Reader can sort of do it, but it requires manually marking each item. Magpie RSS for PHP can do caching, but that's only to avoid retrieving the feed too often, not really to store more entries.

I have access to a web server (LAMP) that's on 24/7, so a solution using PHP/MySQL would be excellent; any existing web service would be great too.

I could write my own code to do this, but I'm sure someone has run into this problem before?
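
To make the question concrete, here is a rough sketch of the storage I have in mind, assuming MySQL; the feed_entries name and all the columns are just an illustration, not an existing schema. The key idea is a unique index on (feed_url, guid) so that re-fetching a feed never creates duplicate rows:

    -- Hypothetical schema: one row per feed item, kept indefinitely
    CREATE TABLE feed_entries (
        id           INT UNSIGNED NOT NULL AUTO_INCREMENT,
        feed_url     VARCHAR(255) NOT NULL,
        guid         VARCHAR(255) NOT NULL,   -- the item's <guid>/id from the feed
        title        TEXT,
        url          TEXT,
        published_at DATETIME,
        content      MEDIUMTEXT,
        fetched_at   TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (id),
        UNIQUE KEY uniq_item (feed_url, guid)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;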

What I did:
I wasn't aware you could share an entire tag using Google Reader; thanks to Mike Wills for pointing this out.
Once I knew I could do this, it was simply a matter of adding the feeds to a separate Google account (so as not to clog up my personal reading list). I also did some selective matching using Yahoo Pipes to pull out only the specific entries I was interested in; this, too, was to minimize the risk of missing anything.

4 Answers

猫七 2024-07-15 08:31:26

It sounds like Google Reader does everything you're wanting. Not sure what you mean by marking individual items; you'd have to do that with any RSS aggregator.

年华零落成诗 2024-07-15 08:31:26

I use Google Reader for my podiobooks.com subscriptions. I add all of the feeds to a tag, in this case podiobooks.com, that I share (but don't share the URL). I then add the RSS feed to iTunes. Example here.

与他有关 2024-07-15 08:31:26

Sounds like you want some sort of service that checks the RSS feed every X minutes, so you can download every single article/item published to the feed while you are "watching" it, rather than only seeing the items displayed on the feed when you go to view it. Do I have that correct?

Instead of coming up with a full-blown software solution, can you just use cron or some other sort of job scheduling on the webserver with whatever solution you are already using to read the feeds and download their content?

Otherwise it sounds like you'll end up coming close to rewriting a full-blown service like Google Reader.
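
For example (the script path and schedule below are placeholders, not anything you've described), a crontab entry on the LAMP box could run your existing fetch logic every 15 minutes:

    # Run the (hypothetical) feed-fetching script every 15 minutes,
    # appending its output to a log for troubleshooting
    */15 * * * * /usr/bin/php /var/www/feed-archive/fetch.php >> /var/log/feed-archive.log 2>&1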

迷迭香的记忆 2024-07-15 08:31:26

Writing an aggregator for keeping longer history shouldn't be too hard with a good RSS library.
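
For instance, here's a rough sketch using SimplePie (just one example of such a library; every URL, path, and credential below is a placeholder) together with the kind of feed_entries table sketched in the question:

    <?php
    // fetch.php -- minimal feed-archiver sketch; assumes SimplePie is installed
    // (e.g. via Composer) and a feed_entries table with a UNIQUE (feed_url, guid) key.
    require_once __DIR__ . '/vendor/autoload.php';

    $feeds = [
        'https://example.com/podcast.rss',   // placeholder feed URLs
        'https://example.org/torrents.rss',
    ];

    $db = new PDO('mysql:host=localhost;dbname=feeds;charset=utf8mb4', 'user', 'pass');
    $stmt = $db->prepare(
        'INSERT IGNORE INTO feed_entries (feed_url, guid, title, url, published_at, content)
         VALUES (?, ?, ?, ?, ?, ?)'
    );

    foreach ($feeds as $feedUrl) {
        $feed = new SimplePie();
        $feed->set_feed_url($feedUrl);
        $feed->enable_cache(false);   // storage happens in MySQL, not SimplePie's cache
        $feed->init();

        foreach ($feed->get_items() as $item) {
            // INSERT IGNORE plus the unique key makes re-runs idempotent, so
            // entries accumulate in the database long after they drop off the feed.
            $stmt->execute([
                $feedUrl,
                $item->get_id(),
                $item->get_title(),
                $item->get_permalink(),
                $item->get_date('Y-m-d H:i:s'),
                $item->get_content(),
            ]);
        }
    }

Run it from cron (as suggested above) and the database becomes your long-term archive, no matter how quickly items scroll off the source feeds.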
