How to keep reading an endless stream with fault tolerance

Published 2024-12-27 02:25:26 · 285 characters · 0 views · 0 comments


Currently I'm working on an app that reads the stream from the Twitter API and parses it into objects.
At the moment I read the stream and use ReadObject from DataContractJsonSerializer to build my objects.

This works great!

HOWEVER:
I'm a bit worried about what happens in the off chance my program catches up with the stream (the internet slows down, or whatever) and there isn't enough data to parse. The method will probably throw an exception, but I'd like to wait for new data, then retry the same object and continue.

I was also wondering how I could make the method more robust, in case corrupt data enters the stream or something similar.
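A minimal sketch of the kind of fault-tolerant wrapper being asked about, assuming the stream delivers newline-delimited JSON records (as the Twitter streaming API does). The `Tweet` contract and `TryParse` helper are illustrative names, not part of any real API:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

// Hypothetical data contract -- the real object model depends on the fields you need.
[DataContract]
public class Tweet
{
    [DataMember(Name = "text")]
    public string Text { get; set; }
}

public static class TweetParser
{
    private static readonly DataContractJsonSerializer Serializer =
        new DataContractJsonSerializer(typeof(Tweet));

    // Parses one newline-delimited JSON record. Returns null for blank
    // keep-alive lines or corrupt records instead of letting the exception
    // kill the read loop, so the caller can skip the record and continue.
    public static Tweet TryParse(string json)
    {
        if (string.IsNullOrWhiteSpace(json))
            return null;
        try
        {
            using (var ms = new MemoryStream(Encoding.UTF8.GetBytes(json)))
                return (Tweet)Serializer.ReadObject(ms);
        }
        catch (Exception)
        {
            // DCJS surfaces corrupt input as a SerializationException
            // (sometimes wrapping an XmlException); either way, treat the
            // record as bad and move on.
            return null;
        }
    }
}
```

Reading whole lines first, then deserializing each line from a MemoryStream, also sidesteps the "catching up with the stream" worry: the blocking `ReadLine` on the network stream waits for data, and the serializer only ever sees a complete record.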

Thanks in advance for any answers/ideas :)


Comments (1)

横笛休吹塞上声 2025-01-03 02:25:26


If your program catches up with the Twitter feed, DCJS should just block until enough data arrives to complete the read. That normally isn't a concern, because streams are designed to hide latency from their readers.

Much more likely is that you won't catch up, but will instead keep falling behind until you run out of memory, throw an OutOfMemoryException, and the program crashes.

I'd suggest that rather than trying to parse the stream on the fly, you write it to something like a file and read from that (maybe even in parallel, using rolling files or something).
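One way to sketch that decoupling of reading from parsing, here with an in-memory bounded buffer instead of rolling files just to show the producer/consumer split (all names are illustrative, and in a real setup the producer side would also persist to disk):

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;

// Illustrative sketch: the reader drains the live stream into a buffer,
// and the parser consumes from the buffer at its own pace.
public class StreamBuffer
{
    // A bounded capacity gives backpressure: the reader blocks when the
    // parser falls behind, instead of memory growing without limit.
    private readonly BlockingCollection<string> _lines =
        new BlockingCollection<string>(boundedCapacity: 10000);

    // Producer: drain the live stream line by line into the buffer.
    public void Produce(TextReader reader)
    {
        string line;
        while ((line = reader.ReadLine()) != null)
            _lines.Add(line);            // blocks while the buffer is full
        _lines.CompleteAdding();         // signal end of stream
    }

    // Consumer: parse records at its own pace, skipping ones that fail.
    public void Consume(Action<string> parse)
    {
        foreach (var line in _lines.GetConsumingEnumerable())
        {
            try { parse(line); }
            catch (Exception) { /* log and skip the corrupt record */ }
        }
    }
}
```

Running `Produce` and `Consume` on separate tasks gives you the "in parallel" part; swapping the collection for a file writer/reader pair gives you the rolling-file variant, at the cost of having to manage truncation and file handoff yourself.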
