Long polling vs. streaming, with roughly one update per second

Published 2024-07-26 06:41:36


Is streaming a viable option?
Will there be a performance difference on the server end depending on which I choose?
Is one better than the other for this case?

I am working on a GWT application with Tomcat running on the server end. To understand my needs, imagine updating the stock prices of several stocks concurrently.

Comments (6)

最好是你 2024-08-02 06:41:37


It doesn't really matter. The connection re-negotiation overhead with HTTP/1.1 is so slim that you won't notice any significant performance difference one way or the other.

The benefits of long-polling are compatibility and reliability - no issues with proxies, ports, detecting disconnects, etc.

The benefits of "true" streaming would potentially be reduced overhead, but as mentioned already, this benefit is much, much less than it's made out to be.

Personally, I find a well-designed comet server to be the best solution for large numbers of updates and/or server-push.
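To illustrate the long-polling cycle this answer favors, here is a minimal sketch. The stubbed endpoint and all names are hypothetical; a real client would issue an HTTP request that the server holds open until data arrives or a timeout fires, then immediately re-issue it.

```python
import time

def fake_server_wait_for_update(timeout_s, update_ready_after_s):
    """Stub for a long-poll endpoint: blocks until an update is ready
    or the timeout expires, then answers."""
    time.sleep(min(timeout_s, update_ready_after_s))  # server holds the request open
    if update_ready_after_s <= timeout_s:
        return {"status": 200, "price": 101.5}       # data arrived in time
    return {"status": 204, "price": None}            # timed out: no data yet

def long_poll_once(timeout_s=30.0, update_ready_after_s=0.01):
    """One long-poll cycle: issue the request, block until the server
    answers, handle the response. A real client would immediately loop
    and re-issue the request, which is what makes this 'long' polling."""
    response = fake_server_wait_for_update(timeout_s, update_ready_after_s)
    if response["status"] == 200:
        return response["price"]
    return None  # timeout: simply re-poll
```

The robustness the answer mentions falls out of this shape: every cycle is an ordinary, short-lived HTTP request, so proxies and disconnect detection behave exactly as they do for normal traffic.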

围归者 2024-08-02 06:41:37


Certainly, if you're looking to push data, streaming would seem to provide better performance, provided your server can handle the expected number of continuous connections. But there's another issue you don't address: are you on the internet or an intranet? Streaming has been reported to have some problems across proxies, much as you'd expect. So for a general-purpose solution, you would probably be better served by long polling; for an intranet, where you understand the network infrastructure, streaming is quite likely a simpler, better-performing solution for you.

度的依靠╰つ 2024-08-02 06:41:37


The StreamHub GWT Comet Adapter was designed exactly for this scenario of streaming stock quotes. Example here: GWT Streaming Stock Quotes. It updates the stock prices of several stocks concurrently. I think the implementation underneath is Comet which is essentially streaming over HTTP.

Edit: It uses a different technique for each browser. To quote the website:

There are several different underlying techniques used to implement Comet including Hidden iFrame, XMLHttpRequest/Script Long Polling, and embedded plugins such as Flash. The introduction of HTML 5 WebSockets into future browsers will provide an alternative mechanism for HTTP Streaming. StreamHub uses a "best-fit" approach utilizing the most performant and reliable technique for each browser.
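The "best-fit" idea in that quote can be sketched as a simple capability lookup. The capability flags, their ordering, and the function name below are illustrative guesses, not StreamHub's actual logic:

```python
def pick_transport(browser_caps):
    """Toy 'best-fit' transport chooser: walk a list ordered roughly
    from most to least capable and return the first technique the
    browser supports. Long polling is the near-universal fallback."""
    ranked = [
        ("websocket", "websockets"),        # HTML5 WebSockets
        ("xhr-streaming", "xhr_streaming"), # streaming over XMLHttpRequest
        ("hidden-iframe", "iframes"),       # forever-frame technique
        ("long-polling", "xhr"),            # plain XHR long polling
    ]
    for name, capability in ranked:
        if browser_caps.get(capability):
            return name
    return "long-polling"  # safest default when nothing is detected
```

The design point is that the application code never cares which transport was picked; the Comet layer degrades gracefully per browser.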

撑一把青伞 2024-08-02 06:41:37


Streaming will be faster because data only crosses the wire one way. With polling, the latency is at least doubled, since the request has to travel to the server before any data can come back.

Polling is more resilient to network outages since it doesn't rely on a connection being kept open.

I'd go for polling just for the robustness.
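The "latency is at least doubled" claim can be made concrete with a bit of arithmetic. The numbers are illustrative; the model also charges plain polling, on average, half the polling interval spent waiting for the next poll to fire:

```python
def push_latency(one_way_ms):
    """Server push / streaming: the update crosses the wire once."""
    return one_way_ms

def poll_latency(one_way_ms, avg_wait_ms):
    """Plain polling: the request travels up, the response travels back
    (two one-way trips), plus the average wait until the next poll."""
    return 2 * one_way_ms + avg_wait_ms
```

With a 50 ms one-way trip and a 1-second polling interval (so 500 ms average wait), polling delivers an update in about 600 ms versus 50 ms for push; even with a zero wait, polling pays exactly twice the one-way cost.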

箹锭⒈辈孓 2024-08-02 06:41:37


For live stock prices I would absolutely keep the connection open, and ensure user alert/reconnection on disconnect.

尤怨 2024-08-02 06:41:36


Do you want the process to be client- or server-driven? In other words, do you want to push new data to the clients as soon as it's available, or would you rather the clients request new data whenever they see fit, even though that might not be once per second? What is the likelihood that the client will be able to stick around to wait for an answer?

Even though you expect the events to occur once per second, how long does it take between a request from a client and the return from the server? If it's longer than a second, I'd expect you to lean towards pushing the events to the clients; the other way around, I'd expect polling to be okay. If the response takes longer than the interval, then you're essentially streaming anyway, since a new event is ready by the time the client receives the last one, so the client could poll continually and always receive events. In that case, streaming the data would actually be more lightweight, since you're removing the connection/negotiation overhead from the process.

I would suspect that server load would be higher for a client-based (pull) subscription than for a streaming configuration, since the client would have to re-negotiate the connection each time instead of leaving a connection open; then again, each open connection in a streaming model requires server resources as well. It depends on the trade-off between how expensive your negotiation process is and how much memory/processing each open connection requires. I'm no expert, though, so there may be other factors.

UPDATE: This guy talks about the trade-offs between long-polling and streaming, and he seems to say that with HTTP/1.1, the connection re-negotiation process is trivial, so that's not as much of an issue.
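The observation above, that slow responses turn continual polling into de-facto streaming, reduces to a one-line condition in a deliberately simplified model:

```python
def events_always_ready(event_interval_s, round_trip_s):
    """If a fresh event is produced before the previous poll's round
    trip completes, a continually-polling client never idles: every
    request finds data waiting, so the traffic pattern is effectively
    a stream, minus the per-request connection overhead."""
    return round_trip_s >= event_interval_s
```

With events every second and a 1.5-second round trip the client always finds data waiting, which is exactly the regime where true streaming wins by dropping the per-request overhead.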
