Comet long polling on Jetty with Continuations?

Posted 2024-12-04 21:38:28


I am trying to create a Jetty servlet that allows clients (web browsers, Java clients, ...) to get broadcast notifications from the web server.

The notifications should be sent in a JSON format.

My first idea was to make the client send a long-polling request, and the server respond when a notification is available using Jetty's Continuation API, then repeat.

The problem with this approach is that I miss all the notifications that happen between two requests.

The only solution I have found for this is to buffer the events on the server and use a timestamp mechanism to retransmit missed notifications. This works, but seems pretty heavy for what it does...

Any idea on how I could solve this problem more elegantly?

Thanks!
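For illustration, a minimal sketch of the buffered long-poll approach described above, written against the standard Servlet 3.0 async API (Jetty's Continuation API follows the same suspend/resume pattern); the servlet path, the "since" parameter and all class names are made up for the example.

// Buffered long-poll sketch: missed events are replayed from a timestamped buffer,
// otherwise the request is suspended until the next broadcast.
import java.io.IOException;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/notifications", asyncSupported = true)
public class NotificationServlet extends HttpServlet {

    // Each event keeps the time it was published, so a client that reconnects
    // can ask for "everything since my last poll".
    private static final class Event {
        final long timestamp = System.currentTimeMillis();
        final String json;
        Event(String json) { this.json = json; }
    }

    private final List<Event> buffer = new CopyOnWriteArrayList<>();
    private final ConcurrentLinkedQueue<AsyncContext> waiting = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String sinceParam = req.getParameter("since");
        long since = (sinceParam == null) ? 0L : Long.parseLong(sinceParam);

        // 1. Anything missed between two polls is served immediately from the buffer.
        StringBuilder missed = new StringBuilder("[");
        for (Event e : buffer) {
            if (e.timestamp > since) {
                if (missed.length() > 1) missed.append(',');
                missed.append(e.json);
            }
        }
        if (missed.length() > 1) {
            resp.setContentType("application/json");
            resp.getWriter().write(missed.append(']').toString());
            return;
        }

        // 2. Otherwise suspend the request until the next broadcast (or timeout).
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000); // keep this well under typical proxy idle limits
        waiting.add(ctx);
    }

    // Called by whatever produces notifications.
    public void broadcast(String json) {
        buffer.add(new Event(json));
        AsyncContext ctx;
        while ((ctx = waiting.poll()) != null) {
            try {
                ctx.getResponse().setContentType("application/json");
                ctx.getResponse().getWriter().write("[" + json + "]");
            } catch (IOException ignored) {
                // client went away; nothing to do
            } finally {
                ctx.complete();
            }
        }
    }
}

A production version would also register an AsyncListener to discard timed-out requests and would trim old entries from the buffer, but the structure stays the same.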


Comments (4)

转身以后 2024-12-11 21:38:29


HTTP Streaming is most definitely a better solution than HTTP long-polling. WebSockets are an even better solution.

WebSockets offer the first standardised bi-directional full-duplex solution for realtime communication on the Web between any client (it doesn't have to be a web browser) and server. IMHO WebSockets are the way to go since they are a technology that will continue to be developed, supported and in demand and will only grow in usage and popularity. They're also super-cool :)

There appear to be a few WebSocket clients for Java, and Jetty also supports WebSockets.
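Since this answer points at Jetty's WebSocket support, here is a minimal broadcast endpoint sketched with the standard JSR 356 (javax.websocket) annotations, which Jetty implements; the endpoint path and class name are illustrative.

// Broadcast endpoint: tracks open sessions and pushes each JSON notification to all of them.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/notifications")
public class NotificationSocket {

    // All currently connected clients.
    private static final Set<Session> SESSIONS = ConcurrentHashMap.newKeySet();

    @OnOpen
    public void onOpen(Session session) {
        SESSIONS.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        SESSIONS.remove(session);
    }

    // Push one JSON notification to every connected client.
    public static void broadcast(String json) {
        for (Session s : SESSIONS) {
            if (s.isOpen()) {
                s.getAsyncRemote().sendText(json);
            }
        }
    }
}

On the browser side, new WebSocket("ws://host/notifications") connects to this endpoint, and each broadcast JSON string arrives as a message event.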

℉絮湮 2024-12-11 21:38:29


Sorry for bumping this up, but I believe numerous people will come across this thread, and the accepted answer is, IMHO, at least outdated, if not misleading.

In order of priority, I'd put it as follows:

1) WebSockets are the solution nowadays. I've personally had the experience of introducing WebSockets into enterprise-oriented applications. All of the major browsers (Chrome, Firefox, IE - in alphabetical order :)) support WebSockets natively. All major servers/servlet containers (IIS, Tomcat, Jetty) do as well, and there are quite a number of frameworks in Java implementing the JSR 356 API. There is a problem with proxies, especially in cloud deployments. Yet there is high awareness of the WebSockets requirements, so NginX already supported them 1.5 years ago. Anyway, the secure 'wss' protocol solves the proxy problem in 99.9% of cases (not 100% just to be on the safe side; I've never experienced a failure myself).

2) Long polling is probably the second best solution, and the 'probably' part is due to the 'short polling' alternative. By long polling I mean a repeated request from client to server, which the server answers as soon as any data is available. Thus, one poll can finish in a few millis, another one only at the maximum wait time (a client-side sketch follows after this list).
Be sure to limit the poll time to something less than two minutes, since otherwise you'll usually need to handle timeout errors on the client side. I'd propose limiting the poll time to something like tens of seconds.
Once a poll finishes (on timeout or earlier), it is immediately repeated (though it's better to establish some simple protocol and give your server a chance to tell the client to 'suspend').
The con of long polling, which IMHO justifies continuing this list, is that it holds one of the few (4? 8? still not that many) connections the browser allows each page to establish to a server, so it can eat up roughly 12% to 25% of your website's client traffic resource.

3) Short polling is not much loved, but sometimes I prefer it. The main con, of course, is the higher load on the browser and the server from establishing new connections. Yet I believe that if connection pools are used properly, the overhead is much smaller than it looks at first glance.

4) HTTP streaming, whether page streaming via an IFrame or XHR streaming, is IMHO a highly BAD solution, since it accumulates the cons of all the rest and more:

  • you'll hold connections open (browser and server resources);

  • you'll still be eating into the total available client traffic limit;

  • most evil: you'll need to design/implement (or reuse a design/implementation of) the actual content delivery in order to differentiate new content from old (be it by pushing scripts, oh my! or by tracking the length of the accumulated content). Please don't do this.
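As a concrete illustration of the long-polling loop described in point 2 above, here is a minimal client-side sketch; the URL, the "since" parameter, and the use of HTTP 503 as the server's 'suspend' hint are assumptions made up for the example.

// Long-poll loop: re-poll immediately after each response, back off when asked.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class LongPollClient {

    public static void main(String[] args) throws Exception {
        long since = 0L;
        while (true) {
            URL url = new URL("http://localhost:8080/notifications?since=" + since);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Client-side timeout stays above the server's poll window
            // (tens of seconds, well under two minutes), so the server answers first.
            conn.setReadTimeout(45_000);
            try {
                int status = conn.getResponseCode();
                if (status == 200) {
                    try (BufferedReader in = new BufferedReader(
                            new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                        String body = in.lines().reduce("", String::concat);
                        System.out.println("notifications: " + body);
                    }
                    since = System.currentTimeMillis(); // simplified: a real protocol would echo a server timestamp
                } else if (status == 503) {
                    Thread.sleep(5_000); // the server asked the client to back off ('suspend')
                }
            } catch (SocketTimeoutException e) {
                // Server held the request to its max wait time and sent nothing: just re-poll.
            } finally {
                conn.disconnect();
            }
            // No artificial delay: the next poll starts immediately.
        }
    }
}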

Update (20/02/2019)

If WebSockets are not an option, Server-Sent Events are the second best option IMHO; effectively, browsers implement HTTP streaming for you here at a lower level.
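As an illustration of the Server-Sent Events option, here is a minimal sketch of a servlet that writes the text/event-stream format; the loop and the one-second sleep stand in for a real event source, and a real implementation would write asynchronously rather than hold a container thread.

// SSE sketch: keep the response open and emit "data:" frames.
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/events")
public class SseServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/event-stream");
        resp.setCharacterEncoding("UTF-8");
        resp.setHeader("Cache-Control", "no-cache");

        PrintWriter out = resp.getWriter();
        for (int i = 0; i < 10; i++) {
            // Each event is one or more "data:" lines followed by a blank line.
            out.write("data: {\"notification\": " + i + "}\n\n");
            out.flush();
            try {
                Thread.sleep(1_000); // stand-in for waiting on a real event source
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}

On the browser side, new EventSource("/events") subscribes to this stream and fires a message event for each data: frame, with reconnection handled automatically.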

我恋#小黄人 2024-12-11 21:38:29


I have done this before using HTTP streaming via the Atmosphere framework, and it worked fine.

Visit Comet, Streaming.

If you look at the Atmosphere tutorial, they give multiple examples.

甜是你 2024-12-11 21:38:29


You may want to check how they implemented this in CometD: http://cometd.org.
Or you may even consider using that tool, rather than reinventing the wheel.
