Improving long-polling Ajax performance

Posted 2024-08-18 01:02:48


I'm writing a webapp (Firefox-compatible only) which uses long polling (via jQuery's ajax abilities) to send more-or-less constant updates from the server to the client. I'm concerned about the effects of leaving this running for long periods of time, say, all day or overnight. The basic code skeleton is this:

function processResults(xml)
{
    // do stuff with the xml from the server
}

function fetch()
{
    setTimeout(function ()
    {
        $.ajax({
            type: 'GET',
            url: 'foo/bar/baz',
            dataType: 'xml',
            success: function (xml)
            {
                processResults(xml);
                fetch();
            },
            error: function (xhr, type, exception)
            {
                if (xhr.status === 0)
                {
                    console.log('XMLHttpRequest cancelled');
                }
                else
                {
                    console.debug(xhr);
                    fetch();
                }
            }
        });
    }, 500);
}

(The half-second "sleep" is so that the client doesn't hammer the server if the updates are coming back to the client quickly - which they usually are.)

After leaving this running overnight, it tends to make Firefox crawl. I'd been thinking that this could be partially caused by a large stack depth since I've basically written an infinitely recursive function. However, if I use Firebug and throw a breakpoint into fetch, it looks like this is not the case. The stack that Firebug shows me is only about 4 or 5 frames deep, even after an hour.

One of the solutions I'm considering is changing my recursive function to an iterative one, but I can't figure out how I would insert the delay in between Ajax requests without spinning. I've looked at the JS 1.7 "yield" keyword but I can't quite wrap my head around it, to figure out if it's what I need here.
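For what it's worth, modern JavaScript's async/await (not available in the JS 1.7-era Firefox this question targets) expresses the same polling loop iteratively, with the delay inserted by awaiting a timer rather than by recursion. A minimal sketch, where `fetchUpdate()` is a hypothetical stand-in for the real `$.ajax` call, simulated here so the example is self-contained:

```javascript
// Iterative long-poll loop: each `await` suspends the loop instead of
// nesting callbacks, so the delay needs no recursion and no spinning.
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function fetchUpdate() {
  // The real app would await a request to 'foo/bar/baz' and parse XML;
  // this stand-in just returns a canned payload.
  return '<update/>';
}

async function pollLoop(iterations) {
  const results = [];
  for (let i = 0; i < iterations; i++) {
    await sleep(10);                   // the half-second pause, shortened here
    results.push(await fetchUpdate()); // processResults(xml) in the real app
  }
  return results;                      // the stack stays flat on every pass
}
```

The loop body runs on a fresh stack each iteration, so running it for hours cannot deepen the stack any more than the recursive version does.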

Is the best solution just to do a hard refresh on the page periodically, say, once every hour? Is there a better/leaner long-polling design pattern that won't put a hurt on the browser even after running for 8 or 12 hours? Or should I just skip the long polling altogether and use a different "constant update" pattern since I usually know how frequently the server will have a response for me?


枯叶蝶 2024-08-25 01:02:48


It's also possible that it's FireBug. You're console.logging stuff, which means you probably have a network monitor tab open, etc, which means every request is stored in memory.

Try disabling it, see if that helps.

智商已欠费 2024-08-25 01:02:48


I suspect that memory is leaking from processResults().

I have been using very similar code to yours in a long-polling web application, which is able to run uninterrupted for weeks without a page refresh.

Your stack should not be deep, because fetch() returns immediately. You do not have an infinitely recursive loop.

You may want to use the Firefox Leak Monitor Add-on to assist you in finding memory leaks.
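To illustrate the kind of leak this answer suspects, here is a hypothetical sketch (none of these names are from the original app) of a `processResults`-style handler that retains every server response, and a bounded alternative:

```javascript
// Hypothetical leak: keeping a reference to every response means memory
// grows for as long as the poll runs (all night, in the asker's case).
const history = [];

function leakyProcessResults(xml) {
  history.push(xml); // grows without bound over an 8-12 hour session
}

// A bounded alternative: cap how many responses are retained.
const MAX_HISTORY = 100;

function boundedProcessResults(xml, store) {
  store.push(xml);
  if (store.length > MAX_HISTORY) {
    store.shift(); // drop the oldest response
  }
}
```

The same principle applies to DOM nodes appended from each response: anything accumulated per poll must eventually be released, or a poll every half second will exhaust memory overnight.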

栀子花开つ 2024-08-25 01:02:48


The stack depth of 4-5 is correct. setTimeout and $.ajax are asynchronous calls, which return immediately. The callback is later called by the browser with an empty call stack. Since you cannot implement long polling in a synchronous way, you must use this recursive approach. There is no way to make it iterative.

I suspect the reason for this slow down is that your code has a memory leak. The leak could either be in $.ajax by jQuery (very unlikely) or in your processResults call.
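The point about the empty call stack can be demonstrated with a small self-contained example (not the original app's code): chaining calls through `setTimeout` does not deepen the stack, no matter how many iterations run.

```javascript
// "Recursing" through setTimeout does not grow the call stack: tick()
// returns immediately, and each scheduled callback later starts on a
// fresh, nearly empty stack. Even thousands of iterations cannot
// overflow it, unlike genuine unbounded recursion.
function tick(n, done) {
  if (n === 0) {
    done();
    return;
  }
  setTimeout(function () {
    // By the time this runs, the outer tick() frame is long gone;
    // there is no chain of pending tick() frames beneath us.
    tick(n - 1, done);
  }, 0);
}
```

Running `tick` with a large count completes without a stack overflow, which is exactly why Firebug showed the asker only 4-5 frames even after an hour.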

冷夜 2024-08-25 01:02:48


It is a bad idea to call fetch() from inside the method itself. Recursion is better used when you expect that at some point the method will reach an end and the results will start to be sent back to the caller. The thing is, when you call the method recursively it keeps the caller method open and using memory. If you are only 3-4 frames deep, it is because jQuery or the browser is somehow "fixing" what you've done.

Recent releases of jQuery support long polling by default. This way you can be sure that you are not depending on the browser's intelligence to deal with your infinite recursive call. When calling the $.ajax() method you could use the code below to do a long poll combined with a safe wait between calls (two seconds in this example).

function myLongPoll(){
    setTimeout(function(){
        $.ajax({
            type:'POST',
            dataType: 'JSON',
            url: 'http://my.domain.com/action',
            data: {},
            cache: false,
            success:function(data){

                //do something with the result

            },
            complete: myLongPoll, 
            async : false,
            timeout: 5000
        });
   //Doesn't matter how long the ajax call took, 1 millisecond or
   //5 seconds (timeout), the next call will only happen after 2 seconds
   }, 2000);
}

This way you can be sure that the $.ajax() call has finished before the next one starts. This can be verified by adding a simple console.log() before and another after your $.ajax() call.
