How can I help IE's buggy garbage collector?
I have a JavaScript application that uses XMLHttpRequest to fetch and parse about 60,000 XML documents. However, IE's memory usage grows quickly, and eventually the program crashes. I suspect this has to do with IE's JScript GC. Below is a simplified version of my code:
Above the code, I declare two variables:
var xmlhttp;
var xmlDoc;
When the code first starts running, I set the value of xmlhttp:
xmlhttp = new XMLHttpRequest();
The script then enters the main loop:
function loadXML() {
    xmlhttp.abort();
    xmlhttp.open("GET", url, false);
    xmlhttp.setRequestHeader('Content-Type', 'text/xml', 'Pragma', 'no-cache');
    xmlhttp.send("");
    while (xmlhttp.readyState != 4) { }
    xmlDoc = xmlhttp.responseXML;
    setTimeout("readXML()", 0);
}
function readXML() {
    //Reads the XML.
    //If all data has been retrieved, exit loop.
    //Else, change the url and go back to loadXML()
}
Google Chrome runs the code just fine, with no errors. However, IE loops about 2000 times before crashing with an "Out of Memory" error. Is the garbage collector not doing its job? Can I rewrite my code to prevent the problem?
2 Answers
You should not use a busy loop at all to wait for the result of an XMLHttpRequest. Also, there's no reason to have the xmlhttp object public. Instead, create a new one on every call and register a callback:
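The answer's code block did not survive in this copy. Below is a minimal sketch of the callback pattern it describes; the names fetchSequentially, handleDoc, and urls are illustrative, not from the original answer:

```javascript
// Fetch each URL asynchronously, one after another, with a local
// XMLHttpRequest object per request and a readystatechange callback.
function fetchSequentially(urls, handleDoc, onDone) {
    var i = 0;
    function next() {
        if (i >= urls.length) {
            if (onDone) onDone();
            return;
        }
        var xhr = new XMLHttpRequest(); // fresh, local object per request
        xhr.open("GET", urls[i], true); // true = asynchronous
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                handleDoc(xhr.responseXML);
                xhr = null; // release the reference
                i++;
                next(); // move on to the next document
            }
        };
        xhr.send();
    }
    next();
}
```

Because each request is asynchronous and its object goes out of scope (and is nulled) when it completes, the browser never blocks in a busy loop and old requests become collectable.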
You should definitely follow phihag's advice on a better way to process the xml requests and wait for their completion.
Then I would suggest nulling out your old xmlhttp object and creating a new one for each successive request so each old request can be entirely freed:
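The code block from the original answer is missing in this copy. Here is a minimal sketch of the idea, reusing the question's variables; the URL is a hypothetical stand-in, and the two headers are set with separate calls because setRequestHeader takes one header per call:

```javascript
var xmlhttp;
var xmlDoc;
var url = "doc1.xml"; // hypothetical URL for illustration

function loadXML() {
    xmlhttp = new XMLHttpRequest(); // fresh object for every request
    xmlhttp.open("GET", url, false);
    xmlhttp.setRequestHeader("Content-Type", "text/xml");
    xmlhttp.setRequestHeader("Pragma", "no-cache");
    xmlhttp.send("");
    xmlDoc = xmlhttp.responseXML;
    xmlhttp = null; // drop the only reference so the old request can be freed
    setTimeout(readXML, 0);
}

function readXML() {
    // Process xmlDoc; update url and call loadXML() again if more remains.
}
```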
You don't show us how you go about running the same thing 60,000 times so I can't really help with the details of that code, but if the xmlhttp object itself is leaking some memory on each xmlhttp request, then throwing away the old object and creating a new one each time may help.
We also can't see what you're doing in readXML that could be leaking or what you're doing in the code that loops and gets the next request. You could be leaking function closures, you could have circular object references, etc...