But why is the browser DOM still so slow after 10 years of effort?
The web browser DOM has been around since the late '90s, but it remains one of the largest constraints in performance/speed.
We have some of the world's most brilliant minds from Google, Mozilla, Microsoft, Opera, W3C, and various other organizations working on web technologies for all of us, so obviously this isn't a simple "Oh, we didn't optimize it" issue.
My question is: if I were to work on the part of a web browser that deals specifically with this, why would I have such a hard time making it run faster?
My question is not asking what makes it slow; it's asking why it hasn't become faster.
This seems to be against the grain of what's going on elsewhere, such as JS engines with performance near that of C++ code.
Example of a quick script:
var someString;
for (var i = 0; i <= 10000; i++) {
  someString = "foo";  // plain JavaScript assignment, no DOM involved
}
Example that is slow because of the DOM:
for (var i = 0; i <= 10000; i++) {
  element.innerHTML = "foo";  // element is some DOM node already on the page
}
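(A minimal sketch of the difference, not from my original tests: if the string work stays in plain JavaScript and the DOM is touched only once, most of the cost disappears. It reuses the element variable from the example above.)

var html = "";
for (var i = 0; i <= 10000; i++) {
  html += "foo";  // string building stays in pure JavaScript
}
element.innerHTML = html;  // one DOM write instead of 10,001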
Some details as per request:
After benchmarking, it looks like this is not an unsolvably slow issue; rather, the wrong tool is often used, and which tool is right depends on what you're doing cross-browser.
It looks like DOM efficiency varies greatly between browsers, but my original presumption that the DOM is slow and unsolvable seems to be wrong.
I ran tests against Chrome, FF4, and IE 5-9; the operations per second for each are charted on the test page linked below.
Chrome is lightning fast when you use the DOM API, but vastly slower when using the .innerHTML property (on the order of 1,000x slower). FF, meanwhile, is worse than Chrome in some areas (for instance, its append test is much slower than Chrome's), but its innerHTML test runs much faster than Chrome's.
IE actually seems to be getting worse at DOM append, and better at innerHTML, as you progress through the versions since 5.5 (e.g., append at 73 ops/sec in IE8 is down to 51 ops/sec in IE9).
I have the test page over here:
http://jsperf.com/browser-dom-speed-tests2
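If the jsPerf page is unavailable, a minimal hand-rolled harness along these lines reproduces the comparison; the function and variable names here are my own, not from that test page:

function opsPerSecond(fn, ms) {
  var ops = 0;
  var end = new Date().getTime() + ms;
  while (new Date().getTime() < end) {  // run fn repeatedly for ms milliseconds
    fn();
    ops++;
  }
  return ops / (ms / 1000);
}

var container = document.createElement("div");

// DOM-API append test: create, attach, then detach a child node
var appendOps = opsPerSecond(function () {
  container.appendChild(document.createElement("span"));
  container.removeChild(container.lastChild);
}, 1000);

// innerHTML test: replace the children by parsing a markup string
var innerHTMLOps = opsPerSecond(function () {
  container.innerHTML = "<span>foo</span>";
}, 1000);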
What's interesting is that different browsers seem to face different challenges when generating the DOM. Why is there such disparity here?
Comments (4)
When you change something in the DOM it can have myriad side-effects to do with recalculating layouts, style sheets etc.
This isn't the only reason: when you set element.innerHTML = x, you are no longer dealing with ordinary "store a value here" variables, but with special objects which update a load of internal state in the browser when you set them.

The full implications of element.innerHTML = x are enormous. A rough overview:

- parse x as HTML
- destroy the existing child nodes of element and put the newly parsed nodes in their place
- recompute the styles and layout affected by the change

All these updates have to go through an API which bridges Javascript and the HTML engine. One reason that Javascript is so fast these days is that we compile it to some faster language or even machine code; masses of optimisations happen because the behaviour of the values is well-defined. When working through the DOM API, none of this is possible. Speedups elsewhere have left the DOM behind.
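A small sketch of what that bridge costs in practice (my own example, assuming a page element with id "box"): interleaving DOM writes with DOM reads forces the engine to cross the API boundary and flush layout on every pass, while equivalent work on an ordinary variable never leaves the optimised JavaScript world.

var box = document.getElementById("box");
for (var i = 0; i < 1000; i++) {
  box.style.width = i + "px";  // write: invalidates the layout
  var h = box.offsetHeight;    // read: forces a synchronous reflow
}

var width = 0;
for (var j = 0; j < 1000; j++) {
  width = j;  // ordinary "store a value here" variable, fully optimisable
}
box.style.width = width + "px";  // a single crossing at the end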
Firstly, anything you do to the DOM could be a user-visible change. If you change the DOM, the browser has to lay everything out again. It could be faster if the browser batched the changes and then laid everything out only every X ms (assuming it doesn't do that already), but perhaps there isn't huge demand for this kind of feature.
Secondly, innerHTML isn't a simple operation. It's a dirty hack that MS pushed, and the other browsers adopted it because it's so useful; but it's not part of the standard (IIRC). Using innerHTML, the browser has to parse the string and convert it to a DOM. Parsing is hard.
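To the first point, you can do the batching yourself today by working on a detached node, so the browser performs only one visible layout. A minimal sketch, with my own element id and counts:

var list = document.getElementById("list");
var detached = list.cloneNode(false);  // empty copy, not in the document
for (var i = 0; i < 1000; i++) {
  var item = document.createElement("li");
  item.appendChild(document.createTextNode("item " + i));
  detached.appendChild(item);  // no live layout happens here
}
list.parentNode.replaceChild(detached, list);  // one user-visible change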
Original test author is Hixie (http://nontroppo.org/timer/Hixie_DOM.html).
This issue has been discussed on StackOverflow here and on Connect (the bug tracker) as well. With IE10, the issue is resolved. By resolved, I mean they have partially moved on to another way of updating the DOM.
The IE team seems to handle DOM updates much like Microsoft's Excel-macros team, where it's considered poor practice to update the live cells on the sheet. You, the developer, are supposed to take the heavy-lifting task offline and then update the live sheet in a batch. In IE you are supposed to do that using a document fragment (as opposed to the document). With the new and emerging ECMA and W3C standards, document fragments are deprecated, so the IE team has done some nice work to contain the issue.
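For reference, the document-fragment pattern described above looks roughly like this (element names and counts are mine):

var fragment = document.createDocumentFragment();
for (var i = 0; i < 10000; i++) {
  var row = document.createElement("div");
  row.appendChild(document.createTextNode("row " + i));
  fragment.appendChild(row);  // built off-screen, no reflow per row
}
document.body.appendChild(fragment);  // one insertion into the live DOM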
It took them a few weeks to strip it down from ~42,000 ms in IE10-ConsumerPreview to ~600 ms in IE10-RTM, but it took a lot of leg-pulling to convince them that this IS an issue. Their claim was that there is no real-world example with 10,000 updates per element. Since the scope and nature of rich internet applications (RIAs) can't be predicted, it's vital to have performance close to that of the other browsers in the league. Here is another take on the DOM by the OP on MS Connect (in the comments):
Apparently, the underlying behavior of onload and onclick varies as well. It may get even better in future updates.
Actually, innerHTML is less slow than createElement.
In an effort to optimize, I found that JS can parse enormous JSON effortlessly, and JSON parsers can make a huge number of nested function calls without issues. One can also toggle thousands of elements between display:none and display:block without issues.
But if you try to create a few thousand elements (or even if you simply clone them), performance is terrible. You don't even have to add them to the document!
Then, once they are created, inserting them into and removing them from the page is super fast again.
It looks to me like the slowness has little to do with their relation to other elements of the page.
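A rough way to see the two costs separately (my own sketch, not the original measurements): time creation with the nodes kept out of the document, then time the insertion on its own.

var nodes = [];
var t0 = new Date().getTime();
for (var i = 0; i < 5000; i++) {
  var el = document.createElement("div");
  el.appendChild(document.createTextNode("item " + i));
  nodes.push(el);  // created, but not yet in the document
}
var t1 = new Date().getTime();
for (var j = 0; j < nodes.length; j++) {
  document.body.appendChild(nodes[j]);  // now insert into the live page
}
var t2 = new Date().getTime();
// t1 - t0 is pure element creation; t2 - t1 is insertion into the page.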