Varnish and ESI: what is the performance like?
I'm wondering how the performance of the ESI module is nowadays. I've read some posts on the web claiming that ESI performance on Varnish was actually slower than serving the real thing.
Say I had a page with over 3500 ESI includes: how would this perform? Is ESI designed for such usage?
Comments (3)
We're using Varnish and ESI to embed sub-documents into JSON documents. Basically a response from our app-server looks like this:
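(The original example response did not survive here. As an illustrative sketch only, with URLs and field names assumed rather than taken from the author's actual payload, such a JSON document with ESI placeholders might look like this:)

```
{
  "stations": [
    <esi:include src="/stations/1"/>,
    <esi:include src="/stations/2"/>,
    <esi:include src="/stations/3"/>
  ]
}
```

Varnish replaces each `<esi:include>` tag with the body of the referenced, independently cacheable resource before delivering the document. Note that by default Varnish only runs the ESI parser on responses that look like XML (the body starts with `<`); for a JSON body like this, the `esi_disable_xml_check` feature flag has to be enabled.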
The included resources are complete and valid JSON responses on their own. The complete list contains about 1070 stations, so when the cache is cold and a complete station list is the first request, Varnish issues roughly 1000 requests against our backend. When the cache is hot, ab looks like this:
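(For context, this is an addition, not part of the original answer: ESI processing in Varnish is enabled per response in VCL. A minimal sketch in VCL 4.0 syntax, with an assumed backend address and URL pattern:)

```vcl
vcl 4.0;

backend app {
    .host = "127.0.0.1";   # assumed app-server address
    .port = "8080";
}

sub vcl_backend_response {
    # Only run the ESI parser on documents that actually contain
    # <esi:include> tags; parsing every response wastes CPU.
    if (bereq.url ~ "^/stations") {
        set beresp.do_esi = true;
    }
}
```

Each included fragment is then cached under its own URL, which is what makes the warm-cache numbers below possible.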
100 req/sec doesn't look that good, but consider the size of the document: 214066 KBytes/sec more than saturates a 1 Gbit interface.
A single request with a warm cache (ab -c 1 -n 1 ...) shows 83 ms/req.
The backend itself is Redis-based. We're measuring a mean response time of 0.9 ms [sic] in New Relic. After restarting Varnish, the first request with a cold cache (ab -c 1 -n 1 ...) shows 3158 ms/req. That means Varnish and our backend together need about 3 ms per ESI include when generating the response. This is a standard Core i7 pizza box with 8 cores, and I measured while it was under full load. We serve about 150 million requests/month this way with a hit rate of 0.9. These numbers do suggest that the ESI includes are resolved serially.
What you have to consider when designing a system like this is 1) that your backend can take the load after a Varnish restart, when the cache is cold, and 2) that your resources should not all expire at once. In the case of our stations they expire every full hour, but we add a random value of up to 120 seconds to the expiration header.
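The expiry-jitter trick from point 2) can be sketched on the app-server side. This is a hypothetical helper (the function name and defaults are mine, not the author's code), assuming a base TTL of one hour and up to 120 seconds of jitter:

```python
import random
import time
from email.utils import formatdate


def expiring_headers(base_ttl_s: int = 3600, max_jitter_s: int = 120) -> dict:
    """Build expiration headers with random jitter (illustrative sketch).

    Adding up to `max_jitter_s` seconds spreads the expiry times out, so
    cached fragments don't all go cold at the same instant and stampede
    the backend.
    """
    ttl = base_ttl_s + random.randint(0, max_jitter_s)
    return {
        "Cache-Control": "public, max-age=%d" % ttl,
        "Expires": formatdate(time.time() + ttl, usegmt=True),
    }
```

Because every fragment gets a slightly different lifetime, the cache goes stale gradually instead of all 1000+ fragments expiring within the same second.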
Hope that helps.
This isn't first-hand, but I'm led to believe that Varnish's current ESI implementation serialises include requests; i.e., they're not concurrent.
If that's the case, it would indeed suck for performance in the case you mention.
I'll try to get someone with first-hand experience to comment.
Parallel ESI requests are available in the **commercial** version of Varnish: https://www.varnish-software.com/plus/parallel-esi/. The parallel nature of the fragment requests apparently makes assembling a page composed of multiple fragments faster.
(this would be a comment but I have insufficient reputation to do that)