Limiting varnish client waiting time, not back-end time
I'm looking for a solution to guarantee client response time with varnish, without limiting backend response time.
I have around 100 different resources (http://host/resource.js?id=1 etc.) which on average compute within a second on the back-end. The resources are cached by varnish, so each of them can be served to many clients concurrently. The resources are included as synchronous (page-blocking) javascript, so the responses should be served fast (e.g. within 3 seconds). Because I would like to guarantee client response time, I could not think of a better solution than to set the back-end timeouts to this 3-second limit. An example vcl looks like:
backend mybackend {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 100ms;
    .first_byte_timeout = 3s;
    .between_bytes_timeout = 3s;
    .probe = {
        .url = "/resource?id=1";
        .timeout = 3s;
        .window = 4;
        .threshold = 4;
        .interval = 15s;
    }
}

sub vcl_recv {
    set req.backend = mybackend;
    set req.grace = 5d;
    return (lookup);
}

sub vcl_fetch {
    set obj.ttl = 2m;
    set obj.grace = 5d;
    return (deliver);
}
My problem is the following. After I've stopped the back-end for 5 minutes and restarted it (while varnish serves stale data within the grace period), many different resources (past their TTL but within grace) are fetched from the back-end concurrently. This hits the database hard, none of the resources is delivered within 3 seconds, and nothing gets cached.
How do I avoid this problem? I would like to guarantee a client response time without limiting back-end response time. A temporary failure (dummy javascript) would be acceptable. Is there some way to spread the requests over time? (Stale data is preferred over errors.)
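One untested direction I can think of for spreading the refreshes is per-object TTL jitter, so the ~100 resources do not all expire at the same moment. A sketch, assuming Varnish 3.x (where vcl_fetch uses beresp rather than obj) and its std vmod:

```vcl
import std;

sub vcl_fetch {
    # Base TTL of 2 minutes plus up to 60 seconds of random jitter per
    # object; std.duration falls back to 150s if the string fails to parse.
    set beresp.ttl = std.duration(std.random(120, 180) + "s", 150s);
    set beresp.grace = 5d;
    return (deliver);
}
```

I have not verified this; in particular the REAL-to-string conversion in std.duration's first argument is an assumption.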
Thanks,
Ivor
One of the easiest solutions would be to not make your back end publicly available until you have cached some of your resources (after a server restart, hold traffic for some timeout, e.g. 10 minutes).
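A minimal sketch of such a warm-up step, with the fetcher injected so it can be pointed at the varnish front end; the host, path, and id range 1..100 are assumptions taken from the question:

```python
def warm_cache(fetch, ids):
    """Request each resource once so varnish caches it; return the ids
    whose fetch failed, so they can be retried before opening the
    back end to public traffic."""
    failed = []
    for rid in ids:
        try:
            fetch(rid)
        except OSError:
            failed.append(rid)
    return failed


def http_fetch(rid):
    # Hypothetical fetcher: host and URL pattern are assumptions.
    import urllib.request
    url = "http://host/resource.js?id=%d" % rid
    urllib.request.urlopen(url, timeout=10).read()


# Usage: retry until everything is warm, then open the back end.
# warm_cache(http_fetch, range(1, 101))
```

The retry loop and the decision of when the cache is "warm enough" are left to the operator; the point is only that no public request should reach the back end before the cache is populated.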