Amazon SDB - what is the explanation for the PUTS-per-second limit?

Posted 2024-10-04 20:40:16

I believe the maximum number of PUT requests per second to Amazon's SimpleDB is 300?

What happens when I throw 500 or 1,000 requests at it? Are they queued on the Amazon side, do I get 504s, or should I build my own queuing server on EC2?

Comments (2)

清秋悲枫 2024-10-11 20:40:16

The max request volume is not a fixed number, but a combination of factors. There is a per-domain throttling policy, but there seems to be some room for bursting requests before throttling kicks in. Also, every SimpleDB node handles many domains and every domain is handled by multiple nodes. The load on the node handling your request also contributes to your max request volume. So you can get higher throughput (in general) during off-peak hours.

If you send more requests than SimpleDB is willing or able to service, you will get back a 503 HTTP code. 503 Service Unavailable responses are business as usual and should be retried. There is no request queuing going on within SimpleDB.
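
A minimal retry sketch, assuming a caller-supplied send_put callable that issues one SimpleDB PutAttributes request and returns its HTTP status code (that callable and its return convention are assumptions for illustration, not part of any particular SDK):

```python
import random
import time

def put_with_retry(send_put, max_retries=5, base_delay=0.1):
    """Retry a single PUT when SimpleDB answers 503, backing off exponentially.

    send_put is a hypothetical zero-argument callable that performs one
    PutAttributes request and returns the HTTP status code it received.
    """
    for attempt in range(max_retries + 1):
        status = send_put()
        if status != 503:
            return status                      # success or a non-retryable error
        if attempt == max_retries:
            break
        # Exponential backoff with full jitter so many clients don't retry in lockstep.
        delay = base_delay * (2 ** attempt)
        time.sleep(random.uniform(0, delay))
    return 503                                 # still throttled after all retries
```

Most SDKs already do something like this for you; the point is only that a 503 here means "slow down and try again", not that the write is permanently rejected.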

If you want to get the absolute maximum available throughput, you have to be able to (or have a SimpleDB client that can) micromanage your request transmission rate. When the 503 response rate reaches about 10% you have to back off your request volume and subsequently build it back up. Also, spreading the requests across multiple domains is the primary means of scaling.
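
As a rough illustration of that ~10% rule and of spreading writes across domains, here is a sketch; the class and function names, the window size, and the halve-on-error / 5%-growth adjustment factors are all assumptions, not anything SimpleDB prescribes:

```python
import hashlib
from collections import deque

class AdaptiveRateLimiter:
    """Shrink the send rate when 503s exceed ~10% of recent responses,
    then rebuild it gradually once they subside."""

    def __init__(self, initial_rate=50.0, min_rate=1.0, max_rate=300.0,
                 window=100, threshold=0.10):
        self.rate = initial_rate            # target requests per second
        self.min_rate = min_rate
        self.max_rate = max_rate
        self.window = deque(maxlen=window)  # recent HTTP status codes
        self.threshold = threshold

    def record(self, status_code):
        """Feed every response code in; adjust the rate once the window is full."""
        self.window.append(status_code)
        if len(self.window) < self.window.maxlen:
            return
        error_rate = sum(1 for s in self.window if s == 503) / len(self.window)
        if error_rate >= self.threshold:
            self.rate = max(self.min_rate, self.rate * 0.5)   # back off hard
        else:
            self.rate = min(self.max_rate, self.rate * 1.05)  # build back up slowly

    def delay(self):
        """Seconds to sleep before the next request at the current rate."""
        return 1.0 / self.rate


def pick_domain(item_name, domains):
    """Spread items across several SimpleDB domains by hashing the item key."""
    digest = hashlib.md5(item_name.encode("utf-8")).hexdigest()
    return domains[int(digest, 16) % len(domains)]
```

A writer loop would call pick_domain to choose the target domain, sleep for limiter.delay() between requests, and pass every response code to limiter.record().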

I wouldn't recommend building your own queuing server on EC2. I would try to get SimpleDB to handle the request volume directly. An extra layer could smooth things out, but it won't let you handle higher load.

别再吹冷风 2024-10-11 20:40:16

I would use the work done at Netflix as inspiration for high-throughput writes:
http://practicalcloudcomputing.com/post/313922691/5-steps-simpledb-performance
