Limiting Amazon AWS CloudFront

Published 2024-12-11 17:30:42

I've recently set up a new site which utilises Amazon CloudFront to distribute very large files; however, Amazon is currently making so many requests to my server, for so long, that my entire site is coming to a standstill.

I should note that I'm not using S3; CloudFront is connecting directly to my server.

I have a 100 Mbit/s data connection, and the files I'm trying to distribute are two 3 GB files. However, if I run iftop over SSH, Amazon IP addresses seem to take up every row, probably trying to cache the same file to multiple different edge servers, and they appear to be using up my entire connection.

Is there any way to limit CloudFront to a connection of, say, 10 Mbit/s or less?
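As far as I know, CloudFront itself offers no per-origin bandwidth cap, so the kind of limit the question asks about would have to be approximated at the origin. A minimal sketch of origin-side pacing in Python — the rate and chunk size are illustrative assumptions, not CloudFront settings:

```python
import io
import time

def throttled_chunks(stream, rate_bytes_per_s, chunk_size=64 * 1024):
    """Yield chunks from `stream`, sleeping after each one so sustained
    throughput stays near `rate_bytes_per_s` (a simple pacing loop)."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk
        # Sleep for the time this chunk "should" have taken at the cap.
        time.sleep(len(chunk) / rate_bytes_per_s)

# Illustrative: pace a small in-memory "file" at ~10 Mbit/s (1.25 MB/s).
data = io.BytesIO(b"x" * 256 * 1024)
served = b"".join(throttled_chunks(data, rate_bytes_per_s=1_250_000))
```

Real deployments would more likely do this in the web server (e.g. a rate-limiting module) than in application code, but the pacing idea is the same.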

Comments (2)

呆橘 2024-12-18 17:30:42

Are you sure that you are setting the caching headers properly for your files? CloudFront respects the Expires and Cache-Control headers, which you can use to both extend and reduce the amount of time that a file is considered valid. Adding Cache-Control: public, max-age=86400 to your response headers will cause edge servers to cache your files for up to a day (86400 seconds).
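A minimal sketch of emitting that header, assuming the origin is (or could be fronted by) a small Python HTTP server — the port and the one-day `max-age` are illustrative choices, not requirements:

```python
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

def cache_control(max_age: int) -> str:
    """Build the Cache-Control value that tells CloudFront edge servers
    how long (in seconds) a cached copy stays fresh."""
    return f"public, max-age={max_age}"

class CachingHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # One day (86400 s): edge locations can serve the cached copy
        # and re-fetch from the origin at most once per day.
        self.send_header("Cache-Control", cache_control(86400))
        super().end_headers()

# To serve the current directory with these headers (blocks forever):
# ThreadingHTTPServer(("", 8080), CachingHandler).serve_forever()
```

Any origin works the same way: the only thing CloudFront sees is the header on the response.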

Another thing to note is that edge servers have limited storage capacities for caching files. Given the choice between keeping a 10GB file that is used semi-frequently and a 10KB file that is used less frequently, Amazon may elect to remove the 10GB file in order to serve more customers. If possible, consider reducing your object size to avoid being expunged.

忆梦 2024-12-18 17:30:42

If you only have 2 large files and those files don't change frequently, why not just drop them in S3 and make that bucket the origin for a CloudFront distribution? Then you only have to transfer the files over your Internet connection one time and you don't have to worry about any infrastructure related to the distribution of those files.
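A quick sketch of that setup, using a hypothetical bucket name (the upload call is shown as a comment because it needs AWS credentials and the boto3 package):

```python
# Upload each file once, e.g. with boto3 (hypothetical names):
#   import boto3
#   boto3.client("s3").upload_file("file1.bin", "example-large-files", "file1.bin")

def s3_origin_domain(bucket: str, region: str = "us-east-1") -> str:
    """Virtual-hosted-style S3 endpoint to enter as the CloudFront
    distribution's origin domain name."""
    return f"{bucket}.s3.{region}.amazonaws.com"

origin = s3_origin_domain("example-large-files")
```

After that, S3 serves every edge-server fetch, and the 100 Mbit/s origin connection is used only for the one-time upload.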
