Reason for slow upload times from Lambda to S3
We have a service on EC2 with a requirement to upload many files to an S3 bucket, where the number of requests is below the maximum configured on S3. When we upload using the EC2 instance, each file uploads in roughly 200 ms. The same files, with the same content length, are taking more time on AWS Lambda. Is there any particular reason for the increase in time?

I see an increase in time for some files and not for others. Some take around 3-4 seconds for the same content length.

The EC2 instance is a c5.large, and I have configured 10 GB of memory for the AWS Lambda function.

The bucket is in the same region as the Lambda function. The times are obtained from the logs by recording timestamps before the upload starts and after it completes. These files are produced by processing data from database calls inside the application.
Unfortunately, there's no good way around cold-start times.
You can use provisioned concurrency, but that means paying for always-on Lambdas, and Java's "load classes only when needed" design means that you'll need to write explicit code to "warm up" the execution environment before the first request comes in.
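The warm-up idea above can be sketched as follows. This is a minimal illustration (in Python for brevity; in Java the equivalent is doing work in static initializers or the handler constructor), where the init work is a hypothetical stand-in for creating an S3 client and loading dependencies:

```python
import time

# Anything at module scope runs once per execution environment, during
# the cold start -- not on every invocation.
_INIT_START = time.perf_counter()

def expensive_init():
    # Hypothetical stand-in for cold-start work: creating an S3 client,
    # loading classes, reading configuration, opening connections.
    return {"client": "s3"}

CLIENT = expensive_init()  # paid once, at cold start
INIT_MS = (time.perf_counter() - _INIT_START) * 1000

def handler(event, context=None):
    # Warm invocations reuse CLIENT; only a cold start pays expensive_init().
    return {"client": CLIENT["client"], "init_ms_at_cold_start": INIT_MS}
```

Moving initialization to module scope doesn't eliminate the cold-start cost, but it ensures warm invocations (the 200 ms case) never pay it again.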
One thing that might work is to limit the number of concurrent Lambdas. If you barrage Lambda with requests, it will spin up as many execution environments as it can to process those requests. So you'll pay the cold-start time for each of those new invocation environments.
However, if you use reserved concurrency, you specify the maximum number of concurrent instances. You will pay the cold-start penalty for each of those instances, but then Lambda will attempt to reuse instances and not spin up new ones. It will, however, shut down environments if there aren't any requests (so you're not paying for them, but will incur cold-start times on the next burst).
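Reserved concurrency is a per-function setting; one way to apply it is via the AWS CLI (the function name below is a placeholder):

```shell
# Cap this function at 10 concurrent execution environments.
# "my-upload-function" is a hypothetical name -- substitute your own.
aws lambda put-function-concurrency \
    --function-name my-upload-function \
    --reserved-concurrent-executions 10
```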
You can also smooth out bursts by using an SQS queue: add each file to the queue, configure the number of concurrent Lambdas with reserved concurrency, and let them slowly work their way through the queue.
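A minimal sketch of that queue-driven consumer, assuming each SQS message body is a JSON document naming one file to upload; the message fields and the (commented-out) upload call are hypothetical:

```python
import json

def handler(event, context=None):
    # An SQS-triggered Lambda receives a batch of messages under
    # event["Records"]; each record's "body" is the raw message text.
    uploaded = []
    for record in event["Records"]:
        msg = json.loads(record["body"])
        # This is where the real function would upload, e.g.:
        # s3.upload_file(msg["path"], BUCKET, msg["key"])
        uploaded.append(msg["key"])
    return {"uploaded": uploaded}
```

With reserved concurrency capping the consumer, a burst of thousands of files queues up in SQS and drains through a small, mostly-warm pool of instances instead of fanning out into many cold starts.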
Lastly, if all you're doing is uploading files, it's worth considering a different implementation language, such as Python.