What is the most scalable and high-performing Amazon Web Services (AWS) configuration for a RESTful web service?

Posted 2024-10-20 20:52:21

I'm building an asynchronous RESTful web service and I'm trying to figure out what the most scalable and high performing solution is. Originally, I planned to use the FriendFeed configuration, using one machine running nginx to host static content, act as a load balancer, and act as a reverse proxy to four machines running the Tornado web server for dynamic content. It's recommended to run nginx on a quad-core machine and each Tornado server on a single core machine. Amazon Web Services (AWS) seems to be the most economical and flexible hosting provider, so here are my questions:
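
For reference, here is a minimal sketch of one such Tornado backend, using Tornado's modern coroutine API; the port, route, and handler body are illustrative, not part of the actual setup:

```python
# Minimal sketch of one Tornado backend from the setup described above.
# Port, route, and response are illustrative placeholders.
import tornado.ioloop
import tornado.web

class ItemsHandler(tornado.web.RequestHandler):
    async def get(self):
        # A real service would await a non-blocking database or HTTP call here.
        self.write({"items": []})

def make_app():
    return tornado.web.Application([
        (r"/items", ItemsHandler),
    ])

if __name__ == "__main__":
    make_app().listen(8001)  # e.g. one backend per single-core machine
    tornado.ioloop.IOLoop.current().start()
```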

1a.) On AWS, I can only find c1.medium (dual core CPU and 1.7 GB memory) instance types. So does this mean I should have one nginx instance running on c1.medium and two Tornado servers on m1.small (single core CPU and 1.7 GB memory) instances?

1b.) If I needed to scale up, how would I chain these three instances to another three instances in the same configuration?

2a.) It makes more sense to host static content in an S3 bucket. Would nginx still be hosting these files?

2b.) If not, would performance suffer from not having nginx host them?

2c.) If nginx won't be hosting the static content, it's really only acting as a load balancer. There's a great paper here that compares the performance of different cloud configurations and says this about load balancers: "Both HaProxy and Nginx forward traffic at layer 7, so they are less scalable because of SSL termination and SSL renegotiation. In comparison, Rock forwards traffic at layer 4 without the SSL processing overhead." Would you recommend replacing nginx as the load balancer with one that operates at layer 4, or is Amazon's Elastic Load Balancer sufficiently high-performing?


Comments (1)

寄风 2024-10-27 20:52:21

1a) Nginx is an asynchronous (event-based) server; even with a single worker process it can handle a large number of simultaneous connections (max_clients = worker_processes * worker_connections / 4, ref) and still perform well. I myself have tested around 20K simultaneous connections on a c1.medium-class box (not in AWS). Here you would set worker_processes to two (one per CPU core) and run four Tornado backends (you can even test with more to see where it breaks). Only if this setup runs into problems should you add one more identical setup and chain the two via an Elastic Load Balancer.
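
To put rough numbers on that formula, a minimal sketch; the worker_connections value is an assumed tuning, not from the answer (the nginx default is far lower):

```python
# Worked example of the nginx rule of thumb quoted above.
worker_processes = 2        # one per CPU core on a c1.medium
worker_connections = 40960  # assumed tuning; the nginx default is 512 or 1024
max_clients = worker_processes * worker_connections // 4
print(max_clients)  # -> 20480, consistent with the ~20K figure above
```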

1b) As said in (1a), use an Elastic Load Balancer. Somebody has tested ELB at 20K requests/sec, and that is not the limit; he stopped there only because the testers lost interest.
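
For the chaining itself, a minimal boto3 sketch, assuming a classic ELB; the balancer name, region, zones, and instance IDs are placeholders, with each registered instance being the nginx front end of one complete setup:

```python
# Hypothetical sketch: chain two identical nginx+Tornado setups behind a
# classic Elastic Load Balancer. Names, zones, and IDs are placeholders.
import boto3

elb = boto3.client("elb", region_name="us-east-1")

elb.create_load_balancer(
    LoadBalancerName="rest-api-lb",
    Listeners=[{
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
    }],
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Register the nginx front end of each setup behind the balancer.
elb.register_instances_with_load_balancer(
    LoadBalancerName="rest-api-lb",
    Instances=[{"InstanceId": "i-0aaaaaaaaaaaaaaaa"},
               {"InstanceId": "i-0bbbbbbbbbbbbbbbb"}],
)
```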

2a) Host static content on CloudFront; it is a CDN and is meant for exactly this (cheaper and faster than serving straight from S3, and it can pull content from an S3 bucket or from your own server). It is highly scalable.
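
For the S3 origin side, a minimal boto3 sketch, assuming the assets live in an S3 bucket that CloudFront pulls from; the bucket, key, and cache headers are placeholders:

```python
# Hypothetical sketch: upload a static asset to the S3 bucket that backs the
# CloudFront distribution. Bucket, key, and cache headers are placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "static/app.js",                 # local file
    "my-static-assets-bucket",       # S3 origin bucket
    "static/app.js",                 # object key
    ExtraArgs={
        "ContentType": "application/javascript",
        "CacheControl": "public, max-age=86400",  # let the CDN cache for a day
    },
)
```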

2b) Obviously, with nginx serving the static files, it would have to serve more requests to the same number of users. Taking that load away reduces the work of accepting connections and sending the files across (and lowers bandwidth usage).

2c) Avoiding nginx altogether looks like a good solution (one less middleman). An Elastic Load Balancer will handle SSL termination and reduce the SSL load on your backend servers (this will improve backend performance). The experiment above showed around 20K requests/sec, and since it is elastic it should stretch further than a software LB (see this nice document on how it works).
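
To make the SSL-termination point concrete, a minimal boto3 sketch, again assuming a classic ELB; the balancer name and certificate ARN are placeholders:

```python
# Hypothetical sketch: terminate SSL at the ELB so the Tornado backends speak
# plain HTTP. The balancer name and certificate ARN are placeholders.
import boto3

elb = boto3.client("elb", region_name="us-east-1")
elb.create_load_balancer_listeners(
    LoadBalancerName="rest-api-lb",
    Listeners=[{
        "Protocol": "HTTPS",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "HTTP",   # backends receive decrypted traffic
        "InstancePort": 80,
        "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/example",
    }],
)
```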
