What are the theoretical performance limits of a web server?

Posted on 2024-07-14 16:43:31

In a currently deployed web server, what are the typical limits on its performance?

I believe a meaningful answer would be one of 100, 1,000, 10,000, 100,000 or 1,000,000 requests/second, but which is true today? Which was true 5 years ago? Which might we expect in 5 years? (ie, how do trends in bandwidth, disk performance, CPU performance, etc. impact the answer)

If it is material, the fact that HTTP over TCP is the access protocol should be considered. OS, server language, and filesystem effects should be assumed to be best-of-breed.

Assume that the disk contains many small unique files that are statically served. I'm intending to eliminate the effect of memory caches, and that CPU time is mainly used to assemble the network/protocol information. These assumptions are intended to bias the answer towards 'worst case' estimates where a request requires some bandwidth, some cpu time and a disk access.

I'm only looking for something accurate to an order of magnitude or so.

Comments (9)

尬尬 2024-07-21 16:43:32

OS, server language, and filesystem effects are the variables here. If you take them out, then you're left with a no-overhead TCP socket.

At that point it's not really a question of server performance but of the network. With a no-overhead TCP socket, the limit you hit will most likely be the number of connections your firewall or network switches can handle concurrently.
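For illustration only (not part of the original answer), here is a minimal Python sketch of that "no-overhead TCP socket" baseline: accept a connection, ignore the request, and return a canned response, so any ceiling you measure comes from the network path and the kernel rather than from server logic. The port and backlog are assumptions.

```python
import socket

RESPONSE = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Length: 2\r\n"
    b"Connection: close\r\n"
    b"\r\n"
    b"ok"
)

def serve(host="0.0.0.0", port=8080):
    # Accept connections and send a fixed response: no disk access, no parsing,
    # so throughput is bounded by the network and the kernel, not the app.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1024)            # listen backlog (illustrative value)
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.recv(4096)     # read and discard the request
                conn.sendall(RESPONSE)

if __name__ == "__main__":
    serve()
```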

故笙诉离歌 2024-07-21 16:43:32

In any web application that uses a database you also open up a whole new range of optimisation needs.

indexes, query optimisation etc

For static files, does your application cache them in memory?

etc, etc, etc
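As a hedged sketch of the "cache them in memory" question above, the smallest Python version is an LRU-cached read; the cache size and the example path are illustrative, not taken from the answer.

```python
from functools import lru_cache
from pathlib import Path

@lru_cache(maxsize=1024)            # number of distinct files kept in RAM (assumption)
def read_static(path: str) -> bytes:
    # First call for a path reads the disk; later calls are served from memory.
    return Path(path).read_bytes()

# Example (hypothetical path):
# body = read_static("static/index.html")
```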

御守 2024-07-21 16:43:32

This will depend on:
what your CPU cores are,
what speed your disks are,
how 'fat' a medium-sized hosting company's pipe is,
and what web server you run.

The question is too general.

Deploy your server, test it with tools like http://jmeter.apache.org/, and see how you get on.
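JMeter is the tool named above; if all you want is an order-of-magnitude requests/second figure, a rough Python sketch along these lines can stand in. The URL, request count, and worker count are assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/"      # assumed target
REQUESTS = 2000                     # total requests to send
WORKERS = 50                        # concurrent client threads

def fetch(_):
    # One complete request/response cycle.
    with urlopen(URL) as resp:
        resp.read()

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    list(pool.map(fetch, range(REQUESTS)))
elapsed = time.perf_counter() - start
print(f"{REQUESTS / elapsed:.0f} requests/second")
```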

冬天旳寂寞 2024-07-21 16:43:31

Read http://www.kegel.com/c10k.html. You might also read StackOverflow questions tagged 'c10k'. C10K stands for 10,000 simultaneous clients.

Long story short -- principally, the limit is neither bandwidth, nor CPU. It's concurrency.
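A small sketch of what the c10k-style approach looks like in practice, assuming Python's asyncio (which uses epoll/kqueue underneath): one process can hold many concurrent connections because each one costs a little state rather than a whole thread. The port and response body are illustrative.

```python
import asyncio

async def handle(reader, writer):
    # Each connection is a coroutine holding a small amount of state, not a thread.
    await reader.read(4096)
    writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nok")
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```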

彩虹直至黑白 2024-07-21 16:43:31

Six years ago, I saw an 8-proc Windows Server 2003 box serve 100,000 requests per second for static content. That box had 8 Gigabit Ethernet cards, each on a separate subnet. The limiting factor there was network bandwidth. There's no way you could serve that much content over the Internet, even with a truly enormous pipe.

In practice, for purely static content, even a modest box can saturate a network connection.
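A quick back-of-envelope check of those numbers (my arithmetic, not the answerer's): eight gigabit NICs at 100,000 requests/second leaves roughly 10 KB of payload per response before the wire saturates.

```python
nics = 8
bits_per_second = nics * 1_000_000_000         # 8 x 1 Gbit/s aggregate
bytes_per_second = bits_per_second / 8         # ~1 GB/s on the wire
requests_per_second = 100_000
print(bytes_per_second / requests_per_second)  # ~10,000 bytes of response each
```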

For dynamic content, there's no easy answer. It could be CPU utilization, disk I/O, backend database latency, not enough worker threads, too much context switching, ...

You have to measure your application to find out where your bottlenecks lie. It might be in the framework, it might be in your application logic. It probably changes as your workload changes.

菩提树下叶撕阳。 2024-07-21 16:43:31

I think it really depends on what you are serving.

If you're serving web applications that dynamically render html, CPU is what is consumed most.

If you are serving up a relatively small number of static items lots and lots of times, you'll probably run into bandwidth issues (since the static files themselves will probably find themselves in memory).

If you're serving up a large number of static items, you may run into disk limits first (seeking and reading files).
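One way to turn those three cases into a rough number is to compute each ceiling separately and take the smallest; the sketch below does that in Python with purely illustrative inputs.

```python
# All inputs are illustrative assumptions; whichever resource yields the
# lowest requests/second is the first bottleneck you hit.
cpu_cores = 8
cpu_seconds_per_request = 0.002               # dynamic HTML rendering cost
link_bytes_per_second = 125_000_000           # 1 Gbit/s pipe
bytes_per_response = 10_000
disk_reads_per_second = 100                   # one spinning disk, random reads
reads_per_request = 1

limits = {
    "cpu":     cpu_cores / cpu_seconds_per_request,
    "network": link_bytes_per_second / bytes_per_response,
    "disk":    disk_reads_per_second / reads_per_request,
}
print(min(limits, key=limits.get), limits)
```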

七月上 2024-07-21 16:43:31

If you are not able to cache your files in memory, then disk seek times will likely be the limiting factor and limit your performance to less than 1000 requests/second. This might improve when using solid state disks.
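The arithmetic behind that figure, assuming a typical 7200 RPM drive (the numbers are mine, not the answerer's): each uncached request pays an average seek plus about half a rotation, which caps a single spindle at roughly 100 random reads per second.

```python
avg_seek_s = 0.009                  # ~9 ms average seek (assumption)
half_rotation_s = 0.5 * 60 / 7200   # ~4.2 ms rotational latency at 7200 RPM
reads_per_second = 1 / (avg_seek_s + half_rotation_s)
print(round(reads_per_second))      # roughly 75 random reads/second per spindle
```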

著墨染雨君画夕 2024-07-21 16:43:31

100, 1,000, 10,000, 100,000 or 1,000,000 requests/second, but which is true today?

This test was done on a modest i3 laptop, but it reviewed Varnish, ATS (Apache Traffic Server), Nginx, Lighttpd, etc.

http://nbonvin.wordpress.com/2011/03/24/serving-small-static-files-which-server-to-use/

The interesting point is that using a high-end 8-core server gives very little boost to most of them (Apache, Cherokee, Litespeed, Lighttpd, Nginx, G-WAN):

http://www.rootusers.com/web-server-performance-benchmark/

As the tests were done on localhost to avoid hitting the network as a bottleneck, the problem is in the kernel which does not scale - unless you tune its options.

So, to answer your question, the margin for progress lies in the way servers process I/O.
They will have to use better (wait-free) data structures.
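As one concrete example of the kind of kernel-facing tuning that answer alludes to (my illustration, not from the benchmark pages): on Linux 3.9+, SO_REUSEPORT lets several worker processes each bind the same port with their own accept queue, removing contention on a single listen socket.

```python
import socket

def make_listener(port=8080):
    # Each worker process builds its own socket on the same port; the kernel
    # then spreads incoming connections across the workers' accept queues.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)   # Linux 3.9+ only
    s.bind(("0.0.0.0", port))
    s.listen(4096)
    return s
```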

眼前雾蒙蒙 2024-07-21 16:43:31

I think there are too many variables here to answer your question.

What processor, what speed, what cache, what chipset, what disk interface, what spindle speed, what network card, how configured, the list is huge. I think you need to approach the problem from the other side...

"This is what I want to do and achieve, what do I need to do it?"
