How many requests per minute are considered "heavy load"? (approximately)

Posted 2024-08-02 05:40:13


Oftentimes people talk in their (optimization- and performance-related) questions and answers about 'heavy load'.

I'm trying to quantify this in the context of a regular web application on a typical server (take SO and its fairly small infrastructure as an example) as a number of requests per minute, assuming that they return immediately (to simplify and take database speeds etc. out of the equation).

I'm looking for a nominal number/range, not 'where the CPU maxes out' or similar. A rough approximation would be great (e.g. >5000/min). Thank you!


Comments (6)

屌丝范 2024-08-09 05:40:13


I would think that the proper answer to this, given that you don't want a hardware load measure (CPU, memory, IO utilization), is that heavy load is any request volume per time unit at or above the required maximum number of requests per time unit.

The required maximum amount of requests is what has been defined with the customer or with whomever is in charge of the overall architecture.

Say X is that required maximum load for the application. I think something like this would approximate the answer:

0 < Light Load < X/2 < Regular Load < 2X/3 < High Load < X <= Heavy Load

The thing with a single number out of thin air is that it has no relation whatsoever to your application. What counts as heavy load is totally, absolutely, inescapably tied to what the application is supposed to do.

That said, 200 requests per second is a load that would keep small web servers busy (~12,000 a minute).
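
As a minimal sketch of that idea (assuming X is the agreed-upon maximum requests per minute; the function name and the thresholds simply mirror the ranges above):

```python
def classify_load(rpm: float, x: float) -> str:
    """Label a requests-per-minute figure against the required maximum X."""
    if rpm < x / 2:
        return "light"
    if rpm < 2 * x / 3:
        return "regular"
    if rpm < x:
        return "high"
    return "heavy"

# Example with a required maximum of 12,000 requests/minute (i.e. 200/s):
for rpm in (5000, 7000, 11000, 15000):
    print(rpm, classify_load(rpm, 12000))
# 5000 light, 7000 regular, 11000 high, 15000 heavy
```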

赠佳期 2024-08-09 05:40:13


Several hundred requests per second.

The out-of-the-box number of open connections for most servers is usually around 256 or fewer, ergo 256 requests per second. You can push it up to 2,000-5,000 for ping requests or to 500-1,000 for lightweight requests. Making it even higher is very difficult and requires changes all the way down the stack: network, hardware, OS, server application and user application (see the C10k problem).

Seek time plus latency for HDDs is around 1-10 ms, for SSDs it's 0.1-1 ms. So that's 100-100,000 IOPS. Let's take 100,000 as the top value (SSD sequential writes).

Usually a connection stays open for at least one latency period. Latency from client to server is rarely below 50-100 ms, so only 100,000 / 50 = 2,000 of those IOPS can go towards creating new connections.

So, 2,000 ping requests per second from different clients is a rough upper limit for a normal server. It can be improved by using a RAM disk or adding more SSDs to increase the IOPS number, routing requests to reduce ping, changing/tuning the OS to reduce kernel overhead, etc. In practice it is often higher, because many requests come from the same client (connection) and the number of clients is limited. Under good conditions it can go up to hundreds of thousands.

On the other hand, higher ping, application execution time, and OS and hardware imperfections can easily reduce that base value to a few hundred requests per second. Also, typical web servers and applications are usually not well suited to that level of optimization, so Vinko Vrsalovic's suggestion of 200 is pretty realistic.
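
The back-of-the-envelope reasoning above, written out (the IOPS and latency figures are the ones quoted in this answer, not measurements of any particular server):

```python
# Rough estimate mirroring the answer's reasoning: take ~100,000 IOPS
# (fast SSD, sequential writes) and assume each new connection occupies the
# server for roughly one client-to-server latency period (~50 ms).
iops = 100_000      # top-end I/O operations per second for the storage
latency_ms = 50     # typical lower bound on client-to-server latency

new_connections_per_second = iops / latency_ms
print(new_connections_per_second)          # 2000.0
print(new_connections_per_second * 60)     # 120000.0 ping requests per minute
```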

凑诗 2024-08-09 05:40:13


This is not a straightforward question that can be answered with a simple requests/minute number.

In the telecom sector, we often do performance testing and we simulate running lots of calls per second to try and find out the limit. We keep upping the call rate until the server fails to keep up.

So, it depends on your server and what it can handle. It also depends on your perspective. For example, an old 386 might only handle a measly 50 requests/minute. I'd call that a light load. But a high spec'd server might be capable of handling 60000 requests/minute. This is just guessing. I have no idea whether Apache could do this. Our telecom software certainly can.

I think it's best to answer this from the server perspective. I would say very heavy load is when you come within 10% of what your server is capable of handling sustained over several minutes or tens of minutes, and heavy load when you come within 15%.
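
A sketch of that ramp-up procedure, assuming a hypothetical `send_requests(rate)` helper that fires `rate` requests over one second and returns how many completed successfully (the helper name, step size and 1% error budget are illustrative, not part of the original answer):

```python
import time

def find_sustained_limit(send_requests, start_rate=100, step=100, error_budget=0.01):
    """Raise the request rate step by step until the server stops keeping up,
    then report the last rate it sustained."""
    rate = start_rate
    while True:
        completed = send_requests(rate)          # hypothetical load-driver call
        if completed < rate * (1 - error_budget):
            return rate - step                   # previous step was the limit
        rate += step
        time.sleep(1)                            # let the server settle

def label_load(rps, limit):
    """Apply the 10% / 15% thresholds suggested above."""
    if rps >= 0.90 * limit:
        return "very heavy"
    if rps >= 0.85 * limit:
        return "heavy"
    return "below heavy"
```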

怪我入戏太深 2024-08-09 05:40:13


It's hard to answer, because load isn't simply a matter of requests per unit time. It depends on what those requests are doing and how they're implemented.

For example, more reads than writes might mean a lighter load.

Asynchronous processing of writes might mean a lighter load than having to wait for synchronous processing to complete.

One extreme would be stock trading systems that handle billions of transactions each trading day. Look at the typical volume on the NYSE or NASDAQ and use that to estimate a high value per minute.

Let's say 2B transactions in a trading day is representative of NASDAQ. Markets open at 9 AM and close at 4 PM, so that's 7 hours * 3600 seconds/hour = 25,200 seconds. That gives an average of 2B transactions / 25,200 seconds ≈ 79,365 transactions per second - a very high load indeed. They obviously use lots of servers, so you'd need that number to figure out what the load per server should be.
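
The same arithmetic as a quick check (the 2B/day figure is this answer's assumption, not an official exchange statistic):

```python
transactions_per_day = 2_000_000_000   # assumed daily volume
trading_seconds = 7 * 3600             # 9 AM to 4 PM = 25,200 seconds

per_second = transactions_per_day / trading_seconds
print(round(per_second))       # ~79365 transactions per second
print(round(per_second * 60))  # ~4.76 million transactions per minute
```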

If SO can be considered a good benchmark, you might ask about its volume on meta.

夏末染殇 2024-08-09 05:40:13


Heavy load is whatever is greater than what was stated in the requirements. You need to know how your application will be used to determine what might constitute heavy load. Otherwise you might end up building a Ferrari that will only be used to do the groceries. Great experience, but a waste of resources.

咋地 2024-08-09 05:40:13


We cannot decide whether a certain RPS number is heavy load or not. It all boils down to what that request is doing on our machine.

Let me take the example of an e-commerce platform and the system I work on. When a user opens our website, our domain is first resolved to the LB's IP. The LB's job is simply to send that request to the backend service, which fetches products for the different sections of the homepage. So the LB doesn't do any CPU- or memory-intensive work and is therefore capable of accepting a very high RPS. Contrast that with my backend service, which needs to fetch ads from Elastic Search, work out which products to show based on who the user is, then check whether each product is available at the user's location, what the delivery charges are, and so on. That is a lot of computation, so a single request generates a lot of load on this backend service. If the LB and this server had the same configuration, the LB could handle far more RPS and this server far fewer.
So there is no single definition of what RPS number is actually high.

I hope this practical example helps build a better understanding.
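
To make the contrast concrete, here is a toy calculation with made-up per-request CPU costs (the cost figures and core count are purely illustrative assumptions): the load balancer only parses and forwards, while the backend does the expensive work, so the same hardware yields very different RPS ceilings.

```python
# Hypothetical per-request CPU cost in milliseconds of single-core work.
LB_COST_MS = 0.2        # parse the request and forward it to a backend
BACKEND_COST_MS = 40.0  # query Elastic Search, rank products, check delivery

def max_rps(cost_ms: float, cores: int = 8) -> float:
    """Rough RPS ceiling if every request burns cost_ms of CPU time."""
    return cores * 1000.0 / cost_ms

print(max_rps(LB_COST_MS))       # 40000.0 -> the LB copes with a very high RPS
print(max_rps(BACKEND_COST_MS))  # 200.0   -> the backend saturates far sooner
```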
