Maximum size of HTTP header values?

Posted on 2024-07-16 01:58:22

Is there an accepted maximum allowed size for HTTP headers? If so, what is it? If not, is this something that's server specific or is the accepted standard to allow headers of any size?


Comments (7)

油饼 2024-07-23 01:58:22

No, HTTP itself does not define any limit. However, most web servers do limit the size of the headers they accept. For example, Apache's default limit is 8KB, while in IIS it's 16K. The server will return a 413 Entity Too Large error if the header size exceeds that limit (newer servers may instead return 431 Request Header Fields Too Large, defined in RFC 6585).

Related question: How big can a user agent string get?

白龙吟 2024-07-23 01:58:22

As vartec says above, the HTTP spec does not define a limit, however many servers do by default. This means, practically speaking, the lower limit is 8K. For most servers, this limit applies to the sum of the request line and ALL header fields (so keep your cookies short).

It's worth noting that nginx uses the system page size by default, which is 4K on most systems. You can check with this tiny program:

pagesize.c:

#include <unistd.h>
#include <stdio.h>

int main(void) {
    // getpagesize() returns the kernel's memory page size in bytes
    int pageSize = getpagesize();
    printf("Page size on your system = %d bytes\n", pageSize);
    return 0;
}

Compile with gcc -o pagesize pagesize.c, then run ./pagesize. My Ubuntu server on Linode dutifully informs me the answer is 4k.

草莓味的萝莉 2024-07-23 01:58:22

Here are the limits of the most popular web servers:

  • Apache - 8K
  • Nginx - 4K-8K
  • IIS - 8K-16K
  • Tomcat - 8K – 48K
  • Node (<13) - 8K; (>13) - 16K

零崎曲识 2024-07-23 01:58:22

HTTP does not place a predefined limit on the length of each header
field or on the length of the header section as a whole, as described
in Section 2.5. Various ad hoc limitations on individual header
field length are found in practice, often depending on the specific
field semantics.

HTTP header values are restricted only by server implementations; the HTTP specification itself doesn't restrict header size.

A server that receives a request header field, or set of fields,
larger than it wishes to process MUST respond with an appropriate 4xx
(Client Error) status code. Ignoring such header fields would
increase the server's vulnerability to request smuggling attacks
(Section 9.5).

Most servers will return 413 Entity Too Large or another appropriate 4xx error when this happens.

A client MAY discard or truncate received header fields that are
larger than the client wishes to process if the field semantics are
such that the dropped value(s) can be safely ignored without changing
the message framing or response semantics.

An uncapped HTTP header size leaves the server exposed to attacks and can reduce its capacity to serve organic traffic.

Source

凶凌 2024-07-23 01:58:22

RFC 6265, dated 2011, prescribes specific limits on cookies.

6.1. Limits

Practical user agent implementations have limits on the number and size of cookies that they can store. General-use user agents SHOULD provide each of the following minimum capabilities:

  • At least 4096 bytes per cookie (as measured by the sum of the length of the cookie's name, value, and attributes).

  • At least 50 cookies per domain.

  • At least 3000 cookies total.

Servers SHOULD use as few and as small cookies as possible to avoid reaching these implementation limits and to minimize network bandwidth due to the Cookie header being included in every request.

Servers SHOULD gracefully degrade if the user agent fails to return one or more cookies in the Cookie header because the user agent might evict any cookie at any time on orders from the user.

The intended audience of the RFC is what must be supported by a user agent or a server. It appears that to tune your server to support everything the browser allows, you would need to configure 4096*50 bytes as the limit. As the text that follows suggests, this does appear to be far in excess of what a typical web application needs. It would be useful to take the current limit and the RFC's outlined upper limit and compare the memory and IO consequences of the higher configuration.

尽揽少女心 2024-07-23 01:58:22

I also found that in some cases the reason for a 502/400 could be a large number of headers, regardless of their size.
From the docs:

tune.http.maxhdr
Sets the maximum number of headers in a request. When a request comes with a
number of headers greater than this value (including the first line), it is
rejected with a "400 Bad Request" status code. Similarly, too large responses
are blocked with "502 Bad Gateway". The default value is 101, which is enough
for all usages, considering that the widely deployed Apache server uses the
same limit. It can be useful to push this limit further to temporarily allow
a buggy application to work by the time it gets fixed. Keep in mind that each
new header consumes 32bits of memory for each session, so don't push this
limit too high.

https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.http.maxhdr

许一世地老天荒 2024-07-23 01:58:22

If you are going to use a DDoS-protection provider such as Akamai, they have a maximum limit of 8k on response header size. So essentially, try to keep your response headers below 8k.
