Content of GZIP-compressed HTTP response is truncated (only in browsers)
I'm developing a web application in Haskell using the Yesod web framework (although I think this problem is not related to Haskell and/or Yesod; I'm just mentioning it for completeness). I'm using the Warp server to serve requests, and I'm experiencing a strange problem involving GZIP compression when accessing sites using Chromium/Firefox (but not Opera).
I have a site set up which returns only Hello world!.
- If I fetch the site using netcat and set Accept-Encoding to gzip, I get the correct result. That means I can decompress the data I receive, and it correctly decompresses to Hello world!.
- If I want to look at the site using Chromium or Firefox, all I get is H (the rest of the content is cut off). I verified that the Content-Length and Content-Encoding headers are set correctly by the server.
Here is the code I use to send the Hello world! string:
getRootR = return $ RepPlain $ toContent ("Hello world!" :: ByteString)
I'm calling Warp with the standard run function:
withWebApp $ Warp.run 3000
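For context, Yesod applies the gzip middleware from wai-extra automatically; in a plain WAI application the same compression would be enabled explicitly. A minimal sketch (the application name myApp is assumed, and this uses the current Network.Wai.Middleware.Gzip API):

```haskell
import qualified Network.Wai.Handler.Warp as Warp
import Network.Wai.Middleware.Gzip (gzip, def)

-- Wrap the application in the gzip middleware so responses are
-- compressed when the client sends Accept-Encoding: gzip.
main :: IO ()
main = Warp.run 3000 (gzip def myApp)
```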
This is the request I'm sending with netcat, with which it works:
GET / HTTP/1.0
Accept-Encoding: gzip,
And the result of decompressing the output of netcat:
$ nc --idle-timeout=1 localhost 3000 < test | tail -n1 | gunzip
nc: using stream socket
Hello world!
And one more thing: if I sniff the traffic using Wireshark, the packets show up as HTTP traffic, but Wireshark tells me (text/plain) Continuation or non-HTTP traffic. The packet looks fine to me though.
So for some reason, it just won't work in Chromium or Firefox and I can't figure out why. Can anybody help me with this or point me in the right direction?
2 Answers
Most likely reason is the Content-Length is not set properly, i.e. the server reports the size of the original content as opposed to the size of the compressed data.
As sclv states above, this must be a bug in the web server.
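The mismatch described above can be made concrete: the gzipped form of a tiny body is larger than the original, so a Content-Length taken from the uncompressed body tells the browser to stop reading long before the gzip stream is complete. A small sketch using the zlib package:

```haskell
import qualified Codec.Compression.GZip as GZip
import qualified Data.ByteString.Lazy.Char8 as L

main :: IO ()
main = do
    let body    = L.pack "Hello world!"
        gzipped = GZip.compress body
    -- For a 12-byte body the gzip header and trailer alone exceed 12
    -- bytes, so a browser honoring Content-Length: 12 reads only a
    -- fragment of the compressed stream and truncates the page.
    putStrLn $ "original:   " ++ show (L.length body)    ++ " bytes"
    putStrLn $ "compressed: " ++ show (L.length gzipped) ++ " bytes"
```

This also explains why the netcat test works: netcat ignores Content-Length and simply reads until the connection closes, so it receives the whole compressed stream.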
I can confirm that this is a bug in wai-extra. It seems that the correct action is to remove any Content-Length header when using gzip, so that Warp will automatically serve the response with chunked transfer encoding. I'll hopefully release a patch later today.
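The fix described here could be sketched as a WAI middleware (written against the current wai 3.x API; the name stripContentLength is illustrative, this is not the actual wai-extra patch):

```haskell
import Network.Wai (Middleware, mapResponseHeaders)
import Network.HTTP.Types.Header (hContentLength)

-- Drop any Content-Length header from the response before the body is
-- gzip-compressed; without a declared length, Warp falls back to
-- chunked transfer encoding, which matches the compressed stream.
stripContentLength :: Middleware
stripContentLength app req respond =
    app req (respond . mapResponseHeaders dropCL)
  where
    dropCL = filter ((/= hContentLength) . fst)
```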