gzip without server support?
I have written a CSS server which does minimization and basic parsing/var replacement. The server is using node.js.
I want to gzip the responses from this server. As I was told in IRC, node.js does not currently have a gzip lib, so I am attempting to do it manually from the command line (since I only gzip on a cache miss).
I am pushing the file data out to a temp file and then using exec to call 'gzip -c -9 -q ' + tempFile. I get the compressed data back correctly (it seems), and send the proper Content-Encoding header as 'gzip', but Chrome reports:

Error 330 (net::ERR_CONTENT_DECODING_FAILED): Unknown error
Also, some independent gzip testers online fail as well (not just Chrome).
I assume I am missing something simple about generating gzip output for browsers, since I have never tried to do it manually before.
Any assistance would be helpful. The server is blazing fast, but I need to gzip the content to get the best performance for end users.
Thanks.
UPDATE
I have verified my Content-Length is correct.
Node is still bleeding edge and seems not yet to have a good handling of binary data.
Node's string encodings are ascii, binary and utf8. [...] "binary" only look[s] at the first 8 bits of the 16bit JavaScript string characters. The problem is that strings according to ECMA are 16bit character strings. If you use UTF-8 (it's the default) there is some normalization when reading into the string, and this corrupts gzip. If you use ascii, it obviously won't work.
It will work if you use binary encoding for both reading and writing, because the upper 8 bits of a JavaScript string character are simply left unused. If that does not work, try sending the files directly to the client without loading them into JavaScript strings at all, perhaps with the help of a proxy server in front of Node.
I myself hope that Google's V8 engine implements a true binary string datatype, something like this proposal http://groups.google.com/group/nodejs/browse_thread/thread/648a0f5ed2c95211/ef89acfe538931a1?lnk=gst&q=binary+type#ef89acfe538931a1
CommonJS is also proposing Binary/B, and since Node tries to follow CommonJS, there is some hope for the future.
Edit I just discovered the net2 branch of node, which contains a binary buffer (see src/node_buffer.h). It seems to be part of a complete overhaul of the networking layer.
Have you updated the Content-Length to match the gzipped size? It seems like that might screw up the decoding.