Unbuffered output from IHttpHandler
I want to stream data from an IHttpHandler class. I'm loading a large number of rows from the DB, serializing and compressing them, then sending them down the wire. On the other end, I want my client to be able to decompress and deserialize the data before the server is even done serializing all the objects.

I'm using context.Response.OutputStream.Write to write my data, but it still seems like the output data is being put into a buffer before being sent to the client. Is there a way to avoid this buffering?
2 Answers
The Response.Flush method should send it down the wire; however, there are some exceptions. If IIS is using Dynamic Compression (that is, it's configured to compress dynamic content), then IIS will not flush the stream. Then there is the whole 'chunked' transfer encoding. If you have not specified Content-Length, then the receiving end does not know how large the response body will be. This is accomplished with the chunked transfer encoding. Some HTTP servers require that the client use an Accept-Encoding request header containing the chunked keyword. Others just default to chunked when you begin writing bytes before the full length is specified; however, they do not do this if you have specified your own Transfer-Encoding response header.

With IIS 7 and compression disabled, Response.Flush should then always do the trick, right? Not really. IIS 7 can have many modules that intercept and interact with the request and response. I don't know whether any are installed/enabled by default, but you should still be aware that they can affect your desired result.

It's curious that you are compressing this content. If you are using GZIP, then you will not be in control of when and how much data is sent by calling Flush. Additionally, using GZIP content means that the receiving end may also be unable to start reading data right away.
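The GZIP point can be seen directly with a compression stream: until the compressor is explicitly flushed, the receiver cannot decode anything, no matter how often the underlying output is flushed. The question concerns ASP.NET, but the behaviour is easiest to sketch with Python's zlib, whose Z_SYNC_FLUSH mode forces out all pending compressed bytes on a boundary the receiver can decode (the batch contents are made-up stand-ins for serialized rows):

```python
import zlib

# Hypothetical row batches standing in for serialized DB rows.
batches = [b"row-batch-1", b"row-batch-2", b"row-batch-3"]

comp = zlib.compressobj(wbits=31)    # wbits=31 selects gzip framing
decomp = zlib.decompressobj(wbits=31)

recovered = []
for batch in batches:
    # Z_SYNC_FLUSH makes the compressor emit everything it has buffered
    # so far, ending on a byte boundary the receiver can decode.
    chunk = comp.compress(batch) + comp.flush(zlib.Z_SYNC_FLUSH)
    # The receiver can decompress this chunk immediately, without
    # waiting for the end of the stream.
    recovered.append(decomp.decompress(chunk))

# Finish the stream once all batches are sent.
recovered.append(decomp.decompress(comp.flush(zlib.Z_FINISH)))
print(b"".join(recovered))  # -> b'row-batch-1row-batch-2row-batch-3'
```

Without the Z_SYNC_FLUSH calls, the compressor would be free to hold data back until the very end, which is exactly why a GZIP layer between your handler and the client can defeat Response.Flush.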
You may want to break the records into smaller, digestible chunks of 10, 50, or 100 rows. Compress each set and send it, then work on the next set of rows. Of course, now you will need to write something to the client so they know how big each compressed set of rows is, and when they have reached the end. See http://en.wikipedia.org/wiki/Chunked_transfer_encoding for an example of how chunked transfer encoding works.
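One way to realize the suggestion above is simple length-prefixed framing: compress each batch of rows separately, write its compressed size, then the payload, with a zero length marking the end of the stream. A minimal sketch in Python (the batch labels and the 4-byte big-endian length prefix are illustrative choices, not anything prescribed by the original question):

```python
import gzip
import io
import struct

def write_batches(stream, batches):
    """Compress each batch separately and prefix it with a 4-byte length."""
    for batch in batches:
        payload = gzip.compress(batch)
        stream.write(struct.pack(">I", len(payload)))  # big-endian length
        stream.write(payload)
    stream.write(struct.pack(">I", 0))                 # zero length = end

def read_batches(stream):
    """Yield decompressed batches until the zero-length end marker."""
    while True:
        size = struct.unpack(">I", stream.read(4))[0]
        if size == 0:
            return
        yield gzip.decompress(stream.read(size))

# io.BytesIO stands in for the response/request streams.
wire = io.BytesIO()
write_batches(wire, [b"rows 1-100", b"rows 101-200"])
wire.seek(0)
print(list(read_batches(wire)))  # -> [b'rows 1-100', b'rows 101-200']
```

Because each frame is an independent gzip member, the client can decompress and process a batch as soon as its length prefix and payload arrive, while the server is still producing later batches.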
You can use context.Response.Flush() or context.Response.OutputStream.Flush() to force buffered content to be written immediately.
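The buffer-then-flush behaviour these calls work around is the same as any buffered stream's: writes accumulate in memory until something forces them out. A small Python sketch of the general idea, where io.BytesIO stands in for the network connection (this illustrates the concept, not the ASP.NET API itself):

```python
import io

raw = io.BytesIO()                                 # stands in for the socket
buffered = io.BufferedWriter(raw, buffer_size=64)

buffered.write(b"hello")         # small write: sits in the buffer
assert raw.getvalue() == b""     # nothing has reached the "wire" yet

buffered.flush()                 # an explicit flush pushes it through
assert raw.getvalue() == b"hello"
```

As the first answer notes, a flush at this level only helps if no layer further down (IIS compression, intercepting modules) is doing its own buffering.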