Poor GZipStream decompression performance

I have a .NET 2.0 WinForms app that connects to a backend WAS server. I am using GZipStream to decode data coming back from an HttpWebRequest call made to the server. The data returned is compressed CSV; Apache is doing the compression. The entire server stack is Hibernate-->EJB-->Spring-->Apache.

For small responses, the performance is fine (<50ms). When I get a response >150KB, it takes more than 60 seconds to decompress. The majority of the time seems to be spent in the GZipStream constructor.

This is the code showing where I get the response stream from the HttpWebResponse call:

using (Stream stream = this.Response.GetResponseStream())
{
    if (this.CompressData && this.Response.ContentEncoding == "gzip")
    {
        // Decompress the response
        byte[] b = Decompress(stream);
        this.ResponseBody = encoding.GetString(b);
    }
    else
    {
        // Just read the stream as a string
        using (StreamReader sr = new StreamReader(stream))
        {
            this.ResponseBody = sr.ReadToEnd();
        }
    }
}

Edit 1

Based on the comment from Lucero, I modified the Decompress method to the following, but I do not see any performance benefit from loading the ResponseStream into a MemoryStream before instantiating the GZipStream.

private static byte[] Decompress(Stream stream)
{
    using (MemoryStream ms = new MemoryStream())
    {
        // Copy the raw response into memory first
        byte[] buffer = new byte[4096];
        int read = 0;

        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }

        ms.Seek(0, SeekOrigin.Begin);

        // Decompress the buffered bytes
        using (GZipStream gzipStream = new GZipStream(ms, CompressionMode.Decompress, false))
        {
            read = 0;
            buffer = new byte[4096];

            using (MemoryStream output = new MemoryStream())
            {
                while ((read = gzipStream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, read);
                }

                return output.ToArray();
            }
        }
    }
}

Based on the code above, can anyone see any issues? This seems quite basic to me, but it's driving me nuts.

Edit 2

I profiled the application using ANTS Profiler, and during the 60s of decompression, the CPU is near zero and the memory usage does not change.

Edit 3

The actual slowdown appears to be during the read of the stream returned by this.Response.GetResponseStream.

The entire 60s is spent loading the response stream into the MemoryStream. Once it's there, the call to GZipStream is quick.
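To see that split for yourself, one option is to time the network read and the decompression separately. A minimal sketch, assuming the caller has already built the HttpWebRequest (the method name ReadAndDecompress and the Console output are just for illustration):

// Requires: using System; using System.Diagnostics; using System.IO;
//           using System.IO.Compression; using System.Net;
private static byte[] ReadAndDecompress(HttpWebRequest request)
{
    byte[] raw;
    Stopwatch watch = Stopwatch.StartNew();

    // Phase 1: pull the raw (still compressed) response bytes off the wire
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (Stream stream = response.GetResponseStream())
    using (MemoryStream ms = new MemoryStream())
    {
        byte[] buffer = new byte[8192];
        int read;

        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }

        raw = ms.ToArray();
    }

    Console.WriteLine("Network read: {0} ms", watch.ElapsedMilliseconds);
    watch = Stopwatch.StartNew();

    // Phase 2: decompress the buffered bytes
    using (MemoryStream input = new MemoryStream(raw))
    using (GZipStream gzipStream = new GZipStream(input, CompressionMode.Decompress))
    using (MemoryStream output = new MemoryStream())
    {
        byte[] buffer = new byte[8192];
        int read;

        while ((read = gzipStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, read);
        }

        Console.WriteLine("Decompression: {0} ms", watch.ElapsedMilliseconds);
        return output.ToArray();
    }
}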

Edit 4

I found that using HttpWebRequest.AutomaticDecompression exhibits the same performance issue, so I'm closing this question.
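For reference, a minimal sketch of how AutomaticDecompression is typically enabled (the URL here is only a placeholder); when set, the framework adds the Accept-Encoding header and decompresses the response before handing back the stream:

// Placeholder URL; only the property assignment matters here.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/data.csv");

// Ask the framework to send Accept-Encoding and decompress
// gzip/deflate responses transparently.
request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string csv = reader.ReadToEnd();
}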


Comments (4)

泪之魂 2024-08-13 12:34:41

Try first loading the data into a MemoryStream and then decompress the MemoryStream...

半岛未凉 2024-08-13 12:34:41

DotNetZip has a GZipStream class that can be used as a drop-in replacement for System.IO.Compression.GZipStream.

DotNetZip is free.

NB: If you are only doing GZipStream, then you need the Ionic.Zlib.dll, not the Ionic.Zip.dll.
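Under that assumption, a minimal sketch of what the swap might look like (the method name DecompressWithDotNetZip is just for illustration; it needs a reference to Ionic.Zlib.dll):

// Requires: using Ionic.Zlib; using System.IO;
private static byte[] DecompressWithDotNetZip(Stream stream)
{
    // Same read loop as before, only the GZipStream and CompressionMode
    // types now come from the Ionic.Zlib namespace.
    using (GZipStream gzipStream = new GZipStream(stream, CompressionMode.Decompress))
    using (MemoryStream output = new MemoryStream())
    {
        byte[] buffer = new byte[4096];
        int read;

        while ((read = gzipStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, read);
        }

        return output.ToArray();
    }
}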

心意如水 2024-08-13 12:34:41

I'll drop my three cents on the subject, just to let C# users know that 7Zip seems to expose its API in plain C#. I think you all know the 7Zip tool quite well, and at least for me, regardless of how well or poorly its API is designed, knowing about it is a big help in terms of better performance when handling compressed files/streams.

ref: http://www.splinter.com.au/compressing-using-the-7zip-lzma-algorithm-in/
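Note that LZMA is a different container than the gzip the server is sending here, so this would not plug straight into the question's code. Purely to illustrate the general shape of the 7-Zip LZMA SDK's C# decoder (names and signatures follow the SDK's sample code, so treat them as an assumption):

// Illustrative only: decodes a raw .lzma stream (5-byte properties header
// followed by an 8-byte uncompressed length), per the LZMA SDK sample.
private static void DecodeLzma(Stream input, Stream output)
{
    byte[] properties = new byte[5];
    input.Read(properties, 0, properties.Length);

    byte[] lengthBytes = new byte[8];
    input.Read(lengthBytes, 0, lengthBytes.Length);
    long outSize = BitConverter.ToInt64(lengthBytes, 0);

    SevenZip.Compression.LZMA.Decoder decoder = new SevenZip.Compression.LZMA.Decoder();
    decoder.SetDecoderProperties(properties);
    decoder.Code(input, output, input.Length - input.Position, outSize, null);
}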

诗笺 2024-08-13 12:34:41

Sorry to not answer your question directly, but have you looked at SharpZip yet? I found it much easier to use than GZipStream. If you have trouble solving your current problem, perhaps it would perform the task better.

http://www.icsharpcode.net/OpenSource/SharpZipLib/
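If that route is taken, a minimal sketch of the same decompress loop using SharpZipLib's GZipInputStream (method name is just for illustration; it needs a reference to ICSharpCode.SharpZipLib.dll):

// Requires: using ICSharpCode.SharpZipLib.GZip; using System.IO;
private static byte[] DecompressWithSharpZipLib(Stream stream)
{
    using (GZipInputStream gzipStream = new GZipInputStream(stream))
    using (MemoryStream output = new MemoryStream())
    {
        byte[] buffer = new byte[4096];
        int read;

        while ((read = gzipStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, read);
        }

        return output.ToArray();
    }
}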
