ASP.NET - Response.OutputStream.Write either writes 16k then all zeros, or inserts an extra character every 64k

Posted 2024-12-16 16:39:22


I have the following code...

public partial class DownloadFile : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string FilePath = "[FTPPath]";
        Download downloadFile = new Download();
        Server.ScriptTimeout = 54000;

        try
        {
            long size = downloadFile.GetFileSize(FilePath);

            using (FtpWebResponse ftpResponse = downloadFile.BrowserDownload(FilePath))
            using (Stream streamResponse = ftpResponse.GetResponseStream())
            {
                string fileName = FilePath.Substring(FilePath.LastIndexOf("/") + 1);
                int bufferSize = 65536;
                byte[] buffer = new byte[bufferSize];
                int readCount;

                readCount = streamResponse.Read(buffer, 0, bufferSize);

                // Read file into buffer
                //streamResponse.Read(buffer, 0, (int)size);

                Response.Clear();
                Response.Buffer = false;
                Response.BufferOutput = false;

                //Apparently this line helps with old versions of IE that like to cache stuff no matter how much you tell them!
                Response.AddHeader("Pragma", "public");

                //Expires: 0 forces the browser to always think the page is "stale", therefore forcing it to never cache the page and always re-download it when viewed. Therefore no nasty experiences if we change the authentication details.
                Response.Expires = 0;

                //Again this line forces the browser not to cache the page.
                Response.AddHeader("Cache-Control", "no-cache, must-revalidate");
                Response.AddHeader("Cache-Control", "public");
                Response.AddHeader("Content-Description", "File Transfer");

                Response.ContentType = "application/zip";

                Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
                Response.AddHeader("Content-Transfer-Encoding", "binary");
                Response.AddHeader("Content-Length", size.ToString());

                // writes buffer to OutputStream
                while (readCount > 0)
                {
                    Response.OutputStream.Write(buffer, 0, bufferSize);
                    readCount = streamResponse.Read(buffer, 0, bufferSize);
                    Response.Flush();
                }

                Response.End();
                Server.ScriptTimeout = 90;
            }
        }
        catch (Exception ex)
        {
            Response.Write("<p>" + ex.Message + "</p>");
            Server.ScriptTimeout = 90;
        }
    }
}

This downloads .zip files from an FTP server (please ignore the header rubbish about preventing caching, unless that is related to the issue).

So downloadFile is a class I have written using FtpWebRequest/Response with SSL enabled that can do two things: one is to return the file size (GetFileSize) of a file on our FTP, and the other is to set FtpWebRequest.Method = WebRequestMethods.Ftp.DownloadFile to allow the download of a file.

Now the code appears to work perfectly: you get a nice zip downloaded of exactly the same size as the one on the FTP. However, this is where the quirks begin.

The zip files are always corrupted, no matter how small. In theory, very small files should be okay, but you'll see why in a moment. Because of this, I decided to compare the files in binary.

  • If I set bufferSize to anything other than the size of the file
    (i.e. 1024, 2048, 65536), the first 16k (16384 bytes) downloads
    perfectly, and then the stream just writes zeros to the end of the
    file.

  • If I set bufferSize = size (filesize), the stream appears to download the full file, until you look more closely. The file is an exact replica up to the first 64k, and then an extra character appears in the downloaded file (this character never seems to be the same).

    After this extra byte, the files are exactly the same again. An extra byte appears to get added every 64k, meaning that by the end of a 65MB file, the two files are massively out of sync. Because the download length is limited to the size of the file on the server, the end of the file gets truncated in the downloaded file. The archive won't allow access to it, as all the CRC checks fail.

Any help would be much appreciated. Cheers.
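[Editor's note: the zero-padding symptom described above is consistent with a copy loop that writes the full buffer on every iteration, even when `Read` returns fewer bytes than requested. A minimal console sketch reproduces the effect; `MemoryStream` stands in for the FTP and response streams, and the class and method names are illustrative, not from the original code.]

```csharp
using System;
using System.IO;

class BufferBugDemo
{
    // Mirrors the question's loop: every Write sends bufferSize bytes,
    // even when Read returned fewer. Network streams routinely return
    // short reads, so stale or zero bytes end up in the output.
    public static byte[] BuggyCopy(byte[] source, int bufferSize)
    {
        using (var input = new MemoryStream(source))
        using (var output = new MemoryStream())
        {
            var buffer = new byte[bufferSize];
            int readCount = input.Read(buffer, 0, bufferSize);
            while (readCount > 0)
            {
                output.Write(buffer, 0, bufferSize); // always bufferSize, never readCount
                readCount = input.Read(buffer, 0, bufferSize);
            }
            return output.ToArray();
        }
    }

    static void Main()
    {
        var data = new byte[100];
        for (int i = 0; i < data.Length; i++) data[i] = (byte)(i + 1);

        byte[] copy = BuggyCopy(data, 64);

        // 100 input bytes become 128 output bytes: the final partial
        // read of 36 bytes was written as a full 64-byte block, padded
        // with 28 leftover bytes from the previous iteration.
        Console.WriteLine(copy.Length); // prints 128
    }
}
```

On a `MemoryStream` the early reads happen to come back full, but an `FtpWebResponse` stream can return a short read at any point, which is why the real download starts to degrade partway through the file.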

I have now changed my code somewhat to use WebRequest and WebResponse to grab a zip over HTTP from the web server itself. Here is the code...

public partial class DownloadFile : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{

    string FilePath = [http path];
    Server.ScriptTimeout = 54000;
    try
    {
        WebRequest HWR = WebRequest.Create(FilePath);
        HWR.Method = WebRequestMethods.File.DownloadFile;

        using (WebResponse FWR = HWR.GetResponse())
        using (BinaryReader streamResponse = new BinaryReader(FWR.GetResponseStream()))
        {
            string fileName = FilePath.Substring(FilePath.LastIndexOf("/") + 1);
            int bufferSize = 2048;
            byte[] buffer = new byte[bufferSize];
            int readCount;

            readCount = streamResponse.Read(buffer, 0, bufferSize);

            Response.Clear();
            Response.Buffer = false;
            Response.BufferOutput = false;
            //Apparently this line helps with old versions of IE that like to cache stuff no matter how much you tell them!
            Response.AddHeader("Pragma", "public");
            //Expires: 0 forces the browser to always think the page is "stale", therefore forcing it to never cache the page and always re-download it when viewed. Therefore no nasty experiences if we change the authentication details.
            Response.Expires = 0;
            //Again this line forces the browser not to cache the page.
            Response.AddHeader("Cache-Control", "no-cache, must-revalidate");
            Response.AddHeader("Cache-Control", "public");
            Response.AddHeader("Content-Description", "File Transfer");
            Response.ContentType = "application/zip";
            Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
            Response.AddHeader("Content-Transfer-Encoding", "binary");

            // writes buffer to OutputStream
            while (readCount > 0)
            {
                Response.OutputStream.Write(buffer, 0, bufferSize);
                Response.Flush();
                readCount = streamResponse.Read(buffer, 0, bufferSize);
            }

            //Response.Write(testString);
            Response.End();
            Server.ScriptTimeout = 90;

        }
    }
    catch (Exception ex)
    {
        Response.Write("<p>" + ex.Message + "</p>");
        Server.ScriptTimeout = 90;
    }
}
}

This code is simpler, but it is still corrupting the data. I'm sure there's something very simple I'm doing wrong, but I just can't spot it or find a test to show me where I am going wrong. Please help :)


1 comment

多情出卖 2024-12-23 16:39:22


On your line

Response.OutputStream.Write(buffer, 0, bufferSize); 

change bufferSize to readCount so that you only write the number of bytes that you actually read.
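[Editor's note: applying that fix, the copy loop can be pulled into a small reusable helper; this is a sketch, and the class and method names are illustrative.]

```csharp
using System.IO;

static class StreamUtil
{
    // Write only the bytes that Read actually returned. Stream.Read
    // may return fewer bytes than requested, which is routine for
    // network streams such as the one behind an FtpWebResponse.
    public static void Copy(Stream input, Stream output, int bufferSize)
    {
        var buffer = new byte[bufferSize];
        int readCount;
        while ((readCount = input.Read(buffer, 0, bufferSize)) > 0)
        {
            output.Write(buffer, 0, readCount); // readCount, not bufferSize
        }
    }
}
```

In the page, the loop body becomes `Response.OutputStream.Write(buffer, 0, readCount);` followed by `Response.Flush();`. On .NET 4 and later, `streamResponse.CopyTo(Response.OutputStream)` performs the same read-what-you-got copy for you.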
