Different behavior when streaming and saving a file from an FTP server with FtpWebRequest on the production machine (ASP.NET)



There might be some very simple answer to this, but I am really stuck on this one.

I have written some code that fetches a rather large (4GB+) XML file over FTP, reads it as a string, and splits the document into smaller parts. Finally, the smaller files are written to disk.

Everything works perfectly well on my development machine, but in production the script suddenly ends after reading through only a tenth of the file. No exceptions are thrown, and every line of code executes as expected; it simply stops before getting through the whole file. That makes me think some IIS or web.config setting needs to be changed.
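For reference, the kind of setting I have in mind is ASP.NET's request execution timeout, configured through httpRuntime in web.config. As far as I know, that limit is only enforced when compilation is set to debug="false", which could explain why my development machine behaves differently. The fragment below is just a hypothetical illustration of what I mean, not a confirmed fix:

    <!-- Hypothetical web.config fragment: raise the request execution timeout
         (in seconds). The limit is only enforced when debug="false", so a dev
         machine running with debug="true" would never hit it. -->
    <configuration>
      <system.web>
        <compilation debug="false" />
        <httpRuntime executionTimeout="14400" />
      </system.web>
    </configuration>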

The code runs inside the Umbraco CMS as a custom user control. The server is a Windows Server 2008 machine running IIS.

Any ideas? This is the code:

FtpWebRequest request = (FtpWebRequest)WebRequest.Create(serverUri);
request.Credentials = new NetworkCredential("anonymous", "[email protected]");
request.Method = WebRequestMethods.Ftp.DownloadFile;
request.Timeout = -1;
request.KeepAlive = true;
request.UsePassive = true;
request.UseBinary = true;

using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
using (Stream responseStream = response.GetResponseStream())
using (StreamReader sr = new StreamReader(responseStream))
{
    ReadStreamIntoNewRecord(fileName, sr, NumberOfRecordsPerBatch);
}

The ReadStreamIntoNewRecord function looks like this:

private void ReadStreamIntoNewRecord(string fileName, StreamReader sr, int NumberOfRecordsPerBatch)
{
    string line = "";
    string record = "";
    int i = 0;  
    XDocument xdoc = new XDocument(new XElement("collection"));
    while (sr.Peek() >= 0)
    {
        line = sr.ReadLine();
        if (line.Contains("</record>"))
        {
            xdoc.Element("collection").Add(MakeRecordFromString(record + line));
            record = "";
            i++;
            if (i % NumberOfRecordsPerBatch == 0)
            {
                SaveRecordToFile(fileName, xdoc);
                xdoc = new XDocument(new XElement("collection"));
            }
        }
        else
        {
            record = record + line;
        }

    }
    SaveRecordToFile(fileName, xdoc);            
}


Comments (1)

锦欢 2024-11-15 18:04:38


Wow, loading a 4GB file into a string in memory is a horrible idea. If it's 4GB on disk as UTF-8, it will be 8GB in memory, since all .NET strings are UTF-16 in memory. Luckily, you're not really doing that; you just said you were in the description.

I believe you should change the while loop a little. As written, it can detect an improper end of stream when there is really more data coming in. Use this instead:

while ((line = sr.ReadLine()) != null)
{
    ...
}

Besides that, you would be much better off saving the file with a simple StreamWriter or an XmlTextWriter instead of an XDocument. XDocument keeps the whole document in memory and is designed for easier traversal with LINQ to XML; you are not using that, so you would benefit from a much lighter-weight class.
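For example, here is a rough sketch of the streaming version. The batch file-naming scheme and the StartBatch/EndBatch helpers are my own invention, and WriteRaw assumes each record string is already well-formed XML:

// Requires: using System.IO; using System.Xml;

private void ReadStreamIntoNewRecord(string fileName, StreamReader sr, int numberOfRecordsPerBatch)
{
    string line;
    string record = "";
    int recordsInBatch = 0;
    int batchNumber = 0;

    // Open the first output file and start its root element right away.
    XmlWriter writer = StartBatch(fileName, batchNumber);

    while ((line = sr.ReadLine()) != null)  // null, not Peek(), signals end of stream
    {
        if (line.Contains("</record>"))
        {
            // Write the completed record straight to disk; nothing accumulates in memory.
            writer.WriteRaw(record + line);
            record = "";
            recordsInBatch++;
            if (recordsInBatch == numberOfRecordsPerBatch)
            {
                EndBatch(writer);
                batchNumber++;
                writer = StartBatch(fileName, batchNumber);
                recordsInBatch = 0;
            }
        }
        else
        {
            record = record + line;
        }
    }
    EndBatch(writer);
}

// Hypothetical helper: creates "<fileName>.<n>.xml" and opens the <collection> root.
private XmlWriter StartBatch(string fileName, int batchNumber)
{
    XmlWriter writer = XmlWriter.Create(string.Format("{0}.{1}.xml", fileName, batchNumber));
    writer.WriteStartElement("collection");
    return writer;
}

private void EndBatch(XmlWriter writer)
{
    writer.WriteEndElement();  // </collection>
    writer.Close();
}

Each completed record goes straight to disk, so memory use stays flat no matter how large the input file is.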
