What could cause a FileStream to read 0 bytes from an apparently locked file?
I have a 'pack files' method, written in J#, that takes a bunch of files and writes them all into a stream, along with enough metadata that they can be reconstructed as individual files by a related 'unpack' method. I've provided a rough version at the end of this question that gives a good indication of what the underlying J# does; I obtained it by running .NET Reflector over the compiled J# libraries.
Now, the code below works perfectly in development but intermittently experiences errors in production. The exceptions labelled in the comments below as 'ERROR 1' and 'ERROR 2' have both been seen in the wild. In the case where 'ERROR 2' occurs, there are always 0 bytes written. There is no apparent pattern to when these errors occur. The file sizes involved are typically under 100 kB.
The files that get passed into the 'pack' method have been created very recently, as in milliseconds ago in wall-clock time. The output stream points at a newly created file opened with no sharing.
So to summarize: sometimes I get a file length of '0' returned for a file that I know exists, because I just created it. Due to the involvement of J# I cannot obtain the actual exception. (Of course, if I were debugging I could break on the first-chance exception, but as mentioned this never happens in the development environment.) Other times I am unable to read any bytes out of the file, even though it has been successfully opened. Since there is no exception during the copy process, I can only assume that 'Read' returned -1.
What could be going on here? Any ideas? My suspicion is that there is a virus checker running in Prod, but not in dev, and that maybe it's involved somehow. But what could a virus checker do, when I have the file open and locked (as in the writeToStream method), that would cause reading to stop without an error? I have written a little test app that can lock arbitrary files... but locking the file from another process doesn't seem to stop FileInfo.Length from working, and once my file stream opens, the test app can no longer lock the file, as you would expect.
I'm stumped, I am.
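For what it's worth, a minimal sketch of the kind of locking test described above might look like the following. This is only a C# illustration under assumed conditions (the path C:\temp\test.dat and the command-line switch are made up; it is not the actual test app): one run holds the file open with FileShare.None while another run asks FileInfo for the length. As noted above, the length query keeps working even while the exclusive lock is held elsewhere.

// Minimal sketch (C#), hypothetical path and arguments.
// Run with "lock" in one console to hold the file open with no sharing,
// and with no argument in another console to see what FileInfo.Length reports.
using System;
using System.IO;

class LockTest
{
    static void Main(string[] args)
    {
        const string path = @"C:\temp\test.dat"; // hypothetical path

        if (args.Length > 0 && args[0] == "lock")
        {
            // Open exclusively: no other handle can read, write or delete the file.
            using (var fs = new FileStream(path, FileMode.Open,
                                           FileAccess.Read, FileShare.None))
            {
                Console.WriteLine("Holding exclusive lock. Press Enter to release.");
                Console.ReadLine();
            }
        }
        else
        {
            // FileInfo.Length is read from file metadata, so it typically still
            // succeeds while another process holds an exclusive lock on the data.
            var info = new FileInfo(path);
            Console.WriteLine("Length reported: " + info.Length);
        }
    }
}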
EDIT:
Okay, here is the J# code instead. Have re-tagged the question.
EDIT 2:
I should also mention that the check for 0 length was added later for troubleshooting purposes. Prior to that, it always just failed after comparing 'length' to 'written'. So whenever 'length' does not equal 'written', sometimes 'length' is 0 and sometimes 'written' is 0. I am confident the problem is NOT a bug in my code, but is caused by something external. The purpose of this question is to find out what another process (e.g. a virus checker) could do to those files to cause my code to fail in the way I describe.
public static void packContentsToStream(Serializable serializable, Stream stream)
{
    try
    {
        OutputStream output = new StreamWrapperOutputStream(stream);
        FileRecordPair[] recordPairs = SerializationUtil.getRecords(serializable);
        FileRecord[] records = new FileRecord[recordPairs.length];
        File[] files = new File[recordPairs.length];
        for (int i = 0; i < recordPairs.length; i++)
        {
            FileRecordPair pair = recordPairs[i];
            records[i] = pair.getRecord();
            files[i] = pair.getFile();
        }
        SerializationUtil.writeToStream(serializable, output, false); // False keeps stream open
        SerializationUtil.writeToStream(records, output, false);
        for (int i = 0; i < files.length; i++)
        {
            File file = files[i];
            long written = writeToStream(file, output);
            if (written != records[i].getFileLength())
            {
                throw new SystemException("Invalid record. The number of bytes written [" + written + "] did not match the recorded file length [" + records[i].getFileLength() + "]."); // ERROR 2
            }
        }
    }
    catch (Exception e)
    {
        throw new SystemException("Could not write FileRecords", e);
    }
}
public static long writeToStream(File file, OutputStream stream)
{
    long written = 0;
    if (file.exists())
    {
        FileInputStream fis = null;
        try
        {
            fis = new FileInputStream(file);
            written = copy(fis, stream);
        }
        catch (Exception e)
        {
            throw new SystemException("Could not write file to stream", e);
        }
        finally
        {
            if (fis != null)
            {
                try
                {
                    fis.close();
                }
                catch (IOException ioe)
                {
                    // For now - throw an exception to see if this might be causing the packing error
                    throw new SystemException("Error closing file", ioe);
                }
            }
        }
    }
    return written;
}
public static int copy(InputStream is, OutputStream stream) throws IOException
{
    int total = 0;
    int read = 0;
    byte[] buffer = new byte[BUFFER_SIZE];
    // If the very first read() returns -1, the loop exits immediately
    // and 0 is returned without any error being raised.
    while (read > -1)
    {
        read = is.read(buffer);
        if (read > 0)
        {
            stream.write(buffer, 0, read);
            total += read;
        }
    }
    return total;
}
// Relevant part of 'SerializationUtil.getRecords'
private static FileRecord GetFor(File file, String recordName, int index, String pathToInstance)
{
    String fileName = file.getName();
    int length = (int) file.length(); // Safe as long as file is under 2GB
    if (length == 0)
    {
        throw new SystemException("Could not obtain file length for '" + file.getPath() + "'"); // ERROR 1
    }
    if (index > -1 && recordName != null && recordName.length() > 0)
    {
        recordName = recordName + "." + index;
    }
    return new FileRecord(fileName, length, recordName, pathToInstance);
}
// File.length() implementation - obtained using .NET Reflector
public virtual long length()
{
    long length = 0L;
    if (this.__fileInfo != null)
    {
        SecurityManager manager = System.getSecurityManager();
        if (manager != null)
        {
            manager.checkRead(this.__mCanonicalPath);
        }
        try
        {
            this.__fileInfo.Refresh();
            if (!this.exists() || !(this.__fileInfo is FileInfo))
            {
                return 0L;
            }
            length = ((FileInfo) this.__fileInfo).Length;
        }
        catch (FileNotFoundException)
        {
        }
        catch (IOException)
        {
        }
    }
    return length;
}
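One thing worth noting about the reflected length() above: both catch blocks swallow the exception and the method falls through to return 0, and it also returns 0 when exists() reports false, so whatever the file system actually said is lost by the time ERROR 1 is thrown. A small diagnostic sketch like the following (C#; the helper name and logging are hypothetical and not part of the original code) could be used alongside the J# call to surface the underlying exception instead of a silent 0:

// Diagnostic sketch (C#), hypothetical helper.
// Unlike J#'s File.length(), FileInfo.Length throws (e.g. FileNotFoundException
// or IOException) instead of quietly returning 0 when something is wrong.
using System;
using System.IO;

static class LengthProbe
{
    public static long GetLengthOrExplain(string path)
    {
        try
        {
            var info = new FileInfo(path);
            info.Refresh();     // pick up the current on-disk state
            return info.Length; // throws if the file has vanished
        }
        catch (Exception ex)
        {
            // Log the real reason the length could not be obtained, then rethrow.
            Console.Error.WriteLine("Length check failed for '" + path + "': " + ex);
            throw;
        }
    }
}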
Comments (3)
In the WriteToStream method, please close the file first. Always use a finally block there.
Never mind... One of the objects in the C# code that was calling the J# code had a finalizer that deleted the file the J# code relied on. The J# code was fine. The virus checker was not to blame. We just had a saboteur in the ranks. The garbage collector was coming in and collecting an object that still appeared to be in scope.
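For anyone hitting the same symptom, the failure mode described in this answer (a finalizer deleting a file while code that still appears to be using its owner object is running) can be sketched roughly as follows. This is a hypothetical C# illustration, not the actual production code: TempFileWrapper and the path are made up. The point is that the GC may collect an object as soon as no further use of it remains in the method, even though the local variable is still lexically in scope, so the finalizer can run and delete the file mid-operation.

// Sketch of the failure mode (C#). TempFileWrapper and the path are hypothetical.
using System;
using System.IO;

class TempFileWrapper
{
    public string Path { get; }

    public TempFileWrapper(string path)
    {
        Path = path;
        File.WriteAllText(path, "payload");
    }

    // Finalizer deletes the file - the "saboteur".
    ~TempFileWrapper()
    {
        try { File.Delete(Path); } catch { /* best effort */ }
    }
}

class Program
{
    static void Main()
    {
        var wrapper = new TempFileWrapper(@"C:\temp\packed.tmp"); // hypothetical path
        string path = wrapper.Path;

        // After this point 'wrapper' is never read again, so (in a release build,
        // with no debugger attached) the GC may collect it at any time and run
        // its finalizer, deleting the file while we are still "using" it.
        GC.Collect();
        GC.WaitForPendingFinalizers();

        Console.WriteLine(File.Exists(path)
            ? "File still there: " + new FileInfo(path).Length + " bytes"
            : "File already deleted by the finalizer");

        // Keeping the wrapper reachable until the work is done avoids this:
        // GC.KeepAlive(wrapper);
    }
}

If such a finalizer removes an input file before getRecords runs, File.length() returns 0 (ERROR 1); if it removes the file just before writeToStream opens it, exists() returns false and 0 bytes are written (ERROR 2), which would line up with the behaviour described in the question. Calling GC.KeepAlive on the wrapper until packing completes, or disposing it explicitly instead of relying on a finalizer, avoids the premature deletion.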
Use the refresh function to update your file's info.