What reliability guarantees does NTFS provide?
What kind of reliability guarantees does NTFS provide for data stored on it? For example, suppose I open a file, append to the end, then close it, and the power goes out at a random point during this operation. Could I find the file completely corrupted?
I'm asking because I just had a system lock-up and found that two files that were being appended to had been completely zeroed out. That is, they were the right size, but consisted entirely of zero bytes. I thought this wasn't supposed to happen on NTFS, even when things fail.
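For reference, here is a minimal Win32 sketch of the append-then-close pattern the question describes, plus the usual mitigation: FlushFileBuffers pushes the cached bytes to disk before the handle is closed, which shrinks (but does not eliminate) the window in which a power cut can lose or zero out the tail of the file. The path and helper name are made up for illustration.

```c
/* Minimal sketch (illustrative, not from the post): append a record and
 * force it to disk before closing, so a power cut immediately afterwards
 * cannot lose bytes that were still sitting in the OS write cache. */
#include <windows.h>
#include <stdio.h>

int append_with_flush(const wchar_t *path, const void *buf, DWORD len)
{
    /* FILE_APPEND_DATA makes every WriteFile go to end-of-file. */
    HANDLE h = CreateFileW(path, FILE_APPEND_DATA, FILE_SHARE_READ, NULL,
                           OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return -1;

    DWORD written = 0;
    BOOL ok = WriteFile(h, buf, len, &written, NULL)
              && written == len
              && FlushFileBuffers(h);   /* push cached data to the disk */

    CloseHandle(h);
    return ok ? 0 : -1;
}

int main(void)
{
    static const char line[] = "log entry\r\n";
    if (append_with_flush(L"C:\\temp\\app.log", line, sizeof line - 1) != 0)
        fprintf(stderr, "append failed, error %lu\n", GetLastError());
    return 0;
}
```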
Comments (2)
NTFS is a transactional file system, so it guarantees integrity - but only for the metadata (MFT), not the (file) content.
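That would explain what the asker saw: the size (metadata) survived the crash, but the appended contents (data) never made it to disk, so the file reads back as zeros. One defensive pattern for append-only files is to frame each record with a length and a checksum, so a reader can detect and discard a zeroed or torn tail on recovery. A sketch follows; the length-plus-CRC format is invented here purely for illustration.

```c
/* Sketch of length-plus-CRC record framing (format invented here for
 * illustration). Because NTFS journals only metadata, an appended record
 * can come back zeroed or truncated after a crash; a checksum per record
 * lets the reader detect the damaged tail and discard it on recovery. */
#include <stdint.h>
#include <stdio.h>

/* Tiny bitwise CRC-32 (IEEE polynomial); slow but fine for a sketch. */
static uint32_t crc32(const uint8_t *p, size_t n)
{
    uint32_t c = 0xFFFFFFFFu;
    while (n--) {
        c ^= *p++;
        for (int k = 0; k < 8; k++)
            c = (c >> 1) ^ (0xEDB88320u & (0u - (c & 1u)));
    }
    return ~c;
}

/* Append one framed record: [len][payload][crc]. len must be > 0. */
static int write_record(FILE *f, const void *payload, uint32_t len)
{
    uint32_t crc = crc32(payload, len);
    if (fwrite(&len, sizeof len, 1, f) != 1) return -1;
    if (fwrite(payload, len, 1, f) != 1) return -1;
    if (fwrite(&crc, sizeof crc, 1, f) != 1) return -1;
    return fflush(f);  /* for real durability also FlushFileBuffers/_commit */
}

/* Read one record into buf; returns -1 on EOF or a bad frame, e.g. the
 * zeroed-out tail the asker observed (an all-zero length is rejected). */
static int read_record(FILE *f, uint8_t *buf, uint32_t cap, uint32_t *out_len)
{
    uint32_t len, crc;
    if (fread(&len, sizeof len, 1, f) != 1) return -1;
    if (len == 0 || len > cap) return -1;       /* zeroed or implausible */
    if (fread(buf, len, 1, f) != 1) return -1;
    if (fread(&crc, sizeof crc, 1, f) != 1) return -1;
    if (crc32(buf, len) != crc) return -1;      /* corrupted payload */
    *out_len = len;
    return 0;
}
```

On startup, read records until the first bad frame and truncate the file there; everything before that point is known-good.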
The short answer is that NTFS does metadata journaling, which assures valid metadata.
Other modifications (to the body of a file) are not journaled, so they're not guaranteed.
There are file systems that do journaling of all writes (e.g., AIX has one, if memory serves), but with them, you tend to get a tradeoff between disk utilization and write speed. IOW, you need a lot of "free" space to get decent performance -- they basically just do all writes to free space, and link that new data into the right spots in the file. Then they go through and clean out the garbage (i.e., free up parts that have since been overwritten, and usually coalesce the pieces of a file together as well). This can get slow if they have to do it very often though.
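Coming back to NTFS: since only metadata is journaled, a common way applications get crash-consistent file contents is to write a complete new copy to a temporary file, flush it, then rename it over the original. The rename is a metadata operation, so the journal guarantees readers see either the old file or the new one, never a half-written mix. A hedged sketch, with the helper name and the caller-supplied temp path invented for illustration:

```c
/* Sketch of the write-temp-then-rename pattern (not from the answer).
 * It leans only on metadata journaling: MoveFileExW either completes
 * or it doesn't, so a crash leaves the old file or the new one intact. */
#include <windows.h>

int save_atomically(const wchar_t *path, const wchar_t *tmp_path,
                    const void *buf, DWORD len)
{
    HANDLE h = CreateFileW(tmp_path, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return -1;

    DWORD written = 0;
    BOOL ok = WriteFile(h, buf, len, &written, NULL)
              && written == len
              && FlushFileBuffers(h);   /* data must be on disk before
                                           the rename makes it visible */
    CloseHandle(h);
    if (!ok)
        return -1;

    /* Atomically replace the old file with the fully written copy. */
    return MoveFileExW(tmp_path, path,
                       MOVEFILE_REPLACE_EXISTING | MOVEFILE_WRITE_THROUGH)
           ? 0 : -1;
}
```

Note this suits whole-file saves, not the append-to-a-log case from the question; rewriting the entire file on every append would be prohibitively slow, which is why log-style files tend to rely on flushing plus record framing instead.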