How can I delete a file so that the deletion is irreversible?
I want to delete a sensitive file (using C++), in a way that the file will not be recoverable.
I was thinking of simply writing over the file and then deleting it. Is that enough, or do I have to perform more actions?
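For concreteness, here is a minimal sketch (C++17; overwrite_and_delete is a made-up name for illustration) of the overwrite-then-delete approach the question describes. As the answers below explain, this alone is generally not sufficient:

```cpp
#include <algorithm>
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <vector>

// Minimal sketch of the overwrite-then-delete idea: zero the file's
// contents in place, flush, then remove it. Journaling, shadow copies,
// compression, and wear-levelling can all leave the old data behind
// despite this.
bool overwrite_and_delete(const std::filesystem::path& path) {
    std::error_code ec;
    const std::uintmax_t size = std::filesystem::file_size(path, ec);
    if (ec) return false;
    {
        // Open in|out to keep the existing blocks; truncating first
        // might make the filesystem allocate new ones.
        std::fstream f(path, std::ios::in | std::ios::out | std::ios::binary);
        if (!f) return false;
        std::vector<char> zeros(64 * 1024, 0);
        for (std::uintmax_t done = 0; done < size;) {
            const std::streamsize chunk = static_cast<std::streamsize>(
                std::min<std::uintmax_t>(zeros.size(), size - done));
            if (!f.write(zeros.data(), chunk)) return false;
            done += static_cast<std::uintmax_t>(chunk);
        }
        f.flush();  // flushes C++ buffers only; the OS may still cache
    }
    return std::filesystem::remove(path, ec) && !ec;
}
```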
6 Answers
Here is an interesting paper:
http://www.filesystems.org/docs/secdel/secdel.html
It addresses some issues with overwriting files. In particular, you can't be sure that the newly written data actually went to the same location, nor that data overwritten just a very few times, or even once (on modern media), is truly unrecoverable.
Worst-case scenario: you can't be sure of having done it without physically destroying the drive. You may be running on a journaling filesystem that keeps the original whenever you modify a file, to allow disaster recovery if the modification is interrupted by a power failure or whatever. This can mean that modifying a file moves it on the physical drive, leaving the old location unchanged.
Furthermore, some filesystems deliberately keep the old version around as long as possible to allow it to be recovered. Consider, for example, shadow copies on Windows: when you modify a disk block belonging to a file that is part of a system restore point, the new data is written to a new block and the old one is kept around.
There are APIs to disable shadow copies for a file, directory, or the whole disk (I don't know the details; it might require admin privileges).
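For illustration, one hedged way to do this from C++ is not the (fairly involved) VSS COM API but shelling out to the real vssadmin command-line tool, which deletes existing shadow copies for one volume and must run from an elevated, administrator process:

```cpp
#include <cstdlib>

// Delete existing shadow copies for drive C: by invoking vssadmin.
// Requires administrator rights; /all removes every shadow copy for
// the volume, /quiet suppresses the confirmation prompt.
int main() {
    return std::system("vssadmin delete shadows /for=C: /all /quiet");
}
```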
Another gotcha is filesystem-level compression. If you overwrite a file with random data, chances are you make it less compressible and hence larger on disk, even though it still has the same logical size, so the filesystem might have to relocate it. I don't know offhand whether Windows guarantees to keep using the old blocks for the start of the new, larger file. If you overwrite with zeros, you make the file more compressible, and the new data might not reach as far as the end of the old data.
If the drive has ever been defragged (IIRC Windows nowadays does this in the background by default), then nothing you do to the file necessarily affects copies of the data in previous locations.
shred and similar tools simply don't work under these fairly common conditions. Stretching a point, you can imagine a custom filesystem where all changes are journalled, backed up for future rollback recovery, and copied to off-site backup as soon as possible. I'm not aware of any such system (although of course there are automatic backup programs that run above the filesystem level with the same basic effect), but Windows certainly doesn't have an API to say, "OK, you can delete the off-site backup now", because Windows has no idea that it's happening.
This is even before you consider the possibility that someone has special kit that can detect data on magnetic disks even after it's been overwritten with new data. Opinions vary on how plausible such attacks really are against modern disks, which are so densely packed that there isn't much room for residuals of old values. But it's academic, really, since in most practical circumstances you can't even be sure of overwriting the old data, short of unmounting the drive and overwriting each sector with low-level tools.
Oh, and flash drives are no better: they remap logical sectors to physical sectors, a bit like virtual memory, so that they can cope with failed sectors, do wear-levelling, that sort of thing. So even at a low level, just because you overwrite a particular numbered sector doesn't mean the old data won't pop up in some other numbered sector in the future.
0s and 1s aren't really 0s and 1s. Residual magnetism and other techniques (which I doubt are being used by whoever you're trying to keep the contents from) can be used to recover data after it has been overwritten.
Take a look at this entry; it could be what you're looking for.
EDIT:
To back up my statement:
You should overwrite it with randomly generated bytes, using a decent random number generator or a cryptographic function that produces garbage.
To be really sure everything is overwritten, you could overwrite the same region of the file several times.
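A sketch of that multi-pass random overwrite in C++17 (wipe_with_random is a made-up name, and it assumes the filesystem really does write in place, which the other answers explain may not hold):

```cpp
#include <algorithm>
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <random>
#include <vector>

// Overwrite the file's contents with random bytes `passes` times,
// then delete it. std::mt19937 is NOT a cryptographic generator; a
// real tool would take its bytes from the OS CSPRNG instead.
void wipe_with_random(const std::filesystem::path& path, int passes = 3) {
    const std::uintmax_t size = std::filesystem::file_size(path);
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> byte(0, 255);
    std::vector<char> buf(64 * 1024);

    for (int pass = 0; pass < passes; ++pass) {
        // Open in|out so the existing blocks are reused, not truncated.
        std::fstream f(path, std::ios::in | std::ios::out | std::ios::binary);
        for (std::uintmax_t done = 0; done < size;) {
            const std::streamsize chunk = static_cast<std::streamsize>(
                std::min<std::uintmax_t>(buf.size(), size - done));
            for (std::streamsize i = 0; i < chunk; ++i)
                buf[static_cast<std::size_t>(i)] = static_cast<char>(byte(rng));
            f.write(buf.data(), chunk);
            done += static_cast<std::uintmax_t>(chunk);
        }
        f.flush();  // the OS and the drive may still reorder these writes
    }
    std::filesystem::remove(path);
}
```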
I think this should work.
Delete the file first, then create a file that fills up the remaining free space on the disk. This overwrites whatever data is still lying around in the freed sectors; if you then delete the file you created, it is reasonably safe to say that your file can't be recovered.
Rather than creating one single big file, it is better to create many files of the same size as (or slightly smaller than) the file you want to delete, and to repeat the process several times to increase the amount of data that gets overwritten. A sketch of the idea follows.
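A rough C++17 sketch of this free-space-filling idea (fill_free_space and the fill_N.tmp naming are made up for illustration; the directory must be on the same volume as the deleted file, and this only overwrites space the filesystem chooses to hand out, with no guarantee about relocated or shadow-copied blocks):

```cpp
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

// Create junk files of a fixed size until a write fails because the
// volume is full, then delete them all, freeing the space again.
void fill_free_space(const std::filesystem::path& dir,
                     std::uintmax_t file_size = 64ull * 1024 * 1024) {
    std::vector<std::filesystem::path> junk;
    const std::vector<char> block(1024 * 1024, 0);  // 1 MiB of zeros
    bool disk_full = false;
    for (int i = 0; !disk_full; ++i) {
        auto p = dir / ("fill_" + std::to_string(i) + ".tmp");
        std::ofstream f(p, std::ios::binary);
        if (!f) break;
        junk.push_back(p);
        for (std::uintmax_t done = 0; done < file_size; done += block.size())
            if (!f.write(block.data(),
                         static_cast<std::streamsize>(block.size()))) {
                disk_full = true;  // write failed: the volume is full
                break;
            }
    }
    for (const auto& p : junk)  // delete the junk files
        std::filesystem::remove(p);
}
```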
It's better to shred the data first before overwriting: get the memory address and swap the locations, and after that overwrite the data.