Writing a file is very fast, but overwriting it takes much longer
I have been seeing a few performance problems with a PHP script on a Linux Fedora Core 11 box, so I was running some commands to look for a bottleneck. One thing I noticed was that writing a file is pretty quick:
[root@localhost ~]# dd if=/dev/zero of=/root/myGfile bs=1024K count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 1.0817 s, 969 MB/s
But overwriting it takes much longer:
[root@localhost ~]# dd if=/dev/zero of=/root/myGfile bs=1024K count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 23.0658 s, 45.5 MB/s
Why is that? (I can repeat those results.)
1 Answer
The first time you write the file, it's buffered in system memory.
The second time you write the file, the file is truncated, which for some reason causes all of the dirty pages to get written out to disk. Yes, this seems stupid: why write out file data when that file just got truncated to length zero?
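One rough way to watch the buffering happen (a sketch, assuming a Linux box where /proc/meminfo is available; /tmp/dirtydemo is a hypothetical scratch path, and the exact numbers depend on your vm.dirty_* settings) is to check the kernel's Dirty counter before and after the write:

```shell
# "Dirty" is file data sitting in the page cache that has not yet been
# written to disk.
grep Dirty /proc/meminfo                                  # baseline
dd if=/dev/zero of=/tmp/dirtydemo bs=1M count=64 2>/dev/null
grep Dirty /proc/meminfo                                  # typically much larger now
sync                                                      # force writeback
grep Dirty /proc/meminfo                                  # drains back down
```

After sync, the dirty pages have been flushed, so a subsequent overwrite of the file no longer has a large writeback to wait on.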
You can demonstrate this by making the second dd only write, say, 4k of data. It takes just as long.
You can also force dd to not truncate by using conv=notrunc.
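A sketch of that workaround (using /tmp and a small size for illustration): with conv=notrunc, dd opens the output file without truncating it, so the existing cached pages are overwritten in place rather than the truncation triggering a writeback of all the dirty pages first.

```shell
# First write: creates the file (buffered in the page cache).
dd if=/dev/zero of=/tmp/myGfile bs=1M count=16 2>/dev/null
# Overwrite without truncating: skips the truncate-induced flush.
dd if=/dev/zero of=/tmp/myGfile bs=1M count=16 conv=notrunc 2>/dev/null
ls -l /tmp/myGfile   # still 16 MiB; overwritten in place, not recreated
```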