Right, RAM disks went the way of the dodo when Windows acquired a file system cache. You didn't include any relevant info about the programs and files, so guessing is required. If the files are large, then the file system cache is probably not big enough to hold all the file data. The typical diagnostic is the programs completing quickly at first, then suddenly taking a long time once the cache is holding too much data that's waiting to be written to the disk. The cure for that is a 64-bit operating system with lots of RAM.
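You can see that diagnostic directly by timing successive writes: the early ones are absorbed by the cache and return almost instantly, and the stall shows up once too much dirty data has piled up. A minimal sketch (the path and sizes are placeholders; the total written has to exceed what the cache can hold before the effect appears):

    import os
    import time

    PATH = r"C:\temp\cache_test.bin"    # hypothetical scratch file
    CHUNK = b"\0" * (64 * 1024 * 1024)  # 64 MiB per write

    with open(PATH, "wb") as f:
        for i in range(64):             # up to 4 GiB total
            t0 = time.perf_counter()
            f.write(CHUNK)
            print(f"write {i:2d}: {(time.perf_counter() - t0) * 1000:8.1f} ms")
    os.remove(PATH)

If the per-write times jump from milliseconds to seconds partway through, you are watching the cache fill up.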
If the files are very small, then the processing time would be dominated by each process getting created and initialized before it can pump the data. This is unlikely to be the problem, since that normally doesn't take more than a couple of dozen milliseconds. But a program that, say, checks for an update for itself by contacting an Internet server isn't unusual.
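Process startup cost is also easy to measure rather than guess at. A sketch, assuming the programs can be started from a script ("cmd /c exit" stands in for the cheapest process Windows can create; substitute your real program to include its own initialization):

    import subprocess
    import time

    N = 50
    t0 = time.perf_counter()
    for _ in range(N):
        # About the cheapest process Windows can start; swap in your program.
        subprocess.run(["cmd", "/c", "exit"], stdout=subprocess.DEVNULL)
    print(f"average create+run+exit: {(time.perf_counter() - t0) / N * 1000:.1f} ms")

Tens of milliseconds per process is normal; whole seconds points at the program doing something extra at startup, like that update check.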
Which leaves the likeliest explanation: disk I/O is the bottleneck. The cache cannot make the first read of a file any faster; that data has to come off the disk, and it comes off at glacial hard disk speeds. You need a faster disk. SSDs are nice.
We don't really know until you measure.
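One cheap measurement that separates the disk from the cache: time the same file read twice. A sketch (the path is a placeholder for one of your input files, and the first number is only meaningful when the file is not already cached, e.g. right after a reboot):

    import time

    PATH = r"C:\data\input.bin"  # hypothetical: one of your input files

    def timed_read(path):
        # Read the whole file in 1 MiB chunks; return elapsed seconds.
        t0 = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(1 << 20):
                pass
        return time.perf_counter() - t0

    print(f"first read : {timed_read(PATH):.2f} s")   # comes off the disk
    print(f"second read: {timed_read(PATH):.2f} s")   # served from the cache

A big gap between the two numbers confirms the disk as the bottleneck; no gap means the time is going somewhere else and a profiler comes out next.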