Buffered files (for faster disk access)
I am working with large files and writing directly to disk is slow. Because the file is large, I cannot load it into a TMemoryStream.
TFileStream is not buffered, so I want to know if there is a custom library that can offer buffered streams, or whether I should rely only on the buffering offered by the OS. Is the OS buffering reliable? I mean, if the cache is full, an old file (mine) might be flushed from the cache to make room for a new file.
My file is in the GB range. It contains millions of records. Unfortunately, the records are not of fixed size, so I have to do millions of reads (between 4 and 500 bytes each). The reading (and the writing) is sequential; I don't jump up and down in the file (which I think is ideal for buffering).
In the end, I have to write such a file back to disk (again, millions of small writes).
David provided his personal library, which offers buffered disk access.
Speed tests:
Input file: 317MB.SFF
Delphi stream: 9.84 sec
David's stream: 2.05 sec
______________________________________
More tests:
Input file: input2_700MB.txt
Lines: 19 million
Compiler optimization: ON
I/O check: On
FastMM: release mode
**HDD**
Reading: **linear** (ReadLine) (PS: multiply the times by 10)
We see a clear performance drop at 8KB. Recommended: 16 or 32KB.
Time: 618 ms Cache size: 64KB.
Time: 622 ms Cache size: 128KB.
Time: 622 ms Cache size: 24KB.
Time: 622 ms Cache size: 32KB.
Time: 622 ms Cache size: 64KB.
Time: 624 ms Cache size: 256KB.
Time: 625 ms Cache size: 18KB.
Time: 626 ms Cache size: 26KB.
Time: 626 ms Cache size: 1024KB.
Time: 626 ms Cache size: 16KB.
Time: 628 ms Cache size: 42KB.
Time: 644 ms Cache size: 8KB. <--- no difference until 8K
Time: 664 ms Cache size: 4KB.
Time: 705 ms Cache size: 2KB.
Time: 791 ms Cache size: 1KB.
Time: 795 ms Cache size: 1KB.
**SSD**
We see a small improvement as we move toward larger buffers. Recommended: 16 or 32KB.
Time: 610 ms Cache size: 128KB.
Time: 611 ms Cache size: 256KB.
Time: 614 ms Cache size: 32KB.
Time: 623 ms Cache size: 16KB.
Time: 625 ms Cache size: 66KB.
Time: 639 ms Cache size: 8KB. <--- definitely not good with 8K
Time: 660 ms Cache size: 4KB.
______
Reading: **Random** (ReadInteger) (100000 reads)
SSD
Time: 064 ms. Cache size: 1KB. Count: 100000. RAM: 13.27 MB <-- probably the best buffer size for ReadInteger is 4 bytes!
Time: 067 ms. Cache size: 2KB. Count: 100000. RAM: 13.27 MB
Time: 080 ms. Cache size: 4KB. Count: 100000. RAM: 13.27 MB
Time: 098 ms. Cache size: 8KB. Count: 100000. RAM: 13.27 MB
Time: 140 ms. Cache size: 16KB. Count: 100000. RAM: 13.27 MB
Time: 213 ms. Cache size: 32KB. Count: 100000. RAM: 13.27 MB
Time: 360 ms. Cache size: 64KB. Count: 100000. RAM: 13.27 MB
Conclusion: don't use it for "random" reading
Update 2020:
When reading sequentially, the new System.Classes.TBufferedFileStream seems to be 70% faster than the library presented above.
______________________________________
Windows file caching is very effective, especially if you are using Vista or later. `TFileStream` is a loose wrapper around the Windows `ReadFile()` and `WriteFile()` API functions, and for many use cases the only thing faster is a memory-mapped file.

However, there is one common scenario where `TFileStream` becomes a performance bottleneck: if you read or write small amounts of data with each call to the stream's read or write functions. For example, if you read an array of integers one item at a time, then you incur a significant overhead by reading 4 bytes at a time in the calls to `ReadFile()`.

Again, memory-mapped files are an excellent way to solve this bottleneck, but the other commonly used approach is to read a much larger buffer, many kilobytes say, and then resolve future reads of the stream from this in-memory cache rather than with further calls to `ReadFile()`. This approach only really works for sequential access.

From the use pattern described in your updated question, I think you may find the following classes would improve performance for you:
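A minimal sketch of such a read-cached stream, assuming read-only, mostly sequential access (the class and identifier names here are illustrative, not David's actual library):

```pascal
unit ReadCachedStream;

interface

uses
  Classes, SysUtils;

type
  // A TStream that fetches the file in large blocks and serves small
  // Read() calls from an in-memory cache.
  TReadCachedFileStream = class(TStream)
  private
    FFile: TFileStream;
    FCache: array of Byte;
    FCacheStart: Int64;    // file offset of FCache[0]
    FCacheCount: Integer;  // number of valid bytes in FCache
    FPosition: Int64;      // logical position of this stream
  public
    constructor Create(const AFileName: string; ACacheSize: Integer = 32 * 1024);
    destructor Destroy; override;
    function Read(var Buffer; Count: Longint): Longint; override;
    function Write(const Buffer; Count: Longint): Longint; override;
    function Seek(const Offset: Int64; Origin: TSeekOrigin): Int64; override;
  end;

implementation

constructor TReadCachedFileStream.Create(const AFileName: string; ACacheSize: Integer);
begin
  inherited Create;
  FFile := TFileStream.Create(AFileName, fmOpenRead or fmShareDenyWrite);
  SetLength(FCache, ACacheSize);
end;

destructor TReadCachedFileStream.Destroy;
begin
  FFile.Free;
  inherited Destroy;
end;

function TReadCachedFileStream.Read(var Buffer; Count: Longint): Longint;
var
  Dest: PByte;
  Offset, Chunk: Integer;
begin
  Result := 0;
  Dest := PByte(@Buffer);
  while Count > 0 do
  begin
    // Refill the cache when the current position falls outside it.
    if (FPosition < FCacheStart) or
       (FPosition >= FCacheStart + FCacheCount) then
    begin
      FCacheStart := FPosition;
      FFile.Position := FCacheStart;
      FCacheCount := FFile.Read(FCache[0], Length(FCache));
      if FCacheCount = 0 then
        Exit; // end of file
    end;
    // Copy as much as possible out of the cache.
    Offset := Integer(FPosition - FCacheStart);
    Chunk := FCacheCount - Offset;
    if Chunk > Count then
      Chunk := Count;
    Move(FCache[Offset], Dest^, Chunk);
    Inc(FPosition, Chunk);
    Inc(Dest, Chunk);
    Inc(Result, Chunk);
    Dec(Count, Chunk);
  end;
end;

function TReadCachedFileStream.Write(const Buffer; Count: Longint): Longint;
begin
  raise EStreamError.Create('TReadCachedFileStream is read-only');
end;

function TReadCachedFileStream.Seek(const Offset: Int64; Origin: TSeekOrigin): Int64;
begin
  case Origin of
    soBeginning: FPosition := Offset;
    soCurrent:   FPosition := FPosition + Offset;
    soEnd:       FPosition := FFile.Size + Offset;
  end;
  Result := FPosition;
end;

end.
```

The key design point is that `Seek` only moves the logical position; the physical `ReadFile()` happens once per cache refill instead of once per record.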
______________________________________
The `TFileStream` class internally uses the `CreateFile` function, which always uses a buffer to manage the file unless you specify the `FILE_FLAG_NO_BUFFERING` flag (be aware that you can't specify this flag directly with `TFileStream`). For more information you can check these links:

Windows File Buffering

You can also try the `TGpHugeFileStream`, which is part of the `GpHugeFile` unit from Primoz Gabrijelcic.
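If you did want to experiment with that flag, here is a sketch of one way to apply it (the file name is a placeholder; note this is usually the opposite of what this question needs, because unbuffered handles demand sector-aligned reads):

```pascal
uses
  Windows, Classes, SysUtils;

var
  Handle: THandle;
  Stream: THandleStream;
begin
  // TFileStream cannot pass FILE_FLAG_NO_BUFFERING, so open the handle
  // yourself with CreateFile and wrap it in a THandleStream.
  Handle := CreateFile('C:\data\huge.dat', GENERIC_READ, FILE_SHARE_READ,
    nil, OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, 0);
  if Handle = INVALID_HANDLE_VALUE then
    RaiseLastOSError;
  Stream := THandleStream.Create(Handle);
  try
    // Caution: with FILE_FLAG_NO_BUFFERING every read must be a multiple
    // of the volume sector size and sector-aligned, which makes it a poor
    // fit for 4..500-byte records.
  finally
    Stream.Free;
    CloseHandle(Handle);
  end;
end;
```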
______________________________________
For everybody's interest: Embarcadero added `TBufferedFileStream` (see the documentation) in the latest release, Delphi 10.1 Berlin. Unfortunately, I can't say how it competes with the solutions given here, as I haven't bought the update yet. I am also aware that the question was asked about Delphi 7, but I am sure the reference to Delphi's own implementation will be useful in the future.
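A minimal usage sketch, assuming the 10.1 Berlin RTL signature (a `TFileStream`-style constructor plus an optional buffer size; the file name is a placeholder):

```pascal
uses
  System.Classes, System.SysUtils;

var
  Stream: TBufferedFileStream;
  Value: Integer;
begin
  // Same constructor shape as TFileStream, plus a buffer size.
  Stream := TBufferedFileStream.Create('huge.dat',
    fmOpenRead or fmShareDenyWrite, 64 * 1024);
  try
    while Stream.Read(Value, SizeOf(Value)) = SizeOf(Value) do
    begin
      // Small sequential reads are now served from the internal buffer.
    end;
  finally
    Stream.Free;
  end;
end;
```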
______________________________________
If you have this kind of code a lot (a loop that keeps re-reading the stream's size):
You can optimize it by caching FileStream.Size in a variable, which will speed things up. Stream.Size uses three virtual function calls to find out the actual size.
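A hedged reconstruction of the pattern being described (`FileStream` and `ProcessRecord` are stand-ins for your own stream and per-record handler):

```pascal
var
  StreamSize: Int64;
begin
  // Slow: FileStream.Size is re-evaluated on every iteration, and
  // TStream's GetSize is implemented with three (virtual) Seek calls.
  while FileStream.Position < FileStream.Size do
    ProcessRecord(FileStream);

  // Faster: read the size once, then compare against the local variable.
  StreamSize := FileStream.Size;
  while FileStream.Position < StreamSize do
    ProcessRecord(FileStream);
end;
```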
______________________________________
So a TFileStream is too slow because it reads everything from disk, and a TMemoryStream cannot be large enough (if you say so).
Then why not use a TFileStream that loads chunks of up to 100 MB into a TMemoryStream for processing? This could be done by a simple pre-parser that just looks at the size headers in your data, but that would reinstate your problem.
It's not a bad thing to have your code realize its big file can misbehave, and to avoid that altogether: allow it to handle (incomplete) chunks from the TMemoryStream. This also brings threading enhancements into view (HDD access no longer being the bottleneck), if needed.
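One way the chunking idea could look in practice, as a sketch (the 100 MB slice size comes from the answer above; `ProcessChunk` and the boundary handling are assumptions):

```pascal
const
  ChunkSize = 100 * 1024 * 1024; // 100 MB per slice, as suggested above
var
  Source: TFileStream;
  Chunk: TMemoryStream;
  Remaining, ToRead: Int64;
begin
  Source := TFileStream.Create('huge.dat', fmOpenRead or fmShareDenyWrite);
  Chunk := TMemoryStream.Create;
  try
    Remaining := Source.Size;
    while Remaining > 0 do
    begin
      if Remaining > ChunkSize then
        ToRead := ChunkSize
      else
        ToRead := Remaining;
      Chunk.Clear;
      Chunk.CopyFrom(Source, ToRead); // one large sequential disk read
      Chunk.Position := 0;
      // ProcessChunk(Chunk); // hypothetical handler; it must cope with a
      // record that straddles the chunk boundary
      Dec(Remaining, ToRead);
    end;
  finally
    Chunk.Free;
    Source.Free;
  end;
end;
```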