QFile seek performance
It appears that QFile, when working with a regular file (not a special Linux I/O device file), is random access, meaning that a seek operation has constant time complexity, O(1). However, I haven't been able to confirm this. In general, when jumping to a specific position in a file (for writing or reading), do std::fstream and QFile provide constant running time?
The short answer is "yes, for practical purposes". The long answer is... it's complicated.

Seeking on a file stream ultimately calls lseek() on the underlying file descriptor, whose performance depends on the kernel. The lseek() call itself merely updates the file offset; the cost shows up when data at the new position is actually read or written.

That cost depends on which file system you are using and on how large the file is. As files get larger, accessing a random position requires chasing more levels of "indirect" indexing blocks. But even for files up to 2^64 bytes, the number of levels is just a handful.

So in theory, seeking is roughly O(log n); in practice, it is essentially constant on a modern file system.