40 million page faults. How do I solve this?

Posted 2024-09-28 02:57:27


I have an application that loads 170 files (let's say they are text files) from disk into individual objects and keeps them in memory all the time. The memory is allocated once, when I load those files from disk, so there is no memory fragmentation involved. I also use FastMM to make sure my application never leaks memory.

The application compares all these files with each other to find similarities. Over-simplified, we can say that we compare text strings, but the algorithm is way more complex, as I have to allow some differences between strings. Each file is about 300KB; loaded in memory (in the object that holds it), it takes about 0.4MB of RAM. So the running app takes about 60MB of RAM (working set). It processes the data for about 15 minutes. The thing is that it generates over 40 million page faults.

Why? I have about 2GB of free RAM. From what I know, page faults are slow. How much are they slowing down my program?
How can I optimize the program to reduce these page faults? I guess it has something to do with data locality. Does anybody know some example algorithms for this (Delphi)?

Update:
But looking at the number of page faults (no other application in Task Manager comes close to mine, not even by far), I guess that I could increase the speed of my application if I manage to optimize the memory layout (i.e., reduce the page faults).


Delphi 7, Win 7 32 bit, RAM 4GB (3GB visible, 2GB free).
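One standard data-locality technique for all-pairs comparisons is loop tiling: instead of comparing file i against every other file (which touches every object's pages on each pass), process the files in blocks small enough to stay resident, so each block of data is faulted in once per block pair rather than once per comparison. A minimal sketch, in Python rather than Delphi for brevity (the loop structure carries over directly; `compare` is a hypothetical stand-in for the real similarity check):

```python
def compare(a, b):
    # hypothetical stand-in for the real, more complex similarity check
    return sum(1 for x, y in zip(a, b) if x == y)

def all_pairs_tiled(files, block=8):
    """Yield (i, j, score) for every unordered pair, visiting the data
    block-by-block so the working set per step is only 2*block files."""
    n = len(files)
    for bi in range(0, n, block):
        for bj in range(bi, n, block):
            for i in range(bi, min(bi + block, n)):
                # start j after i to visit each unordered pair once
                j0 = max(i + 1, bj)
                for j in range(j0, min(bj + block, n)):
                    yield i, j, compare(files[i], files[j])

files = ["abcd", "abce", "xbcd", "abcf"]
pairs = list(all_pairs_tiled(files, block=2))
assert len(pairs) == 6  # all C(4,2) unordered pairs, each exactly once
```

With 170 files of 0.4MB each, a block size of around 30 would keep each step's working set near 24MB while still visiting every pair exactly once.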


Comments (3)

z祗昰~ 2024-10-05 02:57:27


Caveat - I'm only addressing the page faulting issue.

I cannot be sure, but have you considered using memory-mapped files? That way, Windows will use the files themselves as the paging store (rather than the main paging file, pagefile.sys). If the files are read-only, then the number of page faults should theoretically decrease, because the pages won't need to be written out to disk via the paging file; Windows will just load the data from the file itself as needed.

Now, to reduce the files paging in and out, you need to try to go through the data in one direction, so that as new data is read, older pages can be discarded forever. This is where you trade off going over the files again against caching data - the cache has to be stored somewhere.

Note that memory-mapped files are how Windows loads .dlls and .exes, amongst other things. I've used them to scan through gigabyte files without hitting memory limits (we had MBs of RAM in those days, not GBs).

However, from the data you describe, I'd suggest that not being able to go back over files will reduce the amount of repaging going on.
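To illustrate the idea, here is a minimal read-only memory-mapped scan, sketched in Python (whose `mmap` module wraps CreateFileMapping/MapViewOfFile on Windows; a Delphi 7 port would call those Win32 APIs directly). The throwaway temp file stands in for one of the 170 data files:

```python
import mmap
import os
import tempfile

# create a throwaway file to map (stands in for one of the data files)
fd, path = tempfile.mkstemp()
os.write(fd, b"hello page fault world" * 1000)
os.close(fd)

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # pages are faulted in from the file itself, not pagefile.sys;
        # read-only pages can be discarded without any write-back
        count = bytes(m).count(b"fault")

os.remove(path)
print(count)  # 1000
```

The key property is the `ACCESS_READ` mapping: the OS can drop those pages at any time and re-read them from the source file, so no dirty pages ever hit the main paging file.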

蓝眼睛不忧郁 2024-10-05 02:57:27


On my machine, most page faults are reported for Developer Studio, which is reported to have 4M page faults after 30+ minutes of total CPU time. You get 10 times more in half the time, and memory is scarce on my system. So 40M faults seems like a lot.

It could just maybe be that you have a memory leak.

The working set is only the physical memory in use by your application. If you leak memory and don't touch it, it will get paged out. You will see the virtual memory usage (or page file use) increase. These pages might be swapped back in when the heap manager walks the heap, only to be swapped out again by Windows.

Because you have a lot of RAM, the swapped-out pages will stay in physical memory, as nobody else needs them. (A page recovered from RAM counts as a soft fault; one recovered from disk counts as a hard fault.)
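The soft/hard distinction can be observed directly. POSIX `getrusage` reports the two counters separately (on Windows, Task Manager's "Page Faults" column is the combined count from `GetProcessMemoryInfo`, so 40M faults there may be almost entirely cheap soft faults). A small sketch in Python on a POSIX system:

```python
import resource

before = resource.getrusage(resource.RUSAGE_SELF)

# touching freshly allocated pages triggers soft (minor) faults:
# the kernel maps a zeroed page on first write, no disk involved
buf = bytearray(32 * 1024 * 1024)  # 32 MB
for i in range(0, len(buf), 4096):  # write one byte per 4 KB page
    buf[i] = 1

after = resource.getrusage(resource.RUSAGE_SELF)
print("soft faults:", after.ru_minflt - before.ru_minflt)
print("hard faults:", after.ru_majflt - before.ru_majflt)
```

On a machine with free RAM, the soft-fault delta is in the thousands while the hard-fault delta stays at or near zero, which is exactly the pattern described above: faults that are counted but cost almost nothing.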

余生再见 2024-10-05 02:57:27


Do you use an exponential resizing scheme?

If you grow a block of memory in too-small increments while loading, it might constantly request large blocks from the system, copy the data over, and then release the old block (assuming that FastMM (de)allocates very large blocks directly from the OS).

Maybe somehow this causes a loop where the OS releases memory from your app's process and then adds it again, causing page faults on first write.

Also avoid the TStringList.Load* methods for very large files; IIRC these consume twice the space needed.
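The cost difference is easy to quantify. A minimal sketch (in Python rather than Delphi; the arithmetic is language-independent) counting how many reallocate-and-copy cycles it takes to grow a buffer to one ~300KB file, with fixed-step versus doubling growth:

```python
def grow(target, step=None):
    """Count reallocations needed to reach `target` bytes of capacity,
    growing linearly by `step` bytes, or geometrically by doubling."""
    capacity, reallocs = 4096, 0
    while capacity < target:
        capacity = capacity + step if step else capacity * 2
        reallocs += 1
    return reallocs

target = 300 * 1024  # one ~300 KB file
print(grow(target, step=4096))  # linear 4 KB steps: 74 reallocations
print(grow(target))             # doubling: 7 reallocations
```

Each of those reallocations can mean a fresh block from the OS, a full copy (touching every page of both blocks), and a release, so fixed-step growth multiplies the page-fault count while doubling keeps it logarithmic.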
