Does a Delphi application running on Windows Server 2008 (SP1) fail to release memory?
We have a D2007 application whose memory footprint grows steadily when running on Windows Server 2008 (x64, sp1).
It behaves normally on Windows Server 2003 (x32 or x64), XP, etc... where it goes up and down as expected.
We have tried both the bundled memory manager and the latest FastMM4 4.92, with the same results.
Has anyone monitored the memory usage of a Delphi app on Win2008 and can confirm this behavior? Or does anyone have a clue?
Clarifications:
- no memory leaks in the usual sense (and yes, I'm quite familiar with FastMM et al.)
- memory usage was monitored with Process Explorer; both Virtual Memory (Private Bytes) and Physical Memory (Private Working Set) are growing on Win2008
- memory consumption kept growing even under memory pressure (that's how we came to investigate: it caused a failure, but only on Win2008 boxes)
Update: the //** replaced **// code is much simpler than our app but shows the same behavior.
Creating a list of 10,000,000 objects and then 10,000,000 interfaces, executed twice, grows the used memory by ~60 MB, and roughly 300 MB more after 100 more executions, on Windows Server 2008; on XP the memory just returns to where it was.
If you launch multiple instances, the memory is not released to allow the other instances to run. Instead, the page file grows and the server crawls...
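For reference, the test described above would look roughly like the following sketch (the original test code isn't shown in the question, so the names and types here are assumptions; TObjectList is from Contnrs, TInterfaceList from Classes):

```delphi
// Hypothetical reconstruction of the object/interface stress test
// described above -- NOT the original code from the question.
// uses Classes, Contnrs;
procedure StressTest;
var
  objList: TObjectList;
  intfList: IInterfaceList;
  idx: Integer;
begin
  objList := TObjectList.Create(True); // owns its objects
  try
    for idx := 1 to 10000000 do
      objList.Add(TObject.Create);
  finally
    objList.Free; // frees all 10,000,000 objects
  end;

  intfList := TInterfaceList.Create;
  for idx := 1 to 10000000 do
    intfList.Add(TInterfacedObject.Create);
  intfList := nil; // reference counting releases all 10,000,000 interfaces
end;
```

Running this repeatedly while watching Private Bytes in Process Explorer should reproduce the plateau on Server 2008 versus the up-and-down pattern on XP.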
Update 2: see QC report 73347
After further investigation, we have tracked it down to Critical Sections as shown in the code below.
Put that code into a simple VCL application with a button, and monitor it with Process Explorer:
it starts at ~2.6 MB and, after 5 runs (clicks on the button), it stays at ~118.6 MB.
That's 116 MB lost in 5 executions.
//***********************
const
  CS_NUMBER = 10000000;

type
  TCSArray = array[1..CS_NUMBER] of TRTLCriticalSection;
  PCSArray = ^TCSArray;

procedure TestStatic;
var
  csArray: PCSArray;
  idx: Integer;
begin
  New(csArray);
  for idx := 1 to Length(csArray^) do
    InitializeCriticalSection(csArray^[idx]);
  for idx := 1 to Length(csArray^) do
    DeleteCriticalSection(csArray^[idx]);
  Dispose(csArray);
end;

procedure TestDynamic(const Number: Integer);
var
  csArray: array of TRTLCriticalSection;
  idx: Integer;
begin
  SetLength(csArray, Number);
  for idx := Low(csArray) to High(csArray) do
    InitializeCriticalSection(csArray[idx]);
  for idx := Low(csArray) to High(csArray) do
    DeleteCriticalSection(csArray[idx]);
end;

procedure TForm4.Button1Click(Sender: TObject);
begin
  ReportMemoryLeaksOnShutdown := True;
  TestStatic;
  TestDynamic(CS_NUMBER);
end;
8 Answers
There is a new sysinternals tool called VMMap which visualizes the allocated memory. Maybe it could show you what the big memory blocks are.
Actually, Microsoft made a change to Critical Sections to add some debug information. This debug memory is not released until the end of the application, but is somehow cached and reused, which is why memory usage can plateau after a while.
The solution, if you want to create a lot of Critical Sections without paying this memory penalty, is to patch the VCL code to replace calls to InitializeCriticalSection with calls to InitializeCriticalSectionEx, passing it the flag CRITICAL_SECTION_NO_DEBUG_INFO to avoid the creation of the debug structure.
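A minimal sketch of that workaround follows. Note the assumptions: older Delphi RTLs (including D2007's Windows unit) don't declare InitializeCriticalSectionEx, so it is imported by hand here, and the flag value is taken from the Windows SDK headers; the API itself requires Vista/Server 2008 or later.

```delphi
// Sketch: initialize a critical section without the Vista+/Server 2008
// debug-info block. Requires Windows Vista / Server 2008 or later.
// uses Windows, SysUtils;
const
  CRITICAL_SECTION_NO_DEBUG_INFO = $01000000; // from the Windows SDK headers

// Not declared in older Delphi RTLs, so import it manually:
function InitializeCriticalSectionEx(var lpCriticalSection: TRTLCriticalSection;
  dwSpinCount, Flags: DWORD): BOOL; stdcall; external kernel32;

procedure InitCSWithoutDebugInfo(var cs: TRTLCriticalSection);
begin
  if not InitializeCriticalSectionEx(cs, 0, CRITICAL_SECTION_NO_DEBUG_INFO) then
    RaiseLastOSError;
end;
```

Patching the VCL then amounts to routing its InitializeCriticalSection calls through a wrapper like this one.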
Did you include FastMM with full debug mode? Just include the FastMM4 unit directly in your project and set
If nothing is reported, maybe everything is normally freed on program exit (perhaps because of reference counting). You could use AQTime to monitor memory in real time. With this application you can see the byte counts for each class name and for the rest of the used memory; maybe you can see who uses the memory. The time-limited demo version is enough for this job.
Are you referring to the Private Bytes, Virtual Size or the Working Set? Run Process Explorer from SysInternals to monitor the memory for a better idea of what is going on.
I don't have any specific experience with this (although I am running 2008 x64 SP1, so could test it) but I am going to suggest you create a test application that allocates a bunch of memory and then free it. Run Process Explorer from SysInternals to monitor the memory.
If your test application reproduces the same behavior, then try creating some memory pressure by allocating memory in another process - so much that it will fail unless the previously freed memory in the first process is reclaimed.
If that continues to fail, then try a different memory manager. Maybe it is FastMM that is doing it.
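One rough way to create that pressure from a second process, sketched under the assumption that a console tool is acceptable, is to commit memory until allocation fails and then hold it:

```delphi
// Rough sketch of a pressure tool: commit memory in 64 MB chunks until
// VirtualAlloc fails, then hold everything while you watch the first
// process in Process Explorer.
// uses Windows, SysUtils;
procedure AllocateUntilFailure;
var
  blocks: array of Pointer;
  p: Pointer;
begin
  SetLength(blocks, 0);
  repeat
    p := VirtualAlloc(nil, 64 * 1024 * 1024, MEM_COMMIT, PAGE_READWRITE);
    if p <> nil then
    begin
      SetLength(blocks, Length(blocks) + 1);
      blocks[High(blocks)] := p;
    end;
  until p = nil;
  WriteLn(Format('Committed %d MB before failure', [Length(blocks) * 64]));
  ReadLn; // keep the memory committed until Enter is pressed
end;
```

On a 32-bit process this stops near the 2 GB address-space limit, so run several instances if you need system-wide pressure.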
Check if you have this issue (this is another issue, unrelated to the one I've mentioned in the comments to your question).
I wrote this code to correct this problem in my applications.
As in the FastCode case, to make the fix run you must put the unit as the first unit of your project,
like uRedirecionamentos in this case:
In addition to Alexander's answer: usually this is called "heap fragmentation".
Note that FastMM is supposed to be more resilient and faster overall, but if the original app was tuned for the D7 memmanager, FastMM might actually perform worse.
Well, memory usage can increase even if there is no memory leak in your application. In those cases it is possible that you are leaking another resource. For example, your code might allocate a bitmap and, though it releases all objects, forget to finalize some HBITMAP.
FastMM will tell you that you have no memory leak in your application, since you've freed all of your objects and data. But you can still leak other types of resources (in my example - GDI objects), and leaking those can affect your memory too.
I suggest you try another tool, one that checks not only memory leaks but other types of leaks too. I think AQTime is capable of doing that, but I'm not sure.
Another possible reason for this behaviour is memory fragmentation. Suppose you have allocated 2000 objects of 1 MB in size (let's forget for a minute about MM overhead and the presence of other objects in user space). Now you have a full 2 GB of busy memory. Now suppose that you free all the even objects, so you have a "striped" memory space where 1 MB busy and free blocks alternate. Though you now have 1 GB of free memory, you are not able to allocate memory for any 2 MB object, since the maximum size of a free block is only 1 MB (but you do have 1000 such blocks ;) ).
If the memory manager used blocks larger than 1 MB for your objects, then it cannot release those blocks back to the OS when you free your even objects:
those large [...] blocks are half-busy, so the MM cannot give them to the OS. If you then ask for another block > 1 MB, the MM will need to allocate yet another block from the OS:
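The scenario described above can be sketched like this (sizes scaled down to 1 MB blocks; purely illustrative, and whether the holes are actually reusable depends on the memory manager's block strategy):

```delphi
// Illustration of the fragmentation scenario above, scaled down:
// allocate many equal blocks, free every other one, then request one
// block twice the size. Total free memory is ample, but no single
// contiguous hole may be large enough to satisfy the request.
procedure FragmentationDemo;
const
  BLOCK = 1024 * 1024; // 1 MB
  COUNT = 200;
var
  blocks: array[0..COUNT - 1] of Pointer;
  idx: Integer;
  big: Pointer;
begin
  // Allocate 200 MB as 1 MB blocks.
  for idx := 0 to COUNT - 1 do
    GetMem(blocks[idx], BLOCK);
  // Free every other block: ~100 MB is now free, but striped.
  for idx := 0 to COUNT - 1 do
    if idx mod 2 = 0 then
      FreeMem(blocks[idx]);
  // A 2 MB request cannot fit into any single 1 MB hole, so the MM
  // may have to grow its address space instead of reusing the holes.
  GetMem(big, 2 * BLOCK);
  // Clean up.
  FreeMem(big);
  for idx := 0 to COUNT - 1 do
    if idx mod 2 = 1 then
      FreeMem(blocks[idx]);
end;
```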
Note that these are just examples of increasing memory usage even though you have no memory leak. I am not saying that you have the EXACT situation :D