Why can't deleted memory be reused?

I am using C++ on Windows 7 with MSVC 9.0, and have also been able to test and reproduce on Windows XP SP3 with MSVC 9.0.

If I allocate 1 GB of 0.5 MB-sized objects, when I delete them everything is OK and behaves as expected. However, if I allocate 1 GB of 0.25 MB-sized objects, when I delete them the memory remains reserved (yellow in Address Space Monitor) and from then on can only be used for allocations smaller than 0.25 MB.

This simple code will let you test both scenarios by changing which struct is typedef'd. After it has allocated and deleted the structs, it will then allocate 1 GB of 1 MB char buffers to see if the char buffers will use the memory that the structs once occupied.

struct HalfMegStruct
{
    HalfMegStruct():m_Next(0){}

    /* return the number of objects needed to allocate one gig */
    static int getIterations(){ return 2048; }

    int m_Data[131071];
    HalfMegStruct* m_Next;
};

struct QuarterMegStruct
{
    QuarterMegStruct():m_Next(0){}

    /* return the number of objects needed to allocate one gig */
    static int getIterations(){ return 4096; }

    int m_Data[65535];
    QuarterMegStruct* m_Next;
};

// which struct to use
typedef QuarterMegStruct UseType;

int main()
{
    // build a singly linked list of UseType objects totalling ~1 GB
    UseType* first = new UseType;
    UseType* current = first;

    for ( int i = 0; i < UseType::getIterations(); ++i )
        current = current->m_Next = new UseType;

    // walk the list, deleting every node
    while ( first->m_Next )
    {
        UseType* temp = first->m_Next;
        delete first;
        first = temp;
    }

    delete first;

    // now try to reuse that memory with 1 GB of 1 MB char buffers
    for ( unsigned int i = 0; i < 1024; ++i )
        // one meg buffer; I'm aware this is a leak, but it's for illustrative purposes. 
        new char[ 1048576 ]; 

    return 0;
}

Below you can see my results from within Address Space Monitor. Let me stress that the only difference between these two end results is the size of the structs being allocated up to the 1 GB marker.

[Address Space Monitor screenshots: the "Quarter Meg" and "Half Meg" end states]

This seems like quite a serious problem to me, and one that many people could be suffering from without even knowing it.

  • So is this by design or should this be considered a bug?
  • Can I make small deleted objects actually be free for use by larger allocations?
  • And more out of curiosity, does a Mac or a Linux machine suffer from the same problem?

Comments (5)

厌倦 2024-11-02 05:34:51

I cannot positively state this is the case, but this does look like memory fragmentation (in one of its many forms). The allocator (malloc) might be keeping buckets of different sizes to enable fast allocation; after you release the memory, instead of directly giving it back to the OS, it keeps the buckets so that later allocations of the same size can be served from the same memory. If this is the case, the memory would be available for further allocations of the same size.

This type of optimization is usually disabled for big objects, as it requires keeping memory reserved even if it is not in use. If the threshold is somewhere between your two sizes, that would explain the behavior.

Note that while you might see this as weird, in most programs (not tests, but real life) memory usage patterns repeat: if you asked for 100k blocks once, more often than not you will do it again. Keeping the memory reserved can improve performance, and it actually reduces fragmentation, since all requests of the same size are granted from the same bucket.

You can, if you want to invest some time, learn how your allocator works by analyzing its behavior. Write some tests that acquire size X, release it, then acquire size Y and show the memory usage. Fix the value of X and play with Y. If requests for both sizes are granted from the same buckets, you will not have reserved/unused memory (the image on the left), while when the sizes are granted from different buckets you will see the effect in the image on the right.
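
For a concrete starting point, here is a minimal sketch of such a probe. It is my own illustration rather than code from the thread: it assumes Windows, uses GetProcessMemoryInfo from psapi.lib for the readout, and X and Y are the sizes to play with.

#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo; link against psapi.lib
#include <cstdio>
#include <vector>

#pragma comment(lib, "psapi.lib")   // MSVC-specific convenience

// print how much the process has committed and how much is resident
static void reportUsage(const char* label)
{
    PROCESS_MEMORY_COUNTERS pmc = { 0 };
    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
        std::printf("%s: commit %lu KB, working set %lu KB\n", label,
                    (unsigned long)(pmc.PagefileUsage / 1024),
                    (unsigned long)(pmc.WorkingSetSize / 1024));
}

int main()
{
    const size_t X = 256 * 1024;             // fixed size
    const size_t Y = 1024 * 1024;            // vary this one
    const size_t total = 512u * 1024 * 1024; // probe with 512 MB in total

    std::vector<char*> blocks;

    // acquire size X repeatedly, then release it all
    for (size_t used = 0; used < total; used += X)
        blocks.push_back(new char[X]);
    reportUsage("after allocating X blocks");

    for (size_t i = 0; i < blocks.size(); ++i)
        delete [] blocks[i];
    blocks.clear();
    reportUsage("after freeing X blocks");

    // acquire size Y: does it reuse the memory the X blocks occupied?
    for (size_t used = 0; used < total; used += Y)
        blocks.push_back(new char[Y]);
    reportUsage("after allocating Y blocks");

    for (size_t i = 0; i < blocks.size(); ++i)
        delete [] blocks[i];
    return 0;
}

Comparing the three reports, and watching the process in Address Space Monitor at the same time, shows whether the Y allocations reused the address range the X blocks occupied.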

I don't usually code for Windows, and I don't even have Windows 7, so I cannot positively state that this is the case, but it does look like it.

你的呼吸 2024-11-02 05:34:51

I can confirm the same behaviour with g++ 4.4.0 under Windows 7, so it's not in the compiler. In fact, the program fails when getIterations() returns 3590 or more -- do you get the same cutoff? This looks like a bug in Windows system memory allocation. It's all very well for knowledgeable souls to talk about memory fragmentation, but everything got deleted here, so the observed behaviour definitely shouldn't happen.

转角预定愛 2024-11-02 05:34:51

Using your code I performed your test and got the same result, and I suspect that David Rodríguez is right in this case: there does seem to be this "bucket" behaviour going on.

I tried two different tests too. In the first, instead of allocating 1 GB of data using 1 MB buffers, after deleting I allocated memory again in the same pattern in which it was first allocated. In the second test I allocated the half-meg buffers, cleaned up, then allocated the quarter-meg buffers, adding up to 512 MB each. Both tests had the same memory result in the end: only 512 MB is allocated, with no large chunk of reserved memory.

As David mentions, most applications tend to make allocations of the same size. One can see quite clearly why this could be a problem, though.

Perhaps the solution is that if you are allocating many smaller objects in this way, you would do better to allocate one large block of memory and manage it yourself; then, when you're done, free the large block.
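
For illustration, here is a minimal sketch of that idea; the FixedPool class is hypothetical, not something from the thread. It grabs one large block up front, hands out equal-sized chunks from a free list threaded through the block, and returns the whole address range in a single delete [].

#include <cstddef>

class FixedPool
{
public:
    // chunkSize must be at least sizeof(char*) and pointer-aligned
    FixedPool(std::size_t chunkSize, std::size_t chunkCount)
        : m_Block(new char[chunkSize * chunkCount]), m_Free(0)
    {
        // thread every chunk onto the free list
        for (std::size_t i = 0; i < chunkCount; ++i)
        {
            char* p = m_Block + i * chunkSize;
            *reinterpret_cast<char**>(p) = m_Free;
            m_Free = p;
        }
    }

    ~FixedPool() { delete [] m_Block; }   // the whole block goes back at once

    void* allocate()                      // pop a chunk, or 0 if exhausted
    {
        if (!m_Free) return 0;
        char* p = m_Free;
        m_Free = *reinterpret_cast<char**>(p);
        return p;
    }

    void release(void* p)                 // push the chunk back
    {
        *reinterpret_cast<char**>(p) = m_Free;
        m_Free = static_cast<char*>(p);
    }

private:
    char* m_Block;   // one big allocation holding every chunk
    char* m_Free;    // head of the intrusive free list
};

Because the pool is one contiguous allocation, destroying it returns the address range in one piece instead of leaving it parked in the allocator's buckets.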

蒗幽 2024-11-02 05:34:51

I spoke with some authorities on the subject (Greg, if you're out there, say hi ;D) and can confirm that what David is saying is basically right.

As the heap grows in the first pass of allocating ~0.25MB objects, the heap is reserving and committing memory. As the heap shrinks in the delete pass, it decommits at some pace but does not necessarily release the virtual address ranges it reserved in the allocation pass. In the last allocation pass, the 1MB allocations are bypassing the heap due to their size and thus begin to compete with the heap for VA.

Note that the heap is reserving the VA, not keeping it committed. VirtualAlloc and VirtualFree can help explain the difference if you're curious. This fact doesn't solve the problem you ran into, which is that the process ran out of virtual address space.
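
For the curious, here is a small sketch of that reserve/commit distinction (Windows only; my own illustration, not code from the answer):

#include <windows.h>
#include <cstdio>

int main()
{
    const SIZE_T size = 64 * 1024 * 1024;

    // Reserve 64 MB of address space: nothing is backed by physical
    // memory or pagefile yet, but the VA range is now taken.
    void* p = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (!p) return 1;

    // Commit the first 1 MB: now it is backed and usable.
    VirtualAlloc(p, 1024 * 1024, MEM_COMMIT, PAGE_READWRITE);
    static_cast<char*>(p)[0] = 42;   // touching committed memory is fine

    // Decommit it: the backing goes away but the range stays reserved.
    // This is the state the shrinking heap leaves its pages in, which
    // is why other allocators cannot reuse the range.
    VirtualFree(p, 1024 * 1024, MEM_DECOMMIT);

    // Only MEM_RELEASE (with size 0) gives the address range back.
    VirtualFree(p, 0, MEM_RELEASE);

    std::printf("reserved, committed, decommitted, released\n");
    return 0;
}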

命硬 2024-11-02 05:34:51

This is a side-effect of the Low-Fragmentation Heap.

http://msdn.microsoft.com/en-us/library/aa366750(v=vs.85).aspx

You should try disabling it to see if that helps. Run against both GetProcessHeap and the CRT heap (and any other heaps you may have created).
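
As far as I know there is no simple run-time switch to turn the LFH off (attaching a debugger or setting certain global flags disables it), but you can at least check whether it is active on a given heap. A minimal sketch, assuming HeapQueryInformation is available (Windows XP and later) and using the MSVC-specific _get_heap_handle for the CRT heap:

#include <windows.h>
#include <malloc.h>   // _get_heap_handle (MSVC CRT)
#include <cstdio>

// HeapCompatibilityInformation: 0 = standard, 1 = lookaside, 2 = LFH
static void reportHeap(const char* name, HANDLE heap)
{
    ULONG info = 0;
    SIZE_T returned = 0;
    if (HeapQueryInformation(heap, HeapCompatibilityInformation,
                             &info, sizeof(info), &returned))
        std::printf("%s: compatibility = %lu (2 means LFH)\n", name, info);
}

int main()
{
    reportHeap("process heap", GetProcessHeap());
    reportHeap("CRT heap", reinterpret_cast<HANDLE>(_get_heap_handle()));
    return 0;
}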
