Memory not released after calling free()

Posted 2024-10-24 10:10:31


I have a short program that generates a linked list by adding nodes to it, then frees the memory allocated by the linked list.

Valgrind does not report any memory leak errors, but the process continues to hold the allocated memory.

I was only able to fix the error after changing the allocation size from sizeof(structure_name) to the fixed number 512 (see the commented-out code).

Is this a bug or normal operation?
Here is the code:

#include <execinfo.h>
#include <stdlib.h>
#include <stdio.h>


typedef struct llist_node {
  int ibody;
  struct llist_node * next;
  struct llist_node * previous;
  struct llist * list;
}llist_node;

typedef struct  llist {
  struct llist_node * head;
  struct llist_node * tail;
  int id;
  int count;
}llist;

llist_node * new_lnode (void) {
  llist_node * nnode = (llist_node *) malloc ( 512 );
  //  llist_node * nnode = (llist_node *) malloc ( sizeof(llist_node) );
  nnode->next = NULL;
  nnode->previous = NULL;
  nnode->list = NULL;
  return nnode;
}

llist * new_llist (void) {
  llist * nlist = (llist *) malloc ( 512 );
  //  llist * nlist = (llist *) malloc ( sizeof(llist) );
  nlist->head = NULL;
  nlist->tail = NULL;
  nlist->count = 0;
  return nlist;
}

void add_int_tail ( int ibody, llist * list ) {
  llist_node * nnode = new_lnode();
  nnode->ibody = ibody;
  list->count++;
  nnode->next = NULL;
  if ( list->head == NULL ) {
    list->head = nnode;
    list->tail = nnode;
  }
  else {
    nnode->previous = list->tail;
    list->tail->next = nnode;
    list->tail = nnode;
  }
}

void destroy_list_nodes ( llist_node * nodes ) {
  llist_node * llnp = NULL;
  llist_node * llnpnext = NULL;
  if ( nodes == NULL )
    return;
  for ( llnp = nodes; llnp != NULL; llnp = llnpnext ) {
    llnpnext = llnp->next;
    free (llnp);
  }
  return;
}

void destroy_list ( llist * list ) {
  destroy_list_nodes ( list->head );
  free (list);
}

int main () {
  int i = 0;
  int j = 0;
  llist * list = new_llist ();

  for ( i = 0; i < 100; i++ ) {
    for ( j = 0; j < 100; j++ ) {
      add_int_tail ( i+j, list );
    }
  }
  printf("enter to continue and free memory...");
  getchar();
  destroy_list ( list );
  printf("memory freed. enter to exit...");
  getchar();
  printf( "\n");
  return 0;
}
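One way to watch what the question describes from inside the program, rather than with ps, is to print the process's resident set size around the destroy_list() call. Below is a minimal sketch, assuming Linux, where VmRSS can be read from /proc/self/status; the helper print_rss() is not part of the original program:

/* Sketch, assuming Linux: read VmRSS from /proc/self/status so the
   effect can be watched without ps or top. The helper name print_rss
   is made up here for illustration. */
#include <stdio.h>
#include <string.h>

static void print_rss ( const char * label ) {
  FILE * f = fopen ( "/proc/self/status", "r" );
  char line[256];
  if ( f == NULL )
    return;
  while ( fgets ( line, sizeof line, f ) != NULL ) {
    if ( strncmp ( line, "VmRSS:", 6 ) == 0 ) {
      printf ( "%s: %s", label, line );  /* e.g. "after free: VmRSS:  1528 kB" */
      break;
    }
  }
  fclose ( f );
}

Calling print_rss() just before and just after destroy_list(list) shows whether the memory is merely returned to the allocator or actually given back to the kernel.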

Comments (2)

A君 2024-10-31 10:10:32

I discovered the answer to my question here:

http://linuxupc.upc.es/~pep/OLD/man/malloc.html

Memory gained by expanding the heap can be returned to the kernel if the conditions configured by __noshrink are satisfied. Only then will ps notice that the memory has been freed.

It is sometimes important to configure this, particularly when memory usage is small but the heap has grown larger than the available main memory; otherwise the program thrashes even though the memory it actually needs is less than what is available.
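The __noshrink switch described on that page belongs to an older malloc; on current glibc the closest equivalents appear to be the trim threshold and malloc_trim(). A rough sketch, assuming glibc on Linux (the 128 KiB value and the function name are only illustrative):

/* Sketch assuming glibc on Linux. mallopt(M_TRIM_THRESHOLD, ...) controls
   how much free space must accumulate at the top of the heap before it is
   given back to the kernel; malloc_trim(0) asks for that release explicitly. */
#include <malloc.h>

void release_free_heap_pages ( void ) {
  /* Give free pages at the top of the heap back sooner than the default. */
  mallopt ( M_TRIM_THRESHOLD, 128 * 1024 );

  /* ... build and destroy the list here ... */

  /* Explicitly ask glibc to return whatever free heap pages it can;
     returns 1 if some memory was actually released to the kernel. */
  malloc_trim ( 0 );
}

In the original program, calling malloc_trim(0) right after destroy_list(list) is a quick way to check whether the memory still shown by ps is really just free heap pages the allocator has not yet returned.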

云胡 2024-10-31 10:10:31

If by "the process continues to hold the allocated memory" you mean that ps doesn't report a decrease in the process's memory usage, that's perfectly normal. Returning memory to your process's heap doesn't necessarily make the process return it to the operating system, for all sorts of reasons. If you create and destroy your list over and over again, in a big loop, and the memory usage of your process doesn't grow without limit, then you probably haven't got a real memory leak.

[EDITED to add: See also Will malloc implementations return free-ed memory back to the system? ]

[EDITED again to add: Incidentally, the most likely reason why allocating 512-byte blocks makes the problem go away is that your malloc implementation treats larger blocks specially in some way that makes it easier for it to notice when there are whole pages that are no longer being used -- which is necessary if it's going to return any memory to the OS.]
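On glibc, the special treatment of larger blocks usually means mmap: requests at or above the M_MMAP_THRESHOLD tunable are served by mmap and handed back to the kernel as soon as they are freed, so the drop is visible in ps immediately. A sketch assuming glibc; the 256-byte threshold is deliberately artificial and only for demonstration:

/* Sketch assuming glibc: lower the mmap threshold so even small requests
   are served by mmap and returned to the kernel as soon as they are freed.
   Useful only as a demonstration: one mmap per allocation is slow and
   wastes most of a page per block. */
#include <malloc.h>
#include <stdlib.h>

int main ( void ) {
  mallopt ( M_MMAP_THRESHOLD, 256 );  /* default is 128 KiB and adjusts dynamically */

  void * p = malloc ( 512 );          /* now mmap-backed instead of heap-backed */
  free ( p );                         /* the mapping is released to the kernel immediately */
  return 0;
}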
