Generic C TCP server memory leak

Published 2024-12-24 22:31:25

My POSIX TCP servers all seem to be leaking. I'm keeping an eye on them with tools like ps and top, and according to those the memory is constantly increasing.
It happens whenever a client connects and/or disconnects.

E.g. let's say ps reports a VSZ of 100 at first. A client connects and it rises to 238. The client then exits and it drops to 138, not back to 100! Every time a client connects and then disconnects, the memory has grown.

I've tried a ton of different memory-leak tools, e.g. valgrind, and none of them find anything. (They don't think it's leaking either.)

Is it ps and top that are confused? That seems unlikely.

I've made a small generic sample that demonstrates my code and the potential leak:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <signal.h>
#include <unistd.h>
#include <stdint.h>

#define MAX_BUFFER (100*1024)

void* server_process_thread(void* args)
{
    int client = (int)(intptr_t)args; /* the fd was smuggled through the void* */
    unsigned char* buffer = NULL;
    int r;

    /* allocate something huge */
    if((buffer = malloc(MAX_BUFFER)) == NULL)
    {
        perror("Couldn't allocate");
        goto exit;
    }

    printf("Client processing ...\n");

    //echo all that comes
    while(1)
    {
        r = read(client, buffer, MAX_BUFFER);
        if(r <= 0) break;
        write(client, buffer, r);
    }

exit:
    printf("Client exit\n");
    free(buffer);
    close(client);
    pthread_exit(NULL);
}

int main(void)
{
    struct sockaddr_in server_sockaddr = {0};
    struct sockaddr_in clientSockAddr = {0};
    int flags = 1;
    int server = 0;
    int client = 0;
    pthread_t thread = 0;
    socklen_t clientSockSize = sizeof(clientSockAddr);

    //init tcp
    signal(SIGCHLD, SIG_IGN);
    signal(SIGPIPE, SIG_IGN);
    signal(SIGALRM, SIG_IGN);

    if ((server = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) == -1)
    {
        perror("Couldn't open socket");
        return -1;
    }

    flags = 1;
    if ((setsockopt(server, SOL_SOCKET, SO_REUSEADDR, (void *) &flags, sizeof(flags))) == -1)
    {
        perror("Couldn't set socket reuse");
        return -1;
    }

    server_sockaddr.sin_family = AF_INET;
    server_sockaddr.sin_port = htons(666);
    server_sockaddr.sin_addr.s_addr = htonl(INADDR_ANY); //IP

    if (bind(server, (struct sockaddr *) &server_sockaddr, sizeof(server_sockaddr)) == -1)
    {
        perror("Couldn't bind socket");
        return -1;
    }

    //LISTEN
    if (listen(server, SOMAXCONN) == -1)
    {
        perror("Couldn't listen on socket");
        return -1;
    }

    printf("TCP Echo Server started ...\n");

    //wait for clients
    while(1)
    {
        client = accept(server, (struct sockaddr*) &clientSockAddr, &clientSockSize);
        if(client == -1)
        {
            perror("Couldn't accept connection");
            continue;
        }

        if(pthread_create(&thread, NULL, server_process_thread, (void*)(long)client) != 0)
        {
            perror("Couldn't create thread");
            return -1;
        }
    }

    //dispose
    printf("Server exit\n");
    close(server);
    return EXIT_SUCCESS;
}

Now, I might be missing some error handling here and there, but is there some fundamental flaw in this code?

Comments (1)

本王不退位尔等都是臣 2024-12-31 22:31:25

You are leaking threads, at least.

By default, a pthread is not "cleaned up" when it exits until someone calls pthread_join() on it.

If you create the thread as a detached thread, its resources are cleaned up when it exits (but you can no longer pthread_join() it).
The easiest way to do that here is to call pthread_detach(pthread_self()); as the first thing in server_process_thread.
