What are RSS and VSZ in Linux memory management?

Published 2024-12-11 19:17:03

What are RSS and VSZ in Linux memory management? In a multithreaded environment, how can both of these be managed and tracked?

九公里浅绿 2024-12-18 19:17:03

RSS is the Resident Set Size and is used to show how much memory is allocated to that process and is in RAM. It does not include memory that is swapped out. It does include memory from shared libraries as long as the pages from those libraries are actually in memory. It does include all stack and heap memory.

VSZ is the Virtual Memory Size. It includes all memory that the process can access, including memory that is swapped out, memory that is allocated, but not used, and memory that is from shared libraries.

So if process A has a 500K binary and is linked to 2500K of shared libraries, has 200K of stack/heap allocations of which 100K is actually in memory (rest is swapped or unused), and it has only actually loaded 1000K of the shared libraries and 400K of its own binary then:

RSS: 400K + 1000K + 100K = 1500K
VSZ: 500K + 2500K + 200K = 3200K

Since part of the memory is shared, many processes may use it, so if you add up all of the RSS values you can easily end up with more space than your system has.

The memory that is allocated also may not be in RSS until it is actually used by the program. So if your program allocated a bunch of memory up front, then uses it over time, you could see RSS going up and VSZ staying the same.

There is also PSS (proportional set size). This is a newer measure which tracks the shared memory as a proportion used by the current process. So if there were two processes using the same shared library from before:

PSS: 400K + (1000K/2) + 100K = 400K + 500K + 100K = 1000K

Threads all share the same address space, so the RSS, VSZ and PSS for each thread is identical to all of the other threads in the process. Use ps or top to view this information in linux/unix.
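
As a quick illustration of that last point, here is a short shell session (the `smaps_rollup` file is Linux-specific and needs kernel 4.14 or newer, so treat this as a sketch rather than a portable recipe):

```shell
# VSZ and RSS for the current shell, both in KiB:
ps -o pid,vsz,rss,comm -p $$

# ps does not report PSS; the kernel exposes it per process in
# /proc/<pid>/smaps_rollup (a summed-up version of smaps):
grep -E '^(Rss|Pss):' "/proc/$$/smaps_rollup"
```

RSS in `ps` and `Rss:` in `smaps_rollup` should agree to within a few pages; `Pss:` will be smaller whenever shared pages are involved.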

There is way more to it than this.

三人与歌 2024-12-18 19:17:03

RSS is Resident Set Size (physically resident memory - this is currently occupying space in the machine's physical memory), and VSZ is Virtual Memory Size (address space allocated - this has addresses allocated in the process's memory map, but there isn't necessarily any actual memory behind it all right now).

Note that in these days of commonplace virtual machines, physical memory from the machine's view point may not really be actual physical memory.

夜声 2024-12-18 19:17:03

Minimal runnable example

For this to make sense, you have to understand the basics of paging: How does x86 paging work? In particular, the OS can allocate virtual memory via page tables / its internal memory bookkeeping (VSZ, virtual memory) before it actually has backing storage in RAM or on disk (RSS, resident memory).

Now to observe this in action, let's create a program that:

  • allocates more RAM than our physical memory with mmap
  • writes one byte on each page to ensure that each of those pages goes from virtual only memory (VSZ) to actually used memory (RSS)
  • checks the memory usage of the process with one of the methods mentioned at: Memory usage of current process in C

main.c

#define _GNU_SOURCE
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct {
    unsigned long size,resident,share,text,lib,data,dt;
} ProcStatm;

/* https://stackoverflow.com/questions/1558402/memory-usage-of-current-process-in-c/7212248#7212248 */
void ProcStat_init(ProcStatm *result) {
    const char* statm_path = "/proc/self/statm";
    FILE *f = fopen(statm_path, "r");
    if(!f) {
        perror(statm_path);
        abort();
    }
    if(7 != fscanf(
        f,
        "%lu %lu %lu %lu %lu %lu %lu",
        &(result->size),
        &(result->resident),
        &(result->share),
        &(result->text),
        &(result->lib),
        &(result->data),
        &(result->dt)
    )) {
        perror(statm_path);
        abort();
    }
    fclose(f);
}

int main(int argc, char **argv) {
    ProcStatm proc_statm;
    char *base, *p;
    char system_cmd[1024];
    long page_size;
    size_t i, nbytes, print_interval, bytes_since_last_print;
    int snprintf_return;

    /* Decide how many bytes to allocate. */
    if (argc < 2) {
        nbytes = 0x10000;
    } else {
        nbytes = strtoull(argv[1], NULL, 0);
    }
    if (argc < 3) {
        print_interval = 0x1000;
    } else {
        print_interval = strtoull(argv[2], NULL, 0);
    }
    page_size = sysconf(_SC_PAGESIZE);

    /* Allocate the memory. */
    base = mmap(
        NULL,
        nbytes,
        PROT_READ | PROT_WRITE,
        MAP_SHARED | MAP_ANONYMOUS,
        -1,
        0
    );
    if (base == MAP_FAILED) {
        perror("mmap");
        exit(EXIT_FAILURE);
    }

    /* Write to all the allocated pages. */
    i = 0;
    p = base;
    bytes_since_last_print = 0;
    /* Produce the ps command that lists only our VSZ and RSS. */
    snprintf_return = snprintf(
        system_cmd,
        sizeof(system_cmd),
        "ps -o pid,vsz,rss | awk '{if (NR == 1 || $1 == \"%ju\") print}'",
        (uintmax_t)getpid()
    );
    assert(snprintf_return >= 0);
    assert((size_t)snprintf_return < sizeof(system_cmd));
    bytes_since_last_print = print_interval;
    do {
        /* Modify a byte in the page. */
        *p = i;
        p += page_size;
        bytes_since_last_print += page_size;
        /* Print process memory usage every print_interval bytes.
         * We count memory using a few techniques from:
         * https://stackoverflow.com/questions/1558402/memory-usage-of-current-process-in-c */
        if (bytes_since_last_print > print_interval) {
            bytes_since_last_print -= print_interval;
            printf("extra_memory_committed %lu KiB\n", (i * page_size) / 1024);
            ProcStat_init(&proc_statm);
            /* Check /proc/self/statm */
            printf(
                "/proc/self/statm size resident %lu %lu KiB\n",
                (proc_statm.size * page_size) / 1024,
                (proc_statm.resident * page_size) / 1024
            );
            /* Check ps. */
            puts(system_cmd);
            system(system_cmd);
            puts("");
        }
        i++;
    } while (p < base + nbytes);

    /* Cleanup. */
    munmap(base, nbytes);
    return EXIT_SUCCESS;
}

GitHub upstream.

Compile and run:

gcc -ggdb3 -O0 -std=c99 -Wall -Wextra -pedantic -o main.out main.c
echo 1 | sudo tee /proc/sys/vm/overcommit_memory
sudo dmesg -c
./main.out 0x1000000000 0x200000000
echo $?
sudo dmesg

where:

  • 0x1000000000 == 64GiB: 2x my computer's physical RAM of 32GiB
  • 0x200000000 == 8GiB: print the memory every 8GiB, so we should get 4 prints before the crash at around 32GiB
  • echo 1 | sudo tee /proc/sys/vm/overcommit_memory: required for Linux to allow us to make a mmap call larger than physical RAM: Maximum memory which malloc can allocate

Program output:

extra_memory_committed 0 KiB
/proc/self/statm size resident 67111332 768 KiB
ps -o pid,vsz,rss | awk '{if (NR == 1 || $1 == "29827") print}'
  PID    VSZ   RSS
29827 67111332 1648

extra_memory_committed 8388608 KiB
/proc/self/statm size resident 67111332 8390244 KiB
ps -o pid,vsz,rss | awk '{if (NR == 1 || $1 == "29827") print}'
  PID    VSZ   RSS
29827 67111332 8390256

extra_memory_committed 16777216 KiB
/proc/self/statm size resident 67111332 16778852 KiB
ps -o pid,vsz,rss | awk '{if (NR == 1 || $1 == "29827") print}'
  PID    VSZ   RSS
29827 67111332 16778864

extra_memory_committed 25165824 KiB
/proc/self/statm size resident 67111332 25167460 KiB
ps -o pid,vsz,rss | awk '{if (NR == 1 || $1 == "29827") print}'
  PID    VSZ   RSS
29827 67111332 25167472

Killed

Exit status:

137

which by the 128 + signal number rule means we got signal number 9, which man 7 signal says is SIGKILL, which is sent by the Linux out-of-memory killer.
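
The 128 + signal-number convention can be checked directly in a throwaway shell session (these commands are an illustration, not part of the experiment above):

```shell
# SIGKILL a background process and inspect the status the shell reports:
sleep 60 &
kill -9 $!
wait $!
echo $?   # 137 == 128 + 9 (SIGKILL)
```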

Output interpretation:

  • VSZ virtual memory remains constant after the mmap at 67111332 KiB == 0x40009A4 KiB ~= 64 GiB (ps values are in KiB); you can check the hex with printf '0x%X\n' 67111332.
  • RSS "real memory usage" increases lazily only as we touch the pages. For example:
    • on the first print, we have extra_memory_committed 0, which means we haven't yet touched any pages. RSS is a small 1648 KiB which has been allocated for normal program startup like text area, globals, etc.
    • on the second print, we have written to 8388608 KiB == 8 GiB worth of pages. As a result, RSS increased by exactly 8 GiB to 8390256 KiB == 8388608 KiB + 1648 KiB
    • RSS continues to increase in 8GiB increments. The last print shows about 24 GiB of memory, and before 32 GiB could be printed, the OOM killer killed the process

See also: https://unix.stackexchange.com/questions/35129/need-explanation-on-resident-set-size-virtual-size

OOM killer logs

Our dmesg commands have shown the OOM killer logs.

The very first line of the log was:

[ 7283.479087] mongod invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0

So we see that interestingly it was the MongoDB daemon that always runs in my laptop on the background that first triggered the OOM killer, presumably when the poor thing was trying to allocate some memory.

However, the OOM killer does not necessarily kill the one who awoke it.

After the invocation, the kernel prints a table of processes, including the oom_score:

[ 7283.479292] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[ 7283.479303] [    496]     0   496    16126        6   172032      484             0 systemd-journal
[ 7283.479306] [    505]     0   505     1309        0    45056       52             0 blkmapd
[ 7283.479309] [    513]     0   513    19757        0    57344       55             0 lvmetad
[ 7283.479312] [    516]     0   516     4681        1    61440      444         -1000 systemd-udevd

and further ahead we see that our own little main.out actually got killed on the previous invocation:

[ 7283.479871] Out of memory: Kill process 15665 (main.out) score 865 or sacrifice child
[ 7283.479879] Killed process 15665 (main.out) total-vm:67111332kB, anon-rss:92kB, file-rss:4kB, shmem-rss:30080832kB
[ 7283.479951] oom_reaper: reaped process 15665 (main.out), now anon-rss:0kB, file-rss:0kB, shmem-rss:30080832kB

This log mentions the score 865 which that process had, presumably the highest (worst) OOM killer score as mentioned at: https://unix.stackexchange.com/questions/153585/how-does-the-oom-killer-decide-which-process-to-kill-first

Also interestingly, everything apparently happened so fast that before the freed memory was accounted for, the OOM killer was woken up again by the DeadlineMonitor process:

[ 7283.481043] DeadlineMonitor invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0

and this time it killed some Chromium process, which is usually my computer's normal memory hog:

[ 7283.481773] Out of memory: Kill process 11786 (chromium-browse) score 306 or sacrifice child
[ 7283.481833] Killed process 11786 (chromium-browse) total-vm:1813576kB, anon-rss:208804kB, file-rss:0kB, shmem-rss:8380kB
[ 7283.497847] oom_reaper: reaped process 11786 (chromium-browse), now anon-rss:0kB, file-rss:0kB, shmem-rss:8044kB

Tested in Ubuntu 19.04, Linux kernel 5.0.0.

Linux kernel docs

https://github.com/torvalds/linux/blob/v5.17/Documentation/filesystems/proc.rst has some points. The term "VSZ" is not used there but "RSS" is, and there's nothing too enlightening (surprise?!)

Instead of VSZ, the kernel seems to use the term VmSize, which appears e.g. in /proc/$PID/status.
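
For example (field names as they appear in the kernel's status file; values are reported in kB):

```shell
# VmSize corresponds to VSZ, VmRSS to RSS:
grep -E '^(VmSize|VmRSS):' /proc/self/status
```

Here /proc/self resolves to the grep process itself; substitute a PID of interest to inspect another process.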

Some quotes of interest:

The first of these lines shows the same information as is displayed for the mapping in /proc/PID/maps. Following lines show the size of the mapping (size); the size of each page allocated when backing a VMA (KernelPageSize), which is usually the same as the size in the page table entries; the page size used by the MMU when backing a VMA (in most cases, the same as KernelPageSize); the amount of the mapping that is currently resident in RAM (RSS); the process' proportional share of this mapping (PSS); and the number of clean and dirty shared and private pages in the mapping.

The "proportional set size" (PSS) of a process is the count of pages it has in memory, where each page is divided by the number of processes sharing it. So if a process has 1000 pages all to itself, and 1000 shared with one other process, its PSS will be 1500.

Note that even a page which is part of a MAP_SHARED mapping, but has only a single pte mapped, i.e. is currently used by only one process, is accounted as private and not as shared.

So we can guess a few more things:

  • a shared library's pages are accounted as private memory when only one process has them mapped; once more than one process maps them, they count as shared
  • PSS, mentioned by jmh, takes a more proportional approach between "I'm the only process that holds the shared library" and "there are N processes holding the shared library, so each one holds memory/N on average"

你怎么这么可爱啊 2024-12-18 19:17:03

VSZ - Virtual Set Size

  • The Virtual Set Size is the memory size assigned to a process (program) during its initial execution. The Virtual Set Size is simply the amount of memory a process has available for its execution.

RSS - Resident Set Size (kinda RAM)

  • As opposed to VSZ (Virtual Set Size), RSS is the memory currently used by a process. It is the actual amount of RAM, in kilobytes, that the current process is using.

Source

相思故 2024-12-18 19:17:03

I think much has already been said about RSS vs VSZ. From an administrator/programmer/user perspective, when I design and code applications I am more concerned about RSS (resident memory): as you keep allocating more and more (heap) variables, you will see this value shoot up. Try a simple program that malloc()s space in a loop, making sure you fill data into that malloc'd space, and watch RSS keep moving up.
As far as VSZ is concerned, it is more a matter of the virtual memory mapping that Linux does, a core feature derived from conventional operating-system concepts. VSZ management is done by the kernel's virtual memory management; for more on VSZ, see Robert Love's description of mm_struct and vm_struct, which are part of the basic task_struct data structure in the kernel.

太阳哥哥 2024-12-18 19:17:03

To summarize @jmh's excellent answer:

In Linux, a process's memory comprises:

  • its own binary
  • its shared libs
  • its stack and heap

Due to paging, not all of those are always fully in memory, only the useful, most recently used parts (pages) are. Other parts are paged out (or swapped out) to make room for other processes.

The table below, taken from @jmh's answer, shows an example of what Resident and Virtual memory are for a specific process.

+-------------+-------------------------+------------------------+
| portion     | actually in memory      | total (allocated) size |
|-------------+-------------------------+------------------------|
| binary      | 400K                    | 500K                   |
| shared libs | 1000K                   | 2500K                  |
| stack+heap  | 100K                    | 200K                   |
|-------------+-------------------------+------------------------|
|             | RSS (Resident Set Size) | VSZ (Virtual Set Size) |
|-------------+-------------------------+------------------------|
|             | 1500K                   | 3200K                  |
+-------------+-------------------------+------------------------+

To summarize: resident memory is what is actually in physical memory right now, and virtual size is the total memory that would be needed to hold all components at once.

Of course, numbers don't add up, because libraries are shared between multiple processes and their memory is counted for each process separately, even if they are loaded only once.

南薇 2024-12-18 19:17:03

They are not managed, but measured, and possibly limited (see the getrlimit system call, documented in getrlimit(2)).

RSS means resident set size (the part of your virtual address space sitting in RAM).

You can query the virtual address space of process 1234 using proc(5) with cat /proc/1234/maps, and its status (including memory consumption) through cat /proc/1234/status.
