Is it safe to parse /proc/ files?

Published 2024-11-02 08:43:56

I want to parse /proc/net/tcp/, but is it safe?

How should I open and read files from /proc/ without worrying that some other process (or the OS itself) will be changing them at the same time?

Answers (7)

就是爱搞怪 2024-11-09 08:43:56

In general, no. (So most of the answers here are wrong.) It might be safe, depending on what property you want. But it's easy to end up with bugs in your code if you assume too much about the consistency of a file in /proc. For example, see this bug which came from assuming that /proc/mounts was a consistent snapshot.

For example:

  • /proc/uptime is totally atomic, as someone mentioned in another answer -- but only since Linux 2.6.30, which is less than two years old. So even this tiny, trivial file was subject to a race condition until then, and still is in most enterprise kernels. See fs/proc/uptime.c for the current source, or the commit that made it atomic. On a pre-2.6.30 kernel, you can open the file, read a bit of it, then if you later come back and read again, the piece you get will be inconsistent with the first piece. (I just demonstrated this -- try it yourself for fun.)

  • /proc/mounts is atomic within a single read system call. So if you read the whole file all at once, you get a single consistent snapshot of the mount points on the system. However, if you use several read system calls -- and if the file is big, this is exactly what will happen if you use normal I/O libraries and don't pay special attention to this issue -- you will be subject to a race condition. Not only will you not get a consistent snapshot, but mount points which were present before you started and never stopped being present might go missing in what you see. To see that it's atomic for one read(), look at m_start() in fs/namespace.c and see it grab a semaphore that guards the list of mountpoints, which it keeps until m_stop(), which is called when the read() is done. To see what can go wrong, see this bug from last year (same one I linked above) in otherwise high-quality software that blithely read /proc/mounts.

  • /proc/net/tcp, which is the one you're actually asking about, is even less consistent than that. It's atomic only within each row of the table. To see this, look at listening_get_next() in net/ipv4/tcp_ipv4.c and established_get_next() just below in the same file, and see the locks they take out on each entry in turn. I don't have repro code handy to demonstrate the lack of consistency from row to row, but there are no locks there (or anything else) that would make it consistent. Which makes sense if you think about it -- networking is often a super-busy part of the system, so it's not worth the overhead to present a consistent view in this diagnostic tool.
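The single-read() caveat for /proc/mounts can be acted on from user space. Below is a sketch of a hypothetical helper (the 2 MiB buffer size is my own assumption, not anything the kernel mandates) that issues exactly one read(2) so the whole snapshot comes from a single consistent pass:

```python
import os

def read_proc_snapshot(path, bufsize=2 * 1024 * 1024):
    """Read a /proc file with a single read(2) syscall.

    For files like /proc/mounts, one read() returns one consistent
    snapshot; splitting the file across several read() calls does not.
    The buffer must be larger than the file, or the snapshot is cut off.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        data = os.read(fd, bufsize)  # exactly one read(2)
        if len(data) == bufsize:
            raise RuntimeError(
                f"{path} may exceed {bufsize} bytes; "
                "the snapshot would span multiple reads")
        return data.decode()
    finally:
        os.close(fd)
```

Note that ordinary buffered I/O (stdio's fopen, or Python's default `open().read()`) may issue several smaller read() calls under the hood, which is exactly the race described above.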

The other piece that keeps /proc/net/tcp atomic within each row is the buffering in seq_read(), which you can read in fs/seq_file.c. This ensures that once you read() part of one row, the text of the whole row is kept in a buffer so that the next read() will get the rest of that row before starting a new one. The same mechanism is used in /proc/mounts to keep each row atomic even if you do multiple read() calls, and it's also the mechanism that /proc/uptime in newer kernels uses to stay atomic. That mechanism does not buffer the whole file, because the kernel is cautious about memory use.

Most files in /proc will be at least as consistent as /proc/net/tcp, with each row a consistent picture of one entry in whatever information they're providing, because most of them use the same seq_file abstraction. As the /proc/uptime example illustrates, though, some files were still being migrated to use seq_file as recently as 2009; I bet there are still some that use older mechanisms and don't have even that level of atomicity. These caveats are rarely documented. For a given file, your only guarantee is to read the source.

In the case of /proc/net/tcp, you can read it and parse each line without fear. But if you try to draw any conclusions from multiple lines at once -- beware, other processes and the kernel are changing it while you read it, and you are probably creating a bug.

木有鱼丸 2024-11-09 08:43:56

Although the files in /proc appear as regular files in userspace, they are not really files but rather entities that support the standard file operations from userspace (open, read, close). Note that this is quite different than having an ordinary file on disk that is being changed by the kernel.

All the kernel does is print its internal state into its own memory using a sprintf-like function, and that memory is copied into userspace whenever you issue a read(2) system call.

The kernel handles these calls in an entirely different way than regular files, which could mean that the entire snapshot of the data you will read is ready at the time you open(2) it, with the kernel making sure that concurrent calls are consistent and atomic. I haven't read that anywhere, but it doesn't really make sense otherwise.

My advice is to take a look at the implementation of a proc file in your particular Unix flavour. This is really an implementation issue (as is the format and the contents of the output) that is not governed by a standard.

The simplest example would be the implementation of the uptime proc file in Linux. Note how the entire buffer is produced in the callback function supplied to single_open.

就像说晚安 2024-11-09 08:43:56

/proc is a virtual file system: in fact, it just gives a convenient view of the kernel internals. It's definitely safe to read it (that's why it's here), but it's risky in the long term, as the internals of these virtual files may evolve with newer versions of the kernel.

EDIT

More information is available in the proc documentation in the Linux kernel docs, chapter 1.4 Networking.
I can't find whether the information evolves over time during a read. I thought it was frozen on open, but I can't give a definite answer.

EDIT2

According to the SCO documentation (not Linux, but I'm pretty sure all flavours of *nix behave like that):

Although process state and
consequently the contents of /proc
files can change from instant to
instant, a single read(2) of a /proc
file is guaranteed to return a
``sane'' representation of state, that
is, the read will be an atomic
snapshot of the state of the process.
No such guarantee applies to
successive reads applied to a /proc
file for a running process. In
addition, atomicity is specifically
not guaranteed for any I/O applied to
the as (address-space) file; the
contents of any process's address
space might be concurrently modified
by an LWP of that process or any other
process in the system.

黑凤梨 2024-11-09 08:43:56

The procfs API in the Linux kernel provides an interface to make sure that reads return consistent data. Read the comments in __proc_file_read. Item 1) in the big comment block explains this interface.

That being said, it is of course up to the implementation of a specific proc file to use this interface correctly to make sure its returned data is consistent. So, to answer your question: no, the kernel does not guarantee consistency of the proc files during a read but it provides the means for the implementations of those files to provide consistency.

硪扪都還晓 2024-11-09 08:43:56

I have the source for Linux 2.6.27.8 handy since I'm doing driver development at the moment on an embedded ARM target.

The file ...linux-2.6.27.8-lpc32xx/net/ipv4/raw.c at line 934 contains, for example

    seq_printf(seq, "%4d: %08X:%04X %08X:%04X"
            " %02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %p %d\n",
            i, src, srcp, dest, destp, sp->sk_state,
            atomic_read(&sp->sk_wmem_alloc),
            atomic_read(&sp->sk_rmem_alloc),
            0, 0L, 0, sock_i_uid(sp), 0, sock_i_ino(sp),
            atomic_read(&sp->sk_refcnt), sp, atomic_read(&sp->sk_drops));

which outputs

[wally@zenetfedora ~]$ cat /proc/net/tcp
  sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode                                                     
   0: 017AA8C0:0035 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 15160 1 f552de00 299
   1: 00000000:C775 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 13237 1 f552ca00 299
...

in function raw_sock_seq_show() which is part of a hierarchy of procfs handling functions. The text is not generated until a read() request is made of the /proc/net/tcp file, a reasonable mechanism since procfs reads are surely much less common than updating the information.

Some drivers (such as mine) implement the proc_read function with a single sprintf(). The extra complication in the core drivers implementation is to handle potentially very long output which may not fit in the intermediate, kernel-space buffer during a single read.

I tested that with a program using a 64K read buffer, but it results in a kernel-space buffer of 3072 bytes in my system for proc_read to return data. Multiple calls with advancing pointers are needed to get more text than that returned. I don't know what the right way is to make the returned data consistent when more than one I/O is needed. Certainly each entry in /proc/net/tcp is self-consistent. There is some likelihood that lines side-by-side are snapshots taken at different times.
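The multiple-read() behaviour described here can be mimicked from user space. This sketch deliberately reads in small chunks, the way buffered stdio would for a large file (the 3072-byte default mirrors the kernel buffer observed above but is otherwise arbitrary):

```python
def read_in_chunks(path, chunk=3072):
    """Read a file with many small read() calls.

    For seq_file-backed /proc files each *row* arrives whole, but rows
    delivered by different read() calls may be snapshots taken at
    different moments, so the table as a whole can be inconsistent.
    """
    parts = []
    with open(path, 'rb', buffering=0) as f:  # unbuffered: one read(2) per call
        while True:
            piece = f.read(chunk)
            if not piece:
                break
            parts.append(piece)
    return b''.join(parts).decode()
```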

遗弃M 2024-11-09 08:43:56

Short of unknown bugs, there are no race conditions in /proc that would lead to reading corrupted data or a mix of old and new data. In this sense, it's safe. However, there's still the race condition that much of the data you read from /proc is potentially outdated as soon as it's generated, and even more so by the time you get to reading and processing it. For instance, processes can die at any time and a new process can be assigned the same pid; the only process ids you can ever use without race conditions are those of your own child processes. The same goes for network information (open ports, etc.) and really most of the information in /proc. I would consider it bad and dangerous practice to rely on any data in /proc being accurate, except data about your own process and potentially its child processes. Of course, it may still be useful to present other information from /proc to the user/admin for informative/logging/etc. purposes.
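One common mitigation for the pid-reuse race (a technique I'm adding as an illustration, not something from this answer) is to record a process's start time from /proc/&lt;pid&gt;/stat and re-check it before acting on the pid: if the start time changed, the pid was recycled. A parsing sketch; the field layout follows proc(5):

```python
def stat_starttime(stat_text):
    """Extract field 22 (starttime, in clock ticks since boot) from the
    contents of /proc/<pid>/stat.

    The comm field (field 2) may itself contain spaces and parentheses,
    so fields are located relative to the *last* ')' in the line.
    """
    rest = stat_text[stat_text.rindex(')') + 2:].split()
    # rest[0] is field 3 (state); field 22 is therefore rest[19]
    return int(rest[19])
```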

扬花落满肩 2024-11-09 08:43:56

When you read from a /proc file, the kernel is calling a function which has been registered in advance to be the "read" function for that proc file. See the __proc_file_read function in fs/proc/generic.c .

Therefore, the safety of the proc read is only as safe as the function the kernel calls to satisfy the read request. If that function properly locks all data it touches and returns to you in a buffer, then it is completely safe to read using that function. Since proc files like the one used for satisfying read requests to /proc/net/tcp have been around for a while and have undergone scrupulous review, they are about as safe as you could ask for. In fact, many common Linux utilities rely on reading from the proc filesystem and formatting the output in a different way. (Off the top of my head, I think 'ps' and 'netstat' do this).

As always, you don't have to take my word for it; you can look at the source to calm your fears. The following documentation from proc_net_tcp.txt tells you where the "read" functions for /proc/net/tcp live, so you can look at the actual code that is run when you read from that proc file and verify for yourself that there are no locking hazards.

This document describes the interfaces
/proc/net/tcp and /proc/net/tcp6.
Note that these interfaces are
deprecated in favor of tcp_diag.
These /proc interfaces provide information about currently active TCP
connections, and are implemented by
tcp4_seq_show() in net/ipv4/tcp_ipv4.c
and tcp6_seq_show() in
net/ipv6/tcp_ipv6.c, respectively.
