file_operations question: how can I tell when the process that opened a file for writing decides to close it?

Posted 2024-09-01 20:36:10


I'm currently writing a simple "multicaster" module.

Only one process may open the proc filesystem file for writing; the rest can open it for reading.
To do this I use the inode_operations .permission callback: I check the requested operation, and when I detect that someone is opening the file for writing, I set a flag to ON.

I need a way to detect when the process that opened the file for writing closes it, so I can set the flag back to OFF and someone else can open it for writing.

Currently, when someone opens for writing, I save that process's current->pid, and when the .close callback is called I check whether the closing process is the one I saved earlier.

Is there a better way to do this, without saving the pid? Perhaps by checking the files the current process has open and their permissions?

Thanks!
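For concreteness, the pid-tracking scheme described above might look roughly like this (a hedged sketch; all names are illustrative, and the answers below explain why this approach is unsafe):

```c
/* Sketch of the question's pid-tracking approach -- NOT a recommended design.
 * mark_writer()/maybe_clear_writer() are hypothetical helper names. */
#include <linux/fs.h>
#include <linux/sched.h>

static pid_t writer_pid;  /* 0 means "no writer" */
static int write_flag;

/* Called from the inode_operations .permission path on a write open. */
static void mark_writer(void)
{
	write_flag = 1;
	writer_pid = current->pid;  /* note: this is really the thread id */
}

/* Called from the close/release path. */
static void maybe_clear_writer(void)
{
	if (current->pid == writer_pid) {
		write_flag = 0;
		writer_pid = 0;
	}
}
```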


Comments (2)

回忆凄美了谁 2024-09-08 20:36:10


No, it's not safe. Consider a few scenarios:

  • Process A opens the file for writing, and then fork()s, creating process B. Now both A and B have the file open for writing. When Process A closes it, you set the flag to 0 but process B still has it open for writing.

  • Process A has multiple threads. Thread X opens the file for writing, but Thread Y closes it. Now the flag is stuck at 1. (Remember that ->pid in kernel space is actually the userspace thread ID).

Rather than doing things at the inode level, you should be doing things in the .open and .release methods of your file_operations struct.

Your inode's private data should contain a struct file *current_writer;, initialised to NULL. In the file_operations.open method, if it's being opened for write then check the current_writer; if it's NULL, set it to the struct file * being opened, otherwise fail the open with EPERM. In the file_operations.release method, check if the struct file * being released is equal to the inode's current_writer - if so, set current_writer back to NULL.

PS: Bandan is also correct that you need locking, but using the inode's existing i_mutex should suffice to protect current_writer.
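A minimal sketch of this answer's suggestion, assuming a file_operations-based proc file (the function names are illustrative; the current_writer field would live in the inode's private data in a real module, and on modern kernels inode_lock()/inode_unlock() wrap what used to be i_mutex):

```c
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/errno.h>

/* Shown as a file-scope static for brevity; put it in the inode's
 * private data in a real module. */
static struct file *current_writer;

static int mc_open(struct inode *inode, struct file *filp)
{
	int ret = 0;

	if (filp->f_mode & FMODE_WRITE) {
		inode_lock(inode);           /* serialises access to current_writer */
		if (current_writer)
			ret = -EPERM;        /* another struct file already holds write access */
		else
			current_writer = filp;  /* this open instance is now the sole writer */
		inode_unlock(inode);
	}
	return ret;
}

static int mc_release(struct inode *inode, struct file *filp)
{
	inode_lock(inode);
	if (current_writer == filp)          /* only the writer's close clears the slot */
		current_writer = NULL;
	inode_unlock(inode);
	return 0;
}

static const struct file_operations mc_fops = {
	.owner   = THIS_MODULE,
	.open    = mc_open,
	.release = mc_release,
};
```

Because the check is keyed on the struct file * rather than the pid, it survives both fork() (the child shares the same struct file, and the flag is only cleared on the final release of that file) and the thread-X-opens/thread-Y-closes case.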

迟到的我 2024-09-08 20:36:10


I hope I understood your question correctly: when someone wants to write to your proc file, you set a variable called flag to 1 and save current->pid in a global variable. Then, when any close() entry point is called, you check the current->pid of the close() instance and compare it with your saved value. If it matches, you turn the flag off. Right?

Consider this situation: process A wants to write to your proc resource, so your permission callback runs. It sees that flag is 0, so it can set it to 1 for process A. But at that moment, the scheduler decides process A has used up its time slice and picks another process to run (the flag is still 0!). After some time, process B comes along wanting to write to your proc resource too, checks that the flag is 0, sets it to 1, and starts writing to the file. Unfortunately, at this point process A is scheduled to run again and, since it still believes the flag is 0 (remember, it was 0 before the scheduler preempted it), it also sets the flag to 1 and starts writing. End result: the data in your proc resource is corrupted.

You should use a proper locking mechanism provided by the kernel for this type of operation; based on your requirements, I think RCU is the best fit: have a look at the RCU locking mechanism.
