Named pipe (FIFO) with multiple readers on Unix
I have two programs, Writer and Reader.
I have a FIFO from Writer to Reader so when I write something to stdin in Writer, it gets printed out to stdout from Reader.
I tried doing this with TWO Readers open, and I got output to stdout from only one of the two Reader programs. Which Reader program Unix chooses to print from seems to be arbitrary each time I run this, but once it has chosen one of the programs, every output to stdout gets printed from that same Reader program.
Does anyone know why this happens?
If I have two WRITER programs, they both write to the same pipe okay.
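For reference, a minimal sketch of the kind of setup described, folded into one small C program that acts as the Writer or a Reader depending on its argument. The program name fifodemo and the FIFO path /tmp/demo.fifo are made up for this illustration, and the FIFO is assumed to already exist (mkfifo /tmp/demo.fifo):

    /* fifodemo.c - hypothetical illustration; assumes the FIFO exists.
     * "./fifodemo write" copies stdin into the FIFO (the Writer);
     * "./fifodemo read" copies the FIFO to stdout (a Reader). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define FIFO_PATH "/tmp/demo.fifo"   /* made-up path */

    int main(int argc, char *argv[])
    {
        char buf[256];
        ssize_t n;

        if (argc == 2 && strcmp(argv[1], "write") == 0) {
            int fd = open(FIFO_PATH, O_WRONLY);   /* Writer: stdin -> FIFO */
            if (fd < 0) { perror("open"); return 1; }
            while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
                write(fd, buf, n);
            close(fd);
        } else {
            int fd = open(FIFO_PATH, O_RDONLY);   /* Reader: FIFO -> stdout */
            if (fd < 0) { perror("open"); return 1; }
            while ((n = read(fd, buf, sizeof buf)) > 0)
                write(STDOUT_FILENO, buf, n);
            close(fd);
        }
        return 0;
    }

Running one "write" instance and two "read" instances reproduces the behaviour described: each line typed into the Writer comes out of exactly one Reader.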
Comments (5)
The O in FIFO means "out". Once your data is "out", it's gone. :-) So naturally if another process comes along and someone else has already issued a read, the data isn't going to be there twice.
To accomplish what you suggest you should look into Unix domain sockets. Manpage here. You can write a server which can write to client processes, binding to a filesystem path. See also socket(), bind(), listen(), accept(), connect(), all of which you'll want to use with PF_UNIX, AF_UNIX, and struct sockaddr_un.
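To make the contrast with a FIFO concrete, here is a rough sketch of such a server, assuming a made-up socket path /tmp/demo.sock and exactly two clients. Because the server holds a separate connection per client, it can send every message to each of them:

    /* Hypothetical server on a Unix domain socket: every connected
     * client gets its own copy of each message, unlike a FIFO.
     * The socket path /tmp/demo.sock is made up. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int srv = socket(PF_UNIX, SOCK_STREAM, 0);
        if (srv < 0) { perror("socket"); exit(1); }

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof addr);
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/tmp/demo.sock", sizeof addr.sun_path - 1);

        unlink("/tmp/demo.sock");   /* remove any stale socket file */
        if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind"); exit(1);
        }
        if (listen(srv, 5) < 0) { perror("listen"); exit(1); }

        /* Accept two clients, then send the same message to both. */
        int client[2];
        for (int i = 0; i < 2; i++) {
            client[i] = accept(srv, NULL, NULL);
            if (client[i] < 0) { perror("accept"); exit(1); }
        }
        const char msg[] = "hello to every reader\n";
        for (int i = 0; i < 2; i++) {
            write(client[i], msg, sizeof msg - 1);
            close(client[i]);
        }
        close(srv);
        return 0;
    }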
Linux tee() may suit your needs. See the tee(2) manpage.
NOTE: this function is Linux-specific.
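As a hedged illustration, the sketch below is modelled on the example in the tee(2) manpage: it duplicates whatever arrives on stdin to stdout with tee(), then consumes the same bytes into a file with splice(). Both stdin and stdout must be pipes for tee() to work, and the program and file names are invented, e.g. writer | ./teedemo copy.txt | reader:

    /* Duplicate stdin to stdout with tee(2), then drain the consumed
     * bytes into a file with splice(2).  Linux-specific. */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); exit(1); }

        int fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); exit(1); }

        for (;;) {
            /* Copy pipe contents from stdin to stdout WITHOUT consuming them. */
            ssize_t len = tee(STDIN_FILENO, STDOUT_FILENO, INT_MAX, SPLICE_F_NONBLOCK);
            if (len < 0) {
                if (errno == EAGAIN) continue;
                perror("tee"); exit(1);
            }
            if (len == 0) break;          /* writer closed its end */

            /* Now actually consume the same bytes, moving them into the file. */
            while (len > 0) {
                ssize_t n = splice(STDIN_FILENO, NULL, fd, NULL, len, SPLICE_F_MOVE);
                if (n < 0) { perror("splice"); exit(1); }
                len -= n;
            }
        }
        close(fd);
        return 0;
    }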
I don't think that the behaviour you observed is more than coincidental. Consider this trace, which uses 'sed' as the two readers and a loop as the writer:
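A rough C equivalent of the experiment, with two forked children standing in for the sed readers and the parent playing the writer loop (the FIFO path /tmp/demo.fifo is made up; each line written is far below PIPE_BUF, so every write is atomic):

    /* Hypothetical reconstruction: two reader children share one FIFO
     * while the parent writes one short line at a time.  Each line is
     * written atomically, so any given line shows up in exactly one
     * reader, but both readers get a chance to be scheduled. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define FIFO_PATH "/tmp/demo.fifo"   /* made-up path */

    static void reader(int id)
    {
        int fd = open(FIFO_PATH, O_RDONLY);
        char buf[128];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf - 1)) > 0) {
            buf[n] = '\0';
            printf("reader %d got: %s", id, buf);  /* lines arrive intact */
        }
        close(fd);
        exit(0);
    }

    int main(void)
    {
        mkfifo(FIFO_PATH, 0600);
        for (int id = 1; id <= 2; id++)
            if (fork() == 0)
                reader(id);

        int fd = open(FIFO_PATH, O_WRONLY);  /* blocks until a reader opens */
        for (int i = 0; i < 10; i++) {
            char line[32];
            int len = snprintf(line, sizeof line, "line %d\n", i);
            write(fd, line, len);     /* atomic: len < PIPE_BUF */
            usleep(10000);            /* pacing, like a shell echo loop */
        }
        close(fd);
        wait(NULL);
        wait(NULL);
        unlink(FIFO_PATH);
        return 0;
    }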
As you can see, both readers got to read some of the data. Which reader was scheduled at any time depended on the whim of the o/s. Note that I carefully used an echo to print each line of the file; those were atomic writes that were read atomically.
Had I used a Perl script with, for example, a delay after reading and echoing a line, then I might well have seen more determinate behaviour with (generally) two lines from Reader 1 for every 1 line from Reader 2.
Experimentation done on MacOS X 10.5.8 (Leopard) - but likely to be similar most places.
I would like to add to the above explanations that writes (and presumably reads, though I couldn't confirm this from the manpages) to pipes are atomic up to a certain size (4KiB on Linux). So suppose we start with an empty pipe, and the writer writes <=4KiB of data to the pipe. Here's what I think happens:
a) The writer writes all the data in one go. While this is happening, no other process has a chance to read from (or write to) the pipe.
b) One of the readers is scheduled to do its I/O.
c) The chosen reader reads all the data from the pipe in one go, and at some later time prints it to its stdout.
I think this could explain why you are seeing output from only one of the readers. Try writing in smaller chunks, and perhaps sleeping after each write.
Of course, others have answered why each datum is read by only one process.
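A minimal sketch of that suggestion, assuming the made-up FIFO path /tmp/demo.fifo and readers already attached; each write is tiny and followed by a pause, so the kernel gets a chance to wake a different reader for each chunk:

    /* Hypothetical writer: small atomic writes with a pause in between,
     * so a different reader can win the race for each chunk. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/demo.fifo", O_WRONLY);
        if (fd < 0) { perror("open"); return 1; }

        for (int i = 0; i < 10; i++) {
            char line[32];
            int len = snprintf(line, sizeof line, "chunk %d\n", i);
            write(fd, line, len);   /* far below PIPE_BUF, so atomic */
            usleep(100 * 1000);     /* sleep 100 ms after each write */
        }
        close(fd);
        return 0;
    }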
The sockets solution works, but becomes complicated if the server crashes. To allow any process to be the server, I use record locks at the end of a temporary file that contains location/length/data changes to the given file. I use a temporary named pipe to communicate append requests to whichever process has the write lock at the end of the temporary file.
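For the record-locking part of this scheme, a minimal sketch of taking an exclusive fcntl() lock at the end of such a file might look like the following; the path /tmp/changes.log is invented, and the actual layout of the location/length/data records is not shown:

    /* Hypothetical: take an exclusive record lock covering the end of
     * the change file.  Whichever process holds this lock acts as the
     * server; others block in F_SETLKW until it releases or dies (the
     * kernel drops fcntl locks automatically if the holder crashes). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/changes.log", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        struct flock lk = {
            .l_type   = F_WRLCK,    /* exclusive (write) lock */
            .l_whence = SEEK_END,   /* region starts at end of file */
            .l_start  = 0,
            .l_len    = 0,          /* 0 = from there onward */
        };
        if (fcntl(fd, F_SETLKW, &lk) < 0) {  /* block until lock acquired */
            perror("fcntl"); return 1;
        }

        /* ... act as the server: append location/length/data records ... */

        lk.l_type = F_UNLCK;        /* release explicitly when done */
        fcntl(fd, F_SETLK, &lk);
        close(fd);
        return 0;
    }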