Using POSIX pipe() and dup() with C++ to redirect I/O
I have to modify a simple shell I wrote for a previous homework assignment to handle I/O redirection, and I'm having trouble getting the pipes to work. When I write() and read() on stdout and stdin after duplicating the file descriptors in the separate processes, the pipe works, but if I use anything like printf, fprintf, gets, or fgets to check whether the output is showing up in the pipe, it goes to the console instead, even though the file descriptors for stdin and stdout are clearly copies of the pipe's ends (I don't know if that's the correct way to phrase it, but I think the point is clear).
I am 99.9% sure that I am doing everything as it should be done, at least in plain C -- such as closing all the file descriptors appropriately after the dup() -- and file I/O works fine, so this seems like a detail I am not aware of and cannot find any information on. I've spent most of the day trying different things, and the past few hours googling to figure out whether I could redirect cin and cout to the pipe to see if that would fix it, but it seems like more trouble than it's worth at this point.
Shouldn't this work just by redirecting stdin and stdout, since cin and cout are supposed to be synced with stdio? I thought it should, especially since the commands are probably written in C and would therefore use stdio. However, if I try a command like "cat [file1] [file2] | sort", it prints the result of cat [file1] [file2] to the command line, and sort gets no input, so it has no output. It's also clear that cout and cin are not affected by the dup() either, so I put two and two together and came to this conclusion.
Here is a somewhat shortened version of my code, minus all the error checking and the like, which I am confident I am handling well. I can post the full code if it comes to that, but it's a lot, so I'll start with this.
I rewrote the function so that the parent forks off a child for each command, connects them with pipes as necessary, and then waits for the child processes to die. Again, write() and read() on file descriptors 0 and 1 work (i.e., they write to and read from the pipe); stdio on the FILE pointers stdin and stdout does not (it does not write to the pipe).
Thanks a lot, this has been killing me...
UPDATE: I wasn't changing the string cmd for each of the different commands, so it didn't appear to work because the pipe just went to the same command and the final output was the same... Sorry for the dumbness, but thanks, because I found the problem with strace.
int call_execv( string cmd, vector<string> &argv, int argc,
                vector<int> &redirect )
{
    int result = 0, pid, /* some other declarations */;
    bool file_in, file_out, pipe_in, pipe_out;
    queue<int*> pipes; // never has more than 2 pipes

    // parse, fork, exec, & loop if there's a pipe until no more pipes
    do
    {
        /* some declarations for variables used in parsing */
        file_in = file_out = pipe_in = pipe_out = false;

        // parse the next command and set some flags
        while( /* there's more redirection */ )
        {
            string symbol = /* next redirection symbol */
            if( symbol == ">" )
            {
                /* set flags, get filename, etc */
            }
            else if( symbol == "<" )
            {
                /* set flags, get filename, etc */
            }
            else if( pipe_out = (symbol == "|") )
            {
                /* set flags, and... */
                int tempPipes[2];
                pipes.push( pipe(tempPipes) );
                break;
            }
        }
        /* ... set some more flags ... */

        // fork child
        pid = fork();
        if( pid == 0 ) // child
        {
            /* if pipe_in and pipe_out are set, there are two pipes in the queue:
               the old pipe's read end is dup'd to stdin, the new pipe's write
               end is dup'd to stdout, and the other two FDs are closed */
            /* if only pipe_in or pipe_out is set, there is one pipe in the queue;
               the unused end is closed in whichever if statement runs */
            /* if neither pipe_in nor pipe_out is set, there is no pipe in the queue */

            // redirect stdout
            if( pipe_out ){
                // close the newest pipe's read end
                close( pipes.back()[P_READ] );
                // dup the newest pipe's write end
                dup2( pipes.back()[P_WRITE], STDOUT_FILENO );
                // close the newest pipe's write end
                close( pipes.back()[P_WRITE] );
            }
            else if( file_out )
                freopen( outfile.c_str(), "w", stdout );

            // redirect stdin
            if( pipe_in ){
                close( pipes.front()[P_WRITE] );
                dup2( pipes.front()[P_READ], STDIN_FILENO );
                close( pipes.front()[P_READ] );
            }
            else if( file_in )
                freopen( infile.c_str(), "r", stdin );

            // create argument list and exec
            char **arglist = make_arglist( argv, start, end );
            execv( cmd.c_str(), arglist );
            cout << "Execution failed." << endl;
            exit(-1); // this only executes if execv fails
        } // end child

        /* close the newest pipe's write end because the child is writing to it;
           the older pipe's write end is already closed */
        if( pipe_out )
            close( pipes.back()[P_WRITE] );

        // remove pipes that have been read from the front of the queue
        if( init_count > 0 )
        {
            close( pipes.front()[P_READ] ); // close the FD first
            pipes.pop();                    // pop from queue
        }
    } while ( pipe_out );

    // wait for each child process to die
    return result;
}
Comments (2)
Whatever the problem, you are not checking any return values. How do you know whether the pipe() or the dup2() call succeeded? Have you verified that stdout and stdin really point to the pipe right before execv? Does execv keep the file descriptors you give it? Not sure; here is the corresponding paragraph from the execve documentation:

    ...file descriptor is closed, this will cause the release of all record locks obtained on the underlying file by this process. See fcntl(2) for details.) POSIX.1-2001 says that if file descriptors 0, 1, and 2 would otherwise be closed after a successful execve(), and the process would gain privilege because the set-user-ID or set-group-ID permission bit was set on the executed file, then the system may open an unspecified file for each of these file descriptors. As a general principle, no portable program, whether privileged or not, can assume that these three file descriptors will remain closed across an execve().

You should add more debug output and see what really happens. Did you use strace -f (to follow children) on your program?
The following:

    int tempPipes[2];
    pipes.push( pipe(tempPipes) );

is not supposed to work. I'm not sure how it even compiles, since the result of pipe() is an int. Not only that: tempPipes goes out of scope and its contents are lost. It should be something like this: