Is it possible to send POST parameters to a CGI script without another HTTP request?

I'm attempting to run a CGI script in the current environment from another Perl module. Everything works well using standard system calls for GET requests. POST is fine too, until the parameter list gets too long; then it gets cut off.

Has anyone run into this problem, or does anyone have suggestions for other ways to attempt this?

The following are somewhat simplified for clarity. There is more error checking, etc.

For GET requests and POST requests w/o parameters, I do the following:

# $query is a CGI object.
my $perl = $^X;
my $cgi  = $cgi_script_location; # /path/file.cgi
system {$perl} $cgi;
  • Parameters are passed through the QUERY_STRING environment variable (a small sketch of this environment follows the list).
  • STDOUT is captured by the calling script, so whatever the CGI script prints behaves as normal.
  • This part works.
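
For readers trying this outside a live web request, here is a minimal sketch of the environment the child relies on in the GET case (values hypothetical, and assuming the CGI script uses CGI.pm). Inside a real request these variables are already set by the web server and are simply inherited across system():

# Normally set by the web server and inherited by the child; shown here
# explicitly (with hypothetical values) only to illustrate what the CGI
# script reads.
$ENV{'REQUEST_METHOD'} ||= 'GET';
$ENV{'QUERY_STRING'}   ||= 'param1=This%20is%20a%20string&param2=42';

system {$perl} $cgi;   # CGI.pm in the child parses QUERY_STRING itself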

For POST requests with parameters the following works, but seemingly limits my available query length:

# $query is a CGI object.
my $perl = $^X;
my $cgi  = $cgi_script_location; # /path/file.cgi

# Gather parameters into a URL-escaped string suitable
# to pass to a CGI script run from the command line.
# Null characters are handled properly.
# e.g., param1=This%20is%20a%20string&param2=42&... etc.
# This works.
my $param_string = $self->get_current_param_string();

# Various ways to do this, but system() doesn't pass any 
# parameters (different question).
# Using qx// and printing the return value works as well.
open(my $cgi_pipe, "|$perl $cgi");
print {$cgi_pipe} $param_string;
close($cgi_pipe);
  • This method works for short parameter lists, but if the entire command gets to be close to 1000 characters, the parameter list is cut short. This is why I attempted to save the parameters to a file: to avoid shell limitations.
  • If I dump the parameter list from the executed CGI script I get something like the following (a small diagnostic sketch follows the dump):


param1=blah
... a bunch of other parameters ...
paramN=whatever
p <-- cut off after 'p'. There are more parameters.
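
For what it's worth, a small diagnostic sketch (reusing $param_string from the code above) that logs how many bytes the parent actually writes versus what the request environment claims, which can help pin down where the truncation happens:

# Compare what the parent is about to write with what the environment
# advertises to the child CGI script.
my $sent = length($param_string);
warn sprintf(
    "parent: writing %d bytes; CONTENT_LENGTH=%s\n",
    $sent,
    defined $ENV{'CONTENT_LENGTH'} ? $ENV{'CONTENT_LENGTH'} : 'unset',
);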

Other things I've done that didn't help or work

  • Followed the CGI troubleshooting guide
  • Saved the parameters to a file using CGI->save(), passing that file to the CGI script. Only the first parameter is read using this method.

$> perl index.cgi < temp-param-file

  • Saved $param_string to a file, passing that file to the CGI script just like above. Same limitations as passing the commands through the command line; still gets cut off.
  • Made sure $CGI::POST_MAX is acceptably high (it's -1).
  • Made sure the CGI's command-line processing was working. (:no_debug is not set)
  • Ran the CGI from the command line with the same parameters. This works.

Leads

  • Obviously, this seems like a character limit of the shell Perl is using to execute the command, but it wasn't resolved by passing the parameters through a file.

Comments (3)

伪心 2024-08-20 09:00:41

Passing parameters to system as a single string, from HTTP input, is extremely dangerous.

From perldoc -f system,

If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is /bin/sh -c on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument,..

In other words, if I pass in arguments -e printf("working..."); rm -rf /; I can delete information from your disk (everything if your web server is running as root). If you choose to do this, make sure you call system("perl", @cgi) instead.

The argument length issue you're running into may be an OS limitation (described at http://www.in-ulm.de/~mascheck/various/argmax/):

There are different ways to learn the upper limit:

  • command: getconf ARG_MAX
  • system header: ARG_MAX in e.g. <[sys/]limits.h>

Saving to a temp file is risky: multiple calls to the CGI might save to the same file, creating a race condition where one user's parameters might be used by another user's process.

You might try opening a file handle to the process and passing the parameters on its standard input instead: open(my $cgi_fh, '|-', 'perl', $cgi) or die $!; print {$cgi_fh} $param_string;
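
As a side note, the ARG_MAX limit mentioned above can also be checked from within Perl; a minimal sketch, assuming the core POSIX module exposes sysconf and _SC_ARG_MAX on your platform:

use POSIX qw(sysconf _SC_ARG_MAX);

# Same value that `getconf ARG_MAX` prints from the shell.
my $arg_max = sysconf(_SC_ARG_MAX);
warn "ARG_MAX on this system: $arg_max bytes\n";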

说不完的你爱 2024-08-20 09:00:41

I didn't want to do this, but I've gone with the most direct approach and it works. I'm tricking the environment into thinking the request method is GET so that the called CGI script will read its input from the QUERY_STRING environment variable it expects. Like so:

$ENV{'QUERY_STRING'} = $long_parameter_string . '&' . $ENV{'QUERY_STRING'};
$ENV{'REQUEST_METHOD'} = 'GET';

system {$perl_exec} $cgi_script;
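
A possible refinement, sketched with the same variable names as above: localize the overrides so the fake GET environment is undone automatically when the block exits, instead of leaking into the rest of the calling request.

{
    # local() restores both values when this block is left.
    local $ENV{'REQUEST_METHOD'} = 'GET';
    local $ENV{'QUERY_STRING'}   = $long_parameter_string . '&' . $ENV{'QUERY_STRING'};

    system {$perl_exec} $cgi_script;
}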

I'm worried about potential problems this may cause, but I can't think of what this would harm, and it works well so far. But, because I'm worried, I thought I'd ask the horde if they saw any potential problems:

Are there any problems handling a POST request as a GET request on the server

I'll hold off on marking this as the official answer until people have confirmed it or at least debated it on the above post.

几味少女 2024-08-20 09:00:41

Turns out that the problem is actually related to the difference in Content-Length between the original parameters and the parameter string I cobbled together. I didn't realize that the CGI module was using this value from the original headers as the limit to how much input to read (makes sense!). Apparently the extra escaping I was doing was adding some characters.

My solution's trick is simply to piece together the parameter string I'll be passing, and then modify the environment variable the CGI module checks to determine the content length so that it equals the length of that string.

Here's the final working code:

use CGI::Util qw(escape);

my $params = '';

foreach my $param (sort $query->param) {
    my $escaped_param = escape($param);

    foreach my $value ($query->param($param)) {
        $params .= "$escaped_param=" . escape("$value") . "&";
    }
}

foreach (keys %{$query->{'.fieldnames'}}) {
    $params .= ".cgifields=" . escape("$_") . "&";
}

# This is the trick: CGI.pm reads exactly CONTENT_LENGTH bytes from STDIN
# for a POST, so it must match the string actually being piped in.
$ENV{'CONTENT_LENGTH'} = length($params);

open(my $cgi_pipe, "| $perl $cgi_script") || die("Cannot fork CGI: $!");
local $SIG{PIPE} = sub { warn "spooler pipe broke" };

print {$cgi_pipe} $params;

warn("param chars: " . length($params));

close($cgi_pipe) || warn "Error: CGI exited with value $?";
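
Not part of the fix above, but a related sketch using the core IPC::Open2 module (variable names reused from the code above), for the case where the caller wants the CGI's STDOUT in a variable instead of letting it flow straight through:

use IPC::Open2;

$ENV{'CONTENT_LENGTH'} = length($params);      # same trick as above

# Feed $params to the child's STDIN and capture its STDOUT.
my ($cgi_out, $cgi_in);
my $pid = open2($cgi_out, $cgi_in, $perl, $cgi_script);

print {$cgi_in} $params;
close($cgi_in);                                # end of input for CGI.pm

my $cgi_output = do { local $/; <$cgi_out> };  # slurp the script's output
waitpid($pid, 0);

# Caveat: writing all input before reading any output can deadlock on very
# large responses; IPC::Open3 with select(), or IPC::Run, is safer there.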

Thanks for all the help!
