Monitoring URLs with Nagios
I'm trying to monitor actual URLs, and not only hosts, with Nagios, as I operate a shared server with several websites, and I don't think it's enough just to monitor the basic HTTP service (I'm including at the very bottom of this question a small explanation of what I'm envisioning).
(Side note: please note that I have Nagios installed and running inside a chroot on a CentOS system. I built nagios from source, and have used yum to install into this root all dependencies needed, etc...)
I first found check_url, but after installing it into /usr/lib/nagios/libexec, I kept getting a "return code of 255 is out of bounds" error. That's when I decided to start writing this question (but wait! There's another plugin I decided to try first!)
After reviewing this question, which describes almost the same problem I'm having with check_url, I decided to open a new question on the subject because:
a) I'm not using NRPE with this check
b) I tried the suggestions made on the earlier question to which I linked, but none of them worked. For example...
./check_url some-domain.com; echo $?
returns "0" (which indicates the check was successful)
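(For what it's worth, Nagios only accepts plugin exit codes 0 through 3, so a 255 usually means the plugin itself is failing, often because of permissions or environment. One hedged way to confirm is to run the check as the same account Nagios uses; the nagios user name and the path below are assumptions based on this question:)

su -s /bin/sh -c '/usr/lib/nagios/libexec/check_url some-domain.com; echo "exit status: $?"' nagios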
I then followed the debugging instructions on Nagios Support to create a temp file called debug_check_url, and put the following in it (to then be called by my command definition):
#!/bin/sh
echo `date` >> /tmp/debug_check_url_plugin
echo $* >> /tmp/debug_check_url_plugin
/usr/local/nagios/libexec/check_url $*
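(In "debugging mode", the command definition presumably points at this wrapper instead of the plugin itself; a sketch that mirrors the real definition shown next, with the wrapper's location under $USER1$ being an assumption:)

# hypothetical debug-mode variant of the 'check_url' command
define command{
        command_name    check_url
        command_line    $USER1$/debug_check_url $url$
}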
Assuming I'm not in "debugging mode", my command definition for running check_url is as follows (inside command.cfg):
# 'check_url' command definition
define command{
        command_name    check_url
        command_line    $USER1$/check_url $url$
}
(Incidentally, you can also view what I was using in my service config file at the very bottom of this question)
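(For reference, $url$ is not one of Nagios's standard macros; arguments supplied after the "!" in a check_command are normally read with the numbered $ARG…$ macros. A more conventional sketch of the same definition, under that assumption:)

# 'check_url' command definition using the standard $ARG1$ macro
define command{
        command_name    check_url
        command_line    $USER1$/check_url $ARG1$
}

With this form, check_url!somedomain.com in a service definition passes somedomain.com to the plugin as $ARG1$.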
Before publishing this question, however, I decided to give it one more shot at figuring out a solution. I found the check_url_status plugin and decided to give that one a try. To do that, here's what I did:
- mkdir /usr/lib/nagios/libexec/check_url_status/
- downloaded both check_url_status and utils.pm
- Per the user comment/review on the check_url_status plugin page, I changed the "use lib" line to point to the proper directory, /usr/lib/nagios/libexec/ (see the sketch just below).
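(Assuming the plugin is a Perl script and the change was to its "use lib" line, that edit presumably ended up looking something like:)

use lib "/usr/lib/nagios/libexec/";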
I then ran the following:
./check_url_status -U some-domain.com
When I ran the above command, I kept getting the following error:
bash-4.1# ./check_url_status -U mydomain.com
Can't locate utils.pm in @INC (@INC contains: /usr/lib/nagios/libexec/ /usr/local/lib/perl5 /usr/local/share/perl5 /usr/lib/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib/perl5 /usr/share/perl5) at ./check_url_status line 34.
BEGIN failed--compilation aborted at ./check_url_status line 34.
So at this point, I give up, and have a couple of questions:
- Which of these two plugins would you recommend: check_url or check_url_status? (After reading the description of check_url_status, I feel that this one might be the better choice. Your thoughts?)
- Now, how would I fix my problem with whichever plugin you recommend?
At the beginning of this question, I mentioned I would include a small explanation of what I'm envisioning. I have a file called services.cfg which is where I have all of my service definitions located (imagine that!).
The following is a snippet of my service definition file, which I wrote to use check_url (because at that time, I thought everything worked). I'll build a service for each URL I want to monitor:
###
# Monitoring Individual URLs...
#
###
define service{
        host_name               {my-shared-web-server}
        service_description     URL: somedomain.com
        check_command           check_url!somedomain.com
        max_check_attempts      5
        check_interval          3
        retry_interval          1
        check_period            24x7
        notification_interval   30
        notification_period     workhours
}
Comments (3)
I was making things WAY too complicated.
The built-in plugin check_http, installed by default, can accomplish what I wanted and more. Here's how I accomplished it:
My Service Definition:
My Command Definition:
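(The author's actual definitions are not reproduced above. What follows is only a minimal sketch of how check_http is commonly wired up for per-URL checks; the command name check_website, the host and domain names, and the choice of the -I/-H/-u flags are illustrative assumptions, not the author's configuration.)

# hypothetical command: check a site by address, Host header, and URI path
define command{
        command_name    check_website
        command_line    $USER1$/check_http -I $HOSTADDRESS$ -H $ARG1$ -u $ARG2$
}

# hypothetical service: one such service per URL to be monitored
define service{
        host_name               my-shared-web-server
        service_description     URL: somedomain.com
        check_command           check_website!somedomain.com!/
        max_check_attempts      5
        check_interval          3
        retry_interval          1
        check_period            24x7
        notification_interval   30
        notification_period     workhours
}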
The better way to monitor URLs is to use WebInject, which can be used with Nagios.
The problem below is because you don't have the Perl package utils (utils.pm); try installing it.
bash-4.1# ./check_url_status -U mydomain.com Can't locate utils.pm in @INC (@INC contains:
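(utils.pm normally ships with the Nagios plugins in their libexec directory, so another option is simply to make an existing copy visible to check_url_status. A hedged sketch; the paths are assumptions based on the question above:)

# look for an existing utils.pm (locations vary by install)
find /usr/local/nagios /usr/lib/nagios /usr/lib64/nagios -name utils.pm 2>/dev/null

# option 1: copy it next to the plugin so the script's "use lib" path can find it
cp /usr/local/nagios/libexec/utils.pm /usr/lib/nagios/libexec/

# option 2: point Perl at the directory that already has it when testing by hand
PERL5LIB=/usr/local/nagios/libexec ./check_url_status -U mydomain.com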
You can make a script plugin. It is easy: you only have to check the URL with something like the sketch below, where $URL is what you pass to the script as a parameter. Then check the result: if the HTTP status code is greater than 399, you have a problem; otherwise everything is OK. Finally, exit with the right exit code and print a message for Nagios.
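(The answer's original snippet did not survive above. Here is a minimal sketch of such a plugin, assuming curl is available and treating anything outside the 200-399 range, including connection failures, as a problem:)

#!/bin/sh
# minimal URL-check plugin sketch: $1 ($URL) is the URL passed as a parameter
URL="$1"

# fetch only the HTTP status code; a connection failure yields 000
CODE=$(curl -o /dev/null -s -w "%{http_code}" "$URL")

if [ "$CODE" -ge 200 ] && [ "$CODE" -le 399 ]; then
    echo "OK - $URL returned HTTP $CODE"
    exit 0    # OK for Nagios
else
    echo "CRITICAL - $URL returned HTTP $CODE"
    exit 2    # CRITICAL for Nagios
fi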