I need a simple way to monitor multiple text log files distributed over a number of HP-UX servers. They are a mix of text and XML log files from several distributed legacy systems. Currently we just ssh to the servers and use tail -f and grep, but that doesn't scale when you have many logs to keep track of.
Since the logs are in different formats and just files in folders (automatically rotated when they reach a certain size) I need to both collect them remotely and parse each one differently.
My initial thought was to make a simple daemon process that I can run on each server using a custom file reader for each file type to parse it into a common format that can be exported over the network via a socket. Another viewer program running locally will connect to these sockets and show the parsed logs in some simple tabbed GUI or aggregated to a console.
What log format should I try to convert to if I am to implement it this way?
Is there some other easier way? Should I attempt to translate the log files to the log4j format to use with Chainsaw, or are there better log viewers that can connect to remote sockets? Could I use BareTail as suggested in another log question? This is not a massively distributed system, and changing the current logging implementations for all applications to use UDP broadcast or put messages on a JMS queue is not an option.
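If you do roll your own, the normalization step of such a daemon can be sketched in a few lines of shell. The pipe-delimited `host|file|message` layout below is my own invention for illustration, not any standard format:

```shell
#!/bin/sh
# Sketch: wrap each raw log line in a common "host|logfile|message" format.
# The pipe-delimited layout is an assumption, not an established standard.
normalize() {
    logfile="$1"
    while IFS= read -r line; do
        printf '%s|%s|%s\n' "$(hostname)" "$logfile" "$line"
    done
}

# A per-file reader would feed this from tail and export it over a socket,
# e.g. (port number is arbitrary): tail -F app.log | normalize app.log | nc -lk 9999
printf 'ERROR something broke\n' | normalize /var/log/app.log
```

Each file type would get its own parser in front of `normalize`, so the viewer only ever sees the common layout.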
Probably the lightest-weight solution for real-time log watching is to use Dancer's shell in concurrent mode with tail -f:
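With dsh that is a one-liner; the host group and log path below are placeholders:

```shell
# -c = run on all hosts concurrently, -M = prefix each line with the machine name,
# -g loggroup = a host group you define yourself under ~/.dsh/group/ (placeholder)
dsh -g loggroup -c -M -- tail -f /var/log/app.log
```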
We use a simple shell script like the one below. You'd obviously have to tweak it somewhat to tell it about the different file names and decide which box to look for which on, but you get the basic idea. In our case we are tailing a file at the same location on multiple boxes. This requires ssh authentication via stored keys instead of typing in passwords.
Regarding Mike Funk's comment about not being able to
kill the tailing with ^C, I store the above in a file called multitails.sh
and appended the following to the end of it. This creates a kill_multitails.sh file
which you run when you're done tailing, and then it deletes itself.
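The script itself didn't survive in this copy, so the following is only a reconstruction of what such a pair of scripts might look like, not the author's original; the host names and log path are placeholders, and key-based ssh auth is assumed:

```shell
#!/bin/sh
# multitails.sh (sketch) -- tail the same file on several boxes over ssh.
# Host names and the log path are placeholders; ssh key auth is assumed.
LOGFILE=/var/log/app.log
PIDS=""
for host in host1 host2 host3; do
    ssh -o BatchMode=yes -o ConnectTimeout=1 "$host" tail -f "$LOGFILE" &
    PIDS="$PIDS $!"
done

# The appended part: generate a kill script for the background tails,
# since ^C won't reach them; it deletes itself once run.
cat > kill_multitails.sh <<EOF
#!/bin/sh
kill $PIDS
rm -f kill_multitails.sh
EOF
chmod +x kill_multitails.sh
```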
multitail
or
"chip is a local and remote log parsing and monitoring tool for system admins and developers.
It wraps the features of swatch, tee, tail, grep, ccze, and mail into one, with some extras"
Eg.
chip -f -m0='RUN ' -s0='red' -m1='.*' -s1 user1@remote_ip1:'/var/log/log1 /var/log/log2 /var/log/log3' user2@remote_ip2:'/var/log/log1 /var/log/log2 /var/log/log3' | egrep "RUN |==> /"
This will highlight in red the occurrences of the -m0 pattern,
pre-filtering the 'RUN |==> /' pattern from all the log files.
Logscape - like Splunk, without the price tag
Options:
Awstats provides a perl script that can merge several Apache log files together. The script scales well since the memory footprint is very low: log files are never loaded into memory.
I know that is not exactly what you need, but perhaps you can start from this script and adapt it to your needs.
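The script in question is logresolvemerge.pl, shipped in the awstats tools directory; a typical invocation (the file names are placeholders) would be:

```shell
# Merge per-server Apache logs into one chronologically ordered stream;
# the inputs are streamed rather than slurped into memory (paths are placeholders)
perl logresolvemerge.pl access.server1.log access.server2.log > access.merged.log
```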
I wrote vsConsole for exactly this purpose - easy access to log files - and then added app monitoring and version tracking. Would like to know what you think of it. http://vs-console.appspot.com/
XpoLog for Java
You can use the various receivers available with Chainsaw (VFSLogFilePatternReceiver to tail files over ssh, SocketReceiver, UDPReceiver, CustomSQLDBReceiver, etc) and then aggregate the logs into a single tab by changing the default tab identifier or creating a 'custom expression logpanel' by providing an expression which matches the events in the various source tabs.
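As a rough sketch, a receiver definition for tailing one remote file might look like the fragment below; the receiver's package name, the host, the path, and the logFormat pattern are all assumptions that vary by Chainsaw/receivers-companion version:

```xml
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <!-- Tail a remote file over sftp; host, path, and formats are placeholders -->
  <plugin name="remoteTail"
          class="org.apache.log4j.chainsaw.vfs.VFSLogFilePatternReceiver">
    <param name="fileURL" value="sftp://user@remotehost/var/log/app.log"/>
    <param name="logFormat" value="TIMESTAMP LEVEL [LOGGER] MESSAGE"/>
    <param name="timestampFormat" value="yyyy-MM-dd HH:mm:ss"/>
    <param name="tailing" value="true"/>
  </plugin>
</log4j:configuration>
```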
gltail - real-time visualization of server traffic, events and statistics with Ruby, SSH and OpenGL from multiple servers