Tail a log file over HTTP

Posted on 2024-08-09 13:19:17


For security reasons (I'm a developer) I do not have command line access to our Production servers where log files are written. I can, however access those log files over HTTP. Is there a utility in the manner of "tail -f" that can "follow" a plain text file using only HTTP?


Comments (6)

病毒体 2024-08-16 13:19:17


You can do this if the HTTP server accepts requests to return parts of a resource. For example, if an HTTP request contains the header:

Range: bytes=-500

the response will contain the last 500 bytes of the resource. You can fetch that and then parse it into lines, etc. I don't know of any ready-made clients which will do this for you - I'd write a script to do the job.

You can use Hurl to experiment with headers (from publicly available resources).
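A minimal sketch of such a script, assuming the server supports byte-range requests (the URL and the 2-second polling interval are placeholders, not part of the answer): remember how many bytes you have already seen and request only what follows.

```python
import time
import urllib.error
import urllib.request

def range_header(offset):
    """Build a Range value asking for every byte from `offset` onward."""
    return f"bytes={offset}-"

def follow(url, interval=2.0):
    """Poll `url` and print only bytes not seen before (runs forever)."""
    offset = 0
    while True:
        req = urllib.request.Request(url, headers={"Range": range_header(offset)})
        try:
            with urllib.request.urlopen(req) as resp:
                body = resp.read()
                # 206 Partial Content: the server honored the byte range.
                if resp.status == 206 and body:
                    print(body.decode(errors="replace"), end="", flush=True)
                    offset += len(body)
        except urllib.error.HTTPError as err:
            # 416 Range Not Satisfiable usually means no new bytes yet.
            if err.code != 416:
                raise
        time.sleep(interval)
```

This only works when the server actually returns 206 for range requests; if it replies 200 with the full body, you would have to fall back to diffing the whole response yourself.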

甜妞爱困 2024-08-16 13:19:17


I wrote a bash script for the same purpose. You can find it here: https://github.com/maksim07/url-tail

过气美图社 2024-08-16 13:19:17


You can use PsExec to execute commands on the remote computer.
A tail command for Windows can be found at http://tailforwin32.sourceforge.net/

If it has to be HTTP, you can write a lightweight web service to achieve that easily,
e.g. one that reads the text of a specified file from line 0 to line 200.
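That web-service idea could be sketched like this in Python (the log path, port, and the `from`/`to` query parameters are all hypothetical names, not an existing API):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

LOG_PATH = "/var/log/app.log"  # hypothetical log location

def slice_lines(lines, start, end):
    """Return lines[start:end]; a pure helper so the logic is easy to test."""
    return lines[start:end]

class LogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /?from=0&to=200 returns lines 0..200 of the log file.
        qs = parse_qs(urlparse(self.path).query)
        start = int(qs.get("from", ["0"])[0])
        end = int(qs.get("to", ["200"])[0])
        with open(LOG_PATH, errors="replace") as f:
            body = "".join(slice_lines(f.readlines(), start, end)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    """Start the service (blocks until interrupted)."""
    HTTPServer(("", port), LogHandler).serve_forever()
```

This is only a sketch; a real deployment would need authentication and path restrictions, since exposing arbitrary log reads over HTTP was the security concern in the first place.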

真心难拥有 2024-08-16 13:19:17


You can use a small Java utility to read the log file over HTTP with the Apache HttpClient library.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    import org.apache.http.HttpResponse;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.HttpClientBuilder;

    HttpClient client = HttpClientBuilder.create().build();
    HttpGet request = new HttpGet(uri);
    HttpResponse response = client.execute(request);
    BufferedReader rd = new BufferedReader(new InputStreamReader(
            response.getEntity().getContent()));
    String s;
    while ((s = rd.readLine()) != null) {
        // Process the line
    }

渔村楼浪 2024-08-16 13:19:17


I wrote a simple bash script to fetch the URL content every 2 seconds, compare it with the local file output.txt, and append the difference to that same file.

I wanted to stream AWS Amplify logs in my Jenkins pipeline.

while true; do comm -13 --output-delimiter="" <(cat output.txt) <(curl -s "$URL") >> output.txt; sleep 2; done

Don't forget to create the empty output.txt file first:

: > output.txt

View the stream:

tail -f output.txt

UPDATE:

I found a better solution using wget here:

while true; do wget -ca -o /dev/null -O output.txt "$URL"; sleep 2; done

https://superuser.com/a/514078/603774
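One caveat: `comm` is defined on sorted input, while a log file is append-only rather than sorted. For an append-only log, the same fetch-compare-append loop can be sketched by keeping just the suffix past what was already seen (a hedged Python sketch, not part of the original answer; the URL and interval are placeholders):

```python
import time
import urllib.request

def new_suffix(old, new):
    """Part of `new` beyond `old`, assuming an append-only log; if the
    prefix no longer matches (e.g. rotation), treat everything as new."""
    return new[len(old):] if new.startswith(old) else new

def follow(url, interval=2.0):
    """Fetch `url` repeatedly and print only newly appended text."""
    seen = ""
    while True:
        with urllib.request.urlopen(url) as resp:
            current = resp.read().decode(errors="replace")
        delta = new_suffix(seen, current)
        if delta:
            print(delta, end="", flush=True)
        seen = current
        time.sleep(interval)
```

Like the original script, this re-downloads the whole file on every poll, so the Range-header approach above scales better for large logs.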

寻找我们的幸福 2024-08-16 13:19:17


I have created a PowerShell script which

  1. Gets the content from the given URL every 30 seconds,
  2. Gets only a specific amount of data using the "Range" HTTP request header.

while ($true) {
    $request = [System.Net.WebRequest]::Create("https://raw.githubusercontent.com/fascynacja/blog-demos/master/gwt-marquee/pom.xml")
    $request.AddRange(-2000)
    $response = $request.GetResponse()
    $stream = $response.GetResponseStream()
    $reader = New-Object System.IO.StreamReader($stream)
    $content = $reader.ReadToEnd()
    $reader.Close()
    $stream.Close()
    $response.Close()

    Write-Output $content

    Start-Sleep -Seconds 30
}

You can adjust the range and the number of seconds to your own needs. If needed, you can easily add color patterns for specific search terms, and you can also redirect the output to a file.
