Reading data from a log file periodically

Published 2024-12-18 18:04:34

I am generating a log file, and what I want is to read the data periodically without having to read from the beginning each time. Can anyone help?


Comments (4)

君勿笑 2024-12-25 18:04:34

Open the file and have a loop which:

  • gets the size and compares it with the size you have already read.
  • if the size has grown, reads that many bytes and no more. Doing this means you can read more later.
  • if the size has shrunk, closes the file and starts again.

You can use FileInputStream or RandomAccessFile.
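The loop above can be sketched with RandomAccessFile. This is a minimal illustration, not a complete tailer: the class name `LogTailer` and the method `readNewBytes` are hypothetical, and it assumes the caller keeps track of the last-read position between calls.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class LogTailer {

    /**
     * Reads any bytes appended to {@code file} since {@code lastPosition},
     * appends them to {@code out}, and returns the new position.
     * If the file has shrunk (e.g. it was truncated or rotated),
     * reading restarts from the beginning.
     */
    static long readNewBytes(File file, long lastPosition, StringBuilder out)
            throws IOException {
        long length = file.length();
        if (length < lastPosition) {
            lastPosition = 0; // file shrank: start over
        }
        if (length > lastPosition) {
            try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
                raf.seek(lastPosition);                       // jump past what was already read
                byte[] buf = new byte[(int) (length - lastPosition)];
                raf.readFully(buf);                           // read exactly the new bytes, no more
                out.append(new String(buf));
                lastPosition = length;
            }
        }
        return lastPosition;
    }
}
```

Note that the file is reopened on each poll; you could equally keep one RandomAccessFile open and only `seek` to the saved position.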

甜味拾荒者 2024-12-25 18:04:34

Use the unix command 'tail'; the options '-f' and '-F' for that same command are very handy as well.

See here http://www.thegeekstuff.com/2009/08/10-awesome-examples-for-viewing-huge-log-files-in-unix/ for examples, or just google around for more.

倾城花音 2024-12-25 18:04:34

If you want to run a program that reads your log file periodically, you can use a scheduler, such as Quartz Scheduler, to run it at a fixed interval.
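Quartz pulls in an external dependency; if all you need is fixed-rate polling, the JDK's own ScheduledExecutorService is enough. A minimal sketch (the class name `LogPoller` and the helper `startPolling` are illustrative, not part of any library):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class LogPoller {

    /**
     * Runs {@code task} repeatedly at a fixed interval, starting immediately,
     * and returns a handle that can be used to cancel the polling later.
     */
    static ScheduledFuture<?> startPolling(Runnable task, long periodMillis) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        // First run at delay 0, then every periodMillis milliseconds.
        return scheduler.scheduleAtFixedRate(task, 0, periodMillis,
                TimeUnit.MILLISECONDS);
    }
}
```

The `task` passed in would be whatever log-reading logic you use; Quartz becomes worthwhile once you need cron-style schedules, persistence, or clustering.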

陌生 2024-12-25 18:04:34

RandomAccessFile is a good option. If you leave the application, you will have to persist the position of your last read before exiting, in order to avoid rereading information.

Log files, on the other hand, tend to become quite large under heavy event flow. Rotating log files will let you shift the problem a little towards file naming. You can configure your system to produce one log file per day, like this:

app_access.2011-11-28.log, 
app_access.2011-11-29.log, 
app_access.2011-11-30.log,
...

If the files you get are still very large, you can rotate them by date and time, so the hour is also part of the file name. Your files could then rotate, say, every three hours or even every hour. This gives you more log files to read, but they will be smaller and thus easier to process. The date and time range you want to seek will be part of the file name.

You could also additionally rotate by file size. If you pick a maximum file size you can handle, you can avoid random access into a huge file entirely.
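Persisting the last-read position mentioned above can be as simple as writing the byte offset to a small sidecar file. A hedged sketch, assuming the offset fits the log file the reader reopens on restart (the class `OffsetStore` and its file format are made up for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OffsetStore {

    /**
     * Saves the last-read byte offset so the reader can resume after a
     * restart instead of rereading the whole log.
     */
    static void save(Path offsetFile, long position) throws IOException {
        Files.writeString(offsetFile, Long.toString(position));
    }

    /** Loads the saved offset, or 0 if none has been stored yet. */
    static long load(Path offsetFile) throws IOException {
        if (!Files.exists(offsetFile)) {
            return 0L;
        }
        return Long.parseLong(Files.readString(offsetFile).trim());
    }
}
```

On startup, `load` gives the position to `seek` to; after each read, `save` records the new position. If the log has been rotated in the meantime, a stored offset larger than the current file size is the cue to start from zero again.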
