Circular log file in Perl
I have implemented a log file that stores the CPU and memory state of a process every minute. I have limited the maximum size of the file to 3 MB (that's enough for my purpose).
The script is called by a cron job every minute; it logs the details for that minute and renames the file to "Log_.log".
When the size reaches "3 MB - 100 bytes", I reset the file pointer to the beginning, overwrite the first entry in the log file, and rename the file to "Log_<0+some offset>.log".
Since I am renaming the file every minute to record the file pointer position, is this a good/efficient approach?
I do not want to maintain more than one log file for this purpose.
My other option is to keep the file pointer position in a file, but... another file! I am not interested in maintaining one if the renaming approach is good :)
Thanks in advance.
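For readers, here is a minimal sketch of the scheme described above. The 3 MB cap, the 100-byte margin, and the Log_<offset>.log naming follow the question; the directory and the entry format are made up for illustration, so treat this as a rough outline rather than the asker's actual code.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Fcntl qw(O_RDWR O_CREAT SEEK_SET);
    use File::Basename qw(basename);

    my $LOG_DIR  = '.';                        # assumed location
    my $MAX_SIZE = 3 * 1024 * 1024 - 100;      # wrap just before 3 MB

    # The current write offset is encoded in the file name, e.g. "Log_12345.log".
    my ($file) = glob("$LOG_DIR/Log_*.log");
    my $offset = 0;
    if (defined $file && basename($file) =~ /^Log_(\d+)\.log$/) {
        $offset = $1;
    } else {
        $file = "$LOG_DIR/Log_0.log";
    }

    my $entry = sprintf "%s cpu=... mem=...\n", scalar localtime;

    # Wrap to the beginning once the next entry would cross the cap.
    $offset = 0 if $offset + length($entry) > $MAX_SIZE;

    sysopen my $fh, $file, O_RDWR | O_CREAT or die "open $file: $!";
    seek $fh, $offset, SEEK_SET or die "seek: $!";
    print {$fh} $entry;
    close $fh or die "close: $!";

    # "Store" the new pointer position by renaming the file.
    rename $file, sprintf("%s/Log_%d.log", $LOG_DIR, $offset + length($entry))
        or die "rename: $!";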
Are you an engineer? This is a nice example of a simple task solved by a perfectly working but overly complex solution.
Unless the content you put in takes exactly as many bytes as the content you take out, writing "into" the middle of a file will actually cause the whole part after your writing position to be rewritten to disk. Appending is much cheaper.
Renaming the file to store the pointer works, but it's not very elegant and it makes things more complex (for one, your process needs write permission on the directory the file resides in; otherwise write access to the two files would be sufficient).
Unless disk space is an issue (and really, it rarely is), your approach is less efficient than, say, appending everything to a file and rotating the file when it reaches its maximum size. This way you always have the last 3 MB of logs available, plus up to 3 MB more in your current file. It will also make parsing the file a lot easier, with no pointer position to recalculate.
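As an illustration of the append-and-rotate suggestion (not the asker's code), a minimal sketch; the file name and the 3 MB limit are assumptions:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $LOG      = '/var/log/proc_stats.log';   # assumed path
    my $MAX_SIZE = 3 * 1024 * 1024;             # rotate at 3 MB

    my $entry = sprintf "%s cpu=... mem=...\n", scalar localtime;

    # Rotate: keep exactly one older file, so at most ~6 MB ever sits on disk
    # and the previous 3 MB of history stays intact in "$LOG.1".
    if (-e $LOG && -s $LOG >= $MAX_SIZE) {
        rename $LOG, "$LOG.1" or die "rotate: $!";
    }

    # Appending never rewrites existing data and needs no stored pointer.
    open my $fh, '>>', $LOG or die "open $LOG: $!";
    print {$fh} $entry;
    close $fh or die "close: $!";

Parsing is then a plain sequential read of "$LOG.1" followed by "$LOG", with no offset bookkeeping at all.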
Update to answer your comment:
Renaming a file every minute (or even every second) shouldn't slow your system down significantly; don't worry about that.
Our concern is mainly with why you think you need to rename the file at all. It's not better technically, it's not better from a logical point of view, and it makes a lot of other (future) tasks harder. You could store the file pointer in a separate file, or at the end of your file, and there are better^H^H^H^H^H^H simpler solutions that don't require a file pointer at all.
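If the pointer-based design is kept anyway, the offset could live in a tiny sidecar file instead of the file name. A rough sketch of such helpers, with an assumed "proc_stats.ptr" name:

    # Illustrative helpers: persist the next write offset in a sidecar file.
    use strict;
    use warnings;

    my $PTR = 'proc_stats.ptr';   # assumed name

    sub read_offset {
        open my $in, '<', $PTR or return 0;   # first run: start at offset 0
        chomp(my $off = <$in> // 0);
        close $in;
        return $off;
    }

    sub write_offset {
        my ($off) = @_;
        open my $out, '>', $PTR or die "open $PTR: $!";
        print {$out} "$off\n";
        close $out or die "close $PTR: $!";
    }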
I'm confused why you would rename your file. What does this accomplish?
Are the log entries fixed size, or variable size?
If the entries are fixed size, then there is no trouble in rewriting the existing file from the start: you won't ever have incomplete entries in the file, and if you write a counter or timestamps to the file, it should be clear where the 'cursor' is located.
If the entries are variable size, then you probably should not start rewriting the file from the beginning without somehow making it clear where the 'cursor' is located in the file; you should also write code that is resilient to reading truncated log entries.
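For the fixed-size case, a slot index derived from the clock removes the need to store a cursor at all. A sketch under assumed parameters (64-byte records, one slot per minute, one day of history):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Fcntl qw(O_RDWR O_CREAT SEEK_SET);

    my $FILE     = 'proc_stats.ring';   # assumed name
    my $REC_SIZE = 64;                  # assumed fixed record size
    my $SLOTS    = 24 * 60;             # one slot per minute, one day of history

    # The slot comes from the clock, so the same minute of the day always maps
    # to the same position in the file and no pointer has to be stored anywhere.
    my $slot = int(time() / 60) % $SLOTS;

    # Pad (and truncate) the payload to exactly $REC_SIZE bytes, newline included.
    my $record = sprintf "%-*.*s\n", $REC_SIZE - 1, $REC_SIZE - 1,
        sprintf("%d cpu=... mem=...", time());

    sysopen my $fh, $FILE, O_RDWR | O_CREAT or die "open $FILE: $!";
    seek $fh, $slot * $REC_SIZE, SEEK_SET or die "seek: $!";
    print {$fh} $record;
    close $fh or die "close: $!";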
Can you re-use existing tools such as RRDtool?