log4net: saving the buffer from a BufferedAppender
We are using log4net with an AdoNetAppender to write critical logs into a database. Since the AdoNetAppender is a subclass of the BufferedAppender, it is possible to enable queuing of log events.
What I'd like to do is back up and restore the log buffer to a local file, so that no log entry gets lost if the database is down or the application crashes.
Does somebody know how to do this?
1 answer
I don't think you can save the buffer without writing some code yourself. What I would suggest instead is sending the logs to both an AdoNetAppender and a RollingFileAppender. The first ensures your regular logging to the database, while the second ensures that the latest logs are also written to disk.
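For reference, a minimal sketch of such a dual-appender configuration. The connection string, table name, and column names are placeholders, and the AdoNetAppender parameter list is trimmed to the basics; adapt them to your own schema:

    <log4net>
      <!-- Buffered database appender: the primary log sink (values are placeholders) -->
      <appender name="DatabaseAppender" type="log4net.Appender.AdoNetAppender">
        <bufferSize value="10" />
        <connectionType value="System.Data.SqlClient.SqlConnection, System.Data" />
        <connectionString value="Data Source=...;Initial Catalog=...;Integrated Security=True" />
        <commandText value="INSERT INTO Log ([Date],[Level],[Logger],[Message]) VALUES (@log_date, @log_level, @logger, @message)" />
        <parameter>
          <parameterName value="@log_date" />
          <dbType value="DateTime" />
          <layout type="log4net.Layout.RawTimeStampLayout" />
        </parameter>
        <parameter>
          <parameterName value="@log_level" />
          <dbType value="String" />
          <size value="50" />
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%level" />
          </layout>
        </parameter>
        <parameter>
          <parameterName value="@logger" />
          <dbType value="String" />
          <size value="255" />
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%logger" />
          </layout>
        </parameter>
        <parameter>
          <parameterName value="@message" />
          <dbType value="String" />
          <size value="4000" />
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%message" />
          </layout>
        </parameter>
      </appender>

      <!-- Rolling file appender: keeps the latest logs on local disk -->
      <appender name="LocalFileAppender" type="log4net.Appender.RollingFileAppender">
        <file value="logs\app.log" />
        <appendToFile value="true" />
        <rollingStyle value="Size" />
        <maxSizeRollBackups value="10" />
        <maximumFileSize value="5MB" />
        <staticLogFileName value="true" />
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
        </layout>
      </appender>

      <!-- Send every event to both appenders -->
      <root>
        <level value="INFO" />
        <appender-ref ref="DatabaseAppender" />
        <appender-ref ref="LocalFileAppender" />
      </root>
    </log4net>

With this in place the database remains the primary sink, and the rolling file gives you a local copy of the most recent events if the database connection drops.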
Update: in light of your later comments I can see how logging to two different sources (one database and one local store, either a file or local database) gets tough to consolidate.
IMO you should absolutely use log4net for what it is best at: a tried and true framework for collecting log data from the application and routing that data to receiving systems. Building a failover system on top of log4net, though, is not what it is designed for. For instance, there is no process model that can pick up the pieces after an application crash.
Instead, handle failover in the receiving system. Failover at the database level and the network level gets you a long way, but still does not guarantee 100% uptime. Logging to a local store and then having a process pick up the logs and ship them to the database minimizes the risk of log data being lost, and at the same time you avoid having to consolidate logs from two different stores. Even better, logging stays simple and fast and thus has a low impact on the application.
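As a rough illustration of that approach (not part of the original answer), here is a hypothetical shipper that could run as a scheduled task or service. It assumes the local file appender writes pipe-delimited lines, e.g. a conversionPattern of %date{yyyy-MM-dd HH:mm:ss.fff}|%level|%logger|%message%newline, and that rolled files are dropped into a pending folder; all paths, names, and the line format are assumptions:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    // Hypothetical log shipper: reads locally written log files, bulk-inserts
    // them into the central Log table, then moves each file to an archive folder.
    class LogShipper
    {
        // Placeholders -- adjust to your environment.
        const string ConnectionString = "Data Source=...;Initial Catalog=...;Integrated Security=True";
        const string PendingDirectory = @"C:\logs\pending";
        const string ArchiveDirectory = @"C:\logs\shipped";
        const char Delimiter = '|'; // must match the PatternLayout of the file appender

        static void Main()
        {
            foreach (string file in Directory.GetFiles(PendingDirectory, "*.log"))
            {
                DataTable table = new DataTable();
                table.Columns.Add("Date", typeof(DateTime));
                table.Columns.Add("Level", typeof(string));
                table.Columns.Add("Logger", typeof(string));
                table.Columns.Add("Message", typeof(string));

                foreach (string line in File.ReadLines(file))
                {
                    // Limit the split so pipes inside the message text are preserved.
                    string[] parts = line.Split(new[] { Delimiter }, 4);
                    if (parts.Length < 4) continue; // skip malformed lines
                    table.Rows.Add(DateTime.Parse(parts[0]), parts[1], parts[2], parts[3]);
                }

                using (SqlBulkCopy bulk = new SqlBulkCopy(ConnectionString))
                {
                    bulk.DestinationTableName = "Log";
                    bulk.ColumnMappings.Add("Date", "Date");
                    bulk.ColumnMappings.Add("Level", "Level");
                    bulk.ColumnMappings.Add("Logger", "Logger");
                    bulk.ColumnMappings.Add("Message", "Message");
                    bulk.WriteToServer(table);
                }

                File.Move(file, Path.Combine(ArchiveDirectory, Path.GetFileName(file)));
            }
        }
    }

The point is that the application only ever does cheap local writes; retries, backoff, and outage handling live entirely in this separate process.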
An alternative would be logging to a local database and having a database job pull the data into the master database. You could also use queuing. There is a sample MsmqAppender out there to get you started. If you're using MS SQL Server you could even use the Service Broker for its queuing abilities.