Event logging with Azure
I would like to be able to log events with Azure.
Currently, I am using EventLog and .WriteEntry to write entries to a log on my local machine. However, when I upload this to Azure, I get a Request Error.
I have seen guides that talk about using RoleManager in Microsoft.ServiceHosting.ServiceRuntime, but Microsoft.ServiceHosting.ServiceRuntime is not an available Reference to add (it says "Filtered to: .NET Framework 4", and Microsoft.ServiceHosting.ServiceRuntime is not in the list).
Is there a way to get logging working with .NET Framework 4 references?
3 Answers
OK, first off: I always recommend that people try to abstract away things like writing directly to the event log. It's a system dependency that is better expressed through a loosely coupled provider. That way, a particular piece of code you've created can be used either on-premises or in Windows Azure just by changing the provider.
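A minimal sketch of that kind of provider abstraction might look like the following. The interface and type names (`ILogSink`, `LogSeverity`, `InMemorySink`) are illustrative assumptions, not an established API; the point is only that application code depends on the interface, never on `EventLog.WriteEntry` directly.

```csharp
using System;
using System.Collections.Generic;

// Severity levels for the sketch; each provider maps these onto its own
// notion of severity (e.g. EventLogEntryType, or a table-row property).
public enum LogSeverity { Info, Warning, Error }

// Hypothetical provider interface: application code logs against this,
// never against EventLog.WriteEntry directly.
public interface ILogSink
{
    void Write(LogSeverity severity, string message);
}

// On-premises you would register a sink whose Write calls
// EventLog.WriteEntry; in Windows Azure you swap in a sink that writes
// to table storage or a queue. This in-memory sink stands in for either.
public class InMemorySink : ILogSink
{
    public readonly List<string> Entries = new List<string>();

    public void Write(LogSeverity severity, string message)
    {
        Entries.Add(severity + ": " + message);
    }
}
```

The concrete sink would typically be chosen from configuration, so the same code runs unchanged locally and in the cloud.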
Secondly, there are security constraints around creating your own event sources. So I'm fairly certain that if this is what you are attempting to do, the operation will throw an exception.
And finally, the role manager issue isn't a bug. That class was deprecated when Windows Azure went commercial over a year ago. I wrote a short post that talked about it: http://brentdacodemonkey.wordpress.com/2010/03/05/azure-service-configuration-updated-or-%e2%80%9cwhere-did-rolemanager-go%e2%80%9d/
In your situation, I'd look into creating a simple adapter that writes directly to Azure Table Storage. Then you can let whatever event-monitoring process you need periodically check that table. Alternatively, use Azure Storage Queues so you don't have to continually scan the table for new items. :) Just peek at the queue and pull an item when it's found there.
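The peek-and-pull consumer loop can be sketched without any Azure dependency; in a real worker role the `IEventQueue` methods would map onto the storage SDK's `CloudQueue.AddMessage` / `GetMessage` / `DeleteMessage` calls. The interface and `EventMonitor` name here are illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;

// Minimal queue abstraction; in Azure this would wrap a CloudQueue from
// the storage SDK (AddMessage / GetMessage / DeleteMessage).
public interface IEventQueue
{
    void Add(string message);
    string TryGet(); // returns null when the queue is empty
}

public class InMemoryEventQueue : IEventQueue
{
    private readonly Queue<string> _items = new Queue<string>();
    public void Add(string message) { _items.Enqueue(message); }
    public string TryGet() { return _items.Count > 0 ? _items.Dequeue() : null; }
}

public static class EventMonitor
{
    // Drain whatever events are currently queued. A real monitor would
    // run this on a timer, instead of repeatedly scanning a table for
    // rows it hasn't seen yet.
    public static List<string> Drain(IEventQueue queue)
    {
        var drained = new List<string>();
        string item;
        while ((item = queue.TryGet()) != null)
        {
            drained.Add(item);
        }
        return drained;
    }
}
```

The advantage over table scanning is that an empty peek is cheap and you never need to track which rows you have already processed.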
It looks like this is a known bug:
http://swapsharing.wordpress.com/2009/10/11/microsoft-servicehosting-serviceruntime-is-not-found-reference-problem/
http://social.msdn.microsoft.com/Forums/en/windowssdk/thread/a9325a96-dd16-489b-9c8b-b74809ca75d0
Don’t use Windows Event Logs in Azure. There’s no point. Develop your own logging solution. You may want to read Windows Event Log errors and warnings in order to forward them to your logs; this could save some time on remote-desktop troubleshooting.
Also, don’t use Azure’s built-in diagnostics. Most people I have worked with have found it inadequate and burdensome to set up and configure. In addition, there is a one-minute delay before the logs are written, due to batching on the machine.
Roll your own logging. What I have found easiest to do, as long as you don’t have to worry about storage-account scale issues, is to write to a table. You will want to roll the log on a daily/hourly basis by table name and/or partition key, depending upon how many logs you write (we write a LOT). You will probably also want to make your logging asynchronous with a producer/consumer pattern so you don’t slow down your processes. Critical logs (errors/warnings) you should write synchronously, or report over another channel (we do not use logs to report errors, but treat error reporting as a first-class citizen).
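The producer/consumer part of this can be built on .NET 4's `BlockingCollection<T>`. The sketch below is a simplified assumption of how such a logger might look: producers enqueue cheaply and return immediately, while one background task drains the queue and performs the slow storage write (represented here by a caller-supplied delegate).

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Asynchronous logger: producers enqueue cheaply, a single background
// consumer drains the queue and performs the slow write (e.g. to a
// table-storage adapter passed in as writeToStore).
public class AsyncLogger : IDisposable
{
    private readonly BlockingCollection<string> _queue =
        new BlockingCollection<string>();
    private readonly Task _consumer;

    public AsyncLogger(Action<string> writeToStore)
    {
        _consumer = Task.Factory.StartNew(() =>
        {
            // GetConsumingEnumerable blocks until items arrive and
            // finishes once CompleteAdding has been called and the
            // queue is empty.
            foreach (var entry in _queue.GetConsumingEnumerable())
                writeToStore(entry);
        });
    }

    // Producers call this; it never blocks on storage.
    public void Log(string message)
    {
        _queue.Add(message);
    }

    public void Dispose()
    {
        // Flush remaining entries before shutting down.
        _queue.CompleteAdding();
        _consumer.Wait();
    }
}
```

Critical errors/warnings would bypass this path and be written synchronously, as noted above, so a crash cannot lose them from the in-memory queue.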
If you start hitting scale issues of tables, you can do some in-process batching and append to a page blob. This is a bit more work, but you will be able to scale much better when you are logging a LOT.