What are the downsides of using this asynchronous logging code?

Posted on 2024-09-09 02:06:33


Some code I just wrote follows.

It demonstrates applying a PostSharp aspect to a method in order to record the duration of the method invocation asynchronously, so that if the logging process is slow, the caller of the method decorated with the aspect does not see that performance penalty.

It seems to work: MyFirstMethod completes, the logging method is set off on a separate thread, and MySecondMethod runs on in parallel. The idea is that methods within a very highly-trafficked web application (i.e. a highly multi-threaded environment) would be decorated with similar instrumentation.

What are the pitfalls of doing so? (e.g. I am concerned about reaching a limit on the number of threads permitted at any given time).

using System;
using System.Threading.Tasks;
using NUnit.Framework;
using PostSharp.Aspects;

namespace Test
{
    [TestFixture]
    public class TestClass
    {        
        [Test]
        public void MyTest()
        {            
            MyFirstMethod();
            MySecondMethod();
        }

        [PerformanceInstrument]
        private void MyFirstMethod()
        {
            //do nothing
        }

        private void MySecondMethod()
        {
            for (int x = 0; x < 9999999; x++); // busy loop to simulate work
        }
    }

    [Serializable]
    public class PerformanceInstrument : MethodInterceptionAspect
    {                    
        public override void OnInvoke(MethodInterceptionArgs args)
        {            
            var startDtg = DateTime.Now;
            args.Proceed();
            var duration = DateTime.Now - startDtg;
            Task.Factory.StartNew(() => MyLogger.MyLoggingMethod(duration)); //invoke the logging method asynchronously
        }        
    }

    public static class MyLogger
    {
        public static void MyLoggingMethod(TimeSpan duration)
        {
            for (int x = 0; x < 9999999; x++); // busy loop to simulate a slow logging sink
            Console.WriteLine(duration);
        }
    }
}


2 Answers

锦上情书 2024-09-16 02:06:34


The only possible downside that I see here is the overhead of managing the Tasks, which is probably insignificant, but I have not delved deeply enough into the TPL to be certain.

An alternative approach that I have used in a large-scale web application is to have the logging write the log message records to an in-memory list, and then have a background thread that is responsible for writing the log messages out in the background. Currently the solution has the thread check the list every so often and flush it to disk (in our case, a database) if the list length has exceeded a certain threshold or the list has not been flushed for longer than a specific amount of time, whichever comes first.

This is something like the producer/consumer pattern, where your code produces log messages and the consumer is responsible for flushing those messages to the persistence medium.
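A minimal sketch of the batching approach described above, assuming a list guarded by a lock and a single background thread (the names BatchedLogWriter and FlushToStore, and the threshold values, are illustrative rather than taken from the answer):

using System;
using System.Collections.Generic;
using System.Threading;

public static class BatchedLogWriter
{
    private static readonly List<TimeSpan> _buffer = new List<TimeSpan>();
    private static readonly object _sync = new object();
    private const int FlushThreshold = 100;                                   // flush when this many entries are waiting...
    private static readonly TimeSpan FlushInterval = TimeSpan.FromSeconds(5); // ...or when this much time has passed

    static BatchedLogWriter()
    {
        // A single background thread owns all writes to the persistence medium.
        var worker = new Thread(FlushLoop) { IsBackground = true };
        worker.Start();
    }

    // Called by the instrumented code; holds the lock only long enough to append to the in-memory list.
    public static void Enqueue(TimeSpan duration)
    {
        lock (_sync) { _buffer.Add(duration); }
    }

    private static void FlushLoop()
    {
        var lastFlush = DateTime.UtcNow;
        while (true)
        {
            Thread.Sleep(500); // "check the list every so often"
            List<TimeSpan> batch = null;
            lock (_sync)
            {
                bool overdue = DateTime.UtcNow - lastFlush > FlushInterval;
                if (_buffer.Count >= FlushThreshold || (overdue && _buffer.Count > 0))
                {
                    batch = new List<TimeSpan>(_buffer);
                    _buffer.Clear();
                    lastFlush = DateTime.UtcNow;
                }
            }
            if (batch != null)
                FlushToStore(batch); // write outside the lock so producers are never blocked by I/O
        }
    }

    private static void FlushToStore(List<TimeSpan> batch)
    {
        foreach (var d in batch) Console.WriteLine(d); // stand-in for the real disk/database write
    }
}

With something like this in place, the aspect would call BatchedLogWriter.Enqueue(duration) instead of starting a new Task per invocation.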

愛放△進行李 2024-09-16 02:06:34


There may be unintended consequences of your approach, because the ASP.NET engine and the Task Parallel Library both schedule work on the .NET thread pool. Each web request is serviced by a thread from the thread pool. If you are scheduling Tasks to handle logging, then you are occupying thread-pool threads which can no longer be used to service web requests. This may reduce throughput.

The TPL team blogged about this here: Using Parallel Extensions for .NET 4 in ASP.NET apps.

The producer/consumer pattern would mean that your MethodInterceptionAspect would simply add an entry to a global queue (as suggested by Ben) and you would then have a single (long-running) Task which processes all the entries. So your interception method becomes:

static readonly BlockingCollection<TimeSpan> _queue = new BlockingCollection<TimeSpan>(); // wraps a ConcurrentQueue; needed for GetConsumingEnumerable below

public override void OnInvoke(MethodInterceptionArgs args)
{
    var startDtg = DateTime.Now;
    args.Proceed();
    var duration = DateTime.Now - startDtg;
    _queue.Add(duration); // adding to the collection is cheap, so no extra Task is needed here
}

Somewhere else you process the queue:

foreach (var d in _queue.GetConsumingEnumerable()) // blocks until an item is available
{
    Console.WriteLine(d);
}
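The single long-running Task mentioned above would be started once, for example at application start-up. A sketch (ProcessQueue is an illustrative name for a method containing the loop above, not something defined in the answer):

// TaskCreationOptions.LongRunning hints the scheduler to use a dedicated thread
// rather than tying up a regular thread-pool thread.
Task.Factory.StartNew(ProcessQueue, TaskCreationOptions.LongRunning);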

The following post shows a similar implementation where multiple tasks created by a Parallel.For loop add images to a BlockingCollection and a single task processes the images.

Task Parallel Library WaitAny design

How well this works depends partly on the length of your request processing, the number of log entries you want to record per request, the overall server load, and so on. One thing you have to be aware of is that, overall, you need to be able to remove entries from the queue faster than they are being added.
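One way to guard against the queue growing without limit if the consumer falls behind (an option assumed here, not something the answer prescribes) is to give the collection a bounded capacity, so that producers block briefly once the limit is reached:

static readonly BlockingCollection<TimeSpan> _queue =
    new BlockingCollection<TimeSpan>(boundedCapacity: 10000); // Add blocks while 10,000 entries are already pending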

Have you thought about an approach where you write your own performance counter and have the perf counter infrastructure handle the heavy lifting for you? This would save you from having to implement any of this recording infrastructure yourself.
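For reference, a rough sketch of what a custom counter might look like with System.Diagnostics (the "MyApp" category, the counter names, and the MethodDurationCounter wrapper are made up for illustration; creating a category requires administrative rights):

using System;
using System.Diagnostics;

public static class MethodDurationCounter
{
    private static readonly PerformanceCounter _duration;
    private static readonly PerformanceCounter _durationBase;

    static MethodDurationCounter()
    {
        // One-off category setup; an AverageTimer32 counter needs a matching AverageBase counter.
        if (!PerformanceCounterCategory.Exists("MyApp"))
        {
            var counters = new CounterCreationDataCollection
            {
                new CounterCreationData("Method duration", "Average method duration", PerformanceCounterType.AverageTimer32),
                new CounterCreationData("Method duration base", "Denominator for the average", PerformanceCounterType.AverageBase)
            };
            PerformanceCounterCategory.Create("MyApp", "MyApp instrumentation", PerformanceCounterCategoryType.SingleInstance, counters);
        }

        _duration = new PerformanceCounter("MyApp", "Method duration", false);      // false = writable
        _durationBase = new PerformanceCounter("MyApp", "Method duration base", false);
    }

    // Call with Stopwatch ticks for the measured invocation; PerfMon does the aggregation and display.
    public static void Record(long elapsedStopwatchTicks)
    {
        _duration.IncrementBy(elapsedStopwatchTicks);
        _durationBase.Increment();
    }
}

The aspect would then time the call with a Stopwatch and pass the elapsed ticks to Record, leaving the publishing and averaging to the performance counter infrastructure.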
