Asynchronous, thread-safe logging in C++ (without mutexes)

Posted on 2024-12-16 02:06:17

I'm looking for a way to do asynchronous, thread-safe logging in my C++ application.

I have already explored thread-safe logging solutions such as log4cpp, log4cxx, Boost:log and rlog, but it seems that all of them use a mutex. As far as I know, a mutex is a synchronous mechanism, which means that a thread trying to write its message is blocked while another thread is writing.

Do you know of a solution?

Comments (6)

相思碎 2024-12-23 02:06:17

I think your premise is wrong: using a mutex is not necessarily equivalent to a synchronous solution. Yes, a mutex is a synchronization primitive, but it can be used for many different things. We can use a mutex in, for example, a producer-consumer queue while the logging itself still happens asynchronously.

Honestly, I haven't looked into the implementation of these logging libraries, but it should be feasible to write an asynchronous appender (for a log4j-like library) in case one is not provided: the logger writes to a producer-consumer queue, and another worker thread is responsible for writing to a file (or even delegating to another appender). See the sketch below.
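
For illustration, here is a minimal sketch of such a producer-consumer logger. It is not taken from log4cpp, log4cxx or any other library; the AsyncLogger name and its interface are made up. Callers of Log() hold the mutex only long enough to push a message onto the queue, while the single worker thread performs the blocking file I/O.

#include <condition_variable>
#include <deque>
#include <fstream>
#include <mutex>
#include <string>
#include <thread>

class AsyncLogger
{
public:
    explicit AsyncLogger(const std::string &path)
        : out(path), done(false), worker(&AsyncLogger::Run, this) {}

    ~AsyncLogger()
    {
        {
            std::lock_guard<std::mutex> lock(mtx);
            done = true;
        }
        cv.notify_one();
        worker.join();
    }

    // Called by any number of producer threads; only enqueues the message.
    void Log(std::string message)
    {
        {
            std::lock_guard<std::mutex> lock(mtx);
            queue.push_back(std::move(message));
        }
        cv.notify_one();
    }

private:
    // Single worker thread: drains the queue and does the slow file I/O.
    void Run()
    {
        std::unique_lock<std::mutex> lock(mtx);
        while (!done || !queue.empty())
        {
            cv.wait(lock, [this] { return done || !queue.empty(); });
            while (!queue.empty())
            {
                std::string msg = std::move(queue.front());
                queue.pop_front();
                lock.unlock();          // write without holding the lock
                out << msg << '\n';
                lock.lock();
            }
        }
    }

    std::ofstream out;
    std::mutex mtx;
    std::condition_variable cv;
    std::deque<std::string> queue;
    bool done;
    std::thread worker;
};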


Edit:
I've just had a brief scan of log4cxx, and it does provide an AsyncAppender that does what I suggested: it buffers incoming logging events and delegates them to the attached appenders asynchronously.

寂寞笑我太脆弱 2024-12-23 02:06:17

I'd recommend avoiding the problem by using only one thread for logging. For passing the necessary data to the logger, you can use a lock-free FIFO queue (thread-safe as long as producer and consumer are strictly separated and only one thread has each role -- therefore you will need one queue per producer).

An example of a fast lock-free queue is included below; a short usage sketch follows the queue code.

queue.h:

#ifndef QUEUE_H
#define QUEUE_H

// Abstract FIFO interface.
template<typename T> class Queue
{
public:
    virtual void Enqueue(const T &element) = 0;
    virtual T Dequeue() = 0;
    virtual bool Empty() = 0;
    virtual ~Queue() {}
};

#endif // QUEUE_H

hybridqueue.h:

#ifndef HYBRIDQUEUE_H
#define HYBRIDQUEUE_H

#include "queue.h"


// Single-producer/single-consumer FIFO: a linked list of fixed-size blocks.
// Only the consumer advances 'start'; only the producer advances 'end'.
template <typename T, int size> class HybridQueue : public Queue<T>
{

public:
    virtual bool Empty();
    virtual T Dequeue();
    virtual void Enqueue(const T& element);
    HybridQueue();
    virtual ~HybridQueue();

private:
    struct ItemList
    {
        int start;                          // index of the next element to dequeue
        T list[size];
        int end;                            // index of the next free slot
        ItemList volatile * volatile next;  // following block, if any
    };

    ItemList volatile * volatile start;     // consumer side
    char filler[256];                       // padding to keep the two ends on separate cache lines
    ItemList volatile * volatile end;       // producer side
};

/**
 * Implementation
 * 
 */

#include <stdio.h>

template <typename T, int size> bool HybridQueue<T, size>::Empty()
{
    return (this->start == this->end) && (this->start->start == this->start->end);
}

template <typename T, int size> T HybridQueue<T, size>::Dequeue()
{
    if(this->Empty())
    {
        return T();   // empty queue: callers should check Empty() before calling Dequeue()
    }
    if(this->start->start >= size)
    {
        // The current block is exhausted; move on to the next one and free the old block.
        ItemList volatile * volatile old;
        old = this->start;
        this->start = this->start->next;
        delete old;
    }
    T tmp;
    tmp = this->start->list[this->start->start];
    this->start->start++;
    return tmp;
}

template <typename T, int size> void HybridQueue<T, size>::Enqueue(const T& element)
{
    if(this->end->end >= size) {
        // Current block is full: allocate and fill a new block, then make it
        // visible to the consumer by advancing 'end'.
        this->end->next = new ItemList();
        this->end->next->start = 0;
        this->end->next->list[0] = element;
        this->end->next->end = 1;
        this->end = this->end->next;
    }
    else
    {
        this->end->list[this->end->end] = element;
        this->end->end++;
    }
}

template <typename T, int size> HybridQueue<T, size>::HybridQueue()
{
    this->start = this->end = new ItemList();   // value-initialized, so 'next' starts out null
    this->start->start = this->start->end = 0;
}

template <typename T, int size> HybridQueue<T, size>::~HybridQueue()
{
    // Free any blocks still in the list.
    while(this->start != NULL)
    {
        ItemList volatile * volatile old = this->start;
        this->start = this->start->next;
        delete old;
    }
}

#endif // HYBRIDQUEUE_H
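
As a usage sketch (the LogWorker function, the file name and the two fixed queues are invented for illustration, and it assumes the queue is safe exactly as the answer states: one strictly separated producer and one consumer per queue), each producer thread owns its own HybridQueue, and the single logging thread polls the queues and is the only thread that writes to the file:

#include <atomic>
#include <chrono>
#include <fstream>
#include <string>
#include <thread>

#include "hybridqueue.h"

// One HybridQueue per producer thread; messages are heap-allocated strings.
HybridQueue<std::string*, 64> queueA;   // written only by thread A
HybridQueue<std::string*, 64> queueB;   // written only by thread B
std::atomic<bool> running(true);

// The single logging thread: the only thread that touches the log file.
void LogWorker()
{
    std::ofstream out("app.log");
    while (running || !queueA.Empty() || !queueB.Empty())
    {
        while (!queueA.Empty()) { std::string *m = queueA.Dequeue(); out << *m << '\n'; delete m; }
        while (!queueB.Empty()) { std::string *m = queueB.Dequeue(); out << *m << '\n'; delete m; }
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

int main()
{
    std::thread logger(LogWorker);
    std::thread a([] { for (int i = 0; i < 100; ++i) queueA.Enqueue(new std::string("message from thread A")); });
    std::thread b([] { for (int i = 0; i < 100; ++i) queueB.Enqueue(new std::string("message from thread B")); });
    a.join();
    b.join();
    running = false;
    logger.join();
}
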
泡沫很甜 2024-12-23 02:06:17

If I get your question right, you are concerned about doing an I/O operation (probably a write to a file) inside the logger's critical section.

Boost:log lets you define a custom writer object. You can define its operator() to use asynchronous I/O, or to pass the message on to your logging thread (which then does the I/O); see the sketch below.

http://www.torjo.com/log2/doc/html/workflow.html#workflow_2b
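
The exact writer interface depends on the library (see the linked documentation); the ForwardingWriter type below is only a made-up sketch of the general shape such a writer could take: its operator() receives the already-formatted message and merely hands it off, leaving the file I/O to a dedicated logging thread.

#include <functional>
#include <string>

struct ForwardingWriter
{
    // Sink supplied by the application, e.g. a function that enqueues the
    // message for a background logging thread instead of writing it here.
    std::function<void(const std::string &)> sink;

    void operator()(const std::string &msg) const
    {
        sink(msg);   // enqueue only; another thread performs the actual I/O
    }
};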

浅暮の光 2024-12-23 02:06:17

No library will do this as far as I know - it's too complex. You'll have to roll your own. Here's an idea I just had: create a per-thread log file, make sure that the first item in each entry is a timestamp, and then, after the run, merge the logs and sort them (by timestamp) to get a final log file.

You could use some thread-local storage (say, a FILE handle; AFAIK it won't be possible to store a stream object in thread-local storage), look this handle up on each log line, and write to that specific file. See the sketch below.
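
A rough sketch of the per-thread-file idea (the file-naming scheme and the Log helper are invented for illustration). Note that with C++11 thread_local it is in fact possible to keep a stream object in thread-local storage:

#include <chrono>
#include <fstream>
#include <sstream>
#include <string>
#include <thread>

// Each thread lazily opens its own log file, so no locking is needed:
// no file is ever shared between threads.  Every entry starts with a
// timestamp so the per-thread files can be merged and sorted afterwards.
void Log(const std::string &message)
{
    thread_local std::ofstream out = [] {
        std::ostringstream name;
        name << "log_" << std::this_thread::get_id() << ".txt";
        return std::ofstream(name.str());
    }();

    auto now = std::chrono::duration_cast<std::chrono::microseconds>(
        std::chrono::system_clock::now().time_since_epoch()).count();
    out << now << ' ' << message << '\n';
}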

Is all this complexity worth it compared with simply locking a mutex? I don't know the performance requirements of your application, but if it is that sensitive, why would you be logging (excessively) at all? Think of other ways to obtain the information you require without logging.

One other thing to consider is to hold the mutex for the least amount of time possible, i.e. construct your log entry first, and acquire the lock only just before writing to the file.

埖埖迣鎅 2024-12-23 02:06:17

In a Windows program, we use a user-defined Windows message. First, memory is allocated for the log entry on the heap. Then PostMessage is called, with the pointer as the LPARAM and the record size as the WPARAM. The receiver window extracts the record, displays it, and saves it in the log file. Then PostMessage returns, and the allocated memory is deallocated by the sender. This approach is thread-safe, and you don't have to use mutexes; concurrency is handled by the message queue mechanism of Windows. Not very elegant, but it works.
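
A rough sketch of the sender side of this idea (the WM_LOG message id, the PostLog helper and the window handle are placeholders). One caveat: PostMessage returns immediately, before the receiver has processed the message, so in this sketch ownership of the heap allocation is transferred to the receiver, which frees the record after writing it.

#include <windows.h>
#include <string>

const UINT WM_LOG = WM_APP + 1;   // user-defined message id (placeholder)

// Sender side: any thread allocates the record on the heap and posts it.
void PostLog(HWND logWindow, const std::string &text)
{
    std::string *record = new std::string(text);
    PostMessage(logWindow, WM_LOG, static_cast<WPARAM>(record->size()),
                reinterpret_cast<LPARAM>(record));
}

// Receiver side, inside the log window's WndProc (runs on the GUI thread):
//
// case WM_LOG:
// {
//     std::string *record = reinterpret_cast<std::string *>(lParam);
//     /* display the record and append it to the log file */
//     delete record;   // the receiver owns the allocation in this sketch
//     return 0;
// }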

生生不灭 2024-12-23 02:06:17

Lock-free algorithms are not necessarily the fastest ones. Define your boundaries: how many threads will be logging? How much will be written in a single log operation at most?

I/O-bound operations are much, much slower than the thread context switches caused by blocking and waking threads. Using a lock-free/spin-lock algorithm with 10 writing threads will put a heavy load on the CPU.

In short, just block the other threads while you are writing to a file.
