How would you implement a basic event loop?

Posted 2024-07-15 19:17:33 · 646 characters · 10 views · 0 comments


If you have worked with GUI toolkits, you know that there is an event loop (main loop) that runs after everything else is done, keeping the application alive and responsive to different events. For example, in Qt you would do this in main():

int main(int argc, char* argv[]) {
    QApplication app(argc, argv);
    // init code
    return app.exec();
}

In this case, app.exec() is the application's main loop.

The obvious way to implement this kind of loop would be:

void exec() {
    while (1) {
        process_events(); // create a thread for each new event (possibly?)
    }
}

But this pegs the CPU at 100% and is practically useless. Now, how can I implement an event loop that stays responsive without eating the CPU altogether?

Answers in Python and/or C++ are appreciated. Thanks.

Footnote: For the sake of learning, I will implement my own signals/slots and use them to generate custom events (e.g. go_forward_event(steps)). But if you know how I can consume system events manually, I would like to know about that too.


梦里°也失望 2024-07-22 19:17:33


I used to wonder a lot about the same!

A GUI main loop looks like this, in pseudo-code:

void App::exec() {
    for(;;) {
        vector<Waitable> waitables;
        waitables.push_back(m_networkSocket);
        waitables.push_back(m_xConnection);
        waitables.push_back(m_globalTimer);
        Waitable* whatHappened = System::waitOnAll(waitables);
        switch(whatHappened) {
            case &m_networkSocket: readAndDispatchNetworkEvent(); break;
            case &m_xConnection: readAndDispatchGuiEvent(); break;
            case &m_globalTimer: readAndDispatchTimerEvent(); break;
        }
    }
}

What is a "Waitable"? Well, it's system-dependent. On UNIX it's called a "file descriptor", and "waitOnAll" is the ::select system call. The so-called vector<Waitable> is an ::fd_set on UNIX, and "whatHappened" is actually queried via FD_ISSET. The actual waitable handles are acquired in various ways; for example, m_xConnection can be taken from ::XConnectionNumber(). X11 also provides a high-level, portable API for this -- ::XNextEvent() -- but if you were to use that, you wouldn't be able to wait on several event sources simultaneously.

How does the blocking work? "waitOnAll" is a syscall that tells the OS to put your process on a "sleep list". This means you are not given any CPU time until an event occurs on one of the waitables. This, then, means your process is idle, consuming 0% CPU. When an event occurs, your process will briefly react to it and then return to idle state. GUI apps spend almost all their time idling.

What happens to all the CPU cycles while you're sleeping? Depends. Sometimes another process will have a use for them. If not, your OS will busy-loop the CPU, or put it into temporary low-power mode, etc.

Please ask for further details!
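A minimal Python sketch of the same "wait on all, then dispatch" pattern, using the standard selectors module. The socket pair here is just a stand-in for real event sources (an X connection, a network socket, a timer fd):

```python
import selectors
import socket

# A connected socket pair stands in for a real event source
reader, writer = socket.socketpair()

sel = selectors.DefaultSelector()
# data= carries the handler to dispatch, like the switch on whatHappened
sel.register(reader, selectors.EVENT_READ, data=lambda s: s.recv(1024))

writer.send(b"event!")  # simulate an event arriving

# One iteration of the main loop: blocks (0% CPU) until a source is ready
for key, _mask in sel.select():
    handler = key.data
    payload = handler(key.fileobj)
    print(payload)

sel.unregister(reader)
reader.close()
writer.close()
```

A real main loop would wrap the `sel.select()` call in `for`/`while` forever, registering one handler per event source, exactly as the pseudo-code above dispatches on whatHappened.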

风月客 2024-07-22 19:17:33

Python:

您可以查看 Twisted Reactor 的实现,其中可能是 python 中事件循环的最佳实现。 Twisted 中的反应器是接口的实现,您可以指定要运行的类型反应器:select、epoll、kqueue(全部基于使用这些系统调用的 ac api),还有基于 QT 和 GTK 工具包的反应器。

一个简单的实现是使用 select:

#echo server that accepts multiple client connections without forking threads

import select
import socket
import sys

host = ''
port = 50000
backlog = 5
size = 1024
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((host,port))
server.listen(backlog)
input = [server,sys.stdin]
running = 1

#the eventloop running
while running:
    inputready,outputready,exceptready = select.select(input,[],[])

    for s in inputready:

        if s == server:
            # handle the server socket
            client, address = server.accept()
            input.append(client)

        elif s == sys.stdin:
            # handle standard input
            junk = sys.stdin.readline()
            running = 0

        else:
            # handle all other sockets
            data = s.recv(size)
            if data:
                s.send(data)
            else:
                s.close()
                input.remove(s)
server.close() 

Python:

You can look at the implementation of the Twisted reactor, which is probably the best event-loop implementation in Python. Reactors in Twisted implement an interface, and you can specify which type of reactor to run: select, epoll, kqueue (all based on a C API using those system calls); there are also reactors based on the QT and GTK toolkits.

A simple implementation would be to use select:

# Echo server that accepts multiple client connections without forking or threads

import select
import socket
import sys

host = ''
port = 50000
backlog = 5
size = 1024
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((host, port))
server.listen(backlog)
inputs = [server, sys.stdin]  # renamed from `input` to avoid shadowing the builtin
running = True

# The event loop
while running:
    inputready, outputready, exceptready = select.select(inputs, [], [])

    for s in inputready:

        if s == server:
            # handle the server socket
            client, address = server.accept()
            inputs.append(client)

        elif s == sys.stdin:
            # handle standard input: any line shuts the server down
            junk = sys.stdin.readline()
            running = False

        else:
            # handle all other sockets
            data = s.recv(size)
            if data:
                s.send(data)
            else:
                s.close()
                inputs.remove(s)
server.close()
半衾梦 2024-07-22 19:17:33


Generally I would do this with some sort of counting semaphore:

  1. Semaphore starts at zero.
  2. Event loop waits on semaphore.
  3. Event(s) come in, semaphore is incremented.
  4. Event handler unblocks and decrements the semaphore and processes the event.
  5. When all events are processed, semaphore is zero and event loop blocks again.

If you don't want to get that complicated, you could just add a sleep() call with a trivially small sleep time in your while loop. That will cause your message-processing thread to yield its CPU time to other threads. The CPU won't be pegged at 100% any more, but it's still pretty wasteful.
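The five steps above can be sketched in Python with threading.Semaphore (a minimal illustration; real code would also need a shutdown mechanism):

```python
import threading
from collections import deque

events = deque()               # shared event queue
sem = threading.Semaphore(0)   # 1. semaphore starts at zero
results = []

def post_event(ev):
    events.append(ev)
    sem.release()              # 3. event comes in, semaphore is incremented

def event_loop(n):
    for _ in range(n):
        sem.acquire()          # 2./5. blocks (no CPU) while the count is zero
        ev = events.popleft()  # 4. handler unblocked, decrement done, process event
        results.append(ev.upper())

t = threading.Thread(target=event_loop, args=(2,))
t.start()
post_event("click")
post_event("keypress")
t.join()
print(results)  # ['CLICK', 'KEYPRESS']
```

`sem.acquire()` is what keeps the loop at 0% CPU between events, just like the waitOnAll/select approach, only here the event sources are other threads rather than file descriptors.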

沫雨熙 2024-07-22 19:17:33


I would use a simple, lightweight messaging library called ZeroMQ (http://www.zeromq.org/). It is an open-source library (LGPL) and a very small one; on my server, the whole project compiles in about 60 seconds.

ZeroMQ will hugely simplify your event-driven code, AND it is also THE most efficient solution in terms of performance. Communicating between threads using ZeroMQ is much faster than using semaphores or local UNIX sockets. ZeroMQ is also a 100% portable solution, whereas all the other solutions would tie your code down to a specific operating system.

玩世 2024-07-22 19:17:33


Here is a C++ event loop. When the EventLoop object is created, it starts a thread that continually runs any task given to it. If there are no tasks available, that runner thread goes to sleep until a task is added.

First we need a thread-safe queue that allows multiple producers and at least a single consumer (the EventLoop thread). The EventLoop object controls the consumers and producers. With a small change, multiple consumers (runner threads) could be added instead of only one.

#include <stdio.h>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <iostream>
#include <set>
#include <functional>

#if defined( WIN32 )
    #include <windows.h>
#endif

class EventLoopNoElements : public std::runtime_error
{
public:
    EventLoopNoElements(const char* error)
        : std::runtime_error(error)
    {
    }
};

template <typename Type>
struct EventLoopCompare {
    typedef std::tuple<std::chrono::time_point<std::chrono::system_clock>, Type> TimePoint;

    bool operator()(const typename EventLoopCompare<Type>::TimePoint left, const typename EventLoopCompare<Type>::TimePoint right) {
        return std::get<0>(left) < std::get<0>(right);
    }
};

/**
 * You can enqueue anything with this event loop. Just use lambda functions, futures and promises!
 * With a lambda: `event.enqueue( 1000, [myvar, myfoo](){ myvar.something(myfoo); } )`
 * With futures we can get values out of the event loop:
 * ```
 * std::promise<int> accumulate_promise;
 * event.enqueue( 2000, [&accumulate_promise](){ accumulate_promise.set_value(10); } );
 * std::future<int> accumulate_future = accumulate_promise.get_future();
 * accumulate_future.wait(); // It is not necessary to call wait, except for syncing the output.
 * std::cout << "result=" << std::flush << accumulate_future.get() << std::endl;
 * ```
 * It is just not a good idea to enqueue something that hangs the whole event loop queue.
 */
template <class Type>
struct EventLoop {
    typedef std::multiset<
        typename EventLoopCompare<Type>::TimePoint,
        EventLoopCompare<Type>
    > EventLoopQueue;

    bool _shutdown;
    bool _free_shutdown;

    std::mutex _mutex;
    std::condition_variable _condition_variable;
    EventLoopQueue _queue;
    std::thread _runner;

    // free_shutdown - if true, run all events on the queue before exiting
    EventLoop(bool free_shutdown)
        : _shutdown(false),
        _free_shutdown(free_shutdown),
        _runner( &EventLoop<Type>::_event_loop, this )
    {
    }

    virtual ~EventLoop() {
        std::unique_lock<std::mutex> dequeuelock(_mutex);
        _shutdown = true;
        _condition_variable.notify_all();
        dequeuelock.unlock();

        if (_runner.joinable()) {
            _runner.join();
        }
    }

    // Mutex and condition variables are not movable and there is no need for smart pointers yet
    EventLoop(const EventLoop&) = delete;
    EventLoop& operator =(const EventLoop&) = delete;
    EventLoop(const EventLoop&&) = delete;
    EventLoop& operator =(const EventLoop&&) = delete;

    // To allow multiple threads to consume data, just add a mutex here and create multiple threads on the constructor
    void _event_loop() {
        while ( true ) {
            try {
                Type call = dequeue();
                call();
            }
            catch (EventLoopNoElements&) {
                return;
            }
            catch (std::exception& error) {
                std::cerr << "Unexpected exception on EventLoop dequeue running: '" << error.what() << "'" << std::endl;
            }
            catch (...) {
                std::cerr << "Unexpected exception on EventLoop dequeue running." << std::endl;
            }
        }
        std::cerr << "The main EventLoop dequeue stopped running unexpectedly!" << std::endl;
    }

    // Add an element to the queue
    void enqueue(int timeout, Type element) {
        std::chrono::time_point<std::chrono::system_clock> timenow = std::chrono::system_clock::now();
        std::chrono::time_point<std::chrono::system_clock> newtime = timenow + std::chrono::milliseconds(timeout);

        std::unique_lock<std::mutex> dequeuelock(_mutex);
        _queue.insert(std::make_tuple(newtime, element));
        _condition_variable.notify_one();
    }

    // Blocks until the first element's deadline is reached, or throws
    // EventLoopNoElements when the loop is shutting down and nothing is left to run
    Type dequeue() {
        typename EventLoopQueue::iterator queuebegin;
        typename EventLoopQueue::iterator queueend;
        std::chrono::time_point<std::chrono::system_clock> sleeptime;

        // _mutex prevents multiple consumers from getting the same item or from missing the wake up
        std::unique_lock<std::mutex> dequeuelock(_mutex);
        do {
            queuebegin = _queue.begin();
            queueend = _queue.end();

            if ( queuebegin == queueend ) {
                if ( _shutdown ) {
                    throw EventLoopNoElements( "There are no more elements on the queue because it already shutdown." );
                }
                _condition_variable.wait( dequeuelock );
            }
            else {
                if ( _shutdown ) {
                    if (_free_shutdown) {
                        break;
                    }
                    else {
                        throw EventLoopNoElements( "The queue is shutting down." );
                    }
                }
                std::chrono::time_point<std::chrono::system_clock> timenow = std::chrono::system_clock::now();
                sleeptime = std::get<0>( *queuebegin );
                if ( sleeptime <= timenow ) {
                    break;
                }
                _condition_variable.wait_until( dequeuelock, sleeptime );
            }
        } while ( true );

        Type firstelement = std::get<1>( *queuebegin );
        _queue.erase( queuebegin );
        dequeuelock.unlock();
        return firstelement;
    }
};

Utility to print the current timestamp:

std::string getTime() {
    char buffer[20];
#if defined( WIN32 )
    SYSTEMTIME wlocaltime;
    GetLocalTime(&wlocaltime);
    ::snprintf(buffer, sizeof buffer, "%02d:%02d:%02d.%03d ", wlocaltime.wHour, wlocaltime.wMinute, wlocaltime.wSecond, wlocaltime.wMilliseconds);
#else
    std::chrono::time_point< std::chrono::system_clock > now = std::chrono::system_clock::now();
    auto duration = now.time_since_epoch();
    auto hours = std::chrono::duration_cast< std::chrono::hours >( duration );
    duration -= hours;
    auto minutes = std::chrono::duration_cast< std::chrono::minutes >( duration );
    duration -= minutes;
    auto seconds = std::chrono::duration_cast< std::chrono::seconds >( duration );
    duration -= seconds;
    auto milliseconds = std::chrono::duration_cast< std::chrono::milliseconds >( duration );
    duration -= milliseconds;
    time_t theTime = time( NULL );
    struct tm* aTime = localtime( &theTime );
    ::snprintf(buffer, sizeof buffer, "%02d:%02d:%02d.%03ld ", aTime->tm_hour, aTime->tm_min, aTime->tm_sec, milliseconds.count());
#endif
    return buffer;
}

Example program using these:

// g++ -o test -Wall -Wextra -ggdb -g3 -pthread test.cpp && gdb --args ./test
// valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes --verbose ./test
// procdump -accepteula -ma -e -f "" -x c:\ myexe.exe
int main(int argc, char* argv[]) {
    std::cerr << getTime().c_str() << "Creating EventLoop" << std::endl;
    EventLoop<std::function<void()>>* eventloop = new EventLoop<std::function<void()>>(true);

    std::cerr << getTime().c_str() << "Adding event element" << std::endl;
    eventloop->enqueue( 3000, []{ std::cerr << getTime().c_str() << "Running task 3" << std::endl; } );
    eventloop->enqueue( 1000, []{ std::cerr << getTime().c_str() << "Running task 1" << std::endl; } );
    eventloop->enqueue( 2000, []{ std::cerr << getTime().c_str() << "Running task 2" << std::endl; } );

    std::this_thread::sleep_for( std::chrono::milliseconds(5000) );
    delete eventloop;
    std::cerr << getTime().c_str() << "Exiting after 10 seconds..." << std::endl;
    return 0;
}

Output test example:

02:08:28.960 Creating EventLoop
02:08:28.960 Adding event element
02:08:29.960 Running task 1
02:08:30.961 Running task 2
02:08:31.961 Running task 3
02:08:33.961 Exiting after 10 seconds...

Update

In the end, the event loop presented above is essentially a timer manager. A better interface for a timer manager would not force the user onto a dedicated thread. Here is an example:

class TimerManager    
{
public:
    std::chrono::steady_clock clock_type;
    // setup given function to be executed at given timeout
    // @return unique identifier
    uint64_t start( std::chrono::milliseconds timeout, const std::function< void( void ) >& func );
    // cancel given unique identifier
    void cancel( uint64_t id );
    // handle all expired entries
    // @return next expiration or zero when queue is empty
    std::chrono::milliseconds run( );
};
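A minimal Python sketch of such a thread-free timer manager, using a heap keyed by deadline. The names follow the interface above; `run()` here returns the number of milliseconds until the next expiration, or 0 when the queue is empty (cancellation is done lazily, by dropping the callback and skipping the stale heap entry later):

```python
import heapq
import itertools
import time

class TimerManager:
    def __init__(self):
        self._heap = []               # (deadline, id) pairs, soonest first
        self._callbacks = {}          # id -> callback; absent when cancelled
        self._ids = itertools.count(1)

    def start(self, timeout_ms, func):
        """Schedule func to run timeout_ms from now; return a unique identifier."""
        tid = next(self._ids)
        self._callbacks[tid] = func
        heapq.heappush(self._heap, (time.monotonic() + timeout_ms / 1000.0, tid))
        return tid

    def cancel(self, tid):
        """Cancel the given identifier; its heap entry is skipped when popped."""
        self._callbacks.pop(tid, None)

    def run(self):
        """Run all expired entries; return ms until the next one, or 0 if empty."""
        now = time.monotonic()
        while self._heap and self._heap[0][0] <= now:
            _, tid = heapq.heappop(self._heap)
            func = self._callbacks.pop(tid, None)
            if func is not None:      # skip cancelled timers
                func()
        if not self._heap:
            return 0
        # at least 1 ms so a pending timer never reports "empty"
        return max(1, int((self._heap[0][0] - time.monotonic()) * 1000))

# The caller owns the loop: run expired timers, then sleep (or select) until the next one
tm = TimerManager()
tm.start(10, lambda: print("timer fired"))
while tm.run() > 0:
    time.sleep(0.005)
```

Instead of sleeping, the returned interval can be passed as the timeout of a select()/poll() call, which is exactly how GUI toolkits combine timers with file-descriptor events in one loop.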
薄情伤 2024-07-22 19:17:33


This answer is for unix-like systems such as Linux or Mac OS X. I do not know how this is done in Windows.

Use select() or pselect(). Linux also has poll().

Check the man pages for in-depth details. These syscalls take a list of file descriptors, a timeout and/or a signal mask, and let the program wait until an event occurs. If one of the file descriptors in the list becomes ready to read or write (depending on the settings, see the manpages), the timeout expires, or a signal arrives, the syscall returns. The program can then read/write the file descriptors, process the signals or do other work. After that it calls (p)select/poll again and waits for the next event.

The sockets should be opened as non-blocking, so that the read/write functions return when there is no data (or the buffer is full). With the common display server X11, the GUI is handled via a socket and has a file descriptor, so it can be handled the same way.

忆沫 2024-07-22 19:17:33


Before creating a basic event-loop application in Python, let's understand:

What is an event loop?

An event loop is the central component of any asynchronous I/O framework; it allows you to perform I/O operations concurrently without blocking the execution of your program. An event loop runs in a single thread and is responsible for receiving and dispatching I/O events (like reading/writing a file, or a keyboard interrupt) as they occur.

import asyncio

async def coroutine():
    print('Start')
    await asyncio.sleep(1)
    print('End')

# asyncio.run() creates the event loop, runs the coroutine to completion and closes the loop
# (preferred over the older get_event_loop()/run_until_complete() pattern)
asyncio.run(coroutine())
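Running several coroutines on the same loop makes the non-blocking interleaving visible; a small sketch using asyncio.gather:

```python
import asyncio

order = []

async def worker(name, delay):
    order.append(f"{name} start")
    await asyncio.sleep(delay)   # yields control back to the event loop
    order.append(f"{name} end")

async def main():
    # Both workers run concurrently on one thread: total time is ~0.02s, not 0.03s
    await asyncio.gather(worker("a", 0.02), worker("b", 0.01))

asyncio.run(main())
print(order)  # ['a start', 'b start', 'b end', 'a end']
```

Each `await` is the point where a coroutine hands control back to the loop, which then dispatches whatever event (here, a timer) is ready next.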