Pool memory allocation in a priority queue

Published 2024-10-20 16:42:26

I am writing an event-based simulator, in which each event calls a processing function (node) that can generate new events, and so on.
A timestamp is associated with each event, and events need to be processed in order of increasing time (but they are not necessarily created in that order). To this end, I use a simple priority_queue<Event*>, where Event is a class containing a pointer to the processing node that must be called and the timestamp.
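For context, a minimal sketch of the setup described above (the `Node` type and the field names are assumptions, not the asker's actual code):

```cpp
#include <cassert>
#include <queue>
#include <vector>

struct Node;  // processing node; details don't matter for the sketch

struct Event {
    double timestamp;
    Node*  node;  // handler to invoke when the event fires
};

// Order the heap so the event with the smallest timestamp is on top.
struct EarliestFirst {
    bool operator()(const Event* a, const Event* b) const {
        return a->timestamp > b->timestamp;
    }
};

using EventQueue =
    std::priority_queue<Event*, std::vector<Event*>, EarliestFirst>;
```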

So, everything works fine, but millions of events are allocated and deallocated per second, and this is clearly what limits the speed of my simulator (roughly 30% of the execution time is spent on memory allocation and deallocation of Event objects).

I found this question:
Object pool vs. dynamic allocation, and it seems like I could benefit greatly from an object pool. Although I have seen that Boost offers some way to do this, I am not sure whether it is practical for implementing a pool behind a priority_queue. I am really lost when it comes to custom memory allocation.

So my question is: would it be practical / beneficial to use an object pool for my priority_queue, and if so, is there a simple way to do it, with maybe some code example (or at least a starting point), preferably without relying on Boost at first?

Some references explaining how pool allocation works would also be welcome!

Thanks.


Comments (3)

淡写薰衣草的香 2024-10-27 16:42:26

Yes, it's very practical to do so. Remember that the built-in dynamic allocator is built to be as fast as possible for every purpose: it must allocate and deallocate any size, any type, in any order. If you know in advance that this generality is not needed, you can greatly reduce the complexity of allocation and deallocation.

In this case, you know in advance that you're always allocating an Event. This makes an object pool an excellent allocator for your purposes. It's perfectly practical to add a custom STL-style allocator to a std::priority_queue: the queue is templated on an internal container, with std::vector as the default, and you can explicitly specify a custom allocator for that std::vector. The result should be pretty practical and easy to use; value-based custom memory allocators like object pools are (as far as I know) fairly easy to plug in.

橘亓 2024-10-27 16:42:26

A quick and dirty example of an object pool

EventPool.h

#include <stack>
class Event;

class EventPool
{
 public:
   explicit EventPool(const unsigned int initSize = 0);
   ~EventPool();
   Event* getEvent();
   void returnEvent(Event* e);
 private:
   std::stack<Event*> pool_;
};

EventPool.cxx

#include "Event.h"
#include "EventPool.h"

EventPool::EventPool(const unsigned int initSize)
{
   for(unsigned int i = 0; i < initSize; ++i)
   {
      pool_.push(new Event());
   }
}

EventPool::~EventPool()
{
   while(!pool_.empty())
   {
      delete pool_.top();
      pool_.pop();
   }
}

Event* EventPool::getEvent()
{
   if(pool_.empty())
   {
      return new Event();
   }
   Event* returnValue = pool_.top();
   pool_.pop();
   return returnValue;
}

void EventPool::returnEvent(Event* e)
{
   pool_.push(e);
}

By doing this, you let the pool control its own size. When another object grabs an event, it is up to the grabber to either return it to the pool or delete it.
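To make that hand-back contract concrete, a self-contained usage sketch (the pool condensed from the files above, Event reduced to a bare stub):

```cpp
#include <cassert>
#include <stack>

struct Event { double timestamp = 0.0; };  // stand-in for the real Event

// Same pool as above, condensed: grab from the stack if possible,
// fall back to new, and push returned events back for reuse.
class EventPool {
 public:
    explicit EventPool(unsigned int initSize = 0) {
        for (unsigned int i = 0; i < initSize; ++i) pool_.push(new Event());
    }
    ~EventPool() {
        while (!pool_.empty()) { delete pool_.top(); pool_.pop(); }
    }
    Event* getEvent() {
        if (pool_.empty()) return new Event();  // pool empty: allocate
        Event* e = pool_.top();
        pool_.pop();
        return e;                               // otherwise reuse
    }
    void returnEvent(Event* e) { pool_.push(e); }
 private:
    std::stack<Event*> pool_;
};
```

The point of the pool is visible in use: an event returned with `returnEvent` is handed out again by the next `getEvent`, so the simulator's steady-state loop performs no heap allocation at all.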

眼趣 2024-10-27 16:42:26

I guess you're talking about std::priority_queue. Yes, it's possible to provide your own allocation scheme.

The STL priority queue is implemented in terms of another container (by default std::vector, I think), which can be specified as a template argument. You can then parameterize that container with your allocator (in the usual STL way).

What remains is the implementation of the allocator itself. If you don't want to write it yourself, I'm pretty sure you can find quite a few out there (you mentioned Boost, for example). I once used a segregated pool implementation from here with some small modifications. It did quite well...
