Boost::mutex and malloc

Posted 2024-09-05 14:36:27

I'm trying to use a faster memory allocator in C++. I can't use Hoard due to licensing / cost. I was using NEDMalloc in a single threaded setting and got excellent performance, but I'm wondering if I should switch to something else -- as I understand things, NEDMalloc is just a replacement for C-based malloc() & free(), not the C++-based new & delete operators (which I use extensively).

The problem is that I now need to be thread-safe, so I'm trying to malloc an object which is reference counted (to prevent excess copying), but which also contains a pointer to a mutex. That way, when you're about to delete the last copy, you first lock the mutex through that pointer, then free the object, and lastly unlock and free the mutex.

However, using malloc to create a boost::mutex appears impossible because I can't initialize the private object as calling the constructor directly ist verboten.

So I'm left with this odd situation, where I'm using new to allocate the lock and nedmalloc to allocate everything else. But when I allocate a large amount of memory, I run into allocation errors (which disappear when I switch to malloc instead of nedmalloc -- but then the performance is terrible). My guess is that this is due to memory fragmentation and an inability of nedmalloc and new to play nicely side by side.

There has to be a better solution. What would you suggest?
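
To make the intended design concrete, here is a minimal sketch of the arrangement described above (type and function names are purely illustrative):

#include <boost/thread/mutex.hpp>

struct Shared {
    int           refs;   // reference count, guarded by *lock
    boost::mutex* lock;   // allocated separately so it can outlive the payload
    // ... payload ...
};

void release(Shared* s) {
    s->lock->lock();
    bool last = (--s->refs == 0);
    boost::mutex* m = s->lock;   // keep the mutex pointer past the payload's lifetime
    if (last)
        delete s;                // free the object...
    m->unlock();
    if (last)
        delete m;                // ...then unlock and free the mutex
}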

Comments (4)

甜妞爱困 2024-09-12 14:36:28

Google's malloc replacement (tcmalloc) is quite fast, thread-safe by default, and easy to use. Simply link it into your application and it will replace the behavior of malloc/free and new/delete. This makes it particularly easy to re-profile your app to verify that the new allocator is actually speeding things up.

近箐 2024-09-12 14:36:28

You can overload the global operator new and operator delete to call the replacement malloc and free that you're using. This should make things play together more nicely, though I'd be surprised if this wasn't happening already.
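
For example, a sketch of such global overloads is below; it assumes nedmalloc exposes C-style nedmalloc()/nedfree() (qualify or rename the calls to match how your build of the library actually exposes them):

#include <cstddef>
#include <new>
#include "nedmalloc.h"   // assumed to declare nedmalloc()/nedfree(); adjust namespace/names to your build

void* operator new(std::size_t size) {
    // operator new must return a distinct pointer even for zero-byte requests
    void* p = nedmalloc(size ? size : 1);
    if (!p) throw std::bad_alloc();
    return p;
}

void operator delete(void* p) throw() {   // pre-C++11 spelling; use noexcept on newer compilers
    if (p) nedfree(p);
}

void* operator new[](std::size_t size) { return operator new(size); }
void operator delete[](void* p) throw() { operator delete(p); }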

As for creating the mutex, use placement new -- this is how you call a constructor manually. A static array of char will serve as the buffer. For example, as globals:

static char buf[sizeof(Mutex)];
static Mutex *m=0;

Then to initialize the m pointer:

m=new(buf) Mutex;

(You can also align the buffer if you need to, rename the variables, and so on.)
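
The same trick covers the boost::mutex case from the question: take raw storage from the replacement allocator and construct the mutex in place with placement new. A rough sketch, assuming nedmalloc()/nedfree() are available (the helper names here are just illustrative):

#include <new>
#include <boost/thread/mutex.hpp>
#include "nedmalloc.h"   // assumed to declare nedmalloc()/nedfree(); adjust to your build

boost::mutex* create_mutex() {
    void* raw = nedmalloc(sizeof(boost::mutex));   // malloc-style storage, suitably aligned like malloc's
    if (!raw) throw std::bad_alloc();
    return new (raw) boost::mutex;                 // placement new: construct without operator new
}

void destroy_mutex(boost::mutex* m) {
    m->~mutex();    // run the destructor by hand, mirroring the placement new
    nedfree(m);     // then release the raw storage
}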

One thing that might be worth noting: if the Mutex constructor does more memory allocation itself, then this can be a problem. It's unlikely, but possible. (For this likely-to-be-rare case, there's usually no problem with an ad-hoc implementation of a cross-platform mutex wrapper that doesn't do any allocation -- or, though it will end up a mess eventually, just use #ifdef and the platform types directly. In either case, it's not much code, and anybody experienced with the system(s) in question can create it, bug-free, in very little time.)
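
For illustration, such a non-allocating wrapper over the platform primitives might look roughly like this (a sketch only; error handling omitted, and the class name is made up):

#ifdef _WIN32
#include <windows.h>
class RawMutex {
    CRITICAL_SECTION cs_;
public:
    RawMutex()  { InitializeCriticalSection(&cs_); }
    ~RawMutex() { DeleteCriticalSection(&cs_); }
    void lock()   { EnterCriticalSection(&cs_); }
    void unlock() { LeaveCriticalSection(&cs_); }
};
#else
#include <pthread.h>
class RawMutex {
    pthread_mutex_t m_;
public:
    RawMutex()  { pthread_mutex_init(&m_, 0); }
    ~RawMutex() { pthread_mutex_destroy(&m_); }
    void lock()   { pthread_mutex_lock(&m_); }
    void unlock() { pthread_mutex_unlock(&m_); }
};
#endif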

Correct cleanup of objects created this way (with placement new) can be difficult, so I recommend not bothering (no, seriously). It's perfectly OK to let this stuff leak when you're using it to implement the memory manager; there's no point going mad over it. (If you're working on a system that has a notion of process exit, the OS is pretty much guaranteed to clean up the underlying mutex for you.)

雨后咖啡店 2024-09-12 14:36:28

Have you profiled and verified that actual memory allocation is a significant enough problem that replacing the allocator provides useful gain?

Is NEDMalloc thread safe?

Often, the default C++ new/delete operators will use malloc and free under the hood to do the actual memory allocation before/after calling the constructor/destructor. If they don't in your particular situation, you can override the global new and delete operators to call whatever allocation implementation you wish. This requires some care to make sure that memory is always allocated and deallocated with the same allocator (especially when dealing with libraries).

独自←快乐 2024-09-12 14:36:28

Well, usually the C++ new and delete operators internally call the plain C library functions malloc and free (plus some additional magic like calling ctors and dtors), so providing a custom implementation of these functions may be enough (this is not uncommon in embedded C++ development, but requires some linker-level work). What system and what compiler are you targeting?
