Is boost::interprocess ready for prime time?

Posted 2024-09-08 08:54:03


I was working on a thread-safe queue backed by memory-mapped files which utilized Boost.Interprocess fairly heavily. I submitted it for code review, and a developer with more years of experience than I have on this planet said he didn't feel that boost::interprocess was "ready for prime time" and that I should just use pthreads directly.

I think that's mostly FUD. I personally think it's beyond ridiculous to go about reimplementing things such as upgradable_named_mutex or boost::interprocess::deque, but I'm curious to know what other people think. I couldn't find any data to back up his claim, but maybe I'm just uninformed or naive. Stack Overflow, enlighten me!


Comments (2)

享受孤独 2024-09-15 08:54:03


I attempted to use boost::interprocess for a project and came away with mixed feelings. My main misgiving is the design of boost::offset_ptr and how it handles NULL values -- in short, boost::interprocess can make diagnosing NULL pointer mistakes really painful. The issue is that a shared memory segment is mapped somewhere in the middle of the address space of your process, which means that a "NULL" offset_ptr, when dereferenced, will point to a valid memory location, so your application won't segfault. This means that when your application finally does crash it may be long after the mistake was made, making things very tricky to debug.

But it gets worse. The mutexes and conditions that boost::interprocess uses internally are stored at the beginning of the segment. So if you accidentally write to some_null_offset_ptr->some_member, you will start overwriting the internal machinery of the boost::interprocess segment and get totally weird and hard-to-understand behavior. Writing code that coordinates multiple processes and deals with the possible race conditions can be tough on its own, so it was doubly maddening.

I ended up writing my own minimal shared memory library and using the POSIX mprotect system call to make the first page of my shared memory segments unreadable and unwritable, which made NULL bugs appear immediately (you waste a page of memory but such a small sacrifice is worth it unless you're on an embedded system). You could try using boost::interprocess but still manually calling mprotect, but that won't work because boost will expect it can write to that internal information it stores at the beginning of the segment.

Finally, offset_ptrs assume that you are storing pointers within a shared memory segment to other points in the same shared memory segment. If you know that you are going to have multiple shared memory segments (I knew this would be the case, because I had one writable segment and one read-only segment) which will store pointers into one another, offset_ptrs get in your way and you have to do a bunch of manual conversions. In my shared memory library I made a templated SegmentPtr<i> class where SegmentPtr<0> would be pointers into one segment, SegmentPtr<1> would be pointers into another segment, etc., so that they could not be mixed up (you can only do this, though, if you know the number of segments at compile time).

You need to weigh the cost of implementing everything yourself against the extra debugging time you're going to spend tracking down NULL errors and potentially mixing up pointers to different segments (the latter isn't necessarily an issue for you). For me it was worth it to implement things myself, but I wasn't making heavy use of the data structures boost::interprocess provides, so the trade-off was clear-cut. If the library is allowed to be open source in the future (not up to me) I'll update with a link, but for now don't hold your breath ;p

In regards to your coworker though: I didn't experience any instability or bugs in boost::interprocess itself. I just think its design makes it harder to find bugs in your own code.

避讳 2024-09-15 08:54:03


We've been using boost::interprocess shared memory and its message_queue synchronization mechanism for about 6 months now and have found the code to be reliable, stable, and fairly easy to use.

We keep our data in fairly simple fixed-size structs (though 12 regions totaling 2+ GB in size), and we used the boost::interprocess example code as-is and had almost no problems.

We did find two items to watch out for when using boost::interprocess with Windows.

  1. Review Boost Shared Memory & Windows. If you use the default #include <boost/interprocess/shared_memory_object.hpp> objects, then you can only increase the size of the memory-mapped region by rebooting Windows first. That is because of how Boost uses a file backing store.
  2. The message_queue class uses the default shared_memory_object. So if the message size needs to be increased, you will again need to reboot Windows.

I'm not trying to say that Joseph Garvin's post about his problems with boost::interprocess was not valid. I think the differences in our experiences are related to using different aspects of the library. I do agree with him that there do not appear to be any stability issues in boost::interprocess itself.
