Are "benaphores" worth implementing on modern operating systems?

Posted 2024-08-08 17:08:05

Back in my days as a BeOS programmer, I read this article by Benoit Schillings describing how to create a "benaphore": a method of using an atomic variable to enforce a critical section that avoids the need to acquire/release a mutex in the common (no-contention) case.
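For reference, here is a minimal sketch of the trick, using C11 atomics and a POSIX semaphore in place of the BeOS semaphore the article used. The type and function names (`benaphore_t`, etc.) are mine, not from the article; error handling is omitted.

```c
#include <semaphore.h>
#include <stdatomic.h>

typedef struct {
    atomic_int count;  /* number of threads that want the lock */
    sem_t      sem;    /* only touched under contention */
} benaphore_t;

void benaphore_init(benaphore_t *b) {
    atomic_init(&b->count, 0);
    sem_init(&b->sem, 0, 0);   /* starts at 0: the first contender blocks */
}

void benaphore_lock(benaphore_t *b) {
    /* Uncontended case: count goes 0 -> 1 and we never enter the kernel. */
    if (atomic_fetch_add(&b->count, 1) > 0)
        sem_wait(&b->sem);     /* someone holds the lock: sleep in the kernel */
}

void benaphore_unlock(benaphore_t *b) {
    /* If anyone else incremented the count, wake exactly one waiter. */
    if (atomic_fetch_sub(&b->count, 1) > 1)
        sem_post(&b->sem);
}
```

The point of the pattern is visible in `benaphore_lock`: when `atomic_fetch_add` returns 0 there was no contention, and the semaphore (and thus the kernel) is never touched.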

I thought that was rather clever, and it seems like you could do the same trick on any platform that supports atomic-increment/decrement.

On the other hand, this looks like something that could just as easily be included in the standard mutex implementation itself... in which case implementing this logic in my program would be redundant and wouldn't provide any benefit.

Does anyone know if modern locking APIs (e.g. pthread_mutex_lock()/pthread_mutex_unlock()) use this trick internally? And if not, why not?

2 Answers

会发光的星星闪亮亮i 2024-08-15 17:08:05

What your article describes is in common use today. Most often it's called a "Critical Section", and it consists of an interlocked variable, a bunch of flags, and an internal synchronization object (a Mutex, if I remember correctly). Generally, in scenarios with little contention, the Critical Section executes entirely in user mode, without involving the kernel synchronization object. This guarantees fast execution. When contention is high, the kernel object is used for waiting, which releases the time slice and makes for faster turnaround.

Generally, there is very little sense in implementing synchronization primitives in this day and age. Operating systems come with a big variety of such objects, and they are optimized and tested in a significantly wider range of scenarios than a single programmer can imagine. It literally takes years to invent, implement and test a good synchronization mechanism. That's not to say that there is no value in trying :)

梦途 2024-08-15 17:08:05

Java's AbstractQueuedSynchronizer (and its sibling AbstractQueuedLongSynchronizer) works similarly, or at least it could be implemented similarly. These types form the basis for several concurrency primitives in the Java library, such as ReentrantLock and FutureTask.

It works by way of using an atomic integer to represent state. A lock may define the value 0 as unlocked, and 1 as locked. Any thread wishing to acquire the lock attempts to change the lock state from 0 to 1 via an atomic compare-and-set operation; if the attempt fails, the current state is not 0, which means that the lock is owned by some other thread.
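The 0-to-1 transition described above can be sketched in a few lines of C11 atomics (AQS itself does the equivalent CAS on its `state` field in Java). The `caslock_*` names are illustrative, and this sketch only covers the try-acquire step, not the queueing of failed threads.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* State word as described above: 0 = unlocked, 1 = locked. */
typedef struct {
    atomic_int state;
} caslock_t;

void caslock_init(caslock_t *l) {
    atomic_init(&l->state, 0);
}

/* Analogous to tryAcquire: succeed only if we move the state from 0 to 1. */
bool caslock_try_acquire(caslock_t *l) {
    int expected = 0;
    return atomic_compare_exchange_strong(&l->state, &expected, 1);
}

void caslock_release(caslock_t *l) {
    atomic_store(&l->state, 0);    /* back to unlocked */
}
```

A failed `atomic_compare_exchange_strong` tells the caller the state was not 0, i.e. another thread owns the lock, which is exactly the signal AQS uses to decide whether to enqueue the thread.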

AbstractQueuedSynchronizer also facilitates waiting on locks and notification of conditions by maintaining CLH queues, which are lock-free linked lists representing the line of threads waiting either to acquire the lock or to receive notification via a condition. Such notification moves one or all of the threads waiting on the condition to the head of the queue of those waiting to acquire the related lock.
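To make the queueing idea concrete, here is a minimal CLH-style lock in C11. This is a simplified sketch with hypothetical names: each thread appends a node to the tail with an atomic exchange and then busy-spins on its predecessor's flag, whereas the real AQS parks waiting threads instead of spinning and adds condition support on top.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* Each waiting thread owns a queue node and spins on its predecessor's flag. */
typedef struct clh_node {
    atomic_bool locked;            /* true while the owner wants/holds the lock */
} clh_node;

typedef struct {
    _Atomic(clh_node *) tail;      /* last node in the queue */
} clh_lock;

/* Per-thread state: the node we published and the predecessor we inherited. */
typedef struct {
    clh_node *mine;
    clh_node *pred;
} clh_handle;

void clh_init(clh_lock *l) {
    clh_node *dummy = malloc(sizeof *dummy);   /* unlocked sentinel node */
    atomic_init(&dummy->locked, false);
    atomic_init(&l->tail, dummy);
}

void clh_acquire(clh_lock *l, clh_handle *h) {
    h->mine = malloc(sizeof *h->mine);
    atomic_init(&h->mine->locked, true);
    /* Atomically append our node; the previous tail becomes our predecessor. */
    h->pred = atomic_exchange(&l->tail, h->mine);
    while (atomic_load(&h->pred->locked))
        ;                                      /* spin until the predecessor releases */
}

void clh_release(clh_handle *h) {
    clh_node *pred = h->pred;
    atomic_store(&h->mine->locked, false);     /* hand the lock to our successor */
    free(pred);                                /* predecessor's node is now unreachable */
}
```

Note that the only contended atomic operation is the single `atomic_exchange` on the tail; after that, each thread spins on a different memory location, which is what makes the CLH queue attractive under contention.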

Most of this machinery can be implemented in terms of an atomic integer representing the state as well as a couple of atomic pointers for each waiting queue. The actual scheduling of which threads will contend to inspect and change the state variable (via, say, AbstractQueuedSynchronizer#tryAcquire(int)) is outside the scope of such a library and falls to the host system's scheduler.
