AtomicBoolean vs synchronized blocks
I was trying to cut thread contention in my code by replacing some synchronized blocks with AtomicBoolean.
Here's an example with synchronized:
public void toggleCondition() {
    synchronized (this.mutex) {
        if (this.toggled) {
            return;
        }
        this.toggled = true;
        // do other stuff
    }
}
And the alternative with AtomicBoolean:
public void toggleCondition() {
    if (!this.condition.getAndSet(true)) {
        // do other stuff
    }
}
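For context, both snippets reference fields that are not shown in the post; presumably they are declared along these lines (the declarations below are an assumption, only the field names come from the snippets):

    import java.util.concurrent.atomic.AtomicBoolean;

    public class Example {
        // state for the synchronized variant (guarded by mutex, so no volatile needed)
        private final Object mutex = new Object();
        private boolean toggled = false;

        // state for the AtomicBoolean variant
        private final AtomicBoolean condition = new AtomicBoolean(false);

        // ... the two toggleCondition() variants shown above ...
    }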
Taking advantage of AtomicBoolean's CAS property should be way faster than relying on synchronization, so I ran a little micro-benchmark.
For 10 concurrent threads and 1000000 iterations, AtomicBoolean comes in only slightly faster than the synchronized block.
Average time (per thread) spent on toggleCondition() with AtomicBoolean: 0.0338
Average time (per thread) spent on toggleCondition() with synchronized: 0.0357
I know micro-benchmarks are worth what they're worth, but shouldn't the difference be higher?
2 Answers
I think the problem is in your benchmark. It looks like each thread is going to toggle the condition just once. The benchmark will spend most of its time creating and destroying threads. The chance that any given thread will be toggling a condition at the same time as any other thread is toggling it will be close to zero.
An AtomicBoolean has a performance advantage over primitive locking when there is significant contention for the condition. For an uncontended condition, I'd expect to see little difference.
Change your benchmark so that each thread toggles the condition a few million times. That will guarantee lots of lock contention, and I expect you will see a performance difference.
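A minimal sketch of such a harness, using plain threads and System.nanoTime() for timing (the class, constants, and the reset of the flag are mine, not part of the original benchmark): resetting the flag after a successful toggle keeps every iteration racing on the same AtomicBoolean instead of becoming a no-op after the first toggle.

    import java.util.concurrent.atomic.AtomicBoolean;

    public class ToggleBenchmark {

        private static final int THREADS = 10;
        private static final int ITERATIONS = 1_000_000;

        private final AtomicBoolean condition = new AtomicBoolean(false);

        // Each call races on the same AtomicBoolean; resetting the flag keeps
        // later iterations contended rather than letting them short-circuit.
        void toggleCondition() {
            if (!condition.getAndSet(true)) {
                condition.set(false);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            ToggleBenchmark bench = new ToggleBenchmark();
            Thread[] workers = new Thread[THREADS];
            long start = System.nanoTime();
            for (int i = 0; i < THREADS; i++) {
                workers[i] = new Thread(() -> {
                    for (int j = 0; j < ITERATIONS; j++) {
                        bench.toggleCondition();
                    }
                });
                workers[i].start();
            }
            for (Thread worker : workers) {
                worker.join();
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(THREADS + " threads x " + ITERATIONS
                    + " iterations took " + elapsedMs + " ms");
        }
    }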
EDIT
If the scenario you intended to test only involved one toggle per thread (and 10 threads), then it is unlikely that your application would experience contention, and therefore it is unlikely that using AtomicBoolean will make any difference.
At this point, I should ask why you are focusing your attention on this particular aspect. Have you profiled your application and determined that really you have a lock contention problem? Or are you just guessing? Have you been given the standard lecture on the evils of premature optimization yet??
Looking at the actual implementation (I mean actually reading the code, which is way better than some micro-benchmark; those are less than useless in Java or any other GC runtime), I am not surprised it isn't "significantly faster". It is basically doing an implicit synchronized section.
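As a rough illustration of that point, here is a sketch (my own, not the code the answer quoted) of what an AtomicBoolean-style getAndSet amounts to internally: the boolean is kept as an int and updated with a CAS retry loop, which is the "implicit synchronized section" being referred to.

    import java.util.concurrent.atomic.AtomicInteger;

    public class BooleanViaCas {
        // the boolean is stored as an int (0 = false, 1 = true), as AtomicBoolean does
        private final AtomicInteger value = new AtomicInteger(0);

        public boolean getAndSet(boolean newValue) {
            int update = newValue ? 1 : 0;
            for (;;) {
                int current = value.get();                  // volatile read
                if (value.compareAndSet(current, update)) { // hardware CAS; retry if another thread raced us
                    return current != 0;
                }
            }
        }
    }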
And then there is this from com.sun.Unsafe.java: there is no magic in it; resource contention is a bitch and very complex. That is why using final variables and working with immutable data is so prevalent in real concurrent languages like Erlang. All this complexity that eats CPU time is bypassed, or at least shifted somewhere less complex.