Java mutex and SMP

Published 2024-09-08 14:23:42


I am learning multi-threaded programming; while practising mutual exclusion, I noticed that it doesn't seem to work correctly on my dual-core laptop.

The mutex code is at http://pastebin.com/axGY5y3c

The result is as below:

count value:t[0]1
count value:t[1]1
count value:t[2]2
count value:t[3]3
count value:t[4]4

The result suggests that the threads obtain the same initial value at the start. That looks incorrect.

Is there anything wrong with my code? Or is there any resource with examples of running a Java mutex on SMP/dual-core/multi-CPU machines?

Thanks for your help.
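The pastebin link no longer resolves, so for reference here is a minimal sketch of the kind of test described, with `ReentrantLock` standing in for the hand-rolled mutex; the class and method names are assumptions based on the output above, not the original code:

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical reconstruction: the pastebin code is unavailable, so the
// names and structure here are assumptions, not the asker's actual code.
public class MutexTest {
    private static final ReentrantLock mutex = new ReentrantLock();
    private static int count = 0;

    static class MyThread extends Thread {
        @Override
        public void run() {
            mutex.lock();
            try {
                count++;               // the increment itself is protected...
            } finally {
                mutex.unlock();
            }
        }

        // ...but this read is NOT protected, and the main thread calls it
        // without waiting for run() to finish -- the likely source of the
        // duplicated values in the question.
        int getCountValue() { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        MyThread[] t = new MyThread[5];
        for (int i = 0; i < t.length; i++) {
            t[i] = new MyThread();
            t[i].start();
            // Racy read: prints whatever count happens to be right now.
            System.out.println("count value:t[" + i + "]" + t[i].getCountValue());
        }
        for (MyThread w : t) w.join();   // after joining, count is 5
    }
}
```

Running this reproduces output of the shape in the question: the printed values depend on how far the workers have gotten when the main thread reads.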


Comments (3)

乞讨 2024-09-15 14:23:42


I'm not very familiar with mutex algorithms, so I can't help you strictly with the concurrency. I did, however, spot one line in your code that explains why you got the values you listed:

public class MyThread extends Thread{
    // [...]
    private static int count = 0;

When you start the five threads, run() is called five times, and each call increments count.

This block of code:

t[i].start();
int v = t[i].getCountValue();
System.out.println("count value:t["+i+"]"+v);

is therefore effectively:

count++;
System.out.println("count value:t["+i+"]"+count);
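A tiny illustration of the point about `static` (the class names here are hypothetical): a static field belongs to the class, not the instance, so every object, and therefore every thread, shares one counter:

```java
public class StaticShareDemo {
    static class MyThread {                   // plain class: no threading needed
        private static int count = 0;         // one field shared by ALL instances
        void increment() { count++; }
        int getCountValue() { return count; }
    }

    public static void main(String[] args) {
        MyThread a = new MyThread();
        MyThread b = new MyThread();
        a.increment();
        b.increment();
        // Both objects report the same value: the field is per-class,
        // not per-instance, so both increments landed on one counter.
        System.out.println(a.getCountValue() + " " + b.getCountValue()); // "2 2"
    }
}
```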
淡笑忘祈一世凡恋 2024-09-15 14:23:42


As Paul mentioned, there is a confusion here: "count" is declared static, but the way you retrieve it implies you don't want it to be static. Fundamentally, you need to decide what you want the program to do.

But... in any case, there are other issues:

  • in your implementation, you're accessing data structures shared across threads (each thread may have its own array element, but the actual array reference is shared across threads); according to the Java Memory Model, you need to take steps to make this safe (e.g. declaring the arrays final or volatile, or using an atomic array);
  • there are standard concurrency libraries that may actually perform better in practice (or at least be correct and more flexible), though of course as an academic exercise understanding concurrent algorithms isn't a bad thing.
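As a sketch of the second point, the standard java.util.concurrent library already covers this case; for example, AtomicInteger gives an atomic, correctly published counter with no hand-rolled mutex at all (the demo class and method names are hypothetical):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    // Starts n threads that each atomically increment a shared counter,
    // joins them all, and returns the final value (always n).
    static int runWorkers(int n) throws InterruptedException {
        AtomicInteger count = new AtomicInteger();
        Thread[] t = new Thread[n];
        for (int i = 0; i < n; i++) {
            final int id = i;
            // incrementAndGet() is atomic: no lost updates, no duplicates.
            t[i] = new Thread(() -> System.out.println(
                    "pid:" + id + " count value:" + count.incrementAndGet()));
            t[i].start();
        }
        for (Thread th : t) {
            th.join();   // join() establishes happens-before, so every
        }                // increment is visible to the read below
        return count.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("final count: " + runWorkers(5)); // final count: 5
    }
}
```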
撩心不撩汉 2024-09-15 14:23:42


I think I roughly understand the problem with my code. The print statement in Test.java calls getCountValue() outside the lock boundary (mutex.lock()/mutex.unlock()); as a result, there is a race condition when the threads print the count value, because reading the count does not wait for the other threads.

After moving getCountValue() inside the run() function, within the lock boundary, the result looks correct. It prints:

pid:0 count value:1
pid:2 count value:2
pid:3 count value:3
pid:1 count value:4
pid:4 count value:5

Thanks again for all your help.
I appreciate it.
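The fix described above can be sketched like this, with `ReentrantLock` standing in for the original hand-rolled mutex (the names are assumptions, since the original code is unavailable): incrementing and printing inside the same lock/unlock pair makes each printed value unique, though the order of lines still depends on scheduling:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockedReadDemo {
    private static final ReentrantLock mutex = new ReentrantLock();
    private static int count = 0;

    // Reads the counter under the lock (used after all workers finish).
    static int finalCount() {
        mutex.lock();
        try { return count; } finally { mutex.unlock(); }
    }

    static class Worker extends Thread {
        private final int pid;
        Worker(int pid) { this.pid = pid; }

        @Override
        public void run() {
            mutex.lock();
            try {
                count++;
                // Read and print inside the critical section: the value is
                // exactly this thread's own increment, so no duplicates.
                System.out.println("pid:" + pid + " count value:" + count);
            } finally {
                mutex.unlock();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Worker[] t = new Worker[5];
        for (int i = 0; i < t.length; i++) {
            t[i] = new Worker(i);
            t[i].start();
        }
        for (Worker w : t) w.join();   // each value 1..5 appears exactly once
    }
}
```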
