What is the best lock-free architecture for concurrent thread-shared memory?

Posted 2024-12-09 18:30:34

I have a 2D array of memory. I have multiple threads reading and writing to single elements in the array spontaneously, arbitrarily, and concurrently.

What is the fastest way or best practice to construct my memory access code? I don't like the idea of locking because it blocks other threads.

Data integrity is actually not that important, but it should be (mostly) consistent. My code can handle a few memory errors.

It needs to be really, really fast!

Thanks for the feedback.

Comments (4)

十年不长 2024-12-16 18:30:34

If data integrity is not important, you can just access the data without caring about multithreading at all.

No one can predict the result, though.

I wouldn't call this approach "best practice", however. IMHO best practice is caring about multithreading and protecting the data with appropriately-grained mutexes. My opinion is that every application should first be correct, and only then fast. Inconsistent results are just wrong, no matter whether they come fast or not.
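To make the granularity point concrete, here is a minimal C# sketch of one possible scheme (my own illustration, not code from the answer): instead of a single lock over the whole 2D array, each row gets its own lock object, so threads working on different rows never block each other.

```csharp
// Hypothetical striped-locking wrapper: one lock object per row, so only
// threads touching the same row ever contend.
public sealed class StripedGrid
{
    private readonly double[,] _data;
    private readonly object[] _rowLocks;

    public StripedGrid(int rows, int cols)
    {
        _data = new double[rows, cols];
        _rowLocks = new object[rows];
        for (int i = 0; i < rows; i++)
            _rowLocks[i] = new object();
    }

    public double Read(int row, int col)
    {
        lock (_rowLocks[row])
        {
            return _data[row, col];
        }
    }

    public void Write(int row, int col, double value)
    {
        lock (_rowLocks[row])
        {
            _data[row, col] = value;
        }
    }
}
```

Finer stripes (per block of cells, or per cell) trade a little memory for even less contention.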

风和你 2024-12-16 18:30:34

Use the Interlocked class to CAS (CompareAndExchange) the objects/values in your array. It makes the operation atomic which ensures that the data is not corrupted. That's about the fastest thing you can do (aside from accessing/modifying the data directly without interlocking). However, if you're modifying the size of the 2D array (growing/shrinking) then you will have some serious problems unless you use some kind of locking mechanism on your array.
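As a hedged sketch of what that looks like in C# (the actual .NET method is Interlocked.CompareExchange; the CasGrid name and the transform delegate are mine): snapshot the cell, compute a new value from the snapshot, and only store it if the cell has not changed in the meantime, retrying otherwise.

```csharp
using System;
using System.Threading;

public static class CasGrid
{
    // Lock-free read-modify-write of a single cell via a compare-and-swap retry loop.
    // Returns the value that was finally stored.
    public static int Update(int[,] grid, int row, int col, Func<int, int> transform)
    {
        while (true)
        {
            int oldValue = Volatile.Read(ref grid[row, col]); // snapshot the cell
            int newValue = transform(oldValue);               // compute from the snapshot
            // Store newValue only if the cell still holds oldValue; otherwise another
            // thread got there first and we retry against the fresh value.
            if (Interlocked.CompareExchange(ref grid[row, col], newValue, oldValue) == oldValue)
                return newValue;
        }
    }
}
```

A caller would use it like `CasGrid.Update(grid, i, j, v => v + 1);`. Note that Interlocked operates on one 32- or 64-bit slot (or one reference) at a time, so it cannot make two cells change together atomically.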

绮烟 2024-12-16 18:30:34

Declare the array as volatile and ensure it's scoped such that it's visible to all your threads. I generally like to avoid statics, so either pass the array by reference, or set up all your threads to run methods of an instance class that has the array defined as an instance field.
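Here is a minimal sketch of the instance-field setup described above, assuming C# (the class and method names are mine). One caveat worth keeping in mind: in C#, volatile on an array field only makes the field reference itself volatile; reads and writes of the individual elements are not made volatile by it.

```csharp
using System.Threading;

public sealed class SharedGrid
{
    // Shared by every worker thread; 'volatile' here applies to the array reference,
    // not to reads/writes of its elements.
    private volatile int[,] _cells;

    public SharedGrid(int rows, int cols)
    {
        _cells = new int[rows, cols];
    }

    public void RunWorkers(int threadCount)
    {
        var threads = new Thread[threadCount];
        for (int t = 0; t < threadCount; t++)
        {
            threads[t] = new Thread(Work); // every thread runs against the same instance field
            threads[t].Start();
        }
        foreach (var thread in threads)
            thread.Join();
    }

    private void Work()
    {
        // ...read and write _cells[row, col] here...
    }
}
```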

However, I strongly urge you to rethink what "volatile access" means in terms of data integrity. Best practice is NOT to do what you are attempting without good locking mechanics. You may think it's a small problem, but you can find yourself with a very non-deterministic system, so much so that its data isn't reliable in the slightest.

Let's say you have 8 threads running, and all of them will get a value from an index of the array, do some calculation, then add the result back to the index of the array. Thread 1 starts first and gets the value of the index, 0. Then threads 2-7 all start and get the same value. Thread 1 performs its calculation, gets the index again to ensure it has the "latest" value, then tries to update the value. However, other threads are waiting for that memory, and due to some scheduling implementation you know nothing about, in between Thread 1 getting the index (still zero) and writing its result, threads 2-7 have ALL written their values. Then Thread 1 writes its value, overwriting everything the other 7 threads have done. The other 7 threads, in turn, probably had similar "races" with each other such that the value overwritten by Thread 1 probably overwrote the results of half the threads anyway.
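That interleaving is easy to reproduce. A small self-contained sketch (my own, with an artificial delay to make the race obvious): eight tasks each read cell [0, 0], "compute", and write back; the final value usually ends up far below the eight updates you might expect.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class LostUpdateDemo
{
    static readonly int[,] Grid = new int[1, 1];

    static void Main()
    {
        var tasks = new Task[8];
        for (int t = 0; t < 8; t++)
        {
            tasks[t] = Task.Run(() =>
            {
                int value = Grid[0, 0]; // read the "latest" value
                Thread.Sleep(10);       // stand-in for the calculation
                Grid[0, 0] = value + 1; // write back, possibly clobbering other threads' results
            });
        }
        Task.WaitAll(tasks);
        Console.WriteLine(Grid[0, 0]);  // typically prints 1 or 2, not 8
    }
}
```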

I guarantee you that this behavior is NOT what you want, no matter how much you think you can get away with it; it WILL cause data corruption, which WILL affect other areas of the system, and you WILL be forced to implement proper locking.

假装爱人 2024-12-16 18:30:34

If you are interested solely in performance, then the way in which you order your memory accesses can play a big role. Spend an hour or so reading through the slides from Lecture 1 of MIT's Performance Engineering class (http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-172-performance-engineering-of-software-systems-fall-2009/). The other lectures may also be interesting to you (such as Lecture 6).

Basically, you can optimize your use of the cache to greatly improve performance, depending on your read/write patterns, given the workload you are using.
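As one concrete, hedged illustration of what access ordering can mean (assuming C#, where multidimensional arrays are stored in row-major order): an inner loop over the last index walks memory sequentially, while the column-first version strides across rows and tends to miss the cache on large arrays.

```csharp
public static class GridSums
{
    // Sums the same 2D array two ways; only the iteration order differs.
    public static double SumRowMajor(double[,] a)
    {
        double sum = 0;
        for (int i = 0; i < a.GetLength(0); i++)
            for (int j = 0; j < a.GetLength(1); j++)
                sum += a[i, j]; // sequential access: cache-friendly
        return sum;
    }

    public static double SumColumnMajor(double[,] a)
    {
        double sum = 0;
        for (int j = 0; j < a.GetLength(1); j++)
            for (int i = 0; i < a.GetLength(0); i++)
                sum += a[i, j]; // strided access: many more cache misses on large arrays
        return sum;
    }
}
```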

This should not stop you from doing something that is correct, however.
