Threading paradigms
Is there a paradigm that gives you a different mindset or a different take on writing multithreaded applications?
Perhaps something that feels vastly different, like going from procedural programming to functional programming.
Concurrency has many different models for different problems. The Wikipedia page for concurrency lists a few models, and there is also a page on concurrency patterns which gives some good starting points for different ways to approach concurrency.
The approach you take depends heavily on the problem at hand. Different models address the different issues that can arise in concurrent applications, and some build on others.
In class I was taught that concurrency uses mutual exclusion and synchronization together to solve concurrency issues. Some solutions only require one, but with both you should be able to solve any concurrency issue.
For a vastly different concept you could look at immutability and concurrency. If all data is immutable then the conventional approaches to concurrency aren't even required. This article explores that topic.
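A minimal sketch of that last point, assuming standard C++11 threads (the names and data are only illustrative): because the vector is built once and never mutated, any number of threads can read it concurrently without any mutex.

```cpp
// Sketch: sharing an immutable snapshot between threads.
// No thread mutates the vector after construction, so readers need no locks.
#include <iostream>
#include <memory>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // Build the data once, then treat it as read-only.
    auto snapshot = std::make_shared<const std::vector<int>>(
        std::vector<int>{1, 2, 3, 4, 5});

    auto reader = [snapshot] {
        // Safe concurrent reads: the data can never change underneath us.
        long sum = std::accumulate(snapshot->begin(), snapshot->end(), 0L);
        std::cout << "sum = " << sum << '\n';
    };

    std::thread t1(reader), t2(reader);
    t1.join();
    t2.join();
}
```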
I don't really understand the question, but if you start doing some coding with CUDA, it gives you a different way of thinking about multithreaded applications.
It differs from general multithreading techniques, like semaphores, monitors, etc., because you have thousands of threads running concurrently. So the problem of parallelism in CUDA lies more in partitioning your data and mixing the chunks of data later.
A small example of completely rethinking a common serial problem is the SCAN (prefix-sum) algorithm. It is as simple as this:
I want the following set:
{a, a+b, a+b+c, a+b+c+d, a+b+c+d+e}
Where the symbol '+' in this case is any associative operator (not only addition; multiplication works as well).
How do you do this in parallel? It is a complete rethink of the problem, and it is described in this paper.
Many more implementations of different algorithms in CUDA can be found on the NVIDIA website.
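As a CPU-side illustration of the same scan pattern, here is a minimal sketch assuming C++17 (std::inclusive_scan with std::execution::par is standard, though some toolchains may need an extra library such as TBB for it to actually run in parallel). A CUDA version would partition the array across thousands of GPU threads in the same spirit as the paper above.

```cpp
// Sketch: inclusive scan (prefix sum) with the standard parallel algorithms.
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> in{1, 2, 3, 4, 5};   // a, b, c, d, e
    std::vector<int> out(in.size());

    // Produces {a, a+b, a+b+c, a+b+c+d, a+b+c+d+e}, here with '+' as addition.
    std::inclusive_scan(std::execution::par, in.begin(), in.end(), out.begin());

    for (int v : out) std::cout << v << ' ';   // 1 3 6 10 15
    std::cout << '\n';
}
```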
Well, a very conservative paradigm shift is from thread-centric concurrency (share everything) towards process-centric concurrency (address-space separation). This way one can avoid unintended data sharing and it's easier to enforce a communication policy between different sub-systems.
This idea is old and was propagated (among others) by the Micro-Kernel OS community to build more reliable operating systems. Interestingly, the Singularity OS prototype by Microsoft Research shows that traditional address spaces are not even required when working with this model.
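A minimal sketch of the process-centric idea, assuming a POSIX system: fork gives the child its own address space, so nothing is shared by accident, and the pipe is the single, explicit communication channel (the variable names are only illustrative).

```cpp
// Sketch: two processes with separate address spaces, talking over a pipe.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fds[2];
    if (pipe(fds) != 0) return 1;

    pid_t pid = fork();
    if (pid == 0) {                        // child: its own address space
        close(fds[0]);
        const char msg[] = "hello from child";
        write(fds[1], msg, sizeof msg);
        close(fds[1]);
        return 0;
    }

    close(fds[1]);                         // parent: read the explicit message
    char buf[64] = {0};
    read(fds[0], buf, sizeof buf - 1);
    std::printf("parent received: %s\n", buf);
    close(fds[0]);
    wait(nullptr);
}
```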
The relatively new idea I like best is transactional memory: avoid concurrency issues by making sure updates are always atomic.
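This is not real transactional memory, but a small sketch in the same spirit, using a compare-and-swap retry loop on std::atomic: each update computes a new value and either commits it atomically or retries.

```cpp
// Sketch: "read, compute, commit atomically or retry" on a shared counter.
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<int> balance{100};

void apply(int delta) {
    int expected = balance.load();
    // On failure, expected is refreshed with the current value; retry the commit.
    while (!balance.compare_exchange_weak(expected, expected + delta)) {
    }
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([] { for (int j = 0; j < 1000; ++j) apply(1); });
    for (auto& t : workers) t.join();
    std::cout << balance.load() << '\n';   // 4100
}
```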
<This is basically a summary of my subjective understanding of the links in the accepted answer. Code is in pseudo-C++.>
Active Object
Balking, Double checking
Guarded suspension - see the sketch after this list.
Leader/follower - like sockets but smarter.
Thread pool - very useful for short-lived threads.
Single writer lock - a complicated way to treat shared data as an atomically updated pointer to a constant structure.
Monitor
Reactor, Listener, Observer
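As one example from the list above, here is a minimal guarded-suspension sketch in ordinary C++11 (the GuardedQueue class is only illustrative): take() suspends the caller until the guard condition "the queue is not empty" becomes true.

```cpp
// Sketch of guarded suspension: a call blocks until its guard condition holds.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class GuardedQueue {
    std::queue<int> items_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void put(int v) {
        {
            std::lock_guard<std::mutex> lock(m_);
            items_.push(v);
        }
        cv_.notify_one();                  // wake a suspended taker
    }
    int take() {                           // suspends until the guard holds
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !items_.empty(); });
        int v = items_.front();
        items_.pop();
        return v;
    }
};

int main() {
    GuardedQueue q;
    std::thread consumer([&] { std::cout << "got " << q.take() << '\n'; });
    std::thread producer([&] { q.put(42); });
    producer.join();
    consumer.join();
}
```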