Symmetric multiprocessing vs. distributed systems?
Are distributed systems a completely independent concept compared to symmetric multiprocessing (since in distributed systems we have individual memory/disk storage per CPU, whereas in symmetric multiprocessing we have many CPUs utilizing the same memory/disk storage)?
2 Answers
I wouldn't say they are completely different concepts, because you can get shared memory in distributed systems (using distributed shared memory), and multiple processes running on the same machine don't share their address spaces. So both environments can exist on both architectures, but at a cost. In general, shared memory is easier to program but harder to build (from the hardware point of view), while distributed systems are harder to program but easier to build.
So the distinguishing concepts are really shared memory versus non-shared memory, at least from the programming point of view.
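The shared vs. non-shared distinction shows up even on a single machine. A minimal Python sketch (the counts and names here are just illustrative): a thread shares the parent's address space, so its writes are visible to the parent, while a child process gets its own copy of memory, so its writes are not.

```python
import threading
import multiprocessing

counter = [0]  # one list object living in this process's address space

def bump(n):
    # increments whatever copy of `counter` is visible to the caller
    for _ in range(n):
        counter[0] += 1

def run_demo():
    counter[0] = 0

    # Thread: shares our address space, so its updates are visible here.
    t = threading.Thread(target=bump, args=(1000,))
    t.start(); t.join()
    after_thread = counter[0]          # 1000

    # Process: gets its own address space (its own `counter`),
    # so our copy is untouched.
    p = multiprocessing.Process(target=bump, args=(1000,))
    p.start(); p.join()
    after_process = counter[0]         # still 1000, not 2000
    return after_thread, after_process

if __name__ == "__main__":
    print(run_demo())  # (1000, 1000)
```

This is exactly the cost trade-off above: the thread version communicates for free through shared memory, while the process version would need explicit message passing (pipes, queues, sockets) to get the child's result back, just like a distributed system would.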
Distributed Computing and SMP are not the same, although DC might use SMP. DC is a way to parallelize independent workload data across heterogeneous, loosely coupled systems.
An SMP system is a machine with tightly coupled CPUs and memory, benefiting from low-latency memory access and sharing data among CPUs while the computations happen.
Example for distributed computing:
Einstein@Home is a project trying to find gravitational waves in experimental data gathered from huge laser interferometers. The data to be crunched is largely independent, so distributing it to several different machines is no problem.
Example for Symmetric Multiprocessing:
Running computations on large tables/matrices requires a certain proximity between the computing nodes ("CPUs"/"DC nodes") to finish the computation. If the result of a computation depends on the results of "neighboring" nodes, the Distributed Computing paradigm won't help you much.
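To make the "neighboring node" dependency concrete, here is a small illustrative sketch (hypothetical values): one smoothing step over a 1-D array, where every new value reads both of its neighbors. On SMP, every worker can read the whole previous array at memory speed; distributed nodes would instead have to exchange boundary values over the network after every single step.

```python
def stencil_step(a):
    # new[i] depends on a[i-1], a[i], and a[i+1]: the neighbor dependency
    # that makes this kind of computation awkward to distribute.
    # Boundary cells are kept fixed for simplicity.
    return ([a[0]]
            + [(a[i - 1] + a[i] + a[i + 1]) / 3.0
               for i in range(1, len(a) - 1)]
            + [a[-1]])

data = [0.0, 0.0, 9.0, 0.0, 0.0]
data = stencil_step(data)
print(data)  # [0.0, 3.0, 3.0, 3.0, 0.0]
```

If you split this array across machines, each machine would need its neighbor's edge cell before computing each step; on a shared-memory machine that read is just a load from RAM.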
Hope that helps...
Alex.