One thing I haven't figured out and google isn't helping me, is why is it possible to have bank conflicts with shared memory, but not in global memory? Can there be bank conflicts with registers?
UPDATE
Wow I really appreciate the two answers from Tibbit and Grizzly. It seems that I can only give a green check mark to one answer though. I am newish to stack overflow. I guess I have to pick one answer as the best. Can I do something to say thank you to the answer I don't give a green check to?
Short Answer: There are no bank conflicts in either global memory or in registers.
Explanation:
The key to understanding why is to grasp the granularity of the operations. A single thread does not access global memory on its own. Global memory accesses are "coalesced": since global memory is so slow, accesses by the threads within a block are grouped together to make as few requests to the global memory as possible.
Shared memory can be accessed by threads simultaneously. When two threads attempt to access different addresses within the same bank, this causes a bank conflict.
Registers cannot be accessed by any thread except the one to which they are allocated. Since you can't read or write my registers, you can't block me from accessing them -- hence, there aren't any bank conflicts.
Who can read & write to global memory?
Only blocks. A single thread can make an access, but the transaction will be processed at the block level (actually at the warp / half-warp level, but I'm trying not to complicate things). If two blocks access the same memory, I don't believe it will take longer, and on the newest devices the access may even be accelerated by the L1 cache, though this isn't transparently evident.

Who can read & write to shared memory?
Any thread within a given block.
If you only have 1 thread per block you can't have a bank conflict, but you won't get reasonable performance. Bank conflicts occur because a block is allocated with several threads, say 512, and they're all vying for different addresses within the same bank (not quite the same address). There are some excellent pictures of these conflicts at the end of the CUDA C Programming Guide -- Figure G2, on page 167 (actually page 177 of the pdf). Link to version 3.2

Who can read & write to registers?
Only the specific thread to which it is allocated.
Hence only one thread is accessing it at one time.
Whether or not there can be bank conflicts on a given type of memory obviously depends on the structure of the memory and therefore on its purpose.
So why is shared memory designed in a way which allows for bank conflicts?
That's relatively simple: it's not easy to design a memory controller which can handle independent accesses to the same memory simultaneously (proven by the fact that most can't). So in order to allow each thread in a half-warp to access an individually addressed word, the memory is banked, with an independent controller for each bank (at least that's how one can think about it; I'm not sure about the actual hardware). These banks are interleaved so that sequential threads accessing sequential memory are fast. So each of these banks can handle one request at a time, ideally allowing concurrent execution of all requests in the half-warp (obviously this model can theoretically sustain higher bandwidth due to the independence of those banks, which is also a plus).
What about registers?
Registers are designed to be accessed as operands for ALU instructions, meaning they have to be accessed with very low latency. Therefore they get more transistors per bit to make that possible. I'm not sure exactly how registers are accessed in modern processors (not the kind of information you need often, and not that easy to find out). However, it would obviously be highly impractical to organize registers in banks (for simpler architectures you typically see all registers hanging off one big multiplexer). So no, there won't be bank conflicts for registers.
Global memory
First of all, global memory works at a different granularity than shared memory. Memory is accessed in 32-, 64- or 128-byte blocks (for GT200 at least; for Fermi it is always 128 B, but cached; AMD is a bit different), where every time you want something from a block, the whole block is accessed/transferred. That is why you need coalesced accesses: if every thread accesses memory from a different block, you have to transfer all those blocks.
But who says there aren't bank conflicts? I'm not completely sure about this, because I haven't found any actual sources to support it for NVIDIA hardware, but it seems logical:
Global memory is typically distributed across several RAM chips (which can easily be verified by looking at a graphics card). It would make sense if each of these chips were like a bank of local memory, so you would get bank conflicts if there are several simultaneous requests to the same bank. However, the effects would be much less pronounced, for one thing because most of the time consumed by a memory access is the latency of getting the data from A to B anyway, and they won't be noticeable "inside" one workgroup: only one half-warp executes at a time, and if that half-warp issues more than one request you have an uncoalesced memory access, so you are already taking a hit, which makes the effects of such a conflict hard to measure. You would therefore only get conflicts if several workgroups try to access the same bank. In the typical GPGPU situation you have a large dataset lying in sequential memory, so the effects shouldn't really be noticeable, since enough other workgroups are accessing the other banks at the same time; but it should be possible to construct situations where the dataset is centered on just a few banks, which would mean a hit on bandwidth (since the maximum bandwidth comes from distributing accesses equally over all banks, each bank would only get a fraction of that bandwidth). Again, I haven't read anything to prove this theory for NVIDIA hardware (mostly everything focuses on coalescing, which of course is more important, as it makes this a non-problem for natural datasets). However, according to the ATI Stream computing guide, this is the situation for Radeon cards (for 5xxx: banks are 2 kB apart, and you want to make sure that you distribute your accesses, meaning from all workgroups simultaneously active, equally over all banks), so I would imagine that NVIDIA cards behave similarly.
Of course for most scenarios the possibility of bank conflicts on global memory is a non-issue, so in practice you can say:
Multiple threads accessing the same bank does not necessarily mean there is a bank conflict. There is a conflict if threads want to read at the same time from A DIFFERENT ROW within the same bank.
Bank conflicts and channel conflicts do exist for global memory accesses. Maximum global memory bandwidth is only achieved when memory channels and banks are accessed evenly in a round-robin manner. For linear memory accesses to a single 1D array, the memory controller is usually designed to automatically interleave memory requests across all banks and channels evenly. However, when multiple 1D arrays (or different rows of a multi-dimensional array) are accessed at the same time, and their base addresses are multiples of the size of a memory channel or bank, imperfect memory interleaving may occur. In this case, one channel or bank is hit harder than another, serializing memory accesses and reducing the available global memory bandwidth.
Due to lack of documentation, I don't entirely understand how it works, but it surely exists. In my experiments, I've observed 20% performance degradation due to unlucky memory base addresses. This problem can be rather insidious - depending on the memory allocation size, performance degradation may occur seemingly at random. Sometimes the default alignment of the memory allocator can also be too clever for its own good: when every array's base address is aligned to a large size, the chance of channel/bank conflicts increases, sometimes making them happen 100% of the time. I also found that allocating one large pool of memory, then adding manual offsets to "misalign" the smaller arrays away from the same channel/bank, can help mitigate the problem.
The memory interleaving pattern can sometimes be tricky. For example, AMD's manual says Radeon HD 79XX-series GPUs have 12 memory channels - not a power of 2, so the channel mapping is far from intuitive without documentation, since it cannot be deduced from the memory address bits alone. Unfortunately, I found this is often poorly documented by GPU vendors, so it may require some trial and error. For example, AMD's OpenCL optimization manual is limited to GCN hardware and doesn't cover anything newer than the Radeon HD 7970 - information about the later GCN GPUs with HBM VRAM found in Vega, or the newer RDNA/CDNA architectures, is completely absent. However, AMD provides OpenCL extensions to report the channel and bank sizes of the hardware, which may help with experiments. On my Radeon VII / Instinct MI50, they're:
The huge number of channels is likely a result of the 4096-bit HBM2 memory.
AMD's Optimization Manual
AMD's old APP SDK OpenCL Optimization Guide provides the following explanation:
It's also worth noting that distributing memory accesses across all channels does not always help performance; it can degrade it instead. AMD warns that it can be better for all accesses within one workgroup to hit the same memory channel/bank: since the GPU runs many workgroups simultaneously, ideal interleaving is then achieved across workgroups. Spreading a single workgroup's accesses over multiple memory channels/banks, on the other hand, degrades performance.
Read the original manual for more hardware implementation details, which are omitted here.