
GPU Architecture


CPU versus GPU

A CPU is designed to handle complex tasks: time slicing, virtual machine emulation, complex control flow and branching, security, and so on. In contrast, GPUs only do one thing well - handle billions of repetitive low level tasks (originally the rendering of triangles in 3D graphics) - and they have thousands of ALUs compared with a CPU's 4 or 8. Many scientific programs spend most of their time doing just what GPUs are good for, and hence the field of GPU computing was born.

Originally, this was called GPGPU (General Purpose GPU programming), and it required mapping scientific code to the matrix operations used for manipulating triangles. This was insanely difficult to do and took a lot of dedication. However, with the advent of CUDA and OpenCL, high-level languages targeting the GPU, GPU programming is rapidly becoming mainstream in the scientific community.

Image(url='http://www.frontiersin.org/files/Articles/70265/fgene-04-00266-HTML/image_m/fgene-04-00266-g001.jpg')

Inside a GPU

Image(url='http://www.orangeowlsolutions.com/wp-content/uploads/2013/03/Fig1.png')

The streaming multiprocessor

Image(url='http://www.orangeowlsolutions.com/wp-content/uploads/2013/03/Fig2.png')

The CUDA Core

Image(url='http://www.orangeowlsolutions.com/wp-content/uploads/2013/03/Fig3.png')

Memory Hierarchy

Image(url='http://www.orangeowlsolutions.com/wp-content/uploads/2013/03/Fig9.png')

Processing flow

Image(url='http://upload.wikimedia.org/wikipedia/commons/thumb/5/59/CUDA_processing_flow_%28En%29.PNG/450px-CUDA_processing_flow_%28En%29.PNG')
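The flow in the figure above (copy input from host to device memory, launch the kernel, copy results back) maps directly onto a handful of CUDA runtime calls. Below is a minimal sketch in CUDA C; the scale kernel is a hypothetical example, not part of the original notes.

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Hypothetical kernel: each thread doubles one array element.
    __global__ void scale(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // globally unique index
        if (i < n) x[i] *= 2.0f;
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        float *h_x = (float *)malloc(bytes);                  // host (CPU) buffer
        for (int i = 0; i < n; i++) h_x[i] = 1.0f;

        float *d_x;
        cudaMalloc(&d_x, bytes);                              // 1. allocate device memory
        cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);  // 2. copy host -> device

        scale<<<(n + 255) / 256, 256>>>(d_x, n);              // 3. launch kernel on the GPU

        cudaMemcpy(h_x, d_x, bytes, cudaMemcpyDeviceToHost);  // 4. copy device -> host
        printf("h_x[0] = %.1f\n", h_x[0]);                    // prints 2.0

        cudaFree(d_x);                                        // 5. clean up
        free(h_x);
        return 0;
    }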

CUDA Kernels

Image(url='http://www.biomedcentral.com/content/supplementary/1756-0500-2-73-s2.png')

CUDA execution model

Image(url='http://3dgep.com/wp-content/uploads/2011/11/Cuda-Execution-Model.png')

CUDA threads

Image(url="http://docs.nvidia.com/cuda/cuda-c-programming-guide/graphics/grid-of-thread-blocks.png")

Memory access levels

Image(url='http://docs.nvidia.com/cuda/parallel-thread-execution/graphics/memory-hierarchy.png')

Recap of CUDA Jargon and Concepts

Generations

  • Tesla (Compute Capability 1)
  • Fermi (Compute Capability 2)
  • Kepler (Compute Capability 3)
  • Maxwell (current generation - Compute Capability 5)
  • Pascal (next generation - not in production yet)

Confusingly, Tesla is also the brand name for NVidia’s GPGPU line of cards as well as the name of the 1st generation microarchitecture. Hence you will hear references to NVidia GTX for gaming and NVidia Tesla for scientific computing. Note that GTX cards can also be used for scientific computing, but they lack ECC memory and have crippled double precision abilities.

Hardware

Host = CPU
Device = GPU

A GPU has multiple streaming multiprocessors (SMs) that contain

  • memory registers for threads to use
  • several memory caches
    • shared memory
    • constant cache
    • texture memory
    • L1 cache
  • thread schedulers
  • Several CUDA cores (analogous to the streaming processors in AMD cards) - the number depends on the microarchitecture generation
    • Each core consists of an arithmetic logic unit (ALU) that handles integer and single precision calculations and a floating point unit (FPU) that handles double precision calculations
  • Special function units (SFU) for transcendental functions (e.g. log, exp, sin, cos, sqrt) - see the intrinsics sketch after this section

For example, a high-end Kepler card has 15 SMs, each with 12 groups of 16 (= 192) CUDA cores, for a total of 2880 CUDA cores (only 2048 threads per SM can be simultaneously active). Optimal use of CUDA requires feeding data to the threads fast enough to keep them all busy, which is why it is important to understand the memory hierarchy.
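The SFUs mentioned in the list above are reached through CUDA's fast-math intrinsics (or globally via the nvcc flag -use_fast_math), which trade a few bits of accuracy for much higher throughput. A small sketch; the kernel names are illustrative:

    #include <cuda_runtime.h>

    // Standard math: accurate, but compiled to a slower instruction sequence.
    __global__ void sin_accurate(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = sinf(in[i]);
    }

    // __sinf (like __expf, __logf) is a hardware approximation that
    // executes on the special function units.
    __global__ void sin_fast(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = __sinf(in[i]);
    }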

Device memory types

  • Registers (only usable by one thread) - very, very fast (1 clock cycle)
  • Shared memory (usable by threads in a thread block) - very fast (a few clock cycles)
    • Organized into 32 banks that can be accessed simultaneously
    • However, each concurrent thread needs to access a different bank or there is a bank conflict
    • Banks can only serve one request at a time - a single conflict doubles the access time (see the bank conflict sketch below)
  • Device memory (usable by all threads - can transfer to/from CPU) - very slow (hundreds of clock cycles)
    • Global memory is general purpose
    • Local memory is optimized for consecutive access by a thread
    • Constant memory is for read-only data that will not change over the course of a kernel execution
    • Texture and surface memory are for specialized read-only data mainly used in graphics routines

Access speed: Global, local, texture, surface << constant << shared, register
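Bank conflicts are a property of the access pattern, not of the data: with 32 banks of 4-byte words, a word's bank is its index modulo 32, so a stride-32 pattern puts every thread of a warp in the same bank. A minimal sketch (illustrative names; launch with a single warp of 32 threads):

    // Shared memory is divided into 32 banks; consecutive 4-byte words
    // live in consecutive banks, i.e. bank = word index % 32.
    __global__ void bank_demo(float *out) {
        __shared__ float tile[32 * 32];

        // Fill the tile so the reads below are well defined.
        for (int j = threadIdx.x; j < 32 * 32; j += blockDim.x)
            tile[j] = (float)j;
        __syncthreads();

        // Conflict-free: thread t reads word t, so the 32 threads of the
        // warp hit 32 different banks and are serviced simultaneously.
        float a = tile[threadIdx.x];

        // 32-way conflict: thread t reads word 32 * t, so all 32 threads
        // map to bank 0 and the reads are serialized.
        float b = tile[threadIdx.x * 32];

        out[threadIdx.x] = a + b;
    }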

  • Device memory to host memory bandwidth (PCI) << device memory to device bandwidth
    • few large transfers are better than many small ones
    • increase computation to communication ratio
  • Device can load 4, 8 or 16-byte words from global memory into local registers (see the float4 sketch after this list)
    • data that is not in one of these multiples (e.g. structs) incurs a mis-aligned penalty
    • mis-alignment is largely mitigated by memory caches in current generation GPU cards
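One common way to get the 16-byte loads mentioned above is CUDA's built-in vector types: a float4 is 16-byte aligned, so each thread can move four floats in a single load instruction (custom structs can request the same alignment with __align__(16)). A sketch, assuming the array length is a multiple of 4:

    // Scalar copy: one 4-byte load and store per element.
    __global__ void copy_scalar(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i];
    }

    // Vectorized copy: float4 is 16-byte aligned, so each thread issues
    // a single 16-byte load and store for four consecutive floats.
    __global__ void copy_vec4(const float4 *in, float4 *out, int n4) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n4) out[i] = in[i];  // n4 = n / 4
    }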

In summary, 3 different problems can impede efficient memory access

  • Mis-alignment: when the data units are not in sizes conducive to transfer from global memory to local registers
  • Non-coalesced access: when the memory locations requested by the threads of a warp are not laid out consecutively in memory (stride = 1) - see the sketch after this list
  • Bank conflicts: when multiple concurrent threads in a block try to access the same memory bank at the same time
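The coalescence point is easiest to see side by side. In the first kernel below, consecutive threads read consecutive addresses (stride 1), so a warp's 32 loads collapse into one or two transactions; in the second, a stride of 32 scatters them across memory. Kernel names are illustrative:

    // Coalesced: thread i reads in[i]; one warp covers a single
    // contiguous 128-byte region.
    __global__ void read_coalesced(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i];
    }

    // Strided: thread i reads in[32 * i]; one warp touches 32 widely
    // separated addresses, so the load splits into many transactions.
    __global__ void read_strided(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (32 * i < n) out[i] = in[32 * i];
    }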

Thread scheduling model

Code in a kernel is executed in groups of 32 threads (NVidia calls a group of 32 threads a warp). When one warp is waiting on device memory, the scheduler switches to another ready warp, keeping as many cores busy as possible.

  • Because accessing device memory is so slow, the device coalesces global memory loads and stores issued by the threads of a warp into as few transactions as possible
  • Because of coalescence, retrieval is optimal when neighboring threads (with consecutive indexes) access consecutive memory locations - i.e. with a stride of 1
  • A stride of 1 is not possible for indexing the higher dimensions of a multi-dimensional array - shared memory is used to overcome this (see matrix multiplication example) as there is no penalty for strided access to shared memory
  • Similarly, a structure of arrays (SoA) allows for efficient access, while an array of structures (AoS) does not - see the sketch after this list
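The SoA versus AoS point in the last bullet, sketched as code (the particle layout is a hypothetical example):

    // AoS: consecutive threads reading p[i].x touch addresses 12 bytes
    // apart, so the warp's loads cannot fully coalesce.
    struct ParticleAoS { float x, y, z; };

    __global__ void scale_x_aos(struct ParticleAoS *p, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) p[i].x *= 2.0f;  // stride = sizeof(struct ParticleAoS)
    }

    // SoA: each field is its own contiguous array, so consecutive
    // threads read consecutive addresses (stride 1) and loads coalesce.
    struct ParticlesSoA { float *x, *y, *z; };

    __global__ void scale_x_soa(struct ParticlesSoA p, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) p.x[i] *= 2.0f;  // stride = sizeof(float)
    }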

Programming model

  • The NVidia CUDA compiler nvcc targets a virtual machine known as the Parallel Thread Execution (PTX) Instruction Set Architecture (ISA) that exposes the GPU as a data parallel computing device
  • High level language compilers (CUDA C/C++, CUDA Fortran, CUDA Python) generate PTX instructions, which are optimized for and translated to native target-architecture instructions that execute on the GPU
  • GPU code is organized as a sequence of kernels (functions executed in parallel on the GPU)
  • Normally only one kernel is executed at a time, but concurrent execution of kernels is also possible
  • The host launches kernels, and each kernel can launch sub-kernels
  • Threads are grouped into blocks, and blocks are grouped into a grid
  • Each thread has a unique index within a block, and each block has a unique index within a grid
  • This means that each thread has a globally unique index that can be used to (say) access a specific array location - see the 2D indexing sketch after this list
  • Since the smallest unit that can be scheduled is a warp, the size of a thread block is always some multiple of 32 threads
  • Currently, the maximum number of threads in a block for Kepler is 1024 (32 warps) and the maximum number of simultaneously resident threads per SM is 2048 (64 warps)
  • Hence each SM can host at most 2 blocks with 1024 threads per block, or 8 blocks with 256 threads per block, and so on
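The same index arithmetic extends to two dimensions, which is how the grid-of-thread-blocks figure above is typically used for matrices. A sketch:

    // Each thread handles one element of an m x n matrix stored row-major.
    __global__ void add_one(float *a, int m, int n) {
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        if (row < m && col < n)
            a[row * n + col] += 1.0f;  // (row, col) is globally unique
    }

    // Launch configuration: 16 x 16 = 256 threads per block (a multiple
    // of 32), with enough blocks to cover the whole matrix:
    //   dim3 block(16, 16);
    //   dim3 grid((n + block.x - 1) / block.x, (m + block.y - 1) / block.y);
    //   add_one<<<grid, block>>>(d_a, m, n);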

Performance tuning

For optimal performance, the programmer has to juggle

  • finding enough parallelism to use all SMs
  • finding enough parallelism to keep all the cores in an SM busy
  • optimizing use of registers and shared memory
  • organizing data or using the cache to optimize device memory access for contiguous memory
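The CUDA runtime can help with part of this juggling act: cudaOccupancyMaxPotentialBlockSize (available from CUDA 6.5 on) suggests a block size that maximizes occupancy for a given kernel. A sketch, reusing the hypothetical scale kernel from the processing flow example:

    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void scale(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0f;
    }

    int main(void) {
        int minGridSize = 0, blockSize = 0;
        // Ask the runtime for the block size that maximizes occupancy
        // for this kernel on the current device.
        cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, scale, 0, 0);
        printf("suggested block size: %d\n", blockSize);
        return 0;
    }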
