- Introduction to Python
- Getting started with Python and the IPython notebook
- Functions are first class objects
- Data science is OSEMN
- Working with text
- Preprocessing text data
- Working with structured data
- Using SQLite3
- Using HDF5
- Using numpy
- Using Pandas
- Computational problems in statistics
- Computer numbers and mathematics
- Algorithmic complexity
- Linear Algebra and Linear Systems
- Linear Algebra and Matrix Decompositions
- Change of Basis
- Optimization and Non-linear Methods
- Practical Optimization Routines
- Finding roots
- Optimization Primer
- Using scipy.optimize
- Gradient descent
- Newton’s method and variants
- Constrained optimization
- Curve fitting
- Finding parameters for ODE models
- Optimization of graph node placement
- Optimization of standard statistical models
- Fitting ODEs with the Levenberg–Marquardt algorithm
- 1D example
- 2D example
- Algorithms for Optimization and Root Finding for Multivariate Problems
- Expectation Maximization (EM) Algorithm
- Monte Carlo Methods
- Resampling methods
- Resampling
- Simulations
- Setting the random seed
- Sampling with and without replacement
- Calculation of Cook’s distance
- Permutation resampling
- Design of simulation experiments
- Example: Simulations to estimate power
- Check with R
- Estimating the CDF
- Estimating the PDF
- Kernel density estimation
- Multivariate kernel density estimation
- Markov Chain Monte Carlo (MCMC)
- Using PyMC2
- Using PyMC3
- Using PyStan
- C Crash Course
- Code Optimization
- Using C code in Python
- Using functions from various compiled languages in Python
- Julia and Python
- Converting Python Code to C for speed
- Optimization bake-off
- Writing Parallel Code
- Massively parallel programming with GPUs
- Writing CUDA in C
- Distributed computing for Big Data
- Hadoop MapReduce on AWS EMR with mrjob
- Spark on a local machine using 4 nodes
- Modules and Packaging
- Tour of the Jupyter (IPython3) notebook
- Polyglot programming
- What you should know and learn more about
- Wrapping R libraries with Rpy
GPU Architecture
CPU versus GPU
A CPU is designed to handle complex tasks - time slicing, virtual machine emulation, complex control flows and branching, security and so on. In contrast, GPUs only do one thing well - handle billions of repetitive low-level tasks - originally the rendering of triangles in 3D graphics - and they have thousands of ALUs compared with the 4 or 8 of a typical CPU. Many scientific programs spend most of their time doing just what GPUs are good for - handling billions of repetitive low-level tasks - and hence the field of GPU computing was born.
Originally this was called GPGPU (General Purpose GPU programming), and it required mapping scientific code to the matrix operations for manipulating triangles. This was insanely difficult to do and took a lot of dedication. However, with the advent of CUDA and OpenCL, high-level languages targeting the GPU, GPU programming is rapidly becoming mainstream in the scientific community.
Image(url='http://www.frontiersin.org/files/Articles/70265/fgene-04-00266-HTML/image_m/fgene-04-00266-g001.jpg')
Inside a GPU
Image(url='http://www.orangeowlsolutions.com/wp-content/uploads/2013/03/Fig1.png')
The streaming multiprocessor
Image(url='http://www.orangeowlsolutions.com/wp-content/uploads/2013/03/Fig2.png')
The CUDA Core
Image(url='http://www.orangeowlsolutions.com/wp-content/uploads/2013/03/Fig3.png')
Memory Hierarchy
Image(url='http://www.orangeowlsolutions.com/wp-content/uploads/2013/03/Fig9.png')
Processing flow
Image(url='http://upload.wikimedia.org/wikipedia/commons/thumb/5/59/CUDA_processing_flow_%28En%29.PNG/450px-CUDA_processing_flow_%28En%29.PNG')
CUDA Kernels
Image(url='http://www.biomedcentral.com/content/supplementary/1756-0500-2-73-s2.png')
CUDA execution model
Image(url='http://3dgep.com/wp-content/uploads/2011/11/Cuda-Execution-Model.png')
CUDA threads
Image(url="http://docs.nvidia.com/cuda/cuda-c-programming-guide/graphics/grid-of-thread-blocks.png")
Memory access levels
Image(url='http://docs.nvidia.com/cuda/parallel-thread-execution/graphics/memory-hierarchy.png')
Recap of CUDA Jargon and Concepts
Generations
- Tesla (Compute Capability 1)
- Fermi (Compute Capability 2)
- Kepler (Compute Capability 3)
- Maxwell (current generation - Compute Capability 5)
- Pascal (next generation - not in production yet)
Confusingly, Tesla is also the brand name for NVidia’s GPGPU line of cards as well as the name of the 1st generation microarchitecture. Hence you will hear references to NVidia GTX for gaming and NVidia Tesla for scientific computing. Note that GTX cards can also be used for scientific computing, but they lack ECC memory and have crippled double precision abilities.
Hardware
Host = CPU
Device = GPU
A GPU has multiple streaming multiprocessors (SM) that contain
- memory registers for threads to use
- several memory caches
  - shared memory
  - constant cache
  - texture memory
  - L1 cache
- thread schedulers
- several CUDA cores (analogous to streaming processors in AMD cards) - the number depends on the microarchitecture generation
  - each core consists of an Arithmetic Logic Unit (ALU) that handles integer and single precision calculations and a Floating Point Unit (FPU) that handles double precision calculations
  - Special Function Units (SFU) for transcendental functions (e.g. log, exp, sin, cos, sqrt)
For example, a high-end Kepler card has 15 SMs, each with 12 groups of 16 (= 192) CUDA cores, for a total of 2880 CUDA cores (only 2048 threads can be simultaneously active on each SM). Optimal use of CUDA requires feeding data to the threads fast enough to keep them all busy, which is why it is important to understand the memory hierarchy.
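To see what a particular card provides, the device properties can be queried from the CUDA runtime. The following is a minimal sketch in CUDA C (the fields printed are a small selection; values will vary by card and toolkit version):

```c
// device_query.cu - print a few properties of each CUDA device.
// Compile with: nvcc device_query.cu -o device_query
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; d++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s\n", d, prop.name);
        printf("  Compute capability:        %d.%d\n", prop.major, prop.minor);
        printf("  Streaming multiprocessors: %d\n", prop.multiProcessorCount);
        printf("  Warp size:                 %d\n", prop.warpSize);
        printf("  Max threads per block:     %d\n", prop.maxThreadsPerBlock);
        printf("  Max threads per SM:        %d\n", prop.maxThreadsPerMultiProcessor);
        printf("  Shared memory per block:   %zu bytes\n", prop.sharedMemPerBlock);
    }
    return 0;
}
```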
Device memory types
- Registers (only usable by one thread) - very, very fast (1 clock cycle)
- Shared memory (usable by threads in a thread block) - very fast (a few clock cycles)
  - organized into 32 banks that can be accessed simultaneously
  - however, each concurrent thread needs to access a different bank or there is a bank conflict
  - banks can only serve one request at a time - a single conflict doubles the access time
- Device memory (usable by all threads - can transfer to/from CPU) - very slow (hundreds of clock cycles)
  - global memory is general purpose
  - local memory is optimized for consecutive access by a thread
  - constant memory is for read-only data that will not change over the course of a kernel execution
  - texture and surface memory are for specialized read-only data mainly used in graphics routines
Access speed: global, local, texture, surface << constant << shared, register
- Device memory to host memory bandwidth (PCI) << device memory to device bandwidth
  - a few large transfers are better than many small ones
  - increase the computation to communication ratio
- The device can load 4, 8 or 16-byte words from global memory into local registers
  - data that is not in one of these multiples (e.g. structs) incurs a misalignment penalty
  - misalignment is largely mitigated by memory caches in current generation GPU cards
In summary, 3 different problems can impede efficient memory access:
- Misalignment: when the data units are not in sizes conducive for transfer from global memory to local registers
- Lack of coalescence: when the data requested by the threads of a warp are not laid out consecutively in memory (stride = 1) - contrasted in the copy-kernel sketch after this list
- Bank conflicts: when multiple concurrent threads in a block try to access the same shared memory bank at the same time
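To make the coalescence point concrete, the following sketch contrasts a stride-1 copy kernel with a strided one; the kernel names and the choice of a plain copy are illustrative only. In the first kernel, consecutive threads of a warp touch consecutive elements, so their loads and stores coalesce into few transactions; in the second, each warp's accesses are spread across memory and require many more transactions.

```c
// Coalesced access: thread i reads/writes element i (stride 1),
// so a warp's 32 accesses fall in a few contiguous memory segments.
__global__ void copy_coalesced(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];
}

// Strided access: thread i reads/writes element i * stride, so the
// accesses of a warp are scattered and cannot be coalesced (stride > 1).
__global__ void copy_strided(const float *in, float *out, int n, int stride) {
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (i < n)
        out[i] = in[i];
}
```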
Thread scheduling model
Code in a kernel is executed in groups of 32 threads (NVidia calls a group of 32 threads a warp). When one warp is waiting on device memory, the scheduler switches to another ready warp, keeping as many cores busy as possible.
- Because accessing device memory is so slow, the device coalesces the global memory loads and stores issued by the threads of a warp into as few transactions as possible
- Because of coalescence, retrieval is optimal when neighboring threads (with consecutive indexes) access consecutive memory locations - i.e. with a stride of 1
- A stride of 1 is not possible when indexing the higher dimensions of a multi-dimensional array - shared memory is used to overcome this (see the matrix multiplication example), as there is no penalty for strided access to shared memory
- Similarly, a structure of arrays (SoA) allows for efficient access, while an array of structures (AoS) does not (see the sketch below)
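The SoA versus AoS distinction is easy to see in code. The following is a minimal sketch with hypothetical type and kernel names: with an array of PointAoS, thread i reading pts[i].x pulls in interleaved x/y/z data and the warp's loads are strided, while with the SoA layout the threads read consecutive floats and the loads coalesce.

```c
// Array of structures (AoS): x, y and z are interleaved in memory,
// so a warp reading only the x fields makes strided accesses.
struct PointAoS { float x, y, z; };

// Structure of arrays (SoA): each coordinate is stored contiguously,
// so a warp reading x[i], x[i+1], ... makes coalesced accesses.
// The pointers are assumed to point to device memory.
struct PointsSoA {
    float *x;
    float *y;
    float *z;
};

// Shift the x coordinate of every point; thread i handles point i.
__global__ void shift_x_soa(PointsSoA p, float dx, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        p.x[i] += dx;   // stride-1, coalesced access to p.x
}
```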
Programming model
- The NVidia CUDA compiler nvcc targets a virtual machine known as the Parallel Thread Execution (PTX) Instruction Set Architecture (ISA) that exposes the GPU as a data parallel computing device
- High level language compilers (CUDA C/C++, CUDA Fortran, CUDA Python) generate PTX instructions, which are optimized for and translated to native target-architecture instructions that execute on the GPU
- GPU code is organized as a sequence of kernels (functions executed in parallel on the GPU)
- Normally only one kernel is executed at a time, but concurrent execution of kernels is also possible
- The host launches kernels, and each kernel can launch sub-kernels
- Threads are grouped into blocks, and blocks are grouped into a grid
- Each thread has a unique index within a block, and each block has a unique index within a grid
- This means that each thread has a globally unique index that can be used to (say) access a specific array location (see the vector addition sketch after this list)
- Since the smallest unit that can be scheduled is a warp, the size of a thread block is always some multiple of 32 threads
- Currently, the maximum number of threads in a block for Kepler is 1024 (32 warps) and the maximum number of simultaneously resident threads per SM is 2048 (64 warps)
- Hence each SM can host at most 2 blocks of 1024 threads, or 8 blocks of 256 threads, and so on
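To make the indexing and launch configuration concrete, here is a minimal vector addition sketch in CUDA C; the kernel name and the choice of 256 threads per block are illustrative, not prescribed by the notes above.

```c
// Each thread computes one element of c = a + b using its
// globally unique index within the grid.
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    // unique index = block index * block size + thread index within the block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                 // guard: the last block may be partly idle
        c[i] = a[i] + b[i];
}

// Host-side launch: the block size is a multiple of the warp size (32),
// and the grid is sized to cover all n elements.
void launch_vector_add(const float *d_a, const float *d_b, float *d_c, int n) {
    int threads_per_block = 256;   // 8 warps per block
    int blocks_per_grid = (n + threads_per_block - 1) / threads_per_block;
    vector_add<<<blocks_per_grid, threads_per_block>>>(d_a, d_b, d_c, n);
}
```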
Performance tuning
For optimal performance, the programmer has to juggle
- finding enough parallelism to use all SMs
- finding enough parallelism to keep all cores in an SM busy
- optimizing use of registers and shared memory
- optimizing device memory access for contiguous memory
- organizing data or using the cache to optimize device memory access for contiguous memory
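One practical aid for this juggling act is to let the runtime suggest a block size that maximizes occupancy for a given kernel, taking its register and shared memory usage into account. The following sketch uses cudaOccupancyMaxPotentialBlockSize (available in CUDA 6.5 and later); the kernel is just a placeholder.

```c
#include <cuda_runtime.h>

// Placeholder kernel: double every element of x.
__global__ void scale_kernel(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

void launch_with_occupancy_hint(float *d_x, int n) {
    int min_grid_size = 0, block_size = 0;
    // Ask the runtime for a block size that maximizes occupancy,
    // given the kernel's register and shared memory requirements.
    cudaOccupancyMaxPotentialBlockSize(&min_grid_size, &block_size,
                                       scale_kernel, 0, 0);
    int grid_size = (n + block_size - 1) / block_size;
    scale_kernel<<<grid_size, block_size>>>(d_x, n);
}
```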