

Introduction to Parallel Computing

Author: Blaise Barney, Lawrence Livermore National Laboratory UCRL-MI-133316

Table of Contents

  1. Abstract
  2. Overview
    1. What is Parallel Computing?
    2. Why Use Parallel Computing?
  3. Concepts and Terminology
    1. von Neumann Computer Architecture
    2. Flynn's Classical Taxonomy
    3. Some General Parallel Terminology
  4. Parallel Computer Memory Architectures
    1. Shared Memory
    2. Distributed Memory
    3. Hybrid Distributed-Shared Memory
  5. Parallel Programming Models
    1. Overview
    2. Shared Memory Model
    3. Threads Model
    4. Distributed Memory / Message Passing Model
    5. Data Parallel Model
    6. Hybrid Model
    7. SPMD and MPMD
  6. Designing Parallel Programs
    1. Automatic vs. Manual Parallelization
    2. Understand the Problem and the Program
    3. Partitioning
    4. Communications
    5. Synchronization
    6. Data Dependencies
    7. Load Balancing
    8. Granularity
    9. I/O
    10. Limits and Costs of Parallel Programming
    11. Performance Analysis and Tuning
  7. Parallel Examples
    1. Array Processing
    2. PI Calculation
    3. Simple Heat Equation
    4. 1-D Wave Equation
  8. References and More Information


Abstract


This tutorial is the first of eight tutorials in the 4+ day "Using LLNL's Supercomputers" workshop. It is intended to provide only a very quick overview of the extensive and broad topic of Parallel Computing, as a lead-in for the tutorials that follow it. As such, it covers just the very basics of parallel computing, and is intended for someone who is just becoming acquainted with the subject and who is planning to attend one or more of the other tutorials in this workshop. It is not intended to cover Parallel Programming in depth, as this would require significantly more time. The tutorial begins with a discussion on parallel computing - what it is and how it's used, followed by a discussion on concepts and terminology associated with parallel computing. The topics of parallel memory architectures and programming models are then explored. These topics are followed by a series of practical discussions on a number of the complex issues related to designing and running parallel programs. The tutorial concludes with several examples of how to parallelize simple serial programs.



Overview

What is Parallel Computing?

The Universe is Parallel:

Uses for Parallel Computing:



Overview

Why Use Parallel Computing?

Main Reasons:

  • Save time and/or money: In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings. Parallel computers can be built from cheap, commodity components.
  • Solve larger problems: Many problems are so large and/or complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory. For example:
    • "Grand Challenge" (en.wikipedia.org/wiki/Grand_Challenge) problems requiring PetaFLOPS and PetaBytes of computing resources.
    • Web search engines/databases processing millions of transactions per second
  • Provide concurrency: A single compute resource can only do one thing at a time. Multiple computing resources can be doing many things simultaneously. For example, the Access Grid (www.accessgrid.org) provides a global collaboration network where people from around the world can meet and conduct work "virtually".
  • Use of non-local resources: Using compute resources on a wide area network, or even the Internet, when local compute resources are scarce.
  • Limits to serial computing: Both physical and practical reasons pose significant constraints to simply building ever faster serial computers:
    • Transmission speeds - the speed of a serial computer is directly dependent upon how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.
    • Limits to miniaturization - processor technology is allowing an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.
    • Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.
    • Current computer architectures are increasingly relying upon hardware level parallelism to improve performance:
      • Multiple execution units
      • Pipelined instructions
      • Multi-core

Who and What?

The Future:



Concepts and Terminology

von Neumann Architecture



Concepts and Terminology

Flynn's Classical Taxonomy

Single Instruction, Single Data (SISD):
  • A serial (non-parallel) computer
  • Single Instruction: Only one instruction stream is being acted on by the CPU during any one clock cycle
  • Single Data: Only one data stream is being used as input during any one clock cycle
  • Deterministic execution
  • This is the oldest type of computer and, even today, still the most common
  • Examples: older generation mainframes, minicomputers and workstations; most modern day PCs.

Single Instruction, Multiple Data (SIMD):
  • A type of parallel computer
  • Single Instruction: All processing units execute the same instruction at any given clock cycle
  • Multiple Data: Each processing unit can operate on a different data element
  • Best suited for specialized problems characterized by a high degree of regularity, such as graphics/image processing.
  • Synchronous (lockstep) and deterministic execution
  • Two varieties: Processor Arrays and Vector Pipelines
  • Examples:
    • Processor Arrays: Connection Machine CM-2, MasPar MP-1 & MP-2, ILLIAC IV
    • Vector Pipelines: IBM 9000, Cray X-MP, Y-MP & C90, Fujitsu VP, NEC SX-2, Hitachi S820, ETA10
  • Most modern computers, particularly those with graphics processing units (GPUs), employ SIMD instructions and execution units.



Multiple Instruction, Single Data (MISD):
  • A type of parallel computer
  • Multiple Instruction: Each processing unit operates on the data independently via separate instruction streams.
  • Single Data: A single data stream is fed into multiple processing units.
  • Few actual examples of this class of parallel computer have ever existed. One is the experimental Carnegie-Mellon C.mmp computer (1971).
  • Some conceivable uses might be:
    • multiple frequency filters operating on a single signal stream
    • multiple cryptography algorithms attempting to crack a single coded message.



Multiple Instruction, Multiple Data (MIMD):
  • A type of parallel computer
  • Multiple Instruction: Every processor may be executing a different instruction stream
  • Multiple Data: Every processor may be working with a different data stream
  • Execution can be synchronous or asynchronous, deterministic or non-deterministic
  • Currently, the most common type of parallel computer - most modern supercomputers fall into this category.
  • Examples: most current supercomputers, networked parallel computer clusters and "grids", multi-processor SMP computers, multi-core PCs.
  • Note: many MIMD architectures also include SIMD execution sub-components





Concepts and Terminology

Some General Parallel Terminology

Like everything else, parallel computing has its own "jargon". Some of the more commonly used terms associated with parallel computing are listed below. Most of these will be discussed in more detail later.

Supercomputing / High Performance Computing (HPC)
Using the world's fastest and largest computers to solve large problems.

Node
A standalone "computer in a box". Usually comprised of multiple CPUs/processors/cores. Nodes are networked together to comprise a supercomputer.

CPU / Socket / Processor / Core
This varies, depending upon who you talk to. In the past, a CPU (Central Processing Unit) was a singular execution component for a computer. Then, multiple CPUs were incorporated into a node. Then, individual CPUs were subdivided into multiple "cores", each being a unique execution unit. CPUs with multiple cores are sometimes called "sockets" - vendor dependent. The result is a node with multiple CPUs, each containing multiple cores. The nomenclature is confused at times. Wonder why?

Task
A logically discrete section of computational work. A task is typically a program or program-like set of instructions that is executed by a processor. A parallel program consists of multiple tasks running on multiple processors.

Pipelining
Breaking a task into steps performed by different processor units, with inputs streaming through, much like an assembly line; a type of parallel computing.

Shared Memory
From a strictly hardware point of view, describes a computer architecture where all processors have direct (usually bus based) access to common physical memory. In a programming sense, it describes a model where parallel tasks all have the same "picture" of memory and can directly address and access the same logical memory locations regardless of where the physical memory actually exists.

Symmetric Multi-Processor (SMP)
Hardware architecture where multiple processors share a single address space and access to all resources; shared memory computing.

Distributed Memory
In hardware, refers to network based memory access for physical memory that is not common. As a programming model, tasks can only logically "see" local machine memory and must use communications to access memory on other machines where other tasks are executing.

Communications
Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network, however the actual event of data exchange is commonly referred to as communications regardless of the method employed.

Synchronization
The coordination of parallel tasks in real time, very often associated with communications. Often implemented by establishing a synchronization point within an application where a task may not proceed further until another task(s) reaches the same or logically equivalent point.

Synchronization usually involves waiting by at least one task, and can therefore cause a parallel application's wall clock execution time to increase.

Granularity
In parallel computing, granularity is a qualitative measure of the ratio of computation to communication.

Observed Speedup
Observed speedup of a code which has been parallelized, defined as:

wall-clock time of serial execution
---------------------------------------
wall-clock time of parallel execution

One of the simplest and most widely used indicators for a parallel program's performance.

Parallel Overhead
The amount of time required to coordinate parallel tasks, as opposed to doing useful work. Parallel overhead can include factors such as task start-up time, synchronizations, data communications, software overhead imposed by parallel languages, libraries, tools and the operating system, and task termination time.

Massively Parallel
Refers to the hardware that comprises a given parallel system - having many processors. The meaning of "many" keeps increasing, but currently, the largest parallel computers can be comprised of processors numbering in the hundreds of thousands.

Embarrassingly Parallel
Solving many similar, but independent tasks simultaneously; little to no need for coordination between the tasks.

Scalability
Refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate increase in parallel speedup with the addition of more processors. Factors that contribute to scalability include the hardware (particularly memory-CPU and network bandwidths), the application algorithm, the associated parallel overhead, and the characteristics of the specific application and its coding.



Parallel Computer Memory Architectures

Shared Memory

General Characteristics:

  • Shared memory parallel computers vary widely, but generally have in common the ability for all processors to access all memory as global address space.

  • Multiple processors can operate independently but share the same memory resources.

  • Changes in a memory location effected by one processor are visible to all other processors.

  • Shared memory machines can be divided into two main classes based upon memory access times: UMA and NUMA.

Uniform Memory Access (UMA):

  • Most commonly represented today by Symmetric Multiprocessor (SMP) machines
  • Identical processors
  • Equal access and access times to memory
  • Sometimes called CC-UMA - Cache Coherent UMA. Cache coherent means if one processor updates a location in shared memory, all the other processors know about the update. Cache coherency is accomplished at the hardware level.

Non-Uniform Memory Access (NUMA):

  • Often made by physically linking two or more SMPs
  • One SMP can directly access memory of another SMP
  • Not all processors have equal access time to all memories
  • Memory access across link is slower
  • If cache coherency is maintained, then may also be called CC-NUMA - Cache Coherent NUMA
Figure: Shared Memory (UMA) architecture


Figure: Shared Memory (NUMA) architecture
Advantages:

Disadvantages:



Parallel Computer Memory Architectures

Distributed Memory

General Characteristics:

Advantages:

Disadvantages:



Parallel Computer Memory Architectures

Hybrid Distributed-Shared Memory



Parallel Programming Models

Overview



Parallel Programming Models

Shared Memory Model (without threads)

Implementations:



Parallel Programming Models

Threads Model

Implementations:

More Information:



Parallel Programming Models

Distributed Memory / Message Passing Model

Implementations:

More Information:



Parallel Programming Models

Data Parallel Model


Implementations:



Parallel Programming Models

Hybrid Model



Parallel Programming Models

SPMD and MPMD

Single Program Multiple Data (SPMD):

Multiple Program Multiple Data (MPMD):



Designing Parallel Programs

Automatic vs. Manual Parallelization



Designing Parallel Programs

Understand the Problem and the Program

  • Identify the program's hotspots:
    • Know where most of the real work is being done. The majority of scientific and technical programs usually accomplish most of their work in a few places.
    • Profilers and performance analysis tools can help here
    • Focus on parallelizing the hotspots and ignore those sections of the program that account for little CPU usage.

  • Identify bottlenecks in the program
    • Are there areas that are disproportionately slow, or cause parallelizable work to halt or be deferred? For example, I/O is usually something that slows a program down.
    • May be possible to restructure the program or use a different algorithm to reduce or eliminate unnecessary slow areas

  • Identify inhibitors to parallelism. One common class of inhibitor is data dependence, as in the calculation of a Fibonacci sequence, where each term depends on previously computed terms.

  • Investigate other algorithms if possible. This may be the single most important consideration when designing a parallel application.


Designing Parallel Programs

Partitioning

Domain Decomposition:

Functional Decomposition:



Designing Parallel Programs

Communications

Who Needs Communications?

Factors to Consider:



Designing Parallel Programs

Synchronization

Types of Synchronization:



Designing Parallel Programs

Data Dependencies

Definition:

Examples:

How to Handle Data Dependencies:



Designing Parallel Programs

Load Balancing

How to Achieve Load Balance:



Designing Parallel Programs

Granularity

Computation / Communication Ratio:

Fine-grain Parallelism:

  • Relatively small amounts of computational work are done between communication events

  • Low computation to communication ratio

  • Facilitates load balancing

  • Implies high communication overhead and less opportunity for performance enhancement

  • If granularity is too fine it is possible that the overhead required for communications and synchronization between tasks takes longer than the computation.

Coarse-grain Parallelism:

  • Relatively large amounts of computational work are done between communication/synchronization events

  • High computation to communication ratio

  • Implies more opportunity for performance increase

  • Harder to load balance efficiently

Which is Best?

  • The most efficient granularity is dependent on the algorithm and the hardware environment in which it runs.

  • In most cases the overhead associated with communications and synchronization is high relative to execution speed so it is advantageous to have coarse granularity.

  • Fine-grain parallelism can help reduce overheads due to load imbalance.


Designing Parallel Programs

I/O

The Bad News:

The Good News:



Designing Parallel Programs

Limits and Costs of Parallel Programming

Amdahl's Law:

  • Amdahl's Law states that potential program speedup is defined by the fraction of code (P) that can be parallelized:

    
                         1
        speedup   =   -------- 
                       1  - P
    
    

  • If none of the code can be parallelized, P = 0 and the speedup = 1 (no speedup).

  • If all of the code is parallelized, P = 1 and the speedup is infinite (in theory).

  • If 50% of the code can be parallelized, maximum speedup = 2, meaning the code will run twice as fast.

  • Introducing the number of processors performing the parallel fraction of work, the relationship can be modeled by:

    
                           1  
        speedup   =   -------------
                        P/N  +  S
    
    

    where P = parallel fraction, N = number of processors and S = serial fraction.

  • It soon becomes obvious that there are limits to the scalability of parallelism. For example (a short program that evaluates the formula for these values is sketched after the table):

    
                           speedup
                 --------------------------------
        N        P = .50      P = .90     P = .99
      -----      -------      -------     -------
         10         1.82         5.26        9.17
        100         1.98         9.17       50.25
       1000         1.99         9.91       90.99
      10000         1.99         9.99       99.02
     100000         1.99         9.99       99.90
    
    

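  • As a cross-check on the table, here is a small self-contained program (a sketch, not part of the original tutorial; the loop bounds and output format are illustrative) that evaluates speedup = 1/(P/N + S) for the same values of N and P:

    program amdahl_table
      implicit none
      integer, parameter :: nvals(5) = (/ 10, 100, 1000, 10000, 100000 /)
      real(8), parameter :: pvals(3) = (/ 0.50d0, 0.90d0, 0.99d0 /)
      integer :: i, j, n

      write(*,'(a)') '       N     P = .50     P = .90     P = .99'
      do i = 1, size(nvals)
         n = nvals(i)
         ! speedup = 1 / (P/N + S), where S = 1 - P
         write(*,'(i8,3f12.2)') n, &
              ( 1.0d0 / (pvals(j)/n + (1.0d0 - pvals(j))), j = 1, 3 )
      end do
    end program amdahl_table
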

Complexity:

Portability:

Resource Requirements:

Scalability:



Designing Parallel Programs

Performance Analysis and Tuning



Parallel Examples

Array Processing

  • This example demonstrates calculations on 2-dimensional array elements, with the computation on each array element being independent from other array elements.

  • The serial program calculates one element at a time in sequential order.

  • Serial code could be of the form:

    
    do j = 1,n
    do i = 1,n
      a(i,j) = fcn(i,j)
    end do
    end do
    
    

  • The calculation of elements is independent of one another, which leads to an embarrassingly parallel situation.

  • The problem should be computationally intensive.


Array Processing
Parallel Solution 1

  • Array elements are distributed so that each processor owns a portion of the array (a subarray).

  • Independent calculation of array elements ensures there is no need for communication between tasks.

  • Distribution scheme is chosen by other criteria, e.g. unit stride (stride of 1) through the subarrays. Unit stride maximizes cache/memory usage.

  • Since it is desirable to have unit stride through the subarrays, the choice of a distribution scheme depends on the programming language. See the Block - Cyclic Distributions Diagram for the options.

  • After the array is distributed, each task executes the portion of the loop corresponding to the data it owns. For example, with Fortran block distribution:

    
    do j = mystart, myend
    do i = 1,n
      a(i,j) = fcn(i,j)
    end do
    end do
    
    

  • Notice that only the outer loop variables are different from the serial solution. One way a task might compute its mystart and myend is sketched below.
Figure: embarrassingly parallel array calculation - data decomposition
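  • As a concrete illustration (a sketch, not from the original tutorial), each task could compute its own mystart and myend for a block distribution of the n columns as shown below, using MPI to obtain its task id. Only mystart, myend and n come from the loop above; everything else is an assumption:

    program block_range
      use mpi
      implicit none
      integer, parameter :: n = 1000        ! illustrative number of columns
      integer :: ierr, ntasks, taskid, chunk, mystart, myend

      call MPI_Init(ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, ntasks, ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, taskid, ierr)

      ! block distribution: each task owns a contiguous range of columns
      chunk   = n / ntasks
      mystart = taskid*chunk + 1
      myend   = mystart + chunk - 1
      if (taskid == ntasks-1) myend = n     ! last task picks up any remainder

      print *, 'task', taskid, 'owns columns', mystart, 'through', myend

      call MPI_Finalize(ierr)
    end program block_range
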

One Possible Solution:


Array Processing
Parallel Solution 2: Pool of Tasks

Pool of Tasks Scheme:

Discussion:



Parallel Examples

PI Calculation

  • The value of PI can be calculated in a number of ways. Consider the following method of approximating PI:
    1. Inscribe a circle in a square
    2. Randomly generate points in the square
    3. Determine the number of points in the square that are also in the circle
    4. Let r be the number of points in the circle divided by the number of points in the square
    5. PI ~ 4 r  (since the ratio of the circle's area to the square's area is PI/4)
    6. Note that the more points generated, the better the approximation

  • Serial pseudo code for this procedure:

    
    npoints = 10000
    circle_count = 0
    
    do j = 1,npoints
      generate 2 random numbers between 0 and 1
      xcoordinate = random1
      ycoordinate = random2
      if (xcoordinate, ycoordinate) inside circle
      then circle_count = circle_count + 1
    end do
    
    PI = 4.0*circle_count/npoints
    
    

  • Note that most of the time in running this program would be spent executing the loop

  • Leads to an embarrassingly parallel solution
    • Computationally intensive
    • Minimal communication
    • Minimal I/O
Figure: one method of determining PI


PI Calculation
Parallel Solution

  • Parallel strategy: break the loop into portions that can be executed by the tasks.

  • For the task of approximating PI:
    • Each task executes its portion of the loop a number of times.
    • Each task can do its work without requiring any information from the other tasks (there are no data dependencies).
    • Uses the SPMD model. One task acts as master and collects the results.

  • Pseudo code solution: the additions for parallelism are the task setup (p, num), the shortened loop, and the master/worker exchange of circle_counts at the end. A runnable MPI sketch follows the pseudo code.

    
    npoints = 10000
    circle_count = 0
    
    p = number of tasks
    num = npoints/p
    
    find out if I am MASTER or WORKER 
    
    do j = 1,num 
      generate 2 random numbers between 0 and 1
      xcoordinate = random1
      ycoordinate = random2
      if (xcoordinate, ycoordinate) inside circle
      then circle_count = circle_count + 1
    end do
    
    if I am MASTER
    
      receive from WORKERS their circle_counts
      compute PI (use MASTER and WORKER calculations)
    
    else if I am WORKER
    
      send to MASTER circle_count
    
    endif
    
    

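  • A minimal runnable MPI version of this scheme (a sketch under stated assumptions, not the tutorial's own code: it collects the counts with MPI_Reduce rather than explicit sends and receives, and the point count is arbitrary):

    program pi_monte_carlo
      use mpi
      implicit none
      integer, parameter :: npoints = 1000000    ! illustrative total point count
      integer :: ierr, ntasks, taskid, num, j, my_count, total_count
      real(8) :: x, y

      call MPI_Init(ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, ntasks, ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, taskid, ierr)

      num = npoints / ntasks        ! each task handles its share of the points
      my_count = 0
      call random_seed()            ! caveat: each task should really use a distinct seed

      do j = 1, num
         call random_number(x)
         call random_number(y)
         if (x*x + y*y <= 1.0d0) my_count = my_count + 1
      end do

      ! sum the per-task counts on the master (task 0)
      call MPI_Reduce(my_count, total_count, 1, MPI_INTEGER, MPI_SUM, &
                      0, MPI_COMM_WORLD, ierr)

      if (taskid == 0) print *, 'estimated PI =', 4.0d0*total_count/(num*ntasks)

      call MPI_Finalize(ierr)
    end program pi_monte_carlo
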


Parallel Examples

Simple Heat Equation

  • Most problems in parallel computing require communication among the tasks. A number of common problems require communication with "neighbor" tasks.

  • The heat equation describes the temperature change over time, given initial temperature distribution and boundary conditions.

  • A finite differencing scheme is employed to solve the heat equation numerically on a square region.

  • The initial temperature is zero on the boundaries and high in the middle.

  • The boundary temperature is held at zero.

  • For the fully explicit problem, a time stepping algorithm is used. The elements of a 2-dimensional array represent the temperature at points on the square.

  • The calculation of an element is dependent upon neighbor element values.


  • A serial program would contain code like the following (a complete, self-contained time-stepping version is sketched after it):

    
    do iy = 2, ny - 1
    do ix = 2, nx - 1
      u2(ix, iy) =
        u1(ix, iy)  +
          cx * (u1(ix+1,iy) + u1(ix-1,iy) - 2.*u1(ix,iy)) +
          cy * (u1(ix,iy+1) + u1(ix,iy-1) - 2.*u1(ix,iy))
    end do
    end do
    

Figure: initial heat conditions
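  • For reference, here is a minimal self-contained version of this serial solver with the time-stepping loop included (a sketch; the grid size, coefficients, step count and initial condition are illustrative assumptions, not values from the tutorial):

    program heat2d_serial
      implicit none
      integer, parameter :: nx = 10, ny = 10, nsteps = 100   ! illustrative sizes
      real(8), parameter :: cx = 0.1d0, cy = 0.1d0           ! illustrative coefficients
      real(8) :: u1(nx,ny), u2(nx,ny)
      integer :: ix, iy, step

      ! initial temperature: zero everywhere, high in the middle;
      ! boundaries stay at zero because only interior points are updated
      u1 = 0.0d0
      u1(nx/2-1:nx/2+1, ny/2-1:ny/2+1) = 100.0d0

      do step = 1, nsteps
         do iy = 2, ny - 1
            do ix = 2, nx - 1
               u2(ix,iy) = u1(ix,iy) + &
                    cx * (u1(ix+1,iy) + u1(ix-1,iy) - 2.d0*u1(ix,iy)) + &
                    cy * (u1(ix,iy+1) + u1(ix,iy-1) - 2.d0*u1(ix,iy))
            end do
         end do
         u1(2:nx-1, 2:ny-1) = u2(2:nx-1, 2:ny-1)   ! copy new interior into u1
      end do

      print *, 'center temperature after', nsteps, 'steps:', u1(nx/2, ny/2)
    end program heat2d_serial
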


Simple Heat Equation
Parallel Solution

Figure: heat equation with partitioned data



Parallel Examples

1-D Wave Equation

Figure: the 1-D wave equation


1-D Wave Equation
Parallel Solution

Figure: wave equation data partition




This completes the tutorial.




References and More Information

