SE 765 - Distributed Software Development

CS 610 - Introduction to Parallel and Distributed Computing

Lecture 1: Overview

(Lecture adapted from Blaise Barney, Lawrence Livermore National Laboratory. See https://computing.llnl.gov/tutorials/parallel_comp/)

What is Parallel Computing?

Traditionally, software has been written for serial computation: a problem is broken into a discrete series of instructions that are executed one after another on a single processor, with only one instruction executing at any moment in time. In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem: the problem is broken into discrete parts that can be solved concurrently, and instructions from each part execute simultaneously on different processors. The compute resources may be a single computer with multiple processors/cores, an arbitrary number of networked computers, or a combination of both.

The Universe is Parallel:

Parallel computing is an evolution of serial computing that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time, yet within a temporal sequence. For example: galaxy formation, planetary movement, weather and ocean patterns, tectonic plate drift, rush hour traffic, and automobile assembly lines.

The Real World is Massively Parallel



Uses for Parallel Computing:

    • Atmosphere, Earth, Environment

    • Physics - applied, nuclear, particle, condensed matter, high pressure, fusion, photonics

    • Bioscience, Biotechnology, Genetics

    • Chemistry, Molecular Sciences

    • Geology, Seismology

    • Mechanical Engineering - from prosthetics to spacecraft

    • Electrical Engineering, Circuit Design, Microelectronics

    • Computer Science, Mathematics

    • Databases, data mining

    • Oil exploration

    • Web search engines, web-based business services

    • Medical imaging and diagnosis

    • Pharmaceutical design

    • Financial and economic modeling

    • Management of national and multi-national corporations

    • Advanced graphics and virtual reality, particularly in the entertainment industry

    • Networked video and multimedia technologies

    • Collaborative work environments

Why Use Parallel Computing?

Main Reasons:

  • Save time and/or money: In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings. Parallel computers can be built from cheap, commodity components.

  • Solve larger problems: Many problems are so large and/or complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory. For example:

    • "Grand Challenge" (en.wikipedia.org/wiki/Grand_Challenge) problems requiring PetaFLOPS and PetaBytes of computing resources.

    • Web search engines/databases processing millions of transactions per second

  • Provide concurrency: A single compute resource can only do one thing at a time. Multiple computing resources can be doing many things simultaneously. For example, the Access Grid (www.accessgrid.org) provides a global collaboration network where people from around the world can meet and conduct work "virtually".

  • Use of non-local resources: Using compute resources on a wide area network, or even the Internet, when local compute resources are scarce. For example, volunteer-computing projects such as SETI@home and Folding@home harness hundreds of thousands of Internet-connected PCs.

  • Limits to serial computing: Both physical and practical reasons pose significant constraints to simply building ever faster serial computers:

    • Transmission speeds - the speed of a serial computer is directly dependent upon how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.

    • Limits to miniaturization - processor technology is allowing an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.

    • Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.

    • Current computer architectures are increasingly relying upon hardware level parallelism to improve performance:

      • Multiple execution units

      • Pipelined instructions

      • Multi-core

Who and What?

Statistics compiled by top500.org (the list of the world's 500 fastest computers) show who uses parallel computing, research, industry, and government alike, and the broad range of application areas involved.

The Future:

During the past 20+ years, trends in ever faster networks, distributed systems, and multi-processor computer architectures (even at the desktop level) have clearly shown that parallelism is the future of computing.



Concepts and Terminology

von Neumann Architecture

Named after the Hungarian mathematician John von Neumann, who first authored the general requirements for an electronic computer in 1945. It is a "stored-program computer": both program instructions and data are kept in memory. Its four main components are memory, a control unit, an arithmetic logic unit, and input/output. Parallel computers still follow this basic design, just multiplied in units; the fundamental architecture remains the same.

Flynn's Classical Taxonomy

One of the most widely used classifications of parallel computers, in use since 1966. Flynn's taxonomy distinguishes multi-processor architectures along the two independent dimensions of Instruction and Data, each of which can be in one of two states: Single or Multiple. This yields four possible classifications:

  • SISD - Single Instruction, Single Data

  • SIMD - Single Instruction, Multiple Data

  • MISD - Multiple Instruction, Single Data

  • MIMD - Multiple Instruction, Multiple Data





Single Instruction, Single Data (SISD):

  • A serial (non-parallel) computer

  • Single Instruction: Only one instruction stream is being acted on by the CPU during any one clock cycle

  • Single Data: Only one data stream is being used as input during any one clock cycle

  • Deterministic execution

  • This is the oldest and, even today, the most common type of computer

  • Examples: older generation mainframes, minicomputers and workstations; most modern day PCs.



(Photos of example SISD machines in the original tutorial: UNIVAC 1, IBM 360, CRAY-1, CDC 7600, PDP-1, and a Dell laptop.)




Single Instruction, Multiple Data (SIMD):

  • A type of parallel computer

  • Single Instruction: All processing units execute the same instruction at any given clock cycle

  • Multiple Data: Each processing unit can operate on a different data element

  • Best suited for specialized problems characterized by a high degree of regularity, such as graphics/image processing.

  • Synchronous (lockstep) and deterministic execution

  • Two varieties: Processor Arrays and Vector Pipelines

  • Examples:

    • Processor Arrays: Connection Machine CM-2, MasPar MP-1 & MP-2, ILLIAC IV

    • Vector Pipelines: IBM 9000, Cray X-MP, Y-MP & C90, Fujitsu VP, NEC SX-2, Hitachi S820, ETA10

  • Most modern computers, particularly those with graphics processor units (GPUs), employ SIMD instructions and execution units.
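To make the lockstep idea concrete, the following is a minimal C sketch (an addition to the original notes) of the kind of loop that maps naturally onto SIMD hardware: one instruction applied across many data elements. With optimization enabled (e.g., -O2/-O3 on gcc or clang), compilers typically auto-vectorize loops like this into SSE/AVX SIMD instructions.

    #include <stdio.h>

    #define N 1024

    int main(void) {
        float a[N], b[N], c[N];

        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f; }

        /* One operation, many data elements: a vectorizing compiler turns
           this loop into SIMD instructions that process several elements
           of a, b, and c per instruction, in lockstep. */
        for (int i = 0; i < N; i++)
            c[i] = a[i] * b[i] + 1.0f;

        printf("c[10] = %f\n", c[10]);   /* 10*2+1 = 21 */
        return 0;
    }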





(Photos of example SIMD machines in the original tutorial: ILLIAC IV, MasPar, Cray X-MP, Cray Y-MP, Thinking Machines CM-2, and the Cell processor.)




Multiple Instruction, Single Data (MISD):

  • A type of parallel computer

  • Multiple Instruction: Each processing unit operates on the data independently via separate instruction streams.

  • Single Data: A single data stream is fed into multiple processing units.

  • Few actual examples of this class of parallel computer have ever existed. One is the experimental Carnegie-Mellon C.mmp computer (1971).

  • Some conceivable uses might be:

    • multiple frequency filters operating on a single signal stream

    • multiple cryptography algorithms attempting to crack a single coded message.






Multiple Instruction, Multiple Data (MIMD):

  • A type of parallel computer

  • Multiple Instruction: Every processor may be executing a different instruction stream

  • Multiple Data: Every processor may be working with a different data stream

  • Execution can be synchronous or asynchronous, deterministic or non-deterministic

  • Currently, the most common type of parallel computer - most modern supercomputers fall into this category.

  • Examples: most current supercomputers, networked parallel computer clusters and "grids", multi-processor SMP computers, multi-core PCs.

  • Note: many MIMD architectures also include SIMD execution sub-components





(Photos of example MIMD systems in the original tutorial: IBM POWER5, HP/Compaq AlphaServer, Intel IA-32, AMD Opteron, Cray XT3, and IBM BlueGene/L.)



Some General Parallel Terminology

Like everything else, parallel computing has its own "jargon". Some of the more commonly used terms associated with parallel computing are listed below. Most of these will be discussed in more detail later.

Supercomputing / High Performance Computing (HPC) - Using the world's fastest and largest computers to solve large problems.


Node - A standalone "computer in a box". Usually comprised of multiple CPUs/processors/cores. Nodes are networked together to comprise a supercomputer.


CPU / Socket / Processor / Core – This varies, depending upon whom you talk to. In the past, a CPU (Central Processing Unit) was a singular execution component for a computer. Then multiple CPUs were incorporated into a node, and individual CPUs were subdivided into multiple "cores", each a distinct execution unit. CPUs with multiple cores are sometimes called "sockets" (vendor dependent). The result is a node with multiple CPUs, each containing multiple cores.

Task - A logically discrete section of computational work. A task is typically a program or program-like set of instructions that is executed by a processor. A parallel program consists of multiple tasks running on multiple processors.


Pipelining - Breaking a task into steps performed by different processor units, with inputs streaming through, much like an assembly line; a type of parallel computing.


Shared Memory - From a strictly hardware point of view, describes a computer architecture where all processors have direct (usually bus based) access to common physical memory. In a programming sense, it describes a model where parallel tasks all have the same "picture" of memory and can directly address and access the same logical memory locations regardless of where the physical memory actually exists.


Symmetric Multi-Processor (SMP) - Hardware architecture where multiple processors share a single address space and access to all resources; shared memory computing.


Distributed Memory - In hardware, refers to network based memory access for physical memory that is not common. As a programming model, tasks can only logically "see" local machine memory and must use communications to access memory on other machines where other tasks are executing.


Communications - Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network, however the actual event of data exchange is commonly referred to as communications regardless of the method employed.


Synchronization - The coordination of parallel tasks in real time, very often associated with communications. Often implemented by establishing a synchronization point within an application where a task may not proceed further until another task(s) reaches the same or logically equivalent point.

Synchronization usually involves waiting by at least one task, and can therefore cause a parallel application's wall clock execution time to increase.

Granularity - In parallel computing, granularity is a qualitative measure of the ratio of computation to communication.

Observed Speedup - Observed speedup of a code which has been parallelized, defined as:

    wall-clock time of serial execution
    -------------------------------------
    wall-clock time of parallel execution

One of the simplest and most widely used indicators of a parallel program's performance.
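As an illustration (an addition to the original notes), here is a small C/OpenMP sketch that measures this ratio directly for a simple summation loop; omp_get_wtime() returns wall-clock seconds.

    #include <stdio.h>
    #include <omp.h>            /* compile with: cc -O2 -fopenmp speedup.c */

    #define N 200000000L

    /* identical work, serial and parallel, so the timing ratio is the
       observed speedup */
    static double sum_serial(void) {
        double s = 0.0;
        for (long i = 0; i < N; i++) s += (double)i * 1.0e-9;
        return s;
    }

    static double sum_parallel(void) {
        double s = 0.0;
        #pragma omp parallel for reduction(+:s)
        for (long i = 0; i < N; i++) s += (double)i * 1.0e-9;
        return s;
    }

    int main(void) {
        double t0 = omp_get_wtime();
        double a  = sum_serial();
        double t1 = omp_get_wtime();
        double b  = sum_parallel();
        double t2 = omp_get_wtime();

        /* observed speedup = serial wall-clock / parallel wall-clock */
        printf("serial %.3f s, parallel %.3f s, speedup %.2fx (sums %.1f %.1f)\n",
               t1 - t0, t2 - t1, (t1 - t0) / (t2 - t1), a, b);
        return 0;
    }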

Parallel Overhead - The amount of time required to coordinate parallel tasks, as opposed to doing useful work. Parallel overhead can include factors such as: task start-up time, synchronizations, data communications, software overhead imposed by parallel languages, libraries, and the operating system, and task termination time.

Massively Parallel - Refers to the hardware that comprises a given parallel system - having many processors. The meaning of "many" keeps increasing, but currently, the largest parallel computers can be comprised of processors numbering in the hundreds of thousands.


Embarrassingly Parallel - Solving many similar, but independent tasks simultaneously; little to no need for coordination between the tasks.


Scalability - Refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate increase in parallel speedup with the addition of more processors. Factors that contribute to scalability include: hardware (particularly memory-CPU bandwidth and network communications), the application algorithm, parallel overhead, and the characteristics of the specific application and its coding.



Parallel Computer Memory Architectures

Shared Memory

General Characteristics:

  • Shared memory parallel computers vary widely, but generally have in common the ability for all processors to access all memory as global address space.

  • Multiple processors can operate independently but share the same memory resources.

  • Changes in a memory location effected by one processor are visible to all other processors.

  • Shared memory machines can be divided into two main classes based upon memory access times: UMA and NUMA.

Uniform Memory Access (UMA):

  • Most commonly represented today by Symmetric Multiprocessor (SMP) machines

  • Identical processors

  • Equal access and access times to memory

  • Sometimes called CC-UMA - Cache Coherent UMA. Cache coherent means if one processor updates a location in shared memory, all the other processors know about the update. Cache coherency is accomplished at the hardware level.

Non-Uniform Memory Access (NUMA):

  • Often made by physically linking two or more SMPs

  • One SMP can directly access memory of another SMP

  • Not all processors have equal access time to all memories

  • Memory access across link is slower

  • If cache coherency is maintained, then may also be called CC-NUMA - Cache Coherent NUMA


(Diagrams of Shared Memory (UMA) and Shared Memory (NUMA) architectures appear in the original tutorial.)

Advantages:

  • A global address space provides a user-friendly programming perspective to memory.

  • Data sharing between tasks is both fast and uniform due to the proximity of memory to CPUs.

Disadvantages:

  • Lack of scalability between memory and CPUs: adding more CPUs can geometrically increase traffic on the shared memory-CPU path.

  • The programmer is responsible for the synchronization constructs that ensure "correct" access of global memory.

Distributed Memory

General Characteristics:

  • Distributed memory systems require a communication network to connect inter-processor memory.

  • Processors have their own local memory; there is no concept of a global address space across all processors.

  • Because each processor has its own local memory, changes it makes have no effect on the memory of other processors; the concept of cache coherency does not apply.

  • When a processor needs data residing on another processor, it is usually the programmer's task to explicitly define how and when that data is communicated.

Advantages:

  • Memory is scalable with the number of processors.

  • Each processor can rapidly access its own memory without interference and without the overhead of maintaining cache coherency.

  • Cost effectiveness: can use commodity, off-the-shelf processors and networking.

Disadvantages:

  • The programmer is responsible for many of the details associated with data communication between processors.

  • It may be difficult to map existing data structures, based on global memory, to this memory organization.

  • Non-uniform memory access times.

Hybrid Distributed-Shared Memory

The largest and fastest computers in the world today employ both shared and distributed memory architectures: the shared memory component is usually a cache-coherent SMP or multi-core node, and the distributed memory component is the networking of many such nodes. Current trends indicate that this type of architecture will continue to prevail.



Parallel Programming Models

Overview

Several parallel programming models are in common use: shared memory (with and without threads), message passing, data parallel, and hybrid. These models exist as an abstraction above hardware and memory architectures; they are not specific to a particular type of machine, and any of them can, in theory, be implemented on any underlying hardware. There is no "best" model, although there certainly are better implementations of some models over others; which model to use is often a combination of what is available and personal choice.



Shared Memory Model (without threads)

In this model, tasks share a common address space, which they read from and write to asynchronously. Mechanisms such as locks and semaphores are used to control access to the shared memory. An advantage is that there is no notion of data "ownership", so the communication of data between tasks need not be specified explicitly.

Implementations:

On shared memory platforms, the native compilers and/or hardware translate user program variables into actual memory addresses, which are global.

Threads Model

This is a type of shared memory programming in which a single "heavyweight" process can have multiple "lightweight", concurrent execution paths (threads). Each thread has local data, but also shares the entire resources of the process.

Implementations:

From a programming perspective, threads implementations commonly comprise POSIX Threads (Pthreads), a library-based standard, and OpenMP, a compiler-directive-based standard.

In both cases, the programmer is responsible for determining all parallelism.
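As a concrete illustration (an addition to the original notes), here is a minimal OpenMP program in C. The single pragma creates a team of threads that share the process's memory; the rest of the parallelism is implicit.

    #include <stdio.h>
    #include <omp.h>            /* compile with: cc -fopenmp hello.c */

    int main(void) {
        /* The directive forks a team of threads; each executes the block. */
        #pragma omp parallel
        {
            int id = omp_get_thread_num();   /* this thread's id */
            int n  = omp_get_num_threads();  /* size of the team */
            printf("Hello from thread %d of %d\n", id, n);
        }                                    /* implicit join/barrier here */
        return 0;
    }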

More Information: the original tutorial links to companion POSIX Threads and OpenMP tutorials at computing.llnl.gov/tutorials/.

Distributed Memory / Message Passing Model

In this model, a set of tasks use their own local memory during computation. Multiple tasks can reside on the same physical machine and/or across an arbitrary number of machines. Tasks exchange data by sending and receiving messages; data transfer usually requires cooperative operations performed by each process (a send must have a matching receive).

Implementations:

From a programming perspective, message passing implementations usually comprise a library of subroutines embedded in source code. MPI (the Message Passing Interface) is the de facto industry standard, available on virtually all popular parallel computing platforms.

More Information: the original tutorial links to a companion MPI tutorial at computing.llnl.gov/tutorials/.
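For illustration (an addition to the original notes), here is a minimal MPI program in C showing the cooperative send/receive pairing described above: task 0 sends a value, task 1 receives it.

    #include <stdio.h>
    #include <mpi.h>            /* compile with mpicc; run: mpirun -np 2 a.out */

    int main(int argc, char **argv) {
        int rank, size, msg;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my task id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of tasks */

        if (rank == 0 && size > 1) {
            msg = 42;                           /* task 0: send... */
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {                 /* task 1: ...receive */
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("task 1 received %d from task 0\n", msg);
        }

        MPI_Finalize();
        return 0;
    }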

Data Parallel Model

In this model, the address space is treated globally, and a set of tasks work collectively on the same data structure (typically an array), with each task working on a different partition of it. Tasks perform the same operation on their partition, for example, "add 4 to every array element".

Implementations:

Programming with this model has historically been accomplished through data parallel constructs in the language itself, most notably Fortran 90/95 array syntax and High Performance Fortran (HPF) data layout directives.

Implementations are available for most common parallel platforms.

HPF compilers were relatively common in the 1990s, but are no longer commonly implemented.

Hybrid Model

A hybrid model combines more than one of the previously described programming models. A common example is the combination of the message passing model (MPI) with the threads model (OpenMP): threads perform computationally intensive kernels using local, on-node data, while communications between processes on different nodes occur over the network using MPI. This lends itself well to hybrid distributed-shared memory hardware.

SPMD and MPMD

Single Program Multiple Data (SPMD):

A single program is executed by all tasks simultaneously. At any moment in time, tasks can be executing the same or different instructions within that program (via conditional branching), and all tasks may use different data. This is probably the most commonly used parallel programming model.

Multiple Program Multiple Data (MPMD):

Each task may execute a different program, and all tasks may use different data. MPMD applications are less common than SPMD applications, but may be better suited to certain problems, particularly those that lend themselves to functional decomposition.



Designing Parallel Programs

Automatic vs. Manual Parallelization

Designing and developing parallel programs has characteristically been a very manual process. Various tools can assist with converting serial programs into parallel programs; the most common is a parallelizing compiler or pre-processor, which works either fully automatically (the compiler analyzes the code, identifies opportunities such as loops, and weighs whether the parallelism would actually improve performance) or programmer-directed (using compiler directives or flags). Automatic parallelization can produce wrong results or degrade performance, is usually limited to loops, and offers much less flexibility than manual parallelization, so the remainder of this section assumes manual parallelization.

Understand the Problem and the Program

  • Identify the program's hotspots:

    • Know where most of the real work is being done. The majority of scientific and technical programs usually accomplish most of their work in a few places.

    • Profilers and performance analysis tools can help here

    • Focus on parallelizing the hotspots and ignore those sections of the program that account for little CPU usage.

  • Identify bottlenecks in the program

    • Are there areas that are disproportionately slow, or cause parallelizable work to halt or be deferred? For example, I/O is usually something that slows a program down.

    • May be possible to restructure the program or use a different algorithm to reduce or eliminate unnecessary slow areas

  • Identify inhibitors to parallelism. One common class of inhibitor is data dependence, as in the calculation of a Fibonacci sequence, where each term requires the two terms before it (F(n) = F(n-1) + F(n-2)).

  • Investigate other algorithms if possible. This may be the single most important consideration when designing a parallel application.

Partitioning

One of the first steps in designing a parallel program is to break the problem into discrete "chunks" of work that can be distributed to multiple tasks. This is known as decomposition or partitioning. There are two basic ways to partition computational work among parallel tasks: domain decomposition and functional decomposition.

Domain Decomposition:

The data associated with the problem is decomposed; each parallel task then works on a portion of the data.

Functional Decomposition:

The focus is on the computation to be performed rather than on the data. The problem is decomposed according to the work that must be done; each task then performs a portion of the overall work. Examples:

Ecosystem Modeling
Each program calculates the population of a given group, where each group's growth depends on that of its neighbors. As time progresses, each process calculates its current state, then exchanges information with the neighbor populations. All tasks then progress to calculate the state at the next time step.

Signal Processing
An audio signal data set is passed through four distinct computational filters. Each filter is a separate process. The first segment of data must pass through the first filter before progressing to the second. When it does, the second segment of data passes through the first filter. By the time the fourth segment of data is in the first filter, all four tasks are busy.

Climate Modeling
Each model component can be thought of as a separate task. During the computation, the components exchange data: the atmosphere model generates wind velocity data that are used by the ocean model, the ocean model generates sea surface temperature data that are used by the atmosphere model, and so on.



Communications

Who Needs Communications?

The need for communications between tasks depends upon your problem:

  • You DON'T need communications: some problems can be decomposed and executed in parallel with virtually no need for tasks to share data; these are often called embarrassingly parallel (e.g., an image-processing operation where every pixel can be computed independently).

  • You DO need communications: most parallel applications require tasks to share data with each other (e.g., a heat diffusion problem where each element needs information from its neighbors).

Factors to Consider:

There are a number of important factors to consider when designing your program's inter-task communications: the cost of communications (overhead and cycles spent not computing), latency vs. bandwidth, visibility of communications to the programmer, synchronous vs. asynchronous communications, scope (point-to-point vs. collective operations), and the efficiency of the chosen implementation.



Synchronization

Types of Synchronization:

  • Barrier - usually implies that all tasks are involved; each task performs its work until it reaches the barrier, then blocks until the last task arrives, after which all tasks proceed.

  • Lock / semaphore - can involve any number of tasks; typically used to serialize access to global data or a section of code. Only one task at a time may own the lock.

  • Synchronous communication operations - involve only the tasks executing a communication operation; e.g., a task performing a send must wait for acknowledgment from the receiving task before continuing.
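A minimal C/OpenMP sketch of a barrier (an addition to the original notes): no thread prints a "phase 2" line until every thread has printed its "phase 1" line.

    #include <stdio.h>
    #include <omp.h>            /* compile with: cc -fopenmp barrier.c */

    int main(void) {
        #pragma omp parallel
        {
            int id = omp_get_thread_num();
            printf("thread %d: phase 1\n", id);

            /* Synchronization point: each thread blocks here until the
               last one arrives, then all proceed together. */
            #pragma omp barrier

            printf("thread %d: phase 2\n", id);
        }
        return 0;
    }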



Data Dependencies

Definition:

A dependence exists between program statements when the order of statement execution affects the results of the program. A data dependence results from multiple uses of the same location(s) in storage by different tasks. Dependencies are important to parallel programming because they are one of the primary inhibitors to parallelism.

Examples:

Loop carried data dependence:

    DO 500 J = MYSTART, MYEND
       A(J) = A(J-1) * 2.0
500 CONTINUE

The value of A(J-1) must be computed before the value of A(J), therefore A(J) exhibits a data dependency on A(J-1). Parallelism is inhibited.

If Task 2 has A(J) and task 1 has A(J-1), computing the correct value of A(J) necessitates:

  • Distributed memory architecture - task 2 must obtain the value of A(J-1) from task 1 after task 1 finishes its computation.

  • Shared memory architecture - task 2 must read A(J-1) after task 1 updates it.


Loop independent data dependence:

    task 1        task 2
    ------        ------
    X = 2         X = 4
      .             .
      .             .
    Y = X**2      Y = X**3

As with the previous example, parallelism is inhibited. The value of Y is dependent on:

  • Distributed memory architecture - if or when the value of X is communicated between the tasks.

  • Shared memory architecture - which task last stores the value of X.

How to Handle Data Dependencies:

  • Distributed memory architectures - communicate required data at synchronization points.

  • Shared memory architectures - synchronize read/write operations between tasks.
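On a shared memory architecture, one common case, many tasks updating a single accumulator, can be handled with a reduction rather than explicit locks. A minimal C/OpenMP sketch (an addition to the original notes): each thread gets a private copy of sum, and the copies are combined safely at the end.

    #include <stdio.h>
    #include <omp.h>            /* compile with: cc -fopenmp reduce.c */

    int main(void) {
        double sum = 0.0;

        /* Every iteration updates 'sum': a data dependence. The reduction
           clause removes the conflict by giving each thread a private
           partial sum and synchronizing only once, at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000000; i++)
            sum += 1.0 / ((double)i * (double)i);

        printf("sum = %.6f (approaches pi^2/6 = 1.644934)\n", sum);
        return 0;
    }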



Load Balancing

Load balancing refers to the practice of distributing approximately equal amounts of work among tasks so that all tasks are kept busy all of the time. It can be considered a minimization of task idle time; it matters for performance because whenever tasks synchronize, the slowest task determines overall wall-clock time.

How to Achieve Load Balance:

  • Equally partition the work each task receives (e.g., evenly distribute array data or loop iterations; on heterogeneous machines, use performance analysis to detect and adjust for imbalances).

  • Use dynamic work assignment: when the amount of work per task is unpredictable, a scheduler / task-pool approach lets faster tasks pick up more work as they finish (see the Pool of Tasks example later in this lecture).



Granularity

Computation / Communication Ratio:

In parallel computing, granularity is a qualitative measure of the ratio of computation to communication. Periods of computation are typically separated from periods of communication by synchronization events.

Fine-grain Parallelism:

  • Relatively small amounts of computational work are done between communication events

  • Low computation to communication ratio

  • Facilitates load balancing

  • Implies high communication overhead and less opportunity for performance enhancement

  • If granularity is too fine it is possible that the overhead required for communications and synchronization between tasks takes longer than the computation.

Coarse-grain Parallelism:

  • Relatively large amounts of computational work are done between communication/synchronization events

  • High computation to communication ratio

  • Implies more opportunity for performance increase

  • Harder to load balance efficiently

Which is Best?

  • The most efficient granularity is dependent on the algorithm and the hardware environment in which it runs.

  • In most cases the overhead associated with communications and synchronization is high relative to execution speed so it is advantageous to have coarse granularity.

  • Fine-grain parallelism can help reduce overheads due to load imbalance.

I/O

The Bad News:

  • I/O operations are generally regarded as inhibitors to parallelism; they require orders of magnitude more time than memory operations.

  • In an environment where all tasks see the same file space, write operations can result in file overwriting, and read operations can be limited by the file server's ability to handle multiple simultaneous requests.

The Good News:

  • Parallel file systems are available (e.g., GPFS, Lustre, PVFS), and a parallel I/O programming interface has been part of the MPI standard since MPI-2 (1996), commonly known as MPI-IO.



Limits and Costs of Parallel Programming

Amdahl's Law:

  • Amdahl's Law states that potential program speedup is defined by the fraction of code (P) that can be parallelized:


                  1
    speedup = ---------
                1 - P

  • If none of the code can be parallelized, P = 0 and the speedup = 1 (no speedup).

  • If all of the code is parallelized, P = 1 and the speedup is infinite (in theory).

  • If 50% of the code can be parallelized, maximum speedup = 2, meaning the code will run twice as fast.

  • Introducing the number of processors performing the parallel fraction of work, the relationship can be modeled by:


                    1
    speedup = -------------
                P/N  +  S

where P = parallel fraction, N = number of processors and S = serial fraction.

  • It soon becomes obvious that there are limits to the scalability of parallelism. For example:


                         speedup
        N        P = .50    P = .90    P = .99
     -------     -------    -------    -------
          10       1.82       5.26       9.17
         100       1.98       9.17      50.25
        1000       1.99       9.91      90.99
       10000       1.99       9.91      99.02
      100000       1.99       9.99      99.90
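The table follows directly from the formula. A small C sketch (an addition to the original notes) that evaluates speedup = 1 / (P/N + (1 - P)) and reproduces the table above to within rounding:

    #include <stdio.h>

    /* Amdahl's law: speedup = 1 / (P/N + S), where S = 1 - P */
    static double amdahl(double P, long N) {
        return 1.0 / (P / (double)N + (1.0 - P));
    }

    int main(void) {
        const double P[]  = {0.50, 0.90, 0.99};
        const long   np[] = {10, 100, 1000, 10000, 100000};

        printf("%8s %9s %9s %9s\n", "N", "P=.50", "P=.90", "P=.99");
        for (int i = 0; i < 5; i++) {
            printf("%8ld", np[i]);
            for (int j = 0; j < 3; j++)
                printf(" %9.2f", amdahl(P[j], np[i]));
            printf("\n");
        }
        return 0;
    }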


However, certain problems demonstrate increased performance by increasing the problem size. For example, suppose a problem spends 85 seconds (85%) in parallelizable 2D grid calculations and 15 seconds (15%) in serial code. We can increase the problem size by doubling the grid dimensions and halving the time step. This results in four times the number of grid points and twice the number of time steps. The timings then look like:

    2D Grid Calculations    680 seconds    97.84%
    Serial fraction          15 seconds     2.16%

Problems that increase the percentage of parallel time with their size are more scalable than problems with a fixed percentage of parallel time.

Complexity:

In general, parallel applications are much more complex than corresponding serial applications. Costs are incurred in every aspect of the software development cycle: design, coding, debugging, tuning, and maintenance.

Portability:

Thanks to standardization of APIs such as MPI, POSIX threads, and OpenMP, portability issues are not as serious as in years past. However, implementations differ, and operating systems, hardware architectures, and supporting libraries can all affect portability.

Resource Requirements:

The primary intent of parallel programming is to decrease wall-clock execution time, but more total CPU time is used, and parallel programs often require more memory (e.g., for replicated data and for the parallel support libraries themselves). For short-running programs, parallel overhead can actually make a parallel version slower than its serial counterpart.

Scalability:

The ability of a parallel program's performance to scale is the result of a number of interrelated factors; simply adding more processors is rarely the answer. The algorithm may have inherent limits to scalability, and hardware factors (memory-CPU bandwidth, network bandwidth) and parallel overhead play significant roles.



Performance Analysis and Tuning

As with debugging, analyzing and tuning parallel program performance can be much more challenging than for serial programs. Fortunately, a number of profiling and performance analysis tools exist to assist with monitoring and improving parallel execution.



Parallel Examples

Array Processing

  • This example demonstrates calculations on 2-dimensional array elements, with the computation on each array element being independent from other array elements.

  • The serial program calculates one element at a time in sequential order.

  • Serial code could be of the form:


    do j = 1,n
      do i = 1,n
        a(i,j) = fcn(i,j)
      end do
    end do

  • The calculation of elements is independent of one another - leads to an embarrassingly parallel situation.

  • The problem should be computationally intensive.

Array Processing
Parallel Solution 1

  • Array elements are distributed so that each processor owns a portion of the array (a subarray).

  • Independent calculation of array elements ensures there is no need for communication between tasks.

  • Distribution scheme is chosen by other criteria, e.g. unit stride (stride of 1) through the subarrays. Unit stride maximizes cache/memory usage.

  • Since it is desirable to have unit stride through the subarrays, the choice of a distribution scheme depends on the programming language. See the Block and Cyclic Distributions diagram in the original tutorial for the options.

  • After the array is distributed, each task executes the portion of the loop corresponding to the data it owns. For example, with Fortran block distribution:


    do j = mystart, myend
      do i = 1,n
        a(i,j) = fcn(i,j)
      end do
    end do

  • Notice that only the outer loop variables are different from the serial solution.
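Before turning to a distributed memory implementation, here is a C/OpenMP rendering of the same idea (an addition to the original notes; fcn is a stand-in for any independent per-element computation). Note the language dependence mentioned above: C arrays are row-major, so for unit stride the parallelized outer loop runs over rows, with the column index varying fastest.

    #include <stdio.h>
    #include <omp.h>            /* compile with: cc -fopenmp array.c */

    #define N 512
    static double a[N][N];

    /* stand-in for the tutorial's fcn(i,j) */
    static double fcn(int i, int j) { return (double)i * (double)j; }

    int main(void) {
        /* Each thread owns a block of rows (the "mystart..myend" of the
           Fortran sketch); elements are independent, so no communication
           or synchronization is needed inside the loop. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)    /* unit stride in row-major C */
                a[i][j] = fcn(i, j);

        printf("a[%d][%d] = %f\n", N - 1, N - 1, a[N - 1][N - 1]);
        return 0;
    }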

One Possible Solution:

  • Implement as an SPMD model.

  • A master process initializes the array, sends each worker process info on the portion of the array it owns, and receives results from the workers.

  • Each worker process receives its portion of the array from the master, performs its share of the computation, and sends its results back to the master.

Array Processing
Parallel Solution 2: Pool of Tasks

Pool of Tasks Scheme:

  • Two kinds of processes are employed: a master process and multiple worker processes.

Master Process:

  • Holds the pool of tasks for worker processes to do

  • Sends a worker a task when requested

  • Collects results from workers

Worker Process: repeatedly does the following

  • Gets a task from the master process

  • Performs the computation

  • Sends results to the master

Pseudo code:


    find out if I am MASTER or WORKER

    if I am MASTER

      do until no more jobs
        if request send to WORKER next job
        else receive results from WORKER
      end do

    else if I am WORKER

      do until no more jobs
        request job from MASTER
        receive from MASTER next job
        calculate array element: a(i,j) = fcn(i,j)
        send results to MASTER
      end do

    endif

Discussion:

In the above pool-of-tasks example, each task calculates an individual array element as a job, so the computation-to-communication ratio is finely granular. Finely granular solutions incur more communication overhead in order to reduce task idle time; a more optimal solution might be to distribute more work with each job.



PI Calculation

  • The value of PI can be calculated in a number of ways. Consider the following Monte Carlo method of approximating PI:

    1. Inscribe a circle in a square

    2. Randomly generate points in the square

    3. Determine the number of points in the square that are also in the circle

    4. Let r be the number of points in the circle divided by the number of points in the square

    5. PI ~ 4 r (the ratio of the circle's area to the square's area is PI/4, so r approximates PI/4)

    6. Note that the more points generated, the better the approximation

  • Serial pseudo code for this procedure:


    npoints = 10000
    circle_count = 0

    do j = 1,npoints
      generate 2 random numbers between 0 and 1
      xcoordinate = random1
      ycoordinate = random2
      if (xcoordinate, ycoordinate) inside circle
      then circle_count = circle_count + 1
    end do

    PI = 4.0*circle_count/npoints

  • Note that most of the time in running this program would be spent executing the loop

  • Leads to an embarrassingly parallel solution

    • Computationally intensive

    • Minimal communication

    • Minimal I/O

PI Calculation
Parallel Solution

  • Parallel strategy: break the loop into portions that can be executed by the tasks.

  • For the task of approximating PI:

    • Each task executes its portion of the loop a number of times.

    • Each task can do its work without requiring any information from the other tasks (there are no data dependencies).

    • Uses the SPMD model. One task acts as master and collects the results.

  • Pseudo code solution: the changes for parallelism are the task count p, the reduced per-task loop count (num = npoints/p), and the master/worker collection of circle_counts at the end.


    npoints = 10000
    circle_count = 0

    p = number of tasks
    num = npoints/p

    find out if I am MASTER or WORKER

    do j = 1,num
      generate 2 random numbers between 0 and 1
      xcoordinate = random1
      ycoordinate = random2
      if (xcoordinate, ycoordinate) inside circle
      then circle_count = circle_count + 1
    end do

    if I am MASTER

      receive from WORKERS their circle_counts
      compute PI (use MASTER and WORKER calculations)

    else if I am WORKER

      send to MASTER circle_count

    endif
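A runnable C/OpenMP version of the same strategy (an addition to the original notes; it uses the POSIX rand_r() so each thread has an independent random stream, and a reduction in place of the explicit master/worker collection):

    #include <stdio.h>
    #include <stdlib.h>         /* rand_r() is POSIX */
    #include <omp.h>            /* compile with: cc -fopenmp pi.c */

    int main(void) {
        const long npoints = 10000000L;
        long circle_count  = 0;

        /* Each thread tests its share of the points with a private count
           and seed; the reduction combines the counts at the end. */
        #pragma omp parallel reduction(+:circle_count)
        {
            unsigned int seed = 1234u + (unsigned int)omp_get_thread_num();
            #pragma omp for
            for (long j = 0; j < npoints; j++) {
                double x = (double)rand_r(&seed) / (double)RAND_MAX;
                double y = (double)rand_r(&seed) / (double)RAND_MAX;
                if (x * x + y * y <= 1.0)
                    circle_count++;
            }
        }

        printf("PI ~ %f\n", 4.0 * (double)circle_count / (double)npoints);
        return 0;
    }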



Simple Heat Equation

  • Most problems in parallel computing require communication among the tasks. A number of common problems require communication with "neighbor" tasks.

  • The heat equation describes the temperature change over time, given initial temperature distribution and boundary conditions.

  • A finite differencing scheme is employed to solve the heat equation numerically on a square region.

  • The initial temperature is zero on the boundaries and high in the middle.

  • The boundary temperature is held at zero.

  • For the fully explicit problem, a time stepping algorithm is used. The elements of a 2-dimensional array represent the temperature at points on the square.

  • The calculation of an element is dependent upon neighbor element values.

  • A serial program would contain code like:


    do iy = 2, ny - 1
      do ix = 2, nx - 1
        u2(ix, iy) = u1(ix, iy) +
          cx * (u1(ix+1,iy) + u1(ix-1,iy) - 2.*u1(ix,iy)) +
          cy * (u1(ix,iy+1) + u1(ix,iy-1) - 2.*u1(ix,iy))
      end do
    end do

Simple Heat Equation
Parallel Solution

  • Implement as an SPMD model.

  • The entire array is partitioned and distributed as subarrays to all tasks; each task owns a portion of the total array.

  • Interior elements of a subarray are independent of other tasks; border elements depend on a neighbor task's data, so tasks must communicate border data at each time step.

  • A master process sends initial info to the workers, checks for convergence, and collects results; each worker process calculates its portion of the solution, communicating with neighbor processes as necessary.
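On a shared memory machine the border communication disappears and the time-step loop can be parallelized directly. A C/OpenMP sketch of the explicit update (an addition to the original notes; grid size, step count, and coefficients are illustrative):

    #include <stdio.h>
    #include <omp.h>            /* compile with: cc -fopenmp heat.c */

    #define NX 256
    #define NY 256

    static double u1g[NY][NX], u2g[NY][NX];   /* current and next fields */

    int main(void) {
        const double cx = 0.1, cy = 0.1;      /* stability needs cx + cy <= 0.5 */

        /* initial condition: zero boundaries, hot block in the middle */
        for (int iy = NY / 4; iy < 3 * NY / 4; iy++)
            for (int ix = NX / 4; ix < 3 * NX / 4; ix++)
                u1g[iy][ix] = 100.0;

        for (int step = 0; step < 100; step++) {
            /* update interior points; boundaries stay fixed at zero */
            #pragma omp parallel for
            for (int iy = 1; iy < NY - 1; iy++)
                for (int ix = 1; ix < NX - 1; ix++)
                    u2g[iy][ix] = u1g[iy][ix]
                        + cx * (u1g[iy][ix+1] + u1g[iy][ix-1] - 2.0 * u1g[iy][ix])
                        + cy * (u1g[iy+1][ix] + u1g[iy-1][ix] - 2.0 * u1g[iy][ix]);

            /* copy the next field back for the following step */
            #pragma omp parallel for
            for (int iy = 1; iy < NY - 1; iy++)
                for (int ix = 1; ix < NX - 1; ix++)
                    u1g[iy][ix] = u2g[iy][ix];
        }

        printf("center temperature after 100 steps: %f\n", u1g[NY/2][NX/2]);
        return 0;
    }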



1-D Wave Equation

In this example, the amplitude along a uniform, vibrating string is calculated after a specified amount of time has elapsed. The calculation involves the amplitude on the y axis, position i along the x axis, node points imposed along the string, and an update of the amplitude at discrete time steps. A finite-difference form of the equation to be solved is:

    A(i,t+1) = (2.0 * A(i,t)) - A(i,t-1) + (c * (A(i-1,t) - (2.0 * A(i,t)) + A(i+1,t)))

where c is a constant.

1-D Wave Equation
Parallel Solution

  • Implement as an SPMD model: the entire amplitude array is partitioned and distributed as subarrays to all tasks; each task owns a portion of the total array.

  • Load balancing: all points require equal work, so the points should be divided equally among tasks.

  • A block decomposition gives each task a contiguous chunk of data points.

  • Communication need only occur on data borders: the amplitude at each point depends on its neighbors, so tasks exchange border elements at each time step.

References and More Information

Author: Blaise Barney, Livermore Computing. See the original tutorial at https://computing.llnl.gov/tutorials/parallel_comp/ for the complete reference list, photos, and diagrams.