Lecture 1: Overview

(Lecture adapted from Blaise Barney, Lawrence Livermore National Laboratory; see https://computing.llnl.gov/tutorials/parallel_comp/)

What is Parallel Computing?

Description: Serial computing

For example:

Description: Serial computing example

Description: Parallel computing

For example:

Description: Parallel computing example

The Universe is Parallel:

The Real World is Massively Parallel

Description: collage of real-world parallel phenomena

Uses for Parallel Computing:

o    Atmosphere, Earth, Environment

o    Physics - applied, nuclear, particle, condensed matter, high pressure, fusion, photonics

o    Bioscience, Biotechnology, Genetics

o    Chemistry, Molecular Sciences

o    Geology, Seismology

o    Mechanical Engineering - from prosthetics to spacecraft

o    Electrical Engineering, Circuit Design, Microelectronics

o    Computer Science, Mathematics

Description: Computer simulations

o    Databases, data mining

o    Oil exploration

o    Web search engines, web based business services

o    Medical imaging and diagnosis

o    Pharmaceutical design

o    Financial and economic modeling

o    Management of national and multi-national corporations

o    Advanced graphics and virtual reality, particularly in the entertainment industry

o    Networked video and multi-media technologies

o    Collaborative work environments

Description: Computer simulations

 

Why Use Parallel Computing?

Main Reasons:

  • Save time and/or money: In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings. Parallel computers can be built from cheap, commodity components.


  • Solve larger problems: Many problems are so large and/or complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory. For example:
    • "Grand Challenge" (en.wikipedia.org/wiki/Grand_Challenge) problems requiring PetaFLOPS and PetaBytes of computing resources.
    • Web search engines/databases processing millions of transactions per second


  • Provide concurrency: A single compute resource can only do one thing at a time. Multiple computing resources can be doing many things simultaneously. For example, the Access Grid (www.accessgrid.org) provides a global collaboration network where people from around the world can meet and conduct work "virtually".


  • Use of non-local resources: Using compute resources on a wide area network, or even the Internet when local compute resources are scarce. For example:

Description: SETI@home logo

  • Limits to serial computing: Both physical and practical reasons pose significant constraints to simply building ever faster serial computers:
    • Transmission speeds - the speed of a serial computer is directly dependent upon how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.
    • Limits to miniaturization - processor technology is allowing an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.
    • Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.
    • Current computer architectures are increasingly relying upon hardware level parallelism to improve performance:
      • Multiple execution units
      • Pipelined instructions
      • Multi-core

Description: Intel Xeon 5600 processor die

Who and What?

Description: chart

Description: chart

Description: chart

The Future:

Description: chart

Description: Flops

 

Concepts and Terminology

von Neumann Architecture

Description: von Neumann architecture

o    Comprised of four main components:

§  Memory

§  Control Unit

§  Arithmetic Logic Unit

§  Input/Output

o    Read/write, random access memory is used to store both program instructions and data

§  Program instructions are coded data which tell the computer to do something

§  Data is simply information to be used by the program

o    Control unit fetches instructions/data from memory, decodes the instructions and then sequentially coordinates operations to accomplish the programmed task.

o    Arithmetic Logic Unit performs basic arithmetic operations

o    Input/Output is the interface to the human operator

 

Flynn's Classical Taxonomy

SISD: Single Instruction, Single Data

SIMD: Single Instruction, Multiple Data

MISD: Multiple Instruction, Single Data

MIMD: Multiple Instruction, Multiple Data

 

Single Instruction, Single Data (SISD):

  • A serial (non-parallel) computer
  • Single Instruction: Only one instruction stream is being acted on by the CPU during any one clock cycle
  • Single Data: Only one data stream is being used as input during any one clock cycle
  • Deterministic execution
  • This is the oldest type of computer and, even today, the most common
  • Examples: older generation mainframes, minicomputers and workstations; most modern day PCs.

Description: SISD

 

Description: photos of SISD systems: UNIVAC1, IBM 360, CRAY1, CDC 7600, PDP1, Dell laptop

 

Single Instruction, Multiple Data (SIMD):

  • A type of parallel computer
  • Single Instruction: All processing units execute the same instruction at any given clock cycle
  • Multiple Data: Each processing unit can operate on a different data element
  • Best suited for specialized problems characterized by a high degree of regularity, such as graphics/image processing.
  • Synchronous (lockstep) and deterministic execution
  • Two varieties: Processor Arrays and Vector Pipelines
  • Examples:
    • Processor Arrays: Connection Machine CM-2, MasPar MP-1 & MP-2, ILLIAC IV
    • Vector Pipelines: IBM 9000, Cray X-MP, Y-MP & C90, Fujitsu VP, NEC SX-2, Hitachi S820, ETA10
  • Most modern computers, particularly those with graphics processing units (GPUs), employ SIMD instructions and execution units.



Description: SIMD

 

Description: photos of SIMD systems: ILLIAC IV, MasPar

Description: SIMD processing diagram

Description: photos of SIMD systems: Cray X-MP, Cray Y-MP, Thinking Machines CM-2, Cell Processor (GPU)

 

Multiple Instruction, Single Data (MISD):

  • A type of parallel computer
  • Multiple Instruction: Each processing unit operates on the data independently via separate instruction streams.
  • Single Data: A single data stream is fed into multiple processing units.
  • Few actual examples of this class of parallel computer have ever existed. One is the experimental Carnegie-Mellon C.mmp computer (1971).
  • Some conceivable uses might be:
    • multiple frequency filters operating on a single signal stream
    • multiple cryptography algorithms attempting to crack a single coded message.



Description: MISD

 

Multiple Instruction, Multiple Data (MIMD):

  • A type of parallel computer
  • Multiple Instruction: Every processor may be executing a different instruction stream
  • Multiple Data: Every processor may be working with a different data stream
  • Execution can be synchronous or asynchronous, deterministic or non-deterministic
  • Currently, the most common type of parallel computer - most modern supercomputers fall into this category.
  • Examples: most current supercomputers, networked parallel computer clusters and "grids", multi-processor SMP computers, multi-core PCs.
  • Note: many MIMD architectures also include SIMD execution sub-components



Description: MIMD

 

Description: photos of MIMD systems: IBM POWER5, HP/Compaq Alphaserver, Intel IA32, AMD Opteron, Cray XT3, IBM BG/L

 

Some General Parallel Terminology

Like everything else, parallel computing has its own "jargon". Some of the more commonly used terms associated with parallel computing are listed below. Most of these will be discussed in more detail later.

Supercomputing / High Performance Computing (HPC)

Using the world's fastest and largest computers to solve large problems.

Node

A standalone "computer in a box". Usually comprised of multiple CPUs/processors/cores. Nodes are networked together to comprise a supercomputer.

CPU / Socket / Processor / Core

This varies, depending upon who you talk to. In the past, a CPU (Central Processing Unit) was a singular execution component for a computer. Then, multiple CPUs were incorporated into a node. Then, individual CPUs were subdivided into multiple "cores", each being a unique execution unit. CPUs with multiple cores are sometimes called "sockets" - vendor dependent. The result is a node with multiple CPUs, each containing multiple cores. The nomenclature is confused at times. Wonder why?

Description: a node with multiple sockets, each containing multiple cores

Task

A logically discrete section of computational work. A task is typically a program or program-like set of instructions that is executed by a processor. A parallel program consists of multiple tasks running on multiple processors.

Pipelining

Breaking a task into steps performed by different processor units, with inputs streaming through, much like an assembly line; a type of parallel computing.

Shared Memory

From a strictly hardware point of view, describes a computer architecture where all processors have direct (usually bus based) access to common physical memory. In a programming sense, it describes a model where parallel tasks all have the same "picture" of memory and can directly address and access the same logical memory locations regardless of where the physical memory actually exists.

Symmetric Multi-Processor (SMP)

Hardware architecture where multiple processors share a single address space and access to all resources; shared memory computing.

Distributed Memory

In hardware, refers to network based memory access for physical memory that is not common. As a programming model, tasks can only logically "see" local machine memory and must use communications to access memory on other machines where other tasks are executing.

Communications

Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network, however the actual event of data exchange is commonly referred to as communications regardless of the method employed.

Synchronization

The coordination of parallel tasks in real time, very often associated with communications. Often implemented by establishing a synchronization point within an application where a task may not proceed further until another task(s) reaches the same or logically equivalent point.

Synchronization usually involves waiting by at least one task, and can therefore cause a parallel application's wall clock execution time to increase.
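A synchronization point can be sketched with a barrier, for example Python's threading.Barrier (an illustration with assumed task names and counts, not from the lecture). No task proceeds past the barrier until all tasks have reached it:

```python
import threading

order = []                       # records each task's progress
barrier = threading.Barrier(3)   # synchronization point for 3 tasks

def task(name):
    order.append((name, "reached"))   # work done before the sync point
    barrier.wait()                    # block until all 3 tasks arrive
    order.append((name, "passed"))    # work done after the sync point

threads = [threading.Thread(target=task, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every "reached" entry is recorded before any "passed" entry, which is exactly the waiting (and potential wall-clock cost) described above.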

Granularity

In parallel computing, granularity is a qualitative measure of the ratio of computation to communication.

·         Coarse: relatively large amounts of computational work are done between communication events

·         Fine: relatively small amounts of computational work are done between communication events

Observed Speedup

Observed speedup of a code which has been parallelized, defined as:

 wall-clock time of serial execution
-------------------------------------
wall-clock time of parallel execution

One of the simplest and most widely used indicators for a parallel program's performance.

Parallel Overhead

The amount of time required to coordinate parallel tasks, as opposed to doing useful work. Parallel overhead can include factors such as:

·         Task start-up time

·         Synchronizations

·         Data communications

·         Software overhead imposed by parallel compilers, libraries, tools, operating system, etc.

·         Task termination time

Massively Parallel

Refers to the hardware that comprises a given parallel system - having many processors. The meaning of "many" keeps increasing, but currently, the largest parallel computers can be comprised of processors numbering in the hundreds of thousands.

Embarrassingly Parallel

Solving many similar, but independent tasks simultaneously; little to no need for coordination between the tasks.

Scalability

Refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate increase in parallel speedup with the addition of more processors. Factors that contribute to scalability include:

·         Hardware - particularly memory-cpu bandwidths and network communications

·         Application algorithm

·         Parallel overhead related

·         Characteristics of your specific application and coding

 

Parallel Computer Memory Architectures

Shared Memory

General Characteristics:

  • Shared memory parallel computers vary widely, but generally have in common the ability for all processors to access all memory as global address space.
  • Multiple processors can operate independently but share the same memory resources.
  • Changes in a memory location effected by one processor are visible to all other processors.
  • Shared memory machines can be divided into two main classes based upon memory access times: UMA and NUMA.

Uniform Memory Access (UMA):

  • Most commonly represented today by Symmetric Multiprocessor (SMP) machines
  • Identical processors
  • Equal access and access times to memory
  • Sometimes called CC-UMA - Cache Coherent UMA. Cache coherent means if one processor updates a location in shared memory, all the other processors know about the update. Cache coherency is accomplished at the hardware level.

Non-Uniform Memory Access (NUMA):

  • Often made by physically linking two or more SMPs
  • One SMP can directly access memory of another SMP
  • Not all processors have equal access time to all memories
  • Memory access across link is slower
  • If cache coherency is maintained, then may also be called CC-NUMA - Cache Coherent NUMA

Description: Shared memory architecture
Shared Memory (UMA)


Description: NUMA
Shared Memory (NUMA)

Advantages:

Disadvantages:

 

Distributed Memory

General Characteristics:

Description: Distributed memory architecture

Advantages:

Disadvantages:

 

Hybrid Distributed-Shared Memory

Description: Hybrid memory architecture

Description: Hybrid memory architecture

 

Parallel Programming Models

Overview

o    SHARED memory model on a DISTRIBUTED memory machine: Kendall Square Research (KSR) ALLCACHE approach.

Machine memory was physically distributed across networked machines, but appeared to the user as a single shared memory (global address space). Generically, this approach is referred to as "virtual shared memory".

Description: KSR1

o    DISTRIBUTED memory model on a SHARED memory machine: Message Passing Interface (MPI) on SGI Origin 2000.

The SGI Origin 2000 employed the CC-NUMA type of shared memory architecture, where every task has direct access to global address space spread across all machines. However, the ability to send and receive messages using MPI, as is commonly done over a network of distributed memory machines, was implemented and commonly used.

Description: SGI Origin 2000

 

Shared Memory Model (without threads)

Implementations:

 

Threads Model

Implementations:

In both cases, the programmer is responsible for determining all parallelism.

More Information:

 

Distributed Memory / Message Passing Model

Implementations:

More Information:

 

Data Parallel Model

 

Implementations:

Implementations are available for most common parallel platforms.

HPF compilers were relatively common in the 1990s, but are no longer commonly implemented.

 

Hybrid Model

 

SPMD and MPMD

Single Program Multiple Data (SPMD):

Multiple Program Multiple Data (MPMD):

 

Designing Parallel Programs

Automatic vs. Manual Parallelization

 

Understand the Problem and the Program

Example of an easily parallelized problem:

Calculate the potential energy for each of several thousand independent conformations of a molecule. When done, find the minimum energy conformation.

Example of a problem with little or no parallelism: calculation of the Fibonacci series (0, 1, 1, 2, 3, 5, 8, 13, 21, ...) by use of the formula:

    F(n) = F(n-1) + F(n-2)
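The two problems above sit at opposite extremes: each conformation's energy can be computed independently, while each Fibonacci term requires the two preceding terms, a serial data dependence. A minimal Python sketch of that dependence (the iterative form is an illustration, not from the lecture):

```python
def fib(n):
    """Iteratively compute F(n) using F(n) = F(n-1) + F(n-2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b   # every step consumes the prior step's results
    return a

series = [fib(k) for k in range(9)]   # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```

Because each loop iteration depends on the one before it, the iterations cannot be distributed across tasks.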

  • Identify the program's hotspots:
    • Know where most of the real work is being done. The majority of scientific and technical programs usually accomplish most of their work in a few places.
    • Profilers and performance analysis tools can help here
    • Focus on parallelizing the hotspots and ignore those sections of the program that account for little CPU usage.
  • Identify bottlenecks in the program
    • Are there areas that are disproportionately slow, or cause parallelizable work to halt or be deferred? For example, I/O is usually something that slows a program down.
    • May be possible to restructure the program or use a different algorithm to reduce or eliminate unnecessary slow areas
  • Identify inhibitors to parallelism. One common class of inhibitor is data dependence, as demonstrated by the Fibonacci sequence above.
  • Investigate other algorithms if possible. This may be the single most important consideration when designing a parallel application.

Description: hotspots and bottlenecks

 

Partitioning

Domain Decomposition:

Description: domain decomposition

Description: block and cyclic distributions

Functional Decomposition:

Description: functional decomposition

Ecosystem Modeling
Each program calculates the population of a given group, where each group's growth depends on that of its neighbors. As time progresses, each process calculates its current state, then exchanges information with the neighbor populations. All tasks then progress to calculate the state at the next time step.

Description: Functional decomposition example

Signal Processing
An audio signal data set is passed through four distinct computational filters. Each filter is a separate process. The first segment of data must pass through the first filter before progressing to the second. When it does, the second segment of data passes through the first filter. By the time the fourth segment of data is in the first filter, all four tasks are busy.

Description: Functional decomposition example
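The four-filter pipeline above can be sketched with Python generators. The generators model only the stage-to-stage dataflow (they run lazily, not concurrently); the filter function and gain values are hypothetical placeholders:

```python
def filter_stage(segments, gain):
    """One filter: consume segments from upstream, emit processed ones."""
    for seg in segments:
        yield [sample * gain for sample in seg]   # stand-in for real filtering

segments = [[1, 2], [3, 4], [5, 6], [7, 8]]       # four data segments
stream = segments
for gain in (2, 2, 2, 2):                          # four filters in series
    stream = filter_stage(stream, gain)

out = list(stream)   # each sample scaled by 2**4 = 16
```

In a true pipelined implementation each stage would be a separate task, so that once the pipeline fills, all four stages have a segment in flight at the same time.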

Climate Modeling
Each model component can be thought of as a separate task. Arrows represent exchanges of data between components during computation: the atmosphere model generates wind velocity data that are used by the ocean model, the ocean model generates sea surface temperature data that are used by the atmosphere model, and so on.

Description: Functional decomposition example

 

Communications

Who Needs Communications?

The need for communications between tasks depends upon your problem:

Factors to Consider:

There are a number of important factors to consider when designing your program's inter-task communications:

Description: Collective communications examples

Description: Callgraph of parallel hello world program

 

Synchronization

Types of Synchronization:

 

Data Dependencies

Definition:

Examples:

 

DO 500 J = MYSTART,MYEND
   A(J) = A(J-1) * 2.0
500 CONTINUE

The value of A(J-1) must be computed before the value of A(J), therefore A(J) exhibits a data dependency on A(J-1). Parallelism is inhibited.
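The same loop-carried dependence can be shown in Python (a sketch; the array contents are arbitrary). Each iteration reads the value the previous iteration wrote, so the iterations must run in order:

```python
def doubling_recurrence(a):
    """Apply a[j] = a[j-1] * 2.0 for j = 1 .. len(a)-1, in order."""
    a = list(a)
    for j in range(1, len(a)):
        a[j] = a[j - 1] * 2.0   # depends on iteration j-1's result
    return a

result = doubling_recurrence([1, 0, 0, 0])   # [1, 2.0, 4.0, 8.0]
```

Splitting this loop across tasks would compute wrong values unless each task waited for its neighbor's result, serializing the work anyway.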

If Task 2 has A(J) and task 1 has A(J-1), computing the correct value of A(J) necessitates:

 

task 1        task 2
------        ------

X = 2         X = 4
  .             .
  .             .
Y = X**2      Y = X**3

As with the previous example, parallelism is inhibited. The value of Y computed by each task depends on when, and whether, each task's update of X becomes visible to the other, so the tasks cannot safely run independently.

How to Handle Data Dependencies:

 

Load Balancing

Description: Load Imbalance

How to Achieve Load Balance:

 

Granularity

Computation / Communication Ratio:

Fine-grain Parallelism:

  • Relatively small amounts of computational work are done between communication events
  • Low computation to communication ratio
  • Facilitates load balancing
  • Implies high communication overhead and less opportunity for performance enhancement
  • If granularity is too fine it is possible that the overhead required for communications and synchronization between tasks takes longer than the computation.

Coarse-grain Parallelism:

  • Relatively large amounts of computational work are done between communication/synchronization events
  • High computation to communication ratio
  • Implies more opportunity for performance increase
  • Harder to load balance efficiently

Which is Best?

  • The most efficient granularity is dependent on the algorithm and the hardware environment in which it runs.
  • In most cases the overhead associated with communications and synchronization is high relative to execution speed so it is advantageous to have coarse granularity.
  • Fine-grain parallelism can help reduce overheads due to load imbalance.

Description: Granularity

 

I/O

The Bad News:

The Good News:

 

Limits and Costs of Parallel Programming

Amdahl's Law:

  • Amdahl's Law states that potential program speedup is defined by the fraction of code (P) that can be parallelized:

 

                     1
    speedup   =   -------
                   1 - P

  • If none of the code can be parallelized, P = 0 and the speedup = 1 (no speedup).
  • If all of the code is parallelized, P = 1 and the speedup is infinite (in theory).
  • If 50% of the code can be parallelized, maximum speedup = 2, meaning the code will run twice as fast.
  • Introducing the number of processors performing the parallel fraction of work, the relationship can be modeled by:

 

                     1
    speedup   =   ---------
                   P/N + S

where P = parallel fraction, N = number of processors, and S = serial fraction (S = 1 - P).
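The formula above is easy to express as a small function (a sketch for checking the numbers, not part of the lecture):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup = 1 / (P/N + S), where S = 1 - P."""
    s = 1.0 - p
    return 1.0 / (p / n + s)

print(round(amdahl_speedup(0.50, 10), 2))     # 1.82
print(round(amdahl_speedup(0.99, 100), 2))    # 50.25
print(round(amdahl_speedup(0.99, 1e6), 2))    # approaches the 1/S = 100 ceiling
```

No matter how large N grows, the speedup is bounded above by 1/S.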

  • It soon becomes obvious that there are limits to the scalability of parallelism. For example:

 

                        speedup
             --------------------------------
       N     P = .50      P = .90     P = .99
   -----     -------      -------     -------
      10        1.82         5.26        9.17
     100        1.98         9.17       50.25
    1000        1.99         9.91       90.99
   10000        1.99         9.99       99.02
  100000        1.99         9.99       99.90

Description: Amdahl's Law speedup curves

Certain problems can improve their parallel performance by increasing the problem size. Suppose a problem spends 85 seconds (85%) in a parallelizable 2D grid calculation and 15 seconds (15%) in serial code. We can increase the problem size by doubling the grid dimensions and halving the time step. This results in four times the number of grid points and twice the number of time steps. The timings then look like:

    2D Grid Calculations    680 seconds    97.84%
    Serial fraction          15 seconds     2.16%

Problems that increase the percentage of parallel time with their size are more scalable than problems with a fixed percentage of parallel time.
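The arithmetic behind the scaled timings can be checked directly. Doubling both grid dimensions and halving the time step gives 4 x 2 = 8 times the grid work, while the serial time stays fixed (the 85-second original grid time is inferred from 680 / 8, not stated in this section):

```python
serial = 15.0                            # serial seconds, unchanged by scaling
scaled_grid = (680.0 / 8) * 4 * 2        # 8x the original grid work = 680 s
parallel_pct = 100 * scaled_grid / (scaled_grid + serial)
print(round(parallel_pct, 2))            # 97.84
```

The serial fraction shrinks from 15% to about 2.16%, raising the Amdahl ceiling on speedup from roughly 1/0.15 = 6.7 to roughly 1/0.0216 = 46.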

Complexity:

Portability:

Resource Requirements:

Scalability:

 

Performance Analysis and Tuning

 

Parallel Examples

Array Processing

  • This example demonstrates calculations on 2-dimensional array elements, with the computation on each array element being independent from other array elements.
  • The serial program calculates one element at a time in sequential order.
  • Serial code could be of the form:

 

do j = 1,n
  do i = 1,n
    a(i,j) = fcn(i,j)
  end do
end do

  • The calculation of elements is independent of one another - leads to an embarrassingly parallel situation.
  • The problem should be computationally intensive.

Description: Embarrassingly parallel array calculation


Array Processing
Parallel Solution 1

  • Array elements are distributed so that each processor owns a portion of an array (subarray).
  • Independent calculation of array elements ensures there is no need for communication between tasks.
  • Distribution scheme is chosen by other criteria, e.g. unit stride (stride of 1) through the subarrays. Unit stride maximizes cache/memory usage.
  • Since it is desirable to have unit stride through the subarrays, the choice of a distribution scheme depends on the programming language. See the Block - Cyclic Distributions Diagram for the options.
  • After the array is distributed, each task executes the portion of the loop corresponding to the data it owns. For example, with Fortran block distribution:

 

do j = mystart, myend
  do i = 1,n
    a(i,j) = fcn(i,j)
  end do
end do

  • Notice that only the outer loop variables are different from the serial solution.

Description: Embarrassingly parallel array calculation data decomposition

One Possible Solution:

 

find out if I am MASTER or WORKER

if I am MASTER

  initialize the array
  send each WORKER info on part of array it owns
  send each WORKER its portion of initial array

  receive from each WORKER results

else if I am WORKER

  receive from MASTER info on part of array I own
  receive from MASTER my portion of initial array

  # calculate my portion of array
  do j = my first column,my last column
    do i = 1,n
      a(i,j) = fcn(i,j)
    end do
  end do

  send MASTER results

endif
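The static decomposition above can be sketched in Python with a thread pool standing in for explicit MASTER/WORKER message passing (a real HPC code would typically use MPI). The element function fcn, the array size n, and the worker count are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

n = 8

def fcn(i, j):
    return i + j   # placeholder per-element computation

def compute_block(lo, hi):
    """Worker: compute the columns [lo, hi) that it owns."""
    return {j: [fcn(i, j) for i in range(n)] for j in range(lo, hi)}

nworkers = 4
block = n // nworkers
with ThreadPoolExecutor(max_workers=nworkers) as pool:
    # "send each WORKER info on the part of the array it owns"
    futures = [pool.submit(compute_block, w * block, (w + 1) * block)
               for w in range(nworkers)]
    # "receive from each WORKER results"
    a_cols = {}
    for f in futures:
        a_cols.update(f.result())
```

Because the work per column is identical, this static split keeps the workers evenly loaded; each worker owns a contiguous block of columns, matching the Fortran block distribution above.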


Array Processing
Parallel Solution 2: Pool of Tasks

Pool of Tasks Scheme:

Master Process:

Worker Process: repeatedly does the following

 

find out if I am MASTER or WORKER

if I am MASTER

  do until no more jobs
    if request send to WORKER next job
    else receive results from WORKER
  end do

else if I am WORKER

  do until no more jobs
    request job from MASTER
    receive from MASTER next job

    calculate array element: a(i,j) = fcn(i,j)

    send results to MASTER
  end do

endif
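A minimal sketch of the pool-of-tasks scheme in Python, using threads and a shared `queue.Queue` to stand in for the request/send message passing of the pseudo code (the function `fcn` and the sizes are illustrative). Faster workers naturally take more jobs from the pool.

```python
import threading
import queue

def fcn(i, j):
    return i * j  # placeholder per-element computation

# The master's pool of jobs, and a channel for returning results.
jobs = queue.Queue()
results = queue.Queue()

n = 4
for i in range(n):
    for j in range(n):
        jobs.put((i, j))

def worker():
    # Repeatedly: take a job, compute one array element, send the result back.
    while True:
        try:
            i, j = jobs.get_nowait()
        except queue.Empty:
            return  # no more jobs
        results.put((i, j, fcn(i, j)))

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# "Master" collects the results into the full array.
a = [[0] * n for _ in range(n)]
while not results.empty():
    i, j, val = results.get()
    a[i][j] = val
```

Note that, as in the pseudo code, workers do not know in advance which elements they will compute; the schedule emerges dynamically from the pool.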

Discussion:

  • In the above pool-of-tasks example, each job is the calculation of a single array element, so the computation to communication ratio is finely granular.
  • Finely granular solutions incur more communication overhead in order to reduce task idle time.
  • A more optimal solution might be to distribute more work with each job. The "right" amount of work is problem dependent.

 

Parallel Examples

PI Calculation

  • The value of PI can be calculated in a number of ways. Consider the following method of approximating PI:
    1. Inscribe a circle in a square
    2. Randomly generate points in the square
    3. Determine the number of points in the square that are also in the circle
    4. Let r be the number of points in the circle divided by the number of points in the square; since the circle's area is PI/4 times the square's area, r approximates PI/4
    5. PI ~ 4 r
    6. Note that the more points generated, the better the approximation
  • Serial pseudo code for this procedure:

 

npoints = 10000
circle_count = 0

do j = 1,npoints
  generate 2 random numbers between 0 and 1
  xcoordinate = random1
  ycoordinate = random2
  if (xcoordinate, ycoordinate) inside circle
  then circle_count = circle_count + 1
end do

PI = 4.0*circle_count/npoints

  • Note that most of the time in running this program would be spent executing the loop
  • Leads to an embarrassingly parallel solution
    • Computationally intensive
    • Minimal communication
    • Minimal I/O
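The serial pseudo code above translates directly into a short, runnable Python sketch. Here the random points fall in the unit square and are tested against the quarter circle of radius 1, which gives the same area ratio as the inscribed-circle picture; the seed is fixed only to make the run repeatable.

```python
import random

def estimate_pi(npoints, seed=0):
    """Serial Monte Carlo estimate of PI, following the pseudo code above."""
    rng = random.Random(seed)
    circle_count = 0
    for _ in range(npoints):
        x = rng.random()           # random point in the unit square
        y = rng.random()
        if x * x + y * y <= 1.0:   # inside the quarter circle of radius 1
            circle_count += 1
    return 4.0 * circle_count / npoints

pi_estimate = estimate_pi(10000)
```

With 10,000 points the estimate is typically within a few hundredths of PI; the error shrinks roughly as 1/sqrt(npoints).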

[Figure: One method of determining PI]


PI Calculation
Parallel Solution

  • Parallel strategy: break the loop into portions that can be executed by the tasks.
  • For the task of approximating PI:
    • Each task executes its portion of the loop a number of times.
    • Each task can do its work without requiring any information from the other tasks (there are no data dependencies).
    • Uses the SPMD model. One task acts as master and collects the results.
  • Pseudo code solution; the changes for parallelism are computing the number of tasks, dividing the loop among them, and the master/worker collection of results at the end.

 

npoints = 10000
circle_count = 0

p = number of tasks
num = npoints/p

find out if I am MASTER or WORKER

do j = 1,num
  generate 2 random numbers between 0 and 1
  xcoordinate = random1
  ycoordinate = random2
  if (xcoordinate, ycoordinate) inside circle
  then circle_count = circle_count + 1
end do

if I am MASTER

  receive from WORKERS their circle_counts
  compute PI (use MASTER and WORKER calculations)

else if I am WORKER

  send to MASTER circle_count

endif
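The SPMD structure above can be sketched in Python. Each task runs the same loop over its share of the points with its own independent random stream, and the "master" step is just summing the per-task counts; the tasks are run sequentially here for simplicity (in an MPI program each would be a separate process), and the seed scheme is illustrative.

```python
import random

def task_circle_count(task_id, num, seed_base=12345):
    """One task's share of the loop; each task uses its own random stream."""
    rng = random.Random(seed_base + task_id)
    circle_count = 0
    for _ in range(num):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            circle_count += 1
    return circle_count

npoints = 10000
p = 4                  # number of tasks
num = npoints // p     # each task's share of the loop

# Every "task" executes the same code on its own data (SPMD); the only
# communication is the master summing the counts at the end.
counts = [task_circle_count(task, num) for task in range(p)]
pi_estimate = 4.0 * sum(counts) / (num * p)
```

Giving each task its own seeded generator matters: if all tasks used the same random stream, they would sample identical points and the extra tasks would add no statistical value.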

[Figure: One method of determining PI]

 

Simple Heat Equation

  • Most problems in parallel computing require communication among the tasks. A number of common problems require communication with "neighbor" tasks.
  • The heat equation describes the temperature change over time, given initial temperature distribution and boundary conditions.
  • A finite differencing scheme is employed to solve the heat equation numerically on a square region.
  • The initial temperature is zero on the boundaries and high in the middle.
  • The boundary temperature is held at zero.
  • For the fully explicit problem, a time stepping algorithm is used. The elements of a 2-dimensional array represent the temperature at points on the square.
  • The calculation of an element is dependent upon neighbor element values.

[Figure: Heat equation finite difference formula]

  • A serial program would contain code like:

 

do iy = 2, ny - 1
  do ix = 2, nx - 1
    u2(ix, iy) = u1(ix, iy) +
      cx * (u1(ix+1,iy) + u1(ix-1,iy) - 2.*u1(ix,iy)) +
      cy * (u1(ix,iy+1) + u1(ix,iy-1) - 2.*u1(ix,iy))
  end do
end do

[Figures: Initial heat conditions; Heat equation]
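The serial stencil loop above can be run as a small, self-contained Python sketch; the grid size, coefficients, and initial hot spot are illustrative choices, not values from the original example.

```python
def heat_step(u1, cx, cy):
    """One explicit finite-difference time step; boundary cells are held fixed."""
    ny, nx = len(u1), len(u1[0])
    u2 = [row[:] for row in u1]  # copy, so boundaries keep their values
    for iy in range(1, ny - 1):
        for ix in range(1, nx - 1):
            u2[iy][ix] = (u1[iy][ix]
                          + cx * (u1[iy][ix + 1] + u1[iy][ix - 1] - 2.0 * u1[iy][ix])
                          + cy * (u1[iy + 1][ix] + u1[iy - 1][ix] - 2.0 * u1[iy][ix]))
    return u2

# Zero boundaries, high temperature in the middle, as described above.
n = 7
u = [[0.0] * n for _ in range(n)]
u[n // 2][n // 2] = 100.0
for _ in range(5):
    u = heat_step(u, cx=0.1, cy=0.1)
```

After a few steps the hot spot spreads to its neighbors while the boundary stays at zero, matching the stated boundary condition.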


Simple Heat Equation
Parallel Solution

 

find out if I am MASTER or WORKER

if I am MASTER
  initialize array
  send each WORKER starting info and subarray
  receive results from each WORKER

else if I am WORKER
  receive from MASTER starting info and subarray

  do t = 1, nsteps
    update time
    send neighbors my border info
    receive from neighbors their border info

    update my portion of solution array
  end do

  send MASTER results

endif
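The border exchange in the WORKER loop above can be sketched in Python on a 1-D array for brevity (the names and the 0.1 coefficient are illustrative): two "tasks" each own half the points, trade one edge value per step (a ghost cell), and together reproduce the serial update exactly.

```python
def serial_step(u):
    """One smoothing step over the whole array; endpoints held fixed."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + 0.1 * (u[i + 1] + u[i - 1] - 2.0 * u[i])
    return new

def parallel_step(u, split):
    """Same step computed by two 'tasks', each owning one half of u."""
    left, right = u[:split], u[split:]
    # Border exchange: each task sends its edge value to its neighbor.
    ghost_for_left = right[0]    # left task receives right task's border
    ghost_for_right = left[-1]   # right task receives left task's border
    # Left task updates its points (the global left endpoint stays fixed).
    ext_l = left + [ghost_for_left]
    new_left = left[:]
    for i in range(1, len(left)):
        new_left[i] = ext_l[i] + 0.1 * (ext_l[i + 1] + ext_l[i - 1] - 2.0 * ext_l[i])
    # Right task updates its points (the global right endpoint stays fixed).
    ext_r = [ghost_for_right] + right
    new_right = right[:]
    for i in range(len(right) - 1):
        new_right[i] = ext_r[i + 1] + 0.1 * (ext_r[i + 2] + ext_r[i] - 2.0 * ext_r[i + 1])
    return new_left + new_right

u_serial = u_parallel = [0.0, 0.0, 0.0, 100.0, 0.0, 0.0, 0.0]
for _ in range(4):
    u_serial = serial_step(u_serial)
    u_parallel = parallel_step(u_parallel, split=3)
```

The key point is that each task needs only one value from each neighbor per step, so communication stays small relative to the interior computation as the subarrays grow.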

 

1-D Wave Equation

  • In this example, the amplitude along a uniform, vibrating string is calculated after a specified amount of time has elapsed. The amplitude A at each node point i is updated at discrete time steps:

    A(i,t+1) = (2.0 * A(i,t)) - A(i,t-1) + (c * (A(i-1,t) - (2.0 * A(i,t)) + A(i+1,t)))

where c is a constant


1-D Wave Equation
Parallel Solution

[Figure: Wave equation partition]

 

find out number of tasks and task identities

#Identify left and right neighbors
left_neighbor = mytaskid - 1
right_neighbor = mytaskid + 1
if mytaskid = first then left_neighbor = last
if mytaskid = last then right_neighbor = first

find out if I am MASTER or WORKER
if I am MASTER
  initialize array
  send each WORKER starting info and subarray
else if I am WORKER
  receive starting info and subarray from MASTER
endif

#Update values for each point along string
#In this example the master participates in calculations
do t = 1, nsteps
  send left endpoint to left neighbor
  receive left endpoint from right neighbor
  send right endpoint to right neighbor
  receive right endpoint from left neighbor

  #Update points along line
  do i = 1, npoints
    newval(i) = (2.0 * values(i)) - oldval(i)
      + (sqtau * (values(i-1) - (2.0 * values(i)) + values(i+1)))
  end do
end do

#Collect results and write to file
if I am MASTER
  receive results from each WORKER
  write results to file
else if I am WORKER
  send results to MASTER
endif
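The update inside the time-step loop above can be run serially as a short Python sketch; the number of points, the value of sqtau, and the initial half-sine "pluck" are illustrative choices (sqtau corresponds to (c*dt/dx)^2 and must be at most 1 for the scheme to be stable).

```python
import math

npoints = 21
sqtau = 0.25  # (c * dt / dx)^2, illustrative and small enough to be stable

# Initial pluck: half a sine wave; the string starts at rest
# (oldval = values approximates zero initial velocity).
values = [math.sin(math.pi * i / (npoints - 1)) for i in range(npoints)]
oldval = values[:]

for t in range(50):
    newval = values[:]
    for i in range(1, npoints - 1):  # endpoints stay fixed
        newval(i) if False else None  # (pseudo code form: newval(i) = ...)
        newval[i] = (2.0 * values[i]) - oldval[i] \
            + sqtau * (values[i - 1] - (2.0 * values[i]) + values[i + 1])
    oldval, values = values, newval
```

Because the update at point i needs values(i-1) and values(i+1), a parallel version must exchange the endpoint values of each subarray every time step, exactly as the send/receive lines in the pseudo code above do.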

 


References and More Information