
Foundations of Sequential and Parallel Programming: Raising A New Generation of Leaders


www.covenantuniversity.edu.ng

Raising a new Generation of Leaders

CSC 216
Foundations of Sequential and Parallel
Programming
Introduction to Parallelism
Lecture outline
• Introduction
• Parallel Computing
• Parallel Computer

• Supercomputing

• Parallel Programming

• Parallel Architecture

PARALLEL COMPUTING
• Parallel programming involves the concurrent or simultaneous execution of
multiple processes or threads.
• Sequential programming, by contrast, involves a consecutive and ordered
execution of processes one after another.
• With sequential programming, computation is modeled after problems with a
chronological sequence of events.
• In other words, in sequential programming processes run one after another
in succession, while in parallel computing multiple processes can be
executing at the same time.
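As a minimal illustrative sketch (not from the original slides; the task names and one-second durations are assumptions), the Python fragment below contrasts the two models: the sequential version runs two tasks one after the other, while the parallel version starts both on separate threads so they make progress at the same time.

import threading
import time

def task(name, seconds):
    # Simulate a unit of work that takes 'seconds' to complete.
    time.sleep(seconds)
    print(name, "done")

# Sequential: each task starts only after the previous one has finished (~2s total).
start = time.time()
task("task-1", 1)
task("task-2", 1)
print("sequential: %.1fs" % (time.time() - start))

# Parallel: both tasks run on separate threads at the same time (~1s total).
start = time.time()
threads = [threading.Thread(target=task, args=("task-%d" % i, 1)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("parallel: %.1fs" % (time.time() - start))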

PARALLEL COMPUTING (Cont.)

• In sequential programming, the program executes a process that waits for
user input, then another process runs to handle the returned input, creating a
series of cascading events.
• In parallel programming, by contrast, processes may execute concurrently,
and their sub-processes or threads may communicate and exchange signals
during execution; programmers therefore have to put measures in place to
allow for such transactions.
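One common measure for such transactions is a thread-safe queue through which concurrently running threads exchange data and signals. The sketch below is illustrative only (the producer/consumer roles and the None sentinel are assumptions, not material from the slides).

import queue
import threading

q = queue.Queue()  # thread-safe channel shared by the two threads

def producer():
    for i in range(3):
        q.put(i)       # send a value to the consumer
    q.put(None)        # sentinel: signal that no more data is coming

def consumer():
    while True:
        item = q.get()
        if item is None:   # stop when the sentinel arrives
            break
        print("received", item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()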

PARALLEL COMPUTING (Cont.)
• Parallel computing is the use of a parallel computer to reduce the time
needed to solve a single computational problem.
• Parallel computing is now considered a standard way of computation for
scientists and engineers to solve diverse problems in different areas such
as galactic evolution, climate modeling, aircraft design, and molecular
dynamics.

Benefits of Parallel Computing
1. Save time and resources:
In theory, using more resources on a task will shorten its time to completion,
with potential cost savings (a simple speedup model is sketched after this list).
Parallel computers can be built from cheap, commodity components.
2. Solve larger problems:
Many problems are so large and complex that it is impossible to solve them on a
single computer, especially given limited computer memory. Examples include
problems requiring petaFLOPS and petabytes of computing resources, and
web search engines or databases processing millions of transactions per second.
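A brief note on the "in theory" caveat above (this model is not on the original slide, but it is the standard way to quantify the claim): Amdahl's law states that if a fraction p of a program can be parallelized across N processors, the overall speedup is at most

S(N) = 1 / ((1 - p) + p / N)

so a program that is 90% parallelizable (p = 0.9) can be sped up by at most a factor of 10, no matter how many processors are added.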

Benefits of Parallel Computing (Cont.)
• Researchers have envisioned using high-performance computing to improve understanding and solve
problems in:
• Prediction of weather, climate and global change; challenges in materials science; semiconductor
design; superconductivity; structural biology; design of pharmaceutical drugs; the human genome;
quantum chromodynamics.
• Astronomy; challenges in transportation.
• Vehicle signature, turbulence, vehicle dynamics, nuclear fusion.
• Efficiency of combustion systems, enhanced oil and gas recovery.
• Computational ocean sciences, speech, vision.
• Undersea surveillance for anti-submarine warfare.

Benefits of Parallel Computing (Cont.)
The real world is massively parallel.

Benefits of Parallel Computing (Cont.)
3. Provide concurrency:
• A single computer resource can only perform one operation at a
time.
• Multiple computing resources can execute many processes
simultaneously.
• For example, the Access Grid (www.accessgrid.org) provides a
global collaboration network where people from around the world can
meet and conduct work virtually.

Benefits of Parallel Computing (Cont.)
4. Use of non-local resources:
• Using computer resources on a wide area network or the Internet when local
computer resources are scarce. For example:
• SETI@home (setiathome.berkeley.edu) has over 1.3 million users and 3.2 million
computers in nearly every country in the world. Source:
www.boincsynergy.com/stats/ (July 2012).
• Folding@home (folding.stanford.edu) uses over 450,000 CPUs globally (July
2011).

Benefits of Parallel Computing (Cont.)
5. Limits to serial computing:
Both physical and practical reasons pose significant constraints to simply
building ever faster serial computers:

a) Transmission speeds - the speed of a serial computer is directly
dependent upon how fast data can move through hardware.
Absolute limits are the speed of light (30 cm per nanosecond) and the
transmission limit of copper wire (9 cm per nanosecond).
Increasing speeds necessitate increasing proximity of processing elements.
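A rough sanity check on why this matters (added here for illustration): at about 30 cm per nanosecond, a signal can travel only about 10 cm during one clock cycle of a 3 GHz processor (roughly 0.33 ns), so faster serial machines force their components ever closer together.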

Benefits of Parallel Computing (Cont.)

b) Limits to miniaturization - processor technology is allowing an increasing
number of transistors to be placed on a chip. However, even with molecular or
atomic-level components, a limit will be reached on how small components can be.

c) Economic limitations - it is increasingly expensive to make a single processor
faster. Using a larger number of moderately fast commodity processors to achieve
the same or better performance is less expensive.

Benefits of Parallel Computing (Cont.)
d) Current computer architectures are increasingly relying upon hardware
level parallelism to improve performance:
‒Multiple execution units
‒Pipelined instructions
‒Multi-core

Parallel Processing
A parallel computer is a multiple-processor computer system supporting parallel
computing.
Two important categories of parallel computers are:
• multi-computers - multiple computers and an interconnection network.
• The processors on different computers interact by passing
messages to each other.
• centralized multiprocessors - integrated system in which all CPUs share
access to a single global memory.
• shared memory supports communication and synchronization
among processors.
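To make the two styles concrete, here is a minimal Python sketch (illustrative only; a real multi-computer would use something like MPI across machines, whereas this runs processes on one machine). The first part models message passing with two processes connected by a pipe; the second models shared memory with several workers updating one shared counter.

from multiprocessing import Pipe, Process, Value

def worker(conn):
    # Message passing: the worker shares no memory with the parent;
    # it communicates only by sending a message over the pipe.
    conn.send("result from worker")
    conn.close()

def incrementer(counter):
    # Shared memory: every process updates the same counter,
    # guarded by a lock to synchronize access.
    with counter.get_lock():
        counter.value += 1

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    print(parent_end.recv())   # receive the message passed by the worker
    p.join()

    counter = Value("i", 0)    # an integer living in shared memory
    procs = [Process(target=incrementer, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)       # 4: all workers updated the same location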
Parallel Processing (Cont.)

• Parallel programming is programming in a language that allows the user to
explicitly indicate how different portions of the computation may be executed
concurrently by different processors.
• Parallel processing is the concurrent or simultaneous execution of two or
more parts of a single computer program, at speeds far exceeding that of a
conventional computer.
• Parallel processing requires two or more interconnected processors, each of
which executes a portion of the task; some supercomputer parallel-processing
systems have hundreds of thousands of microprocessors.
Parallel Processing (Cont.)
• The processors access data through shared memory.
• The efficiency of parallel processing is dependent upon the development of
programming languages that optimize the division of the tasks among the
processors.
• General Purpose Parallel Processing Computer:
• A computer designed to provide general support for parallel programming
so as to be able to meet the parallel processing requirements of both
scientific and business applications.
• General purpose parallel processing computers, in general, exhibit the
characteristics of supercomputers.

Parallel Processing (Cont.)

However, supercomputers may not be general purpose parallel processing
computers, for example those that focus their hardware support on array
processing.

Supercomputing
General Purpose Computer: A computer that is capable of performing, in a reasonably
efficient manner, the functions required by both scientific and business applications.
Supercomputer: A very fast and powerful computer, outperforming most
mainframes, and used for intensive calculations, scientific simulations, animated
graphics, and other work that requires sophisticated and high-powered
computing.

Supercomputing (Cont.)
• Supercomputers are the most powerful computers that can be built; some
form of parallelism is built into them.
• As computer speeds increase, the bar for what counts as a supercomputer rises too.
• A supercomputer can cost about $100 million, and its electric bill can run
about $9 million per year (2012 figures).
• Because of this high cost, supercomputers are found almost exclusively in
government research facilities.

Supercomputing (Cont.)
The meaning of the word supercomputer has changed over time.
• In 1976, supercomputer meant a Cray-1, a single-CPU computer with a
high-performance pipelined vector processor connected to a high-performance
memory system.
• Today, supercomputer means a parallel computer with thousands of CPUs.

Parallel Programming Architecture
• Parallel programming is programming in a language that allows users to
explicitly indicate how different portions of the computation may be executed
concurrently by different processors.
• As seen above, parallel computers are now more available, but in order to take
advantage of multiple processors, programmers and compilers must be able to
identify operations that may be performed in parallel (i.e., concurrently).

Parallel Programming Architecture (Cont.)

• One of the most widely used parallel computer classifications, in use since 1966, is
Flynn’s Taxonomy.
• It distinguishes multiprocessor computers according to the two dimensions of
Instruction and Data:
• SISD: Single instruction stream, Single data stream
• SIMD: Single instruction stream, Multiple data streams
• MISD: Multiple instruction streams, Single data stream
• MIMD: Multiple instruction streams, Multiple data streams

Parallel Programming Architecture (Cont.)
SISD (Single-Instruction Single-Data): serial computer.
MISD (Multiple-Instruction Single-Data): special purpose computer.
SIMD (Single-Instruction Multi-Data):
• All processors in a parallel computer execute the same instruction but operate
on different data at the same time.
• Only one program can be run at a time.
• Processors run in synchronous, lockstep fashion.
• Shared or distributed memory.
• Less flexible in expressing parallel algorithms; usually exploits parallelism on
array operations.
• Examples: CM-2, MasPar


Parallel Programming Architecture (Cont.)
MIMD (Multi-Instruction Multi-data)
• All processors in a parallel computer can execute different instructions and
operate on different data at the same time.
• Parallelism achieved by connecting multiple processors together.
• Shared or distributed memory.
• Different programs can be run simultaneously.
• Each processor can perform any operation regardless of what other processors
are doing.
• Examples: Cray T90, Cray T3E, IBM-SP2

Parallel Programming Architecture (Cont.)
• SISD (Single-Instruction Single-Data) Machines
• A serial (non-parallel) computer.
• Single instruction: only one instruction stream is acted on by the CPU
during any one clock cycle.
• Single data: only one data stream is used as input during any one
clock cycle.
• Examples: most PCs, single-CPU workstations and mainframes

Parallel Programming Architecture (Cont.)
SIMD Machines (Single-Instruction Multi-Data): a type of parallel computer.
• Single instruction: all processing units execute the same
instruction at any given clock cycle.
• Multiple data: each processing unit can operate on a different
data element.
• It typically has an instruction dispatcher, a very high-bandwidth
internal network, and a very large array of very small-capacity
instruction units. Best suited to specialized problems
characterized by a high degree of regularity, e.g.,
image processing.
• Two varieties: Processor Arrays and Vector Pipelines
• Examples: Connection Machines, MasPar-1, MasPar-2;
IBM 9000, Cray C90, Fujitsu VP, etc.
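A small conceptual sketch of the SIMD idea (illustrative; it assumes NumPy is available and, of course, runs on an ordinary CPU rather than a processor array): a single operation is written once and applied element-wise to whole arrays, which is how vector units and array processors are programmed.

import numpy as np  # assumed available; used only to express array-at-a-time operations

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])

# One "instruction" (element-wise add) applied to many data elements at once;
# NumPy dispatches this to vectorized machine code internally.
c = a + b
print(c)   # [11. 22. 33. 44.]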
Parallel Programming Architecture (Cont.)
MISD Machines (Multiple-Instruction Single-Data):
a single data stream is fed into multiple processing units.
• Each processing unit operates on the data
independently via independent instruction
streams.
• Possible use: multiple frequency filters
operating on a single signal stream.
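Under that interpretation, a tiny sketch of MISD (illustrative only; the "filters" are made-up stand-ins for real signal-processing filters): several independent instruction streams are applied to one and the same data stream.

# One data stream...
signal = [0.0, 1.0, 4.0, 9.0, 16.0]

# ...processed by several independent "instruction streams" (filters).
filters = {
    "identity": lambda x: x,
    "doubled": lambda x: 2 * x,
    "clipped": lambda x: min(x, 5.0),
}

for name, f in filters.items():
    print(name, [f(x) for x in signal])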

Parallel Programming Architecture (Cont.)
MIMD Machines (Multi-Instruction Multi-Data)
• Multiple instruction: each processor may execute a
different instruction stream.
• Multiple data: each processor may work with a different
data stream.
• Execution can be synchronous or asynchronous,
deterministic or nondeterministic.
• Examples: most current supercomputers, grids,
networked parallel computers, and SMP (Symmetric
Multiprocessor) computers.
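A minimal MIMD-flavoured sketch (illustrative assumptions only: the two functions and their inputs are invented for the example): each process executes a different instruction stream on its own data, at the same time.

from multiprocessing import Process

def sum_list(data):
    print("sum:", sum(data))   # one instruction stream working on one data stream

def max_list(data):
    print("max:", max(data))   # a different instruction stream on different data

if __name__ == "__main__":
    procs = [
        Process(target=sum_list, args=([1, 2, 3],)),
        Process(target=max_list, args=([7, 4, 9],)),
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()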

Parallel Programming Architecture (Cont.)
Parallel computer architectures can also be classified according to memory
access:
Shared memory computers
Message-passing (distributed memory) computers
Multi-processor computers
Multi-computers
Clusters
Grids

TEST 1 – NEXT WEEK

