
Subject – Parallel and Distributed Systems

Day-2

 How do we increase the speed of computers?


 FLYNN’S TAXONOMY OF COMPUTERS AND TYPES OF PARALLELISM

 Symmetric multi-computers (SMC), Symmetric multi-processors (SMP), Massively Parallel Processing (MPP)
Day-2
How do we increase the speed of computers?

To increase the speed of computers,

 In a uni-processor system, we may use several techniques at the hardware level - namely memory interleaving, caching, pipelining, etc. - and operating-system techniques (e.g., multiprogramming, time sharing, multi-user O/S) at the software level.

 We may also use multi-processors or multi-computers.

In the context of this question, we may refer to Flynn's classification (taxonomy) of computers (serial and parallel), proposed in 1966. These

are as follows:

 SISD: Single Instruction and Single Data (conventional uni-processor): pipelined and non-pipelined computers

 SIMD: Single Instruction and Multiple Data (vector processors, processor arrays): known as the data-parallel model computer

 MIMD: Multiple Instruction and Multiple Data (multi-processors/multi-computers): known as the control-parallel model computer

 MISD: Multiple Instruction and Single Data (systolic arrays): very difficult to implement and has very few practical applications.
Detailed discussion of Flynn's classification:
SISD (uni-processor computer) Day-2

Case – I: Non-pipelined computer

Figure-1: Block-diagram of non-pipelined sequential computer

Hardware level techniques to increase speed of uni-processors:


• Memory interleaving:

o Dividing the entire main memory into a number of banks (blocks); see the sketch below for an example.

o It is a technique for increasing the effective speed of memory.
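
A minimal sketch of low-order interleaving (the bank count NUM_BANKS = 4 and the modulo mapping are illustrative assumptions; real hardware does this in the memory controller):

```c
/* Low-order interleaving: consecutive addresses map to different banks,
   so sequential accesses can proceed in several banks at once. */
#include <stdio.h>

#define NUM_BANKS 4                         /* assumed, illustrative value */

int main(void) {
    for (unsigned addr = 0; addr < 8; addr++) {
        unsigned bank   = addr % NUM_BANKS; /* low-order bits select the bank */
        unsigned offset = addr / NUM_BANKS; /* remaining bits select the word */
        printf("address %u -> bank %u, offset %u\n", addr, bank, offset);
    }
    return 0;
}
```

Addresses 0, 1, 2, 3 land in banks 0, 1, 2, 3, so a sequential scan keeps all four banks busy at once; that overlap is the increase in effective memory speed referred to above.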

• Caching (introducing a cache memory between main memory and the CPU):

o The speed of fetching data from secondary memory to main memory (low) and the speed of fetching data from main memory to the CPU (comparatively high) differ widely; the cache holds frequently used data close to the CPU to reduce the average memory-access time.
Detailed discussion of Flynn's classification (cont.):
SISD (uni-processor computer) Day-2

Case – I: Non-pipelined computer


• Software level: operating systems (multi-user, multi-programming, time-sharing) help improve the performance of the system.

Case-II: Pipelined model computer:

• Hardware level: the ALU is segmented into stages that work in an overlapping fashion (in order to reduce execution time); see the worked example below.

• Practical example: a multi-stage instruction pipeline (fetch, decode, execute, write-back). Drawback: dependencies between instructions (hazards) can stall the pipeline.


Parallelism achieved via the SISD model computer (especially the pipelined computer) is called temporal parallelism.
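
As a worked example of the time saved by segmentation (a standard pipeline-timing result; the stage count k, instruction count n, and stage delay τ are illustrative symbols, not values from the slide):

```latex
% k-stage pipeline, n instructions, stage delay \tau
T_{\mathrm{seq}}  = n \, k \, \tau, \qquad
T_{\mathrm{pipe}} = (k + n - 1) \, \tau, \qquad
S = \frac{T_{\mathrm{seq}}}{T_{\mathrm{pipe}}} = \frac{n k}{k + n - 1} \to k \ \text{ as } n \to \infty
```

For instance, k = 4 stages and n = 100 instructions give S = 400/103 ≈ 3.9, close to the ideal speedup of 4.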
Detailed discussion of Flynn's classification (cont.):
Parallel computer: Day-2
SIMD (Data parallel) model: Example – Array processor
• Here, the instruction is the same but the data set is distributed (e.g., D = D1 ∪ D2 ∪ … ∪ Dn).

Figure-3: Block-diagram of Data parallel model computer

Note: here, PEi stands for the i-th processing element, whereas PEMi denotes the private memory associated with PEi. We need an interconnection network to connect the processing elements.

Parallelism achieved via the SIMD model computer is called data parallelism (often called synchronous parallelism).

• Synchronous, since the same instruction is executed by different processing elements (on distinct datasets).

• Practical example: element-wise vector/array operations (see the sketch below). Drawback: only regular, data-parallel problems map well onto this model.
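
A minimal data-parallel sketch in C, using OpenMP threads as a software stand-in for the hardware PEs of an array processor (the pragma, the array size N, and the element-wise operation are illustrative assumptions; compile with gcc -fopenmp):

```c
/* Data parallelism: the SAME instruction (c[i] = a[i] + b[i]) applied to a
   partitioned data set D = D1 U D2 U ... U Dn. OpenMP threads stand in for
   the processing elements of an array processor (illustrative only). */
#include <stdio.h>

#define N 8

int main(void) {
    int a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 10 * i; }

    /* Each "PE" (thread) executes the same instruction on its slice of D. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    for (int i = 0; i < N; i++)
        printf("c[%d] = %d\n", i, c[i]);
    return 0;
}
```

Without -fopenmp the pragma is simply ignored and the loop runs sequentially, so the sketch is easy to try either way.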
Detailed discussion of Flynn's classification (cont.):
Day-2
Parallel computer:
MIMD (control parallel) model:

• Here, instead of processing elements, we use processors, and each processor usually executes a distinct instruction stream on a distinct data set to reduce execution time.

Figure-4: Block-diagram of control parallel model computer


• The problem that occurs in the SIMD model computer is resolved by the MIMD model computer, but it is costly.

• Computers under such a model may be of two types –


o Multi-processors (shared memory model/tightly coupled) and

o Multi-computers (distributed memory model/loosely coupled).


Detailed discussion of Flynn's classification:
Day-2
Parallel computer:
MIMD (control parallel) model (cont.)

• Multi-processors: a computer with multiple CPUs and a shared (global) memory. It can be divided into two types:
o Shared memory is physically in one place (also known as uniform memory access (UMA))

o Shared memory but distributed among the processors (also known as non-uniform memory access (NUMA)).

Figure 4.1 - UMA interconnection network in shared memory model.


Detailed discussion of Flynn's classification:
Day-2
Parallel computer:
MIMD (control parallel) model (cont.)

• Multi-processors (but NUMA model):

Figure -4.2: NUMA architecture model


Detailed discussion of Flynn's classification:
Day-2

Table-1: Comparison between UMA and NUMA

• Implementation: UMA is easy to implement; NUMA is complex to implement.
• Video RAM: UMA needs no dedicated video RAM (lower cost); NUMA needs dedicated video RAM.
• Memory controller: in uniform memory access, a single memory controller is used; in non-uniform memory access, multiple memory controllers are used.
• Speed: uniform memory access is slower than non-uniform memory access.
• Memory access time: in UMA it is balanced (equal); in NUMA it is not equal.
• Applications: UMA is applicable for general-purpose and time-sharing applications; NUMA is applicable for real-time applications.
Detailed discussion of Flynn's classification:
Day-2
Parallel computer:
MIMD (control parallel) model (cont.)

• Multi-computers/distributed memory model computers

Figure – 5: Block diagram of distributed memory model

• Here, each Pi represents one processor and Mi represents one local (private) memory block associated with Pi; the memory locations of Mi are directly accessible to Pi.

• If Pi wants to access any location of Mj (i ≠ j) associated with Pj, then Pi first sends a message to Pj; if granted, the location is accessed on Pi's behalf and the result is returned as a message (see the sketch below).
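
A minimal message-passing sketch of that exchange, with MPI as the illustration (the slide names no library, so MPI here is an assumption). Rank 0 plays Pi and rank 1 plays Pj; run with, e.g., mpirun -np 2 ./a.out:

```c
/* Distributed memory: P0 cannot read P1's local memory directly, so P1
   sends the requested value as a message. Requires an MPI installation. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {            /* Pj: owns the location in its local memory Mj */
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {     /* Pi: receives the value as a message */
        MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("P0 received %d from P1's memory\n", value);
    }

    MPI_Finalize();
    return 0;
}
```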
Detailed discussion of Flynn's classification:
Day-2
Parallel computer:
MIMD (control parallel) model (cont.)

An example of assigning jobs in an MIMD model computer:

o If a system is running code on a 2-processor system (CPUs "P1" and "P2") in a parallel environment and we wish to perform tasks "T1" and "T2", we may assign T1 to P1 and T2 to P2 as follows to reduce execution time:

P1 ---> T1 and P2 ---> T2

Parallelism achieved via the MIMD model computer is called control parallelism (often called asynchronous parallelism). Asynchronous, since distinct instructions may be executed by different processors (on distinct datasets); see the sketch below.

Advantage: saves time.

Disadvantage: costly.
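
A minimal control-parallel sketch of the P1 ---> T1 and P2 ---> T2 assignment above, with POSIX threads standing in for the two processors (the two task bodies are illustrative assumptions; compile with gcc -pthread):

```c
/* Control parallelism: DIFFERENT instruction streams run concurrently.
   Thread 1 runs task T1 (a sum); thread 2 runs task T2 (a factorial). */
#include <stdio.h>
#include <pthread.h>

static void *task_T1(void *arg) {           /* T1 assigned to "P1" */
    (void)arg;
    long sum = 0;
    for (long i = 1; i <= 1000; i++) sum += i;
    printf("T1: sum 1..1000 = %ld\n", sum);
    return NULL;
}

static void *task_T2(void *arg) {           /* T2 assigned to "P2" */
    (void)arg;
    long prod = 1;
    for (long i = 1; i <= 10; i++) prod *= i;
    printf("T2: 10! = %ld\n", prod);
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, task_T1, NULL);   /* P1 ---> T1 */
    pthread_create(&p2, NULL, task_T2, NULL);   /* P2 ---> T2 */
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    return 0;
}
```

The two threads execute distinct instructions on distinct data and may finish in either order, which is exactly the asynchronous behaviour described above.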
Day-2

SHARED MEMORY VS. DISTRIBUTED MEMORY

• In a shared-memory parallel platform, processors can communicate through simple reads and writes to a single shared memory (see the sketch after this list).

• In distributed memory platforms, each processor has its own private memory. Processors need to exchange
messages to communicate.

• Shared-memory platforms are easier to program. Unfortunately, the connection between the processors and the shared memory quickly becomes a bottleneck.
• Thus they do not scale to as many processors as distributed-memory platforms, and become very expensive when the number of processors increases.
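
A minimal sketch of communication "through simple reads and writes": two POSIX threads share one variable (the shared variable and the mutex are illustrative; compile with gcc -pthread):

```c
/* Shared memory: the writer thread communicates with the main thread by
   writing to an address both can see; the mutex orders the accesses. */
#include <stdio.h>
#include <pthread.h>

static int shared = 0;                      /* one cell of the shared memory */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *writer(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    shared = 99;                            /* communicate with a simple write */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t w;
    pthread_create(&w, NULL, writer, NULL);
    pthread_join(w, NULL);                  /* writer has finished */
    printf("read from shared memory: %d\n", shared);  /* a simple read */
    return 0;
}
```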

Discussion: given a parallel problem's behaviour, when is shared memory applicable and when is distributed memory applicable?
Symmetrical Multicomputer (SMC)

• A symmetrical multi-computer is one in which all of the hosts are identical and connected to each other through an interconnection network. They share the same operating system.

Asymmetrical multi-computer system: the hosts are not all identical, but they are connected via a network.

Symmetric multi-processor (SMP):


• It involves multiple processors with a shared main memory.

• They share the same operating system.


Massively parallel processing (MPP)

• It is the term for using a large number of computer processors (or separate computers) to simultaneously perform a set of coordinated computations in parallel.
• One approach is grid computing, where the processing power of many computers in distributed, diverse administrative domains is opportunistically used whenever a computer is available. An example is BOINC, a volunteer-based, opportunistic grid system.
