(SMC), (SMP), (MPP): Symmetric Multi-Computers, Symmetric Multi-Processors, Massively Parallel Processors
Day-2
and, at the software level, the operating system (e.g., multiprogramming, time sharing, multi-user OS).
In the context of this question, we may refer to Flynn's classification (taxonomy) of computers (serial and parallel), proposed in 1966. The categories
are as follows:
SISD: Single Instruction, Single Data (conventional uni-processor): pipelined and non-pipelined computers
SIMD: Single Instruction, Multiple Data (vector processors, processor arrays): known as the data-parallel model
MIMD: Multiple Instruction, Multiple Data (multiprocessors/multicomputers): known as the control-parallel model
MISD: Multiple Instruction, Single Data (systolic arrays): very difficult to implement and has few practical applications
Detailed discussion of Flynn's classification:
SISD (uni-processor computer)
o The speed of fetching data from secondary memory to main memory (low speed) and the speed of fetching data from main memory to the CPU (very high speed) differ widely, so memory access becomes the bottleneck.
Detailed discussion of Flynn's classification (cont.):
SISD (uni-processor computer)
• Hardware level: The ALU is segmented into stages that work in an overlapping (pipelined) fashion, in order to reduce execution time.
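As a rough illustration (not from the original slides), the benefit of segmenting a unit into overlapping stages can be estimated with the classic pipeline cycle-count formulas. The stage count and instruction count below are arbitrary example values, and the model assumes one clock cycle per stage with no stalls (an idealisation; real pipelines have hazards):

```python
# Sketch: cycle counts for a k-stage pipeline vs. a non-pipelined unit.
# Assumes every stage takes one clock cycle and there are no stalls.

def nonpipelined_cycles(n_instructions: int, k_stages: int) -> int:
    # Each instruction occupies the whole unit for k cycles.
    return n_instructions * k_stages

def pipelined_cycles(n_instructions: int, k_stages: int) -> int:
    # The first instruction takes k cycles to fill the pipeline,
    # then one instruction completes per cycle.
    return k_stages + (n_instructions - 1)

n, k = 100, 4
print(nonpipelined_cycles(n, k))  # 400
print(pipelined_cycles(n, k))     # 103
```

For 100 instructions and 4 stages the overlapped execution needs 103 cycles instead of 400, which is the sense in which pipelining "reduces execution time".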
Note: Here, PEi stands for the i-th processing element, whereas PEMi denotes the private memory associated with PEi. We need an interconnection network to connect the PEs.
Parallelism achieved via the SIMD model computer is called data parallelism (often called synchronous parallelism).
Drawback:
• Synchronous, since the same instruction must be executed by different processing elements (on distinct datasets).
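A minimal sketch of data parallelism (my illustration, not from the slides): the same instruction, here squaring, is applied to every element of the data set, with each list element standing in for one processing element's datum; Python's `map` plays the role of the lockstep broadcast of a single instruction.

```python
# Sketch: SIMD-style data parallelism in ordinary Python.
# One instruction (square) is applied to many data items;
# each list element stands in for one processing element's datum.

def square(x: int) -> int:
    return x * x

data = [1, 2, 3, 4]               # one datum per processing element (PE)
result = list(map(square, data))  # the same instruction on all data
print(result)  # [1, 4, 9, 16]
```

A real SIMD machine would execute the multiplications simultaneously in hardware; the point here is only that one instruction stream drives many data items.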
Detailed discussion of Flynn's classification (cont.):
Parallel computer:
MIMD (control parallel) model:
• Here, instead of processing elements, we use full processors, and each processor usually executes a distinct instruction stream on a
distinct data set to reduce execution time.
• Multi-processors: a computer with multiple CPUs and a shared (global) memory. It can be divided into two types:
o Shared memory is physically in one place (also known as uniform memory access (UMA))
o Shared memory but distributed among the processors (also known as non-uniform memory access (NUMA)).
UMA vs. NUMA:
• UMA is easy to implement; NUMA is complex to implement.
• UMA needs no dedicated video RAM (lower cost); NUMA needs dedicated video RAM.
• UMA uses a single memory controller; NUMA uses multiple memory controllers.
• UMA memory access is slower than NUMA; NUMA memory access is faster than UMA.
• In UMA, memory access time is balanced (equal) for all processors; in NUMA, memory access time is not equal.
• UMA is applicable for general-purpose and time-sharing applications; NUMA is applicable for real-time applications.
Detailed discussion of Flynn's classification (cont.):
Parallel computer:
MIMD (control parallel) model (cont.)
• Here, each Pi represents one processor and Mi represents the local (private) memory block associated with Pi; the memory locations of Mi
are directly accessible to Pi.
• If Pi wants to access a location of Mj (i ≠ j) associated with Pj, then Pi first sends a message to Pj. If access is granted, the location is accessed.
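The request/grant exchange above can be sketched in a few lines. This is purely illustrative (the names `owner_pj`, `remote_read`, and `addr_0` are my inventions): threads stand in for the processors Pi and Pj, and queues stand in for the interconnection network carrying the messages.

```python
# Sketch: P_i asks P_j (the owner of private memory block M_j) for a value.
# Threads model processors; queues model the interconnection network.
import threading
import queue

def owner_pj(requests: queue.Queue, replies: queue.Queue) -> None:
    m_j = {"addr_0": 42}        # P_j's private memory block M_j
    addr = requests.get()       # receive P_i's access request
    replies.put(m_j.get(addr))  # grant: reply with the stored value

def remote_read(addr: str):
    """P_i's side: send a request over the 'network' and await the reply."""
    requests, replies = queue.Queue(), queue.Queue()
    pj = threading.Thread(target=owner_pj, args=(requests, replies))
    pj.start()
    requests.put(addr)          # P_i requests a location of M_j
    value = replies.get()
    pj.join()
    return value

print(remote_read("addr_0"))  # 42
```

The key contrast with shared memory is that Pi never touches Mj directly: every remote access costs a round trip of explicit messages.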
Detailed discussion of Flynn's classification (cont.):
Parallel computer:
MIMD (control parallel) model (cont.)
Parallelism achieved via the MIMD model computer is called control parallelism (often called asynchronous
parallelism). It is asynchronous, since distinct instructions may be executed by different processors (on distinct datasets).
Disadvantage: costly.
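A small sketch of control parallelism (my illustration, not from the slides): two workers run different instruction streams on different data sets, with no lockstep between them. The worker names and data values are arbitrary.

```python
# Sketch: MIMD-style control parallelism — each worker executes a
# *different* function on *different* data, asynchronously.
import threading

results = {}

def worker_sum(data):  # one instruction stream
    results["sum"] = sum(data)

def worker_max(data):  # a different instruction stream
    results["max"] = max(data)

t1 = threading.Thread(target=worker_sum, args=([1, 2, 3],))
t2 = threading.Thread(target=worker_max, args=([7, 4, 9],))
t1.start(); t2.start()   # run concurrently, with no synchronisation between them
t1.join(); t2.join()
print(results["sum"], results["max"])  # 6 9
```

Contrast this with the SIMD sketch earlier: there, one instruction drove all the data; here, each processor follows its own program.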
• In shared memory parallel platform, processors can communicate through simple reads and writes to a single
shared memory.
• In distributed memory platforms, each processor has its own private memory. Processors need to exchange
messages to communicate.
• Shared-memory platforms are easier to program. Unfortunately, the connection between the processors and the
shared memory quickly becomes a bottleneck.
• Thus they do not scale to as many processors as distributed-memory platforms, and they become very expensive as the
number of processors increases.
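The two communication styles above can be contrasted in a few lines (an illustrative sketch of my own: threads model shared-memory processors, and a queue models the message passing of a distributed-memory machine):

```python
import threading
import queue

# Shared-memory style: both "processors" read and write one variable.
shared = {"counter": 0}
lock = threading.Lock()

def shared_inc():
    with lock:               # communicate via plain reads/writes to shared memory
        shared["counter"] += 1

t = threading.Thread(target=shared_inc)
t.start(); t.join()
shared_inc()
print(shared["counter"])  # 2

# Distributed-memory style: each processor has private data and must
# send an explicit message to communicate.
channel = queue.Queue()

def sender():
    private = 99             # sender's private memory, invisible to the receiver
    channel.put(private)     # explicit message send

t = threading.Thread(target=sender)
t.start(); t.join()
received = channel.get()     # explicit message receive
print(received)  # 99
```

The lock in the shared-memory half hints at why that model stops scaling: every processor contends for the same memory, whereas message passing only costs the communicating pair.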
Question: for which parallel problem behaviours is shared memory applicable, and for which is distributed memory applicable?
Symmetrical Multicomputer (SMC)
• A symmetrical multi-computer is one in which all of the hosts are identical and connected to each other through an
interconnection network. They share the same operating system.
Asymmetrical Multicomputer System: the hosts are not identical but are connected via a network.
• Massively parallel processing (MPP) is the term for using a large number of computer processors (or separate computers) to simultaneously perform a set of
coordinated computations in parallel.
• One approach is grid computing, where the processing power of many computers in distributed, diverse administrative
domains is opportunistically used whenever a computer is available. An example is BOINC, a volunteer-based,
opportunistic grid system.