SRM Institute of Science and Technology

College of Engineering and Technology


School of Computing
Department of Computing Technologies SET - A
Academic Year: 2022-23 (EVEN)
Answer Key

Test: CLA-T3 Date: 09.11.23


Course Code & Title: 18CSE356T Distributed Operating Systems Duration: 110 minutes
Year & Sem: III & IV Year / V & VII Sem Max. Marks: 50

Course Articulation Matrix:


S.No. COs PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3
1 CO1 3 - - 3 - - - - - - - - - - -
2 CO2 3 3 - 2 - - - - - - - - - - -
3 CO3 3 2 2 3 - - - - - - - - - 2 -
4 CO4 3 3 - 3 - - - - - - - - - 2 -
5 CO5 3 3 - 2 - - - - - - - - - - -

Part – A
(10 x 1 = 10 Marks)
Instructions: Answer all questions
Q.No Question Marks BL CO PO PI Code
1 All threads have exactly the same ………... 1 L1 4 1 1.6.1
A. Code Section
B. Data Section
C. OS Resource
D. Address Space.

2 Processor allocation falls into two categories, namely 1 L1 4 1 1.6.1


A. Centralized and Deterministic
B. Static and Dynamic
C. Migratory and Nonmigratory
D. None of the mentioned.
3 When a process wants to start up a child process, it goes 1 L2 4 1 1.6.1
around and checks out who is currently offering the
service that it needs.
a) Heuristic Algorithm.
b) Bidding Algorithm.
c) Hierarchical Algorithm.
d) Centralized Algorithm.
4 In the _______ approach, each process submitted by the user 1 L1 4 1 1.6.1
for processing is viewed as a collection of related tasks
and these tasks are scheduled to suitable nodes in order to
improve the performance.

a) Task assignment approach


b) Load balancing approach
c) Load dividing approach
d) Load Sharing approach
5 ________ represents a solution to the static scheduling 1 L2 4 1 1.6.1
problem that requires a reasonable amount of time and
other resources to perform its function.
a) Approximate solution
b) Heuristics solution
c) Optimal solution
d) Suboptimal solution
6 _________ is a DSM system that is fundamentally based 1 L2 5 1 1.6.1
on software objects, but which can place each object on a
separate page so the hardware MMU can be used for
detecting accesses to shared objects.
a) RPC
b) IPC
c) MUNIN
d) None of the mentioned.
7 Having data belonging to two independent processes in 1 L1 5 1 1.6.1
the same page is called____________.
a) Buffering
b) Blocking
c) Message Passing
d) False Sharing.
8 Splitting the cache into separate instruction and data 1 L2 5 1 1.6.1
caches, or using a set of buffers, is usually called
a) Cache Buffer
b) Data Buffer
c) Instruction Buffer
d) Register Buffer
9 What are the characteristics of processors in a distributed 1 L1 5 1 1.6.1
system?
a) They vary in size and function
b) They are same in size and function
c) They are manufactured with single purpose
d) They are real-time devices
10 The capability of a system to adapt to increased service 1 L1 5 1 1.6.1
load is called ___________
a) scalability
b) tolerance
c) capacity
d) none of the mentioned
Part – B
(5 x 2 = 10 Marks)
11 Explain the design issues in thread packages. 2 L2 4 1 1.6.1
Ans:
 A set of primitives (e.g., library calls) available to
the user relating to threads is called a thread
package.
 Static threads: how many threads there will be is
decided when the program is written or compiled,
and each thread is allocated a fixed stack. This
approach is simple but inflexible.
 Dynamic threads: threads can be created and
destroyed on the fly during execution, as in the
sketch below.
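
A minimal Python sketch of the dynamic model (illustrative only, not part of the original key): the number of threads is chosen at run time, and each thread is created, run, and joined on the fly.

    import threading

    def worker(task_id):
        # All threads share the process's code and data sections,
        # but each thread runs on its own stack.
        print("thread handling task", task_id)

    # Dynamic threads: created and destroyed on the fly during execution.
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()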

12 State and explain why we need diskless workstations. 2 L4 4 2 2.6.2
Ans:
 If the workstations are diskless, the file system
must be implemented by one or more remote file
servers. Diskless workstations are cheaper.
 Ease of maintenance: installing a new release of a
program on a few servers is easier than on
hundreds of machines. Backup and hardware
maintenance are also simpler.
 Diskless workstations have no fans and make
less noise.

13 State the advantages of a receiver-initiated distributed 2 L4 4 2 2.6.2
heuristic algorithm.
Ans:
 It does not put extra load on the system at critical
times.
 When the system is heavily loaded, the chance of
a machine having insufficient work is small; when
this does happen, it is easy to find work to take
over (see the sketch below).
 (Drawback at low load: it creates considerable
probe traffic as all the unemployed machines
desperately hunt for work.)
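
A rough Python sketch of the receiver-initiated idea (the node records and the probe_limit parameter are assumptions for illustration): an underloaded machine probes a few randomly chosen nodes and takes over a task from the first one with work to spare.

    import random

    def find_work(nodes, probe_limit=3):
        # An idle machine probes up to probe_limit randomly chosen nodes.
        candidates = list(nodes)
        for _ in range(min(probe_limit, len(candidates))):
            target = random.choice(candidates)
            candidates.remove(target)
            if target["load"] > 1:      # target has work to spare
                target["load"] -= 1     # take one task from it
                return target["name"]
        return None                     # no work found this round

    nodes = [{"name": "A", "load": 0}, {"name": "B", "load": 4}]
    print(find_work(nodes))             # -> 'B'

At heavy load most probes succeed quickly; at light load this same probing is what generates the probe traffic noted above.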

14 Illustrate what is meant by bus arbitration. 2 L3 5 1 1.6.1


Ans:
 To prevent two or more CPUs from trying to
access the memory at the same time, some kind
of bus arbitration is needed.
15 Summarize the advantages of object-based distributed 2 L5 5 1 1.6.1
shared memory.
Ans:
 It is more modular than the other techniques.
 The implementation is more flexible because
accesses are controlled.
 Synchronization and access can be integrated
together cleanly.

Part – C (4 x 5 = 20 Marks)


Answer any 4 out of 6 questions
16 Differentiate the advantages and disadvantages of different 5 L4 4 2 2.6.4
types of disk usage.
Ans:

17 Illustrate the steps to run a process remotely. 5 L4 4 2 2.6.2
Ans:
 To start with, the remote process needs the same
view of the file system, the same working
directory, and the same environment variables.
 Some system calls can be done remotely but
some cannot.
 For example, reads from the keyboard and writes
to the screen can never be carried out on the
remote machine; the same holds for all system
calls that query the state of the home machine.
 Some must be done remotely, such as the UNIX
system calls SBRK (adjust the size of the data
segment), NICE (set CPU scheduling priority),
and PROFIL (enable profiling of the program
counter). A toy dispatch sketch follows.
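
A toy Python dispatcher reflecting the classification above (the call names are hypothetical stand-ins, not a real system-call API):

    # Calls that must run on the home machine (terminal I/O, home state).
    HOME_CALLS = {"read_keyboard", "write_screen"}
    # Calls that must run on the machine where the process executes.
    REMOTE_CALLS = {"sbrk", "nice", "profil"}

    def dispatch(syscall):
        if syscall in HOME_CALLS:
            return "forward to home machine"
        if syscall in REMOTE_CALLS:
            return "execute on the remote machine"
        return "either machine"

    print(dispatch("sbrk"))            # execute on the remote machine
    print(dispatch("read_keyboard"))   # forward to home machine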

18 State the information exchange policies 5 L3 4 2 2.6.2
for load-sharing algorithms.
Ans:
 In load-sharing algorithms it is not necessary for
nodes to periodically exchange state information;
a node only needs to know the state of other
nodes when it is either underloaded or overloaded.
 Broadcast when state changes:
o In a sender-initiated/receiver-initiated
location policy, a node broadcasts a
state-information request when it becomes
overloaded/underloaded.
o With a receiver-initiated policy and a
fixed threshold value of 1, this is called
the broadcast-when-idle policy (see the
sketch below).
 Poll when state changes:
o In large networks a polling mechanism is
used instead.
o The polling mechanism randomly asks
different nodes for state information until
an appropriate node is found or the probe
limit is reached.
o With a receiver-initiated policy and a
fixed threshold value of 1, this is called
the poll-when-idle policy.
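
A minimal Python sketch of the broadcast-when-idle policy (the node records and queue_len field are assumptions for illustration): state is exchanged only when a node's own state changes to idle.

    def broadcast_when_idle(me, nodes):
        # Receiver-initiated policy with a fixed threshold of 1: a node
        # broadcasts a state-information request only when it becomes idle.
        if me["queue_len"] >= 1:
            return []                   # not idle: no state exchange needed
        # The broadcast: every other node replies with its current state.
        return [(n["name"], n["queue_len"]) for n in nodes if n is not me]

    a = {"name": "A", "queue_len": 0}
    b = {"name": "B", "queue_len": 3}
    print(broadcast_when_idle(a, [a, b]))   # [('B', 3)]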

19 Explain briefly the write-through cache consistency 5 L2 5 2 2.6.2
protocol.
Ans:
 One particularly simple and common protocol is
called write through.
 When a CPU first reads a word from memory,
that word is fetched over the bus and stored in
the cache of the CPU making the request.
 If that word is needed again later, the CPU can
take it from the cache without making a memory
request, thus reducing bus traffic.
 There are two read cases: a read miss (word not
cached) and a read hit (word cached). In simple
systems only the requested word is cached, but in
most, a block of, say, 16 or 32 words is
transferred and cached on the initial access and
kept for possible future use. A minimal model of
the protocol is sketched below.
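
A minimal single-CPU model of write through in Python (a sketch of the idea only; a real protocol also snoops other CPUs' writes on the bus): reads are served from the cache on a hit, while writes always go through to memory.

    class WriteThroughCache:
        def __init__(self, memory):
            self.memory = memory        # shared memory, modeled as a dict
            self.cache = {}             # this CPU's private cache

        def read(self, addr):
            if addr in self.cache:      # read hit: no bus traffic
                return self.cache[addr]
            value = self.memory[addr]   # read miss: fetch over the bus
            self.cache[addr] = value    # cache it for future use
            return value

        def write(self, addr, value):
            if addr in self.cache:      # write hit: update the cache copy
                self.cache[addr] = value
            self.memory[addr] = value   # always write through to memory

    mem = {0: 10, 1: 20}
    cpu = WriteThroughCache(mem)
    print(cpu.read(0))                  # miss: fetched and cached
    cpu.write(0, 99)                    # cache and memory both updated
    print(mem[0], cpu.read(0))          # 99 99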
20 Draw and explain the NUMA architecture in detail. 5 L2 5 2 2.6.2
Ans:

21 If a page is shared, how do we find the owner of the 5 L5 5 2 2.6.2
page?
Ans:
 The simplest solution is to do a broadcast, asking
the owner of the specified page to respond.
 Once the owner has been located this way, the
protocol can proceed as before. An obvious
optimization is not just to ask who the owner is,
but also to tell whether the sender wants to read
or write and whether it needs a copy of the page.
 The owner can then send a single message
transferring ownership, and the page as well if
needed. Alternatively, a page manager can be
given the job of keeping track of who owns each
page.
 When a process P wants to read a page it does
not have, or write a page it does not own, it
sends a message to the page manager telling
which operation it wants to perform and on which
page.
 The manager then sends back a message telling
who the owner is. P now contacts the owner to
get the page and/or the ownership, as required
(see the sketch below).
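
A toy Python page manager along the lines described (the class and method names are illustrative assumptions; real DSM systems also track copy sets and invalidations):

    class PageManager:
        # Centralized manager that tracks the owner of every shared page.
        def __init__(self):
            self.owner = {}

        def request(self, process, page, want_write):
            # P says which operation it wants and on which page; the
            # manager replies with the current owner. On a write, this
            # sketch simply records the requester as the new owner.
            current = self.owner.get(page, process)  # first toucher owns it
            if want_write:
                self.owner[page] = process
            return current              # P then contacts this owner

    mgr = PageManager()
    mgr.request("P1", 7, want_write=True)            # P1 owns page 7
    print(mgr.request("P2", 7, want_write=False))    # -> 'P1'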
Part – D (1 x 10 = 10 Marks)
Answer any 1 question
22 Explain the design and implementation issues of 10 L4 5 2 2.6.2
Distributed Shared Memory.
Ans:
Issues to Design and Implementation of DSM:
 Granularity
 Structure of shared memory space
 Memory coherence and access synchronization
 Data location and access
 Replacement strategy
 Thrashing
 Heterogeneity
1. Granularity: Granularity refers to the block size of a
DSM system, i.e., the unit of sharing and the unit of data
moved across the network when a block fault occurs. The
block size may range from a few words up to one or more
pages, and it may differ across networks.
2. Structure of shared memory space: Structure refers
to the layout of the shared data in memory. The structure
of the shared memory space of a DSM system typically
depends on the type of applications the DSM system is
intended to support.
3. Memory coherence and access synchronization: In
a DSM system, shared data items may be accessed by
multiple nodes simultaneously. The fundamental issue is
data inconsistency, which such simultaneous access can
cause. To solve this problem, the DSM system must use
synchronization primitives such as semaphores and event
counts.
4. Data location and access: To share data, a DSM
system must be able to locate and retrieve the data
accessed by clients or processors. It must therefore
implement some form of data-block location mechanism
so that network data can be served in a way that meets
the memory coherence semantics being used.
5. Replacement strategy: If the local memory of a node
is full, a cache miss at that node implies not just fetching
the accessed data block from a remote node but also
replacing an existing block: a data block in local memory
must make way for the new one. Accordingly, a
replacement strategy is also vital in the design of a DSM
system.
6. Thrashing: In a DSM system, data blocks move
between nodes on demand. If two nodes compete for
write access to a single data item, the corresponding
data block may be moved back and forth at such a high
rate that no real work gets done. The DSM system should
use an approach to avoid this situation, generally known
as thrashing (a toy simulation follows this answer).
7. Heterogeneity: A DSM system built for a
homogeneous environment need not address the
heterogeneity issue. However, if the underlying system
environment is heterogeneous, the DSM system must be
designed to handle heterogeneity so that it works
properly with machines having different architectures.
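
A toy simulation of the thrashing effect in issue 6 (the access pattern is a made-up example): when two nodes alternate writes to one data item, the block migrates on every single access.

    def block_moves(access_pattern):
        # Count how often the data block migrates between nodes when
        # every write pulls the block to the writing node.
        location, moves = None, 0
        for node in access_pattern:
            if node != location:
                moves += 1              # block migrates to the writer
                location = node
        return moves

    # N1 and N2 compete for write access to a single data item.
    print(block_moves(["N1", "N2", "N1", "N2", "N1", "N2"]))  # -> 6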

Or
23 Define the following terms: a) Process Migration b) 10 L4 4 2 2.6.2
Threads c) Processor allocation.
Ans:
Process Migration in Distributed Systems:
A process is essentially a program in execution, an entity
that represents the basic unit of work to be executed in
the system, and its execution advances in a sequential
fashion. Process migration is the relocation of such a
process from the node where it is currently running to
another node in the distributed system.

Thread:
Thread is a separate execution path. It is a lightweight
process that the operating system can schedule and run
concurrently with other threads. The operating system
creates and manages threads, and they share the same
memory and resources as the program that created them.
This enables multiple threads to collaborate and work
efficiently within a single program.
A thread is a single sequence stream within a process.
Threads are also called lightweight processes because
they possess some of the properties of processes. Each
thread belongs to exactly one process. In an operating
system that supports multithreading, a process can
consist of many threads. Threads can run truly in parallel
only if there is more than one CPU; on a single CPU,
threads must context-switch to share it.
Processor allocation:
 Processor allocation decides which process should
be assigned to which processor in the system.
 Process scheduling is the activity of the process
manager that handles the removal of the running
process from the CPU and the selection of
another process on the basis of a particular
strategy.
 Process scheduling is an essential part of
multiprogramming operating systems. Such
operating systems allow more than one process
to be loaded into executable memory at a time,
and the loaded processes share the CPU using
time multiplexing.

Course Outcome (CO) and Bloom’s level (BL) Coverage in Questions

Approved by the Audit Professor/Course Coordinator
