Unit 4 Notes
Multiprocessor Architectures:
A large number of different computer architectures have more than one
processor and can support some form of concurrent execution.
• Late 1950s - one general-purpose processor and one or more special-purpose
processors for input and output operations
• Early 1960s - multiple complete processors, used for program-level
concurrency
• Mid-1960s - multiple partial processors, used for instruction-level concurrency
• Single-Instruction Multiple-Data (SIMD) machines
• Multiple-Instruction Multiple-Data (MIMD) machines
Categories of Concurrency:
Logical concurrency:
One useful technique for visualizing the flow of execution through a program is
to imagine a thread laid on the statements of the source text of the program.
Every statement reached on a particular execution is covered by the thread
representing that execution.
Formally, a thread of control in a program is the sequence of program points
reached as control flows through the program.
Programs that have coroutines but no concurrent subprograms, though they are
sometimes called quasi-concurrent, have a single thread of control.
Programs executed with physical concurrency can have multiple threads of
control.
A program designed to have more than one thread of control is said to be
multithreaded.
Semaphores
A semaphore is a simple mechanism that can be used to provide
synchronization of tasks.
Although semaphores are an early approach to providing synchronization, they
are still used, both in contemporary languages and in library-based concurrency
support systems.
In an effort to provide competition synchronization through mutually exclusive
access to shared data structures, Edsger Dijkstra devised semaphores in 1965
(Dijkstra, 1968b).
Semaphores can also be used to provide cooperation synchronization.
A semaphore is an implementation of a guard. Specifically, a semaphore is a
data structure that consists of an integer and a queue that stores task descriptors.
A task descriptor is a data structure that stores all of the relevant information
about the execution state of a task.
• A semaphore is a data structure consisting of a counter and a queue for storing
task descriptors
• Semaphores can be used to implement guards on the code that accesses shared
data structures
• Semaphores have only two operations, wait and release (originally called P
and V by Dijkstra)
• Semaphores can be used to provide both competition and cooperation
synchronization
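As an illustration of the counter-plus-queue structure described above, here is a minimal sketch of a counting semaphore in Java. The class and method names are invented for this example (wait is a final method of Object in Java, so the wait operation is named acquire here), and the object's intrinsic wait set stands in for the task-descriptor queue:

```java
// A minimal counting semaphore: the integer counter from the notes,
// with the JVM's wait set standing in for the task-descriptor queue.
class CountingSemaphore {
    private int counter;

    CountingSemaphore(int initial) {
        this.counter = initial;
    }

    // wait (Dijkstra's P): block until the counter is positive, then decrement
    public synchronized void acquire() throws InterruptedException {
        while (counter == 0) {
            wait();            // the calling task is queued until a release
        }
        counter--;
    }

    // release (Dijkstra's V): increment the counter and wake one waiting task
    public synchronized void release() {
        counter++;
        notify();
    }

    public synchronized int available() {
        return counter;
    }
}
```

A real program would use java.util.concurrent.Semaphore instead; the sketch only shows how little machinery the counter-plus-queue idea needs.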
Cooperation Synchronization
• Example: A shared buffer
• The buffer is implemented as an ADT with the operations DEPOSIT and
FETCH as the only ways to access the buffer
• Use two semaphores for cooperation: emptyspots and fullspots
• The semaphore counters store the numbers of empty spots and full spots in
the buffer
• DEPOSIT must first check emptyspots to see if there is room in the buffer
• If there is room, the counter of emptyspots is decremented and the value is
inserted
• If there is no room, the caller is stored in the queue of emptyspots
• When DEPOSIT is finished, it must increment the counter of fullspots
• FETCH must first check fullspots to see if there is a value
• If there is a full spot, the counter of fullspots is decremented and the
value is removed
• If there are no values in the buffer, the caller must be placed in the
queue of fullspots
• When FETCH is finished, it increments the counter of emptyspots
• The operations of FETCH and DEPOSIT on the semaphores are accomplished
through two semaphore operations named wait and release
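The DEPOSIT/FETCH protocol above can be sketched using Java's library Semaphore class. The buffer class, its capacity, and the field names are illustrative; a third binary semaphore, access, is added to provide competition synchronization on the underlying queue:

```java
import java.util.concurrent.Semaphore;
import java.util.ArrayDeque;
import java.util.Deque;

// A shared buffer guarded by two cooperation semaphores (emptySpots,
// fullSpots) and one binary semaphore (access) for competition
// synchronization, following the DEPOSIT/FETCH protocol in the notes.
class SharedBuffer {
    private final Deque<Integer> buffer = new ArrayDeque<>();
    private final Semaphore emptySpots;                    // counts free slots
    private final Semaphore fullSpots = new Semaphore(0);  // counts stored values
    private final Semaphore access = new Semaphore(1);     // mutual exclusion

    SharedBuffer(int capacity) {
        emptySpots = new Semaphore(capacity);
    }

    void deposit(int value) {
        emptySpots.acquireUninterruptibly(); // wait(emptyspots): block if full
        access.acquireUninterruptibly();     // wait(access): critical section
        buffer.addLast(value);
        access.release();                    // release(access)
        fullSpots.release();                 // release(fullspots): value ready
    }

    int fetch() {
        fullSpots.acquireUninterruptibly();  // wait(fullspots): block if empty
        access.acquireUninterruptibly();
        int value = buffer.removeFirst();
        access.release();
        emptySpots.release();                // release(emptyspots): slot freed
        return value;
    }
}
```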
Competition Synchronization
One of the most important features of monitors is that shared data is resident in
the monitor rather than in any of the client units.
The programmer does not synchronize mutually exclusive access to shared data
through the use of semaphores or other mechanisms.
• Shared data is resident in the monitor (rather than in the client units)
• All access is resident in the monitor
– The monitor implementation guarantees synchronized access by
allowing only one access at a time
– Calls to monitor procedures are implicitly queued if the monitor is
busy at the time of the call
Monitors
One solution to some of the problems of semaphores in a concurrent
environment is to encapsulate shared data structures with their operations and
hide their representations—that is, to make shared data structures abstract data
types with some special restrictions.
Cooperation Synchronization
Mutually exclusive access to shared data is intrinsic with a monitor; cooperation
between processes is still the task of the programmer.
Different languages provide different ways of programming cooperation
synchronization, all of which are related to semaphores.
• Cooperation between processes is still a programming task
– Programmer must guarantee that a shared buffer does not experience
underflow or overflow
EVALUATION OF MONITORS:
• A better way to provide competition synchronization than semaphores
• Semaphores can be used to implement monitors
• Monitors can be used to implement semaphores
• Support for cooperation synchronization is very similar to that of semaphores,
so it has the same problems
MESSAGE PASSING
• Message passing is a general model for concurrency
– It can model both semaphores and monitors
– It is not just for competition synchronization
• Central idea: task communication is like seeing a doctor: most of the time she
waits for you or you wait for her, but when you are both ready, you get
together, or rendezvous
Message Passing Rendezvous
• To support concurrent tasks with message passing, a language needs:
- A mechanism to allow a task to indicate when it is willing to accept messages
- A way to remember who is waiting to have its message accepted and some
"fair" way of choosing the next message
• When a sender task’s message is accepted by a receiver task, the actual
message transmission is called a rendezvous
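Java has no Ada-style rendezvous, but a SynchronousQueue gives a rough analogue: the queue has no capacity, so a put blocks until another task performs a take, and sender and receiver meet. In this sketch the class and method names are invented for illustration:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.atomic.AtomicReference;

// A SynchronousQueue has no storage: put() blocks until another thread is
// ready to take(), so the two tasks meet -- a rendezvous of sorts.
class RendezvousDemo {
    // Returns the message after both tasks have met at the rendezvous point.
    static String exchange(String message) {
        SynchronousQueue<String> channel = new SynchronousQueue<>();
        AtomicReference<String> received = new AtomicReference<>();

        Thread receiver = new Thread(() -> {
            try {
                received.set(channel.take()); // willing to accept a message
            } catch (InterruptedException ignored) { }
        });
        receiver.start();

        try {
            channel.put(message);  // sender blocks until the take() above
            receiver.join();       // wait for the receiver to finish
        } catch (InterruptedException ignored) { }
        return received.get();
    }
}
```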
Task Body
• The task body describes the actions that take place when a rendezvous occurs
• A task that sends a message is suspended while waiting for the message to be
accepted and during the rendezvous
• Entry points in the spec are described with accept clauses in the body
accept entry_name (formal parameters) do
   …
end entry_name
Example:
task body Task_Example is
begin
   loop
      accept Entry_1 (Item: in Float) do
         ...
      end Entry_1;
   end loop;
end Task_Example;
Multiple Entry Points
• Tasks can have more than one entry point
– The task specification has an entry clause for each
– The task body has an accept clause for each entry clause, placed in a
select clause, which is in a loop
Task Termination
• The execution of a task is completed if control has reached the end of its code
body
• If a task has created no dependent tasks and is completed, it is terminated
• If a task has created dependent tasks and is completed, it is not terminated
until all its dependent tasks are terminated
• A terminate clause in a select is just a terminate statement
• A terminate clause is selected when no accept clause is open
• When a terminate is selected in a task, the task is terminated only when its
master and all of the dependents of its master are either completed or are
waiting at a terminate
Concurrency in Ada 95
• A block or subprogram is not left until all of its dependent tasks are terminated
• Ada 95 includes Ada 83 features for concurrency, plus two new features
• Protected objects: a more efficient way of implementing shared data that
allows access to a shared data structure without a rendezvous
• Asynchronous communication
JAVA THREADS
The concurrent units in Java are methods named run, whose code can be in
concurrent execution with other such methods (of other objects) and with the
main method.
The process in which the run methods execute is called a thread.
Java’s threads are lightweight tasks, which means that they all run in the same
address space.
Thread Priorities
• A thread’s default priority is the same as that of the thread that created it
– If main creates a thread, its default priority is NORM_PRIORITY
• The Thread class defines two other priority constants, MAX_PRIORITY and
MIN_PRIORITY
• The priority of a thread can be changed with the method setPriority
Competition Synchronization with Java Threads
• A method that includes the synchronized modifier disallows any other
synchronized method from running on the object while it is in execution
…
public synchronized void deposit( int i) {…}
public synchronized int fetch() {…}
…
• The above two methods are synchronized which prevents them from
interfering with each other.
• If only a part of a method must be run without interference, it can be
synchronized with a synchronized statement
synchronized (expression)
statement
• Cooperation synchronization in Java is achieved via wait, notify, and notifyAll
methods
– All methods are defined in Object, which is the root class in Java, so
all objects inherit them
• The wait method must be called in a loop
• The notify method is called to tell one waiting thread that the event it was
waiting for has occurred
• The notifyAll method awakens all of the threads on the object’s wait list
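The wait/notify pattern above can be sketched as a one-slot, monitor-style buffer. The class is invented for this example; note that wait is called in a loop, as the notes require:

```java
// Monitor-style one-slot buffer: competition synchronization comes from
// synchronized, cooperation synchronization from wait() in a loop and
// notifyAll().
class Mailbox {
    private Integer slot = null;   // null means the slot is empty

    public synchronized void deposit(int value) throws InterruptedException {
        while (slot != null) {     // wait must be called in a loop
            wait();                // slot full: wait for a fetch
        }
        slot = value;
        notifyAll();               // wake threads waiting in fetch()
    }

    public synchronized int fetch() throws InterruptedException {
        while (slot == null) {
            wait();                // slot empty: wait for a deposit
        }
        int value = slot;
        slot = null;
        notifyAll();               // wake threads waiting in deposit()
        return value;
    }
}
```

The loop guards against spurious wakeups and against the condition being consumed by another thread between the notify and this thread resuming.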
C# THREADS
• Loosely based on Java but there are significant differences
• Basic thread operations
– Any method can run in its own thread
– A thread is created by creating a Thread object
– Creating a thread does not start its concurrent execution; it must be
requested through the Start method
– A thread can be made to wait for another thread to finish with Join
– A thread can be suspended with Sleep
– A thread can be terminated with Abort
Synchronizing Threads
• Three ways to synchronize C# threads
– The Interlocked class
• Used when the only operations that need to be synchronized are
incrementing or decrementing of an integer
– The lock statement
• Used to mark a critical section of code in a thread
lock (expression) {… }
– The Monitor class
• Provides four methods that can be used to provide more sophisticated
synchronization
Design Issues
• How and where are exception handlers specified and what is their scope?
• How is an exception occurrence bound to an exception handler?
• Can information about the exception be passed to the handler?
• Where does execution continue, if at all, after an exception handler completes
its execution? (continuation vs. resumption)
• Is some form of finalization provided?
• How are user-defined exceptions specified?
• Should there be default exception handlers for programs that do not provide
their own?
• Can built-in exceptions be explicitly raised?
• Are hardware-detectable errors treated as exceptions that can be handled?
• Are there any built-in exceptions?
• How can exceptions be disabled, if at all?
Exception Handling Control Flow
• Handler form:
when exception_choice{|exception_choice} => statement_sequence
...
[when others =>
statement_sequence]
exception_choice form:
exception_name | others
• Handlers are placed at the end of the block or unit in which they occur
PREDEFINED EXCEPTIONS:
• CONSTRAINT_ERROR - index constraints, range constraints, etc.
• NUMERIC_ERROR - numeric operation cannot return a correct value
(overflow, division by zero, etc.)
• PROGRAM_ERROR - call to a subprogram whose body has not been
elaborated
• STORAGE_ERROR - the system runs out of heap storage
• TASKING_ERROR - an error associated with tasks
Throwing Exceptions
• Exceptions are all raised explicitly by the statement:
throw [expression];
• The brackets are metasymbols
• A throw without an operand can only appear in a handler; when it appears, it
simply re-raises the exception, which is then handled elsewhere
• The type of the expression disambiguates the intended handler
Unhandled Exceptions
• An unhandled exception is propagated to the caller of the function in which it
is raised.
• This propagation continues to the main function
• If no handler is found, the default handler is called
Continuation
• After a handler completes its execution, control flows to the first statement
after the last handler in the sequence of handlers of which it is an element
• Other design choices
– All exceptions are user-defined
– Exceptions are neither specified nor declared
– The default handler, unexpected, simply terminates the program;
unexpected can be redefined by the user
– Functions can list the exceptions they may raise
– Without a specification, a function can raise any exception (the throw
clause)
Evaluation
• It is odd that exceptions are not named and that hardware- and system
software-detectable exceptions cannot be handled
• Binding exceptions to handlers through the type of the parameter certainly
does not promote readability
Classes of Exceptions
• The Java library includes two subclasses of Throwable:
– Error
• Thrown by the Java interpreter for events such as heap
overflow
• Never handled by user programs
– Exception
• User-defined exceptions are usually subclasses of this
• Has two predefined subclasses, IOException and
RuntimeException (e.g., ArrayIndexOutOfBoundsException
and NullPointerException)
Continuation:
• If no handler is found in the try construct, the search is continued in the
nearest enclosing try construct, etc.
• If no handler is found in the method, the exception is propagated to the
method’s caller
• If no handler is found (all the way to main), the program is terminated
• To ensure that all exceptions are caught, a handler can be included in any try
construct that catches all exceptions
– Simply use an Exception class parameter
– Of course, it must be the last in the try construct
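The handler search described above can be illustrated with an invented example: the first catch clause whose parameter type matches the raised exception is chosen, so the catch-all Exception handler must come last:

```java
// Handler binding in Java: the first catch whose parameter type matches
// the thrown exception wins, so the catch-all must be the last handler.
class HandlerOrder {
    static String classify(int[] data, int index) {
        try {
            return "value " + data[index];
        } catch (ArrayIndexOutOfBoundsException e) {
            return "bad index";               // specific handler, tried first
        } catch (Exception e) {
            return "other error";             // catch-all, must come last
        }
    }
}
```

Reversing the two catch clauses would be a compile-time error, because the ArrayIndexOutOfBoundsException handler would be unreachable.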
Assertions
• Statements in the program declaring a boolean expression about the
current state of the computation
• When the expression evaluates to true, nothing happens
• When it evaluates to false, an AssertionError exception is thrown
• Can be disabled at run time without program modification or
recompilation
• Two forms
– assert condition;
– assert condition: expression;
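Both assert forms can be illustrated with a small invented example; remember that the JVM ignores assert statements unless it is started with the -ea (enable assertions) flag:

```java
// Demonstrates both assert forms; the JVM skips these checks entirely
// unless assertions are enabled with the -ea flag at launch.
class Withdrawal {
    static int withdraw(int balance, int amount) {
        // second form: condition plus an expression for the error message
        assert amount > 0 : "amount must be positive, got " + amount;
        int remaining = balance - amount;
        // first form: condition only
        assert remaining >= 0;
        return remaining;
    }
}
```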
Event Handling in C#
Event handling in C# (and in the other .NET languages) is similar to that of
Java.
.NET provides two approaches to creating GUIs in applications: the original
Windows Forms and the more recent Windows Presentation Foundation.
All C# event handlers have the same protocol: the return type is void and the
two parameters are of types object and EventArgs.