Unit 4 Notes

UNIT-IV

PRINCIPLES OF PROGRAMMING LANGUAGES


Concurrency
Concurrency is the ability to run multiple tasks or processes at the same time,
or in overlapping time periods.
Concurrency in software execution can occur at four different levels:
1. Instruction level: executing two or more machine instructions
simultaneously.
2. Statement level: executing two or more high-level language
statements simultaneously.
3. Unit level: executing two or more subprogram units simultaneously.
4. Program level: executing two or more programs simultaneously.

Multiprocessor Architectures:
A large number of different computer architectures have more than one
processor and can support some form of concurrent execution.
• Late 1950s - one general-purpose processor and one or more special-purpose
processors for input and output operations
• Early 1960s - multiple complete processors, used for program-level
concurrency
• Mid-1960s - multiple partial processors, used for instruction-level concurrency
• Single-Instruction Multiple-Data (SIMD) machines
• Multiple-Instruction Multiple-Data (MIMD) machines

Categories of Concurrency

There are two distinct categories of concurrent unit control.

Physical concurrency: several program units from the same program literally
execute simultaneously on more than one processor.

Logical concurrency: a slight relaxation of this concept of concurrency allows
the programmer and the application software to assume that there are multiple
processors providing actual concurrency, when in fact the actual execution of
programs is taking place in interleaved fashion on a single processor.

One useful technique for visualizing the flow of execution through a program is
to imagine a thread laid on the statements of the source text of the program.
Every statement reached on a particular execution is covered by the thread
representing that execution.
Formally, a thread of control in a program is the sequence of program points
reached as control flows through the program.
Programs that have coroutines but no concurrent subprograms, though they are
sometimes called quasi-concurrent, have a single thread of control.
Programs executed with physical concurrency can have multiple threads of
control.
A program designed to have more than one thread of control is said to be
multithreaded.

Motivations for the Use of Concurrency


There are at least four different reasons to design concurrent software systems.
• Speed of execution of programs on machines with multiple processors.
• When a machine has just one processor, a program written to use concurrent
execution can be faster than the same program written for sequential
(non-concurrent) execution.
• Concurrency provides a different method of conceptualizing program
solutions to problems.
• Concurrency can be used to program applications that are distributed over
several machines, either locally or through the Internet.

Introduction to Subprogram-Level Concurrency


A task is a unit of a program, similar to a subprogram, that can be in concurrent
execution with other units of the same program. Each task in a program can
support one thread of control. Tasks are sometimes called processes.
In some languages, for example Java and C#, certain methods serve as tasks.
Such methods are executed in objects called threads.
Tasks fall into two general categories: heavyweight and lightweight.
A heavyweight task executes in its own address space.
Lightweight tasks all run in the same address space.
A task can communicate with other tasks through shared nonlocal variables,
through message passing, or through parameters.
If a task does not communicate with or affect the execution of any other task in
the program in any way, it is said to be disjoint.
Synchronization is a mechanism that controls the order in which tasks execute.

Two kinds of synchronization are required when tasks share data: cooperation
and competition.

Cooperation synchronization is required between task A and task B when task A
must wait for task B to complete some specific activity before task A can
begin or continue its execution. Example: the producer-consumer problem.
Competition synchronization is required between two tasks when both require
the use of some resource that cannot be simultaneously used.
Scheduler
• Providing synchronization requires a mechanism for delaying task execution
• Task execution control is maintained by a program called the scheduler, which
maps task execution onto available processors

Task Execution States


• New - Created but not yet started
• Ready - Ready to run but not currently running (no available processor)
• Running - Currently being executed by a processor
• Blocked - Has been running, but cannot now continue (usually waiting for
some event to occur)
• Dead - No longer active in any sense
Liveness and Deadlock
• Liveness is a characteristic that a program unit may or may not have.
- In sequential code, it means the unit will eventually complete its execution
• In a concurrent environment, a task can easily lose its liveness.
• If all tasks in a concurrent environment lose their liveness, it is called
deadlock.
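
As an illustration of how deadlock can arise, the following Java sketch (the
class and lock names are purely illustrative) starts two threads that each
acquire one lock and then wait for the lock held by the other; neither thread
can ever proceed, so both lose their liveness.

public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                pause();                    // give t2 time to grab lockB
                synchronized (lockB) {      // waits forever: t2 holds lockB
                    System.out.println("t1 done");
                }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                pause();                    // give t1 time to grab lockA
                synchronized (lockA) {      // waits forever: t1 holds lockA
                    System.out.println("t2 done");
                }
            }
        });
        t1.start();
        t2.start();
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
    }
}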

Design Issues for Concurrency


• Competition and cooperation synchronization
• Controlling task scheduling
• How and when tasks start and end execution
• How and when tasks are created

Methods of Providing Synchronization


• Semaphores
• Monitors
• Message Passing

Semaphores
A semaphore is a simple mechanism that can be used to provide
synchronization of tasks.
Although semaphores are an early approach to providing synchronization, they
are still used, both in contemporary languages and in library-based concurrency
support systems.
In an effort to provide competition synchronization through mutually exclusive
access to shared data structures, Edsger Dijkstra devised semaphores in 1965
(Dijkstra, 1968b).
Semaphores can also be used to provide cooperation synchronization.
A semaphore is an implementation of a guard. Specifically, a semaphore is a
data structure that consists of an integer and a queue that stores task descriptors.
A task descriptor is a data structure that stores all of the relevant information
about the execution state of a task.
• A semaphore is a data structure consisting of a counter and a queue for storing
task descriptors
• Semaphores can be used to implement guards on the code that accesses shared
data structures
• Semaphores have only two operations, wait and release (originally called P
and V by Dijkstra)
• Semaphores can be used to provide both competition and cooperation
synchronization
Cooperation Synchronization
• Example: A shared buffer
• The buffer is implemented as an ADT with the operations DEPOSIT and
FETCH as the only ways to access the buffer
• Use two semaphores for cooperation: emptyspots and fullspots
• The semaphore counters are used to store the numbers of empty spots and full
spots in the buffer
• DEPOSIT must first check emptyspots to see if there is room in the buffer
• If there is room, the counter of emptyspots is decremented and the value is
inserted
• If there is no room, the caller is stored in the queue of emptyspots
• When DEPOSIT is finished, it must increment the counter of fullspots
• FETCH must first check fullspots to see if there is a value
• If there is a full spot, the counter of fullspots is decremented and the
value is removed
• If there are no values in the buffer, the caller must be placed in the
queue of fullspots
• When FETCH is finished, it increments the counter of emptyspots
• The operations of FETCH and DEPOSIT on the semaphores are accomplished
through two semaphore operations named wait and release
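
The DEPOSIT/FETCH protocol described above can be sketched in Java using
java.util.concurrent.Semaphore, whose acquire and release methods play the
roles of wait and release. The class and semaphore names (SemaphoreBuffer,
emptyspots, fullspots, access) are illustrative only; the third semaphore,
access, adds the competition synchronization (mutually exclusive access to the
buffer) mentioned earlier.

import java.util.concurrent.Semaphore;

public class SemaphoreBuffer {
    private static final int SIZE = 10;
    private final int[] buffer = new int[SIZE];
    private int in = 0, out = 0;

    private final Semaphore emptyspots = new Semaphore(SIZE); // counts free slots
    private final Semaphore fullspots  = new Semaphore(0);    // counts filled slots
    private final Semaphore access     = new Semaphore(1);    // binary semaphore

    public void deposit(int value) throws InterruptedException {
        emptyspots.acquire();     // wait(emptyspots): block if the buffer is full
        access.acquire();         // wait(access): enter the critical section
        buffer[in] = value;
        in = (in + 1) % SIZE;
        access.release();         // release(access): leave the critical section
        fullspots.release();      // release(fullspots): signal a waiting consumer
    }

    public int fetch() throws InterruptedException {
        fullspots.acquire();      // wait(fullspots): block if the buffer is empty
        access.acquire();
        int value = buffer[out];
        out = (out + 1) % SIZE;
        access.release();
        emptyspots.release();     // release(emptyspots): signal a waiting producer
        return value;
    }
}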

Monitors
One solution to some of the problems of semaphores in a concurrent
environment is to encapsulate shared data structures with their operations and
hide their representations—that is, to make shared data structures abstract data
types with some special restrictions.

Competition Synchronization
One of the most important features of monitors is that shared data is resident in
the monitor rather than in any of the client units.
The programmer does not synchronize mutually exclusive access to shared data
through the use of semaphores or other mechanisms.
• Shared data is resident in the monitor (rather than in the client units)
• All access is resident in the monitor
– The monitor implementation guarantees synchronized access by allowing
only one access at a time
– Calls to monitor procedures are implicitly queued if the monitor is
busy at the time of the call
Cooperation Synchronization
Although mutually exclusive access to shared data is intrinsic with a monitor,
cooperation between processes is still the task of the programmer.
Different languages provide different ways of programming cooperation
synchronization, all of which are related to semaphores.
• Cooperation between processes is still a programming task
– Programmer must guarantee that a shared buffer does not experience
underflow or overflow

EVALUATION OF MONITORS:
• Monitors are a better way to provide competition synchronization than semaphores
• Semaphores can be used to implement monitors
• Monitors can be used to implement semaphores
• Support for cooperation synchronization is very similar to that with
semaphores, so it has the same problems

MESSAGE PASSING
• Message passing is a general model for concurrency
– It can model both semaphores and monitors
– It is not just for competition synchronization
• Central idea: task communication is like seeing a doctor--most of the time she
waits for you or you wait for her, but when you are both ready, you get
together, or rendezvous
Message Passing Rendezvous
• To support concurrent tasks with message passing, a language needs:
- A mechanism to allow a task to indicate when it is willing to accept messages
- A way to remember who is waiting to have its message accepted and some
"fair" way of choosing the next message
• When a sender task’s message is accepted by a receiver task, the actual
message transmission is called a rendezvous

Ada Support for Concurrency


Ada tasks can be more active than monitors. Monitors are passive entities that
provide management services for the shared data they store.
They provide their services, though only when those services are requested.
When used to manage shared data, Ada tasks can be thought of as managers that
can reside with the resource they manage.
They have several mechanisms, some deterministic and some nondeterministic,
that allow them to choose among competing requests for access to their
resources.
• The Ada 83 Message-Passing Model
– Ada tasks have specification and body parts, like packages; the spec
has the interface, which is the collection of entry points:
task Task_Example is
entry ENTRY_1 (Item : in Integer);
end Task_Example;

Task Body
• The body task describes the action that takes place when a rendezvous occurs
• A task that sends a message is suspended while waiting for the message to be
accepted and during the rendezvous
• Entry points in the spec are described with accept clauses in the body
accept entry_name (formal parameters) do
...
end entry_name;
Example:
task body Task_Example is
begin
loop
accept Entry_1 (Item : in Integer) do
...
end Entry_1;
end loop;
end Task_Example;
Multiple Entry Points
• Tasks can have more than one entry point
– The specification task has an entry clause for each
– The task body has an accept clause for each entry clause, placed in a
select clause, which is in a loop

A Task With Multiple Entries


task body Teller is
begin
loop
select
accept Drive_Up(formal params) do
...
end Drive_Up;
...
or
accept Walk_Up(formal params) do
...
end Walk_Up;
...
end select;
end loop;
end Teller;

Semantics of Tasks with Multiple accept Clauses


• If exactly one entry queue is nonempty, choose a message from it
• If more than one entry queue is nonempty, choose one, non-deterministically,
from which to accept a message
• If all are empty, wait
• The construct is often called a selective wait
• Extended accept clause - code following the clause, but before the next clause
– Executed concurrently with the caller

Cooperation Synchronization with Message Passing


• Provided by Guarded accept clauses
when not Full(Buffer) =>
accept Deposit (New_Value) do
...
end Deposit;
• An accept clause with a when clause is either open or closed
– A clause whose guard is true is called open
– A clause whose guard is false is called closed
– A clause without a guard is always open
Semantics of select with Guarded accept Clauses:
• select first checks the guards on all clauses
• If exactly one is open, its queue is checked for messages
• If more than one are open, non-deterministically choose a queue among them
to check for messages
• If all are closed, it is a runtime error
• A select clause can include an else clause to avoid the error
– When the else clause completes, the loop repeats

Competition Synchronization with Message Passing


• Modeling mutually exclusive access to shared data
• Example--a shared buffer
• Encapsulate the buffer and its operations in a task
• Competition synchronization is implicit in the semantics of accept clauses
– Only one accept clause in a task can be active at any given time

Task Termination
• The execution of a task is completed if control has reached the end of its code
body
• If a task has created no dependent tasks and is completed, it is terminated
• If a task has created dependent tasks and is completed, it is not terminated
until all its dependent tasks are terminated
• A terminate clause in a select is just a terminate statement
• A terminate clause is selected when no accept clause is open
• When a terminate is selected in a task, the task is terminated only when its
master and all of the dependents of its master are either completed or are
waiting at a terminate

Concurrency in Ada 95
• A block or subprogram is not left until all of its dependent tasks are terminated
• Ada 95 includes Ada 83 features for concurrency, plus two new features
• Protected objects: A more efficient way of implementing shared data to
allow access to a shared data structure to be done without rendezvous
• Asynchronous communication

Ada 95: Protected Objects


• A protected object is similar to an abstract data type
• Access to a protected object is either through messages passed to entries, as
with a task, or through protected subprograms
• A protected procedure provides mutually exclusive read-write access to
protected objects
• A protected function provides concurrent read-only access to protected objects
Asynchronous Communication
• Provided through asynchronous select structures
• An asynchronous select has two triggering alternatives, an entry clause or a
delay
– The entry clause is triggered when sent a message
– The delay clause is triggered when its time limit is reached

JAVA THREADS
The concurrent units in Java are methods named run, whose code can be in
concurrent execution with other such methods (of other objects) and with the
main method.
The process in which the run methods execute is called a thread.
Java’s threads are lightweight tasks, which means that they all run in the same
address space.

class MyThread extends Thread
{
    public void run()
    {
        // code to be executed concurrently goes here
    }
}

Thread myTh = new MyThread();
myTh.start();

Controlling Thread Execution


• The Thread class has several methods to control the execution of threads
– The yield method is a request from the running thread to voluntarily
surrender the processor
– The sleep method can be used by the caller of the method to block the
thread
– The join method is used to force a method to delay its execution until
the run method of another thread has completed its execution
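
A minimal Java sketch of these calls follows (the class names Worker and
JoinDemo are illustrative): sleep blocks the thread that calls it, and join
makes the calling thread wait until another thread's run method has completed.

class Worker extends Thread {
    public void run() {
        try {
            Thread.sleep(100);   // block this thread for about 100 ms
        } catch (InterruptedException e) {
            // interrupted while sleeping
        }
        System.out.println("worker finished");
    }
}

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Worker w = new Worker();
        w.start();   // begin concurrent execution of run
        w.join();    // main delays until w's run method has completed
        System.out.println("main continues after worker");
    }
}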

Thread Priorities
• A thread’s default priority is the same as that of the thread that created it
– If main creates a thread, its default priority is NORM_PRIORITY
• The Thread class defines two other priority constants, MAX_PRIORITY and
MIN_PRIORITY
• The priority of a thread can be changed with the setPriority method
Competition Synchronization with Java Threads
• A method that includes the synchronized modifier disallows any other method
from running on the object while it is in execution

public synchronized void deposit( int i) {…}
public synchronized int fetch() {…}

• The above two methods are synchronized, which prevents them from
interfering with each other
• If only a part of a method must run without interference, it can be
synchronized through a synchronized statement

synchronized (expression)
statement
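
A minimal sketch of the synchronized statement (the class and field names are
illustrative): only the marked block runs while holding the object's lock,
rather than the entire method.

class Counter {
    private int count = 0;

    public void record(int amount) {
        int doubled = amount * 2;      // work on local data needs no lock
        synchronized (this) {
            count += doubled;          // only this access to shared data is protected
        }
    }
}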
• Cooperation synchronization in Java is achieved via wait, notify, and notifyAll
methods
– All methods are defined in Object, which is the root class in Java, so
all objects inherit them
• The wait method must be called in a loop
• The notify method is called to tell one waiting thread that the event it was
waiting for has happened
• The notifyAll method awakens all of the threads on the object’s wait list
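
Putting these pieces together, here is a minimal sketch (the class and method
names are illustrative) of a shared buffer whose synchronized methods give
competition synchronization, while wait and notifyAll provide cooperation
synchronization; note that wait is called inside a loop, as required.

class SharedBuffer {
    private final int[] buffer = new int[10];
    private int count = 0, in = 0, out = 0;

    public synchronized void deposit(int value) throws InterruptedException {
        while (count == buffer.length) {   // buffer full: wait for a fetch
            wait();
        }
        buffer[in] = value;
        in = (in + 1) % buffer.length;
        count++;
        notifyAll();                       // wake threads waiting in fetch
    }

    public synchronized int fetch() throws InterruptedException {
        while (count == 0) {               // buffer empty: wait for a deposit
            wait();
        }
        int value = buffer[out];
        out = (out + 1) % buffer.length;
        count--;
        notifyAll();                       // wake threads waiting in deposit
        return value;
    }
}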

C# THREADS
• Loosely based on Java but there are significant differences
• Basic thread operations
– Any method can run in its own thread
– A thread is created by creating a Thread object
– Creating a thread does not start its concurrent execution; it must be
requested through the Start method
– A thread can be made to wait for another thread to finish with Join
– A thread can be suspended with Sleep
– A thread can be terminated with Abort

Synchronizing Threads
• Three ways to synchronize C# threads
– The Interlocked class
• Used when the only operations that need to be synchronized are
incrementing or decrementing of an integer
– The lock statement
• Used to mark a critical section of code in a thread
lock (expression) {… }
– The Monitor class
• Provides four methods that can be used to provide more sophisticated
synchronization

EXCEPTION AND EVENT HANDLING


Most computer hardware systems are capable of detecting certain run-time error
conditions, such as floating-point overflow.
Early programming languages were designed and implemented in such a way
that the user program could neither detect nor attempt to deal with such errors.
In these languages, the occurrence of such an error simply causes the program
to be terminated and control to be transferred to the operating system.
An exception is any unusual event, erroneous or not, that is detectable by either
hardware or software and that may require special processing.
The special processing that may be required when an exception is detected is
called exception handling.
This processing is done by a code unit or segment called an exception handler.
An exception is raised when its associated event occurs.
In some C-based languages, exceptions are said to be thrown, rather than raised.
Different kinds of exceptions require different exception handlers.
Detection of end-of-file nearly always requires some specific program action.

Advantages of Built-in Exception Handling


• Error detection code is tedious to write and it clutters the program
• Exception handling encourages programmers to consider many different
possible errors
• Exception propagation allows a high level of reuse of exception handling code

Design Issues
• How and where are exception handlers specified and what is their scope?
• How is an exception occurrence bound to an exception handler?
• Can information about the exception be passed to the handler?
• Where does execution continue, if at all, after an exception handler completes
its execution? (continuation vs. resumption)
• Is some form of finalization provided?
• How are user-defined exceptions specified?
• Should there be default exception handlers for programs that do not provide
their own?
• Can built-in exceptions be explicitly raised?
• Are hardware-detectable errors treated as exceptions that can be handled?
• Are there any built-in exceptions?
• How can exceptions be disabled, if at all?
[Figure: Exception handling control flow]

EXCEPTION HANDLING IN ADA


Ada exception handlers are often local to the code in which the exception can be
raised (although they can be propagated to other program units).
Because this provides them with the same referencing environment, parameters
for handlers are not necessary and are not allowed.
Therefore, if an exception is handled in a unit different from the unit that raised
the exception, no information about the exception can be passed to the handler.

• Handler form:
when exception_choice{|exception_choice} => statement_sequence
...
[when others =>
statement_sequence]
exception_choice form:
exception_name | others
• Handlers are placed at the end of the block or unit in which they occur

Binding Exceptions to Handlers


• If the block or unit in which an exception is raised does not have a handler for
that exception, the exception is propagated elsewhere to be handled
– Procedures - propagate it to the caller
– Blocks - propagate it to the scope in which it appears
– Package body - propagate it to the declaration part of the unit that
declared the package (if it is a library unit, the program is terminated)
– Task - no propagation; if it has a handler, execute it; in either case,
mark it "completed"
Continuation
• The block or unit that raises an exception but does not handle it is always
terminated (also any block or unit to which it is propagated that does not
handle it)

Other Design Choices


• User-defined Exceptions form:
exception_name_list : exception;
• Raising Exceptions form:
raise [exception_name]
– (the exception name is not required if it is in a handler--in this case, it
propagates the same exception)
– Exception conditions can be disabled with:
pragma SUPPRESS(exception_list)

PREDEFINED EXCEPTIONS:
• CONSTRAINT_ERROR - index constraints, range constraints, etc.
• NUMERIC_ERROR - numeric operation cannot return a correct value
(overflow, division by zero, etc.)
• PROGRAM_ERROR - call to a subprogram whose body has not been
elaborated
• STORAGE_ERROR - system runs out of heap
• TASKING_ERROR - an error associated with tasks

EXCEPTION HANDLING IN C++


The exception handling of C++ was accepted by the ANSI C++ standardization
committee in 1990 and subsequently found its way into C++ implementations.
The design is based in part on the exception handling of CLU, Ada, and ML.
One major difference between the exception handling of C++ and that of Ada is
the absence of predefined exceptions in C++.
C++ uses a special construct that is introduced with the reserved word try for
this purpose.
A try construct includes a compound statement called the try clause and a list
of exception handlers.
• Exception Handlers Form:
try
{
// code that is expected to raise an exception
}
catch (formal parameter)
{
// handler code
}
...
catch (formal parameter)
{
// handler code
}

The Catch Function


• catch is the name of all handlers--it is an overloaded name, so the formal
parameter of each must be unique
• The formal parameter need not have a variable
– It can be simply a type name to distinguish the handler it is in from
others
• The formal parameter can be used to transfer information to the handler
• The formal parameter can be an ellipsis, in which case it handles all
exceptions not yet handled

Throwing Exceptions
• Exceptions are all raised explicitly by the statement:
throw [expression];
• The brackets are metasymbols
• A throw without an operand can only appear in a handler; when it appears, it
simply re-raises the exception, which is then handled elsewhere
• The type of the expression disambiguates the intended handler

Unhandled Exceptions
• An unhandled exception is propagated to the caller of the function in which it
is raised.
• This propagation continues to the main function
• If no handler is found, the default handler is called

Continuation
• After a handler completes its execution, control flows to the first statement
after the last handler in the sequence of handlers of which it is an element
• Other design choices
– All exceptions are user-defined
– Exceptions are neither specified nor declared
– The default handler, unexpected, simply terminates the program;
unexpected can be redefined by the user
– Functions can list the exceptions they may raise
– Without a specification, a function can raise any exception (the throw
clause)
Evaluation
• It is odd that exceptions are not named and that hardware- and system
software-detectable exceptions cannot be handled
• Binding exceptions to handlers through the type of the parameter certainly
does not promote readability

EXCEPTION HANDLING IN JAVA


• Based on that of C++, but more in line with OOP philosophy
• All exceptions are objects of classes that are descendants of the Throwable
class

Classes of Exceptions
• The Java library includes two subclasses of Throwable:
– Error
• Thrown by the Java interpreter for events such as heap
overflow
• Never handled by user programs
– Exception
• User-defined exceptions are usually subclasses of this
• Has two predefined subclasses, IOException and
RuntimeException (e.g., ArrayIndexOutOfBoundsException
and NullPointerException)

Java Exception Handlers


• Like those of C++, except every catch requires a named parameter and all
parameters must be descendants of Throwable
• Syntax of try clause is exactly that of C++
• Exceptions are thrown with throw, as in C++, but often the throw includes the
new operator to create the object, as in: throw new MyException();

Binding Exceptions to Handlers


• Binding an exception to a handler is simpler in Java than it is in C++
– An exception is bound to the first handler whose parameter is the same
class as the thrown object or an ancestor of it
• An exception can be handled and rethrown by including a throw in the handler
(a handler could also throw a different exception)

Continuation:
• If no handler is found in the try construct, the search is continued in the
nearest enclosing try construct, etc.
• If no handler is found in the method, the exception is propagated to the
method’s caller
• If no handler is found (all the way to main), the program is terminated
• To ensure that all exceptions are caught, a handler can be included in any try
construct that catches all exceptions
– Simply use an Exception class parameter
– Of course, it must be the last in the try construct

Checked and Unchecked Exceptions


• The Java throws clause is quite different from the throw clause of C++
• Exceptions of class Error and RuntimeException and all of their descendants
are called unchecked exceptions; all other exceptions are called checked
exceptions.
• Checked exceptions that may be thrown by a method must be either:
– Listed in the throws clause, or
– Handled in the method
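
A minimal sketch with a hypothetical checked exception class MyException: the
withdraw method lists it in a throws clause, while tryWithdraw handles it
locally instead.

class MyException extends Exception {
    public MyException(String message) {
        super(message);
    }
}

class Account {
    private int balance = 0;

    // A checked exception that may be thrown must be listed in the throws clause...
    public void withdraw(int amount) throws MyException {
        if (amount > balance) {
            throw new MyException("insufficient funds");
        }
        balance -= amount;
    }

    // ...or handled inside the method itself.
    public void tryWithdraw(int amount) {
        try {
            withdraw(amount);
        } catch (MyException e) {
            System.out.println("Handled: " + e.getMessage());
        }
    }
}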

The finally Clause


• Can appear at the end of a try construct
• Form:
finally
{
...
}
• Purpose: To specify code that is to be executed, regardless of what happens in
the try construct
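
A minimal sketch of the finally clause (the file name data.txt is illustrative):
the finally block closes the reader whether or not an IOException was raised in
the try clause.

import java.io.FileReader;
import java.io.IOException;

public class FinallyDemo {
    public static void main(String[] args) {
        FileReader reader = null;
        try {
            reader = new FileReader("data.txt");
            System.out.println(reader.read());   // may throw IOException
        } catch (IOException e) {
            System.out.println("I/O problem: " + e.getMessage());
        } finally {
            // executed whether or not an exception was raised
            if (reader != null) {
                try { reader.close(); } catch (IOException e) { /* ignore */ }
            }
        }
    }
}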

Assertions
• Statements in the program declaring a boolean expression regarding the
current state of the computation.
• When the expression evaluates to true, nothing happens.
• When it evaluates to false, an AssertionError exception is thrown.
• Can be disabled during runtime without program modification or
recompilation
• Two forms
– assert condition;
– assert condition: expression;
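
A minimal sketch showing both forms (the variable names are illustrative);
assertions are disabled by default and enabled with the -ea flag when running
the program.

public class AssertDemo {
    public static void main(String[] args) {
        int balance = 100;
        int withdrawal = 40;

        assert withdrawal >= 0;                                     // first form
        balance -= withdrawal;
        assert balance >= 0 : "balance went negative: " + balance;  // second form

        System.out.println(balance);
    }
}
// Run with assertions enabled: java -ea AssertDemo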

INTRODUCTION TO EVENT HANDLING


Event handling is similar to exception handling.
In both cases, the handlers are implicitly called by the occurrence of something,
either an exception or an event.
While exceptions can be created either explicitly by user code or implicitly by
hardware or a software interpreter, events are created by external actions, such
as user interactions through a graphical user interface (GUI).
• An event is a notification that something specific has occurred, such as a
mouse click on a graphical button.
• An event is created by an external action such as a user interaction through a
GUI.
• An event handler is a segment of code that is executed in response to the
appearance of an event.
Event handlers enable a program to be responsive to user actions.
Another common use of event handlers is to check for simple errors and
omissions in the elements of a form, either when they are changed or when the
form is submitted to the Web server for processing.
Using event handling on the browser to check the validity of form data saves the
time of sending that data to the server, where their correctness then must be
checked by a server-resident program or script before they can be processed.
This kind of event-driven programming is often done using a client-side
scripting language, such as JavaScript.

Event Handling with Java

The Java Event Model


• User interactions with GUI components create events that can be caught by
event handlers, called event listeners
• An event generator tells a listener of an event by sending a message
• An interface is used to make event-handling methods conform to a standard
protocol
• A class that implements a listener must implement an interface for the listener
• One class of events is ItemEvent, which is associated with the event of
clicking a checkbox, a radio button, or a list item
• The ItemListener interface prescribes a method, itemStateChanged, which is a
handler for ItemEvent events
• The listener is registered on the component with addItemListener.
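
A minimal Swing sketch (the frame title and checkbox label are illustrative): an
anonymous ItemListener is registered on a JCheckBox with addItemListener, and
its itemStateChanged method handles the ItemEvent raised when the box is
clicked.

import java.awt.event.ItemEvent;
import java.awt.event.ItemListener;
import javax.swing.JCheckBox;
import javax.swing.JFrame;

public class CheckboxDemo {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Item events");
        JCheckBox bold = new JCheckBox("Bold");

        // Register the event handler (listener) with the event generator.
        bold.addItemListener(new ItemListener() {
            public void itemStateChanged(ItemEvent e) {
                if (e.getStateChange() == ItemEvent.SELECTED) {
                    System.out.println("Bold selected");
                } else {
                    System.out.println("Bold deselected");
                }
            }
        });

        frame.add(bold);
        frame.pack();
        frame.setVisible(true);
    }
}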

Event Handling in C#
Event handling in C# (and in the other .NET languages) is similar to that of
Java.
.NET provides two approaches to creating GUIs in applications: the original
Windows Forms and the more recent Windows Presentation Foundation.
All C# event handlers have the same protocol: the return type is void and the
two parameters are of types object and EventArgs.
