
Introduction To Programming Methodology


Introduction to Programming Methodologies

Deals with the analysis, design and implementation of programs
Step Wise Refinement

– In order to solve a problem, we first define the problem. The second step is to design an algorithm to solve that problem.
– Writing and executing programs and then optimizing them may be effective for small programs.
– But for a large program, each part of the program must be well organized before the program is written.
– A few refinement steps are involved when a problem is converted into a program; this method is called the stepwise refinement method.
– There are three stages in the refinement process.
– In the first stage, modeling, we try to represent the problem using an appropriate mathematical model such as a graph or tree. At this stage, the solution to the problem is an algorithm expressed very informally.
– At the next stage, the algorithm is written in a pseudo-language (or formal algorithm), that is, a mixture of programming language constructs and less formal English statements. The operations to be performed on the various types of data become fixed.
– In the final stage we choose an implementation for each abstract data type and write the procedures for the various operations on that type. The remaining informal statements in the pseudo-language algorithm are replaced by C/C++ (or any other programming language) code.
Programming Style

– Procedure Oriented Programming


Procedural programming is a paradigm based on the concept of using procedures. A procedure (sometimes also called a subprogram, routine or method) is a sequence of commands to be executed. Any procedure can be called from any point within the program, including from other procedures or even from itself (resulting in recursion).
Modular Programming

– The program is progressively decomposed into smaller partitions called modules.


– A module may be decomposed into successively subordinate modules. Conversely, a number of modules can be combined to form a superior module.
– Sub-modules are located elsewhere in the program, and the superior module, whenever necessary, makes a reference to a subordinate module and calls for its execution. This activity on the part of the superior module is known as calling.
Steps needed to develop a modular program
– 1. Define the problem
– 2. Outline the steps needed to achieve the goal
– 3. Decompose the problem into subtasks
– 4. Prototype a subprogram for each sub task
– 5. Repeat steps 3 and 4 for each subprogram until further decomposition seems counterproductive
Modular Programming is heavily procedural. The focus is
entirely on writing code (functions). Data is passive in Modular
Programming.
Advantages of modular programming
1. Reduces the complexity of the entire problem
2. Avoids duplication of code
3. Debugging the program is easier and more reliable
4. Improves performance
5. Reusability: modules can be used in other programs without rewriting and retesting
6. Modular programs improve the portability of a program
7. It reduces the development work
– Two methods may be used for modular programming. They are known as top-down and bottom-up.
Top-down modular programming
– The principles of top-down design dictate that a program should be divided into a main module and its related modules. Each module should also be divided into sub-modules according to software engineering and programming style. The division into modules proceeds until a module consists only of elementary processes that are intrinsically understood and cannot be further subdivided.
Bottom-up modular programming
– Bottom-up algorithm design is the opposite of top-down design. It refers to a style of programming where an application is constructed starting with existing primitives of the programming language, gradually building more and more complicated features, until all of the application has been written. That is, the design starts with specific modules and builds them into more complex structures, ending at the top. The bottom-up method is widely used for testing, because each of the lowest-level functions is written and tested first. This testing is done by special test functions that call the low-level functions, providing them with different parameters and examining the results for correctness. Once the lowest-level functions have been tested and verified to be correct, the next level of functions may be tested. Since the lowest-level functions have already been tested, any detected errors are probably due to the higher-level functions. This process continues, moving up the levels, until finally the main function is tested.
Structured Programming

– Structured programming uses the following constructs:


– 1. Sequence of sequentially executed statements
– 2. Conditional execution of statements (i.e., “if” statements)
– 3. Looping or iteration (i.e., “for, do...while, and while” statements)
– 4. Structured subroutine calls (i.e., functions)
– In particular, the following language usage is forbidden:
– • “goto” statements
– • “break” or “continue” out of the middle of loops
– • Multiple exit points from a function/procedure/subroutine (i.e., multiple “return” statements)
– • Multiple entry points to a function/procedure/subroutine/method

– In this style of programming there is a great risk that implementation details of
many data structures have to be shared between functions, and thus globally
exposed. This in turn tempts other functions to use these implementation
details; thereby creating unwanted dependencies between different parts of
the program.
Advantages

– 1. Clarity: structured programs have a clear and logical pattern in their control structure, and this leads to a tremendous increase in programming productivity.
– 2. Another key to structured programming is that each block of code has a single entry point and a single exit point, so we can break up a long sequence of code into modules.
– 3. Maintenance: the clarity and modularity inherent in structured programming are of great help in finding an error and redesigning the required section of code.

Object oriented programming

– The major motivating factor in the invention of the object-oriented approach is to remove some of the flaws encountered in the procedural or modular approach. OOP treats data as a critical element in program development and does not allow it to flow freely around the system. It ties data more closely to the functions that operate on it, and protects it from accidental modification by outside functions. OOP allows decomposition of a problem into a number of entities called objects and then builds data and functions around these objects.
– The data of an object can be accessed only by the functions associated with that object. However, functions of one object can access the functions of other objects.


– • Emphasis is on data rather than procedure.
– • Programs are divided into what are known as objects.
– • Data structures are designed such that they characterize the objects.
– • Functions that operate on the data of an object are tied together in the data structure.
– • Data is hidden and cannot be accessed by external functions.
– • Objects may communicate with each other through functions.
– • New data and functions can be easily added whenever necessary.
– • Follows a bottom-up approach in program design.


Analysis of Algorithm

After designing an algorithm, it has to be checked and its correctness needs to be verified; this is done by analyzing the algorithm.
– The algorithm can be analyzed by tracing all step-by-step instructions, reading
the algorithm for logical correctness, and testing it on some data using
mathematical techniques to prove it correct.
– Another type of analysis is to analyze the simplicity of the algorithm. (However, the simplest and most straightforward way of solving a problem may not always be the best one.)
– Moreover, there may be more than one algorithm to solve a problem.
– The choice of a particular algorithm depends on the following performance analysis and measurements:
– Space Complexity
– Time complexity
Space Complexity

– The space complexity of an algorithm or program is the amount of memory it needs to run to completion.
– The space needed by a program consists of the following components.
– Instruction space: space needed to store the executable version of the program; it is fixed.
– Data space: space needed to store all constants and variable values; it has two further components:

– • Environment stack space: this space is needed to store the information required to resume suspended (partially completed) functions.
– Each time a function is invoked, the following data is saved on the environment stack:
– (a) Return address: i.e., the point from which execution resumes after completion of the called function.
– (b) Values of all local variables and the values of the formal parameters of the function being invoked.

– The amount of space needed by a recursive function is called the recursion stack space. For each recursive function, this space depends on the space needed by the local variables and the formal parameters. In addition, it depends on the maximum depth of the recursion, i.e., the maximum number of nested recursive calls.
Time Complexity

– The time complexity of an algorithm or a program is the amount of time it


needs to run to completion.
– The exact time will depend on the implementation of the algorithm, the programming language, the optimizing capabilities of the compiler used, the CPU speed, and other hardware characteristics/specifications.
– To measure the time complexity accurately, we have to count all sorts of
operations performed in an algorithm.
– If we know the time for each one of the primitive operations performed in a
given computer, we can easily compute the time taken by an algorithm to
complete its execution.
– This time will vary from machine to machine.
– By analyzing an algorithm, it is hard to come up with the exact time required.
– To find the exact time complexity, we would need to know the exact instructions executed by the hardware and the time required for each instruction.
– The time complexity also depends on the amount of data input to an algorithm.
– But we can calculate the order of magnitude of the time required.
– That is, our intention is to estimate the execution time of an algorithm irrespective of the machine on which it will be run.

– Here, the more sophisticated method is to identify the key operations and
count such operations performed till the program completes its execution.
– A key operation in our algorithm is an operation that takes the maximum time among all possible operations in the algorithm.
– Such an abstract, theoretical approach is not only useful for discussing and comparing algorithms, but also for improving solutions to practical problems.
– The time complexity can now be expressed as a function of the number of key operations performed.
– Since the running time of an algorithm depends on the input data, there are three cases to analyze:
– 1. Best case
– 2. Average case
– 3. Worst case
– The best case is the amount of time a program might be expected to take on the best possible input data. The average case is the amount of time a program might be expected to take on typical (or average) input data. The worst case is the amount of time a program would take on the worst possible input configuration.
Frequency Count

– The frequency count method can be used to analyze a program. Here we assume that every statement takes the same constant amount of time for its execution. Hence, determining the time complexity of a given program is just a matter of summing the frequency counts of all the statements of that program.
– The execution time of an algorithm depends on the number of instructions executed.
– Consider the following algorithm fragment:
– for i = 1 to n do
– sum = sum + i ;
– The for-loop test executes n+1 times, for i values 1, 2, ..., n, n+1. Each instruction in the body of the loop is executed once for each value of i = 1, 2, ..., n. So the number of steps executed is 2n+1.
– Consider another algorithm fragment:
– for i = 1 to n do
– for j = 1 to n do
– k = k +1
– From the previous example, the number of instructions executed in the inner loop, which is the body of the outer loop, is 2n+1.
– The total number of instructions executed is
– (n+1) + n(2n+1)
Measuring time complexity in absolute time units has the following problems:
– The time required for an algorithm depends on the number of instructions executed, which is a complex polynomial.
– The execution time of an instruction depends on the computer's power, since different computers take different amounts of time for the same instruction.
– Different types of instructions take different amounts of time on the same computer.
Complexity analysis abstracts away these machine-dependent factors. In this approach, we assume every instruction takes a constant amount of time to execute.
Asymptotic bounds, expressed as polynomials, are used to estimate the number of instructions executed by an algorithm. Three main asymptotic order notations are used in practice:
Big-Θ notation
– f(n) = Θ(g(n)) iff there are three positive constants c1, c2 and n0 such that c1|g(n)| ≤ |f(n)| ≤ c2|g(n)| for all n ≥ n0.
– If f(n) is nonnegative, we can simplify the last condition to 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0. We then say that “f(n) is theta of g(n).”
– As n increases, f(n) grows at the same rate as g(n).
– In other words, g(n) is an asymptotically tight bound on f(n).


n(n+1)/2 = Θ(n²)
– This is because we can find constants c1 = 1/2 and c2 = 1 such that
– (1/2)n² ≤ n(n+1)/2 ≤ n² for all n ≥ n0 = 2
– e.g., at n = 2: 2 ≤ 3 ≤ 4, which satisfies the inequality
n² + 5n + 7 = Θ(n²)
– This is because we can find constants c1 = 1, c2 = 13 and n0 = 1 such that
– n² ≤ n² + 5n + 7 ≤ 13n² for all n ≥ 1
Big-O Notation
f(n) = O(g(n)) iff there are two positive constants c and n0 such that |f(n)| ≤ c|g(n)| for all n ≥ n0.
If f(n) is nonnegative, we can simplify the last condition to 0 ≤ f(n) ≤ c g(n) for all n ≥ n0. We then say that “f(n) is big-O of g(n).”
As n increases, f(n) grows no faster than g(n).
In other words, g(n) is an asymptotic upper bound on f(n).
– 2n² + 5n + 6 = O(n²)
– This is because we can find constants c = 13 and n0 = 1 such that
– 2n² + 5n + 6 ≤ 13n² for all n ≥ 1
– or, alternatively, constants c = 6 and n0 = 2 such that
– 2n² + 5n + 6 ≤ 6n² for all n ≥ 2
Big-Ω notation
– f(n) = Ω(g(n)) iff there are two positive constants c and n0 such that |f(n)| ≥ c|g(n)| for all n ≥ n0.
– If f(n) is nonnegative, we can simplify the last condition to 0 ≤ c g(n) ≤ f(n) for all n ≥ n0.
– We then say that “f(n) is omega of g(n).”
– As n increases, f(n) grows no slower than g(n). In other words, g(n) is an asymptotic lower bound on f(n).

– n³ = Ω(n²)
– This is because we can find constants c = 1 and n0 = 1 such that n³ ≥ n² for all n ≥ 1.
– n³ + 4n² = Ω(n²)
– This is because we can find constants c = 1 and n0 = 1 such that n³ + 4n² ≥ n² for all n ≥ 1.
Growth analysis of an algorithm
– Functions that involve n as an exponent, e.g., 2^n, n^n and n!, are called exponential functions; algorithms with exponential running time are too slow except for small input sizes. Functions whose growth is less than or equal to n^c (where c is a constant), e.g., n^3, n^2, n log n, n and log n, are said to be polynomial.
– Algorithms with polynomial running time can solve reasonably sized problems if the constant in the exponent is small.
