
UNIT 3

1 Software Design

Software design is a mechanism to transform user requirements into a suitable form, which helps the programmer in software coding and implementation. It deals
with representing the client's requirements, as described in the SRS (Software Requirement Specification) document, in a form that can be easily implemented using a
programming language.

The software design phase is the step in the SDLC (Software Development Life Cycle) that moves the concentration from the problem domain to the solution domain. In
software design, we consider the system to be a set of components or modules with clearly defined behaviours and boundaries.

Objectives of Software Design

Following are the purposes of Software design:

 Correctness: Software design should be correct as per requirement.


 Completeness: The design should include all the required components, such as data structures, modules, and external interfaces.
 Efficiency: Resources should be used efficiently by the program.
 Flexibility: The design should be able to accommodate changing requirements.
 Consistency: There should not be any inconsistency in the design.
 Maintainability: The design should be simple enough that it can be easily maintained by other designers.

2 Basic Concept of Software Design

Software Design Concepts


A concept is a principal idea or invention that comes into our mind or thought in order to understand something. A software design concept is simply
the idea or principle behind a design: it describes how you plan to solve the problem of designing software, and the logic or thinking behind how you will design the
software. Design concepts allow the software engineer to create the model of the system, software, or product that is to be developed or built, and they provide
a supporting and essential structure or model for developing the right software. There are many concepts of software design, and some of them are given below:

 Abstraction (Hide Irrelevant data): Abstraction simply means hiding the details to reduce complexity and increase efficiency or quality. Different levels of
abstraction are necessary and must be applied at each stage of the design process so that any errors present can be removed and the software solution refined. At a
higher level of abstraction the solution is described in broad terms that cover a wide range of different things, while at the lower levels of abstraction a more detailed
description of the solution is given.
 Modularity (subdivide the system): Modularity simply means dividing the system or project into smaller parts to reduce its complexity. In the same way, modularity
in design means subdividing a system into smaller modules so that these modules can be created independently and then used in different systems to perform different
functions. Dividing software into modules is necessary because monolithic software is hard for software engineers to grasp, so modularity in design has become a
trend and is also important. If the system is built from only a few large components it remains complex and requires a lot of effort (cost), but if it is divided into
well-chosen smaller components, the effort needed for each part, and hence the overall cost, becomes small.
 Architecture (design a structure of something): Architecture simply means a technique to design the structure of something. In software design, architecture is a
concept that focuses on the various elements of the structure and the data they use; these components interact with each other and share the data of the structure.
 Refinement (removes impurities): Refinement simply means refining something to remove any impurities present and increase its quality. The refinement
concept of software design is a process of developing or presenting the software or system in a detailed manner, which means elaborating the system or software.
Refinement is necessary to find any errors that are present and then remove them.
 Pattern (a Repeated form): A pattern simply means a repeated form or design in which the same shape is repeated several times to form a pattern. The pattern in
the design process means the repetition of a solution to a common recurring problem within a certain context.
 Information Hiding (Hide the Information): Information hiding simply means hiding information so that it cannot be accessed by an unwanted party. In
software design, information hiding is achieved by designing the modules in such a manner that the information gathered or contained in one module is hidden and
cannot be accessed by any other module (a minimal code sketch of this idea is given after this list).
 Refactoring (Reconstruct something): Refactoring simply means reconstructing something in such a way that it does not affect the behaviour of any other
features. Refactoring in software design means reconstructing the design to reduce complexity and simplify it without changing its behaviour or functions.
Fowler defined refactoring as “the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its
internal structure”.
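As a small illustration of information hiding, the C++ sketch below uses an invented Account class (not from any particular system): the internal balance is private, so other modules can only reach it through the public interface, and the validation rule stays inside the module.

#include <iostream>

// Information hiding: the internal state (balance) is hidden from other
// modules; it can only be read or changed through the public methods.
class Account {
public:
    explicit Account(double openingBalance) : balance(openingBalance) {}

    void deposit(double amount) {
        if (amount > 0) balance += amount;   // the validation rule stays inside the module
    }

    double getBalance() const { return balance; }

private:
    double balance;   // hidden: no other module can touch this directly
};

int main() {
    Account acc(100.0);
    acc.deposit(25.0);
    std::cout << "Balance: " << acc.getBalance() << '\n';   // prints 125
    return 0;
}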

3 Architectural Design

The software needs an architectural design to represent the design of the software. IEEE defines architectural design as “the process of defining a collection of
hardware and software components and their interfaces to establish the framework for the development of a computer system.” The software that is built for
computer-based systems can exhibit one of these many architectural styles.

Each architectural style describes a system category that consists of:


 A set of components (e.g., a database, computational modules) that perform a function required by the system.
 A set of connectors that help in coordination, communication, and cooperation between the components.
 Constraints that define how components can be integrated to form the system.
 Semantic models that help the designer understand the overall properties of the system.
The use of architectural styles is to establish a structure for all the components of the system.

Taxonomy of Architectural Styles

1] Data centered architectures:


 A data store will reside at the center of this architecture and is accessed frequently by the other components that update, add, delete, or modify the data present
within the store.
 The figure illustrates a typical data-centered style. The client software accesses a central repository. In a variation of this approach, the repository is transformed
into a blackboard that sends notifications to client software when data related to, or of interest to, a client changes.
 This data-centered architecture promotes integrability: existing components can be changed and new client components can be added to the
architecture without concern about other clients.
 Data can be passed among clients using the blackboard mechanism.

Advantages of Data centered architecture:


 The repository of data is independent of the clients.
 Clients work independently of each other.
 It may be simple to add additional clients.
 Modification can be very easy.
Data centered architecture

2] Data flow architectures:


 This kind of architecture is used when input data is transformed into output data through a series of computational manipulative components.
 The figure represents a pipe-and-filter architecture: it has a set of components, called filters, connected by pipes.
 Pipes are used to transmit data from one component to the next.
 Each filter works independently of its neighbouring filters; it is designed to accept data input of a certain form and to produce data output of a specified form for
the next filter. The filters don't require any knowledge of the working of neighbouring filters (a small code sketch of this style is given at the end of this subsection).
 If the data flow degenerates into a single line of transforms, it is termed batch sequential. This structure accepts a batch of data and then applies a series
of sequential components to transform it.
Advantages of Data Flow architecture:
 It encourages upkeep, repurposing, and modification.
 With this design, concurrent execution is supported.
Disadvantage of Data Flow architecture:
 It frequently degenerates into a batch sequential system.
 Data flow architecture is not well suited to applications that require greater user engagement.
 It is not easy to coordinate two different but related streams

Data Flow architecture
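A minimal C++ sketch of the pipe-and-filter idea (the filters below are invented examples): each filter accepts input of one form and produces output for the next filter, without knowing anything about its neighbours; the "pipe" is simply the hand-off of one filter's output to the next filter's input.

#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>

// Filter 1: convert the text to upper case.
std::string toUpper(std::string s) {
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return static_cast<char>(std::toupper(c)); });
    return s;
}

// Filter 2: remove spaces from the text.
std::string removeSpaces(std::string s) {
    s.erase(std::remove(s.begin(), s.end(), ' '), s.end());
    return s;
}

int main() {
    std::string input = "data flow architecture";
    // input -> filter 1 -> filter 2 -> output
    std::string output = removeSpaces(toUpper(input));
    std::cout << output << '\n';   // prints DATAFLOWARCHITECTURE
    return 0;
}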

3] Call and Return architectures


It is used to create a program that is easy to scale and modify. Many sub-styles exist within this category. Two of them are explained below.
 Remote procedure call architecture: The components of a main-program/subprogram architecture are distributed across multiple computers
on a network.
 Main program or subprogram architectures: The main program decomposes into a number of subprograms or functions arranged in a control hierarchy; the main
program invokes a number of subprograms, which can in turn invoke other components.

4] Object Oriented architecture


The components of a system encapsulate data and the operations that must be applied to manipulate the data. The coordination and communication between the
components are established via message passing.
Characteristics of Object Oriented architecture:
 Objects protect the system's integrity.
 An object is unaware of the internal representation of other objects.
Advantages of Object Oriented architecture:
 It enables the designer to decompose a problem into a collection of autonomous objects.
 Other objects are not aware of the implementation details of an object, allowing changes to be made without having an impact on other objects.

5] Layered architecture
 A number of different layers are defined, with each layer performing a well-defined set of operations. Moving inwards, each layer performs operations that become
progressively closer to the machine instruction set.
 At the outer layer, components receive the user interface operations; at the inner layers, components perform the operating system
interfacing (communication and coordination with the OS).
 The intermediate layers provide utility services and application software functions.
 One common example of this architectural style is OSI-ISO (Open Systems Interconnection-International Organisation for Standardisation) communication
system.
4 Low Level Design

LLD, or Low-Level Design, is a phase in the software development process where detailed system components and their interactions are specified. It involves
converting the high-level design into a more detailed blueprint, addressing specific algorithms, data structures, and interfaces. LLD serves as a guide for developers
during coding, ensuring the accurate and efficient implementation of the system’s functionality. LLD describes class diagrams with the help of methods and
relations between classes and program specs.
Remember: Low-level designing is also known as object-level designing or micro-level or detailed designing.

How is LLD different from HLD


As studied, High Level Design (HLD) is a general system design where we make trade-offs between different frameworks, components, and databases, and
we choose the best option considering what the business needs and how the system should work, both in terms of functional and non-functional aspects. Here we define
the components and how these components will communicate with one another. Hence, at this level we are concerned with generic decisions such as the following, and
not with the code.
 Selection of components, platforms, and different tools.
 Database design.
 Brief description of relationships between services and modules.

How to form LLD from HLD?


As studied above, the input for framing a low-level design (LLD) is the HLD. In LLD we take care of what our components will look like, the structure possessed by
different entities, and the responsibilities (supported operations) each entity will have. For this conversion we use Unified Modelling Language (UML)
diagrams; in addition to these diagrams, we apply OOPS principles and SOLID principles while designing. Hence, using these 3 paradigms we can convert any HLD into an
LLD that can be implemented.

Roadmap to Low-level Designing


In order to bridge the concepts of LLD with real code, let us understand how to design any low-level system via the following steps:

1. Object-oriented Principles
The user requirement is processed by using concepts of OOPS programming. Hence it is recommended to have a strong grip on OOPS concepts before moving
ahead with designing any low-level system. The 4 pillars of object-oriented programming are a must before starting to learn low-level designing, and the
programmer should be very well versed with them, namely:
 Inheritance
 Encapsulation
 Polymorphism
 Abstraction
Within polymorphism, we should be clear about compile-time and run-time polymorphism. Programmers should be absolutely clear about the OOPS concepts, right
down to classes and objects, because OOPS is the foundation on which the low-level design of any system is based. Acing low-level design is ‘extremely subjective’
because we have to use these concepts optimally while coding to build a low-level system by implementing software entities (classes, functions, modules,
etc.).

2. Process of analyzing and design


This is the analysis phase, which is our 1st step, where we mould real-world problems into object-world problems using OOPS concepts and SOLID principles.

3. Design Patterns
Now the implementation of our above object oriented problem is carried out with the help of design patterns. Design patterns are reusable solutions to common
problems encountered in software design. They provide a structured approach to design by capturing best practices and proven solutions, making it easier to develop
scalable, maintainable, and efficient software. Design patterns help streamline the development process, promote code reuse, and enhance the overall quality of
software systems.
Each pattern describes a problem that occurs over and over again in the environment, together with a solution that can be applied repeatedly without redoing the same work.
Why is there a need for design patterns?
These problems have occurred again and again, and the corresponding solutions have been laid out. They have been faced and solved by expert
designers in the programming world, and the solutions have proved robust over time, saving a lot of time and energy. Hence the complex and classic problems in the
software world are solved by tried and tested solutions.
Tip: It is strongly recommended to have good understanding of common design patterns to have a good hold over low-level designing.
Different Types of Design Patterns
There are many types of design patterns; let us list 4 types of design patterns that are used extensively worldwide:
 Factory Design Pattern
 Abstract factory Pattern
 Singleton Pattern
 Observer Pattern
It is also recommended to study the 5 design patterns below; they are required less often, but learning them gives a basic understanding of design
patterns. (A minimal sketch of the Singleton pattern, one of the patterns listed above, is given after this list.)
 Builder Pattern
 Chain of responsibility Pattern
 Adapter Pattern
 Facade Pattern
 Flyweight Pattern
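To make one of the patterns above concrete, here is a minimal C++ sketch of the Singleton pattern (the Logger class is an invented example): the constructor is private, so every caller shares the single instance returned by getInstance().

#include <iostream>
#include <string>

// Singleton: the constructor is private, so the only way to obtain an object
// is through getInstance(), which always returns the same instance.
class Logger {
public:
    static Logger& getInstance() {
        static Logger instance;   // created once, on first use (thread-safe since C++11)
        return instance;
    }

    void log(const std::string& msg) { std::cout << "[LOG] " << msg << '\n'; }

    Logger(const Logger&) = delete;            // copying is forbidden
    Logger& operator=(const Logger&) = delete;

private:
    Logger() = default;                        // nobody else can construct it
};

int main() {
    Logger::getInstance().log("application started");
    Logger& a = Logger::getInstance();
    Logger& b = Logger::getInstance();
    std::cout << (&a == &b) << '\n';           // prints 1: both references are the same object
    return 0;
}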

4. UML Diagram
There are 2 types of UML diagrams:
 Structural UML diagrams: These diagrams basically define how different entities and objects are structured and the relationships between
them. They are helpful in representing how components will appear with respect to structure.
 Behavioural UML diagrams: These diagrams basically define the different operations that the system supports; different behavioural UML
diagrams showcase different behavioural aspects of the system.

5. SOLID Principles
These are a set of 5 principles (rules) that are followed, as the requirements of the system demand, in order to design optimally and to write scalable, flexible,
maintainable, and reusable code:
 Single-responsibility principle (SRP)
 Open-closed principle (OCP)
 Liskov’s Substitution Principle(LSP)
 Interface Segregation Principle (ISP)
 Dependency Inversion Principle (DIP)
It’s important to keep in mind that SOLID principles are just guidelines and not strict rules to be followed. The key is to strike a balance between adhering to these
principles and considering the specific needs and constraints of your business requirement. (A small sketch of the open-closed principle is given below.)
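As a brief, hedged illustration of one of these principles, the C++ sketch below shows the open-closed principle with invented Shape classes: totalArea() is closed for modification, yet new shapes can be added by extension without touching the existing code.

#include <iostream>
#include <memory>
#include <vector>

// Open-closed principle: adding a new shape (e.g., Triangle) later requires no
// change to Shape, Rectangle, Circle, or totalArea().
struct Shape {
    virtual double area() const = 0;
    virtual ~Shape() = default;
};

struct Rectangle : Shape {
    double w, h;
    Rectangle(double w, double h) : w(w), h(h) {}
    double area() const override { return w * h; }
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159 * r * r; }
};

// Works for every current and future Shape without modification.
double totalArea(const std::vector<std::unique_ptr<Shape>>& shapes) {
    double sum = 0;
    for (const auto& s : shapes) sum += s->area();
    return sum;
}

int main() {
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Rectangle>(2, 3));
    shapes.push_back(std::make_unique<Circle>(1));
    std::cout << totalArea(shapes) << '\n';   // 6 + 3.14159 = 9.14159
    return 0;
}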

5 Modularization

Modularization is a technique to divide a software system into multiple discrete and independent modules, which are expected to be capable of carrying out task(s)
independently. These modules may work as basic constructs for the entire software. Designers tend to design modules such that they can be executed and/or compiled
separately and independently.

Modular design naturally follows the rules of the ‘divide and conquer’ problem-solving strategy, and there are many other benefits attached to the
modular design of a software.

Advantage of modularization:

 Smaller components are easier to maintain


 Program can be divided based on functional aspects
 Desired level of abstraction can be brought in the program
 Components with high cohesion can be re-used again
 Concurrent execution can be made possible
 Desirable from the security aspect

6 Design Structure Charts

Structure Chart represents the hierarchical structure of modules. It breaks down the entire system into the lowest functional modules and describes the functions and
sub-functions of each module of the system in greater detail.
What is a Structure Chart?
Structure Chart partitions the system into black boxes (functionality of the system is known to the users, but inner details are unknown).
 Inputs are given to the black boxes and appropriate outputs are generated.
 Modules at the top level call the modules at the lower level.
 Components are read from top to bottom and left to right.
 When a module calls another, it views the called module as a black box, passing the required parameters and receiving results.
Symbols in Structured Chart

1. Module
It represents the process or task of the system. It is of three types:
 Control Module: A control module branches to more than one sub-module.
 Sub Module: A sub-module is a module which is part (child) of another module.
 Library Module: Library modules are reusable and can be invoked from any module.
2. Conditional Call
It represents that the control module can select any one of the sub-modules on the basis of some condition.

3. Loop (Repetitive call of module)

It represents the repetitive execution of one or more sub-modules by a module. A curved arrow represents a loop; all the sub-modules covered by the loop are
executed repeatedly.

4. Data Flow
It represents the flow of data between the modules. It is represented by a directed arrow with an empty circle at the end.

5. Control Flow
It represents the flow of control between the modules. It is represented by a directed arrow with a filled circle at the end.

6. Physical Storage
It represents the physical location where all the information is stored.

Example Structure chart for an Email server

7 Pseudo Codes
A Pseudocode is defined as a step-by-step description of an algorithm. Pseudocode does not use any programming language in its representation instead it uses
the simple English language text as it is intended for human understanding rather than machine reading.
Pseudocode is the intermediate state between an idea and its implementation(code) in a high-level language.

What is the need for Pseudocode


Pseudocode is an important part of designing an algorithm; it helps the programmer in planning the solution to the problem as well as the reader in understanding
the approach to the problem. Pseudocode is an intermediate state between algorithm and program that supports the transition of the algorithm into the
program.



How to write Pseudocode?
Before writing the pseudocode of any algorithm the following points must be kept in mind.
 Organize the sequence of tasks and write the pseudocode accordingly.
 At first, establish the main goal or the aim.

Example:
IF “1”
    print response “I AM CASE 1”
IF “2”
    print response “I AM CASE 2”
 Use appropriate naming conventions. The human tendency is to follow what we see: if a programmer goes through the pseudocode, their approach will follow it,
so the naming must be simple and distinct.
 Reserved commands or keywords must be represented in capital letters.

Example: if you are writing IF…ELSE statements, then make sure IF and ELSE are in capital letters.
 Check whether all the sections of a pseudo code are complete, finite, and clear to understand and comprehend. Also, explain everything that is going to happen in
the actual code.
 Don’t write the pseudocode in a programming language. It is necessary that the pseudocode is simple and easy to understand even for a layman or client,
minimizing the use of technical terms.

Pseudocode Examples:
1. Binary search Pseudocode:
Binary search is a searching algorithm that works only for sorted search space. It repeatedly divides the search space into half by using the fact that the search
space is sorted and checking if the desired search result will be found in the left or right half.
Example: Given a sorted array Arr[] and a value X, The task is to find the index at which X is present in Arr[].
Below is the pseudocode for Binary search.
BinarySearch(ARR, X, LOW, HIGH)
    repeat while LOW <= HIGH
        MID = (LOW + HIGH) / 2
        if (X == ARR[MID])
            return MID
        else if (X > ARR[MID])
            LOW = MID + 1
        else
            HIGH = MID - 1
    return NOT_FOUND
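A runnable C++ version of the pseudocode above is sketched below (returning -1 when X is absent is an added assumption, since the pseudocode leaves that case open):

#include <iostream>
#include <vector>

// Iterative binary search over a sorted array; mirrors the pseudocode above.
int binarySearch(const std::vector<int>& arr, int x) {
    int low = 0, high = static_cast<int>(arr.size()) - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // avoids overflow of (low + high)
        if (arr[mid] == x)
            return mid;                     // found: return the index
        else if (x > arr[mid])
            low = mid + 1;                  // search the right half
        else
            high = mid - 1;                 // search the left half
    }
    return -1;                              // x is not present
}

int main() {
    std::vector<int> arr = {2, 5, 8, 12, 16, 23, 38};
    std::cout << binarySearch(arr, 23) << '\n';   // prints 5
    return 0;
}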

2. Quick sort Pseudocode:


QuickSort is a Divide and Conquer algorithm. It picks an element as a pivot and partitions the given array around the picked pivot.
Say the last element of the array is picked as the pivot; then all elements smaller than the pivot are shifted to the left side of the pivot and all elements greater than
the pivot are shifted to its right by swapping. The same algorithm is then repeated for the left and right sides of the pivot until the whole array is sorted.
Below is the pseudocode for Quick sort
QUICKSORT(ARR, LOW, HIGH) {
    if (LOW < HIGH) {
        PIVOT = PARTITION(ARR, LOW, HIGH);
        QUICKSORT(ARR, LOW, PIVOT - 1);
        QUICKSORT(ARR, PIVOT + 1, HIGH);
    }
}
Here, LOW is the starting index and HIGH is the ending index.
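Below is a minimal C++ sketch of the same idea, with the last element taken as the pivot; the PARTITION step, which the pseudocode above leaves undefined, is written out here using the Lomuto scheme.

#include <iostream>
#include <utility>
#include <vector>

// Lomuto partition: uses the last element as the pivot and places it at its
// final sorted position, returning that position.
int partition(std::vector<int>& arr, int low, int high) {
    int pivot = arr[high];
    int i = low - 1;                       // boundary of the "smaller than pivot" region
    for (int j = low; j < high; ++j) {
        if (arr[j] < pivot)
            std::swap(arr[++i], arr[j]);
    }
    std::swap(arr[i + 1], arr[high]);      // put the pivot in place
    return i + 1;
}

void quickSort(std::vector<int>& arr, int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);        // sort the left side of the pivot
        quickSort(arr, p + 1, high);       // sort the right side of the pivot
    }
}

int main() {
    std::vector<int> arr = {10, 7, 8, 9, 1, 5};
    quickSort(arr, 0, static_cast<int>(arr.size()) - 1);
    for (int v : arr) std::cout << v << ' ';   // prints 1 5 7 8 9 10
    std::cout << '\n';
    return 0;
}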
Difference between Algorithm and Pseudocode
Algorithm:  An algorithm is used to provide a solution to a particular problem in the form of well-defined steps.
Pseudocode: A pseudocode is a step-by-step description of an algorithm in a code-like structure using plain English text.

Algorithm:  An algorithm only uses simple English words.
Pseudocode: Pseudocode also uses reserved keywords like if-else, for, while, etc.

Algorithm:  It is a sequence of steps of a solution to a problem.
Pseudocode: It is "fake code" (pseudo means fake), using a code-like structure and plain English text.

Algorithm:  There are no rules for writing algorithms.
Pseudocode: There are certain rules for writing pseudocode.

Algorithm:  An algorithm can be considered pseudocode.
Pseudocode: Pseudocode cannot be considered an algorithm.

Algorithm:  It is difficult to understand and interpret.
Pseudocode: It is easy to understand and interpret.

Difference between Flowchart and Pseudocode


Flowchart:  A flowchart is a pictorial representation of the flow of an algorithm.
Pseudocode: A pseudocode is a step-by-step description of an algorithm in a code-like structure using plain English text.

Flowchart:  A flowchart uses standard symbols (boxes, circles, arrows) for input, output, decisions, and start/stop statements.
Pseudocode: Pseudocode uses reserved keywords like if-else, for, while, etc.

Flowchart:  It is a way of visually representing data; it is a graphical representation of the algorithm for a better understanding of the code.
Pseudocode: It is "fake code" (pseudo means fake), using a code-like structure but plain English text instead of a programming language.

Flowchart:  Flowcharts are good for documentation.
Pseudocode: Pseudocode is better suited for the purpose of understanding.

8 Flow Charts

Flowcharts are nothing but the graphical representation of the data or the algorithm for a better visual understanding of the code. A flowchart displays step-by-step
solutions to a problem, algorithm, or process. It is a pictorial way of representing steps that is preferred by most beginner-level programmers to understand
algorithms, and it thus contributes to troubleshooting issues in the algorithm. A flowchart is a picture of boxes that indicates the process flow
sequentially. Since a flowchart is a pictorial representation of a process or algorithm, it is easy to interpret and understand. To draw a flowchart, certain
widely accepted rules need to be followed, and they are used by all professionals.
What is FlowChart?
A flowchart is a type of diagram that represents a workflow or process. A flowchart can also be defined as a diagrammatic representation of an algorithm, a step-
by-step approach to solving a task.
Flowchart symbols
Different types of boxes are used to make flowcharts, and all the different kinds of boxes are connected by arrow lines. Arrow lines are used to
display the flow of control. Let’s learn about each box in detail.
https://www.geeksforgeeks.org/what-is-a-flowchart-and-its-types/ (follow this)

Advantages of Flowchart
 It is the most efficient way of communicating the logic of the system.
 It acts as a guide for a blueprint during the program design.
 It also helps in the debugging process.
 Using flowcharts we can easily analyze the programs.
 flowcharts are good for documentation.

Disadvantages of Flowchart
 Flowcharts are challenging to draw for large and complex programs.
 It does not contain the proper amount of details.
 Flowcharts are very difficult to reproduce.
 Flowcharts are very difficult to modify.

Solved Examples on FlowChart


Question 1. Draw a flowchart to find the greatest number among the 2 numbers.
Solution:
Algorithm:
1. Start
2. Input 2 variables from user
3. Now check the condition If a > b, goto step 4, else goto step 5.
4. Print a is greater, goto step 6
5. Print b is greater
6. Stop
FlowChart:
Question 2. Draw a flowchart to check whether the input number is odd or even
Solution:
Algorithm:
1. Start
2. Take input a from the user
3. Now check the condition if a % 2 == 0, goto step 5. Else goto step 4
4. Now print(“number is odd”) and goto step 6
5. Print(“number is even”)
6. Stop
FlowChart:
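Since the flowchart images themselves are not reproduced here, the small C++ program below is the code equivalent of the odd/even algorithm in Question 2 (the variable name a follows the algorithm):

#include <iostream>

int main() {
    int a;
    std::cout << "Enter a number: ";
    std::cin >> a;
    if (a % 2 == 0)                      // decision box of the flowchart
        std::cout << "number is even\n";
    else
        std::cout << "number is odd\n";
    return 0;
}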

9 Coupling and Cohesion Measures

What is Coupling and Cohesion?


Coupling refers to the degree of interdependence between software modules. High coupling means that modules are closely connected and changes in one module
may affect other modules. Low coupling means that modules are independent, and changes in one module have little impact on other modules.
Coupling
Cohesion refers to the degree to which elements within a module work together to fulfill a single, well-defined purpose. High cohesion means that elements are
closely related and focused on a single purpose, while low cohesion means that elements are loosely related and serve multiple purposes.

Cohesion
Both coupling and cohesion are important factors in determining the maintainability, scalability, and reliability of a software system. High coupling and low
cohesion can make a system difficult to change and test, while low coupling and high cohesion make a system easier to maintain and improve.
Basically, design is a two-part iterative process. The first part is Conceptual Design which tells the customer what the system will do. Second is Technical Design
which allows the system builders to understand the actual hardware and software needed to solve a customer’s problem.

Conceptual design of the system:


 Written in simple language i.e. customer understandable language.
 Detailed explanation about system characteristics.
 Describes the functionality of the system.
 It is independent of implementation.
 Linked with requirement document.

Technical Design of the System:


 Hardware component and design.
 Functionality and hierarchy of software components.
 Software architecture
 Network architecture
 Data structure and flow of data.
 I/O component of the system.
 Shows interface.
Types of Coupling
Coupling is the measure of the degree of interdependence between the modules. Good software will have low coupling.

Types of Coupling
Following are the types of Coupling:
 Data Coupling: If the dependency between the modules is based on the fact that they communicate by passing only data, then the modules are said to be data
coupled. In data coupling, the components are independent of each other and communicate through data. Module communications don’t contain tramp data (a short
sketch contrasting data and control coupling is given after this list). Example: customer billing system.
 Stamp Coupling In stamp coupling, the complete data structure is passed from one module to another module. Therefore, it involves tramp data. It may be
necessary due to efficiency factors- this choice was made by the insightful designer, not a lazy programmer.
 Control Coupling: If the modules communicate by passing control information, then they are said to be control coupled. It can be bad if parameters indicate
completely different behavior and good if parameters allow factoring and reuse of functionality. Example- sort function that takes comparison function as an
argument.
 External Coupling: In external coupling, the modules depend on other modules, external to the software being developed or to a particular type of hardware. Ex-
protocol, external file, device format, etc.
 Common Coupling: The modules have shared data such as global data structures. The changes in global data mean tracing back to all modules which access that
data to evaluate the effect of the change. So it has got disadvantages like difficulty in reusing modules, reduced ability to control data accesses, and reduced
maintainability.
 Content Coupling: In a content coupling, one module can modify the data of another module, or control flow is passed from one module to the other module.
This is the worst form of coupling and should be avoided.
 Temporal Coupling: Temporal coupling occurs when two modules depend on the timing or order of events, such as one module needing to execute before
another. This type of coupling can result in design issues and difficulties in testing and maintenance.
 Sequential Coupling: Sequential coupling occurs when the output of one module is used as the input of another module, creating a chain or sequence of
dependencies. This type of coupling can be difficult to maintain and modify.
 Communicational Coupling: Communicational coupling occurs when two or more modules share a common communication mechanism, such as a shared
message queue or database. This type of coupling can lead to performance issues and difficulty in debugging.
 Functional Coupling: Functional coupling occurs when two modules depend on each other’s functionality, such as one module calling a function from another
module. This type of coupling can result in tightly-coupled code that is difficult to modify and maintain.
 Data-Structured Coupling: Data-structured coupling occurs when two or more modules share a common data structure, such as a database table or data file. This
type of coupling can lead to difficulty in maintaining the integrity of the data structure and can result in performance issues.
 Interaction Coupling: Interaction coupling occurs due to the methods of a class invoking methods of other classes. Like with functions, the worst form of
coupling here is if methods directly access internal parts of other methods. Coupling is lowest if methods communicate directly through parameters.
 Component Coupling: Component coupling refers to the interaction between two classes where a class has variables of the other class. Three clear situations
exist as to how this can happen. A class C can be component coupled with another class C1, if C has an instance variable of type C1, or C has a method whose
parameter is of type C1,or if C has a method which has a local variable of type C1. It should be clear that whenever there is component coupling, there is likely to
be interaction coupling.
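To make the distinction concrete, the short C++ sketch below (function names are illustrative, not from any particular system) contrasts data coupling, where only the required data is passed, with control coupling, where a flag from the caller steers the callee's behaviour.

#include <iostream>

// Data coupling: the module receives exactly the data it needs.
double computeTax(double amount) {
    return amount * 0.18;
}

// Control coupling: a flag from the caller switches the callee's behaviour,
// so the two modules are tied to each other's internal logic.
double computeCharge(double amount, bool applyDiscount) {
    if (applyDiscount)
        return amount * 0.90;
    return amount;
}

int main() {
    std::cout << computeTax(100.0) << '\n';            // prints 18
    std::cout << computeCharge(100.0, true) << '\n';   // prints 90
    return 0;
}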

Types of Cohesion
Cohesion is a measure of the degree to which the elements of the module are functionally related. It is the degree to which all elements directed towards performing
a single task are contained in the component. Basically, cohesion is the internal glue that keeps the module together. A good software design will have high
cohesion.
Types of Cohesion
Following are the types of Cohesion:
 Functional Cohesion: Every essential element for a single computation is contained in the component. A functional cohesion performs the task and functions. It
is an ideal situation.
 Sequential Cohesion: An element outputs some data that becomes the input for other element, i.e., data flow between the parts. It occurs naturally in functional
programming languages.
 Communicational Cohesion: Two elements operate on the same input data or contribute towards the same output data. Example- update record in the database
and send it to the printer.
 Procedural Cohesion: Elements of procedural cohesion ensure the order of execution. Actions are still weakly connected and unlikely to be reusable. Ex-
calculate student GPA, print student record, calculate cumulative GPA, print cumulative GPA.
 Temporal Cohesion: The elements are related by the timing involved. In a module with temporal cohesion, all the tasks must be executed in the same
time span. This cohesion contains the code for initializing all the parts of the system. Lots of different activities occur, all at unit time.
 Logical Cohesion: The elements are logically related and not functionally. Ex- A component reads inputs from tape, disk, and network. All the code for these
functions is in the same component. Operations are related, but the functions are significantly different.
 Coincidental Cohesion: The elements are not related(unrelated). The elements have no conceptual relationship other than location in source code. It is accidental
and the worst form of cohesion. Ex- print next line and reverse the characters of a string in a single component.
 Procedural Cohesion: This type of cohesion occurs when elements or tasks are grouped together in a module based on their sequence of execution, such as a
module that performs a set of related procedures in a specific order. Procedural cohesion can be found in structured programming languages.
 Communicational Cohesion: Communicational cohesion occurs when elements or tasks are grouped together in a module based on their interactions with each
other, such as a module that handles all interactions with a specific external system or module. This type of cohesion can be found in object-oriented programming
languages.
 Temporal Cohesion: Temporal cohesion occurs when elements or tasks are grouped together in a module based on their timing or frequency of execution, such as
a module that handles all periodic or scheduled tasks in a system. Temporal cohesion is commonly used in real-time and embedded systems.
 Informational Cohesion: Informational cohesion occurs when elements or tasks are grouped together in a module based on their relationship to a specific data
structure or object, such as a module that operates on a specific data type or object. Informational cohesion is commonly used in object-oriented programming.
 Functional Cohesion: This type of cohesion occurs when all elements or tasks in a module contribute to a single well-defined function or purpose, and there is
little or no coupling between the elements. Functional cohesion is considered the most desirable type of cohesion as it leads to more maintainable and reusable code
(a short sketch contrasting functional and coincidental cohesion is given after this list).
 Layer Cohesion: Layer cohesion occurs when elements or tasks in a module are grouped together based on their level of abstraction or responsibility, such as a
module that handles only low-level hardware interactions or a module that handles only high-level business logic. Layer cohesion is commonly used in large-scale
software systems to organize code into manageable layers.
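As a small illustration (the class names below are invented for this sketch), the first class is functionally cohesive, every member serves one purpose, while the second is coincidentally cohesive, its members have nothing in common beyond sharing a class.

#include <cmath>
#include <iostream>
#include <string>

// Functional cohesion: everything in this class serves one task, namely
// working with 2-D vectors.
struct Vector2D {
    double x, y;
    double length() const { return std::sqrt(x * x + y * y); }
};

// Coincidental cohesion (to be avoided): unrelated operations grouped together.
struct Utils {
    static void printBanner() { std::cout << "=== report ===\n"; }
    static int stringLength(const std::string& s) { return static_cast<int>(s.size()); }
    static double milesToKm(double miles) { return miles * 1.60934; }
};

int main() {
    Vector2D v{3.0, 4.0};
    std::cout << v.length() << '\n';                        // prints 5
    Utils::printBanner();
    std::cout << Utils::stringLength("cohesion") << '\n';   // prints 8
    return 0;
}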

Advantages of low coupling


 Improved maintainability: Low coupling reduces the impact of changes in one module on other modules, making it easier to modify or replace individual
components without affecting the entire system.
 Enhanced modularity: Low coupling allows modules to be developed and tested in isolation, improving the modularity and reusability of code.
 Better scalability: Low coupling facilitates the addition of new modules and the removal of existing ones, making it easier to scale the system as needed.

Advantages of high cohesion


 Improved readability and understandability: High cohesion results in clear, focused modules with a single, well-defined purpose, making it easier for developers to
understand the code and make changes.
 Better error isolation: High cohesion reduces the likelihood that a change in one part of a module will affect other parts, making it easier to locate and fix errors.
 Improved reliability: High cohesion leads to modules that are less prone to errors and that function more consistently, leading to an overall improvement in the
reliability of the system.

Disadvantages of high coupling


 Increased complexity: High coupling increases the interdependence between modules, making the system more complex and difficult to understand.
 Reduced flexibility: High coupling makes it more difficult to modify or replace individual components without affecting the entire system.
 Decreased modularity: High coupling makes it more difficult to develop and test modules in isolation, reducing the modularity and reusability of code.

Disadvantages of low cohesion


 Increased code duplication: Low cohesion can lead to the duplication of code, as elements that belong together are split into separate modules.
 Reduced functionality: Low cohesion can result in modules that lack a clear purpose and contain elements that don’t belong together, reducing their functionality
and making them harder to maintain.
 Difficulty in understanding the module: Low cohesion can make it harder for developers to understand the purpose and behavior of a module, leading to errors and
a lack of clarity.

10 Function Oriented Design, Object Oriented Design


1. Function Oriented Design: Function-oriented design is the result of focusing attention on the functions of the program. It is based on stepwise refinement, which in
turn is based on iterative procedural decomposition. Stepwise refinement is a top-down strategy where a program is refined as a hierarchy of
increasing levels of detail.

We start with a high level description of what the program does. Then, in each step, we take one part of our high level description and refine it. Refinement is
actually a process of elaboration. The process should proceed from a highly conceptual model to lower level details. The refinement of each module is done until we
reach the statement level of our programming language.
2. Object Oriented Design: Object-oriented design is the result of focusing attention not on the function performed by the program, but instead on the data that are
to be manipulated by the program. Thus, it is orthogonal to function-oriented design. Object-oriented design begins with an examination of the real-world “things”;
these things are characterized individually in terms of their attributes and behavior.
Objects are independent entities that may readily be changed because all state and representation information is held within the object itself. Objects may be
distributed and may execute sequentially or in parallel. Object-oriented technology involves the following three keywords –

 Objects –
Software packages are designed and developed to correspond with real-world entities, and they contain all the data and services needed to function as their
associated entities.

 Communication –
Communication mechanisms are established that provide the means by which objects work together.

 Methods –
Methods are services that objects perform to satisfy the functional requirements of the problem domain. Objects request services of the other objects through
messages.

Difference Between Function Oriented Design and Object Oriented Design :


Abstraction
  Function Oriented Design: The basic abstractions, which are given to the user, are real-world functions.
  Object Oriented Design: The basic abstractions are not the real-world functions but the data abstractions, where the real-world entities are represented.

Function
  Function Oriented Design: Functions are grouped together, by which a higher-level function is obtained.
  Object Oriented Design: Functions are grouped together on the basis of the data they operate on, since the classes are associated with their methods.

Execution
  Function Oriented Design: Carried out using structured analysis and structured design, i.e., data flow diagrams.
  Object Oriented Design: Carried out using UML.

State information
  Function Oriented Design: The state information is often represented in a centralized shared memory.
  Object Oriented Design: The state information is not represented in a centralized memory but is implemented or distributed among the objects of the system.

Approach
  Function Oriented Design: It is a top-down approach.
  Object Oriented Design: It is a bottom-up approach.

Begins basis
  Function Oriented Design: Begins by considering the use case diagrams and the scenarios.
  Object Oriented Design: Begins by identifying objects and classes.

Decompose
  Function Oriented Design: We decompose at the function/procedure level.
  Object Oriented Design: We decompose at the class level.

Use
  Function Oriented Design: This approach is mainly used for computation-sensitive applications.
  Object Oriented Design: This approach is mainly used for evolving systems which mimic a business or business case.

11 Top-Down and Bottom-Up Design

Top-Down Design Model: In the top-down model, an overview of the system is formulated without going into detail for any part of it. Each part is then refined
into more detail, defining it in yet more detail until the entire specification is detailed enough to validate the model. If we look at a problem as a whole, it may
seem impossible to solve because it is so complicated, for example writing a university management system or a word processor. Complicated problems can be
solved using top-down design, also known as stepwise refinement, where:
 We break the problem into parts,
 Then we break those parts into smaller parts, until each part is easy to do.

Advantages:
 Breaking problems into parts help us to identify what needs to be done.
 At each step of refinement, new parts will become less complex and therefore easier to solve.
 Parts of the solution may turn out to be reusable.
 Breaking problems into parts allows more than one person to solve the problem.

Bottom-Up Design Model: In this design, individual parts of the system are specified in detail. The parts are linked to form larger components, which are in turn
linked until a complete system is formed. Object-oriented language such as C++ or java uses a bottom-up approach where each object is identified first.

Advantage:
 Make decisions about reusable low-level utilities, then decide how they will be put together to create high-level constructs.
Top Down Approach: In this approach we focus on breaking up the problem into smaller parts.
Bottom Up Approach: In this approach we solve the smaller problems and integrate them to form the complete solution.

Top Down Approach: Mainly used by structured programming languages such as COBOL, Fortran, C, etc.
Bottom Up Approach: Mainly used by object-oriented programming languages such as C++, C#, Python.

Top Down Approach: Each part is programmed separately, therefore it may contain redundancy.
Bottom Up Approach: Redundancy is minimized by using data encapsulation and data hiding.

Top Down Approach: In this approach the communication among modules is less.
Bottom Up Approach: In this approach modules must have communication.

Top Down Approach: It is used in debugging, module documentation, etc.
Bottom Up Approach: It is basically used in testing.

Top Down Approach: Decomposition takes place.
Bottom Up Approach: Composition takes place.

Top Down Approach: The top function of the system might be hard to identify.
Bottom Up Approach: Sometimes we cannot build a program from the pieces we have started with.

Top Down Approach: Implementation details may differ.
Bottom Up Approach: This is not natural for people to assemble.

Pros of the top-down approach:
 Easier isolation of interface errors.
 It benefits in the case an error occurs towards the top of the program.
 Defects in design get detected early and can be corrected, as an early working module of the program is available.

Pros of the bottom-up approach:
 Easy to create test conditions.
 Test results are easy to observe.
 It is suited if defects occur at the bottom of the program.

Cons of the top-down approach:
 Difficulty in observing the output of a test case.
 Stub writing is quite crucial as it leads to the setting of output parameters.
 When stubs are located far from the top-level module, choosing test cases and designing stubs become more challenging.

Cons of the bottom-up approach:
 There is no representation of the working model once several modules have been constructed.
 There is no existence of the program as an entity without the addition of the last module.
 From a partially integrated system, test engineers cannot observe system-level functions; this is possible only with the installation of the top-level test driver.

Software Measurement and Metrics: Various Size Oriented Measures: Halstead's Software Science, Function Point (FP) Based Measures, Cyclomatic Complexity
Measures: Control Flow Graphs.

12 Software Metrics

A software metric is a measure of software characteristics which are measurable or countable. Software metrics are valuable for many reasons, including measuring
software performance, planning work items, measuring productivity, and many other uses.

Within the software development process there are many metrics that are all connected. Software metrics are similar to the four functions of management:
planning, organization, control, and improvement.

Advantage of Software Metrics

 Comparative study of various design methodology of software systems.

 For analysis, comparison, and critical study of different programming language concerning their characteristics.

 In comparing and evaluating the capabilities and productivity of people involved in software development.

 In the preparation of software quality specifications.

 In the verification of compliance of software systems requirements and specifications.

 In making inference about the effort to be put in the design and development of the software systems.

 In getting an idea about the complexity of the code.

 In taking decisions regarding whether further division of a complex module is to be done or not.

 In guiding resource manager for their proper utilization.

 In comparison and making design tradeoffs between software development and maintenance cost.

 In providing feedback to software managers about the progress and quality during various phases of the software development life cycle.

 In the allocation of testing resources for testing the code.

Disadvantage of Software Metrics

 The application of software metrics is not always easy, and in some cases, it is difficult and costly.
 The verification and justification of software metrics are based on historical/empirical data whose validity is difficult to verify.

 These are useful for managing software products but not for evaluating the performance of the technical staff.

 The definition and derivation of software metrics are usually based on assumptions which are not standardized and may depend upon the tools available and the
working environment.

 Most of the predictive models rely on estimates of certain variables which are often not known precisely.

13 Size Metric

Size-oriented metrics are derived by normalizing quality and productivity measures by considering the size of the software that has been produced. The organization
builds a simple record of size measures for its software projects, based on its past experience. It is a direct measure of software.
This is one of the simplest and earliest metrics used to measure the size of a computer program. Size-oriented metrics are also used for measuring and
comparing the productivity of programmers. The size measurement is based on lines-of-code computation, where a line of code is
defined as one line of text in a source file.
While counting lines of code, the simplest standard is:
 Don’t count blank lines
 Don’t count comments
 Count everything else
 The size-oriented measure is not a universally accepted method.

A simple set of size measures that can be developed is given below (a small worked example follows the list):


1. Size = Kilo Lines of Code (KLOC)
2. Effort = person-months
3. Productivity = KLOC / person-month
4. Quality = number of faults / KLOC
5. Cost = $ / KLOC
6. Documentation = pages of documentation / KLOC
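For instance, assuming (purely for illustration) a project of 30 KLOC completed by 3 people in 4 months, with 60 faults found, a total cost of $120,000 and 300 pages of documentation, these measures work out as:

Size = 30 KLOC
Effort = 3 persons x 4 months = 12 person-months
Productivity = 30 / 12 = 2.5 KLOC per person-month
Quality = 60 / 30 = 2 faults per KLOC
Cost = 120,000 / 30 = $4,000 per KLOC
Documentation = 300 / 30 = 10 pages per KLOC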
Advantages:

 Using these metrics, it is very simple to measure size.


 Artifact of Software development which is easily counted.
 LOC is used by many methods that are already existing as a key input.
 A large body of literature and data based on LOC already exists.

Disadvantages:
 This measure is dependent upon the programming language and the coding style used.
 It does not accommodate non-procedural languages.
 Sometimes, it is very difficult to estimate LOC in early stage of development.
 Though it is simple to measure, it is very hard for users to understand.
 It cannot measure size of specification as it is defined on code.

LOC

A Line of Code (LOC) is any line of program text that is not a comment or a blank line (header lines included), regardless of the number of statements or fragments
of statements on the line. LOC clearly consists of all lines containing the declaration of any variable, and executable and non-executable statements. As LOC only
counts the volume of code, you can only use it to compare or estimate projects that use the same language and are coded using the same coding standards.

Features:
 Variations such as “source lines of code”, are used to set out a codebase.
 LOC is frequently used in some kinds of arguments.
 They are used in assessing a project’s performance or efficiency.

Advantages:
 Most used metric in cost estimation.
 Its alternates have many problems as compared to this metric.
 It is very easy in estimating the efforts.

Disadvantages:
 Very difficult to estimate the LOC of the final program from the problem specification.
 It correlates poorly with quality and efficiency of code.
 It doesn’t consider complexity.

Research has shown a rough correlation between LOC and the overall cost and length of developing a project/ product in Software Development, and between LOC
and the number of defects. This means the lower your LOC measurement is, the better off you probably are in the development of your product.
Let’s take an example and check how the Line of code works in the simple sorting program given below:

void selSort(int x[], int n) {
    // This function sorts an array in ascending order (selection sort)
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++) {
        min = i;
        for (j = i + 1; j < n; j++)
            if (x[j] < x[min])
                min = j;
        temp = x[i];
        x[i] = x[min];
        x[min] = temp;
    }
}

So, now If LOC is simply a count of the number of lines then the above function shown contains 13 lines of code. But when comments and blank lines are ignored,
the function shown above contains 12 lines of code.

Let’s take another example and check how the Line of code works the given below:

int main()
{
    int fN, sN, sum;
    cout << "Enter the 2 integers: ";
    cin >> fN >> sN;
    // sum of the two numbers is stored in the variable sum
    sum = fN + sN;
    // print the sum
    cout << fN << " + " << sN << " = " << sum;
    return 0;
}

Here also, if LOC is simply a count of the number of lines, then the function shown above contains 11 lines of code. But when comments and blank lines are
ignored, the function shown above contains 9 lines of code.

Token Count

Halstead's Software Metrics/ Token Count

According to Halstead, "A computer program is an implementation of an algorithm considered to be a collection of tokens which can be classified as either operators
or operands." In these metrics, a computer program is considered to be a collection of tokens, which may be classified as either operators or operands. All software
science metrics can be defined in terms of these basic symbols, which are called tokens.

The basic measures are

n1 = count of unique operators.


n2 = count of unique operands.
N1 = count of total occurrences of operators.
N2 = count of total occurrence of operands.

In terms of the total tokens used, the size of the program can be expressed as N = N1 + N2.

Halstead metrics are:

Program Volume (V)

The unit of measurement of volume is the standard unit for size "bits." It is the actual size of a program if a uniform binary encoding for the vocabulary is used.

V = N * log2(n)

Program Level (L)

The value of L ranges between zero and one, with L=1 representing a program written at the highest possible level (i.e., with minimum size).

L=V*/V

Program Difficulty

The difficulty level or error-proneness (D) of the program is proportional to the number of the unique operator in the program.

D= (n1/2) * (N2/n2)

Programming Effort (E)

The unit of measurement of E is elementary mental discriminations.

E=V/L=D*V

Estimated Program Length

According to Halstead, The first Hypothesis of software science is that the length of a well-structured program is a function only of the number of unique operators and
operands.

N=N1+N2

And estimated program length is denoted by N^

N^ = n1 * log2(n1) + n2 * log2(n2)

The following alternate expressions have been published to estimate program length:

o NJ = log2(n1!) + log2(n2!)
o NB = n1 * log2(n2) + n2 * log2(n1)
o NC = n1 * sqrt(n1) + n2 * sqrt(n2)
o NS = (n * log2(n)) / 2

Potential Minimum Volume

The potential minimum volume V* is defined as the volume of the shortest possible program in which a problem can be coded.

V* = (2 + n2*) * log2 (2 + n2*)

Here, n2* is the count of unique input and output parameters

Size of Vocabulary (n)

The size of the vocabulary of a program, which consists of the number of unique tokens used to build a program, is defined as:

n=n1+n2

Where

n=vocabulary of a program
n1=number of unique operators
n2=number of unique operands

Language Level - Shows the level of the programming language used to implement the algorithm. The same algorithm demands additional effort if it is written in a
low-level programming language. For example, it is easier to program in Pascal than in Assembler.

lambda = L * V* = L^2 * V = V / D^2

Counting rules for C language

1. Comments are not considered.


2. The identifier and function declarations are not considered
3. All the variables and constants are considered operands.
4. Global variables used in different modules of the same program are counted as multiple occurrences of the same variable.
5. Local variables with the same name in different functions are counted as unique operands.
6. Functions calls are considered as operators.
7. All looping statements e.g., do {...} while ( ), while ( ) {...}, for ( ) {...}, all control statements e.g., if ( ) {...}, if ( ) {...} else {...}, etc. are considered as operators.
8. In control construct switch ( ) {case:...}, switch as well as all the case statements are considered as operators.
9. The reserve words like return, default, continue, break, sizeof, etc., are considered as operators.
10.All the brackets, commas, and terminators are considered as operators.
11.GOTO is counted as an operator, and the label is counted as an operand.
12. The unary and binary occurrences of "+" and "-" are dealt with separately; similarly, the different uses of "*" (e.g., multiplication) are dealt with separately.
13.In the array variables such as "array-name [index]" "array-name" and "index" are considered as operands and [ ] is considered an operator.
14. In structure variables such as "struct-name.member-name" or "struct-name -> member-name", struct-name and member-name are considered as operands and '.', '->'
are taken as operators. The same names of member elements in different structure variables are counted as unique operands.
15. All hash (preprocessor) directives are ignored.

Example: Consider the sorting program as shown in fig: List out the operators and operands and also calculate the value of software science measure like n, N, V, E, λ,
etc.

Solution: The list of operators and operands is given in the table

Operators | Occurrences | Operands | Occurrences
----------|-------------|----------|------------
int       | 4           | SORT     | 1
()        | 5           | x        | 7
,         | 4           | n        | 3
[]        | 7           | i        | 8
if        | 2           | j        | 7
<         | 2           | save     | 3
;         | 11          | im1      | 3
for       | 2           | 2        | 2
=         | 6           | 1        | 3
-         | 1           | 0        | 1
<=        | 2           | -        | -
++        | 2           | -        | -
return    | 2           | -        | -
{}        | 3           | -        | -
n1 = 14   | N1 = 53     | n2 = 10  | N2 = 38

Here N1=53 and N2=38. The program length N=N1+N2=53+38=91

Vocabulary of the program n=n1+n2=14+10=24

Volume V = N * log2(n) = 91 x log2(24) = 417 bits.

The estimated program length N^ of the program:

N^ = 14 log2(14) + 10 log2(10)
   = 14 * 3.81 + 10 * 3.32
   = 53.34 + 33.2 = 86.54

Conceptually unique input and output parameters are represented by n2*.

n2* = 3 {x: the array holding the integers to be sorted. This is used as both input and output, so it contributes two parameters}

{n: the size of the array to be sorted}

The potential volume V* = (2 + 3) log2(2 + 3) = 5 log2(5) = 11.6

Since L = V*/V, the program level can be obtained from the actual and potential volumes. Halstead also gives a formula to estimate the program level directly from the counts:

L^ = (2 * n2) / (n1 * N2) = (2 * 10) / (14 * 38) ≈ 0.038

Using this estimate, the estimated potential volume is

V*^ = V x L^ = 417 x 0.0376 = 15.67

The estimated effort is

E^ = V/L^ = D^ x V = 417/0.038 = 10974

Therefore, about 10974 elementary mental discriminations are required to construct the program. Using Halstead's time equation T = E/β, with β = 18 elementary mental discriminations per second (the Stroud number), the estimated programming time is T = 10974/18 ≈ 610 seconds, i.e., roughly 10 minutes.

This is probably a reasonable time to produce the program, which is very simple.
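As a check, the arithmetic of this example can be reproduced with a short C program. The formulas are the ones given earlier in this section; the counts (n1 = 14, n2 = 10, N1 = 53, N2 = 38, n2* = 3) come from the table above, and β = 18 is the conventional Stroud number. Small differences from the rounded figures in the text (e.g., E^ = 10974) come from rounding L^ to 0.038.

#include <math.h>
#include <stdio.h>

int main(void) {
    double n1 = 14, n2 = 10, N1 = 53, N2 = 38, n2_star = 3;

    double N  = N1 + N2;                            /* observed program length        */
    double n  = n1 + n2;                            /* vocabulary                     */
    double V  = N * log2(n);                        /* volume, about 417 bits         */
    double Nh = n1 * log2(n1) + n2 * log2(n2);      /* estimated length, about 86.5   */
    double Vs = (2 + n2_star) * log2(2 + n2_star);  /* potential volume, about 11.6   */
    double Lh = (2 * n2) / (n1 * N2);               /* estimated program level ~0.038 */
    double Dh = 1.0 / Lh;                           /* estimated difficulty           */
    double Eh = V / Lh;                             /* estimated effort (text rounds
                                                       L^ to 0.038 and gets ~10974)   */
    double T  = Eh / 18.0;                          /* time in seconds, beta = 18     */

    printf("N=%.0f n=%.0f V=%.1f N^=%.2f V*=%.2f L^=%.4f D^=%.1f E^=%.0f T=%.0f s\n",
           N, n, V, Nh, Vs, Lh, Dh, Eh, T);
    return 0;
}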

Function Count/Functional Point (FP) Analysis

Allan J. Albrecht initially developed Function Point Analysis in 1979 at IBM, and it has since been further modified by the International Function Point Users Group (IFPUG). FPA is used to estimate a software project, including its testing, in terms of the functionality or functional size of the software product. Functional point analysis may also be used for test estimation of the product. The functional size of the product is measured in function points, which are a standard unit of measurement for sizing a software application.

Objectives of FPA

The basic and primary purpose of functional point analysis is to measure and report the functional size of the software application to the client, customer, and stakeholders on request. Further, it is used to measure software project development and maintenance consistently throughout the project, irrespective of the tools and technologies used.

Following are the points regarding FPs:

1. The FP count of an application is found by counting the number and types of functions used in the application. The various functions used in an application can be placed under five types, as shown in the table:

Types of FP Attributes

Measurement Parameters | Examples
-----------------------|---------
1. Number of External Inputs (EI) | Input screens and tables
2. Number of External Outputs (EO) | Output screens and reports
3. Number of External Inquiries (EQ) | Prompts and interrupts
4. Number of Internal Files (ILF) | Databases and directories
5. Number of External Interfaces (EIF) | Shared databases and shared routines

All these parameters are then individually assessed for complexity.

The FPA functional units are shown in Fig:

2. FP characterizes the complexity of the software system and hence can be used to estimate the project time and the manpower requirement.

3. The effort required to develop the project depends on what the software does.

4. FP is programming language independent.

5. The FP method is used for data processing and business systems, such as information systems.

6. The five parameters mentioned above are also known as information domain characteristics.

Example: Compute the function point, productivity, documentation, cost per function for the following data:

 Number of user inputs = 24
 Number of user outputs = 46
 Number of inquiries = 8
 Number of files = 4
 Number of external interfaces = 2
 Effort = 36.9 person-months
 Technical documents = 265 pages
 User documents = 122 pages
 Cost = $7744 per month

Sum of the 14 complexity adjustment factors ∑(fi) = 43; weighting factors (EI, EO, EQ, ILF, EIF) = 4, 4, 6, 10, 5

Solution:

Measurement Parameter | Count | Weighting factor | Count x Weight
----------------------|-------|------------------|---------------
1. Number of external inputs (EI)      | 24 | 4  | 96
2. Number of external outputs (EO)     | 46 | 4  | 184
3. Number of external inquiries (EQ)   | 8  | 6  | 48
4. Number of internal files (ILF)      | 4  | 10 | 40
5. Number of external interfaces (EIF) | 2  | 5  | 10
Count-total                            |    |    | 378

FP = Count-total * [0.65 + 0.01 * ∑(fi)]
   = 378 * [0.65 + 0.01 * 43]
   = 378 * [0.65 + 0.43]
   = 378 * 1.08 = 408

Total pages of documentation = technical documents + user documents
   = 265 + 122 = 387 pages

Documentation = Pages of documentation / FP
   = 387 / 408 = 0.94 pages per FP

Productivity = FP / Effort = 408 / 36.9 ≈ 11.1 FP per person-month

Cost per function = Cost per person-month / Productivity = 7744 / 11.1 ≈ $700 per function point
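The same calculation can be sketched in C. The counts, weights, and adjustment-factor sum are the ones used above; the productivity and cost formulas are the standard ones, and the printed results differ slightly from the rounded figures because the code keeps FP = 408.24 rather than 408.

#include <stdio.h>

int main(void) {
    /* counts and average weighting factors from the example */
    int    count[5]  = { 24, 46, 8, 4, 2 };   /* EI, EO, EQ, ILF, EIF              */
    int    weight[5] = {  4,  4, 6, 10, 5 };
    double sum_fi    = 43.0;                  /* sum of the 14 adjustment factors  */
    double effort    = 36.9;                  /* person-months                     */
    double pages     = 265.0 + 122.0;         /* technical + user documentation    */
    double cost_pm   = 7744.0;                /* cost per person-month             */

    double count_total = 0.0;
    for (int i = 0; i < 5; i++)
        count_total += count[i] * weight[i];  /* unadjusted function count = 378   */

    double fp            = count_total * (0.65 + 0.01 * sum_fi); /* adjusted FP    */
    double productivity  = fp / effort;       /* FP per person-month               */
    double documentation = pages / fp;        /* pages per FP                      */
    double cost_per_fp   = cost_pm / productivity;

    printf("FP = %.0f, productivity = %.1f FP/PM, documentation = %.2f pages/FP, "
           "cost per FP = $%.2f\n", fp, productivity, documentation, cost_per_fp);
    return 0;
}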

Cyclomatic Complexity

The cyclomatic complexity of a code section is a quantitative measure of the number of linearly independent paths through it. It is a software metric used to indicate the complexity of a program. It is computed from the Control Flow Graph of the program. Each node in the graph represents the smallest group of commands in the program, and a directed edge connects two nodes if the second command can immediately follow the first.
For example, if the source code contains no control flow statements, then its cyclomatic complexity will be 1 and the source code contains a single path. Similarly, if the source code contains one if condition, the cyclomatic complexity will be 2, because there will be two paths: one for true and one for false.
Mathematically, the control flow graph of a structured program is a directed graph in which an edge joins two basic blocks whenever control may pass from the first to the second.
So, cyclomatic complexity M would be defined as,

M = E – N + 2P
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
In the case where the exit point is directly connected back to the entry point, the graph is strongly connected, and the cyclomatic complexity is defined as

M = E – N + P
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components

In the case of a single method, P is equal to 1. So, for a single subroutine, the formula reduces to

M = E – N + 2
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph

How to Calculate Cyclomatic Complexity?


The steps to follow when calculating cyclomatic complexity and designing test cases are:
 Construction of the graph with nodes and edges from the code.
 Identification of independent paths.
 Cyclomatic complexity calculation.
 Design of test cases.
Consider a section of code such as:

A = 10
IF B > C THEN
    A = B
ELSE
    A = C
ENDIF
Print A
Print B
Print C
Control Flow Graph of the above code

The cyclomatic complexity of the above code is calculated from its control flow graph. The graph has seven nodes and seven edges, hence the cyclomatic complexity is 7 – 7 + 2 = 2.
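For structured code without goto statements, an equivalent shortcut is to count the decision (predicate) nodes and add one. The C rendering below is only an illustration of that rule for the example above; the function and variable names are chosen for this sketch and are not part of the original.

#include <stdio.h>

/* C rendering of the pseudocode example: one if/else decision,
   so the cyclomatic complexity is 1 decision + 1 = 2, matching
   E - N + 2 = 7 - 7 + 2 from the control flow graph. */
void example(int b, int c)
{
    int a = 10;

    if (b > c)          /* the only predicate node */
        a = b;
    else
        a = c;

    printf("%d\n", a);
    printf("%d\n", b);
    printf("%d\n", c);
}

int main(void)
{
    example(3, 5);      /* exercises the else branch */
    example(5, 3);      /* exercises the if branch   */
    return 0;
}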
Use of Cyclomatic Complexity
 It determines the independent path executions, which has proven to be very helpful for developers and testers.
 It can make sure that every path has been tested at least once.
 It thus helps to focus more on the uncovered paths.
 Code coverage can be improved.
 Risks associated with the program can be evaluated.
 Using this metric early in the program helps in reducing risks.
Advantages of Cyclomatic Complexity
 It can be used as a quality metric, given the relative complexity of various designs.
 It can be computed faster than Halstead’s metrics.
 It is used to measure the minimum effort and best areas of concentration for testing.
 It is able to guide the testing process.
 It is easy to apply.
Disadvantages of Cyclomatic Complexity
 It is the measure of the program’s control complexity and not the data complexity.
 Nested conditional structures are harder to understand than non-nested structures, yet the metric does not distinguish between them.
 In the case of simple comparisons and decision structures, it may give a misleading figure.
