
SOFTWARE ENGINEERING

(UNIT-1)

What is software?
A set of instructions, data, or programs used to operate computers and execute specific tasks.
Software is a generic term used to refer to the applications, scripts, and programs that run on a device.

Software Engineering : Software engineering is the process of designing, developing, testing, and maintaining software. It involves applying engineering principles and systematic methods to create high-quality software that meets the needs of users and stakeholders.

As per IBM Reports : “31% of projects get canceled before completion, 53% exceed their cost estimates”

Operating Procedures:
Instructions to set up and use, reactions to system failure, operating manuals.
Documentations:
Testing, Analysis, Design, Implementation

SOFTWARE

System Software:
System software is a collection of programs written to service other programs. Example: Compilers, Operating Systems, etc.
Software Crisis :
Software crisis is the failure of software development that leads to
incomplete and degrading performance of software products.

Causes:
1. Project running over budget
2. Project running over time
3. Not enough resources

#Software Engineering Layers

Also known as Layered Technology
It refers to different levels of abstraction used in the development process

1. A quality focus : This layer emphasizes the importance of producing high-quality software that meets the needs of the users and stakeholders. It involves defining quality criteria, testing the software to ensure it meets those criteria, and continuously improving the software based on feedback.

2. Process model : This layer defines the overall process for developing
software. It includes the steps involved in the development process, the
roles and responsibilities of team members, and the tools and techniques
used to manage the process.
3. Methods : This layer refers to the specific techniques and practices used
to develop software. It includes software design patterns, coding
standards, and testing methodologies.

4. Tools : This layer includes the software tools and technologies used to
support the development process. It includes IDEs (Integrated
Development Environments), testing frameworks, version control systems,
and other software tools.

Software Myths:
Beliefs about software and the process used to build it. Myths have a number of attributes that have made them dangerous.
Misleading attitudes have caused serious problems for managers and technical people.

Myth1: If we get behind schedule, we can add more programmers and catch up.
Reality: Software development is not a mechanistic process like manufacturing. Adding people to a late software project makes it later.

Myth2: If I decide to outsource the software project to a third party, I can just relax and let that firm build it.
Reality: If an organization does not understand how to manage and control software projects internally, it will invariably struggle when it outsources software projects.

Myth3: Project requirements continually change, but change can be easily accommodated because software is flexible.
Reality: Customers can review requirements and recommend modifications with relatively little impact on cost. When changes are requested during software design, the cost impact grows rapidly.

Myth4: Once we write the program and get it to work, our job is done.
Reality: Someone once said that "the sooner you begin 'writing code', the longer it'll take you to get done." Industry data indicate that between 60 and 80 percent of all effort expended on software will be expended after it is delivered to the customer for the first time.

Myth5: Until I get the program "running" I have no way of assessing its quality.
Reality: One of the most effective software quality assurance mechanisms can be applied from the inception of a project: the formal technical review.

#Software process
A software process is a set of activities, methods, and practices that are used to
develop, test, deploy, and maintain software. It provides a structured approach
to software development that helps ensure that the resulting software is of high
quality, meets the needs of users and stakeholders, and is delivered on time and
within budget.

● Software specification : The functionality of the software and constraints on its operations must be defined.

● Software development : Software to meet the requirements must be produced.

● Software validation : Software must be tested to ensure it does what the customer wants.

● Software evolution : Software must evolve to meet changing customer needs.

Generic Process Framework Activities -

1. Communication:
● Heavy communication with customers, stakeholders, team
● Encompasses requirements gathering and related activities
2. Planning:
● The workflow that is to be followed
● Describes the technical tasks, likely risks, resources required, work products to be produced, and a work schedule
3. Modeling:
● Helps the developer and customer to understand the requirements (analysis of requirements) and the design of the software
4. Construction (code generation):
● Either manual or automated, or both
● Testing – to uncover errors in the code
5. Deployment:
● Delivery to the customer for evaluation
● Customer provides feedback

Why is it difficult to organize / improve the software process?

1. Not enough time
2. Lack of knowledge
3. Insufficient commitment

“If our process is weak, the end product will suffer”

Umbrella Activities :

Umbrella activities are the high-level activities that are performed throughout the software development process. They provide a framework for the entire software development life cycle and help ensure that the development process is comprehensive, consistent, and effective.

There are several different umbrella activities, including:

1. Project Management: This includes activities such as project planning, scheduling, monitoring, and control.
2. Software Quality Assurance: This includes activities such as quality
planning, quality control, and quality improvement.
3. Configuration Management: This includes activities such as version
control, change management, and release management.
4. Documentation: This includes activities such as requirements
documentation, design documentation, user manuals, and system
documentation.
5. Risk Management: This includes activities such as risk identification, risk
assessment, and risk mitigation.
#Software Life Cycle (SDLC)
The period of time that starts when a software product is conceived(thought up)
and ends when the product is no longer available for use.

Life cycle includes : Requirement phase, design phase, implementation phase, test phase, installation phase and check-out phase, operation and maintenance phase, and sometimes retirement phase

SOFTWARE PROCESS MODEL -

● Process models prescribe a distinct set of activities, actions, tasks, milestones, and work products required to engineer high quality software.

● Process models are not perfect, but provide a roadmap for software engineering work.

● Software models provide stability, control, and organization to a process that if not managed can easily get out of control.

● Software process models are adapted to meet the needs of software engineers and managers for a specific project.

#BUILD AND FIX MODEL

The product is constructed without specification or any attempt at design.

Developers simply build a product that is reworked as many times as necessary to satisfy the client.

The model may work for small projects but is totally unsatisfactory for products of any reasonable size.

Maintenance is high.

Sources of difficulties and deficiencies :

● impossible to predict
● impossible to manage

Process as “Black Box” vs. Process as “White Box”

>Problem with Black Box :
● The assumption is that requirements can be fully understood prior to
development
● Interaction with the customer occurs only at the beginning (requirements)
and end (after delivery)
● Unfortunately the assumption almost never holds
>Advantages of White box :
● Reduce risks by improving visibility
● Allow project changes as the project progresses
-based on feedback from the customer

#Waterfall Model (Classic Life Cycle)

Phases always occur in order and don’t overlap.
Developers must complete each phase before starting the next phase.
It resembles a cascade of waterfalls.

The Waterfall model is a linear software development model that follows a sequential approach. In this model, the development process is divided into different phases, and each phase is completed before moving on to the next one. The different phases of the Waterfall model are:

1. Requirements gathering & Analysis : In this phase, the requirements for the software are gathered and documented, and the requirements are analyzed to determine how the software will function and what features it will have.
● The resultant document is the SRS document (Software Requirement Specification)
2. Design: In this phase, the software design is created, and the architecture
and system components are defined.
3. Implementation: In this phase, the actual code is written and the
software is developed.
4. Testing: In this phase, the software is tested to ensure that it meets the
requirements and functions as expected.
5. Deployment: In this phase, the software is deployed to the customer or
end-user.
6. Maintenance: In this phase, any bugs or issues that arise are fixed, and
the software is maintained and updated as needed.

Problems of the waterfall model :

● Difficult to define all the requirements at the start
● Working version is seen very late
● Real projects are rarely sequential
● Not suitable for accommodating any change

#Incremental Process Model

Delivers software in small but usable pieces; each piece builds on pieces already delivered.

1. Rather than deliver the system as a single delivery, the development and delivery is broken down into increments, with each increment delivering part of the required functionality.

2. The first increment is often the core product
● Includes basic requirements
● Many supplementary features (known & unknown) remain undelivered

3. A plan for the next increment is prepared
● Modifications of the first increment
● Additional features for the first increment

4. It is particularly useful when enough staffing is not available for the whole project

5. Increments can be planned to manage technical risks.

6. Lower risk of overall project failure.

7. User requirements are prioritized and the highest priority requirements are included in early increments.

#Rapid Application Development (RAD) Model

● Makes heavy use of reusable software components with an extremely short development cycle
● In this model, the development process is divided into different stages, and each stage is completed in a short amount of time.

Drawbacks:
● For large but scalable projects, RAD requires sufficient human resources
● Projects fail if developers and customers are not committed to a much shortened time-frame
● Problematic if the system cannot be modularized
● Not appropriate when technical risks are high (heavy use of new technology)

#Evolutionary Process Model

Produces an increasingly more complete version of the software with each iteration.
Evolutionary models are iterative.
It should be used when it is not necessary to provide a minimal version of the system quickly.
It is useful when requirements are unstable or not well understood in the beginning.
A usable product is not needed at the end of each cycle.

Evolutionary models are:

Evolutionary models are:

1) Prototype Model

First, a working prototype of the software is built instead of developing the actual software.

The prototype is evaluated by the customer, and feedback is given to refine the final software.

Problems :
● Customers can be lazy
● The prototype must be delivered quickly

2) Spiral Model

Barry Boehm recognized the lack of a “project risk” factor in existing life cycle models and tried to correct this using the spiral model.

4 PHASES OF THE SPIRAL MODEL :

● Planning : The aim of each cycle in the spiral. Each phase begins with the gathering of requirements from the clients and the identification, elaboration, and analysis of the objectives.
● Risk Analysis : All potential solutions are assessed in order to choose the best one. The risks connected to that solution are then determined, and the best method for addressing those risks is selected.
● Development : Product development and testing
● Assessment : Customer evaluation

It maintains the systematic stepwise approach suggested by the classic life cycle but also incorporates it into an iterative framework activity.

If risks cannot be resolved, the project is immediately terminated.

Cycles :
1. Concept development projects
2. New product development projects
3. Product enhancement projects
4. Product maintenance project
#Software Metrics

A unit of measurement of a software product or software related process

What are we measuring?

● Execution speed
● No. of errors
● Lines of code
● Efficiency
● Reliability
● Portability

Using numbers to improve the software or the process of developing software.
To improve the management process (estimate size and cost, prediction of quality levels for software).

-Lines Of Code(LOC)

● Basic idea is to estimate size
● No general agreement about what is a line of code. Some include data declarations, comments or non-executable statements, others exclude them
● Productivity = LOC / PM (PM = Person-month)
● Measuring by lines of code is like measuring a building by the no. of bricks used.
● Lines of code can predict programming time, but fail to tell anything about efficiency, reliability etc
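The productivity formula above is straightforward to apply; a small sketch (the project figures are made up for illustration):

```python
def productivity(loc: int, person_months: float) -> float:
    """Productivity = LOC / PM (lines of code per person-month)."""
    return loc / person_months

# Hypothetical project: 12,000 lines of code, 4 people working 6 months.
pm = 4 * 6                       # 24 person-months
print(productivity(12_000, pm))  # 500.0 LOC per person-month
```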

#COCOMO Model

Constructive Cost Model

A hierarchy of software cost estimation models, i.e. it consists of several models under it (basic, intermediate and detailed sub-models)

3 classes :
● Organic Projects : 2-50 KLOC, experienced team (fewer people, so can hire expensive people), in-house. Eg: payroll project
● Semi-detached : 50-300 KLOC, low experience, variable. Eg: database systems
● Embedded : 300 KLOC or more, little or no experience, complex hardware. Eg: Air Traffic Control

-------------------------------------------------------

-Basic Model
● Estimates very quickly and roughly

E = a_b (KLOC)^(b_b)
D = c_b (E)^(d_b)
SS = E / D (Effort Applied / Development Time)
P = KLOC / E

E : effort applied in person-months
D : development time in months
SS : staff size
P : productivity
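The basic-model formulas can be sketched in code. The coefficients below are Boehm's published basic-COCOMO constants for the three project classes; the 32-KLOC project is invented for illustration:

```python
# Basic COCOMO coefficients (a_b, b_b, c_b, d_b) from Boehm (1981).
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b          # E, in person-months
    dev_time = c * effort ** d      # D, in months
    staff = effort / dev_time       # SS = E / D
    prod = kloc / effort            # P = KLOC / E
    return effort, dev_time, staff, prod

e, d, ss, p = basic_cocomo(32, "organic")   # a 32-KLOC in-house project
print(f"effort={e:.1f} PM, time={d:.1f} months, staff={ss:.1f}")
```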

—-----------------------------------------------------

-Intermediate Model

● More accuracy as compared to the basic model
● Cost is predicted according to the actual project environment

E = a_i (KLOC)^(b_i) × EAF
D = c_i (E)^(d_i)
(where EAF ∈ [0.9, 1.4])

New attributes (to be checked) :

★ Required software reliability
★ Database size
★ Main storage constraint
★ Programmer capability
(A total of 15 new attributes as compared to the Basic Model)
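The intermediate model scales the nominal effort by the EAF, the product of the 15 cost-driver ratings. A sketch, where the a_i/b_i values are Boehm's published intermediate coefficients but the two driver ratings shown are illustrative, not taken from Boehm's rating tables:

```python
from math import prod

# Intermediate COCOMO nominal-effort coefficients (a_i, b_i), Boehm (1981).
A_I = {"organic": 3.2, "semi-detached": 3.0, "embedded": 2.8}
B_I = {"organic": 1.05, "semi-detached": 1.12, "embedded": 1.20}

def intermediate_effort(kloc: float, mode: str, cost_drivers: dict) -> float:
    """E = a_i * (KLOC)^b_i * EAF, where EAF is the product of driver ratings."""
    eaf = prod(cost_drivers.values())
    return A_I[mode] * kloc ** B_I[mode] * eaf

# Illustrative driver ratings only (real values come from the rating tables).
drivers = {"required_reliability": 1.15, "programmer_capability": 0.86}
print(round(intermediate_effort(32, "organic", drivers), 1))
```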

—------------------------------------------------------------

-Detailed Model
● Refined version of intermediate model
● Cost is calculated phase by phase :
1. Plan / requirements
2. Product design
3. Programming
4. Integration / Test

#Data Flow Diagrams (DFD)

● A picture of the movement of data between external entities and the processes and data stores within a system

● It is not a flowchart. It contains arrows to represent flowing data, but no order is represented by them

● DFDs depict logical data flow independent of technology, while flowcharts depict details of physical systems
Symbols :

Process: work or actions performed on data (inside the system)
Data store: data at rest (inside the system)
Source/sink: external entity that is the origin or destination of data (outside the system)
Data flow: arrows depicting movement of data

DFD Rules :
● No process can have only outputs or only inputs; processes must have both outputs and inputs.
● Process labels should be verb phrases.
● All flows to or from a data store must move through a process.
● Data store labels should be noun phrases.
● No data moves directly between external entities without going through a process.
● Bidirectional flow between a process and a data store is represented by two separate arrows.
● Data flow cannot go directly from a process to itself; it must go through intervening processes.
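Several of these rules are mechanical enough to check automatically. A minimal sketch; the node/flow representation and the example diagram are my own, not a standard DFD file format:

```python
# Node kinds are "process", "data store", or "entity"; flows are (src, dst).
kinds = {"Customer": "entity", "Validate Order": "process", "Orders": "store"}
flows = [("Customer", "Validate Order"), ("Validate Order", "Orders")]

def check_dfd(kinds, flows):
    errors = []
    for src, dst in flows:
        # Rule: no data moves directly between external entities.
        if kinds[src] == "entity" and kinds[dst] == "entity":
            errors.append(f"entity-to-entity flow: {src} -> {dst}")
        # Rule: all flows to or from a data store must involve a process.
        if "store" in (kinds[src], kinds[dst]) and "process" not in (kinds[src], kinds[dst]):
            errors.append(f"store flow without a process: {src} -> {dst}")
    for node, kind in kinds.items():
        if kind == "process":
            # Rule: a process must have both inputs and outputs.
            if not any(dst == node for _, dst in flows):
                errors.append(f"process with no inputs: {node}")
            if not any(src == node for src, _ in flows):
                errors.append(f"process with no outputs: {node}")
    return errors

print(check_dfd(kinds, flows))  # prints [] (this small diagram passes every check)
```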

DFD Levels :

➢ Level-0 DFD : Representation of the system’s major processes at a high level of abstraction
➢ Level-1 DFD : Results from decomposition of the Level 0 diagram
➢ Level-n DFD : Results from decomposition of the Level n-1 diagram

Four Types of DFDs :

1. Current Physical
2. Current Logical
3. New logical
4. New physical
UNIT-2

Requirement
A function, constraint or other property that the system must provide to fill the
needs of the system’s intended user(s)

Requirement Engineering
means that requirements for a product are defined, managed and tested
systematically
RE establishes a solid base for design and construction. Without it, resulting
software has a high probability of not meeting customer needs.

Characteristics of a Good Requirement :


➢ Clear and Unambiguous
➢ A requirement contributes to a real need
➢ Understandable (A reader can easily understand the meaning of the
requirement)
➢ Verifiable (A requirement can be tested)
➢ Complete
➢ Consistent
➢ Traceable

Requirements Engineering Tasks :

● Inception —Establish a basic understanding of the problem and the nature of the solution.
● Elicitation —Draw out the requirements from stakeholders.
● Elaboration (Highly structured)—Create an analysis model that
represents information, functional, and behavioral aspects of the
requirements.
● Negotiation—Agree on a deliverable system that is realistic for
developers and customers.
● Specification—Describe the requirements formally or informally.
● Validation —Review the requirement specification for errors, ambiguities, omissions, and conflicts.
● Requirements management —Manage changing requirements.
Software Requirements Document:
A Software Requirements Document (SRD) is a formal document that outlines
the detailed requirements of a software system. It serves as a reference for
stakeholders, including developers, testers, and project managers, to
understand what the software should do and how it should behave.

The Software Requirements Document plays a crucial role in the software development lifecycle by providing a common understanding of the software system's requirements. It serves as a blueprint for the development team and helps in ensuring that the final software product meets the stakeholders' expectations.
Inception :
Ask “context-free” questions that establish …

➢ Basic understanding of the problem
➢ The people who want a solution
➢ The nature of the solution that is desired, and
➢ The effectiveness of preliminary communication and collaboration between the customer and the developer

Elicitation:
Requirement elicitation in software engineering is the process of gathering and
understanding what people want and need from a software system.

1. Identify who needs to be involved: Figure out who the important people
are that should be part of the process, like the users, customers, and
others who have a stake in the software.
2. Use different techniques to gather information: Talk to people through
interviews, surveys, and meetings to find out what they need. Also,
observe how they work and gather any existing documentation that can
help understand the requirements.
3. Write down the requirements clearly: Document the gathered
information in a way that is easy to understand. Use techniques like
describing user scenarios or writing specific user stories to capture the
requirements.
4. Check that the requirements make sense: Review the requirements with
the people involved to make sure they are accurate, complete, and
feasible. This may involve discussions, feedback sessions, and analyzing
the requirements for any potential problems.
5. Keep talking and working together: Requirement elicitation is not a
one-time thing. It requires ongoing communication and collaboration with
the people involved to clarify and refine the requirements as needed.

By following these steps, software developers can better understand what users
and stakeholders want, ensuring that the resulting software meets their
expectations and needs.
Why is Requirement elicitation difficult?

★ The boundary of the system is ill-defined.
★ Customers/users specify unnecessary technical detail that may confuse rather than clarify objectives.
★ Customers are not completely sure of what is needed.
★ Customers have a poor understanding of the capabilities and limitations
of the computing environment.
★ Customers have trouble communicating needs to the system engineer.
★ Requirements change over time.

Elaboration:
● Focuses on developing a refined technical model of software
functions, features, and constraints using the information obtained
during inception and elicitation
● Create an analysis model that identifies data, function and
behavioral requirements.
● It is driven by the creation and refinement of user scenarios that
describe how the end-user will interact with the system.
● Each user scenario is parsed to extract analysis classes.
● End result defines informational, functional and behavioral domain
of the problem

Negotiation:
Agree on a deliverable system that is realistic for developers and
customers
➢ Requirements are categorized and organized into subsets
➢ Relations among requirements identified
➢ Requirements reviewed for correctness
➢ Requirements prioritized based on customer needs
➢ Negotiation about requirements, project cost and project timeline.
➢ There should be no winner and no loser in effective negotiation.
Specification:

Requirement specification in software engineering involves documenting the specific details of what a software system should do and how it should behave.

It can be –
★ A written document
★ A set of graphical models
★ A formal mathematical model
★ A collection of usage scenarios
★ A prototype
★ A combination of the above.

It involves gathering requirements through various techniques such as interviews, surveys, and workshops, ensuring that all relevant information is considered.

The formality and format of a specification vary with the size and the complexity of the software to be built.

➢ For large systems, written documents, language descriptions, and graphical models may be the best approach.
➢ For small systems or products, usage scenarios

Validation :

Requirement validation is a crucial step in software engineering to ensure the accuracy and completeness of gathered requirements. It involves reviewing the requirements for errors, inconsistencies, and gaps before moving forward with development.
It looks for -

➢ Errors in content or interpretation
➢ Areas where clarification may be required
➢ Missing information
➢ Inconsistencies (a major problem when large products or systems
are engineered)
➢ Conflicting or unrealistic (unachievable) requirements.

Requirement Management :

Set of activities that help project team to identify, control, and track
requirements and changes as project proceeds

Requirements begin with identification. Each requirement is assigned a unique identifier. Once requirements have been identified, traceability tables are developed.

Traceability in software engineering refers to the ability to track and understand the relationships between different artifacts or elements throughout the software development lifecycle.

Traceability tables, also known as trace matrices or traceability matrices, are tools used in software engineering to establish and maintain traceability links between various artifacts throughout the software development process. These tables provide a structured way to track and document the relationships between different elements, such as requirements, design components, test cases, and code.
Types :
➔ Features traceability table - shows how requirements relate to
customer observable features
➔ Source traceability table - identifies source of each requirement
➔ Dependency traceability table - indicate relations among
requirements
➔ Subsystem traceability table - requirements categorized by
subsystem
➔ Interface traceability table - shows requirement relations to
internal and external interfaces

It will help to track whether a change in one requirement will affect different aspects of the system.
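A dependency traceability table can be kept as a simple mapping and queried for impact analysis. A sketch with made-up requirement IDs:

```python
# Dependency traceability table: requirement -> requirements that depend on it.
dependents = {
    "R1": ["R3", "R4"],   # changing R1 affects R3 and R4
    "R3": ["R5"],
}

def impact(req, table):
    """All requirements transitively affected by a change to `req`."""
    affected, stack = set(), [req]
    while stack:
        for dep in table.get(stack.pop(), []):
            if dep not in affected:
                affected.add(dep)
                stack.append(dep)
    return sorted(affected)

print(impact("R1", dependents))  # ['R3', 'R4', 'R5']
```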

Requirement Analysis:

Requirement analysis is a crucial phase in the software development process that involves understanding, refining, and documenting the requirements of a software system.
Throughout analysis modeling, the SE’s primary focus is on what, not on how.
The analysis model and the requirements specification provide the developer and the customer with a means to assess quality once the software is built.
Analysis Modeling Principles :
Analysis methods are related by a set of operational principles:
➢ The information domain of a problem must be represented and
understood.
➢ The functions that the software performs must be defined.
➢ The behavior of the software must be represented.
➢ The models that depict information, function and behavior must be
partitioned in a manner that uncovers detail in a layered fashion.
➢ The analysis task should move from essential information toward
implementation detail.

Data Dictionary -

A data dictionary is a centralized repository that provides a comprehensive description of data elements used within a software system.
By maintaining a data dictionary, organizations can ensure consistency, accuracy, and effective management of their data assets. It helps streamline system development, maintenance, and collaboration among stakeholders.

Some key points :

1. Definition and Description: A data dictionary defines and describes data elements, including their names, types, lengths, and meanings.
2. Consistency and Standardization: It ensures consistent
terminology and understanding of data elements across the system.
3. Metadata Management: Manages metadata such as data formats,
constraints, relationships, and validation rules.
4. Communication and Collaboration: Facilitates effective
communication and collaboration among stakeholders.
5. Impact Analysis and Maintenance: Helps assess the impact of
changes on data elements and update the system accordingly.
6. Documentation: Serves as documentation for data elements, aiding
in system maintenance and troubleshooting.
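An entry in a data dictionary can be represented as structured metadata, as in this sketch (the element and its field names are invented for illustration):

```python
# One illustrative data-dictionary entry: name, type, length, and meaning,
# plus metadata such as format and validation constraints.
data_dictionary = {
    "customer_email": {
        "type": "string",
        "max_length": 254,
        "description": "Primary contact e-mail address of the customer",
        "format": "RFC 5322 address",
        "constraints": ["not null", "unique"],
    },
}

entry = data_dictionary["customer_email"]
print(entry["type"], entry["max_length"])  # string 254
```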
UNIT 3

SOFTWARE DESIGN is a process to transform user requirements into some suitable form, which helps the programmer in software coding and implementation.

Imagine you want to build a treehouse. First, you have an idea of what you want it to
look like and what it should have, like a ladder, a slide, and a cozy space inside. That's
the "planning" stage. Then, you start drawing a detailed picture of your treehouse,
deciding where each part will go and how they will fit together. That's the "design"
stage.
So, in the software life cycle, the design part is like drawing a detailed picture of how
the software will be built. It's when we think about what the software should do, how
it should look, and how different parts will work together. Just like the treehouse
design helps us understand how everything will be put together, the software design
helps us plan and organize all the code and components needed to create the
software.
Once the design is finished, we can start building the software based on that plan. So,
the design phase comes after the planning phase and before the actual coding and
implementation of the software.

For assessing user requirements, an SRS (Software Requirement Specification) document is created, whereas for coding and implementation, there is a need for more specific and detailed requirements in software terms. The output of this process can directly be used in implementation in programming languages.

Software design is the first step in the SDLC (Software Development Life Cycle) that moves the concentration from the problem domain to the solution domain. It tries to specify how to fulfill the requirements mentioned in the SRS.
Software Design Levels

Software design yields three levels of results:

● Architectural Design - The architectural design is the highest abstract version of the system. It identifies the software as a system with many components interacting with each other. At this level, the designers get the idea of the proposed solution domain.

● High-level Design - The high-level design breaks the ‘single entity-multiple component’ concept of architectural design into a less-abstracted view of subsystems and modules and depicts their interaction with each other. High-level design focuses on how the system along with all of its components can be implemented in the form of modules. It recognizes the modular structure of each sub-system and their relation and interaction with each other.

● Detailed Design - Detailed design deals with the implementation part of what is seen as a system and its sub-systems in the previous two designs. It is more detailed towards modules and their implementations. It defines the logical structure of each module and their interfaces to communicate with other modules.

ABSTRACTION

Abstraction involves simplifying complex concepts by focusing on the essential details while hiding unnecessary complexities. It allows us to view a system at a higher level without getting into the implementation details.

These topics come under High level design
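Abstraction as described above maps directly onto an abstract interface. A sketch using Python's abc module; the gateway classes and method are invented for illustration:

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """Essential behaviour only: callers never see the implementation."""
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class SimpleGateway(PaymentGateway):
    def charge(self, amount_cents: int) -> bool:
        # The messy network/API details would be hidden behind the abstraction.
        return amount_cents > 0

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    # checkout() depends only on the abstract interface, not the concrete class.
    return "paid" if gateway.charge(amount_cents) else "failed"

print(checkout(SimpleGateway(), 500))  # paid
```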


REFINEMENT

Refinement in software design refers to the process of breaking down a complex problem or system into smaller, more manageable components. It involves adding more detail and specificity to each component, making the overall system easier to understand, implement, and maintain.
During the refinement process, we start with a high-level view of the system and
progressively decompose it into smaller modules, classes, or functions. Each level of
refinement provides more detailed information about the design and behavior of the
software.
By refining the design, we improve the clarity and understandability of the system, making it easier to implement, test, and maintain. It also enables parallel development, as different team members can work on different components simultaneously.
Refinement is an essential aspect of software design that promotes modular, scalable, and maintainable solutions.
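Stepwise refinement can be illustrated by decomposing a high-level operation into smaller, more specific steps (the payroll example and its flat 20% deduction are invented):

```python
# High-level view: one operation...
def run_payroll(employees):
    for e in employees:
        pay(e)

# ...refined into smaller, more specific steps.
def pay(employee):
    gross = compute_gross(employee)
    net = gross - compute_deductions(gross)
    issue_payment(employee, net)

def compute_gross(employee):
    return employee["hours"] * employee["rate"]

def compute_deductions(gross):
    return gross * 0.2          # illustrative flat 20% deduction

def issue_payment(employee, net):
    print(f"{employee['name']}: {net:.2f}")

run_payroll([{"name": "Ada", "hours": 40, "rate": 50.0}])  # Ada: 1600.00
```

Each function at the lower level is small enough to implement and test on its own, which is exactly the benefit the paragraph above describes.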

MODULARITY

Modularity is the practice of dividing a system into separate modules or components that can be developed and maintained independently. Each module performs a specific task or has a well-defined responsibility.
Modular design naturally follows the rules of the ‘divide and conquer’ problem-solving strategy, and there are many other benefits attached to the modular design of software.

Advantages of modularization:

1. Smaller components are easier to maintain
2. The program can be divided based on functional aspects
3. The desired level of abstraction can be brought into the program
4. Components with high cohesion can be re-used again
5. Concurrent execution can be made possible
6. Desired from a security aspect

CONCURRENCY

In the past, all software was meant to be executed sequentially. By sequential execution we mean that the coded instructions will be executed one after another, implying only one portion of the program being active at any given time. Say a software system has multiple modules; then only one of all the modules can be found active at any time of execution.

In software design, concurrency is implemented by splitting the software into multiple independent units of execution, like modules, and executing them in parallel. In other words, concurrency provides the capability for the software to execute more than one part of code in parallel.
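Running independent modules in parallel can be sketched with Python threads (the two module functions are invented stand-ins for real work):

```python
import threading
import time

def report_module():
    time.sleep(0.1)          # stands in for real report-generation work
    print("report ready")

def billing_module():
    time.sleep(0.1)          # stands in for real billing work
    print("billing done")

# Sequential execution would take ~0.2 s; running both modules
# concurrently finishes in ~0.1 s.
threads = [threading.Thread(target=report_module),
           threading.Thread(target=billing_module)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for both units of execution to finish
```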

When a software program is modularized, its tasks are divided into several modules based on some characteristics. As we know, modules are sets of instructions put together in order to achieve some tasks. They are, though, considered a single entity, but may refer to each other to work together. There are measures by which the quality of a design of modules and their interaction among them can be measured. These measures are called coupling and cohesion.

COHESION

Cohesion in software design refers to how closely related and focused the
responsibilities of a module or component are. It measures how well a module or
component performs a single, well-defined task. The greater the cohesion, the better is
the program design.

Seven types of cohesion, ranked from the strongest to the weakest:

1. Functional Cohesion: The module performs a single, well-defined function, such


as calculating the square root of a number.

2. Sequential Cohesion: The module performs a sequence of related actions, such


as reading input, processing it, and generating output.

3. Communicational Cohesion: The module operates on the same data or shares


data with other modules, such as a module that handles database operations.

4. Procedural Cohesion: The module performs a set of tasks necessary to


complete a procedure, such as a module responsible for user authentication.

5. Temporal Cohesion: The module performs tasks that are executed at the same
time or within the same timeframe, such as a module that handles event
scheduling.

6. Logical Cohesion: The module performs tasks that are logically related but do
not fit into the above categories, such as a module that handles error logging
and reporting.

7. Coincidental Cohesion: The module performs unrelated tasks that do not have a
common purpose, indicating a poor design where responsibilities are not
well-defined or organized.

In this context, the term "strongest" refers to the highest degree of cohesion, indicating
that the module performs a single, well-defined function or has a highly focused
purpose. Conversely, the term "weakest" refers to the lowest degree of cohesion,
indicating that the module performs unrelated tasks or lacks a clear and specific
purpose.
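As a sketch of the two extremes (the function names are illustrative), compare a functionally cohesive module with a coincidentally cohesive one:

```python
import math

# Functional cohesion (strongest): the module does exactly one
# well-defined thing.
def square_root(x: float) -> float:
    return math.sqrt(x)

# Coincidental cohesion (weakest): unrelated tasks lumped into one module.
def misc_utilities(x: float, message: str):
    root = math.sqrt(x)            # a numeric task...
    log_line = "LOG: " + message   # ...and an unrelated logging task
    return root, log_line
```

Both functions work, but only the first has a single, focused responsibility; the second mixes concerns that would be better placed in separate modules.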

COUPLING
Coupling in software design refers to the degree of interdependence or interaction
between software modules or components. It measures how closely connected or
reliant one module is on another.
The five levels of coupling, from strongest (tightest) to weakest (loosest), are:

1. Content Coupling: Modules directly access and modify each other's data,
indicating a strong interdependency.
2. Common Coupling: Modules share a global data area, which can lead to unintended side effects and reduced maintainability.
3. Control Coupling: One module controls the execution of another module by
passing control information or flags.
4. Stamp Coupling: Modules share a data structure, but they have no direct
coupling beyond that shared structure.
5. Data Coupling: Modules communicate by passing data through parameters or
arguments, promoting loose coupling and better modularization.
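A minimal sketch contrasting the loosest form with one of the tighter forms (the names and figures are illustrative):

```python
# Data coupling (weakest/loosest): modules communicate only through
# parameters and return values.
def compute_tax(amount: float, rate: float) -> float:
    return amount * rate

# Common coupling (strong): modules depend on shared global data,
# which invites unintended side effects when any module changes it.
SHARED_STATE = {"amount": 100.0, "rate": 0.25}

def compute_tax_from_globals() -> float:
    return SHARED_STATE["amount"] * SHARED_STATE["rate"]
```

The first function can be reasoned about and tested in isolation; the second silently depends on whoever last touched `SHARED_STATE`.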

ARCHITECTURAL DESIGN

Architectural design is the first step in the software design process.

Architectural design in software engineering is the process of creating the conceptual structure and framework of a software system. It involves making decisions regarding the organization, arrangement, and interaction of system components to meet the desired functionality and quality requirements. Architectural design provides a high-level view of the system, allowing developers to understand and communicate the system's structure effectively.

Types of architectural design include:


1. Monolithic Architecture: Single, self-contained system.
2. Client-Server Architecture: Separation of client and server components.
3. Layered Architecture: Hierarchical arrangement of layers.
4. Microservices Architecture: Decomposition into small, independent services.
5. Event-Driven Architecture: Communication through event notifications.
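As an illustrative sketch of the layered style (the class names and data are hypothetical), each layer talks only to the layer directly below it:

```python
# A three-layer architecture: presentation -> business -> data.

class DataLayer:
    def fetch_user(self, user_id: int) -> dict:
        # Stand-in for a real database query.
        return {"id": user_id, "name": "alice"}

class BusinessLayer:
    def __init__(self, data: DataLayer):
        self.data = data

    def greeting_for(self, user_id: int) -> str:
        user = self.data.fetch_user(user_id)
        return "Hello, " + user["name"] + "!"

class PresentationLayer:
    def __init__(self, business: BusinessLayer):
        self.business = business

    def render(self, user_id: int) -> str:
        return self.business.greeting_for(user_id)

page = PresentationLayer(BusinessLayer(DataLayer())).render(1)
```

Because the presentation layer never touches the data layer directly, the database stand-in could be swapped for a real store without changing the upper layers.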

The architectural design process involves understanding system requirements, identifying key components and their interactions, specifying interfaces and data flow, and considering performance, security, and maintainability aspects. It often includes creating architectural diagrams, such as block diagrams, component diagrams, and deployment diagrams, to visualize the system's structure and behavior.

DETAILED DESIGN

The detailed design phase in software engineering is where the high-level concepts and
requirements of a software system are transformed into a detailed blueprint for
implementation. It involves breaking down the system into smaller components,
specifying their structure, behavior, and interaction.

During this phase, designers create detailed design documents that provide
instructions for developers on how to build each component. These documents outline
the data structures, algorithms, interfaces, and other technical details required for
implementation.

The key activities in the detailed design phase include:

1. Component Design
2. Interface Design
3. Database Design
4. Algorithm Design
5. User Interface Design

The goal of the detailed design phase is to provide developers with clear and precise instructions, ensuring that the software system is implemented accurately and efficiently.
—---------------------------------------------------------------------------------------------------------

Transaction transformation is a crucial aspect of the detailed design phase in software engineering. It involves the translation of high-level transaction specifications into detailed transaction programs or code. This process focuses on breaking down transactions into smaller, manageable units for implementation.

The goal of transaction transformation is to simplify the implementation and management of transactions by breaking them into smaller units that can be executed and coordinated more efficiently. It helps in ensuring transactional integrity, concurrency control, and error handling at a more granular level.

Key considerations during transaction transformation include:

● Transaction Partitioning: Dividing a transaction into smaller components to ensure modularization and ease of implementation.
● Data Access and Manipulation: Defining how data will be accessed and
modified within each transaction, considering data integrity and concurrency
control.
● Error Handling: Determining how to handle exceptions, errors, and transaction
failures, ensuring proper rollback or recovery mechanisms are in place.
● Control Flow: Establishing the sequence and logic of operations within each
transaction, ensuring proper execution and synchronization with other
components.
● Security and Authorization: Incorporating appropriate security measures to
protect sensitive data and enforcing access control policies for each transaction.
● Performance Optimization: Identifying opportunities for optimizing transaction
execution, such as minimizing resource utilization or reducing response time.

REFACTORING OF DESIGNS

Refactoring means making improvements to the code without changing its functionality. It's like cleaning up and organizing the code to make it easier to understand, maintain, and enhance in the future.

Imagine you have a piece of code that works correctly, but it's messy and hard to read.
Refactoring involves rewriting parts of the code to make it cleaner, more efficient, and
easier to work with. This could involve renaming variables or functions to have more
meaningful names, splitting large chunks of code into smaller, reusable parts, or
simplifying complex logic to make it easier to follow.

The purpose of refactoring is to make the code easier to work with, reducing the
chances of introducing bugs and making it more flexible for future changes. It's like
tidying up your room to make it more organized and inviting.
1. Improves code readability and maintainability.
2. Enhances code efficiency and performance.
3. Increases code reusability and modularity.
4. Reduces the risk of introducing bugs during software development.
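A before/after sketch (the function is illustrative): both versions behave identically, but the refactored one is easier to read and maintain:

```python
# Before refactoring: cryptic names and manual accumulation.
def f(l):
    t = 0
    for x in l:
        if x > 0:
            t = t + x
    return t

# After refactoring: same behavior, expressed clearly and idiomatically.
def sum_of_positives(numbers):
    return sum(n for n in numbers if n > 0)
```

A key discipline of refactoring is checking, with tests, that the new version produces exactly the same results as the old one for the same inputs.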

OBJECT ORIENTED DESIGN

Object-oriented design is an approach to software design that focuses on creating modular and reusable components called objects. These objects encapsulate both data and the operations that can be performed on that data. Here are some key points about object-oriented design:

1. Classes and Objects: Objects are instances of classes, which are blueprint
templates that define the properties (data) and behaviors (methods) of the
objects. For example, if we have a class called "Car", an object of that class could
be a specific car with its own unique characteristics.

2. Encapsulation: Encapsulation is the concept of hiding the internal details of an object and providing access to its functionality through well-defined interfaces. This allows for better control over data and protects it from unauthorized access. For example, a car object may have private variables for its speed and fuel level, but provides public methods to set and retrieve these values.

3. Inheritance: Inheritance allows for the creation of new classes based on existing ones. This promotes code reuse and supports the concept of "is-a" relationships. For example, we can have a class "SUV" that inherits from the "Car" class, inheriting its properties and behaviors while also adding new ones specific to SUVs.

4. Polymorphism: Polymorphism allows objects of different classes to be treated as objects of a common superclass. This promotes flexibility and extensibility. For example, we can have a method that takes a "Vehicle" object as a parameter, and it can accept any subclass of "Vehicle" such as a car or a bike.

Overall, object-oriented design provides a structured and modular approach to software development, making it easier to manage complexity, improve code reuse, and create flexible and maintainable systems.
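The four concepts above can be sketched together, following the Car/SUV example (the implementation details are illustrative):

```python
class Car:
    def __init__(self):
        self._speed = 0              # encapsulated: not accessed directly

    def set_speed(self, speed: int) -> None:
        if speed >= 0:               # the interface protects the data
            self._speed = speed

    def get_speed(self) -> int:
        return self._speed

    def describe(self) -> str:
        return "a car"

class SUV(Car):                      # inheritance: an SUV "is-a" Car
    def describe(self) -> str:       # overrides the inherited behavior
        return "an SUV"

def report(vehicle: Car) -> str:     # polymorphism: accepts any Car subclass
    return "This is " + vehicle.describe()
```

Note how `report` works for both `Car` and `SUV` objects without knowing which one it received, and how an invalid speed is rejected by the setter rather than corrupting the object's state.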
USER INTERFACE DESIGN (UI DESIGN)

User interface design focuses on creating visually appealing and user-friendly interfaces for software applications. It involves designing the visual elements, organizing information, defining interactions, and ensuring usability and user experience. Here are five key aspects of user interface design:

1. Visual Design: Creating visually appealing interfaces using appropriate colors, typography, icons, and images.
2. Information Architecture: Organizing and structuring information in a logical
and intuitive way for easy navigation.
3. Interaction Design: Designing interactive elements like buttons, forms, and
input fields to ensure smooth user interactions.
4. Usability: Ensuring the interface is easy to use, intuitive, and provides a positive
user experience.
5. Accessibility: Designing interfaces that can be accessed and used by all users,
including those with disabilities.

These aspects work together to create interfaces that are visually engaging, easy to
navigate, and provide a seamless and enjoyable user experience.

SOFTWARE TESTING

Software testing is the process of evaluating a software application to ensure that it meets the specified requirements and works as expected. It involves executing the software with the intention of finding bugs, errors, and defects. The goal of testing is to identify and fix any issues before the software is released to the end-users, ensuring its quality, reliability, and functionality.

Software testing is one element of a broader topic that is often referred to as verification and validation (V&V). Verification refers to the set of tasks that ensure that software correctly implements a specific function. Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements.

Boehm states this another way:

Verification: "Are we building the product right?"
Validation: "Are we building the right product?"

Verification and validation include a wide array of software QA activities:

Technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, usability testing, qualification testing, acceptance testing, and installation testing.

WHITE BOX AND BLACK BOX TESTING

White Box Testing: White box testing is performed with knowledge of the internal
structure, code, and implementation of the software. Testers use this information to
design test cases that exercise different paths and conditions within the code. The aim
is to ensure that all statements, branches, and logical paths are tested thoroughly.
White box testing techniques include statement coverage, branch coverage, and path
coverage.

● Example: Suppose you have a function that calculates the average of a list of
numbers. In white box testing, you would examine the code and design test
cases to cover different scenarios such as an empty list, a list with only one
number, and a list with multiple numbers. You would also ensure that the code
handles edge cases correctly, such as handling negative numbers or handling
large lists.
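A sketch of that example: the test cases are chosen by looking at the code, so that every branch is exercised.

```python
def average(numbers):
    # Branch 1: guard against the empty-list case.
    if not numbers:
        return 0.0
    # Branch 2: the normal computation path.
    return sum(numbers) / len(numbers)

# White-box test cases: one per branch/condition visible in the code.
assert average([]) == 0.0                 # empty-list branch
assert average([4.0]) == 4.0              # single number
assert average([1.0, 2.0, 3.0]) == 2.0    # multiple numbers
assert average([-2.0, 2.0]) == 0.0        # negative-number edge case
```

Here the empty-list test exists precisely because the tester can see the `if not numbers` guard; a purely black-box tester might not know that branch is there.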

Black Box Testing: Black box testing is performed without any knowledge of the
internal structure or code of the software. Testers focus solely on the external behavior
and functionality of the software. Test cases are designed based on the specified
requirements, input/output specifications, and user expectations. The aim is to validate
that the software functions correctly from the user's perspective.

● Example: Suppose you have a login feature in a web application. In black box
testing, you would design test cases to cover various scenarios such as valid
login credentials, invalid login credentials, empty fields, and special characters in
the input fields. You would test the behavior of the application without knowing
how the authentication process is implemented internally.
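A sketch of that example: the validator is treated as opaque, and the test cases come only from its specified behavior (the rules and credentials are illustrative).

```python
# Hypothetical login validator; black-box testers see only inputs/outputs.
def validate_login(username: str, password: str) -> str:
    if not username or not password:
        return "error: empty field"
    if username == "alice" and password == "s3cret":
        return "success"
    return "error: invalid credentials"

# Black-box test cases derived from the specification, not the code:
assert validate_login("alice", "s3cret") == "success"            # valid
assert validate_login("alice", "wrong") == "error: invalid credentials"
assert validate_login("", "s3cret") == "error: empty field"      # empty field
```

The same three cases would apply no matter how the authentication logic were implemented internally, which is exactly the black-box point of view.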
SOFTWARE STRESS TESTING

Stress testing is a type of software testing that evaluates the performance and stability
of a system under extreme or peak loads. It aims to identify the breaking point or
limitations of the system and how it handles high-stress conditions.

In stress testing, the software is subjected to heavy workloads, high traffic, or data
volume to assess its response and behavior. The goal is to understand how the system
handles these stressful situations and to ensure it can handle them without crashing or
experiencing performance degradation.

Example: Let's consider an e-commerce website that experiences a significant surge in traffic during a flash sale event. To perform stress testing, the website is simulated to receive an unusually high number of concurrent user requests, placing a heavy load on the server and network infrastructure. Testers monitor the response time, system resource utilization, and any errors or failures that occur during this peak load. The objective is to determine if the website can handle the increased traffic smoothly or if it becomes unresponsive or crashes under the stress.

Stress testing helps identify bottlenecks, weaknesses, or vulnerabilities in the system under intense conditions. It allows developers and testers to make necessary optimizations, such as improving server capacity, optimizing database queries, or optimizing code, to ensure the system can handle high-stress situations effectively. By conducting stress testing, organizations can ensure their software is robust, reliable, and capable of handling demanding scenarios.
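A minimal sketch of the idea: fire many concurrent requests at a stand-in handler and count failures. In a real stress test the handler would be a network call against the deployed system, and the request count would be pushed far past normal load.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> bool:
    # Stand-in for a real request to the system under test.
    return True

def stress_test(n_requests: int, n_workers: int = 50) -> int:
    """Run n_requests concurrently and return the number of failures."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        outcomes = list(pool.map(handle_request, range(n_requests)))
    return outcomes.count(False)

failures = stress_test(200)
```

Raising `n_requests` until `failures` climbs (or response times degrade) is how the breaking point of the system is located.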

ALPHA TESTING

Alpha testing is a type of software testing conducted by the developers or a select group of users before releasing the software to the public. It aims to evaluate the software's functionality, usability, and overall quality.

During alpha testing, real-world scenarios are simulated to identify any bugs, usability
issues, or design flaws. Feedback from the alpha testers helps developers make
improvements and fine-tune the software before the official release.

For example, a software company may invite a small group of users to test a new
mobile app. These testers will use the app in different scenarios, report any issues they
encounter, and provide feedback on the app's features and usability. The developers
will then make necessary changes based on this feedback to enhance the app's
performance and user experience.

BETA TESTING

Beta testing is a type of software testing conducted by a larger group of external users
before the final release of the software. It allows developers to gather feedback from a
diverse user base and uncover any remaining issues or bugs.

During beta testing, the software is made available to a wider audience who use it in
real-world scenarios. The testers provide feedback on their experience, report any
problems they encounter, and suggest improvements. This feedback helps the
developers identify and address any remaining issues, enhance the software's usability,
and ensure its stability.

For example, a software company may release a beta version of a new video editing
software to a group of volunteer testers. These testers will use the software, explore its
features, and provide feedback on any issues or suggestions for improvement. The
developers will then analyze this feedback and make necessary changes before the
final release of the software to the general public.
ACCEPTANCE TESTING

Acceptance testing is a type of software testing performed to determine if a system meets the specified requirements and is acceptable to the end-users or stakeholders. It is conducted at the end of the development process to ensure that the software is ready for deployment. Acceptance testing verifies whether the software functions as expected and meets the user's needs. It focuses on validating the system from a user's perspective and ensuring that it meets the predefined acceptance criteria.

In acceptance testing, real-world scenarios are simulated to evaluate the software's performance, usability, reliability, and compatibility. It aims to gain user confidence and ensure that the software is ready for production use. Acceptance testing can be conducted by both the end-users and the development team, depending on the project's requirements and stakeholders' involvement. The results of acceptance testing help determine whether the software can be accepted for use or if further modifications or improvements are needed.

In acceptance testing, an example could be testing a mobile banking application. Testers would perform tasks such as creating an account, logging in, checking account balances, transferring funds, and verifying that the transactions are accurately reflected. The aim is to ensure that the application functions correctly, meets user requirements, and provides a seamless user experience. Any issues identified are reported and resolved before the application is approved for release to end-users.

DEBUGGING

Debugging occurs as a consequence of successful testing. That is, when a test case
uncovers an error, debugging is the process that results in the removal of the error.

Debugging is not testing but often occurs as a consequence of testing. The debugging
process attempts to match symptom with cause, thereby leading to error correction.

The debugging process will usually have one of two outcomes:

(1) the cause will be found and corrected, or
(2) the cause will not be found.
● Debugging strategies involve locating the source of a problem using approaches
like brute force, backtracking, and cause elimination.

● Brute force debugging is a common but inefficient method used when other approaches fail.

● Backtracking involves tracing the source code backward from the identified
symptom to find the cause, but it becomes challenging as the program size
increases.

● Cause elimination uses induction or deduction and binary partitioning to narrow down the potential causes based on error occurrence data.

● Before correcting a bug, consider if the bug pattern exists elsewhere, potential
new bugs that may be introduced, and preventive measures for future bugs.
UNIT 4

SOFTWARE MAINTENANCE

Software Maintenance is the process of modifying a software product after it has been
delivered to the customer. The main purpose of software maintenance is to modify and
update software application after delivery to correct faults and to improve
performance.

Software Maintenance must be performed in order to:

● Correct faults.
● Improve the design.
● Implement enhancements.
● Interface with other systems.
● Accommodate programs so that different hardware, software, system features, and telecommunications facilities can be used.
● Migrate legacy software.
● Retire software.

Categories of Software Maintenance –

1. Corrective maintenance: Corrective maintenance of a software product may be essential either to rectify some bugs observed while the system is in use, or to enhance the performance of the system.

2. Adaptive maintenance: This includes modifications and updates when the customers need the product to run on new platforms, on new operating systems, or when they need the product to interface with new hardware and software.

3. Perfective maintenance: A software product needs maintenance to support the new features that the users want or to change different types of functionalities of the system according to customer demands.

4. Preventive maintenance: This type of maintenance includes modifications and updates to prevent future problems with the software. It aims to address problems that are not significant at the moment but may cause serious issues in the future.
SOFTWARE CONFIGURATION MANAGEMENT

Software configuration management (SCM) is like a caretaker for software systems. It helps to manage and control the changes that occur during the lifecycle of a software project. It involves tasks such as version control, change management, and release management.

Version control is about keeping track of different versions of software files and
managing changes made by multiple developers. It ensures that everyone is working
on the correct and latest version of the code.

Change management involves handling requests for modifications or updates to the software. It tracks and evaluates these requests, ensures they are properly documented, and coordinates the implementation of approved changes.

Release management is responsible for preparing and delivering software releases. It involves packaging the software, creating release notes and documentation, and coordinating the deployment to users or customers.

RE-ENGINEERING

Software reengineering is like giving a makeover to existing software. It involves the process of transforming or improving the structure, functionality, or performance of software systems without changing their external behavior.

When we need to update the software to keep it to the current market, without
impacting its functionality, it is called software re-engineering. It is a thorough process
where the design of software is changed and programs are re-written.

Re-Engineering Process

● Decide what to re-engineer: the whole software or a part of it?
● Perform reverse engineering in order to obtain specifications of the existing software.
● Restructure the program if required, for example by changing function-oriented programs into object-oriented programs.
● Restructure data as required.
● Apply forward engineering concepts in order to obtain the re-engineered software.
REVERSE ENGINEERING

Reverse engineering is like unraveling a mystery by analyzing and understanding an existing software system. It involves the process of examining the code, structure, and behavior of a software system to extract information and gain insights about its design, functionality, and implementation.

Just like how a detective investigates a crime scene to understand how it happened,
reverse engineering involves studying the software's components, interactions, and
logic to comprehend how it works. This can be done by analyzing the compiled code,
disassembling binaries, or examining system artifacts.

Reverse engineering is often used when the original source code or documentation is
not available or when there is a need to understand and modify existing software
systems. It helps in uncovering hidden knowledge, identifying vulnerabilities, or reusing
existing components for further development or improvement.

FORWARD ENGINEERING

Forward engineering is the process of creating new software systems from scratch,
starting with requirements analysis and design, and proceeding with implementation
and testing. It involves moving forward in the software development life cycle, building
the system step by step.

In forward engineering, software engineers use their knowledge and expertise to design and develop software solutions based on the specified requirements. They follow a systematic approach, translating the design into code and integrating different components to create a fully functional software system.

Unlike reverse engineering, which focuses on understanding and analyzing existing software, forward engineering is about creating new software solutions by applying design principles, coding techniques, and best practices. It involves planning, designing, and implementing software systems to meet the desired objectives and functionality.

Forward engineering is commonly used in software development projects where there is a need to build custom software tailored to specific requirements or to enhance existing systems by adding new features and functionality.
Software measurement is concerned with deriving a numeric value for an attribute
of a software product or process. This allows for objective comparisons between
techniques and processes. Although some companies have introduced measurement
programs, most organizations still don’t make systematic use of software
measurement. There are few established standards in this area.

A software metric is any type of measurement that relates to a software system, process, or related documentation: lines of code in a program, the fog index (a readability measure), or the number of person-days required to develop a component. Metrics allow the software and the software process to be quantified. They may be used to predict product attributes or to control the software process. Product metrics can be used for general predictions or to identify anomalous components.

Process metrics include:

● The time taken for a particular process to be completed. This can be the total time devoted to the process, calendar time, the time spent on the process by particular engineers, and so on.

● The resources required for a particular process. Resources might include total effort in person-days, travel costs, or computer resources.

● The number of occurrences of a particular event. Examples of events that might be monitored include the number of defects discovered during code inspection, the number of requirements changes requested, the number of bug reports in a delivered system, and the average number of lines of code modified in response to a requirements change.

—---------------------------------------------------------------------------------------------------------
METRICS FOR SOFTWARE QUALITY

● Software quality metrics are objective measurements used to assess the quality
of a software product or process.
● They provide quantitative information on aspects like reliability, maintainability,
efficiency, usability, and security.
● Examples of metrics include defect density, code coverage, customer
satisfaction index, and mean time to repair.
● Metrics help identify areas for improvement, track progress, and ensure software
meets quality standards.
● Measurement techniques include static analysis, dynamic analysis, code reviews,
testing, and user surveys.
● Metrics should be interpreted in the context of the specific project and
stakeholder requirements.
● They enable data-driven decision making and continuous improvement of
software quality.
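Two of the metrics named above can be computed directly (the project figures are illustrative):

```python
def defect_density(defects: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / kloc

def mean_time_to_repair(repair_hours) -> float:
    """Average hours taken to fix a defect."""
    return sum(repair_hours) / len(repair_hours)

# Illustrative project data: 30 defects found in 15 KLOC,
# and three repairs that took 2, 4, and 6 hours.
density = defect_density(30, 15.0)
mttr = mean_time_to_repair([2.0, 4.0, 6.0])
```

Tracking such figures across releases is what turns them into quality indicators: a rising defect density or MTTR signals that maintainability or reliability is slipping.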

RISK MANAGEMENT
The three types of risks in software engineering:

1. Project Risks: These risks are associated with the management and execution of
the software development project. They include factors such as inaccurate
project estimation, insufficient resources, unrealistic deadlines, and ineffective
communication among team members. Project risks can lead to delays, cost
overruns, and failure to meet project goals.

2. Technical Risks: Technical risks are related to the software development process
itself. They involve challenges or uncertainties associated with technology,
design, implementation, and integration. Examples of technical risks include
compatibility issues, software complexity, scalability limitations, security
vulnerabilities, and performance bottlenecks. Technical risks can result in system
failures, poor quality software, or the inability to deliver desired functionality.

3. Business Risks: Business risks are concerned with the impact of software
development on the overall business objectives and success. They involve factors
such as market competition, changing customer needs, financial constraints,
and legal or regulatory compliance. Business risks may include failure to achieve
desired market share, loss of revenue, negative impact on brand reputation, or
legal repercussions.

Risk identification: Risk identification is the process of identifying potential risks and
uncertainties that could affect the success of a project. It involves carefully examining
various aspects of the project, including the project scope, requirements, available
resources, and external factors. The goal is to identify any factors that could pose a
threat to the project's objectives.
For example, in a software development project, the risk of inadequate user
involvement during the requirements gathering phase could lead to misunderstandings
and project delays. By identifying this risk early on, appropriate measures can be taken
to address it.

Risk projection: Once risks are identified, they need to be assessed in terms of their
potential impact and likelihood of occurrence. This step helps in prioritizing risks and
determining the level of attention and resources required for their management. Risk
projection involves analyzing each identified risk and projecting its potential
consequences on the project.

For instance, if there is a risk related to software compatibility issues, the project team
may project the potential impact on the project's timeline and budget. This information
enables them to allocate resources and plan for mitigation strategies accordingly.
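Risk projection is often quantified as risk exposure, RE = P × C, where P is the probability that the risk occurs and C is the cost incurred if it does. A sketch with illustrative figures:

```python
def risk_exposure(probability: float, cost: float) -> float:
    """RE = P x C: the expected cost of a risk."""
    return probability * cost

# (risk name, probability, cost if it occurs) -- figures are illustrative.
risks = [
    ("software compatibility issues", 0.30, 20000.0),
    ("inadequate user involvement", 0.50, 15000.0),
]

# Rank the risks so mitigation effort goes to the largest exposure first.
ranked = sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)
```

Here the second risk has the larger exposure (0.50 × 15000 = 7500 versus roughly 6000), so it would be prioritized even though its individual cost is lower.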

Risk refinement: Risk refinement involves further analyzing and evaluating identified
risks to gain a deeper understanding of their severity, probability, and potential
consequences. It involves assessing the risks in more detail, gathering additional
information, and refining the risk assessment. This helps in focusing on the most critical
risks and developing specific strategies for risk mitigation.

For example, if the risk is identified as inadequate testing resources, the team can
further refine this risk by identifying specific constraints and potential solutions, such
as outsourcing testing tasks or acquiring additional resources.

By actively managing risks through effective identification, projection, and refinement, organizations can increase the chances of project success and minimize the negative impacts of uncertainties and unforeseen events. It is an ongoing process throughout the project lifecycle, requiring regular monitoring and adjustment of risk management strategies as new risks arise or existing risks change in severity.

—---------------------------------------------------------------------------------------------------------

RMMM

Risk Mitigation, Monitoring and Management (RMMM) is a systematic approach used to identify, track, and manage risks throughout the project lifecycle. It involves implementing strategies to reduce the impact and likelihood of identified risks, while continuously monitoring and managing them to ensure their effectiveness. RMMM aims to proactively address risks and minimize their negative impact on the project.
The process of RMMM begins with risk mitigation, which involves developing and
implementing strategies to reduce the impact and likelihood of identified risks. This
may include implementing preventive measures, establishing contingency plans, or
defining risk response strategies. For example, if a software development project
identifies a risk related to resource constraints, the team may decide to allocate
additional resources, outsource certain tasks, or adjust the project schedule to mitigate
the impact of the risk.

Once the mitigation strategies are in place, the next step is risk monitoring. This
involves actively tracking and observing the identified risks to ensure that the
implemented mitigation strategies are effective and that new risks are promptly
identified. Regular monitoring helps in identifying any changes in the risk landscape
and allows for timely adjustments in mitigation approaches. For instance, if the risk of
software compatibility issues persists despite mitigation efforts, the project team may
need to reassess the strategies and consider alternative solutions.

Risk management is an ongoing process that requires continuous evaluation and
management of risks. This includes assessing the effectiveness of mitigation strategies,
analyzing the impact of changes in project conditions, and adapting the risk
management approach accordingly. For example, if a new risk emerges during the
project execution phase, the team may need to revise the risk management plan and
implement additional mitigation measures.

Overall, RMMM provides a structured framework for identifying, mitigating, monitoring,
and managing risks throughout the project. It helps in reducing uncertainty, improving
project outcomes, and enhancing overall project success. By effectively implementing
RMMM practices, organizations can proactively address risks, minimize disruptions, and
ensure the successful completion of their projects.
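The mitigate-monitor-manage loop above can be sketched as a small risk register. The field names and status values here are an illustrative assumption, not a standard RMMM format:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    mitigation: str            # strategy chosen to reduce impact/likelihood
    status: str = "open"       # open -> mitigating -> closed (illustrative states)
    notes: list = field(default_factory=list)

class RiskRegister:
    """Tracks risks and records monitoring observations over the lifecycle."""

    def __init__(self):
        self.risks = []

    def add(self, risk):
        # Identification feeds new risks into the register.
        self.risks.append(risk)

    def monitor(self, name, observation, effective):
        # Record an observation; close the risk only if mitigation worked,
        # otherwise keep it in "mitigating" so the strategy is reassessed.
        for r in self.risks:
            if r.name == name:
                r.notes.append(observation)
                r.status = "closed" if effective else "mitigating"

reg = RiskRegister()
reg.add(Risk("resource constraints", mitigation="outsource testing tasks"))
reg.monitor("resource constraints", "backlog still growing", effective=False)
# The risk stays "mitigating": per the text, the team must reassess
# the strategy and consider alternative solutions.
```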

RELIABILITY

Reliability in software refers to the ability of a software system to perform its intended
functions without failure or errors, under specified conditions, for a defined period of
time. It is a crucial aspect of software quality and ensures that the software behaves
consistently and predictably.

Six commonly used reliability metrics to quantify the reliability of software products:
1. Failure Rate: The failure rate is a measure of the frequency at which failures or
errors occur in the software system over a specified period of time. It is typically
expressed as the number of failures per unit of time, such as failures per hour or
failures per month.

2. Mean Time Between Failures (MTBF): MTBF represents the average time
interval between consecutive failures in the software system. It provides an
indication of the system's reliability by measuring the average time it operates
without experiencing a failure. MTBF is calculated by dividing the total operating
time by the number of failures.

3. Availability: Availability measures the proportion of time that the software
system is operational and accessible to users. It takes into account both planned
and unplanned downtime. High availability indicates a reliable system that is
consistently available for use.

4. Mean Time to Repair (MTTR): MTTR is the average time taken to repair or
restore the software system after a failure occurs. It includes the time required
for diagnosing the problem, implementing a fix, and bringing the system back to
a fully functional state. A lower MTTR indicates quicker recovery and better
reliability.

5. Mean Time to Failure (MTTF): MTTF represents the average time between the
start of operation of the software system and the occurrence of its first failure. It
is a measure of the system's reliability during its normal operating conditions.

6. Mean Residual Life (MRL): MRL measures the average remaining useful life of
the software system after a failure has occurred. It indicates the reliability of the
system after experiencing a failure and provides insights into the system's ability
to continue functioning without further failures.
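These metrics follow directly from a failure log. A minimal sketch with hypothetical data (times in hours): MTBF = uptime / number of failures, MTTR = downtime / number of failures, and availability = uptime / total time.

```python
# Hypothetical operating log: total observation window and repair time per failure.
total_hours = 1_000.0
repair_hours = [2.0, 4.0, 6.0]         # downtime caused by each failure
n_failures = len(repair_hours)

downtime = sum(repair_hours)           # 12 h of repair in total
uptime = total_hours - downtime        # 988 h of actual operation

failure_rate = n_failures / uptime     # failures per operating hour
mtbf = uptime / n_failures             # mean time between failures
mttr = downtime / n_failures           # mean time to repair
availability = uptime / total_hours    # fraction of time operational

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h, "
      f"availability = {availability:.1%}")
# MTBF = 329.3 h, MTTR = 4.0 h, availability = 98.8%
```

Note that in steady state availability can equivalently be written as MTBF / (MTBF + MTTR), which makes the trade-off explicit: reliability improves either by failing less often (higher MTBF) or by recovering faster (lower MTTR).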

QUALITY MANAGEMENT

Quality management is a systematic approach to ensure that products or services
meet customer expectations and comply with quality standards. It involves processes
and techniques to monitor, control, and improve quality throughout the entire product
lifecycle.

CMM : The Capability Maturity Model (CMM) is a framework for assessing and
improving software development processes. It consists of five levels, ranging from
Initial to Optimizing. At each level, specific process areas are defined, such as
requirements management, project planning, and defect prevention. Organizations
progress through the levels by implementing best practices and continuously improving
their processes. For example, at the Initial level, processes are ad hoc and
unpredictable, while at the Optimizing level, processes are refined and continuously
improved based on feedback and data analysis. The CMM helps organizations enhance
their process capability and deliver high-quality software products.

ISO 9000 : ISO 9000 is a set of international standards developed by the International
Organization for Standardization (ISO) for quality management systems. It provides
guidelines and criteria for organizations to establish, implement, maintain, and improve
their quality processes. The ISO 9000 standards focus on customer satisfaction, process
efficiency, and continuous improvement. Compliance with ISO 9000 demonstrates an
organization's commitment to delivering consistent and high-quality products or
services. The standards cover various aspects, including management responsibility,
resource management, product realization, and measurement, analysis, and
improvement. By adopting ISO 9000, organizations can establish a systematic
approach to quality management and enhance customer confidence in their products
or services.

Six Sigma : Six Sigma is a data-driven approach for process improvement that aims to
reduce defects and variations in products or services. It is based on the concept of
achieving a defect rate of 3.4 per million opportunities.

Six Sigma combines statistical analysis, problem-solving methodologies, and process
management techniques to identify and eliminate root causes of defects. The
methodology follows a structured approach known as DMAIC (Define, Measure,
Analyze, Improve, Control) to guide improvement projects.

By using data and statistical tools, organizations can identify process inefficiencies,
optimize performance, and enhance customer satisfaction. For example, in a
manufacturing setting, Six Sigma can be used to minimize product defects and
improve production efficiency by analyzing process data and implementing process
modifications. The goal of Six Sigma is to drive continuous improvement, reduce waste,
and enhance overall organizational performance.
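The 3.4-per-million figure above is a DPMO value (defects per million opportunities). A small sketch of the calculation; the inspection counts are hypothetical:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities: defects / total opportunities * 1e6."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical inspection data: 25 defects found across 5,000 units,
# each unit having 10 opportunities for a defect.
d = dpmo(defects=25, units=5_000, opportunities_per_unit=10)
print(f"DPMO = {d:.0f}")   # DPMO = 500
# Six Sigma quality targets DPMO <= 3.4 (under the conventional
# 1.5-sigma shift assumption), so this process still has far to go.
```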
