SOFTWARE ENGINEERING

UNIT-4
TESTING STRATEGIES

1. A Strategic Approach to Software Testing :

The general characteristics of software testing are :

a) To perform effective testing, a software team should conduct
effective formal technical reviews. These remove many errors before
testing begins.

b) Testing begins at the component level and works 'outward' toward
the integration of the entire computer-based system.

c) Different testing techniques are appropriate at different points
in time.

d) Testing is conducted by the developer of the software and by an
Independent Test Group (ITG).

e) Testing and debugging are different activities, but debugging must
be accommodated in any testing strategy.

2. Verification and Validation :

Software testing is one element of a broader topic that is often
referred to as 'Verification' and 'Validation' (V&V).
'Verification' refers to the set of activities that ensure that
software correctly implements a specific function.
'Validation' refers to a different set of activities that ensure that
the software that has been built is traceable to customer requirements.
Verification : Are we building the product right ?
Validation : Are we building the right product ?

Testing is an unavoidable part of any responsible effort to develop a
software system.
The role of ‘Independent Test Group’ (ITG) is to remove the inherent
problems associated with letting the builder test the thing that has been
built.
Types of Testing :
a) Unit Testing : begins at the vertex of the spiral and concentrates
on each unit of the software as implemented in source code.

b) Integration Testing : progresses by moving outward along the
spiral. Here the focus is on design and the construction of the
software architecture.

c) Validation Testing : requirements established as part of software
requirements analysis are validated against the software that has
been constructed.

d) System Testing : the software and other system elements are tested
as a whole.

Testing Strategy :

Consider the process from a procedural point of view.

Testing within the context of software engineering is actually a
series of four steps that are implemented sequentially.

Initially, tests focus on each component individually, ensuring that
it functions properly as a unit.

Next, components must be assembled or integrated to form the complete
software package.

Then validation testing provides final assurance that the software
meets all functional, behavioral, and performance requirements.

The last high-order testing step falls outside the boundary of
software engineering and into the broader context of computer system
engineering.

Software, once validated, must be combined with other system elements
(e.g. hardware, people, databases).

System testing verifies that all the elements mesh properly and that
overall system function / performance is achieved.

Software Testing Steps :

3. Unit Testing :
‘Unit Testing’ focuses verification effort on the smallest unit of software
design – the software component or module.
The important control paths are tested to uncover errors within the
boundary of the module.
The unit test focuses on the internal processing logic and data structures
within the boundaries of a component.
The following errors are commonly found during unit testing :
a) Misunderstood or incorrect arithmetic precedence
b) Mixed mode operations
c) Incorrect initialization
d) Precision inaccuracy
e) Incorrect symbolic representation of an expression.
Comparison and control flow are closely coupled to one another.
Unit Test :

Good design dictates that error conditions be anticipated and error-
handling paths set up to reroute or cleanly terminate processing when
an error does occur. The approach is called ‘anti-bugging’.
Be sure that you design tests to execute every error-handling path.
Unit Test Procedure :
Here, a ‘driver’ is a program that accepts test case data, passes such
data to the component (to be tested), and prints the relevant results.
‘Stubs’ serve to replace modules that are subordinate to (called by) the
component to be tested.
A stub (dummy sub program) uses the subordinate module’s interface,
may do minimal data manipulation, provides verification of entry, and
returns control to the module undergoing testing.
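A minimal sketch of this arrangement in Python (the component, driver,
and stub names here are hypothetical, used only for illustration) :

    # Component under test: depends on a subordinate module that is
    # not yet available, so a stub is passed in its place.
    def compute_total(price, qty, tax_lookup):
        subtotal = price * qty
        return subtotal + subtotal * tax_lookup(subtotal)

    # Stub: uses the subordinate module's interface, verifies entry,
    # does minimal data manipulation, and returns control.
    def tax_lookup_stub(subtotal):
        print("stub entered with subtotal =", subtotal)
        return 0.10                      # fixed dummy tax rate

    # Driver: accepts test-case data, passes it to the component,
    # and prints the relevant results.
    def driver():
        for price, qty, expected in [(10.0, 2, 22.0), (5.0, 0, 0.0)]:
            result = compute_total(price, qty, tax_lookup_stub)
            print(price, qty, "->", result, "expected", expected)

    driver()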
Unit Test Environment :

Test cases should uncover errors such as :
a) Comparison of different data types
b) Incorrect logical operators or precedence
c) Expectation of equality when precision error makes equality
unlikely
d) Incorrect comparison of variables
e) Improper or nonexistent loop termination
f) Failure to exit when divergent iteration is encountered
g) Improperly modified loop variables

In the diagram :
a) The module interface is tested to ensure that information properly
flows into and out of the program unit under test.

b) 'Local Data Structures' are examined to ensure that data stored
temporarily maintains its integrity during all steps in an
algorithm's execution.

c) 'Boundary Conditions' are tested to ensure that the module operates
properly at boundaries established to limit or restrict processing.

d) All independent paths through the control structure are exercised
to ensure that all statements in a module have been executed at least
once, and are tested for errors.

e) Finally, all error-handling paths are tested.

Note : Unit testing is simplified when a component with high cohesion
is designed.

4. Integration Testing :
'Integration Testing' is a systematic technique for constructing the
software architecture while at the same time conducting tests to
uncover errors associated with interfacing.
The objective is to take unit-tested components and build a program
structure that has been dictated by design.

Incremental Integration : the program is constructed and tested in
small increments, where errors are easier to isolate and correct.
Here, interfaces are more likely to be tested completely, and a
systematic test approach may be applied.

Types of Incremental Integration :

a) Top-down Integration : an incremental approach to construction of
the software architecture. Modules are integrated by moving downward
through the control hierarchy, beginning with the main control module.

Top-down Integration :

In the above figure, 'depth-first integration' integrates all
components on a major control path of the program structure.
For example, selecting the left-hand path, components M1, M2, M5 and
M6 are integrated first. Then the central and right-hand paths are
built.

The steps in the top-down integration process :

a) The main control module is used as a test driver, and stubs are
substituted for all components directly subordinate to the main
control module.

b) Depending on the integration approach, subordinate stubs are
replaced one at a time with actual components.

c) Tests are conducted as each component is integrated.

d) On completion of each set of tests, another stub is replaced with
the real component.

e) Regression testing may be conducted to ensure that new errors have
not been introduced.

b) Bottom-Up Integration : begins construction and testing with atomic
modules. Because components are integrated from the bottom up,
processing required for components subordinate to a given level is
always available and the need for stubs is eliminated.

The steps in the bottom-up integration process :

a) Low-level components are combined into clusters that perform a
specific software sub-function.

b) A driver (a control program for testing) is written to coordinate
test-case input and output.

c) The cluster is tested.

d) Drivers are removed and clusters are combined, moving upward in
the program structure.
Bottom-Up Integration :

Regression Testing : is the re-execution of some subset of tests that
have already been conducted, to ensure that changes have not
propagated unintended side effects.
Regression testing may be conducted manually, by re-executing a
subset of all test cases, or using automated capture/playback tools.
Capture/playback tools enable the software engineer to capture test
cases and results for subsequent playback and comparison.
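A minimal sketch of manual regression re-execution in Python (the
module and test names are hypothetical) :

    # The module just modified; its previously passing tests are
    # re-executed to check for unintended side effects.
    def add(a, b):
        return a + b

    regression_suite = {
        "add_positive": lambda: add(2, 3) == 5,
        "add_zero":     lambda: add(0, 7) == 7,
        "add_negative": lambda: add(-1, 1) == 0,
    }

    # Re-run only the subset judged relevant to the change.
    for name in ["add_positive", "add_negative"]:
        print(name, "PASS" if regression_suite[name]() else "FAIL")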
Smoke Testing : is an integration testing approach that is commonly
used when software products are being developed.
The benefits of ‘smoke testing’ :
a) Integration risk is minimized.

b) The quality of the end product is improved.

c) Error diagnosis and correction are simplified.

d) Progress is easier to assess.

5. Types of Testing :

a) Alpha Testing :

The alpha test is conducted at the developer's site by end users.
Alpha testing is a type of software testing performed to identify bugs
before releasing the product to real users or to the public.
Alpha testing is done close to the end of the software development
life cycle, before the product is released for beta testing.

b) Beta Testing :
The beta test is conducted at end-user sites.
'Beta testing' is the process of testing a software product in a
real-world environment, at the end users' site.
Beta testing is an opportunity for real users to use a product in a
production environment to uncover any bugs or issues before a general
release.
c) System Testing :

'System Testing' is actually a series of tests whose primary purpose
is to fully exercise the computer-based system.

d) Recovery Testing :
In general, computer-based systems must recover from faults and
resume processing within a pre-specified time.
A system must be 'fault-tolerant', i.e., processing faults must not
cause overall system function to cease.
'Recovery Testing' is a system test that forces the software to fail
in a variety of ways and verifies that recovery is properly performed.
If recovery is automatic, then re-initialization, checkpoint
mechanisms, data recovery, and restart are evaluated for correctness.

If recovery requires human intervention, the mean-time-to-repair
(MTTR) is evaluated to determine whether it is within acceptable
limits.
e) Security Testing :
'Security Testing' verifies that protection mechanisms built into a
system will, in fact, protect it from improper penetration.
Security testing is a type of software testing that uncovers
vulnerabilities of the system and determines that the data and
resources of the system are protected from possible intruders.
It ensures that the software system and application are free from any
threats or risks that can cause a loss.
f) Stress Testing :
'Stress Testing' executes a system in a manner that demands resources
in abnormal quantity, frequency, or volume.
For example :
a) Special tests may be designed that generate ten interrupts per
second, when one or two is the average rate.
b) Input data rates may be increased by an order of magnitude to
determine how input functions will respond.
c) Test cases that require maximum memory or other resources are
executed.
d) Test cases that may cause memory management problems are designed.
e) Test cases that may cause excessive hunting for disk-resident data
are created.

g) Performance Testing :
‘Performance Testing’ is designed to test the run-time performance
of software within the context of an integrated system.
Performance Testing occurs throughout all steps in the testing
process.
Performance tests are often coupled with stress testing and usually
require both hardware and software instrumentation.

h) Black Box Testing :
'Black Box Testing' alludes to tests that are conducted at the
software interface.
In other words, black box testing involves testing a system with no
prior knowledge of its internal workings.
A black box test considers only the external behavior of the system;
the internal workings of the software are not taken into account.

i) White Box Testing :

'White Box Testing' of software is predicated on close examination of
procedural detail.
The logical paths through the software and collaborations between
components are tested by providing test cases that exercise specific
sets of conditions and / or loops.
'White box testing' is a form of application testing that provides
the tester with complete knowledge of the application being tested,
including access to source code and design documents.

6. Product Metrics :
Software Quality :
‘Software Quality’ is conformance to explicitly stated functional and
performance requirements, explicitly documented development
standards, and implicit characteristics that are expected of all
professionally developed software.

a) McCall's Quality Factors :

The factors that affect quality can be categorized into the following
two groups :
- Factors that can be directly measured
(e.g. defects uncovered during testing)
- Factors that can be measured only indirectly
(e.g. usability and maintainability)

McCall's quality factors focus on three important aspects of a
software product :
i) Its operational characteristics
ii) Its ability to undergo change
iii) Its adaptability to new environments

Factors connected to this model :

Correctness : The extent to which a program satisfies its
specification and fulfills the customer's mission objectives.
Reliability : The extent to which a program can be expected to perform
its intended function with required precision.
Efficiency : The amount of computing resources and code required by a
program to perform its function.
Integrity : The extent to which access to software or data by
unauthorized persons can be controlled.
Usability : The effort required to learn, operate, prepare input for,
and interpret output of a program.

McCall's Software Quality Factors :

Maintainability : The effort required to locate and fix an error in a
program.
Flexibility : The effort required to modify an operational program.
Testability : The effort required to test a program to ensure that it
performs its intended function.
Portability : The effort required to transfer the program from one
hardware and / or software system environment to another.
Reusability : The extent to which a program can be reused in other
applications.
Interoperability : The effort required to couple one system to
another.
"A product's quality is a function of how much it changes the world
for the better."

b) ISO 9126 Quality Factors :

The ISO 9126 standard was developed in an attempt to identify quality
attributes for computer software.
This standard identifies six key quality attributes :

i) Functionality : The degree to which the software satisfies stated
needs, as indicated by the following sub-attributes :
- suitability - accuracy - interoperability
- compliance - security

ii) Reliability : The amount of time that the software is available
for use, as indicated by the following sub-attributes :
- maturity - fault tolerance - recoverability

iii) Usability : The degree to which the software is easy to use, as
indicated by the following sub-attributes :
- understandability - learnability - operability

iv) Efficiency : The degree to which the software makes optimal use of
system resources, as indicated by the following sub-attributes :
- time behavior - resource behavior

v) Maintainability : The ease with which repair may be made to the
software, as indicated by the following sub-attributes :
- analyzability - changeability
- stability - testability

vi) Portability : The ease with which the software can be transported
from one environment to another, as indicated by the following
sub-attributes :
- adaptability - installability
- conformance - replaceability

7. Function Based Metrics :
The 'Function Point' (FP) metric can be used effectively as a means
for measuring the functionality delivered by a system.
Using historical data, the FP can then be used to :
a) Estimate the cost or effort required to design, code, and test the
software

b) Predict the number of errors that will be encountered during
testing

c) Forecast the number of components and / or the number of projected
source lines in the implemented system.

Function points are derived using an empirical relationship based on
countable measures of the software's information domain and
assessments of software complexity.
Information domain values are defined as follows :
a) Number of External Inputs (EIs) :

Each 'external input' originates from a user or is transmitted from
another application and provides distinct application-oriented data
or control information.

b) Number of External Outputs (EOs) :

Each 'external output' is derived within the application and provides
information to the user.

c) Number of External Inquiries (EQs) :

An 'external inquiry' is defined as an on-line input that results in
the generation of some immediate software response in the form of an
on-line output.

d) Number of Internal Logical Files (ILFs) :

Each 'internal logical file' is a logical grouping of data that
resides within the application's boundary and is maintained via
external inputs.
e) Number of External Interface Files (EIFs) :

Each 'external interface file' is a logical grouping of data that
resides external to the application but provides data that may be of
use to the application.

With these values, the figure below is completed and a complexity
value is associated with each count.
Organizations that use function point methods develop criteria for
determining whether a particular entry is simple, average, or complex.
Here, the determination of complexity is somewhat subjective.
To compute function points (FP), the following equation is used :

FP = count total x [ 0.65 + 0.01 x Σ(Fi) ]     … (1)

Here, 'count total' is the sum of all FP entries obtained in the
figure below.
Computing Function Points :

The Fi ( i = 1 to 14) are 'Value Adjustment Factors' (VAF), based on
responses to the following fourteen questions :
a) Does the system require reliable backup and recovery ?
b) Are specialized data communications required to transfer
information to or from the application ?
c) Are there distributed processing functions ?
d) Is performance critical ?
e) Will the system run in an existing, heavily utilized operational
environment ?
f) Does the system require on-line data entry ?
g) Does the on-line data entry require the input transaction to be
built over multiple screens or operations ?
h) Are the ILFs updated on-line ?
i) Are the inputs, outputs, files, or inquiries complex ?
j) Is the internal processing complex ?
k) Is the code designed to be reusable ?
l) Are conversion and installation included in the design ?
m) Is the system designed for multiple installations in different
organizations ?
n) Is the application designed to facilitate change and for ease of
use by the user ?

Each of the above questions is answered using a scale that ranges
from 0 (not important or applicable) to 5 (absolutely essential).
To illustrate the use of the FP metric in this context, the following
example is considered, which is a simple analysis model
representation.
Referring to the figure, a data flow diagram for a function within
the 'SafeHome' software is represented.

The function manages :
- user interaction
- accepting a user password to activate or deactivate the system
- allowing inquiries on the status of security zones and various
security sensors.

The DFD is evaluated to determine a set of key information domain
measures required for computation of the function point metric :
Three external inputs : password, panic button, activate/deactivate
Two external inquiries : zone inquiry, sensor inquiry
One ILF : system configuration file
Two external outputs : messages, sensor status
Four EIFs : test sensor, zone setting, activate/deactivate, alarm
alert
Data Flow model for ‘SafeHome’ software :

Let us assume that Σ(Fi) = 46 and that the weighted count total from
the figure is 50.

So, FP = 50 x [ 0.65 + (0.01 x 46) ] = 55.5 ≈ 56.
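A small Python sketch of equation (1) applied to these counts; the
count total of 50 is taken from the figure, and the fourteen
individual Fi answers below are hypothetical values chosen to sum
to 46 :

    # FP = count_total x [ 0.65 + 0.01 x sum(Fi) ]
    def function_points(count_total, fi):
        assert len(fi) == 14             # fourteen VAF answers, each 0..5
        return count_total * (0.65 + 0.01 * sum(fi))

    fi = [4, 3, 3, 4, 3, 4, 3, 3, 3, 3, 3, 3, 3, 4]   # sums to 46
    print(round(function_points(50, fi)))              # -> 56 (55.5 rounded)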

Computing Function Points :

Based on the projected FP value derived from the analysis model, the
project team can estimate the overall implemented size of the
'SafeHome' user interaction function.

Assume that past data indicate that one FP translates into 60 lines
of code and that 12 FPs are produced per person-month of effort.

These historical data provide the project manager with important
planning information that is based on the analysis model rather than
preliminary estimates.

Assume further that past projects have found an average of three
errors per function point during analysis and design reviews and four
errors per function point during unit and integration testing.

These data can help software engineers assess the completeness of
their review and testing activities (see the worked estimate below).
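A worked version of these planning estimates in Python, using the FP
value and the historical ratios just cited :

    fp = 56
    print("estimated size   :", fp * 60, "LOC")              # 3360 LOC
    print("estimated effort :", round(fp / 12, 1), "p-m")    # ~4.7 person-months
    print("expected errors  :", fp * (3 + 4))                # 392 in reviews + testing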

8. Class Oriented Metrics – The CK Metrics Suite :
The class is the fundamental unit of an OO system.
So, measures and metrics for an individual class, the class hierarchy,
and class collaborations will be invaluable to a software engineer
who must assess design quality.
The class encapsulates operations and data.
The class is often the 'parent' class for subclasses that inherit its
attributes and operations.
The class often collaborates with other classes.
One of the most widely referenced sets of OO software metrics is the
'CK metrics suite', which defines six class-based design metrics for
OO systems.
a) Weighted Methods per Class (WMC) :
Assume that n methods of complexity C1, C2, …, Cn are defined for the
class C.
The specific complexity metric that is chosen (e.g. cyclomatic
complexity) should be normalized so that nominal complexity for a
method takes on a value of 1.0.

WMC = Σ Ci    ( i = 1 to n)

The number of methods and their complexity are reasonable indicators
of the amount of effort required to implement and test a class.
Finally, as the number of methods grows for a given class, it is
likely to become more and more application specific.
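A minimal Python sketch of WMC, assuming the normalized method
complexities have already been measured (the values below are
hypothetical) :

    # WMC = sum of the normalized complexities of a class's n methods.
    def wmc(method_complexities):
        return sum(method_complexities)

    print(wmc([1.0, 2.0, 1.0, 3.0]))     # WMC = 7.0 for a four-method class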
b) Depth of the Inheritance Tree (DIT) :
This metric is 'the maximum length from the node to the root of the
tree'.
In the following figure, the value of DIT for the class hierarchy
shown is 4 (four).
As DIT grows, it is likely that lower-level classes will inherit many
methods. This leads to potential difficulties when attempting to
predict the behavior of a class.
A deep class hierarchy (large DIT) also leads to greater design
complexity.
On the positive side, large DIT values imply that many methods may be
reused.

c) Number of Children (NOC) :

The subclasses that are immediately subordinate to a class in the
class hierarchy are termed its 'children'.
In the above diagram, class C2 has three children – C21, C22 and C23.
As the number of children grows, reuse increases; but as NOC
increases, the abstraction represented by the parent class can be
diluted if some of the children are not appropriate members of the
parent class.
As NOC increases, the amount of testing will also increase.

d) Coupling Between Object classes (CBO) :

The CRC model may be used to determine the value of CBO.
In essence, CBO is the number of collaborations listed for a class on
its CRC index card.
As CBO increases, it is likely that the reusability of a class will
decrease.
High values of CBO also complicate modifications and the testing that
ensues when modifications are made.

e) Response For a Class (RFC) :

The response set of a class is 'a set of methods that can potentially
be executed in response to a message received by an object of that
class'.
RFC is the number of methods in the response set.
As RFC increases, the effort required for testing also increases
because the test sequence grows.
It also follows that, as RFC increases, the overall design complexity
of the class increases.
f) Lack of Cohesion in Methods (LCOM) :
Each method within a class, C, accesses one or more attributes (also
called 'instance variables').
LCOM is the number of methods that access one or more of the same
attributes.
If no methods access the same attributes, then LCOM = 0.
To illustrate the case where LCOM ≠ 0, consider a class with six
methods.
Four of the methods have one or more attributes in common.
Therefore, LCOM = 4.
If LCOM is high, methods may be coupled to one another via
attributes. This increases the complexity of the class design.
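A sketch of the counting rule used above : a method contributes to
LCOM when it shares at least one attribute with some other method of
the class (method and attribute names are hypothetical) :

    def lcom(attrs_by_method):
        count = 0
        for name, attrs in attrs_by_method.items():
            # attributes touched by every *other* method of the class
            others = set()
            for other, a in attrs_by_method.items():
                if other != name:
                    others |= a
            if attrs & others:           # shares an attribute -> counted
                count += 1
        return count

    # Six methods; four of them share the attribute 'x', so LCOM = 4.
    print(lcom({"m1": {"x"}, "m2": {"x", "y"}, "m3": {"x"},
                "m4": {"x", "z"}, "m5": {"p"}, "m6": {"q"}}))   # -> 4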

9. Class Oriented Metrics – The MOOD Metrics Suite :
The MOOD metrics suite is a set of metrics for object-oriented design
that provides quantitative indicators for OO design characteristics.
Method Inheritance Factor (MIF) :
The degree to which the class architecture of an OO system makes use
of inheritance for both methods (operations) and attributes is
defined as :

MIF = Σ Mi(Ci) / Σ Ma(Ci)

Here, the summation occurs over i = 1 to Tc.
Tc is defined as the total number of classes in the architecture, Ci
is a class within the architecture, and

Ma(Ci) = Md(Ci) + Mi(Ci)

where
Ma(Ci) = the number of methods that can be invoked in association
with Ci
Md(Ci) = the number of methods declared in the class Ci
Mi(Ci) = the number of methods inherited by Ci

The value of MIF provides an indication of the impact of inheritance
on the OO software.
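A small sketch of MIF for a hypothetical three-class architecture
(the class names and method counts are invented for illustration) :

    classes = {                  # class : (Md declared, Mi inherited)
        "Sensor":     (4, 0),
        "ZoneSensor": (2, 4),
        "FireSensor": (3, 4),
    }
    inherited = sum(mi for _, mi in classes.values())          # 8
    available = sum(md + mi for md, mi in classes.values())    # 17
    print(round(inherited / available, 2))                     # MIF = 0.47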
Coupling Factor (CF) :
Coupling is an indication of the connections between elements of the
OO design.
The MOOD metrics suite defines coupling as follows :

CF = Σi Σj is_client (Ci , Cj) / (Tc² – Tc)

Here, the summations occur over i = 1 to Tc and j = 1 to Tc.

The function is_client(Ci , Cj) = 1 if and only if a relationship
exists between the client class Cc and the server class Cs, and
Cc ≠ Cs; it is 0 otherwise.
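A sketch of CF for a hypothetical architecture; the 'uses'
relationships below stand in for the is_client function :

    classes = ["Panel", "Sensor", "Alarm"]
    uses = {("Panel", "Sensor"), ("Panel", "Alarm"), ("Alarm", "Sensor")}

    tc = len(classes)
    couplings = sum(1 for ci in classes for cj in classes
                    if ci != cj and (ci, cj) in uses)
    print(couplings / (tc**2 - tc))      # 3 of 6 possible pairs -> CF = 0.5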

10. Component-Level Design Metrics :

a) Cohesion Metrics :
The following is a collection of metrics that provide an indication
of the cohesiveness of a module (see the sketch after this list) :
i) Data Slice : a backward walk through a module that looks for data
values that affect the state of the module when the walk began.
ii) Data Tokens : the variables defined for a module can be defined
as data tokens for the module.
iii) Glue Tokens : the set of data tokens that lie on one or more
data slices.
iv) Super-Glue Tokens : the data tokens that are common to every data
slice in a module.
v) Stickiness : the relative stickiness of a glue token is directly
proportional to the number of data slices that it binds.
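A simplified sketch of the glue-token idea, following the Bieman–Ott
definitions (glue tokens lie on more than one slice, super-glue
tokens on every slice); the two slices below are hypothetical :

    slices = [
        {"n", "i", "total"},     # data tokens on the slice for 'total'
        {"n", "i", "count"},     # data tokens on the slice for 'count'
    ]
    tokens = set().union(*slices)
    glue       = {t for t in tokens if sum(t in s for s in slices) > 1}
    super_glue = {t for t in tokens if all(t in s for s in slices)}
    print(sorted(glue), sorted(super_glue))   # ['i', 'n'] ['i', 'n']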
b) Coupling Metrics :

Module coupling provides an indication of the 'connectedness' of a
module to other modules, global data, and the outside environment.

For data and control flow coupling :
Di : number of input data parameters
Ci : number of input control parameters
Do : number of output data parameters
Co : number of output control parameters

For global coupling :
Gd : number of global variables used as data
Gc : number of global variables used as control

For environmental coupling :
W : number of modules called
R : number of modules calling the module under consideration
Using these measures, a module coupling factor Mc is defined as
follows :

Mc = k / M

where k is a proportionality constant and

M = Di + (A x Ci) + Do + (B x Co) + Gd + (C x Gc) + W + R

The values of k, A, B, and C must be derived empirically.
As the value of Mc increases, the overall module coupling decreases.
A revised coupling metric, in which larger values indicate greater
coupling, is :

C = 1 – Mc
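A sketch of Mc with k and the weights A, B, C all set to 1 (the text
notes these must be derived empirically; the counts are hypothetical):

    def module_coupling(di, ci, do, co, gd, gc, w, r, k=1, A=1, B=1, C=1):
        m = di + A * ci + do + B * co + gd + C * gc + w + r
        return k / m

    mc = module_coupling(di=3, ci=1, do=2, co=1, gd=0, gc=0, w=2, r=1)
    print(mc, 1 - mc)                    # Mc = 0.1, revised metric C = 0.9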
c) Complexity Metrics :
Complexity metrics can be used to predict critical information about
reliability and maintainability of software systems from automatic
analysis of source code.
Complexity metrics also provide feedback during the software
project to help control the design activity.
During testing and maintenance, they provide detailed information
about software modules to help pinpoint areas of potential
instability.
d) Operation-Oriented Metrics :

There are three simple metrics for OO operations :

Average Operation Size (OSavg) :

Lines of code (LOC) is generally used as an indicator of operation
size.
In addition, the number of messages sent by the operation provides an
alternative measure of operation size.
As the number of messages sent by a single operation increases, it is
likely that responsibilities have not been well allocated within a
class.
e) Operation Complexity (OC) :

The complexity of an operation can be computed using any of the
complexity metrics proposed for conventional software.
Because operations should be limited to a specific responsibility,
the designer should strive to keep OC as low as possible.

f) Average Number of Parameters per Operation (NPavg) :

The larger the number of operation parameters, the more complex the
collaboration between objects.
In general, NPavg should be kept as low as possible.


11. Metrics for Source Code :

Quantitative laws can be assigned to the development of computer
software, using a set of primitive measures that may be derived after
code is generated or estimated once design is complete :

n1 : the number of distinct operators that appear in a program
n2 : the number of distinct operands that appear in a program
N1 : the total number of operator occurrences
N2 : the total number of operand occurrences

These primitive measures are used to develop expressions for the
overall program length, the potential minimum volume for an
algorithm, the actual volume, the program level, the language level,
and other features such as development effort, development time, and
even the projected number of faults in the software.

Here, the length N can be estimated as :

N = n1 log2 n1 + n2 log2 n2

and the program volume may be defined as :

V = N log2 (n1 + n2)

Here, V will vary with programming language and represents the volume
of information required to specify a program.

The volume ratio L is defined as the ratio of the volume of the most
compact form of a program to the volume of the actual program.

In actuality, L must always be less than 1.

In terms of primitive measures, the volume ratio may be expressed as :

L = (2 / n1) x (n2 / N2)
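A sketch of these estimates in Python for hypothetical counts
(n1 = 10 distinct operators, n2 = 8 distinct operands, N2 = 30
operand occurrences) :

    from math import log2

    n1, n2, N2 = 10, 8, 30
    N = n1 * log2(n1) + n2 * log2(n2)    # estimated program length
    V = N * log2(n1 + n2)                # program volume
    L = (2 / n1) * (n2 / N2)             # volume ratio
    print(round(N), round(V), round(L, 3))   # 57 239 0.053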
12. Metrics for Maintenance :
An IEEE standard suggests a 'Software Maturity Index' (SMI) that
provides an indication of the stability of a software product.
The following information is determined :

MT = the number of modules in the current release
Fc = the number of modules in the current release that have been
changed
Fa = the number of modules in the current release that have been
added
Fd = the number of modules from the preceding release that were
deleted in the current release

The software maturity index is computed in the following manner :

SMI = [ MT – (Fa + Fc + Fd) ] / MT

As SMI approaches 1.0, the product begins to stabilize.
SMI may also be used as a metric for planning software maintenance
activities.
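A one-line computation of SMI for a hypothetical release with 120
modules, of which 8 were changed, 4 added, and 2 deleted :

    def smi(mt, fa, fc, fd):
        return (mt - (fa + fc + fd)) / mt

    print(round(smi(mt=120, fa=4, fc=8, fd=2), 3))   # -> 0.883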

* * * * *
