Se Unit 4 Notes 10 Mar 2024
UNIT-4
TESTING STRATEGIES
Testing is the unavoidable part of any responsible effort to develop a
software system.
The role of ‘Independent Test Group’ (ITG) is to remove the inherent
problems associated with letting the builder test the thing that has been
built.
Types of Testing :
a) Unit Testing : begins at the vortex of the spiral and concentrates on each
unit of the software as implemented in the source code.
b) Integration Testing : focuses on design and the construction of the
software architecture, uncovering errors associated with interfacing.
c) Validation Testing : requirements established as part of requirements
analysis are validated against the software that has been constructed.
d) System Testing : Here the software and other system elements are
tested as a whole.
Testing Strategy :
Consider the process from a procedural point of view.
Then validation testing provides final assurance that the software meets all
functional, behavioral, and performance requirements.
The last high-order testing step falls outside the boundary of software
engineering and into the broader context of computer system engineering.
System testing verifies that all the elements mesh properly and that
overall system function / performance is achieved.
Software Testing Steps :
3. Unit Testing :
‘Unit Testing’ focuses verification effort on the smallest unit of software
design – the software component or module.
The important control paths are tested to uncover errors within the
boundary of the module.
The unit test focuses on the internal processing logic and data structures
within the boundaries of a component.
The following errors are commonly found during unit testing :
a) Misunderstood or incorrect arithmetic precedence.
b) Mixed mode operations
c) Incorrect initialization
d) Precision inaccuracy
e) Incorrect symbolic representation of an expression.
Comparison and control flow are closely coupled to one another
(a change of flow frequently occurs after a comparison).
Unit Test :
Good design dictates that error conditions be anticipated and error-
handling paths set up to reroute or cleanly terminate processing when
an error does occur. The approach is called ‘anti-bugging’.
Be sure that you design tests to execute every error-handling path.
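As a sketch of that advice, the following Python fragment (the function and its error conditions are hypothetical illustrations, not from the notes) drives the normal path and every error-handling path:

```python
def parse_age(text):
    """Hypothetical component: converts user input to an age, rerouting
    bad input onto an error-handling path."""
    try:
        age = int(text)
    except ValueError:
        raise ValueError(f"not a number: {text!r}")
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

def test_error_paths():
    """Drive the normal path and each error-handling path."""
    assert parse_age("42") == 42
    for bad in ["abc", "-1", "200"]:
        try:
            parse_age(bad)
        except ValueError:
            pass   # error-handling path executed as designed
        else:
            raise AssertionError(f"error path not taken for {bad!r}")

test_error_paths()
```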
Unit Test Procedure :
Here, a ‘driver’ is a program that accepts test case data, passes such
data to the component (to be tested), and prints the relevant results.
‘Stubs’ serve to replace modules that are subordinate to (called by) the
component to be tested.
A stub (dummy sub program) uses the subordinate module’s interface,
may do minimal data manipulation, provides verification of entry, and
returns control to the module undergoing testing.
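The driver/stub arrangement can be sketched in Python as follows; all component names and behaviours here are hypothetical illustrations:

```python
def send_alert_stub(message):
    """Stub for the subordinate alerting module: verifies entry with a
    message, does minimal work, and returns control to the caller."""
    print(f"[stub] send_alert called with: {message}")
    return True

def check_threshold(reading, limit, send_alert=send_alert_stub):
    """Component under test; its subordinate is injectable so a stub
    can stand in for the real module."""
    if reading > limit:
        return send_alert(f"reading {reading} exceeds limit {limit}")
    return False

def driver():
    """Driver: accepts test-case data, passes it to the component, and
    prints the relevant results."""
    cases = [(5, 10, False), (15, 10, True)]
    for reading, limit, expected in cases:
        result = check_threshold(reading, limit)
        print(f"reading={reading} limit={limit} -> {result}")
        assert result == expected

driver()
```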
Unit Test Environment :
Test cases should uncover errors such as :
a) Comparison of different data types
In the diagram :
a) The module interface is tested to ensure that information
properly flows into and out of the program unit under test.
4. Integration Testing :
‘Integration Testing’ is a systematic technique for constructing the
software architecture while at the same time conducting tests to
uncover errors associated with interfacing.
The objective is to take unit tested components and build a program
structure that has been dictated by design.
Top-down Integration :
In the above figure, ‘depth-first integration’ integrates all
components on a major control path of the program structure.
For example, selecting the left-hand path, components M1, M2, M5
and M6 are integrated first.
a) The main control module is used as a test driver, and stubs are
substituted for all components directly subordinate to the main
control module.
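A minimal sketch of depth-first, top-down integration; the module names follow the figure, but their behaviour is hypothetical:

```python
def stub(name):
    """Return a stub: verifies entry by producing a recognizable marker."""
    def _stub(*args):
        return f"{name}-stub"
    return _stub

# Step 1: the main control module M1 acts as the test driver, with stubs
# substituted for all directly subordinate components.
modules = {"M2": stub("M2"), "M3": stub("M3"), "M4": stub("M4")}

def M1():
    return [modules["M2"](), modules["M3"](), modules["M4"]()]

assert M1() == ["M2-stub", "M3-stub", "M4-stub"]

# Step 2 (depth-first, left-hand path): replace the M2 stub with a real
# component, which in turn calls its own subordinate M5 via a stub.
def real_M2():
    return "M2:" + stub("M5")()

modules["M2"] = real_M2
assert M1() == ["M2:M5-stub", "M3-stub", "M4-stub"]
```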
5. Types of Testing :
a) Alpha Testing :
Alpha testing is conducted at the developer's site by representative
end users, close to the end of the software development life cycle.
b) Beta Testing :
The beta test is conducted at end-user sites.
‘Beta testing’ is the process of testing a software product in a real-
world environment, at the end user's site.
Beta testing is an opportunity for real users to use a product in a
production environment to uncover any bugs or issues before a
general release.
c) System Testing :
In system testing, the software and other system elements are tested
as a whole; the tests that follow (recovery, security, stress, and
performance) are each a type of system test.
d) Recovery Testing :
In general, computer-based systems must recover from faults and
resume processing within a pre-specified time.
A system must be ‘fault-tolerant’, i.e., processing faults must not
cause overall system function to cease.
‘Recovery Testing’ is a system test that forces the software to fail in a
variety of ways and verifies that recovery is properly performed.
If recovery is automatic, then re-initialization, check point
mechanisms, data recovery, and restart are evaluated for
correctness.
If recovery requires human intervention, the mean-time-to-repair
(MTTR) is evaluated to determine whether it is within acceptable
limits.
e) Security Testing :
‘Security Testing’ verifies that protection mechanisms built into a
system will, in fact, protect it from improper penetration.
Security Testing is a type of software testing that uncovers
vulnerabilities of the system and determines that the data and
resources of the system are protected from possible intruders.
It ensures that the software system and application are free from any
threats or risks that can cause a loss.
f) Stress Testing :
‘Stress Testing’ executes a system in a manner that demands
resources in abnormal quantity, frequency or volume.
For example,
a) Special tests may be designed that generate ten interrupts per
second, when one or two is the average rate.
b) Input data rates may be increased by an order of magnitude to
determine how input functions will respond.
c) Test cases that require maximum memory or other resources are
executed.
d) Test cases that may cause memory management problems are
designed.
e) Test cases that may cause excessive hunting for disk-resident data
are created.
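A minimal stress-test sketch, assuming a hypothetical queue-based event consumer; the nominal load is driven up by an order of magnitude, as in example (b) above:

```python
from collections import deque

def process_events(events, max_queue=1000):
    """Consume events one by one; fail loudly if the backlog ever
    exceeds the queue bound (the condition a stress test probes)."""
    queue = deque()
    processed = 0
    for event in events:
        queue.append(event)
        if len(queue) > max_queue:
            raise OverflowError("queue overflow under stress")
        queue.popleft()
        processed += 1
    return processed

# Nominal load, then a load an order of magnitude higher:
assert process_events(range(100)) == 100
assert process_events(range(1000)) == 1000
```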
g) Performance Testing :
‘Performance Testing’ is designed to test the run-time performance
of software within the context of an integrated system.
Performance Testing occurs throughout all steps in the testing
process.
Performance tests are often coupled with stress testing and usually
require both hardware and software instrumentation.
h) Black Box Testing :
‘Black Box Testing’ alludes to tests that are conducted at the software
interface.
In other words, Black box testing involves testing a system with no
prior knowledge of its internal workings.
The black box test considers only the external behavior of the
system; the internal workings of the software are not taken into
account.
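A black-box test exercises only inputs and outputs. As a sketch (the sort routine is a hypothetical stand-in for any unit under test):

```python
def sort_numbers(values):
    """Unit under test; its internals are invisible to the black-box tester."""
    return sorted(values)

# Test cases derived purely from the external specification:
assert sort_numbers([]) == []                  # empty input
assert sort_numbers([7]) == [7]                # single element
assert sort_numbers([3, 1, 2]) == [1, 2, 3]    # typical case
assert sort_numbers([2, 2, 1]) == [1, 2, 2]    # duplicates preserved
```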
6. Product Metrics :
Software Quality :
‘Software Quality’ is conformance to explicitly stated functional and
performance requirements, explicitly documented development
standards, and implicit characteristics that are expected of all
professionally developed software.
McCall’s Software Quality Factors :
b) ISO 9126 Quality Factors :
i) Functionality : The degree to which the software satisfies stated needs
as indicated by the following sub-attributes :
- suitability - accuracy - interoperability - compliance - security
ii) Reliability : The amount of time that the software is available for use
as indicated by the following sub-attributes :
- maturity - fault tolerance - recoverability
iii) Usability : The degree to which the software is easy to use as indica-
ted by the following sub-attributes :
- understandability - learnability - operability
iv) Efficiency : The degree to which the software makes optimal use of
system resources as indicated by the following sub-attributes :
- time behavior - resource behavior
v) Maintainability : The ease with which repair may be made to the
software as indicated by the following sub-attributes :
- analyzability - changeability - stability - testability
vi) Portability : The ease with which the software can be transposed from
one environment to another as indicated by the following
sub-attributes :
- adaptability - installability - conformance - replaceability
7. Function Based Metrics :
The ‘Function Point Metric’ (FP) can be used effectively as a means for
measuring the functionality delivered by a system.
Using historical data, the FP can then be used to :
a) Estimate the cost or effort required to design, code, and test
the software
The Fi ( i = 1 to 14) are ‘Value Adjustment Factors’ (VAF), based on
responses to the following fourteen questions :
a) Does the system require reliable backup and recovery ?
b) Are specialized data communications required to transfer information
to or from the application ?
c) Are there distributed processing functions ?
d) Is performance critical ?
e) Will the system run in an existing, heavily utilized operational
environment ?
f) Does the system require on-line data entry ?
g) Does the on-line data entry require the input transaction to be built
over multiple screens or operations ?
Each of the above questions is answered using a scale that ranges from 0
(not important or applicable) to 5 (absolutely essential).
To illustrate the use of the FP metric in this context, the following
example is considered, which is a simple analysis model representation.
Referring to the figure, a data flow diagram for a function within the
‘SafeHome’ software is represented.
The function manages :
- user interaction
Let us assume, the ∑ (Fi) = 46.
Based on the projected FP value derived from the analysis model, the
project team can estimate the overall implemented size of the
‘SafeHome’ user interaction function.
Assume that past data indicates that one FP translates into 60 lines of
code and that 12 FPs are produced per person-month of effort.
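Using those historical figures together with ∑(Fi) = 46 from above, and an assumed unadjusted count total of 50 (a hypothetical value for illustration), the estimate works out as:

```python
count_total = 50   # assumed unadjusted function point count (hypothetical)
sum_Fi = 46        # sum of value adjustment factors, from the text

# Standard FP formula: FP = count total x (0.65 + 0.01 x sum(Fi))
FP = count_total * (0.65 + 0.01 * sum_Fi)

# Historical figures from the text: 60 LOC per FP, 12 FP per person-month.
estimated_loc = FP * 60
estimated_effort = FP / 12

print(round(FP))                    # ~56 function points
print(round(estimated_loc))         # ~3330 lines of code
print(round(estimated_effort, 2))   # ~4.63 person-months
```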
8. Class Oriented Metrics – The CK Metrics Suite :
The class is the fundamental unit of an OO system.
So, measures and metrics for an individual class, the class hierarchy, and
class collaborations will be invaluable to a software engineer who must
assess design quality.
The class encapsulates operations and data.
The class is often a ‘parent’ class for subclasses that inherit its attributes
and operations.
The class often collaborates with other classes.
One of the most widely referenced sets of OO software metrics is the ‘CK
metrics suite’, which comprises six class-based design metrics for OO systems.
a) Weighted Methods per Class (WMC) :
Assume that n methods of complexity C1, C2,…,Cn are defined for the
class C.
The specific complexity metric that is chosen (eg. cyclomatic
complexity) should be normalized so that nominal complexity for a
method takes on a value of 1.0.
WMC = ∑ Ci ( i = 1 to n)
The number of methods and their complexity are reasonable
indicators of the amount of effort required to implement and test a
class.
Finally, as the number of methods grows for a given class, the class is
likely to become more and more application specific, limiting potential reuse.
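A minimal sketch of the WMC computation, with assumed cyclomatic-complexity values for a hypothetical class:

```python
def wmc(method_complexities):
    """WMC = sum of Ci for i = 1..n over a class's n methods."""
    return sum(method_complexities)

# Hypothetical class with five methods, normalized cyclomatic complexities:
complexities = [1.0, 2.0, 1.0, 3.0, 1.0]
print(wmc(complexities))   # 8.0
```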
b) Depth of the Inheritance Tree (DIT) :
This metric is ‘the maximum length from the node to the root of the
tree’.
In the following figure, the value of DIT for the class-hierarchy shown
is 4 (four).
As DIT grows, it is likely that lower-level classes will inherit many
methods.
This leads to potential difficulties when attempting to predict the
behavior of a class.
A deep class hierarchy (DIT is large) also leads to greater design
complexity.
On the positive side, large DIT values imply that many methods may
be reused.
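The DIT value can be computed by walking from a class up to the root of the tree. A sketch over a hypothetical four-level hierarchy that matches the DIT value of 4 quoted above:

```python
def dit(cls, parent):
    """Depth of inheritance tree: edges from cls up to the root."""
    depth = 0
    while cls in parent:
        cls = parent[cls]
        depth += 1
    return depth

# Hypothetical hierarchy: root <- A <- B <- C <- D (child -> parent map).
parent = {"A": "root", "B": "A", "C": "B", "D": "C"}
print(dit("D", parent))    # 4, matching the DIT value quoted above
```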
d) Coupling between object classes (CBO) :
CBO is the number of collaborations listed for a class; the CRC
model may therefore be used to determine the value for CBO.
High values of CBO also complicate modifications and the testing that
ensues when modifications are made.
9. Class Oriented Metrics – The MOOD Metrics Suite :
The MOOD metrics suite is a set of metrics for object-oriented design
that provides quantitative indicators for OO design characteristics.
Method Inheritance factor (MIF) :
The degree to which the class architecture of an OO system makes
use of inheritance for both methods (operations) and attributes
is defined as :
MIF = ∑Mi (Ci) / ∑ Ma (Ci)
Here, the summation occurs over i = 1 to Tc.
Tc is defined as the total number of classes in the architecture; Ci is a
class within the architecture and
Ma(Ci) = Md(Ci) + Mi(Ci)
where,
Ma(Ci) = the number of methods that can be invoked in
association with Ci.
Md(Ci) = the number of methods declared in the class Ci.
Mi(Ci) = the number of methods inherited.
The value of MIF provides an indication of the impact of inheritance on
the OO software.
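A sketch of the MIF computation from the definitions above, with assumed (Md, Mi) counts for three hypothetical classes:

```python
def mif(classes):
    """MIF = sum(Mi) / sum(Ma), with Ma = Md + Mi for each class.
    classes: list of (Md, Mi) pairs, one per class in the architecture."""
    inherited = sum(Mi for _, Mi in classes)
    available = sum(Md + Mi for Md, Mi in classes)
    return inherited / available

# Three hypothetical classes as (declared, inherited) method counts:
arch = [(4, 0), (2, 4), (1, 5)]
print(mif(arch))   # 9 / 16 = 0.5625
```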
Coupling Factor (CF) :
Coupling is an indication of the connections between elements of OO
design.
The MOOD metrics suite defines coupling as follows :
10. Component-Level Design Metrics :
a) Cohesion Metrics :
The following is a collection of metrics that provide an indication of
the cohesiveness of a module :
i) Data Slice : a backward walk through a module that looks for data
values that affect the state of the module when the walk began.
ii) Data Tokens : The variables defined for a module can be viewed as
the data tokens for the module.
iii) Glue Tokens : The set of data tokens that lie on one or more data slices.
iv) Super Glue Tokens : These data tokens are common to every data
slice in a module.
v) Stickiness : The relative stickiness of a glue token is directly
proportional to the number of data slices that it binds.
b) Coupling Metrics :
e) Operation Complexity Metrics :
Halstead's estimate of program length, where n1 and n2 are the
numbers of distinct operators and operands respectively :
N = n1 log2 n1 + n2 log2 n2
Here, V will vary with programming language and represents the volume
of information required to specify a program.
The volume ratio L is defined as the ratio of the volume of the most
compact form of a program to the volume of the actual program :
L = (2 / n1) x (n2 / N2)
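A worked sketch of these Halstead quantities with assumed counts; the volume formula V = N log2(n1 + n2) is Halstead's standard definition, and the n1, n2, N1, N2 values below are hypothetical:

```python
from math import log2

# Assumed (hypothetical) counts:
n1, n2 = 10, 15    # distinct operators, distinct operands
N1, N2 = 40, 60    # total operator and operand occurrences

N_est = n1 * log2(n1) + n2 * log2(n2)   # estimated program length
V = (N1 + N2) * log2(n1 + n2)           # program volume (Halstead)
L = (2 / n1) * (n2 / N2)                # volume ratio from the text

print(round(N_est, 1))   # ~91.8
print(round(V, 1))       # ~464.4
print(L)                 # 0.05
```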
12. Metrics for Maintenance :
IEEE Std. suggests a ‘software maturity index’ (SMI) that provides an
indication of the stability of a software product.
The following information is determined :
* * * * *