Topic 4 - Testing Through The Lifecycle - Part 1

Testing Through the Life Cycle

Outline
• Software Development models
• Test levels
• Test types
• Maintenance testing
Principles for Good Testing
• Testing does not come at the end; it is integrated throughout the model!
• Only test execution comes at the end – everything else happens as early as possible.
• The earlier one develops test cases, the earlier one can find bugs or potential bugs.
V-Model

[Diagram: the V-model. The left-hand (development) side descends from User/Business Requirements through System Requirements, Technical Specification and Program Specification to Coding; the right-hand (test) side ascends through Unit Test, Integration Test, System Test and Acceptance Test. A test plan is produced at each development level for the corresponding test level.]
Validation

• The aim at each test level is to prove whether the development results meet the requirements that are specified for, or relevant to, that level.
• Checking the development results against the original requirements is called validation.
• While validating, testers evaluate whether a (partial) product can actually solve the specified task and is therefore suitable and useful for its purpose.
• Validation examines whether the product is useful in the context of its intended use.
• In short: checking whether we are building the right product.
Verification

• Besides validation testing, the V-model also requires so-called verification testing.
• Verification, unlike validation, is tied to a single development phase and has to prove the correctness and completeness of that phase's results relative to its direct specification (the phase's input documents).
• Verification examines whether the specifications have been implemented correctly, regardless of the intended purpose or use of the product.
• In short: checking whether we are building the product right.
V-Model
The benefits of the V Model include:
• The testing phases are given the same level of management
attention and commitment as the corresponding development
phases
• The outputs from the development phases are reviewed by the
testing team to ensure their testability
• Verification and validation (and early test design) can be carried
out during the development of the software work products
• The early planning and preliminary design of tests provides
additional review comments on the outputs from the
development phase
How to Apply the V-Model
• Run a V for every increment / release.
• Run a V on every prototype.
• Make test checklists as early as possible.
• Make test data during coding.
• Keep the test material, update and automate it.
– Update it for every release as you learn more about
defects.
• Regression test for every release!
(Run a smoke test for every Build)
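To make the "smoke test for every build" idea concrete, here is a minimal JUnit 5 sketch; OrderService and placeOrder are hypothetical names invented for illustration, not part of these slides:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// A smoke test: a handful of fast, shallow checks run against every build.
public class BuildSmokeTest {

    // Hypothetical stand-in for the system under test, so the sketch compiles.
    static class OrderService {
        boolean placeOrder(String sku, int quantity) {
            return sku != null && !sku.isEmpty() && quantity > 0;
        }
    }

    @Test
    public void serviceIsConstructible() {
        assertNotNull(new OrderService());
    }

    @Test
    public void coreHappyPathWorks() {
        // One representative call; a failure here should block the build.
        assertTrue(new OrderService().placeOrder("SKU-1", 1));
    }
}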
The Real Way of Working with Test
Principles for Good Testing in All Life
Cycle Models
• Every development activity has its corresponding test activity.
• Every test level has a specific objective.
• Test preparation shall start at the same time as the corresponding development activity → feedback, thinking, improvement / adaptation.
• Testers can review drafts of development documents at the same time as they do test design.
• The number of test levels depends on system complexity.
– Integration testing, especially, is often partitioned into several levels. Test levels can also be combined.
Test Levels
• Component Testing and Integration Testing – fault-directed testing
• System Testing and Acceptance Testing – conformance-directed testing
Tasks for the Test Levels
• Component (Unit, Module) test:
– Coding problems, problems in detailed design, algorithms.
• Integration test:
– Problems in interfaces, working together, design, architecture.
• System test:
– Problems in requirements, problems with system attributes.
• Acceptance test:
– Deviations from the customer's interpretation of requirements and needs; the system does not work on the customer's platform or in the customer's environment.
Design of Test Environment
Test Environment
• Test environment contains drivers and stubs.
• Driver:
– A piece of code that passes test cases to another piece of code.
– Is used to simulate a calling module: it calls the program unit being tested, passing in input arguments.
– The driver can be written in various ways (e.g., prompt interactively for the input arguments or accept input arguments from a file).
– Can be generated automatically by tools. A minimal sketch follows.
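A minimal sketch of a file-reading driver; Square.compute and the file name square_cases.txt are hypothetical, invented only for illustration:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// A file-driven test driver: simulates the calling module by reading test
// cases ("input,expected" per line) and passing them to the unit under test.
public class SquareDriver {

    static class Square {                   // hypothetical unit under test
        static int compute(int x) { return x * x; }
    }

    public static void main(String[] args) throws IOException {
        List<String> cases = Files.readAllLines(Path.of("square_cases.txt"));
        for (String line : cases) {
            String[] parts = line.split(",");
            int input = Integer.parseInt(parts[0].trim());
            int expected = Integer.parseInt(parts[1].trim());
            int actual = Square.compute(input);
            System.out.printf("input=%d expected=%d actual=%d %s%n",
                    input, expected, actual, actual == expected ? "PASS" : "FAIL");
        }
    }
}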
Test Environment

• Stubs:
– A Stub is a dummy procedure, module or unit that
stands in for an unfinished portion of a system.
– Four basic types of stubs for top-down testing (illustrated in the sketch after this list) are:
1. Display a trace message
2. Display parameter value(s)
3. Return a value from a table
4. Return a table value selected by parameter
– More than one stub may be needed, depending on the
number of programs the unit calls.
– Can be generated automatically by a tool.
– Should be kept for regression test.
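The following sketch compresses the four stub styles into one stub for a hypothetical PricingService (the class, method and SKU names are invented for illustration):

import java.util.Map;

// One stub showing the four basic styles for a hypothetical PricingService.
public class PricingServiceStub {

    private static final Map<String, Double> PRICE_TABLE =
            Map.of("SKU-1", 9.99, "SKU-2", 19.99);

    public double getPrice(String sku) {
        System.out.println("STUB: getPrice called");       // 1. trace message
        System.out.println("STUB: parameter sku=" + sku);  // 2. parameter value(s)
        // 3. the simplest form would return one fixed table value: return 9.99;
        // 4. return a table value selected by the parameter:
        return PRICE_TABLE.getOrDefault(sku, 0.0);
    }
}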
Example

• Module X is ready and we need to test it, but it calls functions from Y and Z (which are not yet ready).
– To test such a module, we write a small piece of dummy code that simulates Y and Z and returns values to X.
– This piece of dummy code is called a stub in top-down integration testing.
• Modules Y and Z are ready but module X is not, and we need to test Y and Z, which accept values from X.
– To get the values from X, a small piece of dummy code that stands in for X and passes values to Y and Z is needed.
– This piece of code is called a driver in bottom-up integration testing.
Stub Example

// Stub standing in for an unfinished random-number component:
// it simply returns a fixed, predictable value.
public int generateRandInt()
{
    return 1;
}
Driver Example

// Driver that calls the unit under test and prints the result.
public class RandIntTest
{
    public static void main(String[] args)
    {
        RandInt myRand = new RandInt();
        System.out.println("My first rand int is: "
                + myRand.generateRandInt());
    }
}
Component Testing

[Diagram: the V-model again, with the Unit/Component Test level highlighted.]
Component Testing

Definition

• Component – a minimal software item that can be tested in isolation.
• Component Testing – the testing of individual software components.
• Sometimes known as Unit Testing, Module Testing or Program Testing.
• A component can be tested in isolation – stubs/drivers may be employed.
• Test cases are derived from the component specification (module/program spec).
• Covers functional and non-functional testing.
• Usually performed by the developer, with a debugging tool.
• Quick and informal defect fixing.
Component Testing
Definition
• Test-First/Test-Driven approach – create the tests to drive the design and code
construction!
• Instead of creating a design to tell you how to structure your code, you create a
test that defines how a small part of the system should function.
• Three steps:
1. Design test that defines how you think a small part of the software should
behave (Incremental development).
2. Make the test run as easily and quickly as you can. Don't be concerned about
the design of code, just get it to work!
3. Clean up the code. Now that the code is working correctly, take a step back and refactor to remove any duplication or other problems introduced while getting the test to run.
Russell Gold, Thomas Hammell and Tom Snyder - 2005
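A minimal JUnit 5 sketch of the first two steps, assuming a hypothetical ShoppingCart unit (step 3 would then refactor this without breaking the test):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Step 1: write the test first; it defines how the unit should behave.
public class ShoppingCartTest {

    @Test
    public void totalIsSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(10.0);
        cart.add(5.5);
        assertEquals(15.5, cart.total(), 0.001);
    }
}

// Step 2: the simplest code that makes the test pass.
class ShoppingCart {
    private double total = 0.0;

    void add(double price) { total += price; }

    double total() { return total; }
}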
Component Testing

• Exercising the smallest individually executable code units.
• It is a defect testing process.
• Component or unit testing is the process of testing individual components in isolation.
• Objectives:
– Finding faults
– Assuring correct functional behaviour of units
• Usually performed by programmers.
Component Testing
• Components may be:
– Individual functions or methods within an object;
– Object classes with several attributes and methods;
– Composite components with defined interfaces used
to access their functionality.
• Object Class Testing
– Complete test coverage of a class involves: Testing all
operations associated with an object; Setting and
interrogating all object attributes; Exercising the
object in all possible states.
An Example of Object Class Testing
• Consider a WeatherStation class with an identifier attribute and the operations reportWeather(), calibrate(instruments), test(), startup(instruments) and shutdown(instruments).
• We need to define test cases for reportWeather, calibrate, test, startup and shutdown.
• Using a state model, identify sequences of state transitions to be tested and the event sequences that cause these transitions.
• For example:
– Waiting -> Calibrating -> Testing -> Transmitting -> Waiting
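A sketch of how such a transition sequence might be exercised as a JUnit 5 test. The getState() query and completeTransmission() event are hypothetical test seams (the slides name only the five operations), and the inline WeatherStation is a toy implementation so the sketch is self-contained:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Toy implementation of the class under test, so the sketch compiles.
class WeatherStation {
    private final String identifier;
    private String state = "Waiting";

    WeatherStation(String identifier) { this.identifier = identifier; }

    void calibrate(String instruments) { state = "Calibrating"; }
    void test()                        { state = "Testing"; }
    void reportWeather()               { state = "Transmitting"; }
    void completeTransmission()        { state = "Waiting"; }   // hypothetical event
    String getState()                  { return state; }        // hypothetical query
}

// Exercises the identified sequence:
// Waiting -> Calibrating -> Testing -> Transmitting -> Waiting.
public class WeatherStationStateTest {

    @Test
    public void calibrateTestTransmitCycleReturnsToWaiting() {
        WeatherStation ws = new WeatherStation("WS-01");
        assertEquals("Waiting", ws.getState());

        ws.calibrate("instruments");
        assertEquals("Calibrating", ws.getState());

        ws.test();
        assertEquals("Testing", ws.getState());

        ws.reportWeather();
        assertEquals("Transmitting", ws.getState());

        ws.completeTransmission();
        assertEquals("Waiting", ws.getState());
    }
}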
Integration Testing

Definition

Component Integration Testing

System Integration Testing


Integration Testing

Definition

• Integration Testing – testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
• Components may be code modules, operating systems, hardware and even complete systems.
• There are two levels of Integration Testing:
– Component Integration Testing
– System Integration Testing
Component Integration Testing

[Diagram: the V-model again, with the Integration Test level highlighted.]
Component Integration Testing

Definition

• Component Integration Testing – testing performed to expose defects in the interfaces and interactions between integrated components.
• Usually performed by the developer, but could involve the test team; usually formal (records of test design and execution are kept).
• All individual components should be integration tested prior to system testing.
Component Integration Testing
Test Planning
• To consider – should the integration testing approach:
– Start from top-level components and work down?
– Start from bottom-level components and work up?
– Use the big bang method?
– Be based on functional groups?
– Start on critical components first?
– Be based on business sequencing? (This may suit System Test needs.)
• Knowledge of the system architecture is important.
• The greater the scope of the integration approach, the more difficult it is to isolate defects.
• Non-functional requirements testing may start here – e.g. early performance measures.
Integration Strategies
• Depend on the system architecture.
• Depend on the cost of test environments (drivers, stubs).
• No single strategy is best for every problem.
Top-Down Integration

• Baselines:
– baseline 0: component a
– baseline 1: a + b
– baseline 2: a + b + c
– baseline 3: a + b + c + d
– etc.
• Needs calls to lower-level components not yet integrated (hence the stubs).

[Diagram: component tree with a at the top; b and c below it; d, e, f and g on the next level; h, i, j, k, l and m below; n and o at the bottom.]
Component Integration Testing
Top-down testing
• Test commences with the top module in the system
and tests in layers descending through the
dependency graph for the system.
• This may require successive layers of 'stub' modules that replace modules lower in the dependency graph.
[Diagram: a three-layer hierarchy – A (Layer I); B, C, D (Layer II); E, F, G (Layer III). Testing proceeds: Test A (Layer I, requires stubs for B, C, D); then Test A, B, C, D (Layers I + II, requires stubs for E, F, G); then Test A, B, C, D, E, F, G (all layers).]
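A minimal sketch of the Layer I step, using the component names from the diagram: A is tested against a stub for B (only one of the three stubs is shown for brevity; the interface and fixed return value are illustrative assumptions):

// Top-down baseline 0: test component A while dependency B is unfinished,
// by substituting a stub that satisfies B's interface.
interface B {
    int process(int input);
}

class BStub implements B {                  // stands in for the real B
    @Override
    public int process(int input) {
        System.out.println("STUB B: process(" + input + ")"); // trace + parameter
        return 42;                          // fixed return value
    }
}

class A {
    private final B b;
    A(B b) { this.b = b; }
    int run(int input) { return b.process(input) + 1; }
}

public class TopDownBaseline0 {
    public static void main(String[] args) {
        A a = new A(new BStub());           // integrate A with the stubbed B
        System.out.println("A.run(7) = " + a.run(7));   // expect 43
    }
}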
Component Integration Testing
Top-down testing

Pros:
• Provides a limited working system early in the design process.
• Depth-first integration demonstrates end-to-end functions early in the development process.
• Early detection of design errors through early implementation of the design structure.
• Early testing of major control or decision points.
• This approach may allow an overlap with Component Testing.

Cons:
• Stubs only provide limited simulations of lower-level components and could produce spurious results.
• Breadth-first means that higher levels of the system must be artificially forced to generate output for test observations.
Component Integration Testing
Bottom-up testing
• Initiate testing with unit tests for the bottom modules in the dependency graph.
• Candidates for inclusion in the next batch of tests depend on the dependency structure: a module can be included if all the modules it depends on have been tested (where there is potential circularity, the connected components need to be considered).
• Prioritisation of modules for inclusion in the test sequence should take account of their 'criticality' to the correct operation of the system.
Component Integration Testing
Bottom-up testing
Pros:
• Uses drivers instead of upper-level modules to simulate the environment for lower-level modules.
• Necessary for critical, low-level system components.
• Testing of the components under test can be observed from an early stage.

Cons:
• Unavailability of a demonstrable system until late in the development process.
• Late detection of system structure errors.
Bottom-up Integration
• Baselines:
– baseline 0: component n
– baseline 1: n + i
– baseline 2: n + i + o
– baseline 3: n + i + o + d
– etc.
• Needs drivers to call the baseline configuration.
• Also needs stubs for some baselines.

[Diagram: the same component tree as before, with a at the top and n, o at the bottom.]
Component Integration Testing

Big Bang Approach

In theory:
– If we have already tested the components, why not just combine them all at once? Wouldn't this save time?
– (Based on the false assumption of no faults.)
In practice:
– It takes longer to locate and fix faults.
– Re-testing after fixes is more extensive.
– End result? It takes more time.

[Diagram: units A–F are each unit tested separately, then combined directly into a single System Test. Not recommended in all but the simplest systems!]
Component Integration Testing

Suggested Integration Testing Methodology

The following testing techniques are appropriate for Integration Testing:
• Functional testing using black box testing techniques against the interfacing requirements for the component under test
• Non-functional testing (where appropriate – for example, performance or reliability testing of the component interfaces)
Steps in Integration Testing
System Integration Testing
• We'll talk about System Integration Testing later.
• For now, we should stick to the sequence of the test lifecycle.
• Which means System Testing is next.
[Diagram: an example of deriving an integration order – from a class diagram (Account, Customer, Order, Package, LineItem, CustomerCare, the account subclasses USAccount/JPAccount/EUAccount/UKAccount/OtherAccount, CompositeItem, SimpleItem, Model, PriceList, Component, Slot, ModelDB, SlotDB, ComponentDB, CSVdb) to a dependency hierarchy used to sequence the integration.]
Tester Tasks with Integration
• Early feedback
– Which interfaces make testing difficult?
– Which components must be delivered first (to ease integration)?
System Testing

Context

Definition

Functional Systems Testing

Non-Functional Systems Testing

Good Practices for System Testing

System Testing

[Diagram: the V-model again, with the System Test level highlighted.]
System Testing
Definition

• System Testing – the process of testing an integrated system to verify that it meets specified requirements.
• Concerned with the behaviour of the whole system, not with the workings of individual components.
• Carried out by the Test Team.
System Testing
• Typical test objects:
– System, user and operation manuals, system configuration
and configuration data.
• Test basis:
– System and software requirements specifications, functional specifications, risk analysis reports.
System Testing – What to include?
• Function test
• Function interaction, flow
• Non-functional attributes
– Efficiency (stress, load, volume test)
– Usability
– Safety
– Robustness
– Portability
– Maintainability
System Testing – Who?
• Normally independent test team
• Large task to make test environment
• Customer viewpoint
• Problem:
– Often incomplete / undocumented requirements!!!
System Integration Testing

Context

Definition

Objectives

Interfaces to External Systems


System Integration Testing

[Diagram: the V-model with an additional System Integration Testing level inserted between System Test and Acceptance Test.]
System Integration Testing
Definition

• System Integration Testing is the testing between the 'System' and 'Acceptance' phases.
• The system has already been proven functionally correct; what remains to be tested is how the system interacts with other systems and/or organisations.
System Integration Testing

Objectives of Systems Integration Testing

• The objective of Systems Integration Testing is to provide confidence that the system or application is able to interoperate successfully with other specified software systems and does not have an adverse effect on other systems that may also be present in the live environment, or vice versa.
• The testing tasks performed during System Integration Testing may be combined with System Testing, particularly if the system or application has little or no requirement to interoperate with other systems.
• In terms of the V-model, Systems Integration Testing corresponds to the Functional and Technical Specification phases of the software development lifecycle.
System Integration Testing

Testing Interfaces to External Systems

• Having completed Component Integration Testing and System Testing, one must execute the plan for system-to-system integration.
• Infrastructure may need to be transformed in order to feed an external system.
• Black box testing techniques are used.
Acceptance Testing

[Diagram: the V-model again, with the Acceptance Test level highlighted.]
Acceptance Testing
Definition

• Acceptance testing: formal testing with respect to user needs, requirements and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
Acceptance Testing
Definition
• Usually the responsibility of the Customer/End user,
though other stakeholders may be involved. Customer
may sub-contract the Acceptance test to a third party
• Goal is to establish confidence in the system/part-
system or specific non-functional characteristics (e.g.
performance)
• Usually for ensuring the system is ready for
deployment into production
• May also occur at other stages, e.g.
– Acceptance testing of a COTS product before System
Testing commences
– Acceptance testing a component’s usability during
Component testing
– Acceptance testing a new significant functional
enhancement/middleware release prior to deployment
into System Test environment.
User Acceptance Testing (UAT)

• Usually the final stage of validation.
• Conducted by, or visible to, the end user and customer.
• Testing is based on the defined user requirements.
• Often uses the 'Thread Testing' approach:
– 'A testing technique used to test the business functionality or business logic of the application in an end-to-end manner, in much the same way a User or an operator might interact with the system during its normal use.' – Watkins 2001
• This approach is also often used for Functional Systems Test – the same threads serve both test activities.
User Acceptance Testing

• Often uses a big bang approach.
• Black box testing techniques are most commonly used.
• Regression testing ensures changes have not regressed other areas of the system.
User Acceptance Testing
Testing Pearl of Wisdom

• "If love is like an extended software Q.A. suite, then true love is like a final Acceptance Test – one often has to be willing to endure compromise, bug fixes and work-arounds; otherwise, the software is never done."

– The Usenet Oracle
Operational Acceptance Testing (OAT)

• The acceptance of the system by those who have to administer it.
• Features covered include:
– testing of backup/restore
– disaster recovery
– user management
– maintenance tasks
– periodic checks of security vulnerabilities
• 'The objective of OAT is to confirm that the Application Under Test (AUT) meets its operational requirements, and to provide confidence that the system works correctly and is usable before it is formally "handed over" to the operation user. OAT is conducted by one or more Operations Representatives with the assistance of the Test Team.' – Watkins 2001
Operational Acceptance Testing (OAT)

• Employs a black box approach for some activities.
• Also employs a Thread Testing approach – Operations Representatives perform typical tasks that they would carry out during their normal usage of the system.
• Also addresses testing of system documentation, such as operations manuals.
Contract / Regulation Acceptance Testing

• Contract Acceptance Testing – testing against the acceptance criteria defined in the contract.
– Final payment to the developer depends on contract acceptance testing being successfully completed.
– Acceptance criteria defined at contract time are often imprecise, poorly defined, incomplete and out of step with subsequent changes to the application.
• Regulation Acceptance Testing is performed against any regulations that must be adhered to, such as governmental, legal or safety regulations.
Alpha & Beta Testing

• Early testing of a stable product by customers/users.
• Feedback is provided by alpha and beta testers.
• Alpha tests are performed at the developer's site by the customer.
• Beta tests are conducted at the customer's site by the end user/customer.
• Published reviews of beta release test results can make or break a product (e.g. PC games).
Other Acceptance Test Terms

• Factory Acceptance Testing (FAT)
• Site Acceptance Testing (SAT)
• Both address acceptance testing for systems that are tested before and after being moved to a customer's site.