Manual Testing Notes

Index

1. Introduction

2. Principle of Testing

3. Software Development Life Cycle (SDLC)

4. Software Development Lifecycle Models

5. Some of the Software Testing Terms and Definitions

6. Verification and validation

7. Project Management

8. Quality Management

9. Risk Management

10. Configuration Management

11. Types of Software Testing

12. Testing levels

13. Types of Testing Techniques

14. Testing Life Cycle

15. Defect Tracking

16. Test Reports

17. Software Metric

18. Other Testing Terms

19. Test Standards

20. Web Testing

21. Testing Terms

22. Technical Questions



❖ Software Testing

Introduction

Software testing is a critical element of software quality assurance and represents the ultimate process for ensuring the correctness of the product. A quality product enhances customer confidence in using the product and thereby improves business economics. In other words, a good quality product means zero defects, which is derived from a better quality testing process.

Software is an integrated set of program code, designed logically to implement a particular function or to automate a particular process. To develop a software product or project, user needs and constraints must be determined and explicitly stated. The development process is broadly classified into two:

1. Product development
2. Project development

Product development is done assuming a wide range of customers and their needs. This type of development involves considering customers from all domains and collecting requirements from many different environments.

Project development is done by focusing on a particular customer's needs, gathering data from the customer's environment, and bringing out a valid set of information that will serve as a pillar for the development process.

Testing is a necessary stage in the software life cycle: it gives the programmer and user some sense of correctness, though never “proof of correctness.” With effective testing techniques, software is more easily debugged, less likely to “break,” more “correct,” and, in summary, better.

Most development processes in the IT industry always seem to follow a tight schedule. Often, these schedules adversely affect the testing process, resulting in step-motherly treatment being meted out to testing. As a result, defects accumulate in the application and are overlooked so as to meet deadlines, and the developers convince themselves that the overlooked errors can be rectified in subsequent releases.

The definition of testing is not well understood. People often use a totally incorrect definition of the word testing, and this is the primary cause of poor program testing.

Testing the product means adding value to it by raising the quality or reliability of
the product. Raising the reliability of the product means finding and removing errors.
Hence one should not test a product to show that it works; rather, one should start
with the assumption that the program contains errors and then test the program to
find as many of the errors as possible.

Definitions of Testing:

“Testing is the process of executing a program with the intent of finding errors.”
Or
“Testing is the process of evaluating a system by manual or automated means and verifying that it satisfies specified requirements.”
Or
“... the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results ...”

Why Software Testing?
Software testing helps to deliver quality software products that satisfy users' requirements, needs, and expectations. If testing is done poorly:
➢ defects are found during operation
➢ maintenance costs and user dissatisfaction are high
➢ it may cause mission failure
➢ operational performance and reliability suffer

Some of the case studies

Disney's Lion King, 1994-1995

In the fall of 1994, the Disney company released its first multimedia CD-ROM game for children, The Lion King Animated Storybook. This was Disney's first venture into the market, and it was highly promoted and advertised. Sales were huge; it was “the game to buy” for children that holiday season. What happened, however, was a huge debacle. On December 26, the day after Christmas, Disney's customer support phones began to ring, and ring, and ring. Soon the phone support technicians were swamped with calls from angry parents with crying children who couldn't get the software to work. Numerous stories appeared in newspapers and on TV news. The problem was later found to be due to the software not having been tested for all the conditions under which it would run.

Software Bug: A Formal Definition

Calling any and all software problems bugs may sound simple enough, but doing so doesn't really address the issue. To keep the definition from running in circles, there needs to be a definitive description of what a bug is.

A software bug occurs when one or more of the following five rules are true:

1) The software doesn't do something that the product specification says it should do.
2) The software does something that the product specification says it shouldn't do.
3) The software does something that the product specification doesn't mention.
4) The software doesn't do something that the product specification doesn't mention but should.
5) The software is difficult to understand, hard to use, slow, or - in the software tester's eyes - will be viewed by the end user as just plain not right.

What Exactly Does a Software Tester Do? (Or: the Role of the Tester)

From the above examples you have seen how nasty bugs can be, you know what the definition of a bug is, and you can imagine how costly bugs can be. So the main goal of a tester is:

“The goal of a Software Tester is to find bugs.”

As a software tester you shouldn't be content with just finding bugs; you should think about how to find them sooner in the development process, thus making them cheaper to fix.

“The goal of a Software Tester is to find bugs, and find them as early as
possible”.

But, finding bugs early isn’t enough.

“The goal of a Software Tester is to find bugs, and find them as early as
possible and make sure they get fixed”

2

Principle of Testing

The main objective of testing is to find defects in requirements, design, documentation, and code as early as possible. The test process should be such that the software product delivered to the customer is defect-free. All tests should be traceable to customer requirements.

Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions. A necessary part of a test case is a definition of the expected output or result. A good test case is one that has a high probability of detecting an as-yet undiscovered error.

Eight Basic Principles of Testing

• Define the expected output or result.
• Don't test your own programs.
• Inspect the results of each test completely.
• Include test cases for invalid or unexpected conditions.
• Test the program to see if it does what it is not supposed to do as well as what it is supposed to do.
• Avoid disposable test cases unless the program itself is disposable.
• Do not plan tests assuming that no errors will be found.
• The probability of locating more errors in any one module is directly proportional to the number of errors already found in that module.

Best Testing Practices to be followed during testing

• Testing and evaluation responsibility is given to every member, so as to generate team responsibility among all.
• Develop a Master Test Plan so that resources and responsibilities are understood and assigned as early in the project as possible.
• Systematic evaluation and preliminary test design are established as a part of all system engineering and specification work.
• Testing is used to verify that all project deliverables and components are complete, and to demonstrate and track true project progress.
• A risk-prioritized list of test requirements and objectives (such as requirements-based, design-based, etc.) is developed and maintained.
• Conduct reviews as early and as often as possible to provide developer feedback and get problems found and fixed as they occur.

3

❖ Software Development Life Cycle (SDLC)

Let us look at the traditional software development life cycle versus the presently (most commonly) used life cycle.

Fig A (Traditional): Requirements → Design → Development → Testing → Implementation → Maintenance

Fig B (Most commonly used): Requirements → Design → Development → Implementation → Maintenance, with TESTING running alongside every phase.

In Fig A above, the Testing phase comes after development (coding) is complete and before the product is launched and goes into the Maintenance phase. This model has some disadvantages: the cost of fixing errors is high because we are not able to find errors until coding is completed, and if there is an error in the Requirements phase then all subsequent phases have to be changed. So the total cost becomes very high.

Fig B shows the recommended test process, which involves testing in every phase of the life cycle. During the Requirements phase, the emphasis is on validation, to determine that the defined requirements meet the needs of the organization. During the Design and Development phases, the emphasis is on verification, to ensure that the design and program accomplish the defined requirements. During the Test and Installation phases, the emphasis is on inspection, to determine that the implemented system meets the system specification. During the Maintenance phase, the system is re-tested to determine that the changes work and that the unchanged portion continues to work.

Requirements and Analysis Specification

The main objective of requirement analysis is to prepare a document that includes all the client requirements; the Software Requirement Specification (SRS) document is the primary output of this phase. Proper requirements and specifications are critical for having a successful project. Removing errors in this phase costs far less than removing the same errors in the Design phase or later. You should also verify the following activities:

• Determine the verification approach.
• Determine the adequacy of requirements.
• Generate functional test data.
• Determine consistency of design with requirements.

Design phase

In this phase the design of the entire project is split into two levels:

• High-Level Design or System Design
• Low-Level Design or Detailed Design

High-Level Design or System Design (HLD)

High-level design gives the overall system design in terms of functional architecture and database design. It is very useful for the developers to understand the flow of the system. In this phase the design team, the review team (testers), and customers play a major role. The entry criterion is the requirement document, that is, the SRS. The exit criteria are the HLD, project standards, the functional design documents, and the database design document.

Low-Level Design (LLD)

During the detailed design phase, the view of the application developed during high-level design is broken down into modules and programs. Logic design is done for every program and then documented as program specifications. For every program, a unit test plan is created.

The entry criterion for this phase is the HLD document. The exit criteria are the program specifications and unit test plan (LLD).

Development Phase

This is the phase where coding actually starts. After the preparation of the HLD and LLD, the developers know their roles, and they develop the project according to the specifications. This stage produces the source code, executables, and database. The output of this phase is the subject of subsequent testing and validation.

We should also verify these activities:

• Determine adequacy of implementation.
• Generate structural and functional test data for programs.

The inputs for this phase are the physical database design document, project standards, program specifications, unit test plan, program skeletons, and utility tools. The outputs are test data, source data, executables, and code reviews.

Testing phase

This phase is intended to find defects that can be exposed only by testing the entire system, which can be done by static testing or dynamic testing. Static testing means testing the product without executing it; we do this by examining it and conducting reviews. Dynamic testing is what you would normally think of as testing: we test the executing part of the project.

A series of different tests are done to verify that all system elements have been
properly integrated and the system performs all its functions.

Note that system test planning can occur before coding is completed; indeed, it is often done in parallel with coding. The input for this phase is the requirements specification document, and the outputs are the system test plan and test results.

Implementation phase or the Acceptance phase

This phase includes two basic tasks:

• Getting the software accepted
• Installing the software at the customer site.

Acceptance consists of formal testing conducted by the customer according to the acceptance test plan prepared earlier, and analysis of the test results to determine whether the system satisfies its acceptance criteria. When the result of the analysis satisfies the acceptance criteria, the user accepts the software.

Maintenance phase

This phase is for all modifications that address unmet customer requirements or anything to be appended to the present system. All types of corrections for the project or product take place in this phase. The cost of risk is very high in this phase. This is the last phase of the software development life cycle. The input is the project to be corrected, and the output is the modified version of the project.

4

❖ Software Development Lifecycle Models

The process used to create a software product from its initial conception to its public
release is known as the software development lifecycle model.

There are many different methods that can be used for developing software, and no one model is necessarily the best for a particular project. Four frequently used models are:

• Big Bang Model
• Waterfall Model
• Prototype Model
• Spiral Model

Big Bang Model

The Big Bang Model is the one in which a huge amount of matter (people or money) is put together, a lot of energy is expended - often violently - and out comes the perfect software product, or it doesn't.

The beauty of this model is that it's simple. There is little planning, scheduling, or formal development process; all the effort is spent developing the software and writing the code. It's an ideal process if the product requirements aren't well understood and the final release date is flexible. It's also important to have flexible customers, because they won't know what they're getting until the very end.

Waterfall Model

A project using the waterfall model moves down a series of steps starting from an initial idea to a final product. At the end of each step, the project team holds a review to determine whether it is ready to move to the next step. If the project isn't ready to progress, it stays at that level until it is ready. Each phase requires well-defined information, utilizes a well-defined process, and results in well-defined outputs. Resources are required to complete the process in each phase, and each phase is accomplished through the application of explicit methods, tools, and techniques.

The waterfall model is also called the phased model because of the sequential move from one phase to another, the implication being that systems cascade from one level to the next in smooth progression. It has the following seven phases of development:

The figure represents the waterfall model:

Requirement phase → Analysis phase → Design phase → Development phase → Testing phase → Implementation phase → Maintenance phase

Notice three important points about this model:

▪ There's a large emphasis on specifying what the product will be.
▪ The steps are discrete; there's no overlap.
▪ There's no way to back up. As soon as you're on a step, you need to complete the tasks for that step and then move on.

Prototype model

The Prototyping model, also known as the Evolutionary model, came into the SDLC because of certain failures in first versions of application software. A failure in the first version of an application inevitably leads to the need for redoing it. To avoid such failures, the concept of prototyping is used. The basic idea of prototyping is that instead of fixing requirements before design and coding can begin, a prototype is built to understand the requirements. The prototype is built using known requirements, and by viewing or using the prototype, the user can actually feel how the system will work.

The prototyping model has been defined as:

“A model whose stages consist of expanding increments of an operational software product, with the direction of evolution being determined by operational experience.”

Prototyping Process

The following activities are carried out in the prototyping process:

• The developer and the user work together to define the specifications of the critical parts of the system.
• The developer constructs a working model of the system.
• The resulting prototype is a partial representation of the system.
• The prototype is demonstrated to the user.
• The user identifies problems and redefines the requirements.
• The designer uses the validated requirements as a basis for designing the actual or production software.

Prototyping is used in the following situations:

• When an earlier version of the system does not exist.
• When the user's needs are not clearly definable/identifiable.
• When the user is unable to state his/her requirements.
• When user interfaces are an important part of the system being developed.

Spiral model

The traditional software process models don't deal with the risks that may be faced during project development, and negligence of project risks has been one of the major causes of project failure in the past: nobody was prepared when something unforeseen happened. Barry Boehm recognized this and tried to incorporate the factor of project risk into a life cycle model. The result is the Spiral model, which was first presented in 1986. The new model aims at incorporating the strengths and avoiding the weaknesses of the other models by shifting the management emphasis to risk evaluation and resolution.

Each phase in the spiral model is split into four sectors of major activities.

These activities are as follows:

Objective setting:

This activity involves specifying the project and process objectives in terms of their
functionality and performance.

Risk analysis:

It involves identifying and analyzing alternative solutions. It also involves identifying the risks that may be faced during project development.

Engineering:

This activity involves the actual construction of the system.

Customer evaluation:

During this phase, the customer evaluates the product for any errors and
modifications.

5

❖ Software Testing Terms and Definitions

• Verification and Validation
• Project Management
• Quality Management
• Risk Management
• Configuration Management
• Cost Management
• Compatibility Management

6

▪ Verification & Validation

Verification and validation are often used interchangeably but have different definitions, and these differences are important to software testing.

Verification is the process of confirming that software meets its specifications. Validation is the process of confirming that it meets the user's requirements.

Verification can be conducted through reviews. Quality reviews provide visibility into the development process throughout the software development life cycle, and help teams determine whether to continue development activity at various checkpoints or milestones in the process. They are conducted to identify defects in a product early in the life cycle.

Types of Reviews

• In-process Reviews: -

They look at the product during a specific time period of the life cycle, such as during the design activity. They are usually limited to a segment of the project, with the goal of identifying defects as work progresses, rather than at the close of a phase or even later, when defects are more costly to correct.

• Decision-point or phase-end Reviews: -

This type of review is helpful in determining whether to continue with planned activities or not. They are held at the end of each phase.

• Post-implementation Reviews: -

These reviews are held after implementation is complete to audit the process based on actual results. Post-implementation reviews are also known as “postmortems” and are held to assess the success of the overall process after release and identify any opportunities for process improvements.

Classes of Reviews

• Informal or Peer Review: -

This type of review is generally a one-to-one meeting between the author of a work product and a peer, initiated as a request for input regarding a particular artifact or problem. There is no agenda, and results are not formally reported. These reviews occur on an as-needed basis throughout each phase of a project.

• Semiformal or Walkthrough Review: -

The author of the material being reviewed facilitates this review. The participants are led through the material in one of two formats: either the presentation is made without interruptions and comments are made at the end, or comments are made throughout. Possible solutions for uncovered defects are not discussed during the review.

• Formal or Inspection Review: -

An inspection is more formalized than a walkthrough, typically with 3-8 people including a moderator, a reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but it is one of the most cost-effective methods of ensuring quality.

Three rules should be followed for all reviews:

1. The product is reviewed, not the producer.
2. Defects and issues are identified, not corrected.
3. All members of the reviewing team are responsible for the results of the review.

7

Project Management

Project management is organizing, planning, and scheduling software projects. It is concerned with the activities involved in ensuring that software is delivered on schedule and in accordance with the requirements of the organization developing and procuring the software. Project management is needed because software development is always subject to budget and schedule constraints that are set by the organization developing the software.

Project management activities include:

• Project planning
• Project scheduling
• Iterative code/test/release phases
• Production phase
• Post mortem

Project planning

This is the most time-consuming project management activity. It is a continuous activity from the initial concept through to system delivery, and the project plan must be regularly updated as new information becomes available. Without a proper plan, the development of the project will suffer from errors, or the cost may rise above the scheduled cost.

Project scheduling

This activity involves splitting the project into tasks and estimating the time and resources required to complete each task. Tasks are organized to run concurrently to make optimal use of the workforce, and task dependencies are minimized to avoid delays caused by one task waiting for another to complete. The project manager has to take into consideration various aspects, such as scheduling and estimating manpower resources, so that the cost of developing a solution stays within limits. The project manager also has to allow for contingency in planning.

Iterative Code/Test/Release Phases

After the planning and design phases, the client and the development team have to agree on the feature set and the timeframe in which the product will be delivered. This includes iterative releases of the product, to let the client see fully implemented functionality early and to allow the developers to discover performance and architectural issues early in development. Each iterative release is treated as if the product were going to production; full testing and user acceptance are performed for each iterative release. Experience shows that iterations should be spaced at least 2-3 months apart: if iterations are closer than that, more time is spent on convergence and the project timeframe expands. During this phase, code reviews must be done weekly to ensure that the developers are delivering to specification, and all source code is put under source control. Also, full installation routines are to be used for each iterative release, as would be done in production.

Deliverables

• Triage
• Weekly Status with Project Plan and Budget Analysis
• Risk Assessment
• System Documentation
• User Documentation (if needed)
• Test Signoff for each iteration
• Customer Signoff for each iteration

Production Phase

Once all iterations are complete, the final product is presented to the client for a final
signoff. Since the client has been involved in all iterations, this phase should go very
smoothly.

Deliverables

• Final Test Signoff
• Final Customer Signoff

Post Mortem Phase

The post mortem phase allows the team to step back and review the things that went well and the things that need improvement. Post mortem reviews cover processes that need adjustment, highlight the most effective processes, and provide action items that will improve future projects.

To conduct a post mortem review, announce the meeting at least a week in advance so that everyone has time to reflect on the project issues they faced. Everyone has to be asked to come to the meeting with the following:

1. Items that were done well during the project
2. Items that were done poorly during the project
3. Suggestions for future improvements

During the meeting, the information listed above is collected. As each person offers their input, categorize it so that all comments are captured; this allows one to see how many people had the same observations during the project. At the end of the review, a list of the items mentioned most often is available, and the team goes through it to prioritize the importance of each item, drawing a distinction for the most important ones. Finally, a list of action items is made that will be used to improve the process, and the results are published. When the next project begins, everyone on the team should review the Post Mortem Report from the prior release so as to improve the next release.

8

Quality Management

The project quality management knowledge area comprises the set of processes that ensure the result of a project meets the needs for which the project was executed. Processes such as quality planning, assurance, and control are included in this area. Each process has a set of inputs and a set of outputs, and each process also has a set of tools and techniques that are used to turn the inputs into outputs.

Definition of Quality:

• Quality is the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.
Or
• Quality is defined as meeting the customer's requirements the first time and every time. This is much more than the mere absence of defects.

Some goals of quality programs include:

• Fitness for use. (Is the product or service capable of being used?)
• Fitness for purpose. (Does the product or service meet its
intended purpose?)
• Customer satisfaction. (Does the product or service meet the
customer's expectations?)

Quality Management Processes

Quality Planning:

The process of identifying which quality standards are relevant to the project and determining how to satisfy them.

• Input includes: the quality policy, scope statement, product description, standards and regulations, and the output of other processes.
• Methods used: benefit/cost analysis, benchmarking, flowcharting, and design of experiments.
• Output includes: the Quality Management Plan, operational definitions, checklists, and input to other processes.

Quality Assurance

The process of evaluating overall project performance on a regular basis to provide confidence that the project will satisfy the relevant quality standards.

• Input includes: the Quality Management Plan, results of quality control measurements, and operational definitions.
• Methods used: quality planning tools and techniques, and quality audits.
• Output includes: quality improvement.

Quality Control

The process of monitoring specific project results to determine whether they comply with relevant quality standards, and identifying ways to eliminate causes of unsatisfactory performance.

• Input includes: work results, the Quality Management Plan, operational definitions, and checklists.
• Methods used include: inspection, control charts, Pareto charts, statistical sampling, flowcharting, and trend analysis.
• Output includes: quality improvements, acceptance decisions, rework, completed checklists, and process adjustments.

Quality Policy
The overall quality intentions and direction of an organization with regard to quality, as formally expressed by top management.

Total Quality Management (TQM)

A common approach to implementing a quality improvement program within an organization.

Quality Concepts

• Zero Defects
• The Customer is the Next Person in the Process
• Do the Right Thing Right the First Time (DTRTRTFT)
• Continuous Improvement Process (CIP) (From Japanese word, Kaizen)

Tools of Quality Management

Problem Identification Tools:

• Pareto Chart
1. Ranks defects in order of frequency of occurrence to depict 100% of the defects (displayed as a histogram).
2. Defects with the most frequent occurrence should be targeted for corrective action.
3. 80-20 rule: 80% of problems are found in 20% of the work.
4. Does not account for the severity of the defects. (A counting sketch appears after this list.)

• Cause and Effect Diagrams (fishbone or Ishikawa diagrams)
1. Analyze the inputs to a process to identify the causes of errors.
2. Generally consist of 8 major inputs to a quality process, to permit the characterization of each input.

• Histograms
1. Shows frequency of occurrence of items within a range of
activity.
2. Can be used to organize data collected for measurements done
on a product or process.

• Scatter diagrams
1. Used to determine the relationship between two or more pieces of corresponding data.
2. The data are plotted on an "X-Y" chart to determine correlation (highly positive, positive, no correlation, negative, and highly negative).
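
As an illustration of the Pareto idea, the short sketch below counts hypothetical defect reports by category and ranks them with cumulative percentages; the category names and counts are invented for the example.

    from collections import Counter

    # Hypothetical defect reports, each tagged with the area that caused it.
    defects = ["UI", "Logic", "UI", "Interface", "UI",
               "Logic", "Data", "UI", "Logic", "UI"]

    counts = Counter(defects)          # frequency of each defect category
    total = sum(counts.values())

    cumulative = 0
    print(f"{'Category':<12}{'Count':>6}{'Cum %':>8}")
    for category, count in counts.most_common():   # ranked, most frequent first
        cumulative += count
        print(f"{category:<12}{count:>6}{100 * cumulative / total:>7.0f}%")

Here the top category alone accounts for half the defects, which is exactly the kind of concentration the 80-20 rule predicts and a Pareto chart makes visible.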

Problem Analysis Tools

1. Graphs
2. Check sheets (tic sheets) and check lists
3. Flowcharts

9

Risk Management

Risk management must be an integral part of any project; everything does not always happen as planned. Project risk management contains the processes for identifying, analyzing, and responding to project risk. Each process has a set of inputs and a set of outputs, and each process also has a set of tools and techniques that are used to turn the inputs into outputs.

Risk Management Processes

Risk Management Planning

Used to decide how to approach and plan the risk management activities for a
project.
• Input includes: The project charter, risk management policies, and WBS all
serve as input to this process
• Methods used: Many planning meetings will be held in order to generate the risk management plan
• Output includes: The major output is the risk management plan, which does
not include the response to specific risks. However, it does include
methodology to be used, budgeting, timing, and other information

Risk Identification

Determining which risks might affect the project and documenting their
characteristics
• Input includes: The risk management plan is used as input to this process
• Methods used: Documentation reviews should be performed in this process.
Diagramming techniques can also be used
• Output includes: Risks and risk symptoms are identified as part of this process. There are generally two types of risks: business risks, which are risks of gain or loss, and pure risks, which represent only a risk of loss. Pure risks are also known as insurable risks

Risk Analysis

A qualitative analysis of risks and conditions is done to prioritize their effects on project objectives.
• Input includes: There are many items used as input into this process, such as the risk management plan; the risks should already be identified as well. Use of low-precision data may lead to an analysis that is not usable. Risks are rated against how they impact the project's objectives for cost, schedule, scope, and quality
• Methods used: Several tools and techniques can be used for this process; probability and impact will have to be evaluated
• Output includes: An overall project risk ranking is produced as a result of this process. The risks are also prioritized, and trends should be observed. Risks rated as high or moderate are prime candidates for further analysis

Risk Monitoring and Control

Used to monitor risks, identify new risks, execute risk reduction plans, and
evaluate their effectiveness throughout the project life cycle.
• Input includes: Input to this process includes the risk management plan,
risk identification and analysis, and scope changes
• Methods used: Audits should be used in this process to ensure that risks are
still risks as well as discover other conditions that may arise.
• Output includes: Output includes work-around plans, corrective action,
project change requests, as well as other items

Risk Management Concepts

Expected Monetary Value (EMV)

• A risk quantification tool.
• EMV is the product of the risk event probability and the risk event value.
• Risk event probability: an estimate of the probability that a given risk event will occur.

Decision Trees

A diagram that depicts key interactions among decisions and associated chance
events as understood by the decision maker. Can be used in conjunction with EMV
since risk events can occur individually or in groups and in parallel or in sequence.
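
A minimal worked example with invented numbers: a risk with a 30% chance of occurring and a $50,000 impact has an EMV of 0.30 x $50,000 = $15,000. The sketch below (risk names and figures are assumptions for illustration) computes EMV for a small list of risks:

    # Hypothetical risks: (description, probability of occurring, impact if it occurs)
    risks = [
        ("Key developer leaves",        0.10, 80_000.0),
        ("Third-party API is delayed",  0.30, 50_000.0),
        ("Hardware order slips",        0.25, 20_000.0),
    ]

    # EMV = risk event probability x risk event value
    for description, probability, impact in risks:
        print(f"{description}: EMV = ${probability * impact:,.0f}")

    # Summing EMVs to size a contingency reserve is one common use of the figure
    # (an assumption of this sketch, not a rule stated above).
    print(f"Suggested contingency reserve: ${sum(p * v for _, p, v in risks):,.0f}")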

10

Configuration Management

Configuration management (CM) is the process of controlling, coordinating, and tracking the standards and procedures for managing changes in an evolving software product. Configuration testing is the process of checking the operation of the software being tested on various types of hardware.

Configuration management involves the development and application of procedures and standards to manage an evolving software product, and can be seen as part of a more general quality management process. When released to CM, software systems are sometimes called baselines, as they are a starting point for further development. The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.

Configuration management can be managed through

• Version control.
• Changes made in the project.

Version Control and Release Management

A version is an instance of a system which is functionally distinct in some way from other system instances; it is nothing but the updated or added features of previous versions of the software. It has to be planned when each new system version is to be produced, and it has to be ensured that version management procedures and tools are properly applied.
A release is the means of distributing the software outside the development team. Releases must incorporate changes forced on the system by errors discovered by users and by hardware changes. They must also incorporate new system functionality.

Changes made in the project
This is one of the most useful ways of configuring the system. All changes made to previous versions of the software have to be maintained. This is most important when the system fails or does not meet the requirements: by keeping a note of the changes, one can recover the original functionality. This can include documents, data, or simulation.
Configuration Management Planning

This starts in the early phases of the project and must define the documents or document classes that are to be managed. Documents that might be required for future system maintenance should be identified and included as managed documents. The plan defines

➢ the types of documents to be managed

➢ the document-naming scheme

➢ who takes responsibility for the CM procedures and creation of baselines

➢ policies for change control and version management.

It involves three important elements:

• Change management items.
• Change request documents.
• Change control board (CCB).

Change management

Software systems are subject to continual change requests from users, from developers, and from market forces. Change management is concerned with keeping track of and managing changes, and with ensuring that they are implemented in the most cost-effective way.

Change request form

Definition of the change request form is part of the CM planning process. It records the change required, the reason why the change was suggested, and the urgency of the change (from the requestor of the change). It also records the change evaluation, impact analysis, change cost, and recommendations (from system maintenance staff). A major problem in change management is tracking change status. Change-tracking tools keep track of the status of each change request and automatically ensure that change requests are sent to the right people at the right time; integrated with e-mail systems, they allow electronic change request distribution.
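
To make the form concrete, here is a minimal sketch of the fields just described expressed as a record type; the field names and status values are assumptions for illustration, not a standard:

    from dataclasses import dataclass

    @dataclass
    class ChangeRequest:
        # Recorded from the requestor of the change
        change_required: str
        reason: str
        urgency: str                 # e.g. "low", "medium", "high" (assumed scale)
        # Recorded by system maintenance staff during evaluation
        evaluation: str = ""
        impact_analysis: str = ""
        estimated_cost: float = 0.0
        recommendation: str = ""
        # Status tracked so requests reach the right people at the right time
        status: str = "submitted"    # assumed lifecycle: submitted -> evaluated -> approved/rejected

    # Example usage
    cr = ChangeRequest(change_required="Add export-to-PDF option",
                       reason="Users need printable reports",
                       urgency="medium")
    cr.status = "evaluated"
    print(cr)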

Change control board

A group should review the changes and decide whether or not they are cost-effective from a strategic, organizational, and technical viewpoint. This group is sometimes called a change control board (CCB) and includes members from the project team.

11

❖ Types of Software Testing

Testing is broadly divided into Static and Dynamic testing; Dynamic testing is further divided into Structural Testing and Functional Testing.

Static Testing

Static testing refers to testing something that's not running: examining and reviewing it. The specification is a document, not an executing program, so reviewing it is considered static testing. Static testing also covers anything created as written or graphical documents, or a combination of both.

High-level Reviews of specification

• Pretend to be the customer.
• Research existing Standards and Guidelines.
• Review and Test similar software.

Low-level Reviews of specification

• Specification Attributes checklist.
• Specification terminology checklist.

Dynamic Testing

The techniques used are determined by the type of testing that must be conducted:

• Structural (usually called "white box") testing.
• Functional ("black box") testing.

Structural testing or White box testing

Structural tests verify the structure of the software itself and require complete
access to the source code. This is known as ‘white box’ testing because you see into
the internal workings of the code.

White-box tests make sure that the software structure itself contributes to proper and efficient program execution. Complicated loop structures, common data areas, 100,000 lines of spaghetti code, and nests of ifs are evil; well-designed control structures, subroutines, and reusable modular programs are good.

White-box testing's strength is also its weakness: the code needs to be examined by highly skilled technicians, which means that tools and skills are highly specialized to the particular language and environment. Also, large or distributed system execution goes beyond one program, so a correct procedure might call another program that provides bad data. In large systems, it is the execution path, as defined by the program calls, their input and output, and the structure of common files, that is important. This leads to a hybrid kind of testing that is often employed in intermediate or integration stages of testing.

Functional or Black Box Testing

Functional tests examine the behavior of software as evidenced by its outputs, without reference to internal functions; hence it is also called 'black box' testing. If the program consistently provides the desired features with acceptable performance, then specific source code features are irrelevant. It's a pragmatic and down-to-earth assessment of software.

Functional or black box tests better address the modern programming paradigm. As object-oriented programming, automatic code generation, and code re-use become more prevalent, analysis of the source code itself becomes less important and functional tests become more important. Black box tests also better attack the quality target: since only the people paying for an application can determine if it meets their needs, it is an advantage to create the quality criteria from this point of view from the beginning.

Black box tests have a basis in the scientific method. Like the process of science, black box tests must have a hypothesis (the specifications), a defined method or procedure (the test plan), reproducible components (the test data), and a standard notation to record the results. One can re-run black box tests after a change to make sure the change produced only intended results with no inadvertent effects.

12

Testing levels

There are several types of testing in a comprehensive software test process, many of
which occur simultaneously.

• Unit Testing
• Integration Testing
• System Testing
• Performance / Stress Test
• Regression Test
• Quality Assurance Test
• User Acceptance Test and Installation Test

Unit Testing

Testing each module individually is called unit testing, and it follows a white-box approach. In some organizations, a peer review panel performs the design and/or code inspections. Unit or component tests usually involve some combination of structural and functional tests by programmers in their own systems. Component tests often require building some kind of supporting framework that allows components to execute.
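
As a hedged illustration (the function and test names are invented), a unit test for a single function might look like this using Python's built-in unittest framework:

    import unittest

    def credit_limit_is_valid(amount: float) -> bool:
        """Hypothetical unit under test: accepts limits from $20,000 to $50,000."""
        return 20_000 <= amount <= 50_000

    class CreditLimitTest(unittest.TestCase):
        def test_valid_limit_is_accepted(self):
            self.assertTrue(credit_limit_is_valid(35_000))

        def test_limit_below_range_is_rejected(self):
            self.assertFalse(credit_limit_is_valid(19_999))

        def test_limit_above_range_is_rejected(self):
            self.assertFalse(credit_limit_is_valid(50_001))

    if __name__ == "__main__":
        unittest.main()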

Integration testing

The individual components are combined with other components to make sure that
necessary communications, links and data sharing occur properly. It is not truly
system testing because the components are not implemented in the operating
environment. The integration phase requires more planning and some reasonable
sub-set of production-type data. Larger systems often require several integration
steps.

There are three basic integration test methods:

• All-at-once
• Bottom-up
• Top-down

The all-at-once method provides a useful solution for simple
integration problems, involving a small program possibly using a few
previously tested modules.

Bottom-up testing involves individual testing of each module using a driver routine that calls the module and provides it with needed resources. Bottom-up testing often works well in less structured shops because there is less dependency on the availability of other resources to accomplish the test. It is a more intuitive approach to testing that also usually finds errors in critical routines earlier than the top-down method. However, in a new system many modules must be integrated to produce system-level behavior, so interface errors surface late in the process.

Top-down testing fits a prototyping environment that establishes an initial skeleton, with individual modules filled in as they are completed. The method lends itself to more structured organizations that plan out the entire test process. Although interface errors are found earlier, errors in critical low-level modules can be found later than you would like.

System Testing

The system test phase begins once modules are integrated enough to perform tests
in a whole system environment. System testing can occur in parallel with integration
test, especially with the top-down method.

Performance / Stress Testing

An important phase of system testing, often called load, volume, or performance testing, the stress test tries to determine the failure point of a system under extreme pressure. Stress tests are most useful when systems are being scaled up to larger environments or being implemented for the first time. Web sites, like any other large-scale system that requires multiple accesses and processing, contain vulnerable nodes that should be tested before deployment. Unfortunately, most stress testing can only simulate loads on various points of the system and cannot truly stress the entire network as the users would experience it. Fortunately, once stress and load factors have been successfully overcome, it is only necessary to stress test again if major changes take place.

A drawback of performance testing is that it confirms the system can handle heavy loads, but it cannot so easily determine whether the system is producing the correct information.

Regression Testing

Regression tests confirm that the implementation of changes has not adversely affected other functions. Regression testing is a type of test, as opposed to a phase in testing: regression tests apply at all phases, whenever a change is made.
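
One common pattern (a sketch, not something prescribed by the text) is to pin each previously fixed bug with a permanent test, so that any change that re-introduces the defect fails the suite immediately:

    import unittest

    def parse_amount(text: str) -> float:
        """Hypothetical function that once crashed on inputs with a leading '$'."""
        return float(text.lstrip("$").replace(",", ""))

    class RegressionTests(unittest.TestCase):
        def test_leading_dollar_sign_bug_stays_fixed(self):
            # Guards the fix for a past defect (scenario invented for the example);
            # the whole suite is re-run whenever a change is made.
            self.assertEqual(parse_amount("$1,250.50"), 1250.50)

    if __name__ == "__main__":
        unittest.main()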

Quality Assurance Testing

Some organizations maintain a Quality Group that provides a different point of view,
uses a different set of tests, and applies the tests in a different, more complete test
environment. The group might look to see that organization standards have been
followed in the specification, coding and documentation of the software. They might
check to see that the original requirement is documented, verify that the software
properly implements the required functions, and see that everything is ready for the
users to take a crack at it.

User Acceptance Test and Installation Testing

Traditionally, this is where the users 'get their first crack' at the software. Unfortunately, by this time it's usually too late. If the users have not seen prototypes, been involved with the design, and understood the evolution of the system, they are inevitably going to be unhappy with the result. If one can perform every test as a user acceptance test, there is a much better chance of a successful project.

13

Types of Testing Techniques

White Box Testing Technique

White box testing examines the basic program structure and derives the test data from the program logic, ensuring that all statements and conditions have been executed at least once.

White box tests verify that the software design is valid and also whether the software was built according to the specified design.

Different methods used are:

Statement coverage - executes all statements at least once (each and every line).

Decision coverage - executes each decision direction at least once.

Condition coverage - executes each and every condition in the program with all possible outcomes at least once.
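
To make the distinction between these coverage levels concrete, consider the small invented function below; the calls listed are one possible minimal set for each level:

    def classify(amount: float, is_member: bool) -> str:
        """Invented example: one compound decision (amount > 100 AND is_member)."""
        if amount > 100 and is_member:
            return "discount"
        return "full price"

    # Statement coverage - every statement runs at least once; two calls suffice:
    assert classify(150, True) == "discount"      # executes the "discount" return
    assert classify(50, True) == "full price"     # executes the "full price" return

    # Decision coverage - the `if` as a whole takes both the True and the False
    # direction; the same two calls already achieve that here.

    # Condition coverage - each individual condition takes both outcomes.
    # Because `and` short-circuits, a third call is needed so that
    # is_member is actually evaluated as False:
    assert classify(150, False) == "full price"   # amount > 100 True, is_member False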

Black Box Testing Technique

The black-box test technique treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black box include: behavioral, functional, opaque-box, and closed-box.

Black box testing is conducted on integrated, functional components whose design integrity has been verified through completion of traceable white box tests. Black box testing traces the requirements, focusing on system externals. It validates that the software meets the requirements irrespective of the paths of execution taken to meet each requirement.

Three successful techniques for managing the amount of input data required include:

• Equivalence Partitioning

• Boundary Analysis

• Error Guessing

Equivalence Partitioning:

Equivalence partitioning is the process of methodically reducing the huge (infinite) set of possible test cases into a much smaller, but still equally effective, set. An equivalence class is a subset of data that is representative of a larger class. Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class. When looking for equivalence partitions, think about ways to group similar inputs, similar outputs, and similar operations of the software; these groups are the equivalence partitions.

For example, a program that edits credit limits within a given range ($20,000-$50,000) would have three equivalence classes:

Less than $20,000 (invalid)
Between $20,000 and $50,000 (valid)
Greater than $50,000 (invalid)
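
A minimal sketch of how these three classes translate into test cases (the validator is hypothetical; one representative value per class is enough, because every member of a class should behave the same way):

    def credit_limit_is_valid(amount: float) -> bool:
        """Hypothetical validator for the $20,000-$50,000 range."""
        return 20_000 <= amount <= 50_000

    # One representative per equivalence class
    partitions = [
        (10_000, False),   # class 1: less than $20,000 (invalid)
        (35_000, True),    # class 2: between $20,000 and $50,000 (valid)
        (75_000, False),   # class 3: greater than $50,000 (invalid)
    ]

    for amount, expected in partitions:
        assert credit_limit_is_valid(amount) == expected, amount
    print("All equivalence-class representatives behaved as expected.")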

Boundary value analysis:

If one can safely and confidently walk along the edge of a cliff without falling off, one can almost certainly walk in the middle of a field. If software can operate on the edge of its capabilities, it will almost certainly operate well under normal conditions.

This technique consists of developing test cases and data that focus on the input and output boundaries of a given function. In the same credit limit example, boundary analysis would test:

Low boundary plus or minus one ($19,999 and $20,001)
On the boundary ($20,000 and $50,000)
Upper boundary plus or minus one ($49,999 and $50,001)
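
Continuing the same hypothetical validator, the boundary values above become a compact test table:

    def credit_limit_is_valid(amount: float) -> bool:
        """Hypothetical validator for the $20,000-$50,000 range."""
        return 20_000 <= amount <= 50_000

    # Boundary value analysis: the exact boundaries plus one on either side.
    boundary_cases = [
        (19_999, False), (20_000, True), (20_001, True),    # low boundary
        (49_999, True),  (50_000, True), (50_001, False),   # upper boundary
    ]

    for amount, expected in boundary_cases:
        assert credit_limit_is_valid(amount) == expected, amount
    print("All boundary cases behaved as expected.")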

Error Guessing

This is based on the theory that test cases can be developed from the intuition and experience of the test engineer.

Example: where one of the inputs is a date, a test may try February 29, 2000 or 9/9/99.

Incremental testing

Incremental testing is a disciplined method of testing the interfaces between unit-tested programs as well as between system components. It involves adding unit-tested programs to a given module or component one by one, and testing each resulting combination.

There are two types of incremental testing:

Top-down: - This begins testing from the top of the module hierarchy and works down to the bottom, using interim stubs to simulate lower interfacing modules or programs. Modules are added in descending hierarchical order.

Bottom-up: - This begins testing from the bottom of the hierarchy and works
up to the top. Modules are added in ascending hierarchical order. Bottom-up
testing requires the development of driver modules, which provide the test
input, call the module or program being tested, and display test output.

There are procedures and constraints associated with each of these methods, although bottom-up testing is often thought to be easier to use. Drivers are often easier to create than stubs, and can serve multiple purposes. Output is also often easier to examine in bottom-up testing, as the output always comes from the module directly above the module under test.
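
A minimal sketch of the two scaffolding styles (module names invented): a stub stands in for a not-yet-integrated lower module during top-down testing, while a driver supplies input to and calls the module under test during bottom-up testing:

    # --- Top-down: test the high-level module, stubbing the lower one ---
    def get_exchange_rate_stub(currency: str) -> float:
        """Stub simulating a lower-level module that is not yet integrated."""
        return 1.25  # canned answer, enough to exercise the caller's logic

    def convert(amount: float, currency: str, rate_source=get_exchange_rate_stub) -> float:
        """High-level module under test; normally wired to the real rate module."""
        return amount * rate_source(currency)

    assert convert(100.0, "EUR") == 125.0

    # --- Bottom-up: test the low-level module through a driver ---
    def add_tax(amount: float) -> float:
        """Low-level module under test."""
        return round(amount * 1.08, 2)

    def driver():
        """Driver: supplies test input, calls the module, displays the output."""
        for amount in (0.0, 10.0, 99.99):
            print(f"add_tax({amount}) -> {add_tax(amount)}")

    driver()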

Thread testing

This test technique, which is often used during early integration testing, demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application. Thread testing and incremental testing are usually utilized together: units undergo incremental testing until enough of them are integrated and a single business function can be performed, and that function is then tested as a thread.

Software Development Life Cycle - Phases

Figure: the life cycle runs through Requirements, Design, Development, Internal Testing, Release, User Acceptance, and Maintenance, ending with Closure.

Requirements Analysis process
This process aims at gathering and detailing the customer's software requirements. The techno-functional team takes inputs from the customer and prepares a requirement specification document, which is approved by the customer.
Design process
This process details the design of the software. In this phase all the customer's requirements are translated into a design architecture.
Coding process
This process is executed to translate the design into software code.
Testing process
This process is essential to detect defects before release of the software to the customer.

Release process
This phase is essential to plan the installation and the release. The phase is important as it ensures that the code delivered meets the customer's criteria in all respects, technical and non-technical.

User Acceptance
This phase focuses on functionality testing to check whether the system meets the user acceptance criteria or not.
Maintenance
This phase focuses on post-delivery support provided at the client site. This includes handling change requests and documentation support.
Closure
This process is essential to capture the learnings gained at the end of the project. This phase is important as it ensures that the project resources are released and the metrics are analyzed.

14

Testing Life Cycle

Test Plan Preparation → Test Case Design → Test Execution & Test Log Preparation → Defect Tracking → Test Report Preparation

Test Plan Preparation

The software test plan is the primary means by which software testers communicate to the product development team what they intend to do. The purpose of the software test plan is to prescribe the scope, approach, resources, and schedule of the testing activities, and to identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan.

The test plan is simply a by-product of the detailed planning process that's undertaken to create it. It's the planning that matters, not the resulting document. The ultimate goal of the test planning process is communicating the software test team's intent, its expectations, and its understanding of the testing that's to be performed.

The following are the important topics that help in the preparation of a test plan:

• High-Level Expectations

The first topics to address in the planning process are the ones that
define the test team’s high-level expectations. They are fundamental
topics that must be agreed to, by everyone on the project team, but
they are often overlooked. They might be considered “too obvious” and
assumed to be understood by everyone, but a good tester knows never
to assume anything.

• People, Places and Things

The test plan needs to identify the people working on the project, what they do, and how to contact them. The test team will likely work with all of them, and knowing who they are and how to contact them is very important.

Similarly, where documents are stored, where the software can be downloaded from, where the test tools are located, and so on, need to be identified.

• Inter-Group Responsibilities

Inter-group responsibilities identify tasks and deliverables that potentially affect the test effort. The test team's work is driven by many other functional groups - programmers, project managers, technical writers, and so on. If these responsibilities aren't planned out, the project, and specifically the testing, can suffer, with important tasks being forgotten.

• Test Phases

To plan the test phases, the test team will look at the proposed development model and decide whether unique phases, or stages, of testing should be performed over the course of the project. The test planning process should identify each proposed test phase and make each phase known to the project team. This process often helps the entire team form and understand the overall development model.

• Test Strategy

The test strategy describes the approach that the test team will use to test the software, both overall and in each phase. Deciding on the strategy is a complex task - one that needs to be made by very experienced testers, because it can determine the success or failure of the test effort.

• Bug Reporting

Exactly what process will be used to manage the bugs needs to be
planned so that each and every bug is tracked, from when it's found to
when it's fixed – and never, ever forgotten.

• Metrics and Statistics

Metrics and statistics are the means by which the progress and the
success of the project, and the testing, are tracked. The test planning
process should identify exactly what information will be gathered, what
decisions will be made with them, and who will be responsible for
collecting them.

• Risks and Issues

A common and very useful part of test planning is to identify potential
problems or risky areas of the project – ones that could have an impact
on the test effort.

Test Case Design

The test case design specification refines the test approach and identifies the
features to be covered by the design and its associated tests. It also identifies the
test cases and test procedures, if any, required to accomplish the testing, and
specifies the feature pass or fail criteria. The purpose of the test design specification
is to organize and describe the testing that needs to be performed on a specific feature.

The following topics address this purpose and should be part of the test design
specification that is created:

• Test case ID or identification

A unique identifier that can be used to reference and locate the test
design specification. The specification should also reference the overall
test plan and contain pointers to any other plans or specifications that it
references.

• Test Case Description

It is a description of the software feature covered by the test design
specification, for example, "the addition function of the calculator," "font
size selection and display in WordPad," or "video card configuration
testing of QuickTime."

• Test case procedure

It is a description of the general approach that will be used to test the
features. It should expand on the approach, if any, listed in the test
plan, describe the technique to be used, and explain how the results will
be verified.

• Test case Input or Test Data

It is the input data to be used by the test case. The input may
be in any form. Different inputs can be tried for the same test case to
check whether the entered data is handled correctly.

• Expected result

It describes exactly what constitutes a pass and a fail of the tested
feature – the result expected from the given input.
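
Taken together, the fields above can be captured in a simple record. The sketch below is illustrative only; the field names and values are hypothetical, not taken from any particular tool:

    # A minimal sketch of a test case record built from the fields above.
    # All names and values are hypothetical examples.
    test_case = {
        "id": "Sys_calc_01",                        # unique test case identifier
        "description": "The addition function of the calculator",
        "procedure": "Enter two operands, press '+', read the displayed result",
        "input": {"operand_a": 2, "operand_b": 3},  # test data
        "expected_result": 5,                       # pass/fail criterion
    }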

Test Execution and Test Log Preparation

After test case design, each and every test case is executed and the actual result
obtained. The actual result is then compared with the expected result recorded at the
design stage; if the actual and expected results are the same, the test is passed,
otherwise it is treated as failed.

Now the test log is prepared, which consists of all the data that were recorded,
including whether each test failed or passed. It records each and every test case so
that it will be useful at the time of revision.

Example

Test Case ID     Test Case Description        Test Status/Result
Sys_xyz_01       Checking the login window    Fail
Sys_xyz_02       Checking the main window     Pass
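
As a rough sketch, the pass/fail comparison and the resulting log entries could look like this (all names and values are illustrative only):

    # A minimal sketch of test execution: compare the actual result against
    # the expected result and record the outcome in the test log.
    def execute(test_case, actual_result):
        status = "Pass" if actual_result == test_case["expected_result"] else "Fail"
        return {"id": test_case["id"], "status": status}

    test_log = [
        execute({"id": "Sys_xyz_01", "expected_result": "login window shown"},
                "error dialog shown"),                          # -> Fail
        execute({"id": "Sys_xyz_02", "expected_result": "main window shown"},
                "main window shown"),                           # -> Pass
    ]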

15. Defect Tracking

A defect can be defined in one of two ways. From the producer's viewpoint, a defect
is a deviation from specifications, whether something is missing, wrong, etc. From the
customer's viewpoint, a defect is anything that causes customer dissatisfaction, whether
in the requirements or not; this is known as "fit for use". It is critical that defects identified
at each stage of the project life cycle be tracked to resolution.

Defects are recorded for following major purposes:

• To correct the defect


• To report status of the application
• To gather statistics used to develop defect expectations in future
applications
• To improve the software development process

Most project teams utilize some type of tool to support the defect tracking process.
This tool could be as simple as a white board or a table created and maintained in a
word processor or one of the more robust tools available today, on the market, such
as Mercury's Test Director etc. Tools marketed for this purpose usually come with
some number of customizable fields for tracking project specific data in addition to
the basics. They also provide advanced features such as standard and ad-hoc
reporting, e-mail notification to developers and/or testers when a problem is
assigned to them, and graphing capabilities.

At a minimum, the tool selected should support the recording and communication of
significant information about a defect. For example, a defect log could include:

• Defect ID number
• Descriptive defect name and type
• Source of defect -test case or other source
• Defect severity
• Defect priority
• Defect status (e.g. open, fixed, closed, user error, design, and so on)
-more robust tools provide a status history for the defect
• Date and time tracking for either the most recent status change, or
for each change in the status history
• Detailed description, including the steps necessary to reproduce the
defect
• Component or program where defect was found
• Screen prints, logs, etc. that will aid the developer in resolution
process
• Stage of origination
• Person assigned to research and/or correct the defect
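
As an illustration, the minimum defect log above could be modelled as a simple record; the field names below are hypothetical, not those of any particular tracking tool:

    # A minimal sketch of a defect record with the fields listed above.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Defect:
        defect_id: int
        name: str
        source: str                 # e.g. test case ID or other source
        severity: int               # per the project's predefined definitions
        priority: int               # order in which defects should be fixed
        status: str = "open"        # open, fixed, closed, user error, design, ...
        component: str = ""         # component or program where found
        steps_to_reproduce: str = ""
        status_history: list = field(default_factory=list)

        def set_status(self, new_status: str) -> None:
            # keep a dated status history, as more robust tools do
            self.status_history.append((datetime.now(), self.status))
            self.status = new_status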

Severity versus Priority

The severity of a defect should be assigned objectively by the test team, based on
predefined severity descriptions. For example, a "severity one" defect may be
defined as one that causes data corruption, a system crash, security violations, etc.
In a large project, it may also be necessary to assign a priority to the defect, which
determines the order in which defects should be fixed. The priority assigned to a
defect is usually more subjective, based upon input from users regarding which
defects are most important to them and therefore should be fixed first.

It is recommended that severity levels be defined at the start of the project so that
they are consistently assigned and understood by the team. This foresight can help test
teams avoid the common disagreements with development teams about the
criticality of a defect.

Some general principles

• The primary goal is to prevent defects. Wherever this is not possible
or practical, the goals are to both find the defect as quickly as
possible and minimize the impact of the defect.

• The defect management process, like the entire software
development process, should be risk driven, i.e., strategies, priorities
and resources should be based on an assessment of the risk and the
degree to which the expected impact of risk can be reduced.

• Defect measurement should be integrated into the development
process and be used by the project team to improve the
development process. In other words, information on defects should
be captured at the source as a natural by-product of doing the job.
People unrelated to the project or system should not do it.

• As much as possible, the capture and analysis of the information
should be automated. There should be a document which includes a
list of tools that have defect management capabilities and can be
used to automate some of the defect management processes.

• Defect information should be used to improve the process. This, in
fact, is the primary reason for gathering defect information.

• Imperfect or flawed processes cause most defects. Thus, to prevent
defects, the process must be altered.

The Defect Management Process

The key elements of a defect management process are as follows.

• Defect prevention
• Deliverable base-lining
• Defect discovery/defect naming
• Defect resolution
• Process improvement
• Management reporting

Defect Prevention → Deliverable Baseline → Defect Discovery → Defect Resolution → Process Improvement, with Management Reporting spanning all stages
16. Test Reports

A final test report should be prepared at the conclusion of each test activity. This
might include

• Individual Project Test Report (e.g., a single software system)


• Integration Test Report
• System Test Report
• Acceptance Test Report

The test reports are designed to document the results of testing as defined in the
test plan. Without a well-developed test plan, which has been executed in
accordance with its criteria, it is difficult to develop a meaningful test report.

It is designed to accomplish three objectives:


• Define the scope of testing - normally a brief recap of the test plan;
• Present the results of testing; and
• Draw conclusions and make recommendations based on those
results

The test report may be a combination of electronic data and hard copy. For example,
if the function test matrix is maintained electronically, there is no reason to print
it, as the paper report will summarize that data, draw the appropriate
conclusions, and present recommendations.

The test report has one immediate and three long-term purposes. The immediate
purpose is to provide information to the customers of the software system so that
they can determine whether the system is ready for production; and if so, to assess
the potential consequences and initiate appropriate actions to minimize those
consequences.

The first of the three long-term uses is for the project to trace problems in the event
the application malfunctions in production. Knowing which functions have been
correctly tested and which ones still contain defects can assist in taking corrective
action.

The second long-term purpose is to use the data to analyze the rework process for
making changes to prevent defects from occurring in the future. This is done by
accumulating the results of many test reports to identify which components of the
rework process are defect-prone. These defect-prone components identify tasks/steps
that, if improved, could eliminate or minimize the occurrence of high-frequency defects.

The third long-term purpose is to show what was accomplished.

Individual Project Test Report

These reports focus on individual projects (e.g., software system). When different
testers test individual projects, they should prepare a report on their results.

Integration Test Report

Integration testing tests the interfaces between individual projects. A good test plan
will identify the interfaces and institute test conditions that will validate interfaces.
Given this, the interface report follows the same format as the individual Project Test
report, except that the conditions tested are the interfaces.

System Test Report

The system test plan identified the objectives of testing, what was to be
tested, how it was to be tested, and when tests should occur. The System Test report
should present the results of executing that test plan. If the plan is maintained
electronically, it need only be referenced, not included in the report.

Acceptance Test Report

There are two primary objectives for testing. The first is to ensure that the system as
implemented meets the real operating needs of the user or customer. If the defined
requirements are those true needs, the testing should have accomplished this
objective. The second objective is to ensure that the software system can operate in
the real-world user environment, which includes people skills and attitudes, time
pressures, changing business conditions, and so forth.

Eight Interim Reports:

1. Functional Testing Status
2. Functions Working Timeline
3. Expected versus Actual Defects Detected Timeline
4. Defects Detected versus Corrected Gap Timeline
5. Average Age of Detected Defects by Type
6. Defect Distribution
7. Relative Defect Distribution
8. Testing Action

Functional Testing Status Report

This report will show percentages of the functions, which have been:

• Fully Tested
• Tested With Open Defects
• Not Tested

Functions Working Timeline report

This report will show the actual plan to have all functions working versus the current
status of functions working. An ideal format could be a line graph.

Expected versus Actual Defects Detected report

This report will provide an analysis of the number of defects being generated
against the number of defects expected in the planning stage.

Defects Detected versus Corrected Gap report

This report, ideally in a line graph format, will show the number of defects uncovered
versus the number of defects corrected and accepted by the testing group. If
the gap grows too large, the project may not be ready when originally planned.

Average Age of Detected Defects by Type report

This report will show the average outstanding defects by type (severity 1, severity 2,
etc.). In the planning stage, it is beneficial to determine the acceptable number of
open days by defect type.

Defect Distribution report

This report will show the defect distribution by function or module. It can also include
items such as numbers of tests completed.

Relative Defect Distribution report

This report will take the previous report (Defect Distribution) and normalize the level
of defects. For example, one application might be more in-depth than another, and
would probably have a higher level of defects. However, when the defect counts are
normalized over the number of functions or lines of code, they show a more
accurate level of defects.

Testing action report

This report can show many different things, including possible shortfalls in testing.
Examples of data to show might be the number of severe defects, tests that are behind
schedule, and other information that would present an accurate testing picture.

17. Software Metric

Effective management of any process requires quantification, measurement, and
modeling. Software metrics provide a quantitative basis for the development and
validation of models of the software development process. Metrics can be used to
improve software productivity and quality. This module introduces the most
commonly used software metrics and reviews their use in constructing models of the
software development process.

Definition of Software Metrics

A metric is a mathematical number that shows a relationship between two variables. It
is a quantitative measure of the degree to which a system, component or process
possesses a given attribute. Software metrics are measures used to quantify the
software, the software development resources, and the software development process.

Metrics are generally classified into two types:

• Process Metric
• Product Metric

Process Metric: a metric used to measure the characteristics of the methods,
techniques and tools employed in developing, implementing and maintaining the
software system.

Product Metric: a metric used to measure the characteristics of the documentation
and code.

The metrics for the test process would include status of test activities against the
plan, test coverage achieved so far, among others. An important metric is the
number of defects found in internal testing compared to the defects found in
customer tests, which indicate the effectiveness of the test process itself.

Test Metrics

The following metrics are collected in the testing process:

User Participation = User Participation Test Time / Total Test Time

Paths Tested = Number of Paths Tested / Total Number of Paths

Acceptance Criteria Tested = Acceptance Criteria Verified / Total Acceptance Criteria

Cost to Locate a Defect = Test Cost / Number of Defects Located in Testing
(this metric shows the cost to locate a detected defect)

Production Defects = Number of Defects Detected in Production / Application System Size

Test Automation = Cost of Manual Test Effort / Total Test Cost
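
As a small worked example, the ratios above can be computed directly; all sample figures below are invented for illustration only:

    # A minimal sketch computing the test metrics defined above,
    # using invented sample figures.
    paths_tested = 45 / 60                  # paths tested / total paths = 0.75
    acceptance_tested = 18 / 20             # criteria verified / total criteria = 0.90
    cost_to_locate_defect = 50_000 / 125    # test cost / defects found = 400.0 per defect
    production_defects = 12 / 400           # production defects / system size (e.g. per KLOC)
    test_automation = 30_000 / 50_000       # manual test effort cost / total test cost = 0.60

    print(f"Cost to locate a defect: {cost_to_locate_defect:.2f}")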

18. Other Testing Terms

Usability Testing

Determines how well the user will be able to understand and interact with the
system. It identifies areas of poor human factors design that may make the system
difficult to use. Ideally, this test is conducted on a system prototype before
development actually begins. If a navigational or operational prototype is not
available, screen prints of all of the application's screens or windows can be used to
walk the user through various business scenarios.

Conversion Testing

Specifically designed to validate the effectiveness of the conversion process. This test
may be conducted jointly by developers and testers during integration testing, or at
the start of system testing, since system testing must be conducted with the
converted data. Field-to-field mapping and data translation are validated, ideally
using a full copy of production data in the test.

Vendor Validation Testing

Verifies that the functionality of contracted or third party software meets the
organization's requirements, prior to accepting it and installing it into a production
environment. This test can be conducted jointly by the software vendor and the test
team, and focuses on ensuring that all requested functionality has been delivered.

Stress / Load Testing

Conducted to validate that the application, database, and network can handle
projected volumes of users and data effectively. The test is conducted jointly by
developers, testers, DBAs and network associates after system testing. During
the test, the complete system is subjected to environmental conditions that exceed
normal expectations, to answer questions such as:

• How large can the database grow before performance degrades?


• At what point will more storage space be required?
• How many users can use the system simultaneously before it slows
down or fails?

Performance Testing

Usually conducted in parallel with stress and load testing in order to measure
performance against specified service-level objectives under various conditions. For
instance, one may need to ensure that batch processing will complete within the
allocated amount of time, or that on-line response times meet performance
requirements.

Recovery Testing

Evaluates the contingency features built into the application for handling interruptions
and for returning to specific points in the application processing. Any restoration and
restart capabilities are also tested here. The test team may conduct this test during
system test, or another team specifically gathered for this purpose may conduct it.

Configuration Testing

In the IT industry, a large percentage of new applications are either client/server or
web-based, so it must be validated that they will run on the various combinations of
hardware and software. For instance, configuration testing for a web-based application
would incorporate versions and releases of operating systems, internet browsers, modem
speeds, and various off-the-shelf applications that might be integrated (e.g., an e-mail
application).

Benefits Realization Test

With the increased focus on the value of business returns obtained from investments
in information technology, this type of test or analysis is becoming more critical. The
Benefits Realization Test is a test or analysis conducted after an application is moved
into production, in order to determine whether the application is likely to deliver the
originally projected benefits. The analysis is usually conducted by the business user or
client group who requested the project, and results are reported back to executive
management.

19. Test Standards

External Standards: familiarity with and adoption of industry test standards from
external organizations.

Internal Standards: development and enforcement of the test standards that testers
must meet.

IEEE

• Institute of Electrical and Electronics Engineers


• Founded in 1884
• Have an entire set of standards devoted to Software
• Testers should be familiar with all the standards mentioned in IEEE.

IEEE Standards that a tester should be aware of:

1. 610.12-1990 IEEE Standard Glossary of Software Engineering Terminology

2. 730-1998 IEEE Standard for Software Quality Assurance Plans

3. 828-1998 IEEE Standard for Software Configuration Management Plans

4. 829-1998 IEEE Standard for Software Test Documentation

5. 830-1998 IEEE Recommended Practice for Software Requirements Specifications

6. 1008-1987 (R1993) IEEE Standard for Software Unit Testing (ANSI)

7. 1012-1998 IEEE Standard for Software Verification and Validation

8. 1012a-1998 IEEE Standard for Software Verification and Validation – Supplement to 1012-1998: Content Map to IEEE/EIA 12207.1

9. 1016-1998 IEEE Recommended Practice for Software Design Descriptions

10. 1028-1997 IEEE Standard for Software Reviews

11. 1044-1993 IEEE Standard Classification for Software Anomalies

12. 1045-1992 IEEE Standard for Software Productivity Metrics (ANSI)

13. 1058-1998 IEEE Standard for Software Project Management Plans

14. 1058.1-1987 IEEE Standard for Software Management

15. 1061-1998 IEEE Standard for a Software Quality Metrics Methodology

Other Standards:

• ISO – International Organization for Standardization

• SPICE – Software Process Improvement and Capability Determination

• NIST – National Institute of Standards and Technology

• DoD – Department of Defense

Internal Standards

The use of Standards...

• Simplifies communication
• Promotes consistency and uniformity
• Eliminates the need to invent yet another solution to the same
problem
• Provides continuity
• Presents a way of preserving proven practices
• Supplies benchmarks and framework

20. Web Testing

Introduction

Web testing is mainly concerned with six areas:

• Usability
• Functionality
• Server side Interface
• Client side Compatibility
• Performance
• Security

Usability

One of the reasons the web browser is being used as the front end to applications is
its ease of use. Users who have been on the web before will probably know how to
navigate a well-built web site. While concentrating on this portion of testing, it is
important to verify that the application is easy to use. Many believe that this is the
least important area to test, but the site should be easy to use. Even if the web site
is simple, there will always be someone who needs some clarification. Additionally,
the documentation also needs to be verified, so that the instructions are correct.

The following are some of the things to be checked for easy navigation through
the website:

• Site map or navigational bar
Does the site have a map? Sometimes power users know exactly where they want
to go and don't want to go through lengthy introductions. Or new users get lost
easily. Either way a site map and/or ever-present navigational map can guide the
user. The site map needs to be verified for its correctness. Does each link on the
map actually exist? Are there links on the site that are not represented on the
map? Is the navigational bar present on every screen? Is it consistent? Does each
link work on each page? Is it organized in an intuitive manner?

59
• Content

To a developer, functionality comes before wording. Anyone can slap together
some fancy mission statement later, but while they are developing, they just need
some filler to verify alignment and layout. Unfortunately, text produced like this may
sneak through the cracks. It is important to check with the public relations
department on the exact wording of the content; otherwise, the company can get
into a lot of trouble legally. One has to make sure the site looks professional.
Overuse of bold text, big fonts and blinking can turn away a customer quickly. It
might be a good idea to have a graphic designer look over the site during User
Acceptance Testing. Finally, one has to make sure that any time a web reference is
given, it is hyperlinked. Plenty of sites ask users to email them at a specific
address or to download a browser from an address, but if the user can't click on it,
they are going to be annoyed.

• Colors/backgrounds

Ever since the web became popular, everyone thinks they are a graphic designer.
Unfortunately, some developers are more interested in their new backgrounds,
than ease of use. Sites will have yellow text on a purple picture of a fractal pattern.
This may seem "pretty neat", but it's not easy to use. Usually, the best idea is to
use little or no background. If there is a background, it might be a single color on
the left side of the page, containing the navigational bar. But, patterns and pictures
distract the user.

• Images

Whether it's a screen grab or a little icon that points the way, a picture is worth a
thousand words. Sometimes, the best way to tell the user something is to simply
show them. However, bandwidth is precious to the client and the server, so you
need to conserve memory usage. Do all the images add value to each page, or do
they simply waste bandwidth? Can a different file type (.GIF, .JPG) be used for 30k
less? In general, one doesn't want large pictures on the front page, since most
users who abandon a load will do it on the front page. If the front page is available
quickly, it will increase the chance they will stay.

60
• Tables

It has to be verified that tables are set up properly. Does the user constantly have
to scroll right to see the price of the item? Would it be more efficient to put the
price closer to the left and put minuscule details to the right? Are the columns wide
enough or does every row have to wrap around? Are certain rows excessively high
because of one entry? These are some of the points to be taken care of.

• Wrap-around

Finally, it has to be verified whether the wrap-around occurs properly. If the text
refers to a picture on the right, make sure the picture is on the right. Make sure
that widow and orphan sentences and paragraphs don't layout in an awkward
manner because of pictures.

Functionality

The functionality of the web site is why the company hired a developer and not just
an artist. This is the part that interfaces with the server and actually "does stuff".

• Links

A link is the vehicle that gets the user from page to page. Two things have to be
verified for each link: that the link brings the user to the page it said it would, and
that the page it links to exists. It may sound a little silly, but many web sites exist
with internal broken links.
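
A rough sketch of automating such a link check is shown below; it assumes the third-party requests and beautifulsoup4 packages, which are an illustrative choice rather than a required toolset:

    # A minimal sketch of link checking: find every anchor on a page and
    # confirm that its target exists (i.e., does not return an HTTP error).
    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    def broken_links(page_url):
        html = requests.get(page_url, timeout=10).text
        for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            target = urljoin(page_url, anchor["href"])
            status = requests.head(target, timeout=10, allow_redirects=True).status_code
            if status >= 400:
                yield target, status   # the page links here, but the target is broken
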
• Forms
When a user submits information through a form, it needs to work properly.
The submit button needs to work. If the form is for an online registration, the user
should be given login information (that works) after successful completion. If the
form gathers shipping information, it should be handled properly and the customer
should receive their package. In order to test this, you need to verify that the
server stores the information properly and that systems down the line can interpret
and use that information.

• Data verification
If the system verifies user input according to business rules, then that needs
to work properly. For example, a State field may be checked against a list of valid
values. If this is the case, you need to verify that the list is complete and that the
program actually calls the list properly (add a bogus value to the list and make
sure the system accepts it).
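
A minimal sketch of such a business-rule check follows; the state list and function name are hypothetical stand-ins for the system under test:

    # A minimal sketch of validating a State field against a list of
    # valid values. The list here is deliberately incomplete.
    VALID_STATES = {"AL", "AK", "AZ", "CA", "NY", "TX"}   # full list in practice

    def is_valid_state(value: str) -> bool:
        return value.strip().upper() in VALID_STATES

    assert is_valid_state("ca")          # a valid value is accepted
    assert not is_valid_state("ZZ")      # a value outside the list is rejected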

• Cookies

Most users only like the kind with sugar, but developers love web cookies. If the
system uses them, you need to check them. If they store login information, make
sure the cookies work and that the information is encrypted in the cookie file. If the
cookie is used for statistics, verify that totals are being counted properly. And you'll
probably want to make sure those cookies are encrypted too; otherwise people can
edit their cookies and skew your statistics.

• Application specific functional requirements

Most importantly, one may want to verify the application-specific functional
requirements. Try to perform all functions a user would: place an order, change an
order, cancel an order, check the status of the order, change shipping information
before an order is shipped, pay online, ad nauseam. This is why users will show up
on the developer's doorstep, so one needs to make sure that the user can do what is
advertised.

Server side Interface

Many times, a web site is not an island. The site will call external servers for
additional data, verification of data or fulfillment of orders.

• Server interface

The first interface to test is the one between the browser and the server.
Transactions should be attempted, then the server logs viewed to verify that what
is seen in the browser is actually happening on the server. It's also a good idea to
run queries on the database to make sure the transaction data is being stored
properly.

• External interfaces

Some web systems have external interfaces. For example, a merchant might verify
credit card transactions real-time in order to reduce fraud. Several test
transactions may have to be sent using the web interface. Try credit cards that are
valid, invalid, and stolen. If the merchant only takes Visa and MasterCard, try using
a Discover card. (A simple client-side script can check 3 for American Express, 4
for Visa, 5 for MasterCard, or 6 for Discover, before the transaction is sent.)

Basically, it has to be ensured that the software can handle every possible message
returned by the external server.

• Error handling

One of the areas most often left untested is interface error handling. Usually we try
to make sure our system can handle all our errors, but we never plan for the other
systems' errors or for the unexpected. Try leaving the site mid-transaction – what
happens? Does the order complete anyway? Try losing the Internet connection
from the user to the server. Try losing the connection from the server to the credit
card verification server. Is there proper error handling for all these situations? Are
charges still made to credit cards? If the interruption is not user initiated, does the
order get stored so customer service reps can call back if the user doesn't come
back to the site?

Client side Compatibility

It has to be verified that the application can work on the machines your customers
will be using. If the product is going to the web for the world to use, every operating
system, browser, video setting and modem speed has to be tried with various
combinations.

• Operating systems

Does the site work on both Mac and IBM compatibles? Some fonts are not
available on both systems, so make sure that secondary fonts are selected. Make
sure that the site doesn't use plug-ins only available for one OS, if users will be on
both.

• Browsers

Does the site work with Netscape? Internet Explorer? Lynx? Some HTML
commands or scripts only work for certain browsers. Make sure there are alternate
tags for images, in case someone is using a text browser. If SSL security is used, it
works with browsers 3.0 and higher, but it has to be verified that there is a
message for those using older browsers.

• Video settings

Does the layout still look good at 640x480 or 800x600? Are fonts too small to
read? Are they too big? Does all the text and graphic alignment still work?

• Modem/connection speeds

Does it take 10 minutes to load a page with a 28.8 modem, even though it was
only tested over high-speed connections? Users will expect long download times
when they are grabbing documents or demos, but not on the front page. It has to
be ensured that the images aren't too large. Make sure that marketing doesn't put
50k of size-6-font keywords for search engines.

• Printers

Users like to print. The concept behind the web should save paper and reduce
printing, but most people would rather read on paper than on the screen. So, you
need to verify that the pages print properly. Sometimes images and text align on
the screen differently than on the printed page. It has to be verified that order
confirmation screens can be printed properly.

• Combinations

Different combinations have to be tried. Maybe 800x600 looks good on the Mac but
not on the IBM. Maybe IBM with Netscape works, but not with Linux. If the web
site will be used internally, it might make testing a little easier. If the company has
an official web browser choice, then it has to be verified that the site works for that
browser. If everyone has a high-speed connection, load times need not be
checked. (But it has to be kept in mind that some people may dial in from home.)
With internal applications, the development team can make disclaimers about
system requirements and only support those system setups. But, ideally, the site
should work on all machines, so as not to limit growth and changes in the future.

Performance Testing

It needs to be verified that the system can handle a large number of users at the
same time, a large amount of data from each user, and a long period of continuous
use.
Accessibility is extremely important to users: if they get a "busy signal", they hang
up and call the competition. Not only must the system be checked so that customers
can gain access; many times hackers will attempt to gain access to a system by
overloading it. For the sake of security, the system needs to know what to do when
it's overloaded, not simply blow up.

• Concurrent users at the same time

If the site has just put up the results of a national lottery, it had better handle
millions of users right after the winning numbers are posted. A load test tool can
simulate concurrent users accessing the site at the same time.
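
For illustration, concurrent access can be approximated on a very small scale with threads; a real load test tool scales far beyond this sketch, and the URL and user count below are placeholders:

    # A minimal sketch of simulating concurrent users with threads.
    import threading
    import requests

    def one_user(url, results, index):
        results[index] = requests.get(url, timeout=30).status_code

    def simulate(url, n_users=100):
        results = [None] * n_users
        threads = [threading.Thread(target=one_user, args=(url, results, i))
                   for i in range(n_users)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results.count(200), n_users   # successful hits vs attempted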

• Large amount of data from each user

Most customers may only order 1–5 books from your new online bookstore, but
what if a university bookstore decides to order 5000 copies of Intro to Psychology?
Or what if one user wants to send a gift to a large number of his/her friends for
Christmas (separate mailing addresses for each, of course)? Can the system handle
large amounts of data from a single user?

• Long period of continuous use

If the site is intended to take orders for a specific occasion, it had better hold up
well throughout the period before the occasion. If the site offers web-based email,
it had better run for months or even years without downtime. An automated test
tool will probably be required to implement these types of tests, since they are
difficult to do manually. Imagine coordinating 100 people to hit the site at the
same time; now try 100,000 people. Generally, the tool will pay for itself the
second time it is used. Once the tool is set up, running another test is just a click
away.

Security

Even if credit card payments are not accepted, security is very important. The web
site may be the only exposure some customers have to a company, and if that
exposure is a hacked page, the customers won't feel safe doing business with the
company over the internet.

• Directory setup

The most elementary step of web security is proper setup of directories. Each
directory should have an index.html or main.html page so a directory listing
doesn't appear.

• SSL (Secured Socket Layer)

Many sites use SSL for secure transactions. When entering an SSL site, there will
be a browser notice, and the HTTP in the location field on the browser will change
to HTTPS. If the development group uses SSL, it is to be ensured that there is an
alternate page for browsers with versions less than 3.0, since SSL is not compatible
with those browsers. Sufficient warnings on entering and leaving the secured
site are to be provided. Also, it needs to be checked whether there is a time-out
limit, and what happens if the user tries a transaction after the timeout.

• Logins
In order to validate users, several sites require customers to login. This makes it
easier for the customer, since they don't have to re-enter personal information
every time. You need to verify that the system does not allow invalid
usernames/passwords and that it does allow valid logins. Is there a maximum number
of failed logins allowed before the server locks out the current user? Is the lockout
based on IP? What happens after the maximum number of failed login attempts?
What are the rules for password selection? These need to be checked.

• Log files
Behind the scenes, it needs to be verified that server logs are working properly.
Does the log track every transaction? Does it track unsuccessful login attempts?
Does it only track stolen credit card usage? What does it store for each
transaction? IP address? User name?

• Scripting languages

Scripting languages are a constant source of security holes, and the details are
different for each language. Some allow access to the root directory. Others only
allow access to the mail server, but a resourceful hacker could mail the server's
username and password files to themselves. Find out what scripting languages are
being used and research their loopholes. It might also be a good idea to subscribe to
a security newsgroup that discusses the language being tested.

Conclusion

Whether an Internet, intranet or extranet application is being tested, testing for
the web can be more challenging than for non-web applications. Users have high
expectations for web page quality. In many cases, the page is up for public relations
just as much as for functionality, so the impression must be perfect.

21. Testing Terms

Application: A single software product that may or may not fully support a
business function.

Audit: This is an inspection/assessment activity that verifies compliance with

plans, policies, and procedures, and ensures that resources are conserved. Audit

is a staff function; it serves as the "eyes and ears" of management.

Baseline: A quantitative measure of the current level of performance.

Benchmarking: Comparing your company's products, services, or processes

against best practices, or competitive practices, to help define superior

performance of a product, service, or support process.

Benefits Realization Test: A test or analysis conducted after an application is

moved into production to determine whether it is likely to meet the originating

business case.

Black-box Testing: A test technique that focuses on testing the functionality of

the program, component, or application against its specifications without

knowledge of how the system is constructed; usually data or business process

driven.

Boundary Value Analysis: A data selection technique in which test data is chosen

from the "boundaries" of the input or output domain classes, data structures, and

procedure parameters. Choices often include the actual minimum and maximum

boundary values, the maximum value plus or minus one, and the minimum value

plus or minus one.
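
As a brief sketch, these boundary values for a numeric input domain can be generated mechanically; the salary range below is purely illustrative:

    # A minimal sketch of boundary value analysis data selection for an
    # input domain with a known minimum and maximum.
    def boundary_values(minimum, maximum):
        return [minimum - 1, minimum, minimum + 1,
                maximum - 1, maximum, maximum + 1]

    print(boundary_values(10_000, 15_000))
    # [9999, 10000, 10001, 14999, 15000, 15001]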

Bug: A catchall term for all software defects or errors.

Certification: Acceptance of software by an authorized agent after the software has

been validated by the agent or after its validity has been demonstrated to the agent.

Check sheet: A form used to record data as it is gathered.

Checkpoint: A formal review of key project deliverables. One checkpoint is defined

for each key project deliverable, and verification and validation must be done for

each of these deliverables that is produced.

Condition Coverage: A white-box testing technique that measures the number or
percentage of condition outcomes covered by the test cases designed. 100% condition
coverage would indicate that every possible outcome of each condition had been executed
at least once during testing.

Configuration Testing: Testing of an application on all supported hardware and

software platforms. This may include various combinations of hardware types,

configuration settings, and software versions.

Cost of Quality (COQ): Money spent above and beyond expected production costs
(labor, materials, equipment) to ensure that the product the customer receives is a quality
(defect-free) product. The Cost of Quality includes prevention, appraisal, and correction or
repair costs.

Conversion Testing: Validates the effectiveness of data conversion processes, including

field-to-field mapping, and data translation.

Decision Coverage: A white-box testing technique that measures the number or
percentage of decision directions executed by the test cases designed. 100% decision
coverage would indicate that all decision directions had been executed at least once
during testing. Alternatively, each logical path through the program can be tested. Often,
paths through the program are grouped into a finite set of classes, and one path from
each class is tested.

Decision/Condition Coverage: A white-box testing technique that executes possible

combinations of condition outcomes in each decision.

Defect: Operationally, it is useful to work with two definitions of a defect: (1) From the

producer's viewpoint: a product requirement that has not been met or a product attribute

possessed by a product or a function performed by a product that is not in the statement

of requirements that define the product; or (2) From the customer's viewpoint: anything

that causes customer dissatisfaction, whether in the statement of requirements or not.

Driver: Code that sets up an environment and calls a module for test.

Defect Tracking Tools: Tools for documenting defects as they are found during

testing and for tracking their status through to resolution.

Desk Checking: The most traditional means for analyzing a system or a program.

The developer of a system or program conducts desk checking. The process involves

reviewing the complete product to ensure that it is structurally sound and that the

standards and requirements have been met. This tool can also be used on artifacts

created during analysis and design.

Entrance Criteria: Required conditions and standards for work product quality that

must be present or met for entry into the next stage of the software development

process.

Equivalence Partitioning: A test technique that utilizes a subset of data that is
representative of a larger class. This is done in place of undertaking exhaustive testing of
each value of the larger class of data. For example, a business rule that indicates that a
program should edit salaries within a given range ($10,000–$15,000) might have three
equivalence classes to test:

Less than $10,000 (invalid)
Between $10,000 and $15,000 (valid)
Greater than $15,000 (invalid)
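
A small sketch of this partitioning, with one representative value per class standing in for exhaustive testing, might look as follows (the function name and values are illustrative):

    # A minimal sketch of the three equivalence classes above.
    def salary_class(salary):
        if salary < 10_000:
            return "invalid (too low)"
        if salary <= 15_000:
            return "valid"
        return "invalid (too high)"

    for representative in (9_000, 12_500, 20_000):   # one value per class
        print(representative, "->", salary_class(representative))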

Error or Defect: 1. A discrepancy between a computed, observed, or measured value
or condition and the true, specified, or theoretically correct value or condition. 2.
Human action that results in software containing a fault (e.g., omission or
misinterpretation of user requirements in a software specification, incorrect
translation, or omission of a requirement in the design specification).

Error Guessing: A data selection technique for picking values that seem likely to
cause defects. This technique is based upon the theory that test cases and test data can be
developed based on the intuition and experience of the tester.

Exhaustive Testing: Executing the program through all possible combinations of
values for program variables.

Exit Criteria: Standards for work product quality which block the promotion of
incomplete or defective work products to subsequent stages of the software
development process.

Functional Testing: Application of test data derived from the specified functional

requirements without regard to the final program structure.

Inspection: A formal assessment of a work product conducted by one or more qualified

independent reviewers to detect defects, violations of development standards, and other

problems. Inspections involve authors only when specific questions concerning

deliverables exist. An inspection identifies defects, but does not attempt to correct them.

Authors take corrective actions and arrange follow-up reviews as needed.

Integration Testing: This test begins after two or more programs or application
components have been successfully unit tested. It is conducted by the development team
to validate the technical quality or design of the application. It is the first level of testing
which formally integrates a set of programs that communicate among themselves via
messages or files (a client and its server(s), a string of batch programs, or a set of on-line
modules within a dialog or conversation).

Life Cycle Testing: The process of verifying the consistency, completeness, and

correctness of software at each stage of the development lifecycle.

Performance Test: Validates that both the on-line response time and batch run times

meet the defined performance requirements.

Quality: A product is a quality product if it is defect free. To the producer, a product is a

quality product if it meets or conforms to the statement of requirements that defines the

product. This statement is usually shortened to: quality means meets requirements. From

a customer's perspective, quality means "fit for use".

Quality Assurance (QA): The set of support activities (including facilitation, training,

measurement, and analysis) needed to provide adequate confidence that processes are

established and continuously improved to produce products that meet specifications and

are fit for use.

Quality Control (QC): The process by which product quality is compared with

applicable standards, and the action taken when nonconformance is detected. Its focus is

defect detection and removal. This is a line function; that is, the performance of these

tasks is the responsibility of the people working within the process.

Recovery Test: Evaluates the contingency features built into the application for handling
interruptions and for returning to specific points in the application processing cycle,
including checkpoints, backups, restores, and restarts. This test also assures that disaster
recovery is possible.

Regression Testing: Regression testing is the process of retesting software to

detect errors that may have been caused by program changes. The technique

requires the use of a set of test cases that have been developed to test all of the

software's functional capabilities.

Stress Testing: This test subjects a system, or components of a system, to varying
environmental conditions that defy normal expectations, for example: high
transaction volume, large database size, or restart/recovery circumstances. The
intention of stress testing is to identify constraints and to ensure that there are no
performance problems.

Structural Testing: A testing method in which the test data are derived solely from

the program structure.

Stub: Special code segments that, when invoked by a code segment under testing,
simulate the behavior of designed and specified modules not yet constructed.

System test: During this event, the entire system is tested to verify that all

functional, information, structural and quality requirements have been met. A

predetermined combination of tests is designed that, when executed successfully,

satisfy management that the system meets specifications. System testing verifies

the functional quality of the system in addition to all external interfaces, manual

procedures, restart and recovery, and human-computer interfaces. It also verifies

that interfaces between the application and open environment work correctly, that

JCL functions correctly, and that the application functions appropriately with the

Database Management System, Operations environment, and any communications

systems.

Test Case:

A test case is a document that describes an input, action, or event and an expected

response, to determine if a feature of an application is working correctly. A test case

should contain particulars such as test case identifier, test case name, objective, test

conditions/setup, input data requirements, steps, and expected results.

Test Case Specification: An individual test condition, executed as part of a larger test,
that contributes to the test's objectives. Test cases document the input, expected results,
and execution conditions of a given test item. Test cases are broken down into one or more
detailed test scripts and test data conditions for execution.

Test Data Set: Set of input elements used in the testing process

Test Design Specification: A document that specifies the details of the test

approach for a software feature or a combination of features and identifies the

associated tests.

Test Item: A software item that is an object of testing.

Test Log: A chronological record of relevant details about the execution of tests.

Test Plan: A document describing the intended scope, approach, resources, and

schedule of testing activities. It identifies test items, the features to be tested, the

testing tasks, the personnel performing each task, and any risks requiring

contingency planning.

Test Procedure Specification: A document specifying a sequence of actions for

the execution of a test.

Test Summary Report: A document that describes testing activities and results and
evaluates the corresponding test items.

Testing: Examination by manual or automated means of the behaviour of a program

by executing the program on sample data sets to verify that it satisfies specified

requirements or to verify differences between expected and actual results.

Test Scripts: A tool that specifies an order of actions that should be performed

during a test session. The script also contains expected results. Test scripts may be

manually prepared using paper forms, or may be automated using capture/playback

tools or other kinds of automated scripting tools.

Usability Test: The purpose of this event is to review the application user interface

and other human factors of the application with the people who will be using the

application. This is to ensure that the design (layout and sequence, etc.) enables the

business functions to be executed as easily and intuitively as possible. This review

includes assuring that the user interface adheres to documented User Interface

standards, and should be conducted early in the design stage of development.

Ideally, an application prototype is used to walk the client group through various

business scenarios, although paper copies of screens, windows, menus, and reports

can be used.

User Acceptance Test: User Acceptance Testing (UAT) is conducted to ensure that

the system meets the needs of the organization and the end user/customer. It

validates that the system will work as intended by the user in the real world, and is

based on real world business scenarios, not system requirements. Essentially, this

test validates that the RIGHT system was built.

Validation: Determination of the correctness of the final program or software

produced from a development project with respect to the user needs and

requirements. Validation is usually accomplished by verifying each stage of the

software development life cycle.

Verification:
I) The process of determining whether the products of a given phase of

the software development cycle fulfill the requirements established

during the previous phase.

II) The act of reviewing, inspecting, testing, checking, auditing, or

otherwise establishing and documenting whether items, processes,

services, or documents conform to specified requirements.

Walkthrough: A manual analysis technique in which the module author describes
the module's structure and logic to an audience of colleagues. The technique focuses on
error detection, not correction, and will usually use a formal set of standards or criteria
as the basis of the review.

White-box Testing: A testing technique that assumes that the path of the logic in a

program unit or component is known. White-box testing usually consists of testing

paths, branch by branch, to produce predictable results. This technique is usually

used during tests executed by the development team, such as Unit or Component

testing.

22. Technical Questions

1. What is Software Testing?

The process of exercising or evaluating a system or system component by manual or


automated means to verify that it satisfies specified requirements or to identify
differences between expected and actual results.

2. What is the Purpose of Testing?

• To uncover hidden errors


• To achieve the maximum usability of the system
• To Demonstrate expected performance of the system.

3. What types of testing do testers perform?

Black box testing and white box testing are the basic types of testing that testers
perform. Apart from these, they also perform many other tests, such as ad-hoc testing,
cookie testing, CET (Customer Experience Test), client-server tests, configuration tests,
compatibility testing, and conformance testing.

4. What is the Outcome of Testing?

A stable application, performing its task as expected.

5. What is the need for testing?

The primary need is to verify that the requirements are satisfied by the functionality,
and also to answer two questions:
• Whether the system is doing what it is supposed to do?
• Whether the system is not performing what it is not supposed to do?

6. What are the entry criteria for Functionality and Performance testing?

Functional testing: Functional Specification / BRS (CRS) / User Manual; an integrated
application, stable for testing.

7. Why do you go for White box testing, when Black box testing is available?

A benchmark that certifies commercial (business) aspects as well as functional
(technical) aspects is the objective of black box testing. Loops, structures, arrays,
conditions, files, etc. are very micro level, but they are the basement (foundation) of
any application, so white box testing takes these things up and tests them.

8. What are the entry criteria for Automation testing?

The application should be stable, and a clear design and flow of the application are needed.

9. What is Baseline document, Can you say any two?

A baseline document is one from which a tester builds an understanding of the
application before actual testing starts. Examples: the Functional Specification and
the Business Requirement Document.

10. What are the Qualities of a Tester?


• Should be perfectionist
• Should be tactful and diplomatic
• Should be innovative and creative
• Should be relentless
• Should possess negative thinking with good judgment skills
• Should possess the attitude to break the system

11. Tell names of some testing type which you learnt or experienced?

Any 5 or 6 types related to the company's profile are good to mention in an
interview:
• Ad - Hoc testing
• Cookie Testing
• CET (Customer Experience Test)
• Depth Test
• Event-Driven

• Performance Testing
• Recovery testing
• Sanity Test
• Security Testing
• Smoke testing
• Web Testing

12. What exactly is Heuristic checklist approach for unit testing?

It is a method in which the most appropriate of several solutions, found by
alternative methods, is selected at successive stages of testing. The checklist
prepared to proceed in this way is called a heuristic checklist.

13. After completing testing, what would you deliver to the client?

Test deliverables, namely:
• Test Plan
• Test Data
• Test Design Documents (Conditions/Cases)
• Defect Reports
• Test Closure Documents
• Test Metrics

14. What is a Test Bed?

The elements that support the testing activity, such as test data and data guidelines,
prepared before starting the actual testing, are collectively called the test bed.

15. What is a Data Guideline?

Data guidelines are used to specify the data required to populate the test bed and
prepare test scripts. They include all data parameters that are required to test the
conditions derived from the requirement/specification. The documents which
support preparing test data are called data guidelines.

16. Why do you go for Test Bed?

When a test condition is executed, its result should be compared to the expected
test result, and test data is needed for this. Here comes the role of the test bed,
where the test data is made ready.

17. Can Automation testing replace manual testing? If it so, how?

Automated testing can never replace manual testing, as these tools follow the GIGO
(garbage in, garbage out) principle and lack creativity and innovative thinking. But
automation speeds up the process, follows a clear process which can be reviewed
easily, and is better suited for regression testing of manually tested applications and
for performance testing.

18. What is the difference between quality and testing?

"Quality is giving more cushions for user to use system with all its expected
characteristics”. It is usually said as Journey towards Excellence.

“Testing is an activity done to achieve the quality”.

SQA is responsible for prevention of the defects while Testing detects the defects

SQA is concerned with the process used to develop the product whereas Testing is
concerned with the product developed
SQA involves Verification while Testing involves Validation

19. Why do we prepare test condition, test cases, test script (Before
Starting Testing)?

These are test design documents which are used to execute the actual testing,
without which the execution of testing is impossible. Finally, this execution is going
to find the bugs to be fixed, so we have to prepare these documents.

20. Is it not waste of time in preparing the test condition, test case & Test
Script?

No document prepared in any process is a waste of time. Test design documents in particular play a vital role in test execution and can never be called a waste of time, as proper testing cannot be done without them.

21. How do you go about testing of Web Application?

When testing a web application, the first attack on the application should be on its performance behavior, as that is very important for a web application, and then on the transfer of data between the web server and front-end server, security server and back-end server.

22. What kind of Document you need for going for a Functional testing?

The functional specification is the ultimate document for functional testing, as it expresses all the functionalities of the application; other documents like the user manual and the BRS are also needed. A gap analysis document adds value in understanding the expected versus the existing system.

23. Can the System testing be done at any stage?

No. The system as a whole can be tested only if all modules are integrated and all modules work correctly. System testing should be done before UAT (User Acceptance Testing) and after unit and integration testing.

24. What is Mutation testing & when can it be done?

Mutation testing is a powerful fault-based testing technique for unit level testing.
Since it is a fault-based testing technique, it is aimed at testing and uncovering some
specific kinds of faults, namely simple syntactic changes to a program. Mutation
testing is based on two assumptions: the competent programmer hypothesis and the
coupling effect. The competent programmer hypothesis assumes that competent
programmers tend to write nearly "correct" programs. The coupling effect states that
a set of test data that can uncover all simple faults in a program is also capable of
detecting more complex faults. Mutation testing injects faults into code to determine
optimal test inputs.
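
As a minimal sketch of the idea in Python (the function and the mutant here are hypothetical, hand-written for illustration): a mutation tool injects a simple syntactic change, and a good test suite should contain at least one input that "kills" the mutant by producing a different result.

def is_adult(age):          # original program
    return age >= 18

def is_adult_mutant(age):   # mutant: '>=' changed to '>'
    return age > 18

# The boundary value 18 kills the mutant: original and mutant disagree.
assert is_adult(18) != is_adult_mutant(18)

# A suite that only used age=30 would leave the mutant alive,
# signalling that the boundary case is missing from the tests.
assert is_adult(30) == is_adult_mutant(30)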

25. Why it is impossible to test a program completely?

With any software other than the smallest and simplest program, there are too many
inputs, too many outputs, and too many path combinations to fully test. Also,
software specifications can be subjective and be interpreted in different ways.
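
As a rough back-of-the-envelope sketch of the input explosion (the throughput figure is an assumption):

# Even a trivial function taking two 32-bit integers has
# 2**32 * 2**32 = 2**64 possible input pairs.
pairs = 2 ** 64                        # about 1.8e19 combinations

# At an (optimistic) one million test executions per second,
# exhaustive testing would still take hundreds of millennia.
seconds = pairs / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years")           # roughly 585,000 years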

Test Automation:

26. What automating testing tools are you familiar with?

WinRunner and LoadRunner

27. What is the use of automating testing tools in any job?

The automation testing tools are used for Regression and Performance testing.

28. Describe some problem with automating testing tool.

Several problems are encountered while working with test automation tools, such as:

a. Tool limitations in object detection.
b. Tool configuration / deployment in various environments.
c. Tool precision / default skeleton script issues, such as window synchronization issues.
d. Tool bugs with respect to exception handling.
e. The tool's abnormal polymorphism in behavior: sometimes it works and sometimes it does not, for the same application / same script / same environment.

29. How test automation is planned?

Planning is the most important task in Test Automation. Test Automation Plan should
cover the following task items,

a. Tool Selection: Type of Test Automation Expected (Regression / Performance etc.).
b. Tool Evaluation: Tool Availability / Tool License Availability / Tool License
Limitations.
c. Tool Cost Estimation Vs Project Cost Estimation Statistics for Testing.
d. Resource Requirements Vs Availability Study.
e. Time Availability Vs Time Estimations Calculations and Definitions.
f. Production Requirements Analysis Results Consideration with respect to
Factors like Load-Performance / Functionality Expected / Scalability etc.
g. Test Automation Process Definitions including Standard to be followed
while performing Test Automation.
h. Test Automation Scope Definition.
i. Automation Risk Analysis and planning to overcome if defined Risks
Emerge in the Automation Process.
j. Reference Document Requirements as Prerequisites for Test Automation.

30. Can test automation improve test effectiveness?

Yes, Definitely Test Automation plays a vital role in improving Test Effectiveness in
various ways like,

a. Reduction in slippage caused due to human errors.
b. Object / object-properties-level UI verifications.
c. Virtual load / virtual users in load/performance testing, where it is not possible to physically use so many resources to perform the test and still get such accurate results.
d. Precise time calculations.
e. And many more…

31. What is data - driven automation?

Data-driven automation is an important part of test automation in which the same test cases are executed for different sets of test input data, so that the test runs for pre-defined iterations with a different set of input data in each iteration. A minimal sketch is given below.
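
A brief data-driven sketch using pytest; the login routine and credentials here are hypothetical placeholders, not part of any real application:

import pytest

# Hypothetical system under test: replace with the real login routine.
def login(user, password):
    return user == "admin" and password == "secret"

# The same test case is executed once per row of input data.
@pytest.mark.parametrize("user, password, expected", [
    ("admin", "secret", True),    # valid credentials
    ("admin", "wrong",  False),   # bad password
    ("guest", "secret", False),   # unknown user
])
def test_login(user, password, expected):
    assert login(user, password) == expected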

32. What are the main attributes of test automation?

Here are some of the attributes of test automation that can be measured,

Maintainability

• Definition: The effort needed to update the test automation suites for each
new release.
• Possible measurements: The possible measurements can be e.g. the average
work effort in hours to update a test suite.

Reliability

• Definition: The accuracy and repeatability of your test automation.
• Possible measurements: Number of times a test failed due to defects in the tests or in the test scripts.

Flexibility

• Definition: The ease of working with all the different kinds of automation testware.
• Possible measurements: The time and effort needed to identify, locate, restore, combine and execute the different test automation testware.

Efficiency

• Definition: The total cost related to the effort needed for the automation.
• Possible measurements: Monitoring over time the total cost of automated
testing, i.e. resources, material, etc.

Portability

• Definition: The ability of the automated test to run on different environments.
• Possible measurements: The effort and time needed to set-up and run test
automation in a new environment.

Robustness

• Definition: The effectiveness of automation on an unstable or rapidly changing system.
• Possible measurements: Number of tests failed due to unexpected events.

Usability

• Definition: The extent to which automation can be used by different types of users (developers, non-technical people, or other users).
• Possible measurements: The time needed to train users to become confident and productive with test automation.

33. Does automation replace manual testing?

We cannot actually replace manual testing 100% using automation, but it can replace almost 90% of the manual test effort if the automation is done efficiently.

34. How a tool for test automation is chosen?

Below are the factors to be considered while choosing a test automation tool:

a. Test Type Expected (e.g. Regression Testing / Functional Testing / Performance-Load Testing).
b. Tool Cost Vs Project Testing Budget Estimation.
c. Protocol Support by Tool Vs Application Designed Protocol.
d. Tool Limitations Vs Application Test Requirements.
e. H/W, S/W & Platform Support of Tool Vs Application Test Scope for these attributes.
f. Tool License Limitations / Availability Vs Test Requirements (Tool Scalability).

35. How one will evaluate the tool for test automation?

Whenever a tool has to be evaluated, one needs to go through a few important verifications / validations of the tool, such as:

a. Platform Support from the Tool.
b. Protocols / Technologies Support.
c. Tool Cost.
d. Tool Type with its Features Vs Our Requirements Analysis.
e. Tool Usage Comparisons with other similar tools available in the market.

f. Tool’s Compatibility with our Application Architecture and Development
Technologies.
g. Tool Configuration & Deployment Requirements.
h. Tools Limitations Analysis.

36. What are main benefits of test automation?

The main benefits of Test Automation are:

a. Test automation saves major testing time.
b. It saves resources (human / H/W / S/W resources).
c. Reduction in verification slippages caused due to human errors.
d. Object-properties-level verifications can be done, which are difficult to do manually.
e. Virtual load / user generation for load testing, which is not worth doing manually as it needs lots of resources and might not give the precise results that can be achieved using an automation tool.
f. Regression testing purposes.
g. Data-driven testing.

37. What could go wrong with test automation?

While using test automation there are various factors that can affect the testing process, such as:

a. The tool's limitations might be reported as application defects.
b. The automation tool's abnormal behavior, such as scalability variations due to memory violations, might be considered the application's memory violation in heavy load tests.
c. Environment settings required by the tool (e.g. Java-CORBA requires the JDK to be present on the system) can cause the application to show up bugs that are due only to the JDK installation; I experienced this myself, as on uninstallation of the JDK and Java add-ins my application worked fine.

38. How are the testing activities described?

The basic testing activities are as follows:

a. Test Planning (pre-requisite: get adequate documents of the project to test).
b. Test Cases (pre-requisite: get adequate documents of the project to test).
c. Cursor Test (a very basic test to make sure that all screens come up and the application is ready to test or to automate).
d. Manual Testing.
e. Test Automation (provided the product has reached enough stability to be automated).
f. Bug Tracking & Bug Reporting.
g. Analysis of the Test and Test Report Creation.
h. If the bug-fixing cycle repeats, then steps c-h repeat.

39. What testing activities one may want to automate?

Anything that is repeated should be automated if possible. Thus the following testing activities can be automated (for example, report generation can be scripted, as sketched after this list):

a. Test Case Preparation.
b. Tests like Cursor, Regression, Functional & Load / Performance testing.
c. Test Report Generation.
d. Test Status / Results Notifications.
e. Bug Tracking System, etc.
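
As one small illustration of automating a repeated activity, a sketch of test report generation in Python; the one-line-per-test results format assumed here ("<test_name>,<PASS|FAIL>") is hypothetical:

from collections import Counter

def summarize(path):
    # Count PASS/FAIL lines in a plain-text results file.
    counts = Counter()
    with open(path) as f:
        for line in f:
            _name, status = line.strip().split(",")
            counts[status] += 1
    return counts

# Example usage (assuming a results.csv produced by a test run):
# counts = summarize("results.csv")
# print(f"Passed: {counts['PASS']}, Failed: {counts['FAIL']}")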

40. Describe common problems of test automation?

In test automation we come across several problems, out of which I would like to highlight a few, as given below:

a. Automation script maintenance, which becomes tough if the product goes through frequent changes.
b. The automation tool's limitations in recognizing objects.
c. The automation tool's third-party integration limitations.
d. The automation tool's abnormal behavior due to its scalability issues.
e. Due to tool defects, we might assume an issue is an application defect and report it as an application bug.
f. Environmental settings and APIs / add-ins required by the tool to make it compatible with specialized environments, such as Java-CORBA, can create Java environment issues for the application itself (e.g. WinRunner 7.05 Java-support environment variables cause the application under test to malfunction).
g. There are many more issues that we come across during actual automation.

41. What are the types of scripting techniques for test automation ?

A scripting technique defines how to structure automated test scripts for maximum benefit and minimum impact of software changes. It covers scripting issues; scripting approaches (linear, shared, data-driven and programmed); script pre-processing; and minimizing the impact of software changes on test scripts.

The major ones used are,


a. Data-Driven Scripting
b. Centralized Application Specific / Generic Compiled Modules / Library
Development.
c. Parent Child Scripting.

d. Techniques to Generalize the Scripts.
e. Increasing the factor of Reusability of the Script.

42. What are principles of good testing scripts for automation?

The major principles of good test scripts for automation are listed below; a short sketch illustrating a few of them follows the list.

a. Automation scripts should be reusable.
b. Coding standards should be followed for scripting, which makes script updating, understanding and debugging easier.
c. Scripts should be environment- and data-independent as much as possible, which can be achieved using parameterization.
d. Script should be generalized.
e. Scripts should be modular.
f. Repeated Tasks should be kept in Functions while scripting to avoid code
repeat, complexity and make script easy for debugging.
g. Script should be readable and appropriate comments should be written for
each line / section of script.
h. Script Header should contain script developer name, script updated date,
script environmental requirements, scripted environmental details, script
pre-requisites from application side, script description in brief, script
contents, script scope etc.
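
A brief sketch in Python illustrating a script header, parameterization for environment independence, and a reusable function; all names, paths and values here are hypothetical:

###############################################################
# Script        : login_smoke.py (hypothetical example)
# Author        : <developer name>
# Updated       : <date>
# Pre-requisites: application reachable at BASE_URL
# Description   : Reusable, data-independent login URL check
###############################################################
import os

# Environment independence via parameterization: the URL comes
# from configuration, not from a hard-coded value.
BASE_URL = os.environ.get("BASE_URL", "http://localhost:8080")

def build_login_url(base_url):
    # Repeated task kept in a function to avoid code duplication.
    return f"{base_url}/login"

if __name__ == "__main__":
    print(build_login_url(BASE_URL))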

43. What tools are available for support of testing during the software development life cycle?

TestDirector for test management and Bugzilla for bug tracking and notification are examples of tools that support testing.

44. Can the activities of test case design be automated?

Yes. TestDirector is one such tool, which has features for test case design and execution.
45. What are the limitations of automating software testing?

To mention a few limitations of automating software testing:

a. Automation needs a lot of time in the initial stage of automation.
b. Every tool has its own limitations with respect to protocol support, technologies supported, object recognition, platforms supported, etc., due to which 100% of the application cannot be automated; there is always something limited in the tool, which we have to overcome with R&D.
c. The tool's memory utilization is also an important factor, which blocks the application's memory resources and creates problems for the application in a few cases, such as Java applications.
