Testing Tools Material
Every profession has its own vocabulary. To learn a profession, the first and crucial step is
to master its vocabulary, because the entire knowledge of a profession is compressed and kept in
its vocabulary.
Take our own software testing profession: while communicating with our colleagues, we
frequently use terms like 'regression testing' and 'system testing'. Now imagine communicating
the same to a person who is not in our profession or who does not understand our testing
vocabulary; we would need to explain each and every term in detail, and communication becomes
difficult and painful. To speak the language of testing, you need to learn its vocabulary.
Affinity Diagram: A group process that takes large amounts of language data, such as a list
developed by brainstorming, and divides it into categories.
Audit: This is an inspection/assessment activity that verifies compliance with plans, policies
and procedures and ensures that resources are conserved.
Black-box Testing: A test technique that focuses on testing the functionality of the
program, component or application against its specifications without knowledge of how the
system is constructed.
Boundary value analysis: A data selection technique in which test data is chosen from the
"boundaries" of the input or output domain classes, data structures and procedure
parameters. Choices often include the actual minimum and maximum boundary values, the
maximum value plus or minus one and the minimum value plus or minus one.
Branch Testing: A test method that requires that each possible branch of each decision point be
executed at least once.
Brainstorming: A group process for generating creative and diverse ideas.
Certification testing: Acceptance of software by an authorized agent after the software
has been validated by the agent or after its validity has been demonstrated to the agent.
Client: The customer that pays for the product received and receives the benefit from the
use of the product.
Correctness: The extent to which software is free from design and coding defects. It is
also the extent to which software meets the specified requirements and user objectives.
Cost of Quality: Money spent above and beyond expected production costs to ensure that
the product the customer receives is a quality product. The cost of quality includes
prevention, appraisal, and correction or repair costs.
Debugging: The process of analysing and correcting syntactic, logic and other errors
identified during testing.
Decision Table
A tool for documenting the unique combinations of conditions and associated results in
order to derive unique test cases for validation testing.
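For example, a decision table with two conditions can be enumerated in a few lines of code. The Python sketch below is only an illustration; the discount rule and the condition names in it are hypothetical, not taken from any real application.

from itertools import product

# Hypothetical rule: an order gets a discount only when the customer is a
# member AND the order total exceeds 100. Enumerating every combination of
# the two conditions gives one unique test case per decision-table column.
conditions = {"is_member": [True, False], "total_over_100": [True, False]}

def expected_discount(is_member, total_over_100):
    # Expected result for each combination of conditions.
    return is_member and total_over_100

for is_member, total_over_100 in product(*conditions.values()):
    print(f"member={is_member}, total>100={total_over_100} "
          f"-> discount expected: {expected_discount(is_member, total_over_100)}")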
Desk Check: A verification technique conducted by the author of the artifact to verify the
completeness of their own work. This technique does not involve anyone else.
Entrance Criteria: Required conditions and standards for work product quality that must be
present or met for entry into the next stage of the software development process.
Error Guessing: Test data selection techniques for picking values that seem likely to cause
defects. This technique is based upon the theory that test cases and test data can be
developed based on intuition and experience of the tester.
Exhaustive Testing: Executing the program through all possible combination of values for
program variables.
Exit criteria: Standards for work product quality which block the promotion of incomplete
or defective work products to subsequent stages of the software development process.
Flowchart
Pictorial representation of data flow and computer logic. It is frequently
easier to understand and assess the structure and logic of an application system by
developing a flowchart than by attempting to understand narrative descriptions or verbal
explanations. Flowcharts for systems are normally developed manually, while flowcharts
of programs can be produced automatically by tools.
Formal Analysis
Technique that uses rigorous mathematical techniques to analyze the
algorithms of a solution for numerical properties, efficiency, and correctness.
Functional Testing
Testing that ensures all functional requirements are met without regard to the final
program structure.
Histogram
A graphical description of individually measured values in a data set that is organized
according to the frequency or relative frequency of occurrence. A histogram illustrates the
shape of the distribution of individual values in a data set along with information regarding
the average and variation.
Inspection
A formal assessment of a work product conducted by one or more qualified independent
reviewers to detect defects, violations of development standards, and other problems.
Inspections involve authors only when specific questions concerning deliverables exist. An
inspection identifies defects, but does not attempt to correct them. Authors take
corrective actions and arrange follow-up reviews as needed.
Integration Testing
This test begins after two or more programs or application components have been
successfully unit tested. It is conducted by the development team to validate the
interaction or communication/flow of information between the individual components which
will be integrated.
Pass/Fail Criteria
Decision rules used to determine whether a software item or feature passes or fails a test.
Path Testing
A test method satisfying the coverage criteria that each logical path through the program
be tested. Often, paths through the program are grouped into a finite set of classes and
one path from each class is tested.
Performance Test
Validates that both the online response time and batch run times meet the
defined performance requirements.
Policy
Managerial desires and intents concerning either process (intended objectives) or products
(desired attributes).
Population Analysis
Analyzes production data to identify, independent from the specifications, the types and
frequency of data that the system will have to process/produce. This verifies that the
specs can handle types and frequency of actual data and can be used to create validation
tests.
Procedure
The step-by-step method followed to ensure that standards are met.
Process
1. The work effort that produces a product. This includes efforts of people and equipment
guided by policies, standards, and procedures.
2. A statement of purpose and an essential set of practices (activities) that address that
purpose.
Proof of Correctness
The use of mathematical logic techniques to show that a relationship between program
variables assumed true at program entry implies that another relationship between program
variables holds at program exit.
Quality
A product is a quality product if it is defect free. To the producer, a product is a quality
product if it meets or conforms to the statement of requirements that defines the product.
This statement is usually shortened to: quality means meets requirements. From a
customer’s perspective, quality means “fit for use.”
Quality Improvement
To change a production process so that the rate at which defective products (defects) are
produced is reduced. Some process changes may require the product to be changed.
Recovery Test
Evaluates the contingency features built into the application for handling
interruptions and for returning to specific points in the application processing cycle,
including checkpoints, backups, restores, and restarts. This test also assures that disaster
recovery is possible.
Regression Testing
Testing of a previously verified program or application following program
modification for extension or correction to ensure no new defects have been introduced.
Risk Matrix
Shows the controls within application systems used to reduce the identified risk, and in
what segment of the application those risks exist. One dimension of the matrix is the risk,
the second dimension is the segment of the application system, and within the matrix at the
intersections are the controls. For example, if a risk is “incorrect input” and the systems
segment is “data entry,” then the intersection within the matrix would show the controls
designed to reduce the risk of incorrect input during the data entry segment of the
application system.
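As a rough sketch, such a matrix can be represented as a simple lookup keyed by risk and application segment; the risks, segments and control names below are illustrative only.

# One dimension is the risk, the other is the application segment; the cell
# holds the controls that reduce that risk in that segment.
risk_matrix = {
    ("incorrect input", "data entry"): ["field-level validation", "double-key entry"],
    ("incorrect input", "batch upload"): ["file format checks"],
    ("unauthorized access", "login"): ["password policy", "account lockout"],
}

# Look up the controls at a given risk/segment intersection.
print(risk_matrix[("incorrect input", "data entry")])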
Standards
The measure used to evaluate products and identify nonconformance. The basis upon which
adherence to policies is measured.
Statement of Requirements
The exhaustive list of requirements that define a product.
Statement Testing
A test method that executes each statement in a program at least once during program
testing.
Static Analysis
Analysis of a program that is performed without executing the program. It
may be applied to the requirements, design, or code.
Stress Testing
This test subjects a system, or components of a system, to varying
environmental conditions that defy normal expectations. For example, high transaction
volume, large database size or restart/recovery circumstances. The intention of stress
testing is to identify constraints and to ensure that there are no performance problems.
Structural Testing
A testing method in which the test data is derived solely from the program structure.
Stub
Special code segments that, when invoked by a code segment under test, simulate the
behavior of designed and specified modules not yet constructed.
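For example, a stub might stand in for a tax-calculation module that has not been built yet. The small Python sketch below is illustrative; the module and function names in it are hypothetical.

# A stub returns a canned value so the caller can be tested in isolation.
def tax_stub(amount):
    """Stands in for the real tax module; always returns a fixed 10% tax."""
    return round(amount * 0.10, 2)

def invoice_total(amount, tax_fn=tax_stub):
    # Code under test: combines the amount with whatever tax module it is given.
    return amount + tax_fn(amount)

assert invoice_total(100) == 110.0  # exercised against the stub, not the real module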
System Test
During this event, the entire system is tested to verify that all functional,
information, structural and quality requirements have been met.
Test Case
Test cases document the input, expected results, and
execution conditions of a given test item.
Test Plan
A document describing the intended scope, approach, resources, and schedule of testing
activities. It identifies test items, the features to be tested, the testing tasks, the
personnel performing each task, and any risks requiring contingency planning.
Test Scripts
A tool that specifies an order of actions that should be performed during a test session.
The script also contains expected results. Test scripts may be manually prepared using
paper forms, or may be automated using
capture/playback tools or other kinds of automated scripting tools.
Unit Test
Testing individual programs, modules, or components to demonstrate that the work package
executes per specification, and validate the design and technical quality of the application.
The focus is on ensuring that the detailed logic within the component is accurate and
reliable according to pre-determined specifications. Testing stubs or drivers may be used to
simulate behavior of interfacing modules.
Usability Test
The purpose of this event is to review the application user interface and other human
factors of the application with the people who will be using the application. This is to ensure
that the design (layout and sequence, etc.) enables the business functions to be executed as
easily and intuitively as possible. This review includes assuring that the user interface
adheres to documented User Interface standards, and should be conducted early in the
design stage of development. Ideally, an application prototype is used to walk the client
group through various business scenarios, although paper copies of screens, windows, menus,
and reports can be used.
User Acceptance Test
Validates that the system works as intended by the user in the real world, and is based on
real world business scenarios, not system requirements. Essentially, this test validates
that the right system was built.
Validation
Determination of the correctness of the final program or software produced from a
development project with respect to the user needs and requirements.
Verification
1. The process of determining whether the products of a given phase of the software
development cycle fulfill the requirements established during the previous phase.
2. The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and
documenting whether items, processes, services, or documents conform to specified
requirements.
Walkthroughs
During a walkthrough, the producer of a product “walks through” or
paraphrases the product’s content, while a team of other individuals follows along. The team’s
job is to ask questions and raise issues about the product that may lead to defect
identification.
White-box Testing
A testing technique that assumes that the path of the logic in a program unit or component
is known. White-box testing usually consists of testing paths, branch by branch, to produce
predictable results. This technique is usually used during tests executed by the
development team, such as Unit or Component testing.
CHAPTER 2
What is Quality?
What is quality? (or: Define quality.)
Many quality pioneers have defined quality in different ways.
A quality product is often defined as one that meets product requirements. But quality can
only be seen through the customer's eyes, so the most important definition of quality is meeting
customer needs: understanding customer requirements and expectations, and exceeding those
expectations. If the customer is satisfied by using the product, then it is a quality product.
What is the difference between meeting product requirements and meeting customer
needs? Aren't customer needs translated into product requirements?
Not always. Though our aim is to accurately capture customer needs as requirements and
build a product that satisfies those needs, we sometimes fail to do so for the
following reasons:
-Customers fail to accurately communicate their exact needs.
-Captured requirements can be misinterpreted.
If the product has some defects, can it still be called a quality product?
It depends on the nature of those bugs. In some cases, even though a product has bugs,
it can still be called a quality product.
Unless the product is very critical, aiming for zero defects is not always cost effective. We
should aim for 100% defect 'detection', but given budget, time and resource
constraints, we can still release the product with some unfixed or open bugs. If the open
bugs cause no loss to the customer, then it can still be called a quality product.
Are there any other quality control practices apart from testing?
Yes. Inspections, design and code walkthroughs, reviews, etc.
What are software quality factors?
Software quality factors are attributes of the software that, if they are wanted and not
present, pose a risk to the success of the software. There are 11 main factors and their
definitions are given below. The priority and importance of these attributes keeps
changing from product to product. For example, if the product being developed needs to be
changed quite frequently, then the flexibility and reusability of the product need to be given
priority. The following are the quality factors:
Reliability: Extent to which a program can be expected to perform its intended function
with required precision.
Efficiency: The amount of computing resources and code required by a program to perform
a function.
Usability: Effort required to learn, operate, prepare input for, and interpret output of a
program.
Testability: Effort required to test a program to ensure that it performs its intended
function.
Reusability: Extent to which a program can be used in other applications – related to the
packaging and scope of the functions that programs perform.
identifying weaknesses in them. You may not reap great benefits immediately, but over the long
run you can make significant savings by reducing the cost of quality.
CHAPTER 3
Life cycle testing, or V testing, aims at catching defects as early as possible and thus
reduces the cost of fixing them. It achieves this by continuously testing the system during
all phases of the development process rather than limiting testing to the last phase.
Life cycle testing can be best accomplished by the formation of a separate test team.
When the project starts, both the system development process and the system test process
begin. The team that is developing the system begins the systems development process, and
the team that is conducting the system test begins planning the system test process. Both
teams start at the same point using the same information. The systems development team
has the responsibility to define and document the requirements for developmental purposes.
The test team will likewise use those same requirements, but for the purpose of testing the
system. At appropriate points during the development process, the test team will test the
development process in an attempt to uncover defects.
The following is the software testing process that follows life cycle testing.
Design phase:
Verify whether the design achieves the objectives of the requirements, and whether the design
is effective and efficient.
Verification Techniques: Design walkthroughs, Design Inspections
Coding phase:
Verify that the design is correctly translated into code.
Verify that coding is as per the company's standards and policies.
Verification Techniques: Code walkthroughs, Code Inspections
Validation Techniques: Unit testing and Integration testing
Maintenance phase:
After the software is implemented, any changes to the software must be thoroughly tested
and care should be taken not to introduce regression issues.
Life cycle testing is also called V testing. The project’s Do and Check processes slowly
converge from start to finish, which indicates that as the Do team
attempts to implement a solution, the Check team concurrently develops a process to
minimize or eliminate the risk. If the two groups work closely together, the high level of
risk at a project’s inception will decrease to an acceptable level by the project’s conclusion.
CHAPTER 4
Black box testing - not based on any knowledge of internal design or code. Tests are based
on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code. Tests
are based on coverage of code statements, branches, paths, conditions.
Unit testing - A unit is the smallest compilable component. A unit typically is
the work of one programmer. The unit is tested in isolation with the help of
stubs or drivers. Unit testing is typically done by the programmer and not by testers.
End-to-end testing - similar to system testing, but involves testing of the application in an
environment that mimics real-world use, such as interacting with a database, using network
communications, or interacting with other hardware, applications, or systems if appropriate.
Even the transactions performed mimic the end users' usage of the application.
Sanity testing - typically an initial testing effort to determine if a new software version is
performing well enough to accept it for a major testing effort. For example, if the new
software is crashing systems every 5 minutes, bogging down systems to a crawl, or
destroying databases, the software may not be in a 'sane' enough condition to warrant
further testing in its current state.
Smoke testing - The general definition (related to hardware) of smoke testing is:
a safe, harmless procedure of blowing smoke into parts of the sewer and
drain lines to detect sources of unwanted leaks and sources of sewer odors.
In relation to software, smoke testing is non-exhaustive software testing,
ascertaining that the most crucial functions of a program work, but not bothering with finer
details.
Static testing - Test activities that are performed without running the software are called
static testing. Static testing includes code inspections, walkthroughs, and desk checks.
Dynamic testing - test activities that involve running the software are called dynamic
testing.
Regression testing - Testing of a previously verified program or application following
program modification for extension or correction, to ensure no new defects have been
introduced. Automated testing tools can be especially useful for this type of testing.
Load testing - Load testing is a test whose objective is to determine the maximum
sustainable load the system can handle. Load is varied from a minimum (zero) to the
maximum level the system can sustain without running out of resources or having
transactions suffer excessive (application-specific) delay.
Stress testing - Stress testing is subjecting a system to an unreasonable load while denying
it the resources (e.g., RAM, disc, mips, interrupts) needed to process that load. The idea is
to stress a system to the breaking point in order to find bugs that will make that break
potentially harmful. The system is not expected to process the overload without adequate
resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data).
The load (incoming transaction stream) in stress testing is often deliberately distorted so
as to force the system into resource depletion.
Performance testing - Validates that both the online response time and batch run times
meet the defined performance requirements.
Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend
on the targeted end-user or customer. User interviews, surveys, video recording of user
sessions, and other techniques can be used. Programmers and testers are usually not
appropriate as usability testers.
Recovery testing - testing how well a system recovers from crashes, hardware failures, or
other catastrophic problems.
Security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc; may require sophisticated testing techniques.
Exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test
it.
Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.
Monkey testing - Monkey testing is testing that runs with no specific test in mind. The
monkey in this case is the producer of any input data (whether that be file data or input
device data). For example, keep pressing keys randomly and check whether the software
fails or not.
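A minimal sketch of monkey testing in Python, assuming a hypothetical parse_quantity function as the software under test: random "keyboard-mashing" input is generated, and the only check is that nothing unexpected happens.

import random
import string

# Hypothetical function under test: parses a quantity field.
def parse_quantity(text):
    value = int(text)
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

# Monkey test: feed random strings; the program must either return normally
# or raise the expected error, never crash with anything else.
random.seed(1)
for _ in range(1000):
    junk = "".join(random.choice(string.printable) for _ in range(random.randint(0, 10)))
    try:
        parse_quantity(junk)
    except ValueError:
        pass  # rejecting bad input is acceptable behaviour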
Beta testing - testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Typically done by end-users or
others, not by programmers or testers.
Mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources
Cross browser testing - the application is tested with different browsers for usability and
compatibility testing.
Negative testing - Testing the application for fail conditions; negative
testing is testing the application with improper inputs, for example entering
special characters in a phone number field.
CHAPTER 5
When creating black-box test cases, the input data used is critical. Three successful
techniques for managing the amount of input data required include:
Equivalence Partitioning
An equivalence class is a subset of data that is representative of a larger class. Equivalence
partitioning is a technique for testing equivalence classes rather than undertaking
exhaustive testing of each value of the larger class. For example, a program which edits
credit limits within a given range (1,000 - 1,500) would have three equivalence classes:
< 1,000 (invalid)
Between 1,000 and 1,500 (valid)
> 1,500 (invalid)
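As a small illustration, the Python sketch below picks one representative value from each equivalence class of the credit-limit edit; the credit_limit_is_valid function is an assumed implementation of the edit, not taken from a real system.

# One representative value is chosen from each equivalence class; testing one
# value per class is assumed to be as effective as testing every value in it.
def credit_limit_is_valid(limit):
    # Illustrative implementation of the credit-limit edit (1,000 - 1,500).
    return 1000 <= limit <= 1500

representatives = {
    "below range (invalid)": 500,
    "within range (valid)": 1200,
    "above range (invalid)": 2000,
}

for description, value in representatives.items():
    print(description, value, "->", credit_limit_is_valid(value))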
Boundary Analysis
A technique that consists of developing test cases and data that focus on the input and
output boundaries of a given function. In the same credit limit example, boundary analysis would
test:
Low boundary +/- one (999 and 1,001)
On the boundary (1,000 and 1,500)
Upper boundary +/- one (1,499 and 1,501)
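The same example expressed as a small Python sketch, with test data on and around the 1,000 and 1,500 boundaries (again assuming a hypothetical credit_limit_is_valid function):

def credit_limit_is_valid(limit):
    return 1000 <= limit <= 1500

# Expected result for each value on or around the boundaries.
boundary_cases = {
    999: False,   # low boundary - 1
    1000: True,   # on the low boundary
    1001: True,   # low boundary + 1
    1499: True,   # upper boundary - 1
    1500: True,   # on the upper boundary
    1501: False,  # upper boundary + 1
}

for value, expected in boundary_cases.items():
    assert credit_limit_is_valid(value) == expected, value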
Error Guessing
Test cases can be developed based upon the intuition and experience of the tester. For
example, where one of the inputs is a date, a tester may try February 29, 2000.
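A small Python sketch of error-guessed date inputs, assuming the field accepts dates in YYYY-MM-DD form; the guesses target leap-day and month-end handling.

from datetime import date

# Error-guessed inputs: February 29, 2000 is a valid leap day; February 29,
# 1900 is not (1900 was not a leap year); April has only 30 days.
guesses = ["2000-02-29", "1900-02-29", "2021-04-31", "2021-12-31"]

for text in guesses:
    try:
        year, month, day = (int(part) for part in text.split("-"))
        print(text, "->", date(year, month, day))
    except ValueError as err:
        print(text, "-> rejected:", err)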
White-box testing assumes that the path of logic in a unit or program is known. White-box
testing consists of testing paths, branch by branch, to produce predictable results. The
following are white-box testing techniques:
Statement Coverage
Execute all statements at least once.
Decision Coverage
Execute each decision direction at least once.
Condition Coverage
Execute each condition with all possible outcomes at least once.
Decision/Condition Coverage
Execute all possible combinations of condition outcomes in each decision. Treat all iterations
as two-way conditions exercising the loop zero times and one time.
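As an illustration, the Python sketch below shows a function with one decision made of two conditions, and how different test sets achieve statement, decision and condition coverage; the approve_loan function and its threshold are made up for the example.

# A small function with one decision made of two conditions.
def approve_loan(income, has_guarantor):
    if income >= 50000 or has_guarantor:
        return "approved"
    return "rejected"

# Statement coverage: two tests are enough to reach both return statements.
assert approve_loan(60000, False) == "approved"
assert approve_loan(20000, False) == "rejected"

# Decision (branch) coverage: the same two tests take the decision both ways.

# Condition coverage: each individual condition must be True and False at
# least once, so a test where only the guarantor condition is True is added.
assert approve_loan(20000, True) == "approved"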
CHAPTER 6
Testing Metrics
While testing a product, a test manager has to make many decisions: when to stop
testing, when the application is ready for production, how to track testing progress, and how
to measure the quality of the product at a given point in the testing cycle. Testing metrics
can help in making better and more accurate decisions.
Not only testing progress but also the following metrics are helpful in measuring the
quality of the product:
% Test cases Passed = (Number of test cases Passed)/(Number of test cases executed)
% Test cases Failed = (Number of test cases Failed)/(Number of test cases executed)
Note: A test case is Failed when at least one bug is found while executing it; otherwise it is
Passed.
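A minimal sketch of how these two metrics are computed, using made-up execution counts:

# Made-up figures for illustration only.
executed = 200
passed = 180
failed = executed - passed  # a test case fails if at least one bug is found

pct_passed = passed / executed * 100
pct_failed = failed / executed * 100
print(f"% passed = {pct_passed:.1f}%, % failed = {pct_failed:.1f}%")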
If the coverage of code is good, the mean time between failures is quite large, the defect
density is very low and not many high severity bugs are still open, then 'maybe' you should stop
testing. 'Good', 'large', 'low' and 'high' are subjective terms and depend on the product being
tested. Finally, the risk associated with moving the application into production, as well as the
risk of not moving forward, must be taken into consideration.
CHAPTER 7
Test Planning: is the selection of techniques and methods to be used to validate the
product against its approved requirements and design. In this activity we assess the
software application risks, and then develop a plan to determine if the software minimizes
those risks. We document this planning in a Test Plan document.
Document Signoff: Usually a test plan document is a contract between the testing team and all
the other teams involved in developing the product, including higher management.
Before signoff, all interested parties thoroughly review the test plan and give feedback,
raising issues or concerns, if any. Once everybody is satisfied with the test plan, they sign off
the document, which is a green signal for the testing team to start executing the test
plan.
Change History: Under this section, you specify who changed what in the document and
when, along with the version of the document that contains the changes.
Review and Approval History: This captures who reviewed the document and whether they
approved the test plan or not. The reviewer may suggest changes or comments (if any)
to be incorporated in the test plan.
Document References: Any additional documents that will help better understand the test
plan like design documents and/or Requirements document etc.
Document Scope: In this section specify what the test plan covers and who its intended
audience is.
Product Summary: In this section describe briefly about the product that is to be tested.
Product Quality Goals: In this section describe important quality goals of the product.
Following are some of the typical quality goals
-Reliability, proper functioning as specified and expected.
-Robustness, acceptable response to unusual inputs, loads and conditions.
-Efficiency of use by the frequent users
-Easy to use even for the less frequent users
Testing Objectives: In this section specify the testing goals that need to be accomplished
by the testing team. The goals must be measurable and should be prioritized. The following
are some example test objectives.
Verify functional correctness
Test product robustness and stability.
Measure performance ‘hot spots’ (locations or features that are problem areas).
Assumptions: In this section specify the expectations which, if not met, could have a negative
impact on the execution of this test plan. Some of the assumptions can be about the test budget
that must be allocated, the resources needed, etc.
Testing Scope: In this section specify ‘what will be covered in testing’ and ‘what will not be
covered’.
Testing Strategy: In this section specify different testing types used to test the product.
Tools needed to execute the strategy are also specified.
Testing Schedule: In this section specify first the entire project schedule and then the
detailed testing schedule.
Resources: In this section specify all the resources needed to execute the plan successfully.
Communication Approach: In this section specify how the testing team will
report bugs to the development team, how it will report testing progress
to management, and how it will report issues and concerns to higher-ups.
CHAPTER 8
Test Outline: This document is written before writing test cases. It is a planning
document in which the flows or scenarios are written at a high level. These flows or
scenarios are later expanded into test cases, where they are written in detail. The
biggest advantage of writing this document before going to test cases is the 'traceability
matrix', where you ensure that the project/feature is sufficiently and thoroughly covered by
the individual test cases.
Change History: Under this section, you specify who changed what in the document and
when, along with the version of the document that contains the changes.
Review and Approval History: This captures who reviewed the document and whether they
approved the test outline or not. If approved, the reviewer will specify the review
comments (if any) to be incorporated in the test outline. There is a review template at the
end of the testcase_template.doc, which can be used to specify the comments for the test
outline as well. If the test outline document is 'Not Approved', then either the scenarios
mentioned are not sufficient or the scenarios are in very bad shape (not in a state to be
reviewed).
Document References: Any additional documents that will help better understand the test
outline document like design documents or Requirements document etc.
Projects Covered in Test Outline: Projects can be features of the product or modules
which are covered in the test outline document.
Traceability Matrix: This matrix is filled in after finishing writing all scenarios in the
outline. This is to ensure that all requirements or features are sufficiently covered by the
test cases and none are missing. So you map the requirement, or feature and subfeature, to
the test case that will be covering it. The following IDs uniquely identify the requirements
or feature and subfeature. You can add your own IDs based on the need:
REQ_ID = Requirement ID from the SRS document
DD_ID = Detailed Design ID from the Detailed Design document
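A traceability matrix can be kept as a simple mapping from requirement IDs to the test cases that cover them; the sketch below uses illustrative IDs only and flags requirements with no covering test case.

# Each requirement ID maps to the test cases that cover it (IDs are made up).
traceability = {
    "REQ_001": ["TC_001", "TC_002"],
    "REQ_002": ["TC_003"],
    "REQ_003": [],  # not yet covered
}

uncovered = [req for req, cases in traceability.items() if not cases]
print("Requirements with no covering test case:", uncovered)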
Setup Requirements: Any setup that has to be done in the application being tested, prior to
executing the test case, should be mentioned here. For example, if the test case needs
certain login IDs with certain settings to begin, which are not created as part of the test
case, then such things need to be mentioned in this section.
Test Objectives: Specify at a very high level, what the test case is intended to achieve or
verify.
Test Case Limitations: Does the test case achieve the above-mentioned test objective
completely, or are there any exceptions? These exceptions need to be specified in this
section. For example, if the test case has to verify 'something' on type A, type B and type X,
but for some reason it could not verify that 'something' on type X, then that is a
limitation.
Test Case Dependencies / Assumptions: Does any other test case need to be run prior to
executing this test case? All such dependencies need to be mentioned here.
Process Flow: In this section, we specify at a high level what the flow of the test case
is. Suppose there are multiple users in the test case; then a process flow can look like:
user1: does something
user2: does something else
user1: does again something
user2: says good bye
Test Outline Table column - 'User': Who has to perform the action. Suppose in an
application there are two roles, 'Buyer' and 'Supplier'; then the user can be one of those role names.
Test Outline Table column - 'Action': Under Action you specify the following:
Flow Name - A high level name given to the action performed by the user. Suppose the Buyer has to
create certain purchase orders in the application; then the flow name can be 'Create
Purchase Orders'.
Description - The following things should be mentioned here at a high level
Description of what actions should be performed
What is the type or characteristics of data to be used.
What should be verified or checked after performing the action.
Effort Estimates: In this section you specify the effort needed to write each test case
and the effort needed to execute them.
CHAPTER 9
Change History: Under this section, you specify who changed what in the document and
when, along with the version of the document that contains the changes.
Review and Approval History: This captures who reviewed the document and whether they
approved the test case or not. If approved, the reviewer will specify the review comments
to be incorporated in the test case. There is a review template at the end of the template
document, which can be used to specify the comments. If the test case document is 'Not
Approved', then either the test case is not necessary (redundant) or it is in very bad
shape (not in a state to be reviewed).
Document References: Any additional documents that will help better understand the test
case, like test outlines, design documents or the Requirements document, etc.
Introduction/Overall Test Objectives: Specify at a very high level, what the test case is
intended to achieve or verify.
Test Case Limitations: Does the test case achieve the above-mentioned test objective
completely, or are there any exceptions? These exceptions need to be specified in this
section. For example, if the test case has to verify something on type A, type B and type X,
but for some reason it could not verify that something on type X, then that is a limitation.
Test Case Dependencies / Assumptions: Does any other test case need to be run prior to
executing this test case? All such dependencies need to be mentioned here.
Setup Requirements: Any setup that has to be done in the application being tested, prior to
executing this test script, should be mentioned here. For example, if the test case needs
certain login IDs with certain settings to begin, which are not created as part of the test
case, then such things need to be mentioned in this section.
Process Flow: In this section, we mention who does what in the test case. Suppose there
are multiple users in the test case; then a process flow can look like:
user1: does something
user2: does something else
user1: does again something
user2: says good bye
Test Case: The actual test case begins in section 5, which can be further divided into
subsections based on convenience and need. For example, if the test case is for an integrated
application, then every time we log in to a new application, we can have a new subsection.
The following is an example of how a test case step looks:
Step Num: 1
Step Description: check login
Path and Action: Enter user name, Enter pwd, click Login
Test Data: abcd, abcd
Expected Results: Verify error message is thrown that username and password entered are
wrong
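The same step could also be written as an automated check; the Python sketch below is only an illustration, and the login function and its error message are hypothetical stand-ins for the application under test.

# Hypothetical application code standing in for the real login screen.
def login(username, password):
    if (username, password) != ("admin", "secret"):
        return "username and password entered are wrong"
    return "welcome"

def test_login_with_invalid_credentials():
    # Step 1: enter user name and password, click Login (test data: abcd / abcd)
    result = login("abcd", "abcd")
    # Expected result: an error message is shown
    assert result == "username and password entered are wrong"

test_login_with_invalid_credentials()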
Appendix: This section contains any additional data that the test case refers to. For example, if
your test case has large amounts of 'Test Data' which are difficult to put under the column
'Test Data' for each step, then you can use the appendix section to hold the data and, in the
test case, give a reference to the appendix.
Test Case Review Template: This template can be used by the reviewers to provide their
review comments. They can classify the comments based on their severity. The Test Engineer
who incorporates the comments into the test case should specify the action taken in
the template and then 'Close' the comment.
CHAPTER 10
Once a bug (defect or error) is found, it should be communicated to the developers who can
fix it. Once the bug is fixed/resolved, the fix should be verified by the testers and the bug
should be closed.
Bug information
The following information should be captured in the bug report so that developers can clearly
understand the bug, get an idea of its severity, and reproduce it if necessary. The
developer should also mention in the bug the cause of the problem, the steps taken to fix
it (fix description), the steps taken to verify the fix, and any information that helps
prevent such issues in the future.
Bug status: On the long road between logging a bug and fixing it, the status of a bug
communicates where it is, e.g. New, Assigned, Fixed, Closed, etc.
A list of different bug statuses is given below along with their descriptions.
Application details: Details of the application like application name, version, URL, database
details etc.
Component and/or subcomponent: The part of the application in which the bug was found
by the tester.
Severity/Criticality:
Priority: For bugs of the same severity, this field can be used to decide which ones to fix first.
Test case name/number/identifier:
Data used:
Additional information: File excerpts, error messages, log file excerpts, screen shots and
anything else that would be helpful in finding the cause of the problem or fixing it.
Tester name:
Description of fix:
Date of fix:
New: When a bug is found, the tester logs the bug and the status of ‘New’ is assigned to
the bug.
Assigned: The development team verifies if the bug is valid. If the bug is valid, development
leader assigns it to a developer to fix it and a status of ‘Assigned’ is set to it.
Not Reproducible: When the dev lead could not reproduce the bug.
Not a Bug: Invalid bug (a bug that does not require any code fix).
Duplicate Bug: A bug has already been logged for the same issue.
Fixed but not patched: The bug is resolved but the fix is yet to be pushed to the testing
instance.
Ready for retesting: The fix is pushed to the testing instance and is ready for retesting by
the tester.
Closed, fix verified: The tester verifies the fix and the bug is resolved completely.
Closed, Not a bug: The tester verifies the bug and finds that it does not require a code fix.
Closed, Duplicate bug:
Reopened: The tester verifies and finds the bug is not fixed (either completely or partially).
Valid bug: New -> Assigned -> Fixed but not patched -> Ready for retesting -> Closed, fix
verified
Reopened bug: New -> Assigned -> Fixed but not patched -> Ready for retesting -> Reopened
-> Fixed but not patched -> Ready for retesting -> Closed, fix verified
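The life cycle above can be pictured as a small state machine; the Python sketch below encodes the allowed transitions described in this chapter and walks a valid bug through them (the transition table is a simplified reading of the statuses, not an exhaustive one).

# Allowed status transitions, mirroring the descriptions above.
ALLOWED = {
    "New": {"Assigned", "Not a Bug", "Duplicate Bug", "Not Reproducible"},
    "Assigned": {"Fixed but not patched"},
    "Fixed but not patched": {"Ready for retesting"},
    "Ready for retesting": {"Closed, fix verified", "Reopened"},
    "Reopened": {"Fixed but not patched"},
}

def move(current, new):
    """Validate a status change against the allowed transitions."""
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"cannot move a bug from '{current}' to '{new}'")
    return new

status = "New"
for next_status in ["Assigned", "Fixed but not patched", "Ready for retesting",
                    "Closed, fix verified"]:
    status = move(status, next_status)
print("final status:", status)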
Analysis of bugs
Bugs logged during a testing phase are an invaluable source for improving the existing testing
processes. The holy grail for any testing team is zero customer bugs. Once a product is
released, the majority of the customer bugs come within 6 months to 1 year of product usage.
But immediately after testing of the product is over, the following can be done:
-The testing team should analyze each and every customer bug, find out why it was missed
in the testing effort, and take appropriate measures.