
What Is Manual Testing?


Software testing is an integral part of the software development life cycle (SDLC).

Software testing means subjecting a piece of code to both controlled and uncontrolled operating conditions, observing the output, and examining whether it is in accordance with certain pre-specified conditions. Different sets of test cases and testing strategies are prepared, all aiming at one common goal: removing the bugs and errors from the code and making the software error-free and capable of providing accurate and optimal output. There are different types of software testing techniques and methodologies, and a software testing methodology is not the same thing as a software testing technique. We will look at a few software testing methodologies later in this article.

What is Manual Testing?


Manual testing is the method of checking software for defects by hand. In this type of testing, the tester steps into the role of the end user. All the features of the software are tested to check whether its behavior matches the expectations of the customer. Normally, the tester works from a test plan, and test cases are written to implement that plan.
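A written test case of the kind mentioned above can be captured in any format; a minimal sketch of one, using illustrative field names and a hypothetical login scenario (none of these are mandated by the article), might look like this:

```python
# Illustrative only: the field names below are a common convention, not a standard.
login_test_case = {
    "id": "TC-001",
    "title": "Login with valid credentials",
    "preconditions": ["User account exists", "Login page is reachable"],
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the 'Sign in' button",
    ],
    "expected_result": "User is redirected to the dashboard",
    "actual_result": None,   # filled in by the tester during execution
    "status": "Not Run",     # becomes Pass/Fail after execution
}

def execute(test_case, actual_result):
    """Record the outcome of a manual test run against the expected result."""
    test_case["actual_result"] = actual_result
    test_case["status"] = (
        "Pass" if actual_result == test_case["expected_result"] else "Fail"
    )
    return test_case["status"]
```

The tester performs the steps by hand and records what actually happened; the structure simply keeps expected and actual results side by side.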

Stages of Manual Testing


The entire process of manual testing goes through four phases. The first phase is unit testing: it is the developer's job to test the units of code he has written, though in some cases the code may also be tested by a peer. Integration testing is the second phase; it is carried out when chunks of code are integrated to form a bigger block, using either black box or white box techniques. The next phase is system testing, in which the software is tested against all possibilities to rule out any kind of abnormality in the system; normally the black box technique is used here. User acceptance testing is, per se, the last stage of manual testing. In this phase, the software is tested with the end user in mind. Two types of acceptance testing are used, namely alpha testing and beta testing.

Software Testing Life Cycle


Like the software development life cycle, software also goes through a software testing life cycle. Software testing interview questions and answers often revolve around it. The different phases in the software testing life cycle are:
 Requirement Phase
 Test Planning Phase
 Test Analysis Phase
 Test Design Phase
 Test Verification and Construction Phase
 Test Execution Phase
 Result Analysis Phase
 Bug Tracking and Reporting Phase
 Rework Phase
 Final Test and Implementation Phase
Software Testing Strategy
There are three software testing strategies, under which all software testing activities are carried out. They are:
 White Box Testing Strategy
 Black Box Testing Strategy
 Gray Box Testing Strategy
There are other types of software testing, which are used to test the product to ensure that the
software meets the requirements of the end user. They include:
 Functional Testing
 Smoke Testing
 Usability Testing
 Validation Testing
 Compatibility Testing
 Sanity Testing
 Exploratory Testing
 Security Testing
 Regression Testing
 Recovery Testing
 Performance Testing (This includes 2 sub-types - Load Testing and Stress Testing)

White Box Testing

White box testing, as the name suggests, gives an internal view of the software. It is also known as structural testing or glass box testing, as the interest lies in what is inside the box. It is often used to measure the thoroughness of testing through the coverage of a set of structural elements or coverage items.

Unit Testing
Unit testing is also known as component testing, module testing or program testing. The aim of
this testing type is to search for defects in and verify the functioning of the individual software
component.
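A minimal sketch of what a developer's unit test might look like, using Python's built-in unittest framework; the apply_discount function and its rules are invented purely for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Unit under test: reduce a price by a percentage (rules invented here)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Saved as a file, the tests run with python -m unittest; each method exercises one behavior of the component in isolation.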
Static Testing
It is the testing of software or a component of the software at the specification or implementation
level without any sort of execution of the software. The different types of methodologies used
include different forms of reviews, coding standard implementation, code metrics, code
structure, etc.

Code Coverage
It is an analysis method used to determine which parts of the software have been covered by the test suite and which parts have not been executed. The coverage methods commonly used are statement coverage, decision coverage and condition coverage. Statement coverage gives the percentage of executable statements that have been exercised by a test suite. Decision coverage, on the other hand, is the percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies 100% statement coverage.
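The gap between statement and decision coverage can be made concrete with a tiny sketch. The hand-rolled outcome tracker below is illustrative only; a real project would measure coverage with a tool such as coverage.py:

```python
# Records which outcomes of the function's single decision were exercised.
observed_outcomes = set()

def absolute(x):
    went_negative = x < 0          # the only decision in this function
    observed_outcomes.add(went_negative)
    if went_negative:
        x = -x                     # note: there is no 'else' branch
    return x

assert absolute(-5) == 5
# Every executable statement has now run: 100% statement coverage.
# But only the True outcome of the decision was exercised:
assert observed_outcomes == {True}          # decision coverage is incomplete

assert absolute(3) == 3
assert observed_outcomes == {True, False}   # now 100% decision coverage
```

Because the if has no else, one negative input runs every statement, yet a second test is still needed before both decision outcomes are covered; this is why full decision coverage implies full statement coverage but not vice versa.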

Error Guessing
A test design technique in which an experienced tester anticipates the defects that might be present in the software, or in a component under test, as a result of errors made. The tests are designed specifically to expose such defects.

Black Box Testing


Black box testing, as the name suggests, gives only an external view of the software. It involves testing either functional or non-functional aspects of the software, without any reference to its internal structure. We will now look at the different black box testing techniques.

Integration Testing
Integration testing involves testing the interfaces between components and their interactions with different parts of a system, such as the operating system, file system, hardware, or interfaces between different software systems.
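A minimal sketch of an integration test, assuming two invented components (a CSV field parser and a record builder) that cooperate through a plain-function interface:

```python
import unittest

def parse_csv_line(line):
    """Component A: split a CSV line into trimmed fields."""
    return [field.strip() for field in line.split(",")]

def build_record(fields):
    """Component B: turn parsed fields into a record dict."""
    name, age = fields
    return {"name": name, "age": int(age)}

class ParserRecordIntegrationTest(unittest.TestCase):
    """Exercises the interface between the parser and the record builder."""

    def test_components_cooperate_on_valid_input(self):
        record = build_record(parse_csv_line("Ada Lovelace , 36"))
        self.assertEqual(record, {"name": "Ada Lovelace", "age": 36})

    def test_interface_defect_surfaces_on_bad_field_count(self):
        # A line with the wrong number of fields breaks the interface contract.
        with self.assertRaises(ValueError):
            build_record(parse_csv_line("only-one-field"))
```

Each component could pass its own unit tests and still fail here; the integration test targets the hand-off between them.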

Functional Testing
It is testing based on an analysis of the specification of the functionality of a particular piece of software or a component of it. Functional testing is often based on five main points: suitability, interoperability, security, accuracy and compliance.
Performance Testing
The testing methodology used to determine the performance of a software product. To understand performance testing better, take the example of a website: how does it behave in an environment of third-party products, such as servers and middleware? This type of testing helps identify performance bottlenecks in high-use applications. Normally automated tests are used for performance testing; they apply normal, peak and exceptional load conditions and record the software's response to them.

Load Testing
This test determines and measures the behavior of a component or software as the load on it increases. For example, a number of parallel users and/or transactions are run on the system simultaneously to find the highest load the component or software can handle.
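A load test of this kind can be sketched with simulated parallel users; the handle_request stand-in below is hypothetical, and a real load test would call the actual system (often through a dedicated tool) rather than a local function:

```python
import concurrent.futures
import time

def handle_request(user_id):
    """Stand-in for the component under load; a real test would call the system."""
    time.sleep(0.01)               # simulate a small amount of work
    return f"ok:{user_id}"

def run_load_test(parallel_users):
    """Fire requests from many simulated users at once and report the outcome."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=parallel_users) as pool:
        results = list(pool.map(handle_request, range(parallel_users)))
    elapsed = time.perf_counter() - start
    failures = [r for r in results if not r.startswith("ok")]
    return {"users": parallel_users, "failures": len(failures), "seconds": elapsed}

# Step the load up until failures appear or response times degrade.
reports = [run_load_test(users) for users in (10, 50, 100)]
```

The load is increased in steps; the point where failures or unacceptable response times first appear is the capacity limit the test is looking for.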

Stress Testing
Stress testing and load testing are often confused and used interchangeably, which is wrong. Stress testing evaluates the system at or beyond the limits of its specified requirements. It helps determine the load under which the software fails, and how. The process is similar to performance testing, but the load applied is very high and simulated.

Exploratory Testing
This is a hands-on software testing technique: there is minimal planning and maximum test execution. The tester actively controls the design of the tests as they are performed, and the information gained while testing is used to design new and better tests.

Usability Testing
Usability testing involves tests carried out to determine the extent to which the software product is understood, easy to learn and operate, and attractive to users under specific conditions. The user-friendliness of the software is under examination here, and the application flow is checked to see how the user moves through the software.

Reliability Testing
Reliability testing checks the ability of the software to perform its required functions under stated conditions for a specific period of time and/or for a specific number of operations or transactions.

Ad-Hoc Testing
It is the least formal method of testing software. It helps in deciding the scope and duration of the various other tests that need to be carried out on the application, and it also helps the tester gain a better understanding of the software.

Smoke Testing
This software testing type covers the main functionality of a component or the software. It verifies that the most crucial functions of the software work, without concerning itself with the finer details.

System Testing
This type of software testing involves testing the entire system in accordance with the
requirements of the client. It is based on overall requirements specifications and covers all
combined parts of a system.

End to End Testing


This software testing type involves testing the entire application in a real-world-like scenario. Here the software interacts with the database, uses the network for communication, and interacts with other hardware, applications or systems where necessary. Compatibility testing and security testing are part of end to end testing.

Regression Testing
One of the important types of testing carried out on a software product. The focus of regression testing is on retesting the software to verify that no new defects have been introduced into the product after other defects have been fixed.
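A regression test typically pins a previously fixed defect so it cannot silently return. In the sketch below, the function, the whitespace bug and the bug ID #412 are all invented for illustration:

```python
import unittest

def normalize_username(name):
    """Fixed code: hypothetical bug #412 left repeated inner spaces unhandled."""
    return " ".join(name.split()).lower()

class UsernameRegressionTest(unittest.TestCase):
    """Re-run after every change so old defects cannot silently return."""

    def test_bug_412_repeated_spaces_stay_fixed(self):
        # This input reproduced the original defect before the fix.
        self.assertEqual(normalize_username("  Ada   Lovelace "), "ada lovelace")

    def test_existing_behaviour_is_unchanged(self):
        self.assertEqual(normalize_username("Grace"), "grace")
```

The first test documents the defect that was fixed; the second guards against the fix itself breaking established behavior.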

Acceptance Testing
This is formal testing carried out to determine whether or not the system satisfies the acceptance criteria, and to enable the users or another authorized entity to decide whether the system is to be accepted. Acceptance testing is carried out with respect to user needs, user requirements and the business processes to be carried out using the software.

Alpha Testing
Alpha testing involves simulated or actual operational testing by potential users or an independent test team at the developer's site, but outside the development arena. It is often performed on off-the-shelf software products as a form of internal acceptance testing.

Beta Testing
Operational testing carried out by potential or existing users at an external site to determine if the
system satisfies the user needs and fits within the business processes is known as beta testing. It
is carried out as a form of acceptance testing for off-the-shelf software to acquire feedback from
the market.

Using these software testing types, both the development team and the end user can ascertain whether the software indeed satisfies the requirements. Different organizations have different methods of testing software. In some cases testing begins at the start of the development process; in other organizations testers are involved only in the later stages. The earlier testers are involved in the software development process, the less time and money has to be spent toward the end of it.
Software Testing Techniques
Software testing methodologies are divided into static testing techniques and dynamic testing techniques. Software review and tool-based static analysis come under static testing techniques. Specification-based, structure-based and experience-based testing techniques all fall under dynamic testing techniques. Equivalence partitioning is one of the important strategies used in specification-based testing. Take a look at the article titled 'software testing technique' for detailed information.
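Equivalence partitioning can be sketched briefly: the input domain is split into classes that the specification treats the same way, and one representative per class is tested. The age rule below is an assumed specification used only for illustration:

```python
# Assumed specification (for illustration): ages 18-65 inclusive are accepted.
def is_eligible(age):
    return 18 <= age <= 65

# One representative value per equivalence class is enough, because every
# value in a class is expected to behave the same way.
partitions = {
    "below range (invalid)": (10, False),
    "inside range (valid)":  (30, True),
    "above range (invalid)": (80, False),
}

for name, (representative, expected) in partitions.items():
    assert is_eligible(representative) == expected, name
```

Three tests cover the whole input domain at the class level; boundary value analysis would add tests at 17, 18, 65 and 66.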

Bug Life Cycle


The aim of the entire software testing activity is to find defects in the software before it is released to the end user. The bug life cycle starts after the tester logs a bug. The phases in the bug life cycle are:
 New
 Open
 Assign
 Test
 Deferred
 Rejected
 Duplicate
 Verified
 Reopened
 Closed
New
This is the first stage of the bug life cycle, in which the tester reports a bug. The presence of the bug becomes evident when the tester runs the newly developed application and it does not respond in the expected manner. The bug is then sent to the testing lead for approval.

Open
When the bug is reported to the testing lead, he examines the bug by retesting the product. If he
finds that the bug is genuine, he approves it and changes its status to 'open'.
Assign
Once the bug has been approved and found genuine by the testing lead, it is sent to the concerned software development team for resolution. It can be assigned to the team that created the software, or to some specialized team. After the bug is assigned, its status is changed to 'assign'.
Test
The team to which the bug has been assigned works on removing it. Once they have fixed the bug, it is sent back to the testing team for a retest. Before sending it back, its status is changed to 'test' in the report.

Deferred
If the development team changes the status of the bug to 'deferred', it means the bug will be fixed in a future release of the software. There can be myriad reasons why the team may not consider fixing the bug urgent: lack of time, low impact of the bug, or its negligible potential to disrupt the normal functioning of the software.

Rejected
Although the testing lead may have approved the bug as genuine, the software development team may not always agree. Ultimately, it is the prerogative of the development team to decide whether the bug is really genuine. If they doubt its presence or impact, they may change its status to 'rejected'.
Duplicate
If the development team finds that the same bug has been reported twice, or that two bugs point to the same problem, the status of one of them is changed to 'duplicate'. In this case, fixing one bug automatically takes care of the other.
Verified
When the software development team sends the fixed bug back for retesting, the bug undergoes the rigorous testing procedure again. If it is not found at the end of the test, its status is changed to 'verified'.
Reopened
If the bug still exists, its status is changed to 'reopened'. The bug then traverses its entire life cycle once again.
Closed
If no occurrence of the bug is reported and the software functions normally, the bug is 'closed'. This is the final stage, in which the bug has been fixed, tested and approved.
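The life cycle above behaves like a small state machine. The transition table below is one plausible reading of the stages just described (exact transitions vary between bug trackers):

```python
# Allowed status transitions, derived from the life cycle described above.
TRANSITIONS = {
    "New":       {"Open", "Rejected", "Duplicate", "Deferred"},
    "Open":      {"Assign"},
    "Assign":    {"Test", "Deferred", "Rejected", "Duplicate"},
    "Test":      {"Verified", "Reopened"},
    "Deferred":  {"Assign"},
    "Reopened":  {"Assign"},
    "Verified":  {"Closed"},
    "Rejected":  set(),
    "Duplicate": set(),
    "Closed":    set(),
}

def move(status, new_status):
    """Change a bug's status, refusing transitions the life cycle forbids."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot move from {status} to {new_status}")
    return new_status

# A bug travelling the happy path of the cycle:
status = "New"
for step in ("Open", "Assign", "Test", "Verified", "Closed"):
    status = move(status, step)
```

Modelling the cycle this way makes invalid jumps (for example, straight from 'New' to 'Closed') impossible to record by accident.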
Software Testing Models
There are different software testing models that the software testing team can choose from. Each of these models has different methods, as they are based on different principles. A number of factors are taken into consideration before a particular model is chosen. The different models that are used are:
 Waterfall Model in Testing
 Validation and Verification Model
 Spiral Model
 Rational Unified Process (RUP) Model
 Agile Model
 Rapid Application Development (RAD) Model
V Model
The V model gets its name from the fact that the graphical representation of the different test
process activities involved in this methodology resembles the letter 'V'. The basic steps involved
in this methodology are more or less the same as those in the waterfall model. However, this
model follows both a 'top-down' as well as a 'bottom-up' approach (you can visualize them
forming the letter 'V'). The benefit of this methodology is that in this case, both the development
and testing activities go hand-in-hand. For example, as the development team goes about its
requirement analysis activities, the testing team simultaneously begins with its acceptance testing
activities. By following this approach, time delays are minimized and optimum utilization of
resources is assured.
Spiral Model
As the name implies, the spiral model follows an approach in which there are a number of cycles
(or spirals) of all the sequential steps of the waterfall model. Once the initial cycle is completed,
a thorough analysis and review of the achieved product or output is performed. If it is not as per
the specified requirements or expected standards, a second cycle follows, and so on. This
methodology follows an iterative approach and is generally suited for very large projects having
complex and constantly changing requirements.
Rational Unified Process (RUP)
The RUP methodology is also similar to the spiral model in the sense that the entire testing
procedure is broken up into multiple cycles or processes. Each cycle consists of four phases,
namely inception, elaboration, construction and transition. At the end of each cycle, the product
or the output is reviewed and a further cycle (made up of the same four phases) follows if
necessary. Today, you will find certain organizations and companies adopting a slightly modified
version of the RUP, which goes by the name of Enterprise Unified Process (EUP).
Agile Model
This methodology follows neither a purely sequential approach nor does it follow a purely
iterative approach. It is a selective mix of both of these approaches in addition to quite a few new
developmental methods. Fast and incremental development is one of the key principles of this
methodology. The focus is on obtaining quick, practical and visible outputs and results, rather
than merely following theoretical processes. Continuous customer interaction and participation is
an integral part of the entire development process.
Rapid Application Development (RAD)
The name says it all. In this case, the methodology adopts a rapid development approach by
using the principle of component-based construction. After understanding the various
requirements, a rapid prototype is prepared and is then compared with the expected set of output
conditions and standards. Necessary changes and modifications are made after joint discussions
with the customer or the development team (in the context of software testing). Though this
approach does have its share of advantages, it can be unsuitable if the project is large, complex
and happens to be of an extremely dynamic nature, wherein the requirements are constantly
changing.
What is a Black Box Testing Strategy?

Black box testing is not a single type of testing; it is a testing strategy that does not require any knowledge of internal design or code. As the name "black box" suggests, no knowledge of internal logic or code structure is needed. The types of testing under this strategy are based entirely on testing the requirements and functionality of the work product or software application. Black box testing is sometimes also called "opaque testing", "functional/behavioral testing" or "closed box testing".

The basis of the black box testing strategy is selecting appropriate data per the functionality and testing it against the functional specifications, in order to check for normal and abnormal behavior of the system. Nowadays, it is becoming common to route testing work to a third party, since the developer of the system knows too much of its internal logic and coding, which makes the developer unfit to test the application.

In order to implement the black box testing strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action.

Various testing types that fall under the black box testing strategy are: functional testing, stress testing, recovery testing, volume testing, user acceptance testing (UAT), system testing, sanity or smoke testing, load testing, usability testing, exploratory testing, ad-hoc testing, alpha testing, beta testing, etc.

These testing types are divided into two groups: a) testing in which the user plays the role of tester, and b) testing in which the user is not required.

Testing method where user is not required:

Functional Testing:
In this type of testing, the software is tested for the functional requirements. The tests are written
in order to check if the application behaves as expected.
Stress Testing:
The application is tested against heavy load such as complex numerical values, large number of
inputs, large number of queries etc. which checks for the stress/load the applications can
withstand.
Load Testing:
The application is tested against heavy loads or inputs such as testing of web sites in order to find
out at what point the web-site/application fails or at what point its performance degrades.
Ad-hoc Testing:
This type of testing is done without any formal test plan or test case creation. Ad-hoc testing helps in deciding the scope and duration of the various other tests, and it also helps testers learn the application prior to starting any other testing.
Exploratory Testing:
This testing is similar to the ad-hoc testing and is done in order to learn/explore the application.

Usability Testing:
This testing is also called 'testing for user-friendliness'. It is done when the user interface of the application is an important consideration and needs to be specific to a particular type of user.
Smoke Testing:
This type of testing is also called sanity testing. It is done to check whether the application is ready for further major testing and is working properly, without failing at even the most basic level.

Recovery Testing:
Recovery testing is done to check how fast and how well the application can recover from any type of crash or hardware failure. The type or extent of recovery is specified in the requirement specifications.
Volume Testing:
Volume testing checks the efficiency of the application. A huge amount of data is processed through the application under test in order to check the extreme limits of the system.
Testing where user plays a role/user is required:
User Acceptance Testing:
In this type of testing, the software is handed over to the user in order to find out if the software
meets the user expectations and works as it is expected to.

Alpha Testing:
In this type of testing, the users are invited at the development center where they use the
application and the developers note every particular input or action carried out by the user. Any
type of abnormal behavior of the system is noted and rectified by the developers.
Beta Testing:
In this type of testing, the software is distributed as a beta version to the users, who test the application at their own sites. As the users explore the software, any exceptions or defects that occur are reported to the developers.
What is Verification?
The standard definition of Verification is: "Are we building the product RIGHT?" That is, Verification is a process that ensures the software product is developed the right way. The software should conform to its predefined specifications; as product development goes through different stages, an analysis is done to ensure that all required specifications are met.
Methods and techniques used in Verification and Validation must be designed carefully, and their planning starts right at the beginning of the development process. The Verification part of the 'Verification and Validation Model' comes before Validation, and it incorporates software inspections, reviews, audits, walkthroughs, buddy checks, etc. in each phase of Verification (every phase of Verification is a phase of the Testing Life Cycle).
During Verification, the work product (the ready part of the software being developed and various documentation) is reviewed and examined personally by one or more persons in order to find and point out the defects in it. This process helps in the prevention of potential bugs, which may cause the project to fail.
A few terms involved in Verification:
Inspection:
Inspection involves a team of about 3-6 people, led by a leader, which formally reviews the documents and work product during various phases of the product development life cycle. The work product and related documents are presented to the inspection team, whose members bring different interpretations to the presentation. The bugs detected during the inspection are communicated to the next level so they can be taken care of.
Walkthroughs:
A walkthrough can be considered the same as an inspection but without formal preparation (of any presentation or documentation). During the walkthrough meeting, the presenter/author introduces the material to all the participants in order to familiarize them with it. Even though walkthroughs can help in finding potential bugs, they are mostly used for knowledge sharing and communication.
Buddy Checks:
This is the simplest type of review activity used to find bugs in a work product during verification. In a buddy check, one person goes through the documents prepared by another person in order to find any mistakes, i.e. bugs that the author could not find previously.
The activities involved in the Verification process are: requirement specification verification, functional design verification, internal/system design verification and code verification (these phases can also be subdivided further). Each activity ensures that the product is developed the right way and that every requirement, specification, design, code, etc. is verified.
What is Validation?
Validation is the process of finding out whether the product being built is the right product; that is, whatever software product is being developed, it should do what the user expects it to do. The software product should functionally do what it is supposed to and satisfy all the functional requirements set by the user. Validation is done during or at the end of the development process in order to determine whether the product satisfies the specified requirements.
The Validation and Verification processes go hand in hand, but the Validation process visibly starts after the Verification process ends (after coding of the product ends). Each Verification activity (such as requirement specification verification or functional design verification) has a corresponding Validation activity (such as functional validation/testing, code validation/testing, or system/integration validation).
All types of testing methods are basically carried out during the Validation process. Test plans, test suites and test cases are developed and used during the various phases of the Validation process. The phases involved in the Validation process are: Code Validation/Testing, Integration Validation/Integration Testing, Functional Validation/Functional Testing, and System/User Acceptance Testing/Validation.
Terms used in Validation process:
Code Validation/Testing:
Developers as well as testers do code validation. Unit code validation, or unit testing, is a type of testing that developers conduct in order to find any bugs in the code unit or module they developed. Code testing other than unit testing can be done by testers or developers.
Integration Validation/Testing:
Integration testing is carried out in order to find out whether different (two or more) units or modules coordinate properly. This test helps in finding out whether there is any defect in the interface between different modules.
Functional Validation/Testing:
This type of testing is carried out in order to find if the system meets the functional requirements.
In this type of testing, the system is validated for its functional behavior. Functional testing does
not deal with internal coding of the project, instead, it checks if the system behaves as per the
expectations.
User Acceptance Testing or System Validation:
In this type of testing, the developed product is handed over to users or paid testers in order to test it in a real-time scenario. The product is validated to find out whether it works according to the system specifications and satisfies all the user requirements. As the users or paid testers use the software, previously undiscovered bugs may come up; these are communicated to the developers to be fixed. This helps in improving the final product.
STLC
Software testing has its own life cycle that meets every stage of the SDLC. The software testing life cycle diagram can help one visualize the various phases. They are:
1. Requirement Stage
2. Test Planning
3. Test Analysis
4. Test Design
5. Test Verification and Construction
6. Test Execution
7. Result Analysis
8. Bug Tracking
9. Reporting and Rework
10. Final Testing and Implementation
11. Post Implementation

Requirement Stage
This is the initial stage of the life cycle process, in which the developers take part in analyzing the requirements for designing a product. Testers can also be involved, as they can think from the users' point of view in a way the developers may not. Thus a panel of developers, testers and users can be formed. Formal meetings of the panel can be held in order to document the requirements discussed, which can then be used as the software requirements specification, or SRS.
Test Planning
Test planning is predetermining a plan well in advance to reduce further risks. Without a good
plan, no work can lead to success be it software-related or routine work. A test plan document
plays an important role in achieving a process-oriented approach. Once the requirements of the
project are confirmed, a test plan is documented. The test plan structure is as follows:
1. Introduction: This describes the objective of the test plan.
2. Test Items: The items referred to in preparing this document are listed here, such as the SRS and project plan.
3. Features to be tested: This describes the coverage area of the test plan, i.e. the list of features to be tested, based on the implicit and explicit requirements from the customer.
4. Features not to be tested: Features that can be skipped in the testing phase are listed here. Features that are out of the scope of testing, like incomplete modules or low-severity items (e.g. GUI features that do not hamper further processing), can be included in the list.
5. Approach: This is the test strategy, which should be appropriate to the level of the plan. It should be consistent with the higher and lower levels of the plan.
6. Item pass/fail criteria: Related to show-stopper issues. The criteria used must make clear whether a test item has passed or failed.
7. Suspension criteria and resumption requirements: The suspension criterion specifies the
criterion that is to be used to suspend all or a portion of the testing activities, whereas
resumption criterion specifies when testing can resume with the suspended portion.
8. Test deliverable: This includes a list of documents, reports, charts that are required to be
presented to the stakeholders on a regular basis during testing and when testing is
completed.
9. Testing tasks: This section is needed to avoid confusion over whether defects should be reported against a future function. It also helps users and testers avoid incomplete functions and prevents waste of resources.
10. Environmental needs: The special requirements of the test plan, which depend on the environment in which the application has to be designed, are listed here.
11. Responsibilities: This section assigns responsibilities, identifying who is to be held
accountable in case a risk materializes.
12. Staffing and training needs: Training on the application/system and training on the
testing tools to be used needs to be given to the staff members who are responsible for the
application.
13. Risks and contingencies: This lists the probable risks and events that can occur, and
what can be done in such situations.
14. Approval: This decides who can approve the process as complete and allow the project
to proceed to the next level; who that is depends on the level of the plan.
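The 14 sections above can be sketched as a simple, machine-readable structure. This is only an illustration: the field names and sample values below are invented, not a standard schema.

```python
# A minimal sketch of the test plan structure described above.
# All keys and values are illustrative, not part of any formal standard.
test_plan = {
    "introduction": "Verify the billing module against the SRS.",
    "test_items": ["SRS v1.2", "Project plan"],
    "features_to_be_tested": ["invoice creation", "payment capture"],
    "features_not_to_be_tested": ["GUI themes (low severity)"],
    "approach": "Black box testing at the system level",
    "pass_fail_criteria": "All critical test cases pass; no show-stoppers open",
    "suspension_criteria": "Suspend if the build fails smoke testing",
    "test_deliverables": ["test cases", "defect reports", "summary report"],
    "environmental_needs": ["staging server", "test database"],
    "responsibilities": {"test lead": "sign-off", "testers": "execution"},
    "risks_and_contingencies": ["tight schedule -> prioritize critical paths"],
    "approval": "QA manager",
}

# A simple completeness check before circulating the plan for approval.
required = ["introduction", "test_items", "features_to_be_tested",
            "pass_fail_criteria", "approval"]
missing = [k for k in required if not test_plan.get(k)]
print("missing sections:", missing)
```

Keeping the plan in a structured form like this makes it easy to verify that no mandatory section was skipped before the plan goes out for approval.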

Test Analysis
Once the test plan documentation is done, the next stage is to analyze which types of software
testing should be carried out at the various stages of the SDLC.

Test Design
Test design is done based on the requirements of the project documented in the SRS. This phase
decides whether testing is to be manual or automated. For automation testing, the different paths
to be tested are identified first, and scripts are written for them if required. An end-to-end
checklist covering all the features of the project is also needed.

Test Verification and Construction
In this phase the test plans, the test design, and automated test scripts are completed. Stress and
performance testing plans are also finalized at this stage. When the development team finishes a
unit of code, the testing team helps them test that unit and reports any bugs found. Integration
testing and bug reporting are done in this phase of the software testing life cycle.
Test Execution
Planning and execution of the various test cases is done in this phase. Once unit testing is
completed, functional testing is carried out. At first, top-level testing is done to find top-level
failures, and bugs are reported to the development team immediately to get the required
workaround. Test reports have to be documented properly, and the bugs have to be reported to
the development team.
Result Analysis
Once a bug is fixed by the development team, i.e., after successful execution of the test case,
the testing team has to retest it to compare the expected values with the actual values, and
declare the result as pass/fail.
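The pass/fail comparison at the heart of result analysis can be sketched in a few lines; the test case format below is invented for illustration.

```python
# Sketch of the result analysis step: after a fix, the test case is re-run
# and the actual output is compared with the expected value.
def retest(test_case, actual_output):
    """Return 'pass' if the actual output matches the expected value."""
    return "pass" if actual_output == test_case["expected"] else "fail"

tc = {"id": "TC-101", "expected": 42}
print(retest(tc, 42))  # the fix worked: prints "pass"
print(retest(tc, 41))  # the defect persists: prints "fail"
```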
Bug Tracking
This is one of the most important stages, as the Defect Profile Document (DPD) has to be
updated to let the developers know about the defect. The Defect Profile Document contains the
following fields:
1. Defect Id: Unique identification of the Defect.
2. Test Case Id: Test case identification for that defect.
3. Description: Detailed description of the bug.
4. Summary: This field contains some keyword information about the bug, which can help
in minimizing the number of records to be searched.
5. Defect Submitted By: Name of the tester who detected/reported the bug.
6. Date of Submission: Date at which the bug was detected and reported.
7. Build No.: The build number in which the bug was detected.
8. Version No.: The version information of the software application in which the bug was
detected and fixed.
9. Assigned To: Name of the developer who is supposed to fix the bug.
10. Severity: Degree of severity of the defect.
11. Priority: Priority of fixing the bug.
12. Status: This field displays current status of the bug.
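One record of such a Defect Profile Document can be sketched as a Python dataclass. The field names mirror the list above; all values are made up for illustration.

```python
from dataclasses import dataclass
from datetime import date

# A sketch of a single DPD record; field names follow the list above,
# and the sample values are invented.
@dataclass
class Defect:
    defect_id: str
    test_case_id: str
    description: str
    summary: str
    submitted_by: str
    date_of_submission: date
    build_no: str
    version_no: str
    assigned_to: str
    severity: str        # e.g. "critical", "major", "minor"
    priority: str        # e.g. "high", "medium", "low"
    status: str = "new"  # typical flow: new -> open -> fixed -> retested -> closed

bug = Defect("DEF-042", "TC-101", "Login fails with a 20-char password",
             "login password length", "A. Tester", date(2024, 1, 15),
             "build-17", "1.3.0", "B. Developer", "major", "high")
print(bug.status)  # a newly reported defect starts in the "new" state
```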

Reporting and Rework
Testing is an iterative process. Once a bug is reported and the development team fixes it, the
software has to undergo the testing process again to ensure that the bug is resolved. Regression
testing has to be done. Once the Quality Analyst assures that the product is ready, the software is
released for production. Before release, the software has to undergo one more round of top-level
testing. Thus, testing is an ongoing process.
Final Testing and Implementation
This phase focuses on the remaining levels of testing, such as acceptance, load, stress,
performance and recovery testing. The application needs to be verified under the conditions
specified in the SRS. Various documents are updated, and the different matrices for testing are
completed at this stage of the software testing life cycle.

Post Implementation
Once the tests are evaluated, the errors that occurred during the various levels of the software
testing life cycle are recorded. Creating plans for improvement and enhancement is an ongoing
process; it helps prevent similar problems from occurring in future projects. In short, planning
for the improvement of the testing process for future applications is done in this phase.
Requirements Traceability Matrix
How to Draft a Requirements Traceability Matrix
A requirements traceability matrix is often used to determine whether the requirements of an
ongoing project are being met. An RTM is also used to create Request for Proposal, project plan
task, and deliverables documents. Let's take a look at the typical steps involved in drawing up a
requirements traceability matrix.
 Draft a template depending upon your project requirements. You can take the help of
many free RTM templates that are available online.
 Take all the required data from your business requirements catalog and transfer it to the
template.
 Mark each requirement with a unique identifier and put that identifier against the
corresponding requirement.
 Include the use case ID into the requirements traceability matrix if you have made use of
them while developing the requirements.
 Incorporate the System Requirements Specification ID in the RTM even if you did not
create it yourself.
 Appropriately include the testing data in the requirements traceability matrix in such a
way that the different types of tests and the changes made in the project get accounted for.
 Double-check that your RTM covers each specific deliverable requirement, right from
the point of conception throughout the entire testing phase.
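At its core, the matrix these steps produce is a mapping from requirement identifiers to the test cases that cover them. A minimal sketch, with invented IDs:

```python
# The core of an RTM: requirement IDs mapped to covering test cases.
# All identifiers below are illustrative, not from a real project.
rtm = {
    "REQ-1.1": ["TC-101", "TC-102"],
    "REQ-1.2": ["TC-103"],
    "REQ-2.1": [],            # no coverage yet -- should be flagged
}

# The final double-check from the steps above: every requirement must be
# traced to at least one test case before sign-off.
untested = [req for req, cases in rtm.items() if not cases]
print("requirements without coverage:", untested)
```

Running this kind of check regularly is what keeps the matrix useful: any requirement with an empty test case list is caught before it slips through to production.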
A Requirements Traceability Matrix should ensure that nothing is given the green flag to
proceed to development or production in a haphazard manner, and that your project manager
gets all the information he needs from you, ready and in order! Now that we have been through
all the necessary steps for creating a Requirements Traceability Matrix, let us see what one is
expected to look like.
Requirements Traceability Matrix Example
The following Requirements Traceability Matrix example should clear up any doubts about
what an RTM looks like and how it should be drafted.

Requirement Identifiers   Requirements   Req. 1   Req. 1   Req. 1   Req. 1     Req. 1
                          Tested         UC 1.1   UC 1.2   UC 1.3   Tech 1.1   Tech 1.2
Test Cases                409            4        2        3        1          1
Implicitly Tested         25             -        -        -        -          -
1.1.1                     2              -        *        -        *          -
1.1.2                     1              -        *        -        -          -

That was the Requirements Traceability Matrix for beginner software testing engineers! Make
sure you include all the parameters that are bare necessities, and do not overcrowd the RTM, as
it would then look too heavy to serve the purpose of simple traceability. It is always good to keep
a regularly updated requirements traceability matrix by your side to know how the project is
progressing. If you are aware of software testing basics, I'm sure figuring out how to draft your
own RTM will be a cakewalk! All the best!
Ad Hoc Testing
Ad hoc testing is a term commonly used for tests carried out without planning or documentation.
The tests are intended to be executed only once, unless a defect is discovered. Ad hoc testing is
part of exploratory testing, which is the least formal of test methods. In this context, it has been
criticized for being unstructured, but that can also be a strength:
important bugs can be found quickly. It is performed with improvisation; the tester tries to find
bugs by whatever means seem appropriate. This test is most often used as a complement to other
types of tests, such as regression tests. Ad hoc testing can be seen as a special case of
exploratory testing. During exploratory testing, a large number of the test cases found will be ad
hoc (one-off tests), but some will not. One way to distinguish between the two is to examine the
notes associated with an exploratory test. In general, exploratory tests have little or no formal
documentation other than the result and some notes. If the notes are detailed enough that the test
can be repeated by reading them, it is less likely to be an ad hoc test. Conversely, if there are no
notes for an exploratory test, or if the notes are intended to guide further testing rather than to
reproduce the test, then it is almost certainly an ad hoc test.
Where it fits in the STLC:
Ad hoc testing finds its place throughout the test cycle. At the beginning of a project, it extends
the testers' understanding of the program, which helps in discovering bugs early. In the middle of
a project, the data gathered can be used to set priorities and timetables. As the project moves
closer to the ship date, it can be used to examine defect fixes more rigorously, as described above.
Strengths of Ad-Hoc testing
One of the best uses of this test is to discover bugs early. Reading the specifications or
requirements (if any exist) seldom gives you a good idea of how a program actually behaves.
Even the user documentation may not capture the look and feel of a program. Ad hoc testing can
find holes in your testing strategy, and can expose relationships between subsystems that would
otherwise not be obvious. In this way, it serves as a tool for checking the completeness of your
testing. Missing cases may be found and added to your arsenal of tests. Finding new tests in this
way can also be a sign that you should perform root cause analysis. Ask yourself or your test
team, "What other tests in this category should we be running?" Defects found by ad hoc testing
are often examples of whole categories of forgotten test cases.
Alpha & Beta Testing
Before any software product can be released, it must be tested. Typically, a formal test strategy is
planned and executed on the software before it can be considered for release. Often, after the
formal phases of testing have been completed, additional testing called Alpha and Beta testing is
performed.
Alpha testing is done before the software is made available to the general public. Typically, the
developers perform the Alpha testing using white box testing techniques, with black box and
grey box techniques carried out afterwards. The focus is on simulating real users by using these
techniques to carry out the tasks and operations that a typical user might perform. Normally, the
actual Alpha testing is carried out in a lab-type environment rather than in the usual workplaces.
Once these techniques have been satisfactorily completed, the Alpha testing is considered to be
complete.
The next phase of testing is known as Beta testing. Unlike Alpha testing, people outside of the
company are included in the testing. As the aim is to perform a sanity check before the product's
release, there may be defects found during this stage, so the distribution of the software is
limited to a selection of users outside of the company. Typically, outsourced testing companies
are used, as their feedback is independent and comes from a different perspective than that of the
software development company's employees. The feedback can be used to fix defects that were
missed, assist in preparing support teams for expected issues, or in some cases even force last-
minute changes to functionality.
In some cases, the Beta version of software will be made available to the general public. This can
give vital 'real-world' information for software/systems that rely on acceptable performance and
load to function correctly.
The types of techniques used during a public Beta test are typically restricted to black box
techniques. This is because, firstly, the general public does not have inside knowledge of the
software code under test, and secondly, the aim of the Beta test is often to gain a sanity check
and to gather future customer feedback on how the product will be used in the real world.
Acceptance Testing
Acceptance testing (also known as user acceptance testing) is a type of testing carried out to
verify that the product is developed as per the standards and specified criteria and meets all the
requirements specified by the customer. This type of testing is generally carried out by a
user/customer when the product is developed externally by another party.

Acceptance testing falls under the black box testing methodology, where the user is not very
interested in the internal workings/coding of the system, but evaluates the overall functioning of
the system and compares it with the requirements they specified. User acceptance testing is
considered one of the most important tests performed by the user before the system is finally
delivered or handed over to the end user.

Acceptance testing is also known as validation testing, final testing, QA testing, factory
acceptance testing, application testing, etc. In software engineering, acceptance testing may be
carried out at two different levels: one at the system provider level and another at the end-user
level (hence called user acceptance testing, field acceptance testing, or end-user testing).

Acceptance testing in software engineering generally involves the execution of a number of test
cases that together constitute a particular functionality, based on the requirements specified by
the user. During acceptance testing, the system has to operate in a computing environment that
imitates the actual operating environment at the user's site. The user may choose to perform the
testing in an iterative manner or with a set of varying parameters (for example, missile guidance
software can be tested under varying payloads, different weather conditions, etc.).

The outcome of acceptance testing is termed a success or failure based on the critical operating
conditions the system passes through successfully or unsuccessfully, and on the user's final
evaluation of the system.

The test cases and test criteria in acceptance testing are generally created by the end user and
cannot be completed without business scenario input from the user. This type of testing and test
case creation involves the most experienced people from both sides (developers and users), such
as business analysts, specialized testers, developers, and end users.
Process involved in Acceptance Testing
1. Test cases are created with the help of business analysts, business customers (end users),
developers, test specialists etc.
2. Test case suites are run against the input data provided by the user, for the number of
iterations that the customer sets as the base/minimum required test runs.
3. The outputs of the test case runs are evaluated against the criteria/requirements specified
by the user.
4. Depending upon whether the outcome is as desired by the user, consistent over the
number of test suites run, or inconclusive, the user may call it successful/unsuccessful
or suggest some more test case runs.
5. Based on the outcome of the test runs, the system may get rejected or accepted by the
user with or without any specific condition.
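The process above can be sketched as a small acceptance loop. Everything in this sketch is invented for illustration: the discount rule stands in for the system under test, and the input data, expected outputs, and iteration count stand in for what the business users would supply.

```python
# System under test: an assumed pricing rule (10% discount on orders of
# 100 or more). A real acceptance test would exercise the delivered system.
def compute_discount(order_total):
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

# Input data and expected outputs as supplied by the business users (step 1-2).
user_cases = [
    {"input": 250.0, "expected": 25.0},
    {"input": 99.0,  "expected": 0.0},
    {"input": 100.0, "expected": 10.0},
]

iterations = 3  # minimum number of runs agreed with the customer (step 2)
results = []
for _ in range(iterations):
    # Step 3: compare each output with the user's criterion.
    run_ok = all(compute_discount(c["input"]) == c["expected"] for c in user_cases)
    results.append(run_ok)

# Steps 4-5: a consistent, fully passing set of runs leads to acceptance.
verdict = "accepted" if all(results) else "rejected"
print(verdict)
```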
Acceptance testing is done to demonstrate the ability of the system/product to perform as per
the expectations of the user and to build confidence in the newly developed system/product. A
sign-off on the contract stating that the system is satisfactory is possible only after successful
acceptance testing.

Types of Acceptance Testing


User Acceptance Testing: User acceptance testing in software engineering is considered an
essential step before the system is finally accepted by the end user. In general terms, user
acceptance testing is the process of the user testing the system before finally accepting it.

Alpha Testing & Beta Testing: Alpha testing is a type of acceptance testing carried out at the
developer's site by users (internal staff). In this type of testing, the user tests the system while
the developer simultaneously notes and observes the outcome.

Beta testing is a type of testing done at the user's site. The users provide feedback to the
developer on the outcome of the testing. This type of testing is also known as field testing.
Feedback from users is used to improve the system/product before it is released to other
users/customers.

Operational Acceptance Testing: This type of testing is also known as operational
readiness/preparedness testing. It is the process of ensuring that all the required components
(processes and procedures) of the system are in place in order to allow the user/tester to use it.

Contract and Regulation Acceptance Testing: In contract and regulation acceptance testing, the
system is tested against the criteria specified in the contract document, and is also checked to
ensure it meets all applicable government and local authority regulations and laws, as well as
all the basic standards.
What is Accessibility Testing?
Accessibility Testing is an approach to measuring a product's ability to be easily customized or
modified for the benefit of users with disabilities. Users should be able to change input and
output features, keyboard features, screen colors, sounds, and even the ability to zoom in on text
and images.

What is the purpose of Accessibility Testing?
The purpose of accessibility testing is to pinpoint problems within websites and products that
may otherwise prevent users with disabilities from accessing the information they are searching
for.
Accessibility Testing can help you determine the following:
 Compliance - How your product complies with legal requirements regarding accessibility
 Effectiveness - How fast users with disabilities can use your product to accomplish basic
and complex tasks
 Usefulness - The likelihood that users with disabilities will want to use your product
again because it meets their needs and expectations
 Satisfaction - How appealing your product is to users with disabilities
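One small, concrete accessibility check that can be automated is verifying that images carry alternative text for screen readers. The sketch below uses Python's standard html.parser; the HTML snippet is made up, and a real accessibility audit covers far more (contrast, focus order, keyboard navigation, etc.).

```python
from html.parser import HTMLParser

# Flag <img> tags that have no alt text -- a common barrier for
# screen-reader users and a typical finding in accessibility testing.
class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag.
        if tag == "img" and not dict(attrs).get("alt"):
            self.missing_alt += 1

page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
checker = AltTextChecker()
checker.feed(page)
print("images missing alt text:", checker.missing_alt)
```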

The following are some of the interview questions for manual testing. These will give you a
fair idea of what such questions are like.

 What is accessibility testing?
 What is Ad Hoc Testing?
 What is Alpha Testing?
 What is Beta Testing?
 What is Component Testing?
 What is Compatibility Testing?
 What is Data Driven Testing?
 What is Concurrency Testing?
 What is Conformance Testing?
 What is Context Driven Testing?
 What is Conversion Testing?
 What is Depth Testing?
 What is Dynamic Testing?
 What is End-to-End testing?
 What is Endurance Testing?
 What is Installation Testing?
 What is Gorilla Testing?
 What is Exhaustive Testing?
 What is Localization Testing?
 What is Loop Testing?
 What is Mutation Testing?
 What is Positive Testing?
 What is Monkey Testing?
 What is Negative Testing?
 What is Path Testing?
 What is Ramp Testing?
 What is Performance Testing?
 What is Recovery Testing?
 What is Regression Testing?
 What is Re-testing?
 What is Stress Testing?
 What is Sanity Testing?
 What is Smoke Testing?
 What is Volume Testing?
 What is Usability Testing?
 What is Scalability Testing?
 What is Soak Testing?
 What is User Acceptance Testing?
 Can you explain the V model in manual testing?
 What is the waterfall model in manual testing?
 What is the structure of bug life cycle?
 What is the difference between bug, error and defect?
 How does one add objects into the Object Repository?
 What are the different modes of recording?
 What does 'testing' mean?
 What is the purpose of carrying out manual testing for a background process that does not
have a user interface and how do you go about it?
 Explain with an example what test case and bug report are.
 How does one go about reviewing a test case and what are the types that are available?
 What is AUT?
 What is compatibility testing?
 What is alpha testing and beta testing?
 What is the V model?
 What is debugging?
 What is the difference between debugging and testing? Explain in detail.
 What is the fish model?
 What is port testing?
 Explain in detail the difference between smoke and sanity testing.
 What is the difference between usability testing and GUI?
 Why does one require object spy in QTP?
 What is the test case life cycle?
 Why does one save .vbs library files in QTP/WinRunner?
 When do we use update mode in QTP?
 What is virtual memory?
 What is visual source safe?
 What is the difference between test scenarios and test strategy?
 What is the difference between properties and methods in QTP?
