Testing
Answer: objective, scope, entrance criteria, exit criteria, features to be tested, features
not to be tested, approach, item pass/fail criteria, suspension criteria and resumption
requirements, test deliverables, environmental needs, staffing and training needs,
schedule, risk analysis, approval.
A document describing the scope, approach, resources, and schedule of intended testing
activities. It identifies test items, the features to be tested, the testing tasks, who will do
each task, and any risks requiring contingency planning.
106. What are the differences between interface and integration testing? Are system specification
and functional specification the same? What are the differences between system and functional
testing?
107. What is Multi Unit testing?
108. What are the different types, methodologies, approaches, and methods in software testing?
109. What is the difference between test techniques and test methodology?
One test methodology is a three-step process. Creating a test strategy, Creating a test
plan/design, and Executing tests. This methodology can be used and molded to your
organization's needs. Rob Davis believes that using this methodology is important in the
development and ongoing maintenance of his customers' applications.
What is exploratory testing?
Answer: Exploratory testing is simultaneous learning, test design, and test execution. It is
usually done by experienced testers, whenever we have vague requirements about the
product and time constraints.
110. What types of testing does non-functional testing include?
Answer: Non-functional testing is used to test quality factors of our build other than
usability and functionality. It includes 7 types of testing: 1) Compatibility testing
2) Configuration testing 3) Load testing 4) Stress testing 5) Storage testing
6) Data-volume testing 7) Installation testing
1. What is the difference between use case, test case, test plan?
Use Case: It is prepared by the business analyst in the Functional Requirement
Specification (FRS); use cases are nothing but the steps given by the customer.
Test cases: Prepared by the test engineer, based on the use cases from the FRS, to check
the functionality of the application thoroughly.
Test Plan: The team lead prepares the test plan; in it he represents the scope of the test,
what to test and what not to test, scheduling, what to test using automation, etc.
2. How can we design the test cases from requirements? Do the requirements
represent the exact functionality of the AUT?
Yes, the requirements should represent the exact functionality of the AUT.
First of all, you have to analyze the requirements very thoroughly in terms of
functionality. Then you have to think about suitable test case design techniques [black
box design techniques like Equivalence Class Partitioning (ECP), Boundary Value
Analysis (BVA), Error Guessing and Cause-Effect Graphing] for writing the test cases.
Using these techniques you should design test cases that are capable of exposing
defects.
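As an illustrative sketch of the ECP and BVA techniques just named (the age range 18..60 and the validation function are assumed for the example, not taken from any real application):

```python
# Black-box test design sketch for a hypothetical input field that
# accepts an integer age in the range 18..60 (an assumed specification).

def is_valid_age(age):
    """Reference implementation of the assumed validation rule."""
    return isinstance(age, int) and 18 <= age <= 60

# Equivalence Class Partitioning (ECP): one representative per class.
ecp_cases = {
    "valid class (18..60)": (35, True),
    "invalid class (< 18)": (10, False),
    "invalid class (> 60)": (75, False),
}

# Boundary Value Analysis (BVA): values at and just beyond each boundary.
bva_cases = {
    "lower boundary": (18, True),
    "just below lower": (17, False),
    "upper boundary": (60, True),
    "just above upper": (61, False),
}

def run_cases(cases):
    """Return {case name: True if actual behavior matched expectation}."""
    return {name: is_valid_age(value) == expected
            for name, (value, expected) in cases.items()}
```

ECP picks one value per partition to keep the case count small; BVA adds the values where off-by-one defects typically hide.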
25. What is Capture/Replay Tool?
A test tool that records test input as it is sent to the software under test. The input cases
stored can then be used to reproduce the test at a later time. Most commonly applied to
GUI test tools.
Phase of development where functionality is implemented in entirety; bug fixes are all
that are left. All functions found in the Functional Specifications have been implemented.
A formal testing technique where the programmer reviews source code with a group who
ask questions analyzing the program logic, analyzing the code with respect to a checklist
of historically common programming errors, and analyzing its compliance with coding
standards.
A database that contains definitions of all data items defined during analysis.
A device, computer program, or system that accepts the same inputs and produces the
same outputs as a given system.
A group of individuals with related interests that meet at regular intervals to consider
problems or other matters related to the quality of outputs of a process and to the
correction of problems or to the improvement of quality.
That aspect of the overall management function that determines and implements the
quality policy.
A pre-release version, which contains the desired functionality of the final version, but
which needs to be tested for bugs (which ideally should be removed before the final
version is released).
A deliverable that describes all data, functional and behavioral requirements, all
constraints, and all validation requirements for software.
The degree to which a system or component facilitates the establishment of test criteria
and the performance of tests to determine whether those criteria have been met.
An execution environment configured for testing. May consist of specific hardware, OS,
network topology, configuration of the product under test, other application or system
software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case is a commonly used term for a specific test. This is usually the smallest unit of
testing. A Test Case will consist of information such as requirements testing, test steps,
verification steps, prerequisites, outputs, test environment, etc. A set of inputs, execution
preconditions, and expected outcomes developed for a particular objective, such as to
exercise a particular program path or to verify compliance with a specific requirement.
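The information a test case carries, as listed above, can be captured in a small structure; the field names below are illustrative, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Minimal sketch of the test case contents described above."""
    case_id: str
    requirement: str                       # requirement being verified
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    expected_outcome: str = ""

# Hypothetical example record; identifiers are invented.
tc = TestCase(
    case_id="TC-001",
    requirement="REQ-LOGIN-01",
    preconditions=["user account exists"],
    steps=["open login page", "enter credentials", "submit"],
    expected_outcome="user lands on the dashboard",
)
```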
Test Driven Development: a testing methodology associated with Agile programming in
which every chunk of code is covered by unit tests, which must all pass all the time, in an
effort to eliminate unit-level and regression bugs during development. Practitioners of
TDD write a lot of tests, i.e. roughly as many lines of test code as production code.
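A minimal, hypothetical TDD-style example using Python's standard unittest module (the `add` function is invented purely for illustration; in TDD the tests would be written before it):

```python
import unittest

# In TDD the tests below come first; this is the minimal production
# code written afterwards to make them pass.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-1, 1), 0)
```

Such a file would normally be run with `python -m unittest`, and the red/green cycle repeated for every new chunk of behavior.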
A program or test tool used to execute tests. Also known as a Test Harness.
The hardware and software environment in which tests will be run, and any other
software with which the software under test interacts when under test including stubs and
test drivers.
A program or test tool used to execute tests. Also known as a Test Driver.
108. What is Test Procedure?
A document providing detailed instructions for the execution of one or more test cases.
A collection of tests used to validate the behavior of a product. The scope of a Test Suite
varies from organization to organization. There may be several Test Suites for a
particular product for example. In most cases however a Test Suite is a high level
concept, grouping together hundreds or thousands of tests related by what they are
intended to test.
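A sketch of grouping related tests into one suite with Python's unittest (the class and test names are invented for illustration):

```python
import unittest

class LoginTests(unittest.TestCase):
    def test_login_flag(self):
        self.assertTrue(True)  # placeholder check

class CheckoutTests(unittest.TestCase):
    def test_total(self):
        self.assertEqual(2 + 3, 5)  # placeholder check

def product_suite():
    """Group the related test classes into a single Test Suite."""
    suite = unittest.TestSuite()
    loader = unittest.TestLoader()
    suite.addTests(loader.loadTestsFromTestCase(LoginTests))
    suite.addTests(loader.loadTestsFromTestCase(CheckoutTests))
    return suite
```

In a real product the suite would aggregate hundreds of cases related by what they are intended to test.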
Computer programs used in the testing of a system, a component of the system, or its
documentation.
A company commitment to develop a process that achieves high quality product and
customer satisfaction.
Scripted end-to-end testing which duplicates specific workflows which are expected to be
utilized by the end-user.
Reported tester: developer ratios range from 10:1 to 1:10. There's no simple answer. It
depends on so many things, Amount of reused code, number and type of interfaces,
platform, quality goals, etc. It also can depend on the development model. The more
specs, the fewer testers. The roles can play a big part also. Does QA own beta? Do you
include process auditors or planning activities? These figures can all vary very widely
depending on how you define 'tester' and 'developer'. In some organizations, a 'tester' is
anyone who happens to be testing software at the time -- such as their own. In other
organizations, a 'tester' is only a member of an independent test group. It is better to ask
about the test labor content than it is to ask about the tester/developer ratio. The test labor
content, across most applications is generally accepted as 50%, when people do honest
accounting. For life-critical software, this can go up to 80%.
127. How can new Software QA processes be introduced in an existing organization?
- A lot depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious management
buy-in is required and a formalized QA process is necessary.
- Where the risk is lower, management and organizational buy-in and QA
implementation may be a slower, step-at-a-time process. QA processes should be
balanced with productivity so as to keep bureaucracy from getting out of hand.
- For small groups or projects, a more ad-hoc process may be appropriate, depending on
the type of customers and projects. A lot will depend on team leads or managers,
feedback to developers, and ensuring adequate communications among customers,
managers, developers, and testers.
- In all cases the most value for effort will be in requirements management processes,
with a goal of clear, complete, testable requirement specifications or expectations.
One of the most reliable methods of ensuring problems, or failure, in a complex software
project is to have poorly documented requirements specifications. Requirements are the
details describing an application's externally-perceived functionality and properties.
Requirements should be clear, complete, reasonably detailed, cohesive, attainable, and
testable. A non-testable requirement would be, for example, 'user-friendly' (too
subjective). A testable requirement would be something like 'the user must enter their
previously-assigned password to access the application'. Determining and organizing
requirements details in a useful and efficient way can be a difficult effort; different
methods are available depending on the particular project. Many books are available that
describe various approaches to this task. Care should be taken to involve ALL of a
project's significant 'customers' in the requirements process. 'Customers' could be in-
house personnel or out, and could include end-users, customer acceptance testers,
customer contract officers, customer management, future software maintenance
engineers, salespeople, etc. Anyone who could later derail the project if their expectations
aren't met should be included if possible. Organizations vary considerably in their
handling of requirements specifications. Ideally, the requirements are spelled out in a
document with statements such as 'The product shall.....'. 'Design' specifications should
not be confused with 'requirements'; design specifications should be traceable back to the
requirements. In some organizations requirements may end up in high level project plans,
functional specification documents, in design documents, or in other documents at
various levels of detail. No matter what they are called, some type of documentation with
detailed requirements will be needed by testers in order to properly plan and execute
tests. Without such documentation, there will be no clear-cut way to determine if a
software application is performing correctly.
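The 'previously-assigned password' requirement quoted above is testable precisely because it can be turned into concrete checks. A toy sketch (the password store and access function are hypothetical; a real system would verify salted hashes, never plain strings):

```python
# Hypothetical access check, used only to show how a testable
# requirement maps directly onto pass/fail assertions.
ASSIGNED_PASSWORDS = {"alice": "s3cret"}

def grant_access(user, password):
    """Grant access only with the user's previously-assigned password."""
    return ASSIGNED_PASSWORDS.get(user) == password

# The requirement "the user must enter their previously-assigned
# password to access the application" becomes direct checks:
assert grant_access("alice", "s3cret") is True
assert grant_access("alice", "wrong") is False
assert grant_access("bob", "s3cret") is False
```

A requirement like 'user-friendly' admits no such assertions, which is what makes it non-testable.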
137. What steps are needed to develop and run software tests?
The following are some of the steps to consider:
- Obtain requirements, functional design, and internal design specifications and other
necessary documents
- Obtain budget and schedule requirements
- Determine project-related personnel and their responsibilities, reporting requirements,
required standards and processes (such as release processes, change processes, etc.)
- Identify the application's higher-risk aspects, set priorities, and determine scope and
limitations of tests
- Determine test approaches and methods - unit, integration, functional, system, load,
usability tests, etc.
- Determine test environment requirements (hardware, software, communications, etc.)
- Determine testware requirements (record/playback tools, coverage analyzers, test
tracking, problem/bug tracking, etc.)
- Determine test input data requirements
- Identify tasks, those responsible for tasks, and labor requirements
- Set schedule estimates, timelines, milestones
- Determine input equivalence classes, boundary value analyses, error classes
- Prepare test plan document and have needed reviews/approvals
- Write test cases
- Have needed reviews/inspections/approvals of test cases
- Prepare test environment and testware, obtain needed user manuals/reference
documents/configuration guides/installation guides, set up test tracking processes, set up
logging and archiving processes, set up or obtain test input data
- Obtain and install software releases
- Perform tests
- Evaluate and report results
- Track problems/bugs and fixes
- Retest as needed
- Maintain and update test plans, test cases, test environment, and testware through the
life cycle
143. What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive
testing is still not justified, risk analysis is again needed and the same considerations as
described previously in 'What if there isn't enough time for thorough testing?' apply. The
tester might then do ad hoc testing, or write up a limited test plan based on the risk
analysis.
144. What if the application has functionality that wasn't in the requirements?
146. What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas.
There is no easy solution in this situation, other than:
- Hire good people
- Management should 'ruthlessly prioritize' quality issues and maintain focus on the
customer
- Everyone in the organization should be clear on what 'quality' means to the customer
Client/server applications can be quite complex due to the multiple dependencies among
clients, data communications, hardware, and servers. Thus testing requirements can be
extensive. When time is limited (as it usually is) the focus should be on integration and
system testing. Additionally, load/stress/performance testing may be useful in
determining client/server application limitations and capabilities. There are commercial
tools to assist with such testing.
Web sites are essentially client/server applications - with web servers and 'browser'
clients. Consideration should be given to the interactions between html pages, TCP/IP
communications, Internet connections, firewalls, applications that run in web pages (such
as applets, javascript, plug-in applications), and applications that run on the server side
(such as cgi scripts, database interfaces, logging applications, dynamic page generators,
asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions
of each, small but sometimes significant differences between them, variations in
connection speeds, rapidly changing technologies, and multiple standards and protocols.
The end result is that testing for web sites can become a major ongoing effort. Other
considerations might include:
- What are the expected loads on the server (e.g., number of hits per unit time?), and what
kind of performance is required under such loads (such as web server response time,
database query response times). What kinds of tools will be needed for performance
testing (such as web load testing tools, other tools already in house that can be adapted,
web robot downloading tools, etc.)?
- Who is the target audience? What kind of browsers will they be using? What kind of
connection speeds will they be using? Are they intra-organization (thus with likely high
connection speeds and similar browsers) or Internet-wide (thus with a wide variety of
connection speeds and browser types)?
- What kind of performance is expected on the client side (e.g., how fast should pages
appear, how fast should animations, applets, etc. load and run)?
- Will down time for server and content maintenance/upgrades be allowed? How much?
- What kinds of security (firewalls, encryption, passwords, etc.) will be required and
what is it expected to do? How can it be tested?
- How reliable are the site's Internet connections required to be? And how does that
affect backup system or redundant connection requirements and testing?
- What processes will be required to manage updates to the web site's content, and what
are the requirements for maintaining, tracking, and controlling page content, graphics,
links, etc.?
- Which HTML specification will be adhered to? How strictly? What variations will be
allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and/or graphics
throughout a site or parts of a site?
- How will internal and external links be validated and updated? How often?
- Can testing be done on the production system, or will a separate test system be
required? How are browser caching, variations in browser option settings, dial-up
connection variabilities, and real-world internet 'traffic congestion' problems to be
accounted for in testing?
- How extensive or customized are the server logging and reporting requirements; are
they considered an integral part of the system and do they require testing?
- How are cgi programs, applets, javascripts, ActiveX components, etc. to be
maintained, tracked, controlled, and tested?
- Pages should be 3-5 screens max unless content is tightly focused on a single topic. If
larger, provide internal links within the page.
- The page layouts and design elements should be consistent throughout a site, so that it's
clear to the user that they're still within the site.
- Pages should be as browser-independent as possible, or pages should be provided or
generated based on the browser type.
- All pages should have links external to the page; there should be no dead-end pages.
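Some of the points above (link extraction, dead-end pages) can be checked mechanically. A minimal offline sketch using Python's standard html.parser; real link validation would also fetch each URL and check its status:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href targets from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkCollector()
    parser.feed(html)
    return parser.links

def is_dead_end(html):
    """A page with no outgoing links is a 'dead-end page'."""
    return not extract_links(html)

# Hypothetical page used only to exercise the checker.
page = ('<html><body><a href="/home">Home</a>'
        '<a href="/contact">Contact</a></body></html>')
```

Collected links would then be resolved and validated on whatever schedule the test plan sets.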
Well-engineered object-oriented design can make it easier to trace from code to internal
design to functional design to requirements. While there will be little effect on black box
testing (where an understanding of the internal design of the application is unnecessary),
white-box testing can be oriented to the application's objects. If the application was well-
designed this can simplify test design.
You are the test manager starting on system testing. The development team says that due
to a change in the requirements, they will be able to deliver the system for SQA 5 days
past the deadline. You cannot change the resources (work hours, days, or test tools).
What steps will you take to be able to finish the testing in time?
Your company is about to roll out an e-commerce application. It’s not possible to test the
application on all types of browsers on all platforms and operating systems. What steps
would you take in the testing environment to reduce the business risks and commercial
risks?
In your organization, testers are delivering code for system testing without performing
unit testing. Give an example of test policy:
Policy statement
Methodology
Measurement
Testers in your organization are performing tests on the deliverables even after significant
defects have been found. This has resulted in unnecessary testing of little value, because
re-testing needs to be done after defects have been rectified. You are going to update the
test plan with recommendations on when to halt testing. What recommendations are
you going to make?
How do you measure:
Test Effectiveness
Test Efficiency
You found out the senior testers are making more mistakes than junior testers; you need
to communicate this aspect to the senior tester. Also, you don’t want to lose this tester.
How should one go about constructive criticism?
You are assigned to be the test lead for a new program that will automate take-offs and
landings at an airport. How would you write a test strategy for this new program?
When should you begin test planning?
When should you begin testing?
How do you scope out the size of the testing effort?
How many hours a week should a tester work?
How should your staff be managed? How about your overtime?
How do you estimate staff requirements?
What do you do (with the project tasks) when the schedule fails?
How do you handle conflict with programmers?
How do you know when the product is tested well enough?
What characteristics would you seek in a candidate for test-group manager?
What do you think the role of test-group manager should be? Relative to senior
management? Relative to other technical groups in the company? Relative to your staff?
How do your characteristics compare to the profile of the ideal manager that you just
described?
How does your preferred work style work with the ideal test-manager role that you just
described? What is different between the way you work and the role you described?
Who should you hire in a testing group and why?
Can testability features be added to the product code?
Do testers and developers work cooperatively and with mutual respect?
What are the benefits of creating multiple actions within any virtual user script?
Who should be involved in each level of testing? What should be their responsibilities?
You have more verifiable QA experience testing:
It ensures that every piece of code written is tested in some way
Tests give confidence that every part of the code is working
Your experience with Programming within the context of Quality Assurance is:
N/A - I have no programming experience in C, C++ or Java.
You have done some programming in your role as a QA Engineer, and are comfortable
meeting such requirements in Java, C and C++ or VC++.
You have developed applications of moderate complexity that have taken up to three
months to complete.
Your skill in maintaining and debugging an application is best described as:
N/A - You have not participated in debugging a product.
You have worked under the mentorship of a team lead to learn various debugging
techniques and strategies.
You have both an interest in getting to the root of a problem and understand the steps
you need to take to document it fully for the developer.
You are experienced in working with great autonomy on debugging/maintenance efforts
and have a track record of successful projects you can discuss.
Why does testing not prove a program is 100 percent correct (except for extremely simple
programs)?
Because we can only test a finite number of cases, but the program may have an infinite
number of possible combinations of inputs and outputs
Because the people who test the program are not the people who write the code
Because the program is too long
All of the above
We CAN prove a program is 100 percent correct by testing
Which statement regarding Validation is correct:
It refers to the set of activities that ensures the software has been built according to the
customer's requirements.
It refers to the set of activities that ensure the software correctly implements specific
functions.
Are regression tests required or do you feel there is a better use for resources?
Our software designers use UML for modeling applications. Based on their use cases, we
would like to plan a test strategy. Do you agree with this approach or would this mean
more effort for the testers.
Tell me about a difficult time you had at work and how you worked through it.
Give me an example of something you tried at work but did not work out so you had to
go at things another way.
How can one file-compare future-dated output files from a program which has changed,
against the baseline run which used the current date for input? The client does not want to
mask dates on the output files to allow compares. - Answer: Rerun the baseline with input
files future-dated the same number of days as the future-dated run of the changed
program. Now run a file compare of the baseline's future-dated output against the
changed program's future-dated output.
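The comparison step at the end of this answer can be scripted. A sketch using Python's filecmp; the file names are placeholders standing in for the real baseline and changed-program outputs:

```python
import filecmp
import os
import tempfile

def outputs_match(baseline_path, changed_path):
    """Byte-for-byte compare of the two future-dated output files."""
    return filecmp.cmp(baseline_path, changed_path, shallow=False)

# Illustrative run with throwaway files in place of the real outputs.
with tempfile.TemporaryDirectory() as tmp:
    baseline = os.path.join(tmp, "baseline_future.out")
    changed = os.path.join(tmp, "changed_future.out")
    for path in (baseline, changed):
        with open(path, "w") as fh:
            fh.write("ORDER 1001 DUE 2031-01-15\n")
    same = outputs_match(baseline, changed)
```

Because both runs were future-dated by the same offset, the embedded dates line up and no masking is needed before the compare.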
What would you do if management pressure is stating that testing is complete and you
feel differently?
Why did you ever become involved in QA/testing?
What is the testing lifecycle and explain each of its phases?
A test plan is a document that contains the scope, approach, test design and test
strategies. It includes the following:
1. Test case identifier
2. Scope
3. Features to be tested
4. Features not to be tested
5. Test strategy
6. Test approach
7. Test deliverables
8. Responsibilities
9. Staffing and training
10. Risk and contingencies
11. Approval
A test case, on the other hand, is a documented set of steps/activities that are carried out
or executed on the software in order to confirm its functionality/behavior for a certain set
of inputs.
156. What are the table contents in test plans and test cases?
A test plan is a document which is prepared with the details of the testing priority. A test
plan generally includes:
1. Objective of testing
2. Scope of testing
3. Reason for testing
4. Timeframe
5. Environment
6. Entrance and exit criteria
7. Risk factors involved
8. Deliverables
180. How will you test the field that generates auto numbers of the AUT when we click
the button "New" in the application?
One solution is to create a text file in a certain location, update it with the auto-generated
value each time we run the test, and compare the currently generated value with the
previous one.
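A sketch of that approach in Python; the state-file name is arbitrary, and in practice `current_value` would be read from the application under test by the automation tool:

```python
import os

STATE_FILE = "last_auto_number.txt"  # illustrative location

def check_auto_number(current_value, state_file=STATE_FILE):
    """Compare the value generated by 'New' against the one saved last run.

    Returns True when the new value differs from the previously recorded
    one (the usual expectation for an auto-generated number), then stores
    the new value for the next run.
    """
    previous = None
    if os.path.exists(state_file):
        with open(state_file) as fh:
            previous = fh.read().strip()
    with open(state_file, "w") as fh:
        fh.write(str(current_value))
    return previous is None or str(current_value) != previous
```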
181. How will you evaluate the fields in the application under test using an automation
tool?
182. Can we perform testing of a single application at the same time using different
tools on the same machine?
No. The testing tools would be left in ambiguity as to which browser was opened by
which tool.
The basic difference in web testing is that here we have to test for URL coverage and
link coverage. Using WinRunner we can conduct web testing, but we have to make sure
that the WebTest option is selected in the "Add-in Manager". Using WinRunner we
cannot test XML objects.
186. What are the problems encountered during testing of application compatibility on
different browsers and on different operating systems?
188. How exactly is testing of application compatibility on different browsers and on
different operating systems done?
189. How does testing proceed when the SRS or any other document is not given?
If the SRS is not there, we can perform exploratory testing. In exploratory testing the
basic module is executed and, depending on its results, the next plan is executed.
By using endurance testing. Endurance testing means checking for memory leaks or other
problems that may occur with prolonged execution.
Memory leaks - incomplete deallocation - are bugs that happen very often. Buffer
overflow means data sent as input to the server that overflows the boundaries of the input
area, thus causing the server to misbehave. Buffer overflows can be exploited.
• What are basic, core practices for a QA specialist?
• What is the value of a testing group? How do you justify your work and budget?
• What is the role of the test group vis-à-vis documentation, tech support, and so
forth?
• How much interaction with users should testers have, and why?
• How should you learn about problems discovered in the field, and what should
you learn from those problems?
• What are the roles of glass-box and black-box testing tools?
• What development model should programmers and the test group use?
• How do you get programmers to build testability support into their code?
• What are the key challenges of testing?
• Have you ever completely tested any part of a product? How?
• Have you done exploratory or specification-driven testing?
• Should every business test its software the same way?
• Describe components of a typical test plan, such as tools for interactive products
and for database products, as well as cause-and-effect graphs and data-flow
diagrams.
• When have you had to focus on data integrity?
• How do you prioritize testing tasks within a project?
• What are two of your strengths that you will bring to our QA/testing team?
• Which of the following testing strategies ignores the internal structure of the
software?
a. Interface testing
b. Top-down testing
c. White box testing
d. Black box testing
e. Sandwich testing
• Regarding your experience with XML:
a. An expert with computers, the Internet and Windows, and am often asked to help
others.
b. New to computers and would need a little help to get started.
c. Comfortable with e-mail and the Internet, but would need help with other
applications required for the position.
d. Comfortable with e-mail and a variety of computer software, but not an
expert.
• Your knowledge and experience in Linux is:
a. N/A - You have no direct Linux operating system experience and would need help
to become functionally proficient.
b. You have a good understanding of Linux and run this OS on your home PC.
c. You have experience with multiple Linux variants and feel as comfortable
with it as most people do working in Windows.
• What tools are available for support of testing during the software development life
cycle?
4. Why is it often hard for management to get serious about quality assurance?
* Solving problems is a high-visibility process; preventing problems is low-visibility.
This is illustrated by an old parable: In ancient China there was a family of healers,
one of whom was known throughout the land and employed as a physician to a great
lord.
• A lot depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious
management buy-in is required and a formalized QA process is necessary.
• Where the risk is lower, management and organizational buy-in and QA
implementation may be a slower, step-at-a-time process. QA processes should be
balanced with productivity so as to keep bureaucracy from getting out of hand.
• For small groups or projects, a more ad-hoc process may be appropriate,
depending on the type of customers and projects. A lot will depend on team leads
or managers, feedback to developers, and ensuring adequate communications
among customers, managers, developers, and testers.
• The most value for effort will often be in (a) requirements management processes,
with a goal of clear, complete, testable requirement specifications embodied in
requirements or design documentation, or in 'agile'-type environments extensive
continuous coordination with end-users, (b) design inspections and code
inspections, and (c) post-mortems/retrospectives.
• Other possibilities include incremental self-managed team approaches such as
'Kaizen' methods of continuous process improvement, the Deming-Shewhart
Plan-Do-Check-Act cycle, and others.
20. How do you develop a test plan and schedule? Describe bottom-up and top-down
approaches.
7. What are the entry criteria for Functionality and Performance testing?
A. Entry criteria for Functionality testing: a Functional Specification / BRS
(CRS) / User Manual, and an integrated application that is stable for testing.
Entry criteria for Performance testing: successful completion of functional testing, i.e.
all functional requirements have been covered, tested, and approved or validated.
9. Why do you go for White box testing, when Black box testing is available?
A. The objective of black box testing is a benchmark that certifies the commercial
(business) and functional (technical) aspects of the application. Loops, structures,
arrays, conditions, files, etc. are very micro-level, but they are the foundation of any
application; white box testing examines these things at that level.
So even though black box testing is available, we should also go for white box testing,
to check the correctness of the code and the integration of the modules.
11. When to start and stop testing?
A. This can be difficult to determine. Many modern software applications are so
complex, and run in such an interdependent environment, that complete testing can
never be done.
Common factors in deciding when to stop are: deadlines (release or testing deadlines),
test cases completed with a certain percentage passed, the test budget being depleted,
coverage of code/functionality/requirements reaching a specified point, the bug rate
falling below a certain level, or the end of the beta/alpha testing period.
22. What are the types of testing you know and have experienced?
A. I am experienced in Black Box testing.
52. What is the difference between Two-Tier & Three-Tier Architecture?
A. Two-tier architecture: this is client/server architecture, where the client sends
requests directly to the server and gets responses directly from the server.
Three-tier architecture: this is typical of web-based applications; here, middleware sits
between the client and the server. If the client sends a request, it goes to the
middleware, the middleware forwards it to the server, and vice versa.
Dynamic testing: test activities that are performed by running the software.
1. What are the differences between interface and integration testing? Are system
specification and functional specification the same? What are the differences
between system and functional testing?
2. What is Multi Unit testing?
3. What are the different types, methodologies, approaches, methods in software
testing
4. What is the difference between test techniques and test methodology?
5. What is meant by test environment,… what is meant by DB installing and
configuring and deploying skills?
6. What is logsheet? And what are the components in it?
7. What is Red Box testing? What is Yellow Box testing? What is Grey Box testing?
8. What is business process in software testing?
9. What is the difference between Desktop application testing and Web testing?
10. Find the value of each letter in the cryptarithm: NOON + SOON + MOON =
JUNE.
11. With multiple testers how does one know which test cases are assigned to them? •
Folder structure • Test process
12. What is the difference between a Test Plan, a Test Strategy, a Test Scenario, and a
Test Case? What is their order of succession in the STLC?
13. How many functional testing tools are available? What is the easiest scripting
language used?
14. Which phase is called the Blackout or Quiet Phase in the SDLC?
15. When an application is given for testing, with what initial testing the testing will
be started and when are all the different types of testing done following the initial
testing?
17. Who are the three stake holders in testing?
18. What is meant by bucket testing?
19. What is test case analysis?
20. The recruiter asked if I have Experience in Pathways. What is this?
21. What are the main things we have to keep in mind while writing the test cases?
Explain with format by giving an example
22. How we can write functional and integration test cases? Explain with format by
giving examples.
23. Explain the water fall model and V- model of software development life cycles
with block diagrams.
24. For notepad application can any one write the functional and system test cases?
25. What is installation shield in testing
26. What is one key element of the test case?
27. What are the management tools we have in testing?
28. Can we write Functional test case based on only BRD or only Use case?
29. What’s main difference between smoke and sanity testing? When are these
performed?
30. What Technical Environments have you worked with?
31. Have you ever converted Test Scenarios into Test Cases?
32. What is the ONE key element of ‘test case’?
33. What is the ONE key element of a Test Plan?
34. What is SQA testing? tell us steps of SQA testing
35. Which Methodology you follow in your test case?
36. What are the test cases prepared by the testing team
37. During the start of the project, how will the company come to a conclusion on
whether a tool is required for testing or not?
38. What is a Test procedure?
39. What is the difference between SYSTEM testing and END-TO-END testing?
40. What is the difference between an exception and an error?
41. How much time is/should be allocated for testing out of total Development time
based on industry standards?
42. Define Quality - bug free, Functionality working or both?
43. What is the major difference between Web services & client server environment?
44. Is there any tool to calculate how much time should be allocated for testing out of
total development?
45. What is Scalability testing? Which tool is used?
47. What is scalability testing? What are the phases of the scalability testing?
48. What kind of things does one need to know before starting an automation project?
49. What is the difference between a Test Plan, a Test Strategy, a Test Scenario, and a
Test Case? What is their order of succession in the STLC?
50. How many functional testing tools are available? What is the easiest scripting
language used?
51. The project is completed; "completed" means that UAT testing is in progress. In
that situation, what will you do as a tester?
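The letter puzzle in question 10 above appears to be the classic cryptarithm NOON + SOON + MOON = JUNE (each letter stands for one digit, no leading zeros). A brute-force solver is short enough to sketch:

```python
from itertools import permutations

def word_value(word, assignment):
    """Convert a word to the number formed by its letters' digits."""
    return int("".join(str(assignment[ch]) for ch in word))

def solve():
    """Find all digit assignments satisfying NOON + SOON + MOON = JUNE."""
    letters = "NOSMJUE"
    solutions = []
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if 0 in (a["N"], a["S"], a["M"], a["J"]):  # no leading zeros
            continue
        if (word_value("NOON", a) + word_value("SOON", a)
                + word_value("MOON", a)) == word_value("JUNE", a):
            solutions.append(a)
    return solutions

solutions = solve()
# One satisfying assignment: N=2, O=4, S=1, M=5, J=9, U=3, E=6
# (2442 + 1442 + 5442 = 9326).
```

The puzzle has more than one solution (S and M can be swapped, for instance), so a full answer should enumerate all of them, as the solver does.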
1. I-soft
What should be done after writing a test case?
3. Define the components present in a test strategy.
4. Define the components present in a test plan.
5. Define database testing.
6. What are the different types of test cases that you have written in your project?
9. Have you written a test plan?
8. How will you validate the functionality of the test cases if there is no business
requirement document or user requirement document as such?
9. What is the testing process followed in your company?
10. Tell me about CMM Level 4. What steps are to be followed to achieve the
CMM Level 4 standards?
11. What is back-end testing?
13. How will you write test cases for a given scenario, i.e. main page, login screen,
transaction, report verification?
15. What is CVS and why is it used?
2. Explain about the project, and draw the architecture of your project.
3. What are the different types of severity?
5. What are the responsibilities of a tester?
6. Give some examples of how you will write the test cases if a scenario involves a login
screen.
11. How will you ensure that you have covered all the functionality while writing test cases
if there is no functional spec and no KT about the application?
17. Install/uninstall testing: testing of full, partial, or upgrade install/uninstall processes.
21. Exploratory testing: often taken to mean a creative, informal software test that is not
based on formal test plans of test cases; testers may be learning the software as they test
it.
22. Ad-hoc testing: similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.
24. Comparison testing: comparing software weaknesses and strengths to competing
products.
1) What is SCORM?
2) What is Sec 508?
3) Have you done any portal testing?
4) Do you have any idea about LMS or LCMS?
5) Have you done any compliance testing?
6) Have you done any compatibility testing?
7) What are the critical issues found while testing the projects in your organization?
Functionality:
In testing the functionality of a web site, the following should be tested:
Links
Internal links
External links
Mail links
Broken links
Forms
Field validation
Functional chart
Error message for wrong input
Optional and mandatory fields
Database
Testing will be done on the database integrity.
Cookies
Testing will be done on the client system side, on the temporary internet files.
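The form checks listed above (field validation, mandatory vs. optional fields, error messages for wrong input) lend themselves to simple data-driven tests. A minimal sketch; the field names and messages are hypothetical:

```python
def validate_form(data):
    """Validate a registration form; return a list of error messages (empty if valid)."""
    errors = []
    if not data.get("email"):                  # mandatory-field check
        errors.append("email is required")
    elif "@" not in data["email"]:             # wrong-input check
        errors.append("email is invalid")
    if not data.get("name"):                   # mandatory-field check
        errors.append("name is required")
    return errors

# Positive case: valid input yields no errors.
ok = validate_form({"email": "a@b.com", "name": "Lee"})
# Negative case: a missing mandatory field yields the expected error message.
missing = validate_form({"name": "Lee"})
```

Each table row of field/expected-message pairs then becomes one test case, which keeps the mandatory-field and error-message checks systematic.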
Performance:
Performance testing can be applied to understand the web site’s scalability, or to
benchmark the performance in the environment of third party products such as servers
and middle ware for potential purchase.
Connection speed:
Tested over various networks like dial-up, ISDN, etc.
Load
What is the no. of users per time?
Check for peak loads & how system behaves.
Large amount of data accessed by user.
Stress
Continuous load.
Performance of memory, CPU, file handling, etc.
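The load checks above (number of users per unit time, behavior at peak load) can be prototyped with a thread pool; `handle_request` here is a hypothetical stand-in for a real user transaction:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(user_id):
    """Stand-in for one user transaction; returns its elapsed time in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate server-side work
    return time.perf_counter() - start

def run_load(num_users):
    """Fire num_users concurrent 'users' and collect their response times."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        return list(pool.map(handle_request, range(num_users)))

timings = run_load(20)
```

Real load tests use dedicated tools, but the shape is the same: ramp up the user count and watch how the response-time distribution degrades at peak load.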
Usability :
Usability testing is the process by which the human-computer interaction characteristics
of a system are measured, and weaknesses are identified for correction. Usability can be
defined as the degree to which a given piece of software assists the person sitting at the
keyboard to accomplish a task, as opposed to becoming an additional impediment to such
accomplishment. The broad goal of usable systems is often assessed using several
Criteria:
Ease of learning
Navigation
Subjective user satisfaction
General appearance
Security:
The primary reason for testing the security of a web application is to identify potential
vulnerabilities and subsequently repair them.
The following types of testing are described in this section:
Network Scanning
Vulnerability Scanning
Password Cracking
Log Review
Integrity Checkers
Virus Detection
Performance Testing
Performance testing is a rigorous evaluation of a working system under realistic
conditions to identify bottlenecks and to compare measures such as response time and
throughput with requirements. The goal of performance testing is not to find bugs, but to
eliminate bottlenecks and establish a baseline for future regression testing.
Compatibility Testing
Testing to ensure compatibility of an application or Web site with different browsers,
OSes and hardware platforms. Different versions, configurations, display resolutions, and
Internet connection speeds can all impact the behavior of the product and introduce costly
and embarrassing bugs. We test for compatibility using real test environments; that is,
testing how the system performs in a particular software, hardware or network
environment. Compatibility testing can be performed manually or can be driven by an
automated functional or regression test suite. The purpose of compatibility testing is to
reveal issues related to the product's interaction with other software as well as
hardware. The product's compatibility is evaluated by first identifying the
hardware/software/browser components that the product is designed to support. Then a
hardware/software/browser matrix is designed that indicates the configurations on which
the product will be tested. Then, with input from the client, a testing script is designed
that will be sufficient to evaluate compatibility between the product and the matrix.
Finally, the script is executed against the matrix, and any anomalies are investigated to
determine exactly where the incompatibility lies.
Some typical compatibility tests include testing your application:
On various client hardware configurations
Using different memory sizes and hard drive space
On various Operating Systems
In different network environments
With different printers and peripherals (e.g. zip drives, USB devices, etc.)
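The hardware/software/browser matrix described above is just a cross product of the supported components. A sketch; the support lists below are hypothetical examples, not a recommendation:

```python
from itertools import product

# Hypothetical support lists; a real matrix comes from the client's requirements.
browsers = ["Chrome", "Firefox", "Edge"]
operating_systems = ["Windows 11", "Ubuntu 22.04"]
resolutions = ["1920x1080", "1366x768"]

def build_matrix():
    """Enumerate every browser/OS/resolution configuration to be tested."""
    return list(product(browsers, operating_systems, resolutions))

matrix = build_matrix()
# 3 browsers x 2 OSes x 2 resolutions = 12 configurations.
```

When the full cross product is too large to execute, teams usually prune it with risk analysis or pairwise selection rather than testing every cell.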
A:– The test strategy comes first, and this is the high-level document. The approach for
testing starts from the test strategy, and then, based on this, the test lead prepares the
test plan.
63. What is the difference between a web-based application and a client/server
application from a tester's point of view?
A:– To check for the bug fixes; a fix should not disturb other functionality.
The date field can be checked in different ways. Positive testing: first we enter the date
in the given format.
47. If the project wants to release in 3 months, what type of risk analysis do you do in the test plan?
A:– Use risk analysis to determine where testing should be focused. Since it’s rarely
possible to test every possible aspect of an application, every possible combination of
events, every dependency, or everything that could go wrong, risk analysis is appropriate
to most software development projects. This requires judgment skills, common sense, and
experience. (If warranted, formal methods are also available.) Considerations can
include:
49. Where are you involved in the testing life cycle, and what types of tests do you perform?
A:– Generally test engineers are involved in the entire test life cycle, i.e. test planning,
test case preparation, execution, and reporting. Generally system testing, regression
testing, ad-hoc testing, etc.
50. What is the testing environment in your company, i.e. how does the testing process start?
A:– In any company except a small company, the Business Analyst prepares the use cases;
in a small company the Business Analyst prepares them along with the team lead.
55. What is the exact difference between a product and a project? Give an example.
A:– A project is developed for a particular client; the requirements are defined by the
client. A product is developed for the market; the requirements are defined by the
company itself by conducting a market survey.
Example:
Project: a shirt we have stitched by a tailor as per our specifications.
Product: a ready-made shirt, where a company imagines particular measurements and
makes the product.
Mainframes is a product.
A product has many more versions, but a project has fewer versions, depending on
change requests and enhancements.
59. What is the difference between a three-tier and a two-tier application?
A:– Client/server is a 2-tier application. In this, the front end or client is connected to
the database server through a Data Source Name; the front end is the monitoring level.
Web-based architecture is a 3-tier application. In this, the browser is connected to the
web server through TCP/IP, and the web server is connected to the database server; the
browser is the monitoring level. In general, black box testers concentrate on the
monitoring level of any type of application.
Here the business logic is stored on one server, and all the clients are dumb terminals. If a
user requests anything, the request is first sent to the server; the server fetches the data
from the DB server and sends it to the clients. This is the flow for 3-tier architecture.
For example, if I want to give some discount, all my business logic is on the server, so I
need to change it in one place, not at each client. This is the main advantage of 3-tier
architecture.
27. If the actual result doesn’t match with expected result in this situation what should we
do?
29. What is the difference between functional testing & black box testing?
30. What is heuristic checklist used in Unit Testing?
31. What is the difference between System Testing, Integration Testing & System
Integration Testing?
32. How to calculate the estimate for test case design and review?
34. What are the contents of Risk management Plan? Have you ever prepared a Risk
Management Plan ?
35. If we have no SRS or BRS but we have test cases, do you execute the test cases
blindly or do you follow any other process?
A: — A test case would have detailed steps of what the application is supposed to do, so:
1) The functionality of the application is known.
2) In addition you can refer to the back end, i.e. look into the database, to gain more
knowledge of the application.
A: — ST:
Smoke testing is non-exhaustive software testing, ascertaining that the most crucial
functions of a program work, but not bothering with finer details. The term comes to
software testing from a similarly basic type of hardware testing.
UIT:
I did a bit of R&D on this; some say it's nothing but usability testing: testing to
determine the ease with which a user can learn to operate, input, and interpret outputs of
a system or component.
Smoke testing is nothing but checking whether the basic functionality of the build is
stable or not, i.e. if it possesses 70% of the functionality, we say the build is stable.
User interface testing: we check all the fields, whether they exist as per the format; we
check spelling, graphics, font sizes, and that everything in the window is present.
38. What is the difference between functional testing and integration testing?
A: — Functional testing is testing the whole functionality of the system or application,
checking whether it meets the functional specifications.
Integration testing means testing the functionality of an integrated module when two
individual modules are integrated; for this we use the top-down approach and the
bottom-up approach.
39. What types of testing do you perform in your organization while doing System
Testing? Give details.
A: — Functional testing
User interface testing
Usability testing
Compatibility testing
Model based testing
Error exit testing
User help testing
Security testing
Capacity testing
Performance testing
Sanity testing
Regression testing
Reliability testing
Recovery testing
Installation testing
Maintenance testing
Accessibility testing, including compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web
Consortium (W3C)
40.
A:–
UT:
Testing the ease with which users can learn and use a product.
ST:
It's a web testing definition; it allows web site capability improvement.
PT:
Testing to determine whether the system/software meets the specified portability
requirements.
42. What do you mean by positive and negative testing, and what is the difference
between them? Can anyone explain with an example?
A: — Positive testing: testing the application functionality with valid inputs and
verifying that the output is correct.
Negative testing: testing the application functionality with invalid inputs and verifying
the output.
The difference lies in how the application behaves when we enter invalid inputs; if it
accepts invalid input, the application functionality is wrong.
Positive test: testing aimed to show that the software works, i.e. with valid inputs. This is
also called 'test to pass'.
Negative testing: testing aimed at showing the software doesn't work, also known as
'test to fail'. BVA is the best example of negative testing.
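BVA (boundary value analysis), named above as the best example of negative testing, picks values at and just outside each boundary. A sketch for a hypothetical field that accepts ages 18 through 60:

```python
def accepts_age(age):
    """Hypothetical validator: the field accepts ages 18 through 60 inclusive."""
    return 18 <= age <= 60

# Boundary value analysis: test at and just inside each boundary (test-to-pass),
# and just outside each boundary (test-to-fail).
positive_cases = [18, 19, 59, 60]   # valid boundary values
negative_cases = [17, 61]           # invalid values just outside the range

passed = all(accepts_age(a) for a in positive_cases)
rejected = not any(accepts_age(a) for a in negative_cases)
```

Off-by-one defects cluster at boundaries, which is why these six values catch more bugs than six random ages would.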
44. What is risk analysis? What type of risk analysis did you do in your project?
A: — Risk analysis:
A systematic use of available information to determine how often specified and
unspecified events may occur and the magnitude of their likely consequences,
OR
a procedure to identify threats and vulnerabilities, analyze them to ascertain the
exposures, and highlight how the impact can be eliminated or reduced.
Types:
System testing - the entire system is tested as per the requirements. Black-box type testing
that is based on overall requirements specifications and covers all combined parts of a
system.
Active Test
Introducing test data and analyzing the results. Contrast with "passive test" (below).
Dirty Test
Same as "negative test."
Environment Test
A test of new software that determines whether all transactions flow properly between
input, output and storage devices. See environment test.
Fuzz Test
Testing for software bugs by feeding it randomly generated data. See fuzz testing.
Passive Test
Monitoring the results of a running system without introducing any special test data.
Contrast with "active test" (above).
System Test
Overall testing in the lab and in the user environment. See alpha test and beta test.
Test Case
A set of test data, test programs and expected results. See test case.
Benchmark Testing: Tests that use representative sets of programs and data designed to
evaluate the performance of computer hardware and software in a given configuration.
Binary Portability Testing: Testing an executable application for portability across system
platforms and environments, usually for conformation to an ABI specification.
Data Dictionary: A database that contains definitions of all data items defined during
analysis.
Emulator: A device, computer program, or system that accepts the same inputs and
produces the same outputs as a given system.
Equivalence Class: A portion of a component's input or output domains for which the
component's behaviour is assumed to be the same from the component's specification.
Equivalence Partitioning: A test case design technique for a component in which test
cases are designed to execute representatives from equivalence classes.
Gray Box Testing: A combination of Black Box and White Box testing methodologies:
testing a piece of software against its specification but using some knowledge of its
internal workings.
High Order Tests: Black-box tests conducted once the software has been integrated.
Inspection: A group review quality improvement process for written material. It consists
of two aspects; product (document itself) improvement and process improvement (of both
document production and inspection).
N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles
in which errors found in test cycle N are resolved and the solution is retested in test cycle
N+1. The cycles are typically repeated until the solution reaches a steady state and there
are no errors. See also Regression Testing.
Quality Circle: A group of individuals with related interests that meet at regular intervals
to consider problems or other matters related to the quality of outputs of a process and to
the correction of problems or to the improvement of quality.
Quality Management: That aspect of the overall management function that determines
and implements the quality policy.
Quality Policy: The overall intentions and direction of an organization as regards quality
as formally expressed by top management.
Release Candidate: A pre-release version, which contains the desired functionality of the
final version, but which needs to be tested for bugs (which ideally should be removed
before the final version is released).
Software Testing: A set of activities conducted with the intent of finding errors in
software.
Test Driven Development: Testing methodology associated with Agile Programming in
which every chunk of code is covered by unit tests, which must all pass all the time, in an
effort to eliminate unit-level and regression bugs during development. Practitioners of
TDD write a lot of tests, i.e. an equal number of lines of test code to the size of the
production code.
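A minimal illustration of the TDD rhythm described in that entry, using Python's unittest; the `add` function is a toy example, and in TDD its tests exist (and fail) before the production code is written:

```python
import unittest

def add(a, b):
    """Production code kept just large enough to make the tests pass."""
    return a + b

class TestAdd(unittest.TestCase):
    # Written first; they must all pass, all the time.
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, 2), 0)

suite = unittest.TestLoader().loadTestsFromTestCase(TestAdd)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```

The red-green-refactor loop repeats this at every small step, which is how TDD keeps the test code roughly as large as the production code.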
Test Scenario: Definition of a set of test cases or test scripts and the sequence in which
they are to be executed.
Test Specification: A document specifying the test approach for a software feature or
combination or features and the inputs, predicted results and execution conditions for the
associated tests.
Test Tools: Computer programs used in the testing of a system, a component of the
system, or its documentation.
Usability Testing: Testing the ease with which users can learn and use a product.
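The Equivalence Partitioning entry above becomes concrete with an example: for a hypothetical field that accepts integers 1 through 100, three classes suffice, each represented by a single value:

```python
# Hypothetical input domain: integers 1..100 are valid.
def is_valid(n):
    return 1 <= n <= 100

# One representative per equivalence class instead of all values:
partitions = {
    "below range (invalid)": 0,
    "in range (valid)": 50,
    "above range (invalid)": 101,
}
results = {name: is_valid(rep) for name, rep in partitions.items()}
```

Because every value in a class is assumed to behave the same, one representative per class gives the coverage of exhaustive input testing at a fraction of the cost; boundary value analysis then adds the edge values of each class.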
What does black-box testing mean at the unit, integration, and system levels?
A test case would have detailed steps of what the application is supposed to do, so:
1) The functionality of the application is known.
2) In addition you can refer to the back end, i.e. look into the database, to gain more
knowledge of the application.
Smoke test? Do you use any automation tool for smoke testing?
Testing whether the application performs its basic functionality properly, so that the test
team can go ahead with the application. Automation can definitely be used.
Testing methodology?
Varies from company to company (refer to the Symphony and Emphasis websites for
different methodologies).
When a new build comes, what is the first action? (Performing a smoke test.)
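That first action can be automated: a smoke script runs only the crucial checks and gates the rest of testing on them. The checks below are hypothetical placeholders for real probes (e.g. an HTTP 200 from the home page):

```python
def build_is_stable(checks):
    """Run each named smoke check; return the names of the checks that failed."""
    return [name for name, check in checks.items() if not check()]

smoke_checks = {
    "application starts": lambda: True,   # stand-ins for real probes
    "login page loads": lambda: True,
    "database reachable": lambda: True,
}
failures = build_is_stable(smoke_checks)
# An empty failure list means the build is stable enough for full testing.
```

If any crucial check fails, the build is rejected immediately instead of wasting a full test cycle on it.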
What is the testing environment in your company, i.e. how does the testing process start?
What are the main key components in web applications and client/server
applications? (Differences)
For web applications: a web application can be implemented using any kind of
technology, like Java, .NET, VB, ASP, CGI & PERL. Based on the technology, we can
derive the components.
If you take a .NET application: Presentation (ASP, HTML, DHTML), Business tier
(DLL) & Data tier (a database like Oracle, SQL Server, etc.)
Client/server applications: these have only 2 tiers. One is Presentation (Java, Swing)
and the other is the Data tier (Oracle, SQL Server). In client/server architecture, the
entire application has to be installed on the client machine; whenever you make any
changes in your code, it has to be installed again on all the client machines. In web
applications, the core application resides on the server and the client can be a thin
client (browser). Whatever changes you make, you install the application on the server;
there is no need to worry about the clients, because you do not install anything on the
client machines.
Actually, how many positive and negative test cases will you write for a module?
That depends on the module and the complexity of its logic. For every test case we can
identify positive and negative points, and based on those criteria we write the test cases.
If it is a crucial process or screen, we should check the screen in all the boundary
conditions.
What is the difference between Access (DBMS) and an RDBMS like SQL Server or
Oracle? Why is Access not used in web-based applications?
Access is a desktop, file-based DBMS with limited support for concurrent users, whereas
SQL Server and Oracle are full client/server RDBMSs designed for large numbers of
simultaneous connections; that is why Access is rarely used behind web applications.
Six Sigma:
A quality discipline that focuses on product and service excellence to create a culture that
demands perfection on target, every time.
What test data would you need to test that a specific date occurs on a specific day of
week?
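For the day-of-week question, good test data includes anchor dates whose weekday is independently known (century boundaries, year-end boundaries, leap days). A sketch using Python's datetime, where weekday() returns Monday=0 through Sunday=6:

```python
from datetime import date

# weekday(): Monday=0 ... Sunday=6
SATURDAY, FRIDAY, THURSDAY = 5, 4, 3

# Anchor dates with well-known weekdays make good expected values:
assert date(2000, 1, 1).weekday() == SATURDAY     # century boundary
assert date(1999, 12, 31).weekday() == FRIDAY     # year-end boundary
assert date(2024, 2, 29).weekday() == THURSDAY    # leap day
```

The same anchors work when testing any hand-rolled day-of-week routine: compare its output against a trusted calendar at exactly these boundary dates.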
What would you do if management pressure is stating that testing is complete and
you feel differently?
Who should be involved in each level of testing? What should be their
responsibilities?
Your experience with programming within the context of Quality Assurance is:
N/A - You have no programming experience in C, C++ or Java.
You have done some programming in your role as a QA Engineer, and are comfortable
meeting such requirements in Java, C, C++ or VC++.
You have developed applications of moderate complexity that have taken up to three
months to complete.
Why does testing not prove a program is 100 percent correct (except for extremely
simple programs)?
Because we can only test a finite number of cases, but the program may have an infinite
number of possible combinations of inputs and outputs
Because the people who test the program are not the people who write the code
Because the program is too long
All of the above
We CAN prove a program is 100 percent correct by testing
You are the test manager starting on system testing. The development team says
that due to a change in the requirements, they will be able to deliver the system for
SQA 5 days past the deadline. You cannot change the resources (work hours, days,
or test tools). What steps will you take to be able to finish the testing in time?
Your company is about to roll out an e-commerce application. It’s not possible to
test the application on all types of browsers on all platforms and operating systems.
What steps would you take in the testing environment to reduce the business risks
and commercial risks?
In your organization, developers are delivering code for system testing without
performing unit testing.
Give an example of a test policy:
Policy statement
Methodology
Measurement
Testers in your organization are performing tests on the deliverables even after
significant defects have been found. This has resulted in unnecessary testing of little
value, because re-testing needs to be done after defects have been rectified. You are
going to update the test plan with recommendations on when to halt testing. What
recommendations are you going to make?
How do you test if you have minimal or no documentation about the product?
Realising you won't be able to test everything - how do you decide what to test first?
Have you defined the requirements and success criteria for automation?
What are two of your strengths that you will bring to our QA/testing team?
Can you build a good audit trail using Compuware's QACenter products? Explain why.
Do you think tools are required for managing change? Explain, and please list some
tools/practices which can help you manage change.
We believe in ad-hoc software processes for projects. Do you agree with this? Please
explain your answer.
What criteria would you use to select Web transactions for load testing?
What are the reasons why parameterization is necessary when load testing the Web server
and the database server?
How can data caching have a negative effect on load testing results?
What usually indicates that your virtual user script has dynamic data that is dependent on
your parameterized fields?
What are the benefits of creating multiple actions within any virtual user script?
The top management felt that, when there are any changes in the technology being
used, development schedules, etc., it was a waste of time to update the Test Plan. Instead,
they were emphasizing that you should put your time into testing rather than working on
the test plan. Your Project Manager asked for your opinion. You have argued that the Test
Plan is very important and you need to update your test plan from time to time. It's not a
waste of time, and testing activities would be more effective when you have your plan
clear. Use some metrics. How would you support your argument that the test plan should
be kept consistently updated all the time?
The QAI is starting a project to put the CSTE certification online. They will use an
automated process for recording candidate information, scheduling candidates for exams,
keeping track of results and sending out certificates. Write a brief test plan for this new
project. The project had a very high cost of testing. After going into detail, someone found
out that the testers were spending their time on software that does not have many
defects. How would you make sure that this is correct?
What happens to the test plan if the application has a functionality not mentioned in the
requirements?
You are given two scenarios to test. Scenario 1 has only one terminal for entry and
processing whereas scenario 2 has several terminals where the data input can be made.
Assuming that the processing work is the same, what would be the specific tests that you
would perform in Scenario 2, which you would not carry on Scenario 1?
What is the need for Test Planning?
What would be the Test Objective for Unit Testing? What are the quality measurements
to assure that unit testing is complete?
Prepare a checklist for the developers on Unit Testing before the application comes to
testing department.
Draw a pictorial diagram of a report you would create for developers to determine project
status.
Draw a pictorial diagram of a report you would create for users and management to
determine project status.
What 3 tools would you purchase for your company for use in testing? Justify the need?
Put the following concepts in order, and provide a brief description of each:
system testing
acceptance testing
unit testing
integration testing
benefits realization testing
Write any three attributes which will impact the Testing Process?
You are a tester for testing a large system. The system data model is very large with
many attributes and there are a lot of inter-dependencies within the fields. What steps
would you use to test the system and also what are the effects of the steps you have taken
on the test plan?
A: It depends on the size of the organization and the risks involved. For large
organizations with high-risk projects, a serious management buy-in is required and a
formalized QA process is necessary. For medium size organizations with lower risk
projects, management and organizational buy-in and a slower, step-by-step process is
required. Generally speaking, QA processes should be balanced with productivity, in
order to keep any bureaucracy from getting out of hand. For smaller groups or
projects, an ad-hoc process is more appropriate. A lot depends on team leads and
managers; feedback to developers and good communication are essential among
customers, managers, developers, test engineers and testers. Regardless of the size of the
company, the greatest value for effort is in managing requirement processes, where
the goal is requirements that are clear, complete and testable.
Why are requirement specifications important?
A: Requirement specifications are important and one of the most reliable methods of
ensuring problems in a complex software project is to have poorly documented
requirement specifications. Requirements are the details describing an application's
externally perceived functionality and properties. Requirements should be clear,
complete, reasonably detailed, cohesive, attainable and testable. A non-testable
requirement would be, for example, "user-friendly", which is too subjective. A testable
requirement would be something such as, "the product shall allow the user to enter their
previously-assigned password to access the application". Care should be taken to involve
all of a project's significant customers in the requirements process. Customers could be
in-house or external and could include end-users, customer acceptance test engineers,
testers, customer contract officers, customer management, future software maintenance
engineers, salespeople, and anyone who could later derail the project if his/her
expectations aren't met; such people should be included as customers, if possible. In some
organizations, requirements may end up in high-level project plans, functional
specification documents, design documents, or other documents at various levels of
detail. No matter what they are called, some type of documentation with detailed
requirements will be needed by test engineers in order to properly plan and execute tests.
Without such documentation there will be no clear-cut way to determine if a software
application is performing correctly.
What is a test case?
A: A test case is a document that describes an input, action, or event and its expected
result, in order to determine if a feature of an application is working correctly. A test case
should contain particulars such as ...
• Test case identifier;
• Test case name;
• Objective;
• Test conditions/setup;
• Input data requirements/steps, and
• Expected results.
Please note, the process of developing test cases can help find problems in the
requirements or design of an application, since it requires you to completely think
through the operation of the application. For this reason, it is useful to prepare test
cases early in the development cycle, if possible.
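The particulars listed above can be sketched as a simple record; the field names are illustrative, not taken from any specific standard:

```python
from dataclasses import dataclass

# A minimal sketch of a test case holding the particulars listed above.
@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    setup: str
    steps: list
    expected_result: str

tc = TestCase(
    identifier="TC-001",
    name="Login with valid password",
    objective="Verify a registered user can log in",
    setup="User 'alice' exists with a known password",
    steps=["Open login page", "Enter username and password", "Click Login"],
    expected_result="User is taken to the home page",
)
```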
Q33. Why do you recommended that we test during the design phase?
A: Because testing during the design phase can prevent defects later on. We recommend
verifying three things...
1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all
relationships between modules, how to pass data, what happens in exceptional
circumstances, starting state of each module and how to guarantee the state of
each module).
3. Verify the design incorporates enough memory, I/O devices and quick enough
runtime for the final product.
What is parallel/audit testing?
A: Parallel/audit testing is testing where the user reconciles the output of the new system
to the output of the current system to verify the new system performs the operations
correctly.
What is comparison testing?
A: Comparison testing is testing that compares software weaknesses and strengths to
those of competitors' products.
Q61. What testing roles are standard on most testing projects?
A: Depending on the organization, the following roles are more or less standard on most
testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System
Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test
Configuration Manager. Depending on the project, one person may wear more than one
hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build
Manager and Test Configuration Manager.
• Test engineers define unit test requirements and unit test cases. Test engineers
also execute unit test cases.
• It is the test team that, with assistance of developers and clients, develops test
cases and scenarios for integration and system testing.
• Test scenarios are executed through the use of test procedures or scripts.
• Test procedures or scripts include the specific data that will be used for testing the
process or transaction.
• Test scripts are mapped back to the requirements and traceability matrices are
used to ensure each test is within scope.
• Test data is captured and base-lined prior to testing. This data serves as the
foundation for unit and system testing and is used to exercise system functionality
in a controlled environment.
• Some output data is also base-lined for future comparison. Base-lined data is used
to support future application maintenance via regression testing.
• A pretest meeting is held to assess the readiness of the application and the
environment and data to be tested. A test readiness document is created to indicate
the status of the entrance criteria of the release.
Inputs for this process:
• Approved Test Strategy Document.
• Test tools, or automated test tools, if applicable.
• Previously developed scripts, if applicable.
• Test documentation problems uncovered as a result of testing.
• A good understanding of software complexity and module path coverage, derived
from general and detailed design documents, e.g. software design document,
source code and software complexity data.
Outputs for this process:
• Approved documents of test scenarios, test cases, test conditions and test data.
• Reports of software design issues, given to software developers for correction.
How do you execute tests?
A: Execution of tests is completed by following the test documents in a methodical
manner. As each test procedure is performed, an entry is recorded in a test execution log
to note the execution of the procedure and whether or not the test procedure uncovered
any defects. Checkpoint meetings are held throughout the execution phase. Checkpoint
meetings are held daily, if required, to address and discuss testing issues, status and
activities.
• The output from the execution of test procedures is known as test results. Test
results are evaluated by test engineers to determine whether the expected results
have been obtained. All discrepancies/anomalies are logged and discussed with
the software team lead, hardware test lead, programmers, software engineers and
documented for further investigation and resolution. Every company has a
different process for logging and reporting bugs/defects uncovered during testing.
• Pass/fail criteria are used to determine the severity of a problem, and results are
recorded in a test summary report. The severity of a problem found during
system testing is defined in accordance with the customer's risk assessment and
recorded in their selected tracking tool.
• Proposed fixes are delivered to the testing environment, based on the severity of
the problem. Fixes are regression tested and flawless fixes are migrated to a new
baseline. Following completion of the test, members of the test team prepare a
summary report. The summary report is reviewed by the Project Manager,
Software QA Manager and/or Test Team Lead.
• After a particular level of testing has been certified, it is the responsibility of the
Configuration Manager to coordinate the migration of the release software
components to the next test level, as documented in the Configuration
Management Plan. The software is only migrated to the production environment
after the Project Manager's formal acceptance.
• The test team reviews test document problems identified during testing, and
updates documents where appropriate.
Inputs for this process:
• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and Detailed Design Documents, i.e. Requirements Document, Software
Design Document.
• Software that has been migrated to the test environment, i.e. unit-tested code,
via the Configuration/Build Manager.
• Test Readiness Document.
• Document Updates.
Outputs for this process:
• Log and summary of the test results. Usually this is part of the Test Report. This
needs to be approved and signed-off with revised testing deliverables.
• Changes to the code, also known as test fixes.
• Test document problems uncovered as a result of testing. Examples are
Requirements document and Design Document problems.
• Reports on software design issues, given to software developers for correction.
Examples are bug reports on code issues.
• Formal record of test incidents, usually part of problem tracking.
• Base-lined package, also known as tested source and object code, ready for
migration to the next level.
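The test execution log described above can be sketched as follows; this is a minimal illustration, not the format of any particular tracking tool, and the procedure and defect IDs are invented:

```python
from datetime import datetime, timezone

# Minimal sketch of a test execution log: one entry per executed test
# procedure, recording when it ran and whether it uncovered any defects.
execution_log = []

def record_execution(procedure_id: str, passed: bool, defect_ids=()):
    execution_log.append({
        "procedure": procedure_id,
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "passed": passed,
        "defects": list(defect_ids),
    })

record_execution("TP-01", passed=True)
record_execution("TP-02", passed=False, defect_ids=["BUG-17"])

# Discrepancies are collected for discussion at the checkpoint meeting.
failed = [e["procedure"] for e in execution_log if not e["passed"]]
```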
What testing approaches can you tell me about?
A: Each of the followings represents a different testing approach:
• Black box testing,
• White box testing,
• Unit testing,
• Incremental testing,
• Integration testing,
• Functional testing,
• System testing,
• End-to-end testing,
• Sanity testing,
• Regression testing,
• Acceptance testing,
• Load testing,
• Performance testing,
• Usability testing,
• Install/uninstall testing,
• Recovery testing,
• Security testing,
• Compatibility testing,
• Exploratory testing, ad-hoc testing,
• User acceptance testing,
• Comparison testing,
• Alpha testing,
• Beta testing, and
• Mutation testing.
What is stress testing?
A: Stress testing is testing that investigates the behavior of software (and hardware)
under extraordinary operating conditions. For example, when a web server is stress
tested, testing aims to find out how many users can be on-line, at the same time, without
crashing the server. Stress testing tests the stability of a given system or entity. It tests
something beyond its normal operational capacity, in order to observe any negative
results. For example, a web server is stress tested, using scripts, bots, and various denial
of service tools.
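A toy model of the idea, assuming a hypothetical server that can handle at most a fixed number of concurrent users; a real stress test would drive an actual server with a load tool:

```python
# Toy stand-in for a server with a fixed concurrency limit (assumed value).
CAPACITY = 50

def server_handles(concurrent_users: int) -> bool:
    # Returns False once the load exceeds what the "server" can sustain.
    return concurrent_users <= CAPACITY

# Ramp up the load until the first failure: stress testing looks for the
# point where the system stops coping, beyond its normal capacity.
breaking_point = next(n for n in range(1, 10_000) if not server_handles(n))
print(breaking_point)  # → 51, one past the capacity
```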
What is the difference between reliability testing and load testing?
A: Load testing is a blanket term that is used in many different ways across the
professional software testing community. The term, load testing, is often used
synonymously with stress testing, performance testing, reliability testing, and volume
testing. Load testing generally stops short of stress testing. During stress testing, the load
is so great that errors are the expected results, though there is gray area in between stress
testing and load testing.
What is software testing?
A: Software testing is a process that identifies the correctness, completeness, and quality
of software. Actually, testing cannot establish the correctness of software. It can find
defects, but cannot prove there are no defects.
What is a software fault?
A: Software faults are hidden programming errors. Software faults are errors in the
correctness of the semantics of computer programs.
What is a software failure?
A: A software failure occurs when the software does not do what the user expects to see.
What is the difference between a software fault and a software failure?
A: A software failure occurs when the software does not do what the user expects to see.
A software fault, on the other hand, is a hidden programming error. A software fault
becomes a software failure only when the exact computation conditions are met, and the
faulty portion of the code is executed on the CPU. This can occur during normal usage,
when the software is ported to a different hardware platform, when the software is ported
to a different compiler, or when the software is extended.
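A small illustration of the distinction; the function and inputs are invented:

```python
# average_rating has a latent fault (no guard against an empty list). The
# fault is not a failure until the faulty code path executes under the
# "exact computation conditions" -- here, an empty input.
def average_rating(ratings):
    return sum(ratings) / len(ratings)   # fault: division by zero possible

average_rating([4, 5, 3])  # fault present, but no failure observed

try:
    average_rating([])      # now the fault becomes a visible failure
    failed = False
except ZeroDivisionError:
    failed = True
```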
What is a test engineer?
A: Test engineers are engineers who specialize in testing. We, test engineers, create test
cases, procedures, scripts and generate data. We execute test procedures and scripts,
analyze standards of measurements, evaluate results of system/integration/regression
testing.
A: Test engineers speed up the work of the development staff, and reduce the risk of your
company's legal liability. We, test engineers, also give the company the evidence that the
software is correct and operates properly. We also improve problem tracking and
reporting, maximize the value of the software, and the value of the devices that use it. We
also assure the successful launch of the product by discovering bugs and design flaws,
before...
users get discouraged, before shareholders lose their cool, and before employees get
bogged down. We, test engineers, help the work of the software development staff, so the
development team can devote its time to building up the product. We, test engineers, also
promote continual improvement. We provide documentation required by the FDA, FAA,
other regulatory agencies, and your customers. We, test engineers, save your company
money by discovering defects EARLY in the design process, before failures occur in
production, or in the field. We save the reputation of your company by discovering bugs
and design flaws, before bugs and design flaws damage the reputation of your company.
What is a QA engineer?
A: QA engineers are test engineers, but QA engineers do more than just testing. Good
QA engineers understand the entire software development process and how it fits into the
business approach and the goals of the organization. Communication skills and the ability
to understand various sides of issues are important. We, QA engineers, are successful if
people listen to us, if people use our tests, if people think that we're useful, and if we're
happy doing our work. I would love to see QA departments staffed with experienced
software developers who coach development teams to write better code. But I've never
seen it. Instead of coaching, we, QA engineers, tend to be process people.
A: The QA Engineer's function is to use the system much like real users would, find all
the bugs, find ways to replicate the bugs, submit bug reports to the developers, and to
provide feedback to the developers, i.e. tell them if they've achieved the desired level of
quality.
What are the responsibilities of a QA engineer?
A: Let's say, an engineer is hired for a small software company's QA role, and there is no
QA team. Should he take responsibility to set up a QA infrastructure/process, testing and
quality of the entire product? No, because taking this responsibility is a classic trap that
QA people get caught in. Why? Because we QA engineers cannot assure quality. And
because QA departments cannot create quality. What we CAN do is to detect lack of
quality, and prevent low-quality products from going out the door. What is the solution?
We need to drop the QA label, and tell the developers, they are responsible for the quality
of their own work. The problem is, sometimes, as soon as the developers learn that there
is a test department, they will slack off on their testing. We need to offer to help with
quality assessment only.
A: First, unit testing has to be completed. Upon completion of unit testing, integration
testing begins. Integration testing is black box testing. The purpose of integration testing
is to ensure distinct components of the application still work in accordance to customer
requirements. Test cases are developed with the express purpose of exercising the
interfaces between the components. This activity is carried out by the test team.
Integration testing is considered complete when actual results and expected results are
either in line or differences are explainable/acceptable based on client input.
A: The test plan document template helps to generate test plan documents that describe
the objectives, scope, approach and focus of a software testing effort. Test document
templates are often in the form of documents that are divided into sections and
subsections. One example of this template is a 4-section document, where section 1 is the
description of the "Test Objective", section 2 is the description of the "Scope of Testing",
section 3 is the description of the "Test Approach", and section 4 is the "Focus of the
Testing Effort". All documents should be written to a certain standard and template.
Standards and templates maintain document uniformity. They also help in learning where
information is located, making it easier for a user to find what they want. With standards
and templates, information will not be accidentally omitted from a document. Once Rob
Davis has learned and reviewed your standards and templates, he will use them. He will
also recommend improvements and/or additions. A software project test plan is a
document that describes the objectives, scope, approach and focus of a software testing
effort. The process of preparing a test plan is a useful way to think through the efforts
needed to validate the acceptability of a software product. The completed document will
help people outside the test group understand the why and how of product validation.
A: For larger projects, or ongoing long-term projects, automated testing can be valuable.
But for small projects, the time needed to learn and implement the automated testing
tools is usually not worthwhile. Automated testing tools sometimes do not make testing
easier. One problem with automated testing tools is that if there are continual changes to
the product being tested, the recordings have to be changed so often, that it becomes a
very time-consuming task to continuously update the scripts. Another problem with such
tools is the interpretation of the results (screens, data, logs, etc.) that can be a
time-consuming task.
A: This ratio is not a fixed one, but depends on what phase of the software development
life cycle the project is in. When a product is first conceived, organized, and developed,
this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp
contrast, when the product is near the end of the software development life cycle, this
ratio tends to be 1:1, or even 1:2, in favor of testers.
A: I'm a Software QA Engineer. I use the system much like real users would. I find all
the bugs, find ways to replicate the bugs, submit bug reports to developers, and provide
feedback to the developers, i.e. tell them if they've achieved the desired level of quality.
A: Depending on the organization, the following roles are more or less standard on most
testing projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers,
System Administrators, Database Administrators, Technical Analysts, Test Build
Managers, and Test Configuration Managers. Depending on the project, one person can
and often wear more than one hat. For instance, we Test Engineers often wear the hat of
Technical Analyst, Test Build Manager and Test Configuration Manager as well.
Which of these roles are the best and most popular?
A: "Efficient" means having a high ratio of output to input; working or producing with a
minimum of waste. For example, "An efficient engine saves gas". "Effective", on the
other hand, means producing, or capable of producing, an intended result, or having a
striking effect. For example, "For rapid long-distance transportation, the jet engine is
more effective than a witch's broomstick".
What is up time?
A: Up time is the time period when a system is operational and in service. Up time is the
sum of busy time and idle time.
A: Usability means ease of use; the ease with which a user can learn to operate, prepare
inputs for, and interpret outputs of a software product.
A: When a distinction is made between those who operate a computer system and those
who use it for its intended purpose, separate user documentation and a user manual are
created. Operators get user documentation, and users get user manuals.
A: A computer program is user friendly, when it is designed with ease of use, as one of
the primary objectives of its design.
A: A document is user friendly, when it is designed with ease of use, as one of the
primary objectives of its design.
A: User guide is the same as the user manual. It is a document that presents information
necessary to employ a system or component to obtain the desired results. Typically, what
is described are system and component capabilities, limitations, options, permitted inputs,
expected outputs, error messages, and special instructions.
What is user interface?
A: User interface is the interface between a human user and a computer system. It
enables the passage of information between a human user and hardware or software
components of a computer system.
What is a utility?
A: Utility is a software tool designed to perform some frequently used support function.
For example, a program to print files.
What is utilization?
A: Utilization is the ratio of time a system is busy, divided by the time it is available.
Utilization is a useful measure in evaluating computer performance.
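For example, the definition above works out as:

```python
# Utilization = busy time / available time, per the definition above.
def utilization(busy_hours: float, available_hours: float) -> float:
    return busy_hours / available_hours

# A system busy for 18 of 24 available hours is 75% utilized.
u = utilization(18.0, 24.0)  # → 0.75
```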
A: Variable trace is a record of the names and values of variables accessed and changed
during the execution of a computer program.
A: Value trace is same as variable trace. It is a record of the names and values of
variables accessed and changed during the execution of a computer program.
A: Variables are data items whose values can change. For example: "capacitor_voltage".
There are local and global variables, and constants.
A: Variants are versions of a program. Variants result from the application of software
diversity.
A: In virtual storage systems, virtual addresses are assigned to auxiliary storage locations.
They allow those locations to be accessed as though they were part of the main storage.
Q160. What is the waterfall model?
A: Waterfall is a model of the software development process in which the concept phase,
requirements phase, design phase, implementation phase, test phase, installation phase,
and checkout phase are performed in that order, possibly with overlap, but with little or
no iteration.
A: The software development process consists of the concept phase, requirements phase,
design phase, implementation phase, test phase, installation phase, and checkout phase.
A: In the software development process the following models are used: the waterfall
model, incremental development model, rapid prototyping model, and spiral model.
Q164. Can you give me more information on software QA/testing, from a tester's point
of view?
A: Yes, I can. You can visit my web site, and on pages www.robdavispe.com/free and
www.robdavispe.com/free2 you can find answers to many questions on software QA,
documentation, and software testing, from a tester's point of view. As to questions and
answers that are not on my web site now, please be patient, as I am going to add more
answers, as soon as time permits.
A: Each of the followings represents a different type of testing approach: black box
testing, white box testing, unit testing, incremental testing, integration testing, functional
testing, system testing, end-to-end testing, sanity testing, regression testing, acceptance
testing, load testing, performance testing, usability testing, install/uninstall testing,
recovery testing, security testing, compatibility testing, exploratory testing, ad-hoc
testing, user acceptance testing, comparison testing, alpha testing, beta testing, and
mutation testing.
Q169. How do you conduct peer reviews?
A: The peer review, sometimes called PDR, is a formal meeting, more formalized than a
walk-through, and typically consists of 3-10 people including a test lead, task lead (the
author of whatever is being reviewed), and a facilitator (to make notes). The subject of
the PDR is typically a code block, release, feature, or document, e.g. requirements
document or test plan. The purpose of the PDR is to find problems and see what is
missing, not to fix anything. The result of the meeting should be documented in a written
report. Attendees should prepare for this type of meeting by reading through documents,
before the meeting starts; most problems are found during this preparation. Preparation
for PDRs is difficult, but is one of the most cost-effective methods of ensuring quality,
since bug prevention is more cost effective than bug detection.
A: When testing the password field, one needs to verify that passwords are encrypted.
A: The objective of regression testing is to test that the fixes have not created any other
problems elsewhere. In other words, the objective is to ensure the software has remained
intact. A baseline set of data and scripts are maintained and executed, to verify that
changes introduced during the release have not "undone" any previous code. Expected
results from the baseline are compared to results of the software under test. All
discrepancies are highlighted and accounted for, before testing proceeds to the next level.
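The baseline comparison described above can be sketched as follows; the result keys and values are invented for illustration:

```python
# Expected results from the baseline run (made-up example data).
baseline = {"total_orders": 120, "rejected": 3, "revenue": 4500.00}

# Compare actual results of the software under test against the baseline;
# every discrepancy is collected so it can be highlighted and accounted for.
def run_regression(actual: dict, expected: dict) -> list:
    return [
        (key, expected[key], actual.get(key))
        for key in expected
        if actual.get(key) != expected[key]
    ]

discrepancies = run_regression(
    {"total_orders": 120, "rejected": 5, "revenue": 4500.00}, baseline
)
# discrepancies → [("rejected", 3, 5)]: the fix "undid" previous behavior
```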
Q175. What types of white box testing can you tell me about?
A: White box testing is a testing approach that examines the application's program
structure, and derives test cases from the application's program logic. Clear box testing is
a white box type of testing. Glass box testing is also a white box type of testing. Open
box testing is also a white box type of testing.
Q176. What types of black box testing can you tell me about?
A: Black box testing is functional testing, not based on any knowledge of internal
software design or code. Black box testing is based on requirements and functionality.
Functional testing is also a black-box type of testing geared to functional requirements of
an application. System testing is also a black box type of testing. Acceptance testing is
also a black box type of testing.
Closed box testing is also a black box type of testing. Integration testing is also a black
box type of testing.
A: It depends on the initial testing approach. If the initial testing approach is manual
testing, then, usually, the regression testing is performed manually. Conversely, if the
initial testing approach is automated testing, then the regression testing is usually
performed with automated tools as well.
A: If we use detailed and well-written processes and procedures, we ensure the correct
steps are being executed. This facilitates a successful completion of a task. This is a way
we also ensure a process is repeatable.
A: The test strategy document is a formal description of how a software product will be
tested. A test strategy is developed for all levels of testing, as required. The test team
analyzes the requirements, writes the test strategy and reviews the plan with the project
team. The test plan may include test cases, conditions, the test environment, and a list of
related tasks, pass/fail criteria and risk assessment. Additional sections in the test strategy
document include:
• A description of the required hardware and software components, including test
tools. This information comes from the test environment, including test tool data.
• A description of roles and responsibilities of the resources required for the test,
and schedule constraints. This information comes from man-hours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes
from requirements, change request, technical, and functional design documents.
• Requirements that the system cannot provide, e.g. system limitations.
A: One test methodology is a three-step process. Creating a test strategy, Creating a test
plan/design, and Executing tests. This methodology can be used and molded to your
organization's needs. Rob Davis believes that using this methodology is important in the
development and ongoing maintenance of his customers' applications.
In general, how do you see automation fitting into the overall process of testing?
Automation is adopted when our process is repeatable in nature and doing it manually
is not efficient or economical. We adopt automated testing using tools that are either
already available, or our own scripts/software developed to do the testing process.
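For a repeatable check of the kind described, a minimal automated test with Python's built-in unittest module might look like this; the discount() function is a made-up stand-in for whatever is under test:

```python
import unittest

# Hypothetical function under test.
def discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent(self):
        self.assertEqual(discount(100.0, 10), 90.0)

    def test_zero_percent(self):
        self.assertEqual(discount(100.0, 0), 100.0)

# Run the suite programmatically, so the same checks repeat on every build
# with no manual effort.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```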
4. What is the Outcome of Testing?
A. The outcome of testing will be a stable application which meets the customer's
requirements.
5. What kind of testing have you done?
A. Usability, Functionality, System testing, regression testing, UAT
(it depends on the person).
6. What is the need for testing?
A. The primary need is to verify that the requirements are satisfied by the
functionality, and also to answer two questions:
1. Is the system doing what it is supposed to do?
2. Is the system not doing what it is not supposed to do?
7. What are the entry criteria for Functionality and Performance testing?
A. The entry criteria for functionality testing are a Functional Specification/BRS
(CRS)/User Manual and an integrated application that is stable for testing.
The entry criterion for performance testing is the successful completion of
functional testing: all functional requirements have been covered, tested,
and approved or validated.
9. Why do you go for White box testing, when Black box testing is available?
A. The objective of black box testing is to certify the commercial (business) and
functional (technical) aspects of the application. Loops, structures, arrays,
conditions, files, etc. are very low-level details, but they are the foundation of
any application, so white box testing examines and tests these things directly.
Even though black box testing is available, we should go for white box testing
also, to check the correctness of the code and for integrating the modules.
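The difference can be made concrete with a small white-box sketch: the test cases below are derived by reading the code's branches and loop, not just its external behaviour. `classify` is a hypothetical function invented for illustration.

```python
# White-box sketch: test cases are chosen from the code's internal
# branches and loop paths. classify() is a hypothetical function.

def classify(values):
    """Return 'empty', 'all-positive', or 'mixed' depending on contents."""
    if not values:              # branch 1: empty input
        return "empty"
    for v in values:            # loop over the elements
        if v <= 0:              # branch 2: a non-positive element found
            return "mixed"
    return "all-positive"       # branch 3: loop completed without hits

# One test per branch/path, selected by inspecting the code itself:
assert classify([]) == "empty"
assert classify([1, 2, 3]) == "all-positive"
assert classify([1, -2, 3]) == "mixed"
```

A pure black-box tester, working only from a specification, might never think to exercise the empty-input branch; the white-box view makes it an obvious required case.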
36. What are the different types of testing techniques?
A. 1. White box testing 2. Black box testing.
37. What are the different types of test case techniques?
A. 1. Equivalence Partitioning 2. Boundary Value Analysis 3. Error Guessing.
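The three test case techniques can be illustrated on a hypothetical validation rule, say a field that accepts ages 18 to 60 inclusive. The rule and all the test data below are assumptions made up for this sketch.

```python
# Sketch of deriving test data with the three techniques above, against a
# hypothetical is_valid_age() rule that accepts ages 18..60 inclusive.

def is_valid_age(age):
    return 18 <= age <= 60

# 1. Equivalence partitioning: one representative value per partition.
partitions = {"below": 10, "valid": 35, "above": 70}

# 2. Boundary value analysis: values at and just around each boundary.
boundaries = [17, 18, 19, 59, 60, 61]

# 3. Error guessing: inputs experience suggests may break the code.
guesses = [0, -1, 200]

assert not is_valid_age(partitions["below"])
assert is_valid_age(partitions["valid"])
assert not is_valid_age(partitions["above"])
assert [is_valid_age(b) for b in boundaries] == [False, True, True, True, True, False]
assert not any(is_valid_age(g) for g in guesses)
```

Equivalence partitioning keeps the case count small, boundary value analysis targets the edges where off-by-one bugs live, and error guessing adds the oddball inputs neither systematic technique would pick.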
38.What are the risks involved in testing?
48. What is the difference between unit testing and integration testing?
A. Unit testing: a testing activity typically done by the developers, not by testers, as it
requires detailed knowledge of the internal program design and code. It is not always
easily done unless the application has a well-designed architecture with tight code.
Integration testing: testing of combined parts of an application to determine whether
they function together correctly. The 'parts' can be code modules, individual
applications, client and server applications on a network, etc. This type of testing is
especially relevant to client/server and distributed systems.
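The distinction can be shown with two tiny hypothetical modules and a function that combines them; all the names here are invented for illustration.

```python
# Hypothetical parts: a parser and a formatter, plus report(), which
# combines them. All names are illustrative, not from the original text.

def parse(csv_line):
    """Split a comma-separated line into stripped fields."""
    return [field.strip() for field in csv_line.split(",")]

def fmt(fields):
    """Join fields with ' | ' for display."""
    return " | ".join(fields)

def report(csv_line):
    """Combined behaviour: parse, then format."""
    return fmt(parse(csv_line))

# Unit tests: each part in isolation, written with knowledge of its code.
assert parse(" a, b ") == ["a", "b"]
assert fmt(["a", "b"]) == "a | b"

# Integration test: the combined parts functioning together correctly.
assert report("a, b ,c") == "a | b | c"
```

Both unit tests could pass while the integration test fails (for example, if `report` passed its arguments in the wrong order), which is exactly why both levels are needed.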
Let's say, as part of the interview process for a Test Engineer position, you have
successfully answered the interview question about creating a test automation framework
based on Selenium, explained the difference between white and black box testing, recited
the agile manifesto principles with genuine expression in your voice, and even solved the
interview puzzle about one hundred prisoners. Just when you feel you are almost hired as
a tester, the interviewer asks you how you would test a toaster.
One more QA manager interview question can help analyze the management
abilities of a candidate for a QA manager position. This interview question is especially
critical for QA teams working in an Agile environment.
In certain domains, there is some amount of testing that cannot reasonably be automated
in the available time. This testing requires that a QA Engineer carefully eyeball the
screen and work through the application under test. It isn't necessary for these QA
Engineers to be developers; in fact, it might be better if they aren't programmers, since
developers view the world differently than most people. You may want non-programmer
QA Engineers for the following manual tests:
* UI testing
* Usability testing
* Internationalization testing
At the same time, a QA Manager definitely wants testers who have coding skills and
are developers when doing the following kinds of tests:
There is a carrot-and-stick approach to resolving the work performance of a tester who,
instead of doing the requested work, tries to dig into the code and fix developers' bugs in
the application code. The carrot approach means convincing the QA Engineer to do
something meaningful for the quality of the software application, like discovering weak
points in the application logic, setting up a testing environment for hard-to-test features,
deciding when testing should be completed, and finally signing off the product for
production. The QA Manager could also use the stick approach by asking the QA
Engineer serious questions about the functionality of the new features, test coverage, the
number of high-severity bugs logged, the number of usefully executed test cases, and
when the application under test will be ready for production.
For example, out of 100 test cases, if I ask you to automate them, how many can you
automate?
What is a CM Plan?
Write a query to fetch data from tables such as an Employee table and a Dept table.
Can you test a DB using WR?
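For the Employee/Dept query question, one answer can be sketched with an in-memory SQLite database. The table layout, column names, and sample rows are all assumptions made for this example, since the question gives none.

```python
import sqlite3

# Sketch of the Employee/Dept fetch; schema and data are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Dept (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
    CREATE TABLE Employee (emp_id INTEGER PRIMARY KEY,
                           emp_name TEXT,
                           dept_id INTEGER REFERENCES Dept(dept_id));
    INSERT INTO Dept VALUES (1, 'QA'), (2, 'Dev');
    INSERT INTO Employee VALUES (10, 'Asha', 1), (11, 'Ravi', 2);
""")

# Fetch each employee together with the department they belong to.
rows = conn.execute("""
    SELECT e.emp_name, d.dept_name
    FROM Employee e
    JOIN Dept d ON e.dept_id = d.dept_id
    ORDER BY e.emp_name
""").fetchall()

print(rows)  # [('Asha', 'QA'), ('Ravi', 'Dev')]
```

The same join, run through a database-capable test tool or a driver like `sqlite3`, is also the usual way a tester verifies back-end data during DB testing.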
What kinds of testing do you know? What is system testing? What is integration
testing? What is unit testing? What is regression testing?
Your theoretical background and homework may shine in this question. System testing is
a testing of ...
What are the basic strategies for dealing with new code?
. Start with obvious and simple tests
. Test each function sympathetically
. Test broadly befo...
What are the main actions a project manager will take when testing
a product?
1) Assess risks for the project as a whole
2) Assess the risk associated with the testing sub-p...
What are the important factors to trade off when building a product?
1. Time to market
2. Cost to market
3. Reliability of the delivered product
4. Feature se...
What are the likely risks that will arise during project planning?
. Are there fixed dates that must be met for milestones or components of the product?
. How likel...