Black Box Software Testing: Bug Advocacy
Bug Advocacy
How to Win Friends, Influence Programmers, and Stomp Bugs
ASSIGNED READING: Kaner, Bach, Pettichord, Bug Advocacy 179 Black Box Software Testing Copyright 2003 Cem Kaner & James Bach
Copyright Notice
These slides are distributed under the Creative Commons License. In brief summary, you may make and distribute copies of these slides so long as you give the original author credit and, if you alter, transform or build upon this work, you distribute the resulting work only under a license identical to this one. For the rest of the details of the license, see http://creativecommons.org/licenses/by-sa/2.0/legalcode.
Customer Satisfiers: the right features; adequate instruction.
Dissatisfiers: unreliable; hard to use; too slow; incompatible with the customer's equipment.
Glenford Myers' definition: A software error is present when the program does not do what its user reasonably expects it to do.
But consider the implication: it's appropriate to report any deviation from high quality as a software error. Therefore many issues will be reported that will be errors to some and non-errors to others.
Quality is Multidimensional
[Diagram: roles contributing to product quality — Programming, Glass Box Testing, Multimedia Production, Content Development, User Interface Design, Marketing, Black Box Testing, Writing, Customer Service, Project Manager, Manufacturing.]
When you sit in a project team meeting, discussing a bug, a new feature, or some other issue in the project, you must understand that each person in the room has a different vision of what a quality product would be. Fixing bugs is just one issue. The next slide gives some examples.
This is counterproductive. It leads to infighting instead of communication, and it leads to squabbling over bugs instead of research and bug fixing.
And as we saw when we discussed private bug rates, programmers actually find and fix the large majority of their own bugs. Bugs come into the code for many reasons. It's worth considering some common systematic factors (as distinct from poor individual performance). You will learn to vary your strategic approaches as you learn your company's weaknesses.
If you graduated from a Computer Science program, how much training did you have in task analysis? Requirements definition? Usability analysis? Negotiation and clear communication of negotiated agreements? Not much? Hmmmm . . . .
Bug Advocacy?
1. The point of testing is to find bugs.
2. Your bug reports are what people outside of the testing group will most notice and most remember of your work.
3. The best tester isn't the one who finds the most bugs or who embarrasses the most programmers. The best tester is the one who gets the most bugs fixed.
4. Programmers operate under time constraints and competing priorities. For example, outside of the 8-hour workday, some programmers prefer sleeping and watching Star Wars to fixing bugs.
A bug report is a tool that you use to sell the programmer on the idea of spending her time and energy to fix a bug.
Note: When I say the best tester is the one who gets the most bugs fixed, I am not encouraging bug-counting metrics, which are almost always counterproductive. Instead, what I am suggesting is that the effective tester looks to the effect of the bug report, and tries to write it in a way that gives each bug its best chance of being fixed. Also, a bug report is successful if it enables an informed business decision. Sometimes, the best decision is to not fix the bug. The excellent bug report raises the issue and provides sufficient data for a good decision.
Selling Bugs
Time is in short supply. If you want to convince the programmer to spend his time fixing your bug, you may have to sell him on it. (Your bug? How can it be your bug? The programmer made it, not you, right? It's the programmer's bug. Well, yes, but you found it, so now it's yours too.) Sales revolves around two fundamental objectives:
Motivate the buyer (make him WANT to fix the bug).
Overcome objections (get past his excuses and reasons for not fixing the bug).
It looks really bad.
It looks like an interesting puzzle and piques the programmer's curiosity.
It will affect lots of people.
Getting to it is trivially easy.
It has embarrassed the company, or a bug like it embarrassed a competitor.
One of its cousins embarrassed the company or a competitor.
Management (that is, someone with influence) has said that they really want it fixed.
You've said that you want the bug fixed, and the programmer likes you, trusts your judgment, is susceptible to flattery from you, owes you a favor, or accepted bribes from you.
Overcoming Objections
These make programmers resist spending time on a bug:
The programmer can't replicate the defect.
A strange and complex set of steps is required to induce the failure.
There is not enough information to know what steps are required, and it will take a lot of work to figure them out.
The programmer doesn't understand the report.
The case is unrealistic (e.g. a corner case).
It will take a lot of work to fix the defect.
A fix will introduce too much risk into the code.
There is no perceived customer impact.
It is unimportant (no one will care if this is wrong: minor error or unused feature).
"That's not a bug, it's a feature."
Management doesn't care about bugs like this.
The programmer doesn't like / trust you (or the customer who is complaining about the bug).
Bug Advocacy
What is the fault? What is the critical condition? What will we see as the failure?
Your report is stronger if your follow-up testing shows that the problem: is more serious than it first appears; or is more general than it first appears.
Vary my behavior (change the conditions by changing what I do).
Vary the options and settings of the program (change the conditions by changing something about the program under test).
Vary the software and hardware environment.
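The varying described above can be sketched as a systematic sweep over conditions. Everything here is hypothetical: the function under test, its options, and the planted bug exist only to show the search pattern.

```python
# Sketch: generalizing a bug by varying conditions. The function under test
# is a stand-in with a planted, hypothetical bug: it fails only when
# autosave is on AND the file name is long.
def save_file(name, autosave=False):
    if autosave and len(name) > 8:
        raise RuntimeError("buffer overrun in autosave path")
    return True

# Sweep the conditions (behavior x program option) and record which
# combinations fail. The failing subset outlines the bug's real shape.
failures = []
for name in ["a.txt", "really_long_name.txt"]:
    for autosave in (False, True):
        try:
            save_file(name, autosave=autosave)
        except RuntimeError as exc:
            failures.append((name, autosave, str(exc)))

print(failures)
```

The point is that the failing combinations, not any single test, tell you how general the bug is.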
Find a new variation or a new symptom that didn't exist in the previous release. What you are showing is that the new version's code interacts with this error in new ways. That's a new problem. This type of follow-up testing is especially important during a maintenance release that is just getting rid of a few bugs. Bugs won't be fixed unless they were (a) scheduled to be fixed because they are critical or (b) new side effects of the new bug-fixing code.
Question: How many programmers does it take to change a light bulb? Answer: What's the problem? The bulb at my desk works fine!
Bug Advocacy
Overcoming Objections by Better Researching the Failure Conditions
You must describe the failure as precisely as possible. If you can identify a display or a message well enough, the programmer can often identify a specific point in the code that the failure had to pass through.
When you realize that you can't reproduce the bug, write down everything you can remember. Do it now, before you forget even more. As you write, ask yourself whether you're sure that you did this step (or saw this thing) exactly as you are describing it. If not, say so. Draw these distinctions right away. The longer you wait, the more you'll forget.
Maybe the failure was a delayed reaction to something you did before starting this test or series of tests. Before you forget, note the tasks you did before running this test.
Check the bug tracking system. Are there similar failures? Maybe you can find a pattern.
Find ways to affect the timing of the program or devices: slow down, speed up.
Talk to the programmer and/or read the code.
Non-Reproducible Errors
The fact that a bug is not reproducible is data. The program is telling you that you have a hole in your logic. You are not entertaining certain relevant conditions. Why not? (See Watts Humphrey, Personal Software Process, for recommendations to programmers of a system for discovering and then eliminating characteristic errors from their code.) A non-reproducible bug is a tester's error, just like a design bug is a programmer's error. It's valuable to develop a system for discovering your blind spots. To improve over time, keep track of the bugs you're missing and what conditions you are not attending to (or find too hard to manipulate). The following pages give a list of some conditions commonly ignored or missed by testers. Your personal list will be different in some ways, but maybe this is a good start. When you run into an irreproducible defect, look at this list and ask whether any of these conditions could be the critical one. If it could, vary your tests on that basis and you might reproduce the failure.
----------------------------------------------------------------------------------------
(Note: Watts Humphrey suggested to me the idea of keeping a list of commonly missed conditions. It has been a valuable idea.)
Stack corruption might not turn into a stack overflow until you do the same task many times. A wild pointer might not have an easily observable effect until hours after it was mis-set.
If you suspect that you have time-delayed failures, use tools such as videotape, capture programs, debuggers, debug loggers, or memory meters to record a long series of events over time.
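Stack corruption and wild pointers are C-level phenomena, but the delayed-failure pattern itself is language-neutral. Here is a minimal sketch of that pattern under invented assumptions (the per-task leak size and the 100 KB budget are hypothetical): each repetition of a task quietly leaks a little, and the visible failure only appears after many repetitions, far from the defect.

```python
# Sketch of a time-delayed failure: a small per-task "leak" that only
# becomes a visible failure after the same task runs many times.
leaked = []

def do_task():
    leaked.append(bytearray(1024))       # hypothetical: 1 KB leaked per task
    if len(leaked) * 1024 > 100 * 1024:  # hypothetical 100 KB budget
        raise MemoryError("resource exhausted after repeated tasks")

count = 0
try:
    while True:
        do_task()
        count += 1           # each repetition looks like a passing test...
except MemoryError:
    pass                     # ...until, much later, the budget is blown

print(count)
```

A single run of `do_task` always "passes"; only a long recorded series of events reveals the trend, which is why the logging tools above matter.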
An interrupt was received at an unexpected time. The program received a message from another device or system at an inappropriate time (e.g. after a time-out.) Data was received or changed at an unexpected time.
The bug is caused by an error in error-handling. You have to generate a previous error message or bug to set up the program for this one. Time-outs trigger a special class of multiprocessing error handling failures. These used to be mainly of interest to real-time applications, but they come up in client/server work and are very pesky. Process A sends a message to Process B and expects a response. B fails to respond. What should A do? What if B responds later?
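The A/B time-out scenario above can be sketched with two threads and a queue standing in for the two processes; the thread names, message text, and deadlines are all hypothetical. The interesting case is the last question: B's reply arrives after A has already given up, and A must cope with the stale message.

```python
import queue
import threading
import time

# Process A's inbox; Process B will eventually put a reply here.
inbox = queue.Queue()

def process_b():
    time.sleep(0.2)          # B is slow...
    inbox.put("reply #1")    # ...and answers after A's deadline has passed.

b = threading.Thread(target=process_b)
b.start()

# A sends its request (not shown) and waits for the reply with a deadline.
try:
    reply = inbox.get(timeout=0.05)   # A's deadline: 50 ms
except queue.Empty:
    reply = None                      # A times out and moves on

b.join()                              # later: B's late reply is now queued
stale = inbox.get_nowait()            # A must recognize and discard it

print(reply, stale)
```

If A treats the stale reply as the answer to its *next* request, you get exactly the pesky class of client/server failures the slide describes.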
A strange and complex set of steps is required to induce the failure. There is not enough information to know what steps are required, and it will take a lot of work to figure them out. The programmer doesn't understand the report.
Unrealistic (e.g. a corner case). "It's a feature!"
Bug Advocacy
Reporting Errors
As soon as you run into a problem in the software, fill out a Problem Report form. In a well-written report, you:
Explain how to reproduce the problem.
Analyze the error so you can describe it in a minimum number of steps.
Include all the steps.
Make the report easy to understand.
Keep your tone neutral and non-antagonistic.
Keep it simple: one bug per report.
If a sample test file is essential to reproducing a problem, reference it and attach the test file.
To the extent that you have time, describe the dimensions of the bug and characterize it. Describe what events are and are not relevant to the bug, what the results are (any characteristics of the failure), and how they varied across tests.
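One way to keep reports honest about these essentials is a simple completeness check. The report below is entirely invented (build number, OS, and bug are hypothetical), and the field names are one possible scheme, not a prescribed form:

```python
# A hypothetical bug report as structured data, plus a check that the
# essential fields described above are present and non-empty.
report = {
    "summary": "Crash when saving a file whose name contains an accent",
    "steps": [                       # numbered, starting from a known place
        "1. Boot the program.",
        "2. Open any document.",
        "3. Choose File > Save As and type the name cafe2.txt.",
        "4. Press Enter.",
    ],
    "observed": "Program exits with an unhandled exception.",
    "expected": "File is saved under the given name.",
    "environment": "Build 1.2.3, Windows XP SP1",   # hypothetical
    "attachments": [],               # optional: sample test files
}

required = ("summary", "steps", "observed", "expected", "environment")
missing = [field for field in required if not report.get(field)]

print(missing)
```

An empty `missing` list does not make a report good, but a non-empty one reliably flags a report the programmer cannot act on.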
Start from a known place (e.g. boot the program), then describe each step until you hit the bug. NUMBER THE STEPS. Take it one step at a time. If anything interesting happens on the way, describe it. (You are giving people directions to a bug. Especially in long reports, people need landmarks.)
Describe the erroneous behavior and, if necessary, explain what should have happened. (Why is this a bug? Be clear.) List the environmental variables (config, etc.) that are not covered elsewhere in the bug tracking form. If you expect the reader to have any trouble reproducing the bug (special circumstances are required), be clear about them.
"It's a feature!"
Later in the course, we'll think about this. The usual issues involve the costs of fixing bugs, the company's understanding of the definitions of bugs, and your personal credibility.
Bug Advocacy
The reviewer of a new bug report:
checks that critical information is present and intelligible;
checks whether she can reproduce the bug;
asks whether the report might be simplified, generalized or strengthened.
If there are problems, she takes the bug back to the original reporter.
If the reporter was outside the test group, she simply checks basic facts with him. If the reporter was a tester, she points out problems with an objective of furthering the tester's training.
Bug Advocacy
This curve maps the traditionally expected increase of cost as you find and fix errors later and later in development.
The Cost of Quality is the total amount the company spends to achieve and cope with the quality of its product.
This includes the company's investments in improving quality, and its expenses arising from inadequate quality. A key goal of the quality engineer is to help the company minimize its cost of quality.
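The cost of quality is just the sum of the four categories defined on the next slide. A toy calculation, with entirely hypothetical annual figures, makes the trade-off concrete: money not spent on prevention and appraisal tends to reappear, larger, as failure cost.

```python
# Illustrative only: hypothetical annual spend in the four
# quality-cost categories, summed into a total cost of quality.
costs = {
    "prevention":        50_000,  # training, design reviews, ...
    "appraisal":        120_000,  # testing, inspections, ...
    "internal_failure":  80_000,  # bug fixes, retesting, late shipment, ...
    "external_failure": 250_000,  # support calls, refunds, lost sales, ...
}

total_cost_of_quality = sum(costs.values())
print(total_cost_of_quality)
```

With these invented numbers, failure costs dominate, which is the usual argument for shifting spend toward prevention and appraisal.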
Refer to the paper, Quality Cost Analysis: Benefits & Risks.
Quality-Related Costs
Prevention: the cost of preventing customer dissatisfaction, including errors or weaknesses in software, design, documentation, and support.
Appraisal: the cost of inspection (testing, reviews, etc.).
Internal Failure: the cost of dealing with errors discovered during development and testing. Note that the company loses money as a user (who can't make the …).
External Failure: the cost of dealing with errors that affect your customers, after the product is released.
Prevention: Staff training; Requirements analysis & early prototyping; Fault-tolerant design; Defensive programming; Usability analysis; Clear specification; Accurate internal documentation; Pre-purchase evaluation of the reliability of development tools.
Appraisal: Design review; Code inspection; Glass box testing; Black box testing; Training testers; Beta testing; Usability testing; Pre-release out-of-box testing by customer service staff.
Internal Failure: Bug fixes; Regression testing; Wasted in-house user time; Wasted tester time; Wasted writer time; Wasted marketer time; Wasted advertisements; Direct cost of late shipment; Opportunity cost of late shipment.
External Failure: Lost sales and lost customer goodwill; Technical support calls; Writing answer books (for Support); Investigating complaints; Supporting multiple versions in the field; Refunds, recalls, warranty, liability costs; Interim bug fix releases; Shipping updated product; PR to soften bad reviews; Discounts to resellers.