Abstract
The competitive nature of most algorithmic experimentation is a source of problems that are all too familiar to the research community. It is hard to make fair comparisons between algorithms and to assemble realistic test problems. Competitive testing tells us which algorithm is faster but not why. Because it requires polished code, it consumes time and energy that could be better spent doing more experiments. This article argues that a more scientific approach of controlled experimentation, similar to that used in other empirical sciences, avoids or alleviates these problems. We have confused research and development; competitive testing is suited only for the latter.
Cite this article
Hooker, J.N. Testing heuristics: We have it all wrong. J Heuristics 1, 33–42 (1995). https://doi.org/10.1007/BF02430364