User participation in the requirements engineering process: An empirical study

Abstract

In the development of information systems, user participation in the requirements engineering (RE) process is hypothesised to be necessary for RE success. In this paper we develop a theoretical model which predicts that the interaction between user participation in the RE process and uncertainty has an impact on RE success. This theory is tested empirically using survey data. We develop instruments to measure user participation and uncertainty, and use an existing instrument to measure RE success. This instrument covers two dimensions of RE success: (a) the quality of RE service, and (b) the quality of RE products. The results indicate that as uncertainty increases, greater user participation alleviates the negative influence of uncertainty on the quality of RE service, and that as uncertainty decreases, the beneficial effects of increasing user participation on the quality of RE service diminish. We did not, however, find that the interaction between user participation and uncertainty had an impact on the quality of RE products. Based on these results, we make recommendations for managing user participation in the RE process and provide directions for future research.
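The interaction hypothesised above is a moderation effect. As a minimal sketch only, assuming a linear moderated-regression form (the abstract does not state the authors' exact model specification, and the symbols P for user participation and U for uncertainty are illustrative), such a hypothesis can be written as:

    \text{RE service quality} = \beta_0 + \beta_1 P + \beta_2 U + \beta_3\,(P \times U) + \varepsilon

Under this illustrative form, \beta_2 < 0 together with \beta_3 > 0 matches the reported pattern for the quality of RE service: the marginal effect of uncertainty, \beta_2 + \beta_3 P, becomes less negative as participation increases, while the marginal effect of participation, \beta_1 + \beta_3 U, shrinks as uncertainty decreases.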


Additional information

This work has been supported, in part, by the IT Macroscope Project and NSERC Canada.

Cite this article

El Emam, K., Quintin, S. & Madhavji, N.H. User participation in the requirements engineering process: An empirical study. Requirements Eng 1, 4–26 (1996). https://doi.org/10.1007/BF01235763
