Decision Support, Knowledge Representation and Management in Medicine
1 Department of Management Information Systems, University of Haifa, Haifa, 31905, Israel
2 Stanford Medical Informatics, Stanford University, Stanford, CA 94305, USA
Summary
Objective: Clinical decision-support systems (CDSSs) are being recognized as important tools
for improving quality of care. In this paper, we review the literature to find trends in CDSSs that
were developed over the last few decades and give some indication of future directions in CDSS development.
Methods: We searched PubMed for papers that were published during the past five years with the words Decision Support Systems appearing in the title, and used our own knowledge of the field to identify additional relevant work.
Results: The goals of developers of modern CDSSs are to develop systems that deliver needed information and can be integrated with the healthcare organization's dynamics. Such CDSSs are the ones most likely to excel. During the past few decades, we have witnessed a gradual maturation of knowledge
representation formalisms and the needed infrastructure for developing integrated CDSSs,
including electronic health record systems (EHR), standard terminologies, and messaging
standards for exchange of clinical data. The demand for CDSSs that are effective and that will
evolve as circumstances change gave rise to methodologies that guide developers through the different aspects of CDSS design and implementation.
Conclusion: Although there exist many approaches for representing, managing and delivering
clinical knowledge, the design and implementation of good and useful systems that will last and
evolve are still active areas of research. The gradual maturation of EHR and infrastructure
standards should make it possible for CDSSs implementers to make major contributions to the
delivery of healthcare.
Two landmark reports of the Institute of Medicine—To Err Is Human: Building a Safer Health System [1] and Crossing the Quality Chasm: A New Health System for the 21st Century [2]—respectively called attention to serious patient safety concerns surrounding the large
number of preventable medical errors and the sizable gaps between best practices and actual
practices. Clinicians not only use their medical knowledge to make diagnostic and therapeutic
decisions, but also coordinate patient care over time and among multiple providers and settings.
These decision-making and coordination processes rely on accessing, understanding and using
vast amounts of knowledge and information. Being human, clinicians cannot be expected to
remember every relevant piece of information and relate to it during the care process. The
Crossing the Quality Chasm report highlighted the potential of using information technology,
and in particular clinical decision support systems (CDSSs), to aid clinicians in gathering
relevant data, making clinical decisions, and managing medical actions more effectively, and
thus achieving reduced practice errors, a higher standard of care, and reduced costs [2]. Many
CDSSs are now in routine use in acute care settings, clinical laboratories, educational
institutions, and are incorporated into electronic medical record systems [3]. Systematic reviews
of CDSSs show that, when effective, CDSSs change processes of care (e.g., appropriate ordering
of tests, correct drug dosing), but few studies have reported that the use of CDSSs led to better patient outcomes. We focused our review of the CDSS literature on trends in CDSSs that were developed over the
years. For this purpose, we defined several topics that we believe are important for developing
successful, usable clinical decision support systems. Following a life-cycle approach, the first
topic focuses on the goals of CDS, as it is the vision that drives the way by which CDSSs will be
developed, implemented, integrated with the environment, and evaluated. The next two topics concern the knowledge-management and knowledge-modeling tasks that are required to analyze and represent the knowledge of CDSSs. The
following topic addresses design considerations that are important for the success of CDSSs.
Next, we review current standardization efforts, which are important for integration of CDSSs as
part of an organization's health information system. The last topic addresses evaluation of CDSSs
– a step that is both a quality control mechanism and the start of a new cycle in the development
of effective CDSSs.
CDSSs are computer systems designed to aid clinicians in making clinical decisions. A standard textbook in medical informatics [7] characterizes CDSSs as tools
for information management, for focusing attention, and for providing patient-specific
recommendations. Looking at research on CDSSs, we can see that the perception of CDSSs has
shifted over the years. Early CDSSs used statistical methods (e.g., de Dombal’s abdominal pain
program [8]), decision analysis [9], and rule chaining (e.g., the MYCIN program [10]). Medical
sociologists have characterized these systems as embodying a vision in which medical decision making, driven by formally represented knowledge and encoded algorithms, can perform at the level of clinicians or even better than
clinicians can [11]. The goal of the systems was to excel in the complex tasks of differential
diagnosis and therapy planning. Thus, evaluations of these systems involved comparing the
performance of the CDSSs with those of novice or expert physicians [12, 13]. The implicit assumption was that the expert-level performance of such systems is realizable in the clinic [11]. The conflict between these requirements and the evolving,
contingent, emergent nature of medical work contributed toward difficulties in the adoption of
CDSSs [14]. Instead of seeing CDSSs as vehicles for rationalizing medicine, developers of
modern CDSSs are more likely to take a socio-technical approach, which recognizes that
introduction of CDSSs needs to take into account their potential effect on the division of work
among care providers and how CDS would shape and, in turn, be shaped by the organizational
structure and practices of providers [15]. In this context, the goals of modern CDS go beyond the
original focus of producing expert-level advisories and extend to include support for tasks such as care coordination and communication among providers. These additional goals contribute toward improving the overall quality of care.
In his review from 1994, Miller notes the differences between the CDSSs of the early 1970's and
those of the 1990's [16]. The trends that Miller saw in the 1990's— a shift toward specialized and
focused systems, interacting systems that are integrated into the clinical environment and
workflow, and the importance of evaluating CDSSs and designing them to be cost-effective—are
also seen in systems that were developed during the last decade. In addition, researchers now
recognize the need to consider patient preferences [17] and base the knowledge represented in CDSSs on the best available evidence. The knowledge-management literature characterizes successful organizations as those that, through more effective utilization of their knowledge assets, continuously improve their decision-making and collaborative processes. In this view, CDSSs are part of a knowledge-management toolkit that a
healthcare organization can employ to deliver the “right knowledge to the right people in the
right form at the right time” [20]. To accomplish this objective, developers of CDSSs have two
knowledge-management tasks: (1) a process-oriented task that elucidates the organization goals,
the information flow and the work flow, the roles and responsibilities, and the communication
and co-ordination patterns of the care process in which a CDSS has to operate and (2) a
knowledge-modeling task in which modelers represent the medical knowledge that enables the
CDSS to deliver appropriate decision-support services during the care process. Quaglini and colleagues developed the concept of a careflow management system (CfMS) [21] to allow explicit modeling of both the medical and the organizational aspects of guideline-based care.
The process-oriented task overlaps with the traditional requirement analysis in software
engineering. Many methodologies have been developed for helping system analysts elicit an
understanding of business processes, their participants, and their information needs, and specify the information systems that should be developed to support the organization. The most widely used methodology for eliciting
and specifying design requirements in the industrial world is the Unified Modeling Language
[22]. In [23], Osheroff and colleagues take a more informal approach, where they developed a
workbook that implementers of a CDSS can use to work through the process of identifying
stakeholders, determining the goals and objectives of the CDSS, cataloging the host information
system’s capabilities, and selecting, deploying, and monitoring specific CDS interventions.
Osheroff et al. emphasize the need to identify opportunities for CDS and to incorporate different
types of CDS within clinical workflow. Berg and Toussaint [24], on the other hand, argue that
implementing new information and communication technology is always a process-improvement
project where the CDS intervention necessarily changes existing practices. Thus, the main
challenge in implementing CDS is not so much trying to fit CDS into existing workflow, as it is
managing the ongoing process of organizational development that was triggered by the CDS
intervention.
Drawing on the organizational-management literature, Nonaka and Takeuchi built a theory of knowledge management on the basis of the
distinction between tacit and explicit knowledge [25]. Tacit knowledge is implicit in humans' capability to perform particular tasks and cannot be expressed easily. It is context-specific and personal. Tacit and explicit knowledge are converted from each other during social processes: tacit-to-tacit (socialization), tacit-to-explicit (externalization), explicit-to-explicit (combination), and explicit-to-tacit (internalization). The medical field has elaborate schemes for production and
dissemination of medical knowledge, in which tacit and explicit knowledge are inter-converted.
For example, the development of clinical practice guidelines involves synthesis of opinions of
expert panels and evidence explicitly reported in the literature. In this case, tacit knowledge is externalized into explicit recommendations. CDSSs can then deliver this explicit knowledge to change clinicians' behaviors, with the final aim of helping clinicians to internalize these changes – an explicit-to-tacit conversion.
Eliciting knowledge from experts is a difficult process. A number of methodologies, such as the
repertory grid method, based on personal construct psychology [27], have been developed.
However, these methodologies are not routinely used to elicit medical knowledge. Instead, more
traditional ways to elicit knowledge are used, such as literature review, interviews, observation
of experts at field settings, and examining experts at work while they "think aloud". The way in which knowledge is elicited may also differ depending on its representation formalism. For example, rules can be elicited from experts using
automated questioning [28], or they can be discovered from databases using various forms of
data mining and machine learning [29]. Machine learning techniques have been used to learn
classifications from examples that have been classified by experts (see next section).
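To make the idea of learning classifications from expert-labeled examples concrete, the following sketch implements Holte's simple 1R rule learner in pure Python; the clinical attributes, values, and labels are invented for illustration and are not drawn from any real CDSS knowledge base.

```python
# A minimal sketch of rule induction from expert-labeled examples (1R).
# All attribute names, values, and labels below are hypothetical.
from collections import Counter, defaultdict

def one_rule(examples, attributes):
    """Holte's 1R: pick the single attribute whose value-to-majority-label
    rule misclassifies the fewest training examples."""
    best = None
    for attr in attributes:
        # Tally the labels observed for each value of this attribute.
        by_value = defaultdict(Counter)
        for x, label in examples:
            by_value[x[attr]][label] += 1
        # Rule: map each attribute value to its majority label.
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(1 for x, label in examples if rule[x[attr]] != label)
        if best is None or errors < best[2]:
            best = (attr, rule, errors)
    return best[0], best[1]

# Hypothetical expert-labeled cases: does this patient need an urgent review?
cases = [
    ({"fever": "high", "wbc": "raised"}, "urgent"),
    ({"fever": "high", "wbc": "normal"}, "urgent"),
    ({"fever": "none", "wbc": "raised"}, "routine"),
    ({"fever": "none", "wbc": "normal"}, "routine"),
]
attr, rule = one_rule(cases, ["fever", "wbc"])
print(attr, rule)  # the most predictive attribute and its induced rule
```

On this toy data the learner selects the "fever" attribute, since it alone separates the expert's labels without error.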
Knowledge representation provides a means for expressing knowledge in a way that can be
interpreted and reasoned with by humans and machines. We discuss knowledge representations
for CDSSs in more detail in Section 4. Represented knowledge may be leveraged by using it in
more than one institution, achieving knowledge sharing. Sharing is enhanced through standards.
Some of the standards that form the infrastructure for CDSSs are covered in Section 5. Another
form of knowledge sharing is sharing of executable knowledge components from which CDSSs can be assembled [30].
Medical knowledge is ever evolving; new risk factors, drugs, diagnostic tests, clinical studies,
pathogen incidence, and drug resistance are some examples of knowledge changes. When
knowledge evolves, its representation needs to be updated for the CDSS to provide appropriate
recommendations. An updated knowledge base is released for use in a new version. This
necessitates mechanisms for version management so that reasoning can relate to the information
existing in different versions, which may be used by different people at a single point in time, or
can be used in retrospective studies. Version management of medical knowledge representations
has been researched mainly in the domains of ontology evolution [31], vocabulary versioning [32, 33], and versioning of clinical guidelines [34]. In these knowledge models, basic change operations are derived from the basic elements of the knowledge models and enable adding, removing, and changing those elements. Research in the medical domain emphasizes the recording of reasons for making the changes, so that users of the updated knowledge base can understand why the changes were made.
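The version-management idea described above can be sketched as a knowledge base whose basic change operations (add, modify, remove) each create a new version and record the reason for the change. The class, its API, and the drug-dosing entry below are hypothetical illustrations, not an actual CDSS interface.

```python
# A minimal sketch of version management for a knowledge base: every basic
# change operation produces a new version and logs the reason for the change,
# so that earlier versions remain available for retrospective reasoning.
# The API and the content are invented for illustration.
import copy

class VersionedKB:
    def __init__(self):
        self.versions = [{}]   # version 0: the empty knowledge base
        self.log = []          # (operation, element, reason) records

    def _next(self):
        """Clone the latest version and append it as the new current one."""
        kb = copy.deepcopy(self.versions[-1])
        self.versions.append(kb)
        return kb

    def add(self, element, content, reason):
        self._next()[element] = content
        self.log.append(("add", element, reason))

    def modify(self, element, content, reason):
        self._next()[element] = content
        self.log.append(("modify", element, reason))

    def remove(self, element, reason):
        del self._next()[element]
        self.log.append(("remove", element, reason))

kb = VersionedKB()
kb.add("max_dose_drug_x", "200 mg/day", reason="initial release")
kb.modify("max_dose_drug_x", "150 mg/day", reason="new safety study")
# Retrospective reasoning can still consult the version in force earlier:
print(kb.versions[1]["max_dose_drug_x"])  # 200 mg/day
print(kb.versions[2]["max_dose_drug_x"])  # 150 mg/day
```

Keeping whole versions (rather than only diffs) is the simplest of several possible designs; real systems typically store change deltas and the recorded rationales together.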
Delivery of knowledge for CDS involves not only provision of patient-specific recommendations
but also retrieval of reference information and guidance. Information retrieval systems vary in
their indexing and mark-up techniques and search methods. For example, Berrios and colleagues
[35] developed a method for indexing medical knowledge according to questions that the
knowledge source answers. The questions are formed as combinations of four basic concepts:
pathology, manifestation, investigation, and therapy (e.g., how does chemotherapy (a therapy) relate to a given pathology?). Information retrieval systems also vary in their retrieval strategies; one such strategy was evaluated by testing a feedback algorithm. The development of new knowledge representation formalisms is not the driving force in CDS work in recent years; many of the representations in use today, including
Bayesian statistical systems and influence diagrams, neural networks, fuzzy set theory, and
symbolic reasoning or "expert" systems, have been around since the 1970's [37] and 1980's [16]. Most of the current CDSSs use one or more of these formalisms for representing and reasoning
with medical knowledge. In this section, we discuss some noticeable trends in the use of these formalisms.
In the last decade, ontologies have often been used to formalize a shared understanding of a
domain. In knowledge engineering, the term ontology is used to mean definitions of concepts in
a domain of interest and the relationships among them (“a specification of a conceptualization of
a domain” [38]). An ontology enables software applications and humans to share and reuse the knowledge of a domain. Ontology languages with formal semantics, such as those based on description logic, allow logical inference over the set of concepts and relationships to provide
decision support and explanation facilities [39]. Ontologies can be complemented by other
knowledge representation formalisms, such as rules, which have been used to create medical
knowledge bases since the 1970's. Such knowledge bases encode non-numeric qualitative models
where symbolic reasoning is performed to reach abstract conclusions about a case (e.g., what
therapy should be given, what is the probable organism causing an infectious disease) [7].
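A minimal sketch of the kind of inference an ontology enables: given is-a (subsumption) assertions, computing the transitive closure lets a rule written against a general concept also apply to its specializations. The concept names below are invented for illustration.

```python
# A toy illustration of ontology-style subsumption reasoning: a query about
# a broad concept also retrieves its specializations via transitive is-a
# links. The concept hierarchy is invented for illustration.
def ancestors(concept, is_a):
    """Return all concepts that subsume `concept` via transitive is-a links."""
    seen, stack = set(), [concept]
    while stack:
        c = stack.pop()
        for parent in is_a.get(c, []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

is_a = {
    "viral_pneumonia": ["pneumonia"],
    "bacterial_pneumonia": ["pneumonia"],
    "pneumonia": ["lung_disease", "infectious_disease"],
    "lung_disease": ["disease"],
    "infectious_disease": ["disease"],
}

# A CDSS rule written against "infectious_disease" also fires for a patient
# whose coded diagnosis is the more specific "viral_pneumonia":
print("infectious_disease" in ancestors("viral_pneumonia", is_a))  # True
```

Description-logic reasoners perform far richer inference (consistency checking, automatic classification), but this transitive-closure step is the core mechanism that lets encoded rules generalize across a concept hierarchy.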
In recent years, ontologies have been often used to represent clinical guidelines. Evidence-based clinical guidelines are systematically developed statements to assist practitioner and patient decisions about appropriate healthcare for specific clinical circumstances [40]. They aim to
improve clinical care, to reduce practice errors and to save costs. Clinical guidelines have been
around since the 1970's, yet the movement toward a safer, evidence-based medical practice has
brought a resurgence of interest in them. In the last decade, much of the research on CDSSs has centered on formalisms for representing clinical guidelines in computer-interpretable form [41]. These formalisms model guidelines as networks of component tasks that unfold over time. Typical tasks involve medical actions (e.g.,
medication prescriptions), data queries, and clinical decisions. Many decision-support systems
that are based on these formalisms have been implemented in recent years [42, 43]. Much of the
current research emphasizes the importance of modeling the integration of a CDSS with the
organizational workflow and information systems. Formalisms such as EON, SAGE, PRODIGY,
GLIF3, and GLARE include a patient data model intended to facilitate interfacing the guideline
model with an EMR [41]. The Guide/NewGuide and SAGE formalisms represent relevant
organizational aspects, including available resources and the organizational roles that perform activities.
As acquiring knowledge from experts is difficult, a plethora of CDSSs have been developed in
recent years using machine-learning (ML) techniques. These techniques can discover knowledge
automatically by learning from examples. One of the most common ML techniques is neural
nets. Neural nets are networks of interconnected simple processing elements. The net's global behavior is determined by the connections between the processing elements and by element parameters. Neural nets recognize patterns in the input data and classify the input. The parameters of the net are learned from training examples. Examples of CDSSs that have been developed using ML techniques include learning
of pulmonary gas exchange parameters to support the selection of inspired oxygen fraction [44],
automated interpretation of diagnostic heart images [45], and determining preterm birth risk [29].
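To make the neural-net description concrete, here is a minimal single-element ("perceptron") sketch in which the net's behavior is determined entirely by connection weights learned from labeled examples; the toy task and data are invented and far simpler than the clinical applications cited above.

```python
# A minimal sketch of the neural-net idea: a single perceptron whose
# classification behavior is determined by connection weights and a bias
# learned from labeled examples. The toy data are invented.
def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights, one per input element
    b = 0.0          # element parameter (bias)
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            # Adjust weights toward examples the net currently misclassifies.
            w = [w[0] + lr * err * x[0], w[1] + lr * err * x[1]]
            b += lr * err
    return w, b

def classify(x, w, b):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy task: flag (1) only cases where both binary indicators are elevated.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([classify(x, w, b) for x, _ in data])  # [0, 0, 0, 1]
```

Practical clinical networks stack many such elements into layers so that nonlinearly separable patterns can also be recognized; the learning principle of adjusting connection weights from examples is the same.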
Many of the recently developed CDSSs are based on models that support probabilistic reasoning.
Examples include decision theoretic models, such as Bayesian networks [46], influence diagrams
[47], and decision trees [48]. These models are specifically designed for reasoning under
uncertainty – a common theme in medical decisions, in which the outcomes of decisions are uncertain. These models represent decision variables (alternative actions), state variables that describe the states of the world, preferences, and
relationships among states of the world. These relationships may be probabilistic, logical, or
qualitative [49]. Unlike decision trees, Bayesian networks and influence diagrams can express probabilistic dependencies among variables compactly, which enables efficient reasoning.
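A worked instance of the reasoning-under-uncertainty machinery described above: Bayes' rule applied to a diagnostic-test decision. The prevalence, sensitivity, and specificity figures are illustrative assumptions, not real clinical values.

```python
# Bayes' rule for a diagnostic test: update the probability of disease
# after observing a positive result. All numeric values are illustrative.
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    # Total probability of a positive result: true positives + false positives.
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# Rare condition (1% prevalence) and a good but imperfect test:
p = posterior(prior=0.01, sensitivity=0.95, specificity=0.90)
print(round(p, 3))  # 0.088
```

The result illustrates why probabilistic CDSSs matter: despite a 95%-sensitive test, a positive result on a rare condition still leaves the disease unlikely (under 9% here), a conclusion clinicians often misjudge when reasoning informally. A Bayesian network generalizes this single update to a whole graph of interdependent variables.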
Many of the current CDSSs still use rules as the representation formalism. Rules are most
suitable for expressing single medical decisions and are often implemented as alerts and
reminders [50]. Most of the rule-based systems support categorical (deterministic) reasoning, but
some use fuzzy rules to support reasoning under uncertainty. An example of such a system is
care plan on-line (CPOL), an intranet-based chronic disease care planning system for general practice [51]. To handle fuzziness, CPOL represents guidelines as fuzzy If...Then rules and attaches a membership function to each linguistic variable. In this way, concepts like underweight and overweight can be represented with gradual, rather than sharp, boundaries.
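The fuzzy-rule idea can be sketched as follows: a linguistic variable such as overweight gets a membership function with gradual boundaries, and a fuzzy If...Then rule fires with a strength equal to the membership degree. The BMI breakpoints below are invented for illustration and are not taken from CPOL.

```python
# A sketch of fuzzy linguistic variables: membership functions with gradual
# boundaries instead of sharp cutoffs. The BMI breakpoints are hypothetical.
def ramp_down(x, full, zero):
    """Membership 1 below `full`, 0 above `zero`, linear in between."""
    if x <= full:
        return 1.0
    if x >= zero:
        return 0.0
    return (zero - x) / (zero - full)

def underweight(bmi):
    return ramp_down(bmi, 17.0, 19.0)

def overweight(bmi):
    return 1.0 - ramp_down(bmi, 24.0, 26.0)

# A fuzzy rule "IF overweight THEN recommend dietary review" fires with a
# strength equal to the membership degree of the patient's BMI:
for bmi in (23.0, 25.0, 27.0):
    print(bmi, round(overweight(bmi), 2))  # 0.0, 0.5, 1.0 respectively
```

With a crisp cutoff at BMI 25, patients at 24.9 and 25.1 would be treated entirely differently; the membership function instead lets a recommendation's strength grow gradually across the boundary.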
Knowledge representation formalisms enable developers to build sophisticated CDSSs but do not guarantee their successful implementation. This led to a
literature that provides general recommendations on how to develop successful CDSSs [17, 18,
23, 52-55]. Many of these papers list the following factors as being important for success of
CDSSs: (1) decision support should be computerized rather than paper-based, (2) workflow
integration should be considered, (3) timely advice should be provided, (4) clinical effects and
costs of the system should be evaluated, and (5) the system should be developed with an ability to evolve. The life cycle of CDSSs includes maintenance of the knowledge and its evolution, as discussed in Section 3.
The Evidence and Decision Support track of the 2000 AMIA Spring Symposium examined the role of CDSSs in evidence-based medicine and produced the following recommendations for developers of evidence-adaptive CDSSs [18]: (1) capture literature-based and practice-based evidence in machine-interpretable knowledge bases; (2) develop maintainable technical and methodological foundations for computer-based decision support; (3) evaluate the clinical effects and costs of CDSSs; (4)
integrate the system into workflow; and (5) establish public policies that provide incentives for
implementing CDSSs.
Bates and coauthors [52] suggest Ten Commandments for effective clinical decision support: (1)
Speed Is Everything, (2) Anticipate Needs and Deliver in Real Time, (3) Fit into the User’s
Workflow, (4) Little Things Can Make a Big Difference, (5) Recognize that Physicians Will
Strongly Resist Stopping, (6) Changing Direction Is Easier than Stopping, (7) Simple
Interventions Work Best, (8) Ask for Additional Information Only When You Really Need It,
(9) Monitor Impact, Get Feedback, and Respond, and (10) Manage and Maintain Your
Knowledge-based Systems.
Wetter [53] lists the following factors as being important for achieving successful
implementations: (1) timely advice, (2) workflow integration, (3) integration into IT
environment, (4) flexibility, (5) response to user needs, and (6) physicians' ability to change the system's recommendations.
Kawamoto and coauthors [54] systematically reviewed the literature in order to determine why
some clinical decision support systems succeed while others fail. They identified 22 factors potentially associated with a CDSS's ability to improve clinical practice, and evaluated 15 of these features in randomized controlled trials of clinical
decision support systems. They identified four of these features as independent predictors of a
system’s ability to improve clinical practice: (1) automatic provision of decision support as part
of clinician workflow, (2) provision of a direct recommendation rather than just an assessment that is presented to the clinician for consideration, (3) provision of decision support at the time and location of decision making, and (4) computer-based generation of decision support.
Ruland and Bakken [17] report a model for developing, implementing, and evaluating CDSSs
that include patients' perspectives of their health problems and preferences for treatment and care
(shared decision support). The model includes eight steps: (1) identify the clinical decision problem that is important to clinicians and patients, (2) define the purpose, users, and clinical context, (3) define the dimensions of the
decision problem, (4) select a measurement technique for eliciting patient preferences, (5)
validate measurement technique, (6) determine the application platform, (7) address practice
implementation issues, and (8) identify outcome measures and methods for outcome evaluation.
Integrated CDSSs depend on clinical information systems that supply the patient data CDSSs need, that allow CDSSs to respond to decision-support opportunities in clinicians' workflow, and that supply applications such as alerting mechanisms and order entry systems that allow effective delivery of decision-support services. Thus, integration of CDSSs requires a standards-based infrastructure, including standard terminology, data model, data exchange format, and other
clinical information systems services. Developing and promoting such standards is the work of
standard development organizations (SDOs), of which Health Level 7 (HL7) and CEN are among the most relevant. HL7 is an SDO that focuses on standards for clinical and administrative data sharing. Its mission is to provide standards for the exchange, management
and integration of data that support clinical patient care and the management, delivery and
evaluation of healthcare services. The HL7 Clinical Decision Support Technical Committee (CDSTC) develops standards related to decision-support formalisms. Its members have defined the Arden Syntax for Medical Logic Modules as an HL7 standard for representing and sharing clinical knowledge that expresses single, independent medical decisions.
Much work has been done to define a messaging standard for "infobutton" queries at the point of
care to retrieve context-sensitive information [56]. So far, the CDSTC has not tried to develop a
standard for clinical guidelines. Instead, the CDSTC focuses on the development of standards for
an expression language for decision criteria and a virtual medical record (i.e., a view of a patient
medical record that is simplified for decision-support purposes). Recently, the GELLO expression language has been adopted as a standard.
CEN (http://www.cenorm.be/cenorm/index.htm) is the European Committee for Standardization. Its mission is to promote voluntary technical harmonization in Europe in conjunction with worldwide bodies and its partners in Europe. CEN's Technical Committee 251 handles medical
informatics, including work on (1) communications: information models, messaging and smart
cards, (2) terminology, (3) security, safety and quality, and (4) technology for interoperability
(devices).
7. Evaluation of CDSSs
The complexity of medical practices and the high cost of implementing CDSSs make evaluation
of CDSSs both a challenge and a necessity. Among many possible definitions of evaluation, we
adopt one that views the evaluation of a CDSS as the process of collecting and analyzing data
about a CDSS for the purpose of answering certain questions [57]. The range of questions that
can be posed for possible evaluation is enormous and the evaluation methodology is necessarily
tied to the questions being asked. It is therefore not surprising that different researchers raise
different questions and suggest different evaluation methodologies. Friedman and Wyatt
formulated a framework for evaluation in terms of (1) the interests of stakeholders (e.g., user,
developer, patient, and funding institutions) in the CDSS, (2) the need for an information
resource, and the development process, intrinsic structure, function, and effect of that resource,
and (3) the objectivist and subjectivist approaches to study design [58]. The objectivist method
requires careful measurements of outcome variables where the presence or absence of CDS
interventions is the independent variable. At the heart of the objectivist approach is the measurement of the accuracy of a CDSS [59] that compares the output of a system against a gold standard, some other reference, or previously validated systems; or the evaluation may focus on the clinical impact, both in terms of process and outcome
variables, as published evaluation studies of CDSSs have typically done [5]. Garg and colleagues
identified 100 randomized and non-randomized controlled trials that evaluated the effect of
implementing a CDSS compared with care provided without a CDSS. The variables that were measured included practitioner performance (where the majority of studies reported improvement with CDSSs) and patient outcomes (where 7 out of 53 studies reported improvement with CDSSs).
CDSSs).
The results of systematic reviews such as [5] need to be read with caution. First, publication bias
tends to favor projects that report successful outcomes. Second, as Wears and Berg point out in
their editorial accompanying the paper [60], the lack of improved performance could be due to
any number of factors, such as human-computer interface problems or lack of time or support
among colleagues. Furthermore, they noted that most of the systems being studied were
evaluated by developers of these systems. When evaluators were not also the system developers, the evaluated systems were less likely to show improvements.
Wears and Berg's critique of the dominant objectivist evaluation strategy reflects the tension
between the socio-technical and the more technologically-oriented objectivist views of CDS
discussed above. Unlike the objectivist view, the socio-technical approach views clinical work as “fundamentally interpretive, interruptive, multi-tasking, collaborative, distributed, opportunistic, and reactive” [60] and the implementations of CDSSs as interventions into systems of organizational dynamics and power relationships. Objectivist evaluation methodologies necessarily cannot capture the qualitative relationships that, in the socio-technical view, are critical determinants of a computer system’s success or failure; often it is difficult to distinguish effects caused by the CDSS from effects caused by the change in the work practices induced by the implementation of the CDSS.

1 The Hawthorne effect is the possibility that clinicians’ performance may improve if they know that they are being studied.
2 “Secular trends” refers to changes of dependent variables over time, where the changes are outside the control of the investigators.
The subjectivist approach borrows from ethnography and uses techniques such as participant observations, interviews, and
analysis of documents and artifacts to study the impact of introducing a CDSS on the clinical
work in its natural setting [58]. A recent example that illustrates the use of the subjectivist approach is a study of the impact of a computerized physician order-entry system on the workflow in an intensive-care unit [61]. The researchers found that the introduction
of the system caused an increase in the number of coordination and verification requirements,
sharing of login sessions by different users, and disruptions of workflow due to the geographical
locations of the clinical workstations. The need to combine qualitative and quantitative methods in the evaluation of CDSSs is becoming apparent [62]. However, such multi-method evaluation is still uncommon [63].
8. Conclusion
This paper reviews some major themes in developing and deploying CDSSs. We saw, in recent
years, the emergence of a powerful critique of the technology-centric vision of CDSSs. This
critique is rooted in a conception of medical work as contingent and emergent, where clinical
data and decisions are re-interpreted as clinicians manage the trajectory of a patient’s problem
and where clinicians' professional expertise and autonomy permit them to make decisions
independently of any fixed protocol. According to this conception, the provision of decision-
support services must be conscious of social roles and be consistent with the distributed nature of
the care process, and not focus on the mind of a single decision maker. Yet, at the same time, the
imperatives of standardization and the accelerated rate of knowledge production also mean that computer-based decision support is indispensable to modern healthcare. We are in the midst of a transitional period where, although there exist many approaches for
representing, managing and delivering clinical knowledge, we do not know a-priori how to
design and implement good and useful systems that will last and evolve. In addition, the shift in
the conceptualization of the goal of CDS raises evaluation questions, which require new
methodologies that integrate the insights from different approaches that exist currently.
This is an incredibly exciting time for implementers of CDSSs. For years, workers in medical informatics have been developing innovative decision-support systems. Yet few of the early systems ever saw successful deployment. With the gradual
maturation of electronic health record systems, the emergence of standard terminologies and
messaging standards for exchange of clinical data, and the widespread recognition that CDS
should play a crucial role in reducing medical errors and in improving the quality of healthcare
and the efficiency of the healthcare delivery system, implementers of CDSSs are poised to make major contributions to the delivery of healthcare.
Acknowledgements
We would like to thank Dongwen Wang for his very helpful comments and suggestions.
References
1. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health
System. Washington DC: Committee on Quality of Health Care in America, Institute of
Medicine, National Academy Press; 1999.
2. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st
Century: National Academy Press; 2001.
3. Coiera E. Chapter 25 - Clinical Decision Support Systems. In: Guide to Health
Informatics (2nd Edition). London: Arnold; 2003.
4. Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision
support systems on physician performance and patient outcomes: a systematic review.
JAMA 1998;280(15):1339-46.
5. Garg AX, Adhikari NKJ, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et
al. Effects of Computerized Clinical Decision Support Systems on Practitioner
Performance and Patient Outcomes: A Systematic Review. JAMA 2005;293(10):1223-
1238.
6. Kucher N, Koo S, Quiroz R, Cooper JM, Paterno MD, Soukonnikov B, et al. Electronic
Alerts to Prevent Venous Thromboembolism among Hospitalized Patients. N Engl J Med
2005;352(10):969-77.
7. Musen MA, Shahar Y, Shortliffe EH. Clinical Decision-Support Systems. In: Shortliffe
EH, Perreault LE, Wiederhold G, Fagan LM, editors. Medical Informatics: Computer
Applications in Health Care and Biomedicine. Second ed. New York: Springer; 2001. p.
573-609.
8. de-Dombal FT, Leaper DJ, Staniland JR, McCann AP, Horrocks JC. Computer-aided
diagnosis of acute abdominal pain. BMJ 1972(2):9-13.
9. Gorry GA, Kassirer JP, Essig A, Schwartz WB. Decision analysis as the basis for
computer-aided management of acute renal failure. American Journal of Medicine
1973(55):473-84.
10. Shortliffe EH. Computer-based medical consultations: Mycin. New York: Elsevier/North
Holland; 1976.
11. Berg M. Rationalizing Medical Work: Decision-Support Techniques and Medical
Practices. Cambridge, MA: The MIT Press; 1997.
12. Yu VL, Fagan LM, Wraith SM, Clancey WJ, Scott AC, Hannigan J, et al. Antimicrobial
selection by a computer. A blinded evaluation by infectious diseases experts. JAMA
1979;242(12):1279-1282.
13. Miller RA, Pople HEJ, Myers JD. Internist-1, an experimental computer-based diagnostic
consultant for general internal medicine. New England Journal of Medicine
1982;307(8):468-476.
14. Wyatt J, Spiegelhalter D. Evaluating medical expert systems: What to test, and how? In:
Knowledge-based systems in medicine: Proceedings of workshop on system
engineering in medicine: Springer; 1991.
15. Berg M. Patient care information systems and health care work: A sociotechnical
approach. Int J Med Inf 1999;55(2):87-101.
16. Miller RA. Medical diagnostic decision support systems--past, present, and future: a
threaded bibliography and brief commentary. J Am Med Inform Assoc 1994;1(1):8-27.
17. Ruland CM, Bakken S. Developing, implementing, and evaluating decision support
systems for shared decision making in patient care: a conceptual model and case
illustration. J Biomed Inform 2002;35(5-6):313-21.
18. Sim I, Gorman P, Greenes RA, Haynes RB, Kaplan B, Lehmann H, et al. Clinical
decision support systems for the practice of evidence-based medicine. J Am Med Inform
Assoc 2001;8(6):527-34.
19. Stefanelli M. Knowledge and Process Management in Health Care Organizations.
Methods Inf Med 2004;43(5):525-535.
20. Schreiber G, Akkermans H, Anjewierden A, de Hoog R, Shadbolt N, Van de Velde
W, Wielinga B. Knowledge Engineering and Management: The CommonKADS
Methodology. Cambridge, MA: The MIT Press; 2000.
21. Quaglini S, Stefanelli M, Cavallini A, Micieli G, Fassino C, Mossa C. Guideline-Based
Careflow Systems. Artif Intell Med 2000;20(1):5-22.
22. Rumbaugh J, Jacobson I, Booch G. The Unified Modeling Language Reference Manual.
Addison-Wesley; 1999.
23. Osheroff JA, Pifer EA, Sittig DF, Jenders RA, Teich JM. Clinical Decision Support
Implementer's Workbook. In: HIMSS; 2004.
http://www.himss.org/ASP/topics_cds_workbook.asp?faid=108&tid=14
24. Berg M, Toussaint P. The mantra of modeling and the forgotten powers of paper: a
sociotechnical view on the development of process-oriented ICT in health care.
International Journal of Medical Informatics 2003;69(2-3):223-234.
25. Nonaka I, Takeuchi H. The Knowledge-Creating Company: How Japanese Companies
Create the Dynamics of Innovation. Oxford University Press; 1995.
26. Panzarasa S, Madde S, Quaglini S, Pistarini C, Stefanelli M. Evidence-based careflow
management systems: the case of post-stroke rehabilitation. J Biomed Inform
2002;35(2):123-39.
27. Gaines BR, Shaw MLG. WebGrid: knowledge modeling and inference through the
World Wide Web. In: Gaines BR, Musen MA, editors. Proceedings of the Tenth Knowledge
Acquisition for Knowledge-Based Systems Workshop; 1996; Banff. p. 65-1–65-14.
28. Eshelman L, Ehret D, McDermott J, Tan M. MOLE: A tenacious knowledge-acquisition
tool. Intl J of Man-Machine Studies 1987;26(1):41-54.
29. Woolery LK, Grzymala-Busse J. Machine learning for an expert system to predict
preterm birth risk. J Am Med Inform Assoc 1994;1(6):439-46.
30. Peleg M, Steele R, Thomson R, Patkar V, Fox J. Open-source Publishing of Medical
Knowledge. In: Tenth Conference on Artificial Intelligence in Medicine; 2005. Lecture
Notes in Computer Science, Vol 3581.
31. Noy NF, Klein M. Ontology Evolution: Not the Same as Schema Evolution. Knowledge
and Information Systems 2003;6(4):428-440.
32. Cimino JJ. Formal Descriptions and Adaptive Mechanisms for Changes in Controlled
Medical Vocabularies. Methods Inf Med 1996;35:202-210.
33. Oliver DE, Shahar Y. Change Management of Shared and Local Versions of Health-Care
Terminologies. Methods Inf Med 2000;39:278-290.
34. Peleg M, Kantor R. Approaches for guideline versioning using GLIF. In: Proc AMIA
Symp. 2003. p. 509-13.
35. Berrios DC, Cucina RJ, Fagan LM. Methods for semi-automated indexing for high
precision information retrieval. J Am Med Inform Assoc 2002;9(6):637-52.
36. Bernstam E. MedlineQBE (Query-by-Example). In: Proc AMIA Symp; 2001. p. 47-51.
37. Shortliffe EH, Buchanan BG, Feigenbaum EA. Knowledge engineering for medical
decision-making: a review of computer-based clinical decision aids. Proc IEEE
1979;67:1207-24.
38. Gruber TR. Toward Principles for the Design of Ontologies Used for Knowledge
Sharing. Int. Journal of Human-Computer Studies 1995;43:907-928.
39. Schulze-Kremer S. Ontologies for Molecular Biology. In: Proceedings of the Third
Pacific Symposium on Biocomputing; 1998. p. 693-704.
40. Field MJ, Lohr KN. Guidelines for Clinical Practice: Directions for a New Program.
Washington DC: Institute of Medicine, National Academy Press; 1990.
41. Peleg M, Tu SW, Bury J, Ciccarese P, Fox J, Greenes RA, et al. Comparing Computer-
Interpretable Guideline Models: A Case-Study Approach. J Am Med Inform Assoc
2003;10(1):52-68.
42. de-Clercq PA, Blom JA, Korsten HHM, Hasman A. Approaches for creating computer-
interpretable guidelines that facilitate decision support. Artificial Intelligence in Medicine
2004;31:1-27.
43. Thomson R. Guideline Modelling Methods Summaries. In: Open Clinical Organization;
2005. http://www.openclinical.org/gmmsummaries.html
44. Murley D, Rees S, Rasmussen B, Andreassen S. Decision support of inspired oxygen
selection based on Bayesian learning of pulmonary gas exchange parameters. Artif Intell
Med 2005;34(1):53-63.
45. Ohlsson M. WeAidU-a decision support system for myocardial perfusion images using
artificial neural networks. Artif Intell Med 2004;30(1):49-60.
46. Ogunyemi O, Clarke JR, Ash N, Webber BL. Combining geometric and probabilistic
reasoning for computer-based penetrating-trauma assessment. J Am Med Inform Assoc
2002;9(3):273-82.
47. Quaglini S, Dazzi L, Gatti L, Stefanelli M, Fassino C, Tondini C. Supporting tools for
guideline development and dissemination. Artif Intell Med 1998;14(1-2):119-37.
48. Jerez-Aragones JM, Gomez-Ruiz JA, Ramos-Jimenez G, Munoz-Perez J, Alba-Conejo E.
A combined neural network and decision trees model for prognosis of breast cancer
relapse. Artif Intell Med 2003;27(1):45-63.
49. Horvitz EJ, Breese JS, Henrion M. Decision Theory in Expert Systems and AI. Intl J
Approximate Reasoning 1988;2(3):247-302.
50. Peleg M, Boxwala AA, Bernstam E, Tu S, Greenes RA, Shortliffe EH. Sharable
Representation of Clinical Guidelines in GLIF: Relationship to the Arden Syntax. Journal
of Biomedical Informatics 2001;34(3):170-81.
51. Beliakov G, Warren J. Fuzzy logic for decision support in chronic care. Artificial
Intelligence in Medicine 2001;21(1-3):209-13.
52. Bates DW, Kuperman GJ, Wang S, Gandhi T, Kittler A, Volk L, et al. Ten
Commandments for Effective Clinical Decision Support: Making the Practice of
Evidence-based Medicine a Reality. J Am Med Inform Assoc 2003;10(6):523-530.
53. Wetter T. Lessons learnt from bringing knowledge-based decision support into routine
use. Artif Intell Med 2002;24(3):195-203.
54. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using
clinical decision support systems: a systematic review of trials to identify features critical
to success. BMJ 2005;330(7494):765.
55. Electronic Decision Support Subgroup. Electronic Decision Support Evaluation
Methodology. In: Australian Health Information Council; 2005.
http://www.ahic.org.au/subgroups/EDSevaluation.html
56. Cimino JJ, Li J, Bakken S, Patel VL. Theoretical, empirical and practical approaches to
resolving the unmet information needs of clinical information system users. In: Proc
AMIA Symp; 2002. p. 170-174.
57. Wyatt J. Evaluation of Clinical Information Systems. In: van Bemmel JH, Musen MA,
editors. Handbook of Medical Informatics. Springer; 1997.
58. Friedman CP, Wyatt JC. Evaluation Methods in Medical Informatics. 1st ed. New York:
Springer; 1997.
59. Smith AE, Nugent CD, McClean SI. Evaluation of inherent performance of intelligent
medical decision support systems: utilising neural networks as an example. Artificial
Intelligence in Medicine 2003;27(1):1-27.
60. Wears RL, Berg M. Computer Technology and Clinical Work: Still Waiting for Godot.
JAMA 2005;293(10):1261-1263.
61. Cheng C, Goldstein M, Geller E, Levitt R. The Effects of CPOE on ICU workflow: an
observational study. In: AMIA Annu Symp Proc; 2003. p. 150-154.
62. Heathfield H, Pitty D, Hanka R. Evaluating information technology in health care:
barriers and challenges. BMJ 1998;316(7149):1959-1961.
63. National Electronic Decision Support Task Force. Electronic Decision Support for
Australia's Health Sector. 2003. http://www.ahic.org.au/downloads/nedsrept.pdf