Bibliography on Evaluation Papers and Books (last updated: Nov. 2013)

The bibliography on health IT evaluation publications was started in 2006. It contains publications considered milestones in health IT evaluation. The last update of the bibliography was conducted in Nov. 2013 in cooperation with the AMIA Working Group on Health IT Evaluation. Thanks to all who contributed!


Friedman CP, Wyatt JC. Evaluation Methods in Biomedical Informatics. 2nd ed. New York: Springer; 2006.

A classic textbook on health IT evaluation methods. Covers mostly quantitative aspects, but also contains reflections on qualitative methods.

Brender J. Handbook of Evaluation Methods for Health Informatics. Academic Press; 2005.

A comprehensive overview of quantitative and qualitative methods of health IT evaluation.

Mayhew DJ. The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design. San Francisco: Morgan Kaufmann Publishers; 1999.

Brender J. Methodology for Assessment of Medical IT-based Systems in an Organisational Context. Studies in Health Technology and Informatics, volume 42. Amsterdam: IOS Press; 1997.

There is no doubt that the assumptions for applying traditional approaches to user requirements specification are more or less unfulfilled. This indicates a need for evolutionary system development combined with constructive assessment throughout the life-cycle of an IT-based solution. A methodology for constructive technology assessment is presented, which (1) covers the entire system life-cycle; (2) has users from the application domain of the future system - or their representatives - as the target users of the assessment methodology; (3) enables constructive assessment during the development of an IT-based solution; (4) is applicable independently of the system development approach; and (5) provides users of the IT-based solution with the information needed to decide whether or not to take the system into real-life clinical use.

Van Gennip EMSJ, Talmon J, editors. Assessment and Evaluation of Information Technologies in Medicine. Studies in Health Technology and Informatics, volume 17. Amsterdam: IOS Press; 1995.

Nielsen J. Usability Engineering. Academic Press; 1993.

A classic in usability testing. Presents methods and issues to be considered in usability testing, including Nielsen's much-referenced 10 usability heuristics.

Rossi PH, Freeman HE. Evaluation: A Systematic Approach. 5th ed. Sage Publications; 1993.

Presents a broad set of activities related to evaluation: evaluation research, the role of evaluators, tailoring evaluations, strategies for impact assessment, randomized designs, measuring efficiency, and the social context of evaluation. The book targets the evaluation of social programs, but the presented principles and theories of evaluation also apply in the IT and health care arena.

Book chapters

Gorman P. Evaluation of Electronic Health Record Systems. In: Lehmann H, et al., editors. Aspects of Electronic Health Record Systems. 2nd ed. March 21, 2006. ISBN-13: 978-0387291543.

The aim of this chapter is to provide an overview of issues and approaches relevant to the evaluation of EHRs. It is not a how-to guide for performing an evaluation in health IT, but it is very good as an introduction to the domain.

Nykänen P, Karimaa E. Evaluation during design of a regional seamless network of social and health care services - information technology perspective. In: Surjan G, Engelbrecht R, McNair P, editors. Health Data in the Information Society. Studies in Health Technology and Informatics, volume 90. Amsterdam: IOS Press; 2002. p. 539-542.

Reports results from a constructive study applying the VATAM evaluation approach. Models of health care processes were studied, together with the needs and requirements of the stakeholders. The results show the importance of focusing on a regional information model that would support interoperability of the legacy systems and describe concepts and their relations. The study also emphasises the application of theoretical frameworks of health informatics in the design and development of health information systems, and the integration of constructive evaluation with the development process.

Brender J, Talmon J, Nykänen P, McNair P, Demeester M, Beuscart R. On the evaluation of system integration. In: van Gennip EMSJ, Talmon JL, editors. Assessment and Evaluation of Information Technologies in Medicine. Amsterdam: IOS Press; 1995. p. 189-208.

The VATAM guidelines are applied to the evaluation of the integration process, which aims at integrating several decision support systems developed in EU TAP R&D projects with the hospital information system in a real user environment.

Clarke K, O'Moore R, Smeets R, Talmon J, Brender J, McNair P, Nykänen P, Grimson J, Barber B. A Methodology for Evaluation of Knowledge-Based Systems in Medicine. In: van Bemmel JH, McCray AT, eds. Yearbook of Medical Informatics 1995:513-527.

Evaluation is critical to the development and successful integration of knowledge-based systems into their application environment. This is of particular importance in the medical domain, not only for reasons of safety and correctness, but also to reinforce the users' confidence in these systems. This paper describes an iterative, four-phased development evaluation cycle covering the following areas: (i) early prototype development, (ii) validity of the system, (iii) functionality of the system, and (iv) impact of the system.

Special Issues

Special Issue of Artif Intell Med: Evaluation of Clinical Decision Support Systems. Artif Intell Med 2013; 59(1): 1-54.

Contains six papers on the evaluation of clinical decision support systems in health care.

Special Issue of the International Journal of Medical Informatics: Organisational issues and technology assessment in health care. Int J Med Inform 1999; 56(1-3).

Reports papers and discussions from the Joint IMIA Working Conference of Working Groups 13 (Organisational Impacts of Medical Informatics) and 15 (Technology Assessment in Health Care), Helsinki, February 1998.

Journal Papers

Ancker JS, Kern LM, Abramson E, Kaushal R. The Triangle Model for evaluating the effect of health information technology on healthcare quality and safety. J Am Med Inform Assoc 2012; 19(1): 61-5.

The authors propose a model for evaluation, the Triangle Model, developed for designing studies of quality and safety outcomes of health IT. This model identifies structure-level predictors, including characteristics of: (1) the technology itself; (2) the provider using the technology; (3) the organizational setting; and (4) the patient population.

Dixon BE, Zafar A, Overhage JM. A Framework for evaluating the costs, effort, and value of nationwide health information exchange. J Am Med Inform Assoc 2010; 17(3): 295-301.

Using a literature review and knowledge gained from active NHIN technology and policy development, the authors constructed a framework for evaluating the costs, effort, and value of data exchange between a health information exchange entity and the nationwide health information network.

Talmon J, Ammenwerth E, Brender J, de Keizer N, Nykänen P, Rigby M. STARE-HI--Statement on reporting of evaluation studies in Health Informatics. Int J Med Inform. 2009 Jan;78(1):1-9.

This is "The" guideline for publication of evaluation studies of Health Informatics applications, adopted by journals such as Methods Inf Med and ACI.

Holden RJ, Karsh BT. A theoretical model of health information technology usage behaviour with implications for patient safety. Behav Inf Technol. 2009 Jan-Feb;28(1):21-38.

It offers a theoretical model of health IT usage behavior that I think is generalizable and useful at various levels of evaluation (individual, organizational, etc.).

Poon EG, Cusack CM, McGowan JJ. Evaluating healthcare information technology outside of academia: observations from the national resource center for healthcare information technology at the Agency for Healthcare Research and Quality. J Am Med Inform Assoc 2009; 16(5): 631-6.

This manuscript highlights some common challenges experienced by health IT project teams at nonacademic institutions, including inappropriately scoped and resourced evaluation efforts, inappropriate choice of metrics, inadequate planning for data collection and analysis, and lack of consideration of qualitative methodologies.

Ammenwerth E, de Keizer N. An inventory of evaluation studies of information technology in health care: Trends in evaluation research 1982 - 2002. Methods of Information in Medicine. 2005;44:44-56.

The authors analysed 1,035 abstracts of evaluation studies published in Medline between 1982 and 2002 and describe developments with regard to, for example, methods, evaluation approaches, settings, study designs, and evaluation criteria.

DeLone WH, McLean ER. The Delone and McLean model of information systems success: a ten-year update. J Manage Inform Syst. 2003 Spring;19(4):9-30.

It's not specific to health informatics but still a very useful framework, based on an extensive review of literature, that suggests how to frame and evaluate "success" of health IT or any other information systems.

Moehr J. R. Evaluation: salvation or nemesis of medical informatics? Comput Biol Med 2002; 32(3):113-25.

Discusses the shortcomings of the objectivist evaluation approach, and then presents the subjectivist approach as well as extensions to both approaches to circumvent the analysed problems.

Murphy E, Dingwall R, Greatbatch D, Parker S, Watson P. Qualitative research methods in health technology assessment: a review of the literature. Health Technology Assessment 2001; 3(16): 1-276.

Comprehensive analysis of theories and methods of qualitative research, and their strengths and weaknesses.

Barbour RS. The case for combining qualitative and quantitative approaches in health services research. J Health Serv Res Policy 1999; 4(1): 39-43.

Argues in favour of combining qualitative and quantitative methods in evaluation studies and discusses various ways to combine both approaches in a multi-method design.

Klein HK, Myers MD. A Set of Principles for Conducting and Evaluating Field Studies in Information Systems. MIS Quarterly 1999; 23(1):67-93.

Discusses the conduct and evaluation of interpretive (qualitative) field studies in the evaluation of information systems. The authors illustrate the usefulness of the principles with three field studies.

Brender J. Trends in Assessment of IT-Based Solutions in Healthcare and Recommendations for the Future. Int J Med Inform 1998; 52(1-3): 217-227.

The state of the art in the assessment of IT-based solutions in healthcare is discussed. Special emphasis is put on the human- and organisation-centred perspective in the development of IT-based solutions. Based on this, and on basic conditions for system analysis and design reported in the literature, requirements for a methodology for user-driven, constructive assessment during the entire life-cycle of the IT-based system are synthesised.

Heathfield H, Pitty D, Hanka R. Evaluating information technology in health care: barriers and challenges. BMJ 1998; 316:1959-61.

Analyses recent problems in the quantitative evaluation of information technology in health care, and argues in favour of a more qualitative, multi-method evaluation approach.

van Gennip EM, Bakker AR: Assessment of effects and costs of information systems. Int J Biomed Comput 1995;39:67-72.

Describes an assessment case study in which different characteristics of the involved departments - and the implied second-order effects - are carefully addressed, while confounding effects are handled with matched control groups during the design and interpretation of the study outcome.

Goodhue DL. Understanding user evaluations of information systems. Manage Sci 1995; 41(12):1827-44.

Presents the task-technology fit (TTF) theory, which describes how users can be either supported or hindered by information technology, depending on the 'fit' between user characteristics, task characteristics, and technology characteristics.

Stead WW, Haynes RB, Fuller S, Friedman CP, Travis LE, Beck JR, Fenichel CH, Chandrasekaran B, Buchanan BG, Abola EE, et al. Designing medical informatics research and library--resource projects to increase what is learned. J Am Med Inform Assoc. 1994 Jan-Feb;1(1):28-33. PubMed PMID: 7719785.

Medical informatics research projects can present unique problems with respect to evaluation. It is not always possible to adapt directly the evaluation methods that are commonly employed in the natural and social sciences. Problems in evaluating medical informatics projects may be overcome by formulating system development work in terms of a testable hypothesis; subdividing complex projects into modules, each of which can be developed, tested and evaluated rigorously; and utilizing qualitative studies in situations where more definitive quantitative studies are impractical.

Grémy F, Degoulet P. Assessment of health information technology: which questions for which systems? Proposal for a taxonomy. Med Inform (Lond). 1993 Jul-Sep;18(3):185-93.

A little old, but a classic. After a brief description of the main domains of application of medical informatics, a typology of the questions to be assessed, and of the actors involved, is presented. Dimensions of technology assessment include techniques, medical and health efficacy, economics, sociology, and law and ethics. Barriers to evaluation are analysed; they include barriers related to domain complexity, human motivation, and methodological barriers. Finally, categories of solutions and research areas are suggested.

Sittig D F. Work Sampling: A Statistical Approach to Evaluation of the Effect of Computers on Work Patterns in The Healthcare Industry. Meth Inform Med 1993; 32(2):167-174.

Techniques developed to evaluate the effects of computer systems on the work patterns of health-care workers include time-motion analysis, subjective evaluations, review of departmental statistics, personal activity records, and work sampling. This study reviews these techniques, discusses their positive and negative aspects, and presents a detailed step-by-step description of work sampling.
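Sittig's step-by-step procedure is not reproduced here, but the statistical core of work sampling - estimating the proportion of time spent on an activity from random-moment observations - can be sketched in a few lines (a minimal illustration under standard binomial-proportion assumptions; the function names and numbers are invented, not taken from the paper):

```python
import math

def work_sampling_estimate(k, n, z=1.96):
    """Proportion of n random-moment observations (k positive) that caught
    the activity, with a normal-approximation confidence interval."""
    p = k / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, (p - half_width, p + half_width)

def required_observations(p_expected, e, z=1.96):
    """Observations needed so that an activity occurring with expected
    proportion p_expected is estimated to within +/- e."""
    return math.ceil(z ** 2 * p_expected * (1 - p_expected) / e ** 2)

# Example: 130 of 500 random observations find a nurse documenting,
# i.e. an estimated 26% of work time is spent on documentation.
p, ci = work_sampling_estimate(130, 500)

# Worst case (p = 0.5): 385 observations for +/-5% at 95% confidence.
n_needed = required_observations(0.5, 0.05)
```

The sample-size step is what makes work sampling cheaper than continuous time-motion observation: a few hundred random snapshots, rather than exhaustive shadowing, suffice for a usefully tight estimate.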

Nykänen P, Chowdhury S, Wigertz O. Evaluation of decision support systems in medicine. Comput Methods Programs Biomed 1991; 34(2-3): 229-238. Reprinted in: van Bemmel JH, McCray AT, editors. IMIA Yearbook of Medical Informatics 1992. Heidelberg: Schattauer; 1992. p. 301-310.

The paper presents an approach to the evaluation of medical decision support systems. The approach emphasises the need to address evaluation from three perspectives: knowledge acquisition, the system development life-cycle, and integrated user-system behaviour. The three perspectives help to manage the development of decision support systems by dividing development into phases and assessing validity at each phase.

Littenberg B. Technology assessment in medicine. Acad Med. 1992 Jul;67(7):424-8.

Another classic, the author defines the concepts medical technology and technology assessment and presents a paradigm for the evaluation of medical technologies. He proposes a hierarchical assessment scheme in which level I, biologic plausibility, compares the technology's proposed mode of action with current biologic information and theory. Level II, technical feasibility, determines whether the technology can be delivered to the target population. Level III, intermediate outcomes, assesses whether the technology has a short-term impact on the biologic or physiologic process that is diseased. Level IV, patient outcomes, investigates the overall medical, psychologic, and financial impacts of the technology upon the patient, including unintended side effects and long-term morbidity and mortality. Level V, societal outcomes, measures the cost of the technology to society in terms of resource use, ethical issues, and social and political hazards. As an example, the author employs this scheme to analyze the use of screening tests for hypercholesterolemia.

Goodman C. It's time to rethink health care technology assessment. Int J Technol Assess Health Care 1992;8:335-358.

This paper puts health care technology assessment into perspective, discussing its concepts, its role, and a number of its key issues, among them the role of RCTs and the distinction between efficacy, effectiveness, and efficiency measures.

Wyatt J, Spiegelhalter D. Evaluating medical expert systems: What to test and how? Med Inform (Lond) 1990; 15: 205-217.

This paper presents a methodology for evaluating clinical decision support systems, with a comprehensive discussion of what to test and of methods for testing it.

Kaplan B, Duchon D. Combining qualitative and quantitative approaches in information systems research: a case study. MIS Quarterly 1988; 12(4): 571-586.

Reports on a case study combining quantitative and qualitative methods in a longitudinal study of an information system. A thorough description of the opportunities, but also the practical problems, of this triangulation approach.

Review papers

Sintchenko V, Magrabi F, Tipper S. Are we measuring the right end-points? Variables that affect the impact of computerised decision support on patient outcomes: a systematic review. Med Inform Internet Med. 2007 Sep;32(3):225-40.

A systematic review of the impact of computerised decision support with important methodological implications.

Harris AD, McGregor JC, Perencevich EN, Furuno JP, Zhu J, Peterson DE, Finkelstein J. The use and interpretation of quasi-experimental studies in medical informatics. J Am Med Inform Assoc. 2006 Jan-Feb;13(1):16-23. Epub 2005 Oct 12.

Little has been written about the benefits and limitations of the quasi-experimental approach as applied to informatics studies. This paper outlines a relative hierarchy and nomenclature of quasi-experimental study designs that is applicable to medical informatics intervention studies.

Kaplan B. Evaluating informatics applications - some alternative approaches: theory, social interactionism, and call for methodological pluralism. International Journal of Medical Informatics 2001; 64:39-56.

A review of evaluation literature concerning clinical decision support systems indicates that randomized controlled clinical trials (RCTs) are the 'gold standard' for evaluation. This paper critiques RCT and experimental evaluation approaches and presents alternative approaches to evaluation that address questions outside the scope of the usual RCT and experimental designs.

Lorenzi NM, Riley RT, Blyth AJC, Southon G, Dixon BJ. Antecedents of the People and Organizational Aspects of Medical Informatics: Review of the Literature. J Am Med Inform Assoc 1997; 4(2): 79-93.

People and organizational issues are critical both in implementing medical informatics systems and in dealing with the altered organizations that new systems often create. This article reviews the behavioral and business referent disciplines that can potentially contribute to improved implementations and ongoing management of change in the medical informatics arena.

Kaplan B. Addressing organisational issues into the evaluation of medical systems. J Am Med Inform Assoc 1997; 4(2): 94-101.

These two papers (Kaplan; Lorenzi et al.) are not strictly evaluation papers, but they focus on the consideration of organisational issues in developing and evaluating medical information systems. These issues are often forgotten, and the two papers are therefore important in emphasising the need to see health information systems as socio-technical systems.

van der Loo RP, van Gennip EMSJ, Bakker AR, Hasman A, Rutten FFH. Evaluation of automated information systems in health care: An approach to classifying comparative studies. Comput Methods Programs Biomed 1995; 48: 45-52.

In this paper, 76 evaluative studies of IT systems in health care were analysed for the criteria used in evaluation. The three most often studied criteria were the performance of the user when using the system, changes in personnel workload over time, and the performance of the information system itself. Only 10 of the 76 studies included some kind of economic evaluation.

Proceeding Papers

Li J, Finkelstein J. Qualitative methods used in medical informatics research: a 12-year review. AMIA Annu Symp Proc. 2008 Nov 6:1024.

Qualitative methodology is gaining popularity in medical informatics research. The goal of the paper is to describe the emerging trends in the use of qualitative methodology in medical informatics research and to assess the methodological quality of these qualitative studies.

Kushniruk AW, Patel VL, Cimino JJ. Usability testing in medical informatics: cognitive approaches to evaluation of information systems and user interfaces. Proceedings of AMIA Annual Fall Symposium; 1997. p. 218-222.

The triangulation of methods in this contribution clearly demonstrates the problem of interviewees' biased judgement when applying a questionnaire approach, and thus illustrates some of the psychological biases inherent in questionnaires.

Brender J, McNair P. Tools for Constructive Assessment of Bids to a Call for Tender - some experiences. In: Surján G, Engelbrecht R, McNair P, editors. Health Data in the Information Society. Proceedings of MIE2002. Amsterdam: IOS Press. Studies in Health Technology and Informatics 2002; 90: 527-532.

The paper describes experiences from a full-scale case study applying a number of novel assessment techniques for selecting among incoming bids in a call for tender, based on a User Requirements Document comprising non-prescriptive, goal-oriented requirements.

Wyatt J, Spiegelhalter D. Field trials of medical decision-aids: potential problems and solutions. In: Clayton P, editor. Proceedings of the Annual Symposium on Computer Applications in Medical Care; 1991. p. 3-7.

The authors review a number of biases related to the assessment of medical decision support systems based on cases from the literature.

Other Sources

Best Practices for Mixed Methods Research in the Health Sciences

Evidence in the published literature attests to the current use of mixed methods approaches in health-related research. The growing interest in mixed methods research has recently been documented in a study of funded NIH investigations that incorporated "mixed methods" or "multimethods" in their abstracts.

User-friendly Handbook for Mixed Method Evaluation

This handbook was initiated in recognition of the fact that, by focusing primarily on quantitative techniques, evaluators may miss important parts of a story. Experienced evaluators have found that the best results are most often achieved through mixed-method evaluations, which combine quantitative and qualitative techniques. The handbook therefore provides more information on qualitative techniques and discusses how they can be combined effectively with quantitative measures.

Homepage of the Working Group
Last change: 12.11.2013