Critical appraisal

Introduction

This is a guideline to the steps that should be followed in critically appraising the quality and validity of published literature. Although it is of some benefit on its own, it should be practised and applied several times, preferably under the supervision of an experienced mentor, to gain maximum benefit.

Title

  • Does this accurately reflect the main scope and nature of the work?

Abstract

  • Is this a well-structured, accurate and balanced summary of the work?
  • Does it distinguish between the results and the conclusions drawn?

Objectives

  • Are these clearly stated in the introduction?
  • How specific are they?
  • Have they been met?
  • Do they raise hypotheses or test hypotheses?

Study design, methods, subjects

  • What approach has been used (e.g. case series, cross-sectional, cohort, case-control)?
  • Was it appropriate and could it reasonably be expected to fulfil the objectives?
  • Cross-refer to other teaching material in Epidemiology and Environmental Risk Assessment, etc., as appropriate.

Study 'populations' and sampling

  • Was the study "population" clearly defined?
  • Is it representative of the group from which it is drawn?
  • How satisfactory was its sampling?
  • How was the sample size chosen? (A minimal power calculation is sketched below.)
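
Where a paper compares two groups, a rough check on whether the sample size was adequate can be made with a standard power calculation. The sketch below is a minimal illustration in Python, assuming a two-sided comparison of two proportions with purely illustrative figures; it is not the calculation any particular study would have reported.

```python
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for comparing two proportions
    (normal approximation). p1 and p2 are the anticipated proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_beta = norm.ppf(power)            # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return int(n) + 1                   # round up

# Illustrative assumption: 10% outcome frequency in one group vs 20% in the other.
print(sample_size_two_proportions(0.10, 0.20))  # about 197 per group
```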

Methods used

  • How has the information been obtained?
  • Have sources of data been clearly described?
  • Have they been validated?
  • Are they reproducible?
  • Could they have been biased?
  • Is quality control of collection of information mentioned?

Remember: even a review article should have a method, including criteria for identifying, selecting and evaluating the original published work.

Controls

  • Are these appropriate?
  • How distinct from the cases were they?
  • Could there have been misclassification?
  • Has matching been carried out correctly?

Exposure

  • How well was the exposure 'speciated' (i.e. characterised as to its identity), and were other relevant co-exposures assessed?
  • How was it measured?
  • How well was it quantified?
  • Was it studied in such a way as to explore a possible exposure-response relationship? (A simple check is sketched below.)
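
One informal way to judge whether the data support an exposure-response relationship is to look at the outcome frequency across ordered exposure categories. The sketch below uses entirely assumed figures and a crude monotonicity check; in practice a formal trend test or regression across categories would be reported.

```python
# Assumed, illustrative data: cases and group sizes in ordered exposure categories.
categories = ["none", "low", "medium", "high"]
cases = [12, 18, 30, 45]
totals = [400, 380, 360, 350]

risks = [c / n for c, n in zip(cases, totals)]
for category, risk in zip(categories, risks):
    print(f"{category:>6}: risk = {risk:.3f}")

# Crude check: does risk rise monotonically with exposure category?
# (A formal test for trend would be used in a published analysis.)
monotonic = all(a <= b for a, b in zip(risks, risks[1:]))
print("Monotonically increasing risk:", monotonic)
```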

Statistical methods

  • Were they appropriate and necessary?
  • Could chance have been responsible for the results? (A confidence-interval check is sketched below.)
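
A quick way to gauge whether chance alone could plausibly explain an association is to look at the confidence interval around the effect estimate. The sketch below computes an odds ratio and its approximate 95% confidence interval from an assumed 2x2 table; the figures are illustrative, not taken from any study.

```python
import math

# Assumed 2x2 table: rows are exposed/unexposed, columns are cases/non-cases.
a, b = 30, 70    # exposed: cases, non-cases
c, d = 15, 85    # unexposed: cases, non-cases

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)    # SE of log(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
# A 95% CI that excludes 1 makes chance alone an unlikely explanation,
# but bias and confounding still need to be considered separately.
```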

Other aspects

  • Remember that each paper is essentially unique: there may be special aspects of methodology that warrant focussed attention.

Results

  • Do these appear in enough detail to permit some checking for accuracy (between the text, tables, figures etc)?
  • Are they consistent?
  • Are they detailed enough to justify the conclusions?
  • If appropriate, are they consistent with an exposure-response relationship?
  • Cross-refer to other teaching material in Epidemiology and Environmental Risk Assessment, etc., as appropriate.

Response rates

  • Are these quoted?
  • Could a poor response rate hide the possibility of important bias? (A worst-case check is sketched below.)
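
Where the response rate is low, a simple sensitivity check is to ask how different the result could be if the non-responders differed systematically from the responders. The sketch below uses assumed figures to bound a prevalence estimate under the extreme cases that all non-responders are negative or all are positive.

```python
# Assumed figures: 1,000 people approached, 600 responded, 180 responders positive.
responders = 600
non_responders = 400
positive_responders = 180

observed_prevalence = positive_responders / responders
total = responders + non_responders

# Extreme scenarios: every non-responder negative, or every non-responder positive.
lowest = positive_responders / total
highest = (positive_responders + non_responders) / total

print(f"Prevalence among responders: {observed_prevalence:.2f}")
print(f"Range allowing for non-response: {lowest:.2f} to {highest:.2f}")
```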

Bias and other distortions

  • What are the most important sources (e.g. interviewer, observer, recall, selection, response bias)?

Discussion and conclusions

  • Are the conclusions consistent with the reported results?
  • Are they plausible?
  • Were the sources, direction and magnitude of bias adequately discussed?
  • Have the confounders been adequately considered?
  • Could other conclusions be drawn from the same results? (e.g. if they rely on temporality alone).
  • Has there been an adequate comparison with other relevant literature?
  • How relevant were the study population and conditions of exposure to the conclusions drawn?

Authors and citation

  • Is the source of the reference clear?
  • Are the authors and their affiliation clear?
  • Where can further information be sought?

Other points

  • How differently would you have undertaken a study to fulfil the same objectives?
  • Why did the authors not follow the approach you might have advocated?
  • What other information would you seek about this particular study?
  • What other information would you seek to corroborate or refute the conclusions of this particular study?
  • Is this study likely to make a difference in relation to understanding or practice?

References

  • Fowkes FGR and Fulton PM. Critical appraisal of published research: Introductory guidelines. British Medical Journal 1991; 302: 1136-1140.
  • Fletcher RH, Fletcher SW, Wagner EH. Clinical Epidemiology: The Essentials. 2nd ed. Williams & Wilkins. Chapter 12 (Summing up), pp. 226-240.
  • For an excellent resource to help evaluate the quality of information found on the World Wide Web, see: Being critical (University of Manchester Library)