Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research, and with patients' values and expectations, in the decision-making process for patient care. The ability to identify and appraise the best available evidence, and to integrate it with your own clinical experience and your patients' values, is a fundamental skill. The aim of this article is to provide a robust and simple process for assessing the credibility of articles and their value to your clinical practice.

Keywords: Evidence-based practice, Method assessment, Research design

Decisions related to patient care are made through an essential process of integrating the best existing evidence, clinical experience, and patient preference. Critical appraisal is the course of action for carefully and systematically examining research to assess its reliability, value, and relevance, in order to guide professionals in their clinical decision making [1]. Critical appraisal is essential to:
Carrying out Critical Appraisal: Assessing the research methods used in the study is the first step in its critical appraisal. This is done using checklists that are specific to the study design. Standard Common Questions:
Critical appraisal starts by examining the following main sections: I. Overview of the paper:
The presence of a peer-review process in a journal's acceptance protocols adds robustness to the assessment criteria for research papers and hence indicates a reduced likelihood that poor-quality research will be published. Other areas to consider include the authors' declarations of interest and potential market bias. Attention should be paid to any declared funding or research grant, in order to check for a conflict of interest [2].

II. Abstract: Reading the abstract is a quick way of getting to know the article and its purpose, major procedures and methods, main findings, and conclusions.
III. Introduction/Background section: A good introduction will thoroughly reference earlier work in the area under discussion and express the importance and limitations of what is already known [2].
-Why is this study considered necessary? What is its purpose? Was the purpose identified before the study, or is it a chance result revealed as part of 'data searching'?
-What has already been achieved, and how does this study differ?
-Does the scientific approach outline the advantages along with the possible drawbacks associated with the intervention or observations?

IV. Methods and Materials section: Full details of how the study was actually carried out should be given. Precise information should be provided on the study design, the population, the sample size, and the interventions presented. All measurement approaches should be clearly stated [3].

V. Results section: This section should clearly reveal what actually happened to the subjects. The results may include raw data and should explain the statistical analysis. These can be shown in related tables, diagrams, and graphs.

VI. Discussion section: This section should include a thorough comparison of what is already known in the topic of interest with the clinical relevance of what has been newly established, together with a discussion of possible limitations and the need for further studies. Does it summarize the main findings of the study and relate them to any deficiencies in the study design or problems in the conduct of the study? Were all participants analysed in the groups to which they were originally allocated (intention-to-treat analysis)?
Once you have answered the preliminary and key questions and identified the research method used, you can incorporate specific questions related to each method into your appraisal process or checklist.

1- What is the research question? For a study to have value, it should address a significant problem within healthcare and provide new or meaningful results. A useful structure for assessing the problem addressed in an article is the Patient/Problem, Intervention, Comparison, Outcome (PICO) method [3]; a minimal code sketch of this structure follows [Table/Fig-1] below.
P = Patient/Problem/Population: Does the research have a focused question? What is the chief complaint? e.g., disease status, previous ailments, current medications.
I = Intervention: An appropriately and clearly stated management strategy, e.g., a new diagnostic test, treatment, or adjunctive therapy.
C = Comparison: A suitable control or alternative, e.g., specific and limited to one alternative choice.
O = Outcomes: The desired results or patient-related consequences have to be identified, e.g., eliminating symptoms, improving function, esthetics.
The clinical question determines which study designs are appropriate. There are five broad categories of clinical questions, as shown in [Table/Fig-1] (Categories of clinical questions and the related study designs).
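To make the PICO structure concrete, here is a minimal Python sketch; the class, its fields, and the example values are all hypothetical illustrations, not part of any published checklist:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    patient: str       # P: patient, problem, or population
    intervention: str  # I: clearly stated management strategy
    comparison: str    # C: a suitable control or alternative
    outcome: str       # O: desired, patient-related result

# Hypothetical focused question for a therapy study
question = PICOQuestion(
    patient="Adults with chronic periodontitis",
    intervention="Scaling and root planing plus adjunctive systemic antibiotics",
    comparison="Scaling and root planing alone",
    outcome="Reduction in probing pocket depth at 6 months",
)
print(question)
```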
2- What is the study type (design)? The study design is fundamental to the usefulness of a study. In a clinical paper, the methodology employed to generate the results should be fully explained. In general, all questions about the related clinical query, the study design, the subjects, and the measures taken to reduce bias and confounding should be adequately and thoroughly explored and answered.

Participants/Sample population: Researchers identify the target population they are interested in. A sample is then taken from this population, and results from the sample are generalized to the target population. The sample should be representative of the target population from which it came. Knowing the baseline characteristics of the sample population is important because this allows readers to see how closely the subjects match their own patients [4].

Sample size calculation (power calculation): A trial should be large enough to have a high chance of detecting a worthwhile effect if it exists. Statisticians can work out, before the trial begins, how large the sample size should be in order to have a good chance of detecting a true difference between the intervention and control groups [5].
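As a worked illustration of such a power calculation, the sketch below uses the standard normal-approximation formula for comparing two means; the difference, standard deviation, significance level, and power are all assumed values, not drawn from any particular trial:

```python
from scipy.stats import norm

def n_per_group(delta: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate sample size per group to detect a difference `delta`
    between two means (two-sided test, normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)           # 0.84 for 80% power
    return 2 * (sd / delta) ** 2 * (z_alpha + z_beta) ** 2

# Assumed numbers: detect a 0.5 mm difference, SD 1.0 mm
print(round(n_per_group(delta=0.5, sd=1.0)))  # about 63 participants per group
```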
Researchers should use measuring techniques and instruments that have been shown to be valid and reliable. Validity refers to the extent to which a test measures what it is supposed to measure, i.e., the extent to which the value obtained represents the object of interest.
Reliability: In research, reliability means repeatability or consistency: how consistent a test is on repeated measurements. This is especially important if assessments are made on different occasions and/or by different examiners. Studies should state the method used for assessing the reliability of any measurements taken and what the intra-examiner reliability was [6].
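One common statistic for intra-examiner reliability of categorical judgements is Cohen's kappa; the minimal sketch below scores agreement between the same examiner's assessments on two occasions, using entirely hypothetical ratings. Continuous measurements, as in [6], would instead call for a measure such as an intraclass correlation coefficient.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical: one examiner classifies 10 radiographs twice, a week apart
session_1 = ["caries", "sound", "caries", "sound", "sound",
             "caries", "sound", "caries", "caries", "sound"]
session_2 = ["caries", "sound", "caries", "caries", "sound",
             "caries", "sound", "caries", "sound", "sound"]

kappa = cohen_kappa_score(session_1, session_2)
print(f"Intra-examiner agreement (Cohen's kappa): {kappa:.2f}")  # 0.60 here
```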
3- Selection issues: Questions should be raised about how participants were selected. Researchers employ a variety of techniques to make the methodology more robust, such as matching, restriction, randomization, and blinding [7]. Bias is the term used to describe an error, at any stage of the study, that was not due to chance. Bias leads to results that deviate systematically from the truth. Because bias cannot be measured, researchers need to rely on good research design to minimize it [8]. To minimize bias within a study, the sample should be representative of the target population. It is also imperative to consider the sample size and to identify whether the study is adequately powered to produce statistically significant results, i.e., quoted p-values <0.05 [9].

4- What are the outcome factors and how are they measured?
5-What are the study factors and how are they measured?
Data Analysis and Results:
Confounding Factors: A confounder has a triangular relationship with both the exposure and the outcome; however, it is not on the causal pathway. It can make it appear as if there is a direct relationship between the exposure and the outcome, or it can mask an association that would otherwise have been present [9], as the worked illustration below shows.

6- What important potential confounders are considered?
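To see this triangular relationship in numbers, the sketch below uses entirely invented counts in which the exposure appears to more than double the risk overall, yet within each level of the confounder the risk ratio is exactly 1.0; the crude association arises solely because the confounder is linked to both the exposure and the outcome.

```python
# Invented (events, total) counts per group, stratified by a confounder
strata = {
    "confounder present": {"exposed": (80, 160), "unexposed": (20, 40)},   # risks 0.50 vs 0.50
    "confounder absent":  {"exposed": (4, 40),   "unexposed": (16, 160)},  # risks 0.10 vs 0.10
}

def risk(events, total):
    return events / total

for name, s in strata.items():
    rr = risk(*s["exposed"]) / risk(*s["unexposed"])
    print(f"{name}: risk ratio = {rr:.2f}")  # 1.00 in both strata

# Crude (unstratified) totals: exposed 84/200 events, unexposed 36/200
crude_rr = (84 / 200) / (36 / 200)
print(f"crude risk ratio = {crude_rr:.2f}")  # 2.33, an artefact of confounding
```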
7- What is the statistical method in the study?
Interpretation of p-value: The p-value is the probability that the observed result, or one more extreme, would have arisen by chance alone (i.e., if there were no true effect). By convention, a p-value of less than 1 in 20 (p<0.05) is regarded as statistically significant.
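As a minimal sketch of how a p-value arises in practice, the following applies SciPy's two-sample t-test to invented measurements (all numbers hypothetical):

```python
from scipy.stats import ttest_ind

# Hypothetical outcome measurements (e.g., in mm) in two groups
control      = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7]
intervention = [5.6, 5.9, 5.4, 5.8, 5.7, 6.0, 5.5, 5.8]

t_stat, p_value = ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# p is well below 0.05 here, so the difference would be called
# statistically significant
```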
Confidence interval: Repeating the same trial many times would not yield exactly the same result every time; on average, however, the results would fall within a certain range. A 95% confidence interval means that there is a 95% chance that the true size of the effect lies within this range.
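To make the idea concrete, this sketch computes an approximate 95% confidence interval for a difference between two means from invented summary statistics, using the usual normal approximation (estimate plus or minus 1.96 standard errors):

```python
# Hypothetical summary statistics from a trial
mean_diff = 0.70   # observed difference between group means
se_diff = 0.12     # standard error of that difference

z = 1.96           # critical value for a 95% confidence interval
lower = mean_diff - z * se_diff
upper = mean_diff + z * se_diff
print(f"95% CI: {lower:.2f} to {upper:.2f}")  # 0.46 to 0.94
```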
8- Statistical results: Are statistical tests performed and comparisons made (data searching)? Correct statistical analysis of the results is crucial to the reliability of the conclusions drawn from a research paper. Depending on the study design and the sample selection method employed, descriptive or inferential statistical analysis may be carried out on the results of the study. It is important to identify whether this is appropriate for the study [9].
Clinical significance: Statistical significance, as shown by a p-value, is not the same as clinical significance. Statistical significance judges whether treatment effects are explicable as chance findings, whereas clinical significance assesses whether treatment effects are worthwhile in real life. Small improvements that are statistically significant might not result in any meaningful clinical improvement, as the sketch below illustrates. The following questions should always be kept in mind when judging whether a result matters clinically.
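The distinction can be shown with simple arithmetic. In the sketch below (all numbers invented), a clinically trivial 0.05 mm improvement reaches p < 0.05 purely because the sample is very large:

```python
import math
from scipy.stats import norm

# Invented trial: 0.05 mm mean improvement, SD 1.0 mm, 10,000 per group
diff, sd, n = 0.05, 1.0, 10_000

se = sd * math.sqrt(2 / n)          # standard error of the difference
z = diff / se
p = 2 * (1 - norm.cdf(z))           # two-sided p-value
print(f"z = {z:.2f}, p = {p:.5f}")  # p is about 0.0004: statistically significant
# Yet a 0.05 mm change is far too small to matter to any patient.
```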
9- What conclusions did the authors reach about the study question? The conclusions should ensure that any recommendations stated are supported by the results obtained and remain within the scope of the study. The authors should also acknowledge the limitations of the study and their effects on the outcomes, and propose suggestions for future studies [10].
Do the citations follow one of the Council of Biological Editors' (CBE) standard formats?

10- Are ethical issues considered? If a study involves human subjects, human tissues, or animals, was approval obtained from the appropriate institutional or governmental entities? [10,11]
Critical appraisal of RCTs: Factors to look for:
[Table/Fig-2] summarizes the Consolidated Standards of Reporting Trials (CONSORT) guidelines [12].
Critical appraisal of systematic reviews: Systematic reviews provide an overview of all the primary studies on a topic and try to obtain an overall picture of the results. In a systematic review, all the primary studies identified are critically appraised, and only the best ones are selected. A meta-analysis (i.e., a statistical pooling of the results from the selected studies) may be included; a minimal pooling sketch follows [Table/Fig-3] below. Factors to look for:
[Table/Fig-3] summarizes the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [13].
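As an illustration of the statistical pooling step of a meta-analysis (not of the PRISMA guideline itself), the sketch below combines invented per-study effect estimates using standard fixed-effect inverse-variance weighting:

```python
import math

# Invented effect estimates and standard errors from four primary studies
effects = [0.40, 0.25, 0.55, 0.30]
ses     = [0.15, 0.10, 0.20, 0.12]

weights = [1 / se ** 2 for se in ses]  # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.2f}, "
      f"95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f}")
# pooled effect = 0.32, 95% CI 0.20 to 0.45
```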
Critical appraisal is a fundamental skill in modern practice for assessing the value of clinical research and providing an indication of its relevance to the profession. It is a skill set, developed throughout a professional career, that, through integration with clinical experience and patient preference, permits the practice of evidence-based medicine and dentistry. By following a systematic approach, such evidence can be considered and applied to clinical practice.

References
[1] Burls A. What is critical appraisal? London: Hayward Medical Communications; 2016. Available from: http://www.whatisseries.co.uk/what-is-critical-appraisal/
[2] MacInnes A, Lamont T. Critical appraisal of a research paper. Scott Uni Med J. 2014;3(1):10-17.
[3] Richards D, Lawrence A. Evidence-based dentistry. Br Dent J. 1995;179(7):270-73.
[4] Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312(7023):71-72.
[5] Greenhalgh T. How to read a paper: The basics of evidence based medicine. 5th ed. New York: John Wiley & Sons; 2014.
[6] Sakka S, Al-ani Z, Kasioumis T, Worthington H, Coulthard P. Inter-examiner and intra-examiner reliability of the measurement of marginal bone loss around oral implants. Implant Dent. 2005;14(4):386-88.
[7] Rosenberg W, Donald A. Evidence based medicine: an approach to clinical problem-solving. BMJ. 1995;310(6987):1122-26.
[8] Stewart LA, Parmar MK. Bias in the analysis and reporting of randomized controlled trials. Int J Technol Assess Health Care. 1996;12(2):264-75.
[9] Egger M, Smith GD. Bias in location and selection of studies. BMJ. 1998;316(7124):61-66.
[10] Haynes RB. Of studies, syntheses, synopses, summaries, and systems: the "5S" evolution of information services for evidence-based healthcare decisions. Evid Based Med. 2006;11(6):162-64.
[11] Al-Jundi A, Sakka S. Protocol writing in clinical research. J Clin Diagn Res. 2016;10(11):ZE10-ZE13.
[12] Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869.
[13] Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009;6(7):e1000097.