
Year : 2015  |  Volume : 1  |  Issue : 2  |  Page : 189-190

Tools for placing research in context

Department of Preventive Medicine and Medicine-Cardiology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA

Date of Web Publication: 30-Sep-2015

Correspondence Address:
Dr. Mark D Huffman
Department of Preventive Medicine and Medicine-cardiology, Northwestern University Feinberg School of Medicine, 680 N. Lake Shore Drive, Chicago, IL 60611

Source of Support: None, Conflict of Interest: None

DOI: 10.4103/2395-5414.166322


How to cite this article:
Huffman MD. Tools for placing research in context. J Pract Cardiovasc Sci 2015;1:189-90


Introduction

Many cardiology trainees are interested in placing research results into a larger context to understand their significance (or lack thereof) more clearly. The aim of this commentary is to describe tools for determining the research question, assessing the risk of bias, evaluating causal inference, and evaluating the overall quality of evidence.

Determine the Research Question

Many trainees struggle with interpreting a research study report's findings because they do not clearly understand the research question. The PICOTS framework is a simple way to identify the participants, intervention, comparator, outcome(s), timing, and study design (some say setting) of the study. This framework is designed to evaluate the components of trials but can be adapted for observational studies by changing intervention to exposure (PECOTS). Either framework can be particularly useful when comparing similar studies to help identify the differences between them.
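To make the framework concrete, the decomposition above can be sketched as a simple mapping. The trial described here is hypothetical and illustrative only, not a real study:

```python
# Decomposing a (hypothetical) trial report with the PICOTS framework.
picots = {
    "Participants": "Adults aged 40-75 with established coronary artery disease",
    "Intervention": "High-intensity statin therapy",
    "Comparator": "Moderate-intensity statin therapy",
    "Outcomes": ["Major adverse cardiovascular events", "All-cause mortality"],
    "Timing": "5-year follow-up",
    "Study design": "Randomized controlled trial",
}

# For an observational study, the intervention becomes an exposure (PECOTS).
pecots = dict(picots)
pecots["Exposure"] = pecots.pop("Intervention")

for component, value in picots.items():
    print(f"{component}: {value}")
```

Writing out each component side by side for two similar studies makes their differences (e.g., in comparators or follow-up duration) easy to spot.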

Assess Risk of Bias

The Cochrane Handbook (freely available at handbook.cochrane.org) outlines how to evaluate the key domains for assessing the risk of bias, with particular attention paid to selection bias (sequence generation; allocation concealment), performance bias (blinding), attrition bias (differential drop-out), and reporting bias (comparing the trial protocol with reported results and evaluating for funnel plot asymmetry). Other sources of bias exist, particularly in nonrandomized studies, but these domains are important when evaluating randomized trials. It can be difficult to be certain whether part or all of a trial is "biased" (particularly when only reading a study report), so it is preferable to evaluate the "risk" of bias across these domains. The Cochrane Collaboration discourages the use of "quality scores" for evaluating the risk of bias and instead encourages a judgment of the risk of bias (low, unclear, or high) based on the study reports.
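A minimal sketch of a Cochrane-style risk-of-bias summary for a single hypothetical trial follows; the domain judgments here are invented for illustration, and the "any high domain means high overall" rule is a simplification rather than the Cochrane Collaboration's full algorithm:

```python
# Each domain receives a judgment of "low", "unclear", or "high" risk of
# bias -- a description, not a numeric quality score.
VALID_JUDGMENTS = {"low", "unclear", "high"}

risk_of_bias = {
    "Selection bias: sequence generation": "low",
    "Selection bias: allocation concealment": "unclear",
    "Performance bias: blinding": "high",
    "Attrition bias: differential drop-out": "low",
    "Reporting bias: protocol vs. reported results": "unclear",
}

assert all(j in VALID_JUDGMENTS for j in risk_of_bias.values())

# Simplified overall flag: any "high" domain marks the trial as high risk,
# otherwise any "unclear" domain leaves the overall judgment unclear.
judgments = set(risk_of_bias.values())
overall = ("high" if "high" in judgments
           else "unclear" if "unclear" in judgments
           else "low")
print(f"Overall risk of bias: {overall}")
```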

Evaluate Causal Inference

Based on his research evaluating the association between smoking (exposure) and lung cancer (outcome), Hill,[1] and later Blackburn and Labarthe,[2] described criteria for evaluating whether a relationship is likely to be causal [Box 1]. Evaluating causality is ultimately a judgment that is not determined by P values. Most research reports include results from observational studies, and it is useful for trainees to return to these fundamentals to assess whether a purported relationship is likely to be causal. If uncertainty is present about the causal nature of a relationship, it is strongly preferred that researchers and readers avoid causal language.

Evaluate Overall Quality of Evidence to Inform the Strength of Recommendations

The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) Working Group outlines domains for evaluating the quality of evidence for each outcome under study (range: high, moderate, low, and very low).[3] Randomized trials start at a high level of evidence, whereas observational studies start at a low level of evidence, and each can move up or down by as many as two levels. The domains include: (1) study limitations (based on risk of bias); (2) inconsistency of results; (3) indirectness of evidence; (4) imprecision; and (5) reporting bias. The assessment of evidence quality is explicitly separated from the strength of recommendations, which are based on: (1) quality of evidence; (2) uncertainty between desirable and undesirable effects; (3) values and preferences; and (4) whether or not the intervention represents a wise use of resources.
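The starting levels and rating movement described above can be sketched as follows; this is a simplified illustration of the GRADE logic, not the Working Group's full method:

```python
LEVELS = ["very low", "low", "moderate", "high"]

def grade_quality(study_design: str, downgrades: int = 0, upgrades: int = 0) -> str:
    """Start randomized trials at 'high' and observational studies at 'low',
    then move down one level per serious concern (study limitations,
    inconsistency, indirectness, imprecision, reporting bias) and up for
    factors such as a large magnitude of effect."""
    start = 3 if study_design == "randomized trial" else 1
    rating = min(len(LEVELS) - 1, max(0, start - downgrades + upgrades))
    return LEVELS[rating]

print(grade_quality("randomized trial", downgrades=1))  # moderate
print(grade_quality("observational", upgrades=2))       # high
```

Note how the starting point, rather than the study design alone, interacts with the five domains to produce the final rating for each outcome.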

Conclusion

Cardiology trainees who need tools to place research into a larger context can use the steps above to determine the research question, assess the risk of bias, evaluate causal inference, and evaluate the quality of evidence to inform the strength of recommendations.

References

1. Hill AB. The environment and disease: Association or causation? Proc R Soc Med 1965;58:295-300.
2. Blackburn H, Labarthe D. Stories from the evolution of guidelines for causal inference in epidemiologic associations: 1953-1965. Am J Epidemiol 2012;176:1071-7.
3. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: An emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008;336:924-6.

