The unique selling point (USP) of a systematic review is finding the magnitude of the relationship between variables. Yes / No

Meta-analysis steps

Defining the research area

The first step in conducting a meta-analysis is to define clearly the focus of the meta-analysis. What is the research question addressed by the meta-analytic review? For instance, we may want to study the efficacy of interventions aimed at reducing hypertension, gender differences in depression, or associations between physical exercise and body weight. More generally, a meta-analysis can compare differences between two groups or examine associations between two variables.

Defining inclusion and exclusion criteria

Inclusion and exclusion criteria define which studies are eligible for inclusion in the meta-analysis. They refer to:

  • characteristics of the study (e.g., population, design)
  • characteristics of the publication (e.g., language, type, year of publication)

Searching and selecting primary studies

In order to retrieve all the relevant literature, it is necessary to use multiple search strategies. The main search strategies include:

  • Search in reference databases (e.g., PsycINFO, ERIC, MEDLINE, EMBASE, Scopus, Web of Science, Dissertation abstract, etc.)
  • Search in the reference list of reviews available on the same topic
  • Search in the reference list of pertinent primary studies
  • Search in indexes of journals that publish most papers on the topic of the meta-analysis (especially useful to find articles in press)
  • Contacts with experts in the field

After conducting the search, it is necessary to check each retrieved reference against the inclusion criteria. In this way, it is possible to identify the primary studies to be included in the meta-analysis.

Coding primary studies

Coding is the process by which primary studies are examined in order to extract the data needed to perform the meta-analysis. The coding protocol serves as a guide to this procedure.

Computing effect sizes

For each study, it is necessary to compute an effect size together with its variance, standard error, and confidence interval. The effect size is a measure of the magnitude of a relationship between two variables or of a difference between groups. The main types of effect size are based on:

  • means (Cohen’s d, Hedges’ g, raw unstandardized difference)
  • binary data (risk ratio, odds ratio, risk difference)
  • correlations (Pearson’s correlations, Fisher’s Z)
  • survival data (hazard ratio)

The variance, standard error, and confidence interval provide an estimate of the precision of an effect size.
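For the mean-based family, these computations can be sketched in Python using the standard approximations for the variance of Cohen's d and the Hedges' g small-sample correction; the means, standard deviations, and sample sizes below are hypothetical:

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled

def d_variance(d, n1, n2):
    """Common approximation to the sampling variance of Cohen's d."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def hedges_g(d, n1, n2):
    """Hedges' g: Cohen's d multiplied by the small-sample correction J."""
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    return j * d

# Hypothetical two-group study: means 105 vs. 100, SD 10, n = 40 per group
d = cohens_d(105.0, 10.0, 40, 100.0, 10.0, 40)
v = d_variance(d, 40, 40)
se = math.sqrt(v)
ci = (d - 1.96 * se, d + 1.96 * se)   # 95% confidence interval
```

The same three quantities (effect size, variance, confidence interval) are what each study contributes to the pooling step later on.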
The best way of reporting these results is through a forest plot. The forest plot is a plot of effect sizes (with confidence intervals) of all the studies included in the meta-analysis.
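A basic forest plot can be sketched with matplotlib; the study names, effect sizes, and confidence-interval half-widths below are hypothetical:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

# Hypothetical per-study effect sizes and 95% CI half-widths
studies = ["Study A", "Study B", "Study C", "Study D"]
effects = [0.42, 0.31, 0.55, 0.12]
halfwidths = [0.20, 0.15, 0.30, 0.25]

fig, ax = plt.subplots()
ypos = list(range(len(studies)))
# Each study: a square point estimate with a horizontal CI bar
ax.errorbar(effects, ypos, xerr=halfwidths, fmt="s", color="black", capsize=3)
ax.axvline(0.0, linestyle="--", color="grey")  # line of no effect
ax.set_yticks(ypos)
ax.set_yticklabels(studies)
ax.invert_yaxis()  # first study at the top, as in published forest plots
ax.set_xlabel("Effect size (95% CI)")
```

Published forest plots typically add the pooled estimate as a diamond at the bottom; that requires the aggregation step described next.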


Aggregating effect sizes

After having computed an effect size for each study, it is possible to compute an overall effect size. In this step, effect sizes can be combined by means of:

  • Fixed-effect model: the assumption of this model is that there is one true effect size common to all studies. In assigning a weight to each study, it takes into account only one source of variance: the within-study variance
  • Random-effects model: the assumption of this model is that the true effect sizes are normally distributed. In assigning a weight to each study, it takes into account two sources of variance: within-study variance and between-studies variance
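The two pooling models above can be sketched as inverse-variance weighting; the random-effects version adds an estimate of the between-studies variance τ² (here via the DerSimonian-Laird method) to each study's variance. The effect sizes and variances are hypothetical:

```python
def fixed_effect(yi, vi):
    """Fixed-effect pooling: weight each study by 1 / within-study variance."""
    w = [1.0 / v for v in vi]
    m = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    return m, 1.0 / sum(w)                      # pooled effect and its variance

def random_effects(yi, vi):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimate."""
    w = [1.0 / v for v in vi]
    m_fixed = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    q = sum(wi * (y - m_fixed) ** 2 for wi, y in zip(w, yi))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)    # between-studies variance
    w_star = [1.0 / (v + tau2) for v in vi]     # tau^2 added to every study
    m = sum(wi * y for wi, y in zip(w_star, yi)) / sum(w_star)
    return m, 1.0 / sum(w_star), tau2

yi = [0.42, 0.31, 0.55, 0.12]      # hypothetical effect sizes
vi = [0.010, 0.006, 0.022, 0.016]  # hypothetical within-study variances
```

Because the random-effects weights include τ², the pooled variance is never smaller than under the fixed-effect model, reflecting the extra between-studies uncertainty.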

Assessing heterogeneity

Heterogeneity across study effect sizes can be assessed through two statistics:

  • Q statistic: used to establish whether there is significant heterogeneity across studies
  • I²: used to quantify the heterogeneity; it estimates the proportion of observed variance that reflects real differences in effect sizes
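A minimal sketch of both statistics, computed from hypothetical effect sizes and variances (Q is compared against k − 1 degrees of freedom, and I² = (Q − df)/Q, floored at zero):

```python
def heterogeneity(yi, vi):
    """Q statistic and I^2 (%) for study effects yi with variances vi."""
    w = [1.0 / v for v in vi]
    m = sum(wi * y for wi, y in zip(w, yi)) / sum(w)   # fixed-effect mean
    q = sum(wi * (y - m) ** 2 for wi, y in zip(w, yi))
    df = len(yi) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical data: four studies
q, i2 = heterogeneity([0.42, 0.31, 0.55, 0.12], [0.010, 0.006, 0.022, 0.016])
```

Testing Q for significance requires comparing it against a chi-squared distribution with k − 1 degrees of freedom, which is omitted here for brevity.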

Testing moderators

Moderators (or predictors) are factors assumed to affect the magnitude of the effect sizes across the studies in which they are present. If the moderator is categorical, its effect is tested by a subgroup analysis; if the moderator is continuous, its effect is tested by a meta-regression.
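For a continuous moderator, the meta-regression can be sketched as a weighted least-squares fit in which each study is weighted by the inverse of its variance. All data below are hypothetical, and a full analysis would also test the slope for significance:

```python
def meta_regression(yi, vi, xi):
    """Fixed-effect meta-regression of effect sizes yi on one continuous
    moderator xi, weighting each study by 1 / variance (a minimal sketch)."""
    w = [1.0 / v for v in vi]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xi)) / sw   # weighted mean of moderator
    my = sum(wi * y for wi, y in zip(w, yi)) / sw   # weighted mean of effects
    sxy = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xi, yi))
    sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xi))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical effects, variances, and moderator values (e.g., mean age)
yi = [0.20, 0.35, 0.50, 0.65]
vi = [0.01, 0.01, 0.01, 0.01]
xi = [10.0, 20.0, 30.0, 40.0]
intercept, slope = meta_regression(yi, vi, xi)
```

A subgroup analysis is the categorical analogue: pool the effect sizes separately within each subgroup and compare the pooled estimates.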

Evaluating publication bias

Publication bias exists when published studies (those that can be easily retrieved) differ systematically from unpublished studies (gray literature). The potential impact of publication bias on a meta-analysis can be assessed through different methods:

  • funnel plot
  • Egger’s linear regression method
  • Begg and Mazumdar’s rank correlation method
  • Duval and Tweedie’s Trim and Fill method
  • Rosenthal’s Fail-safe N
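As a sketch of one of these methods, Egger's test regresses the standardized effect (effect size divided by its standard error) on precision (1 / standard error); an intercept far from zero suggests funnel-plot asymmetry. The simple-regression implementation below omits the accompanying t-test on the intercept, and the data are hypothetical:

```python
def egger_intercept(yi, sei):
    """Intercept of Egger's regression of (y / se) on (1 / se).

    A value far from zero suggests small-study / publication bias;
    the significance test on the intercept is omitted in this sketch."""
    z = [y / s for y, s in zip(yi, sei)]   # standardized effects
    prec = [1.0 / s for s in sei]          # precisions
    n = len(z)
    mz, mp = sum(z) / n, sum(prec) / n
    sxy = sum((p - mp) * (zz - mz) for p, zz in zip(prec, z))
    sxx = sum((p - mp) ** 2 for p in prec)
    slope = sxy / sxx
    return mz - slope * mp                 # regression intercept

# Hypothetical effect sizes and standard errors
b0 = egger_intercept([0.8, 0.55, 0.5, 0.4], [0.5, 0.25, 0.2, 0.1])
```

The other listed methods work differently (e.g., Trim and Fill imputes hypothetically missing studies; Fail-safe N counts how many null studies would be needed to overturn the result), so no single diagnostic should be relied on alone.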

Publishing a meta-analysis

In order to publish a high-quality meta-analysis, it is useful to refer to meta-analysis reporting standards. In particular, various guidelines are currently available in different research fields:

  • MARS (Meta-analysis reporting standards) included in the Publication Manual of the American Psychological Association, 6th ed. (2010)
  • PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses): it consists of a Statement, Explanation, Checklist, and Flow diagram
  • MOOSE (Meta-analysis of Observational Studies in Epidemiology; Stroup et al., JAMA 2000)

Characteristics

Systematic reviews can be used to inform decision making in many different disciplines, such as evidence-based healthcare and evidence-based policy and practice.[8]

A systematic review can be designed to provide an exhaustive summary of current literature relevant to a research question.

A systematic review uses a rigorous and transparent approach for research synthesis, with the aim of assessing and, where possible, minimizing bias in the findings. While many systematic reviews are based on an explicit quantitative meta-analysis of available data, there are also qualitative reviews and other types of mixed-methods reviews which adhere to standards for gathering, analyzing and reporting evidence.[9]

Systematic reviews of quantitative data or mixed-method reviews sometimes use statistical techniques (meta-analysis) to combine results of eligible studies. Scoring levels are sometimes used to rate the quality of the evidence depending on the methodology used, although this is discouraged by the Cochrane Library.[10] As evidence rating can be subjective, multiple people may be consulted to resolve any scoring differences between how evidence is rated.[11][12][13]

The EPPI-Centre, Cochrane and the Joanna Briggs Institute have all been influential in developing methods for combining both qualitative and quantitative research in systematic reviews.[14][15][16] Several reporting guidelines exist to standardise reporting about how systematic reviews are conducted. Such reporting guidelines are not quality assessment or appraisal tools. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement[17] suggests a standardized way to ensure a transparent and complete reporting of systematic reviews, and is now required for this kind of research by more than 170 medical journals worldwide.[8] Several specialized PRISMA guideline extensions have been developed to support particular types of studies or aspects of the review process, including PRISMA-P for review protocols and PRISMA-ScR for scoping reviews.[8] A list of PRISMA guideline extensions is hosted by the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network.[18]

For qualitative reviews, reporting guidelines include ENTREQ (Enhancing transparency in reporting the synthesis of qualitative research) for qualitative evidence syntheses; RAMESES (Realist And MEta-narrative Evidence Syntheses: Evolving Standards) for meta-narrative and realist reviews;[19][20] and eMERGe (Improving reporting of Meta-Ethnography) for meta-ethnography.[14]

Developments in systematic reviews during the 21st century included realist reviews and the meta-narrative approach, both of which addressed problems of variation in methods and heterogeneity existing on some subjects.[21][22]

There are over 30 types of systematic review; Table 1 below summarises some of these, but it is not exhaustive.[8][17] It is important to note that there is not always consensus on the boundaries and distinctions between the approaches described below.

Table 1: A summary of some of the types of systematic review.

  • Mapping review/systematic map: A mapping review maps existing literature and categorizes data. The method characterizes the quantity and quality of literature, including by study design and other features. Mapping reviews can be used to identify the need for primary or secondary research.[8]
  • Meta-analysis: A statistical analysis that combines the results of multiple quantitative studies. Using statistical methods, results are combined to provide evidence from multiple studies. The two types of data generally used for meta-analysis in health research are individual participant data and aggregate data (such as odds ratios or relative risks).
  • Mixed studies review/mixed methods review: Refers to any combination of methods where one significant stage is a literature review (often systematic). It can also refer to a combination of review approaches, such as combining quantitative with qualitative research.[8]
  • Qualitative systematic review/qualitative evidence synthesis: A method that integrates or compares findings from qualitative studies. It can include 'coding' the data and looking for 'themes' or 'constructs' across studies. Multiple authors may improve the 'validity' of the data by potentially reducing individual bias.[8]
  • Rapid review: An assessment of what is already known about a policy or practice issue, which uses systematic review methods to search for and critically appraise existing research. Rapid reviews are still systematic reviews, but parts of the process may be simplified or omitted in order to increase rapidity.[23] Rapid reviews were used during the COVID-19 pandemic.[24]
  • Systematic review: A systematic search for data, using a repeatable method. It includes appraising the data (for example, the quality of the data) and a synthesis of research data.
  • Systematic search and review: Combines methods from a 'critical review' with a comprehensive search process. This review type is usually used to address broad questions to produce the most appropriate evidence synthesis. It may or may not include quality assessment of data sources.[8]
  • Systematized review: Includes elements of the systematic review process, but searching is often not as comprehensive as in a systematic review and may not include quality assessment of data sources.

Scoping reviews

Scoping reviews are distinct from systematic reviews in several important ways. A scoping review is an attempt to search for concepts by mapping the language and data which surrounds those concepts and adjusting the search method iteratively to synthesize evidence and assess the scope of an area of inquiry.[21][22] This can mean that the concept search and method (including data extraction, organisation and analysis) are refined throughout the process, sometimes requiring deviations from any protocol or original research plan.[25][26] A scoping review may often be a preliminary stage before a systematic review, which 'scopes' out an area of inquiry and maps the language and key concepts to determine if a systematic review is possible or appropriate, or to lay the groundwork for a full systematic review. The goal can be to assess how much data or evidence is available regarding a certain area of interest.[25][27] This process is further complicated if it is mapping concepts across multiple languages or cultures.

As a scoping review should be systematically conducted and reported (with a transparent and repeatable method), some academic publishers categorize them as a kind of 'systematic review', which may cause confusion. Scoping reviews are helpful when it is not possible to carry out a systematic synthesis of research findings, for example, when there are no published clinical trials in the area of inquiry. Scoping reviews are helpful when determining if it is possible or appropriate to carry out a systematic review, and are a useful method when an area of inquiry is very broad,[28] for example, exploring how the public are involved in all stages of systematic reviews.[29]

There is still a lack of clarity when defining the exact method of a scoping review as it is both an iterative process and is still relatively new.[30] There have been several attempts to improve the standardisation of the method,[31][32][27][33] for example via a PRISMA guideline extension for scoping reviews (PRISMA-ScR).[34] PROSPERO (the International Prospective Register of Systematic Reviews) does not permit the submission of protocols of scoping reviews,[35] although some journals will publish protocols for scoping reviews.[29]

While there are multiple kinds of systematic review methods, the main stages of a review can be summarised into five stages:

Defining the research question

Defining an answerable question and agreeing an objective method is required to design a useful systematic review.[36] Best practice recommends publishing the protocol of the review before initiating it to reduce the risk of unplanned research duplication and to enable consistency between methodology and protocol.[37] Clinical reviews of quantitative data are often structured using the acronym PICO, which stands for 'Population or Problem', 'Intervention or Exposure', 'Comparison' and 'Outcome', with other variations existing for other kinds of research. For qualitative reviews PICo is 'Population or Problem', 'Interest' and 'Context'.

Searching for relevant data sources

Planning how the review will search for relevant data from research that matches certain criteria is a decisive stage in developing a rigorous systematic review. Relevant criteria can include only selecting research that is good quality and answers the defined question.[36] The search strategy should be designed to retrieve literature that matches the protocol's specified inclusion and exclusion criteria.

The methodology section of a systematic review should list all of the databases and citation indices that were searched. The titles and abstracts of identified articles can be checked against pre-determined criteria for eligibility and relevance. Each included study may be assigned an objective assessment of methodological quality, preferably by using methods conforming to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement,[18] or the high-quality standards of Cochrane.[38]

Common information sources used in searches include scholarly databases of peer-reviewed articles such as MEDLINE, Web of Science, Embase, and PubMed as well as sources of unpublished literature such as clinical trial registries and grey literature collections. Key references can also be yielded through additional methods such as citation searching, reference list checking (related to a search method called 'pearl growing'), manually searching information sources not indexed in the major electronic databases (sometimes called 'hand-searching'),[39] and directly contacting experts in the field.[40]

To be systematic, searchers must use a combination of search skills and tools, such as database subject headings, keyword searching, Boolean operators, and proximity searching, while attempting to balance sensitivity (systematicity) and precision (accuracy). Inviting and involving an experienced information professional or librarian can notably improve the quality of systematic review search strategies and reporting.[41][42][43][44][45]
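For example, a search strategy combining subject headings, keywords, and Boolean operators might look like the following PubMed-style query; the topic, terms, and field tags are purely illustrative:

```text
("exercise"[MeSH Terms] OR exercis*[Title/Abstract] OR "physical activity"[Title/Abstract])
AND ("depression"[MeSH Terms] OR depress*[Title/Abstract])
AND ("randomized controlled trial"[Publication Type])
```

Each line handles one PICO element (here, exposure and outcome), with OR collecting synonyms within an element and AND combining the elements.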

Extracting relevant data

[Figure: A visualisation of data being 'extracted' and 'combined' in a Cochrane intervention effect review where a meta-analysis is possible.[46]]

Relevant data are 'extracted' from the data sources according to the review method. It is important to note that the data extraction method is specific to the kind of data, and data extracted on 'outcomes' is only relevant to certain types of reviews. For example, a systematic review of clinical trials might extract data about how the research was done (often called the method or 'intervention'), who participated in the research (including how many people), how it was paid for (for example, funding sources) and what happened (the outcomes).[36]

Assess the eligibility of the data

This stage involves assessing the eligibility of data for inclusion in the review, by judging it against the criteria identified at the first stage.[36] This can include assessing whether a data source meets the eligibility criteria and recording why decisions about inclusion or exclusion in the review were made. Software can be used to support the selection process, including text mining tools and machine learning, which can automate aspects of the process.[47] The 'Systematic Review Toolbox' is a community-driven, web-based catalogue of tools to help reviewers choose appropriate tools for reviews.[48]
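As a toy illustration of automated screening, the sketch below filters hypothetical records against keyword and publication-year criteria; the records, terms, and field names are invented, and real tools apply far more sophisticated text mining and machine learning:

```python
# Hypothetical retrieved records (title/abstract screening stage)
records = [
    {"title": "Exercise and depression: a randomized trial", "year": 2015},
    {"title": "A case report on knee surgery", "year": 2019},
    {"title": "Physical activity and depressive symptoms", "year": 2008},
]

# Inclusion criteria from a hypothetical protocol
include_terms = ["exercise", "physical activity"]
min_year = 2010

def eligible(rec):
    """True if the title matches an inclusion term and the year criterion."""
    text = rec["title"].lower()
    return any(term in text for term in include_terms) and rec["year"] >= min_year

screened = [r for r in records if eligible(r)]  # records passing screening
```

In practice, every exclusion decision (and its reason) would also be recorded so the selection process can be reported in a flow diagram.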

Analyse and combine the data

Analysing and combining data can provide an overall result from all the data. Because this combined result uses qualitative or quantitative data from all eligible sources, it is considered more reliable: the more data included in a review, the more confident we can be of its conclusions. When appropriate, some systematic reviews include a meta-analysis, which uses statistical methods to combine data from multiple sources. A review might use quantitative data, or might employ a qualitative meta-synthesis, which synthesises data from qualitative studies. The combination of data from a meta-analysis can sometimes be visualised; one method uses a forest plot (also called a blobbogram).[36] In an intervention effect review, the diamond in the 'forest plot' represents the combined results of all the data included.[36]

An example of a 'forest plot' is the Cochrane Collaboration logo.[36] The logo is a forest plot of one of the first reviews which showed that corticosteroids given to women who are about to give birth prematurely can save the life of the newborn child.[49]

Recent visualisation innovations include the albatross plot, which plots p-values against sample sizes, with approximate effect-size contours superimposed to facilitate analysis.[50] The contours can be used to infer effect sizes from studies that have been analysed and reported in diverse ways. Such visualisations may have advantages over other types when reviewing complex interventions.

Assessing the quality (or certainty) of evidence is an important part of some reviews. GRADE (Grading of Recommendations, Assessment, Development and Evaluations) is a transparent framework for developing and presenting summaries of evidence and is used to grade the quality of evidence.[51] GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) is used to provide a transparent method for assessing the confidence of evidence from reviews of qualitative research.[52] Once these stages are complete, the review may be published, disseminated and translated into practice after being adopted as evidence.


Source: Korean J Anesthesiol. 2018 Apr; 71(2): 103–112. Published online 2018 Apr 2. doi:10.4097/kjae.2018.71.2.103. PMCID: PMC5903119. PMID: 29619782.