Meta-analysis steps

Defining the research area
The first step in conducting a meta-analysis is to define clearly the focus of the meta-analysis: what research question is addressed by the meta-analytic review? For instance, we may want to study the efficacy of interventions aimed at reducing hypertension, gender differences in depression, or the association between physical exercise and body weight. More generally, a meta-analysis can compare differences between two groups or examine associations between two variables.
Defining inclusion and exclusion criteria
Inclusion and exclusion criteria define which studies are eligible for inclusion in the meta-analysis. They typically refer to characteristics of the primary studies such as the design, the participants, the measures used, and the publication period and language.
Searching and selecting primary studies
To retrieve all the relevant literature it is necessary to use multiple search strategies. The main strategies include searching electronic databases, checking the reference lists of retrieved articles, hand-searching key journals, and contacting experts in the field.
After the search has been conducted, each retrieved reference must be checked against the inclusion criteria in order to identify the primary studies to be included in the meta-analysis.

Coding primary studies
Coding is the process by which primary studies are examined in order to extract the data needed to perform the meta-analysis. The coding protocol serves as a guide to the coding procedure.

Computing effect sizes
For each study, it is necessary to compute an effect size together with its variance, standard error, and confidence interval. The effect size is a measure of the magnitude of a relationship between two variables or of a difference between groups. The main types of effect sizes are based on differences between group means (e.g., the standardized mean difference), binary outcomes (e.g., odds ratios and risk ratios), and correlations between variables.
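As an illustration (not part of the original text), a standardized mean difference and its precision measures can be computed as in the following minimal Python sketch; the function name and the normal-approximation 95% confidence interval are illustrative assumptions:

```python
import math

def smd_effect_size(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference (Cohen's d) with its variance, SE, and 95% CI."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Large-sample approximation to the variance of d
    var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    se = math.sqrt(var)
    ci = (d - 1.96 * se, d + 1.96 * se)
    return d, var, se, ci
```

For example, two groups of 50 participants with means 10 and 8 and a common standard deviation of 4 yield d = 0.5.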
The variance, standard error, and confidence interval provide an estimate of the precision of an effect size.

Aggregating effect sizes
After an effect size has been computed for each study, an overall effect size can be computed by combining the study effect sizes under a fixed-effect model or a random-effects model.
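As a hedged sketch (not from the source) of how study effect sizes might be pooled, the following Python function implements the fixed-effect, inverse-variance approach; the function name is illustrative:

```python
import math

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted (fixed-effect) summary effect with SE and 95% CI."""
    weights = [1.0 / v for v in variances]
    # Weighted mean: more precise studies get more weight
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, se, (mean - 1.96 * se, mean + 1.96 * se)
```

A random-effects model would additionally add an estimate of the between-study variance (tau-squared) to each study's variance before weighting.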
Assessing heterogeneity
Heterogeneity across study effect sizes can be assessed through two statistics: the Q statistic, which tests whether the effect sizes vary more than expected by chance, and the I² index, which quantifies the proportion of that variability attributable to true heterogeneity.
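The two heterogeneity statistics can be sketched in Python as follows (an illustrative implementation, not taken from the source):

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I-squared index for a set of study effect sizes."""
    weights = [1.0 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted sum of squared deviations from the pooled effect
    q = sum(w * (e - mean) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: percentage of variability beyond what chance (df) would predict
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2
```

Under the null hypothesis of homogeneity, Q follows a chi-squared distribution with k − 1 degrees of freedom, where k is the number of studies.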
Testing moderators
Moderators (or predictors) are factors assumed to affect the magnitude of the effect sizes across the studies in which they are present. If the moderator is categorical, its effect is tested by a subgroup analysis; if it is continuous, by a meta-regression.

Evaluating publication bias
Publication bias exists when published studies (those that can be easily retrieved) differ systematically from unpublished studies (gray literature). The potential impact of publication bias on a meta-analysis can be assessed through several methods, including funnel plots, Egger's regression test, the trim-and-fill method, and the fail-safe N.
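One common asymmetry check is Egger's regression test. As a minimal sketch (illustrative, not from the source), the point estimate of its intercept can be computed by ordinary least squares; a full test would also compute a t-statistic for the intercept:

```python
def egger_intercept(effects, ses):
    """Intercept of Egger's regression of (effect / SE) on (1 / SE).

    An intercept far from zero suggests funnel-plot asymmetry,
    one possible sign of publication bias."""
    x = [1.0 / s for s in ses]                  # precision
    y = [e / s for e, s in zip(effects, ses)]   # standardized effect
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx
```

When the effect sizes are identical across studies regardless of their precision (a perfectly symmetric situation), the intercept is zero.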
Publishing a meta-analysis
To publish a high-quality meta-analysis it is useful to follow meta-analysis reporting standards. Various guidelines are currently available across research fields, such as the PRISMA statement in the health sciences and the APA Meta-Analysis Reporting Standards (MARS) in psychology.
Characteristics
Systematic reviews can be used to inform decision making in many different disciplines, such as evidence-based healthcare and evidence-based policy and practice.[8] A systematic review can be designed to provide an exhaustive summary of current literature relevant to a research question. A systematic review uses a rigorous and transparent approach for research synthesis, with the aim of assessing and, where possible, minimizing bias in the findings. While many systematic reviews are based on an explicit quantitative meta-analysis of available data, there are also qualitative reviews and other types of mixed-methods reviews which adhere to standards for gathering, analyzing and reporting evidence.[9] Systematic reviews of quantitative data or mixed-method reviews sometimes use statistical techniques (meta-analysis) to combine results of eligible studies. Scoring levels are sometimes used to rate the quality of the evidence depending on the methodology used, although this is discouraged by the Cochrane Library.[10] As evidence rating can be subjective, multiple people may be consulted to resolve scoring differences.[11][12][13] The EPPI-Centre, Cochrane and the Joanna Briggs Institute have all been influential in developing methods for combining both qualitative and quantitative research in systematic reviews.[14][15][16] Several reporting guidelines exist to standardise reporting about how systematic reviews are conducted. Such reporting guidelines are not quality assessment or appraisal tools.
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement[17] suggests a standardized way to ensure transparent and complete reporting of systematic reviews, and is now required for this kind of research by more than 170 medical journals worldwide.[8] Several specialized PRISMA guideline extensions have been developed to support particular types of studies or aspects of the review process, including PRISMA-P for review protocols and PRISMA-ScR for scoping reviews.[8] A list of PRISMA guideline extensions is hosted by the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network.[18] For qualitative reviews, reporting guidelines include ENTREQ (Enhancing transparency in reporting the synthesis of qualitative research) for qualitative evidence syntheses; RAMESES (Realist And MEta-narrative Evidence Syntheses: Evolving Standards) for meta-narrative and realist reviews;[19][20] and eMERGe (Improving reporting of Meta-Ethnography) for meta-ethnography.[14] Developments in systematic reviews during the 21st century included realist reviews and the meta-narrative approach, both of which addressed problems of variation in methods and heterogeneity across some subject areas.[21][22] There are over 30 types of systematic review; Table 1 below summarises some of these, but it is not exhaustive.[8][17] There is not always consensus on the boundaries and distinctions between the approaches described below. Table 1: A summary of some of the types of systematic review.
Scoping reviews
Scoping reviews are distinct from systematic reviews in several important ways. A scoping review is an attempt to search for concepts by mapping the language and data which surrounds those concepts and adjusting the search method iteratively to synthesize evidence and assess the scope of an area of inquiry.[21][22] This can mean that the concept search and method (including data extraction, organisation and analysis) are refined throughout the process, sometimes requiring deviations from any protocol or original research plan.[25][26] A scoping review may often be a preliminary stage before a systematic review, which 'scopes' out an area of inquiry and maps the language and key concepts to determine if a systematic review is possible or appropriate, or to lay the groundwork for a full systematic review. The goal can be to assess how much data or evidence is available regarding a certain area of interest.[25][27] This process is further complicated if it is mapping concepts across multiple languages or cultures. As a scoping review should be systematically conducted and reported (with a transparent and repeatable method), some academic publishers categorize them as a kind of 'systematic review', which may cause confusion. Scoping reviews are helpful when it is not possible to carry out a systematic synthesis of research findings, for example, when there are no published clinical trials in the area of inquiry.
Scoping reviews are helpful when determining if it is possible or appropriate to carry out a systematic review, and are a useful method when an area of inquiry is very broad,[28] for example, exploring how the public are involved in all stages of systematic reviews.[29] There is still a lack of clarity when defining the exact method of a scoping review as it is both an iterative process and still relatively new.[30] There have been several attempts to improve the standardisation of the method,[31][32][27][33] for example via a PRISMA guideline extension for scoping reviews (PRISMA-ScR).[34] PROSPERO (the International Prospective Register of Systematic Reviews) does not permit the submission of protocols of scoping reviews,[35] although some journals will publish protocols for scoping reviews.[29] While there are multiple kinds of systematic review methods, the main stages of a review can be summarised into five stages:

Defining the research question
Defining an answerable question and agreeing on an objective method is required to design a useful systematic review.[36] Best practice recommends publishing the protocol of the review before initiating it, to reduce the risk of unplanned research duplication and to enable consistency between methodology and protocol.[37] Clinical reviews of quantitative data are often structured using the acronym PICO, which stands for 'Population or Problem', 'Intervention or Exposure', 'Comparison' and 'Outcome', with other variations existing for other kinds of research. For qualitative reviews, PICo is 'Population or Problem', 'Interest' and 'Context'.

Searching for relevant data sources
Planning how the review will search for relevant data from research that matches certain criteria is a decisive stage in developing a rigorous systematic review.
Relevant criteria can include only selecting research that is good quality and answers the defined question.[36] The search strategy should be designed to retrieve literature that matches the protocol's specified inclusion and exclusion criteria. The methodology section of a systematic review should list all of the databases and citation indices that were searched. The titles and abstracts of identified articles can be checked against pre-determined criteria for eligibility and relevance. Each included study may be assigned an objective assessment of methodological quality, preferably by using methods conforming to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement,[18] or the high-quality standards of Cochrane.[38] Common information sources used in searches include scholarly databases of peer-reviewed articles such as MEDLINE, Web of Science, Embase, and PubMed, as well as sources of unpublished literature such as clinical trial registries and grey literature collections. Key references can also be identified through additional methods such as citation searching, reference list checking (related to a search method called 'pearl growing'), manually searching information sources not indexed in the major electronic databases (sometimes called 'hand-searching'),[39] and directly contacting experts in the field.[40] To be systematic, searchers must use a combination of search skills and tools, such as database subject headings, keyword searching, Boolean operators, and proximity searching, while balancing sensitivity (systematicity) against precision (accuracy).
Inviting and involving an experienced information professional or librarian can notably improve the quality of systematic review search strategies and reporting.[41][42][43][44][45]

Extracting relevant data
[Figure: data being 'extracted' and 'combined' in a Cochrane intervention effect review where a meta-analysis is possible.[46]]
Relevant data are 'extracted' from the data sources according to the review method. The data extraction method is specific to the kind of data, and data extracted on 'outcomes' are only relevant to certain types of reviews. For example, a systematic review of clinical trials might extract data about how the research was done (often called the method or 'intervention'), who participated in the research (including how many people), how it was paid for (for example, funding sources) and what happened (the outcomes).[36]

Assess the eligibility of the data
This stage involves assessing the eligibility of data for inclusion in the review, by judging it against the criteria identified at the first stage.[36] This can include assessing whether a data source meets the eligibility criteria and recording why decisions about inclusion or exclusion were made. Software can be used to support the selection process, including text mining tools and machine learning, which can automate aspects of the process.[47] The 'Systematic Review Toolbox' is a community-driven, web-based catalogue of tools to help reviewers choose appropriate tools for reviews.[48]

Analyse and combine the data
Analysing and combining data can provide an overall result from all the data.
Because this combined result draws on qualitative or quantitative data from all eligible sources, it is considered more reliable: the more data included in a review, the more confident we can be in its conclusions. When appropriate, some systematic reviews include a meta-analysis, which uses statistical methods to combine data from multiple sources. A review might use quantitative data, or might employ a qualitative meta-synthesis, which synthesises data from qualitative studies. The combination of data from a meta-analysis can sometimes be visualised. One method uses a forest plot (also called a blobbogram).[36] In an intervention effect review, the diamond in the 'forest plot' represents the combined results of all the data included.[36] An example of a 'forest plot' is the Cochrane Collaboration logo.[36] The logo is a forest plot of one of the first reviews which showed that corticosteroids given to women who are about to give birth prematurely can save the life of the newborn child.[49] Recent visualisation innovations include the albatross plot, which plots p-values against sample sizes, with approximate effect-size contours superimposed to facilitate analysis.[50] The contours can be used to infer effect sizes from studies that have been analysed and reported in diverse ways. Such visualisations may have advantages over other types when reviewing complex interventions. Assessing the quality (or certainty) of evidence is an important part of some reviews.
GRADE (Grading of Recommendations, Assessment, Development and Evaluations) is a transparent framework for developing and presenting summaries of evidence and is used to grade the quality of evidence.[51] GRADE-CERQual (Confidence in the Evidence from Reviews of Qualitative research) provides a transparent method for assessing the confidence in evidence from reviews of qualitative research.[52] Once these stages are complete, the review may be published, disseminated and translated into practice after being adopted as evidence.
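As an aside on the forest plot mentioned earlier: purely as an illustrative sketch (not from the source), a minimal text-mode rendering of study confidence intervals can be produced in Python; the axis range, symbols, and function name are arbitrary choices:

```python
def text_forest_plot(labels, effects, ses, width=40, lo=-1.0, hi=2.0):
    """Render each study's 95% CI as dashes and its point estimate as 'o'."""
    def col(x):
        # Map an effect-size value onto a character column, clamped to the axis
        return min(width - 1, max(0, round((x - lo) / (hi - lo) * (width - 1))))
    rows = []
    for label, e, s in zip(labels, effects, ses):
        row = [" "] * width
        for c in range(col(e - 1.96 * s), col(e + 1.96 * s) + 1):
            row[c] = "-"
        row[col(e)] = "o"
        rows.append(f"{label:<10}|{''.join(row)}|")
    return "\n".join(rows)
```

A real forest plot would also show study weights and the summary diamond; plotting libraries such as matplotlib are normally used instead of text output.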
Korean J Anesthesiol. 2018 Apr; 71(2): 103–112. Published online 2018 Apr 2. doi:10.4097/kjae.2018.71.2.103. PMCID: PMC5903119. PMID: 29619782.