Systematic reviews vs meta-analysis: what’s the difference?
Posted on 24th July 2023 by Verónica Tanco Tellechea
You may hear the terms ‘systematic review’ and ‘meta-analysis’ used interchangeably. Although they are related, they are distinctly different. Learn more in this blog for beginners.
What is a systematic review?
According to Cochrane (1), a systematic review attempts to identify, appraise and synthesize all the empirical evidence to answer a specific research question. Thus, a systematic review is where you might find the most relevant, adequate, and current information regarding a specific topic. In the levels of evidence pyramid, systematic reviews are only surpassed by meta-analyses.
To conduct a systematic review, you will need, among other things:
- A specific research question, usually in the form of a PICO question.
- Pre-specified eligibility criteria, to decide which articles will be included or discarded from the review.
- To follow a systematic method that will minimize bias.
You can find protocols to guide you from both Cochrane and the Equator Network, among other places, and if you are new to the topic, have a read of an overview about systematic reviews.
What is a meta-analysis?
A meta-analysis is a quantitative, epidemiological study design used to systematically assess the results of previous research (2). Meta-analyses are usually based on randomized controlled trials, though not always. In essence, a meta-analysis is a statistical tool that allows researchers to combine outcomes from multiple studies mathematically.
When can a meta-analysis be implemented?
A meta-analysis can, in principle, always be conducted; however, it yields the most reliable results when the studies included in the systematic review are of good quality, have similar designs, and use similar outcome measures.
Why are meta-analyses important?
Outcomes from a meta-analysis may provide more precise information regarding the estimate of the effect of what is being studied, because it merges outcomes from multiple studies. In a meta-analysis, data from various trials are combined to generate an average result (1), which is portrayed in a forest plot diagram. Moreover, meta-analyses often include a funnel plot diagram to visually detect publication bias.
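The "average result" described above is typically a weighted average, with more precise studies weighted more heavily. A minimal sketch of fixed-effect (inverse-variance) pooling, using hypothetical effect estimates and standard errors rather than data from any real trial:

```python
import math

# Hypothetical data: per-study effect estimates (e.g. mean differences)
# and their standard errors, as might be extracted from three trials
# included in a systematic review.
effects = [0.30, 0.45, 0.25]
std_errs = [0.10, 0.15, 0.08]

# Fixed-effect (inverse-variance) pooling: each study is weighted by
# 1/SE^2, so larger, more precise studies contribute more.
weights = [1 / se**2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"Pooled effect: {pooled:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note that the pooled standard error is smaller than that of any single study, which is exactly the gain in precision the text describes.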
Conclusions
A systematic review is an article that synthesizes available evidence on a certain topic utilizing a specific research question, pre-specified eligibility criteria for including articles, and a systematic method for its production. A meta-analysis, by contrast, is a quantitative, epidemiological study design used to assess the results of the articles included in a systematic review.
Remember: All meta-analyses involve a systematic review, but not all systematic reviews involve a meta-analysis.
If you would like some further reading on this topic, we suggest the following:
The systematic review – a S4BE blog article
Meta-analysis: what, why, and how – a S4BE blog article
The difference between a systematic review and a meta-analysis – a blog article via Covidence
Systematic review vs meta-analysis: what’s the difference? – a 5-minute video from Research Masterminds
- About Cochrane reviews [Internet]. Cochranelibrary.com. [cited 2023 Apr 30]. Available from: https://www.cochranelibrary.com/about/about-cochrane-reviews
- Haidich AB. Meta-analysis in medical research. Hippokratia. 2010;14(Suppl 1):29–37.
Meta-Analysis – Definition, Purpose And How To Conduct It
Published by Owen Ingram at April 26th, 2023 , Revised On September 23, 2024
The number of studies published in the biomedical and clinical literature is increasing day by day. This abundance of research makes it difficult to synthesise all the data and accumulate knowledge across studies, which delays clinical decisions and conclusions.
To determine the validity of a hypothesis, it is necessary to look at multiple studies rather than just one. For this purpose, systematic reviews or narrative reviews have been used to synthesise data from multiple studies, but these can lead to a subjective approach, as different people can have different opinions. Meta-analysis, on the other hand, provides an objective and quantitative way of combining evidence from various studies.
What Is Meta-Analysis?
Meta-analysis is a statistical method for combining results from numerous studies on a certain research question. The term was first used in 1976, and the method can be used to determine whether an effect reported in the literature is real.
To conduct a quality meta-analysis, you need to identify an area in which the effect of a treatment is uncertain. It is also recommended that you collect as many studies addressing that effect as possible, so that you can compare them and get a better picture of it. This helps the researcher understand how big or small the effect is, and how much the results differ between studies.
Purpose Of Meta-Analysis In Research
The purpose of meta-analysis is more than just combining results from studies to give a statistical assessment. It also helps to point out:
- Any potential reasons for variations and differences in results, also known as heterogeneity in meta-analysis. Some popular reasons for this might be differences in sample size, or differences in analysis methods in research.
- A more precise estimate of the effect size than any individual study can provide. Combining multiple studies reduces research bias and the chance of random error.
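Heterogeneity, the first point above, is commonly quantified with Cochran's Q statistic and the derived I² percentage. A minimal sketch using hypothetical per-study effect sizes and standard errors (not from any real dataset):

```python
# Hypothetical per-study effect sizes and standard errors.
effects = [0.20, 0.55, 0.10, 0.40]
std_errs = [0.12, 0.10, 0.15, 0.11]

# Inverse-variance weights and the fixed-effect pooled estimate.
weights = [1 / se**2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted sum of squared deviations from the pooled effect.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: the percentage of total variation across studies that is due to
# heterogeneity rather than chance (bounded below at 0).
i_squared = max(0.0, (q - df) / q) * 100
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.1f}%")
```

A high I² (conventionally above roughly 50%) suggests the differences between study results go beyond what sampling error alone would produce.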
Meta-Analysis In Applied And Basic Research
Basic research seeks knowledge and gathers data on a subject, whereas applied research is more experimental and uses methods to solve real-life problems. Meta-analyses are used in both applied and basic research.
- Pharmaceutical companies use meta-analyses to gain approval for new drugs, such as antibiotics for bacterial infections. Even regulatory authorities use this research method to gain approval for different processes. Hence, meta-analysis is used in medicine, crime, education and psychology for applied research.
- In terms of basic research, it is used in various fields such as sociology, finance, economics, marketing and social psychology. An example of meta-analysis in basic research is studying the effect of caffeine on cognitive performance.
Strengths and Limitations Of Meta-Analysis
There are many key benefits of meta-analysis in research studies, as it is a powerful tool. Here are some strengths of it:
- A meta-analysis builds on a systematic review, so the end product is typically reliable and accurate.
- It has great statistical power, as it combines multiple studies rather than one individual study, which might suffer from a lack of sufficient data.
- This analysis can confirm or refute existing research; either way, it provides a confirmatory data analysis.
- It is regarded highly in the scientific community, as it provides an objective and solid analysis of evidence.
Challenges Associated With Meta-Analysis
Meta-analysis also has certain challenges that result in limitations while carrying out this statistical quantitative approach. Some challenges faced by meta-analyses are:
- It is not always possible to predict the outcome of a large-scale study, because meta-analyses mostly rely on small-scale studies, which do not represent the broad population.
- A decent meta-analysis cannot make up for flawed or bad research designs, and thus cannot control for bias arising in the included studies. It is therefore advised to include only research with sound methodologies, an approach known as “best evidence synthesis”.
How To Conduct A Meta-Analysis
Before conducting a meta-analysis and defining the research scope, it is necessary to appreciate how much the number of publications has grown over the years. Scanning and skimming a large body of studies and literature reviews is hard, which is why the research question must be defined with care, including only relevant aspects. Here are the steps to perform a meta-analysis:
- Formulate a research question that specifies the effects or interventions to be studied. This is usually a binary question, such as “Does drug X improve outcome Y?” in clinical studies.
- Conduct a systematic review that analyses and synthesises all data related to the one research question.
- Gather all data, such as sample sizes and the research methods used, to indicate data variability. Also decide which dependent variables to include.
- The selection of criteria is also a crucial step as it is necessary to understand whether published or unpublished studies are to be included or not. Based on the research question, it is important to choose studies that are quality-based and relevant.
- Choosing the right meta-analytic methods and software is another significant step. Common methods include traditional univariate meta-analysis, meta-regression, and meta-analytic structural equation modelling.
- While evaluating the data, use a forest plot, the graphical representation of the results of the meta-analysed studies. This visual representation helps in understanding the heterogeneity among studies and in comparing the overall effect sizes of an intervention.
- The final step of literature meta-analysis is to report the results. They should be comprehensive and precise for the reader’s understanding.
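The analysis step in the list above can be sketched end to end. Below is a minimal illustration of one common method choice, the DerSimonian–Laird random-effects model, using hypothetical effect sizes and standard errors rather than data from any published study:

```python
import math

# Hypothetical study data: effect sizes and standard errors, as gathered
# in the data-collection step described above.
effects = [0.20, 0.55, 0.10, 0.40]
std_errs = [0.12, 0.10, 0.15, 0.11]

# Fixed-effect weights and pooled estimate (needed to compute Q).
w = [1 / se**2 for se in std_errs]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
df = len(effects) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights fold tau^2 into each study's variance, so the
# pooled estimate reflects heterogeneity between studies.
w_re = [1 / (se**2 + tau2) for se in std_errs]
pooled_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
print(f"tau^2 = {tau2:.4f}, pooled = {pooled_re:.3f} +/- {1.96 * se_re:.3f}")
```

In practice you would use dedicated software (the step above mentions meta-analysis software); this sketch only shows the arithmetic behind one standard estimator.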
Meta-Analysis Vs Systematic Review
A systematic review is a comprehensive analysis of existing research on a defined question, whereas a meta-analysis is a statistical combination of the results from two or more separate studies.
Frequently Asked Questions
Where does meta-analysis fit in the research process?
It plays a key role in planning new studies and in answering research questions. It is also widely sought for publications, and it is used in grant applications to justify the need for a new study.
Which fields use meta-analysis?
Common fields where meta-analyses are used are medicine, psychology, sociology, education, and health. It may also be used in finance, marketing and economics.
Is meta-analysis qualitative or quantitative?
Meta-analysis is a quantitative method that uses statistical methods to synthesise and collect data from various studies to estimate the size of the effect of a particular intervention or treatment.
Systematic Review VS Meta-Analysis
How you organize your research is incredibly important; whether you’re preparing a report, research review, thesis or an article to be published. What methodology you choose can make or break your work getting out into the world, so let’s take a look at two main types: systematic review and meta-analysis.
Let’s start with what they have in common – essentially, they are both based on high-quality filtered evidence related to a specific research topic. They’re both highly regarded as generally resulting in reliable findings, though there are differences, which we’ll discuss below. Additionally, they both support conclusions based on expert reviews, case-controlled studies, data analysis, etc., versus mere opinions and musings.
What is a Systematic Review?
A systematic review is a form of research that collects, appraises and synthesizes evidence to answer a particular question in a transparent and systematic way. The data (or evidence) used in systematic reviews originate in scholarly literature, published or unpublished, so the findings are typically very reliable. In addition, they are normally collated and appraised by an independent panel of experts in the field. Unlike traditional reviews, systematic reviews are very comprehensive and do not rely on a single author’s point of view, thus avoiding bias.
Systematic reviews are especially important in the medical field, where health practitioners need to be constantly up-to-date with new, high-quality information to guide their daily decisions. Since systematic reviews, by definition, collect information from previous research, the pitfalls of new primary studies are avoided. They often, in fact, identify a lack of evidence or knowledge limitations, and consequently recommend further study if needed.
Why are systematic reviews important?
- They combine and synthesize various studies and their findings.
- Systematic reviews appraise the validity of the results and findings of the collected studies in an impartial way.
- They define clear objectives and reproducible methodologies.
What is a Meta-analysis?
This form of research relies on combining statistical results from two or more existing studies. When multiple studies are addressing the same problem or question, it’s to be expected that there will be some potential for error. Most studies account for this within their results. A meta-analysis can help iron out any inconsistencies in data, as long as the studies are similar.
For instance, suppose your research is about the influence of the Mediterranean diet on diabetic people between the ages of 30 and 45, but you only find a study about the Mediterranean diet in healthy people and another about the Mediterranean diet in diabetic teenagers. In this case, undertaking a meta-analysis would probably be a poor choice: you could compare such different material, at the risk of findings that don’t really answer the review question, or you could explore a different (perhaps more qualitative) research method.
Why is meta-analysis important?
- They improve the precision of the evidence, since many individual studies are too small to provide convincing data.
- Meta-analyses can settle divergences between conflicting studies. By formally assessing the conflicting study results, it is possible to eventually reach new hypotheses and explore the reasons for controversy.
- They can also answer questions with a broader scope than individual studies, for example the effect of a disease on several populations across the world, by combining modest studies completed in specific countries or continents.
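The first point above, that pooling many small studies improves precision, follows directly from the arithmetic of inverse-variance weighting. A tiny sketch, assuming hypothetical studies that all happen to share the same standard error:

```python
import math

# Hypothetical: several small studies, each with the same standard error.
# Individually none is convincing; pooled, precision improves markedly.
se_single = 0.25
for n_studies in (1, 4, 10):
    # With equal inverse-variance weights, pooled SE = SE / sqrt(k),
    # so the confidence interval narrows as studies are added.
    pooled_se = se_single / math.sqrt(n_studies)
    print(f"{n_studies:2d} studies -> pooled SE = {pooled_se:.3f}")
```

Ten equally precise studies shrink the standard error by a factor of sqrt(10), roughly 3.2, which is why a meta-analysis can be convincing where each contributing study is not.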
Undertaking research approaches like systematic reviews and/or meta-analyses involves great responsibility: they provide reliable information that has a real impact on society. Elsevier offers a number of services that aim to help researchers achieve excellence in written text, suggesting the necessary amendments to fit a targeted format. A well-written text, whether translated or edited from a manuscript, is key to being respected within the scientific community, and may lead to important positions such as being part of an expert panel leading a systematic review or a widely acknowledged meta-analysis.
Meta-Analysis
Meta-analysis is a set of statistical techniques for synthesizing data across studies. It is a statistical method for combining the findings of quantitative studies: it evaluates, synthesizes, and summarizes results, and it may be conducted independently or as a specialized subset of a systematic review. A systematic review attempts to collate empirical evidence that fits predefined eligibility criteria to answer a specific research question. Meta-analysis is a quantitative, formal, epidemiological study design used to systematically assess the results of previous research to derive conclusions about that body of research (Haidich, 2010). Rigorously conducted meta-analyses are useful tools in evidence-based medicine. Outcomes from a meta-analysis may include a more precise estimate of the effect of a treatment or risk factor for disease or other outcomes. Not all systematic reviews include meta-analysis, but all meta-analyses are found in systematic reviews (Haidich, 2010).
A meta-analysis is appropriate when a group of studies reports quantitative results rather than qualitative findings or theory, when they examine the same or similar constructs or relationships, and when they are derived from similar research designs and report simple relationships between two variables rather than relationships adjusted for the effects of additional variables (Siddaway et al., 2019).
Meta-Synthesis
A meta synthesis is the systematic review and integration of findings from qualitative studies (Lachal et al., 2017). Reviews of qualitative information can be conducted and reported using the same replicable, rigorous, and transparent methodology and presentation. A meta-synthesis can be used when a review aims to integrate qualitative research. A meta-synthesis attempts to synthesize qualitative studies on a topic to identify key themes, concepts, or theories that provide novel or more powerful explanations for the phenomenon under review (Siddaway et al., 2019).
Haidich, A. B. (2010). Meta-analysis in medical research. Hippokratia, 14(Suppl 1), 29–37.
Lachal, J., Revah-Levy, A., Orri, M., & Moro, M. R. (2017). Metasynthesis: An original method to synthesize qualitative literature in psychiatry. Frontiers in Psychiatry, 8, 269.
Siddaway, A. P., Wood, A. M., & Hedges, L. V. (2019). How to do a systematic review: A best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annual Review of Psychology, 70, 747–770.
- Review Article
- Published: 08 March 2018
Meta-analysis and the science of research synthesis
Jessica Gurevitch, Julia Koricheva, Shinichi Nakagawa & Gavin Stewart
Nature, volume 555, pages 175–182 (2018)
Meta-analysis is the quantitative, scientific synthesis of research results. Since the term and modern approaches to research synthesis were first introduced in the 1970s, meta-analysis has had a revolutionary effect in many scientific fields, helping to establish evidence-based practice and to resolve seemingly contradictory research outcomes. At the same time, its implementation has engendered criticism and controversy, in some cases general and others specific to particular disciplines. Here we take the opportunity provided by the recent fortieth anniversary of meta-analysis to reflect on the accomplishments, limitations, recent advances and directions for future developments in the field of research synthesis.
Lau, J ., Ioannidis, J. P. A ., Terrin, N ., Schmid, C. H . & Olkin, I. The case of the misleading funnel plot. Br. Med. J. 333 , 597–600 (2006)
Vetter, D ., Rucker, G. & Storch, I. Meta-analysis: a need for well-defined usage in ecology and conservation biology. Ecosphere 4 , 1–24 (2013)
Mengersen, K ., Jennions, M. D. & Schmid, C. H. in The Handbook of Meta-analysis in Ecology and Evolution (eds Koricheva, J. et al.) Ch. 16 , 255–283 (Princeton Univ. Press, 2013)
Patsopoulos, N. A ., Analatos, A. A. & Ioannidis, J. P. A. Relative citation impact of various study designs in the health sciences. J. Am. Med. Assoc. 293 , 2362–2366 (2005)
Kueffer, C . et al. Fame, glory and neglect in meta-analyses. Trends Ecol. Evol. 26 , 493–494 (2011)
Cohnstaedt, L. W. & Poland, J. Review Articles: The black-market of scientific currency. Ann. Entomol. Soc. Am. 110 , 90 (2017)
Longo, D. L. & Drazen, J. M. Data sharing. N. Engl. J. Med. 374 , 276–277 (2016)
Gauch, H. G. Scientific Method in Practice (Cambridge Univ. Press, 2003)
Science Staff. Dealing with data: introduction. Challenges and opportunities. Science 331 , 692–693 (2011)
Nosek, B. A . et al. Promoting an open research culture. Science 348 , 1422–1425 (2015)
Article CAS ADS PubMed PubMed Central Google Scholar
Stewart, L. A . et al. Preferred reporting items for a systematic review and meta-analysis of individual participant data: the PRISMA-IPD statement. J. Am. Med. Assoc. 313 , 1657–1665 (2015)
Saldanha, I. J . et al. Evaluating Data Abstraction Assistant, a novel software application for data abstraction during systematic reviews: protocol for a randomized controlled trial. Syst. Rev. 5 , 196 (2016)
Tipton, E. & Pustejovsky, J. E. Small-sample adjustments for tests of moderators and model fit using robust variance estimation in meta-regression. J. Educ. Behav. Stat. 40 , 604–634 (2015)
Mengersen, K ., MacNeil, M. A . & Caley, M. J. The potential for meta-analysis to support decision analysis in ecology. Res. Synth. Methods 6 , 111–121 (2015)
Ashby, D. Bayesian statistics in medicine: a 25 year review. Stat. Med. 25 , 3589–3631 (2006)
Article MathSciNet PubMed Google Scholar
Senior, A. M . et al. Heterogeneity in ecological and evolutionary meta-analyses: its magnitude and implications. Ecology 97 , 3293–3299 (2016)
McAuley, L ., Pham, B ., Tugwell, P . & Moher, D. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? Lancet 356 , 1228–1231 (2000)
Koricheva, J ., Gurevitch, J . & Mengersen, K. (eds) The Handbook of Meta-Analysis in Ecology and Evolution (Princeton Univ. Press, 2013) This book provides the first comprehensive guide to undertaking meta-analyses in ecology and evolution and is also relevant to other fields where heterogeneity is expected, incorporating explicit consideration of the different approaches used in different domains.
Lumley, T. Network meta-analysis for indirect treatment comparisons. Stat. Med. 21 , 2313–2324 (2002)
Zarin, W . et al. Characteristics and knowledge synthesis approach for 456 network meta-analyses: a scoping review. BMC Med. 15 , 3 (2017)
Elliott, J. H . et al. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Med. 11 , e1001603 (2014)
Vandvik, P. O ., Brignardello-Petersen, R . & Guyatt, G. H. Living cumulative network meta-analysis to reduce waste in research: a paradigmatic shift for systematic reviews? BMC Med. 14 , 59 (2016)
Jarvinen, A. A meta-analytic study of the effects of female age on laying date and clutch size in the Great Tit Parus major and the Pied Flycatcher Ficedula hypoleuca . Ibis 133 , 62–67 (1991)
Arnqvist, G. & Wooster, D. Meta-analysis: synthesizing research findings in ecology and evolution. Trends Ecol. Evol. 10 , 236–240 (1995)
Hedges, L. V ., Gurevitch, J . & Curtis, P. S. The meta-analysis of response ratios in experimental ecology. Ecology 80 , 1150–1156 (1999)
Gurevitch, J ., Curtis, P. S. & Jones, M. H. Meta-analysis in ecology. Adv. Ecol. Res 32 , 199–247 (2001)
Lajeunesse, M. J. phyloMeta: a program for phylogenetic comparative analyses with meta-analysis. Bioinformatics 27 , 2603–2604 (2011)
CAS PubMed Google Scholar
Pearson, K. Report on certain enteric fever inoculation statistics. Br. Med. J. 2 , 1243–1246 (1904)
Fisher, R. A. Statistical Methods for Research Workers (Oliver and Boyd, 1925)
Yates, F. & Cochran, W. G. The analysis of groups of experiments. J. Agric. Sci. 28 , 556–580 (1938)
Cochran, W. G. The combination of estimates from different experiments. Biometrics 10 , 101–129 (1954)
Smith, M. L . & Glass, G. V. Meta-analysis of psychotherapy outcome studies. Am. Psychol. 32 , 752–760 (1977)
Glass, G. V. Meta-analysis at middle age: a personal history. Res. Synth. Methods 6 , 221–231 (2015)
Cooper, H. M ., Hedges, L. V . & Valentine, J. C. (eds) The Handbook of Research Synthesis and Meta-analysis 2nd edn (Russell Sage Foundation, 2009). This book is an important compilation that builds on the ground-breaking first edition to set the standard for best practice in meta-analysis, primarily in the social sciences but with applications to medicine and other fields.
Rosenthal, R. Meta-analytic Procedures for Social Research (Sage, 1991)
Hunter, J. E ., Schmidt, F. L. & Jackson, G. B. Meta-analysis: Cumulating Research Findings Across Studies (Sage, 1982)
Gurevitch, J ., Morrow, L. L ., Wallace, A . & Walsh, J. S. A meta-analysis of competition in field experiments. Am. Nat. 140 , 539–572 (1992). This influential early ecological meta-analysis reports multiple experimental outcomes on a longstanding and controversial topic that introduced a wide range of ecologists to research synthesis methods.
O’Rourke, K. An historical perspective on meta-analysis: dealing quantitatively with varying study results. J. R. Soc. Med. 100 , 579–582 (2007)
Shadish, W. R . & Lecy, J. D. The meta-analytic big bang. Res. Synth. Methods 6 , 246–264 (2015)
Glass, G. V. Primary, secondary, and meta-analysis of research. Educ. Res. 5 , 3–8 (1976)
DerSimonian, R . & Laird, N. Meta-analysis in clinical trials. Control. Clin. Trials 7 , 177–188 (1986)
Lipsey, M. W . & Wilson, D. B. The efficacy of psychological, educational, and behavioral treatment. Confirmation from meta-analysis. Am. Psychol. 48 , 1181–1209 (1993)
Chalmers, I. & Altman, D. G. Systematic Reviews (BMJ Publishing Group, 1995)
Moher, D . et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of reporting of meta-analyses. Lancet 354 , 1896–1900 (1999)
Higgins, J. P. & Thompson, S. G. Quantifying heterogeneity in a meta-analysis. Stat. Med. 21 , 1539–1558 (2002)
Gurevitch, J., Koricheva, J., Nakagawa, S. et al. Meta-analysis and the science of research synthesis. Nature 555, 175–182 (2018). https://doi.org/10.1038/nature25753
SYSTEMATIC REVIEW AND META‐ANALYSIS: A PRIMER
Franco M. Impellizzeri, PhD, Mario Bizzini, PT, PhD
Acknowledgment
We would like to thank Kirsten Clift for the English revision of the manuscript.
Mario Bizzini, PT, PhD, FIFA Medical Assessment and Research Center, Schulthess Clinic, Lengghalde 2, 8008 Zurich, Switzerland, Phone: +41 44 385 75 85, Fax: +41 44 385 75 90, E‐mail: mario.bizzini@f‐marc.com
The use of an evidence‐based approach to practice requires “the integration of best research evidence with clinical expertise and patient values”, where the best evidence can be gathered from randomized controlled trials (RCTs), systematic reviews and meta‐analyses. Furthermore, informed decisions in healthcare and the prompt incorporation of new research findings into routine practice necessitate regular reading, evaluation, and integration of the current knowledge from the primary literature on a given topic. However, given the dramatic increase in published studies, such an approach may become too time consuming and therefore impractical, if not impossible. Therefore, systematic reviews and meta‐analyses can provide the “best evidence” and an unbiased overview of the body of knowledge on a specific topic. In the present article the authors aim to provide a gentle introduction for readers not familiar with systematic reviews and meta‐analyses, helping them understand the basic principles and methods behind this type of literature. This article will help practitioners to critically read and interpret systematic reviews and meta‐analyses and to appropriately apply the available evidence to their clinical practice.
Keywords: evidence‐based practice, meta‐analysis, systematic review
INTRODUCTION
Sackett et al 1 , 2 defined evidence‐based practice as “the integration of best research evidence with clinical expertise and patient values”. The “best evidence” can be gathered by reading randomized controlled trials (RCTs), systematic reviews, and meta‐analyses. 2 It should be noted that the “best evidence” (e.g. concerning clinical prognosis, or patient experience) may also come from other types of research designs, particularly when dealing with topics that cannot be investigated with RCTs. 3 , 4 From the available evidence, it is possible to provide clinical recommendations using different levels of evidence. 5 Although sometimes a matter of debate, 6 ‐ 8 when properly applied, the evidence‐based approach, and therefore meta‐analyses and systematic reviews (the highest level of evidence), can help the decision‐making process in different ways: 9
Identifying treatments that are not effective;
Summarizing the likely magnitude of benefits of effective treatments;
Identifying unanticipated risks of apparently effective treatments;
Identifying gaps of knowledge;
Auditing the quality of existing randomized controlled trials.
The number of scientific articles published in biomedical areas has increased dramatically in the last several decades. Timely and informed decisions in healthcare and medicine, good clinical practice, and the prompt integration of new research findings into routine practice all require clinicians and practitioners to regularly read new literature and compare it with the existing evidence. 10 However, it is time consuming, and therefore impractical if not impossible, for practitioners to continuously read, evaluate, and incorporate the current knowledge from the primary literature sources on a given topic. 11 Furthermore, the reader also needs to be able to interpret both the new and the past body of knowledge in relation to the methodological quality of the studies. This makes it even more difficult to use the scientific literature as reference knowledge for clinical decision‐making. For this reason, review articles are important tools available to practitioners to summarize and synthesize the available evidence on a particular topic, 10 in addition to being an integral part of the evidence‐based approach.
International institutions have been created in recent years in an attempt to standardize and update scientific knowledge. Probably the best‐known example is the Cochrane Collaboration, founded in 1993 as an independent, non‐profit organisation, which now brings together more than 28,000 contributors worldwide and produces systematic reviews and meta‐analyses of healthcare interventions. There are currently over 5000 Cochrane Reviews available ( http://www.cochrane.org ). The methodology used to perform systematic reviews and meta‐analyses is crucial. Furthermore, systematic reviews and meta‐analyses have limitations that should be acknowledged and considered. Like any other scientific research, a systematic review, with or without a meta‐analysis, can be performed well or poorly. As a consequence, guidelines have been developed and proposed to reduce the risk of drawing misleading conclusions from poorly conducted literature searches and meta‐analyses. 11 ‐ 18
In the present article the authors aim to provide an introduction for readers not familiar with systematic reviews and meta‐analyses, in order to help them understand the basic principles and methods behind this kind of literature. A meta‐analysis is not just a statistical tool; it qualifies as an actual observational study and hence must be approached following established research methods involving well‐defined steps. This review should also help practitioners to critically and appropriately read and interpret systematic reviews and meta‐analyses.
NARRATIVE VERSUS SYSTEMATIC REVIEWS
Literature reviews can be classified as “narrative” or “systematic” ( Table 1 ). Narrative reviews were the first form of literature overview, allowing practitioners to get a quick synopsis of the current state of science on a topic of interest. When written by experts (usually by invitation), narrative reviews are also called “expert reviews”. However, both narrative and expert reviews are based on a subjective selection of publications, through which the reviewer qualitatively addresses a question by summarizing the findings of previous studies and drawing a conclusion. 15 As such, albeit offering interesting information for clinicians, they carry an obvious author bias, since they are not performed following a clear methodology (i.e. the identification of the literature is not transparent). Indeed, narrative and expert reviews typically use the literature to support the authors' statements, but it is not clear whether these statements are evidence‐based or merely the personal opinion/experience of the authors. Furthermore, the lack of a specific search strategy increases the risk of failing to identify relevant or key studies on a given topic, thus allowing questions to arise regarding the conclusions made by the authors. 19 Narrative reviews should be considered opinion pieces or invited commentaries; they are therefore unreliable sources of information and have a low evidence level. 10 , 11 , 19
Table 1. Characteristics of narrative and systematic reviews, modified from the Physiotherapy Evidence Database. 37
By conducting a “systematic review”, the flaws of narrative reviews can be limited or overcome. The term “systematic” refers to the strict approach (a clear set of rules) used to identify relevant studies, 11 , 15 which includes an accurate search strategy designed to identify all studies addressing a specific topic, the establishment of clear inclusion/exclusion criteria, and a well‐defined methodological analysis of the selected studies. In a properly performed systematic review, the potential bias in identifying the studies is reduced, which limits the authors' ability to arbitrarily select the studies they consider most “relevant” for supporting their own opinions or research hypotheses. Systematic reviews are considered to provide the highest level of evidence.
META‐ANALYSIS
A systematic review can be concluded in a qualitative way, by discussing, comparing, and tabulating the results of the various studies, or by statistically analysing the results from the independent studies, thereby conducting a meta‐analysis. Meta‐analysis has been defined by Glass 20 as “the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings”. By combining individual studies it is possible to provide a single and more precise estimate of the treatment effects. 11 , 21 However, the quantitative synthesis of results from a series of studies is meaningful only if these studies have been identified and collected in a proper and systematic way. This is why the systematic review always precedes the meta‐analysis, and why the two methodologies are commonly used together. Ideally, combining individual study results into a single summary estimate is appropriate when the selected studies target a common goal, have similar clinical populations, and share the same study design. When the studies are thought to be too different (statistically or clinically), some researchers prefer not to calculate summary estimates. Reasons for not presenting summary estimates are usually related to aspects of study heterogeneity such as clinical diversity (e.g. different metrics or outcomes, participant characteristics, settings, etc.), methodological diversity (different study designs), and statistical heterogeneity. 22 Some methods, however, are available for dealing with these problems in order to combine the study results. 22 Nevertheless, the source of heterogeneity should always be explored using, for example, sensitivity analyses, in which the primary studies are classified into different groups based on methodological and/or clinical characteristics and subsequently compared.
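The pooling step described above can be sketched in a few lines of Python. This is an illustrative fixed‐effect, inverse‐variance example with made‐up numbers, not the method of any specific review; real meta‐analyses are done with dedicated software.

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance (fixed-effect) pooling: each study is weighted
    by 1/variance, so more precise studies contribute more to the
    single summary estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))              # SE of the summary estimate
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)   # approximate 95% CI
    return pooled, se, ci

# Three hypothetical trials: effect size (e.g. mean difference) and its variance
pooled, se, ci = fixed_effect_pool([0.30, 0.10, 0.25], [0.04, 0.01, 0.02])
print(f"summary effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note how the summary estimate's confidence interval is narrower than that of any single study: this is the gain in precision the text refers to.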
Even after this subgroup analysis, the studies included in the groups may still be statistically heterogeneous, and therefore the calculation of a single estimate may be questionable. 11 , 19 Statistical heterogeneity can be calculated with different tests, but the most popular are Cochran's Q 23 and I². 23 Although the latter is thought to be more powerful, it has been shown that their performance is similar, 24 and these tests are generally weak (low power). Therefore, their confidence intervals should always be presented in meta‐analyses and taken into consideration when interpreting heterogeneity. Although heterogeneity can be seen as a “statistical” problem, it is also an opportunity to obtain important clinical information about the influence of specific clinical differences. 11 Sometimes the goal of a meta‐analysis is to explore the source of diversity among studies; 15 in this situation the inclusion criteria are purposely allowed to be broader.
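As an illustration, Cochran's Q and the I² statistic can be computed from the same kind of inputs used for pooling. The numbers are hypothetical and the formulas are the standard ones (Q is the weighted sum of squared deviations from the pooled effect; I² = max(0, (Q − df)/Q) × 100):

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I² statistic.

    Q measures whether the spread of study effects around the pooled
    estimate exceeds what sampling error alone would produce; I² expresses
    the percentage of total variation due to between-study heterogeneity."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity([0.30, 0.10, 0.25], [0.04, 0.01, 0.02])
print(f"Q = {q:.2f}, I² = {i2:.0f}%")   # Q ≈ 1.23, I² = 0% for these toy data
```

In this toy example Q is smaller than its degrees of freedom, so I² is truncated at zero, which is read as no detectable between‐study heterogeneity; as the text warns, with only three studies such a test has very little power.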
Meta‐analyses of observational studies
Although meta‐analyses usually combine results from RCTs, meta‐analyses of epidemiological studies (case‐control, cross‐sectional, or cohort studies) are increasingly common in the literature, and guidelines for conducting this type of meta‐analysis have therefore been proposed (e.g. Meta‐analysis Of Observational Studies in Epidemiology, MOOSE 25 ). Although the study design with the highest level of evidence is the RCT, observational studies are used in situations where RCTs are not possible, such as when investigating the potential causes of a rare disease or the prevalence of a condition and other etiological hypotheses. 3 , 4 , 11 The two designs, however, usually address different research questions (e.g. efficacy versus effectiveness), and therefore including both RCTs and observational studies in the same meta‐analysis would not be appropriate. 11 , 15 Major problems of observational studies are the lack of a control group, the difficulty of controlling for confounding variables, and the high risk of bias. 26 Nevertheless, observational studies, and therefore meta‐analyses of observational studies, can be useful and are an important step in examining the effectiveness of treatments in healthcare. 3 , 4 , 11 In meta‐analyses of observational studies, a sensitivity analysis exploring the source of heterogeneity is often the main aim. Of note, meta‐analyses themselves can be considered “observational studies of the evidence” 11 and, as a consequence, may be influenced by known and unknown confounders, similarly to primary observational studies.
Meta‐analyses based on individual patient data
While “traditional” meta‐analyses combine aggregate data (averages across study participants, such as mean treatment effects, mean age, etc.) to calculate a summary estimate, it is possible (if the data are available) to perform meta‐analyses using the individual participant data from which the aggregate data are derived. 27 ‐ 29 Meta‐analyses based on individual participant data are increasing. 28 This kind of meta‐analysis is considered the most comprehensive and has been regarded as the gold standard for systematic reviews. 29 , 30 Of course, it is not possible to simply pool the participants of various studies as if they came from a single large trial. The analysis must be stratified by study, so that the clustering of patients within studies is retained, preserving the effects of the randomization used in the primary investigations and avoiding artifacts such as Simpson's paradox, a reversal in the direction of the associations. 11 , 15 , 28 , 29 There are several potential advantages to this kind of meta‐analysis, such as consistent data checking, consistent use of inclusion and exclusion criteria, better methods for dealing with missing data, the possibility of performing the same statistical analyses across studies, and a better examination of the effects of participant‐level covariates. 15 , 31 , 32 Unfortunately, meta‐analyses of individual patient data are often difficult to conduct and time consuming, and it is often not easy to obtain the original data needed to perform such an analysis.
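Simpson's paradox is easy to demonstrate with a small numeric sketch. The two hypothetical trials below are constructed (purely for illustration) so that the treatment has the higher success rate within each study, yet naively lumping all patients together, instead of stratifying by study, reverses the direction of the effect:

```python
def rate(successes, total):
    return successes / total

# Two hypothetical trials:
# (treatment successes, treatment n, control successes, control n)
studies = [
    (80, 100, 600, 800),   # study 1: treatment 80% vs control 75%
    (240, 800, 25, 100),   # study 2: treatment 30% vs control 25%
]

# Within EACH study the treatment group does better...
for ts, tn, cs, cn in studies:
    assert rate(ts, tn) > rate(cs, cn)

# ...but naively pooling the raw patients reverses the direction:
treat = rate(sum(s[0] for s in studies), sum(s[1] for s in studies))
ctrl = rate(sum(s[2] for s in studies), sum(s[3] for s in studies))
print(f"naive pooled rates: treatment {treat:.2f} vs control {ctrl:.2f}")  # 0.36 vs 0.69
```

The reversal happens because most treated patients come from the harder second study; stratifying the analysis by study, as the text prescribes, avoids this artifact.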
Cumulative and Bayesian meta‐analyses
Another form of meta‐analysis is the so‐called “cumulative meta‐analysis”. Cumulative meta‐analyses recognize the cumulative nature of scientific evidence and knowledge. 11 In a cumulative meta‐analysis, a new relevant study on a given topic is added whenever it becomes available. A cumulative meta‐analysis therefore shows the pattern of evidence over time and can identify the point at which a treatment effect becomes clinically significant. 11 , 15 , 33 Cumulative meta‐analyses are not simply updated meta‐analyses: there is no single pooling; rather, the results are re‐summarized as each new study is added. 33 As a consequence, in the forest plot commonly used to display the effect estimates, the horizontal lines represent the treatment effect estimate as each study is added, not the results of the single studies. Cumulative meta‐analyses should be interpreted within the Bayesian framework, even though they differ from the “pure” Bayesian approach to meta‐analysis.
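The re‐summarizing loop of a cumulative meta‐analysis can be sketched as follows (hypothetical studies and a simple fixed‐effect summary, for illustration only). Each iteration of the loop corresponds to one horizontal line of a cumulative forest plot:

```python
import math

def pooled_estimate(effects, variances):
    """Fixed-effect inverse-variance summary of the studies seen so far."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return est, math.sqrt(1.0 / sum(weights))

# Hypothetical studies in order of publication: (year, effect, variance)
studies = [(1998, 0.40, 0.09), (2003, 0.15, 0.04), (2010, 0.22, 0.02)]

effects, variances = [], []
for year, effect, variance in studies:
    effects.append(effect)
    variances.append(variance)
    est, se = pooled_estimate(effects, variances)   # re-summarize after each study
    print(f"{year}: cumulative estimate {est:.3f} (SE {se:.3f})")
```

Watching the estimate stabilize (and its standard error shrink) over the iterations is exactly the "pattern of evidence over time" the text describes.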
The Bayesian approach differs from the classical, or frequentist, methods of meta‐analysis in that data and model parameters are considered random quantities and probability is interpreted as an uncertainty rather than a frequency. 11 , 15 , 34 Compared to frequentist methods, the Bayesian approach incorporates prior distributions, which can be specified based on a priori beliefs (the parameters being unknown random quantities), while the evidence coming from the study is described by a likelihood function. 11 , 15 , 34 The combination of the prior distribution and the likelihood function gives the posterior probability density function. 34 The uncertainty around the posterior effect estimate is expressed as a credibility interval, the equivalent of the confidence interval in the frequentist approach. 11 , 15 , 34 Although Bayesian meta‐analyses are increasing, they are still less common than traditional (frequentist) meta‐analyses.
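The prior‐plus‐likelihood mechanics can be made concrete with the simplest conjugate case, a normal prior combined with a normal likelihood (an illustrative sketch with invented numbers, not how a full Bayesian meta‐analysis is actually fitted):

```python
import math

def normal_posterior(prior_mean, prior_var, obs_mean, obs_var):
    """Conjugate normal-normal update: the posterior precision is the sum
    of the prior and data precisions, and the posterior mean is their
    precision-weighted average."""
    post_precision = 1.0 / prior_var + 1.0 / obs_var
    post_mean = (prior_mean / prior_var + obs_mean / obs_var) / post_precision
    return post_mean, 1.0 / post_precision

# Sceptical prior centred on "no effect", combined with data showing +0.5
mean, var = normal_posterior(prior_mean=0.0, prior_var=0.10, obs_mean=0.5, obs_var=0.05)
sd = math.sqrt(var)
print(f"posterior mean {mean:.3f}, 95% credibility interval "
      f"({mean - 1.96 * sd:.3f}, {mean + 1.96 * sd:.3f})")
```

The posterior mean lands between the prior belief and the observed effect, pulled towards whichever is more precise; the interval reported is the credibility interval mentioned in the text.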
Conducting a systematic review and meta‐analysis
As mentioned above, a systematic review must follow well‐defined and established methods. One reference source of practical guidelines for properly applying methodological principles when conducting systematic reviews and meta‐analyses is the Cochrane Handbook for Systematic Reviews of Interventions, which is available for free online. 12 However, other guidelines and textbooks on systematic reviews and meta‐analysis are also available. 11 , 13 , 14 , 15 Similarly, authors of reviews should report their results in a transparent and complete way; for this reason, an international group of experts developed and published the QUOROM (Quality Of Reporting Of Meta‐analyses) 16 and, more recently, the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta‐Analyses) 17 guidelines, which address the reporting of systematic reviews and meta‐analyses of studies that evaluate healthcare interventions. 17 , 18
In this section the authors briefly present the principal steps necessary for conducting a systematic review and meta‐analysis, derived from the available reference guidelines and textbooks, in which all the contents of the following section (and much more) can be found. 11 , 12 , 14 A summary of the steps is presented in Figure 1 . The methods are similar to those of any other study: they start with the careful development of the review protocol, which includes the definition of the research question, the collection and analysis of data, and the interpretation of the results. The protocol defines the methods that will be used in the review and should be set out before starting the review in order to avoid bias; any deviation from it should be reported and justified in the manuscript.
Figure 1. Steps in conducting a systematic review. Modified from 11 , 14
Step 1. Defining the review question and eligibility criteria
The authors should start by formulating a precise research question, which means they should clearly report the objectives of the review and the question they would like to address. If necessary, a broad research question may be divided into more specific questions. According to the PICOS framework, 35 , 36 the question should define the Population(s), Intervention(s), Comparator(s), Outcome(s), and Study design(s). This information also provides the rationale for the inclusion and exclusion criteria, for which a background section explaining the context and the key conceptual issues may also be needed. When using terms that may have different interpretations, operational definitions should be provided; an example is the term “neuromuscular control”, which can be interpreted in different ways by different researchers and practitioners. Furthermore, the inclusion criteria should be precise enough to allow the selection of all the studies relevant for answering the research question. In theory, only the best evidence available should be used for systematic reviews. Unfortunately, the use of an appropriate design (e.g. RCT) does not ensure that a study was well conducted. However, using cut‐offs in quality scores as inclusion criteria is not appropriate given their subjective nature; a sensitivity analysis comparing all available studies based on some key methodological characteristics is preferable.
Step 2. Searching for studies
The search strategy must be clearly stated and should allow the identification of all the relevant studies. The search strategy is usually based on the PICOS elements and can be conducted using electronic databases, reading the reference lists of relevant studies, hand‐searching journals and conference proceedings, contacting authors, experts in the field and manufacturers, for example.
Currently, it is possible to easily search the literature using electronic databases. However, using only one database does not ensure that all the relevant studies will be found, and therefore several databases should be searched. The Physiotherapy Evidence Database (PEDro: http://www.pedro.org.au ) provides free access to RCTs (about 18,000) and systematic reviews (almost 4000) on musculoskeletal and orthopaedic physiotherapy (with sports represented in more than 60%). Other available electronic databases are MEDLINE (through PubMed), EMBASE, SCOPUS, CINAHL, Web of Science (Thomson Reuters) and The Cochrane Controlled Trials Register. The necessity of using different databases is justified by the fact that, for example, 1800 journals indexed in MEDLINE are not indexed in EMBASE, and vice versa.
The creation and selection of appropriate keywords and search term lists is important for finding the relevant literature, ensuring that the search is highly sensitive without compromising precision. The search strategy is therefore not easy to construct and should be developed carefully, taking into consideration the differences between databases and search interfaces. Although Boolean operators (e.g. AND, OR, NOT) and proximity operators (e.g. NEAR, NEXT) are usually available, every database interface has its own search syntax (e.g. different truncation and wildcards) and a different thesaurus for indexing (e.g. MeSH for MEDLINE and EMTREE for EMBASE). Filters already developed for specific topics are also available. For example, PEDro has filters included in search strategies (called SDIs) that are used regularly and automatically in some of the above-mentioned databases for retrieving guidelines, RCTs, and systematic reviews. 37
After the electronic database search, however, other search strategies should also be adopted, such as browsing the reference lists of primary and secondary literature and hand-searching non-indexed journals. Internet sources such as specialized websites can also be used to retrieve grey literature (e.g. unpublished papers, reports, conference proceedings, theses, or any other publications produced by governments, institutions, associations, universities, etc.). Attempts may also be made to find unpublished studies, if any, in order to reduce the risk of publication bias (the tendency to publish positive results or results going in the same direction). Similarly, selecting only English-language studies may exacerbate this bias, since authors may tend to publish positive findings in international journals and negative results in local journals. On the other hand, unpublished and non-English studies generally have lower quality, and their inclusion may also introduce bias. There is no rule for deciding whether or not to include unpublished or non-English studies. The authors are usually invited to consider the influence of these decisions on the findings and/or to explore the effects of their inclusion with a sensitivity analysis.
Step 3. Selecting the studies
The selection of the studies should be conducted by more than one reviewer, as this process is quite subjective (inter-reviewer agreement, assessed with the kappa statistic, should be reported together with the reasons for disagreements). Before selecting the studies, the results of the different searches are merged using reference management software and duplicates are deleted. After an initial screening of titles and abstracts, in which the obviously irrelevant studies are removed, the full papers of potentially relevant studies should be retrieved and selected based on the previously defined inclusion and exclusion criteria. In case of disagreement, a consensus should be reached by discussion or with the help of a third reviewer. Direct contact with the author(s) of a study may also help in clarifying a decision.
An important phase at this step is the assessment of quality. The use of quality scores to weight the studies entered in a meta-analysis is not recommended, nor is it recommended to include only studies above a cut-off quality score. However, the quality of the studies must be considered when interpreting the results of a meta-analysis. This can be done qualitatively or quantitatively through subgroup and sensitivity analyses based on important methodological aspects, which can be assessed using checklists (preferable over quality scores). If quality scores are nevertheless to be used for weighting, alternative statistical techniques have been proposed (e.g. 38). The assessment of quality should be performed by two independent observers. The Cochrane handbook, however, makes a distinction between study quality and risk of bias (related, for example, to the method used to generate the random allocation, concealment, blinding, etc.), focusing more on the latter. As with quality assessment, the risk of bias should be taken into consideration when interpreting the findings of the meta-analysis. The quality of a study is generally assessed based on the information reported in the study, thus linking the quality of reporting to the quality of the research itself, which is not necessarily justified. Furthermore, a study conducted to the highest possible standard may still have a high risk of bias. In both cases, it is important that the authors of primary studies report their results appropriately, and for this reason guidelines such as the CONSORT (Consolidated Standards of Reporting Trials 39 ) and STROBE (Strengthening the Reporting of Observational Studies in Epidemiology 40 ) statements have been created to improve the quality of reporting.
Step 4. Data extraction
Data extraction must be accurate and unbiased and, to reduce possible errors, should therefore be performed by at least two researchers. Standardized data extraction forms should be created, tested, and if necessary modified before implementation. The extraction forms should be designed taking into consideration the research question and the planned analyses. Information extracted can include general information (author, title, type of publication, country of origin, etc.), study characteristics (e.g. aims of the study, design, randomization techniques, etc.), participant characteristics (e.g. age, gender, etc.), intervention and setting, and outcome data and results (e.g. statistical techniques, measurement tools, number of follow-ups, number of participants enrolled, allocated, and included in the analysis, and results of the study such as odds ratios, risk ratios, mean differences and confidence intervals, etc.). Disagreements should be noted and resolved by discussion until a consensus is reached. If needed, a third researcher can be involved to resolve the disagreement.
Step 5. Analysis and presentation of the results (data synthesis)
Once the data are extracted, they are combined, analyzed, and presented. This data synthesis can be done quantitatively using statistical techniques (meta-analysis), or qualitatively using a narrative approach when pooling is not believed to be appropriate. Irrespective of the approach (quantitative or qualitative), the synthesis should start with a descriptive summary (in tabular form) of the included studies. This table usually includes details such as study type, interventions, sample sizes, participant characteristics, and outcomes. The quality assessment or the risk of bias should also be reported. For narrative reviews a comprehensive synthesis framework ( Figure 2 ) has been proposed. 14, 41
Narrative synthesis framework. Modified from 14 , 41
Standardization of outcomes
To allow comparison between studies, the results should be expressed in a standardized format such as effect sizes. An appropriate effect size should be comparable across studies, computable from the data available in the original articles, and readily interpretable. When the outcomes of the primary studies are reported as means and standard deviations, the effect size can be the raw (unstandardized) difference in means (D), the standardized difference in means (d or g) or the response ratio (R). If the results are reported as binary outcomes, the effect sizes can be the risk ratio (RR), the odds ratio (OR) or the risk difference (RD). 15
Statistical analysis
When a quantitative approach is chosen, meta-analytical techniques are used. Textbooks and courses are available for learning statistical meta-analytical techniques. Once a summary statistic is calculated for each study, a “pooled” effect estimate of the interventions is determined as the weighted average of the individual study estimates, so that larger studies have more “weight” than small studies. This is necessary because small studies are more affected by the role of chance. 11, 15 The two main statistical models used for combining the results are the “fixed-effect” and the “random-effects” models. Under the fixed-effect model, it is assumed that the variability between studies is due only to random variation because there is only one true (common) effect. In other words, it is assumed that the group of studies estimates the same treatment effect and therefore the effects are part of the same distribution. A common method for weighting each study is the inverse-variance method, where the weight is given by the inverse of the variance of each estimate. Therefore, the two essential pieces of data required for this calculation are the estimate of the effect and its standard error. On the other hand, the “random-effects” model assumes a different underlying effect for each study (the true effect varies from study to study). The study weight therefore takes into account two sources of error: the between- and within-study variance. As in the fixed-effect model, the weight is calculated using the inverse-variance method, but in the random-effects model the study-specific standard errors are adjusted to incorporate both the within- and between-study variance. For this reason, the confidence intervals obtained with random-effects models are usually wider. In theory, the fixed-effect model can be applied when the studies are homogeneous, while the random-effects model should be applied when the results are heterogeneous.
However, the statistical tests for examining heterogeneity lack power and, as mentioned above, heterogeneity should be carefully scrutinized (e.g. by interpreting the confidence intervals) before taking a decision. Sometimes both fixed- and random-effects models are used to examine the robustness of the analysis. Once the analyses are completed, results should be presented as point estimates with the corresponding confidence intervals and exact p-values.
Beyond the individual-study and summary estimates, other analyses are usually necessary. As mentioned several times, the exploration of possible sources of heterogeneity is important and can be performed using sensitivity, subgroup, or regression analyses. Meta-regression also makes it possible to examine the effects of differences in study characteristics on the treatment effect estimate. In meta-regression, larger studies have more influence than smaller studies; as with the other analyses, its limitations should be taken into account before deciding to use it and when interpreting the results.
Graphic display
The results of each trial are commonly displayed with their corresponding confidence intervals in the so-called “forest plot” ( Figure 3 ). In the forest plot each study is represented by a square, whose size reflects the weight of the study, and a horizontal line indicating the confidence interval. A solid vertical line usually corresponds to no treatment effect. The summary point estimate is usually represented with a diamond at the bottom of the graph, with its horizontal extremities indicating the confidence interval. This graphical solution gives an immediate overview of the results.
Example of a forest plot: the squares represent the effect estimates of the individual studies and the horizontal lines indicate the confidence intervals; the size of each square reflects the weight of the study. The diamond at the bottom of the graph represents the summary point estimate, with its horizontal extremities indicating the confidence interval. In this example the authors used d as the standardized outcome measure.
An alternative graphical display, the funnel plot, can be used for investigating the effects of small studies and for identifying publication bias ( Figure 4 ). The funnel plot is a scatter plot of the effect estimates of individual studies against a measure of study size or precision (commonly the standard error, although sample size is also still used). If there is no publication bias the funnel plot will be symmetrical ( Figure 4A ). However, funnel plot examination is subjective, based upon visual inspection, and can therefore be unreliable. In addition, other factors may influence the symmetry of the funnel plot, such as the measures used for estimating effect and precision, and differences between small and large studies. 14 Therefore, its use and interpretation should be approached with caution.
Example of symmetric (A) and asymmetric (B) funnel plots.
Step 6. Interpretation of the results
The final part of the process pertains to the interpretation of the results. When interpreting or commenting on the findings, the limitations should be discussed and taken into account, such as the overall risk of bias and the specific biases of the studies included in the systematic review, and the strength of the evidence. Furthermore, interpretation should not rely solely on P-values, but also on the uncertainty of the estimates and their clinical/practical importance. Ideally, the interpretation should help the clinician understand how to apply the findings in practice, provide recommendations or implications for policies, and offer directions for further research.
CONCLUSIONS
Systematic reviews have to meet high methodological standards, and their results should be translated into clinically relevant information. These studies offer a valuable and useful summary of the current scientific evidence on a specific topic and can be used for developing evidence-based guidelines. However, it is important that practitioners understand the basic principles behind the reviews and are hence able to appraise their methodological quality before using them as a source of knowledge. Furthermore, no RCTs, systematic reviews, or meta-analyses address all aspects of the wide variety of clinical situations. A typical example in sports physiotherapy is that most available studies deal with recreational athletes, while an individual clinician may work with high-profile or elite athletes in the clinic. Therefore, when applying the results of a systematic review to clinical situations and individual patients, there are various aspects one should consider, such as the applicability of the findings to the individual patient, the feasibility in a particular setting, the benefit-risk ratio, and the patient's values and preferences. 1 As reported in the definition, evidence-based medicine is the integration of both research evidence and clinical expertise. As such, the experience of the sports PT should help in contextualizing and applying the findings of a systematic review or meta-analysis, and in adjusting the expected effects to the individual patient. As an example, an elite athlete is often more motivated and compliant in rehabilitation, and may have a better-than-average outcome with a given physical therapy or training intervention (when compared to a recreational athlete). Therefore, it is essential to merge the available evidence with the clinical evaluation and the patient's wishes (and the consequent treatment planning) in order to ensure evidence-based management of the patient or athlete.
- 1. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS: Evidence based medicine: what it is and what it isn't. BMJ. 1996; 312(7023): 71‐72 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 2. Sackett DL, Strauss SE, Richardson WS, Rosenberg W, Haynes RB: Evidence‐Based Medicine. How to practice and Teach. EBM (2nd ed). London: Churchill Livingstone, 2000 [ Google Scholar ]
- 3. Black N: What observational studies can offer decision makers. Horm Res. 1999; 51 Suppl 1: 44‐49 [ DOI ] [ PubMed ] [ Google Scholar ]
- 4. Black N: Why we need observational studies to evaluate the effectiveness of health care. BMJ. 1996; 312(7040): 1215‐1218 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 5. US Preventive Services Task Force : Guide to clinical preventive services, 2nd ed. Baltimore, MD: Williams & Wilkins, 1996 [ Google Scholar ]
- 6. LeLorier J, Gregoire G, Benhaddad A, Lapierre J, Derderian F: Discrepancies between meta‐analyses and subsequent large randomized, controlled trials. N Engl J Med. 1997; 337(8): 536‐542 [ DOI ] [ PubMed ] [ Google Scholar ]
- 7. Liberati A: “Meta‐analysis: statistical alchemy for the 21st century”: discussion. A plea for a more balanced view of meta‐analysis and systematic overviews of the effect of health care interventions. J Clin Epidemiol. 1995; 48(1): 81‐86 [ DOI ] [ PubMed ] [ Google Scholar ]
- 8. Bailar JC, 3rd: The promise and problems of meta‐analysis. N Engl J Med. 1997; 337(8): 559‐561 [ DOI ] [ PubMed ] [ Google Scholar ]
- 9. Von Korff M: The role of meta‐analysis in medical decision making. Spine J. 2003; 3(5): 329‐330 [ DOI ] [ PubMed ] [ Google Scholar ]
- 10. Tonelli M, Hackam D, Garg AX: Primer on systematic review and meta‐analysis. Methods Mol Biol. 2009; 473: 217‐233 [ DOI ] [ PubMed ] [ Google Scholar ]
- 11. Egger M, Davey Smith G, Altman D: Systematic Reviews in Health Care: Meta‐Analysis in Context, 2nd Edition. BMJ Books, 2001 [ Google Scholar ]
- 12. Higgins JPT, Green S: Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0 (updated March 2011). The Cochrane Collaboration (available from: http://www.cochrane‐handbook.org/ ), 2008
- 13. Atkins D, Fink K, Slutsky J: Better information for better health care: the Evidence‐based Practice Center program and the Agency for Healthcare Research and Quality. Ann Intern Med. 2005; 142(12 Pt 2): 1035‐1041 [ DOI ] [ PubMed ] [ Google Scholar ]
- 14. Centre for Reviews and Dissemination : Systematic reviews: CRD's guidance for undertaking reviews in health care. York: University of York, 2009 [ Google Scholar ]
- 15. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR: Introduction to Meta‐Analysis. John Wiley & Sons Ltd, 2009 [ Google Scholar ]
- 16. Clarke M: The QUORUM statement. Lancet. 2000; 355(9205): 756‐757 [ DOI ] [ PubMed ] [ Google Scholar ]
- 17. Moher D, Liberati A, Tetzlaff J, Altman DG: Preferred reporting items for systematic reviews and meta‐analyses: the PRISMA statement. BMJ. 2009; 339: b2535. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 18. Liberati A, Altman DG, Tetzlaff J, et al. : The PRISMA statement for reporting systematic reviews and meta‐analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009; 339: b2700. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 19. Sauerland S, Seiler CM: Role of systematic reviews and meta‐analysis in evidence‐based medicine. World J Surg. 2005; 29(5): 582‐587 [ DOI ] [ PubMed ] [ Google Scholar ]
- 20. Glass GV: Primary, secondary and meta‐analysis of research. Educ Res. 1976; 5: 3‐8 [ Google Scholar ]
- 21. Berman NG, Parker RA: Meta‐analysis: neither quick nor easy. BMC Med Res Methodol. 2002; 2: 10. [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 22. Ioannidis JP, Patsopoulos NA, Rothstein HR: Reasons or excuses for avoiding meta‐analysis in forest plots. BMJ. 2008; 336(7658): 1413‐1415 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 23. Higgins JP, Thompson SG: Quantifying heterogeneity in a meta‐analysis. Stat Med. 2002; 21(11): 1539‐1558 [ DOI ] [ PubMed ] [ Google Scholar ]
- 24. Huedo‐Medina TB, Sanchez‐Meca J, Marin‐Martinez F, Botella J: Assessing heterogeneity in meta‐analysis: Q statistic or I2 index? Psychol Methods. 2006; 11(2): 193‐206 [ DOI ] [ PubMed ] [ Google Scholar ]
- 25. Stroup DF, Berlin JA, Morton SC, et al. : Meta‐analysis of observational studies in epidemiology: a proposal for reporting. Meta‐analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000; 283(15): 2008‐2012 [ DOI ] [ PubMed ] [ Google Scholar ]
- 26. Kelsey JL, Whittemore AS, Evans AS, Thompson WD: Methods in observational epidemiology. New York: Oxford University Press, 1996 [ Google Scholar ]
- 27. Duchateau L, Pignon JP, Bijnens L, Bertin S, Bourhis J, Sylvester R: Individual patient‐versus literature‐based meta‐analysis of survival data: time to event and event rate at a particular time can make a difference, an example based on head and neck cancer. Control Clin Trials. 2001; 22(5): 538‐547 [ DOI ] [ PubMed ] [ Google Scholar ]
- 28. Riley RD, Dodd SR, Craig JV, Thompson JR, Williamson PR: Meta‐analysis of diagnostic test studies using individual patient data and aggregate data. Stat Med. 2008; 27(29): 6111‐6136 [ DOI ] [ PubMed ] [ Google Scholar ]
- 29. Simmonds MC, Higgins JP, Stewart LA, Tierney JF, Clarke MJ, Thompson SG: Meta‐analysis of individual patient data from randomized trials: a review of methods used in practice. Clin Trials. 2005; 2(3): 209‐217 [ DOI ] [ PubMed ] [ Google Scholar ]
- 30. Oxman AD, Clarke MJ, Stewart LA: From science to practice. Meta‐analyses using individual patient data are needed. JAMA. 1995; 274(10): 845‐846 [ DOI ] [ PubMed ] [ Google Scholar ]
- 31. Stewart LA, Tierney JF: To IPD or not to IPD? Advantages and disadvantages of systematic reviews using individual patient data. Eval Health Prof. 2002; 25(1): 76‐97 [ DOI ] [ PubMed ] [ Google Scholar ]
- 32. Riley RD, Lambert PC, Staessen JA, et al. : Meta‐analysis of continuous outcomes combining individual patient data and aggregate data. Stat Med. 2008; 27(11): 1870‐1893 [ DOI ] [ PubMed ] [ Google Scholar ]
- 33. Lau J, Schmid CH, Chalmers TC: Cumulative meta‐analysis of clinical trials builds evidence for exemplary medical care. J Clin Epidemiol. 1995; 48(1): 45‐57; discussion 59‐60. [ DOI ] [ PubMed ] [ Google Scholar ]
- 34. Sutton AJ, Abrams KR: Bayesian methods in meta‐analysis and evidence synthesis. Stat Methods Med Res. 2001; 10(4): 277‐303 [ DOI ] [ PubMed ] [ Google Scholar ]
- 35. Armstrong EC: The well‐built clinical question: the key to finding the best evidence efficiently. Wmj. 1999; 98(2): 25‐28 [ PubMed ] [ Google Scholar ]
- 36. Richardson WS, Wilson MC, Nishikawa J, Hayward RS: The well‐built clinical question: a key to evidence‐based decisions. ACP J Club. 1995; 123(3): A12‐13 [ PubMed ] [ Google Scholar ]
- 37. Physiotherapy Evidence Database (PEDro) : Physiotherapy Evidence Database (PEDro). J Med Libr Assoc. 2006; 94(4): 477‐478 [ Google Scholar ]
- 38. Greenland S, O'Rourke K: On the bias produced by quality scores in meta‐analysis, and a hierarchical view of proposed solutions. Biostatistics. 2001; 2(4): 463‐471 [ DOI ] [ PubMed ] [ Google Scholar ]
- 39. Moher D, Schulz KF, Altman DG: The CONSORT statement: revised recommendations for improving the quality of reports of parallel‐group randomised trials. Lancet. 2001; 357(9263): 1191‐1194 [ PubMed ] [ Google Scholar ]
- 40. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet. 2007; 370(9596): 1453‐1457 [ DOI ] [ PubMed ] [ Google Scholar ]
- 41. Popay J, Roberts H, Sowden A, et al. : Developing guidance on the conduct of narrative synthesis in systematic reviews. J Epidemiol Comm Health. 2005; 59(Suppl 1): A7 [ Google Scholar ]