Abstract
“Health Care providers, researchers and policy makers are inundated with unmanageable amounts of information […].” (Mulrow, 1994).
The citation above dates back to 1994, but it appears particularly true today, with far more published research than at that time: From 1946 until December 31, 1994, PubMed tags 125,595 items as Randomized or Controlled Clinical Trial, whereas in the period until November 6, 2017, PubMed tags 639,998 such items. No one is able to keep up with this huge amount of information without the help of systematic reviews. Systematic reviews attempt to answer a specific research question by collating the complete evidence base that fits prespecified eligibility criteria. They aim at minimizing bias by using explicit, systematic methods. Hence, it is possible to provide more reliable findings from which conclusions can be drawn and decisions can be made (Antman, Lau, Kupelnick, Mosteller, & Chalmers, 1992). Systematic reviews address a variety of research questions, such as estimating the treatment effects of a certain healthcare intervention, the diagnostic accuracy of a certain clinical test, or the harms associated with a certain intervention or test. Many systematic reviews use meta-analysis as a statistical method to summarize the results of independent studies, thus providing more precise estimates of the effects of healthcare interventions than those obtained from individual studies alone. Meta-analyses can increase power, improve precision, answer questions not posed by individual studies, settle controversies arising from conflicting studies, and generate new hypotheses (Deeks, Higgins, & Altman, 2011). Moreover, meta-analyses can ease the planning of new studies by informing sample size calculations.
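The precision gain described above can be illustrated with a minimal sketch of fixed-effect inverse-variance pooling, the simplest standard pooling method (the function name and the trial numbers below are hypothetical, used only for illustration):

```python
import math

def pooled_effect(effects, ses):
    """Fixed-effect inverse-variance pooling of independent study estimates.

    effects: per-study effect estimates (e.g. log odds ratios)
    ses:     their standard errors
    Returns (pooled effect, pooled standard error).
    """
    weights = [1.0 / se ** 2 for se in ses]  # precision weights: 1 / variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))  # never larger than the smallest SE
    return pooled, pooled_se

# Three hypothetical trials reporting log odds ratios with their standard errors
effect, se = pooled_effect([-0.5, -0.3, -0.7], [0.25, 0.30, 0.40])
```

Because the pooled variance is the reciprocal of the summed precisions, the pooled standard error is smaller than that of any single study, which is exactly the gain in precision the text refers to.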
Whether a meta-analysis is appropriate in a review depends on the aim of the review and on its included studies: Meta-analyses are often problematic (a) in case of clinically diverse studies (different treatments, comparators, or outcomes may obscure treatment effects), (b) in case of high risk of bias in the included studies (the size of the effect may be seriously misleading), or (c) in case of serious publication or reporting bias (the size of the effect may be seriously misleading). A contemporary approach to meta-analysis is, for example, network meta-analysis (Bucher, Guyatt, Griffith, & Walter, 1997), which allows for a quantitative synthesis of an evidence network. This is made possible by combining direct evidence from head-to-head comparisons of three or more interventions within randomized trials with indirect evidence across randomized trials on the basis of a common comparator (Mills et al., 2011).
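The adjusted indirect comparison underlying such networks (Bucher et al., 1997) can be sketched as follows: when treatments A and B were each compared against a common comparator C, the A-versus-B effect is the difference of the two direct effects, with their variances adding. The numbers below are hypothetical and assume effects on the log odds ratio scale:

```python
import math

def bucher_indirect(d_ac, se_ac, d_bc, se_bc):
    """Adjusted indirect comparison of A vs. B via a common comparator C:
    d_AB = d_AC - d_BC, with the variances of the two direct estimates adding."""
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)  # indirect estimate is less precise
    return d_ab, se_ab

# Hypothetical direct results: A vs. C and B vs. C (log odds ratios)
d, se = bucher_indirect(-0.8, 0.2, -0.3, 0.25)
```

Note that the indirect standard error exceeds both direct ones, which is why networks gain most when direct and indirect evidence can be combined rather than relying on indirect comparisons alone.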
According to a publication series published in 2014 in The Lancet, 85% of research investment is wasted, and biomedical research often suffers from poor quality (Salman et al., 2014). Recent research suggests that this might also be true of meta-analysis: It is estimated that in biomedical research, only 3 out of 100 systematic reviews and meta-analyses are both non-misleading and useful (Ioannidis, 2016). Every fourth meta-analysis is redundant and unnecessary; every fifth remains unpublished or its methodology is flawed beyond repair (Ioannidis, 2016). Every sixth is decent but not useful, and every eighth is misleading because, for example, it was performed on abandoned genetics (Ioannidis, 2016).
Although one might argue that replication of findings is to some extent useful in a scientific context, it can be a major problem for systematic reviews: There are up to 20 meta-analyses on the same topic, for example, on statins for the prevention of atrial fibrillation after cardiac surgery, with mostly conflicting results (Ioannidis, 2016). Performing meta-analyses has become a business, run by contractor companies mostly hired by the pharmaceutical and medical device industry (Ioannidis, 2016). This in turn contributes to the massive production of multiple meta-analyses aligned with sponsor interests. Current progress in artificial intelligence is, on the one hand, expected to ease the process of literature search and analysis, but on the other hand, it may well worsen this problem. Only a small fraction of systematic reviews are recorded prospectively in registries like PROSPERO (University of York, 2017), which could prevent such excessive redundancy.
What does this mean for physiotherapy research? Although one might speculate that funding bias is not as apparent in physiotherapy research as in biomedicine, the mechanisms (lack of preregistered protocols, insufficient exploration of prior research, and low-quality research) might be the same, and we should be aware of this fact. The biggest problem may arise from the nature and quality of the primary research included in a systematic review. We can change this only by performing better in our primary research, that is, in conducting and reporting clinical trials (Ioannidis, 2016). But since systematic reviews themselves are prone to bias due to poor methodology, another step would be to emphasize rigorous, prospective reviews such as those of Cochrane, which are today's state of the art, as a primary source of information for clinical decision-making. So the solution to these problems might be (a) the mandatory preregistration of a research project in a trial or review database and (b) the consistent use of standardized checklists when conducting a clinical study or systematic review and writing its report, to improve quality.
Notwithstanding the problems listed above, systematic reviews and meta-analyses can still have major value: Ideally, they can show us what is known and what is unknown regarding a certain topic and present the accompanying uncertainty. In a nonideal situation, they can reveal the unreliability of published evidence, which is also an important insight (Ioannidis, 2016).
Original language | English
---|---
Article number | e1703
Journal | Physiotherapy Research International
Volume | 23
Issue number | 1
ISSN | 1358-2267
DOIs | 
Publication status | Published - 01.2018
Research Areas and Centers
- Health Sciences
DFG Research Classification Scheme
- 205-02 Public Health, Health Services Research and Social Medicine
- 109-02 General and Domain-Specific Teaching and Learning