Headlines may show us a black or white vision of the world; in reality, testing hypotheses and reaching credible conclusions is an uncertain and constantly evolving business.
The news headlines have been full of accusations of bad science, fraud, and scandal in the world of medical research lately. In fact, one researcher has published a paper arguing that, statistically, any given scientific study is more likely than not to be wrong.
Yet we increasingly rely on scientific studies to advance our knowledge in the fields of both conventional medicine and alternative health. Documented and reproducible scientific studies are there to help us sort the wheat from the chaff in the oftentimes contradictory world of health claims, and they can provide us with more reliable data than anecdote or folklore.
Fudging the Figures
In 2005 epidemiologist John Ioannidis’ research led him to the astonishing conclusion that “most claimed research findings are false.” Researcher bias, “manipulation in the analysis or reporting of findings,” whether intentional or subconscious, plays a key role in skewing the validity of study results. Ioannidis statistically analyzed the damaging effects of the following factors on the reliability of study results: small sample sizes or the study of tiny, isolated effects; sloppy study designs and definitions; and the presence of a profit motive or the protection of prestige. Ioannidis noted, too, that “the hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true,” because competing teams of scientists rush to publish their results. Alone or in combination, these factors can destroy the credibility of a scientific study.
Ioannidis’ statistical study is not an isolated blast of the trumpet. In June 2005 Nature published an article with the provocative title, “Scientists behaving badly.” More than 3,400 US scientists, predominantly in the fields of biology, medicine, and chemistry, were asked whether they had engaged in any of 16 bad behaviours, including “falsifying or ‘cooking’ research data,” “not properly disclosing involvement in firms whose products are based on one’s own research,” and “changing the design, methodology or results of a study in response to pressure from funding sources.” Many of these inappropriate behaviours suggest researcher bias and diminish the likelihood that the resulting studies’ findings are true.
While less than two percent of respondents reported engaging in practices such as plagiarism or outright falsification of data, 33 percent of the scientists overall admitted that they had engaged in at least one of the bad behaviours in the previous three years. More than a quarter of the respondents (27.5 percent) admitted to “inadequate record keeping related to research projects,” and almost 16 percent said they had modified a study because of pressure from funders. Ioannidis found that both of these factors, too much flexibility in record keeping and the influence of financial interests, play a statistically significant role in reducing the validity of research results.
Faking It
What about real-life headlines? Perhaps the best-known recent example is the scandal over falsified stem cell research. This is certainly a hot, even hyped, area of research, promising treatments for now-incurable conditions such as Parkinson’s disease and spinal cord injuries. In 2004 South Korean researcher Hwang Woo Suk published what appeared to be groundbreaking research on the cloning of human embryonic stem cells in the journal Science. It was later discovered that Hwang had obtained egg donations from members of his own research team and that the research itself was faked with doctored photographs. Science has since retracted the study.
Either sloppy record keeping or the need to justify spending hundreds of millions of dollars may have influenced analysts at the World Bank to publish what appeared to be manipulated statistical data. Canadian researcher Amir Attaran’s article in the Lancet (November 2004) alleged that the World Bank invented epidemiological statistics to support claims that its malaria-control program had made immense advances in reducing malaria deaths in India, in some areas by as much as 98 percent.
The World Bank refused to share its too-good-to-be-true data with Attaran, but he was able to obtain statistics from Indian health authorities. The raw data showed that malaria deaths had actually increased, not decreased.
How to Lie with Statistics
Dr. Ranjit Kumar Chandra, a researcher at Memorial University, a recipient of the Order of Canada, and a Nobel Prize nominee, published almost 200 scientific research papers during his career and was well known as an expert in the fields of nutrition and immunology. But in January 2006 CBC’s The National aired a three-part program exposing Dr. Chandra’s long-term “pattern of scientific fraud and financial deception.”
Memorial University found Dr. Chandra guilty of “scientific misconduct” in 1994, but he continued to publish follow-ups to studies “that had never been done in the first place.” In 2000 Dr. Chandra submitted a study about his patented multivitamin for seniors (Jaavan 50) to the British Medical Journal, but the journal’s reviewers turned it down, saying it had “all the [statistical] hallmarks of having been completely invented.” To date, only one of Dr. Chandra’s articles has been formally retracted by the journal that published it, and he continues to publish.
Systemic Misconduct
In March 2005 the US Office of Research Integrity (ORI) charged University of Vermont researcher Eric Poehlman with using fabricated research in 10 different articles. One journal that had published Dr. Poehlman’s papers, the Annals of Internal Medicine, issued a retraction earlier this year. “In April 2005, Poehlman had 204 publications listed in PubMed [an online database of medical research]. If only 10 have undergone careful scrutiny by ORI, how sure are we about the other 194?” asked Harold Sox, MD, editor of the Annals.
How sure are we about any published studies, after reading about tainted studies like these? Most approaches to scientific misconduct have focused on “bad” individuals, but there is increasing evidence that the research environment itself (the rush to publish, pressure from financial sponsors, the lure of lucrative grants) plays a role in scientists’ ethical failures. Dr. Sox has concluded that “scientific misconduct is endemic.”
The editors of two of today’s most prestigious medical journals, the British Medical Journal and the Lancet, told a New York Times reporter that medical journals have become “information-laundering operations for the pharmaceutical industry” because they “rely on revenues from [pharmaceutical]-industry advertisements” and sell drug companies reprints of articles about clinical trials of their products.
Recognizing Bias
As informed consumers of natural health information, we may be tempted to view incidents of scientific misconduct as tempests in pharmaceutical teapots that have little impact on our lives. But as alternative medicine increasingly relies on research to support its claims, we must realize that bad science, and media spin, can be bad for our health.
First, bias keeps some studies from receiving funding, being published, and benefitting the public. One study in the Journal of the Royal Society of Medicine (January 2004) found that studies on nonconventional medicines were themselves often victims of bias when being evaluated for publication. “A lack of open mindedness in the peer review process,” wrote the authors, “could affect the introduction of unconventional concepts into medicine.” Clinical trials on drugs flood the medical literature, while important advances in an alternative modality such as orthomolecular medicine receive very little attention from the medical community.
Second, bias plays a role in the way alternative medicine is studied and reported. Despite multiple studies over the last decade demonstrating the benefits of low-fat, healthy-fat diets for heart health and cancer, the Women’s Health Initiative trials published in the Journal of the American Medical Association (February 8, 2006) “proved” those diets’ inefficacy. Critics of the trials point out that they were too short to yield meaningful results, that they didn’t recognize the health benefits of healthy fats, and that the women studied failed to significantly reduce their intake of unhealthy fats.
Vitamins D and E, calcium, glucosamine, saw palmetto, and the whole field of homeopathy have received bad press lately. Media reports on these studies can lead consumers to believe these and other supplements are worthless, but the conclusions reached in many of the studies themselves are, in fact, more nuanced. The studies supposedly discounting glucosamine and chondroitin, for example, actually found the supplements to be almost 25 percent more effective than placebo and more effective than the drug they were compared with.
Just Because It Says So
With the natural health products industry now topping $2 billion in Canada alone, lucrative grants and contracts for clinical trials may make unbiased studies in our own field harder to find as natural health becomes a “hot” area for researchers. Here at alive, we’ll continue to search out well-documented, up-to-date scientific research about natural health products you can trust.