EBM grew from the belief that the practice of medicine should be based not just on the experience of individual doctors, common practices or the advice of “experts,” but on evidence from high-quality medical research. A 1992 article in the Journal of the American Medical Association (JAMA) called EBM a “new paradigm” that required research literature to be evaluated by “formal rules of evidence.”
The authors of that article were most likely not referring to the rules of evidence used in legal proceedings, but they might just as well have been. For it has largely been through lawsuits that physicians, consumers and researchers discovered a fatal flaw in this new paradigm. That flaw is that the research EBM depends upon is largely created by pharmaceutical companies. And many of those companies have viewed medical research as something to be manipulated, altered, falsified, or kept hidden in order to sell drugs and expand markets.
By 2004, the evidence base had been corrupted to the point that chief, or former chief, editors at three of the world’s leading medical journals—Marcia Angell (The New England Journal of Medicine), Richard Horton (The Lancet), and Richard Smith (British Medical Journal)—were all saying that medical journals had largely been transformed into marketing tools for the pharmaceutical industry. Dr. Angell administered what might well have been considered EBM’s last rites in a piece published by the New York Review of Books in 2009. “It is simply no longer possible,” she wrote, “to believe much of the clinical research that is published.” Without trustworthy research to sustain it, EBM, as a practical reality, and after an astonishingly short life, had passed away. The “cause of death,” verified by dozens of doctors and scientists, was drug company poisoning.
“It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines.” Marcia Angell, “Drug Companies and Doctors,” The New York Review of Books, January 15, 2009
The Celexa and Lexapro Lawsuit Exposed the Misleading Celexa Study
New facts evidencing the decline of evidence-based medicine continue to be uncovered. For instance, a Celexa study published in the American Journal of Psychiatry (AJP) in 2004 was recently discovered to be a textbook example of what the journal editors warned about that very same year. The study, which appeared to be the work of several prominent psychiatrists, reported the results of a clinical trial of Celexa (formally identified as CIT-MD-18) for the treatment of depression in children and adolescents. (In a clinical trial, drugs are tested on human subjects.) The study concluded that Celexa was safe and effective for children and adolescents.
As a result of the 2013 Celexa and Lexapro lawsuit (Celexa and Lexapro Marketing and Sales Practices Litigation, No. 09–MD–2067 (NMG)), the underlying data from the study were finally revealed through a legal process called “discovery.” Three investigators with access to the previously hidden discovery documents—Jon Jureidini, a child psychiatrist; Jay Amsterdam, a psychiatrist in the Department of Psychiatry at the University of Pennsylvania School of Medicine; and Leemon McHenry, a researcher and professor at California State University Northridge—delivered their conclusions in the International Journal of Risk and Safety in Medicine in March 2016.
Jureidini and his colleagues, using internal Forest emails and other documents turned over to plaintiffs’ attorneys in the lawsuit, were able to expose the inner workings of the process by which drug makers turn medical journals into sales brochures.
The investigators found that the “authors” named on the Celexa study didn’t actually write it. In fact, the principal “authors” were individuals who put their names on the paper after someone else wrote it. Jureidini et al. found no evidence that the lead author was involved in designing the trial, analyzing the trial results or writing the study manuscript. The manuscript was actually written by Forest company employees and a “ghostwriter” who worked for an outside marketing company.
This ghostwriting was part of a carefully orchestrated Forest marketing plan. Multiple emails make it clear that:
The content of the AJP paper was controlled by the marketing department at Forest.
The primary goal was to use the ghostwritten report for public relations purposes and to promote Celexa and Lexapro (escitalopram, a closely related antidepressant also made by Forest) to doctors through Continuing Medical Education courses.
The investigators also found several serious violations of the Celexa study protocol—the plan every clinical trial must have, which specifies exactly how the research will be carried out and which key outcome measures will determine whether the drug is more effective than the placebo.
The primary outcome measure in the study was the change in the subjects’ scores on the Children’s Depression Rating Scale-Revised (CDRS-R). The CDRS-R score is determined by interviews that cover various aspects of a child’s life, including mood, suicidal thoughts, school work, self-esteem, social withdrawal, and sleep disturbance. After eight weeks, there was no statistically significant difference between the change in CDRS-R scores of those taking Celexa and the change in those taking the placebo. Contrary to the study protocol, Forest added back into its analysis the CDRS-R scores of eight “unblinded” subjects (i.e., eight subjects who were mistakenly given pink pills, which identified their medication as active Celexa rather than placebo)—a violation of both the protocol and FDA requirements that clinical trials be well controlled.
Only by adding the eight unblinded subjects was it possible to achieve the statistically significant difference between the drug and placebo groups that was reported in the AJP. Without the unblinded patients, the results did not achieve statistical significance, making the study, in actuality, a negative study. Even with the borderline statistical significance created by improperly including the unblinded patients, the difference in CDRS-R scores between the two groups was not, in the view of Jureidini et al., clinically significant. In other words, by the end of the trial, those taking Celexa, even with the unblinded patients included, were not noticeably less depressed than those taking the placebo. The unblinding was not mentioned in the journal article.
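The statistical sleight of hand described above can be illustrated with a small simulation. The numbers below are invented for illustration (they are not the actual CIT-MD-18 data); a simple permutation test shows how folding a few strongly favorable unblinded subjects into the drug arm can push an otherwise non-significant comparison past the conventional p < 0.05 threshold:

```python
# Illustrative sketch only: all scores below are hypothetical, not the real
# CIT-MD-18 data. It demonstrates the mechanism, not the trial's actual numbers.
import random
import statistics

def permutation_p(group_a, group_b, reps=20_000, seed=1):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    hits = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            hits += 1
    return hits / reps

# Hypothetical CDRS-R improvement scores (larger = more improvement).
placebo        = [20, 18, 22, 19, 21, 17, 23, 20, 18, 22, 19, 21]
drug_blinded   = [21, 19, 23, 20, 22, 18, 24, 21, 19, 23, 20, 22]
drug_unblinded = [28, 27, 29, 28, 27, 29, 28, 28]  # hypothetical "pink pill" subjects

p_protocol = permutation_p(drug_blinded, placebo)                   # per protocol
p_padded   = permutation_p(drug_blinded + drug_unblinded, placebo)  # protocol violated

print(f"p (blinded subjects only):    {p_protocol:.3f}")  # not significant
print(f"p (unblinded subjects added): {p_padded:.3f}")    # "significant"
```

With these invented numbers, the protocol-compliant comparison comes in well above 0.05, while the padded comparison falls below it, mirroring the pattern Jureidini’s team described.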
The Celexa study protocol also called for an analysis to determine whether the age group of the subjects—children (ages 7-11) or adolescents (ages 12-17)—affected treatment efficacy. Jureidini’s team found that, even with the eight unblinded subjects included, the CDRS-R scores indicated that Celexa was not effective in children. This result was not made public and not reported in the AJP article. The article only disclosed that the Celexa trial “was not powered sufficiently to detect treatment differences by age group.” (Statistical power refers to the ability of a study to detect a true effect that is not just the result of chance.) This left open the possibility that, when the CDRS-R statistics for both groups (children and adolescents) were combined, the positive (i.e., better than placebo) change scores in one group compensated for the negative scores in the other group, producing a slightly positive outcome. The effectiveness of Celexa in one of the groups could have hidden the ineffectiveness of Celexa in the other group. Citing the alleged lack of power, Forest falsely asserted that it could not determine whether Celexa was statistically superior to placebo in either age group. In short, the trial provided no basis for concluding that Celexa was effective for children and adolescents.
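The masking effect described above is simple weighted-average arithmetic. With invented numbers (chosen only for illustration, not taken from the trial), a pooled result can come out positive even when one subgroup responded worse than placebo:

```python
# Illustrative arithmetic only (invented numbers, not the actual trial data):
# combining two age groups can mask a negative result in one of them.
# Mean CDRS-R improvement difference (drug minus placebo) per hypothetical subgroup:
children_diff, n_children = -1.5, 40  # drug did WORSE than placebo in children
adol_diff, n_adol         =  3.0, 60  # drug did better than placebo in adolescents

# Pooled (weighted) difference reported for the trial as a whole:
combined = (children_diff * n_children + adol_diff * n_adol) / (n_children + n_adol)
print(f"combined drug-vs-placebo difference: {combined:+.2f}")  # +1.20
```

Here the pooled figure (+1.20) looks favorable even though the drug underperformed placebo in the children’s subgroup, which is exactly why the protocol called for reporting the age groups separately.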
Issues surrounding the statistical power of the Celexa study are puzzling to say the least. On page two of the paper, the “authors” state they carried out a statistical analysis to see if age affected efficacy. Two pages later, they explain their study was not sufficiently powered to make the results of that analysis meaningful. This is head-spinning. One lingering question is why the reviewers at the AJP did not notice.
Jureidini’s team also uncovered several other questionable practices.
The results of four secondary outcome measures that were part of the study protocol were negative, but those results were not mentioned in the AJP paper, nor was there any mention that those outcome measures were part of the protocol.
The authors failed to mention several Celexa side effects that developed during the trial. Hypomania (a mild form of mania), agitation, akathisia and anxiety were seen in the Celexa group, but not in those taking the placebo. All of these effects are associated with an increased risk of suicidality. The largest study of SSRI antidepressants and suicide to date found that, in 70 clinical trials, SSRIs increased rates of suicidality (defined as suicide, suicide attempt or preparatory behavior, intentional self-harm, and suicidal ideation) in children and adolescents by nearly two-and-a-half times. The authors believe the increased risk in the real world may be much higher than the increase found in the trials.
“We have concluded that citalopram’s apparent superiority arises from Forest management and the ghostwriters….” Jon Jureidini et al., “The citalopram CIT-MD-18 pediatric depression trial: Deconstruction of medical ghostwriting, data mischaracterisation and academic malfeasance,” International Journal of Risk and Safety in Medicine, March 2016.
It should be noted that mania is one of the most dangerous SSRI and Celexa side effects. In children, it frequently leads to the diagnosis of bipolar disorder, increased drugging with antipsychotics and mood stabilizers, and chronic disability. One can’t help but wonder whether this is also part of the marketing plan.
Celexa and the Future of Medical Practice
The full extent of the pharmaceutical industry’s influence on medical practice continues to be uncovered through lawsuits and the work of concerned scientists and academics. Of course, this is at the core of the problem. It can be years before the public finds out what a drug company has hidden. Jureidini’s paper, published in 2016, contains copies of Forest emails from 2001 in which Forest executives discuss the fact that “some of the [outcome] measures didn’t look that great” and “not all the data look as great as the primary outcome data” (which we now know was not “great” either). What Forest knew in 2001, readers of Jureidini’s paper discovered 15 years later.
Evidence-based medicine is not only compromised by falsified studies. The results of a large percentage of clinical trials, even those carried out at leading academic medical centers (AMCs), are never published. (The influence of the drug industry in academia is considerable. A 2014 study published in the Journal of the American Medical Association found that 40% of the pharmaceutical companies investigated, including drug giants Eli Lilly, GlaxoSmithKline, Johnson & Johnson, Pfizer and Forest Laboratories, had board members who simultaneously served in leadership positions in AMCs.) When a study goes unpublished, its evidence never sees the light of day. This is a particular problem when children are involved. Non-publication also contributes to a highly distorted impression of the effectiveness of antidepressants. Both practices—publishing negative results as if they were positive and failing to publish at all—put sales and marketing, not evidence and not science, in the driver’s seat when it comes to the practice of medicine. Documents uncovered as a result of the Celexa and Lexapro lawsuit, and many other lawsuits against drug makers, suggest that this is exactly what the pharmaceutical industry wants.