APBN

Asia-Pacific Biotech News

The ABCs of Clinical Trials

Clinical trials are the make-or-break stage of a drug’s long journey from initial discovery to eventual commercialisation. What happens in clinical trials? Who is involved? Why is the failure rate so high? We delve into the dark side of the pharmaceutical industry to answer these questions.

by Shaun Tan Yi Jie

“In no other industry does it cost so much or take so long to bring a new product to market. None are as heavily regulated, as uncertain, or susceptible to such unforeseeable, disastrous failure.”1 These are the words of freelance medical writer Robert M. Rydzewski, describing the pharmaceutical industry in his book Real World Drug Discovery.

Indeed, the process of developing a new drug is costly, lengthy and risky. The drug has to pass through many stages of testing on humans, called clinical trials, to demonstrate safety (its side effects are acceptable relative to its benefits) and efficacy (it does what it is claimed to do). Clinical trials are heavily regulated, owing to a dark history of deaths and disasters.


How Did Clinical Trials Come About?

The tragic history goes all the way back to 1906 and the Pure Food and Drug Act, which mandated that all preparations containing drugs be labeled to reflect their contents and amounts. The motivation for the Act was to weed out “patent medicines” – preparations that were falsely promoted and sold as medicinal cures. These were rampant at the time, ranging from snake-oil liniments sold as cure-alls to cocaine for toothaches.

Then came the sulfanilamide disaster of 1937. Sulfanilamide was a popular antimicrobial agent at the time, as penicillin was not yet available, and its effectiveness in preventing bacterial infections created demand for a liquid formulation that could be drunk.

Massengill, a pharmaceutical manufacturer, did just that, using diethylene glycol (DEG) as a solvent, and called the preparation “Elixir Sulfanilamide”. Unknown to the company’s chief chemist, Harold Watkins, DEG is toxic to humans. Within two months, 107 deaths were attributed to the medicine, many of them children.

Hence, the 1938 Food, Drug and Cosmetic Act was created, which required drugs to be proven safe for their intended use.

Unfortunately, because punishments could not be applied retroactively under the law, Massengill could only be fined under the 1906 Pure Food and Drug Act, which prohibited labeling the preparation an “elixir” when it contained no ethanol. Dr. Samuel Evans Massengill, the firm’s owner, infamously said, “My chemists and I deeply regret the fatal results, but there was no error in the manufacture of the product. We have been supplying a legitimate professional demand and not once could have foreseen the unlooked-for results. I do not feel that there was any responsibility on our part.”2 Watkins apparently did not feel the same way; he committed suicide while awaiting trial.

In 1961, another tragedy struck. A German company, Chemie Grünenthal, had synthesised a new drug called thalidomide and introduced it as a sedative in 1957.

However, researchers at the company apparently found that the drug could alleviate morning sickness as well, so it was widely promoted over the counter for treating morning sickness in pregnant women – a much bigger market.

By 1961, over 2,000 babies had died and 11,000 had been born with severe birth defects such as phocomelia (deformed limbs). The toll would have been greater had it not been for the heroic efforts of FDA medical officer Dr. Frances Kelsey, who withstood pressure from the company and refused to approve the drug for sale in the United States, citing concerns about its side effects. “I just held my ground. I wouldn’t approve it,” Dr. Kelsey said in an interview. “The information as presented was very sketchy. I just didn’t like it from the start. It was just too overblown. And they didn’t have any evidence to submit. They were so sure it was good because of its popularity in England.”3

While the thalidomide disaster is considered one of the darkest episodes in pharmaceutical research history, it also created the foundation for the modern pharmaceutical industry.

These days, rules on the testing and licensing of drugs are extremely stringent, and less than one per cent of all drugs discovered make it to commercialisation. Each new drug has to make it through four stages of clinical trials, which take eight to ten years on average and cost about US$2 billion.


Stages of a Clinical Trial

The four stages of a clinical trial are known as phases. Phase I assesses the safety of the drug – how much to give, how often, how the body eliminates the drug and what the side effects are. It involves short-term treatment (a few months) in a small number of healthy volunteers (usually 20–50).

Phases II and III evaluate the efficacy of the drug – does it do what it is supposed to do for patients? Phase III enrolls more patients than Phase II (thousands versus hundreds). In Phase III, the results are also compared against the standard treatment on the market: even if a new drug is effective, there is no incentive to mass-produce it if it performs no better than what is already out there.

Phase IV is conducted after the drug is approved, commercialised and sold. It is a post-marketing surveillance trial, observing how well the drug works in a wider population and uncovering any long-term effects. Phase IV results can lead to one of two outcomes: 1) label expansion, if a new positive effect is observed in the wider population, or 2) a black box warning, if new negative effects come to light.


Design of a Clinical Trial

The safety and efficacy of drugs are therefore demonstrated within the controlled environment of a clinical trial. Volunteers for a clinical trial are usually hospital patients, although not all who volunteer are selected. The trial’s administrators carefully choose those they deem suitable. For example, if the trial is testing a new heart disease drug, it makes little sense to choose patients who also have other illnesses such as diabetes, because it then becomes difficult to attribute any observed improvement in health to the drug being tested.

The selected volunteers are divided into two groups – one receiving the drug (the treatment group) and one not (the control group). To ensure fairness, the assignment is randomised, with each person having an equal chance of being placed in either group. In most well-designed clinical trials, both the patient and the nurse or doctor are kept unaware of the assignment; this is called a double-blind trial. If only the patient is in the dark, the trial is single-blind. And if all parties are aware, the trial is open label. Open label trials are sometimes inevitable, such as a trial comparing the effectiveness of medication versus surgery for cancer.
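For readers who want to see the mechanics, here is a minimal sketch in Python of equal-probability random assignment. The participant IDs and function name are hypothetical; real trials use validated randomisation systems (often with block or stratified randomisation), not an ad hoc script like this.

    import random

    def randomise(participants, seed=None):
        """Assign each participant to 'treatment' or 'control' with equal probability."""
        rng = random.Random(seed)
        return {pid: rng.choice(["treatment", "control"]) for pid in participants}

    # Hypothetical volunteer IDs P001..P020. In a double-blind trial, neither
    # patients nor clinicians would ever see this mapping; they would only
    # handle coded drug kits keyed to each ID.
    volunteers = ["P%03d" % i for i in range(1, 21)]
    assignments = randomise(volunteers, seed=42)
    print(assignments["P001"])  # e.g. 'treatment'

One practical caveat: simple randomisation like this can leave the two groups unequal in size in a small trial, which is one reason block randomisation (balancing assignments within fixed-size blocks) is common in practice.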

Otherwise, it is essential to blind the patient in order to control for the placebo effect. The placebo effect is an expectation effect: if you believe you are receiving a drug that will make you feel better, you actually do feel better after taking it, regardless of whether the drug is real. A blinded patient does not know whether the drug given is the real one, which minimises the placebo effect’s influence on the results.

The placebo effect is controversial. In 2010, medical researchers Asbjørn Hróbjartsson and Peter C. Gøtzsche concluded that their study “did not find that placebo interventions have important clinical effects in general.”4 On the other hand, clinical epidemiologist Jeremy Howick has argued that combining so many varied studies to produce a single average is misleading: “Even if the average placebo effect (for any placebo for any disease) is quite small, some placebos for some things could be quite effective.”5 There is also an opposite effect called the nocebo effect: if a patient anticipates a side effect of a medication, they can suffer that effect even if the medication is fake.

Most trials nowadays also blind the doctor or nurse who administers the drug, so as to remove observer bias. The doctor sometimes has a vested interest in the trial, for example if they were involved in developing the drug. In that case, knowing which patients are receiving the real treatment may consciously or unconsciously influence their reporting of observed effects.


The Good, the Bad and the Ugly

While failure is more common in clinical trials, sometimes there can be unexpected success. A famous example is sildenafil, more commonly known as Viagra. Sildenafil was synthesised by Pfizer and originally designed for cardiovascular disease. But the drug did not perform well in Phase I, so the researchers decided to increase the dose. That was when they discovered its side effect: it could induce prolonged erections. Pfizer therefore decided to market it for erectile dysfunction instead, and Viagra has been a worldwide success since.

But when clinical trials fail, the consequences can be disastrous. In 2016, Biotrial, a contract research company, tested BIA 10-2474, a drug developed by the Portuguese pharmaceutical company Bial for various diseases.6 The study began in July 2015, and no serious side effects were observed. But then eight people entered the study on 6 January and received multiple, high doses. On 10 January, the fifth day of dosing, one man fell ill and was taken to hospital. The next day, Biotrial staff gave the remaining seven participants their daily dose without investigating how the man was doing. The patient was declared brain-dead later that day, and the study was halted.

As in every area of life, there are black sheep, and Trial 329 is an infamous example. Trial 329 was a clinical study of paroxetine, a drug for adolescent depression, begun by GlaxoSmithKline (GSK) in 1994. Suspicions were raised when it became evident that many who took the drug showed increased suicidal thinking and behaviour. In 2001, it was uncovered that the trial report presented four outcomes showing the drug was effective. But it transpired that eight outcomes had originally been measured, none of which favored the drug. The researchers then measured 19 new outcomes, of which four gave the results they wanted. A leaked memo discussed how to “effectively manage the dissemination of these data in order to minimise any potential negative commercial impact”.7 GSK thus reported these four outcomes as if they had been the intended outcomes of the clinical trial all along. This led to many lawsuits and has become a classic case study of “outcome switching”: reporting only favorable outcomes and passing them off as the original aims.

Whether it is due to careless oversight in conducting clinical trials, deliberate malpractice or a perception that drug companies go by the slogan “profit first, patient second”, the pharmaceutical industry has suffered a reputational decline over the past few decades. “Few other industries are as poorly understood or viewed as negatively by the public”, says Rydzewski.1 Will this change?

The onus lies with the companies to restore ethical behavior in conducting clinical trials, to be more transparent in reporting trial results and, ultimately, to refocus on patients, not profits. For without patients, how can there be profits? [APBN]


About the Author

Shaun Tan Yi Jie has recently graduated from the National University of Singapore (NUS). He will commence his PhD studies in Chemistry at NUS in August.