Human Subject: An Investigational Memoir



16. Child Sacrifices

“There is no room for hesitation: the science of life can be established only through experiment, and we can save living beings from death only after having sacrificed others.”

This wasn’t the first time that I had made regular visits to PsychoPharm. When my daughter was 14, she experienced more junior-high angst and parent-directed animosity than seemed within the normal range. For that reason, and because I thought you’re never too young to contribute to generalizable knowledge, I responded to an ad placed by PsychoPharm and enrolled her in a study of an anti-anxiety drug. At least that’s how I remembered what had happened.

Now 23 years old, Emily is a frighteningly well-adjusted college dropout. After I started the desvenlafaxine study, she reminded me that our original reason for going to PsychoPharm was so that she could be screened for a study of antidepressant medication; the investigator had determined that she was eligible for the anxiety study, which needed more subjects. That was her recollection anyway, and she remembered that the study medication was buspirone. She has always believed that taking the drug did her long-lasting physical and emotional harm, though the evidence suggests that she has made a full recovery.

My own memory of the whole process is very hazy, probably because of the guilt I feel over having allowed a pharmaceutical company to test its drug on my precious offspring. To refresh my memory, I decided to ask at PsychoPharm if they still had her records. Someone there had told me that they keep records indefinitely, even though the law only requires that they keep them for seven years.

In 1999 another parent offered up his child for research purposes, although technically the child was 18 and therefore able to give his own consent. Paul Gelsinger’s son Jesse had a genetic liver condition called ornithine transcarbamylase deficiency, which prevented his body from metabolizing ammonia. Babies born with the disease usually die in infancy. In Jesse’s case, the condition resulted from a spontaneous mutation rather than an inherited gene, and his disease could be controlled with diet and medication.

When doctors at the University of Pennsylvania began testing a new gene therapy on human subjects, Jesse agreed to participate. He volunteered with the understanding that the technique being tested might someday save the lives of babies, but that he himself would not benefit from the results.

Jesse relied on his father to evaluate the risks and benefits of the research, but in the end it was his own decision to participate. He read and signed a consent form, which disclosed the risks of the procedures he would undergo. Both Jesse and his father understood that there was a risk of hepatitis, liver damage, or even death.

As it turned out, Jesse’s liver did not react well to the experimental treatment, and he died. At first Paul Gelsinger did not blame the doctors for Jesse’s death. Then he learned that they had omitted some information from the consent form, notably the results of animal studies that had shown harmful effects. Gelsinger had reason to believe that Jesse’s poor liver function should have excluded him from the trial. He also believed that the doctors had lied about the effectiveness of the treatment.

Eventually Paul Gelsinger sued the university. “Too many mistakes had been made,” he later wrote, “and unfortunately, because of our litigious society, it was the only way to correct these problems.” (Gelsinger, 2002) More litigation seems like an illogical approach to the problem of litigiousness. I don’t think lawsuits are ever the only way to correct a problem, but this one may have corrected any cash flow problem Mr. Gelsinger had: The university eventually settled, paying “an undisclosed sum.” It also paid half a million dollars to settle charges filed by the U.S. Justice Department, which alleged that the researchers knew other subjects in the study had experienced serious reactions to the procedure and therefore should have stopped the study before Jesse died.

Another person who found the lawsuit beneficial was Paul Gelsinger’s lawyer, Alan C. Milstein. After negotiating the settlement with Penn, he went on to sue the university on behalf of another patient in the same study as Jesse. Since then he has made a career of filing lawsuits on behalf of study subjects. (See Blumenstyk, 2002) Details of the lawsuits he has filed are at http://www.sskrplaw.com/gene/. In at least one case, he has sued individual IRB members for negligence, but as far as I know, no such case has been won.

The main issue raised by the Gelsinger case was conflict of interest. The director of the institute where the study was conducted, Dr. James M. Wilson, had founded a biotechnology firm that, while it didn’t directly fund that particular study, did provide financial support for the institute. To some people this raised the possibility that Wilson had a lot to gain by lying to subjects and to the FDA about the risks involved.

In 2005, at least partly in response to the Gelsinger case, the National Institutes of Health issued new rules that abolished all corporate consulting by NIH researchers and required all employees to sell their investments in health-related industries. Many in the research community, including some NIH employees, have viewed these measures as disproportionately severe. If the rules had existed in the 1970s and 1980s, argued Thomas P. Stossel in the New England Journal of Medicine, we would not have the benefit today of the important contributions made by the scientist-founded biotechnology industry. He pointed out that most of the fatal adverse events that are reported occur in research without any commercial involvement. “The death rate in industry-sponsored phase 1 oncology trials,” he wrote, “has not changed during the past 10 years, despite growing collaboration between academe and industry. To conclude that the hope of financial gain contributed at all to any errors leading to Gelsinger's death, in the absence of a confession or other evidence, is purely speculative.” (Stossel, 2005, p. 1061)

“The rule in effect will kill all conflict but it will also kill all sorts of gainful cooperation as well,” predicted another researcher, “and there is no willingness to take into account any adverse impact that these restrictions will impose on other organizations whose ability to attract top talent is limited by the regulations.” (Epstein, 2007, p. 82)

After the new rules went into effect, some NIH researchers quit their jobs, or at least threatened to, which prompted one writer to ask: “Is it worth having an institution free of commercial influence if it means losing some of the best government researchers to academia or private enterprise?” After answering “no” to her own question, she went on to write: “The current controversy over the new NIH conflict of interest rules will almost certainly subside over the coming months. Most likely, the rules will be modified slightly to appease angry researchers.” (Gold, 2006, p. 106)

As far as I can tell, a year and a half after the new rules were implemented, there has been no change to them.

Interestingly, the conflict-of-interest concern on the part of those who regulate research seems to be out of step with the views of most study subjects: A survey of participants in cancer research revealed that 90 percent of them were not at all worried about financial ties that researchers or institutions might have to drug companies, and fewer than half said that they would even want to know about such ties. (Carr-Lee, 2007; Hampson et al., 2006)

When you think about it, even the most scrupulous researcher has a conflict of interest. In fact, those with scruples may be the most conflicted of all: They want to be ethical and forthcoming during the informed-consent process, but at the same time, as Boston University bioethicist Larry Glantz has pointed out, they need subjects for their research. (Getz & Borfitz, 2002) Even if they don’t have a financial investment in the treatment being studied, their careers and future earnings depend to a large extent on the results of their research.

Another change prompted by Jesse Gelsinger’s death involved the Office for Protection from Research Risks. Since its creation in 1972, it had been conveniently housed at NIH, where it could keep a close eye on the researchers it oversaw. Another view, and the one apparently held by the panel that reviewed OPRR in 1999, was that the office was too cozily ensconced at NIH, where it could pal around with the researchers it oversaw. So in 2000 the office got a new home in the Office of the Secretary at HHS and a new name, which it still holds today: the Office for Human Research Protections (OHRP).

Paul Gelsinger is now the vice-president of an organization called CIRCARE—Citizens For Responsible Care and Research. According to its Web site (http://www.circare.org), one of CIRCARE’s most important missions is to support the enactment of the National Human Research Protections Act (NHRPA). There was no mention of this act on the organization’s list of proposed federal legislation, so I searched for bills with similar titles, and I found one called the Human Research Subject Protections Act of 1997, introduced by Sen. John Glenn, Jr. (Manned space flight is really the ultimate human-subjects research, so I could understand his interest in the subject, whereas I was clueless as to why two of the five co-sponsors were the senators from Hawaii.) The bill never made it to the Senate floor from the Committee on Labor and Human Resources.

Then there was the Human Research Subject Protections Act of 2000, which expired in the House Subcommittee on Health and Environment. It was followed by the Human Research Subject Protections Act of 2002, which met its demise in the House Subcommittee on Health. Both of those were introduced by Rep. Diana DeGette of Colorado, who was apparently rejection-shy in 2004, but bounced back in 2006 with the Protection for Participants in Research Act of 2006, which died in committee again. It seems that human-subjects protection isn’t a big priority for Congress.


Subpart D of 45 CFR 46 is called “Additional Protections for Children Involved as Subjects in Research.” This subpart of the HHS regulation is not included in the Common Rule as codified by other federal agencies, but some agencies have adopted it, together with the two subparts that deal with prisoners and with pregnant women, human fetuses, and neonates. Many institutions, such as Big U, have also incorporated Subpart D into their policies.

The regulation specifies four levels of research on children that can be funded (in the case of a government agency) or conducted (in the case of an institution):

  1. “Research not involving greater than minimal risk.”
  2. “Research involving greater than minimal risk but presenting the prospect of direct benefit to the individual subjects.” The risk must be justified by the anticipated benefit to the subject, and the relationship of the benefit to the risk must be at least as favorable as that available from alternative approaches.
  3. “Research involving greater than minimal risk and no prospect of direct benefit to individual subjects, but likely to yield generalizable knowledge about the subject's disorder or condition.” The risk must be only slightly more than minimal; the procedures must be similar to those the subject would actually encounter in normal clinical, social, or educational settings; and the intervention or procedure must be likely to generate highly valuable information about the subject’s condition.
  4. “Research not otherwise approvable which presents an opportunity to understand, prevent, or alleviate a serious problem affecting the health or welfare of children.” The opportunity presented must be “reasonable,” and the research must be conducted in accordance with “sound ethical principles.” How’s that for vagueness?

For research in the first three categories, the IRB alone decides whether the proposed study can go forward. Research in the last category, where the risks may greatly outweigh the benefits, additionally requires that “a panel of experts in pertinent disciplines” (e.g., in science, law, and ethics) advise the secretary of HHS, who then determines whether the research meets the established criteria.
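Read as a decision procedure, Subpart D reduces to a small rule table. Here is a minimal sketch in Python (the names are mine, not the regulation’s, and it paraphrases the four categories rather than quoting them) of who has to sign off at each level:

    from enum import Enum

    class Category(Enum):
        """The four levels of research on children under 45 CFR 46, Subpart D."""
        MINIMAL_RISK = "46.404"              # no greater than minimal risk
        DIRECT_BENEFIT = "46.405"            # greater risk, direct benefit to the subject
        CONDITION_KNOWLEDGE = "46.406"       # slightly more than minimal risk; knowledge
                                             # about the subject's disorder or condition
        NOT_OTHERWISE_APPROVABLE = "46.407"  # serious problem affecting children

    def approvers(category: Category) -> list[str]:
        """Who must sign off on a proposed study, by Subpart D category."""
        if category is Category.NOT_OTHERWISE_APPROVABLE:
            # IRB approval alone is not enough: a panel of experts in
            # pertinent disciplines advises the secretary of HHS, who
            # decides whether the research meets the established criteria.
            return ["IRB", "expert panel", "secretary of HHS"]
        # For the first three categories, the IRB alone decides.
        return ["IRB"]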

Any research involving children also requires that they give their assent to participate—if they’re deemed capable of giving it—and that the parents give their permission. Together these two requirements substitute for consent, which children are not legally able to give. Again, it’s the IRB that determines whether the child is capable of giving assent, based on age, maturity, and other factors.
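The consent substitute, in other words, is a simple conjunction. A sketch, with hypothetical names of my own, modeling only the basic rule just described:

    def enrollment_permitted(parents_give_permission: bool,
                             child_capable_of_assent: bool,
                             child_assents: bool) -> bool:
        """Parental permission, plus the child's assent whenever the IRB
        has judged the child capable of giving it (based on age,
        maturity, and other factors)."""
        if not parents_give_permission:
            return False
        if child_capable_of_assent and not child_assents:
            return False  # a capable child's refusal blocks enrollment
        return True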

Some researchers believe that parents should be given more autonomy in weighing the risks and benefits of research, and they believe that the current system, by being overprotective, may hurt children more than it helps them. Children and adolescents are essentially not allowed to be altruistic, and it doesn’t help the situation that some IRBs interpret minimal risk to be no risk at all. (Arnold et al., 1996)

Because the system doesn’t recognize the gradual maturation of children, adolescents don’t get the chance to learn about being selfless. That is, the same regulations apply to a 3-year-old as to a teenager who’s about to turn 18. Of course, there are times when the parents would like to teach a child selflessness, but the little brat will have none of it. The Board on Health Sciences Policy of the Institute of Medicine has stated: “As long as the child is not coerced into assenting, it is reasonable for parents to engage in persuasive discussion, for example, about the importance of helping others. . . . Nonetheless, consistent with the regulations, the child’s dissent should override parental assent when the research does not promise direct benefit to the child.” (Field & Behrman, 2004, p. 191)

Some kinds of research that would be exempt from review if the subjects were all adults do require review when children are involved. These include surveys, interviews, and “observations of public behavior,” except “when the investigator(s) do not participate in the activities being observed.” In other words, I guess, you can do whatever you want to children as long as you aren’t doing it to yourself as well.
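That carve-out, paraphrased rather than quoted, and in the same hypothetical style as the sketches above:

    def exempt_from_review_with_children(activity: str,
                                         investigators_participate: bool) -> bool:
        """Surveys, interviews, and observations of public behavior are
        exempt with adult subjects but not with children -- except pure
        observation of public behavior in which the investigators take
        no part in the activities being observed."""
        return (activity == "observation of public behavior"
                and not investigators_participate)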

