Human Subject: An Investigational Memoir

2. The Protection Racket

“The cowardly assassin, the hero, and the warrior alike plunge the dagger into the breast of their fellow man. What distinguishes them, if not the idea that guides their arm?”

After participating in the oxycodone study, I wondered what percentage of subjects, in the thousands of biomedical research studies published every year, have no contact whatsoever with a medical or scientific professional. Surely, I thought, there were rules somewhere specifying who was qualified to run tests on subjects.

A few minutes’ research revealed that there were indeed rules. In fact, there was an overwhelming collection of treaties, laws, regulations, and guidelines aimed at protecting human research subjects. To try to piece it all together, I started reading up on the history of human-subject protection.

Before World War II, I learned, the Nazis had a reputation for providing quality health care for German citizens and for supporting research on cancer, genetics, and public-health issues. They were even the first to ban smoking in public buildings (Getz & Borfitz, 2002). Unfortunately, they tarnished their reputation just a bit by performing unethical, and frequently fatal, experiments on populations considered expendable, such as Jews and the mentally ill. The experiments included infecting prisoners with cholera, smallpox, malaria, and other diseases; locking prisoners in airtight chambers and observing the effects of rapid decreases in air pressure; and various painful and pointless surgical interventions. Most of the thousands of victims of this experimentation either died or were crippled for life.

In 1947, after hearing the accounts of these atrocities, a tribunal of four American judges sentenced seven of the perpetrators to death. Some of the German doctors had argued in their defense that there were no international laws or guidelines that defined legal or ethical experimentation. Two doctors who had assisted the prosecution wanted to make sure that no one could use that argument in the future. At their urging, the four judges issued a set of ethical principles, which came to be called the Nuremberg Code. It gave specific guidelines for the protection of human subjects, including this commandment: “The experiment should be conducted only by scientifically qualified persons.”

Then in 1964 the World Medical Association, to which the American Medical Association belongs, developed a document called Ethical Principles for Medical Research Involving Human Subjects, better known as the Declaration of Helsinki. Although sometimes characterized as a weakening of the Nuremberg Code, the declaration, which has been revised five times since its inception, is generally viewed as a solid foundation for ethical research practices. One of its Basic Principles states: “Medical research involving human subjects should be conducted only by scientifically qualified persons and under the supervision of a clinically competent medical person. The responsibility for the human subject must always rest with a medically qualified person and never rest on the subject of the research, even though the subject has given his or her consent.”

This all looked promising to me. Surely Big U was obligated to adhere to the Nuremberg Code and the Helsinki rules, wasn’t it?

As they say in the infomercials: But wait! There's more!

In July 1972 a front-page New York Times story revealed that for 40 years researchers had been studying black men with syphilis, an activity that would have been commendable if they had bothered to tell the men in the study what they had and if the men had received any treatment for the disease. The public outcry over the Tuskegee Study of Untreated Syphilis in the Negro Male led to the passage in 1974 of the National Research Act, which authorized federal agencies to regulate research involving human subjects.

The main regulatory effect of the National Research Act was the creation of the institutional review system. All research funded by the Department of Health, Education and Welfare (later Health and Human Services) needed to be approved by an Institutional Review Board, or IRB, at the institution where the research was being conducted.

The act also established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which was charged with developing ethical guidelines for researchers. It seems odd that they put the regulations in place before getting a determination as to what would be most ethical, but I guess they figured they could amend the regulations later, which they did.

In 1976 the NCPHSBBR met at the Belmont Conference Center in Maryland to hammer out some ethical principles. The commission had eleven members, of whom three were physicians. There were also three attorneys, two of whom were law professors. Four other professors represented the fields of psychology, bioethics, “Christian ethics,” and “behavioral biology.” Finally there was one Negro Woman (i.e., the president of the National Council of Negro Women, Inc.), Dorothy Height, who also happened to be a psychologist. This mainly scientific group had help from a bevy of ethicists and a stack of essays on ethics.

NCPHSBBR issued a lot of reports and recommendations over the next few years, but its greatest hit by far was Ethical Principles and Guidelines for the Protection of Human Subjects of Research, released in April 1979. Soon nicknamed the Belmont Report, this succinct document identified three broad ethical principles (respect for persons, beneficence, and justice) and a practical application of each (informed consent, assessment of risks and benefits, and equitable selection of subjects).

I was somewhat dismayed to see that this august body of educated professionals alluded in their report to “the Hippocratic maxim ‘do no harm.’” Hippocrates may have written that phrase in one of his books, but it doesn’t appear in the oath he’s said to have sworn. In fact, the dictate “Primum non nocere”—“First do no harm”—has been attributed to Galen, a Greek physician who practiced in Rome, rather than to Hippocrates himself. Modern versions of the Hippocratic Oath don’t contain the word “harm” at all. But everyone now associates the prohibition on harm-doing with the man who advocated burning hemorrhoids with red-hot irons.

More disappointing, from my point of view, was the Belmont Report’s lack of any mention of scientific qualifications for researchers. But then it wasn’t meant to replace the earlier principles; it was meant to build on them. Surely this important requirement would make it into the subsequent federal regulations, I thought.

In 1981, the Department of Health & Human Services incorporated the Belmont recommendations into the Code of Federal Regulations, Title 45, Part 46, Protection of Human Subjects. The general guidelines for ethical research are contained in Subpart A, known as the Common Rule because of its adoption by many federal agencies and departments other than HHS. Subparts B through D of the regulation mandate additional protections for special populations, such as fetuses, children, and prisoners.

The Common Rule defines “research” as “a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.” Nowhere in the regulations is “generalizable knowledge” defined. Here are just a few of the interpretations I found:

I don’t think any of these explanations quite captures the interpretation applied by most IRBs, but the community college comes closer than the universities or the federal government. The commercial outfit, listed last, wins points both for simplicity and for circularity.

After thoroughly muddying the waters as to the definition of “research,” the Common Rule sets forth three basic requirements for conducting it ethically:

  1. Informed consent—a process and/or document whereby the subject agrees to participate after learning such particulars as the nature of the research, the risks and benefits, any alternatives to participating, any compensation that will be given, and whom to contact with concerns about the study.
  2. Institutional review—a process of evaluating proposed research to make sure that risks are minimal and reasonable (in relation to anticipated benefits), that subject selection is equitable, and that informed consent requirements will be met.
  3. Institutional assurance—a written agreement that the institution will comply with the provisions of the Common Rule. This document is usually referred to as a “Federalwide Assurance,” or FWA.

Much to my chagrin, nowhere in the Common Rule does it say that research is to be conducted only by scientifically qualified persons. Now maybe this is because there are so many other regulations—and there’s such stringent review of proposed studies—that an unqualified person couldn’t get away with conducting research. But a researcher could still get a study approved and then allow unsupervised, minimally trained students and assistants to do all the work. Of course, to my knowledge no one has ever defined “scientifically qualified.” I suppose it’s up to the principal investigator (PI) to determine when someone is qualified, but to me this seems like a significant oversight (or lack thereof).


Major research institutions now have huge internal bureaucracies to implement the requirement of institutional review. At Big U a staff of more than 40 people coordinates the work of seven IRBs in an office that I’ll call InvestiGuard. The Common Rule only requires IRBs to review “research conducted or supported by any Federal Department or Agency which takes appropriate administrative action to make the policy applicable to such research.” According to its procedural manual, InvestiGuard has expanded its jurisdiction to include all research conducted by faculty, staff, or students at the university, whatever the funding source. This is apparently a choice that a lot of institutions and their IRBs have made.

What InvestiGuard doesn’t make clear is whether the IRB’s power extends to research that a student or staff member conducts entirely on his or her own time and with his or her own resources. If a part-time student decides to conduct a study that’s unrelated to any course and doesn’t use any university resources, does the IRB have jurisdiction? The wording in its policy manual would indicate that it does. I find that a bit scary.

IRBs must have at least five members, with at least one member from a scientific discipline, at least one member without a scientific background, and at least one who is not affiliated with the institution. Apart from that, the size and composition of an IRB are unspecified. InvestiGuard has seven different IRB committees; some review purely biomedical research, while others review “behavioral” research (sociology, psychology, anthropology, etc.). The meeting locations and member names are not disclosed, but this seems to be a matter of institutional choice, as some other institutions post this information on their Web sites. What InvestiGuard does make available online is an application to join an IRB committee. Since the minimum commitment is only a year, and there’s no payment for the 30 hours or so of work each month, I can see why they need this permanent recruitment page.

While IRBs are busy recruiting people to review research proposals, the researchers are busy recruiting subjects, a task that has been streamlined in recent years by Web registries of clinical trials. Registration is aimed at preventing researchers, especially in the pharmaceutical industry, from just throwing away data that don't reflect well on a particular treatment option. The International Committee of Medical Journal Editors, representing the most widely read journals in the U.S. and several overseas journals as well, announced in 2004 that its members would only publish reports of clinical trials if those trials had been registered prior to enrollment of the first subject. A “clinical trial,” according to the ICMJE, is “any research project that prospectively assigns human subjects to intervention and comparison groups to study the cause-and-effect relationship between a medical intervention and a health outcome.” (http://www.icmje.org/clin_trialup.htm)

There is no law that requires registration of clinical trials, but people are registering them so that they can get their results published (and also to earn the trust of research subjects and colleagues). In a similar vein, it is now possible for an organization to get voluntary accreditation of its human-subjects protection program. The Association for the Accreditation of Human Research Protection Programs, Inc. (AAHRPP) “works to protect the rights and welfare of research participants and promote scientifically meritorious and ethically sound research by fostering and advancing the ethical and professional conduct of persons and organizations that engage in research with human participants.” (http://www.aahrpp.org/) As of this writing, two private IRBs in my state have been accredited, but Big U is just beginning the application process.

As with any unwieldy bureaucracy on which many thousands of people depend, the IRB system has its share of critics. A growing contingent of those being regulated, and a tiny minority of research ethicists, believe that IRBs have assumed more power than the regulations ever intended. That is, their efforts to control the research process go well beyond their original mandate.

A Web site called IRB Watch, http://www.irbwatch.org/, exists “to chronicle the abuses by IRB's.” “Over the past decade,” the home page says, “IRB's have grown greatly in power and range of authority. The home institutions have, however, largely abrogated their responsibility to oversee and control the procedures followed by IRB's. As a consequence, the IRB's have increasingly harassed researchers and slowed down important research, without protecting any human research participants.”

In the face of this growing criticism, the beleaguered IRB professionals are circling the wagons. An online discussion group called the IRB Forum has about 7,000 members. Most of those who post to the forum have an occupational connection with IRBs, though there are a few nonaffiliated types, including at least one research subject with a serious illness. The discussion generally involves dedicated protectors of human subjects weighing in with advice on questions raised by their fellow protectors.

One forum posting in May 2007 described a Ph.D. candidate who reported that she had accidentally violated a part of her own protocol: She had neglected to require subjects with high blood pressure to bring a physician’s authorization to participate in her exercise study. The research was finished, and no subjects had been injured, but because of this oversight, the IRB was considering whether to disqualify her from receiving her Ph.D. or whether instead to allow her to exclude those subjects from her data analysis.

The reactions to this scenario were all over the map, from punishing the student to punishing her thesis adviser to just letting it go. The range of opinions reflected the many different levels of IRB involvement and control at different institutions.

In another posting, a few weeks later, an IRB administrator described the case of a hand therapist who wanted to weigh the handbags carried by his patients. The therapist would not be collecting any information about the individuals, just their handbags; he would use the information for patient education on the institution’s Web site. The administrator wanted to know if this research would be exempt from review. While several respondents thought it would qualify for exemption or expedited review, many (this was a hot topic for a day or so) sensibly realized that this was not human-subjects research at all. Less sensibly, most of them still thought the IRB would need to issue some kind of formal dispensation to the researcher before he was allowed to weigh inanimate objects.

One posting asked whose fault it is if a study is suspended because the PI failed to submit an annual review report on time. Apparently the PI was blaming the IRB for not being more forceful in demanding compliance. None of the many responses questioned whether suspending potentially valuable research is really the only reasonable sanction for tardiness, and only one person mentioned the futility of assessing blame. According to the regulations, the only exception to suspension would be a case where subjects would be adversely affected by having the research stopped.

In an article called “The IRB Paradox: Could the Protectors Also Encourage Deceit?”, Patricia Keith-Spiegel and Gerald P. Koocher argue that the actions and, more importantly, the attitudes of many IRBs have encouraged unethical behavior on the part of researchers (Keith-Spiegel & Koocher, 2005). They give examples of researchers who begin collecting data before getting IRB approval (or in some cases before even submitting a proposal), because the IRB is unresponsive, arrogant, or just plain slow to act. In one case they cite, the IRB in question “imposes strict requirements for consent and allowable risks that considerably exceed federal guidelines.” Therefore, the investigator doesn’t bother to submit his proposal, because he believes it won’t be approved; he goes ahead with the research without approval. In another case, the researcher collects data as part of “regular educational assignments”; later she may submit an application to use the data that was previously collected for nonresearch purposes.

Each institution has an appeals process, which usually involves taking one’s concerns to some kind of executive or ad hoc committee made up of current or former IRB members. One IRB Forum participant wrote that the appeals board “often gave an advisory opinion about other avenues that might be explored, or other information the investigator might provide, in order to achieve approval. But we always remanded the matter back to the IRB of origin.” (IRB Forum, 2007-04-19) Another writer added, more colorfully: “Negotiations between an oppressed PI and the gods on Mount IRB can get a bit testy.”


If it accomplished nothing else, at least my reading about human subjects solved a mystery that had been gnawing at me for years. I was sure I had heard or read a true story, probably when I was in elementary school, about a doctor who had observed the inner workings of a man’s stomach for many years. The way I remembered the story, a huge section of skin and muscle had been replaced by glass, the better to see the man’s organs and still allow them to function.

The more I thought about this story, the more ridiculous it seemed, but I was still convinced that it had a factual basis. Then, in the course of my reading, I came across the tale of Alexis St. Martin, a 28-year-old Canadian fur-company worker, who was accidentally shot in the stomach in 1822. The surgeon at a nearby U.S. Army fort treated the wound, and the patient recovered. The wound, however, healed in a way that resulted in unprecedented research opportunities for the physician, William Beaumont, and lifelong difficulties for the patient: The edges of the stomach wound became attached to the edges of the hole in the skin, creating a permanent window, albeit one without glass, into St. Martin’s stomach.

Beaumont took full advantage of the situation, hiring his former patient as a handyman so that he could conduct experiments on him. These consisted mainly of inserting pieces of food, attached to a string, into the stomach hole, and then removing them at intervals to observe the state of digestion. Other experiments involved removing gastric acid and analyzing it. Naturally St. Martin had to keep the hole covered while he was actually digesting food, or else the food would have fallen out.

Beaumont’s prize subject left him after a few years, only to rejoin him when the doctor pursued him and offered him a substantial salary. Beaumont’s discoveries contributed greatly to what we now know about digestion; the medical profession continues to honor him by giving prizes and awards in his name, and there are several medical facilities named after him, including an Army hospital in Texas. Beaumont died in 1853, but St. Martin, the hapless opportunist, continued to exhibit his gastric anomaly for pay.

If Dr. Beaumont had been required to submit his research proposals to an IRB, the relationship he had with his patient would have been considered coercive, rendering informed consent impossible. And the risks of such invasive experimentation would have been considered too great, especially since there was no benefit to the patient (other than the generous pay package, which would be considered unethically excessive).

Knowing that he wouldn’t be allowed to conduct experiments, would Beaumont have attended to his patient so diligently? Perhaps St. Martin would have died of his wound. If he had survived, perhaps he would have been unable to obtain gainful employment (his disability did cause him to lose his job with the fur company). It seems likely that he was both richer and healthier (i.e., alive versus dead) for his chance encounter with the insatiably curious surgeon. And thanks to his chance encounter with a stray bullet, we have a much richer knowledge of gastrointestinal phenomena.

For there to be true autonomy, the participation decision must be made by the study subject and not by a regulatory body or institutional committee. Yes, we need special protections for incapacitated or under-age participants, just as we do in other areas of medical and legal decision-making, but the current system denies mature, capable people the opportunity to weigh the risks and benefits for themselves. I don’t think the drafters of the Nuremberg Code envisioned such a restrictive research climate when they set out to make sure that the atrocities committed by Nazi war criminals could never happen again.

