Human Subject: An Investigational Memoir


10. First Do No Research

“Each science has its own problem and its own point of view, which must not be confused without risk of leading scientific research astray.”

My friend “Bob” was doing some research for a class he was taking at Big U. He wanted to ask Internet users to take part in an online study. There was no personal information being collected, and the users would be anonymous. Still, his professor thought it would be a good idea for Bob to apply for an exemption from institutional review. As you may recall from chapter 6, at Big U the department head decides whether proposed research should be exempt from review, but the researcher still must submit an application to InvestiGuard to get a certificate of exemption.

A few days after Bob submitted his application for exemption, he got an email from someone at InvestiGuard, asking a series of detailed questions about his proposed research.

One of the questions asked whether the study posed any risk of “inflicted insight”—a phrase Bob had to look up before he could give an answer. Eventually, after providing copious details in a lengthy email exchange, he was granted the exemption.

The term “inflicted insight” was coined by psychologist Diana Baumrind in a 1964 article criticizing experiments done by Stanley Milgram (Baumrind, 1964). In Milgram's experiments, subjects acting as teachers were made to think that they were inflicting painful punishment on a slow learner, under orders from a researcher. People continued to obey the researcher, increasing the level of pain inflicted, even when the “learner” was apparently in a great deal of distress.

Milgram’s findings suggested the lengths to which people will go to obey a respected authority figure. When the same experiment was conducted in a less prestigious setting than an Ivy League university, obedience rates fell (but then the rates were higher at Princeton than at Yale, for some reason that, to my knowledge, no one has attempted to explain). Baumrind and others claimed that it was unethical to force people to recognize that they were capable of cruelty, that this type of self-knowledge was too emotionally devastating. But the majority of Milgram's subjects (84 percent) revealed in a follow-up questionnaire that they were glad to have participated; 15 percent had neutral feelings, and fewer than 2 percent regretted participating. (Milgram, 1974)

Milgram certainly wasn’t the only researcher to inflict unwanted insight on his subjects before federal regulations put a stop to such investigator-instigated revelations. In 1971, the Stanford Prison Experiment demonstrated how an institutional setting—or any new situation—can influence a person’s attitudes and behavior.

Principal investigator Philip Zimbardo (a high-school classmate of Milgram’s) converted the basement of the Stanford University psychology department into a temporary jail, complete with locked cells, and identified 25 mentally healthy student volunteers to populate it. (I don’t think they were screened for race and socioeconomic status, but they were mostly white and middle-class.) Each subject was assigned randomly to the role of guard or prisoner. A research assistant performed the job of prison warden, while Zimbardo acted as both prison superintendent and the study’s PI. He later admitted that taking on that dual role made it impossible for him to maintain the neutrality required of a researcher.

The experiment was supposed to last two weeks, but it ended on the sixth day, when Zimbardo’s then-girlfriend (later his wife) visited the “prison” and was horrified by the behavior of the increasingly sadistic guards. By that time more than half of the prisoners had succumbed to emotional breakdowns or psychosomatic illness. Zimbardo later said that he should have ended the study after two days. However, none of the other visitors to the site of the experiment, including a Catholic priest who interviewed all the prisoners, had called for the experiment to be shut down.

The experiment is still widely discussed today, partly because of its pre-Belmont lack of emphasis on subjects’ rights, but mainly because of what it purported to reveal about human nature. At a recent viewing and discussion of Quiet Rage, Zimbardo’s 1991 film about the study, most of the comments after the screening dealt with the profound implications of the results of the study. People mentioned connections with workplace hierarchies, family relationships, and military service, including the behavior of guards at the Abu Ghraib prison in Iraq. But when I remarked that it was too bad that a similar experiment could never be conducted today, none of the 40 or so people in the room expressed agreement with me. The consensus seemed to be that it was an unethical experiment, which should never have taken place.

The Stanford human subjects committee had approved the experiment, and Zimbardo has said that he still thinks review committees should “allow some controversial things to be done but in a highly monitored way. . . . there should be the option of an independent overseer blowing the whistle at any time.” He also thinks that, while it may be unethical for researchers to be the ones to reveal a person’s dark side, it’s important for the person—and the world—to know that the dark side exists. (Stanford University, 1997)

The consent form signed by the students in the prison experiment would never make it past an IRB today. It said that subjects would only be released from the experiment for reasons of health or for other reasons approved by the PI (and prison superintendent), Zimbardo. Today’s consent forms give you an out for any—or no—reason, based on Article 22 of the Helsinki Declaration, which states in part: “The subject should be informed of the right to abstain from participation in the study or to withdraw consent to participate at any time without reprisal.” A few researchers and ethicists have questioned the blanket right to withdraw at any time, especially where the consent was given to use some part of the subject that is no longer in his or her possession, such as tissue, eggs, or blood. It sure doesn’t seem right to me that adults are allowed to renege on a commitment to participate just because they don’t feel like doing it or they got a better offer. It’s definitely a bad example to set for children.

Not all criticism of the experiment has related to its ethical lapses. There are also detractors who say it was poorly designed, that it had too small a sample size, and that the participants were just playing roles rather than experiencing a transformation in personality. But the experiment’s shortcomings haven’t kept Zimbardo, who is now a professor emeritus at Stanford, from becoming a popular author and speaker. Nor have they kept large numbers of people from discussing, citing, imitating, and re-enacting the experiment, most recently in a Hollywood drama scheduled for release in 2008.

One unexpected result of the experiment was the insight it gave Zimbardo into the nature of shyness. Shy people, he realized, are in a prison of their own making. He proceeded to do a lot of research on shyness, which led to the publication of books and articles on the subject; he even founded a shyness clinic.

I find it bewildering that there is general agreement on the significance and lasting influence of the Stanford Prison Experiment, yet at the same time concerned citizens and human-subject protection workers are determined that such experiments shall never again take place. The Belmont Report stressed the importance of subject autonomy, but the regulations and the IRBs seem determined to take away a person’s right to make an independent, informed choice. In today’s overly cautious research climate, there is no room for altruism or risk-taking on the part of human subjects.

Besides “inflicted insight,” another aspect of social-science research that IRBs vigilantly try to eliminate is deception. If subjects are going to give informed consent to participate in a study, the reasoning goes, they need to know exactly what the purpose of the study is.

The heyday of deception in the social sciences was the late 1960s and early 1970s, when about 40 percent of all research in social psychology made use of deception. (Hunt, 1982) I would have guessed that the percentage was higher than that. I took Psychology 101 during that period, and we were required to “volunteer” for a certain number of experiments being conducted in the department. Every one of the studies in which I participated involved deception. While I don’t recall having any unwanted insight inflicted on me, I did develop a lifelong suspicion of people's motives, particularly psychologists'. Now that I think about it, I’ve never completely trusted anyone since then. But heck, it was for science, so who am I to complain?

My own abuse at the hands of psychologists pales next to the deliberate trauma inflicted by Henry A. Murray and his colleagues on 22 Harvard undergraduates. For a three-year period, beginning in the fall of 1959, the students participated in a series of psychological experiments designed, among other stated purposes, to measure people’s reactions to stress. In one of the first activities, each subject wrote an essay explaining his philosophy of life and then was told that he and a fellow student would be debating the relative merits of their philosophies. In fact, when the subject showed up for the debate, his opponent turned out to be a brilliant young lawyer, who had instructions from the researchers to be as aggressive and degrading as possible.

This experiment had a profound effect on some of the students, who recalled 25 years later the frustration, helplessness, anger, and humiliation they had felt. One student, who was younger and poorer than most in the group, may have been particularly traumatized by the stressful situation. Although no one can say for sure whether the experiment was a pivotal event in his life, that was the time when Theodore Kaczynski began to fantasize about “taking revenge against a society that he increasingly viewed as an evil force obsessed with imposing conformism through psychological controls.” By the time of his Harvard graduation, he had formulated most of the ideas that he would later espouse in the 35,000-word essay “Industrial Society and Its Future,” which came to be known as the Unabomber’s Manifesto. (Chase, 2000)

Currently the American Psychological Association’s code of ethics has this to say about deception:

8.07 Deception in Research

(a) Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study's significant prospective scientific, educational, or applied value and that effective nondeceptive alternative procedures are not feasible.

(b) Psychologists do not deceive prospective participants about research that is reasonably expected to cause physical pain or severe emotional distress.

(c) Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data.

Much of what we know about human behavior could not have been obtained if the researchers had been totally forthcoming about the purpose of the study. If Milgram had revealed the truth about his experiment (“We want to see how far you’ll go in obeying authority. Oh, and by the way, that ‘learner’? He’s just pretending, and you’re not really inflicting any pain on him.”), he would never have demonstrated that decent, kind people are capable of torture.

Other studies used deception to demonstrate that in an “emergency” simulated by the experimenters, the likelihood that a subject will take action is inversely correlated with the number of other people present. This phenomenon is known as the bystander effect. It’s also called the Kitty Genovese effect, after the woman who was murdered in New York City in 1964 while 38 witnesses allegedly did nothing (in fact, no one saw the whole crime, and some did call the police). (Latané & Darley, 1969)

In the “Good Samaritan” study, researchers used a group of seminary students to test the limits of compassion, with results that can be seen as either troubling or ironic, depending on your degree of religiosity. The researchers told all the students that each of them was to give a talk in another building. Some of the students were asked to talk about the parable of the good Samaritan, others about jobs for seminary students. Within each of those two groups, one-third were told they had plenty of time to get to the other building; one-third were told they still had a few minutes, but should head right over; and one-third thought that they were already late.

While walking to the other building, each student encountered a man in apparent distress. The results of the study showed no correlation between the topic of the student’s talk (a parable versus job prospects) and the student’s likelihood to offer help. What did make a difference was how hurried the students were: When they thought they had plenty of time, 63 percent stopped to offer help. Of the students who thought they were late, only 10 percent stopped. (Darley & Batson, 1973)

It isn’t just in the social sciences that researchers wrestle with the question of whether or not to deceive subjects. Big U’s medical center addresses the question on its Web site, stating that deception is generally unacceptable, but in rare cases it may be all right to be less than totally forthcoming about the experimental hypothesis or goals. Researchers need a really good reason, the site warns, to get a deceptive study past the IRB.

Not all medical researchers agree that deception and patient protection are mutually exclusive: “We argue, in contrast, that investigators can conduct deceptive studies, while respecting subject autonomy, by informing subjects prospectively that they are being deceived, but not informing them of the nature of the deception.” (Wendler & Franklin, 2004)

One area where deception is crucial is in studying the placebo effect. If subjects are told that they’re getting a placebo, rather than medication, and that’s exactly what they get, there’s no way to study the patients’ “response expectancies.” You can’t say to someone, “I’m going to give you a fake pill and observe your reactions to it” and then give the fake pill and hope to get any meaningful data. (Miller, Wendler, & Swartzman, 2005)

A 1978 article by Alan Soble discusses various proposals on how to allow deception in research while still being ethical. The author describes an approach advocated by Robert Veatch, the ethicist who also advocates sharing protocols with subjects (see chapter 4). Veatch suggested that, for each proposed study, a sample of subjects be asked if they would object to being deceived; if a substantial percentage say they would participate in such research, then it would be OK to proceed. Soble argues that a majority vote of strangers can’t substitute for the actual consent of a subject, so he offers a compromise: Allow research subjects to consent in general to deception and then have each subject designate a relative or friend as a “proxy” to approve or disapprove of the actual procedures. (Soble, 1978) This idea doesn’t seem to have caught on among those who protect human subjects.

As with all research, it's a question of weighing the benefits against the potential risks to subjects. That's what IRBs are supposed to do. But some researchers have argued that in seeking to minimize risk, especially in non-biomedical research, IRBs are instead requiring that there be no risk at all.

In past decades, this aversion to risk could be explained almost entirely by an overly cautious interpretation of the regulations, but today’s IRB has an additional reason to be cautious: Increasing numbers of study subjects are suing over alleged harm caused by study participation, and class actions have become the lawsuit of choice. IRBs are now more skittish than ever about approving research that poses even the slightest risk to the financial health of the institution. (Mello, Studdert, & Brennan, 2003) I don’t think any of the lawsuits have claimed that inflicted insight or other psychic traumas occurred during research in the social sciences, but you just can’t be too careful these days.
