Obsessed as they are with weights, measures, and other materialistic minutiae, scientists, we are often told, miss the mark when it comes to discerning the intangible values that give life its true meaning—and are sometimes an active force for evil. “We murder to dissect,” the poet Wordsworth said. Left to their own devices, scientists build super-weapons or harvest embryos for their experiments, indifferent to their human costs and consequences. As such wildly diverse pop culture figures as the tormented Dr. Frankenstein and the creepy Dr. Strangelove teach us, science and scientists are amoral. What they get up to in their laboratories may make us smarter, but it’s religion that makes us good.
Or so many a partisan of religion has claimed. A recent study at the University of California, Santa Barbara, by Christine Ma-Kellams and Jim Blascovich, respectively a postdoc and a professor in the university’s Department of Psychological and Brain Sciences, put this idea to an objective test—and arrived at precisely the opposite conclusion. The experiments were presented in a paper entitled “Does ‘Science’ Make You Moral? The Effects of Priming Science on Moral Judgments and Behavior.”
Four groups of test subjects were recruited: three sets of students, who were paid for their participation with course credits, and one set of volunteers. In the first phase of the study, a group of subjects read a vignette about a date rape and were asked to rate the rapist’s behavior on a scale of 1 (he behaved properly) to 100 (what he did was completely wrong). Then they answered questions about their field of study, including “How much do you believe in science?” Science students, it was found, were the harshest judges.
Another group of participants was asked to form sentences out of sets of scrambled words. Half were given words that are associated with science, such as “logical,” “hypothesis,” and “theory,” and half were given neutral words, such as “more,” “paper,” and “once.” Then they read the same vignette and rated it. The group that had been primed with science words condemned the rapist most vehemently.
Still another group was similarly primed with science and nonscience word jumbles and asked how likely it was that they would engage in various activities over the next month, some of them “prosocial” (donating to charity, giving blood) and some of them “distractor” activities (going to a party, seeing a movie). The group that had been primed with science words reported “more prosocial intentions.”
The final group was also given word exercises to do. Then they played a game in which they were told to divide five one-dollar bills between themselves and an absent second participant. Women were more likely than men to keep the larger share for themselves, but overall the people who had been primed with science words allocated less money to themselves than the control group did.
The “study of science itself” holds “normative implications” and “leads to moral outcomes,” the researchers concluded. At the very least, “the act of thinking about science itself produces important psychological consequences.” The “same scientific ethos that serves to guide empirical inquiries,” in other words, “also facilitates the enforcement of moral norms.”
Reading this paper, I was reminded of another, better-known experiment that was featured prominently in Predictably Irrational, the New York Times best-selling book by the MIT behavioral economist Dan Ariely. Ariely and some of his colleagues recruited volunteers and asked them to take a math test consisting of 20 simple problems. The participants had five minutes to solve as many of the problems as they could. After that, they were entered into a lottery. If they won, they would receive $10 for each problem they got right. Some of the participants—the control group—handed their papers to the proctor to be scored; the second group was told to grade themselves and destroy the originals. Though the self-graders had ample opportunity to cheat, fewer did than you might expect. Before they took the math test, half of them had been asked to list the titles of ten books they had read in high school and half had been asked to write down as many of the Ten Commandments as they could remember.
The results? The control group solved an average of 3.1 problems correctly. The group that recalled ten books read in high school self-reported an average of 4.1 problems solved (a third more than the control group). But the students who had tried to recall the Ten Commandments averaged just three correct answers—basically the same as the control group. “What a miracle the Ten Commandments had wrought!” as Ariely put it.
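The arithmetic behind “a third more” is easy to verify from the averages cited above (the figures are the article’s; the labels in the snippet are my own shorthand):

```python
# Group averages as cited: control was proctor-scored; the other two
# groups graded themselves after different recall tasks.
control_avg = 3.1        # papers scored by the proctor
books_avg = 4.1          # self-graders who recalled ten books
commandments_avg = 3.0   # self-graders who recalled the Ten Commandments

# Relative inflation of the self-reported score over the control group:
# (4.1 - 3.1) / 3.1 is about 0.32, i.e., roughly "a third more."
inflation = (books_avg - control_avg) / control_avg
print(f"Inflation over control: {inflation:.0%}")

# The Ten Commandments group is essentially indistinguishable from control.
print(f"Commandments vs. control: {commandments_avg - control_avg:+.1f} problems")
```

So the book-recall group appears to have padded its scores by about 32 percent, while the commandment-recall group reported slightly fewer correct answers than the honestly scored control.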
Of course it wasn’t the commandments themselves that did the trick, but the priming; the act of thinking about a moral benchmark encouraged moral behavior, just as thinking about the unbiased, rigorously objective values of science encouraged the participants in the University of California, Santa Barbara, experiment to be more ethical and socially minded. As the much-quoted formulation from New Thought has it, “You are what you think.”