Ethics and Neuroscience

Neuroethics and Moral Progress
Toward an understanding of ethical decisions

John Banja, Associate Professor of Rehabilitation Medicine and Assistant Director of the Programs in Health Sciences and Clinical Ethics, Center For Ethics


Vol. 9 No. 2
October/November 2006


Despite a considerable degree of philosophical antipathy towards the way neuroethics “naturalizes” ethics by translating it into biological, sociological, and evolutionary phenomena, I can’t see how anyone today who seriously studies moral thinking and ethical behavior can afford to be ignorant of the intersection of ethics and neuroscience. This is hardly to say that neuroethics will provide us with the moral template that Plato and Kant sought. Indeed, as I see it, neuroethics will leave virtually every specific moral dilemma of the past and present that demands a normative solution—for example, abortion, embryonic stem cell research, distributing scarce health care resources, welfare reform, etc.—pretty much untouched. As Hume famously argued, the logical gap between the factual (that is, neuroscientific) “is” and the normative “ought” is unbridgeable.

Nevertheless, neuroscience will dramatically enlighten us on why humans morally behave as they do. And as neuroscientists develop and deploy unprecedented technologies for studying brain function and treating brain diseases, their efforts will generate enough ethical problems to keep scholars and scientists busy for the foreseeable future.

Turning to the study that Clint Kilts of the Department of Psychiatry and Behavioral Sciences discusses in this issue of the Academic Exchange, note that its objective is to “describe spatially and temporally ongoing patterns of neural activity in human subjects” that model the neural architecture of moral sensitivity. Key findings demonstrate prominent activations in neural circuits in the prefrontal (especially ventromedial and orbitofrontal) cortices, the posterior cingulate, and the superior temporal sulcal areas. This finding is the result of averaging the brain activation patterns of sixteen participants. Consequently, it is possible—although I don’t think likely—that a scan of sixteen other participants would look quite different. Moreover, the “template” scan that emerges from the study does not (nor was it intended to) predict whether or to what extent these participants would actually act in morally sensitive ways—a point to which I’ll return.

In spite of these caveats, discovering that the neural networks this study identifies are commonly activated in self-knowledge and self-referential brain activity is a particularly striking finding because it is hardly intuitive. According to traditional ethical models, one would not think to look for the neural substrates of moral sensitivity in the “it’s all about me” circuits of the brain because moral sensitivity is exquisitely other-regarding. (Thus, the posterior aspect of the superior temporal sulcus, which subserves the brain’s perspective-taking functions, is where one would expect to find this activity.) Kilts speculates that the ventromedial and orbital “me” parts of the prefrontal cortices and the autobiographical/historical “me” aspect of the posterior cingulate are activated because moral sensitivity requires the ability to formulate a “what if this happened to me?” judgment if I am to effectively empathize with another.

What, however, does an investigation like this one tell us about these questions: “Does being morally sensitive mean that one is more likely to act compassionately, kindly, generously, or dutifully?” Or how about, “Can a person be morally proactive without being especially morally sensitive?”—a question that a staunch Kantian, who doesn’t put much stock in the importance of moral feelings, would not find all that peculiar.

These kinds of questions take this study considerably beyond its original objectives. The reason is that if we are interested in the link between the neural circuitry of moral sensitivity and its practical manifestations in observable behavior, we are faced with a host of methodological problems beginning with whom we recruit as study participants. Presumably, we would recruit “normals”—not sociopaths—but how certain can we be that an investigation of their neural activation patterns is going to tell us anything causally interesting or important about their behavior? If we choose these individuals on the basis of their public displays of moral sensitivity, aren’t we making an unwarranted assumption in saying that the neural activation patterns depicted on their fMRIs specifically account for that behavior? How do we know that other neural assemblages aren’t as important, even if they don’t meet the activation thresholds? Further, it’s quite possible that people in the same socioeconomic group with considerable variation in their neural activation patterns will nevertheless act with a similar degree of moral sensitivity when confronted with an ethical dilemma.

Here’s another issue: Suppose that inherently morally sensitive people find themselves working for a company whose leadership values moral sensitivity only instrumentally; that is, a company whose leaders consider moral sensitivity valuable only insofar as it contributes to profit maximization (as public health critics have charged in connection with Philip Morris’s advertising of its campaigns to discourage adolescent smoking).

If we put these corporate Machiavellis in a scanner and compare their neural activation patterns with those of our good and true participants, we might indeed discern differing activations between the two groups. But these brain scans will never be morally informative in any meaningful way. All they depict is brain activity representing moral personalities we do and don’t like. Why we ought not to like this or that person, and even whether we are justified in our dislike, remains up for grabs.

Moral sensitivity, however, is a rather subtle and nuanced ethical construct. There will be other kinds of neuroethical research whose implications seem more concrete, largely because these investigations will report on neural activation patterns among people whose behavior is known to be extremely aberrant, so that comparisons between them and normals can be more confidently made. Thus, much research on neural activation patterns among sociopaths has implicated the prefrontal cortices and the cingulate gyri as “impaired” or considerably less active than in nonsociopaths. Recent neurogenetics research on persons who have serious impulse-control problems and who were abused as children has implicated a genetic variation: the low-activity (“L”) variant, rather than the high-activity (“H”) variant, of the gene that codes for the enzyme monoamine oxidase A. People who carry this variant have decreased brain volume in a mood-regulating circuit that controls anger and fear (the limbic/cingulate/orbital prefrontal loop). Presumably, that decreased volume compromises the normal damping of fear and anxiety as they build up, resulting in emotional eruptions.

These kinds of studies, I think, will prove immensely challenging for the future of our thinking about social responsibility and “free will,” that is, voluntary control; the more enlightened among us will find it impossible to disregard their implications.

People who resist these findings despite mounting evidence, however, may display another interesting neuroethical phenomenon: motivated reasoning. One form of motivated reasoning occurs when one argues backwards—that is, when one selects only those premises, or distorts the data, that confirm the conclusion he or she desires. Drew Westen and his Emory colleagues recently conducted a study that confirmed previous ones showing that politically partisan individuals (from either party) will discount, distort, rationalize, or ignore evidence that goes against their chosen candidate. Paradoxically, after “processing” materials that discredit their candidates, these individuals emerge from the study even more convinced of the rightness of their political affiliation and their chosen representatives.

A possible explanation of this phenomenon is that all cognitive acts admit some degree of coloring or biasing that derives from affective neural circuitry operating in parallel with purely intellectual processing. The job of the affective circuitry is to confirm a belief’s correctness or “fit” with the data. The more a belief and its accompanying behavior result in the person’s experiencing positive affect, the more that belief is reinforced and the more likely it is to be deployed in the future. To the extent that certain beliefs are of deep concern to us—that is, the more they are nested in and inform a person’s deepest life roles and concerns—the more those beliefs will be penetrated with a degree of “convictive” affect that renders them virtually unshakable, regardless of evidence to the contrary. Revising one’s time-honored beliefs about politics, religion, or one’s self-image in light of new, conflicting data would be too painful for most people, or would require too much effort to alter their pre-existing mental schemas and frames. The result is that persons will defend against this emotionally disconcerting data by resorting to various rationalizations that maintain the security of the beliefs they have evolved over time.

One wonders if this pattern cannot be extended to history’s greatest moral philosophers. Was Kant’s categorical imperative, for example, derived from his Prussian fondness for reason’s consistency and universality, the regal imperturbability of the a priori, and a marked distaste for the moral messiness of human, all-too-human contingencies? Was Mill, social reformer that he was, so horrified at the human suffering the industrial revolution had brought that he came to a self-righteous confidence that his English forebears had gotten it right in understanding moral action to consist in securing the happiness of the greater number? Does all moral reasoning, then, suffer from the inevitable way certain premises (or certain conclusions) appeal to or “feel good” to us, so that our idea of moral objectivity is in need of serious revision?

In any event, today’s neuroethicists would scoff at the Enlightenment dream of discovering a brace of purely rational, transtemporal, noncontingent principles that would provide normative rules specifying “right” behavior in specific cases. Instead, they would raise the banner of John Dewey’s pragmatism high. Moral progress, they would contend, occurs just as scientific progress does: we formulate moral hypotheses; we implement them; we evaluate and modify them (sometimes this requires decades or even centuries of tinkering, as with concepts like privacy, confidentiality, informed consent, and decisional competence); and we assess what our efforts have brought about. We occasionally make catastrophic moral mistakes, but we have little choice but to keep responding to the survival and adaptational challenges that human life places before us. Neuroethics stands as an important contemporary discipline that will help us understand these processes better.