Quick Note: I'm not taking any particular stand with regard to the discussion. I only want to clarify the issues and what seems to be at stake. I wrote this in a rush, so apologies for any errors. If I have time, I might go back and edit it. I doubt it's even going to be read, though. Haha.
Philosopher Tamler Sommers of the University of Houston and psychologist David Pizarro of Cornell University have a great podcast called Very Bad Wizards, and in their most recent episodes, they interview author Sam Harris and then discuss that interview in a follow-up episode. A lot was covered in the interview with Sam Harris, but the discussion turned mainly on two topics: (1) the metaphysics of free will and (2) moral theory. The two topics intersected in the conversation because there's a question of whether our knowledge of having or lacking free will bears on our moral judgments, especially our judgments of moral responsibility.
(John Searle makes this distinction between free will and determinism in terms of causal gaps and causally sufficient conditions in his book Freedom and Neurobiology. Sam Harris talks about free will in his book, Free Will.)
Even though everybody was in agreement that there's no free will in this classical sense, they were in disagreement about some of the implications this conclusion would have for moral judgments, specifically moral responsibility judgments. Harris contended that if we're willing to grant that all our actions are causally determined and that we don't have free will, then we have no grounds upon which we could blame anybody for their actions. Sommers and Pizarro disagreed. They contended that there are grounds upon which you could blame someone or hold them morally responsible even if determinism is true (that is, even if free will isn't a real feature of the world or of us as persons).
(Daniel Dennett writes of similar thought experiments in his book Elbow Room: The Varieties of Free Will Worth Wanting.)
Now, consider another thought experiment, one more fantastical, about an Evil Genius who wants to control your mind. Imagine he has found some way to control your thoughts and behavior, so that everything you think and do is really just a result of his manipulation. If you commit a crime under the control of the Evil Genius, say, like the man who acted as the sniper did, do you think that you're responsible for the crime?
Let's hop to another case where it isn't a person who's controlling your thoughts and behavior but cosmic rays. As with the last thought experiment, these cosmic rays are causing you to commit the crime. Are you then responsible for the crime?
Now finally hop to the case that some scientists think we find ourselves in today, that of determinism. What causes you to act in a certain way is not a tumor, not an Evil Genius, not cosmic rays, but natural laws. You do what you do because the laws of the universe necessitate that you do what you do. Given that, do you think you are responsible for your actions?
These are, of course, thought experiments, designed to test your intuitions regarding free will's relevance to the issue of moral responsibility. If you think that you or another person is morally responsible in the last case but not in the others, you'd have to make an argument that there's a relevant dissimilarity between the last case, the case of determinism, and the other cases. At the very least, you'd have to demonstrate some dissimilarity across some or all of the cases.
During the podcast with Harris, Sommers and Pizarro challenge Harris's examples with further argument and thought experiments. I'll turn to the thought experiments first. Sommers asks us to consider the case of a drunk driver. In this scenario, imagine you've drunk a lot of alcohol and gotten behind the wheel of a car. As you're driving home, you hit a small child who's playing outside. Are you responsible for hitting the child? Listening to both episodes now, it's apparent to me that intuitions become more divided on this issue. Sommers claims you are; Pizarro and Harris claim you're not.
Then Pizarro puts up for consideration the case of a person who, say, commits a crime because of a hysteric nerve reaction. To tease out Harris's commitments, Pizarro asks whether that man is morally responsible, and why. And here Harris's response is curious. Harris says the man isn't morally responsible because the response was involuntary. (He might have misspoken, I don't know. This part of the discussion occurs at two hours and 17 minutes into the episode, almost at the end.) This immediately raises the question of whether the man would be morally responsible if the action were voluntary. It's peculiar because, if Harris is to be consistent, a voluntary/involuntary distinction can't be the reason why we would be willing to blame a person or hold him morally responsible. If Harris is consistent, this can't, to use a phrase, be the difference that makes a difference.
What I'd like to get to is the reason why, it seems to me, Sommers, Pizarro, and Harris have their differences of opinion regarding the implications of determinism on moral responsibility. I think each of the guys is appealing to an implicit moral theory, and this might help make clear how they approach these thought experiments and the free will and moral responsibility issue. I won't be defending anybody's position, exactly, just laying out what I think the landscape is, and perhaps why the participants in the discussion have the views they do. I'll start with Harris's view.
(Derek Parfit is a global consequentialist, according to Julia Driver in The Cambridge Companion to Utilitarianism, q.v. "Global utilitarianism." Driver writes that "the primary motivation behind global consequentialism is to show that a fully worked-out consequentialist normative ethics will be immune to criticisms that the theory ignores character and ethical [concerns] that are not impartial.")
Harris's might seem like a confused position, but it's actually quite principled. (It's just a matter of whether or not you want to accept the principles.) Harris doesn't say this on the podcast, but he's a consequentialist of a particular type, a global consequentialist. A global consequentialist is someone who believes that when judging consequences for human well-being, we have to take everything into account. This would include our judgments as they are now and as they would be at some optimal state of the species. Whatever could contribute to well-being ought to be considered, perhaps even the consequences of accepting the very things we think contribute to well-being, and when we consider all of this, we will actually improve well-being. Notice the odd but cool (?) recursive quality to the position. It's sometimes been argued that if we're consequentialists, we have to accept that if throwing Christians to the lions contributed to the well-being of the Romans at the Coliseum, then the Christians ought to have been thrown to the lions. But the global consequentialist would reply that this immediate consequence, too, would have to be weighed when considering the human species as a whole, and across time, with regard to well-being. And most global consequentialists would think that the immediate pleasure the Romans received would not, on balance, outweigh the event's cost to well-being across place and time. (But of course that's an empirical claim.)
You might raise a question at this point and ask whether global consequentialism doesn't lead us to a kind of relativism. The global consequentialist would answer: Not really. Consider the analogy of chess. There are certain rules to the game, certain constitutive rules that make chess the game that it is. But then there are other regulative rules, rules of thumb used throughout the game. (This is an example Harris used in his book The Moral Landscape, although the context was slightly different.) If we want to play the game well and eventually mate the other King and protect our own King, we're going to act in accord with the constitutive rules and with many of the regulative rules. But we will, in certain situations, abandon one regulative rule for another in order to optimize our play. Suppose we're playing right along and we're thinking about the rule of thumb Castle early. Then we see a threat to our Queen. We might move our Queen before we castle if it seems like a good idea, even if we castle later or don't get to castle at all as a result. Or alternatively we might flout the rule of thumb Protect your Queen and instead sacrifice her in order to get better positioning. So it is, too, with maximizing well-being, says the global consequentialist. What looks like a sacrifice in the short term might actually be a good decision in the long run.
If we think of Harris as a global consequentialist, it might make sense of what he said earlier regarding voluntary and involuntary actions, which sounded like misspeaking. (Or he could have actually misspoken, I don't know.) The voluntary/involuntary distinction could make a real difference in the long term, or it could not. If he's to be consistent with his empirical claims, it seems to me he should predict that, in the long term, it won't make a difference in terms of moral responsibility. But he should be willing to grant that if the distinction does contribute to well-being, then we should accept it.
(David Hume was a virtue ethicist of the sentiments; Immanuel Kant was a celebrated deontologist.)
So, before I conclude, let's take these moral theories, apply them to the thought experiments posed on the podcast, and consider why, per the moral theories, each person seems to believe as he does. Across all the cases, Harris believes that there's no moral responsibility because none of those cases allows for the sorts of differences that make a difference. There might be a difference in our short-term judgments when attributing moral responsibility to the person with the hysteric nerve reaction, but probably not in the long term, he conjectures. And, you know, it might contribute to well-being in the short term if we find any of these people morally responsible for their actions, because the retribution might contribute to well-being, but probably not in the long term. So on balance, his conjecture on empirical grounds is No in the long term, though maybe Yes in the short term, in some cases.
Sommers, I think, says No with regard to the Evil Genius, cosmic rays, hysteric nerve reaction, and tumor guy, but Yes to the drunk driver. He reasons that the guilt the driver would feel over what he did is the kind of guilt anyone ought to feel in that or similar situations, and anyone sympathetic to the driver's position would recognize as much, so the driver should be judged morally responsible. Nobody ought to be drinking and driving, so the lack of good sense before doing it, coupled with the guilt he would feel, makes this case different from the others. Our sentiments, he thinks, ought to incline us to blame such a person in a way that they wouldn't in the other cases.
Pizarro says No to all the cases because in none of them did the person have sufficient control over his faculties. If we assume that the actions capable of moral praise and blame are those we could judge voluntary in some sense, then we won't find anybody in these cases who did what they did in a manner we would call voluntary. The drunk driving case might seem like a counterexample because, for instance, the man chose to drive drunk. But Pizarro could reply that he chose while drunk, without much control over his cognitive capacities, and that even if we somehow grant that he voluntarily drove the car, it doesn't make any sense to say that he voluntarily hit the child.
Moral theories matter, and they matter because so much of what we believe about the world hinges on which position we take toward it. Whether we incline toward consequentialism, virtue ethics, deontology, or some mixture of them not yet fully understood, these theories will make a difference across our moral judgments and can account for the differences we have over the big questions: How can I be a good person? How can I do something good? What rules should I follow in order to live well?