More thoughts on Sam Harris’s insightful new book, The Moral Landscape: How Science Can Determine Human Values, 2011. I am reading it with two friends.
Would you like to join us? It would be great to have your comments as well. As we read, we are each identifying key ideas we agree with, and statements where we disagree.
Chapter 2 follows:
The Moral Landscape, Chapter 2 – Good and Evil
Agreements (and my rewording/additions in italics):
Harris is an Ethical Naturalist: some ethical statements are true, and derive from real physical aspects of the universe. Harris is also a Utilitarian: we should strive to maximize the overall good, creating the greatest good for the greatest number. Harris is also a Consequentialist: the consequences of one’s conduct, actual or potential, are the ultimate basis for any judgment about the rightness of that conduct. Thus Harris (and many of us) can self-describe our morality as Naturalist Utilitarian Consequentialist. Now doesn’t that make you feel better? 🙂
Religious believers who seek to justify thoughts or behaviors based on consequences which do not or cannot occur in our natural world can easily be immoral.
We may have theistic beliefs, but those beliefs should always be consistent with and constrained by natural-world consequences, potential and actual. Supernatural consequentialism, to the extent that it conflicts with natural-world consequences, can easily become immoral. It gives us the wrong priorities, or causes us to lose sight of the real consequences that matter, in favor of imagined consequences that are both untestable and wrong. Examples: Christian theism that sometimes devalues science and natural and social progress in the physical world, or which diverts or constrains our feeble and finite cognitive resources to fundamentalist thought or behavior, or to converting others to nonadaptive beliefs. Islamic theism that sometimes legitimates religious violence, etc.
The moment we accept there are right and wrong answers on questions of well being and progress, we accept there are many who are wrong about their answers. It is often difficult to determine the net long-term moral consequences of an event, a problem philosopher Dan Dennett calls the Three Mile Island Effect. We do our best anyway.
We value total well being and progress over the average well being or progress of all. We may sacrifice ourselves to improve total well being or progress, ideally both.
In some domains, as in our valuing of family and subgroups, or of monogamy (or other limitations on polygamy) over open relationships, we want a bias toward the well being or progress of the subgroup. In other areas we want equality of treatment, opportunity, and access, or a lack of bias, as much as is practical. Whether we want bias or not depends on the total consequences, for well being and progress, of the value preference.
Calculations of fairness drive reward related activity in the brain, according to neuroimaging and behavioral economics. Our brain is a fairness computing and emoting machine.
Kant’s Categorical Imperative: Act always in a manner that you hope is consistent with universal law.
Jonathan Haidt: We make moral judgments intuitively and emotionally. Our reasoning is usually post hoc (constructed after the fact), and has limited ability to change our intuitive-emotional judgments. Amen.
Genuine altruism, benefiting others without reciprocation, includes altruistic punishment, the sacrifice of self to punish norm violators, with personal harm incurred in the process.
Altruistic punishment is both a powerful and a dangerous concept. If we were individually more courageous, more willing to sacrifice ourselves to punish norm violators (for example, more of the 90% willing to go to jail to thwart or block unfair actions by powerful corporations, the ultrawealthy, the government, and other members of the top 10%), we could have a much better society; but if this were done poorly, we could also easily have a much more violent and complexity-poorer society. The morality of a contemplated altruistic punishment strategy depends on its consequences for society, which in turn depend on the context, intelligence, and proportionality of the behavior. As with democracy, which could not flourish as a beneficial form of governance until societies had literacy and mass communications, mass-scale altruistic punishment (sacrifice of individual freedoms, wealth, etc. in order to punish the transgressions of much more powerful groups) may only become a generally net positive development once we have cybertwins guiding our democratic activities post-2020, intelligently channeling us into more effective mass activism: sit-downs, strikes, boycotts, purchases of true competitors’ products, strategies that will bring negative consequences and shame to the top 10%, and other forms of civil disobedience. There are some great scenarios and stories to be written here!
Consciousness expands choice, so it is an evolutionary good. The more consciousness we have, the more proactive choices we have in deciding a thought or behavior (logic, emotion, random chemical oscillators, coin flips, horoscopes, etc.). That is what free will is. Freedom is conscious awareness of, and increased control over, cognitive choice. Like consciousness, it is variable and transient, but freedom is no illusion!
Disagreements:
Pat Churchland: “No one knows how to compare the headache of 5 million against the broken legs of two.”
Disagree. We make economic estimates of these all the time. Actuarial science, insurance, and risk management are in fact big industries, and increasingly quantitative ones.
Paul Slovic, in Psychic Numbing, has shown we are more distressed by violence to single individuals than to large populations. We grow numb as numbers rise.
Harris finds this illogical, but it seems quite logical for those who believe their ability to influence or control environmental outcomes decreases as the number of actors rises. We steadily lose hope and empathy as numbers rise, and this seems a reasonable way to view the world. We pick fights that we think we can win. As long as our hope and empathy remain strong in systems of smaller numbers, we can continue to move the system forward.
Derek Parfit’s “Repugnant Conclusion” for using total well-being as your standard of value: hundreds of billions of people barely surviving can be preferable to 7 billion happy people. Using average well-being as the standard can prevent even worse problems.
But if we value well being and progress together, the “logic problem” of Parfit’s model falls away. Total well being and progress are what seem most useful to care about, not average (we also care about the distribution of the total, or the social divide, a topic you haven’t mentioned). There are also inescapable real-world tradeoffs between these values. More of us choosing individually to sacrifice in certain ways can often get us total progress faster, and we can be sold on and willing to test such strategies.
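Parfit’s tradeoff is ultimately arithmetic, and a minimal sketch makes the “logic problem” concrete. The population sizes and per-capita well-being scores below are purely illustrative assumptions, not anything Parfit or Harris specifies:

```python
# Illustrative numbers only: compare two hypothetical worlds under
# total vs. average well-being standards (Parfit's Repugnant Conclusion).
def total_wellbeing(population, per_capita):
    return population * per_capita

def average_wellbeing(population, per_capita):
    return per_capita  # the average is just the per-capita score here

happy_world = (7_000_000_000, 10.0)      # 7 billion people, thriving
crowded_world = (500_000_000_000, 0.2)   # 500 billion, barely surviving

# A pure total-utility standard prefers the crowded world...
assert total_wellbeing(*crowded_world) > total_wellbeing(*happy_world)
# ...while an average standard prefers the happy one.
assert average_wellbeing(*crowded_world) < average_wellbeing(*happy_world)
```

The sketch shows why a single scalar standard invites paradox; adding a second value (progress) or a distribution constraint, as argued above, changes which worlds the calculation can rank as best.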
Loss aversion (cognitive bias). We are more averse to real losses than real forsaken gains. So we preserve the status quo more than risk.
Harris questions the value of this, but to me this also sounds like prudence, a strategy likely to be generally adaptive. Part of our psychology seems to be set up to seek progress, and part to appreciate what we have (think of Type A and Type B personalities). In my own head, when I have a forsaken gain, I remind myself of how lucky I am, and take stock of what I do have. When I have a real loss, however, it’s clearly a regression.
“We cannot give a rational explanation of why it is worse to lose something than not to gain it.”
Yes we can, or at least I think we can. Loss sets us up to see a regressive pattern, and imagine further regression. Not gaining pushes us to value what we have, and imagine stasis, a more preferable fate.
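The asymmetry being debated here has a standard quantitative form in Kahneman and Tversky’s prospect theory, which neither Harris nor the reply cites by name; the sketch below uses their commonly reported parameter estimates (α ≈ 0.88, λ ≈ 2.25) as assumptions:

```python
# Prospect-theory value function (Kahneman & Tversky's 1992 median
# parameter estimates). Losses loom larger than equivalent gains.
ALPHA = 0.88    # diminishing sensitivity to magnitude
LAMBDA = 2.25   # loss-aversion coefficient

def subjective_value(x):
    """Felt value of a monetary gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# A $100 loss feels roughly twice as bad as a $100 gain feels good:
assert abs(subjective_value(-100)) > 2 * subjective_value(100)
```

On this account the "rational explanation" is built into the value function itself: the λ coefficient is a measurable, stable bias, consistent with the regression-versus-stasis reading above.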
“Can the disparity between our desire to satisfy our own desires (eat well) and to end the suffering of others (global starvation) be morally justified? Of course not.”
Disagree. There is always a judgment of efficacy. We estimate our efficacy. We can do little to end global suffering, and much to increase our own and our friends’ pleasure. We all personally know abusers who don’t quit when we try to alleviate the conditions of the abused. Many social games occur inside systems so broken (education, government, unions) that they are “no win.” This is similar to Psychic Numbing. It is adaptive to focus on the well being we know we can achieve and progress we can make — starting with ourselves and our loved ones.
“We are now poised to consciously engineer our further evolution, thus escaping evolutionary dynamics.”
Not so. Respectfully, this kind of language is, I believe, unaware of the limits of reason, which is one form of memetic evolution. We can’t escape evolutionary processes, no matter our level of development, if we live in an evolutionary developmental universe.
“Free will cannot be squared with our growing understanding of the physical world.”
Disagree. The will of all living organisms seems to be on a continuum of constraint. There are degrees of freedom, and the more conscious the organism, the more its will is free to follow the dictates of rationality, emotion, intuition, random chemical oscillators (see Martin Heisenberg’s work), or any other strategy it can see, chosen with some measure of proactivity, vs. reactive and unconscious thought or behavior. That sliver of thought or behavior that is conscious in any organism, at any moment in time, has some degree of choice to follow a range of decision rules available to its awareness. Less conscious and unconscious animals simply have far fewer of those choices.
“It seems clear that retribution rests upon a cognitive illusion of free will, and is thus also a moral illusion.”
Disagree. Conscious will is much freer, more voluntary, and more choice-rich, and to the extent a crime is more conscious, it is more immoral, and should be punished (and rehabilitated where possible) as such, whenever the social consequences would be better than with no punishment (and rehabilitation). The utility of socially agreed and broadcast punishments for various crimes, the act of retribution/punishment for a committed crime, and rehabilitation are all morally meaningful with more conscious, choice-capable human beings, and less morally meaningful (socially consequential) with psychopaths, the mentally ill, the substance-addicted, children, etc. In the latter cases we need methods other than punishment or the threat of punishment to deter crime, such as increased social transparency to identify and rehabilitate or monitor individuals who have less free will/choice/consciousness than the norm.
Thoughts? Comments? Let me know, thanks.
A first comment is that I don’t follow your disagreement on the two Loss Aversion quotes. In the first, your disagreement talks about being conservative vs. risk tolerant. But the quote is not about that spectrum. It is about one type of risk being valued differently than another type of risk, when those two risks have the same absolute or relative potential utility gain or loss. That applies to people wherever they are on the conservative vs. risk tolerant spectrum.
In the second quote, as I understand it, I don’t think your point about mutations is relevant – whether genetic mutations are more often positive or negative is not related, as far as I can tell or imagine, with how the conscious mind reacts to choices involving material gains or losses.
Nice catch! Looks like I made a category error of some kind. I rewrote those points; hopefully they make more sense now. Part of our psychology seems to be set up to seek progress, and part to appreciate what we have (think of Type A and Type B personalities). In my own head, when I have a forsaken gain, I remind myself of how lucky I am, and take stock of what I do have. I remain in stasis, but appreciative. When I have a real loss, however, it’s clearly a regression, and now I worry about and try to guard against further regression. So for me, the former is definitely preferable to the latter.
Of direct relevance to Harris’s point about psychic numbing is Section 8, “Scope Neglect,” of Eliezer Yudkowsky’s Cognitive Biases Potentially Affecting Judgment of Global Risks [http://yudkowsky.net/rational/cognitive-biases]. Regarding your first couple of disagreements with Harris: true, there is a natural logic (or at least a consistency) to our scope neglect, but it is not necessarily the most conscious or morally responsible logic for us to use. I believe that despite the best efforts of our highly adaptive neocortex, we change our environment faster than we can update the more genetically hardwired evolutionary inheritance aspects of our psychology, including our moral sense. Consider Kurzweil’s description of the Intuitive Linear vs. Historical Exponential perspectives as an example of mass psychological maladaptation.
RE: “We can do little to end global suffering, and much to increase our own and our friends’ pleasure. […] Many social games occur inside systems so broken they are “no win.” This is similar to Psychic Numbing. It is adaptive to focus on the well being we know we can achieve and progress we can make — starting with ourselves and our loved ones.”
I feel somewhat split between you and Harris here… While Harris’s conclusion may lead to a numbing sense of guilt and overwhelmed impotence, your conclusion potentially sounds like a path toward the moral pitfalls of defeatism and escapism. I would argue it’s important for us to focus part-time on the uncertain payoff of well being and progress that we do not (yet) know we can achieve. For example, organizing to replace or reform broken systems (or to agitate others into discontent and awareness of systemic dysfunction) is movement out of one’s comfort zone and has high potential individual and societal rewards.
Depending on how we define “well being,” it is a state that may or may not sometimes conflict with progress. Discomfort and temporary suffering (as with sacrifice) — particularly when voluntary — are conditions more closely correlated with progress. They may exert positive stress on society in the form of cultural and economic evolutionary competition. I associate well being more with comfort and security, enjoyable states that may lead from stasis to moral stagnation and corruption/degeneracy instead of development.
On the topic of Free Will, I’d certainly agree that capability and choice (agency) increase proportionally to intelligence. While flipping a coin is theoretically deterministic, practically, it’s stochastic. The threshold of complexity beyond which the outcome of a given isolated system becomes unpredictable is quite low (I’m speaking strictly of Evolutionary systems here; the threshold of unpredictability for Developmental processes is much higher, by definition). Any entities that we might regard as having Free Will (though personally, I’ve never found FW to be a philosophically useful concept; too much baggage) are always well over this complexity threshold. There’s also the fact that “isolation” is only a modelling approximation and thermodynamics doesn’t allow for any truly isolated physical systems; then there’s observer effect to consider, and the uncertainty principle, etc…
In my experience, in the context of discussions of morality, bringing up Free Will vs. Determinism has always counterproductively derailed the conversation. How so? In order to see a moral dimension to Determinism, one side absurdly, yet invariably, rejects Free Will as illusory, imagines humans as puppets of a higher power, and generally entertains some deus ex machina idea of external constraint, à la Groundhog Day, or the Grandfather Paradox-like conditions that thwart the protagonist’s intentions in popular films. On the other side, in order to reassert agency and reclaim a meaningful view of existence, some may find it necessary to speciously invoke quantum indeterminacy.
Thank you for the thoughts Kjell. For me, free will vs. determinism has always been a duality that seems clarified by focusing not on the idealized extremes of either, but on the physical mechanisms of capability and choice for the will of the choosing organism, mechanisms that seem to increase in number and variety proportionally to intelligence, as you say. I find it interesting to make mental lists of those mechanisms, and ask which of them I might be using at any point in time. It’s also an interesting exercise in self-awareness to try to be aware of as many of them as I can for important decisions, kind of a way to increase my effective freedom prior to an act of will.
For some choices, such as where I’m going to move my hand right now as I move it from my lap to the table, my choice seems to be based on a very low-level mechanism, perhaps a chemical or network oscillator in some location in my brain. This level of will might be only as “free” as an insect’s typically is, perhaps. But for other choices I may recognize I am also using my emotion, intuition, beliefs, or a range of conscious thought processes. It is the increasing number of such choice mechanisms available to me that for me defines increasing freedom of will. I appreciate your comments about how easy it is to get caught in counterproductive discussion of these issues however. I’m not that well read in philosophy as I’ll often stop reading it unless I can understand how it is tied to physical process, real or proposed.
Kjell and John, I absolutely love and appreciate the comments you each made about free will in particular. Right on, on all counts. I am hopeful that as I get further into Harris’s book, his “free will” points will not seem necessary to evaluating his other points, since I did not think they accomplished much in and of themselves.
Concerning “We are more averse to real losses than real forsaken gains”: I have trouble accepting the premise of these mental games. In my experience, real forsaken gains are always less certain. If a psychologist offered a gain of $100 or a loss of $90, I would trust the likelihood of the gain less than that of the loss, even if told they are equal. The gain could always be interrupted at the last moment with a clause or exception. Just kidding! The loss would be less likely to be interrupted before completion. An extreme case: the ambassador from Nigeria assures me that I will get millions if I only commit to paying a small fee.
Another reason I may act “illogically” when faced with a gain-loss proposal is the value of my time. When a psychologist makes a logically-attractive offer to test my evaluative ability, I may decline because I do not know how much time I’ll have to spend to get this gain. Suppose I could count cards at a Las Vegas blackjack table and skew the odds in my favor. How much time would I divert from my passion to working this system to gain money?
This factor of the unknown reminds me of the moral questions about throwing a railway switch to direct a train to kill one person instead of 5. Or pushing a person off a bridge to block the train to save 5. I would not take action that harms (or kills!) a person if my knowledge of the situation is incomplete. Suppose the 5 who would have been killed know that they are in the way and plan to move, while the 1 that I target knows that he is not in the way and will be unprepared for my “assistance.” Gosh, I’m sorry I killed someone, but it kinda looked like those others might die.
So it’s the incomplete information that could make me choose the less beneficial option in the contrived small loss vs. large forsaken gains scenario.
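Miguel’s point about distrusting the promised gain can be written as a small expected-value calculation. The 50/50 gamble structure and the trust-discount probabilities below are hypothetical assumptions introduced for illustration:

```python
# Hypothetical sketch: if I believe the promised gain will actually be
# paid only with probability p_gain_paid, while the loss is collected
# with near-certainty, declining a "win $100 / lose $90" coin flip can
# be rational even though its nominal expected value is positive.
def expected_value(gain, loss, p_gain_paid=1.0, p_loss_collected=1.0):
    # 50/50 gamble; each outcome discounted by how much I trust it.
    return 0.5 * gain * p_gain_paid - 0.5 * loss * p_loss_collected

# Fully trusted offer: take the bet (expected value +$5).
assert expected_value(100, 90) > 0
# Gain trusted only 80%: the same bet now has negative expected value.
assert expected_value(100, 90, p_gain_paid=0.8) < 0
```

On this reading, subjects who “fail” such experiments may simply be pricing in an asymmetric trust parameter the experimenter assumed away.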
Hey Miguel,
Thanks for this. I have a pretty similar response to all those forced-choice “lose-lose” psychology experiments you describe here. I find them so contrived, and I also just don’t trust their evaluation of the situation. I’d rather lose trying to redefine the lose-lose game, as in Kirk’s response to the “Kobayashi Maru” test, lore in the original Star Trek and shown on film for the first time in the great 2009 Star Trek reboot. Funny how they rarely seem to address the option of ignoring the data and trying to challenge the assumptions in their experiments. That makes them pretty darn unrealistic, in my opinion. Universe forbid, if a Nazi forced me to take one or another of my children on the train with me, condemning the other to death, as in the great movie Sophie’s Choice, I’d be spending all my mental energy on figuring out how to kill or disable that person, or otherwise change the situation causing the forced choice.
Cheers,
JS