Jared Moore and David Gottlieb
Is there something in your brain that makes you moral?
and does this somehow “explain away” morality?
Morality is just social rationality. It finds “good solutions to social problems” (Churchland 2018).
What’s right is what fits the circumstances.
Stronger claim:
Weaker claim:
| | Cooperation (in the context of competition) | Second-Personal Morality (obligate collaborative foraging w/ partner choice) | “Objective” Morality (life in a culture) |
|---|---|---|---|
| Prosociality | Sympathy | Concern | Group Loyalty |
| Cognition | Individual Intentionality | Joint Intentionality (partner equivalence; role-specific ideals) | Collective Intentionality (agent independence; objective right & wrong) |
| Social interaction | Dominance | Second-Personal Agency (mutual respect & deservingness; 2P (legitimate) protest) | Cultural Agency (justice & merit; third-party norm enforcement) |
| Self-Regulation | Behavioral Self-Regulation | Joint Commitment (cooperative identity; 2P responsibility) | Moral Self-Governance (moral identity; obligation & guilt) |
| Rationality | Individual Rationality | Cooperative Rationality | Cultural Rationality |
Tomasello (2016)
Morality—social behavior—emerges from
We’ll mainly focus on the first two.
Churchland (2018)
The equilibrium or balance, so to speak, between his intellectual faculties and animal propensities, seems to have been destroyed. He is fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating. (Harlow 1868)

Lim, Murphy, and Young (2004)
Mammals whose circuitry outfitted them for offspring care had more of their offspring survive than those inclined to offspring neglect. (Churchland 2018)
Think of a time when you have felt attached. Perhaps you held an infant. You gave a hug. Sex.
First, what did it feel like?
Further consider:
You’re unwinding at home after your trip, tired from last night’s parties. One more year. What will your graduation be like? Will you continue with the well-heeled venture you’re interning at this summer? Your mother steps in. “Anything you don’t box up we’ll handle.” Take to the dump, she means. You hold up [Mr. Snufflekins]. You wonder: would it be wrong to throw him away?
(Swap [Mr. Snufflekins] with some replaceable childhood object you feel attached to.)
Rationally, it seems as if objects like Mr. S shouldn’t matter—you could just get another. And yet we feel attached to him.
Does this mean that our attachment system has gone awry?
Or, conversely, that rational ideas of what should matter fail to account for proper human morality?


Mundy and Newell (2007)
a device able to recognize successful and unsuccessful applications of the attentional apparatus to shared intentional frames […] If such a device can recognize what are effectively prosocial and antisocial behaviors, it could change the process of attending to those behaviors by piggybacking onto the extant learning mechanisms of the brain
We then rear that bonobo as we would a human child. The experiment works well enough; it results in behavior (along Tomasello’s dimensions) qualitatively similar to that of a five-year-old human child.
Is this a conceivable experiment? (More on conceivability next week.)
Would the experiment result in a moral agent?
Moore (2023)
People vary in their abilities to attend to social situations, to engage in the behavior putatively necessary for moral agency.
(Think of psychopathy, dementia, opioid addiction, autism spectrum disorders, and hydrocephaly.)
Does that mean some people have more or less moral agency?
Can we draw a non-arbitrary behavioral boundary between what counts as a moral agent and what doesn’t?
(E.g. those sharks seem like they could access aspects of sociality and therefore could eventually yield recognizable moral agents.)
If not, should we use moral agency as a requirement for moral patiency?
What else would we use?
(We’ll talk about this more next week.)
If we have a device able to recognize prosocial and antisocial stimuli, why bother with bonobos?
Say that we hook that device up to some actuators. (We embody it in a robot or simply use it as the reinforcer in RLHF.)
The low-level constraints this system faces would be very different from those humans face. (It doesn’t use oxytocin, e.g.)
Does this matter?
How close would we need to match the context (environment) of the AI and humans? (Would we need to raise it like a child?)
What “counts” as sociality? (e.g. as satisfying Tomasello’s criteria)
E.g. do you have to feel an emotion to be driven to act prosocially?
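The “detector as reinforcer” idea can be made concrete with a toy sketch. Everything here is hypothetical: the prosociality scores are hard-coded stand-ins for Mundy and Newell’s device, and the “policy” is a one-step bandit trained by REINFORCE rather than a real RLHF pipeline.

```python
import math
import random

# Hypothetical stand-in for a prosociality detector: it simply labels a few
# canned actions as prosocial (+1) or antisocial (-1).
def prosociality_reward(action):
    prosocial = {"share food", "comfort partner"}
    antisocial = {"hoard food", "ignore partner"}
    if action in prosocial:
        return 1.0
    if action in antisocial:
        return -1.0
    return 0.0

ACTIONS = ["share food", "comfort partner", "hoard food", "ignore partner"]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train(steps=2000, lr=0.1, seed=0):
    """REINFORCE on a one-step 'bandit': the policy is one logit per action."""
    rng = random.Random(seed)
    logits = [0.0] * len(ACTIONS)
    for _ in range(steps):
        probs = softmax(logits)
        i = rng.choices(range(len(ACTIONS)), weights=probs)[0]
        r = prosociality_reward(ACTIONS[i])
        # Policy-gradient update: move the sampled action's logit up (down)
        # in proportion to positive (negative) reward.
        for j in range(len(ACTIONS)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * r * grad
    return softmax(logits)

probs = train()
# The policy concentrates on whatever the detector labels prosocial.
```

Note that the sketch makes the worry above vivid: the system ends up “behaving prosocially” while sharing none of the low-level constraints (oxytocin, attachment) that shape human sociality.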
The mechanisms of attachment are very detailed and multilayered. Thus to describe them only as “attachment” may be to woefully reduce them and, one might argue, to mistake our model for reality.
Inspired by Ayana, Sneha, and Isabel’s comments

(Griskevicius et al. 2007)
If … men were reared under … the same conditions as hive-bees, … our unmarried females would, like the worker-bees, think it a sacred duty to kill their brothers, and mothers would strive to kill their fertile daughters; and no one would think of interfering. (Darwin 1871)
Does this give a reason for moral skepticism?
Here’s Parfit making a parallel argument in a very different context:
[I]f some attitude has an evolutionary explanation, this fact is neutral. It neither supports nor undermines the claim that this attitude is justified. But there is one exception. This is the claim that, since we all have this attitude, this is a ground for thinking it justified. This claim is undermined by the evolutionary explanation. Since there is this explanation, we would all have this attitude even if it was not justified. (Parfit 1984, 308)
Parfit is saying: if evolution explains why we all believe we have a self, then the mere fact that we all believe it is no evidence that we really do; we would believe it either way.
Can we apply the same reasoning to morality?
By the same reasoning, if we all share certain moral attitudes because of evolution, this undermines the explanation that we share them because they are true or good.
The challenge for realist theories of value is to explain the relation between … evolutionary influences on our evaluative attitudes, on the one hand, and the independent evaluative truths that [moral] realism posits, on the other. (Street 2006)
If morality evolved along with the human race, then asking how we ought to live makes as much sense as asking what animals ought to exist, or which language we ought to speak. [Binmore, 2005, p. 2] . . .
In schematic form:
1. Our moral attitudes are the product of evolutionary forces.
2. Evolutionary forces select attitudes that promote reproductive success, not attitudes that track independent moral truths.
Therefore,
3. The fact that we hold these moral attitudes gives us no reason to think they are true.
Korsgaard criticizes the whole idea that morality is supposed to consist of “independent evaluative truths” (what she calls “substantive realism”). This turns out to be an example of the naturalistic fallacy: no independent truth by itself can decide for us what we should do.
The substantive realist assumes we have normative concepts because we are aware that the world contains normative phenomena, or is characterized by normative facts, and we are inspired by that awareness to construct theories about them. But that is not why we have normative concepts. … It is because we have to figure out what to believe and what to do. … even when we are inclined to believe that something is right and to some extent feel ourselves moved to do it we can still always ask: but is this really true? and must I really do this? (Korsgaard 1996, 46–47)
Even if there were “independent moral truths,” we would still face the question of what to do. Conversely, even though our inclinations reflect selective pressures rather than “independent moral truths,” we still face the question of what to do.
Let’s ask Immanuel Kant:
In fact it is absolutely impossible to settle with complete certainty … whether there is even a single case where the maxim of an otherwise dutiful action has rested solely on moral grounds…. (Kant 2018, Ak. 4:407 / 21)
What’s the answer?
We can be fully suspicious of everyone’s actual motives in every dutiful action. Perhaps we will be right! But this does not affect the question of what we should do.
The ultimate causation involved in evolutionary processes is independent of the actual decision making of individuals seeking to realize their personal goals and values. The textbook case is sex, whose evolutionary raison d’être is procreation but whose proximate motivation is most often other things. The fact that the early humans who were concerned for the welfare of others and who treated others fairly had the most offspring undermines nothing in my own personal moral decision making and identity. (Tomasello 2016, 7)
He says it doesn’t matter what the ultimate cause of our moral attitudes is – that’s just not relevant to deciding what to do.
What does he mean by saying “the textbook case is sex”?
Thrasymachus: “Justice is nothing other than the advantage of the stronger.” (Plato, Republic I, 338c)
Callicles: “The makers of laws are the majority who are weak; and they make laws and distribute praises and censures with a view to themselves and to their own interests.” (Plato, Gorgias)

It is sometimes claimed that there is no such thing as altruism. Why?
The evolutionary version of this thought is that any stable altruistic behaviors can only exist because they provided selective advantages in the past.
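That evolutionary claim can be illustrated with a minimal replicator-dynamics sketch of a donation game. The payoff numbers, the baseline fitness of 1, and the assortment parameter `r` (the chance of meeting your own type beyond random mixing) are all illustrative assumptions, not anything from the readings.

```python
def altruist_share_next(p, b=3.0, c=1.0, r=0.0):
    """One step of discrete replicator dynamics for a donation game.
    Altruists pay cost c to give a partner benefit b; r is the
    assortment probability of meeting one's own type (r=0: random mixing)."""
    pa = r + (1 - r) * p      # chance an altruist's partner is an altruist
    ps = (1 - r) * p          # chance a selfish type's partner is an altruist
    w_a = 1.0 + pa * b - c    # altruist fitness (baseline 1)
    w_s = 1.0 + ps * b        # selfish fitness
    w_bar = p * w_a + (1 - p) * w_s
    return p * w_a / w_bar    # altruists' share next generation

def run(p0, steps=200, **kw):
    p = p0
    for _ in range(steps):
        p = altruist_share_next(p, **kw)
    return p
```

With random mixing (r = 0), altruists are always outcompeted and vanish; with enough assortment (r·b > c, i.e. Hamilton’s rule), they take over. Altruism persists only where it paid off, which is exactly the evolutionary version of the thought above.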
It is sometimes claimed that the most moral acts are utterly selfless. For example, Jesus is supposed to have sacrificed himself for the redemption of all humanity. Notably, Jesus did not have any offspring.
The most extreme forms of altruism and self-sacrifice are often associated with religious systems, where individuals are encouraged to emulate figures like Jesus, who are portrayed as embodying ultimate selflessness. Such emulation can lead individuals to act in ways that may reduce their own reproductive success, suggesting that these moral systems can promote behaviors that transcend individual genetic advantage. (Alexander 1987)
If altruism is cloaked interest, how do we explain self-sacrificing behaviors?
Basically,

Suppose the Earth was struck by a meteor tomorrow, eliminating all animal life. All animals alive today would be complete failures in terms of reproductive success. Does anything you do today matter morally?
If we don’t think nihilism is right, we can return to something Tomasello said earlier:
The ultimate causation involved in evolutionary processes is independent of the actual decision making of individuals seeking to realize their personal goals and values. (Tomasello 2016, 7)
I evolved to care about my friends because being disposed to care about my friends made animals in my lineage more likely to reproduce. But I care about my friends for their own sakes. My friends would never seriously say, “You only care about me because it increases your own reproductive chances.” Because it wouldn’t be true.
I can by no means will that lying should be a universal law. For with such a law there would be no promises at all. (Kant 2018)
On Thursday, we might be joined by Professor Brian Skyrms, author of The Stag Hunt and Signals and many other books and papers. Brian is the first philosopher to be inducted into the National Academy of Sciences and has made important contributions to evolution and game theory as well as to classical philosophical questions like “What is meaning?”
To make the most of his generous visit, we’d like everyone to come prepared with one question. It can be about Stag Hunt, or Signals, or an optional paper, or about how the game-theoretic approach relates to everything else we’ve been thinking about. Thank you!
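If you haven’t met the Stag Hunt before, here is a minimal sketch of why it interests philosophers of cooperation. The payoff numbers are one conventional choice, not taken from Skyrms.

```python
# Stag Hunt: hunting stag together beats everything, but hunting hare is safe.
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("stag", "stag"): 4,
    ("stag", "hare"): 0,
    ("hare", "stag"): 3,
    ("hare", "hare"): 3,
}

def best_response(their_move):
    return max(["stag", "hare"], key=lambda m: PAYOFFS[(m, their_move)])

def is_nash(a, b):
    """A pure profile is a Nash equilibrium if each move best-responds to the other."""
    return best_response(b) == a and best_response(a) == b
```

Both all-stag (risky cooperation) and all-hare (safe defection) come out as equilibria, so the interesting question, and much of Skyrms’s work, is how a population ever gets to, and stays at, the cooperative one.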
Social Mechanisms