Self-interest

Jared Moore and David Gottlieb

Activity: Is there a moral argument every rational being must accept?

An argument gives reason in favor of a conclusion. It can be based on one or more assumptions (“premises”). To accept an argument is to accept its conclusion on the basis of the given reason.

In groups, see if you can come up with a moral argument that every rational being must accept.

That is: if anyone doesn’t accept the argument, they are being irrational. You can choose any moral conclusion as the conclusion of the argument. It’s probably easiest to choose a very uncontroversial moral conclusion to target.

Come up with an argument for a moral conclusion that can’t be rejected by any rational being!

Roadmap

  1. Framing moral rationalism
  2. Against rationalism: Williams
  3. The moral sentiments, continued: Smith, de Grouchy
  4. How to build it

Framing moral rationalism

We previously framed sentimentalism …

  • negatively, in relation to rationalism: that morals do not derive from reason alone, and
  • positively: that morals are based in particular sentiments.

We can operationalize both sentimentalism and rationalism in terms of arguments.

  • If rationalism is true, then there is a moral argument that every rational being must accept (i.e., that it would be irrational not to accept).
  • If sentimentalism is true, there is no moral argument that every rational being must accept, unless we also assume they possess certain sentiments.

Some possible predictions of rationalism

If rationalism is true and:

  • we meet intelligent aliens, …
    • … would they be persuadable by moral argument?
  • we construct a human-level artificial intelligence, …
    • … would it share our moral commitments?

Williams against rationalism

Williams defends a version of Hume’s position that reason is the slave of the passions. His “internal reasons” position resembles the way we have characterized sentimentalism in terms of arguments; the “external reasons” position he attacks resembles our characterization of rationalism. But the two pairs might not overlap perfectly.

Williams helps capture the Humean idea that, for someone to be motivated by an argument, the argument must appeal to a motivation (a passion) they already have.

Internal and external reason statements

A reason statement.

A statement of the form “A has a reason to ϕ,” where A is an agent and ϕ is an action. The connection between rationality and reason statements might be: “A rational agent does what they have most reason to do.”

Internal reason interpretation.

On the internal reason interpretation, a reason statement can only be true of A if A has some motive which counts in favor of ϕ-ing. We’ll call the collection of an agent’s motives their “subjective motivational set,” S.

Internal and external reason statements

External reason interpretation.

On the external reason interpretation, a reason statement can be true even if A has no motives in S that count in favor of ϕ-ing.

Internal reasons thesis.

Reason statements are only ever true if interpreted internalistically. All external reason statements are either false or meaningless. It would then follow that, for all A, ϕ, A is never rationally required to ϕ unless there is something in their S that counts in favor of ϕ-ing.

Williams’s argumentative strategy

  1. Internal reason statements are perfectly understandable.
  2. Internal reason statements are adequate to the jobs we use reason statements for.
  3. External reason statements are difficult to find sensible interpretations for.

Can internal reason statements do the work we want reason statements to do?

  1. D ∈ S does not give A a reason for ϕ-ing if D or its connection to ϕ-ing is based on a false belief.
  2. As a corollary, A can falsely believe internal reason statements about themselves.
  3. An agent can, through deliberation, come to accept an internal reason statement they didn’t previously. This can lead to adding or subtracting elements from S. (Compare Tracy’s question from Tuesday.)
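The three points above can be made concrete with a toy model. This is a hedged sketch, not anything Williams himself formalizes; the names `Motive` and `has_internal_reason` are illustrative inventions. The idea: a reason statement comes out true only when some element of S favors the action and neither the motive nor its connection to the action rests on a false belief (Williams's own example: wanting a gin and tonic gives no reason to drink from a glass that, unbeknownst to the agent, contains petrol).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Motive:
    """An element D of the agent's subjective motivational set S."""
    description: str
    favors: frozenset                    # actions this motive counts in favor of
    rests_on_false_belief: bool = False  # is D (or its link to the action) mistaken?

def has_internal_reason(S, action):
    """A has a reason to phi iff some D in S favors phi-ing and
    neither D nor its connection to phi-ing rests on a false belief."""
    return any(action in d.favors and not d.rests_on_false_belief for d in S)

# Williams's gin/petrol case: the desire rests on a false belief
# (the agent thinks the glass holds gin; it holds petrol).
d = Motive("wants a gin and tonic",
           favors=frozenset({"drink what's in this glass"}),
           rests_on_false_belief=True)
S = {d}
print(has_internal_reason(S, "drink what's in this glass"))  # False

# Deliberation can add or subtract elements of S:
S = (S - {d}) | {Motive("wants to stay healthy",
                        frozenset({"avoid drinking petrol"}))}
print(has_internal_reason(S, "avoid drinking petrol"))  # True
```

The model also captures the corollary in point 2: an agent who doesn't know their belief is false will affirm an internal reason statement about themselves that is in fact untrue.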

External reasons and rationalism

Williams explicitly takes aim at rationalism in his critique of external reason statements.

[T]he external reasons statement itself will have to be taken as roughly equivalent to, or at least as entailing, the claim that if the agent rationally deliberated, then, whatever motivations he originally had, he would come to be motivated to ϕ. (109)

Compare what we said above:

If rationalism is true, then there is a moral argument that every rational being must accept (i.e., that it would be irrational not to accept).

Williams on Kant

  • According to Kant, morality must take the form of a categorical imperative: a command that applies unconditionally to all rational beings.
  • If a command applies unconditionally, then it applies without regard to the contents of S.
  • Accordingly, categorical imperatives seem to be external reason statements.

Morality and blame

  • Here is a picture of moral wrongdoing and blame:

    1. You do a moral wrong, ϕ.
    2. I blame you for it.
    3. You admit you were wrong and you shouldn’t have ϕ-ed.
  • For you to admit you were wrong, you must be able to reason your way to the conclusion that you shouldn’t have ϕ-ed.

  • What if you don’t have anything in your S that gives you a reason not to ϕ?

  • Williams’s argument raises the worry that some rational beings might be outside the scope of morality, just because they lack an appropriate element in their S.

How does intelligence relate to an agent’s ends?

The Orthogonality Thesis.

Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal. (Bostrom 2012, “The Superintelligent Will”)
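A minimal way to picture the thesis (my sketch; the particular levels and goals are invented placeholders, not Bostrom's): treat intelligence level and final goal as independent axes, so that every pairing is, in principle, a possible agent.

```python
from itertools import product

# Two independent axes: how capable an agent is, and what it ultimately wants.
intelligence_levels = ["insect-level", "human-level", "superintelligent"]
final_goals = ["maximize paperclips", "promote human flourishing",
               "count grains of sand"]

# Orthogonality: nothing rules out any combination a priori.
possible_agents = list(product(intelligence_levels, final_goals))
print(len(possible_agents))  # 9: every level pairs with every goal
```

If the thesis is right, high intelligence gives no guarantee of humanlike moral commitments, which is the worry the slide above raises for rationalism.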

A bit more about the positive picture of moral sentiments

Sophie de Grouchy: starting from a richly social picture

Each person finds herself, for all necessities—her well-being and life’s comforts—in a particular dependence on many others […] This particular dependence on a few individuals begins in the crib; it is the first tie binding us to our fellow creatures. (de Grouchy, Letters, quoted in Buckner 2023, 305)

We learn to share each other’s feelings because this is how we survive as babies. Sharing each other’s feelings is more basic than having our own feelings.

Is sentiment enough for morality?

  • Sympathy helps us live together, because we cooperate better when we care about each other’s interests.
  • But is sentiment enough for morality?
  • If we built sentimental machines, would we fall short of building moral machines?

Possible objections:

  • Our sympathetic imagination can be wrong (Anushka)
  • If our sympathetic habits are learned, how can we learn to challenge the biases of our environment? (Jolie)
  • Can we sympathize with agents different from us? (Eli)

Sentimentalist AI

Two approaches

  1. Can AI systems figure out other people’s beliefs and desires?

  2. Can AI systems be said to have motivations?

Can AI systems figure out other people’s beliefs and desires?

  • Yes, largely under the heading of “theory of mind”

  • There’s some quibbling about the kind of architecture needed: whether a system has to be “born with it” (a symbolic architecture in some form) or whether this can be learned (an empiricist or “learned” approach).

  • We’ll talk about the evolutionary and psychological evidence for these things more in week 5.
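A standard probe in this literature is the false-belief (“Sally–Anne”) task. Here is a toy rendering of it (my sketch, not any particular benchmark): the key capacity is keeping the world state and another agent’s belief state separate.

```python
# Toy false-belief (Sally-Anne) setup: the world changes while Sally
# is away, so her belief and the world come apart.
world = {"marble": "basket"}
sally_belief = dict(world)   # Sally watched the marble go into the basket

world["marble"] = "box"      # Anne moves the marble while Sally is absent

# A system with theory of mind predicts where Sally will LOOK
# (her belief), not where the marble IS (the world).
predicted_search_location = sally_belief["marble"]
print(predicted_search_location)  # basket
print(world["marble"])            # box
```

A purely “world-tracking” system would answer “box” and fail the task; attributing the (now false) belief is what passing it requires.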

Can AI systems be said to have motivations (emotions, an affective response, a basic passion)?

  • What counts as having a motivation?

    • (What would it mean for AI systems to have the “right” motivations?)
  • What kind of (AI) architectures might we use here?

  • We’ll talk about this more in week 6, but it also relates to identity and selfhood, which we’ll discuss in week 4.

References