Jared Moore and David Gottlieb
An argument gives a reason in favor of a conclusion. It can be based on one or more assumptions (“premises”). To accept an argument is to accept its conclusion on the basis of the reason given.
In groups, see if you can come up with a moral argument that every rational being must accept.
That is: if anyone doesn’t accept the argument, they are being irrational. You can choose any moral conclusion as the conclusion of the argument. It’s probably easiest to choose a very uncontroversial moral conclusion to target.
Come up with an argument for a moral conclusion that can’t be rejected by any rational being!
We previously framed sentimentalism …
We can operationalize both sentimentalism and rationalism in terms of arguments.
If rationalism is true, then there is a moral argument that every rational being must accept (i.e., that it would be irrational not to accept).
Williams defends a version of Hume’s position that reason is the slave of the passions. His “internal reasons” theory is similar to how we have characterized sentimentalism in terms of arguments; his “external reasons” theory is similar to our characterization of rationalism. But the two pairs might not overlap perfectly.
Williams helps capture the Humean idea that, for someone to be motivated by an argument, the argument must appeal to a motivation (a passion) they already have.
A reason statement is a statement with content like, “A has a reason to ϕ,” where A is an agent and ϕ is an action. The connection between rationality and reason statements might be: “A rational agent does what they have most reason to do.”
On the internal reason interpretation, a reason statement can only be true of A if A has some motive which counts in favor of ϕ-ing. We’ll call the collection of an agent’s motives their “subjective motivational set,” S.
On the external reason interpretation, a reason statement can be true even if A has no motives in S that count in favor of ϕ-ing.
Reason statements are only ever true if interpreted internalistically. All external reason statements are either false or meaningless. It would then follow that, for all A, ϕ, A is never rationally required to ϕ unless there is something in their S that counts in favor of ϕ-ing.
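Williams’s internalist claim can be stated schematically. This is a reconstruction for clarity, not Williams’s own notation; the symbol $S_A$ stands for agent $A$’s subjective motivational set:

```latex
% Internal reason interpretation: truth conditions for reason statements
\[
\text{``$A$ has a reason to $\phi$'' is true} \iff
\exists m \in S_A \text{ such that } m \text{ counts in favor of } \phi\text{-ing.}
\]
% Consequence for rational requirement:
\[
\forall A, \phi:\quad
\neg\,\exists m \in S_A\,\bigl(m \text{ counts in favor of } \phi\bigr)
\;\Rightarrow\;
A \text{ is not rationally required to } \phi.
\]
```

The second schema makes explicit why internalism threatens rationalism: a rational requirement to ϕ would have to hold regardless of what is in $S_A$, which the first schema rules out.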
Williams explicitly takes aim at rationalism in his critique of external reason statements.
[T]he external reasons statement itself will have to be taken as roughly equivalent to, or at least as entailing, the claim that if the agent rationally deliberated, then, whatever motivations he originally had, he would come to be motivated to ϕ. (109)
Compare what we said above:
If rationalism is true, then there is a moral argument that every rational being must accept (i.e., that it would be irrational not to accept).
Here is a picture of moral wrongdoing and blame:
For you to admit you were wrong, you must be able to reason your way to the conclusion that you shouldn’t have ϕed.
What if you don’t have anything in your S that gives you a reason not to ϕ?
Williams’s argument raises the worry that some rational beings might be outside the scope of morality, just because they lack an appropriate element in their S.
Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal. (Bostrom 2012, “The Superintelligent Will”)
Each person finds herself, for all necessities—her well-being and life’s comforts—in a particular dependence on many others [ … ] This particular dependence on a few individuals begins in the crib; it is the first tie binding us to our fellow creatures. (de Grouchy, Letters, quoted in Buckner 2023, 305)
We learn to share each other’s feelings because this is how we survive as babies. Sharing each other’s feelings is more basic than having our own feelings.
Possible objections:
Can AI systems figure out other people’s beliefs and desires?
Yes, largely under the heading of “theory of mind.”
There’s some quibbling about the kind of architecture needed: whether a system has to be “born with it” (a symbolic architecture in some form) or whether this can be learned (an empiricist or “learned” approach).
We’ll talk about the evolutionary and psychological evidence for these things more in week 5.
Can AI systems be said to have motivations?
What counts as having a motivation?
What kind of (AI) architectures might we use here?
We’ll talk about this more in week 6, but it also relates to identity and selfhood, which we’ll discuss in week 4.