Jared Moore and David Gottlieb
Last class we talked about what sentience is.
If how we treat a being matters morally, that being is a moral patient.
To determine which beings are moral patients, we need to know what about a being makes it matter how we treat it.
The limit of sentience … is the only defensible boundary of concern for the interests of others. To mark this boundary by some other characteristic like intelligence or rationality would be to mark it in an arbitrary manner. Why not choose some other characteristic, like skin color? (Singer 1975)
Strictly speaking, it is not exactly sentience that Singer means. It is “the capacity to suffer and/or experience enjoyment” – i.e., not only to have experiences but to have positive or negative experiences.
It may one day come to be recognized that the number of legs, the villosity of the skin [furriness], or the termination of the os sacrum [having a tail] are reasons equally insufficient for abandoning a sensitive being to [an unpleasant] fate. What else is it that should [determine whether a being is a moral patient]? … The question is not, Can they reason? nor Can they talk? but, Can they suffer? (Bentham 1789)
Moral zombies would be creatures who act indistinguishably from us as moral agents, but for whom there is nothing it is like to be them. (Véliz 2021)
“Moral zombies” would be like psychopaths. Since they are not sentient, they a fortiori don’t experience sympathetic pleasure or pain.
What we think of as values will never be values for an AI as long as it cannot feel the warmth of the sun or the sharpness of a knife blade, the comfort of friendship and the unpleasantness of enmity. At most, for an AI that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons. (Véliz 2021)
Should we assign power and authority to things that can’t feel?
In the absence of “direct” evidence about a system’s sentience, we assess it for behavioral and architectural features that are associated with known cases of sentience.
The basic consequence is, of course, that if AI systems are moral patients, then we are morally required to take their interests into account when we act.
However, at present, we don’t know whether AI systems are or will soon be moral patients.
Two kinds of uncertainty: empirical uncertainty about whether AI systems are (or will soon be) sentient, and moral uncertainty about what we would owe them if they were.
According to Long et al. (2024), both kinds of uncertainty are present. Furthermore, they argue, we should treat them the same in our decision-making.
Can you think of a time when you had to act without knowing whether it was right or wrong? How did that uncertainty affect your decision-making?
For each of the following, decide whether you think it would be acceptable to do to a sentient AI system.
Long et al. (2024) suggest:
Can you think of anything else? What’s not on this list?
A preliminary conclusion: even if we grant that AI is or might be sentient, we know very little about how to have appropriate concern for its welfare. This might be partly because we know very little about what its experience could be like.
Long et al. (2024) reject the idea of implementing “red lines” to stop development if certain markers of sentience emerge.
Why?
Do you agree?
Take a moment and compose an email to David and Jared. It should say in your own words what you’re doing right now. Don’t overthink it, just write down the first thing that comes to mind and hit send.
Take 30 seconds and write down anything you can think of about your experience of writing the email.
Did it feel like anything? Was it pleasant? Unpleasant? If so, how did it feel that made it pleasant or unpleasant?
This is an opportunity to care for ourselves by caring about AI. Understanding the quality of experience is morally significant if AI is sentient, and it is significant to us for how we live our own lives.
We’re giving you a small homework activity, due Tuesday. It involves reflecting on pleasures you experience in the course of your normal life. See handout.