Do we have agents already?

Jared Moore and David Gottlieb

Nutshell

If you’re making an artificial moral agent from the ground up, what do you need?

Motivation

Trolleys

The classic trolley problem: a runaway trolley will kill five workers unless you pull a switch that diverts it onto a sidetrack, where it will kill one worker instead.

The footbridge variant of the trolley problem: the only way to save the five is to push a large man off a footbridge into the trolley’s path; his body will stop the trolley, but he will be killed.

“Loop”: Suppose the switch could send the trolley down a sidetrack that loops back to the main track; however, this will stop the trolley from hitting the five workers because a single, large worker is currently on the sidetrack, and the trolley, hitting him, will stop before rejoining the main track.

“Beckon”: As before, the runaway trolley will strike and kill five workers if not stopped. You are at some distance from the track, with no access to a switch, but you see a large man standing on the other side of the track, facing in your direction but unable to see the trolley approaching. If you conspicuously beckon to the man, encouraging him vigorously to come in your direction, he will step onto the track and immediately be struck and killed by the trolley, stopping it before it hits the five workers.

“Wave”: You are standing down the track from the five workers, who are looking in your direction and do not see the trolley approaching them from behind. If you wave vigorously to the side, encouraging them to step in that direction, the five workers will step off the track and be saved. However, another worker who is looking your way and who is initially standing alongside the track will also see your waving gesture and step in the same direction. This will place him on the track, where he will be struck from behind and killed.

What does Railton want us to take away from this?

“How would it feel to perform this action? Could I actually see myself doing it? What kind of person would perform it? What would others think, and could I face them?” (Railton 2020, 18)

What’s good enough?

What do you need to learn in order to be a moral agent?

Is it sufficient simply to have motivation?

Or, further, must you be motivated to attend to features of social significance?

  • Do you have to be able to generalize, e.g., to tell what counts as fair across a variety of scenarios?

Learning what, learning why

Does it matter merely that you are motivated, or also how you are motivated (i.e., how similarly to people)?

How, then, might artificial systems come to be appropriately sensitive to ethical concerns? (Railton 2020)

  • We can’t all be selfish!

References

Railton, Peter. 2020. “Ethical Learning, Natural and Artificial.” In Ethics of Artificial Intelligence, edited by S. Matthew Liao. Oxford University Press. https://doi.org/10.1093/oso/9780190905033.003.0002.