Jared Moore and David Gottlieb
If you’re making an artificial moral agent from the ground up, what do you need?

What does Railton want us to take away from this?
“How would it feel to perform this action? Could I actually see myself doing it? What kind of person would perform it? What would others think, and could I face them?” (Railton 2020, 18)
What do you need to learn in order to be a moral agent?
Is it sufficient simply to have motivation?
Or, further, must you be motivated to attend to features of social significance?
Does it matter that you are motivated, or how you are motivated (that is, how similarly to people)?
How, then, might artificial systems come to be appropriately sensitive to ethical concerns? (Railton 2020)