What does it mean to have a value?

Jared Moore and David Gottlieb

What is moral agency?

  • What is moral agency?
    • One of the things we anticipate being difficult about the class: there is no consensus right answer to this question.
    • There is no consensus on:
      • What moral agency means,
      • What it takes to be a moral agent, or
      • What the significance of something having moral agency is.
  • In broad outlines, a moral agent is something that is capable of acting rightly or wrongly.

Moral agency vs. moral patiency

  • There are a lot of conceptual distinctions we can make in this space. Even these should not be taken for granted, though: A and B can be conceptually distinct even if, in fact, every A is a B and vice versa.
  • As an example, moral agency vs. moral patiency.
  • We saw: moral agent is something that is capable of acting rightly or wrongly.
  • Moral patient: something whose interests matter for moral purposes.
    • I.e., you’re a moral patient if actions that affect you have moral significance because of their effects on you.
  • Can you conceive of a moral patient that’s not a moral agent?
    • E.g., suppose that a rabbit has no moral responsibilities, but it’s morally wrong to make a rabbit suffer (at least without a good reason).
  • Can you conceive of a moral agent that’s not a moral patient?

Moral patiency, cont’d

  • So we have a conceptual distinction: agency vs. patiency. But this doesn’t mean the concepts are not related in some way.
    • How do you think they are related?
    • E.g., if morality is a reciprocal obligation between equals, then the agents and patients seem to coincide. In general, contractarian theories of morality make agency and patiency connected.
    • Kant in a nutshell:
      • Moral agency means, making normative rules for yourself.
      • Moral patiency means, having others make normative rules for you.
      • Only those who can make normative rules for themselves can be subject to normative rules.
      • Therefore, agency and patiency coincide.

Moral patiency, cont’d

  • So we’ve done two things here.
    • One, made an important conceptual distinction: moral agency vs. moral patiency.
    • Two, illustrated that there’s no consensus on how these concepts relate to each other.
      • They are conceptually distinct, but, depending on what you think, maybe they completely overlap.
      • This is a microcosm of the kind of philosophical work we’ll be trying to do.
      • You’ll be trying to keep these concepts separate in your head, while at the same time thinking about the connections among them.
  • Philosophy is like 90% thinking about the connections between concepts. Hope you like that.

Unpacking moral agency

  • A moral agent: something that is capable of acting rightly or wrongly. Let’s figure out what we mean by this.
    • Another feature of the philosophical method is, we’re not going to be prematurely satisfied that we’ve answered these questions. We’re not gonna say “good enough for government work” if we don’t know what we mean by “capable,” for example. We’re going to investigate as far as we can.
    • That’s a difference between philosophy and other areas of life. In other areas of life, you might leave some questions unanswered and say, “I understand well enough for some practical purpose.” Then you go out and pursue that purpose. Then maybe you come back when you’re done and return to contemplation.
    • At the same time, philosophy is part of life. Sometimes, deeper contemplation is exactly what we need in the moment.

Why now?

  • Hypothesis 1: now is the perfect time to think deeply about AI and moral agency.
    • AI research is moving really quickly, and as we’ll see, lots of people are building systems that relate to moral agency in some way: to imitate, predict, or exercise moral decision-making; to subject AI systems to moral constraint; to subject AI system users to moral constraint.
    • At the same time, the overlap between people who have technical expertise and people who have thought carefully about the conceptual issues is pretty small.
    • So if you can be in that overlap, you’ve got an edge.
    • We hope this class can help people develop that edge.

A picture of agency

  • Two main ideas to unpack (overview both before drilling in):
    • Capable of acting rightly or wrongly in the practical sense. Having practical abilities.
      • Practical ability to choose among actions.
      • Detecting morally salient features of a situation.
      • Morally salient features of a situation matter to you in the right way.
    • Counting as a moral agent: being the kind of thing whose actions can count as either right or wrong. Capable of being treated as a moral agent.
      • One thing that really sticks out here: ability to be held accountable or responsible. (Show IBM exhibit if available.)
      • We might also include: having a right to make moral decisions.
    • (Ask at this stage, how are the two connected? One possibility here is pointing to “ought implies can.” You can’t be held responsible if you’re not capable.)

Why care what counts as a moral agent?

  • I distinguished practical questions of capability from what counts as a moral agent. We might ask, why care about anything other than the practical questions of capability?
    • If a robot is cutting me up into little pieces, do I care whether this counts as a morally wrong action?
    • Why not care exclusively about what it can do rather than what it counts as?
    • This is a live question to me.

Why care what counts as a moral agent? cont’d

  • This kind of question is often raised by AI systems and we’ll see it throughout the course.
    • If a system produces moral decisions like ours, does it matter whether it gets there by a reasoning process like ours?
    • If a system accurately reproduces people’s moral judgments, does it matter whether it is making its own moral judgments (vs. just predicting moral judgments)?

Possible reasons to care

  • Two possible thoughts (ask as question):
    • When we think about other human agents, we care how they reason, not just how they overtly behave.
    • When we ourselves decide how to act, we don’t just predict our or anyone else’s moral judgments. We have to reason them out. AI systems promise to be like us in various ways.
  • This is why we’ve designed the class like this.
    • You’re reading an enormous variety of material: AI research, cognitive science, classics of philosophy.

Ultimate goals for the class

  • This leads to two final hypotheses for the class:
    • Hypothesis 2: Thinking about our own moral agency and reasoning is a way to gain insight into agency and reasoning in general, including in the case of AI.
    • Hypothesis 3: Thinking about how moral agency and reasoning work or might work in AI systems is a way to gain insight into our own agency and our own minds.
