Jared Moore and David Gottlieb
If the self is so unreal, how come trillions of organisms work with something like a “self” every day?
Special concern for one’s own future would be selected by evolution. Animals without such concern would be more likely to die before passing on their genes. … [I]f some attitude has an evolutionary explanation, this fact is neutral. It neither supports nor undermines the claim that this attitude is justified. But there is one exception. This is the claim that, since we all have this attitude, this is a ground for thinking it justified. This claim is undermined by the evolutionary explanation. Since there is this explanation, we would all have this attitude even if it was not justified. (Parfit 1984, 308)
It’s your first day as a crewmember of the famous Federation starship USS Enterprise! Time to report for duty by beaming aboard! As a reminder, this is how the transporter works. At the beginning of your journey, a computer scans your physical structure molecule-by-molecule. This process destroys your body. Then, a digital copy of the scan is sent to your destination. At your destination, a computer builds a new body that’s an exact copy of your original body. Then you can report for your exciting new duty! You’ve never been transported before. It’s your turn. Ready to come aboard?
Now imagine that “you” is some AI system.
How do our answers to the thought experiment change, if at all?
When you try to implement selves, you either find yourself already committed—or sometimes it’s just a reasonable next step—to building out some of the main features of personal identity over time. (Millgram 2025)
…
“Evolution, in animals, puts a premium on coherent action, and from a certain point onward, the way to act effectively as a self is to have a sense of oneself, as a unit of that kind. This sense of self is entirely tacit or implicit at first, but can become less tacit as the complexity of behavior continues to evolve.” (Godfrey-Smith 2020, pg. 259)
“The view I’m defending here does, in a way, agree that minds exist in patterns of activity, but those patterns are a lot less ‘portable’ than people often suppose; they are tied to a particular kind of physical and biological basis.” (Godfrey-Smith 2020, pg. 270)
The claim is that there is no teletransporter that can produce an exact replica.
Further: Any conceivable transformation that results in a psychological connection (a qualitative one), one might argue, maintains a physical connection (a numerical one).
And so personal identity matters, at least in this biological world.
(As having a self)
(And does AI count?)
In this way, we can take Godfrey-Smith to argue that the only way to approximate something like the teletransporter is to vary the degree of similarity between the systems we’re considering.
Our challenge: what is a thought experiment similar to the teletransporter but that is biologically possible?
Does this thought experiment license the same kind of conclusions that Parfit would want it to?
Does this tell us anything about whether AI systems have selves?
(Figure legend — light blue: hidden; pink: sensory; red: active; blue: internal)

(Humphrey 2006)


“If biological systems must minimise their entropy, and entropy is average information, then it follows that they must keep the flow of information they process to a minimum.” (Solms 2021)
“Friston free energy is a quantifiable measure of the difference between the way the world is modeled by a system and the way the world really behaves.” (Solms 2021)
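The two Solms quotes can be made slightly more concrete. As a minimal illustrative sketch (ours, not Solms’s or Friston’s implementation), Shannon entropy captures “average information,” and a KL divergence is one simple way to quantify the mismatch between how a system models the world and how the world really behaves; the full variational free-energy functional is richer than this, and the function names here are our own.

```python
import math

def entropy(p):
    """Shannon entropy in bits: the average information of distribution p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(model, world):
    """KL divergence D(world || model): the information cost, in bits, of
    modeling the world with `model` when `world` is how things really behave.
    A simple stand-in for the model/world mismatch that free energy bounds,
    not the full free-energy functional."""
    return sum(w * math.log2(w / m) for w, m in zip(world, model) if w > 0)

# A world with four equiprobable states carries 2 bits of average information.
world = [0.25, 0.25, 0.25, 0.25]
print(entropy(world))  # → 2.0

# A perfectly matched model incurs zero divergence...
print(kl_divergence(world, world))  # → 0.0

# ...while a mismatched model pays a positive information cost.
model = [0.7, 0.1, 0.1, 0.1]
print(kl_divergence(model, world))  # positive
```

On this toy reading, “minimising entropy” and “minimising the model–world gap” are both about keeping surprising, costly information to a minimum.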