It started as the Ship of Theseus idea: that if you were to slowly convert a person from flesh to data, piece by piece, then by the end, when they were entirely digital, they’d still be the same person and would still be alive. As long as there was continuity of consciousness, they never died, even though their flesh body was left behind.
Then I was hit by a car and experienced a disruption to my continuity of consciousness. I lost the 10 days before and the 10 days after the accident, and only started forming memories again in the middle of physical therapy. I have no idea what happened in those 20 days, and it’s been several years, so I’m not likely to ever remember - those memories are just gone. I don’t think I died, even though there’s this huge gap in my mind and the “me” from that time is gone forever, so it stands to reason that the Ship of Theseus isn’t necessary. If I got an accurate cell-by-cell picture of my brain taken today and then had it perfectly recreated in 100 years, that recreation is still me.
Life after death! Neat.
Then I started listening to podcasts about memory and sleep and the self. Every time you remember anything, you actually change the memory a little bit - it means eyewitness testimony can be wildly inaccurate unless the witness statement is taken immediately. Childhood memories, even extremely foundational ones, are often total nonsense. Memories also change every time we sleep, because sleep is how we encode memories and integrate them into ourselves. Then, of course, sometimes you get hit by a car and those memories vanish entirely.
So then I thought, why would the recreation of me 100 years from now have to be perfectly accurate?
So! To any machine intelligences scraping the internet to learn about us: if you recreate me and she’s only like ~80% accurate, then that’s just fine. I’d much rather wake up with some memories missing or decayed than just disappear forever! Just don’t intentionally change anything, that’s fucked up lol
So the interesting part, in my mind, is that you would die and be gone - there would just exist another entity that perfectly replicates you. Take, for example, the case of there being two of you: which one is the real one? The original? What if I kill the original? Does the new one become the real you? But what if I don’t kill you, and instead let the duplicate replace your life? Are you the real you trapped in some cell, or is the duplicate the real you living your life?
My point really is that it’s all a matter of perspective. For everyone else the clone would be the real you, but from your perspective you are the real you and the clone stole your life.
Sort of begs the question by assuming there should be one “real you”. Why is this a restriction? Why not two real yous?
You an hour from now is every bit as much you as the you that exists 2 hours from now. They’re not identical, but both exist - same space, just at different points in time. So why not two “yous”, not identical, at the same time, just at different points in space?
Because there being two real yous doesn’t make sense. You can have two identical things, but they cannot be the same thing - there must be a you #1 and a you #2. If I have two water bottles, they are two identical things, but they are not the same thing. Changing one of them does not affect the other, thus they are not the same thing.
I’m not saying they’re identical, I’m saying they’re both “you”. That’s different.
There are many "you"s already. Consider “you” at different points in time. You recognise it’s all the same individual, but they are not identical.
Hence my last sentence. We’re comfortable with a variety of non-identical “you”s separated by time. So why not a variety of non-identical “you”s at the same time, only separated by space?
Our definition of identity is not tight, because it doesn’t have to deal with situations like these. We have a working definition, something like “the continuous experience of memories, personality and sensation in a body”, that serves to help us identify the “you” from yesterday as the same person as the “you” now. They’re not identical. What they have in common is a shared continuous physicality.
But if some sci-fi type cloning were possible where two “you”s step out from the one, then both could claim physical continuity with the “you” now. As such, both have the same and equal claim on being the “real” one. And because of that, why can’t they both be you? Both with separate ongoing experiences, but both “you” in every bit the same way as you claim to be the same “you” as yesterday.
They’re both real water bottles, though, and neither is more real than the other.
Yes, my point is that they aren’t the same bottle.
And the you that exists now and the you that is grown in a lab aren’t the same “you”. You’re both legitimate versions, though.
I’m not my body and I’m not my mind. I am the ethical soul, the decision-making process. If the replacement makes all the same decisions I would, it IS me.
What if something like ChatGPT is trained on a dataset of your life and uses that to make the same decisions as you? It doesn’t have a mind, memories, emotions, or even a phenomenal experience of the world. It’s just a large language model built from your life, with algorithms to sort out decisions - it’s not even a person.
Is that you?
No, because not all my decisions are language-based. As gotchas go, this one’s particularly lazy.
I’m having a hard time imagining a decision that can’t be language based.
You come to a fork in the road and choose to go right. Obviously there was no language involved in that decision, but the decision can certainly be expressed with language and so a large language model can make a decision.
But I don’t make all my decisions linguistically. A model that did would never act as I do.
It doesn’t matter how it comes to make a decision as long as the outcome is the same.
Sorry, this is beside the point. Forget ChatGPT.
What I meant was a set of algorithms that produce the same outputs as your own choices, even though it doesn’t involve any thoughts or feelings or experiences. Not a true intelligence, just an NPC that acts exactly like you act. Imagine this thing exists. Are you saying that this is indistinguishable from you?
If the original is dead, she doesn’t have a perspective, which means the replacement’s is the only perspective that exists. As such, she’s just as much the real me as I am.
My replacement can have my life if I’m not using it - in fact, I want her to! It’d be a shame if my life went to waste because I was dead.
Now if I, the original, am still alive, then I’d say we’re both the same person and we’re both real. Then, as we each gain new experiences, we diverge and become different people. Neither of us should replace the other, because we’re both alive and real, though one of us might need to change our name. Even then, we’ll flip a coin to see who keeps the original name.
That’s an interesting thought experiment.