OpenAI famously states that its mission is to “ensure that artificial general intelligence benefits all of humanity.” Many other companies, pioneers, and visionaries also claim that their mission is the betterment of “humanity” in one way or another, because doing something for “humanity” sounds noble, selfless, and bigger than themselves.
But what do they mean by “humanity”? Who is included in that definition, and who is not? And how far can we extend it? Can it include non-human beings, like robots and advanced AIs?
What does “humanity” even mean?
If I were to ask a random person what the word “humanity” means, they would most likely say something along the lines of “displaying human qualities” or “all humans,” which is very close to the dictionary definitions. The Oxford Dictionary defines “humanity” as “people in general,” and you’ll find similar definitions in the Merriam-Webster dictionary: “the quality or state of being human” and “the totality of human beings; the human race; humankind.”
Today, saying that someone belongs to “humanity” is the same as saying they are a human, a member of the Homo sapiens species. However, that was not always the case and, sadly, still is not always the case.
Throughout history, if you were not born into the right family, or in the right place, or with a certain skin colour, or if you were born as a woman and not as a man, then you could not enjoy full human rights. You were a member of the Homo sapiens species, but you were not granted the full status of a “human.” You had to earn your status as a human, if that option was even possible.
By excluding certain groups of people from the definition of “humanity,” by dehumanising them, societies could justify things like slavery, the ownership of a person, and the treatment of a person as an object. Dehumanisation often was, and still is, the first step in justifying horrible acts to come.
Words, however, can change their meaning, and that has happened to the word “humanity.” Over the last 200 years or so, through movements such as the abolition of slavery, women’s rights, and various civil and human rights campaigns, the definition of “humanity” has expanded to include more groups of people. More people have become equal not only in the eyes of the law but also in the eyes of their fellow humans. They have finally been recognised as humans.
We expand the definition of “humanity” by recognising the “human” in a fellow human being. If the being in front of us looks like a human, behaves like a human, speaks like a human, and has other qualities we associate with a human, then there is a great chance they are a human. The more things that connect us, the more likely we are to see each other as equals and recognise each other as human.
And that definition is still expanding. There are still groups of people today all over the world campaigning and fighting to be fully recognised as human beings. This ever-expanding definition of humanity, who is included and who is not, raises an interesting question: how far can we go in extending the definition of “humanity”?
Since their discovery in the middle of the 19th century, Neanderthals were depicted as stereotypical cavemen: primitive, hairy brutes with clubs, dressed in leather clothes, communicating only in grunts, beings that, if not for their humanoid form, would be considered animals rather than our ancestors or evolutionary cousins.
However, as we learned more about Neanderthals, our perception of them started to change. We learned that they took care of their sick, ritually buried their dead, and practised art. There is even some evidence that Neanderthals had some form of belief system, were capable of symbolic thinking, and had distinct cultures.
Then we learned that Neanderthals were much closer to us on a genetic level than previously thought. Early humans interbred with Neanderthals, and today non-African humans carry about 2% Neanderthal DNA in their genomes.
Suddenly, the image of Neanderthals changed. They are no longer seen as unintelligent, brutish cavemen. They are now portrayed more like humans, more like us. Modern depictions of Neanderthals show beings not so dissimilar from us, to the point that if you dressed a Neanderthal in modern clothes and placed them on a busy street in London or New York, they would not stand out.
Even though Neanderthals are classified as a different species of human, we have humanised them and discovered in them the qualities that make us who we are. We discovered that we have more in common than previously thought, that Neanderthals are more human, more like us. We extended the definition of humanity to include our evolutionary cousins.
Things begin to get interesting when we ask ourselves: how far can we go? For example, can we extend the definition of “humanity” to include animals?
We are noticing qualities previously associated only with humans in other animals. We know elephants mourn their dead, and we now have some evidence that elephants use specific calls for individuals, similar to how we use names. We have started to decode the language of whales, possibly the closest thing to human language in the animal kingdom. We have multiple examples of animals such as crows, chimpanzees, and octopuses using tools.
I don’t think we will anytime soon, if ever, expand the definition of humanity to include animals. However, we do see the emergence of movements and campaigns advocating for animal rights, which may lay the foundation for a new concept encompassing both humans and animals.
But what is and what is not included in our definition of “humanity” could soon be challenged by advancements in AI and robotics.
Does this unit have a soul?
Science fiction often tackles the difficult and complex question of what makes a human. These stories frequently explore the boundary between human and artificial, between being born and being built. These imagined worlds are inhabited by advanced AIs, humanoid robots, and androids that look and behave like humans to the point where they either need to be labelled as such or require advanced tests, like the Voight-Kampff test from Blade Runner, to distinguish them from real humans.
Many of these robots and AIs often serve as main or supporting characters in these stories, each with its own role and sometimes even personality. They are written to engage the audience and evoke a range of responses—anger, sympathy, sadness, love.
We are good at making connections and bonding not only with each other but also with the inanimate. Many people connected with and felt something for characters like WALL·E, or felt sad when the robot in The Iron Giant declared itself Superman and sacrificed itself to save everyone.
These are just two of many examples from fiction, but the same thing has also happened in real life. In Japan, for example, some people bonded with their AIBO robot dogs so much that they held funerals for them when the dogs broke down and could no longer be repaired.
Another example of people developing a connection with a robot is Opportunity. Many people were touched after hearing that Opportunity’s last message to Earth was “My battery is low and it’s getting dark.” Although the rover’s real final message was different, this poetic interpretation of the last transmissions from a robot that spent 14 Earth years exploring the surface of Mars, far beyond the 13 weeks it was expected to last, evoked an emotional response from many people. Opportunity was described as a brave, curious, and persistent little explorer that “died” on the surface of another planet.
Today, we have large language models that can generate text almost indistinguishable from what a human would write. Meanwhile, other models excel at generating art, videos, and music—tasks that were, until recently, reserved only for humans. Only a human could write a poem, we thought. Only a human could paint an artwork, we thought. And only a human could make music, we thought. Now AI models can do all of that, sometimes better than most humans. That challenges our place in the Universe and how we relate to other beings.
No respectable scientist or philosopher calls the current generation of AI models sentient or conscious. There is, however, an ongoing conversation about whether modern large language models are merely statistical models that are quite good at predicting the next word in a sequence, or whether they possess some kind of internal model of how the world works. There are also discussions about how much these models’ behaviour can be mapped onto our current understanding of how the human mind works, and how much they mimic or parallel human psychology.
The technology will only get better. In the not-so-distant future, we might be talking and interacting with AI agents on a daily basis without even knowing what they really are. We might not even care. There is a possibility that we will be working alongside humanoid robots that are no worse than an average human.
At some point, someone will build a machine that looks like a human, behaves like a human, and speaks like a human. When such a machine is made, what will we use to differentiate it from humans? Intelligence? It will be as intelligent as us, if not more so. Emotions? Even if those emotions are simulated, we wouldn’t be able to tell how genuine they are. Perhaps the only way we could differentiate such a being from humans will be the knowledge that it was built, not born.
When such a machine is built, it will challenge how we see ourselves. It will be the ultimate challenge to human exceptionalism. It will be like a mirror, and it will be up to us to decide what we see in it. Will we see just a machine? Or will we be able to see something more, something that is more like us? Will we be able to extend the definition of humanity to include such beings?
In many science fiction stories, the moment those questions need to be answered marks a fork in history. One path leads to division, resentment, and eventually to destruction. The other path is one of unity and prosperity.
The answer to the question of whether the definition of humanity can be extended to include other sentient beings—be it machines, animals, or other humans—depends on whether we can see ourselves in another being.
Thanks for reading. If you enjoyed this post, please click the ❤️ button or share it.
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"