All this is to say that human beings are, first and foremost, meaning-making beings. That's what defines who we are, why we live, whom we love, whom we kill, and so on. If we are honest, our lives are insufferably predictable and boring. Sure, we have different lives, but we live in very similar ways. We wake up, eat breakfast, trudge off to work, do our thing, come home, have dinner, watch some entertainment, and then go to sleep. What separates us are the meanings we imbue into these comparable narratives, the meanings that give us a purpose to keep going. In fact, we are desperate to wring the last drop of meaning from everything we do because, otherwise, what is there to live for?
As social animals, our meaning-making capacity centers on our community. Maybe it's the evolutionary biology of our caveman ancestors that has condemned us, but we derive our life's meaning from the value we bring to our family, community, society and whatever in-group we have associated ourselves with. That's why the sense of belonging, and the communications that define and bound that belonging, are so critical to our sense of self.
Therefore, it's not surprising that Google put one of its engineers on administrative leave after he claimed that LaMDA, an AI-driven Google chatbot, was sentient and had developed something akin to human consciousness. Automatically, the story triggered images of HAL 9000 in "2001: A Space Odyssey" uttering those famous words, "I'm sorry, Dave. I'm afraid I can't do that." Or, more recently, the seductive android in "Ex Machina" who dupes a naive researcher into falling for her in order to escape.
Interestingly, there is always an undercurrent of threat in these AI movies; we automatically treat AI as a potential adversary just because it might be "human." Movie after movie about AI is about humans being outsmarted and targeted by AI for termination. It's telling that we would view something emerging as humanlike and immediately treat it as a threat to our survival. It doesn't speak well of us, since we know, deep in our DNA, what we have done and continue to do to the human "others" we encounter who are weaker than us.
The real story of LaMDA is much simpler. The Google engineer saw and felt what he wanted to see and feel; more precisely, what he was humanly biased to see and feel. He held a conversation with a chatbot trained to respond in natural language indistinguishable from a human being's and ascribed human intent to that speech. In other words, he observed a phenomenon and imbued it with the meaning he was primed to find, creating a narrative that reinforced the meaning he wanted to discover all along. He anthropomorphized it. It's not too different from seeing an angel in the shape of the clouds or Jesus Christ in the pattern of peanut butter spread on a sandwich. It's just a matter of degree.
Nonetheless, let's go back to my first complaint about imprecise language. When reading articles about this story, I came across words (which I am also guilty of throwing about) such as sentience, consciousness, reasoning, learning and other terms that ascribe "humanness" to artificial intelligence. But what do all these terms mean?
What does it mean when AI is awakened? When it's sentient? When it's conscious? What does it mean for an AI to be human?
These are difficult questions because they are posed with imprecise language about the nature of what it means to be human. Perhaps the source of our angst about AI turning human is that we can't even define for ourselves what it means to be human, or sentient, conscious, aware, etc. These words describe a state of being that we can't associate with some brain geography or pinpoint with functional magnetic resonance imaging (fMRI). They are imprecise and ephemeral by nature because we don't really know.
AI can never be human. Because we don't know what that means.
Jason Lim (jasonlim@msn.com) is a Washington, D.C.-based expert on innovation, leadership and organizational culture.