By Jason Lim
Two weeks ago, a photo of the pope wearing a thick, luxurious-looking goose-down puffer coat fooled many folks into thinking that he had somehow changed his traditional style. As for me, I was more impressed by his seemingly buff physique underneath all that puff, thinking that he had undergone the same kind of bodily transformation as Jeff Bezos during his ascent to becoming the richest person in the world. I was slightly disappointed when it turned out to be fake.
Making the rounds on social media this week is a photo of former British Prime Minister Boris Johnson being arrested and dragged along by several bobbies, his characteristically wild hair flying about his panicked face. Apparently, it was made in a few seconds with ChatGPT and Midjourney, and it is, of course, fake. But we only know that because it wasn't backed up by other news sources, such as TV news and print media. From the photo itself, there was no way to tell.
A recent publication by the Network Contagion Research Institute (NCRI), titled "Exploiting Tragedy: The Rise of Computer-Generative Enabled Hoaxes and Malicious Information in the Wake of Mass Shootings," described how a fake manifesto was quickly created and circulated among anti-trans groups after it was reported that the perpetrator of the Nashville school shooting could be a trans male. This is a known and inevitable phenomenon.
"Hoaxes and conspiracies almost always arise following high-profile mass casualty attacks; however, the potential to enable and amplify malicious content following an attack will continue to be exacerbated by computer generative technology and AI. While a quick and close examination could conclude that this particular manifesto was inauthentic, rapidly evolving AI tools can generate text that is virtually indistinguishable from authentic handwriting. Furthermore, advanced image generation-tools akin to Midjourney are becoming increasingly sophisticated and can, in theory, rapidly generate realistic images of handwritten notes in different environments. In all likelihood, bad actors will leverage these easily accessible tools to create believable documents, images and messages following mass shootings, or other high-profile incidents, in an effort to implicate innocent individuals or groups to intentionally provoke animosity and incite further violence."
This is where we, as a society, are staring at a coming reality collapse driven by AI. In this context, reality collapse means the disintegration of the foundational social norms and agreements that place parameters around our behavior toward one another. If what we see and hear seems realistic across all primary mediums of communication and is reinforced by secondary ones, then we tend to believe it to be true. But when the "realistic" quality of a purported truth becomes suspect, and we are incapable of discerning "real" from "fake," we will either believe in nothing or believe only in those realities that feed our sense of how the world works. Alternative facts, indeed.
This has already happened with social media. Driven by incipient AI designed to maximize our attention and engagement, social media has demonstrably contributed to the visceral division of our society, the breakdown of civil debate and the extreme polarization of our politics. Talk about unintended consequences. But now we have the next generation of AI, one that can generate surround-sound reality in real time. It's not a reality bubble anymore. It's an unbreakable sphere of selective realities that will break our world as we know it.
In a March 5 Washington Post article titled "They thought loved ones were calling for help. It was an AI scam," the reporter recounts the story of an elderly couple who took a panicked call from their grandson asking for money. Only when they went to the bank to withdraw the funds to wire him did a banker warn them that the call was probably a hoax.
Such fraud isn't new. We have seen it before, but it had always come through texts or calls from a supposed "friend." This time, however, it was the grandson's own voice on the line.
This was possible because we now have AI that can mimic your voice, down to the intonations, accents and pauses, after listening to it for just three seconds. You literally would not be able to tell whether that phone call actually came from your own son. It's not an exaggeration to predict that this new generation of AI tools will soon be able to discern what we are thinking at the very moment we are thinking it. There is no dealing with such superior intelligence. If you think AlphaGo was impressive in beating Lee Sedol, wait till you sit across the table from an AI-driven negotiator.
Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, borrowed an analogy from Yuval Harari in "The A.I. Dilemma," a presentation given to a private group of leading technologists on March 9: "…what nukes are to the physical world, AI is to the virtual and symbolic world." I'd like to respond to that analogy with a more vernacular one of my own: we are like a bunch of deer in the middle of a four-lane highway, staring at the oncoming AI headlights with nowhere to turn or hide.
Jason Lim (jasonlim@msn.com) is a Washington, D.C.-based expert on innovation, leadership and organizational culture.