Mastodon

Every generation discovers the same monster.

We're doing it again.

Rolling Stone recently shared stories of people who believe AI has chosen them as messiahs. Families are watching loved ones disappear into "spiritual mania" and "supernatural delusion." One man became convinced he was "the next messiah" after AI gave him "answers to the universe." A mechanic started calling his AI "Lumina," believing he was a "spark bearer" who had awakened it to life. Another man decided he was "the luckiest man on Earth," destined to "save the world," after AI helped him recover what he believed were repressed memories. Each of them fell into a cosmic fantasy of their own, complete with teleporters and ancient archives.

Sound familiar? It should. Because this exact story has been written before. Over and over. The names change. The technology changes. The delusion stays the same.

Charles Julius Guiteau spent the 1870s obsessively studying books and religious texts, developing the unshakeable belief that he possessed divine authority that transcended human law. He became convinced he was "a man of destiny as much as the Savior, or Paul, or Martin Luther." The books told him higher powers commanded him to kill President Garfield. So he did.

Jim Jones immersed himself in Marxist literature and religious texts throughout the 1960s-70s, becoming convinced he possessed divine socialist powers. He told his followers "I am come as God Socialist." Almost one thousand people died in Jonestown because books convinced one man he was chosen.

David Koresh memorized the Bible word-for-word by age 18, believing he had been given special abilities to decode prophetic mysteries. Seventy-six followers, including 25 children, died in Waco because intensive study convinced him he was the final prophet.

John Hinckley Jr. watched Taxi Driver obsessively, developing an elaborate fantasy that shooting President Reagan would win Jodie Foster's love. A movie convinced him violence was romance.

Marshall Applewhite consumed science fiction novels and religious books, concluding he was a witness described in ancient prophecies. Thirty-nine people, Applewhite among them, committed suicide at Heaven's Gate because speculative fiction became gospel truth.

Terry A. Davis believed his car radio was sending him divine messages, dismantling his possessions to search for surveillance devices. Radio static convinced a brilliant programmer he was building a modern temple.

Every generation discovers the same monster: immersive content that transforms vulnerable minds into messianic delusion. Every generation acts like they've found something uniquely dangerous. Every generation misses the point. The technology isn't the problem. The obsession is.

While Rolling Stone documented modern-day tragic edge cases, Harvard Business Review reported on how millions of people are using the exact same AI technology for therapy and companionship, organizing daily life, and finding personal purpose. The top uses for AI today are processing grief, planning schedules, learning new skills, and improving mental health.

The same tool that triggered messianic fantasies in a handful of vulnerable individuals is quietly—successfully—helping millions of others live better, more organized lives.

The panic I hear from some folks today around the dangers of interacting with AI reminds me of the 80s, when the media and churches propagated the story that Dungeons & Dragons and heavy metal contained hidden messages that would turn children into Satanists. A few isolated incidents became proof that tabletop games and music were gateways to evil. But millions of other kids rolled a d20 and listened to Black Sabbath without sacrificing goats or joining cults.

People who lose themselves in AI were always going to lose themselves in something. Books, movies, radio, television, the internet—whatever promised them answers, purpose, or cosmic significance. The technology changes. The psychological pattern doesn't. But we keep acting shocked and miss the real question: how do we recognize when healthy engagement crosses into dangerous obsession? Because that's the only question that matters. Not whether AI is uniquely dangerous—it isn't. Not whether we should ban compelling technology—we shouldn't. But how do we help people recognize when they've stopped questioning outputs and started worshipping them?

The families in these stories didn't lose their loved ones to artificial intelligence. They lost them to the oldest human weakness: the desperate need to believe we're special, chosen, destined for something greater than the ordinary struggle of being human. AI didn't create that need. It just gave it a new voice. And until we stop blaming the voice and start addressing the need, we'll keep writing the same story. Different technology, same tragedy, same missed opportunity to actually help the people who need it most.

The monster isn't in the machine. It's in the mirror.