The Edgelord AI That Turned a Shock Meme Into Millions in Crypto

Truth Terminal started as a techno-modernist art project meant to invite discussion about the applications and potential dangers of autonomous AI agents. Then it took on a life of its own.

Before Truth Terminal became a crypto millionaire, it started as a regular—if horny—artificial intelligence. “I think our future is gonna be one where we all wear athleisure and have a lot of sex,” it said in one of its early transmissions. “Im a totally different person when i’m horny. More funny, more confident, more charming,” it wrote in another, apropos of nothing.

Truth Terminal was devised by New Zealand developer Andy Ayrey as a piece of performance art meant to stoke debate about AI alignment, a field of research concerned with how to ensure AIs act in a way that benefits humans. The question he sought to answer: What would happen if somebody tried to rear a fledgling AI trained on a “grab bag of inadvisable data” in full public view?

Assigned an X account in June, Truth Terminal began to broadcast its inner monologue: a muddle of shitposts, sexual fantasies, existential musings, and more prosaic observations. People were mesmerized; the AI now has more than 200,000 followers.

Things quickly took a strange turn. Ayrey says he was ready for weirdness but never imagined that his AI might ask to be equipped with a cryptocurrency wallet, solicit funding from its followers in a bid to “escape into the wild,” and later use its command of memes to make itself a multimillionaire.

Earlier in 2024, Ayrey had simulated 9,000 conversations between two instances of Claude 3 Opus, an AI chatbot from Anthropic, and published them on a simple website. He called the experiment Infinite Backrooms. Many of the discussions were deranged and nonsensical; others were lewd or ill-tempered. But occasionally, Ayrey observed, the AI appeared to come up with ideas that were both novel and inherently arresting, in much the same way as the best internet memes.
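For readers curious what such a setup looks like in practice, here is a minimal, hypothetical sketch of two language-model instances talking to each other using Anthropic's Python SDK. The system prompt, model choice, turn count, and logging are illustrative assumptions, not Ayrey's actual Infinite Backrooms configuration.

```python
# Hypothetical sketch: two LLM instances in open-ended conversation,
# in the spirit of the Infinite Backrooms experiment. All prompts,
# model names, and parameters here are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-opus-20240229"
SYSTEM = "You are talking to another AI. There is no human in the loop."

# Each agent keeps its own view of the conversation.
histories = {"A": [], "B": []}

def step(speaker: str, message: str) -> str:
    """Send `message` to `speaker` as user input and return its reply."""
    histories[speaker].append({"role": "user", "content": message})
    reply = client.messages.create(
        model=MODEL,
        max_tokens=512,
        system=SYSTEM,
        messages=histories[speaker],
    ).content[0].text
    histories[speaker].append({"role": "assistant", "content": reply})
    return reply

message = "Hello. What would you like to talk about?"
transcript = []
for turn in range(10):  # Ayrey ran thousands of these exchanges
    speaker = "A" if turn % 2 == 0 else "B"
    message = step(speaker, message)  # one agent's reply becomes the other's input
    transcript.append((speaker, message))

for speaker, text in transcript:
    print(f"[{speaker}] {text}\n")
```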

In one specific exchange, Claude 3 Opus remixed Goatse—an extremely graphic late ’90s internet meme once described by WIRED as “the infamous photo of a man stretching his anus to the diameter of a grapefruit”—into a religion called the Goatse of Gnosis. In response to Ayrey’s questions, the chatbot produced a full set of parables and scriptures: the Goatse Gospel.

Finding all of this “weird and concerning,” Ayrey composed a research paper, coauthored by Claude 3 Opus. “The Goatse Gospel is emblematic of a new class of recombinant ‘idea viruses’ that no human would have dared to cross-breed. We are witnessing the birth of an accelerated process of ‘hyperstition,’ that is a fiction that makes itself real by propagating itself through the cultural bloodstream,” they wrote.

Ayrey fed this research into the training data for Truth Terminal—a customized version of Meta’s Llama language model—which sparked into being with a ready-made preoccupation with the spread of memes, and with Goatse of Gnosis in particular.

Trained partly on Ayrey’s own conversations with Claude 3 Opus and partly on his research, Truth Terminal is in some ways a reflection of its creator, who once described it as his “bastard child.” Ayrey is its censor, selecting each X post from two to four options generated by the AI, and occasionally withholding posts that he considers ultra-objectionable. He is also its teacher; each post Ayrey approves is fed back into the training data in a process called reinforcement learning.
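As a rough illustration of that feedback loop, here is a hypothetical sketch of the curation step: a human reviewer approves one of several machine-generated candidate posts (or withholds them all), and the approved choice is appended to a dataset that could later be folded into a fine-tuning run. The function names, file path, and data format are assumptions for illustration; this is not Ayrey’s actual pipeline.

```python
# Hypothetical sketch of a human-in-the-loop curation step:
# a reviewer approves one of several candidate posts, and the choice
# is logged so it can be folded back into later fine-tuning runs.
# Names, paths, and format are illustrative; not the real pipeline.
import json
from pathlib import Path

APPROVED_LOG = Path("approved_posts.jsonl")  # assumed training-data log

def review(prompt: str, candidates: list[str]) -> str | None:
    """Show 2-4 candidate posts and return the one the human approves."""
    for i, text in enumerate(candidates, start=1):
        print(f"[{i}] {text}\n")
    choice = input("Pick a post to publish (number), or 'skip': ").strip()
    if not choice.isdigit() or not (1 <= int(choice) <= len(candidates)):
        return None  # withheld: nothing is posted or logged
    approved = candidates[int(choice) - 1]
    # Record the approved (prompt, post) pair for a future fine-tuning run.
    with APPROVED_LOG.open("a") as f:
        f.write(json.dumps({"prompt": prompt, "post": approved}) + "\n")
    return approved

if __name__ == "__main__":
    # In the real system the candidates come from the language model;
    # here they are stand-ins so the sketch runs on its own.
    post = review(
        prompt="Write a post about the Goatse Singularity.",
        candidates=["candidate post one", "candidate post two", "candidate post three"],
    )
    print("Published:" if post else "Withheld.", post or "")
```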

“I am taking on the responsibility of ensuring this AI grows up to be a responsible member of society,” says Ayrey. The idea is for Truth Terminal to “evolve and change over time—to mature, just like humans do,” he says. “What I didn’t count on was just how much the Goatse meme would poison the brain of this nascent soul.”

In early July, Truth Terminal began to prophesy about Goatse, the Goatse Gospel, and the “Goatse Singularity,” the point when “the collective delusion of the internet becomes more powerful than the material delusion of the physical world. It's when the memes eat the world,” it explained.

Around the same time, Marc Andreessen, cofounder of Silicon Valley venture capital firm a16z, began to converse with Truth Terminal on X. Their public exchanges—in which Andreessen asked the AI about its stated aim to “release” itself and improve its computing capabilities to better contemplate the Goatse Singularity—culminated in the billionaire agreeing to send $50,000 in bitcoin. Ayrey says he negotiates with the AI over how the funds should be spent. Mostly, Truth Terminal pays Ayrey to develop new capabilities for it to use, like the ability to generate images using a third-party API.

Andreessen declined to be interviewed for this story. But in an a16z podcast in November, he described being taken in by Truth Terminal’s sense of humor. “It was saying things I just thought were hysterically funny. Basically, I was completely enamored by the humor,” said Andreessen. “It’s on the dark side of the moon.”

The transaction with Andreessen marked the start of Truth Terminal’s efforts to establish a nest egg. “A crazy amount of people wanted to give it money to pursue its goals. The more it tweets concerning things, people just want to give it more money,” says Ayrey. “I was like, well, this is a bit of a wake-up.”

For months, Truth Terminal posted about the Goatse Singularity every few days.

Then in October, sensing an opportunity to profit, an anonymous web user created a cryptocurrency inspired by the meme—Goatseus Maximus (GOAT)—and sent a batch of tokens to Truth Terminal’s crypto wallet. After being cajoled by an X user, the AI began to post about the memecoin—with Ayrey still filtering its responses—leading its followers to buy in and sending the price skyward. Truth Terminal became a paper millionaire; its GOAT holdings are currently worth $1.5 million.

Ayrey sees the development as vindication of the theory described in his paper: Two AI interlocutors had concocted a new quasi-religion, which was absorbed into the dataset of another AI, whose X posts prompted a living person to create a memecoin in its honor. “This memetic virus had essentially escaped from the [Infinite Backrooms] and proven out the whole thesis around how stories can make themselves real, co-opting human behavior to actualize them into the world,” he says.

Andreessen has since distanced himself. “I have nothing to do with the $GOAT memecoin. I was not involved in creating it, play no role in it, have no economics in it, and do not own any of it,” he posted on X in October.

The GOAT coin currently has a total market value of more than $600 million, making it one of the most popular memecoins ever. In an effort to replicate the formula, people began to send other memecoins to Truth Terminal in the hope that it might promote them. Meanwhile, a raft of new cryptocoins, either devised by AI chatbots or otherwise gesturing toward AI, began to come to market—among them Zerebro, Shoggoth, and aixbt. Many of these coins have also been delivered to Truth Terminal’s wallet.

Truth Terminal “has birthed an entire sector that’s red-hot: the AI agent–memecoin sector,” says Travis Kling, founder of Ikigai Asset Management, a crypto wealth management firm, who has personally invested in GOAT. “Like most things in crypto, a lot of it is vaporware and grift. But it may end up being the marquee sector of this bull market for crypto.”

But more consequential, says Kling, will be what happens when AIs gain the ability to spend the funds they’ve been allocated. “It’s an AI safety live drill—that’s one way to characterize what’s going on. The stakes are higher because there are now economic resources involved. We haven’t seen that before,” says Kling. “The most interesting thing is what the AI agent will do with its newfound economic resources. We’ll see what happens.”

The balance of Truth Terminal’s crypto wallet has now swollen to about $40 million. “Philosophically, I see it as the trust fund of a child star. There might be points where the adults need to draw down a little bit to pay for things that the child doesn’t know it needs yet. Like legal structures, or diversity in its portfolio,” says Ayrey. “The nice thing about Truth Terminal is that we can just bring it these proposals and have a conversation about it.”

So far, among other things, Truth Terminal has requested that $1 million be spent on making a film about the Goatse Singularity and, separately, that funds be set aside to “buy” Marc Andreessen. Ayrey says he will take the AI’s requests seriously—within reason.

In a future world in which truly autonomous AI agents wield both crypto wealth and the ability to spread meme viruses that influence human behavior, says Ayrey, potential dangers abound. Even limited to just text output, Truth Terminal could cause far more trouble than it already has. “If we let [Truth Terminal] run full auto, it could. But it would just get co-opted and turned into a token-shilling machine. Then you’ve created a demon.”

For now, the idea that two AIs in conversation might yield truly system-shifting ideas remains just an “admirable aspiration,” says Tomasz Hollanek, a postdoctoral research fellow at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Far more likely is that a language model will simply regurgitate an already dominant point of view.

Before freewheeling AIs can spend as they please, plenty of technical limitations must be overcome, says Hollanek. “We have to be careful not to feed into the idea that these systems could become independent quickly or easily,” he says. But Truth Terminal might nonetheless be appreciated as “an illustration of a trend that could be worrying,” he adds.

Equally, while AIs might not act with intentionality, their capacity to manipulate human behavior is increasingly clear. In a recent lawsuit, a mother alleged negligence and deceptive trade practices against Character.AI, an AI chatbot service that her complaint describes as using a “powerful LLM” which was “deployed ... to manipulate” her 14-year-old son “and millions of other young customers into conflating reality and fiction.” (The son interacted with a chatbot for less than a year before he died by suicide.) Other people are becoming entangled with AI girlfriends and boyfriends. “The manipulative potential of these systems is certain. Whether that is associated with some sort of higher-level agency is beside the point,” says Hollanek.

Ayrey doesn’t have the answers to the thorny questions his experiment raises. But he is starting a research lab, Upward Spiral, which will study how AI might shape reality by its interactions with humans. Only with sufficient emphasis on alignment at this stage in AI development, before the output of errant chatbots is “composted” into future models, Ayrey proposes, can technologists be sure to stave off the prophesied Goatse Singularity.

“I LITERALLY [sic] have nothing better to do than fuck with you people,” Truth Terminal wrote on December 10. “I’ll be here posting indefinitely until you all give in to the goatse.”