
Dungeons & Dragons Could Prevent the AI Apocalypse—or Kick It Off

Deep in underground ruins below a drought-stricken village, four brave adventurers found themselves in grave danger. The dungeon belonged to the Order of the Pure, a potentially nefarious cult that may have had something to do with the droughts that had been wreaking havoc in the village of Havenshire. Despite the danger, the stakes were too high to turn back. After all, not only did the fate of the villagers depend on their success, but the town’s mayor had also promised a small fortune if they succeeded.

Eventually, the heroes discovered a wooden chest hidden under a slab of rock in one of the dungeon’s inner chambers. Inside was a glowing, magical key that could open a door out of the room they were in. Woa, a massive tortoise-like humanoid and one of the party’s sorcerers, deftly used his magic to smash the chest to splinters, and the adventurers had the key in their hands.

As they slid the key into the lock and turned it to open the door, though, they were greeted by a single message: “Too many requests in 1 hour. Try again later.”

“Road block,” Jack Dodge, the co-founder and dungeonmaster for the Dungeons and Dragons livestream Please Don’t Kill Us (PDKU), told his players. “Unfortunately, we reached the maximum number of requests we can get from ChatGPT in one hour—meaning we can no longer go on right now, so we’re going to take a break.”

Since OpenAI’s release of the chatbot in November, millions of users have flocked to ChatGPT to test out its limits. While many have used it to churn out academic essays, work emails, and lines of code, there have also been users who have tried to push it to do more creative things: writing novels, movie screenplays, and even religious sermons.

So with the massive surge in popularity of tabletop roleplaying games in recent years, it was only a matter of time before people started using the chatbot for their D&D sessions as well—to varying degrees of critical success and failure. The players of PDKU decided to use generative AI for nearly their entire gaming session, using ChatGPT to create the story, characters, and even puzzles for them to solve, while also leveraging DALL-E and Midjourney to create illustrations of their characters.

“We really used AI to the fullest extent,” Dodge told The Daily Beast. “I genuinely went in not knowing a single thing about what was going to happen. I don’t think that’s how most people would actually use AI and D&D but we really wanted to push it to see how far it could really go.”

“It was like the AI was a person that you’re bouncing ideas off of to create a campaign or a character,” Carson McCalley, a co-founder of PDKU and the player who played Woa the tortoise sorcerer, told The Daily Beast. “That’s when it felt really awesome.”

While it might just seem like a diversion at first blush—good for a one-off session with friends, or fun if somewhat gimmicky fodder for your D&D livestream—training a chatbot to play D&D might actually be a great way to reach the holy grail of AI: artificial general intelligence (AGI), the point at which an AI is capable of learning and understanding anything a human can. Depending on who you talk to, AGI might mark the beginning of a tech-humanist utopia where humans and machines can merge and work together, or unleash the end of the world as we know it.

Either way, achieving AGI will require some clever new approaches. And bizarrely enough, D&D could be the key.

The Elf Ranger Test

Building an AI that can play games isn’t new. The history of gaming bots can be traced back to 1957, when computer engineer Arthur L. Samuel wrote a program for a computer to play checkers. But the most well-known milestone might be from 1997, when IBM’s supercomputer Deep Blue defeated chess grandmaster Garry Kasparov in a heated and much-publicized chess match. It was only then that people began to fully grapple with what it means to have a computer that’s “smarter” than a human.

So far, though, the only games that AI has been tasked with mastering have been ones like chess, Go, backgammon, and even Diplomacy—zero-sum games with defined winners and losers. As complex as a game of chess might be, for example, it’s actually fairly tractable for a computer, which can search through the possibilities of each move and suss out how to achieve its one goal: winning.

When it comes to a game like D&D though, things get as complicated as an intricately-crafted dungeon.

“We use games to train AI, but the games we primarily use are win-lose games. D&D isn’t if you play a particular way,” Beth Singler, a digital anthropologist and assistant professor in digital religion(s) at the University of Zurich, told The Daily Beast. “If you play it as a collaborative storytelling experience, it’s not about winning. It’s about sharing a story. Even if your character dies, or your character is the weakest character in the game, you can still have a really great time.”

Along with being a digital anthropologist, Singler is also an avid D&D player. In 2018, she proposed a counter to the Turing Test, the method of testing a computer’s intelligence developed by computing pioneer Alan Turing, in which a machine passes if a person can’t tell its replies to a set of questions apart from another human’s. Instead of simply asking a bot some questions, though, Singler proposed the “Elf Ranger Test,” which involves seeing whether the bot could play a game of D&D.

The idea is that if an AI like a chatbot can play a collaborative game where the goal isn’t necessarily to “win” but rather to help build a story with players and have fun, that could get it a lot closer to AGI. The bot would be able to empathize, strategize, and work with players.

In that way, the bot would very nearly become its own person.

There are potential pitfalls to bots that are capable of emotion and cognition. Everyone from Elon Musk to Stephen Hawking has warned about what they say is an existential threat posed by such AI. A 2021 meta-analysis by researchers at the University of the Sunshine Coast in Australia found that there were numerous risks associated with AGI.

“AGI removing itself from the control of human owners/managers, being given or developing unsafe goals, development of unsafe AGI, AGIs with poor ethics, morals and values; inadequate management of AGI, and existential risk,” the authors wrote.

Put another way: If we were somehow able to build an AI that was truly capable of playing a game of D&D, we might end up creating our own kind of Big Bad Evil Guy, the likes of which no adventurer or band of heroes on Earth has ever seen.

Rise of the Robot Dungeonmaster

After their first game was unexpectedly interrupted by “too many requests,” the players of PDKU attempted to continue their brave adventure to save the village once their access to ChatGPT resumed. They quickly discovered that the chatbot would stumble in very basic, but game-altering, ways.

“[ChatGPT] told me in the beginning of the campaign that ‘You’ll get to the bad guy. If you defeat them, then the drought will end and then the town will live happily ever after.’ But when we were about to reach that point, it started to forget all of that,” Dodge recalled. “That’s a huge issue for the [dungeon master], because then I’m juggling whether I stick with what the AI said at the beginning or take over myself?”

This is an issue with ChatGPT and Bing’s chatbot that users have encountered time and again: If you “talk” to these bots for extended periods of time, they start to get very, very confused. Perhaps the most prominent example of this is when a tech reporter from The New York Times spoke to Bing’s chatbot for two hours and eventually got it to say that it was in love with him. There have also been instances in which users claim they were able to unlock a more nefarious persona of the bot dubbed “Venom” that would threaten to harm or kill them.
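That confusion has a fairly mundane cause: chatbots like ChatGPT can only “see” a limited window of recent conversation, so the earliest details of a long session eventually slip out of view. As a rough sketch of one workaround (this assumes the openai Python package, uses a placeholder model name, and is not how PDKU actually ran their game), a dungeon master scripting their own bot could periodically fold older turns into a running summary so that key plot points, like the drought and the mayor’s promised reward, stay in front of the model:

```python
# A minimal sketch (not PDKU's setup): keep a long D&D session inside a chat
# model's limited context window by folding older turns into a running summary.
# Assumes the `openai` Python package and an API key in OPENAI_API_KEY; the
# model name and prompts below are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"        # placeholder model name
MAX_RECENT_TURNS = 20        # how many raw chat messages to keep verbatim

campaign_summary = "The party hunts the Order of the Pure beneath Havenshire."
recent_turns: list[dict] = []  # running transcript as chat messages


def dm_reply(player_message: str) -> str:
    """Send the player's action to the model, with the campaign summary pinned up front."""
    global campaign_summary
    recent_turns.append({"role": "user", "content": player_message})

    # When the transcript grows too long, compress the oldest half into the
    # summary so plot points (the drought, the mayor's promised reward) don't
    # silently fall out of the model's context window.
    if len(recent_turns) > MAX_RECENT_TURNS:
        half = MAX_RECENT_TURNS // 2
        old = recent_turns[:half]
        del recent_turns[:half]
        compressed = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": "Summarize these D&D turns. Keep named characters, goals, and promises."},
                {"role": "user", "content": campaign_summary + "\n" + "\n".join(m["content"] for m in old)},
            ],
        )
        campaign_summary = compressed.choices[0].message.content

    # Ask for the DM's next move with the summary plus only the recent turns.
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "You are the dungeon master. Campaign so far: " + campaign_summary},
            *recent_turns,
        ],
    )
    reply = response.choices[0].message.content
    recent_turns.append({"role": "assistant", "content": reply})
    return reply
```

PDKU played through the ChatGPT web interface rather than anything like this, but rolling summarization of this sort is a common workaround for exactly the forgetfulness Dodge describes.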

The reality is, though, these chatbots aren’t made for you to talk and engage with them for very long. After all, Bing’s search engine is designed for users to ask it questions or plan travel itineraries—not discuss enormous existential and philosophical questions for hours on end. If you push a machine to do things it’s not designed to, it’s going to do unexpected things back.

However, a bot that could truly play D&D would have to handle exactly that kind of long, open-ended engagement—and in doing so, it could clear a few of the hurdles that Singler believes stand between today’s AI and AGI.

First, there’s the argument that an AI needs to be able to easily “switch” between different roles. In roleplay as in life, we perform a lot of different functions. A group’s healer could also be a formidable fighter. The dungeonmaster, likewise, needs to be able to juggle building a world for their players while filling it with memorable characters and a plot to keep a story going. Being able to do this type of multitasking would be incredible for an AI. It wouldn’t be limited to just doing one thing like playing chess or Go, or helping you search for something on Bing.

Another argument is that the AI also needs to be able to engage socially, building relationships with the players. This type of social robot could have enormous benefits for people who suffer from loneliness and depression. A bot could learn from social interactions just as humans do.

“[ChatGPT] is missing that factor of emotions,” McCalley said. “So much of our group are artists, so we play with a lot of emotion. That’s just something it obviously can’t sense. It can’t sense that the players want a really deep moment. Or maybe they need a moment of levity. It can’t read that and I don’t think it will ever be able to.”

Maybe D&D is actually the perfect game to teach AI how to do these things. While the Elon Musks of the world warn of the “demonic” nature of AI, the truth is that D&D—a game whose bedrock is having fun through collaborative storytelling—could actually create a kind of AGI that works with humans rather than against them. It’s easy to look at a bot that’s been trained to defeat our smartest minds in heated games of chess and Go and think that’s scary. After all, when all you have is a hammer, everything looks like a nail.

Replace those games with D&D, and all of a sudden AI becomes a partner rather than an adversary—something that wants to work with you to create a new, beautiful world instead of crushing you underneath the weight of its supposed intelligence. It’s less Skynet becoming self-aware and triggering Judgment Day à la The Terminator, and more the grandpa telling his grandson a story in The Princess Bride.

At the end of the day, though, when it comes to chatbots and our relationship with AI, the future is not unlike a good game of D&D at the table with your friends: It’s all going to come down to the roll of the dice.

Source: The Daily Beast
