How Ubisoft, Xbox, and Blizzard Are Using AI to Make Next-Generation Games
Some of the most prolific video game studios are using generative AI to automate the tedious elements of game design, like writing NPC dialogue and creating new levels of detail for in-game renderings, all in an effort to speed development and focus human talent on more important elements.
Generative AI is artificial intelligence that can create new content—such as text, images, or music—using prompts. It learns from a large amount of data and uses that data to generate new, complementary content ranging from simple sentences to videos and complex works of art.
Although the technology has only recently captured the imagination (and concern) of the mainstream, some game developers are already putting AI to work. Here is how.
Blizzard Entertainment, the studio behind Diablo, Overwatch, and World of Warcraft, reportedly revealed in an internal memo in May that the company had begun experimenting with AI to create in-game character renderings.
According to the New York Times, Blizzard deployed its own internal AI tool, warning employees not to use third-party AI platforms to prevent the leak of confidential company data and IP. Although Blizzard executives were excited about the potential use of AI in the next generation of games, some employees said the company's AI did a poor job of catching in-game bugs and issues in the titles it was tested on.
In April, Square Enix’s AI division published a demo of its AI-generated update to the 1983 text adventure game The Portopia Serial Murder Case, highlighting how large language models could be applied to text-based games.
The company behind Final Fantasy and Kingdom Hearts has been looking at AI for a while now. In May 2022, Square Enix announced that it would sell several of its subsidiaries and the Tomb Raider franchise (among others) to Sweden-based Embracer Group AB, and use the funds to invest in artificial intelligence as well as Web3 games.
Square Enix has also backed the AI startup Atlas, which uses generative AI to turn text and images into 3D worlds.
The online gaming platform Roblox announced the launch of two new generative AI tools in February to streamline game creation: Code Assist and Material Generator.
Currently in beta, the tools automate basic coding tasks by generating helpful code snippets and creating object textures from prompts. Roblox said that using generative AI makes the creative process easier and faster, adding that there are plans to enable third-party AI services as well in the hopes of enticing AI developers and creators to the Roblox platform.
Roblox co-founder and CEO David Baszucki said in August that he believes the AI tools will help players make games that are “more rich and dynamic.”
Ubisoft, the creator of the Assassin’s Creed franchise, announced the launch of Ghostwriter in March. This AI tool lets game developers generate first-draft non-player character (NPC) dialogue, colloquially known as “barks,” allowing writers to focus on story development.
Ubisoft says the ultimate goal is to give designers the ability to create AI systems tailored to their specific needs by using a back-end tool called Ernestine for creating large language models of their own—like Ghostwriter.
Already betting heavily on generative AI with a $10 billion investment in OpenAI, Microsoft said in November that it is introducing an AI design copilot that brings generative AI features to Xbox game developers.
“Our goal is to deliver state-of-the-art AI tools for game developers of any size,” wrote Xbox General Manager of Gaming AI Haiyan Zhang.
Microsoft said the new AI tools, built with Inworld AI, a portfolio company of the tech behemoth’s venture arm, are meant to empower game developers, letting them turn prompts into game elements including scripts, dialogue trees, and quests. However, the announcement received substantial pushback from game developers and performers alike, who said it threatens their jobs.
In March, NCSoft—creator of Aion, Guild Wars, and the Lineage series of games—unveiled its digital human technology in a trailer for the upcoming game Project M using Unreal Engine 5. The company created the digital human’s dialogue using NCSoft’s AI text-to-speech synthesis technology, which can translate text into human speech as well as reproduce an actor’s way of speaking, accent, and emotions.
Not content to stop at AI-generated voices, NCSoft also used “voice-to-face” technology to add facial expressions and lip-sync the dialogue.
Technology giant Nvidia released a demo of its NVIDIA Avatar Cloud Engine (ACE) for Games in May. The demo depicted an AI-generated rendering of a ramen shop and an NPC behind the bar speaking with the player.
By using AI, the NPC can understand what the player is saying and respond with more natural interactions, explained Nvidia CEO Jensen Huang. Created in partnership with Melbourne-based Convai, ACE incorporates several Nvidia services, including NeMo, Riva, and Omniverse Audio2Face.
Edited by Ryan Ozawa and Andrew Hayward
Editor’s note: This article was first published on July 5, 2023 and most recently updated with new content on November 19.