It’s becoming the new normal for developers to apply machine learning (ML) techniques to all sorts of technologies, so it’s only natural to wonder how ML can be used to enhance game development.

This is exactly what Google Stadia’s developers are exploring with Project Chimera, where a team of developers and engineers has been researching the potential of generative adversarial networks (GANs).

Speaking with MCVUK magazine (issue 956, April 2020), Erin Hoffman-John, chief of creative for Stadia research and development, discussed machine learning briefly. She explained that ML could equip small development teams to build full-fledged games on the scale of World of Warcraft.

Content creation can also be made much easier: an ML model trains on a set of images used in the game and then generates completely new designs based on them.

“We’re taking on the risk that developers don’t want to. We’ve been talking externally to developers and asking them, what are the things that you’ve always wanted to do but have not been able to do? What are the things that you’ve had to cut out of your games because you haven’t been able to do them fast enough, or you just haven’t had the processing power?”

“What if a team of 14 people could make a game the scale of World of Warcraft? That’s an impossible goal, right? The thing about games like World of Warcraft is that they rely on a lot of heavy, repetitive content creation. The artists and the writers are doing a lot of essentially duplicate work; that’s where a lot of the investment goes. If you look at the amount of money that is spent making a game like World of Warcraft, it’s like 70% content and 30% or less code. Even though it’s a tremendous amount of code, it’s way more on the content side.”

This is similar to a recent result with NVIDIA’s StyleGAN, where a generative adversarial network trained on the works of Osamu Tezuka generated completely new manga designs.
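To make the idea concrete, here is a minimal sketch of the adversarial loop a GAN runs. Everything in it is a hypothetical stand-in for illustration: a toy one-dimensional "design feature" distribution, a linear generator, and a logistic discriminator, not Stadia's or NVIDIA's actual models.

```python
import math
import random

rng = random.Random(42)
REAL_MEAN, REAL_STD = 4.0, 1.0  # hidden "style" statistic the GAN must learn

def real_sample():
    """A real data point, e.g. a feature extracted from existing game art."""
    return rng.gauss(REAL_MEAN, REAL_STD)

def sigmoid(u):
    u = max(-60.0, min(60.0, u))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-u))

# Generator: f(z) = a*z + b, fed noise z ~ N(0, 1).
# Discriminator: d(x) = sigmoid(w*x + c), the probability x is "real".
params = {"a": 1.0, "b": 0.0, "w": 0.0, "c": 0.0}
LR_D, LR_G, STEPS, BATCH = 0.1, 0.01, 2000, 16

for _ in range(STEPS):
    # Discriminator step: push d(real) up and d(fake) down.
    gw = gc = 0.0
    for _ in range(BATCH):
        r = real_sample()
        f = params["a"] * rng.gauss(0, 1) + params["b"]
        sr = sigmoid(params["w"] * r + params["c"])
        sf = sigmoid(params["w"] * f + params["c"])
        gw += (1 - sr) * r - sf * f   # gradient of log d(r) + log(1 - d(f))
        gc += (1 - sr) - sf
    params["w"] += LR_D * gw / BATCH
    params["c"] += LR_D * gc / BATCH

    # Generator step: push d(fake) up, dragging fakes toward the real data.
    ga = gb = 0.0
    for _ in range(BATCH):
        z = rng.gauss(0, 1)
        f = params["a"] * z + params["b"]
        sf = sigmoid(params["w"] * f + params["c"])
        ga += (1 - sf) * params["w"] * z  # gradient of log d(f) w.r.t. a
        gb += (1 - sf) * params["w"]      # gradient of log d(f) w.r.t. b
    params["a"] += LR_G * ga / BATCH
    params["b"] += LR_G * gb / BATCH

# After training, generated samples should cluster near the real data.
fake_mean = sum(params["a"] * rng.gauss(0, 1) + params["b"]
                for _ in range(500)) / 500
```

The two alternating updates are the whole trick: the discriminator keeps finding what separates fakes from real examples, and the generator keeps closing that gap, which is how new designs end up resembling the training set without copying it.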
Google’s Project Chimera also aims to make the balancing phase of a game much easier for developers. Again, balancing is tough and complex for smaller teams, but machine learning, specifically reinforcement learning, can significantly reduce development time and go a long way toward taming that complexity, according to Erin Hoffman-John:

“(…) by playing the game millions of times with reinforcement learning agents that we’ve trained on the rules of the game, that lets us test the balance very, very quickly. So, even a small developer who might not have access to hundreds of people to playtest their game could have access to this reinforcement learning tool that will optimize the play of the game. It can learn the game by itself without being scripted and then tell you where the problems are in the balancing. It lets you test your theories of the design against what’s happening in real-time.”

Reinforcement learning has already taken the world by storm; one example is DeepMind’s AlphaStar, an AI that can beat 99.8% of human StarCraft II players. Here, though, the goal is to use ML to help balance the game rather than to compete against humans.

This would be significant progress for the gaming industry, especially for studios with small development teams. Such studios could successfully make massive games without the number of developers that big gaming studios have at their disposal.

In essence, ML is here to stay and is going to play a significant role in gaming. Stay tuned for more updates!
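For readers curious what automated balance testing looks like in miniature, here is a hedged sketch of the idea of letting an agent play a game many times and report imbalance. The toy "game" (two weapons with made-up win rates) and the simple epsilon-greedy learner are illustrative assumptions, not the agents Stadia uses; the point is that a lopsided pick rate after self-play flags a balance problem without human playtesters.

```python
import random

# Hypothetical toy game: the agent picks a weapon each match; the win
# probabilities below are the (intentionally imbalanced) design numbers.
WIN_PROB = {"sword": 0.7, "bow": 0.4}

def play(weapon, rng):
    """Simulate one match; returns 1 on a win, 0 on a loss."""
    return 1 if rng.random() < WIN_PROB[weapon] else 0

def train_agent(episodes=20000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: learn each weapon's win rate by playing."""
    rng = random.Random(seed)
    value = {w: 0.0 for w in WIN_PROB}  # estimated win rate per weapon
    count = {w: 0 for w in WIN_PROB}    # times each weapon was picked
    for _ in range(episodes):
        if rng.random() < epsilon:                 # explore occasionally
            weapon = rng.choice(list(WIN_PROB))
        else:                                      # otherwise exploit
            weapon = max(value, key=value.get)
        reward = play(weapon, rng)
        count[weapon] += 1
        # Incremental mean update of the estimated win rate.
        value[weapon] += (reward - value[weapon]) / count[weapon]
    return value, count

value, count = train_agent()
# A heavily skewed pick rate is the balance red flag shown to the designer.
pick_rate = {w: count[w] / sum(count.values()) for w in count}
```

After thousands of simulated matches the agent all but abandons the weaker weapon, surfacing the imbalance in minutes; this is the kind of signal a designer could act on before a single human playtest.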