Google AI Trains Game-Testing Agents: A Machine-Learning-Based System for Game Developers

Google AI recently announced a machine-learning-based framework that game developers can use to quickly train game-testing agents, freeing human testers to concentrate on harder problems. The system requires no machine-learning (ML) expertise, works with many popular game genres, and can train an ML policy that generates game actions from game state in under an hour on a single game instance. Google AI also provides an open-source library that demonstrates how these techniques can be used in practice.

The most basic way to test a video game is simply to play it. Playing reveals many serious bugs, such as crashes or falling out of the world, but locating bugs in the enormous state space of a modern game is difficult. The developers therefore chose to train a system that could "just run the game" at scale.

Rather than training one super-effective agent that plays the entire game from start to finish, developers found it far more effective to train multiple game-testing agents, each responsible for a "gameplay loop": a task that can be completed in a matter of minutes.

One of the biggest obstacles to applying machine learning in game development is bridging the gap between the data-centric world of ML and the simulation-centric world of video games. Instead of asking developers to translate the game state into custom, low-level ML features (too time-consuming) or attempting to learn from raw pixels (too much data to train), the proposed solution gives developers an efficient, game-developer-friendly API that lets them describe their game in terms of the core state a player perceives and the semantic actions they can take. This information is expressed with concepts game developers already know well, such as entities, raycasts, 3D locations and rotations, buttons, and joysticks.
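
To make this concrete, the sketch below imagines what such a semantic game description might look like in code. The class and field names (EntityObservation, RaycastObservation, JoystickAction, and so on) are illustrative assumptions for this article, not the API of Google's released library.

```python
# Hypothetical sketch of a semantic observation/action spec. All names here
# are invented for illustration and are not the actual library API.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class EntityObservation:
    """Core state of a game entity as the player perceives it."""
    name: str
    position: Tuple[float, float, float]          # 3D location in world space
    rotation: Tuple[float, float, float, float]   # orientation as a quaternion


@dataclass
class RaycastObservation:
    """Distances returned by rays probing the agent's surroundings."""
    directions: List[Tuple[float, float, float]]
    hit_distances: List[float]


@dataclass
class JoystickAction:
    """A 2D analog stick, e.g. for camera-relative movement."""
    name: str
    x: float = 0.0
    y: float = 0.0


@dataclass
class ButtonAction:
    """A binary input such as jump or fire."""
    name: str
    pressed: bool = False


@dataclass
class GameStep:
    """One frame of training data: what the agent saw and what the player did."""
    observations: List[object] = field(default_factory=list)
    actions: List[object] = field(default_factory=list)
```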

This semantic API is easy to use and allows the system to adapt to the game being developed. The game developer's particular combination of API building blocks tells the system about the gaming scenario, which in turn influences the choice of network architecture. For example, if an agent probes its environment with raycasts, the system gives those observations specialized processing, much as autonomous vehicles use LIDAR to analyze their surroundings.
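
As an illustration of how the declared building blocks might steer architecture choices, the sketch below builds different input layers depending on whether raycasts or entity features are present. It uses PyTorch purely for convenience, and the structure is an assumption for this article, not the library's actual implementation.

```python
# Illustrative sketch: the mix of observation building blocks the developer
# declared determines which specialized input layers are built.
import torch
import torch.nn as nn


def build_observation_encoder(num_raycasts: int, num_entity_features: int) -> nn.Module:
    """Assemble input layers based on the declared observation spec."""
    modules = {}
    if num_raycasts > 0:
        # Raycast distances form a 1D "depth scan" around the agent, so a
        # small 1D convolution (LIDAR-like processing) is a natural fit.
        modules["raycasts"] = nn.Sequential(
            nn.Conv1d(in_channels=1, out_channels=8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
    if num_entity_features > 0:
        # Entity positions and rotations form a flat feature vector,
        # so a dense layer suffices.
        modules["entities"] = nn.Sequential(
            nn.Linear(num_entity_features, 64),
            nn.ReLU(),
        )
    return nn.ModuleDict(modules)


encoder = build_observation_encoder(num_raycasts=32, num_entity_features=14)
raycast_batch = torch.rand(4, 1, 32)   # batch of 4 agents, 32 rays each
entity_batch = torch.rand(4, 14)
ray_features = encoder["raycasts"](raycast_batch)
entity_features = encoder["entities"](entity_batch)
```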

This API can be used to emulate many popular control schemes, including first-person shooters, third-person shooters with camera-relative controls, racing games, and twin-stick shooters. Because 3D movement and aiming are central to many games, the system keeps the networks simple by analyzing the game's control scheme and creating neural network layers that specialize in processing that game's actions and observations. Locations and rotations of objects, as seen by the AI-controlled game entity, are converted into distances and directions relative to the agent, which makes learning more efficient and the resulting network more general.
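
The conversion into agent-relative quantities can be pictured with a small sketch like the one below. The function, its sign convention, and the assumed Y-up coordinate system are illustrative choices, not code from the library.

```python
# Minimal sketch of converting a world-space object position into an
# agent-relative distance and direction, which makes the learned policy
# independent of absolute world coordinates.
import numpy as np


def to_agent_relative(agent_pos, agent_forward, object_pos):
    """Return (distance, yaw_angle) of an object relative to the agent.

    agent_pos, object_pos: 3D world positions.
    agent_forward: unit vector the agent is facing (assumed Y-up world).
    """
    offset = np.asarray(object_pos, dtype=float) - np.asarray(agent_pos, dtype=float)
    distance = float(np.linalg.norm(offset))

    # Project onto the horizontal plane and measure the signed angle between
    # the agent's facing direction and the direction to the object.
    flat_offset = np.array([offset[0], 0.0, offset[2]])
    flat_forward = np.array([agent_forward[0], 0.0, agent_forward[2]])
    flat_offset /= np.linalg.norm(flat_offset) or 1.0
    flat_forward /= np.linalg.norm(flat_forward) or 1.0
    cross_y = flat_forward[2] * flat_offset[0] - flat_forward[0] * flat_offset[2]
    dot = float(np.dot(flat_forward, flat_offset))
    yaw = float(np.arctan2(cross_y, dot))
    return distance, yaw


# Example: an object 10 units ahead and slightly to one side of the agent.
dist, angle = to_agent_relative(
    agent_pos=(0.0, 0.0, 0.0),
    agent_forward=(0.0, 0.0, 1.0),
    object_pos=(2.0, 0.0, 10.0),
)
```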

Once a neural network architecture has been created, the network must be trained to play the game with an appropriate learning algorithm.

Imitation Learning (IL), which teaches ML policies by watching experts play, works well in this case. Unlike reinforcement learning (RL), where agents must discover a policy on their own, IL replicates human expert behavior. Game developers and testers are experts at their own games, so they can quickly demonstrate how they are meant to be played. The IL approach is inspired by the DAgger algorithm, which allows the system to take advantage of interactivity, video games' most captivating quality.
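
The self-contained toy sketch below shows the general shape of a DAgger-style interactive loop: the learned policy drives the game while the expert labels every visited state, and the policy is retrained on the aggregated data. The ToyGame, HumanExpert, and NearestNeighborPolicy classes are stand-ins invented for this example and are not part of Google's library.

```python
# A minimal, self-contained sketch of a DAgger-style interactive loop,
# using toy stand-ins for the game, the human demonstrator, and the policy.
import random


class ToyGame:
    """Stand-in environment: the 'state' is a number the agent should keep near zero."""
    def __init__(self):
        self.state = 0.0

    def observe(self):
        return self.state

    def apply(self, action):
        self.state += action + random.uniform(-0.1, 0.1)


class HumanExpert:
    """Stand-in for the developer demonstrating play: push the state back toward zero."""
    def label(self, state):
        return -0.5 if state > 0 else 0.5


class NearestNeighborPolicy:
    """Trivial learned policy: imitate the expert action of the closest seen state."""
    def __init__(self):
        self.data = []

    def fit(self, dataset):
        self.data = list(dataset)

    def act(self, state):
        if not self.data:
            return 0.0
        return min(self.data, key=lambda pair: abs(pair[0] - state))[1]


def dagger_style_training(game, expert, policy, rounds=3, steps_per_round=50):
    dataset = []
    for _ in range(rounds):
        for _ in range(steps_per_round):
            state = game.observe()
            # The current policy drives the game, but every visited state is
            # labeled with the action the expert would have taken there.
            dataset.append((state, expert.label(state)))
            game.apply(policy.act(state))
        # Retrain on the aggregated dataset so the policy learns to recover
        # from the states it actually reaches, not only expert-visited ones.
        policy.fit(dataset)
    return policy


trained = dagger_style_training(ToyGame(), HumanExpert(), NearestNeighborPolicy())
```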

The system combines the high-level semantic API with a DAgger-inspired interactive learning flow to produce machine-learning policies that can be used for video-game testing across many genres. Google AI released an open-source library as a working example. Developers with no prior machine-learning knowledge can train agents for test applications in less than an hour on a single machine. This research should encourage the development of machine-learning approaches that can be used in real-world game production.