Scientist creates playable Pokémon overworld using a neural network

Image from Ollin Boer Bohan’s demo.

Ollin Boer Bohan, a deep learning scientist at NVIDIA, has created a playable Pokémon overworld that is generated by a neural network. The demo creates visuals that resemble a Pokémon game and anyone can try it out for themselves via a web browser.

The left image is from the Virtual Console release of Pokémon Red while the right image is from Pokémon Diamond and Pearl.

A look at Bohan’s demo confirms that it does indeed bear a close likeness to a Pokémon game. Judging from the style of the playable character and the scenery, it appears to be largely based on Pokémon Diamond and Pearl.

The demo is playable, and you can use keyboard input to move the player character up, down, left, and right. As the character moves, the scenery also changes in real time. That said, the way in which the world is created makes the visuals a little unstable, which in turn produces an unsettling atmosphere.

As the player character moves around the world, objects such as grass or ledges pop in and out of existence, and roads that you just walked across can disappear just as suddenly as new bits of scenery can come into view. Even if you just stand in place, there is still a subtle yet noticeable fluctuation or wobble in the visuals. It looks very much like a Pokémon game, but one where just about everything lacks stability. However, there’s also a strange charm to it—almost like you are seeing a dream or an illusion.

While the demo looks like a video game, it doesn’t reproduce any gameplay elements. You can only move around and view the scenery, occasionally being able to enter buildings.  

Image from Ollin Boer Bohan’s demo.

The neural network that Bohan trained to create this demo is a computing system loosely modeled on the human brain and nervous system. Through machine learning, it was trained on actual video frames of Pokémon gameplay that Bohan fed to it, learning to imitate what it saw.

The network does not reproduce the inner workings of an actual game program; it only predicts the kind of visuals that would result from the corresponding player input and generates an appropriate image. This is the reason why the visuals continue to change as the character moves around.
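In other words, the “game loop” is just repeated frame prediction conditioned on player input. The following is a minimal, purely illustrative sketch of that idea in NumPy — Bohan’s actual model is a far larger trained convolutional network, and every name, shape, and weight here is a hypothetical stand-in:

```python
import numpy as np

# Illustrative sketch only: a toy stand-in for a trained
# frame-prediction network. All shapes and weights are hypothetical.
rng = np.random.default_rng(0)

FRAME_SIZE = 16 * 16   # a toy "frame", flattened to a vector
NUM_ACTIONS = 4        # up, down, left, right

# Random weights stand in for what training on gameplay footage
# would normally produce.
W_frame = rng.normal(scale=0.01, size=(FRAME_SIZE, FRAME_SIZE))
W_action = rng.normal(scale=0.01, size=(NUM_ACTIONS, FRAME_SIZE))

def predict_next_frame(frame, action_id):
    """Predict the next frame from the current frame and a key press.

    No game logic runs here -- the network only maps
    (current image, player input) -> next image.
    """
    action = np.zeros(NUM_ACTIONS)
    action[action_id] = 1.0  # one-hot encode the pressed key
    return np.tanh(frame @ W_frame + action @ W_action)

# The "game loop": feed each predicted frame back in as the next input.
frame = rng.normal(size=FRAME_SIZE)
for _ in range(3):                       # hold "right" for three steps
    frame = predict_next_frame(frame, action_id=3)
```

Because each frame is generated only from the previous frame and the input, nothing forces the world to stay consistent over time — which is exactly why scenery in the demo can drift, wobble, or vanish.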

Image from Ollin Boer Bohan’s demo.

For those who are interested in learning more about how this neural network came to be, there’s a thorough technological explanation of it on Bohan’s website.

Bohan also discusses the future of neural networks and whether the technology may one day be able to reproduce an entire game. They note that as the imitations created by these kinds of neural networks continue to become more accurate, it will be theoretically possible to replicate any kind of system that you want.

Even today, neural networks and deep learning are already being used by developers in image enhancement and upscaling technologies like DLSS. We will likely see even more ways that these technologies can be harnessed in the future.

Written by Marco Farinaccia, based on the original Japanese article (original article’s publication date: 2022-09-17 20:03 JST)