Google DeepMind’s AI creates 3D models from 2D images

The new algorithm can render the unseen sides of objects to build a 3D view from multiple angles


DeepMind, the London-based AI subsidiary of Google, has developed an algorithm that can create full 3D models of objects and scenes from ordinary 2D images. The system, called the Generative Query Network (GQN), could be used for a number of applications such as robotic vision and VR simulation. The details of DeepMind's research were published yesterday in the journal Science.


According to the report in Science, the GQN can compose and render an object or scene from any angle even when it is given only a handful of 2D images. This is quite different from how such AI systems typically work, where the model requires large numbers of images labelled by humans, which can be a tedious task.

The new algorithm can also render the unseen sides of objects to build a 3D view from multiple angles, and it can do so entirely on its own, without any human help. This is because the AI is able to infer how the scene might appear from the other side.

Describing the approach, the report in Science said:

"The GQN first uses images taken from different viewpoints and creates an abstract description of the scene, learning its essentials. Next, on the basis of this representation, the network predicts what the scene would look like from a new, arbitrary viewpoint".


According to the researchers, the GQN has so far been tested only on simple scenes containing a small number of objects, as it is not yet capable of creating more complex 3D models. "While there is still much more research to be done before our approach is ready to be deployed in practice, we believe this work is a sizeable step towards fully autonomous scene understanding," the researchers wrote. We expect the algorithm to take on more complex tasks going forward, and it will be worth watching how it develops.
