DeepMind, Google's London-based AI subsidiary, has developed an algorithm that can build full 3D models of objects and scenes from ordinary 2D images. Called the Generative Query Network (GQN), the algorithm could be used for a number of applications, such as robotic vision and VR simulation. The details of DeepMind's research were published yesterday in the journal Science.
The Science report further notes that the GQN can compose and render an object or scene from any angle, even when it is provided with only a handful of 2D images. This is quite different from how such AI systems generally work, where the system requires a large number of human-labelled images, which can be tedious to prepare.
The new algorithm can also render the unseen sides of objects in order to build a 3D view from multiple angles, and it can do so on its own, without any human help. This is because the AI is able to imagine how a scene might appear from the other side.
Commenting on the matter, Science said:
"The GQN first uses images taken from different viewpoints and creates an abstract description of the scene, learning its essentials. Next, on the basis of this representation, the network predicts what the scene would look like from a new, arbitrary viewpoint".
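The two-stage process described in the quote can be illustrated with a minimal NumPy sketch. This is not DeepMind's actual model: the function names, the use of random untrained weights, and the dimensions (8x8 images, 7-D camera viewpoints, 32-D scene codes) are all hypothetical, chosen only to show the flow from a few posed observations to a prediction at a new viewpoint.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_observation(image, viewpoint, W):
    """Toy stand-in for the representation network: map one
    (image, viewpoint) pair to an abstract scene vector."""
    x = np.concatenate([image.ravel(), viewpoint])
    return np.tanh(W @ x)

def aggregate(representations):
    """Combine per-observation vectors into a single scene description."""
    return np.sum(representations, axis=0)

def render_query(scene_code, query_viewpoint, V):
    """Toy stand-in for the generation network: predict a flat image
    of the scene as seen from a new, arbitrary viewpoint."""
    x = np.concatenate([scene_code, query_viewpoint])
    return np.tanh(V @ x)

# Hypothetical sizes: 8x8 grayscale images, 7-D viewpoints, 32-D codes.
img_dim, vp_dim, rep_dim = 8 * 8, 7, 32
W = rng.normal(size=(rep_dim, img_dim + vp_dim)) * 0.1  # untrained weights
V = rng.normal(size=(img_dim, rep_dim + vp_dim)) * 0.1  # untrained weights

# A handful of 2D observations of the same scene, each with its viewpoint.
observations = [(rng.random(img_dim), rng.random(vp_dim)) for _ in range(3)]
scene_code = aggregate(
    [encode_observation(img, vp, W) for img, vp in observations]
)

# Predict what the scene would look like from a new viewpoint.
prediction = render_query(scene_code, rng.random(vp_dim), V)
print(prediction.shape)
```

In the real system both networks are trained end to end, so the scene code comes to capture the "essentials" the quote mentions; here the weights are random and the output is meaningless, but the data flow matches the description.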
According to the researchers, the GQN has so far been tested only on simple scenes containing a small number of objects, as it is not yet capable of creating more complex 3D models. "While there is still much more research to be done before our approach is ready to be deployed in practice, we believe this work is a sizeable step towards fully autonomous scene understanding," the researchers wrote. We expect the algorithm to take on more complex tasks going forward, and it will be worth the wait.