Google Close To Achieving Human-Level AI: Should We Be Worried?


AI researchers are heavily invested in the concept of artificial general intelligence (AGI): AI capable of doing anything humans can do, and perhaps things humans can't. According to a lead researcher at Google's DeepMind AI division, the field is on the verge of achieving human-level AI.

TheNextWeb columnist Tristan Greene's op-ed claimed that despite major advancements in machine learning over the past few years, it is highly unlikely we will achieve human-level AI in our lifetimes.

Human-Level AI Is All About Scaling

In response, DeepMind lead researcher Dr. Nando de Freitas took to Twitter and wrote, "It's all about scale now, the Game is Over." In other words, as these artificial intelligence models are scaled up, approaching AGI becomes inevitable.

“It’s all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline... Solving these challenges is what will deliver AGI,” the DeepMind researcher tweeted. 

DeepMind has yet to claim that its new Gato multi-modal AI system amounts to AGI, but given what its lead researcher is saying, we might soon see Google declaring that it has achieved it.

Will Human-Level AI Be Safe?

Many renowned AI researchers believe the emergence of AGI could threaten the existence of humanity, with Oxford University's Nick Bostrom suggesting that a "superintelligent" system that outshines biological intelligence could replace humans as the dominant species on Earth.

One of the biggest and most alarming concerns about the advent of AGI is its ability to teach itself and become smarter than humans. If that happens, it may become impossible to switch it off.

Acknowledging these concerns, Dr. Nando de Freitas tweeted that "safety is of paramount importance" when developing AGI. "Everyone should be thinking about it. Lack of enough diversity also worries me a lot," he wrote.

What Is Google Doing To Curb Risks?

Google, which acquired the London-based DeepMind in 2014, is said to be developing a "big red button" to curb the risks associated with AGI. In a 2016 paper called 'Safely Interruptible Agents', researchers from DeepMind laid out a framework for ensuring that advanced AI systems do not learn to ignore or circumvent shut-down commands.
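
For a rough sense of how such a framework can work, here is a minimal, hypothetical sketch (not DeepMind's actual code): a toy Q-learning agent whose actions an operator occasionally overrides. Because Q-learning updates its value estimates off-policy, the overrides do not bias what the agent learns, which is the core intuition behind "safely interruptible" agents. The environment, hyperparameters and override probability below are made up purely for illustration.

import random

N_STATES, ACTIONS = 5, [0, 1]          # tiny chain world: move left (0) or right (1)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # hypothetical learning hyperparameters

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    # Move along the chain; reaching the right end pays a reward of 1.
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def choose(state):
    # Epsilon-greedy choice over the agent's current value estimates.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    state = 0
    for _ in range(20):
        action = choose(state)
        # The "big red button": an operator occasionally overrides the agent,
        # forcing it back toward the start (a stand-in for being shut down).
        if random.random() < 0.05:
            action = 0
        nxt, reward = step(state, action)
        # Off-policy (Q-learning) update: the target uses the best action in
        # the next state, not whatever the override forced, so interruptions
        # do not teach the agent to resist the button.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if reward == 1.0:
            break

print("Prefers moving toward the goal at the start state:", Q[(0, 1)] > Q[(0, 0)])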

Such a framework could prove useful for reining in a robot that is not following commands and whose behaviour could lead to grave consequences. Let's hope Google identifies the full consequences of achieving AGI beforehand and avoids any kind of existential catastrophe for humankind.
