Google Announces Gemini 2.0 AI Model: How to Access It and Its Best Features
Google has introduced Gemini 2.0, the latest version of its AI model, which adds native image and audio output alongside tool integration. Google frames this release as part of the "agentic era," in which AI systems can carry out tasks independently through adaptive decision-making. From a single prompt, these models can automate activities such as shopping or scheduling appointments.
Gemini 2.0 incorporates several agents designed to assist in diverse areas. For instance, an agent can offer real-time suggestions in games like Clash of Clans, or select a gift and add it to your shopping cart based on a prompt. These agents exhibit goal-oriented behaviour: they break a task into steps and execute those steps autonomously.

AI Agents and Their Capabilities
Among the agents in Gemini 2.0 is Project Astra, which serves as a universal AI assistant for Android phones. It supports multiple modes and integrates with Google Search, Lens, and Maps. Another experimental agent, Project Mariner, can navigate independently within a web browser and is currently available as an early preview for "trusted testers" via a Chrome extension.
Beyond these agents, Gemini 2.0 Flash is the first model released in the Gemini 2.0 family. Currently in an experimental (beta) phase, it offers lower latency and stronger benchmark performance than previous models such as Gemini 1.0 and 1.5, along with improved reasoning in mathematics and coding tasks.
Accessing Gemini 2.0 Flash Experimental
Gemini 2.0 Flash Experimental is accessible on the web for all users and will soon arrive in the mobile Gemini app. To try it, select Gemini 2.0 Flash Experimental from the model dropdown menu.
Developers have the opportunity to explore this new model through Google AI Studio and Vertex AI platforms. Google has also announced plans to reveal additional sizes of the Gemini 2.0 model in January.
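For developers exploring the model programmatically, a minimal sketch of a call through Google's Gemini REST API might look like the following. The model identifier `gemini-2.0-flash-exp` and the endpoint path are assumptions based on Google's publicly documented API conventions; confirm the exact names in Google AI Studio before relying on them.

```python
import json
import os
import urllib.request

# Assumed identifier for the experimental release; verify the exact
# model name in Google AI Studio before using it.
MODEL = "gemini-2.0-flash-exp"
ENDPOINT = "https://generativelanguage.googleapis.com/v1beta/models"

def build_request(prompt: str, api_key: str):
    """Construct the URL and JSON body for a generateContent call."""
    url = f"{ENDPOINT}/{MODEL}:generateContent?key={api_key}"
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, body

def generate(prompt: str) -> str:
    """Send the prompt and return the first candidate's text."""
    api_key = os.environ["GEMINI_API_KEY"]  # set this before running
    url, body = build_request(prompt, api_key)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The response nests generated text under candidates -> content -> parts.
    return data["candidates"][0]["content"]["parts"][0]["text"]

if __name__ == "__main__":
    print(generate("Explain what an AI agent is in one sentence."))
```

The same model can also be reached through the official Google AI SDKs or Vertex AI, which handle authentication and response parsing for you; the raw REST sketch above simply makes the request shape explicit.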
Gemini 2.0 also lets users generate images natively via Google DeepMind's Imagen 3 text-to-image model, a notable step forward from previous iterations. Together, these features underline Google's push to advance AI functionality across applications while keeping the tools broadly accessible.

