Google Gemini 2.0 Revealed with Advanced AI Capabilities

Google revealed Gemini 2.0, the latest iteration of its AI model, which adds tools for the “agentic era” and supports native audio and image output. Agentic AI models are systems that can complete tasks on their own, adapting their decisions as they go. Think of prompting a model to schedule an appointment or to automate chores like shopping.
Multiple agents in Gemini 2.0 will be able to assist you in a variety of ways, from making real-time suggestions while you play games like Clash of Clans to selecting a gift and adding it to your shopping basket when prompted.
The AI agents in Gemini 2.0 exhibit goal-oriented behavior: they can independently work through the sequence of steps a task requires. Among them is Project Astra, a multimodal, general-purpose AI assistant for Android smartphones that integrates Google Search, Lens, and Maps.
Another experimental AI agent, Project Mariner, can navigate a web browser on its own. Mariner is now accessible to “trusted testers” as a Chrome extension in an early preview.
Beyond the AI agents, Gemini 2.0 Flash is the first release in Google’s new model family. Compared to the Gemini 1.0 and 1.5 models, this experimental (beta) version delivers better benchmark performance, lower latency, and improved coding, mathematical reasoning, and comprehension. It also offers native image generation built on Google DeepMind’s Imagen 3 text-to-image model.
All users can access Gemini 2.0 Flash Experimental on the web, with support in the mobile Gemini app to follow soon. To try it, select Gemini 2.0 Flash Experimental from the model dropdown menu.
The new model is also available to developers through Vertex AI and Google AI Studio. Google also said it will announce additional Gemini 2.0 model sizes in January.
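For developers, access through Google AI Studio might look like the following minimal sketch using the google-generativeai Python SDK. The model identifier `gemini-2.0-flash-exp` and the SDK calls shown are assumptions based on Google AI Studio conventions at launch; consult the official Gemini API documentation for the current names.

```python
# Hypothetical sketch: querying Gemini 2.0 Flash (experimental) via the
# google-generativeai SDK. Model name is an assumption; verify against
# the official Gemini API docs.
import os

MODEL_NAME = "gemini-2.0-flash-exp"  # assumed experimental model ID

def ask_gemini(prompt: str) -> str:
    """Send a single prompt to the model and return its text reply."""
    # Imported here so the sketch can be read/run without the SDK installed.
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel(MODEL_NAME)
    return model.generate_content(prompt).text

if __name__ == "__main__":
    if "GOOGLE_API_KEY" in os.environ:
        print(ask_gemini("In one sentence, what is an agentic AI model?"))
    else:
        print(f"Set GOOGLE_API_KEY to query {MODEL_NAME}")
```

An API key from Google AI Studio is required; the same model is exposed through Vertex AI with a different, project-scoped authentication flow.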