Google I/O: Decoding the Gemini App, AI in Search, Google Beam and Workspace

Mountain View, California: With a slew of updates to Gemini 2.5 Pro and Gemini 2.5 Flash, important new generative AI models Veo 3 and Imagen 4, upgrades for Gemini Live, the introduction of Deep Research and Canvas, and the new Google AI Pro and Google AI Ultra plans, Google I/O leaves us with an important question: how is all of this changing? The answer is quite simple.

The first glimpse of Google Beam, though most would not have realised it, came when Google demoed Project Starline at an I/O keynote a few years ago.

Change is important.

First things first, Google has confirmed that Gemini Live capabilities are now available on all compatible Android and Apple devices, for everyone and without any subscription plan. The arsenal of new tools added to the Gemini app includes the Imagen 4 image generation model, the Veo 3 video generation model (both will appear in the drop-down list for model selection), the new Deep Research and Canvas features, as well as an integration within the Chrome web browser.

The Gemini 2.5 Flash model now becomes the default model, succeeding the 2.0 Flash model.

“With the latest Gemini 2.5 model, Canvas is now even more intuitive and powerful. You can create interactive infographics, quizzes and even podcast-style Audio Overviews in 45 languages. And the magic of 2.5 Pro is its ability to turn complicated ideas into working apps with remarkable speed and accuracy,” said Josh Woodward, Vice President, Google Labs and Gemini.

Google is offering two new AI subscription plans, and that should not be surprising, given the pressure on tech giants to generate revenue from their widening AI toolsets. These are Google AI Pro (essentially the current Google AI Premium plan renamed, with some add-ons) and Google AI Ultra, which will be available as options for subscribers.

With the Pro plan, users will get a complete suite of AI products with higher rate limits than the free versions, including the Gemini app (earlier known as Gemini Advanced), as well as products such as Flow and NotebookLM.

The Ultra plan, as the name suggests, is being positioned as the flagship tier, and for now is only available in the US (some of its functionality is limited to the US region, for now). It will have the highest rate limits, early access to new experimental features including the upcoming Deep Think model, as well as priority access to Agent Mode when it launches.

Woodward said, “Agent Mode basically combines advanced features such as live web browsing, in-depth research and smart integration with your Google apps, empowering it to manage complex, multi-step tasks.”

Google says the Ultra plan costs $249.99 per month, and more countries will soon be added to the rollout. OpenAI also has a Pro subscription priced at $200 per month. Anthropic has a Max plan for Claude users, which costs upwards of $100 per month depending on how it is configured. India pricing for the Ultra plan is yet to be announced.

AI, and agentic aspirations in Search

For Google to make Gemini a universal AI assistant, the data collected from Search will be important. AI Overviews, which launched at last year’s I/O, have since rolled out to more countries, including India. Google said search query volumes are on an upward trajectory. AI Overviews in Google Search are now available in 200 countries, and can be overlaid on search results in more than 40 languages.

This year, Search gets AI Mode. The keys here are advanced reasoning and multimodality. Liz Reid, Vice President and Head of Google Search, said AI Mode uses a query fan-out technique, breaking any question a user asks into further sub-topics.

Reid said, “This enables Search to dive deeper into the web than a traditional search on Google, helping you discover even more of what the web has to offer and find incredible, hyper-relevant content that matches your question.”

AI Mode will also have Deep Search, which uses the same query fan-out technique. Google said Deep Search in AI Mode can issue hundreds of searches, reason across disparate pieces of information, and create a fully-cited report in a few minutes.
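To make the query fan-out idea concrete, here is a minimal sketch in Python, under the assumption that the technique works roughly as Google describes it: a broad question is split into sub-queries, each sub-query is searched independently, and the results are merged. The fan_out and run_search helpers are hypothetical stand-ins, not Google's actual implementation.

```python
# Minimal sketch of the "query fan-out" idea: split a broad question
# into sub-queries, search them in parallel, and merge the results.
from concurrent.futures import ThreadPoolExecutor

def fan_out(question: str) -> list[str]:
    """Split a broad question into narrower sub-queries (hypothetical)."""
    # In practice a model would generate these; hard-coded for illustration.
    return [
        f"{question} overview",
        f"{question} recent developments",
        f"{question} expert analysis",
    ]

def run_search(query: str) -> list[str]:
    """Stand-in for a single web search returning result snippets."""
    return [f"result for: {query}"]

def answer(question: str) -> list[str]:
    sub_queries = fan_out(question)
    # Issue the sub-queries concurrently, then flatten and de-duplicate.
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(run_search, sub_queries)
    merged: list[str] = []
    for results in result_lists:
        for snippet in results:
            if snippet not in merged:
                merged.append(snippet)
    return merged

if __name__ == "__main__":
    for snippet in answer("solid state batteries"):
        print(snippet)
```

Deep Search, per Google's description, simply runs this loop at a much larger scale, with hundreds of sub-queries and a synthesis step that cites the sources it drew on.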

Joining the visual search capabilities of Google Lens is Search Live, which will allow a user to point their phone’s camera at anything around them to start a search. “For example, if you’re feeling stumped on a project and need some help, just tap the ‘Live’ icon in AI Mode or in Lens, point your camera, and ask your question. In this way, Search becomes a learning partner that can see what you see, and can point you to different resources such as websites and videos,” the company said.
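Search Live itself is a consumer feature rather than a public API, but the underlying pattern, a camera frame plus a question, can be approximated with Google's public google-genai Python SDK. A minimal sketch, assuming a locally saved camera frame frame.jpg and a valid API key:

```python
# Sketch: send a camera frame plus a question to a Gemini model.
# This approximates the Search Live pattern with the public google-genai
# SDK; it is not the Search Live feature or its internal API.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # assumption: key from AI Studio

with open("frame.jpg", "rb") as f:  # hypothetical saved camera frame
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "I'm stuck on this project. What am I looking at, and what should I try next?",
    ],
)
print(response.text)
```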

Agentic AI abilities are getting a deeper run in AI Mode, which Google said can help people save time on tasks like monitoring for and buying movie tickets. “This experience will begin with event tickets, restaurant reservations and local appointments. And we’ll work with companies such as Ticketmaster, StubHub, Resy and Vagaro to create a seamless and useful experience,” the company said. This should scale up quickly for Google in due course.

So should the AI Mode shopping experience, which pairs Gemini with the Shopping Graph to help users browse for inspiration, think through considerations, and narrow down products into more manageable shortlists.

“The Shopping Graph now has more than 50 billion product listings, from global retailers to local mom-and-pop shops, each with details such as reviews, prices, colour options and availability, and with listings personalised to your taste,” explained Lillian Rincon, Vice President, Consumer Shopping Product.

This AI agent uses Google Pay to complete the order if pricing and other criteria match the checklist you set at the outset.
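As a minimal sketch of that checklist gating, hypothetical throughout: the agent compares a live offer against the criteria the user set up front and only then proceeds to checkout. None of these types or functions are Google's; the payment step is a stand-in, not the Google Pay API.

```python
# Sketch of agentic checkout gating: only buy when the live offer
# satisfies the user's pre-set checklist. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Checklist:
    max_price: float          # upper bound in the user's currency
    required_colour: str
    must_be_in_stock: bool = True

@dataclass
class Offer:
    price: float
    colour: str
    in_stock: bool

def meets_checklist(offer: Offer, checklist: Checklist) -> bool:
    return (
        offer.price <= checklist.max_price
        and offer.colour.lower() == checklist.required_colour.lower()
        and (offer.in_stock or not checklist.must_be_in_stock)
    )

def maybe_buy(offer: Offer, checklist: Checklist) -> str:
    if meets_checklist(offer, checklist):
        # A real agent would hand off to a payment flow here.
        return "order placed"
    return "criteria not met; keep monitoring"

print(maybe_buy(Offer(price=89.0, colour="navy", in_stock=True),
                Checklist(max_price=100.0, required_colour="navy")))
```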

Beam It, in 3D

The first glimpse of Google Beam, though most would not have realised it, came when Google demoed Project Starline at an I/O keynote a few years ago. The 3D communication platform, as Google Beam is now called, uses an AI volumetric video model that renders calls in full 3D from any point of view, turning standard 2D video streams into realistic 3D experiences. This can make video calls far more immersive, without the need to wear 3D glasses or a virtual reality headset.

“We are working closely with HP to bring the first Google Beam devices to market with select customers later this year. In a few weeks, you will see the first Google Beam products from HP at InfoComm,” the company said.

AI in your Workspace

Google is not slowing down on integrating AI-powered functionality within Workspace, which it says now delivers more than 2 billion AI assists every month. Some major changes include the availability of the Imagen 4 image generation model in Slides, Vids and Docs; the ability to point Gemini in Google Docs at multiple documents where information sources may be scattered; the conversion of presentation slides into video; and speech translation in Google Meet.
