By: Nick Gambino
This year’s Google I/O conference was held in California on Tuesday and we got a ton of new tech to talk about. Per Google’s count, there were over 100 things announced at the event.
Unsurprisingly, AI was a big one. Google is rolling out its most impressive AI feature yet: Gemini Live. This is the ultimate in proactive AI assistance, allowing Gemini to gather information from a plethora of sources, including your camera and web searches. That makes it a super efficient research agent.
A video posted by Google shows a user briefly pointing his phone camera at a page from what looks like a science magazine. He doesn’t take a photo and only “scans” for a moment, but Gemini Live is able to answer questions about the page and give context to much of the information on it.
Essentially, you can hold a conversation with Gemini while your camera is out, letting it take in data from your surroundings, whether that’s written text, landscapes, buildings and so on.
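If you’re a developer curious about this camera-plus-question flow, Google’s public Gemini API can approximate it. Below is a minimal Python sketch assuming the google-generativeai SDK, a placeholder API key and a saved photo (magazine_page.jpg) standing in for a live camera frame; note that Gemini Live itself is a product feature, not this exact API.

```python
# Minimal sketch: ask Gemini a question about an image, roughly mirroring
# the demo's point-your-camera-and-ask interaction. The API key, model name
# and image file below are placeholders, not values from the announcement.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice
page = Image.open("magazine_page.jpg")  # stand-in for a camera frame

# Send the image alongside a text prompt in a single multimodal request.
response = model.generate_content(
    [page, "What is this page about? Summarize the key points."]
)
print(response.text)
```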
You can even use it to brainstorm ideas. As you can imagine, it’s very conversational and even adapts to your unique vocal quirks and mannerisms. If you tend to be a little more scattered in your thinking, Gemini can roll with it.
Google is also releasing AI Mode in Search to every US user. This is the chatbot that’s found its way into Google Search, and it gives a much-needed boost to internet research. It’s separate from the AI summary you get at the top of a search results page; AI Mode is its own tab that you can navigate to when you need a more robust search. The new AI Mode runs on Gemini 2.5.
Google also showed a demo of the upcoming Android XR glasses. In the demo, we saw users engaging with an overlaid display that shows what you might find on your phone screen: maps, text messages and even photos. The glasses were slim enough to pass for regular glasses, which is essential if Google wants these to catch on.
You can find the rest of Google’s announcements here.