By: Nick Gambino

Google is looking to release a version of their AI assistant Gemini for kids under 13. The plan was laid out in an email the company sent to parents who manage supervised accounts for their kids. 

Creating a kids’ version of the AI chatbot means children can get help with homework and creative writing assignments. That raises the concern that they’ll just have AI answer everything for them instead of learning those skills themselves. It also raises another issue.

Gemini can be a useful resource for kids, especially in education, but there are real questions about the accuracy of AI responses. Google even acknowledged this in their email. 

“Remind your child that Gemini isn’t human,” the email reads. “Even though it sometimes talks like one, it can’t think for itself or feel emotions.”

I think this touches on something key in the current AI tidal wave. We’re all so swept up in how well it talks that we forget this AI is just 1s and 0s. It has mastered syntax and conversational language, but it also straight up lies. It’s not so much answering in error as it is unable to find the answer, so it makes one up. 

This is something Google and the rest of the industry have to figure out. No matter how many times you tell people that AI is riddled with inaccuracies and that they should double-check any answers, they start trusting it the way a mark trusts a professional grifter. 

This came up recently when a friend of mine looked up some bit of movie trivia and took the Google AI response at its “word.” Being a cinephile, I was able to call foul instantly. They challenged me on it, so we looked it up for ourselves, no AI middleman, and lo and behold, I was right. I don’t even know where the AI was getting its wildly inaccurate answer. 

Well, now kids are going to have their very own inaccurate AI to play with. I’m not hating; I just want us all to be wary of smooth-talking liars, whether they’re human or robot.