By: Nick Gambino

This week in creepy tech reporting, a team of scientists is at work developing a new AI that can teach a robot to laugh at the right times. You know, like a real human would do.

The project is being conducted at Kyoto University in Japan on a robot called Erica. The lifelike moving doll is being trained to properly distinguish between a ton of different kinds of laughter, like light chuckles and hearty guffaws.

“We think that one of the important functions of conversational AI is empathy,” Dr. Koji Inoue of Kyoto University, the study’s lead author, said of the project. “So we decided that one way a robot can empathize with users is to share their laughter.”

The ever-changing nature of conversation and the randomness with which we respond, like when we laugh, is hard to teach to a non-sentient, if animated, thing. To tackle this challenge, the scientists created a shared-laughter model. Using AI, they taught Erica to recognize laughter, determine what kind it was, and then respond in kind.
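To make that three-step pipeline concrete, here is a minimal sketch in Python. The laugh categories, intensity thresholds, and function names are all illustrative assumptions, not details from the Kyoto study; the real system works from audio, not a single number.

```python
# Hypothetical sketch of a shared-laughter pipeline: detect a laugh,
# classify its type, then respond in kind. Thresholds and category
# names are invented for illustration only.

def classify_laugh(intensity: float) -> str:
    """Bucket a detected laugh by a (hypothetical) intensity score, 0.0 to 1.0."""
    if intensity < 0.2:
        return "none"     # no real laugh detected
    elif intensity < 0.6:
        return "chuckle"  # light social laugh
    return "guffaw"       # hearty belly laugh

def shared_laughter_response(intensity: float) -> str:
    """Respond in kind: match the user's laugh rather than overshooting it."""
    responses = {
        "none": "stay silent",
        "chuckle": "light chuckle",
        "guffaw": "hearty laugh",
    }
    return responses[classify_laugh(intensity)]

print(shared_laughter_response(0.4))  # prints "light chuckle"
print(shared_laughter_response(0.9))  # prints "hearty laugh"
```

The point of the mapping is the "in kind" part: a chuckle earns a chuckle back, not a belly laugh, which is exactly the mismatch the next paragraph describes.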

This shared-laughter approach makes sense, because if one person lightly chuckles and the other lets out a belly laugh, it’s pretty awkward. This happens all the time in life with actual humans.

This kind of training might go a long way beyond robots. Laughing along is just one element of empathy: recognizing and understanding someone else’s feelings and then sharing them. When you don’t do that, it comes across as self-involved or unempathetic.

The system seems to be working, as Erica performed well in four different short conversations, each a couple of minutes long. Of course, that’s a long way from the demands of a full-blown casual conversation with a human.

“We do not think this is an easy problem at all, and it may well take more than 10 to 20 years before we can finally have a casual chat with a robot as we would with a friend,” Dr. Inoue predicted.

We’ve all seen this movie. It’s the one where the robot is indistinguishable from humans. So maybe let’s not say we did.