
The AI Robot That Predicts and Smiles Simultaneously With the Person in Front of It

The Emo robot can predict a smile about 840 milliseconds before the person in front of it smiles, then smiles at the same moment.

People are becoming increasingly familiar with robots capable of fluent verbal communication, thanks in part to advances in large language models such as ChatGPT, but robots' non-verbal communication skills, particularly facial expressions, still lag far behind. Designing a robot that can not only display a wide range of facial expressions but also time them accurately is extremely challenging.

The Creative Machines Lab at Columbia University’s School of Engineering in the United States has been working on this problem for more than five years. In a new study published in the journal Science Robotics, the team introduced Emo, an AI robot capable of anticipating human facial expressions and producing them at the same time as the person, as reported by TechXplore on March 27. Emo predicts a smile about 840 milliseconds before the person smiles, then smiles along with them.

Emo is a humanoid robot whose face is equipped with 26 actuators that allow it to produce a wide range of facial expressions. The robot’s head is covered with a soft silicone skin attached by a magnetic linkage system, making it easy to adjust and maintain. To further enhance its interactions, the research team integrated high-resolution cameras into each eye, allowing Emo to make eye contact, which is crucial in non-verbal communication.

The research team developed two AI models. The first predicts human facial expressions by analyzing subtle changes in the face of the person opposite the robot, while the second generates the motor commands that produce the corresponding expressions on Emo’s own face.
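The article does not describe how these two models are built, so the sketch below is only a rough illustration of the architecture it describes: a predictor that maps recent observations of the human face to an anticipated future expression, and an inverse model that maps a target expression to commands for the 26 actuators. The landmark format, layer sizes, and PyTorch framing are all assumptions, not details from the study.

```python
import torch
import torch.nn as nn

N_LANDMARKS = 68   # assumption: number of facial landmarks tracked in the camera feed
N_ACTUATORS = 26   # from the article: Emo's face has 26 actuators


class ExpressionPredictor(nn.Module):
    """Model 1 (assumed form): predicts the person's upcoming expression
    from a short window of recent facial-landmark observations."""
    def __init__(self, window=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(window * N_LANDMARKS * 2, 256), nn.ReLU(),
            nn.Linear(256, N_LANDMARKS * 2),
        )

    def forward(self, landmark_window):   # shape: (batch, window, N_LANDMARKS, 2)
        return self.net(landmark_window)  # predicted future landmark positions


class InverseFaceModel(nn.Module):
    """Model 2 (assumed form): maps a target expression to commands
    for the 26 facial actuators."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_LANDMARKS * 2, 128), nn.ReLU(),
            nn.Linear(128, N_ACTUATORS), nn.Sigmoid(),  # normalized actuator commands
        )

    def forward(self, target_landmarks):  # shape: (batch, N_LANDMARKS * 2)
        return self.net(target_landmarks)
```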

To train the robot to express emotions, the research team placed Emo in front of a camera and let it perform random movements. After several hours, the robot had learned the relationship between its motor commands and the resulting facial expressions, much as humans practice expressions in front of a mirror. The research team calls this “self-modeling,” akin to a person imagining how they would look when making a certain expression.
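A minimal sketch of what such a self-modeling loop might look like, reusing the InverseFaceModel and constants from the sketch above: the robot issues random actuator commands, watches its own face, and fits a mapping from observed expressions back to commands. The robot and camera APIs (set_actuators, read_landmarks), sample counts, and optimizer settings are hypothetical placeholders, not details from the study.

```python
import torch

N_ACTUATORS = 26  # from the article; InverseFaceModel is defined in the sketch above


def collect_self_model_data(robot, camera, n_samples=10_000):
    """Motor babbling: issue random actuator commands and record the
    facial landmarks the camera sees on the robot's own face."""
    commands, landmarks = [], []
    for _ in range(n_samples):
        cmd = torch.rand(N_ACTUATORS)      # random pose, normalized to 0..1
        robot.set_actuators(cmd)           # hypothetical robot API
        obs = camera.read_landmarks()      # hypothetical camera/landmark API
        commands.append(cmd)
        landmarks.append(torch.as_tensor(obs, dtype=torch.float32).flatten())
    return torch.stack(landmarks), torch.stack(commands)


def train_inverse_model(model, landmarks, commands, epochs=50):
    """Fit landmarks -> commands: the robot's version of practicing
    expressions in front of a mirror."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(landmarks), commands)
        loss.backward()
        opt.step()
    return model
```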

Next, the research team had Emo watch videos of human facial expressions frame by frame. After hours of training, Emo could predict an upcoming expression from the subtle changes that appear in a face just as a person begins to smile.
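At interaction time, the two models would be chained: the predictor anticipates the expression roughly 840 milliseconds ahead, and the inverse model drives the actuators so the robot’s smile lands at the same moment. The loop below is a hypothetical illustration of that pipeline; the frame rate, window length, and device APIs are assumptions carried over from the earlier sketches.

```python
from collections import deque

import torch

WINDOW = 10  # assumed observation window, in frames
FPS = 30     # assumed camera frame rate; 840 ms is roughly 25 frames of lead time


def interaction_loop(camera, robot, predictor, inverse_model):
    """Watch the live face, anticipate its next expression, and move
    Emo's actuators so both expressions appear at the same moment."""
    history = deque(maxlen=WINDOW)
    while True:
        frame = torch.as_tensor(camera.read_landmarks(), dtype=torch.float32)
        history.append(frame)
        if len(history) < WINDOW:
            continue  # wait until a full observation window is available
        window = torch.stack(list(history)).unsqueeze(0)  # (1, WINDOW, N_LANDMARKS, 2)
        with torch.no_grad():
            future_face = predictor(window)       # expression ~840 ms ahead
            cmd = inverse_model(future_face)      # actuator targets for that expression
        robot.set_actuators(cmd.squeeze(0))       # hypothetical robot API
```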

“I believe accurately predicting human facial expressions is a revolution in human-robot interaction. Previously, robots were not designed to consider human expressions during interaction. Now, robots can integrate facial expressions to respond,” said Yuhang Hu, a doctoral student at the Creative Machines Lab and a member of the research team.

“The robot’s ability to perform expressions simultaneously with humans in real time not only improves the quality of interaction but also helps build trust between humans and robots. In the future, when you interact with a robot, it will observe and interpret your facial expressions, just like a real human,” Hu added.