- A child android robot named Affetto was first introduced in 2011.
- Researchers have upgraded this robot to achieve more human-like facial expressions.
- They analyzed over 100 points on Affetto’s face and added a variety of expressions like frowning and smiling.
A robot's face is one of its most important information displays, conveying emotions, affection, and intentions through deformations of its different parts. Although robots are already used in several fields, including sales and healthcare (especially in Japan), giving a robotic face human-like expressions remains a tricky challenge.
In 2011, Japanese scientists tackled this challenge by developing an android child head named Affetto. It was built at Osaka University to study human cognitive development through robotics; for instance, it could help caregivers refine their skills in caring for infants.
Now they've upgraded the android's face and made it strikingly expressive. They carefully analyzed points on Affetto's facial surface and the balance of forces involved in order to achieve more human-like motion. Using mathematical modeling and mechanical measurements, they were able to significantly widen Affetto's range of facial expressions.
Controlling Android Faces With Surface Deformations
So far, factors such as the materials used in robot skin, asymmetry, and the range of facial movements haven't been investigated in detail. In this work, the researchers considered each of these factors and developed an effective method to measure and control the movements of soft facial skin.
They found that surface deformation plays a key role in controlling robotic faces. They analyzed 116 different points on Affetto's face, measuring each one's movement in three dimensions.
The researchers grouped these facial points using so-called deformation units. Each unit comprises a set of mechanisms that generate a distinct facial contortion, such as raising or lowering part of an eyelid or lip. They then fed the measurements into a mathematical model to analyze surface motion patterns.
Courtesy of researchers | Osaka University
The model helped them apply the appropriate forces to adjust the synthetic skin, and to configure the deformation units so that Affetto's facial surface motions could be precisely controlled.
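The researchers' actual model is not published in this article, but the pipeline they describe, measuring point displacements, grouping them under actuated deformation units, then inverting a model to find the inputs that produce a desired expression, can be sketched in miniature. Everything below is a hypothetical illustration: the linear response assumption, the number of units, and the random response matrix are stand-ins, not Affetto's real parameters.

```python
import numpy as np

# Hypothetical sketch: assume each of n_units actuated "deformation units"
# shifts the tracked skin points, and that displacement responds roughly
# linearly to actuator input. The 116 points come from the article; the
# six units and the random response matrix are illustrative assumptions.
rng = np.random.default_rng(0)
n_points, n_units = 116, 6

# J[i, j]: 3-D displacement of measured point i per unit of input to
# actuator j, flattened to shape (3 * n_points, n_units). In the real
# system this would come from mechanical measurements, not random draws.
J = rng.normal(size=(3 * n_points, n_units))

# A target surface deformation, e.g. the point offsets of a smile.
# Here we synthesize it from known actuator inputs so we can check the fit.
true_inputs = np.array([0.8, 0.0, 0.3, 0.0, 0.5, 0.1])
target = J @ true_inputs

# Least-squares inverse: which actuator inputs best reproduce the target
# deformation? This is the "control via the model" step in spirit.
inputs, *_ = np.linalg.lstsq(J, target, rcond=None)
print(np.round(inputs, 2))  # recovers the generating inputs
```

The useful point of the sketch is the direction of the computation: measurements fix the forward map from actuator inputs to skin deformation, and expression control is then the inverse problem of solving that map for a desired deformation.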
So far, android faces have been something of a black box: although such robots have been built, their expressions have only been evaluated in general and vague terms.
This new technique can be used to probe and examine the facial system, and to introduce a variety of expressions such as frowning and smiling. Overall, it's a powerful solution to the black box problem of android faces.