New article by Martin Maier, Alexander Leonhardt, Florian Blume, Pia Bideau, Olaf Hellwich, and Rasha Abdel Rahman:
Neural Dynamics of Mental State Attribution to Social Robot Faces
Our latest study explores how people attribute mental states to social robots based on affective information about their behavior. We found that when participants learned about a robot’s positive, negative, or neutral actions, they quickly formed impressions of its trustworthiness, its facial expressions, and its intentionality (i.e., mindedness). EEG recordings revealed that this information influenced both early perceptual and later evaluative stages of brain activity, much as it does in human face perception, except for fast emotional responses, where robot faces were processed differently. These findings suggest that although people readily perceive emotional expressions and intentions in robots, robots do not elicit the same emotional engagement as humans do. This research sheds light on basic cognitive and neural mechanisms of human–robot interaction, with implications for the ethical and social integration of artificial agents in our daily lives.