Our research

How do we perceive and understand the objects and agents in our environment, and how do we interact with the physical world and communicate with the social world?
To understand how we accomplish these complex tasks, we investigate the neurocognitive mechanisms underlying language, vision, semantic and social processing and the multifaceted interplay of these core human faculties. Specifically, we are interested in the following topics.
Language production and language comprehension
- Basic mechanisms and microstructure of speaking and comprehending language
- The role of semantic processing, embodiment and social and communicative contexts
Visual perception
- Perception of objects, faces and social-emotional signals such as facial expressions
- Art perception and the role of knowledge in the appreciation of art
- Access to visual consciousness and linguistic, semantic and social-emotional influences on consciousness
- Visual mental imagery and linguistic, semantic and social-emotional influences on imagery
Knowledge, semantic and social processing
- Acquisition of various types of semantic, social and emotional information
- Influences of factual information and misinformation on (social) judgments
We integrate these topics by investigating the interplay between language processing, visual perception, and semantic and social processing, e.g., influences of language on visual consciousness and mental imagery, and verbal and nonverbal communication.
Methods
We use a number of experimental paradigms, most often in combination with EEG recordings. Behavioral measures include reaction times, ratings, naming latencies, measures of visual consciousness (attentional blink, continuous flash suppression), mental imagery, and others. We also use eye tracking, pupillometry and peripheral physiology (e.g., heart rate).
We have two separate shielded EEG test rooms that accommodate experiments with partner settings (sometimes the partner is a Pepper robot).
For online studies, we use the JATOS platform, for which we run our own server.
We like to analyze our EEG data with mixed effects models based on single-trial ERPs. Our custom analysis pipeline with a tutorial is published here. Soon it will be available for Python, too (thanks to Alex Enge).
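As a rough illustration of the general approach (not our published pipeline), single-trial ERP amplitudes can be modeled with a mixed effects model, for instance with statsmodels in Python. The data, effect sizes, and window are simulated and invented for this sketch.

```python
# Minimal sketch of a single-trial ERP analysis with a mixed effects model.
# NOT the lab's published pipeline; assumes single-trial mean amplitudes
# (e.g., from an N400 time window) have already been extracted per trial.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate 30 subjects x 100 trials with a small condition effect.
n_subj, n_trials = 30, 100
rows = []
for subj in range(n_subj):
    subj_offset = rng.normal(0, 2)          # random intercept per subject
    for _ in range(n_trials):
        cond = rng.integers(0, 2)           # 0 = unrelated, 1 = related (invented labels)
        amp = -1.5 * cond + subj_offset + rng.normal(0, 5)
        rows.append({"subject": subj, "condition": cond, "amplitude": amp})
df = pd.DataFrame(rows)

# Fixed effect of condition; random intercept and slope per subject.
model = smf.mixedlm("amplitude ~ condition", df, groups=df["subject"],
                    re_formula="~condition")
result = model.fit()
print(result.summary())
```

Because every trial enters the model, between-subject and between-trial variability are handled in one step rather than being averaged away first.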
A specialty of the lab is EEG experiments in which participants speak and name things. This creates terrible speech artifacts, but we can reliably deal with them using the RIDE algorithm, which integrates with our EEG pipeline (look here to learn more).
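To illustrate why latency-variable speech activity is such a problem for conventional averaging (the problem RIDE addresses), here is a toy simulation. It is not the RIDE algorithm itself, which iteratively estimates component waveforms and latencies; all signal parameters below are invented.

```python
# Toy illustration of the core idea behind RIDE-style decomposition:
# stimulus-locked averaging smears activity whose latency varies from trial
# to trial (like speech onset), whereas realigning trials recovers it.
import numpy as np

rng = np.random.default_rng(1)
sfreq, n_trials, n_samples = 500, 80, 1000  # hypothetical sampling setup
times = np.arange(n_samples) / sfreq

def gauss(center, width):
    return np.exp(-0.5 * ((times - center) / width) ** 2)

# Each trial: a stimulus-locked component plus a latency-variable artifact.
rts = rng.normal(1.2, 0.15, n_trials)        # simulated naming latencies (s)
trials = np.array([
    gauss(0.4, 0.05)                         # stimulus-locked ERP
    + 2.0 * gauss(rt, 0.08)                  # artifact tied to speech onset
    + rng.normal(0, 0.2, n_samples)          # noise
    for rt in rts
])

# Stimulus-locked average smears the jittered component...
stim_locked_avg = trials.mean(axis=0)

# ...whereas realigning each trial to its response onset recovers it.
# (np.roll wraps at the edges; acceptable for this toy example.)
shifts = np.round((rts - rts.mean()) * sfreq).astype(int)
resp_locked_avg = np.array(
    [np.roll(trial, -shift) for trial, shift in zip(trials, shifts)]
).mean(axis=0)
```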
Current research projects
Electrophysiological Investigations of Social Intelligence

Image of EEG caps from our lab.
This project investigates social intelligence in the context of perception, communication, and interaction between humans and artificial intelligence. Electrophysiological measures are used to gain insight into the neural processing taking place in humans in these situations. The overarching goal of the project is to feed this knowledge back into the development and programming of artificial intelligence. By understanding the neural basis of social perception and interaction, targeted approaches can be developed to adapt the behavior of robots and thereby enable, for example, increased perceived trustworthiness of the robot or facilitated interaction between humans and robots.
- Since 2023
- PIs: Anna Eiserbeck and Prof. Dr. Rasha Abdel Rahman
- DFG Excellence Strategy: Cluster
Language production in shared task settings

This image was created by the AI model www.craiyon.com based on our prompt “language production in shared task settings”.
Typically, people speak in the context of social interaction. Yet surprisingly little is known about how the neuro-cognitive processes of language production are shaped by social interaction. Drawing on a well-established effect in language production, we investigate the degree of semantic interference experienced when naming a sequence of pictures together with a task partner. In single-subject settings, naming latencies increase with each new picture of a given semantic category, so-called cumulative semantic interference (e.g., Howard et al., 2006). Recently, it has been demonstrated that naming latencies increase not only in response to speakers' own prior naming of semantically related pictures, but also in response to their task partner naming such pictures (Hoedemaker, Ernst, Meyer, & Belke, 2017; Kuhlen & Abdel Rahman, 2017). This suggests that task partners represent each other's actions and engage in lexicalization on behalf of their partner.

Based on these findings, we want to specify (1) the mechanism behind partner-elicited semantic interference, (2) the extent to which lexical access on behalf of the partner reflects the specific nature of the partner's task (and not just one's own task), and (3) whether partner-elicited lexical access depends on characteristics of the task partner or the task setting. Finally, we want to understand (4) how our findings within the framework of joint picture naming scale up to conversation. We expect insight into these questions from a series of experiments based on behavioral observations of naming latencies during joint picture naming and on event-related electrophysiological recordings. The results of this project will contribute to a better understanding of language production during social interaction and deepen our understanding of how flexibly the semantic system adapts to social context.
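To make the effect concrete, here is an illustrative-only simulation of the data pattern that cumulative semantic interference produces; the baseline latency and the per-position increase are invented numbers for illustration, not values from our studies (Howard et al., 2006, report a roughly linear increase per ordinal position).

```python
# Illustrative-only simulation of cumulative semantic interference:
# naming latencies grow with each additional member of a semantic
# category that has been named. All parameters are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_categories, n_positions = 8, 5  # 8 categories, 5 exemplars each

rows = []
for cat in range(n_categories):
    for pos in range(1, n_positions + 1):
        # ~25 ms extra per ordinal position (invented effect size)
        latency = 700 + 25 * (pos - 1) + rng.normal(0, 60)
        rows.append({"category": cat, "ordinal_position": pos, "rt_ms": latency})
df = pd.DataFrame(rows)

# The signature of the effect: mean latency rises with ordinal position.
print(df.groupby("ordinal_position")["rt_ms"].mean().round(1))
```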
- Since 2019
- PIs: Dr. Anna Kuhlen and Prof. Dr. Rasha Abdel Rahman
- Funded by the German Research Foundation (DFG)
Multimodal Interaction and Communication

The experimental setup of a study in the multimodal interaction and communication project. Photo copyright Olga Wudarczyk.
The overall goal of this project is to create a robot that can represent and integrate information from different sources and modalities for successful, task-oriented interactions with other agents. To fully understand the mechanisms of social interaction and communication in humans and to replicate this complex human skill in technological artifacts, we must provide effective means of knowledge transfer between agents. The first step of this project is therefore to describe core components and determinants of communicative behaviour including joint attention, partner co-representation, information processing from different modalities and the role of motivation and personal relevance. We will compare these functions in human-human, human-robot, and robot-robot interactions to identify commonalities and differences. This comparison will also consider the role of different presumed partner attributes (e.g., a robot described as “social” or “intelligent”). We will conduct behavioural, electrophysiological, and fMRI experiments to describe the microstructure of communicative behaviour.
The second step of the project is to create predictive models for multimodal communication that can account for these psychological findings in humans. Both the prerequisites and factors acting as priors will be identified, and suitable computational models will be developed that can represent multimodal sensory features in an abstract but biologically inspired way (suitable for extracting principles of intelligence).
Throughout the project we will focus on the processing of complex multimodal information, a central characteristic of social interactions that has nevertheless so far been investigated mostly within single modalities. We assume that multimodal information, e.g., auditory (speech), visual (face, eye gaze), or tactile (touch) signals, will augment partner co-representation and thereby improve communicative behaviour.
- Since 2019
- PIs: Prof. Dr. Rasha Abdel Rahman, Prof. Dr. Verena Hafner, Dr. Anna Kuhlen, and Prof. Dr. John-Dylan Haynes
- Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2002/1 “Science of Intelligence”
Knowledge-augmented face perception

These images were created by the AI model www.craiyon.com based on our prompt “knowledge augmented face perception”.
Face perception and categorization are fundamental to social interactions. In humans, input from facial features is integrated with top-down influences from other cognitive domains, such as expectations, memories, and contextual knowledge. In contrast to human perception, automatic face-processing systems are typically based purely on bottom-up information, without considering other factors like prior knowledge. The aim of this project is therefore to bridge the gap between human and synthetic face processing by integrating top-down components typical of human perception into synthetic systems. Results from experiments with human participants, combined with video recordings, will be used in deep-learning training procedures aimed at developing computational models. If you'd like to learn more, our recent opinion paper gives a good overview of our ideas.
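As a purely hypothetical sketch of the kind of architecture this implies, bottom-up image features can be fused with an encoded knowledge prior before a readout. The names, dimensions, and late-fusion scheme below are our illustrative assumptions, not the project's actual models.

```python
# Hypothetical sketch: fusing top-down person knowledge with bottom-up
# image features before classification. Shapes and weights are invented;
# the readout is untrained and serves only to show the data flow.
import numpy as np

rng = np.random.default_rng(3)

image_features = rng.normal(size=128)   # bottom-up: e.g., a face embedding
knowledge_prior = rng.normal(size=16)   # top-down: e.g., encoded person facts

# Late fusion by concatenation, then a linear readout over two classes.
fused = np.concatenate([image_features, knowledge_prior])
readout = rng.normal(size=(2, fused.size))
logits = readout @ fused
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the two classes
```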
- Since 2019
- PIs: Prof. Dr. Rasha Abdel Rahman, Prof. Dr. Olaf Hellwich
- Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2002/1 “Science of Intelligence”
Insight: Neuroscientific investigations of knowledge effects on visual perception and awareness

These images were created by the AI model www.craiyon.com based on our prompts “visual perception and awareness” (left) and “knowledge effects” (right).
According to classic and current models, visual perception can be viewed as encapsulated, in the sense that it is not modulated by cognitive factors such as expectations or prior knowledge. Yet evidence is accumulating that even verbally transmitted knowledge shapes perception, as demonstrated in the first phase of this project. The perception of faces and objects can be modulated by knowledge, and this can have considerable consequences not only for how we perceive and evaluate our environment, but also for our behaviour and social interactions. To date, little is known about the precise mechanisms of knowledge effects on perception, and even less about influences of knowledge on visual awareness. The main goals of the second phase are therefore to determine the precise mechanisms underlying knowledge effects on perception and to investigate potential influences of knowledge on visual awareness.

Part 1 uses event-related brain potentials to investigate (1) the influence of visually derived information (e.g., about emotional states, attractiveness, or personality impressions such as trustworthiness) on effects of socially relevant person-related information, (2) effects of gossip that is verbally marked as uncertain on face perception and moral judgments, and (3) whether the observed effects of abstract, verbally transmitted information generalize to knowledge based on direct communicative experience. Part 2 investigates visual awareness and how it is modulated (1) by socially relevant knowledge that is long established in memory and potentially embedded in the visual representations of faces, (2) by perceptual-semantic expertise, and (3) by the congruency of knowledge and the visual properties of faces and objects. The planned research should contribute to a better understanding of the basic mechanisms and limits of knowledge-induced influences on visual perception and visual awareness.
- Since 2017
- PI: Prof. Dr. Rasha Abdel Rahman
- Funded by the German Research Foundation (DFG)
Completed research projects
Through the lens of affective knowledge: A cross-national study on the perception of facial expressions (completed)

These images were created by the AI model www.craiyon.com based on our prompts “emotional facial expressions” (left) and “expression perception” (right).
Human faces and the information derived from emotional facial expressions play a critical role in human social interactions. According to most theoretical accounts, expressions are viewed as invariant manifestations of specific emotional states (e.g., anger, happiness, disgust). Accordingly, extensive research has been dedicated to identifying the general neuro-cognitive basis of invariant expression perception, while very little is known about its flexible and variable aspects.
The proposed research will investigate such variable aspects by testing whether expression perception can be modulated by the emotional valence of our biographical knowledge about a person. Specifically, we ask whether positive or negative biographical information (e.g., perceiving the face of a person known or presumed to be a murderer or a philanthropist) shapes how we see his or her facial expression. To gain insight into the temporal dynamics of these affective knowledge effects, and to localize them at perceptual or post-perceptual evaluative processing stages, the electroencephalogram will be recorded in Berlin and eye movements will be recorded in Jerusalem.
Effects of visually opaque affective information would suggest that facial expressions cannot be viewed independently of our knowledge about the person we see, whether this knowledge is correct or false, biased or unbiased. This would have implications not only for expression perception but also for social interactions.
The project is conducted in cooperation with the Hebrew University of Jerusalem, Israel.
- Since 2013
- PIs Berlin: Prof. Dr. Rasha Abdel Rahman, Dr. Franziska Süß
- PIs Jerusalem: Dr. Hillel Aviezer, Dr. Ran Hassin
- Funded by the Hebrew University – Humboldt-Universität Cooperation Call 2013
Effects of age and language proficiency on the processing of morphologically complex words (completed)

These images were created by the AI model www.craiyon.com based on our prompts “goldfish” (left) and “piano fish” (right).
The project examines the lexical representation and processing of compound nouns (e.g., goldfish) in speech production. The representational structure of compounds at the lemma level is investigated in detail. In addition, the underlying processes in the lexicalization of new (unfamiliar) compounds (e.g., *piano fish) are examined. Both healthy adult speakers across different age groups and individuals with aphasia are tested.
In picture-naming tasks with compounds as targets, both syntactic (grammatical gender) and semantic effects are assessed. Different experimental paradigms are used, and behavioral and electrophysiological measures (EEG) are analyzed. In one module, effects of gender-marked determiner primes on noun-noun compound production are tested. In another module, semantic effects on the production of compounds and simple nouns (constituents of compounds) are tested in the continuous (cumulative semantic) picture-naming paradigm. Furthermore, the impact of non-verbal cognitive functions (attentional control and inhibition) is examined.
The following research questions are tested: 1) How are compounds lexically stored and how are they accessed during speech production across the life span? 2) How does the lexical acquisition of new (unfamiliar) compounds (e.g., *piano fish) work? 3) How do semantic and lexical functions affect compound production in healthy older and aphasic speakers? 4) What is the role of non-verbal attentional control processes in speech production?
In summary, the project aims at a better understanding of the lexical representation and processing of morphologically complex nouns in speaking. In particular, effects of ageing, language proficiency, and non-verbal cognitive control are examined.
- 2015–2017 / 2018–2022
- PI: Dr. Antje Lorenz
- Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, LO 2182/1-1 and 1-2)
Dynamics and flexibility of the language production system (completed)

These images were created by the AI model www.craiyon.com based on our prompts “lexicon in the mind” (left) and “planning what to say” (right).
For a long time, access to the mental lexicon during speech planning was viewed, almost without controversy, as a competitive process in which the selection of a planned target utterance competes with semantic alternatives. However, recent findings, in particular reported exceptions to semantic interference effects, have triggered a theoretical controversy regarding the locus of these effects and the existence of lexical competition mechanisms, a controversy this project aims to help resolve. Lexical cohort activation is proposed as a new approach to explaining the divergent findings and is tested as a decisive determinant of the occurrence of interference effects. A series of experiments investigates the extent to which contextual modulations can activate lexical cohorts and thereby induce semantic interference effects. A central goal of these experiments is to examine the flexibility and situation-specific malleability of the microstructure of speech planning through cohort activation.
- 2008–2018
- PI: Prof. Dr. Rasha Abdel Rahman
- Project member: Sebastian Rose
- Funded by the Deutsche Forschungsgemeinschaft (DFG)