👨🤖 EEG responses to real vs. virtual faces

Real-life relevant face perception is not captured by the N170 but reflected in later potentials: A comparison of 2D and virtual reality stimuli

An interesting article in which the authors investigated how EEG responses differ during the perception of real and virtual faces.

Pasted image 20250522021231.png
FIG. 1. Procedure of stimulus presentation: 0.5–0.8s fixation, 1.5s stimulus presentation, 1.5s inter-stimulus interval (ISI). Exemplary stimuli of the face conditions and the perceptual control conditions (scrambled, blurred) are illustrated.

Pasted image 20250603201428.png
FIG. 2. Time-by-amplitude plot of the root mean squared ERP averaged over all electrodes for the selection of appropriate time windows for all ERP components. Grey highlighted sections mark the time windows for P1 (95–125ms), N170 (165–195ms), L1 (230–420ms) and L2 (685–1,385ms).
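As a hypothetical sketch of the window-selection approach described in FIG. 2, the snippet below computes the RMS of a grand-average ERP across channels and extracts per-trial mean amplitudes within the reported component windows (P1: 95–125ms, N170: 165–195ms, L1: 230–420ms, L2: 685–1385ms). The sampling rate, epoch start, and array layout are assumptions, not details from the paper.

```python
import numpy as np

# Assumed recording parameters (not stated in this note):
SFREQ = 1000   # sampling rate in Hz
T_MIN = -0.2   # epoch start relative to stimulus onset, in seconds

# Component time windows from FIG. 2, in milliseconds post-onset.
WINDOWS_MS = {
    "P1": (95, 125),
    "N170": (165, 195),
    "L1": (230, 420),
    "L2": (685, 1385),
}

def rms_over_channels(erp: np.ndarray) -> np.ndarray:
    """Root-mean-square across channels of a grand-average ERP
    (shape: n_channels x n_samples), as plotted in FIG. 2 to pick
    component time windows."""
    return np.sqrt((erp ** 2).mean(axis=0))

def window_mean(epochs: np.ndarray, window_ms: tuple) -> np.ndarray:
    """Mean amplitude per trial and channel within a time window.
    `epochs` has shape (n_trials, n_channels, n_samples)."""
    start = int((window_ms[0] / 1000 - T_MIN) * SFREQ)
    stop = int((window_ms[1] / 1000 - T_MIN) * SFREQ)
    return epochs[:, :, start:stop].mean(axis=-1)

# Toy data: 10 trials, 64 channels, epoch from -0.2 to 1.5 s at 1 kHz.
epochs = np.random.randn(10, 64, int(1.7 * SFREQ))
n170_amp = window_mean(epochs, WINDOWS_MS["N170"])
print(n170_amp.shape)
```

In the paper, such per-trial window means would then be averaged over the electrode clusters listed in the figure captions before statistical comparison across conditions.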

Pasted image 20250522010323.png
FIG. 3. Time-by-amplitude plot of the mean P1 and N170 amplitudes for all conditions (panels A1, B1). Mean topographies across conditions used for ERP averaging (panels A2, B2), with the electrodes selected for analyses indicated. For P1, electrodes Oz, O1, O2, P7, PO7, P8, PO8, TP7, TP8 and those in close vicinity were used. For N170, electrodes P7, P8, PO7, PO8, P9, P10, PO9, PO10, TP7, TP8 and those in close vicinity were used.

Pasted image 20250603201747.png
FIG. 7. Panel A illustrates the P1 topographies for all stimulus types in both modalities. Panel B depicts the mean P1 amplitudes for all stimulus types in both modalities. The error bars depict the confidence intervals for the mean values. Significant differences within each modality are marked (p<0.05).

Pasted image 20250603202728.png
FIG. 8. Panel A illustrates the N170 topographies for all stimulus types in both modalities. Panel B depicts the mean N170 amplitudes for all stimulus types in both modalities. The error bars depict the confidence intervals for the mean values. Significant differences within each modality are marked (p<0.05).

Extending previous laboratory studies, the later components appear to reflect the mechanisms of realistic face processing. In contrast to the earlier components, later potentials are linearly related to stimulus realism (Schindler et al., 2017), modulated by socially relevant emotional expressions and affective contexts (Bublatzky et al., 2014; Stolz et al., 2019), and especially sensitive to self-related emotions (Herbert et al., 2013).

Processing of genuinely self-relevant emotional and contextual information, such as a threat directed at oneself, does not seem to be captured by the N170 component.

Thus, consistent with laboratory results, late components discriminate faces from controls under realistic conditions, exhibiting much more discriminatory potential than the N170.

source: https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1050892/full

#EEG #face #N170 #ERP #VR #perception #faceRecognition