Imitation and recognition of facial emotions in autism: a computer vision approach

Abstract

Background

Imitation of facial expressions plays an important role in social functioning. However, little is known about the quality of facial imitation in individuals with autism and its relationship with defining difficulties in emotion recognition.

Methods

We investigated imitation and recognition of facial expressions in 37 individuals with autism spectrum conditions and 43 neurotypical controls. Using a novel computer-based face analysis, we measured instructed imitation of facial emotional expressions and related it to emotion recognition abilities.

Results

Individuals with autism imitated facial expressions if instructed to do so, but their imitation was both slower and less precise than that of neurotypical individuals. In both groups, a more precise imitation scaled positively with participants’ accuracy of emotion recognition.

Limitations

Given the study’s focus on adults with autism without intellectual impairment, it is unclear whether the results generalize to children with autism or individuals with intellectual disability. Further, the new automated facial analysis, despite being less intrusive than electromyography, might be less sensitive.

Conclusions

Group differences in emotion recognition, imitation and their interrelationships highlight potential for treatment of social interaction problems in individuals with autism.

Introduction

Facial expressions are an essential tool to communicate emotions non-verbally in social interactions [1]. Being able to understand as well as to generate these expressions is crucial to the exchange of inner states with others [2]. Impairments in reciprocal social communication and interaction are key diagnostic aspects of autism spectrum conditions (ASC) [3]. Although the diagnosis involves both the understanding and the generation of non-verbal signals, the latter in particular, as well as the association between the two, has been neither fully understood nor thoroughly investigated.

In this context, the ability to generate facial expressions that match those of an interaction partner might play a particularly crucial role. Neurotypical (NT) individuals (i.e. individuals without autism) tend to automatically mimic facial expressions in social interactions [4]. There is evidence that such spontaneous facial imitation, often referred to as mimicry, might help people to recognize emotions (e.g. [5,6,7,8,9]).

Accordingly, many specific interventions for patients with ASC involve teaching the voluntary imitation of others’ facial emotions (e.g. [10,11,12]). However, the actual benefit of voluntarily produced imitation remains unclear, especially in individuals with autism. Investigating the relationship between the voluntary imitation of facial expressions and the recognition of those expressions in autism therefore seems promising: it may be a key to elucidating the expression and recognition deficits, understanding their interaction and targeting them therapeutically.

Although many studies have reported difficulties of individuals with autism in recognizing emotions, results remain inconsistent regarding specific emotions [13,14,15]. One reason might be the low sensitivity of many tasks. This becomes apparent in studies with high-functioning individuals, i.e. individuals who show only a mild level of symptoms and an intelligence quotient of 70 or above [16]. More generally, mixed results may be due to differences in the individuals’ level of functioning, potential compensatory mechanisms and task demands [15].

Given the important role of imitation of non-verbal signals in social functioning, surprisingly little is known about the tendency to imitate facial expressions in individuals with ASC. Compared to healthy controls, there seem to be differences in spontaneous imitation of facial expressions [17, 18]; voluntary imitation, however, seems to be grossly unimpaired [18,19,20]. The limited evidence for aberrant voluntary imitation might be explained by ceiling effects [21], as most studies focused on the occurrence of imitation while ignoring its quality. However, the voluntary facial imitation capacity of individuals with ASC appears to differ from neurotypicals’ in quality [22] as well as timing [23]. A recent meta-analysis [24] summarized a variety of differences in how people with ASC produce facial expressions of emotion. However, the authors pointed out that the strength of the group differences may be overestimated due to confounding effects of age or intellectual functioning. In conclusion, the exact nature of facial imitation in adults with autism without intellectual impairment has not yet been fully understood and deserves further investigation.

Although most studies investigated voluntary facial imitation of individuals with autism in the context of emotion recognition paradigms, performance in the two areas has not been linked in those studies (e.g. [18, 23]). One reason might be that the measures of emotion recognition as well as of imitation performance used in those studies were not sensitive enough: they used easy-to-recognize emotions and measured only the occurrence or speed, but not the precision, of imitation.

So far, most studies investigating the expression of facial emotions have either deployed electromyography (EMG), which has been reported as obtrusive and is limited to very few muscles, or have used time-costly coding of video-recorded expressions by observers. Due to recent advances in image and video classification [25, 26], computer-based facial expression analysis offers new possibilities to measure facial expressions. Such analysis classifies the purely visual input of facial features and facial motion into abstract classes [27]. Unlike electromyography, a computer-based analysis is neither expensive nor intrusive: it allows measurement of facial expressions without physical contact with the participant (e.g. when applying the EMG electrodes). This is especially relevant in studies including individuals with autism, as touch is often perceived as aversive and might induce irritation, thus introducing confounds. A study on the detection of autism successfully classified individuals with autism based on their automatically analysed facial expressions [28]. Another recent study [29] analysed the spontaneous production of facial expressions using automated facial expression analysis software and related it to alexithymia. Both studies clearly showed the value of automatic computer-based approaches.

Taken together, the relationship between recognition and imitation of facial expressions lacks rigorous investigation in adults with ASC without intellectual impairment. A better understanding of the nature of both phenomena, as well as their association, might help to target the social difficulties of individuals with autism. This study therefore examines the voluntary facial imitation capacity of individuals with and without autism in an emotion recognition paradigm. First, we expect to replicate the previously described emotion recognition deficit in individuals with ASC. Second, we assume quantitative as well as qualitative differences in facial imitation in individuals with autism. Third, we aim to elucidate the relationship between facial imitation and emotion recognition, especially for ASC.

Methods

Participants

Thirty-seven adults with ASC (18 female, mean age = 36.89, range 22–62) and forty-three NT individuals (22 female; mean age = 33.14, range 18–49) with no self-reported history of psychiatric or neurological disorders participated in the study. Three further participants had been excluded prior to analysis because they did not meet the inclusion criteria. The remaining sample size of 80 exceeded the required sample size of 67 estimated by a statistical power analysis (evaluated for whole-sample bivariate one-tailed correlations with power = 0.80, α = 0.05 and a medium effect size of ρ = 0.30). Participants from the ASC group were recruited through the autism outpatient clinic of the Charité – Universitätsmedizin Berlin. All participants in this group were diagnosed according to ICD-10 criteria for Asperger syndrome, atypical autism or childhood autism [30].

The diagnostic procedure included the Autism Diagnostic Observation Schedule (n = 36; ADOS-2; for readability, we use the term ADOS; the group’s raw algorithm total scores can be found in Table 1; all analyses were calculated on the social domain score of the ADOS-2 [31]) and, if parental informants were available, the Autism Diagnostic Interview-Revised (n = 22; ADI-R; diagnostic algorithm total score [32]).

Table 1 Demographic and diagnostic information for participants with ASC and NT individuals

Exclusion criteria were current antipsychotic or anticonvulsant medication, comorbid neurological disorders, and age over 65 years (to avoid possible confounding age-related neurodegeneration). Furthermore, high German language proficiency was required, as assessed through a German vocabulary test (Wortschatztest (WST) [33]). A further exclusion criterion was the use of any medical treatment (e.g. benzodiazepines) that could have an impact on participants’ cognitive abilities. In addition, in the control group, any history of psychiatric disorder led to exclusion.

Procedure

The experiments were conducted in a laboratory with constant lighting conditions. Participants were asked to engage in an emotion recognition and imitation task. During the experiment, the participants’ faces were recorded with a webcam at 30 frames per second and a resolution of 640 × 480 pixels. An effort was made to disguise the aim of the video recording so that participants would not concentrate on their facial movements: the experimenter told the participants that the webcam was only placed to monitor their attention level. The video recordings of all participants were checked individually, and participants who did not follow the instructions correctly were excluded.

Measures

Emotion recognition

The Berlin Emotion Recognition Test (BERT) [34] is a computer-based task for sensitively assessing emotion recognition. The test consists of a total of 48 photographs of facial expressions of professional actors displaying one of the six basic emotions ([35]; for stimulus production see [36]). The face is centred in front of a dark grey background. There are eight pictures per emotion, each emotion being expressed by four different female and four different male actors (see Fig. 1 for an example picture for each emotion). Below each picture, two emotion words are presented, and the participant is asked how the person is feeling. Only one of the two options correctly describes the emotion expressed. The emotion recognition score is the percentage of correct answers. The position of the correct answer, as well as the order of the pictures, is randomized.

Fig. 1: Example pictures of BERT for each emotion

Fig. 2: Time course of a trial of BERT for both conditions

To develop a sensitive task, the pictures were extracted from video clips in which professional actors expressed the target emotions. The actors had been instructed with emotional scripts (e.g. imagine you receive an unexpected present) to perform the facial expressions, starting with a neutral expression. This led to more naturalistic footage. From each video clip, frames of three different intensities were extracted. These pictures of facial emotion expressions built the item pool for the BERT. This pool was reduced to the most sensitive items in a pre-study at a public open house event in Berlin, Germany, where large scientific institutions welcome the general public. In this pre-study with a sample of opportunity, 46 participants were asked to recognize the emotion of each of the items. Each picture was presented with the six basic emotions as possible answers. Based on their responses, for each video clip we selected the picture which discriminated best between low- and high-scoring participants. Additionally, we identified for each item the most difficult distractor out of the five incorrect emotion labels. In a follow-up online study [37] with 436 participants, the selected pictures and distractors were tested and further improved with respect to reliability and discriminatory power by choosing the best eight items per emotion and the most difficult distractor. A more detailed description of the task development and the current version of the task can be found online at: http://www.hannadrimalla.de/bert.html.

Imitation

Each emotional expression was preceded by a picture of the same actor showing a neutral expression, displayed for 1500 ms. This period was used as a baseline; thereafter, the emotional expression picture was shown for 6 s before the emotion words appeared. During this period, the participant’s facial response to the picture was recorded via video, preventing movement artefacts resulting from behavioural responses. Reaction time was calculated, for correct answers only, as the interval between the appearance of the emotion words and the response. After the participant’s response, the picture disappeared, a blank screen was shown for 100 ms, and the next trial began. Figure 2 displays the time course of a trial.

To investigate imitation, the BERT was presented in an imitation and a watch condition. In the imitation condition, participants were instructed to move their facial muscles like the person in the photograph; the term “imitation” was not mentioned, to mask the hypothesis. In the watch condition, participants were instructed to just watch the person in the picture. Each condition consisted of 23 different pictures randomly drawn from the BERT picture pool; due to a technical error, only 46 of the 48 pictures were drawn for each participant. The order of the blocks was randomized.

Autistic traits

To assess autistic traits in both groups and to screen for ASC in the neurotypical group, the Autism-Spectrum Quotient (AQ) [38] was administered in its German version [39]. The AQ is a 50-item self-report questionnaire assessing different areas of behaviour and attitudes associated with autism spectrum conditions, such as social and communication skills, imagination and attention. On a 4-point scale, participants indicate how strongly they agree or disagree with a statement. Every slight or strong agreement with an autistic behaviour adds one point to the total score, and a score of 32 or above is seen as an indicator of potentially clinically significant autistic traits. The AQ has been shown to have good test–retest reliability and inter-rater reliability [38] as well as good discriminative validity and screening properties in clinical practice [40].
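
For illustration, the scoring rule just described can be sketched as follows; the response coding (1 = definitely agree … 4 = definitely disagree) and the item keying set are assumptions for the example, not taken from the questionnaire manual.

```python
# Minimal AQ scoring sketch (hypothetical data layout).
# Assumes answers coded 1 = "definitely agree" ... 4 = "definitely disagree",
# and that `autism_keyed` lists the items where agreement indicates an
# autistic trait; the remaining items are reverse-keyed.

def score_aq(responses: dict[int, int], autism_keyed: set[int]) -> int:
    """Return the AQ total: one point per item endorsed in the autistic direction."""
    total = 0
    for item, answer in responses.items():
        agrees = answer in (1, 2)            # slight or strong agreement
        keyed_by_agreement = item in autism_keyed
        if agrees == keyed_by_agreement:     # endorsement in the autistic direction
            total += 1
    return total
```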

Automatic analysis of facial imitation behaviour

We chose a sign-based approach to measure participants’ facial expressions. Sign-based approaches are descriptive; they classify the visual input into abstract facial movements described by their location and intensity. As a coding system for these movements, the Facial Action Coding System (FACS [41]) is widely used in behavioural science and in automatic facial expression analysis. It breaks down facial expressions into 44 observable muscle movements, called action units (AU).

A major advantage of sign-based approaches is their objectivity, as they do not involve interpretation [27]. Moreover, they do not reduce the complex emotional facial expression of a person to a small set of abstract prototypical emotional expressions [42]. Finally, sign-based approaches preserve more dynamic information, such as the time point, duration and amplitude of an action [43]. This is crucial, as humans are very sensitive to the timing of facial actions [44].

We employed the OpenFace 2.0 toolkit [45] to extract facial action units from the video recordings of the participants’ faces. OpenFace is an open-source tool capable of facial-landmark detection, head-pose estimation, facial-action-unit recognition and eye-gaze estimation. OpenFace 2.0 was trained on video data of people responding to an emotion-elicitation task, which corresponds to the conditions under which the BERT stimuli were recorded. Furthermore, it allows correcting for person-specific neutral expressions. OpenFace 2.0 has been tested on several emotion video data sets and demonstrated state-of-the-art results [45, 46].

OpenFace extracts the intensity (on a scale from 0 to 5) and the presence of 18 action units (AUs) from each video frame (except for AU28, for which only presence is analysed). An overview of the AUs that can be detected by OpenFace is provided in Table 2 in “Appendix”.
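
As an illustration, frame-wise AU intensities can be read from the OpenFace output CSV along the following lines; the intensity columns (`AU01_r`, …) and the `success` flag follow OpenFace 2.0’s output conventions, while the file path is hypothetical.

```python
import pandas as pd

# Per-frame OpenFace 2.0 output for one participant video (hypothetical path).
# OpenFace writes column headers with leading spaces, hence skipinitialspace.
df = pd.read_csv("processed/participant_01.csv", skipinitialspace=True)

# Keep only frames in which the face was tracked successfully.
df = df[df["success"] == 1]

# Intensity columns are named AU01_r, AU02_r, ... (0-5 scale); presence
# columns end in _c (AU28 has a presence column only).
au_cols = [c for c in df.columns if c.startswith("AU") and c.endswith("_r")]
au_intensities = df[au_cols].to_numpy()  # rows = frames, columns = action units
```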

To control for idiosyncrasies in participants’ expressions and their reactions to faces in general, we performed a baseline correction. For each trial of each individual, we calculated the mean activity of each action unit during the baseline phase (presentation of a neutral face). Next, for each trial, we subtracted this baseline activity from the activity of each action unit in each frame.
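
A minimal sketch of this trial-wise baseline correction, assuming the frame-wise AU intensities of each trial have already been split into a baseline phase and a stimulus phase (array names are hypothetical):

```python
import numpy as np

def baseline_correct(trial_frames: np.ndarray, baseline_frames: np.ndarray) -> np.ndarray:
    """Subtract the mean baseline AU activity from every frame of a trial.

    trial_frames:    (n_frames, n_aus) AU intensities during the emotional picture
    baseline_frames: (n_baseline_frames, n_aus) AU intensities during the neutral face
    """
    baseline_mean = baseline_frames.mean(axis=0)  # per-AU mean over the 1500 ms baseline
    return trial_frames - baseline_mean           # broadcast over all frames
```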

Measures of imitation

To assess the amount as well as the precision of the participant’s imitation (see Fig. 3 for a conceptual overview), we used two approaches: an imitation imprecision score (IIS) and cosine similarity measures (see Fig. 4).

Fig. 3
figure 3

Automated facial analysis of imitation. At the left, the automated facial analysis of the stimulus material is displayed. At the right, the automated facial analysis of a neurotypical subject (*represented by the experimenter) in the imitation condition is displayed. Both measures are combined to analyze the imitation as described in the method section

Fig. 4
figure 4

Calculation of cosine similarity between participant’s and actor’s vector

The imitation imprecision score (IIS) indicates the absolute deviation of the participant’s facial expressions from the facial expressions displayed by the actors. It thus takes into account all AUs of the facial expression; lower IIS scores indicate a higher imitation precision. The IIS was calculated for each subject in two steps: first, we averaged each AU’s activity over frames and summed the absolute deviations from the actor’s AU intensities over all AUs (Eq. 1); then we averaged these per-picture scores over pictures (Eq. 2).

$$\mathrm{IIS}_{ps} = \sum_{i=1}^{a} \left| \frac{\sum_{f=1}^{m_{ps}} x_{if}}{m_{ps}} - \left(x_{ip}\right)_{\mathrm{act}} \right| \tag{1}$$

IIS_ps, imitation imprecision score for picture p and subject s; a, total number of tracked action units; m_ps, total number of frames for picture p and subject s; x_if, intensity value of AU_i in frame f; x_ip, intensity value of AU_i shown in picture p by the actor (act).

$$\mathrm{IIS}_{s} = \frac{\sum_{p=1}^{n} \mathrm{IIS}_{ps}}{n} \tag{2}$$

IIS_s, action-unit-based imitation measure for each subject s, averaged across n pictures; n, total number of pictures in a condition.
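
A sketch of Eqs. 1 and 2 in code, assuming baseline-corrected per-frame participant AU intensities and one actor AU vector per picture (the data structures are hypothetical):

```python
import numpy as np

def iis_per_picture(participant_frames: np.ndarray, actor_aus: np.ndarray) -> float:
    """Eq. 1: sum over AUs of |mean participant AU activity - actor AU intensity|."""
    mean_activity = participant_frames.mean(axis=0)  # average each AU over frames
    return float(np.abs(mean_activity - actor_aus).sum())

def iis_per_subject(trials: list[tuple[np.ndarray, np.ndarray]]) -> float:
    """Eq. 2: average the per-picture scores over all n pictures of a condition."""
    return float(np.mean([iis_per_picture(frames, actor) for frames, actor in trials]))
```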

$$\hat{\mu}\left(\cos(\theta)_{s}\right) = \frac{1}{m} \sum_{t=1}^{m} \frac{\sum_{i=1}^{n} P_{ti} \cdot A_{i}}{\sqrt{\sum_{i=1}^{n} P_{ti}^{2}} \cdot \sqrt{\sum_{i=1}^{n} A_{i}^{2}}} \tag{3}$$

cos(θ)_s, cosine similarity averaged over all frames of a trial; P_ti, participant’s intensity of action unit i at time point t; A_i, intensity value of the actor’s action unit i; t, time point of imitation (frame); n, total number of action units; m, total number of frames of a trial.

For each frame, we calculated the cosine similarity of the actor’s and the participant’s vector, which indicates whether the vectors point in the same direction, i.e. whether the expressions are similar (with 1 as the highest possible value). We analyzed both the average and the maximum cosine similarity of each trial for each participant. For the averaged cosine similarity, we first calculated the mean over all frames of a trial and then averaged over all trials of a participant. For the maximum cosine similarity, we took the maximum over all frames of a trial and then averaged these maxima over all trials of a participant.

To analyze the intensity of the imitation, we calculated the ratio of the lengths of the participant’s and the actor’s vectors at the time point of highest cosine similarity.

To analyze the speed of the imitation, we measured the time point (i.e. frame number) of maximum cosine similarity for each imitation of a participant. These 23 values were averaged for each participant.
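
The similarity, intensity and speed measures of a single trial could be derived along these lines (a sketch under the same assumptions as above; names are hypothetical):

```python
import numpy as np

def imitation_measures(trial_frames: np.ndarray, actor_aus: np.ndarray) -> dict:
    """Cosine-similarity-based measures of one trial (cf. Eq. 3 and text).

    trial_frames: (n_frames, n_aus) baseline-corrected participant AU intensities
    actor_aus:    (n_aus,) AU intensities of the actor's expression
    """
    # Frame-wise cosine similarity between participant and actor AU vectors.
    sims = (trial_frames @ actor_aus) / (
        np.linalg.norm(trial_frames, axis=1) * np.linalg.norm(actor_aus)
    )
    peak = int(sims.argmax())  # frame of the most similar expression
    return {
        "mean_similarity": float(sims.mean()),  # averaged cosine similarity
        "max_similarity": float(sims.max()),    # most similar expression
        # Intensity: ratio of vector lengths at the moment of highest similarity.
        "intensity": float(np.linalg.norm(trial_frames[peak]) / np.linalg.norm(actor_aus)),
        # Speed: frame number of the similarity peak (later averaged over trials).
        "speed": peak,
    }
```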

Statistical analysis

In general, we used a significance level of p < 0.05. However, as we compared the imitation performance of the groups on four different measures (imprecision, similarity, intensity and speed), we Bonferroni-corrected the level to α* = 0.0125 for these analyses. Data were analyzed using Python and R. In cases in which we found evidence for a strong violation of the normality assumption, we used medians and nonparametric statistical tests, indicated by the respective test statistics. Otherwise, we used means in combination with parametric tests.
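
The normality-dependent switch between parametric and nonparametric tests could look as follows (a sketch; the paper does not specify the normality check or the exact test variants used):

```python
from scipy import stats

ALPHA_CORRECTED = 0.05 / 4  # Bonferroni correction for the four imitation measures

def compare_groups(asc_scores, nt_scores):
    """Two-sample t-test if both samples look normal, Mann-Whitney U otherwise."""
    normal = (stats.shapiro(asc_scores).pvalue > 0.05
              and stats.shapiro(nt_scores).pvalue > 0.05)
    if normal:
        return stats.ttest_ind(asc_scores, nt_scores)
    return stats.mannwhitneyu(asc_scores, nt_scores, alternative="two-sided")
```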

Results

Demographics

The groups did not differ significantly with respect to age [t(78) = 1.77, p = 0.08], gender [χ2(1, N = 80) = 1.10, p = 0.29], education (W = 765.5, p = 0.76) or verbal IQ [t(75) = 1.32, p = 0.19], as assessed through a German vocabulary test (Wortschatztest (WST), Schmidt and Metzler [33]). For an overview of the demographic and diagnostic information of both groups, see Table 1.

As expected, the groups differed significantly regarding the AQ, with the mean AQ score being significantly higher in the ASC group than in the neurotypical group [ASC: M = 37.54, SD = 5.74; control group: M = 14.49, SD = 5.82; t(74) = 17.33, p < 0.001]. No participants from the neurotypical group scored above the cut-off score of 32.

Emotion recognition

Reliability and item analysis of BERT

Internal consistency of the BERT was assessed with Cronbach’s alpha and McDonald’s omega. Item difficulty was defined as the proportion of correct responses across all participants. The following satisfactory results were obtained: n = 80, Cronbach’s α = 0.74, McDonald’s ω_T = 0.75, mean item difficulty = 0.77, range = 0.36–0.99.
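
For reference, both statistics can be computed directly from the binary response matrix (a sketch; the matrix layout is an assumption):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_participants, n_items) matrix of 0/1 responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of single-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total scores
    return k / (k - 1) * (1 - item_vars / total_var)

def item_difficulty(items: np.ndarray) -> np.ndarray:
    """Proportion of correct responses per item."""
    return items.mean(axis=0)
```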

Effects of autism diagnosis

Across both conditions of the emotion classification task, the NT group showed a higher percentage of correct emotion classifications than the ASC group [NT: 79%; ASC: 73%; t(78) = 2.96, p = 0.004, d = 0.67] and faster responses (ASC: 4100 ms, NT: 2832 ms; z = 395, p < 0.001, r = 44.16). We calculated two mixed-effects regression models of participants’ emotion recognition abilities. The first model predicted the percentage of correct responses with group and imitation instruction as fixed effects and a random intercept for each participant. The second model predicted the reaction times of correct responses with the same fixed effects and random intercept. Both models supported lower emotion recognition in the individuals with ASC, for correct responses (β = −0.054; 95% CI [−0.093, −0.015]; z = −2.73, p = 0.006) as well as for reaction times (β = 0.34; 95% CI [0.201, 0.470]; z = 4.89, p < 0.001).

Facial imitation

Four participants who imitated the facial expression either never or always, irrespective of the imitation condition, were excluded. Thus, the analysis of imitation effects was calculated on 41 neurotypical subjects and 35 individuals with ASC. Further, for the analysis of the facial expression movements, four additional participants were excluded, as tracking in these cases was flawed. The resulting sample sizes were 39 neurotypical individuals and 33 individuals with ASC. We used linear mixed-effects models to control for individual differences and deal with missing values. We built a model with the fixed factors autism diagnosis, imitation instruction and their interaction. Additionally, we modelled a random intercept and a random slope for each participant to control for individual baseline differences and differences in their reaction to the imitation instruction. We aggregated the data for each participant separately for the two conditions.
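
Such a model can be specified, for example, with statsmodels (a sketch with hypothetical file and column names; the paper reports using Python and R but does not name the modelling package):

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant and condition, with hypothetical columns:
# participant, group (ASC/NT), condition (imitate/watch), similarity.
df = pd.read_csv("aggregated_imitation_measures.csv")  # hypothetical file

model = smf.mixedlm(
    "similarity ~ group * condition",  # fixed effects incl. interaction
    data=df,
    groups="participant",              # random intercept per participant
    re_formula="~condition",           # random slope for the instruction
)
print(model.fit().summary())
```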

Due to the novel use of computational approaches in facial expression analysis and the lack of literature, it is unclear whether such approaches are sensitive enough to detect even very small muscle movements, as occur in spontaneous mimicry. Thus, for this study, we focused on the facial expressions in the instructed imitation condition only, for which we expected marked facial movements.

We focused on four measures of imitation performance: the imprecision (IIS, absolute difference between actor’s and participant’s AUs), the similarity of their expressions (cosine similarity of their AU vectors), the intensity of the most similar expression (ratio of the vectors’ lengths at the time point of highest cosine similarity) and the speed of the imitation. First, we checked whether both groups were able to imitate, i.e. showed a higher mean and maximum similarity in the imitation condition than in the watch condition. Second, we compared individuals with and without autism on all imitation measures. Third, for the individuals with autism, we analyzed whether these measures were associated with their level of symptoms in the social domain, indicated by the respective ADOS subscore.

Comparison of imitation and watch condition (separated by groups)

Cosine similarity of imitation in NT individuals

The neurotypical participants successfully imitated the expressions. When instructed to imitate, they displayed expressions that were significantly more similar to the target expression (measured by cosine similarity averaged across time and all pictures) than when they just watched the expressions [difference: M = 0.063, 95% CI [0.06, 0.10], t(38) = 8.57, p < 0.001, d = 1.39], as well as compared to the baseline condition [difference: M = 0.38, 95% CI [0.37, 0.39], t(38) = , p < 0.001, d = 10.44]. In line with this finding, the maximum cosine similarity, i.e. the most similar expression during the complete trial, was also higher in the imitation condition (difference: Mdn = 0.076, Z = 71, p < 0.001).

Cosine similarity of imitation in individuals with ASC

Individuals with autism also imitated the presented facial expressions when instructed to. In the imitation condition, they displayed expressions that were significantly more similar to the target expression (measured by cosine similarity averaged across time and all pictures) than in the watch condition (difference: M = 0.07, 95% CI [0.05, 0.09], t(32) = 8.65, p < 0.001, d = 1.53), as well as compared to the baseline condition (difference: M = 0.38, 95% CI [0.36, 0.40], t(32) = , p < 0.001, d = 8.14). In line with this finding, the maximum cosine similarity, i.e. the most similar expression during the complete trial, was also higher in the imitation condition (difference: Mdn = 0.086, Z = 36, p < 0.001).

Group comparison of individuals with and without ASC (imitation condition)

Similarity of imitation

In accordance with the intact imitation found in both groups, we found no evidence that individuals with autism showed less similar expressions, averaged over all pictures, than neurotypical individuals, neither regarding the mean similarity [M_ASC = 0.38, M_NT = 0.38, t(70) = 0.14, p = 0.890, d = 0.033] nor the maximum expressed similarity (Mdn_ASC = 0.71, Mdn_NT = 0.69, Z = 585, p = 0.256).

Intensity of imitation

Further, we found no evidence that individuals with autism showed a lower intensity of the maximally similar expression (M_ASC = 0.85, M_NT = 0.77, t(70) = 1.45, p = 0.153, d = 0.347). Figure 5a shows the intensity of the most similar expression averaged over trials for each participant, separated by emotion categories, and Fig. 5b shows the same measure averaged for both groups, separated by emotion categories.

Fig. 5: a Comparison of intensity of most similar expression in the imitation condition, separated by emotions. b Comparison of intensity of most similar expression in the imitation condition for neurotypical individuals (left) and individuals with autism (right), separated by emotions

Imprecision of imitation

We measured the precision of the imitated expression by calculating the difference from the original stimuli (IIS_s). The facial expressions of individuals with autism differed from the actors’ expressions (Mdn = 8.90) significantly more than those of the neurotypical individuals (Mdn = 8.50), U = 444, p = 0.012, d = 0.3.

Post hoc: variance of imitation in individuals with ASC

Post hoc, we compared the variance of the imitation measures between the groups. There was significantly more variance regarding the intensity of the imitation in the group of individuals with autism than in the neurotypical group (F = 4.63, p = 0.035). The distribution of imitation intensity can be seen in Fig. 6. Further, there was a tendency towards more variance regarding the maximum similarity (p = 0.096).
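
A variance comparison yielding an F statistic, as reported here, can be computed as a two-sided F-test of the sample variances (a sketch; the paper does not state the exact procedure, and Levene’s test would be a more robust alternative):

```python
import numpy as np
from scipy import stats

def variance_f_test(a, b):
    """Two-sided F-test comparing the variances of two independent samples."""
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    df1, df2 = len(a) - 1, len(b) - 1
    p = 2 * min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2))
    return f, p

# Robust alternative without the normality assumption: stats.levene(a, b)
```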

Fig. 6: Comparison of intensity of most similar expression in the imitation condition for neurotypical individuals (left) and individuals with autism (right)

Timing of imitation in autism

To analyze the speed of the imitation, we measured the time point (i.e. frame number) of maximum cosine similarity. Individuals with autism needed significantly more time (M = 102.56, 95% CI [96.17, 108.95]) than neurotypical individuals (M = 88.90, 95% CI [83.95, 93.84]) for the imitation, t(70) = −3.48, p < 0.001, d = 0.84. In general, the speed of the imitation scaled positively with the speed of an individual’s emotion recognition (r = 0.388, p < 0.001). Participants’ average times to maximum imitation are shown in Fig. 7 for both groups.

Fig. 7: Comparison of time point (i.e. number of frames) of maxima of relevant action units during imitation for neurotypical individuals (left) and individuals with autism (right)

Dimensional relationship of social ADOS and imitation

Severity of autism social symptomatology (social ADOS) was positively associated with the maximum intensity of the imitation (r = 0.445, p = 0.009) and negatively associated with the similarity of imitation and target expression (r = −0.477, p = 0.005) as well as with the maximum similarity (r = −0.512, p = 0.0023). Further, severity of social autism symptomatology correlated positively with the imprecision of the imitation (IIS; r = 0.357, p = 0.041) in autistic individuals; this association, however, did not survive Bonferroni correction.

Emotion recognition and imitation

Effects of instructed imitation on emotion recognition

Imitation influenced emotion recognition performance negatively: under the imitation instruction, both participant groups showed lower rates of correct responses (β = −0.03; 95% CI [−0.061, −0.001]; z = −2.03, p = 0.042) and needed more time for recognition (β = 0.05; 95% CI [0.015, 0.087]; z = 2.79, p = 0.005). There was no evidence for an interaction effect of autism and imitation when this interaction was included as an additional factor (correctness: p = 0.814; response time: p = 0.379).

Effects of amount of imitation on emotion recognition

To further investigate the relationship between imitation of an expression and emotion recognition performance, we calculated a separate model for the imitation condition. For the precision of the imitation, we calculated a regression model that controlled for group as well as for the interaction: an ordinary least squares regression predicting the percentage of correct answers from the IIS and autism diagnosis. The accuracy of emotion recognition scaled negatively with an individual’s imprecision of imitation (β = −0.039; 95% CI [−0.068, −0.010]; z = −2.67, p = 0.009). We found no effect on the speed of correct answers (β = 75.90, p = 0.706). Further, we found no relationship between emotion recognition and the cosine similarity measures (all p > 0.05).
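
A sketch of that regression with statsmodels, using hypothetical file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant (imitation condition only), with hypothetical
# columns: accuracy (proportion correct), iis, group (ASC/NT).
imitation_df = pd.read_csv("imitation_condition.csv")  # hypothetical file

# Percentage of correct answers predicted from imitation imprecision,
# diagnosis group and their interaction.
ols = smf.ols("accuracy ~ iis * group", data=imitation_df).fit()
print(ols.summary())
```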

Discussion

Based on a large sample and computer-based facial analysis, we measured quantitative and qualitative differences in facial imitation as well as emotion recognition between individuals with and without autism. Both groups showed intact imitation of facial expressions when they were instructed to imitate. However, the group of individuals with autism differed from neurotypical individuals regarding the speed and precision of the imitation: their voluntary imitation was on average slower and less precise. A separate analysis of the imitation’s intensity and its similarity with the actor’s facial expression revealed an association between these measures and the severity of the social deficits in the individuals with autism. The more affected individuals expressed a less similar but more intense imitation.

On average, individuals with autism recognized fewer emotions correctly and were slower in imitating them than neurotypical individuals. While the effect was small for recognition accuracy, it was of greater magnitude for recognition time. For both groups, the instruction to imitate the emotional expressions was associated with decreased performance in emotion recognition compared to the watch condition. However, the precision of the imitation was positively associated with recognition performance across groups.

Group differences in emotion recognition

We replicated previously reported difficulties of individuals with autism in recognizing emotions from facial expressions (e.g. [36, 47, 48]). Using expressions of basic emotions at varying intensities, which were designed to be difficult to interpret and thus more sensitive, we were able to show that even high-functioning adults with autism have difficulties recognizing them, in that they need more time and make slightly more errors. These results are consistent with previous studies on emotion recognition in individuals with ASC that report differences in the recognition of briefly presented stimuli [49, 50]. In real life, emotions often occur briefly and in a subtle form comparable to our stimulus material, which might, at least partially, explain the social difficulties of high-functioning individuals with autism in daily life [15].

Group differences in facial imitation

In accordance with similar studies, we found that individuals with autism were on average capable of imitating facial expressions when instructed (e.g. [18, 19, 23]).

Using computer-based face analysis, we could also measure and quantify qualitative differences in imitation, especially in dependence of the level of autism symptoms in the social domain. Most studies so far have focused on the occurrence of facial imitation and ignored qualitative differences. Preliminary descriptions of such qualitative differences exist, e.g. Loveland et al. [22] stated that “the responses of subjects with autism contained many unusual behaviours, such as bizarre expressions and those that looked ‘mechanical’”. Another study, which measured the spontaneous and instructed imitation of facial expression in children with autism, pointed to an altered time course of the facial expressions in individuals with ASC, but only for spontaneous imitation [23].

We replicated this timing effect for voluntary imitation: individuals with autism needed on average more time than neurotypical individuals to imitate facial expressions. As dynamic properties of facial expressions (e.g. time point of maximal expression, duration, etc.) play an important role in perceived genuineness [51], the group differences in timing might partly underlie the social interaction problems that individuals with autism experience. Further, imitation speed was associated with recognition speed. This might point to the importance of imitation for emotion recognition; however, it could also reflect generally slower processing times underlying both imitation and recognition. Although intelligence might represent a further factor underlying this association, it is less likely to play a role here, given that our participant groups were matched for IQ. In addition to the timing of facial expressions, individuals with autism differed on average from neurotypical individuals regarding the precision of their imitation of emotional expressions. This matches a finding by Brewer et al. [52], who reported that posed facial expressions of emotion produced by individuals with autism were recognized less accurately than those of neurotypical individuals, by observers both with and without autism.

Further, the imitation quality of the individuals with autism was associated with their ADOS social subscores, evident in three different measures of imitation performance (similarity, intensity and, marginally, imprecision). In accordance with this finding, Yoshimura et al. [53] showed an association between the extent of facial imitation and social functioning. Our finding of a negative association between imitation performance and severity of autism also partially resonates with a study by Faso and colleagues [54]. The authors compared posed and evoked facial expressions of adults with and without ASC; naive observers rated the expressions of individuals with ASC as more intense and less natural. However, we did not replicate this difference regarding similarity and intensity of the imitation at the group level, presumably because of a less affected patient group. A post hoc analysis supports this interpretation, as there was more variance in the group of individuals with autism than in the neurotypical group regarding the intensity of their imitation. Second, the negative association between imitation performance and social symptomatology, in the absence of a group difference from the neurotypical individuals, might be explained by the heterogeneity within the autism population, especially as some individuals predominantly show impairments in only one of the two domains, either the social communication and interaction domain or the domain of repetitive behaviour [55]. Thus, a clear group difference regarding imitation performance might only become evident if individuals with social deficits are compared with neurotypical participants. This interpretation is in accordance with the results of a recent study by Zane et al. [56], which compared facial expressions of neurotypical individuals and individuals with autism in an instructed imitation of emotional expressions task. Similar to our study, the authors found more variance regarding the intensity of facial expressions in the group of individuals with autism than in the neurotypical group.

The difficulties of, especially more severely affected, individuals with ASC in generating facial expressions might be associated with their lower tendency to engage in impression management, such as displaying social laughter [57,58,59]. One possible reason might be a reduced social motivation of individuals with ASC [60]; another might be a reduced ability to finetune one’s facial expressions. The second explanation corroborates recent evidence that individuals with ASC show less precise imitation of hand movements [61, 62]. These findings favour the assumption that individuals with ASC have difficulties in the finetuning of imitation [63] rather than an inability to imitate [64].

Our findings also match, at least partially, the summary of a recent meta-analysis [24] investigating facial expression production in autism. Trevisan and colleagues concluded that participants with ASC display facial expressions less frequently, for shorter durations and less accurately. Further, they stated that individuals with ASC express emotions neither less intensely nor more slowly. As explained above, the null effect regarding intensity might be explained by not considering the level of social impairment. In general, the comparison of our results with this meta-analysis should be treated with caution, as the meta-analysis covers a large number of very different studies, including some on spontaneous expressions, mimicry and verbally prompted posing of facial expressions.

Relationship of imitation and recognition of facial emotions

Comparing the imitation condition to the watch condition revealed a negative effect of the instruction to imitate on emotion recognition performance across groups. This finding is consistent with that of Kulesza et al. [65], who asked healthy participants to recognize basic emotional facial expressions of an actress and found that participants who were instructed to imitate the expressions recognized fewer facial displays of the emotions than participants who were instructed to inhibit spontaneous imitation. In accordance with these findings, in a study with healthy individuals by Stel et al. [66], mimicking the facial and behavioural movements of an interaction partner reduced another aspect of emotional understanding, i.e. detecting whether the partner was lying.

That being said, those results cannot rule out the possibility that imitation does foster emotion recognition after all. For example, another possible reason for the negative effect of imitation on the accuracy of emotion recognition in our design is additional cognitive load: controlling the facial muscles might absorb cognitive resources in the imitation condition and thereby worsen emotion recognition. In line with this interpretation are the results of a study by Lewis et al. [67]. The participants in this study performed an emotion recognition task twice, and half of the participants had to imitate the facial expressions in the second round. As both groups recognized more emotions in the second round, it can be assumed that the participants’ cognitive load for the emotion recognition task itself was reduced in the second round. While mimicking did not help performance at the baseline test, the increase in performance in the second round was significantly higher for the mimickers. It seems plausible that only the lower cognitive load in the repeated condition allowed mimicry to take effect. Thus, individuals’ emotion recognition might benefit from imitation if the imitation does not involve much extra cognitive load. In our design, all stimuli were presented only once, resulting in two equally difficult conditions, which might overshadow any possible positive effect of imitation.

That beneficial effects of imitation on emotion recognition might indeed exist is indicated by our finding that the intensity as well as the precision of imitation was positively associated with emotion recognition performance across the whole group of participants. However, given the correlational nature of this finding, the interpretation warrants caution: recognition of an emotion might instead facilitate its imitation, or the severity of autism social symptoms might act as a confounding variable.

Most studies investigating facial expressions in autism suffer from low statistical power and might be biased by the low intellectual level or age of the participants (for an overview, see [24]). We collected a study sample of 80 individuals, including 37 individuals with autism without intellectual impairment, and ensured a balanced proportion of male and female participants. A further strength of this study is its unobtrusive measurement of facial expressions, which is particularly relevant for individuals with autism and allowed us to study a large sample of this population.

Limitations

As our study investigated adults with autism spectrum conditions without intellectual impairment, we do not know whether our findings hold for children with autism or for adults with intellectual impairment. A further limitation of this study is its unknown sensitivity to non-observable imitation, as OpenFace only assesses muscle movements detectable by camera, whereas EMG can assess very subtle muscle movements [68]. However, OpenFace 2.0 and its precursor, the OpenFace toolkit, have shown their usefulness in studies aiming to detect suicidal ideation [69], psychotic symptoms [70] and autism [28] based on facial expressions, which speaks for their general sensitivity. A further potential limitation of our study is that we cannot rule out that participants moved their facial muscles voluntarily in the watch condition. However, the negative results for imitative behaviour in that condition speak against this having occurred. Additionally, individuals with gross voluntary movement during the watch condition were excluded based on the individual screening of all video recordings.

In our analysis, we first applied a general baseline correction to account for each participant’s general facial expression. Second, we calculated a trial-wise baseline correction to measure the participant’s imitation relative to their reaction to the actor’s neutral face. As a result, our imitation measures are measures of change and movement of a person’s face. This baseline correction implies, however, that a person who shows a certain emotional expression during the neutral phases of the experiment (e.g. because she feels anxious throughout the experiment) might receive a lower imitation score for showing an emotional expression similar to that of the person to be imitated. However, we are not interested in the absolute facial expression but in the change relative to someone’s neutral face. This change, from a neutral baseline expression to a more emotion-specific expression, occurred significantly in both groups, evident as a higher cosine similarity averaged across all six basic emotions and participants. Due to a technical error, not all participants saw the same stimuli. However, as the differences were very small and random, we do not assume that this affected our results.

Aiming at a voluntary imitation condition that would be as clearly defined as possible while not necessitating explicit emotion processing, we asked the participants to “move their facial muscles like the person in the photo”. We avoided mentioning the term “imitation”, as it might activate popular-science beliefs about imitation and its effects on emotion recognition. However, people might scan faces differently if the instruction creates an explicit focus on the muscles rather than the emotion, e.g. by looking less at the eyes and more at other parts of the face. It is also possible that the NT and ASC groups responded to this instruction differently, with the ASC group potentially focusing more literally on the muscles rather than the holistic emotion expression. Further studies including eye-tracking should elucidate this aspect, as well as the process of imitation, in a more fine-grained manner.

Aiming for high standardization, we explicitly asked participants to imitate a static expression displayed in a photograph for a specific time, instead of collecting facial imitation in the wild. The aim of the study was to investigate the general ability of individuals with autism to imitate facial expressions if they are instructed to. In social interactions, emotions are sometimes expressed voluntarily to produce a certain impression or to present oneself in a socially desirable way [71]. However, the results need to be interpreted with caution, as it is not clear whether people would behave differently in the real world, e.g. due to different contexts, additional load, the dynamics of facial expressions or social motivations. Further, it has been shown that voluntary imitation relies on different underlying processes than spontaneous imitation [72]. Thus, it would be of great value to conduct a similar study in a real-world setting to see whether the results generalize. In such an experiment, computer-based measures may enable an unobtrusive measurement of facial expression imitation. Still, as previous research has often claimed that voluntary facial imitation is not affected in individuals with autism [18], we consider it important to elucidate these differences in our work.

Indeed it is important to bear in mind that our understanding of how facial expressions are used in the real world is still very limited [73]. Further research is needed to better understand how people move their faces in different contexts of everyday life and how they use their facial movements to transfer social information.

Finally, and importantly, the positive relationship between emotion recognition and imitation extent and precision could only be shown as a correlation. Further studies are needed to investigate the causal direction of this relationship.

Conclusions

To the best of our knowledge, this is the first study that successfully used computer-based analysis to measure facial expression in an imitation context. This unobtrusive and affordable method allowed us to measure qualitative differences in facial expressions between neurotypical individuals and individuals with autism. Using the newly developed sensitive emotion recognition task BERT, we were able to replicate the emotion recognition deficit in individuals with autism and provided some evidence for a positive association of imitation performance and the recognition of emotions.

Further research should explore facial expressions in social interactions with active and passive roles of the participants (expressing and recognizing emotions) to exclude the artificial load of the instruction to express an emotion in imitation paradigms. More broadly, research is also needed to determine the potential of training imitation as a possible mechanism to enhance emotion recognition. While imitation does not seem to help emotion recognition immediately (likely due to additional task demands), training imitation precision via instruction might enhance spontaneous imitation and thereby foster emotion recognition.

Availability of data and materials

The datasets generated and analyzed during the current study are not publicly available due to privacy restrictions of the video data but are available from the corresponding author on reasonable request. The BERT [34] is available online under GNU General Public License Version 3.0 from http://www.hannadrimalla.de/bert.html.

Abbreviations

ADI-R:

Autism Diagnostic Interview-Revised

ADOS-2:

Autism Diagnostic Observation Schedule 2

AQ:

Autism Spectrum Quotient

ASC:

Autism Spectrum Condition

AU:

Action Unit

BERT:

Berlin Emotion Recognition Test

IIS:

Imitation Imprecision Score

WST:

Wortschatztest

References

  1. Ekman P. Facial expression and emotion. Am Psychol. 1993;48:384.

  2. Frith CD, Frith U. Social cognition in humans. Curr Biol. 2007;17:R724–32. https://doi.org/10.1016/j.cub.2007.05.068.

  3. World Health Organization. International Classification of Diseases 11th Revision. 2018. http://id.who.int/icd/entity/437815624.

  4. Dimberg U. Facial reactions to facial expressions. Psychophysiology. 1982;19:643–7.

  5. Wood A, Rychlowska M, Korb S, Niedenthal P. Fashioning the face: sensorimotor simulation contributes to facial expression recognition. Trends Cogn Sci. 2016;20:227–40. https://doi.org/10.1016/j.tics.2015.12.010.

  6. Stel M, van Knippenberg A. The role of facial mimicry in the recognition of affect. Psychol Sci. 2008;19:984–5. https://doi.org/10.1111/j.1467-9280.2008.02188.x.

  7. Niedenthal PM, Brauer M, Halberstadt JB, Innes-Ker ÅH. When did her smile drop? Facial mimicry and the influences of emotional state on the detection of change in emotional expression. Cogn Emot. 2001;15:853–64. https://doi.org/10.1080/02699930143000194.

  8. Oberman LM, Winkielman P, Ramachandran VS. Face to face: Blocking facial mimicry can selectively impair recognition of emotional expressions. Soc Neurosci. 2007;2:167–78. https://doi.org/10.1080/17470910701391943.

  9. Rychlowska M, Cañadas E, Wood A, Krumhuber EG, Fischer A, Niedenthal PM. Blocking mimicry makes true and false smiles look the same. PLoS ONE. 2014;9:e90876. https://doi.org/10.1371/journal.pone.0090876.

  10. Gena A, Couloura S, Kymissis E. Modifying the affective behavior of preschoolers with autism using in-vivo or video modeling and reinforcement contingencies. J Autism Dev Disord. 2005;35:545–56.

  11. Russo-Ponsaran NM, Evans-Smith B, Johnson J, Russo J, McKown C. Efficacy of a facial emotion training program for children and adolescents with autism spectrum disorders. J Nonverbal Behav. 2016;40:13–38.

  12. Charlop MH, Dennis B, Carpenter MH, Greenberg AL. Teaching socially expressive behaviors to children with autism through video modeling. Educ Treat Child. 2010;33:371–93.

  13. Uljarevic M, Hamilton A. Recognition of emotions in autism: a formal meta-analysis. J Autism Dev Disord. 2013;43:1517–26. https://doi.org/10.1007/s10803-012-1695-5.

  14. Lozier LM, Vanmeter JW, Marsh AA. Impairments in facial affect recognition associated with autism spectrum disorders: a meta-analysis. Dev Psychopathol. 2014;26:933–45. https://doi.org/10.1017/S0954579414000479.

  15. Harms MB, Martin A, Wallace GL. Facial emotion recognition in autism spectrum disorders: a review of behavioral and neuroimaging studies. Neuropsychol Rev. 2010;20:290–322. https://doi.org/10.1007/s11065-010-9138-6.

  16. Carpenter LA, Soorya L, Halpern D. Asperger’s syndrome and high-functioning autism. Pediatr Ann. 2009;38:30–5. https://doi.org/10.3928/00904481-20090101-01.

  17. Hermans EJ, van Wingen G, Bos PA, Putman P, van Honk J. Reduced spontaneous facial mimicry in women with autistic traits. Biol Psychol. 2009;80:348–53. https://doi.org/10.1016/j.biopsycho.2008.12.002.

  18. McIntosh DN, Reichmann-Decker A, Winkielman P, Wilbarger JL. When the social mirror breaks: deficits in automatic, but not voluntary, mimicry of emotional facial expressions in autism. Dev Sci. 2006;9:295–302. https://doi.org/10.1111/j.1467-7687.2006.00492.x.

  19. Press C, Richardson D, Bird G. Intact imitation of emotional facial actions in autism spectrum conditions. Neuropsychologia. 2010;48:3291–7. https://doi.org/10.1016/j.neuropsychologia.2010.07.012.

  20. Schulte-Rüther M, Otte E, Adigüzel K, Firk C, Herpertz-Dahlmann B, Koch I, Konrad K. Intact mirror mechanisms for automatic facial emotions in children and adolescents with autism spectrum disorder. Autism Res. 2017;10:298–310. https://doi.org/10.1002/aur.1654.

  21. Heyes C. Causes and consequences of imitation. Trends Cogn Sci. 2001;5:253–61.

  22. Loveland KA, Tunali-Kotoski B, Pearson DA, Brelsford KA, Ortegon J, Chen R. Imitation and expression of facial affect in autism. Dev Psychopathol. 1994;6:433–44. https://doi.org/10.1017/S0954579400006039.

  23. Oberman LM, Winkielman P, Ramachandran VS. Slow echo: facial EMG evidence for the delay of spontaneous, but not voluntary, emotional mimicry in children with autism spectrum disorders. Dev Sci. 2009;12:510–20. https://doi.org/10.1111/j.1467-7687.2008.00796.x.

  24. Trevisan DA, Hoskyn M, Birmingham E. Facial expression production in autism: a meta-analysis. Autism Res. 2018;11:1586–601. https://doi.org/10.1002/aur.2037.

  25. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. pp. 770–8.

  26. Karpathy A, Toderici G, Shetty S, Leung T, Sukthankar R, Fei-Fei L. Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2014. pp. 1725–32.

  27. Fasel B, Luettin J. Automatic facial expression analysis: a survey. Pattern Recognit. 2003;36:259–75.

    Article  Google Scholar 

  28. Drimalla H, Landwehr N, Baskow I, Behnia B, Roepke S, Dziobek I, Scheffer T. Detecting autism by analyzing a simulated social interaction. In: Berlingerio M, Bonchi F, Gärtner T, Hurley N, Ifrim G, editors. ECML PKDD; 2018. Cham: Springer; 2019. p. 193–208. https://doi.org/10.1007/978-3-030-10925-7_12.

    Chapter  Google Scholar 

  29. Trevisan DA, Bowering M, Birmingham E. Alexithymia, but not autism spectrum disorder, may be related to the production of emotional facial expressions. Mol Autism. 2016;7:46. https://doi.org/10.1186/s13229-016-0108-6.

    Article  PubMed  PubMed Central  Google Scholar 

  30. World Health Organization. The ICD-10 classification of mental and behavioural disorders: diagnostic criteria for research. Geneva: World Health Organization; 1993.

    Google Scholar 

  31. Lord C, Risi S, Lambrecht L, Cook EH, Leventhal BL, DiLavore PC, et al. The autism diagnostic observation schedule—generic: a standard measure of social and communication deficits associated with the spectrum of autism. J Autism Dev Disord. 2000;30:205–23.

    Article  CAS  Google Scholar 

  32. Rutter M, Le Couteur A, Lord C. Autism diagnostic interview-revised. Los Angeles, CA: Western Psychological Services. 2003;29:30.

  33. Herzfeld HD. WST-Wortschatztest. Karl-Heinz Schmidt und Peter Metzler. Weinheim: Beltz Test GmbH, 1992. Diagnostica. 1994;40(3):S.293–7.

    Google Scholar 

  34. Drimalla H, Dziobek I. Berlin emotion recognition test (BERT); 2019. Available from Open Access repository of the Humboldt University of Berlin (edoc-Server). https://doi.org/10.18452/20019.

  35. Ekman P, Friesen WV. Constants across cultures in the face and emotion. J Pers Soc Psychol. 1971;17:124–9. https://doi.org/10.1037/h0030377.

    Article  CAS  PubMed  Google Scholar 

  36. Kliemann D, Rosenblau G, Bölte S, Heekeren HR, Dziobek I. Face puzzle—two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition. Front Psychol. 2013;4:376.

    Article  Google Scholar 

  37. Drimalla H, Kirst S, Dziobek I. Insights about emotion recognition by BERT and ERNIE (two new psychological tests). Manuscript in preparation.

  38. Baron-Cohen S, Wheelwright S, Skinner R, Martin J, Clubley E. The autism-spectrum quotient (aq): evidence from asperger syndrome/high-functioning autism, male- sand females, scientists and mathematicians. J Autism Dev Disord. 2001;31:5–17.

    Article  CAS  Google Scholar 

  39. Freitag CM, Retz-Junginger P, Retz W, Seitz C, Palmason H, Meyer J, et al. Evaluation der deutschen Version des Autismus-Spektrum-Quotienten (AQ) - die Kurzversion AQ-k. Z Klin Psychol Psychother. 2007;36:280–9. https://doi.org/10.1026/1616-3443.36.4.280.

    Article  Google Scholar 

  40. Woodbury-Smith MR, Robinson J, Wheelwright S, Baron-Cohen S. Screening adults for asperger syndrome using the aq: a preliminary study of its diagnostic validity in clinical practice. J Autism Dev Disord. 2005;35:331–5.

    Article  CAS  Google Scholar 

  41. Ekman P, Friesen WV. Facial action coding system: a technique for the measurement of facial movement. Palo Alto: Consulting Psychologists Press; 1978.

    Google Scholar 

  42. Tian Y-I, Kanade T, Cohn JF. Recognizing action units for facial expression analysis. IEEE Trans Pattern Anal Mach Intell. 2001;23:97–115.

    Article  Google Scholar 

  43. Cohn JF, Ambadar Z, Ekman P. Observer-based measurement of facial expression with the Facial Action Coding System. The handbook of emotion elicitation and assessment. 2007:203–21.

  44. Edwards K. The face of time: temporal cues in facial expressions of emotion. Psychol Sci. 1998;9:270–6. https://doi.org/10.1111/1467-9280.00054.

    Article  Google Scholar 

  45. Baltrusaitis T, Zadeh A, Lim YC, Morency L-P. OpenFace 2.0: Facial behavior analysis toolkit. In: Recognition IICoAFaG, editor. 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018); 5/15/2018–5/19/2018; Xi'an. Piscataway, NJ: IEEE; 2018. p. 59–66. https://doi.org/10.1109/FG.2018.00019.

  46. Mavadati SM, Mahoor MH, Bartlett K, Trinh P, Cohn JF. Disfa: A spontaneous facial action intensity database. IEEE Trans Affect Comput. 2013;4:151–60.

    Article  Google Scholar 

  47. Law Smith MJ, Montagne B, Perrett DI, Gill M, Gallagher L. Detecting subtle facial emotion recognition deficits in high-functioning Autism using dynamic stimuli of varying intensities. Neuropsychologia. 2010;48:2777–81. https://doi.org/10.1016/j.neuropsychologia.2010.03.008.

    Article  PubMed  Google Scholar 

  48. Wingenbach TS, Ashwin C, Brosnan M. Diminished sensitivity and specificity at recognising facial emotional expressions of varying intensity underlie emotion-specific recognition deficits in autism spectrum disorders. Res Autism Spectrm Disorders. 2017;34:52–61. https://doi.org/10.1016/j.rasd.2016.11.003.

    Article  Google Scholar 

  49. Rump KM, Giovannelli JL, Minshew NJ, Strauss MS. The development of emotion recognition in individuals with autism. Child Dev. 2009;80:1434–47. https://doi.org/10.1111/j.1467-8624.2009.01343.x.

    Article  PubMed  PubMed Central  Google Scholar 

  50. Clark TF, Winkielman P, McIntosh DN. Autism and the extraction of emotion from briefly presented facial expressions: stumbling at the first step of empathy. Emotion. 2008;8:803–9. https://doi.org/10.1037/a0014124.

    Article  PubMed  Google Scholar 

  51. Krumhuber EG, Kappas A, Manstead ASR. Effects of dynamic aspects of facial expressions: a review. Emot Rev. 2013;5:41–6.

    Article  Google Scholar 

  52. Brewer R, Biotti F, Catmur C, Press C, Happé F, Cook R, Bird G. Can neurotypical individuals read autistic facial expressions? Atypical production of emotional facial expressions in autism spectrum disorders. Autism Res. 2016;9:262–71. https://doi.org/10.1002/aur.1508.

    Article  PubMed  Google Scholar 

  53. Yoshimura S, Sato W, Uono S, Toichi M. Impaired overt facial mimicry in response to dynamic facial expressions in high-functioning autism spectrum disorders. J Autism Dev Disord. 2015;45:1318–28. https://doi.org/10.1007/s10803-014-2291-7.

    Article  PubMed  Google Scholar 

  54. Faso DJ, Sasson NJ, Pinkham AE. Evaluating posed and evoked facial expressions of emotion from adults with autism spectrum disorder. J Autism Dev Disord. 2015;45(1):75–89.

    Article  Google Scholar 

  55. Happé F, Ronald A. The “fractionable autism triad”: a review of evidence from behavioural, genetic, cognitive and neural research. Neuropsychol Rev. 2008;18:287–304. https://doi.org/10.1007/s11065-008-9076-8.

    Article  PubMed  Google Scholar 

  56. Zane E, Yang Z, Pozzan L, Guha T, Narayanan S, Grossman RB. Motion-capture patterns of voluntarily mimicked dynamic facial expressions in children and adolescents with and without ASD. J Autism Dev Disord. 2019;49:1062–79. https://doi.org/10.1007/s10803-018-3811-7.

    Article  PubMed  PubMed Central  Google Scholar 

  57. Helt MS, Fein DA. Facial feedback and social input: effects on laughter and enjoyment in children with autism spectrum disorders. J Autism Dev Disord. 2016;46:83–94. https://doi.org/10.1007/s10803-015-2545-z.

    Article  PubMed  Google Scholar 

  58. Zane E, Neumeyer K, Mertens J, Chugg A, Grossman RB. I think we’re alone now: solitary social behaviors in adolescents with autism spectrum disorder. J Abnorm Child Psychol. 2018;46:1111–20. https://doi.org/10.1007/s10802-017-0351-0.

    Article  PubMed  PubMed Central  Google Scholar 

  59. Hudenko WJ, Stone W, Bachorowski J-A. Laughter differs in children with autism: an acoustic analysis of laughs produced by children with and without the disorder. J Autism Dev Disord. 2009;39:1392–400. https://doi.org/10.1007/s10803-009-0752-1.

    Article  PubMed  Google Scholar 

  60. Chevallier C, Kohls G, Troiani V, Brodkin ES, Schultz RT. The social motivation theory of autism. Trends Cogn Sci. 2012;16:231–9. https://doi.org/10.1016/j.tics.2012.02.007.

    Article  PubMed  PubMed Central  Google Scholar 

  61. Sowden S, Koehne S, Catmur C, Dziobek I, Bird G. Intact automatic imitation and typical spatial compatibility in autism spectrum disorder: challenging the broken mirror theory. Autism Res. 2016;9:292–300. https://doi.org/10.1002/aur.1511.

    Article  PubMed  Google Scholar 

  62. Spengler S, Bird G, Brass M. Hyperimitation of actions is related to reduced understanding of others’ minds in autism spectrum conditions. Biol Psych. 2010;68:1148–55. https://doi.org/10.1016/j.biopsych.2010.09.017.

    Article  Google Scholar 

  63. Cook JL, Bird G. Atypical social modulation of imitation in autism spectrum conditions. J Autism Dev Disord. 2012;42:1045–51. https://doi.org/10.1007/s10803-011-1341-7.

    Article  PubMed  Google Scholar 

  64. Williams JH, Whiten A, Suddendorf T, Perrett DI. Imitation, mirror neurons and autism. Neurosci Biobehav Rev. 2001;25:287–95.

    Article  CAS  Google Scholar 

  65. Kulesza WM, Cisłak A, Vallacher RR, Nowak A, Czekiel M, Bedynska S. The face of the chameleon: the experience of facial mimicry for the mimicker and the mimickee. J Soc Psychol. 2015;155:590–604. https://doi.org/10.1080/00224545.2015.1032195.

    Article  PubMed  PubMed Central  Google Scholar 

  66. Stel M, van Dijk E, Olivier E. You want to know the truth? Then don’t mimic! Psychol Sci. 2009;20:693–9. https://doi.org/10.1111/j.1467-9280.2009.02350.x.

    Article  PubMed  Google Scholar 

  67. Lewis MB, Dunn E. Instructions to mimic improve facial emotion recognition in people with sub-clinical autism traits. Q J Exp Psychol. 2017;70:2357–70. https://doi.org/10.1080/17470218.2016.1238950.

    Article  Google Scholar 

  68. Cacioppo JT, Petty RE, Losch ME, Kim HS. Electromyographic activity over facial muscle regions can differentiate the valence and intensity of affective reactions. J Pers Soc Psychol. 1986;50:260–8. https://doi.org/10.1037//0022-3514.50.2.260.

    Article  CAS  PubMed  Google Scholar 

  69. Laksana E, Baltrusaitis T, Morency L-P, Pestian JP. Investigating facial behavior indicators of suicidal ideation; 2017. p. 770–7.

  70. Vijay S, Baltrušaitis T, Pennant L, Ongür D, Baker JT, Morency L-P. Computational study of psychosis symptoms and facial expressions. In: Computing and mental health workshop at CHI; 2016.

  71. Schmidt KL, Cohn JF. Human facial expressions as adaptations: Evolutionary questions in facial expression research. Am J Phys Anthropol. 2001;Suppl 33:3–24. https://doi.org/10.1002/ajpa.2001.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  72. Matsumoto D, Lee M. Consciousness, volition, and the neuropsychology of facial expressions of emotion. Conscious Cogn. 1993;2:237–54. https://doi.org/10.1006/ccog.1993.1022.

    Article  Google Scholar 

  73. Barrett LF, Adolphs R, Marsella S, Martinez AM, Pollak SD. Emotional expressions reconsidered: challenges to inferring emotion from human facial movements. Psychol Sci Public Interest. 2019;20:1–68. https://doi.org/10.1177/1529100619832930.

    Article  CAS  PubMed  PubMed Central  Google Scholar 

  74. Gosselin P, Kirouac G, Doré FY. Components and recognition of facial expression in the communication of emotion by actors. J Pers Soc Psychol. 1995;68:83–96. https://doi.org/10.1037/0022-3514.68.1.83.

    Article  CAS  PubMed  Google Scholar 

  75. Rosenberg EL. Introduction: the study of spontaneous facial expressions in psychology. In: Ekman P, Rosenberg EL, editors. What the face reveals: basic and applied studies of spontaneous expression using the Facial Action Coding System, vol. 2. USA: Oxford University Press; 2005. pp. 3–18.

    Chapter  Google Scholar 

Download references

Acknowledgements

We thank all the individuals who participated in this research. We also thank Christian Knauth for his technical assistance with the automated facial analysis.

Funding

Open Access funding enabled and organized by Projekt DEAL. The research was supported by the Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany.

Author information

Contributions

HD designed the study concept, the experimental procedure and the stimuli, analyzed and interpreted the data, and wrote the manuscript. IB collected the data and contributed to the manuscript as part of her master’s thesis in Psychology. BB and SR contributed to the conception of the study and the data acquisition. ID oversaw and assisted with all aspects of the study design, data analysis, and writing process. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Hanna Drimalla.

Ethics declarations

Ethics approval and consent to participate

All participants gave written informed consent before their participation, and the study was approved by the ethics committee of the Charité – Universitätsmedizin Berlin.

Consent for publication

All individuals displayed in figures have given their consent for publication.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Table 2 Selected single action units from the Facial Action Coding System (FACS) [41]

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Drimalla, H., Baskow, I., Behnia, B. et al. Imitation and recognition of facial emotions in autism: a computer vision approach. Molecular Autism 12, 27 (2021). https://doi.org/10.1186/s13229-021-00430-0

