Despite sustained interest in emotion understanding, few studies have examined how people judge the authenticity of emotional expressions. Limited results suggest people perform above chance when judging the authenticity of smiles but perform poorly with other, negative emotions (Brinke et al., 2011). We aimed to investigate this complex aspect of emotion understanding: people’s ability to identify the authenticity of emotional expressions. Since previous studies exclusively used static, pictorial stimuli (Dawel et al., 2017), it is plausible that people might identify authenticity more easily when viewing multimodal expressions. We therefore used assessments with both pictures and videos of authentic and inauthentic emotional expressions (happiness, sadness, and neutral expressions). Young adults (N=170) viewed a set of pictures and videos in which the same displayers expressed an emotion that was either consistent with how they felt (authentic) or different (inauthentic). Participants identified the emotion expressed in each photo/video and rated its authenticity (“very real” to “very false”). Preliminary results show that participants were less accurate at identifying the authenticity of sad expressions than of happy ones. However, participants were better at identifying the authenticity of expressions from multimodal video stimuli than from picture stimuli. These results suggest that complex emotion understanding requires more than decoding simple, static stimuli, as people judged authenticity more accurately when viewing an expression in context, multimodally. This may shed light on the role emotional authenticity plays in socio-emotional interactions. These findings also support the need for more complex assessments of emotion understanding.
Discriminating others' facial expressions is fundamental to negotiating our worlds, and the smile is arguably the most ubiquitous expression we encounter. The function and meaning of smiles have been much discussed in the literature, but there is empirical consensus that we discriminate different types of smiles based on morphological, contextual, motivational, and cultural cues. This scoping study examines the breadth and characteristics of empirical research investigating people's abilities to discriminate different types of smiles. We are particularly interested in assessing the theoretical influences, typologies of smiles, and smile stimuli used in these studies. Secondary aims include scoping the disciplines, geographical locations, samples, and designs of these studies. Using a scoping review methodology (Arksey and O'Malley, 2005), we identified more than 120 empirical studies published over four decades that investigate humans’ abilities to discriminate different types of smiles. In this presentation we characterise this literature by reviewing dominant theoretical perspectives, the terminology used to describe types of smiles, and the stimuli used in these studies. The Basic Emotion Theory of facial expressions dominates the theoretical perspective of these studies, and smile authenticity is the most common theme underlying the descriptions used for different smile types. Common descriptions of smile types include real, fake, posed, and authentic. The stimuli used in these studies vary considerably, ranging from computer-generated smile videos to in situ photographs of evoked expressions to posed videos of actors. The findings of this study outline a range of theoretical gaps, design considerations, and resources for future smile research.
People’s ability to classify emotional facial expressions is very good; however, their ability to determine the authenticity of those expressions is much poorer. Generally, emotion recognition research investigates differences in authenticity discrimination by contrasting people’s perceptions of ‘posed’ and ‘genuine’ expressions. However, such a broad categorization is inadequate for accurately capturing decoder perceptions. We argue, first, that the technique used to produce posed expressions significantly affects how decoders perceive and discriminate authenticity, and second, that decoders’ perception is affected by seeing these expressions in a dynamic or static format. To demonstrate the importance of production method and presentation format, decoders in a series of studies were assessed on various facial expression types. Senders were filmed as they experienced genuine surprise in response to a jack-in-the-box (Genuine), while other senders faked surprise with no preparation (Improvised) or after having first experienced genuine surprise (Rehearsed). Decoders rated the genuineness and intensity of these expressions, and the confidence of their judgment. Both expression type and presentation format affected decoder perception and accurate discrimination. Genuine surprise achieved the highest ratings of genuineness, intensity, and judgmental confidence (dynamic only), and was discriminated fairly accurately from posed surprise expressions. Rehearsed expressions were perceived as more genuine (in dynamic presentation), whereas Improvised expressions were seen as more intense (in static presentation); however, both were poorly discriminated as not being genuine. Overall, dynamic stimuli improved authenticity discrimination and heightened perceptual differences between expressions. Our findings demonstrate the importance of considering both the type of posed expression used and the presentation format.
Most research on facial expressions has focused on posed rather than spontaneous expressions. Yet the differences in their properties, and observers’ ability to discriminate between the two types, remain unclear. The present research examines the encoding and decoding of spontaneous and posed expressions of four emotions: surprise, amusement, disgust, and sadness. Study 1 compared the morphological and dynamic properties of 103 spontaneous and posed facial expressions. Results showed that facial activation patterns at the apex phase differed significantly between spontaneous and posed displays. The two types of expressions also comprised different dynamic sequences in which the facial actions reached their apex. Study 2 explored whether observers (N = 58) can discriminate between posed and spontaneous displays of the four emotions when seen in static or dynamic form. Results showed that dynamic (compared to static) information significantly increased the ability to detect the absence of an emotional experience in posed expressions, but had no effect on participants’ ability to detect the presence of an emotional experience in spontaneous expressions. Together the findings point to clear differences in the encoding and decoding of spontaneous and posed expressions. Moreover, they suggest that the abilities to detect the presence and the absence of an emotion are unrelated, with dynamic information contributing only to the latter.
Emotion understanding, the ability to identify and interpret others’ emotional expressions and reactions, is an important developmental skill, and investigating how this skill develops early in life is crucial. To date, it remains unclear how commonly used measures of emotion matching in infancy relate to emotion understanding in early childhood. In the present study, we hypothesized that infants’ emotion matching would predict early childhood emotion understanding. Forty infants (20 male) participated at 9, 15, 21, and 30 months. At the first three visits, infants engaged in an intermodal emotion matching task adapted from Walker’s (1982) design. Infants viewed silent trials and asynchronous audio trials with pairs of happy, sad, angry, and neutral facial expressions. Emotion matching was calculated as the increase in looking time to an emotional face from the silent condition to the condition with the matching emotional tone. At the fourth visit, children completed the Affective Knowledge Test (AKT; Denham, 1986). Results revealed that only 15-month emotion matching performance predicted AKT performance, and the relation was negative (ρ=-.464, p=.010). Relations between the AKT and emotion matching at 9 months (ρ=-.002, p=.991) and 21 months (ρ=-.097, p=.624) were non-significant. The negative relation observed at 15 months indicates that a novelty preference on emotion matching tasks at this age may be particularly indicative of later emotion understanding performance. These results hold implications for better understanding the trajectory of early emotion understanding development, as well as for identifying a potentially beneficial age to target with early emotion understanding or emotion matching interventions.