Traditionally, the prevailing assumption concerning emotional expression was that only a small set of almost exclusively negative emotions become apparent in the face (reflecting quasi-automatic, internal, physiological reactions to external events). However, the rise of interest in positive emotions has challenged this assumption by broadening the research focus in terms of the number of positive emotions studied and the modalities considered, and by showing the vital role that context plays when recognising others’ expressions. We will begin this symposium with two presentations on positive vocal expressions. In the first, Roza Kamiloglu will report findings from a comprehensive review of studies investigating acoustic cues relating specifically to positive emotion in speech and vocalizations. Secondly, Doron Atias will present the results of a series of empirical studies that point to a critical role for context in disambiguating positive voices. Ursula Beermann will then present research on the measurement and recognition of different kinds of humour and laughter. Finally, we will present new findings from a cross-cultural study of the recognition of eight emotions, including interest, pride, joy, and pleasure, across different combinations of the modalities of face, body, and voice. The symposium will therefore provide further evidence that research on positive emotion has contributed to emotion theory by moving the debate beyond (predominantly) negative, facially expressed emotions. Discussions of the experimental findings and the theoretical implications will be led by Hillel Aviezer and Daniel Dukes.
While most research on emotion expression and recognition has focused on static facial expressions of a few emotions (almost all negative, with joy as the only exception), recent studies have argued that positive emotions may be communicated more efficiently by modalities other than the face, namely the voice and the body. In this study we investigated how eight emotions (including four positive ones: pride, interest, joy, and pleasure) were recognized when presented in one of seven perceptual conditions: face, voice, body, face and voice, face and body, body and voice, and face, body, and voice. Six hundred and thirty participants from two countries (Argentina and the US) were randomly assigned to one of the seven perceptual conditions and viewed and/or listened to 160 emotional stimuli (affect bursts) performed by ten actors and taken from the Geneva Multimodal Emotion Portrayals database. Results show that the expressive modalities are differentially successful at conveying the various emotions: while the face is generally effective and sufficient in conveying the negative emotions, positive emotions particularly benefit from the inclusion of other expressive modalities. Recognition accuracy from body expressions alone was generally low across emotions, with the exception of fear and anger, whereas the voice contained most of the salient information for pleasure. Implications of these results for theory will be discussed and cross-cultural differences will be highlighted.
Several researchers have distinguished different types of smiling and laughter, some expressing felt emotions of amusement or happiness (genuine smiles and laughter) and others representing masking smiles or positive emotions blended with negative ones (non-genuine smiles and laughter; e.g., Bänninger-Huber & Rauber-Kaiser, 1989; Ekman & Friesen, 1982; Frank & Ekman, 1993). These are usually distinguished by employing the Facial Action Coding System (FACS; Ekman, Friesen, & Hager, 2002). Furthermore, humor in general can have both positive and negative aspects. For instance, laughing at oneself has been viewed as the core element of the sense of humor (Beermann & Ruch, 2011; Comte-Sponville, 2010; McGhee, 1999) and predicts beneficial outcomes such as life and marital satisfaction (Terzic, 2018); at the same time, it is important to differentiate it from self-deprecating humor, which has been identified as a maladaptive humor style connected with low self-esteem and negative outcomes (Martin, Puhlik-Doris, Larsen, Gray, & Weir, 2003). Perceiving a person’s laugh as genuine results in the attribution of more prosocial traits to that person (e.g., Beermann et al., in prep.). The expression of genuine smiles seems to be affected by factors such as genetic predisposition (Haase et al., 2015) and the social status of a person and their interaction partner (Côté et al., 2017). In this presentation, the importance of differentiating genuine from non-genuine forms of smiling and laughter is demonstrated by reviewing and discussing studies on the implications, outcomes, and social-interaction aspects of genuine vs. non-genuine smiling and laughter.
A basic premise of emotion theories is that experienced feelings (whether specific emotions or broad valence) are expressed via vocalizations in a veridical and clear manner. By contrast, functional–contextual frameworks, rooted in animal communication research, view vocalizations as contextually flexible tools for social influence, not as expressions of emotion. Testing these theories has proved difficult because past research relied heavily on posed sounds, which may lack ecological validity. In a series of studies, we test these theories by examining the perception of human affective vocalizations evoked during highly intense, real-life emotional situations. We show that highly intense vocalizations of opposite valence (e.g., joyous reunions, fearful encounters) are perceptually confusable and that their ambiguity increases with higher intensity. Using authentic lottery-winning reactions, we show that increased hedonic intensity leads to lower, not higher, perceived valence. Finally, we demonstrate that visual context operates as a powerful mechanism for disambiguating real-life vocalizations, shifting perceived valence categorically. These results suggest that affective vocalizations may be inherently ambiguous, demonstrate the role of intensity in driving affective ambiguity, and suggest a critical role for context in vocalization perception.
Researchers examining the nonverbal communication of emotions are becoming increasingly aware of distinctions between positive emotional states such as interest, relief, and pride. Given the importance of the voice in communicating emotion in general, and positive emotion in particular, it is remarkable that there is to date no systematic review of what characterizes vocal expressions of different positive emotions; an integration and synthesis of current findings is likewise lacking. In this talk, we will review the studies (N = 108) investigating acoustic cues relating to specific positive emotions in speech prosody and nonverbal vocalizations. Evidence suggests that happiness as expressed in the voice is generally loud with high variability in loudness, high and variable in pitch, and high in the first two formant frequencies. When specific positive emotions are directly compared with each other, pitch mean, loudness mean, and speech rate reveal differences among these emotions, with patterns mapping onto emotion families. For instance, pitch is higher for epistemological emotions (amusement, interest, relief), moderate for savoring emotions (contentment and pleasure), and lower for prosocial emotions (admiration). Furthermore, the acoustic patterns are attributable to differing arousal levels as described in previous research. These findings will be discussed in relation to limitations in extant work, and concrete proposals will be provided for future research on positive emotions in the voice.