Individuals with autistic traits often experience difficulties in understanding others’ minds (agency: the capacity to plan and pursue goals) as well as their emotional states (experience: the capacity to feel and sense). This applies in particular to those working in STEM occupations, which require high systemizing ability. The present research explores differences between STEM and non-STEM (i.e. Social Science (SSC)) fields in the ascription of agency and experience when the targets depict either human faces or anthropomorphic objects. In Study 1, STEM and SSC students (N = 93) rated the perceived animacy (i.e. aliveness) and agency of human faces and vehicles that varied in realism from artificial/anthropomorphic to lifelike. Whereas no differences emerged for ratings of animacy, STEM students were less likely than SSC students to attribute agency to human faces. This result persisted when individuals’ autism quotient (AQ) scores were included as a covariate, but was non-significant for vehicles. In Study 2, STEM and SSC students (N = 218) rated the perceived agency and experience of human faces that varied in realism and were presented either alone or embedded within a vehicle body. Ascriptions of agency and experience to face-only stimuli were significantly reduced for STEM compared to SSC students (also when AQ was included as a covariate). However, no such difference was observed for faces with vehicle bodies. The present findings point toward the moderating role of stimulus target type in explaining potential differences between STEM and SSC occupations in mind attribution (i.e. agency and experience).
Voice is one of the main communicative sources of evidence in interpreting the expression of emotion. Affective computing aims to create systems and algorithms that automatically analyse people's emotional states. Consequently, several companies, such as Affectiva, Beyond Verbal and Audeering, have developed automatic systems to analyse the vocal expression of emotions. However, little is known about the accuracy of such systems. To evaluate the accuracy of automatic emotion recognition from voice, we processed vocal expressions from the GEMEP database with "SensAI Emotion", developed by Audeering. The GEMEP database contains audio-video recordings of 10 actors performing 17 different emotional scenarios (Bänziger & Scherer, 2010). SensAI Emotion analyses emotions from speech and returns a value for each of 23 affective states as well as for the valence and arousal dimensions (Eyben, Scherer, & Schuller, 2018). In terms of category recognition, the accuracy of SensAI in labeling GEMEP vocal expressions of emotion is 6.67%. However, this low result is partly due to the large number of different affective-state labels the system can return. To bypass this label-matching bias, we compared recognition accuracy for the valence and arousal dimensions. The results show an accuracy of 0.56 (95% CI [0.46, 0.65]) for valence and 0.73 (95% CI [0.64, 0.81]) for arousal recognition. Automatic vocal emotion recognition is a growing research area in affective computing. Categorical recognition of emotion remains a challenge due to the diversity of affective states. However, the accuracy of a system like SensAI Emotion provides promising results for the recognition of valence and arousal.
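To make the label-matching point concrete, a short illustrative sketch follows. It is not the authors' analysis code: it simply shows (a) why a raw hit rate must be read against the chance floor implied by the number of candidate labels, and (b) one standard way (the Wilson score interval, chosen here as an assumption — the abstract does not state which CI method was used) to put a 95% confidence interval around an accuracy estimate. The sample size of 100 below is purely hypothetical.

```python
import math

def chance_level(n_labels: int) -> float:
    """Chance accuracy under uniform random guessing over n_labels."""
    return 1.0 / n_labels

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion (successes out of n)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# With 23 candidate affective-state labels, uniform guessing yields
# roughly 4.3% accuracy, so a 6.67% categorical hit rate sits only
# modestly above that chance floor.
print(round(chance_level(23), 3))

# Hypothetical example: 56 correct valence judgments out of 100 trials.
lo, hi = wilson_ci(56, 100)
print(round(lo, 2), round(hi, 2))
```

The same interval function applies to any binary-coded outcome (e.g. arousal classified as high vs. low), which is why dimension-level accuracies are easier to compare across systems than category-level ones.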
The job of teaching children and youth is an emotional one. Many new teachers enter the field pedagogically equipped but unprepared for the emotional components of the job. Thus, we seek to understand teacher candidates’ early learning about the emotional labor involved in teaching. The emotional labor framework (Hargreaves, 2000; Hochschild, 1983) centers employees' emotional acting in the workplace, with a focus on how this acting is shaped by employers’ expectations (display rules) and oriented toward organizational goals. Our qualitatively driven mixed-methods design includes both concurrent and sequential components for the primary purposes of triangulation and enhancement. Participants were undergraduate candidates (N = 116) working toward teaching licensure at a mid-sized public university in the U.S. Midwest. We have collected and analyzed two rounds of data: 17 face-to-face interviews and 104 questionnaires. A third round of data collection is underway; it includes second interviews with candidates after they engage in a guided conversation with their mentors and write a reflection, as well as interviews with teacher mentors and university faculty. A key finding is that candidates described confusion about the emotional labor involved in teaching (e.g., “when is it appropriate to show emotions?”; “what is too much emotion?”). Many said that “professionalism” dictated the suppression of negative emotions while teaching, but they also worried that this suppression would lead students to find them less relatable, less authentic, and, simply, less “human.” We discuss implications for research and practice that have the potential to transcend the teaching profession.