Music listening often leads to emotional experiences, be it in conjunction with narrative events in cinema, during a live concert, or during solitary listening. Music was once considered to communicate basic emotions through acoustic characteristics similar to vocalisations. However, recent evidence to be presented by Tuomas Eerola (Durham University, UK) points to music communicating core affect, with emotional experiences being constructed in light of extra-musical characteristics. Given this malleability, emotional responses to music offer unique insight into the basic versus constructionist debate of emotion. This symposium includes a further four empirical papers from researchers in musicology, emotion psychology and clinical psychology. In the first paper, Joel Larwood (University of Queensland, Australia) will discuss new research testing how alexithymia influences valence-specific responses to music according to self-report, facial EMG and phasic skin conductance. Jonna Vuoskoski (University of Oslo, Norway) will present findings detailing how trait empathy predicts increased intensity of self-reported emotions to music and emotion-specific autonomic reactivity. Adopting an experience sampling methodology with Finnish adolescents, Will Randall (Jyvaskyla University, Finland) will present on context-specific affect changes when listening to different types of music. Finally, clinical psychologist Allison Waters (Griffith University, Australia) will discuss uses of music in the context of cognitive behavioural therapy, whereby using musical jingles during fear extinction learning led to an enhanced response to anxiety treatment.
Introduction/Background: Saying things out loud and expressing them as melodies/jingles enhances new learning and memory consolidation. The present study incorporated the expression of key therapeutic strategies as jingles during cognitive control training to enhance engagement and clinical outcomes for clinically anxious children. Methods: In this study, 59 anxious children aged 7 to 12 years were randomly assigned to cognitive control training or a waitlist control condition. Children in the active treatment condition completed 12 sessions of positive search training involving melodies/jingles to enhance learning and memory of the positive search strategies. Children in the waitlist control condition were assessed before and after the active intervention phase. Results: Significant reductions in clinician and parent report of children’s anxiety symptoms were observed from pre- to post-intervention in the active condition compared to the waitlist condition. More importantly, greater use of melodies/jingles during treatment significantly predicted better treatment outcomes at post-treatment and follow-up. Conclusion: These results encourage further study of melody and jingles as a means of enhancing memory of therapeutic strategies. Additional research conducted since these initial findings will be presented, and avenues for future research will be discussed.
There are two key theoretical perspectives on emotional experience. Basic emotion theory links emotional experience to psychophysiology, whereas constructivist theory posits that emotions are a product of predictions, previous knowledge, and expectations. According to basic emotion theory, an emotion will be easier to differentiate when a more pronounced bodily reaction is present. From a constructivist perspective, however, the ability to differentiate emotional experiences is contingent on well-formed emotion concepts, not psychophysiological reactivity. Alexithymia is a personality trait characterised by a lack of knowledge about emotional experiences and poor differentiation of emotions, particularly negative ones. However, studies on psychophysiological responses in alexithymia have been consistently underpowered and have returned inconsistent results, with no study linking physiology to differentiation. This will be explored in the current study, in which participants listen to music that varies (high or low) in valence and arousal. Participants are 120 university students aged 18 to 25 years. Skin conductance and activity of the zygomaticus major and corrugator supercilii muscles are continuously measured during music listening, along with self-report of experienced emotions at the end of each song. Consistent with constructivist theory, it is predicted that psychophysiology will cluster according to valence and arousal regardless of alexithymia. Further, it is predicted that psychophysiology will not predict emotion differentiation, but that emotion differentiation will decrease as alexithymia increases. Results and conclusions will be included upon completion of data collection.
Music is an important part of everyday life, and one of the most prominent motivations for listening to music is the self-regulation of emotional states. However, this regulation is not always beneficial, and the use of maladaptive strategies is apparent in the development of many forms of psychopathology, including mood and personality disorders. This is particularly true for adolescents, as failure to develop adaptive regulation strategies during this critical period can lead to social and mental health problems into young adulthood. Music listening has unique potential as an age-appropriate tool to support emotional health in adolescents. The aim of this study was to provide a comprehensive understanding of how adolescents use music to regulate emotional states, and to identify the individual and contextual variables that influence this regulation. All data were collected through the mobile app MuPsych, which was designed to collect ecologically valid, real-time data during music listening experiences. Participants were Finnish middle-school students (aged 13-16), who responded to questions as they listened to music on their mobile phones. These questions assessed change in emotion (valence, arousal, and intensity of a categorical state) over a five-minute listening period, along with contextual variables and regulation strategies. The app also assessed individual variables through questionnaires on personality and mental health. Data collection is ongoing, with final results to be presented at the symposium. Preliminary results indicate several clear patterns of regulation for different emotion states, predicted by sets of contextual and individual variables.
It has been postulated that empathy and emotional contagion may be among the fundamental mechanisms through which music induces emotional responses in listeners. Previous studies have reported correlations between questionnaire measures of trait empathy and self-reported intensity of music-induced emotion (particularly in response to sad and tender music), but it is not yet known whether this association exists only at the level of self-report. Thus, the aim of this study was to investigate the relationship between trait empathy and psychophysiological indices of music-induced emotion. Fifty-four participants heard ten one-minute music excerpts representing five different emotions (sad, happy, scary, tender, and neutral). For each excerpt, participants rated their liking and the overall intensity of their emotional response, and described their felt emotion using seven rating scales (including happy, tender, peaceful, moved, anxious, and energetic). In addition, participants’ electrodermal activity and heart rate variability (HRV) were measured. Trait empathy was measured using the Interpersonal Reactivity Index (Davis, 1980). Trait empathy correlated significantly with the mean ratings of overall intensity of felt emotion (averaged across all excerpts; r = .29). Trait empathy also correlated with phasic skin conductance activity in response to sad (r = .28) and tender (r = .33) excerpts, and with high-frequency HRV in response to happy excerpts (r = .36; all p < .05). These results corroborate previous findings associating trait empathy with the self-reported intensity of music-induced emotions, and provide novel evidence of a similar pattern at the level of psychophysiology.