The term valence refers to both the affective response (e.g., experiencing bad feelings) and semantic knowledge (e.g., knowing that cancer is bad). Humans’ ability to represent valence both affectively and semantically provides a clear advantage: It preserves the evolutionarily functional, immediate role of affect, while at the same time enabling the representation of stimulus value without spending a great deal of resources on a full-blown affective response. The importance of this distinction is well recognized by emotion theories, though disentangling the two empirically is challenging. Consequently, in the absence of a clear marker, tasks that are potentially semantic in nature may be interpreted as affective. This symposium aims to present the contribution of the distinction between affective and semantic representations of valence to various domains of affective science. First, Hillel Aviezer will address the gap between poses, stereotypical facial expressions that are more reflective of semantic knowledge, and real-life facial behavior that is part of the affective response. Second, Ella Givon and Nachson Meiran will discuss the differences between feelings-focused and semantic-focused self-reports in the context of an evidence accumulation model for generating feelings. Third, Michal Kuniecki will present the distinction in the neural domain for the case of visual stimuli. Fourth, Assaf Kron will present work that systematically dissociates affective and semantic representations of valence.
How do people answer the question “how do you feel?” In the present work, participants were given two tasks in each trial. They first indicated whether a picture made them feel pleasant (or, in another group, whether it was supposed to be felt as pleasant), and then made gender decisions regarding faces. Evidence accumulation modeling showed that (a) reporting genuine feelings is qualitatively different from reporting supposed feelings; (b) reporting one’s feelings is remarkably similar to making gender decisions; and (c) evidence regarding negative feelings accumulates more quickly than evidence regarding positive feelings. These results support the assumption that when asked, participants report genuine as opposed to supposed feelings, and strengthen the analogy between feeling reports and perceptual decisions.
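The evidence accumulation framework invoked here can be illustrated with a minimal drift-diffusion simulation. This is a generic sketch, not the authors' actual model or fitting procedure: noisy evidence accumulates toward one of two decision boundaries (labeled "pleasant"/"unpleasant" here purely for illustration), and the boundary reached determines the reported choice while the crossing time determines the response time. The parameter names and labels are assumptions for the example.

```python
import random

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, max_t=5.0, seed=None):
    """Simulate one drift-diffusion trial.

    Evidence x starts at 0 and accumulates in small time steps with a
    systematic drift plus Gaussian noise, until it crosses +boundary
    (report "pleasant") or -boundary (report "unpleasant"), or max_t
    elapses. Returns (choice, response_time). A stronger drift, as the
    abstract reports for negative feelings, yields faster accumulation
    and shorter response times on average.
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        # Euler step: deterministic drift plus scaled Gaussian noise.
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    choice = "pleasant" if x >= boundary else "unpleasant"
    return choice, t

# Example trial with a positive drift (evidence favoring "pleasant"):
choice, rt = simulate_ddm(drift=2.0, seed=1)
```

Comparing distributions of simulated choices and response times across drift values is the usual way such models are related to empirical self-report data.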
Emotional stimuli are processed in a prioritized manner. Viewing emotional scenes causes widespread brain activations, which are related to various cognitive processes as well as to autonomic activation. We have shown previously that, in the case of emotional scenes, semantic features dominate over visual saliency in attracting eye fixations. This effect is stable even under conditions of poor visibility caused by mixing images with visual noise. Moreover, fixating negative emotional objects induces more intense pupil dilations than fixating neutral objects or the background of negative images. This substantiates the idea that dedicated brain regions are activated by the presentation of negative scenes, resulting in more intensive processing of the information contained in the presented image. In a follow-up experiment, we scanned participants and measured their pupil size while they viewed negative and neutral natural images. We showed that arousal induced by the negative images, as compared to the neutral ones, is primarily related to higher amygdala activity, while increasing the visibility of negative content enhanced activity in the lateral occipital complex (LOC). This may suggest that higher areas of the visual stream are fine-tuned to process emotionally relevant objects irrespective of the level of arousal. Both arousal and the processing of emotional meaning modulated activity within the ventromedial prefrontal cortex (vmPFC). Overlapping activations within the vmPFC possibly reflect the integration of these aspects of emotional processing.
Valence can be represented affectively (i.e., feeling bad) and semantically (i.e., knowing that X is bad). The affective and semantic modes of valence are difficult to separate from one another for purposes of empirical examination. One reason the dissociation is particularly challenging to investigate is that there is usually a high degree of correlation between the two: the emotional response is often determined, or inextricably colored, by the activation of semantic knowledge, and vice versa. Here we present three experimental approaches that offer a window into this potential dissociation. Experiment 1 examined the divergent effect of repeated exposure on semantic- and affective-based measures. The results showed that measures related to affective valence (feelings-focused self-reports, heart rate, and facial EMG activations) attenuated with repeated exposure, whereas measures related to semantic valence (knowledge-focused self-reports and the congruency effect of an affective Simon task) did not. In Experiment 2, we compared the ability of three types of self-report data (feelings-focused, knowledge-focused, and traditional instructions) to predict facial electromyography, heart rate, and electrodermal changes in response to affective stimuli. Results suggest an advantage for feelings-focused instructions over knowledge-focused instructions, with traditional instructions falling in between. In Experiment 3, we examined the divergent effect of the stimulus's level of abstraction on affective and semantic measures. Results suggest that the affective response is more influenced by the stimulus's level of abstraction (i.e., stronger with more concrete stimuli) than semantic knowledge is. The relevance of these results to emotion theory and research is discussed.
The distinction between positive and negative facial expressions is assumed to be clear and robust. Nevertheless, accumulating research with intense real-life faces has shown that viewers find it challenging to differentiate the valence of such expressions without the use of context. Using FACS analysis, we supplied participants with valid information about objective facial activity that could easily be used to differentiate positive from negative expressions. Strikingly, ratings remained virtually unchanged, and participants failed to differentiate the valence of positive and negative faces. We propose that participants' immunity to objectively useful facial information results from stereotypical (but erroneous) inner representations of extreme positive and negative expressions. Finally, we suggest that these representations originate from social situations in which expressions are used strategically to convey interpersonal signals to others. Our work suggests that the diagnosticity of facial reactions depends on the situation, that inner representations may dissociate from real-life expressions, and that context may play a key role in everyday emotion perception.