Symposium

Neurorobotic models of emotion

July 11th, A2.07
Marwen Belkaid (Sorbonne Université)

Neurorobotics has emerged as the scientific field concerned with embodied neural systems. Inherently interdisciplinary, this line of research combines methods and techniques from computational neuroscience, machine learning and robotics. It serves two complementary goals: designing efficient machines inspired by natural cognition, and understanding the brain through embodied intelligent machines. There has been significant interest and effort in the modeling of emotion-related phenomena in the field. This symposium thus aims to bring together researchers interested in how neurorobotic models of emotion can help us understand human emotions and improve robots' autonomy and social capabilities. Of particular interest are models that investigate the mechanisms underlying the emergence of emotion in natural and artificial organisms. The symposium will feature three talks and one discussant. Speakers: Dr Lola Cañamero, Dr Sofiane Boucenna and Dr Marwen Belkaid. Discussant: Dr Frédéric Alexandre.

Emotion as a collection of metacontrol mechanisms in natural and artificial systems

Marwen Belkaid

Natural and artificial organisms engage a variety of processes to control their interactions with the environment. From a computational perspective, these first-order processes are governed by a set of parameters that determine their functioning. Metacontrol, in turn, refers to the mechanisms that modify these parameters, allowing the system to adapt its behavior to different situations. To illustrate this, I will show results suggesting that mice exhibit a form of metacontrol in a decision-making experiment. I will also present robotic experiments in which metacontrol mechanisms help solve the task. I will argue that metacontrol is one of the major features of emotion and that this conceptual framework is useful both for understanding emotional phenomena and for guiding the design of cognitive architectures for autonomous machines.
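
To make the distinction between first-order control and metacontrol concrete, the following minimal Python sketch (an illustration under assumed names and thresholds, not the speaker's actual model) pairs a value-based first-order controller with a metacontroller that retunes its exploration temperature and learning rate from recent outcomes.

# Hypothetical sketch of the metacontrol idea: a first-order decision process
# (softmax action selection over learned values) whose parameters are adjusted
# by a metacontroller based on recent performance. All names and thresholds
# are illustrative assumptions.
import numpy as np

class FirstOrderController:
    """Value-based action selection; behavior depends on its parameters."""
    def __init__(self, n_actions, temperature=0.5, learning_rate=0.1):
        self.q = np.zeros(n_actions)
        self.temperature = temperature
        self.learning_rate = learning_rate

    def act(self, rng):
        logits = self.q / self.temperature
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return rng.choice(len(self.q), p=p)

    def update(self, action, reward):
        self.q[action] += self.learning_rate * (reward - self.q[action])

class MetaController:
    """Monitors outcomes and retunes the first-order parameters."""
    def __init__(self, window=20):
        self.recent = []
        self.window = window

    def adjust(self, controller, reward):
        self.recent = (self.recent + [reward])[-self.window:]
        avg = np.mean(self.recent)
        # Poor recent outcomes -> more exploration and faster learning;
        # good outcomes -> exploit the current policy.
        controller.temperature = 1.0 if avg < 0.3 else 0.1
        controller.learning_rate = 0.2 if avg < 0.3 else 0.05

rng = np.random.default_rng(0)
ctrl, meta = FirstOrderController(n_actions=3), MetaController()
reward_probs = [0.2, 0.5, 0.8]          # hypothetical bandit-like task
for t in range(200):
    a = ctrl.act(rng)
    r = float(rng.random() < reward_probs[a])
    ctrl.update(a, r)
    meta.adjust(ctrl, r)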

Embodying Affect in Autonomous Interactive Robots

Lola Cañamero

One of the key contributions that robots can make to emotion research is the possibility to implement and test assumptions, extract and analyze their consequences, and assess the scope and "usefulness" of different conceptualizations, models and theories of emotions. Beyond computational models, robots make it possible to investigate how theoretical models "behave" when they are situated in, and interact with, real (rather than simulated) physical and social environments. Since 1995, my research has taken a "strong" approach to embodiment that models "internal" as well as "external" affective embodiment. In this talk, I will argue that such affective embodiment can provide grounding for various cognitive and interactional skills that autonomous and social robots need, and show how it can shed light on affect in humans. My specific model is biologically inspired and builds on a synthetic physiology regulated using principles from embodied AI, cybernetics (e.g., homeostasis and allostasis), ethology, neuroscience and dynamical systems. It provides a blueprint for a "core affective self" that endows robots with internal values (their own or acquired) and motivations that drive adaptation and learning through interactions with the physical and social environment. I will illustrate how this model has been used in my group over many years to ground a broad range of affective, cognitive and social skills, such as adaptive decision making (action selection), learning (e.g., self-learning of affordances, learning to explore novel environments, learning to coordinate social interaction), development (e.g., development of attachment bonds, epigenetic development), and social interaction (e.g., robot companions for children).
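
As a concrete illustration of the homeostatic principle behind such a synthetic physiology (a minimal hypothetical sketch, not the speaker's model), the following Python snippet defines internal variables with setpoints and selects the behavior that corrects the largest deviation; the variable names, setpoints and behaviors are assumptions.

# Hypothetical sketch of homeostasis-driven action selection: internal
# variables deviate from their setpoints, and the behavior addressing the
# largest deviation (drive) wins. All details are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class InternalVariable:
    name: str
    value: float
    setpoint: float

    def drive(self):
        # Drive = deviation from the homeostatic setpoint.
        return abs(self.setpoint - self.value)

def select_behavior(variables, behaviors):
    """Winner-take-all: pick the behavior that corrects the largest drive."""
    most_urgent = max(variables, key=lambda v: v.drive())
    return behaviors[most_urgent.name]

physiology = [
    InternalVariable("energy", value=0.3, setpoint=1.0),
    InternalVariable("temperature", value=0.9, setpoint=1.0),
]
behaviors = {"energy": "seek_food", "temperature": "seek_shade"}
print(select_behavior(physiology, behaviors))  # -> "seek_food"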

Emotional interaction as a way to regulate robot behavior

Sofiane Boucenna

In this study, we investigate how emotional interactions with a social partner can bootstrap increasingly complex behaviors such as social referencing. Our idea is that social referencing, as well as facial expression recognition, can emerge from a simple sensory-motor system involving emotional stimuli. Without knowing that the other is an agent, the robot is able to learn some complex tasks if the human partner shows some "empathy" or at least "resonates" with the robot head (low-level emotional resonance). Hence, we advocate the idea that social referencing can be bootstrapped from a simple sensory-motor system that is not dedicated to social interactions.
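
A minimal Python sketch of this kind of sensory-motor loop (purely illustrative, not the actual architecture) is given below: the robot produces an expression, the partner "resonates" by mirroring it, and a simple associative rule links the visual input to the robot's own expression, yielding recognition without any explicit model of the other; all encodings and parameters are assumptions.

# Hypothetical sketch: the robot expresses, the human partner mirrors the
# expression (low-level resonance), and a Hebbian-like associative rule maps
# visual input onto the robot's own expressions. All details are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_visual, n_expressions = 16, 4
W = np.zeros((n_expressions, n_visual))        # visual -> expression weights

def partner_response(expression, noise=0.1):
    """The partner mirrors the robot's expression (low-level resonance)."""
    v = np.zeros(n_visual)
    v[expression * 4:(expression + 1) * 4] = 1.0  # toy visual code per expression
    return v + noise * rng.standard_normal(n_visual)

# Learning phase: the robot expresses, observes the mirrored face, associates.
for _ in range(500):
    e = rng.integers(n_expressions)            # robot's own expression (motor side)
    v = partner_response(e)                    # what the robot sees (sensory side)
    W[e] += 0.01 * (v - W[e])                  # simple associative update

# Recognition phase: the learned weights now classify the partner's expression.
test = partner_response(2)
print(int(np.argmax(W @ test)))                # -> 2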