SBDM 2018

Eighth International Symposium on Biology of Decision Making, Paris, France


AI and Brain Science project

Satellite workshop

24 May 2018 @ Sorbonne Université

Learning and decision-making at the interface between Neuroscience, Artificial Intelligence and Robotics.

This workshop is co-sponsored by the GT8 "Robotics and Neuroscience" working group of the French national robotics network (CNRS GDR Robotique) and the Japanese "AI and Brain Science" project.

It will take place as a satellite workshop after SBDM 2018, on 24 May 2018 at Sorbonne Université, UPMC Campus, 4 place Jussieu, 75005 Paris (see the program below for the room).

The goal of the workshop is to gather researchers with different research experience and points of view to discuss current hot topics at the crossroads between these disciplines. In particular, we will focus the discussion on on-line and off-line learning, decision-making, motivation, knowledge restructuring, systems-level models, integration of cognitive functions within cognitive architectures, and embodiment.

The program below is still subject to minor updates.



To register for the workshop, please fill in the registration form.



Confirmed speakers

Introductory words by Kenji Doya (OIST, https://groups.oist.jp/ncu/kenji-doya) and Mehdi Khamassi (CNRS / Sorbonne U, http://people.isir.upmc.fr/khamassi/)
Gianluca Baldassarre, CNR
Natalia Díaz-Rodríguez, ENSTA ParisTech / INRIA Flowers
Peter Dominey, INSERM
Philippe Gaussier, U Cergy-Pontoise
Michèle Sebag, INRIA U Paris-Saclay
Tadahiro Taniguchi, Ritsumeikan U
Jun Tani, OIST


Workshop program

Thursday, May 24th - venue: Sorbonne Université

(Room 106, Towers 44-45, 4 place Jussieu, 75005 Paris)

Morning session (09h-12h30)
09h00 - 09h05 Mehdi Khamassi (CNRS / Sorbonne Université)
Introductory words

09h05 - 09h50 Kenji Doya (Okinawa Institute of Science and Technology)
Introductory talk: Building autonomous robots to understand what brains do

Abstract: What we thought would work based on intuition or mathematical insight often fails in computer simulations, and what works perfectly in computer simulations rarely works in hardware experiments without further fixing and tuning. Building autonomous robots is the best way, if not the only way, to understand the challenges our brains face in perception, control and learning. This short talk introduces what we have learned by building robots that try to survive and reproduce, and what we aim to understand in our "AI and Brain Science" project.
09h50 - 10h35 Jun Tani (Okinawa Institute of Science and Technology)
Exploring Robotic Minds by Using the Predictive Coding Principle

Abstract: My research motivation has been to investigate how cognitive agents can acquire structural representations through iterative interaction with the world, exercising agency and learning from the resultant perceptual experience. Over the past 20 years, my group has tackled this problem by applying the idea of predictive coding to the development of cognitive constructs in robots. Under the predictive coding principle, dense interactions take place between the top-down intention, proactively acting on the outer world, and the resultant bottom-up perceptual reality, accompanied by prediction error. Our finding has been that compositionality enabling some conceptualization can emerge through such iterative interaction, as the result of downward causation in terms of constraints, such as multiple spatio-temporal scale properties, imposed on the neural network models. The talk will highlight our recent results on interactive and integrative learning across multiple perceptual modalities, including pixel-level dynamic vision and proprioception, using a humanoid robot platform. Finally, I will point to one aim of future research: how the deep mind of a robot may arise through a long-term developmental and educational process.
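The predictive coding loop described in the abstract can be sketched in a few lines: a top-down generative model predicts the sensory input, and the bottom-up prediction error drives both inference of the latent intention and learning of the generative model. The linear mapping and learning rates below are illustrative assumptions, not the recurrent network models used in the talk.

```python
import numpy as np

# Toy predictive-coding loop (illustrative; the actual models are recurrent
# neural networks, not this linear system). A generative mapping W predicts
# the observation from a latent intention z; the prediction error drives both
# inference (updating z) and learning (updating W) by gradient descent on the
# squared error.

rng = np.random.default_rng(0)
obs_dim, latent_dim = 4, 2
x = rng.normal(size=(obs_dim, latent_dim)) @ rng.normal(size=latent_dim)  # observation

W = rng.normal(size=(obs_dim, latent_dim))  # generative weights (to be learned)
z = np.zeros(latent_dim)                    # top-down latent intention

for _ in range(5000):
    prediction = W @ z              # top-down prediction of the input
    error = x - prediction          # bottom-up prediction error
    z += 0.1 * (W.T @ error)        # inference: adjust intention to reduce error
    W += 0.01 * np.outer(error, z)  # learning: adjust the generative model

# After convergence the top-down prediction matches the observation.
residual = float(np.linalg.norm(x - W @ z))
```

The key design point mirrored here is that a single quantity, the prediction error, is reused twice: fast updates refine the agent's current intention, while slow updates refine its model of the world.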

Coffee break + posters in Room 102 (10h35-11h)
11h00 - 11h45 Natalia Díaz-Rodríguez (ENSTA ParisTech)
State Representation Learning for Control: An Overview

Contributors: Timothée Lesort, Natalia Díaz-Rodríguez, Jean-François Goudou, David Filliat (ENSTA ParisTech)
Abstract: Representation learning algorithms are designed to learn abstract features that characterize data. State representation learning (SRL) focuses on a particular kind of representation learning where the learned features are low-dimensional, evolve through time, and are influenced by the actions of an agent. As the learned representation captures the variation in the environment generated by agents, this kind of representation is particularly suitable for robotics and control scenarios. In particular, the low dimensionality helps to overcome the curse of dimensionality, makes the representation easier for humans to interpret and use, and can improve the performance and speed of policy learning algorithms such as reinforcement learning.
This survey aims to cover the state of the art in state representation learning over the most recent years. It reviews different SRL methods that involve interaction with the environment, their implementations and their applications to robotics control tasks (simulated or real). In particular, it highlights how generic learning objectives are exploited differently in the reviewed algorithms. Finally, it discusses evaluation methods to assess the learned representation and summarizes current and future lines of research.
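As a minimal illustration of one generic SRL objective the survey reviews (observation reconstruction), the sketch below compresses high-dimensional observations into a low-dimensional state with a linear autoencoder, solved in closed form via PCA. The toy environment (noisy 50-dimensional renderings of a latent 2-D agent position) is invented for the example.

```python
import numpy as np

# Toy state representation learning by reconstruction: a linear autoencoder
# (solved in closed form via PCA) compresses 50-D observations into a 2-D
# state. The "environment" is hypothetical: observations are noisy linear
# renderings of a latent 2-D agent position.

rng = np.random.default_rng(1)
n_samples, obs_dim, state_dim = 200, 50, 2
positions = rng.uniform(-1, 1, size=(n_samples, state_dim))   # latent 2-D states
mixing = rng.normal(size=(state_dim, obs_dim))                # fixed "rendering"
observations = positions @ mixing + 0.01 * rng.normal(size=(n_samples, obs_dim))

# PCA: the top principal components give the optimal linear reconstruction.
centered = observations - observations.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
encoder = vt[:state_dim].T        # (obs_dim, state_dim) projection
states = centered @ encoder       # learned low-dimensional state per sample

# Reconstruction error is tiny because the data lies near a 2-D subspace.
reconstruction_mse = float(np.mean((centered - states @ encoder.T) ** 2))
```

A downstream policy learner would consume `states` instead of the raw 50-D observations, which is where the dimensionality-reduction benefits mentioned in the abstract come in.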
11h45 - 12h30 Gianluca Baldassarre (ISTC, CNR)
Goals: a pillar of intelligence in brains and robots

Abstract: Goals are a pillar of intelligence, both natural and artificial. I will first present theoretical work and computational models supporting the idea that goals are internal representations of world states to which value, and hence desirability, is assigned. This value can originate either from extrinsic motivations, connected to the acquisition of biologically relevant resources, or from intrinsic motivations, linked to the acquisition of information. In the brain, this process of value assignment pivots on the nucleus accumbens, which forms a nexus between various sources of value, such as the amygdala and the hippocampus, and prefrontal cortex regions representing world states. This system is at the core of goal-directed behaviour and controls actions, via both cortico-cortical pathways and basal ganglia inter-loop connections, so that they flexibly adapt to internal needs and external conditions. I will then present some architectures that pivot on similar concepts but are directed at controlling autonomous humanoid robots. Here goals, self-generated on the basis of sensorimotor contingencies and intrinsic motivations, become the pivot of the autonomous learning of multiple motor skills, from those closely connected to the control of the body to those connected to the manipulation of objects. Goals are then usable within inverse models to recall the skills needed to pursue them when activated by extrinsic motivations. Overall, the different contributions show how goals are a fundamental pillar of autonomous learning and flexible behaviour.
Lunch break + posters in Room 102 (12h30-14h)
Afternoon session (14h-17h30)
14h00 - 14h45 Philippe Gaussier (Université de Cergy-Pontoise)
The hippocampo-cortical loop: spatio-temporal learning & goal-oriented planning

Contributors: J. Hirel, P. Gaussier, M. Quoy, J.P. Banquet, E. Save, B. Poucet
Abstract: We present a neural network model where the spatial and temporal components of a task are merged and learned in the hippocampus as chains of associations between sensory events. The prefrontal cortex integrates this information to build a cognitive map representing the environment. After latent learning, the cognitive map can be used to select optimal actions to fulfill the goals of the animal. The architecture is simulated and applied to learning and solving tasks that involve both spatial and temporal knowledge. We show how this model can solve the continuous place navigation task, where a rat has to navigate to an unmarked goal and wait for 2 seconds without moving to receive a reward. The results emphasize the role of the hippocampus in both spatial and timing prediction, and of the prefrontal cortex in learning the goals related to the task.
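The latent-learning-then-planning idea in the abstract can be sketched as follows: during exploration the agent stores place-to-place transitions as a graph (a stand-in for the cognitive map), and once a goal is set, a breadth-first search over the map returns a shortest action sequence. Place names, actions, and the maze layout below are invented for illustration.

```python
from collections import deque

# During exploration ("latent learning") the agent records transitions between
# places; planning is then a breadth-first search over this learned cognitive
# map. All place names and actions are hypothetical.

transitions = {}  # (place, action) -> next place

def observe(place, action, next_place):
    """Store one experienced transition in the cognitive map."""
    transitions[(place, action)] = next_place

# Exploration of a tiny four-place environment.
observe("A", "east", "B")
observe("B", "east", "C")
observe("B", "south", "D")
observe("A", "south", "D")

def plan(start, goal):
    """Return a shortest action sequence from start to goal, or None."""
    frontier, paths = deque([start]), {start: []}
    while frontier:
        place = frontier.popleft()
        if place == goal:
            return paths[place]
        for (p, action), nxt in transitions.items():
            if p == place and nxt not in paths:
                paths[nxt] = paths[place] + [action]
                frontier.append(nxt)
    return None

# plan("A", "C") -> ["east", "east"]; plan("A", "D") -> ["south"]
```

The point of the separation is that the map is built without any reward ("latent" learning); goal-directed behaviour falls out later, as soon as a goal is specified.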
14h45 - 15h30 Michèle Sebag (Université Paris-Saclay)


Coffee break + posters in Room 102 (15h30-15h50)
15h50 - 16h35 Tadahiro Taniguchi (Ritsumeikan University)
Unsupervised Language Acquisition by Robots with Hierarchical Bayesian Models

Abstract: Language acquisition is a challenging task in robotics. Most current machine learning methods for speech recognition, image recognition, and natural language processing rely heavily on labeled data, i.e., human-annotated supervised training data. However, human children acquire language from their sensorimotor information alone, without any artificial labeled data. Building a system that can acquire language from real-world experience is a major challenge in robotics and cognitive science. Our research field is called symbol emergence in robotics.
In this talk, I will introduce machine learning methods for unsupervised language acquisition by robots, focusing mainly on the word discovery task and multimodal categorization. We have been developing machine learning methods that enable a robot to learn words automatically.
I will introduce two unsupervised machine learning methods. One is the nonparametric Bayesian double articulation analyzer (NPB-DAA), which learns phonemes and words directly from speech signals using a hierarchical Dirichlet process hidden language model (HDP-HLM). The other is a method for the simultaneous learning of a map, lexicons, spatial categories, and localization, called online spatial concept acquisition and simultaneous localization and mapping (SpCoSLAM). SpCoSLAM integrates SLAM, multimodal categorization for forming spatial concepts, and lexical acquisition. Both methods are based on Bayesian nonparametrics.
16h35 - 17h20 Peter Dominey (INSERM)
Narrative Intelligence: The structuring role of language

Contributors: Peter Ford Dominey (1), Clement Delgrange (1,2), Jean-Michel Dussoux (2), David Mugisha (1,2), Nicolas Lair (1,2), Carol Madden Lombardi (1), Jocelyne Ventre-Dominey (1)
(1) INSERM U1208, Human and Robot Cognitive Systems Team, Bron
(2) Cloud Temple, Nanterre
Abstract: A significant component of the human brain has evolved, in terms of both volume and connectivity, to allow the symbolic language system to access integrative structures, particularly in the temporo-parietal junction (TPJ). This anatomical organization suggests that language has functional access to cognitive neural structures and can play a crucial role in the structuring and organization of experience. We will present results from our fMRI and DTI studies (Jouen et al. 2015, 2018 Neuroimage) of the semantic processing of meaningful human events, and the integration of these results into a neurocomputational model that allows the iCub to enrich its experience through human narrative (Mealier et al. 2017 Frontiers in Psychology). These results are currently being evolved into a new vision of AI assistants that will incorporate the power of narrative structure into machine learning algorithms.
Closing words


Workshop organizers

Kenji Doya (Okinawa Institute of Science and Technology, Okinawa, Japan)

Benoît Girard (CNRS / Sorbonne Université, Paris, France)

Mehdi Khamassi (CNRS / Sorbonne Université, Paris, France)

Ghilès Mostafaoui (Université de Cergy-Pontoise / CNRS / ENSEA, Cergy, France)

Alex Pitti (Université de Cergy-Pontoise / CNRS / ENSEA, Cergy, France)



Image credits: Top-left, logo of the AMAC team at ISIR by Benoît Girard; top-right, logo of the Japanese "AI and Brain" project.