Our paper "Hierarchical Generative Adversarial Imitation Learning with Mid-level Input Generation for Autonomous Driving on Urban Environments", resulting from the Master's work of Gustavo Claudio Karl Couto, has shown very interesting results:
Both the intermediate mid-level input BEV representation and the control policy are learned as the agent navigates an urban town (click for more videos from the playlist).
This work proposes a general Reservoir Computing (RC) learning framework which can be used to learn navigation behaviors for mobile robots in simple and complex unknown, partially observable environments. RC provides an efficient way to train recurrent neural networks by keeping the recurrent part of the network (called the reservoir) fixed while only a linear readout output layer is trained. The proposed RC framework builds upon the notion of a navigation attractor or behavior which can be embedded in the high-dimensional space of the reservoir after learning. Learning multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. Three learning approaches for navigation behaviors are presented in this paper. The first approach learns multiple behaviors based on examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors towards the goal.
During my PhD, I worked mainly on Reservoir Computing (RC) architectures applied to modeling cognitive capabilities for mobile robots from sensor data and, in some cases, through interaction with the environment.
Reservoir Computing (RC) is an efficient method for training recurrent neural networks, which can handle spatio-temporal processing tasks such as speech recognition. These networks are also biologically plausible, as recently argued in the literature.
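To make the training scheme concrete, here is a minimal NumPy sketch of an Echo State Network, the most common RC flavor: the recurrent weights are generated randomly and kept fixed, and only a linear readout is fit by ridge regression. This is a toy delayed-recall task; all sizes and scalings are illustrative, not the ones used in my experiments.

```python
import numpy as np

rng = np.random.default_rng(42)

# Reservoir: fixed random recurrent weights, scaled to a given spectral radius.
n_in, n_res = 3, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # spectral radius 0.9

def run_reservoir(U):
    """Drive the reservoir with input sequence U (T x n_in); return the states."""
    x = np.zeros(n_res)
    X = np.empty((len(U), n_res))
    for t, u in enumerate(U):
        x = np.tanh(W_in @ u + W @ x)        # leaky integration omitted for brevity
        X[t] = x
    return X

# Toy supervised task: only the linear readout is trained, by ridge regression.
U = rng.uniform(-1, 1, (1000, n_in))
Y = np.roll(U[:, :1], 1, axis=0)             # target: first input, delayed one step
X = run_reservoir(U)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
print("train MSE:", np.mean((X @ W_out - Y) ** 2))
```

Because only `W_out` is learned, training reduces to a single linear solve, which is what makes RC fast compared to backpropagation through time.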
In my case, I used these RC networks for modeling a wide range of capabilities for mobile robots, such as:
My publications are listed, and can be downloaded, on Google Scholar or here.
Some simulated and real robots employed in the experiments:
Environment used for localization experiments using the real e-puck robot:
After using unsupervised learning methods for self-localization, the plots below show the mean activation of place cells as a function of the robot position in the environment.
Red denotes a high response whereas blue denotes a low response.
It is possible to perform map generation through sensory prediction given the robot position as input. Black points represent the sensory readings, whereas gray points show the robot trajectory.
Autonomous robot navigation in partially observable environments is a complex task because the state of the environment cannot be completely determined from the current sensory readings of a robot alone. This work uses the recently introduced paradigm for training recurrent neural networks (RNNs), called reservoir computing (RC), to model multiple navigation attractors in partially observable environments. In RC, an RNN with randomly generated fixed weights, called the reservoir, projects the input into a high-dimensional dynamic space. Only the readout output layer is trained, using standard linear regression techniques, and in this work it is used to approximate the state-action value function. By using a policy iteration framework, where an alternating sequence of policy improvement (sample generation from environment interaction) and policy evaluation (network training) steps is performed, the system is able to shape navigation attractors so that, after convergence, the robot follows the correct trajectory towards the goal. The experiments are carried out using an e-puck robot extended with 8 distance sensors in a rectangular environment with an obstacle between the robot and the target region. The task is to reach the goal through the correct side of the environment, which is indicated by a temporary stimulus previously observed at the beginning of the episode. We show that the reservoir-based system (with short-term memory) can model these navigation attractors, whereas a feedforward network without memory fails to do so.
Reservoir Computing network as a function approximator for reinforcement learning tasks in partially observable environments. The reservoir is a dynamical system of recurrent nodes. Solid lines represent fixed connections; dashed lines are the connections to be trained.
Motor primitives or basic behaviors: left, forward and right.
A sequence of robot trajectories as learning evolves, using the ESN. Each plot shows robot trajectories in the environment for several episodes during the learning process. In the beginning, exploration is high and several locations are visited by the robot. As the simulation develops, two navigation attractors are formed to the left and to the right so that the agent receives maximal reward.
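A rough sketch of how such a policy-iteration loop can look on top of reservoir states, in the spirit of fitted Q-iteration with one linear readout per motor primitive. The data below are random placeholders standing in for real robot interaction, and the function names and sizes are illustrative assumptions, not the exact formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_actions, gamma = 100, 3, 0.95   # 3 motor primitives: left, forward, right

# Suppose tuples (x_t, a_t, r_t, x_{t+1}) were logged while the robot interacted
# with the environment; random placeholders stand in for real reservoir states.
T = 500
X  = rng.standard_normal((T, n_res))     # reservoir state at time t
A  = rng.integers(0, n_actions, T)       # motor primitive taken
R  = rng.standard_normal(T)              # reward received
Xn = rng.standard_normal((T, n_res))     # reservoir state at time t+1

W_out = np.zeros((n_res, n_actions))     # one linear Q-readout per action

def fitted_q_step(W_out):
    """One evaluation step: regress Q(x_t, a_t) onto bootstrapped targets."""
    targets = R + gamma * (Xn @ W_out).max(axis=1)   # r + gamma * max_a' Q(x', a')
    W_new = np.zeros_like(W_out)
    for a in range(n_actions):
        m = A == a                                   # samples where action a was taken
        ridge = 1e-3 * np.eye(n_res)
        W_new[:, a] = np.linalg.solve(X[m].T @ X[m] + ridge, X[m].T @ targets[m])
    return W_new

# Alternate policy evaluation (regression) and improvement (greedy targets).
for _ in range(10):
    W_out = fitted_q_step(W_out)

greedy_actions = (X @ W_out).argmax(axis=1)          # improved (greedy) policy
```

The key point is that the reservoir state, not the raw sensor reading, is the input to the value function, so the short-term memory of the reservoir lets the readout distinguish situations that look identical to the sensors alone.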
This work proposes a hierarchical biologically-inspired architecture for learning sensor-based spatial representations of a robot environment in an unsupervised way. The first layer is comprised of a fixed randomly generated recurrent neural network, the reservoir, which projects the input into a high-dimensional, dynamic space. The second layer learns instantaneous slowly-varying signals from the reservoir states using Slow Feature Analysis (SFA), whereas the third layer learns a sparse coding on the SFA layer using Independent Component Analysis (ICA). While the SFA layer generates non-localized activations in space, the ICA layer presents high place selectivity, forming a localized spatial activation, characteristic of place cells found in the hippocampus area of the rodent’s brain. We show that, using a limited number of noisy short-range distance sensors as input, the proposed system learns a spatial representation of the environment which can be used to predict the actual location of simulated and real robots, without the use of odometry. The results confirm that the reservoir layer is essential for learning spatial representations from low-dimensional input such as distance sensors. The main reason is that the reservoir state reflects the recent history of the input stream. Thus, this fading memory is essential for detecting locations, mainly when locations are ambiguous and characterized by similar sensor readings.
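The SFA stage of this hierarchy can be sketched in a few lines of NumPy: whiten the input, then keep the directions whose temporal derivative has minimal variance. This is a generic linear SFA on a toy signal, not the exact pipeline of the paper.

```python
import numpy as np

def sfa(X, n_slow=1):
    """Linear Slow Feature Analysis: find the projections of X whose outputs
    vary most slowly in time (smallest mean squared temporal derivative)."""
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(X)
    d, E = np.linalg.eigh(cov)
    keep = d > 1e-10 * d.max()
    S = E[:, keep] / np.sqrt(d[keep])          # whitening transform
    Z = X @ S
    dZ = np.diff(Z, axis=0)                    # temporal derivative (finite diff)
    d2, E2 = np.linalg.eigh(dZ.T @ dZ / len(dZ))
    W = S @ E2[:, :n_slow]                     # eigh sorts ascending: slowest first
    return X @ W, W

# Toy check: one slow sine hidden in five noisy, randomly mixed channels.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 2000)
slow = np.sin(t)
channels = slow[:, None] + 0.1 * rng.standard_normal((2000, 5))
mix = channels @ rng.standard_normal((5, 5))   # random linear mixing
Y, _ = sfa(mix, n_slow=1)
corr = abs(np.corrcoef(Y[:, 0], slow)[0, 1])
print(f"correlation of slowest feature with the hidden sine: {corr:.3f}")
```

In the architecture above the input to SFA is the reservoir state sequence rather than a raw mixture, which is what allows slowly varying spatial signals to emerge from fast, low-dimensional distance readings.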
Video for data generation:
Publications
Eric Antonelo and Benjamin Schrauwen. Learning slow features with reservoir computing for biologically-inspired robot localization. Neural Networks, pp. 178-190 (2011).
Eric Antonelo and Benjamin Schrauwen. Towards autonomous self-localization of small mobile robots using reservoir computing and slow feature analysis. IEEE International Conference on Systems, Man, and Cybernetics, Conference Digest, Vol. 2 (2009).
Eric Antonelo and Benjamin Schrauwen. Unsupervised learning in reservoir computing: modeling hippocampal place cells for small mobile robots. Lecture Notes in Computer Science, Vol. 5768, pp. 747-756 (2009).
In this work we propose a hierarchical architecture which constructs internal models of a robot environment for goal-oriented navigation through an imitation learning process. The proposed architecture is based on the Reservoir Computing paradigm for training Recurrent Neural Networks (RNNs). It is composed of two randomly generated RNNs (called reservoirs), one for modeling the localization capability and one for learning the navigation skill. The localization module is trained to detect the current and previously visited robot rooms based only on 8 noisy infrared distance sensors. These predictions, together with the distance sensors and the desired goal location, are used by the navigation network to actually steer the robot through the environment in a goal-oriented manner. The training of this architecture is performed in a supervised way (with examples of trajectories created by a supervisor) using linear regression on the reservoir states. Thus, the reservoir acts as a temporal kernel, projecting the inputs into a rich feature space whose states are linearly combined to generate the desired outputs. Experimental results on a simulated robot show that the trained system can localize itself within both simple and large unknown environments and navigate successfully to desired goals.
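The wiring of this two-reservoir hierarchy can be sketched as follows. The readout weights below are random placeholders; in the actual architecture they are obtained by linear regression on supervisor-generated trajectories, and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def make_reservoir(n_in, n_res=200, rho=0.9, seed=0):
    """Randomly generated, fixed reservoir scaled to spectral radius rho."""
    r = np.random.default_rng(seed)
    W_in = r.uniform(-0.5, 0.5, (n_res, n_in))
    W = r.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run(W_in, W, U):
    """Drive the reservoir with input sequence U; return the state sequence."""
    x = np.zeros(W.shape[0])
    states = np.empty((len(U), W.shape[0]))
    for t, u in enumerate(U):
        x = np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

n_sensors, n_rooms, n_goal, n_motor = 8, 4, 4, 2
loc_res = make_reservoir(n_sensors, seed=1)                     # localization module
nav_res = make_reservoir(n_sensors + n_rooms + n_goal, seed=2)  # navigation module

# Readouts: in the real architecture these are fit by linear regression on
# supervisor trajectories; random placeholders here.
W_loc = rng.standard_normal((200, n_rooms))
W_nav = rng.standard_normal((200, n_motor))

sensors = rng.uniform(0, 1, (100, n_sensors))   # fake infrared distance readings
goal = np.tile(np.eye(n_goal)[2], (100, 1))     # one-hot desired goal room

rooms = run(*loc_res, sensors) @ W_loc          # room predictions (stage 1)
nav_in = np.hstack([sensors, rooms, goal])      # sensors + rooms + goal
motor = run(*nav_res, nav_in) @ W_nav           # steering output (stage 2)
```

The design choice is that the navigation reservoir never sees raw position; it only sees the localization module's room predictions, which act as a learned internal model of context.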
Eric Antonelo and Benjamin Schrauwen. On Learning Navigation Behaviors for Small Mobile Robots with Reservoir Computing Architectures. IEEE Transactions on Neural Networks and Learning Systems, Vol. 26, pp. 763-780 (2014). DOI: 10.1109/TNNLS.2014.2323247.
Eric Antonelo and Benjamin Schrauwen. Supervised learning of internal models for autonomous goal-oriented robot navigation using Reservoir Computing. IEEE International Conference on Robotics and Automation, Proceedings, pp. 6 (2010).
In this work we tackle the road sign problem with Reservoir Computing (RC) networks. The T-maze task (a particular form of the road sign problem) consists of a robot in a T-shaped environment that must reach the correct goal (the left or right arm of the T-maze) depending on a previously received input sign. It is a control task in which the delay period between the sign received and the required response (e.g., turn right or left) is a crucial factor. Delayed response tasks like this one form a temporal problem that can be handled very well by RC networks. Reservoir Computing is a biologically plausible technique which overcomes the problems of previous algorithms such as Backpropagation Through Time, which exhibits slow convergence (or none at all) during training. RC offers a fast and efficient training procedure instead. We show that this simple approach can solve the T-maze task efficiently.
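The core claim, that the reservoir's fading memory can bridge the delay between sign and response, can be illustrated with a small NumPy experiment: a cue is presented once, followed by a noisy delay period, and a linear readout trained on the final reservoir state still recovers the cue. Sizes, delay length, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_res, delay, n_episodes = 200, 10, 400

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.95 / max(abs(np.linalg.eigvals(W)))     # spectral radius 0.95

def final_state(cue):
    """Present the cue (+1 or -1) at t = 0, then only small noise;
    return the reservoir state after the delay period."""
    x = np.zeros(n_res)
    for t in range(delay):
        u = cue if t == 0 else 0.05 * rng.standard_normal()
        x = np.tanh(W_in[:, 0] * u + W @ x)
    return x

cues = rng.choice([-1.0, 1.0], n_episodes)
X = np.array([final_state(c) for c in cues])

# Ridge-regression readout trained to recall the cue after the delay,
# evaluated on held-out episodes.
Xtr, ytr, Xte, yte = X[:300], cues[:300], X[300:], cues[300:]
w = np.linalg.solve(Xtr.T @ Xtr + 0.1 * np.eye(n_res), Xtr.T @ ytr)
acc = np.mean(np.sign(Xte @ w) == yte)
print(f"cue recalled after {delay} steps, test accuracy: {acc:.2f}")
```

A memoryless (feedforward) network given only the final observation would be at chance here, since the cue is long gone from the input by decision time.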
Video showing trained RC network controlling the robot:
Publications
Eric Antonelo, Benjamin Schrauwen and Dirk Stroobandt. Mobile Robot Control in the Road Sign Problem using Reservoir Computing Networks. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 911-916 (2008).
Eric Antonelo, Benjamin Schrauwen and Jan Van Campenhout. Generative Modeling of Autonomous Robots and their Environments using Reservoir Computing. Neural Processing Letters, Vol. 26(3), pp. 233-249 (2007).
Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamic system operating at the edge of stability, where only a static linear readout output layer is trained by standard linear regression methods. In this work, RC is used for detecting complex events in autonomous robot navigation. This can be extended to robot localization tasks based solely on a few low-range, high-noise distance sensors. The robot thus builds an implicit map of the environment (after learning) that is used for efficient localization by simply processing the stream of distance sensor readings. These techniques are demonstrated both in a simple simulation environment and in the physically realistic Webots simulation of the commercially available e-puck robot, using several complex and even dynamic environments.
Videos showing data generation for event detection and localization:
Autonomous mobile robots form an important research topic in the field of robotics due to their near-term applicability in the real world as domestic service robots. Such robots must be trained efficiently from example sequences. They need to be aware of their position in the environment and also need to create models of it for deliberative planning. These tasks have to be performed using a limited number of sensors with low accuracy, as well as with a restricted amount of computational power. In this contribution we show that the recently emerged paradigm of Reservoir Computing (RC) is very well suited to solve all of the above mentioned problems, namely learning by example, robot localization, and map and path generation. Reservoir Computing is a technique which enables a system to learn any time-invariant filter of the input by training a simple linear regressor that acts on the states of a high-dimensional but random dynamic system excited by the inputs. In addition, RC is a simple technique featuring ease of training and low computational and memory demands.
Eric Antonelo, Benjamin Schrauwen and Jan Van Campenhout. Generative Modeling of Autonomous Robots and their Environments using Reservoir Computing. Neural Processing Letters, Vol. 26(3), pp. 233-249 (2007).
Title of Master thesis: A Neural Reinforcement Learning Approach for Intelligent Autonomous Navigation Systems
Classical reinforcement learning mechanisms and a modular neural network are unified to conceive an intelligent autonomous system for mobile robot navigation. The design aims at inhibiting two common navigation deficiencies: the generation of unsuitable cyclic trajectories and ineffectiveness in risky configurations. Different design apparatuses are considered to compose a system that tackles these navigation difficulties, for instance: 1) a neuron parameter that simultaneously memorizes neuron activity and functions as a learning factor, 2) reinforcement learning mechanisms that adjust neuron parameters (not only synapse weights), and 3) an inner-triggered reinforcement. Simulation results show that the proposed system circumvents difficulties caused by specific environment configurations, improving the ratio between collisions and captures.
Video (inhibiting unsuitable cyclic trajectories through reinforcement learning):
The robot starts out not knowing what it should do in the environment, but as time passes we can see that it interacts with the environment by colliding with obstacles and capturing targets (yellow boxes). Each collision elicits an appropriate innate response, i.e., aversion. As more collisions take place, its neural network learns to associate obstacles (and their blue color) with aversion behaviors so that it can steer away from obstacles (an emergent behavior). The same process occurs for target capture, which becomes associated with attraction behavior through learning. In the end, the robot can navigate the environment efficiently, capturing targets and effectively suppressing the cyclic trajectories common to such reactive systems.
Video (robot cooperation; each robot trained with the previous neural network architecture):
The intelligent autonomous system corresponds to a neural network arranged in three layers (Fig. 4). In the first layer there are two neural repertoires: the Proximity Identifier repertoire (PI) and the Color Identifier repertoire (CI). Distance sensors stimulate the PI repertoire, whereas color sensors feed the CI repertoire. Both repertoires also receive stimuli from contact sensors. The second layer is composed of two neural repertoires: the Attraction repertoire (AR) and the Repulsion repertoire (RR). Each one establishes connections with both networks in the first layer as well as with the contact sensors. The actuator network, connected to the AR and RR repertoires, outputs the adjustment to the robot's direction.