Research by Application Area

- Autonomous Vehicles (DAS-UFSC)
- Physics-Informed Neural Networks (DAS-UFSC)
- Robot learning (navigation/localization) through dynamical systems (supervised, unsupervised and reinforcement learning; PhD at Ghent University)
- Proxy models and recurrent neural learning for control in Oil and Gas (cooperation with Petrobras and DAS/UFSC, Brazil)
- Deviation detection through dynamical reservoir models (cooperation with Halmstad University)
- Time series-based fraud detection (University of Luxembourg / Choice Technologies)

Research

The research projects are also described below:

Cognitive computation for Deviation detection in Fleet of City Buses

on Wed, 12/09/2015 - 16:21

With Prof. Thorsteinn Rögnvaldsson from Halmstad University, Sweden, we are investigating how Reservoir Computing can help with deviation detection in a fleet of Swedish city buses, using the air-tank pressure signal from each bus to predict, well in advance, when a bus is going to break down.
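
As a rough illustration of the idea (not the project code), the sketch below trains an echo state network as a one-step-ahead predictor of a synthetic pressure signal and flags a deviation when the prediction error grows; the signal shape, network sizes and threshold are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical air-tank pressure signal: a slow oscillation plus noise,
# with a drift injected in the second half to emulate a developing fault.
t = np.arange(4000)
pressure = np.sin(2 * np.pi * t / 200) + 0.05 * rng.standard_normal(t.size)
pressure[2500:] += np.linspace(0, 1.0, t.size - 2500)      # simulated deviation

# Echo state network: fixed random reservoir, trained linear readout.
n_res = 200
W_in = 0.5 * rng.uniform(-1, 1, (n_res, 1))
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))             # spectral radius < 1

def reservoir_states(u):
    x = np.zeros(n_res)
    states = np.zeros((len(u), n_res))
    for k, u_k in enumerate(u):
        x = np.tanh(W_in @ np.array([u_k]) + W @ x)
        states[k] = x
    return states

# Train a one-step-ahead predictor on the healthy part of the signal only.
train_end = 2000
X = reservoir_states(pressure[:train_end - 1])
y = pressure[1:train_end]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

# Run on the full signal; a large residual signals a deviation from normal behaviour.
X_all = reservoir_states(pressure[:-1])
residual = np.abs(X_all @ W_out - pressure[1:])
threshold = residual[:train_end].mean() + 5 * residual[:train_end].std()
print("first flagged sample:", int(np.argmax(residual > threshold)))
```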

Video from the project at Halmstad University:

 

 

 

Learning navigation attractors for mobile robots with reinforcement learning and reservoir computing

on Wed, 12/16/2015 - 16:48

Autonomous robot navigation in partially observable environments is a complex task because the state of the environment cannot be determined from the robot's current sensory readings alone. This work uses the recently introduced paradigm for training recurrent neural networks (RNNs), called reservoir computing (RC), to model multiple navigation attractors in partially observable environments. In RC, an RNN with randomly generated fixed weights, called the reservoir, projects the input into a high-dimensional dynamic space. Only the readout output layer is trained, using standard linear regression techniques, and in this work it approximates the state-action value function. Within a policy iteration framework, in which an alternating sequence of policy improvement (sample generation from environment interaction) and policy evaluation (network training) steps is performed, the system shapes navigation attractors so that, after convergence, the robot follows the correct trajectory towards the goal. The experiments use an e-puck robot extended with 8 distance sensors in a rectangular environment with an obstacle between the robot and the target region. The task is to reach the goal through the correct side of the environment, which is indicated by a temporary stimulus observed at the beginning of the episode. We show that the reservoir-based system (with short-term memory) can model these navigation attractors, whereas a feedforward network without memory fails to do so.
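
The sketch below illustrates the mechanics under discussion with toy dimensions and randomly generated transitions standing in for real environment interaction: a fixed leaky reservoir provides the state representation, a linear readout holds the Q-values for the three behaviors, an epsilon-greedy policy over that readout would drive data collection, and repeated least-squares fits of the readout play the role of the policy evaluation step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed random reservoir (not trained); only the linear Q readout is learned.
n_obs, n_act, n_res = 8, 3, 300            # e.g. 8 distance sensors, 3 behaviours
W_in = 0.4 * rng.uniform(-1, 1, (n_res, n_obs))
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_step(x, obs, leak=0.3):
    """Leaky-integrator update; x carries the short-term memory of past observations."""
    return (1 - leak) * x + leak * np.tanh(W_in @ obs + W @ x)

def greedy_action(W_out, x, eps=0.1):
    """Epsilon-greedy policy over the readout's Q estimates (used during data collection)."""
    if rng.random() < eps:
        return int(rng.integers(n_act))
    return int(np.argmax(W_out @ x))

def fit_readout(states, actions, rewards, next_states, W_out, gamma=0.95, ridge=1e-4):
    """One policy-evaluation step: regress reservoir states onto bootstrapped Q targets."""
    q_next = (W_out @ next_states.T).max(axis=0)           # max_a Q(x', a)
    targets = rewards + gamma * q_next
    W_new = np.zeros((n_act, n_res))
    for a in range(n_act):
        rows = actions == a                                # transitions where a was taken
        X = states[rows]
        W_new[a] = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ targets[rows])
    return W_new

# Mechanics demo on synthetic transitions (a real run would collect these by
# interacting with the robot simulator under the current epsilon-greedy policy).
states = rng.standard_normal((500, n_res))
actions = rng.integers(0, n_act, 500)
rewards = rng.random(500)
next_states = rng.standard_normal((500, n_res))
W_out = np.zeros((n_act, n_res))
for _ in range(5):                                         # policy iteration sketch
    W_out = fit_readout(states, actions, rewards, next_states, W_out)
```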

Figure: Reservoir Computing network as a function approximator for reinforcement learning tasks with partially observable environments. The reservoir is a dynamical system of recurrent nodes. Solid lines represent fixed connections; dashed lines are the connections to be trained.

 

Figure: Motor primitives (basic behaviors): left, forward and right.

 

Figure: A sequence of robot trajectories as learning evolves, using the ESN. Each plot shows robot trajectories in the environment for several episodes during the learning process. In the beginning, exploration is high and many locations are visited by the robot. As the simulation develops, two navigation attractors form, to the left and to the right, so that the agent receives maximal reward.

 

Biologically-inspired robot localization (Place cells)

on Wed, 12/09/2015 - 17:21

This work proposes a hierarchical biologically-inspired architecture for learning sensor-based spatial representations of a robot environment in an unsupervised way. The first layer is comprised of a fixed randomly generated recurrent neural network, the reservoir, which projects the input into a high-dimensional, dynamic space. The second layer learns instantaneous slowly-varying signals from the reservoir states using Slow Feature Analysis (SFA), whereas the third layer learns a sparse coding on the SFA layer using Independent Component Analysis (ICA). While the SFA layer generates non-localized activations in space, the ICA layer presents high place selectivity, forming a localized spatial activation, characteristic of place cells found in the hippocampus area of the rodent’s brain. We show that, using a limited number of noisy short-range distance sensors as input, the proposed system learns a spatial representation of the environment which can be used to predict the actual location of simulated and real robots, without the use of odometry. The results confirm that the reservoir layer is essential for learning spatial representations from low-dimensional input such as distance sensors. The main reason is that the reservoir state reflects the recent history of the input stream. Thus, this fading memory is essential for detecting locations, mainly when locations are ambiguous and characterized by similar sensor readings.
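
A minimal sketch of this three-layer pipeline is shown below, using random numbers in place of a recorded sensor stream (so no genuine place cells will emerge here): a fixed reservoir expands the input, a plain linear SFA picks out the slowest directions of the reservoir states, and FastICA sparsifies them; all sizes are arbitrary.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)

# --- Layer 1: fixed random reservoir driven by (hypothetical) distance sensors ---
n_in, n_res = 8, 400
W_in = 0.3 * rng.uniform(-1, 1, (n_res, n_in))
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(U):
    x = np.zeros(n_res)
    X = np.zeros((len(U), n_res))
    for k, u in enumerate(U):
        x = np.tanh(W_in @ u + W @ x)
        X[k] = x
    return X

# --- Layer 2: linear Slow Feature Analysis on the reservoir states ---
def sfa(X, n_components):
    X = X - X.mean(axis=0)
    # Whiten the states.
    d, E = np.linalg.eigh(np.cov(X.T))
    keep = d > 1e-10
    S = (X @ E[:, keep]) / np.sqrt(d[keep])
    # Slow directions = eigenvectors of the covariance of the temporal difference
    # with the SMALLEST eigenvalues (eigh returns them in ascending order).
    dS = np.diff(S, axis=0)
    _, E2 = np.linalg.eigh(np.cov(dS.T))
    return S @ E2[:, :n_components]

# --- Layer 3: ICA on the slow features yields sparse, place-cell-like units ---
U = rng.uniform(0, 1, (3000, n_in))        # stand-in for a recorded sensor stream
X = run_reservoir(U)
slow = sfa(X, n_components=16)
place_like = FastICA(n_components=16, random_state=0).fit_transform(slow)
print(place_like.shape)                    # (3000, 16) unsupervised spatial code
```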

Video for data generation:

 

 

Publications

  1. Eric Antonelo and Benjamin Schrauwen. Learning slow features with reservoir computing for biologically-inspired robot localization. Neural Networks, pp. 178-190 (2011).
  2. Eric Antonelo and Benjamin Schrauwen. Towards autonomous self-localization of small mobile robots using reservoir computing and slow feature analysis. IEEE International Conference on Systems, Man, and Cybernetics, Conference Digest, Vol. 2 (2009).
  3. Eric Antonelo and Benjamin Schrauwen. Unsupervised learning in reservoir computing: modeling hippocampal place cells for small mobile robots. Lecture Notes in Computer Science, Vol. 5768, pp. 747-756 (2009).

 

Supervised Learning of Internal Models for Autonomous Goal-Oriented Robot Navigation using Reservoir Computing

on Wed, 12/16/2015 - 14:45

In this work we propose a hierarchical architecture which constructs internal models of a robot environment for goal-oriented navigation by an imitation learning process. The proposed architecture is based on the Reservoir Computing paradigm for training Recurrent Neural Networks (RNNs). It is composed of two randomly generated RNNs (called reservoirs), one for modeling the localization capability and one for learning the navigation skill. The localization module is trained to detect the current and previously visited rooms based only on 8 noisy infrared distance sensors. These predictions, together with the distance sensors and the desired goal location, are used by the navigation network to steer the robot through the environment in a goal-oriented manner. The training of this architecture is performed in a supervised way (with example trajectories created by a supervisor) using linear regression on the reservoir states. The reservoir thus acts as a temporal kernel, projecting the inputs into a rich feature space whose states are linearly combined to generate the desired outputs. Experimental results on a simulated robot show that the trained system can localize itself within both simple and large unknown environments and navigate successfully to desired goals.
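
The following sketch mirrors the two-reservoir cascade described above with placeholder data and made-up dimensions: a first reservoir's readout is ridge-regressed onto room labels, and a second reservoir receives the sensors, the predicted rooms and the goal, and is ridge-regressed onto the supervisor's motor commands.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_reservoir(n_in, n_res, scale_in=0.3, rho=0.9, seed=0):
    r = np.random.default_rng(seed)
    W_in = scale_in * r.uniform(-1, 1, (n_res, n_in))
    W = r.uniform(-1, 1, (n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

def run(W_in, W, U):
    x = np.zeros(W.shape[0])
    X = np.zeros((len(U), W.shape[0]))
    for k, u in enumerate(U):
        x = np.tanh(W_in @ u + W @ x)
        X[k] = x
    return X

def ridge_fit(X, Y, lam=1e-4):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Placeholder teacher data (a real run would use trajectories driven by a supervisor):
T, n_sensors, n_rooms, n_motors = 2000, 8, 4, 2
sensors = rng.uniform(0, 1, (T, n_sensors))
room_teacher = np.eye(n_rooms)[rng.integers(0, n_rooms, T)]    # one-hot current room
goal = np.eye(n_rooms)[rng.integers(0, n_rooms, T)]            # one-hot desired room
motor_teacher = rng.uniform(-1, 1, (T, n_motors))              # supervisor's commands

# Localization reservoir: sensors -> room predictions.
Wi1, W1 = make_reservoir(n_sensors, 300, seed=10)
X1 = run(Wi1, W1, sensors)
W_loc = ridge_fit(X1, room_teacher)
rooms_pred = X1 @ W_loc

# Navigation reservoir: sensors + predicted rooms + goal -> motor commands.
nav_in = np.hstack([sensors, rooms_pred, goal])
Wi2, W2 = make_reservoir(nav_in.shape[1], 300, seed=11)
X2 = run(Wi2, W2, nav_in)
W_nav = ridge_fit(X2, motor_teacher)
print((X2 @ W_nav).shape)     # (T, 2) motor commands imitating the supervisor
```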

 

 

 

Publications

  1. Eric Antonelo and Benjamin Schrauwen. On Learning Navigation Behaviors for Small Mobile Robots with Reservoir Computing Architectures. IEEE Transactions on Neural Networks and Learning Systems, Vol. 26, pp. 763-780 (2014). DOI: 10.1109/TNNLS.2014.2323247.
  2. Eric Antonelo and Benjamin Schrauwen. Supervised learning of internal models for autonomous goal-oriented robot navigation using Reservoir Computing. IEEE International Conference on Robotics and Automation, Proceedings, pp. 6 (2010).

 

Delayed Response Tasks in Robot Control

on Wed, 12/09/2015 - 14:38

In this work we tackle the road sign problem with Reservoir Computing (RC) networks. The T-maze task (a particular form of the road sign problem) consists of a robot in a T-shaped environment that must reach the correct goal (the left or right arm of the T-maze) depending on a previously received input sign. It is a control task in which the delay between the sign and the required response (e.g., turn right or left) is the crucial factor. Delayed response tasks like this one form a temporal problem that RC networks can handle very well. Reservoir Computing is a biologically plausible technique that overcomes the problems of earlier algorithms such as Backpropagation Through Time, which converges slowly or not at all during training; RC instead offers a fast and efficient training procedure. We show that this simple approach can solve the T-maze task efficiently.
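
A toy version of the task, with invented sizes and a simplified corridor, can make the role of the reservoir's fading memory concrete: the sign is shown only at the first step, and a linear readout trained by ridge regression must recover it from the reservoir state at the junction. Because the sensory input at decision time is identical for both signs, a memoryless (feedforward) mapping cannot solve this.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy road sign / T-maze trial: a sign (+1 or -1) is visible only at the first
# time step; after a variable delay the controller must output the matching turn.
def make_trial():
    sign = rng.choice([-1.0, 1.0])
    delay = int(rng.integers(20, 41))
    u = np.zeros((delay, 2))
    u[0, 0] = sign          # the road sign
    u[:, 1] = 1.0           # constant "corridor" input
    return u, sign

# Fixed random leaky reservoir; the leak rate slows the dynamics so that the
# fading memory spans the whole delay period.
n_res, leak = 150, 0.2
W_in = 0.5 * rng.uniform(-1, 1, (n_res, 2))
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.99 / np.max(np.abs(np.linalg.eigvals(W)))

def final_state(u):
    x = np.zeros(n_res)
    for u_k in u:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u_k + W @ x)
    return x

trials = [make_trial() for _ in range(200)]
X = np.array([final_state(u) for u, _ in trials])    # reservoir state at the junction
y = np.array([s for _, s in trials])

# The only trained part: a linear readout fitted by ridge regression.
lam = 1e-4
w = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
print("turn accuracy:", np.mean(np.sign(X @ w) == y))
```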

 

Video showing trained RC network controlling the robot:

 

Publications

  1. Eric Antonelo, Benjamin Schrauwen and Dirk Stroobandt. Mobile Robot Control in the Road Sign Problem using Reservoir Computing Networks. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 911-916 (2008).
  2. Eric Antonelo, Benjamin Schrauwen and Jan Van Campenhout. Generative Modeling of Autonomous Robots and their Environments using Reservoir Computing. Neural Processing Letters, Vol. 26(3), pp. 233-249 (2007).

Event detection and localization for small mobile robots using reservoir computing

on Wed, 12/09/2015 - 14:06

Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamical system operating at the edge of stability, in which only a linear static readout layer is trained by standard linear regression methods. In this work, RC is used for detecting complex events in autonomous robot navigation. This can be extended to robot localization tasks based solely on a few low-range, high-noise sensors. After learning, the robot thus holds an implicit map of the environment, which it uses for efficient localization simply by processing the stream of distance sensor readings. These techniques are demonstrated both in a simple simulation environment and in the physically realistic Webots simulation of the commercially available e-puck robot, using several complex and even dynamic environments.
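
As a compact sketch of the localization part (hypothetical sensor stream and zone labels, arbitrary sizes), the reservoir runs over the sensor readings and one readout unit per location is fitted on one-hot targets; the predicted location at each time step is then the winner-take-all readout.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical labelled stream: 8 noisy distance readings per step plus the true
# location label (one of 10 zones), available during training only.
T, n_in, n_loc, n_res = 5000, 8, 10, 300
sensors = rng.uniform(0, 1, (T, n_in))
labels = rng.integers(0, n_loc, T)                 # stand-in for recorded zone labels

W_in = 0.3 * rng.uniform(-1, 1, (n_res, n_in))
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

x, X = np.zeros(n_res), np.zeros((T, n_res))
for k in range(T):
    x = np.tanh(W_in @ sensors[k] + W @ x)         # fading memory of the sensor stream
    X[k] = x

# One readout unit per location, fitted by ridge regression on one-hot targets;
# at run time the predicted location is the winner-take-all readout.
Y = np.eye(n_loc)[labels]
lam = 1e-4
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)
predicted_zone = np.argmax(X @ W_out, axis=1)      # the robot's "implicit map" in use
```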

 

Videos showing data generation for event detection and localization:

 

 

Publications

  1. Eric Antonelo, Benjamin Schrauwen and Dirk Stroobandt. Event detection and localization for small mobile robots using reservoir computing. Neural Networks, Vol. 21(6), pp. 862-871 (2008).
  2. Eric Antonelo, Benjamin Schrauwen, Xavier Dutoit, Dirk Stroobandt and Marnix Nuttin. Event detection and localization in mobile robot navigation using reservoir computing. Proceedings of the International Conference on Artificial Neural Networks (ICANN), pp. 660-669 (2007).

Generative Modeling of Autonomous Robots and their Environments using Reservoir Computing

on Wed, 01/20/2016 - 21:02

Autonomous mobile robots are an important research topic in robotics due to their near-term applicability in the real world, for example as domestic service robots. These robots must be designed efficiently, learning from training sequences. They need to be aware of their position in the environment and also need to build models of it for deliberative planning. These tasks have to be performed with a limited number of low-accuracy sensors and a restricted amount of computational power. In this contribution we show that the recently emerged paradigm of Reservoir Computing (RC) is very well suited to solve all of the above-mentioned problems, namely learning by example, robot localization, and map and path generation. Reservoir Computing is a technique which enables a system to learn any time-invariant filter of the input by training a simple linear regressor that acts on the states of a high-dimensional but random dynamical system excited by the inputs. In addition, RC is a simple technique featuring ease of training and low computational and memory demands.

Keywords: reservoir computing, generative modeling, map learning, T-maze task, road sign problem, path generation
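
One way to picture the generative use of such a filter is sketched below with a synthetic periodic "sensor" trace: a readout is trained to predict the next sensor vector, and in free-run mode the prediction is fed back as the next input so the network generates a continuation on its own. The trace, the sizes and the absence of output feedback during training are simplifications for this example, not details of the paper, and a free run trained this way may drift.

```python
import numpy as np

rng = np.random.default_rng(6)

# Training stream: a hypothetical periodic sensor trace standing in for the
# readings a robot collects while driving through its environment.
T, n_in, n_res = 3000, 4, 300
t = np.arange(T)
U = np.stack([np.sin(2 * np.pi * t / (50 + 20 * i)) for i in range(n_in)], axis=1)

W_in = 0.3 * rng.uniform(-1, 1, (n_res, n_in))
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))

x, X = np.zeros(n_res), np.zeros((T - 1, n_res))
for k in range(T - 1):
    x = np.tanh(W_in @ U[k] + W @ x)
    X[k] = x

# Readout trained to predict the NEXT sensor vector (one-step-ahead model).
lam = 1e-5
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ U[1:])

# Generative ("free-run") mode: the prediction is fed back as the next input,
# so the network produces a self-generated continuation of the sensor stream.
u = U[-1].copy()
dream = []
for _ in range(500):
    x = np.tanh(W_in @ u + W @ x)
    u = x @ W_out
    dream.append(u)
dream = np.array(dream)        # (500, 4) self-generated sensor trajectory
```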

 

 

 

 

 

 

Related Publications

  1. Eric Antonelo, Benjamin Schrauwen and Jan Van Campenhout. Generative Modeling of Autonomous Robots and their Environments using Reservoir Computing. Neural Processing Letters, Vol. 26(3), pp. 233-249 (2007).

 

Reinforcement learning of robot behaviors (Master thesis)

on Wed, 12/09/2015 - 14:02

Title of Master thesis: A Neural Reinforcement Learning Approach for Intelligent Autonomous Navigation Systems

Classical reinforcement learning mechanisms and a modular neural network are unified to conceive an intelligent autonomous system for mobile robot navigation. The design aims at inhibiting two common navigation deficiencies: the generation of unsuitable cyclic trajectories and ineffectiveness in risky configurations. Several design elements are combined to tackle these navigation difficulties, for instance: 1) a neuron parameter that simultaneously memorizes neuron activity and functions as a learning factor, 2) reinforcement learning mechanisms that adjust neuron parameters (not only synapse weights), and 3) an inner-triggered reinforcement. Simulation results show that the proposed system circumvents difficulties caused by specific environment configurations, improving the ratio between collisions and captures.

 

Video (inhibiting unsuitable cyclic trajectories through reinforcement learning):

The robot starts out not knowing what it should do in the environment, but as time passes it interacts with the environment by colliding with obstacles and capturing targets (yellow boxes). Each collision elicits an appropriate innate response, i.e., aversion. As more collisions take place, the neural network learns to associate obstacles (and their blue color) with aversion behaviors, so that the robot steers away from obstacles (an emergent behavior). The same process associates target capture with attraction behavior through learning. In the end, the robot navigates the environment efficiently, capturing targets and effectively suppressing the cyclic trajectories common to such reactive systems.

Video (robot cooperation; each robot trained with the previously described neural network architecture):

 

The intelligent autonomous system corresponds to a neural network arranged in three layers (Fig. 4). The first layer contains two neural repertoires: the Proximity Identifier repertoire (PI) and the Color Identifier repertoire (CI). Distance sensors stimulate the PI repertoire whereas color sensors feed the CI repertoire; both repertoires also receive stimuli from contact sensors. The second layer is composed of two neural repertoires: the Attraction repertoire (AR) and the Repulsion repertoire (RR). Each one establishes connections with both networks in the first layer as well as with the contact sensors. The actuator network, connected to the AR and RR repertoires, outputs the adjustment to the robot's direction.
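
Purely to visualize the connectivity just described (the dimensions, weights and squashing function are invented, and the repertoires' reinforcement-driven parameter adjustment is not reproduced here), a forward pass through the three layers might be laid out as follows.

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up sensor and repertoire sizes, for illustration only.
n_dist, n_color, n_contact = 8, 3, 4
n_pi, n_ci, n_ar, n_rr = 20, 20, 10, 10

def layer(weights, *inputs):
    """A simple rate-coded repertoire: weighted sum of its inputs through a squashing function."""
    return np.tanh(weights @ np.concatenate(inputs))

# Random weights as placeholders for what the reinforcement mechanism would adjust.
W_pi = rng.uniform(-1, 1, (n_pi, n_dist + n_contact))
W_ci = rng.uniform(-1, 1, (n_ci, n_color + n_contact))
W_ar = rng.uniform(-1, 1, (n_ar, n_pi + n_ci + n_contact))
W_rr = rng.uniform(-1, 1, (n_rr, n_pi + n_ci + n_contact))
W_act = rng.uniform(-1, 1, (1, n_ar + n_rr))

def direction_adjustment(dist, color, contact):
    pi = layer(W_pi, dist, contact)          # Proximity Identifier repertoire
    ci = layer(W_ci, color, contact)         # Color Identifier repertoire
    ar = layer(W_ar, pi, ci, contact)        # Attraction repertoire
    rr = layer(W_rr, pi, ci, contact)        # Repulsion repertoire
    return layer(W_act, ar, rr)[0]           # actuator output: steering adjustment

print(direction_adjustment(rng.random(n_dist), rng.random(n_color), rng.random(n_contact)))
```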

For more information on the robot simulator, check out this page: Autonomous robot simulator

 
