
On Learning Navigation Behaviors for Small Mobile Robots with Reservoir Computing Architectures

on Wed, 12/16/2015 - 14:56

This work proposes a general Reservoir Computing (RC) learning framework which can be used to learn navigation behaviors for mobile robots in simple and complex unknown, partially observable environments. RC provides an efficient way to train recurrent neural networks by keeping the recurrent part of the network (called the reservoir) fixed while training only a linear readout output layer.
The proposed RC framework builds upon the notion of navigation attractor or behavior which can be embedded in the high-dimensional space of the reservoir after learning. 
The learning of multiple behaviors is possible because the dynamic robot behavior, consisting of a sensory-motor sequence, can be linearly discriminated in the high-dimensional nonlinear space of the dynamic reservoir. 
Three learning approaches for navigation behaviors are presented in this paper. The first approach learns multiple behaviors from examples of navigation behaviors generated by a supervisor, while the second approach learns goal-directed navigation behaviors based only on rewards. The third approach learns complex goal-directed behaviors, in a supervised way, using a hierarchical architecture whose internal predictions of contextual switches guide the sequence of basic navigation behaviors towards the goal.
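For readers unfamiliar with RC, the sketch below illustrates the core idea in Python/NumPy: a randomly initialized, fixed reservoir is driven by the input sequence, and only the linear readout is trained (here in closed form, by ridge regression). All dimensions, scalings, and data are illustrative placeholders, not the exact setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 10 sensor inputs, 200 reservoir units, 2 motor outputs.
n_in, n_res, n_out = 10, 200, 2

# Fixed random weights: only the readout W_out is ever trained.
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
W_res = rng.normal(0, 1, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # spectral radius below 1

def run_reservoir(inputs):
    """Drive the fixed recurrent reservoir with an input sequence; collect states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Stand-in data: sensor sequence and motor targets provided by a supervisor.
U = rng.uniform(-1, 1, (1000, n_in))
Y = rng.uniform(-1, 1, (1000, n_out))

# Train the linear readout in closed form by ridge regression.
X = run_reservoir(U)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

motor_prediction = run_reservoir(U) @ W_out         # readout of reservoir states
```

Because the sensory-motor sequences become linearly separable in the reservoir's high-dimensional state space, this single trained readout is enough to reproduce, and discriminate between, multiple navigation behaviors.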


Robot learning through dynamical systems (PhD thesis)

on Wed, 12/09/2015 - 14:12

During my PhD, I worked mainly on Reservoir Computing (RC) architectures, with applications to modeling cognitive capabilities for mobile robots from sensor data and, in some cases, through interaction with the environment.

Reservoir Computing (RC) is an efficient method for training recurrent neural networks, which can handle spatio-temporal processing tasks such as speech recognition. These networks are also biologically plausible, as recently argued in the literature.

In my case, I used these RC networks for modeling a wide range of capabilities for mobile robots, such as:

  • learning (multiple) navigation behaviors;
  • self-localization from sensor data;
  • map generation through sensory prediction.

These tasks were modeled basically using regression for learning behaviors, or classification for discrete localization.
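As an illustration of the classification side, the hedged sketch below turns discrete localization into linear regression on reservoir states by using one-hot room labels as targets. The states and labels here are random placeholders standing in for data collected as in the sketch above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder reservoir states and room labels; in practice X would be
# collected by driving the reservoir with the robot's sensor data.
n_res, n_rooms, n_steps = 200, 4, 1000
X = rng.normal(0, 1, (n_steps, n_res))        # reservoir state at each timestep
labels = rng.integers(0, n_rooms, n_steps)    # room id at each timestep

# One-hot targets turn discrete localization into linear regression.
Y = np.eye(n_rooms)[labels]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

# The estimated room is the readout unit with the highest activation.
predicted_room = np.argmax(X @ W_out, axis=1)
print("training accuracy:", np.mean(predicted_room == labels))
```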

My PhD thesis can be downloaded here. It is entitled "Reservoir Computing Architectures for Modeling Robot Navigation Systems".

My publications are listed on Google Scholar and can also be downloaded here.

Some simulated and real robots employed in the experiments:


Environment used for localization experiments with the real e-puck robot:


After applying unsupervised learning methods for self-localization, the plots below show the mean activation of place cells as a function of the robot's position in the environment.
Red denotes a high response, whereas blue denotes a low response.

It is possible to perform map generation through sensory prediction, given the robot position as input. Black points represent the predicted sensory readings, whereas gray points show the robot trajectory.
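A hedged sketch of how such a map could be drawn: a model trained to predict distance-sensor readings from the robot pose is queried along the trajectory, and each predicted reading is projected into world coordinates. The sensor geometry, the stand-in predictor, and the circular trajectory below are illustrative assumptions, not the actual experimental setup.

```python
import numpy as np
import matplotlib.pyplot as plt

# Ring of 8 distance sensors, e-puck-like; the angles are illustrative.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)

def predict_sensors(pose):
    """Stand-in for a readout trained to map robot pose -> distance readings."""
    return 0.3 + 0.1 * np.cos(4 * (angles + pose[2]))

def sensor_endpoints(pose, readings):
    """Project each predicted distance reading into world coordinates."""
    x, y, th = pose
    return np.stack([x + readings * np.cos(th + angles),
                     y + readings * np.sin(th + angles)], axis=1)

# Stand-in trajectory of (x, y, heading) poses around the environment.
t = np.linspace(0, 2 * np.pi, 200)
trajectory = np.stack([np.cos(t), np.sin(t), t + np.pi / 2], axis=1)

for pose in trajectory:
    pts = sensor_endpoints(pose, predict_sensors(pose))
    plt.plot(pts[:, 0], pts[:, 1], '.', color='black', markersize=2)  # map points
    plt.plot(pose[0], pose[1], '.', color='gray', markersize=2)       # trajectory

plt.axis('equal')
plt.show()
```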


Physics-Informed Neural Nets for Control of Dynamical Systems

on Tue, 09/28/2021 - 16:00

Physics-informed neural networks (PINNs) impose known physical laws on the learning of deep neural networks, making sure they respect the physics of the process while decreasing the demand for labeled data. For systems represented by Ordinary Differential Equations (ODEs), the conventional PINN takes a continuous time variable as input and outputs the solution of the corresponding ODE. In their original form, PINNs neither allow control inputs nor can they simulate long-range intervals without serious degradation of their predictions. In this context, this work presents a new framework called Physics-Informed Neural Nets for Control (PINC), a novel PINN-based architecture that is amenable to control problems and able to simulate longer-range time horizons that are not fixed beforehand. The framework has new inputs to account for the initial state of the system and the control action. In PINC, the response over the complete time horizon is split such that each smaller interval constitutes a solution of the ODE conditioned on the fixed values of the initial state and control action for that interval. The whole response is formed by feeding back the prediction of the terminal state as the initial state for the next interval. This proposal enables the optimal control of dynamic systems, integrating a priori knowledge from experts and data collected from plants into control applications. We showcase our proposal on the control of two nonlinear dynamic systems: the Van der Pol oscillator and the four-tank system.
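The sketch below illustrates the PINC idea on the Van der Pol oscillator in PyTorch: the network receives (t, y(0), u) and is trained so that its output satisfies the ODE residual at random collocation points and matches y(0) at t = 0. The architecture, interval length T, sampling ranges, and hyperparameters are illustrative assumptions, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn

# PINC-style network: inputs are time t, initial state y0 (2-dimensional for
# the Van der Pol oscillator), and a scalar control input u; output is y(t).
net = nn.Sequential(
    nn.Linear(1 + 2 + 1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 2),
)

def vdp_rhs(y, u, mu=1.0):
    """Controlled Van der Pol dynamics: y1' = y2, y2' = mu*(1 - y1^2)*y2 - y1 + u."""
    y1, y2 = y[:, 0:1], y[:, 1:2]
    return torch.cat([y2, mu * (1 - y1**2) * y2 - y1 + u], dim=1)

def pinc_loss(T=0.5, batch=256):
    # Random collocation points within one interval, with random y0 and u,
    # so the network learns the ODE solution conditioned on (y0, u).
    t = (T * torch.rand(batch, 1)).requires_grad_(True)
    y0 = 2 * torch.rand(batch, 2) - 1
    u = 2 * torch.rand(batch, 1) - 1

    y = net(torch.cat([t, y0, u], dim=1))
    # dy/dt via automatic differentiation, one state dimension at a time.
    dydt = torch.cat([
        torch.autograd.grad(y[:, i].sum(), t, create_graph=True)[0]
        for i in range(2)
    ], dim=1)
    physics = ((dydt - vdp_rhs(y, u)) ** 2).mean()

    # Initial-condition term: the network must return y0 at t = 0.
    y_at_0 = net(torch.cat([torch.zeros(batch, 1), y0, u], dim=1))
    initial = ((y_at_0 - y0) ** 2).mean()
    return physics + initial

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = pinc_loss()
    loss.backward()
    opt.step()
```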

MPC: Representation of the output prediction at a given time instant, where the proposed actions generate a predicted behavior that reduces the distance between the value predicted by the model and a reference trajectory:

mpc_pred.png


The PINC network has the initial state y(0) of the dynamic system and the control input u as inputs, in addition to the continuous time scalar t. Both y(0) and u can be multidimensional. The output y(t) corresponds to the state of the dynamic system as a function of t ∈ [0, T], with initial conditions given by y(0) and u. The deep network is fully connected, even though not all connections are shown:

pinc_net.png

Below, the modes of operation of the PINC network. (a) The PINC net operates in self-loop mode, using its own output prediction as the next initial state after T seconds. This operation mode is used within one iteration of MPC, for trajectory generation until the prediction horizon of MPC completes (predicted output from the first figure). (b) Block diagram for the PINC network connected to the plant. One pass through the diagram arrows corresponds to one MPC iteration, applying a control input u for Ts timesteps to both the plant and the PINC network. Note that the initial state of the PINC net is set to the real output of the plant. In practice, in MPC, these two operation modes are executed in an alternating way (optimization over the prediction horizon, then application of the control action).

a) pinc_feedback.png


b) pinc_plant.png

Below, the representation of a trained PINC network evolving through time in self-loop mode (mode (a) in the previous figure) for trajectory generation over the prediction horizon. The top dashed black curve corresponds to a predicted trajectory y of a hypothetical dynamic system in continuous time. The states y[k] are snapshots of the system in discrete time k, positioned at the equidistant vertical lines. Between two vertical lines (during the inner continuous interval between steps k and k + 1), the PINC net learns the solution of an ODE with t ∈ [0, T], conditioned on a fixed control input u[k] (blue solid line) and initial state y(0) (green thick dashed line). The control action u[k] is changed at the vertical lines and kept fixed for T seconds, and the initial state y(0) in the interval between steps k and k + 1 is updated to the last state of the previous interval k − 1 (indicated by the red curved arrow). The PINC net can directly predict the states at the vertical lines without needing to infer intermediate states t < T as numerical simulation does. Here, we assume that T = Ts and, thus, the number of discrete timesteps M equals the length of the prediction horizon in MPC.

pinc_evolution.png
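A minimal sketch of this self-loop mode, reusing the net from the sketch above: the state predicted at the end of each interval of length T is fed back as the initial state of the next interval, with one fixed control input u[k] per interval. The control sequence and initial state are illustrative.

```python
import torch

def rollout(net, y_init, controls, T=0.5):
    """Self-loop mode: query the PINC net at the end of each interval (t = T)
    and feed the predicted terminal state back as the next initial state."""
    y = y_init                               # shape (1, 2): current initial state
    t_end = torch.full((1, 1), T)            # evaluate at the end of the interval
    states = [y]
    for u_k in controls:                     # one fixed control input per interval
        u = torch.full((1, 1), float(u_k))
        with torch.no_grad():
            y = net(torch.cat([t_end, y, u], dim=1))
        states.append(y)
    return torch.cat(states, dim=0)          # states y[k] at the vertical lines

# Trajectory over a prediction horizon of M = 4 intervals of T seconds each.
y_init = torch.tensor([[1.0, 0.0]])
candidate_u = [0.0, 0.1, 0.2, 0.0]
trajectory = rollout(net, y_init, candidate_u)
```

In MPC, an optimizer would search over candidate_u to minimize a tracking cost computed on this trajectory, apply the first control action to the plant for Ts seconds, and then reset y_init from the plant measurement, as described above.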

POLSAB aims at advancing the state of the art in safe imitation learning for high-dimensional domains in an end-to-end approach. We focus on two main applications: autonomous robot navigation and self-driving car simulations. Designing efficient and safe policies (which map observations to actions) for these tasks requires more than behavioral cloning, which basically applies supervised learning to a labeled dataset.

Control tasks usually suffer from cascading errors. These happen when the controller's policy does not take into account the feedback loops of the controller's own mistakes: small deviations from the desired reference track (the street lane, or the robot path) feed back into the policy as new observations arise, until no valid action is possible anymore.

To address these problems, in this project we will use a recently introduced framework called Generative Adversarial Imitation Learning (GAIL) for learning robust policies by imitation for both the robot and the simulated vehicle. To minimize the risk of high-cost events (accidents), the risk-averse version of GAIL will be extended to our application domains.

A second approach will tackle the recent option-critic framework in Reinforcement Learning (RL), where, by defining a reward function (a qualitative measure of the agent's behavior), it is possible to learn robust control policies by trial and error. This method also takes into account temporal abstractions in the policy mappings by creating hierarchies of behaviors in time, which makes it possible to scale up reinforcement learning.

Finally, this project will contribute to research in safe AI by investigating risk-sensitive methods in applications where this is of paramount importance, in order to position Luxembourg as an important player in billion-dollar industries: safe, robust AI agents in real-world settings (e.g. trading, autonomous driverless vehicles, service robotics).

SUPPORT (if project is realized with FNR support):

  • FNR Luxembourg
  • SnT/University of Luxembourg
  • Institute for Robotics and Process Control at the Technische Universität Braunschweig (Prof. Dr. Jochen Steil)
  • Google DeepMind (Dr. Raia Hadsell)

PLATFORMS TO BE USED IN THE PROJECT:


Turtlebot Waffle with camera and LiDAR sensors for autonomous navigation


CARLA: simulation for self-driving/autonomous cars in urban environments


TORCS: simulation for racing/road autonomous cars


State-of-the-art Artificial Intelligence method for detecting that you are really you, and not some intruder, entering the code on your mobile phone.

Technologies used:

  • Python (backend & custom neural network model)
  • Java (Android app frontend)

Developed in 2016/2017.


More information: TigerAI_info.pdf

