
General Portfolio

State-of-the-art Artificial Intelligence method for verifying that you are really you, and not some intruder, entering the code on your mobile phone.

Technologies used:
Python (backend & custom Neural network model);
Java (Android app frontend);

Developed in 2016/2017.


More information:  TigerAI_info.pdf



Technologies used: C++, TCP/IP sockets, Linux, Qt, Qwt. 
Developed from 2001 to 2008.
Two different software programs were developed during my undergraduate and Master's studies: SINAR, a simulator that graphically represents the environment and runs the simulation in real time; and CONAR, the autonomous controller that receives sensor data (from SINAR) and outputs actuator data (to SINAR). Simulations with multiple robots can be done if more than one controller (CONAR) connects to SINAR. The communication between both programs is represented in the following figure:

The communication protocol is implemented using TCP/IP sockets. Thus, several controllers can run on different computers over a network, distributing the computing load across the nodes. Both programs were developed under the Linux operating system; the graphical interface was built with the Qt library and the plots with the Qwt library. Both programs were written in C++.
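To make the sensor/actuator exchange concrete, here is a minimal sketch of one cycle of that protocol in Python rather than C++. The newline-delimited JSON message format and the field names are assumptions for illustration; the original wire format is not documented here. One thread plays the SINAR role (sending sensor readings, receiving actuator commands) and the main thread plays the CONAR role.

```python
import json
import socket
import threading

HOST = "127.0.0.1"

def simulator(server_sock, result):
    """SINAR role: send one sensor reading, receive one actuator command."""
    conn, _ = server_sock.accept()
    with conn:
        sensors = {"distance": 0.8, "color": "red", "contact": False}
        conn.sendall((json.dumps(sensors) + "\n").encode())
        result["actuators"] = json.loads(conn.makefile().readline())

def controller(port):
    """CONAR role: read sensors, reply with actuator adjustments (toy policy)."""
    with socket.create_connection((HOST, port)) as sock:
        sensors = json.loads(sock.makefile().readline())
        # Trivial policy: slow down when an obstacle is close.
        velocity = 0.1 if sensors["distance"] < 0.5 else 1.0
        reply = {"direction": 0.0, "velocity": velocity}
        sock.sendall((json.dumps(reply) + "\n").encode())

server = socket.socket()
server.bind((HOST, 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

result = {}
t = threading.Thread(target=simulator, args=(server, result))
t.start()
controller(port)
t.join()
server.close()
print(result["actuators"])
```

Because the exchange runs over plain TCP, each controller can live on a different machine, which is exactly what allowed the load distribution described above.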


SINAR is a simulator for autonomous robot navigation experiments. Its graphical user interface contains menus, command bars, and the environment display. 

The user can create simulation environments merely by clicking and moving the mouse cursor in the display area. Objects are inserted, resized, translated and rotated simply by moving the mouse. An object can be an obstacle, a target or even a robot. The user can also edit an object by changing its color and its type of movement (for moving objects).

Environments can be saved to files and later loaded for use in simulations. Before a simulation starts, one or more controllers (CONAR) must be connected to SINAR. The user controls the simulation with the appropriate button commands: start, pause and finish.

During a simulation, sensor data can be viewed graphically through plots in real time:

The performance of the robot (number of collisions, number of target captures, and number of executed iterations) can be verified in real time as well.

There are two modes of simulation: ordinary mode and sophisticated mode. In the former, the environment display is updated at each iteration so that the user can graphically follow the progress of the simulation. Furthermore, the user can move any object in the environment in real time.

In the latter mode, the simulation runs implicitly (not graphically) and consists of a set of experiments configured by a specific C++ script. The script determines the sequence and duration of the simulation experiments (each experiment may use a distinct environment), as well as the number of repetitions for a sequence of experiments. In sophisticated mode, all generated data are recorded in files: the trajectory of the robot and the performance measures (number of captures and collisions, with their respective iteration times); the representation of the final state of the environment (in PNG format) and the performance plot (also called the learning evolution graphic) are also generated automatically. The controller data (neural network states) are saved automatically as well, since the script tells CONAR to save its state when each simulation finishes.
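The experiment script described above can be sketched as follows. This is a hypothetical Python stand-in for the original C++ script: the environment file names, iteration counts and repetition count are made up, and the headless run is stubbed out, but the structure (a sequence of experiments repeated a fixed number of times, with results logged per run) matches the description.

```python
# Hypothetical experiment sequence: each entry names an environment file
# and a duration in iterations; the whole sequence repeats REPETITIONS times.
EXPERIMENTS = [
    {"environment": "corridor.env", "iterations": 5000},
    {"environment": "maze.env", "iterations": 10000},
]
REPETITIONS = 3

def run_experiment(environment, iterations):
    # Placeholder for a headless (non-graphical) simulation run;
    # returns mock performance data in place of real results.
    return {"environment": environment, "captures": 0,
            "collisions": 0, "iterations": iterations}

log = []
for rep in range(REPETITIONS):
    for exp in EXPERIMENTS:
        result = run_experiment(**exp)
        result["repetition"] = rep
        log.append(result)

print(len(log))  # 6 runs: 2 experiments x 3 repetitions
```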



CONAR is a program that simulates the brain of a robot located in the SINAR environment. After receiving sensor data (distance, color and contact) from its respective robot in the SINAR environment, it sends actuator data (direction adjustment and velocity adjustment) to the same robot. This cycle continues until the simulation ends.

The graphical interface of CONAR is shown in the following figure. Parameters of the controller can be adjusted before the simulation and in real time; commands can be activated by clicking buttons: connect to SINAR, apply parameter changes in real time, generate performance data and plots for recording in files, save neural network states, and exit the simulation. Furthermore, some neural networks in the controller (the IP, IC, RR and AR networks) can be disabled in real time, so that they output null (zero).

In addition, the neural network states can be viewed graphically in real time. In the following figures, a neuron is represented by a circle; the darker a neuron is, the stronger its output.

The figure above shows a representation of the PI repertoire neurons. A small red square inside a circle means that the neuron has already been a winner during a learning event.

The next figure shows AR, RR and actuator neurons. The energy levels (degree of activity) of AR or RR neurons are represented by a thick line next to the respective neurons.

The following picture shows the graphical representation of the outputs of the winner neurons in the PI repertoire (each line represents the winner neuron's output in a column: the first line corresponds to the first column, and so on).
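The notion of a "winner" neuron in a repertoire column can be sketched with a minimal winner-take-all selection. This is only an assumption about the selection mechanism for illustration; the actual repertoire dynamics are detailed in the publications listed below.

```python
# Winner-take-all sketch: the most active neuron in a column wins.
def winner(activations):
    """Return the index of the most active neuron in a column."""
    return max(range(len(activations)), key=lambda i: activations[i])

column = [0.1, 0.7, 0.3]  # made-up activation values for one column
print(winner(column))  # 1
```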


To see a video of a simulation run, check out this page: Reinforcement learning of robot behaviors

Related publications

  1. Eric Antonelo, Albert-Jan Baerveldt, Thorsteinn Rognvaldsson and Mauricio Figueiredo. Modular Neural Network and Classical Reinforcement Learning for Autonomous Robot Navigation: Inhibiting Undesirable Behaviors. Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN), pp. 1225-1232 (2006)
  2. Eric Antonelo. A Neural Reinforcement Learning Approach for Behavior Acquisition in Intelligent Autonomous Systems. Master's thesis, Halmstad University (2006)
  3. Eric Antonelo, Mauricio Figueiredo, Albert-Jan Baerveldt and Rodrigo Calvo. Intelligent autonomous navigation for mobile robots: spatial concept acquisition and object discrimination. Proceedings of the 6th IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), pp. 553-557 (2005)
  4. Eric Antonelo and Mauricio Figueiredo. Autonomous intelligent systems applied to robot navigation: spatial concept acquisition and object discrimination. Proceedings of the 2nd National Meeting of Intelligent Robotics (II ENRI) in the Congress of the Brazilian Computer Society (in Portuguese) (2004)

Technologies used: Matlab and toolboxes. 
Developed in 2005.

I worked with Prof. Thorsteinn Rognvaldsson from Halmstad University on a consulting project for Eka Chemicals from Sweden.

It involved multivariate data mining over several variables representing measurements from the chemical process. The main goal was to identify the important variables that negatively influence the process.

For that, machine learning tools such as logistic regression, multi-layer perceptrons, and support vector machines were used (in Matlab).
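As one of those tools, a logistic regression can flag influential variables through the magnitude of its learned weights. The sketch below illustrates this idea in plain Python rather than Matlab, on made-up data where only the first of two variables actually drives the outcome, so its weight should dominate. The data, variable names and hyperparameters are all assumptions for illustration, not the consulting project's actual setup.

```python
import math
import random

# Toy dataset: two "process variables" in [0, 1]; the binary outcome
# depends on x1 only, so logistic regression should weight x1 heavily.
random.seed(0)
X = [(random.random(), random.random()) for _ in range(200)]
y = [1 if x1 > 0.5 else 0 for x1, _ in X]

# Logistic regression trained by stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(300):
    for (x1, x2), target in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - target
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

print(abs(w[0]) > abs(w[1]))  # True: x1 is flagged as the influential variable
```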

Technologies used: Ruby, Ruby on Rails, Apache Solr, JavaScript, Twitter Bootstrap and various Ruby gems. 
Started Development in 2015.
Mobile ready.

NatVim is an international vertical search engine for natural and alternative treatments for diseases, mainly based on herbs and medicinal plants, built with the Ruby on Rails framework in early 2015. It also indexes content from the internet on related topics.

Initially, Brazil was the most supported country, with Portuguese as the main language. Update: English, Spanish and French are increasingly supported.

From the website: "NatVim stands for Natural Vim. NatVim functions as a vertical search engine for natural and alternative treatments for diseases, healthy recipes and other articles. Besides indexing content from all over the internet, it also allows the collaborative creation of treatments, recipes and articles by therapists and health professionals as well as other NatVim users all around the world."



Technologies used: Ruby, Ruby on Rails, Apache Solr, and various Ruby gems.
Started Development in 2012.
Mobile version of website developed in 2013.
Imofox is a vertical search engine for real estate properties in Brazil, built with the Ruby on Rails framework.
It has XML integration with third-party real estate software, which runs in the background every day to index properties in the Imofox search.
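The daily integration step boils down to parsing a third-party XML feed into documents ready for indexing. Here is a sketch of that step in Python rather than Ruby; the feed layout, tag names and sample values are invented, since real third-party feeds differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical property feed; real third-party feed layouts vary.
FEED = """
<properties>
  <property><id>101</id><city>Florianopolis</city><price>350000</price><rooms>2</rooms></property>
  <property><id>102</id><city>Florianopolis</city><price>500000</price><rooms>3</rooms></property>
</properties>
"""

def parse_feed(xml_text):
    """Turn a feed into a list of dicts ready for the search index."""
    docs = []
    for node in ET.fromstring(xml_text).iter("property"):
        docs.append({
            "id": node.findtext("id"),
            "city": node.findtext("city"),
            "price": int(node.findtext("price")),
            "rooms": int(node.findtext("rooms")),
        })
    return docs

docs = parse_feed(FEED)
print(len(docs), docs[0]["city"])  # 2 Florianopolis
```

A scheduled background job would fetch each partner's feed, run a parser like this, and push the resulting documents to the search index.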


It also computes statistics on the price of properties per number of rooms and per square meter across neighborhoods and cities:
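Such a statistic can be sketched as a group-by over the indexed listings: average price per square meter, keyed by neighborhood and number of rooms. The listings below are made up for illustration; the real figures come from the indexed properties.

```python
from collections import defaultdict

# Made-up listings standing in for indexed properties.
listings = [
    {"neighborhood": "Centro", "price": 400000, "rooms": 2, "area_m2": 80},
    {"neighborhood": "Centro", "price": 600000, "rooms": 3, "area_m2": 120},
    {"neighborhood": "Trindade", "price": 300000, "rooms": 2, "area_m2": 75},
]

# Group price-per-m2 by (neighborhood, rooms), then average each group.
groups = defaultdict(list)
for l in listings:
    groups[(l["neighborhood"], l["rooms"])].append(l["price"] / l["area_m2"])

avg_price_per_m2 = {k: sum(v) / len(v) for k, v in groups.items()}
print(avg_price_per_m2[("Centro", 2)])  # 5000.0
```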

Technologies used: PHP, Drupal, Apache Solr, and various Drupal modules.
Started development in 2008.

Imohoo was a vertical search engine for real estate properties in Brazil, made using the Drupal platform and modules.

It had crawling capabilities as well as XML integration with third-party real estate software, developed as customized Drupal modules in PHP. These crawling and integration routines ran in the background every day, indexing properties into the Imohoo search with Apache Solr.
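The final step of such a routine is pushing the crawled documents to Solr's JSON update endpoint. The sketch below builds that HTTP request in Python (the original modules were PHP); the core name "imohoo", the host and the document fields are assumptions, and the request is only constructed, not sent.

```python
import json
from urllib.request import Request

# Hypothetical Solr core and host; commit=true makes the docs searchable
# immediately after the update.
SOLR_UPDATE_URL = "http://localhost:8983/solr/imohoo/update?commit=true"

def build_update_request(docs):
    """Build a POST request adding a batch of documents to the Solr index."""
    body = json.dumps(docs).encode("utf-8")
    return Request(SOLR_UPDATE_URL, data=body,
                   headers={"Content-Type": "application/json"})

req = build_update_request([{"id": "101", "city": "Florianopolis"}])
print(req.get_method())  # POST (urllib infers POST when data is given)
```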


The web portal achieved almost 1 million page views/month by April 2011.
I was involved in the overall development process: website crawling, interface design, server administration, SEO, marketing, and integration with other services.