Institut de Neurosciences Cognitives et Intégratives d'Aquitaine (UMR5287)

Aquitaine Institute for Cognitive and Integrative Neuroscience




Theme 4: Computer Vision and Gaze Information

by Loïc Grattier

In collaboration with Prof. J. Benois-Pineau (LABRI), we are developing a line of research that integrates computer vision tools augmented with gaze information to assist prosthesis and robotic control. In particular, we improved automatic object detection by running a deep convolutional neural network (Deep CNN, Fig. 3A) on egocentric video recorded at eye level, in conjunction with saliency maps derived from gaze information (Fig. 3B-D) (Pérez de San Roman et al., 2017). By combining the CNN with an LSTM (Long Short-Term Memory) architecture, we were able not only to localize objects, but also to predict the intention to grasp an object, even in natural environments as complex as real kitchens (Fig. 3E) (Gonzalez-Diaz et al., 2019).
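The core idea of gaze-driven saliency can be sketched in a few lines: turn a recorded gaze point into a saliency map and use it to modulate visual features so that gazed-at regions dominate detection. The snippet below is a minimal illustrative sketch, not the published pipeline; the Gaussian saliency model, the `sigma` value, and the simple multiplicative weighting are assumptions chosen for clarity.

```python
import numpy as np

def gaze_saliency(shape, gaze_xy, sigma=20.0):
    """Build a 2-D Gaussian saliency map (H, W) centered on the gaze point (x, y).

    This is a simplifying assumption: real gaze-based saliency maps are
    typically built from fixation sequences, not a single point.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    sal = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2.0 * sigma ** 2))
    return sal / sal.max()  # normalize so the gazed pixel has weight 1

def weight_features(feature_map, saliency):
    """Modulate CNN-style feature maps (C, H, W) by a saliency map (H, W)."""
    return feature_map * saliency[None, :, :]

# Usage: weight a dummy 3-channel feature map by a gaze point at (x=40, y=30)
feats = np.ones((3, 64, 64))
sal = gaze_saliency((64, 64), (40, 30))
weighted = weight_features(feats, sal)
```

In a full system, `weighted` would feed the detector so that objects near the point of gaze are favored; here the feature map is a placeholder for the Deep CNN activations.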

This combination of computer vision and gaze information is critical to the alternative kinematics-based control developed in virtual reality (Theme 5), and offers rich integration perspectives on our robotic platform REACHY (Theme 3), which can now host computer vision algorithms.

Publications:
- Gonzalez-Diaz I, Benois-Pineau J, Domenger J-P, Cattaert D, de Rugy A (2019) Perceptually-guided deep neural networks for ego-action prediction: Object grasping. Pattern Recognition. 88: 223-235.
- Pérez De San Roman P, Benois-Pineau J, Domenger J-P, Cattaert D, Paclet F, de Rugy A (2017) Saliency Driven Object Recognition in Egocentric Videos with Deep CNN: toward application in assistance to Neuroprostheses. Computer Vision and Image Understanding. 164: 82-91.