Paper: Framework for Fast Experimental Testing of Autonomous Navigation Algorithms

Title: Framework for Fast Experimental Testing of Autonomous Navigation Algorithms
Author: Muñoz-Bañón MÁ, del Pino I, Candelas FA, Torres F.
Journal: Applied Sciences. 2019; 9(10):1997. doi:10.3390/app9101997
Abstract: Research in mobile robotics requires fully operative autonomous systems to test and compare algorithms in real-world conditions. However, the implementation of such systems remains a highly time-consuming process. In this work, we present a Robot Operating System (ROS)-based navigation framework that allows new autonomous navigation applications to be generated in a fast and simple way. Our framework provides a powerful basic structure based on abstraction levels that eases the implementation of minimal solutions with all the functionalities required for a whole autonomous system. This approach helps to keep the focus on any sub-problem of interest (e.g., localization or control) while still permitting experimental tests to be carried out in the context of a complete application. To show the validity of the proposed framework, we implement an autonomous navigation system for a ground robot using a localization module that fuses global navigation satellite system (GNSS) positioning and Monte Carlo localization by means of a Kalman filter. Experimental tests are performed in two different outdoor environments, over more than twenty kilometers. All the developed software is available in a GitHub repository. (A minimal sketch of the fusion step is given after this entry.)
Download paper
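
The fusion step described in the abstract above can be illustrated with a minimal Kalman update that merges a Monte Carlo localization estimate with a GNSS fix. This is only a hedged sketch, not the framework code released in the authors' GitHub repository; the function name `kalman_fuse`, the 2D-position-only state and the covariance values are hypothetical.

```python
import numpy as np

def kalman_fuse(x_pred, P_pred, z, R):
    """One linear Kalman update: fuse the prediction (x_pred, P_pred) with a
    measurement z of covariance R. H is the identity because both sources
    estimate the same 2D position directly."""
    H = np.eye(2)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)        # corrected position
    P = (np.eye(2) - K @ H) @ P_pred         # corrected covariance
    return x, P

# Hypothetical example: MCL pose used as the prediction, GNSS fix as the measurement.
mcl_xy, mcl_cov = np.array([12.3, -4.1]), np.diag([0.5, 0.5])
gnss_xy, gnss_cov = np.array([12.8, -3.9]), np.diag([1.2, 1.2])
fused_xy, fused_cov = kalman_fuse(mcl_xy, mcl_cov, gnss_xy, gnss_cov)
print(fused_xy, np.diag(fused_cov))
```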

Paper: Clasificación de objetos usando percepción bimodal de palpación única en acciones de agarre robótico

Title: Clasificación de objetos usando percepción bimodal de palpación única en acciones de agarre robótico
Author: Edison Velasco, Brayan S. Zapata-Impata, Pablo Gil, Fernando Torres
Journal: Revista Iberoamericana de Automática e Informática Industrial, April 2019. ISSN 1697-7920. https://doi.org/10.4995/riai.2019.10923
Abstract: This work presents a method for classifying objects grasped with a multi-fingered robotic hand by combining proprioceptive and tactile data in a hybrid descriptor. The proprioceptive data are obtained from the joint positions of the hand, and the tactile data from the contact registered by pressure cells on the phalanges. The proposed approach identifies the object by extracting the contact geometry from the hand pose and an estimate of the object's stiffness and flexibility from the tactile sensors. The method shows that using bimodal data of different natures together with supervised learning techniques improves the recognition rate. In the experiments, more than 3000 grasps of up to 7 different household objects were carried out, obtaining 95% correct classification (F1 metric) without the need to perform multiple palpations of the object. In addition, the generalization of the method was verified by training our system on certain objects and classifying new ones without any prior knowledge of them. (An illustrative sketch of the hybrid descriptor follows this entry.)
Download paper
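
As a rough illustration of the hybrid descriptor described above, the sketch below concatenates proprioceptive and tactile features and trains a supervised classifier. It is not the authors' implementation: the feature dimensions, the random data and the choice of an SVM are assumptions for demonstration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical dimensions: 16 hand joint angles and 12 pressure cells per grasp.
rng = np.random.default_rng(0)
n_grasps, n_joints, n_cells, n_objects = 300, 16, 12, 7
proprio = rng.normal(size=(n_grasps, n_joints))   # hand joint positions
tactile = rng.normal(size=(n_grasps, n_cells))    # phalanx pressure readings
labels = rng.integers(0, n_objects, size=n_grasps)

# Hybrid descriptor: simple concatenation of both modalities.
X = np.hstack([proprio, tactile])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)           # supervised classifier
print("F1 (macro):", f1_score(y_te, clf.predict(X_te), average="macro"))
```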

Paper: 3DCNN Performance in Hand Gesture Recognition Applied to Robot Arm Interaction

Title: 3DCNN Performance in Hand Gesture Recognition Applied to Robot Arm Interaction
Author: Castro-Vargas, J., Zapata-Impata, B., Gil, P., Garcia-Rodriguez, J. and Torres, F.
Conference: In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods – Volume 1, 2019: ICPRAM, ISBN 978-989-758-351-3, pages 802-806. DOI: 10.5220/0007570208020806
Abstract: In the past, methods for hand sign recognition have been successfully tested in Human-Robot Interaction (HRI) using traditional methodologies based on static image features and machine learning. However, the recognition of gestures in video sequences remains an open problem, because current detection methods achieve low scores when the background is undefined or the scenario is unstructured. In recent years, deep learning techniques have been applied to approach a solution to this problem. In this paper, we present a study in which we analyse the performance of a 3DCNN architecture for hand gesture recognition in an unstructured scenario. The system yields a score of 73% in both accuracy and F1. The aim of the work is the implementation of a system for commanding robots with gestures recorded on video in real scenarios. (A minimal 3DCNN sketch follows this entry.)
Download paper
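
A minimal 3DCNN along the lines discussed above could look like the following Keras sketch. The clip size, layer widths and number of gesture classes are assumed values for illustration, not those used in the paper.

```python
import tensorflow as tf

# Hypothetical input: clips of 16 RGB frames at 64x64 pixels, 10 gesture classes.
n_frames, height, width, n_classes = 16, 64, 64, 10

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_frames, height, width, 3)),
    tf.keras.layers.Conv3D(16, kernel_size=3, activation="relu"),   # spatio-temporal filters
    tf.keras.layers.MaxPooling3D(pool_size=2),
    tf.keras.layers.Conv3D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling3D(pool_size=2),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```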

Paper: Fast geometry-based computation of grasping points on three-dimensional point clouds

Title: Fast geometry-based computation of grasping points on three-dimensional point clouds
Author: Brayan S Zapata-Impata, Pablo Gil, Jorge Pomares, Fernando Torres
Journal: International Journal of Advanced Robotic Systems, January-February 2019: 1–18, https://doi.org/10.1177/1729881419831846
Abstract: Industrial and service robots deal with the complex task of grasping objects that have different shapes and are seen from diverse points of view. In order to perform grasps autonomously, the robot must calculate where to place its robotic hand so that the grasp is stable. We propose a method to find the best pair of grasping points given a three-dimensional point cloud with a partial view of an unknown object. We use a set of straightforward geometric rules to explore the cloud and propose grasping points on the surface of the object. We then adapt the pair of contacts to the multi-fingered hand used in the experiments. We show that, after performing 500 grasps of different objects, our approach is fast, taking an average of 17.5 ms to propose contacts, while attaining a grasp success rate of 85.5%. Moreover, the method is sufficiently flexible and stable to work with objects in changing environments, such as those confronted by industrial or service robots. (A toy geometric heuristic is sketched after this entry.)
Download paper
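
The geometric spirit of the method (exploring the cloud with simple rules to propose a pair of contact points) can be hinted at with the toy heuristic below. It is a simplified stand-in, not the published algorithm: the cutting-plane rule, the 1 cm band and the synthetic cloud are assumptions made here for illustration.

```python
import numpy as np

def propose_grasp_points(cloud):
    """Toy heuristic: cut the partial cloud with a plane through its centroid,
    perpendicular to the first principal axis, and return the two points in
    that band that lie farthest apart on opposite sides of the object."""
    centroid = cloud.mean(axis=0)
    _, _, vt = np.linalg.svd(cloud - centroid)        # PCA via SVD
    main_axis = vt[0]
    dist_to_plane = np.abs((cloud - centroid) @ main_axis)
    band = cloud[dist_to_plane < 0.01]                # 1 cm band (units assumed metres)
    if len(band) < 2:
        return None
    d = np.linalg.norm(band[:, None, :] - band[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)    # most separated pair
    return band[i], band[j]

# Hypothetical usage with a random blob standing in for a partial object view.
cloud = np.random.default_rng(1).normal(scale=0.05, size=(2000, 3))
print(propose_grasp_points(cloud))
```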

Paper: Learning Spatio Temporal Tactile Features with a ConvLSTM for the Direction Of Slip Detection

Title: Learning Spatio Temporal Tactile Features with a ConvLSTM for the Direction Of Slip Detection
Author: Zapata-Impata, Brayan S. and Gil, Pablo and Torres, Fernando
Journal: Sensors 2019, 19(3), 523; https://doi.org/10.3390/s19030523
Abstract: Robotic manipulators constantly deal with the complex task of detecting whether a grasp is stable or, in contrast, whether the grasped object is slipping. Recognising the type of slippage (translational or rotational) and its direction is more challenging than detecting stability alone, but is at the same time of greater use for correcting the aforementioned grasping issues. In this work, we propose a learning methodology for detecting the direction of a slip (seven categories) using spatio-temporal tactile features learnt from one tactile sensor. Tactile readings are therefore pre-processed and fed to a ConvLSTM that learns to detect these directions with just 50 ms of data. We have extensively evaluated the performance of the system, achieving 82.56% accuracy when detecting the direction of slip on previously unseen objects with familiar properties. (A minimal ConvLSTM sketch follows this entry.)
Download paper
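
A minimal ConvLSTM classifier in the spirit of the abstract above might be sketched as follows with Keras. The window length, tactile-image size and layer sizes are assumed here, not taken from the paper.

```python
import tensorflow as tf

# Hypothetical shapes: 10 tactile frames per 50 ms window, each mapped to an
# 8x8 single-channel "tactile image"; 7 slip-direction classes as in the paper.
timesteps, h, w, n_classes = 10, 8, 8, 7

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, h, w, 1)),
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                               return_sequences=False),   # spatio-temporal features
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```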

Paper: Non-Matrix Tactile Sensors: How Can Be Exploited Their Local Connectivity For Predicting Grasp Stability?

Title: Non-Matrix Tactile Sensors: How Can Be Exploited Their Local Connectivity For Predicting Grasp Stability?
Author: Brayan S. Zapata-Impata, Pablo Gil, Fernando Torres
Publication: arXiv.org – arXiv:1809.05551
Abstract: Tactile sensors supply useful information during interaction with an object that can be used for assessing the stability of a grasp. Most previous works on this topic have processed tactile readings as signals by calculating hand-picked features, and some have processed these readings as images, calculating characteristics on matrix-like sensors. In this work, we explore how non-matrix sensors (sensors with taxels not arranged exactly in a matrix) can also be processed as tactile images. In addition, we show that they can be used for predicting grasp stability by training a Convolutional Neural Network (CNN) with them. We captured over 2500 real three-fingered grasps on 41 everyday objects to train a CNN that exploits the local connectivity inherent in non-matrix tactile sensors, achieving a 94.2% F1-score on predicting stability. (An illustrative taxels-to-image sketch follows this entry.)
Download paper
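
The idea of treating a non-matrix sensor as a tactile image can be sketched as below: taxel readings are scattered into a small grid according to their physical positions and fed to a 2D CNN. The taxel layout, grid size and network are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import tensorflow as tf

def taxels_to_image(readings, taxel_xy, grid=(8, 8)):
    """Scatter non-matrix taxel readings into a small 2D grid using the
    normalised physical position of each taxel, so a CNN can exploit local
    connectivity. Taxels falling on the same cell simply overwrite each other
    in this toy version."""
    img = np.zeros(grid, dtype=np.float32)
    rows = (taxel_xy[:, 1] * (grid[0] - 1)).round().astype(int)
    cols = (taxel_xy[:, 0] * (grid[1] - 1)).round().astype(int)
    img[rows, cols] = readings
    return img[..., None]                      # add channel axis

# Hypothetical sensor with 24 irregularly placed taxels.
rng = np.random.default_rng(2)
taxel_xy = rng.uniform(size=(24, 2))           # normalised taxel coordinates
sample = taxels_to_image(rng.uniform(size=24), taxel_xy)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=sample.shape),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # stable / unstable grasp
])
print(model(sample[None]).numpy())
```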

Paper: Agarre bimanual de objetos asistido por visión

Title: Agarre bimanual de objetos asistido por visión
Author: J.A. Castro-Vargas, B.S. Zapata-Impata, P. Gil, J. Pomares
Conference: Tejado Balsera, Inés, et al. (eds.). Actas de las XXXIX Jornadas de Automática: Badajoz, 5-7 de Septiembre de 2018. ISBN 978-84-09-04460-3, pp. 1030-1037
Abstract: Object manipulation tasks sometimes require the use of two or more cooperating robots. In Industry 4.0, assistance robotics is increasingly in demand, for example to carry out tasks such as lifting, dragging or pushing heavy and bulky packages. Consequently, it is possible to find robots with a human-like appearance aimed at helping human operators in activities in which these types of movements occur. In this article, a vision-assisted robotic platform is presented to carry out both grasping tasks and bimanual manipulation of objects. The robotic platform consists of a metallic torso with a rotational joint at the hip and two industrial manipulators with 7 degrees of freedom each, which act as arms. Each arm mounts a multi-fingered robotic hand at its end. Each of the upper extremities uses visual perception from three RGBD sensors located in an eye-to-hand configuration. The platform has been successfully used and tested to carry out bimanual object grasping in order to develop cooperative manipulation tasks in a coordinated way between both robotic extremities. (A point-cloud merging sketch follows this entry.)
Download paper
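
One concrete piece of such a vision-assisted pipeline is merging the clouds of the eye-to-hand RGBD cameras into the common torso base frame before planning a bimanual grasp. The sketch below shows only this transform-and-merge step under assumed, purely illustrative extrinsics; it is not the platform's software.

```python
import numpy as np

def transform_cloud(cloud, T):
    """Apply a 4x4 homogeneous transform (camera frame -> torso base frame)."""
    homog = np.hstack([cloud, np.ones((len(cloud), 1))])
    return (homog @ T.T)[:, :3]

def merge_views(clouds, extrinsics):
    """Merge the point clouds of several eye-to-hand RGBD cameras into the
    common base frame. Extrinsics are assumed known from calibration."""
    return np.vstack([transform_cloud(c, T) for c, T in zip(clouds, extrinsics)])

# Hypothetical example with two cameras and random clouds.
rng = np.random.default_rng(3)
clouds = [rng.normal(size=(100, 3)), rng.normal(size=(100, 3))]
T1 = np.eye(4)
T2 = np.eye(4); T2[:3, 3] = [0.4, -0.2, 0.1]   # second camera offset (illustrative)
merged = merge_views(clouds, [T1, T2])
print(merged.shape)
```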

Paper: A Vision-Driven Collaborative Robotic Grasping System Tele-Operated by Surface Electromyography

Title: A Vision-Driven Collaborative Robotic Grasping System Tele-Operated by Surface Electromyography
Author: Andrés Úbeda, Brayan S. Zapata-Impata, Santiago T. Puente, Pablo Gil, Francisco Candelas and Fernando Torres
Journal: Sensors 2018, 18(7), 2366; https://doi.org/10.3390/s18072366
Abstract: This paper presents a system that combines computer vision and surface electromyography techniques to perform grasping tasks with a robotic hand. In order to achieve a reliable grasping action, the vision-driven system is used to compute pre-grasping poses of the robotic system based on the analysis of three-dimensional object features. Then, the human operator can correct the pre-grasping pose of the robot using surface electromyographic signals from the forearm during wrist flexion and extension. Weak wrist flexions and extensions allow a fine adjustment of the robotic system to grasp the object and, finally, when the operator considers that the grasping position is optimal, a strong flexion is performed to initiate the grasping of the object. The system has been tested with several subjects to check its performance, showing a grasping accuracy of around 95% of the attempted grasps, which improves by more than 13% on the grasping accuracy of previous experiments in which electromyographic control was not implemented. (A threshold-rule sketch of the sEMG logic follows this entry.)
Download paper
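
The tele-operation logic described above (weak flexions or extensions correct the pre-grasp pose, a strong flexion triggers the grasp) can be caricatured as a simple threshold rule on an sEMG envelope. The thresholds, the pose step and the function name `emg_step` are hypothetical, not the values used in the paper.

```python
# Hypothetical thresholds on a rectified/normalised sEMG envelope.
WEAK, STRONG, STEP = 0.2, 0.7, 0.005   # activation levels and pose step (m)

def emg_step(flexion, extension, pose_y):
    """Return the corrected pose coordinate and whether the grasp should close."""
    if flexion >= STRONG:
        return pose_y, True                 # strong flexion: execute the grasp
    if flexion >= WEAK:
        return pose_y - STEP, False         # weak flexion: fine adjustment one way
    if extension >= WEAK:
        return pose_y + STEP, False         # weak extension: adjustment the other way
    return pose_y, False                    # below threshold: hold position

# Simulated stream of (flexion, extension) envelope samples.
pose_y, grasp = 0.0, False
for f, e in [(0.25, 0.0), (0.3, 0.0), (0.0, 0.28), (0.85, 0.0)]:
    pose_y, grasp = emg_step(f, e, pose_y)
print(pose_y, grasp)
```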