Paper: RGB-D tracking and optimal perception of deformable objects

Title: RGB-D tracking and optimal perception of deformable objects
Authors: Ignacio Cuiral-Zueco, Gonzalo López-Nicolás
Journal: IEEE Access, vol. 8, pp. 136884-136897, 2020.


Abstract: Addressing the perception problem of texture-less objects that undergo large deformations and movements, this article presents a novel learning-free RGB-D deformable object tracker combined with a camera position optimisation system for optimal deformable object perception. The approach is based on the discretisation of the object’s visible area through the generation of a supervoxel graph, which allows weighting new supervoxel candidates between object states over time. Once a deformation state of the object is determined, the supervoxels of its associated graph serve as input for the camera position optimisation problem. Satisfactory results have been obtained in real time with a variety of objects that present different deformation characteristics.
Download paper
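
As an illustration of the supervoxel-graph idea, the following Python sketch (our own minimal reading of the approach, not the authors’ code; the voxel size, similarity kernels and toy data are assumptions) clusters two RGB-D frames into coarse voxel "supervoxels" and weights each candidate match between frames by spatial and colour proximity:

import numpy as np

def voxel_centroids(points, colors, voxel=0.05):
    """Group points into a coarse voxel grid; return centroid and mean colour per cell."""
    keys = np.floor(points / voxel).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    n = inv.max() + 1
    cents = np.zeros((n, 3)); cols = np.zeros((n, 3)); cnt = np.zeros(n)
    np.add.at(cents, inv, points); np.add.at(cols, inv, colors); np.add.at(cnt, inv, 1)
    return cents / cnt[:, None], cols / cnt[:, None]

def match_weights(c0, col0, c1, col1, sigma_p=0.1, sigma_c=0.2):
    """Weight every supervoxel candidate of the new frame against each node of the old graph."""
    d_pos = np.linalg.norm(c0[:, None] - c1[None], axis=-1)
    d_col = np.linalg.norm(col0[:, None] - col1[None], axis=-1)
    return np.exp(-(d_pos / sigma_p) ** 2) * np.exp(-(d_col / sigma_c) ** 2)

# Toy usage: two noisy observations of the same random surface patch.
rng = np.random.default_rng(0)
pts0 = rng.uniform(0, 1, (500, 3)); rgb0 = rng.uniform(0, 1, (500, 3))
pts1 = pts0 + rng.normal(0, 0.01, pts0.shape)            # small deformation/motion
c0, k0 = voxel_centroids(pts0, rgb0)
c1, k1 = voxel_centroids(pts1, rgb0)
W = match_weights(c0, k0, c1, k1)
print("best candidate per node:", W.argmax(axis=1)[:10])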

Paper: Robotic workcell for sole grasping in footwear manufacturing

Title: Robotic workcell for sole grasping in footwear manufacturing
Authors: Guillermo Oliver, Pablo Gil, Fernando Torres
Conference: 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2020), Vienna, Austria, 8-11 September 2020.

Abstract: The goal of this paper is to present a robotic workcell to automate several tasks of the cementing process in footwear manufacturing. Our cell’s main applications are sole digitization for a wide variety of footwear, glue dispensing, and sole grasping from conveyor belts. The cell is made up of a manipulator arm endowed with a gripper, a conveyor belt and a 3D scanner. We have integrated all the elements into a ROS simulation environment, facilitating control and communication among them and providing flexibility to support future extensions. We propose a novel method to grasp soles of different shapes, sizes and materials, exploiting the particular characteristics of these objects. Our method relies on object contour extraction using concave hulls. We evaluate it on point clouds of 16 digitized real soles in three different scenarios: concave hull, k-NNs extension and PCA correction. While this workcell has been tested in a simulated environment, the presented system is scheduled to be evaluated on a real setup at INESCOP facilities in the coming months.
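
As a rough illustration of the contour-extraction step, the Python sketch below projects a sole-like point cloud onto the plane and extracts a concave outline. It assumes Shapely 2.0+ (which provides shapely.concave_hull); the random cloud and the ratio value are illustrative stand-ins for a digitized sole and the paper’s tuning:

import numpy as np
import shapely
from shapely.geometry import MultiPoint

def sole_contour(points_xyz, ratio=0.3):
    """Project 3D sole points to 2D and extract a concave outline."""
    pts2d = MultiPoint(points_xyz[:, :2])
    hull = shapely.concave_hull(pts2d, ratio=ratio)   # smaller ratio -> tighter hull
    return np.asarray(hull.exterior.coords)

rng = np.random.default_rng(1)
cloud = rng.uniform(-0.1, 0.1, (2000, 3))             # stand-in for a digitized sole
contour = sole_contour(cloud)
print(len(contour), "contour vertices")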

Paper: Blind Manipulation of Deformable Objects Based on Force Sensing and Finite Element Modeling

Title: Blind Manipulation of Deformable Objects Based on Force Sensing and Finite Element Modeling
Authors: Jose Sanchez, Kamal Mohy El Dine, Juan Antonio Corrales, Belhassen-Chedli Bouzgarrou and Youcef Mezouar
Journal: Frontiers in Robotics and AI, vol. 7, art. 73, 9 June 2020. doi: 10.3389/frobt.2020.00073
Abstract: In this paper, we present a novel pipeline to simultaneously estimate and manipulate the deformation of an object using only force sensing and an FEM model. The pipeline is composed of a sensor model, a deformation model and a pose controller. The sensor model computes the contact forces that are used as input to the deformation model which updates the volumetric mesh of a manipulated object. The controller then deforms the object such that a given pose on the mesh reaches a desired pose. The proposed approach is thoroughly evaluated in real experiments using a robot manipulator and a force-torque sensor to show its accuracy in estimating and manipulating deformations without the use of vision sensors.
Download paper
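
A minimal worked example of the force-to-deformation step: under a linear FEM model, sensed contact forces f drive a stiffness system K u = f, and the solution u updates the mesh nodes. The sketch below (Python/NumPy) uses a toy 1D spring chain in place of the paper’s volumetric model:

import numpy as np

def spring_chain_stiffness(n_nodes, k=100.0):
    """Assemble the stiffness matrix of a chain of identical springs, clamped at node 0."""
    K = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes - 1):
        K[i:i + 2, i:i + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K[1:, 1:]                        # drop the clamped node's row and column

n = 6
K = spring_chain_stiffness(n)
f = np.zeros(n - 1)
f[-1] = 2.0                                 # measured contact force at the tip [N]
u = np.linalg.solve(K, f)                   # nodal displacements: the deformation estimate
print("tip displacement:", u[-1])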

Paper: Adaptive multirobot formation planning to enclose and track a target with motion and visibility constraints

Title: Adaptive multirobot formation planning to enclose and track a target with motion and visibility constraints
Authors: G. López-Nicolás, M. Aranda and Y. Mezouar
Journal: IEEE Transactions on Robotics, vol. 36, no. 1, pp. 142-156, Feb. 2020.
Abstract: Addressing the problem of enclosing and tracking a target requires multiple agents with adequate motion strategies. We consider a team of unicycle robots with a standard camera on board. The robots must maintain the desired enclosing formation while dealing with their nonholonomic motion constraints. The reference formation trajectories must also guarantee permanent visibility of the target by overcoming the limited field of view of the cameras. In this article, we present a novel approach to characterize the conditions on the robots’ trajectories taking into account the motion and visual constraints. We also propose online and offline motion planning strategies to address the constraints involved in the task of enclosing and tracking the target. These strategies are based on maintaining the formation shape with variable size or, alternatively, on maintaining the size of the formation with flexible shape.
Download paper
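
The visibility constraint at the heart of the planning problem can be illustrated in a few lines of Python: a target is visible to a unicycle-mounted camera only if its bearing relative to the robot’s heading stays within half the field of view (the 30-degree half-FOV below is an assumed value, not one from the paper):

import numpy as np

def target_visible(robot_xyth, target_xy, half_fov=np.deg2rad(30)):
    """True if the target falls inside the camera's angular field of view."""
    x, y, theta = robot_xyth
    bearing = np.arctan2(target_xy[1] - y, target_xy[0] - x) - theta
    bearing = (bearing + np.pi) % (2 * np.pi) - np.pi    # wrap to [-pi, pi]
    return abs(bearing) <= half_fov

# A robot on a circular enclosing formation, camera pointing at the centre.
robot = np.array([2.0, 0.0, np.pi])                      # heading towards the origin
print(target_visible(robot, np.array([0.0, 0.0])))       # True: target centred
print(target_visible(robot, np.array([0.0, 3.0])))       # False: outside the FOV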

Paper: Multi-camera architecture for perception strategies

Title: Multi-camera architecture for perception strategies
Authors: Enrique Hernández, Gonzalo López-Nicolás and Rosario Aragüés.
Conference: IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2019), September 10-13, 2019, Zaragoza, Spain.
Abstract: Building the 3D model of an object is a complex problem that involves aspects such as modeling, control, perception and planning. Performing this task requires a set of different views to cover the entire surface of the object. Since a single camera takes too long to travel through all these positions, we consider a multi-camera scenario. Due to camera constraints such as the limited field of view or self-occlusions, it is essential to use an effective configuration strategy to select the views that provide the most information about the model. In this paper, we develop a multi-camera architecture built on the Robot Operating System. The advantages of the proposed architecture are illustrated with a formation-based algorithm that computes, for each robot of the formation, the view satisfying these constraints in order to obtain the volumetric reconstruction of the target object.

Download paper
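
The one-node-per-camera pattern of such an architecture can be sketched with rospy as below; the topic names, message type and number of cameras are assumptions for illustration, not the paper’s actual interfaces:

import rospy
from sensor_msgs.msg import PointCloud2

def make_callback(camera_id):
    def on_cloud(msg):
        # In the real system this would fuse the view into the volumetric model.
        rospy.loginfo("camera %d: cloud with %d points", camera_id, msg.width * msg.height)
    return on_cloud

if __name__ == "__main__":
    rospy.init_node("multi_camera_fusion")
    n_cameras = 3                                        # assumed formation size
    for i in range(n_cameras):
        rospy.Subscriber("/camera_%d/depth/points" % i, PointCloud2, make_callback(i))
    rospy.spin()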

Paper: Survey on multi-robot manipulation of deformable objects

Title: Survey on multi-robot manipulation of deformable objects
Authors: Rafael Herguedas, Gonzalo López-Nicolás, Rosario Aragüés and Carlos Sagüés.
Conference: IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2019), September 10-13, 2019, Zaragoza, Spain.
Abstract: Autonomous manipulation of deformable objects is a research topic of increasing interest due to the variety of current processes and applications that include this type of task. It is a complex problem that involves aspects such as modeling, control, perception, planning, grasping, estimation, etc. A single robot may be unable to perform the manipulation when the deformable object is too big, too heavy or difficult to grasp. Using multiple robots working together then naturally arises as a solution to perform the manipulation task in a coordinated way. In this paper, we contribute a survey of relevant state-of-the-art approaches concerning manipulation of deformable objects by multiple robots, which includes a specific classification with different criteria and a subsequent analysis of the leading methods, the main challenges and the future research directions.

Download paper

Paper: Multi-camera coverage of deformable contour shapes

Title: Multi-camera coverage of deformable contour shapes
Authors: Rafael Herguedas, Gonzalo López-Nicolás and Carlos Sagüés.
Conference: IEEE International Conference on Automation Science and Engineering (CASE 2019), August 22-26, 2019, Vancouver, BC, Canada.
Abstract: Perception of deformation is a key problem when dealing with autonomous manipulation of deformable objects. In particular, this work is motivated by tasks where the manipulated object follows a prescribed, known deformation, and the goal is to achieve a desired coverage of the object’s contour throughout the deformation. The main contribution is a simple yet effective novel perception system in which a team of robots equipped with limited field-of-view cameras covers the object’s contour according to a prescribed visibility objective. In order to define a feasible visibility objective, we propose a new method for obtaining the maximum achievable visibility of a contour from a circumference around its centroid. We then define a constrained optimization problem and solve it iteratively to compute the minimum number of cameras, and their near-optimal positions around the object, that guarantee the visibility objective over the entire deformation process.

Download paper
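
A simplified stand-in for this computation is a greedy sweep: place cameras one at a time on the circumference, each time choosing the position whose field of view newly covers the most contour points. The paper solves a constrained optimization problem instead; the Python sketch below (radius, FOV and contour are illustrative) only conveys the flavour of the minimum-camera search:

import numpy as np

def min_cameras(contour, radius=2.0, half_fov=np.deg2rad(25), n_slots=360):
    """Greedily place cameras on a circle until every contour point is seen."""
    centroid = contour.mean(axis=0)
    slots = np.linspace(-np.pi, np.pi, n_slots, endpoint=False)
    covered = np.zeros(len(contour), dtype=bool)
    chosen = []
    while not covered.all():
        best_a, best_gain, best_vis = None, 0, None
        for a in slots:
            cam = centroid + radius * np.array([np.cos(a), np.sin(a)])
            axis = np.arctan2(centroid[1] - cam[1], centroid[0] - cam[0])
            bear = np.arctan2(contour[:, 1] - cam[1], contour[:, 0] - cam[0]) - axis
            bear = (bear + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
            vis = np.abs(bear) <= half_fov
            gain = np.count_nonzero(vis & ~covered)
            if gain > best_gain:
                best_a, best_gain, best_vis = a, gain, vis
        if best_gain == 0:                               # no position adds coverage
            break
        covered |= best_vis
        chosen.append(best_a)
    return chosen

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ellipse = np.c_[0.8 * np.cos(t), 0.4 * np.sin(t)]        # one deformation snapshot
print("cameras needed:", len(min_cameras(ellipse)))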

Paper: Tactile-driven grasp stability and slip prediction

Title: Tactile-driven grasp stability and slip prediction

Authors: B.S. Zapata-Impata, P. Gil, F. Torres
Journal: Robotics 2019, 8, 85.
Abstract: One of the challenges in robotic grasping tasks is the problem of detecting whether a grip is stable or not. The lack of stability during a manipulation operation usually causes the slippage of the grasped object due to poor contact forces. Frequently, an unstable grip can be caused by an inadequate pose of the robotic hand or by insufficient contact pressure, or both. The use of tactile data is essential to check such conditions and, therefore, predict the stability of a grasp. In this work, we present and compare different methodologies based on deep learning in order to represent and process tactile data for both stability and slip prediction.
Download paper
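
One of the compared representations, the matrix-like "tactile image", can be sketched in a few lines of PyTorch: a taxel pressure map is fed to a small CNN whose output logits distinguish stable from slipping grasps (the 4x4 grid and layer sizes are illustrative assumptions, not the paper’s architectures):

import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 2),           # two classes: stable / slip
        )

    def forward(self, x):
        return self.net(x)

model = TactileCNN()
reading = torch.randn(8, 1, 4, 4)               # a batch of 4x4 taxel pressure maps
logits = model(reading)
print(logits.shape)                             # torch.Size([8, 2])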

Paper: TactileGCN: A graph convolutional network for predicting grasp stability with tactile sensors

Title: TactileGCN: A graph convolutional network for predicting grasp stability with tactile sensors
Authors: A. Garcia-Garcia, B.S. Zapata-Impata, S. Orts-Escolano, P. Gil, J. García
Conference: International Joint Conference on Neural Networks (IJCNN), 14-19 July 2019
Abstract: Tactile sensors provide useful contact data during the interaction with an object, which can be used to learn to accurately determine the stability of a grasp. Most works in the literature represent tactile readings as plain feature vectors or matrix-like tactile images, using them to train machine learning models. In this work, we explore an alternative way of exploiting tactile information to predict grasp stability by leveraging graph-like representations of tactile data, which preserve the actual spatial arrangement of the sensor’s taxels and their locality. In experimentation, we trained a graph neural network to classify grasps as either stable or slippery. To train such a network and prove its predictive capabilities for the problem at hand, we captured a novel dataset of ~5000 three-fingered grasps across 41 objects for training and 1000 grasps with 10 unknown objects for testing. Our experiments prove that this novel approach can be effectively used to predict grasp stability.
Download paper
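
The graph representation can be illustrated without any GNN library: taxels become nodes connected to their grid neighbours, and a single hand-rolled GCN propagation step mixes pressure features over that adjacency. The 4-connected 4x4 layout and the feature sizes below are our assumptions; the paper’s sensors and network are more involved:

import torch

def grid_adjacency(rows, cols):
    """Normalised adjacency D^-1/2 (A+I) D^-1/2 of a 4-connected taxel grid."""
    n = rows * cols
    A = torch.eye(n)                             # self-loops (the +I term)
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((1, 0), (0, 1)):      # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    j = rr * cols + cc
                    A[i, j] = A[j, i] = 1.0
    d = A.sum(1).rsqrt()
    return d[:, None] * A * d[None, :]

A_hat = grid_adjacency(4, 4)
X = torch.randn(16, 1)                           # one pressure value per taxel
W = torch.randn(1, 8)                            # weights of a single GCN layer
H = torch.relu(A_hat @ X @ W)                    # one propagation step over the graph
print(H.shape)                                   # torch.Size([16, 8])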