Paper: Prediction of tactile perception from vision on deformable objects

Title: Prediction of tactile perception from vision on deformable objects

Authors: Brayan S. Zapata-Impata and Pablo Gil

Workshop: Robotic Manipulation of Deformable Objects (ROMADO), 25 October – 25 December, 2020

Abstract: Through tactile perception, a manipulator can estimate the stability of its grip, among other things. However, tactile sensors are only activated upon contact. In contrast, humans can estimate how touching an object would feel from its visual appearance. Providing robots with this ability to generate tactile perception from vision is desirable for autonomy. To accomplish this, we propose using a Generative Adversarial Network. Our system learns to generate tactile responses using a visual representation of the object and target grasping data as stimuli. Since collecting labeled samples of robotic tactile responses consumes hardware resources and time, we apply semi-supervised techniques. For this work, we collected 4000 samples with 4 deformable items and experiment with 4 tactile modalities.
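The conditional-GAN idea in the abstract can be illustrated with a minimal sketch: a generator maps (visual features, grasp parameters, noise) to a tactile response, and a discriminator scores (stimulus, response) pairs. All dimensions, weights, and names here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's actual architecture).
VIS_DIM, GRASP_DIM, NOISE_DIM, TACTILE_DIM, HID = 32, 6, 8, 24, 64

def mlp_params(d_in, d_out):
    """Random weights for a tiny two-layer MLP."""
    return rng.normal(0, 0.1, (d_in, HID)), rng.normal(0, 0.1, (HID, d_out))

G_W1, G_W2 = mlp_params(VIS_DIM + GRASP_DIM + NOISE_DIM, TACTILE_DIM)
D_W1, D_W2 = mlp_params(VIS_DIM + GRASP_DIM + TACTILE_DIM, 1)

def generator(visual, grasp, noise):
    """Map (visual features, grasp parameters, noise) to a tactile response."""
    x = np.concatenate([visual, grasp, noise])
    return np.tanh(np.maximum(x @ G_W1, 0) @ G_W2)

def discriminator(visual, grasp, tactile):
    """Score how plausible a (stimulus, tactile response) pair looks."""
    x = np.concatenate([visual, grasp, tactile])
    logit = (np.maximum(x @ D_W1, 0) @ D_W2)[0]
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid -> probability

visual = rng.normal(size=VIS_DIM)            # e.g. an encoded view of the object
grasp = rng.normal(size=GRASP_DIM)           # e.g. target grasp parameters
fake = generator(visual, grasp, rng.normal(size=NOISE_DIM))

# Non-saturating GAN loss for this generated pair: the generator
# is rewarded when the discriminator believes the sample is real.
p_fake = discriminator(visual, grasp, fake)
g_loss = -np.log(p_fake + 1e-9)
```

In training, the discriminator would additionally be shown real (stimulus, tactile) samples, and both networks would be updated adversarially; the semi-supervised extension mentioned in the abstract would also exploit unlabeled visual samples.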

Download paper

Paper: Simultaneous shape control and transport with multiple robots

Title: Simultaneous shape control and transport with multiple robots

Authors: G. López-Nicolás, R. Herguedas, M. Aranda, Y. Mezouar.

Conference: IEEE International Conference on Robotic Computing (IRC), pp. 218-225, 2020.

Abstract: Autonomous transport of objects may require multiple robots when the object is large or heavy. Moreover, in the case of deformable objects, a set of robots may also be needed to maintain or adapt the shape of the object to the task requirements. The task we address consists of transporting an object, represented as a two-dimensional shape or contour, along a desired path. Simultaneously, the team of robots grasping the object is controlled to the desired contour points configuration. Since the mobile robots of the team obey nonholonomic motion constraints, admissible trajectories are designed to keep the integrity of the object while following the prescribed path. Additionally, the simultaneous control of the object’s shape is performed smoothly to respect the admissible deformation of the object. The main contribution lies in the definition of the grasping robots’ trajectories dealing with the involved constraints. Different simulations, where the deformable object dynamics are modelled with consensus-based techniques, illustrate the performance of the approach.
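The consensus-based modelling of the deformable contour can be sketched as follows: contour points are coupled to their neighbours by a discrete Laplacian (consensus) term, while a few points, held by the robots, are driven to a desired configuration. Grasp indices, gains, and the target configuration below are illustrative assumptions.

```python
import numpy as np

# Closed contour of N points; the indices in GRASPED are held by robots.
N = 12
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # unit circle

GRASPED = [0, 4, 8]                  # hypothetical grasping indices
FREE = [i for i in range(N) if i not in GRASPED]
targets = 1.5 * contour[GRASPED]     # robots move to a scaled configuration

dt, k = 0.1, 1.0
for _ in range(400):
    # Consensus term: each free point is pulled toward its two
    # contour neighbours (discrete Laplacian on the ring graph).
    laplacian = (np.roll(contour, 1, axis=0) + np.roll(contour, -1, axis=0)
                 - 2 * contour)
    contour[FREE] += dt * k * laplacian[FREE]
    contour[GRASPED] = targets       # robots rigidly hold their grasp points
```

At convergence the free points settle into the consensus equilibrium between the grasped points, which is how a contour shape can be steered by actuating only a subset of its points.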

Download paper

Video

Paper: Distributed relative localization using the multi-dimensional weighted centroid

Title: Distributed relative localization using the multi-dimensional weighted centroid
Authors: R. Aragüés, A. González, G. López-Nicolás, C. Sagüés.
Journal: IEEE Transactions on Control of Network Systems, vol. 7, pp. 1272-1282, 2020.

Example with 10 agents in a chain graph. Evolution over iterations of the estimated x-coordinate relative to the weighted centroid of the team. Top: the ringing oscillatory behavior can be observed for h = 0.99; at each step, the estimates change their values sharply. Bottom: the ringing oscillatory behavior is removed with h = 0.49, and the estimates now converge smoothly.


Abstract: A key problem in multi-agent systems is the distributed estimation of the agents’ localization in a common reference frame from relative measurements. Estimates can be referred to an anchor node or, as we do here, to the weighted centroid of the multi-agent system. We propose a Jacobi Over-Relaxation method for distributed estimation of the weighted centroid of the multi-agent system from noisy relative measurements. Contrary to previous approaches, we consider relative multi-dimensional measurements with general covariance matrices that are not necessarily diagonal. We analyze the method’s convergence and provide mathematical constraints that avoid ringing phenomena. We also prove that our weighted centroid method converges faster than anchor-based solutions.
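The Jacobi Over-Relaxation (JOR) update underlying the method, and the role of the relaxation parameter h shown in the figure above, can be sketched on a toy linear system (the 3×3 chain-like matrix below is illustrative, not the paper's measurement matrices):

```python
import numpy as np

def jor(A, b, h, iters=200):
    """Jacobi over-relaxation: x <- (1 - h) x + h D^{-1} (b - R x),
    where A = D + R, with D the diagonal of A and R the off-diagonal part."""
    D = np.diag(A)                    # diagonal entries (assumed nonzero)
    R = A - np.diag(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (1 - h) * x + h * (b - R @ x) / D
    return x

# Toy chain-graph system (illustrative only).
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
b = np.array([1.0, 0.0, 1.0])

x_smooth = jor(A, b, h=0.49)   # smaller h: smooth, non-ringing convergence
x_ring = jor(A, b, h=0.99)     # h near 1: convergent here, but oscillatory steps
```

Both runs converge to the solution of Ax = b; the difference is the transient: with h close to 1 the iteration matrix has eigenvalues near -1 in magnitude, producing the sharp sign-alternating "ringing" steps seen in the top plot, while a smaller h damps them.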
Download paper

Paper: Dynamic occlusion handling for real time object perception

Title: Dynamic occlusion handling for real time object perception

Authors: Ignacio Cuiral-Zueco and Gonzalo López-Nicolás

Conference: International Conference on Robotics and Automation Engineering (ICRAE 2020), November 20-22, 2020

Abstract: An RGB-D based occlusion-handling camera position computation method for proper object perception has been designed and implemented. This proposal is an improved alternative to our previous optimisation-based approach, with a twofold contribution: the new method is geometry-based, and it is able to handle dynamic occlusions. The approach makes extensive use of a ray-projection model, a key aspect being that the solution space is defined on a sphere surface around the object. The method has been designed with robotic applications in view and therefore provides robust and versatile features: it requires neither training nor prior knowledge of the scene, making it suitable for diverse applications and scenarios. Satisfactory results have been obtained in real-time experiments.
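The core idea (candidate camera positions on a sphere around the object, scored by ray projection against occluders) can be sketched in a few lines. The point clouds, occluder radius, and scoring below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

obj = rng.normal(0.0, 0.05, (200, 3))      # object point cloud near the origin
occluder = np.array([[0.3, 0.0, 0.0]])     # a single occluding point (illustrative)

def sphere_candidates(radius=1.0, n=64):
    """Quasi-uniform candidate camera positions on a sphere around the object."""
    i = np.arange(n)
    phi = np.arccos(1 - 2 * (i + 0.5) / n)        # inclination
    lam = np.pi * (1 + 5 ** 0.5) * i              # golden-angle azimuth
    return radius * np.stack([np.sin(phi) * np.cos(lam),
                              np.sin(phi) * np.sin(lam),
                              np.cos(phi)], axis=1)

def occluded_fraction(cam, pts, occ, r_occ=0.1):
    """Fraction of object points whose ray to the camera passes within
    r_occ of some occluder point (a simple ray-projection test)."""
    d = cam - pts                                  # rays from points to camera
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    v = occ[:, None, :] - pts[None, :, :]          # occluder relative to each point
    t = np.einsum('onk,nk->on', v, d)              # projection along each ray
    closest = v - t[..., None] * d[None, :, :]     # perpendicular offset
    hit = (t > 0) & (np.linalg.norm(closest, axis=2) < r_occ)
    return hit.any(axis=0).mean()

cams = sphere_candidates()
scores = np.array([occluded_fraction(c, obj, occluder) for c in cams])
best_cam = cams[np.argmin(scores)]                 # least-occluded viewpoint
```

Here a brute-force scoring over all candidates stands in for the geometric selection described in the paper; cameras on the far side of the occluder score zero and are preferred.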

Conference website

Paper: Generation of tactile data from 3D vision and target robotic grasps

Title: Generation of tactile data from 3D vision and target robotic grasps
Authors: B.S. Zapata-Impata, P. Gil, Y. Mezouar, F. Torres
Journal: IEEE Transactions on Haptics, July 2020


Abstract: Tactile perception is a rich source of information for robotic grasping: it allows a robot to identify a grasped object and assess the stability of a grasp, among other things. However, the tactile sensor must come into contact with the target object in order to produce readings. As a result, tactile data can only be obtained if real contact is made. We propose to overcome this restriction with a method that models the behaviour of a tactile sensor using 3D vision and grasp information as stimuli. Our system regresses the quantified tactile response that would be experienced if the grasp were performed on the object. We experiment with 16 items and 4 tactile data modalities to show that our proposal learns this task with low error.
Paper at IEEE

Paper: Monocular visual shape tracking and servoing for isometrically deforming objects

Title: Monocular visual shape tracking and servoing for isometrically deforming objects
Authors: Miguel Aranda, Juan Antonio Corrales Ramon, Youcef Mezouar, Adrien Bartoli, Erol Özgür
Conference: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 25-29, 2020, Las Vegas, NV, USA (Virtual).

Abstract: We address the monocular visual shape servoing problem. This pushes the challenging visual servoing problem one step further, from rigid object manipulation towards deformable object manipulation. Concretely, it entails robots deforming the object towards a desired shape in 3D space using monocular 2D vision. We specifically concentrate on a scheme capable of controlling large isometric deformations. Two important open subproblems arise when implementing such a scheme. (P1) Since it is concerned with large deformations, perception requires tracking the deformable object’s 3D shape from monocular 2D images, which is a severely underconstrained problem. (P2) Since rigid robots have fewer degrees of freedom than a deformable object, the shape control becomes underactuated. We propose a template-based shape servoing scheme in which we solve these two problems. The template allows us both to infer the object’s shape using an improved Shape-from-Template algorithm and to steer the object’s deformation by means of the robots’ movements. We validate the scheme via simulations and real experiments.
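The underactuation in (P2) can be illustrated with a generic least-squares servoing law: the shape feature vector has more dimensions than the robots' control inputs, so the error can only be cancelled inside the subspace the inputs actually reach. The Jacobian, dimensions, and gain below are illustrative assumptions, not the paper's controller.

```python
import numpy as np

rng = np.random.default_rng(3)

# Shape feature vector s (e.g. stacked 3D mesh points) has more dimensions
# than the robots' control inputs q: the system is underactuated.
n_shape, n_ctrl = 30, 6
J = rng.normal(size=(n_shape, n_ctrl))     # deformation Jacobian ds/dq (illustrative)

s = rng.normal(size=n_shape)               # current tracked shape
s_des = s + 0.1 * rng.normal(size=n_shape) # desired shape

gain = 0.5
for _ in range(100):
    # Least-squares resolution of the underactuated servoing law.
    dq = gain * np.linalg.pinv(J) @ (s_des - s)
    s = s + J @ dq                          # shape response predicted by J
```

The shape converges to the closest reachable shape: the component of the error lying in the column space of J is driven to zero, while the orthogonal component (the part the robots cannot actuate) remains.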

Paper download

Paper: RGB-D tracking and optimal perception of deformable objects

Title: RGB-D tracking and optimal perception of deformable objects
Authors: Ignacio Cuiral-Zueco, Gonzalo López-Nicolás
Journal: IEEE Access, vol. 8, pp. 136884-136897, 2020.


Abstract: Addressing the perception of texture-less objects that undergo large deformations and movements, this article presents a novel learning-free RGB-D deformable object tracker combined with a camera position optimisation system for optimal deformable object perception. The approach is based on discretising the object’s visible area into a supervoxel graph, which allows weighting new supervoxel candidates between object states over time. Once a deformation state of the object is determined, the supervoxels of its associated graph serve as input to the camera position optimisation problem. Satisfactory results have been obtained in real time with a variety of objects that present different deformation characteristics.
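The "weighting new supervoxel candidates between object states" step can be sketched as a weighted nearest-neighbour association between the previous graph's nodes and the new frame's candidate supervoxels. The features (centroid position, mean colour), weights, and data below are illustrative assumptions, not the paper's tracker.

```python
import numpy as np

rng = np.random.default_rng(4)

# Previous object state: supervoxel centroids + mean colours (graph nodes).
prev_xyz = rng.uniform(0, 1, (20, 3))
prev_rgb = rng.uniform(0, 1, (20, 3))

# New frame: the same supervoxels slightly moved/recoloured, plus clutter
# supervoxels far from the object.
cand_xyz = np.vstack([prev_xyz + rng.normal(0, 0.01, (20, 3)),
                      rng.uniform(2, 3, (10, 3))])
cand_rgb = np.vstack([prev_rgb + rng.normal(0, 0.01, (20, 3)),
                      rng.uniform(0, 1, (10, 3))])

W_POS, W_COL = 1.0, 0.3                    # illustrative feature weights
cost = (W_POS * np.linalg.norm(prev_xyz[:, None] - cand_xyz[None], axis=2)
        + W_COL * np.linalg.norm(prev_rgb[:, None] - cand_rgb[None], axis=2))
match = cost.argmin(axis=1)                # best candidate per graph node
```

Each graph node is associated with the candidate minimising the weighted feature distance, so the clutter supervoxels are never selected and the graph propagates to the new deformation state.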
Download paper

Paper: Robotic workcell for sole grasping in footwear manufacturing

Title: Robotic workcell for sole grasping in footwear manufacturing
Authors: Guillermo Oliver, Pablo Gil, Fernando Torres
Conference: 25th Int. Conf. on Emerging Technologies and Factory Automation (ETFA), Vienna (Austria), 8-11 September 2020.

 
Abstract: The goal of this paper is to present a robotic workcell to automate several tasks of the cementing process in footwear manufacturing. Our cell’s main applications are sole digitization for a wide variety of footwear, glue dispensing, and sole grasping from conveyor belts. The cell is made up of a manipulator arm endowed with a gripper, a conveyor belt and a 3D scanner. We have integrated all the elements into a ROS simulation environment, facilitating control and communication among them and providing flexibility to support future extensions. We propose a novel method to grasp soles of different shape, size and material, exploiting the particular characteristics of these objects. Our method relies on object contour extraction using concave hulls. We evaluate it on point clouds of 16 digitized real soles in three different scenarios: concave hull, k-NN extension and PCA correction. While we have tested this workcell in a simulated environment, the system’s performance is scheduled to be evaluated on a real setup at INESCOP facilities in the upcoming months.
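The PCA correction step mentioned among the evaluation scenarios can be illustrated as follows: the principal axes of the sole's point cloud give a canonical frame (length, width, normal) in which the contour and grasp can be computed. The synthetic "sole-like" cloud below is an illustrative assumption, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic sole-like point cloud: an elongated, nearly flat blob,
# rotated and translated into an arbitrary scanner frame.
n = 500
pts = np.stack([rng.normal(0, 1.0, n),       # length axis
                rng.normal(0, 0.3, n),       # width axis
                rng.normal(0, 0.01, n)], 1)  # near-flat thickness
angle = 0.7
R = np.array([[np.cos(angle), 0, np.sin(angle)],
              [0, 1, 0],
              [-np.sin(angle), 0, np.cos(angle)]])
cloud = pts @ R.T + np.array([0.1, -0.2, 0.5])

def pca_frame(points):
    """Principal axes of a point cloud: centroid + covariance eigenvectors,
    sorted by decreasing variance (axis 0 = length, axis 2 = sole normal)."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    w, v = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
    order = np.argsort(w)[::-1]
    return c, v[:, order]

centroid, axes = pca_frame(cloud)
aligned = (cloud - centroid) @ axes   # cloud expressed in its own PCA frame
```

In the aligned frame the sole's long axis is the first coordinate, which makes the subsequent contour extraction and grasp-pose computation orientation-independent.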

Paper at IEEE

Paper: Blind Manipulation of Deformable Objects Based on Force Sensing and Finite Element Modeling

Title: Blind Manipulation of Deformable Objects Based on Force Sensing and Finite Element Modeling
Authors: Jose Sanchez, Kamal Mohy El Dine, Juan Antonio Corrales, Belhassen-Chedli Bouzgarrou and Youcef Mezouar
Journal: Frontiers in Robotics and AI. 09 June 2020. 7:73. doi: 10.3389/frobt.2020.00073
Abstract: In this paper, we present a novel pipeline to simultaneously estimate and manipulate the deformation of an object using only force sensing and an FEM model. The pipeline is composed of a sensor model, a deformation model and a pose controller. The sensor model computes the contact forces that are used as input to the deformation model, which updates the volumetric mesh of the manipulated object. The controller then deforms the object such that a given pose on the mesh reaches a desired pose. The proposed approach is thoroughly evaluated in real experiments using a robot manipulator and a force-torque sensor, showing its accuracy in estimating and manipulating deformations without the use of vision sensors.
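The deformation-model step (sensed contact forces in, mesh displacements out) can be sketched with the smallest possible finite element model: a 1-D chain of linear elements standing in for the volumetric mesh. The stiffnesses and force below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# 1-D chain of linear elements (springs) as a minimal stand-in for the
# volumetric FEM mesh: node 0 is clamped, a contact force acts on the tip.
k = np.array([100.0, 100.0, 50.0])         # element stiffnesses [N/m]
n_nodes = len(k) + 1

# Assemble the global stiffness matrix from the 2x2 element matrices.
K = np.zeros((n_nodes, n_nodes))
for e, ke in enumerate(k):
    K[e:e + 2, e:e + 2] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Sensor-model output: estimated contact force on the tip node [N].
f = np.zeros(n_nodes)
f[-1] = 2.0

# Deformation model: solve K u = f on the free nodes (node 0 clamped).
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
```

For springs in series the tip displacement is F * (1/k1 + 1/k2 + 1/k3) = 2.0 * 0.04 = 0.08 m, which the solve reproduces; the pose controller would then command robot motions so that a chosen mesh node reaches its desired pose.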
Download paper