Paper: 3D reconstruction of deformable objects from RGB-D cameras: an omnidirectional inward-facing multi-camera system

Title: 3D reconstruction of deformable objects from RGB-D cameras: an omnidirectional inward-facing multi-camera system
Authors: Eva Curto, Helder Araujo
Conference: 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP’2021)

Abstract: This paper describes a system made up of several inward-facing cameras able to reconstruct deformable objects through synchronous acquisition of RGB-D data. The configuration of the camera system allows the acquisition of 3D omnidirectional images of the objects. The paper describes the structure of the system as well as an approach for the extrinsic calibration, which allows the estimation of the coordinate transformations between the cameras. Reconstruction results are also presented.
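The extrinsic calibration step estimates rigid-body transformations between camera frames. As a minimal illustration of what those transformations are used for (made-up extrinsics, not the paper's calibration procedure), the following sketch composes 4x4 homogeneous transforms to express a point seen by one camera in another camera's frame:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical extrinsics: camera B w.r.t. camera A, camera C w.r.t. camera B.
T_ab = make_transform(rot_z(np.pi / 2), [1.0, 0.0, 0.0])
T_bc = make_transform(rot_z(np.pi / 2), [0.0, 1.0, 0.0])

# Composition gives camera C directly in camera A's frame.
T_ac = T_ab @ T_bc

# A point observed by camera C, expressed in camera A's coordinates.
p_c = np.array([0.5, 0.0, 2.0, 1.0])
p_a = T_ac @ p_c
```

Chaining pairwise extrinsics like this is what lets an inward-facing multi-camera rig merge all depth maps into one common reference frame.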
Download paper

Paper: Intel RealSense SR305, D415 and L515: Experimental evaluation and comparison of depth estimation

Title: Intel RealSense SR305, D415 and L515: Experimental evaluation and comparison of depth estimation
Authors: Francisco Lourenco, Helder Araujo
Conference: 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP’2021)

Abstract: In the last few years Intel has launched several low-cost RGB-D cameras. Three of these cameras are the SR305, the D415 and the L515. These three cameras are based on different operating principles: the SR305 is based on structured light projection, the D415 on active stereo aided by the projection of a random dot pattern, and the L515 on LiDAR. In addition, they all provide RGB images. In this paper we perform an experimental analysis and comparison of the depth estimation of the three cameras.
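A common quantitative protocol for comparing depth cameras is to image a flat target and measure residuals against a best-fit plane. The sketch below (synthetic noisy data standing in for real captures; not necessarily the paper's protocol) computes such a plane-fit RMSE with numpy:

```python
import numpy as np

def plane_fit_rmse(points):
    """Fit a plane to an Nx3 point cloud by least squares (SVD) and
    return the RMS of the out-of-plane residuals."""
    centroid = points.mean(axis=0)
    # The singular vector of the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    return np.sqrt(np.mean(residuals ** 2))

# Synthetic "flat wall" at z = 1 m with simulated depth noise (sigma = 2 mm).
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(-0.5, 0.5, 50), np.linspace(-0.5, 0.5, 50))
z = 1.0 + rng.normal(0.0, 0.002, x.shape)
cloud = np.column_stack([x.ravel(), y.ravel(), z.ravel()])

print(f"plane-fit RMSE: {plane_fit_rmse(cloud) * 1000:.2f} mm")
```

With real captures, repeating this at several distances for each camera gives comparable per-sensor depth-noise curves.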
Download paper

Paper: RGB-D Sensing of Challenging Deformable Objects

Title: RGB-D Sensing of Challenging Deformable Objects

Authors: Ignacio Cuiral-Zueco and Gonzalo Lopez-Nicolas

Workshop: Workshop on Managing deformation: A step towards higher robot autonomy (MaDef), 25 October – 25 December, 2020

Abstract: The problem of deformable object tracking is prominent in recent robot shape-manipulation research. Additionally, texture-less objects that undergo large deformations and movements lead to difficult scenarios. Three RGB-D sequences of different challenging scenarios are processed in order to evaluate the robustness and versatility of a deformable object tracking method. Everyday objects with different complex characteristics are manipulated and tracked. The tracking system, pushed out of its comfort zone, performs satisfactorily.

Webpage

Paper: Experimental multi-camera setup for perception of dynamic objects

Title: Experimental multi-camera setup for perception of dynamic objects

Authors: Rafael Herguedas, Gonzalo Lopez-Nicolas and Carlos Sagues

Workshop: Robotic Manipulation of Deformable Objects (ROMADO), 25 October – 25 December, 2020

Abstract: Currently, perception and manipulation of dynamic objects represent an open research problem. In this paper, we show a proof of concept of a multi-camera robotic setup which is intended to perform coverage of dynamic objects. The system includes a set of RGB-D cameras, which are positioned and oriented to cover the object’s contour as required in terms of visibility. An algorithm from a previous study allows us to minimize the number of cameras and configure them so that collisions and occlusions are avoided. We test the validity of the platform with the Robot Operating System (ROS) in simulations with the software Gazebo and in real experiments with Intel RealSense modules.
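A minimal geometric sketch of the contour-visibility idea (a convex object contour with a per-camera back-face test; hypothetical scene, not the paper's algorithm) can be written as:

```python
import numpy as np

def visible_fraction(contour, normals, cam_positions):
    """Fraction of contour points facing at least one camera. For a convex
    contour, a point is visible from a camera when its outward normal has a
    positive component toward that camera (a simple self-occlusion test)."""
    covered = np.zeros(len(contour), dtype=bool)
    for cam in cam_positions:
        to_cam = cam - contour
        covered |= np.einsum('ij,ij->i', to_cam, normals) > 0
    return covered.mean()

# Hypothetical object contour: unit circle sampled at 100 points
# (outward normals coincide with the points themselves).
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
contour = np.column_stack([np.cos(theta), np.sin(theta)])
normals = contour.copy()

# Three cameras evenly spread on a circle of radius 3 around the object.
cam_angles = np.array([0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0])
cams = 3.0 * np.column_stack([np.cos(cam_angles), np.sin(cam_angles)])

one_cam = visible_fraction(contour, normals, cams[:1])
all_cams = visible_fraction(contour, normals, cams)
```

One camera sees well under half of this contour, while three evenly spaced cameras cover all of it, which is the kind of coverage criterion the setup is built to satisfy.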

Download paper

Paper: Prediction of tactile perception from vision on deformable objects

Title: Prediction of tactile perception from vision on deformable objects

Authors: Brayan S. Zapata-Impata and Pablo Gil

Workshop: Robotic Manipulation of Deformable Objects (ROMADO), 25 October – 25 December, 2020

Abstract: Through the use of tactile perception, a manipulator can estimate the stability of its grip, among other things. However, tactile sensors are only activated upon contact. In contrast, humans can estimate the feeling of touching an object from its visual appearance. Providing robots with this ability to generate tactile perception from vision is desirable to achieve autonomy. To accomplish this, we propose using a Generative Adversarial Network. Our system learns to generate tactile responses using as stimulus a visual representation of the object and target grasping data. Since collecting labeled samples of robotic tactile responses consumes hardware resources and time, we apply semi-supervised techniques. For this work, we collected 4000 samples with 4 deformable items and experiment with 4 tactile modalities.

Download paper

Paper: Simultaneous shape control and transport with multiple robots

Title: Simultaneous shape control and transport with multiple robots

Authors: G. López-Nicolás, R. Herguedas, M. Aranda, Y. Mezouar

Conference: IEEE International Conference on Robotic Computing (IRC), pp. 218-225, 2020.

Abstract: Autonomous transport of objects may require multiple robots when the object is large or heavy. Besides, in the case of deformable objects, a set of robots may also be needed to maintain or adapt the shape of the object to the task requirements. The task we address consists of transporting an object, represented as a two-dimensional shape or contour, along a desired path. Simultaneously, the team of robots grasping the object is controlled to the desired configuration of contour points. Since the mobile robots of the team obey nonholonomic motion constraints, admissible trajectories are designed to keep the integrity of the object while following the prescribed path. Additionally, the simultaneous control of the object’s shape is smoothly performed to respect the admissible deformation of the object. The main contribution lies in the definition of the grasping robots’ trajectories dealing with the involved constraints. Different simulations, where the deformable object dynamics are modelled with consensus-based techniques, illustrate the performance of the approach.
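The paper's controller respects nonholonomic and deformation constraints; a stripped-down single-integrator consensus sketch (made-up shape, gain and waypoint) nonetheless conveys the simultaneous "shape + transport" objective:

```python
import numpy as np

def consensus_step(points, desired_offsets, centroid_target, gain=0.3):
    """One step of a consensus-style update: each point moves so that its
    offset from the team centroid approaches the desired shape, while the
    centroid itself is steered toward a path waypoint."""
    centroid = points.mean(axis=0)
    shape_term = (centroid + desired_offsets) - points   # restore desired shape
    transport_term = centroid_target - centroid          # follow the path
    return points + gain * (shape_term + transport_term)

# Desired contour: a square (offsets sum to zero), transported to (5, 0).
desired = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
points = np.array([[2, 0.5], [0, 2], [-2, -1], [1, -2]], dtype=float)
goal = np.array([5.0, 0.0])

for _ in range(50):
    points = consensus_step(points, desired, goal)
```

After a few dozen iterations the grasp points converge to the desired square centered on the waypoint; the actual method additionally shapes the trajectories so deformation stays admissible along the way.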

Download paper

Video

Paper: Distributed relative localization using the multi-dimensional weighted centroid

Title: Distributed relative localization using the multi-dimensional weighted centroid
Authors: R. Aragüés, A. González, G. López-Nicolás, C. Sagüés
Journal: IEEE Transactions on Control of Network Systems, vol. 7, pp. 1272-1282, 2020.

Figure: example with 10 agents in a chain graph, showing the evolution over iterations of the estimated x-coordinate relative to the weighted centroid of the team. Top: for h = 0.99 the estimates exhibit a ringing oscillatory behavior, changing their values sharply at each step. Bottom: with h = 0.49 the ringing is removed and the estimates now converge smoothly.

Abstract: A key problem in multi-agent systems is the distributed estimation of the localization of agents in a common reference frame from relative measurements. Estimations can be referred to an anchor node or, as we do here, to the weighted centroid of the multi-agent system. We propose a Jacobi Over-Relaxation method for distributed estimation of the weighted centroid of the multi-agent system from noisy relative measurements. Contrary to previous approaches, we consider relative multi-dimensional measurements with general covariance matrices, not necessarily diagonal. We analyze the method's convergence and provide mathematical constraints that ensure the ringing phenomenon is avoided. We also prove that our weighted centroid method converges faster than anchor-based solutions.
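A stripped-down scalar, unweighted sketch of a Jacobi Over-Relaxation localization scheme (noiseless measurements on a chain graph, nothing like the paper's general-covariance formulation) shows the role of the relaxation factor h:

```python
import numpy as np

def jor_localization(n, edges, z, h, iters):
    """Jacobi Over-Relaxation (JOR) estimation of scalar node positions from
    relative measurements z[(i, j)] ~ x_j - x_i. The result is expressed
    relative to the (unweighted) centroid of the team."""
    neighbors = {i: [] for i in range(n)}
    for (i, j) in edges:
        neighbors[i].append((j, -z[(i, j)]))  # x_i ~ x_j - z_ij
        neighbors[j].append((i, +z[(i, j)]))  # x_j ~ x_i + z_ij
    x = np.zeros(n)
    for _ in range(iters):
        x_new = np.empty(n)
        for i in range(n):
            # Average of the neighbors' implied estimates of x_i, blended
            # with the previous value through the relaxation factor h.
            est = np.mean([x[j] + c for j, c in neighbors[i]])
            x_new[i] = (1.0 - h) * x[i] + h * est
        x = x_new
    return x - x.mean()

# Chain graph of 10 agents, as in the figure above. A chain is bipartite, so
# h close to 1 produces the ringing oscillations; a smaller h (e.g. 0.49)
# damps them and the estimates converge smoothly.
truth = np.arange(10.0)
edges = [(i, i + 1) for i in range(9)]
z = {(i, j): truth[j] - truth[i] for (i, j) in edges}
estimate = jor_localization(10, edges, z, h=0.49, iters=500)
```

Each agent only ever uses its neighbors' current estimates and the shared edge measurements, which is what makes the scheme distributed.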
Download paper

Paper: Dynamic occlusion handling for real time object perception

Title: Dynamic occlusion handling for real time object perception

Authors: Ignacio Cuiral-Zueco and Gonzalo Lopez-Nicolas

Conference: International Conference on Robotics and Automation Engineering (ICRAE 2020), November 20-22, 2020

Abstract: An RGB-D based method for computing occlusion-handling camera positions for proper object perception has been designed and implemented. This proposal is an improved alternative to our previous optimisation-based approach, with a twofold contribution: the new method is geometric-based and it is also able to handle dynamic occlusions. The approach makes extensive use of a ray-projection model, a key aspect being that the solution space is defined on a sphere surface around the object. The method has been designed with a view to robotic applications and therefore provides robust and versatile features: it requires neither training nor prior knowledge of the scene, making it suitable for diverse applications and scenarios. Satisfactory results have been obtained in real-time experiments.
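The key idea that candidate camera positions live on a sphere surface around the object can be illustrated with a brute-force sketch (made-up scene and occluder; the paper's geometric method is more refined and runs in real time):

```python
import numpy as np

def segment_hits_sphere(p, q, center, radius):
    """True if the segment from p to q intersects the sphere (center, radius)."""
    d = q - p
    f = p - center
    a = d @ d
    b = 2.0 * (f @ d)
    c = f @ f - radius ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return False
    t1 = (-b - np.sqrt(disc)) / (2.0 * a)
    t2 = (-b + np.sqrt(disc)) / (2.0 * a)
    return (0.0 <= t1 <= 1.0) or (0.0 <= t2 <= 1.0)

def best_camera_on_sphere(obj_points, occ_center, occ_radius, radius=2.0, n=200):
    """Sample candidate camera positions on a sphere around the object and
    return the one that sees the most object points unoccluded."""
    rng = np.random.default_rng(1)
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    best, best_count = None, -1
    for cam in radius * dirs:
        count = sum(not segment_hits_sphere(cam, p, occ_center, occ_radius)
                    for p in obj_points)
        if count > best_count:
            best, best_count = cam, count
    return best, best_count

# Hypothetical scene: object points near the origin, occluding sphere on +x.
obj = np.array([[0.1, 0, 0], [-0.1, 0, 0], [0, 0.1, 0], [0, -0.1, 0]])
cam, seen = best_camera_on_sphere(obj, np.array([1.0, 0.0, 0.0]), 0.4)
```

Restricting the search to the sphere turns camera placement into a low-dimensional problem; re-running the search as the occluder moves is one naive way to handle dynamic occlusions.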

Conference website

Paper: Generation of tactile data from 3D vision and target robotic grasps

Title: Generation of tactile data from 3D vision and target robotic grasps
Authors: B.S. Zapata-Impata, P. Gil, Y. Mezouar, F. Torres
Journal: IEEE Transactions on Haptics, July 2020


Abstract: Tactile perception is a rich source of information for robotic grasping: it allows a robot to identify a grasped object and assess the stability of a grasp, among other things. However, the tactile sensor must come into contact with the target object in order to produce readings. As a result, tactile data can only be attained if a real contact is made. We propose to overcome this restriction by employing a method that models the behaviour of a tactile sensor using 3D vision and grasp information as a stimulus. Our system regresses the quantified tactile response that would be experienced if this grasp were performed on the object. We experiment with 16 items and 4 tactile data modalities to show that our proposal learns this task with low error.
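At its core the system learns a regression from visual and grasp features to a quantified tactile response. As a toy stand-in for that idea (synthetic features and a linear ridge regressor, nothing like the paper's learned model), consider:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 grasps, 32-dim visual+grasp feature vectors,
# 8-dim "tactile response" generated by a hidden linear map plus noise.
X = rng.normal(size=(200, 32))
W_true = rng.normal(size=(32, 8))
Y = X @ W_true + 0.01 * rng.normal(size=(200, 8))

# Ridge regression: W = (X^T X + lam I)^{-1} X^T Y
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(32), X.T @ Y)

pred = X @ W
rmse = np.sqrt(np.mean((pred - Y) ** 2))
```

Once such a mapping is learned, a tactile response can be predicted for a candidate grasp before any contact is made, which is exactly the restriction the paper sets out to remove.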
Paper at IEEE

Paper: Monocular visual shape tracking and servoing for isometrically deforming objects

Title: Monocular visual shape tracking and servoing for isometrically deforming objects
Authors: Miguel Aranda, Juan Antonio Corrales Ramon, Youcef Mezouar, Adrien Bartoli, Erol Özgür
Conference: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 25-29, 2020, Las Vegas, NV, USA (Virtual).

Abstract: We address the monocular visual shape servoing problem. This pushes the challenging visual servoing problem one step further from rigid object manipulation towards deformable object manipulation. Explicitly, it implies deforming the object towards a desired shape in 3D space by robots using monocular 2D vision. We specifically concentrate on a scheme capable of controlling large isometric deformations. Two important open subproblems arise for implementing such a scheme. (P1) Since it is concerned with large deformations, perception requires tracking the deformable object’s 3D shape from monocular 2D images, which is a severely underconstrained problem. (P2) Since rigid robots have fewer degrees of freedom than a deformable object, the shape control becomes underactuated. We propose a template-based shape servoing scheme in which we solve these two problems. The template allows us to both infer the object’s shape using an improved Shape-from-Template algorithm and steer the object’s deformation by means of the robots’ movements. We validate the scheme via simulations and real experiments.
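The scheme relies on the isometric-deformation assumption: the object bends without stretching, so distances along the surface are preserved. A minimal discrete check of isometry via mesh edge lengths (illustrative only; the folded vertex position below is a hand-computed 90-degree fold about the diagonal):

```python
import numpy as np

def edge_lengths(verts, edges):
    """Lengths of the given mesh edges."""
    return np.array([np.linalg.norm(verts[i] - verts[j]) for i, j in edges])

def is_isometric(verts_a, verts_b, edges, tol=1e-6):
    """Discrete proxy for isometric deformation: edge lengths are preserved."""
    return bool(np.allclose(edge_lengths(verts_a, edges),
                            edge_lengths(verts_b, edges), atol=tol))

# Template: a unit square split into two triangles by the diagonal (0, 2).
template = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# Fold the flap (vertex 3) 90 degrees about the diagonal: its distances to
# vertices 0 and 2 are unchanged, so this deformation is isometric.
folded = template.copy()
folded[3] = [0.5, 0.5, np.sqrt(0.5)]

# Stretching, by contrast, changes edge lengths.
stretched = 1.5 * template
```

Shape-from-Template exploits exactly this kind of constraint to lift 2D image observations into an otherwise underconstrained 3D shape estimate.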

Paper download