Paper: Towards footwear manufacturing 4.0: shoe sole robotic grasping in assembling operations

Title: Towards footwear manufacturing 4.0: shoe sole robotic grasping in assembling operations
Authors: Guillermo Oliver, Pablo Gil, Jose F. Gomez, Fernando Torres
Journal: The International Journal of Advanced Manufacturing Technology, 2021

Abstract: In this paper, we present a robotic workcell for task automation in footwear manufacturing, covering sole digitization, glue dispensing, and sole manipulation from different places within the factory plant. We aim to make progress towards shoe industry 4.0. To achieve this, we have implemented a novel sole grasping method, compatible with soles of different shapes, sizes, and materials, that exploits the particular characteristics of these objects. Our proposal works well both with low-density point clouds from a single RGBD camera and with dense point clouds obtained from a laser scanner digitizer. In both cases, the method computes antipodal grasping points from visual data and does not require prior recognition of the sole. It relies on extracting the sole contour using concave hulls and measuring the curvature of contour areas. Our method was tested both in a simulated environment and in real manufacturing conditions at INESCOP facilities, processing 20 soles of different sizes and characteristics. Grasps were performed in two different configurations, obtaining an average success rate of 97.5% for real grasps of soles without a heel made of materials of low or medium flexibility. In both cases, the grasping method was tested without tactile control during the task.
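The contour-based antipodal idea in the abstract lends itself to a compact sketch. The following is not the paper's implementation, just a minimal numpy illustration of picking an antipodal pair on a 2D contour: outward normals are estimated along the contour, and the pair whose normals are most anti-parallel and aligned with the line joining the points (within a gripper opening limit) is selected. The elliptical "sole" contour, the scoring function, and `max_width` are all hypothetical.

```python
import numpy as np

def contour_normals(pts):
    """Outward unit normals of a closed 2D contour (CCW order assumed)."""
    nxt = np.roll(pts, -1, axis=0)
    prv = np.roll(pts, 1, axis=0)
    tang = nxt - prv                      # central-difference tangent
    tang /= np.linalg.norm(tang, axis=1, keepdims=True)
    # rotating the tangent by -90 degrees gives the outward normal for CCW
    return np.stack([tang[:, 1], -tang[:, 0]], axis=1)

def antipodal_pair(pts, max_width):
    """Indices (i, j) of the most antipodal contour point pair."""
    n = contour_normals(pts)
    best, best_score = None, -np.inf
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = pts[j] - pts[i]
            dist = np.linalg.norm(d)
            if dist > max_width or dist < 1e-9:
                continue                  # too wide for the gripper
            d /= dist
            # reward anti-parallel normals aligned with the joining line
            score = -np.dot(n[i], n[j]) + np.dot(n[i], -d) + np.dot(n[j], d)
            if score > best_score:
                best_score, best = score, (i, j)
    return best

# toy "sole" contour: an ellipse sampled counter-clockwise
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
contour = np.stack([2.0 * np.cos(t), 0.8 * np.sin(t)], axis=1)
i, j = antipodal_pair(contour, max_width=2.0)
```

On this toy ellipse the selected pair sits across the narrow axis, where the two faces of the contour are parallel and within the opening limit.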

Paper at Springer

Paper: 3D reconstruction of deformable objects from RGB-D cameras: an omnidirectional inward-facing multi-camera system

Title: 3D reconstruction of deformable objects from RGB-D cameras: an omnidirectional inward-facing multi-camera system
Authors: Eva Curto, Helder Araujo
Conference: 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP’2021)

Abstract: This paper describes a system made up of several inward-facing cameras able to reconstruct deformable objects through synchronous acquisition of RGB-D data. The configuration of the camera system allows the acquisition of 3D omnidirectional images of the objects. The paper describes the structure of the system as well as an approach for the extrinsic calibration, which allows the estimation of the coordinate transformations between the cameras. Reconstruction results are also presented.
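The extrinsic calibration mentioned above amounts to estimating coordinate transformations between cameras; once pairwise transforms are known, data from any camera can be chained into a common reference frame. A minimal sketch of that chaining with 4x4 homogeneous transforms (the four-camera ring and the particular poses are hypothetical, chosen so the loop closes; this is not the paper's calibration method):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# hypothetical pairwise extrinsics: T maps points from camera k to
# camera k-1; chaining them yields T_0_k (camera k -> reference camera 0)
pairwise = [make_T(rot_z(np.pi / 2), [1.0, 0.0, 0.0]) for _ in range(4)]

T_0_k = np.eye(4)
chain = [T_0_k]
for T in pairwise:
    T_0_k = T_0_k @ T
    chain.append(T_0_k)

# a point at the origin of the fourth camera, expressed in camera 0's frame;
# for this ring of four 90-degree steps the chain closes back on itself
p_cam4 = np.array([0.0, 0.0, 0.0, 1.0])
p_cam0 = chain[4] @ p_cam4
```

Chaining also makes calibration quality visible: in a closed inward-facing ring, composing all pairwise transforms should return (near) the identity, and any residual is accumulated calibration error.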
Download paper

Paper: Intel RealSense SR305, D415 and L515: Experimental evaluation and comparison of depth estimation

Title: Intel RealSense SR305, D415 and L515: Experimental evaluation and comparison of depth estimation
Authors: Francisco Lourenco, Helder Araujo
Conference: 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP’2021)

Abstract: In the last few years, Intel has launched several low-cost RGB-D cameras. Three of these cameras are the SR305, the D415, and the L515, which are based on different operating principles. The SR305 is based on structured light projection, the D415 is based on stereo vision aided by the projection of a random dot pattern, and the L515 is based on LIDAR. In addition, all three provide RGB images. In this paper, we perform an experimental analysis and comparison of depth estimation by the three cameras.
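A typical experiment in such comparisons is flat-target accuracy: image a planar surface, back-project the depth map to a 3D point cloud, and measure the RMS deviation of the points from their best-fit plane. A sketch of that metric on synthetic data (the wall geometry and 1 mm noise level are made up for illustration and are not the paper's results):

```python
import numpy as np

def plane_rmse(points):
    """RMS distance of 3D points to their least-squares plane.

    The plane normal is the smallest right singular vector of the
    centered point cloud; residuals are signed distances along it.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    dists = (points - centroid) @ normal
    return np.sqrt(np.mean(dists ** 2))

# synthetic "wall" at z = 2 m with 1 mm Gaussian depth noise
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(5000, 2))
z = 2.0 + rng.normal(0.0, 0.001, size=5000)
cloud = np.column_stack([xy, z])
err = plane_rmse(cloud)   # expected to be close to the injected 1 mm noise
```

Repeating the measurement at several target distances gives the depth-accuracy-versus-range curves usually reported in this kind of evaluation.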
Download paper

Paper: RGB-D Sensing of Challenging Deformable Objects

Title: RGB-D Sensing of Challenging Deformable Objects

Authors: Ignacio Cuiral-Zueco and Gonzalo Lopez-Nicolas

Workshop: Workshop on Managing deformation: A step towards higher robot autonomy (MaDef), 25 October – 25 December, 2020

Abstract: The problem of deformable object tracking is prominent in recent robot shape-manipulation research. Additionally, texture-less objects that undergo large deformations and movements lead to difficult scenarios. Three RGB-D sequences of different challenging scenarios are processed in order to evaluate the robustness and versatility of a deformable object tracking method. Everyday objects with different complex characteristics are manipulated and tracked. The tracking system, pushed out of its comfort zone, performs satisfactorily.

Webpage

Paper: Experimental multi-camera setup for perception of dynamic objects

Title: Experimental multi-camera setup for perception of dynamic objects

Authors: Rafael Herguedas, Gonzalo Lopez-Nicolas and Carlos Sagues

Workshop: Robotic Manipulation of Deformable Objects (ROMADO), 25 October – 25 December, 2020

Abstract: Currently, perception and manipulation of dynamic objects represent an open research problem. In this paper, we show a proof of concept of a multi-camera robotic setup intended to perform coverage of dynamic objects. The system includes a set of RGB-D cameras, which are positioned and oriented to cover the object's contour as required in terms of visibility. An algorithm from a previous study allows us to minimize the number of cameras and configure them so that collisions and occlusions are avoided. We test the validity of the platform with the Robot Operating System (ROS) in simulations with the software Gazebo and in real experiments with Intel RealSense modules.
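The coverage idea can be illustrated with a toy visibility check: cameras on a ring look at the object's centroid, and a contour point counts as covered when it lies inside some camera's field of view. This is only a conceptual sketch (the geometry, the 40-degree FOV, and the omission of occlusion handling are simplifications, not the paper's algorithm):

```python
import numpy as np

def covered(contour, cams, fov_deg):
    """Which contour points fall inside at least one camera's field of view?

    Each camera is assumed to look at the contour centroid; a point is
    covered when the ray to it deviates from the viewing axis by at most
    half the FOV (self-occlusion by the object is ignored here).
    """
    centroid = contour.mean(axis=0)
    half = np.radians(fov_deg) / 2.0
    seen = np.zeros(len(contour), dtype=bool)
    for cam in cams:
        axis = centroid - cam
        axis /= np.linalg.norm(axis)
        rays = contour - cam
        rays /= np.linalg.norm(rays, axis=1, keepdims=True)
        seen |= np.arccos(np.clip(rays @ axis, -1.0, 1.0)) <= half
    return seen

# toy scene: a unit-circle object contour and 4 cameras on a radius-2 ring
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
contour = np.stack([np.cos(t), np.sin(t)], axis=1)
angles = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
cams = 2.0 * np.column_stack([np.cos(angles), np.sin(angles)])

full = covered(contour, cams, fov_deg=40.0).all()        # all points seen
single = covered(contour, cams[:1], fov_deg=40.0).all()  # one camera fails
```

Running such a check for candidate placements is one way to decide how few cameras suffice for a required visibility of the contour.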

Download paper

Paper: Prediction of tactile perception from vision on deformable objects

Title: Prediction of tactile perception from vision on deformable objects

Authors: Brayan S. Zapata-Impata and Pablo Gil

Workshop: Robotic Manipulation of Deformable Objects (ROMADO), 25 October – 25 December, 2020

Abstract: Through the use of tactile perception, a manipulator can estimate the stability of its grip, among other properties. However, tactile sensors are only activated upon contact. In contrast, humans can estimate the feeling of touching an object from its visual appearance. Providing robots with this ability to generate tactile perception from vision is desirable to achieve autonomy. To accomplish this, we propose using a Generative Adversarial Network. Our system learns to generate tactile responses using as stimulus a visual representation of the object and target grasping data. Since collecting labeled samples of robotic tactile responses consumes hardware resources and time, we apply semi-supervised techniques. For this work, we collected 4000 samples with 4 deformable items and experimented with 4 tactile modalities.
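Structurally, a conditional generator of this kind maps the concatenation of a visual feature vector, grasp data, and latent noise to a tactile response. A minimal, untrained numpy sketch of that forward pass (all dimensions, the MLP shape, and the 24x32 tactile map are hypothetical and do not describe the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(sizes):
    """Randomly initialized MLP weights (illustrative, untrained)."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Plain feed-forward pass with tanh hidden activations."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# hypothetical inputs: 128-D visual features, 6-D grasp pose, 32-D noise
vis_feat = rng.normal(size=(1, 128))   # visual representation of the object
grasp = rng.normal(size=(1, 6))        # target grasping data
noise = rng.normal(size=(1, 32))       # GAN latent noise

# conditional generator: concat(condition, noise) -> tactile response
generator = mlp([128 + 6 + 32, 256, 24 * 32])   # e.g. a 24x32 tactile image
tactile = forward(generator, np.concatenate([vis_feat, grasp, noise], axis=1))
tactile_map = tactile.reshape(24, 32)
```

In an actual GAN this generator would be trained against a discriminator that distinguishes real sensor readings from generated ones for the same visual/grasp condition.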

Download paper

Autonomous navigation of mobile manipulator robot with camera and laser in the ROS environment

Student degree project at UNIZAR, December 2020, in the framework of COMMANDIA:
“Autonomous navigation of a mobile manipulator robot with camera and laser in the ROS environment” by David Barrera.

In this work, we used the ROS (Robot Operating System) environment to develop autonomous navigation for a mobile manipulator robot. The navigation was carried out both in simulation and in real environments. The mobile platform is the Campero robot, a prototype of the commercial RB-EKEN robot (Robotnik). The sensors used for navigation are a laser and a camera. We developed several programs for different types of navigation and analyzed the experimental results.

BiTS D INNOVATION

BiTS D INNOVATION is a dissemination conference organized by INESCOP on 17th December 2020, with the objective of bringing some of the results and technological advances obtained through R+D+I activities in 2020 closer to the footwear sector.

“Pieces” of innovation on “Sustainability”, “Comfort and Health”, and “Advanced Manufacturing” were presented in an entertaining and highly demonstrative format.

In the area of Advanced Manufacturing, Jose Maria Gutierrez from INESCOP presented the talk “Multi-robot manipulation for the cut-floor joint operation”.

Within COMMANDIA, INESCOP works together with four European universities on automating the joining operation between the shoe upper (the “cut”) and a deformable object such as the sole (the “floor”). The manipulation of deformable objects is one of the main current challenges of robotization.

This joining operation takes place after the adhesive has been applied to the sole. It consists of picking the sole from a conveyor belt with one robot and, with the help of a second robot, placing it accurately on an upper that is held in a fixed position. For this joining, a 3D vision system tells us at all times the relative position between the elements and the actions the robots need to take to proceed properly. The use of two robots allows us to handle the sole correctly even though it is flexible, and to use this flexibility in our favor if necessary.
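A building block such a 3D vision system typically relies on is estimating the rigid transform between two point sets, e.g. the observed part and its target pose. A sketch of the standard least-squares (Kabsch/SVD) solution on toy data (illustrative only, not INESCOP's actual pipeline):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t (Kabsch)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# toy check: points on a "sole" moved by a known rotation and translation
rng = np.random.default_rng(2)
sole = rng.uniform(-1, 1, size=(50, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.05])
moved = sole @ R_true.T + t_true

R_est, t_est = rigid_transform(sole, moved)   # recovers the known motion
```

In practice the corresponding points would come from matched features or a registration loop such as ICP rather than being known in advance.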

Paper: Simultaneous shape control and transport with multiple robots

Title: Simultaneous shape control and transport with multiple robots

Authors: G. López-Nicolás, R. Herguedas, M. Aranda, Y. Mezouar.

Conference: IEEE International Conference on Robotic Computing (IRC), pp. 218-225, 2020.

Abstract: Autonomous transport of objects may require multiple robots when the object is large or heavy. Besides, in the case of deformable objects, a set of robots may also be needed to maintain or adapt the shape of the object to the task requirements. The task we address consists of transporting an object, represented as a two-dimensional shape or contour, along a desired path. Simultaneously, the team of robots grasping the object is driven to the desired configuration of contour points. Since the mobile robots of the team obey nonholonomic motion constraints, admissible trajectories are designed to keep the integrity of the object while following the prescribed path. Additionally, the simultaneous control of the object's shape is performed smoothly to respect the admissible deformation of the object. The main contribution lies in the definition of the grasping robots' trajectories under the involved constraints. Different simulations, where the deformable object dynamics are modelled with consensus-based techniques, illustrate the performance of the approach.
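The consensus-based modelling of the deformable object mentioned in the abstract can be sketched as contour points that repeatedly move toward the average of their neighbours while grasped points are pinned to the robots' positions. A toy numpy version (the gain, contour, and grasp choice are illustrative only, not the paper's model):

```python
import numpy as np

def consensus_step(pts, grasped, targets, k=0.5):
    """One consensus update on a closed contour of object points.

    Free points move toward the average of their two contour neighbours;
    grasped points are pinned to the positions imposed by the robots.
    """
    neigh_mean = (np.roll(pts, -1, axis=0) + np.roll(pts, 1, axis=0)) / 2.0
    new = pts + k * (neigh_mean - pts)
    new[grasped] = targets
    return new

# toy object: a circular contour of 20 points; two robots grasp opposite
# points and stretch them outwards while the rest of the contour relaxes
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
pts = np.stack([np.cos(t), np.sin(t)], axis=1)
grasped = np.array([0, 10])
targets = np.array([[1.5, 0.0], [-1.5, 0.0]])

for _ in range(400):
    pts = consensus_step(pts, grasped, targets)
```

At equilibrium the free points interpolate linearly between the pinned ones, which is why a few grasping robots can impose the whole contour's shape in such models.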

Download paper

Video