RAIN Hub Year 3 Report


LEAD RESEARCHER: RADHIKA NATH

After completing my PhD in computer science, I joined RACE to work on the RAIN project. My work is on the vision system for remote handling, and I am currently researching deep learning models for robotic grasping. Coming from a machine learning and computer vision background, working on the RAIN project with my colleagues has given me a wonderful opportunity to learn about robotics from experts in the field.

DEEP LEARNING MODELS FOR ROBOTIC GRASPING

UNIQUENESS // In recent years, deep learning models have demonstrated impressive robustness in robotic grasping of household objects. Grasping becomes more difficult, however, when objects have more complex geometry, and gloveboxes add the further difficulties of reduced visibility and clutter. We are therefore interested in models that grasp with high accuracy and speed in an unstructured environment full of unknown objects, while being trained on only a limited set of labelled data. To this end, we are developing a grasping mechanism that reduces the uncertainty of grasps over an area with the help of variational autoencoders and neural network ensembles. We are also using a simulated platform to investigate a variety of deep learning models and compare their performance.

SUMMARY // The ability of robots to grasp and manipulate objects is a difficult problem, as it must account for many uncertainties arising from noisy sensors and external environmental factors. This is a vast and ever-expanding area of research, and in recent years there has been considerable interest in deep learning techniques to address its many challenges. In the RAIN project, we are interested in improving the safety and efficiency of nuclear glovebox operations through teleoperation. The general lack of visibility, together with clutter that often includes objects of ambiguous shapes, textures and sizes, makes it harder to autonomously detect and execute grasps. We explore the use of deep learning models to improve the speed and accuracy of grasp pose estimation from visual input.
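The UNIQUENESS paragraph above mentions neural network ensembles as one route to reducing grasp uncertainty. As a rough illustration of that idea, and not the project's actual models, the following PyTorch sketch scores candidate grasps with several independently initialised networks and uses their disagreement as an uncertainty estimate. The GraspQualityNet architecture, the input crop size and the risk penalty are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical grasp-quality network: scores a depth-image crop centred on a
# candidate grasp pose. Architecture and input size are illustrative only.
class GraspQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # grasp-quality logit
        )

    def forward(self, x):
        return self.head(self.features(x))

def ensemble_grasp_scores(models, crops):
    """Score candidate grasps with every ensemble member.

    Returns the mean predicted quality and the across-ensemble variance.
    The variance acts as an epistemic-uncertainty estimate: candidates on
    which the members disagree are treated as uncertain.
    """
    with torch.no_grad():
        preds = torch.stack(
            [torch.sigmoid(m(crops)).squeeze(-1) for m in models]
        )  # shape: (n_models, n_candidates)
    return preds.mean(dim=0), preds.var(dim=0)

# Example: 5 independently initialised members score 8 candidate crops
# taken from a 64x64 depth image (all values here are placeholders).
ensemble = [GraspQualityNet().eval() for _ in range(5)]
crops = torch.randn(8, 1, 64, 64)
mean_q, var_q = ensemble_grasp_scores(ensemble, crops)
best = torch.argmax(mean_q - 1.0 * var_q.sqrt())  # penalise uncertain grasps
```

Ranking candidates by mean quality minus a multiple of the ensemble standard deviation is one simple way to prefer grasps the members agree on; a variational autoencoder could similarly be used upstream to sample diverse grasp candidates before scoring, though that stage is not sketched here.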
