UNDERSTANDING SENSORIMOTOR OPTIMISATION OF BIMANUAL TELEROBOTICS

LEAD RESEARCHER: JULIE B SKEVIK

I've worked as a researcher for a year now at the University of Reading in the Vision and Haptics Lab, on the topic of using perceptual psychology to improve speed, task performance and sensory integration in telemanipulation tasks in a simulated glovebox. The Lab is also where I did my PhD, on the topic of multisensory integration for the detection of tumours in medical imaging. RAIN has enabled me to develop software for our custom state-of-the-art immersive simulation hardware using simulated touch and motion-tracked VR. Additionally, RAIN spans a wide range of disciplines, which encourages interesting conversations and potential cross-disciplinary collaborations.

SUMMARY // Telemanipulation is an important technology that allows us to perform complicated tasks in hazardous and remote environments. In these systems, the operator uses monitors and often conventional input methods, such as standard keyboards, to perform tasks such as grasping and handling objects with limited sensory feedback. Because these systems tend to rely on fixed cameras, 2D monitors and keypresses for tasks that depend on depth perception and natural hand movements, they are slow to use, awkward to control and require extensive training. Even then, not everyone can successfully learn to use the several static viewpoints to skilfully navigate the remote location. We look at which of these transformations, such as a time delay or a control rotation between user input and robot movement, are the most disruptive to performance, so that we can disambiguate which elements of the telemanipulation system should be 'corrected' first, and which transforms are easier to adapt to.

UNIQUENESS // For this project we have a custom-built state-of-the-art bimanual haptic feedback rig with full force feedback to two points of contact per hand, featuring a motion-tracked virtual reality display.
The multiple points of contact allow for a precision grip using the index finger and thumb of each hand independently, with the VR simulation spatially coaligned with the position of the fingers in space. This allows us to render physical objects with texture and weight, and to simulate actions such as picking up objects and passing them from one hand to the other. With this rig we can closely replicate physical human motions such as grasping and placing objects, and track metrics such as task execution time, grip apertures (which indicate operator confidence), the effects of training, and so on. We then implement distortions commonly found in telemanipulation systems, such as input lag between user movement and robot action, and try to identify which are most detrimental to performance.
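The two distortions mentioned above, input lag and control rotation, can be illustrated with a minimal sketch. This is not the Lab's actual software; it is a hypothetical Python example showing how a fixed delay (modelled as a buffer of control ticks) and a rotation between the operator's hand frame and the robot's frame might be applied to a stream of 2D hand-position samples before they reach the simulated robot. All names (`InputDistortion`, `apply`, the parameters) are invented for illustration.

```python
from collections import deque
import math

class InputDistortion:
    """Applies two common telemanipulation distortions to operator input:
    a fixed time delay (in control ticks) and a rotation between the
    operator's hand frame and the robot's frame."""

    def __init__(self, delay_ticks=0, rotation_deg=0.0):
        self.rotation = math.radians(rotation_deg)
        # Pre-fill the buffer so output lags input by delay_ticks samples.
        self.buffer = deque([(0.0, 0.0)] * delay_ticks, maxlen=delay_ticks + 1)

    def apply(self, x, y):
        """Feed one (x, y) hand-position sample; return the delayed,
        rotated position the simulated robot would receive."""
        c, s = math.cos(self.rotation), math.sin(self.rotation)
        self.buffer.append((c * x - s * y, s * x + c * y))
        return self.buffer.popleft()

# Example: a 2-tick delay combined with a 90-degree control rotation.
distort = InputDistortion(delay_ticks=2, rotation_deg=90.0)
print(distort.apply(1.0, 0.0))  # still inside the delay window
print(distort.apply(1.0, 0.0))  # still inside the delay window
print(distort.apply(1.0, 0.0))  # the first rotated sample emerges
```

In an experiment, each distortion could be toggled independently per trial, so performance metrics such as execution time or grip aperture can be compared across conditions.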