Eye/gaze tracking

For the Introduction of this article: https://www.kinovarobotics.com/en/knowledge-hub/how-existing-assistive-hmis-could-change-our-near-future

For Part II of this article: https://www.kinovarobotics.com/en/knowledge-hub/how-existing-assistive-hmis-could-change-our-near-future-part-ii

What is eye/gaze tracking?

Eye/gaze tracking devices are camera systems, often accompanied by infrared emitters, that are pointed at a user's face. The software uses feature-tracking algorithms or AI to locate the eyes, and sometimes other facial features, in each frame. The position of the center of each eye is then estimated, and the user's gaze direction is computed as the direction from the center of each eye to the center of the corresponding pupil.
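
As a rough sketch of this geometry - not a specific vendor's API - the gaze direction can be computed from an estimated eye center and pupil center, assuming a tracker already provides both points in 3D (the names and values below are illustrative):

    import numpy as np

    def gaze_direction(eye_center, pupil_center):
        """Unit vector pointing from the estimated eye center toward the pupil.

        Both inputs are 3D points in the tracker's coordinate frame; how they are
        estimated depends on the device and its software.
        """
        direction = np.asarray(pupil_center, dtype=float) - np.asarray(eye_center, dtype=float)
        return direction / np.linalg.norm(direction)

    # Illustrative values only: one sample for the left eye.
    left_gaze = gaze_direction(eye_center=[-0.03, 0.0, 0.0],
                               pupil_center=[-0.028, 0.002, 0.011])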

Above: Figure 1 & 2, Martin Leroux, Eye/gaze tracking picture and diagram.

Eye/gaze trackers were not originally created as an HMI. In fact, their original purpose was to study gaze patterns, for example to evaluate visual marketing material or driving habits [1]. Subjects were usually presented with visual stimuli while their gaze was recorded and analyzed offline. This analysis made it possible to create heat maps representing the user's points of focus. Over time, the technology improved enough to discern voluntary blinking from involuntary blinking. For the first time, this enabled a binary user input from eye tracking, which is now commonly used as a click when eye/gaze trackers drive a cursor on a computer.
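
To illustrate the blink-based click in code, here is a minimal sketch, assuming the tracker reports for each video frame whether the eyes are detected as closed; the 300 ms threshold is an assumption and would be tuned per user, since involuntary blinks are typically much shorter than deliberate ones:

    def detect_clicks(eye_closed_per_frame, frame_rate_hz=60.0, min_blink_s=0.3):
        """Return the frame indices where a deliberate (long) blink ended."""
        clicks = []
        closed_run = 0  # number of consecutive frames with the eyes closed
        for frame_idx, closed in enumerate(eye_closed_per_frame):
            if closed:
                closed_run += 1
            else:
                if closed_run / frame_rate_hz >= min_blink_s:
                    clicks.append(frame_idx)  # long blink just ended: register a click
                closed_run = 0
        return clicks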

Interacting with the real, unorganized world

Moving eye/gaze tracking technologies to screen-less applications was more challenging. Since the output of the tracker is a direction from each eye, the estimation of the point of focus in 3D is necessarily imprecise without additional input - especially considering that our gaze directions tend to become parallel as the target we are looking at moves farther away [2]. Nevertheless, researchers managed to make the technology usable by combining it with additional inputs such as computer vision [3]. From then on, it was only a matter of time before the technology was applied to assistive devices in uncontrolled environments.
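
One common way to turn the two gaze rays into a 3D focus point is to take the midpoint of their closest approach; the sketch below is an illustration of that geometry, not a method from the cited works, and it also shows why the estimate degrades for distant targets: the computation becomes ill-conditioned as the rays approach parallel.

    import numpy as np

    def focus_point(origin_l, dir_l, origin_r, dir_r):
        """Midpoint of the closest approach between the left and right gaze rays."""
        o_l, d_l = np.asarray(origin_l, float), np.asarray(dir_l, float)
        o_r, d_r = np.asarray(origin_r, float), np.asarray(dir_r, float)
        a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
        w = o_l - o_r
        denom = a * c - b * b            # tends to 0 as the rays become parallel
        if abs(denom) < 1e-9:
            return None                  # depth is unobservable from gaze alone
        t = (b * (d_r @ w) - c * (d_l @ w)) / denom
        s = (a * (d_r @ w) - b * (d_l @ w)) / denom
        return 0.5 * ((o_l + t * d_l) + (o_r + s * d_r))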

The two main uses of eye/gaze tracking technology in assistive robotics are target selection and user intent detection. In the case of target selection, the premise is simple: the user selects an object or a point in space he/she wishes to interact with by looking at it, and the system then autonomously takes over some or all of the interaction. In its simplest form, it was used to select modes on a screen [4]; it then evolved to automate the reaching and grasping of objects with a robotic manipulator [5]. It can also be augmented with AI and/or general robotics algorithms such as collision avoidance to further automate tasks and improve the user experience [6].
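
A minimal sketch of gaze-based target selection, assuming a perception module already provides 3D centroids for the candidate objects and a gaze focus point is available (the names and the distance threshold are illustrative):

    import numpy as np

    def select_target(gaze_point, objects, max_distance_m=0.15):
        """Return the id of the object closest to the gaze point, if close enough."""
        best_id, best_dist = None, float("inf")
        for obj_id, centroid in objects.items():
            dist = np.linalg.norm(np.asarray(centroid) - np.asarray(gaze_point))
            if dist < best_dist:
                best_id, best_dist = obj_id, dist
        return best_id if best_dist <= max_distance_m else None

    # Example: the cup is the closest known object to where the user is looking.
    target = select_target(gaze_point=[0.40, 0.10, 0.30],
                           objects={"cup": [0.42, 0.08, 0.31], "phone": [0.10, 0.50, 0.20]})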

The field of intent detection is somewhat similar, although broader, since it extends beyond assistive robotics. Its key is to use AI techniques for context awareness to infer tasks that are more complex than simply reaching and grasping [7]. For example, one could imagine looking at a doorknob to get a robot to grab it, but also to follow up with the appropriate motion to open the door.
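
In the spirit of probabilistic intent recognition approaches such as [26], a system can maintain a belief over candidate goals and raise the probability of the goals the gaze keeps pointing toward; the likelihood model below (a simple alignment score) is purely an illustrative assumption:

    import numpy as np

    def update_belief(belief, gaze_dir, eye_pos, goal_positions, sharpness=5.0):
        """One Bayesian-style update of the goal probabilities from a gaze sample."""
        new_belief = {}
        for goal, pos in goal_positions.items():
            to_goal = np.asarray(pos, float) - np.asarray(eye_pos, float)
            to_goal /= np.linalg.norm(to_goal)
            alignment = float(np.dot(gaze_dir, to_goal))   # 1.0 = looking straight at it
            new_belief[goal] = belief[goal] * np.exp(sharpness * alignment)
        total = sum(new_belief.values())
        return {goal: p / total for goal, p in new_belief.items()}

Once a goal such as "doorknob" dominates the belief, the system can launch the corresponding task plan (grasp the handle, then pull the door open) rather than a bare reach-and-grasp.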

Above: Figure 3, Siddarth Jain, argallab [26].

Eye/gaze tracking is ready for use outside of assistive applications

If eye/gaze tracking were to become commonplace in industrial environments with cobots, the same two kinds of applications could provide benefits.

Eye/gaze tracking for target identification could be used by manual workers who already have their hands occupied - maybe full of parts or holding tools - to control a robotic arm that literally gives them an extra hand.

Such assistant robots could even find applications outside of industry, in professional and even medical services. An assistant robot could, for example, automatically orient a light source toward the spot where the user is focusing during a procedure.

Whereas target detection can be used for robot teleoperation, intent detection is useful for human-robot collaboration [8]. Detecting intent allows the robot to make predictions about the people who may enter its workspace instead of merely reacting to them - which makes a world of difference for safety. For example, the robot could follow this thought process: "This person is looking this way. He/she is probably moving in my direction. I should slow down."
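
As an illustration only, such a rule could be as simple as the sketch below, where the cobot scales down its speed when a nearby person's gaze suggests they are heading its way; the thresholds and the scaling policy are assumptions, not values from any standard:

    import numpy as np

    def speed_scale(person_gaze_dir, person_to_robot_dir, distance_m,
                    attention_cos=0.8, near_m=2.0):
        """Both direction arguments are assumed to be unit vectors."""
        looking_at_robot = np.dot(person_gaze_dir, person_to_robot_dir) > attention_cos
        if looking_at_robot and distance_m < near_m:
            return 0.3   # slow down: the person is probably coming this way
        return 1.0       # otherwise keep the full programmed speed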

Admittedly, in an industrial context where the environment and the processes are often rigorously controlled and predictable, generic target detection through eye/gaze tracking may provide only limited benefits. However, as robots become democratized, cobots will make their way into more and more disordered environments and into the hands of people with less technical knowledge. For these new markets, eye/gaze tracking would prove to be an intuitive and useful tool, with technology that is already mature and ready to be deployed.

Body-machine interface

Body-machine interfaces for assistive technologies

In general, the term body-machine interface means that some technology is controlled through body motion. In the field of assistive robotics, this is often challenging and limiting because of the various disabilities that can afflict the users. Many of them have either a limited range of motion in their limbs or spasms that make repeatable movement impossible. Of course, since assistive technologies are adapted on a case-by-case basis to each user's capacities, some form of solution is always found eventually.

One body part commonly used for input by people with severe disabilities is the head. The low-tech interface for people who must use their head to control their device is simply an array of appropriately sized buttons positioned on the headrest of an electric-powered wheelchair. However, in the last few years, researchers in the field of HMI have developed a more convenient high-tech solution for this clientele: head tracking. Head tracking can range from simply taking the head orientation as a joystick-type control input [9] to full-blown facial expression recognition mapped onto functionalities [10].
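
A minimal sketch of the joystick-type mapping [9]: small pitch and yaw angles around a calibrated neutral head pose become forward and turning velocity commands, with a dead zone to ignore involuntary motion. The gains and thresholds are illustrative and would be tuned per user.

    def head_to_velocity(pitch_rad, yaw_rad, neutral=(0.0, 0.0),
                         dead_zone_rad=0.05, gain=0.5, max_speed=0.25):
        """Map head orientation to a (forward, turn) velocity command."""
        def axis(angle, reference):
            delta = angle - reference
            if abs(delta) < dead_zone_rad:          # ignore small involuntary motion
                return 0.0
            return max(-max_speed, min(max_speed, gain * delta))
        forward = axis(pitch_rad, neutral[0])       # nod forward/back -> advance/reverse
        turn = axis(yaw_rad, neutral[1])            # turn head -> steer
        return forward, turn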

Moving robots by moving muscles

The limitation of this kind of technology, especially for assistive users who are most of the time not experts in robotics, is that it still requires mapping two different motion types (user motion and robot motion) that hardly correspond to one another.

The dream in the field of body-robot interfaces is to be able to make some kind of motion and have the target robot imitate it in real time.

Sadly, this is made harder by the fact that human arms and robots are kinematically very different [11]. One method used with assistive technologies, generally for wearable prostheses, is electromyography (EMG) signals. The first robotic prostheses were controlled by mapping signals from unrelated muscles, such as pectoral and shoulder flexes, to forearm and wrist motions. In recent years, however, improvements in AI-based signal classification have allowed increasingly complex motions to be mapped one-to-one between the expected body-part motion and the resulting robot motion [12] - even for limbs that were amputated!
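
The classification pipeline typically looks like the hedged sketch below: windows of multi-channel EMG are reduced to simple time-domain features and fed to a classifier that outputs a motion class. Real systems use richer features and deep networks [12]; this skeleton only shows the overall shape of the approach.

    import numpy as np
    from sklearn.svm import SVC

    def emg_features(window):
        """window: array of shape (n_samples, n_channels)."""
        mav = np.mean(np.abs(window), axis=0)                    # mean absolute value
        wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)     # waveform length
        return np.concatenate([mav, wl])

    def train_motion_classifier(windows, labels):
        """labels: one motion class per window, e.g. 'open_hand' or 'flex_wrist'."""
        features = np.stack([emg_features(w) for w in windows])
        return SVC(kernel="rbf").fit(features, labels)

    # At runtime, each new window is classified and the predicted motion class
    # is forwarded to the prosthesis or robot controller.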

Although EMG signals cannot be mapped one-to-one to robotic manipulator motions, because manipulators do not have the same parts as our bodies do, researchers were still able to use them for control [13]. This can be used, for example, by patients who have the necessary range of motion but lack the strength to hold whatever they wish to interact with - as is often the case with the elderly. Alternatively, EMG signals can be collected from healthy, unrelated muscles that are not involved in most daily activities. After some adaptation time, this kind of control helps make the robot feel like an extension of the body.
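
For the strength-assist case, a proportional mapping in the spirit of [13] can be enough: the rectified, smoothed amplitude of an EMG channel scales the gripper closing speed, so a weak but controllable contraction still produces a firm hold. The calibration values below are assumptions that would be measured per user.

    import numpy as np

    def emg_envelope(raw_window):
        """Crude amplitude estimate of one EMG window."""
        return float(np.mean(np.abs(raw_window)))

    def gripper_speed(envelope, rest_level, max_level, max_speed=0.05):
        """Map the EMG envelope to a gripper closing speed (m/s)."""
        activation = (envelope - rest_level) / max(max_level - rest_level, 1e-6)
        activation = min(max(activation, 0.0), 1.0)              # clamp to [0, 1]
        return activation * max_speed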

Above: Figure 4, Tommaso Lisini Baldi, Human Guidance: Wearable Technologies, Methods, and Experiments [18].

This kind of semi-direct mapping between human motion and robot motion has already inspired researchers outside the field of assistive robotics. Multiple methods were developed to properly identify human motion for interfacing with technology (think, for example, of Tony Stark dismissing a holographic projection with a flick of his wrist in the movies). Once the limitations of the assistive clientele are no longer a constraint, new ways to capture motion can be used for teleoperation in a professional environment. Researchers have used computer vision paired with AI [14-16] and wearables [17, 18] to detect and identify motion. Depending on the application, the robot control algorithm can then map the end-effector motion to the motion of the user's hand, or map the robot's joint motion to the user's articulations.
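
A sketch of the end-effector style of mapping, with the capture source (vision or a wearable) abstracted away: the measured displacement of the user's hand between two frames is replayed, possibly scaled, as a displacement of the robot's tool. The joint-space alternative would instead copy shoulder, elbow, and wrist angles onto the robot's joints.

    import numpy as np

    def retarget_end_effector(prev_hand_pos, curr_hand_pos, robot_tool_pos, scale=1.0):
        """Return the new tool position target from the latest hand displacement."""
        delta = scale * (np.asarray(curr_hand_pos, float) - np.asarray(prev_hand_pos, float))
        return np.asarray(robot_tool_pos, float) + delta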

An intuitive interface for industrial workers

This ability to imitate motion is probably the single most intuitive control interface for an untrained user.

In an industrial environment, for example, this could be used for teleoperation when robots are deployed out of reach or in environments too dangerous for people, or for robot teaching when new programs, trajectories, or sequences must be defined for the robot. Overall, the AI systems responsible for general motion identification may still be at the research stage, but defining a subset of desired motions that make sense in a controlled environment such as industry, and identifying them reliably, is perfectly realistic.
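
One way to make that restricted vocabulary concrete - an illustrative choice, not a method from the cited works - is to compare a captured hand trajectory against a small library of reference gestures and pick the closest one:

    import numpy as np

    def resample(traj, n=50):
        """Resample a (length, dims) trajectory to n points per dimension."""
        traj = np.asarray(traj, float)
        idx = np.linspace(0, len(traj) - 1, n)
        return np.stack([np.interp(idx, np.arange(len(traj)), traj[:, d])
                         for d in range(traj.shape[1])], axis=1)

    def classify_gesture(trajectory, templates, max_dist=1.0):
        """Return the name of the closest reference gesture, or None if too far."""
        query = resample(trajectory)
        best_name, best_dist = None, float("inf")
        for name, reference in templates.items():
            dist = np.linalg.norm(query - resample(reference)) / len(query)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist <= max_dist else None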

Above: Figure 5, Yoshikawa, Machine learning for human movement understanding [15].

Above: Figure 7, Alireza Golgouneh, A controllable biomimetic SMC-actuated robotic arm.

References

1. Andrew T Duchowski. A breadth-first survey of eye-tracking applications. Behavior Research Methods, Instruments, & Computers, 34(4):455–470, 2002.

2. Martin Leroux, Maxime Raison, T Adadja, and Sofiane Achiche. Combination of eyetracking and computer vision for robotics control. In 2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA), pages 1–6. IEEE, 2015.

3. Martin Leroux, Sofiane Achiche, and Maxime Raison. Assessment of accuracy for target detection in 3d-space using eye tracking and computer vision. PeerJ Preprints, 5:e2718v1, 2017.

4. Qiyun Huang, Yang Chen, Zhijun Zhang, Shenghong He, Rui Zhang, Jun Liu, Yuandong Zhang, Ming Shao, and Yuanqing Li. An EOG-based wheelchair robotic arm system for assisting patients with severe spinal cord injuries. Journal of Neural Engineering, 16(2):026021, 2019.

5. Yann-Seing Law-Kam Cio, Maxime Raison, Cédric Leblond Ménard, and Sofiane Achiche. Proof of concept of an assistive robotic arm control using artificial stereovision and eye-tracking. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 27(12):2344–2352, 2019.

6. Reuben M Aronson and Henny Admoni. Eye gaze for assistive manipulation. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pages 552–554, 2020.

7. Gheorghe-Daniel Voinea and Razvan Boboc. Towards hybrid multimodal brain computer interface for robotic arm command. In Augmented Cognition: 13th International Conference, AC 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26–31, 2019, Proceedings, volume 11580, page 461. Springer, 2019.

8. Chien-Ming Huang. Human-robot joint action: Coordinating attention, communication, and actions. PhD thesis, The University of Wisconsin-Madison, 2015.

9. Sanders Aspelund, Priya Patel, Mei-Hua Lee, Florian Kagerer, Rajiv Ranganathan, and Ranjan Mukherjee. Controlling a robotic arm for functional tasks using a wireless head-joystick: A case study of a child with congenital absence of upper and lower limbs. bioRxiv, page 850123, 2019.

10. Hairong Jiang, Juan P Wachs, and Bradley S Duerstock. Integrated vision-based robotic arm interface for operators with upper limb mobility impairments. In 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR), pages 1–6. IEEE, 2013.

11. Martin Leroux. Design d’un manipulateur robotique à architecture anthropomorphique. PhD thesis, École Polytechnique de Montréal, 2017.

12. Anand Kumar Mukhopadhyay and Suman Samui. An experimental study on upper limb position invariant EMG signal classification based on deep neural network. Biomedical Signal Processing and Control, 55:101669, 2020.

13. François Nougarou, Alexandre Campeau-Lecours, Daniel Massicotte, Mounir Boukadoum, Clément Gosselin, and Benoit Gosselin. Pattern recognition based on HD-sEMG spatial features extraction for an efficient proportional control of a robotic arm. Biomedical Signal Processing and Control, 53:101550, 2019.

14. Florin Gîrbacia, Cristian Postelnicu, and Gheorghe-Daniel Voinea. Towards using natural user interfaces for robotic arm manipulation. In International Conference on Robotics in Alpe-Adria Danube Region, pages 188–193. Springer, 2019.

15. Taizo Yoshikawa, Viktor Losing, and Emel Demircan. Machine learning for human movement understanding. Advanced Robotics, 34(13):828–844, 2020.

16. Guilherme N DeSouza, Hairong Jiang, Juan P Wachs, and Bradley S Duerstock. Integrated vision-based system for efficient, semi-automated control of a robotic manipulator. International Journal of Intelligent Computing and Cybernetics, 2014.

17. Tommaso Lisini Baldi, Giovanni Spagnoletti, Mihai Dragusanu, and Domenico Prattichizzo. Design of a wearable interface for lightweight robotic arm for people with mobility impairments. In 2017 International Conference on Rehabilitation Robotics (ICORR), pages 1567–1573. IEEE, 2017.

18. Tommaso Lisini Baldi. Human Guidance: Wearable Technologies, Methods, and Experiments. PhD thesis, Istituto Italiano Di Tecnologia.

19. Stevo Bozinovski, Mihail Sestakov, and Liljana Bozinovska. Using EEG alpha rhythm to control a mobile robot. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 1515–1516. IEEE, 1988.

20. Jennifer L Collinger, Brian Wodlinger, John E Downey, Wei Wang, Elizabeth C Tyler-Kabara, Douglas J Weber, Angus JC McMorland, Meel Velliste, Michael L Boninger, and Andrew B Schwartz. High-performance neuroprosthetic control by an individual with tetraplegia. The Lancet, 381(9866):557–564, 2013.

21. Dorian Goueytes, Aamir Abbasi, Henri Lassagne, Daniel E Shulz, Luc Estebanez, and Valérie Ego-Stengel. Control of a robotic prosthesis simulation by a closed-loop intracortical brain-machine interface. In 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER), pages 183–186. IEEE, 2019.

22. Fred Achic, Jhon Montero, Christian Penaloza, and Francisco Cuellar. Hybrid BCI system to operate an electric wheelchair and a robotic arm for navigation and manipulation tasks. In 2016 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), pages 249–254. IEEE, 2016.

23. David Achanccaray, Juan M Chau, Jairo Pirca, Francisco Sepulveda, and Mitsuhiro Hayashibe. Assistive robot arm controlled by a P300-based brain machine interface for daily activities. In 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER), pages 1171–1174. IEEE, 2019.

24. Valeria Mondini, Reinmar Josef Kobler, Andreea Ioana Sburlea, and Gernot R Müller-Putz. Continuous low-frequency EEG decoding of arm movement for closed-loop, natural control of a robotic arm. Journal of Neural Engineering, 2020.

25. Yuanqing Li, Qiyun Huang, Zhijun Zhang, Tianyou Yu, and Shenghong He. An EEG-/EOG-based hybrid brain-computer interface: Application on controlling an integrated wheelchair robotic arm system. Frontiers in Neuroscience, 13:1243, 2019.

26. Siddarth Jain and Brenna Argall. Probabilistic human intent recognition for shared autonomy in assistive robotics. ACM Transactions on Human-Robot Interaction (THRI), 9(1):1–23, 2019.