Profile picture of Henny Admoni, Assistant Professor.

The Kinova Innovator Spotlight series is a Q&A discussion with roboticists from all backgrounds who share Kinova's values of humanity, excellence and creativity. It is an opportunity to learn more about them, their academic paths, and their past, current and future work.

Kinova Innovator Spotlight logo.

---

For our second Innovator Spotlight of 2020, we are proud to feature Henny Admoni, Assistant Professor in the Robotics Institute at Carnegie Mellon University, where she leads the HARP Lab. Source: http://hennyadmoni.com/bio.php.

An interview with Henny Admoni, conducted by Jérôme Bédard and Martin Leroux of Kinova.

 

Kinova: When did robotics start to be an option for you? 

Henny Admoni: I got to robotics, and computer science in general, pretty late. When I graduated from high school I fully expected to become a journalist. I wanted to be a reporter and work on international politics. My plan was to travel the world and write newspaper articles. Then I got to university and ended up in Introduction to Programming instead. I took that because it was something I had been interested in in high school but never had the chance to do… Then I explored psychology and mixed cognitive science with computer science to create my own degree; it was wonderful that my school allowed that! My goal was to understand people better by using computational techniques.

So I finished college with this understanding of computer science and AI, and how they can be applied to human interaction. Then, for my PhD, I joined Brian Scassellati's Social Robotics Lab at Yale in 2009, which really merges the social aspect of human behaviour with robotics. There I discovered that robots are more than just AI in the real world, and that there is this whole world of robotics where it is much harder to get something running in the real world than in simulation... Since then, I have moved from social robotics more toward core robotics. At Carnegie Mellon University (CMU) I do both: social robotics, but also assistive robotics (vision, manipulation, etc.).


Opposite: CMU Robotics Institute logo. Source: https://www.ri.cmu.edu/about/ri-logos/.

K.: Do you have something to say about being a woman in this traditionally male-dominated field?

H.A.: We still don’t have enough women in robotics (although it’s getting better). I think it’s getting better in large part because of initiatives that identify the problems and try to address them. For example, there are a lot of networking programs that help women connect and support each other in grad school and after as faculty or in industry. Those are valuable because when you are in a field where your core identity is not the majority, you end up not having a lot of those opportunities to learn from your peers and connect in traditional ways that really boost you forward. So having these systematic programs that help connect women in robotics has really helped. I think now we can also look at other underrepresented minority groups.

For instance, there is a huge issue right now in the representation of Black roboticists, other people of color, and Indigenous roboticists. We have very few people, nationally, in those groups. It's a huge problem, both in recruitment and in retention, and in the experience people have once they are in universities. I'm hopeful that people will be talking about this more and more, particularly in the US. In the end, the same kinds of techniques that worked to increase the participation of women in robotics can also work to increase the participation of other underrepresented groups.

K.: Could you describe your lab/institution?

H.A.: I’m an Assistant Professor at CMU, and I run the Human and Robot Partners (HARP) lab. Its goal is to develop robots that assist and collaborate with people.

All our research is driven by this principle that for robots to be useful, they need to be able to understand how people want to use them and when people want to use them.

We do a lot of research on various applications. One of our biggest applications, and the one we use the Kinova robot for, is physical assistance. We are particularly interested in assistance with activities of daily living like eating or preparing a meal. Eating is a universal activity, and preparing a meal is critical to being able to live independently. These are great application areas and use cases for us to investigate. But the work is driven by underlying questions such as: how does a robot arm understand the kind of help it needs to give to a person who is putting together a meal?

One way is to monitor their behaviour, take what is observable (where I'm looking, the actions I'm taking, the questions I'm asking, etc.) and translate it into the unobservable intentions and goals that the human has in the first place. So our robots monitor eye gaze and try to use where people are looking to predict what they're trying to do, what they want to do next, or even whether they are confused and having a problem. That is the principle that underlies all of the work we do.

I also work in another area, which is AI self-proficiency assessment: for instance, how does an AI or robot reflect on its own capability and communicate to a human whether it's going to be able to do a task the human has given it? We also work on non-assistive collaborative HRI cases, for example a robot server at a restaurant that could help a human server deliver food to a table. In all these cases, what drives us is the notion that to build better robots, we have to understand people first.

K.: In January 2020, you were invited to give the presentation 'The Future of Human-Robot Interaction' at the World Economic Forum in Davos. How did it go?

H.A.: That was really fun. I was lucky enough to be invited by Carnegie Mellon University as their official representative at the forum, and I got to speak about the future of human-robot interaction (HRI). It's not an experience I expect to have often in my life, speaking to people who are at the top of companies or governments. I found that the audience was super intellectual, engaged and interested in understanding what the core problems were. I talked with a lot of people in areas like healthcare, eldercare and education. Those people saw the capacity for robots to really make people's lives better. It was nice to talk to them, and I told them:

‘Robotics is amazing, but it's not a hammer. In order to understand how to apply robotics in the real world to solve human problems, you can't leave the human out of the equation.’

This is the big message I have every time I talk to audiences. We can all dream of robots doing amazing things, but connecting that vision to the reality of robotics is important, to ground people in what is possible today.


Opposite: Henny Admoni with Duet, the HARP Lab's long-time Kinova MICO. Source: http://hennyadmoni.com/.

K.: What kind of gap is there to fill before we can get the results of your research into users' hands, for example robots assessing whether or not they can do a task?

H.A.: I think people are actually usually really good at recognizing other people's intentions. For instance, when we are in a kitchen cooking a meal with someone, we can avoid running into each other while chopping, stirring, etc., because we are good at predicting (based on body posture, the direction they are looking, what they are doing, etc.) what people are gonna do next. People coordinate with each other really well. My dream is to get robots that coordinate that well with people.

So robots need to assist based on these invisible mental states (intentions, the knowledge that we have or don't have, etc.) that drive the way we complete a task. For robots to help us complete that task, they need to know many invisible things (e.g. where we are in the task, whether we intend to do it, whether we need a tool, whether we are missing something, whether we are confused about the actual goal, etc.).

What we do have, though, are observations we can make of human behaviour (e.g. where people are looking, what they are saying, what their body or face is doing, etc.). All of this can help inform a model of those mental states.

We can think of this as a partially observable Markov decision process (POMDP), which is used to represent sequential decision making in human-robot interaction. The 'partially observable' part means that there is some uncertainty about the states: we are not 100% sure what some of the states are, or even what state we are currently in. So it means 'I don't know what your current intention is, and I have some uncertainty about it.'

As we are cooking together, I think you may be interested in chopping the carrots, but maybe you will put olive oil in the pot instead. So I'm uncertain about that, but I observe your behaviour (where you are moving, where you are looking, etc.) and use that to reduce my uncertainty and update my mental model. So, to answer the question 'what do we need to fill the gap?': we need good models of the link between observations and uncertainty reduction. As people, we get that from experience and a lot of interactions. We can also systematically develop these models based on the knowledge we have about the task.
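As a rough illustration of the kind of belief update described above, here is a minimal Python sketch: a robot maintains a probability distribution over candidate goals and refines it each time it observes where the user is looking. The goal names, the gaze-likelihood model and all numbers are illustrative assumptions, not the HARP Lab's actual implementation.

```python
import math

# Illustrative sketch: Bayesian goal inference from gaze observations.
# Candidate goals and their 2D positions on the table (hypothetical values).
GOALS = {
    "chop_carrots": (0.30, 0.10),
    "add_olive_oil": (0.55, 0.40),
}

def gaze_likelihood(gaze_xy, goal_xy, sigma=0.08):
    """P(gaze | goal): assume gaze lands near the intended goal with Gaussian noise."""
    dx, dy = gaze_xy[0] - goal_xy[0], gaze_xy[1] - goal_xy[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def update_belief(belief, gaze_xy):
    """One Bayesian update: posterior is proportional to likelihood x prior, then normalized."""
    posterior = {g: gaze_likelihood(gaze_xy, pos) * belief[g]
                 for g, pos in GOALS.items()}
    total = sum(posterior.values()) or 1e-9
    return {g: p / total for g, p in posterior.items()}

# Start with a uniform belief, then observe two gaze samples near the carrots.
belief = {g: 1.0 / len(GOALS) for g in GOALS}
for gaze in [(0.32, 0.12), (0.29, 0.09)]:
    belief = update_belief(belief, gaze)
print(belief)  # probability mass shifts toward "chop_carrots"
```

Each observation sharpens the distribution over intentions, which is the 'uncertainty reduction' Admoni describes; real systems replace the toy Gaussian model with models learned from data or built from task knowledge.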

Right now, our robots are really constrained to specific tasks. I don't think we will have a general-purpose robot anytime soon, because in every new environment, this 'model building' for the connections between behaviour and intentions needs additional work. I haven't found a way to generalize that process yet. But through enough training examples and thoughtful, cognitively inspired model building, we can get robots most of the way there.


Opposite: Shared autonomy diagram. Source: 'Predicting User Intent Through Eye Gaze for Shared Autonomy'.

K.: The HARP Lab has a dataset available for eye tracking and manipulation (HARMONIC). Can you elaborate on what you and your team are hoping to do with it, and what the research community can accomplish with it?

H.A.: We were really interested in how multimodal signals from humans could help improve shared control and HRI. So we recorded 24 people with a motor impairment doing an assistive task with the Kinova Gen2 MICO robot. The participants used a 2-axis joystick to have the robot pick up food from a plate and retrieve it for them. We recorded several signals while they were doing that:

  • Eye gaze (where they were looking, captured with a worn eye tracker);
  • Muscle activation (EMG signals while operating the joystick);
  • Frontal video of their face, with facial keypoints;
  • The joystick input signals;
  • The robot's positions.

We then used this dataset to understand the role of eye gaze during this assistive task. We are hopeful that others can do other things with it, using the same kind of task.
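As a rough sketch of how such a multimodal recording might be consumed, the snippet below aligns hypothetical gaze and joystick streams by timestamp. The file names, column names and tolerance are assumptions for illustration only; see the HARMONIC page at http://harp.ri.cmu.edu/harmonic/ for the actual data layout.

```python
import pandas as pd

# Hypothetical file and column names -- see the HARMONIC docs for the real layout.
gaze = pd.read_csv("gaze.csv")          # assumed columns: timestamp, gaze_x, gaze_y
joystick = pd.read_csv("joystick.csv")  # assumed columns: timestamp, axis_x, axis_y

# Sort by time and align the two streams on nearest timestamps
# (pd.merge_asof requires sorted keys).
gaze = gaze.sort_values("timestamp")
joystick = joystick.sort_values("timestamp")
aligned = pd.merge_asof(gaze, joystick, on="timestamp",
                        direction="nearest", tolerance=0.05)

# Example check: how often a gaze sample has a joystick sample within 50 ms.
coverage = aligned["axis_x"].notna().mean()
print(f"Fraction of gaze samples with a joystick sample within 50 ms: {coverage:.2f}")
```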

K.: Can you explain why multidisciplinary teams are so important to you, and how they can make robots better?

H.A.: Multidisciplinary research is key because, in order to build robots that work well, we need to draw together all the elements of understanding. I have a background in psychology, but we also work with people who understand human decision-making, for instance a psychologist or a business professor who knows how groups of people make decisions. So we bring in those experts to inform the way the robot should interact with or understand the world. It goes both ways: what the robot puts out (interacting) and what it takes in (understanding). Both directions require us to draw on subject matter experts. Right before COVID, we also started to work with rehabilitation specialists at the University of Pittsburgh to understand, from a clinical perspective, how robots could address some of the rehab needs of people with motor impairments.


K.: Your recent papers touch mostly on shared control. Can you explain in your own words what it is?

H.A.: Shared control algorithms try to combine human control of robots with some autonomous behavior.

So imagine control as existing along a spectrum. On one end you have teleoperation, with a human controlling every aspect of the robot using a joystick or another interface. That puts the burden on the human to figure out how their input to the control interface moves the robot in the right way to accomplish their task. Some people get really good at this, for instance people who are good at video games, which is not my case...

At the other end of the spectrum are autonomous robots. Those robots don’t take human inputs but instead perceive their environment, intelligently plan some behaviour based on that and finally execute the behaviour. 

Shared control is in the middle. You have a human controlling the robot through their interface, but at the same time the robot itself tries to do some parts of the task autonomously. It blends the human's control with what the robot's planner says, in order to achieve the task better.

What we have found in our studies is that, using shared control methods, people can complete tasks more quickly and with fewer interactions to produce the desired outcome, and that people perceive these systems as more enjoyable to use.

The beautiful thing about shared control is that the human still gets to maintain their autonomy and control over the interaction. At the end of the day, I don't think it's a win if we have an autonomous robot that feeds someone with a motor impairment food that the person doesn't want to eat. The wins come from empowering people to do what they want to do. The shared control method allows the robot to provide just enough autonomy to be useful while maintaining user control. It is also more exciting than just having the robot solve the problem on its own.
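A minimal sketch of the blending idea described above, assuming Cartesian velocity commands and a simple confidence-weighted arbitration; the weighting scheme and all numbers are illustrative assumptions, not the specific method used in the HARP Lab's papers.

```python
import numpy as np

def blend_commands(user_cmd, robot_cmd, confidence):
    """Confidence-weighted blend of user and autonomous velocity commands.

    user_cmd, robot_cmd: 3D Cartesian velocity vectors (illustrative).
    confidence: robot's confidence (0..1) in its prediction of the user's goal.
    Higher confidence shifts more authority to the autonomous command.
    """
    alpha = float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - alpha) * np.asarray(user_cmd) + alpha * np.asarray(robot_cmd)

# Example: the user pushes the joystick roughly toward a cup, while the planner's
# command points precisely at the predicted goal. With 70% confidence, the blended
# command mostly follows the planner while still respecting the user's input.
user = [0.10, 0.02, 0.00]    # m/s, from a joystick mapping (assumed)
robot = [0.08, 0.05, -0.01]  # m/s, from the autonomous planner (assumed)
print(blend_commands(user, robot, confidence=0.7))
```

The key design choice is how the arbitration weight is computed; tying it to the robot's confidence in its goal prediction is one common approach, and it preserves full user control whenever the robot is unsure.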

Opposite: Eye gaze shared control. Source: Benjamin A. Newman*, Abhijat Biswas*, Sarthak Ahuja, Siddharth Girdhar, Kris K. Kitani, Henny Admoni. International Conference on Social Robotics 2020.

K.: In your recent research, why do you use eye/gaze tracking over the other commonly used hands-free HMIs (voice control, computer vision)?

H.A.: I like that question. Eye tracking is incredibly rich and informative. In my career, I have come back to eyes as meaningful in a lot of different ways… For a high school science project, I looked at visual perception of faces. So I have been thinking about how people perceive the world and use their eyes for a long time! Eye gaze is rich because:

  1. It's tightly tied to the task we are doing; and
  2. It's a social signal that people know how to use.

First, psychologists have known for a long time that when people go to reach for an object, their eyes move to that object first, even before their hands start moving. We also look at the objects we are referencing: for example, we look at an object before saying something like 'give me that blue mug'. Therefore, having robots that understand eye gaze and that connection is useful, because then the robot has another modality for understanding what people are doing.

The other side of that is the social cue. I became interested in eye gaze early in my PhD from a social perspective (using eyes to direct people's attention, or to indicate to people that you are paying attention to them). There is a notion in psychology that 'eyes make humans special'. We have adapted to recognize the direction of attention and eye gaze from other people as a special stimulus. In fact, we are so tuned to eyes that we can't help but follow the direction that someone is looking.

Psychologists have done great studies that I and others have replicated with robots. You show people a picture of a human face, then the face turns to look in a different direction. The psychologists found that even if you tell people not to look in that direction, they will unconsciously look that way for the first 200-400 ms. We just can't help it! It's a good thing: it bootstraps our learning as infants and our social interactions. So that's why I think it's useful.

The flip side… is that it’s overloaded. The eye gaze signals are not clean.


Opposite: Reuben M. Aronson and Henny Admoni. Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA) 2019.

K.: Some of your research papers mention robots as conversational partners. Can you explain what you mean?

H.A.: Another thread that we are following is investigating how social robots can act to support people with severe speech impairments. These people sometimes use augmentative and alternative communication (AAC) devices: they type or select icons on a screen so the device speaks what they want to say. The challenge with these devices is that they don't provide the non-verbal signals that we use to regulate many things in conversation, for example signalling whose turn it is, showing that we are still thinking, or showing that we have more to say. What we have learned from doing studies with AAC users is that this functionality is often provided by a close conversational partner: for instance, a parent or a paid aide who knows them very well and can translate some of that conversation management for them based on their non-verbal cues. So we are interested in whether we can have social robots do something similar: help AAC users get back some of those non-verbal communication capabilities using motion. That is something we have been studying as well.

K.: What is next for you in your lab in terms of robotics research?

H.A.: We are continuing to be really interested in eye gaze, both in single tasks (helping someone take a bite of food, for example) and in sequential tasks (constructing a meal). We will also continue to work on our social agents for AAC device users. Plus, the lab is excited about other AI applications where AI systems help people perform tasks better in teams, or collaborate well with people by explaining their own proficiency. COVID has made HRI research a little hard… We are finding ways around it. For instance, there is a project we are working on right now where the robot stays in the lab with a researcher, but we drop off a computer and a sensor at a participant's house. They can teleoperate the robot in the lab from home (with some networking challenges…). That might be the future of human-robot interaction for a little while.

Kinova Gen3 robot for research and professional applications.

Want to know more?

Get started today with your own robotic application

References

HARP Lab: http://harp.ri.cmu.edu/

HARMONIC dataset: http://harp.ri.cmu.edu/harmonic/

HARP Lab GitHub for the Kinova Gen2 MICO arm: https://github.com/harplab

***

Disclaimer: The Kinova Gen2 MICO is a discontinued product, but if you're looking for a similar solution, we'll be glad to guide you toward the right Kinova robotic arm for your field of work.