Abolfazl Mohebbi profile picture Professor at Polytechnique and expert in rehabilitation and assistive robotics

Abolfazl Mohebbi is a former Kinova Control Team employee. After finishing a postdoc in Biomedical Engineering at McGill, he joined the Motion Analysis Lab at Harvard Medical School as a Postdoctoral Fellow. There he worked with various rehabilitation and assistive robotics systems, such as a lower-limb exoskeleton device.

He recently returned to Montreal, where he became a professor at Polytechnique (École Polytechnique de Montréal) and started teaching in fall 2020. He is the director of the Polytechnique Laboratory for Assistive and Rehabilitation Technologies (POLAR).

He recently published an article titled 'Human-Robot Interaction in Rehabilitation and Assistance: a Review'. Kinova conducted a Q&A with him on rehabilitation and assistive devices; it will be featured as part 2 of this content in a future article.



You mention in your publications that developers should focus on human-centric design. Can you elaborate on that?

In the past, the design of rehabilitation and assistive systems didn't devote much thought or focus to the subjects themselves, nor to the experts in the field (rehabilitation technicians, therapists, etc.).

'In order to use these technologies to their full potential, we need a mindset that brings the users into the design loop, not after the design is done and production is finished…'

And now, with advances in artificial intelligence and intelligent control systems, a lot of effort is directed toward understanding what the human neuromuscular system does and how that knowledge can inform the design of rehabilitation and assistive systems.

POLAR Montréal logo and École Polytechnique de Montréal logo

Can you explain how multi-disciplinary teams can make better robots?

Without the expertise of a therapist or doctor working in rehabilitation, it is hard to put together an intervention that actually improves a user's motor function. For example, we need to know the rehabilitation procedure, the program intended for each user, the level of customization to the user's needs and, ultimately, the technical characteristics of each exercise.

On the other hand, robotics, control and AI engineers can put all these items into action by creating a device. They also parameterize those programs in terms of robotic tasks and procedures.

Can you explain why there is such a wide spread of human-robot interaction (HRI) modalities? Why not focus on the best one, with everyone working on it?

'The short answer would be: different users have their own needs, and probably their own limitations in terms of motor function.'

So, as the users differ, the HRI should be unique for each category of users.

But the reality is that not all of those human-robot interaction interventions are 100% proven to be effective. So they are the subject of ongoing research and are being advanced in parallel. Sometimes they are used in a cooperative/hybrid mode so that users can benefit from different modalities.

Girl with upper-body disability in motorized wheelchair eating meal with assistive robotic arm Kinova Jaco

In the past, Kinova's President & CEO, Charles Deguire, mentioned the following: 'We won't be completely happy as long as the user cannot eat a complete plate by himself…' Do you think this is possible in the near future?

For us as human beings, I believe our daily activities (eating, manipulating objects, etc.) don't necessarily come from the urgency of covering our needs; we sometimes do things as part of our life and social interactions. For example, in everyday communication, we often use hand gestures to convey our thoughts and intentions. People with disabilities, or even missing limbs, should ideally be able to take full control of the assistive robot by communicating their intentions to the control system and commanding actions through specific interfaces. I believe the next generation of assistive devices should focus on that.

For example, we have many advanced surface EMG (electromyography) electrodes that can read muscle activity from different parts of the body, and with the help of AI we can classify those activities and group them into categories of actions. So, with the help of AI and processor technology, I hope that in a few years, or maybe a decade, we can reach that point.
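The pipeline described above, reading muscle activity and classifying it into action categories, can be sketched in a few lines. This is an illustrative toy only: the channel counts, action names, signal levels and the nearest-centroid classifier are all assumptions for the sake of the example, not taken from the article or any specific EMG product.

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_features(window: np.ndarray) -> np.ndarray:
    """Root-mean-square amplitude per EMG channel (a common EMG feature)."""
    return np.sqrt(np.mean(window ** 2, axis=0))

# Synthetic training data: 2 hypothetical channels, 3 hypothetical action
# classes, each with a distinct pattern of muscle activation levels.
levels = {"rest": (0.1, 0.1), "grasp": (1.0, 0.3), "release": (0.3, 1.0)}

centroids = {}
for action, (a0, a1) in levels.items():
    windows = [rng.normal(0, [a0, a1], size=(200, 2)) for _ in range(20)]
    feats = np.array([rms_features(w) for w in windows])
    centroids[action] = feats.mean(axis=0)

def classify(window: np.ndarray) -> str:
    """Assign a signal window to the action with the nearest feature centroid."""
    f = rms_features(window)
    return min(centroids, key=lambda a: np.linalg.norm(f - centroids[a]))

# A new window with strong channel-0 activity maps to the "grasp" category.
test_window = rng.normal(0, [1.0, 0.3], size=(200, 2))
print(classify(test_window))
```

In practice the features and classifier would be far richer (time-frequency features, neural networks, per-user calibration), but the structure, windowed features mapped to discrete action categories, is the same.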

Taxonomy of the research methods for safe human-robot interaction of ARMs figure

Opposite: Taxonomy of the research methods for safe human-robot interaction of ARMs. Source: 'Human-Robot Interaction in Rehabilitation and Assistance: a Review' by Abolfazl Mohebbi.

The current work on mobility robots is reminiscent of technologies that are available in self-driving cars, which are already on the road (navigation, obstacle avoidance, path planning). Can you explain the technology gap preventing the direct use of the same technologies?

Well, to be honest, technologies used for assistive and rehabilitation purposes always lag behind the mainstream use of that same technology. For instance, the use of robotic manipulators in industry goes back to the late 1950s, while the use of robots for assistive purposes goes back only one or two decades. So there has been this gap for a long time, especially in robotics. That's the first thing.

The other thing is that self-driving cars use features that are distinct and limited (they must be able to see roads, objects, etc.). But assistive mobile robots and intelligent wheelchairs are used in different types of settings (indoor and outdoor), with many features and different objects that are more complex than what a self-driving car faces.

The third part of the answer concerns the users. In self-driving cars, the driver and passengers can be fully aware of the situation and can regain full control if they want. But users of assistive devices have physical limitations that may prevent that.

In your opinion, which safety feature developed for assistive robotic manipulators (ARMs) could be considered the 'lowest hanging fruit'?

Those features are quite intertwined. Learning is now integrated into, or even part of, the control system. For instance, when I was working at Kinova designing motion planning and collision avoidance algorithms, we didn't implement a fully intelligent system using AI. But now, using AI, we can reach many safety objectives much sooner. In that regard, the next step for us would be to incorporate robot learning and intelligence into those four categories of safety issues.

For instance, in terms of the user, I mentioned task adaptation and ARM assessment.

'To interact with users we can benefit from a lot of sensing devices, and understand what they intend over a relatively short period of time.'

Ultimately, we can accumulate big classes of data from many users and put together a good intervention and an effective control system.

In motion planning, it's the same. We used to put a lot of effort into classic control systems (e.g. the inverse kinematics of the robot, which is not always simple), but now I see experts from Google Brain who physically train clusters of robots, using artificial intelligence, to avoid singularities. It's more of a recursive, reinforcement-based solution to inverse kinematics. And that is actually a very good solution: it gets much closer to the definitive answer than the classical approach.

Spaulding Rehabilitation Hospital view from the water

Opposite: Spaulding Rehabilitation Hospital in Charlestown, Boston, Massachusetts.

How do rehabilitation robots change the way therapists work? Are they happy about it?

When I was at Harvard University, I worked in the Motion Analysis Lab, located at the Spaulding Rehabilitation Hospital in Boston. It's a facility that uses many devices and technologies and performs various rehabilitation interventions. It also has many experts, researchers, doctors and therapists in various labs supporting the rehabilitation of many people. I believe both users and experts are quite happy with the new technologies. The reason is that before, they needed to put together many programs, customized for each user. They also sometimes needed to help the user physically, applying considerable force or pressure on the limb the user wants to rehabilitate. Moreover, they had to keep track of all the data during each rehabilitation session, and also over a long period of time for that user.

But now with robotics systems, everything is algorithmic. First, the user wears a rehabilitation device, so there is no need for the therapist to apply a force or interact with the user excessively.

Second, they can easily keep track of the user's improvements, which are measured while the user follows the rehabilitation program.

It is not just a passive system, but also a complete set of measurements (forces, muscle activities, torques, kinematic movements, motion tracker, etc.).

That's very beneficial, and with the help of AI a lot of this can be categorized, classified and customized for each user, so they can get results faster (as opposed to a generic rehab program).

Do therapists have the background knowledge to work with those devices?

Not all educational programs for therapists offer training on robotic systems, but nowadays therapists get on-site training from the developers of the technologies to learn how to use them. That may change in the future: I've seen schools getting more and more involved in the use of those technologies.



Top banner picture: Polytechnique Montréal Lassonde pavilion. Source: https://www.linkedin.com/school/polytechnique-montreal/.

Kinova Gen3 robot for research and professional applications two people controlling a robotic arm with video game controller

Want to know more?

Get started today with your own robotic application