Expert Talk
Humans & Robots
By Keith Blanchet

As discussions around the advancement of AI evolve, so too do the concerns. And for good reason. Machines are getting smarter and smarter, able to complete complex tasks and process information incredibly fast. But, what happens when the developments in AI move too quickly — and we lose sight of human ethics?

This is one of the biggest challenges that leaders will have to address — or at least begin productively addressing — in the near future. By productively I mean taking measurable action, not simply recognizing it as an issue and saying, “we’ll address that next year.”

Here are the questions we need to ask when interrogating our ethical principles in robotics, AI, and beyond…



Q: Should we only build technology that advances humankind?

A: Yes.


It seems obvious, right? But we always need to remind ourselves of this point, especially because the “cool factor” of AI and robotics is pretty high. Unfortunately, flashiness is not a good reason to innovate. It wastes time and money, and, worst of all, it likely serves only a very small segment of the human population, if anyone at all.



Q: Who’s to blame when a machine makes an unethical decision?

A: This is a tough one.

How can a machine make an unethical decision? Consider this scenario: you’re in an autonomous vehicle, and the car must choose between crashing into a tree, hurting you or your passenger, and hitting a pedestrian on the street. What does the car do? Someone has to be accountable for the ways in which our machines process information and learn. This applies to robots, drones, autonomous vehicles, and the other machines that will operate in most spheres of society in the future. In these scenarios, we need to keep humans in the loop of the robot’s decision-making process, which is especially important when every available choice is either “bad” or “very bad.”
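One way to picture “keeping humans in the loop” is as an escalation rule: the machine decides on its own only when a clearly acceptable option exists, and defers to a human (or a human-approved policy) when every option is harmful. The sketch below is purely illustrative; the function names, harm scores, and threshold are assumptions, not any real autonomous-vehicle API.

```python
# Hypothetical human-in-the-loop safeguard (illustrative only).
# The planner scores candidate maneuvers by estimated harm; if at least
# one option falls under a harm threshold, the machine picks the least
# harmful one itself. If every option is harmful ("bad" vs. "very bad"),
# it escalates to a human decision-maker instead of choosing alone.

HARM_THRESHOLD = 0.3  # assumed cutoff: above this, a maneuver counts as harmful


def choose_maneuver(options, ask_human):
    """options: list of (name, estimated_harm) pairs, harm in [0, 1].

    ask_human: callback invoked when no acceptable option exists;
    it receives all options and returns the chosen maneuver name.
    """
    safe = [o for o in options if o[1] <= HARM_THRESHOLD]
    if safe:
        # A clearly acceptable option exists: decide autonomously.
        return min(safe, key=lambda o: o[1])[0]
    # Every option is harmful: keep a human in the loop.
    return ask_human(options)


# Usage: a dilemma where both options are harmful, so the human policy
# (here, simply "pick the lesser harm") makes the call.
dilemma = [("swerve_into_tree", 0.7), ("continue_toward_pedestrian", 0.9)]
decision = choose_maneuver(dilemma, ask_human=lambda opts: min(opts, key=lambda o: o[1])[0])
print(decision)
```

The design point is that the escalation boundary itself, not the individual choice, is where human accountability lives: someone decides what counts as “harmful enough” to require a human answer.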



Q: At this point, what’s the biggest differentiator between the top thinking “machines” and the people using them?

A: Interpretation of behaviors.

In the previous scenario, I outlined a situation where it would be very hard for a human to choose. Two people would process that situation entirely differently from one another, because all humans interpret behavior through their own lived experience, which is unique to every individual. We cannot program context and lived experience into a robot (yet, anyway!).



Q: Is it possible to find a foolproof “solution” to the issues of ethics?

A: No.

Unfortunately, this can’t be fully “solved.” Many of these are esoteric concepts; however, I can confidently argue that technology should always serve the human cause in some way. If we strive toward that goal, and keep humans in the loop when ethical dilemmas are likely to arise, we shouldn’t run into too many ethical issues.

