Don’t Forget the Human Factor in Autonomous Systems and AI Development
It goes without saying that humans are the intended beneficiaries of the AI applications and autonomous systems that data scientists and developers are creating. But what’s the best way to design these AI apps and autonomous systems to maximize human interaction and human benefit? That’s a tougher question to answer. It’s also the focus of human factors specialists, who are increasingly in demand.
Datanami recently caught up with one of these in-demand human factors specialists. Catherine Neubauer is a research psychologist at the Army Research Lab, and a lecturer at the University of Southern California’s online Master of Science in Applied Psychology Program. Neubauer, who holds a Ph.D. in Psychology with an emphasis on Human Factors from the University of Cincinnati, has researched various aspects of the human factors equation, including assessing human performance and decision-making.
According to Neubauer, there are a slew of concerns where humans and the latest technology come together.
“AI and autonomous systems are really becoming very prevalent in our everyday interactions,” she says. “We really need to focus on them because if we don’t design them with the human user in mind, that interaction is not going to be easy or desirable.”
As an applied psychologist working in this field, Neubauer understands the problem from multiple angles. On the one hand, she wants to understand how humans interact with autonomous systems and AI so that humans can be better trained to work with next-gen systems. On the other hand, her work also informs data scientists and developers on how they can build better, more human-centered systems.
There is considerable room for improvement on both sides of the equation. “I think we’re getting there,” she says, “but I think a lot more work is needed.”
For instance, in the autonomous driving arena, where Neubauer has spent a considerable amount of time, people may feel that great progress is being made. After all, some new cars can essentially drive themselves, at least in some circumstances. But those “aha” experiences are not what they may appear to be, she says.
“There’s this idea of ‘Oh great, I have this self-driving car. It’s a Tesla. I can just sit back and not pay attention [and] fall asleep.’ That’s not the case. We’re not there yet,” she tells Datanami in a recent interview. “There are limitations to this technology. In an ideal state, yes, it can drive around on its own. But the human should always be ready to take over control if they need to.”
Similarly, advances in natural language processing (NLP) have supercharged the capabilities of personal assistants, which are able to understand and respond to ever-more-sophisticated questions and requests. But once again, the gains should not overshadow the fact that a lot more work is needed.
“I think we are doing a good job in the sense that we’ve made very large gains in what we’re capable of doing,” she says. “But I still think more work needs to be done to get it to where you can just easily interact with a personal assistant, like a robot or something like that, with no mistakes, no errors. We’re still seeing some kinks that need to be worked out.”
Some of Neubauer’s latest research involves the algorithmic detection of human emotion. Computer vision technology has made great strides not only in being able to recognize specific faces, but also to detect somebody’s mood based on how their face appears. Knowing if a human is happy, sad, or angry can be very valuable, and governments around the world are investing in the technology as part of their defense initiatives.
But, again, the technology is not quite there yet, Neubauer says.
“While I think it’s really great that we kind of have this classification system to read the emotion, you kind of have to take that with a grain of salt, because everyone expresses emotions differently,” she says. “And some people might feel really happy, but they’re just not outwardly expressive. Some people might feel really sad or depressed, but you might not see that expressed for whatever reasons.”
Instead of just using the computer vision algorithm, Neubauer is investigating multi-modal forms of emotion detection. This is a promising area of research, she says. “I’m not going to focus specifically on a facial expression. I’m going to get other streams of data to give me more information about a human,” she says.
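To make the multi-modal idea concrete, here is a minimal sketch of how estimates from several data streams might be combined into a single emotion judgment. Everything here is hypothetical and illustrative — the modality names, weights, and probabilities are not from Neubauer’s research, and real systems would use far more sophisticated fusion models:

```python
# Hypothetical sketch: fusing multiple data streams for emotion detection.
# Modalities, weights, and probabilities are illustrative only.

def fuse_emotion_scores(modalities, weights):
    """Combine per-modality emotion probabilities into one weighted estimate.

    modalities: dict of modality name -> {emotion: probability}
    weights:    dict of modality name -> relative confidence weight
    """
    fused = {}
    total_weight = sum(weights[m] for m in modalities)
    for name, scores in modalities.items():
        w = weights[name] / total_weight
        for emotion, p in scores.items():
            fused[emotion] = fused.get(emotion, 0.0) + w * p
    return fused

# A person who feels happy but is not outwardly expressive: the face
# alone reads as neutral, but the voice stream shifts the estimate.
streams = {
    "face":  {"happy": 0.30, "neutral": 0.60, "sad": 0.10},  # flat expression
    "voice": {"happy": 0.70, "neutral": 0.25, "sad": 0.05},  # upbeat prosody
}
weights = {"face": 1.0, "voice": 1.0}

print(fuse_emotion_scores(streams, weights))
```

With equal weights, the fused estimate lands on “happy” even though the facial stream alone would have said “neutral” — which is exactly the failure mode Neubauer describes with single-modality systems.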
So what should data scientists and autonomous systems developers do if they want to benefit from human factors research? Number one is know your users.
“I think that the best products or systems or technologies that we interact with have been designed with the human user in mind,” Neubauer says. “First and foremost, you have to make sure that you’re designing the systems for your users, to make them easy to use.”
A rule of thumb with this sort of design thinking is to make the product so easy to use that it doesn’t require a manual. This often means limiting the ways in which a user can interact with an application or a system, while encouraging exploration. (There is a limit to this rule, of course – after all, Tesla tells users in the manual to always be ready to take over the controls, but many people obviously ignore this.)
Neubauer’s second piece of advice for data scientists and autonomous systems developers who want to incorporate human factors advances into their work, interestingly, concerns ethics.
“I like to think of myself as an ethical person, and I am always thinking of where my research and my work is going, and who’s going to be using it,” she says. “Just because we can do something with technology doesn’t mean we should. So anytime we’re implementing this technology, building new systems, we have to ask ourselves, is it actually helping society? And who is it helping?”
Not all humans are good at assessing risk. It’s not necessarily a skill that data scientists look to build, or to put on their resumes. But in Neubauer’s reading, risk assessment should be part of the creative process for those creating AI apps and autonomous systems, particularly when it comes to the risks that they are asking their users to take.
The risks of a bad outcome are significantly higher when AI and autonomous features are built into self-driving cars, autopilot systems in airplanes, and traffic control systems for trains, for example, than they are in developing a personal assistant or adding probabilistic features to a word processor program (Clippy, we’re looking at you).
“If it’s some sort of stressful, high-stakes scenario and I have an autonomous agent working with me and it [tells] me to go left when I should go right, because that’s the data that it had trained its decision on, that’s going to be a problem,” Neubauer says. “On the other hand, maybe you’re a surgeon going into surgery. You want to make sure your personal assistant is letting you know what your appointments are. So I think it depends on the scenario that you’re in and how important it is to make sure that we have a very small, if not non-existent, likelihood that an error is going to occur.”
It appears that we’re at the beginning of a period of great progress in the field of AI and autonomous systems. There are many aspects of life and society that can benefit from a certain degree of data-driven decision making.
But in the long run, there are other intangible aspects to the human factors equation that should bear some scrutiny. Neubauer understands that AI and autonomous systems can reduce our cognitive workload and let us get more done. But she also wonders how the ever-growing use of this technology will impact human development.
“Sometimes I get concerned that we basically have these personal assistants in our phone reminding us to do everything,” she says. “We don’t have to memorize phone numbers anymore. What is actually going to happen to our cognitive system if we have GPS taking us everywhere? We don’t have to actually develop a mental map of the cities we live in. It worries me that those kinds of basic skills are not being used. And if they’re not being used, we’re not going to be strong in those areas.”