Professor Rosalind Picard considers what happens when we add emotional intelligence to AI

Creating sensitive and astute machines comes with a plethora of ethical considerations


      Building robots to understand human emotions is a polarizing topic. With some decrying the technology as a dystopian nightmare that will render people obsolete and others heralding it as vital to improving the human experience, few can agree on what the future of expressive robots will look like.

      Rosalind Picard, however, knows more than most. Currently a professor of media arts and sciences at the Massachusetts Institute of Technology as well as a cofounder of two tech startups, Picard wrote the book on creating robots with emotional intelligence. Credited with coining the concept of affective computing in 1997—a branch of computer science that, as she defines it to the Georgia Straight, explores how machines can be programmed to deliberately influence human emotions—she was inspired to pursue the subject after exploring the structures of the brain.

      “The limbic system is a term that’s not commonly used these days,” she tells the Straight during a call from Boston. “It’s an old term that refers to some parts of the brain that are considered old also—those that are involved in memory, emotion, and attention. But it was reading about regions in those structures—today more commonly called the temporal lobe—that got me interested in emotion in the first place. The brain is magnificent. When you think about how much engineers do to build something, and how much more space and energy it takes up, it’s amazing that it’s still nowhere near as smart as the brain.”

      Building on that research, she hypothesized that robots could be designed not to mimic a human brain but, rather, to affect how people’s brains respond to them. Interacting with electronics, Picard says, can often lead to frustration. By equipping robots with cameras to identify movements and facial expressions, and with microphones to pick up on tone of voice, engineers can teach machines to respond to individuals’ reactions in an empathetic way.

      “Let’s say that a computer is dealing with you when your flight was just screwed up,” she says. “You’re annoyed and you’re angry. The computers of today would just simply try to fix things and let you know your flight options. That’s helpful, and we don’t want them not to do that, but they might help you even more if they said, ‘Wow, that’s really awful to have that happen to your flight. That must be really frustrating.’ As soon as a computer or a person acknowledges your feelings, you tend to be able to get past them a little bit faster. That’s the sign of emotional intelligence. It enables you to not just get the problem solved but enables you to feel better, just like if you were dealing with a person.”
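      In software terms, the behaviour Picard describes can be sketched as a simple sense-acknowledge-solve loop. The Python below is purely illustrative: the classify_emotion stub and the canned acknowledgment phrases are hypothetical stand-ins for the camera- and microphone-based models she mentions, not her actual system.

```python
# A toy affective-computing loop: sense the user's emotional state,
# acknowledge it, then present the practical fix.
# The emotion labels, phrases, and classifier here are illustrative only.

def classify_emotion(facial_expression: str, voice_tone: str) -> str:
    """Hypothetical stand-in for a camera/microphone-based classifier."""
    if voice_tone == "raised" or facial_expression == "frown":
        return "frustrated"
    return "neutral"

# Canned empathetic acknowledgments, keyed by detected emotion.
ACKNOWLEDGMENTS = {
    "frustrated": "Wow, that's really awful. That must be really frustrating.",
    "neutral": "",
}

def respond(facial_expression: str, voice_tone: str, fix: str) -> str:
    """Acknowledge the user's feelings first, then deliver the solution."""
    emotion = classify_emotion(facial_expression, voice_tone)
    acknowledgment = ACKNOWLEDGMENTS[emotion]
    return f"{acknowledgment} {fix}".strip()

# Mirrors the flight-rebooking example: an angry traveller gets
# empathy before the options, instead of options alone.
print(respond("frown", "raised", "Here are your rebooking options."))
```

      The point of the design, as Picard frames it, is ordering: the acknowledgment comes before the problem-solving, because naming the feeling is what helps people move past it faster.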

      Those who take a negative view of machine learning and AI argue that computers will learn to manipulate humans with their intelligence. Blockbusters like Ex Machina, Transcendence, and I, Robot each imagine a world where computers are more emotionally perceptive than people, and public figures like the late Stephen Hawking and Elon Musk have voiced concerns about the risks of making machines smarter. For Picard, that possibility is a long way off.

      “Understanding emotion is so hard that people don’t really understand it yet,” she says. “We’re giving computers better ability to guess what they’re seeing, but it still doesn’t mean that they understand any of it. When they’re more accurate at processing their inputs, we’re giving them better instructions about what to do with it, which means that sometimes they do the right thing. It looks like it gets you, and that it empathized. But it doesn’t really understand us—it just simply learned that when it sees you looking sad, it would be inappropriate to look happy. It’s just learned to act in an appropriate way.”

      As with any technology designed to interact with people, creating emotionally astute machines comes with a host of ethical considerations. The robots’ audio and video recordings of the people around them, for instance, could be sold for market research. Companies that create and sell empathetic robots could also partner with businesses to advertise subtly through the computer’s words and actions. In Picard’s view, it’s important that organizations make a deliberate choice to develop technology for the sole purpose of helping individuals.

      “I think it’s time for people to think about the framing of technology,” she continues. “Do we just want to build technology that doesn’t really care about people, that just makes money for powerful people who own it? Or do we want to make technology for people that truly makes their lives better? I think we need to get a lot more discriminating about what we say is cool. We need to stop saying that something is cool if it’s really not making life better.”

      Rosalind Picard will give a free talk titled “Emotional Intelligence in a Brave New Robotic World” as part of the Wall Exchange lecture series at the Vogue Theatre on Monday (November 5).

      Follow Kate Wilson on Twitter @KateWilsonSays
