Just over two years ago, Microsoft released a chatbot on Twitter named Tay. Created to mimic the speech and spelling of a 19-year-old American girl, the program was designed to interact with other Twitter users and get smarter as it discovered more about the world through their posts—a process called machine learning. Rather than becoming an after-school chum for bored teens, though, Tay was soon tweeting everything from “I’m smoking kush in front of the police” to “I fucking hate feminists and they should all die and burn in hell.” She was shut down 16 hours after her launch.
Tay’s rants—which featured racist slurs and Holocaust denials—tapped into people’s biggest anxieties about the future of artificial intelligence (AI). With no moral compass to guide them, the fear goes, machines will be unable to follow the same social rules as humans.
In response, an industry is growing around robot ethics.
UBC graduate AJung Moon, director of the Open Roboethics Institute, has dedicated her career to dissecting the tricky issues thrown up by artificial intelligence. A mechatronics engineer by trade, Moon became interested in the topic when a mentor at her university mentioned that South Korea was developing autonomous weapons to guard the demilitarized zone. Realizing that there was little discussion of what kinds of robots companies should be creating, she delved into the morality of machines in her graduate studies.
“I’m a woman in her 30s with a technology background, born in Korea and raised in Canada,” she tells the Georgia Straight on the line from her office in Seattle. “I have my own set of biases. Those should not be assumed to be reflective of everyone’s values—and yet, if I create robots that take on those standards, I have the power to replicate my views over and over. Artificial-intelligence systems act as a proxy for one person’s ideas, and a single set of opinions can become the rule. It’s incredibly important for us to be thoughtful about the decisions we make when we program machines, and that’s where ethics comes into play.”
Moon focuses her work not just on chatbots or physical robots but on any system that uses machine learning—the ability to get better at a task through experience rather than direct programming—to power its artificial intelligence. AI is already ubiquitous. Google Maps, for instance, uses machine learning to predict how long a journey will take based on the information it interprets from others’ phones in real time, and makes its own decisions about the best route to take. With artificial intelligence now underpinning everything from road safety to résumé-reading, Moon believes that companies must interrogate the morality behind their programming.
“There are many different ways to implement ethics into artificial intelligence,” she says. “I recently worked with Technical Safety B.C., an organization that oversees the safe installation of equipment. They wanted to take the huge amount of data that they gather and use it to make decisions about where hazards are most likely to arise. They could then send over a safety officer to do something about it before there was a danger.
“One of their employees pointed out that the machine-learning system could throw up a false finding and make a wrong prediction about a hazard,” she continues. “If they were to send out a safety officer to that site, they would be wasting their time and the company’s money, but if they chose not to send a safety officer, a huge accident could happen in their absence. We conducted an AI ethics assessment, which was the first one that we know of in the world. That gave them an ethics road map to break down why they make the decisions that they do. It’s available for everyone to see, so they are able to justify their choices.”
A number of B.C. companies use AI technology but don’t let the public know how their machines make decisions. In Moon’s view, those systems can be ethically problematic. High-profile organizations like the Vancouver Police Department, for instance, use machine learning to predict where and when certain crimes are more likely to happen—but by failing to disclose how those choices are being made, they risk being accused of prejudice or profiling.
“Based on data the Vancouver police has gathered in the past, its AI system can make accurate guesses about property crimes,” Moon says. “They can then preemptively send officers to those locations. That’s when the idea of bias comes into play. The type of neighbourhood is often associated with the type of people who live there, whether that be in terms of race or socioeconomic status. If more officers are in those areas, there will likely be more arrests for crimes that might not have otherwise been seen. If you want to be fair in keeping everybody safe, what should fairness mean in this particular context? If you don’t have a definition that’s loud and clear for everyone to see, you’re going to run into trouble in the future.”
Despite the potential failings of artificial intelligence, Moon is optimistic about its future. At the upcoming B.C. Tech Summit, she plans to discuss how robots and humans need to work together to make decisions, with the machines offering suggestions and people making the final call.
“There should be a healthy amount of concern in terms of what we should be doing about workers who will be displaced or the amount of large-scale disruption that these technologies will bring,” she says. “But I think we do need to point out the positives of the technology. Not only can AI do tasks more efficiently than us, there are also areas where we have huge shortages of employees. In the care sector, for instance, B.C. has big problems hiring people to support the elderly, and that can be supplemented by robotic systems. There are definitely use cases that can change the world for the better.”
AJung Moon speaks at the B.C. Tech Summit at the Vancouver Convention Centre West on Tuesday (May 15).
Follow Kate Wilson on Twitter @KateWilsonSays