Last week in tech: Google Assistant can now make phone calls that sound entirely human. Is it time to worry?


      Two weeks ago, Google unveiled a live-recorded phone call between a hair salon receptionist and its Google Assistant technology. Instead of calling the shop herself, a human asks her phone to make a haircut appointment on Tuesday, any time between 10 and 12.

      The minute-long conversation shows just how close companies are to making artificial intelligence indistinguishable from human interactions.

      The demonstration, showcased at the I/O conference by Google CEO Sundar Pichai, reveals an AI voice that sounds wholly real. Able to respond to a person who doesn’t always offer straightforward answers, the Google Assistant can maintain a conversation and intelligently complete a task with a minimal amount of instruction. The machine’s casual “mm hmm” as the receptionist checks her schedule perfectly mimics a person’s inflection.

      Google Duplex makes a phone call to a hair salon

      By asking discerning questions and pausing for just the right amount of time, the Assistant makes it nearly impossible to tell that a robot is conducting the call. The woman on the line, too, is clearly none the wiser.

      With its pitch-perfect interaction, Google’s technology proves a huge step forward for artificial intelligence—but what does that mean for us?

      The technology, named Google Duplex, spawns scores of ethical questions. In the past, robot voices have sounded very deliberately like machines. Everything from 2001: A Space Odyssey’s HAL 9000 to Translink’s tannoy announcements fails to approximate the cadences of the human voice. There’s a good reason for those choices.

      In 1970, robotics professor Masahiro Mori charted the ways that people respond to robots. He observed that, as a robot becomes more similar to a human, people’s responses towards it become more positive and empathetic. Past a certain point of similarity, however, people become repulsed. That disgust towards not-quite-human automatons, or interactions with them, is termed the “uncanny valley”.

      The feelings of revulsion arise when a person is unsure of whether they’re interacting with a human or machine. Google’s new technology denies that clarity. When the Duplex tech is put through live testing this summer, it will force those picking up the phone to question whether their interactions are real—a potentially distressing prospect. We might be a long way from sophisticated discussions like those in the movie Her, but the new tech opens the door to increasing uncertainty over digital interactions.

      The inability to discern whether you’re conversing with a human or a robot erodes trust. As digital technologies become smarter, AI systems are already writing media articles, while programs like Face2Face can produce video footage of public figures delivering fabricated words with fabricated facial expressions. Further blurring the boundary between man and machine could deepen skepticism of digital media.

      The introduction of human-like robo-callers, too, has the potential to stratify society. As the technology progresses, lower-skilled workers will be the ones answering menial calls from AI, while phone conversations with a real person will be reserved for the privileged.

      Google’s technology is a breakthrough for artificial intelligence. Amazon’s Alexa, after all, this week recorded a couple’s private conversation and sent it to another user, and Apple’s Siri still can’t run two timers simultaneously. But, like any cutting-edge development, the wonder comes with worry. A one-minute phone call has opened a Pandora’s box of ethical issues. Here’s hoping someone at the company can close it.

      Follow Kate Wilson on Twitter @KateWilsonSays