When whistle blower Christopher Wylie was asked earlier this year to sum up the story of Cambridge Analytica in a minute, he simply responded, “No.”
The impact that the Steve Bannon–founded, Robert Mercer–funded company had on global politics and big tech is maddeningly complex. Gathering user information from platforms like Facebook—the bulk of which was obtained without the subjects’ authorization—Cambridge Analytica used data such as people’s fashion choices and music tastes to create hyperspecific personality profiles. Under Wylie, then the company’s director of research, that huge collection of information could be used to microtarget voters and influence their election decisions with tailored political ads.
As well as allegedly working with parties in India, Kenya, and Malta, Cambridge Analytica played a role in two highly controversial campaigns: 2016’s Brexit vote and the Donald Trump election. But while both outcomes have faced national scrutiny, the only organization to have suffered any immediate consequences for its role in the scandal is Facebook.
As well as being publicly hauled in front of the U.S. Congress to explain how the data of some 87 million users could have been harvested by Cambridge Analytica without their explicit consent, the tech giant was hit with a symbolic fine of £500,000 (CAD $873,000) by the U.K.—the maximum allowable—for failing to protect U.K. citizens’ personal information. The greatest damage, however, was to the company’s bottom line. The moment the Guardian and the New York Times first broke Wylie’s story, Facebook shares fell more than 11 percent, wiping roughly USD $50 billion from its value in two days: a figure nearly twice the total value of Airbnb.
One year on, Wylie is still trying to wrap his head around the impact of going public.
“It has been quite a year; I’m not going to lie,” he tells the Georgia Straight on the line from London, England, where he now works as the research director for multinational clothing business H&M. “So many things have happened for me, personally. But then I also think more broadly in terms of conversation about the new era that we’re entering, where data and technology and AI is actually becoming quite influential in our lives. I’m heartened that at least we’re now having a conversation about the intersection of technology in our society.”
After a vehement initial backlash against Facebook and social-media companies, though, few things have changed. The tech giant currently faces no criminal charges for allowing Cambridge Analytica to access people’s data without permission. The billions of dollars lost from its valuation were restored within two months of the scandal. Facebook’s fundamental architecture remains the same, full of addiction-inducing ludic loops and the time-sucking infinite scroll, and—most damning of all—its platform still permits microtargeting. The effects of that, Wylie warns, could have a serious impact on communities.
“When we have information being separated out and targeted at different groups of people…that segmentation risks actually segregating society,” he says of the danger of personalized advertising. “We’ve come out of several decades—particularly in the United States—of desegregating society. If you think about what segregation is, it’s often things that are really simple: where you can sit on a bus, what water fountain you can use, what door you can use to enter a school or a movie theatre. But the power of that segregation is that you create two Americas. You create two worlds—and two perspectives on the world. We’ve now reached a point where we’ve realized that segregation is wrong, but this is happening online now. A kind of cognitive segregation is happening.
“When you look at the United States right now, I think it’s a classic example,” he continues. “Or the rise of the alt-right in Europe. There are people who just have a completely different understanding of reality, and they act out based on that understanding—or misunderstanding. And it causes a huge amount of social tension.”
Facebook’s unwillingness to change its core structure—and Wylie’s insight into the resulting impact—is one of the reasons that the 29-year-old remains in high demand by the media and at speaking events. Another is his readiness to challenge the public image of big-tech businesses. In Wylie’s view, Facebook’s reaction to the scandal contradicts an oft-repeated narrative: that developments in technology equate to progress for humanity, and that the goal of tech companies is to improve lives. Millions of aspiring entrepreneurs revere Mark Zuckerberg, Steve Jobs, and Jeff Bezos as visionaries and herald their role in shuttering industries like retail or journalism as disruption rather than cannibalism. Wylie calls them “false prophets”.
“One of the things I think the Cambridge Analytica scandal revealed is that technology companies are just like other companies,” he says. “The technology sector is just like Big Pharma. It is just like the oil industry. It is just like any other industry that is looking to employ resources—in this case, people—for profit.
“The thing I find so frustrating about Facebook is the way it treats society. The functioning democracy of the United States and, more broadly, the western world is what allows it to make money and profit…it’s what allows them to have people who can innovate and make technology and make money. There is an obligation to respect that.…And they have really not clocked that, in a way that feels very similar to how an oil company doesn’t really care about polluting the oilsands, for example. Someone else can clean up the mess, as long as they make money.”
The impacts of Wylie’s statements have been wide-reaching. In a shift that would have been unthinkable in previous years, political figures in the United States are considering stepping in to regulate the way that big tech operates. Democrat Elizabeth Warren, for instance, is running for president in 2020 on a platform of breaking up tech giants like Facebook, Google, and Amazon: a move that has been, for the most part, warmly received. Warren is also proposing personal accountability—including jail time—for executives who break the trust of their customers with data breaches.
Wylie’s own proposal isn’t far off. He argues that because data architects and engineers are building the infrastructure of the Internet, they should be held to the same standards as those building the physical world—the people who design and construct houses, for instance. Issuing professional licences or requiring adherence to a set of safety principles, he believes, could be one way to ensure tech companies behave ethically online.
But while enthusiasm for regulation has been slowly growing, it has so far been hindered by those hung up on the practicalities. Skeptics highlight how lawmakers are ill-equipped to understand new developments and that tech companies innovate much faster than the time it takes to pass legislation. Wylie begs to differ.
“It’s a hard problem, because the Internet is so innately international and tech platforms are so big,” he says. “But the thing that I would say is that we are able to take highly complicated things—like nuclear power or airplane safety or cancer drugs—and we are able to regulate these. But the way that we regulate medication safety or airspace safety, we don’t create the literal legislation in Parliament or in Congress. You do not have two members of Congress debating whether this molecular isomer is safer than another one for a cancer drug. They create a regulator, which hires scientists—who know what they’re doing—to go and create rules. One of the problems with the way that a lot of people talk about technology and the regulation of technology is that they say that the law can never keep up. Parliament and Congress can’t keep up with technology, but regulators can, and they do.”
Wylie is quick to point out that if big-tech companies continue unchecked, the damage to our societies could become irreversible. Spotlighting Russian interference in the U.S. election, the online propaganda that fuelled virulent ethnic cleansing in Myanmar, and the hate messaging in Sri Lanka that has been linked to the deaths of local Muslims, he believes that the case for regulating social media is a matter of life or death.
“If we allow AI to perfectly segment our countries and our societies, we will no longer live in the same place,” he says. “We will live beside each other, but not with each other. And that, I think, is dangerous.”