Could fixing Facebook algorithms bridge our political divides?

Proposed U.S. legislation aims to make Facebook accountable for misinformation by targeting algorithms rather than content


      By Aeryn Pfaff 

      We’ve all heard the calls for unity since the January 6 attack on the U.S. Capitol by supporters of former President Donald Trump. We’re meant to unite with far-right radicals who want to end democracy and QAnon supporters who make real-world decisions based on the belief that liberals are a cabal of Satanist pedophiles. Embrace your uncle who believes every lie spread by Trump and far-right media. Engage social media contacts in debates about whether or not minorities deserve rights—because if we don’t, we run the risk of creating the dreaded echo chamber.

      But what if something beyond our control created and maintains that echo chamber?

Americans are more politically divided than ever, and their culture hugely influences ours in Canada. With Trump out of office, Canadian progressives are breathing sighs of relief, but what about the extremism Trump has inspired here? White nationalist Faith Goldy won an alarming amount of support in Toronto’s last mayoral election, we escaped rule by a prime minister with bigoted views by the skin of our teeth in the last federal election, online right-wing extremism is thriving, and throughout the COVID-19 crisis Torontonians have been treated to anti-lockdown marches full of signs celebrating Trump, QAnon and COVID misinformation.

Calls for unity that put the onus on individuals are like declarations that global warming will end if we take shorter showers and turn off the lights. Just as meaningfully fighting climate change requires government policy that punishes giant corporate polluters, healing the political divide—online and in person—requires structural change, and the United States Congress has the opportunity to deliver it by reforming Section 230 of the Communications Decency Act.

Section 230 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. In layperson’s terms: Facebook and similar platforms are not liable for the content their users publish.

      Passed in 1996, this previously obscure law has come under fire as the internet has expanded and become more chaotic. Originally intended to foster growth in a fledgling industry, the law remains unchanged even though social media companies are now among the richest in the world and their products are ubiquitous parts of everyday life.

      Rutgers Law Professor Ellen P. Goodman says that as with FOSTA and SESTA, Section 230 reform would likely have ripple effects anywhere these companies’ services are available.

      “As a practical matter, if an American law passes that changes Section 230 immunity, that will probably change the platform’s structure and behaviour elsewhere,” she says.

When FOSTA and SESTA were passed by Congress in 2018, purportedly to combat sex trafficking, escorting sites like Backpage were shut down, eliminating Canadian sex workers’ access to platforms that let them work safely and consensually. The laws also led to Tumblr’s decision to disallow sexually explicit images regardless of where users live, and caused dating apps like Grindr and Scruff to limit what users could show in their profile pictures.

Many social media users seem unable to distinguish fact from fiction, leaving their biases open to exploitation by bad actors looking to lead them down the path of extremist ideologies. Unfortunately, the way social media is set up doesn’t help.

      Social media companies prioritize “engagement, ad space maximization and data collection” to create behavioural profiles using the news feed algorithm, ranking and recommendations, says Dipayan Ghosh, Director of the Digital Platforms & Democracy Project at Harvard University and former advisor to Facebook and the Obama administration.

      These algorithms exist to figure out what content we find most engaging and to place it between ads, so we keep scrolling and seeing more ads. But because of the way algorithms suggest and silo content based on what we read, watch and react to, people from different political factions are no longer operating with a shared set of facts on which to build disagreements. 
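To make that incentive concrete, here is a minimal, hypothetical sketch of an engagement-optimized feed ranker in Python. It is not Facebook’s actual system, and every name and number is invented; the point is that when the only signal is predicted engagement, the most inflammatory post floats to the top and ads are simply slotted in between.

```python
# Illustrative sketch of engagement-optimized feed ranking (hypothetical,
# not Facebook's real code). Posts are scored purely by predicted
# engagement, then ads are interleaved -- accuracy never enters the score.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    predicted_engagement: float  # model's guess at clicks/reactions/shares


def rank_feed(posts: list[Post], ads: list[str], ad_gap: int = 3) -> list[str]:
    """Order posts by predicted engagement; slot an ad after every `ad_gap` posts."""
    ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
    feed, ad_index = [], 0
    for i, post in enumerate(ranked, start=1):
        feed.append(f"{post.author}: {post.text}")
        if i % ad_gap == 0 and ad_index < len(ads):
            feed.append(f"[AD] {ads[ad_index]}")
            ad_index += 1
    return feed


if __name__ == "__main__":
    posts = [
        Post("neighbour", "Lost cat, please share", 0.12),
        Post("news_page", "Calm, factual policy explainer", 0.08),
        Post("outrage_page", "SHOCKING conspiracy they don't want you to see", 0.91),
    ]
    for line in rank_feed(posts, ads=["Buy widgets"]):
        print(line)
```

Run it and the “SHOCKING conspiracy” post lands at the top of the feed, simply because nothing in the scoring function asks whether a post is true.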

      Like it or not, people “engage” with misinformation

      Between fake news travelling six times faster than the truth on Twitter and years of rampant dishonesty from Trump, we’re living in a time in which difference of opinion often equals a completely different perception of reality—and social media companies are cashing in.

      “When you have a platform that cares only about engagement, not about factualness or public good, then it will prioritize information that causes the greatest emotional charge,” says Imran Ahmed, CEO of the Center for Countering Digital Hate. “And the greatest emotional charge is often not that which you like, but that which you dislike. So misinformation’s genius is that the people who support it engage with it and the people who don’t like it engage with it too.

      “This is essentially the constant car crash approach to the attention economy.”

It’s no surprise that as radicalizing content becomes more prevalent online, we’re seeing more real-world consequences. In the last U.S. election cycle, members of a right-wing militia group were charged with plotting to kidnap Michigan Governor Gretchen Whitmer, a convoy of vehicles draped in pro-Trump regalia tried to run one of Joe Biden’s campaign buses off the road in Texas, and a teenage white supremacist shot three pro-BLM protesters in Kenosha, Wisconsin, killing two.

      “We’ve always seen the online world as being somehow different to the offline world, as though the people who we are misinforming online don’t have human equivalents who create real offline harm to real human beings, and 2020 showed that to be untrue on a really profound level,” Ahmed says. 

Different political factions and stakeholders have proposed various changes to Section 230. Trump once tweeted that he wanted it repealed, based on the unfounded belief that social media companies censor conservative views. Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy and author of The Twenty-Six Words That Created the Internet, a book about Section 230, says a repeal of the law could result in social media companies halting moderation entirely.

Section 230 was originally passed in response to a court decision called Stratton Oakmont, Inc. v. Prodigy Services Co.

“Because [internet service Prodigy] had done a lot of moderation, they were found to be strictly liable, meaning they were just as liable for user content as the users who posted it,” Kosseff says. “Their competitor CompuServe did not do as much moderation, so they were actually held to a lower standard of liability. They were only to be held liable if they knew or had reason to know of the defamatory or illegal content. Section 230 was passed to fix that.”

In essence, if Section 230 were repealed, platforms might eliminate their moderation departments entirely and adopt a “see no evil, hear no evil” approach, claiming they never saw the objectionable content they’re sued over and opening the floodgates to unbridled hate speech and misinformation.

In a New York Times interview in 2019, Biden, then a presidential candidate, expressed support for revoking Section 230. His advisors and members of his party have also spoken in favour of reform and proposed legislation accordingly, none of which has passed. Last year, Facebook CEO Mark Zuckerberg appeared to signal support for legal reforms when he told the Senate Judiciary Committee that “we would benefit from clearer guidance from elected officials.”

Minnesota Senator Amy Klobuchar has repeatedly introduced antitrust legislation aimed at breaking up large internet companies, and now that she heads the Antitrust Subcommittee, such a bill seems more likely to pass. Analysts say a breakup has the potential to effect some positive changes in the online space, but that it’s unlikely to help with misinformation and hate speech on the scale we’ve been seeing.

Virginia Senator Mark R. Warner proposed the SAFE TECH Act last month, backed by Klobuchar and Hawaii Senator Mazie Hirono. The SAFE TECH Act would remove Section 230’s liability shield in cases involving deceptive ads, injunctions, civil rights violations, cyberstalking, harassment and intimidation on the basis of protected classes, wrongful death claims and suits under the Alien Tort Claims Act. It has the potential to do a lot of good. For example, if suits could be brought against Facebook under the Alien Tort Claims Act, Rohingya Muslims could sue the company for its inaction while its platform was used to manufacture consent for the genocide against them in Myanmar.


A common argument against reforms like these is that only large companies would have the resources to comply, leaving start-ups and new entrants in the dust. Senator Warner’s office calls those concerns exaggerated, arguing that smaller players don’t have the reach Facebook and Twitter do, making them less likely to cause significant harm. It also argues that potential plaintiffs are unlikely to sue a small tech company, since the damages they could collect wouldn’t justify the effort and cost of litigation, and that frivolous or bad-faith lawsuits would likely be thrown out under anti-SLAPP laws.

An interesting option with the potential to meaningfully combat radicalizing and false information on social media is the Protecting Americans From Dangerous Algorithms Act (PADAA), proposed in October 2020 by Congresswoman Anna G. Eshoo and Congressman Tom Malinowski. The legislation applies only to platforms with 50 million or more users, which addresses smaller platforms’ anxieties about new standards crippling their growth.

In 2016, the families of five victims of Hamas attacks in Israel (Hamas is designated a foreign terrorist organization by the U.S. Department of State) filed a lawsuit alleging Facebook “knowingly provided material support and resources to Hamas” by allowing members of Hamas to network and communicate on its platform. Facebook won the case by claiming Section 230 immunity.

But the ruling had a quirk. Judge Robert Katzmann of the U.S. Court of Appeals for the Second Circuit laid out in his dissent how part of the plaintiffs’ allegations of material support rested on Hamas’s ability to connect with like-minded radicals through the algorithms that suggest new friend connections to users.

      “When a plaintiff brings a claim that is based not on the content of the information shown but rather on the connections Facebook’s algorithms make between individuals, the [Communications Decency Act] does not and should not bar relief,” Katzmann wrote.

      Enter PADAA. The legislation would amend Section 230 to remove liability immunity for a platform if its algorithm is used to “amplify or recommend content directly relevant to a case involving interference with civil rights; neglect to prevent interference with civil rights; and in cases involving acts of international terrorism,” according to a press release from Malinowski’s office.

      “They’re not talking about content that is harmful, they’re not talking about hate speech, they’re not talking about defamatory speech, they’re specifically talking about speech that results in [real-world actions covered] under the Terrorism Statute and Civil Rights Act,” says Goodman. “Then they would lose Section 230 immunity.”

Ahmed says legally isolating this particular harm would force social media companies to step up moderation efforts and alter their algorithms with the public good in mind. They would be incentivized to get on top of content with the potential to cause real-world harm, like an event page for a rally that includes calls for violence, to intervene in the online activities of hate-mongers before they commit civil rights abuses in person, and to shut down the groups that allow them to thrive.

Ahmed says that without changes to the law, social media companies not only have no incentive to properly moderate the content users post, they have massive financial incentives to maintain the status quo.

      “They ultimately are the rule-setter, the judge, the jury and the executioner, but they happen to be incredibly lazy and frankly greedy because they take a cut of all the bad activity,” he says. “We should not be profiting from violent extremism. How bananas is it that I have to say that in 2021?” 

      Matt Stoller from the American Economic Liberties Project estimates that Facebook made around $2.9 billion off QAnon in 2020. In the wake of the 2019 Christchurch mass shooting, a New Zealand government inquiry found that the shooter was radicalized by YouTube.

Facebook and Twitter already have rules banning hate speech and the promotion of terrorism, as well as efforts to label false or misleading posts. They have also banned Trump and other dishonest and dangerous actors since the attack on the U.S. Capitol, but PADAA would go further.

“They don’t consistently enforce their terms of service and it looks like these are just political responses,” Goodman says. “The problem is that these sorts of dangerous forums are allowed to metastasize, so by the time that’s all happened, yes, you can de-platform a particular individual, but there has already arisen an entire network of adherents to what ultimately is a violent ideology.” 

Ramesh Srinivasan, Director of the Digital Cultures Lab at UCLA, agrees.

      “Violence occurs and then after the fact certain decisions are made privately in-house with very little third-party scrutiny or visibility so that we’re stuck in some sort of silo either praising or critiquing. I think that is a completely warped way of looking at what the reality should be,” he says. “There should be guidelines that are enforceable that these companies have to adhere to.”

      Legislation can only go so far in combating hate

If capturing as much of your attention as possible is how social media companies make money—and extremist and false content keeps your eyes glued to your devices—why would they go out of their way to remove it when they’re not facing a public relations crisis? PADAA wouldn’t change social media companies’ terms of service; it would compel them to enforce the rules they already have.

Let’s be clear: legislation won’t eliminate extremist content entirely, nor will it make the underlying hateful ideologies magically disappear. It would also likely push extremist organizing into more private spaces. But the average person is unlikely to stumble into a private chat of organizing extremists, so the change would slow the growth of extremist networks fed by Facebook and YouTube algorithms. Users would also see less misleading and radicalizing content in their news feeds in the first place.

      Despite the seriousness of the issue and the potential disruption of the status quo, Srinivasan insists change doesn’t have to be a contentious process.

      “This is not about screaming at people in the technology companies. This is about working together to get things on track,” he says, adding that this is “one of the greatest interdisciplinary challenges of our time.

      “The playful, creative, experimental spirit that actually produced the best things out of Silicon Valley can now be applied to resolve a challenge that is social, cultural, political, and technological at the same time.”

In 2017, Facebook formed “Integrity Teams” to research and recommend actions to combat divisive content and behaviour on its platform. Among the findings: 64 percent of all extremist group joins were due to recommendation algorithms. Implementing the teams’ recommendations would likely have resulted in Facebook making less money. Zuckerberg and other Facebook execs shelved the report, and blocked or weakened efforts to apply its findings to the platform.

      Internally, Facebook Policy Chief Joel Kaplan called the vetting measures “paternalistic,” and the process was nicknamed “Eat your veggies.”

      Looking around at everything that’s happened as a result of online extremism and misinformation run amok, some veggies would really hit the spot right now.
