Digital Politics is a column about the global intersection of technology and the world of politics.
Facebook’s latest solution to stop divisive political content from distorting democracy is as simple as it is wrongheaded: Let the people decide.
In a series of upgrades to be rolled out over the next six months, the world’s largest social network is giving its users a greater say over how they’re targeted with political ads on the company’s global platforms.
People will be able to opt out of viewing paid-for messages. Facebook’s transparency tools will be expanded. Researchers will have more power to track digital campaigns.
Facebook also doubled down on its refusal to fact-check political messages from established politicians (even when they made false claims), and said it would continue to allow partisan groups to target people online with direct messages — a tactic that Google had banned on its rival services worldwide.
For a U.S. tech giant, appealing to notions of free speech and individual responsibility makes sense — they’re baked into the country’s constitution. But the First Amendment — and the broader tenets of freedom of speech — does not apply in the same way beyond U.S. borders, where even Western democracies tend to put caveats on free speech, notably during election seasons.
It also fails to address a level of quasi-state paternalism that people in other countries, notably in government-heavy Europe and across Asia, have come to rely on when tackling some of the most difficult questions now facing voters in the 21st century.
When it comes to asking users from San Francisco to Stockholm to Seoul to make their own determinations about what content shows up in their feeds, Facebook’s new strategy rests on a fundamental flaw — the assumption that users have the time, willingness or ability to do so on their own.
That’s wishful thinking.
By outsourcing the policing of political messages to regular people on both its social network and Instagram, the photo-sharing service also owned by Facebook, the tech giant is again sidestepping its obligation to take greater responsibility for what is posted on its platforms.
It’s also deputizing its more than 2.4 billion global users in a complex, ever-changing battle against misinformation and overtly partisan material that no one is capable of fighting alone — particularly average internet users, who are more accustomed to swiping through photos from family and friends than checking whether a social media post came from a Russian-backed group looking to sow dissent.
What Facebook’s latest steps will lead to is more of the same.
On paper, people will be granted new powers to track and limit who targets them with political ads.
But in reality, few, if any, will take advantage of such powers to change their social media experience by limiting, or even deleting, all forms of paid-for partisan messaging ahead of this year’s U.S. presidential election or the litany of other national votes planned in 2020 from Poland to Peru.
Woods from the trees
That’s not to say individuals shouldn’t do more to protect themselves online.
Since the 2016 U.S. presidential election, after which the country’s national security agencies alleged that Kremlin-backed groups tried to sway voters, people’s awareness of such tactics — and ability to combat them through new transparency tools from Google, Facebook and Twitter aimed at highlighting who’s buying online ads — has grown steadily.
But relying on Facebook users to do the heavy lifting won’t address some of the systemic problems that have allowed social media platforms to become rich breeding grounds for conspiracy theories, outright hate speech and other politically motivated divisive content.
The real questions are how such material is created in the first place, who is behind campaigns to spread it virally online, and what can be done to suppress the worst offenders — all topics that would require Facebook, not individual users, to step into the breach and take greater responsibility for what is published online.
In the company’s defense, it has invested millions in new technologies to weed out some harmful material, hired (mostly contract) workers to moderate digital content and beefed up its security checks in an effort to keep foreign governments at bay.
Yet the tech giant’s raison d’être remains one where users, not the company, make the final call. That’s a game plan Facebook has already used to sidestep greater regulatory scrutiny.
Take its new plan to allow people to opt to see fewer (but still some) political ads and to remove themselves from so-called custom audiences — bespoke lists created by advertisers looking to target voters with similar characteristics.
The company took a similar stance when addressing the barrage of privacy concerns raised in the aftermath of the Cambridge Analytica scandal, in which more than 87 million people may have had their data collected and used without consent. U.S. authorities fined Facebook $5 billion for the abuses, and regulators from Europe to South America followed suit.
To address such concerns, the tech giant beefed up users’ privacy protections, allowing people to track, delete and limit which advertisers had access to their information — changes, in part, that were mandated when Europe revamped its data protection rules in 2018.
But many of these new controls remain relegated to the basement of Facebook’s labyrinthine settings. They require individuals to hunt through menu after menu before being able to switch off the data-hungry options that have allowed the company to become one of the biggest online advertisers anywhere in the world. (Users were asked to update their settings when they signed into their accounts.)
The same strategy was also used when the social media giant reintroduced its facial recognition technology in Europe two years ago, after it had been banned there back in 2011. If someone wanted to use the tech, it was as simple as clicking a button. But opting out similarly involved wading through reams of options before reaching the correct setting.
The result of such practices? Most people haven’t opted out — because of inertia, confusion or laziness (or all of the above).
That’s also the likely outcome of Facebook’s renewed push on political ads.
It’s all well and good giving people more control over how they’re targeted. But if few people actually take up that gauntlet, does it even matter?
Mark Scott is chief technology correspondent at POLITICO.