RIGA, Latvia — Jānis Sārts is a foot soldier in the war against fake news.
As director of a NATO-linked military group that tracks potential foreign interference, the Latvian and his team of more than 40 analysts in the Baltic country’s capital have spent years playing cat-and-mouse with online threats.
But as the United Kingdom prepares for a general election on Thursday and campaigning ramps up in the United States ahead of next year’s presidential election, Sārts has a sobering take on the West’s ability to combat foreign meddling.
“We are not able to keep up,” he said in an Americanized accent honed over years working within the North Atlantic Treaty Organization. “At best, we’ve had modest success in combating these threats. The issue is evolving so quickly. If there’s a gap, someone is going to exploit it.”
Sārts’ downbeat analysis is echoed across capitals from Brussels to Ottawa.
In the wake of the 2016 U.S. presidential election, when American intelligence agencies declared that Russian actors had tried to sow division among voters online, governments across the European Union and North America responded by setting up specialized teams to fight disinformation.
More than three years on, these experts are still struggling to come to grips with an ever-evolving threat from Russia, China and beyond, according to officials from across Europe and North America, most of whom spoke to POLITICO on the condition of anonymity because they were not authorized to speak publicly.
Many admit that they are no closer to reining in foreign interference than they were when the problem first garnered the public’s attention in 2016 — despite leaders’ promises to take a firm stance against such manipulative tactics.
Limited access to data from social media companies — where much of this misinformation is spread — has hamstrung efforts to fight back. An inability to track domestic actors, some of which may be working on behalf of foreign governments, has thwarted a more aggressive response. And the failure to update most countries’ electoral rules for the digital age has allowed foreign money to flow into campaigns and false reports to spread quickly with few, if any, checks.
“We are lacking granularity of information. That makes it difficult to know what’s going on,” said Louise Edwards, head of regulation at the U.K.’s Electoral Commission, the local watchdog in charge of the nationwide vote on December 12. “We need good deterrence so people don’t break the law in the first place. We need regulation that can issue meaningful fines.”
Governments respond to fake news
It was not supposed to be this way.
Even before Russia tried to meddle in the U.S. electoral system, officials within the European Commission had created a team, known as East Stratcom, tasked with monitoring and responding to Russian efforts to influence EU citizens.
Set inside a glass-fronted building just a stone’s throw from senior EU officials in the Berlaymont building in central Brussels, a group of mostly Russian-speaking analysts — drawn from across Europe — tracked Kremlin-backed propaganda, publishing examples of misinformation from the likes of RT and Sputnik, media outlets backed by the Russian government.
Similar efforts soon followed from Finland to the Czech Republic, as officials sought to insulate their citizens from state-sponsored attempts to promote divisive material around immigration, the future of the EU and other hot-button issues. The programs included raising awareness about potential fake news with local citizens, countering the worst offenders with fact-checking initiatives and educating civil servants about how to spot potential foreign interference within government.
“You have to do this 24/7,” said Mikael Tofvesson, deputy head of department at the Swedish Civil Contingencies Agency, whose team was tasked with countering online manipulation during the country’s 2018 general election. “Elections are only a speedbump. Interference campaigns are ongoing throughout the year.”
Yet many of these government-backed projects quickly ran into problems, according to several officials who spoke on the condition of anonymity.
Some, like those in the Czech Republic, fell somewhat out of favor when Andrej Babiš, a local billionaire who is more receptive to closer ties with Russia, became prime minister in late 2017, casting doubt over existing efforts to combat Kremlin interference in domestic affairs. In France, a team dedicated to tracking foreign threats during this year’s EU Parliamentary elections was disbanded soon after the vote in May, despite warnings that the threat of interference from foreign actors was ongoing.
Others, like the programs run by the Commission, suffered from infighting between EU countries. Some, particularly those which rely heavily on Russian natural gas imports, were skeptical of the need to counter the alleged Kremlin threat, and instead lobbied for the region’s misinformation efforts to focus more on terrorist propaganda.
Despite disagreements, the team’s budget roughly doubled, to €6 million, for 2019, and a slew of new analysts, including data scientists to crunch online information, has also been hired over the last 12 months.
But the influx of cash has not solved a basic problem — that the West’s state-sponsored misinformation spotters are only tasked with countering foreign threats.
Domestic actors have started copying the tactics first used by the Kremlin, including the use of fake social media accounts to promote false narratives on Facebook and Twitter. Foreign actors can often use domestic proxies — both willing agents and those who do not realize they are promoting state-backed misinformation — to get around the safeguards. Yet in all cases, analysts are hamstrung by concerns that by taking down suspicious domestic content, they may be harming legitimate pre-election debates.
“The key challenge is attribution,” said Teija Tiilikainen, head of the European Center of Excellence for Countering Hybrid Threats, an independent center affiliated with both the EU and NATO, based in Helsinki, whose role includes monitoring foreign influence in national elections. “Many times, it’s not easy to tell who’s behind campaigns. It makes it hard to call out bad actors.”
The Canadian government was quick to gear up for potential foreign interference ahead of the country’s nationwide vote in October.
Starting more than two years before this year’s election, government officials toured EU capitals and spent time in Washington, D.C. to learn how others had responded to digital misinformation campaigns, including those led by hostile governments. They also tapped the expertise of a G-7 project, based in Ottawa, that was created to share the latest intelligence between allies about possible foreign disinformation.
In the largest such overhaul so far undertaken by any Western democracy, Canada updated its electoral rules to put significant limits on political advertising spending on social media. It also created a non-partisan unit within government to publicize any possible overseas interference. (After the October election, that group said no foreign threat had met the threshold to raise a red flag with voters.)
Yet even Ottawa’s response has been only partially successful, according to Taylor Owen, project director at the Digital Democracy Project, a joint operation between McGill University and the Public Policy Forum, an Ottawa-based think tank, which tracked the recent online campaign in Canada.
By passing new electoral rules, Owen said, Canada was able to close many of the loopholes that still exist in other countries’ electoral systems, and which had allowed some of the more basic influence campaigns — like buying political ads on social media — to gain traction elsewhere.
But domestic Canadian groups, often tied to traditional parties, were still able to run online campaigns that promoted manipulative content, including fake fact-checking accounts that tried to portray partisan messages as unbiased.
Possible foreign actors, most notably those connected with China, were also able to spread politically linked messages to Canada’s Mandarin-speaking population through WeChat, the Chinese messaging service — despite Ottawa’s efforts to keep such foreign influence at bay, Owen added. There was also an influx of U.S. social media accounts, often linked to the country’s alt-right movement, though such American misinformation often blended in with domestic Canadian groups looking to stir up division.
“We had the advantage of going after other people’s elections; the government went after this seriously,” Owen said.
“But you can’t just focus on what happens in an election,” he added. “You have to focus on defending the broader governance ecosystem. If you just focus on the surface-level threats around an election, you’re missing the bigger picture.”