RIGA — Social media companies from Facebook to YouTube are failing to stop people from setting up fake accounts, buying false online followers and promoting inauthentic digital content, according to a new report published Friday by a NATO-accredited group.
The findings — based on analysts purchasing social media manipulation tools and then testing how the companies responded — stand in stark contrast to public statements from Facebook, Twitter and Google that claim they have all significantly clamped down on such activities since the 2016 U.S. presidential election.
But despite those reassurances, researchers at NATO’s Strategic Communications Centre of Excellence, a military organization based in Riga, Latvia’s capital, whose mandate includes identifying potential foreign online interference, were able to purchase roughly 54,000 inauthentic social media interactions: fake followers, likes, comments and views of online content. They encountered little, if any, pushback from some of Silicon Valley’s biggest names.
“We assess that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behaviour on their platforms,” Sebastian Bay and Rolf Fredheim, the report’s authors, said in the publication.
“Self-regulation is not working. The manipulation industry is growing year by year,” they continued. “We see no sign that it is becoming substantially more expensive or more difficult to conduct widespread social media manipulation.”
While roughly 10 percent of the paid-for inauthentic activity bought by the NATO-affiliated group had ties to online political campaigns, the lion’s share of the manipulation, including the creation and purchase of fake accounts, was tied to commercial purposes.
That raises potentially hard questions for social media platforms as their business models rely on selling online advertising to other companies, many of which trust these firms to promote their paid-for messages to real online users, and not fake accounts created merely to generate online noise.
In response, the companies defended their record, saying that they had made it more difficult to buy such false interactions across their platforms, and that they had invested, collectively, millions of dollars in new technology and content moderation tools to thwart the worst offenders.
“We have dramatically accelerated the amount of takedowns that we do of networks of external interference on our platforms,” Nick Clegg, Facebook’s top global lobbyist, told reporters in Brussels on Monday. “This year alone, we have taken down about 50 networks of what we called coordinated inauthentic behavior.”
Probing social media
To test social media companies’ defenses, the Riga-based researchers spent just €300 between May and June of this year on mostly Russian social media manipulation tools that allowed them to purchase 3,500 online comments, more than 25,000 social media likes, 20,000 video views and a further 5,100 fake digital followers.
To avoid meddling in real-world political events, the analysts limited their inauthentic activities to the promotion of innocuous social media content like a Twitter post saying “hello!”
In total, they were able to identify almost 19,000 false online accounts that had been used to manipulate social media platforms in some form. The research also discovered inauthentic behavior tied to roughly 800 political and official government social media pages, including those of two countries’ presidents and a series of junior-level politicians across the United States and Europe. This activity primarily involved generating false engagement through likes or comments on these pages.
The analysts then reported some of the inauthentic behavior to the companies to gauge their response. In the past, the likes of Facebook and Twitter have been vocal about their ability to thwart such activities, often before users have reported potential abuse.
But after a month of purchasing the social media manipulation tools, the NATO-affiliated researchers said that 80 percent of their fake online engagements were still online. And three weeks after they had reported the abuse, roughly 95 percent of the inauthentic accounts were still active online.
“Given the low number of accounts removed, it is clear that social media companies are still struggling to remove accounts used for social media manipulation, even when the accounts are reported to them,” the report concluded.
Not all the same
Despite the widespread failure to respond to such activity, social media companies varied widely in how they reacted to inauthentic behavior, according to the researchers.
Twitter, for instance, had relatively robust mechanisms in place to identify false accounts, and more than half of the likes and retweets bought through the online manipulation tools were removed by the company, based on the research.
YouTube, for its part, performed the best at countering inauthentic likes and fake video views, and was the most expensive of all the social networks on which to purchase such inauthentic activity.
Yet the analysts noted that the video platform did not remove any of the inauthentic accounts they had purchased on the network, adding: “The fact that YouTube is the most expensive platform to manipulate is either a testament to its defensive actions or to the popularity of YouTube manipulation.”
Facebook and Instagram, the photo-sharing service owned by the world’s largest social network, both ranked poorly when it came to removing false video views, the researchers said, a key metric for some online advertisers.
But they were particularly scathing of Instagram, which has become a new source of online manipulation because photos can easily be shared across multiple accounts and because its audience skews younger than those of other social networks.
Not only was Instagram the cheapest of the services on which to buy inauthentic activity, the research discovered, but the social network also removed less than 1 percent of inauthentic likes during the monthlong test earlier this year.
“Facebook resembles a fortress with formidable defenses facing the outside world, but qualified actors are still able to scale the walls of Facebook,” the analysts concluded. “Policing and oversight within the walls are far less effective.”