US lawmakers are eyeing new rules for tech companies, citing concerns over Russia’s use of social media platforms during the 2016 election.
The current approach is “not working”, Republican Senator Lindsey Graham said.
He spoke as executives from Facebook, Twitter and Google appeared before a Senate panel in Washington on extremist content and Russian disinformation.
The firms said they were troubled by “abuse” of their services and pledged to take it seriously.
Russia has repeatedly denied allegations that it attempted to influence the last US presidential election, in which Donald Trump beat Hillary Clinton.
But Facebook has said as many as 126 million American users may have seen content uploaded by Russia-based operatives over the past two years.
The social networking site said about 80,000 posts were produced before and after the 2016 presidential election.
Google and Twitter have also investigated Russian-backed content appearing on their sites.
Most of the posts focused on sowing political and social divisions, the firms have said.
Google said it is developing tools to make more information about the buyers of political ads available to the public.
What is Facebook saying?
Facebook says some 80,000 posts were published between June 2015 and August 2017 and were seen directly by about 29 million Americans, according to a draft of prepared remarks seen by US media ahead of Tuesday’s Senate judiciary committee hearing.
These posts, which Facebook says were created by a Kremlin-linked company, were amplified through likes, shares and comments, and spread to tens of millions of people.
Facebook also said it had deleted 170 Instagram accounts, which posted about 120,000 pieces of content.
“These actions run counter to Facebook’s mission of building community and everything we stand for,” Facebook’s general counsel Colin Stretch said on Tuesday.
“And we are determined to do everything we can to address this new threat.”
In a blog post from earlier this month, Facebook’s Elliot Schrage said that many of the posts did not violate the company’s content policies. They were removed, he said, because they were inauthentic – the Russians behind the posts did not identify themselves as such.
Google also revealed on Monday that Russian trolls had uploaded more than 1,000 political videos on YouTube on 18 different channels. The company said they had very low view counts and there was no evidence they had been targeting American viewers.
Meanwhile, Twitter found and suspended all 2,752 accounts that it had tracked to the Russia-based Internet Research Agency, Reuters quotes a source as saying.
Key recent developments:
- Nov 2016: Facebook founder Mark Zuckerberg says “the idea that fake news on Facebook influenced the (US) election in any way is a pretty crazy idea”
- Aug 2017: Facebook says it will fight fake news by sending more suspected hoax stories to fact-checkers and publishing their findings online
- Oct 2017: Google finds evidence that Russian agents spent tens of thousands of dollars on ads in a bid to sway the election, reports say
- Oct 2017: Twitter bans Russia’s RT and Sputnik media outlets from buying advertising amid fears they attempted to interfere in the election