The United States will not join an international bid to stamp out violent extremism online, the White House said Wednesday, while stressing that Washington backs the initiative's aims.
"While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected" in the so-called "Christchurch Call," the White House said.
The initiative is named after the New Zealand city where a far-right gunman massacred 51 people at two mosques in March while broadcasting his rampage live on Facebook. It has been spearheaded by New Zealand Prime Minister Jacinda Ardern and France's President Emmanuel Macron.
The White House said in a statement that the private sector should regulate its content, but also stressed the need to protect free speech.
"We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press," it said.
"We encourage technology companies to enforce their terms of service and community standards that forbid the use of their platforms for terrorist purposes," it said.
"Further, we maintain that the best tool to defeat terrorist speech is productive speech and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging."
The call was adopted by U.S. tech companies including Amazon, Facebook, Google, Microsoft, Twitter and YouTube, along with France's Qwant and Dailymotion, and the Wikimedia Foundation. The countries backing it were France, New Zealand, Britain, Canada, Ireland, Jordan, Norway, Senegal, Indonesia and the European Union's executive body. Several other countries not present at the meeting added their endorsement.
The meeting in Paris comes at a pivotal moment for tech companies, which critics accuse of being too powerful and resistant to regulation. Some have called for giants like Facebook to be broken up. Europe is leading a global push for more regulation of how the companies handle user data and copyrighted material. The tech companies, meanwhile, are offering their own ideas in a bid to shape the policy response.
In Wednesday's agreement, which is not legally binding, the tech companies committed to measures to prevent the spread of terrorist or violent extremist content. That may include cooperating on developing technology or expanding the use of shared digital signatures.
They also promised to take measures to reduce the risk that such content is livestreamed, including flagging it for real-time review.
And they pledged to study how algorithms sometimes promote extremist content. That would help find ways to intervene more quickly and redirect users to "credible positive alternatives or counter-narratives."
Facebook, which dominates social media and has faced the harshest criticism for overlooking the misuse of consumer data and not blocking live broadcasts of violent actions, said it is toughening its livestreaming policies.
It's tightening the rules for its livestreaming service with a "one strike" policy that applies to a broader range of offenses. Activity on the social network that violates its policies, such as sharing an extremist group's statement without providing context, will now result in an immediate temporary block. The most serious offenses will result in a permanent ban.
Previously, the company took down posts that breached its community standards but only blocked users after repeated offenses.
The tougher restrictions will be gradually extended to other areas of the platform, starting with preventing users from creating Facebook ads.
Facebook, which also owns Instagram and WhatsApp, said it's investing $7.5 million to improve technology aimed at finding videos and photos that have been manipulated to avoid detection — a problem the company encountered with the Christchurch shooting, where the attacker streamed the killing live on Facebook.
"Tackling these threats also requires technical innovation to stay ahead of the type of adversarial media manipulation we saw after Christchurch," Facebook's vice president of integrity, Guy Rosen, said in a blog post.
Ardern welcomed Facebook's pledge. She said she inadvertently saw the Christchurch attacker's video when it played automatically in her Facebook feed.
"There is a lot more work to do, but I am pleased Facebook has taken additional steps today ... and look forward to a long-term collaboration to make social media safer," she said in a statement.
Ardern is playing a central role in the Paris meetings, which she called a significant "starting point" for changes in government and tech industry policy.
The "Christchurch Call" was drafted as 80 CEOs and executives from technology companies gathered in Paris for a "Tech for Good" conference meant to address how they can use their global influence for public good — for example, by promoting gender equality, diversity in hiring and greater access to technology for lower-income users.
Ardern and Macron insist that the Christchurch guidelines must involve joint efforts between governments and tech giants. France has been hit by repeated Islamic extremist attacks carried out by groups that recruited members and shared violent images on social networks.
Free speech advocates and some in the tech industry bristle at new restrictions and argue that violent extremism is a societal problem that the tech world can't solve.
Former U.S. Secretary of State John Kerry, a member of the Carnegie Endowment for International Peace, said that while "a higher level of responsibility is demanded from all of the platforms," it is necessary to find a way to not censor legitimate discussion.
"It's a hard line to draw sometimes," he said.