Twitter rolled out an update to its “crisis misinformation policy” on Thursday, saying it will put a warning label on posts about the conflict in Ukraine that fit certain criteria, limiting their ability to be seen, shared or liked. The announcement comes just a day after the resignation of the US government’s “disinformation czar” Nina Jankowicz, who had advocated for the ability to edit other people’s tweets.
The policy will be applied globally and guide Twitter’s efforts to “elevate credible, authoritative information,” and “help to ensure viral misinformation isn’t amplified or recommended by us during crises,” said Yoel Roth, the company’s head of Safety and Integrity.
As soon as there is evidence that something posted “may be misleading,” Twitter will label it with a notice and won’t amplify or recommend it in the Home timeline, Search, and Explore tabs. Warning notices will be prioritized for “high profile accounts” such as those designated “state-affiliated media,” verified users, and official government accounts.
Declaring something misinformation will require “verification from multiple credible, publicly available sources, including evidence from conflict monitoring groups, humanitarian organizations, open-source investigators, journalists, and more,” Roth added.
As examples, Roth cited “false coverage or event reporting, or information that mischaracterizes conditions on the ground as a conflict evolves,” false allegations regarding “use of force, incursions on territorial sovereignty, or around the use of weapons,” as well as “demonstrably false or misleading allegations of war crimes or mass atrocities against specific populations” and falsehoods regarding “international community response, sanctions, defensive actions, or humanitarian operations.”
However, “strong commentary, efforts to debunk or fact check, and personal anecdotes or first person accounts” will be exempt.
The company’s Help Center page on the policy distills the definitions even further, making clear that Twitter will go after posts that have “the capacity” to “serve as a pretext for further aggression” or “lead to increased humanitarian needs,” disrupt ceasefire talks or “incite the targeting or surveillance” of groups based on political, religious, ethnic or ideological affiliation or membership, or protected by international humanitarian law.
To be labeled, a post has to state a claim or fact “expressed in definitive terms,” be “demonstrably false or misleading, based on widely available, authoritative sources” and “be likely to impact public safety or cause serious harm,” according to Twitter.
The policy is focused on “international armed conflict” such as the one in Ukraine, but Twitter plans to update and expand it to cover any crisis as defined by the UN, Roth explained.
In October 2020, Twitter infamously locked the account of the New York Post over a story about a laptop belonging to Hunter Biden, whose father Joe was then running for president as a Democrat. The platform first cited its “hacked materials” policy, then promoted the claim that the story was “Russian disinformation,” neither of which turned out to be true.
While nominally impartial, Twitter has not fact-checked or challenged any claims coming from the government in Kiev – including proven falsehoods such as the “Ghost of Kiev” aerial ace or the story of “Snake Island 13.” Within days of the escalation of hostilities in Ukraine, however, the platform blocked advertising from Russia, and has since boasted about reducing content labeled “Russian state-affiliated” media by 30%, while cutting in half the impressions and engagement by Russian government accounts.
Twitter’s new policy comes amid uncertainty over billionaire Elon Musk’s bid to buy the company and take it private. While Twitter accepted Musk’s $44 billion offer, he is now challenging its public filings, citing the number of bots and fake accounts. The SpaceX founder and Tesla CEO has sent satellite technology to the Ukrainian military, but has also spoken out against censorship on Twitter and said he wanted to ensure free speech on the platform.