How protesters in Russia and Ukraine are avoiding internet censorship — and jail
On Thursday night, human rights activist Marina Litvinovich posted a video to her Facebook account calling for her fellow Russians to protest the country’s invasion of its neighbor to the west.
“I know that right now many of you feel desperation, helplessness, shame over Vladimir Putin’s attack on the friendly nation of Ukraine,” she said. “But I urge you not to despair.”
Within hours, Litvinovich was in custody, facing a fine for “an attempt to organize an unsanctioned rally.”
As Russia cracks down on antiwar protests, those voicing dissent on the ground and in online spaces face heightened danger.
Hundreds of protesters have been rounded up in Moscow and St. Petersburg. Human rights advocates have warned that those authoring critical posts on social media in the region would face a new wave of repression, including detention and other legal ramifications.
Some social media users have improvised ways of communicating in an attempt to avoid censorship or arrest. In one instance, an Instagram user posted an image with no immediately discernible meaning — rows of man-walking emojis, a sketched profile of a woman’s head, and the number seven — to indicate the time and place of a protest.
In Russia and the U.S., a Kremlin-backed propaganda effort seeks to build support for — or at least muddy the waters around — Putin’s invasion of Ukraine.
Meanwhile, the social media companies have taken measures to address threats to their users in those regions.
In response to news of the escalating conflict Wednesday night, Meta, the parent company of Facebook, established a “Special Operations Center” to monitor and quickly respond to the military conflict, and launched a tool in Ukraine allowing people to lock their profile quickly with one click. The tool provides an extra layer of privacy to prevent users who aren’t their friends from viewing their posts or downloading or sharing their profile photo, according to Nathaniel Gleicher, head of security policy at Facebook, who described the company’s response to the crisis in a series of posts on Twitter.
Facebook previously launched the one-click tool in Afghanistan in August, informed by feedback from activists and journalists. It has also previously deployed the tool in Ethiopia, Bangladesh and Myanmar, according to the company.
Twitter posted a guide to shoring up security, warning that while using its platform “in conflict zones or other high-risk areas, it’s important to be aware of how to control your account and digital information.” The company advised setting up two-factor authentication (a safeguard against password hacking), disabling location info from showing on tweets, adjusting privacy settings to make tweets visible only to one’s followers, or deactivating one’s account if that feels like the safest option.
Sophie Zhang, a former data scientist at Facebook, said that although a quick and easy tool to lock accounts was useful, earlier and stronger measures by social media companies might have slowed Putin’s march toward regional domination. A lack of aggressive response to earlier “horrendous repression†in Belarus — including the use of people’s activity on Facebook to make arrests — reflects a broader issue with how social media companies navigate human rights issues, she said.
Social media users say they’ve been censored for expressing support for Palestinians and criticism of Israel. Employees of Facebook and Google have also accused the companies of bias.
Zhang has criticized Facebook’s reaction to global political conflict in the past. She described in a lengthy memo published by BuzzFeed in 2020 how the company failed to address or curb disinformation campaigns by politicians in numerous countries abusing the platform to influence elections and gain power.
Twitter spokesperson Katie Rosborough said in an email that in line with its response to other global events, the company’s safety and integrity teams are monitoring for potential risks, including identifying and disrupting attempts to amplify false and misleading information and looking to “advance the speed and scale” of its policy enforcement.
“Twitter’s top priority is keeping people safe, and we have longstanding efforts to improve the safety of our service,” Rosborough said.
Facebook is actively removing content that violates its policies and working with third-party fact checkers in the region to debunk false claims, spokesperson Dani Lever said in an emailed statement.
“When they rate something as false, we move this content lower in Feed so fewer people see it,” Lever said. “We’re also giving people more information to decide what to read, trust, and share by adding warning labels on content rated false, and applying labels to state-controlled media publishers.”
On Friday, the Russian government said it would partially limit access to Facebook in response to the company’s treatment of four pro-Kremlin news media accounts, several news outlets reported. Nick Clegg, Meta’s president of global affairs, said in a statement that the move came after “Russian authorities ordered us to stop the independent fact-checking and labeling of content posted on Facebook” by the four outlets, and the company refused.
Though Twitter and Facebook representatives said the companies are paying close attention to emerging disinformation threats, their response hasn’t been free of missteps.
Twitter erroneously suspended the accounts of independent reporters and researchers posting information about the activities of Russian forces near the Ukrainian border.
Rosborough said in an email that while the company has been monitoring for “emerging narratives” that violate the platform’s rules on manipulated media, “in this instance, we took enforcement action on a number of accounts in error. We’re expeditiously reviewing these actions and have already proactively reinstated access to a number of affected accounts.”
Some of the affected users accused the Russian state of coordinating a bot campaign to mass report their accounts to Twitter, resulting in the action taken against their accounts, but Rosborough said those claims were inaccurate.
Even as social media companies release tools to improve safety and security for their users in conflict areas, the same companies have given in to pressure from Russia over the last year, taking down posts in support of political opponents to the current regime.
Meta, which owns Instagram and WhatsApp as well as Facebook, acknowledged in its most recent transparency report that it does sometimes delete content in response to requests by Russian authorities, removing about 1,800 pieces of content “for allegedly violating local laws” on Facebook or Instagram in the first half of 2021. Of the removed items, 871 were “related to extremism,” according to the report. Meta did not immediately respond to emailed questions about the removed posts.
A December report by the BBC found that Russia’s media regulator Roskomnadzor had launched more than 60 lawsuits against Google, Facebook, Instagram and Twitter last year, targeting hundreds of posts. The majority of court proceedings aimed to take action against calls to attend demonstrations in support of jailed political leader Alexei Navalny, who opposes Putin. Meta faces potentially severe fines due to higher penalties Russia imposed last year for failure to delete illegal content, according to the BBC.