By Human Rights Watch
Social media platforms are taking down online content they consider terrorist, violently extremist, or hateful in a way that prevents its potential use to investigate serious crimes, including war crimes, Human Rights Watch has found. While it is understandable that these platforms remove content that incites or promotes violence, they should ensure that this material is archived so it can possibly be used to hold those responsible to account.
The 42-page report, “‘Video Unavailable’: Social Media Platforms Remove Evidence of War Crimes,” urges all stakeholders, including social media platforms, to come together to develop an independent mechanism to preserve potential evidence of serious crimes. They should ensure that the content is available to support national and international investigations, as well as research by nongovernmental organizations, journalists, and academics. Rights groups have been urging social media companies since 2017 to improve transparency and accountability around content takedowns.
“Some of the content that Facebook, YouTube, and other platforms are taking down has crucial and irreplaceable value as evidence of human rights atrocities,” said Belkis Wille, senior crisis and conflict researcher at Human Rights Watch. “With prosecutors, researchers, and journalists increasingly relying on photographs and videos posted publicly on social media, these platforms should be doing more to ensure that they can get access to potential evidence of serious crimes.”
Social media content, particularly photographs and videos, posted by perpetrators, victims, and witnesses to abuses, has become increasingly central to some prosecutions of war crimes and other serious crimes, including at the International Criminal Court (ICC) and national proceedings in Europe. This content also helps the media and civil society document atrocities and other abuses, such as chemical weapons attacks in Syria, a security force crackdown in Sudan, and police abuse in the United States.
For this report, Human Rights Watch interviewed seven people who work at civil society organizations; three lawyers; two archivists; one statistician; two journalists; one former prosecutor with experience in international tribunals; five individuals within internationally mandated investigations; three national law enforcement officers; one European Union official; and one member of the European Parliament.
It also reviewed Facebook, Twitter, and YouTube content that Human Rights Watch has cited in its reports to support allegations of abuse since 2007. Of the 5,396 pieces of content referenced in 4,739 reports – the vast majority published in the last five years – it found that 619 (or 11 percent) had been removed.
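The removal rate follows directly from those figures. As a purely illustrative sketch, the snippet below reproduces the arithmetic and shows one rough way such an availability review could be automated; the `is_still_available` helper is a hypothetical illustration using the third-party `requests` library, not Human Rights Watch's actual methodology.

```python
import requests

def is_still_available(url: str) -> bool:
    """Rough availability check: removed posts typically return 404
    or redirect to a 'content unavailable' page."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        return resp.status_code == 200
    except requests.RequestException:
        return False

# Figures from the review described above.
total_cited = 5396  # pieces of content cited in reports since 2007
removed = 619       # pieces found to be no longer accessible
print(f"{removed} of {total_cited} removed ({removed / total_cited:.0%})")
# -> 619 of 5396 removed (11%)
```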
In letters to Facebook, Twitter, and Google sent in May 2020, Human Rights Watch shared the links to this taken-down content and asked whether the companies would restore its access for archival purposes, a request that was not granted.
In recent years, social media companies including Facebook, YouTube, and Twitter have ramped up efforts to remove posts from their platforms that they consider to violate their rules, community guidelines, or standards under their terms of service. This includes content they deem terrorist or violent extremist, hate speech, organized hate, hateful conduct, and violent threats.
The companies take down posts that users flag and content moderators review. But increasingly they also use algorithms to identify and remove offending posts, in some cases so quickly that no user sees the content before it is taken down. Governments globally have encouraged this trend, calling on companies to take down dangerous content as quickly as possible. It is unclear whether or how long social media companies store various types of content they take down or block from their sites.
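The platforms' automated systems are proprietary, but one commonly described approach is matching new uploads against a shared database of fingerprints ("hashes") of material already judged to violate the rules, which is how a re-upload can be blocked before any user sees it. The sketch below illustrates only that general idea: the database and function names are hypothetical, and real systems use perceptual hashes that also catch edited or re-encoded copies, rather than the exact-match hash shown here.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of an uploaded file (real systems use
    perceptual hashes that also match near-duplicates)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared database of fingerprints of content already
# judged terrorist or violent extremist.
banned_hashes = {fingerprint(b"previously banned video bytes")}

def screen_upload(data: bytes) -> bool:
    """Return True if the upload should be blocked before publication.
    Matching at upload time means flagged content can vanish before
    any user, or any investigator, ever sees it."""
    return fingerprint(data) in banned_hashes

assert screen_upload(b"previously banned video bytes")  # blocked
assert not screen_upload(b"new, unmatched footage")     # allowed
```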
Companies are right to promptly take content offline that could incite violence, otherwise harm individuals, or jeopardize national security or public order, so long as the standards they apply comport with international human rights and due process principles. Permanently removing such content, however, can place it beyond the reach of investigators and hamper important criminal accountability efforts.
No mechanism yet exists to preserve and archive social media takedowns that could provide crucial evidence of abuses, much less to ensure access by those who investigate international crimes. In most countries, national law enforcement officials can compel social media companies to hand over the content through the use of warrants, subpoenas, and court orders, but international investigators have limited ability to access the content because they lack law enforcement powers and standing.
Independent organizations and journalists have played a vital role in documenting atrocities around the globe, often when no judicial bodies were conducting investigations. In some cases, this documentation has triggered judicial proceedings. However, they too have no ability to access taken-down content and, like official investigators, receive no notice of material that artificial intelligence systems remove before anyone views it.
A European law enforcement officer investigating war crimes told Human Rights Watch that “content being taken down has become a daily part of my work experience. I am constantly being confronted with possible crucial evidence that is not accessible to me anymore.”
Holding individuals accountable for serious crimes may help deter future violations and promote respect for the rule of law, Human Rights Watch said. Criminal justice efforts may also help restore dignity to victims by acknowledging their suffering and helping to create a historical record that protects against revisionism by those who deny that atrocities occurred. International law obligates countries to prosecute genocide, crimes against humanity, and war crimes.
It is vital for social media companies and all relevant stakeholders to jointly develop a plan to establish an independent mechanism to take on the role of liaison with social media platforms and to preserve this content. The archive should be responsible for sorting and granting access to the content for research and investigative purposes, in accord with human rights and data privacy standards.
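As one illustration of what such a mechanism could look like in practice, the sketch below outlines a possible record structure for a preserved takedown, with tiered access reflecting the different investigative and research uses described in this report. Every field and tier name here is a hypothetical assumption, not part of any existing design or proposal.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class AccessTier(Enum):
    """Who may retrieve an archived item; hypothetical tiers."""
    COURT_ORDER = 1        # national courts and prosecutors
    MANDATED_INQUIRY = 2   # internationally mandated investigations
    VETTED_RESEARCH = 3    # vetted NGOs, journalists, academics

@dataclass
class ArchivedTakedown:
    """One preserved piece of removed content and its provenance."""
    content_hash: str         # fingerprint of the original file
    source_url: str           # where the content was posted
    platform: str             # e.g. "Facebook", "YouTube", "Twitter"
    removed_at: datetime      # when the platform took it down
    removal_reason: str       # policy the platform cited
    minimum_tier: AccessTier  # lowest tier permitted to access it
```

Preserving a fingerprint and provenance alongside the content itself would also help the archive authenticate material later, a chain-of-custody concern that courts commonly raise about open-source evidence.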
In parallel with these efforts, social media platforms should be more transparent about their existing takedown procedures, including their increasing reliance on algorithms, Human Rights Watch said. They should ensure that their own systems are not overly broad or biased and that they provide meaningful opportunities to appeal content removal.
“We appreciate that the task before social media companies is not easy, including striking the right balance between protecting free speech and privacy, and taking down content that can cause serious harm,” Wille said. “Consultations that draw on the experiences of other historical archives could lead to a real breakthrough and help the platforms protect free speech and public safety, while also ensuring that accountability efforts aren’t hampered.”
Posted on October 12, 2020