WASHINGTON: Just weeks after a broad effort announced by tech platforms to curb the spread of violent content, a video of Wednesday’s (Oct 9) deadly shooting in the German city of Halle was posted online where it may have been viewed by millions.
The gunman posted a video of the attack on the Twitch livestream gaming platform owned by Amazon, the company acknowledged.
The video of the shooting at a synagogue and a Turkish restaurant included a “manifesto” with racist and anti-Semitic commentary.
“Twitch has a zero-tolerance policy against hateful conduct, and any act of violence is taken extremely seriously,” a Twitch spokesperson said.
“We worked with urgency to remove this content and will permanently suspend any accounts found to be posting or reposting content of this abhorrent act.”
The news comes after the deadly New Zealand mosque shooting livestreamed on Facebook in March, which prompted governments to press social networks to prevent the airing of violent acts on their platforms.
On Sep 23, Facebook announced additional efforts at the United Nations during a meeting with New Zealand’s Prime Minister Jacinda Ardern, who has taken up the cause of fighting online extremism.
Also last month, Amazon announced it was joining the Global Internet Forum to Counter Terrorism, an alliance tasked with tackling the most dangerous content on social media.
The tech firms had been seeking to avoid a repeat of the handling of the bloodbath in Christchurch, where the assailant posted a manifesto online and then livestreamed his killing of 51 worshippers.
DETECTION BY ALGORITHM
Twitch, which has gained a following for livestreaming gaming, was acquired in 2014 by Amazon for US$970 million, and has an estimated 15 million daily active users.
It was not immediately clear how long the video remained online, or how many people saw it. But segments of the video were reposted on Twitter and other social platforms.
After the Christchurch massacre, Facebook and others pointed out the challenges of preventing the sharing of violent content, which is often reposted with minor changes to avoid detection by artificial intelligence.
Facebook also recently announced efforts to work with police in London and elsewhere to get better data on violence to improve its detection algorithms.
“Filtering algorithms so far have not been very good at detecting violence on livestream,” noted Jillian Peterson, a professor of criminology at Hamline University, who suggested that social media firms may end up being “held accountable” for their role in spreading violent and hateful content.
Research by Peterson and others suggests shooters may be affected by contagion when they see similar attacks.
“In many ways, these shootings are performances, meant for all of us to watch,” Peterson said.
“Social media – and now livestreaming services – have given perpetrators a larger stage and wider audience. Perpetrators are looking to show their grievance to the world, and livestreaming gives them the means to do it.”
Hans-Jakob Schindler of the Counter Extremism Project, a group seeking to curb online violence, said the latest livestream highlights a need for stronger actions against social platforms.
“Online platforms need to step up and stop their services from being used and, in turn, parent companies need to hold them accountable,” Schindler said.
“Amazon is just as much to blame as Twitch for allowing this stream online. This tragic incident demonstrates one more time that a self-regulatory approach is not effective enough and sadly highlights the need for stronger regulation of the tech sector.”