Thirty-five minutes live on Twitch, then a copy shared through messaging apps: the online broadcast of Wednesday's attack on a synagogue in Halle, Germany, has once again shown how difficult it is to stop killers from broadcasting online, despite the mobilization that followed Christchurch.
The shooter, whose Yom Kippur attack in eastern Germany left two dead and two seriously wounded, was able to stream live footage to an account created two months earlier on Twitch, a live-streaming platform where fans of video games and e-sports normally gather. The account had been used for a live stream only once before.
A video shared through third-party messaging
During those 35 minutes, only five people watched the attack live, the platform said. The recording, automatically stored on Twitch, was viewed by 2,200 people. More troubling, the platform indicates that the video was then shared "in a coordinated manner" through third-party messaging apps, and for the moment it is impossible to say how many people have seen it.
"We acted as quickly as possible to remove this content, and we will suspend any accounts that post or repost footage of this abominable act," a Twitch spokeswoman said. "Once the video was removed, we shared the information with an industry consortium to help prevent the proliferation of this content. We take this very seriously and are committed to working with our peers, law enforcement and all stakeholders to protect our community."
The modus operandi recalls the attack on two mosques in Christchurch, New Zealand, in March. A far-right Australian killed 51 people before being arrested, but managed to broadcast his attack live on Facebook for about 17 minutes before the stream was cut.
That long delay earned Facebook fierce criticism and prompted calls from all sides for immediate action.
Telling a real attack from a movie or video game scene
Facebook has enlisted police forces on both sides of the Atlantic to train its artificial-intelligence tools in the hope of preventing such a disaster from recurring. The difficulty is that the "artificial brain" must be able to tell the difference between a real attack and a scene from a movie or video game.
"Filtering algorithms have so far not been very good at detecting violence in live streams. In the future, we may see social networks held accountable for their role in disseminating violent and hateful content," says Jillian Peterson, professor of criminology at Hamline University in St. Paul.
To better train the machine, Facebook has partnered with London's Metropolitan Police, at the force's initiative, using footage from body cameras worn by "Met" firearms units during their shooting training. Artificial-intelligence tools need huge amounts of data – here, images of shootings – to learn to correctly identify, sort and ultimately delete such content. The initiative, announced in mid-September, is part of a wider effort to purge "hateful and extremist" content and white-supremacist movements or individuals.
An alliance of web giants against extremism
Facebook and its partners have also announced the creation of a new organization expected to include Facebook, Microsoft, Twitter and Google (via YouTube), as well as Amazon and the platforms LinkedIn (owned by Microsoft) and WhatsApp (Facebook). The new structure is meant to "thwart the increasingly sophisticated attempts of terrorists and violent extremists to use digital platforms."
Non-governmental actors will lead an advisory committee, and the governments of the United States, France, the United Kingdom, Canada, New Zealand and Japan will also have a consultative role, as will experts from the UN and the European Union.
But for Hans-Jakob Schindler, who heads the Counter Extremism Project, all this is not enough. "This tragic incident demonstrates once again that self-regulation is not effective enough and unfortunately underscores the need for stronger regulation of the technology sector," he writes.