Security

The Hamas Threat of Hostage Execution Videos Looms Large Over Social Media

Hamas has threatened to broadcast videos of hostage executions. With the war between Israel and Hamas poised to enter a new phase, are social platforms ready?

Wired

For the past decade, social media platforms have struggled to stop the spread of extremist violence livestreamed on their platforms. Now they face a much different problem: This time, they know what’s coming.

In the days after Hamas attacked Israel on October 7, the group’s military wing said it would kill an Israeli hostage every time Israel launched an attack on Gaza. Abu Obeida, a spokesperson for the al-Qassam Brigades, added that these executions would be broadcast “in audio and video.” As of today, Hamas is holding 220 people hostage, according to the Israel Defense Forces.

Amid the October 7 attack, one person livestreamed footage showing Hamas fighters murdering Israeli citizens, a clip of which was later shared by Donald Trump Jr. on X. And in the days after the attack, Hamas hijacked hostages' accounts on platforms like Facebook and Instagram to livestream attacks and issue death threats.

In response, tech companies appeared to take things seriously.

Meta launched a “special operations center” to track content on Facebook, Instagram, and Threads. TikTok activated its “command center.” X (formerly Twitter) mobilized its “cross-company leadership group.” YouTube activated its “intelligence desk.” And Reddit tells WIRED its “safety team” is on alert.

But when WIRED asked how they plan to prevent bad actors from weaponizing their livestreaming capabilities, none of the companies offered specifics; those that responded at all pointed to their general published guidance on livestreaming policies. Meta, for example, directed WIRED to a 2019 blog post from chief information security officer Guy Rosen about how the company planned to curb misuse of its livestream products in the wake of the Christchurch massacre, in which a gunman killed 51 people at two mosques in New Zealand in March of that year.

Despite the press releases indicating all-hands-on-deck attention to the livestreaming problem, industry experts say social media companies are showing a lack of transparency and an unwillingness to take the measures needed to prevent execution videos from spreading on their platforms, such as suspending livestreaming capabilities completely during the Israel-Hamas war.

“Hamas’ threats to execute these people on livestreams is happening because it’s possible for them to broadcast it,” Imran Ahmed, the founder and CEO of Center for Countering Digital Hate, a nonprofit that tracks hate speech on social platforms, tells WIRED. “They’re doing this for the spectacle because it terrorizes us. If we took away their ability to generate that spectacle, would they be doing this? What comes first, the livestream or the execution?”

Ahmed alleges that the companies are failing to implement systems that automatically detect violent extremist content as effectively as they detect some other kinds of content. “If you have a snatch of copyrighted music in your video, their systems will detect it within a microsecond and take it down,” Ahmed says, adding that “the fundamental human rights of the victims of terrorist attacks” should carry as much urgency as the “property rights of music artists and entertainers.”

According to an employee of a major platform, who was granted anonymity because they are not authorized to speak publicly, the platforms are withholding details about how they plan to curb the misuse of livestreams in part because they are concerned that giving away too much information would allow Hamas, Palestinian Islamic Jihad (PIJ), and other militant groups or their supporters to circumvent the measures that are in place.

Adam Hadley, founder and executive director of Tech Against Terrorism, a United Nations-affiliated nonprofit that tracks extremist activity online, tells WIRED that while maintaining secrecy around content moderation methods is important during a sensitive and volatile conflict, tech companies should be more transparent about how they work.

“There has to be some degree of caution in terms of sharing the details of how this material is discovered and analyzed,” Hadley says. “But I would hope there are ways of communicating this ethically that don’t tip off terrorists to detection methods, and we would always encourage platforms to be transparent about what they’re doing.”

The social media companies say their dedicated teams are working around the clock right now as they await the launch of Israel’s expected ground assault in Gaza, which Hadley believes could trigger a spate of hostage executions.

And yet, for all of the time, money, and resources these multibillion-dollar companies appear to be putting into tackling this potential crisis, they still rely on Tech Against Terrorism, a tiny nonprofit, to alert them when new content from Hamas or PIJ is posted online.

Hadley says his team of 20 typically knows about new terrorist content before any of the big platforms. So far, while tracking verified content from Hamas' military wing and PIJ, Hadley says, the volume of that content on the major social platforms has been “very low.”

“Examples that we can find on large platforms of that official content are in the thousands, so not a lot of examples, compared with significantly more on Telegram,” Hadley says. “We are sharing alerts to 60 platforms who have been implicated with [Hamas] and PIJ content to date, but very low volumes are present.”

Hamas is banned on Facebook, Instagram, YouTube, and TikTok, so it has typically not used these mainstream platforms to any great extent, though X said earlier this month that it had removed hundreds of Hamas-affiliated accounts. Instead, Hamas prefers to use Telegram.

“Because the use of Telegram is so widespread and simple, we will assume they will continue to do this because terrorists prioritize platforms they can use most easily,” Hadley says, “and why would they bother with anything other than Telegram?”

Meta, X, and YouTube collaborate to share information about what they are seeing on the platforms. Telegram does not take part in such partnerships. While it has blocked some channels in certain jurisdictions, Telegram has typically taken a hands-off approach to content moderation, meaning Hamas and related groups have been mostly free to post whatever content they like on its platform.

Telegram did not respond to WIRED’s questions about how it would deal with execution videos on its platform, nor did X or Twitch. Discord declined to provide an on-the-record comment.

While Hadley sees Telegram as the primary concern, mainstream platforms are still at risk of facilitating the spread of this content. Videos can easily be downloaded from Telegram and reposted on any other platform, and it is at that point that the mainstream social networks face the challenge of stopping them.

When asked to provide more details about their efforts, Meta and TikTok directed WIRED to their policy pages outlining the work they are doing to prevent the spread of these videos. YouTube spokesperson Jack Malon tells WIRED that “our teams are working around the clock to monitor for harmful footage across languages and locales.” A Reddit spokesperson says the company has “dedicated teams with linguistic and subject-matter expertise actively monitoring the conflict and continuing to enforce Reddit’s Content Policy across the site.”

One of the key ways that tech companies try to keep violent extremist content from going viral on their platforms is by using what are known as hashing databases. When content from a terrorist organization is identified, it is “hashed” to create a unique identifier that can then be shared among all companies to automatically prevent that video or image from being reshared elsewhere.
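
In broad strokes, the workflow resembles the minimal sketch below, which assumes simple exact-match hashing. Real shared databases rely on perceptual hashes (Meta's open source PDQ, for example) that tolerate re-encoding and cropping; every name in this sketch is illustrative, not any platform's actual API.

```python
import hashlib

# Minimal sketch of hash-based matching, assuming exact-match hashing.
# Production systems use perceptual hashes that survive re-encoding;
# all names here are illustrative, not any platform's real API.

def hash_content(data: bytes) -> str:
    """Derive a unique identifier from a video's or image's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Identifiers of content already verified as terrorist material,
# as they might be distributed among member companies.
shared_hash_database: set[str] = set()

def register_verified_content(data: bytes) -> None:
    """Add newly identified content to the shared database."""
    shared_hash_database.add(hash_content(data))

def should_block_upload(data: bytes) -> bool:
    """Check an incoming upload against the shared database."""
    return hash_content(data) in shared_hash_database
```

The trade-off is that a cryptographic hash changes completely if a video is re-encoded, trimmed, or watermarked, which is why the shared industry databases use perceptual hashing designed to match visually similar copies rather than identical files.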

The sharing of these hashes and communication around incidents like the Israel-Hamas war are coordinated by the Global Internet Forum to Counter Terrorism (GIFCT), a small team of experts whose mission is to stop tech platforms from being exploited by violent extremists. The group was founded in 2017 by Facebook, Microsoft, X (then Twitter), and YouTube as a way of centralizing and coordinating the sharing of information. Companies like Dropbox, Amazon, Pinterest, and Zoom also provide funding to the group.

GIFCT did not provide a spokesperson to answer WIRED’s questions, but a statement on the group’s website says it is “providing regular situational awareness across member companies and support in addressing content relating to this conflict that violates their platform policies.”

Even when mainstream platforms have access to a shared database of verified terrorist content and centralized communications with experts who are tracking this content closely, they still have a lot of work to do in order to prevent violent videos from spreading on their platforms.

“This will be a big litmus test for the mainstream platforms,” Hadley says. “We would expect their algorithmic systems to be able to detect the [Hamas] logo, for instance, and respond accordingly. So actually, we’re quietly confident that those platforms will be alright to handle it. And I would hope that they’ve been training their algorithms in anticipation of this, based on the very large amounts of existing content from [Hamas and PIJ]. But unfortunately, we have no insight into that at all.”
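
The logo detection Hadley alludes to is typically framed as an image-classification problem. The sketch below assumes a hypothetical binary classifier fine-tuned on known insignia; the weights file and the confidence threshold are placeholders, not any platform's actual system.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hedged sketch: score a single video frame with a hypothetical
# classifier (classes: [no logo, logo]) fine-tuned on known insignia.
# "logo_classifier.pt" and the 0.9 threshold are placeholders.

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("logo_classifier.pt"))
model.eval()

def frame_contains_logo(frame: Image.Image, threshold: float = 0.9) -> bool:
    """Flag a frame when the classifier is confident a known logo is present."""
    batch = preprocess(frame).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return probs[0, 1].item() >= threshold
```

In practice such a classifier would run on frames sampled from livestreams and uploads at scale, with flagged matches routed to human reviewers rather than blocked automatically.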
