How X Is Suing Its Way Out of Accountability
The social media giant filed a lawsuit against a nonprofit that researches hate speech online. It’s the latest effort to cut off the data needed to expose online platforms’ failings.
On July 19, Bloomberg News reported what many others have been saying for some time: Twitter (now called X) was losing advertisers, in part because of its lax enforcement against hate speech. Quoted heavily in the story was Callum Hood, head of research at the Center for Countering Digital Hate (CCDH), a nonprofit that tracks hate speech on social platforms. Hood’s work has highlighted several instances in which Twitter allowed violent, hateful, or misleading content to remain on the platform.
The next day, X announced it was filing a lawsuit against the nonprofit and the European Climate Foundation, alleging that their misuse of Twitter data had cost the company advertising revenue. In the lawsuit, X claims that the data CCDH used in its research was obtained with login credentials belonging to the European Climate Foundation, which had an account with the third-party social listening tool Brandwatch. Brandwatch has a license to use Twitter’s data through its API; X alleges that CCDH was not authorized to access that data. The suit also accuses CCDH of scraping Twitter’s platform without authorization, in violation of the company’s terms of service.
X did not respond to WIRED’s request for comment.
“The Center for Countering Digital Hate’s research shows that hate and disinformation is spreading like wildfire on the platform under Musk’s ownership, and this lawsuit is a direct attempt to silence those efforts,” says Imran Ahmed, CEO of the CCDH.
Experts who spoke to WIRED see the legal action as the latest move by social media platforms to shrink access to their data by researchers and civil society organizations that seek to hold them accountable. “We’re talking about access not just for researchers or academics, but it could also potentially be extended to advocates and journalists and even policymakers,” says Liz Woolery, digital policy lead at PEN America, a nonprofit that advocates for free expression. “Without that kind of access, it is really difficult for us to engage in the research necessary to better understand the scope and scale of the problem that we face, of how social media is affecting our daily life, and make it better.”
In 2021, Meta blocked researchers at New York University’s Ad Observatory from collecting data about political ads and Covid-19 misinformation. Last year, the company said it would wind down its monitoring tool CrowdTangle, which has been instrumental in allowing researchers and journalists to monitor Facebook. Both Meta and Twitter are suing Bright Data, an Israeli data collection firm, for scraping their sites. (Meta had previously contracted Bright Data to scrape other sites on its behalf.) Musk announced in March that the company would begin charging $42,000 per month for its API, pricing out the vast majority of researchers and academics who have used it to study issues like disinformation and hate speech in more than 17,000 academic studies.
There are reasons platforms don’t want researchers and advocates poking around and exposing their failings. For years, advocacy organizations have used examples of violative content on social platforms to pressure advertisers to withdraw their support, forcing companies to address problems or change their policies. Without the underlying research into hate speech, disinformation, and other harmful content on social media, these organizations would have little leverage to force companies to change. In 2020, advertisers including Starbucks, Patagonia, and Honda left Facebook after the platform was found to take a lax approach to moderating misinformation, particularly posts by former US president Donald Trump, costing the company millions.
As soon as Musk took over Twitter in late October 2022, he fired many of the staff members responsible for keeping hate speech and misinformation off the platform and reinstated the accounts of previously banned users, including Trump and influencer Andrew Tate, who has been indicted on human trafficking charges in Romania. A study released earlier this year by researchers at the University of Southern California’s Information Sciences Institute, Oregon State University, UCLA, and UC Merced found that hate speech increased dramatically after Musk took the helm. Over roughly the same period, the company saw its advertising revenue cut in half as brands, including General Motors, Pfizer, and United Airlines, fled the platform, apparently concerned about their ads appearing next to misinformation and hate speech.
And this has bothered Musk immensely. On November 4, 2022, he tweeted, “Twitter has had a massive drop in revenue, due to activist groups pressuring advertisers, even though nothing has changed with content moderation and we did everything we could to appease the activists. Extremely messed up! They’re trying to destroy free speech in America.”
PEN America’s Woolery worries that, whether or not X’s lawsuit against CCDH holds water, the cost of fighting it will be enough to intimidate other organizations doing similar work. “Lawsuits like this, especially when we are talking about a nonprofit, are definitely seen as an attempt to silence critics,” she says. “If a nonprofit or another individual is not in a financial position where they can really, truly give it all it takes to defend themselves, then they run the risk of either having a poor defense or of simply settling and just trying to get out of it to avoid incurring further costs and reputational damage.”
But the lawsuit doesn’t just put pressure on researchers themselves. It also highlights another avenue through which it now may be more difficult for advocates to access data: third-party social listening platforms. These companies access and analyze data from social platforms to allow their clients—from national security contractors to marketing agencies—to gain insights into their audiences and target messages.
Tal-Or Cohen Montemayor, founder and executive director of CyberWell, a nonprofit that tracks anti-Semitism online in both English and Arabic, says that in November 2022, shortly after Musk took ownership of the company, CyberWell reached out to Talkwalker, a third-party social listening company, for a subscription that would allow it to analyze anti-Semitic speech on the platform then called Twitter.
Cohen Montemayor says Talkwalker told her the company could not take them on as a client because of the nature of CyberWell’s work. She says it appears that “the existing open source tools and social listening tools are being reserved and paywalled only for advertisers and paid researchers. Nonprofit organizations are actively being blocked from using these resources.”
Talkwalker did not respond to a request for comment about whether its agreements with X prohibit it from taking on organizations doing hate speech monitoring as clients. X did not respond to questions about what parameters it sets for the kinds of customers third-party social listening companies can take on.
According to X’s lawsuit against CCDH, a 2023 agreement between Brandwatch and X outlined that any breach of X data via Brandwatch’s customers would be considered the responsibility of the social listening company. On X competitor Bluesky, Yoel Roth, the former senior director of trust and safety at Twitter, posted, “Brandwatch’s social listening business is entirely, completely, 100% dependent on Twitter data access, so I guess it’s not surprising to see how far backwards they’re bending to placate the company.”
For its part, in a July 20 tweet, Brandwatch referenced the same CCDH report cited in the X lawsuit, saying, “Recently, we were cited in an article about brand relevance that relied on incomplete and outdated data. It contained metrics used out of context to make unsubstantiated assertions about Twitter.”
Brandwatch did not respond to a request for comment.
But CCDH’s Ahmed says the assertion that his organization’s research is based on incomplete data is a way for X to obfuscate problems with its own platform. “Whenever you claim that you’ve found information on there, they just say, ‘No, it’s a lie. Only we have the data. You couldn’t possibly know the truth. Only we know the truth. And we grade our own homework,’” he says.
A representative from another third-party social listening tool that uses X data, who asked to remain anonymous to protect their company from retaliation by X, confirmed to WIRED that companies like theirs are heavily reliant on Twitter/X data. “A lot of the services that are very Twitter-centric, a lot of them are 100 percent Twitter,” they say, noting that Instagram has long since shut down its API, and that conversations on Meta’s platforms tend not to be as public as those on X. “In terms of data, Twitter continues to play a significant role in providing data to analytics companies.” They note that, while X’s new paid-for API has put the squeeze on third-party analytics companies—“it’s basically almost like they’re holding you for ransom”—losing access to X data entirely could kill a company.
They add that they have not seen guidelines that restrict the use of X data for hate speech or advocacy research, but there are specific “know your customer” guidelines that prohibit sharing X data with government agencies without prior permission. On July 31, the day X filed its lawsuit, America First Legal, a right-wing nonprofit led by former Trump appointee Stephen Miller, announced that it had filed Freedom of Information Act (FOIA) requests to examine communications between CCDH and various US government agencies, alleging that the nonprofit is a “coordinator of illegal censorship activities.” (Ahmed says his organization has never coordinated with the US government.) If true, such coordination would seemingly also violate those terms of service.
The X lawsuit also alleges that the CCDH is being funded by X’s competitors as well as “government entities and their affiliates,” but says that “X Corp. currently lacks sufficient information to include the identities of these entities, organizations, and persons in this Complaint.”
Even without lawsuits, there are significant costs for researchers focused on disinformation and hate speech on platforms. Experts who spoke to WIRED worry that the threat of legal action will only add a chilling effect for other organizations studying these issues.
Sasha Havlicek, cofounder and CEO of the Institute for Strategic Dialogue (ISD), a London-based think tank focused on extremism and disinformation, says that after ISD published a report showing that anti-Semitic content had doubled on the platform following Musk’s takeover, the organization experienced a deluge of abusive tweets. “In response, Twitter came out with a thread that got 3 million views or so,” she says. “Musk himself responded with a poop emoji.”
In December, Musk worked with right-wing journalists to release the so-called Twitter Files, a selection of internal documents that seemed to show that pre-Musk Twitter had silenced some conservative users. Some of the documents included the names and email addresses of disinformation researchers at the Stanford Internet Observatory, many of whom were undergraduate students at the time. One former student, who asked to remain anonymous for fear of harassment, says that people whose emails ended up in the Twitter Files have been targets of ongoing harassment for their role in disinformation research.
“Seeing how things have gone, and seeing the possibility of being harassed, has made a lot of people that worked on it very closely to now think twice,” says the former student.
“You have to ask,” says the ISD’s Havlicek. “Who’s the censor now?”
Havlicek says she hopes that the EU’s Digital Services Act (DSA), which will eventually mandate researcher access to data from large social platforms, will serve as a road map for other countries. Another open question is whether there will be legal land mines when data pulled legally by European researchers under the DSA is shared with researchers or advocates outside Europe.
“I was in Brussels a few weeks ago talking to the Digital Services people about how we can use the data that will be made available through the DSA data transparency regime,” says Ahmed. “And when that appears, we will use that in the most effective way possible.”