“Nudify” deepfake bots remove clothes from victims in minutes, and millions are using them
Millions of people are turning normal pictures into nude images using bots on Telegram, and it can be done in minutes.
Journalists at Wired found at least 50 “nudify” bots on Telegram that claim to create explicit photos or videos of people with only a couple of clicks. Combined, these bots have millions of monthly users. Although there is no reliable way to determine how many unique users there are, the figure is appalling, and there are very likely many more such bots than the ones they found.
The history of nonconsensual intimate image (NCII) abuse—as the use of explicit deepfakes without consent is often called—started near the end of 2017. Motherboard (now Vice) found an online video in which the face of Gal Gadot had been superimposed on an existing pornographic video to make it appear that the actress was engaged in the acts depicted. The term “deepfake” comes from the username of the person who claimed responsibility for that video.
Since then, deepfakes have gone through many developments. It all started with face swaps, where users put the face of one person onto the body of another person. Now, with the advancement of AI, more sophisticated methods like Generative Adversarial Networks (GANs) are available to the public.
However, most of the uncovered bots don’t use this advanced type of technology. Some of the bots on Telegram are “limited” to removing clothes from existing pictures, an extremely disturbing act for the victim.
These bots have become a lucrative source of income. Using such a Telegram bot usually requires a certain number of “tokens” to create images. Of course, cybercriminals have also spotted opportunities in this emerging market and are operating bots that are non-functional or that render low-quality images.
Besides being disturbing, using AI to generate explicit content is costly, there are no guarantees of privacy (as we saw recently when AI Girlfriend was breached), and you can even end up infected with malware.
The creation and distribution of explicit nonconsensual deepfakes raises serious ethical issues around consent, privacy, and the objectification of women, to say nothing of its use to create child sexual abuse material. Italian researchers describe explicit nonconsensual deepfakes as a new form of sexual violence, with potential long-term psychological and emotional consequences for victims.
To combat this type of sexual abuse, there have been several initiatives:
- The US has proposed legislation in the form of the DEEPFAKES Accountability Act. Combined with Telegram’s recent policy change to hand over user details to law enforcement when users are suspected of committing a crime, this could slow down the use of the bots, at least on Telegram.
- Some platforms have updated their policies. Google, for example, banned involuntary synthetic pornographic imagery from its search results.
However, so far these steps have shown no significant impact on the growth of the market for NCIIs.
Keep your children safe
We’re sometimes asked why it’s a problem to post pictures on social media that can be harvested to train AI models.
We have seen many cases where social media and other platforms have used the content of their users to train their AI. Some people have a tendency to shrug it off because they don’t see the dangers, but let us explain the possible problems.
- Deepfakes: AI generated content, such as deepfakes, can be used to spread misinformation, damage your reputation or privacy, or defraud people you know.
- Metadata: Users often forget that the images they upload to social media also contain metadata, such as where the photo was taken. This information could potentially be sold to third parties or used in ways the photographer didn’t intend.
- Intellectual property: Never upload anything you didn’t create or don’t own. Artists and photographers may feel their work is being exploited without proper compensation or attribution.
- Bias: AI models trained on biased datasets can perpetuate and amplify societal biases.
- Facial recognition: Although facial recognition is not the hot topic it once was, it still exists. Actions or statements attributed to images of you (real or not) may be linked to your identity.
- Memory: Once a picture is online, it is almost impossible to get it completely removed. It may continue to exist in caches, backups, and snapshots.
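As a practical illustration of the metadata point above, location and camera details travel inside a photo’s EXIF data and can be removed before sharing. Here is a minimal sketch (assuming the third-party Pillow library is installed; the function name is our own) of re-saving an image with pixel data only:

```python
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image keeping only its pixels, dropping EXIF metadata
    such as GPS coordinates, camera model, and timestamps."""
    with Image.open(src_path) as img:
        # Copy pixel data into a fresh image that carries no metadata.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)
```

Many phones and photo tools offer a built-in “remove location info” option that does much the same thing; the point is to check before you upload, not after.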
If you want to continue using social media platforms, that is obviously your choice, but consider the above when uploading pictures of yourself, your loved ones, or even complete strangers.
We don’t just report on threats—we remove them
Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.