Awful 4chan chat bot spouts racial slurs and antisemitic abuse

The creation of a foul-mouthed chat bot called GPT-4chan has reignited the discussion about how we want to use and regulate AI and ML.

“A robot may not injure a human being or, through inaction, allow a human being to come to harm”

Science fiction readers, and many others, will recognize Asimov’s first law of robotics. After reading about a bot called GPT-4chan, I found myself wondering whether we should add:

“A bot may not insult a human being or, through interaction, allow a human being to be discriminated against”

GPT-4chan was based on an AI instance trained using 3.3 million threads from 4chan’s infamously toxic Politically Incorrect /pol/ board. Once trained, the creator released the chat bot back onto 4chan. And, no surprise here, the AI turned out to be just as vile as the posts it was trained on, spouting racial slurs and engaging with antisemitic threads.

While many outside the industry may have found the experiment interesting, AI researchers pointed out that it did not qualify as a serious experiment at all, but rather as an unethical one.

Déjà vu

Reading the above may give some people the feeling they have seen this before. What you may remember is Tay, a Microsoft AI chat bot released on Twitter in 2016 that went rogue in less than 24 hours. The more someone chatted with Tay, Microsoft said, the smarter it would get, learning to engage people through casual and playful conversation.

However, Twitter users quickly proved that artificial intelligence (AI) and machine learning (ML) obey the “garbage in, garbage out” law of computer science: they managed to turn Tay into a racist and a misogynist in less than a day.

GPT-3

The name GPT-4chan was partly based on the Generative Pre-trained Transformer 3 (GPT-3) language model, which uses deep learning to produce human-like text. In January 2022, OpenAI introduced a new version of GPT-3 that is meant to do away with some of the toxicity issues that plagued its predecessor.

Large language models like GPT-3 are trained on vast bodies of text, often scraped from the internet. In those texts the models encounter the best and the worst of what people put into words, so the training material includes toxic language as well as falsehoods. Filtering offensive language out of the training set can make models perform worse, especially where the training data is already sparse. With its new InstructGPT model, OpenAI tries to align language models with user intent across a wide range of tasks by fine-tuning them with human feedback.
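
To make that trade-off concrete, here is a minimal sketch in Python of how naive filtering shrinks a training set. The blocklist, the toy corpus, and the keyword approach are all made up for illustration; real pipelines use learned toxicity classifiers rather than word lists.

```python
# Toy illustration: filtering "toxic" documents out of a training corpus.
# BLOCKLIST and the corpus below are placeholders invented for this sketch.
BLOCKLIST = {"insult", "slur"}  # stand-in for a real toxicity classifier

def is_toxic(text: str) -> bool:
    """Naive keyword check: flag a document if it contains a blocked word."""
    return bool(set(text.lower().split()) & BLOCKLIST)

corpus = [
    "a helpful answer about configuring a firewall",
    "an insult aimed at another forum user",        # dropped by the filter
    "a rare post about a niche, data-sparse topic"  # an example we would hate to lose
]

filtered = [doc for doc in corpus if not is_toxic(doc)]
print(f"kept {len(filtered)} of {len(corpus)} documents")  # kept 2 of 3 documents
```

Every document the filter drops is one less example for the model to learn from, which is exactly why blunt filtering alone is not a satisfying fix for sparse domains.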

Accidental bias

Despite the obvious potential, recent events have exposed how automated systems can lead to bias, both intentionally and unintentionally. For example, women see fewer advertisements about entering science and technology professions than men do. That is not because companies preferentially target men; it is a side effect of the economics of ad sales.

Simply put, when an advertiser pays for digital ads, including postings for jobs in science, technology, engineering, and mathematics, it is more expensive to get female views than male ones. So the algorithm targets men to maximize the number of eyeballs per dollar spent.
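
As a back-of-the-envelope illustration (the cost-per-impression figures below are invented, and real ad auctions are far more complex), a budget-maximizing allocator skews toward whichever audience is cheapest to reach, with no one ever deciding to exclude anyone:

```python
# Toy model of "eyeballs per dollar" ad allocation.
# The cost-per-impression figures are hypothetical, purely for illustration.
COST_PER_IMPRESSION = {"women": 0.05, "men": 0.03}  # dollars per ad view
BUDGET = 300.0  # total ad spend in dollars

def naive_allocation(budget: float) -> dict:
    """Spend the entire budget on the cheapest audience to maximize views."""
    cheapest = min(COST_PER_IMPRESSION, key=COST_PER_IMPRESSION.get)
    return {cheapest: round(budget / COST_PER_IMPRESSION[cheapest])}

def even_allocation(budget: float) -> dict:
    """Split the budget evenly across audiences instead."""
    share = budget / len(COST_PER_IMPRESSION)
    return {aud: round(share / cost) for aud, cost in COST_PER_IMPRESSION.items()}

print(naive_allocation(BUDGET))  # {'men': 10000} -- none of the spend reaches women
print(even_allocation(BUDGET))   # {'women': 3000, 'men': 5000}
```

Nothing in that snippet targets anyone deliberately; the skew falls out of the cost optimization alone, which is what makes this kind of bias so easy to miss.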

Another well-known example is an algorithm that selected new candidates for a job based on the current population of employees. By doing so, the algorithm amplified the outdated pattern that certain jobs are predominantly done by men, or by women.

As AI becomes a mandatory strategic tool across multiple industries, companies that use it need to accept their role and responsibility in reducing the risk and impact of the bias inherent in their products and services.

Regulation

As you may have guessed, my call for regulation was not a novel idea. In 2020, Google CEO Sundar Pichai said he felt that AI needed regulation in order to prevent the potential negative consequences of tools such as deepfakes and facial recognition. In his mind, this was not a conversation to save for tomorrow while AI tools are being built and deployed today. But by nature, laws and regulations are mostly written in response to abuse, rather than as a forward-looking attempt to anticipate what could go wrong.

An ongoing discussion

The responses to the GPT-4chan experiment are another step in an ongoing discussion to determine whether AI and ML are here to save the world or whether they will destroy what’s left of it. This discussion seems pointless. The focus should not be on the product, but on the way in which we use it. As with every new development, we obtain a new tool, which we can wield for good, for evil, or just for profit.

As we pointed out in our 2019 Labs report “When artificial intelligence goes awry: separating science fiction from fact”,

“There’s a crucial period in artificial intelligence’s development—in fact, in any technology’s development—where those bringing this infant tech into the world have a choice to develop it responsibly or simply accelerate at all costs.”

To some, one of the biggest problems with artificial intelligence and machine learning is their impact on the climate. The root of that problem is that many high-profile ML advances simply require a staggering amount of computation, and therefore energy.

On that note, at best the GPT-4chan experiment was a waste of energy producing the kind of garbage that humanity, unfortunately, does not need help with.

Don’t be like GPT-4chan!
