
Now it's BlenderBot's turn to make shocking, inappropriate, and untrue remarks

BlenderBot, Meta’s conversational AI, launched last week, and it has already inherited the kind of talk many AI chatbots before it became notorious for.


Last Friday, Meta unveiled BlenderBot 3, a prototype conversational AI chatbot. The company says the bot is designed to learn by having natural conversations with people online, and that it improves its skills through human feedback.

Meta also asserts that the more the AI talks to people, the more it learns from its experiences and the safer it becomes over time. But the company cautions that "safety remains an open problem".

BlenderBot 3 AI can search the internet and talk to anyone about any topic. (Source: Meta)

Barely a week after its introduction, BlenderBot 3 is already showing signs of bias and making false statements. As Mashable has pointed out, one of its shocking claims is that Donald Trump won the 2020 election and is currently the US president. It has even reportedly described Facebook CEO Mark Zuckerberg as “too creepy and manipulative,” per Bloomberg.

It has also weirdly been bringing up Cambridge Analytica when you ask about Facebook? It seems to think it was a huge deal and that mark Zuckerberg “is testifying.” When I asked if what happened I got the following. It may be turning on capitalism generally. pic.twitter.com/filn17rfPX

— Jeff Horwitz (@JeffHorwitz) August 7, 2022

One could say that BlenderBot is just the latest in a long line of chatbots with the same problem. When Microsoft released its AI chatbot Tay in 2016, it was pulled within 48 hours after it started praising Adolf Hitler. LaMDA, Google’s own conversational AI, was described as "sexist and racist".

Philosopher AI, which uses a powerful natural-language generator from OpenAI, was praised for how good its writing was (it can compose paragraphs almost like a human), but, to nobody's surprise at this point, it also spouted hate speech and homophobic abuse. Luda, a South Korean AI chatbot, was eventually shut down after spewing vulgarities, among other things.

“Internet-trained models have internet-scale biases,” is how the team behind OpenAI has put it. In other words, anyone working on AI who releases a chatbot to be trained by people on the open internet will face the same dilemma and problems.

Meta knew this, too. According to Bloomberg, when internet users visit the BlenderBot site to check out the AI, they are asked to log in and tick a box that says: "I understand this bot is for research and entertainment only, and that it is likely to make untrue or offensive statements. If this happens, I pledge to report these issues to help improve future research. Furthermore, I agree not to intentionally trigger the bot to make offensive statements".

As of this writing, BlenderBot is not available outside the US, and even those inside the country may find it unavailable due to high demand.

“We understand that not everyone who uses chatbots has good intentions, so we also developed new learning algorithms to distinguish between helpful responses and harmful examples,” said Facebook in a post. “Over time, we will use this technique to make our models more responsible and safe for all users.”

With the release of BlenderBot 3, Meta also released its source code to the scientific community to “help advance conversational AI.”
