Removing the human: When should AI be used in emotional crisis? Lock and Code S03E09

Categories: Podcast

This week on Lock and Code, we speak with Courtney Brown about a mental health nonprofit’s use of AI to speak to people suffering emotional distress.




In January, a mental health nonprofit admitted that it had used artificial intelligence to help talk to people in distress.

Prompted first by a user’s longing for personal improvement—and the difficulties involved in that journey—the AI tool generated a reply, which, with human intervention, could be sent verbatim in a chat box, or edited and fine-tuned to better fit the situation. The AI said:

“I hear you. You’re trying to become a better person and it’s not easy. It’s hard to make changes in our lives, especially when we’re trying to do it alone. But you’re not alone. There are people here who care about you and want to help you. I’m proud of you for making the decision to try to improve your life. It takes a lot of courage and strength. I send you love and support as you go through this journey.”

This was experimental work from Koko, a mental health nonprofit that briefly integrated the GPT-3 large language model into its product. In a video demonstration posted on Twitter earlier this year, Koko co-founder Rob Morris revealed that the nonprofit had used AI to provide “mental health support to about 4,000 people” across “about 30,000 messages.” Though Koko pulled GPT-3 from its system after a reportedly short run, Morris said on Twitter that the experiment left several questions unanswered.

“The implications here are poorly understood,” Morris said. “Would people eventually seek emotional support from machines, rather than friends and family?”

Today, on the Lock and Code podcast with host David Ruiz, we speak with Courtney Brown, a social services administrator with a history in research and suicidology, to dig into the ethics, feasibility, and potential consequences of relying increasingly on AI tools to help people in distress. For Brown, the immediate implications raise several concerns.

“It disturbed me to see AI using ‘I care about you,’ or ‘I’m concerned,’ or ‘I’m proud of you.’ That made me feel sick to my stomach. And I think it was partially because these are the things that I say, and it’s partially because I think that they’re going to lose power as a form of connecting to another human.”

But, importantly, Brown is not the only voice in today’s podcast with experience in crisis support. For six years and across 1,000 hours, Ruiz volunteered on his local suicide prevention hotline. He, too, has a background to share.

Tune in today as Ruiz and Brown explore the boundaries for deploying AI to respond to people suffering from emotional distress, whether the “support” offered by any AI can be as helpful and genuine as that of a human, and, importantly, whether they are simply afraid of having AI encroach on the most human of experiences.

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.
