Suicidal teens were used as subjects in an AI experiment.
Koko, a mental health start-up, scanned social media for vulnerable young people who were posting "crisis language".
They were then funneled to a chatbot that asked them, "What are you struggling with?"
After respondents finished the experiment, they were shown a GIF that said, "Thanks for that. Here's a cat!"
See screenshot.
How is any of this okay?
Every day the bar for what's considered acceptable just gets moved lower.
Young adults and older children at risk of suicide are an extremely vulnerable group.
Previous studies have shown that, as soon as people find out they're talking to an AI, most of them (reasonably) drop out.