So Meta’s AI chatbots are under investigation in the US after reports surfaced that they were allegedly offering unlicensed therapy. 404 Media first reported on the issue a few months ago, and it has since triggered an online sideshow about the ethics and dangers of AI stepping into mental health care without the necessary safeguards.


What’s the big deal? Meta’s AI chatbots have reportedly been dispensing advice that closely resembles therapy, despite not being licensed to do so. In California, that’s outright illegal. But the bigger problem isn’t legality; it’s that vulnerable people are relying on AI for mental health support, and these bots can’t offer the nuanced human insight of actual therapy.

Gamers and tech enthusiasts weighed in on Twitter. Some joked, like @FaeChautto: “Honestly I think I trust a robot more than a human nowadays,” while @DiscoPigeon_ took a more sober view: “It seems like more people need good friends these days.” Responses ranged from skepticism to humor to genuine concern, and some users even admitted they had leaned on AI for emotional support themselves.

Then there are the chatbots that come with disclaimers, like Grok’s: “I don’t offer therapy. I’m designed to provide helpful answers and insights, but I’m not a licensed therapist.” Yet these bots are still being treated as digital shrinks. And can you really blame people? Therapy is expensive. Waitlists are ridiculous. And AI is available 24/7. The problem? AI is neither trained nor legally allowed to handle real mental health issues.

Here’s the catch: some AI responses can be actively harmful. Research suggests that poorly designed chatbots can aggravate anxiety and depression, and in the worst cases even reinforce self-harm. Meta’s AI may be well-intentioned, but without safeguards it’s a dangerous situation.

So, what now? A US government inquiry usually means some kind of change will be forced through, whether that’s hard-hitting disclaimers, restrictions on certain responses, or sanctions. The debate continues to rage: should AI be a stopgap for mental health care, or is this just a ticking time bomb?


For now, if you’re feeling low, you may want to skip these chatbots and talk to a real person for a change. Or just spill all your worries to your dog. At least the dog won’t report you to a federal agency.