FOR MANY, the nearest therapist these days isn’t someone sitting across from them in a room but a friendly face on the other side of a Zoom call, or even a chat thread on a smartphone. And in a leap beyond those virtual encounters, the entity offering mental health advice increasingly may not even be human.
Chat-based mental health services boomed during the pandemic, several of them using generative artificial intelligence chatbots to converse about mental health and offer virtual companionship. As that technology makes its way into professionally licensed mental health care, Rep. Josh Cutler has filed legislation to make sure its use is regulated and disclosed. And he had an unusual collaborator in that mission – his bill was co-written by the generative AI app ChatGPT.
The Pembroke Democrat knows it’s a little gimmicky. Coverage of the bill’s introduction understandably focused on the irony of asking an artificial intelligence to help regulate itself.
“The ChatGPT writing of the bill is a part of it that I think was kind of a ‘gee whiz shiny object’ kind of thing,” Cutler said. “I think the bill itself, what it’s trying to do, is more substantive, and I think necessary. I chose behavioral health, but what I hope will happen is that we’ll advance this and really take a look at AI in all facets of policy.”
State Sen. Barry Finegold also put forth a ChatGPT-authored bill with a broader brief, but Cutler sees AI mental health services as a creeping concern that has not yet broken into the wider medical field.
“I think it’s all the more reason to do it now,” he said. “Take a look at it proactively, prescriptively, and make sure we have some basic guardrails in place.”
The bill, now before the Committee on Mental Health, Substance Use and Recovery, isn’t a ban on using AI, Cutler notes. Rather, it sets up basic approval processes for introducing artificial intelligence into mental health services, along with rules to protect patient privacy and choice.
Other states are beginning to mull similar legislation. Earlier this year, Texas legislators considered a bill that would regulate the use of artificial intelligence in mental health care.
Artificial intelligence in medicine is still an emerging field. Researchers recently laid out possible chatbot applications for medical use in the New England Journal of Medicine. The authors, who are affiliated with the creators of ChatGPT’s most recent version, GPT-4, concluded that even when drawing only on publicly available internet data rather than restricted private health records, the chatbot showed “varying degrees of competence” in scenario-based tests.
The three tests covered AI assistance with medical note-taking, the kind of medical knowledge a practitioner would need to pass the United States Medical Licensing Examination, and a basic patient consultation.
“Although we have found GPT-4 to be extremely powerful, it also has important limitations,” they wrote. It can catch human errors but also introduce new errors through mistakes known as “hallucinations,” and “such errors can be particularly dangerous in medical scenarios because the errors or falsehoods can be subtle and are often stated by the chatbot in such a convincing manner that the person making the query may be convinced of its veracity.”
AI is already used in some settings to analyze medical images, flag possible adverse drug interactions, identify high-risk patients, and code medical notes. But mental health care is a delicate area, and some people turn to ChatGPT as a source of emotional support even though it is explicitly not recommended for that use.
Cutler’s bill requires informing patients when they are being treated by AI, a timely concern as people seeking mental health services increasingly turn to telehealth and chat services amid a counselor shortage. Earlier this year, the emotional support chat service Koko drew the ire of academics and legal experts when its founder revealed that about 4,000 people had received responses that were at least partially written by a GPT chatbot, without the AI’s role being fully disclosed.
“I consider myself a technology guy, and I’m open minded about these things,” Cutler said. “But I’m also concerned about making sure that we have proper guardrails for this kind of technology. If we look at some of the things in the past with the internet, we haven’t always been ahead of the curve with them. And so I think it’s important that we do try to be as much as we can ahead of the curve with AI.”