To protect consumers, regulate AI chatbots – and the human professionals using them

A doctor uses an AI program to analyze mammography images (AP / Mary Altaffer)

If you tell ChatGPT you have a stuffy nose and a fever, it offers fairly standard suggestions: rest, hydrate, take Tylenol. It may even go a step further, offering to help you figure out a likely diagnosis if you provide more detail about your symptoms. On-the-go medical advice may be helpful, but some lawmakers are concerned about AI's overconfidence. A new bill proposed by State Senator Kristen Gonzalez (D-NY) would ban AI chatbots from offering advice while impersonating licensed professionals, such as doctors and lawyers. Gonzalez's bill is a worthwhile effort to hold technology companies accountable, but to truly protect consumers, lawmakers should look at how human professionals are using AI in their real-life jobs.

This proposal follows New York's passage of the RAISE Act in December 2025, one of the most comprehensive pieces of state-level AI legislation to date. Addressing concerns that AI chatbots can exacerbate mental distress in vulnerable users, the law requires developers to disclose information about their safety protocols and to promptly report any incidents of harm to the state.

Gonzalez's bill also reflects a concern about AI's impact on vulnerable users. Certainly, some users can easily be deceived by AI, whether because they suffer from mental illness or cognitive impairment or are simply less tech-savvy. Policies should be in place to protect these individuals. Still, few people would believe ChatGPT is anything other than a highly sophisticated technology.

Passing this bill could lead to an onslaught of baseless claims from users who knew ChatGPT wasn’t human but were dissatisfied with its advice. Tech companies could very well push back in court and argue that any reasonable user knows they’re not interacting with a human.

The likelihood that a chatbot fools someone into thinking it is a doctor or lawyer is slim, but the likelihood that a real-life doctor or lawyer is using AI on the job is not. Doctors already regularly use AI scribes to record patient conversations, and lawyers increasingly rely on AI for e-discovery, document review, and administrative work.

In most cases, this use of AI in the workplace probably won’t harm consumers. AI isn’t foolproof, though, and neither are the human professionals using it. Some research suggests that using AI can weaken doctors’ diagnostic ability, and AI has generated fake citations when used for legal research. Notably, a lawyer in California was fined $10,000 for filing a document drafted by ChatGPT that included fake quotations. 

Professionals in every field are using AI to make their jobs simpler and faster, and that trend will likely only intensify. Lawmakers should focus on crafting laws that accommodate this reality while promoting transparency and safeguarding against errors by the professionals who use it.

If Gonzalez and fellow state lawmakers want to protect consumers, they should pass legislation requiring licensed professionals to give consumers clear explanations of how AI is used in their practice. Such laws would standardize transparency protocols and bolster consumer trust. Lawmakers could also require that human professionals thoroughly review all AI-generated output, and allow consumers to hold them accountable if they fail to do so. Gonzalez's bill is a start, but advancing AI safety in licensed professions requires recognizing that AI doesn't always act alone. An effective legislative package would establish accountability for human licensed professionals as well as for the chatbots that impersonate them.

The Zeitgeist aims to publish ideas worth discussing. The views presented are solely those of the writer and do not necessarily reflect the views of the editorial board.