Media & Culture

New York considers bill that would ban chatbots from giving legal, medical advice

Proposed legislation targets AI like ChatGPT and Claude, imposing fines for unlicensed professional advice.

Deep Dive

New York State lawmakers have introduced a groundbreaking bill that seeks to legally restrict artificial intelligence systems from dispensing professional advice in regulated fields. The legislation, proposed by Assemblymember Clyde Vanel, would amend the state's general business law to classify AI-generated legal or medical guidance as a 'deceptive practice' unless the underlying AI system or its operator holds the appropriate state-issued professional license. This move represents one of the first direct attempts by a U.S. state legislature to draw a legal boundary around the capabilities of consumer-facing large language models (LLMs) like OpenAI's GPT-4, Google's Gemini, and Anthropic's Claude 3, which are increasingly used by the public for preliminary queries on health and legal matters.

The bill's language is intentionally broad, covering any 'automated system' that uses data analysis to generate content, specifically naming advice related to 'the practice of law' or 'the practice of medicine.' Violations could result in civil penalties. The legislative push highlights growing governmental concern over the 'black box' nature of AI reasoning and the potential for hallucinations or outdated information in critical domains. It signals a shift from the voluntary safety frameworks proposed by tech companies toward enforceable regulatory action, and would create a compliance challenge for AI developers, who would need to navigate state-by-state rules on output filtering. If passed, the bill could set a precedent for other states, fundamentally shaping how AI assistants are designed and deployed for consumer use.

Key Points
  • Bill classifies AI-generated legal/medical advice as 'deceptive practice' under NY business law, subject to fines.
  • Targets any 'automated system' (e.g., ChatGPT, Claude) unless the system or its operator holds the relevant state-issued professional license.
  • Represents a shift from industry self-regulation to enforceable state law for AI output controls.

Why It Matters

If enacted, the bill would force AI companies to implement stricter guardrails and could fragment U.S. AI regulation along state lines.