Why you probably shouldn't tell a chatbot everything about your health
63% of people find AI health info reliable, but doctors warn it can't diagnose and often gets details wrong.
As tech giants like Microsoft, Google, and OpenAI roll out health-focused AI tools, a significant trust gap is emerging. A new Annenberg Public Policy Center poll found that 63% of respondents find AI-generated health information reliable, even as trust in traditional health agencies declines. Microsoft recently unveiled Copilot Health, a secure tool combining health records and wearable data, while companies like Oura are launching specialized models for areas like women's health. This surge in accessible, always-on medical advice is changing how patients interact with the healthcare system.
However, Dr. Alexa Mieses Malchuk, a family physician interviewed by ZDNET, cautions that these tools have critical limitations. She notes that AI can be excellent for administrative tasks like triaging messages, but it cannot diagnose conditions. The accuracy of an AI's response depends entirely on the quality and completeness of the user's prompt, and most people lack the medical training to spot errors or omissions. Her key advice: use AI chatbots as a 'springboard' for discussion with a primary care physician, not as a final authority, because their responses are 'only as good as the questions we ask.'
- 63% of people in a recent survey find AI-generated health information reliable, highlighting a major shift in trust.
- Doctors warn that AI health tools like Microsoft's Copilot Health cannot diagnose conditions and are prone to errors when user prompts are incomplete.
- Medical professionals recommend using AI as a discussion starter with a real doctor, not as a definitive source for treatment.
Why It Matters
Misplaced trust in AI for critical health decisions can lead to incorrect self-diagnosis and delay proper medical care.