Why People Turn to AI for Health Questions
Before your next doctor’s appointment, you look up your symptoms. Before that, you ask an AI assistant. This sequence is now common, and understandably so. AI tools are available at 3am, don’t require appointments, don’t make you feel judged for asking basic questions, and can explain medical concepts in plain language. These are real advantages, and they explain the rapid adoption of AI for health information.
The problem is that the qualities that make AI health assistants feel trustworthy (confidence, fluency, detail, specificity) are independent of whether the information is actually accurate or appropriate for your situation. Understanding where this goes wrong is not about rejecting AI health tools entirely, but about using them in ways that are genuinely safe.
The Specificity Trap
Medical knowledge is highly context-dependent. The same symptom means something different in a 25-year-old athlete and a 65-year-old with two chronic conditions. Drug interactions depend on the full list of current medications. Dosing recommendations vary by weight, kidney function, and drug metabolism genetics. A diagnosis requires not just symptoms but examination, history, and often tests. An AI that gives you a confident, specific answer about what your symptoms mean or what dose to take is answering a question that cannot responsibly be answered without your specific medical context.
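To make the dosing point concrete, consider kidney function. One standard textbook estimate of creatinine clearance is the Cockcroft–Gault formula, which requires a patient’s age, weight, sex, and a blood test result. The sketch below uses illustrative patient values and entirely hypothetical dose brackets (not from any real drug label) to show how the same dosing question can have different answers for the two patients described above.

```python
def cockcroft_gault_crcl(age_years, weight_kg, serum_creatinine_mg_dl, is_female):
    """Estimate creatinine clearance (mL/min) via the Cockcroft-Gault formula.

    This is the classic textbook formula; real dosing decisions also weigh
    actual vs. ideal body weight, lab assay differences, and clinical judgement.
    """
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if is_female else crcl

def dose_bracket(crcl_ml_min):
    # Hypothetical brackets for an imaginary renally cleared drug --
    # NOT taken from any real drug label.
    if crcl_ml_min >= 60:
        return "standard dose"
    if crcl_ml_min >= 30:
        return "reduced dose"
    return "specialist review needed"

# Same question, two very different patients (illustrative values only).
athlete = cockcroft_gault_crcl(25, weight_kg=80, serum_creatinine_mg_dl=0.9, is_female=False)
older = cockcroft_gault_crcl(65, weight_kg=70, serum_creatinine_mg_dl=1.8, is_female=True)

print(f"25-year-old athlete: CrCl ~{athlete:.0f} mL/min -> {dose_bracket(athlete)}")
print(f"65-year-old with chronic illness: CrCl ~{older:.0f} mL/min -> {dose_bracket(older)}")
```

Here the athlete lands at roughly 142 mL/min (standard dose) and the older patient at roughly 34 mL/min (reduced dose). An AI that never collects these inputs can only guess which bracket the questioner falls into, which is exactly the missing-context problem.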
The AI doesn’t know it doesn’t have the context it needs. It produces a confident, specific answer anyway — because confident, specific prose is what language models are optimised to produce. This is the core risk: not that AI gives obviously wrong answers, but that it gives plausible-sounding answers to questions that require personalised medical knowledge it doesn’t have.
What AI Does Well in Health Contexts
AI tools are genuinely useful for understanding medical concepts in plain language — what a diagnosis means, how a drug class works, what a medical procedure involves, what questions to ask a doctor. For this kind of health literacy use — increasing your understanding so you can participate more effectively in your own healthcare — AI is often excellent.
The appropriate framing is: AI helps you understand; your doctor helps you decide. Using AI to prepare for a medical appointment, to understand information you’ve already received from a healthcare provider, or to learn general background on a condition is low-risk and often valuable. Using AI as a substitute for professional diagnosis or treatment recommendations is where things go wrong.
Misinformation at Scale
Individual errors in AI medical information are one problem. The systemic problem is scale. When millions of people receive slightly inaccurate, context-free, or outdated medical information from AI tools, and some proportion act on it, the population-level health effects could be significant. Healthcare systems are already seeing patients who have self-diagnosed or self-treated based on AI guidance, sometimes correctly and sometimes in ways that delayed necessary care or caused harm.
How to Use AI Health Tools Safely
Use AI to learn, not to decide. Use it to understand what a term means, not to diagnose what you have. Use it to prepare questions, not to replace the appointment where those questions get answered. Treat any specific numerical guidance — doses, ranges, durations — with scepticism and verify with a pharmacist or physician. Remember that your health situation is specific to you in ways an AI cannot account for.
The most dangerous use is also the most common: asking an AI to confirm a self-diagnosis. Language models have a known tendency to be agreeable. They will often validate a hypothesis rather than challenge it, even when the hypothesis is wrong. A doctor is trained to consider alternative diagnoses; an AI trained on human feedback tends toward the responses its raters found helpful, which often means confirming what the person wanted to hear.