ChatGPT’s quiet role as America’s after-hours clinic

1w ago · San Francisco, US · simonwillison.net

  • 70% of healthcare queries happen outside clinic hours
  • 600K weekly messages from ‘hospital deserts’
  • Health insurance questions dominate U.S. AI chats

OpenAI’s Chengpeng Mou didn’t announce a new model or a flashy demo. Instead, he dropped a dataset that reads like a diagnostic report on U.S. healthcare access: 2 million weekly messages about health insurance, 600,000 queries from people in ‘hospital deserts’ (where the nearest ER is a 30-minute drive away), and 70% of conversations happening after clinics close. These aren’t edge cases—they’re the default for a swath of users treating ChatGPT as a triage tool of last resort.

The numbers aren’t surprising if you’ve followed the collapse of rural healthcare or the growth of high-deductible plans. But they’re a Rorschach test for AI’s role in society: Is this a failure of public health infrastructure repackaged as a tech success story, or evidence that LLMs are becoming de facto utilities? The answer depends on who’s asking—OpenAI’s investors, hospital administrators, or the user typing ‘my kid has a 103°F fever and the urgent care is closed’ at 2 a.m.

Mou’s data doesn’t reveal whether these queries are accurate, safe, or actionable—just that they’re happening at scale. That’s the reality gap: ChatGPT isn’t a regulated medical device, but it’s being used like one. The FDA’s hands-off stance on AI triage tools doesn’t help. Neither does the fact that no major EHR provider has integrated LLMs for patient-facing advice—yet here we are, with anonymized logs proving the demand exists.

The numbers reveal a healthcare system’s cracks—not an AI breakthrough


The competitive implications are clearer. For telehealth platforms like Teladoc or Amwell, this is a wake-up call: Their apps require appointments, insurance logins, and often, copays. ChatGPT requires none of those. For insurers, the data is a goldmine—imagine Aetna or UnitedHealthcare training models on these queries to preemptively deny claims or nudge users toward cheaper care options. And for OpenAI, it’s a quiet monetization lever: Healthcare is one of the few sectors where enterprises will pay top dollar for ‘safe,’ compliant LLMs.

Developers, meanwhile, are already reverse-engineering how to replicate these workflows without OpenAI’s API. The Hugging Face community is flooded with fine-tuned biomedical models, but none have the distribution or user trust of ChatGPT. That’s the moat: Brand recognition as a healthcare utility—earned not through accuracy, but through availability.

The real bottleneck isn’t the tech. It’s the liability chasm between ‘here’s some advice’ and ‘you should do this.’ Until that’s bridged, we’re left with a system where the most accessible ‘doctor’ is a model trained on Reddit threads and outdated clinical guidelines.

Here’s the unanswered question: If 600,000 weekly queries come from hospital deserts, how many of those users acted on incorrect advice? OpenAI’s anonymized data won’t say—and neither will the ER doctors treating the fallout.

Tags: ChatGPT · Conversational AI · Language Model Applications