vitlyoshin 8 hours ago
One thing that stuck with me from a recent podcast conversation: the biggest value of AI in healthcare isn’t automation, it’s continuity.

Between appointments, people disappear. They say “I’m fine” when they’re not. Traditional surveys flatten complex human stories into numbers.

AI, when used carefully, can listen to people in their own words and give providers context, not decisions, at exactly the moments when intervention matters most.

The insight wasn’t “AI replaces clinicians,” but rather that AI works best as a signal amplifier, not a decision-maker.

Where do you think the line should be drawn between AI assistance and human judgment?

JohnFen 5 hours ago
> Where do you think the line should be drawn between AI assistance and human judgment?

I 100% don't want genAI to be making any medical decisions, nor do I want a doctor who just accepts what genAI says as reliable fact.

But for me, when it comes to this kind of thing, that's not even the question. There's no chance I'd trust my sensitive personal information to a genAI system in the first place, out of security/privacy concerns. I don't want it to even take notes, because that would require giving it sensitive information.

So I never get far enough for "where is the line" questions to matter to me.