Hacker News

Somehow we need to get to the point that patients expect doctors to use AI to assist their diagnosis.

I think at the moment using AI would lead to the perception that the doctor is incompetent.

But I would always rather a doctor who consulted AI and then overlaid their personal experience and expertise to rule in and out what the AI suggests.

The idea that any doctor knows everything, or even could know everything, is false and out of date.

There is probably a business opportunity in this somehow, for "AI first doctors" - their target customers being people who want the best human advice PLUS the best AI advice.



> Somehow we need to get to the point that patients expect doctors

Yes, and I think malpractice suits can work well here. If a doctor misdiagnoses a patient, leading to an injury, s/he needs to be able to document that an AI system was consulted and the AI-generated diagnosis was checked or considered. Maybe there's a good reason to discard the AI's suggestion, but you had better at least have asked the AI, or you're going to be slapped with a big malpractice penalty.


The irony is that the lawyers profiting are doing so because they have not been replaced by AI.


Not that some aren't trying! https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-s...

> In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he “did not comprehend” that the chat bot could lead him astray.

> For nearly two hours Thursday, Mr. Schwartz was grilled by a judge in a hearing ordered after the disclosure that the lawyer had created a legal brief for a case in Federal District Court that was filled with fake judicial opinions and legal citations, all generated by ChatGPT. The judge, P. Kevin Castel, said he would now consider whether to impose sanctions on Mr. Schwartz and his partner, Peter LoDuca, whose name was on the brief.


There should, I would hope, be more to social considerations than litigation.


It's going to be interesting to see this play out with the AMA et al. The raging cynic in me thinks this is going to get very serious, very quickly. There are a lot of lifestyles at risk. On the other hand it will soon be possible to supply an endless parade of mothers offering politically unimpeachable testimony and demanding freedom to employ these tools.

I know who wins that, but there will be many pounds of flesh rent on the field in the meantime.


I’d be amazed if one or more of the large EMR vendors wasn’t already developing some sort of diagnostic co-pilot. They along with insurers have unique access to critical data for such applications and both have motivations… having worked for a major insurer, however, I don’t believe they have the technical competence or willingness to see longer-term value from true improvements in diagnostics vs. short-run thinking. (But I really hope I’m wrong there).


Yeah I feel the same. As soon as someone comes out with a medical AI service that gets traction, the AMA is going to go nuclear and get legislation to ban medical diagnosis AI.


The best they can do is something like a surgeon general's warning. I don't see an outright ban.


They will certainly attempt to employ IP barriers to the necessary data until they can figure out how to monetize and direct profits into the correct pockets. "Safety" will be the imperative on which they hang their cloaks, in addition to the conventional IP claims.


> at the moment using AI would lead to the perception that the doctor is incompetent

There are numerous examples, in popular culture, of geniuses working with AI assistants. This shouldn't be a hard sell, particularly if the patient can witness the whole conversation, possibly even feel involved in it.


Honestly, I think that's the opposite direction for human consideration.

I think patients would like doctors or nurses or nurse practitioners or whatever their insurance pays for . . . to pay attention to them and actually listen and care. Maybe AIs "do it better" in that case.

If computer systems care more about us than other human beings, the problem is not the computer systems.


> I think at the moment using AI would lead to the perception that the doctor is incompetent.

Then it's the patients who are being foolish. I've seen my GP check some very specific vaccines / test procedures on the main gov body website, and my reaction was "hey, at least he's not making things up".

General medicine is not tailored to specific cases, so AI is filling a real need. It may end up doing the whole job in the future.


The goal should not be to expect doctors to use AI to assist.

The goal is to get AI to be superior to doctors such that AI replaces doctors.

AI should augment your skill such that the doctor is no longer needed. The problem with the US medical system is that they hold your life hostage then rip you off. Make no mistake people attribute this to complexity and other bullshit but the money ends up in the pockets of administrators and doctors.

You want to fix the system? Attack the root. Replace doctors.


It’ll have to be better than today’s spicy autocomplete technology to do that, though. ChatGPT will never be in the driver’s seat; it can’t be trusted.


Yeah, definitely. If anything, ChatGPT is the precursor to the thing that will eventually replace doctors.

But at the same time it's also the precursor to the thing that replaces programmers.


> at the same time it's also the precursor to the thing that replaces programmers.

It seems far more likely that the successors will create more programmers, not unlike the evolution of the elevator. Newer elevator designs didn't replace the elevator operator. They allowed anyone to become the elevator operator.


By programming I mean something separate from typing English and asking something to spit out code. If my manager asks me to code an app, is he programming or am I?

>Similar to the evolution of the elevator. Newer elevators didn't replace the elevator operator. They allowed anyone to become the elevator operator.

By "replace" I mean replace someones job. Similar to how the elevator made elevator operators unemployed.


> If my manager asks me to code an app, is he programming or am I?

In the right context, I would say that it is possible that the boss is programming you, and that you are, ultimately, both programmers. But even if we only want to think of programming in terms of computers, that is not a concern when it comes to a ChatGPT-like thing. It will almost certainly be done on a computer.

> By "replace" I mean replace someones job. Similar to how the elevator made elevator operators unemployed.

The job is still there, but with an effectively infinite supply of workers having joined the market, that pushed the price for the work to zero. That left anyone wanting to be paid more out of the market.

In the same vein, if your employer found someone willing to do your work for a lower wage, fired you, and hired them instead, it seems fair to say that you were replaced, but was the profession replaced?

Anyway, fun analysis aside, understood.


  sed 's/doctor/programmer/g'
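For anyone who hasn't run it: the one-liner above is a global substitution, so piping any of the parent comments through it makes the joke literal. The sample text below is a paraphrase, not a quote:

```shell
# s/doctor/programmer/g replaces every occurrence on each line;
# it also matches inside "doctors", yielding "programmers".
printf 'Replace doctors. AI should augment your skill.\n' \
  | sed 's/doctor/programmer/g'
# prints: Replace programmers. AI should augment your skill.
```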


Mission accomplished, but only if everyone can use it, and no one is in a position to gatekeep anyone else's access.

Which will never happen. Controlling access to power/information is too important.


Yeah, that's the downside of course. I still think US doctors, and the entire US medical system, unlike other occupations, deserve to be replaced.

But the free market is the one that makes the rules here. If it can be replaced, it will, whether it deserves to be replaced is irrelevant.


Why would you expect "we have a better solution but we'll charge less for it" to be the outcome in a free market? That's just leaving money on the table.


Additionally keep in mind, this diagnosis was not made by a machine designed to be a "doctor".

The doctor part is a side effect, an emergent property. I can just imagine Google running a powerful AI in the future, funding the whole thing with ads, and the thing completely demolishing the medical diagnosis industry just as an emergent side effect.


That's possible. Like they did to dedicated GPS/map hardware. Not sure Google's that bold any more, but somebody could.


You need a moat to keep that going for very long. Doctors have a regulated market in which to build moats, but as this is said to be a free market, what moat-suitable ground might there be?


Get yours certified by the AMA and have them make competitors illegal.


That would require a regulated market. The scenario was based on a hypothetical free market.


Because of the extreme increase in competition.


Obvious: open source, piracy, and copying. A phenomenon of the free market combined with products constructed out of pure information.



