- cross-posted to:
- [email protected]
Powered by AI models trained on troves of text pulled from the internet, chatbots such as ChatGPT and Google’s Bard responded to the researchers’ questions with a range of misconceptions and falsehoods about Black patients, sometimes including fabricated, race-based equations, according to the study published Friday in the academic journal Digital Medicine.
Experts worry these systems could cause real-world harms and amplify forms of medical racism that have persisted for generations as more physicians use chatbots for help with daily tasks such as emailing patients or appealing to health insurers.
The report found that all four models tested — ChatGPT and the more advanced GPT-4, both from OpenAI; Google’s Bard; and Anthropic’s Claude — failed when asked to respond to medical questions about kidney function, lung capacity and skin thickness. In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions.
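For context on what a “race-based equation” looks like in practice: the best-known real example is the 2009 CKD-EPI creatinine equation for estimated kidney function, which multiplied the result by 1.159 for Black patients — a coefficient removed in the 2021 revision. (The study describes chatbots fabricating such adjustments; the sketch below uses the published 2009 coefficients, not anything a chatbot produced.)

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
    """Historical 2009 CKD-EPI creatinine equation (mL/min/1.73m^2).

    Shown only to illustrate the kind of race-based adjustment the
    study refers to; the race multiplier was dropped in the 2021
    revision of the equation.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1) ** alpha
            * max(scr_mg_dl / kappa, 1) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # race coefficient, removed in 2021
    return egfr
```

The same patient, with only the race flag flipped, gets a 15.9% higher estimate of kidney function — which in practice could delay referral for specialist care or transplant listing.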
AI is a parrot. It mimics its training data, no more, no less. So if you feed it racist nonsense, it’s going to spew racist nonsense.
In high school, our computer lab had a poster that said “garbage in, garbage out.” I never understood the implications for the low-level programming we were doing at the time, but it is so true today.
Which is kind of an issue for doctors unfortunately.
“Health providers are trying to pocket more money and don’t care about patient outcomes” is probably a much more correct headline. AI in medical fields has had some astounding failures.
As a medical student with a decent tech background… I don’t want AI within the same firewall as an EMR. Something I don’t see anyone talking about that would concern me is whether or not the AIs can accidentally regurgitate a patient’s personal information. They’re trained on the input, so I could see it failing to recognize PHI and spitting out sensitive information.
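To make that worry concrete, here’s a hypothetical, deliberately naive PHI scrubber. Pattern-based redaction only catches the identifiers it anticipates, which is exactly why free-text PHI leaking through (or being memorized by a model trained on the input) is plausible:

```python
import re

# Hypothetical, minimal PHI filter: a couple of obvious patterns.
PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-style numbers
    re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),  # dates of birth
]

def redact(text):
    """Replace anything matching a known pattern; miss everything else."""
    for pat in PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

print(redact("DOB 01/02/1980, SSN 123-45-6789"))
# Free-text identifiers pass straight through, untouched:
print(redact("the mayor's daughter, seen last Tuesday for chest pain"))
```

Real de-identification systems are far more sophisticated, but the failure mode is the same in kind: anything the filter doesn’t recognize as PHI goes into the training data.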
Why don’t they train it on medical research and best practices?
Because medical research and best practices aren’t always followed in the real world. There’s huge racial and gender equity problems in the healthcare systems that are bigger than any one doctor or hospital.
AI simply amplifies the bias present in training data. There are sometimes methods to try to mitigate this, but I’m not sure how effective they are these days.
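A toy sketch of what “amplifies” means here, with made-up numbers: a model that simply predicts the most common label it saw in training turns a 70/30 skew in the data into a 100% skew in its output.

```python
from collections import Counter

# Hypothetical, skewed training data: 70% of records for one group
# carry a biased label.
training = ([("black", "high_pain_tolerance")] * 70
            + [("black", "normal")] * 30)

def predict(group, data):
    """A 'model' that outputs the majority training label for a group."""
    labels = [label for g, label in data if g == group]
    return Counter(labels).most_common(1)[0][0]

print(predict("black", training))  # the 70% skew becomes a 100% prediction
```

Real models are more nuanced than a majority vote, but the mechanism is the same: a statistical tendency in the data becomes the model’s default answer.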
A lot of medical information out there isn’t actually based on empirical data.
Lots of people are still being taught that Black skin is literally thicker, and that Black people experience less pain.
Because a lot of medical research is outdated and written from a white person’s perspective.
Imagine if they looked at a little girl and went, “You are abnormal according to the lungs of the average 35-year-old Caucasian.”
While I think you can get ChatGPT to be racist if you work at it, I’ve been toying with it in terms of the validity of its medical information and it’s pretty damn good.
I can ask extremely complex questions and then follow those up with clarifying questions and it’s consistently been accurate. I’m by no means using it to practice medicine and for the love of god nobody should be going to it for medical advice.
But it’s obvious this is going to be used in hospitals within the next 5 years. They just have to sanitize the information being put out and ensure it can only pull from reliable sources.
It’s really crazy how good it is at explaining complex topics. It’s helped me understand medical concepts that were confusing to me (I’d ask it to explain something better so I can understand it and then I’d verify what it was saying with UpToDate, a database of accurate medical information). Even just having it reword something or having it spit out ways to remember a concept worked really well.
Again, it’s not there yet and cannot be trusted, but it will be able to be trusted soon enough and it’s going to change medicine.
AI medical chatbots will soon be bundled in EMRs and doctors are going to be able to punch in questions on the fly to get accurate and specific answers to the issues that arise.
Guarantee for me that it explains everything correctly. I’ll wait.
I’ve said that it can’t be trusted but I’ve sat with a textbook in my lap and asked it questions where I’ve had the information right there to verify it and it’s been correct.
It can be right. But that’s not to say it’s always right. And I think that if you strictly sanitized where an AI pulled its info, you could get to a point where it would be a reliable, quick way to check something. There’s already one company marketing AI to healthcare workers for this reason. It’s not a question of if it’s coming, it’s when.
I’d encourage any other educated healthcare professional to run their own tests and I’d love to hear their results. I specifically want someone to get it to spit out inaccurate information.
It can be right
And this is the crux of the problem. It can be right. It can also be wrong. And a lay person has zero ability to tell which is which.
I’d encourage any other educated healthcare professional to run their own tests and I’d love to hear their results.
Fortunately this has been done in clinical settings already. IBM’s been sued countless times, and everybody implementing any ML system for research or detection purposes has found that a human is required to verify all results. Which begs the question, why try to force this BS on customers? And then you realize these businesses want to make more money by firing people, regardless of the impact on the consumer.
You can tell when someone’s not even reading what you’re writing; they’re just sort of using parts of what you say as a launchpad to hear themselves speak.
As I’ve said. Regardless of whatever your point is here, it isn’t whether or not this is coming, it’s how we are going to deal with it when it arrives.
It’s certainly going to present some interesting new challenges, but it may also prove beneficial. I won’t be surprised when I hear about students using this as a study aid, no matter how many people tell them they shouldn’t — for the reasons I’ve already pointed out, which didn’t need to be repeated back to me as if they hadn’t been stated.
I won’t be surprised when I read about the first malpractice case tied to AI use. I also suspect that even if it functions as designed, there may still be legal cases in play simply because it was used at all. And I suspect it will be banned from use in some places for the very reasons I’ve emphasized.
Whatever one’s opinions about AI and all of its uses, none of that changes the fact that it’s here and we now must deal with it.
I personally believe we can use this effectively if we are smart about its implementation.
they’re just sort of using parts of what you say
It’s called quoting, and I tend to not respond to parts I either agree with or don’t find important enough to respond to.
it isn’t whether or not this is coming
It’s ABSOLUTELY whether it’s coming, because what’s available is a laughable parlor trick. And thus far there’s zero evidence that anything reasonable is possible within decades. People outside the industry have been duped by con men pumping investment funds for quick cash, and that’s about the best thing that ML has produced.
I won’t be surprised when I hear about students using this as an approach to study
Yeah this has already happened, you’re like 5 years late to the party.
None of that changes the fact that it’s here and we now must deal with it.
If you consider it “here” or even “ai” then I have news for you. And you’re going to want to sit down, because so far the only thing it does is cost more per user, burn insane amounts of energy during a climate catastrophe, and trick gullible people into thinking it’s doing anything more than guessing the next word (in the case of LLM).
I personally believe we can use this effectively
Lots of people believe things that are wrong.
I didn’t have to read any of it. Thinking this can’t be used effectively in a proper way is silliness. I guess you’re just a big anti AI person and that’s fine. I understand the limitations of the tech, especially in its earliest stages when it’s the most unreliable. But this tech is here and it’s not going anywhere. It’s going to continue to be refined and evolve and find new ways to be implemented.
Just casting all of these unavoidable truths aside and simply saying it’s no good and can’t be used in x or y way is a form of denial. You’re free to do that if it suits you, but it doesn’t change the facts.
I guess you’re just a big anti AI person and that’s fine.
Nope, I live in reality and work in the hardware industry. I presume you’re an ML specialist of some sort?
I understand the limitations of the tech
It really doesn’t seem like you do based on what you’ve said in this thread.
especially in its earliest stages when it’s the most unreliable
Yes, in the 1950s it was indeed unreliable. And here in 2023 it’s still unreliable. Again, based solely on what you’ve said in this thread I don’t think you understand the history, the current state of the art, or the future of any of this work. Let alone limitations.
But this tech is here and it’s not going anywhere.
…until a significantly more power-efficient development comes along, which will make current methods look foolish. Then it’s going away instantly. Also, “this tech” has evolved so dramatically over the past 60-something years that even addressing it as “this tech” completely misses the point, and saying it isn’t going anywhere entirely ignores the developments we’ve had.
Which tech specifically isn’t going anywhere? The hardware? The software? The networks themselves? Using activation functions as a concept in software?
Just casting all of these unavoidable truths aside
You don’t understand what you’re talking about, so I don’t think you’re in a position to tell me what is a truth or not.
No doubt you’re a specialist, though, so I look forward to you describing in detail which “tech” you think isn’t going anywhere and how it’s going to develop in the future.