Can we trust ChatGPT to get the basics right? – The Health Care Blog



Health Tech

by MATTHEW HOLT

Eric Topol has a piece today in his excellent newsletter Ground Truths about AI in medicine. He refers to the paper he and colleagues wrote in Nature about Generalist Medical Artificial Intelligence (GMAI, the medical version of generalist AI). It covers the latest in LLMs (Large Language Models), which differ from previous AI systems that were each focused on a single problem; in medicine, that mostly meant radiology. Now you can feed in different types of information and get many different kinds of answers.

Eric & colleagues concluded their paper with this statement: “Ultimately, GMAI promises unprecedented possibilities for healthcare, supporting clinicians amid a range of essential tasks, overcoming communication barriers, making high-quality care more widely accessible, and reducing the administrative burden on clinicians to allow them to spend more time with patients.” But he does note that “there are striking liabilities and challenges that have to be dealt with. The “hallucinations” (aka fabrications or BS) are a major issue, along with bias, misinformation, lack of validation in prospective clinical trials, privacy and security and deep concerns about regulatory issues.”

What he’s saying is that there are unexplained errors in LLMs, and therefore we need a human in the loop to make sure the AI isn’t getting things wrong. I had a striking example of this myself, on a topic that required nothing more than simple calculation over a well-published set of facts: I asked ChatGPT (version 3, not 4) about the historical performance of the stock market. Apparently ChatGPT can pass the exams required to become a doctor. But had it answered a clinical question with the same level of accuracy it showed here, I would be extremely concerned!
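For context, the kind of calculation at issue here is trivially checkable by hand or with a few lines of code. A minimal sketch in Python, using hypothetical figures (not actual market data), of the compound annual growth rate arithmetic that underlies "historical stock market performance" questions:

```python
# Illustrative sketch only: the figures below are hypothetical, not real
# market data. The point is that this arithmetic is easy to verify,
# so an LLM answer about it can be checked independently.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over a holding period."""
    return (end_value / start_value) ** (1 / years) - 1

# e.g. a hypothetical index rising from 1,000 to 4,000 over 30 years
rate = cagr(1_000, 4_000, 30)
print(f"CAGR: {rate:.2%}")  # prints "CAGR: 4.73%"
```

A human in the loop can run exactly this kind of sanity check against an LLM's claimed returns before trusting them.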


The brief video of my use of ChatGPT for stock market “research” is below:
