Chatbots Make Terrible Doctors, New Study Finds
“Chatbots may be able to pass medical exams, but that doesn’t mean they make good doctors, according to a new, large-scale study of how people get medical advice from large language models. The controlled study of 1,298 UK-based participants, published today in Nature Medicine from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford, tested whether LLMs could help people identify underlying conditions and suggest useful courses of action, like going to the hospital or seeking treatment.”
AI lacks any real context for a situation. The more I've used chatbots for technical issues, the more I've seen how limited they really are. They will doggedly go down deep holes without ever looking outside the box. They will grasp at all sorts of solutions for a GitHub project's application without even realising that the newest issue logged is in fact a bug waiting to be fixed. I could go on and on; I've lost count of the apologies I've extracted from AI for its real stupidity.
Used as an assistant to a thinking human being, AI is fine, but never follow its advice blindly. It is great for answering questions, but the more you use it, the more its limitations surface. It can save time, but it can also waste a lot of time, and even do damage. Most recently, a chatbot gave me a command to remove the history of a file I had updated on GitHub, but the command also deleted the file itself. I got what I had asked for, yet I did not ask for the file to be deleted, and the chatbot never mentioned that possibility.
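For what it's worth, here is a plausible reconstruction of that gotcha. The post doesn't say which command the chatbot gave; `git filter-branch` and the filename `notes.txt` are my assumptions. The trap is real, though: purging a file from history also removes it from the working tree, because filter-branch checks out the rewritten HEAD when it finishes.

```shell
set -e

# Hypothetical demo repo with one tracked file, notes.txt.
repo="$(mktemp -d)"
cd "$repo"
git init -q .
echo "draft" > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "add notes"

# Safeguard the chatbot never mentioned: keep an untracked backup first.
cp notes.txt notes.txt.bak

# Purge notes.txt from every commit. Side effect: filter-branch updates the
# working tree to the rewritten HEAD, so the file vanishes from disk too.
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f \
  --index-filter 'git rm --cached --ignore-unmatch notes.txt' HEAD
```

After this runs, `notes.txt` is gone from both the history and the working directory; only the untracked `notes.txt.bak` copy survives. (The git project itself now recommends `git filter-repo` over `filter-branch` for history rewrites, and it removes the file from the working tree in exactly the same way.)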
So, just as a lathe would be a tremendous tool in a skilled technician's hands, giving me a lathe to use would likely do more harm than good.
See
https://www.404media.co/chatbots-health-medical-advice-study

#technology #medical #healthcare #AI