Interesting article about hallucinations in LLMs.
I don't know whether the researchers mistakenly anthropomorphize the behaviour of LLMs or whether they just chose this kind of verbiage to please the journalists.
"Unlike human intelligence, it lacks the humility to acknowledge uncertainty," said Neil Shah, VP for research and partner at Counterpoint Technologies. "When unsure, it doesn't defer to deeper research or human oversight; instead, it often presents estimates as facts."
LLMs do not know anything. They produce statistically plausible output from the patterns in their training data, and statistics always has outliers!
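A toy sketch of what "statistically plausible" means here (my own illustration with a made-up token distribution, nothing from the article): the model only assigns probabilities to possible continuations, and sampling can occasionally pick a low-probability outlier that still reads like a confident statement of fact.

    # Hypothetical next-token distribution for the prompt "The capital of Australia is"
    import random

    next_token_probs = {
        "Canberra": 0.80,    # correct
        "Sydney": 0.15,      # plausible-sounding but wrong
        "Melbourne": 0.05,   # also wrong
    }

    tokens, weights = zip(*next_token_probs.items())
    # Sample one continuation: most runs give "Canberra", but an outlier like
    # "Sydney" comes out with real probability, stated just as confidently.
    print(random.choices(tokens, weights=weights, k=1)[0])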
#wissenschaft #computer #internet #KI #AI #LLM #hallucinations