We have to stop ignoring AI’s hallucination problem
(www.theverge.com)
Because humans can introspect, thinking and reflecting on our own knowledge against the perceived expertise and knowledge of other humans. There's nothing in LLMs capable of doing this. An LLM cannot assess its own state, and even if it could, it would have nothing to contrast it with. You cannot develop the concept of ignorance without an other to interact and compare with.