I'd like a better explanation of the "hallucination" issue with LLM-based chatbots. I've read that researchers are having a tough time figuring out how to solve it, either because they don't fully understand why it happens or because it is inherent in the way they work. Where do we stand on this? Will trust in AI be limited to our level of trust in the Internet at large?
Great question, thanks. I hope we can properly answer it in the coming year!