Last week OpenAI published a report detailing why AI models hallucinate. The researchers argue that hallucinations happen not because the model is "broken" or the math behind it is wrong. Instead, they are a predictable, systemic outcome of how we train and, more importantly, how we test these systems. In short, the models have been taught that it's better to guess than to admit they don't know the answer.
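To see the incentive at work, consider a benchmark graded purely on accuracy: a correct answer earns 1 point and everything else, including "I don't know," earns 0. Here is a minimal sketch of that expected-score math; the 20% confidence figure is hypothetical, chosen only to illustrate the point.

```python
# Expected score per question under accuracy-only grading:
# a guess earns its probability of being right; abstaining always earns 0.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected accuracy score for a single question."""
    return 0.0 if abstain else p_correct

# Suppose the model is only 20% confident in its best guess.
p = 0.20
print(expected_score(p, abstain=False))  # 0.2 -> guessing is rewarded
print(expected_score(p, abstain=True))   # 0.0 -> honesty scores nothing
```

Under this scoring rule, a guess at any nonzero confidence strictly beats abstaining, so a model optimized against such benchmarks learns to always produce an answer.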