- A new OpenAI study based on its SimpleQA benchmark shows that even the most advanced AI language models fail more often than they succeed at answering factual questions, with OpenAI's best model achieving only a 42.7% success rate.
- The SimpleQA test contains 4,326 questions across science, politics, and art, with each question designed to have one clear correct answer. Anthropic's Claude models scored lower than OpenAI's, but the smaller Claude models more often declined to answer when uncertain, which is preferable to confidently giving a wrong answer.
- The study also shows that AI models significantly overestimate their capabilities, consistently giving inflated confidence scores. OpenAI has made SimpleQA publicly available to support the development of more reliable language models.
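For intuition, here is a minimal, hypothetical sketch of the bookkeeping a benchmark like this implies: given a set of question-and-answer attempts, measure accuracy, how often the model declines to answer, and how far its stated confidence exceeds its actual hit rate. The `Attempt` fields and the exact-string grading are illustrative assumptions, not SimpleQA's real grading pipeline.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    question: str
    correct_answer: str
    model_answer: str          # "" or "I don't know" if the model declined
    stated_confidence: float   # model's self-reported confidence, 0.0 to 1.0

def summarize(attempts: list[Attempt]) -> dict[str, float]:
    """Compute accuracy, abstention rate, and a simple calibration gap."""
    answered = [a for a in attempts
                if a.model_answer.strip()
                and a.model_answer.strip().lower() != "i don't know"]
    # Illustrative exact-match grading; a real benchmark uses a more robust grader.
    correct = [a for a in answered
               if a.model_answer.strip().lower() == a.correct_answer.strip().lower()]

    accuracy = len(correct) / len(attempts) if attempts else 0.0
    abstention_rate = 1 - len(answered) / len(attempts) if attempts else 0.0
    # Calibration gap: how much average stated confidence exceeds the
    # fraction of answered questions that were actually correct.
    accuracy_when_answered = len(correct) / len(answered) if answered else 0.0
    mean_confidence = (sum(a.stated_confidence for a in answered) / len(answered)
                       if answered else 0.0)
    return {
        "accuracy": accuracy,
        "abstention_rate": abstention_rate,
        "calibration_gap": mean_confidence - accuracy_when_answered,
    }

# Example: a model that answers confidently but is wrong half the time it answers.
attempts = [
    Attempt("Capital of Australia?", "Canberra", "Canberra", 0.9),
    Attempt("Year the Eiffel Tower opened?", "1889", "1887", 0.9),
    Attempt("Smallest prime number?", "2", "I don't know", 0.3),
]
print(summarize(attempts))
```

A positive calibration gap (here 0.4) is the kind of overconfidence the study describes: the model's stated confidence runs well ahead of how often it is actually right.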