Google's DeepMind Wins Gold at IMO with Advanced AI, Outshines OpenAI

Google's DeepMind achieved a significant milestone at the International Mathematical Olympiad (IMO), earning a gold medal with Gemini Deep Think, an AI model designed to solve complex mathematical problems. Unlike OpenAI, which graded its own answers, DeepMind adhered to the official IMO grading rules, ensuring a fair and transparent evaluation.
In this year's competition, Deep Think solved five of the six problems, enough to reach the gold medal threshold. Last year, DeepMind's AI earned a silver medal by solving four problems. The 2025 model represents a paradigm shift: it works end-to-end in natural language, without requiring the problems to be translated into specialized formal formats.
Deep Think's success stems from advanced reinforcement learning techniques focused on long-form reasoning, allowing the model to work through each step of a proof. This approach let the AI demonstrate human-like critical thinking and adaptability, even on questions requiring insights beyond the competition's intended scope.
One notable achievement was Deep Think's solution to a challenging problem using only elementary number theory, sidestepping unnecessarily complex machinery. It fell short, however, on the hardest problem, which only five human participants answered correctly.
Google said the IMO-tuned version of Deep Think will first be made available to trusted testers, including mathematicians, and eventually to Google AI Ultra subscribers. DeepMind plans to refine the model further, aiming for a perfect score in next year's competition.
The IMO remains a unique challenge for AI, requiring mastery of multiple mathematical disciplines. DeepMind's transparent approach underscores its commitment to advancing AI capabilities while maintaining integrity in competitive environments.
Published: July 21, 2025