Google DeepMind announced that its Gemini Deep Think system achieved the gold medal standard at the 2025 International Mathematical Olympiad. The model solved five of the six problems, spanning algebra, combinatorics, geometry, and number theory, scoring 35 out of a possible 42 points. All solutions were written in natural language and verified by official IMO judges, who described them as clear and easy to follow. Gemini Deep Think ran in a special mode that considers multiple hypotheses in parallel before settling on a final answer, and it completed the problems within the standard competition time limit without external tools.
The DeepMind team trained the model with reinforcement learning and gave it access to a curated set of solutions from previous olympiads, along with general guidance on how to approach such problems. This allowed Gemini to explore different solution paths and combine them into a complete proof. DeepMind plans to let the mathematical community test this version of the model before making it available to Google AI Ultra subscribers.
Meanwhile, OpenAI announced that its experimental language model also reached gold-medal level at the IMO, solving five of the six problems under standard competition conditions. The model worked without internet access or external tools, and its written proofs were evaluated by former olympiad medalists. According to OpenAI researchers, the model was not trained specifically for the IMO but was developed as a general-purpose reasoning system designed to sustain performance over many hours.
The success of both companies shows that modern language models can solve difficult mathematical problems using natural language alone, without specialized software tools. For users, this means such AI can already sustain long stretches of reasoning, produce complete proofs, and serve as a tool for researchers, educators, and math enthusiasts. DeepMind plans to keep improving Gemini for future olympiads and for broader applications.