
After a mathematics win in July, Gemini 2.5 Deep Think has now earned a gold-medal-level performance in competitive coding.
The International Collegiate Programming Contest (ICPC) is the “oldest, largest, and most prestigious algorithmic programming competition at the college level,” which Google notes is a “step up in educational level” from the high school-level International Mathematical Olympiad. Students from nearly 3,000 universities across 103 countries aim to get to the final round.
Participants in the World Finals have five hours to tackle 12 real-world coding problems. Teams are ranked first by the number of problems solved and then by total time in minutes, and only fully correct solutions count.
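The ranking described above can be sketched in a few lines. This is a minimal, hypothetical illustration of standard ICPC scoring (most problems solved wins; ties broken by lower total time, where each solved problem contributes its solve minute plus a 20-minute penalty per rejected attempt); the team names and results are invented:

```python
# Each team's solved problems: list of (solve_minute, rejected_attempts).
def penalty(solves):
    # Total time: solve minute plus 20 penalty minutes per rejected
    # attempt on problems that were eventually solved.
    return sum(minute + 20 * wrong for minute, wrong in solves)

def rank_key(solves):
    # More problems solved is better; among ties, lower penalty wins.
    return (-len(solves), penalty(solves))

teams = {
    "A": [(30, 0), (75, 1), (120, 0)],   # penalty 30 + 95 + 120 = 245
    "B": [(25, 0), (60, 0), (150, 2)],   # penalty 25 + 60 + 190 = 275
}
ranking = sorted(teams, key=lambda t: rank_key(teams[t]))
```

Both hypothetical teams solve three problems, so team A wins on lower total time.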
Four teams (out of 139) won gold medals this year, with an “advanced version” of Gemini 2.5 Deep Think — competing “live in a remote online environment following ICPC rules under the guidance of the competition organizers” — solving 10 of the 12 problems for a total time of 677 minutes. This involved code execution and “utilizing a wide variety of advanced data structures and algorithms.”
- “Gemini solved eight problems within just 45 minutes and two more problems within three hours”
- “…Gemini 2.5 Deep Think would rank 2nd overall, if compared with the human teams in the competition.”

Google is particularly highlighting how Gemini solved a problem (C) that no human team in the contest solved.
Problem C required distributing liquid through a network of interconnected ducts to a set of reservoirs, with the goal of finding a configuration of the ducts that fills all the reservoirs as quickly as possible. Because each duct may be open, closed, or even partially open, there are infinitely many possible configurations, making the search for the optimal one very difficult.
Gemini was able to find an effective solution with a clever insight: It first assumed each reservoir has a “priority value”, representing how much each reservoir should be favored compared to the others. Then, given a set of priority values, the best configuration of the ducts can be found using a dynamic programming algorithm. Gemini further applied the minimax theorem and discovered that the original problem can be approached by finding the priority values that make the resulting flow most constrained. Leveraging the relationship between priority values and optimal flows, Gemini utilized nested ternary search to quickly find optimal priority values in the bowl-like convex solution space, and solved Problem C.
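The last step above — nested ternary search over a convex (“bowl-like”) space — can be sketched in isolation. This is not Gemini’s actual code or the real Problem C objective; it is a generic illustration of the technique, using an invented two-variable convex function in place of the duct-flow objective:

```python
def ternary_search(f, lo, hi, iters=100):
    # Minimize a unimodal (e.g. convex) function f on [lo, hi] by
    # repeatedly discarding the third that cannot contain the minimum.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def nested_ternary_search(g, lo, hi):
    # Two nested searches for a function g(x, y) convex in each variable:
    # for each candidate x, the inner search finds the best y, and the
    # outer search minimizes the resulting one-dimensional function of x.
    def best_over_y(x):
        y = ternary_search(lambda y: g(x, y), lo, hi)
        return g(x, y)
    x = ternary_search(best_over_y, lo, hi)
    y = ternary_search(lambda y: g(x, y), lo, hi)
    return x, y
```

In the contest setting, the searched variables would be the reservoir priority values, and each evaluation of the objective would run the dynamic-programming step to find the best duct configuration for those priorities.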
Google ultimately credits a “series of breakthroughs” across pre-training, post-training, novel reinforcement learning techniques, multi-step reasoning, and parallel thinking that let Gemini “explore different ways of solving complex problems, verifying solutions, and continuously iterating before responding.”
Google explains: “For example, during the course of reinforcement learning, we trained Gemini to reason and generate code for some of the most difficult problems coders have faced, to learn from feedback on its results and evolve its approaches. To tackle a problem, multiple Gemini agents each propose their own solutions using terminals to execute code and tests, and then iterate the solutions based on all of the attempts.”
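The propose-execute-iterate loop described above can be sketched abstractly. Everything here is a hypothetical stand-in — `propose` substitutes a trivial deterministic function for a model call, and `passes_tests` substitutes a fixed check for actually executing code against the judge — but the control flow mirrors the description: several agents propose in parallel, candidates are tested, and all attempts feed back into the next round:

```python
def propose(agent_id, prior_attempts):
    # Stand-in for an agent generating a candidate solution; a real agent
    # would be a model call conditioned on the problem and prior_attempts.
    return agent_id + len(prior_attempts)

def passes_tests(candidate):
    # Stand-in for running the candidate's code against the tests.
    return candidate == 7

def solve(num_agents=4, max_rounds=10):
    attempts = []
    for _ in range(max_rounds):
        # Each agent proposes in parallel, seeing all earlier attempts.
        candidates = [propose(i, attempts) for i in range(num_agents)]
        for c in candidates:
            if passes_tests(c):
                return c
        attempts.extend(candidates)  # shared feedback for the next round
    return None
```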
The “lightweight version” of Deep Think available in the Gemini app today (with the $249.99-per-month Google AI Ultra plan) remains unchanged. Ultimately, Google says “these breakthroughs in competitive programming and mathematical reasoning demonstrate Gemini’s profound leap in abstract problem-solving — marking a significant step on our path toward artificial general intelligence (AGI).”
As Google puts it, solving complex tasks at these competitions “requires deep abstract reasoning, creativity, the ability to synthesize novel solutions to problems never seen before and a genuine spark of ingenuity.”