From Claude Shannon's 1950 paper to Stockfish 18 — how silicon learned to outplay humanity, and what it means for artificial intelligence.
From a theoretical paper to superhuman intelligence in 76 years. Every milestone that mattered.
Foundational Theory: Claude Shannon publishes "Programming a Computer for Playing Chess," proposing two strategies: Type A (brute-force exhaustive search) and Type B (selective, human-like pruning). He estimates the game-tree complexity at ~10^120 positions, showing that a brute-force solution of chess is impossible. The paper becomes the foundational blueprint for all chess programming.

First Algorithm: Alan Turing creates the first chess-playing algorithm, though no computer is powerful enough to run it; he hand-simulates it on paper. Dietrich Prinz programs the Ferranti Mark 1 at Manchester to solve mate-in-two problems by exhaustive search.

First Complete Engine: Alex Bernstein creates the first fully functional chess engine that can play a complete game from start to finish, with each move taking approximately 8 minutes to compute. The program uses Shannon's Type B selective search.

Tournament Play Begins: MIT's Richard Greenblatt writes Mac Hack VI, the first program to play in a human tournament (1966 Massachusetts Amateur, rated 1243). In 1967, it becomes the first program to beat a human in tournament play. Meanwhile, the Soviet ITEP program defeats the American Kotok-McCarthy program 3-1 in a historic East-West computer chess match played over nine months.

Competitive Era Begins: Northwestern University's Chess 4.x series (Slate, Atkin, Gorlen) dominates the US computer chess championships, winning 1970-1973 and 1975. Chess 4.0 (1973) pioneers full-width Type A search, abandoning selective search. In 1974, the Soviet program Kaissa becomes the first World Computer Chess Champion. By 1977, Chess 4.6 defeats US Champion GM Walter Browne in a simultaneous exhibition.

Hardware Revolution: Ken Thompson and Joe Condon at Bell Labs build Belle, pioneering custom chess hardware with dedicated boards for move generation and position evaluation and microcoded alpha-beta pruning. Belle wins five ACM championships and the 1980 World Computer Chess Championship. In 1983, it achieves a USCF Master rating of 2250, the first machine to reach that level. Its architecture directly inspires the designs that become Deep Blue.

Grandmaster Barrier Broken: At Carnegie Mellon, Feng-hsiung Hsu, Thomas Anantharaman, and Murray Campbell build ChipTest (1986) and evolve it into Deep Thought (1988). Deep Thought becomes the first computer to defeat a grandmaster (Bent Larsen, 1988) and wins the 1989 World Computer Chess Championship 5-0. It falls to Kasparov in a two-game exhibition match, however, proving the world champion is still beyond reach.

World Champion Falls: IBM sponsors the team and builds Deep Blue: an RS/6000 SP supercomputer with 30 PowerPC 604e processors and 480 custom VLSI chess chips, evaluating 200 million positions per second. In 1996, Kasparov wins the match 4-2, but Deep Blue takes Game 1, the first time a reigning world champion loses to a computer under tournament conditions. In the 1997 rematch, an upgraded Deep Blue wins 3.5-2.5, changing history forever.

Open Source Revolution: Tord Romstad creates Glaurung, an open-source chess engine (2004). In 2008, Marco Costalba forks it and creates Stockfish, so named because the code was "produced in Norway and cooked in Italy." The first version is released November 2, 2008. Stockfish adopts alpha-beta search with a hand-crafted evaluation, the gold standard of conventional engines.

Community-Driven Development: Stockfish launches Fishtest, a distributed testing framework where volunteers donate CPU time. Using sequential probability ratio testing across thousands of games, the community can rigorously test every proposed improvement. In its first year, Fishtest helps Stockfish gain 120 ELO points, propelling it to the top of all major rating lists. By 2026, Fishtest has accumulated 19,900+ years of CPU time and 9.9 billion games played.

Paradigm Shift: DeepMind unveils AlphaZero, which teaches itself chess from scratch in four hours using self-play reinforcement learning: no opening book, no endgame tables, no human knowledge. Trained on 5,000 TPUs and combining Monte Carlo tree search with deep neural networks, it defeats Stockfish 8 with +28 -0 =72. Its "alien" playing style, sacrifice-heavy, positionally profound, and deeply creative, stuns the chess world. Published in Science (Dec 2018).

Open-Source AlphaZero: Gary Linscott (a Stockfish developer) launches Leela Chess Zero, an open-source reimplementation of AlphaZero's approach. The project uses distributed volunteer computing: thousands of contributors generate self-play games to train a shared neural network. By 2026, Lc0 has played over 2.5 billion self-play games. In 2022, Lc0 transitions from convolutional networks to a transformer architecture, gaining ~300 ELO in raw policy strength.

NNUE Born: Yu Nasu publishes the NNUE (Efficiently Updatable Neural Network) architecture for the shogi engine YaneuraOu (developed by Motohiro Isozaki). The key insight: a neural network whose inputs change minimally between evaluations (moving one piece changes only a few entries), enabling CPU-efficient incremental computation at millions of evaluations per second. The name is a Japanese wordplay on "Nue," a mythical chimera.

Hybrid Revolution: After Hisayori "Nodchip" Noda ports NNUE to Stockfish in early 2020, Stockfish 12 officially incorporates it. The result: +100 ELO in one month, roughly two years of normal improvement compressed into a single architectural change. Stockfish now combines its battle-tested alpha-beta search with a neural-network evaluation, creating a hybrid that outperforms both pure classical and pure neural approaches.

MCTS + NNUE Hybrid: Komodo releases Dragon 1.0, integrating NNUE with both alpha-beta and MCTS search modes. Designed by Larry Kaufman and Mark Lefler (continuing Don Dailey's legacy after his death in 2013), Dragon uses a neural network trained with supervision from GM Kaufman plus reinforcement learning from billions of positions. It offers a unique hybrid that can switch between traditional and Monte Carlo search.

Unstoppable: Stockfish 16 (+50 ELO over SF15), 16.1, 17 (+46 ELO over SF16), and 17.1 continue the relentless climb. The engine wins every TCEC and CCC superfinal from Season 20 onward. Neural-network training shifts to automated, reproducible pipelines using over 100 billion positions of Lc0 evaluation data. CCRL ratings push past 3,700.

Current State of the Art: Released January 31, 2026, Stockfish 18 introduces the SFNNv10 architecture with "Threat Inputs" for natural threat perception, "Shared Memory" for multi-process efficiency, "Correction History" for dynamic evaluation adjustment, and removal of the 1024-thread limit. It gains +46 ELO over Stockfish 17, winning four times as many game pairs as it loses. The strongest chess entity ever created.

From 1,200 to 3,700+ in six decades. The chart below shows the approximate peak engine ELO by era, illustrating the exponential ascent of machine chess strength.
Note: ELO values are approximate and vary by rating list (CCRL, CEGT, TCEC, etc.) and time control. Early engine ratings use USCF scale; modern ratings are from CCRL 40/15. Direct comparisons across eras are imprecise due to differing hardware and opponents.
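To interpret what a rating gap means in practice, the standard Elo model maps a rating difference to an expected score. A quick illustrative sketch (the generic formula, not tied to any particular engine or rating list):

```python
def expected_score(rating_diff: float) -> float:
    """Expected score (win = 1, draw = 0.5) for the higher-rated side
    under the standard Elo logistic model."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

# A +46 Elo edge corresponds to roughly a 56.6% expected score.
print(round(expected_score(46), 3))  # 0.566
```

At engine level, most of that edge shows up as extra draws plus a thin margin of decisive games, which is why superfinals are scored in game pairs.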
In December 2017, DeepMind changed everything. A program that knew nothing about chess — not even the value of a queen — taught itself to play at a superhuman level in four hours.
How a neural network architecture from Japanese chess (shogi) transformed Stockfish overnight and created the dominant hybrid approach.
Three paradigms, one game. How the major approaches to chess engine design differ in philosophy and implementation.
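The core of the classical paradigm, minimax search with alpha-beta cutoffs, fits in a few lines. This is a toy sketch over explicit game trees supplied by the caller, not Stockfish's implementation:

```python
def alphabeta(node, depth, alpha, beta, maximizing, evaluate, children):
    """Minimax with alpha-beta pruning. `evaluate` scores leaf positions;
    `children` returns successor positions (both supplied by the caller)."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False,
                                       evaluate, children))
            alpha = max(alpha, best)
            if alpha >= beta:  # opponent already has a better option: prune
                break
        return best
    else:
        best = float("inf")
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True,
                                       evaluate, children))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Toy tree: nested lists are internal nodes, numbers are leaf scores.
# Value is max(min(3, 5), min(2, 9)) = 3; the 9 leaf is never visited.
best = alphabeta([[3, 5], [2, 9]], 2, float("-inf"), float("inf"), True,
                 evaluate=lambda n: n,
                 children=lambda n: n if isinstance(n, list) else [])
print(best)  # 3
```

The pruning condition is what lets full-width (Type A) search reach useful depths: whole subtrees are skipped once one side is proven to have a better alternative elsewhere.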
The Top Chess Engine Championship (TCEC) and Chess.com Computer Chess Championship (CCC) are the premier proving grounds where engines battle under controlled conditions.
After Lc0's dual victories in Seasons 15 and 17, Stockfish has won every subsequent TCEC Superfinal — a streak of 10+ consecutive championships. The NNUE integration in Season 19 marked the turning point: Stockfish absorbed the neural network revolution into its own framework and came back stronger than ever. The competitive landscape is widening, though, with engines like Obsidian (~3,686), Caissa (~3,641), Komodo Dragon (~3,634), and Berserk (~3,615) all reaching top-tier strength.
Open source vs. commercial interests: a cautionary tale about what happens when companies try to monetize community work.
From room-sized supercomputers to smartphone apps. How much compute does a top chess engine actually need?
Optimal hardware: High-core-count x86 CPUs with AVX-512 or AVX-VNNI support. Stockfish 18 removes the 1024-thread limit.
Consumer level: A modern 16-core AMD Ryzen or Intel Core i9 runs Stockfish at full strength (~3,600+ ELO). Even a laptop with 4 cores plays at superhuman level.
Competition level: TCEC uses high-end server hardware (128+ threads). Fishtest volunteers collectively provide thousands of CPU cores.
NNUE efficiency: Uses int8/int16 SIMD instructions, requiring zero GPU. The neural network weights are small enough to share between processes via Stockfish 18's "Shared Memory" feature.
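The "efficiently updatable" idea can be shown with a toy accumulator: when a move changes only a couple of feature inputs, the first-layer sums are patched rather than recomputed from scratch. Sizes and feature indices here are invented for illustration; real NNUE nets are far larger and use int8/int16 SIMD:

```python
import random

FEATURES, HIDDEN = 1000, 8            # toy sizes; real NNUE nets are far larger
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(FEATURES)]

def full_refresh(active_features):
    """Recompute the first-layer accumulator from scratch: O(|active| * HIDDEN)."""
    acc = [0.0] * HIDDEN
    for f in active_features:
        for j in range(HIDDEN):
            acc[j] += W1[f][j]
    return acc

def incremental_update(acc, removed, added):
    """Patch the accumulator after a move: touch only the changed features."""
    acc = acc[:]
    for f in removed:
        for j in range(HIDDEN):
            acc[j] -= W1[f][j]
    for f in added:
        for j in range(HIDDEN):
            acc[j] += W1[f][j]
    return acc

# A quiet move: the piece's old-square feature disappears, the new one appears.
acc = full_refresh({10, 250, 777})
acc = incremental_update(acc, removed=[250], added=[251])
assert all(abs(a - b) < 1e-9 for a, b in zip(acc, full_refresh({10, 251, 777})))
```

Because a move touches only a handful of features, the update cost is independent of the total feature count, which is what makes millions of CPU evaluations per second feasible.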
Optimal hardware: High-end NVIDIA GPUs (RTX 4090, A100, H100) with CUDA/cuDNN support. Performance scales roughly linearly with GPU compute.
Consumer level: An RTX 3080 or better provides competitive play (~3,500+ ELO). Older or smaller GPUs still work but with significantly reduced strength.
Competition level: TCEC provides Lc0 with an NVIDIA A100 80GB. Google's AlphaZero used 4 TPUs for inference (5,000 TPUs for training).
Transformer networks: Lc0's BT4 transformer model benefits from larger GPU memory and higher throughput, scaling better than older convolutional architectures.
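On the MCTS side, move selection blends the network's policy prior with visit statistics via a PUCT rule. A minimal sketch of the AlphaZero-style selection formula (constants and data layout are illustrative, not Lc0's actual code):

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child move maximizing Q + U: Q is the average value so far,
    U favors moves the policy likes (high prior) but rarely visited.
    `children` maps move -> dict(prior=float, visits=int, value_sum=float)."""
    total_visits = sum(c["visits"] for c in children.values())

    def score(c):
        q = c["value_sum"] / c["visits"] if c["visits"] else 0.0
        u = c_puct * c["prior"] * math.sqrt(total_visits + 1) / (1 + c["visits"])
        return q + u

    return max(children, key=lambda move: score(children[move]))

stats = {
    "e4": {"prior": 0.6, "visits": 0, "value_sum": 0.0},
    "d4": {"prior": 0.4, "visits": 0, "value_sum": 0.0},
}
print(puct_select(stats))  # e4
```

Early on the prior dominates (exploration guided by the network); as visits accumulate, the averaged value Q takes over, so the search converges on moves that actually score well.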
With engines reaching ELO 3,700+, are we approaching perfect play? The answer is more complex than you'd think.
As of early 2026, the top engines and their approximate CCRL 40/15 ratings.
| Rank | Engine | Type | ELO (CCRL) |
|---|---|---|---|
| 1 | Stockfish 18 | AB + NNUE | ~3,759 |
| 2 | Leela Chess Zero | MCTS + NN | ~3,713 |
| 3 | Obsidian | AB + NNUE | ~3,686 |
| 4 | Caissa | AB + NNUE | ~3,641 |
| 5 | Komodo Dragon | AB/MCTS + NNUE | ~3,634 |
| 6 | PlentyChess | AB + NNUE | ~3,623 |
| 7 | Berserk | AB + NNUE | ~3,615 |
Ratings from CCRL 40/15 rating list (Nov 2025). Exact values vary by time control and rating list. Stockfish has won all TCEC and CCC main events since 2020.
Deep Blue needed a room-sized supercomputer to play at ~2,800 ELO. Today's Stockfish achieves ~3,700+ on a laptop. That leap came primarily from algorithmic advances (alpha-beta refinements, NNUE evaluation, search heuristics), not raw compute.
Neither pure brute-force search nor pure neural networks won the war. Stockfish's dominance comes from combining the best of both: alpha-beta's tactical depth with NNUE's positional intuition. Even Lc0's self-play data now feeds Stockfish's training. The paradigms have merged.
Stockfish (open-source, community-developed) surpassed every commercial engine. 900+ contributors, 19,900+ years of donated CPU time, and a rigorous distributed testing framework proved that decentralized development can beat corporate R&D — even IBM and Google.
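The testing framework behind that rigor is built on the sequential probability ratio test: each game result nudges a log-likelihood ratio between a "no gain" hypothesis (elo0) and a "real gain" hypothesis (elo1), and testing stops as soon as either bound is crossed. A simplified win/loss-only sketch (Fishtest's actual test also models draws and uses pentanomial game-pair statistics):

```python
import math

def elo_to_score(elo):
    """Expected score implied by an Elo difference."""
    return 1.0 / (1.0 + 10.0 ** (-elo / 400.0))

def sprt(results, elo0=0.0, elo1=5.0, alpha=0.05, beta=0.05):
    """Simplified win/loss SPRT. `results` is an iterable of 1 (win) or
    0 (loss) for the candidate patch. Returns 'H1' (accept the gain),
    'H0' (reject it), or 'continue' (need more games)."""
    p0, p1 = elo_to_score(elo0), elo_to_score(elo1)
    lower = math.log(beta / (1.0 - alpha))
    upper = math.log((1.0 - beta) / alpha)
    llr = 0.0
    for r in results:
        llr += math.log(p1 / p0) if r == 1 else math.log((1.0 - p1) / (1.0 - p0))
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "continue"
```

The sequential stopping rule is what makes volunteer CPU time go so far: clearly good or clearly bad patches are resolved in relatively few games, and only borderline ones consume long runs.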
Even though AlphaZero itself was never publicly released and Stockfish has since surpassed it, its impact was permanent. It proved that reinforcement learning from scratch could master chess. It forced the entire field to adopt neural networks. And its "alien" games showed that chess still has undiscovered depths.