Live Demo: AI Learns to Play Snake

The DQN agent starts with zero knowledge of the rules and learns by playing right in your browser: the first few dozen episodes are pure chaos, then its play steadily improves. This is deep reinforcement learning in action.

Episode: 0 | Current Score: 0 | Best Score: 0 | Exploration ε: 1.000 | Speed:

Purple = snake head  |  Red dot = food  |  As ε approaches 0, the AI relies entirely on its learned policy and no longer explores randomly
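The exploration counter above can be sketched as an ε-greedy policy with multiplicative decay. This is a minimal illustration of the idea, not the demo's actual code; the decay rate and floor values below are assumptions chosen for readability.

```python
import random

def select_action(q_values, epsilon):
    """ε-greedy: with probability epsilon pick a random move (explore),
    otherwise pick the move with the highest Q-value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

def decay(epsilon, rate=0.995, floor=0.01):
    """Shrink ε after each episode so the agent gradually stops exploring.
    rate and floor are illustrative values, not the demo's exact settings."""
    return max(floor, epsilon * rate)

# ε starts at 1.0 (pure chaos) and decays toward the floor over episodes.
eps = 1.0
for episode in range(500):
    eps = decay(eps)
```

With ε = 1.0 every move is random; as it decays toward the floor, the agent increasingly trusts its learned Q-values, which is exactly the transition you watch in the demo.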

Fun Machine Learning

Master AI with minimal formulas and the most intuitive approach

No memorizing formulas, no copy-pasting code: every algorithm runs right in your browser. Watch parameters change in real time, and understanding follows naturally. From Gradient Descent to Transformer, we cover only what you actually need to understand and let intuition handle the rest.

Runs in Browser · Visualization First · Zero-Formula Intro · 19 Algorithms · Python + JS Dual Versions

Pick a Path and Start Learning

Classification Route
Grad School Exam Route

Just learn everything in the table of contents — you can't escape it 😄

Common Foundations (shared across routes) · Advanced (requires prerequisites)

Algorithm Panorama

History of Algorithm Development

1940s – 1960s The Dawn
1943
McCulloch-Pitts Neuron

The first mathematical neuron model, proving that neural networks could theoretically implement any logical operation.

1957
Perceptron

Rosenblatt proposed the first learnable linear classifier, igniting the first neural network boom.

1959
The Term "Machine Learning" is Born

Arthur Samuel first used the term "Machine Learning" in his checkers program paper.

1970s – 1980s Classical Algorithm Foundations
1979
K-Means Clustering

Hartigan & Wong published their widely used K-Means variant; the algorithm itself traces back to Lloyd (1957) and became a classic baseline for unsupervised learning.

1986
Backpropagation Algorithm

Rumelhart, Hinton, and Williams published the BP algorithm, finally enabling effective training of multi-layer neural networks.

1989
Convolutional Neural Network LeNet

LeCun applied CNNs to handwritten digit recognition, laying the foundational architecture for computer vision.

1990s Rise of Statistical Learning
1995
Support Vector Machine (SVM)

Cortes & Vapnik proposed the SVM, which excelled on small-sample, high-dimensional data and dominated competition leaderboards through the 2000s.

1997
LSTM (Long Short-Term Memory)

Hochreiter & Schmidhuber's LSTM mitigated the vanishing-gradient problem in RNNs, making long-sequence modeling practical.

2000s Data & Engineering Era
2001
Random Forest

Breiman proposed Random Forest, and ensemble learning began dominating structured data tasks.

2006
Deep Learning Revival

Hinton proposed layer-wise unsupervised pretraining, reinvigorating deep networks and sparking the third AI boom.

2010s Deep Learning Explosion
2012
AlexNet Wins ImageNet

A deep CNN won the ImageNet competition by a crushing margin, making GPU training of deep networks mainstream.

2013
Word2Vec Word Embeddings

Mikolov proposed Word2Vec, ushering NLP into the representation-learning era: "words have geometry" became reality.

2014
GAN (Generative Adversarial Network)

Goodfellow proposed the GAN, setting off a boom in generative models and marking the starting point of AI-generated art.

2014
VAE (Variational Autoencoder)

Kingma & Welling proposed VAE, elegantly combining probabilistic graphical models with deep learning.

2015
Residual Network (ResNet)

He et al. introduced residual skip connections, solving the degradation problem in very deep networks and making 152-layer networks trainable.

2017
Transformer "Attention Is All You Need"

The Google team published the Transformer, where self-attention replaced RNNs and became the cornerstone of modern AI.

2020s Large Model Era
2020
Diffusion Models

Ho et al. proposed DDPM, and diffusion models went on to surpass GANs in image-generation quality.

2022
ChatGPT Ignites AI Adoption

OpenAI released ChatGPT, bringing large language models into the mainstream and ushering AI applications into a new era.

2023 →
Multimodal · Agents · Continued Evolution

Multimodal large models like GPT-4, Gemini, and Claude emerged, and AI Agents began autonomously completing complex tasks.