Live Demo: AI Learns to Play Snake
The DQN agent starts with zero knowledge of the rules and learns purely by playing, right in the browser. The first few dozen episodes are pure chaos; then it gets steadily smarter. This is deep reinforcement learning.
Purple = Snake head | Red dot = Food | As ε (the exploration rate) approaches 0, the AI relies entirely on its learned policy and no longer explores randomly
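The exploration schedule described above can be sketched as ε-greedy action selection. This is a minimal illustration, not the demo's actual code: the decay rate, episode count, and action values here are arbitrary assumptions.

```python
import random

def epsilon_greedy_action(q_values, epsilon):
    """With probability epsilon pick a random action (explore),
    otherwise pick the action with the highest Q-value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))          # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit

# Decay epsilon each episode: early episodes are chaos,
# later ones follow the learned policy almost exclusively.
epsilon, epsilon_min, decay = 1.0, 0.01, 0.995  # illustrative values
for episode in range(1000):
    epsilon = max(epsilon_min, epsilon * decay)
# After enough episodes, epsilon sits at its floor and the agent rarely explores.
```

With ε = 1.0 every move is random; as it decays toward the floor, the agent's behavior becomes almost entirely driven by its Q-values.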
Fun Machine Learning
Master AI with minimal formulas and the most intuitive approach
No memorizing formulas, no copying code: every algorithm runs right in your browser. Watch parameters change in real time, and understanding follows naturally. From Gradient Descent to Transformer, we cover only what you actually need to understand, and let intuition handle the rest.
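To preview the "watch parameters change" idea, here is the smallest possible gradient descent example: minimizing f(x) = x². The starting point and learning rate are arbitrary choices for illustration.

```python
# Minimize f(x) = x^2 with plain gradient descent; the gradient is f'(x) = 2x.
x = 5.0    # arbitrary starting point
lr = 0.1   # learning rate (step size)
for step in range(100):
    grad = 2 * x       # gradient of x^2 at the current point
    x = x - lr * grad  # take a small step downhill
# Each step shrinks x by a constant factor, so x converges toward the minimum at 0.
```

Watching `x` shrink step by step is exactly the kind of parameter trace the interactive demos visualize.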
Pick a Path and Start Learning
Just learn everything in the table of contents — you can't escape it 😄
Algorithm Panorama
History of Algorithm Development
McCulloch and Pitts proposed the first mathematical neuron model, proving that neural networks could theoretically implement any logical operation.
Rosenblatt proposed the Perceptron, the first learnable linear classifier, igniting the first neural network boom.
Arthur Samuel first used the term "Machine Learning" in his checkers program paper.
Lloyd proposed the K-Means algorithm, which became a classic baseline for unsupervised learning.
Rumelhart, Hinton, and Williams published the backpropagation (BP) algorithm, finally enabling effective training of multi-layer neural networks.
LeCun applied CNNs to handwritten digit recognition, laying the foundational architecture for computer vision.
Vapnik proposed SVM, which excelled on small-sample high-dimensional data and dominated competition leaderboards through the 2000s.
Hochreiter & Schmidhuber solved the vanishing gradient problem in RNNs, making sequence modeling possible.
Breiman proposed Random Forest, and ensemble learning began dominating structured data tasks.
Hinton proposed layer-by-layer pretraining of deep belief networks, reinvigorating deep networks and sparking the third AI boom.
Mikolov proposed Word2Vec, ushering NLP into the representation learning era — "words have geometry" became reality.
AlexNet, a deep CNN, won the ImageNet competition by a crushing margin, making GPU training of deep networks mainstream.
Goodfellow proposed GAN, launching generative models into prosperity — the starting point of AI-generated art.
Kingma & Welling proposed VAE, elegantly combining probabilistic graphical models with deep learning.
He et al. invented skip connections, solving the degradation problem in ultra-deep networks and making 152-layer networks possible.
The Google team published the Transformer, where self-attention replaced RNNs and became the cornerstone of modern AI.
Ho et al. proposed DDPM, and diffusion models comprehensively surpassed GANs in image generation quality.
OpenAI released ChatGPT, bringing large language models into the mainstream and ushering AI applications into a new era.
Multimodal large models like GPT-4, Gemini, and Claude emerged, and AI Agents began autonomously completing complex tasks.