A Brief History of Artificial Intelligence
We keep hearing about artificial intelligence (AI), but she didn’t pop up overnight. She’s been stumbling, leaping, and reinventing herself for decades. The path is uneven, which is what makes it interesting.
The conference that named her
AI got her name in 1956 at the Dartmouth workshop, where John McCarthy and a handful of colleagues conjectured that every feature of intelligence could, in principle, be described precisely enough for a machine to simulate it. That was it—no roadmap, just a promise. She was young, full of hope, and destined for decades of false starts.
Symbolic dreams
In the 1960s and 70s, she tried symbolic reasoning. The idea was simple: give her rules and symbols, and she’ll act smart. It worked on toy problems, like proving math theorems. But the real world is messy. She could juggle blocks, not human speech.
Expert systems
By the 1980s, she had a new costume: expert systems. Programmers stuffed her with rules from doctors or engineers. If condition A, then outcome B. It looked practical, even profitable. But rules don’t scale. She grew brittle, and companies lost patience.
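The "if condition A, then outcome B" style can be sketched as a tiny forward-chaining rule engine. The rules and facts below are invented for illustration, not drawn from any real expert system.

```python
# A minimal sketch of a 1980s-style expert system: hand-written rules that
# fire whenever their conditions are all known facts. Rules and facts here
# are made up for illustration.

RULES = [
    # (set of required facts, conclusion to add)
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "fatigue"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "fatigue"}, RULES))
```

The brittleness the paragraph describes is visible even here: every new situation needs a human to write another rule, and rules can silently contradict each other as the list grows.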
Machine learning
She regrouped in the 1990s with machine learning. Instead of memorizing rules, she learned patterns from data. Think email spam filters—she spotted the junk by training on examples. The more data she got, the sharper she became. We finally saw her grow beyond brittle tricks.
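The spam-filter idea can be sketched with a toy naive Bayes classifier: instead of hand-written rules, the program estimates word statistics from labeled examples. The training messages below are made up for illustration.

```python
# A toy sketch of a learned spam filter (naive Bayes with add-one
# smoothing). The labeled examples are invented for illustration.
from collections import Counter
import math

spam = ["win cash now", "cheap cash offer", "win a prize now"]
ham  = ["meeting at noon", "lunch plans today", "project status meeting"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def score(msg, counts, total):
    # Log-probability of the message under one class, with add-one smoothing
    # so unseen words don't zero out the whole product.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def is_spam(msg):
    return score(msg, spam_counts, spam_total) > score(msg, ham_counts, ham_total)

print(is_spam("win cash prize"))       # → True
print(is_spam("project meeting today"))  # → False
```

The point of the paragraph shows up directly: nobody wrote a rule about "cash" or "prize"; the statistics came from the examples, and more examples would sharpen them.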
Deep learning
In the 2010s, she dived deeper. Neural networks with many layers—deep learning—let her recognize faces, translate speech, even beat humans at games like Go. She thrived on raw data and computing power. For the first time, she seemed less like a lab demo and more like a partner.
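What "many layers" means can be sketched in a few lines: each layer is a linear map followed by a nonlinearity, and depth comes from stacking them. The weights below are set by hand (to compute XOR, a function no single layer can compute) purely for illustration; real deep networks learn their weights from data.

```python
# A bare-bones two-layer network: linear map, nonlinearity, linear map.
# Weights are hand-chosen to compute XOR, just to show why stacking
# layers adds power; real networks learn these values by training.
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Hidden layer: one unit fires if any input is on, one only if both are.
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
# Output layer: OR minus 2*AND gives XOR.
W2 = np.array([[1.0], [-2.0]])
b2 = np.array([0.0])

def forward(x):
    h = relu(x @ W1 + b1)  # layer 1: linear + nonlinearity
    return h @ W2 + b2     # layer 2: linear readout

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, forward(np.array(x, dtype=float))[0])
```

The same structure, with millions of learned weights and dozens of layers, is what let her recognize faces and play Go.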
A coder’s musing
We watch her climb, fall, and climb again. She doesn’t move in a straight line, and neither do we. Maybe that’s the rule of thumb: progress feels like failure until it doesn’t.