SHRDLU

We meet SHRDLU around 1970, when Terry Winograd built her at MIT. She was an early AI experiment who could chat with us about colored blocks on a table. Not the internet. Not the world. Just a tiny universe of cubes, pyramids, and a robotic arm. That's all she needed to seem clever.

Blocks world

A “blocks world” is exactly what it sounds like. A tabletop full of toy blocks—red, green, blue. Shapes stacked or scattered. Think of it as a kid’s playset shrunk into code. By keeping the world small, the AI avoided chaos. She only had to reason about a dozen objects, not the real world. That constraint made her usable.
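A world that small fits in a few lines of code. Here is a minimal sketch in Python; the object names, fields, and helper are invented for illustration, not SHRDLU's actual data structures.

```python
# A minimal blocks-world model: a handful of named objects and a
# "what sits on what" relation. Everything here is illustrative.
from dataclasses import dataclass

@dataclass
class Block:
    name: str    # e.g. "b1" (hypothetical identifier)
    color: str   # "red", "green", "blue"
    shape: str   # "cube" or "pyramid"

# The whole universe: a dict of objects plus a stacking map.
world = {
    "b1": Block("b1", "red", "cube"),
    "b2": Block("b2", "green", "cube"),
    "p1": Block("p1", "blue", "pyramid"),
}
on = {"p1": "b2"}  # the blue pyramid sits on the green cube

def blocks_matching(color=None, shape=None):
    """Find every object fitting a description like 'the red cube'."""
    return [b for b in world.values()
            if (color is None or b.color == color)
            and (shape is None or b.shape == shape)]
```

With only a dozen entries in `world`, every question the AI might face reduces to scanning a short list, which is exactly the chaos-avoidance the small world buys.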

Grammar parsing

When we typed, she parsed. Parsing means breaking a sentence into grammar pieces—subject, verb, object. Like diagramming sentences in grade school. SHRDLU’s parser handled commands such as “Put the red block on the green cube.” Nothing fancy, but precise. The parser made sure words lined up with the rules of English.
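A toy version of that parsing step can be sketched with a pattern over grammatical roles. This is nothing like SHRDLU's real systemic-grammar parser; the vocabulary and pattern below are invented, and only show the idea of checking words against rules and labeling them as verb, object, and so on.

```python
import re

# A toy command grammar: verb + noun phrase + optional preposition + noun phrase.
# The word lists are a tiny invented vocabulary, not SHRDLU's.
PATTERN = re.compile(
    r"(?P<verb>put|pick up|stack)\s+"
    r"the\s+(?P<color1>red|green|blue)\s+(?P<shape1>block|cube|pyramid)"
    r"(?:\s+(?P<prep>on|onto)\s+"
    r"the\s+(?P<color2>red|green|blue)\s+(?P<shape2>block|cube|pyramid))?"
)

def parse(sentence):
    """Break a command into grammatical roles, or return None if it
    doesn't line up with the rules."""
    m = PATTERN.fullmatch(sentence.strip().rstrip(".").lower())
    return m.groupdict() if m else None
```

So `parse("Put the red block on the green cube.")` yields labeled pieces, while a sentence outside the grammar simply fails to parse, which is how the precision the article describes shows up in code.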

Semantic interpretation

Parsing only gets shape. Meaning takes more work. SHRDLU mapped words to her toy world. “Block” pointed to an object in memory. “On” described a spatial relation. If a sentence didn’t match the world, she said so. This gave the illusion that she “understood.” In truth, she was slotting grammar into a little semantic model.
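That slotting of grammar into a semantic model can be sketched too. Here the world, the word-to-object lookup, and the failure messages are all invented; the point is just that a phrase either resolves to an object in memory or produces an honest "I don't see that."

```python
# Map parsed words onto a tiny world model, and complain when they
# don't fit. World contents and messages are illustrative.
world = {
    "b1": {"color": "red", "shape": "cube"},
    "b2": {"color": "green", "shape": "cube"},
}

def resolve(color, shape):
    """Turn a phrase like 'the red block' into an object name,
    or return an explanation of why it can't be done."""
    noun = "cube" if shape == "block" else shape  # "block" loosely means cube here
    matches = [name for name, obj in world.items()
               if obj["color"] == color and obj["shape"] == noun]
    if not matches:
        return None, f"I don't see a {color} {shape}."
    if len(matches) > 1:
        return None, f"Which {color} {shape} do you mean?"
    return matches[0], None
```

The "understanding" is exactly this lookup: when the sentence matches the world, an object comes back; when it doesn't, the mismatch is reported instead.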

Planning

Once she had meaning, she planned. “Pick up the red block” meant: check if the hand is free, find the block, lift it. She chained steps into a plan and executed them in order. If a plan failed—say, nothing could balance on a pyramid’s point—she told us. Again, not magic. Just rules, checks, and a planner that ran through them.
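The rules-checks-planner loop can be sketched as actions with preconditions. The specific rules below (a full hand, a covered block, the pyramid's point) are invented stand-ins in the spirit of SHRDLU's constraints, not her actual planner.

```python
# A tiny planner: run steps in order, check each step's precondition,
# and stop with an explanation when one fails. Rules are illustrative.
world = {"hand": None,                # what the arm is holding
         "on": {"p1": "b1"},         # pyramid p1 sits on block b1
         "shapes": {"b1": "cube", "p1": "pyramid", "p2": "pyramid"}}

def clear(block):
    """A block is clear if nothing sits on top of it."""
    return block not in world["on"].values()

def pick_up(block):
    if world["hand"] is not None:
        return f"My hand is full (holding {world['hand']})."
    if not clear(block):
        return f"Something is on top of {block}."
    world["hand"] = block
    world["on"].pop(block, None)
    return None

def put_on(block, target):
    if world["hand"] != block:
        return f"I am not holding {block}."
    if world["shapes"][target] == "pyramid":
        return "I can't balance anything on a pyramid's point."
    world["on"][block] = target
    world["hand"] = None
    return None

def run_plan(steps):
    """Execute steps in order; report the first failure, or 'OK.'"""
    for action, *args in steps:
        error = action(*args)
        if error:
            return error
    return "OK."
```

A plan like `[(pick_up, "p1"), (put_on, "p1", "b1")]` succeeds step by step, while trying to set something on a pyramid fails with a sentence explaining why, which is all the "telling us" amounts to.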

Coder’s note

We like SHRDLU because she feels approachable. She did a lot with little. And the core ideas—parsing, mapping, planning—still show up in modern AI. If we ever wanted to tinker, we could sketch her in C#. A simple console app, some grammar rules, and a tiny model of blocks. That’s enough to bring her back to life on our screen.
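The article imagines that console app in C#; the same skeleton, sketched here in Python for brevity, would carry over almost line for line. Everything in it (the two-block world, the one-rule grammar, the replies) is a placeholder.

```python
# The skeleton of a SHRDLU-style console app: read, parse, interpret, reply.
# All names and the single grammar rule are invented stand-ins.
import re

world = {"b1": ("red", "cube"), "b2": ("green", "cube")}
GRAMMAR = re.compile(r"pick up the (?P<color>red|green|blue) (?P<shape>block|cube)")

def respond(line):
    m = GRAMMAR.fullmatch(line.strip().rstrip(".").lower())
    if not m:
        return "I don't understand."
    shape = "cube" if m["shape"] == "block" else m["shape"]
    hits = [n for n, (c, s) in world.items() if c == m["color"] and s == shape]
    return f"OK, picking up {hits[0]}." if hits else "I don't see one."

if __name__ == "__main__":
    while True:
        try:
            print(respond(input("> ")))
        except EOFError:
            break
```

Parsing, mapping, and a (one-step) plan in under thirty lines: enough of a seed to grow back toward the real thing.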