Expert systems

We forget how new AI once was. Back then, she didn’t guess. She reasoned—like a strict librarian with rules taped to the desk. Those systems were called “expert systems.” They sounded fancy, but the idea was simple: if you tell her enough rules, she’ll act like an expert.

Knowledge bases

The heart was the knowledge base. Think of it as a big notebook of facts and “if–then” rules. If a patient has a fever and rash, then consider measles. Not rocket science, but when you pile up enough of these, she starts to look smart. The trick wasn’t the rules—it was storing them in one place where the reasoning could find them fast.
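
Here is a minimal sketch of that notebook in Python. The fever-and-rash rule comes straight from the example above; the rest is made up for illustration, not lifted from any real system.

```python
# A toy knowledge base: a set of plain facts plus a list of if-then rules.
# Each rule pairs a set of conditions with a single conclusion.
facts = {"fever", "rash"}

rules = [
    ({"fever", "rash"}, "consider measles"),     # if fever and rash, then consider measles
    ({"cough", "fever"}, "consider pneumonia"),  # illustrative only
]
```

The point of the shape is the one the old systems cared about: everything sits in one place, so the reasoning can scan it rule by rule.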

Forward chaining

Forward chaining is when she starts from what we know. She takes the facts on the table and pushes forward, firing rules like dominoes. If A is true, then B must be true. If B is true, then C. Pretty soon we’ve gone from “patient has cough” to “maybe pneumonia.” Useful when the facts are plentiful and we don’t yet know which conclusion we’re chasing.
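
A minimal sketch of that domino effect, using the same facts-and-rules shape as above. The rules are invented to mirror the cough-to-pneumonia example, not taken from any real system.

```python
def forward_chain(facts, rules):
    """Push forward from known facts: fire any rule whose conditions all hold,
    add its conclusion, and repeat until nothing new appears."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# From "patient has cough" she pushes forward, rule by rule, to "maybe pneumonia".
rules = [
    ({"cough"}, "respiratory symptom"),
    ({"respiratory symptom", "fever"}, "maybe pneumonia"),
]
print(forward_chain({"cough", "fever"}, rules))  # includes "maybe pneumonia"
```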

Backward chaining

Backward chaining works the other way. She starts with a guess—say, pneumonia—and asks, “What would I need to know for this to hold?” Then she hunts backward for missing facts. If a fact is missing, she asks us. It feels like a doctor ruling things out. Narrow, then test. Narrow again.
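
The same idea in miniature, again with invented rules. She checks the facts first, then any rule that could support the goal, and only when both fail does she turn around and ask us.

```python
def backward_chain(goal, facts, rules, ask=input):
    """Start from a guess and hunt backward for the facts that would support it."""
    if goal in facts:
        return True
    # Try every rule that concludes the goal; each condition becomes a sub-goal.
    for conditions, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(c, facts, rules, ask) for c in conditions
        ):
            facts.add(goal)
            return True
    # Nothing derives it, so ask us directly, the way the old systems did.
    if ask(f"Is '{goal}' true? (y/n) ").strip().lower().startswith("y"):
        facts.add(goal)
        return True
    return False

# Start from the guess "pneumonia" with only "cough" known; she asks about "fever".
rules = [({"cough", "fever"}, "pneumonia")]
print(backward_chain("pneumonia", {"cough"}, rules, ask=lambda q: "y"))  # True
```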

Why it mattered

Expert systems showed us that reasoning in code was possible. Not just math, but rules. She wasn’t flexible—change one fact, and the whole chain could collapse. But for a while, it worked, especially in medicine and engineering.

We still write rules. But now we hide them in code, not notebooks. Sometimes we miss how visible those old chains were. You could follow her thinking line by line. And maybe that was the point.