Music generation
We keep expecting computers to surprise us. Music generation is one of those tricks. AI (artificial intelligence) writes a tune, and suddenly we’re asking if she’s creative—or just shuffling notes. Either way, the output sounds less like math homework and more like art.
Markov models
Start simple. A first-order Markov model looks only one step back. She asks: given this note, which notes tend to follow, and how often? Then she samples from that table, again and again. It works, but only in the way a tourist phrasebook “works.” You get sentences; you don’t get style. After a while the song loops, or collapses into nonsense. Useful as a first experiment, not as music you’d save.
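For the curious, here is what that one-step guess looks like in code. This is a minimal sketch, not any particular library: the function names and the toy training melody are ours, made up for illustration.

```python
# Minimal first-order Markov melody sketch. The toy melody and all
# names here are illustrative, not taken from a real system.
import random

def train(notes):
    """Count which note follows which, building a transition table."""
    table = {}
    for a, b in zip(notes, notes[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length=16):
    """Sample each next note from the observed transitions, one step at a time."""
    melody = [start]
    for _ in range(length - 1):
        options = table.get(melody[-1])
        if not options:              # dead end: this note was never followed by anything
            break
        melody.append(random.choice(options))
    return melody

# Toy "training data": a short melody as note names.
tune = ["C", "D", "E", "C", "E", "G", "E", "D", "C"]
print(generate(train(tune), start="C"))
```

Run it a few times and you can hear the problem on the page: each note only knows its immediate predecessor, so phrases wander and repeat with no sense of where the tune has been.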
Recurrent neural networks
We wanted more memory. Recurrent neural networks (RNNs) let her keep a short history: each note updates a hidden state that carries forward a summary of everything played so far. Now a melody can echo a phrase from earlier. She can hold onto rhythm as well as pitch. The tradeoff: that summary fades as the sequence grows, and the influence of early notes washes out. A tune with verses, chorus, and bridge pushes her memory past its breaking point. Listeners notice.
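To see what “a hidden state that carries history” means, here is a bare-bones RNN cell in Python. The weights are random and untrained, and every name and size is an assumption for illustration; the point is only the shape of the computation: one vector, updated note by note.

```python
# Bare-bones RNN cell sketch (random, untrained weights; sizes and
# names are illustrative). One hidden vector carries the history.
import numpy as np

rng = np.random.default_rng(0)
n_notes, n_hidden = 12, 8           # 12 pitch classes, small hidden state

W_in  = rng.normal(0, 0.1, (n_hidden, n_notes))   # input -> hidden
W_h   = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden -> hidden (the "memory")
W_out = rng.normal(0, 0.1, (n_notes, n_hidden))   # hidden -> next-note scores

def step(h, note):
    """Consume one note (as a pitch-class index), update the hidden
    state, and score every candidate next note."""
    x = np.zeros(n_notes)
    x[note] = 1.0                    # one-hot encode the pitch class
    h = np.tanh(W_in @ x + W_h @ h)  # new state mixes input with old state
    return h, W_out @ h              # scores for the next note

h = np.zeros(n_hidden)
for note in [0, 2, 4, 0]:            # C D E C, as pitch-class indices
    h, scores = step(h, note)
print(scores.argmax())               # (untrained) guess at the next note
```

Everything the network remembers has to squeeze through that one `h` vector. That is the bottleneck: by bar forty, the opening phrase has been overwritten a hundred times.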
Transformers
Transformers changed the game. She no longer squeezes the past through one hidden state; attention lets her scan everything written so far at once. At each step she can look directly back at the opening phrase while weighing the bar she’s writing now. Patterns line up across dozens of bars. Suddenly we get harmonies that resolve, and motifs that return like they were planned. It feels less like prediction, more like intention.
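Here is a single self-attention head stripped to the math, again as a sketch with random, untrained weights; the sizes and names are assumptions, not a real model’s. What it shows: every note’s output is a weighted mix of all the notes before it, near or far, with the weights learned rather than fixed.

```python
# One self-attention head, stripped to the math (random untrained
# weights; all names and sizes are illustrative). Each note's output
# is a weighted mix of every earlier note in the context.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 16, 8                   # 16 notes in context, 8-dim embeddings

X  = rng.normal(size=(seq_len, d))   # stand-in note embeddings
Wq = rng.normal(size=(d, d))         # query projection
Wk = rng.normal(size=(d, d))         # key projection
Wv = rng.normal(size=(d, d))         # value projection

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)        # how strongly each note attends to each other

# Causal mask: when generating, a note may only look at earlier notes.
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores[mask] = -np.inf

weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the visible past

out = weights @ V                    # each row blends information across the score
print(weights[-1].round(2))          # how the last note weighs the whole past
```

Print that last line and you see the difference from the RNN: the sixteenth note can put real weight on the first one directly, no fading summary in between. That direct line back is what lets motifs return on cue.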
Our take
We treat her less like a bandmate and more like a demo machine. She’s fast. She’s confident. But she doesn’t care if the bridge hits too soon or if the groove gets stale. That’s still on us.