Vector databases
We use embeddings when we want AI (artificial intelligence) to remember. An embedding is just a list of numbers, but together those numbers act like a fingerprint for meaning. Store enough of them and she can find “things like this” instead of just “the exact same thing.” That’s the trick.
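Here is the fingerprint idea in miniature, with made-up three-dimensional “embeddings” (real models produce hundreds or thousands of dimensions, and all the vectors below are invented for illustration). Similar meanings become vectors that point the same way, and cosine similarity measures that:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: close to 1.0 means "pointing the same way"
    # in meaning-space, close to 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings" (invented values, not from a real model).
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
invoice = [0.0, 0.2, 0.95]

print(cosine_similarity(cat, kitten))   # near 1: similar meaning
print(cosine_similarity(cat, invoice))  # near 0: unrelated
```

“Things like this” then just means: rank everything by this score and take the top of the list.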
What vectors need
Embeddings don’t live happily in regular databases. Tables and rows are fine for names and phone numbers. They choke on nearest-neighbor search, the hunt for whichever stored vectors sit closest to a query. So we use vector databases. They’re built to keep the math fast and the memory wide.
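The thing a vector database speeds up is, at heart, this loop: a brute-force scan that measures the query against every stored vector. Fine for a hundred vectors, hopeless for a hundred million, which is why real engines swap the scan for an index. A pure-Python sketch (ids and vectors invented for illustration):

```python
import math

def nearest_neighbors(query, store, k=2):
    # Brute force: compute the distance from the query to every stored
    # vector, sort, return the k closest ids. A vector database replaces
    # this O(n) scan with an index so lookups stay fast at scale.
    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(store.items(), key=lambda item: euclidean(query, item[1]))
    return [doc_id for doc_id, _ in ranked[:k]]

# A toy "store": document ids mapped to made-up 2-d embeddings.
store = {
    "doc-cats": [0.9, 0.1],
    "doc-dogs": [0.8, 0.2],
    "doc-tax":  [0.1, 0.9],
}
print(nearest_neighbors([0.88, 0.12], store, k=2))  # ['doc-cats', 'doc-dogs']
```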
Pinecone
Pinecone is the hosted option. We give it vectors, and it handles scale, updates, and search. It feels like using cloud storage but for meaning. She doesn’t care if the set is small today and huge tomorrow. Pinecone just stretches.
Weaviate
Weaviate leans open-source. We can run it ourselves or let someone else host it. It adds extras like schemas and hybrid search. That means she can juggle text queries and vector queries in the same breath. It’s flexible if we don’t mind a little setup.
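“Hybrid” boils down to blending two rankings. This toy sketch shows the shape of it, not Weaviate’s actual scoring: a keyword score and a vector score, mixed by a weight. Weaviate’s hybrid queries do expose a similar `alpha` knob, but every name, score, and weight below is invented for illustration:

```python
def hybrid_score(keyword_score, vector_score, alpha=0.5):
    # alpha = 0 -> pure keyword ranking, alpha = 1 -> pure vector ranking.
    # Linearly blends two scores that are already scaled to 0..1.
    return (1 - alpha) * keyword_score + alpha * vector_score

# Invented per-document scores, each pre-normalized to 0..1.
docs = {
    "exact-match-but-shallow": {"keyword": 0.95, "vector": 0.30},
    "paraphrase-on-topic":     {"keyword": 0.20, "vector": 0.90},
}
for name, s in docs.items():
    print(name, round(hybrid_score(s["keyword"], s["vector"], alpha=0.7), 3))
```

Leaning vector-heavy (alpha near 1) lets the on-topic paraphrase beat the shallow exact match; leaning keyword-heavy flips it. That single dial is most of what “same breath” means in practice.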
FAISS
FAISS is the bare-metal library. It came out of Facebook, and it’s tuned for raw speed. We build our own index, pick our own storage, and wire it into whatever stack we’re using. She’s quick, but she won’t hold our hand. We own the plumbing.
A coder’s thought
Vector databases feel like teaching her to use a new kind of notebook. She doesn’t just copy facts anymore. She draws a map of meaning, and we get to decide how tidy or messy the map is. That’s the fun and the trap.