Your Code Is Versioned. Your Database Schema Is Not.
Here's what silent schema drift actually costs, and what treating your database like source code looks like in practice.

Most teams version everything: application code, infrastructure configs, environment variables, even Dockerfiles. Then someone renames a column in production to fix a quick bug, and three weeks later nobody can explain why the staging environment is returning different query results. The database schema is the one part of the stack that quietly diverges while everyone assumes someone else is keeping track.
This isn't a niche problem. A 2023 survey by Redgate found that schema changes are the single most common source of database-related outages, and the gap between what's running in production and what's documented anywhere is often measured in months, not days. Most teams discover the drift when something breaks, not before.
So what does "version controlling your database" actually mean? At its simplest, it means treating your schema the same way you treat source code: capturing its state at a point in time, storing that state somewhere reviewable, and being able to compare any two states to see exactly what changed. For a PostgreSQL database, that means tracking not just tables and columns but the full picture: views, functions, triggers, RLS policies, indexes, enums, and extensions. If it lives in the database and it affects behavior, it belongs under version control.
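A minimal sketch of what a reviewable snapshot might look like in practice: every object kind that affects behavior, serialized deterministically so the same schema always produces the same text and any change shows up cleanly in a diff. The object names, definitions, and the `current_user_id()` function below are invented for illustration.

```python
# Render a schema snapshot as sorted, diff-friendly plain text.
# Deterministic ordering matters: it keeps version-control diffs minimal.

def serialize_snapshot(objects: dict[str, dict[str, str]]) -> str:
    """Render {kind: {name: definition}} as stable, reviewable text."""
    lines = []
    for kind in sorted(objects):
        for name in sorted(objects[kind]):
            lines.append(f"-- {kind}: {name}")
            lines.append(objects[kind][name].strip())
            lines.append("")  # blank line between objects
    return "\n".join(lines)

snapshot = serialize_snapshot({
    "table": {"users": "CREATE TABLE users (id bigint PRIMARY KEY, email text);"},
    "index": {"users_email_idx": "CREATE INDEX users_email_idx ON users (email);"},
    "policy": {"users_self": "CREATE POLICY users_self ON users USING (id = current_user_id());"},
})
print(snapshot.splitlines()[0])  # → -- index: users_email_idx
```

Stored in a file and committed, text like this is exactly as reviewable as any other source code.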
There are a few misconceptions worth clearing up here. The first is that migration files solve this problem. They don't, exactly. Migration files track the *transitions* between states (add this column, drop that index), but they don't give you a clear picture of what the schema *currently looks like* across your environments. If you've ever tried to reconstruct your schema from a long chain of migration files, you know the gap. The second misconception is that this only matters at scale. Small teams with a single database can get away with informal habits for a while, but the cost shows up when someone new joins, when you spin up a second environment, or when you need to audit what changed before a production incident. The third is that backups are enough. A database backup captures data and structure together, but it's not designed for diffing, reviewing, or deploying changes; it's a recovery tool, not a development workflow.
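The migration-file gap can be made concrete with a toy example: the only way to learn the current state from a chain of migrations is to replay every transition in order. The migration steps and column names here are made up for illustration.

```python
# Replay a chain of (op, column) migration steps to recover current state.
# Note that nothing in the chain itself states what exists *now* -- you
# have to execute the whole history to find out.

def replay(migrations: list[tuple[str, str]]) -> set[str]:
    """Apply add/drop steps in order and return the surviving columns."""
    columns: set[str] = set()
    for op, column in migrations:
        if op == "add":
            columns.add(column)
        elif op == "drop":
            columns.discard(column)
    return columns

chain = [
    ("add", "id"),
    ("add", "email"),
    ("add", "phone"),
    ("drop", "phone"),   # added and removed again several migrations later
    ("add", "account_id"),
]
print(sorted(replay(chain)))  # → ['account_id', 'email', 'id']
```

With five migrations the replay is trivial; with five hundred, across three environments that may each be at a different point in the chain, it stops being trivial.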
The practical starting point is declarative state tracking. Instead of recording *how* the schema got to its current shape, you snapshot *what it currently is*, in plain SQL, stored in files you can read and diff like any other code. That snapshot becomes your source of truth. When a developer changes a table locally, you can compare their snapshot against staging or production before anything is deployed. When a deploy goes wrong, you know exactly what changed because you have both sides of the diff. This is a different mental model from migration-file tools, and the two approaches aren't mutually exclusive; plenty of teams use both.
One concrete failure mode that illustrates the stakes: column renames. If a developer renames `user_id` to `account_id` without version control in place, a naive deployment might drop the old column and create a new one. That's not a rename; that's data loss. With a snapshot of the previous state and the current state side by side, a tool can detect the rename pattern and handle it correctly, or at least warn you loudly before it does anything irreversible.
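A rough sketch of how such detection might work: a dropped column and an added column with the same type in the same table is more plausibly a rename than an unrelated drop-and-create. This heuristic and these column names are illustrative, not any real tool's algorithm (production tools use stronger signals than type equality).

```python
# Flag dropped/added column pairs that look like renames.
# Columns are modeled as {name: type} before and after the change.

def detect_renames(old: dict[str, str], new: dict[str, str]) -> list[tuple[str, str]]:
    """Pair each dropped column with added columns sharing its type."""
    dropped = {n: t for n, t in old.items() if n not in new}
    added = {n: t for n, t in new.items() if n not in old}
    renames = []
    for old_name, old_type in dropped.items():
        for new_name, new_type in added.items():
            if old_type == new_type:
                renames.append((old_name, new_name))
    return renames

old_cols = {"id": "bigint", "user_id": "bigint"}
new_cols = {"id": "bigint", "account_id": "bigint"}
print(detect_renames(old_cols, new_cols))  # → [('user_id', 'account_id')]
```

Even this crude version is enough to turn a silent `DROP COLUMN` into a loud "did you mean to rename this?" prompt, which is the difference that matters.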
If you're starting to think your team might have more schema drift than you realize, a useful exercise is to just look. Pull the current schema from each of your environments and compare them manually. The differences you find, and how hard they were to find, will tell you a lot about whether this is a problem worth solving now or later. That comparison, done automatically and continuously, is the core of what schema version control gives you.
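The "just look" exercise can be scripted. Assuming you've collected a schema-only dump per environment (for example with `pg_dump --schema-only`), fingerprinting each dump makes drift immediately visible. The dump strings below are invented for illustration:

```python
import hashlib

# Fingerprint each environment's schema dump and report which ones
# disagree with production.

def drift_report(dumps: dict[str, str]) -> dict[str, str]:
    """Map each environment to a short fingerprint of its schema text."""
    return {env: hashlib.sha256(text.encode()).hexdigest()[:8]
            for env, text in dumps.items()}

dumps = {
    "production": "CREATE TABLE users (id bigint, user_id bigint);",
    "staging":    "CREATE TABLE users (id bigint, account_id bigint);",
    "local":      "CREATE TABLE users (id bigint, account_id bigint);",
}
report = drift_report(dumps)
drifted = {env for env in report if report[env] != report["production"]}
print(sorted(drifted))  # → ['local', 'staging']
```

Matching fingerprints prove nothing has drifted; mismatched ones tell you where to go run the real diff.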