# database-migrations

Safe, zero-downtime database migration strategies: schema evolution, rollback planning, data migration, tooling, and anti-pattern avoidance for production systems. Use when planning schema changes, writing migrations, or reviewing migration safety.
Install via the ClawdBot CLI: `clawdbot install wpank/database-migrations`

| Strategy | Risk | Downtime | Best For |
|----------|------|----------|----------|
| Additive-Only | Very Low | None | APIs with backward-compatibility guarantees |
| Expand-Contract | Low | None | Renaming, restructuring, type changes |
| Parallel Change | Low | None | High-risk changes on critical tables |
| Lazy Migration | Medium | None | Large tables where bulk migration is too slow |
| Big Bang | High | Yes | Dev/staging or small datasets only |
Default to Additive-Only. Escalate to Expand-Contract only when you must modify or remove existing structures.
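As an illustration, an Expand-Contract column rename might look like this in PostgreSQL (table and column names here are hypothetical, and the backfill should be batched on large tables):

```sql
-- Expand: add the new column alongside the old one (fast, metadata-only)
ALTER TABLE users ADD COLUMN email_address text;

-- Dual-write phase: application writes both columns; backfill history
UPDATE users SET email_address = email WHERE email_address IS NULL;

-- Contract: once all readers use the new column, drop the old one
ALTER TABLE users DROP COLUMN email;
```

Each step ships as its own deploy; the old and new columns coexist until every reader and writer has moved over.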
Every production migration must avoid locking tables or breaking running application code.
| Operation | Pattern | Key Constraint |
|-----------|---------|----------------|
| Add column | Nullable first | Never add `NOT NULL` without a default on large tables |
| Rename column | Expand-contract | Add new → dual-write → backfill → switch reads → drop old |
| Drop column | Deprecate first | Stop reading → stop writing → deploy → drop |
| Change type | Parallel column | Add new type → dual-write + cast → switch → drop old |
| Add index | Concurrent | `CREATE INDEX CONCURRENTLY`; don't wrap in a transaction |
| Split table | Extract + FK | Create new → backfill → add FK → update queries → drop old columns |
| Change constraint | Two-phase | Add `NOT VALID` → `VALIDATE CONSTRAINT` separately |
| Add enum value | Append only | Never remove or rename existing values |
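A sketch of the two-phase constraint pattern for adding `NOT NULL` safely (PostgreSQL syntax; `orders` and `status` are hypothetical names):

```sql
-- 1. Add the column as nullable: metadata-only, no table rewrite
ALTER TABLE orders ADD COLUMN status text;

-- 2. Backfill existing rows (batch this on large tables)
UPDATE orders SET status = 'pending' WHERE status IS NULL;

-- 3. Enforce NOT NULL without a long full-table lock:
--    NOT VALID takes only a brief lock; VALIDATE scans without blocking writes
ALTER TABLE orders ADD CONSTRAINT orders_status_not_null
  CHECK (status IS NOT NULL) NOT VALID;
ALTER TABLE orders VALIDATE CONSTRAINT orders_status_not_null;
```

On PostgreSQL 11+, `ADD COLUMN ... DEFAULT ... NOT NULL` with a constant default is also safe in one statement, because the default is stored in the catalog instead of rewriting the table.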
| Tool | Ecosystem | Style | Key Strength |
|------|-----------|-------|-------------|
| Prisma Migrate | TypeScript/Node | Declarative (schema diff) | ORM integration, shadow DB |
| Knex | JavaScript/Node | Imperative (up/down) | Lightweight, flexible |
| Drizzle Kit | TypeScript/Node | Declarative (schema diff) | Type-safe, SQL-like |
| Alembic | Python | Imperative (upgrade/downgrade) | Granular control, autogenerate |
| Django Migrations | Python/Django | Declarative (model diff) | Auto-detection |
| Flyway | JVM / CLI | SQL file versioning | Simple, wide DB support |
| golang-migrate | Go / CLI | SQL (up/down files) | Minimal, embeddable |
| Atlas | Go / CLI | Declarative (HCL/SQL diff) | Schema-as-code, linting, CI |
Match the tool to your ORM and deployment pipeline. Prefer declarative for simple schemas, imperative for fine-grained data manipulation.
| Approach | When to Use |
|----------|-------------|
| Reversible (up + down) | Schema-only changes, early-stage products |
| Forward-only (corrective migration) | Data-destructive changes, production at scale |
| Hybrid | Reversible for schema, forward-only for data |
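A reversible migration is typically written as an up/down pair. Using golang-migrate's file convention, for example (file and object names hypothetical):

```sql
-- 000002_add_order_status.up.sql
ALTER TABLE orders ADD COLUMN status text;

-- 000002_add_order_status.down.sql
ALTER TABLE orders DROP COLUMN status;
```

Note that the down file is only a true rollback for schema-only changes: once `status` holds data, dropping it destroys information, which is why data-destructive changes favor forward-only corrective migrations.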
## Rollback Planning

- Soft deletes → rename with a `_deprecated` suffix instead of dropping
- Table snapshots → `CREATE TABLE _backup_<table> AS SELECT * FROM <table>`
- Point-in-time recovery → ensure WAL archiving covers migration windows
- Logical backups → `pg_dump` of affected tables before migration
### Blue-Green Database
1. Replicate primary β secondary (green)
2. Apply migration to green
3. Run validation suite against green
4. Switch traffic to green
5. Keep blue as rollback target (N hours)
6. Decommission blue after confidence window
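The validation step can be as simple as running the same integrity query against both databases (PostgreSQL-specific sketch; the table and columns are hypothetical, and `hashtext` assumes both sides run the same major version):

```sql
-- Results must match exactly on blue and green before cutover
SELECT count(*)                AS row_count,
       sum(hashtext(id::text)) AS id_checksum,
       max(updated_at)         AS last_write
FROM users;
```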
## Data Migration Patterns

### Backfill Strategies
| Strategy | Best For |
|----------|----------|
| Inline backfill | Small tables (< 100K rows) |
| Batched backfill | Medium tables (100Kβ10M rows) |
| Background job | Large tables (10M+ rows) |
| Lazy backfill | When immediate consistency is not required |
### Batch Processing

```sql
DO $$
DECLARE
  batch_size INT := 1000;
  rows_updated INT;
BEGIN
  LOOP
    UPDATE my_table
    SET new_col = compute_value(old_col)  -- compute_value: your backfill logic
    WHERE id IN (
      SELECT id FROM my_table
      WHERE new_col IS NULL
      LIMIT batch_size
      FOR UPDATE SKIP LOCKED
    );
    GET DIAGNOSTICS rows_updated = ROW_COUNT;
    EXIT WHEN rows_updated = 0;
    PERFORM pg_sleep(0.1);  -- throttle to reduce lock pressure
    COMMIT;                 -- release locks between batches (PostgreSQL 11+)
  END LOOP;
END $$;
```
### Dual-Write Period
For expand-contract and parallel change:
- Dual-write → application writes to both old and new columns/tables
- Backfill → fill new structure with historical data
- Verify → assert consistency (row counts, checksums)
- Cut over → switch reads to new, stop writing to old
- Cleanup → drop old structure after cool-down period
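Instead of app-level dual writes, the sync can also be enforced in the database with a trigger. A sketch, assuming a hypothetical `users.email` to `users.email_address` rename on PostgreSQL:

```sql
-- Keep the new column in sync while old code still writes the old one
CREATE OR REPLACE FUNCTION sync_email_address() RETURNS trigger AS $$
BEGIN
  NEW.email_address := NEW.email;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_sync_email_address
  BEFORE INSERT OR UPDATE OF email ON users
  FOR EACH ROW EXECUTE FUNCTION sync_email_address();
  -- PostgreSQL 11+; use EXECUTE PROCEDURE on older versions
```

A trigger guarantees no writer is missed, at the cost of hiding the dual-write in the database; drop it in the cleanup phase along with the old column.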
## Testing Migrations

### Test Against Production-Like Data
- Never test against empty or synthetic data only
- Use anonymized production snapshots
- Match data volume: a migration that works on 1K rows may lock on 10M
- Reproduce edge cases: NULLs, empty strings, max-length, unicode
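A hypothetical seed script covering those edge cases (PostgreSQL; the `users` schema is assumed):

```sql
INSERT INTO users (name, email) VALUES
  (NULL,             'null-name@example.com'),   -- NULL
  ('',               'empty@example.com'),       -- empty string
  (repeat('x', 255), 'max-length@example.com'),  -- max-length value
  ('Zoë 名前 🙂',     'unicode@example.com');     -- unicode
```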
### Migration CI Pipeline

```yaml
- name: Test migrations
  steps:
    - run: docker compose up -d db
    - run: npm run migrate:up        # apply all
    - run: npm run migrate:down      # rollback all
    - run: npm run migrate:up        # re-apply (idempotency)
    - run: npm run test:integration  # validate app
    - run: npm run migrate:status    # no pending
```
Every migration PR must pass: up → down → up → tests.
## Migration Checklist

### Pre-Migration
- [ ] Tested against production-like data volume
- [ ] Rollback written and tested
- [ ] Backup of affected tables created
- [ ] App code compatible with both old and new schema
- [ ] Execution time benchmarked on staging
- [ ] Lock impact analyzed
- [ ] Replication lag monitoring in place
### During Migration
- [ ] Monitor lock waits and active queries
- [ ] Monitor replication lag
- [ ] Watch for error rate spikes
- [ ] Keep rollback command ready
### Post-Migration
- [ ] Schema matches expected state
- [ ] Integration tests pass against migrated DB
- [ ] Data integrity validated (row counts, checksums)
- [ ] ORM schema / type definitions updated
- [ ] Deprecated structures cleaned up after cool-down
- [ ] Migration documented in team runbook
## NEVER Do
- NEVER run untested migrations directly in production
- NEVER drop a column without first removing all application references and deploying
- NEVER add `NOT NULL` to a large table in a single statement without a default value
- NEVER mix schema DDL and data mutations in the same migration file
- NEVER skip the dual-write phase when renaming columns in a live system
- NEVER assume migrations are instantaneous; always benchmark on production-scale data
- NEVER disable foreign key checks to "speed up" migrations in production
- NEVER deploy application code that depends on a schema change before the migration has completed