The Stigma
Mention SQLite in a production architecture discussion and watch the room divide. Half the engineers will nod — they’ve shipped it and it works. The other half will look at you like you suggested storing data in a text file.
The stigma comes from a real place. SQLite is an embedded database. It runs in-process. There’s no server, no TCP connections, no connection pooling, no replication. For a generation of developers trained on PostgreSQL and MySQL, this feels like a toy.
It isn’t.
What Changed
Three things shifted the calculus:
1. Hardware Got Fast
Modern NVMe drives do 500k+ random reads per second. SQLite’s single-writer limitation matters a lot less when writes complete in microseconds. Most web applications are 90%+ reads anyway.
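Write-ahead logging is a one-line pragma. A minimal sketch (the file path and the `print` are illustrative, not from the article):

```python
import os
import sqlite3
import tempfile

# Illustrative path — WAL mode is set once per database file and persists.
path = os.path.join(tempfile.mkdtemp(), "app.db")

db = sqlite3.connect(path)
# The pragma returns the journal mode now in effect.
mode = db.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "wal"
db.close()
```

Once set, the mode is stored in the database file header, so later connections get WAL without re-running the pragma.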
2. Litestream and LiteFS
Ben Johnson’s Litestream solved the backup and replication story. Continuous streaming of WAL changes to S3. Point-in-time recovery. The anxiety about “what if the disk dies” evaporated.
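The setup is a short config file. A sketch of a `litestream.yml` — the database path, bucket name, and region are placeholders, not values from the article:

```yaml
# Illustrative litestream.yml — replace paths and bucket with your own.
dbs:
  - path: /data/app.db        # hypothetical database location
    replicas:
      - type: s3
        bucket: my-backup-bucket   # placeholder bucket
        path: app
        region: us-east-1
```

Litestream tails the WAL and ships frames to the replica continuously, so recovery is a restore from S3 rather than a nightly dump.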
LiteFS went further — distributed SQLite with read replicas across multiple nodes. Fly.io runs its own platform tooling on it.
3. The Edge Happened
CDN-edge compute (Cloudflare Workers, Fly.io, Deno Deploy) runs your code in dozens of data centers simultaneously. You can’t connect to a central PostgreSQL instance from 200 edge locations without adding latency. But you can put a SQLite replica in each location.
The Architecture
For a typical content site or API with moderate write volume:
```
┌─────────────────────────────────────────┐
│ Primary Node                            │
│  ┌───────────┐    ┌──────────────────┐  │
│  │    App    │───▶│   SQLite (WAL)   │  │
│  └───────────┘    └──────────────────┘  │
│                           │             │
│                      Litestream         │
│                           │             │
└───────────────────────────┼─────────────┘
                            ▼
                    ┌──────────────┐
                    │  S3 Bucket   │
                    └──────────────┘
                            │
               ┌────────────┼────────────┐
               ▼            ▼            ▼
         ┌──────────┐ ┌──────────┐ ┌──────────┐
         │   Read   │ │   Read   │ │   Read   │
         │ Replica  │ │ Replica  │ │ Replica  │
         └──────────┘ └──────────┘ └──────────┘
```

Writes go to one node. Reads fan out to as many replicas as you need. Replication lag is typically under a second.
Performance Numbers
On a $20/month VPS with NVMe storage, SQLite with WAL mode handles:
| Operation | Throughput |
|---|---|
| Simple SELECT by primary key | ~400,000/sec |
| SELECT with JOIN (2 tables) | ~50,000/sec |
| INSERT (single row) | ~80,000/sec |
| INSERT (batched, 1000 rows) | ~300,000/sec |
These numbers are per-process. No network roundtrip. No connection overhead. No query parsing on a remote server. The database is a function call.
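The gap between single-row and batched inserts in the table comes from transaction overhead: one commit per row versus one commit per thousand. A rough sketch of the effect (the schema and pragmas here are assumptions, not the article's benchmark setup; absolute numbers vary by hardware):

```python
import os
import sqlite3
import tempfile
import time

# Illustrative benchmark database.
path = os.path.join(tempfile.mkdtemp(), "bench.db")
db = sqlite3.connect(path)
db.execute("PRAGMA journal_mode=WAL")
db.execute("PRAGMA synchronous=NORMAL")  # common pairing with WAL
db.execute("CREATE TABLE kv (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(f"value-{i}",) for i in range(1000)]

# One transaction per row: each commit pays full transaction overhead.
t0 = time.perf_counter()
for row in rows:
    db.execute("INSERT INTO kv (val) VALUES (?)", row)
    db.commit()
per_row = time.perf_counter() - t0

# One transaction for all 1000 rows: the overhead is amortized.
t0 = time.perf_counter()
db.executemany("INSERT INTO kv (val) VALUES (?)", rows)
db.commit()
batched = time.perf_counter() - t0

print(f"per-row: {per_row:.3f}s  batched: {batched:.3f}s")
db.close()
```

Batch your writes into transactions and SQLite's single-writer design stops being a practical ceiling for most workloads.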
When Not to Use It
SQLite is wrong when:
- Multiple writers are concurrent and frequent. SQLite serializes writes. If you need 10 services writing simultaneously, use PostgreSQL.
- Your dataset exceeds available disk. SQLite doesn’t shard. If you need distributed storage, it’s the wrong tool.
- You need row-level locking. SQLite locks the entire database on write. Short writes are fine; long-running write transactions are not.
- You need native JSON querying at scale. SQLite’s JSON support works but isn’t as mature as PostgreSQL’s jsonb.
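The write-serialization point is easy to demonstrate. A sketch with two connections to the same file, assuming a zero busy timeout so the second writer fails immediately instead of waiting:

```python
import os
import sqlite3
import tempfile

# Illustrative database shared by two connections.
path = os.path.join(tempfile.mkdtemp(), "locks.db")

a = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit BEGINs
a.execute("PRAGMA journal_mode=WAL")
a.execute("CREATE TABLE t (x INTEGER)")

b = sqlite3.connect(path, isolation_level=None)
b.execute("PRAGMA busy_timeout = 0")  # fail immediately rather than wait

a.execute("BEGIN IMMEDIATE")          # connection A takes the write lock
a.execute("INSERT INTO t VALUES (1)")

try:
    b.execute("BEGIN IMMEDIATE")      # B cannot get the lock while A holds it
    locked = False
except sqlite3.OperationalError:
    locked = True

a.execute("COMMIT")                   # A releases the lock...
b.execute("BEGIN IMMEDIATE")          # ...and now B succeeds
b.execute("COMMIT")
print("second writer blocked while first held the lock:", locked)
```

In practice you set `busy_timeout` to a few seconds so short write contention resolves by waiting; it is long-running write transactions that turn this serialization into a real problem.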
The Honest Assessment
SQLite won’t replace PostgreSQL for complex, write-heavy, multi-tenant SaaS applications. That’s not the claim.
The claim is simpler: for a surprising number of production workloads — content sites, personal projects, internal tools, read-heavy APIs, mobile apps, edge applications — SQLite is not just adequate. It’s better. Simpler to operate, faster to query, easier to back up, and one fewer service to keep running at 3am.
The best database is the one that solves your problem with the least operational burden. For more workloads than the industry admits, that’s SQLite.