## Start with the right indexes
The single biggest performance lever in Postgres is indexing. Our baseline for any new project:
- B-tree indexes on all foreign keys (Postgres doesn’t add these automatically)
- Partial indexes for soft-deleted rows (WHERE deleted_at IS NULL)
- GIN indexes for full-text search columns
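As a sketch, the baseline above translates to DDL like the following (table and column names are hypothetical, chosen only to illustrate each index type):

```sql
-- B-tree index on a foreign key (Postgres does not create these automatically)
CREATE INDEX idx_orders_customer_id ON orders (customer_id);

-- Partial index that excludes soft-deleted rows, keeping the index small
CREATE INDEX idx_orders_active ON orders (customer_id)
WHERE deleted_at IS NULL;

-- GIN index over a precomputed tsvector column for full-text search
CREATE INDEX idx_articles_search ON articles USING GIN (search_vector);
```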
## Query optimisation checklist
- Run EXPLAIN ANALYZE before and after every schema change
- Avoid SELECT * — always specify columns
- Use LIMIT with OFFSET carefully — switch to cursor-based pagination at scale
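To illustrate the last point, here is a hypothetical keyset (cursor-based) query next to the OFFSET version it replaces; table and column names are assumptions for the example. Running EXPLAIN ANALYZE on both makes the cost difference visible: the OFFSET form still scans and discards every skipped row, while the keyset form seeks directly to the cursor position.

```sql
-- OFFSET pagination: cost grows with the offset
SELECT id, title, created_at
FROM articles
ORDER BY created_at DESC, id DESC
LIMIT 20 OFFSET 100000;

-- Keyset pagination: resume from the last row of the previous page
SELECT id, title, created_at
FROM articles
WHERE (created_at, id) < ('2024-01-15 00:00:00', 41873)
ORDER BY created_at DESC, id DESC
LIMIT 20;
```

The keyset form needs a matching index on `(created_at, id)` and a stable, unique sort key to work correctly.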
## Connection pooling
We use PgBouncer in transaction mode for all production deployments. Without it, serverless functions will exhaust your connection limit within hours.
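A minimal `pgbouncer.ini` sketch for transaction mode; the hostname, pool sizes, and paths below are placeholders, not our production values:

```ini
[databases]
app_db = host=10.0.0.5 port=5432 dbname=app_db

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
```

Keep in mind that transaction pooling breaks session-level features such as session-scoped prepared statements, `SET` parameters, and advisory locks, since consecutive transactions from one client may run on different server connections.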
```sh
#!/bin/bash
echo "--- System Health Check Report ---"

# Check CPU usage (user + system time from the top summary line)
echo "CPU Usage:"
top -bn1 | grep "Cpu(s)" | awk '{print " " $2 + $4 "%"}'

# Check memory usage
echo "Memory Usage:"
free -m | awk 'NR==2{printf " Used: %s/%sMB (%.2f%%)\n", $3,$2,$3*100/$2}'

# Check disk usage on the root partition
echo "Disk Usage:"
df -h | awk '$NF == "/" {print " Root partition: " $5}'

echo "--- Report Complete ---"
```
## Monitoring
pg_stat_statements is your friend. Enable it on day one and query it weekly to surface your slowest queries by mean execution time.
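Enabling it requires preloading the extension and creating it in the database; a query along these lines surfaces the slowest statements by mean time (column names are per Postgres 13+, where the columns are `mean_exec_time` and `total_exec_time`; older versions use `mean_time` and `total_time`):

```sql
-- In postgresql.conf (requires a restart):
--   shared_preload_libraries = 'pg_stat_statements'
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 queries by mean execution time
SELECT query,
       calls,
       mean_exec_time,
       total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```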

