I've realized the mechanics behind the site may actually be interesting to some (and likely disliked by many), so I figured I'd share some of the technical details to give the post a little more 'meat'.

The whole thing runs on three used Intel mini PCs (i5-10500T, 16-24GB RAM each) under my desk. k3s cluster with embedded etcd for HA. Total electricity cost is about $11/month.

Database: CloudNativePG operator manages PostgreSQL with automatic failover (5-30s). Built-in PgBouncer pooling via the Pooler CR means I don't need to manage a separate connection pooler. Continuous WAL archiving to a local Garage box (Rust-based, S3-compatible storage) gives me an RPO of under five minutes.

Cache/realtime: Redis Sentinel with 3 Redis + 3 Sentinel instances for automatic master failover. Socket.io sits on top with the Redis adapter so WebSocket connections work across multiple API replicas.

Backups are three-tier: Barman Cloud Plugin handles continuous PostgreSQL WAL + daily base backups. Restic does encrypted daily snapshots of secrets and pg_dumps. Longhorn handles volume snapshots. All three target the same Garage S3 box on my LAN. Six alerting rules monitor the entire chain — if any tier goes stale, I get a Telegram alert.
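The staleness checks boil down to comparing each tier's last-success timestamp against a per-tier age budget. A minimal sketch of that logic in TypeScript — the tier names and thresholds here are illustrative, not my actual alert rules:

```typescript
// Illustrative staleness check: each backup tier has a maximum
// tolerated age; anything older should fire an alert.
type Tier = { name: string; lastSuccess: Date; maxAgeHours: number };

function staleTiers(tiers: Tier[], now: Date): string[] {
  return tiers
    .filter((t) => now.getTime() - t.lastSuccess.getTime() > t.maxAgeHours * 3_600_000)
    .map((t) => t.name);
}

const now = new Date('2025-01-02T12:00:00Z');
const report = staleTiers(
  [
    { name: 'wal-archive', lastSuccess: new Date('2025-01-02T11:58:00Z'), maxAgeHours: 1 },
    { name: 'restic-daily', lastSuccess: new Date('2024-12-31T03:00:00Z'), maxAgeHours: 26 },
    { name: 'longhorn-snap', lastSuccess: new Date('2025-01-02T03:00:00Z'), maxAgeHours: 26 },
  ],
  now,
);
// restic-daily last succeeded ~57h ago, well past its 26h budget
console.log(report); // → [ 'restic-daily' ]
```

In the real setup the "alert" side is a VictoriaMetrics rule rather than application code, but the comparison is the same.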

Screenshot generation was a fun one. When you post a message, the server generates a themed PNG of your message card for the email confirmation. Instead of spinning up a headless browser, I use Satori (JSX to SVG) + resvg (SVG to PNG). No Puppeteer, no Chrome, no browser at all. It renders the exact same React-like JSX with the user's theme colors and spits out a PNG in milliseconds.

The achievements system has 46 achievements across 8 categories and 6 tiers, all event-driven. When you post a message, leave a comment, or hit certain milestones, the evaluator checks eligibility and fires a WebSocket event for the toast notification. The whole thing is evaluated server-side so you can't fake progress.
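The evaluator is essentially "bump a counter, check thresholds, emit for anything newly crossed." A toy version with in-memory state — the achievement definitions and the notify callback stand in for the real database-backed evaluator and the Socket.io emit:

```typescript
// Illustrative event-driven achievement evaluator.
type AchEvent = { type: 'message.posted' | 'comment.posted'; userId: string };

interface Achievement { id: string; event: AchEvent['type']; threshold: number }

const ACHIEVEMENTS: Achievement[] = [
  { id: 'first-post', event: 'message.posted', threshold: 1 },
  { id: 'ten-posts', event: 'message.posted', threshold: 10 },
  { id: 'first-comment', event: 'comment.posted', threshold: 1 },
];

const counts = new Map<string, number>();        // `${userId}:${event}` → count
const unlocked = new Map<string, Set<string>>(); // userId → unlocked ids

// Progress lives server-side only, so clients can't fake it.
function evaluate(ev: AchEvent, notify: (userId: string, id: string) => void): string[] {
  const key = `${ev.userId}:${ev.type}`;
  const n = (counts.get(key) ?? 0) + 1;
  counts.set(key, n);
  const have = unlocked.get(ev.userId) ?? new Set<string>();
  unlocked.set(ev.userId, have);
  const fresh = ACHIEVEMENTS
    .filter((a) => a.event === ev.type && n >= a.threshold && !have.has(a.id))
    .map((a) => a.id);
  for (const id of fresh) {
    have.add(id);
    notify(ev.userId, id); // in production: WebSocket event for the toast
  }
  return fresh;
}

const toasts: string[] = [];
evaluate({ type: 'message.posted', userId: 'u1' }, (_, id) => toasts.push(id));
console.log(toasts); // → [ 'first-post' ]
```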

Message value decay is actually borrowed from HN's own gravity-based ranking. Values decrease over time following a configurable decay curve, and community reactions (likes/dislikes) directly influence the rate. Like a message and you slow its decay. Dislike it and it drops faster. The formula is feature-flagged so I can tune the gravity constant without a deploy.
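In HN-style gravity ranking, value falls off as a power of age. A sketch of how reactions could bend that curve — the constants and the reaction modifier here are made up for illustration; the real values sit behind the feature flag:

```typescript
// Sketch of an HN-style decay curve where reactions shift the
// effective gravity: likes slow decay, dislikes accelerate it.
function messageValue(
  baseValue: number,
  ageHours: number,
  likes: number,
  dislikes: number,
  gravity = 1.8, // the tunable constant
): number {
  // Clamp so the exponent can't go non-positive.
  const g = Math.max(0.2, gravity - 0.05 * likes + 0.05 * dislikes);
  return baseValue / Math.pow(ageHours + 2, g);
}

const fresh = messageValue(100, 0, 0, 0);   // brand new: 100 / 2^1.8
const liked = messageValue(100, 24, 10, 0); // gravity eased to 1.3
const hated = messageValue(100, 24, 0, 10); // gravity raised to 2.3
console.log(fresh > liked && liked > hated); // → true
```

The `+ 2` in the denominator (also from HN's formula) keeps brand-new messages from dividing by something tiny.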

Worker architecture: BullMQ with a dedicated worker process that runs independently from the API. Screenshots, emails, and async jobs all go through the queue. The worker can crash and restart without affecting API availability.

Monitoring is VictoriaMetrics + VictoriaLogs + Alloy + Grafana with 16 auto-provisioned dashboards. Probably the most overengineered monitoring setup for a message board in existence, but it's genuinely useful when something breaks at 3 AM.

The whole backend is NestJS with Prisma, frontend is Next.js 15 with React 19. CASL handles fine-grained permissions. Everything runs in non-root containers with dropped capabilities and network policies restricting pod-to-pod traffic.

Is this overengineered for a message board? Absolutely. But I've learned more about running production infrastructure in the last few months than I did in years of reading about it.