What does the system actually look like under the hood, and how does a non-developer maintain 12 services?
Part of the ForkIt! Case Study.
Three versions. Three layers of complexity. Each one added because the previous version couldn't do something users needed.
Three pieces. The app talks to a Vercel backend. The backend talks to Google Places API. One button, one API call, one random restaurant.
Total services: 3. Monthly cost: Google API only.
Fork Around sessions require real-time coordination: who joined, what filters they set, whether the host picked. Redis (Vercel KV) handles ephemeral session state with a 1-hour TTL.
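A minimal sketch of that pattern, using an in-memory Map as a stand-in for Vercel KV (the `Session` shape and field names are assumptions for illustration, not the app's actual schema; the real calls would use `@vercel/kv` with an expiry option):

```typescript
// Ephemeral session storage with a 1-hour TTL.
// An in-memory Map stands in for Vercel KV here; the real store
// would set the same TTL on the key so sessions expire on their own.
type Session = {
  hostName: string;
  participants: string[];
  filters: Record<string, unknown>;
  pickedPlaceId?: string;
};

const TTL_MS = 60 * 60 * 1000; // 1 hour, matching the session TTL

const store = new Map<string, { session: Session; expiresAt: number }>();

function saveSession(code: string, session: Session): void {
  store.set(code, { session, expiresAt: Date.now() + TTL_MS });
}

function loadSession(code: string): Session | null {
  const entry = store.get(code);
  if (!entry || Date.now() > entry.expiresAt) {
    store.delete(code); // expired sessions vanish, like a Redis TTL
    return null;
  }
  return entry.session;
}
```

The point of the TTL is that nothing has to clean up abandoned sessions; they simply stop existing after an hour.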
The web joiner lets people without the app join from a browser. A static HTML page, no framework, no build step.
Total services: 5. Added: Redis, Web.
User accounts need auth (Clerk). History and favorites need a database (Neon PostgreSQL). Subscriptions need payment infrastructure (RevenueCat). Crashes need reporting (Sentry). Code needs automated gates (GitHub Actions).
Total services: 12. Everything else was free tier.
Vercel Hobby plan caps serverless functions at 12. The backend hit that limit. When Claude refactored and renamed endpoints, backward-compatible rewrites were needed so already-deployed apps wouldn't break.
Those rewrites still count toward the function limit. Every new endpoint means consolidating or removing an old one.
Sentry was added after users reported crashes. Not before. CI was added after a bad deploy broke production. Not before.
Lesson: set up observability on day 1, not after the problems are already live.
An LLM built the code. An LLM also builds bugs, billing exposures, dead code, and accessibility failures. The review suite exists because "trust, but verify" applies double when neither you nor your tool has built software before.
Run with one command: `npm run review`
Read every source file. Cite lines. Rate by severity.
Findings table by severity (CRITICAL / HIGH / MEDIUM / LOW). Top 5 issues to fix first. Areas that passed clean. Recommended fix order grouped by related changes.
The last full run found 12 HIGH findings across sync logic, account deletion, test coverage, and cost caps. Every one became a GitHub Issue, was triaged, and was fixed in order.
The review suite is not a checkbox. It is the reason production hasn't broken since it was adopted. The code is LLM-generated. The quality gates are human-designed.
Four phases. Every feature, every bug fix, every session follows the same loop. The order matters: stability first, then polish, then new features.
Crashes, data loss, security exploits, broken deploys. Nothing else moves until every Tier 1 issue is verified closed on a physical device via USB debugging.
Theme inconsistencies, font scaling, layout breaks, misaligned elements. These affect trust. A polished app signals care.
New functionality only after stability and polish are clean. Features are scoped to a single session when possible. If a bug surfaces mid-feature, it gets filed as an issue, not fixed inline (prevents rabbit trails).
Run the full 31-review suite. Automated checks first (`npm run review`), then manual deep-dives. New findings become GitHub Issues, triaged by severity.
Reviews always surface new issues. Those go back into the build queue at the appropriate tier. The cycle repeats until the review comes back clean enough to ship.
Never submit store builds without testing locally first. Commit, build local, test on a physical device, THEN submit. This rule exists because of the Annapolis demo: a backend update deployed without the app being tested against it. The app broke at a restaurant. Three users lost.
The guiding principle: "as free as possible." If it weren't for Google API costs, this app might not charge at all. The Pro tier exists to offset infrastructure, not as a revenue model.
Monthly operating cost:

| Service | Cost |
| --- | --- |
| Google Places API | Variable (per-call) |
| Vercel (backend + web) | Free tier |
| Neon (PostgreSQL) | Free tier |
| Vercel KV (Redis) | Free tier |
| Clerk (auth) | Free tier |
| RevenueCat (IAP) | Free tier |
| Sentry (crash reporting) | Free tier |
| GitHub Actions (CI) | Free tier |
| EAS (builds, when active) | $19/mo |
| Total | ~$20/mo + API |
The single most impactful optimization. Instead of calling Google Places on every tap, the first "Fork It" tap fetches a full pool of results. Every subsequent tap picks randomly from the cached pool locally, with zero API calls.
Before: user taps 5 times = 5 API calls. Every re-roll hits Google Places, so costs scale linearly with user engagement.

After: user taps 5 times = 1 API call. The first tap fetches the pool; re-rolls pick locally. The cache invalidates on filter change or after 4 hours.
Each one reduced per-search cost or eliminated unnecessary calls entirely.
Google killed the $200/month free credit in late 2024. The app hit Enterprise-tier pricing (higher per-call costs) without warning. Every optimization above was retroactive damage control. Cost architecture should be designed up front, not bolted on after the bill arrives.
The backend API had no rate limiting or origin checking for weeks. No user data was at risk (the search endpoint doesn't store or transmit personal data), but anyone who found the URL could have run up the Google Places API bill on my account. An LLM built it. An LLM didn't flag the gap. I didn't know to check.
No rate limiting. No CORS restrictions. No origin checking. The Google API key was server-side (good), but the endpoints that used it were open to the internet (bad). The risk was financial, not personal: someone could have run up the API bill, not accessed user data.
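A minimal fixed-window rate limiter of the kind that was later added (the window, the per-IP limit, and all names here are illustrative assumptions; the real implementation may differ):

```typescript
// Fixed-window, per-IP rate limiting: caps how fast any one caller
// can burn through the Google Places budget on an open endpoint.
const WINDOW_MS = 60_000; // 1-minute window (assumed)
const MAX_REQUESTS = 30;  // per IP per window (assumed)

const hits = new Map<string, { count: number; windowStart: number }>();

function allowRequest(ip: string, now = Date.now()): boolean {
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now }); // new window for this IP
    return true;
  }
  entry.count++;
  return entry.count <= MAX_REQUESTS; // reject once the cap is exceeded
}
```

A handler would call `allowRequest` first and return HTTP 429 on `false`. Note that in-memory counters reset on every serverless cold start; a shared store like the existing Redis would be the durable variant.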
The security hardening took multiple sessions spread across weeks. None of it was in the original design. All of it should have been. The review suite (specifically Reviews 7, 19, and 21) now catches these patterns before they ship.
Fork Around lets a group of friends collectively pick a restaurant. One host, up to 8 participants, 4-letter session code, browser or app. Here is how it works at the system level.
- /api/group/create — the backend generates a 4-letter code and stores the session in Redis with a 1-hour TTL. The session is also saved to AsyncStorage so the host can reconnect.
- /api/group/join — called with the code and a display name. App users join in-app; browser users join via the web joiner at forkaround.io/group/.
- /api/group/pick — the backend merges all filters (most restrictive wins), searches Google Places with the merged criteria, and picks one result at random.
- /api/group/leave — removes a participant from the session.
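Code generation can be sketched like this (the alphabet, the dropped look-alike letters, and the collision handling are all assumptions; the app's actual generator may differ):

```typescript
// 4-letter session-code generation with a collision check against
// live sessions. Dropping look-alike I and O is an assumption.
const LETTERS = "ABCDEFGHJKLMNPQRSTUVWXYZ";

function randomCode(): string {
  let code = "";
  for (let i = 0; i < 4; i++) {
    code += LETTERS[Math.floor(Math.random() * LETTERS.length)];
  }
  return code;
}

function newSessionCode(existing: Set<string>): string {
  // Retry on the (rare) collision with a live session code.
  let code = randomCode();
  while (existing.has(code)) code = randomCode();
  return code;
}
```

With a 24-letter alphabet there are 24^4 ≈ 330k possible codes, so collisions among a handful of concurrent 1-hour sessions are vanishingly rare.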
If Alice wants a 5-mile radius and Bob wants 2 miles, the search uses 2 miles. If one person sets a $$ max price, no $$$ results appear.
This is deliberate. The alternative (union/broadest) returns results that someone actively excluded. The most restrictive merge respects everyone's constraints, even if it narrows the pool.
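The merge is just a per-field minimum. A sketch, with field names assumed for illustration:

```typescript
// "Most restrictive wins": every merged field is the tightest
// constraint any participant set.
type Filters = {
  radiusMiles: number; // smaller radius is more restrictive
  maxPrice: number;    // 1 = $, 2 = $$, ...; a lower cap is more restrictive
};

function mergeFilters(all: Filters[]): Filters {
  return all.reduce((merged, f) => ({
    radiusMiles: Math.min(merged.radiusMiles, f.radiusMiles),
    maxPrice: Math.min(merged.maxPrice, f.maxPrice),
  }));
}
```

So Alice's 5-mile / $$$ filters merged with Bob's 2-mile / $$ filters produce a 2-mile, $$-capped search.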
A standalone HTML page. No framework, no build step, no app install required. The host shares a link; the friend opens it in any browser.
It calls the same backend endpoints as the app. No duplicate logic. The web joiner is a thin UI layer over the same API.
Vercel serverless functions are stateless. They spin up, execute, and die. WebSockets require persistent connections, which means a dedicated server (added cost, added complexity).
Instead: polling. The app and web joiner poll /api/group/status every few seconds. It is not elegant. It works. It costs nothing. At 8 participants max and 1-hour sessions, the polling load is negligible.
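The polling loop can be sketched like this (the interval, attempt cap, and status shape are assumptions; only the endpoint path comes from the text):

```typescript
// Client-side polling loop: ask the backend every few seconds
// whether the host has picked, instead of holding a WebSocket open.
async function pollStatus(
  code: string,
  fetchStatus: (code: string) => Promise<{ picked: boolean }>,
  intervalMs = 3000,
  maxAttempts = 20
): Promise<boolean> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await fetchStatus(code); // GET /api/group/status in the real app
    if (status.picked) return true;         // host picked; stop polling
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  return false; // gave up; the session may have expired
}
```

Each poll is one cheap stateless function invocation, which is exactly what Vercel's model is good at; the persistent-connection problem never arises.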
The architecture is not clever. Twelve services, mostly free tiers, held together by a 31-point review suite and a dev cycle that prioritizes stability over speed.
Every service was added because the previous version couldn't do something users needed. Nothing was added for technical novelty. The constraint ("as free as possible") drove the design better than any architecture diagram could have.
The architecture isn't clever. It's cheap, observable, and it ships.