[001] / SELECTED WORK

·JOB

EUNO

Founding engineer at Euno. Rebuilt the product across three production repos and now lead the engineering team.

ROLE:
Founding Software Engineer
TIMELINE:
October 2025 – Present
STATUS:
Ongoing
STACK:
React Native (Expo 54), TypeScript, NestJS 11

Hired as the founding engineer at Euno in October 2025. Rebuilt the product end-to-end across three production repos — React Native iOS, NestJS backend, Vite web. Started solo; now lead a small engineering team while continuing to own architecture and the AI agent pipeline.

[METRICS]

3
PRODUCTION REPOS
22
INTEGRATION TESTS

[STACK]

  • React Native (Expo 54)
  • TypeScript
  • NestJS 11
  • Supabase (PostgreSQL + pgvector)
  • Upstash QStash
  • Anthropic SDK (Claude Haiku 4.5)
  • OpenAI SDK (GPT-4o-mini)
  • AES-GCM (Web Crypto + @noble/ciphers)
  • expo-local-authentication (Face ID / Touch ID)
  • Stripe
  • HealthKit
  • Vite + React + Tailwind + shadcn/ui
  • Langfuse
  • Sentry

[01]

THE PROBLEM

When I joined Euno, the product was a Swift iOS app with client-side scheduling and no real backend AI loop. It couldn't run on Android, state was trapped on-device, and scaling to a social layer or persistent AI was architecturally off the table. I was brought on to rebuild it solo without pausing shipping.

[02]

WHAT I BUILT

Three production repos, all in TypeScript:

  • A cross-platform React Native app (migrated from Swift) with biometric-gated field-level encryption, offline-first caching, and HealthKit sync.
  • A modular NestJS 11 backend handling auth, AI generation, semantic retrieval, social features, billing, and notifications, with full test coverage and production-grade error handling.
  • A Vite + React + Tailwind + shadcn/ui web app (243 files, 15 Supabase migrations) for end-users and licensed providers, with live Stripe billing, dual auth contexts, and two AI-powered Supabase edge functions.

The team has grown since; I now lead a small engineering team while continuing to own architecture and the AI pipeline.

[03]

THE 7-STEP AI AGENT PIPELINE

The core of Euno is a deterministic 7-step pipeline (NestJS, 40 files, 15 services) that runs continuously per user — snapshot extraction, gap/tension analysis, and personalized thought generation, all with cost-aware step gating and 22 behavioral integration tests. The data model is longitudinal: raw chat history compresses into encrypted snapshots → tensions → versioned portraits, which serve as the substrate for every downstream step. By holding per-call LLM context under ~1,250 tokens, the pipeline runs cheaply enough to schedule on a heartbeat instead of on demand.
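The gating logic reduces to a shape like this. This is an illustrative sketch, not the production code — step names, the `StepContext` interface, and the per-step token estimates are placeholders; only the ~1,250-token budget comes from the write-up above:

```typescript
// Hypothetical shape of a cost-gated, deterministic per-user pipeline.
interface StepContext {
  tokensUsed: number;                       // running token count for this tick
  artifacts: Record<string, unknown>;       // snapshot → tensions → portrait chain
}

interface PipelineStep {
  name: string;
  estimatedTokens: number;                  // cost estimate used for gating
  shouldRun: (ctx: StepContext) => boolean; // cheap deterministic gate, no LLM call
  run: (ctx: StepContext) => StepContext;
}

const TOKEN_BUDGET = 1250; // per-call LLM context ceiling

function runPipeline(steps: PipelineStep[], initial: StepContext): StepContext {
  return steps.reduce((ctx, step) => {
    // Skip a step when its gate says no, or when running it would blow the budget.
    if (!step.shouldRun(ctx)) return ctx;
    if (ctx.tokensUsed + step.estimatedTokens > TOKEN_BUDGET) return ctx;
    return step.run(ctx);
  }, initial);
}
```

Because every gate is a pure function of the accumulated context, a given user state always produces the same step sequence — which is what makes 22 behavioral integration tests tractable.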

[04]

LLM COST OPTIMIZATION

The first production version of the pipeline ran two analysis LLM calls per user tick, plus a separate gate model. I collapsed the two analysis calls into one unified GPT-4o-mini call, removed the redundant gate, and switched the chat layer to Claude Haiku 4.5. Then I instrumented Langfuse for per-feature token tracking so I could see which step was leaking spend. The result: a lower bill, fewer moving parts, faster response times.
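The arithmetic behind the consolidation looks roughly like this. The token counts below are illustrative placeholders (not Euno's real numbers), and the per-token prices are GPT-4o-mini's public list pricing at the time of writing; the point is only that fewer calls with shared context beats separate calls with duplicated context:

```typescript
// Back-of-envelope cost model for collapsing N analysis calls into one.
interface CallProfile { calls: number; inputTokens: number; outputTokens: number; }

const PRICE = { inputPerTok: 0.15 / 1e6, outputPerTok: 0.60 / 1e6 }; // USD, GPT-4o-mini list pricing

function costPerTick(p: CallProfile): number {
  return p.calls * (p.inputTokens * PRICE.inputPerTok + p.outputTokens * PRICE.outputPerTok);
}

// Before: two analysis calls (each carrying its own copy of the context)
// plus a small gate-model call. After: one unified call, no gate.
const before = costPerTick({ calls: 2, inputTokens: 900, outputTokens: 300 })
             + costPerTick({ calls: 1, inputTokens: 200, outputTokens: 10 });
const after  = costPerTick({ calls: 1, inputTokens: 1100, outputTokens: 350 });
```

With a pipeline running on a heartbeat for every user, a per-tick saving like this multiplies directly into the monthly bill.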

[05]

BULLMQ → QSTASH MIGRATION

The first job queue was BullMQ on Redis. It worked, but Redis was a long-lived dependency that needed monitoring, and the worker model didn't match the bursty serverless shape of the rest of the stack. I replaced it with Upstash QStash — HTTP-triggered, durable retries, no Redis to operate. The migration unlocked variable-ratio scheduling: timezone-aware morning/evening anchors with jitter, which testing showed to be more engaging than fixed cron times.
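The scheduling math is simple: take a local-time anchor, convert it to UTC for the user's timezone, and add a uniform jitter. A minimal sketch (function name, anchor hours, and the jitter window are illustrative; the computed timestamp would then be handed to QStash's delayed-delivery mechanism):

```typescript
// Variable-ratio scheduling: timezone-aware anchor with ± jitter.
function nextDeliveryUtc(
  anchorHourLocal: number,            // e.g. 8 for a morning anchor
  tzOffsetMinutes: number,            // user's UTC offset, e.g. -300 for EST
  jitterMinutes: number,              // ± window around the anchor
  rand: () => number = Math.random,   // injectable for deterministic tests
): Date {
  const now = new Date();
  const anchor = new Date(Date.UTC(
    now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate(),
    anchorHourLocal, 0, 0,
  ));
  // Convert the local-time anchor to UTC by subtracting the user's offset.
  anchor.setUTCMinutes(anchor.getUTCMinutes() - tzOffsetMinutes);
  // Uniform jitter in [-jitterMinutes, +jitterMinutes].
  const jitter = Math.round((rand() * 2 - 1) * jitterMinutes);
  anchor.setUTCMinutes(anchor.getUTCMinutes() + jitter);
  // If today's slot already passed, roll forward to tomorrow's anchor.
  if (anchor <= now) anchor.setUTCDate(anchor.getUTCDate() + 1);
  return anchor;
}
```

Injecting `rand` keeps the jitter testable, which matters once delivery times feed behavioral integration tests.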

[06]

PRIVACY ARCHITECTURE

Field-level AES-GCM encryption across every user-content table. Per-record random IVs. Biometric gate (Face ID / Touch ID) for key access. Production startup validation that fails the boot if keys aren't loaded correctly. On top of that: Supabase JWT migration and 7 RLS / security-hardening migrations to unblock the social product pivot. The point is that I can't read your data, and the system enforces that even if I tried.
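The per-field shape looks like this. A minimal sketch using Node's built-in crypto module — the stack list above names Web Crypto and @noble/ciphers, so treat this as the pattern (random per-record IV, auth tag stored with the ciphertext), not the exact implementation, and the helper names are mine:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt one field value with AES-256-GCM; IV and auth tag travel with the
// ciphertext so each record is independently decryptable.
function encryptField(key: Buffer, plaintext: string): string {
  const iv = randomBytes(12); // fresh random 96-bit IV per record
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Layout: [12-byte IV][16-byte tag][ciphertext], base64 for a text column.
  return Buffer.concat([iv, tag, ct]).toString("base64");
}

function decryptField(key: Buffer, stored: string): string {
  const buf = Buffer.from(stored, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ct = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM verifies integrity; tampering throws on final()
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

The random IV means the same plaintext encrypts to a different ciphertext every time, so the database can't even reveal that two users wrote the same thing.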

[07]

WHAT I'D DO DIFFERENTLY

I was aggressive about production safety early — startup validation, rate limiting, RLS, generic error responses — and I'd do that again. The one architectural call I'd revisit is the job queue. I switched queues mid-build when requirements became clearer, and that pivot cost time reworking the lock and retry logic. Picking the queue first and designing the distributed state around it would've been cleaner.

[UP NEXT]

002 / COVERME