Three feeds merge into the Notion Roles + People databases: a daily upstream Exa-driven sweep across LinkedIn, Greenhouse, Lever, Ashby, Wellfound, Built In, YC, and a hand-picked editorial source list; user submissions via /gtme/submit; and a small slice of manually edited editorial fields (taglines, KPIs, notes). New postings are URL-deduped, live-checked, classified into one of the archetypes, and rated 1–10 on AI / GTM / data / experimentation / coding. URLs marked Likely Open whose postings have gone dead overnight are flipped to Closed.
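The URL-dedup step could look something like this TypeScript sketch — the tracking-param list and normalization rules here are illustrative assumptions, not the pipeline's actual logic:

```typescript
// Canonicalize a job URL so the same posting found on two sweeps
// hashes identically: drop tracking params, fragments, and trailing
// slashes, and lowercase the host.
function canonicalJobUrl(raw: string): string {
  const u = new URL(raw);
  // Assumed-noise query params; the real list is unknown.
  for (const key of [...u.searchParams.keys()]) {
    if (key.startsWith("utm_") || key === "ref" || key === "gh_src") {
      u.searchParams.delete(key);
    }
  }
  u.hash = "";
  u.hostname = u.hostname.toLowerCase();
  u.pathname = u.pathname.replace(/\/+$/, "") || "/";
  return u.toString();
}

// Keep the first occurrence of each canonical URL.
function dedupe(urls: string[]): string[] {
  const seen = new Set<string>();
  const out: string[] = [];
  for (const url of urls) {
    const key = canonicalJobUrl(url);
    if (!seen.has(key)) {
      seen.add(key);
      out.push(url);
    }
  }
  return out;
}
```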
Two public Notion databases hold the dataset. Roles is the big one — every JD with company, title, profile classification, status, posted date, and the 5-axis fit rating. People is a parallel table for practitioners. A private Submissions DB collects user-submitted rows.
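A Roles row, as it might serialize into the /data files, could be typed roughly as below — the field names and status values are illustrative guesses, not the live Notion property names:

```typescript
// Hypothetical shape of one serialized Roles row.
type RoleStatus = "Likely Open" | "Closed";

interface RoleRow {
  company: string;
  title: string;
  profile: string;      // archetype classification
  status: RoleStatus;
  postedDate: string;   // ISO date, e.g. "2024-05-01"
  fit: {                // 5-axis fit rating, each axis 1–10
    ai: number;
    gtm: number;
    data: number;
    experimentation: number;
    coding: number;
  };
}

// Sanity-check that every axis is an integer in the 1–10 range.
function isValidFit(fit: RoleRow["fit"]): boolean {
  return Object.values(fit).every(
    (v) => Number.isInteger(v) && v >= 1 && v <= 10,
  );
}
```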
A Cloudflare Pages build runs once a day. Its prebuild step pulls every Roles + People row, rewrites the static /public/data CSV + JSON files the site reads from, and mirrors the headline numbers back to a private Notion summary page. The Notion token lives in Cloudflare env vars only — no GitHub Action, no bot commits. User submissions land as Pending rows immediately but get folded in on the next daily rebuild.
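The CSV-writing half of that prebuild step can be sketched as a pure function — this assumes rows arrive as flat string records, with quoting per the usual RFC 4180 convention (fields containing commas, quotes, or newlines are wrapped in double quotes, and embedded quotes are doubled):

```typescript
// Quote a single CSV field only when it needs it.
function csvField(value: string): string {
  return /[",\n]/.test(value) ? `"${value.replace(/"/g, '""')}"` : value;
}

// Serialize flat records into CSV with a fixed column order;
// missing keys become empty fields.
function toCsv(rows: Record<string, string>[], columns: string[]): string {
  const header = columns.join(",");
  const body = rows.map((r) =>
    columns.map((c) => csvField(r[c] ?? "")).join(","),
  );
  return [header, ...body].join("\n");
}
```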
A Vite + React app that pre-renders every /gtme/* page at build time and serves the dataset as plain CSV / JSON from /data. No backend, no database query at request time. The submission form is the one exception — a Cloudflare Pages Function that proxies a POST into the Notion Submissions DB.
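The submission proxy likely boils down to building a Notion page-create payload and POSTing it to the real `https://api.notion.com/v1/pages` endpoint from inside an `onRequestPost` handler. The payload builder might look like this sketch — the property names ("URL", "Notes", "Status") are assumptions, not the live Submissions schema:

```typescript
// Build the body for POST https://api.notion.com/v1/pages.
// New submissions land as Pending until the next daily rebuild.
function toNotionPayload(databaseId: string, url: string, notes: string) {
  return {
    parent: { database_id: databaseId },
    properties: {
      URL: { url },
      Notes: { rich_text: [{ text: { content: notes } }] },
      Status: { select: { name: "Pending" } },
    },
  };
}
```

In the actual Pages Function, this body would be sent with an `Authorization: Bearer <token>` header read from the Cloudflare env — keeping the Notion token out of the client bundle, as the build notes above describe.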
Not original research and not comprehensive — an aggregated slice biased toward English-language postings, US/EU/remote roles, and surfaces that Exa indexes well. The 5-axis ratings are keyword-mined heuristics; comp is quoted verbatim from the JD when present and left blank when not. Nothing is human-reviewed. Spot something off? /gtme/submit, and the next sweep picks it up.
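To make "keyword-mined heuristics" concrete, a per-axis score could be as simple as the sketch below — the keyword list and the hits-to-score mapping are invented for illustration, not the scorer actually used:

```typescript
// Count keyword hits in the JD text and clamp into the 1–10 range.
// More hits push the axis score up; zero hits floor it at 1.
function axisScore(jd: string, keywords: string[]): number {
  const text = jd.toLowerCase();
  const hits = keywords.filter((k) => text.includes(k)).length;
  return Math.max(1, Math.min(10, 1 + hits * 2));
}
```

A scorer like this is cheap and transparent but easy to fool — hence the disclaimer above that the ratings are heuristics, not human judgments.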