Session replay, heatmaps, engagement scoring and event analytics — built on the same capacity-priced store as your app data. No per-query bill. No second vendor. No sampling.
Turn it on
Session replay with rrweb, engagement scoring and frustration detection — the same data, stored once.
Drop in the event-capture SDK and start collecting. Everything below runs against the same distributed-search store that powers your product — no separate analytics warehouse, no ETL.
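Under the hood, a capture SDK like this typically buffers raw events client-side and ships them in batches. A minimal sketch of that pattern, assuming hypothetical names (`EventBuffer`, `capture`, `flush` and the `sink` callback are illustrative, not the real SDK API):

```javascript
// Illustrative event-capture buffer: collect events, flush in batches.
// This is a sketch of the general pattern, not MinusOneDB's SDK.
class EventBuffer {
  constructor(sink, flushSize = 20) {
    this.sink = sink;           // receives each flushed batch of events
    this.flushSize = flushSize; // flush once this many events are pending
    this.pending = [];
  }
  capture(type, props = {}) {
    this.pending.push({ type, props, ts: Date.now() });
    if (this.pending.length >= this.flushSize) this.flush();
  }
  flush() {
    if (this.pending.length === 0) return;
    this.sink(this.pending.splice(0)); // drain and hand off the batch
  }
}

const batches = [];
const buf = new EventBuffer((b) => batches.push(b), 2);
buf.capture('pageview', { path: '/' });
buf.capture('click', { selector: '#buy' }); // hits flushSize, triggers a flush
```

A real SDK adds flush-on-interval and flush-on-unload, but the capture/batch/send loop is the whole shape.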
Every session gets a 0–100 engagement score with a frustration badge. Filter by rage clicks, duration, page depth — jump straight to the users who matter.
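A scoring function of this shape blends a few session signals into a bounded 0–100 number. The weights and caps below are illustrative assumptions, not MinusOneDB's actual formula:

```javascript
// Illustrative engagement score: blend capped duration, page depth and
// event count into 0-100, and raise a frustration flag on rage clicks.
// Weights (40/30/30) and caps are assumptions for the sketch.
function scoreSession(s) {
  const durationPts = Math.min(s.durationSec / 300, 1) * 40; // cap at 5 min
  const depthPts    = Math.min(s.pagesViewed / 10, 1) * 30;  // cap at 10 pages
  const eventPts    = Math.min(s.events / 50, 1) * 30;       // cap at 50 events
  return {
    score: Math.round(durationPts + depthPts + eventPts),
    frustrated: s.rageClicks > 0,
  };
}

const r = scoreSession({ durationSec: 150, pagesViewed: 5, events: 25, rageClicks: 2 });
// 20 + 15 + 15 = a score of 50, with the frustration badge set
```

Because each component is capped, one marathon session can't dominate the scale, and filtering "score > 70 and frustrated" stays meaningful across traffic shapes.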
Full rrweb replay beside a colour-coded timeline: page views, clicks, scrolls, form submits, rage clicks. Skip inactive time, play at 1–8×.
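"Skip inactive time" reduces to a small interval computation over event timestamps: collapse any gap longer than a threshold, keep the active spans. A self-contained sketch (the 5-second gap threshold and the flat timestamp input are assumptions, not rrweb's payload format):

```javascript
// Given sorted event timestamps (ms), return the spans of activity,
// treating any gap longer than maxGapMs as skippable idle time.
function activeSpans(timestamps, maxGapMs = 5000) {
  const spans = [];
  let start = timestamps[0];
  for (let i = 1; i < timestamps.length; i++) {
    if (timestamps[i] - timestamps[i - 1] > maxGapMs) {
      spans.push([start, timestamps[i - 1]]); // close the current span
      start = timestamps[i];                  // open a new one after the gap
    }
  }
  spans.push([start, timestamps[timestamps.length - 1]]);
  return spans;
}

const spans = activeSpans([0, 1000, 2000, 60000, 61000]);
// the 58s gap is skipped: [[0, 2000], [60000, 61000]]
```

The replay player then seeks across the gaps between spans, so a 40-minute session with 3 minutes of interaction plays in 3 minutes.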
Per-page click heatmaps and ten-band scroll-depth visualisations, rendered from raw events — not pre-aggregated. Any page, any time window.
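Rendering ten scroll-depth bands from raw events is a straightforward aggregation: bucket each session's maximum scroll percentage into 10% bands, and count every band the session passed through. A sketch under those assumptions (input shape and band semantics are illustrative):

```javascript
// For each session's max scroll depth (0-100%), count how many sessions
// reached each of ten 10% bands. A session that reached 72% counts
// toward bands 0 through 7.
function scrollBands(maxDepths) {
  const bands = new Array(10).fill(0); // bands[i] = sessions reaching >= i*10%
  for (const pct of maxDepths) {
    const reached = Math.min(Math.floor(pct / 10), 9);
    for (let b = 0; b <= reached; b++) bands[b]++;
  }
  return bands;
}

const bands = scrollBands([100, 72, 35]);
// [3, 3, 3, 3, 2, 2, 2, 2, 1, 1] - reach falls off down the page
```

Because the bands are computed from raw events at query time, the same data answers "last Tuesday, mobile only" without any pre-aggregation.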
Real-time KPIs, event-type breakdowns, top pages, OS/platform/geo facets. Auto-refresh on. Queries run in seconds at any data volume.
On a per-query warehouse, the cost of analytics isn’t just a second contract with FullStory or Amplitude. The real cost is what you stop doing because the meter is running.
A dedicated analytics vendor (PostHog, FullStory, Heap, Hotjar, Amplitude) and a warehouse (BigQuery, Databricks, Snowflake) running the SQL that vendor can’t. You’re paying twice to store the same events and query them two different ways.
A moderate agent workload — continuous scoring, retrieval loops, a handful of production agents hitting the same tables — can push a BigQuery or Databricks bill into six figures monthly with nothing you’d call “heavy.” Snowflake credits run cheaper per query but the same loop still burns through them faster than finance can approve. Per-query pricing, in any flavour, is not designed for agents that query in a loop.
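The arithmetic behind that claim is worth making explicit. A back-of-envelope sketch, where every number (agent count, loop rate, per-query cost) is an illustrative assumption rather than a measured vendor price:

```javascript
// Back-of-envelope cost of agents querying in a loop on per-query pricing.
// All inputs are assumptions for illustration, not real vendor rates.
const agents = 5;               // a handful of production agents
const queriesPerMinute = 10;    // each agent's retrieval/scoring loop
const costPerQuery = 0.05;      // assumed average cost in USD per query

const monthlyQueries = agents * queriesPerMinute * 60 * 24 * 30;
const monthlyCost = monthlyQueries * costPerQuery;
// 2,160,000 queries/month -> $108,000/month, with "nothing heavy" running
```

Halve every assumption and the bill is still five figures a month. The shape of the problem is the multiplication itself: loops turn per-query pricing into a linear function of agent activity.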
When every query is metered, teams ration. Dashboards get simplified. Questions go unasked. Session replay gets disabled on all but a tiny sample. The warehouse bill looks fine — because nobody’s running anything. That’s not a saving; it’s unrealised product intelligence.
Most analytics vendors sample session replays and heatmaps by default once volume gets real. You’re debugging churn off a 1% slice. MinusOneDB stores every event at capacity cost — no sampling, no “we rolled up yesterday’s data to fit the tier.”
Because every query already runs at capacity. Once you’re on MinusOneDB, analytics is not a separate product — it’s the same data, the same store, the same bill.
Events, users, documents, search, session state — all in the same distributed-search datastore. No ETL into a separate analytics warehouse, no double storage, no reconciliation.
Run as many session-replay fetches, heatmap renders, agent loops and event queries as you want. Marginal cost per query trends to zero. We don’t meter curiosity, and we don’t meter your agents.
Cancel the analytics contract, the SSO setup, the DPA, the renewal cycle. The warehouse bill drops too — you’re not running the shadow SQL that vendor couldn’t answer.
Events live in your environment alongside your operational data. Same SOC 2 controls, same access policies, same perimeter. Nothing leaves.