MinusOneDB is a parallel database where distributed search is the foundation, storage and compute live on the same node, and pricing is for capacity — not for asking questions. This is how 100–1000× price-performance is architected, not marketed.
A typical analytics pipeline has a database, a warehouse, an ETL layer, a stream processor, a cache, a lake, and a search engine — each metered, each staffed, each a renewal cycle.
One system replaces all of it. Events land once, index themselves, and are queryable at capacity cost. The org chart collapses with the stack.
Modern warehouses put storage on one side of the network and compute on the other. Every query crosses the wire both ways, and the meter runs at both ends. We put them back on the same node.
Execution happens where the data lives. No network tax per query. Aggregations that used to stream petabytes across a wire run against the indexes in place.
In the legacy stack, search is a separate cluster fed by a crawl job — minutes-stale, cost-metered, forever drifting from the primary store. In MinusOneDB, every query resolves to a search operation. Text filter, faceted aggregation, numeric range, vector similarity: same engine, same write path, no second system to keep in sync.
Every shard carries inverted indexes (text + structured filters), doc-value columns (for sort and facet), vector indexes (similarity), and range trees (numeric + temporal). All maintained by the same write. No ETL, no reindex job, no drift between systems.
You hand us a large dataset. We break it into chunks, stream them to N shard writers in parallel, and build every index around each document as it lands — inverted, doc-value, vector, range. Work a warehouse would do at query time happens at write time, once, amortised across every future read.
A query against a warehouse is a computation. Against MinusOneDB, it’s a lookup. Warehouses scan rows every time you ask. We paid that cost once, at ingest — so every query that comes later just finds the answer waiting.
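The write path above can be sketched in a few lines of TypeScript. This is a hypothetical illustration, not the real engine: the shard count, the index shapes, and the `shardOf` routing rule are all assumptions chosen to show the idea of building indexes as each document lands, so that a later query is a lookup rather than a scan.

```typescript
// Hypothetical sketch of write-time indexing across N shards.
// Shard count, index shapes, and routing are illustrative assumptions.

type Doc = { id: number; text: string; price: number };

class Shard {
  // Inverted index: term -> doc ids (text filters).
  inverted = new Map<string, Set<number>>();
  // Doc-value column: doc id -> numeric value (sort, facet, range).
  docValues = new Map<number, number>();

  // One write maintains every index around the document as it lands.
  write(doc: Doc): void {
    for (const term of doc.text.toLowerCase().split(/\s+/)) {
      if (!this.inverted.has(term)) this.inverted.set(term, new Set());
      this.inverted.get(term)!.add(doc.id);
    }
    this.docValues.set(doc.id, doc.price);
  }

  // Query is a lookup, not a scan: the index already holds the answer.
  search(term: string): number[] {
    return [...(this.inverted.get(term) ?? [])];
  }
}

const N = 4;
const shards = Array.from({ length: N }, () => new Shard());
const shardOf = (id: number) => shards[id % N]; // assumed routing rule

// Ingest: chunk the dataset and fan documents out to the shard writers.
const docs: Doc[] = [
  { id: 1, text: "red running shoe", price: 90 },
  { id: 2, text: "blue trail shoe", price: 120 },
  { id: 3, text: "red rain jacket", price: 150 },
];
for (const d of docs) shardOf(d.id).write(d);

// Read: fan the term out to every shard and merge the postings.
const hits = shards.flatMap((s) => s.search("shoe")).sort();
console.log(hits); // doc ids 1 and 2
```

The cost of tokenising and index maintenance is paid once per document at write time; every later `search` call is a hash lookup per shard.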
One MinusOneDB environment exposes four optimised stores behind a single REST API + JS SDK — so you stop stitching together a database, a warehouse, a cache, and a lake yourself.
Constant-time distributed search across petabytes of documents. Full-text, structured fields, facets, aggregations, range queries — in one pass.
Live per-user state for frequency caps, real-time targeting, profile flags, and anything that needs write-then-read in the same second.
Raw, arbitrary-shape JSON at any scale. Land first, decide later. The schema can grow around the data, not the other way around.
Durable object-store backing for every record ever written. Any dataset at any scale rebuilds from archive in ~3 hours.
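A minimal sketch of what "four stores behind one API" can look like from the JS SDK side. The class below is an in-memory stand-in, not the real SDK; every method name (`index`, `search`, `sessionSet`, `sessionGet`, `land`) is an assumption chosen to mirror the four stores described above.

```typescript
// In-memory stand-in for a four-store client. All names are illustrative,
// not the real MinusOneDB SDK surface.
class MinusOneClient {
  private sessions = new Map<string, Record<string, unknown>>(); // session store
  private lake: unknown[] = [];                                  // raw JSON lake
  private archive: unknown[] = [];                               // durable archive
  private docs: { id: number; text: string }[] = [];             // search store

  // Search store: documents are queryable as soon as they are written.
  index(doc: { id: number; text: string }): void {
    this.docs.push(doc);
    this.archive.push(doc); // every record ever written is archived
  }
  search(term: string): number[] {
    return this.docs.filter((d) => d.text.includes(term)).map((d) => d.id);
  }

  // Session store: write-then-read in the same second.
  sessionSet(user: string, patch: Record<string, unknown>): void {
    this.sessions.set(user, { ...(this.sessions.get(user) ?? {}), ...patch });
  }
  sessionGet(user: string): Record<string, unknown> | undefined {
    return this.sessions.get(user);
  }

  // Lake: land arbitrary-shape JSON first, decide on schema later.
  land(raw: unknown): void {
    this.lake.push(raw);
    this.archive.push(raw);
  }
}

const db = new MinusOneClient();
db.index({ id: 1, text: "sports shoes sale" });
db.sessionSet("u42", { frequencyCap: 3 });
db.land({ anything: { goes: [1, 2, 3] } });

console.log(db.search("shoes"));   // hit from the search store
console.log(db.sessionGet("u42")); // live per-user state, same second
```

The design point is the single facade: one client, one write path, with the archive populated as a side effect of every write rather than by a separate ETL job.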
A REST request fans out to every shard in parallel, aggregates in place, and returns — all in one round-trip. Here it is, measured against the kind of number you’d use to time a keypress.
Same petabyte. Same query. MinusOneDB lives on the scale of a keystroke. Warehouses live on the scale of a coffee refill.
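The one-round-trip fan-out can be sketched with `Promise.all`: a hypothetical coordinator queries every shard in parallel and merges the partial aggregates, so total latency is bounded by the slowest shard, not by the sum of all shards. The shard contents and the count/sum aggregation here are illustrative assumptions.

```typescript
// Scatter-gather sketch: one request fans out to all shards in parallel,
// each shard aggregates in place, and the coordinator merges the partials.

type ShardAgg = { count: number; sum: number };

// Each "shard" answers from its local index after a simulated network hop.
function queryShard(values: number[]): Promise<ShardAgg> {
  return new Promise((resolve) =>
    setTimeout(
      () => resolve({ count: values.length, sum: values.reduce((a, b) => a + b, 0) }),
      10, // every shard takes ~10 ms...
    ),
  );
}

async function fanOut(shardData: number[][]): Promise<ShardAgg> {
  // ...and because the shards run in parallel, the whole query takes
  // ~10 ms, not shardData.length * 10 ms.
  const partials = await Promise.all(shardData.map(queryShard));
  return partials.reduce(
    (acc, p) => ({ count: acc.count + p.count, sum: acc.sum + p.sum }),
    { count: 0, sum: 0 },
  );
}

fanOut([[1, 2], [3, 4, 5], [6]]).then((agg) => console.log(agg)); // { count: 6, sum: 21 }
```

Only the tiny partial aggregates cross the wire; the raw rows never leave the shard that indexed them.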
Because queries run on the same node as the data, there’s nothing to meter per request. So we price what actually costs us something: the infrastructure footprint. Your bill flattens while your usage scales.
Per-query is demand-priced — the warehouse’s incentive is to keep you running more queries. Capacity is supply-priced — ours is to keep you running efficiently.
When every query is free at the margin and session, search, lake and archive share a store, whole categories of application become viable that pay-per-query stacks can’t price.
A live identity object with full lineage, updated as signals land, queryable in the same request as the facts. No nightly batch ID graph, no match-rate decay.
In production: enterprise analytics → Score billions of events against thousands of model variants every day. Writes are cheap because the store is search-first, so AutoML and nightly retraining become a line item, not a quarterly initiative.
AI-native workloads → Partners land data once in the object store and run cross-domain filters inside an M1DB node. No per-query meter, no data movement, no lock-in.
Case study: Qonsent → Build, maintain, and sell user segments that refresh every ~2 seconds. Session and search stores combine, so a segment is a live query, not a table rebuild.
In production: publishers & SSPs →