April 29, 2026 · 9 min read

Querying IndexedDB with MongoDB-style filters

IndexedDB's native API is verbose. A MongoDB-style filter language compiles down to the same cursor calls but is dramatically easier to read, share, and debug. Here's how it maps and where it helps.

Why we need a query language at all

IndexedDB’s native API is verbose by design — it predates async/await and is built around ceremony: open a transaction, get an object store, get an index, open a cursor, advance it, collect matches into an array. Half the code in any IDB-using app is plumbing.

A filter language hides the plumbing. The same query that takes 25 lines of cursor code can be expressed as a JSON object that’s shareable, version-controllable, and inspectable in a panel. The language doesn’t replace the API — it compiles down to it, with the same operational characteristics.
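For contrast, here’s a sketch of the cursor plumbing the query layer replaces — hand-written IDB code for a single equality filter. The store, index, and value names ("orders", "status", "delivered") are illustrative, not IdxBeaver internals:

```typescript
// Hand-written cursor plumbing: one equality filter with a row limit.
// Illustrative sketch — names and structure are assumptions, not the
// extension's actual code.
function findDelivered(db: IDBDatabase, limit: number): Promise<unknown[]> {
  return new Promise((resolve, reject) => {
    const tx = db.transaction("orders", "readonly");
    const index = tx.objectStore("orders").index("status");
    const range = IDBKeyRange.only("delivered"); // bounded scan, not full-store
    const rows: unknown[] = [];
    const req = index.openCursor(range);
    req.onerror = () => reject(req.error);
    req.onsuccess = () => {
      const cursor = req.result;
      if (cursor && rows.length < limit) {
        rows.push(cursor.value);
        cursor.continue(); // advance; onsuccess fires again
      } else {
        resolve(rows);
      }
    };
  });
}
```

The equivalent query is one JSON object: `{ "store": "orders", "filter": { "status": "delivered" }, "limit": 50 }`.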

The shape of the language

MongoDB’s filter syntax is a good fit because IDB rows are (effectively) BSON-shaped: structured-clone documents, nested objects, arrays of primitives. The full IdxBeaver query is five fields — two required, three optional:

{
  "store":  "orders",                      // required
  "filter": { ...mongo-style filter... },  // required (can be {})
  "project": ["id", "total", "status"],    // optional column projection
  "sort":   { "createdAt": -1 },           // optional, in-memory sort
  "limit":  50                             // optional
}
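As a TypeScript sketch, the envelope might be typed like this — a hypothetical type mirroring the JSON shape above, not IdxBeaver’s actual source:

```typescript
// Hypothetical type for the query envelope; operator details are elided
// behind `Filter`.
type Filter = Record<string, unknown>;

interface IdxBeaverQuery {
  store: string;                 // object store to read from (required)
  filter: Filter;                // mongo-style filter; {} matches everything
  project?: string[];            // optional column projection
  sort?: Record<string, 1 | -1>; // optional in-memory sort, -1 = descending
  limit?: number;                // optional row cap
}

const recentRefunds: IdxBeaverQuery = {
  store: "orders",
  filter: { status: "refunded" },
  sort: { createdAt: -1 },
  limit: 100,
};
```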

The filter is the interesting part. It supports the standard equality shorthand, plus operator-prefixed fields:

// equality
{ "status": "delivered" }

// comparison
{ "total": { "$gte": 20000, "$lt": 40000 } }

// membership
{ "currency": { "$in": ["USD", "EUR"] } }

// negation
{ "status": { "$ne": "refunded" } }

// composition
{ "$and": [
    { "status": "delivered" },
    { "createdAt": { "$gte": "2026-01-01" } }
] }

// nested paths use dotted keys
{ "shipping.city": "Lisbon" }
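A minimal matcher for these operators fits in a few dozen lines. This is an illustrative reimplementation, not the actual parser in src/shared/query.ts — it covers only the `$`-operators shown above, `$or`, and dotted paths:

```typescript
// Minimal in-memory matcher sketch for the operators listed above.
type Doc = Record<string, unknown>;

// Resolve a dotted path like "shipping.city" against a document.
function get(doc: Doc, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (v, key) => (v == null ? undefined : (v as Doc)[key]), doc);
}

function matchValue(actual: unknown, expected: unknown): boolean {
  if (expected !== null && typeof expected === "object" && !Array.isArray(expected)) {
    // Operator object, e.g. { $gte: 20000, $lt: 40000 } — all ops must hold.
    return Object.entries(expected as Doc).every(([op, arg]) => {
      switch (op) {
        case "$eq":  return actual === arg;
        case "$ne":  return actual !== arg;
        case "$gt":  return (actual as any) >  (arg as any);
        case "$gte": return (actual as any) >= (arg as any);
        case "$lt":  return (actual as any) <  (arg as any);
        case "$lte": return (actual as any) <= (arg as any);
        case "$in":  return (arg as unknown[]).includes(actual);
        case "$nin": return !(arg as unknown[]).includes(actual);
        default:     return false; // unknown operator: match nothing
      }
    });
  }
  return actual === expected; // equality shorthand
}

function matches(doc: Doc, filter: Doc): boolean {
  return Object.entries(filter).every(([key, expected]) => {
    if (key === "$and") return (expected as Doc[]).every(f => matches(doc, f));
    if (key === "$or")  return (expected as Doc[]).some(f => matches(doc, f));
    return matchValue(get(doc, key), expected);
  });
}
```

For example, `matches({ total: 25000 }, { total: { $gte: 20000, $lt: 40000 } })` returns `true`, and `matches({ shipping: { city: "Lisbon" } }, { "shipping.city": "Lisbon" })` resolves the dotted path.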

How it compiles down

The whole point of this layer is preserving IDB’s index machinery. The planner does two passes:

  1. Index-hint scan. Walk the filter looking for single-field equality or range expressions where an IDBIndex exists with a matching keyPath. If it finds one, the cursor opens against that index with an IDBKeyRange derived from the filter — bounded scan, not full-store.
  2. In-memory match. Apply the rest of the filter (compound operators, nested paths, anything the index can’t cover) to each row produced by the cursor. The remaining ops are cheap because the cardinality is already reduced.
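Pass 1 can be sketched as a function that finds one indexable field and derives key-range bounds from it. This is a hedged approximation of the real planner, with `indexedFields` standing in for the store’s actual IDBIndex list:

```typescript
// Sketch of the index-hint scan: pick one filter field an index can serve
// and derive key-range bounds. Illustrative only — not the real planner.
interface PlannedRange {
  index: string;
  lower?: unknown; lowerOpen?: boolean;
  upper?: unknown; upperOpen?: boolean;
}

function planIndexScan(
  filter: Record<string, unknown>,
  indexedFields: Set<string>,
): PlannedRange | null {
  for (const [field, expr] of Object.entries(filter)) {
    if (field.startsWith("$") || !indexedFields.has(field)) continue;
    if (expr === null || typeof expr !== "object") {
      // Equality shorthand: both bounds coincide (IDBKeyRange.only).
      return { index: field, lower: expr, upper: expr };
    }
    const ops = expr as Record<string, unknown>;
    const range: PlannedRange = { index: field };
    if ("$gte" in ops) { range.lower = ops.$gte; range.lowerOpen = false; }
    if ("$gt"  in ops) { range.lower = ops.$gt;  range.lowerOpen = true; }
    if ("$lte" in ops) { range.upper = ops.$lte; range.upperOpen = false; }
    if ("$lt"  in ops) { range.upper = ops.$lt;  range.upperOpen = true; }
    if (range.lower !== undefined || range.upper !== undefined) return range;
  }
  return null; // no usable index: full object-store scan
}
```

The returned descriptor maps directly onto `IDBKeyRange.bound(lower, upper, lowerOpen, upperOpen)`, or `IDBKeyRange.only` when both bounds coincide.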

The chosen plan is reported alongside the result so you can spot a missing index. A typical good plan reads:

used index "status" · scanned 18 · matched 12 · returned 12

And a typical bad plan — full scan because no useful index exists — reads:

full object-store scan · scanned 12,408 · matched 87 · returned 50

That’s the signal: add an index on the field you’re filtering by. The query doesn’t change; the next run picks up the index automatically.
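Adding that index means bumping the database version, since IDB schema changes only happen inside `onupgradeneeded`. A sketch with illustrative names ("appDb", "orders", "status") — your own database and store names will differ:

```typescript
// Sketch: create the missing index during a version upgrade.
// Names are illustrative assumptions.
function addStatusIndex(newVersion: number): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open("appDb", newVersion);
    open.onupgradeneeded = () => {
      const store = open.transaction!.objectStore("orders");
      if (!store.indexNames.contains("status")) {
        store.createIndex("status", "status"); // index name, keyPath
      }
    };
    open.onsuccess = () => resolve(open.result);
    open.onerror = () => reject(open.error);
  });
}
```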

Five examples that map cleanly

1. “Find recent refunds”

{
  "store": "orders",
  "filter": {
    "status": "refunded",
    "createdAt": { "$gte": "2026-04-01" }
  },
  "sort":  { "createdAt": -1 },
  "limit": 100
}

With an index on status the planner range-scans the refunded slice, then in-memory filters by date. No full table scan.

2. “Show users with no email”

{
  "store":  "users",
  "filter": { "email": { "$eq": null } }
}

3. “Find sync queue items pending for over an hour”

{
  "store": "syncQueue",
  "filter": {
    "$and": [
      { "state": "pending" },
      { "queuedAt": { "$lt": "$NOW - 1h" } }
    ]
  }
}

4. “Project a subset for export”

{
  "store":   "orders",
  "filter":  { "status": "delivered" },
  "project": ["id", "userId", "total", "shippingCity"],
  "limit":   1000
}

Combined with a CSV export, this is the fastest way to hand a tester or analyst a slice of production-shaped data without writing code.
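The export step itself is simple once you have projected rows. A hedged sketch of rows-to-CSV, independent of IdxBeaver’s actual export code:

```typescript
// Sketch: serialize projected rows to CSV. Column order follows the
// projection list; quoting covers commas, quotes, and newlines.
function toCsv(rows: Record<string, unknown>[], columns: string[]): string {
  const escape = (v: unknown): string => {
    const s = v == null ? "" : String(v);
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const header = columns.join(",");
  const body = rows.map(r => columns.map(c => escape(r[c])).join(","));
  return [header, ...body].join("\n");
}
```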

5. “Negate a list”

{
  "store":  "events",
  "filter": { "type": { "$nin": ["heartbeat", "ping"] } }
}

What it doesn't do (yet)

  • Joins. IDB has no native join. Joining two stores client-side means running two queries and merging in code, which is fine for ~thousands of rows but breaks down for millions. Plain SQL with an actual relational engine is on the roadmap for larger workloads.
  • Aggregations. No $group / $sum yet — the project view gives you the rows; you do the math in your head or in a spreadsheet.
  • Mutations. The filter language is read-only by design. Writes happen through inline grid edits with undo/redo, not a query DSL.
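The two-queries-and-merge join mentioned above is just a hash join in application code. A sketch with illustrative field names (`userId`, `id`):

```typescript
// Sketch: client-side hash join of two result sets. Shapes and field
// names are illustrative assumptions.
interface Order { id: number; userId: number; total: number }
interface User  { id: number; name: string }

function joinOrdersWithUsers(orders: Order[], users: User[]) {
  // Build side: index users by primary key for O(1) lookup per order.
  const byId = new Map(users.map((u): [number, User] => [u.id, u]));
  return orders.map(o => ({ ...o, user: byId.get(o.userId) ?? null }));
}
```

This is fine at the scale IDB is usually used for; past a few hundred thousand rows the memory cost of materializing both sides is the limiting factor.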

Why this scales

A filter language that looks like Mongo and compiles to native IDB cursor code is the right level of abstraction for browser storage debugging. You get readable queries, an inspectable plan, no extra runtime cost over hand-written cursor code, and shareable artifacts (a filter is just JSON). The underlying API stays the same — the query layer just stops being a chore.

See the implementation in the IdxBeaver source — the planner is in src/background/index.ts, inside the injected executeStorageRequest function; the parser lives in src/shared/query.ts.
