Using nreactive with monorepos
Monorepos are common, and the obvious question when adopting a new monitoring tool is "do I register one app for the whole thing or one per package?" Both are supported in nreactive, and the right answer depends on how you want the dashboard to feel and how you want scheduled scans to behave. This post walks through both setups and the trade-offs.
Option A: one app per runtime surface
The common pattern: register one nreactive app per deployable artefact. A web client, an API server, a worker, an internal admin tool — each gets its own app id, and each runtime initialises the SDK with that id, so errors from the worker land on the worker's error list and don't clutter the web client's.
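As an illustration of the shape (not the real SDK API — the client class, init arguments, and app ids below are all hypothetical), Option A means each process binds one client to one app id at startup:

```python
# Illustrative only: NreactiveClient, capture(), and the app ids are
# hypothetical stand-ins for whatever the real SDK exposes. The point
# is the shape: one app id per deployable, set once at process start.

class NreactiveClient:
    """Toy stand-in for an SDK client bound to a single app id."""

    def __init__(self, app_id: str):
        self.app_id = app_id
        self.errors: list[dict] = []

    def capture(self, error: Exception) -> dict:
        # Every event is tagged with this process's app id, so the
        # dashboard can route it to the right app's error list.
        event = {"app_id": self.app_id, "message": str(error)}
        self.errors.append(event)
        return event

# Option A: each deployable initialises its own client with its own id.
web_client = NreactiveClient(app_id="app_web")        # hypothetical ids
worker_client = NreactiveClient(app_id="app_worker")

event = worker_client.capture(RuntimeError("job failed"))
# The worker's event carries the worker's app id, not the web client's.
```

Under Option B, every one of those constructors would receive the same id, which is the whole difference between the two setups at the code level.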
Pros: the dashboard stays focused. Each app has its own sparkline, its own PR queue, its own deny list, its own schedule. Scheduled scans target each app's subtree via deny-list narrowing, so a scan of the API doesn't read the web client's source.
Cons: you have to register and configure each app. The overhead is small — a few clicks and a line of config — but it's nonzero and grows linearly with the number of deployables.
Option B: one app for the whole repo
The alternative: register a single nreactive app with a single app id, and have every runtime — client, server, worker — send under the same id. Errors from everywhere land in one place; scheduled scans read the whole repo.
Pros: minimal setup. Especially useful for early-stage projects or small teams where the surface area is modest.
Cons: the dashboard is noisier because different classes of errors share one list. Scheduled scans see the whole tree, which means the deny list has to do more work to keep scans focused on the parts of the repo that matter.
Practical defaults
For most teams we recommend Option A. The extra setup is a one-time cost, and the dashboard clarity pays off daily. Option B is fine for a weekend-project monorepo or a repo where the "monorepo" is really just a web app with a colocated package or two.
The recently added origin badge helps Option B: client and server errors are visibly separate even in a single list, which makes the noise more tolerable. If that's enough clarity for your team, Option B can scale further than it used to.
Deny lists in a monorepo
Deny lists are per-app, which is the main reason Option A scales well. Each app can exclude the parts of the repo it doesn't care about without affecting other apps' scans. For Option B, you end up maintaining a single deny list that covers everything — generated clients, vendored code, test fixtures, unrelated packages — which can become long.
A useful pattern for Option B deny lists: rather than listing exclusions one by one, exclude every top-level package directory except the ones you care about. The extglob-style pattern packages/!(server|web)/** denies everything under packages/ except packages/server and packages/web, so the scan sees only those two.
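The semantics of that pattern can be sketched as a small check. This is illustrative only — it mimics what packages/!(server|web)/** means, not how nreactive's actual matcher is implemented:

```python
# Illustrative sketch of what packages/!(server|web)/** means: deny any
# path under packages/ whose top-level package is NOT server or web.
# Not nreactive's matcher — just the semantics of the pattern.

KEEP = {"server", "web"}  # the packages the scan should still see

def denied(path: str) -> bool:
    parts = path.split("/")
    if len(parts) < 3 or parts[0] != "packages":
        return False  # the pattern only covers packages/<pkg>/...
    return parts[1] not in KEEP

# packages/server and packages/web survive; other packages are denied.
assert denied("packages/admin/src/index.ts")
assert not denied("packages/server/src/app.ts")
assert not denied("packages/web/src/main.tsx")
assert not denied("README.md")
```

The advantage over one-by-one exclusions is that a newly added package is denied by default; you only ever edit the keep set.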
Branches and default branch resolution
The analyzer fetches source from the default branch, which is resolved live (the stored value can be stale). On most monorepos that's conventional — one default branch, everything on it. Where it gets fiddly is a monorepo that cuts release branches per package: the analyzer will still fetch from the default branch, not from the per-package release branch. If you need per-app branch targeting, get in touch and we'll scope the right integration.
Shared utilities and cross-package fixes
A bug in a shared utility can fire in multiple apps. Under Option A, the same error fingerprint is captured separately for each app, and each app's pipeline generates its own PR proposal, which can lead to conflicting PRs. The defensive play is a careful deny list on each app that excludes the shared-utilities folder, so only one app's pipeline proposes fixes there — typically the app that owns the utility.
Under Option B, the shared utility's fix is just a single PR. That's cleaner for this case specifically.
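The Option A defensive setup amounts to data: every app except the owner denies the shared folder. A minimal sketch, with hypothetical app ids, package names, and a stand-in matcher (Python's fnmatch, not nreactive's):

```python
# Hypothetical per-app deny lists for a repo where packages/shared is
# owned by the API app. Only the owner's scan can read shared code, so
# only the owner's pipeline proposes fixes there.
from fnmatch import fnmatch

DENY_LISTS = {
    "app_api":    ["packages/web/**", "packages/worker/**"],
    "app_web":    ["packages/server/**", "packages/worker/**",
                   "packages/shared/**"],
    "app_worker": ["packages/server/**", "packages/web/**",
                   "packages/shared/**"],
}

def can_propose_fix(app_id: str, path: str) -> bool:
    """True if this app's scan may read (and therefore fix) the path."""
    return not any(fnmatch(path, pattern) for pattern in DENY_LISTS[app_id])

shared_file = "packages/shared/retry.ts"
fixers = [app for app in DENY_LISTS if can_propose_fix(app, shared_file)]
# Only app_api is left standing for the shared utility.
```

Exactly one app surviving the filter is the invariant you're maintaining; if a new app is registered without denying packages/shared, the duplicate-PR problem comes back.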
CI interaction
Monorepo CI is often keyed to which paths changed. An nreactive PR targets specific files, so the incidental benefit is clean CI runs: only the affected packages' tests run. No special wiring is needed. If your CI is aggressive about running everything on any PR, that still works; it just runs a little slower than necessary.
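For example, with GitHub Actions path filtering (package name hypothetical), a PR that only touches the server package triggers only the workflows filtered to that path:

```yaml
# Example GitHub Actions trigger: this workflow runs only when a PR
# touches the server package, so an nreactive PR scoped to
# packages/server/ leaves the web package's CI jobs untouched.
on:
  pull_request:
    paths:
      - "packages/server/**"
```

Other CI systems have equivalents (path rules in GitLab CI, path filters in Azure Pipelines); the point is that a file-scoped PR composes naturally with all of them.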
Sizing scheduled scans
A monorepo scan is budget-limited just like any other scan: 25 files on Pro, 15 on Free. On a large monorepo that might not cover every important subtree in a single run. Two mitigations: register per-package apps (Option A) so each gets its own budget, or lean on the recency bias so the scan naturally follows active development across the repo.
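The budget-plus-recency interaction can be sketched as a pure function. This is an assumption about the shape of the behaviour, not nreactive's actual ranking, which presumably weighs more signals than modification time:

```python
# Illustrative sketch: select a scan's file set by recency under a
# fixed budget. Last-modified ordering is the simplest recency bias;
# the real ranking is assumed to be more involved.

def select_scan_files(files: list[tuple[str, float]], budget: int) -> list[str]:
    """files is (path, last_modified_epoch); newest files win the budget."""
    ranked = sorted(files, key=lambda f: f[1], reverse=True)
    return [path for path, _ in ranked[:budget]]

repo = [
    ("packages/server/app.py", 1_700_000_300.0),  # touched most recently
    ("packages/web/index.tsx", 1_700_000_200.0),
    ("packages/worker/jobs.py", 1_700_000_100.0),
    ("packages/legacy/old.py", 1_600_000_000.0),  # dormant package
]

picked = select_scan_files(repo, budget=3)
# The dormant package falls outside a budget of 3.
```

This is also why Option A helps at scale: with per-package apps, each budget is spent inside one subtree instead of being shared across the whole repo's recent activity.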
For truly large monorepos — hundreds of packages, tens of thousands of files — neither scan setup will cover everything. Runtime monitoring remains the primary defence, and scans supplement it.