From runtime error to pull request
This post is a guided tour of a single runtime error as it moves through nreactive, from the moment the SDK captures it to the moment a pull request opens on your repository. Every step is logged on the dashboard, but it helps to see the whole pipeline laid out end to end.
Capture on the client
Whichever SDK you've installed — the browser script tag, the Node package, the Express or Fastify adapter — registers a set of global handlers on init. When an error fires, the handler builds a normalized event: the message, a cleaned stack, the source file and line, the severity class (critical, error, warn), and optional metadata like browser, OS, and timestamp. Server-side SDKs also attach a structured context blob with the request headers, URL, breadcrumbs, and any user-supplied context. Everything is scrubbed for secrets before it leaves the process.
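A minimal sketch of what such a handler might build — the field names here are illustrative assumptions, not the actual nreactive wire format:

```javascript
// Hypothetical sketch of the normalized event an SDK handler builds.
// Field names and defaults are assumptions for illustration.
function buildEvent(err, meta = {}) {
  // Keep a cleaned, bounded stack rather than the raw one.
  const stack = (err.stack || "").split("\n").slice(0, 20).join("\n");
  return {
    message: err.message,
    stack,
    severity: meta.severity || "error", // critical | error | warn
    file: meta.file || null,
    line: meta.line || null,
    browser: meta.browser || null,
    os: meta.os || null,
    timestamp: new Date().toISOString(),
  };
}
```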
The SDK batches events on a short interval and ships them to the /api/errors endpoint. Batching keeps the outbound traffic predictable and survives bursts without losing events.
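The batching behavior can be sketched as a small queue with a flush timer. The interval and the send callback's shape are assumptions; the real SDK's internals may differ:

```javascript
// Minimal batching sketch: enqueue events, flush them on a short timer
// so outbound traffic stays at one request per interval regardless of
// how bursty the input is.
class Batcher {
  constructor(send, intervalMs = 2000) {
    this.send = send; // e.g. (events) => POST them to /api/errors
    this.queue = [];
    this.timer = setInterval(() => this.flush(), intervalMs);
  }
  enqueue(event) {
    this.queue.push(event);
  }
  flush() {
    if (this.queue.length === 0) return;
    const batch = this.queue;
    this.queue = [];
    this.send(batch);
  }
  stop() {
    clearInterval(this.timer);
    this.flush(); // drain whatever is left before shutdown
  }
}
```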
Ingestion and fingerprinting
On arrival the server verifies the app id, rejects localhost traffic, and applies your ignore-list patterns. Surviving events go through fingerprinting: the message is normalized (hex ids, IPs, timestamps, and query strings are replaced with placeholders), the file path is normalized, and the top few stack frames are snapped to 10-line buckets. Those pieces are hashed together into a stable SHA-256 fingerprint.
The database does an upsert keyed by (appId, fingerprint). New fingerprint → a record is inserted with status "new". Existing fingerprint → the occurrence counter and daily count go up, and the last-seen timestamp updates. If the previous last-seen timestamp was more than seven days old, the record is also marked as a regression.
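The upsert and regression check can be sketched with an in-memory map standing in for the database; the record shape is an assumption:

```javascript
// Sketch of the upsert keyed by (appId, fingerprint). A Map stands in
// for the real database; field names are illustrative.
const REGRESSION_WINDOW_MS = 7 * 24 * 60 * 60 * 1000; // seven days

function upsertError(db, appId, fp, now = Date.now()) {
  const key = `${appId}:${fp}`;
  const existing = db.get(key);
  if (!existing) {
    const record = { status: "new", occurrences: 1, lastSeen: now };
    db.set(key, record);
    return record;
  }
  existing.occurrences += 1;
  // A reappearance after more than seven days of silence is a regression.
  if (now - existing.lastSeen > REGRESSION_WINDOW_MS) {
    existing.status = "regression";
  }
  existing.lastSeen = now;
  return existing;
}
```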
Triage
A record with status "new" is queued for analysis. A few filters run first. Transient-looking errors (network timeouts, aborted fetches, chunk-load failures) are suppressed unless they recur often enough in a short window to look like a real bug. Deny-listed files are stripped from consideration; if every referenced file is deny-listed, the record is closed with a note and the pipeline stops.
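The transient filter can be sketched as a pattern check plus a burst threshold. The patterns and the threshold value are assumptions for illustration:

```javascript
// Sketch of the transient-error filter: suppress known-flaky patterns
// unless they recur often enough in a short window. The pattern list
// and minBurst threshold are illustrative assumptions.
const TRANSIENT_PATTERNS = [/network timeout/i, /aborted/i, /chunk.?load/i];

function shouldSuppress(message, recentCount, { minBurst = 5 } = {}) {
  const transient = TRANSIENT_PATTERNS.some((re) => re.test(message));
  return transient && recentCount < minBurst;
}
```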
Past the filters, the pipeline transitions the record to "analyzing" and logs an activity entry. The triage stage is also where the origin badge gets attached — server or client — based on explicit SDK metadata or the browser and URL signals.
Context gathering
The analyzer pulls the real default branch from the repo (this is important because the stored value can be stale), then fetches the source file the error points to, if it's available and not deny-listed. If that doesn't yield any source, it parses up to eight file references out of the stack trace and fetches those too. Each path is normalized to strip query strings, hex chunks, and the Next.js _next/static/chunks/ prefix so remote URLs resolve to real repo paths.
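The path normalization might look like this. The _next/static/chunks/ handling follows the text; the hex-chunk regex and origin-stripping rule are assumptions:

```javascript
// Sketch of path normalization: turn a remote bundle URL into something
// that can resolve against real repo paths. Regex details are assumed.
function normalizePath(raw) {
  let p = raw.split("?")[0]; // strip query string
  p = p.replace(/^.*_next\/static\/chunks\//, ""); // Next.js chunk prefix
  p = p.replace(/\.[0-9a-f]{8,}(?=\.)/gi, ""); // content-hash segments
  p = p.replace(/^https?:\/\/[^/]+\//, ""); // remote origin
  return p;
}
```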
The analyzer also fetches the last eight commits on the default branch — each with its message, date, and touched files — and reads the SDK-provided runtime context blob if it's present. Both feed the prompt so the model can reason about what changed recently and what the request looked like when the error fired.
Analysis
Sources are compressed (comment-only lines and blank runs removed), and the packaged prompt goes to the configured model. The response comes back as JSON: root cause, fix description, a confidence score from 0 to 1, and a list of changes with verbatim before/after snippets. If the model concludes the bug has already been fixed in the current source, it says so and the record closes as "fixed".
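The compression step — dropping comment-only lines and collapsing blank runs — can be sketched as a single pass. The exact comment markers handled are an assumption:

```javascript
// Sketch of source compression before prompting: skip comment-only
// lines and keep at most one blank line per run. Which comment
// prefixes the real pipeline recognizes is an assumption.
function compressSource(src) {
  const out = [];
  let blank = false;
  for (const line of src.split("\n")) {
    const t = line.trim();
    if (t.startsWith("//") || t.startsWith("#")) continue; // comment-only line
    if (t === "") {
      if (!blank) out.push(""); // collapse blank runs to one line
      blank = true;
      continue;
    }
    blank = false;
    out.push(line);
  }
  return out.join("\n");
}
```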
Decision
If confidence is under 0.6 or the change list is empty, the pipeline doesn't open a PR. Instead it opens a ticket in Linear or Jira if those are configured, with the root cause, confidence, and a stack preview. The record transitions to "fix_generated" to signal that analysis ran but produced no patch.
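The decision gate reduces to a small predicate. The 0.6 threshold and the "fix_generated" status come from the text; the return shape is an assumption:

```javascript
// Sketch of the decision step: below the confidence threshold, or with
// an empty change list, file a ticket instead of opening a PR.
const CONFIDENCE_THRESHOLD = 0.6;

function decide(analysis) {
  if (analysis.confidence < CONFIDENCE_THRESHOLD || analysis.changes.length === 0) {
    return { action: "ticket", status: "fix_generated" };
  }
  return { action: "pull_request" };
}
```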
Above the threshold, a new branch is created and every change is applied through the snippet patcher. If nothing applies cleanly, a diagnostic log entry captures each strategy failure. If at least one change applies, the patched files are pushed, a test file is optionally added, and a PR is opened with a structured body.
After the PR
If auto-merge is enabled and confidence clears the per-app threshold (default 0.9), the PR is merged immediately and a 48-hour verification window starts. If the same fingerprint fires during that window, the PR is flagged as regressed and the error re-enters the pipeline. Otherwise the record transitions to "fixed" and the whole loop is done. You can trace any record through this pipeline via the activity log — see your logs page.
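The verification window reduces to one timestamp comparison. The 48-hour window and the "regressed"/"fixed" outcomes come from the text; the function shape is an assumption:

```javascript
// Sketch of the post-merge verification check: a recurrence of the
// same fingerprint inside the 48-hour window flags the PR as
// regressed; silence through the window means the record is fixed.
const VERIFY_WINDOW_MS = 48 * 60 * 60 * 1000;

function verifyAfterMerge(mergedAt, recurrenceAt) {
  if (recurrenceAt !== null && recurrenceAt - mergedAt <= VERIFY_WINDOW_MS) {
    return "regressed"; // error re-enters the pipeline
  }
  return "fixed";
}
```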