The anatomy of an nreactive pull request
Every PR nreactive generates has the same structure. That's by design. Reviewers should be able to skim the PR list, open any nreactive PR, and know within a minute whether the fix is worth merging. This post is a walkthrough of the structure and the reasoning behind each piece.
The title
Titles are prefixed with [nreactive] so they sort predictably in the PR list and are easy to filter. After the prefix comes a short verb phrase summarizing the fix, e.g. "Fix: guard against undefined user before accessing id". The whole title stays under 72 characters.
Short titles aren't about aesthetics. They're about the review queue — a long title gets truncated in GitHub's list view, a short one is fully visible and scannable. The body is where details go.
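As a rough sketch of the title rule, here is a minimal builder. The function name and the word-boundary truncation strategy are illustrative assumptions, not nreactive's actual implementation; only the prefix and the under-72-character limit come from the post.

```typescript
// Hypothetical sketch of nreactive's title rule (buildTitle is an assumed name).
const PREFIX = "[nreactive] ";
const MAX_TITLE = 72;

function buildTitle(summary: string): string {
  const full = PREFIX + summary;
  if (full.length < MAX_TITLE) return full; // already under the limit
  // Assumption: truncate on a word boundary and mark the cut with an ellipsis.
  const cut = full.slice(0, MAX_TITLE - 1);
  return cut.slice(0, cut.lastIndexOf(" ")) + "…";
}

const title = buildTitle("Fix: guard against undefined user before accessing id");
```

The word-boundary cut keeps a truncated title readable in GitHub's list view rather than ending mid-word.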
The body
The body has five fixed sections, in this order:
- Root cause. One paragraph describing why the error occurs. Grounded in the specific code and the specific stack, not a generic "null access" explanation.
- Fix. One paragraph describing what the change does. Again, specific to the change, not a general class description.
- Confidence. A percentage from 0 to 100, produced directly by the model and not post-processed.
- Model. The model id used for the analysis, fenced in a code span. Useful when comparing results across models or when debugging a run.
- Files changed. A bulleted list of the files the PR touches, in the order they were applied.
When a test is included, a sixth section appears noting the test path, framework, and the failing-before / passing-after contract. A closing footer reminds reviewers the PR is automated and links to the nreactive site.
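The five fixed sections can be sketched as a simple template. The interface and function names below are assumptions for illustration; the section headings and their order come from the post.

```typescript
// Hypothetical sketch of assembling the fixed PR body sections in order.
interface FixResult {
  rootCause: string;     // one paragraph, specific to the code and stack
  fix: string;           // one paragraph, specific to the change
  confidence: number;    // 0-100, straight from the model
  model: string;         // model id, rendered in a code span
  filesChanged: string[]; // in the order the changes were applied
}

function buildBody(r: FixResult): string {
  return [
    "## Root cause",
    r.rootCause,
    "## Fix",
    r.fix,
    "## Confidence",
    `${r.confidence}%`,
    "## Model",
    "`" + r.model + "`",
    "## Files changed",
    r.filesChanged.map((f) => `- \`${f}\``).join("\n"),
  ].join("\n\n");
}
```

A fixed template like this is what makes the one-minute skim possible: reviewers learn where each answer lives and stop reading bodies top to bottom.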
The commit message
Each file-level commit uses a message derived from the fix description, truncated to stay under 72 characters. One commit per file, not one commit per change, because a single file might receive multiple snippet patches and bundling them makes the history more legible.
No trailing co-author line on commits. If you want attribution in the log, it's already in the PR metadata — duplicating it in every commit line adds noise. The PR body is where context lives; the commit is just the mechanical record of the code change.
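A minimal sketch of the two commit rules described above, one commit per file and a message derived from the fix description kept under 72 characters. All names here are illustrative assumptions.

```typescript
// Hypothetical sketch: grouping snippet changes by file and deriving messages.
interface SnippetChange {
  file: string;
  description: string;
}

// Keep the derived message under 72 characters, per the post's rule.
function commitMessage(description: string): string {
  const MAX = 72;
  if (description.length < MAX) return description;
  return description.slice(0, MAX - 2).trimEnd() + "…";
}

// One commit per file: multiple snippet patches to a file are bundled.
function groupByFile(changes: SnippetChange[]): Map<string, SnippetChange[]> {
  const groups = new Map<string, SnippetChange[]>();
  for (const c of changes) {
    if (!groups.has(c.file)) groups.set(c.file, []);
    groups.get(c.file)!.push(c);
  }
  return groups;
}
```

Grouping first, then committing each group, is what keeps the history at one legible commit per touched file even when a file received several patches.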
What's deliberately absent
A few things we explicitly don't include, and why:
- Images or screenshots. The fix pipeline doesn't take them, and synthetic ones would add weight without signal.
- External links. Only internal links (to nreactive.com itself) in generated content. External links age poorly and reviewers don't want to leave the PR to understand it.
- Step-by-step reasoning. We show the conclusion, not the thought process. Reviewers can ask for more if needed; most don't.
- Estimates of impact or effort. The model isn't calibrated on those and speculative impact language actively hurts review quality.
Auto-merge metadata
When a PR qualifies for auto-merge, the action happens quickly after the PR opens. The PR's database record gets an `autoMerged: true` flag and a `verificationWindowEnd` timestamp 48 hours in the future. Those show up on the pull requests dashboard so you can distinguish human-merged from auto-merged at a glance.
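The two fields can be sketched as a small helper. The field names match the post; the function name and the shape of the surrounding record are assumptions.

```typescript
// Hypothetical sketch of the auto-merge fields written to the PR record.
const VERIFICATION_WINDOW_HOURS = 48;

function autoMergeMetadata(now: Date): {
  autoMerged: boolean;
  verificationWindowEnd: Date;
} {
  return {
    autoMerged: true,
    verificationWindowEnd: new Date(
      now.getTime() + VERIFICATION_WINDOW_HOURS * 60 * 60 * 1000,
    ),
  };
}
```

Storing the window end as an absolute timestamp (rather than a duration) means the dashboard can render "time remaining" without knowing when the merge happened.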
The applied-patch report
Internally, each PR stores a report describing which snippet-patch strategy worked for each change (exact match, EOL-normalized, trimmed, or trimmed with noise-skipping). The report also lists any changes that failed to apply and the reason. This report lives in the PR's database record and feeds the dashboard's diagnostic view. If you're seeing an unusually high number of `trimmed_lines_noise_skipped` entries, it's a signal that the model is hallucinating whitespace in a way the compressor isn't catching.
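A rough sketch of the report's shape and the kind of aggregation the diagnostic view implies. The strategy names follow the post's list; the type and field names are assumptions, and the real schema may differ.

```typescript
// Hypothetical sketch of the applied-patch report stored per PR.
type PatchStrategy =
  | "exact"
  | "eol_normalized"
  | "trimmed"
  | "trimmed_lines_noise_skipped";

interface AppliedChange {
  file: string;
  strategy: PatchStrategy;
}

interface FailedChange {
  file: string;
  reason: string;
}

interface AppliedPatchReport {
  applied: AppliedChange[];
  failed: FailedChange[];
}

// Diagnostic helper: how often was each strategy needed across recent PRs?
// A spike in "trimmed_lines_noise_skipped" suggests model-hallucinated whitespace.
function strategyCounts(reports: AppliedPatchReport[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const r of reports) {
    for (const c of r.applied) {
      counts[c.strategy] = (counts[c.strategy] ?? 0) + 1;
    }
  }
  return counts;
}
```

Counting strategies per run, rather than inspecting individual PRs, is what turns the report from a per-PR debugging aid into a fleet-level signal.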
Style notes
PR bodies are Markdown. Headings are only used for the section boundaries; the body text inside each section is plain paragraphs. Code spans are used for file paths, symbol names, and error types, never for emphasis. Emphasis uses bold rather than italics — bold scans faster on most review interfaces.
Evolving the format
The format has changed a couple of times. Early versions included a "related changes" section that was almost always wrong (the model speculated about code it hadn't seen). An experiment with a risk-level tag was abandoned because the tag correlated with confidence and duplicated the signal. Every section that survives has earned its place by being useful at review time. If you find one redundant, flag it via contact.