About PCR.dev
If you build for humans,
humans should review the prompts.
Accountability preserves agency, and vice versa.
PCR.dev is the platform for Prompt & Code Reviews in AI-native teams. We are not automating code review. We are keeping the humans building software accountable for the AI decisions that shape what they ship.
What is a Prompt & Code Review (PCR)?
A Prompt & Code Review is a structured review of the AI prompts that produced a piece of code, surfaced alongside the code as part of the normal review workflow. Where a traditional code review asks what was written, a PCR also asks how it was written: what was asked of the AI, how the developer guided it, and what decisions were made along the way.
Think of it as reviewing a commit history, but for the prompt layer. The meaningful unit of AI-mediated work is the prompt-to-code decision, not the diff.
A PCR has three parts:
1. Problem framing and expected outcomes
2. Key prompts and interaction modes that led to the solution
3. A brief human-written rationale for the chosen approach
The goal is not an exhaustive prompt audit. Engineers self-curate two or three key prompts that shaped their solution. The review stays focused on the decisions that actually mattered.
Prompt & Code Reviews (PCRs) are a practice we coined and empirically validated with 20 software engineers as part of peer-reviewed research published at CHI 2026, the ACM Conference on Human Factors in Computing Systems, where the work received a Best Paper Honorable Mention.
Why human-first?
“The notion of the programmer as an easily replaceable component in the program production activity has to be abandoned. Instead the programmer must be regarded as a responsible developer and manager of the activity in which the computer is a part.”
Peter Naur, 1985
Naur wrote that in 1985. It has only become more true. AI coding tools are genuinely powerful. They write production code at a speed no human can match. But that speed creates a new problem: the decisions shaping a codebase are happening faster than teams can track, and the humans responsible for those decisions have less and less visibility into them.
Our research points to one principle: to remain autonomous is to be the author of your reasons, not merely the approver of outputs. In practice, that means three things.
1. You can interrupt and override the agent at any point.
2. The code you ship has traceable provenance, not just AI-generated reasoning.
3. Changes arrive in small, test-bounded increments you can actually review.
We built PCR.dev entirely with AI tools. But as long as software is built for real people, the engineers building it should be able to account for the decisions that produced it. As teams grow more AI-native and AI tools grow more capable, the review process itself becomes training. Better prompts produce better code, and seeing what works compounds over time.
The problem we found
In our study of 20 software engineers, the same pattern kept surfacing. Junior engineers using agentic AI felt productive in the short term but reported a growing loss of ownership and understanding. The feeling was consistent enough to show up in the exact words they used.
“It has my name on it, but I have no idea why it works.”
Junior engineer, on AI-generated code
“When you’re coding with agents, it’s like you’re just free-falling.”
Junior engineer, on scope control with AI agents
Senior engineers saw the same dynamic from the other side. Six of the ten juniors in our study reported lower confidence in their ability to code without AI after a single agentic debugging session. Seniors reviewing junior prompt histories in 12-minute sessions caught over-reliance patterns, incorrect AI suggestions, and weak prompting habits they would have missed from the code alone.
“I feel like junior engineers just have no intuition. I am routinely pinged on stuff like ‘hey, this thing doesn’t work’ with zero follow-ups on a speculation.”
Senior engineer, on AI and junior judgment
“At no point can you hand over your expertise. You’re just handing over the workload.”
Senior engineer, on delegation to AI
The common thread was visibility. Seniors could review the output. They could not review the process. When they first saw what a prompt review could look like, they said they wanted to implement it immediately.
For senior engineers
See how your team actually uses AI. Catch prompt habits before they compound into tech debt.
For junior engineers
Build a traceable record of your AI decisions. Get feedback on how you work, not just what you shipped.
For startups moving fast
Track AI-assisted decisions before they become undiagnosable tech debt. Our first customer is ourselves: we use PCR.dev to keep each other accountable.
For open source projects
Transparency about AI involvement in contributions matters to maintainers and communities. PCRs make that transparency possible without extra work.
How we think about it
The research pointed to three practices that help teams stay in control as AI grows more capable and more autonomous.
Preserving individual agency
Incremental changes, interrupting and verifying outputs, and staying accountable for every AI decision. One diff at a time.
Evolving the mentorship pipeline
Senior engineers are context holders and judgment builders, not just code reviewers. They ask the right questions when AI gives confident wrong answers.
Prompt & Code Reviews
The practice that connects the other two. Juniors document their key prompts. Seniors review the process alongside the output. Both stay accountable.
“PCRs operate under the principle that accountability preserves agency. When engineers must document and defend their prompting strategies, they remain authors of their reasoning even when AI generates the code.”
Feng, Yun, and Wang. CHI 2026.
Research-backed practice
The conceptual foundation of PCR.dev is peer-reviewed research. We studied the problem with real engineers before building the product.
Our CHI 2026 paper ran a three-phase study with 20 engineers: 10 junior, 10 senior, across big tech, fintech, devtools, and enterprise SaaS. Phase 1 used Applied Cognitive Task Analysis (ACTA) with senior engineers to surface tacit expertise. Phase 2 put juniors through an AI-assisted debugging task using Cursor. Phase 3 had senior engineers do blind reviews of junior prompt histories, mirroring real mentorship. The findings shaped every design decision in this platform.
Feng, Dana, Bhada Yun, and April Yi Wang. “From Junior to Senior: Allocating Agency and Navigating Professional Growth in Agentic AI-Mediated Software Engineering.” CHI 2026. Best Paper Honorable Mention. doi:10.1145/3772318.3791642
Read the paper
Built through PCRs
PCR.dev is built using PCR.dev. Dana and Bhada use it to audit each other throughout development, which means our own prompt histories are the first test of every feature we ship.
They also use different tools. Bhada builds with Cursor. Dana builds with Claude Code. The capture agent works the same for both, and their sessions show up side by side in the same project. That is not a coincidence. It is by design. The tool you use should not determine whether you can participate in a team review workflow.

Bhada Yun
MSc Student, ETH Zürich
HCI researcher at ETH Zürich studying how people interpret and interact with AI systems. Goes mining for rocks in his free time.

Dana Feng
Software Engineer, Two Sigma
Software engineer and researcher based in Brooklyn. Designs for bakeries in her free time.

Prof. Dr. April Yi Wang
Professor, ETH Zürich
Faculty at ETH Zürich, where her lab works on human-centered computing and EdTech. Pets her Shiba Inu in her free time.
Open source and built to grow
The prompt capture agent is open-source and will stay that way. The AI tooling landscape is moving fast: Cursor, Claude Code, Codex, Gemini CLI, and whatever comes next. We cannot build integrations for all of them alone, and we should not have to.
If you use an AI coding tool that PCR.dev does not yet support, fork the capture agent and add it. The architecture is designed to make new integrations straightforward. Contributions are welcome.
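To make "add your own integration" concrete, here is a minimal sketch of what a capture-agent adapter could look like. Every name in it (`CaptureAdapter`, `PromptRecord`, `parseSessionLog`, the JSONL log format) is illustrative, not the actual PCR.dev API; the real interface lives in the open-source capture agent itself.

```typescript
// Hypothetical sketch of a capture-agent integration.
// Assumption: each AI coding tool writes a session log the adapter can parse
// into a common prompt-record shape. None of these names are the real API.

interface PromptRecord {
  timestamp: string;
  role: "user" | "assistant";
  text: string;
}

interface CaptureAdapter {
  tool: string; // e.g. "cursor" or "claude-code"
  parseSessionLog(raw: string): PromptRecord[];
}

// Toy adapter for a tool that logs one JSON object per line (JSONL).
const jsonlAdapter: CaptureAdapter = {
  tool: "example-jsonl-tool",
  parseSessionLog(raw: string): PromptRecord[] {
    return raw
      .split("\n")
      .filter((line) => line.trim().length > 0)
      .map((line) => {
        const obj = JSON.parse(line);
        // Map the tool-specific fields onto the shared record shape.
        return { timestamp: obj.ts, role: obj.role, text: obj.text };
      });
  },
};

// Two-turn sample session: one user prompt, one assistant reply.
const sample =
  '{"ts":"2025-01-01T10:00:00Z","role":"user","text":"Fix the flaky test"}\n' +
  '{"ts":"2025-01-01T10:00:05Z","role":"assistant","text":"Here is a patch"}';

const records = jsonlAdapter.parseSessionLog(sample);
if (records.length !== 2) throw new Error("expected 2 records");
```

Under this (assumed) shape, supporting a new tool means writing one adapter that translates that tool's session log into the shared record format; everything downstream, including side-by-side review, stays unchanged.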
Fork or contribute on GitHub
Ready to run your first Prompt & Code Review?