From Junior to Senior: Allocating Agency and Navigating Professional Growth in Agentic AI-Mediated Software Engineering
CHI 2026 — ACM Conference on Human Factors in Computing Systems
Barcelona, Spain · April 13–17, 2026
Abstract
Juniors enter the workforce as AI natives; seniors adapted to AI mid-career. AI is not just changing how engineers code: it is reshaping who holds agency across work and professional growth.
We contribute junior and senior engineers' accounts of their use of agentic AI through a three-phase mixed-methods study: ACTA combined with a Delphi process with 5 seniors, an AI-assisted debugging task with 10 juniors, and blind reviews of junior prompt histories by 5 additional seniors. We found that agency in software engineering is constrained primarily by organizational policies rather than individual preferences: experienced developers maintain control through detailed delegation, while novices oscillate between over-reliance and cautious avoidance. Seniors leverage pre-AI foundational instincts to steer modern tools, and their perspective makes them valuable mentors for juniors navigating early, AI-shaped career development.
Synthesizing these results, we suggest three practices for preserving agency in software engineering, spanning coding, learning, and mentorship, especially as AI grows increasingly autonomous.
Three suggested practices
The paper synthesizes its results into three evolving practices for AI-mediated software engineering. Together they aim to preserve agency in coding, learning, and mentorship as AI grows increasingly autonomous.
Preserving Individual Agency
Company-wide and personal practices for using AI tools — incremental changes, interrupting outputs, verifying claims — that help engineers stay accountable for the code that ships under their name.
Evolving the Mentorship Pipeline
Senior engineers as Socratic guides and organizational anchors, transmitting intuition and judgment so juniors retain agency over their professional growth in AI-mediated workflows.
Prompt & Code Reviews (PCRs)
A collaborative practice that structures AI interactions for accountability: juniors document and justify key prompts, seniors oversee the process, and both stay authors of their reasoning.
Method
A three-phase mixed-methods study with 20 professional software engineers across Big Tech, FinTech, DevTools, Enterprise SaaS, and HealthTech. Phases combined semi-structured interviews, structured elicitation (ACTA, Delphi), and task-based activities.
Phase 1
Senior expertise elicitation
5 senior engineers. ACTA + Delphi to surface tacit knowledge and converge on a realistic debugging scenario.
Phase 2
Junior debugging task with Cursor
10 junior engineers debug a React admin panel with three latent bugs using Cursor in Agent or Ask mode. Postmortems, NASA-TLX, SMEQ, reflective interviews.
Phase 3
Senior blind review of junior artifacts
5 different senior engineers review anonymized junior code, prompt histories, and reflections — mirroring real mentorship constraints.
Findings
Four research questions on agency, professional growth, mentorship, and the role of AI records in code review and learning.
RQ1
Agency is preconfigured at the organizational layer before individual preferences matter.
In high-familiarity tasks, both groups stayed in control through detailed delegation. In low-familiarity tasks, juniors oscillated between over-reliance and defensive resistance, while seniors separated design from generation and used AI for sense-making.
RQ2
Seniors steer modern tools with pre-AI foundational instincts; juniors gain speed but report fragile understanding and impostor syndrome.
After a single agentic debugging session, 6 of 10 juniors reported lower confidence in their ability to code without AI. Seniors framed growth around trial-and-error and strong mental models — and warned that AI cannot replace the experience of building those models yourself.
RQ3
AI is an accessible fallback mentor for basic guidance; humans remain indispensable for context and judgment.
Both groups treated AI as an always-available first responder for narrow questions, while reserving senior engineers for organizational context, why-questions, and the kind of correction that catches an AI confidently leading a junior toward the wrong solution.
RQ4
Prompt histories give seniors a window into junior thinking that diffs alone cannot.
In 12-minute review sessions seniors caught over-reliance patterns, missed verifications, and weak prompting habits that would have been invisible from the code review alone — and inferred junior competence from prompt quality and specificity.
In their words
Selected quotes from study participants. Pseudonyms have been removed; full attributions appear in the paper.
“It has my name on it, but I have no idea why it works.”
Junior engineer, on AI-generated code
“When you’re coding with agents, it’s like you’re just free-falling.”
Junior engineer, on scope control with agentic AI
“At no point can you hand over your expertise. You’re just handing over the workload.”
Senior engineer, on delegation to AI
“I feel like junior engineers just have no intuition. I am routinely pinged on stuff like ‘hey, this thing doesn’t work’ with zero follow-ups on a speculation.”
Senior engineer, on AI and junior judgment
From research to practice
Prompt & Code Reviews are the practice this paper coined and validated. They operate under the principle that accountability preserves agency: when engineers must document and defend their prompting strategies, they remain authors of their reasoning even when AI generates the code.
PCR.dev is the platform we built to make that practice low-friction in real teams. The CLI captures prompts as drafts on each engineer’s machine; the dashboard turns sealed bundles into reviewable artifacts, with inline comments alongside the diff each prompt produced. Reading the paper is the best way to understand the why; the product is the simplest way to try the how.
Authors
Feng and Yun contributed equally to this work (author order was decided by a coin toss).
Cite this work
If our research informs your work, please cite the CHI 2026 paper. Both the ACM reference format and a BibTeX entry are below.
ACM Reference Format
Dana Feng, Bhada Yun, and April Yi Wang. 2026. From Junior to Senior: Allocating Agency and Navigating Professional Growth in Agentic AI-Mediated Software Engineering. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI '26), April 13–17, 2026, Barcelona, Spain. ACM, New York, NY, USA, 24 pages. https://doi.org/10.1145/3772318.3791642
BibTeX
@inproceedings{feng2026junior,
author = {Feng, Dana and Yun, Bhada and Wang, April Yi},
title = {From Junior to Senior: Allocating Agency and Navigating Professional Growth in Agentic AI-Mediated Software Engineering},
booktitle = {Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems},
series = {CHI '26},
year = {2026},
isbn = {979-8-4007-2278-3},
location = {Barcelona, Spain},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3772318.3791642},
doi = {10.1145/3772318.3791642},
numpages = {24},
}
Acknowledgments
The authors thank the anonymous reviewers, the senior and junior software engineers who participated, and the many referrers who dedicated their time, insights, and networks to this study.
Ready to run your first Prompt & Code Review with your team?