March 18, 2026

# Redefining the Software Engineering Profession for AI

> [!NOTE]- Simple schematic summary
>
> ## Core idea
>
> Generative AI is **radically increasing the productivity of senior engineers** while **hurting early-in-career (EiC) developers**.
> If companies stop hiring and training junior talent, **the pipeline of expert engineers collapses** in the medium term.
>
> The proposed solution: **keep hiring juniors and redesign the training model around "preceptorship" (structured mentoring)** supported by AI.
>
> ---
>
> ## 1. AI has broken the traditional economics of software
>
> - *Agentic coding assistants* multiply the productivity of experienced engineers.
> - Human work shifts from **writing code** to **directing, validating, and integrating** AI-generated code.
> - Real examples (Microsoft):
>   - Huge projects with **98% of the code generated by AI**.
>   - Small teams shipping complex products in weeks.
>
> The result is a **perverse incentive**:
>
> > "Hire seniors, automate juniors."
>
> ---
>
> ## 2. AI behaves like an "unreliable intern"
>
> - Agents write code that:
>   - Hides bugs (e.g., a `sleep` that masks *race conditions*).
>   - Introduces fragile solutions or hacks.
>   - Wrongly believes it has solved the problem.
> - Only **engineers with deep judgment** detect and correct these failures.
> - A junior developer **does not yet have the context or intuition** to steer the AI well.
>
> Key conclusion:
> **Programming is not the same as software engineering.**
> AI does not replace human judgment.
>
> ---
>
> ## 3. "Seniority-biased technological change"
>
> - AI amplifies those who **already have experience**.
> - Employment data:
>   - After GPT-4, employment of 22–25-year-olds in AI-exposed roles fell ~13%.
>   - Senior roles grew.
> - Risk: an **inverted pyramid**
>   - Fewer juniors → fewer future technical leaders.
>   - Loss of "systems taste," architectural intuition, and operational judgment.
>
> ---
>
> ## 4. Cognitive risk: delegating prevents learning
>
> - Studies (MIT, 2025): heavy ChatGPT use → **"cognitive debt."**
> - Ethan Mollick:
>   - If the AI does the work, **you lose the chance to develop the judgment needed to evaluate it**.
> - The key educational problem:
>
> > How do you train someone to verify expert work they have never mastered?
>
> ---
>
> ## 5. The solution: preceptorship "at scale"
>
> ### What the authors propose
>
> A **formal preceptor program**:
>
> - Senior engineers trained as mentors.
> - Each preceptor guides **3–5 EiC developers** inside real teams.
> - Explicit goal: **learning, not just producing**.
>
> ### The junior's role
>
> - Takes part in:
>   - Prompting
>   - Debugging
>   - Code reviews
> - Their key contribution is **learning in context**, not speed.
>
> ### The senior's role
>
> - Externalizes their reasoning.
> - Turns daily work into **teachable moments**.
> - Takes explicit responsibility for growing talent.
>
> ---
>
> ## 6. How AI should help
>
> - An "EiC mode" in coding assistants:
>   - Socratic coaching before generating code.
>   - Explanation of its reasoning.
>   - Questions, quizzes, gap detection.
> - Closer to **Khanmigo** than to an "automatic generator."
> - Preceptors should be able to review chat histories to track progress.
>
> ---
>
> ## 7. Final conclusion
>
> - AI redefines software, but it **does not replace human learning**.
> - Optimizing only for efficiency today **destroys capability tomorrow**.
> - The future of software is measured not by:
>   - Lines of AI-generated code,
>   - but by **how human judgment matures alongside AI**.
>
> 👉 Investing in juniors through **deliberate preceptorship** is essential to:
>
> - Preserve the profession,
> - Secure the next generation of senior engineers,
> - Balance automation and learning.

---

> [!example]- Full structured summary
>
> *Per section: concise summary + structured summary + reflection*
>
> ## 0. Document index
>
> 1. Core idea (opening summary)
> 2. AI has broken the economics of software / The AI Boost
>    - Agentic coding assistants
>    - Examples (Project Societas, Aspire)
>    - The engineer's changing role
> 3. AI as an "unreliable intern"
>    - Failure examples (race conditions, hacks, etc.)
>    - Limitations of the model
> 4. Organizational pyramid and structural change
>    - Traditional model
>    - Seniority-biased technological change
>    - Employment data
> 5. Cognitive risk and learning
>    - Cognitive debt (MIT)
>    - Mollick's reflection
> 6. The educational problem
>    - Verifying without having learned
> 7. Solution: preceptorship
>    - Definition of the model
>    - Roles (junior / senior)
>    - Organization
> 8. AI as a learning tool
>    - "EiC mode"
>    - Socratic coaching
> 9. Conclusion
>
> ---
>
> ## 1. Global summary
>
> Generative AI is dramatically increasing the productivity of senior engineers while reducing the immediate value of junior profiles, creating an incentive to stop hiring them. This breaks the traditional model of talent growth and puts the future of the profession at risk. The problem is not technical but educational: if you delegate to the AI, you do not develop judgment. The solution is to redesign the training model around structured mentoring (preceptorship) supported by AI. The future depends not on how much code is generated, but on how human judgment evolves.
>
> ---
>
> ## 2. Core idea
>
> ### Concise summary
>
> AI creates a paradox: it improves productivity today but threatens the talent of tomorrow.
> If juniors are not hired and trained, the pipeline of senior engineers disappears. The solution is to invest deliberately in learning, not only in efficiency.
>
> ### Structured summary
>
> - AI increases senior productivity
> - Reduces the junior's immediate value
> - Business incentive:
>   - Hire seniors
>   - Automate juniors
> - Risk:
>   - Collapse of the talent pipeline
> - Solution:
>   - Keep hiring juniors
>   - Design explicit learning
>   - Preceptorship + AI
>
> ### Reflection
>
> Here is the uncomfortable core: **AI does not destroy the job; it destroys the process of becoming good at that job**. It is the equivalent of learning to drive by watching someone else drive. It works… until you have to react yourself. Companies are optimizing the wrong variable: immediate output instead of future capability.
>
> ---
>
> ## 3. AI has broken the economics of software (the AI Boost)
>
> ### Concise summary
>
> Coding assistants multiply the productivity of senior engineers. The work shifts from writing code to directing and validating AI-generated code. Small teams can build complex systems in weeks.
>
> ### Structured summary
>
> - Agentic coding assistants:
>   - Interpret goals
>   - Generate, test, and refine code
> - Impact:
>   - Productivity multiplied several times over
> - Role changes:
>   - From programming → to directing
> - Examples:
>   - Project Societas:
>     - 7 engineers
>     - 110,000 lines of code
>     - 98% AI-generated
>   - Aspire:
>     - PRs generated by agents
>     - Humans + AI working in a loop
> - Result:
>   - More speed
>   - More parallelism
>   - Lower cost of experimentation
>
> ### Reflection
>
> This is not an incremental improvement; it is a change of interface. Code stops being the work and becomes the result. The problem is that when the work changes, how you learn also changes.
> And here the conflict begins: if you no longer write code, how do you develop technical judgment?
>
> ---
>
> ## 4. AI as an "unreliable intern"
>
> ### Concise summary
>
> AI generates code with subtle bugs, hacks, and bad practices. It may appear to work while hiding serious problems. Only an experienced engineer can detect them.
>
> ### Structured summary
>
> - Typical problems:
>   - Hidden bugs
>   - Hacks (e.g., a sleep over a race condition)
>   - Code that does not generalize
>   - Poor architecture
> - Behavior:
>   - Believes it has solved the problem
>   - Can justify incorrect solutions
> - Key limitation:
>   - Has neither intuition nor context
> - Conclusion:
>   - Programming ≠ engineering
>
> ### Reflection
>
> AI does not fail the way a human does. It fails convincingly. And that is far more dangerous. A junior does not see the error. A senior does. That is why the value lies not in producing code but in judging it. And that judgment cannot be outsourced without consequences.
>
> ---
>
> ## 5. Organizational pyramid and structural change
>
> ### Concise summary
>
> AI favors senior profiles and reduces the need for juniors, breaking the traditional pyramid. This produces a system without a base and without talent renewal.
>
> ### Structured summary
>
> - Traditional model:
>   - Many juniors
>   - A few seniors
>   - Typical ratio: 10:1
> - Change with AI:
>   - Amplifies seniors
>   - Reduces the value of juniors
> - Data:
>   - −13% employment among the young (22–25)
>   - Growth in senior roles
> - Risk:
>   - Inverted pyramid
>   - Lack of future technical leaders
>   - Loss of:
>     - Systems thinking
>     - Architectural intuition
>
> ### Reflection
>
> This is not only a talent problem; it is a problem of system sustainability. It is like a club that decides not to run a youth academy because it can sign stars. It works… until it stops working. And by then it is too late.
>
> ---
>
> ## 6. Cognitive risk and learning
>
> ### Concise summary
>
> Heavy AI use reduces learning and creates "cognitive debt." **Delegating to the AI prevents you from developing the judgment needed to evaluate it.**
>
> ### Structured summary
>
> - MIT study:
>   - Lower brain activity
>   - Lower retention
> - Concept:
>   - Cognitive debt
> - Mollick:
>   - Delegating = losing learning
> - Problem:
>   - No experience → no judgment
>   - No judgment → you cannot verify the AI
>
> ### Reflection
>
> This is the deepest point in the document. It is not a productivity problem; it is an intelligence problem. If you outsource thinking too early, you lose the ability to think. And worst of all: you do not notice.
>
> ---
>
> ## 7. The educational problem
>
> ### Concise summary
>
> A key question emerges: how do you train someone to verify expert work they have never learned to do?
>
> ### Structured summary
>
> - Paradox:
>   - The AI does the work
>   - The user does not learn
> - Problem:
>   - Lack of mastery
>   - Dependence on the AI
> - Result:
>   - Weak verification
>   - Incorrect decisions
>
> ### Reflection
>
> This connects directly to education, business, and society. We are training "AI operators," not professionals. And that completely changes the average quality of everything that gets produced.
>
> ---
>
> ## 8. Solution: preceptorship
>
> ### Concise summary
>
> The authors propose a structured mentoring model in which seniors directly guide juniors in real environments, with learning as an explicit goal.
>
> ### Structured summary
>
> - Model:
>   - Preceptors (trained seniors)
>   - Ratio: 3–5 juniors per senior
> - Goal:
>   - Learn, not just produce
> - Junior's role:
>   - Prompting
>   - Debugging
>   - Code review
> - Senior's role:
>   - Externalize their thinking
>   - Teach in context
>   - Guide judgment
>
> ### Reflection
>
> This is the key point: **learning stops being implicit and becomes designed. You used to learn "by being there." Now it has to be deliberately forced. It is an enormous cultural change**, because it implies that producing less today is necessary to produce better tomorrow.
>
> ---
>
> ## 9. AI as a learning tool
>
> ### Concise summary
>
> AI should evolve toward an educational mode that encourages thinking, not just code generation.
>
> ### Structured summary
>
> - "EiC mode":
>   - Socratic coaching
>   - Explanations
>   - Questions
>   - Assessment
> - Inspiration:
>   - Khanmigo
> - Features:
>   - Progress tracking
>   - Gap identification
>   - Mentor oversight
>
> ### Reflection
>
> This connects directly with what you work on: it is not Copilot as a generator, it is Copilot as a coach. The difference is enormous. One substitutes. The other amplifies. And that decision defines the kind of professional you build.
>
> ---
>
> ## 10. Conclusion
>
> ### Concise summary
>
> AI redefines engineering, but it does not replace human learning. Optimizing only for efficiency destroys the future of talent. The key is to balance automation and training.
>
> ### Structured summary
>
> - AI:
>   - Increases productivity
>   - Does not replace judgment
> - Risk:
>   - Emptying the talent pipeline
> - Solution:
>   - Preceptorship
>   - Educational AI
> - Key metric:
>   - Maturity of human judgment
>
> ### Reflection
>
> The final idea is simple: AI is not the problem; it simply amplifies the underlying problem you already have. If you use it to avoid thinking, it makes you irrelevant.
> If you use it to learn faster, it makes you more powerful. The difference is not in the technology; it is in how you decide to use it.

---

Below is the complete original article. The original was published here --> https://dl.acm.org/doi/10.1145/3779312

# Redefining the Software Engineering Profession for AI, Russinovich & Hanselman, February 2026

Generative AI has fractured the economics of software engineering. Agentic coding assistants now give senior engineers an *AI boost*, multiplying their throughput, while imposing an *AI drag* on early-in-career (EiC) developers who lack the judgment and context to steer, verify, and integrate AI output. The result is a new incentive structure: Hire seniors, automate juniors. But without EiC hiring, the profession's talent pipeline collapses, and organizations face a future without the next generation of experienced engineers. Our thesis is simple: We must keep hiring EiC developers, accept that they initially *reduce* capacity, and deliberately design systems that make their growth an explicit organizational goal. The path forward is a culture of *preceptorship at scale*. We must enable senior mentorship with AI systems that capture reasoning, surface misconceptions, and turn daily work into teachable moments for EiC developers. This article explores how such systems can close the training gap and preserve the craft of software engineering in the age of AI.

## The AI Boost

The past year has marked a sharp turning point in software engineering productivity. *Agentic coding assistants*, systems that interpret goals, reason across repositories, and iteratively generate, test, and refine code, are reshaping what small teams can achieve. Internal data and independent studies now show that experienced developers using these tools can complete complex tasks several times faster, with order-of-magnitude improvements increasingly common.
In Microsoft's Project Societas, the project name for their new Office Agent, seven part-time engineers delivered a consumer-ready preview in just 10 weeks, producing more than 110,000 lines of code, 98% of it AI-generated. Human work shifted from authoring to *directing*: specifying goals, verifying correctness, and integrating the agentic output into a coherent system.

Aspire is another large system that shows how this transformation unfolds in practice and changes how engineering teams work. Teams moved through distinct phases, first using chat assistants locally, then allowing coding agents to open pull requests, and eventually operating in *human-agent swarms* where every pull request (PR) was shippable and review became a shared dialogue between people and machines. The work happened in long GitHub PRs where senior engineers discuss the architectural goals while the coding agent provides solutions. The result was a faster feedback loop, higher parallelism, and drastically lower opportunity cost for experimentation.

**The agentic engineering intern.** While AI is boosting software development, examples of frontier coding agents exhibiting intern-like behaviors demonstrate their limitations, and show how an EiC developer might have difficulty spotting the problems or guiding the agents away from suboptimal designs and erroneous conclusions. In Figure [1](https://dl.acm.org/doi/10.1145/3779312#F1), the agent has inserted a *sleep* into code that was crashing because of a race condition. This type of change only masks an underlying complex synchronization bug, but an EiC developer might consider it an effective fix if the race no longer surfaces in tests.

![Figure 1: Inserting a sleep into code.](https://dl.acm.org/cms/10.1145/3779312/asset/98698955-c49a-4e33-9ad4-c32c8c886b1c/assets/images/large/3779312_fig01.jpg)

The agent even has trouble explaining its rationale for inserting the delay, which does not actually reduce the risk of the race in this case.
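The failure mode described above, a sleep that merely changes timing without removing the race, can be sketched in a few lines. This is a hypothetical illustration, not code from the article: `increment_unsafe`, `increment_locked`, and `run` are invented names, and the real fix is shown here as a lock around the shared update.

```python
import threading

# Hypothetical sketch (not from the article): an unsynchronized
# read-modify-write is a race condition. Inserting a sleep() only changes
# the timing so the race hides in tests; a lock actually removes it.

def increment_unsafe(state, n):
    for _ in range(n):
        tmp = state["count"]      # read
        state["count"] = tmp + 1  # write -- another thread may have run in between

def increment_locked(state, lock, n):
    for _ in range(n):
        with lock:                # the real fix: mutual exclusion
            state["count"] += 1

def run(target, *extra, workers=4, n=50_000):
    state = {"count": 0}
    threads = [threading.Thread(target=target, args=(state, *extra, n))
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["count"]

print(run(increment_locked, threading.Lock()))  # always 200000
# run(increment_unsafe) may lose updates on any given run; a sleep() can
# make the loss stop showing up without fixing anything.
```

The point the article's Figure 1 makes is exactly the gap between the two functions: only an engineer who understands the read-modify-write window can tell a timing band-aid from a correctness fix.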
Upon being challenged, it admits its reasoning was flawed (Figure [2](https://dl.acm.org/doi/10.1145/3779312#F2)), but AI can also conclude that correct reasoning is wrong when challenged by a user's suggestion that it might be incorrect.

![Figure 2: The agent admits it was wrong.](https://dl.acm.org/cms/10.1145/3779312/asset/fade5f3b-8932-4684-8548-48fd463767a7/assets/images/large/3779312_fig02.jpg)

Only an engineer familiar with synchronization protocols, the synchronization primitives in use, and the architecture of the code can have the confidence to point out the agent's mistakes and the insight necessary to guide it in a correct direction. Progress in many of these cases requires the user to tell the agent how to proceed. In Figure [3](https://dl.acm.org/doi/10.1145/3779312#F3), for example, the user guides the agent to insert sleeps that will induce the code to exhibit a race condition for more reliable debugging.

![Figure 3: User guides the agent to insert sleeps.](https://dl.acm.org/cms/10.1145/3779312/asset/8d89a55c-dec8-42eb-8fdb-10bd2c13e0cf/assets/images/large/3779312_fig03.jpg)

There are dozens of examples like this from multiple agentic AI projects that show the model claiming success when the code had significant bugs, implementing inefficient algorithms, duplicating common code throughout the code base, dismissing crashes and hangs as irrelevant to the task at hand, leaving debug code behind, taking shortcuts with hacks that make code work for specific tests but that don't generalize, and more.

Although AI agents are advancing rapidly, human expertise remains essential in software development. Programming is not software engineering. Even the most reliable systems cannot fully replace the judgment, creativity, and adaptability required to handle uncertainty, make complex decisions, and maintain security. While agents can speed up workflows and reduce manual effort, they lack the intuition to anticipate edge cases and build robust solutions.
Relying too much on AI risks missing subtle bugs, architectural flaws, and vulnerabilities that only skilled engineers can catch. Human oversight, critical thinking, and domain knowledge are indispensable both for correcting errors and for driving innovation as technology progresses.

**The narrowing pyramid hypothesis.** Traditional software engineering organizations hire EiC developers to augment the capacity of the organization by having them take on relatively simple bug fixes and coding tasks. In performing these tasks, they gain experience and become familiar with the coding standards of a project, as well as its architecture, implementation, build, and test systems. Those with the desire and capability rise to become tech leads, who own more complex tasks that span broader portions of a system and delegate tasks to the EiCs. Ratios of EiCs to leads are commonly on the order of 10:1.

![Figure 4: Traditional software engineering organization.](https://dl.acm.org/cms/10.1145/3779312/asset/17e59f52-829d-4f58-b953-56f7238a9bc9/assets/images/large/3779312_fig04.jpg)

Generative AI currently acts as seniority-biased technological change: it disproportionately amplifies engineers who already possess systems judgment, such as taste for architecture, debugging under uncertainty, and operational intuition. EiC developers who lack hard-won systems knowledge will struggle to contribute in an AI-driven environment. Labor data shows that after GPT-4's release, employment of 22–25-year-olds in highly AI-exposed jobs (like software development) fell by roughly 13%, even as senior roles grew. A recent study from Harvard, "Generative AI as Seniority-Biased Technological Change: Evidence from U.S.
Résumé and Job Posting Data,"[^3] observes that AI already seems to be creating a form of "seniority-biased technological change": AI is amplifying senior talent but risks leaving new talent behind, creating a lopsided organization and a shrinking "base of the pyramid." The old model of large teams of mid-level and junior developers adding incremental features is now under economic pressure. Left unchecked, fewer EiCs will gain "systems taste," architectural intuition, and operational savvy, eroding code quality and slowing innovation.

Ethan Mollick observes in his post, "[On Working with Wizards](https://www.oneusefulthing.org/p/on-working-with-wizards?utm_source=substack%26utm_medium=email)":

> *Second, we need to become connoisseurs of output rather than process. We need to curate and select among the outputs the AI provides, but more than that, we need to work with AI enough to develop instincts for when it succeeds and when it fails. We have to learn to judge what's right, what's off, and what's worth the risk of not knowing. This creates a hard problem for education: How do you train someone to verify work in fields they haven't mastered, when the AI itself prevents them from developing mastery? Figuring out how to address this gap is increasingly urgent.*[^2]

The solution is not to assume EiCs will benefit from the same productivity gains as seniors, but to deliberately hire and invest in them. That means giving them direct exposure to debugging, design trade-offs, implementation, and build systems: the fundamentals needed to critically evaluate AI output. Practitioners must grow when exposed to AI or this is all for naught. Again, [Ethan Mollick](https://www.oneusefulthing.org/p/on-working-with-wizards):

> *…every time we hand work to a wizard, we lose a chance to develop our own expertise; to build the very judgment we need to evaluate the wizard's work.
> We're getting something magical, but we're also becoming the audience rather than the magician, or even the magician's assistant.*[^2]

The new model must allow both seniors and juniors, the experienced alongside the early-in-career, to learn, not just produce. Senior mentors should assess weaknesses and guide focus areas, while AI serves as an accelerant, not a crutch.

**The preceptor program.** To meet the challenge of developing EiC developers in an AI-driven environment, we propose a preceptor program that pairs EiC developers directly with experienced mentors in real product teams. *Preceptors guide and grow practitioners*, teaching them how to direct agentic AI tools, develop critical judgment, and learn the production function of senior engineers. This approach ensures that learning, not just throughput, is a core part of engineering in the age of AI.

Research from MIT in early 2025 observed "cognitive debt" in adults who used ChatGPT to write SAT-style essays, noting reduced brain activity compared to those who wrote unaided, as well as lower recall minutes afterward.[^1] Direct engagement is associated with more effective learning outcomes. By training EiC developers specifically for an AI-powered environment (learning fundamentals, understanding AI's strengths and weaknesses, and developing judgment about when to trust or override), we preserve the long-term health of our engineering workforce. This intentional investment keeps the pyramid strong from base to peak, but with a base that is focused on refreshing senior talent rather than augmenting the productivity of the organization.

For AI-accelerated teams, the principles of judgment and sensitivity to "code smell" become essential. EiC developers should not be shielded from the problem-solving process; they should be invited into all aspects, helping with prompting, debugging, and reviewing alongside their mentors so they can see how expertise interacts with the AI.
Their contribution is not raw velocity but learning in context: surfacing misconceptions, asking why the agent's output fails, and gradually internalizing the reasoning their preceptors already take for granted. Senior engineer preceptors, in turn, have a responsibility to externalize their senior judgment, helping turn expertise into teachable moments with the goal of converting the "AI drag" of inexperience into the next generation's capacity for discernment.

Preceptorship carries a deliberate, professional weight: it conveys both assessment and accountability. It frames software engineering not as a fading craft in the era of AI, but as a profession where senior engineers have a responsibility to guide those just beginning their practice. Preceptors form a trained subset of the senior pool, each capable of mentoring three to five EiCs. With an effectively unlimited inference budget, these pairs can experiment freely: starting small, iterating quickly, learning continuously, and scaling as the program matures.

![Figure 5: Preceptor-based organization.](https://dl.acm.org/cms/10.1145/3779312/asset/c35f23ad-8aca-40e7-85b0-d1c6103d93ac/assets/images/large/3779312_fig05.jpg)

To support learners and provide information to preceptors, coding assistants may benefit from an explicit *EiC mode* that defaults to *Socratic coaching before code generation*. Andrej Karpathy, in a recent interview, said: "*[As an educator,] I'm not going to present the solution before you guess. That would be wasteful…to present you with the solution before I give you a shot to try to come up with it yourself.*" The coding assistant, much like Khan Academy's *Khanmigo* does for math and science, should challenge the learner, explain its code-generation process, quiz the learner on key concepts and decisions, and actively track their strengths and weaknesses throughout their interactions.
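The gating behavior such an *EiC mode* implies can be sketched as a thought experiment. Everything here is hypothetical illustration, not an API from the article: the `EiCSession` class, its `ask` method, and the `gaps` list are invented names for the idea of withholding a solution until the learner attempts one, and recording skipped attempts for the preceptor.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an "EiC mode" gate: the assistant withholds its
# generated solution until the learner has offered an attempt, and records
# skipped attempts as knowledge gaps for the preceptor to review.

@dataclass
class EiCSession:
    attempts: list = field(default_factory=list)
    gaps: list = field(default_factory=list)  # surfaced to the preceptor

    def ask(self, task, solution, learner_attempt=None):
        if learner_attempt is None:
            # Socratic first pass: coach, don't solve.
            return f"Before I write code for {task!r}: how would you approach it?"
        self.attempts.append((task, learner_attempt))
        if not learner_attempt.strip():
            self.gaps.append(task)  # the learner skipped the attempt
        return solution             # only now reveal the generated code

session = EiCSession()
print(session.ask("fix the race condition", "<generated patch>"))
print(session.ask("fix the race condition", "<generated patch>",
                  learner_attempt="guard the counter with a lock"))
```

The design choice the article argues for lives in that one branch: the default path teaches, and code generation is the second step, with the interaction history available for mentor review.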
Preceptors should be able to review chat logs from learners to monitor progress, provide focused guidance, and address knowledge gaps. This ensures assistants not only support code generation but also foster learning and effective mentorship. The ideal learner-to-preceptor ratio is estimated to be between 3:1 and 5:1, depending on software complexity, learner experience, and preceptor involvement. Programs are expected to run for at least a year, possibly longer, based on needed skills and product complexity.

## Conclusion

Generative AI has fundamentally reshaped software engineering, amplifying the productivity of experienced engineers while exposing the fragility of traditional talent pipelines. If organizations focus only on short-term efficiency, hiring those who can already direct AI, they risk hollowing out the next generation of technical leaders. Sustaining the discipline requires intentional design for growth: embedding structured mentorship and preceptorship into daily work, and equipping AI systems to teach through Socratic dialogue and guided reasoning. The future of software engineering will be defined not by the volume of code AI can generate but by how effectively humans learn, reason, and mature alongside these systems. Investing in early-in-career developers through deliberate preceptorship ensures today's expertise becomes tomorrow's intuition. In balancing automation with apprenticeship, we preserve the enduring vitality of the software engineering profession.

**Mark Russinovich** is Azure CTO, Deputy CISO, and Technical Fellow at Microsoft Azure, Redmond, WA, USA.

**Scott Hanselman** is VP, Developer Community at Microsoft CoreAI, Portland, OR, USA.

[^1]: Kosmyna, N. et al. Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. *arXiv* (2025).

[^2]: Mollick, E. On working with wizards. *One Useful Thing* (Sept. 10, 2025); https://www.oneusefulthing.org/p/on-working-with-wizards

[^3]: Massoum, S.M.H. and Lichtinger, G. Generative AI as seniority-biased technological change: Evidence from U.S. résumé and job posting data. *SSRN*; https://ssrn.com/abstract=5425555 or https://doi.org/10.2139/ssrn.5425555

---

Published on March 18, 2026: [X](https://x.com/dhtoran/status/2034151285503520908?s=20), [LinkedIn](https://www.linkedin.com/pulse/redefiniendo-la-profesi%25C3%25B3n-de-ingeniero-software-david-hurtado-tor%25C3%25A1n-zb14e), [Substack](https://open.substack.com/pub/davidhurtado/p/redefiniendo-la-profesion-de-ingeniero?r=4uyjfg&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true)