The AI Engineer

On the craft, dependency, and hard-won judgment required to become a serious engineer in the age of AI.

Consider what it means to become a master chef.

You can’t get there from cookbooks. You can approximate dishes, follow instructions carefully, produce results that are recognisable and sometimes good. But a cookbook is a record of someone else’s understanding, not a transfer of it. The knowledge that makes a chef, when the heat is right by the sound of the pan, what the dough is telling you through your hands, why this combination of flavours works and that one almost works and the distance between them isn’t in any recipe, that knowledge lives in the body and the judgment and the ten thousand hours of paying close attention to things going wrong. It can only be accumulated through the specific texture of doing the thing badly and then less badly and then well enough and then, for some, with a fluency that looks effortless from the outside and is anything but.

This is the arc of novice to master in any domain humans have ever seriously pursued. The martial artist who can feel an opponent’s weight shift before the movement begins. The carpenter who knows from the resistance of the saw whether the grain is running toward or away. The musician who hears the room and adjusts without thinking. In every case the knowledge that matters most has no propositional form. It exists only as cultivated perception, as judgment refined through years of being wrong in ways that taught something.

Software development has always been this kind of domain, though it hasn’t always been recognised as one. The tools look like intellect, reading and writing and logical reasoning, and so the knowledge looks like it should be transferable through text, through documentation, through courses and books. Some of it is. But the part that determines whether someone becomes genuinely serious is the part that isn’t: the feel for when a design is wrong before you can prove it, the sense that an abstraction is lying to you, the judgment that knows which problems to solve completely and which to leave alone. That knowledge accumulates the same way it does for the chef: through years of paying close attention to things going wrong.

There was a shape to a software career, for those who wanted to go somewhere in it. Not guaranteed, not fast, but legible.

You started knowing almost nothing. You were given small, well-defined tasks where the blast radius of your mistakes was limited. You broke things and fixed them and broke them again. You read other people’s code and were confused by it and then understood it and then understood why it was wrong. You were wrong often. Being wrong was the mechanism.

With time spent in genuine difficulty, you became someone who could be trusted to execute without close supervision. You had accumulated enough failures to recognise what failure looked like before it arrived, enough pattern recognition to move quickly on familiar ground and slowly on unfamiliar ground, and the judgment to tell the difference. This was the intermediate developer: not someone who had learned a certain number of things, but someone who had developed a quality of judgment about what they didn’t know.

Seniority was something else again. A senior engineer wasn’t a faster intermediate. They could hold a system in their head, not just the code but the forces acting on it, the decisions that had calcified into constraints, the places where the abstraction was lying. They could read a design and feel where it would fail under load or time or requirements that had not been written yet. They guided the junior and intermediate not by telling them what to do but by teaching them what to notice. Their knowledge wasn’t a list of facts. It was a posture toward problems.

Staff and above was rarer still, and different in kind. These were people who could design systems at scale, which isn’t the same as designing systems and making them bigger. Designing at scale means reasoning about interactions between systems, about organisational forces that shape technical decisions as much as technical forces do, about what a choice made today forecloses five years from now. The knowledge at this level was drawn from a long career’s worth of projects that succeeded and projects that failed and the hard-won understanding of why, accumulated across teams and companies and technologies and decades.

This wasn’t an efficient process. The inefficiency was the point. The struggle was the mechanism by which understanding moved from the outside to the inside, from something you had been told to something you knew in the way that only repeated personal experience produces. Mentorship transmitted not just knowledge but the texture of knowledge, the feel for when a pattern applied and when it didn’t, the intuitions that had no propositional form, only an experiential one.

The question worth sitting with is whether this process still works. Whether it can. And if it can’t, what we are producing instead.


The Floor and the Ceiling

Every major abstraction layer in software history has done the same structural thing. It raised the floor and widened the base. It let more people build real things. And without exception, it turned the knowledge beneath it, the knowledge that used to be simply what everyone had to learn, into a specialisation that most new entrants never needed and never developed. The expertise didn’t become less important. It became less visible, and the people who held it became fewer.

Assembly programmers existed before C. When C arrived, it was criticised in exactly the terms you would expect: you don’t know what the machine is doing, you lose control, the abstraction costs you something. They were right. It did. The question was always whether the trade was worth it, and it usually was. C let more people build operating systems and compilers than assembly ever had. The assembly programmers didn’t disappear. They became a smaller, more specialised layer beneath the C programmers. The pyramid inverted: the higher-order layer became vastly more populous, the foundational layer thinner.

C gave way to C++, then Java and Python and Ruby, each step widening the base. Then JavaScript arrived and did the same thing with unusual violence. JavaScript brought programming to the browser, to people who were not programmers. It produced, over time, a generation of developers deeply fluent at the JavaScript layer and almost entirely unfamiliar with anything beneath it.

None of this is a value judgment. A skilled JavaScript engineer building production systems for millions of users is doing real work, and doing it well. But they aren’t the same as someone who came up through C and then learned JavaScript, carrying twenty years of mental models about memory, about the cost of operations, about what the machine is actually doing underneath. They are operating at the same layer in a technical sense. They aren’t the same engineer. And the new layer being stacked on top, the one that claims to be just another step in this progression, isn’t the same as the JavaScript-only layer any more than that layer was the same as C. Claiming otherwise is disingenuous.

Each new layer isn’t equivalent to the layers beneath it. Accessibility isn’t depth. The floor rising isn’t the ceiling rising.


Mise-en-Place

The current conversation about AI and software development wants to have it both ways. The tools are so powerful that anyone can build anything, and the developers using them are becoming more capable than ever. Both claims are made in the same breath, by the same people, without apparent awareness of the tension between them.

The better frame is the kitchen.

A serious kitchen doesn’t do everything from scratch in the moment. The stock was made yesterday. The vegetables were prepped this morning. The supplier delivered the things that don’t need to be made in-house. This is mise-en-place, everything in its place so the act of cooking can happen at the speed and quality that service demands. A chef who uses pre-chopped peppers isn’t a lesser chef. They understand what their time and attention are for.

This is what AI tooling can be, used well. The model that generates boilerplate, drafts a first pass, surfaces relevant documentation, that’s the supplier delivering ready-made stock. It frees the engineer to do the work only the engineer can do: the judgment calls, the architectural decisions, the places where the right answer isn’t in the training data because it is specific to this system, this constraint, this moment. The experienced engineer with good tools is a chef with excellent mise-en-place. The tools serve the judgment.

But a line cook isn’t a chef. A line cook has real skill, real speed, real value. They own a station and execute with precision within a defined and bounded role. The path from line cook to chef isn’t more line cooking. It is the specific, painful, broadening experience of being responsible for decisions no one has scripted, of holding the whole kitchen in your head, of understanding not just your station but how every station affects every other one.

What the current moment is producing, at scale, is line cooks who believe they are chefs because the kitchen has installed equipment that automates parts of what chefs used to do manually. The equipment is real. The productivity gain is real. What isn’t happening is the development of the judgment that makes someone a chef. That judgment doesn’t come from the equipment. It comes from the years before the equipment, and the disciplined insistence, even after the equipment arrives, on understanding what it is doing and why.

The risk isn’t that the microwave exists. The risk is a five-course meal made in the microwave, plated carefully, and served as though the process that produced it is equivalent to the process it replaced. Most diners will not know the difference. The ones who do will feel, somewhere, that something is missing, a texture, a depth, a quality of attention that no amount of reheating restores.


What the Arc Required

The career arc described above had a necessary condition easy to overlook because it was so constant it seemed like the natural state of things: long, uninterrupted periods of genuine struggle with a single hard problem.

Not multitasking. Not parallel workstreams. Not the modern rhythm of fifteen-minute standups and Slack threads and three concurrent tickets in flight. The kind of focus where you sat with a problem that didn’t yield, where the discomfort of not understanding wasn’t immediately relieved, where the silence where the answer should have been forced you to go looking for it yourself. The looking was where the learning lived.

The deep, felt engagement with a problem that refused to resolve was what moved understanding from the outside to the inside. From something you had read to something you knew in your hands. The struggle wasn’t a cost of the process. It was the process.

The environment most developers work in today is optimised against this condition at every level. The tooling rewards speed. The process rewards throughput. And now the model fills the silence, producing a plausible answer before the not-knowing has had time to do its work. The junior developer who would once have spent three hours stuck on a problem, accumulating understanding through the texture of being stuck, now has an answer in thirty seconds. The answer is usually adequate. And adequate is the enemy of depth in ways that wrong never was, because wrong teaches you something and adequate teaches you nothing except that you can move on.

This isn’t an argument that everything should be done from scratch. It never was. Frameworks exist for a reason. Libraries exist for a reason. The entire history of good engineering practice is a history of not reinventing the thing that has already been solved well, so that attention and effort can be directed at the thing that hasn’t. The argument for AI tooling follows the same logic, and it is a good argument. Boris Cherny, the head of Claude Code at Anthropic, said recently that coding is largely a solved problem. He meant it generously: the mechanical act of translating intent into working code is something the models can now do, freeing engineers to focus on the harder, more interesting work of deciding what to build and why.

But notice what makes that statement true for Boris Cherny specifically. He is a former principal engineer at Meta with decades of deep systems experience. When he uses Plan Mode and auto-accepts the output, the plan is being evaluated by someone whose judgment was built long before the tool existed. The tool serves his mastery. It accelerates a process of reasoning that he already owns. The question the statement doesn’t answer, and can’t answer, is what happens when the people using the tool never developed the mastery it is meant to serve.

Software isn’t a solved problem. Writing code may be approaching one, in the same way that chopping vegetables is a solved problem in a kitchen with good prep cooks. But the thing that determines whether the meal is worth eating was never the chopping. It was the knowledge of what to chop and why and how it fits into something larger. That knowledge has to come from somewhere, and the somewhere has always been the long, uncomfortable, inefficient process of learning through difficulty. If we optimise that process away, we aren’t making engineering more efficient. We are making it shallower and calling the shallowness speed.


The Plain Hamburger

The model produces statistically believable output. That’s both its power and its danger as a learning environment.

A developer early in their career, working with a language model as a primary thinking tool, gets answers that are grammatically correct, structurally plausible, and often functionally adequate. The answers look like the answers experienced engineers produce. They feel right. The developer ships them. The code runs. The feature ships.

But the developer didn’t learn what was wrong with the plan. They didn’t develop the capacity to distinguish between adequate and good, between good and excellent, between excellent and correct for this specific problem. They got the restaurant’s version of a hamburger: consistent, edible, forgettable. And they have no reference point that says this should be a different thing entirely.

Someone who grew up eating only chain restaurant hamburgers will recognise, when they encounter a great one, that something is different. What they will not have is the palate to evaluate the difference before tasting it, the capacity to look at a plan and say, before the work begins, this is going to be a problem.

That evaluation capacity is what depth gives you. It can’t be developed by repeatedly consuming adequacy.


The Language You Do Not Speak

There is an example that makes this problem harder to dodge than code or design, because it touches something most people already understand intuitively, even if they have never thought about it explicitly.

Imagine someone who has read a few Stephen King novels and enjoyed them. They have never written fiction. They open a language model and ask it to write a horror novel in King’s style. The model will produce something. It will probably be readable. It will have the surface markers, the small-town setting, the conversational narration, the slow build of dread. It may even, in places, inadvertently reproduce phrases and structures close enough to King’s own prose to raise questions about where pastiche ends and plagiarism begins. The models have, after all, demonstrated the ability to reproduce copyrighted works with unsettling fidelity when prompted in the right direction.

The result will be a book-shaped object. It will function as a book. Someone who has never read King might find it compelling. Someone who has read King extensively will feel, within a few pages, that something is missing, a quality of attention, a specificity of observation, a sense that the writer knows these people and this place in ways the words alone don’t fully convey. The difference between King and a King-shaped output is the difference between a writer with fifty years of watching how ordinary people behave under extraordinary pressure and a statistical model of what that watching produced on the page. The residue is similar. The knowledge is absent.

Now extend this. The person who has never written fiction and doesn’t speak Mandarin asks the model to produce a literary novel in Mandarin. Not a functional document. Not a business email. A work of literature, something that participates in the tradition of Chinese literary fiction, that carries the tonal and structural expectations of readers who grew up inside that language, that uses the particular capacities of written Mandarin to do things English can’t do and doesn’t try to.

Anyone who speaks more than one language knows that translation isn’t substitution. Languages don’t map onto each other word for word or even concept for concept. They encode different relationships between the speaker and the world, different assumptions about what needs to be said explicitly and what the silence is carrying, different textures of formality and intimacy and irony that have no equivalent in the other tongue. There is a reason new translations of Tolstoy continue to appear, generation after generation, not because the previous translators were incompetent but because the act of carrying meaning between languages requires a depth of feeling for both that no mechanical process can replicate. Something is always lost. What gets lost and what gets preserved is a function of the translator’s own understanding, and that understanding isn’t algorithmic. It is cultural and emotional and accumulated over a lifetime of inhabiting the space between two ways of thinking.

Most people would find this claim absurd on its face: the monolingual anglophone who has never written more than an email, announcing that AI will make them the next great Persian writer. They would laugh, not cruelly but with the recognition that you can’t produce great work in a medium you don’t understand, using a tool you can’t evaluate, in a tradition you have never inhabited. The output might be valid Farsi. It might even be fluent Farsi. It will not be Persian literature, because Persian literature isn’t Farsi arranged correctly. It is a living tradition that the writer must be inside of to participate in, and no tool provides that from the outside.

Hold that feeling. That instinctive recognition that the claim is absurd, that the tool can’t bridge the gap between the person and the tradition, that valid output and meaningful output are different things.

Now apply it.


The Door

Consider a door.

Any door. The one at the entrance to a building you visited once, years ago. You remember almost nothing about it, but you could reach for the handle right now, eyes closed, and not miss. You know roughly where it will be, hand height, right side, set back from the edge just enough for your fingers to close around it. You know which way it opens. Not because you memorised it but because ten thousand doors have built a body memory so complete that the knowledge lives below thought.

This is what design actually is. Not the visual surface but the accumulated understanding of how people move through the world, encoded into an object so completely that the object becomes invisible. A well-designed door communicates its entire interface in a single glance, pull here, push there, the weight of it telling you whether it wants to swing wide or ease shut. The accessibility button is placed where someone in a wheelchair can reach it without having already passed through the door. The signage is where your eyes go when you are lost. None of this is accidental. All of it is the product of deep knowledge about how bodies occupy space and how minds navigate uncertainty.

A model trained on a vast corpus of door designs could produce a door that gets most of these things right. The statistical centre of door design is a pretty good door. But ask it to design something that departs from convention because the problem demands it, a door for a space that has never existed, or one that must communicate something the standard vocabulary can’t say. Will it enforce the constraints it knows about without being prompted to? Will it surface the question of where a person’s hand will be before the question of what the door looks like? Will it carry the understanding that a door isn’t a visual object but a threshold, a moment of transition with its own anxiety and expectation and social meaning depending on whether it opens toward you or away?

And the human working with it, the one who accepted the first output because it looked right, will they know to ask? Will they feel the wrongness of a handle placed two inches too low, or a door that opens the wrong way for the space it serves, or an entrance that’s technically accessible but communicates in every visual cue that the accessible path is the secondary one?

Carry this through to every element of architecture. Take the row house that clearly had a designer: you can see the intention in the facade, the considered materials, the deliberate composition, and yet something is wrong with it. Wrong in a way you can’t name but can’t unfeel. The proportions make you slightly uncomfortable without knowing why. The entrance doesn’t connect to the street the way your body expects. The windows are in the right places visually and the wrong places experientially. An architect would name the problem in thirty seconds. Most people will live in it for years and feel vaguely unsettled and never know the source.

A chef understands this at the level of flavour, the balance of salt and acid and fat and sweetness and umami that makes a dish feel complete rather than merely edible. The understanding isn’t a recipe. It is a trained palate, a feel for proportion that operates below conscious thought, an ability to taste what is missing before anyone else at the table has noticed something is wrong. The architecture of a meal and the architecture of a building and the architecture of a sentence in a language you have spent a lifetime inside, these aren’t metaphors for each other. They are the same thing expressed in different materials: the accumulated, embodied understanding of what works and why, which no tool has ever contained and no tool will.

This is what design is becoming. It is what development is becoming too.


The Designer Who Was Never Moving Pixels

AI-enabled design tools have allowed people who couldn’t previously produce visually coherent work to produce visually coherent work. Someone building a side project today can generate interface components that are aesthetically consistent, typographically reasonable, and roughly aligned with platform conventions. This is better than what they could have done before.

Good designers were never primarily doing what these tools now do for anyone. The craft of moving elements around a canvas, choosing from a type scale, applying a colour system, that was always the implementation layer of a much deeper practice. What the designer was actually doing was applying accumulated understanding about how people form mental models of interfaces, about what UI communicates before a single interaction occurs, about when a familiar pattern should be followed because it carries learned meaning and when it should be broken because the problem demands something the pattern can’t say.

That knowledge isn’t in the tool. It was never in the tool. It lived in the designer, built from years of watching people use things they built, from studying failures as carefully as successes, from the embarrassment of shipping something that looked right and felt wrong and learning exactly why.

The democratisation touched the surface layer, the one that was always the smallest part of what a serious designer was doing, and made it accessible to everyone. This is fine where the surface is sufficient. A landing page for a side project. An internal tool used by twenty people once a month.

It isn’t fine when the interface is doing load-bearing work. When a misread affordance has consequences. When the difference between a pattern that communicates clearly and one that merely looks like it does is the difference between a product that works and one that generates confusion by accident.

The danger is the conclusion that because more people can produce pleasant-looking work, the outcomes are equivalent to what a serious designer produces. They aren’t. A surface that looks professionally designed and a surface designed by someone who understands what interfaces communicate are identical until they aren’t. And when they aren’t, the gap is consequential.

The tools elevated the floor. They didn’t lower the ceiling. They made the ceiling harder to see, because the work that looks like it might reach it now has a finish that reads as professional even when the thinking beneath it would not survive contact with a real problem. The value serious practitioners hold is increasingly concentrated in the part of their work that’s hardest to see from the outside: the invisible decisions that prevented the problems that never happened.


The Pipeline

There is a shortage of senior engineers who can work effectively with AI. People who have the experience to evaluate what the model produces, to know when to trust it and when to push back, to see the structural problem underneath the working code, to architect systems the model can help build rather than systems that look like whatever the model produces when asked.

Companies are working around this by promoting people who are effective with AI tools and treating that as equivalent to depth. It isn’t. It is proximity.

Here is what the gap looks like in practice. A developer wants to make information from disparate systems available to agents in Cursor or Claude Code or Codex. They reach for MCP and start making API calls. The first integration works. The second works. Then the problems begin. Authentication and authorisation across systems with different identity models. Role-based access control that was never designed to accommodate machine callers. Distributed systems topologies where the data you need lives in three services that have different consistency guarantees and different failure modes. Caching: where, for how long, and the question that has ended more engineering careers than any other, when to invalidate it.
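
To make the caching point concrete, here is a minimal sketch of the trap in TypeScript. Everything in it is hypothetical, the names, the TTL, the shape of the record; the point is that the class is correct in isolation and wrong the moment a second service writes to the same data.

```typescript
// A minimal read-through cache with a TTL. Correct in isolation;
// stale the moment another service writes to the same record.
type Entry<T> = { value: T; expiresAt: number };

class ReadThroughCache<T> {
  private entries = new Map<string, Entry<T>>();

  constructor(
    private loader: (key: string) => Promise<T>,
    private ttlMs: number,
  ) {}

  async get(key: string): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value;
    const value = await this.loader(key); // one upstream call per miss
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }

  // The hard part is not this method. It is knowing every code path, in
  // every service, that must call it, and proving you have found them all.
  invalidate(key: string): void {
    this.entries.delete(key);
  }
}

// Hypothetical usage: an MCP tool handler serving user records to an agent.
// If a billing service updates the record directly, nothing here notices
// until the TTL expires, and the agent reasons over stale data meanwhile.
const users = new ReadThroughCache(
  async (id: string) => ({ id, plan: "pro" }), // stand-in for a real API call
  5 * 60 * 1000, // five minutes: a tradeoff, not a constant
);
users.get("42").then(console.log);
```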

The model will help you write the API calls. It will scaffold the MCP server. It will generate plausible-looking integration code. What it will not do is tell you that the architecture you are building has a consistency problem that will surface at scale, or that the caching strategy works until the data changes in two places at once, or that the synchronous call chain you just constructed is going to fall over when any one of five upstream services has a bad day. These aren’t coding problems. They are systems design problems: performance, latency, eventual consistency versus strong consistency, events versus synchronous calls, the orchestration of data changes spread across microservices that were each designed in isolation by people who assumed they would own their data forever.
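
The synchronous call chain deserves one piece of arithmetic, because the fragility is quantifiable. The figures below are assumptions for illustration, not measurements: availability multiplies across a chain while latency adds, and that multiplication is what a bad day upstream means.

```typescript
// Back-of-the-envelope: a synchronous chain is only as available as the
// product of its links, and its latency is the sum of theirs.
const serviceAvailability = 0.999; // assumed 99.9% per upstream service
const chainLength = 5;

const chainAvailability = serviceAvailability ** chainLength;
console.log(chainAvailability.toFixed(4));
// ≈ 0.9950: roughly 44 hours of downtime a year, versus under 9 per service

// Latency compounds the same way: if each hop's p99 is 200ms, the chain's
// worst case approaches a full second before a single retry has fired.
const p99PerHopMs = 200; // assumed, for illustration
console.log(`worst-case chain latency ≈ ${p99PerHopMs * chainLength}ms`);
```

This is the arithmetic the model will not volunteer while it scaffolds the calls.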

The secret of designing AI-enabled systems, one that the current conversation studiously avoids, is that they are all, under the hood, traditional distributed systems that continue to evolve as technologies and scale demand. Large language models are overlaid on and interwoven through these systems, but the systems themselves obey the same physics they always have. The CAP theorem doesn’t care whether your caller is a human or an agent. Network partitions don’t resolve themselves because the request came from a model. The hard problems of distributed computing, the ones that require genuine expertise to navigate, aren’t solved. They aren’t close to solved. They are, if anything, made harder by the introduction of nondeterministic components with variable latency and no guaranteed correctness.

AI is, in this sense, more adaptor pattern than solution. It is an extraordinarily powerful interface layer that connects to the same hard problems that have always been there. Once you get past the most trivial systems, you are immediately confronted with this: the thing the model does well, generating code, is the smallest part of what makes the system work. The rest is tradeoffs, and tradeoffs are what software engineering is, as distinct from coding.

None of this means that every developer needs to understand distributed systems theory. The spectrum of what gets built with software is vast, from planetary-scale infrastructure to a WordPress site that powers a little league team’s schedule. Both are valid. Both are useful. They aren’t the same, and the people who build them aren’t interchangeable, and this is fine. There will always be more work at the WordPress end of the spectrum than at the planetary end, and the tools that make the WordPress end accessible to more people are a genuine good.

But at the forward edge, the place where new patterns are discovered, where the systems that everyone else will eventually use as infrastructure are being designed for the first time, the question is whether the pipeline is still producing people with the depth to do that work. The cohort of engineers who developed pre-AI, who built their pattern recognition before the tools existed, isn’t growing. In five years it will be smaller. The people entering the field now are using AI as a primary thinking tool from the beginning. They will be fluent. They will ship. Some will be excellent. But the adversarial relationship with one’s own assumptions that years of hard systems problems produces, where does that come from now? When the model produces a plausible explanation for almost any failure, and the path of least resistance is to accept it, what creates the habit of not accepting it?

The current narrative answers this with abundance. The tools make everyone more productive. The models are getting better. The future is bright. But the floor rising isn’t the ceiling rising, and the field is about to discover what it means to need people who can lead it forward and find that the pipeline producing them has been running at a fraction of its former capacity for a decade.


The Subscription to Capability

In the third week of April 2026, GitHub paused new signups for its paid Copilot plans, tightened usage limits, and removed access to premium models from lower tiers. The same week, Anthropic briefly removed Claude Code from its twenty-dollar Pro plan’s pricing page before walking it back as a “small test on two percent of new prosumer signups,” a test that happened to involve updating every public-facing document and pricing table on the company’s website. The week before that, Anthropic had introduced usage caps that throttle access during peak hours.

These aren’t anomalies. They are the normal operation of a market finding its pricing. The cost of running agentic coding workflows has, by some reports, roughly doubled GitHub Copilot’s weekly operating costs since the start of the year. The subscription models that made these tools ubiquitous were never designed for what they are now being used for. The providers are doing what any rational business does when unit economics don’t work: they are adjusting access, raising prices, and segmenting tiers. This is predictable and, from their perspective, necessary.

But notice what it means for the developers who depend on these tools.

There is a practice now, common enough that it is discussed openly, of maintaining accounts with multiple AI providers so that if one goes down or hits a rate limit, you can switch to another and keep working. Think about what that sentence actually says. Developers have become dependent enough on AI tooling that an outage or a usage cap is functionally equivalent to a power failure. They can’t work without it. And the terms under which they can work with it are set entirely by someone else, subject to change on a week’s notice, and trending in one direction: more restrictive, more expensive, more segmented.

When others control the means of production, you have no agency. This isn’t a new observation. We have been here before. Compilers used to cost money. Development environments used to be proprietary. The open source movement was, among many other things, a direct response to the recognition that depending on someone else’s tools for your ability to create was a structural vulnerability. That fight was largely won, and a generation of developers grew up never knowing it had been fought.

The AI moment is a step back toward that mediated past. Not because the labs are malicious, but because the economics of large language models are fundamentally different from the economics of compilers. A compiler, once written, can be distributed at near-zero marginal cost. A language model requires compute for every interaction. The cost structure makes the subscription model almost inevitable, and the subscription model makes the dependency almost inevitable.

And the dependency compounds. A developer who has grown up on AWS may not have the ability to stand up a machine in a closet connected to a T1 line the way an earlier generation did. That’s mostly fine, because the abstraction is worth the trade. But the person who could do it, who understood networking at that level, carried an independence that the person using the managed service doesn’t. Not a superiority. An independence. The ability, if the service disappeared tomorrow, to rebuild from the ground.

There is a version of this argument that applies specifically to AI tooling and is harder to dismiss. If AI skills are table stakes, if the velocity gains are real and proven, if the market expects AI-assisted development as a baseline competency, then access to these tools isn’t optional. But the cost of that access is rising and will continue to rise as the providers find pricing that reflects actual compute costs. If the cost becomes high enough that only developers who are already proving value can afford it, then the tools required to develop competence are available only to people who have already developed competence. The ladder pulls up behind you.

Maybe all of this resolves. Maybe open source models close the gap. Maybe the economics shift. Maybe we look back on this period the way we look back on proprietary Unix: a temporary constriction before a broader opening. But we should be cautious, deeply cautious, about any trajectory that trades the ability to do the hard thing independently for the convenience of having someone else’s system do it for us. Every step in that direction is easy to take and hard to reverse, and the people taking it may not notice what they have lost until the moment they need it.


What You Carry

This isn’t a lament. It isn’t an argument that one kind of developer is better than another, or that the old ways were superior, or that the tools should be resisted. The tools are genuinely exciting. More people building more things is better. Access for more people to participate in the act of creation is better. The floor rising is a good thing. It has always been a good thing, at every layer, and it is a good thing now.

But it isn’t the only thing. And the part that’s being lost in the noise deserves to be said clearly, not as gatekeeping but as a strategic observation for anyone thinking seriously about where they want to go and how to get there.

If you want to work at a higher level of abstraction, using the tools the providers offer to build the solutions those tools are well suited for, you absolutely should. Understand that this path depends on the providers’ benevolence and on the prevailing economic interests of the moment, and that those interests can shift, as we have seen this month and will see again. But some degree of that dependency exists in many domains and many careers. It is a tradeoff, and it can be a reasonable one, made consciously.

The argument here is different and more specific. Human experience and skill, the depth of knowledge that can only be achieved through time and effort and a disciplined dedication to honing your craft, still has value. Not nostalgic value. Strategic value. Because even as your job changes, even as the tools change, even as the layer you work at shifts beneath you, you carry those skills. They are yours. They don’t have a rate limit. They don’t get deprecated. They don’t require someone else’s permission to use.

More importantly, what you carry isn’t just the skills themselves but the ability to develop skills. The capacity for mastery. The habit of going deep, of sitting with difficulty, of building understanding through the specific texture of doing hard things over time. That capacity transfers. It is what allowed the C programmer to become an excellent systems architect, the systems architect to evaluate AI-generated plans, the experienced engineer to see through the statistically believable output to the structural problem underneath. The capacity for depth is the thing that lets you bring real judgment to whatever comes next, including tools that don’t exist yet and problems that have not been named.

Prompting, as a skill, can only make you better at prompting the agent of the day. Genuine depth of knowledge makes you better at prompting too, because prompting is ultimately just communicating knowledge, the same skill that existed before AI and went from developer to team, from senior to junior, from architect to organisation. The person who understands the problem deeply will always communicate it more effectively than the person who is guessing at what to ask for. The tool amplifies what you bring to it. If what you bring is shallow, the amplification is shallow.

If you want to be a staff engineer, an architect, someone who designs the systems that others build with and on, then dedicate yourself to learning and mastering those skills. Not the tools. The skills. Learn how distributed systems fail. Learn why consistency models matter. Learn what happens at the edges of specifications and at the intersections of systems that were never designed to meet. Use the tools to accelerate that learning, to explore faster, to test ideas more quickly. But don’t let the tools replace the learning itself. The struggle isn’t a cost. It is the mechanism.

This is, in the end, a call to the same spirit that built the open source movement. That movement was born from the recognition that depending on someone else’s proprietary systems for your ability to create was a vulnerability, and that the response wasn’t to reject the systems but to build alternatives that no one could take away. The masters of that era were not the ones who used the tools most fluently. They were the ones who understood the tools deeply enough to build new ones, to extend the platform, to create the infrastructure that the next generation would build on.

The way to become the masters of the future is the same. Not to reject the tools. Not to pretend the abstraction layer isn’t real or not valuable. But to remain the people who can, without the mediation or permission of others, continue to build. To stay curious. To dig deeper than the tool requires you to. To join, if you are new, not just the surface of this field but its depths, and to become the next generation of the experienced and skilled, the ones who carry understanding that no subscription provides and no pricing change can revoke.