The Interregnum

a theory of ai

Exploring the frontiers of artificial intelligence, consciousness, and the future of human-machine collaboration.

As AI reshapes work, judgment, and the path from junior to expert, the future it creates is still a choice.

The think pieces, the keynotes, the breathless predictions: everyone agrees AI is going to change everything. Jobs, cities, medicine, war. The companies building it are happy to confirm this. They have every incentive to.

But underneath the hype there is something more human and harder to name. It’s the feeling of a graphic designer who spent years developing an eye, a taste, a way of seeing, watching a prompt produce in seconds what took them days. The developer who built a career on knowing a codebase intimately, who finds that an agent can navigate it without ever having lived in it. The writer, the analyst, the paralegal, the radiologist. The person who didn’t love their job exactly, but loved what it gave them: a role, a community, the particular respect of people in a shared situation, the feeling of being needed for something specific and hard.

Work has always been more than income. It has been identity, structure, belonging. The factory floor was brutal, but it was also where people found each other. The office was mundane, but it was where lives were built around something held in common. When that disappears, not slowly through the usual churn of an economy finding new shapes but suddenly and at scale, what goes with it is not just a job. It is a place in the world.

This is not a new dynamic. The history of human labour is the history of disruption: feudalism to cities, farms to factories, crafts to assembly lines, whole industries unmade by a cheaper process or a faster machine. People have always had to adapt, and they have, and the world has generally been richer for it, though the people caught in the transition rarely experienced it that way.

What is new is the scale and the speed. Not one industry, not one kind of work, but anything that touches digital, anything that can be reduced to pattern and inference, anything that can be run by an algorithm well enough. Everything, everywhere, all at once. The genie is not going back in the bottle, and the question of whether this is good or catastrophic may be less important than the question of how we live inside it, because we are already inside it, and the shape of what comes next is being decided now by the people and companies with the most to gain from one particular version of the future.

So there is wonder and there is dread. Panic and resignation. Desperate optimism and complete avoidance, sometimes all of these in the same person on the same afternoon. That is the feeling of the moment, and it deserves to be named before we get into the specifics, because the specifics matter and the place to look at them most clearly is the industry closest to the technology: software development, which has been living this transition longer than most, and whose experience is a preview of what is coming for everyone else.

How we got here

It didn’t arrive all at once.

The first tools were modest: autocomplete that went a little further than before, suggestions that occasionally surprised you. TabNine, then Copilot. Useful, occasionally impressive, easy to keep at arm’s length. You were still the developer and the tool was just faster syntax.

Then the models got better and the suggestions became solutions. You stopped correcting them as often. The context window grew and suddenly the tool could hold more of your codebase in mind than you could on a given morning. You started describing what you wanted in plain language and getting back something that worked, not always, but often enough to change the way you thought about the work.

Then came equivalence, which is not perfection but is close enough. An agent that produces code good enough, fast enough, that the salary of a junior developer becomes difficult to justify to a finance team. Equivalence is replacement too. It doesn’t need to be better. It needs to be good enough, cheaper, and available at three in the morning without complaint.

That trajectory from autocomplete to replacement took roughly four years, and it has not stopped.

The spec is the valuable part

AWS has published a process worth examining. Feed your requirements and architectural documents into a structured AI workflow. Let it surface the gaps, answer the questions, produce a comprehensive specification. Generate code from the spec. Want to change something? Change the spec. Want to rewrite in Rust? Change that part and rerun.

The code, they say, isn’t the valuable part. The spec is.

That’s either the clearest thing anyone has said about AI-assisted development, or a confession that nobody has thought carefully about where specs come from. Because the documents that feed the machine, coherent, complete, architecturally sound, require someone who has built systems like this before. That same person is usually the one who can answer the questions the AI surfaces during elaboration. The rest of the room is present, but one person is doing the work that matters.

The hard part was never hands on keyboards. It was always the judgment that preceded them. What processes like this reveal, underneath the genuine productivity gains, is that we have built a remarkably powerful way of automating the part that was never the bottleneck.

And the teams that adopt them face an immediate structural question that most organisations are not ready to answer. If one senior person with strong architectural judgment can now do what previously required a team, what does the team do? The optimistic answer is that they review the output, catch the errors, apply the judgment that the machine lacks. The honest answer is that reviewing hundreds of thousands of lines of machine-generated code is its own kind of unsatisfying work, and it isn’t obvious that it produces the understanding that building the code by hand once did. There is something almost poignant about imagining a future in which experienced developers spend their days reading code that machines wrote, making sure it’s acceptable. It has the feeling of a job that exists to make the transition more comfortable, not because the work itself requires it.

What gets lost

The senior developers are doing fine. Their mental models are clear, their experience translates directly into the ability to describe complex things precisely, and that is exactly what these tools reward. They can ask for more because they have built it before. They know what a good spec looks like because they have written bad ones.

The question nobody is asking honestly is what happens to the people who would have become them.

The junior work, legible, well-specified, ticket-driven, is what these tools do first and best. That work was never just output. It was how knowledge transferred. Understanding isn’t won painlessly. Code that hasn’t been struggled over doesn’t stick. Even architectural documents produced with AI assistance have a way of becoming opaque to the person who made them, because the knowledge lives in the process of making, not in the artifact itself. The younger developers, told that building things by hand is like a carpenter insisting on hand tools, are not wrong to wonder where the skills come from, and what they are supposed to understand after a career spent directing machines.

The path from junior to senior is narrowing, and the industry is not being straight about it.

The longer view

When something becomes cheaper to produce, you get more of it. That is probably true of software: more of it, smaller, more specific, more niche. It does not follow that there will be more people building it, or that the economics of the profession will resemble what came before. What seems more likely is a bifurcation: a smaller number of people with deep technical judgment, working at leverage that previously required teams, and a much larger number doing something harder to name, directing, reviewing, translating between human intent and machine execution, work that is real but whose market value is still being discovered.

Software development is one industry. Apply the same logic to every domain where the work is primarily cognitive and the picture becomes harder to look at squarely. The paralegal, the analyst, the junior doctor reading scans, the copywriter, the financial advisor, the middle manager whose job is to synthesise information and make decisions. The pattern is the same: the most legible, most structured, most well-specified work goes first. What remains requires judgment, taste, context, and the kind of hard-won understanding that is difficult to train and impossible to fake, but that also requires a pipeline of people doing the legible work first in order to develop it.

Then the harder question, which applies to every domain: if a system can genuinely outperform a human in a given role, what is the principled argument against it? Not the emotional argument, which is real and worth taking seriously, but the principled one. We do not have it yet. We have discomfort, which is not the same thing.

There are also the material uncertainties that the optimism tends to skip past. Token costs are not stable. The infrastructure is expensive and concentrated. The valuations rest on a premise that has not been tested at scale: that these systems will replace enough labour to justify the capital. In some domains that premise may prove wrong in ways that matter. If the unit economics shift, or confidence wavers, the vector changes fast, and the future that was being built toward changes with it.

What we can say

Some things are observable now. The work is changing. The path in is narrowing. The leverage available to those with deep judgment is unlike anything the profession has seen before. These are facts, not predictions.

Some things are probable, given what we can see. Bifurcation in the labour market for cognitive work. Wage compression in the middle. More software made by fewer people, with the surplus value flowing to a smaller number of companies and individuals than the previous model allowed.

And some things are genuinely unknown. Whether the infrastructure economics hold. Whether the current tools, remarkable at execution and poor at reasoning about systems, will be the ones that define the next decade or just the ones we remember from the beginning. Whether the people inside this transition end up with more agency or less, more connection to meaningful work or a different and diminished relationship to it.

The foundations being laid right now, in tooling, in process, in what gets normalised and what gets discarded, will be difficult to revisit. We have recent precedent for what happens when we defer those decisions to the people building the technology. Social media was not something that happened to us like weather. It was a set of choices made by a small number of companies, and the rest of us adapted to the consequences because the choices had already been made and the infrastructure was already in place. By the time the costs were legible, to attention, to discourse, to the basic experience of being a person with a phone, the architecture was settled and the people who built it were insulated from the effects.

The same pattern is available here, and it is moving faster. The people building these systems are not villains. They are, in the main, optimising for what they know how to optimise for: capability, speed, market position. That is what companies do. The question is whether the rest of us treat their version of the future as inevitable, the way we treated algorithmic feeds and engagement metrics as inevitable, or whether we decide that the conditions under which people work and learn and develop judgment and find meaning in what they do are things worth designing for deliberately, not things that will sort themselves out once the technology matures.

They will not sort themselves out. They never have. But the choices are still open, for now, and they belong to more people than are currently making them. What those choices look like in practice, in schools, in hiring, in the way organisations structure work, in the questions we ask before adopting a tool rather than after, is not something any single essay can answer. But the first step is the simplest and the one most consistently skipped: to stop treating the version of the future being sold to us as the only one available.