The Middle

The real leverage in AI is not the model itself but the architecture that decides how, when, and whether to use it.

Every major AI company is building toward the same position: the place between you and your work. The layer you can’t route around. The thing you have to go through to get anything done. That position is the middle, and whoever holds it sets the terms for everyone else.

The model providers want it. Of course they do. It is the most valuable position in any technology stack and they are spending hundreds of billions of dollars to claim it. But the middle is not the model. The model is a capability, increasingly commoditized, converging across providers, subject to version changes and pricing shifts and deprecation schedules you didn’t agree to. The middle is the thing that decides how the capability gets used. The logic, the architecture, the judgment about what matters and what doesn’t. That part is yours, or it should be, and most of the industry is giving it away without realizing it.

Prompts all the way down

Pull back the curtain on most AI tools being built right now and what you find is prompts. Layered, chained, sometimes sophisticated, sometimes remarkably naive, but prompts all the way down. One prompt produces a plan. Another evaluates it. A third refines the output. The engineering is real, but the thing being engineered is a coordination of requests to someone else’s model, using someone else’s weights, shaped by someone else’s training decisions.

Many of these products amount to: we wrote better prompts than you would have. The model is the same one you could call yourself. The data is often your data. The output is shaped by instructions that, if you read them, you would understand completely. What you are paying for is not capability. It is the fact that someone else thought about how to ask.

That is a real service. It has real value, the same way a consultant who asks better questions has real value. But it is not a moat. It is an arbitrage on the gap between what the model can do and what most people know to ask for, and that gap is closing fast.

The companies building these tools know this, which is why the smart ones are racing to make their products sticky in other ways: proprietary data, integrations, workflow lock-in, the accumulation of context that makes switching costly. The product is not the prompt. The product is the dependency.

Outsourcing thinking

If your process depends on a model you don’t control, shaped by prompts someone else wrote, producing output you evaluate but didn’t construct, where does your judgment live? Where is the understanding that makes you, or your company, or your product, different from anyone else with access to the same API?

Domain knowledge matters, but only if the people who hold it are still making decisions with it. When the process lives inside a tool you don’t control, shaped by prompts you didn’t write, the knowledge starts to atrophy. Not because anyone forgot it, but because no one is using it. The model makes the calls, the team reviews the output, and the understanding that made the domain knowledge valuable in the first place gradually stops being exercised. The competitor signs up for the same API and builds the same prompts and the difference between you and them turns out to have been thinner than you thought.

Your advantage lives in the architecture. The thing you built around the model. The system that encodes your decisions about what matters, what to check, what to trust, what to route where. The logic that says: use this model for this, use that one for that, and for this particular decision, don’t use a model at all, because this is a place where we know the answer and we are not going to leave it to inference.

That is the middle. The harness. The thing that holds the pieces together according to your judgment, not someone else’s.

Where the intelligence lives

Every team building with AI makes a decision early, often without realizing they are making it: where does the intelligence live? In the model, or in the system around it?

If the answer is the model, then everything else is plumbing. This is true whether you are building AI products or just running a business that has absorbed these tools into its operations. The marketing team that restructured around a writing assistant, the sales team that let an AI handle lead qualification, the finance team that automated its reporting pipeline. The value of what they have built is exactly as durable as the model’s next version, and they will find out what that means the first time an update changes the behavior they depend on. They will find out again when the pricing changes. They will find out a third time when the tool they built their process around pivots to serve a different market, because their workflow was never the product. They were the customer.

If the answer is the system, then the model is a component. Powerful but bounded. You use it where its strengths matter and you don’t where they don’t. You route decisions that require consistency through deterministic logic, because you know the answer and you are not willing to bet on a probability. You route decisions that require flexibility through the model, because that is what it is good at. And when the model changes, or the pricing changes, or a better one appears, you swap it the way you would swap any other dependency, because your architecture was never married to it.
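One way to picture that split, as a hedged sketch rather than a real system (`RULES`, `model_call`, and the refund example are all invented for illustration):

```python
def model_call(prompt: str) -> str:
    """Hypothetical provider call; swap the implementation without touching the routing."""
    raise NotImplementedError

# Decisions where we know the answer are encoded as rules, not left to inference.
RULES = {
    "refund_under_limit": lambda req: req["amount"] < 50,
}

def decide(kind: str, request: dict) -> str:
    rule = RULES.get(kind)
    if rule is not None:
        # Deterministic path: consistency matters, so no model is involved.
        return "approve" if rule(request) else "escalate"
    # Flexible path: the model handles what the rules can't express.
    return model_call(f"Decide how to handle this {kind} request: {request}")
```

The routing function is the part you own. The model behind `model_call` can change versions, providers, or prices without the logic above noticing.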

It is an engineering distinction. It is also a question about where you want your agency to live.

What this looks like

Consider the problem everyone in AI is chasing right now: long-running agents. The pitch is that you give a model a complex task and it works on it for hours, autonomously, producing something that would have taken a team days. The implementations vary but the ambition is the same: remove the human from the loop for as long as possible and let the model do the work.

But you can build something like this yourself, and when you do, the architecture looks nothing like a single model thinking for hours. It looks like a process. One agent investigates, iterating through the parts of a problem, building up an analysis, self-evaluating as it goes. When it thinks it’s done, it passes the result to a second agent whose job is different: evaluate, interrogate, check claims against counterfactuals, decide whether the work meets a standard you defined. That agent passes back notes and a status. Iterate or sign off. An orchestrator coordinates the handoffs, the same way a project lead coordinates between a researcher and a reviewer and a stakeholder, because that is what this is. It is a process that mirrors how organizations have structured effort and feedback for decades, except now the participants happen to be model calls.
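The loop described above reduces to a small orchestrator. In this sketch, `investigate` and `evaluate` are hypothetical model-backed callables; which model sits behind each is a detail the orchestrator never sees:

```python
def run(task: str, investigate, evaluate, max_rounds: int = 5) -> str:
    """Coordinate handoffs between an investigator and an evaluator.

    `investigate(task, notes) -> draft` builds up the analysis, folding in reviewer notes.
    `evaluate(task, draft) -> (status, notes)` checks the work against a standard we defined.
    """
    notes = ""
    draft = ""
    for _ in range(max_rounds):
        # The investigator iterates, incorporating any notes from the last round.
        draft = investigate(task, notes)
        # The evaluator interrogates the draft and returns a verdict: iterate or sign off.
        status, notes = evaluate(task, draft)
        if status == "sign_off":
            return draft
    # The standard was never met; surface that rather than shipping quietly.
    raise RuntimeError(f"no sign-off after {max_rounds} rounds; last notes: {notes}")
```

Nothing in this loop is a model. The models live inside the callables, and the definition of done lives in the evaluator's standard, which is yours.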

The models in this system are doing what models are good at: transforming information into natural language, selecting from options, summarizing, generating. They are not deciding what the process is. They are not defining what done looks like. They are not choosing which claims to interrogate. You are. The models are capabilities. The process is yours. And if one model gets worse or more expensive or disappears, you replace it, because no single model is structural to what you built.

This is what owning the middle looks like in practice. Not a better prompt. A better process, with the judgment encoded in the architecture and the models slotted in where they’re useful.

The older pattern

This dynamic is not new to AI. The history of software development is a history of trading control for convenience, and the trade has generally been worth making, until it isn’t.

We used to manage our own servers. We understood the machines our code ran on because we had to. Then AWS abstracted that away and the abstraction became the system we understood. The underlying infrastructure became someone else’s problem, and for most purposes that was a good trade. Infrastructure is a commodity. Letting someone else manage it so you can focus on what you’re building is sensible.

But there is a difference between outsourcing infrastructure and outsourcing process. When the thing you hand over is a server, you lose control of the hardware. When the thing you hand over is decision-making, you lose control of the outcome. And when the service you depend on changes its internals without warning, when the model behind the API shifts in ways that alter your results, the experience is less like a planned migration and more like showing up to work and finding that a key member of your team has been quietly replaced by someone who doesn’t quite understand the job yet.

The progression over the last thirty years, well before AI, has been the same: vendors capturing markets through control, and users ceding it in exchange for convenience. Every developer whose entire skillset is a single framework is a version of this. Every team that can’t ship without a specific vendor’s platform is a version of this. The tools replace understanding, then they replace judgment, and eventually the question becomes: what value are you individually bringing? In an era where AI is trying to automate all cognitive labor, that question becomes the only one that matters.

The moat

The models are converging, and they will continue to converge. The prompts are text, reproducible by anyone with the inclination. Access to the technology is broad and becoming broader. None of these are moats. They are table stakes.

The moat is in the decisions you made about how to use it. The architecture that reflects your understanding of your problem. The knowledge you encoded into a system rather than leaving it to inference. The places where you chose determinism over probability because you knew the answer and you weren’t willing to gamble on a model getting it right most of the time.

The current moment is full of tools that promise to do your thinking for you. They are seductive because thinking is hard and the outputs are often good enough. But good enough has a cost, and the cost is not always visible immediately. It is the gradual atrophy of the capacity to make your own decisions about your own work, the slow transfer of understanding from the person to the service, the quiet narrowing of your options as your dependency deepens. One day you look up and realize you could not do this without the tool, not because the work got harder but because you stopped maintaining the ability to do it yourself.

None of this is an argument against AI. The capabilities are real and they are extraordinary and the people using them well are producing work that was previously impossible to imagine. The argument is about the difference between using a tool and being used by one, and that difference comes down to something simple: do you understand how it works?

Not at the level of transformer architectures and attention heads. At the level of: what is this thing actually doing when I ask it a question? What are its failure modes? Where is it confident and wrong? What part of my process is it safe to hand over and what part do I need to keep? These are not academic questions. They are the questions that determine whether you are in control of your work or whether you have given that control to a company whose interests are not yours and whose systems change without asking.

There used to be a spirit on the web, inherited from older Unix culture, that valued understanding how things worked. Not because everyone needed to be an expert, but because curiosity about your tools was a form of self-respect. That spirit has been eroding for thirty years under the weight of convenience and abstraction and market consolidation, and AI is accelerating the erosion. The companies building these systems are not going to encourage you to look behind the curtain. That is not what companies do.

So look anyway. Understand how it works. Build things you control. The middle is there for whoever wants it, and wanting it starts with refusing to treat someone else’s system as a black box you feed questions into and trust whatever comes back.