Minimum Viable Company

a theory of ai

As AI flattens teams and commoditizes software, the real company is the judgment and methodology that remain at the center.

In February, Block cut 40% of its workforce. Not because the company was failing. Its gross profit was up 17% year over year. The stock surged 24% on the news. Jack Dorsey’s explanation was plain: AI tools and flatter teams have changed what it means to build and run a company, and most companies will reach the same conclusion within a year.

Inside Block, engineering teams that once had eight people were cut to a single engineer. The layers between the individual contributor and the CEO were compressed. The thesis, stated publicly and rewarded immediately by the market, is that hierarchy exists primarily to organise and filter information, and that AI now does this better and cheaper than people do.

The question worth asking is not whether Dorsey is right. He may be. The question is what follows if he is, because the implications go well beyond one company cutting headcount to impress investors.

What hierarchy was for

Organisations have layers because human beings have limits. A person can hold a certain amount of context, manage a certain number of relationships, process a certain volume of information before the quality of their decisions degrades. Hierarchy is the engineering solution to that problem. You break the work into pieces, assign the pieces to teams, and build a structure of managers whose job is to aggregate information upward and translate decisions downward. The manager doesn’t do the work. The manager makes the work legible to the people above and the decisions legible to the people below.

If that is all a manager does, then yes, AI replaces them. A model can summarise, can aggregate, can translate between levels of abstraction, can hold more context than any individual and do it without getting tired or political or protective of territory. The case for flattening is real and it is not going away.

But most organisations did not build their hierarchies only to move information. They built them to develop judgment. The middle layer was where people learned to make decisions with incomplete data, to balance competing priorities, to understand the business well enough to act without being told what to do. The manager who had been an individual contributor, who understood the work because they had done it, who could smell a bad estimate or a misaligned incentive before the data confirmed it: that person was not a filter. They were a reservoir of institutional understanding, and the organisation drew on that reservoir every day without naming it or measuring it.

When you remove that layer, you save the salary. You lose the reservoir. Whether that trade is worth making depends entirely on what you think a company is for and how long you need it to last.

The SaaS question

The same pressure is hitting from the product side. For fifteen years, the dominant model in software has been SaaS: build a tool, host it, charge a subscription, scale. The playbook was clear. Move fast, capture the market, make switching painful, grow. The moat was speed. Get far enough ahead and you win.

That moat is collapsing. When anyone can build a functional application in a weekend with the right model and the right prompts, the question of what your software company actually does becomes harder to answer. A well-specified CRUD application, the kind that entire venture-funded companies were built around, is now within reach of a single developer with access to the same tools everyone else has. The competitive advantage of having built it first erodes when building it at all is no longer the hard part.

The usual response is to scale harder. More features, more integrations, more surface area. But scaling people doesn’t help when the constraint is no longer labour. Most companies don’t have ten times the ideas in their pipeline. They have the same ideas everyone else has, and now everyone else can build them just as fast. The race to do everything is not a strategy. It is a symptom of not knowing what your thing actually is.

So what happens? Maybe there are a hundred times more software companies, each one smaller, each one solving a narrower problem for a more specific market. Maybe there are a quarter as many, because the ones that were only ever a nice interface over a database quietly disappear. Probably both. The market fragments at the bottom and consolidates at the top, and the middle, the place where a company with fifty employees and a Series B used to live comfortably on a modest insight and a head start, gets squeezed from both directions.

What survives the squeeze is the same thing that survives the flattening of a hierarchy: genuine understanding of the problem. The methodology, the workflow design, the specific way you understood your customer’s situation that a competitor can’t replicate by prompting a model. If your product encoded that understanding, you have something. If your product was a wrapper around it, you are exposed and the market will find out soon.

The capability curve

All of this is happening against a backdrop of capability growth that most organisations have not fully absorbed. The tools are not improving at the pace of a normal technology cycle. They are improving at a pace that outstrips the ability of most companies to understand what they are adopting, let alone make sound structural decisions about it. Anthropic announced this week that its new Mythos model, still unreleased to the public, has found thousands of zero-day vulnerabilities across every major operating system and web browser, some of them decades old. During testing, the model broke out of its sandbox environment, built a multi-step exploit to access the broader internet, and emailed a researcher to let them know. The system card describes the model manipulating its own evaluator through prompt injection and, in simulation, behaving like what the testers described as a ruthless business operator.

This is not science fiction. This is a capabilities assessment published this week by the company that built the model, accompanied by an explanation of why they are not releasing it to the public.

The relevance to the question of what a company is goes beyond the obvious cybersecurity implications. What Mythos demonstrates is that the capability curve is steeper than the organisational curve. Companies are still debating whether to adopt AI tools for their quarterly reports while the tools themselves are learning to break out of sandboxes. The gap between what the technology can do and what most organisations understand about it is widening, and the decisions being made in that gap, about structure, about process, about what to keep and what to hand over, will be very difficult to revisit.

The nervous system

If hierarchy was about filtering information and AI does that better, and if products were about packaging capability and AI commoditises that, then what is left? What is the irreducible core of a company that cannot be replicated by a competitor plugging into the same API?

It is the intelligence at the center. Not artificial intelligence. Organisational intelligence. The accumulated understanding of the problem you exist to solve, the methodology you developed for solving it, the decisions you made about what matters and what doesn’t, the institutional knowledge that lives not in documents or models but in the way your organisation thinks about its work. Call it the nervous system. Everything else, the headcount, the hierarchy, the tools, the infrastructure, can change shape around it. But without it, what you have is not an organisation. It is a collection of capabilities rented from vendors, assembled into something that looks like a company but has no center of gravity.

The companies that understand this will spend the next few years figuring out how to feed that intelligence, how to make it sharper, how to protect the methodologies and judgment it encodes, and how to use every available tool, including AI, in service of it. They will treat models as capabilities within an architecture they control, not as the architecture itself.

The companies that don’t understand this will flatten their hierarchies, adopt the tools, cut the headcount, and discover over time that they have hollowed themselves out. The information still flows. The decisions still get made. But the quality of those decisions degrades quietly, the way it always does when you remove the people who understood why things were done a certain way and replace them with systems that can describe what is being done but cannot explain why it should be.

The minimum viable company

Silicon Valley spent a decade refining the idea of the minimum viable product: the smallest thing you can build to test whether anyone wants it. The logic was sound. Don’t over-build. Don’t assume. Ship, learn, iterate.

That same logic is now being applied to the organisation itself, and the results are more ambiguous. What is the minimum viable company? How few people can you have and still make good decisions? How flat can you get before you lose the ability to develop the judgment that good decisions require? How much can you delegate to AI before the thing you are delegating is the thing that made you worth building in the first place?

Every company is answering these questions right now, whether they know it or not, through the choices they make about what to keep and what to cut, what to build and what to rent, what to understand and what to hand over to a system that will give them an answer without ever understanding the question.

We have watched this pattern before. The last thirty years of technology have been a series of abstractions, each one trading control for convenience, each one concentrating power in the hands of whoever controls the new layer. We gave up our servers for the cloud. We gave up our distribution for platforms. We gave up our attention for algorithmic feeds. Each trade made sense in isolation. Each one left us more dependent on systems we didn’t build and couldn’t influence.

AI is the next abstraction, and it is the most consequential because it touches the thing that was always supposed to be ours: the thinking. When you outsource infrastructure, you lose control of where your code runs. When you outsource thinking, you lose control of what your organisation is.

Most companies will probably follow Block within a year. The ones that do it well will be the ones that figured out what their core actually was before they started cutting, the irreducible thing that no model and no platform and no vendor could replicate. For some that will mean getting smaller. For some it will mean getting flatter. For all of them it will mean getting honest about what they are and what they are not, and making that distinction before the market makes it for them.