How Amdahl's Law Will Eat Your Job

Competition drives us to do things faster, better, cheaper. In the world we are entering, faster, better, cheaper will become synonymous with “less centralised”.

Cost gravity is the endless fall to free. Technology itself becomes more decentralised over time simply because there are so many more devices. As the cost of chips (and the connections between them) arcs towards zero, we see a corresponding increase in scale and distribution.

Conway’s Law is the observation among engineers that the systems we build mirror the shape of the organisations that build them. The most economically efficient way of building decentralised software to take advantage of the exponential increase in chips and connections is to have an equally decentralised team. Free markets wipe out any structures that don’t keep up.

Amdahl’s Law tells us that the scalability of any structure is limited by the fraction of its work that must be coordinated centrally. The economic pressure on organisational structures to become decentralised will continue to increase until the only systems capable of surviving the free market are those which can operate without any level of central control. There is simply no room left for any kind of upfront consensus before solving a problem.

The Pareto distribution and the equivalence principle combine to tell us that if you’re producing something you want people to use, it has to solve a problem they care about, and it has to do so in the simplest and cheapest possible way given the currently available technology.

Wisdom of the crowd says that the more people involved in the problem solving process, and the less centralised they are, the more accurate and efficient their solution will be.


You might not have noticed the underlying trend towards decentralisation, but if you’re like most people, something occasionally bubbles forth from the deep and provides you with a glimpse into this strange new world.

I’m not just talking about the deep and pervasive impact of blockchain. Kickstarter and Indiegogo, along with Uber and Airbnb, are semi-centralised business models that hint at the raw power of the peer-to-peer economy. The most accurate and up-to-date encyclopaedia on the planet is one that anyone can edit at any time, without central management getting in the way.

Stable employment is a thing of the past; millions of people are choosing to work on a freelance basis, temporarily organising around projects and ideas instead of companies and mission statements. More than one in three Americans freelanced in 2018, and the majority of the US workforce is set to be working freelance by 2027.

None of this is a coincidence or a short-term trend. There are deep and powerful forces of economics and nature driving the world in this direction.

Cost Gravity

The first and most important underlying force driving us towards a decentralised future is called cost gravity.

Now and then, technology tabloids warn that Moore’s Law is about to end. The silicon chips at the heart of our computer systems can’t keep getting faster, we’re told, and when it ends, the future will fall into darkness and uncertainty. Yet inevitably and without fail, scientists find some way to extend it, and we collectively sigh in relief.

Moore’s Law isn’t a mythical beast that magically materialised in 1965 and threatens to unpredictably vanish at any moment. In fact, it’s part of a broader ancient mechanism that has no intention of stopping. This mechanism, which the late Pieter Hintjens termed cost gravity, pulls down the price of technology by about half every two years.

Cost gravity affects the entire human world, but it’s also the force behind multidrug-resistant bacteria, which share DNA the same way open source developers share code. It is inevitable and unstoppable, driven by the spread of information and knowledge. The way it affects technology is that every two years, any given technology becomes twice as available at half the cost, and twice as powerful with half the bulk.

It’s hard for our limited human minds to understand exponential curves. We can look at history and collapse it into: “clean water and roads let the Romans build their empire” or “my phone has more computing power than the whole of NASA in 1962.” In 60 years, the average person on the planet will have access to more computing power and more connectivity than the entire Internet today.

Paper existed for thousands of years, yet only in the fourteenth century did it become a mass-market product in Europe. Thanks to cost gravity, its price fell below the critical threshold where ordinary households could afford paper, and, once the printing press arrived, printed books.

Computing was once the key to global monopolies in finance and industry, but at some point the unchanging exponential decline in price crossed a magic line and every household could afford their own computer. The number of computers globally increased in proportion to the decrease in price, but to our human minds this exponential growth looked like a sudden change — one day no one had a computer, the next day everyone did.

You may think smartphones cropped up very suddenly, but pocket computers have been around since the 1980s and their number has been growing exponentially ever since. These pocket computers are connected to the Internet because the number of connections and the amount of available bandwidth have been growing exponentially since the 1850s.

Once we realise the curve has always existed and will always exist, we see that there is no coming “singularity.” What does happen, predictably, is that when the cost of a key technology falls below a certain threshold, it triggers explosive changes in society. While the curve is mostly invisible, these tipping points are not.

Cost gravity is making computation and connectivity so cheap that we’re starting to see almost everything become “smart”. Your car is now a computer with an engine and wheels. Your light globes are computers that light your home in whatever hue and brightness you desire. Your washing machine is a computer that cleans clothes, and sooner than you think, your shirt will be a computer that tells your washing machine what kind of detergent and water temperature it likes.

Even our headphones now have computational power, their own operating system, and a high bandwidth wireless connection.

Apple AirPods have about 500x more computational power than the guidance computer aboard the lunar module that put Neil Armstrong on the Moon.

The cost of chips and connections drops at an exponential rate, which means their number increases at the same exponential rate. The Internet of Things (IoT) has come into existence because we simply can’t avoid it. Where else are these chips and connections going to go? It’s cheaper to put a computer in your washing machine than to spin the drum a few times too many.

Cost Gravity Drives Decentralisation

Cost gravity acts on all our systems. The drop in price of computation and connectivity means the size of our systems and the number of moving parts within them reliably doubles every two years. Complex, monolithic software that used to drive web applications has broken down into a large number of smaller microservices and even standalone functions, all with numerous connections between them. At the same time, these services are becoming increasingly distributed across a variety of systems, datacenters, and countries.

The total cost of ownership for a distributed system is lowest when each part of the system can operate autonomously, with no central point of synchronisation or failure. Individual servers at Facebook and Google go down every day, but we don’t know or care, because there’s enough autonomy distributed throughout the system that the system as a whole continues to operate as usual. At the time of writing, microservices (which many people in the field haven’t even caught up with yet) are already starting to lose out to even more decentralised serverless architectures.

This level of decentralisation is increasing at an exponential rate. It doubles every two years, in step with the halving of the cost of chips and connections. The number of parts in each system doubles every two years, and so does the bandwidth and the number of connections between them. This forces anything that uses chips and connections to become more decentralised in the way it is produced, deployed, and used, because that’s the only economical way to take advantage of the reduced cost and the increased number of chips and connections. If you aren’t doing it, someone else will eat your lunch in front of your eyes.

This must inevitably affect the way we work. While 50 years ago most people worked for the same company for life, today the market demands that we frequently jump from job to job. Huge numbers of people don’t even have a job; they work entirely on a freelance, per-project basis. By 2027, the majority of the US workforce will consist of freelancers. They will be working at the most granular level that technology allows, as they always have done, because that’s the most efficient way to take advantage of each individual skill set. This is why freelancers work on a per-project basis, but what they’re really doing is solving problems for someone else. The number of people you personally solve problems for, and the number of people who solve problems for you, will continue to increase. Sooner than you think, everyone on the planet will be both your client and your employee.

It’s always been cheaper to rent someone’s spare room than to book a hotel, and cheaper to pay someone for a ride than to call a taxi. But Airbnb and Uber couldn’t exist until cost gravity pushed the cost of chips and connections past the magic line where we could all suddenly afford enough computing power and bandwidth to participate in a real-time peer-to-peer network, which makes efficient use of those spare rooms and vehicles across a large enough pool of individuals.

We can very roughly measure this increase in decentralisation. A decade ago, the world was 32x more centralised than today. In two decades, our world will be roughly 1024x more decentralised than today.
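
For the curious, the arithmetic behind those figures is just repeated doubling. Here is a minimal sketch in Python, assuming the two-year doubling period described above:

```python
# Rough back-of-the-envelope only: if decentralisation doubles every two years,
# then over `years` years it grows by a factor of 2 ** (years / 2).
def decentralisation_factor(years: float) -> float:
    return 2 ** (years / 2)

print(decentralisation_factor(10))  # 32.0   -> a decade is five doublings
print(decentralisation_factor(20))  # 1024.0 -> two decades is ten doublings
```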

We are heading towards extreme levels of decentralisation. Soon you’ll wake up and it will seem like everything became insanely decentralised overnight. Thanks to the exponential curve and cost gravity, it will sneak up on us the way home computers, smartphones, blockchain, Airbnb, and the “gig” economy did.

The Wisdom of Crowds

A group of random people will on average be smarter than a few experts. It’s a counterintuitive thesis that mocks centuries of received wisdom. Adding more experts to an expert group will make it more stupid, while adding laymen makes a stupid group of experts become smart again.

James Surowiecki wrote about this phenomenon in his book The Wisdom of Crowds. He identified four elements necessary for a wise crowd:

  • diversity of opinion,
  • independence of members from one another,
  • decentralisation of their organisational structure,
  • and effective ways to aggregate opinions.

He describes the ideal wise crowd as consisting of many independently-minded individuals who are loosely connected, geographically and socially diverse, unemotional about their subject, each having many sources of information, and some mechanism to bring their individual judgments together into a collective decision.

Repeated studies conducted from 1907 to the present have demonstrated the effectiveness of the smart crowd. You can even try this yourself. Take a large jar and fill it with jelly beans. Sit out on the street and ask everyone who walks past how many jelly beans are in the jar. The average guess becomes more accurate as you accumulate more guesses. The more random the group, the faster it will approach the correct number.
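
If you don’t have a jar of jelly beans handy, here is a toy simulation of the same effect. The bean count, bias, and noise figures below are invented purely for illustration; the point is only that averaging many independent, individually poor guesses converges on the true count:

```python
import random

TRUE_COUNT = 850  # hypothetical number of beans in the jar
random.seed(1)

def one_guess() -> float:
    # each passer-by is individually way off: a random bias plus random noise
    bias = random.uniform(-0.4, 0.4)
    noise = random.gauss(0, 0.15)
    return max(1.0, TRUE_COUNT * (1 + bias + noise))

guesses = [one_guess() for _ in range(1000)]

for n in (1, 10, 100, 1000):
    average = sum(guesses[:n]) / n
    print(f"average of first {n:4d} guesses: {average:7.1f} (true count {TRUE_COUNT})")
```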

According to Surowiecki, the wise crowd makes fast and accurate judgments, organises itself to make the best use of resources, and cooperates without central authority. Some examples of wise crowds, such as Wikipedia, are extraordinarily successful despite the influx of trolls and vandals. This is because the organisational structure itself is [antifragile](https://en.wikipedia.org/wiki/Antifragile) and requires adversity in order to _learn_.

Is a random group of developers better than a smaller group of expert developers? Experimentation reveals that the answer is a resounding yes. The larger and more random the group of developers, the faster and more accurately it solves problems when provided with the right protocol.

My own experiments have shown that a wise crowd is also the most cost efficient way to build software, mostly due to the complete lack of technical debt accrued during the development process. As evident from Wikipedia, a smart crowd is also the most cost efficient way to create an accurate encyclopaedia. Cost efficiency is obviously critical if you’re competing with others in a free market.

If something can take advantage of smart crowds, the free market will force it to do so, because it’s faster, cheaper, and solves problems more accurately. The next arms race will be between competing decentralised structures and systems as they engage in a hill-climbing battle to find the optimal solution to social scalability.

The Pareto Distribution

In 1895, the Italian economist Vilfredo Pareto noticed that people in society inevitably stratified into what he called the “vital few,” or the top 20 percent in terms of money and influence, and the “trivial many,” or the bottom 80 percent. What he found was that about 80 percent of the wealth and power was controlled by 20 percent of the population.

Since that time, an inordinate volume of research has been conducted into what economists, mathematicians, physicists, and engineers call the Pareto Distribution. Commonly known as the 80/20 principle, it affects almost every field of study. Both natural and sociological phenomena are affected by the Pareto distribution — 80% of the population lives in 20% of the landmass, 80% of the matter in the known universe is concentrated in 20% of the space, 80% of sales come from 20% of the customers, 80% of the work gets done in 20% of the time, and conversely, the hardest 20% of the work takes 80% of the time.
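
As a quick sanity check rather than a proof, you can watch the 80/20 split fall out of the maths. The sketch below draws samples from a Pareto distribution with tail index alpha of roughly 1.16 (the value at which the top 20% hold about 80% of the total) and measures the share empirically:

```python
import random

random.seed(0)
# random.paretovariate(alpha) draws from a Pareto distribution with minimum value 1
samples = sorted(random.paretovariate(1.16) for _ in range(100_000))

top_20_percent = samples[int(0.8 * len(samples)):]
share = sum(top_20_percent) / sum(samples)
print(f"share of the total held by the top 20%: {share:.0%}")  # roughly 80%
```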

Most developers have probably experienced spending hours upon hours chasing down a bug, only to fix it with one line of code. That’s the Pareto distribution in action.

Software developers don’t get a choice in this: there’s nothing a developer can do to avoid spending 80% of their time on 20% of the codebase. The same is true of any creative work. Given that, it’s important to be able to determine what is, and what is not, in the critical path. If you’re in the software development arena, at least make sure that the code you or your developers spend 80% of the time writing is absolutely critical to success.

If the end goal is to build something that people will actually use, then it must solve a problem that people care about. We can break this problem/solution approach all the way down to the most granular level. Any and all code changes must solve a problem. A line of code that doesn’t help solve a problem is called technical debt. Conversely, ‘ugly’ code is not bad code if it solves a problem.

We can also look at this from another perspective. The equivalence principle tells us that if you’re in a sealed box, it’s impossible to know whether you’re falling towards the ground or coasting towards Mars; both feel the same. We need an external point of reference to know whether we’re going in the right direction or plummeting to the ground.

When it comes to software development, we also need an external point of reference. Adding beautiful, well-thought-out code to the codebase is worthless if that code is pushing the project towards the ground (by burning through funding) instead of towards somewhere worth going. If you want to end up in a place where people actually use whatever you’re building, you need an external reference point against which you can compare every code change, to see whether that’s really the direction you’re heading.

That external point of reference is quite easy to understand: is the patch you (or one of the developers on your team) are working on right now solving an actual problem that’s worth solving? If not, then it’s going in the wrong direction. Code that pushes a project in the wrong direction is technical debt.

Amdahl’s Law

Amdahl’s Law is how computer scientists calculate the maximum speedup provided by adding more processing units to a system.

If your process runs on four cores and 25% of its running time is spent in some kind of mutex situation, where all cores have to stop and wait for an operation to finish before work can continue, then the speedup can never exceed 4x no matter how many cores you add; on four cores you get barely 2.3x, and beyond a handful of cores each extra one buys you almost nothing. The mathematics is quite straightforward.
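
That straightforward mathematics is usually written as speedup(n) = 1 / (s + (1 - s) / n), where s is the serial fraction and n the number of cores. A minimal sketch, using the 25% figure from the example above:

```python
def speedup(serial_fraction: float, cores: int) -> float:
    # Amdahl's Law: the serial part never gets faster; the parallel part divides by n
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

s = 0.25  # the 25% mutex time from the example above
for n in (1, 2, 4, 8, 64, 1024):
    print(f"{n:5d} cores -> {speedup(s, n):.2f}x speedup")
# the speedup creeps towards, but never reaches, 1 / 0.25 = 4x
```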

We can translate this to team structures. The more a team needs to stop and synchronise, the less work they get done.

If a team spends an hour of an eight-hour day in meetings, 12.5% of its work is serialised, which caps the effective team size at 1 / 0.125 = 8. Adding more people to the team will not improve the speed at which it gets things done.

In practice, it’s usually much worse than that. Any form of consensus or blocking process decreases the maximum team size. In most organisations I’ve looked at closely, the maximum team size is more like four to six.

Teams working at Amazon are generally limited to however many people can be fed with two pizzas — 6 to 12 people. This is more than most companies can handle on one team before efficiency starts dropping. Amazon’s management structure requires less upfront consensus than most.

Most people who’ve looked at this two-pizza rule, and most people who work at Amazon, seem to think this is because smaller teams are more efficient. This is not the case.

When Amazon teams grew larger than about 12 people, adding more people did not increase the speed that work got done. So they were paying more salaries but getting the same results.

This makes it look like larger teams are less efficient, which is why they have this two-pizza rule, but the real problem isn’t some magical team size or human psychology.

What’s actually happening is that Amazon is hitting the limits imposed by Amdahl’s Law.

If your management structure requires you to have meetings, if it requires central decision making of any kind, if people need to get approval to start new things, if they need code reviews before merging code into production, if you have any kind of mutex situation, then the management system is unable to take advantage of any extra people that you add to the team. The team does not become less efficient; the management structure itself is simply too centralised.

Upfront consensus doesn’t scale. The scalability of a team is inversely proportional to the amount of consensus it requires to do work. A team can only be massively scalable if it requires absolutely no upfront consensus of any kind.

Conway’s Law

Like most other powerful observations in computer science, Conway’s Law is old. In 1967 Melvin Conway noticed that software tends to resemble the shape and structure of the organisation that created it.

To continue with the Amazon reference, small and independent teams at Amazon clearly define what service they are going to provide, and other teams can choose to consume these services (or not). The structure ends up being a loose coalition of small teams which both provide services to other teams and consume services provided by them.

The software the organisation creates looks the same. It consists of small, independent applications that communicate with each other using contracts called APIs. Each piece of software provides a service through an API and can consume services provided by other pieces of software. This structure is called a microservice architecture.
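
As a loose illustration of that shape (not Amazon’s actual stack; the service, endpoint, and prices below are all made up), here are two tiny Python “services”: one owns pricing and exposes it over an HTTP API, the other consumes that API without knowing anything about its internals:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PricingService(BaseHTTPRequestHandler):
    """Owns pricing. Its API contract: GET /price/<sku> -> {"sku": ..., "price": ...}"""
    def do_GET(self):
        sku = self.path.rsplit("/", 1)[-1]
        body = json.dumps({"sku": sku, "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def start_pricing_service(port=8001):
    server = HTTPServer(("localhost", port), PricingService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def checkout_total(skus, pricing_port=8001):
    """A second 'service' that consumes the pricing API purely through its contract."""
    total = 0.0
    for sku in skus:
        with urllib.request.urlopen(f"http://localhost:{pricing_port}/price/{sku}") as resp:
            total += json.loads(resp.read())["price"]
    return total

if __name__ == "__main__":
    start_pricing_service()
    print(checkout_total(["shirt-42", "mug-7"]))  # 19.98
```

Either side can be rewritten, redeployed, or scaled independently, as long as the API contract holds; that is the whole point of the structure.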

So what comes first? The organisational structure, or the structure of the software? Neither. The structure of the software is the structure of the organisation. The structure of the organisation is the structure of the software. They are one and the same.

You can fight Conway’s Law by spending more money. But this is inefficient and companies that do this are generally killed by the market. If you want to create scalable and distributed software, you need a scalable and distributed development team.

A Brave New World

Cost gravity tells us we’re heading towards extreme levels of scale and decentralisation. Conway’s Law says the most efficient way to build these systems is by having an organisational structure that is extremely scalable and decentralised. In 10 years, our systems will be 32x larger and 32x more decentralised than today, and so will the dominant organisational structure. Whichever structure can operate at these levels of scale and decentralisation will be the most efficient structure, and therefore the most dominant in a free market.

But Amdahl’s Law tells us that in order to scale, we cannot have any upfront consensus. Any structure where you have to agree on what you’re building before you start working is not going to scale. There can be no code reviews before putting code into production. There can be no roadmaps telling us where to go next. There can be no forward planning of any kind.

The Pareto distribution and the equivalence principle tell us that if you’re building software that you want other people to use, it has to solve a problem for them, and it has to do it in the simplest and cheapest possible way. Code that doesn’t solve a problem is pushing your codebase away from this goal, not towards it. Code that pushes you away from your goal is technical debt.

What if we combine Amdahl’s Law with this need for all code to solve a problem? What happens if you make a rule that all pull requests to your codebase must solve a problem? Is it possible that a rule like this could also eliminate the need for upfront consensus?

If all pull requests solve some small problem, would they all come together to solve a much larger problem? Would the codebase evolve instead of being intelligently designed?

Wisdom of the crowd says that if we want to solve problems more accurately, we need more people. What happens when you have a huge number of random people solving tiny problems? Would that make the end result more accurate at solving a larger problem?

This experiment has already been done

Most software engineers don’t like the notion that powerful and effective solutions can come into existence without an intelligent designer actively creating them. And yet it’s very rare to find a software engineer who would deny the theory of evolution and believe that we were intelligently designed by a supernatural being.

It turns out that requiring every pull request to solve a problem is not the only rule you need, but it’s close. There are a few additional rules, which have been discovered through a process of trial and error.

After implementing these simple rules, your software can grow through a process of evolution: a hill-climbing algorithm. The rules allow developers to work with no upfront consensus of any kind and instead set your project off on a drunken stumble to greatness. No meetings, no planning, no roadmaps, no managers, no code reviews, which means the organisation has no blocking processes or mutex situations preventing it from scaling.
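
To make the “drunken stumble” concrete, here is a toy hill-climbing sketch, offered as an analogy only: each “patch” is a small random change, and a patch is merged only if it improves some fitness measure, a stand-in for “does this solve a problem someone cares about?”.

```python
import random

def fitness(x: float) -> float:
    # a toy stand-in for "how well the project solves real problems"
    return -(x - 3.0) ** 2

def hill_climb(steps: int = 1000) -> float:
    current = 0.0
    for _ in range(steps):
        patch = current + random.uniform(-0.1, 0.1)  # a small, independent change
        if fitness(patch) > fitness(current):        # merge it only if it helps
            current = patch
    return current

print(hill_climb())  # drifts towards the optimum (3.0) with no upfront plan
```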

In exchange, this protocol forces the project owner to relinquish control over the direction of the project. Central control requires upfront consensus, which doesn’t scale unless you have an effectively unlimited budget and are not working within the constraints of the free market.

There’s a simple protocol you can follow which will automatically implement all the rules you need. It’s called the Collective Code Construction Contract (C4), and it was originally created by Pieter Hintjens of the ZeroMQ project. In fact, ZeroMQ, which is trusted by large institutions and high-frequency traders where a software failure costs millions, is simply an emergent property of the C4 protocol.

Written on July 29, 2019