by adam mathes

Technology At Scale is A Commodity

One explanation for Google’s early success is that it derived from applying computer science at scale. This scale was not just a difficult technical achievement fueled by incredibly smart technologists – it was so difficult, uncertain, and costly that competitors didn’t pursue similar efforts until they were proven out.

Google crawled the web faster, then performed complex graph analysis on it that nobody else did – or even could do – creating a better search experience.

Gmail provided effectively infinite storage for email – a gigabyte – at a time when competitors limited storage to amounts that required user attention and deletion – single- or double-digit megabytes.

Moore’s law may slow as we reach physical limits, but the unit costs of things like computation, memory, and storage keep falling, and fall faster at scale. So as Gmail grows, unit costs decrease as usage increases, and technological advances carry you ahead.

This isn’t a defensible play over the long term in and of itself: competitors can swoop in and realize the same cost advantages, as the big webmail providers quickly did.

But now, you don’t have to be a giant company to make that play.

Scale at Scale

Amazon Web Services (and similar efforts from IBM and others) have made computing power at scale itself a commodity. Tiny startups can access it with minimal capital, and they can realize many of the advantages of huge scale before reaching it themselves.

Companies like Google could develop innovations like Gmail by asking questions – what if storage were so cheap that users never had to delete an email? – and then building out the solution to deliver it.

Crazy Technology Ideas Are Sensible Now

Probably the coolest feature I ever product managed at Google started with a conversation that went something like:

“Hey, this is computationally infeasible but could we brute force the linking between books via quotes? Hash every sentence or n-gram and compare?”

“Well, that sounds computationally impossible. So no, not that way. That’s... well, that’s not going to happen. But, yeah, maybe.”

Then brilliant research scientists went off and wrote groundbreaking software, comfortable in the knowledge that they had nearly infinite computing resources to run giant MapReduces to solve it.
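The brute-force idea in that conversation – hash every n-gram and compare across books – can be sketched in a few lines. This is a hypothetical toy version of my own, not the actual Google Books implementation; the function names, the word-level n-grams, and the choice of n are all illustrative assumptions:

```python
from collections import defaultdict

def ngrams(text, n=8):
    """Yield word n-grams from a text – a crude stand-in for the
    sentence/quote units discussed above."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def shared_passages(books, n=8):
    """Map each n-gram's hash to the set of books containing it.
    Any n-gram found in two or more books is a candidate shared
    quote linking those books together."""
    index = defaultdict(set)
    for title, text in books.items():
        for gram in ngrams(text, n):
            # hash() is only stable within one process; a real
            # system would use a fixed hash like MD5 or SHA-1
            index[hash(gram)].add(title)
    return {h: titles for h, titles in index.items() if len(titles) > 1}
```

At corpus scale this becomes a distributed job – emit (hash, book) pairs in the map phase, group by hash in the reduce phase, and keep the hashes that touch multiple books – which is exactly the kind of problem MapReduce was built for.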

I don’t want to discount the high cost of complexity and technical achievement in something like popular passages or similar work – it’s just that getting great engineers to solve problems is not the scarce resource. If you want to do something crazy today, the gating factor is not owning a data center, or having access to tools that handle large amounts of data, or finding smart engineers.

Open Source

While AWS has made scaled computing a commodity, open source software increasingly makes complex tasks like machine learning more accessible to more people and cuts their costs too.

It makes sense for Google, Amazon, IBM, and others to invest in and distribute these tools – they drive additional usage of their scaled computing platforms.

While hiring is difficult, there are plenty of brilliant people who will jump at the chance to work on interesting giant problems (with enough compensation).

The cost of scaled computing continues to drop and become more accessible, and the tools to leverage it get cheaper and cheaper. Which means:


  1. companies won’t win on scaling infrastructure alone
  2. much of the groundbreaking computer science of 10 years ago is now a commodity
  3. a lot of what we think of as groundbreaking CS today will be a commodity in 10 years

· · ·

If you enjoyed this post, please join my mailing list