Andy Jassy believes that on premises IT infrastructure will not go away



We have always been convinced, and remain so, that there is no way that the largest organizations in the world will move all of their computing to one of the big cloud builders. And ten years ago, when Amazon Web Services was still relatively small and yet growing fast enough to scare the heck out of those who sell IT infrastructure or make its components, the current chief executive officer of Amazon and the former head of its cloud division was fond of saying that “in the fullness of time” all workloads would move to the cloud.

One of the earliest references we can find for this statement is here, and we remember being at the November 2016 re:Invent conference – a press conference after the keynote, to be precise – and sitting right in front of Jassy in the front row and saying that, while that was an interesting statement, there was no way in hell this was going to happen. (We may have used more colorful language than that.) That had been the party line from AWS ever since – until Jassy came on the call with Wall Street last week to go over the company’s overall financial results for the fourth quarter.

We quoted Jassy in full about the benefits of elasticity and how the AWS business was doing in our coverage of the year end AWS results, and we noticed a shift in attitude as well as a statistic that we do not believe to be true here in 2023. So we will cite that part again:

“I think it’s also useful to remember that 90 percent to 95 percent of the global IT spend remains on-premises,” said Jassy. “And if you believe that – that equation is going to shift and flip, I don’t think on-premises will ever go away – but I really do believe in the next ten to fifteen years that most of it will be in the cloud if we continue to have the best customer experience.”

So now Jassy believes that on premises IT infrastructure will not go away. Which seems more reasonable given data sovereignty issues, latency issues, cost issues, and just the desire by companies to control their own fates. Ya know, like the hyperscalers and cloud builders do. It’s funny how those who want you to give up your infrastructure and your code are the ones who never will. Do as I say, not as I do, we guess.

We don’t think cloud has peaked, and we definitely think that cloud has tremendous – dare we use this word? – utility. But we wonder about that cloud versus on premises percentage of datacenter compute, storage, networking, and software.


As we have said before, we think there are three different models that are evolving – plus a fourth that is really an extension of the cloud – and we will see where the chips fall:

  • There are the infrastructure services and add-on software services from the major cloud builders like Amazon Web Services, Microsoft Azure, Google Cloud, IBM Cloud, Alibaba, and Tencent, as well as many smaller clouds and hosting providers who are getting more and more cloud-like, especially in the adoption of cloud-style subscription pricing.

  • Then there are co-location facilities, which host bought, leased, or utility-priced IT gear on behalf of organizations, allowing them to get out of having to build, maintain, and depreciate datacenters while keeping a lot more degrees of freedom, and which, importantly, have high-speed links into the cloud builders. Interestingly, more than a few cloud builders also use these companies, with Equinix, QTS, Digital Realty, CyrusOne, and GDS (in China) being the big ones.

  • There are on premises datacenters owned and operated by organizations, using equipment they buy, lease, or subscribe to under utility pricing.

  • AWS Outposts, private Azure Stack installations, and Google Anthos are really an extension of the cloud builders down into the co-los and on premises datacenters, and technically are a fourth deployment and pricing method for IT infrastructure. It is not clear if this is really being used except in corner cases. It is like a small version of AWS GovCloud, where AWS built a super-secure and isolated datacenter specifically for three-letter Federal government agencies in the United States. Either GovCloud is the first Outpost, or an Outpost is a very small, personal GovCloud.


The situation is very far from “cloud versus on premises.” It is more complicated than that. But just for fun, let us try to reckon how much of the global IT budget is actually being spent on cloud. We will have to mix and match some datasets.

According to Gartner, there was around $209 billion in IT spending for datacenter systems – servers, storage, switching, and operating systems for them – in 2022. That figure includes spending by hyperscalers (who really are SaaS vendors in a sense) and cloud builders on the gear in their datacenters. Synergy Research, meanwhile, says that in 2022 the hyperscalers and cloud builders spent $97 billion on datacenter hardware, which is a cost of production for them. So the rest of the IT market – enterprises of all sizes, governments, educational institutions, research centers, telecommunications providers, and such – only spent around $112 billion on IT gear. It looks like the hyperscalers and clouds represent around 46.4 percent of datacenter systems spending, which sounds about right.
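
Just to show the arithmetic, here is a minimal sketch in Python of that reckoning, using nothing but the Gartner and Synergy Research figures cited above (the rounding is ours, not theirs):

```python
# Back-of-the-envelope split of 2022 datacenter hardware spending, in billions of US dollars.
# Figures are the Gartner and Synergy Research numbers cited in the text.
gartner_datacenter_systems = 209   # Gartner: total datacenter systems spending, 2022
hyperscaler_cloud_hardware = 97    # Synergy Research: hyperscaler and cloud builder hardware spend, 2022

everyone_else = gartner_datacenter_systems - hyperscaler_cloud_hardware
cloud_share = hyperscaler_cloud_hardware / gartner_datacenter_systems

print(f"Rest of the IT market hardware spend: ${everyone_else} billion")  # $112 billion
print(f"Hyperscaler and cloud share: {cloud_share:.1%}")                  # 46.4%
```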

On top of this, according to Gartner, there was another $790 billion in enterprise software spending in 2022. So basic IT spending outside of clouds – and not including myriad tech support, systems integration, application management, hosting, and cloud services – is $902 billion. (If you want to be fair, you would add in the amortized cost of having maybe 10 million programmers on the payroll at these non-cloud and non-hyperscaler companies. It is hard to reckon that, but it might be around $1 trillion. Some of these applications run in the cloud, some on premises, and some in co-los.)

Now, again according to Synergy Research, companies spent $195 billion on IaaS and PaaS services in 2022, and another $229 billion for managed private cloud, enterprise SaaS, and content delivery networks. We think managed private cloud and content delivery networks are a relatively small part of that. Call it 70 percent for enterprise SaaS, or $160 billion.

So, the universe of total IT spending on datacenter hardware and enterprise software (SaaS or not) is a cool $999 billion according to Gartner, and the portion that organizations are spending on cloud capacity (in the broadest sense) is $195 billion plus $160 billion, or $355 billion. When we do that math, the cloud penetration is 35.5 percent. Not 5 percent or 10 percent.
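
Here is the same back-of-the-envelope math as a small Python sketch; the 70 percent enterprise SaaS share of the Synergy Research bucket is our own guess, not a reported figure:

```python
# Rough cloud penetration of total IT spending, in billions of US dollars, for 2022.
datacenter_systems = 209    # Gartner: datacenter hardware and systems spending
enterprise_software = 790   # Gartner: enterprise software spending
iaas_paas = 195             # Synergy Research: IaaS and PaaS services
saas_bucket = 229           # Synergy Research: managed private cloud, enterprise SaaS, and CDN
saas_share = 0.70           # our estimate of the enterprise SaaS portion of that bucket

total_it_spend = datacenter_systems + enterprise_software   # $999 billion
cloud_spend = iaas_paas + round(saas_bucket * saas_share)   # $195 billion + $160 billion

print(f"Total hardware plus enterprise software: ${total_it_spend} billion")
print(f"Cloud spending, broadly defined: ${cloud_spend} billion")                  # $355 billion
print(f"Cloud penetration: {cloud_spend / total_it_spend:.1%}")                    # 35.5%
print(f"Left on premises and in co-los: ${total_it_spend - cloud_spend} billion")  # $644 billion
```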

There is a portion of the remaining $644 billion up for grabs. But certainly not all of it, not based on the thinking we see out there among the IT customer base, which is plenty annoyed about the surprisingly high cost of cloud once you are into it, elasticity or not. There is a value to elasticity, but it is not an end unto itself. As we said in our comments earlier this week, the first $100 billion for AWS, which should happen in two years or so, is going to be a lot easier than the second $100 billion. And hence you see AWS moving up the stack, selling software – its own software as well as that of competitors, on which it takes a commission for running on its cloud.

Which brings us all the way to our point. How do you make it so that your IT organization wins and you don’t just end up in another sticky platform you can’t easily get off of when the discounts get thinner and thinner and the costs go up and up?

The answer is simple: Control your own platforms and your own code, and you control your own fate. You have to be more like the IT organizations of days gone by and more like the hyperscalers and cloud builders themselves. It is expensive, but not as expensive as losing whatever competitive edges your own smart people will come up with over the decades. You cannot abdicate the value chain while AWS is trying to move up it. It is bad enough that you have to compete with AWS as it is.

Here is the idea. Way back when, as cloud was starting to take off, we used to joke that the last server in the corporate datacenter would be the LDAP or Active Directory server, the nexus through which all kinds of IaaS, PaaS, and SaaS services would be cross-connected. (It was funny to envision this giant four-socket X86 box sitting in the center of a raised tile datacenter with a zillion wires connecting it to a massive router.)

Our thinking has evolved, as it must. If we were running IT operations somewhere today, we would absolutely use cloud services, mainly to run test/dev or to put new ideas (like how to train AI models and how to integrate AI inference into applications) through their paces. But once we figured out what we were doing, we would never deploy those applications on a “public” cloud. No way. We would, however, deploy utility-priced servers and storage in co-location facilities adjacent to the clouds, just in case we needed extra compute capacity or fast access to cloud software stacks. We would also keep as much storage as possible in these co-location sites and the bare minimum of storage in the cloud. You can put data in a cloud for free, but they take your kidney if you want to move it.

Also: You need multiple co-los for high availability, cross-connected. And maybe you need to keep your storage and that LDAP/Active Directory server in a secure datacenter of your own, just in case the bit hits the fan. Replicate to your own facility if you want to be very safe. Consider it an online backup that can be used in a pinch and that is air-gapped against ransomware and hackers.
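
For the sake of illustration only, here is how we might jot that placement strategy down as a simple policy map; the workload names and tier labels are our own hypothetical shorthand, not any vendor’s configuration format:

```python
# Hypothetical placement map for the hybrid approach sketched above.
# Names are illustrative; the point is where each class of workload lives, not the syntax.
placement_policy = {
    "test_dev":           "public_cloud",          # elasticity pays off for disposable work
    "ai_experiments":     "public_cloud",          # model training trials and inference prototypes
    "production_apps":    "colo_adjacent",         # utility-priced gear cross-connected to the clouds
    "primary_storage":    "colo_adjacent",         # keep the bulk of the data out of the egress tollbooth
    "cloud_storage":      "public_cloud_minimal",  # only what the cloud services actually need
    "directory_identity": "own_datacenter",        # the LDAP/Active Directory nexus stays in house
    "replica_backup":     "own_datacenter",        # air-gapped copy, usable in a pinch
}
```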

This is a kind of hybrid cloud that makes sense to us. One that uses substrates that can run across all of the clouds, on premises, and in co-los. Things like Red Hat OpenShift and HashiCorp HashiStack, or heaven help us even the full VMware stack with its Kubernetes layer on top, are expensive. Sure. But so is making Jeff Bezos the third richest man in the world for the next decade or two, which is only happening because of the profit margins of AWS.

