There’s no AI without the cloud, says AWS CEO Adam Selipsky

AWS has been around for nearly all the big computing transformations of the 21st century so far. Selipsky’s not worried about the next one.

Photo illustration by Alex Parkin / The Verge

Today, I’m talking with Adam Selipsky. He’s the CEO of Amazon Web Services, or as it’s usually called, AWS. AWS is quite a story. It started as an experiment almost 20 years ago with Amazon trying to sell its excess server capacity. And people really doubted it. Why was the online bookstore trying to sell cloud services?

But now, AWS is the largest cloud services provider in the world, and it’s the most profitable segment of Amazon, generating more than $22 billion in sales last quarter alone. By some estimates, AWS powers roughly one-third of the entire global internet. And on the rare occasion an AWS cluster goes down, an unfathomable number of platforms, websites, and services feel it, and so do hundreds of millions of users.

Adam was there almost from the start: he joined in 2005 and became CEO of AWS in 2021 when former AWS CEO Andy Jassy took over for Jeff Bezos as CEO of Amazon. That’s a long time, but pay attention to how excited Adam is about the prospect of yet more growth for AWS. Even with big competitors such as Microsoft and Google gaining ground, he estimates that only 10 percent of his potential customers overall have made the jump to the cloud. 

That leaves lots of room to grow, and I wanted to know where he thinks that growth can come from — and importantly, what will keep AWS competitive as the word “cloud” starts to mean everything and nothing.

The answer there, of course, involves AI. I’m gearing up to co-host Code in September this year, and that means I’m spending a lot of time thinking about AI in general, and Adam is a great person to help me think through it. AWS is going big on AI, but it has some challenges. OpenAI, which makes ChatGPT, has an exclusive deal with Microsoft for cloud services. Google, which has made a huge bet on AI, obviously runs its own cloud services and sells access to Google’s models exclusively through them. So AWS has to be great at everything else — and still has to compete for the hardware necessary to run these models, which is in short supply. Adam and I got into all of it and into the weeds of what it means to be an AI provider at scale. It’s uncharted territory.

I also couldn’t resist asking Adam about how AWS advertises in airports and with the NFL — is there anyone who needs AWS who doesn’t know about it? This is my favorite question to ask enterprise software CEOs, and Adam’s answer was pretty good.

A few notes before we start since we did chat about AI hardware for a while: the best AI chips right now are made by Nvidia, which made a big bet on using its GPU tech for AI a while back. Its A100 and H100 GPUs in particular are state of the art, and they’re hard to get, even for AWS. That means everyone is trying to make their own chips as well, and AWS has two of those, called Trainium and Inferentia.

Okay, Adam Selipsky, CEO of AWS. Here we go.

This transcript has been lightly edited for length and clarity.

Adam Selipsky, you are the CEO of Amazon Web Services, or AWS, as it’s commonly known. Welcome to Decoder.

Thanks a lot for having me on. I really appreciate it.

There is a lot to discuss. I was looking at the timeline here: depending on how you count, AWS is coming up on 20 years. It’s been 19 years since the first press release Amazon issued with the words “web services” in it, and it is now the most profitable part of Amazon. It is leading the charge into a lot of areas, including AI. You were on the early part of that ride. You left to go be the CEO of a company called Tableau. You came back as CEO in 2021. How do you think about AWS now?

When we started, we talked a lot about the IT benefits. We talked a lot about the “muck” or the undifferentiated heavy lifting of IT, as we used to call it. And we don’t really talk about that so much anymore, because I think what the world’s really figured out is as much as anything what AWS in the cloud enables you to do is to really transform the way that your organization operates. I think we’ve really become a part of the fabric, not only of how IT and the internet operates, but really one of the driving forces to how companies want to reshape and transform themselves.

So I have a goal for this interview. I’m telling you upfront so I’m not playing any games. Every time I talk to an enterprise software CEO, my goal is to get them out of the language of the ads in the airport and down onto the ground. So “cloud transformation, innovative IT change,” that’s airport ad stuff. It’s great. I’ve always wondered who looks at the ads in the airports.

My first question is: do you approve the airport ads? Because there’s a whole part of AWS that is just saying the words “AWS” to people in all kinds of spaces, like the NFL, transit systems. Does that work for you with that language that you’re using there — “cloud transformation?” Is that just to get people aware of AWS? Or are those code words for people in decision-making capacities to say, “Okay, I’m familiar with AWS. I have these problems you’re talking about. They’re the vendor of choice.”

Hopefully, if we’re doing our job well, it’s speaking to the reality of what’s happening. Because if you go talk to our customers to whom we’re important, they are transforming specific pieces of what they’re doing. And we should get detailed about what that means. But I think it speaks to them. It’s not code. I would call it shorthand. It’s shorthand for the transformations that they’re seeing. So let me give you a very specific example: I could show you a ton of pharmaceutical companies who used to have well-paid scientists who would take 12 to 20 weeks to obtain servers (capital expenditure, actual physical servers) to do their research. And they would sit around and be inefficient waiting.

And with this elastic computing model that AWS pioneered, you could have that in less than 30 minutes. Pharmaceutical company after pharmaceutical company will tell you that they have improved and shortened their time to market using AWS and the cloud model. So I mean, that is a very specific example. If you have to buy a lot of CapEx for a big project, spend a lot of money, you’ve spent it. You’re not getting it back. It’s sitting on your premises. You feel like you have to succeed. The penalties for failure become huge. So you have people who, even if things are not going well, can’t convince themselves or others that it’s okay that things are not going well. Things are always going well until they’re finally not. And it’s very hard to shut things down — just a little more time. And things take a long time because nobody wants to admit failure. So with the cloud model, you just turn stuff on and you turn stuff off. So what happens is you get rapid experimentation. So when I talk about transformation, it’s not a buzzword. It is about, for example, a specific concept of reducing the penalties for failure.
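To make the elasticity Selipsky is describing concrete, here is a minimal sketch using the EC2 API via boto3. The AMI ID and instance type are placeholders rather than details from the conversation; the point is that turning a server on and off is a couple of API calls, not a procurement cycle.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Turn stuff on": request one small server; it is typically running within
# minutes, versus a 12-to-20-week hardware procurement cycle.
reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = reservation["Instances"][0]["InstanceId"]

# "Turn stuff off": if the experiment fails, the spend stops here.
ec2.terminate_instances(InstanceIds=[instance_id])
```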

See, that’s a new version of the pitch. That you should not buy a bunch of computers and servers; you should rent them from Amazon or some other cloud provider just as you need them and scale up and down with your workloads. That’s the old pitch. That’s 20 years of a pitch that has revolutionized the internet. It’s revolutionized a number of businesses. The idea that it reduces the penalty for failure, that’s a new turn on it. Is that your term? Is that how you have been thinking about it? Or is that something that’s developed in the market over time?

We’ve been saying it for a long time. Different people hear different things. We may be better or less good at amplifying certain messages. But I think it’s resonating with people now, and part of the reason is because we’re still early in the cloud journey. I don’t know which analyst reports to believe, but probably, call it, 10 or 15 percent of IT has now moved to the cloud. And people think it must be more than that because we’re an $88 billion a year revenue business now, and there are other cloud providers as well, and they’re like, “Oh, these are huge businesses. So it must have already happened.” But IT is so huge, several trillion dollars a year of spend, that it’s easy to quickly see that most of the migration has yet to happen.

When you say IT, I’m very curious about that phrase in particular. For most people listening to this show, you say IT, and their brains go to the people who provision their laptops, or fix a broken mouse, or the printers that don’t work. You’re talking about IT in a much different capacity: “I’m starting a business, that business is on the internet. In order to run that business, I need to run some code on a computer. That computer needs to be provisioned, maintained, serviced, upgraded, and Amazon should do that work so you can focus on the code that’s running on the computer.”

That’s exactly right. It used to be that you would have to either have your own data center or rent space in somebody else’s data center. You’d have to have physical servers in that data center. You had to have networking into that data center. And then there was a bunch of software on those physical servers, whether it’s databases, storage software, applications like websites, or genomics analysis, or financial Monte Carlo simulations, whatever it is — you run all that software on that stack that you’d built. The first big revolution of the cloud is AWS basically replaced that. So now, you just brought your own applications, like those Monte Carlo simulations or your pharmaceutical compound analysis or whatever it is, and you just ran those up there somewhere. And that somewhere is the cloud, which is why it kind of came to be known as the cloud — because you didn’t really care where it happened.

And that’s what AWS pioneered. This concept of not having to be bound by all that stuff you’d have to buy, it changes people’s mindsets. And we’ve seen company after company, because we’ve seen it at Amazon, too, but customer after customer, people telling us, “Yeah, we just start spinning stuff up. We spin stuff down, we run experiments. We understand that some of them are going to fail.” And what happens next is that people inside the company get more innovative, so the company becomes more innovative. And if the company becomes more innovative, that’s the same as saying the culture of the company is changing. And so, when I talk about transformation, yeah, I guess maybe it is a code word. It’s a code word for: you reduce the penalties for failure, you increase the ability to innovate, and you actually get more great new breakthrough ideas per person per month than you used to get before. And that’s a cultural change inside of our customers, which they find to be incredibly powerful.

Do you look at the spend of AWS advertising on the NFL, the saturation advertising in airports, and say, this is definitely worth it? Or is that just, “Well, all the other enterprise companies do it, too, so we have to be there.”

We look very carefully at all of our spend. Amazon’s a very frugal company overall. AWS is no different as part of that. We’re a big enough business now that we have many different types of customers, very technical developers, who were our first customers and are still the lifeblood of AWS in many ways. And we also have CEOs of Fortune 50 companies and CIOs of government agencies and everyone in between, so you reach different people in different ways and in different places. And I think we probably do a lot less broad scale advertising per square inch, if you will, of our company than a lot of other companies do.

“So when I talk about transformation, it’s not a buzzword. It is about, for example, a specific concept of reducing the penalties for failure.”

Because I do think it’s very easy to misspend and to waste a lot of money on that. But we do think that with certain messages, with certain media partners, aimed at certain of our customers, that advertising and awareness building is useful. But one thing I’ll say is — airport advertising is just kind of airport advertising — if you look at some of these partnerships we have, they’re not just media spend. They’re very much attached to use cases. So with the NFL, we’re not just advertising with the NFL, we’re innovating with the NFL. So we have this whole series of capabilities around Next Gen Stats, and the NFL is really innovating to provide its customers with incredibly interesting data. We’ve even got chips inside of the footballs now, and the quarterbacks can’t tell. We did tests, and the quarterbacks thought they could tell which footballs had the chips, but in games they can’t.

That’s great.

So we’ve got a separation of the receiver from the defender. We’ve got probability of catch. This is making for a better NFL experience. And then, really importantly, we’re partnering tightly with the NFL on player safety. We’ve got over 300 sensors on the football field and players’ bodies. And we’re looking at things like weather and the field itself and the equipment. And the NFL is going to be using all this analytical capability that we’re helping them build to reduce concussions, reduce knee injuries, and just make it safer for their athletes. Those are the types of stories we want to tell. We don’t just want to say, “Oh, AWS exists.” We want to say, “Here’s what AWS means for you who’s watching this football game. It means you get a better experience. And it means you can understand that these athletes, who are playing a pretty violent sport at times, are going to be safer.”

Those are the types of messages which we think come home and then can say, “Hey, wait, I work at this business, if AWS can do that for the NFL or if AWS can do that for Formula One, let’s think about maybe what it could do for my business.” But again, we try wherever possible to really link it in with the use cases related to that partner.

I’m always so tempted to ask these questions because my Twitter feed or X feed during an NFL game is full of people who already know what AWS is, and they’re all saying, “Who is in the market for AWS who isn’t already aware of AWS?” But it seems like you’re still reaching new people.

If you go back to the idea that maybe the cloud is 10 percent penetrated into IT, then where’s the other 90 percent? And some of it is going from a small percentage to a large percentage in existing customers. Some of it is new customers. Of course, there are countries which are not as far along in their cloud adoption as the US is. And even the US is still early along. So again, I think people look at the size of business that we are and how quickly we’ve grown, and they say, “Well, it must be highly mature. There can’t be much more ahead.” And it’s not true, and that’s because the overall market, again, the segment is just so vast.

It’s still very early days in the cloud. So much of the innovation is ahead of us. I predict we’ll look back in 10 years and say, “Do you remember back in 2023 when everything was so young? It was so early back then. Can you believe X, Y, Z?” I mean, we are nowhere near mature as a segment.

Your general pitch for AWS here is very familiar: “You should not run so many of your own computers. Give that to us. You can scale up and down, that’ll reduce your risk. It’ll make you more innovative.” You could probably glue some version of that pitch to Microsoft Azure or Google Cloud. And I know there are differences, and I do want to talk about them, but that’s the general shape of the big companies.

When you think about the competitive set, where are the disruptive competitors to your cloud business now? Where are the small companies that are doing things you’re not doing in different ways? Do you see that emerging yet? Or are the three giants kind of running over everything?

Well, the first thing I’ll say is that the large cloud service providers are not all the same. That’s really a misconception. And we’ve got robust competition in our segment — as we should, by the way. It’s good for our customers. Frankly, it’s good for us. It makes us better. But we’re not all the same. If you just look at our track records, we are more secure than the other clouds. We do have fewer of the types of problems that you’ve seen reported. And by the way, we’re not cocky about that. Security is not something you ever want to be complacent about, and you never know what’s going to happen tomorrow. But empirically, we’ve just been more secure because of the approaches we take and the level of effort that we put into it.

Secondly, it’s incredibly important that we have absolutely stellar operational excellence and reliability. And again, while perfection is the only goal, we know we will never actually be statistically 100.0 percent perfect. Any time we have a service issue, it’s incredibly painful for us because it’s painful for our customers. But again, empirically, if you look at the third-party measurements, we have the highest uptime of any of the major cloud providers. Others over the past couple of months have had some very notable multiday service disruptions, something that has never happened in the history of AWS, and it’s because we’re architected differently. So we are not all the same. We have the broadest and deepest set of capabilities, and that’s why we are significantly larger than those other cloud providers.

Where’s the competition coming from besides those providers? I think you’re seeing with AI that it’s always just around the corner. You could very easily see some startup who maybe was born recently — or maybe hasn’t even been born yet, so none of us knows about it — come at these problems differently. I personally talk a lot about not wanting to act like an incumbent. We always want to act like an insurgent: incumbents worry about what they have and worry about how to protect it, and insurgents think about what’s possible for customers. How can we possibly delight them in ways that they’re not being delighted today? And let’s go do it, regardless of what it takes. Incumbents manage math and ratios, and insurgents manage either product or customers.

We try to get as many people as possible focused on product and focused on customers, not focused on delegation and managing ratios. You can see with all the innovation and change happening in the AI space that any one of these companies could wake up and decide, “Hey, Amazon — or any of the other big cloud providers — they thought they had a database business, or they thought they had a storage business. But instead, I’m an AI company who does this completely different thing. I’m looking at the world orthogonally.” I choose to be intensely paranoid about the startup who may or may not even yet exist, who’s going to come at a problem differently and solve a problem from a perspective that heaven forbid we be blind to because we have an existing business. I think that is a way bigger existential threat than the big companies that we know about and peer out there and see.

You’ve led me perfectly into the Decoder questions. The first one was actually inspired by talking to Amazon executives over the years, because Amazon does have a very strong set of leadership principles and a very clear decision-making process. You were just gesturing at day one versus day two. It sounds like you’re saying you want to be in the day one mindset, not in the day two protective mindset, but just to ask it directly: what is your decision-making framework? How do you make decisions?

It’s a difficult question to answer in the abstract, but let me give it a shot. You mentioned the Amazon leadership principles, and we have 16 of them, and it’s not plaqueware or HRware. I sometimes call it the operating system of Amazon. We use those leadership principles in hiring. So, if I’m doing an interview loop, I might be assigned “insist on the highest standards” or “think big” or “learn and be curious.” I’ll actually interview for that leadership principle. It really becomes part of the daily vernacular, part of the vocabulary of being at Amazon. They are incredibly important. If I had to pick one, the core of what Amazon truly is, is the leadership principle of customer obsession.

To get to your question, the way I think we, and I certainly am speaking for myself, make decisions is to always work backward from the customer. By the way, people misunderstand — I’ve learned that people mean different things when they say customer focus, or we say customer obsession. And I think a lot of people think that the way you exhibit that is on the emotional scale. You know, “Do I dislike my customers?” We have a certain set of very traditional IT competitors who seem to dislike their customers, as far as I can tell. Or you might like your customers, which is good, or you might love your customers, and people think that’s truly customer focus.

What I’ve learned is that you don’t measure this on the emotional scale, and that truly the most customer-obsessed things you can do are twofold. One, to deeply, deeply understand your customers in ways that most companies don’t take the time to do. Sure, they send out a survey or have a product manager talk to a few customers, but they don’t deeply understand exactly what problems their customers have and exactly what they think of what you’ve built so far.

“Maybe the cloud is 10 percent penetrated into IT, then where’s the other 90 percent?”

And then, the second part, and this is actually harder than the first, is to take that understanding and actually keep it at the center of your decision-making. It’s so easy. You have all this customer understanding, and then when you go to price something, you go, “Oh, well, what’s profit maximizing for me? What do I need?” I’ve just seen, at so many companies, they just routinely kind of park that customer information at the door when they make their most important decisions. “What’s our capital allocation plan for the year?”

We make sure to keep that customer perspective right in the center of the room as we are making decisions. The way we do that is this working backward process. So any time we are going to build something, not just a big new service but even a midsize feature for an existing service, we actually write a press release before a developer starts coding. If we can’t describe in plain language what is delightful and breakthrough about this thing that we want to build for a customer, then why on earth would we waste the time to go build it?

You also iron out all sorts of misunderstandings between teams. Are we building this for developers? Are we building it for IT staff? Is this for line-of-business users? How high in the stack are we going to build it? Is it going to be built on our primitive services? Do we need new technology? So it’s a press release, and then it’s a detailed FAQ behind that press release. And we’ll do that tens or hundreds of times per year. That way we know that we’re about to embark on building something which we at least have a good shot of making remarkable for our customers. That’s probably the center of how we do our decision-making. I think another piece, for me personally, is to hear a lot of voices.

I really like to assimilate a lot of different points of view. I don’t take for granted that the most senior people who are closest to me are always going to have the best ideas or always have the most piercing perspectives on something. When we get in these rooms and review PR FAQs, it’s a written document; everybody reads it. Everybody’s now on the same level playing field. Everybody has the same information, as opposed to a PowerPoint presentation where I dole it out to you a little bit at a time. And so you can get a product marketing specialist who can really communicate about this in a way that maybe somebody else on the team can’t. You learn to kind of listen for those voices in the room and to try and encourage and solicit those voices in the room.

And it’s not a love fest. I mean, there’ll be challenges, and I will push and ask people to justify and defend what they’re saying. But we want to have that clash of ideas, if you will. And that’s very important to me to help us hopefully get to alignment. Or if we can’t get to alignment, at least get to a place where the senior decision-makers are going to be able to make a call with as much knowledge as possible.

Tell me how you arrived at the new size of AWS, the decision that “we’re going to shrink the organization.” That’s your decision, right? Ultimately, that kind of decision comes to the CEO. How did you put that into practice?

First, let me say that any time you’re eliminating roles, it’s incredibly painful. It’s people’s lives that you’re dealing with, and their livelihoods, and it involves their families, needless to say. We take it very seriously, and we understand the impact that it has on people. So I don’t want to minimize that in any way, shape, or form. AWS has grown extremely rapidly in the number of employees that we have on board. If you just look at 2020 to, I’ll say, the end of 2022, AWS added many tens of thousands of employees. And then, earlier this year as we just looked at the overall economic uncertainty, the macroeconomic climate, and also our real desire to get ourselves focused on our most important priorities, we did end up doing a small single-digit percentage layoff or reduction of roles. Not to minimize in terms of any one human being, but it was small as a percentage of our overall workforce.

We basically made the decision in terms of: how can we get more efficient while at the same time being confident that we still have tons of innovative potential kind of sitting here in our four walls? We’ve tried to get increasingly clear on what our real priorities are. We’ve done so many things for so long as we’ve grown so quickly. I think, any time you’re in that situation, it really pays to every once in a while take a step back and say, “What are our top priorities? What are the services, or the countries in the world, which matter most to customers?”

And in many cases, we just moved people and moved teams around to get focused this year on our most important priorities — not because the nice-to-haves aren’t nice to have or not because they’re bad ideas, but just because we decided to focus on the top things. In a minority of cases, it meant that we just didn’t have enough of the skill sets that we needed for a particular area, in which case we would say, “Hey, this thing over here is not our highest priority right now. And those skill sets are not really transferable.” So unfortunately, those will be some roles that we’ll eliminate, and then we’ll go net hire. We still have open positions we’re hiring for in the areas that are highest priority to us where we don’t have the existing skills on staff.

You have perfectly led into the other classic Decoder question. AWS is a big thing. It is hard to change. A lot of people rely on it. You’re talking about restructuring it, redoing the priorities. How is AWS structured now?

On our product side, we’ve always been — and continue to be — very decentralized. We optimize for innovation and for speed. Speed is one of these unsung heroes of business. I think people dramatically underestimate the importance and the power of speed. And they’re also very fatalistic about it. I hear customers all the time saying, “We’re just not a fast company. We’re not capable of being nimble.” And I tell them, speed is a choice. Now, what do you mean it’s a choice? Well, you choose how fast you’re going to move, and there’s a bunch of inputs that go into that choice: how you organize, how many and what types of people you have, and what senior leadership insists on from their teams.

There’s a whole series of things that go into being fast. One of them is organization. We’ve chosen to have what we often call separable teams. So we want teams to be as independent as possible. Now, of course, teams do have dependencies on each other, but you can have more, or you can have fewer. And we choose to kind of factor and refactor teams as much as possible so that they’re as autonomous as possible. They own as much of their own fate as possible. Another key concept is to be single-threaded. So if you take an existing successful business and a leader of that business, and you give that person a new project to work on, it almost inevitably gets starved out because they’ve got a revenue stream and a business and operations to keep running, etc.

What we tend to do is, instead, take super successful leaders and move them out of the thing that they’re doing and make them single-threaded, single-minded, on the new thing. That way, it gets 100 percent of their attention. So we really have general managers of many little businesses or product areas where they own both development as well as product management, those types of functions. And when that’s unified under a single leader, they can move way faster than if we had some big, monolithic functional structure.

Now, on the go-to-market side, you don’t want to show up at customers with 200 individual services. So, on the go-to-market side, we’re much more structured around, “Hey, we’re going to have sales and field organizations, and those will be organized either by industry vertical or by geography.” And within those, we’ll have account owners. Then we have various experts that we can bring in for different products or different types of technical topics. But we’ll always sort of have an account owner as the lead into the account so we can present as much as possible one face of AWS.

That structure is unique to Amazon. It has been iterated on massively. There are now books written about it. It is very focused, and it’s very effective. And in the markets AWS in particular is in, you’re the market leader. You invented the category, and your competitors have adopted different approaches, and they’ve had to try different things, but it works there.

Now, I’m looking at AI — a totally nascent market. No one knows how it’s going to work. The only player in AI that’s making money at this point appears to be Nvidia, which is selling chips to everyone, and maybe cloud service providers like AWS and your competitors who are selling capacity to people. On the other end of it, the consumer applications seem really hot, but no one’s making any money yet. So the market just hasn’t developed a set of cost structures that makes sense. Are you applying the same approaches to how you’re organized for AI? Or are you saying we might have to be more flexible here as the market develops?

Oh, I think our basic approach is flexible. We have the ability to go create whatever team to focus on whatever thing needs to get built; that’s much more flexible than a monolith. AI is fundamental. There’s a reason for all of the hype. I definitely believe that virtually every application that we interact with, whether it’s professionally or in our personal lives, will be significantly disrupted and, in many cases, reinvented with AI. I wouldn’t confuse that for knowing how it’s going to play out, which is what you alluded to. We’re about three steps into a 10K race. And people want to say, “Well, which runner’s in front?” And it’s really not a relevant question.

Much more relevant questions are: What’s the course look like? Who are the spectators and participants? Where’s the finish line? We’re really focused on understanding as much as possible what the early things are the customers need built and how to set ourselves up to deliver that for them. Just like in 1996, if you and I had sat around and talked about the internet, and somebody had said, “Well, who’s the internet company going to be?” It’s sort of a silly question in hindsight. There’s not an internet company.

I think with generative AI being as fundamental as it’s going to be, there’s not going to be a single generative AI company. AI is not this separate thing. It is intrinsically bound up with the cloud. Now, why do I say that? Well, for one thing, you need a data strategy for AI to work for you at all. Whether you’re talking about serving education better, serving financial services clients better, whether you’re talking about drug discovery, whether you’re talking about media, asset creation, you have to know what data you have. You’ve got to know what data you want to take and have that as inputs into your generative AI.

The companies that have been working on their data platform inside of AWS for a long time have a huge advantage in being able to take that and say, “Okay, now this thing I want to build — a customer service chatbot or whatever it may be — I can do that way better because my data knows how to feed itself into there.” The modern data platform is in the cloud. It is on AWS. That’s a powerful example of how the data in the cloud and generative AI are bound to one another.

The other reason for this is, generative AI is not cheap. It is currently very expensive. GPUs are very performant, but they’re also quite expensive. To train models, for example, is incredibly expensive. And then, to run inference or run models and do queries in production on these models is also very expensive. In order for those tasks to be possible economically, you need the cloud. The vast majority of companies will need companies like AWS innovating to drive down cost dramatically over time in order to drive the exponential increases in volume that we will inevitably want to see around the use of generative AI.

While we certainly are one of the largest, maybe the largest, GPU-based hosts in the world and have a great relationship with Nvidia, who you mentioned, we also innovate and design our own silicon, our own chips. We’ve got general-purpose chips, which are already in their third generation, but we also have specific chips for AI and machine learning: Trainium for training models and then Inferentia for running models in production. Those are doing really well, growing quickly. I’m highly confident that they’re going to have the best price performance of any chip technology for doing AI. And that’s going to be incredibly important for startups like Cohere, Anthropic, Stability AI, and Hugging Face, who are building models.

It’s going to be incredibly important for the established companies that we’re already working with on AI, like Travelers and Ryanair and Bridgewater Associates, who are going to need the economics to work as well. So cloud and AI are not two different things. They’re really just two of the many faces of the same thing. And therefore, I think our kind of organizational model will work very similarly. We’ve set up specific targeted teams to build Amazon Bedrock, specific teams to build our own Amazon foundation models, the Titan models. We’re building a specific team that works on CodeWhisperer, which is our coding companion, etc.

Two things. One, my producer Kate promised me that you would say “three steps into a 10K race.” I just want to shout out Kate. It’s a good metaphor. I like it. It leads to some natural questions here. So, it’s a race. It seems like you don’t think there’s a finish to the race. The end state of the race is not that Amazon is crowned the winner; it’s that these models, and AI generally, infuse the next version of business, the next version of tools that all of us use, and everyone’s sort of competing. Is that how you think the race ends?

Congratulations to Kate. She nailed it. The race never ends for any of us in business. You’re only as good as what you’ve done for customers today. AWS obviously pioneered cloud computing. We launched our first cloud service that we have today, S3, our storage service, in 2006. We say we’re sort of 17 years old. By revenue, we’re the largest by a good margin. I don’t know if it’s true or not, but I’ve seen stats published saying we’re maybe twice as big as the next closest competitor. But we have very robust competition, and we’re just getting started, and we’re no better than what we’re delivering to customers today. So the race is perpetual. It’s an infinite loop.

That next biggest competitor, that’s Azure. They are paying a lot of money for exclusive access to OpenAI and OpenAI’s models. If you want to use GPT-4, you are signing Azure contracts. There are a lot of startups in this world that are wrappers around GPT-4, and they’re on Azure. In contrast, you just announced at AWS Summit that Anthropic, Stability AI, and Cohere are being added to Amazon Bedrock as a sort of library of models. So you have many more models to offer, some of them more tailored in other dimensions. Is that how we should think about this competition? There’s exclusive access to the one model that seems to have captured everyone’s imagination, and then there’s much more flexibility on the Amazon side.

We say we’re customer obsessed and we work backward from customers. So let’s answer this question by laying out three things that we think are fundamental to deliver to customers — doesn’t matter who you are. The first is choice and flexibility. I find it a preposterous proposition to think that there’s going to be one model to rule them all, kind of back to the internet analogy, and I think any one company’s going to need many models for different use cases. Because it turns out that the same model is not actually good for the five or 10 or 50 use cases that I could lay out for you right now inside of one company, never mind the fact that there are thousands or tens of thousands or millions of companies who are going to need these things.

To me, it is obvious that there has to be a lot of choice. And so, we want to enable that choice. Furthermore, it’s so early in generative AI, it’s not even day one. It’s like day 0.1. There is infinitely more that we don’t know than what we do know. It’s really important for customers to be able to experiment. So that’s No. 1: choice and flexibility. No. 2, for any established company, especially for enterprises and government entities, is that you have to have security. Your security and privacy don’t get to go out the window. And it’s been kind of amazing to me that some of the early, most well-known entrants in this space kind of started just by throwing some stuff out there.

There was no security model, and your data did go out over the internet. And any improvements you made algorithmically to the models would go back into the mothership and would benefit your competitors. Then they came back and said, “Oh, wait a minute, there’s going to be a V2, and that’ll be the secure version of this.” This is really important to me because security is not only about features; it’s about a philosophy and a way of operating. If I came to a big automotive company or a big bank and I said, “Hey, I’ve got a new database, and it’s really cool. It’s got great functionality. Now, it’s not secure like everything else is for me, but don’t worry, I’ll make the next version the secure version.” I mean, they would throw me out on my you know what. As they should, by the way.

This is why I’ve talked to at least 10 Fortune 500 CIOs who banned ChatGPT from their enterprises. And again, now they’re circling around. It’s like, “Well, wait a minute, okay, wait, there’s going to be a different model and so forth.” But you have to ask yourself who’s really going to take security seriously here. Then, the third thing we alluded to before, which is data. And your data strategy is part of your generative AI strategy; they’re not two separate orthogonal things.

So how does Amazon, how does AWS, think about each of those three things? Amazon Bedrock, that’s our managed service for running generative AI models. Amazon is building its own models. We’ve been doing AI since 1998. Personalization on the Amazon website is AI. In 2017, we launched SageMaker, which is the largest machine learning platform in the world. We have over 100,000 customers doing machine learning on SageMaker. Then, if you want to talk specifically about generative AI and foundation models, Amazon has foundation models that have been running in production for a couple years now. Parts of the retail website’s search are powered by large language models. And if you look at Alexa, a lot of Alexa’s voice responses are powered by LLMs. We’ve got a lot of expertise in this area, and we’re kind of pivoting it specifically to generative AI.

“Speed is one of these unsung heroes of business. I think people dramatically underestimate the importance and the power of speed.”

But we’re building our own models — expanding the ones we have and building some new models. Those will be under the Titan brand. And these Titan models will come out later this year. We think they’ll be great, and they’ll be really powerful for a bunch of customers to use. But again, no one model to rule them all. We also have a great partnership with Anthropic, whose models are in there, and with Stability AI, who does models for generating images. Cohere just joined Bedrock, as did AI21, and there will be others over time. It’s all a consistent API set. So it’s very easy for customers to have the same kind of harness or framework on their side, and then they just kind of call via API the model that they want to use.

Our approach is to provide easy experimentation and very wide choice. So that’s the first concept.
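As an illustration of that consistent API surface, here is a minimal sketch using boto3’s Bedrock runtime client. The model IDs and request payload shapes are illustrative (each model family defines its own), and nothing below comes from the interview itself:

```python
import json

import boto3

# One client, one call shape, many models: the customer-side harness stays
# the same, and only the model ID and model-specific payload change.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def invoke(model_id: str, payload: dict) -> dict:
    """Send a model-specific JSON payload through the shared InvokeModel API."""
    response = bedrock.invoke_model(
        modelId=model_id,
        body=json.dumps(payload),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())

# Amazon's Titan text model (illustrative ID and payload shape).
titan_out = invoke("amazon.titan-text-express-v1",
                   {"inputText": "Summarize our Q3 support tickets."})

# Anthropic's Claude through the same harness (illustrative ID and payload).
claude_out = invoke("anthropic.claude-v2",
                    {"prompt": "\n\nHuman: Summarize our Q3 support tickets.\n\nAssistant:",
                     "max_tokens_to_sample": 300})
```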

The second one is security. If you use any of the models in Amazon Bedrock, it’s in the same isolated private environment that all of your other AWS resources are in. We call it a VPC, or virtual private cloud. So it’s all encrypted, and none of it goes out over the public internet. If you want to use one of these models, we basically instantiate that model inside of your own virtual private cloud. And now it’s only operating there. So if you make algorithmic improvements to the model, they’re not going back to the mothership to benefit your competitors. That’s really important.
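A hedged sketch of the plumbing behind that claim: keeping Bedrock traffic inside your VPC is done with an interface VPC endpoint (AWS PrivateLink). The IDs below are placeholders, and the service name follows the documented pattern for Bedrock’s runtime endpoint rather than anything stated in the interview:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint so calls to the Bedrock runtime API resolve to
# private IPs inside your VPC instead of traversing the public internet.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    PrivateDnsEnabled=True,  # make the standard API hostname resolve privately
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```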

And then, thirdly, we already talked about the data platform and how so many customers have their data platforms really running on AWS, and those customers need us to have a great generative AI set of capabilities because they know that where their data is, they have to have their generative AI as well. So that’s how we’re thinking about the capabilities that customers need us to build.

So that’s a lot of capabilities, but right now, the bottleneck is very much in the hardware. One of the things that strikes me in this conversation is how much you manage that on behalf of AWS customers. There are data centers, they’re large, they’re full of computers and networking equipment, and now, they are full of very hard-to-get Nvidia chips: A100s, H100s.

The H100s are apparently impossible to get. You can barely even use them at any of the cloud providers. And you’ve got your own chips. My colleagues have told me stories about startups basically needing an inside connection at AWS to get their AI applications online because the chip bottleneck is so severe. Is the first problem you have to solve here that your chips need to be competitive or at least on par with Nvidia’s chips? Or is it: we just have to buy more Nvidia chips?

Well, I think everybody in the world wishes there were more chips capable of running these AI workloads. No matter who you are, you wish there were more. I think it’s not controversial at all to say that, at least in the short term, demand is outstripping supply, and that’s true for everybody.

Is that something that you were actively working on? There’s a part of this conversation where we’ve talked a lot about bits, and then there’s a part of the AI conversation with you in particular where it’s atoms. There just aren’t enough computers or chips in the world to address the market opportunity in AI. Is that the issue?

Well, there are a couple things; it’s a little more complex than that. We run a ton of Nvidia GPU capacity — again, we’re one of the largest, maybe the largest, hosts of that in the world. Customers are snapping them up in what we call our P5 instances, and customers are now running those P5 instances in production. And we’re going to be bringing in a lot more of that capacity over the coming weeks and months for sure. And we’ll continue to host a ton of GPU-based capacity.

We’ll be a very substantial cloud host of that. In addition, we think it is so important that the supply be there for what customers need, and that it be price performant, of course, and energy-efficient, that we have our own chips for this that we design, not GPUs. So we talked a little bit earlier about our Trainium chip, and Trainium is already out in the market. Trainium 1 has been out there for a good amount of time. You can imagine there might be future versions of Trainium as well.

You put that number in the name. It’s a pretty easy guess that there might be the next number.

“You can’t just have one supply chain that the whole world relies on ... unexpected things happen”

Yeah, exactly. Trainium provides fantastic price performance compared to any other alternative on the market for a whole bunch of machine learning use cases. And it’s only going to improve and improve. The same goes for all of our chips: our Graviton3 chip, for example, is 60 percent more energy-efficient than the equivalent x86-based chips. Similarly, for our Trainium and Inferentia chips for machine learning and AI, we’re going to be very, very energy-conscious there, which really matters a lot to our customers these days. So I think it’s incredibly useful and necessary for our customers that AWS be able to provide them a whole separate supply chain. I mean, you can’t just have one supply chain that the whole world relies on, where there could be all sorts of shortages and unexpected things could happen.

Do you foresee a world in which you pick a model and that model is paired with some sort of proprietary Amazon chip and that becomes the differentiator?

It’s a great question. I would say that a lot of the models will choose to run on more than one chip, and there’ll be good reasons for them to do so. But I do think that you’ll see certain model providers really getting very close to folks like AWS and saying, “Hey, let’s optimize together. Let’s make sure that this model both drives improvements in the chips as well as takes advantage of the unique characteristics of that chip.” And so, they may choose to disproportionately — or maybe in some cases exclusively — focus on one chip because there are significant advantages to that focus. But I’m also quite sure that you’ll see a lot of models running on a lot of different chips.

So that’s the chip side. I think a huge advantage we provide to our customers, particularly the folks building models, is this whole separate supply chain, this whole separate set of capabilities of the Amazon-designed chips.

The other thing you said, what’s the constraint? So I think chips are a big one. Another big one is power. I think it’s pretty well known that, in a lot of important locations where there’s a lot of compute capacity around the world, the demand is just growing so rapidly that it’s unclear as to when there will be enough power in those locations to power those data centers and those servers and those chips.

We’re very thoughtfully but aggressively building out new capacity around the world in places that we think have a really good runway with abundant power — clean energy — because we are going to be 100 percent renewable energy across the whole company by 2025, which is just around the corner. We’re 90 percent renewable energy-powered today. So I think building out that whole chip supply chain and then building out power and data center capacity in places around the world, inside the US as well as other countries where it really makes sense, where you really have runway, is going to be key to our providing the supply that all of these customers that you alluded to are indeed demanding.

You head up sustainability across Amazon as well, and I do want to come back to that and talk about it. I just have a couple more questions on the AI field as we see it today. One, you’ve mentioned several times now training data going out on the open web. Other companies are getting themselves in trouble doing that. Maybe some privacy concerns, maybe some security concerns there.

Amazon is very big; it is just a very big company. It has its fingers in a lot of things. Most notably, it runs a gigantic movie studio and streaming service that is tied up in a lot of questions about AI and the arts and copyright law and all of those things. AWS is where some of that data is hosted. It’s where some of these models are trained. Do you think about that as the infrastructure provider? There’s a set of copyright law questions about fair use that are coming, and maybe Stability is going to get in trouble with Getty. Or maybe Anthropic is going to get in trouble because of Reddit data scraping, or whatever might happen in the future. And we, as the underlying provider, have some responsibility to mediate that because what we do over here with AWS might get Prime Video in trouble over there with the actors and the writers.

AWS thinks a lot about responsible AI and privacy and all of the ethical as well as regulatory and legislative issues that are very appropriately being discussed. At AWS, we’re not thinking about anything uniquely because of Prime Video or any of our other internal customers. Just like with everything else we do at AWS, Amazon is a great customer, a very large customer, a sophisticated customer who is often a great bellwether for where other sophisticated enterprises are going to go. But they don’t get special treatment.

I’m asking not in terms of Amazon proper. I’m saying Amazon, as a company, makes art, which is remarkable for a tech company of Amazon’s scale; it is fully invested in making art. A thing that is causing turmoil in the creative community is generative AI: who gets the data, who owns the data, whether it’s fair use to train on the data. And then, on the other side of Amazon, you are making the tools that enable that to happen. And I’m just wondering for you personally, as the person in charge of those tools, if you’ve ever pumped the brakes and said, “We don’t know the answers to these questions. One, we might just be getting our friends at Prime Video in trouble. And two, maybe more importantly, more directly, we might be entering a world of liability here because we’ve enabled Stability to go out and train on Getty’s images.”

We’re not pumping the brakes, but we’re working very hard on all of these issues. It is very early into a very enormous and complicated set of issues here. It’s not going to be solved overnight, but it’s really important to be working on them now. What does working on them mean? So there’s the stuff which we actually control.

By the way, we’re not going to solve this ourselves. We’re going to try and be a leading voice in all of this, but it is intrinsically true that we cannot solve this ourselves. We interact with different pieces differently, so we build our own models, right? We talked about the Titan models that Amazon’s building, and we’re taking responsible AI very seriously in terms of the data that’s used to train the models. Things like toxicity. Accuracy is a really important one because there’s been all this appropriate talk about hallucinations in models — basically models giving you results which are not true or they’re made up, but they look like they’re true.

Hallucination is a delightful word for “it lies to you,” by the way.

I know. It does seem a bit of a euphemism.

We have softened it quite a bit.

We’re putting a ton of work into minimizing the amount of hallucination that can take place in our models and also having various methods of cross-checking, so the model can tell if it’s essentially making up stuff in an effort to be quote-unquote helpful to you. So I think that the models are going to have some real innovation around accuracy, around toxicity, and around appropriate training. A lot of that’s going to, in a positive way, launch itself into Amazon Bedrock. So we’ll have that with the Titan models but also with the other model providers. We’re producing things called service cards. There’s been a lot of talk about wanting to have visibility and transparency into what’s going into these models and who trained them and what kind of data is used.

So we’re producing these service cards for each model, and we’re going to hopefully have those for all of the models inside of Bedrock, where they’re going to provide essentially basic information on what that model is and, at least at a high level, what kind of data was used to train it and what the intended uses and limitations of it are. I don’t want to make out like that’s going to solve the transparency problem or the visibility problem, but it’s a step, at least in 2023, that we are taking in the right direction. Then, to go to the other end of the spectrum, from something that we don’t control but where I think we need to be one of a number of important voices, is the legislative and regulatory side of things.

I was just at the White House with the president and his chief of staff when President Biden announced a week and a half ago these voluntary commitments around AI. I think seven companies were there saying essentially, “Yeah, we’re going to voluntarily commit to these principles that the administration laid out around responsible AI.” I think it’s very important that there needs to be national leadership. I think the US government, I think other national governments around the world, need to lead on this. But it’s also very important that the leading practitioners and the people building this be in the conversation. We’re spending a lot of time with the administration, with members of Congress, with those equivalent types of bodies in Europe and other countries in the world.

“There’s not going to be a single generative AI company. AI is not this separate thing. It is intrinsically bound up with the cloud.”

Do you think of this as more of an app store model for you? For example, Meta just quote-unquote “open sourced” their model. You could theoretically go run it on AWS–

Not only theoretically; today you can. The day they announced it, it became available in Amazon’s SageMaker JumpStart, which is essentially the private marketplace we have where you can just instantiate the model and run it yourself.
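For a sense of what “instantiate the model and run it yourself” looks like in practice, here is a minimal sketch with the SageMaker Python SDK’s JumpStart interface. The model ID reflects how Llama 2 appears in JumpStart, but treat the details as illustrative; gated models also require accepting Meta’s license, and the exact EULA mechanics vary by SDK version:

```python
from sagemaker.jumpstart.model import JumpStartModel

# Pick the JumpStart catalog entry for the model (illustrative ID).
model = JumpStartModel(model_id="meta-textgeneration-llama-2-7b")

# Deploying provisions a real, billable endpoint in your own AWS account;
# accept_eula acknowledges Meta's license for the gated weights.
predictor = model.deploy(accept_eula=True)

response = predictor.predict(
    {"inputs": "Explain elastic compute in one sentence."}
)
print(response)

# Tear the endpoint down when the experiment is over to stop the spend.
predictor.delete_endpoint()
```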

So did you check to make sure Meta’s model is not going to hallucinate at a higher rate than everyone else’s? Or the Biden administration has asked for these restrictions, and we’re going to make sure–

We can’t do that. We don’t own that model. So if customers come and say, “Hey, SageMaker is this great machine learning platform; we want to run LLaMA in SageMaker,” we’re not going to say no, and we’re not going to go become the world experts in LLaMA. We’re going to certainly be part of this conversation about if model X is going to live in the world, what requirements is the world going to impose on model X to say that it’s safe for use? But we couldn’t, nor would we want to, try and police the entire world’s models.

We’re going to be responsible for our models, and I think Amazon Bedrock and other services like that are going to help to put an effective harness around a lot of this stuff. But at the end of the day, model providers need to be responsible. Governments need to decide how much they want to legislate their being responsible. And there needs to be visibility provided so that potential customers can decide if those are good models for them.

I think we could probably do another hour on where the regulation should target the effort, right? If you pass a law saying the model can’t do X, we have to figure out who’s going to enforce that. And one answer is infrastructure providers like AWS, right? For you to say, “This is what the cloud can’t do.” And maybe you can’t say, “This is what a MacBook can’t do.” That seems almost impossible to enforce. But it would be possible to come to you and say, “Okay, at scale, we’re definitely not allowing X to happen.”

Well, look, we’ve got an acceptable use policy. We alter it when we decide some important things come along. We don’t do it often, but it’s an evolving thing. We enforce it, and that includes AI. So if we need to change it tomorrow for something related to AI, we’ll change it tomorrow. But that’s very much enforced. So we’ve kind of got a way that we say, “Here’s acceptable use of all of AWS.” And AI will have unique characteristics, but it’s not intrinsically different. But I think that governments will decide, “Hey, models of a certain size or complexity…”

People are talking about frontier models. Maybe we have to make sure that they’re independently tested, Red Team tested, for toxicity and things like that. And we take stuff like that very seriously. Just for example, CodeWhisperer is this fantastic coding companion that we’ve built. It’s generally available, and it’s being adopted very rapidly. You type in words, and it gives you back code. It’s amazing. But we built in the automatic ability for the model to tell you, if you’re using things like open-source code, what the licensing terms and governance around that open-source code are, as well as to filter out anything to do with toxicity. And so, for the services we control, we’re taking it super seriously and trying to build in these controls, which are not only ethically important but also, in many cases, legally important for our customers.

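[Editor’s note: a hypothetical illustration of the comment-to-code workflow Selipsky describes; neither the prompt comment nor the completion is actual CodeWhisperer output.]

```python
# In the IDE, a developer types an intent comment like the one below,
# and the coding companion suggests an implementation inline.

# Prompt a developer might write:
# "read a JSON config file and return it as a dict"

import json

def load_config(path: str) -> dict:
    """Suggested completion: load and parse a JSON file."""
    with open(path) as f:
        return json.load(f)

# Alongside suggestions, the reference tracking Selipsky mentions can flag
# completions that resemble open-source training data and surface the
# license, so the developer can decide whether to accept them.
```
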
Let’s wrap up with sustainability. I really could do another hour with you on how you think about AWS’s responsibility around AI, and I think we probably should do that other hour very soon. But let’s wrap up with sustainability because it is part of your portfolio at Amazon. It’s also, I think, a challenge for you as you try to scale AI because, outside of blockchain, which I’ll just set aside, there hasn’t been a thing in quite a while that has said, “Okay, we should use vastly more compute.” Right? AI is that thing, it has real utility, and everybody can see it. And at the same time, the sustainability isn’t there. The price-performance curve has not come down; we’re just running GPUs as hot as we can. How do you see that coming down? Is it more specialized chips? Is it producing more renewable energy for the data centers and letting it rip? Where’s the balance there?

Well, there’s going to be tremendous demand from generative AI applications building and running models, essentially. I think it’s incredibly important that that be done in a really energy-efficient manner. So we’re really focused on the energy efficiency of all of AWS.

“It’s so early, it’s not even day one. It’s like day 0.1 in generative AI.”

For three years in a row, we have been the largest corporate purchaser of renewable energy in the world. And when these projects that we’ve already contracted come online, there’ll be enough to power, I think, over 3.5 million US households annually. Running these workloads in the cloud is going to be way more energy-efficient than companies trying to run the stuff themselves. So I think when customers say, “Hey, how can you help us be energy-efficient? How can you help us be sustainable?” Well, you could do that tomorrow by moving to the cloud.

We’ve seen that many enterprises could achieve an 80 percent improvement in energy efficiency, and therefore sustainability, by moving to the cloud. If you look at Trainium, for example, it delivers up to 29 percent better energy use than comparable alternatives. So the technology we use matters, and so does the fact that our data centers are much more highly utilized because we’re large; economies of scale and that type of thing mean the service runs very efficiently, which is incredibly important. And we’ve developed an immense capability to purchase renewable energy and to participate in and fund wind and solar projects around the world.

There are two concepts in here that I just want to peel apart a little bit. There’s energy efficiency — doing more with what you have — and there’s increasing renewable energy massively. Is there a balance in your head? We need more energy to run all these GPUs, and we also need to increase the sort of performance per watt of the chips we’re running now with things like Trainium.

Absolutely. Yeah. Trainium performance per watt is just incredible. I think that a lot of this will be solved technologically. So things like Trainium, which AWS has developed, I’m confident it’s going to be the most energy-efficient solution for running generative AI.

That’s where your focus is? On the balance there?

Our focus is wherever our customers need it to be.

[laughter]

No, seriously. We have a lot of customers who are consuming GPUs, and tomorrow, we’re going to have a lot more customers who want to consume GPUs. So yeah, we are the best place in the world for running Nvidia GPUs. Our P5 instances absolutely rock. In addition, there are going to be a ton of customers who want the innovation, the energy efficiency, and the price performance for their use cases that we’ll have on our Trainium and Inferentia chips. It’s not an “or”; it’s an “and.” We’re committed to providing the choice, and those will both be huge sets of demand, huge sets of use cases.

It’s really not an either-or, but that’s just one example of the technology choices we’ll make to drive energy efficiency. For example, we’re always looking at how we can have our servers be more highly utilized. You want to keep pushing on every tenth of a percent, every percentage point, that we can get more highly utilized; getting closer to 100 percent utilization is a huge energy savings. There are many other examples of where we try to be more energy-efficient. Then, as you said, we’re always going to consume energy, so that energy has to be renewable, and we’re going to be at 100 percent renewable energy by 2025. We are causing renewable energy to happen. We are investing in long-term, 15-year projects that developers are building around wind and solar, a lot of them groundbreaking.

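[Editor’s note: a back-of-envelope sketch of why utilization matters so much for energy, using made-up power figures rather than AWS numbers. Servers draw significant power even when idle, so the energy cost per unit of useful work falls steeply as utilization rises.]

```python
# Illustrative numbers only: assume a server draws substantial power at
# idle and scales linearly up to its peak draw at full load.
IDLE_WATTS = 100.0   # assumed draw at 0% utilization
PEAK_WATTS = 400.0   # assumed draw at 100% utilization

def energy_per_unit_work(utilization: float) -> float:
    """Watts consumed per unit of useful work at a given utilization,
    treating useful work as proportional to utilization."""
    power = IDLE_WATTS + (PEAK_WATTS - IDLE_WATTS) * utilization
    return power / utilization

for u in (0.25, 0.50, 0.90):
    print(f"{u:.0%} utilization -> {energy_per_unit_work(u):.0f} W per unit of work")
# Under these assumptions: 25% -> 700 W, 50% -> 500 W, 90% -> ~411 W,
# which is why squeezing out every percentage point of utilization adds up.
```
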
We did the first offshore wind project ever in Japan, in partnership with Mitsubishi and other Japanese partners. So really groundbreaking stuff. We’re going to continue to try and help the world drive toward just having more renewable energy available. And that’s not a race against other companies; that’s a race against the thermometer. I mean, global warming is the challenge of our generation. I truly believe that. That’s why Amazon made a very public pledge to be net-zero carbon across all of Amazon by 2040, which is 10 years ahead of the Paris accord. It’s a hard, daunting challenge. I know how we’re going to do it in renewable energy. A lot of other parts of the company, I’ll be the first to say it, we don’t know how we’re going to do it.

There needs to be science happening before we can hit the targets in some areas, but that’s all the more reason to set an audacious goal, and all the more reason to make it public. We’ve had over 420 other companies join us in the Climate Pledge now — a lot of big organizations, big companies — and it’s about collaborating together. It’s about getting NGOs and governments involved. Amazon’s important, but no matter what we do, obviously, we’re not going to solve that problem ourselves. So what we want to do is catalyze and inspire others to join us. And I hope they actually outdo us. Please out-innovate us. That’s the best thing that could happen.

That’s amazing. Well, Adam, I think that’s a perfect place to end it. This has been an amazing conversation. Thank you so much for the extra time. We’re going to have to have you back soon. Thank you so much.

Great. Thank you. I enjoyed it. It was a really fun conversation.

Correction: An earlier version of this interview incorrectly stated that Adam became CEO of AWS in 2019 rather than in 2021. We regret the error.
