Part Two: Generative AI, Investing in the 21st Century Megatrend

Feature Stories | May 09 2024

A Megatrend like Generative AI creates an appealing top-down narrative, but how do investors gauge the scope and size of the Generative AI market, and weigh the opportunities against the risks?

- Enablers versus Adopters 
- How BIG is the Generative AI Megatrend?
- Limitations to growth
- The Edge and Software growth levers

By Danielle Ecuyer

The story below is the second installment in a series on Generative AI. The first installment was published on 2 May 2024.

Breaking Gen AI down into “Enablers” versus “Adopters” 

Citi’s report Unleashing AI: The AI Arms Race breaks down Gen AI into two major “Technology Value Stacks”, referred to as the “enablers” and the “adopters”.

The enablers -- Silicon (semiconductors and chips); Infrastructure (data centres and hyperscalers, i.e. multiple connected data centres); Models; and Software Applications and Services (automation) -- provide the infrastructure that allows GenAI (large language models and the compute behind them) to work, and which is in turn transferable across multiple industries and sectors, the adopters.

Citi rates the impact across sectors as Financials and Fin-tech at the top of the stack, followed by Consumers, Healthcare, Industrial and Mobility down to Natural Resources and Climate Tech.

The broker separates Tech and Communications, as these sectors are both enablers and adopters (more on this in the Edge section).

In an ideal world the adopters employ GenAI to generate better processes and outcomes in terms of productivity/efficiency gains and improved client/service experience. Ultimately the success of GenAI will depend on the successful monetisation of the investment.

This means translating the potential size of the future market into present-day value will vary depending upon the scale of the roll-out by the enablers and the success of the adopters.

Citi highlights GenAI is the “latest inflection point” of artificial intelligence and emphasises the take-up rate of ChatGPT was the fastest in history.

It took ChatGPT only 2 months to reach 100m users, against 9 months for TikTok, 2 years for the Apple App Store, 2.5 years for Instagram, 3.5 years for WhatsApp, 4.5 years for Facebook, 5 years for Twitter, 6.5 years for iTunes and 7 years for the World Wide Web.

Equally, GenAI is being used across the globe. 

Just how BIG is the Generative AI Megatrend?

The Bloomberg Intelligence report “Generative AI to become a $1.3trillion Market by 2032, Research Finds” states the GenAI market could compound at a 42% annual growth rate from some US$40bn in 2022 to US$1.3trn by 2032. 
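As a quick sanity check, the quoted growth rate can be reproduced from the report's endpoints (a rough sketch in Python; the US$40bn base, US$1.3trn target and 42% figure are from the Bloomberg Intelligence report above):

```python
# Sanity-check of the Bloomberg Intelligence projection quoted above.
base = 40e9       # estimated GenAI market, 2022 (US$)
target = 1.3e12   # projected GenAI market, 2032 (US$)
years = 10

# Implied compound annual growth rate: (target / base) ** (1 / years) - 1
cagr = (target / base) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~41.6%, consistent with the quoted 42%
```

The implied rate of roughly 41.6% per annum rounds to the 42% figure cited in the report.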

Infrastructure as a service (enablers) to train the large language models is expected to be the largest component at US$242bn.

McKinsey & Company’s report “The economic potential of generative AI: The next productivity frontier” explains the new technology could add between US$2.6trn and US$4.4trn in economic benefits annually across its use cases. The total economic value is estimated at US$11trn to US$17.7trn.

Measured against national GDPs, GenAI has the potential to rank as the third largest, after the US and China.

The market is moving so swiftly, though, that even 2023’s estimates are being blown away. Beth Kindig, I/O Fund CEO and Lead Tech Analyst, has a long track record in this space and recently noted on the Real Vision podcast “Unlocking the AI Megatrend” that McKinsey & Co has since upgraded the GDP impact from GenAI to US$25trn.

Kindig places this number in perspective by comparing it to the impact of mobile technology on GDP, which at the upper end is circa US$5trn, including hardware, apps and services, etc. 

McKinsey's updated projection is five times larger and helps wrap some numerical scale around the potential size of the GenAI market impact. Kindig explains that when the technology is interlinked as a problem-solving service and matched with the correct product placement, there is potential for “hockey stick” growth in earnings for the winners.

Sam Altman’s analysis also gives colour to the scale of the technology.

Altman was quoted in Morgan Stanley's report “Tech diffusion and Gen AI”.

"Look, I think compute is going to be the currency of the future. I think it may be the most precious commodity in the world. And I think we should be investing heavily to make a lot more compute… But compute is different, intelligence is going to be more like energy, where the only thing that makes sense to talk about is that at price 'X' the world will use this much compute and at price 'Y' the world will use this much compute. Because if it's really cheap, I'll have it reading my email all day, giving me suggestions about what I should think about or work on and trying to cure cancer. And if it's really expensive, maybe we'll only use it to try and cure cancer.”

Within this context, Morgan Stanley expects high growth in computing to benefit the largest players in the market: those companies with the size, scale and cash flow to fund the necessary investment. Google, Microsoft, Meta, and Amazon exemplified this investment horizon in their latest quarterly results (more on this in Part Three).

Morgan Stanley expects capital expenditure on data centres will reach US$155bn in 2024 and grow a further 13% in 2025 to US$175bn. The cost of GPUs (graphics processing units) for a 100MW data centre using Nvidia’s B100s or H100s is estimated at circa US$1.5bn.
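The two capex figures are consistent with the quoted growth rate, as a quick check shows (a sketch; the US$155bn, 13% and US$175bn figures are from the Morgan Stanley report above):

```python
# Quick check of Morgan Stanley's data-centre capex figures as quoted above.
capex_2024 = 155e9    # US$155bn forecast for 2024
growth_2025 = 0.13    # a further 13% growth expected in 2025

capex_2025 = capex_2024 * (1 + growth_2025)
print(f"Implied 2025 capex: US${capex_2025 / 1e9:.0f}bn")  # ~US$175bn, matching the report
```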

As noted in a recent Reuters report, Microsoft and OpenAI are in discussions around a US$100bn data centre to house Stargate, a supercomputer for large language models and potentially a new form of compute, referred to as "The Tree of Thoughts".

"The Tree of Thoughts" research by Google DeepMind and Princeton University points to software architecture that would enable computational functions more akin to the human mind’s process of problem solving than current models allow.

Morgan Stanley and Beth Kindig both argue the scale of the compute and data requirements for the LLMs creates formidable barriers to entry, and the hyperscalers (Google, Microsoft, Amazon, and Meta) have a significant advantage through their pre-existing scale and business models.

Nvidia and other chip (GPU) manufacturers such as AMD, Qualcomm, Broadcom, and Intel are integral enablers of the compute across infrastructure as a service and device applications (robots, smartphones, autonomous vehicles, etc).

ASML, the Dutch behemoth producer of lithography machines, and TSMC, the world’s largest chip foundry, are also integral parts of the GenAI enablers.

Obstacles and potential roadblocks to growth: Is Nvidia the canary in the coal mine?

Drilling down into the feasibility of scaling the data centres, it is apparent the affordability and availability of the GPUs are material not only for the developers, but also for suppliers like Nvidia.

Morgan Stanley’s in-depth analysis in the aforementioned report concludes:

“There are risks that NVIDIA’s ability to rapidly lower the cost of compute shrinks the market, but this is not new. Our view is that as long as NVIDIA’s customers are innovating on model architecture to achieve higher levels of performance, we will see greater spending.”

Morgan Stanley highlights the growing cost of development will concentrate demand among fewer, larger customers. The authors explain the scale of GPU demand outside the larger, more competitive enablers -- among the likes of software application companies -- raises the risk of a decline in demand if the monetisation of the investment is not achieved.

In other words, strong upfront demand creates a pull-through of earnings for companies like Nvidia, and the analyst community remains cautious on the durability of demand growth.

By contrast, Beth Kindig applies a different rationale to Nvidia.
