
Part Two: Generative AI, Investing in the 21st Century Megatrend

Feature Stories | May 09 2024

This story features PRO MEDICUS LIMITED and other companies.

A Megatrend like Generative AI creates an appealing top-down narrative, but how can investors gauge the scope and size of the Generative AI market, and weigh the opportunities against the risks?

– Enablers versus Adopters 
– How BIG is the Generative AI Megatrend?
– Limitations to growth
– The Edge and Software growth levers

By Danielle Ecuyer

The story below is the second installment in a series on Generative AI. The first installment was published on 2 May 2024: https://fnarena.com/index.php/2024/05/02/generative-ai-investing-in-the-21st-century-megatrend-part-one/

Breaking Gen AI down into “Enablers” versus “Adopters” 

Citi’s report, Unleashing AI: The AI Arms Race, breaks Gen AI down into two major “Technology Value Stacks”, referred to as the “enablers” and the “adopters”.

The enablers – Silicon (semiconductors and chips); Infrastructure (data centres and hyperscalers, i.e. multiple connected data centres); Models; and Software Applications and Services (automation) – provide the infrastructure that allows GenAI (large language models and compute) to work. These capabilities are in turn transferable across multiple industries and sectors, the adopters.

Citi rates the impact across sectors with Financials and Fintech at the top of the stack, followed by Consumer, Healthcare, Industrial and Mobility, down to Natural Resources and Climate Tech.

The broker separates Tech and Communications, as these sectors are both enablers and adopters (more on this in the Edge section).

In an ideal world the adopters employ GenAI to generate better processes and outcomes in terms of productivity/efficiency gains and an improved client/service experience. Ultimately, the success of GenAI will depend on whether that investment can be monetised.

This means translating the potential size of the future market into present-day estimates will vary depending upon the scale of the enablers’ roll-out and the success of the adopters.

Citi highlights GenAI as the “latest inflection point” in artificial intelligence and emphasises ChatGPT’s take-up rate was the fastest in history.

It took ChatGPT only 2 months to reach 100m users, against 9 months for TikTok, 2 years for the Apple App Store, 2.5 years for Instagram, 3.5 years for WhatsApp, 4.5 years for Facebook, 5 years for Twitter, 6.5 years for iTunes and 7 years for the World Wide Web.

Equally, GenAI is being used across the globe. 

Just how BIG is the Generative AI Megatrend?

The Bloomberg Intelligence report “Generative AI to Become a $1.3 Trillion Market by 2032, Research Finds” states the GenAI market could compound at a 42% annual growth rate, from some US$40bn in 2022 to US$1.3trn by 2032.

Infrastructure-as-a-service (enablers) to train the large language models is expected to be the largest component at US$242bn.
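
As a rough sanity check on those figures, a minimal illustrative calculation (assuming simple annual compounding over the ten years from 2022 to 2032) shows the 42% growth rate and the US$1.3trn end point are consistent:

```python
# Illustrative check of the Bloomberg Intelligence figures:
# US$40bn compounding at 42% a year over the 10 years from 2022 to 2032.
start_market_usd_bn = 40   # estimated GenAI market in 2022
cagr = 0.42                # 42% compound annual growth rate
years = 10                 # 2022 to 2032

end_market_usd_bn = start_market_usd_bn * (1 + cagr) ** years
print(f"Implied 2032 market: ~US${end_market_usd_bn:,.0f}bn")
# Prints roughly US$1,333bn, i.e. about US$1.3trn, consistent with the report.
```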

McKinsey & Company’s report “The economic potential of generative AI: The next productivity frontier” explains the new technology could add between US$2.6trn to US$4.4trn in economic benefits annually for its use cases. The total economic value is estimated at US$11trn to US$17.7trn.

Compared against countries’ GDP, GenAI’s economic impact would potentially rank as the third largest in the world, after the US and China.

The market is moving so swiftly, though, that even 2023’s estimates are being blown away. Beth Kindig, I/O Fund CEO and Lead Tech Analyst, has a long track record in this space and recently noted on the Real Vision podcast “Unlocking the AI Megatrend” that McKinsey & Co has since upgraded the GDP impact from GenAI to US$25trn.

Kindig places this number in perspective by comparing it to the impact of mobile technology on GDP, which at the upper end is circa US$5trn, including hardware, apps and services, etc. 

McKinsey’s updated projection is five times larger and helps wrap some numerical scale around the potential size of the GenAI market impact. Kindig explains when the technology is interlinked as a problem-solving service and matched with the correct product placement, there is “hockey stick” growth potential in earnings for the winners.

Sam Altman’s analysis also gives colour to the scale of the technology.

Altman was quoted in Morgan Stanley’s report “Tech diffusion and Gen AI”.

“Look, I think compute is going to be the currency of the future. I think it may be the most precious commodity in the world. And I think we should be investing heavily to make a lot more compute… But compute is different, intelligence is going to be more like energy, where the only thing that makes sense to talk about is that at price ‘X’ the world will use this much compute and at price ‘Y’ the world will use this much compute. Because if it’s really cheap, I’ll have it reading my email all day, giving me suggestions about what I should think about or work on and trying to cure cancer. And if it’s really expensive, maybe we’ll only use it to try and cure cancer.”

Within this context, Morgan Stanley expects the high growth in computing will benefit the largest players in the market: those companies with the size, scale and cashflow to fund the necessary investment. Google, Microsoft, Meta, and Amazon exemplified this investment horizon in their latest quarterly results (more on this in Part Three).

Morgan Stanley expects capital expenditure on data centres will reach US$155bn in 2024 and grow a further 13% in 2025 to US$175bn. The GPUs (graphics processing units) for a 100MW data centre using Nvidia’s B100s or H100s are estimated to cost circa US$1.5bn.

As noted in a recent Reuters report, Microsoft and OpenAI are in discussions around a US$100bn data centre to house Stargate, a supercomputer for large language models and potentially a new form of compute, referred to as “The Tree of Thoughts“.

“The Tree of Thoughts” research by Google DeepMind and Princeton University points to a software architecture that would enable computational functions more akin to the human mind’s process of problem solving than current models.

Morgan Stanley and Beth Kindig both argue the scale of the compute and the data requirements for the LLMs create formidable barriers to entry, and the hyperscalers (Google, Microsoft, Amazon, and Meta) have a significant advantage through their pre-existing scale and business models.

Nvidia and other chip (GPU) manufacturers such as AMD, Qualcomm, Broadcom, and Intel are integral enablers of the compute across infrastructure-as-a-service and device applications (robots, smartphones, autonomous vehicles, etc).

ASML, the Dutch behemoth producer of lithography machines, and TSMC, the world’s largest chip foundry, are also integral GenAI enablers.

Obstacles and potential roadblocks to growth: Is Nvidia the canary in the coal mine?

Drilling down into the feasibility of scaling the data centres, it is apparent the affordability and availability of GPUs are material not only for the developers, but also for suppliers like Nvidia.

Morgan Stanley’s in-depth analysis in the aforementioned report concludes:

“There are risks that NVIDIA’s ability to rapidly lower the cost of compute shrinks the market, but this is not new. Our view is that as long as NVIDIA’s customers are innovating on model architecture to achieve higher levels of performance, we will see greater spending.”

The growing cost of development will limit demand to fewer, larger customers, highlights Morgan Stanley. The authors explain GPU demand from outside the larger, more competitive enablers, such as from software application companies, raises the risk of a decline in demand if monetisation of the investment is not achieved.

In other words, strong upfront demand creates a pull-through of earnings for companies like Nvidia, and the analyst community remains cautious on the durability of demand growth.

By contrast, Beth Kindig applies a different rationale to Nvidia.

Her assessment is that semiconductors are the way to play GenAI and that these companies (Nvidia, AMD, Qualcomm, Broadcom, etc) will represent 50% of AI spend, compared to 20-30% in the mobile market.

Kindig stresses the recent tie-up between Nvidia and Apple’s Vision Pro, via cloud Omniverse software, exemplifies the overlay of GenAI software on GPUs as a source of revenue growth and value-add.

Kindig posits it is almost unheard of for Apple to collaborate on software engineering.

When asked about the potential Nvidia valuation, Kindig opines the stock is at the same valuation as at the October 2022 low, or “eerily low”, and the upside potential is as high as US$10trn in market capitalisation by 2030, subject to all the uncertainties around AI adoption, the tech sector, and S&P500 performance.

The premise lies not only in the GPU story for the stock, but in the as yet unrecognised potential for growth in GPU AI software, such as the Apple Vision Pro application. Kindig envisages the GPU component of Nvidia’s market capitalisation will be between US$3trn and US$4trn, with the remainder in software.

These are enormous numbers, with multiple ifs and buts along the way. Nevertheless, Kindig remains resolute this company is a winner in the GenAI megatrend, with entry points subject to pullbacks and valuation.

Moving to the Edge: what will AI bring to your device?

Beth Kindig is one of many commentators highlighting the potential AI applications in “Edge” computing. 

“Edge” refers to running AI algorithms locally, directly on the user’s device, such as smartphones, notebooks, wearables, drones, AR/VR headsets and autos, which are the major data sources, notes Morgan Stanley’s report “Tech Diffusion: Edge AI’s Growing Impetus”.

An estimated 30bn devices will be in use by 2030, and running AI at the ‘Edge’ of networks offers multiple benefits for AI computation, including lower costs and latency, personalisation, and improved security and privacy compared to centralised cloud computing (data centres/hyperscalers). Morgan Stanley points to research from Gartner suggesting 50% of enterprise data will be created at the Edge by 2025.

Musk’s evolving Full Self Driving (FSD) is an example where data is collected from real-life users through camera sensors. The data are then analysed to create the algorithms for the FSD software.

Tesla expert Adam Jonas from Morgan Stanley wrote recently in “Crossing the Chasm: 5 Thoughts on Tesla’s Confusing Metamorphosis” that the EV market is transitioning from what has become a commoditised “dark age” to an AI and robotics “renaissance”.

Jonas is referring to camera systems on devices producing real-world data for the evolution of robot learning via large language models and vision language models. He infers this form of data collection will allow robots (in all their different forms) to learn more swiftly and efficiently compared with the pace of GenAI and the growth of Nvidia-equipped data centres.

Ultimately, Jonas asserts cameras attached to all forms of devices will increasingly be used as data collection points to train neural networks.

The obvious brakes on Edge devices and networks include battery life/power consumption, processing capability and memory, to name a few. Ultimately, hybrid systems between the cloud and the Edge could facilitate what is termed the Internet of Things (smarter devices with AI computational power).
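
To make the hybrid cloud/Edge idea concrete, below is a minimal, purely illustrative sketch of how a device might route an AI workload between on-device and cloud compute. The thresholds and function names are hypothetical assumptions for illustration, not drawn from the Morgan Stanley report or any vendor implementation.

```python
# Purely illustrative sketch of hybrid Edge/cloud routing.
# Thresholds and names are hypothetical, not from any cited report.

def route_inference(battery_pct: float, free_memory_mb: int,
                    model_size_mb: int, latency_sensitive: bool) -> str:
    """Decide where to run an AI workload: on the device (Edge) or in the cloud."""
    can_run_locally = free_memory_mb >= model_size_mb and battery_pct > 20
    if latency_sensitive and can_run_locally:
        return "edge"   # lower latency and the data stays on the device
    if can_run_locally and battery_pct > 50:
        return "edge"   # plenty of headroom, avoid network round-trips
    return "cloud"      # fall back to the data centre/hyperscaler

# Example: a wearable without enough memory for the model defers to the cloud.
print(route_inference(battery_pct=35, free_memory_mb=512,
                      model_size_mb=2048, latency_sensitive=True))  # -> cloud
```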

Morgan Stanley identified six companies that will be at the forefront of “at the Edge” networks, including Apple, Dell, Qualcomm, and Xiaomi. Apple’s Worldwide Developers Conference in June is expected to outline how the company is looking to embed artificial intelligence into Apple devices, above and beyond what has been a disappointing Siri experience over the last decade.

The energy conundrum: bigger isn’t always easy

Data centres and hyperscalers will demand exponentially greater amounts of energy from the electricity grid. The European Utilities team at Morgan Stanley expects AI data centre energy usage will rise from 1.5% to 4% of total power consumption by 2025, and data centre electricity consumption will rise from 3% to 8% of total US energy production by 2027.

Another forecast from Morgan Stanley is that by 2027, GenAI will be using as much energy as 80% of all the data centres did in 2022.

Morgan Stanley goes further, assessing that barriers to entry will be heightened by problems sourcing power supply, and the analysts propose large-scale data centres, like the mooted Stargate project, will be placed near nuclear power plants.

Another issue is how data centres will accommodate increasing power density from higher-density chips (more compute), resulting in cooling and storage problems (more racks and servers). Companies exposed to data centre solutions such as connected racks, switches and cooling are referred to as the “picks and shovels” to power GenAI.

The risks associated with the development of new data centres, including power constraints, are one reason why Morgan Stanley is cautious on future chip (GPU) demand, as delays to the infrastructure roll-out could result in stockpiles of unused chips.

The warning: “Given the multiple years, and risks, relating to planning new data centers, we believe there is significant risk that not all of the GenAI chips being purchased over the next few years will be rapidly deployed.”

Software solutions in the Gen AI arms race

RBC Capital Markets’ report “The Software Investors Handbook to Generative AI” makes the salient observation that investors are “overestimating” the short-term impact from AI and “underestimating” the long-term impact.

RBC explains the current beneficiaries include companies that can leverage and utilise their data and distribution, thereby capitalising on the technology without having to source the data or network. Microsoft is an example of a large company moving swiftly to take advantage of its existing structure.

Secondly, the analysts refer to “vertical software leaders” that have both data and the ability to generate AI solutions across multiple industries. LLMs start as a “blank canvas”, meaning those companies that can offer software solutions to train them to be industry-specific have “winner-take-all” potential.

Thirdly, RBC explains there are mid-market companies which have the potential to move ahead of the larger companies, such as legacy software companies using GenAI to improve the service offering.

Companies selected as potential winners in the software space include the likes of Adobe, MongoDB, Microsoft, CrowdStrike, Pro Medicus ((PME)), Xero ((XRO)), and Zoom Video Communications.

Conversely, RBC points to four categories of companies that are at risk, including “Legacy, on-premise software companies” that are not on the cloud and have an inherent disadvantage of not being able to leverage GenAI.

Companies that don’t adopt GenAI and re-architect their businesses around it, companies that claim to be AI-driven but in fact only use analytics, and companies in the basic “work/task management” space all risk being made redundant in the future.

RBC puts US companies Asana, Palantir, Smartsheet, and Zoominfo Technologies in the ‘at risk’ category.

Coming next in Part Three of Generative AI, Investing in the 21st Century Megatrend, we investigate company specifics for investors, including how to invest in the GenAI thematic in Australia.

The author owns shares in Microsoft, Nvidia, AMD, and Pro Medicus.
