
Is AI All It’s Cracked Up To Be?

Feature Stories | Apr 10 2024

AI is a game-changer, analysts agree, but there remain questions over predicted productivity gains, potential job losses, costs, power requirements and how long AI adoption will take.

-Large language models and GPUs
-2024 a year of investment
-Disagreement over job losses
-Massive increase in power requirements
-Macro impact still years off

By Greg Peel

"The combination of accelerating AI technology development and the widespread productivity challenge underscores our view that AI is the most important investment theme of this decade. We view 2024 as an investment year that will be followed by measurable productivity improvements beginning in 2025. Like other major technologies, AI likely follows Amara's Law', which states that the market overestimates the potential of a technology in the near term but underestimates it in the long term."

This was the view of Morgan Stanley's North America quant analysts, expressed in January.

So what exactly is AI? Wilsons had a novel approach to answering that question: ask AI. Technically, generative AI of the ChatGPT variety is built on a "large language model". Wilsons asked ChatGPT, the service that arguably kicked off the whole AI phenomenon with its launch in late 2022, to define itself.

Straight from the horse's mouth:

"A Large Language Model refers to a type of artificial intelligence model designed to generate human-like text and understand natural language. These models are trained on massive amounts of text data and employ deep learning techniques, specifically using neural networks with many layers. Large language models, such as OpenAI's GPT-3 are known for their impressive capabilities in tasks like text generation, translation, summarization, question answering, and more.

"The applications of large language models are diverse and range from chatbots and virtual assistants to content creation, language translation, sentiment analysis, and more. They have the potential to revolutionize numerous industries, including customer service, content generation, and language processing tasks."

Back in the 1970s, there was a brief "speed reading" craze. Proponents claimed the average human could be taught to scan a full page of text in a matter of seconds, take it all in, and be able to summarise it and answer questions on it.

It was all debunked fairly swiftly, but the bottom line is large language models speed-read. Rather than being able to read a page of text in seconds, however, they can read the entire contents of the internet in the blink of an eye.

And it's not just text. AI can take in images, video and, importantly, numerical data just as quickly. AI is based on "machine learning". Teach a program a task, and that program will complete the task in seconds where a human might have taken days, or longer. And it will know what to do the next time.

That is why the great beneficiary of AI is seen as "productivity".

AI is not new. It did not begin with ChatGPT. Siri's been around for a while, for example, and is a form of AI, albeit a simplistic (and often annoying) form in this context. So why did ChatGPT supposedly signal the AI revolution?

Wilsons puts it down to the convergence of two factors: computer chips and the internet.

The Chip

The SIM card in your mobile phone was once the size of a postage stamp, but is now so small you almost need a microscope to insert it. That is one example of the technological advancement of the silicon chip.

The introduction of ChatGPT also shone the spotlight on a chipmaking company called Nvidia, which until then was just another chipmaker as far as most were concerned. Nvidia came to notice thanks to its first-mover advantage in AI chips, built around its graphics processing units (GPUs), a step up from the standard central processing unit (CPU).

So what's the difference? Let's ask industry stalwart Intel:

"The CPU is suited to a wide variety of tasks, especially those for which latency or per-core performance are important for tasks such as web browsing. A powerful execution engine, the CPU focuses its smaller number of cores on individual tasks and getting things done quickly. This makes it uniquely well equipped for jobs ranging from serial computing to running databases.

"GPUs began as specialized ASICs (Application-Specific Integrated Circuits) designed for a specific purpose such as accelerating specific 3D rendering tasks. Over time, these fixed-function engines became more programmable and more flexible. While graphics and hyper-realistic gaming visuals remain their principal function, GPUs have evolved to become more general-purpose parallel processors as well, handling a growing range of applications, including AI."

Here's an idea: what if we combined the two?

"CPU/GPUs deliver space, cost, and energy efficiency benefits over dedicated graphics processors. Plus, they provide the horsepower to handle processing of graphics-related data and instructions for common tasks. Processors with built-in graphics are a great choice for immersive gaming, 4K streaming, video editing, and exploring the web with lightning-fast connectivity."

In June last year, Nvidia announced a new GH200 Grace Hopper "superchip", which combines CPU and GPU and is "specifically created for large-scale AI applications". Wilsons notes the Hopper chip generated results that were 4.5x better than its prior chip, and with this new chip, Nvidia expects "to dramatically accelerate AI and machine-learning applications in both training (creating a model) and inference (running it)".
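The training/inference split Nvidia refers to is easy to illustrate in miniature. The toy sketch below uses nothing more than numpy and invented data: it fits a simple model (training), then applies it to unseen inputs (inference). Real GenAI workloads do the same two things, just at vastly larger scale on GPU clusters.

```python
# Toy illustration of "training" (creating a model) versus "inference" (running it),
# using ordinary least squares on invented data rather than a neural network.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 3))                        # training inputs
true_weights = np.array([2.0, -1.0, 0.5])
y = X @ true_weights + 0.1 * rng.normal(size=1_000)    # training targets

# Training: estimate the model's parameters from the data.
X_design = np.c_[X, np.ones(len(X))]                   # add an intercept column
weights, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# Inference: apply the fitted parameters to inputs the model has never seen.
X_new = rng.normal(size=(5, 3))
predictions = np.c_[X_new, np.ones(len(X_new))] @ weights
print(predictions)
```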

When ChatGPT was launched in November 2022, Nvidia shares were trading around US$170. At the time of writing, they are trading around US$890, or 425% higher. There are now only two US companies with a greater market capitalisation: Microsoft and Apple. Falling in line behind Nvidia are Google, Amazon and Meta.

But for the Hopper superchip to be effective, it needs access to a wealth of data. As Wilsons notes, the world's data availability is "incredibly large", and AI models have the chance to perform better with more diverse data.

The proliferation of mobile devices, higher-quality Wi-Fi that facilitates streaming, and the expanded use of electric vehicles, all of which create and consume large quantities of data, have contributed to a wide body of data.

The convergence of ongoing knowledge of, and research into, AI, the order-of-magnitude improvement in computational power and the world's data almost tripling over the four years to 2022 have all contributed to advances in AI, Wilsons notes.

Just a Passing Fad?

Recent progress in artificial intelligence has attracted strong interest from many firms aiming to integrate content generation and decision-making into their processes, Citi reports. Citi cites a 2022 AI report released by IBM, which measured the proportion of firms that had already deployed AI and the proportion planning to use it in the coming year.

China and India led on these metrics, with 58% and 57% of firms respectively having already deployed AI, followed by Italy and Singapore. Overall, the report found the vast majority (more than 70%) of firms were expected to use AI in the coming years.

That was in 2022.

More recently, Morgan Stanley's chief information officer survey (US) suggested 2024 is a year of investment for AI.

The survey showed generative AI continues to gain mind and wallet share with CIOs — 68% of CIOs indicated AI-related impacts to their IT budgets in the December quarter 2023, an increase from 66% in the September quarter and 56% in the June quarter. Further, AI/machine learning emerged as the top CIO IT priority in December.

This development is further supported by recent AI-related data centre expansions as Morgan Stanley's analysts now forecast US hyperscaler data centre capex to grow some 40% year on year in 2024, up from around 30% before US December quarter earnings results.

Data centres provide the massive computing power required to drive AI systems in the cloud.

In terms of early adopters, Morgan Stanley analysts see efficiency/productivity gains occurring within Software/Internet first before the broader market/economy. That will be the 2024 story, with the broader market following in 2025.

AI and You

Throughout history, various technological revolutions have led to the end of old-fashioned jobs (in the context of that time) and the creation of new jobs. In between, some workers have been able to adapt their existing skills to the new technology or be reskilled. Others have simply become unemployed.

The explosion of AI has rekindled this fear yet again: Will I lose my job?

Utilising a new dataset covering skills and tasks across occupations, Jarden has estimated an AI Disruption Index across more than 1,200 occupations. Jarden found on average 54% of employee time is spent on tasks that could be disrupted by AI, with one-third of employees in occupations in which more than 70% of their tasks could be disrupted.
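Jarden's dataset and weighting are its own, but the headline statistics are easy to picture. The sketch below is a toy illustration with entirely hypothetical occupations and exposure shares (and no employment weighting), showing how task-level exposure could be rolled up into an average exposure figure and a share of highly exposed occupations.

```python
# Toy illustration only: hypothetical occupations and AI-exposure shares,
# not Jarden's data, and unweighted by employment for simplicity.
occupations = {
    "Accountant":          0.78,   # share of working time on tasks AI could disrupt
    "Software developer":  0.72,
    "Retail assistant":    0.45,
    "Care worker":         0.12,
    "Construction worker": 0.18,
}

shares = list(occupations.values())
average_exposure = sum(shares) / len(shares)
highly_exposed = sum(s > 0.70 for s in shares) / len(shares)

print(f"Average share of time exposed to AI: {average_exposure:.0%}")
print(f"Share of occupations with >70% of tasks exposed: {highly_exposed:.0%}")
```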

Importantly, Jarden views AI disruption as not necessarily replacing labour but likely augmenting workers and potentially driving increased productivity.

AI is also likely to drive shifts in demand for labour, Jarden suggests. There will be an initial increase in demand for workers with the technical skills to implement AI but, in time, a relative increase in demand for workers with physical skills that cannot be replaced, such as construction and care workers.

For companies, Jarden believes those firms in highly exposed industries, such as financial services, will likely face higher costs as they adopt and invest in new technological capabilities, but in time will benefit from improving productivity and potentially lower labour costs. Meanwhile, those companies not exposed to AI disruption are likely to face persistently higher labour costs.

Based on its AI Disruption Index across occupations, Jarden can map the potential impact across income, industries and geography.

Jarden finds (1) higher-paid occupations are generally more exposed to AI disruption; (2) the industries most exposed to disruption are financial services, professional services, IT and media, while mining, construction and agriculture are the least exposed; and (3) capital cities, in particular Sydney and Melbourne, are likely to see the greatest impact, with exposure to disruption outside capital cities generally limited.

Interestingly, says Jarden, in the professional services industry we are already seeing significant development and application of AI, which could drive meaningful improvements in productivity and potentially reduce labour needs.

Citi lists industry sectors from most likely to be impacted by AI to least likely.

In descending order of impact are financials and fintech, consumer services, healthcare services, industrial technology & mobility, real estate, and natural resources and climate technology. The least impacted of all is, of course, technology & communications, as this is the "enabling" sector.

This list is pretty basic. Within each of those sectors there can still be room for AI use and possible job disruption.

Jarden noted care workers (aged, child), for example, as an occupation unaffected by AI, but healthcare services are high on Citi's list. Indeed, advances in medical discoveries driven by AI adoption may mean fewer lab workers are needed, and AI could take over some of the role of doctors. But it will surely be a long time before a machine performs a heart transplant without supervision.

The management of medical records is nevertheless well open to AI.

Mining employs lots of workers, but already we've seen advances such as self-driving tip-trucks in massive iron ore mines putting truck drivers out of work.

Agriculture requires lots of labour, but again we see self-driving harvesters, for example. Soil testing and crop management could surely be assisted by AI.

In real estate, can a machine talk you into buying a house?

And so on and so forth.

To further complicate matters, Citi cites a paper released in 2023 which looked into the productivity effects of giving customer support agents access to generative AI.

The authors found employees with access to this tool increased their productivity by 14%, measured by the number of issues resolved per hour. The results on the distributional impact of AI seem to align with those for college-educated professionals: the study also found the greatest productivity impact was on novice and low-skilled workers, with minimal effects on experienced and highly skilled workers.

The authors found the AI model disseminated potentially tacit knowledge of more able workers and helped newer workers move up the experience curve. In addition, they showed AI assistance improved customer sentiment, reduced requests for managerial intervention, and improved employee retention.

Hang on, earlier we learned from Jarden's research that "higher-paid occupations are generally more exposed to AI disruption". This paper suggests "minimal effects on experienced and highly skilled workers," which one assumes are the more highly paid.

Consensus, it seems, is lacking when it comes to AI. We might simply conclude that different industries will be impacted in different ways.

Consensus is also lacking on the subject of productivity gains.

Productivity

The greatest beneficiary of AI, all agree, will be productivity, or GDP per man-hour.

We already discussed the productivity question at length (and the pitfalls of AI) in Rise of the Machines: AI Has Arrived (https://fnarena.com/index.php/2023/11/16/rise-of-the-machines-ai-has-arrived/) published last November. But just to reiterate:

In recent decades we've seen the introduction of the web, the cloud, the smartphone and the Internet of Things. Why, then, has global productivity been on a downward trend? It's been almost a decade since the first reports began to predict an AI-led surge in productivity growth. If pandemic impacts are excluded, G7 productivity growth over the past two years has been below the average since 2005.

The productivity boost from past transformative technologies has generally been drawn out and less dramatic than might have been expected given the importance of the inventions. Economists have been puzzled, Capital Economics noted, over why the digitalisation of the economy over the past two decades has been accompanied by such weak productivity growth.

There is, as yet, no evidence in the aggregate data of a productivity boost from AI.

The famous quip by Paul Krugman, Nobel Laureate in Economics, that "productivity isn't everything, but in the long run it is almost everything", goes to the heart of every innovation, Citi notes. Despite its importance, and the impressive technological change of the recent past, productivity growth has been slowing for decades across advanced economies.

It is not surprising industry leaders and academics often refer to the potential of AI technologies as a way to end this downward trend, Citi suggests. While there is still no definitive answer, several researchers suggest this new wave of large language models is very promising, based on preliminary results from their studies.

Oxford Economics points to emerging evidence AI will lead to strong productivity growth in at least some sectors. But in the absence of widespread adoption, and large-scale innovation from using AI, the economic gains could be narrow for some time.

And there's another issue.

The Power of AI

No, not the capacity of AI to do your job, but the amount of energy required to power the massive data centres that will facilitate AI adoption.

In Morgan Stanley's base case, GenAI power demand grows from an average of less than 15 terawatt hours (TWh) in 2023 to 46 TWh in 2024 to 224 TWh in 2027.

A terawatt is one thousand gigawatts, or one trillion watts; a terawatt-hour is one thousand gigawatt-hours of energy consumed.
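To put those projections in context, here is a rough back-of-envelope conversion, on the assumption (ours, not Morgan Stanley's) that the TWh figures are annual consumption totals. It converts each year's projected demand into the equivalent continuous draw in gigawatts and shows the implied growth multiples.

```python
# Back-of-envelope conversion of the Morgan Stanley base-case GenAI projections above.
# Assumes the TWh figures are annual consumption totals.
projections_twh = {2023: 15, 2024: 46, 2027: 224}   # TWh per year

HOURS_PER_YEAR = 8760
for year, twh in projections_twh.items():
    avg_gw = twh * 1_000 / HOURS_PER_YEAR           # 1 TWh = 1,000 GWh
    print(f"{year}: {twh} TWh/yr ~ {avg_gw:.1f} GW of continuous demand")

print(f"2023->2024 multiple: {projections_twh[2024] / projections_twh[2023]:.1f}x")
print(f"2024->2027 multiple: {projections_twh[2027] / projections_twh[2024]:.1f}x")
```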

Morgan Stanley believes its projected growth may still be conservative, as the broker expects users of GenAI hardware will have strong incentives to maximise the utilisation rate of this equipment. Using a GPU/Custom Silicon utilisation rate of 60% in its base case, Morgan Stanley estimates 2027 GenAI power demand will be equivalent to more than 75% of total global data centre power use in 2022.

In its bull case, in which the GPU/Custom Silicon utilisation rate is increased to 90%, the broker estimates 2027 GenAI power demand equals 116% of 2022 total data centre power usage.

That said:

"Contrary to consensus, we expect the net Sustainability impacts of GenAI to be positive. We think the impact on global carbon emissions is likely to be small, while the Sustainability benefits of GenAI are likely to be large".

We note both Google and Microsoft have recently disclosed they are contemplating building their own nuclear facilities to provide requisite power.

No Rush

US investment in AI-related hardware has surged, Goldman Sachs' global economists note, with revenues of semiconductor manufacturers rising by over 50% since early 2023 and company-level revenue forecast revisions implying an incremental US$250bn in annual AI hardware investment (1% of US GDP) through 2025.

This increased investment is not yet visible in official national accounts data that are relevant for GDP, but shipments for some AI-related components have recently picked up.

However, actual adoption of AI has only modestly increased so far, the economists found, with less than 5% of companies reporting use of generative AI in regular production. And while adoption is higher in industries that Goldman Sachs estimates will benefit the most from AI — including computing and data infrastructure, information services, and motion picture and sound production — and is expected to rise going forward, adoption remains well below levels necessary to see large aggregate productivity gains.

Low adoption has limited the labor market impact. Preliminary evidence suggests AI is modestly raising labor demand while driving negligible job losses, thereby creating a slightly positive impulse to net hiring.

Yet emerging evidence from early adopters points to large increases in labor productivity. While early estimates should be interpreted cautiously given selection and reporting biases, Goldman warns, recent academic studies imply an average 25% increase in labor productivity following AI adoption, with anecdotal company reports suggesting similarly large efficiency gains.

The sizable increase in AI-related investment, and the large productivity gains among early adopters, add to Goldman Sachs' confidence that generative AI poses meaningful economic upside. However, the slow pace of adoption suggests any sizable macroeconomic impact is still several years off.
