Is AI All It’s Cracked Up To Be?

Feature Stories | Apr 10 2024

AI is a game-changer, analysts agree, but there remain questions over predicted productivity gains, potential job losses, costs, power requirements and how long AI adoption will take.

-Large language models and GPUs
-2024 a year of investment
-Disagreement over job losses
-Massive increase in power requirements
-Macro impact still years off

By Greg Peel

"The combination of accelerating AI technology development and the widespread productivity challenge underscores our view that AI is the most important investment theme of this decade. We view 2024 as an investment year that will be followed by measurable productivity improvements beginning in 2025. Like other major technologies, AI likely follows Amara's Law', which states that the market overestimates the potential of a technology in the near term but underestimates it in the long term."

This was the view of Morgan Stanley's North America quant analysts, expressed in January.

So what exactly is AI? Wilsons had a novel approach to answering that question: ask AI. Technically, today's generative AI is built on "large language models". Wilsons asked ChatGPT, the service that arguably kicked off the whole AI phenomenon with its launch in late 2022, to define itself.

Straight from the horse's mouth:

"A Large Language Model refers to a type of artificial intelligence model designed to generate human-like text and understand natural language. These models are trained on massive amounts of text data and employ deep learning techniques, specifically using neural networks with many layers. Large language models, such as OpenAI's GPT-3 are known for their impressive capabilities in tasks like text generation, translation, summarization, question answering, and more.

"The applications of large language models are diverse and range from chatbots and virtual assistants to content creation, language translation, sentiment analysis, and more. They have the potential to revolutionize numerous industries, including customer service, content generation, and language processing tasks."

Back in the 1970s, there was a brief "speed reading" craze. Proponents claimed the average human could be taught to scan a full page of text in a matter of seconds, take it all in, and then summarise it and answer questions about it.

It was all debunked fairly swiftly, but the bottom line is large language models speed-read. Rather than being able to read a page of text in seconds, however, they can read the entire contents of the internet in the blink of an eye.

And it's not just text. AI can take in images, videos and, importantly, numerical calculations just as quickly. AI is based on "machine learning": teach a program a task, and that program will complete it in seconds where a human might have taken days or longer, and it will know what to do the next time.
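A minimal sketch of that teach-once, reuse-forever idea, assuming the widely used scikit-learn Python library and its built-in iris dataset purely for illustration:

```python
# Minimal machine-learning sketch: "teach" a model on labelled examples,
# then reuse it on data it has never seen. Assumes scikit-learn is installed;
# the dataset and model choice are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)          # the "teaching" step (training)

# The trained model now handles new inputs in milliseconds,
# and can be saved and reused the next time the task comes up.
print("accuracy on unseen data:", model.score(X_test, y_test))
```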

That is why the great beneficiary of AI is seen as "productivity".

AI is not new. It did not begin with ChatGPT. Siri has been around for a while, for example, and is a form of AI, albeit a simplistic (and often annoying) one by comparison. So why did ChatGPT supposedly signal the AI revolution?

Wilsons puts it down to the convergence of two factors: computer chips and the internet.

The Chip

The SIM card in your mobile phone was once the size of a postage stamp, but is now so small you almost need a microscope to insert it. That is one example of the technological advancement of the silicon chip.

The introduction of ChatGPT also shone the spotlight on Nvidia. Prior to that time, Nvidia was just another chipmaker as far as anyone was concerned, but it came to notice thanks to its first-mover advantage in AI chips, built on its graphics processing units (GPUs), a step up from the standard central processing unit (CPU).

So what's the difference? Let's ask industry stalwart Intel:

"The CPU is suited to a wide variety of tasks, especially those for which latency or per-core performance are important for tasks such as web browsing. A powerful execution engine, the CPU focuses its smaller number of cores on individual tasks and getting things done quickly. This makes it uniquely well equipped for jobs ranging from serial computing to running databases.

"GPUs began as specialized ASICs (Application-Specific Integrated Circuits) designed for a specific purpose such as accelerating specific 3D rendering tasks. Over time, these fixed-function engines became more programmable and more flexible. While graphics and hyper-realistic gaming visuals remain their principal function, GPUs have evolved to become more general-purpose parallel processors as well, handling a growing range of applications, including AI."

Here's an idea: what if we combined the two?

"CPU/GPUs deliver space, cost, and energy efficiency benefits over dedicated graphics processors. Plus, they provide the horsepower to handle processing of graphics-related data and instructions for common tasks. Processors with built-in graphics are a great choice for immersive gaming, 4K streaming, video editing, and exploring the web with lightning-fast connectivity."

In June last year, Nvidia announced its new GH200 Grace Hopper "superchip", which combines CPU and GPU and is "specifically created for large-scale AI applications". Wilsons notes the Hopper chip generated results 4.5 times better than its predecessor, and with this new chip Nvidia expects "to dramatically accelerate AI and machine-learning applications in both training (creating a model) and inference (running it)".
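The distinction Nvidia draws between training and inference can also be sketched. The Python snippet below, again assuming PyTorch and using a deliberately tiny model with placeholder random data, shows the two phases: a training loop that adjusts the model's weights, followed by an inference pass with weight updates switched off.

```python
# Two phases of an AI workload: training (creating the model) and
# inference (running it). The tiny model and random data below are
# placeholders standing in for real workloads.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(256, 10)            # placeholder training inputs
y = torch.randn(256, 1)             # placeholder training targets

# --- Training: repeatedly adjust the weights to reduce the loss ---
model.train()
for _ in range(100):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()

# --- Inference: run the trained model on new data, no weight updates ---
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 10))
print(prediction)
```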

When ChatGPT was launched in November 2022, Nvidia shares were trading around US$170. At the time of writing, they are trading around US$890, some 425% higher. There are now only two US companies with a greater market capitalisation: Microsoft and Apple. Falling in line behind Nvidia are Google, Amazon and Meta.

But for the Hopper superchip to be effective, it needs access to a wealth of data. As Wilsons notes, the world's data availability is "incredibly large", and AI models have the chance to perform better with more diverse data.

The proliferation of mobile devices, higher-quality Wi-Fi that facilitates streaming, and the expanded use of electric vehicles, all of which both create and consume large quantities of data, have contributed to a rapidly expanding body of data.

The convergence of ongoing AI knowledge and research, an order-of-magnitude improvement in computational power, and a near-tripling of the world's data over the four years to 2022 has driven the recent advances in AI, Wilsons notes.

