Feature Stories | 10:00 AM
The pace of artificial intelligence innovation has been reinforced by DeepSeek, and it's all systems go with software and hardware working in combination
-DeepSeek confirms (extremely fast) pace of innovation
-Australian companies set to benefit
-Capex in AI continues to rise
-Machine learning takes a step towards reasoning
By Danielle Ecuyer
Jumping to AI conclusions
Markets are often quick to jump to conclusions, and the simplest conclusion often wins out, especially when investors, traders, and algorithms are confronted with complexity in areas of technological development that far outpace their more nuanced and deeper understanding.
The recent example of the sell-off in hardware, chip makers like Nvidia and Broadcom, and data centre stocks such as NextDC ((NXT)) and Goodman Group ((GMG)), following DeepSeek's perceived miraculous new AI compute capacity at a lower price, is a case in point.
Why Does It Matter?
Like investing in biotech, the fourth industrial revolution in artificial intelligence is both complex and evolving at a rapid pace. That is how innovation and disruption work.
Morgan Stanley and Macquarie draw upon Jevons' Paradox ("the more efficient they become, the more we will use them") as a reason why DeepSeek's new and more efficient software will not reduce demand for hardware to train AI models.
"Jevons' Paradox occurs when technological progress that increases the efficiency with which a resource is used leads to an overall increase in the consumption of that resource, rather than a decrease."
Formulated by economist William Stanley Jevons in 1865, the paradox describes how increasing the efficiency of coal use in steam engines led to greater overall coal consumption, not a reduction.
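The mechanism can be sketched with a toy constant-elasticity demand model. To be clear, the functional form and every number below are illustrative assumptions, not drawn from the brokers' research: they simply show how an efficiency gain can increase total resource use when demand is sufficiently elastic.

```python
# Toy illustration of Jevons' Paradox applied to compute.
# All parameters (k, base_cost, elasticity values) are hypothetical.

def resource_consumed(efficiency, elasticity, k=100.0, base_cost=1.0):
    """Total resource use under constant price elasticity of demand.

    efficiency: service units produced per unit of resource
    elasticity: price elasticity of demand (> 1 means highly elastic)
    """
    price_per_service = base_cost / efficiency        # efficiency lowers price
    services_demanded = k * price_per_service ** -elasticity
    return services_demanded / efficiency             # resource actually burned

# Efficiency doubles (a stand-in for DeepSeek-style software gains):
before = resource_consumed(efficiency=1.0, elasticity=1.5)
after = resource_consumed(efficiency=2.0, elasticity=1.5)
print(after > before)  # True: with elastic demand, total consumption rises
```

With inelastic demand (elasticity below 1), the same efficiency gain would shrink total consumption; the brokers' argument is, in effect, that demand for AI compute sits in the elastic regime.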
Morgan Stanley believes the accelerated decline in the cost of compute, as exhibited by DeepSeek's R1 model, will result in increased demand for AI inference.
The analysts' modeling shows that a -90% decline in the unit cost of compute over a six-year period resulted in an increase in AI adoption.
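As a back-of-envelope check (a sketch, not the broker's actual model), a -90% fall over six years corresponds to roughly a -32% decline per year if the rate were constant:

```python
# Implied constant annual rate behind a -90% decline over six years.
# The -90%/six-year figure is from the article; the arithmetic is ours.

total_decline = 0.90   # -90% over the full period
years = 6

annual_factor = (1 - total_decline) ** (1 / years)   # cost multiplier per year
annual_decline_pct = (1 - annual_factor) * 100
print(round(annual_decline_pct, 1))  # ~31.9
```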
Macquarie concurs, stating, "Lower production costs will drive a proliferation of products using LLMs; technological sprawl."
Beth Kindig, lead tech analyst at the I/O Fund, likens DeepSeek to a Sputnik moment for AI.
The analogy refers to how an inflation-adjusted US$33m investment by the USSR in the Sputnik satellite prompted the USA to invest an estimated US$1trn, in inflation-adjusted terms, over the following sixty years.
Shelly Palmer, a sought-after technology commentator, wrote last week:
"DeepSeek's real value isn't in replacing big compute' but in delivering meaningful efficiency gains and cost savings".
To appreciate the extent of investment in AI, Kindig explains that if the software developments from China (DeepSeek) are compared to Sputnik, then achieving Artificial General Intelligence (AGI) would be the equivalent of the moon landing.
AGI refers to machine learning that can perform intellectual tasks at human-level capacity. Digging deeper, Kindig stresses AGI requires large language models (LLMs) with a minimum of one trillion and up to ten trillion parameters.
Software like DeepSeek is not cannibalising hardware at this stage, as achieving AGI remains at least five to ten years in the future.
Palmer states AI innovation isn't an either-or proposition. As Kindig explains, algorithmic efficiency will not replace brute-force compute, nor will one technology dominate the other. Hardware and software will evolve together.
Palmer sees AI productivity as a meeting of scale and optimisation as LLMs and hyperscalers move to trillion-parameter architectures.
Palmer adds: "DeepSeek isn't a black swan; it's a glimpse of the new normal. We are living on the exponential, where breakthroughs that once seemed improbable now arrive with startling regularity. The pace of AI advancement isn't slowing down; it's accelerating. The only real question is whether you're ready to keep up."
Kindig is a prominent Nvidia expert, making the DeepSeek moment particularly significant.
Reinforcement learning via software (inference) is ultimately a step toward AGI. To achieve trillion-plus parameter models, Nvidia, the leader in GPUs, and other AI design companies will be essential, Kindig highlights.
Macquarie states the market has developed a "false dichotomy of 'haves' and 'have-nots'," overlaid with Cold War intrigue. The broker explains new markets accelerate by volume, not cost. Since compute is the main cost of training an LLM, software improvements drive efficiencies; lower prices then lead to volume growth and higher adoption.
The analyst notes GPU hardware is very expensive, and software is used to drive outcomes more efficiently.
Morgan Stanley supports Kindig's view with internal US data centre pipeline analysis, confirming most new developments are for AI inference rather than AI training.
Macquarie tackles the issue from a different perspective, stating DeepSeek R1 has accelerated inference by moving the process of thought forward, rather than just the outcome of thinking.