Think different...
The lowering of costs is fundamentally a good thing

Michael Gilgallon
CEO, Red Shift
Insight

As the hype around AI settles and tangible value becomes more obvious, it is increasingly apparent that AI will not “replace all jobs” but will instead act as a force multiplier, as many of us in the tech industry pointed out from the start (so-called “vibe coders” excepted).
One clear benefit has been the amplification of experienced software engineers, allowing them to apply their expertise at far greater scale, increasing throughput and in turn lowering costs.
This lowering of costs is fundamentally a good thing.
Cheaper goods, services and energy are only ever a net benefit once the disruptive phase of implementation has been endured.
There is still, however, a narrative being peddled (and I use that word deliberately) that ever more compute and inference will lead to a Godlike AGI. Quite aside from the data we already have to disprove this theory, LLMs in particular hit a cognitive ceiling past a certain point, and no amount of compute or data will resolve it. There is perhaps more promise in multimodal AI such as Gemini. However, even if we were to give the devil his due and pretend that simply scaling up these multimodal models will lead to AGI (which it won’t), there remains a glaring and fundamental impasse: liability and comprehension.
Let’s pretend we all have an AGI in our pocket, seamlessly integrated into every aspect of our digital environment, able to carry out any action, no matter how complex, flawlessly and consistently. Even here, the role of the human being is not removed. Who tells the AGI what needs to be done? Who tells it what matters to that individual? How does it understand the criteria for success? How does it know when to stop? How does it appreciate context if it can only access the information fed to it within a narrow digital ecosystem? Even an AGI with theoretically infinite cognition is fundamentally incapable of understanding what is “true” relative to an individual. It has no agency and could only ever be a reflection of a user’s inputs and demands. The notion that all jobs will be automated in future is akin to saying we no longer need designers because we have the sewing machine.
One dictates, the other does.
This dynamic will never change.
So why even mention it? Well, because capital markets have fallen victim to a cult-like obsession with the promise of an all-capable AGI, diverting investment away from real solutions to real problems here today and towards the promise of a synthetic God. Quite aside from the punishing economics of these inference technologies and the data warehousing behind them (currently eye-wateringly subsidised), even if they were to become commercially viable overnight, we would simply increase productivity and reduce the cost of services. Real problems will remain real problems, requiring ownership, decisions, oversight and accountability from real people.
Rather than divert precious energy and attention to the promises of deeply vested interests, we have a clear criterion of success for any such new tooling: can it provide measurable value in the short term? If not, ignore it and move on.
AI, in particular multimodal AI, is here to stay, but it is not the market’s job to find applications for the endless iterations of these tools. Rather, just as with our pocket AGI above, they exist to provide value to the market.
If that value is not immediate and obvious, then it is not valuable.




