_002 AI Hysteria 

I’ve spent the past three years knee-deep in the AI hysteria that has gripped not just the tech industry but seemingly the entire world. During that time, I’ve watched reliable, unglamorous products — those that consistently deliver value and profit — get sidelined in favour of the allure of “10x growth,” often by awkwardly shoehorning in an “AI feature.”

 

To be clear, an LLM (Large Language Model) is not artificial intelligence in the classical sense. It doesn’t reason, and it has no notion of truth or falsehood; it’s a semantic probability engine that predicts the most likely next token given the preceding context. That distinction matters: it is precisely why we have the hallucination problem.
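
To make that concrete, here is a deliberately toy sketch (no real model involved, and the probabilities are invented) of what “predicting the next token” means: the model turns a context into a probability distribution over candidate tokens and samples from it. A fluent but false continuation can carry nearly as much probability mass as the correct one, and nothing in the mechanism checks which is which.

```python
import random

# Toy illustration, not a real model: an LLM maps a context to a probability
# distribution over candidate next tokens and samples from it. Nothing in this
# mechanism verifies whether the sampled continuation is actually true.
next_token_probs = {
    "Paris": 0.46,      # plausible and correct
    "Lyon": 0.31,       # plausible but wrong
    "Marseille": 0.23,  # plausible but wrong
}

def predict_next(probs: dict) -> str:
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

context = "The capital of France is"
print(context, predict_next(next_token_probs))
```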

 

If you were genuinely serious about AI, you’d have been studying it since the 1980s. The people who have quietly extracted serious capital from AI systems — like Jim Simons and his team at Renaissance Technologies (see The Man Who Solved the Market by Gregory Zuckerman) — have been applying algorithmic and machine learning techniques for over 40 years. When most people today talk about “AI,” they’re really referring to subsets of machine learning, natural language processing, and predictive modelling — not the symbolic reasoning and goal-directed cognition that defined classical AI research.

 

One reason for the explosion of LLMs is the low barrier to entry. Anyone can plug one into a product, slap “AI-powered” on a landing page, and call it innovation. But as we can already infer from the financial performance of the major “AI” players, that barrier will rise once the cold reality of profitability overtakes the promise of infinite growth.

 

When I attended Big Data London in 2023, what struck me was that despite the enormous energy and attention surrounding this so-called revolution, most offerings were little more than semantic search layers on top of vector databases. Useful, yes — but the lack of diversity in real-world application was telling.
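
For anyone who hasn’t looked under the hood, that pattern is genuinely simple: embed your documents, embed the query, rank by cosine similarity, and hand the top results to an LLM. Here is a minimal sketch of the retrieval step, with plain NumPy and hard-coded vectors standing in for whatever embedding model and vector database a given product actually uses (both are assumptions here):

```python
import numpy as np

# Stand-in for an embedding model: in practice each text would be encoded
# by a real model. Hard-coded vectors keep the retrieval step visible.
docs = ["quarterly revenue report", "employee onboarding guide", "incident response runbook"]
doc_vectors = np.array([
    [0.9, 0.1, 0.0],   # doc 0
    [0.1, 0.8, 0.1],   # doc 1
    [0.0, 0.2, 0.9],   # doc 2
])

def cosine_top_k(query_vec: np.ndarray, matrix: np.ndarray, k: int = 2) -> list:
    # Cosine similarity = dot product of L2-normalised vectors.
    q = query_vec / np.linalg.norm(query_vec)
    m = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    scores = m @ q
    return list(np.argsort(scores)[::-1][:k])

query_vec = np.array([0.1, 0.1, 0.95])  # pretend this encodes "what do we do during an outage?"
for idx in cosine_top_k(query_vec, doc_vectors):
    print(docs[idx])
```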

 

My impression, as a tech professional who spends far too much time doom-scrolling on LinkedIn, is that many businesses still don’t grasp the difference between automating a proven process and delegating critical thinking to an LLM.

 

LLMs are excellent for streamlining workflows, analysing large datasets, and identifying probabilistic anomalies. They are a brilliant asset when used for those purposes. What they are not good at is appreciating real-world nuance and context — as anyone who’s tried to have a sustained, logical conversation with one can confirm.

 

For example, one client we worked with needed a bespoke LLM layer within their analytics stack. Combined with their private data partnerships, this allowed them to intercept incoming data and apply customised hygiene checks, reducing hallucination rates from over 20% to under 5%. That’s powerful. It gave end users a genuine advantage over their previous dashboard, which had done nothing to improve the already dire signal-to-noise ratio.
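
I can’t share the client’s implementation, but the general shape of a hygiene layer like that is easy to sketch: intercept the model’s output, verify every claim-bearing value against the trusted data it was supposed to be grounded in, and discard anything that can’t be verified. The field names, figures, and checks below are illustrative assumptions, not the actual system:

```python
# Illustrative only: a grounding check that drops model output whose key
# figures don't match the trusted source data they were derived from.
TRUSTED_FIGURES = {"revenue_q3": 4.2, "churn_rate": 0.031}  # assumed source of truth

def hygiene_check(llm_output: dict, trusted: dict, tolerance: float = 1e-6) -> dict:
    """Return only the fields whose values match the trusted data; drop the rest."""
    clean = {}
    for field, value in llm_output.items():
        if field in trusted and abs(trusted[field] - value) <= tolerance:
            clean[field] = value
        # Anything unverifiable is treated as a potential hallucination and discarded.
    return clean

raw = {"revenue_q3": 4.2, "churn_rate": 0.05, "made_up_metric": 12.0}
print(hygiene_check(raw, TRUSTED_FIGURES))  # -> {'revenue_q3': 4.2}
```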

 

Where this becomes concerning is in big data pipelines and cybersecurity. Sure, you can wrap your payloads in prompts and hope the LLM returns coherent signal, but once your hallucination rate exceeds 5%, the output rapidly becomes counterproductive.
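
The threshold itself is worth making mechanical: if you can label even a small sample of outputs, you can estimate the hallucination rate and refuse to let the pipeline run past it. A minimal gate, with the 5% threshold and the simple equality comparison as simplifying assumptions:

```python
def hallucination_rate(outputs: list, ground_truth: list) -> float:
    """Fraction of sampled outputs that contradict their labelled ground truth."""
    wrong = sum(1 for out, truth in zip(outputs, ground_truth) if out != truth)
    return wrong / len(outputs)

def quality_gate(outputs: list, ground_truth: list, threshold: float = 0.05) -> float:
    """Raise if the estimated hallucination rate exceeds the threshold."""
    rate = hallucination_rate(outputs, ground_truth)
    if rate > threshold:
        raise RuntimeError(
            f"Hallucination rate {rate:.1%} exceeds {threshold:.0%}; halting pipeline."
        )
    return rate

# Example: 1 wrong answer out of 10 sampled outputs is a 10% rate, so the gate trips.
sample_out = ["42", "7", "Paris", "2019", "blue", "yes", "no", "3.14", "cat", "dog"]
sample_truth = ["42", "7", "Paris", "2019", "blue", "yes", "no", "3.14", "cat", "fox"]
print(hallucination_rate(sample_out, sample_truth))  # 0.1
```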

 

The implications of “vibe coding” for cybersecurity make every infrastructure engineer shudder. When code generation becomes abstracted from logic comprehension, vulnerabilities multiply. After all, who cares about being detached from core business logic when any half-competent attacker can lift your keys through a rudimentary SQL injection?
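
That SQL injection jab isn’t hand-waving; the difference is often a single line. A generated snippet that concatenates user input into a query hands over the whole table, while a parameterised query treats the same input as inert data. The table and payload below are contrived for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (user TEXT, key TEXT)")
conn.execute("INSERT INTO api_keys VALUES ('alice', 'sk-live-123')")

user_input = "x' OR '1'='1"  # a textbook injection payload

# Vulnerable: the kind of string-built query that slips out of quick,
# unreviewed code generation. The payload makes the WHERE clause always true.
leaked = conn.execute(
    f"SELECT key FROM api_keys WHERE user = '{user_input}'"
).fetchall()
print("vulnerable query leaked:", leaked)

# Safe: a parameterised query treats the payload as data, not SQL.
safe = conn.execute(
    "SELECT key FROM api_keys WHERE user = ?", (user_input,)
).fetchall()
print("parameterised query returned:", safe)  # -> []
```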

 

So when I see the current wave of mass layoffs and vibe coding enthusiasm, I can’t help feeling we’re inflating another bubble.

 

Leaving aside the countless think pieces that might support or refute that instinct, the logic alone is clear:

 

If you abstract your thinking to an LLM, you surrender control of your ecosystem.

 

And in most modern companies, your software ecosystem is your business logic. If you don’t fundamentally — and in detail — understand how that logic works, you can’t possibly understand your business.

 

Even if you could delegate critical thinking to an AI agent, would that be desirable?

 

We’ve had clients come to us for quotes, only to argue they could “vibe-code” the same thing for next to nothing. They’re welcome to try. But they quickly discover that half the challenge of software engineering lies in distilling deep, nested business logic into coherent, maintainable systems. Engineering isn’t just about codifying logic — it’s about understanding its wider implications.

 

If that process is fully abstracted to AI agents, the business itself becomes increasingly detached from the systems that generate its own value.

 

At a technical level, think of it as a black-box problem. Every engineer knows the frustration of dealing with a system they can’t inspect: that one variable they can neither see nor control. The more logic you outsource to AI, the larger that black box becomes, until you’re completely dependent on it.

 

At that point, all you can do is infer through probability — not reason through understanding.

 

Semantic correlation does not equal comprehension.

 
