AI is everywhere – but at Laurium Capital, judgement still wins

May 7, 2026
Kim Zietsman

It’s become almost impossible to read a news site, sit in a boardroom, or even have a casual conversation without “AI” coming up.


In South Africa’s investment industry, the volume is even louder.

Asset managers want to know how to use it, clients want to know if we’re using it, and teams want to know what it means for their careers.

The excitement is real, but so are the fears.

Will AI take our jobs? Which roles change first? What should our children study?

The uncomfortable truth is that nobody knows with certainty, because the technology is improving daily and the pace is extraordinary.

But in asset management, uncertainty is a familiar friend. We manage it for a living. That gives us a useful lens.

The right response is neither denial nor blind adoption, but disciplined experimentation with clear guardrails.

What AI can realistically do in asset management

Markets reward speed and clarity, but the work behind a decision often includes many repetitive steps.

Used well, AI can take on heavy lifting in those repeatable tasks so analysts can spend more time on interpretation and judgement.

This fits neatly with how we think about AI at Laurium.

It is most valuable where it reduces friction, not where it replaces conviction.

We can use it to speed up the mechanics while keeping investment judgement, portfolio construction, sizing, and risk in human hands.

The more important effect is not speed for its own sake, but reallocation of effort.

When preparation becomes more efficient, time shifts toward debate, challenge, and synthesis.

Analysts spend less time reproducing information and more time stress‑testing narratives, interrogating assumptions, and understanding what is not in the data.

That is where real investment edge still sits.

Where the real risk sits – confident mistakes

In investing, small mistakes compound fast.

An incorrect input flows into valuations, risk views and, ultimately, how capital is allocated.

The near‑term danger isn’t that AI becomes “too clever”, but that it can produce plausible‑looking errors that slip through when teams move quickly.

That’s why adoption has to be designed around oversight.

AI can suggest, summarise, sort and highlight, but people must review, validate and take responsibility for the final call.

In practice, that means setting clear boundaries (what the tool may touch and what it may not), keeping clean source data separate from calculations, tracking where information comes from, and using checklists so that faster work still meets the same standard of care.
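As a purely illustrative sketch (not Laurium's actual tooling), the guardrails described above — keeping source data separate from calculations, recording where each figure came from, and gating outputs behind a human checklist — might look something like this in code; the `SourcedFigure` and `review_checklist` names are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourcedFigure:
    """An input figure kept separate from any calculation, with its origin recorded."""
    name: str
    value: float
    source: str  # e.g. the filing or report the number was taken from

def review_checklist(figures, reviewed_by):
    """Gate AI-extracted inputs behind explicit checks and human sign-off."""
    issues = []
    for f in figures:
        if not f.source:
            issues.append(f"{f.name}: no source recorded")
        if f.value is None:
            issues.append(f"{f.name}: missing value")
    if not reviewed_by:
        issues.append("no human reviewer has signed off")
    return issues  # an empty list means the inputs may feed the valuation

figures = [SourcedFigure("revenue_fy24", 1250.0, "FY24 annual report, p. 12")]
print(review_checklist(figures, reviewed_by="analyst"))  # []
```

The point of the sketch is the shape of the control, not the code itself: provenance travels with every number, and nothing reaches a valuation without a named human taking responsibility.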

Will AI take jobs in investment firms?

We don’t pretend to know exactly how jobs or careers will change, but we do believe some tasks will absolutely be automated.

If a role is dominated by copying numbers between PDFs and spreadsheets, formatting reports, or producing first drafts of routine commentary, then in our opinion AI will change that work dramatically.

But judging by previous waves of technological change, whole jobs rarely disappear overnight; they evolve.

In investing, the centre of gravity shifts up the value chain, from producing information to interpreting it, challenging it, and applying it under uncertainty.

In fact, the best analysts may become even more important as routine work becomes a commodity.

When everyone can reach an initial answer faster, the edge comes from thinking, distilling what truly matters, separating signal from noise, identifying what is missing from the narrative, and understanding where a model is most sensitive.

AI can accelerate a workflow, but it does not replace independent judgement.

How we think about AI at Laurium Capital

We are approaching AI the same way we approach most things that matter in investing: with curiosity, scepticism, structure, and sound judgement under uncertainty.

We are not AI experts, but we are committed to building capability steadily, without compromising confidentiality, governance, or the standard of care our clients expect.

Our starting point was not “buy a tool and hope.”

It was to put deliberate and durable foundations in place.

Following an initial AI workshop, we established a cross‑functional AI Forum with representation across the business.

Its role is to surface practical use cases, challenge assumptions, and agree clear boundaries before anything is scaled.

We then used an OKR (objectives and key results) cycle to impose focus and accountability.

The emphasis of our first cycle was not sophistication but readiness: defining acceptable use, addressing the risk of informal or "shadow" AI activity, and removing basic barriers to responsible experimentation.

In parallel, we’ve been doing the quieter work of organising data, tightening workflows, and strengthening our operating architecture.

Because AI layered onto fragile systems doesn't just introduce risk; it makes existing problems surface faster.

But the inverse is also true, and this is where the opportunity becomes compelling.

When AI is embedded into clean data, repeatable workflows, and well-designed decision processes, the benefits are not linear; they compound.

Each cycle of using AI on a solid foundation produces richer context, better support, and more leverage for the next cycle.

Over time, our belief is that the gap between firms that have done the groundwork and those that treat AI as an add-on widens quickly.

This is why we began with internal bottlenecks, where the payoff is tangible and the risk is low.

While these use cases may not be flashy, they build fluency, trust, and momentum.

Crucially, they also create the data and process feedback loops that make more advanced use cases viable later, which is where we’re already starting to see some exciting developments.

Two principles sit at the centre of our approach.

First, accountability remains human.

If a decision affects client capital, a person owns it, full stop.

Second, architecture matters more than tooling.

We are far less concerned with which model we use at any point in time than with whether our workflows, data structures, and control environment are designed for AI‑enabled execution.

Tools will keep changing; the foundations are what allow value to compound.

AI is compelling because it keeps improving, and increasingly, it improves itself.

That makes early, well‑governed adoption especially powerful.

For asset managers, this does not remove the human role; it sharpens it.

The firms that do well will be those that pair compounding technology with durable disciplines, sound process, thoughtful operating design, and teams who can apply judgement clearly when the facts are messy.

Laurium Capital is an authorised financial services provider (FSP 34142)

www.lauriumcapital.com

Disclaimer: This article is for information and discussion purposes only and should not be construed as financial advice or a recommendation to buy or sell any security.

Article from: Business Tech
