In 2023, the phrase ‘System of Action’ entered the tech lexicon, popularized by David Yuan as the next evolution beyond Geoffrey Moore’s ‘Systems of Engagement’ and the earlier ‘Systems of Record’ that defined the SaaS era.
Systems of Action don’t just record or inform decisions — they execute them. Real-time. Dynamic. Embedded.
But beneath every action lies a more human question:
Can I trust this?
The next shift in AI isn’t about speed. It’s about certainty.
It’s about building the System of Trust — where predictions are tied to proof, outcomes are verified, and confidence compounds into adoption and growth.
This isn’t optional. It’s existential. Without trust, AI will stall at the moment of adoption.
The confidence gap
Spend five minutes with ChatGPT and you’ve likely felt it — that flicker of hesitation, that gut check: is this true?
That pause isn’t unique to casual users. It happens in boardrooms, on factory floors, and inside planning sessions everywhere. AI can now recommend what to stock, which drugs to develop, and what strategic moves to make next. But people hesitate before acting.
Why? Because every AI output raises questions:
- What is this based on?
- Can I trust the data?
- Whose interest is it serving?
This is the confidence gap — the space between what AI recommends and whether people believe it enough to act.
And it’s not just emotional. It’s mathematical. AI outputs are probabilities, not certainties. A forecast, a fraud alert, or a customer match score is a range of possible outcomes (a confidence interval), not a single answer. When a model says there’s an 87 per cent chance of success, that shadow of 13 per cent can mean millions in losses, broken relationships, or missed opportunities.
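To see how that shadow translates into dollars, here is a back-of-the-envelope sketch in Python. Every figure in it is hypothetical, invented purely for illustration:

```python
# Expected value of acting on an "87 per cent" recommendation.
# All figures are hypothetical, for illustration only.

p_success = 0.87
gain_if_right = 500_000     # e.g. margin captured if the call is right
loss_if_wrong = 4_000_000   # e.g. write-downs and broken contracts if it's wrong

expected_value = p_success * gain_if_right - (1 - p_success) * loss_if_wrong
print(f"Expected value of acting: ${expected_value:,.0f}")
# -> Expected value of acting: $-85,000
```

On these numbers, acting on an 87 per cent call is a losing bet, because the rare failure is far more expensive than the common win. That is the arithmetic behind the hesitation.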
The result is familiar: companies spend millions on AI, pilots succeed, but scaling fails. The technology works, but humans won’t pull the trigger.
This isn’t a tech problem. It’s a trust problem.
How trust is built
Trust in AI doesn’t come from marketing or better dashboards. It comes from feedback.
Every prediction must be tied to its outcome. Every recommendation tested against reality. Every action looped back into the system.
That’s what closes the confidence gap.
Take demand forecasting. An AI recommends adjusting inventory. In a System of Action, the adjustment is made. In a System of Trust, the result is tracked:
- Did sales match the forecast?
- Were stockouts avoided?
- Did margins hold?
Each cycle becomes a proof point, and proof points compound into confidence. Over time, organisations learn when to trust AI and when to keep human oversight. A minimal sketch of that feedback loop follows.
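The snippet below is a minimal illustration in Python, not a production system; the ledger structure, the field names, and the 10 per cent tolerance are assumptions made for the example:

```python
from dataclasses import dataclass

# A minimal prediction-to-outcome ledger: every forecast is logged,
# then reconciled against reality once actuals arrive.

@dataclass
class ForecastRecord:
    sku: str
    forecast_units: int
    actual_units: int | None = None  # filled in after the sales period closes

ledger: list[ForecastRecord] = []

def log_forecast(sku: str, forecast_units: int) -> ForecastRecord:
    record = ForecastRecord(sku, forecast_units)
    ledger.append(record)
    return record

def close_the_loop(record: ForecastRecord, actual_units: int,
                   tolerance: float = 0.10) -> bool:
    """Reconcile a forecast with its outcome; True counts as a proof point."""
    record.actual_units = actual_units
    error = abs(actual_units - record.forecast_units) / max(record.forecast_units, 1)
    return error <= tolerance

# One cycle: predict, act, then verify.
record = log_forecast("SKU-123", forecast_units=1_000)
print(close_the_loop(record, actual_units=940))  # 6% error -> True
```

The point is not the code but the discipline: no forecast leaves the system without a date with reality.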
The infrastructure of trust
To build trust, we need infrastructure that most current AI systems don’t provide.
Four areas stand out:
- Explainability that speaks business. Not just model diagnostics, but reasoning in plain language that leaders understand. When AI suggests a pricing change, the “why” should be clear in terms of market dynamics and history.
- Outcome tracking. Predictions must be tied to results, even months later. It’s not enough to know an AI was wrong — we need to know why.
- Confidence calibration. A system that claims 90 per cent confidence but delivers 70 per cent accuracy erodes trust. Honest calibration builds it; a simple version of this check is sketched after this list.
- Governance frameworks. Not every decision should be automated, even if AI allows it. Trust requires clear rules on which decisions can be delegated, validated, or kept in human hands.
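The calibration check in particular is straightforward once predictions and outcomes are logged side by side. Here is a minimal sketch, again with invented numbers: group predictions by the confidence the model claimed, then compare what was claimed with what was delivered:

```python
from collections import defaultdict

# Hypothetical outcome log: (confidence the model claimed, was it right?)
results = ([(0.9, True)] * 7 + [(0.9, False)] * 3
           + [(0.6, True)] * 3 + [(0.6, False)] * 2)

buckets: dict[float, list[bool]] = defaultdict(list)
for claimed, correct in results:
    buckets[claimed].append(correct)

for claimed, outcomes in sorted(buckets.items()):
    delivered = sum(outcomes) / len(outcomes)
    print(f"claimed {claimed:.0%}, delivered {delivered:.0%} "
          f"(gap {claimed - delivered:+.0%})")
# claimed 60%, delivered 60% (gap +0%)
# claimed 90%, delivered 70% (gap +20%)
```

A persistent gap like the 90 per cent row is exactly the overconfidence that erodes trust, and surfacing it is the first step towards fixing it.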
This isn’t just about smarter models. It’s about systems that are reliable, accountable, and transparent.
Why this evolution is inevitable
The System of Trust isn’t optional. It’s inevitable.
As AI touches more critical functions, the cost of the confidence gap will become unbearable. Companies that figure out how to close it will accelerate. Those that don’t will remain stuck in endless pilots.
The early signals are already here:
- Companies investing in outcome-tracking platforms.
- Regulators demanding explainability.
- Customers asking harder questions about AI-driven decisions.
The winners of the next decade won’t just be those with the best data or the most powerful models. They’ll be those who build systems that earn trust.
Because in the end, AI is only as powerful as the trust we place in it. And trust — unlike compute or datasets — can’t be bought. It must be earned, verified, and maintained.
That’s not just the next frontier.
It’s the foundation on which all future AI value will be built.
- Christopher Bartlett is the founder and CEO of Tapestry

