The AI Adoption Paradox: We’re Using It More While Trusting It Less
ChatGPT’s release on November 30, 2022 changed everything. Since then, AI has gone from mind-blowing experimentation to an embedded part of everyday life.
A recent report from KPMG and The University of Melbourne (‘Trust, attitudes and use of artificial intelligence: A global study 2025’) found that two-thirds of people now intentionally use AI, as opposed to encountering it embedded in other tools like social media. Yet trust is lagging. Over half of respondents said they were wary of AI, and in many countries trust levels have actually declined since 2022, even as adoption has surged.
In other words - we use AI more and more, but trust it less and less.
Why We’ve Embraced Tools We Don’t Fully Trust
Part of the answer lies in the pace and pressure of modern work. Microsoft’s ‘Work Trend Index’ report calls it the ‘infinite workday’ - constant interruptions, blurred boundaries, and a growing sense of never being ‘done’.
40% of people who are online at 6am are reviewing email for the day’s priorities.
The average worker receives 117 emails and 153 instant messages daily and is interrupted every two minutes.
29% are still working at 10pm, and many are trying to catch up on weekends.
1 in 3 employees say the pace of work over the past five years has made it impossible to keep up.
When the workday and to-do list never really end, AI stops being a novelty and starts to feel like a lifeline. The KPMG/UM report reflects this reality - 58% of employees now intentionally use AI regularly at work, with nearly a third using it weekly or daily.
People are reaching for whatever tools help them meet deadlines, manage the flood of incoming information, or tame the backlog. It’s pragmatic. But it means AI is often being woven into workflows faster than organisations recognise or acknowledge.
And yet 66% of users say they rely on AI outputs without fully checking them. Under relentless pressure, speed outpaces scrutiny.
That tension - rising adoption with declining trust - is at the heart of the paradox.
Why This Paradox Demands an Experimental Mindset
The instinctive response to any major strategic change is often to slow down, break it down, and only move once we understand it fully. But AI breaks that pattern.
As Straub, Groth, and Altenburg write in the California Management Review -
“AI isn’t waiting for legacy organizations to catch up. Technology advances exponentially, while legacy organizations remain stuck in outdated adoption cycles.”
AI isn’t static. It’s complex, fast-moving, and context-sensitive. Mainstream offerings like OpenAI’s GPT models and Anthropic’s Claude have already moved from a prompt-based norm to an agentic AI norm. If we wait until we can map every risk or define every guardrail, the technology will already have outpaced our plans.
Deloitte’s ‘AI at a Crossroads: Building trust as the path to scale’ report makes this explicit: only 9% of organisations have a ‘ready’ level of AI governance, yet companies that commit to building trust frameworks for AI use see higher adoption rates and measurable performance gains, even while those frameworks are still being developed and refined.
In other words, you can’t build trust without use. You build both together. Yet use alone won’t result in trust.
The California Management Review article frames it provocatively - “AI initiatives don’t fail. Organizations do.”
Their research shows that pilots often get stuck as innovation theatre that fails to take root in organisational norms, because companies wait for perfect understanding instead of creating experimentation pathways.
This is the heart of the mindset shift - we can’t fully understand AI before we use it. We will learn to understand it by using it.
Global Perspectives: AI for Good 2025
One of the clear themes from the AI for Good 2025 Global Summit wasn’t just what AI could do, but how quickly it’s already reshaping work across a wide variety of sectors, and how readily experimentation is being embraced.
At the summit, speakers showcased AI projects in healthcare, climate resilience, food security, and humanitarian aid, many of them moving from concept to field testing in timeframes that would have previously been thought impossible.
No matter your sector, the bigger risk today is not experimenting with AI. It’s ignoring the potential.
That urgency reframes the trust-adoption paradox. Waiting until we have perfect clarity or full trust before engaging with AI isn’t neutral - it’s a decision to stand still while the world moves.
Or, as Carey Nieuwhof puts it in regard to leadership in general -
“The gap between how quickly you change and how quickly things change is called irrelevance. The bigger the gap, the more irrelevant you become.”
The summit’s examples made it clear that thoughtful experimentation isn’t a nice-to-have - it’s the only way to keep pace with a technology that’s moving this fast. In the face of rapid AI evolution, passivity is riskier than participation.
As the quote famously (and perhaps apocryphally) attributed to W. Edwards Deming goes -
“It is not necessary to change. Survival is not mandatory.”
What This Means for Each of Us
Of course, this doesn’t begin as an organisational strategy question. It starts with individual choices and mindsets.
Acknowledge the paradox - AI can help tremendously, and there are also good reasons not to fully trust it. That honesty keeps us intentional.
Adopt a beginner’s mindset - Keep trying out new, small, low-risk experiments with AI. Keep learning. Pay attention. Try things, and see what is useful and what isn’t.
Build critical literacy - Learn to question outputs. Verify sources and assumptions. Understand the data sets behind the tools. Ask what might be missing or where bias might have crept in.
Why Trust and Experimentation Need to Grow Together
The KPMG/UM report makes it clear that trust doesn’t automatically follow adoption - it has to be earned.
And as the Deloitte report shows, earning that trust requires use - it’s the act of experimenting, learning, and adjusting that builds it.
In other words, you don’t wait until you completely trust AI to start using it. You build that trust - and learn where trust shouldn’t be placed - by using it well.
And using AI well doesn’t mean using it perfectly. It means using it in a way that is informed, intentional, and iterative.