February 18, 2026

The AI anxiety paradox: Why insight teams are right to be cautious

Insight teams are under more pressure than ever. The board wants AI to accelerate speed to insight. Finance wants it to cut costs. Everyone wants more impact, and they want it now.

Under pressure

I’ve spoken with a lot of insight leaders, and most of them will tell you the same thing – they feel squeezed. They’re being handed tools they don’t fully understand and asked to stake their professional credibility on the outputs.

While AI vendors promise faster, cheaper, better research, the people who actually understand research methodology are asking questions. What data trained these tools? How do we know the outputs are accurate? What happens when we get it wrong?

That anxiety is well-founded. Insight leaders who feel it aren’t resisting change. They’re being cautious about tools that could fundamentally undermine the decisions they’re paid to inform.

The 5% problem

Take synthetic respondents. AI-generated survey responses can simulate thousands of consumers every hour, at a fraction of the cost of recruiting real people. Some providers claim 95% accuracy compared to traditional research.

Here’s what that pitch misses: it only takes 5% inaccurate data to drive a wrong decision. If you’re building a global segmentation that will shape your marketing strategy, product development, and commercial priorities for the next three years, 95% accuracy isn’t good enough. The 5% you got wrong might be the segment that matters most. Miss a critical insight. Misread a behavioural driver. Launch the wrong product into the wrong market. The cost of that mistake isn’t measured in research budgets. It’s measured in millions of pounds of wasted investment.
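To see why, it helps to run the arithmetic. A strategy rarely hangs on a single number; it rests on dozens of findings. Here is a minimal sketch, assuming (purely for illustration; this assumption is ours, not the vendors’) that each finding is independently correct with 95% probability:

```python
# A minimal sketch of how per-finding accuracy compounds.
# Assumption (ours, for illustration only): each finding is
# independently correct with probability 0.95. Real errors in
# synthetic-respondent data are likely correlated, so treat
# these numbers as intuition, not a forecast.

accuracy_per_finding = 0.95

for n_findings in (1, 5, 10, 20):
    p_all_correct = accuracy_per_finding ** n_findings
    print(f"{n_findings:>2} findings -> P(all correct) = {p_all_correct:.0%}")

# Prints:
#  1 findings -> P(all correct) = 95%
#  5 findings -> P(all correct) = 77%
# 10 findings -> P(all correct) = 60%
# 20 findings -> P(all correct) = 36%
```

Correlated errors would shift the exact figures, but the direction holds: the more findings a decision leans on, the more that “missing” 5% compounds.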

Opaque tools

Synthetic data is the most visible example of AI risk in research, but it’s not the only one. Many AI tools operate as black boxes. They take inputs, produce outputs, and what happens in between stays unclear. What data sources are being used? How are models trained? What biases are baked in? For most tools, the honest answer is: we don’t really know.

This puts insight leaders in a bind. They’re being asked to champion AI adoption while unable to explain how the tools actually work. They’re being asked to stake their credibility on outputs they can’t fully interrogate. The feedback we hear consistently is that this opacity creates genuine anxiety – not because people resist technology, but because their job is to provide reliable intelligence. And you can’t do that if you can’t trust your tools.

The human element

For years, the best insight work has been built on trusted advisor relationships. Researchers who understand the business context, know the history, and can interpret findings through the lens of what the organisation actually needs.

Many AI tools promise to skip that step. Plug in your brief, get your insights, skip the expensive consultants. But in doing so, they skip the judgment, context, and pattern recognition that makes insight actually useful. Businesses get outputs faster, but lose the human intelligence that turns those outputs into decisions. They save money on the research, then waste it on the wrong strategy.

A principles-based response

So what’s the alternative? Reject AI entirely and stick with legacy methods? That’s not realistic, and it’s not smart. The efficiency gains are real, and the possibilities are genuine. Ignoring AI isn’t caution, it’s denial.

The better path is a principles-based approach that captures AI’s benefits while maintaining the rigour and human oversight that good research requires.

At STRAT7, we’ve developed six principles that guide how we build and deploy AI across our work. These emerged from extensive conversations with insight leaders facing exactly these pressures, and they shape everything we do.

Humans in control

AI handles the heavy lifting. Our people handle the heavy thinking. Every output gets human oversight before it shapes a recommendation.

Purpose-led adoption

We don’t use AI for everything. We use it where it delivers tangible, high-ROI value. That means being selective, not performative.

Governance and trust

We’re transparent about what data sources we use, how we test our AI agents, and what they can and can’t do. No black boxes.

Bias awareness

Large language models are trained on data that carries cultural biases and assumptions. We actively work to understand and counteract those biases in our outputs.

Continuous improvement

Our AI tools aren’t finished products. We constantly refine them based on client feedback and evolving best practice.

Transparency over salesmanship

If AI won’t work for a particular brief, we say so. If there are trade-offs, we name them. The trusted advisor relationship matters more than the technology sale.

These principles aren’t just internal guidelines. They’re commitments we make to clients who need to trust that the intelligence they’re receiving is reliable.

The real question

The conversation about AI in research has focused on efficiency metrics. How much faster? How much cheaper? How many more data points?

Those questions matter, but they’re not the most important ones.

The most important question is: can you trust the output enough to bet your strategy on it?

For synthetic data and black-box tools, the honest answer is often no. The accuracy isn’t proven, the methods aren’t transparent, and the biases aren’t understood.

For AI that’s built on sound principles, with human oversight, transparent methods, and a clear understanding of its limitations, the answer can be yes. Not blind trust, but earned confidence based on rigorous process.

Insight teams feeling anxious about AI aren’t being difficult. They’re doing their job. The challenge now is to channel that healthy scepticism into demanding better. Better tools, better transparency, better integration of human judgment.

Because the goal was never to do research faster. It was to make better decisions. And that requires intelligence you can actually trust.

Accelerate your insights using Nucleus, our AI hub of proprietary agents working alongside our consultants to speed up insight delivery and unlock new possibilities.
