EY Study Reveals Deep Divide Between Executives and Users on AI Trust and Risks

A new EY study highlights a sharp contrast between executives and users on the trustworthiness of AI, with leaders optimistic about integration and users voicing concerns over privacy, transparency, and accountability.

A new international study by EY, based on interviews with 975 top executives, reveals significant differences in how company leaders and everyday users view the trustworthiness of artificial intelligence, particularly regarding data privacy, accuracy, and transparency.

According to the findings, most senior executives—including CEOs, CFOs, HR heads, and tech leaders—believe their organizations have largely implemented AI and tapped into its benefits. Nearly two-thirds think their companies and customers are aligned in how they perceive AI’s risks and rewards.

But the reality, warns George Tilesch, EY's lead AI expert for the AI Confidence programme, is that most firms have only scratched the surface of AI's real potential. One major misconception is equating AI with general-purpose tools such as large language models (e.g. ChatGPT), which narrows how businesses think about and apply the technology—ultimately putting them at risk of falling behind.

The study found users are much more wary than executives when it comes to AI’s downsides. Key concerns include lack of transparency in decision-making, inadequate data protection, and unclear accountability for system outcomes. The spread of misleading content and potential for manipulation top the list of perceived dangers—while fears of job losses ranked low for both groups.

Despite the growing use of AI in business, responsible integration remains a challenge. Over half of executives admitted that creating effective governance frameworks for current AI models is difficult, and emerging applications—like synthetic data, autonomous robots, or AI agents—introduce even more uncertainty.

The study also highlights a divide within the C-suite itself. CEOs expressed more concern about AI's responsible use than their peers in finance, marketing, HR, and risk management. This discrepancy stems partly from the fact that many departmental leaders either do not feel responsible for AI's use or lack a broad understanding of its company-wide implications.

For AI to be truly effective, Tilesch argues, businesses must rethink their processes, identify where AI delivers real value, and invest strategically. This also means educating employees, adapting organizational structures, and building stable governance systems.

‘Those decision-makers who recognize this early and work with experienced, global-minded experts will secure their future success,’ Tilesch said.

As companies begin planning for next-gen AI tools, EY warns that many are still underestimating the risks. Without deeper awareness and stronger oversight, the gap between potential and practice will only widen.


Related articles:

Artificial Intelligence Creates More Jobs than It Replaces, Study Finds
New Hungarian AI Strategy Nears Government Approval
