Part 1 - Land, Labour, Capital… and Data: The New Economics of Trusted AI
In 2026, AI has moved from experimentation to obligation for Boards and C-Suite teams. Global AI expenditure is projected to exceed roughly $1.8 trillion this year, yet the Salesforce State of Data and Analytics 2025 report shows that 53% of organisations cite poor data quality and availability as the leading barrier to adopting agentic AI. The result is a widening gap between AI ambition and realised value that is now primarily a data and trust problem, not a tooling problem.
With data now a recognised factor of production alongside land, labour, and capital, Boards need to answer a strategic question: are we funding and governing data with the same seriousness as our other factors of production, or treating it as exhaust from operations?
Grounding AI in trusted data converts spend into defensible returns; ignoring this transforms data into a structural liability in the face of increasingly agentic AI integration and tightening regulation.
The Fourth Industrial Revolution: Data Joins Land, Labour, and Capital
While previous industrial revolutions preserved Adam Smith’s triad of land, labour, and capital, the Fourth Industrial Revolution (4IR, Industry 4.0) has fundamentally reconfigured it. We see this in the valuation of digital giants who create immense wealth with minimal land or labour, and in the formal recognition of data as a production factor by major economies like China.
For financial services, this shift is not theoretical; it is an operational reality where data has replaced capital as the primary input for pricing, credit, and risk decisions. In this context, poor data governance does not merely limit upside; it distorts the very mechanism of value creation, turning data into a systemic liability and compliance risk as agentic AI scales across Industry 4.0 environments.
The Value Gap: Why AI Investments Are Failing to Deliver
The “data is the new oil” metaphor remains useful, but only when refined for the AI era. Unlike oil, data is not finite and involves far lower transfer and storage costs, but like oil, raw data is largely inert until it is governed, cleansed, and structured for use. In practice, research demonstrates that this ‘refinement’ step is where most organisations remain underinvested.
Deloitte’s 2025 poll of finance professionals underscores the point. While 80.5% believe AI tools will become standard within five years, only 13.5% were using agentic AI at the time, with trust in AI (including the underlying data and programming) cited as the top impediment at 21.3%, ahead of integration challenges and lack of skilled personnel. The binding constraint is confidence in the data, not access to algorithms.
The BARC Data, BI and Analytics Trend Monitor 2026 reinforces this reality: “High data quality standards are essential to increase flexibility for business users and strengthen their trust in data.” Organisations are learning that hallucinations, biased predictions, and inconsistent recommendations often trace back to noisy, incomplete, or poorly governed data rather than to model choice. When Boards underfund data foundations, they effectively accept higher model risk and a slower path to AI value.
Research confirms that robust data infrastructure, including automated quality checks, metadata, and lineage, cuts AI deployment cycles by up to 50%. Furthermore, companies prioritising internal data collection achieved around 6% productivity gains, compared with negligible gains for those relying on external solutions, evidence that operational success depends on data capability, not software procurement.
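To make “automated quality” concrete, the sketch below shows the kind of lightweight quality gate a pipeline might run before a data batch is promoted for AI training. The field names (account_id, amount), thresholds, and rules are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of automated data quality gates for an AI training pipeline.
# Field names and thresholds are illustrative assumptions, not a standard.
import pandas as pd

QUALITY_RULES = {
    "max_null_rate": 0.02,       # no more than 2% missing values per column
    "key_column": "account_id",  # duplicates on this key block promotion
    "amount_range": (0, 1_000_000),
}

def run_quality_gates(df: pd.DataFrame) -> list[str]:
    """Return a list of rule violations; an empty list means the batch passes."""
    violations = []

    # 1. Completeness: flag columns whose null rate exceeds the threshold.
    for column, rate in df.isna().mean().items():
        if rate > QUALITY_RULES["max_null_rate"]:
            violations.append(f"{column}: null rate {rate:.1%} exceeds threshold")

    # 2. Uniqueness: duplicate business keys corrupt downstream joins and labels.
    duplicates = df.duplicated(subset=QUALITY_RULES["key_column"]).sum()
    if duplicates:
        violations.append(f"{duplicates} duplicate {QUALITY_RULES['key_column']} values")

    # 3. Validity: out-of-range amounts usually indicate ingestion errors.
    low, high = QUALITY_RULES["amount_range"]
    out_of_range = (~df["amount"].between(low, high)).sum()
    if out_of_range:
        violations.append(f"{out_of_range} rows with amount outside [{low}, {high}]")

    return violations

if __name__ == "__main__":
    batch = pd.DataFrame({
        "account_id": [101, 102, 102, 104],
        "amount": [250.0, None, 99.0, 2_500_000.0],
    })
    for issue in run_quality_gates(batch):
        print("BLOCKED:", issue)
```

In practice, gates of this kind would sit alongside metadata and lineage capture, so that a rejected batch can be traced back to its source system and owner.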
For executives, the implication is straightforward: continuing to scale AI on untrusted data magnifies cost and risk without commensurate return.
Trust is the Currency: The Principal Barrier to AI Adoption
If data is the oil of the AI economy, trust is the currency. Without trust in data provenance, quality, and integrity, increased data volume does not translate into better decisions.
At a 2025 Innovation Breakfast hosted by New Icon, representatives from scientific research, national agencies, engineering, public sector leadership, and advanced technology sectors reached a clear consensus: “AI adoption isn't stalled by technology. It's stalled by uncertainty around data.” Organisations do not lack tools; they lack clear models for data ownership, ethical use, sharing, governance, and accountability.
Inside organisations, trust operates at the workforce level as well as in board discussions. Research in the Journal of Management Studies identifies four trust configurations, shaped by employees’ cognitive and emotional trust in AI:
- full trust
- full distrust
- uncomfortable trust
- blind trust
Employees in suboptimal configurations often change their behaviour (for example, withholding or manipulating digital footprints), which degrades AI performance and further erodes trust. In other words, weak trust can cause the very data quality issues Boards are seeking to overcome.
The business impact of workforce trust is material. Great Place to Work 2025 finds that organisations with high employee trust experience around 8.5x higher revenue per employee and roughly 3.5x stronger market performance. Companies that actively involve employees in AI integration report around 81% adaptability and double the innovation ratio compared to typical workplaces. AT&T’s billion-dollar reskilling initiative reduced turnover by about 25% among participants while building fluency in AI tools alongside human capabilities such as judgement, creativity, leadership, and resilience.
Externally, customer and client trust is similarly conditional. TD Bank’s 2025 survey of 2,500 U.S. consumers reported that 70% are comfortable with AI in fraud detection and 64% trust its use in credit score calculations, especially for “behind-the-scenes” applications. For financial services firms, this trust premium is a strategic asset, but one that can erode quickly if AI decisions appear opaque or unfair.
The risk of inaction is that trust decays faster than AI capabilities improve, leaving organisations with technically powerful systems that stakeholders refuse to rely on.
The Regulatory Imperative: From Principles to Enforcement
The regulatory environment now reflects the macroeconomic elevation of data and the centrality of trust. The EU AI Act represents the world’s first comprehensive AI regulation with concrete enforcement measures, entering its first major enforcement cycle in 2026. Orange Business Services’ January 2026 analysis describes AI governance as “one of the largest compliance-driven transformation programmes for the coming years.”
The Act categorises AI systems into four risk tiers, with high-risk applications (including credit scoring, insurance pricing, recruitment, and clinical decision support) subject to conformity assessments, documentation, and human oversight obligations. Fines parallel or exceed GDPR levels, making AI governance a board-level exposure. For financial services, this directly affects core revenue-generating and risk-bearing activities.
Other jurisdictions are moving along similar lines. The UK’s Financial Conduct Authority maintains a principles-based, outcomes-focused approach, avoiding AI-specific rules given technology’s rapid evolution “every three to six months,” while reinforcing transparency, explainability, and accountability under the existing Senior Managers and Certification Regime (SM&CR). In the United States, Colorado’s Senate Bill 24-205, effective February 2026, requires financial institutions to disclose how AI-driven lending decisions are made, including data sources and performance evaluation methods, with the explicit goal of reducing discrimination in consequential decisions.
Across these regimes, the common requirements are clear:
- Training and test datasets must be relevant, representative, and free of bias, and documentation must demonstrate compliance (a minimal example of one such check is sketched after this list);
- Privacy Impact Assessments and Fundamental Rights Impact Assessments are required for high-risk systems; and
- “Black Box” models are increasingly unacceptable in high-stakes domains.
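To make the first requirement more concrete, the sketch below computes a simple selection-rate parity check across a protected attribute. The column names, the 0.8 ratio threshold, and the sample data are assumptions for illustration; they do not represent a prescribed regulatory test.

```python
# Illustrative sketch of one representativeness/bias check a high-risk system
# might document: selection-rate parity across a protected attribute.
# Column names, the 0.8 threshold, and the data are assumptions,
# not a prescribed regulatory test.
import pandas as pd

def selection_rate_parity(df: pd.DataFrame, group_col: str, outcome_col: str,
                          threshold: float = 0.8) -> pd.Series:
    """Return each group's selection rate divided by the highest group's rate;
    ratios below `threshold` warrant investigation and documentation."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    flagged = ratios[ratios < threshold]
    if not flagged.empty:
        print(f"Disparity flagged for groups: {list(flagged.index)}")
    return ratios

if __name__ == "__main__":
    applications = pd.DataFrame({
        "age_band": ["18-30"] * 100 + ["31-60"] * 100,
        "approved": [1] * 55 + [0] * 45 + [1] * 80 + [0] * 20,
    })
    print(selection_rate_parity(applications, "age_band", "approved"))
```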
Boards that treat these expectations as a narrow compliance issue rather than as a design principle for their data and AI architecture risk higher regulatory friction and slower growth.
Financial Services: From Pilot to Profit Engine
Financial services offers concrete proof that trusted data ecosystems turn AI into an operational profit engine. Mastercard Decision Intelligence Pro illustrates this clearly. The generative AI model analyses about 125 billion transactions annually, assessing transaction context in under 50 milliseconds. It has improved fraud detection rates by an average of roughly 20%, and up to 300% in some segments, while reducing false positives by more than 85%. This combination of better protection and less customer friction is only possible because of the quality and governance of Mastercard’s data and models.
JPMorgan Chase’s Contract Intelligence (COIN) platform and newer LLM suite automate the interpretation of commercial loan agreements, saving hundreds of thousands of labour hours annually while reducing manual error rates. The bank’s approach, starting with contained, well-understood document workflows, shows how internal trust in AI can be built through repetitive, low-risk success before extending to higher-stakes areas.
In insurance, Zurich Insurance is piloting generative AI to streamline claims data extraction, focusing on identifying claims trends and surfacing relevant documents to handlers. Zurich explicitly positions this as augmentation, making claims handlers faster and more effective rather than replacing them, which aligns with the workforce trust dynamics described earlier.
Beyond individual firms, examples such as GlobalTrust Insurance illustrate what is possible when risk management is redesigned around data and AI. GlobalTrust integrated predictive analytics and ensemble learning across structured medical records and unstructured social media data, improving risk prediction accuracy by around 30% and reducing operational costs.
These cases share a common pattern: investment in data infrastructure, governance, and stewardship preceded AI scale. The risk for Boards is assuming that similar results can be achieved without comparable groundwork.
The Rise of Data and AI Stewards
Treating data as a factor of production demands clear ownership. Traditional data stewardship has evolved from passive oversight to active contribution to business value, analytics maturity, and AI readiness.
Stewards provide crucial context: why datasets might be incomplete, which fields are most reliable, and how historical changes in data collection affect current trends. When AI teams operate without this context, models diverge from operational reality. The modern Data/AI Steward requires a distinct skills matrix:
- Semantic resolution: disambiguating “grey area” data definitions for AI training to prevent context-driven hallucinations.
- Model forensics: tracing outputs back to training inputs for EU AI Act explainability and internal audit purposes.
- Audit trail management: maintaining immutable logs of human–agent interactions to manage liability in regulated decisions (a minimal sketch follows this list).
- Ethical guardrailing: defining “safe bounds” for agentic AI autonomy to reduce reputational and conduct risk.
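As a minimal sketch of the audit-trail capability described above, the example below hash-chains each human–agent interaction record to its predecessor, so that retroactive edits break the chain and become detectable. The event fields and in-memory storage are simplifying assumptions, not a production design.

```python
# Minimal sketch of a tamper-evident audit trail for human-agent interactions.
# Hash-chaining each entry to its predecessor makes after-the-fact edits
# detectable; event fields and in-memory storage are illustrative assumptions.
import hashlib
import json
import time

class AuditTrail:
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, actor: str, action: str, detail: dict) -> dict:
        """Append an event linked to the hash of the previous event."""
        prev_hash = self._entries[-1]["entry_hash"] if self._entries else "GENESIS"
        entry = {
            "timestamp": time.time(),
            "actor": actor,          # e.g. an agent identifier or a staff ID
            "action": action,        # e.g. "recommend_decline", "override"
            "detail": detail,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks a hash link."""
        prev_hash = "GENESIS"
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if body["prev_hash"] != prev_hash:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
                return False
            prev_hash = entry["entry_hash"]
        return True

if __name__ == "__main__":
    log = AuditTrail()
    log.record("credit_agent_v3", "recommend_decline", {"application": "A-102"})
    log.record("underwriter_17", "override",
               {"application": "A-102", "reason": "verified income"})
    print("chain intact:", log.verify())
```

A production version would persist entries to write-once storage and anchor periodic hash checkpoints, but the chaining principle is the same.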
A more specialised “AI Steward” role is also emerging: an ethical steward responsible for ensuring AI agents operate within defined constraints, align with organisational values, and maintain transparency in decision-making. This is particularly important in heavily regulated environments where AI outcomes affect safety, access to finance, or other fundamental interests.
The key insight for Boards is that stewardship is not an optional add-on; it is a control function and value driver. Organisations that formalise Steward participation in AI projects are avoiding many pitfalls associated with ungoverned data and building a defensible foundation for AI at scale. Those that do not risk fragmented ownership and untraceable decisions.
Workforce Architecture: From Tool to Strategy
As AI capabilities become more agentic, leading organisations are shifting from treating AI as a tool to treating it as a workforce strategy. The World Economic Forum’s January 2026 blueprint stresses that capturing AI value requires simultaneous transformation of workforce, operating models, and governance. Technology changes alone are insufficient.
The Microsoft Work Trend Index 2025 highlights a “capacity strain”: while 78% of leaders plan to add AI-centric roles, employees often lack the skills to verify AI outputs. This verification literacy gap is emerging as one of the most significant workforce challenges. Organisations cannot simply hire their way into AI maturity, because experienced AI talent does not exist at the required scale. Developing capability from within becomes a differentiator.
Research links high-augmentation workflows to a wage premium of approximately 56%, and finds that neurodiverse team composition can be more predictive of innovation output than raw AI infrastructure spend. Middle managers need to act as “AI augmentation coaches,” translating capability into performance by guiding how humans and AI systems work together. There will be a fundamental change in how businesses develop “thinking” skill sets across their organisations.
We see a clear pattern: organisations that integrate responsible AI training into onboarding, performance evaluation, leadership development, and culture are better positioned to build and sustain trust in AI. Those that treat training as a one-off or optional intervention struggle to move beyond pilot projects.
Strategic Imperatives for Senior Leaders
Gartner predicts that 40% of enterprise applications will leverage task-specific AI agents by 2026. Salesforce describes a “GenAI Divide” between organisations with trusted data foundations and those that simply possess expensive tools. For Boards and C-suites, the question is no longer whether to invest in AI tools, but how to ensure data and trust keep pace with that investment.
Based on current evidence, senior leaders should prioritise five actions:
- Reclassify and fund data as a factor of production
- Mandate a data-trust audit across critical workflows
- Build governed, interoperable data environments
- Institutionalise Data and AI Stewardship, and consider ISO/IEC 42001 as a benchmark for governance maturity
- Close the verification literacy gap in the workforce by treating AI literacy enablement as part of the control environment
The through-line across these actions is that responsible AI and high-performing AI are converging. Underinvesting in governance and data is therefore both a risk choice and a performance choice.
The Question Every Board Must Answer
The economic verdict is clear: data has joined land, labour, and capital as a fundamental factor of production. In this new economy, trust is the currency that converts raw data into AI capability and competitive advantage.
The return on investment of governance is measurable. Organisations with trusted data ecosystems are realising:
- up to 300% improvement in fraud detection
- around 25% gains in risk prediction accuracy
- up to 50% reduction in compliance incidents
- substantial reductions in labour hours
Conversely, those treating data as a by-product are finding that AI investments add complexity without delivering value. They remain "tool-equipped but value-starved."
The Board’s imperative is now binary: either fund and govern data with the same rigour as capital, or cede the market to those who do.
In Part 2, we will shift the focus from an economic and governance lens to execution strategies that enable organisations to operationalise trusted AI at scale.
