Five predictions for the future of AI governance in retail

The coming months will mark a turning point for AI. After years of explosive capability growth, rising regulatory scrutiny, and public debate over risk, we are entering a new phase of responsible acceleration. In retail and CPG – sectors defined by thin margins, customer trust and fast‑moving competition – how organisations govern AI will matter just as much as the AI itself.

Here are the five forces we expect to shape the near future of AI governance, and what they mean for the businesses leading the next wave of transformation.

 

1. Agentic AI becomes the new governance frontier

Agentic AI – systems capable of taking autonomous, multi‑step actions and interacting with one another – is shifting from theory to reality in 2026. Regulators such as the UK ICO warn that these systems introduce new behaviours, new dependencies, and new challenges for oversight. Their prediction is clear: agentic AI will soon be woven into everyday digital experiences, and governing it will require entirely new approaches.

For retail and retail media, the potential applications are wide‑ranging:

  • Dynamic pricing and promotions executed end‑to‑end by AI agents
  • Autonomous replenishment decisions based on supply, weather, competitor activity
  • Retail media creative optimisation orchestrated via AI content agents
  • Supplier onboarding or QA agents handling verification and documentation
  • Customer service routing and resolution performed by AI instead of rule‑based bots

Autonomy brings complexity. Liability questions will dominate the next wave of regulation: who among the various actors in the AI use chain is responsible when an agent acts incorrectly or in violation of the law? This is where governance becomes a competitive advantage.
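One practical governance pattern for agentic systems is a human‑in‑the‑loop guardrail: the agent may act freely within defined bounds, but actions beyond a threshold are escalated for review. The sketch below is illustrative only – the `PriceAction` type, the 10% threshold and the `requires_human_review` check are hypothetical, not a reference to any specific product – but it shows how a simple, auditable escalation rule might sit between a pricing agent and execution.

```python
from dataclasses import dataclass


@dataclass
class PriceAction:
    """A price change proposed by an autonomous pricing agent (hypothetical)."""
    sku: str
    current_price: float
    proposed_price: float


def requires_human_review(action: PriceAction, max_change_pct: float = 10.0) -> bool:
    """Escalate agent-proposed price moves that exceed a change threshold.

    Threshold-based escalation is one simple oversight pattern; a real
    policy would also enforce price floors, legal constraints and logging.
    """
    change_pct = abs(action.proposed_price - action.current_price) / action.current_price * 100
    return change_pct > max_change_pct


# A 25% cut is escalated to a human; a 2% tweak can proceed autonomously.
big_cut = PriceAction("SKU-123", 10.00, 7.50)
small_tweak = PriceAction("SKU-123", 10.00, 9.80)
```

Rules like this also help with the liability question: they create a documented boundary between what the agent decided alone and what a human approved.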

 

2. The ‘normalisation’ of AI inside organisations

Inside organisations, most workers may not build agents or perform advanced tasks with AI, but they will increasingly draw on it in their day‑to‑day work. AI will shift from buzzword to practical tool, integrated into routines with less hype.

Within organisations, we will see:

  • AI integrated into everyday workflows rather than positioned as special projects
  • Less focus on “big AI launches” and more on incremental, embedded automation
  • More realistic expectations of what AI can (and can’t) deliver
  • Teams becoming more discerning about where AI genuinely adds value

Debate about high‑risk AI will also recede for organisations that sit outside highly regulated sectors, do not build frontier AI, and have reached a sufficient level of maturity in AI governance. That shift will let them concentrate on the practical risks and benefits of their own use of AI, whether innovative or embedded in routine tasks and automated processes.

 

3. AI laws and the global policy landscape: divergence and reinvention

Different approaches to AI regulation across jurisdictions, along with geopolitical tensions in domains such as digital sovereignty and AI innovation, make the global landscape uncertain and difficult for multinational companies to navigate.

The United States is experiencing internal tensions between state‑level legislation and federal initiatives aiming to pre‑empt state laws on AI. This is evidenced by the recent Executive Order 14365, which signals a move toward a soft federal policy framework for AI. We expect the US to continue refraining from adopting a federal AI law, while the federal government brings judicial challenges against certain state laws.

The European Union is debating digital sovereignty: how much digital infrastructure should be localised to safeguard fundamental rights and strategic interests, versus staying open to international technologies. At the same time, the EU is reassessing its stance on data protection and AI regulation, part of a broader push to encourage AI investment and innovation by reducing compliance burdens and targeting rules at the most serious risks posed by general‑purpose and advanced AI models.

Although uncertainty remains about the changes to the AI Act included in the Digital Omnibus (a new set of proposed reforms to data and AI laws), we anticipate that more decisive and defined requirements will eventually be established, though this may take several months. The EU remains committed to upholding human rights, which are central to its principles and digital policies, but there may be some easing of requirements for organisations that are not directly developing or providing frontier or general‑purpose AI systems.

 

4. Enforcement is becoming more preventative

Regulators are responding to AI advancements more quickly than legislative bodies, effectively leveraging existing laws, such as privacy regulations, to ensure organisations remain accountable for their use of AI. We anticipate that regulators worldwide will increasingly investigate issues like:

  • Algorithmic discrimination (e.g. differential pricing)
  • Automated systems that are opaque or difficult to explain
  • Hidden or undeclared AI features added by vendors
  • AI exploiting vulnerabilities, for example of minors or of people in precarious socio‑economic circumstances
  • The application of AI in HR, especially recruitment processes

Encouragingly, enforcement is moving away from a purely punitive approach toward one that emphasises prevention and managing risk.

Authorities are shifting focus from reactive measures, such as fines and investigations, to proactive, risk-based, and preventive strategies. There's a growing emphasis on harmonisation and cross-border cooperation. Many regulators are also seeking feedback from industry to gain deeper insights into technologies and associated risks, promoting collaborative development of practical solutions. This approach benefits organisations already committed to responsible AI governance and risk management, positioning responsible AI as a valuable competitive advantage.
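Preventive, risk‑based governance often starts with internal monitoring of the very signals regulators now look for, such as differential pricing. As a minimal sketch (the function name, groups and figures below are all hypothetical), an audit team might track the ratio of average prices offered to different customer groups and flag large gaps for investigation:

```python
def price_disparity_ratio(prices_group_a, prices_group_b):
    """Ratio of mean personalised prices offered to two customer groups.

    A value far from 1.0 is a signal to investigate further, not proof of
    discrimination: legitimate cost or demand factors must be reviewed.
    """
    mean_a = sum(prices_group_a) / len(prices_group_a)
    mean_b = sum(prices_group_b) / len(prices_group_b)
    return mean_a / mean_b


# Example: personalised prices logged for two (illustrative) groups.
ratio = price_disparity_ratio([10.0, 10.5, 9.5], [12.0, 12.5, 11.5])
```

Here the ratio of roughly 0.83 would prompt a closer look. Running checks like this before a regulator asks is exactly the kind of proactive posture described above.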

 

5. Frontier AI and the global ethical debate reach a new peak

Global experts agree that frontier AI is entering an “adolescence phase,” becoming a force multiplier across domains. Debates now focus on self‑developing AI and the need for international safety standards.

Retailers and CPGs may not build frontier models, but they use them – which means indirect exposure to risk and potential liability for how their use of AI affects customers.

AI risk literacy is now crucial for all employees, and it will be increasingly expected in the market and by regulators across the AI supply chain.
