
AI’s Regulatory Crossroads: Innovation vs. Control

The surge in Generative AI (GenAI) – where users can input a variety of prompts to generate new content – has revolutionised the AI landscape. It’s a time of excitement (and mild fatigue for data science practitioners), but there’s no denying this latest wave of AI developments has been extremely interesting. New large language models (LLMs) are being released almost daily. As this post hits the web, Claude 3 has recently been released, disrupting the landscape with claims that it can beat GPT-4 across multiple benchmarks. But given the rate of development, another family of models could overtake Claude by the time you’ve finished reading.

The excitement surrounding new LLMs is at times muted by the lawsuits in which many AI companies have found themselves embroiled. In December last year, The New York Times sued OpenAI and Microsoft for copyright infringement. OpenAI responded by asking a federal judge to dismiss parts of the lawsuit, claiming the NYT "hacked" ChatGPT to build its case. OpenAI has also come under fire from one of its own co-founders, tech titan Elon Musk, who accused the company of breaching its founding agreement and abandoning its original, non-profit mission by reserving some of its best AI technology for private customers. OpenAI responded by calling the lawsuit "frivolous" and "incoherent". Elsewhere, former Arkansas governor Mike Huckabee and a group of religious authors filed a lawsuit against a group of tech companies, including Microsoft, Meta and financial data provider Bloomberg L.P., claiming they trained AI tools on the authors’ books without permission.

As arguments abound, the permeation of AI into our lives is driving efforts to understand and regulate its use. The European Union’s Artificial Intelligence Act, approved by the European Parliament on 13 March, is designed to be the "world’s first comprehensive AI law". It categorises AI systems by risk, from minimal through to unacceptable, and sets specific rules for each category: high-risk AI systems require rigorous assessment, while limited-risk ones need only basic transparency.

But the road here has not been without twists and turns. In November 2023, French officials backed out of a deal shaping the EU legislation, objecting to proposed restrictions on so-called foundation models – the AI software behind products such as ChatGPT. Cédric O, a former French politician who now works for one of Europe’s most promising startups, Mistral AI, was at the centre of the debate, arguing that only Silicon Valley big-hitters would be able to afford to comply with the strict regulations proposed by the AI Act – essentially downgrading European companies to second-class tech industry players. In a public letter, he argued that laws should govern the dangerous applications of AI, not the models behind them. O’s evolution from a staunch proponent of AI regulation to one of its most vocal critics illustrates a wider issue: how can Europe lead the way on AI regulation without losing out on the benefits of rapid innovation in a global economy? France was one of the last holdouts against the AI Act, but dropped its opposition after securing conditions intended to balance transparency with the protection of business secrets and to reduce the administrative burden. As the AI Act moves forward, its effects will soon become visible.

All of this brings us to a crucial and interesting question: what are the wider implications of regulation? Here are my top three predictions.

  1. Costs of AI services will increase
Whether it is the lawsuits that keep coming (and go against service providers) or the mandatory investment required for regulatory compliance, operational costs for AI companies will increase. As usually happens in the longer term, most businesses will pass on higher operating expenses – in this case, consumers of AI products and services may feel the squeeze on their wallets.
  2. The UK will follow the EU AI Act and US tech will find itself a less regulated environment
The UK is building its own framework for AI regulation, but given the close economic and technological ties between the UK and the EU, it will no doubt be influenced by the EU AI Act. This creates scope for harmonisation and consistency across policies, especially in areas where international co-operation is needed for the safe and ethical use of AI. It will also be interesting to see how things develop in the US, where regulation has so far been more decentralised in the absence of stringent federal legislation. State lawmakers introduced over 440% more AI-related bills in 2023 than in the previous year. The American Data Privacy and Protection Act, a proposed piece of federal legislation, could in theory resolve the looming challenge of understanding and navigating this patchwork of state laws.
    However, I predict that US tech will find a way to operate in a less regulated environment compared with the EU.
  3. We will find a balance on regulation – eventually
"Whoever becomes the leader in this sphere will become the ruler of the world." So said Russian President Vladimir Putin of AI in 2017. While he and I may not agree on most matters, I must concede this one. Technology companies continue to create new job families and will continue to boost employment, but if countries or governments regulate "too hard", these "builders" will simply move elsewhere – and boost the economies of other countries. Aside from slowing innovation, there is also a bigger challenge in understanding what is being built, and how. It may therefore be far better for governments to work together with tech companies – and with each other. Instead of heavily penalising AI innovators, there should be a collective goal of transparency. While there will be initial tension between fostering innovation and ensuring ethical AI use, my hope is that most governments will be able to find the balance.
