
Exploring the ethics of artificial intelligence (AI)

From its impact on education to its role in the recent Hollywood strikes – not to mention some dire warnings about the “existential threat” it might present – there can be little doubt that 2023 was the year of artificial intelligence (AI). AI has been one of the defining topics of the past 12 months and will continue to dominate attention throughout 2024 and beyond.

That AI is now the subject of mainstream conversation outside of sci-fi blockbusters is hardly surprising. While virtual assistants like Siri and Alexa helped to lay the foundations for AI as a constant co-pilot, generative technologies like ChatGPT and Midjourney have taken things to another level. And as time goes by, AI will have an ever greater impact on our daily lives.

Over the longer term, that has implications across numerous areas. From the way we live to the way we work, AI has the potential to redefine our entire existence. In the shorter term, though, there will be some major hurdles to navigate – not least the fact that people will become increasingly aware of AI’s role in corporate decision-making.

Take the financial services sector, for example. As AI becomes more prevalent within the world of banks and insurance companies, consumers will undoubtedly want to know how decisions affecting their financial future are being made. Areas like healthcare and government are likely to attract similar scrutiny, particularly when third-party organisations are involved.

Other industries will also be drawn into that conversation, though – retail among them. Here, AI has the potential to play a role in everything from pricing and promotions through to the personalisation of the customer experience; in fact, once subsets of AI like machine learning are taken into account, it already does. The key difference is that, as awareness of AI grows, so too will the attention paid to its responsible use.

As a data science company, and one that uses AI within many of its own products and services, it’s only natural that we have an interest in this issue. That’s why we’ve been hard at work on a programme that considers responsible AI, both within dunnhumby, and across the sector as a whole.


The five principles of responsible AI

Ethics isn’t a new issue for us, particularly when it comes to data science. Our focus on “customer first” principles is born out of the belief that, since data can provide retailers and brands with deep insight into shopper behaviours, those organisations have a responsibility to use that information in a way that benefits their customers.

In many ways, responsible AI is just a subset of data ethics. The situation is far more complex when it comes to AI, though, primarily due to the speed at which the market is moving. Despite only making its public debut on November 30th 2022, for instance, ChatGPT now has more than 100m weekly active users.

That lightning-fast pace can make it difficult to keep up – not least for regulators and legislators. The European Union introduced the first-ever AI Act this year, but that remains at a relatively early stage. The US’s Blueprint for an AI Bill of Rights is similarly nascent, and is further complicated by the country’s state-by-state approach to enforcement.

In lieu of a centralised set of guidelines to follow, most organisations are instead adopting voluntary ones. More often than not, these tend to echo the “soft laws” outlined by the likes of the OECD, Microsoft, Google, and UNESCO – all of which have their own principles about the creation of responsible AI. Broadly speaking, those principles typically cover five specific areas:

  1. Safety, security, and robustness
  2. Transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

Of those five, some are easier to manage than others. With the right controls in place, considerations like safety, transparency, and accountability can be relatively simple, particularly if an organisation already has robust data privacy standards. Other topics, though, are a little trickier to get right – with fairness being the prime example.


A fairer future for all

While the definition is undoubtedly up for debate, we tend to think about fairness like this: if two people behave in the same way, then they should be treated in the same way too. What that means from a data science and AI perspective is that, if two people buy the same things, spend the same amount, and shop with the same frequency, then they should be treated identically.

The benefit of thinking about fairness in this way is that it makes any disparities in your own data much easier to detect. Moreover, if you do spot any indicators of unfairness, it then becomes possible to investigate further and see why that might be happening.
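
To make that concrete, here is a minimal sketch of what such a check might look like, using Python and pandas. Everything here is hypothetical – the table, the column names (spend, trips, coupons), and the flagging threshold – and real behavioural segmentation would be far richer:

```python
import pandas as pd

# Hypothetical customer table: behavioural features plus the
# "treatment" each customer received (coupons issued last year).
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "spend":   [1200, 1180, 1210, 400, 390, 410],
    "trips":   [52, 50, 51, 12, 11, 13],
    "coupons": [200, 10, 190, 25, 24, 26],
})

# Bucket customers by behaviour: people who buy, spend, and shop
# alike should land in the same group.
customers["behaviour_group"] = (
    pd.cut(customers["spend"], bins=3).astype(str)
    + "|"
    + pd.cut(customers["trips"], bins=3).astype(str)
)

# Within each group, a wide spread in treatment is a flag worth
# investigating: behaviourally identical customers treated differently.
spread = customers.groupby("behaviour_group")["coupons"].agg(["min", "max"])
flagged = spread[spread["max"] - spread["min"] > 50]
print(flagged)
```

In this toy example, the customers who shop almost identically land in the same group, and the gap between 10 and 200 coupons within that group is flagged immediately – exactly the kind of disparity this way of thinking is designed to surface.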

Where this gets particularly interesting is that it runs completely contrary to the normal way of thinking about unfairness. Race, age, gender, sexual orientation: all these things are invisible within the data. So, rather than starting with those qualities and then looking for signs of discrimination, you can instead start at the point of discrimination and work backwards – learning along the way how your data model has allowed that to happen.
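
One way to picture that backwards-working step – continuing the hypothetical sketch above, with scikit-learn assumed and the model choice purely illustrative – is to predict expected treatment from behaviour alone, and then start with the customers whose actual treatment deviates most from it:

```python
from sklearn.linear_model import LinearRegression

# Predict treatment from behaviour alone (reusing the hypothetical
# `customers` table from the earlier sketch).
X = customers[["spend", "trips"]]
model = LinearRegression().fit(X, customers["coupons"])

# A customer whose actual treatment sits far below what their
# behaviour predicts is the natural starting point: begin at the
# disparity and work backwards to how the data allowed it.
customers["gap"] = customers["coupons"] - model.predict(X)
print(customers.sort_values("gap").head())
```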

Reaching that level of understanding is important, too, because it can have a real impact on the end customer. If one of our identically behaved customers above received 200 coupons per year and the other only 10, the less fortunate of the two would understandably want to know why. Very quickly, that takes us from fairness into areas like explainability, contestability, and redress – one of many reasons that we’re so focused on fairness in the first place.

To that end, we have established a cross-functional team to explore fairness alongside those other guiding principles of responsible AI. Our legal and data science teams are actively involved, and we also have access to external expertise in this area. Their fresh eyes and fresh insights are helping us to sharpen our thinking around this critical issue.

Our progress so far is exciting. Already, we have started to explore algorithms that are specifically designed to find and measure fairness. We have also begun to look for unfairness in our own data, and to test whether we can explain it should we find it. Ultimately, this is important to us because it is important to our clients; just like their own customers, retailers and brands will need to be confident in the responsible use of AI, too.
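
To give a flavour of what measuring fairness can mean in practice – again a hypothetical sketch building on the earlier example, not a description of our production algorithms – one simple per-group score is the ratio of the least-treated to the most-treated customer:

```python
# Hypothetical per-group fairness score: 1.0 means everyone in the
# group received identical treatment; values near 0 flag a group
# where similar shoppers were treated very differently.
def fairness_ratio(coupons):
    return coupons.min() / coupons.max() if coupons.max() else 1.0

scores = customers.groupby("behaviour_group")["coupons"].apply(fairness_ratio)
print(scores.sort_values())
```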

Do we have all of the answers to the responsible application of AI, then? No – and nobody does. But the journey in search of them is one we’re excited to take. I look forward to sharing more in the future.

AI holds huge potential. It can deliver incredible things. Ensuring that it delivers them in a responsible, customer first way should be a priority for us all.
