As a science team, we’ve been studying the rise of Generative Artificial Intelligence (Generative AI) models for some time now. Long enough, in fact, that when we started we weren’t even sure what to call them. Were they Foundation Models? Pre-trained models, or Large Language Models? Generative AI, as it transpired, emerged as the winner.
While we’ve been studying Generative AI for a long time, it’s only in the last year that the subject has been thrown into the mainstream. Two notable events are behind this shift. Firstly, in November 2022, OpenAI released ChatGPT to the public for the very first time. Six months later, GPT-4 launched, redefining what the technology was capable of.
Since then, we’ve seen a great deal of excitement… and even more media hype. With all of that in mind, I’d like to share my take on the true potential of Generative AI.
Before we get too deep into things, let’s take a moment to remind ourselves of the difference between Generative AI, and AI as a whole.
AI is a broad term. It relates to the ability of a machine to perform tasks that typically require human intelligence. Some of the most common tasks in retail science – price optimisation, the recommendation of relevant products to customers, store clustering – all leverage machine learning algorithms. As a result, we can include all of them within that overarching category of AI.
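To make that distinction concrete, here’s a minimal sketch of one of those classic machine learning tasks – store clustering – using scikit-learn. The store features and values are entirely invented for illustration:

```python
# A minimal sketch of the "classic" machine learning the article refers to:
# clustering stores on simple trading features. All features and values
# here are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-store features: [weekly_sales_k, basket_size, promo_share]
stores = np.array([
    [120.0, 14.2, 0.31],
    [ 95.5, 11.8, 0.42],
    [310.2, 22.5, 0.18],
    [ 88.0, 10.9, 0.45],
    [295.7, 21.1, 0.20],
])

# Standardise so no single feature dominates the distance metric
features = StandardScaler().fit_transform(stores)

# Group the stores into two clusters (e.g. "convenience" vs "superstore")
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # e.g. [0 0 1 0 1]
```

No giant pre-trained model is needed here; a well-understood algorithm on a few hundred rows does the job.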
Generative AI is a subset of this, and relates specifically to the creation of new content like summaries, images, songs, or code. To build this new content, we need models of a different scale; models that have been pre-trained on almost inconceivable amounts of data using the kind of compute that we couldn’t fathom just a few years back.
At a high level, these models are called Foundation Models, but there are further variations for specific types of content. If they are trained on text, for instance, then they are called Large Language Models. Large Image Models are trained on images, and so on.
Generating results? Or just more complexity?
Generative AI is moving at lightning speed. Almost every week we see a new model more powerful than the last, and we are all still learning about the limitations and applications of this emergent technology. Because of that, I think there are three critical questions that any organisation needs to answer before making an investment in Generative AI: where can it add value, how should its models be accessed, and how can it be used responsibly?
Let’s look at all three, and see what needs to be considered in each area.
To understand Generative AI’s value, we first need to understand its potential use cases. At dunnhumby, that’s something we’ve explored across four key angles.
Can we use Generative AI to create a more personalised experience for our customers? Could we minimise manual tasks by enabling them to use natural language to navigate software products? What if they could just ask our interface for a summary of drivers of growth or decline in a given category, for example?
Because large language and image models are trained on vast amounts of data, they can also serve as sources of information on products and retailers in their own right. This, of course, comes with a caveat: while that information might be extremely helpful, it may not be entirely reliable.
Tools like Microsoft Copilot are well known for wider use cases, but the rise of Generative AI means there’s now a tool for just about everything, from creative design to people coaching and skills development. Can our people benefit by embedding those applications into our existing processes?
This is something that we’re particularly focused on in the natural language processing space. Improved summaries and descriptions make for greater precision and better prescriptive capabilities – things that we’re naturally keen to see.
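As a concrete illustration of the first of those angles, a natural-language question about category performance might be answered by grounding an LLM prompt in pre-computed figures before asking it to summarise. This is a hedged sketch only: the endpoint URL, response shape, and metric values are all hypothetical.

```python
# A hypothetical sketch of a natural-language category summary: retrieve
# real figures first, then ask an LLM to summarise only those figures.
import json
import requests

def summarise_category(category: str, metrics: dict) -> str:
    # Ground the model in actual numbers so it summarises rather than guesses
    prompt = (
        f"Using only the figures below, summarise the drivers of growth or "
        f"decline for the {category} category.\n"
        f"Figures: {json.dumps(metrics)}"
    )
    resp = requests.post(
        "https://llm.internal.example/v1/generate",  # hypothetical endpoint
        json={"prompt": prompt, "max_tokens": 200},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response shape

# Hypothetical pre-computed metrics pulled from the retailer's own data
print(summarise_category("breakfast cereal", {
    "sales_growth_pct": -2.4,
    "price_inflation_pct": 6.1,
    "units_per_customer_change_pct": -4.8,
}))
```

Keeping the numbers in the prompt, rather than relying on the model’s memory, is what makes the summary trustworthy enough to put in front of a user.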
Interestingly, as we’ve looked at each of those issues, we’ve learned that many of the things we’re trying to achieve can actually be accomplished without Generative AI. That’s not surprising: we’re all still learning what these models are and how they differ from standard machine learning algorithms, after all. But it does show that it’s important not to rush into a Generative AI-heavy approach unnecessarily.
We’ve found it extremely useful to get an understanding of the different ways in which Generative AI models can be accessed, not least because the costs, privacy concerns, and enabling tasks differ for each.
Until recently, the most talked-about Generative AI models have been what we term “industry models”. These include GPT-4, Google Bard, and Stability AI’s Stable Diffusion. Typically, these are all very powerful, but they also come with two main drawbacks: data privacy concerns and cost.
Using external APIs, for instance, means that you share your data with the provider. Terms and conditions will vary on whether that data can then be used for retraining, but many companies are naturally deeply uncomfortable about that prospect, particularly when it comes to confidential or IP-related information.
The question, then, is how these large, externally trained models can be leveraged without a company’s own data becoming part of that dataset. That challenge is prompting innovations in hosting, federated models, and firewalls to proceed at pace.
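One small piece of that puzzle can be shown in code: scrubbing obvious identifiers from a prompt before it ever leaves internal systems. The customer-ID format below is invented, and real de-identification would need to be far more robust than a few regular expressions.

```python
# A simple, hypothetical illustration of one mitigation: redacting obvious
# identifiers before text is sent to an external API. Real deployments
# would need far more robust de-identification than this.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),    # email addresses
    (re.compile(r"\b\d{16}\b"), "<CARD_NUMBER>"),           # 16-digit numbers
    (re.compile(r"\bCUST-\d{6}\b"), "<CUSTOMER_ID>"),       # invented ID format
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarise complaints from CUST-123456 (jane.doe@example.com)."
print(redact(prompt))
# Summarise complaints from <CUSTOMER_ID> (<EMAIL>).
```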
This brings us to open-source models like Falcon and Llama 2. The open-source community has played an enormous game of catch-up in recent months, to the point where these models can no longer be considered to lag in quality.
The clear advantage with these models is that, if they can be hosted securely on a company’s internal systems, all of those data privacy concerns simply dissolve away. The biggest challenge here, however, is size: these systems tend to be prohibitively large.
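To give a feel for what in-house hosting involves, here is a sketch using the Hugging Face transformers library. It assumes access to the gated Llama 2 weights, plus the accelerate and bitsandbytes packages; quantising the weights to 8-bit, which roughly halves the memory footprint versus 16-bit, is one practical answer to the size problem.

```python
# A sketch of hosting an open-source model in-house with Hugging Face
# transformers. Assumes Llama 2 access has been granted, and that
# accelerate and bitsandbytes are installed for 8-bit loading.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # smallest of the Llama 2 family

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available GPUs/CPU
    load_in_8bit=True,   # quantise weights to 8-bit via bitsandbytes
)

inputs = tokenizer("Top three drivers of basket growth are", return_tensors="pt")
inputs = inputs.to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```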
To tackle that challenge, some organisations are opting to build their own Generative AI models. These leverage the underlying transformer architecture of a Foundation Model, but instead of being trained on external data, are trained on that company’s own data.
It’s unlikely that these home-grown models will compete with industry or open-source models on generic image or natural language tasks, given the expense involved in hardware and staffing. For tasks specific to an individual business, though, training on a smaller and more relevant dataset may actually be more useful than fine-tuning a larger one.
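As a rough sketch of the scale involved, the snippet below instantiates a small GPT-style model from scratch with transformers. Every size here is illustrative rather than a recommendation; the point is that a domain-specific model can be orders of magnitude smaller than a frontier one.

```python
# A hedged sketch of the "train your own" route: a small GPT-style model
# built from scratch rather than fine-tuned from a giant one. All sizes
# are illustrative; a real build would tune them to the company's corpus.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=16_000,  # small tokenizer trained on the company's own text
    n_positions=512,    # far shorter context than frontier models
    n_embd=384,
    n_layer=6,
    n_head=6,
)
model = GPT2LMHeadModel(config)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # tens of millions, not billions
```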
Regardless of whether we use industry, open-source, or proprietary models, any company employing Generative AI also has an ethical responsibility around how it is used and the outputs it creates. Some of the biggest considerations include:
While businesses set up policies to protect data and IP, those policies also need to be understood. Some of the tools and interfaces used to access Generative AI can look very benign, but actually present much wider security and privacy concerns. Educating employees on this new technology and creating a culture of strong privacy understanding is critically important.
Bias is an ethical consideration for any model build, not just AI. Basic predictive models in healthcare or the insurance industry may easily be found to have a bias against certain genders or income brackets, for instance. When it comes to large language and image models, though, the bias can be harder to explain. Regardless, any company utilising those tools has a responsibility for spotting and mitigating that problem.
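Spotting that problem can start with something as simple as comparing outcome rates across groups, as in this hypothetical demographic-parity-style check. Real bias auditing goes much further, but a gap like the one below is the kind of signal that warrants investigation.

```python
# A minimal, hypothetical bias check: comparing a model's positive-outcome
# rate across groups (a "demographic parity" style measure). All data here
# is invented for illustration.
from collections import defaultdict

def outcome_rates(predictions, groups):
    """Share of positive predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

# Invented example: approvals split by an attribute the model shouldn't use
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(outcome_rates(preds, groups))  # {'a': 0.8, 'b': 0.2} -> large gap
```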
Some Generative AI outputs can sound very official and authoritative, but that doesn’t guarantee that what they produce is actually real. Just ask the lawyers who used ChatGPT to generate citations, only to find out that some were completely fake. Remember that these models are designed to create new content, not necessarily real content – so their reliability always needs to be checked.
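A basic safeguard is to verify generated facts against a trusted source before acting on them. The sketch below does this for citations; the “trusted index” here is a stand-in, where in practice you would check against a real legal or bibliographic database.

```python
# A hedged sketch of the "trust but verify" step: never accept generated
# citations at face value. KNOWN_SOURCES is a hypothetical stand-in for a
# real authoritative database.
KNOWN_SOURCES = {
    "Smith v. Jones (2010)",
    "Doe v. Acme Corp (2018)",
}

def verify_citations(generated: list[str]) -> dict[str, bool]:
    """Flag each generated citation as verified or not."""
    return {c: c in KNOWN_SOURCES for c in generated}

output = ["Smith v. Jones (2010)", "Invented v. Case (2021)"]
print(verify_citations(output))
# {'Smith v. Jones (2010)': True, 'Invented v. Case (2021)': False}
```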
No matter what your personal view about the promise of Generative AI, one thing is very clear: we’re only at the start of what looks set to be a fascinating, if complex, era. I’m sure we’ll have more to say on the subject in the coming months.