Where will our data live tomorrow?

It’s been a busy decade for Max Schrems. Since 2013, the Austrian lawyer and privacy activist has filed several lawsuits against companies including Facebook, Google, Amazon, and Spotify, all relating to the collection and transfer of personal data. Schrems I and II, the most significant of those suits, relate specifically to the transfer of personal data from the European Union to the United States.

The impact of this litigation has been far-reaching, to say the least. The “Safe Harbour” arrangement, which was intended to ensure that data transfers between the EU and US were compliant with the EU Data Protection Directive, was invalidated in the wake of Schrems I[1]. The EU-US Privacy Shield, essentially developed as a replacement for Safe Harbour, was then itself struck down in the EU Court of Justice’s ruling on Schrems II[2].

Despite these victories, Schrems isn’t finished yet. Even the new and refined EU-US Data Privacy Framework has attracted his ire, leading the lawyer to state that – although he is “sick and tired of [the] legal ping-pong” – he expects to be back in court on the matter early this year[3]. The road ahead looks both long and heavily litigious.

The Schrems I and II rulings are not just academic debates, either. They have been actively enforced, with the most famous example coming in 2023 when the Irish Data Protection Commission hit Facebook owner Meta with a record €1.2bn fine for violating the General Data Protection Regulation (GDPR)[4]. A year before that, fellow Meta companies Instagram and WhatsApp received substantial fines of their own[5].

Beyond showing just how much attention is now being paid to data privacy as a subject, these cases also speak to a much bigger truth: that there is a very tough and potentially unsolvable problem brewing around the safe transfer of personal data outside of the EU.


Local requirements complicate matters

Like many other hard-to-solve challenges, the root cause of the issue here is a fundamental difference in philosophy. In Europe, for example, the aforementioned GDPR offers a detailed, expansive, and highly regulated data protection framework. Not only is there no equivalent in the US at a national level, but the agencies and organisations that operate there also tend to have much greater powers to monitor and analyse personal data.

While the evolution from Safe Harbour to the Data Privacy Framework has brought those contrasting philosophies a little closer together, they are still defined more by their differences than their similarities. As a result, most organisations have come to the difficult realisation that – when it is transferred from the EU to the US – personal data is simply less secure by default.

That brings us to where we are today. Correct or not, the general feeling is that many of the lobby groups protesting against the Data Privacy Framework would like to get to the point at which data cannot be transferred out of the EU. That would take us into the same space as China’s Personal Information Protection Law, which requires any business handling a certain amount of personal data there to localise its storage and processing operations[6].

Taken to its logical extreme, that kind of stipulation could ultimately require a company to have a dedicated – and segregated – data centre for every territory in which it operates. Beyond the obvious cost implications, that would also lead to fragmentation, making such an approach fundamentally unpalatable for many. Instead, it’s far likelier that this deadlock will lead to even greater interest in Privacy-Enhancing Technologies (PETs).


A technological solution to an intractable problem

At their core, PETs are a set of tools that help to reduce some of the most significant risks associated with personal data. Those tools cover a great deal of ground; under that overarching PETs umbrella, you’ll find everything from Tor browsers through to the concept of Self-sovereign Identity (SSI), which aims to give individuals greater control over their digital identity.

From a privacy perspective, some of the most interesting applications of PETs cover fields like encryption and anonymisation. Homomorphic encryption, for instance, allows data to be analysed without having to be decrypted first. Then, you have pseudonymisation (where personal identifiers are replaced with artificial ones) and differential privacy (which adds “noise” to a dataset to prevent individuals from being identified).
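To make two of those techniques concrete, here is an illustrative sketch – not a production implementation – of pseudonymisation and differential privacy. The function names, the secret key, and the toy patient records are all assumptions for the sake of the example; a real deployment would use a vetted library and carefully managed keys.

```python
import hashlib
import hmac
import math
import random

# --- Pseudonymisation: replace a direct identifier with an artificial one ---
SECRET_KEY = b"rotate-and-store-me-securely"  # hypothetical key for illustration

def pseudonymise(identifier: str) -> str:
    """Deterministically map an identifier to an artificial token.

    Using a keyed HMAC (rather than a plain hash) means the mapping
    cannot be reversed or rebuilt without access to the secret key.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# --- Differential privacy: answer a count query with calibrated noise ---
def laplace_noise(scale: float) -> float:
    # Inverse-transform sampling from a Laplace(0, scale) distribution
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """A counting query has sensitivity 1 (one person changes the result
    by at most 1), so Laplace noise with scale 1/epsilon satisfies
    epsilon-differential privacy for that query."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Toy dataset: (email, age) pairs
patients = [("alice@example.com", 34), ("bob@example.com", 57), ("carol@example.com", 62)]
tokens = [pseudonymise(email) for email, _ in patients]
noisy_over_50 = private_count(patients, lambda p: p[1] > 50, epsilon=0.5)
```

Note the trade-off that epsilon encodes: a smaller epsilon means more noise and stronger privacy, while a larger epsilon gives more accurate answers at the cost of weaker guarantees.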

Ultimately, PETs are not new. Even something as basic as a secret ballot could be considered a PET – in the broadest sense of the definition, at least. At a high level, though, these technologies could drive a fundamental shift in the balance of power around knowledge and insights.

Take OpenSAFELY, for instance, an open-source software platform that enables researchers to analyse electronic health records data. Said data is highly secure and entirely anonymised, and all activity on the platform is publicly logged – but the information at the heart of OpenSAFELY is also freely available, an increasingly uncommon situation in a world where data has tangible commercial value.

Ideas like OpenSAFELY also take us into the field of federated learning, in which artificial intelligence (AI) models are trained using data that sits on “the edge”. MELLODDY – another healthcare-related project – employs just that approach, with predictive machine learning models trained using decentralised data from 10 global pharmaceutical companies.
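The core idea behind federated learning can be sketched in a few lines: each site trains on its own data, and only model parameters – never the raw records – travel to a central coordinator, which averages them. The following is a minimal, hypothetical illustration of that averaging loop (loosely in the spirit of the FedAvg algorithm), using a toy linear model and invented “hospital” datasets; it is not how MELLODDY or any production system is actually built.

```python
# Minimal federated averaging sketch: each site fits a linear model
# y = w*x + b locally via gradient descent; only (w, b) leave the site.

def local_update(weights, data, lr=0.1, steps=10):
    """Run a few gradient-descent steps on one site's private data."""
    w, b = weights
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y
            grad_w += err * x
            grad_b += err
        n = len(data)
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return (w, b)

def federated_round(global_weights, sites):
    """Coordinator: collect locally trained weights and average them."""
    updates = [local_update(global_weights, site) for site in sites]
    w = sum(u[0] for u in updates) / len(updates)
    b = sum(u[1] for u in updates) / len(updates)
    return (w, b)

# Three toy "hospitals", each holding private (x, y) pairs from y = 2x + 1
sites = [
    [(1.0, 3.0), (2.0, 5.0)],
    [(3.0, 7.0), (4.0, 9.0)],
    [(0.0, 1.0), (5.0, 11.0)],
]

weights = (0.0, 0.0)
for _ in range(200):
    weights = federated_round(weights, sites)
# weights converges towards the shared relationship (w ≈ 2, b ≈ 1)
# even though no site ever shares its underlying records
```

The privacy argument rests on what crosses the network: only aggregated parameters move, so the raw data never leaves its home jurisdiction – which is exactly why the technique is attractive in a world of data-localisation pressure.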

The Open Data Institute (ODI) has been understandably supportive of projects of this kind, noting the growing importance of data stewardship, and the application of federated learning to the pursuit of “public, charitable or educational aims”.

As vital as these societal benefits may be, of course, they are not the only appealing thing about PETs. In a world where the safe and legal transfer of data between different geographic territories is becoming almost impossible, concepts like federated learning and decentralised analysis should also hold obvious appeal for any organisation that doesn’t want to be tied to localised data storage and processing.

For that reason alone, I believe we’ll see considerable investment into the companies that are developing PETs over the next few years, particularly those focused on some of the specific anonymisation and encryption techniques mentioned above.

PETs are not a blanket solution, of course. Even with the best technology – and the best intentions – there will inevitably be some kinds of analysis that cannot take place without exposing personal data. In those instances, unless data clean rooms can be used to bridge the gap, the conversation will again go back to where a dataset originated and how it should then be handled.

Nonetheless, as the seemingly endless loop of legislation and litigation continues, technological safeguards like PETs can be a crucial part of a solution, offering near-term options when combined with strong data stewardship. For those truly international organisations, where the accessibility and availability of international data forms the beating heart of their business, that day can’t come soon enough.


[1] The CJEU's Schrems ruling on the Safe Harbour Decision – European Parliament, 26th October 2015

[2] The CJEU judgment in the Schrems II case – European Parliament, September 2020

[3] EU seals new US data transfer pact, but challenge likely – Reuters, 10th July 2023

[4] Facebook owner Meta fined €1.2bn for mishandling user information – The Guardian, 22nd May 2023

[5] Meta faces record EU privacy fines – Politico, 4th December 2022

[6] China’s new data-transfer mandate prompting multinationals to rethink market strategy – PwC
