Not to be confused with our previously published data science trends, this post gets the inside track from Alison Williams on how the year ahead might shape up from a data management perspective.
1. Architectural decisions demand serious thought
Accessibility is a big issue for any data-driven organisation. The easier we make it for people to get hold of the data they need, the more likely it is that they can use that information in collaborative, outcome-focused ways. Much as accessibility is commonly recognised as A Good Thing™, though, it’s also something that becomes increasingly hard to deliver as data volumes continue to grow.
That challenge is where architectural approaches like data meshes and data fabrics come into play. And, while both of those concepts are designed to break down key barriers to accessibility like silos and duplication, they’re also underpinned by different operational philosophies that bring an additional layer of complexity to proceedings.
A data mesh, for instance, takes a decentralised approach to data management, proposing a people-centric design that prioritises ownership and agency of data on a “per business function” basis. Data fabric, on the other hand, aims to bring together disparate systems in a centralised and automation-supported way.
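To make the mesh side of that contrast a little more concrete, here is a minimal sketch of what a domain-owned "data product" descriptor might look like. All of the names and fields here are hypothetical, invented purely for illustration; real data mesh implementations define far richer contracts.

```python
from dataclasses import dataclass

@dataclass
class DataProduct:
    """A minimal, illustrative data product record for a mesh-style setup.

    In a data mesh, each business function owns and publishes its own
    datasets rather than handing them to a central platform team.
    """
    domain: str       # owning business function, e.g. "sales"
    name: str         # the dataset this domain publishes
    owner_team: str   # the team accountable for its quality
    output_port: str  # where consumers read it from (URI is illustrative)

# A hypothetical product owned end-to-end by the sales domain.
orders = DataProduct(
    domain="sales",
    name="orders-daily",
    owner_team="sales-analytics",
    output_port="s3://sales/orders-daily/",
)
```

A data fabric, by contrast, would tend to discover and catalogue such datasets centrally and automatically rather than relying on each domain to publish its own descriptor.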
Grossly oversimplified though these definitions may be, they do at least speak to the opposing nature of the two philosophies. While there might not be any hard deadline for organisations to work towards in terms of “picking” one of those approaches, 2023 is likely to see many beginning to think about which side of the fence their long-term data future lies on.
2. Automation offers a solution to the challenge of data governance
The more data you have, the more important data governance becomes. Just like accessibility, however, governance is a challenge that scales with volume: more data means more data policies that need to be developed, enacted, and checked. Computational governance offers a way to alleviate some of that burden, and its importance will only increase through 2023 and beyond.
At its core, computational governance provides a way to automate the process of checking whether a data policy is being adhered to. As Dr. Sven Balnojan writes in this excellent piece on computational governance, this is a three-stage process that first requires each policy to be converted into an algorithm that can be comfortably processed by a computer.
Different levels of automation can then be applied to that algorithm, ranging from early warning systems that require human intervention before data can be accessed, through to fully autonomous systems that handle policy checks on an independent basis. It’s a fascinating idea – one that is too complex to do full justice to here – and one that promises to gain a great deal of traction over the coming months.
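The first stage described above, turning a policy into something a computer can check, can be sketched very simply. The policy, automation levels, and field names below are all hypothetical examples, not taken from any particular governance tool.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class AutomationLevel(Enum):
    WARN = "warn"        # flag violations for human review before access
    ENFORCE = "enforce"  # block non-compliant data autonomously

@dataclass
class Policy:
    """A data policy expressed as an algorithm a machine can run."""
    name: str
    check: Callable[[dict], bool]  # returns True when a record complies
    level: AutomationLevel

# A hypothetical policy: every customer record must carry a consent flag.
consent_policy = Policy(
    name="customer-consent-required",
    check=lambda record: record.get("consent") is True,
    level=AutomationLevel.ENFORCE,
)

def evaluate(policy: Policy, records: list) -> list:
    """Return the records that violate the policy."""
    return [r for r in records if not policy.check(r)]

records = [
    {"id": 1, "consent": True},
    {"id": 2, "consent": False},
]
violations = evaluate(consent_policy, records)
```

In a warning-level setup, `violations` would be routed to a human reviewer; in a fully autonomous one, the offending records would be quarantined before anyone could access them.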
3. Privacy tech progresses thanks to PETs projects
What if you could get the same amount of value from a large consumer dataset without ever coming close to touching the sensitive information that we’d all rather remained hidden? That’s the primary objective of Privacy Enhancing Technologies (or PETs), an area that could see a significant amount of progress during 2023.
While the general concept of PETs can be traced all the way back to the late 1990s, it is one that has newfound relevance in the face of global health crises like the Covid-19 pandemic. Essentially allowing the analysis and sharing of information globally to take place without the underlying data ever needing to be exposed, PETs offer a potential solution to the often conflicting goals of personal privacy and public good.
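One well-known family of PETs is differential privacy, which lets an analyst query a dataset while mathematically limiting what can be learned about any individual in it. Below is a minimal sketch, under simplifying assumptions, of the classic Laplace mechanism applied to a count query; the function names and the seeded demonstration are illustrative only, and production systems use far more careful noise generation and budget accounting.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records: list, epsilon: float = 1.0) -> float:
    """Answer a count query with Laplace noise added.

    A count query has sensitivity 1 (adding or removing one person
    changes the answer by at most 1), so the noise scale is 1/epsilon.
    """
    return len(records) + laplace_noise(1.0 / epsilon)

random.seed(42)  # seeded purely so the demonstration is reproducible
noisy = private_count(["row"] * 100, epsilon=1.0)
```

The true count (100) never leaves the system; the analyst sees only the noisy answer, which is close enough for aggregate analysis while obscuring any single individual's presence.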
Numerous approaches to PETs already exist, with many more being researched and developed. In July last year, government agency Innovate UK ran a £700,000 competition aimed at discovering new solutions to real-world privacy use cases; expect that kind of traction to continue.
4. Sustainability becomes a big part of the data dialogue
Every industry on earth is under scrutiny as to its environmental impact right now, and data science is no different. Big data requires big computation and storage to manage and analyse effectively, and that has significant repercussions from a power consumption, physical waste and sustainability standpoint.
While there might not be any immediate solutions to that challenge, we can at least expect the industry as a whole to step up to the issue and acknowledge that improvement is needed. Transparency will be the first step, with cloud providers under increasing pressure to disclose the hardware turnover, and therefore physical waste, created by their exponential growth in recent years. Longer term, radical disruption through concepts such as quantum computing may be the only way to make a significant impact.
5. And, finally… the metaverse beckons, but what to make of it all?
Whether it’s the future of human interaction or simply an ill-advised waste of $36bn remains to be seen, but one thing that’s certain about the metaverse is that there’s no getting away from it anytime soon. Despite significant shareholder alarm, the company formerly known as Facebook looks dead set on continuing its investment into virtualising the internet.
Whichever side of the metaverse coin you land on, it does at least hold the potential to serve as a new – and potentially very different – source of data. New sources of information, of course, also require new protocols and procedures around privacy, and it’s hard to say just yet quite how valuable the information generated might actually prove to be. Just because it’s new doesn’t mean it’s useful, after all.
Will it bring us to a new level of understanding about human behaviours, or simply serve as a shiny (but ultimately short-lived) distraction that promises much and delivers little? The jury is still firmly out for now, but 2023 should at least see the answers around the metaverse start to shape up just a little bit more.