The Ethical Machine: In Conversation with Dr Zeynep Engin

Dr Zeynep Engin is one of the foremost researchers at the intersection of data science and public policy. Based at University College London’s (UCL) Urban Dynamics Laboratory, she focuses on the legal and ethical implications of data-driven decision-making in public policy.

What’s the focus of your research?

It’s what’s known as algorithmic governance: using data-driven intelligence and algorithmic insights to support strategic decision-making processes and, going one step further, the automation of governance decisions in both public and private sector contexts. Clearly, the most critical part is ensuring the legality and ethics of such decisions. I particularly focus on areas such as algorithmic fairness, transparency, and interpretability.

We look into those areas to try and find points for generalisation. Most of the research into these topics is what I’d call “one-shot”. You might solve a problem for one specific case, for one stakeholder, but you can’t then apply that learning elsewhere. So, we try and focus on generalisation: algorithms that can explore “fairness” in a variety of situations. This means we can build up a series of rules or guidelines that everyone can use.
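To make the idea of a generalisable fairness check concrete, here is a minimal Python sketch (an illustration, not Dr Engin’s actual method): a demographic parity check parameterised on the dataset, the outcome field, and the protected attribute, so the same code can be reused across problems rather than being rebuilt “one-shot” for each case. All field names and data are invented for the example.

```python
from collections import defaultdict

def demographic_parity_gap(records, outcome_key, group_key):
    """Largest gap in positive-outcome rates between groups.

    records:     iterable of dicts, e.g. {"approved": 1, "gender": "F"}
    outcome_key: name of the binary (0/1) outcome field
    group_key:   name of the protected attribute to compare across
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[outcome_key]
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# The same check works for loans, sentencing, or hiring data,
# simply by passing different outcome and group fields.
loans = [
    {"approved": 1, "gender": "F"}, {"approved": 0, "gender": "F"},
    {"approved": 1, "gender": "M"}, {"approved": 1, "gender": "M"},
]
gap, rates = demographic_parity_gap(loans, "approved", "gender")
print(gap, rates)  # 0.5 {'F': 0.5, 'M': 1.0}
```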

Data has become integral to decision-making, but what are the common positive and negative consequences you see?

Let’s start with the positives. I think that big data and its associated technologies give us enormous capacity to inform and improve decision-making processes. We have more diverse information being fed into our thinking and our processes, and so our decision-making can be more evidence-based as a result.

We can also analyse complex information at much greater scale than ever before, which gives us a breadth of understanding we simply didn’t have. You get the sense that if you can apply these positive aspects of data-driven decision-making to policy, you should be able to create a fairer, more ethical future based on real information.

If you flip that over and consider the negative side, then of course there’s a chance that everything might just go wrong. There’s a real danger that these systems just replicate the same biases that we have had throughout human history or, worse still, amplify them against various communities.

So instead of creating a fairer and more ethical society, we could just fall into the same bias, fairness, and inequality traps that are already making headlines today, but on a much larger scale.

With so much trust placed in algorithms and automation, will human intervention always be needed to ensure “fairness”?

I think it’s very easy to say that a human should always be involved somewhere along the line, if only because we always want a real person to point the finger at when things go wrong.

It’s interesting though, because while we’re not comfortable having machines making important decisions in our lives, we’re still trusting humans to be unbiased or perform better than these very sophisticated tools can.

Think about legal systems. If you go to the courts to appeal, the typical response time is going to be months, or even years. That’s because there’s more information, more complexity than humans can process in a reasonable amount of time. So, the system gets slowed down.

If you think about things like that, I believe it becomes much harder to imagine that these kinds of processes will always be controlled by a human. I think the biggest priority, at least for the engineering and computer science community, is to try and embed as much ethical behaviour into the systems as possible.

The more we focus on making these systems “behave” ethically and legally, and the faster we course correct when things go wrong, the greater the trust we can build in them.

How do you think businesses can account for ethics as they shift over time?

It’s not just time; demographics are also involved. You have to take cultures and geographies into account as well. Something that is considered ethical here may be completely unethical somewhere else in the world. And as technologies become more global, you have to factor in what matters to different cultures, different age groups, different demographic groups.

So, when I say that it’s important for us to embed ethical behaviour into these systems, you also have to think about how to make them adaptable to changes in time, changes in demographics, geographies, or other factors. That is much, much easier said than done, but it’s also why these problems are so interesting.

You mentioned earlier that learning algorithms can take on biases, and there was a recent example of this where women were being rejected at higher rates than men when applying for a specific credit card. How do you stop that kind of thing from happening?

If you think about computer science, what you’re essentially trying to do is formulate problems in mathematical forms and embed them into a deterministic system. But the problems we are trying to deal with aren’t very mechanical, so they need to be looked at differently.

That requires a culture change, mixing the computer science field with social sciences and humanities in order to constantly check the validity of a model. We need to understand the potential ethical consequences, not just the data.

I think as a data scientist, you also need to be very transparent about what you’re doing. You need to report on how you’ve used your data, its shortcomings, or any biases that you have observed. You need to be very open about any constraints in your data, really transparent about how you have reached your conclusion, and how you have trained your models.

So, data integrity is one thing. But you also need to take into account the fact that systems are designed by people, and people have biases and their own interpretations of a problem as well. You need to recognise that, and declare your assumptions and how they may have influenced your system design and methodology too.
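As a hypothetical sketch of the kind of transparency reporting described here, the Python below audits a set of credit decisions: it computes the rate at which qualified applicants are wrongly rejected, broken down by group, and records the modelling assumptions alongside the result. The field names, toy data, and choice of metric are assumptions for illustration, not anything prescribed in the interview.

```python
def false_rejection_rates(records):
    """records: dicts with 'group', 'qualified' (0/1), 'approved' (0/1)."""
    denom, wrong = {}, {}
    for r in records:
        if r["qualified"]:  # only qualified applicants can be wrongly rejected
            g = r["group"]
            denom[g] = denom.get(g, 0) + 1
            wrong[g] = wrong.get(g, 0) + (1 - r["approved"])
    return {g: wrong[g] / denom[g] for g in denom}

# Invented toy data standing in for a credit-decision log.
applications = [
    {"group": "F", "qualified": 1, "approved": 0},
    {"group": "F", "qualified": 1, "approved": 1},
    {"group": "M", "qualified": 1, "approved": 1},
    {"group": "M", "qualified": 1, "approved": 1},
]

# Publish the metric together with the assumptions behind it, in the
# spirit of the transparency Dr Engin describes.
report = {
    "metric": "false rejection rate among qualified applicants",
    "rates": false_rejection_rates(applications),
    "assumptions": [
        "'qualified' labels are themselves trusted, which may hide bias",
        "the data covers all demographic groups adequately",
    ],
}
print(report["rates"])  # {'F': 0.5, 'M': 0.0}
```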

We’ve seen a more general push around digital ethics recently. How do you see that impacting the world of data science and engineering?

To date, the main focus has been on performance and profit. How do we use data science to maximise profit, do things faster, cheaper, more efficiently?

I think that what is becoming more and more obvious is that the real challenge, and the more complex challenge, is making sure that these systems are accountable and ethical too. A lot of the focus on ethics around data has been on marketing and how personal information is used there. But if you think about the legal system, or loan applications, the potential for misuse is much, much greater.

If data science leads us to make unethical decisions against different societal groups, then that is a real, real danger. And this is a much more complex computational problem to overcome than simply optimising for performance and profit.

Data science and engineering will become increasingly dominated by this issue. We’re going to need to focus on fostering a more human, ethical approach to research than just creating new systems and products. Creating new products is the easy part. Making them behave appropriately is much, much harder.

Would you agree that businesses need to put data ethics at the forefront of what they do?

I do believe that unless private companies get to grips with these issues now, they will see more and more backlash from society. And that will affect their profits. So even though data ethics might feel like a bit of a luxury for them right now, they’ll soon realise that it’s an absolute necessity.

We have to automate and find ways to make those systems trustworthy. We can’t expect human labour to match the level of complexity and skill that a machine can deliver. But we have to automate responsibly, and ethically, and businesses need to be prepared to be very open about what they’re doing.
