From analytics to AI: Where next for government use of data?

by Eddie Copeland

The following article is based on a speech I gave on 22 March 2018 at the DataFest Summit in Edinburgh.

For the last five years I’ve been talking to anyone who’d listen about the benefits of public sector organisations making smarter use of their own data.

I emphasise the ‘of their own’ part, because the UK’s approach to date has pulled in two quite different directions.

On the one hand, we’ve been strong champions of open data, but in a way that has often amounted to public sector bodies publishing their datasets in the vague hope that someone else might do something interesting with them.

On the other hand, we’ve been enthused by the notion of smart cities, investing millions of pounds in new technologies to give cities even more data… that they often have absolutely no idea how to use.

Now, I do, of course, acknowledge that there have been some great success stories in each of those fields.

But there’s a missing middle.

The missing part has been a widespread recognition by public sector organisations that they should be the primary consumers of their own data.

For years, journalists and commentators have semantically tortured themselves trying to decide if data is best understood as the new oil, fuel, or the fertile soil in which the seeds of innovation can thrive. Whatever the metaphor, the point is that data has value. It would be madness for public sector organisations to be the only ones not to benefit.

What is this value I mention?

Data Enables Better Ways of Working

As someone who’s spent a fair amount of time looking at how public sector organisations can improve themselves and respond to the pressures they face, it strikes me that data is the enabler of many of the tried and tested ways of working better.

The challenge is that the sector’s current approach to data holds it back from those very same ways of working. Specifically, it suffers from the jigsaw problem: every team has their little piece of the data puzzle, but no-one can put those pieces together, take a step back and see what the picture shows.

That is problematic, to put it mildly.

For example, we know that – if they are designed and executed well – shared services can deliver value. But how can organisations see where they could share more assets, teams and resources with their neighbours if they have no data on the scale and location of the problems, demand and opportunities beyond their boundaries?

We know that if organisations have scarce resources, it makes sense to target areas of greatest need. That’s very hard if public sector organisations can’t access the datasets which would show where the people and places of greatest need are located.

We know that some of our most complex and expensive areas of public service delivery, such as adult and child social care, can entail up to 30 organisations working together to support a family. How can they coordinate intelligently and efficiently if they don’t have data on what the others are doing?

And lastly, the Holy Grail of public service reform: prediction and prevention, or prediction and early intervention. Quite simply, we know it’s better to intervene in problems when they are small; when minimal harm has been caused; and when, frankly, they are cheaper to resolve. Public sector organisations can do this, but only if they can bring together and analyse the datasets that might collectively point to cases of highest future risk.

The causes of the jigsaw problem are well known. They are also worth repeating ad nauseam until we successfully address them.

There are technical challenges. Over the course of many years, each public sector organisation has procured or developed its own IT systems, many of which don’t easily talk to each other to share data. Worse still, some IT suppliers have the audacity to charge public sector customers to access their own data. (Those companies should be named and shamed.)

There are data challenges. Even when it’s possible to access the datasets, they are often recorded according to different formats and conventions. Matching them can be like the digital equivalent of comparing apples and oranges.
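To illustrate with a deliberately trivial example (the records and field names below are invented), even a name and a date of birth can be held under incompatible conventions, and some normalisation is needed before two systems’ records can be compared at all:

```python
from datetime import datetime

# Hypothetical records for the same person, held by two organisations.
# Neither convention is wrong; they are simply different.
council_record = {"name": "SMITH, John", "dob": "01/02/1985"}  # DD/MM/YYYY
health_record = {"name": "John Smith", "dob": "1985-02-01"}    # ISO 8601

def normalise_name(name: str) -> str:
    """Convert 'SURNAME, Forename' or 'Forename Surname' to one common form."""
    if "," in name:
        surname, forename = (part.strip() for part in name.split(",", 1))
        name = f"{forename} {surname}"
    return name.strip().lower()

def normalise_dob(dob: str) -> str:
    """Parse either date convention into an ISO 8601 string."""
    for fmt in ("%d/%m/%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(dob, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date format: {dob}")

# Only after normalisation do the two records match.
assert normalise_name(council_record["name"]) == normalise_name(health_record["name"])
assert normalise_dob(council_record["dob"]) == normalise_dob(health_record["dob"])
```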

There are legal challenges, both real and perceived. There are some things that the law does not allow, and rightly so. But there are many things that could be done with data that never happen, because the average public sector worker is not confident about what data protection legislation does and does not allow. They therefore, understandably, err on the side of caution.

Finally, there are organisational challenges. Every public sector body was set up at some point in the past to serve a certain community in a certain way. It takes a mental leap, a structural leap, a cultural leap to start systematically collaborating.

But collaborate they must.

Offices of Data Analytics

To help overcome some of these barriers, two years ago Nesta launched its Offices of Data Analytics programme, helping cities and regions source, analyse and act upon their collective data to reform public services.

To date we’ve been directly involved in three pilots, and in each case we’ve found that four questions help determine whether a given public service challenge can be tackled with data.

1 – Do we have a specific, impactful problem? Collaborating with data takes time, money, and political backing. It’s hard to secure those things unless the problem being tackled is motivating to all those involved.

2 – Do we know what intervention we’re trying to enable? It’s vital to have a good answer to the question: “What would you do differently if you had all the information you could ever want about the problem?” The data is not the intervention. Data is useful to the extent that it leads to action, so what do we want to do?

3 – Do we know what information product we need? What would a frontline worker or service manager need to see on a piece of paper or tablet screen to make that intervention possible? A map, a heatmap, a prioritised list, a dashboard, an alert, etc?

4 – Is the data required to create the information product available? Is the data both accessible and legitimate for use for this purpose?

Using these questions, we worked with twelve London boroughs, the Greater London Authority (GLA) and ASI Data Science on a pilot for a London Office of Data Analytics (LODA), which aimed to develop a predictive algorithm to spot unlicensed houses in multiple occupation (HMOs). This pilot led to the creation of a permanent LODA, announced by the Mayor of London on 28 March 2018.
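The article doesn’t describe the pilot’s actual model, so purely as an illustration of what a predictive algorithm of this kind involves, here is a minimal sketch using scikit-learn, with entirely hypothetical features and data:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# One row per property. The label records whether an inspection found an
# unlicensed HMO. In practice, features would come from linked council
# datasets (council tax, housing benefit, complaints and so on); these
# columns are invented for illustration.
properties = pd.DataFrame({
    "bedrooms":          [3, 5, 2, 6, 4, 3],
    "council_tax_band":  [2, 4, 1, 5, 3, 2],  # encoded A=1, B=2, ...
    "noise_complaints":  [0, 4, 0, 7, 2, 1],
    "is_unlicensed_hmo": [0, 1, 0, 1, 1, 0],  # known inspection outcomes
})

X = properties.drop(columns="is_unlicensed_hmo")
y = properties["is_unlicensed_hmo"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The useful output is not a hard yes/no but a ranked list of properties
# to inspect first - the 'prioritised list' information product above.
risk = model.predict_proba(X_test)[:, 1]
print(X_test.assign(risk=risk).sort_values("risk", ascending=False))
```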

We worked with the Digital Catapult, local authorities, police and health organisations in the North East of England to join up data on issues related to alcohol harms to see if it could enable better conversations about the nature and scale of those problems. (This pilot was ultimately unsuccessful because we’d failed to clearly answer question 2 above, and pin down what intervention would follow from better information.)

We’re now working with Essex Police, Essex County Council and its public sector partners to apply data analytics to help identify businesses exploiting slave labour.

Lessons to share

In each of these cases, the data has been important. Yet in many ways the real value has come from the conversations and processes associated with deciding how to use it. Regardless of the ultimate outcome, there’s huge value in different public sector organisations getting together to commonly define the problems they jointly face, and discussing what information they all need in order to work better.

Data acts as a gateway drug to better collaboration.

Happily, many other organisations seem to agree. There are now Offices of Data Analytics springing up in Greater Manchester, the West Midlands, Worcestershire and Essex, and, as noted above, London now has one too. Nesta hopes to support all these areas to learn from each other and distil what works.

From what we’ve seen so far, many of the lessons on making good use of data analytics are simple and unglamorous. I’ll highlight three of the most important:

Public sector leaders are responsible for making data work

Public sector leaders need to create the space and culture for data to make a difference. By this I mean two things. First, public sector leaders need to create an expectation that it’s unacceptable to make a major decision or to reform a service without being aware of what the data shows. Public sector professionals have every right to choose a different path based on their own expertise, but they do not have the right to be ignorant of what the data says. That mindset starts at the top.

Second, they need to recognise that within their organisations, they may well have data analysts who are frustrated data scientists. Those analysts need to be freed up from working on monthly dashboards and KPI reporting, and instead be given the time to work with service managers and frontline staff on higher value problems.

Local authorities need to link their own data

Local authorities need to be able to link their own data. Our experience has shown that public sector organisations that cannot link datasets across their own internal IT systems find it almost impossible to collaborate with other organisations using data. Specifically, they need to be able to link datasets about addresses (using the UPRN, the Unique Property Reference Number) and about people, creating golden records of geospatial and individuals’ data.
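As a sketch of why that matters (the tables and column names below are invented, not from any real system): when two internal systems share the UPRN as a key, building a golden record is a straightforward join; without a shared key, you are left fuzzy-matching free-text addresses, which is slow and error-prone.

```python
import pandas as pd

# Hypothetical extracts from two internal systems, both keyed on the
# Unique Property Reference Number (UPRN).
housing = pd.DataFrame({
    "uprn":    [100021, 100022, 100023],
    "address": ["1 High St", "2 High St", "3 High St"],
    "tenure":  ["private rented", "owner occupied", "private rented"],
})
environmental_health = pd.DataFrame({
    "uprn":       [100021, 100023],
    "open_cases": [2, 1],
})

# With a shared key, the golden record is a one-line join.
golden = housing.merge(environmental_health, on="uprn", how="left")
golden["open_cases"] = golden["open_cases"].fillna(0).astype(int)
print(golden)
```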

We need consistent legal advice across the public sector

We need consistent legal advice across the public sector. We need to end the current, maddening situation where every public sector body individually commissions its own legal advice and comes up with different interpretations of the same legislation. The introduction of GDPR is a chance to get this right – interpreting the new regulations in a common way for the benefit of all.

In short, if we want to have fun with the more exciting aspects of data science, we first need to fix the plumbing.

So what of those more exciting aspects?

The promise of artificial intelligence

Nesta is an innovation foundation: we are always looking ahead to emerging trends. One of the developments of most interest to those wishing to make smarter use of public sector data is the rise of artificial intelligence.

There is, no doubt, a huge amount of hype in this field (see my blog on how to avoid it here). However, there do appear to be ways in which AI really could deliver significant value for governments and the wider public sector, as Geoff Mulgan outlined in a recent article.

To my mind, the area that looks to have the most potential (for both good and bad) is where machine learning is used to enhance algorithmic decision making.

Why might this be important?

Consider that we often talk about the need for evidence-based policy making. It’s all well and good for politicians and policymakers to use evidence at a macro level when designing policy, but it will fail to achieve much if poor decisions are taken at the front line. The real effectiveness of each public sector organisation is the sum total of all the thousands of little decisions made by its staff each and every day.

The promise of AI is that we could have evidence-based decision making in the field: helping frontline workers make more informed decisions in the moments when it matters most, based on an intelligent analysis of what is known to work.

Clearly, there’s huge scope for this to go badly wrong, especially where algorithms are used to inform decisions that could have a significant impact on a group or individual.

In my view, it’s unacceptable to use an algorithm for a purpose for which it was never designed, like the way some private sector firms tried to use individuals’ credit scores to determine whether to hire them.

It’s unacceptable to deploy an algorithm without first considering the biases that will inevitably exist in the training data and putting in place measures to mitigate them.

And it’s unacceptable not to be transparent about the data that an algorithm uses to make an individual decision.
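On the second of those points, here is a minimal illustration (with made-up data) of one basic pre-deployment check. It is a sketch, not a sufficient audit: real bias testing needs multiple metrics and domain scrutiny.

```python
import pandas as pd

# Hypothetical model outputs alongside a protected characteristic.
results = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "B"],
    "flagged": [1,   0,   0,   1,   1,   1,   0,   1],
})

# Demographic parity check: does the model flag one group far more often
# than another? A large gap is a prompt to investigate the training data,
# not proof of unfairness on its own.
rates = results.groupby("group")["flagged"].mean()
print(rates)
print("Disparity ratio:", round(rates.min() / rates.max(), 2))
```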

To help provoke further discussion, I recently published a draft Code of Principles for public sector use of algorithmic decision making. I’ve already received a huge amount of helpful feedback on how to improve it, and I’d welcome yours at bit.ly/NestaCode.

That code has ten principles, but they essentially boil down to one hunch: that it’s not ok to use AI to inform a decision if the person using it could not reasonably understand its basic objectives, function and limitations. We cannot allow the public sector to become littered with magical black boxes whose operations are essentially unknowable to those expected to use them.

Get this wrong too many times, and we risk a collapse in public trust that makes the use of such technologies untouchable for years. Recent weeks have highlighted how even some of the largest companies in the world can face a backlash when they get this wrong.

That’s a mistake the public sector can ill afford.

When it comes to government use of data, Nesta will continue to be involved in helping communicate the basic lessons around the use of data, while exploring the best ways to embrace new technologies.

From analytics to AI, there is so much more to do.

Find me on Twitter
