After the “problem algorithm” headlines — learning the lessons

Behind the headlines on Ofqual U-turns and councils scrapping prediction algorithms in benefit and welfare decisions is a wider debate about how our public administrations use technology and data.

Written by William Barker, Associate Director (Digital Ethics and Cyber Resilience) at Socitm, and Mark Lumley, Socitm Vice President and Director of Digital & IT at the London Borough of Hounslow.

Current events follow recent Socitm work and studies by the Cardiff Data Justice Lab, the Committee on Standards in Public Life, and the Information Commissioner on rethinking the challenges of AI and data analytics use in the public sector. Together they highlight the need for a forensic look at the way in which we use smart information systems to inform decision-making in public services.

Navigating our way through the complexity of these issues is not easy, but research has been done across various disciplines, and a stark conclusion emerges: if you don’t fully understand how a program works and what it will do when used with your data in your social, operational and political context, you are running a huge risk. If it will affect decisions about real people’s lives, you will want to think long and hard about whether using it is a good idea.

As you would expect, official guidance to the public sector stresses that the use of AI and data analytics should be considered, proportionate and ethical, rather than the automatic first choice in all circumstances. But if we are to do things successfully, it is essential that we learn to identify, define and understand potential algorithmic risks from the outset, so that we avoid discriminatory and adverse personal outcomes. The following steps start to provide a basis for how we do that.

1. Question

Is the proposed approach proportionate and ethical? This requires forensic questioning.

  • Is the system right for the purpose and better value than other solutions?
  • Will its use in the intended context be lawful, safe, acceptable to stakeholders and beneficial, without harmful side effects?

2. Test

Rigorous testing in the operational setting with relevant data is critical. An algorithm’s use needs to be examined case by case, addressing the following alongside the questions above.

  • The precise calculations and data processing methods used.
  • The exact nature of the data used in the development and testing of the program, and in its intended operation.
  • Its purpose in relation to the social, political, professional, environmental and operational process and practice contexts within which it will be used — these are where consequences will be felt.

3. Review

It is essential to underpin these two initial steps with a robust set of checks and balances built into accountable governance, including political and executive scrutiny, due diligence, and appropriate standards.
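The three steps above can be recorded in a structured way. The sketch below is purely illustrative and not an official Socitm or government tool: the class name, check names and thresholds are all hypothetical, chosen only to show how the Question, Test and Review stages of an algorithmic risk assessment might be tracked so that nothing is deployed with outstanding checks.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmReview:
    """Hypothetical record of a Question / Test / Review assessment."""
    system_name: str
    # 1. Question: is the approach proportionate and ethical?
    fit_for_purpose: bool = False
    lawful_and_safe: bool = False
    acceptable_to_stakeholders: bool = False
    # 2. Test: evidence from the operational setting
    tested_with_operational_data: bool = False
    data_provenance_documented: bool = False
    context_of_use_assessed: bool = False
    # 3. Review: governance checks and balances
    governance_signoff: bool = False
    notes: list = field(default_factory=list)

    def outstanding(self):
        """Names of checks that have not yet passed."""
        checks = {k: v for k, v in vars(self).items() if isinstance(v, bool)}
        return [name for name, passed in checks.items() if not passed]

    def ready_to_deploy(self):
        """Deployment is only defensible once every check has passed."""
        return not self.outstanding()

review = AlgorithmReview(system_name="example-triage-model")
review.fit_for_purpose = True
review.lawful_and_safe = True
print(review.ready_to_deploy())  # False: several checks still outstanding
print(review.outstanding())
```

A structure like this makes scrutiny auditable: the review object, not an individual's judgement, is what political and executive oversight examines before sign-off.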

Join us at Share National on 24 and 25 September 2020 to discuss the use of algorithms, and more, in detail. Hosted by Vice President Mark Lumley of our Presidents Team, hear from and talk with your peers, expert guest speakers and practitioners on the ethical use of emerging technologies and data.
Twice-daily sessions at 10:30–12:00 and 14:00–15:30 will cover digital ethics, security, emerging technologies and harnessing data.