Consultation on a pro-innovation approach to AI regulation

The Local Government Association (LGA), Society for Innovation, Technology and Modernisation (Socitm) and Society of Local Authority Chief Executives and Senior Managers (Solace) joint consultation response.

Since publication of the Government White Paper in late March 2023, there has been considerable public interest in generative AI, and reports of changes to the government’s position. Mindful of these developments, our response has been prepared through engagement with councils and is broader than the consultation questions proposed.

Our response focuses on risks and opportunities for local government and provides reflections on principles that should be applied to a future regulatory regime.

About our organisations

Socitm

The Society for Innovation, Technology and Modernisation (Socitm) is a membership organisation of more than 2,500 digital leaders engaged in the innovation and modernisation of public services. Established for more than 30 years, our network provides a strong voice, challenges convention, and inspires change in achieving better place-based outcomes for people, businesses and communities.

LGA

The Local Government Association (LGA) is the national voice of local government. We are a politically led, cross-party membership organisation, representing English councils. Our role is to support, promote and improve local government, and raise national awareness of the work of councils. Our ultimate ambition is to support councils to deliver local solutions to national problems. 

Solace

The Society of Local Authority Chief Executives and Senior Managers (Solace) is the UK’s leading membership network for public sector and local government professionals. We currently represent over 1,600 members across the UK, with regional branches that host events such as regional development days, skills days and networking opportunities.

Key messages

  • Local government is aware of the tremendous opportunities AI offers, both in the delivery of public services and in streamlining business operations, but is also cautious about the risks it poses.
  • Given the significant resource and funding constraints in local government, and rising resident expectations, councils need to be part of an AI-powered future in the UK and must be supported to participate.
  • Risks identified include insufficient data foundations, a lack of capacity or knowledge within information governance and data protection teams, the perpetuation of digital exclusion and wider forms of exclusion, insufficient knowledge across different council business areas, a lack of transparency from suppliers, job losses, and the impact on resident trust if AI is not implemented transparently and appropriately.
  • Any future regulatory regime should guard against these risks, foster public trust, be people-centred, be underpinned by an ethical framework, and prioritise transparency and accountability. Regulation should be accompanied by wider forms of support, including guidance for specific business areas such as information governance and procurement teams, standards for the development of technology, requirements for transparency in AI-powered products, and an approved list of vetted suppliers in high-risk contexts.
  • Continuous cross-sector engagement is necessary to understand ongoing risks and opportunities, and there should be a formal role for the local government community in UK-wide AI governance and regulation. In support of this, we suggest that the LGA becomes a member of any cross-sectoral forums, such as the Foundation Model Taskforce, to ensure the sector is fully represented.

Introduction

The LGA, Socitm, and Solace are pleased to respond to this consultation and have done so through engagement with councils. As set out above, our response is broader than the questions proposed, focusing on risks and opportunities for local government and on principles that should be applied to a future regulatory regime. We would welcome further engagement with central government, both on issues beyond the scope of the consultation and as AI public policy develops.

In 2019, the LGA commissioned research into the use of predictive analytics across local government. Informed by this research, the LGA produced guidance for councils on using predictive analytics to unlock the transformative potential of AI, and has been supporting continuous improvement in this area through the Advanced Predictive Analytics Network. Given the high levels of interest within the sector, we will continue to facilitate opportunities for councils to learn good practice from each other and from the application of AI to local service delivery in other contexts.

The Information Commissioner’s Office (ICO) recently undertook an inquiry into the use of AI by local authorities, specifically on whether data was being handled appropriately, lawfully, and fairly in the delivery of services, particularly welfare and social support. The inquiry found no evidence to suggest that claimants are subjected to any harm or financial detriment as a result of the use of algorithms or similar technologies in welfare and social care, and that there is meaningful human involvement before any final decision is made in these areas.

The Equality and Human Rights Commission (EHRC) has also been working with around 30 local authorities to understand how they are using AI to deliver essential services and how they apply the public sector equality duty in doing so. The EHRC recently published guidance on using the duty to help local authorities prevent discrimination in the delivery of AI-informed services.

Opportunities and risks for local government

The sector is enthusiastic about the opportunities AI presents, both in increasing council capacity and in providing intelligence and insights that will improve public services. There is also a risk that, if councils do not use AI, they will fail to meet resident expectations and local government will be left behind.

Public trust is of paramount importance, and the use of AI within councils poses a risk to that trust if not managed appropriately. In a 2019 survey by the Open Data Institute (ODI) and YouGov, 87% of respondents said it was very important that organisations handle data about them ethically, yet only 30% believed central government did so, and only 31% believed the same of local government.

Information governance officers in councils are highly skilled but overstretched, and introducing AI systems without clear guidance and extra capacity for information governance teams will be challenging.

The effective and ethical application of AI is highly dependent on the integrity and accuracy of the data it is given. Local government handles an enormous amount of citizen data relating to children’s services, social care, democracy (particularly with regard to election process security), housing, and welfare. Maintaining the highest standard of data protection is vital to sustaining public trust, and data protection officers and information governance leads within councils are crucial in ensuring the lawful and fair processing of data.

The use of AI may also perpetuate digital exclusion: a recent CDEI public attitudes tracker found that those with lower levels of ‘digital familiarity’ reported greater fears and concerns around the application of AI.

Increasing digital familiarity and skills will help address these fears and build trust amongst residents and council staff. Equally, councils reported that the unequal use of AI could perpetuate wider forms of exclusion; one example given was that the use of LLMs in applying for jobs and developing CVs puts those who do not use these technologies at an automatic disadvantage.

The potential for AI in developing insight and intelligence to strengthen decision-making in local public services is enormous. However, the technology should augment, rather than replace, human decision-making, especially higher-risk decision-making such as in the delivery of social services.

Communicating how decisions are made, and how AI augments rather than replaces decision-making, will be vital in retaining public trust and protecting the integrity of local service delivery. A recent Ada Lovelace Institute and Alan Turing Institute public attitudes report found that, despite high levels of support for the use of AI in healthcare (for example, in cancer detection), 56% of respondents were concerned about over-reliance on technology rather than professional judgement, and 47% were concerned about the difficulty of knowing who is responsible for mistakes when the technology is used.

As local government considers how to achieve savings, often in time-pressured contexts, AI poses a risk to jobs. Consideration must be given to how to mitigate this employment risk and its impact on the communities in which councils operate. Any employment impact on communities is likely to drive up demand for local public services that are already stretched.

Training is essential to mitigating the risks associated with AI and should be tailored to business and service areas. For example, procurement leads will have different needs from HR teams, and it is important that everyone within a council understands how to use AI effectively and ethically in their own business area. This requires further financial support from Government to ensure that training is high quality, kept up to date in a rapidly changing context, and includes peer-to-peer learning exchanges, given the complexity of local government.

Councils agreed that applying an ethical framework across the whole organisation is vital in creating a solid foundation, and that different business areas should inform what that framework looks like so that it remains relevant to all of them.

There are concerns both over a lack of transparency around how some products incorporate AI and over suppliers targeting service managers who lack the technical knowledge to make an informed decision.

More widely, a mechanism should be established for communicating the lessons from central and local government AI innovation and collaboration, with the presumption that this would support wider technology transfer between the public and private sectors.

Principles for a regulatory regime

Councils agreed with the Government that there is a need to balance regulation with innovation in the development and use of AI. To achieve this balance, any regulatory approach should be underpinned by a strong ethical framework and be people-centred in its application. There was wide support for the use of sandboxes to trial innovative approaches; some councils are already using them to innovate internally.

Trust and transparency are key to any regulatory regime, as is the alignment of ultimate accountability, authority, and responsibility with a human decision-maker. Protection against AI decision-making without human intervention or oversight is felt to be of paramount importance, as is the explainability of the technology. The White Paper argued that one of the two reasons AI needs to be regulated is to address questions of liability in decision-making; ensuring that AI remains an assistive technology would address this concern and must be considered in the Government’s regulatory approach. Councils noted that the public sector equality duty is vital in the development and implementation of AI, and support should be given to councils in applying it.

To support the ethical application of AI and to foster public trust, residents must be assured that all new technology is underpinned by the highest standards of privacy and data security. This corresponds with a finding from the Ada Lovelace Institute and Alan Turing Institute public attitudes tracker on AI, in which data security and privacy were felt to be the greatest risks of data use in society.

Protecting against the risk that the application of AI in public services perpetuates digital and wider forms of exclusion must be an integral part of any future regulatory regime.

Regulation needs to be adaptable enough to meet different council contexts and service needs. For example, in response to the EU’s approach of banning facial recognition AI technology, one council noted that it uses facial recognition and sentiment tracking at a suicide hotspot in its community; the data is not stored, but the technology allows the council to act quickly to save lives. Context also matters for LLMs: inputting sensitive data into ChatGPT or a similar tool to garner insight and develop a report would carry significantly more risk than other uses.

Poorly implemented regulation could increase the digital divide between sectors. This would create a barrier to councils drawing upon the opportunities the technology provides and prevent councils from meeting residents’ rising expectations. To reduce this risk, standards and sector-specific guidance must be considered alongside regulation.

The cost of AI systems could prevent councils from using them, and support should be provided to local government to prevent its exclusion from an AI-powered future. This should be aligned with the Government’s vision for Levelling Up, ensuring that it is not only the more digitally mature councils that can benefit from AI.

There should be a multi-sectoral approach to the assessment of risk, with the Government drawing on the expertise of industry, civil society, academia, and councils. Given the fast pace of the technology, the regulatory regime should be adaptable, and cross-sectoral engagement would help ensure the regime is sufficiently responsive and relevant.

In relation to specific risks with suppliers in high-risk contexts, such as social care, the Government should consider an approved set of AI partners. A list of pre-vetted organisations would reduce risk and increase value for money, though work would need to be done to ensure it does not create barriers for smaller firms. A trusted accreditation scheme should also be established for suppliers, to foster assurance in the quality of the AI technology purchased.

We welcome the news that OpenAI, DeepMind, and Anthropic have agreed to open their AI models to the Government for research and safety purposes. However, more must be done to introduce transparency and accountability for commissioners and purchasers of AI-powered technology, such as clear industry standards. To support councils in procuring these technologies safely, procurement guidance and legislation should incorporate an AI focus.

Underpinning all of these points is a clear need for a common set of standards on the use of AI across public sector bodies and in the delivery of frontline public services. These should be formed around a common statement of AI principles across central and local government, based on the Nolan principles and the recommendations of the Committee on Standards in Public Life’s report on AI and Public Standards.

Key contacts

Socitm

Martin Ferguson
Director of Policy & Research
martin.ferguson@socitm.net

LGA

Jenny McEneaney
Senior Improvement Policy Adviser: Cyber, Digital, and Technology
jenny.mceneaney@local.gov.uk

Solace

Alison McKenzie-Folan
Spokesperson for Digital Leadership and Chief Executive of Wigan Council
a.mckenzie-folan@wigan.gov.uk