Call for evidence: Large Language Models inquiry

Written evidence on LLMs in government submitted by the Local Government Association, the Society for Innovation, Technology and Modernisation, and the Society of Local Authority Chief Executives and Senior Managers. The deployment of AI, particularly LLMs, offers significant opportunities in both the delivery of public services and the streamlining of business operations, but it also carries risks. Given the significant resource and funding constraints in local government and the growing expectations and responsibilities being placed on councils, they need to be part of an AI-powered future in the UK and must be supported to participate.

About us

The Local Government Association (LGA) is the national voice of local government. We are a politically led, cross-party membership organisation, representing English councils. Our role is to support, promote and improve local government and raise national awareness of the work of councils. Our ultimate ambition is to support councils to deliver local solutions to national problems.  

The Society for Innovation, Technology and Modernisation (Socitm) is a membership organisation of more than 2,500 digital leaders engaged in innovation and modernisation of public services. Established more than 30 years ago, our network provides a strong voice, challenges convention and inspires change in achieving better place-based outcomes for people, businesses and communities.

The Society of Local Authority Chief Executives and Senior Managers (Solace) is the UK’s leading membership network for public sector and local government professionals. We currently represent over 1,600 members across the UK and have regional branches across the country which host a range of events, such as regional development days, skills days and networking opportunities.

Key messages

  • Local government is aware of the significant opportunities that the deployment of AI, particularly LLMs, presents in both the delivery of public services and the streamlining of business operations, but is also cautious about the risks posed.
  • Given the significant resource and funding constraints in local government and the growing expectations and responsibilities being placed on councils, they need to be part of an AI-powered future in the UK and must be supported to participate. 
  • Opportunities with LLMs include supporting contact centre staff, analysing vast amounts of information such as consultation feedback, supporting software development, and easing administrative burdens in certain jobs, making recruitment into those roles easier.
  • Risks identified include a lack of capacity or knowledge within information governance and data protection teams, the perpetuation of digital exclusion and wider forms of exclusion, insufficient knowledge across business areas in councils, a lack of transparency from suppliers, and an impact on resident trust if not implemented transparently and appropriately. Information governance is where the biggest risk lies with the deployment of LLMs specifically, while insufficient data foundations are the biggest risk with AI systems more broadly.
  • Deployment of AI and LLMs needs to be underpinned by a usability framework that operationalises ethical principles such as transparency, accountability and explainability, as well as considering legal obligations and what is technologically possible.
  • A future regulatory regime should guard against these risks, foster public trust and be people-centred, recognising the importance of awareness and capabilities required by policy and decision-makers, the workforce and the public. The twelve challenges of AI governance, as outlined in the recent Science, Innovation and Technology Committee report, need to be carefully considered by government alongside a proposed thirteenth: the security challenges in the deployment of AI.
  • Regulation needs to be accompanied by wider forms of support, including guidance for specific business areas such as information governance and procurement, and standards for the development of technology. 

Introduction and context 

The LGA, Socitm and Solace are pleased to respond to this call for evidence and have done so through engagement with councils. Since the government’s white paper ‘A pro-innovation approach to AI regulation’ was published in late March 2023,1 there has been considerable public interest in generative AI, particularly Large Language Models (LLMs). Councils have been responding rapidly to the widespread availability, interest and early-stage use of LLMs from political leaders, staff and residents.

This response focuses on the risks and opportunities of AI generally and LLMs specifically for local government, and provides reflections on principles that should be applied to a future regulatory regime, as requested by the Inquiry. It also highlights the support local government needs to deploy LLMs in a secure and ethical way, enabling more efficient and effective public service delivery in the context of the significant capacity challenges faced by councils.

Where LLMs are being used, it is at an early exploratory stage in various contexts, such as to support administrative tasks internally, to power chatbots, to support contact centre staff, in translation services, and to support software development where councils develop their own software in-house. Given the widespread availability and accessibility of these tools, councils have been responding rapidly to govern their use. Depending on the risk appetite of the council at this early stage, some have banned access to LLMs, including ChatGPT and Bard, through council networks and on council devices; others have developed a more nuanced approach where service leads can undertake a Data Protection Impact Assessment and apply by exception to senior management to use the tool in service delivery; and others have not banned them but are developing policies and exploring existing governance structures, or developing new ones, to ensure their safe and ethical use.

On the broader use of AI, the LGA has supported councils in their use of predictive analytics since 2019, when a survey it carried out found that the technology was in early use across the sector. The LGA subsequently published guidance for councils on how to use predictive analytics to unlock the transformative potential of AI and has been providing support for continuous improvement in this area through the Advanced and Predictive Analytics Network.

To help councils respond rapidly to the governance challenges related to LLMs, Socitm has published a sample corporate policy for the use of generative AI in local government2 and a ‘Generative AI: Do’s and Don’ts Guide for Local Government’, developed with its partner association in New Zealand, the Association of Local Government Information Management.

There has been widespread interest in local government’s use of AI in the delivery of local public services. The Information Commissioner’s Office (ICO) recently undertook an inquiry into the use of AI by local authorities, specifically on whether data was being handled appropriately, lawfully and fairly in the delivery of services, particularly welfare and/or social support. The ICO inquiry found no evidence to suggest claimants are subjected to any harm or financial detriment because of the use of algorithms or similar technologies in welfare and social care, and that there is meaningful human involvement before any final decision is made in these areas.

The Equality and Human Rights Commission (EHRC) has also recently been working with around 30 local authorities to understand how they’re using AI to deliver essential services and their use of the public sector equality duty in its implementation. The EHRC also recently published guidance on the use of the public sector equality duty to support local authorities to prevent discrimination in the delivery of AI-informed/powered services. It’s important to note that both these inquiries concluded before the widespread availability of LLMs.

Opportunities and risks for local government

Broadly speaking, the sector is enthusiastic about the opportunities AI and LLMs present if the risks are effectively managed, both in increasing council capacity, at a time of rising demand on public services and a lack of resources to keep pace with that demand, and in providing intelligence and insights that will improve public services. Some examples include deployment in the analysis of large amounts of consultation feedback to help improve the understanding of public consultation responses, in the provision of plain language advice and guidance to citizens seeking information and support from councils, and in refining and organising large amounts of unstructured data for the benefit of citizens and the council itself. There is a risk that if councils do not use AI, they will fail to meet resident expectations. There are also concerns that local government may not be able to afford to procure the digital infrastructure necessary to deliver LLM/Generative AI solutions, and that local government could be left behind.
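As a purely illustrative sketch of the consultation-analysis use case above (and not a description of any council’s live system), such a workflow might look like the following. The OpenAI Python SDK, the model name and the sample responses are assumptions for illustration only; any real deployment would first require a Data Protection Impact Assessment, supplier assurances and human review of the output.

```python
# Illustrative sketch only: drafting a thematic summary of consultation
# feedback with an LLM. Assumes the OpenAI Python SDK (v1) and a hypothetical
# list of responses; no personal or sensitive data should be sent to an
# external model without appropriate governance in place.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

responses = [
    "The new bus route has made it much easier to reach the town centre.",
    "Library opening hours are too short for working families.",
    "More frequent bin collections are needed in the summer months.",
]

prompt = (
    "Summarise the main themes in the following consultation responses, "
    "in plain language, for a council report:\n\n"
    + "\n".join(f"- {r}" for r in responses)
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any suitable model could be substituted
    messages=[{"role": "user", "content": prompt}],
)

draft_summary = completion.choices[0].message.content
print(draft_summary)  # an officer would review and edit this draft before any use
```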

Councils agree that where AI more broadly and LLMs specifically are deployed, this should be done ethically, underpinned by the principles of transparency, accountability and explainability. We believe this should be supported by a usability framework that helps councils to operationalise these principles, as well as by adherence to legal obligations and an understanding of what is technologically possible. All governance and procedures put in place should strike the right balance between maintaining ‘human-in-the-loop’ decision-making and enabling councils to seize the potential transformational benefit in service delivery. There is recognition that the level of risk associated with the use of LLMs will depend on the context in which they are deployed: summarising a public meeting, for example, carries far less risk than interacting with highly vulnerable social care clients.

Public trust is of paramount importance and the use of AI within councils poses a risk to that trust if not managed appropriately.3 Information governance is where the biggest risk lies for local government when deploying LLMs, given the enormous amount of citizen data held relating to children’s services, social care, democracy (particularly regarding election process security), housing, and welfare. The effective and ethical application of AI is also highly dependent on the integrity and accuracy of the data inputted; ensuring councils have strong data foundations is therefore essential to prevent algorithmic bias.

Maintaining the highest standard of data protection is vital in sustaining public trust, and data protection officers and information governance leads within councils are crucial in ensuring the lawful and fair processing of data. Information Governance officers in councils are well-skilled but overstretched, and the introduction of LLMs and other AI tools will be challenging without clear guidance, training to build an understanding of how the tool works, and extra capacity for information governance teams. 

Given the significant capacity challenges they face, many councils are optimistic about the opportunities that LLMs offer in addressing capacity gaps. From a workforce perspective, whilst there is optimism about the role AI, and LLMs specifically, could play in easing the administrative burden of some jobs, which may make them more appealing (for example, developing lesson plans in teaching), there are also ethical concerns about the risk that broader AI deployment poses to jobs.

Consideration must be given to how to mitigate this employment risk nationally and the impact it could have on the communities in which councils operate. Any employment impact on communities will likely drive up demand for local public services where services are already stretched.

Concerns have been raised by council Human Resources (HR) teams regarding the impact LLMs could have in perpetuating social and economic exclusion. The capacity of LLMs to generate CVs and application form responses enables digital native applicants to apply for multiple jobs at once, giving them an unfair advantage over applicants who are not digital natives. We know that, in response to these concerns, some councils are now including disclaimers in job postings stating that AI-generated applications will not be accepted.

The use of LLMs may perpetuate digital exclusion within councils, as lower levels of digital familiarity are correlated with increased fears and concerns around the application of AI.4 Increasing levels of digital familiarity and skills will support those with reported fears, building their confidence in the use of digital and AI tools,5 which will in turn build trust.

If LLMs are deployed, training is essential to mitigate the associated risks. Training should be tailored to business and service areas, and should include dedicated provision for councillors. For example, children’s social service leads will have different needs from HR teams, as well as varying risk levels associated with deployment, and it is important that everyone within a council understands how to use LLMs effectively and ethically in their business area. This requires further financial support from the government to ensure that training is high quality, up to date in a rapidly changing context, and factors in peer-to-peer learning exchanges given the complexity of local government.

There are security risks with LLMs that need to be carefully considered by the UK Parliament and Government; these were largely missing from the recently published UK Science, Innovation and Technology Committee report. LLMs store significant amounts of data, and all data inputted (including sensitive data) may be retained and used in future generated responses. Given the considerable levels of data stored by LLMs, these technologies will increasingly become a target for threat actors.6 It is important to note that LLMs, such as ChatGPT, are composed of multiple components, and the security of data relies on the security of all those components. Users, including local government, require high levels of assurance of the security practices and cyber resilience of these suppliers, and increased transparency in the components used to mitigate cyber risk.7

Concerns have been raised about the use of LLMs to develop and spread misinformation, including by OpenAI co-founder Sam Altman. LLMs potentially provide new opportunities for actors to replicate divisive social media narratives, or for the large-scale proliferation of mis- or disinformation, with implications for local democracy, public trust, the effective delivery of public services and community cohesion.8 There are also well-reported concerns regarding the use of LLMs in offensive cyber attacks: to develop more sophisticated phishing emails at higher volume, to rewrite malware to avoid detection, and to develop novel attacks.9 Councils have raised concerns that the cyber threat to local government will therefore increase, and consideration must be given by the government to how to further support the cyber resilience of local government in the face of more sophisticated attacks.

The potential for LLMs to support the collation of information, insight and intelligence to strengthen decision-making in local public services is enormous. However, technology should augment, rather than replace, human decision-making, especially higher-risk decision-making such as in the delivery of social services. There is also a need for a ‘human-in-the-loop’ to quality assure the material generated by LLMs and to protect against ‘hallucinations’. Political leaders in particular must be assured of the integrity of the information that informs their decision-making. Communicating how decisions are made, and where LLMs have been used in tandem with human oversight, will be vital in retaining public trust and protecting the integrity of local service delivery.10 It is therefore of paramount importance that councils are supported to establish ethical and effective governance and procedures that strike the right balance between addressing risks and seizing the transformational benefit of LLMs on service delivery.
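As a minimal, purely illustrative sketch of the ‘human-in-the-loop’ principle described above, the example below treats LLM output as a draft that a named officer must check before release. The class and function names are hypothetical and this does not describe any council’s actual workflow.

```python
# Minimal illustrative sketch of a 'human-in-the-loop' check on LLM output.
# All names are hypothetical; this does not describe any council's workflow.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str                      # text generated by an LLM
    source: str                    # which tool or model produced it
    approved: bool = False
    reviewer: Optional[str] = None

def approve(draft: Draft, reviewer: str, checked_for_accuracy: bool) -> Draft:
    """Record that a named person has checked the draft (e.g. for hallucinations)."""
    if not checked_for_accuracy:
        raise ValueError("Draft cannot be approved without an accuracy check")
    draft.approved = True
    draft.reviewer = reviewer
    return draft

def publish(draft: Draft) -> None:
    """Release only material that has been reviewed and approved by a human."""
    if not draft.approved:
        raise PermissionError("LLM-generated material must be reviewed before publication")
    print(f"Published (reviewed by {draft.reviewer}): {draft.text}")

# Example: an LLM-drafted meeting summary is reviewed before it is published.
summary = Draft(text="Summary of the planning committee meeting...", source="LLM tool")
publish(approve(summary, reviewer="Committee officer", checked_for_accuracy=True))
```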

Principles for a regulatory regime

Councils agree with the government that there is a need to balance regulation with innovation in any regulatory regime for AI. To achieve this balance, any regulatory approach should be underpinned by ethical principles, including transparency, accountability and explainability, be people-centred in its application and guard against risks of bias. There was wide support for sandboxes to trial innovative approaches, which some councils are already using to innovate internally.

Trust and transparency are key to any regulatory regime, as is the alignment of ultimate accountability, authority, and responsibility to a human decision-maker. Protection against AI decision-making that does not involve human intervention/oversight is felt to be of paramount importance, especially in the context of LLMs with known ‘hallucinations’, as is the explainability of the technology. The white paper argued that one of two reasons why AI needed to be regulated was to address questions of liability in decision-making; ensuring that AI is an assistive technology would address this concern and must be considered in the regulatory approach by government. Councils noted that the Public Sector Equality Duty was vital in the development and implementation of AI, and support should be granted to councils in their application of it.

To support the ethical application of AI and to foster public trust, residents must be assured that all new technology is underpinned by the highest standards of privacy and data security.11

Protecting against the risk that the application of AI in public services perpetuates digital and wider forms of exclusion must be an integral part of any future regulatory regime.

Regulation needs to be adaptable enough to meet different council contexts and service needs,12 especially as the use of LLMs does not carry the same risk in all contexts.13

Poorly implemented regulation could increase the digital divide between sectors. This would create a barrier to councils drawing upon the opportunities this technology provides and prevent councils from meeting residents’ rising expectations. To reduce this risk, standards and sector-specific guidance must be considered alongside regulation.

The cost of AI systems could prevent councils from using them, and support should be provided to local government to prevent their exclusion from an AI-powered future. This should be aligned with the government’s vision for Levelling Up and should therefore ensure that it is not just the more digitally mature councils that can take advantage of AI.

There should be a multi-sectoral approach to the assessment of risk, with the government drawing on the expertise of industry, civil society, academia, and councils. Given the fast pace of the technology, the regulatory regime should be adaptable, and cross-sectoral engagement would help ensure the regime is sufficiently responsive and relevant.

In relation to specific risks with suppliers in high-risk contexts, such as social care, government should consider an approved set of AI partners. A list of pre-vetted organisations would reduce risk and increase value for money; though work would need to be done to ensure it does not create barriers to smaller firms. A trusted accreditation scheme should be established for suppliers to foster assurance in the quality of the AI technology purchased.

We welcome the news that OpenAI, DeepMind and Anthropic have agreed to open their AI models to the government for research and safety purposes. However, more must be done to introduce transparency and accountability for commissioners and purchasers of AI-powered technology, such as clear industry standards, and to increase transparency in software development. To support councils in procuring these technologies safely, it was noted that procurement guidance and legislation should incorporate an AI focus.

Underpinning all these points is the clear need for a common set of standards on the use of AI across public sector bodies and in the delivery of frontline public services. These should be formed around a common statement of AI principles across central and local government, based on the Nolan principles and the recommendations of the Committee on Standards in Public Life’s report on AI and Public Standards.

Key contacts

Endnotes

  1. The LGA, Socitm and Solace responded to that consultation, and the response is available on the LGA website: AI: Consultation on a pro-innovation approach to Artificial Intelligence regulation (LGA) ↩︎
  2. The policy document is intended as a framework for the use of Generative AI for generating text or content for reports, emails, presentations, images and customer service communication. There is an emphasis on promoting fairness and avoiding bias, whilst operating within the existing governance structures within a council – such as notifying the council’s information governance team of the technology’s use where there are any doubts about the appropriateness of use. The policy will be subject to periodic review and updated as necessary. ↩︎
  3. In a 2019 survey by the Open Data Institute (ODI) and YouGov, despite 87% of respondents stating that it was very important that organisations interact with data about them ethically, only 30% believed central government did so and 31% trusted local government. ↩︎
  4. In a recent CDEI public attitudes tracker, it was found that those with lower levels of ‘digital familiarity’ reported having increased fear and concerns around the application of AI. ↩︎
  5. A FutureDotNow study found that 59% of the UK workforce is unable to do all of the digital tasks essential for work, a gap that will only widen with the increasing use of AI. ↩︎
  6. ChatGPT already experienced a data breach affecting an open source library it uses: ChatGPT Confirms Data Breach, Raising Security Concerns (securityintelligence.com) ↩︎
  7. For more information on building software resilience and the transparency required, please see LGA and Socitm’s response to the DSIT consultation on building software resilience: Software resilience: LGA and Socitm response to call for views on software resilience and security for businesses and organisations (LGA) ↩︎
  8. For more information on the associated risks, please see this journal article: Artificial Intelligence, Deepfakes, and Misinformation (RAND Corporation) ↩︎
  9. For more information, please see this Forbes article: Why Large Language Models (LLMs) Alone Won’t Save Cybersecurity (Forbes) ↩︎
  10. This corresponds with the finding from a recent Ada Lovelace and Alan Turing Institute public attitudes report, which found that, despite high levels of support for the use of AI in healthcare, such as in cancer detection, 56% of respondents were concerned about over-reliance on technology rather than professional judgement, and 47% were concerned about the difficulty of knowing who is responsible for mistakes when using this technology. ↩︎
  11. This corresponds with a finding from the Ada Lovelace Institute and Alan Turing Institute public attitudes tracker on AI where data security and privacy were felt to be the greatest risk for data use in society. ↩︎
  12. For example, in response to the EU’s approach of banning facial recognition AI technology, one council argued that they use facial recognition and sentiment tracking at a suicide hotspot in their community. The data is not stored but does allow for quick action on the part of the council to save lives. ↩︎
  13. For example, the inputting of sensitive data into ChatGPT or a similar tool to garner insight and develop a report would carry significantly more risk than other uses. ↩︎