Using Generative Artificial Intelligence Large Language Models – Do’s and don’ts

Produced in partnership with ALGIM

Authors and contributors: ALGIM, LOLA, Martin Ferguson, Mike Manson
Socitm infographic - Using Generative AI and LLMs

In partnership with ALGIM (Association of Local Government Information Management) and LOLA (Linked Organisation of Local Authority ICT Societies), this infographic provides a quick reference guide for users of Generative Artificial Intelligence (AI) Large Language Models (LLMs) such as ChatGPT, Bard, Bing and similar tools.

Why does it matter?

Generative Artificial Intelligence (AI) is evolving fast and is being rapidly promoted by large technology organisations, all competing to be first to market, yet without legal or regulatory oversight. The technology is now appearing within the tools, systems and processes organisations use, often arriving through upgrades or updates, and is being implemented before its uncertainties, risks and wider implications are well understood.

AI and Generative AI explained

Artificial Intelligence

AI is the ability of machines or software to perform tasks that would normally require human intelligence. It can process data, learn from it, and make decisions or predictions based on that data. AI encompasses many different types of systems and approaches to harnessing machine intelligence, including rule-based AI, machine learning, neural networks, natural language processing and robotics.

Generative AI and Large Language Models (LLMs)

Generative AI learns from data about existing artifacts in order to generate new variations of content (including images, video, music, speech and text).

LLMs are a type of Generative AI that use ‘deep learning’ techniques and massive data sets to understand, summarise, generate and predict text-based content.

Purpose

These ‘do’s and don’ts’ provide guidelines for the use of Generative AI LLMs (such as ChatGPT, Bard, Bing or similar tools) by councils, charities and any other organisations providing local public services. They apply to all stakeholders, including but not limited to: employees, contractors, developers, vendors, temporary staff, consultants, councillors and trustees.

Do…

  • Do maintain human oversight and responsibility for making final decisions on output produced
  • Do notify your manager and disclose that Generative AI LLMs have been used to generate output
  • Do use responsibly and ethically
  • Do use in accordance with relevant organisation policy
  • Do comply with relevant laws and regulations
  • Do specify the definitions and scope of your prompts with care
  • Do use to create draft briefings, reports, presentations, customer responses, etc.
  • Do use to improve and refine existing content
  • Do use to analyse publicly available data
  • Do fact-check material generated by Generative AI LLMs
  • Do be aware of the potential for disinformation and scams being generated
  • Do take care to avoid use of output that may breach copyright or intellectual property rights
  • Do be aware of risks including accuracy, bias, discrimination, confidentiality and security

Don’t…

  • Don’t use to record and process confidential data and information
  • Don’t use to store or release non-public records
  • Don’t use for private individuals’ records
  • Don’t let go of moral and ethical responsibility for output
  • Don’t use if you are in doubt about the security of the data or information being input
  • Don’t assume that all of the output generated is factually correct
  • Don’t use if the data sovereignty practices of the Generative AI LLM supplier contravene any applicable legal and/or regulatory requirements

This document, or parts of it, may be used by any non-profit or public body to support us all in ensuring our use of AI is fair, legal and safe. You accept that Socitm, ALGIM and LOLA can have no responsibility whatsoever for any detriment or loss arising from your use of this document. Any use of the document should be attributed, and it may not be used for commercial purposes.
