Why does it matter?
Generative Artificial Intelligence (AI) is evolving fast and is being rapidly promoted by large technology organisations, all competing to be first to market, yet without legal or regulatory oversight. The technology is now appearing in the tools, systems and processes that organisations use, often arriving through upgrades or updates. It is being implemented without consideration of its uncertainties and risks, and its wider implications are not well understood.
AI and Generative AI explained
AI is the ability of machines or software to perform tasks that would normally require human intelligence. It can process data, learn from it and make decisions or predictions based on that data. AI encompasses many different types of systems and approaches to harnessing machine intelligence, including rule-based AI, machine learning, neural networks, natural language processing and robotics.
Generative AI and Large Language Models (LLMs)
Generative AI learns from data about existing artefacts in order to generate new variations of content (including images, video, music, speech and text).
LLMs are a type of Generative AI that use ‘deep learning’ techniques and very large data sets to understand, summarise, generate and predict text-based content.
These ‘do’s and don’ts’ provide guidelines for the use of Generative AI LLMs (such as ChatGPT, Bard, Bing or similar tools) by councils, charities and any other organisations providing local public services. They apply to all stakeholders, including but not limited to: employees, contractors, developers, vendors, temporary staff, consultants, councillors and trustees.
- Do maintain human oversight and responsibility for making final decisions on output produced
- Do notify your manager and disclose that Generative AI LLMs have been used to generate output
- Do use responsibly and ethically
- Do use in accordance with relevant organisation policy
- Do comply with relevant laws and regulations
- Do specify the definitions and scope of your prompts with care
- Do use to create draft briefings, reports, presentations, customer responses, etc.
- Do use to improve and refine existing content
- Do use to analyse publicly available data
- Do fact check material generated by Generative AI LLMs
- Do be aware of the potential for disinformation and scams being generated
- Do take care to avoid use of output that may breach copyright or intellectual property rights
- Do be aware of risks including accuracy, bias, discrimination, confidentiality and security
- Don’t use to record and process confidential data and information
- Don’t use to store or release non-public records
- Don’t use for private individual records
- Don’t let go of moral and ethical responsibility for output
- Don’t use if you are in doubt about the security of data or information being input
- Don’t assume that all of the output generated is factually correct
- Don’t use if data sovereignty practices of the Generative AI LLM supplier contravene any applicable legal and/or regulatory requirements
This document, or parts of it, may be used by any non-profit or public body to support us all in ensuring our use of AI is fair, legal and safe. You accept that Socitm, ALGIM and LOLA can have no responsibility whatsoever for any detriment or loss arising from your use of this document. Any use of the document should be attributed and it may not be used for commercial purposes.
- Are Generative AI and Large Language Models the Same Thing? (Folio3)
- What are Generative AI, Large Language Models and Foundation Models? (CSET – Center for Security and Emerging Technology)