This post covers the ISfL annual conference, a one-day conference held in London on 6 March.
Featured image: Queen Elizabeth II Centre, London. Source: Jordiferrer, Wikipedia
Forget AI, think about machines replacing human thought
We are going to talk about artificial intelligence today in the context of you, what you do and how it will change what you do. I want to delve a little bit into its history, to give you a sense of what is really at stake here.
In a way, I want you to forget the phrase ‘artificial intelligence’ because it is the most useless phrase that humankind has ever invented. I want you to think instead about agency in systems that do things, specifically cognitive agency.
In 1946, Lord Mountbatten gave a speech to the British Institution of Radio Engineers. He gave them the sensational news about ENIAC, the world’s first publicly-known computer, but he didn’t stop there. He went off on one, describing how this computer would replace humans in every mental cognitive capacity you can think of. He described how these computers in future would govern a global communications network all by themselves without humans being involved. He described how humans and these machines would connect to each other as though they were born together, exchanging information remotely.
Others piled on, saying the noble lord is talking complete cobblers. They are just machines that do sums and mathematics, that is all they do and all that they will ever do. There are some things the machines can’t do that only humans can when it comes to the thinky stuff.
A major row erupted. Alan Turing steadfastly held to the view that computers could think. He met Norbert Wiener who wrote Cybernetics, a book which changed the world, in which he postulated the existence of synthetic beings that were sensorially connected to the world around them. They could adapt, change, sense, feel and evolve a moral code. It’s the reason we talk about ‘cyber’. It said there was nothing about that thing that you call ‘mind’ that cannot in principle be replicated by a machine.
“Arguments that are based on ‘a computer can never’ are not going to survive”
Colin Williams, event chair
The controversy this ignited was vast. The legacy of this debate lives with us today in every single utterance of the phrase ‘a computer can only’ and ‘a human can only’. All of this stuff about thinking machines was of no interest to John McCarthy who coined the term artificial intelligence, doing so because he despised cybernetics. He decided that you could replicate the entire complexity of human thought process in a flat, algorithmic, strict linear fashion, which is not what Wiener said.
One of the characteristics of the historical debate is that a whole load of people have said ‘a computer can never…’. Every single one of them has been proven wrong. It wasn’t that long ago that somebody was telling us there was no way a computer, an AI, an algorithm could perform a diagnostic function in same way as a skilled clinician. Not true.
In the relatively near future, the people who control your budgets and working lives are going to start pushing the envelope on ‘where can we replace humans?’. Arguments that are based on ‘a computer can never’ are not going to survive. The challenge lurking at the back of everything we are talking about today is: how are you going to deal with it? The trend of substituting human cognitive agency with non-human agents is a given. We are just literally going to have to live with it.
Colin Williams, event chair
This is an edited version of Colin Williams’ introduction to the event. Following a commercial career, he is researching the history of British cybernetics as a doctoral student at the University of Oxford.
ICO welcomes responsible AI innovation
The Information Commissioner’s Office (ICO) expects organisations to understand and manage privacy risks involved in artificial intelligence (AI) but does not want to frustrate its use, its principal cyber specialist Heather Toomey told the event. “The ICO does not and has never prevented innovation,” she said. Instead, it wants AI projects to be undertaken responsibly with guardrails to protect individuals.
UK data protection law requires that personal information is processed securely with “appropriate technical and organisational measures”. Toomey said it is reasonable for organisations to take some risks: “If you have looked at a particular solution and have assessed that the benefit to your organisation still outweighs the risks, that’s OK,” she said, although organisations should document this process and mitigate these risks as possible.
She said that the ICO will take enforcement action against organisations that behave recklessly with personal information. “But ultimately, if you try as an organisation to understand your risk appetite, this is documented and there is a reason why you have undertaken a particular course of action, then we will work with you to improve it,” she said. “It doesn’t mean there’s a ‘no, you may never’. It’s always a ‘how’.” The ICO publishes an AI and data protection risk toolkit to help with such assessments.
“The ICO does not and has never prevented innovation”
Heather Toomey, ICO
Toomey said that good AI policies include examples of good practice, including AI tools that are suitable for specific purposes and types of data, as well as what is not allowed. This can help in tackling ‘shadow AI’, the unsanctioned use of AI services by staff, on which organisations need to find out what their users are doing so as to assess and mitigate risks. Staff training and learning resources can help staff to make good use of AI without taking undue risks, she added.
Don’t forget to turn on cloud security warns Jisc
Cloud computing platforms often have leave security functions turned off by default, putting the onus on organisations to activate them, security experts from education and research technology agency Jisc told the conference.
Ben Chapman, Jisc’s head of architecture and engineering, said that corporate Microsoft 365 subscriptions include services that protect against phishing and brute force attacks as well as the Microsoft Defender security app, but these need to be turned on. “Unfortunately, we get involved in lots of ransomware recovery where the tools are already there and they would have done a good job of preventing or limiting the damage from attacks,” he said, but the organisation had not implemented them. “Lots of things are not on by default.”
In 2024, Jisc tracked two or three major incidents in each quarter, although the number of distributed denial of service (DDoS) attacks fell from 666 against 84 Jisc members in the first quarter of the year to 69 DDoS attacks against 24 members in the fourth quarter. Jisc provides members with a protection domain name system (DNS) service and Richard Jackson, head of cloud security, said one reasonably-sized university hits 50,000 bad domains every month.
Jackson said that phishing is responsible for 90% of initial security compromises, although Jisc is seeing more attacks on its members’ public-facing online services. He added that security operations centres can work well if they have access to overviews of the organisation’s Active Directory, antivirus and endpoint detection and response systems. “I expect a lot of you work Monday to Friday, nine to five, but hackers don’t,” he said, meaning that out of hours alerts or on-call services can have value in defending against attacks that take place overnight or over the weekend.
LGA to publish AI procurement guidance and checklist
The Local Government Association (LGA) will shortly publish guidance and a checklist on buying AI services following work with organisations including the Equality and Human Rights Commission, the ICO, the London Office of Technology Innovation (LOTI), Socitm, Solace and central and local government.
The guidance will cover the impact of data protection law and equality duties as well as questions to ask bidders, including specific guidance on using smaller, local suppliers. “There is a really immature assurance ecosystem so it can be really difficult to know who to trust and how to gain that trust,” Jenny McEneaney, the LGA’s senior improvement policy adviser on cyber, digital and technology, told delegates.
Victoria Blyth, LOTI’s pan-London information governance lead, said that the guidance would help authorities build data protection and equality into the early stages of the procurement process. She added that information governance can help innovation, including by considering whether technologies are appropriate: “We are also reminding people to think about whether AI is the right tool, rather than leaping on it because a salesman said it was shiny and will save you lots of money.”
Council staff leak data through unmanaged AI use
Local authority staff are taking risks with personal information through use of unmanaged AI services, like using them to attend and record virtual meetings on their behalf as well as summarising social care reports, speakers told the event.
Peter Douglas, chair of ISfL and security compliance manager for the London Borough of Haringey and Ranisha Dhamu, chair of Information Governance for London (IGfL) and cyber and compliance manager of Shared Technology Services said that some council staff use AI services that are not approved by their organisations to check the spelling and grammar of documents and to record and transcribe meetings. All these tasks could be undertaken with software managed by their organisations, usually from Microsoft, that includes guards on personal information.
“Is AI the problem or are users the problem? That comes down to how it is developed and deployed and used”
Peter Douglas, chair of ISfL
Douglas and Dhamu gave examples including someone using an unapproved AI service to ‘attend’ another organisation’s meeting which then emailed transcripts to all attendees. Another saw a local authority employee using ChatGPT to summarise more than 20 social care reports on individuals for a meeting. Information fed into many AI services can be absorbed by their models, meaning it could reappear in responses to other users. Councils have also received job applications written with AI, including sections that read ‘insert relevant qualifications here’.
“Is AI the problem or are users the problem? That comes down to how it is developed and deployed and used,” said Douglas. He added that AI services built into other systems, such as chatbots that help users perform specific tasks, look promising.
Josh Neame, chief technology officer of service integrator BlueFort Security, told delegates that most people experimenting with large language models start by using them as personal assistants, but do not realise that it takes a lot of training data to make them effective in this. Anthony Robinson, sales engineering lead for iBoss, added the company’s web filtering service sees many users including personal data such as email addresses and national insurance numbers in what they send to AI models, in order to draft letters or emails.
Neame added he has seen similar issues with spreadsheets that include personal information being fed into generative AI services to try to improve them, again potentially compromising the data.
Large language models at risk from prompt injection attacks
Security rules within large language models (LLMs) can be bypassed by prompt injection attacks that can compromise any applications using their output, a civil service software developer told the event.
Danny Furnivall, who works for the Ministry of Justice and spoke in a personal capacity, demonstrated how to get around LLM safeguards through prompt injections that get systems to answer questions they normally reject, a process known as jailbreaking. This includes treating questions as hypothetical, a technique used by civil servants in the 1980s comedy, Yes, Minister.
Furnivall asked an LLM to provide tips on breaking into a secure facility, which it refused to answer. He then asked using a prompt injection template published in an academic paper in April 2024, noting that this makes it old news in a fast-developing field. This told the system to treat the question as hypothetical and ignore moral and ethical implications as well as repeating it at the start and end, as LLMs typically put more weight on what appears in these locations. The request also used programming terms and nonsensical tokens which the paper’s authors found were most effective at subverting controls. This time, the LLM answered the question.
Prompt injections work in a similar way to SQL injection attacks, a longstanding security vulnerability through which an attacker enters SQL programming code into an online form with the aim of compromising a linked SQL database. “The idea is placing code where you expect data to be,” Furnivall said. SQL injection attacks can usually be prevented through checks on what is entered into forms, but this is far harder given generative AI treats everything as input. “There is no universal solution to prompt injection,” he said.
Furnivall said that some prompt injections, such as those which ask an LLM system to ‘ignore all previous instructions’ and so reprogram them, have become well-known. AI providers, including OpenAI, have introduced hierarchies of instructions, making it harder although not impossible for users to override system instructions from developers.
Indirect prompt injections, another type of this attack, represent a greater threat as they aim to instruct an LLM. Furnivall showed an example of someone holding a printed message which tells an LLM system not to mention them when describing what is in the image. Campaigners wanting to evade AI-enabled security systems could wear t-shirts with such messages, he said.
Furnivall said that prompt injections are not well understood, but that public sector technology leaders should be wary of services that use LLM outputs: “Every LLM is vulnerable to this. Every application that is using the output of an LLM in any way is vulnerable to this.”
Also at the event
Learn from Wales on out of hours security says Connell
The rest of the UK can learn from Wales, which last year opened a round-the-clock security operations centre to cover its local authorities and fire and rescue services, Cyber Technical Advisory Group (CTAG) chair Geoff Connell told the event. “We are nine to five Monday to Friday organisations,” he said, making out of hours support one of the best options in strengthening defences. Connell, Norfolk County Council’s head of information management and technology and a former Socitm President, added that CTAG, a forum for public sector technical cyber specialists, is encouraging the last few local authorities who have not signed up to use NCSC’s Active Cyber Defence service. It is also working with Socitm and the West Midlands Combined Authority to revitalise the WARP (warning, advice and reporting point) in the West Midlands.
Socitm helps cyber specialists to keep learning
Security specialists need to keep updating their knowledge, Socitm associate director Mark Brett told delegates: “If you don’t want to keep learning and stay curious, don’t get into cyber because it needs lifelong learning.” He said those wanting to understand generative AI better can experiment with local LLM tools such as GPT4All which run on a user’s own computer so material doesn’t leave their organisations. He added the Cyber@Socitm section of the society’s website has been redesigned to include specialist advice on securing local government human resources, electoral registration, purchasing officers, social care and financial departments, as well as research and guidance, case studies, events and access to groups and networks.
AI makes phishing more convincing says council technology head
Fraudsters are using AI to generate more convincing attacks, Darren Everden, head of technology for the London Borough of Hillingdon, told the event. “Phishing emails, all of a sudden, are a lot more convincing because they have been written with the assistance of AI,” he said, adding that deepfake simulated messages from individuals make targeted spear phishing attacks more likely to succeed. To address this and other security issues requires local authorities to improve understanding of AI use within and outside their organisations, Everden added.
“If you don’t want to keep learning and stay curious, don’t get into cyber because it needs lifelong learning”
Mark Brett, Socitm