By Mark Brett, Socitm Advisor: Cyber Security and Resilience and Martin Ferguson, Director of Policy and Research
AI seems to be an all-encompassing concept at the moment, but it’s actually been around for a long time.
When we discuss the likes of ChatGPT, we are talking about large language models (LLMs). These models are built on layers of natural language data. They really do mirror Isaac Newton’s famous phrase, “standing on the shoulders of giants”.
The very nature of these LLMs is that they can grow exponentially over time. The models and their underlying code are extensive, and the technology can be integrated into other software and applications using APIs.
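To make the API point concrete, here is a minimal sketch of what such an integration typically looks like. The endpoint URL, model name and field names below are illustrative assumptions only; real providers each define their own API, so check the relevant vendor documentation before building anything.

```python
import json

# Hypothetical endpoint for illustration only -- not a real provider's URL.
API_URL = "https://api.example-llm-provider.com/v1/chat"

def build_chat_request(prompt, model="example-model", temperature=0.2):
    """Build the JSON body for a typical chat-completion style LLM API call.

    The field names here (model, temperature, messages) follow a common
    pattern among LLM providers, but are assumptions, not any specific API.
    """
    return json.dumps({
        "model": model,
        "temperature": temperature,  # lower values = more deterministic output
        "messages": [{"role": "user", "content": prompt}],
    })

# An application would POST this body to the provider's endpoint with an
# API key, then parse the model's reply out of the JSON response.
body = build_chat_request("Summarise our draft AI usage policy in 100 words.")
```

In practice this is how ordinary software gains “AI functionality”: the application simply sends text to a hosted model over HTTP and embeds the reply in its own workflow.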
The emergence of AI and its potential deployment in local public services raises a number of issues. The government AI Strategy is a good starting point to understand the context and background. It’s also useful to understand where AI fits into wider data science and information management disciplines.
Some key policy issues to consider are:
- Ethics
- Bias
- Privacy
- Copyright infringement
- Extensibility
- Secondary mining of the metadata
- Access control
- Consistency of output
- Data Protection (GDPR) – the right to query automated decisions
- The right to be forgotten
- Legal implications – vicarious responsibility
Read the paper on the 5 key areas of cyber security that ChatGPT and AI could impact, for better and worse.
Large language models: “the game changer”
When we look back to the Encyclopaedia Britannica, the fount of all knowledge and scientific rigour for 250 years, it was rendered obsolete almost overnight by the internet, Wikipedia, and the cost of maintaining the old editorial and subscription model.
LLMs are the real game changer: they already contain the contents of both Britannica and Wikipedia, and they will keep scooping up the searchable content of the internet and the vast amounts of new information being pumped into it daily.
The current pace of interest in AI can be seen as a paradigm shift and one that is here to stay. AI LLMs will soon touch just about every part of technology and our lives. Yes, that’s a sweeping generalisation. However, we are now seeing many software applications becoming “AI aware” and offering “AI functionality” overnight in their new releases.
We have just seen this shift come about in the “GitBook” app that we use for the Cybersecurity Technical Advisory Group documentation development, for instance.
From knowledge to data
Over the last 30 years we’ve always believed that future power lies with knowledge workers. Now it lies with data science.
Knowledge management has become a core skill, which supports and underpins a lot of what we do nowadays in the digital world:
- Knowledge is captured then published.
- Knowledge can then be revised, refined, and passed on.
- Knowledge goes out of date.
Tacit knowledge and ephemeral knowledge are very important but cannot be mimicked by AI.
The whole construct of artificial intelligence is to keep learning continuously and to link knowledge together, in turn making new linkages. These linkages will generate previously non-existent genres and fields of research.
The emergent field of AI and Machine Learning will drive further innovations. It will be a fascinating world.
What makes it particularly interesting in our opinion is the true “Blue versus Red” thinking in information security, assurance and cyber:
- Red “adversaries” – the attacking forces: those that want to break into networks and attack things
- Blue “defenders” – the network defenders
When you blend the two together you get “Purple teams”, which are becoming a popular new discipline. This whole concept, like many of the terms cyber has borrowed, came from the military as a way of stress-testing strategy, systems and tactics.
Cyber risks and cyber opportunities
Introducing AI to cyber security and information assurance brings both the blessings and curses of its speed, persistence, and capabilities. That’s why issues around legal, ethics, bias, replicability, and privacy become so important. As the 5 key areas paper notes, the sooner we start to think about these policy issues the better we will cope with these emergent threats and opportunities.
Our Sample Corporate Policy – Use of Generative Artificial Intelligence Large Language Models including ChatGPT aims to address these issues in a practical way. Produced in collaboration with ALGIM, Socitm’s partner association in New Zealand, it provides a framework for the use of generative Artificial Intelligence (genAI) large language models such as ChatGPT, Bard, Bing or other similar tools by council employees, contractors, developers, vendors, temporary staff, consultants or other third parties.
A Word version is available for download and may be adapted by councils for use in their own domain.
From science fiction to science fact
Self-learning, self-replicating and self-healing neural networks are being built into software applications and systems that work at machine speed, and may soon outpace human cognition.
This has been the stuff of science fiction for a long time. The speaking computer in Star Trek is no longer fiction, with Siri- and Alexa-type speech engines now pervasive in many homes and work environments. Extending this to connected places, smart cities and the internet of things, we are already on the journey.
The new element is the non-technical way that we can verbalise, synthesise and create low-code and zero-code solutions. It may not be long before connected places are able to integrate AI technologies capable of defining and executing micro-service (serverless) software functions, automatically authored, integrated and deployed without human intervention.
These new applications will form themselves into automated solutions, where we will see smart software, self-deploying, learning and potentially replicating itself.
How will we incorporate these developments into our thinking, offering new approaches to strategic planning, policy frameworks, tasking, processes, and procedures?
The sooner we recognise these changes are coming, the better we will be able to both leverage the advantages and opportunities. We must also identify and mitigate the threats and defend against the vulnerabilities that will be introduced.
Some final thoughts
- This is the single most exciting and terrifying technology to emerge.
- It’s been around for years and we’ve seen it coming; however, a paradigm shift is now underway.
- Machine learning LLMs bring a whole new meaning to Isaac Newton’s “standing on the shoulders of giants”.
- The growth in AI deployment and integration is already well underway.
- Think about it in terms of AI and Moore’s Law: CPU clock speeds have plateaued, yet processors have grown more powerful, and RAM and storage have become ever cheaper.
- Once this snowball starts rolling (and it has), its momentum accelerates exponentially.
- We may not be ready for it. We must think Blue/Red = Purple Teaming.
To misuse the (apocryphal?) Chinese proverb: the best time to start learning about AI was 20 years ago; the second-best time is now!
See 5 key areas of cyber security that ChatGPT and AI could impact, for better and worse.