
Shadow AI in the public sector: innovation without oversight?

Authors and contributors: Afsha Zeb

Image source: NCSC


Artificial Intelligence (AI) isn’t coming; it’s already here.

From frontline services to back-office operations, public sector staff are quietly integrating AI into their daily work. But much of this innovation is happening off the radar, beyond formal governance. This silent shift is creating a new frontier of risk, and opportunity, for ICT leaders.

From generative AI tools like ChatGPT and Google Gemini to embedded AI in platforms such as Microsoft 365 Copilot, the transformation is accelerating, often faster than policy, process, or governance can keep up.

And it’s not just happening in innovation labs or pilot projects. Staff are using AI to write emails, summarise reports, and analyse data, frequently without official approval, training, or oversight.

This is the new face of shadow IT. And it raises urgent questions about risk, responsibility, and the future of public sector ICT leadership.

What is shadow AI and why should we care?

Shadow AI refers to the unsanctioned use of AI tools, especially free or freemium platforms, by staff acting on their own initiative, without ICT oversight, training, or governance. Like traditional shadow IT, it is often driven by good intentions: solving problems, improving productivity, or bypassing slow processes. But such use rarely takes full account of the risks involved.

Real-world examples of shadow AI include:

  • Drafting case notes or internal documents using ChatGPT
  • Copy-pasting sensitive or personal data into public-facing tools for analysis
  • Using image or speech generators for accessibility without audit trails
  • Purchasing AI-enhanced tools without ICT or procurement involvement

All of these behaviours are widespread and might be happening in your organisation. It could be you or one of your colleagues.

A 2024 Gartner survey found that 45% of public sector employees admitted to using generative AI tools without formal approval. That figure is almost certainly an underestimate.

The risks of unmanaged AI use

While AI can boost productivity and creativity, unchecked use introduces serious risks:

  1. Data privacy and security
    Public-facing AI platforms may store, retain, or even train on user input. If staff input case data, health records, or personally identifiable information (PII), even unintentionally, it could breach data protection laws like GDPR.
    Example: A council officer uses ChatGPT to summarise a social care case. The text includes personally identifiable details about a vulnerable individual. That data is now outside the organisation’s control, and potentially exposed. (A minimal pre-screening sketch follows this list.)
  2. Misinformation and bias
    Generative AI tools don’t “know” facts; they predict word sequences. Outputs may be convincing but incorrect, and they reflect biases embedded in training data. In public services, passing on inaccurate or biased information, especially in health, education, or justice, can have real-world consequences.
  3. Accountability and audit
    If decisions or communications are based on AI-generated content, who is responsible for their accuracy or impact? Without audit trails or version histories, it’s difficult to trace how decisions were made or defend them under scrutiny.
  4. Procurement and cost duplication
    Uncoordinated AI adoption leads to inefficiencies. Teams may buy access to similar tools, duplicating spending and missing out on enterprise licensing or integration benefits.
  5. Ethical and legal liability
    AI-generated outputs can raise questions around intellectual property, consent, and fairness. If a service decision is influenced by AI, and later challenged, who is legally accountable?
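
To make the data privacy risk above more concrete, here is a minimal, illustrative sketch, in Python, of the kind of pre-submission check an organisation might give staff before any text is pasted into a public AI tool. The patterns and the screen_for_pii helper are hypothetical and deliberately simple; a real deployment would rely on a vetted data-loss-prevention service rather than a handful of regular expressions.

    import re

    # Illustrative patterns only; this is not a complete or reliable PII detector.
    PII_PATTERNS = {
        "UK National Insurance number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
        "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "UK phone number": re.compile(r"\b0\d{9,10}\b"),
        "NHS number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),  # may overlap with phone numbers
    }

    def screen_for_pii(text: str) -> list[str]:
        """Return a list of likely PII types found in the text."""
        return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

    if __name__ == "__main__":
        draft = "Please summarise: Mr J Smith (NI AB123456C, jsmith@example.org) missed his review."
        findings = screen_for_pii(draft)
        if findings:
            print("Do not paste this into a public AI tool. Possible PII:", ", ".join(findings))
        else:
            print("No obvious PII found, but always apply your organisation's data policy.")

Even a lightweight check like this makes the point: data should be screened, or better still should never leave the organisation, before any AI tool sees it.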

Why staff use shadow AI

This isn’t about recklessness. In most cases, staff are trying to solve real problems, often under pressure, with limited resources and outdated tools.

Common drivers of shadow AI use include:

  • Time pressures: AI helps speed up admin tasks and reporting
  • Lack of internal tools: If supported AI services aren’t available, staff will find their own
  • Curiosity and creativity: People want to explore what AI can do
  • Perception of innovation: Using AI feels cutting-edge, and nobody wants to be left behind

But without guidance, training, or governance, these well-meaning efforts can lead to unintended harm.


What can ICT leaders do?

We can’t afford to ignore this. But banning AI won’t work; it will simply drive use further underground.

Instead, we need a mature, strategic response: one that acknowledges the innovation already happening and guides it safely.

1. Map the current landscape

Assume it’s happening and seek to understand how. Run anonymous surveys, hold discovery sessions, or engage digital champions to surface where, why, and how AI is being used unofficially.

Relevant resources:
See Socitm’s AI case studies on AI@Socitm, the LGA’s AI case study bank, the UK Government’s AI Knowledge Hub, Scottish AI Alliance case studies, and examples from the Welsh Government’s Digital Sharing Hub and the Microsoft Innovation Forum.

2. Develop a clear, accessible AI use policy

If your organisation doesn’t have an AI policy, create one, and make it usable. Include:

  • What’s permitted (and what’s not)
  • Rules around personal and sensitive data
  • Approved platforms and licensing
  • Accountability for outputs
  • Guidance for staff (e.g. always fact-check AI-generated content)

Use plain language, and co-design it with service teams so it actually gets read and followed.

3. Educate and upskill staff

AI literacy is critical. Staff need to understand the capabilities and limitations of AI tools, and how to use them ethically. Offer:

  • Training sessions or webinars
  • Drop-in AI clinics
  • Internal knowledge hubs with trusted tools and best practices

Make training contextual: show how AI can (and can’t) help in real job scenarios.

Relevant resources:
Find groups and networks to advance your skills on AI@Socitm.

4. Provide safe, supported alternatives

Don’t just say “no.” Offer “yes, but safely.” For example:

  • Roll out enterprise AI tools within platforms like Microsoft 365
  • Create an internal AI sandbox for experimentation
  • Pre-vet and approve tools that meet compliance standards

This builds trust in ICT governance and gives staff legitimate options.
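
As one illustration of what a safe, supported route might look like, the sketch below (in Python) shows a thin internal wrapper that sends staff prompts to an approved enterprise AI service and writes an audit record for each request. The call_approved_ai function and the log format are placeholders, not a real API; the point is the pattern: a single vetted route, plus the audit trail discussed under risk 3, rather than ad-hoc use of public tools.

    import json
    import logging
    from datetime import datetime, timezone

    # Audit log: who asked, when, for what purpose, and through which approved tool.
    logging.basicConfig(filename="ai_usage_audit.log", level=logging.INFO)

    def call_approved_ai(prompt: str) -> str:
        """Placeholder for the organisation's vetted enterprise AI service."""
        # In a real deployment this would call the approved platform's API.
        return f"[AI response to: {prompt[:40]}...]"

    def ask_ai(user_id: str, purpose: str, prompt: str) -> str:
        """Route a request through the approved tool and record it for audit."""
        response = call_approved_ai(prompt)
        logging.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user_id,
            "purpose": purpose,           # e.g. "summarise committee report"
            "prompt_chars": len(prompt),  # log size rather than content to limit data held
            "tool": "approved-enterprise-ai",
        }))
        return response

    if __name__ == "__main__":
        print(ask_ai("officer.123", "draft email", "Summarise the key points of the attached minutes."))

How much of each prompt is retained is a design choice for information governance colleagues; the essential part is that every use of the tool leaves a trace that can be audited later.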

Relevant resources:
See the LOTI Sandbox in Adult Social Care.

5. Work in partnership across the organisation

AI is not just an IT issue. Work with:

  • Information governance (on data and compliance)
  • Legal teams (on intellectual property and liability)
  • Comms (on tone, language and accessibility)
  • HR (on training and culture change)

The most effective approaches will be cross-functional and co-owned.

6. Establish an AI governance framework

Go beyond policies. Create a cross-functional governance group to oversee AI strategy, risk management, and ethical use. Include representation from ICT, legal, data protection, service delivery, and even community stakeholders.


What good looks like

  • Staff know which AI tools are approved, and why.
  • AI use is documented, auditable, and aligned with service goals.
  • Training is ongoing, contextual, and accessible.
  • Innovation is encouraged, but within clear guardrails.
  • Risks are proactively managed, not reactively discovered.

Leading the change: from risk management to innovation enablement

Shadow AI is a symptom, not a problem in itself. It tells us that our people want to innovate, work smarter, and explore new tools. That’s a good thing. But it’s also a warning sign that our existing frameworks aren’t keeping up with the pace of change.

As ICT professionals in the public sector, we must do more than just manage risk.

We must lead the conversation, set the guardrails, and create space for safe experimentation. That means acknowledging uncertainty, being transparent about what we don’t yet know, and building new ways of working, together.

Because AI isn’t going away. The question is: will we shape its use, or will it shape us?

Over to you…

How is your organisation approaching AI and Shadow IT? What’s working, and what still needs to change?

Join the conversation. Let’s build a smarter, safer future together.
