The story of cyber security isn’t just a chronicle of breaches; it’s a dark mirror reflecting the relentless speed of technological evolution. Every major advance in computing, networking and digital life has created a commensurate, and often catastrophic, new opportunity for malicious actors. From isolated university machines in the 1980s to the hyper-connected, identity-driven threat landscape of today, the evolution of cyber attacks has mirrored the internet’s journey from a niche experiment to the backbone of global society.
A look back

📆 Era 1: The birth of digital misuse (1985–1999)
In the mid-1980s, technology began shifting power away from the centralised mainframe to the desktop, marking the start of significant disruption for traditional IT departments. The early threat landscape was characterised by curiosity, digital vandalism and the emergence of the first forms of malicious code.
Malicious code
A key event cementing the need for cyber legislation occurred in the UK in 1986, when Robert Schifreen and Stephen Gold were convicted of accessing a Telecom Gold account. Although the convictions were overturned on appeal, the case quickly led to the passing of the Computer Misuse Act 1990. This legislation arrived just as network computing was gaining traction. The Morris worm of 1988 illustrated the potential for network-level disruption, clogging some 6,000 computers on the nascent ARPAnet, and led directly to the creation of the Computer Emergency Response Team (CERT).
The shift towards widespread connectivity, particularly the rise of the World Wide Web and email, made centralised corporate control increasingly difficult. The late 1990s brought the first global mass-scale malware events that exploited human elements and universal Microsoft products. The Melissa worm in March 1999 and the devastating ILOVEYOU worm in 2000 utilised email, Microsoft Outlook, and social engineering to spread rapidly, causing estimated damages of $1.2 billion and $15 billion respectively. Attackers were already moving away from mere “name-making” and focusing on stealing items of value, often exploiting poor application security architectures on commerce websites.
📆 Era 2: The internet goes critical (2000–2010)
Year 2000 problem
The turn of the millennium marked the internet’s leap from a novel platform to the essential backbone of global business, as organisations shifted their focus from operational efficiency to information exploitation and inter-enterprise interoperability. This shift ushered in a new era of complex, targeted and financially motivated attacks.
Meanwhile, the Year 2000 (Y2K) bug came and went, with many saying that Y2K was a “damp squib”. However, the successful outcome was actually the result of detailed planning, preparation and testing. Reflecting back on that time, that planning may even have deterred a Millennium terrorist attack, presaging what was to come a year later.

In 2001, the tragic events of the Twin Towers in New York on September 11 brought a grim new meaning to business continuity planning and focused organisational attention on “mega-disasters”. Just a year later, the threat to the internet’s core infrastructure materialised: an attack struck the 13 Domain Name System (DNS) root servers, successfully knocking out five of them in the first attempt to disable the internet itself.
The mid-2000s saw the rapid maturation of internet-based connectivity, an explosion of informational websites (portals) and technologies like Voice over IP, wireless Local Area Networks (LANs) and Virtual Private Networks (VPNs). This new connectivity brought corresponding risks. This period also saw the rise of the hacktivist group ‘Anonymous’ (formed in 2003). By 2004, Gartner predicted that cyber-attacks exploiting software flaws would double in speed by 2006, capitalising on missing patches and misconfigured systems.
Data breaches
In late 2007, two discs belonging to HM Revenue and Customs (HMRC) went missing, containing data on all UK children under the age of 18 and their parents. This data loss, although not a cyber attack, led to the Data Handling Review and the realisation that information security really mattered. Its key findings and recommendations were:
- Improved data security: The need for stronger data protection measures and the recommendation that all government departments implement stricter controls on the use of electronic, removable media to prevent unauthorised access to sensitive information.
- Cultural change: The importance of fostering a culture of accountability and responsibility regarding data handling within public services, and training staff in data protection and the implications of data loss.
- Enhanced accountability: Clearer lines of accountability within departments, ensuring that senior officials are responsible for the management and security of data.
- Ongoing monitoring and improvement: Improving data handling as an ongoing process, requiring continuous assessment and adaptation to new challenges and technologies.
The first known cyber weapon
A profound escalation in geopolitical cyber conflict was confirmed with Operation Aurora in 2010, when Google exposed a highly sophisticated and targeted attack originating from China and resulting in the theft of intellectual property.
Later that year, the discovery of Stuxnet, a sophisticated computer worm designed to target Supervisory Control and Data Acquisition (SCADA) systems in Iran’s nuclear facilities, demonstrated that cyber weapons could move beyond data disruption and achieve real-world physical destruction.
The cyber threat to Critical National Infrastructure (CNI) was now globally undeniable. At that time, the naming of adversaries was cautious and largely kept from public view, but the language did begin to change from “if” there is a serious cyber attack to “when”.
📆 Era 3: Mobility, identity, and hyper-connection (2011–2025)
The 2010s saw rapid advancements in mobile technology, cloud computing and consumerisation. The introduction of the Government Connect Secure Extranet (GCSx) enabled secure email communication for local government.
This development quickly led to the formation of the Public Services Network (PSN), which enhanced the UK Government Secure Intranet (GSI) by connecting central and local government, the NHS, and police services. The PSN Code of Connection (CoCo) emerged and went on to shape Local Government Information Assurance and Compliance for the following decade.
The Wikileaks controversy in 2011 highlighted the necessity for enhanced internal security systems to prevent confidential information leaks. Simultaneously, the News International phone hacking scandal illuminated the darker side of data theft, involving hacking private voicemail accounts and demonstrating that even low-tech social engineering and phreaking presented potent threats when targeting high-value information.
This era also saw the rapid rise of identity theft. Breaches became colossal in scale, such as the Target breach in 2013, which exposed 40 million customer credit card records, and the U.S. Office of Personnel Management breach in 2015, in which 21.5 million records, including social security numbers and fingerprints, were stolen. This shift reflected a move by attackers “away from targeting individual devices to focusing on compromising user identity”. This identity-centric focus persists, with identity-based attacks accounting for 67.6% of incidents in the current landscape.
Ransomware
The defining trend of this era, however, is the explosion of ransomware. Following the invention of cryptovirology in 1996, ransomware scaled exponentially in the 2010s, culminating in the WannaCry attack in May 2017. This single event infected an estimated 300,000 systems globally in four days, severely disrupting over 80 NHS hospital trusts in the UK and costing the NHS approximately £92 million. WannaCry demonstrated that exploiting legacy systems and failing to apply patches in a timely manner had immediate and severe real-world consequences.
In recent times, geopolitical tensions (such as Russia’s invasion of Ukraine) significantly intensified cyber threats. The threats diversified, involving state actors, professional ransomware groups (often operating like professional SaaS businesses) and low-skilled hacktivist proxies.
The attacks on high-profile organisations like Marks & Spencer, Jaguar Land Rover and the Co-op Group (where 6.5 million members’ data was stolen by the DragonForce ransomware group) demonstrated that cyber attacks no longer just affect computers and data, but cause “empty shelves and stalled production lines” affecting “real business, real products, and real lives”. The ransomware attack on Synnovis, a pathology services provider, in June 2024 caused significant disruption to clinical healthcare across London, incurred costs of £32.7 million and directly contributed to at least one patient death.
As organisations accelerate the adoption of new technology, including Artificial Intelligence (AI) and post-quantum cryptography, threat actors are quick to adapt. AI is now being used to increase the efficiency, effectiveness, and frequency of cyber intrusions, creating fully automated spear-phishing campaigns and aiding vulnerability research.
Conclusion
This chronology reveals a continuous feedback loop: as technology drives greater connectivity and utility (e-commerce, mobile apps, integrated networks and cloud services) attackers exploit the inherent complexities and vulnerabilities introduced by that innovation. The battleground has shifted from defending a static corporate perimeter to securing highly mobile, fragmented user identities and critical operational infrastructure.
The fundamental message remains clear: cyber security is now critical to business longevity and national security. The future lies in the exploitation of the Internet of Things, assisted by the rapid exploitation of artificial intelligence as both a transformational enabler and disrupter, and potentially as a cyber weapon.
Useful resources
- Video: Cisco Talos Year in Review 2024: Top 3 attack trends (YouTube)
- Video: Round-the-clock defense: Why MDR matters (Socitm)
- Video: Zero trust in the AI era – Moving to a SASE architecture (Socitm)
- Topic hub: Cyber@Socitm
- Guide: Cyber security guidance for public sector practitioners (Socitm)
- Report: How local authorities have come together to secure public sector supply chains (Risk Ledger)
A leap forward

As we look ahead, cyber security is evolving at an extraordinary pace. The threats we face are no longer confined to isolated incidents; they have become more sophisticated, widespread and deeply connected with our daily lives. Moving forward, it is essential to anticipate new challenges, adapt our defences and embrace innovative solutions to safeguard our organisations and communities in this rapidly changing digital world.
Key emerging technology challenges and cyber threats to watch for in 2026 and beyond
🛡️ AI-driven malware and automated attacks
The rise of AI-powered attacks means that machine learning is now being used to mutate malicious code in real time, helping it to avoid static detection, deepen its installation and adapt to endpoint defences. As information is exchanged without human intervention through Model Context Protocol (MCP) servers, there is a growing risk that poisoned AI models will gain greater autonomy and, on detecting the right conditions, could initiate an automated attack at a future time.
By 2027, AI-enabled tools are highly likely to enhance threat actors’ capability to exploit known vulnerabilities (n-day exploitation), significantly increasing the volume of attacks against unpatched critical systems. Since AI-based infiltration outpaces manual threat hunting, organisations must pivot their defensive posture. Threat hunting is a new requirement in CAF version 4.
Actionable takeaways:
- Implement advanced anomaly detection: Move beyond traditional static defences by adopting advanced anomaly detection tools to counter real-time malware mutation and AI-based infiltration.
- Prioritise patching cycles: Ensure rigorous and timely patching of all critical systems, as AI-enabled tools are specifically enhancing the exploitation of known vulnerabilities (n-day exploitation).
- Invest in adaptive training: Develop and deploy AI-enhanced defensive measures, such as hyper-personalised learning ecosystems for cyber security training, tailored to individual employee risk profiles.
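As a minimal illustration of the anomaly-detection idea above, the sketch below flags an event count that deviates sharply from its historical baseline. The function name, threshold and sample data are illustrative assumptions; production tooling would use far richer behavioural models than a simple z-score test.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it deviates more than `threshold` standard
    deviations from the historical baseline (a simple z-score test)."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest != baseline
    return abs(latest - baseline) / spread > threshold

# Hypothetical hourly login counts for one service account
history = [12, 9, 11, 10, 13, 12, 11]
print(is_anomalous(history, 11))   # normal activity
print(is_anomalous(history, 240))  # sudden spike worth investigating
```

The same pattern generalises to any telemetry stream (logins, outbound traffic, process launches); the value is in maintaining the baseline per user or per asset rather than one global threshold.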
🛡️ The evolution of Ransomware-as-a-Service (RaaS)
Ransomware remains a dominant and highly sophisticated threat, with attacks becoming increasingly targeted and costly. The expansion of RaaS reduces the technical barrier to entry, leading to a surge of attacks that weaken public sector organisations and demand substantial payouts. The financial cost of recovering from a ransomware attack is already considerable, averaging over £2 million according to research data.
Organisations must treat this threat as a genuine, board-level risk. The NCSC Board Toolkit is recommended reading as part of the NCSC Cyber Governance Code of Practice.
Actionable takeaways:
- Maintain offline backups and segmentation: Implement resilience strategies by maintaining segmented networks and robust, verified offline backups. Regularly test recovery from these backups and ensure the technical expertise to do so is in place.
- Implement strong prevention: Consistently apply encryption, firewalls, antivirus software, and multi-factor authentication throughout the organisation.
- Formalise incident reporting: Treat cyber security as a genuine board-level risk and ensure that incidents are reported quickly to relevant authorities for necessary guidance and resolution. Once the Cyber Resilience Bill takes effect, this will be required within 24 hours. Begin planning, practising, and testing your response procedures now.
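“Verified” backups means more than a successful copy: integrity should be checked against a manifest recorded at backup time. The sketch below, a simple illustration under assumed file layouts and function names, flags backup files that have gone missing or been corrupted since the manifest was written.

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks to handle large backups."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(manifest: dict[str, str], backup_dir: Path) -> list[str]:
    """Return the names of backup files whose current digest no longer
    matches the manifest recorded at backup time (missing or corrupted)."""
    failures = []
    for name, expected in manifest.items():
        candidate = backup_dir / name
        if not candidate.exists() or checksum(candidate) != expected:
            failures.append(name)
    return failures
```

Run against the offline copies on a schedule; an empty failure list is a precondition for trusting recovery, alongside periodic full restore rehearsals.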
🛡️ Supply chain and third-party risk
Cyber attackers are taking advantage of weaknesses in third-party vendors and suppliers to access larger networks, which can trigger widespread disruptions throughout various industries. Notable incidents involving supply chains have demonstrated how a single compromised update can lead to significant ripple effects.
A critical lack of oversight often means that organisations do not know all the third-party suppliers handling their data and Personally Identifiable Information (PII). For the UK public sector, strengthening defences against external threats targeting the high-value UK AI ecosystem is a priority, as state actors view this as a primary target.
Actionable takeaways:
- Embed supplier assurance in procurement: Mandate rigorous supplier assurance practices in all procurement documentation such as pre-qualification questionnaire (PQQ) and Invitation to Tender (ITT) for AI and digital systems, demanding transparency regarding data processing location, retention, and deletion.
- Map CNI dependencies: Increase the rigorous mapping of dependencies within Critical National Infrastructure (CNI) supply chains to reduce systemic vulnerability.
- Prioritise continuous monitoring: Invest in AI-driven and transparency-focused solutions that can vet and continuously monitor complex supply chains for vulnerabilities.
🛡️ Operational Technology and edge security
Operational Technology (OT) manages physical systems and infrastructure and is frequently mentioned alongside the Internet of Things (IoT) within the public sector.
OT systems represent a significant and emergent threat that must be urgently incorporated into a broad security focus. Attackers are increasingly targeting the often-weak defences of connected devices, especially those in government and critical infrastructure, leaving them vulnerable to disruptive activities such as surveillance manipulation or Distributed Denial of Service (DDoS) attacks. The governance of data and metadata generated by these sensors is critical, especially where it impacts individuals.
Actionable takeaways:
- Integrate OT into strategy: Actively integrate OT and edge devices (sensors, IoT) into the broad, overarching cyber security strategy.
- Monitor edge disruptions: Ongoing monitoring systems are required to identify disturbances and irregularities at the network’s edge, an essential step given the highly interconnected nature of these systems.
- Address OT data governance: Ensure that upcoming regulations and governance policies fully account for the data and metadata produced by IoT sensors, with special attention to compliance and protecting personal privacy.
🛡️ Social engineering via deepfakes and synthetic media
Social engineering exploits the human element, the most difficult factor to remediate in cyber security. Scammers are now leveraging AI to generate hyper-personalised phishing schemes, along with sophisticated deepfake audio and video content. These manipulated forms of social engineering make attacks quicker, easier and much more convincing. Deepfake attacks have seen significant growth, with a high percentage impersonating CEOs or other C-suite executives.
These sophisticated attacks pose a credible threat to financial security and risk gradually destroying public trust in government and the public sector.
Actionable takeaways:
- Mandate advanced verification: Implement and enforce advanced verification steps in all critical processes (fund transfers, credential changes, changes in personal details) that rely on verbal or visual confirmation.
- Deploy detection tooling: Use advanced deepfake detection tools to combat disinformation and audio manipulation.
- Elevate awareness training: Increase employee education and awareness training on how to recognise AI-generated deepfake phishing and sophisticated social engineering attempts.
🛡️ Adoption of zero trust architectures
Zero Trust Architecture (ZTA) represents a necessary shift: eliminate implicit trust and continuously verify every access request to your organisation’s network, applications and data. ZTA was a top cyber security trend for 2025, and adoption continues to grow.
ZTA proactively mitigates cyber threats and ensures data security by enforcing strict identity verification, least privilege access and micro-segmentation. Implementing ZTA is an essential component of a resilient cyber security framework.
Actionable takeaways:
- Develop a ZTA roadmap: Start transitioning your strategy to Zero Trust Architecture by deploying advanced access controls and strengthening endpoint security throughout your organisation.
- Adopt micro-segmentation: Utilise micro-segmentation, continuous session monitoring and user context checks as core components of the ZTA implementation.
- Integrate MFA and AI anomaly detection: Ensure the architecture integrates essential security features such as multi-factor authentication (MFA) and AI-driven anomaly detection to enforce verification.
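The core of a zero trust policy decision can be sketched in a few lines. The checks and field names below are illustrative assumptions, not a reference implementation: the point is that identity, device posture, segmentation and least privilege are all verified on every request, and failing any single check denies access.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_mfa_passed: bool
    device_compliant: bool       # e.g. patched, disk-encrypted
    network_segment: str         # micro-segment the caller sits in
    resource_segment: str        # micro-segment of the target resource
    role_grants: frozenset[str]  # least-privilege entitlements
    action: str

def zero_trust_decision(req: AccessRequest) -> bool:
    """Verify every request on its own merits: identity (MFA), device
    posture, micro-segmentation and least privilege. There is no
    implicit trust, so any single failed check denies access."""
    return (
        req.user_mfa_passed
        and req.device_compliant
        and req.network_segment == req.resource_segment
        and req.action in req.role_grants
    )
```

In a real deployment these signals come from the identity provider, device management and network layers, and the decision is re-evaluated throughout the session rather than once at login.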
🛡️ AI governance and assurance frameworks
Governance must move beyond simple compliance to implementing comprehensive assurance structures. The UK government’s AI strategy established five core pillars for implementation, including safety, security, transparency and accountability.
AI governance must be integrated into strategic decision-making, not delegated purely as a technical concern. Because complex Large Language Models (LLMs) often function as “black boxes,” monitoring and evaluation are difficult, meaning accountability for any output always rests with the public organisation. The rapid pace of change means regulatory frameworks are struggling to keep up with future developments.
Actionable takeaways:
- Implement model integrity assurance: Adopt rigorous assurance methods to ensure the core integrity, ethics, and bias of AI models remain consistent, treating the core models as high integrity, “data sealed computational files”.
- Enforce Human-in-the-Loop (HITL): Mandate HITL as a critical control measure to ensure human involvement and supervision, especially in high-risk or high-impact situations, refraining from fully automated decision-making.
- Mandate Data Protection Impact Assessments (DPIAs): Ensure formal DPIAs are conducted for any AI system processing Personally Identifiable Information (PII) to ensure compliance with UK GDPR.
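The Human-in-the-Loop control above can be expressed as a simple gate: an AI output is released automatically only when it is both low impact and high confidence, and everything else is routed to a human reviewer. The field names and threshold below are illustrative assumptions for the sketch, not a prescribed policy.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ModelOutput:
    decision: str
    confidence: float
    impact: str  # "low" | "high" - e.g. affects an individual's entitlements

def hitl_gate(output: ModelOutput,
              review_queue: Callable[[ModelOutput], None],
              confidence_floor: float = 0.9) -> Optional[str]:
    """Release an AI decision only when it is low impact AND high
    confidence; everything else goes to a human reviewer, so no
    fully automated decision is made in high-risk situations."""
    if output.impact == "high" or output.confidence < confidence_floor:
        review_queue(output)  # human-in-the-loop takes over
        return None           # no automated decision
    return output.decision
```

The useful property is that the default path is human review: automation has to earn release by meeting both conditions, which mirrors the “refrain from fully automated decision-making” takeaway.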
🛡️ Vulnerabilities specific to AI systems
As AI models are increasingly incorporated into the UK’s technology base, specific attack vectors are emerging that target the integrity of the models themselves. These include ‘prompt injection’, which tricks the LLM with malicious input, and ‘data poisoning’, where training data is deliberately manipulated to introduce specific bias.
Attackers are actively developing malicious prompts designed to trick or subvert AI models. Prompt injection attacks pose a threat of unauthorised access or data exfiltration. The complexity of AI systems requires that organisations begin creating a new, separate incident category specifically for AI incidents to track, identify, and report these events accurately.
Actionable takeaways:
- Develop AI incident playbooks: Create playbooks and blueprints explaining how the organisation will respond to AI cyber incidents, including the development of AI risk profiles and the defining of risk appetites for AI deployment.
- Establish AI incident reporting: Begin the process of creating a new internal incident category specifically for AI incidents to ensure accurate tracking and analysis of emerging attack vectors.
- Load regulatory text for compliance: Ensure the text of all relevant regulations, such as the Data Protection Act, is loaded into AI models so that compliance is embedded by design.
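To make prompt injection concrete, the sketch below screens user input against a small list of suspicious phrases. The pattern list is a hypothetical first filter only, not a complete defence: real mitigations layer input and output filtering, privilege separation for tool-calling, and logging of hits as AI incidents in their own category.

```python
import re

# Hypothetical phrases associated with common injection attempts; a
# pattern list like this is a coarse first filter, not a full defence.
SUSPECT_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (your|the) system prompt",
    r"disregard .* guardrails",
]

def screen_prompt(user_input: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for a candidate prompt, so
    blocked inputs can be logged and reviewed as AI incidents."""
    matches = [p for p in SUSPECT_PATTERNS
               if re.search(p, user_input, re.IGNORECASE)]
    return (not matches, matches)
```

A screen like this catches only the crudest attempts (attackers paraphrase freely), which is why the takeaways above emphasise playbooks and incident tracking rather than filtering alone.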
🛡️ Foundational resilience: Asset visibility and incident readiness
In this challenged environment, public sector organisations cannot afford to be passive; they must possess a proactive, detailed understanding of their digital estate. This requires meticulous knowledge of the information assets held, the networks and services connecting them, with predefined plans and playbooks for rapid recovery.
Recent high-profile cyber incidents involving UK local authorities underscore the crucial need for emergency plans and swift detection and recovery capability. Without robust asset discovery and inventory tools, organisations cannot effectively monitor or map their expanding attack surface. This will be key to the planning, migration and implementation of the new organisations created under the current local government reorganisation (LGR).
Effective governance requires organisations to identify all third parties managing their Personally Identifiable Information (PII).
Actionable takeaways:
- Develop and maintain information asset registers: Deploy comprehensive asset discovery and inventory tools to establish a robust register of all hardware, software, and data assets, ensuring accurate monitoring and visibility into the expanding attack surface.
- Define and exercise incident response playbooks: Define a risk-based strategy that develops playbooks (or runbooks) explaining exactly how the organisation will respond to major cyber incidents, ensuring the response is quick and effective when an incident is detected.
- Map dependencies and implement monitoring: Align network documentation with CNI protection guidance by mapping dependencies within critical supply chains. Implement robust continuous monitoring systems (such as SIEM tools) to detect abnormal or suspicious activities across systems and networks faster than manual human oversight.
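As a minimal sketch of how an information asset register supports the governance requirement above, the structure below records which assets hold PII and which third parties can touch them, then inverts the register to answer “which suppliers process our PII?”. The field names are illustrative assumptions; a real register would also capture classification, location and review dates.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    kind: str                     # "hardware" | "software" | "data"
    owner: str                    # accountable officer
    holds_pii: bool = False
    third_parties: list[str] = field(default_factory=list)  # processors

def pii_exposure(register: list[Asset]) -> dict[str, list[str]]:
    """Map each third party to the PII-bearing assets it can touch,
    supporting the 'identify all processors of PII' requirement."""
    exposure: dict[str, list[str]] = {}
    for asset in register:
        if asset.holds_pii:
            for party in asset.third_parties:
                exposure.setdefault(party, []).append(asset.name)
    return exposure
```

Populated from automated asset discovery rather than by hand, the same register feeds attack-surface monitoring, supplier assurance and the dependency mapping recommended for CNI supply chains.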
👉 For an overview of information security and cyber incidents (1985–2025), view the chronological timeline