AI & Cybersecurity Risk: Understanding the Value of the UK Code of Practice

Securing the Future
When many of us think of the risks related to AI, the focus is often on bias, data privacy, and other damaging social issues that arise from adopting and using AI in our day-to-day activities without the proper measures in place. However, far less attention is paid to securing our AI and LLM tools against malicious activity and bad actors.
As companies adopt AI technologies, new vectors for threat actors have proliferated. Traditional cybersecurity frameworks, which evolved before this shift, lack controls specific to these threats.
In response to these new and distinct attack vectors, the United Kingdom (UK) government has released a voluntary Code of Practice for the Cyber Security of AI.
At Coalfire, I’ve had the opportunity to review many AI policies. This AI-focused framework helps address many of the common pitfalls and areas that are often underdeveloped, while enabling businesses to benefit from prior investment in cybersecurity policies and procedures.
The UK government published the Code of Practice for the Cyber Security of AI in January 2025. Adopting the Code reduces the risk to sensitive datasets, helping prevent data poisoning and theft of data by malicious adversaries. Here at Coalfire, we are always striving to help our customers leverage their tools in the most secure manner possible, reducing their exposure to risk.
A New Vision: Cyber Security of AI and the UK Code of Practice
To help secure any AI model, the UK government has identified 13 principles that require consideration, categorized across the five phases of an AI system’s lifecycle. Each principle is intrinsically linked to the others, so a holistic understanding of how to secure an AI system is essential.
- Secure Design
- Secure Development
- Secure Deployment
- Secure Maintenance
- Secure End of Life
Secure Design
- Principle 1: Raise awareness of AI security threats and risks
- Principle 2: Design your AI system for security as well as functionality and performance
- Principle 3: Evaluate the threats and manage the risks to your AI system
- Principle 4: Enable human responsibility for AI systems
This phase aims to increase awareness of the threats and risks associated with designing and using AI systems. Prioritizing security alongside functionality, performance, and quality of output allows developers and users to evaluate and manage risk within the tolerance of their own risk appetite.
There is no ‘one size fits all' approach; evaluation of, and responsibility for, the risk presented by AI must be understood and aligned with the entity’s overall risk strategy.
Secure Development
- Principle 5: Identify, track and protect your assets
- Principle 6: Secure your infrastructure
- Principle 7: Secure your supply chain
- Principle 8: Document your data, models and prompts
- Principle 9: Conduct appropriate testing and evaluation
The optimum approach to the security of AI allows Developers, System Operators, Data Custodians and End-users to track and protect the elements they’re responsible for. This relies on a comprehensive inventory of assets used in the development and use of AI, including full documentation of data, models and prompts.
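As an illustration of what such an inventory might look like in practice, here is a minimal Python sketch of a registry of AI assets covering models, datasets, and prompts; the record fields and names are illustrative assumptions, not structures prescribed by the Code.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAssetRecord:
    """One entry in a hypothetical AI asset inventory (Principles 5 and 8)."""
    asset_id: str          # unique identifier for the asset
    asset_type: str        # "model", "dataset", or "prompt"
    owner: str             # the Developer, System Operator, or Data Custodian responsible
    description: str       # what the asset is and how it is used
    data_sources: list[str] = field(default_factory=list)    # provenance of training/input data
    last_reviewed: date = field(default_factory=date.today)  # date of the last security review

# The registry itself is just a collection that each responsible
# party can query for the elements they own.
registry: dict[str, AIAssetRecord] = {}

def register(record: AIAssetRecord) -> None:
    registry[record.asset_id] = record

register(AIAssetRecord(
    asset_id="model-fraud-001",
    asset_type="model",
    owner="ml-platform-team",
    description="Gradient-boosted fraud detection model, v3",
    data_sources=["transactions-2024", "chargeback-labels"],
))
```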
Likewise, securing infrastructure, through controls such as user access management and limits on model access rates, should dovetail with existing policies and processes. A mature supply chain security protocol provides an understanding of any risk and mirrors the enterprise’s strategic risk appetite and profile.
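To make limits on model access rates concrete, the sketch below shows a simple token-bucket limiter that could sit in front of a model endpoint; the class, limits, and integration points are illustrative assumptions rather than requirements of the Code.

```python
import time

class TokenBucket:
    """Simple token-bucket limiter for calls to a model endpoint."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens replenished per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Replenish tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Example: allow at most 5 requests per second per user, with bursts of 10.
limiter = TokenBucket(rate_per_sec=5, burst=10)
if limiter.allow():
    print("Forward the request to the model.")
else:
    print("Reject the request (e.g., HTTP 429) and log it for the security team.")
```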
Secure Deployment
- Principle 10: Communication and processes associated with End-users and Affected Entities
The secure use of AI does not stop at deployment. It should be regarded as an ongoing process that requires identifying and communicating how and where data will be used, accessed, and stored.
Documented guidance for end-users must be available, and users must be proactively informed of security updates to the AI model, along with the estimated impact on the security profile of the tool, to ensure it remains aligned with the entity’s risk appetite.
Secure Maintenance
- Principle 11: Maintain regular security updates, patches and mitigations
- Principle 12: Monitor your system’s behavior
The developers of the AI model must have mechanisms and contingency plans to mitigate security risks. This may require understanding the impact of any changes and the associated risk.
The security of AI models benefits from monitoring tools that augment the security team’s ability to respond to incidents, ensure compliance and assist with vulnerability remediation.
Once established, behavioral baselines can be leveraged to identify anomalous activity and alert developers and security experts to issues within the AI model that may indicate malicious activity.
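As a simple illustration, the sketch below builds a statistical baseline from historical observations of a monitored metric (requests per minute is assumed here) and flags values that deviate sharply from it; the metric and threshold are illustrative assumptions.

```python
import statistics

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Compute mean and standard deviation from known-good past observations."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value: float, baseline: tuple[float, float], z_threshold: float = 3.0) -> bool:
    """Flag values more than z_threshold standard deviations from the baseline mean."""
    mean, stdev = baseline
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > z_threshold

# Example: requests per minute observed during a normal week.
history = [118.0, 122.0, 120.0, 125.0, 117.0, 121.0, 119.0]
baseline = build_baseline(history)

if is_anomalous(640.0, baseline):
    print("Alert: request rate far outside baseline; possible abuse of the model.")
```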
Secure End of Life
- Principle 13: Ensure proper data and model disposal
The Code instructs developers and system operators to involve the relevant data custodians whenever ownership of an AI model and its associated data is transferred or shared, and whenever an AI model is decommissioned.
They must also adhere to their own data deletion policies for the AI’s data and configuration, to ensure the integrity of their own datasets and prevent any unforeseen risk from unauthorized access to sensitive information.
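As a hedged sketch of what this might look like operationally, the routine below notifies data custodians before deleting a model’s artifacts; the paths, the notify_custodians hook, and the flat artifact layout are hypothetical assumptions, not steps mandated by the Code.

```python
from pathlib import Path

def notify_custodians(custodians: list[str], asset_id: str) -> None:
    # Hypothetical hook: in practice this would open tickets or send
    # notifications so custodians can act before anything is deleted.
    for custodian in custodians:
        print(f"Notified {custodian}: decommissioning {asset_id}")

def decommission_model(asset_id: str, artifact_dir: Path, custodians: list[str]) -> None:
    """Dispose of model weights and configuration per the entity's deletion policy."""
    notify_custodians(custodians, asset_id)  # involve data custodians first
    if artifact_dir.exists():
        for artifact in artifact_dir.iterdir():
            artifact.unlink()                # delete weights, configs, and prompts
        artifact_dir.rmdir()
    print(f"{asset_id}: artifacts removed; record the disposal for audit.")

decommission_model(
    asset_id="model-fraud-001",
    artifact_dir=Path("/models/fraud-001"),
    custodians=["data-governance-team"],
)
```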
The Real World, From Theory to Implementation: Applying the Code of Practice
Of course, all things are possible in theory, but applying a methodology or framework in a real-world situation always presents challenges that must be overcome. With the correct perspective, however, one can leverage policies and processes already in place to secure AI and the data it utilizes.
Consider how your security team already manages the systems it monitors, how your development team already incorporates security into its software development lifecycle, how your compliance team builds security into its daily practice, and which policies are already mature and widely adopted within your organization. Together, these teams have the skills needed to make compliance with future AI-related frameworks feasible; it simply takes expertise and a fresh perspective to bring these seemingly disparate skillsets together.
Coalfire’s holistic approach incorporates all of these elements into an AI Governance Plan driven by your strategic vision for the future. As client businesses deploy and develop AI solutions, Coalfire works with stakeholders to review the organization's business objectives as they relate to AI, its current compliance posture, the applicability of its SDLC to AI, and how current security policies fit or need to be modified and matured, in order to understand and define your AI Risk Program.
Here at Coalfire we have the expertise and experience gained from working with industry leaders to be the catalyst for the change you want and need, ensuring your future in AI is secure.
Further Information
For interested clients, the Code also has an accompanying implementation guide (PDF).
The guide provides examples for each of the 13 principles and their sub-items, with specific scenarios for chatbot apps, machine learning fraud detection, Large Language Model (LLM) providers, and open-access LLMs.