Machine learning principles. Secure design

The National Cyber Security Centre (NCSC) has published a comprehensive article on the principles that help make informed decisions regarding the design, development, deployment, and operation of machine learning (ML) systems. Today we will look at the first section, "Secure Design", and share a concise summary of the principles that apply at the design stage of the ML system development lifecycle.

Raise awareness of ML threats and risks

Secure design is resource-intensive. However, applying this approach from the very beginning of the system's lifecycle can save significant money on rework and fixes later. It is essential to ensure protection at every stage of the development lifecycle.

What can help implement this principle?

Provide guidance on the unique security risks facing AI systems

Developers' knowledge of ML security needs to be kept up to date. In particular, they should be aware of the types of threats their systems may be exposed to: evasion attacks, poisoning attacks, privacy attacks, and others.
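
To make these threats concrete, here is a minimal, illustrative sketch of an evasion attack in the style of the fast gradient sign method (FGSM) against a toy logistic-regression model. The weights, input, and perturbation budget are all hypothetical; the point is only that a small, gradient-guided change to the input can move the model's score across the decision boundary.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy logistic-regression "model"; the weights and bias are hypothetical.
    w = rng.normal(size=20)
    b = 0.1

    def predict_proba(x):
        # Probability of the positive class for input vector x.
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    # A benign input drawn at random.
    x = rng.normal(size=20)

    # Evasion step: for logistic regression the gradient of the score with
    # respect to the input is simply the weight vector, so the attacker
    # nudges each feature against the sign of that gradient.
    eps = 0.5  # per-feature perturbation budget (hypothetical)
    x_adv = x - eps * np.sign(w)

    print("clean score:      ", predict_proba(x))
    print("adversarial score:", predict_proba(x_adv))
    # The positive-class probability drops sharply even though x_adv
    # differs from x by at most eps in each feature.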

Enable and encourage a security-centered culture

Developers are not always security experts, but their understanding of how their design decisions affect the system's vulnerability is ultimately crucial. It is therefore important to promote collaboration with security experts and the sharing of knowledge and experience.

Model the threats to your system

It is very challenging to guarantee complete security against constantly evolving attacks from malicious actors. Therefore, it is important to understand how an attack on a specific ML component can affect the system as a whole.

What can help implement this principle?

Create a high-level threat model for the ML system

You can create a high-level threat model to gain an initial understanding of the broader systemic consequences of any attack on your ML component. Assess the implications of ML security threats and model the failure modes of the entire system.

Use CIA (confidentiality, integrity, availability) and a high-level threat model to explore the system.
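
As an informal illustration (not a format the NCSC prescribes), a first-pass threat model can be as simple as a structured list mapping each ML-specific threat to the CIA properties it undermines and to its wider system impact. The entries below are hypothetical examples for an imaginary content filter.

    # Hypothetical high-level threat model: each ML-specific threat is
    # mapped to the CIA properties it undermines and its system impact.
    THREAT_MODEL = [
        {
            "threat": "evasion (adversarial inputs at inference time)",
            "cia": ["integrity"],
            "component": "deployed classifier",
            "system_impact": "malicious content passes the filter",
        },
        {
            "threat": "data poisoning (tampered training data)",
            "cia": ["integrity", "availability"],
            "component": "training pipeline",
            "system_impact": "systematically wrong or degraded decisions",
        },
        {
            "threat": "membership inference / model inversion",
            "cia": ["confidentiality"],
            "component": "prediction API",
            "system_impact": "leakage of sensitive training records",
        },
    ]

    def threats_affecting(property_name):
        # List the threats that undermine a given CIA property.
        return [t["threat"] for t in THREAT_MODEL if property_name in t["cia"]]

    print(threats_affecting("integrity"))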

Model wider system behaviors

When developing a system, it is important to combine knowledge and experience in ML, knowledge about the system, and an understanding of the limitations of ML components. It is advisable to use multi-layered countermeasures to authenticate requests and detect suspicious activity, including logic-based and rule-oriented controls outside the model. User access should also be considered: open web systems require stricter protective measures compared to closed systems with controlled access.
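
Below is a minimal sketch of what such layered controls might look like around an inference endpoint. The authentication check, rate limit, and input rules are hypothetical placeholders; the design point is that several independent, rule-based layers run outside the model and can each reject a request before the model is ever called.

    import time

    MAX_REQUESTS_PER_MINUTE = 60  # hypothetical rate limit
    _request_log = {}             # client_id -> list of request timestamps

    def model_predict(text):
        # Placeholder for the actual ML model call.
        return {"label": "benign", "score": 0.97}

    def is_authenticated(client_id, token):
        # Placeholder: verify the token against your auth system.
        return token == "valid-token"

    def within_rate_limit(client_id):
        # Rate limiting slows down an attacker probing the model.
        now = time.time()
        recent = [t for t in _request_log.get(client_id, []) if now - t < 60]
        _request_log[client_id] = recent + [now]
        return len(recent) < MAX_REQUESTS_PER_MINUTE

    def passes_input_rules(text):
        # Rule-based control outside the model, e.g. size limits.
        return 0 < len(text) <= 10_000

    def handle_request(client_id, token, text):
        # Each layer can reject the request independently of the model.
        if not is_authenticated(client_id, token):
            return {"error": "unauthenticated"}
        if not within_rate_limit(client_id):
            return {"error": "rate limit exceeded"}
        if not passes_input_rules(text):
            return {"error": "input rejected by policy"}
        result = model_predict(text)
        if result["score"] < 0.6:
            result["flag"] = "low confidence - route to human review"
        return result

    print(handle_request("alice", "valid-token", "hello"))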

Minimise an adversary's knowledge

There is a practice of disclosing information after an attack, which undoubtedly benefits the entire ML community and helps enhance ML security across the industry. However, reconnaissance is often the first stage of an attack, and excessive publication of information regarding model performance, architectures, and training data can facilitate attackers in developing their attacks.

When deciding on disclosure, it is important to consider the balance between the motivation for sharing (marketing, publications, or improving security practices) and protecting key details of the system and model.

What can help implement this principle?

Develop a process to review information for public release

Assess the risks of publishing information about your system by involving specialists from various fields (e.g., ML practitioners, security experts, developers, and non-technical experts). The information review process may include:
• Assessing the information to be disclosed against known vulnerabilities and attack techniques (e.g., those catalogued in MITRE ATLAS);
• Evaluating the information to be disclosed in relation to vulnerabilities in your own system;
• Analyzing the benefits of publication for your organization, system, or the community as a whole;
• Justifying whether publication is warranted given the potential negative security consequences.

Brief the risks to non-technical staff

All employees involved in public information dissemination should undergo training on the potential consequences of releasing materials. During briefings, it is essential to ensure understanding of what material requires review and how to conduct that review.

Analyze vulnerabilities against inherent ML threats

Identifying specific vulnerabilities in workflows or algorithms during the design phase helps reduce the need for remediation in the system. The significance of vulnerabilities depends on factors such as data sources, data sensitivity, the deployment and development environment, and the potential consequences for the system in case of failure. It is recommended to regularly analyze decisions made during the development process and conduct a formal security review before deploying the model or system into production.

What can help implement this principle?

Implement red teaming

Regularly simulating the actions of an attacker to identify system vulnerabilities, often described as applying a "red teaming mindset", helps define security requirements and inform design decisions.
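
As one concrete red-team style exercise (an illustrative sketch, not a method prescribed by the NCSC), the snippet below flips a fraction of training labels to simulate a poisoning attack and measures how much a toy classifier degrades. The dataset, model, and 10% poisoning rate are arbitrary stand-ins.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Toy dataset and model standing in for the real system.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

    # Red-team exercise: poison 10% of the training labels and retrain.
    rng = np.random.default_rng(0)
    poisoned = y_tr.copy()
    idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip the selected labels

    poisoned_acc = LogisticRegression(max_iter=1000).fit(X_tr, poisoned).score(X_te, y_te)

    print(f"baseline accuracy: {baseline:.3f}")
    print(f"accuracy after 10% label flipping: {poisoned_acc:.3f}")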

Consider automated testing

It is advisable to use automated tools to assess and test the security of the model against known vulnerabilities and attack methods. Additionally, keep an eye on new standards that may emerge in this area, paying attention to guidance from the AI Standards Hub dedicated to the standardization of AI technologies.
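
Dedicated tools go much further, but even a simple automated check can be wired into a test suite. The sketch below, with hypothetical noise budgets and a toy model, measures how quickly accuracy decays under random input perturbations; a steep drop at small noise scales would warrant closer investigation before deployment.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Toy model standing in for the system under test.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    rng = np.random.default_rng(1)
    for eps in (0.0, 0.1, 0.5, 1.0):  # hypothetical perturbation budgets
        noisy = X_te + rng.normal(scale=eps, size=X_te.shape)
        print(f"noise scale {eps:.1f}: accuracy {model.score(noisy, y_te):.3f}")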

The NCSC article also provides examples of open-source tools for assessing reliability.

In upcoming articles, we will cover the next sections of the article: secure development, secure deployment, and secure operation. So don’t go too far! In the meantime, feel free to explore other articles from SAF 😉
