
New guidance on AI security published by NCSC, CISA and other international agencies

The Guidelines for Developing Secure AI Systems were written to help developers ensure that security is built into the heart of new AI models.

The UK's National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA) and agencies from 16 other countries have published new guidance on the security of artificial intelligence systems.

The Guidelines for Developing Secure AI Systems are designed to guide developers through the design, development, deployment and operation of AI systems, and to ensure that security remains a core component throughout their lifecycle. Other stakeholders in AI projects should also find the information useful.

These guidelines were released shortly after world leaders committed to the safe and responsible development of artificial intelligence at the AI Safety Summit in early November.

In summary: guidelines for developing secure AI systems

The Guidelines for Developing Secure AI Systems set out recommendations to ensure that AI models – whether built from scratch or based on existing models or APIs from other companies – “work as intended, are available when needed, and work without revealing sensitive data to unauthorized parties.”

Key to this is the “secure by default” approach advocated by NCSC, CISA, the National Institute of Standards and Technology and various other international cybersecurity agencies in existing frameworks. The principles of these frameworks include:

  • Taking ownership of security outcomes for customers.
  • Embracing radical transparency and accountability.
  • Building organizational structure and leadership so that “secure by design” is a top business priority.

According to the NCSC, a total of 21 agencies and ministries from 18 countries have confirmed that they will endorse and co-seal the new guidelines. These include the National Security Agency and the Federal Bureau of Investigation in the United States, as well as the Canadian Centre for Cyber Security, the French cybersecurity agency (ANSSI), Germany's Federal Office for Information Security (BSI), the Cyber Security Agency of Singapore, and Japan's National Center of Incident Readiness and Strategy for Cybersecurity.

Lindy Cameron, chief executive of the NCSC, said in a press release: “We know that artificial intelligence is developing at a phenomenal rate and there needs to be concerted international action, between governments and industry, to keep pace.”

Securing the four key phases of the AI development lifecycle

The guidelines for the secure development of AI systems are structured into four sections, each corresponding to a phase of the AI system development lifecycle: secure design, secure development, secure deployment, and secure operation and maintenance.

  • Secure design offers specific guidance for the design phase of the AI system development lifecycle. It emphasizes the importance of recognizing risks and conducting threat modeling, as well as considering various topics and tradeoffs when designing systems and models.
  • Secure development covers the development phase of the AI system lifecycle. Recommendations include ensuring supply chain security, maintaining thorough documentation, and effectively managing assets and technical debt.
  • Secure deployment addresses the deployment phase of AI systems. The guidance here covers protecting infrastructure and models from compromise, threat or loss, defining incident-management processes, and adopting responsible release principles.
  • Secure operation and maintenance provides guidance for the phase after AI models have been deployed, covering effective logging and monitoring, update management and responsible information sharing.
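The logging-and-monitoring recommendation from the operation phase can be illustrated with a minimal sketch (the function names here are hypothetical, not taken from the guidelines): an inference wrapper that records a hash of each input rather than the raw text, so the audit log itself does not leak sensitive data to anyone who reads it.

```python
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-audit")


def run_model(prompt: str) -> str:
    # Stand-in for a real model call (e.g. an API request to a hosted model).
    return prompt.upper()


def audited_inference(prompt: str) -> str:
    """Run inference and emit a structured audit record.

    The input is logged as a SHA-256 hash, not verbatim, so the audit
    trail supports incident investigation without storing user data.
    """
    start = time.monotonic()
    output = run_model(prompt)
    log.info(json.dumps({
        "event": "inference",
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "latency_ms": round((time.monotonic() - start) * 1000, 2),
        "output_chars": len(output),
    }))
    return output
```

A production setup would ship these records to a central, append-only log store and alert on anomalies (latency spikes, unusual request volumes), which is the kind of monitoring the operation-phase guidance has in mind.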

Guidelines for all AI systems

The guidelines are applicable to all types of AI systems, not just the “frontier” models discussed extensively at the AI Safety Summit hosted in the UK on 1 and 2 November 2023. They are also applicable to all professionals working in and around AI, including developers, data scientists, managers, decision makers, and other AI “risk owners.”

“We aimed the guidelines primarily at providers of AI systems that use models hosted by an organization (or use external APIs), but we encourage all interested parties… to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their artificial intelligence systems,” said the NCSC.

Results of the AI Safety Summit

During the AI Safety Summit, held at the historic site of Bletchley Park in Buckinghamshire, England, representatives from 28 countries signed the Bletchley Declaration on AI safety, which highlights the importance of designing and deploying artificial intelligence systems safely and responsibly, with an emphasis on collaboration and transparency.


The declaration recognizes the need to address the risks associated with cutting-edge AI models, particularly in areas such as cybersecurity and biotechnology, and supports greater international collaboration to ensure the safe, ethical and beneficial use of AI.

Michelle Donelan, Britain's science and technology secretary, said the newly published guidelines “will put cybersecurity at the heart of the development of artificial intelligence” from inception to deployment.

Reactions to these AI guidelines from the cybersecurity industry

The publication of the guidelines on artificial intelligence has been welcomed by cybersecurity experts and analysts.

Toby Lewis, global head of threat analysis at Darktrace, described the guidance as “a welcome blueprint” for safe and trustworthy artificial intelligence systems.

Commenting via email, Lewis said: “I am pleased to see that the guidelines highlight the need for those developing artificial intelligence to protect their data and models from attackers, and for AI users to apply the right AI for the right task. Those developing AI should go further and build trust by walking users through the journey of how their AI reaches the answers. With confidence and trust, we will realize the benefits of AI faster and for more people.”

Georges Anidjar, vice president for Southern Europe at Informatica, said the publication of the guidelines marks “a significant step towards addressing the cybersecurity challenges inherent in this rapidly evolving field.”

BlogInnovazione.it

