The guidelines for the safe development of artificial intelligence systems are designed to guide developers through the design, development, deployment and operation of AI systems, and to ensure that security remains a critical component throughout their lifecycle. However, other stakeholders in AI projects should also find this information useful.
These guidelines were released soon after world leaders committed to the safe and responsible development of artificial intelligence at the AI Safety Summit in early November.
The Guidelines for Developing Safe AI Systems set out recommendations to ensure that AI models – whether built from scratch or based on existing models or APIs from other companies – “work as intended, are available when needed, and work without revealing sensitive data to unauthorized parties.”
Key to this is the “secure by default” approach advocated by the NCSC, CISA, the National Institute of Standards and Technology and various other international cybersecurity agencies in existing frameworks. The principles of these frameworks include:
According to the NCSC, a total of 21 agencies and ministries from 18 countries have confirmed that they will approve and co-seal the new guidelines. These include the National Security Agency and the Federal Bureau of Investigation in the United States, as well as the Canadian Centre for Cyber Security, France's cybersecurity agency (ANSSI), Germany's Federal Office for Information Security (BSI), the Cyber Security Agency of Singapore and Japan's National Center of Incident Readiness and Strategy for Cybersecurity.
Lindy Cameron, chief executive of the NCSC, said in a press release: “We know that artificial intelligence is developing at a phenomenal rate and there needs to be concerted international action, between governments and industry, to keep pace.”
The guidelines for the safe development of AI systems are structured into four sections, each corresponding to a different phase of the development lifecycle of an AI system: secure design, secure development, secure deployment, and secure operation and maintenance.
The guidelines are applicable to all types of AI systems, not just the “frontier” models that were discussed extensively at the AI Safety Summit hosted in the UK on 1 and 2 November 2023. They are also applicable to all professionals working in and around AI, including developers, data scientists, managers, decision makers, and other AI “risk owners.”
“We aimed the guidelines primarily at providers of AI systems that use models hosted by an organization (or use external APIs), but we encourage all stakeholders… to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their artificial intelligence systems,” said the NCSC.
During the AI Safety Summit, held at the historic site of Bletchley Park in Buckinghamshire, England, representatives from 28 countries signed the Bletchley Declaration on AI safety, which highlights the importance of designing and implementing artificial intelligence systems safely and responsibly, with an emphasis on collaboration and transparency.
The declaration recognizes the need to address the risks associated with cutting-edge AI models, particularly in areas such as cybersecurity and biotechnology, and supports greater international collaboration to ensure the safe, ethical and beneficial use of AI.
Michelle Donelan, Britain's science and technology secretary, said the newly published guidelines “will put cybersecurity at the heart of the development of artificial intelligence” from inception to deployment.
The publication of the artificial intelligence guidelines has been welcomed by cybersecurity experts and analysts.
Toby Lewis, global head of threat analysis at Darktrace, described the guidelines as “a welcome project” for safe and reliable artificial intelligence systems.
Commenting via email, Lewis said: “I am pleased to see that the guidelines highlight the need for AI developers to protect their data and models from attackers, and for AI users to apply the right AI to the right task. Those developing AI should go further and build trust by walking users through the journey of how their AI reaches the answers. With confidence and trust, we will realize the benefits of AI faster and for more people.”
Georges Anidjar, vice president for Southern Europe at Informatica, said the publication of the guidelines marks “a significant step towards addressing the cybersecurity challenges inherent in this rapidly evolving field.”