Abstract
This CETaS Research Report explores how national security bodies can effectively evaluate AI systems designed and developed (at least partially) by industry suppliers. We argue that involving industry in the design and development of AI is essential if UK national security bodies want to keep pace with cutting-edge capabilities. But when stages of the AI lifecycle are outsourced, direct oversight may be reduced. Our tailored AI assurance framework for UK national security facilitates more transparent communication about AI systems and robust assessment of whether AI systems meet requirements. The framework centres on a structured system card template for UK national security, which provides guidance on how AI system properties should be documented, covering legal, supply chain, performance, security, and ethical considerations. We recommend that this assurance framework be trialled by national security bodies and industry suppliers in the immediate term.
| Original language | English |
| --- | --- |
| Place of Publication | London |
| Publisher | The Alan Turing Institute |
| Number of pages | 70 |
| Publication status | Published - 17 Jan 2024 |