Northrop Grumman is working to establish “justified confidence” in artificial intelligence systems by aligning AI development with the Department of Defense’s five AI ethical principles to ensure that such platforms are accountable, robust and reliable.
The company said Monday it is implementing an integrated approach to developing secure and ethical AI systems. One step in that effort is a partnership with Silicon Valley-based startup Credo AI, whose governance tools help guide Northrop’s AI development work.
Northrop noted that it is also collaborating with Carnegie Mellon University and other academic institutions to develop new best practices for ethical and secure AI, and is extending its DevSecOps process to document and automate best practices for AI software development, monitoring, testing and deployment.
The company cited other measures it is implementing to comply with DOD’s AI ethical principles: responsible, equitable, traceable, reliable and governable.
These include testing for data bias and employing a diverse engineering team; providing an immutable log of data provenance; emphasizing mission understanding to ensure reliability; and advancing mission-focused employee training and DevSecOps practices.