Artificial intelligence has zoomed to the forefront of public and professional discourse, and with it have come expressions of fear that as AI advances, so does the likelihood that we will have created a variety of beasts that threaten our very existence. Within those fears also lie worries about whether those who create large language models (LLMs), and the engines that harvest the data feeding them, are doing so ethically.

To be frank, I hadn’t given the matter much thought until a recent discussion about the need for “responsible and ethical AI” prompted me to, coming as it did amid the constant blast that AI is either evil personified or, conversely, some holy grail.

I went away, began digging, and found that the US Department of Defense (DoD) has a framework, used and shared publicly since early 2020, comprising five principles that lay out what responsible artificial intelligence should look like:

  1. Responsible — Exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable — Take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable — AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedures and documentation.
  4. Reliable — AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles.
  5. Governable — Design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

Demonstrating a significant amount of prescience, Air Force Lt. General Jack Shanahan, then head of the Joint Artificial Intelligence Center (since integrated, in 2022, into the Chief Digital and Artificial Intelligence Office, led by Dr. Craig Martell, chief digital and artificial intelligence officer), noted in the context of the military’s use of AI in support of the warfighter that “whether it does so positively or negatively depends on our approach to adoption and use. The complexity and the speed of warfare will change as we build an AI-ready force of the future. We owe it to the American people and our men and women in uniform to adopt AI ethics principles that reflect our nation’s values of a free and open society.”

In late 2021, the DoD published Project Herald, which outlines the Defense Intelligence Digital Transformation Campaign Plan for 2022-2027. The plan embraces the aforementioned pillars of responsible AI and aligns perfectly with what every CISO should be addressing within their remit: people, process, and technology.

So here we are in 2023, and the White House has joined in with a plethora of steps designed to foster the evolution of responsible AI, and not a moment too soon. In early May, the administration announced the creation of additional National AI Research Institutes (and $140 million to make it happen). The seven new institutes will join the 18 that already exist, all focused on AI research.

The actions taken by the executive branch of the US government, coupled with its clear understanding that AI is a national security issue, should translate easily for the CISO: AI is also a priority corporate security issue.

CISOs should embrace the DoD framework on AI

How does this distill into actionable elements that will assist the CISO who is wading through the ad copy thrown over the transom by marketeers, trying to determine what actually exists and what is the infamous vaporware? I submit that the CISO should take this DoD framework and run with it when evaluating what is being considered for inclusion in their network.

  1. Responsible – Ensure that training and playbooks exist to assist personnel in implementing AI-based solutions within the technology stack.
  2. Equitable – Detecting bias in an AI “black-box” solution may be the most difficult challenge facing CISOs. It may also be the most important, as bias will (not may) bring unintended consequences.
  3. Traceable – Black boxes are not the CISO’s friend. If you cannot provide provenance for the information revealed through interrogation of your large language model, then you are merely hoping a bias isn’t present, or that the engine isn’t just “best guessing” on your behalf. As the DoD emphasizes, transparency and auditable methodologies are your friends (a sketch of what such provenance logging might look like follows this list).
  4. Reliable – Is there such a thing as 100% reliable? With AI and machine-speed decision-making, reliability is foundational.
  5. Governable – There will be many large language models, some for general use, and others for specialized use, designed for specific functions. The DoD’s recommendation to build in the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior is paramount.

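To make the traceable and governable pillars concrete, here is a minimal sketch of how an LLM integration could log provenance for every call and expose a disengagement switch. It is illustrative only: the AuditedLLMGateway class, its fields, and the stand-in query_model callable are assumptions made for this example, not part of the DoD guidance or any particular vendor’s API.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class AuditedLLMGateway:
    """Illustrative wrapper: records provenance for every model call and
    can be disengaged if the capability shows unintended behavior."""
    model_id: str                      # which model/version answered
    query_model: Callable[[str], str]  # the actual LLM call (vendor SDK, API, etc.)
    data_sources: List[str]            # corpora or feeds the model is grounded on
    audit_log: List[dict] = field(default_factory=list)
    engaged: bool = True               # governability: the kill switch

    def ask(self, prompt: str) -> str:
        if not self.engaged:
            raise RuntimeError("AI capability disengaged pending review")
        response = self.query_model(prompt)
        # Traceability: keep enough metadata to reconstruct who/what/when.
        self.audit_log.append({
            "timestamp": time.time(),
            "model_id": self.model_id,
            "data_sources": self.data_sources,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        })
        return response

    def disengage(self, reason: str) -> None:
        # Governability: deactivate the deployed capability and record why.
        self.engaged = False
        self.audit_log.append({
            "timestamp": time.time(),
            "event": "disengaged",
            "reason": reason,
        })


if __name__ == "__main__":
    # Wire the gateway to a stand-in model and inspect the audit trail.
    gateway = AuditedLLMGateway(
        model_id="example-llm-v1",
        query_model=lambda p: "stub answer to: " + p,
        data_sources=["internal-ticket-archive", "public-docs"],
    )
    gateway.ask("Summarize yesterday's failed login spikes.")
    gateway.disengage("response quality below threshold")
    print(json.dumps(gateway.audit_log, indent=2))
```

Even a sketch this small shows the point: if every AI-assisted decision leaves an auditable trail and the capability can be switched off deliberately, the CISO is no longer merely hoping the black box behaves.
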
No one wants a situation where AI-empowered tools move at machine speed, make decisions that on paper should protect the enterprise, yet end up creating consequences that may or may not be detectable. Embracing the ethical pillars of responsible AI as detailed by the DoD is not a heavy lift, though it may be an inconvenient one. Everyone in the cybersecurity realm understands the threat that “convenience” poses to security, and thus investing in “the need to absorb the inconvenience” will be one more task put upon the CISO’s already full plate.
