
Accelerating Safe and Secure AI Adoption with ATO for AI: stackArmor Comments on OMB AI Memo – Source: securityboulevard.com


Source: securityboulevard.com – Author: stackArmor

Ms. Clare Martorana,

U.S. Federal Chief Information Officer,

Office of the Federal Chief Information Officer,

Office of Management and Budget.

Subject: Request for Comments on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Draft Memorandum

Ms. Martorana,

We appreciate the opportunity to comment on the proposed Memo on Agency Use of Artificial Intelligence. As the CEO and founder of a small and innovative solutions provider, stackArmor, Inc., headquartered in Tysons, VA, I applaud your efforts to foster transparency and solicit ideas and comments.

We believe the three most important initiatives in the memo to help agencies advance governance, innovation, and risk management for use of AI are:

(1) ensuring agencies have access to adequate IT infrastructure,

(2) modernizing cybersecurity authorization processes to better address the needs of AI applications, and

(3) establishing uniform and consistent shared practices to independently evaluate AI systems and conduct ongoing monitoring.

We base our remarks on our experience helping US Federal agencies transform their information technology systems using new and emerging technologies like cloud computing since 2009, beginning with the first migration of a government-wide system, Recovery.gov, to a commercial cloud service provider. Since then, we have had the privilege of supporting numerous transformation initiatives, including being part of the GSA Centers of Excellence (COE) since 2018 and contributing to the development of the Cloud Adoption Playbook while supporting transformation engagements at agencies including USDA, HUD, NIH, and OPM, among others.

Our approach to Risk Management and Governance is rooted in using open standards and frameworks provided by NIST. We believe that OMB’s guidance should encourage the augmentation and tailoring of existing risk management processes to ensure that Federal agencies can start accruing the benefits of AI technologies without costly delays associated with implementing a new governance program. A pragmatic approach that tailors NIST RMF, NIST SP 800-53 and governance models such as ATOs (Authority to Operate) to align with the NIST AI RMF (NIST AI 100-1) provides a balanced approach that ensures that AI specific risks like safety, bias and explainability are adequately covered while being able to leverage the existing cyber workforce, risk management procedures, and body of knowledge associated with NIST RMF/800-53.

We are providing the following specific comments to questions from OMB requesting feedback.

1. The composition of Federal agencies varies significantly in ways that will shape the way they approach governance. An overarching Federal policy must account for differences in an agency’s size, organization, budget, mission, organic AI talent, and more. Are the roles, responsibilities, seniority, position, and reporting structures outlined for Chief AI Officers sufficiently flexible and achievable for the breadth of covered agencies?

Given that most AI capabilities within an agency will be delivered by IT systems that are highly likely to be based on cloud computing technologies (public or private), the designated Chief AI Officers should have sufficient experience with and exposure to cloud computing technologies as well as the Federal Risk and Authorization Management Program (FedRAMP®) to ensure that cost-effective and secure commercial solutions can help meet the agency’s AI needs. Maximizing the use of secure and compliant commercial solutions will be critical to rapidly reaping the benefits of AI capabilities, and to the extent Chief AI Officers understand AI systems and commercial solutions, they can help remove roadblocks and avoid duplication of effort, where agencies re-create capabilities that already exist in the commercial sector.

Further, Chief AI Officers should have a keen understanding of the agency’s mission and how AI can enhance, improve, or bring new service delivery capabilities. Agencies should have the flexibility to determine appropriate reporting structures that best fit their needs and, where the Chief AI Officer is not dual-hatted with the CIO or CDO, for example, ensure close collaboration and coordination with other CxOs (e.g., CIO, CDO, CXO, CISO, Chief Privacy Officer).

2. What types of coordination mechanisms, either in the public or private sector, would be particularly effective for agencies to model in their establishment of an AI Governance Body? What are the benefits or drawbacks to having agencies establish a new body to perform AI governance versus updating the scope of an existing group (for example, agency bodies focused on privacy, IT, or data)?

We believe that augmenting and building upon existing risk management mechanisms, especially in the IT domain, is likely to help accelerate AI adoption in support of the mission without causing costly delays associated with standing up a brand new governance body or model. Using an approach that ties the NIST AI RMF to existing cyber risk management models based on NIST RMF, NIST SP 800-53, and NIST SP 800-53A, as well as leveraging the work done by the Federal Privacy Council, there is a critical mass of understanding and knowledge that agencies can leverage to reduce the time and cost of AI adoption across the federal enterprise. To help avoid a situation where every agency comes up with its own governance model, OMB could direct NIST, GSA, and DHS/CISA to create a FISMA Profile for the NIST AI RMF, which can then be tailored and adopted by each of the 24 CFO Act agencies.

Additionally, given that most AI capabilities will be delivered using IT systems, modernizing existing cyber processes and equipping the workforce with critical skills like ethics, safety, and civil rights awareness specific to AI systems can help ease the transition burden associated with new technology insertion.

3. How can OMB best advance responsible AI innovation?

OMB should consider creating a consistent and uniform governance model that does not vary from agency to agency. The creation of “snowflake” compliance models unique to an agency will deter participation by small and innovative solution providers across the country. Once the initial wave of foundational systems and AI computing platforms (e.g., commercial or private clouds) is in place, the enduring set of government- or agency-specific solutions is likely to come from small, nimble businesses. Therefore, ensuring market access for small businesses through existing channels like FedRAMP and SBA’s SBIR/STTR funding programs, as well as reiterating the need to meet small business and socio-economic goals for AI solutions and systems, are important actions that OMB can take to help advance the deployment of AI innovation while ensuring an equitable and competitive marketplace that does not become concentrated in a handful of large players.

OMB should also designate or delegate responsibility for defining criteria, processes, and operational pathways for “independent review”. Unless there is a responsible center point with funding to build out the operational substantiation of this concept, there is a risk that the independent review becomes so costly or burdensome as to become a barrier to innovation. The FedRAMP program has established an objective, standards-based program with third-party assessor organizations (3PAOs), which could serve as a starting point for enabling an independent review framework.

4. With adequate safeguards in place, how should agencies take advantage of generative AI to improve agency missions or business operations?

We believe an iterative, low-risk approach to generative AI adoption will likely be the most productive. In many ways, we draw parallels to how commercial cloud computing adoption occurred almost a decade ago. Given the lack of understanding and the initial trust deficit in cloud solutions, low-risk public-facing websites were some of the early workloads to migrate to the cloud. Some of the earliest systems to move to the cloud were Recovery.gov and Treasury.gov. More mission-critical systems then began moving to the cloud once greater confidence, understanding, and trust were established and the governance model matured.

Initially, NIST SP 800-53 Rev 3 was used with cloud computing overlays, then subsequently FedRAMP came along and NIST incorporated cloud computing-aware controls into SP 800-53 Rev 4. Similarly, as the governance model matures, mission critical use cases that will benefit from AI will start to emerge.

There are a number of relatively low-risk use cases, such as software development using code generators; marketing and outreach automation and enhanced customer engagement are further areas of rapid industry innovation that translate well for use across the federal enterprise.

Additionally, OMB should reinforce and support agencies on their overall data maturity such that agencies are better positioned to take advantage of AI capabilities.  The Federal Data Strategy 10-year plan, if followed, is a solid model created to drive government-wide data maturity.  Improved data maturity ensures faster, better, and more reliable AI generated outcomes.

5. Are there use cases for presumed safety-impacting and rights-impacting AI (Section 5 (b)) that should be included, removed, or revised? If so, why?

No comment

6. Do the minimum practices identified for safety-impacting and rights-impacting AI set an appropriate baseline that is applicable across all agencies and all such uses of AI? How can the minimum practices be improved, recognizing that agencies will need to apply context-specific risk mitigations in addition to what is listed?

We believe an approach that draws upon existing IT/cyber risk management practices offers a pathway that allows agencies to implement minimum baselines while retaining the freedom to innovate and tailor the model to suit the wide diversity of mission requirements across the federal enterprise. Our ATO for AI™ approach recommends a FIPS 199-like model in which agencies categorize risk baselines as high, moderate, or low across the confidentiality, integrity, and availability dimensions. Similarly, AI systems would have risk baselines categorized as high, moderate, or low across safety, bias/rights, and explainability dimensions. This allows every agency to suitably tailor its risk management controls based on its specific requirements while adhering to the overall guardrails agencies must follow. The consolidated NIST AI RMF-mapped baseline should be based on all six categories: 1) confidentiality, 2) integrity, 3) availability, 4) safety, 5) bias/rights, and 6) explainability.
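To illustrate the six-dimension categorization described above, the sketch below applies the familiar FIPS 199 "high-water mark" rule (the overall baseline is the highest impact level assigned across dimensions) to the three classic security dimensions plus the three AI-specific ones. The dimension names, the example system, and the use of the high-water-mark rule for the AI dimensions are assumptions for illustration, not a formal part of this proposal.

```python
# Illustrative sketch of a FIPS 199-style categorization extended with the
# three AI-specific dimensions suggested above. The high-water-mark rule and
# the example values are assumptions for illustration only.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

DIMENSIONS = (
    "confidentiality", "integrity", "availability",  # classic FIPS 199
    "safety", "bias_rights", "explainability",       # AI-specific additions
)

def categorize(system: dict) -> str:
    """Return the overall baseline via the FIPS 199 high-water mark:
    the highest impact level assigned across all six dimensions."""
    missing = [d for d in DIMENSIONS if d not in system]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return max((system[d] for d in DIMENSIONS), key=LEVELS.__getitem__)

# Hypothetical example: a public-facing benefits chatbot where bias/rights
# exposure drives the overall baseline even though C/I/A impacts are modest.
chatbot = {
    "confidentiality": "low", "integrity": "moderate", "availability": "low",
    "safety": "low", "bias_rights": "high", "explainability": "moderate",
}
print(categorize(chatbot))  # -> high
```

Using the same high-water-mark convention agencies already apply under FIPS 199 means existing categorization workflows extend naturally, rather than requiring a parallel process for AI systems.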

7. What types of materials or resources would be most valuable to help agencies, as appropriate, incorporate the requirements and recommendations of this memorandum into relevant contracts?

We recommend OMB direct NIST, DOD, DHS/CISA, and GSA to develop a FISMA and FedRAMP Profile for the NIST AI RMF that provides an actionable implementation model for agencies. We believe an approach that maps and augments NIST SP 800-53 controls to NIST AI RMF risk categories and sub-categories offers an expeditious pathway well understood by a broad cross-section of acquisition, program, and industry members. Acquisition solicitations can then reference the need to comply with the NIST AI RMF/FISMA AI Profile as part of the solicitation language.

There should also be a directive to separate the evaluation process for AI capabilities that are embedded in vendor solutions from AI capabilities that are built by government agencies.

Procurement and budget officials should also have a view of the key components of “AI” so that the suggested controls and evaluations for AI are applied to the appropriate elements of the acquisition: infrastructure, devices, data, software platforms, and all related “as a service” elements.

8. What kind of information should be made public about agencies’ use of AI in their annual use case inventory?

We believe that the AI use case inventory should offer meaningful information on mission and quantifiable outcomes (work effort savings, elimination of errors, and efficiency gains, among others) achieved through the deployment of the AI technology. Additionally, the AI use case inventory should provide an indication of the technology components used, e.g., FedRAMP-accredited cloud services or GOTS as a case in point. Such data will enable the analysis of consumption patterns, the estimation of supply chain risk to the government, the enhancement of overall learning, and improved decision-making. Because agencies work together and exchange information as they deliver services, the information should also reflect or indicate cross-agency use cases.

I hope you find the information and contents of this brief document useful as OMB formulates and finalizes the OMB memo on safe and secure AI adoption in agencies.

Very respectfully,

12/4/2023

Gaurav Pal

stackArmor, Inc

https://stackarmor.com/airmf-accelerator/

Appendix – ATO for AI™ Open Governance Model based on NIST Standards

Based on our experience helping agencies, commercial organizations, and regulated entities implement security controls, we have developed an open and standards-based governance model that we call ATO for AI™. This model begins with the seven trustworthy characteristics of AI and the NIST AI RMF risk categories and sub-categories and maps them to the NIST SP 800-53 Rev 5 control families and controls. The model adds an AI Overlay construct that includes AI-specific controls not adequately covered by existing NIST SP 800-53 Rev 5 controls. Hence, the combination of tailored NIST SP 800-53 Rev 5 controls with an AI overlay provides an actionable and well-understood approach to risk management that can accelerate the adoption of AI while reducing the time delays and costs of alternate approaches to AI risk management and governance. The infographic below provides an overview of our overall approach to AI risk management and governance.
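The mapping structure described above can be sketched as a simple catalog: each NIST AI RMF sub-category points to the NIST SP 800-53 Rev 5 controls that cover it, plus any AI-overlay controls where coverage is inadequate. The specific sub-category-to-control pairings and the overlay control identifier below are hypothetical examples for illustration, not stackArmor's actual mapping catalog.

```python
# Illustrative sketch of the ATO for AI(TM) mapping structure: NIST AI RMF
# sub-categories mapped to SP 800-53 Rev 5 controls plus an AI overlay.
# The pairings and the "AI-EXPL-1" overlay ID are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class ControlMapping:
    ai_rmf_subcategory: str      # e.g., a GOVERN/MAP/MEASURE/MANAGE sub-category
    sp80053_controls: list       # mapped NIST SP 800-53 Rev 5 controls
    overlay_controls: list = field(default_factory=list)  # AI-specific additions

catalog = [
    ControlMapping("GOVERN 1.1", ["PM-1", "PM-9"]),
    ControlMapping("MEASURE 2.5", ["CA-2", "CA-7"], ["AI-EXPL-1"]),
]

def controls_for(subcategory: str) -> list:
    """All controls (base plus overlay) an assessor would check for a sub-category."""
    for m in catalog:
        if m.ai_rmf_subcategory == subcategory:
            return m.sp80053_controls + m.overlay_controls
    return []

print(controls_for("MEASURE 2.5"))  # -> ['CA-2', 'CA-7', 'AI-EXPL-1']
```

Keeping the base controls and the overlay separate in the data model preserves the key property of the approach: assessors reuse their existing SP 800-53 body of knowledge and only the overlay entries require new, AI-specific evaluation procedures.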

Original Post URL: https://securityboulevard.com/2023/12/accelerating-safe-and-secure-ai-adoption-with-ato-for-ai-stackarmor-comments-on-omb-ai-memo/

Category & Tags: Security Bloggers Network, AI, ATO, Blog, OMB

