Accelerating Safe and Secure AI Adoption with ATO for AI: stackArmor Comments on OMB AI Memo

Ms. Clare Martorana,

U.S. Federal Chief Information Officer,

Office of the Federal Chief Information Officer,

Office of Management and Budget.

Subject: Request for Comments on the Draft Memorandum "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence"

Ms. Martorana,

We appreciate the opportunity to comment on the proposed Memo on Agency Use of Artificial Intelligence. As the CEO and founder of a small and innovative solutions provider, stackArmor, Inc., headquartered in Tysons, VA, I applaud your efforts to foster transparency and solicit ideas and comments.

We believe the three most important initiatives in the memo to help agencies advance governance, innovation, and risk management for use of AI are:

(1) ensuring agencies have access to adequate IT infrastructure,

(2) modernizing cybersecurity authorization processes to better address the needs of AI applications, and

(3) establishing uniform and consistent shared practices to independently evaluate AI systems and conduct ongoing monitoring.

We base our remarks on our experience helping U.S. Federal agencies transform their information technology systems using new and emerging technologies such as cloud computing, beginning in 2009 with the first migration of a government-wide system, Recovery.gov, to a commercial cloud service provider. Since then, we have had the privilege of supporting numerous transformation initiatives, including participating in the GSA Centers of Excellence (COE) since 2018 and contributing to the development of the Cloud Adoption Playbook while supporting transformation engagements at agencies including USDA, HUD, NIH, and OPM, among others.

Our approach to Risk Management and Governance is rooted in using open standards and frameworks provided by NIST. We believe that OMB’s guidance should encourage the augmentation and tailoring of existing risk management processes to ensure that Federal agencies can start accruing the benefits of AI technologies without the costly delays associated with implementing a new governance program. A pragmatic approach that tailors NIST RMF, NIST SP 800-53, and governance models such as Authorities to Operate (ATOs) to align with the NIST AI RMF (NIST AI 100-1) ensures that AI-specific risks such as safety, bias, and explainability are adequately covered while leveraging the existing cyber workforce, risk management procedures, and body of knowledge associated with NIST RMF/800-53.

We are providing the following specific comments in response to the questions on which OMB requested feedback.

1. The composition of Federal agencies varies significantly in ways that will shape the way they approach governance. An overarching Federal policy must account for differences in an agency’s size, organization, budget, mission, organic AI talent, and more. Are the roles, responsibilities, seniority, position, and reporting structures outlined for Chief AI Officers sufficiently flexible and achievable for the breadth of covered agencies?

Given that most AI capabilities within an agency will be delivered by IT systems that are highly likely to be based on cloud computing technologies (public or private), the designated Chief AI Officers should have sufficient experience with and exposure to cloud computing technologies as well as the Federal Risk and Authorization Management Program (FedRAMP®) to ensure that cost-effective and secure commercial solutions can help meet the agency’s AI needs. Maximizing the use of secure and compliant commercial solutions will be critical to helping agencies rapidly reap the benefits of AI capabilities, and to the extent Chief AI Officers understand AI systems and commercial solutions, they will be able to remove roadblocks and avoid duplication of effort, where agencies re-create capabilities that already exist in the commercial sector.

Further, Chief AI Officers should have a keen understanding of the agency’s mission and how AI can enhance and improve existing service delivery capabilities or bring new ones. Agencies should have the flexibility to determine appropriate reporting structures that best fit their needs and, where the Chief AI Officer is not dual-hatted with the CIO or CDO, for example, should ensure close collaboration and coordination with other CxOs (e.g., CIO, CDO, CXO, CISO, Chief Privacy Officer).

2. What types of coordination mechanisms, either in the public or private sector, would be particularly effective for agencies to model in their establishment of an AI Governance Body? What are the benefits or drawbacks to having agencies establishing a new body to perform AI governance versus updating the scope of an existing group (for example, agency bodies focused on privacy, IT, or data)?

We believe that augmenting and building upon existing risk management mechanisms, especially in the IT domain, is likely to help accelerate AI adoption in support of the mission without causing the costly delays associated with standing up a brand-new governance body or model. By tying the NIST AI RMF to existing cyber risk management models based on NIST RMF, NIST SP 800-53, and NIST SP 800-53A, and by leveraging the work done by the Federal Privacy Council, agencies can draw upon a critical mass of understanding and knowledge to reduce the time and cost of AI adoption across the federal enterprise. To help avoid a situation where every agency comes up with its own governance model, OMB could direct NIST, GSA, and DHS/CISA to create a FISMA Profile for the NIST AI RMF, which could then be tailored and adopted by each of the 24 CFO Act agencies.

Additionally, given that most AI capabilities will be delivered using IT systems, modernizing existing cyber processes and equipping the workforce with critical skills like ethics, safety and civil rights awareness specific to AI systems can help ease the transition burden associated with new technology insertion.

3. How can OMB best advance responsible AI innovation?

OMB should consider creating a consistent and uniform governance model that does not vary from agency to agency. The creation of “snowflake” compliance models unique to each agency will deter participation by small and innovative solution providers across the country. Once the initial wave of foundational systems and AI computing platforms (e.g., commercial or private clouds) is in place, the enduring set of government- or agency-specific solutions is likely to come from small, nimble businesses. Therefore, ensuring market access for small businesses through existing channels like FedRAMP and SBA’s SBIR/STTR funding programs, as well as reiterating the need to meet small business and socio-economic goals for AI solutions and systems, are important actions OMB can take to advance the deployment of AI innovation while ensuring an equitable and competitive marketplace that does not become concentrated in a handful of large players.

OMB should also designate or delegate responsibility for defining the criteria, processes, and operational pathways for “independent review”. Unless there is a responsible center point with funding to build out the operational substantiation of this concept, there is a risk that independent review becomes so costly or burdensome as to be a barrier to innovation. The FedRAMP program has established an objective, standards-based program with third-party assessor organizations (3PAOs), which could serve as a starting point for enabling an independent review framework.

4. With adequate safeguards in place, how should agencies take advantage of generative AI to improve agency missions or business operations?

We believe an iterative, low-risk approach to generative AI adoption will likely be the most productive. In many ways, we draw parallels to how commercial cloud computing adoption occurred more than a decade ago. Given the lack of understanding and the initial trust deficit in cloud solutions, low-risk public-facing websites were some of the early workloads to migrate to the cloud. Some of the earliest systems to move to the cloud were Recovery.gov and Treasury.gov. More mission-critical systems then began moving to the cloud once greater confidence, understanding, and trust were established and the governance model matured.

Initially, NIST SP 800-53 Rev 3 was used with cloud computing overlays, then subsequently FedRAMP came along and NIST incorporated cloud computing-aware controls into SP 800-53 Rev 4. Similarly, as the governance model matures, mission critical use cases that will benefit from AI will start to emerge.

There are a number of relatively low-risk use cases around software development using code generators, marketing and outreach automation, and enhanced customer engagement; these are areas of rapid industry innovation that translate well for use across the federal enterprise.

Additionally, OMB should reinforce and support agencies in improving their overall data maturity so that agencies are better positioned to take advantage of AI capabilities. The Federal Data Strategy 10-year plan, if followed, is a solid model for driving government-wide data maturity. Improved data maturity ensures faster, better, and more reliable AI-generated outcomes.

5. Are there use cases for presumed safety-impacting and rights-impacting AI (Section 5 (b)) that should be included, removed, or revised? If so, why?

No comment

6. Do the minimum practices identified for safety-impacting and rights-impacting AI set an appropriate baseline that is applicable across all agencies and all such uses of AI? How can the minimum practices be improved, recognizing that agencies will need to apply context-specific risk mitigations in addition to what is listed?

We believe an approach that draws upon existing IT/cyber risk management practices offers a pathway for agencies to implement minimum baselines while retaining the freedom to innovate and tailor the model to suit the wide diversity of mission requirements across the federal enterprise. Our ATO for AI™ approach recommends considering a FIPS 199-like model: just as agencies categorize systems as high, moderate, or low impact across the confidentiality, integrity, and availability dimensions, AI systems would have risk baselines categorized as high, moderate, or low across the safety, bias/rights, and explainability dimensions. This allows every agency to suitably tailor the risk management controls based on its specific requirements while adhering to the overall guardrails agencies must follow. The consolidated NIST AI RMF mapped baseline should be based on all six categories: 1) confidentiality, 2) integrity, 3) availability, 4) safety, 5) bias/rights, and 6) explainability.
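To make this concrete, the following is a minimal illustrative sketch, assuming a FIPS 199-style high-water-mark rule for deriving an overall baseline; the dimension names mirror the six categories above, but how the overall level is derived would ultimately be defined by OMB/NIST guidance, not by this sketch:

```python
from dataclasses import dataclass
from enum import IntEnum


class Level(IntEnum):
    """Impact levels, ordered so max() yields the high-water mark."""
    LOW = 1
    MODERATE = 2
    HIGH = 3


@dataclass
class AISystemCategorization:
    """FIPS 199-style categorization extended with three AI-specific dimensions."""
    confidentiality: Level
    integrity: Level
    availability: Level
    safety: Level
    bias_rights: Level
    explainability: Level

    def overall_baseline(self) -> Level:
        # Hypothetical high-water-mark rule across all six dimensions,
        # mirroring how FIPS 199/FIPS 200 derive an overall impact level.
        return max(
            self.confidentiality, self.integrity, self.availability,
            self.safety, self.bias_rights, self.explainability,
        )


# Illustrative example: a benefits-eligibility chatbot might be moderate
# on the traditional C/I/A dimensions but high on bias/rights, which
# drives the overall baseline to HIGH under a high-water-mark rule.
chatbot = AISystemCategorization(
    confidentiality=Level.MODERATE, integrity=Level.MODERATE,
    availability=Level.MODERATE, safety=Level.MODERATE,
    bias_rights=Level.HIGH, explainability=Level.MODERATE,
)
print(chatbot.overall_baseline().name)  # HIGH
```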

7. What types of materials or resources would be most valuable to help agencies, as appropriate, incorporate the requirements and recommendations of this memorandum into relevant contracts?

We recommend OMB direct NIST, DOD, DHS/CISA, and GSA to develop a FISMA and FedRAMP Profile for the NIST AI RMF that provides an actionable implementation model for agencies. We believe an approach that maps and augments NIST SP 800-53 controls to NIST AI RMF risk categories and sub-categories offers an expeditious pathway well understood by a broad cross-section of acquisition, program, and industry members. Acquisition solicitations can then reference the need to comply with the NIST AI RMF/FISMA AI Profile in their solicitation language.

There should also be a directive to separate the evaluation process for AI capabilities that are embedded in vendor solutions from that for AI capabilities built by government agencies.

Procurement and budget officials should also have a view of the key components of “AI” so that the suggested controls and evaluations for AI are applied to the appropriate elements of the acquisition: infrastructure, devices, data, software platforms, and all related “as a service” elements.

8. What kind of information should be made public about agencies’ use of AI in their annual use case inventory?

We believe that the AI use case inventory should offer meaningful information on mission and quantifiable outcomes (work effort savings, elimination of errors, and efficiency gains, among others) achieved through the deployment of the AI technology. Additionally, the AI use case inventory should provide an indication of the technology components used (e.g., FedRAMP-accredited cloud services or GOTS). Such data will enable the analysis of consumption patterns, the estimation of supply chain risk to the government, the enhancement of overall learning, and improved decision-making. Because agencies work together and exchange information as they deliver services, the inventory should also reflect or indicate cross-agency use cases.
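As a purely illustrative sketch (the field names and values below are hypothetical, not a proposed government-wide schema), a machine-readable inventory record capturing this kind of information might look like the following:

```python
# Hypothetical sketch of one AI use case inventory record. Field names
# and values are illustrative placeholders, not a proposed schema.
use_case_record = {
    "use_case": "Automated benefits correspondence triage",
    "mission_outcomes": {
        # Quantifiable outcomes, per the categories suggested above.
        "work_effort_savings_hours_per_year": 12000,
        "error_rate_reduction_pct": 35,
        "efficiency_gain_pct": 40,
    },
    "technology_components": {
        "hosting": "FedRAMP-accredited cloud service",  # or GOTS, on-premises, etc.
        "model_type": "commercial large language model",
    },
    "cross_agency": True,  # shared with or consumed by other agencies
    "partner_agencies": ["Agency A", "Agency B"],
}
```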

I hope you find the information in this brief document useful as OMB formulates and finalizes the memo on safe and secure AI adoption in agencies.

Very respectfully,

12/4/2023

Gaurav Pal

stackArmor, Inc.

https://stackarmor.com/airmf-accelerator/

Appendix – ATO for AI™ Open Governance Model based on NIST Standards

Based on our experience helping agencies, commercial organizations, and regulated entities implement security controls, we have developed an open and standards-based governance model that we call ATO for AI™. This model begins with the seven trustworthy characteristics of AI and the NIST AI RMF risk categories and sub-categories and maps them to the NIST SP 800-53 Rev 5 control families and controls. The model adds an AI Overlay construct that includes AI-specific controls not adequately covered by existing NIST SP 800-53 Rev 5 controls. Hence, the combination of tailored NIST SP 800-53 Rev 5 controls with an AI overlay provides an actionable and well-understood approach to risk management that can accelerate the adoption of AI while reducing the time delays and costs of alternate approaches to AI risk management and governance. The infographic below provides an overview of our overall approach to AI risk management and governance.

[Infographic: An end-to-end risk management and governance model for AI that augments and builds upon an agency’s existing processes, procedures, and body of knowledge.]
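To illustrate the shape of such a mapping in concrete terms, the simplified sketch below is hypothetical: the SP 800-53 identifiers are real Rev 5 controls, but the specific mappings and the “AIO-*” overlay entries are invented for illustration and are not our published control mapping.

```python
# Hypothetical, simplified sketch of a NIST AI RMF -> SP 800-53 Rev 5
# mapping with an AI-specific overlay. Mappings shown are illustrative
# examples only, not stackArmor's published control mapping.

AI_RMF_TO_800_53 = {
    # NIST AI RMF function/category -> related SP 800-53 Rev 5 controls
    "GOVERN: policies and accountability": ["PM-1", "PM-9", "CA-6"],
    "MAP: context and risk identification": ["RA-3", "PM-11"],
    "MEASURE: testing and evaluation": ["CA-2", "CA-7", "SA-11"],
    "MANAGE: risk response and monitoring": ["RA-7", "SI-4"],
}

# AI Overlay: illustrative controls for AI-specific risks not adequately
# covered by existing SP 800-53 Rev 5 controls.
AI_OVERLAY = {
    "AIO-1": "Document model provenance, training data sources, and intended use.",
    "AIO-2": "Test for disparate impact across protected classes before deployment.",
    "AIO-3": "Provide human-readable explanations for consequential decisions.",
}


def controls_for(category: str) -> list[str]:
    """Return the tailored baseline: mapped 800-53 controls plus the overlay."""
    return AI_RMF_TO_800_53.get(category, []) + list(AI_OVERLAY)


print(controls_for("MEASURE: testing and evaluation"))
# ['CA-2', 'CA-7', 'SA-11', 'AIO-1', 'AIO-2', 'AIO-3']
```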

We have prepared detailed mappings of the controls and have vetted our approach by sharing it with the NIST public working groups on AI as well as leading industry and government executives as part of our AI Risk Management Center of Excellence (CoE). Members of our CoE include:

  • Ms. Suzette Kent, former U.S. Federal CIO 
  • Ms. Maria Roat, former U.S. Deputy Federal CIO  
  • Mr. Richard Spires, former U.S. Department of Homeland Security CIO 
  • Mr. Alan Thomas, former commissioner of the GSA Federal Acquisition Service
  • Ms. Teresa Carlson, transformational industry executive with over 25 years of leadership

Our CoE members have rich operational and policy experience and have offered the following comments on our approach.

“Harnessing the power of AI for delivery of government mission and services will be transformational. But, it is complicated to align all the emerging policy, risk frameworks, approval processes and existing policy and law. I am thrilled to be included in the COE because I have seen the work of the stackArmor team to drill down to details and find a path to connect all the pieces. We can only get to use of operational AI at scale by working through these details. I hope the output of the COE will deliver tools that agencies can use to move faster and to confidently scale AI capabilities.”

Suzette Kent, Former Federal CIO. Ms. Kent has an extensive private and public sector background. As the Federal CIO, Ms. Kent was responsible for government-wide IT policies and spending, and also chaired the Federal CIO Council and the Technology Modernization Fund Board.

“The adoption of risk-based methods for managing and governing AI systems that leverage security controls defined in NIST SP 800-53 Rev 5 as well as established governance programs like FedRAMP can help agencies adopt AI more rapidly. Reducing the time and cost burden on agencies and supporting contractors by enhancing existing protocols is critical to ensuring timely deployment of AI systems for supporting the government mission.” 

Maria Roat, Former Deputy Federal CIO, SBA CIO, and Director, FedRAMP PMO. Ms. Roat is a senior information technology and cybersecurity executive with more than three decades of experience driving enterprise-scale digital transformation within the Federal Government. She is recognized as a builder, collaborator, and solutions innovator with the vision, audacity, and drive to lead complex multibillion-dollar technology initiatives.

“Managing risk associated with AI systems is essential to ensuring Government’s ability to improve agency effectiveness and efficiency using next generation AI and Automated Decision Support systems. stackArmor’s systems engineering approach to applying NIST security controls to AI systems provides a reasonable blueprint for AI risk management.”

Richard Spires, Former DHS and IRS CIO. Mr. Spires provides advice to companies and government agencies in strategy, digital transformation, operations, and business development. He previously served as the Chief Information Officer (CIO) of the U.S. Department of Homeland Security (DHS) and as CIO of the Internal Revenue Service (IRS).

“ATO for AI offers government agencies a fiscally prudent pathway to safe and secure AI adoption that builds upon lessons learned upon implementing existing governance frameworks like FISMA and FedRAMP. stackArmor’s approach of operationalizing NIST AI RMF with actionable control implementations can help agencies accelerate safe AI systems adoption without having to retrain thousands of program, acquisition and IT specialists on new governance models for AI.”

Alan Thomas, Former Commissioner, Federal Acquisition Service, GSA. Mr. Thomas is an operating executive and former Federal political appointee with more than 25 years of experience delivering mission-critical programs, championing large-scale digital transformation initiatives, and building deep functional expertise in acquisition and procurement.

“The unique combination of AI-enabled applications on cloud-computing powered services offers a once-in-a-generation opportunity to truly enable a digital-first government. Transforming legacy applications at scale by using accelerators that deliver safe and secure AI-native applications developed by innovative ISVs on FedRAMP accredited cloud service providers can help us dramatically shorten the time and cost of AI adoption.”

Teresa Carlson, transformational industry executive with over 25 years of leadership in modernizing public sector organizations using commercial solutions.

You can access more details about our open and standards-based approach to accelerating AI deployments by augmenting existing risk management controls to align with the NIST AI RMF by visiting our website and downloading our whitepaper.
