Blog

An Analysis of AI usage in Federal Agencies

Federal Agencies are rapidly deploying and utilization AI/ML technologies to further the mission. This blog attempts to understand the types of AI/ML systems being used by agencies and how best to develop relevant guardrails. OMB’s M-14-10 memo outlines specific requirements that must be met for ensuring Responsible AI deployments. Responsible AI Directives from OMB As part of its guidance to agencies to ensure Responsible AI use as recommended by the NIST AI RMF to maintain AI system and use case inventories, OMB’s guidance M-24-10 is prescriptive, and direct. Its states, in Sections 3-a-iv and 3-a-v: AI Use Case Inventories. Each agency (except for the Department of Defense and the Intelligence Community) must individually inventory each of its AI use cases at least annually, submit the inventory to OMB, and post a public version on the agency’s website. OMB will issue detailed instructions for the inventory and its scope through its

Read More »

Managing Generative AI Risk and Meeting M-24-10 Mandates on Monitoring & Evaluation

OMB’s memo M-24-10 (5c. Minimum Practices for Safety-Impacting and Rights-Impacting Artificial Intelligence) is prescriptive (and timebound): No later than December 1, 2024 and on an ongoing basis while using new or existing covered safety-impacting or rights-impacting AI, agencies must ensure these practices are followed for the AI: D. Conduct ongoing monitoring. In addition to pre-deployment testing, agencies must institute ongoing procedures to monitor degradation of the AI’s functionality and to detect changes in the AI’s impact on rights and safety. Agencies should also scale up the use of new or updated AI features incrementally where possible to provide adequate time to monitor for adverse performance or outcomes. Agencies should monitor and defend the AI from AI-specific exploits, particularly those that would adversely impact rights and safety.   E. Regularly evaluate risks from the use of AI. The monitoring process in paragraph (D) must include periodic human reviews to determine whether

Read More »

Test & Evaluation Techniques for Meeting M-24-10 Mandates to Manage Generative AI Risk

Overview The release of the National Institute of Standards and Technology (NIST)’s AI Risk Management Framework (AI RMF) helped put a framework around how testing would enable organizations to manage and mitigate AI risks. While testing is predominantly considered a core part of model development, the NIST AI RMF emphasizes the importance of continuous testing and monitoring of AI. The validity and reliability for deployed AI systems are often assessed by ongoing testing or monitoring that confirms a system is performing as intended. Measurement of validity, accuracy, robustness, and reliability contribute to trustworthiness and should take into consideration that certain types of failures can cause greater harm NIST AI RMF §3.1 OMB’s memo M-24-10 goes into detail about the expectations around AI safety testing. Section 5c of the memo (5c. Minimum Practices for Safety-Impacting and Rights-Impacting Artificial Intelligence) has laid out the minimum practices for AI risk management. These are:

Read More »

Is it time to enforce an Authority-to-Operate (ATO) for Healthcare Organizations?

The Change Healthcare security breach has impacted over 94% of hospitals as reported by the American Health Association (AHA). A cascading set of events was unleashed starting with the Feb 21, 2024 announcement of the data breach at Change Healthcare requiring nearly $2B  in advance payments severely impacting nearly 900,000 physicians, 33,000 pharmacies, 5,500 hospitals and 600 laboratories. The security attack is reported to have been caused by a security vulnerability in software provided by ConnectWise and used by Change Healthcare. Security and cybersecurity incidents in healthcare are not new. However, what makes the Change Healthcare security breach standout is the widespread impact and the need for a forceful government response that included the Whitehouse. In many ways this cyberattack mirrors the Colonial Pipeline incident a few years ago where Citizens faced gas shortages serving as a wakeup call to policy makers that cybersecurity incidents can be disruptive to the

Read More »

GSA Small Business Office and FedRAMP PMO looking for Small Business Cloud Solutions

General Services Administration (GSA), Office of Small and Disadvantaged Business Utilization (OSDBU) and The FedRAMP PMO are hosting a webinar on March 21, 2024 to provide guidance to small business CSPs in becoming FedRAMP authorized. Small businesses are encouraged to attend and register for this free event. The topics that will be covered include: Gain insight into the benefits of partnering with FedRAMP Understand The FedRAMP Authorization process Identify FedRAMP Resources for CSPs Time: Thursday, March 21, 2024 at 1:00 PM in Eastern Time (US and Canada) The registration link is available here. Small businesses with cloud service offerings that cater to federal agencies with innovative SaaS or PaaS solutions should seek FedRAMP accreditation. What is FedRAMP? The Federal Risk and Authorization Management Program, or FedRAMP, is a government-wide program that provides a standardized approach to security assessment for commercial cloud service providers (CSPs). Any cloud system hosting federal must be

Read More »

FedRAMP ATO Prioritization for Generative AI Cloud Solutions

The US Government is continuing to move rapidly to ensure US competitiveness in the area of Artificial Intelligence (AI). The FedRAMP Program Management Office (PMO) published the Emerging Technology Prioritization Framework (ETPF) in January 2024. The ETPF is designed to help accelerate the availability of FedRAMP accredited Gen AI cloud solutions for federal agencies and users. The FedRAMP PMO is soliciting feedback and comments due by March 11, 2024 on the the proposed prioritization framework. Please read our blog to learn more about FedRAMP. US Government agencies including DOD and Federal Civilian agencies spent nearly $3B on AI solutions based on a 2023 report from Stanford University. This spending is expected to grow rapidly as agencies and public sector organizations start production deployments in the near future. To ensure the safe and secure deployment of AI technologies and to drive the development of standards, the Department of Commerce announced the creation

Read More »

stackArmor Announces Participation in Department of Commerce Consortium Dedicated to AI Safety

**stackArmor will be part of the leading AI stakeholders to help advance the development and deployment of safe, trustworthy AI under new U.S. Government safety institute** MCLEAN, Va.–February 8, 2024–Today, stackArmor announced that it has been selected by the Department of Commerce to join the nation’s leading artificial intelligence (AI) stakeholders to participate in a Department of Commerce initiative to support the development and deployment of trustworthy and safe AI. Established by the Department of Commerce’s National Institute of Standards and Technology (NIST), the U.S. AI Safety Institute Consortium (AISIC) will bring together AI creators and users, academics, government and industry researchers, and civil society organizations to meet this mission. “Understanding that adopting AI in a safe and secure manner is a challenge for public sector agencies due to evolving guidance, standards for risk, and a shortage of resources, it’s of the utmost importance to offer proven solutions to the

Read More »

stackArmor’s ThreatAlert ATO® Accelerator Supports NIH AIM-AHEAD Program

Solution enables underrepresented communities greater access to AI/ML research capabilities MCLEAN, Va.–(BUSINESS WIRE)–stackArmor, a leading provider of cloud, security and compliance acceleration solutions for meeting FedRAMP, FISMA and CMMC 2.0, today announced it has been supporting Dr. Paul Avillach, one of the Multiple Principal Investigators of the National Institutes of Health (NIH)’s Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program. The AIM-AHEAD program mission is to enhance the participation of underrepresented communities in the development of AI/ML models. The program improves the capabilities of this emerging technology, beginning with electronic health records (EHR) and extending to other diverse data to address health disparities and inequities. A lack of diversity of data and researchers in the AI/ML field runs the risk of creating and perpetuating harmful biases in its practice, algorithms and outcomes. “We have been privileged to support Dr. Avillach’s vision to reduce the cost

Read More »

GAO Report Details FedRAMP ATO Challenges and Costs

The US Government Accountability Office (GAO) released a report on The Federal Risk and Authorization Management Program (FedRAMP®). The 37 page report provides highly relevant insights to both agencies and commercial organizations pursuing FedRAMP accreditations or ATOs. Highlights from the report are presented below. Key Challenges Faced by Agencies and Cloud Service Providers (CSP) Receiving timely responses from stakeholders: Agencies and CSPs reported that they had issues with receiving timely responses from stakeholders throughout the authorization process. Sponsoring CSPs that were not fully prepared Agencies reported that CSPs did not fully understand the FedRAMP process and lacked complete documentation. Lacking sufficient resources: Agencies reported that they lacked the resources (e.g., funding and staffing) needed to sponsor an authorization. Meeting FedRAMP technical and process requirements: CSPs reported that they had to update the infrastructure to meet federal security requirements. Finding an agency sponsor: CSPs reported that finding an agency sponsor was difficult. Engaging

Read More »

Understanding AI Risk Management – Securing Cloud Services with OWASP LLM Top 10

Welcome back to the era of GenAI, where the world remains captivated by the boundless potential of artificial intelligence. However, the proliferation of AI does not preclude us from considering the new risks it poses. As you may recall, I have been supporting numerous initiatives around AI Risk Management as part of our ATO for AI offering and recently explored the ethical risks surrounding AI using the IEEE CertifAIEd framework. As part of the NIST AI RMF, we need to continue to adopt new and emerging risk management practices unique to AI systems. Today, we’re going to be exploring the other type of risks: the security vulnerability vectors that are unique to AI systems. Today we will be examining the most common of those vectors, called the OWASP LLM Top 10 and how to protect yourself while building solutions using AWS Native AI services. What is the OWASP LLM Top 10?

Read More »

Accelerating Safe and Secure AI Adoption with ATO for AI: stackArmor Comments on OMB AI Memo

Updated on 5/24/2025 with new developments related to this topic. On May 22, 2025, NIST published a blog stating their intent to leverage existing control frameworks to protect AI systems as opposed to creating new frameworks. stackArmor’s position published in 2023 to use AI Overlays to develop guardrails for AI systems has been accepted as an approach that will be implemented by NIST. Ms. Clare Martorana, U.S. Federal Chief Information Officer, Office of the Federal Chief Information Officer, Office of Management Budget. Subject: Request for Comments on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence Draft Memorandum Ms. Martorana, We appreciate the opportunity to comment on the proposed Memo on Agency Use of Artificial Intelligence. As the CEO and founder of a small and innovative solutions provider, stackArmor, Inc., headquartered in Tysons VA, I applaud your efforts to foster transparency and solicit ideas and comments.  We

Read More »