

California’s AI RAMP or FedRAMP for AI? The urgent need for an actionable and enforceable US safety and security framework for AI

California State Bill 1047 was passed today by the Assembly and now heads to the Senate and the Governor’s desk for consideration. SB 1047 is remarkable for the specificity of its governance requirements and penalties for developers of AI models. The proposed Act clearly spells out which models are covered and establishes a governance model that includes designating the “Government Operations Agency,” mandating a third-party audit, and providing explicit guidance on change management and reporting.

The proposed law provides for flexibility by allowing the developer to choose the appropriate framework: “(i) In fulfilling its obligations under this chapter, a developer shall consider industry best practices and applicable guidance from the U.S. Artificial Intelligence Safety Institute, National Institute of Standards and Technology, the Government Operations Agency, and other reputable standard-setting organizations.”

The Act clearly outlines the responsibilities of the developer of a covered AI model: “(d) A developer of a covered model shall annually reevaluate the procedures, policies, protections, capabilities, and safeguards implemented pursuant to this section.”

Further, it imposes a reporting obligation: “(g) A developer of a covered model shall report each artificial intelligence safety incident affecting the covered model, or any covered model derivatives controlled by the developer, to the Attorney General within 72 hours of the developer learning of the artificial intelligence safety incident or within 72 hours of the developer learning facts sufficient to establish a reasonable belief that an artificial intelligence safety incident has occurred.”
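For teams building compliance tooling, that 72-hour clock is straightforward to automate. Below is a minimal sketch, assuming a Python-based pipeline; the function names and the REPORTING_WINDOW constant are illustrative, not terms from the bill.

```python
from datetime import datetime, timedelta, timezone

# SB 1047's reporting window runs from when the developer learned of
# the incident (or of facts establishing a reasonable belief that one
# occurred). All names below are illustrative.
REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(discovered_at: datetime) -> datetime:
    """Latest time a report to the Attorney General is due."""
    return discovered_at + REPORTING_WINDOW

def is_overdue(discovered_at: datetime, now: datetime | None = None) -> bool:
    """True if the 72-hour window has elapsed without a report."""
    now = now or datetime.now(timezone.utc)
    return now > reporting_deadline(discovered_at)

# Example: an incident discovered at noon UTC on June 1 must be
# reported by noon UTC on June 4.
discovered = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
print(reporting_deadline(discovered))  # 2024-06-04 12:00:00+00:00
```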

The Bill goes even further and places obligations on organizations operating a computing cluster to implement know-your-customer (KYC) protocols.
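The bill does not prescribe what a KYC record for a compute customer must contain. As a sketch under that caveat, a provider might gate cluster access on a record like the following, where every field is an assumption drawn from common KYC practice:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ClusterCustomerRecord:
    """Illustrative KYC record for a computing-cluster customer.
    SB 1047 does not prescribe a schema; all fields are assumptions."""
    customer_name: str
    identity_verified: bool                 # identity checked before access
    business_purpose: str                   # stated purpose for the compute
    payment_source: str                     # traceable payment information
    ip_addresses: list[str] = field(default_factory=list)
    access_granted_at: datetime | None = None

    def ready_for_access(self) -> bool:
        # Gate cluster access on completed identity verification
        # and a stated business purpose.
        return self.identity_verified and bool(self.business_purpose)
```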

A new compliance knot in the making?

US corporations are already battling the costs and complexities of complying with multiple security and compliance frameworks, including the DOD Cloud Computing Security Requirements Guide, FedRAMP, and CMMC 2.0. Guidance around reciprocity, equivalence, and presumption of adequacy amounts to band-aids that help address duplicate compliance burdens. The emergence of SB 1047 serves as a wake-up call for policymakers to think about a harmonized, standards-based regulatory framework that does away with the current “voluntary” approach to AI risk management. The governance model proposed in SB 1047 looks remarkably similar to FedRAMP.

An Authority to Operate for AI

The Federal Risk and Authorization Management Program (FedRAMP) is a United States federal government-wide compliance program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. The use of NIST security controls, coupled with an enforcement mechanism to implement and monitor security and safety through the FedRAMP Authorization Act, presents a well-established mechanism for enforcing risk management. By extending a NIST controls framework such as NIST SP 800-53 with AI overlays that cover additional Trustworthy AI characteristics like Safety, Bias, and Explainability, we can enhance existing mechanisms and protocols to adequately cover AI risk; a sketch of what such an overlay might look like follows below. This approach is likely to be cheaper and less burdensome than devising a completely new set of requirements.

What is currently lacking is an executive order or legislative guidance that directs the creation of a governance model with teeth to ensure enforcement. Absent such action, more states and organizations will continue to step in to address the gap. The likely result is “compliance chaos,” which can be avoided by directing the expansion of existing risk management models like FedRAMP to cover AI.

Learn more by visiting https://www.stackArmor.com/AI
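As a concrete sketch of what such an AI overlay might look like in tooling form: the control IDs below are real NIST SP 800-53 identifiers, but the characteristic mappings and supplemental guidance text are illustrative assumptions, not an official NIST or FedRAMP artifact.

```python
# Hypothetical AI overlay extending NIST SP 800-53 controls with
# Trustworthy AI characteristics. Control IDs are real 800-53
# identifiers; the mappings and guidance text are illustrative.
AI_OVERLAY = {
    "RA-3": {  # Risk Assessment
        "ai_characteristics": ["safety"],
        "supplemental_guidance": "Assess models against "
                                 "hazardous-capability thresholds.",
    },
    "SI-4": {  # System Monitoring
        "ai_characteristics": ["safety"],
        "supplemental_guidance": "Continuously monitor deployed models "
                                 "for AI safety incidents.",
    },
    "CA-2": {  # Control Assessments
        "ai_characteristics": ["bias", "explainability"],
        "supplemental_guidance": "Include bias testing and "
                                 "explainability evidence in "
                                 "third-party assessments.",
    },
}

def controls_for(characteristic: str) -> list[str]:
    """List overlaid 800-53 controls covering one AI characteristic."""
    return [cid for cid, entry in AI_OVERLAY.items()
            if characteristic in entry["ai_characteristics"]]

print(controls_for("safety"))  # ['RA-3', 'SI-4']
```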
