All Eyes on AI: Rising Interest, Regulation, and Compliance Requirements

AI is much more than a buzzword these days. It is a full-blown technological revolution commanding attention across industries and sectors. Its surging role is particularly evident in the public sector, where government and federal agencies are flocking to capture the benefits of the emerging technology. Take the Department of State, for example: to automate the time-consuming work of document processing and declassification, the department has launched an AI-driven pilot project that streamlines reviews.

AI Initiatives from The White House and Public Sector 

But as AI gains traction, there is also a rush to get ahead of its challenges. In a talk hosted by the Information Technology Industry Council, Arati Prabhakar, director of the White House’s Office of Science and Technology Policy, discussed an anticipated executive order focused on balancing opportunity with risk. That expectation became reality at the end of October, when President Biden signed the executive order, officially introducing “the most aggressive step by the government to rein in the technology to date,” as Rebecca Heilweil described it at FedScoop.

In addition to addressing concerns about AI, the order outlines how federal agencies should adopt the technology, covering contracting, hiring, and other responsibilities. For instance, it tasks the National Institute of Standards and Technology (NIST) with further developing evaluation methods for AI systems. NIST has since launched a consortium to help achieve this mission and has called on organizations to join. In a recently published press release, NIST explained that the consortium will serve as “a core element of the new NIST-led U.S. AI Safety Institute” and reflects its strategy to “rely heavily on engagement with industry and relevant stakeholders.”

The order also builds on efforts already under way elsewhere. Not long before it was signed, the National Security Agency created the AI Security Center to help guide AI use in national security systems. Meanwhile, the Department of Homeland Security put forth its own policies to steer AI use in law enforcement practices.

stackArmor’s ATO for AI™

It’s clear that adoption isn’t slowing down anytime soon. “The pace at which AI is embedded in commercial applications is breathtaking, and we think it’s just a matter of time where the same will apply to Federal and public sector workloads,” Gaurav “GP” Pal, founder and CEO at stackArmor, told MeriTalk. Regulation and compliance requirements will only grow alongside that adoption. So, to help public sector and government organizations capture the benefits of AI while navigating current and future frameworks, stackArmor has released its Authority to Operate (ATO) for AI™ accelerator. It builds on the services the company already provides to meet FedRAMP requirements, adding a competitive edge attuned to the evolving AI regulatory and compliance environment. “You can enhance and extend what we currently do and incorporate additional security and governance requirements for AI within the existing practices to get out of the gate faster,” explained GP.

