
Expert Sound Off: Google's Secure AI Framework for Responsible and Secure Deployment of AI

Google is introducing the Secure AI Framework (SAIF), a conceptual framework aimed at establishing industry security standards for building and deploying AI technology responsibly. SAIF incorporates security best practices and addresses risks specific to AI systems. It emphasizes expanding strong security foundations, extending detection and response capabilities, automating defenses, harmonizing platform-level controls, adapting controls for faster feedback loops, and contextualizing AI system risks. Google aims to foster industry support for SAIF, collaborate with organizations to assess and mitigate AI security risks, share threat intelligence insights, expand its bug hunter programs, and deliver secure AI offerings with partners. The goal is a secure and trustworthy AI ecosystem for all stakeholders. Here is what a few leading security industry experts said about this news from Google:

Patrick Harr, CEO at SlashNext, a Pleasanton, Calif.-based anti-phishing company:

"As one of the leaders in AI advancements, Google is taking the necessary first steps to foster a culture of security for AI. As organizations try to take advantage of the benefits of AI, they are realizing the potential dangers. The most important takeaway is the need for a thorough security protocol when using AI-generated programs. As we progress through these uncharted waters, we will undoubtedly see more security tools and recommendations to mitigate the risks."

John Bambenek, Principal Threat Hunter at Netenrich, a San Jose, Calif.-based security and operations analytics SaaS company:

"We are only just getting started thinking about this and we’re drawing analogies on existing cyber security disciplines. For instance, having bug bounty programs makes sense if you’re talking about software applications, but in AI, we don’t even really know what penetration testing truly looks like.

The fact is, we are making it up on the fly, and we’re just going to have to revise and figure things out. In that sense, putting some of the stuff out there is a good first step because at least it gives us a starting point to figure out what works and what does not."

Sounil Yu, Chief Information Security Officer at JupiterOne, a Morrisville, North Carolina-based provider of cyber asset management and governance solutions:

"As the original creator of transformers, Google is well positioned to speak on the concerns associated with the safe use of AI technologies. The SAIF is a great start, anchoring on several tenets that are found in the NIST Cybersecurity Framework and ISO 27001. What is needed next is the bridge between our current security controls and those that are needed specifically for AI systems. Many of the challenges that are presented in the SAIF have patterns that are similar to threats against traditional systems. To ensure rapid adoption of the SAIF, we will need to find ways to adapt existing tools and processes (e.g., bug bounty) to fit the emerging needs instead of having to implement something entirely new. The primary difference with AI systems that makes the SAIF particularly compelling and necessary is that with AI systems, we won't have many opportunities to make mistakes.

AI safety is an extremely important principle to consider at the earliest stages of designing and developing AI systems because of potentially catastrophic and irreversible outcomes. As AI systems grow more competent, they may perform actions not aligned with human values. Incorporating safety principles early on can help ensure that AI systems are better aligned with human values and prevent potential misuse of these technologies. Having a robust safety framework with corresponding measures from the start can make AI systems more trustworthy and dependable."

Piyush Pandey, CEO at Pathlock, a Flemington, New Jersey-based provider of unified access orchestration:

"The risks that the SAIF is hoping to mitigate have strong similarities to those that IT, audit, and security teams face when protecting business application - data extraction, malicious inputs, sensitive access, to name a few.

History doesn't repeat itself, but it often rhymes. Just as Sarbanes-Oxley (SOX) legislation created a need for separation-of-duties (SOD) controls for financial processes, it's evident that similar types of controls are necessary for AI systems.

SOX requirements were quickly applied to the business applications executing those processes, and as a result, controls testing is now its own industry, with software solutions and audit and consulting firms helping customers prove the efficacy and compliance of their controls. For SAIF to become relevant (and utilized), controls will need to be defined to give organizations a starting point for securing their AI systems and processes.

Business leaders looking to use SAIF as a springboard for their AI governance programs should lean heavily on their IT, audit, and security teams for best practices and ways to define and enforce access controls."


###
