Managing AI’s Transformative Impact on Business Strategy & Governance: Strategies for CISOs

Author: Richard Marcus, CISO at AuditBoard
Date Published: 14 June 2024

Editor’s note: The following is a sponsored blog post from AuditBoard.

The potential benefits of artificial intelligence (AI) are clear, and the adoption trends prove that it’s here to stay. Adoption of generative AI, however, is far outpacing governance and policy management. An ISACA poll showed that while only 28% of organizations expressly permit generative AI use, 41% said employees were using it regardless — and another 35% weren’t sure. Only 10% had a formal, comprehensive generative AI policy, and more than one in four had neither a policy nor plans to create one.

Much of the discussion surrounding generative AI’s transformative impact focuses on use cases and benefits. Organizations seeking to realize such benefits, however, must grasp how AI use will radically reshape strategy and governance. During a recent AuditBoard webinar, a distinguished panel of tech executives generously shared insights in these areas: Allison Miller, Cartomancy Labs Founder/Principal and former Reddit CISO; Jim Routh, Saviynt Chief Trust Officer; and Paul Vallée, Tehama Technologies Founder/CEO. Below are the top takeaways for helping organizations get a better handle on AI risks, governance, use and implementation.

AI Risks: Understanding the Layers

Though risk tolerance obviously varies across organizations, every organization must be mindful of the risks created by generative AI use. There’s no shortage of great articles (like this one) detailing AI’s manifold risks, which include more universal concerns such as data privacy, cybersecurity, legal and regulatory challenges, misinformation, bias, ethics and others. But generative AI compounds these challenges with complicated questions around risks such as equity, overreliance, accountability, explainability (i.e., the capacity to explain how AI makes decisions and creates conclusions) and more. The seriousness of these risks is underpinned by fast-increasing AI regulatory activity globally.

AI Controls Design: Start With Principles

As with disruptive technologies of years past, a primary mission for CISOs will be designing controls around AI use. Fortunately, existing control frameworks typically include controls that can be readily deployed across all vectors.  

Design principles are a great place to begin. As Routh put it, “Security by design, privacy by design, safety by design… we just need to figure out how to frame the questions raised by AI — things like equity and explainability — with these same principles.” For example, privileged access management (PAM) controls can help create evidence trails (e.g., data provided to AIs, user interactions) while strengthening cybersecurity and ensuring the “human in the loop,” such that humans leveraging AI remain responsible for actual decision-making.
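To make the evidence-trail idea concrete, below is a minimal Python sketch of a logging wrapper around an LLM call. It assumes the OpenAI Python client’s chat-completions interface; the logger name, log fields, model choice and approval flag are hypothetical placeholders for whatever your PAM and logging stack actually provides.

```python
# Minimal sketch of an audit-trail wrapper around LLM calls. The OpenAI client
# usage follows its published chat-completions API; the log fields and the
# human_approved flag are illustrative, not a prescribed schema.
import json
import logging
from datetime import datetime, timezone

from openai import OpenAI  # assumes the official openai Python package

audit_log = logging.getLogger("ai.audit")
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_llm(user_id: str, prompt: str, model: str = "gpt-4o") -> str:
    """Send a prompt to the LLM and record who asked what, and what came back."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content

    # The evidence trail: timestamp, user, data provided to the AI, and output.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "model": model,
        "prompt": prompt,
        "response": answer,
        "human_approved": False,  # flipped only after a person signs off
    }))
    return answer
```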

CISOs can also leverage a fast-growing knowledge base around AI threats, such as MITRE ATLAS’s database of potential adversarial tactics against AI-enabled systems and the Berryville Institute of Machine Learning’s architectural risk analysis of LLMs. Said Vallée, “If you can understand the nature of a threat, you can design controls to counter it.”

AI Detection and Monitoring: Three Vectors

“So much technology I see clients using is starting to get AI built in. It might be a smaller task to figure out which technologies won't bring AI into the organization,” said Miller. As use cases proliferate, it becomes increasingly critical to understand how and where AI is being used. Routh pinpointed three vectors in which organizations are likely to find AI usage:

  1. Public large language models (LLMs). Employees may be using public LLMs such as ChatGPT, GPT-4, Falcon, Claude 3, Gemini, Llama 2 and others.
  2. LLMs in build pipelines. Developers can access ~1,000 special-purpose open-source LLMs that may be integrated into software your organization builds.
  3. LLMs in Software as a Service (SaaS) applications. Hundreds of SaaS applications with embedded LLMs are already in production, and that number is growing rapidly.

Fortunately, organizations have several options for products that detect and monitor AI usage in these areas. As Miller explained, “There are commercial and open source products that can be used in all three vectors to — at minimum — identify LLM usage. In some cases, products can offer full traceability on uses; in others, they can do filtering on both prompts and outputs.” Many products offer data loss prevention capabilities for public LLM usage.
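As a rough illustration of prompt and output filtering, the Python sketch below screens text against a few regular-expression patterns before a prompt leaves the organization. The patterns are deliberately crude and purely illustrative; a production control would rely on one of the commercial or open-source products described above and apply the same screening to model outputs.

```python
# A minimal sketch of prompt/output filtering in the spirit of the DLP
# capabilities described above. The patterns and blocking behavior are
# illustrative only; a real deployment would use a dedicated DLP engine.
import re

# Hypothetical patterns for data that should never reach a public LLM.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US Social Security numbers
    re.compile(r"\b\d{13,16}\b"),                  # likely payment card numbers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # hard-coded API keys
]


def screen_text(text: str) -> list[str]:
    """Return all pattern matches found in a prompt or an LLM output."""
    return [m.group(0) for p in SENSITIVE_PATTERNS for m in p.finditer(text)]


def filter_prompt(prompt: str) -> str:
    """Block prompts containing sensitive data before they leave the organization."""
    hits = screen_text(prompt)
    if hits:
        raise ValueError(f"Prompt blocked: {len(hits)} sensitive match(es) found")
    return prompt
```

The same screen_text check can be applied to LLM responses before they are displayed or stored, covering both directions of the exchange.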

AI Implementation Recommendations

Every organization’s AI journey will be different. Risk tolerances and attack surfaces vary, so organizations must assess risks relative to strategy and decide where they will and won’t take risks. Those decisions help drive both governance and training. Vallée’s recommendations:

  1. Agree on principles — then policies. Vallée said, “Principles are easier to get consensus on than prescriptive policies.” What general constraints are appropriate around how you want to use AI? Then, formulate policies to guide application.
  2. Develop an intake process (e.g., a cross-functional team evaluates employee-developed use cases, assessing risk in the context of principles and risk appetite; a rough sketch of such an intake record appears after this list).
  3. Evaluate control requirements necessary to manage project risk.
  4. Design additive controls within the existing controls framework.
  5. Plan staged, iterative implementation, starting with a proof-of-concept enabling validation of both benefits and controls.
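By way of illustration, here is a hypothetical Python sketch of what a structured intake record (recommendation 2) and its proof-of-concept gate (recommendation 5) might look like. Every field, score and threshold is invented; a real process would encode your organization’s own principles and risk appetite.

```python
# Hypothetical intake record for a proposed AI use case. All fields,
# scores and thresholds are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class AIUseCaseIntake:
    submitter: str
    description: str
    data_classification: str   # e.g., "public", "internal", "restricted"
    vector: str                # "public LLM", "build pipeline", or "SaaS"
    human_in_the_loop: bool
    risk_scores: dict[str, int] = field(default_factory=dict)  # 1 (low) to 5 (high)

    def overall_risk(self) -> int:
        """Use the worst individual score, so one severe risk can't be averaged away."""
        return max(self.risk_scores.values(), default=5)

    def approved_for_poc(self, risk_appetite: int = 3) -> bool:
        """Gate the staged rollout: a proof of concept proceeds only within appetite."""
        return self.human_in_the_loop and self.overall_risk() <= risk_appetite
```

Taking the worst score rather than an average is one possible design choice here: it prevents a single severe risk (say, restricted data in a public LLM) from being diluted by otherwise low-risk answers.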

While all of the panelists highlighted the critical importance of organization-wide AI training, Miller foresees a particular need in product development, since AI is an “awesomely powerful technology, and people will want to take product offerings to the next level.” Miller suggested, “Developers and product teams need to start thinking like threat modelers. Not just thinking through functional design and the ‘happy path,’ but also, how can these things go wrong, and how can we design around them?”

Approach with Curiosity and Humility

AI adoption is an iterative learning process. Routh’s call to action encapsulates our present mission: “Our job today is to get on the journey, and recognize we don't have all the answers. So use the technology at every opportunity, and put boundaries in place that allow you to learn. Learning how to use innovative technology isn’t possible without failure. When we create innovative technology, we break stuff. Then we learn, modify and pivot.”
