From the astonishing speed at which ChatGPT learns and responds to the creative, eye-opening images generated by DALL·E and similar tools, artificial intelligence is a transformative force reshaping industries, influencing decision-making processes and even altering how we perceive our own capabilities. The future of AI is still unknown, but it continues to surprise us. With great power, however, comes great risk.
Laws, regulations and standards are taking effect, or are in the pipeline, to address the new realities that AI brings. As the technology and compliance landscape shifts, businesses face new challenges in navigating it and finding a path forward. In its recent white paper, “The Promise and Peril of the AI Revolution: Managing Risk,” ISACA provides thorough guidance to help businesses prepare for this new era, enabling innovation while managing risk.
The white paper provides background and business context on the current state of AI. Beyond the popular ChatGPT, many similar AI tools developed across industries are worth noting and comparing. Many organizations are inclined to “wait for the dust to settle” before formulating an AI strategy. That might seem prudent, but it carries risks of its own. Another common practice is to restrict or ban the use of AI at work altogether. This approach can hand a competitive advantage to market rivals who do use AI, and it leaves organizations behind in their technology development and infrastructure readiness. To resolve these pain points, a risk impact analysis—identifying the risks of AI—paired with a benefit analysis of its rewards can be useful.
The white paper includes a section identifying 10 AI risks organizations will encounter, each explained with current events or examples. Among them are the hot topics on many regulators’ agendas when drafting new AI regulations, such as societal risk, IP leakage and invalidation, and cybersecurity. It is worth noting that some risks, such as skills gaps and overreactions, are just as important but often overlooked. Skills gaps concern not only talent resources but also the overall budget for security and vendors. Organizations should likewise avoid “overreacting favorably toward AI and placing too much trust in unvalidated AI output.”
The second half of the white paper introduces a three-step approach to AI risk management, along with protocols and practices for building an AI security program. It emphasizes that “organizations must adopt a continuous risk approach framework that tests assumptions with frequency and precision to ensure the quality of their AI output.” “Continuous” and “frequency” are the key words here, as AI technology is still evolving, learning and improving. I could not agree more with the white paper’s assertion that it is “important to remember that some risk is healthy and advantageous, while complete risk aversion—particularly when it comes to benefitting from new technology—can carry a high cost.”
In describing the eight protocols and practices for building an AI security program, the white paper steps into the shoes of smaller businesses, showing how they can address AI risk and put controls in place. Trust but verify when working with AI: it is not a calculator but a continuously evolving tool. Bias remains a major concern with AI, which makes involving diverse stakeholders and subject matter experts in design and testing essential to ensure the policies being enforced are safe and ethical. Additional protocols include a cost analysis for the build-vs.-buy decision and segregation of access to AI to help prevent IP leakage and other disclosures. Lastly, as organizations come to depend more on AI tools, they must consider business continuity and alternatives to address the risk of their AI tools ceasing to function.
Overall, this white paper is both informative and practical—a good starting point for any organization facing the “AI or not AI” decision. “As a society, now is the time to step back, reflect, and map out the risks and repercussions of the AI revolution.” And it is time to act on them.