“The [technology] also wants what every living system wants: to perpetuate itself, to keep itself going. And as it grows, those inherent wants are gaining in complexity and force.” – Kevin Kelly
This blog post is an offshoot of my previous post, “Artificial Intelligence – A Damocles Sword?”, published in December 2019. At that time, governance of AI was pronounced a top priority and the case was made for its urgency. Four years on, a large body of thought papers, open frameworks, regulations and legislation has emerged, showcasing the need for AI governance and providing broad steps to achieve it.
The challenge now is to wade through this vast literature and find ways to operationalize AI governance tailor-made for a particular organization. This post attempts to do just that, in a way that is useful to readers, especially those in the GRC fraternity. It draws on NIST’s AI Risk Management Framework (AI RMF), published in January 2023.
Assessing AI Risks
AI systems are designed to operate with varying levels of autonomy. What are the main risks that come with AI?
The 15 biggest risks of AI, as shown here, are:
- Lack of Transparency
- Bias and Discrimination
- Privacy Concerns
- Ethical Dilemmas
- Security Risks
- Concentration of Power
- Dependence on AI
- Job Displacement
- Economic Inequality
- Legal and Regulatory Challenges
- AI Arms Race
- Loss of Human Connection
- Misinformation and Manipulation
- Unintended Consequences
- Existential Risks
Note that, beyond these universal risks, AI can generate risks unique to the nature of the organization using it.
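To make such a list actionable, each risk can be recorded in a simple register and scored. The sketch below is a minimal, hypothetical illustration and is not drawn from NIST or any other framework: the risk names come from the list above, while the 1–5 likelihood and impact scales, the example scores and the rating thresholds are all assumptions.

```python
# Minimal AI risk register sketch (illustrative only).
# Likelihood and impact are scored 1-5; the rating is their product,
# bucketed into Low / Medium / High. Scales and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    @property
    def rating(self) -> str:
        if self.score >= 15:
            return "High"
        if self.score >= 8:
            return "Medium"
        return "Low"

# Example entries drawn from the list above; the scores are placeholders.
register = [
    AIRisk("Lack of Transparency", likelihood=4, impact=3),
    AIRisk("Bias and Discrimination", likelihood=3, impact=5),
    AIRisk("Privacy Concerns", likelihood=4, impact=4),
]

# Report risks from highest to lowest score.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: {risk.rating} ({risk.score})")
```

A real register would also carry owners, mitigations and review dates, and would sit within the organization’s existing GRC tooling.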
AI Trustworthiness
NIST prescribes the following characteristics of trustworthy AI:
- Valid and Reliable
- Safe
- Secure and Resilient
- Accountable and Transparent
- Explainable and Interpretable
- Privacy-enhanced
- Fair, with harmful bias managed
According to NIST, “A highly secure but unfair system, accurate but opaque and uninterpretable systems, and inaccurate but secure, privacy-enhanced, and transparent systems are all undesirable. A comprehensive approach to risk management calls for balancing trade-offs among the trustworthiness characteristics.”
The key phrase above is “balancing trade-offs.”
AI Risk Management Framework Core
The AI RMF Core prescribed by NIST comprises four functions:
- Govern
- Map
- Measure
- Manage
Govern is designed to be a cross-cutting function that informs and is infused throughout the other three functions. The AI RMF core functions should be carried out after considering multi-disciplinary perspectives, potentially including views from outside the organization.
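To make the cross-cutting role of Govern concrete, here is a loose sketch, not part of the NIST framework itself, in which the Govern function supplies context (risk tolerance, accountable roles, policies) that the Map, Measure and Manage functions consume; every name and value in it is an illustrative assumption.

```python
# Illustrative only: Govern supplies policies and risk tolerance that the
# Map, Measure and Manage functions consume. All names are assumptions.

governance_context = {
    "risk_tolerance": "low for customer-facing decisions",
    "accountable_roles": ["AI risk owner", "model steward"],
    "policies": ["human review of sensitive uses", "model documentation"],
}

def map_context(govern):
    # Map: frame the use case and who is affected, within governed constraints.
    return {"use_case": "credit scoring", "impacted_groups": ["applicants"],
            "constraints": govern["policies"]}

def measure(govern, mapped):
    # Measure: choose metrics and compare them against the governed tolerance.
    return {"metrics": ["accuracy", "fairness gap", "explainability"],
            "tolerance": govern["risk_tolerance"]}

def manage(govern, measured):
    # Manage: assign actions to the roles made accountable by Govern.
    return {"actions": ["mitigate", "monitor", "escalate"],
            "owners": govern["accountable_roles"]}

mapped = map_context(governance_context)
measured = measure(governance_context, mapped)
print(manage(governance_context, measured))
```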
Here are some suggested key steps for AI governance:
- The seven characteristics of trustworthiness should be considered for inclusion in risk metrics.
- Every characteristic should be valued either quantitatively or qualitatively depending on the context, but preferably quantitatively and in percentages (a minimal scoring sketch follows this list). A balance can be reached based on a risk-benefit and cost analysis, or as per regulatory or industry requirements.
- Multi-disciplinary perspectives from both internal and external stakeholders should be considered when assessing the trustworthiness characteristics of AI systems, especially those AI systems with significant socio-economic consequences.
- The categories and sub-categories of each RMF core function should be taken as guidelines and applied while exercising that function.
- Importantly, these guidelines should be applied based on context and not treated as a strict checklist.
- The audit fraternity should treat AI audits as “management audits” or “social audits” and apply those principles, rather than those of financial or internal audit. This applies regardless of which AI component needs to be audited.
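As flagged in the second bullet above, here is a minimal sketch of valuing the seven trustworthiness characteristics quantitatively, in percentages. The weights, example scores and per-characteristic floor are assumptions; in practice they would come from a risk-benefit and cost analysis or from regulatory and industry requirements.

```python
# Illustrative scoring of the seven trustworthiness characteristics.
# Weights, scores (percentages) and the floor are assumptions, not NIST values.

weights = {
    "Valid and Reliable": 0.20,
    "Safe": 0.20,
    "Secure and Resilient": 0.15,
    "Accountable and Transparent": 0.10,
    "Explainable and Interpretable": 0.10,
    "Privacy-Enhanced": 0.15,
    "Fair, with Harmful Bias Managed": 0.10,
}

scores = {  # assessed scores for a hypothetical AI system
    "Valid and Reliable": 92,
    "Safe": 88,
    "Secure and Resilient": 95,
    "Accountable and Transparent": 70,
    "Explainable and Interpretable": 55,
    "Privacy-Enhanced": 80,
    "Fair, with Harmful Bias Managed": 60,
}

MIN_ACCEPTABLE = 65  # per-characteristic floor (assumed)

composite = sum(weights[c] * scores[c] for c in weights)
shortfalls = [c for c, s in scores.items() if s < MIN_ACCEPTABLE]

print(f"Composite trustworthiness score: {composite:.1f}%")
print("Characteristics below the acceptable floor:", shortfalls or "none")
```

The per-characteristic floor reflects NIST’s point about balancing trade-offs: a system that scores highly overall can still be undesirable if, say, it is highly secure but unfair, so a strong composite should not be allowed to mask a weak characteristic.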
A Case Study on the Operationalization of AI Principles
A case study on how Microsoft used an AI ethics committee to govern its AI development is briefly detailed here. In March 2018, Microsoft announced that it was establishing an AI and Ethics in Engineering and Research (AETHER) Committee, led by both the president and the executive vice president of the company’s AI and Research group. By early 2019, AETHER had been expanded to stand for AI, ethics, and effects in engineering and research. AETHER is organized into seven working groups:
- Sensitive Uses to assess the impact of automated decisions on people’s lives
- Bias and Fairness to assess the impact on minority and vulnerable populations
- Reliability and Safety to ensure that AI systems are robust against adversarial attacks
- Human Attention and Cognition to monitor algorithmic attention-hacking and abilities of persuasion
- Intelligibility and Explanation to provide transparency into machine learning and deep learning models
- Human AI Interaction and Collaboration to enhance people’s engagement with AI systems
- Engineering Best Practices to recommend best practices for each stage of the AI system development cycle.
The decision to establish AETHER sends a very clear message to employees, users, clients and partners that Microsoft intends to hold its technology to a higher standard.
Industry-Specific Guidelines Can Strengthen AI Governance
AI carries many benefits and advantages for humankind, but the associated risks need to be managed in a comprehensive manner. Following a comprehensive framework enables organizations to avoid ad-hoc decisions and builds credibility and trust among various stakeholders.
As AI evolves, regulatory bodies are coming out with open frameworks to suit various environments and industries. Specialized bodies like ISACA can complement those frameworks with industry-specific guidelines, drawing on their expertise and pool of knowledge.
Author’s note: The opinions expressed are the author’s own views and do not represent those of the organization or the certification bodies he is affiliated with.