When Everything Old is New Again: How to Audit Artificial Intelligence for Racial Bias

Author: Ellen M. Hunt
Date Published: 6 December 2019

You may not know it, but artificial intelligence (AI) has already touched you in some meaningful way. Whether approving a loan, moving your resume along in the hiring process, or suggesting items for your online shopping cart, AI touches all of us, and in some cases the consequences are far more serious than another item in that cart.

As this technology becomes more widespread, we are discovering that maybe it is more human than we would like. AI algorithms have been found to exhibit racial bias when used to make decisions about the allocation of health care, criminal sentencing and policing. In its speed and efficiency, AI has amplified and put a spotlight on the human biases that have been woven into the black box. (For a deeper dive into AI and racial bias, read the books Automating Inequality, Weapons of Math Destruction, and Algorithms of Oppression: How Search Engines Reinforce Racism.)

As auditors, what is the best approach to AI? Where and how can we bring the most value to our organizations as they design and implement AI? Auditors need to be part of the design process to help establish clear governance principles and clearly documented processes for the use of AI by their organizations and their business partners. And because AI is not static but forever learning, auditors need to take an agile approach to continuous auditing of the implementation and impact of AI to provide assurance and safeguards against racial bias.
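To make "auditing for racial bias" concrete, consider one test an auditor might run against a model's decision log: the four-fifths (80 percent) disparate impact ratio long used in US fair-employment analysis. What follows is a minimal sketch, not part of any framework cited here; the function and the group labels are hypothetical illustrations.

    # A minimal, hypothetical sketch of one bias check an auditor might run.
    from collections import defaultdict

    def disparate_impact_ratio(decisions):
        """decisions: iterable of (group, approved) pairs, approved is bool.
        Returns (ratio, per-group approval rates); the ratio compares the
        lowest group approval rate to the highest."""
        totals = defaultdict(int)
        approvals = defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            approvals[group] += int(approved)
        rates = {g: approvals[g] / totals[g] for g in totals}
        return min(rates.values()) / max(rates.values()), rates

    # Illustrative data: group_a approved 2 of 3 times, group_b 1 of 3.
    ratio, rates = disparate_impact_ratio([
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ])
    if ratio < 0.8:  # the four-fifths rule threshold
        print(f"Potential adverse impact: ratio={ratio:.2f}, rates={rates}")

A ratio below 0.8 does not prove discrimination, but it is a reasonable trigger for deeper review and for the governance questions discussed next.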

Design and Governance: “In Approaching the New, Don’t Throw the Past Away”
In the United States, we like to think that the impact of slavery ended with the Civil War. It didn't. We also want to believe that the landmark US Supreme Court case of Brown v. Board of Education gave everyone access to the same education. It didn't. Title VII of the Civil Rights Act of 1964 was passed to stop employment discrimination. It didn't. Nonetheless, these "old" concepts of fairness and equality are still valid and need to be incorporated into the new AI: first at the design and governance level, and then at the operational level. As the auditor, you should be asking: What are the organization's governance principles regarding the use of AI? A starting place may be to suggest that your organization adopt the OECD Principles on AI.

Do these principles apply only to the organization, or also to its third parties and other business partners? How do these principles align with the organization's values and code of conduct? What risks are associated with uses of AI that are not aligned with these principles? Conducting impact assessments to create bias impact statements can help build out these principles. (See Model Behavior: Mitigating Bias in Public Sector Machine Learning Applications for eight specific questions that auditors can ask during the design phase to reduce bias in AI.) Other resources to consider are After a Year of Tech Scandals, Our 10 Recommendations for AI; Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms; and Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability.

Implementation and Impact: “Put it on Backwards When Forward Fails”
The greatest challenge with auditing AI is the very nature of AI itself: we do not fully understand how the black box works, and the decisions it made yesterday may not be the same ones it makes today. For looking at implementation and impact, a few frameworks have emerged (see ISACA's Auditing Artificial Intelligence and the IIA's Artificial Intelligence Auditing Framework: Practical Applications, Part A and Part B). To see how others have approached this challenge, it can be helpful to look at the numerous research projects in the public sector. Regardless of the methodology used, because AI is always learning, an agile approach that provides for continuous auditing will be required to provide assurance against racial bias.
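Because the model keeps learning, a point-in-time test is not enough; a continuous audit reruns the same checks on every new batch of decisions and keeps the evidence. Here is a hedged sketch of that idea, reusing the hypothetical disparate_impact_ratio check from earlier; the batch structure and threshold are assumptions for illustration only.

    # Hypothetical sketch: rerun the same bias check each audit period.
    def continuous_bias_audit(decision_batches, threshold=0.8):
        """decision_batches: iterable of (period, decisions) pairs, where each
        decisions value feeds the disparate_impact_ratio check sketched above."""
        findings = []
        for period, decisions in decision_batches:
            ratio, rates = disparate_impact_ratio(decisions)
            if ratio < threshold:
                # Retain evidence for the audit file rather than just alerting.
                findings.append({"period": period, "ratio": ratio, "rates": rates})
        return findings

The design choice worth noting is that the audit stores findings per period: a drifting model may pass in one quarter and fail in the next, and the trend itself is audit evidence.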

Editor’s note: For a forward-looking view of AI in the next decade, see ISACA’s Next Decade of Tech: Envisioning the 2020s research.