Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans or animals. For example, suppose you have a machine at home that learns how much sugar you prefer in your coffee and adds it for you automatically once you have given it enough consistent examples of your preference. Those examples form a training data set, and the way the machine learns and remembers is through AI algorithms.
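To make those two terms concrete, here is a minimal Python sketch (the observations are invented values): the list of past sugar amounts plays the role of the training data set, and a trivial averaging "algorithm" learns and remembers the preference.

    # Toy example: the "training data set" is a list of past observations,
    # and the "algorithm" is a trivial model that learns your preference.
    observations = [1.5, 2.0, 2.0, 1.5, 2.0]  # teaspoons of sugar, invented values

    def train(data):
        # Learn the preference as the average of all observed examples.
        return sum(data) / len(data)

    preferred_sugar = train(observations)
    print(f"Add {preferred_sugar:.1f} teaspoons of sugar")  # -> Add 1.8 teaspoons

Real systems replace the average with far richer models, but the division of labor is the same: the data supplies the examples, and the algorithm turns them into remembered behavior.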
In recent years, as artificial intelligence/machine learning (AI/ML) has been applied more and more widely, AI ethics has become a hot topic, driven by real legal and public concerns. For example, in 2019 a major financial institution was investigated by regulators over an AI algorithm that allegedly discriminated against women by granting men larger credit limits than women on one of its cards. Another example is facial recognition algorithms built by big companies that showed potential gender-classification biases traceable to the training data sets available.
As the topic becomes more critical for corporations and end users alike, let's spend some time considering it from a control and governance perspective.
What are AI ethics?
One review of 84 AI ethics guidelines identified 11 clusters of principles: transparency; justice and fairness; non-maleficence; responsibility; privacy; beneficence; freedom and autonomy; trust; sustainability; dignity; and solidarity. The European Commission's High-Level Expert Group on AI (AI HLEG) and the U.S. National Institute of Standards and Technology, among others, have also created standards for building "trustworthy AI." Following the April 2019 publication of the "Ethics Guidelines for Trustworthy AI," the AI HLEG's June 2019 recommendations cover four principal subjects: humans and society at large, research and academia, the private sector and the public sector.
How can we manage AI ethics risks?
Just as with cloud risk assessment or audit, a shared responsibility model needs to be defined for the AI/ML industry. Some risks must be covered on the vendor side and some on the client side. Vendors should provide risk/audit reports similar to SSAE 18 attestations, while clients should establish an AI ethics risk program to manage their internal risk and audit scope. Small and mid-size companies that cannot afford their own AI ethics risk program can turn to third-party solutions or outsourcing services to fill the gap.
Let’s look at AI ethics from a process, people and technology standpoint:
Process perspective
AI audits will not try to identify or resolve data pattern flaws, incomplete algorithms or missing logic, problems that even data scientists and AI/ML developers still struggle with. Instead, a framework or guidance should be leveraged to define what scientists and developers should consider during the model design and testing phases, while auditors manage risk at the AI/ML development lifecycle, change management and entity levels. The goal is to validate that the process was followed by responsible parties qualified to perform the tasks, and that sufficient due diligence was applied at the design, testing and modification stages. For a specific AI/ML audit, risk coverage can be enhanced by validating the model's data inputs and outputs to gain reasonable assurance that the model serves its intended purpose.
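As one hedged illustration of what such input/output validation might look like, the sketch below computes group-level approval rates from a model's exported decisions and applies the informal "four-fifths" screening rule. The file name, column names and threshold are illustrative assumptions, not prescribed by any audit standard.

    # Sketch of an input/output validation an auditor might run on model decisions.
    # Assumes a CSV with illustrative columns "approved" (0/1 model output) and
    # "gender" (protected attribute); the 0.8 threshold mirrors the informal
    # four-fifths rule and is only an example policy.
    import csv
    from collections import defaultdict

    def approval_rates(path):
        totals, approvals = defaultdict(int), defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["gender"]] += 1
                approvals[row["gender"]] += int(row["approved"])
        return {g: approvals[g] / totals[g] for g in totals}

    def four_fifths_check(rates, threshold=0.8):
        # Flag any group whose approval rate falls below `threshold`
        # times the best-off group's rate.
        best = max(rates.values())
        return {g: r / best >= threshold for g, r in rates.items()}

    rates = approval_rates("decisions.csv")  # hypothetical export of model outputs
    print(rates, four_fifths_check(rates))

A check like this does not prove a model is fair, but it gives the auditor a repeatable, evidence-based test over the model's actual outputs rather than its internals.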
People perspective
If the people behind the AI cannot address ethical concerns, the AI itself, including its algorithms and data sets, will not address them either. For example, if the people who design the AI/ML do not act ethically or do not know how to identify potential bias in the training data, the algorithm may carry hidden ethical flaws and the training data sets may not be sufficient to reach the desired target. As such, the AI ethics risk program should establish well-designed AI ethics awareness training and be responsible for delivering it to all important stakeholders, such as data scientists, developers, executives, the sourcing/vendor management team and end users.
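One concrete skill such training can teach is screening a training data set for obvious representation gaps before any model is built. Below is a minimal sketch; the "group" field and the 10 percent floor are illustrative assumptions rather than an accepted standard.

    # Sketch: flag under-represented groups in a training set before training.
    from collections import Counter

    def representation_report(records, key="group", floor=0.10):
        # Return each group's share of the data and whether it meets the floor.
        counts = Counter(r[key] for r in records)
        total = sum(counts.values())
        return {g: (n / total, n / total >= floor) for g, n in counts.items()}

    sample = [{"group": "A"}] * 90 + [{"group": "B"}] * 10  # invented data
    print(representation_report(sample))
    # -> {'A': (0.9, True), 'B': (0.1, True)}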
Technology perspective
As AI/ML is still in the adoption stage and each model targets specific business pain points, the tooling in this area is still very much exploratory. It is recommended that the high-tech industry develop basic automated testing solutions that cover rudimentary AI ethics risks, so baseline standards can be enforced through systematic solutions.
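What such a basic automated test might look like is sketched below: an ethics check wired into an ordinary unit test suite so it runs on every build. The demographic parity metric, the stand-in predictions and the 0.8 threshold are all illustrative assumptions.

    # Sketch of an automated AI ethics regression test run on every build.
    import unittest

    def demographic_parity_ratio(predictions, groups):
        # Ratio of the lowest group's positive-outcome rate to the highest group's.
        rates = {}
        for p, g in zip(predictions, groups):
            n, pos = rates.get(g, (0, 0))
            rates[g] = (n + 1, pos + p)
        ratios = [pos / n for n, pos in rates.values()]
        return min(ratios) / max(ratios)

    class EthicsSmokeTest(unittest.TestCase):
        def test_parity_above_threshold(self):
            # Stand-in outputs; in practice these would come from the model under test.
            preds = [1, 0, 1, 1, 1, 0, 1, 1]
            groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
            self.assertGreaterEqual(demographic_parity_ratio(preds, groups), 0.8)

    if __name__ == "__main__":
        unittest.main()

Embedding a check like this in the build pipeline turns an ethics principle into a systematic, repeatable control rather than a one-time review.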
Still an emerging technology
Even though AI/ML concepts have been discussed for many years, the field still counts as an emerging technology given how quickly practical adoption is unfolding in the real world. As we explore this area together, we can hopefully pool ideas, foster more discussion and drive innovation forward.
Editor’s note: Learn more on the topic from our recent LinkedIn Live session on AI and through ISACA’s Certified in Emerging Technology (CET) credential.