Maximizing Usage and Trust of AI: Insights from ISACA China Hong Kong Panel
Author: ISACA Now
Date Published: 27 September 2024
Read Time: 7 minutes

Editor’s note: The ISACA China Hong Kong Chapter held its annual conference on 13 September 2024 with a theme of “AI in the Age of Digital Transformation: Risks, Strategies, and Governance.” The following is a summary of a panel discussion at the event.

In a world increasingly dependent on information technology (IT) and operational technology (OT), we are often asked—implicitly or explicitly—to trust AI systems. When it comes to maximizing the usage of and trust in AI, regulators, senior management of AI providers, developers and auditors all play crucial roles in ensuring that AI systems and their outcomes are responsible, fair, ethical, explainable, reliable and governable. Establishing that usage and trust was the focus of this panel discussion.

The expert panel, whose companies protect enterprises from different perspectives—data protection, digital trust, and security consultancy and assurance—shares its views below. The panel consists of:

  • Steve COAD: Steve, who flew in from Melbourne, Australia, brings over 25 years of experience in IT vendor leadership across the Asia-Pacific region. He is currently the Asia-Pacific Regional Vice President (RVP) for Druva, a company specializing in data protection and resiliency. Steve has led operations in this region for three years.
  • Anant DESHPANDE: Anant is the Regional Vice President for India & ASEAN at DigiCert Inc., one of the world’s largest Digital Trust providers. Their PKI solutions are used by enterprises and government agencies worldwide. Anant is passionate about using technology to solve real-world business problems. He arrived a couple of days earlier from his office in Mumbai.
  • Mike LO: Mike is the Regional Director & Team Lead at Wizlynx Group. He is deeply rooted in cybersecurity and consistently passionate about sharing knowledge in the cyber field. A frequent public speaker, Mike is devoted to promoting the CyberVerse—the cybersecurity universe—to individuals everywhere. He holds certifications including CISA and CISSP and is a Certified Trainer for the Certificate of Cloud Auditing Knowledge (CCAK).
  • Welland CHU: Welland is the Secretary and Vice President (Certification) of the ISACA China Hong Kong Chapter. He had the privilege of moderating this expert panel.

Question #1: What new features does AI bring to your line of business?

Steve: Modern cyber defense has shifted from mere protection to embracing identification, resiliency and recovery. In Druva's case, we feed generative AI built on ChatGPT-4 into our help and support portal to automate responses, improve customer satisfaction and reduce support cases by up to 50%. We also use machine learning to power real-time advanced analytics by developing patterns over time; we use these patterns to look for Unusual Data Activity (UDA) in a customer's backups. We can then use this information to help the customer identify whether there is cyber activity and to understand its scope.
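
To make the idea of Unusual Data Activity detection more concrete, the sketch below builds a statistical baseline from a customer's backup history and flags backups that deviate sharply from it. This is a minimal illustration of the general technique, not Druva's actual implementation; production systems typically train machine-learning models over many more signals, and the feature names and thresholds here are assumptions.

```python
# Minimal sketch of Unusual Data Activity (UDA) detection on backup metadata.
# Illustrative only: the feature, thresholds and statistics are assumptions,
# not Druva's actual implementation.
import statistics

# Hypothetical history: files deleted per daily backup for one customer.
deletions_history = [15, 10, 22, 18, 12, 25, 19, 14, 21, 17]

baseline = statistics.mean(deletions_history)
spread = statistics.stdev(deletions_history)

def is_unusual(deleted_today: int, sigma: float = 3.0) -> bool:
    """Flag a backup whose deletion count deviates sharply from the baseline."""
    return abs(deleted_today - baseline) > sigma * spread

# A new backup showing mass deletions, a common ransomware signature.
latest_deletions = 30_000
if is_unusual(latest_deletions):
    print("Unusual Data Activity detected: flag this backup for review")
else:
    print("Backup activity looks normal")
```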

Anant: The benefits of AI are tremendous. From drug discovery to seismic modeling to process automation, the potential of AI is vast. Compared to earlier disruptive technologies like cloud and mobile, the benefits of AI are almost instantaneous. There is a lot of pressure to adopt AI, especially Generative AI. However, we must guard against unintended consequences.

Mike: Our assessment service primarily relies on the technical expertise of our team. While AI may not play a significant role in performing the assessments themselves, it can be a valuable tool for streamlining our preparation work, such as drafting scripts or writing short code.

Question #2: What is one topic you are most excited and/or concerned about regarding AI?

Anant: AI is here not only to stay but to transform the world as we know it. The combination of AI with newer technologies like quantum computing can be massively disruptive, to say the least.

Mike: To ensure AI remains a beneficial tool and not a threat, we need to establish guidelines that minimize the risk of it becoming a liability. Developing a regulatory framework will help maintain a trustworthy relationship with AI.

Steve: Previous generational revolutions in IT, such as the internet, mobility and cloud computing, took 10-15 years to be fully adopted, but with AI this timeframe could be accelerated. And as with those previous revolutions, the strongest use cases are still unheard of in the early-adopter phase. So, I am most excited to see how use cases develop and how they have a positive impact on our lives. It is an exciting time to be in IT. From the risk perspective, while we are using AI to build up our defenses against cyberattacks, attackers are using AI to make their attacks more plausible and sophisticated, imitating users and finding vulnerabilities. It could become an AI arms race between the good guys (us) and the bad guys. Still, with the power of cloud computing at our fingertips, we definitely hold an advantage over the criminals.

Question #3: Trust in AI: Are we there yet?

Welland: We’ve heard a lot of good things about AI. In reality, there is a gap between the data scientists who build the AI models, the senior management who is ultimately responsible for the results of AI adoption, and the regulators and auditors who set the policies and ensure adherence. So, where are we in terms of establishing trust in AI?

Mike: AI does help accelerate our work, but trust still needs to be established. We must exercise due diligence as gatekeepers and continuously train the AI with our insights to create a more reliable environment.

Steve: I think there is a lot of awareness that AI is coming but as it is still in its infancy in terms of real-world use, there is also a lot of misunderstanding around AI. So, there is hype about its capability and some fear of it. So, no, I don’t think we are there yet. But with gradual adoption and more real use cases, I think trust will be built.

Anant: Digital trust is fundamental to digital transformation. There can be no conversation about digital transformation without securing digital infrastructure, and the same maxim applies to AI. Given the vast canvas that AI operates in, it is imperative to embed trust into the AI fabric. For instance, in this age of deepfakes, how can we inject provenance into content to help establish, if not its authenticity, then at least its originality? For example, a visual marker on a video could inform the viewer whether the video has been altered and, if so, show the trail of changes.
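
One simplified way to attach provenance to content is to sign a hash of it with the publisher's private key, so a viewer can verify the origin and detect alteration. The sketch below is only an illustration of that idea, not DigiCert's product or the full C2PA-style manifest schemes used in practice; the key handling and helper function are assumptions.

```python
# Minimal sketch of content provenance: sign a hash of the content so a viewer
# can verify its origin and detect alteration. Illustrative only; real schemes
# (e.g., C2PA manifests anchored in a PKI) are far richer than this.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The publisher's signing key (in practice issued and anchored through a PKI).
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video = b"...original video content..."
provenance_signature = private_key.sign(hashlib.sha256(video).digest())

def is_unaltered(content: bytes, signature: bytes) -> bool:
    """Recompute the content hash and check it against the publisher's signature."""
    try:
        public_key.verify(signature, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

print(is_unaltered(video, provenance_signature))                 # True: original
print(is_unaltered(video + b" tampered", provenance_signature))  # False: altered
```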

Question #4: How would the responsible adoption of AI enhance the outcomes of users’ businesses?

Steve: A good example would be around improved threat hunting capabilities with the use of AI. Later this year we will be introducing a feature that uses AI models to proactively find threats based on its learning. A natural next step beyond this could be using AI for predictive threat hunting.

Anant: Organizations must, at the most basic level, ensure the security of the AI tools they deploy. For example, with AI tools, millions of lines of legacy software code can be rewritten quickly, driving enormous efficiencies and cost savings. However, as organizations leverage such capabilities, they must remain vigilant about vulnerabilities and threats that can infect the code through the use of open-source libraries.
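
As one concrete way to exercise that vigilance, a build pipeline can check each pinned open-source dependency against a public vulnerability database, such as OSV.dev, before AI-rewritten code ships. The sketch below is a simplified assumption of such a check, not any specific vendor's tooling; the package pins and error handling are illustrative only.

```python
# Minimal sketch: check pinned open-source dependencies against the public
# OSV.dev vulnerability database before releasing AI-assisted code changes.
# Package pins and handling are simplified for illustration.
import json
import urllib.request

PINNED_DEPENDENCIES = {"requests": "2.19.0", "numpy": "1.26.4"}  # example pins

def known_vulnerabilities(name: str, version: str) -> list[str]:
    """Return advisory IDs recorded in OSV.dev for a PyPI package version."""
    query = json.dumps({
        "package": {"name": name, "ecosystem": "PyPI"},
        "version": version,
    }).encode()
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]

for package, version in PINNED_DEPENDENCIES.items():
    advisories = known_vulnerabilities(package, version)
    if advisories:
        print(f"{package}=={version}: known advisories {advisories}")
    else:
        print(f"{package}=={version}: no known advisories")
```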

Mike: Because there isn’t yet a common global framework or standard for AI, not all AI environments are 100% trustworthy. We need to maintain a healthy level of skepticism before we can fully trust it. Ultimately, senior management holds the responsibility for AI integration. AI has the potential to revolutionize business processes and drive organizations to new levels of success.

Question #5: Final Word: At the conclusion of the panel, what is the one piece of advice you would give to the audience on maximizing the usage and trust in AI?

Steve: As I mentioned earlier, we are likely entering an AI arms race. We must embrace the use of AI to protect ourselves from increasingly rampant and sophisticated cyberattacks on our valuable data assets.

Anant: Organizations must not only embrace but also promote the use of these technologies. As they do so, it is imperative to integrate people and processes into this triad. Explaining the benefits and risks of these technologies, as well as providing regular training for users, is of utmost importance.

Mike: There is a Chinese saying: “Trust the people you work with without doubt, or do not work with them at all.” In the context of AI, it becomes: “Approach AI and use it with due diligence until you have fully established your trust.”

The panelists welcome further discussions on AI and cybersecurity with the ISACA community. They can be contacted at: Steve Coad steve.coad@druva.com, Anant Deshpande Anant.Deshpande@digicert.com, Mike Lo mike.lo@wizlynxgroup.com, and Welland Chu welland.chu@thalesgroup.com.
