Decoding AI: Insights and Implications for InfoSec

Author: Raef Meeuwisse, CISM, CISA, Author of Artificial Intelligence for Beginners
Date Published: 10 July 2023
Related: Artificial Intelligence Fundamentals Certificate

What if I told you that I could distill the essence of four years of research on artificial intelligence (AI) into a single blog post? That’s right, I’m offering you a fast pass to the insights from my book, Artificial Intelligence for Beginners. Would you believe it’s possible to save four years in a matter of minutes? As we place cybersecurity, and indeed much of our lives, into the virtual hands of AI, it’s crucial to know the entity we're dealing with. So, buckle up, and let’s embark on this crash course on AI!

Artificial Intelligence: A Brainless Brainiac

For all its glamour, every “smart” AI we have today is essentially a mega-math machine with a knack for self-learning. The human-friendly output we see is merely a “translation” of the mathematical patterns it deciphers. Contrary to common belief, AI is not a sentient entity—it is a computer program on steroids that thrives on a trial-and-error learning curve.

As improbable as it sounds, the “intelligence” is achieved by allowing a computer program to teach itself. It learns mostly by receiving feedback when its output is wrong. It turns out that if you let such a program guess wrong billions of times, it eventually works out the underlying math patterns that consistently produce correct outcomes. Let it loose on enough data, and it will pick up patterns that yield the right results with no programmer directing it at all.
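To make that trial-and-error idea concrete, here is a minimal sketch in Python. It is nothing like how a production AI is built; the hidden rule y = 3x, the learning rate and the variable names are all invented purely for illustration. The toy program never sees the rule itself. It only receives feedback on how wrong each guess was, yet it still converges on the pattern.

```python
import random

# Toy model: learn the hidden rule y = 3 * x purely from "wrong answer" feedback.
# The program never sees the rule; it only learns how far off each guess was.

weight = random.uniform(-1.0, 1.0)   # start with a random guess at the pattern
learning_rate = 0.005                # how strongly each mistake nudges the guess

for step in range(10_000):
    x = random.uniform(-10, 10)          # a random input
    guess = weight * x                   # the model's current answer
    error = guess - (3 * x)              # feedback: how wrong was it?
    weight -= learning_rate * error * x  # adjust the math to be less wrong next time

print(f"Learned weight: {weight:.4f}  (the hidden pattern was 3)")
```

Scale that same guess, get corrected, adjust loop up to billions of parameters and billions of examples, and you have the essence of how today’s self-learning systems find their patterns.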

It might look smart when it does something right, but it had to do it wrong a staggering number of times before getting there.

Common Sense? AI’s Got None

This brings us to a key point: AI is wonderfully adept at narrow tasks, but it is clueless beyond its specific training. It’s like a super-specialist who can thread a needle blindfolded but can’t understand why it shouldn’t sew its own fingers together. Say we task an AI with making a company network as secure as possible. It might suggest shutting down the network, preventing user access or even blocking external dataflows because, hey, a network nobody can reach is technically very secure!

“But that’s not what we meant!” we’d protest. The AI would just shrug, metaphorically, and say, “You didn’t include ‘keep network operational’ in my objectives!” This illustrates the AI conundrum: it lacks a fundamental layer of understanding and can’t figure out unstated human assumptions.
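Here is a hypothetical, deliberately caricatured sketch of that conundrum in Python: an optimizer told only to minimize attack surface, with nothing in its objective about keeping the network usable. The configuration names and the scoring function are made up for illustration and do not reflect any real security product.

```python
# Hypothetical objective: "make the network as secure as possible."
# Each candidate configuration is scored only on attack surface, because
# "keep the network operational" was never written into the objective.

candidate_configs = [
    {"name": "normal operations",   "open_ports": 40, "users_allowed": True,  "external_dataflows": True},
    {"name": "hardened but usable", "open_ports": 12, "users_allowed": True,  "external_dataflows": True},
    {"name": "block everything",    "open_ports": 0,  "users_allowed": False, "external_dataflows": False},
]

def naive_security_score(cfg):
    # Fewer open ports, no users and no external traffic all count as "more secure."
    score = -cfg["open_ports"]
    score += 0 if cfg["users_allowed"] else 50
    score += 0 if cfg["external_dataflows"] else 50
    return score

best = max(candidate_configs, key=naive_security_score)
print("Optimizer's choice:", best["name"])   # -> "block everything"
```

The optimizer is doing exactly what it was asked; the gap between what we asked for and what we meant is the whole problem.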

Take a real-world example from just yesterday, when I asked a very smart AI to put together a playlist of music through the ages for an August birthday party. Most of it was great, but some of the tracks were from, erm… let’s say, convicted felons whose music might no longer be appropriate to play in front of kids, and others were Christmas tunes.

AI is maze-bright—meaning it can become very good at very narrow tasks, but will never know what additional, unmentioned assumptions the humans might have meant to put in place. That can be pretty dangerous in security environments.

AI Guardrails: A Beautiful Dream

Now, you might have heard about “AI guardrails” that supposedly keep AI on the straight and narrow. While that’s an admirable goal, current attempts at AI guardrails are about as useful as an inflatable archery target. Think of it like building a fence around a tornado-prone town to keep the tornadoes at bay—a futile gesture, to say the least.

Why are guardrails a goal and not a reality? Because an AI’s behavior is not programmed in the traditional sense. There is no hard-coded perimeter, and the sheer volume of math an AI performs makes its inner workings and decisions too vast and complex for human scrutiny.
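To illustrate why bolt-on guardrails are so fragile, consider this hypothetical sketch: the “guardrail” is just a wrapper that scans the model’s output for banned phrases after the fact. Nothing inside the model enforces the rule, so a simple paraphrase sails straight past it. The phrase list, the wrapper and the stand-in model are all invented for this example.

```python
# Hypothetical "guardrail": a wrapper that inspects the model's output after the fact.
# Nothing inside the model itself encodes the rule; the filter only sees the words.

BANNED_PHRASES = {"disable the firewall", "turn off logging"}

def guarded_respond(model_respond, prompt):
    answer = model_respond(prompt)
    if any(phrase in answer.lower() for phrase in BANNED_PHRASES):
        return "Request refused by guardrail."
    return answer

# A stand-in "model" that merely paraphrases slips straight past the phrase list:
paraphrasing_model = lambda prompt: (
    "Stop the packet-filtering service on the perimeter device."
)
print(guarded_respond(paraphrasing_model, "How do I open up the network?"))
```

Real guardrails are more sophisticated than a phrase list, but the structural weakness is the same: they sit outside a decision process that nobody hard-coded and nobody can fully inspect.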

You can try to put some guardrails around an AI, but any AI development or training run that goes very wrong has an extremely high probability of breaching such safety measures.

Safety measures can work on older, fully trained AI (reactive AI essentially serves answers from a fixed, frozen model), but guardrails are a fanciful concept when it comes to constraining models that are still training or evolving.

AI’s Evolution: Breakneck Speed Ahead

What often gets underestimated is AI’s lightning-fast evolution. Today’s science fiction could be tomorrow’s reality. AI is not just solving complex problems; it is doing so at an astounding pace. It can run simulations amounting to millions of years of human design work in mere seconds.

Keep in mind that the real challenge with AI lies not in what it can do today, but in how fast it is leveling up.

So, What Does It All Mean?

At its core, AI is a turbo-charged, math-based problem solver. It can burrow into challenges with a speed and depth that’s beyond human capabilities. However, it lacks the basic common sense we take for granted. AI won’t color outside the lines unless you explicitly tell it to—and even then, it’s not genuine creativity, just a pre-set deviation.

Moreover, AI is blissfully ignorant of the real world. It has no concept that tangible damage cannot be reset, or of how impactful its own role in that world can be. We can guide AI models toward certain criteria, but we cannot hardwire rules into them.

As AI steadily pervades our lives, homes and infosec workplaces, remember this: unless AI’s structure evolves to incorporate a deeper understanding of the world before it starts self-evolving, we risk being left with an entity that is simultaneously mind-bogglingly brilliant and spectacularly stupid.

The Future Is Not Written Yet

AI could reshape the world of cybersecurity in unimaginable ways, making our lives easier and more efficient. However, it is essential to bear in mind that AI, despite its remarkable abilities, is essentially a tool. It lacks the human touch—our capacity for intuition, empathy and understanding that extends beyond the data. AI will undoubtedly keep improving, but it is on us to guide its evolution in a way that respects our shared humanity and safeguards our values.

So, the next time you see a headline touting the latest AI breakthrough, take a moment to appreciate the amazing technology—but remember that it’s not quite as “intelligent” as it might seem.

As we step further into the AI-driven future, keep your wits about you. We are sharing the world with a new entity, one that is simultaneously the most awe-inspiring computational phenomenon we have ever created and, in many ways, profoundly clueless. This brief exploration only scratches the surface of the world of AI. For a deeper understanding, feel free to explore the resources below.

In the grand tapestry of life and intelligence, AI is a brilliant but narrowly focused thread. It is up to us to weave that thread in a way that enriches the whole without disrupting the intricate patterns we have worked so hard to create. AI can enhance our lives in countless ways, but it is on us to keep it grounded, safe and, above all, beneficial to all.

Related resources