Presented by CoCre8

Winning the cybersecurity war with AI

8 May 2024

AI is changing many industries, but its impact is particularly evident in cybersecurity. On the face of it, AI adds a potent new capability to the already well-stocked armouries of cybercriminal gangs.

AI allows them to automate and schedule attacks, scaling their operations and making them far harder for defenders to contain. Another challenge is that cybercriminals can employ AI to improve malware, equipping it to bypass traditional security measures. This increases the likelihood that attacks are not detected and neutralised in time.

Stephan Gilliland, Head of Information and Cybersecurity at CoCre8, says generative AI (GenAI) gives cybercriminals the capability to take their phishing attacks to a whole new level, using social engineering techniques to dupe unwary users more effectively.

For example, GenAI enables voice and even video phishing. A clerk receiving what seems like an instruction from the CFO to expedite a payment into a certain account is likely to obey it. AI can create a convincing audio facsimile from as little as 90 seconds of the subject’s voice, a truly terrifying capability. Fake video is less advanced at this stage, but all indications are that it will improve. Welcome to the world of the believable deepfake.

“That’s the bad news. The good news is that AI puts the same powerful capabilities into the hands of corporate security teams. Here’s the kicker: like all technologies, AI has the effect of levelling the playing field for the underdogs,” he says. “Technology has enabled smaller companies to compete more effectively with corporate giants; similarly, overstretched and outgunned defence teams now have the capability to compete with the well-resourced cybercriminal gangs which, until now, seem to have had the upper hand.”

Doing the grunt work

For one thing, Gilliland says, defence teams are at an innate disadvantage because they have so many duties to perform, whereas attack teams can be single-minded. Now, defenders can hand the analysis of network traffic and system logs over to AI, detecting threats faster and more efficiently. “AI can also be used to respond to attacks automatically, blocking malicious traffic or isolating infected systems at the same ‘machine speed’ as the attackers, and much faster than any human could do,” Gilliland says.
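To make that detect-and-respond loop concrete, here is a minimal Python sketch: an unsupervised model flags anomalous network flows and triggers an automated block. The flow features and the block_ip() helper are hypothetical stand-ins for whatever feeds and controls a real security operation would use.

```python
# Minimal sketch: unsupervised anomaly detection over network flow
# records, with an automated "machine speed" response. The feature
# set and block_ip() are hypothetical; a production pipeline would
# read from real flow/log sources (NetFlow, syslog, EDR, etc.).
from sklearn.ensemble import IsolationForest
import numpy as np

# Each row: [bytes_sent, bytes_received, duration_s, distinct_ports]
baseline = np.random.default_rng(0).normal(
    loc=[50_000, 60_000, 30, 3], scale=[5_000, 6_000, 5, 1], size=(500, 4)
)

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def block_ip(ip: str) -> None:
    # Placeholder: in practice this would call a firewall or EDR API.
    print(f"[response] blocking {ip} at machine speed")

def inspect_flow(ip: str, features: list[float]) -> None:
    # predict() returns -1 for flows the model considers anomalous.
    if model.predict([features])[0] == -1:
        block_ip(ip)

# A flow moving far more data than the baseline gets blocked automatically.
inspect_flow("10.0.0.42", [5_000_000, 1_000, 600, 40])
```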

An example of the smart use of AI is the Fortitude platform, developed locally in South Africa. It uses powerful GenAI to automate time-consuming but vital tasks: vulnerability assessment, penetration testing, running exploits and brute-force attacks, all performed at lightning speed. “Defence teams can do much more, much more quickly, which is why 63% of IT and security professionals believe that AI will improve corporate cybersecurity, according to a recent Google survey,” he adds. “By taking care of high-volume, repetitive tasks, AI frees up scarce, skilled cybersecurity professionals to focus on what they do best: thinking laterally and formulating strategy. They don’t have to wade through a lot of ‘noise’ in order to find the genuine threats.”

He points to NetApp as another example, in this case using AI to enhance the security of its market-leading storage solutions. NetApp uses AI and machine learning to automate ransomware protection by detecting anomalies in the file system. If an anomaly is detected, the system automatically creates a snapshot, minimising damage and enabling rapid recovery. NetApp also detects anomalies in user behaviour and takes immediate action.
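The detect-then-snapshot pattern can be sketched in a few lines of Python. To be clear, this is not NetApp’s implementation, just an illustration of the idea: a burst of writes whose contents look random (high entropy) is a classic ransomware signature, and create_snapshot() is a hypothetical stand-in for a storage system’s snapshot API.

```python
# Conceptual sketch of detect-then-snapshot; NOT NetApp's implementation.
# Ransomware tends to produce bursts of writes of near-random (encrypted)
# data, so we combine a write-rate signal with an entropy signal.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted data approaches the maximum of 8.0."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def create_snapshot(volume: str) -> None:
    # Hypothetical stand-in for a storage system's snapshot API.
    print(f"[protect] snapshotting {volume} to cap damage and speed recovery")

def on_write_burst(volume: str, writes_per_min: int, sample: bytes) -> None:
    # Thresholds are illustrative; real systems learn a per-volume baseline.
    if writes_per_min > 1_000 and shannon_entropy(sample) > 7.5:
        create_snapshot(volume)

# High write rate plus high-entropy content triggers the snapshot.
on_write_burst("vol_finance", writes_per_min=4_200, sample=bytes(range(256)) * 16)
```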

Looking after the troops

In the pre-AI world, cybersecurity professionals were effectively on a war footing 24/7. Because their skills are scarce (and expensive), they carried a heavy load, constantly under pressure to keep up with a rapidly changing threat landscape. Long hours were the norm, made worse by the responsibility for keeping the organisation safe.

Security environments are typically as heterogeneous as the systems they’re protecting, creating a tsunami of alerts that can prove overwhelming. Alert fatigue is a real challenge for security teams.

“As AI grows in sophistication, it can take on an increasing amount of the workload, which creates a much more positive working environment for humans,” Gilliland says. “It can filter out false positives and prioritise genuine threats for the team to investigate and respond to.”
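Triage of this kind can be illustrated with a short Python sketch: each alert gets a score, likely false positives are suppressed, and the rest are ranked for human attention. The fields and weights here are invented for illustration; a real system would use a trained model fed with far richer context.

```python
# Minimal sketch of AI-assisted triage: score each alert, drop likely
# false positives, and hand analysts a ranked queue. Fields and weights
# are hypothetical; a real system would learn them from labelled data
# and context (asset criticality, threat intel, user history).
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int           # 1 (info) .. 5 (critical)
    asset_criticality: int  # 1 .. 5
    seen_before: bool       # matched a known-benign pattern

def score(alert: Alert) -> float:
    s = alert.severity * 0.6 + alert.asset_criticality * 0.4
    return s * (0.2 if alert.seen_before else 1.0)  # damp known noise

alerts = [
    Alert("ids", 5, 5, False),
    Alert("av", 2, 1, True),
    Alert("waf", 4, 3, False),
]

# Suppress low scores, surface the rest in priority order.
queue = sorted((a for a in alerts if score(a) >= 1.0), key=score, reverse=True)
for a in queue:
    print(f"{score(a):.1f}  {a.source}")
```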

It can also help bridge the skills gap that continues to characterise the industry. One recent study showed that there are some 500 000 positions unfilled worldwide, with the global shortfall likely to hit 3.5 million by 2025. Reasons for this dearth include tough working conditions and an intrinsically difficult job.

AI can reverse this trend

The key to leveraging AI successfully in the fight against cybercrime is not to see it as a way to reduce jobs, but, rather, to make current teams more effective. “CISOs and CIOs need to understand that the power of AI lies in its ability to automate repetitive tasks and craft rapid, automatic responses,” Gilliland says.

“AI is at its best when it’s used to augment human capabilities, not replace them. There’s no doubt that the bad guys will continue to find new ways to use AI; it’s vital that defence teams have the human capability to come up with solutions. AI will give them the space they need.”

Contact Stephan Gilliland on [email protected]
