Artificial intelligence is taking cyberattacks to a whole new level. Happily, it’s helping organisations defend themselves too.
“The use of AI in cyber defences is providing practitioners with new and more innovative ways to help keep track of attack vectors, spot new breaches and deploy countermeasures against such attacks,” says Neil Redmond, director of PwC Ireland’s cyber security practice.
There are a number of ways cybersecurity teams have put AI to use, he says, including automated threat detection, where organisations can train technology to detect and recognise anomalous behaviour. “In tandem with this, we can train a large language model (LLM) to continuously learn about the evolving nature of threats,” he adds.
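At its simplest, the kind of anomaly detection Redmond describes means flagging activity that deviates sharply from a learned baseline. The sketch below is purely illustrative (the login counts and the three-standard-deviation threshold are invented for the example); real deployments use far richer features and trained models:

```python
from statistics import mean, stdev

# Hypothetical hourly login counts for one account over a quiet period.
baseline = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9]

def is_anomalous(observation, history, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations from the mean."""
    mu = mean(history)
    sigma = stdev(history)
    return abs(observation - mu) > threshold * sigma

print(is_anomalous(10, baseline))   # False: typical activity
print(is_anomalous(250, baseline))  # True: a burst of logins worth investigating
```

The same idea scales up: replace the single count with many behavioural features and the fixed threshold with a model that is continuously retrained as threats evolve.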
But the speed at which all this is taking place is unprecedented.
“Gone are the days of threats not evolving in a short time frame. Now we can process large data sets quickly, detect patterns and then adapt our defences in real time,” says Redmond.
This includes the ability to detect and defend against malware. “The usual way to defend against malware is with signature detection, where we can look for the ‘tag’ that identifies individual software and plan an approach when we find it. With AI we can map and track the evolution of malware and then defend against it using an automated threat detection response that is using AI,” he explains.
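Signature detection, as Redmond outlines it, amounts to checking a file’s fingerprint against a database of known malicious ones. A minimal sketch (the “signature database” here is a single invented sample) also shows its weakness, and why behaviour-tracking approaches are needed: change one byte and the signature no longer matches.

```python
import hashlib

# Hypothetical signature database: SHA-256 digests of known malicious files.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def matches_signature(file_bytes: bytes) -> bool:
    """Return True if the file's digest appears in the signature database."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MALWARE_HASHES

print(matches_signature(b"malicious payload v1"))  # True: known sample
print(matches_signature(b"malicious payload v2"))  # False: a tiny change evades the signature
```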
Because cyber criminals are using AI-powered tools too, their attacks are becoming more potent.
One of the main tools they use is a denial-of-service attack, where a bad actor floods a website or service with so much bogus traffic that genuine users can no longer access it.
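One common first line of defence against that kind of flooding is rate limiting: each client gets a fixed budget of requests per time window, and anything beyond it is rejected. The sketch below is a simplified illustration (the limits, window and IP address are invented); production systems layer this with traffic scrubbing and distributed filtering.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per client within a sliding `window` of seconds."""

    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client -> timestamps of recent requests

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # Drop timestamps that have fallen outside the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # reject: client has exhausted its budget
        q.append(now)
        return True

limiter = RateLimiter(limit=3, window=1.0)
print([limiter.allow("203.0.113.9", now=t) for t in (0.0, 0.1, 0.2, 0.3)])
# [True, True, True, False]: the fourth request in one second is refused
```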
But organisations can protect themselves. “The simplest and most effective approach is employee training. Train employees on best practices for cybersecurity, which include recognising phishing, setting strong passwords and patch management, for example. These approaches will help individuals and organisations stay vigilant against AI threats,” says Redmond.
Continuously monitor internal networks for anomalous activity, have a well-practised incident response plan that is based on regular cybersecurity assessments, and implement effective access management regimes to limit access to critical files, he adds.
PwC’s GenAI lab focuses on identifying its clients’ critical data, classifying it, and developing an access management system that gives employees the right level of access to the right data at the right time.
“This is similar to the traditional approach of securing data in ICT networks for new regulations such as the EU’s NIS 2 (Network and Information Systems 2) and DORA (Digital Operational Resilience Act),” he points out.
“These are the traditional fundamentals of cybersecurity and can be applied to AI threats as they emerge. In other words, organisations and individuals already have the basic knowledge and just need to apply it effectively in the age of AI.”
Now is the time to do it, suggests Vaibhav Malik, partner, cybersecurity and resilience at Deloitte. “There is a considerable amount of interest in using LLMs for cybersecurity defence, but AI-powered defence is still in its early stages,” says Malik.
“LLMs have the ability to connect disparate pieces of information and show early promise in cybersecurity tasks such as detecting vulnerabilities, analysing malware and making software more secure.”
Initial research indicates that LLMs can identify phishing emails by analysing the text for malicious intent and comparing it to known phishing cases, for example.
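Long before an LLM is involved, the same idea can be approximated with simple lexical cues: score a message by how many patterns drawn from known phishing campaigns it contains. The cue list and email below are invented for illustration; an LLM-based filter would weigh context and intent rather than raw keyword matches.

```python
import re

# Hypothetical lexical cues drawn from known phishing campaigns.
SUSPICIOUS_PATTERNS = [
    r"verify your account",
    r"urgent",
    r"click (here|the link)",
    r"password (expires|reset)",
]

def phishing_score(text: str) -> int:
    """Count how many known phishing cues appear in the message."""
    lowered = text.lower()
    return sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, lowered))

email = "URGENT: your password expires today. Click here to verify your account."
print(phishing_score(email))  # 4: every cue matched, worth flagging for review
```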
There is ongoing research into how generative AI could enhance cybersecurity defences through the implementation of predictive models that identify threats and facilitate incident response. “However, there is a need for detailed data sets and techniques for adapting LLMs to cybersecurity defensive and offensive areas, with particular challenges in fine-tuning and training models due to the complexity of security-related tasks,” he points out.
On the plus side, one interesting study suggests that LLM chatbots can mimic and automate human interaction with scammers, with a view to wasting attackers’ time, he adds. And while highly capable hacking groups are certainly experimenting with AI, researchers have as yet seen little evidence that it is generating major benefits for them.
“There is growing interest among bad actors in using LLMs for generating fake content, running social engineering campaigns, assisting malware development, creating more sophisticated phishing emails, and allowing criminal actors to assume the identities of individuals or organisations, raising the risk of identity theft. However, AI-based tools are not fully autonomous and require some kind of human intervention,” says Malik.
In the meantime, cyber defenders must stay aware of the changing threat landscape, strategically adapting to outmatch attackers, “and continue investing in the right technology and stealthier detection methods to increase cyber resilience”, he cautions.
Rest assured, criminals are using GenAI to become more effective, says Dani Michaux, head of cyber security practice at KPMG Ireland. “The ransomware threat is increasing and becoming faster. This presents fundamental challenges for security teams and organisations in dealing with ransomware attacks that unfold 100 times faster than before,” says Michaux.
“GenAI is being used to create more convincing phishing emails and produce new ones at an extremely rapid pace. Again, it is not a new threat, but it is potentially more potent.”
But even in the face of rapidly advancing AI, people remain your best defence.
“Part of the solution is to deploy AI to bolster cyber defences, but the human element of the equation remains critically important,” says Michaux.
“Humans are the ones who are best able to detect phishing emails and prevent ransomware attacks. Ultimately, the key to reducing risk is to bring human critical thinking and scepticism to bear. There is no substitute for keeping humans in the loop.”