Special Reports
A special report is content that is edited and produced by the special reports unit within The Irish Times Content Studio. It is supported by advertisers who may contribute to the report but do not have editorial control.

Protecting the crown jewels: Data security must adapt to rapidly evolving AI

The reputational, financial and operational costs of a breach can be huge

In parallel with leveraging generative and agentic AI, organisations need to focus on the technology's security. Photograph: iStock
In parallel with leveraging generative and agentic AI, organisations need to focus on the technology's security. Photograph: iStock

Accessing and transferring data comes with inherent risks. Get it wrong and it will come back to bite you as an organisation, perhaps fatally.

“It can not only affect you reputationally but financially and operationally,” says Dennis Tougas, vice-president, global data security, at professional services firm Kyndryl.

He points to some of the swingeing fines that have already been levied on companies around the world by data protection authorities.

Under the EU’s GDPR data privacy rules, such fines can be up to four per cent of total global turnover the preceding year. “For a multi-billion-dollar corporation, that can be literally hundreds of millions, if not billions, of dollars,” he says, adding that regulators have the power to shut down operations too.

READ MORE

Then there is the cost of remedying breaches. He points to US pharmacy chain Rite Aid, which was hit by a multi-million-dollar legal settlement recently arising from a data breach that involved the personal information of more than two million customers.

Furthermore, the US Federal Trade Commission, a regulatory body, prohibited the chain from using the kind of facial recognition analytics it had previously been using, for a period of five years.

As the pace of technological change accelerates and increasingly leverages AI, so too must the internal data safeguarding systems that ensure companies remain in compliance.

“It’s the old saying that ignorance of the law does not constitute a feasible defence for violating it. It’s why it is absolutely vital that enterprises do this in a very thoughtful manner,” says Tougas.

The ways in which a breach can occur are manifold, whether in a hacking incident or internal misappropriation. “It is why data has to be properly governed and protected as a critical asset,” Tougas adds.

Certainly, cyber criminals, who are becoming ever more sophisticated, understand their value. These too come in various forms.

“There is intense cybercriminal activity both from nation states and organised crime entities, as well as what we might call ‘mom and pop’ small breed hackers. What we’re finding now is that, with AI, the attack surface is widening,” says Tougas.

“As with any new technology, it brings great potential value but also significant new risk. Unfortunately, data risk, historically, hasn’t really been talked about that much.”

In addition, what Tougas calls the “multiple vectors of risk” surrounding AI are only now emerging, as cyber criminals look to filch data either to sell or for nefarious purposes of their own.

“We are seeing all kinds of creative attacks, including prompt injection,” he says.

The way to invoke an AI model is to prompt it. By working out specific prompts, asking it a specific question or feeding it some information and query-related material, bad actors have been able to develop techniques that essentially allow them to infiltrate the sequence of its workflow to alter or corrupt a prompt.

“They can enter their own prompt to disguise themselves as a valid user, and basically leverage AI for their own purposes.”

Given the recent rise of agentic AI, in which machines don’t just provide answers but are designed to kick off and run a series of events without human intervention, understanding of the risks and possible repercussions of such hijacks is growing.

“In a given business process or workflow, instead of being human driven, you can actually employ AI agents who do reasoning and make decisions and then execute them, and you can string agents together,” Tougas explains.

Part of Kyndryl’s work is formulating crisis simulations to help clients understand how their AI or agentic AI could be compromised.

“It’s almost like having a rogue quasi-employee in the mix of your business process, and they can cascade impact to other agents, disrupting or changing your entire process and affecting decisions in the process,” says Tougas.

“It’s why you really need to focus on generative and agentic AI security in parallel with leveraging the technology. Numerous papers have been published about how rapidly this technology is set to evolve and impact virtually every aspect of business and personal life, so it’s imperative that people take this seriously.”

Sandra O'Connell

Sandra O'Connell

Sandra O'Connell is a contributor to The Irish Times