Special Reports
A special report is content that is edited and produced by the special reports unit within The Irish Times Content Studio. It is supported by advertisers who may contribute to the report but do not have editorial control.

Unleashing the power of artificial intelligence in the face of new risks

Organisations need a clear data strategy before feeding into LLMs, covering such factors as security, privacy, transparency and regulatory compliance

AI: Precautions must be taken but at the same time care must be taken not to stifle its productive use. Photograph: iStock
AI: Precautions must be taken but at the same time care must be taken not to stifle its productive use. Photograph: iStock

The rapid adoption of artificial intelligence (AI) is understandable given the enormous power of the technology. However, with that power comes a new panoply of risks. The technology depends on accurate data to function; GenAI needs to be trained properly to prevent hallucinations and organisations need to ensure their data is accurate and dependable.

They also need to prevent sensitive data being shared with large language models which may in turn share that data with unauthorised third parties. On top of that, they need to ensure they own or have rights to the data transferred to AI algorithms or used to train GenAI models.

Tony DeBos, global leader for data protection, privacy and responsible AI at IT services and consulting firm Kyndryl, explains that AI model development is different from that of traditional software applications.

“In the past, data was used when building an application to test it and so on, and then thrown away. Data wasn’t that important. Now, if you’re using a data set to test and train an AI model, the trained model goes out with that data set. That’s a new risk.”

READ MORE

Large language models (LLMs) are creating new risks that haven’t been considered so far, he adds. Organisations need to address these risks through responsible AI principles including security, privacy, transparency, bias and so on.

“Models can be biased,” he notes. “You need accountability. If models are making their own decisions, who in the organisation is responsible for that?”

That brings up the issue of safety. DeBos points out that it is becoming easier for organisations to create their own AI solutions using low-code and no-code platforms. “You don’t need to be a programmer to do it, but not all of them really consider safety. It’s not the first thing they think about. That needs to be considered.”

Poor quality data is another risk. “Ensuring data integrity is paramount for the effectiveness of AI models,” says Jackie Hennessy, partner, risk consulting, at KPMG in Ireland. “Organisations can take several key actions to support with this.”

Jackie Hennessy, KPMG: 'Without a coherent data governance strategy, organisations risk developing inaccurate and unreliable AI models'
Jackie Hennessy, KPMG: 'Without a coherent data governance strategy, organisations risk developing inaccurate and unreliable AI models'

Data governance is a critical pillar for the accuracy and usefulness of AI models, she points out. “Effective governance ensures data is accurate, secure, and accessible, while aligning with regulatory frameworks such as the EU’s GDPR, Data Governance Act, or the EU AI Act. Prioritising data governance both helps to mitigate risks such as breaches or compliance fines but equally importantly it unlocks value through integrating data from general analytics or advanced AI. Without a coherent data governance strategy, organisations risk developing inaccurate and unreliable AI models.”

Organisations should consider data collection and cleaning. “Collecting high-quality data requires clear protocols to ensure consistency and relevance – random or unstructured gathering risks introducing bias or irrelevance,” Hennessy explains. “Robust cleaning processes are essential to eliminate errors, duplicates, and inconsistencies that can skew analysis or AI model outputs.”

The other step she recommends is adopting data erasure practices to ensure the most up-to-date data is used, in addition to meeting legal obligations. “Integrating erasure into governance frameworks is a proactive step to ensure data models remain relevant and operate based on current data and trends. By implementing these practices, organisations can ensure their AI assets are trained on accurate and reliable data, leading to more dependable and effective outcomes.”

Dennis Tougas, vice-president, global data security and responsible AI and privacy services practice leader at Kyndryl, agrees. “The first thing we find with a number of companies is that they jump in very quickly to exploit AI, but they don’t have their data properly organised. You need a data strategy. You need to know where it is, validate it, and assure its quality before feeding it into an LLM. As part of that, you need to look at the data to assess its potential for unintended bias.”

A parallel focus on data hygiene practices is required, he adds. “Organisations need to make sure security and privacy are considered before putting data into a model. Feeding data into a model is a bit like pouring a cup of water into a bathtub. It becomes part of the water and separating it out is virtually impossible. Organisations also need to ensure they have legal permission to use data in ways authorised by the data subjects. Sometimes there might be no way to redact the data short of withdrawing the solution.”

Having validated the data, it must be protected to prevent it being polluted or compromised. “Access control is very important. You must prevent unauthorised people doing prompt injections and so on.”

With all these precautions to take, care must be taken not to stifle the productive use of AI. “This requires a balanced approach,” says Hennessy. “Implementing end-to-end oversight through AI frameworks, policies, procedures and controls – from design and training to deployment and monitoring – ensures accountability, mitigates risks and aligns AI systems with ethical and regulatory standards. This fosters trust and consistency throughout their development and use.

“Adaptive security measures should also be considered. Instead of blanket restrictions, dynamic protocols – such as real-time threat detection and encrypted data pipelines – safeguard systems while granting AI the flexibility to operate at scale. This balance keeps innovation agile and secure.”

Using sandbox environments for experimentation can also allow developers to test new ideas and models without risking data security or quality, she adds. “These isolated environments provide a safe space for innovation. Equally, implementing automated tools for data quality and security checks can also reduce the manual burden and allows teams to focus on generating value through their AI development.”

The human element is also important. “Providing training and awareness programmes ensure that all team members understand the importance of data quality and security, and how to integrate these practices creatively,” Hennessy points out. “Educated teams are better equipped to balance security with innovation and productivity.”

Barry McCall

Barry McCall is a contributor to The Irish Times