As AI grows more powerful, responsible AI has become an increasingly hot topic. Simply put, responsible AI is concerned with unlocking AI’s potential while managing its risks. According to PwC Ireland risk assurance partner Keith Power, it looks at how AI systems can be developed and deployed in ways that are ethical, transparent and aligned with human values.
“As AI systems increasingly influence crucial aspects of life, their impact on individuals and communities can be profound,” he says. “Without responsible AI practices, there is an increased risk of bias, privacy infringements or causing harm to society. By ensuring transparency, fairness and accountability, responsible AI mitigates these risks and fosters a more inclusive and ethical future.”
The principles underpinning responsible AI have been debated for as long as AI has been around, Power continues. “However, these have become more formalised over the last decade as the prevalence of AI use increased. Today responsible AI consists of a comprehensive framework of practices, tools and governance models addressing ethical principles including fairness, transparency, accountability, privacy, security and safety. Adoption of a responsible AI framework offers organisations a practical way to operationalise AI ethics and to build trust, ensure compliance and drive sustainable innovation.”
In the view of Erik O’Donovan, head of digital economy policy with Ibec, to adopt responsible AI practices, businesses should educate decision-makers, staff and stakeholders about AI opportunity and responsibility.
“Embed practices according to your organisation’s role in the AI value chain,” he advises. “Facilitate human oversight and strong data governance. Collaborate with the AI ecosystem to keep abreast of developments. There are several resources available for implementing responsible AI practices, including the European ‘ALTAI’ assessment tool for trustworthy AI; public sector guidelines on AI use; and services from CeADAR, Ireland’s national centre for AI.”
Responsible AI is important for businesses for a number of reasons, reputation being among the most important. “If a business cannot show it is using AI responsibly, people may not trust it,” says Forvis Mazars director David O’Sullivan.
O’Donovan agrees: “It is important in safeguarding human health, safety and rights in the development and deployment of AI. Businesses care about it as it supports trust, corporate values and compliance.”
“Responsible AI allows organisations to demonstrate their practical application of ethical values in a transparent manner,” says Power. “This also engenders trust among stakeholder groups which is critical to ensure high rates of AI adoption, which in turn is essential to reap the benefits of investment in AI initiatives.”
It also helps to address new risks. “AI use within an organisation introduces new risks and, by disrupting existing business processes, also renders existing controls potentially no longer fit for purpose,” Power explains. “Adherence to AI regulations, such as the EU AI Act, will address some, but not all, of these risks. Responsible AI allows organisations to look at risks holistically and to identify, understand and therefore manage and mitigate the diverse set of risks associated with AI initiatives across the enterprise.”
Even the best-intentioned businesses are likely to come under pressure as a result of market forces, however. “There are significant societal and human risks around AI which are likely to materialise,” says Liam McKenna, partner in the consulting practice at Forvis Mazars, who sounds a note of caution: “It is too early and too optimistic to say things will be grand. They could very well not be grand. We see organisations putting in place AI policies which are aspirational initially, but then a use case arises where they can make money, and they start to question the policy. Everyone has a policy until a money-making opportunity comes along.”
That makes it critically important for policies to be in tune with the overall objectives of a business from the outset.
“It’s critical for businesses to define their overall strategic aims for adopting AI, ensuring that their AI initiatives are aligned with broader organisational goals while promoting ethical, transparent, and responsible AI practices,” says Power. “Building trust at all stages of the AI journey is essential to creating long-term value and safeguarding the organisation’s reputation.”