As we prepare to welcome a new cohort of students to colleges, concerns continue to swirl around the biggest change to hit education – and indeed the world – arguably since the invention of the internet: artificial intelligence.
Less than three years after the initial release of ChatGPT, universities are finding themselves firmly in its grip. The student who is not using AI is the exception, not the rule. Researchers are using it. Lecturers are using it. I am Trinity College Dublin’s chief academic officer and I can tell you I use it.
I understand the hesitancy, or even the threat, felt by many around this topic. Universities have always prided themselves on being places of discovery, critical thinking and human connection. Our campuses are places where the student comes to experience a valuable amalgam of structured learning and life experience.
Now AI is fundamentally reshaping third-level education in ways that force us to rethink what we teach, how we assess and even why students come to college in the first place.
For generations, higher education has been a passport to opportunity, but for too many and too often it has also been a locked gate. If used wisely, AI has the potential to democratise third-level education in ways that once felt impossible.
Let’s be realistic here: AI will not solve all social inequities overnight. But if used responsibly it could help deliver on a democratic vision already outlined in Europe’s Bologna Process (a 1999 agreement to reform and harmonise higher education across Europe) and in Ireland’s own National AI Strategy.
First, AI can expand our understanding of who counts as a student by supporting lifelong learning and increasing accessibility.
Ireland has already shown leadership in widening participation in education through initiatives such as Springboard+ and the Human Capital Initiative. Yet demand continues to outstrip supply for further and higher education places. A whole range of challenges around affordability, flexibility and personalisation of the education experience remain.
AI can help us address these.
It can provide tutoring that supports personalised learning at scale, meaning future students could receive individualised, tailored mentoring that no overstretched academic could hope to supply.
AI-led automated transcription and translation supports can open doors for students with disabilities or those learning in a second language.
Researchers can – and already do – use AI to speed up literature reviews or to filter large data sets.
Algorithms can streamline timetables, grading tools can free up staff time and chatbots can respond immediately to student queries. All this time saved can be channelled into student supports.
From this perspective, AI represents a transformative opportunity to reimagine third-level education as a universal public good rather than a privilege.
But there is a darker side.
We cannot deny that over-reliance on generative AI risks blurring the line between original thought and machine output, leaving educators scrambling to maintain academic integrity.
This is why ethical considerations are central to the responsible integration of AI in education.
Most universities have by now deployed policies and institutional guidelines requiring the responsible engagement with and transparent acknowledgment of AI use by students and educators.
There are ways to deploy AI safely – and universities are finding these.
Data privacy is another critical issue, with not all users understanding that inputting student or institutional data into AI tools risks exposing sensitive information.
The even larger risk, perhaps, is that AI tools trained on biased or inaccurate data sources may magnify inequalities, giving the illusion of objectivity while entrenching disadvantage. Reliance on skewed training sets can reinforce societal inequalities such as gender or racial bias.
Protecting source data and understanding the implications of its use are essential for mitigating these risks. Explorers in this Brave New World need to navigate the ethical minefield of data. We must always ask ourselves: who created it, who owns it, who profits from it, who uses it and how secure is it? Initiatives such as Trinity’s AI Accountability Lab led by Dr Abeba Birhane will help.
Perhaps the greatest danger in this new landscape is complacency. Universities risk treating AI as just another technology add-on to existing systems. But this is not like moving from log tables to scientific calculators or from overhead projectors to PowerPoint.
AI makes a beeline for, and shines a spotlight on, the very essence of higher education. If learning can be personalised, if knowledge can be generated on demand, then that which remains uniquely human – namely creativity, judgment and empathy – becomes more important, not less.
We have a real opportunity to take a lead here. Instead of rushing to adopt the latest tools from one of a handful of for-profit providers, perhaps we should be asking bigger questions.
How might we use AI to widen access, rather than deepen divides? What guiderails do we need to protect privacy and integrity? How do we equip staff and students not just to use AI but also to leverage it?
Ireland’s National AI Strategy places ethics, trust and inclusion at its centre. At EU level, the AI Act establishes a global standard for safe, transparent deployment. For universities, this means one thing: AI must serve students well, not replace educators.
It must be a tool for equity.
Where AI is concerned, the natural temptation will be to let the market set the pace, with universities scrambling to keep up. I believe that would be a mistake.
Universities should be the place where society debates the values that will shape and underpin technology, rather than the locations where technology is “deployed”.
The use of AI in universities should not be limited to the pursuit of efficiency or compromised by transitory worries about cheating. AI raises opportunities and questions about democracy, equity and the future of knowledge itself.
Let’s embrace these.
Professor Orla Sheils is vice-provost/chief academic officer and deputy president of Trinity College Dublin