Leaving Cert shows we will have to learn to live with algorithms

Grading model should alert us that algorithms will impact the shape of society

“For every digital algorithm shaping our lives today, there was a pre-digital human equivalent – the Facebook and Twitter newsfeeds are doing similar jobs to newspaper editors.” Photograph: Denis Charlet/AFP via Getty Images

You don’t need to teach the Leaving Cert class of 2020 about algorithms. As digital natives, they probably have a more intuitive grasp than the rest of us of the role algorithms play in everyday life. Algorithms already decide what content shows up in their news feeds, what videos TikTok thinks they might like and what Google thinks they are searching for.

This week much of their academic future has been decided by the Leaving Cert calculated grades, shaped by an algorithmic model developed by the Department of Education over the summer and changed substantially just last week. Because of this change, this cohort of citizens have found themselves the subject of a natural experiment we’re inadvertently running at a country-wide level. The last-minute change, removing a school’s historical results from the algorithm, has inflated and altered the distribution of points around the country, narrowing the gap between the most disadvantaged schools and the rest.

Leaving Cert results day is always a big event, but 2020 in particular held lessons and insights for the rest of us too. It was a demonstration of how algorithms can impact our individual lives and, in doing so, change the shape of society.

It is also a case study on the need for transparency, the value of oversight and the impact of public discourse around these algorithms.


Set of instructions

An algorithm, in this instance, is just a set of instructions used to make a decision. The Facebook newsfeed algorithm, for example, will take the thousands of videos, images or news articles it could show you, then decide which 10 will appear in your newsfeed the next time you open the app.

To make this decision, most algorithms first make a prediction. We predict you’ll like this video the most, so it appears on your YouTube homepage. We predict you would have got this grade, so we give you these Leaving Cert points.
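To make that pattern concrete, here is a minimal sketch of a “predict, then decide” recommender. The scoring rule, item details and feed size are invented for illustration; they are not drawn from any real platform’s system.

```python
# A minimal, purely illustrative "predict, then decide" recommender.
# The scoring rule, item fields and feed size are assumptions, not any real platform's model.

def predicted_interest(user, item):
    """Prediction step: score how likely the user is to enjoy this item (0 to 1)."""
    shared_topics = user["interests"] & item["topics"]
    return len(shared_topics) / max(len(item["topics"]), 1)

def build_feed(user, candidate_items, feed_size=10):
    """Decision step: rank every candidate by predicted interest and keep only the top few."""
    ranked = sorted(candidate_items,
                    key=lambda item: predicted_interest(user, item),
                    reverse=True)
    return ranked[:feed_size]

user = {"interests": {"music", "football"}}
candidates = [
    {"title": "Guitar lesson", "topics": {"music"}},
    {"title": "Cookery demo", "topics": {"food"}},
    {"title": "Match highlights", "topics": {"football", "sport"}},
]
print([item["title"] for item in build_feed(user, candidates, feed_size=2)])
# ['Guitar lesson', 'Match highlights']
```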

Although the inner workings of modern machine-learning algorithms may require a PhD to understand, the basic premise is nothing new. For every digital algorithm shaping our lives today, there was a pre-digital human equivalent – the Facebook and Twitter newsfeeds are doing similar jobs to newspaper editors. YouTube and TikTok are deciding what videos to show you just like the director of programmes at RTÉ or Virgin Media might do.

The massive scale of Instagram or YouTube might seem new, but the conversations we’re having now are similar in many ways to those we had at the advent of radio and television.

I don’t think, for example, that Mark Zuckerberg’s algorithm has a greater influence over Irish culture and politics in the 2010s and 2020s than, say, Gay Byrne’s editorial choices did in the 1980s and 1990s.

The Leaving Cert case study should also alert us to the fact that algorithms will come to impact the shape of our society not just through social media platforms, as we might expect, but through fields we might not consider high-tech, such as education, healthcare and finance.


Algorithms will decide whether your driverless car prioritises the driver’s life or the pedestrian’s. Algorithms are replacing the branch manager and the loans officer in the bank; they will help to prioritise limited healthcare resources in hospitals and may decide who can get car insurance and who can’t.

Although each of these sector-specific algorithms will bring its own challenges and opportunities, they all share one overarching benefit: an increased opportunity for transparency and oversight.

It was hard to regulate how a thousand local bank managers assessed risk on loans, so it was hard to see the biases at play and the inequalities they created. It will be easier to assess, and therefore regulate, a bank’s credit-approval algorithms if we demand the right level of transparency and oversight and equip the right regulators to do the job.

Systemic biases

Better still, the biases we find in these new algorithms are almost always an indicator of bias in the system they were built upon. Now, for the first time, the biases and inequalities that were so hard to describe and pin down are being codified, written down in black and white for us all to see, understand and improve.

Amazon, for example, developed an algorithm for their human resources department to automatically screen CVs and job applications. They used a decade of their historical hiring data to create the algorithm, then fed it a random sample of CVs to see which ones it recommended. Most of those it recommended were men. Many news outlets reported the results as “the algorithm is sexist”, but a better interpretation is that their hiring process had been sexist. For a host of reasons, conscious and unconscious, their hiring policies and practices favoured men over women. I’m sure this was something many people were aware of but found difficult to prove. Now they had proof from the algorithm, and could use it to improve their recruiting practices to attract and hire more women.
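As a hypothetical sketch of the kind of test described above, you could feed a screening model a sample of CVs and compare how often each group is recommended. The toy model, the field names and the “favoured keywords” below are assumptions for illustration, not Amazon’s actual system.

```python
# Hypothetical audit of a CV-screening model: feed it a sample of CVs and compare
# recommendation rates by gender. The model, fields and data are illustrative assumptions.

def biased_screen(cv):
    """A toy model trained on a male-dominated hiring history: it has learned to reward
    keywords that happened to be common on past (mostly male) hires' CVs."""
    favoured_keywords = {"rugby", "captain"}   # stand-ins for proxies the model picked up
    return len(favoured_keywords & set(cv["keywords"])) > 0

def recommendation_rates(cvs, model):
    """Share of CVs recommended by the model, broken down by gender."""
    totals, recommended = {}, {}
    for cv in cvs:
        group = cv["gender"]
        totals[group] = totals.get(group, 0) + 1
        recommended[group] = recommended.get(group, 0) + int(model(cv))
    return {group: recommended[group] / totals[group] for group in totals}

sample = [
    {"gender": "M", "keywords": ["rugby", "python"]},
    {"gender": "M", "keywords": ["captain", "sales"]},
    {"gender": "F", "keywords": ["hockey", "python"]},
    {"gender": "F", "keywords": ["netball", "sales"]},
]
print(recommendation_rates(sample, biased_screen))  # {'M': 1.0, 'F': 0.0}
```

The point of such a test is not the toy numbers but that the skew becomes measurable: once the bias is written down in a model, it can be quantified and challenged.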

Over the coming years we are going to learn to live with algorithms. That will present its own challenges, but also many opportunities to improve. We need ongoing consumer education on why algorithms make the decisions they do and the incentives at play. More importantly still, we need to improve the level of discourse in our media and among our legislators, growing a better understanding of the trade-offs involved in each algorithm and the knock-on implications each change can have.

Our legislation should push for algorithmic transparency. These algorithms are often a technical black box, so regulators need the ability to inspect the data on which they are built and to audit the predictions and decisions that emerge from their automated decision-making processes. Just as we assess the differing impact of budgets and laws on different groups within society, we should do the same here too.
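As a rough illustration of what such an audit could look like in practice, the sketch below compares favourable-decision rates across groups and flags any group that falls well below the best-served one. The data, the group labels and the 80 per cent threshold are assumptions for illustration, not a prescribed regulatory test.

```python
# Illustrative sketch of a group-level audit a regulator might run on an automated
# decision system. The data, group labels and 80% threshold are assumptions.

def decision_rates(decisions):
    """Share of favourable outcomes for each group, from (group, approved) records."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` of the best-served group."""
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

# Toy credit-approval decisions: (group, was the loan approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = decision_rates(decisions)
print(rates)                    # roughly {'A': 0.67, 'B': 0.33}
print(flag_disparities(rates))  # ['B']
```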

We should be sceptical of technical algorithms that over-promise, especially while the technology is still young and the data sets needed are incomplete or unrepresentative. In the areas of law and justice, in particular, facial-recognition algorithms have been used with far too much confidence and have led to documented cases of mistaken arrests.

As always, we need to be mindful of the concentration of economic and political power in the gatekeepers who control these algorithms, as they continue to displace the gatekeepers of old.

Peter Tanham is a digital strategist who writes a weekly newsletter about tech policy in Ireland