Lies, damned lies and university rankings

Opinion: None of us would accept a student's homework based on the shoddy methodology of most ranking systems

Mark Twain popularised the phrase, attributed to Disraeli, "lies, damned lies and statistics". A modern-day version could easily be applied to university rankings.

Take, for instance, the headlines that appeared in three major Irish newspapers recently.

According to the Irish Examiner, "Irish colleges perform well in international rankings" having "been graded above international averages for teaching and learning standards in an international ranking system".

According to The Irish Times, the picture is one of despair as "Irish universities tumble down latest set of world rankings". The Irish Independent was equally despondent, claiming the "country's higher education sector falls further in global rankings".


The Irish Universities Association was not far behind in its lamentations – identifying the situation as "alarming" and placing the blame on the "impact of funding cuts over the years". Those cuts are real and their implications serious, but we don't need rankings to tell us that.

The confusion is compounded when we reflect that this time last year, The Irish Times proclaimed Trinity College and UCD had "climbed in latest world rankings" based on "positive sentiment from employers". And the Global Innovation Index ranked Ireland 10th overall in the world.

Confused? You should be!

The clue to unravelling the mysteries of rankings can be found in the rankings themselves. The "positive" results are in the European Union-sponsored U-Multirank (UMR), while the "negative" results are in the 2019 QS World University Rankings.

Since the first global ranking appeared in 2003, approximately 17 different global rankings have emerged.

The overwhelming majority of rankings are developed by commercial, often media, companies, with only about 7 per cent developed by government organisations. UMR stands out as one of the latter.

The popularity of rankings owes much to their simplicity. They claim to compare universities using a range of indicators, which are weighted according to criteria chosen by each rankings compiler, and each compiler chooses its own methodology. There is no internationally agreed, objective set of criteria, and there is no intrinsic reason why indicators need to be either weighted or aggregated into a single score.

Comparisons misguided

Many of the areas measured are important, but annual comparisons are misguided because institutions do not, and cannot, change significantly from year to year. Many of the indicators, or their proxies, have at best an indirect relationship to faculty or educational quality and could actually be counter-productive.

Despite a common nomenclature, rankings differ from each other. The question “which university is best?” is asked, and answered, differently depending upon which ranking is doing the asking. This presents a problem for users who perceive the final results as directly comparable. They are not.

The current situation is a case in point. QS is one of the “big three”, alongside Shanghai’s Academic Ranking of World Universities (ARWU) and the Times Higher Education World Rankings (THE).

QS claims to measure performance under six headings: academic reputation, employer reputation, student-faculty ratio, citations, international faculty ratio and international student ratio.

But these categories hide the fact that, once weightings are attached, rankings are essentially a measure of research and reputation. For QS, allowing for some overlaps, research accounts for 70 per cent of the overall total and reputation for 50 per cent. Both ARWU and THE are based entirely on research and research-related indicators.

In contrast, U-Multirank is a multi-dimensional ranking, with a strong emphasis on user customisation. It does not create a single score or profile. Rather it encourages comparability of similar types of universities. It measures performance under educational profile, student profile, research involvement, knowledge exchange, international orientation and regional engagement.

As such, U-Multirank is more sophisticated than other rankings – but its complexity is less compelling than the simplicity of QS. It is noteworthy that teaching and learning, the student experience and societal engagement do not feature in the rankings that receive the most attention from universities and the media. Yet those are key objectives of the Irish higher education system.

Superficially attractive

Rankings are superficially attractive in their apparent ability to provide simple, quick and easy measurements, but they can misrepresent and foster a strategy of “picking winners” with unintended consequences.

In assessing the performance of the higher education system, is it not far better to use indicators which align with national social and economic objectives rather than adopting indicators chosen by commercial organisations for their own purposes?

None of us would accept a student’s homework based on the shoddy methodology employed by most ranking systems. Yet, we are quick to use rankings when the opportunity suits us and the case we want to make.

One of the big lessons of global rankings is the extent to which higher education policy and institutional strategies have become vulnerable to an agenda set by others. We ignore this at our peril.

Ellen Hazelkorn is a founding partner in BH Associates (www.bhassociates.eu) and the author of 'Rankings and the Reshaping of Higher Education' (2015).