U-Multirank: quality from a multidimensional view
Universities love to boast of their success in university rankings, but reducing the quality of an institution to a single ranking position does not do justice to the richness of the education and research landscape. U-Multirank is a new tool that aims to remedy this with its unique ‘multidimensional’ comparison of higher education institutions. Urs Hugentobler from Finance and Controlling at ETH Zurich explains how.
Mr. Hugentobler, there are already a number of university rankings, for instance the QS World University Rankings, the Shanghai Ranking and the Times Higher Education (THE) ranking. Why do we need another one?
U-Multirank emerged from the European discussions concerning Anglo-Saxon rankings such as QS and THE. The basic idea is to go beyond a one-dimensional university ranking, in which everything is compressed into a single league table, and instead provide a more comprehensive picture – particularly one that takes into account the full richness of the higher education landscape. The goal was not to focus only on the large research universities but also to include universities of applied sciences and small, specialist colleges that offer only undergraduate education.
So U-Multirank differs from the existing rankings in this respect?
Existing rankings include small institutions as well, but in general they appear far down the ranking list, where nobody takes notice of them. With U-Multirank, users decide for themselves what they want to compare. For example, I can look only at publicly funded, mid-sized institutions that offer bachelor’s and master’s degrees but not PhDs. This way, I can make a preliminary selection and compare similar educational institutions with each other, “like with like”.
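To make this “like with like” pre-selection concrete, here is a minimal sketch of such a filter. The data model, field names and threshold values are hypothetical illustrations and are not taken from U-Multirank’s actual data or interface.

```python
# Hypothetical sketch of a "like with like" pre-selection, as described above.
# Field names and values are illustrative only, not U-Multirank's data model.

institutions = [
    {"name": "A", "funding": "public", "students": 8000, "offers_phd": False},
    {"name": "B", "funding": "private", "students": 30000, "offers_phd": True},
    {"name": "C", "funding": "public", "students": 12000, "offers_phd": False},
]

# Keep only publicly funded, mid-sized institutions without doctoral programmes,
# mirroring the example selection given in the interview.
comparable = [
    inst for inst in institutions
    if inst["funding"] == "public"
    and 5000 <= inst["students"] <= 15000
    and not inst["offers_phd"]
]

print([inst["name"] for inst in comparable])  # -> ['A', 'C']
```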
But is the comparison always based on the same criteria?
In theory, yes. But as a user, I can choose which criteria are especially important to me; for example, the international orientation of an institution or its regional engagement. The latter is particularly important in Europe.
So if the results from this comparison aren’t in the form of a ranking, what are they?
There is a ranking list of sorts, but it is divided into five strength categories, ranging from very strong to weak. Unlike in the rankings published by THE, for example, there is therefore no forced ordering where there is really little difference. With U-Multirank, for instance, the institutions ranked 1st and 29th can fall into the same strength category (A). Minimal differences between universities are no longer inflated into large differences in ranking position.
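As a simple illustration of this banding idea, the following sketch maps indicator scores to five categories instead of a strict 1-to-n ranking. The thresholds and scores are invented for the example and are not U-Multirank’s actual cut-offs or methodology.

```python
# Hypothetical illustration of grouping scores into strength categories (A-E)
# rather than a forced ranking. Thresholds are invented for this sketch.

def strength_category(score: float) -> str:
    """Map a 0-100 indicator score to one of five bands, A (very strong) to E (weak)."""
    if score >= 80:
        return "A"
    if score >= 60:
        return "B"
    if score >= 40:
        return "C"
    if score >= 20:
        return "D"
    return "E"

# Scores that would produce ranks 1 and 29 in a forced ranking can still
# fall into the same band, so a tiny gap is not inflated into a large one.
scores = {"rank 1": 92.4, "rank 29": 85.1}
print({k: strength_category(v) for k, v in scores.items()})  # both map to 'A'
```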
Does the new ranking system solve the problems of the existing ones?
If you want to do away with rankings that artificially inflate the small differences between universities, then U-Multirank solves that problem. However, you can no longer say: “This one is the best university and this one is the second best.” From a media perspective, this is perhaps a step backwards because the results no longer seem as clear and concise.
The rankings published by the Center for Higher Education (CHE) similarly strive towards more individual comparisons between institutions. Has U-Multirank evolved beyond the CHE rankings?
The CHE played a major role in the development of this ranking system: the methodology is based on its own rankings, which mainly use data from student surveys. The multidimensional view, which allows users to adapt university comparisons to their own parameters, was also taken from the CHE and then expanded by U-Multirank for international comparisons.
How will the new ranking approach affect the ‘traditional’ rankings, such as QS and THE?
Probably not much. QS and THE offer institutions the opportunity to boast about their rankings with statements such as “We are number 5”. Institutions interested in this sort of thing will not take much notice of the new ranking. However, it creates more transparency and is actually useful to institutions and university departments. In the engineering sciences, for instance, U-Multirank allows us to obtain very detailed feedback from student surveys, which can then be used by the individual departments.
How does ETH perform in the eyes of U-Multirank?
We are in the top category for several indicators, such as research and knowledge transfer. However, the results need a bit of interpretation in some places. For example, ETH is somewhere in the middle in terms of the success rate of undergraduate students. But this is due to the general higher education policies in place and not the quality of education at ETH. In Switzerland, prospective students have free access to universities and selection takes place during the first one or two years of study. In countries where prospective students must apply for a spot at a university, selection takes place before students begin their studies. In this case, the institutions do everything they can so that the pre-selected students can graduate, which results in a 90% to 95% success rate.
Why weren’t these differences in policy taken into consideration?
This was discussed during the preliminary stages, but was then found to be too complex. It is indeed a big disadvantage of this ranking system that it does not take these different general conditions into account.
Does ETH have further potential for improvement according to U-Multirank?
This requires that you look at the results in detail and compare them with your specific strategy and goals as an institution. If national or regional engagement is important to us, it makes sense that we would offer a bachelor’s degree with a national focus. Accordingly, we perform poorly in the category of ‘Foreign language bachelor programmes’. Moreover, based on the comparison with selected institutions, we can identify room for improvement or at least raise questions that need to be addressed specifically. U-Multirank can provide an entry point for this; however, we need more detailed information to develop solutions.
Further Information
U-Multirank includes information on more than 850 universities, 1,000 faculties and 5,000 study programmes in 70 countries. The evaluation in the five categories – teaching, research, regional engagement, knowledge transfer and international orientation – is based on student surveys (for individual study areas), information provided by the institutions themselves (number of students, courses offered, teaching staff, etc.) and research publications drawn from Thomson Reuters data, which is also used for the THE rankings. The general evaluation covers the whole institution, whereas rankings for individual study areas at ETH are currently only available for Electrical Engineering, Mechanical Engineering and Physics. U-Multirank is an EU consortium project led by the Center for Higher Education Policy Studies (CHEPS) at the University of Twente in the Netherlands and the Center for Higher Education (CHE) in Germany.