Predicting success in medical studies

Daniel Edwards discusses the findings of a multi-institution investigation of the ability of Australia’s medical school admissions processes to predict future achievement levels.

Admission to medical school is one of the most highly competitive entry points in higher education. Universities invest considerably in developing selection processes that aim to identify the most suitable candidates for their medical programs. Such processes commonly include previous academic achievement, interviews, references, personal statements and performance in an aptitude test.

The Undergraduate Medicine and Health Sciences Admission Test (UMAT) is an aptitude test that has been used by many Australian and New Zealand universities since the early 1990s as part of the admissions process. Revised annually and administered by the Australian Council for Educational Research (ACER) on behalf of the consortium of institutions using the test, UMAT assesses students’ aptitude in three areas: logical reasoning, understanding people and non-verbal reasoning.

Key questions that arise in developing and evaluating admissions processes are how well the tools and the process itself predict the course performance of selected students, and how well they identify individuals who will become good doctors. These questions of ‘predictive validity’ are important because they test the justification for using particular tools; however, the predictive power of admissions variables can sometimes be over-emphasised.

Selection tools are usually designed to identify who is likely to succeed, not necessarily to predict actual performance in course assessment. In other words, they are used to determine who should be selected, rather than who will perform best. Further, and more fundamentally, the ability to succeed in a medical degree does not necessarily correlate with how good a doctor one will become.

An ACER multi-institution investigation of the predictive validity of Australia’s medical school admissions processes, published in the journal BMC Medical Education in December 2013, highlights the pitfalls of drawing generalised conclusions about selection tools without recognising the diverse ways in which they are designed and the variation in the institutional contexts in which they are administered.

The study examined 650 undergraduate medical students from three Australian universities as they progressed through the initial years of medical school. Two cohorts of students were used: those who sat UMAT in 2005 and commenced university in 2006, and those who sat UMAT in 2006 and commenced in 2007. The students in the study accounted for approximately 25 per cent of all commencing undergraduate medical students in Australia in 2006 and 2007.

Each institution in the study used a different combination of school achievement, UMAT score and an interview to select students for admission. The study correlated these admissions tools with students’ Grade Point Average (GPA) at each year of study. Because Australian higher education lacks moderation or calibration processes, GPAs are not comparable between institutions, so the relationships were examined within each institution.
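To illustrate the kind of within-institution analysis described above, the following Python sketch computes Pearson correlations between admissions scores and first-year GPA. The column names and data are hypothetical, invented purely for illustration; they are not drawn from the study.

```python
import pandas as pd
from scipy import stats

# Hypothetical student records for one institution (illustrative only;
# not the study's data). Each row: admissions scores and first-year GPA.
df = pd.DataFrame({
    "umat_total":   [168, 175, 181, 159, 172, 186, 164, 178],
    "interview":    [7.5, 8.0, 6.5, 7.0, 8.5, 9.0, 6.0, 7.5],
    "school_score": [92.1, 95.3, 98.0, 90.4, 94.2, 99.1, 91.0, 96.5],
    "gpa_year1":    [5.2, 5.8, 6.1, 4.9, 5.6, 6.5, 5.0, 5.9],
})

# Pearson correlation of each admissions tool with first-year GPA.
# Because GPAs are not comparable across institutions, each school
# would be analysed separately in this way.
for tool in ["umat_total", "interview", "school_score"]:
    r, p = stats.pearsonr(df[tool], df["gpa_year1"])
    print(f"{tool}: r = {r:.2f}, p = {p:.3f}")
```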

In the main, the analyses showed considerable variation in the predictive validity of all three admissions tools across the institutions involved in the study. One of the more consistent findings was that academic performance in the first two years of study correlated with the UMAT Total score: students who achieved higher UMAT Total scores tended to achieve higher GPAs.

An examination of the individual UMAT section scores suggests that this correlation is driven largely by the logical reasoning and understanding people sections. There was less of a relationship between GPA and non-verbal reasoning scores, which is understandable given that this is not a skill commonly assessed in university outcomes. Different parts of UMAT may correlate more highly with different individual assessments at university.

Among the other admissions tools employed by institutions, interviews were not found to be a strong predictor of GPA, probably for reasons similar to those for the UMAT non-verbal reasoning scores. School achievement was a significant predictor of GPA across the first four years of study at one institution, across the first and third years at another, and not at all at the third.

It is difficult to generalise results from predictive validity studies such as this, given the diverse ways in which admissions tools are applied and the variation in the institutional contexts in which they are implemented. Yet, while there were differences between institutions, across years and among the tools used for admissions, one constant shown by this study is that using multiple tools does increase predictive validity.

Each selection tool incrementally added to the explained variance in medical school outcomes. Yet together the tools explained only between 3 and 36 per cent of the variance in GPA, indicating that admissions tools account for a relatively small proportion of how students will perform in their studies.
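One way to see this incremental contribution is a hierarchical regression: admissions tools are entered one at a time and the change in R² at each step records that tool's additional explanatory power. The sketch below reuses the hypothetical data from the earlier example and assumes statsmodels is available; it illustrates the general technique, not the study's actual models.

```python
import pandas as pd
import statsmodels.api as sm

# Illustrative data only (hypothetical; not the study's dataset).
df = pd.DataFrame({
    "school_score": [92.1, 95.3, 98.0, 90.4, 94.2, 99.1, 91.0, 96.5],
    "umat_total":   [168, 175, 181, 159, 172, 186, 164, 178],
    "interview":    [7.5, 8.0, 6.5, 7.0, 8.5, 9.0, 6.0, 7.5],
    "gpa_year1":    [5.2, 5.8, 6.1, 4.9, 5.6, 6.5, 5.0, 5.9],
})

y = df["gpa_year1"]
steps = ["school_score", "umat_total", "interview"]

# Add one admissions tool at a time; the increase in R-squared at each
# step is that tool's incremental contribution to explained variance.
prev_r2 = 0.0
predictors = []
for tool in steps:
    predictors.append(tool)
    X = sm.add_constant(df[predictors])
    r2 = sm.OLS(y, X).fit().rsquared
    print(f"+ {tool}: R² = {r2:.2f} (ΔR² = {r2 - prev_r2:.2f})")
    prev_r2 = r2
```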

Overall, the study shows that the same admissions tools can ‘perform’ very differently in different medical schools, and even in the same medical school across different cohorts. As such, while predictive validity studies are useful for gauging the efficacy of selection processes, they should be interpreted with careful consideration of the admissions and educational assessment practices being evaluated.

Read the full report:
‘Same admissions tools, different outcomes: a critical perspective on predictive validity in three undergraduate medical schools’ by Daniel Edwards, Tim Friedman and Jacob Pearce.
