
Questioning the standards of literacy and numeracy


David Tout and Juliette Mendelovits examine why we receive such differing reports on the literacy and numeracy skills of young Australians.


Australia participates in several large-scale assessment programs that provide information about the knowledge and skills of the population at various points in the lifespan. Each of these programs tells its own story about literacy and numeracy standards in Australia, and some of these stories appear to contradict one another.

The 2006 Adult Literacy and Lifeskills Survey (ALLS) reported that about 50 per cent of Australians between the ages of 15 and 74 were below the minimum required standard of literacy and numeracy. Three years later, the 2009 OECD Programme for International Student Assessment (PISA) reported that 15 per cent of Australian 15-year-olds were below a baseline level of proficiency in reading and mathematics. Australia’s National Assessment Program – Literacy and Numeracy (NAPLAN), on the other hand, reported in 2011 that only six per cent of Year 9 students – who are around 14 years of age – were below the minimum standard of literacy and numeracy. Taken at face value, these results suggest a great deal of improvement in a short space of time; however, trends observed over the same period within each assessment program do not support this view.

What, then, can explain these wildly different reports? Are these three assessment programs measuring completely different things? Do expectations vary about what constitutes an adequate level of literacy and numeracy? Or is there something else at play?

Further, if the reasons for the variation can be understood, is it possible to represent these standards on a single, coherent continuum of achievement?

Explaining the differences

The apparent discrepancies between different measures of literacy and numeracy can be explained by four key factors:

  • the definitions of literacy and numeracy used;
  • the stated and unstated program purposes;
  • the agenda of the stakeholders; and
  • the way standards are represented statistically.

Definitions of literacy and numeracy

One point of contrast between assessment programs is the scope of what is encompassed by the terms ‘literacy’ and ‘numeracy’.

Of the three programs under consideration, PISA deals only, and explicitly, with reading literacy. ALLS uses the terms ‘prose literacy’ and ‘document literacy’, but in both cases the assessment is confined to reading. NAPLAN takes a broader approach, conceiving literacy as the written forms of language: reading, writing, spelling and grammar. There is also variation in the types of texts used in the assessments: ALLS has no narrative or fiction component, yet such texts occupy a major part of NAPLAN and a substantial proportion of PISA.

In terms of numeracy, the focus in NAPLAN is on theoretical or abstract mathematics, while the focus in PISA and ALLS is on the application of maths to everyday tasks. Variation also exists in the strands of mathematics addressed by the different assessment programs.

Another point of contrast between the assessment programs lies in the descriptions of standards, also referred to as levels or bands. These vary in the amount of detail they provide, in the types of skills and abilities they describe, and in whether the standards are framed in terms of test takers’ competencies or the characteristics of the tasks.

The extent to which these differences in definitions and descriptions inhibit alignment can, however, be exaggerated. To some degree they are matters of style rather than substance, and their practical effect on alignment is small in comparison with the next three factors.

Program purposes

Each assessment program has a different purpose, and accordingly takes a different approach to forming and applying levels or bands as well as minimum standards.

To serve its aim of reporting to school education systems across Australia, NAPLAN has described minimum standards for literacy and numeracy as the level below which students ‘are at risk of being unable to progress satisfactorily at school without targeted intervention’. NAPLAN has ten incrementally increasing achievement bands, of which Band 6 is described as the national minimum standard for Year 9 students.

PISA’s central purpose is to report to policymakers on the outcomes of compulsory schooling. Its minimum standard is therefore described as the level at which students ‘begin to demonstrate the [competencies] that will enable them to participate effectively and productively in life’. PISA has six levels of proficiency, of which Level 2 is described as the minimum standard.

Also primarily reporting to policymakers, ALLS has set a higher bar in the description of the minimum standard as a ‘suitable minimum for coping with the demands of everyday life and work in a complex, advanced society’. ALLS has five proficiency levels, of which Level 3 is described as the minimum standard.

The differences in demand between NAPLAN’s ‘managing the demands of schooling’, PISA’s ‘beginning to demonstrate effective skills for adult life’ and ALLS’s ‘coping with the demands of everyday life and work in a complex society’ are clear, and cast doubt on the extent to which it is meaningful to directly compare the percentages of the population attaining the minimum standard in each program.

Stakeholder agendas

Ideally, minimum standards would be set with regard to some research-based reference point, such as a demonstrated ability to predict success in later life or schooling, or at least with consensual expert judgement about what ‘satisfactory’ means. In reality, however, such criteria are slippery and difficult to verify.

It could be argued that the more remote from the consequences of the standards the stakeholders are, and the more responsibility for the results can be laid at the feet of someone else, the more stringent the standards are likely to be. A comparison of NAPLAN and ALLS supports this view. 

For NAPLAN the standards have been developed in a collaborative effort closely involving state and territory jurisdictions, which have responsibility for the learning and achievement of students in Australian schools. The ALLS standards were set by a group of international experts who have an interest – albeit an altruistic interest – in telling a bad-news story, in order to encourage resourcing for adult learning. Responsibility for adult literacy and numeracy levels is diffused or at least widely shared: among school and training systems, employers, and national social and immigration policies.

In explaining the differences in the standards applied by the various programs, it is hard to ignore the possibility that part of the explanation lies in the degree of responsibility that those who set the standards bear for their attainment.

Statistical representation

The final key factor in explaining the apparent discrepancies between programs relates to the statistical methods used to report the standards. NAPLAN, PISA and ALLS each use Item Response Theory to rank tasks in order of difficulty and test takers in order of proficiency, enabling reporting of the probability of a test taker responding successfully to a given task.

A crucial point is that policy decisions must be made about the number of bands or levels against which tasks and achievement will be reported, and about the required level of probability of successful response – known as the RP value.

For example, PISA uses an RP value of 62 – meaning that students are said to be ‘at a level’ when they have a 62 per cent chance of responding successfully to tasks at that level. NAPLAN uses a similar RP value to PISA. ALLS, on the other hand, applies an RP value of 80. This is a much more stringent demand, meaning more test takers will appear to be performing at lower levels than if a lower RP value had been selected.
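
A small worked example may help. The sketch below assumes a simple one-parameter (Rasch) logistic model – a simplification of the IRT scaling these programs actually use – with rasch_p and level_cut_point as illustrative helper functions and a level boundary set, arbitrarily, at a difficulty of 0 logits. It shows how far the ability cut point for being ‘at a level’ moves when the required probability of success rises from 62 to 80 per cent.

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Probability that a test taker of ability theta succeeds on an
    item of difficulty b, under the one-parameter (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def level_cut_point(b: float, rp: float) -> float:
    """The ability required to reach a success probability of rp
    (e.g. 0.62 or 0.80) on an item of difficulty b."""
    return b + math.log(rp / (1.0 - rp))

# Illustrative level boundary: items of difficulty 0 logits.
b = 0.0
for rp in (0.62, 0.80):
    print(f"RP {rp:.0%}: ability cut point = {level_cut_point(b, rp):+.2f} logits")

# A test taker at theta = 1.0 has about a 73 per cent chance of success:
# 'at the level' under RP 62, but below it under RP 80.
theta = 1.0
print(f"P(success) at theta = {theta}: {rasch_p(theta, b):.2f}")
```

In this toy example the cut point rises by almost a full logit, so the same test taker can sit comfortably above the RP 62 threshold yet fall below the RP 80 threshold.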

Tables 1 and 2 illustrate the impact that applying different RP values has on the reported standards. In each table, the percentage below the minimum standard is the ‘% below Level 2’ figure for PISA, whose minimum standard is Level 2, and the ‘% at or below Level 2’ figure for ALLS, whose minimum standard is Level 3.

Table 1: PISA 2009 Reading Literacy / ALLS 2006 Prose Literacy

  Measure                                        % below Level 2   % at Level 2   % at or below Level 2
  PISA Reading Literacy, RP 62 (as published)         14.3             20.4               34.7
  PISA Reading Literacy, RP 80                        37.8             28.9               66.7
  ALLS Prose Literacy, RP 80 (as published)           15.3             36.7               52.0


Table 2: PISA 2009 Mathematical Literacy / ALLS 2006 Numeracy

  Measure                                             % below Level 2   % at Level 2   % at or below Level 2
  PISA Mathematical Literacy, RP 62 (as published)         15.9             20.3               36.2
  PISA Mathematical Literacy, RP 80                        39.1             25.9               65.0
  ALLS Numeracy, RP 80 (as published)                      19.7             37.0               56.7


Were PISA to use RP 80, like ALLS, the percentage of students below the minimum standard would rise dramatically and would be much closer to the percentage of people below the ALLS minimum standard.
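
The same logic scales up to population percentages. As a rough illustration (not a reanalysis of the PISA or ALLS data), the sketch below simulates a population of abilities and counts the share falling below the cut point under each RP value; the Normal(0.8, 1.0) ability distribution and the boundary difficulty of 0 logits are invented for the example.

```python
import math
import random

def level_cut_point(b: float, rp: float) -> float:
    """Ability required for a success probability of rp on an item of difficulty b."""
    return b + math.log(rp / (1.0 - rp))

random.seed(1)

# Hypothetical population: abilities normally distributed on the logit
# scale, somewhat above a level boundary set by items of difficulty 0.
# Both parameters are invented for the illustration.
abilities = [random.gauss(0.8, 1.0) for _ in range(100_000)]

for rp in (0.62, 0.80):
    cut = level_cut_point(0.0, rp)
    share_below = sum(theta < cut for theta in abilities) / len(abilities)
    print(f"RP {rp:.0%}: {share_below:.1%} of the simulated population below the level")
```

Under these invented parameters the share below the level roughly doubles when the RP value moves from 62 to 80, echoing the pattern in Tables 1 and 2.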

There is still, however, a sizeable gap. Part of this gap can be attributed to the number of levels or bands in each assessment, and hence the number of locations at which standards can be set.  

Interestingly, results released in October 2013 for the most recent adult skills survey, the OECD Programme for the International Assessment of Adult Competencies (PIAAC), were reported using an RP value of 67. While this has the effect of making comparisons with surveys such as PISA more meaningful, it also complicates the mapping of trends since ALLS in 2006. New analysis is required to understand how PIAAC fits into the picture, particularly as the study has not defined a minimum standard in the way that ALLS did.

Aligning the standards

Given the sorts of variation outlined above, the question is whether it is possible to merge these distinct portraits of Australian literacy and numeracy standards into a single profile.

ACER recently commenced a project that aims to represent different standards on a single, coherent continuum of achievement for each discipline. Commissioned by the Victorian Department of Education and Early Childhood Development, the project will align not only NAPLAN, PISA and ALLS, but also the Victorian Certificate of Applied Learning and key Victorian Certificate of Education subjects.

All programs will be mapped against the Australian Core Skills Framework (ACSF), as it has an explicit literacy and numeracy orientation, covers a wide range of competencies and is already linked to several of the other programs to be aligned. Since employer groups are becoming increasingly familiar with the ACSF’s design, it also provides a link between the worlds of work and education.

Preliminary work suggests that, roughly speaking, for literacy the minimum standard for PISA (Level 2) is slightly above NAPLAN’s Year 9 minimum standard (Band 6), while the minimum standard for ALLS (Level 3) appears to be just below NAPLAN Band 8 and between PISA Levels 3 and 4. For numeracy, the minimum standard for PISA (Level 2) appears to be just above NAPLAN’s Year 9 minimum standard (Band 6) while the minimum standard for ALLS (Level 3) is between NAPLAN Bands 8 and 9 and slightly above PISA Level 4.

Representing the various programs’ scales on a single continuum is a complex project that will involve a substantial number of subject matter experts independently matching work samples, assessment items or descriptors from one program to another. The project will continue into 2014.

This research is the result of a project sponsored by the Victorian Department of Education and Early Childhood Development and the Australian Council for Educational Research.
