Monday 16 January 2017

PISA tests (some facts to think about)

At the end of last year, the results of two international tests – the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA) – were released, and both showed Australia had fallen a few rungs in the international education league tables.

The attacks on Australian education began immediately. Opinion columns and letters to the editor accused schools and teachers of failing their students and suggested ways to fix the 'crisis'.

The international tests are clearly the sole evidence for the claim that standards in Australian education are slipping in comparison with other countries. But what exactly do they test?

TIMSS tests sample groups of year 4 and year 8 students in maths and science every four years across 49 countries; PISA tests sample groups of 15-year-old students in maths, science and reading every three years across 72 countries.

Since the tests are limited to three areas of the curriculum, they obviously can't paint a full picture of our schooling system. There are many important aspects of a well-rounded education that are not included.

At best, the TIMSS and PISA tests can tell us something about standards in the subject areas being tested. But do they even do that?

Let's look at one subject – science – to explore this question.

The most recent PISA results in science placed Australia well above the OECD average. Of the 72 participating countries, only nine had scores significantly better than Australia's. The next eight countries were similar to Australia, and in turn scored significantly higher than 55 other countries.

In the year 8 TIMSS science test, 14 countries had scores significantly higher than Australia's. The next five countries were similar to Australia, and in turn scored significantly higher than 20 other countries.

Taking these results at face value, although there may be room for improvement in science, it is surely a stretch to call being placed in the top 10 to 15 countries a "crisis".

But scratch beneath the surface of the two tests and some puzzling questions and contradictions emerge. Here are just four:

1. The TIMSS test in science placed Australia well below Kazakhstan. And yet Kazakhstan was excluded from the PISA league table because its "home-based assessors" were deemed too lenient to produce results that could be compared with those of other countries. What does this say about comparability within and between the tests?

2. The tests produce contradictory results. Half of the countries ranked higher than Australia in TIMSS ranked much lower than Australia in PISA. Which test results do we use to inform us about our relative standing in science?

3. Some of the top place-getters in both tests, such as Hong Kong, Macao and Taipei, are cities or regions, not countries. This makes the comparison of results somewhat problematic. Is it valid to compare the results of cities with those of countries? If so, let's make the ACT, with its comparatively affluent demographic, our representative in the PISA tests – Australia would then rise to fifth on the science ladder.

4. The raw test scores reveal nothing about student interest and engagement. The summary of student attitudes to science in the PISA report indicates that students from countries at the top of the league tables tend to have some of the lowest rates of wanting to pursue a "science-related career". Australia, on the other hand, is in the top group of countries in this area. So do the 'academic' results tell the full story?

These few examples suggest that the results of the two major international tests need to be treated with some caution. They confirm the concerns expressed by many leading education academics around the world in an open letter to the OECD's co-ordinator of the PISA tests.

These concerns relate to such matters as problems with the methodology, the difficulty of devising tests which are culturally neutral, and the administrative practices in some countries related to student sampling. 

In short, treating the results of two international tests as an objective truth about the standard of education in a country is a dangerous path to follow. We surely need more sophisticated ways to measure educational standards than simply reading off international league tables.

Despite this, whenever new test results are released, commentators ignore such problems and pronounce those at the top – countries and cities such as Singapore, Korea, and Hong Kong – to be the embodiment of education quality. Policymakers are urged to copy them.

As the famous educator Yong Zhao points out, the irony is that many educators in these "top" countries are concerned about the ways in which testing culture is beginning to narrow the curriculum and place extreme pressure on students to perform, as after-hours "cramming" schools become the norm.

Of course, there are issues that must be addressed in Australian education, not the least being the need to lift the educational outcomes of students from educationally disadvantaged backgrounds. This will require innovative approaches and needs-based funding.

But these issues do not imply that standards overall in Australian education are declining.

Let's recognise and celebrate the high quality of Australian education while working to address those aspects which need improvement, rather than denigrate schools and teachers on the basis of flawed and partial evidence.

Alan Reid is Professor Emeritus of Education at the University of South Australia.
From The Age
