
International Education Statistics

The Use Of Indicators To Evaluate The Condition Of Education Systems



The use of widely published statistical indicators (also referred to as social indicators outside the purely economic realm) of the condition of national education systems has in the early twenty-first century become a standard part of the policymaking process throughout the world. Unlike the usual policy-related statistical analysis, statistical indicators are derived measures, often combining multiple data sources and several statistics, that are uniformly developed across nations, repeated regularly over time, and have come to be accepted as summarizing the condition of an underlying complex process. Perhaps the best known of all statistical indicators is the gross national product, which is derived from a statistical formula that summarizes all of the business activity of a nation's economy into one meaningful number. In the closing decades of the twentieth century, international statistical indicators of educational processes made considerable advances in quantity, quality, and acceptance among policymakers.



These cross-national indicators often have a significant impact on both the public and the education establishment. In the United States, for example, the New York Times gives high visibility to reports based on indicators of where American students rank on the latest international math or science tests, or to reports revealing how educational expenditures per student, teacher salaries, or high school dropout rates compare across countries. National education ministries or departments frequently use press releases to put their own spin on statistical indicator reports such as the annual Education at a Glance (EAG), published by the Organisation for Economic Co-operation and Development (OECD). The use of indicators for comparisons and strategic mobilization has become a regular part of educational politics. Dutch teachers, for example, used these indicators to lobby for salary increases after the 1996 EAG indicators of teacher salaries showed that they were not paid as well as their Belgian and German neighbors. Similarly, in the United States, cross-national comparisons of a statistical indicator of dropout rates were used in 2000 to highlight comparatively low high school completion rates. In an extreme, but illustrative, case, one nation's incumbent political party requested that the publication of the EAG be delayed until after a parliamentary election because of the potentially damaging news about how its education system compared to other OECD nations.

These examples of the widespread impact of international education statistical indicators are all the more interesting when one considers that an earlier attempt to set up a system of international statistical indicators of education, during the 1970s, failed. Just as attempts to create a national, and then an international, system of social indicators (known as the social indicators movement) faltered, early attempts by the OECD to develop statistical indicators on education systems fell apart as idealism about the utility of a technical-functionalist approach to education planning receded.

History of Social Indicators

The social indicators movement, born in the early 1960s, attempted to establish a "system of social accounts" that would allow for cost-benefit analyses of the social components of expenditures already indexed in the National Income and Product Accounts. Many academics and policymakers were concerned about the social costs of economic growth, and social indicators were seen as a means to monitor the social impacts of economic expenditures. Social indicators are defined as time series used to monitor the social system, helping to identify change and to guide efforts to adjust the course of social change. Examples of social indicators include unemployment rates, crime rates, estimates of life expectancy, health status indices such as the average number of "healthy days" in the past month, rates of voting in elections, measures of subjective well-being, and education measures such as school enrollment rates and achievement test scores.

Enthusiasm for social indicators led to the establishment of the Social Science Research Council (SSRC) Center for Coordination of Research on Social Indicators in 1972 (funded by the National Science Foundation) and the initiation of several continuing sample surveys, including the General Social Survey (GSS) and the National Crime Survey (NCS). On the reporting side, the Census Bureau published three comprehensive social indicators data and chart books, in 1974, 1978, and 1980, and the academic community launched the international journal Social Indicators Research in 1974. Many other nations and international agencies also produced indicator volumes of their own during this period. In the 1980s, however, federal funding cuts led to the discontinuation of numerous comprehensive national and international social indicators activities, including the closing of the SSRC Center. Some have argued that a shift away from data-based decision-making towards policy based on conservative ideology during the Reagan administration, coupled with a large budget deficit, helped to pull the financial plug on the social indicators movement. While field-specific indicators continue to be published by government agencies in areas such as education, labor, health, crime, housing, science, and agriculture, the systematic public reporting envisioned in the 1960s has largely not been sustained, although comprehensive surveys of the condition of youth have arisen in both the public and private spheres, such as those by the Annie E. Casey Foundation (2001) and the Federal Interagency Forum on Child and Family Statistics (2001).

Some of the main data collections that grew out of the social indicators movement, including the GSS and NCS, continue, as do a range of longitudinal and cross-sectional surveys in other social areas. On the academic side, a literature involving social indicators has continued to grow, mostly focused on quality-of-life issues. While education is seen as a component of quality of life, it tends to be treated in a fairly rudimentary fashion. For example, of the 331 articles published in Social Indicators Research between 1994 and 2000, only twenty-six addressed education in any depth. And although the widely cited Human Development Index compiled by the United Nations Development Programme has education and literacy components, these are limited to basic measures of school enrollments and, arguably, non-comparable country-level estimates of literacy rates, which are often based on census questions about whether someone can read or write. As a subfield of social indicators, however, the collection and reporting of education statistics have expanded rapidly since the early 1980s in the United States, since the early 1990s in OECD countries, and, more recently, in developing countries.

State of International Education Statistical Indicators Today

Among the current array of statistical indicators of education within and across nations are some that go far beyond the basic structural characteristics and resource inputs, such as student-teacher ratios and expenditures per student, found in statistical almanacs. More data-intensive and statistically complex indicators of participation in education, financial investments, decision-making procedures, public attitudes towards education, differences in curriculum and textbooks, retention and dropout rates in tertiary (higher) education, and student achievement in math, science, reading, and civics have become standard parts of indicators reports. For example, the OECD summarizes total education enrollment through an indicator of the average years of schooling that a 5-year-old child can expect under current conditions, calculated by summing the net enrollment rates (expressed as percentages) for each single year of age and dividing by one hundred. Unlike the gross enrollment ratios (calculated as total enrollment in a particular level of education, regardless of age, divided by the size of the population in the "official" age group corresponding to that level) that have traditionally been reported in UNESCO statistical yearbooks, the schooling expectancy measures reported by the OECD aggregate across levels of education and increase comparability by weighting enrollment by the size of the population that is actually eligible to enroll.
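The arithmetic behind these two enrollment measures can be made concrete with a short sketch. The figures below are invented purely for illustration; actual indicators are computed from administrative enrollment counts and population estimates by single year of age.

```python
# Minimal sketch of the school-expectancy calculation described above.
# All enrollment figures are hypothetical.

# Net enrollment rate (percent) for each single year of age.
net_enrollment_rates = {age: 95.0 for age in range(5, 17)}          # near-universal schooling, ages 5-16
net_enrollment_rates.update({age: 40.0 for age in range(17, 23)})   # partial enrollment, ages 17-22

# Summing the percentage rates and dividing by one hundred yields
# the expected years of schooling for a 5-year-old child.
school_expectancy = sum(net_enrollment_rates.values()) / 100
print(f"Expected years of schooling: {school_expectancy:.1f}")      # 13.8

# The traditional gross enrollment ratio, by contrast, compares total
# enrollment at one level (regardless of age) to the population of the
# "official" age group, so it can exceed 100 percent when over- or
# under-age students are enrolled.
total_secondary_enrollment = 1_150_000   # all secondary students, any age
official_age_population = 1_000_000      # population of the official secondary age group
gross_enrollment_ratio = 100 * total_secondary_enrollment / official_age_population
print(f"Gross secondary enrollment ratio: {gross_enrollment_ratio:.0f}%")  # 115%
```

On these invented numbers the two measures behave quite differently: the expectancy measure stays bounded by the number of ages considered, while the gross ratio can exceed 100 percent, which is one reason the OECD measure compares more cleanly across countries.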

Examples of other indicators that attempt to summarize complex issues into concise numerical indices include measures of individual and societal rates of return on investments in different levels of education, measures of factors contributing to differences in relative statutory teachers' salary costs per student, and effort indexes for education funding, which adjust measures of public and private expenditures per student by per capita wealth. Furthermore, the OECD is working to develop assessment and reporting mechanisms to compare students' problem-solving skills, their ability to work in groups, and their technology skills. There are few components of the world education enterprise that statisticians, psychometricians, and survey methodologists are not trying to measure and put into summary indicator form for public consumption.
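As a concrete illustration of an effort index, the sketch below expresses expenditure per student as a percentage of GDP per capita. Both the exact formulation and the figures are assumptions made for this example, not the OECD's published definitions or values.

```python
# Illustrative effort index: expenditure per student as a percentage of
# GDP per capita, so that spending is judged relative to national wealth.
# All figures are hypothetical.

countries = {
    # name: (expenditure per student, GDP per capita), same currency units
    "Country A": (6_000, 30_000),
    "Country B": (4_500, 15_000),
}

for name, (spending_per_student, gdp_per_capita) in countries.items():
    effort = 100 * spending_per_student / gdp_per_capita
    print(f"{name}: spends {effort:.0f}% of GDP per capita on each student")
```

On these invented numbers, the poorer Country B spends less per student in absolute terms (4,500 versus 6,000) yet makes the greater relative effort (30 percent versus 20 percent of per capita wealth), which is precisely the distinction such an adjustment is designed to surface.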

High-quality indicators require data that are accurate and routinely available. Behind the creation of so many high-quality indicators of national education systems is the routine collection of a wide array of education data in most industrialized countries. In addition to the costs of gathering data and information for national purposes, a large amount of human and financial investment is made to ensure that the data meet standards of comparability across countries. Country-level experts, whether from statistical or policy branches of governments, frequently convene to discuss the kinds of data that should be collected, to reach consensus on the most methodologically sound means of collecting the data, and to construct indicators.

Growth and Institutionalization of International Data Collections

Numerous international organizations collect and report education data, with the range of data types expanding and the complexity of collection and analysis increasing. Hence, the total cost of these collections has increased dramatically since 1990. Government financial support, and in some cases control, has been a significant component of this growth in both the sophistication of data and the scope of collections. Briefly described here are some of the international organizations and organizational structures that provide the institutional infrastructure for creating and sustaining international education statistical indicators.

IEA. Collaboration on international assessments began as early as the late 1950s, when a group composed primarily of academics formed the International Association for the Evaluation of Educational Achievement (IEA). In 1965, twelve countries undertook the First International Mathematics Study. Since that time, the IEA has conducted fourteen different assessments covering the topics of mathematics, science, reading, civics, and technology. Findings from IEA's Second International Mathematics Study were the primary justification for the early-1980s declaration that the United States was a "nation at risk." In the 1990s, government ministries of education became increasingly important for both funding and priority setting in these studies.

The results of the Third International Mathematics and Science Study (TIMSS) were widely reported in the United States and served as fuel for the latest educational reform efforts. As governments became increasingly involved in setting the IEA agenda, some key aspects of the research orientation of earlier surveys were no longer funded (e.g., the pre-test/post-test design in the Second International Science Study), while other innovative activities were added. For example, the TIMSS video study conducted in Germany, Japan, and the United States applied some of the most cutting-edge research technology to international assessments. As part of the 1999 repeat of TIMSS (TIMSS-R), additional countries have agreed to have their teachers videotaped, and science classrooms have been added to the mix.

Over time, the IEA assessments have become more methodologically complex, with TIMSS employing the latest testing technology (e.g., item response theory [IRT], multiple imputation). As the technology behind the testing has become more complex, cross-national comparisons of achievement have become widely accepted, and arguments that education is culturally determined or that the tests are invalid, and thus that achievement results cannot be compared across countries, have for the most part disappeared.
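To make the parenthetical reference to item response theory concrete, the sketch below implements the textbook two-parameter logistic (2PL) model, in which the probability of a correct response depends on a student's latent ability and on each item's difficulty and discrimination parameters. This is a generic formulation offered for illustration, not the actual TIMSS scaling procedure.

```python
import math

def p_correct(theta: float, difficulty: float, discrimination: float = 1.0) -> float:
    """Two-parameter logistic (2PL) IRT model: the probability that a student
    with latent ability theta answers correctly an item with the given
    difficulty (b) and discrimination (a) parameters."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# An average student (theta = 0) facing an average item (b = 0) has a 50%
# chance of success; the same student facing a harder item (b = 1.5) does
# far worse, and a highly discriminating item sharpens that difference.
print(f"{p_correct(0.0, 0.0):.2f}")        # 0.50
print(f"{p_correct(0.0, 1.5):.2f}")        # ~0.18
print(f"{p_correct(0.0, 1.5, 2.0):.2f}")   # ~0.05
```

Because individual ability estimates from such models carry substantial uncertainty, assessments like TIMSS report population results through multiply imputed "plausible values" rather than through single student scores, which is where the multiple-imputation machinery mentioned above enters.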

OECD. While the IEA has been the key innovator in the area of education assessment, the OECD has led the development of a cross-nationally comparable system of education indicators. After a failed attempt to initiate an ambitious system of data collection and reporting in the early 1970s, the OECD, with strong support from the United States, undertook the development of a new system of cross-nationally comparable statistical indicators in the late 1980s. The ministers of education of OECD countries agreed at a meeting in Paris in November 1990 that accurate information and data are required for sound decision-making, informed debate on policy, and accountability measures. Ministers also agreed that the data then available lacked comparability and relevance to education policy.

Although led by the OECD, the core of the International Indicators of Education Systems (INES) project was the organization of four country-led developmental networks: Network A on Educational Outcomes, Network B on Student Destinations, Network C on School Features and Processes, and Network D on Expectations and Attitudes towards Education, led by the United States, Sweden, the Netherlands, and the United Kingdom, respectively. The OECD secretariat chairs a technical group on enrollments, graduates, personnel, and finances. These networks (involving up to 200 statisticians, policymakers, and, in some cases, academics) designed indicators, negotiated the definitions for data collections, and supplied data for annual reporting. This model of shared ownership in the development of Education at a Glance (which was at first published biennially and later became an annual publication) contributed to its success. Participants in the networks and the technical group invested the time needed to supply high-quality data because they had a stake in the publication's success.

INES began as a reporting scheme in which administrative databases within countries were mined and aggregated; it has since evolved into an initiative that mounts its own cross-national surveys, including school surveys, public attitudes surveys, adult literacy surveys, and surveys of student achievement. The largest and most expensive project to date is the OECD Programme for International Student Assessment (PISA). PISA is an assessment of reading literacy, mathematical literacy, and scientific literacy, jointly developed by participating countries and administered to samples of fifteen-year-old students in their schools. In 2000 PISA was administered in thirty-two countries, to between 4,500 and 10,000 students per country. Expected outcomes include a basic profile of knowledge and skills among students at the end of compulsory schooling, contextual indicators relating results to student and school characteristics, and trend indicators showing how results change over time. With the results of PISA, the OECD will be able to report, for the first time, country rankings and comparisons based on achievement and context indicators specifically designed for that purpose, rather than on IEA data.

UNESCO. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has been the main source of cross-national data on education since its inception near the end of the Second World War. UNESCO's first questionnaire-based survey of education was conducted in 1950 and covered fifty-seven of its member states. In the 1970s UNESCO organized the creation of the International Standard Classification of Education (ISCED), a major step towards improving the comparability of education data. Although as many as 175 countries regularly report information on their education systems to UNESCO, much of the data reported is widely considered unreliable. Throughout the 1990s the primary analytical report on education published by UNESCO, the World Education Report, based many of its analyses and conclusions on education data collected by agencies other than UNESCO.

Between 1984 and 1996 personnel and budgetary support for statistics at UNESCO declined, and UNESCO's ability to assist member countries in the development of their statistical infrastructure or in the reporting of data was severely limited. In the late 1990s, however, the World Bank and other international organizations, as well as influential member countries such as the Netherlands and the United Kingdom, increased pressure and financial contributions in order to improve the quality of the education data UNESCO collects.

Collaboration between UNESCO and OECD began on the World Bank–financed World Education Indicators (WEI) project, which capitalized on OECD's experience, legitimacy, and status to expand the OECD Indicators Methodology to the developing world. Although this project includes only eighteen nations (nearly fifty if OECD member nations are included), it has helped to raise the credibility of indicator reporting in at least some countries in the developing world. Even though this project has in many ways "cherry-picked" countries having reasonably advanced national education data systems, the collaborative spirit imported from OECD's INES project has been quite effective.

A major step for the newly constituted UNESCO Institute for Statistics will be to take this project to a larger scale. Significantly expanding WEI will be quite a challenge, however, as the financial and personnel costs of improving both the quality of national data collection and reporting and the capacity for international processing and indicator production are likely to exceed the budget and staff capacity of the institute in the short term. The visible success of the WEI project nonetheless shows that interest in high-quality, comparable education indicators extends far beyond the developed countries of the OECD.

Integration of National Resources and Expertise into the Process

Many of the international organizations dedicated to education data collection were in operation well before the renaissance of the statistical indicator in the education sector, but these groups lacked the political power and expertise, found in a number of key national governments, needed to make them what they have recently become. A central part of the story of international data and statistical indicators has been the thorough integration of national governments into the process. As technocratic operations of governance, with their heavy reliance on data to measure problems and evaluate policies, became standard in the second half of the twentieth century, wealthier national governments invested in statistical systems and analysis capabilities. As was the case for the IEA and its massive TIMSS project, several key nations lent crucial expertise and legitimization to the process, factors that were clearly missing in earlier attempts. Although this "partnership" has not always been conflict-free, it has taken international agencies to new technical and resource levels.

The integration of national experts, often from ministries of education or national statistical offices, into international indicator development teams has improved both the quality of the data collected and the national legitimacy of the data reported. A number of decentralized states, including Canada, Spain, and the United States, have used the international indicators produced by OECD as benchmarks for state/provincial indicators reports. As more national governments build significant local control of education into national systems, this use of international indicators at local levels will become more widespread. In the case of Canada, the internationally sanctioned framework provides legitimacy to a specific list of indicators that might not otherwise have gained a sufficient level of agreement among the provinces involved.

The initial release of results from the PISA project will take this one step further, in that the OECD will provide participating countries with reports focused on their national results, much as the National Assessment of Educational Progress (NAEP) produces reports for each of the fifty U.S. states. This reporting scheme will allow participating countries to "release" their national data at the same time as the international data release. The same could easily happen with releases of subnational indicators in conjunction with international releases. This form of simultaneous release is seen as an effective way to create policy debate at a number of levels, as illustrated within the American system by the U.S. National Center for Education Statistics' ability to generate public interest in its releases of achievement indicators from TIMSS and TIMSS-R. International education indicators provide constituencies within national education systems another vantage point from which to effect change and reform.

Conclusions

There have been four main trends behind the massive collection of data and the construction of cross-national statistical indicators in the education sector over the past several decades. These trends are: (1) greater coordination and networks of organizations dedicated to international data collection; (2) integration of national governments' statistical expertise and resources into international statistical efforts that lead to statistical indicators; (3) political use of cross-national comparisons across a number of public sectors; and (4) near universal acceptance of the validity of statistical indicators to capture central education processes.

Although the examples of each factor presented here focus mostly on elementary and secondary schooling, the same could be said for indicators of tertiary education. The only difference is that the development of a wide range of international statistical indicators for higher education (i.e., indicators of higher education systems rather than of the research and development products of higher education) lags behind what has happened for elementary and secondary education. However, there are a number of signs that the higher education sector will incorporate similar indicators of instruction, performance, and related processes in the near future. It is clear that international statistical indicators of education will continue to become more sophisticated and to have a wider impact on policy debates about improving education for some time to come.

BIBLIOGRAPHY

ANNIE E. CASEY FOUNDATION. 2001. Kids Count Data Book 2001: State Profiles of Child Well-Being. Washington, DC: Center for the Study of Social Policy.

BOTTANI, NORBERTO, and TUIJNMAN, ALBERT. 1994. "International Education Indicators: Framework, Development, and Interpretation." In Making Education Count: Developing and Using International Indicators, ed. Norberto Bottani and Albert Tuijnman. Paris: Organisation for Economic Co-operation and Development.

FEDERAL INTERAGENCY FORUM ON CHILD AND FAMILY STATISTICS. 2001. America's Children: Key National Indicators of Well-Being, 2001. Washington, DC: U.S. Government Printing Office.

FERRISS, ABBOTT L. 1988. "The Uses of Social Indicators." Social Forces 66 (3):601–617.

GUTHRIE, JAMES W., and HANSEN, JANET S., eds. 1995. Worldwide Education Statistics: Enhancing UNESCO's Role. Washington, DC: National Academy Press.

HEYNEMAN, STEPHEN P. 1986. "The Search for School Effects in Developing Countries: 1966–1986." Economic Development Institute Seminar Paper No. 33. Washington, DC: The World Bank.

HEYNEMAN, STEPHEN P. 1993. "Educational Quality and the Crisis of Educational Research." International Review of Education 39 (6):511–517.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 1982. The OECD List of Social Indicators. Paris: OECD Social Indicator Development Programme.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 1992. High-Quality Education and Training for All, Part 2. Paris: OECD/CERI.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 2000. Investing in Education: Analysis of the 1999 World Education Indicators. Paris: OECD/CERI.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 2001. Education at a Glance, OECD Indicators 2001. Paris: OECD/CERI.

PURYEAR, JEFFREY M. 1995. "International Education Statistics and Research: Status and Problems." International Journal of Educational Development 15 (1):79–91.

UNITED NATIONS. 1975. Towards a System of Social and Demographic Statistics. New York: United Nations.

UNITED NATIONS EDUCATIONAL, SCIENTIFIC AND CULTURAL ORGANIZATION. 2000. World Education Report 2000: The Right to Education: Towards Education for All Throughout Life. Paris: UNESCO Publishing.

UNITED NATIONS EDUCATIONAL, SCIENTIFIC AND CULTURAL ORGANIZATION and the ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 2001. Teachers for Tomorrow's Schools: Analysis of the World Education Indicators. Paris: UNESCO Publishing/UIS/OECD.

INTERNET RESOURCE

NOLL, HEINZ-HERBERT. 1996. "Social Indicators and Social Reporting: The International Experience." Canadian Council on Social Development. <www.ccsd.ca/noll1.html>.

THOMAS M. SMITH

DAVID P. BAKER
