
International Education Statistics




OVERVIEW
Eugene Owen
Laura Hersh Salganik

THE USE OF INDICATORS TO EVALUATE THE CONDITION OF EDUCATION SYSTEMS
Thomas M. Smith
David P. Baker

OVERVIEW

Comparisons between the education system in the United States and the systems of other countries have become an established element of the public discussion about education policy and practice in the United States. Publications aimed at the education profession often include articles describing approaches in other countries that are relevant to education in the United States, such as a discussion of tuition tax credits in Canada in Education Week and a special section in the Phi Delta Kappan about early childhood education in Europe.



Why is this information useful? Simply put, countries outside the United States provide something akin to a natural social laboratory in which one can observe education policy, practices, and outcomes under a variety of different conditions and environments. Knowing whether education in one country is the same as or different from education in other countries provides a useful perspective for better understanding a country's system, and is also a useful source of ideas. Comparisons of learning outcomes provide a measure of effectiveness not possible with any other approach. Given the prominent role of statistics and indicators for policymaking in the United States, it is not surprising that statistics have become an important source of information for these comparisons. At the beginning of the twenty-first century rich sources of statistics are available that can be used to compare education in the United States with education in other countries. But this is a relatively new development.

The Demand for International Education Statistics

As is the case for many aspects of the history of education in the United States since the mid-1980s, the publication of A Nation at Risk in 1983 was a key event for setting the stage for change–in this case for strengthening international education statistics. A steady decline in national test scores during the 1960s and 1970s, coupled with the perception that foreign industrial and consumer products were superior, had led many Americans to question the performance of U.S. schools. An international perspective was central to the argument of A Nation at Risk, chief among the many reports prepared to document the problem. Citing the superiority of new manufactured goods from Japan, South Korea, and Germany, as well as the level of worker skill that these foreign products represented, the report proclaimed: "If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war" (p. 5). When the report presented its list of indicators of the risk facing the country, the first item was the poor performance of U.S. students in international studies of achievement.

A second major event that increased the importance of international comparisons was the adoption of the National Education Goals in 1990. One of the six goals adopted by President George H. W. Bush and the nation's governors was that "by the year 2000, United States students will be first in the world in mathematics and science." Shortly after this announcement, the National Education Goals Panel was created and charged with producing a series of publications aimed at reporting national and state progress toward achieving the goals. This amplified the need to develop performance measures for the U.S. educational system in an international context, particularly in mathematics and science.

During this period, genuine concern about the future of the U.S. economy and the importance of looking outside the United States for solutions were common themes of policy discussions in a variety of arenas. For example, the National Governors Association report Time for Results (1986) named global economic competition and the poor performance of U.S. students (compared to those of other countries) as reasons why governors should take a more active role in education. America's Choice: High Skills or Low Wages (1990), published by the National Center on Education and the Economy, presented international comparisons of investment in employment and training policies and called for higher U.S. expenditures in this area.

But in spite of the prominence of international comparisons, it was widely recognized that the data were very weak. The reason that A Nation at Risk used studies that were a decade old was that they were the only ones available. Similarly, it was agreed that statistics comparing expenditures for education–which were relevant to perennial questions about costs–were not usable because of their low quality.

Comparability

Why does the international dimension present a particular challenge for statistics of education? The central idea is the notion of comparability. For basic statistics about education at the national level in the United States, there is general agreement about what constitutes the things being measured or counted (e.g., schools, teachers, students, school subjects). Although the decentralization of governance and diversity of approaches and programs require that care be taken when making comparisons among states or local districts, the issue is compounded many times over when making comparisons among countries.

There are many instances in the arena of international education statistics in which differences among countries in fundamental aspects of the system can render statistics inaccurate and misleading. In the United States, for example, public schools are financed by public funds and private schools by private funds (at least for the vast majority of the funding), whereas many countries in Europe finance privately governed schools with public funds. Thus, counting funding of public schools as a measure of a nation's financial support of education would be misleading. In many countries, it is common for staff considered to be teachers to also function in a position equivalent to a principal in small elementary schools. From the U.S. perspective, counting such staff members as teachers would lead to an overcount of staff resources devoted to teaching, an undercount of resources for administration, and misleading student–teacher ratios. Similarly, achievement tests that do not take differences in curricula and practices associated with testing and test-taking into account may inaccurately represent school learning.

In cases where there are common definitions and concepts, it is of equal importance to collect and process the data in a manner that captures the information desired. This often requires that countries recalculate national statistics based on these common definitions. Issues are even more complex when surveys are involved. For example, countries must agree on common practices for selecting comparable samples, setting acceptable response rates, and calculating response rates and margins of error. Another methodological issue is translation, which requires a high level of understanding of the concepts involved. Translation can even affect the difficulty of test items, a factor requiring attention in the development of international assessments.
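To give a concrete sense of the arithmetic behind these survey practices, the sketch below computes a response rate and a margin of error for an estimated proportion. It is a minimal illustration under simple-random-sampling assumptions; the figures are invented, and actual international studies apply their own jointly agreed standards and variance-estimation methods suited to multistage, clustered samples.

```python
import math

# Illustrative figures only; real studies negotiate these standards jointly.
sampled_schools = 200
participating_schools = 170
response_rate = participating_schools / sampled_schools   # 0.85, i.e., 85 percent

# Margin of error for an estimated proportion at 95 percent confidence,
# assuming simple random sampling: z * sqrt(p * (1 - p) / n).
p = 0.60    # e.g., the proportion of students reaching some benchmark
n = 4500    # number of students assessed
margin_of_error = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"response rate: {response_rate:.0%}")
print(f"margin of error: +/- {margin_of_error:.3f}")  # about +/- 0.014

# Clustered school samples are less informative than independent draws of the
# same size, so a design effect would inflate this margin; that adjustment,
# like the acceptable-response-rate threshold itself, is set by agreement
# among the participating countries.
```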

During the 1980s, the generally weak comparability of the international education statistics prepared by the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Organisation for Economic Co-operation and Development (OECD) was well known, and the published works received limited circulation and attention. The comparability of international studies of achievement was also routinely questioned. But with the demand created by the introduction of an international aspect to the education policy questions of the day, the need for improvement was clear.

Country Collaboration

The United States was not alone in its interest in improved statistics for comparing education systems. Similar circumstances–particularly pressure on public services to demonstrate their productivity and efficiency and a new awareness of economic competitiveness in a global market–led to a growing interest in education statistics in other OECD countries. The OECD responded to this demand by initiating the Indicators of Education Systems (INES) program after two preparatory meetings, one hosted by the U.S. Department of Education in 1987, and the other by the French Ministry of Education in 1988.

To improve the quality of international statistics, it was necessary to have the participation of individuals with deep knowledge of their national education systems, access to national data systems, and the motivation and commitment to collaborate on an ongoing basis with those from other countries. The INES project put these elements together. The Netherlands, Scotland, the United States, and Sweden contributed substantial resources to lead working groups on topics such as enrollment in education, finance, learning outcomes, labor force outcomes, school functioning, and attitudes about education. Australia, Austria, and France also made important financial contributions. Meeting at regular intervals, individual delegates, sponsored by their countries, worked through comparability issues and served as contact points in their countries for the preparation and submission of data prepared specifically to meet the international standards. The result was much-improved comparability and a quality control process that was supported by peer review and the personal commitment of participants in the working groups. In addition, the participants gained expertise that enhanced their countries' resources for interpreting the statistics in a national context.

The OECD had extensive involvement in coordinating the process, collecting the data, and preparing and publishing a series of volumes of indicators under the title Education at a Glance: OECD Indicators. The first Education at a Glance (1992) was about 150 pages long, with parallel text in French and English, and several of the indicators were designated as provisional because of general questions about their validity or applicability. In succeeding years, new indicators have been added, as well as extensive background information relevant for interpreting the indicators. The 1995 and 1996 editions included annotated charts prepared by each country describing the organization of its education system, including the types and levels of schools and the ages of students in each. The 2001 edition is almost 400 pages long, includes data from many non-OECD countries, and is available in English, French, and German. Selected editions of Education at a Glance have been translated into Czech, Italian, Japanese, Korean, and Spanish.

The establishment of the Program for International Student Assessment (PISA) highlights the importance of country collaboration in the development of international education statistics. To those working in the INES project, it quickly became clear that there was a lack of coherent data for indicators of student achievement. There was no extant database and no structure designed for governments to collaborate on an ongoing basis for regular data collections. Factors that needed to be overcome included major comparability issues ranging from conceptualization of outcomes to general survey practices; some countries' reluctance to publish indicators of achievement in a limited number of school subjects; and the large cost associated with producing the indicators. After several years of ongoing discussions, a consensus was reached to set up a project in which, for the first time, countries shared the central cost of developing and implementing the assessment (countries typically are responsible for the cost of local data collection). The effort is governed by a board composed of representatives of participating countries and managed by the OECD secretariat. It includes assessments at three-year intervals of reading, mathematics, and science literacy, and of selected cross-curricular competencies such as attitudes and approaches related to learning.

Among the many other accomplishments associated with the country collaboration through the INES project, two that stand out are improvements in the comparability of expenditure data and of the categories for different levels of education (e.g., primary, secondary, tertiary). In the case of the levels of education, UNESCO's International Standard Classification of Education (ISCED), developed during the 1970s, was revised to reflect various changes–for example, growth in continuing education and training outside of education institutions, and programs that straddle the boundaries between upper secondary and postsecondary education. A manual was also developed to facilitate consistent interpretation of the new system.

Studies conducted by the International Association for the Evaluation of Educational Achievement (IEA) also involve collaboration among governments and researchers. Three studies that have contributed to the data available for international education statistics are the Third International Mathematics and Science Study (TIMSS), the Second Information Technology in Education Study (SITES), and the Civic Education Study (CivEd). The Civic Education Study, working in an area in which different countries have different notions about many of the concepts involved (such as democracy and participation of individuals in democracy), relied heavily on international teams throughout the entire project for identifying common core ideas, developing assessment items, and interpreting results.

What Has Been Learned?

Because of the range and depth of international education statistics, it is impossible to provide more than a very brief summary in this limited space. What follows are a few illustrative highlights taken from publications of the OECD, the IEA, and the U.S. National Center for Education Statistics (NCES). NCES publications include explanatory notes and background information useful for interpreting the material from a U.S. perspective. Readers are encouraged to consult the publications directly to find additional statistics relevant to their particular interests.

Context of education. It is widely recognized that there is a strong association between poverty and school achievement. Among the large number of countries participating in one study, the United States had by far the largest percentage of youth who were poor.

Participation in education. Into the 1990s, there was a general impression that young people remained enrolled in school longer in the United States than in other countries. However, statistics show that the percentages of fifteen- to nineteen-year-olds and twenty- to twenty-nine-year-olds enrolled in education in the United States are quite close to the average percentages among OECD countries.

Expenditures. The United States spends a percentage of its gross domestic product (GDP) on education similar to that of other countries. On a per pupil basis, its expenditures are among the highest for elementary and secondary education. In higher education, the United States also spends considerably more per student than any other OECD country.

Teachers and teaching. Teachers in the United States have the highest number of teaching hours, although teachers in a few other countries put in almost as many hours. The student–teacher ratio in the United States is similar to the average of OECD countries in elementary, secondary, and postsecondary education, with the exception of vocational postsecondary education, where there are fewer students per teacher in the United States than in most other countries.

Learning outcomes for school-age youth. In studies of mathematics and science achievement, U.S. students typically score lower relative to other countries as they progress through the grades. The performance of fifteen-year-olds in the United States was similar to the average for countries participating in PISA in reading, mathematics, and science literacy tasks designed to reflect real-life situations (in contrast to school curriculum). Many countries have variation in achievement in the same range as that of the United States, although some have less variation. Some countries with relatively little variation have high achievement, indicating that it is not necessary to "sacrifice" the bottom to have a high average score. Among countries participating in the IEA Civic Education Study, the United States is among the highest in fourteen-year-olds' knowledge of civic content, as well as the highest in civic skills, such as interpreting information.

States and nations. Because states have primary responsibility for education in the United States and are similar in size to many countries, comparisons between the states and other nations are of special interest to policymakers. Some U.S. states and school districts compare favorably in math and science with the highest-scoring countries, while others have scores in the same range as the lowest-performing ones. This pattern is repeated in other indicators that include both states and nations.

Adult literacy. Variation in adult literacy is strongly associated with income variation–countries that have a wider income distribution also have a higher percentage of adults in both high- and low-literacy groups, with the United States having the highest variation in both income and literacy.

Conclusion

Increased demand in the mid-1980s for data to support solid comparisons among the education systems of different countries fueled major advances in the quality and quantity of international education statistics. These advances would not have been possible without an organizational structure that incorporated collaborative working relationships among statisticians and education professionals from different countries. This model was essential throughout the many phases of the work, including agreeing on common definitions of the concepts to be represented by statistics, developing methodologies for surveys and translation, and generating commitment and adherence to high standards and quality control. It will continue to be important to assure comparability as work proceeds to improve and broaden statistics in areas such as learning outcomes for students and adults, expenditures for education, teachers and teaching, and the relationship between education and the labor market.

BIBLIOGRAPHY

BALDI, STEPHANIE; PERIE, MARIANNE; SKIDMORE, DAN; GREENBERG, ELIZABETH; and HAHN, CAROLE. 2001. What Democracy Means to Ninth Graders: U.S. Results from the International IEA Civic Education Study. Washington, DC: U.S. Government Printing Office.

BARRO, STEPHEN. 1997. International Education Expenditure Study: Final Report, Vol. I: Quantitative Analysis of Expenditure Comparability. Washington, DC: National Center for Education Statistics.

BOTTANI, NORBERTO. 2001. "Editorial." Politiques d'Education et de Formation: Analyses et Comparaisons Internationales 3 (3):7–12.

BOTTANI, NORBERTO, and TUIJNMAN, ALBERT. 1994. "International Education Indicators: Framework, Development, and Interpretation." In Making Education Count: Developing and Using International Indicators, ed. Norberto Bottani and Albert Tuijnman. Paris: Organisation for Economic Co-operation and Development.

GUILFORD, DOROTHY, ed. 1993. A Collaborative Agenda for Improving International Comparative Studies in Education. Washington, DC: National Academy Press.

HEYNEMAN, STEPHEN P. 1999. "The Sad Story of UNESCO's Education Statistics." International Journal of Educational Development 19:65–74.

INTERNATIONAL STUDY CENTER AT BOSTON COLLEGE. 2001. TIMSS 1999 Benchmarking Highlights. Boston: International Study Center at Boston College.

LEMKE, MARIANN; BAIRU, GHEDAM; CALSYN, CHRISTOPHER; LIPPMAN, LAURA; JOCELYN, LESLIE; KASTBERG, DAVID; LIU, YUN; ROEY, STEPHEN; WILLIAMS, TREVOR; and KRUGER, THEA. 2001. Outcomes of Learning: Results from the 2000 Program for International Student Assessment of 15-Year-Olds in Reading, Mathematics, and Science Literacy. Washington, DC: U.S. Government Printing Office.

LUBECK, SALLY, ed. 2001. "Early Childhood Education and Care in Cross-National Perspective." Phi Delta Kappan 83 (3):213–254.

MATHESON, NANCY; SALGANIK, LAURA H.; PHELPS, RICHARD P.; PERIE, MARIANNE; ALSALAM, NABEEL; and SMITH, THOMAS M. 1996. Education Indicators: An International Perspective. Washington, DC: U.S. Government Printing Office.

MEDRICH, ELLIOTT A., and GRIFFITH, JEANNE E. 1992. International Mathematics and Science Assessments: What Have We Learned? Washington, DC: National Center for Education Statistics.

NATIONAL CENTER ON EDUCATION AND THE ECONOMY. 1990. America's Choice: High Skills or Low Wages! The Report of The Commission on the Skills of the American Workforce. Washington, DC: National Center on Education and the Economy.

NATIONAL COMMISSION ON EXCELLENCE IN EDUCATION. 1983. A Nation at Risk. Washington, DC: U.S. Government Printing Office.

NATIONAL EDUCATION GOALS PANEL. 1999. The National Education Goals Report: Building a Nation of Learners. Washington, DC: National Education Goals Panel.

NATIONAL GOVERNORS ASSOCIATION. 1986. Time for Results: The Governors' 1991 Report on Education. Washington, DC: National Governors' Association. (ERIC Document Reproduction Service No. ED 279603).

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 1999. Classifying Educational Programmes: Manual for ISCED-97 Implementation in OECD Countries. Paris: Organisation for Economic Co-operation and Development.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 2001a. Education at a Glance: OECD Indicators. Paris: Organisation for Economic Co-operation and Development.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 2001b. Knowledge and Skills for Life: First Results from PISA 2000. Paris: Organisation for Economic Co-operation and Development.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT and STATISTICS CANADA. 2000. Literacy in the Information Age: Final Report of the International Adult Literacy Survey. Paris: Organisation for Economic Co-operation and Development.

PHELPS, RICHARD; SMITH, THOMAS M.; and ALSALAM, NABEEL. 1996. Education in States and Nations: Indicators Comparing U.S. States with Other Industrialized Countries in 1991. Washington, DC: U.S. Government Printing Office.

SAUVAGEOT, CLAUDE. 2001. "Un outil au service des comparaisons internationale: CITE (ISCED)." Politiques d'Education et de Formation: Analyses et Comparaisons Internationale 3 (3):95–118.

SHERMAN, JOEL. 1997. International Education Expenditure Study: Final Report, Vol. II: Quantitative Analysis of Expenditure Comparability. Washington, DC: National Center for Education Statistics.

TORNEY-PURTA, JUDITH; LEHMANN, RAINER; OSWALD, HANS; and SCHULZ, WOLFRAM. 2001. Citizenship and Education in Twenty-Eight Countries: Civic Knowledge and Engagement at Age Fourteen. Amsterdam: Eburon.

U.S. DEPARTMENT OF EDUCATION, NATIONAL CENTER FOR EDUCATION STATISTICS. 1998. The Condition of Education 1998 (NCES 98-013). Washington, DC: U.S. Government Printing Office.

U.S. DEPARTMENT OF EDUCATION, NATIONAL CENTER FOR EDUCATION STATISTICS. 2000. The Condition of Education 2000 (NCES 2000-062). Washington, DC: U.S. Government Printing Office.

U.S. DEPARTMENT OF EDUCATION, NATIONAL CENTER FOR EDUCATION STATISTICS. 2001a. The Digest of Education Statistics 2000 (NCES 2001-034), by T. Snyder. Production Manager, C. M. Hoffman. Washington, DC: U.S. Government Printing Office.

U.S. DEPARTMENT OF EDUCATION, NATIONAL CENTER FOR EDUCATION STATISTICS. 2001b. The Condition of Education 2001. Washington, DC: U.S. Government Printing Office.

U.S. DEPARTMENT OF EDUCATION, NATIONAL CENTER FOR EDUCATION STATISTICS. 2002. The Digest of Education Statistics 2001. Washington, DC: U.S. Government Printing Office.

INTERNET RESOURCES

INTERNATIONAL ASSOCIATION FOR THE EVALUATION OF EDUCATIONAL ACHIEVEMENT. 2002. <www.iea.nl>.

NATIONAL CENTER FOR EDUCATION STATISTICS. 2002. <www.nces.ed.gov>.

EUGENE OWEN

LAURA HERSH SALGANIK

The use of widely published statistical indicators (also referred to as social indicators outside the purely economic realm) of the condition of national education systems has in the early twenty-first century become a standard part of the policymaking process throughout the world. Unlike the usual policy-related statistical analysis, statistical indicators are derived measures, often combining multiple data sources and several statistics, that are uniformly developed across nations, repeated regularly over time, and accepted as summarizing the condition of an underlying complex process. Perhaps the best known of all statistical indicators is the gross national product, which is derived from a statistical formula that summarizes all of the business activity of a nation's economy in one meaningful number. In the closing decades of the twentieth century, international statistical indicators of educational processes made considerable advances in quantity, quality, and acceptance among policymakers.

These cross-national indicators often have significant impact on both the public and the education establishment. In the United States, for example, the New York Times gives high visibility to reports based on indicators of where American students rank in the latest international math or science tests, or those revealing how educational expenditures per student, teacher salaries, or high school dropout rates compare across countries. National education ministries or departments frequently use press releases to put their own spin on statistical indicator reports such as the annual Education at a Glance (EAG), published by the Organisation for Economic Co-operation and Development (OECD). The use of indicators for comparisons and strategic mobilization has become a regular part of educational politics. Dutch teachers, for example, used these indicators to lobby for increases in their salaries after the 1996 EAG indicators of teacher salaries showed that they were not paid as well as their Belgian and German neighbors. Similarly, in the United States, comparisons of a statistical indicator of dropout rates across nations were used in 2000 to highlight comparatively low high school completion rates. In an extreme, but illustrative, case, one nation's incumbent political party requested that the publication of the EAG be delayed until after a parliamentary election because of the potentially damaging news about how its education system compared to other OECD nations.

These examples of the widespread impact of international education statistical indicators are all the more interesting when one considers that an earlier attempt to set up a system of international statistical indicators of education during the 1970s failed. While attempts to create a national, and then an international, system of social indicators (known as the social indicators movement) faltered, early attempts by the OECD to develop statistical indicators on education systems fell apart as idealism about the utility of a technical-functionalist approach to education planning receded.

History of Social Indicators

The social indicators movement, born in the early 1960s, attempted to establish a "system of social accounts" that would allow for cost-benefit analyses of the social components of expenditures already indexed in the National Income and Product Accounts. Many academics and policymakers were concerned about the social costs of economic growth, and social indicators were seen as a means to monitor the social impacts of economic expenditures. Social indicators are defined as time series used to monitor the social system, helping to identify change and to guide efforts to adjust the course of social change. Examples of social indicators include unemployment rates, crime rates, estimates of life expectancy, health status indices such as the average number of "healthy days" in the past month, rates of voting in elections, measures of subjective well-being, and education measures such as school enrollment rates and achievement test scores.

Enthusiasm for social indicators led to the establishment of the Social Science Research Council (SSRC) Center for Coordination of Research on Social Indicators in 1972 (funded by the National Science Foundation) and the initiation of several continuing sample surveys, including the General Social Survey (GSS) and the National Crime Survey (NCS). As reporting mechanisms, the Census Bureau published three comprehensive social indicators data and chart books in 1974, 1978, and 1980. The academic community launched the international journal Social Indicators Research in 1974. Many other nations and international agencies also produced indicator volumes of their own during this period. In the 1980s, however, federal funding cuts led to the discontinuation of numerous comprehensive national and international social indicators activities, including the closing of the SSRC Center. Some have argued that a shift away from data-based decision making towards policy based on conservative ideology during the Reagan administration, coupled with a large budget deficit, helped to pull the financial plug on the social indicators movement. Field-specific indicators continue to be published by government agencies in areas such as education, labor, health, crime, housing, science, and agriculture, and comprehensive surveys of the condition of youth have arisen in both the public and private spheres, such as those by the Annie E. Casey Foundation (2001) and the Forum on Child and Family Statistics (2001); but the systematic public reporting envisioned in the 1960s has largely not been sustained.

Some of the main data collections that grew out of the social indicators movement, including the GSS and NCS, continue, as do a range of longitudinal and cross-sectional surveys in other social areas. On the academic side, a literature involving social indicators has continued to grow, mostly focused on quality-of-life issues. While education is seen as a component of quality of life, it tends to be treated in a fairly rudimentary fashion. For example, of 331 articles published in Social Indicators Research between 1994 and 2000, only twenty-six addressed education in any depth. And although the widely cited Human Development Index compiled by the United Nations Development Programme has education and literacy components, these are limited to basic measures of school enrollments and, arguably, non-comparable country-level estimates of literacy rates, which are often based on census questions about whether someone can read or write. As a subfield of social indicators, however, the collection and reporting of education statistics has expanded rapidly: since the early 1980s in the United States, since the early 1990s in OECD countries, and, more recently, in developing countries.

State of International Education Statistical Indicators Today

Among the current array of statistical indicators of education within and across nations are some that go far beyond the basic structural characteristics and resource inputs, such as student–teacher ratios and expenditures per student, found in statistical almanacs. More data-intensive and statistically complex indicators of participation in education, financial investments, decision-making procedures, public attitudes towards education, differences in curriculum and textbooks, retention and dropout rates in tertiary (higher) education, and student achievement in math, science, reading, and civics have become standard parts of indicators reports. For example, the OECD summarizes total education enrollment through an indicator of the average years of schooling that a five-year-old child can expect under current conditions, which is calculated by summing the net enrollment rates for each single year of age and dividing by one hundred. Unlike the gross enrollment ratios (calculated as total enrollment in a particular level of education, regardless of age, divided by the size of the population in the "official" age group corresponding to that level) that have traditionally been reported in UNESCO statistical yearbooks, the schooling expectancy measures reported by the OECD aggregate across levels of education and increase comparability by weighting enrollment by the size of the population that is actually eligible to enroll.
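A short sketch may make the contrast between the two measures concrete. The enrollment and population counts below are invented for illustration; the actual indicator is computed from national administrative records, so this is only a minimal rendering of the formulas just described.

```python
# Enrollment and total population by single year of age (invented figures).
enrollment_by_age = {5: 950, 6: 990, 7: 995, 15: 940, 16: 880, 17: 780, 18: 450}
population_by_age = {5: 1000, 6: 1000, 7: 1000, 15: 1000, 16: 1000, 17: 1000, 18: 1000}

# School expectancy: the sum of age-specific net enrollment rates. Each term
# is the probability of being enrolled at that age, so the sum is the number
# of years of schooling a child can expect under current conditions (dividing
# by one hundred when the rates are expressed in percent).
school_expectancy = sum(
    enrollment_by_age[age] / population_by_age[age] for age in enrollment_by_age
)

# Gross enrollment ratio: total enrollment at a level, regardless of age,
# divided by the population of the "official" age group for that level.
# Over-age and under-age students can push this ratio above 100 percent,
# one reason it is less comparable across countries.
primary_enrollment_all_ages = 3100   # includes over-age repeaters
official_primary_population = 3000   # e.g., the population aged 5 to 7
gross_enrollment_ratio = 100 * primary_enrollment_all_ages / official_primary_population

print(f"school expectancy: {school_expectancy:.1f} years")      # about 6 years here
print(f"gross enrollment ratio: {gross_enrollment_ratio:.1f}%")  # 103.3%
```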

Examples of other indicators that attempt to summarize complex issues into concise numerical indices include measures of individual and societal rates of return on investments in different levels of education, measures of factors contributing to differences in relative statutory teachers' salary costs per student, and effort indexes for education funding, which adjust measures of public and private expenditures per student by per capita wealth. Furthermore, the OECD is working to develop assessment and reporting mechanisms to compare students' problem-solving skills, their ability to work in groups, and their technology skills. There are few components of the world education enterprise that statisticians, psychometricians, and survey methodologists are not trying to measure and put into summary indicator forms for public consumption.
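One common formulation of such an effort index expresses spending per student as a share of per capita wealth, which puts rich and poor countries on a common scale. The figures below are invented, and the sketch is a simplified illustration of the idea rather than the exact OECD procedure.

```python
# Invented figures in a common currency; real indexes draw on national
# accounts data and agreed definitions of public and private expenditure.
expenditure_per_student = 9_000    # combined public and private spending
gdp_per_capita = 36_000            # a rough proxy for per capita wealth

# Effort index: spending per student as a percentage of GDP per capita.
# Two countries with very different absolute spending levels show the same
# effort if their spending is proportional to their wealth.
effort_index = 100 * expenditure_per_student / gdp_per_capita
print(f"effort index: {effort_index:.1f}% of GDP per capita")  # 25.0%
```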

High-quality indicators require data that are accurate and routinely available. Behind the creation of so many high-quality indicators of national education systems is the routine collection of a wide array of education data in most industrialized countries. In addition to the costs of gathering data and information for national purposes, a large amount of human and financial investment is made to ensure that the data meet standards of comparability across countries. Country-level experts, whether from statistical or policy branches of governments, frequently convene to discuss the kinds of data that should be collected, to reach consensus on the most methodologically sound means of collecting the data, and to construct indicators.

Growth and Institutionalization of International Data Collections

Numerous international organizations collect and report education data, with the range of data types expanding and the complexity of collection and analysis increasing. Hence, the total cost of these collections has increased dramatically since 1990. Government financial support, and in some cases control, has been a significant component of this growth in both the sophistication of data and the scope of collections. Briefly described here are the main international organizations and organizational structures that provide the infrastructure for creating and sustaining international education statistical indicators.

IEA. Collaboration on international assessments began as early as the late 1950s, when a group composed primarily of academics formed the International Association for the Evaluation of Educational Achievement (IEA). In 1965, twelve countries undertook the First International Mathematics Study. Since that time, the IEA has conducted fourteen different assessments covering the topics of mathematics, science, reading, civics, and technology. Findings from IEA's Second International Mathematics Study were the primary evidence behind the early-1980s declaration that the United States was a "nation at risk." In the 1990s, government ministries of education became increasingly important for both funding and priority setting in these studies.

The results of the Third International Mathematics and Science Study (TIMSS) were widely reported in the United States and served as fuel for the latest educational reform efforts. As governments became increasingly involved in setting the IEA agenda, some key aspects of the research orientation of earlier surveys were no longer funded (e.g., the pre-test/post-test design of the Second International Science Study), while other innovative activities were added. For example, the TIMSS video study conducted in Germany, Japan, and the United States applied some of the most cutting-edge research technology to international assessments. As part of the 1999 repeat of TIMSS (TIMSS-R), additional countries agreed to have their teachers videotaped, and science classrooms were added to the mix.

Over time, the IEA assessments have become more methodologically complex, with TIMSS employing the latest testing technology (e.g., item response theory [IRT], multiple imputation). As the technology behind the testing has become more complex, cross-national comparisons of achievement have become widely accepted, and arguments that education is culturally determined or that the tests are invalid, and thus that achievement results cannot be compared across countries, have for the most part disappeared.
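For readers unfamiliar with IRT, the sketch below shows the two-parameter logistic (2PL) response model, one member of the model family used in assessments of this kind. The function name and parameter values are illustrative assumptions, not drawn from TIMSS scaling itself.

```python
import math

def prob_correct(theta: float, a: float, b: float) -> float:
    """Probability that a student of ability theta answers an item correctly,
    where a is the item's discrimination and b its difficulty (2PL model)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A student one standard deviation above average, facing an item of average
# difficulty, has roughly a 77 percent chance of answering correctly here.
print(prob_correct(theta=1.0, a=1.2, b=0.0))  # ~0.768
```

Estimating the ability and item parameters from response data, and handling the fact that each student sees only a subset of items, is what makes the actual scaling methodologically demanding and a task for specialist psychometricians.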

OECD. While the IEA has been the key innovator in the area of education assessment, the OECD has led the development of a cross-nationally comparable system of education indicators. After a failed attempt to initiate an ambitious system of data collection and reporting in the early 1970s, the OECD, with strong support from the United States, undertook the development of a new system of cross-nationally comparable statistical indicators in the late 1980s. The ministers of education of OECD countries agreed at a meeting in Paris in November 1990 that accurate information and data are required for sound decision-making, informed debate on policy, and accountability measures. Ministers also agreed that the data then available lacked comparability and relevance to education policy.

Although led by the OECD, the core of the International Indicators of Education Systems (INES) project was the organization of four country-led developmental networks: Network A on Educational Outcomes, Network B on Student Destinations, Network C on School Features and Processes, and Network D on Expectations and Attitudes towards Education–led by the United States, Sweden, the Netherlands, and the United Kingdom, respectively. The OECD secretariat chairs a technical group on enrollments, graduates, personnel, and finances. These networks–involving up to 200 statisticians, policymakers, and, in some cases, academics–designed indicators, negotiated the definitions for data collections, and supplied data for annual reporting. This model of shared ownership in the development of Education at a Glance (which was at first published biennially and later became an annual publication) contributed to its success. Participants in the networks and the technical group invested the time needed to supply high-quality data because they had a stake in the publication's success.

INES was initially a reporting scheme in which administrative databases within countries were mined and aggregated; it has since evolved into an initiative that mounts its own cross-national surveys, including school surveys, public attitudes surveys, adult literacy surveys, and surveys of student achievement. The largest and most expensive project to date is the OECD Programme for International Student Assessment (PISA). PISA is an assessment of reading literacy, mathematical literacy, and scientific literacy, jointly developed by participating countries and administered to samples of fifteen-year-old students in their schools. In 2000 PISA was administered in thirty-two countries, to between 4,500 and 10,000 students per country. Expected outcomes include a basic profile of knowledge and skills among students at the end of compulsory schooling, contextual indicators relating results to student and school characteristics, and trend indicators showing how results change over time. With the results of PISA, the OECD will be able to report, for the first time, achievement and context indicators specifically designed for country rankings and comparisons (rather than relying on IEA data).

UNESCO. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has been the main source of cross-national data on education since its inception near the end of the Second World War. UNESCO's first questionnaire-based survey of education was conducted in 1950 and covered fifty-seven of its member states. In the 1970s UNESCO organized the creation of the International Standard Classification of Education (ISCED), a major step toward improving the comparability of education data. Although as many as 175 countries regularly report information on their education systems to UNESCO, much of the data reported is widely considered unreliable. Throughout the 1990s the primary analytical report on education published by UNESCO, the World Education Report, based many of its analyses and conclusions on education data collected by agencies other than UNESCO.

Between 1984 and 1996 personnel and budgetary support for statistics at UNESCO declined, and UNESCO's ability to assist member countries in the development of their statistical infrastructure or in the reporting of data was severely limited. In the late 1990s, however, the World Bank and other international organizations, as well as influential member countries such as the Netherlands and the United Kingdom, increased pressure and financial contributions in order to improve the quality of the education data UNESCO collects.

Collaboration between UNESCO and OECD began with the World Bank–financed World Education Indicators (WEI) project, which capitalized on OECD's experience, legitimacy, and status to expand the OECD indicators methodology to the developing world. Although this project includes only eighteen nations (nearly fifty if OECD member nations are included), it has helped to raise the credibility of indicator reporting in at least some countries in the developing world. Even though the project has in many ways "cherry-picked" countries having reasonably advanced national education data systems, the collaborative spirit imported from OECD's INES project has been quite effective.

A major step for the newly constituted UNESCO Institute for Statistics will be to take this project to a larger scale. Significantly expanding WEI will be quite a challenge, however, as the financial and personnel costs of improving both the quality of national data collection and reporting and the international-level processing and production of indicators are likely to exceed the budget and staff capacity of the institute in the short term. The visible success of the WEI project nonetheless shows that interest in high-quality, comparable education indicators extends far beyond the developed countries of the OECD.

Integration of National Resources and Expertise into the Process

Many of the international organizations dedicated to education data collection were in operation well before the renaissance of the statistical indicator in the education sector, but these groups lacked the political power and expertise, found in a number of key national governments, needed to make them what they have recently become. A central part of the story of international data and statistical indicators has been the thorough integration of national governments into the process. As technocratic operations of governance, with their heavy reliance on data to measure problems and evaluate policies, became standard in the second half of the twentieth century, wealthier national governments invested in statistical systems and analysis capabilities. As was the case for the IEA and its massive TIMSS project, several key nations lent crucial expertise and legitimacy to the process, factors that were clearly missing in earlier attempts. Although this "partnership" has not always been conflict-free, it has taken international agencies to new technical and resource levels.

The integration of national experts, often from ministries of education or national statistical offices, into international indicator development teams has improved both the quality of the data collected and the national legitimacy of the data reported. A number of decentralized states, including Canada, Spain, and the United States, have used the international indicators produced by OECD as benchmarks for state/provincial indicators reports. As more national governments build significant local control of education into national systems, this use of international indicators at local levels will become more widespread. In the case of Canada, the internationally sanctioned framework provides legitimacy to a specific list of indicators that might not otherwise have gained a sufficient level of agreement among the provinces involved.

The initial release of results from the PISA project will take this one step further, in that the OECD will provide participating countries with reports focused on their national results, in a way similar to how the National Assessment of Educational Progress (NAEP) produces reports for each of the fifty U.S. states. This reporting scheme will allow participating countries to "release" their national data at the same time as the international data release. The same could easily happen with releases of subnational indicators in conjunction with international releases. This form of simultaneous release is seen as an effective way to create policy debate at a number of levels within the American system, as illustrated by the U.S. National Center for Education Statistics' ability to generate public interest in its release of achievement indicators from TIMSS and TIMSS-R. International education indicators provide constituencies within national education systems another vantage point from which to effect change and reform.

Conclusions

There have been four main trends behind the massive collection of data and the construction of cross-national statistical indicators in the education sector over the past several decades. These trends are: (1) greater coordination and networks of organizations dedicated to international data collection; (2) integration of national governments' statistical expertise and resources into international statistical efforts that lead to statistical indicators; (3) political use of cross-national comparisons across a number of public sectors; and (4) near universal acceptance of the validity of statistical indicators to capture central education processes.

Although the examples presented here of each factor focus more on elementary and secondary schooling, the same could be said for indicators of tertiary education. The only difference is that the development of a wide range of international statistical indicators for higher education (i.e., indicators of higher education systems instead of the research and development products of higher education) lags behind what has happened for elementary and secondary education. However, there are a number of signs that the higher education sector will incorporate similar indicators of instruction, performance, and related processes in the near future. It is clear that international statistical indicators of education will continue to become more sophisticated and to have a wider impact on policy debates about improving education for some time to come.

BIBLIOGRAPHY

ANNIE E. CASEY FOUNDATION. 2001. Kids Count Data Book 2001: State Profiles of Child Well-Being. Washington, DC: Center for the Study of Social Policy.

BOTTANI, NORBERTO, and TUIJNMAN, ALBERT. 1994. "International Education Indicators: Framework, Development, and Interpretation." In Making Education Count: Developing and Using International Indicators, ed. Norberto Bottani and Albert Tuijnman. Paris: Organisation for Economic Co-operation and Development.

FEDERAL INTERAGENCY FORUM ON CHILD AND FAMILY STATISTICS. 2001. America's Children: Key National Indicators of Well-Being, 2001. Washington, DC: U.S. Government Printing Office.

FERRISS, ABBOTT L. 1988. "The Uses of Social Indicators." Social Forces 66 (3):601–617.

GUTHRIE, JAMES W., and HANSEN, JANET S., eds. 1995. Worldwide Education Statistics: Enhancing UNESCO's Role. Washington DC: National Academy Press.

HEYNEMAN, STEPHEN P. 1986. "The Search for School Effects in Developing Countries: 1966–1986." Economic Development Institute Seminar Paper No. 33. Washington DC: The World Bank.

HEYNEMAN, STEPHEN P. 1993. "Educational Quality and the Crisis of Educational Research." International Review of Education 39 (6):511–517.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 1982. The OECD List of Social Indicators. Paris: OECD Social Indicator Development Programme.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 1992. High-Quality Education and Training for All, Part 2. Paris: OECD/CERI.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 2000. Investing in Education: Analysis of the 1999 World Education Indicators. Paris: OECD/CERI.

ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 2001. Education at a Glance, OECD Indicators 2001. Paris: OECD/CERI.

PURYEAR, JEFFREY M. 1995. "International Education Statistics and Research: Status and Problems." International Journal of Educational Development 15 (1):79–91.

UNITED NATIONS. 1975. Towards a System of Social and Demographic Statistics. New York: United Nations.

UNITED NATIONS EDUCATIONAL, SCIENTIFIC AND CULTURAL ORGANIZATION. 2000. World Education Report 2000–The Right to Education: Towards Education for All Throughout Life. Paris: UNESCO Publishing.

UNITED NATIONS EDUCATIONAL, SCIENTIFIC AND CULTURAL ORGANIZATION and the ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT. 2001. Teachers for Tomorrow's Schools: Analysis of the World Education Indicators. Paris: UNESCO Publishing/UIS/OECD.

INTERNET RESOURCE

NOLL, HEINZ-HERBERT. 1996. "Social Indicators and Social Reporting: The International Experience." Canadian Council on Social Development. <www.ccsd.ca/noll1.html>.

THOMAS M. SMITH

DAVID P. BAKER
