
Cost Effectiveness in Education - Methodology, Examples, Use of Cost-Effectiveness Analysis


Cost-effectiveness analysis is an evaluation tool designed to assist in choosing among alternative courses of action or policies when resources are limited. Most educational decisions face constraints in the availability of budgetary and other resources. Therefore, limiting evaluation to the educational consequences of alternatives alone, without considering their costs, provides an inadequate basis for decision-making. Some alternatives may be more costly than others for the same results, meaning that society must sacrifice more resources to obtain a given end. It is desirable to choose those alternatives that are least costly for reaching a particular objective or that have the largest impact per unit of cost. This is intuitively obvious: the most cost-effective solution frees up resources for other uses, or allows a greater impact for any given investment, in comparison to a less cost-effective solution.

Applying this to educational interventions, there are a host of options from which schools, school districts, and higher education institutions can choose to improve educational outcomes. Many have shown at least some evidence of effectiveness, although the standards of evidence vary considerably. Thus, at the very least, consistent standards of evidence are needed to compare the competing alternatives. But estimates of the costs of the alternatives are needed as well. Even if one alternative is 10 percent more effective than another, it will not be preferred if it is twice as costly. Thus, both costs and effectiveness must be known in order to make good public policy choices.
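The arithmetic behind that comparison can be made concrete with a minimal sketch. The figures below are purely illustrative, taken only from the hypothetical "10 percent more effective but twice as costly" case in the text, not from any study:

```python
# Illustrative figures only: alternative A is 10 percent more effective
# than alternative B but costs twice as much per student.
effect_a, cost_a = 1.10, 200.0   # achievement gain, dollars per student
effect_b, cost_b = 1.00, 100.0

# Effectiveness obtained per dollar spent.
per_dollar_a = effect_a / cost_a
per_dollar_b = effect_b / cost_b

preferred = "A" if per_dollar_a > per_dollar_b else "B"
print(preferred)  # B: the cheaper alternative delivers more gain per dollar
```

Despite its higher effectiveness, alternative A yields only about half as much achievement gain per dollar, so on cost-effectiveness grounds B would be preferred.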

Before reviewing briefly the methodology of cost-effectiveness analysis, it is important to differentiate it from a closely related evaluation tool, cost-benefit analysis. The approach to measuring costs is similar for both techniques, but in contrast to cost-effectiveness analysis where the results are measured in educational terms, cost-benefit analysis uses monetary measures of outcomes. This approach has the advantage of being able to compare the costs and benefits in monetary values for each alternative to see if the benefits exceed the costs. It also enables a comparison among projects with very different goals as long as both costs and benefits can be placed in monetary terms. In education, cost-benefit analysis has been used in cases where the educational outcomes are market-oriented such as in vocational education or in consideration of the higher income produced by more or better education. It has also been used in cases where a variety of benefits can be converted into monetary values such as in the noted study of the Perry Preschool Program discussed in W. Steven Barnett's 1996 book. In most educational interventions, however, the results are measured in educational terms rather than in terms of their monetary values.


The method of conducting a cost-effectiveness analysis can be summarized briefly, but it is best to refer to more extensive treatments of the subject if a study is being contemplated (for example, Cost-Effectiveness Analysis, by Henry M. Levin and Patrick J. McEwan). Cost-effectiveness analysis begins with a clear goal and a set of alternatives for reaching that goal. Comparisons can be made only for alternatives that have similar goals, such as improvement of achievement in a particular subject or reduction in absenteeism or in dropouts. A straightforward cost-effectiveness analysis cannot compare options with different goals and objectives, any more than a standard type of evaluation could compare results in mathematics with results in creative writing. Alternatives being assessed should be options for addressing a specific goal where attainment of the goal can be measured by a common criterion such as an achievement test. It should be noted that a more complex, but related, form of analysis, cost-utility analysis, can be used to assess multiple objectives.

In almost all respects, measuring the effectiveness of alternatives for purposes of cost-effectiveness analysis is no different than for a traditional evaluation. Experimental or quasi-experimental designs can be used to ascertain effectiveness, and such studies should be of a quality adequate to justify reasonably valid conclusions. If a study of effectiveness does not meet reasonable standards in terms of its validity, there is nothing in the cost-effectiveness method that will rescue the result. What cost-effectiveness analysis adds is the ability to consider the results of different alternatives relative to the costs of achieving those results. It does not change the criteria for what is a good effectiveness study.

The concept of costs that is used in cost-effectiveness studies is one that is drawn from economics, namely, opportunity cost. When a resource is used for one purpose, individuals or society lose the opportunity to use that resource in some alternative use. In general, the concept of opportunity cost is viewed as the value of a resource in its best alternative use. This may differ from the everyday understanding of what a cost is. For example, many school districts will refer to an unused facility as having no cost to the district if it is used for a new program. That facility, however, has value in alternative use in the sense that it could be sold or leased in the market or used for other purposes that have value. In this sense it is not "free." If the school district uses it for a new program, it sacrifices the potential income that the facility could yield in the marketplace or the value to other programs that could use the facility.

There is a standard methodology for measuring the cost of an intervention in cost-effectiveness analysis. The ingredients required to replicate the interventions are specified for all alternatives. Most interventions require personnel, facilities, materials, equipment, and other inputs such as client time. Using these categories as organizing rubrics, the ingredients are listed in terms of both quality and quantity such as, for the personnel category, the number of full-time teachers and their qualifications as well as other staff. Information on ingredients is collected through interviews, reports, and direct observations.

When all of the ingredients are accounted for, their cost values are determined. There are a variety of ways to estimate these costs. In the case where ingredients are purchased in competitive marketplaces, the costs are readily obtainable. Of course, the total costs of personnel include both salaries and the employee benefits. Other approaches are often used to estimate the value of facilities and equipment. In general, the technique for measuring costs is to ascertain their annual value. Because facilities and equipment have a life that is greater than one year, the annual value is derived through determining annual depreciation and interest costs. There are standard methods for ascertaining the annualized value of costs for ingredients.
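One common way to annualize a capital cost combines depreciation and forgone interest in a single capital-recovery factor. A minimal sketch, assuming a hypothetical $500,000 facility with a thirty-year life and a 5 percent interest rate (all figures illustrative):

```python
def annualization_factor(interest_rate: float, lifetime_years: int) -> float:
    """Capital-recovery factor: converts a one-time cost into an equal
    annual charge covering both depreciation and forgone interest."""
    r, n = interest_rate, lifetime_years
    return r * (1 + r) ** n / ((1 + r) ** n - 1)

# Hypothetical facility: $500,000 purchase price, 30-year life, 5% interest.
facility_cost = 500_000
annual_facility_cost = facility_cost * annualization_factor(0.05, 30)
# roughly $32,500 per year -- not "free," even if the building is already owned
```

Note that simply dividing the purchase price by its lifetime would understate the annual cost, because it ignores the interest the tied-up funds could otherwise earn.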

These costs are summed to obtain total annual costs, and they are usually divided by the number of students to get an average cost per student that can be associated with the effectiveness of each intervention. The ratio of cost per unit of effectiveness can then be compared across projects by combining the effectiveness results with costs. Alternatives with the largest effectiveness relative to cost are usually given highest priority in decision-making, although other factors such as ease of implementation or political resistance need to be considered. The cost analysis can also be used to determine the burden of cost among different government or private entities where each alternative has different possibilities in terms of who provides the ingredients. In this respect it should be noted that the total cost of an intervention must include even volunteers and donated resources, although the cost to the sponsor may be reduced by others sharing the cost burden through providing resources in-kind.
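Putting the pieces together, the cost-effectiveness ratio for each alternative is simply annual cost per student divided by the measured effect. A sketch with hypothetical inputs (the names and figures below are illustrative placeholders, not results from the studies discussed in this article):

```python
# Hypothetical alternatives: annual cost per student (dollars) and
# measured achievement gain (test-score units). Illustrative values only.
alternatives = {
    "peer tutoring":     {"cost": 300.0, "gain": 0.60},
    "computer-assisted": {"cost": 150.0, "gain": 0.25},
    "longer school day": {"cost": 100.0, "gain": 0.10},
}

# Cost per unit of effectiveness: lower means more cost-effective.
for a in alternatives.values():
    a["cost_per_unit"] = a["cost"] / a["gain"]

# Rank alternatives from most to least cost-effective.
ranking = sorted(alternatives, key=lambda k: alternatives[k]["cost_per_unit"])
print(ranking)
```

In this sketch the most expensive option still ranks first because its larger effect more than compensates for its cost, which is exactly the kind of reversal that looking at costs or effects alone would miss.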


The application of cost-effectiveness analysis can best be understood by providing examples of its use. In a 1984 study, Bill Quinn, Adrian Van Mondfrans, and Blaine R. Worthen examined the cost-effectiveness of two different mathematics curricula. One approach was based upon traditional textbook instruction. The other was a locally developed curriculum that emphasized highly individualized instruction with special methods for teaching mathematics concepts. With respect to effectiveness, the latter curriculum was found to be more effective in terms of mathematics achievement, on average, than the traditional program. It was also learned that the lower the socioeconomic status (SES) of the student, the greater were the achievement advantages of the innovative program.

But the innovative program had a cost that was about 50 percent higher per student than the traditional one. The question is whether the additional achievement justified the higher cost. The evaluators found that the cost per raw score point on the Iowa Tests of Basic Skills was about 15 percent less for the innovative program than for the traditional one, showing that the higher achievement more than compensated for the higher cost. For low SES students the cost per point of the innovative program was less than 40 percent that of the traditional program. For high SES students, however, the traditional program was slightly more cost-effective. This study demonstrates the value of cost-effectiveness analysis and its usefulness as an evaluation technique across different types of students. In a low SES school or district the innovative program was far superior in terms of its cost-effectiveness. In a high SES school or district, the traditional program might be preferred on cost-effectiveness grounds.

One of the most comprehensive cost-effectiveness studies compared four potential interventions in the elementary grades: reductions in class size in a range between twenty and thirty-five students per class, peer tutoring, computer-assisted instruction, and longer school days. The measures of educational effectiveness included both mathematics and reading achievement. Tutoring costs per student were highest, followed by decreases in class size from thirty-five to twenty, computer-assisted instruction, and longer school days. The high costs for peer tutoring are a result of the cost of adult coordinators who must organize and supervise the tutoring activities of effective programs. Effectiveness measures were taken from evaluation studies that had focused on the achievement gains associated with each type of intervention. Although peer tutoring had a high cost, it also had very high effectiveness and the highest cost-effectiveness. In general, computer-assisted instruction was second in cost-effectiveness, with class size reduction and longer school days showing the lowest cost-effectiveness. Results differed somewhat between reading and mathematics, but the cost-effectiveness of reduced class size and of longer school days was consistently lower than that of peer tutoring and computer-assisted instruction.

A study in northeastern Brazil undertook a cost-effectiveness analysis of different approaches to school improvement. A range of potential school improvements was compared to ascertain effects on student achievement. These included teacher-training programs, higher salaries to attract better teaching talent, better facilities, and greater provision of student textbooks and other materials. The authors used statistical models to determine the apparent impact of changes in these inputs on Portuguese language achievement for second graders. Costs were estimated using the ingredients method outlined above. Effectiveness relative to cost was highest for the provision of more instructional materials and lowest for raising teacher salaries. Given the very tight economic resources available for improving schooling in Brazil, this type of study provides valuable guidance for those people making resource decisions.

Use of Cost-Effectiveness Analysis

Studies of the effectiveness of educational interventions are very common; studies of their cost-effectiveness are rare. What might account for this discrepancy? There may be many reasons. Evaluators of social programs rarely have a background in cost analysis, and few programs or textbooks in educational evaluation provide training in cost-effectiveness analysis. Decision makers, too, are often unfamiliar with cost-effectiveness analysis, which limits their ability to evaluate and use such studies. Yet in the early 1980s the field of health was also limited in terms of both the production and use of cost-effectiveness studies. By the early twenty-first century, the concept had been widely applied to health decisions in response to severe resource stringencies in health care. Because the field of education is pressed by similar resource constraints, there may be increased development and use of cost-effectiveness techniques in educational decision-making.


BIBLIOGRAPHY

BARNETT, W. STEVEN. 1996. Lives in the Balance: Age-27 Benefit-Cost Analysis of the High/Scope Perry Preschool Program. Ypsilanti, MI: High/Scope Press.

GOLD, MARTHE, ed. 1996. Cost-Effectiveness in Health and Medicine. New York: Oxford University Press.

HARBISON, RALPH W., and HANUSHEK, ERIC. 1992. Educational Performance of the Poor: Lessons from Rural Northeast Brazil. New York: Oxford University Press.

LEVIN, HENRY M. 2001. "Waiting for Godot: Cost-Effectiveness Analysis in Education." In Evaluation Findings that Surprise, ed. Richard Light. San Francisco: Jossey-Bass.

LEVIN, HENRY M.; GLASS, GENE V.; and MEISTER, GAIL. 1987. "Cost-Effectiveness of Computer Assisted Instruction." Evaluation Review 11 (1):50–72.

LEVIN, HENRY M., and McEWAN, PATRICK J. 2001. Cost-Effectiveness Analysis, 2nd edition. Thousand Oaks, CA: Sage.

ORR, LARRY L. 1999. Social Experiments. Thousand Oaks, CA: Sage.

QUINN, BILL; VAN MONDFRANS, ADRIAN; and WORTHEN, BLAINE R. 1984. "Cost-Effectiveness of Two Math Programs as Moderated by Pupil SES." Educational Evaluation and Policy Analysis 6 (1):39–52.

SHADISH, WILLIAM R.; COOK, THOMAS D.; and CAMPBELL, DONALD T. 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin.


