
Assessment Tools

Technology Based



Assessment methods can be learning opportunities for students, though identifying methods that accomplish this can be challenging. Some new instructional methods may target learning outcomes that traditional assessment methods fail to measure. Automated assessment methods, such as multiple-choice and short-answer questions, work well for testing the retrieval of facts, the manipulation of rote procedures, the solving of multi-step problems, and the processing of textual information. With carefully designed multiple-choice test items it is possible to have students demonstrate their ability to perform causal reasoning and solve multi-step problems. However, students' participation in these kinds of traditional assessment activities does not necessarily help them "learn" and develop complex skills.



What students and teachers need are multiple opportunities to apply new information to complex situations and to receive feedback on progress toward the ability to synthesize and communicate ideas and to systematically approach and solve problems. Technologies are emerging that can assess students' ability to gather, synthesize, and communicate information in a way that helps improve their understanding and that informs teachers how to improve their instruction. Several instructional techniques and technologies have been designed to help students develop these complex skills: tools that develop causal reasoning, diagnose problem-solving abilities, and facilitate writing.

Developing Causal Reasoning

Volumes of information can be shared efficiently by using a graphical image, and students can demonstrate what they know and understand by integrating ideas into graphical form. One example of information portrayed graphically is a common street map. Expert mapmakers communicate a wealth of information about the complicated network of roads and transportation routes that connect the various locations in a city. Mapmakers use colored lines to indicate which roads have higher speed limits, such as expressways and highways. People can therefore use a map as a tool for making decisions about a trip, planning the fastest route rather than the shortest one, which may include city roads with lower speed limits or potential congestion. A road map efficiently illustrates the relative location of one place to another, as well as information about the roads that connect these locations.
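In computational terms, the fastest-route decision a map supports is a shortest-path search over travel times rather than distances. A minimal sketch, using a hypothetical road network whose edge weights are illustrative travel times:

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm over travel times (hours), not distances."""
    queue = [(0.0, start, [start])]  # (time so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, hours in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + hours, neighbor, path + [neighbor]))
    return None

# Hypothetical network: the expressway leg covers more distance but less time.
roads = {
    "home": [("expressway", 0.2), ("city_street", 0.1)],
    "expressway": [("office", 0.3)],
    "city_street": [("office", 0.9)],  # congested surface roads
}
print(fastest_route(roads, "home", "office"))  # (0.5, ['home', 'expressway', 'office'])
```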

Learners can share what they know about a complex topic by creating a concept map, which illustrates the major factors of a topic and descriptive links detailing the relationships between these factors. For example, a river is a complex ecosystem containing a variety of elements, such as fish, macroinvertebrates, plants, bacteria, and oxygen, which are highly dependent on one another. Changes in any of these elements can have a ripple effect that is difficult to determine without some method to represent the links between the elements. Scientists often create a concept map of a system to help keep track of these interdependencies in a complex system. A concept map such as Figure 1 can help students notice when their intuitions are not enough to make sense of a complex situation. Therefore, creating a concept map can be a very authentic activity for students to do as they explore the intricacies of science.
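In code, such a map reduces to a set of labeled links between factors. A minimal sketch, with factor and label names that are illustrative rather than taken from Figure 1:

```python
# A concept map as (source factor, link label, target factor) triples.
river_map = {
    ("fish", "consume", "plants"),
    ("plants", "produce", "oxygen"),
    ("bacteria", "consume", "oxygen"),
}

def factors(concept_map):
    """All factors (nodes) mentioned anywhere in the map."""
    return {end for source, _, target in concept_map for end in (source, target)}

print(sorted(factors(river_map)))  # ['bacteria', 'fish', 'oxygen', 'plants']
```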

Concept-mapping activities also provide an excellent opportunity for assessment. Students can demonstrate their current conceptions of a system at various stages of their inquiry. A simple method of evaluating concept maps is to compare what the learners create with what an expert creates. A point can be given for each relevant element and link identified by the student, and a second point can be added for correctly labeling the link (e.g., produce, as in "plants produce oxygen"). A common observation is that when students begin exploring a topic area like ecosystems, their concept maps contain only a few elements, many of which are irrelevant, and very few links or link labels. If they do include links, they are often unable to describe what the links are, though they know there is some dependency between the factors. As students investigate more about a system and how it works, they are often able to redraw their maps to include the relevant elements, links, and labels that illustrate the interdependence of the elements of the system. However, grading these maps multiple times can be very time-consuming for a teacher.

FIGURE 1. Concept map of a river ecosystem.

New computer software has been developed to provide students with a simple interface for creating hierarchical concept maps. The software can also score students' performance on these concept maps, depending on the goals of the instruction. For example, students' concept maps can be compared with those of experts for completeness and accuracy. An expert's model of a system would include a complete list of factors and named links. Comparing a student's map to an expert's map provides a method of identifying whether students know which factors are relevant, as well as the relationships between those factors.
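The rubric described earlier (a point for each relevant element and link, a second point for a correct label) is straightforward to automate. A minimal sketch reusing the triple representation above; this illustrates the scoring idea, not the actual software's algorithm:

```python
def score_concept_map(student, expert):
    """One point per expert factor the student includes, one per link
    (ignoring its label), and a second point when the label matches too."""
    def parts(cmap):
        facts = {f for s, _, t in cmap for f in (s, t)}
        links = {(s, t) for s, _, t in cmap}
        return facts, links, set(cmap)

    s_facts, s_links, s_labeled = parts(student)
    e_facts, e_links, e_labeled = parts(expert)
    return (len(s_facts & e_facts)         # relevant factors
            + len(s_links & e_links)       # relevant links
            + len(s_labeled & e_labeled))  # correctly labeled links

expert = {("plants", "produce", "oxygen"), ("fish", "consume", "plants")}
student = {("plants", "make", "oxygen"), ("fish", "consume", "plants")}
print(score_concept_map(student, expert))  # 3 factors + 2 links + 1 label = 6
```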

Causal maps are similar to concept maps but add information about how one factor influences another. For example, the relationships "fish consume plants" and "plants produce oxygen" in Figure 1 could also be expressed as "fish decrease plants" and "plants increase oxygen." The visual representation, with qualitative information about the relationships between factors, gives students an illustration they can use to predict what will happen to the system when one of the factors changes. They can use the causal map as a tool to answer the question "What happens if we add a larger number of fish into a river?" by following the increase and decrease links to derive a hypothesis about what might happen to the other factors.
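Following the increase and decrease links mechanically is what makes a causal map computable. A minimal sketch of that qualitative propagation; the links and signs are illustrative:

```python
# Causal links as (source, effect, target), where effect is +1 (increases)
# or -1 (decreases).
causal_map = [
    ("fish", -1, "plants"),    # fish decrease plants
    ("plants", +1, "oxygen"),  # plants increase oxygen
]

def predict(causal_map, changed, direction, seen=None):
    """Propagate a change (+1 = more, -1 = less) along the causal links."""
    seen = seen or {changed}
    effects = {}
    for source, sign, target in causal_map:
        if source == changed and target not in seen:
            effects[target] = direction * sign
            seen.add(target)
            effects.update(predict(causal_map, target, direction * sign, seen))
    return effects

# "What happens if we add a larger number of fish into a river?"
print(predict(causal_map, "fish", +1))  # {'plants': -1, 'oxygen': -1}
```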

Causal maps also provide a method of using technology as both an instructional tool and an assessment tool to measure students' understanding of a complex system. A research group at the Learning Sciences Institute at Vanderbilt University (formerly the Learning Technology Center) has created a computer system, called teachable agents, that provides students with a method to articulate what they know and to test their ideas by helping a virtual agent use their knowledge to answer questions. Students teach the agent how a particular system works by creating a causal map of the system, which becomes the agent's representation of its knowledge. Testing how well the agent has learned is accomplished by asking it questions about the relationships in the system. The agent reasons through questions using the causal map the students have provided it.

As the agent reasons through a question, the factors and links it uses are highlighted to illustrate what information informs its answer. If the causal map is incomplete or contains contradictory information, the agent explains that it does not know the answer or is confused about what to do next. The feedback from watching the agent "think" about the problem helps students identify what knowledge is missing or incorrectly illustrated in their causal map. Students thus learn by having to debug how their agent thinks about the world. This kind of computer tool tests a student's understanding of the processes associated with a system and provides an automatic method of self-assessment.
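Continuing the causal-map sketch above, the agent's behavior, including its "I don't know" and "I'm confused" responses, can be approximated by collecting the sign of every causal path between two factors. This illustrates the idea, not the actual teachable-agents implementation:

```python
def agent_answer(causal_map, cause, effect):
    """Answer 'what happens to <effect> if <cause> increases?' by
    collecting the sign of every causal path from cause to effect."""
    def path_signs(node, sign, seen):
        if node == effect:
            return [sign]
        found = []
        for source, s, target in causal_map:
            if source == node and target not in seen:
                found += path_signs(target, sign * s, seen | {target})
        return found

    signs = set(path_signs(cause, +1, {cause}))
    if not signs:
        return "I don't know: my map has no path from %s to %s." % (cause, effect)
    if len(signs) > 1:
        return "I'm confused: my map says %s both increases and decreases." % effect
    return "%s will %s." % (effect, "increase" if +1 in signs else "decrease")

# Using the causal_map from the previous sketch:
print(agent_answer(causal_map, "fish", "oxygen"))  # oxygen will decrease.
```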

Diagnosing Problem-Solving Abilities

Problem solving is a process that incorporates a wide range of knowledge and skills, including identifying problems, defining the source of a problem, and exploring potential solutions. Many challenging problems, such as designing a house, creating a business plan, diagnosing a disease, troubleshooting a circuit, or analyzing how something works, involve a range of activities. Such situations require the ability to make decisions using available information, as well as the inquiry process necessary to locate new information. The process can also include making reasonable assumptions that help constrain a problem, making it easier to identify a potential solution. Novices often lack the background knowledge to make these decisions and instead rely on trial and error to search for solutions. If a teacher could watch each student solve problems and ask why the student made certain decisions, the teacher could learn more about what students understand and monitor their progress toward developing good problem-solving skills.

The IMMEX system, created by Ron Stevens at UCLA, is a web-based problem-solving simulation environment that tracks many of the decisions a person makes while attempting to solve a problem. Stevens initially created IMMEX to help young immunologists practice their clinical skills. These interns are given a case study detailing a patient's symptoms, and they must make a range of decisions to efficiently and conclusively determine what is wrong with the patient. They choose from a range of resources, including lab tests, experts' comments, and the patient's answers to questions, to gather evidence supporting a specific diagnosis. Each decision can carry a cost in both time and money, so the interns must use their current medical knowledge to make good decisions about which resources to use and when to use them. The IMMEX system tracks these decisions and reports them in the form of a node-and-link graph (visually similar to a concept map) that indicates the order in which the resources were accessed. In addition, a neural network can compare an intern's decision path with that of an expert doctor to identify where the intern is making poor decisions. Students can use these traces to evaluate the strategies they use to solve a problem and to learn about more optimal strategies, and an instructor can use them to identify common errors made by the students. The result is a system that gives students the opportunity to solve complex problems and receive automated feedback they can use to improve their performance, while professors can use it to refine their instruction to better meet students' needs. IMMEX now has programs created for K–12 education.
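At its core, the data IMMEX records is an ordered trace of resource accesses. IMMEX analyzes such traces with neural networks; the sketch below substitutes a plain edit-distance comparison against a hypothetical expert path, simply to illustrate how a decision trace can be scored:

```python
def edit_distance(a, b):
    """Minimum insertions, deletions, and substitutions to turn a into b."""
    dp = list(range(len(b) + 1))  # distances for the empty prefix of a
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete x
                                     dp[j - 1] + 1,    # insert y
                                     prev + (x != y))  # substitute
    return dp[-1]

# Hypothetical resource-access traces; the resource names are invented.
expert_path = ["history", "exam", "lab:blood", "diagnosis"]
intern_path = ["lab:blood", "lab:xray", "history", "exam", "lab:blood", "diagnosis"]
print(edit_distance(intern_path, expert_path))  # 2: two premature lab tests
```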

Facilitating Writing

Writing is a fundamental skill that requires careful use of language to communicate ideas. Learning to write well takes practice and feedback on content, form, style, grammar, and spelling. Essay and report writing are therefore critical assessment tools used to capture students' ability to bring together ideas related to a course of study. However, a teacher can only provide a limited amount of feedback on each draft of a student's essay. Therefore, the teacher's feedback may consist of short comments in the margin, punctuation and grammar correction, or a brief note at the end summarizing what content is missing or what ideas are still unclear. Realistically, a teacher can only give this feedback on a single draft before students hand in a final version of their essays. Most word processors can help students check their spelling and some mechanical grammar errors, which can help reduce the load on the teacher. What students need is a method for reflecting on the content they've written.

Latent semantic analysis (LSA) has great potential for assisting students in evaluating the content of their essays. LSA can correlate the content of a student's essay with the content of experts' writings (from textbooks and other authoritative sources). The program uses a statistical technique to evaluate the language experts use to communicate ideas in their published writings on a specific topic, and students' essays are evaluated with the same technique. LSA can compare each student's writing with the experts' writing and create a report indicating how well the paper correlates in content on a scale from 1 to 5. The numerical output does not give students specific feedback on what content needs to change, but it helps them identify when more work needs to be done. Students can rewrite and resubmit their papers to the LSA system as many times as necessary to improve the ranking. The result should be final essays of much higher content quality when students hand them in to the teacher. In addition, students must take on a larger role in evaluating their own work before handing in the final project, allowing the teacher to spend more time evaluating the content, creativity, and synthesis of ideas.
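The statistical technique behind LSA reduces a term-document matrix to a low-dimensional semantic space and measures similarity there. A minimal sketch using scikit-learn, with an invented two-document expert corpus and a crude rescaling to the 1-5 range; the real system is trained on far larger corpora:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical expert writings on the topic, plus one student essay.
expert_texts = [
    "Plants produce oxygen through photosynthesis, supporting fish.",
    "Bacteria consume oxygen as they decompose organic matter in rivers.",
]
student_essay = "Fish need the oxygen that river plants make."

# LSA: a term-document matrix reduced to a low-dimensional semantic space.
matrix = TfidfVectorizer(stop_words="english").fit_transform(expert_texts + [student_essay])
lsa = TruncatedSVD(n_components=2).fit_transform(matrix)

# Similarity of the essay to the expert writings, crudely rescaled to 1-5.
similarity = cosine_similarity(lsa[-1:], lsa[:-1]).max()
print(round(1 + 4 * max(similarity, 0.0), 1))
```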

Summary

Assessment of abilities such as problem solving, written communication, and reasoning can be a difficult and time-consuming task for teachers. Performance assessment methods such as class projects and presentations are important final assessments of students' ability to demonstrate the research they have done, as well as their ability to synthesize and communicate their ideas. Unfortunately, teachers often do not have enough time to give students multiple opportunities to engage in these kinds of activities, or to give them sufficient feedback before these final demonstrations of what they have learned. Systems like teachable agents, IMMEX, and LSA provide a method for students to test what they know in an authentic way as they progress toward their final objectives. These technologies provide a level of feedback that requires students to reflect on their performance and define their own learning goals for improving it. In addition, teachers can use an aggregate of this feedback to evaluate where a class may need assistance. Technology can thus provide assessment methods that inform students of where they need assistance and that require learners to define their own learning outcomes.

BIBLIOGRAPHY

BISWAS, GAUTAM; SCHWARTZ, DANIEL L.; BRANSFORD, JOHN D.; and TEACHABLE AGENTS GROUP AT VANDERBILT. 2001. "Technology Support for Complex Problem Solving: From SAD Environments to AI." In Smart Machines in Education: The Coming Revolution in Educational Technology, ed. Kenneth D. Forbus and Paul J. Feltovich. Menlo Park, CA: AAAI Press.

LANDAUER, THOMAS K., and DUMAIS, SUSAN T. 1997. "A Solution to Plato's Problem: The Latent Semantic Analysis Theory of the Acquisition, Induction, and Representation of Knowledge." Psychological Review 104:211–240.

INTERNET RESOURCES

CHEN, EVA J.; CHUNG, GREGORY K. W. K.; KLEIN, DAVINA C.; DE VRIES, LINDA F.; and BURNAM, BRUCE. 2001. How Teachers Use IMMEX in the Classroom. Report from National Center for Research on Evaluation Standards and Student Testing. <www.immex.ucla.edu/TopMenu/WhatsNew/EvaluationForTeachers.pdf>.

UNIVERSITY OF COLORADO, BOULDER. 2001. Latent Semantic Analysis at the University of Colorado, Boulder. <http://lsa.colorado.edu>.

IMMEX. 2001. <www.immex.ucla.edu>.

TEACHABLE AGENTS GROUP AT VANDERBILT. 2001. <www.vuse.vanderbilt.edu/~vxx>.

SEAN BROPHY
