You believe the poor score you received on a final exam doesn't accurately reflect what you learned in the course. Maybe you had a splitting headache on the morning of the exam and couldn't concentrate. You ask for the opportunity to retake the test, but the teacher denies your request on the grounds that it wouldn't be fair to the other students. This scenario illustrates one of the problems with summative, or after-the-fact, assessments, which are used to grade overall performance over a period of time.
Educational assessments are either formative or summative. Formative assessments are activities carried out during the learning process to help students improve their performance; ungraded feedback on a draft of an essay is one example. Summative assessments are measurements of outcome that gauge what a student has learned and compare it against a standard or benchmark. Examples of summative assessment include end-of-chapter tests, final exams, and large-scale standardized tests such as the SAT.
Proponents of high-stakes summative assessments say that tests motivate students to put more effort into their studies. However, in a comprehensive review of the impact of testing on student motivation, the Evidence for Policy and Practice Information and Co-ordinating Centre at the University of London found a direct correlation between performance on national standardized tests and self-esteem. Students who did poorly experienced lowered self-esteem, which in turn reduced their willingness to put in the effort required for future academic success.
Reliability and Validity
To be free of distortion, an assessment must be constructed so that it accurately reflects the whole of the material it's intended to cover, including the way the material has been taught. There also needs to be consistency across tasks and how they are marked, both internally within the assessment and externally across different versions. Reliability and validity errors call into question the point of the summative assessment, which is to accurately measure student performance.
External summative assessments of students used to judge teacher and school performance can negatively impact what occurs in the classroom. With the perception that their jobs are at stake, teachers often feel enormous pressure to explicitly "teach the test" at the expense of other curriculum goals and objectives. In a survey of teachers conducted by Harvard's Carnegie-Knight Task Force on the Future of Journalism Education, 60 percent of respondents said that preparing students to pass mandated standardized tests either dictated most of their teaching or substantially affected it.
Summative assessments with limited means of expression, particularly large-scale standardized tests that use multiple-choice formats for automated grading, may unfairly disadvantage broad categories of students. These include non-native speakers facing language or cultural barriers to understanding the questions asked, students with physical or learning disabilities, and those who do not perform well under the pressure of testing conditions.
Summative assessment may also measure the wrong things. Professor David Rose of Harvard University's Graduate School of Education, a principal architect of Universal Design for Learning, argues that the ways in which students are assessed do not provide accurate information about how they are doing. Questions are asked in ways that students do not understand or have difficulty answering, and neither of these problems gets at whether the student really knows the material that was taught.
- National Center on Universal Design for Learning: UDL Guidelines -- Version 2.0: Principle II. Provide Multiple Means of Action and Expression
- Carnegie Mellon University: Formative vs Summative Assessment
- Carnegie-Knight Task Force on the Future of Journalism Education: Mandatory Testing and News in the Schools: Implications for Civic Education (January 2007)
- EPPI-Centre, Social Science Research Unit, Institute of Education, University of London: A Systematic Review of the Impact of Summative Assessment and Tests on Students' Motivation for Learning
- The Iris Center: Assessment
Matthew Spira has over nine years of experience as an ESL teacher/tutor specializing in bilingual early childhood literacy development. Before teaching, he spent seven years in the call-center industry as a line supervisor, operations manager and workforce planning, forecasting and analysis manager. Spira holds a Bachelor of Arts in literature from the University of California, Santa Barbara.