The following is adapted from Linda Suskie’s forthcoming book Assessing Student Learning: A Common Sense Guide, to be published by Anker Press later this year.
What is a “good” assessment? More than anything else, it is an assessment that gives us truthful information; it tells us what our students have truly learned. Students who have truly learned what we want them to will do well on a “good” assessment; students who truly have not learned what we want them to will not do well on it.
Unfortunately, it’s not possible to determine with complete confidence exactly what our students have learned. We can’t get inside their heads to find out what they truly know and what they don’t. The best we can do is to look at samples of their behavior—what they write, produce, say, and how they perform—and from those samples try to estimate or infer what they truly know. Even under the best of circumstances, making an inference from these snapshots of behavior is bound to be at least somewhat inaccurate because of what psychometricians call “measurement error”—fluctuations in human performance that we can’t completely control. We can’t control, for example, whether a student is tired, anxious, or distracted on the day of an assessment.
While our assessments will thus never give us absolutely accurate information about what students have learned, we should still aim to make them sufficiently truthful so that we can have confidence in our findings and use them with assurance to make decisions about our goals, curricula, and teaching strategies. It’s therefore vital that we take steps to maximize the quality of our assessment methods. The following approaches will help increase the accuracy and truthfulness of assessment strategies.
Start with clear statements of the most important things you want students to learn from the course, program, or curriculum.
Teach what you are assessing. Help students learn the skills needed to do the assessment task. For example, if you are giving a writing assignment, help your students understand how you define good writing and give them feedback on drafts.
Because each assessment technique is imperfect and has inherent strengths and weaknesses, collect more than one kind of evidence of what students have learned. If you are assessing learning across an entire program, for example, rather than giving students only a culminating examination, you might also look at samples of papers they’ve written and perhaps internship supervisors’ ratings of their skills.
Make assignments and test questions crystal clear. Write them so that all students will interpret them in the same way and know exactly what they are expected to do.
Before creating an assignment, write a rubric: a list of the key things you want students to learn by completing the assignment and to demonstrate on the completed assignment.
Likewise, before writing test questions, create a test “blueprint”: a list of the key learning goals to be assessed by the test and the number of points or questions to be devoted to each learning goal.
Make sure that your assignments and test questions clearly relate to your key learning goals. Each test question, for example, should clearly correspond to the learning goal you’ve identified for it in your test blueprint. A writing assignment intended to assess how well students organize an essay shouldn’t be graded primarily on grammar and spelling.
Ask colleagues and students to review drafts of your assignments and rubrics, and ask former students to review draft test questions, to make sure they’re clear and appear to assess what you want them to.
Try out surveys and similar tools with a small group of students before using them on a larger scale. Check students’ responses to make sure they are giving answers that make sense. Ask them if they found anything unclear or confusing. Ask some students to “think out loud” as they answer a test question; their thought processes should match those you intended.
Collect enough evidence to get a representative sample of what your students have learned and can do. Collect a sufficiently large sample that you will be able to use the results with confidence to make decisions about your course, program, or curriculum.
Score student work fairly and consistently. Before scoring begins, have a clear understanding of the characteristics of meritorious, satisfactory, and inadequate papers. Then use a rubric to help score assignments, papers, projects, etc., consistently.
Use assessment results appropriately. Never base any important decision on only one assessment. (Failure to adhere to this maxim is one of the major shortcomings of many high-stakes testing programs.) Assessments shouldn’t make decisions for us or dictate what we should teach; they should only advise us as we use our professional judgment to make suitable decisions.
While one important characteristic of a “good” assessment is that it gives us truthful information, another important characteristic is that it be useful. If an assessment doesn’t help improve teaching and learning activities, why bother with it? Whether you are a faculty member, department head, or academic administrator, your assessments must correspond to your key learning goals and your curriculum in order to be useful. No one assessment is right for every program in every institution.
To ensure the usefulness of your assessments, periodically evaluate your assessment program and ask yourself whether your assessments are giving you useful information. If a particular assessment is not helping you or your students, stop doing it. Similarly, if a particular survey question isn’t providing information that you can use to help make decisions about your teaching or program, stop asking it. And periodically compare your assessment tools against your learning goals to ensure that they continue to align. Most importantly, remember that assessment does not have to be complicated. All it has to do is provide us with reasonably reliable information that we can use to improve teaching and learning.
Linda Suskie is Director of Assessment at Towson University and a frequent speaker, consultant, and workshop presenter on higher education assessment topics to audiences across the globe. She appeared at this College during our Spring 2003 Professional Development Week as keynote speaker.