It’s that time of year when the grading of last semester’s bluebooks is well behind us and the last few students have come in to review their exams. We have packed the bluebooks up for archiving, and they are out of sight and out of mind.
But wait! In the world of assessment, the bluebooks come back! These papers and exams hold a wealth of assessment information for us to mine, if we only take the time to gather, reflect on, and use that information.
How can you use your bluebooks for efficient assessment and improvement of student learning?
As they grade, many faculty gather holistic impressions of students’ overall performance and of the areas of difficulty and strength. To build on this reflective process, faculty can take a few more simple steps:
1. Improve your data collection.
Rather than gathering only general impressions as we grade bluebooks, we can mine them for more concrete data. Examine the distribution of performance on individual questions or issues. Note that you need not gather every data point possible from the bluebooks; it is often helpful to begin with two or three items to analyze. For example, what is the one thing on the exam that nearly every student did well? What were the one or two questions, issues, or approaches that many students struggled with? What percentage of the students had these problems?
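If you like to work with tools rather than tally marks, even a few lines of code can turn margin notes into percentages. Here is a minimal sketch in Python, assuming you jot down, per bluebook, which questions the student struggled with; the question labels and marks below are hypothetical examples, not data from any real exam.

```python
# Minimal sketch: tally per-question performance from simple grading notes.
# The question labels and entries below are hypothetical, not real data.

from collections import Counter

# One entry per bluebook: the set of questions that student struggled with,
# transcribed from quick margin notes made while grading.
struggles_per_student = [
    {"Q2"},            # student 1 struggled only with question 2
    {"Q2", "Q4"},      # student 2 struggled with questions 2 and 4
    set(),             # student 3 had no notable problems
    {"Q4"},
]

total = len(struggles_per_student)
counts = Counter(q for s in struggles_per_student for q in s)

# Report each question's trouble rate, worst first.
for question, n in counts.most_common():
    print(f"{question}: {n}/{total} students struggled ({100 * n / total:.0f}%)")
```

The point is not the tooling but the habit: a concrete count per question, rather than a general sense of “a lot of students missed that one.”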
2. Analyze your data.
For issues students appear to have learned well, look again at your questions. How confident are you that the question truly tested the students’ understanding? In this respect, essay questions are often easier to evaluate than multiple-choice questions, because you can see the students’ reasoning on the former, whereas consistently correct answers on the latter can be the result of distractors that are patently wrong. What materials and techniques did you use to prepare the students for that question? When during the semester did you teach those matters? If student performance has improved over prior exam administrations, what, if anything, did you change that may have caused this improved learning?
For issues or questions on which a significant percentage of students performed poorly, again begin by reexamining the question, its placement in the examination, and the time allocated for responses, to identify possible reasons for poor performance that are less related to student learning and more related to exam conditions. Look for patterns in the students’ errors or misconceptions that can help you diagnose what learning conditions led to the poor performance. What materials and methods did you use to teach this doctrine?
3. Plan for the next class.
When students are performing well on a doctrine or concept, especially when that competent performance appears to be the result of your prior efforts to target and improve learning activities for that material, you may be tempted to rest on your (and your students’) laurels. However, consider that any change to one part of a course can affect other parts, and that each class brings with it different experiences and preparation.
To improve student learning in areas that have presented difficulties, consider not only improving the teaching materials or methods related to that area but also incorporating more formative assessments during the term, to help you and your students identify learning deficiencies earlier and more clearly.
4. What my bluebooks told me this semester:
To illustrate this process of mining bluebooks for assessment, I will discuss this semester’s Professional Responsibility exam. From this semester’s bluebooks, I gathered a range of data on materials that were well understood and poorly understood. I will share three examples to illustrate how bluebooks can feed an assessment process.
The doctrinal winner this year in terms of student performance was multijurisdictional practice of law. Is this because the students understood this aspect of the course better than others? Reviewing the exam, I noticed that the question testing this subject called for a fairly low level of mastery (basic issue spotting and knowledge of the rule) without requiring any sophisticated analysis. This was a topic for which I had provided a number of practice problems, and I had tested the issue in a similar fashion on a prior year’s exam, which I had made available for student review. Moreover, because my law school is located on a state line, and the two states have dramatically different variations on this rule, the students understood that the rule would affect their immediate future as they chose which state bar exam to take first. What I learned from this is the fairly unremarkable understanding that my law students can and will master, at a knowledge level, those topics on which they know they will be tested and which they have a more personal motivation to learn well. I concluded that I could generalize from these understandings: not only would I raise the bar on testing this doctrine, requiring a more sophisticated understanding, but I would also look for other areas in which I could improve student motivation by identifying the specific need-to-know circumstances looming in their immediate future for other rules.
A second topic on which I have been tracking student performance for many semesters is students’ understanding of the distinction between the evidentiary attorney-client privilege and the ethical duty of confidentiality (among other doctrines). When I first began tracking, as many as 30% of students were demonstrating fundamental confusion on this topic – using the language of “privilege” when the subject was confidentiality (or vice versa), or confusing the exceptions to the ethical duty with the crime-fraud exception to the privilege. I knew from speaking with other Professional Responsibility teachers that this is a common area of confusion for students. Over the course of several semesters, I worked to improve student learning in this area: including more problems in the course materials, writing and assigning a CALI lesson on the subject, and explicitly telling the students that this is something I am tracking, cheering them on to “make this the 100% mastery year.” The efforts are bearing fruit. This semester was the best yet – only four of 72 students used the vocabulary of the two doctrines improperly, and three of those four applied the correct rule even though they were not using the correct terminology.
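This kind of semester-over-semester tracking is easy to keep in a small script or spreadsheet. Here is a minimal Python sketch of the idea; note that only the roughly 30% starting rate and this semester’s 4 of 72 come from my records as described above, while the semester labels and the intermediate counts are hypothetical placeholders.

```python
# Minimal sketch: track the confusion rate for one doctrine across semesters.
# Only the ~30% starting rate and the final 4-of-72 figure come from the post;
# the semester labels and the middle semester's counts are hypothetical.

semesters = [
    ("Semester 1", {"confused": 22, "total": 73}),  # roughly 30%, per the post
    ("Semester 2", {"confused": 14, "total": 70}),  # hypothetical placeholder
    ("Semester 3", {"confused": 4,  "total": 72}),  # 4 of 72, per the post
]

for name, data in semesters:
    rate = data["confused"] / data["total"]
    bar = "#" * round(rate * 40)  # crude text chart of the downward trend
    print(f"{name}: {rate:6.1%}  {bar}")
```

Watching that bar shrink each semester is its own small reward, and it makes the “100% mastery year” goal something you can actually verify.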
An area in which I had thought I was making progress turned out to be a continuing problem. Students are commonly confused by the rule governing an attorney’s right to withdraw from representation. I have made the same efforts on this doctrine as on the privilege v. confidentiality confusion: adding more problems, providing additional outside resources (again, I wrote a CALI lesson on the subject), and giving in-class quizzes to assess understanding while there was still time to improve learning. However, I was puzzled to see 13 students declare that an attorney may not withdraw from representation if it would harm the client. What could have been the source of this confusion? Searching through my course materials and lesson plans, I uncovered the problem. A PowerPoint lecture on withdrawal from representation when the client fails to pay the attorney contained a slide with a bullet-point list of reasons that courts might deny an attorney permission to withdraw even though the rules would permit the withdrawal. One of the bullet points listed “degree of harm to the client” as a factor the court would consider. Obviously, some students had transferred the slide into their notes on the general withdrawal rule rather than recognizing that these factors bear only on the judicial discretion to deny an otherwise permissible withdrawal. Again, a well-worn lesson learned anew: as helpful as PowerPoint slides can be for organizing discussions and providing visual cues for learning, students will study the text of these slides as definitive statements of law rather than as thumbnails of larger discussions and understandings. Conclusion: no shortcut summary slides!
Wow! Barbara, this is so helpful, and I am impressed with the careful analysis you use. I just forwarded it to my faculty and reminded them that you will be presenting at the CELT conference here in Albany on March 30th: http://www.celtconference2012.org/
In clinic, we OFTEN find that students are confused about the attorney-client evidentiary privilege (more rarely invoked) and the obligations inherent in attorney-client confidentiality, which descend on the student and their client at the moment of meeting to hear the client’s request for service.
Thanks for sharing your analysis…
It is inspiring to hear about your efforts. Imposing the self-discipline to review the bluebooks again, after grading them, seems like a difficult task from the start! But what a great example of multiple ways that we can approach the challenge of “closing the loop” by reflecting on our teaching in light of the learning demonstrated by our students.