If you’ve been reading the thoughts from CTL this semester, you’ve probably noticed a trend. The blog posts tend to focus on things we can control in our classroom. How we teach. How we think about teaching. How we interact with students. How we keep our sanity.
You know, the human element.
So when I went to a professional development training two weeks ago as part of my work on the local board of education, I was most certainly not enthused about the topic: “Process and Design for Assessments.” There’s no sexy hook when it comes to designing instruments to assess learning.
But the team putting on the workshop is first-rate and I’ve always walked away with actionable knowledge in past sessions, so I thought I’d go in with an open mind. I mean, I’ve been teaching for almost twenty years; what could there possibly be that I don’t know about designing tests?
Turns out that there is a lot more to consider when designing assessments than I thought.
The ninety-minute session went well beyond crafting solid multiple choice questions; it was about the design process. I’d like to share the framework they used. This framework is from the Honeoye Falls-Lima school district, so some characteristics of the framework may not necessarily apply to the work you do in your classroom, but the philosophy is sound.
The team that worked on this framework identified four different elements of design for assessments:
VALIDITY
Does the assessment actually measure what it purports to measure?
- Is there a crosswalk between learning objectives and the assessment components?
- Are the outcomes clearly articulated to the learners?
- Does the cognitive demand of the assessment match that of the learning objectives?
RELIABILITY
Does the instrument produce accurate and consistent results?
- Is the assessment free of bias?
- Are all prompts, questions, criteria, directions, text, and multimedia presented in a way that is clear, direct, and understandable for all learners?
- Are the assessment items constructed according to best practices?
- Are there a sufficient number of items to accurately assess each learning objective?
- Is there a rubric or scoring guideline for performance-based questions?
RELEVANCE
Does the experience address real-world issues, problems, and context?
- Do learners have opportunities during the instruction and assessment process to engage in real-world tasks and meaningful experiences?
- Can learners build on their learning such that they can make connections and transfer it to new and unique situations?
EFFECTIVE USE
Does the assessment data yield hope, efficacy, and achievement?
- Is there a post-mortem for the assessment that gauges
  - instructional practices that were shown to be highly effective?
  - patterns in student mistakes that can inform future teaching?
  - the validity and reliability of the assessment items?
  - interventions that may be needed to provide additional support?
- Does the feedback mechanism ensure
  - timely and specific feedback?
  - an opportunity for students to assess their own work and identify avenues for improvement?
As I work on my classes over the summer, I’ll certainly be using this framework (or a variation of it) as I assess my assessments. I’m quite certain that I will not be able to inject every sentiment from this framework into my assessments, but it’s a start. I’m sure along the way I’ll find things I like about the framework, and things that don’t necessarily apply to the way I teach.
But I think the takeaway here is that teaching is about continuous improvement. In fact, all the posts this semester are about challenging our own beliefs about teaching and learning.
As you enjoy the summer, please feel free to jot down thoughts about lessons you’ve learned in your career as an educator. Then share them with your peers.
Photo by Nguyen Dang Hoang Nhu on Unsplash