I just finished my own personal 3-week hell. It was spent in an online class (on Moodle for those who are wondering) for a U.S. nonprofit group that certifies Online Courses based on their own extensive rubric (and that *may* go by the abbreviation QM). I was taking the class to become a peer reviewer for my institution’s online courses because my institution has bought way, way into this rubric and its evaluation system for teaching online.
Before joining the class, my thought was, “Hey, I took the prerequisite course [where you learn about the rubric] and it wasn’t so bad, so this one won’t be too bad either, right?”
Reader, I was wrong. So, so wrong.
Part of me was curious as to how the rubric could be stretched in its application. And part of me wanted to know where and how the rubric could be broken.
Turns out the rubric is pretty robust, and all of the parts I’d like to eliminate are mandatory. Yay?!?!
So…here’s what I know after diving into this class…
- There IS such a thing as over-organizing and over-structuring an online class, especially if you implement backwards design with a tightly constrained format that leaves little freedom to make the class your own. I’m almost 100% sure the folks who designed this rubric love using ADDIE in instructional design (and, yes, ADDIE was derived from military education, and there’s a reason it’s a design model where every single detail is thought about and decided upon in advance).
We don’t need robots teaching online courses. We need humans to teach online courses in a caring, hospitable, critical way. We need humans to teach in a “changing based on context” kind of way. Bottom line – we need humans to teach online classes.
- You should never, never expect to teach the same class for 5 (!) years without changing anything substantial. WHAT A REALLY, REALLY HORRIFIC IDEA!
- Online learning rubrics that call for alignment going way beyond good pedagogy, toward data justification at every level, are not worth implementing.
- Online learning rubrics that fail to deeply consider community, caring, trauma-informed education, and decolonization are just not worth implementing.
- Content mastery is measurable, and we can write good outcomes for it. Learning is not directly measurable; it is messy and fraught with dynamic, changing variables. Learning is notoriously hard to pin down, even with the best educational research methods we can employ. We *cannot* write “measurable learning outcomes”; that’s just a misnomer. Because context matters, folks. In learning, it matters more than almost anywhere else. And growth of the human mind is difficult to measure.
The last question on the ending survey was something like “Would you implement this rubric when building online classes?” and I was like, “OH HELL NO!”
I thought the rubric was bendable. Maybe even breakable. In the end, it was less bendable and breakable than I expected. And while I expect I could easily design a pedagogically horrid class that satisfies the rubric, I think it would be far more difficult to design a pedagogically awesome class that earns the rubric’s “Met” expectation. And that, my friends, is where the rubric fails most.