Synopsis/Reflection – Mastery Learning, Practice, and Feedback: Essentials for Learning
According to Ambrose et al. (2010), to achieve mastery in a domain, students must acquire key component skills, practice integrating those skills effectively, and apply those skills in appropriate contexts. We want our students to achieve mastery because experts (interchangeable with masters in this context) apply knowledge with greater ease and speed than novices can. Experts can also transfer knowledge to new contexts or between contexts (i.e. interdisciplinarily) at a level novices simply cannot.
The ways in which students practice and apply their skills matter, as does the feedback given to facilitate that practice. Directed, deliberate practice must be combined with targeted feedback to enhance learning. Figure 1 (Ambrose et al., 2010, p. 126) provides an excellent visual of the cycle of practice and feedback. Because goals sit at the center of that cycle, it becomes abundantly clear that instructors must explicitly communicate their goals for the class to their students through several means, including practice opportunities, assessments (i.e. observed performance), and feedback.
Figure 1. Cycle of Practice and Feedback (Ambrose et al., 2010, p. 126)
Again, this module didn’t really present anything new to me. A discussion of mastery learning, including scaffolding, transfer, and the need for directed, deliberate, goal-oriented practice paired with targeted feedback, seems to have become a staple of teacher professional development 101. Recognizing our own expert blind spots and unconscious competence in order to help students learn more effectively is a regular, mostly informal, discussion among the STEM faculty at CNM, many of whom share the same large office space. This ongoing reality check in the kitchen (mostly around the coffee machine), especially with peers who teach other subjects such as biology, engineering, math, and physics, creates a collaborative informal learning environment that I continually appreciate. It is also a learning environment I need, especially when I teach a new class for which I haven’t fully thought through the scaffolding. I haven’t taught a new class in years, but I’m embarking on that adventure again when I teach “Introduction to Statistics” for the first time in Fall 2019.
Returning to the classes I’ve taught for at least a decade now, the expert vs. master discussion in Gary’s video was a nice reminder that there is a difference between these terms. I’ve spent perhaps too much time living in an educational frame where the two terms are synonymous. While others may consider me a master teacher because I’ve spent WAY more than 10,000 hours (more like at least 30,000) teaching, I’m not sold on this designation for two main reasons: 1. not all of those hours were deliberate practice (although at least a third of them were), and 2. I’d prefer to think of “master teacher” as a hypothetical goal (akin to the theoretical goal of absolute zero), one that stays just out of arm’s reach, allowing us to strive for more and greater achievement at all times. The notion that master teachers exist gives me something to continue to strive for.
Yet one of the things I’m always striving for as a teacher is helping my students gain greater agency in their own learning. One way to facilitate this agency is to help students provide feedback to each other (i.e. peer review). So, my research questions this week are:
- How do we, as instructors, train chemistry students to provide excellent peer review?
- How do we, as chemical education researchers, analyze peer review in a way that supports student learning?
Research – Diving into peer review
Ambrose et al. (2010, pp. 148-152) suggest that instructors use the following strategies to provide targeted feedback:
- Searching and identifying patterns of errors in students’ work
- Prioritizing feedback such that the most important feedback is first
- Balancing positive and constructive feedback – DO NOT, however, make a feedback sandwich, as students are less likely to hear the critical constructive feedback when it is sandwiched between two positive comments
- Providing real-time feedback frequently at a group level more than at an individual level (so that students don’t feel personally attacked)
- Incorporating peer feedback, including requiring students to explicitly communicate how they used feedback in future assignments
The last bullet is, of course, the subject of the research questions I asked above. We will discuss peer feedback in chemistry using the ideas behind the Calibrated Peer Review system and how chemical education researchers have implemented and analyzed those ideas since. We will, however, revisit the other, more instructor-centric strategies Ambrose et al. propose in the Relate section of this assignment.
Russell (2013) describes Calibrated Peer Review (CPR) as a web-based tool developed by the Molecular Science Project in the mid-1990s. It required students to submit writing samples on chemistry topics and then to become “calibrated” so they could effectively and capably judge each other’s work. The review process was modeled on the work journal reviewers do: evaluating and rating work against a rubric. Students’ reviews were then calibrated against the instructor’s evaluation and rating of the same work; if a student’s review did not match the instructor’s, the student was required to redo the calibration procedure. Only once a student’s review was calibrated could they progress in the sequence. The CPR process is shown in Figure 2 (Russell, 2013, p. 133) below:
Figure 2. Flow-chart used to tutor students on the process and stages of a CPR assignment. (Russell, 2013, p. 133)
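To make the calibration gate concrete, here is a minimal Python sketch of the idea, not the actual CPR implementation: the rubric items, the numeric scale, the TOLERANCE value, and the sample ratings are all hypothetical placeholders I am assuming for illustration.

```python
# Minimal sketch of the calibration idea behind CPR, not the actual web-based tool.
# Rubric items, scale, tolerance, and ratings below are hypothetical placeholders.

# Instructor's reference ratings for a calibration essay (1-5 scale per rubric item).
INSTRUCTOR_RATINGS = {"claim": 4, "evidence": 3, "chemistry_accuracy": 5}

# How far a student's rating may deviate per item and still count as calibrated.
TOLERANCE = 1


def is_calibrated(student_ratings: dict) -> bool:
    """Return True if every rubric rating falls within TOLERANCE of the instructor's."""
    return all(
        abs(student_ratings[item] - ref) <= TOLERANCE
        for item, ref in INSTRUCTOR_RATINGS.items()
    )


def run_calibration(student_ratings: dict, attempt: int = 1) -> str:
    """Mimic the CPR gate: a student only moves on to peer review once calibrated."""
    if is_calibrated(student_ratings):
        return f"Calibrated on attempt {attempt}: proceed to peer review."
    return f"Not calibrated on attempt {attempt}: redo the calibration essays."


if __name__ == "__main__":
    print(run_calibration({"claim": 4, "evidence": 2, "chemistry_accuracy": 5}))
    print(run_calibration({"claim": 1, "evidence": 1, "chemistry_accuracy": 2}))
```

The point of the sketch is the gate itself: a student only moves on to reviewing peers once their ratings agree closely enough with the instructor’s.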
As students moved through the CPR process, they also learned to assess their own writing more effectively. Table 1 shows that students who completed the CPR process became more positive overall about their ability to tell whether their own essay was good and about their skills in assessing their own writing than students who did not engage in the process. Both the act of providing feedback to peers and the exposure to peers’ writing are credited with the results shown in Table 1. Overall, I read these results to mean that CPR had a positive effect on both student self-assessment and student-to-student peer review.
The calibration process in CPR compares only the student’s review against the instructor’s. However, more extensive analyses of peer review comments have since been performed to expand the methods used to teach peer review. Finkenstaedt-Quinn et al. (2019) recently analyzed the differences in peer feedback between initial writing drafts and revisions. Their coding scheme is given below in Figure 3:
Figure 3. General coding scheme for peer review comments and revisions. (Finkenstaedt-Quinn et al., 2019, p. 229)
Finkenstaedt-Quinn et al. (2019) wanted to determine whether there was a correlation between a student’s ability to provide chemistry-specific feedback and that same student’s understanding of the conceptual topic being reviewed.
Specifically, the peer review comments and changes upon revision between initial and final drafts were analyzed in an effort to (1) characterize the types of feedback that students provide their peers when guided by a rubric focusing on developing understanding of Lewis structures, (2) identify if certain types of feedback were more likely to lead to revision, and (3) characterize the types of revisions that students made. (Finkenstaedt-Quinn et al., 2019, p. 231)
The authors found that “problem/solution” type feedback (i.e. feedback that identified an area of difficulty) was the most “impactful characteristic of comments in leading to revision and thus the most likely form of peer comment to facilitate student learning of chemistry concepts” (Finkenstaedt-Quinn et al., 2019, p. 235). It was also the type of feedback that the majority of students provided in peer review.
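To illustrate the flavor of this analysis, here is a hedged Python sketch that tallies coded peer-review comments by type and computes the fraction of each type that was followed by a revision; the comment categories and the records are made-up placeholders, not the authors’ actual coding scheme or data.

```python
# Hedged sketch of tallying peer-review comment types against whether the
# commented-on passage was later revised. Categories and records below are
# hypothetical placeholders, not the authors' coding scheme or dataset.
from collections import defaultdict

# Each record: (comment_type, led_to_revision)
coded_comments = [
    ("problem/solution", True),
    ("problem/solution", True),
    ("problem/solution", False),
    ("praise", False),
    ("clarification_request", True),
    ("clarification_request", False),
]


def revision_rates(records):
    """Return, for each comment type, the fraction of comments followed by a revision."""
    totals = defaultdict(int)
    revised = defaultdict(int)
    for comment_type, did_revise in records:
        totals[comment_type] += 1
        if did_revise:
            revised[comment_type] += 1
    return {comment_type: revised[comment_type] / totals[comment_type] for comment_type in totals}


if __name__ == "__main__":
    for comment_type, rate in sorted(revision_rates(coded_comments).items()):
        print(f"{comment_type}: {rate:.0%} of comments led to revision")
```

A small table of revision rates like this makes it easy to see which comment types most often prompt revision, which is the pattern the authors report for problem/solution comments.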
Relate – Providing feedback as an instructor: Ungrading and its implications
For years I’ve provided “problem/solution” type feedback on exams. I’ve tried to provide positive and constructive feedback that identifies student error patterns in real time, both to groups of students as we work on formative assessments and to individuals on exams (i.e. summative assessments). And while student groups mostly pay attention to the feedback and reiterate it to each other throughout a formative assessment, individual feedback has largely gone unnoticed in favor of grade point totals.
I decided to change that last pattern this semester and adopted a process called ungrading instead. Ungrading, as I practice it within my exam grading, focuses student attention on individual feedback and involves a conversation between the student and the instructor. I provide feedback to the students, and the students provide feedback to me about why they answered the way they did on their exam and why they might think their answers are still correct, even if I have provided feedback to the contrary. This conversation, both the written exchange and the oral conversation that follows once the written feedback process has ended, has been illuminating in terms of what mastery learning entails and how to scaffold my students’ learning in future semesters.
I’m also always looking for (1) ways to give better feedback that encourages and supports mastery-level learning and (2) methods that enable students to take charge of their own learning, so that they can continue to excel even after my class ends. This week’s readings were incredibly helpful in both pursuits.
For more information, the multi-tiered system of feedback I’ve used in ungrading has been described in more detail here and here.
Resources/References
Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M.C., & Norman, M.K. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: Jossey-Bass.
Finkenstaedt-Quinn, S. A., Snyder-White, E. P., Connor, M. C., Gere, A. R., & Shultz, G. V. (2019). Characterizing peer review comments and revision from a writing-to-learn assignment focused on Lewis structures. Journal of Chemical Education, 96(2), 227–237. https://doi.org/10.1021/acs.jchemed.8b00711
Russell, A. A. (2013). The evolution of Calibrated Peer Review™. In T. Holme, M. M. Cooper, & P. Varma-Nelson (Eds.), Trajectories of chemistry education innovation and reform (Vol. 1145, pp. 129–143). https://doi.org/10.1021/bk-2013-1145.ch009