Well, this summary is only about a year late. The struggle is real, y’all. ;o)
This blog post summarizes this article: Sorensen-Unruh, C. (2024). The ungrading learning theory we have is not the ungrading learning theory we need. CBE—Life Sciences Education, 23(3), es6: 1-12. https://doi.org/10.1187/cbe.24-01-0031
The article is open access, so go download it ASAP for more information about everything covered here.
So when I wrote this article, I aimed to accomplish several implicit and explicit things. If we break the article up roughly into thirds, the first third quickly orients the reader to the problems with grading and to my definition of ungrading as an emancipatory pedagogy (which is BROAD, y’all). The common themes of emancipatory pedagogies, as I laid them out in the paper, include:
- engaging in critical reflection as a fundamental and important aspect of learning;
- redistributing the power in the classroom so that learners can better manage their own learning;
- increasing agency for students;
- involving students in participatory design;
- embracing dialogic engagement in the classroom, including the questioning of authority and authoritative institutions;
- remaining transparent throughout the course and the assessment process; and
- asset framing.
Table 1 (p. 3), which includes definitions of ungrading and grading techniques, and Figure 1 (p. 5), an interpretive matrix showing where the Table 1 techniques fall along axes of less to more student learning (horizontal axis) and less to more student agency (vertical axis), both appear in this first third. The second third presents an in-depth explanation of why self-regulated learning (SRL) – a learning theory we, as practitioners, default to without even critically reflecting on why it may not work well with certain pedagogies – should not be the learning theory that undergirds ungrading. The last third argues that combining two theories – Funds of Knowledge (FoK) and Community Cultural Wealth (CCW) – forms a better learning theory to undergird ungrading because both FoK and CCW are asset framed. Both theories also embrace the emancipatory aspects of ungrading that are fundamental to its equitable and just implementation – mainly building upon students’ prior knowledge and culture (FoK) and redistributing power (CCW). Table 2 (p. 8), which shows some typical ungrading practices and how undergirding ungrading with SRL differs from undergirding it with FoK and CCW combined as an asset-framed learning theory, also appears in this last third.
To be fully transparent, some of my dissertation committee members had larger issues with the published version of this paper. In the end, though, I think their pointed questions and suggestions made the paper better. The expanded sections that resulted from their suggestions include: 1. situating ungrading within current assessment models, including Wiggins’ of learning, for learning, and as learning model and Wiliam and Black’s formative vs. summative assessment model; 2. expanding Valencia’s definition of deficit thinking; 3. a more fully illustrated argument for why SRL in practice doesn’t work for emancipatory pedagogies; 4. defining student agency more precisely; and 5. a compare and contrast of two techniques (self-assessment and peer review) and why they sit where they do on the interpretive matrix in Figure 1. These expanded sections were originally written for the version of this paper accepted by my dissertation committee, which will be published within my dissertation in the UNM digital repository in *hopefully* December 2025.
Situating ungrading within current assessment models
In general, assessments comprise the activities instructors use to determine whether and/or what students learned. According to Wiggins (1998), assessments can be designed by instructors with one of the following outcomes in mind: to test student learning gains (of learning), to continually assess and deliver feedback in a feedback loop (for learning), or to help students’ learning processes continue while they take the assessment (as learning). Assessments of learning are typically summative, which Wiliam & Black (1996) define as “assessments given at the end of units, mid-term and at the end of a course, which are designed to judge the extent of students’ learning of the material in a course, for the purpose of grading, certification, evaluation of progress or even for researching the effectiveness of a curriculum” (p. 537). Formative assessment falls in Wiggins’ for learning category; Black (2017, p. 295) explains that “an assessment activity can help learning if it provides information to be used as feedback, by teachers, and by their students, in assessing themselves and each other, to modify the teaching and learning activities in which they are engaged.” Even though this definition of formative assessment contains the idea of feedback as a tool to focus learning further, the directionality of the feedback is one-way: from instructor to student. Dialogically engaging with students about their learning and then using those conversations to modify grades is not discussed in the formative/summative assessment model, especially when summative assessment is explained (Black & Wiliam, 2012, 2018; Wiliam & Black, 1996).
Formative feedback also focuses on smaller, low-stakes assessments that might build towards larger, more high-stakes assessments to maximize learning and growth (Black & Wiliam, 2012). Increases in formative assessment usage have been repeatedly correlated with increases in longitudinal learning (Black, 2017; Black & Wiliam, 2012, 2018). Wiggins’ as learning category contains both formative and summative assessments and is often called authentic assessment (Wiggins, 1990).
Within all three of Wiggins’ assessment designs – of learning, for learning, and as learning – there is a student learning component and an evaluative component (Wiggins, 1998). Examples of the student learning component include projects, portfolios, papers, exams, quizzes, homework, etc. The evaluative component is typically not done by the student; it takes the completed work and evaluates it against quality standards using evaluative judgement. Those standards can be external, like standards imposed by the department or institution or by accrediting or review bodies. The standards can also be internal to the class, including individual instructor standards, personal student standards, or standards that the class votes on. Examples of the evaluative component are scoring, marking, assigning grades, and feedback (oral or written). While ungrading can be used in any of the assessment designs Wiggins proposes, much of ungrading, including self-assessment and peer assessment, falls in the as learning category. Since grades can be used with as learning as well, Wiggins’ conceptualization of as learning cannot fully accommodate the aims, the process, or the outcomes of ungrading. Nor can formative assessment, since ungrading can be applied to formative or summative assessment. Both assessment models fail to describe ungrading adequately, especially when we define ungrading as emancipatory.
Expanding Valencia’s definition of deficit thinking
SRL (a learning theory that encapsulates everything a student needs to do to learn, given that the professor sets the standards for learning and for assessment) is both a theory that has been discussed extensively in the literature and a learning theory that is instantiated in practice. The argument I laid out in this paper focuses almost exclusively on the latter.
Deficit thinking, a theory that blames students and claims they fail due to their own deficiencies, is composed of six characteristics in the context of schooling (Valencia, 2010, p. 18):
- victim blaming (individual school failure is attributed to students’ membership in an oppressed group);
- oppression (the model further oppresses students – specifically students of color, LGBTQIA2S+ students, and/or students with disabilities who have low socioeconomic status);
- pseudoscience (deficit thinking can significantly bias research);
- temporal changes (alleged deficits are said to be passed from generation to generation);
- educability (the oppressed groups mentioned above are deemed inferior and therefore unable to be educated in the same ways); and
- heterodoxy (deficit thinking is embedded in ideological and scholarly spheres in a way that reflects and uplifts the dominant group).
Why doesn’t SRL work for emancipatory pedagogies?
A further critique of SRL, as it is practiced in educational contexts, comes from Vassallo (2013), who argues that “the discourse of self-regulated learning is aligned with the logic of adaptation, prescription, and dependency – three processes and practices…[that] can be construed as compliance and obedience to neoliberal governance…” (Vassallo, 2013, p. 578). (Sorensen-Unruh, 2024, p. 4)
Vassallo (2015) expands this argument by saying that learning in SRL:
means that individuals engage in an iterative process in which they make psychological, behavioral, and affective adjustments based on their assessment of goals, personal characteristics, and environmental contingencies. Although considered empowering and agentic, shaping individuals to be adaptable within institutional structures ties them to those structures and renders them vulnerable to institutional caprice. Regulating learning can mean adapting to an existing social order in ways that require little oversight – it can be construed as the rapid and efficient exercise of power that enlists individuals in their own subordination. (p. 52)
Structures in educational contexts reinforce oppression by “elevating hegemonic cultural narratives and crowding out the experiences of marginalized communities” (Kolluri & Tichavakunda, 2023, p. 645).
Compliance and obedience within institutional structures that reinforce oppression without critical inquiry and consciousness are contradictory to emancipatory pedagogies. This means self-regulated learning, as it is practiced and implemented within higher education, is incompatible with emancipatory pedagogies. Vassallo argues that even if “SRL carries with it connotations of social emancipation and social betterment…SRL narrows possibilities for what can count as emancipation” as well as how one can pursue it (Vassallo, 2013, p. 578). (Sorensen-Unruh, 2024, p. 4)
Defining student agency more precisely
Agency is sometimes referred to as “socially transformative” with intersecting and overlapping roles based in “context, position, knowledge, and identity” (Barton & Tan, 2010, p. 191). (Sorensen-Unruh, 2024, p. 5)
Emirbayer & Mische (1998) define human agency as:
Temporally constructed engagement by actors of different structural environments – the temporal-relational contexts of action – which, through the interplay of habit, imagination, and judgment, both reproduces and transforms those structures in interactive response to the problems posed by changing historical situations (p. 970).
This definition is inherently social and relational (Emirbayer, 1997), which often applies to higher education contexts. My definition of agency is not just a choice a student might make in a specific educational context – it is a choice based on how they interpret that context and their place in it. The student’s power or engagement in that choice is also directly tied to whether they interpret the choice to be impactful for their own learning or the learning of others, given structures of oppression that may prevent their agency from being fully realized in specific classroom contexts.
A compare and contrast of self-assessment and peer review and why they sit where they do on the interpretive matrix in Figure 1
To further explain placement on the interpretive matrix shown in Figure 1 (p. 5), we can compare and contrast two techniques to see why each affords greater or lesser learning and greater or lesser agency. Self-assessment is rooted in assessment as learning (Wiggins, 1990) and “incorporates assessment activities that require students to examine and understand their own learning” (Bourke, 2018, p. 828). Self-assessment incorporates participatory design (Boud & Falchikov, 1989) and embraces critically reflective learning, power redistribution, and dialogic engagement, as well as increased student agency (Falchikov & Boud, 1989; Nieminen & Tuohilampi, 2020). Self-assessment is therefore the most agentive and the most learning-oriented of the practices listed on the matrix.
Peer review, on the other hand, moves more towards compliance with the instructor, as calibration training is often needed to enact peer review effectively in a classroom (Gaynor, 2020; Li et al., 2020; Russell, 2013). Peer review can therefore be considered only partially an emancipatory learning process: it does embrace dialogic engagement, but not necessarily critically reflective learning, increased student agency, or power redistribution (Ibarra-Sáiz et al., 2020; Panadero & Alqassab, 2019). Because of the calibration needed, participatory design is also difficult to implement effectively within peer review (Li et al., 2020).
Wrapping it up
As I said earlier in this blog, I think these expanded sections really helped instantiate the arguments that: 1. ungrading is emancipatory; 2. SRL is not the best learning theory to undergird ungrading; and 3. FoK and CCW together form an asset-framed learning theory that extends and amplifies ungrading, helping ungrading practitioners better realize ungrading’s emancipatory aims. A major implicit aim of the article was also to provide an extensive bibliography for ungrading practitioners and for SoTL, DBER, and STEM education researchers to use in their own research and institutional policy arguments. Having said all of this, I’m incredibly proud of this paper, both as the version that was originally published and as the version that will be published as part of the dissertation.
NOTE: Other minor changes were made for the dissertation, but these changes were less substantial and therefore not highlighted by this blog.
References
- Barton, A. C., & Tan, E. (2010). We Be Burnin’! Agency, identity, and science learning. Journal of the Learning Sciences, 19(2), 187–229. https://doi.org/10.1080/10508400903530044
- Black, P. (2017). Assessment in science education. In K. S. Taber & B. Akpan (Eds.), Science education: An international course companion (pp. 295–309). BRILL. http://ebookcentral.proquest.com/lib/unm/detail.action?docID=4777239
- Black, P., & Wiliam, D. (2012). Assessment for learning in the classroom. In J. Gardner (Ed.), Assessment and Learning (pp. 11–32). Sage Publications Ltd. https://doi.org/10.4135/9781446250808.n2
- Black, P., & Wiliam, D. (2018). Classroom assessment and pedagogy. Assessment in Education: Principles, Policy & Practice, 25(6), 551–575. https://doi.org/10.1080/0969594X.2018.1441807
- Boud, D., & Falchikov, N. (1989). Quantitative studies of student self-assessment in higher education: A critical analysis of findings. Higher Education, 18(5), 529–549. https://doi.org/10.1007/BF00138746
- Bourke, R. (2018). Self-assessment to incite learning in higher education: Developing ontological awareness. Assessment & Evaluation in Higher Education, 43(5), 827–839. https://doi.org/10.1080/02602938.2017.1411881
- Emirbayer, M. (1997). Manifesto for a relational sociology. American Journal of Sociology, 103(2), 281–317. https://doi.org/10.1086/231209
- Emirbayer, M., & Mische, A. (1998). What Is Agency? American Journal of Sociology, 103(4), 962–1023. https://doi.org/10.1086/231294
- Falchikov, N., & Boud, D. (1989). Student self-assessment in higher education: A meta-analysis. Review of Educational Research, 59(4), 395–430. https://www.jstor.org/stable/1170205
- Gaynor, J. W. (2020). Peer review in the classroom: Student perceptions, peer feedback quality and the role of assessment. Assessment & Evaluation in Higher Education, 45(5), 758–775. https://doi.org/10.1080/02602938.2019.1697424
- Ibarra-Sáiz, M. S., Rodríguez-Gómez, G., & Boud, D. (2020). Developing student competence through peer assessment: The role of feedback, self-regulation and evaluative judgement. Higher Education, 80(1), 137–156. https://doi.org/10.1007/s10734-019-00469-2
- Kolluri, S., & Tichavakunda, A. A. (2023). The counter-deficit lens in educational research: Interrogating conceptions of structural oppression. Review of Educational Research, 93(5), 641–678. https://doi.org/10.3102/00346543221125225
- Li, H., Xiong, Y., Hunter, C. V., Guo, X., & Tywoniw, R. (2020). Does peer assessment promote student learning? A meta-analysis. Assessment & Evaluation in Higher Education, 45(2), 193–211. https://doi.org/10.1080/02602938.2019.1620679
- Nieminen, J. H., & Tuohilampi, L. (2020). ‘Finally studying for myself’ – examining student agency in summative and formative self-assessment models. Assessment & Evaluation in Higher Education, 45(7), 1031–1045. https://doi.org/10.1080/02602938.2020.1720595
- Panadero, E., & Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education, 44(8), 1253–1278. https://doi.org/10.1080/02602938.2019.1600186
- Russell, A. A. (2013). The evolution of calibrated peer review™. In T. Holme, M. M. Cooper, & P. Varma-Nelson (Eds.), Trajectories of Chemistry Education Innovation and Reform (Vol. 1145, pp. 129–143). American Chemical Society. https://doi.org/10.1021/bk-2013-1145.ch009
- Sorensen-Unruh, C. (2024). The ungrading learning theory we have is not the ungrading learning theory we need. CBE—Life Sciences Education, 23(3), es6: 1-12. https://doi.org/10.1187/cbe.24-01-0031
- Valencia, R. R. (2010). Dismantling contemporary deficit thinking: Educational thought and practice. Routledge.
- Vassallo, S. (2013). Critical pedagogy and neoliberalism: Concerns with teaching self-regulated learning. Studies in Philosophy and Education, 32(6), 563–580. https://doi.org/10.1007/s11217-012-9337-0
- Vassallo, S. (2015). Using self-regulated learning to reflect on the critical commitments in educational psychology. Knowledge Cultures, 3(2), 49–57. http://www.addletonacademicpublishers.com/knowledge-cultures
- Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research, and Evaluation, 2(2), 1–3. https://doi.org/10.7275/FFB1-MM19
- Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. Jossey-Bass.
- Wiliam, D., & Black, P. (1996). Meanings and consequences: A basis for distinguishing formative and summative functions of assessment? British Educational Research Journal, 22(5), 537–548. https://doi.org/10.1080/0141192960220502