Building Equity into Systems of Assessment, Whether Programmatic or Program-Wide: Why It Can't Be an Afterthought

There’s exciting momentum toward sophisticated assessment systems like programmatic and program-wide models. But too often, #equity remains an afterthought.

As an advocate for student equity, this tension came into sharp focus for me at #HERDSA2025 last week, through several presentations and conversations and through reflecting on my own experiences implementing programmatic and program-level systems of assessment.

A disclaimer that my sensitivity to the tensions between equity and assessment may have been heightened going into the conference by two unrelated events:

  • reading Juuso Henrik Nieminen's article 'The paradox of inclusive assessment', which highlights the paradox between assessment's reductionism in support of decision-making and equity's principles of diversity and interdependence, and

  • an incidental conversation with Sue Sharpe between workshops the day before, on the compounding systemic challenges faced by students from diverse backgrounds navigating university policies and processes.

There is compelling evidence that assessment bias exists, including in standardised testing, negatively affecting learners based on gender, race/ethnicity and disability status. In their 2020 paper A New Decade for Assessment: Embedding Equity into Assessment Praxis, Erick Montenegro and Natasha Jankowski call for socially just assessment systems, asserting that:

‘[assessments should involve] really seeing who our students are and how they learn and what we can do to support them in the way that they are learners’

I am also a strong advocate for programmatic and program-level (wide) systems of assessment, but when equity isn't woven into the fabric of the assessment plan from day dot, I fear we are not just missing an opportunity to design a fairer system, but we risk unintentionally reinforcing the very inequities we hope to dismantle.

Programmatic vs Program-Level Systems of Assessment

Because terminology in this space is ever-evolving, I have included here a short summary of the similarities and differences between the two systems of assessment.

Programmatic assessment is well defined, with 12 core principles outlined in the 2020 Ottawa Consensus Statement. Program-level (or program-wide) assessment has no formal definition but serves as an umbrella concept encompassing various institutional assessment approaches, including program evaluation, curriculum-wide assessment strategies, and quality assurance processes. Fundamentally, program-level assessment is systematic program evaluation focused on measuring whether students achieve program-level learning outcomes and on meeting institutional compliance requirements.

The primary difference is that program-level assessment typically evaluates program effectiveness, focussing on aggregate program outcomes and accountability, while programmatic assessment is an integrated approach focussed on supporting individual learners' continuous development while still providing program-level insights.

Similarities between Program-Level and Programmatic Systems of Assessment

[Table: Similarities between Program-Level and Programmatic Assessment]

Key Differences between Program-Level and Programmatic Systems of Assessment

[Table: Differences between Program-Wide and Programmatic Assessment]

Implementation complexity differs dramatically between these approaches. Programmatic assessment demands a fundamental cultural transformation from a 'testing culture' to a 'learning culture', extensive faculty development to shift staff into mentor/coach roles, and robust technological infrastructure for continuous data collection and portfolio management. In contrast, program-wide assessment can operate within existing institutional structures, utilising established assessment cycles and familiar evaluation frameworks with periodic reviews aligned to accreditation timelines.

Successful implementation examples reflect these complexity differences. Programmatic assessment has found success in professional education contexts, particularly medicine, health sciences and business programs requiring competency-based progression. Program-wide assessment predominates in larger-scale contexts like science programs, where institutional accountability and systematic curriculum evaluation can be achieved through existing course structures and assessment coordination mechanisms, making it more feasible for programs with diverse majors and limited resources for comprehensive assessment transformation.

Programmatic and Program-Wide Systems of Assessment Promise a Fairer System

There is good evidence that programmatic assessment holds the promise of a fairer 'holistic approach to assessment program design'; as defined in the Ottawa 2020 consensus statement, it optimises learning, decision-making, and quality-assurance functions. One of my favourite articles on programmatic assessment, with some embedded guardrails, is What programmatic assessment in medical education can learn from healthcare by Lambert Schuwirth and colleagues.

However, implementation challenges are significant. Lubberta de Jong and colleagues at Utrecht University found that the quality of narrative information in #portfolios - which, in my experience, is a big implementation hurdle - significantly affects decision-making quality. Studies also demonstrate that students perceive each individual data point as high-stakes, despite these being designed, and repeatedly communicated, as low-stakes.

From my work with students, perceptions of the stakes at risk, and students' uncertainty (perhaps even lack of trust) about the value of multiple low-stakes assessment events, create anxiety that weighs heavily on the learning culture and can be compounded for students in equity groups.

Equity Considerations Can't Be a Footnote in the Assessment Framework

Research consistently shows that bias in assessment, whether conscious or unconscious, negatively impacts performance based on gender, ethnicity, and disability status. As most institutions are reforming assessment in response to #AI, it would be a missed opportunity to neglect to approach assessment design through a critical lens, acknowledging that implicit bias can colour learning outcomes, teaching practices, and assessment - standing in the way of equitable learning and the social impact that is at the core of higher education values.

Programmatic assessment, and similarly program-wide assessment, with its emphasis on collecting longitudinal data points across entire curricula, holds tremendous promise for creating more holistic and fair assessment systems, but this same comprehensive approach can amplify existing biases if equity isn't front and centre in the assessment proposal and the design process.

The Cost of Retroactive Equity

When equity isn’t central to the assessment strategy, it is like trying to retrofit accessibility into a building already constructed. It's expensive, often inadequate, and sometimes impossible.

Without equity considerations from the outset, programmatic and program-wide assessments can misidentify students as not yet competent to progress without providing equitable opportunities to learn and demonstrate learning, ultimately creating disparities in academic achievement and limiting future career opportunities.

The consequences ripple outward:

  • Students face barriers that have nothing to do with their actual competence, and for some equity students, leaving their programs of study is perceived as the most appropriate action

  • Faculty unknowingly perpetuate biased assessment practices, and lose the opportunity to learn and develop innovative assessment tools and inclusive practices

  • Institutions fail their diversity and inclusion commitment to graduate a diverse body of students who are equipped to meet the needs of the community

  • Professions lose talented individuals who were unfairly filtered out of the system, and the status quo is retained

Equity-minded assessment isn't about lowering standards but about removing obstacles that interfere with student learning and ensuring all students have equal access to demonstrate their knowledge and skills.

Evidence strongly supports building equity into assessment systems from the beginning rather than retrofitting. Research on inclusive design principles shows that proactive approaches are more cost-effective and produce better solutions than retrofitting.

Success will require sustained institutional commitment, comprehensive faculty development (often underestimated), explicit attention to equity and inclusion, and careful integration of good practices for the institution's context and timing.

#HigherEducation #ProgrammaticAssessment #EquityInEducation #AssessmentDesign #InclusiveEducation #StudentSuccess

I’d love to hear how others are weaving equity in as a central pillar of their assessment strategies: what's working and, importantly, what is not? What can we learn from each other?

Sue Sharpe

Academic Development | Education Design | Accessibility and Inclusion Support

2mo

"As most institutions are reforming assessment in response to #AI, it would be a missed opportunity to neglect to approach to assessment design through a critical lens, acknowledging that implicit bias can colour learning outcomes, teaching practices, and assessment - standing in the way of equitable learning and the social impact that is at core of higher education values." I couldn't agree more, Nalini! It was wonderful to meet you and have a chat about things that matter - to us, to students and to HE overall.
