Human-Centred Assessment: A Revolution from the Margins to Mainstream

Last week I had the privilege of facilitating a workshop with a brilliant group of educational leaders from diverse disciplines on applying human-centred design to rethink curriculum and assessment. What struck me most wasn't the familiar concepts we discussed, but the collective 'aha' moment when my colleagues realised they'd been developing assessments for years without asking the right questions. Over lunch, one discipline head put it perfectly: "We've been asking 'What can we measure easily?' instead of 'What do our students actually need to demonstrate they have learnt?'" An engineer by training, he noted that it's like designing a car by starting with the speedometer. The conversation that followed, peppered with everything from Tolkien references to tales of assessment adventures gone wrong, crystallised the sense of embarking on an unexpected journey.

That conversation has stayed with me because it captures the revolution we are witnessing: human-centred design moving from the margins to the mainstream of higher education, a fundamental shift that could transform how we think about learning, measurement, and student success.

The Shoots in the Margins Moving into Mainstream

At its simplest, human-centred design is: start with the people you’re designing for, involve them deeply, and keep testing, evaluating and refining your ideas with them.

Human-centred design emerged from the convergence of multiple disciplines, tracing its origins to Nobel laureate Herbert Simon's groundbreaking 1969 work The Sciences of the Artificial, where he first described design as 'changing existing situations into preferred ones'. Simon's thinking took years to migrate from design theory into educational practice, but it reframed how we understand deciding, solving, and understanding problems.

Don Norman's introduction of 'user-centered design' in his 1986 collaboration with Stephen Draper, User Centered System Design, and his influential 1988 book The Design of Everyday Things provided frameworks that emphasised designing systems around human psychology and behaviour rather than technological capabilities. Progressive educators recognised these ideas as transformative and began adopting them in curriculum design, but they remained largely confined to education innovators and pilot projects at the periphery.

Tim Brown and IDEO systematised these ideas in the 2000s; his 2008 Harvard Business Review article and 2009 book Change by Design brought human-centred principles to business and organisational contexts. He defined design thinking as 'a human-centered approach to innovation that draws from the designer's toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success'. The methodology gained momentum through institutions like Stanford's d.school, which demonstrated how empathy-driven iterative processes, combined with Scandinavian participatory design approaches, could solve complex problems across disciplines and industries. What began as product design philosophy gradually migrated into educational contexts, as forward-thinking educators saw the potential to transform learning experiences by shifting from designing for efficiency to designing with students for authentic learning. The intellectual foundations of human-centred assessment design were thus laid.

While design thinking provides the methodology (the processes of empathise, define, ideate, prototype, and test), human-centred design represents the underlying philosophy that puts human needs, experiences, and agency at the centre of all design decisions. In assessment, this distinction is crucial. We are not just applying design processes; we are fundamentally reorienting evaluation around human learning needs rather than administrative convenience.

What's changed isn't the ideas themselves; it's the conditions. The 'one crisis to rule them all' pressures created first by COVID and now by AI have forced institutions to listen to what educational designers and innovators have been advocating from the margins for years. The forced shift to remote learning exposed the limitations of traditional assessment approaches and created urgency around student engagement, equity, and authentic evaluation that made human-centred design suddenly feel essential rather than experimental.

From Educational Innovators to Institutional Policy

What we're witnessing now is these margin conversations becoming mainstream imperatives, and it's based on evidence. For example, Sharon Guamán-Quintanilla and colleagues' (2023) multi-factor study of 910 university students found that design thinking approaches to assessment significantly improved problem-solving and creativity skills across multiple stakeholder perspectives. Luis Felipe Alvarado's 2025 systematic review in Frontiers in Education confirms that design thinking enhances the learning experience through active participation, critical thinking, creativity, and collaboration.

The aspect that really hooks me, though, is the link to inclusive education and closing achievement gaps between diverse student groups. Mollie Dollinger and colleagues' NCSEHE (2022) report on higher education pathways notes that "co-design enables the perceptions of students, teachers, and carers to be authentically captured" and "promotes student autonomy" while creating more inclusive solutions.

The shift from margins to mainstream is perhaps most visible in policy development. What once existed as innovative practice in individual classrooms is now being codified into institutional guidelines and government frameworks, signalling that this revolution has moved far beyond experimental pilots and aspirational statements to systematic transformation.

The principles of human-centred assessments that emerge from the research are refreshingly straightforward:

  • Authenticity: assessments that mirror real-world problems,
  • Collaboration: recognising that learning and work happen socially, and
  • Process-focus: valuing the learning journey alongside outcomes.

These align closely with established human-centred design frameworks like Grant Wiggins and Jay McTighe's Understanding by Design, which advocates starting with desired learning outcomes rather than the most convenient testing methods. Human-centred design in the assessment context means designing evaluation experiences with rather than for students, recognising learners as active participants who bring valuable perspectives to the design process. This differs from, but complements, related approaches like Universal Design for Learning (UDL), which focuses on accessibility, and backward design, which emphasises outcome alignment.

The AI Catalyst and Future Possibilities

The emergence of ubiquitous AI has made things interesting, forcing our own 'there and back again' moment. AI is pushing us to completely reconsider what assessment should measure and how we measure it. The 2023 TEQSA discussion paper on assessment reform for the age of AI by Jason M. Lodge, Sarah Howard, Margaret Bearman, and Phillip Dawson emphasised that assessments should capture critical thinking, judgment, and reflection, competencies "AI is less able to simulate". When we design assessments collaboratively with students, we naturally gravitate toward measuring these uniquely human capabilities: complex problem-solving in ambiguous contexts, interpersonal and collaborative skills, and the ability to work with diverse perspectives to tackle multifaceted challenges.

Cambridge Assessment's 2024 report "The Futures of Assessment: Navigating Uncertainties Through the Lenses of Anticipatory Thinking", authored by Fawaz Abu Sitta and colleagues, envisions assessments by 2050 that are "immersive, dialogue-based and interactive experiences", a fundamental shift from conventional mass testing. The report describes a move toward "'stealth' continuous and collaborative" assessments embedded within learning experiences rather than separate assessment events. This was the intention behind some of my earlier work in VR, AR and game-based assessments, and in leading anatomy education at UNSW Medicine & Health towards a blended model: providing immediate feedback, progressive challenges and deeper engagement while collecting evidence of mastery and the development of collaborative learning skills.

As UQ's Jason Lodge noted in Teaching in Higher Ed podcast episode 528, hosted by Bonni Stachowiak, ChatGPT prompted him to shift towards emphasising the human component in his assessments: it's less about the technology and more about the human, how we learn and how we understand ourselves. Similarly, Dr Mark Glynn, then of Dublin City University, on ThinkUDL podcast episode 75, hosted by Lillian Nave, advocates using UDL principles to create authentic assessments that focus on real-world application rather than on catching cheating after the fact.

An Implementation Reality Check

Let's be honest about the challenges: every fellowship faces its mountains to cross. Every participant in my workshop last week nodded when we discussed the big barriers: resource constraints, risk-averse institutional and academic cultures, and the sheer complexity of coordinating diverse stakeholder voices in design processes.

Human-centred design isn't quick or cheap. It requires significant time investment, specialised expertise, and institutional commitment to experimentation. Large-scale accountability pressures and micro-managing leaders can make collaborative approaches feel risky compared to standardised testing. As Temple Lovelace and Susan Lyons discussed in The Future of Smart podcast episode 28, hosted by Dr Ulcca Joshi Hansen, we must think differently about what and how we measure to build truly human-centred education.

This isn't new thinking. I have been hearing these insights from colleagues like Cath Ellis for years, long before AI dominated our headlines. You know those profound ideas that immediately resonate as true but always encounter resistance because they're not easy to do? This is exactly the kind of margin wisdom we now see taking centre stage.

Most importantly, it requires addressing power imbalances that can make "collaborative" design merely tokenistic rather than genuinely participatory. My one key takeaway from implementing human-centred curriculum design is that you don't have to transform everything at once to succeed. TEQSA's guidance suggests using portfolios of varied tasks (written work, oral presentations, technical demonstrations) to "monitor attainment" over time, creating multiple pathways for students to demonstrate competence. This very much describes the intention of the longitudinal programmatic assessment systems we develop in medical programs.

This Matters Now More Than Ever

We're at an inflection point in higher education where ideas that were once confined to the margins of educational innovation are becoming institutional imperatives. The COVID-19 disruption, combined with AI's challenge to traditional assessment methods, has created conditions where human-centred approaches aren't just pedagogically sound; they are strategically essential.

What struck me most in last week's workshop was the palpable enthusiasm among my colleagues to take on the challenging barriers and make this approach work. I couldn't help but contrast this with similar conversations just three years ago, when educational leaders expressed genuine anxiety about even broaching assessment redesign with their staff.

That evolution from reluctance to readiness tells us something important about where we are in this margins-to-mainstream journey.

Human-centred design offers us a way to create assessment experiences that are more authentic, more engaging, and more equitable. Research consistently shows that empowered, inclusive design approaches improve outcomes for all students. But perhaps most importantly, it offers our students practice in the kind of collaborative problem-solving they will need to succeed in the skills market, where the ability to co-create solutions with diverse stakeholders isn't just helpful, but essential.

The conversation at lunch last week reminded me why I love working in education. When we get assessment right, when we design education with rather than for our students, we're not just measuring learning. We're modelling the kind of thoughtful, inclusive, human-centred approach to complex challenges that our world desperately needs and our students need to see work in practice before they enter the skills market.

This revolution from margins to mainstream isn't happening to us; we are forging it. Like the Fellowship of the Ring, we're leading this transformation together, embracing the opportunity to create more authentic, more engaging, and more equitable assessments. The question isn't whether we will reach our destination, but how we will support each other on the journey.