Bio Class vs. Bot Class: Making AI-Ready Assessments the IB Way
Cikgu Andi, an MYP Year 4 Biology teacher at a lively international school in Kuala Lumpur, sorted a wobbling pile of photocopied quizzes 🙂. Those papers had checked photosynthesis facts for years, but headlines now shouted that almost a quarter of jobs will change by 2027 (World Economic Forum). A Gallup poll found that six in ten teachers now use AI tools, and those who use them weekly save about six hours a week (Gallup). Andi blinked: if a bot could solve her quiz in seconds, was she still measuring real learning?
Colleagues in regional IB WhatsApp groups voiced the same worry. Some banned AI. Others shrugged and let learners copy-paste. Neither felt right. Parents at coffee mornings asked, “Will robots make our kids’ exams useless?” The question buzzed like cicadas after rain.
One day …
The principal announced a school-wide challenge: redesign one summative task per subject to embrace responsible AI. For Andi, that meant the “Blue Planet” unit on climate resilience. Nervous yet excited 🤔, she turned to three resources: Hess’s Cognitive Rigor Matrix, the AI Assessment Scale (AIAS), and the IB’s guidance on artificial intelligence in assessment.
Pens, sticky notes, and a mug of Milo in hand, Andi crafted a plan any IB teacher can try tomorrow.
Step 1 Anchor Goals with the Cognitive Rigor Matrix
MYP Biology grades learners on four criteria: A (Knowing and Understanding), B (Inquiring and Designing), C (Processing and Evaluating), and D (Reflecting on the Impacts of Science). Andi rewrote her unit goals with upper-band verbs: analyse DNA barcodes, evaluate ecosystem trade-offs, design a field investigation, create a data-driven action plan. When goals start at analysis and creation, AI cannot replace thinking; it supports it.
Step 2 Pick the Perfect AIAS Level for Each Task
This step deserved storytelling, not a table, so Andi walked through each level as if guiding a friend.
Level 1 No AI
Some knowledge must be mastered cold. During lab rotation, Andi used a rapid-fire oral quiz where students named organelles and enzymes without devices. This met Criterion A’s need for accurate scientific language. It also proved to moderators that learners could perform core skills solo.
Level 2 AI as Planner
Here AI is a brainstorming buddy. Learners crafted research questions on coral bleaching. Andi modelled a ChatGPT prompt that generated ten candidate angles. Students critiqued relevance and bias, then kept their top three and cited the tool in process journals. The activity built ATL Research skills and met the IB’s integrity requirement for transparent acknowledgement.
Classroom hack: Ask learners to paste both prompt and response into their journal, highlighting ideas they kept and those they tossed. Visible thinking, visible integrity.
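For teachers who would rather script this demonstration than type it live, here is a minimal sketch using the official OpenAI Python SDK. The model name, prompt wording, and setup are assumptions for illustration, not a record of Andi’s actual lesson.

```python
# A planner-level helper: ask a model for ten research-question angles,
# then leave all critique and selection to the learner.
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Suggest ten distinct research-question angles on coral bleaching "
    "suitable for an MYP Year 4 biology investigation. Number them 1 to 10."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model would do
    messages=[{"role": "user", "content": prompt}],
)

# Print the prompt and the response together so learners can paste both
# into their process journals, as the classroom hack above asks.
print(prompt)
print(response.choices[0].message.content)
```

Printing the prompt alongside the response keeps the “visible thinking, visible integrity” habit from the classroom hack intact.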
Level 3 AI as Collaborator
Now the bot offers feedback, structure, or starter drafts. Students wrote lab reports on algae pH tolerance. Grammarly flagged surface errors; Claude suggested headings. Andi added a rubric line called “Metacognitive Commentary.” To hit the top band, learners annotated why they accepted, tweaked, or rejected each AI suggestion. That satisfied Criterion C, which values critical evaluation of methods and data, and sharpened ATL reflection skills.
Classroom hack: Require color-coded comment bubbles such as “AI recommended X; I chose Y because…”. Invisible edits become assessable evidence.
Level 4 AI as Co-Creator
Here humans and machines build artifacts neither could finish alone in class time. The summative product became a five-minute screencast, “Mangroves: Carbon Warriors.” Students prompted AI tools to generate animated CO₂ infographics, multilingual subtitles, and a synthetic narration track. The rubric rewarded clear explanation of how each AI element enhanced audience understanding. Criterion D asks learners to discuss science in context; the co-creator role helped them communicate with flair.
Guardrail: The IB policy insists on fact-checking AI text. Andi therefore required a “Source Triangulation” slide listing two peer-reviewed articles confirming every AI-supplied statistic.
Level 5 AI as Explorer
The classroom turned into a mini-innovation studio. Teams proposed ways to monitor local air quality with AI sensors. Teacher and students co-constructed success criteria, reflecting MYP’s focus on agency. Project journals, meeting notes, and community feedback served as moderation evidence. Because projects aimed at real environmental improvement, students recorded Service as Action hours too.
This explorer level illustrated the learner profile attribute “risk-taker” and advanced Southeast Asian sustainability goals without labouring the point.
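To give a flavour of where a Level 5 prototype might begin, here is a minimal Python sketch that flags unusual particulate readings against a rolling baseline. The window size, threshold, and simulated readings are all assumptions; a real student build would read from an actual PM2.5 sensor instead of a hard-coded list.

```python
# Illustrative starting point for an air-quality prototype:
# flag readings that jump well above a rolling baseline.
from collections import deque

WINDOW = 12        # number of readings kept in the rolling baseline
THRESHOLD = 1.5    # flag readings 50% above the recent average

recent = deque(maxlen=WINDOW)

def check(reading_ugm3: float) -> None:
    """Compare one PM2.5 reading (µg/m³) against the rolling average."""
    if len(recent) == WINDOW:
        baseline = sum(recent) / WINDOW
        if reading_ugm3 > THRESHOLD * baseline:
            print(f"ALERT: {reading_ugm3:.1f} µg/m³ vs baseline {baseline:.1f}")
    recent.append(reading_ugm3)

# Simulated sensor loop: steady air, then a traffic-hour spike.
for value in [18, 20, 19, 21, 20, 18, 19, 22, 20, 19, 21, 20, 55]:
    check(value)
```

Debating why a fixed threshold misfires during haze season is exactly the kind of co-constructed success criterion this level invites.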
Step 3 Blend Varied Assessment Methods
Andi blended three methods: quick retrieval checks, a criterion-related summative task, and an open-ended portfolio.
Variety captured the whole learner, aligning with the IB principle that assessment should be balanced and authentic.
Step 4 Write Transparent Rubrics and Guardrails
Dual descriptors: Each rubric row included “Your Thinking” and “AI Engagement.” Example for Criterion C, Processing and Evaluating: “Explains why the chosen AI model’s regression output suits local reef data.”
AI reflection log: Learners documented prompt tweaks, model versions, and verification steps, matching IB guidance that AI use must be traceable. A sketch of one possible log entry follows this list.
Honesty alignment: The school’s policy now quotes the IB AI appendix so that expectations are crystal clear across subjects.
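One way to keep the reflection log consistent and traceable across subjects is to agree on a fixed set of fields. Here is a minimal Python sketch; the field names and sample values are illustrative assumptions, not IB-mandated terminology.

```python
# One possible schema for an AI reflection log entry.
from dataclasses import dataclass, field

@dataclass
class ReflectionEntry:
    date: str               # e.g. "2025-03-14"
    tool: str               # e.g. "ChatGPT", "Grammarly", "Claude"
    model_version: str      # record the exact model, as traceability requires
    prompt: str             # the prompt exactly as submitted
    decision: str           # kept / tweaked / rejected, and the reasoning
    verification: list[str] = field(default_factory=list)  # sources checked

entry = ReflectionEntry(
    date="2025-03-14",
    tool="ChatGPT",
    model_version="gpt-4o",
    prompt="Suggest headings for a lab report on algae pH tolerance.",
    decision="Tweaked: merged two suggested headings into one.",
    verification=["peer-reviewed source", "class dataset"],
)
print(entry)
```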
Step 5 Pilot, Reflect, Iterate
The first run was lively. Some groups pasted entire Wikipedia paragraphs into AI prompts; the result sounded global, not local, and earned mid-band marks. Others spotted bias in a coral-data set and captured the issue in reflections, earning higher scores 🎉. Andi gathered feedback with Mentimeter, refined prompts, and shared wins during a cross-campus webinar.
Five Takeaways Teachers Can Use Tomorrow
1. Anchor unit goals in upper-band verbs such as analyse, evaluate, design, and create, so AI supports thinking rather than replacing it.
2. Match each task to an explicit AIAS level, from no-AI oral quizzes to AI-as-explorer projects.
3. Blend quick checks, criterion-related summatives, and open-ended portfolios to capture the whole learner.
4. Write dual-descriptor rubrics and require a traceable AI reflection log.
5. Pilot the redesign, gather learner feedback, and iterate.
Eco-Fair night arrived. Fatin and Miguel streamed their screencast featuring drone shots blended with AI-animated data. Mid-video, Miguel paused to explain why an AI claim about “mangroves in Antarctica” was removed, and he cited two journal articles verifying their statistics. The hall burst into applause. A visiting workshop leader whispered, “This is everything the IB hopes to see.” Andi felt lighter than a breezy monsoon afternoon.
A Year 9 learner sits at a laptop. Ten AI assistants wait politely. She lifts an imaginary baton and asks, “How can we heal these wetlands?” Purpose, ethics, creativity, and collaboration fill the room. The resulting music of learning will echo far beyond tomorrow’s lesson 🙂.
Works Cited
Gallup. K-12 Teacher AI Adoption Survey 2025. Gallup, 2025.
Hess, Karin. Cognitive Rigor Matrix/Depth of Knowledge. Ohio Department of Education, 2019.
International Baccalaureate. Artificial Intelligence in Learning, Teaching, and Assessment. International Baccalaureate Organization, 2025.
International Baccalaureate. MYP: From Principles into Practice. International Baccalaureate Organization, 2021.
Perkins, Mike, et al. “The AI Assessment Scale Revisited: A Framework for Educational Assessment.” Journal of University Teaching and Learning Practice, vol. 21, no. 6, 2024.
World Economic Forum. Future of Jobs Report 2023: Up to a Quarter of Jobs Expected to Change in the Next Five Years. World Economic Forum, 2023.