AI Insights: Focus - The EU AI Act's Hidden Challenge: Could It Push Educational AI Tools Out of Classrooms?

I have a tendency to overthink. I’ll admit it. Whether it’s wondering if I locked the door or spiralling into existential dread over whether a typo in an email made me seem unprofessional, my brain loves to overanalyse. So naturally, when I came across the EU AI Act and its “high-risk AI systems” classification, my immediate thought was: “Are my AI teaching bots about to land me in trouble?”

For context, I teach IGCSE and A-Level Computer Science, and my classroom is home to Adabot, Pseudobot, and StudySesh. Adabot is a Python problem-solver: it helps students troubleshoot code by guiding them to the answers without just handing them solutions. Pseudobot works the same magic for pseudocode, and StudySesh is the revision assistant every teacher wishes they had, breaking down course units into digestible study materials.
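
For anyone wondering what sits behind a bot like this, the core is usually just a carefully written system prompt wrapped around a chat model. Here is a minimal, hypothetical sketch of the kind of setup Adabot could run on; the prompt wording, the ask_adabot helper, and the model choice are illustrative, not my actual configuration.

    # A minimal, hypothetical sketch of a Socratic Python-helper bot.
    # The prompt wording, model choice, and ask_adabot helper are
    # illustrative only, not Adabot's actual configuration.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM_PROMPT = """You are Adabot, a Python tutor for IGCSE and A-Level students.
    Guide the student towards the answer with questions and hints.
    Never write the corrected code for them.
    Stay within the IGCSE and A-Level Computer Science syllabus."""

    def ask_adabot(student_message: str) -> str:
        """Send one student message and return the bot's guidance."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": student_message},
            ],
        )
        return response.choices[0].message.content

    print(ask_adabot("My loop prints the last item twice. What should I check?"))

Notice that the guardrail, "never write the corrected code for them", lives in plain language rather than in logic, which is exactly why the testing I describe later matters.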

I never use AI to teach the core concepts of my lessons. That part is entirely down to me (I tried it once and it did not go well). The AI bots are there for support, helping students tackle problems, solidify their understanding, and revise in ways tailored to their individual needs.

These tools have had a positive impact in my classroom. Students who were hesitant to ask questions now engage more confidently. It is exactly what AI in education is supposed to deliver: personalised, accessible learning.

But here is where I pause. The EU AI Act defines high-risk systems as those that influence educational outcomes, admissions, or access to learning opportunities. While my bots are not grading exams or deciding university admissions, they are guiding students' learning in subtle ways: decisions that shape their learning are being made, and not by me.

Could the tools I see as empowering actually be influencing my students in ways that cross into this high-risk territory? Could the same regulation designed to protect students end up pushing these tools out of the classroom?

I might be wrong about this. Maybe I am overthinking the risks, but I think it is important to share my thinking.

How AI Shapes Learning

At first glance, tools like Adabot, Pseudobot, and StudySesh do not seem to fit the EU’s high-risk definition. They do not grade exams, monitor tests, or decide who gets into university. However, the influence they have on students can be more subtle.

StudySesh, for instance, focuses on the areas a student identifies as needing support. While this is helpful for building confidence, it might unintentionally limit the student's exposure to broader content. By personalising the experience, is it inadvertently ring-fencing content the student does not explicitly ask for? Similarly, Adabot might guide a struggling coder to focus on simpler problems, reinforcing their understanding of the basics but potentially signalling that they are not ready to move on to more advanced problems.

Adding to this complexity is the probabilistic nature of generative AI. These tools do not “know” the best answer but instead generate responses based on patterns in the data they were trained on. This means their guidance is shaped by probabilities rather than certainty, leaving room for misunderstandings or unintended outcomes. For instance, a student might misinterpret advice from Adabot because it generates a plausible response that doesn’t fully align with their actual needs or the curriculum.
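
A toy example makes the point. The vocabulary and probabilities below are invented, but the mechanism, picking each continuation by weighted chance rather than from a lookup table, is how these models behave:

    # Invented probabilities for the next word after a prompt; a real
    # model computes these over tens of thousands of tokens.
    import random

    next_word_probs = {
        "list": 0.45,
        "array": 0.30,
        "dictionary": 0.15,
        "tuple": 0.10,
    }

    def sample_next_word(probs: dict[str, float]) -> str:
        """Pick one word in proportion to its probability."""
        words, weights = zip(*probs.items())
        return random.choices(words, weights=weights, k=1)[0]

    # The same prompt can end differently on every run:
    for _ in range(3):
        print("You could store the scores in a", sample_next_word(next_word_probs))

Run it three times and you may get three different answers, none of them wrong, none of them guaranteed.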

I take steps to minimise these risks through careful prompting and regular testing of the bots to ensure they align as closely as possible with the IGCSE and A-Level Computer Science syllabus. However, due to the nature of AI, it is impossible to test every possible scenario or output. The sheer variability in how the AI generates responses makes complete control unrealistic.
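
My testing is closer to spot-checking than to proof. In code terms, it looks something like the hypothetical harness below, where the questions, the expected syllabus terms, and the ask_adabot helper (sketched earlier) are all illustrative:

    # Hypothetical spot-checks: known syllabus questions paired with
    # terms a well-aligned answer should mention.
    SYLLABUS_CHECKS = [
        ("How does a bubble sort work?", ["swap", "pass", "adjacent"]),
        ("What is the difference between RAM and ROM?", ["volatile", "read-only"]),
    ]

    def run_spot_checks(ask_bot) -> None:
        """Warn about any answer that never mentions the expected terms."""
        for question, expected_terms in SYLLABUS_CHECKS:
            answer = ask_bot(question).lower()
            missing = [term for term in expected_terms if term not in answer]
            if missing:
                print(f"CHECK {question!r}: answer never mentions {missing}")

    # run_spot_checks(ask_adabot)  # reusing the helper sketched earlier

Even a harness like this only samples the space of possible outputs; it raises my confidence, but it cannot guarantee alignment.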

These tools are designed to support learning, but their personalised and probabilistic approach means they guide students in specific directions. Over time, those small nudges can influence a student’s confidence, focus, and even aspirations. Are these tools empowering students, or could they be steering them in ways that unintentionally limit their potential?

What About AI Plagiarism Detectors?

Another area where AI might cross into high-risk territory is plagiarism detection. Many schools and universities rely on these systems to catch academic dishonesty. The problem is, these tools are not always accurate.

False positives happen. A student’s original work might be flagged as plagiarised because it shares common phrases with another source or matches public data in the detector’s database. For students, this is more than a frustrating error. It could lead to disciplinary actions, failed assignments, or even jeopardised university admissions.
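
To see how an honest sentence can trip a detector, consider a toy scorer. The three-word-overlap measure below is a simplification (real detectors are more sophisticated), but the underlying idea of matching shared text fragments is the same:

    # Toy similarity score: Jaccard overlap of word trigrams.
    def trigrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

    def similarity(a: str, b: str) -> float:
        ta, tb = trigrams(a), trigrams(b)
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    student = "In conclusion, the results show that the algorithm runs in linear time."
    source = "In conclusion, the results show that the hypothesis was not supported."

    # A stock academic phrase drives the score up even though the
    # substance of the two sentences is entirely different.
    print(f"similarity: {similarity(student, source):.2f}")

Scale that effect across thousands of submissions and some entirely original work will inevitably score above whatever threshold a school sets.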

These tools often lack transparency, so students do not know why their work has been flagged. Given the high stakes, should these systems also be classified as high-risk under the EU AI Act? More importantly, should schools rely on them without proper safeguards?

Oversight Is Essential

One thing is clear: no AI should be used in education without oversight.

As teachers, we are often the ones experimenting with new tools, but the risks of unmonitored AI are too great to ignore. I have seen my bots go off-track before. StudySesh might still teach hardware, but if its suggestions drift from the specific requirements of the IGCSE or A-Level syllabus, students might be left underprepared for their exams or overwhelmed by higher-level content than the course demands.

This is why I only use AI bot platforms where I can access the transcripts of conversations and have oversight of what students have discussed.
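
Transcript access also makes lightweight automated review possible. The sketch below is hypothetical (the transcript format and the topic list are illustrative), but it shows the idea: flag any exchange that strays outside the syllabus so I know where to look first.

    # Illustrative list of topics outside my syllabus.
    OFF_SYLLABUS_TOPICS = ["neural network", "quantum", "blockchain"]

    def flag_transcript(lines: list[str]) -> list[str]:
        """Return transcript lines that mention out-of-scope topics."""
        return [
            line for line in lines
            if any(topic in line.lower() for topic in OFF_SYLLABUS_TOPICS)
        ]

    transcript = [
        "Student: How do I traverse a 2D array?",
        "StudySesh: Let's walk through nested loops step by step.",
        "Student: Can you explain neural networks instead?",
    ]

    for line in flag_transcript(transcript):
        print("REVIEW:", line)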

I have also used AI tools like Google’s NotebookLM to generate podcasts for my students, summarising key topics. These can be a great resource, but they are not always perfect. Have I listened to every second before sharing? Have I checked that all the content aligns with the syllabus? If I have not, I am risking letting AI influence my students in ways I have not fully vetted.

School leaders must play a role here. Teachers need clear guidance on how to use AI responsibly and systems in place to monitor its impact. Without oversight, even the best-intentioned tools can do more harm than good.

The Problem of AI Literacy

Another issue I have noticed is how students interact with these tools. When I introduced Adabot and Pseudobot to my classes, some students quickly picked up how to use them. They asked precise questions, tested responses, and gained valuable insights. Others struggled, unsure how to phrase their questions or interpret the bots’ advice. Some treated it like Google, not understanding how to engage in a back-and-forth interaction.

This highlighted a gap in AI literacy. If students do not understand how to use these tools effectively, they cannot fully benefit from them. Worse, they might be misled or frustrated by them, and that is itself an educational influence.

AI literacy is no longer optional. It needs to become a core part of education. Students should know how these tools work, what their limitations are, and how to use them critically. Without these skills, we risk widening gaps between students who are comfortable with AI and those who are not.

The Risks and Rewards

There is no question that AI has helped me and my students. The AI bots I have used have helped my students gain confidence, tackle difficult concepts, and take ownership of their learning. But I cannot ignore the risks.

Transparency is essential. Students, parents, and educators need to understand how these tools work and why they make certain recommendations. Oversight is equally important to ensure AI tools align with the curriculum and do not misguide students.

And for tools like plagiarism detectors, accountability should be a non-negotiable standard. A false positive can have life-altering consequences for a student.

Will the EU AI Act Kill Student-Facing AI?

I hope not. But the Act does force us to ask important questions about how we use these tools.

As a teacher, I know the incredible potential of AI to transform education. I have seen my students grow in confidence, tackle difficult concepts, and explore learning in ways they might not have without these tools. But I also know how easily things can go wrong. This is a balancing act, embracing the positives of AI while making sure we protect our students from its risks.

One thing the EU AI Act has made clear to me is that informal approaches are no longer enough. While I have been using AI tools extensively for two years and have a good sense of what works, this is still based on my own impression rather than a formal risk assessment. A structured evaluation process would help identify potential blind spots and ensure that the tools align with both ethical and educational standards. This is not just a recommendation for me but for any teacher using or planning to use AI tools in their classrooms.

What about teachers who are new to AI? I have spent years refining my approach, learning through trial and error, and building an understanding of what works for my students. A teacher just starting out with AI does not have that experience. What kind of training will they need to use these tools effectively? How do we ensure they can use AI in a way that supports learning without introducing unnecessary risks? This is where schools, leaders, and policymakers need to step in to provide clear training and guidance for educators.

I don’t intend to stop using AI in my classroom. But I will be thinking a little harder about its impact, testing a little more, and tightening up my feedback loops to ensure these tools are supporting my students in the best way possible.

Because while AI is just a tool, the futures it touches, those of our students, are very real. And we owe it to them to get this right.

Great article, Matthew. I really love your ethos around student-centred learning; being a 'guide on the side' is so much more powerful for students than the old 'sage on the stage'. If you're concerned that your tools could fall within the scope of the EU AI Act, you may like to attend our upcoming free webinar. With Chapters I and II of the Act coming into force on 2 February 2025, we will start exploring what it means. We hope you can join us: https://www.linkedin.com/events/empoweryourbusiness-compliancew7275096189484642304/theater/ We may not have all the answers, but we just might have some that will alleviate your concerns.

Armand Ruci M.A, M.Ed

Matthew Wemyss Thank you for sharing this! The insights into the regulatory framework of the EU AI Act are essential for grasping its potential global impact. It's intriguing to observe how it prioritizes transparency and accountability while also fostering innovation. I'm looking forward to seeing how these policies influence the development and deployment of AI in the future!

Salome Oloo

Very helpful

Aileen Wallace

This needs to be emailed to every head teacher and anyone making decisions on education

C. Harun Böke

Great article, Matthew Wemyss. You probably considered this, so I won't suggest but rather ask: have you considered creating a bot that monitors students' conversations with your bots?
