Jessica Smagler, Head of Research and Outcomes, Kyron Learning
Across higher education, the most common question about AI is no longer “what can it do?” It’s “how do we know it will behave?” That question reflects something important about where the sector stands right now: enthusiasm is no longer the barrier to adoption. Trust is.
Trust in AI isn’t built through promises – it’s built through systems. Without clear internal accountability structures, AI tools operate on good faith alone – and good faith isn’t a governance model.
Institutions evaluating AI-powered tools should look for three interlocking commitments, treated not as product features but as obligations: guardrails and benchmarks, educator control, and a foundation in learning science. The first line of that accountability is governance – specifically, the guardrails that define how AI is permitted to behave.
Guardrails Built for Learning
AI chatbots readily follow students into off-topic conversations, pulling focus away from intended course content and derailing learning goals. Without strict guardrails ensuring that AI behavior stays aligned with educational goals, student safety, and institutional expectations, the integrity of the learning experience is at risk.
In the edtech space, guardrails should operate at two levels. One is educational: making sure the AI stays within lesson boundaries, supports reasoning without offering shortcuts, and redirects students who go off course. The other concerns student security and privacy: ensuring student data is protected, sensitive information is automatically redacted, and access to systems remains tightly controlled. These guardrails should also be structural rather than bolted on, built into how the system works from the ground up, not applied as a filter after the fact.
Neither layer is visible to students, but both matter to the administrators and instructors who are responsible for what happens in their courses. Together, these are guardrails that institutions can trust and hold companies accountable to, because in education, responsible behavior must be verifiable, not assumed.
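To make this concrete, here is a minimal sketch of what a two-layer guardrail check could look like. Everything in it (the keyword-based scope check, the redaction patterns, the function names) is an illustrative assumption, not Kyron's actual implementation; a production system would use trained classifiers and far more robust sensitive-data detection.

```python
import re
from dataclasses import dataclass

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str
    redacted_text: str

# Layer 1 (educational): a naive check that the message touches lesson concepts.
# Illustrative only -- a real system would use a trained topic classifier.
def in_lesson_scope(message: str, lesson_keywords: set[str]) -> bool:
    words = set(re.findall(r"[a-z]+", message.lower()))
    return bool(words & lesson_keywords)

# Layer 2 (security/privacy): redact sensitive data before anything is stored
# or forwarded. Real redaction covers far more than these two toy patterns.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
ID_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(message: str) -> str:
    return ID_RE.sub("[REDACTED ID]", EMAIL_RE.sub("[REDACTED EMAIL]", message))

def apply_guardrails(message: str, lesson_keywords: set[str]) -> GuardrailResult:
    """Both layers run before the message ever reaches the model, which is
    what makes the guardrails structural rather than an after-the-fact filter."""
    safe_text = redact(message)
    if not in_lesson_scope(safe_text, lesson_keywords):
        return GuardrailResult(False, "off-topic: redirect to lesson", safe_text)
    return GuardrailResult(True, "in scope", safe_text)
```

The ordering is the point: redaction and scope checks sit in the pipeline ahead of the model call, so responsible behavior is enforced by the system's structure, not by a filter bolted on afterward.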
Benchmarking for Measurable Results
Guardrails define how an AI system should behave. Benchmarking is how companies prove that it does – and how institutions can hold them to it. Without continuous measurement, guardrails are a promise rather than a practice.
In practice, benchmarking should also operate at two levels. Continuous benchmarking should be run against real learner interactions to detect behavioral drift and measure ongoing alignment with learning objectives. Periodic broader evaluations – run across curated datasets in multiple academic domains – should test for safety, instructional integrity, and consistency.
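As an illustration of the continuous level, here is a minimal sketch of drift detection, with assumed names, scores, and thresholds. It compares a rolling window of quality scores from live interactions against a baseline set during periodic evaluation; a real monitor would use proper statistical tests rather than a fixed tolerance.

```python
from collections import deque
from statistics import mean

class DriftMonitor:
    """Toy continuous-benchmarking monitor. Tracks a per-interaction quality
    score (e.g., an automated rubric's rating of each tutoring exchange) over
    a rolling window and flags drift from an offline evaluation baseline."""

    def __init__(self, baseline: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline            # score from the periodic curated-dataset evaluation
        self.scores = deque(maxlen=window)  # most recent live-interaction scores
        self.tolerance = tolerance          # allowed relative drop before alerting

    def record(self, score: float) -> None:
        self.scores.append(score)

    def drifted(self) -> bool:
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough live data yet to judge
        return mean(self.scores) < self.baseline * (1 - self.tolerance)

# Re-baseline after each periodic evaluation; alert when live quality slips.
monitor = DriftMonitor(baseline=0.92)
```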
Critically, institutions should expect AI providers to share benchmarking results openly. A track record that institutions can point to is what separates accountable innovation from well-intentioned experimentation.
End-to-End Educator Control
AI should amplify great instructors – not replace them. Human oversight is an essential component of AI-powered instruction and should extend across the entire learning cycle, from content creation to the student experience.
At Kyron, for example, no lesson reaches a student without educator review and approval. Instructors set the learning objectives that shape what our AI generates, and retain full editorial control before anything is deployed. This process ensures that what students experience is always aligned to institutional goals.
Educator control should not end once a lesson is deployed to learners. Products should offer visibility into what students are struggling with in aggregate and at the individual learner level, allowing faculty to intervene, adjust, and improve. Insight into misconceptions creates a feedback loop that keeps instructors and institutions informed on student progress.
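To picture that feedback loop, here is a small, hypothetical sketch that rolls tagged "struggle" events up into the two views described above, class-wide and per-student. The event schema and field names are assumptions for illustration only.

```python
from collections import Counter, defaultdict

def summarize_struggles(events: list[dict]) -> tuple[Counter, dict]:
    """Aggregate tagged struggle events, e.g.
    {"student": "s042", "concept": "chain rule"},
    into a class-wide concept count and a per-student concept list."""
    by_concept: Counter = Counter()
    by_student: defaultdict[str, list[str]] = defaultdict(list)
    for event in events:
        by_concept[event["concept"]] += 1
        by_student[event["student"]].append(event["concept"])
    return by_concept, dict(by_student)
```

Views like these let faculty intervene where the whole class is stuck and follow up with individual learners who need targeted support.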
Grounded in Learning Science
Responsible AI providers don’t stop at governance and oversight. They are also accountable for whether students actually learn. Rooting instruction in established learning science frameworks – like Chi and Wylie’s ICAP model and Vygotsky’s Zone of Proximal Development – isn’t just good teaching. It is a standard that AI providers should hold themselves to and a standard institutions should expect when adopting AI-powered instruction.
Decades of research have made clear that real learning doesn’t happen by giving students answers. It happens when students are encouraged to think critically, reason logically, and develop conceptual understanding. It happens when lessons are intentionally structured to achieve clear learning goals.
A landmark study by Graesser and Person found that 92% of student questions focused on surface facts rather than deeper reasoning: students rarely ask the questions that would expose their own misconceptions, so most misunderstandings go undetected and unaddressed. Because students can so easily appear engaged without building true conceptual understanding, instruction must be intentionally designed to surface reasoning and guide deeper thinking.
When AI providers root their tools in learning science, they are making a verifiable commitment to student outcomes – not just a promise of engagement.
This is Accountable Innovation
AI in education demands more than innovation; it demands accountability. Institutions have a right to ask AI providers how they know their tools will behave, and AI providers have an obligation to answer concretely. Organizations using AI must do so in ways that are safe for students and transparent to institutions. By setting guardrails and benchmarks, keeping educators in control throughout, and grounding tools in learning science, we can be confident that we are innovating responsibly.
At Kyron, our commitment to safe, accountable AI has helped us build enduring partnerships with institutions like Miami Dade College and Western Governors University, while creating opportunities to collaborate with forward-looking colleges like Rio Salado College and established curriculum companies like McGraw Hill. When we answer questions with evidence rather than promises, we build the kind of trust that makes responsible AI adoption possible, both for our partners and for the students they serve.
Interested in learning more about Kyron Learning? Visit www.kyronlearning.com or connect with our team to get started.
