Can AI Prove Understanding? Rethinking Assessment in the Age of Generative AI
As generative AI reshapes how learners produce answers, traditional approaches to assessment are increasingly in question.
This session examines how understanding can be evidenced more meaningfully through application in unfamiliar contexts. Drawing on research in transfer, productive failure, and motivation, alongside a recent pilot, we explore how AI-mediated simulations and role-plays create conditions in which learners must act, decide, and justify under uncertainty.
What to Expect
The difference between performance and understanding in AI-supported learning
How transfer and productive failure contribute to deeper evidence of learning
The role of uncertainty and novel contexts in strengthening assessment
Designing AI-supported assessment that is ethical, motivating, and psychologically safe
About the Experts
Cohen Ambrose
Cohen Ambrose is Course Director at the Digital Learning Institute and an educational developer, learning experience designer, and researcher with over 15 years’ experience across higher education and professional learning.
His work focuses on digital pedagogy, learning theory, and the evolving role of AI in education. He has led the design and delivery of blended and online programmes, supported educators in developing effective learning experiences, and contributed to curriculum and programme reform across both academic and workplace contexts.
Cohen’s research and practice centre on how learning design can better support performance, transfer, and learning in the flow of work.