Watch the full webinar on demand
Catch the full session with Mark Melia, plus Q&A highlights.
Insights from a recent Digital Learning Institute (DLI) webinar with Mark Melia
In a recent Digital Learning Institute webinar, Mark Melia explored a topic that many learning teams know matters, but still find difficult to operationalize: learning analytics.
Mark moved beyond dashboards and reporting for reporting’s sake. Instead, he focused on a more useful question:
How can learning analytics help us make better decisions about learning, performance, and business impact without losing sight of ethics, context, and data responsibility?
What followed was a practical introduction to what learning analytics really is, where meaningful data actually comes from, why data sovereignty matters more than ever, and how tools like Metabase and generative AI are making analytics more accessible to learning teams.
This blog captures the core ideas from the session, rewritten as a guide you can use.
One of the most useful starting points in the webinar was a simple reset: learning analytics is not just a nicer-looking report.
Mark drew on the well-known definition from the Society for Learning Analytics Research, describing learning analytics as the collection, analysis, interpretation, and communication of data about learners and their learning in order to generate relevant, actionable insight.
That last word matters: actionable.
There is little value in collecting data unless it helps you do something better. That might mean identifying where learners are struggling, spotting a pattern in course engagement, improving course design, or deciding whether a learning intervention is actually influencing performance.
The point is not the dashboard itself. The point is what the insight allows you to change.
A major theme in the session was that most learning environments are already rich in data, even if teams are only scratching the surface of what is available.
Mark described LMSs as highly “chatty” systems. They record far more than just completions and quiz scores. Depending on your setup, your available data may include:
enrollment and retention patterns
quiz attempts, scores, and timing
click and navigation behavior
forum participation and interaction patterns
survey and feedback responses
learner profile and demographic data
prior educational or competency information
SCORM, xAPI, or other activity logs
And once you look more closely, even familiar tools produce more nuanced data than many teams realize.
For example, forums are not only sources of qualitative feedback. They can also reveal how often someone posts, the length and tone of their contributions, whether they ask or answer questions, whether others respond to them, and even whether they are using key concepts from the course appropriately.
Similarly, assessments can tell us far more than whether someone passed or failed. They can show how long learners spent on a question, whether they changed answers, where misconceptions are clustering, and whether certain items are actually distinguishing stronger from weaker performers.
The takeaway here was clear: the challenge is often not a lack of data, but knowing what data matters and how to interpret it.
A particularly important point from the webinar was that just because data is available does not mean it should automatically be used.
Mark suggested three conditions for using learning analytics well:
the data should support a credible conclusion
the conclusion should be something valuable
the conclusion should lead to action
This is where many analytics efforts go wrong. Teams jump to available metrics before asking what problem they are really trying to solve.
A strong example from the session was a hypothetical claim that a learner who has not posted in a forum during the first six weeks is unlikely to succeed in a course. That may turn out to be true in some contexts, but it involves several reasoning steps. First, we are assuming that lack of forum posting means lack of engagement. Second, we are assuming that lack of engagement predicts failure.
Those assumptions might be valid, but only if they are supported by evidence.
This is what Mark referred to as a chain of evidence. If analytics is going to influence interventions, decisions, or learner pathways, then the logic linking data to conclusion needs to be explicit and defensible.
Mark also offered a helpful breakdown of the kinds of analytics learning teams are often dealing with:
Descriptive analytics: what happened?
Diagnostic analytics: why did it happen?
Predictive analytics: what is likely to happen next?
Prescriptive analytics: what should we do about it?
This matters because learning analytics should not stop at description.
It is useful to know that engagement dropped. It is more useful to understand why. It is even more useful if that insight helps you decide what support, redesign, or intervention is needed next.
That creates what Mark described as a continuous learning analytics cycle: learners generate data, data becomes metrics, metrics are analyzed, insights lead to action, and those actions then shape future learning and future data.
In other words, analytics is not a static report. It is part of an improvement loop.
Some metrics are noisy. “Time spent learning” is a good example. It sounds useful, but it is often difficult to interpret accurately. Did the learner spend 20 minutes focused on the module, or leave the tab open while doing something else?
Mark also pointed to the limitations of SCORM, which remains widely used but is often quite restricted in the data it can reliably capture and communicate back to the LMS. Messages can be lost, tracking may be incomplete, and the level of detail may not match the questions you want to ask.
If the right data is not being collected, you cannot analyze it later.
You may be able to improve reporting retrospectively, but you cannot recover data that was never captured in the first place.
That makes upfront thinking about data design especially important.
Mark highlighted several core issues learning teams need to think through:
consent
transparency
data control and ownership
privacy and protection
validity and reliability
moral responsibility
The moral responsibility point was particularly strong.
If your data suggests that a learner is at risk, struggling, or likely to fail, there is a responsibility to do something with that insight. Otherwise, you are not using analytics to support learning. You are simply observing failure more efficiently.
This is where analytics becomes more than a technical exercise. It is about how institutions and organizations use data in ways that are fair, transparent, and genuinely supportive.
The webinar also touched on the implications of using AI in these contexts. Where AI tools are being used to analyze learner data or shape decisions that affect people’s progression, performance, or opportunities, transparency becomes essential. Teams need to understand what the system is doing, how it is doing it, and what risks may come with that.
Mark noted that while many organizations host or manage their LMS within controlled environments, they often export data into third-party analytics tools that sit outside their own ecosystem. That raises important questions:
where is the data going?
who controls it?
what jurisdiction applies?
what rights are being granted when that data is uploaded or processed?
This is especially relevant for public sector and European organizations, where sensitivity around data protection, governance, and digital sovereignty is growing.
The broader point was not that every external tool is automatically problematic. It was that learning teams need to understand the data implications of their analytics stack, not just the functionality.
In practical terms, this means being much more deliberate about where learner data is processed, who has access to it, and whether the organization retains meaningful control.
To make this more concrete, Mark used part of the webinar to demonstrate Metabase, an open source business intelligence tool that can sit within an organization’s own infrastructure.
His point was not simply that Metabase is powerful, but that it offers a model where analytics can happen within an environment the organization controls.
Using Moodle as an example, he showed how Metabase can connect to LMS data, support custom SQL queries, and generate dashboards that answer practical questions such as:
which users are enrolled in which courses
who has been enrolled but never accessed the course
which users have not logged in
how often learners are engaging with a given course
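To make the “enrolled but never accessed” question concrete, a query along these lines could be run in Metabase against a Moodle database. The table names below follow Moodle’s standard `mdl_` schema, but treat this as an illustrative sketch to verify and adapt to your own Moodle version, not a drop-in query:

```sql
-- Illustrative sketch: list users enrolled in a course with no recorded
-- access to it. Table and column names follow Moodle's standard schema;
-- check them against your own installation before relying on the results.
SELECT u.id, u.firstname, u.lastname, c.fullname AS course
FROM mdl_user_enrolments ue
JOIN mdl_enrol e  ON e.id = ue.enrolid
JOIN mdl_course c ON c.id = e.courseid
JOIN mdl_user u   ON u.id = ue.userid
LEFT JOIN mdl_user_lastaccess la
       ON la.userid = u.id AND la.courseid = c.id
WHERE la.id IS NULL;  -- enrolled, but never opened the course
```

Once a query like this works in Metabase, it can be saved as a question, visualized, and added to a dashboard alongside the other examples above.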
He also showed how these dashboards can be embedded into Moodle, creating a more seamless analytics experience for administrators, managers, and potentially even learners themselves.
The key message was not that every learning professional needs to become a database expert. In fact, the opposite.
A standout part of the webinar was Mark’s demonstration of how generative AI can help users create SQL queries for tools like Metabase.
Traditionally, using a tool like this well required a good understanding of database structures and query design. That often meant needing specialist support.
Now, however, a learning professional can describe the question they want answered in natural language, specify the LMS and version they are using, and ask a generative AI tool to draft the relevant SQL. That query can then be brought into Metabase, tested, refined, and visualized.
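For example, a prompt to a generative AI tool might look something like the following. The wording is purely illustrative; the key is to state the LMS, the version, and the question precisely:

```text
I am using Moodle with a MySQL database.
Write a SQL query that lists every user who is enrolled in a course
but has never accessed it. Use Moodle's standard table names
(mdl_user, mdl_course, mdl_user_enrolments, and so on) and explain
each join so I can verify the logic before running it in Metabase.
```

The drafted query then goes into Metabase, where it can be tested against real data, refined, and turned into a dashboard.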
This is a significant shift.
It means that learning analytics is becoming more accessible to non-specialists, especially in open source environments where the underlying structures are more visible and better documented.
That does not remove the need for judgement. You still need to know whether the question is sound, whether the data is meaningful, and whether the interpretation is credible. But it does reduce the technical barrier to getting started.
Perhaps the strongest message from the webinar was that learning analytics should be approached with both ambition and restraint.
Ambition, because there is real opportunity here. The combination of rich learning data, open source tools, and generative AI is making it easier than ever to explore questions that were once technically out of reach.
Restraint, because not every metric is meaningful, not every conclusion is justified, and not every use of data is ethical simply because it is possible.
What matters is whether analytics helps us support learners better, improve learning design, and connect learning more credibly to performance and outcomes.
If you are thinking about learning analytics in your own context, here are some useful questions to start with:
What decisions are we actually trying to improve?
What questions do we want our data to help answer?
Are we collecting the right data, at the right level of detail?
Are our conclusions based on evidence, or assumption?
What action will we take if the data reveals a problem?
Where is our learner data being processed, and who controls it?
What would become possible if analytics tools were easier for our team to use?
Learning analytics is not just about seeing more. It is about seeing clearly enough to act responsibly.
And as this DLI webinar with Mark Melia made clear, the future of analytics in learning may be less about bigger dashboards and more about better questions, better judgement, and better control over the data that matters.
Frequently asked questions

What is learning analytics?
Learning analytics is the use of data to understand learner behavior and improve learning outcomes.

What data does learning analytics use?
Common data includes completions, assessment scores, activity logs, engagement, and feedback.

Why does learning analytics matter?
It helps identify issues, improve design, and connect learning to real performance outcomes.

What are the four types of learning analytics?
Descriptive (what happened), diagnostic (why), predictive (what next), and prescriptive (what to do).

What are the common challenges?
Too much data, unclear purpose, weak metrics, and difficulty linking to real impact.

How can learning be connected to business impact?
By combining learning data with CRM, HR, or performance data to find meaningful correlations.

What is data sovereignty?
It’s about where your data is stored, processed, and controlled, especially when using external tools.

Can AI be used in learning analytics?
Yes, to analyze data and generate insights, but it must be used responsibly and securely.

What tools are commonly used?
LMS reporting, BI tools like Power BI, and open-source tools like Metabase.

Do you need technical skills to get started?
Less than before, but you still need to ask the right questions and interpret data carefully.