Watch the full webinar on demand
Catch the full session with Joe Houghton, plus Q&A highlights.
Insights from a recent Digital Learning Institute (DLI) webinar with Joe Houghton.
Generative AI is moving fast, but the real shift is not “new chatbots”. It is how AI tools are starting to connect to your work, your files, and each other.
In our February webinar, Joe Houghton joined the Digital Learning Institute (DLI) to share the most important developments educators and L&D teams should pay attention to right now. From AI-generated Word documents and slide decks to connectors, agentic workflows, and assessment redesign, this session was a practical tour of what is already possible and what is coming next.
Below is a recap of the most useful takeaways, with clear examples you can apply to teaching, training, content design, and learning operations.
AI is shifting from “chat” to connected workflows across tools like Notion, Google Drive, Gmail, and slide and document builders.
Claude demonstrated a leap in productivity, including creating a polished Word document from a single prompt and generating decks through Gamma.
AI “connectors” are becoming a major capability layer, enabling agentic workflows where AI can carry out multi-step tasks.
In education, the biggest implication is assessment: AI detection is unreliable, so the focus needs to move to competence, performance, and experiential assessment.
Research workflows are improving, including more powerful “deep research” capabilities and better ways to organize and query your own curated resources.
For many teams, the early AI phase looked like this:
Ask a chatbot for an outline
Copy and paste it into a document
Edit it manually
Repeat across tools
The webinar’s central message was that we are starting to move beyond that. AI is increasingly able to connect to other platforms and help with multi-step work. This reduces friction and creates opportunities for faster content development, better knowledge management, and more responsive learning support.
Joe described this as a move toward agentic workflows, where AI agents can complete tasks across systems rather than producing isolated answers.
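None of the tools shown in the session expose exactly this interface, but the agentic pattern Joe described can be sketched in a few lines: an agent completes a multi-step task by calling tools in sequence and passing each result forward, rather than returning one isolated answer. The tool functions below are stubs standing in for real connectors.

```python
# Minimal sketch of an agentic workflow: retrieve -> synthesize -> produce,
# with no manual copy-and-paste between steps. The three "tools" are stubs;
# in practice they would be real connectors (Drive search, an LLM call,
# a document builder).

def search_drive(query: str) -> list[str]:
    # Stub: a real connector would query Google Drive here.
    return [f"doc matching '{query}'"]

def summarize(docs: list[str]) -> str:
    # Stub: a real agent would call a language model here.
    return f"summary of {len(docs)} document(s)"

def draft_brief(summary: str) -> str:
    # Stub: a real agent would generate a formatted document here.
    return f"BRIEF based on: {summary}"

def run_agentic_workflow(topic: str) -> str:
    """Carry out the whole chain of steps for one request."""
    docs = search_drive(topic)
    summary = summarize(docs)
    return draft_brief(summary)

print(run_agentic_workflow("onboarding policy"))
```

The point of the sketch is the shape, not the stubs: each step consumes the previous step's output, which is exactly the stitching work teams currently do by hand.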
One of the standout moments from the session was Joe’s reaction to Claude 4.6, describing it as the biggest single jump he had seen in some time.
Claude produced a fully polished Word document from a single prompt, including professional formatting elements such as a title page, table of contents, headings, and consistent layout.
This signals a shift from “drafting text” to generating finished outputs that are closer to publish-ready.
For learning teams, this is relevant because many workflows rely on repeatable document formats:
learning briefs and design documents
facilitator guides
stakeholder reports
policy and governance templates
program proposals and evaluation write-ups
When AI can produce a stronger first draft in the right structure, designers can spend more time on instructional decisions, quality, and outcomes.
A major part of the webinar focused on connectors, which Joe described as a way to amplify what AI can do by allowing it to work across platforms.
Examples mentioned in the session included connecting AI to:
Google Drive
Gmail
Notion
slide tools such as Gamma
workflow tools and automation platforms
Learning work is rarely contained in one place. Content, briefs, stakeholder notes, and assets often live across drives, knowledge bases, and email threads. Connectors point toward a future where you can:
search and summarize documents across folders
pull information from past decisions and stakeholder emails
assemble a brief using content from multiple sources
generate outputs aligned to your existing templates and style
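The list above can be pictured as one query fanning out across platforms. The sketch below is illustrative, not any vendor's API: each "connector" adapts one platform to a common search interface, so a single question can span a drive, a knowledge base, and an inbox at once.

```python
# Hypothetical connector layer: each Connector wraps one platform behind
# the same search() interface. The platform names and contents are made up.

class Connector:
    def __init__(self, name: str, items: dict[str, str]):
        self.name = name
        self.items = items  # title -> content

    def search(self, term: str) -> list[str]:
        # Naive substring match; a real connector would use the
        # platform's own search API.
        return [f"{self.name}: {title}"
                for title, text in self.items.items()
                if term.lower() in text.lower()]

drive = Connector("Drive", {"Q3 brief": "learning brief for onboarding"})
notion = Connector("Notion", {"Style guide": "brand and onboarding tone"})
gmail = Connector("Gmail", {"Stakeholder note": "budget sign-off"})

def search_everywhere(term: str, connectors: list[Connector]) -> list[str]:
    """One query, many platforms -- no manual copy and paste."""
    hits = []
    for c in connectors:
        hits.extend(c.search(term))
    return hits

print(search_everywhere("onboarding", [drive, notion, gmail]))
# -> ['Drive: Q3 brief', 'Notion: Style guide']
```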
A key caveat raised during the webinar was privacy and control. The value is real, but granular permissioning is still evolving, and teams need to treat access thoughtfully.
The session repeatedly returned to one theme: AI workflows are becoming joined-up.
Instead of doing tasks in separate tools and stitching them together manually, the goal is for AI to:
retrieve the right material
complete steps in sequence
produce an output that is usable
Joe described this as the early stages of agentic workflows, where multiple agents can operate autonomously, share findings, and assemble more complex outputs. In practice, that could mean requests like:
“Create a microlearning pack from our policy PDF and last month’s stakeholder notes, then draft a 5-question knowledge check.”
“Summarize our last 10 learner feedback emails, cluster themes, and draft improvements for the next cohort.”
“Generate a slide deck and facilitator guide for a workshop using our Notion knowledge base and existing brand theme.”
This is where AI begins to look less like a chatbot and more like a learning ops assistant.
A very practical demo in the webinar showed Claude using connectors to generate a slide deck through Gamma, producing a visually structured set of slides that was:
branded using an existing theme
shareable via link
exportable to PowerPoint, PDF, or Google Slides
For educators and L&D teams, this matters because slides are a constant bottleneck. The promise is not “AI makes slides”, but:
AI can create a structured first pass
you refine for accuracy, pedagogy, and tone
you spend less time on layout and starting from blank
This can be especially useful for:
internal training decks
webinar slide prep
program overviews
workshop materials
stakeholder presentations
Another practical section demonstrated Claude inside Excel as an add-in. The example used a cashflow forecast to show how AI could:
analyze patterns and risks (e.g., seasonality, "cash cliff")
provide commentary and guidance based on the spreadsheet
For learning and business teams, the bigger point is that AI is moving closer to where work actually happens. In L&D, this could translate to:
analyzing evaluation spreadsheets
summarizing survey exports
checking cohort performance trends
drafting insights for monthly reporting
Joe highlighted NotebookLM as a preferred tool for working with curated sources and building a structured knowledge base. A key update discussed was a way to query across multiple notebooks through Gemini “gems”, effectively allowing:
multiple notebooks to be used as a combined knowledge set
guided-learning-style querying across curated material
This matters for learning teams who already manage:
policy repositories
program documentation
content libraries
research sources
facilitation notes
When AI can query your own trusted sources first, you reduce hallucination risk and improve accuracy for internal workflows.
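A rough sketch of that "trusted sources first" idea: answer from a curated corpus when it matches, and say so explicitly when it does not, instead of guessing. The scoring below is a naive keyword overlap purely for illustration; tools like NotebookLM use embeddings and a language model, and the documents here are invented.

```python
# Toy retrieval over a curated corpus. If no trusted source matches,
# the function says so rather than inventing an answer -- the behaviour
# that reduces hallucination risk in internal workflows.

CURATED = {
    "travel-policy": "staff travel must be approved by a line manager",
    "ld-budget": "each team has an annual learning budget of 1000 euro",
}

def query_trusted(question: str) -> str:
    words = set(question.lower().split())
    best_id, best_score = None, 0
    for doc_id, text in CURATED.items():
        score = len(words & set(text.lower().split()))
        if score > best_score:
            best_id, best_score = doc_id, score
    if best_id is None:
        return "No trusted source covers this -- escalate, don't guess."
    return f"[{best_id}] {CURATED[best_id]}"

print(query_trusted("who approves staff travel"))
```

Note that the answer carries its source ID, so a reviewer can trace every claim back to a curated document.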
The webinar also referenced updates to “deep research” style features, with improvements like:
larger context windows
better targeting of sources
the ability to interrupt a research job and add instructions mid-way
The common thread across tools is friction reduction:
fewer repeated prompts
fewer restarts
more control over sources
better outputs from the same effort
For educators, this means research and synthesis work can become more reliable and less time-consuming, especially when you are building learning resources or updating curriculum.
A particularly important section of the Q&A addressed AI detection tools and plagiarism.
Joe’s position was clear:
AI detection tools are not reliable due to high false positives
institutions are exposed if they rely on detectors to accuse learners
the more sustainable approach is to redesign assessment:
shift toward competence and performance-based assessment
use experiential and applied tasks
assess learners in ways that require judgement, context, and real-world application
This aligns with a wider trend in education and professional learning:
scenario-based assessments
workplace simulations
portfolio evidence
reflective justification and decision logs
authentic tasks tied to real outputs
The underlying message: if AI can draft an essay in seconds, essays alone cannot be your primary proof of competence.
If you want to act on this webinar’s ideas without getting overwhelmed, focus on three practical moves.
Choose a single repetitive task:
turning notes into a brief
drafting workshop slides
summarizing learner feedback
writing a learning resource
Then test AI on that workflow end-to-end.
A recurring theme was using different tools for what they do best. For example:
one tool for research
one tool for synthesis and writing
one tool for slides
one tool for knowledge base querying
If assessment is part of your remit, begin by:
identifying tasks where AI can produce a “good enough” output
redesigning the assessment so that learners must demonstrate judgement, application, and decision-making
incorporating authentic constraints and real contexts
Connectors allow an AI tool to access and work with other platforms (for example, a drive, knowledge base, or slide tool). They reduce manual copy and paste and enable multi-step workflows across systems.
An agentic workflow is where AI can carry out a sequence of actions autonomously, such as retrieving information, generating an output, and completing tasks across tools, rather than only responding in chat.
AI detection tools are not reliable: they have high false-positive rates and cannot safely be used to prove misconduct. The safer approach is assessment redesign toward performance and competence.
Applied, experiential, and competence-based assessments tend to be more robust. Examples include scenario-based tasks, portfolios, simulations, reflective decision logs, and authentic work outputs.