Live Events

Webinar: The Power of AI Enhanced VR Experiences

21 August




On August 15th we were joined by guest speakers Sophie Thompson (Co-founder & CEO, VirtualSpeech) and Helen Hooper (Head of Learning Design, VirtualSpeech), who hosted an online webinar exploring ‘The Power of AI Enhanced VR Experiences’.

Sophie Thompson co-founded VirtualSpeech in 2016 as a way to overcome her fear of public speaking. She realised that VR was uniquely positioned to help people practice skills in a safe environment and build the confidence they need for real-world situations.

In just 18 months of using VR, Sophie went from being too anxious to order food at a restaurant to being interviewed live on BBC World News. Sophie has won awards such as ‘Inspirational Woman in Tech’ for her achievements with the company.

Helen Hooper is an experienced instructional designer with a particular passion for creating engaging and effective learning experiences through active participation.

Her rich background includes 12 years founding and running two award-winning educational start-ups focused on learning through storytelling, via a unique blend of interactive video, personalised learning paths and workshops facilitated by team leaders.

VirtualSpeech is an online education platform that blends eLearning with active learning online and in virtual reality. They focus on using AI and VR to build not just competence but, crucially, confidence in soft skills such as public speaking, sales and customer service.

Learn more about VR and AI in our Professional Diploma in Digital Learning Design

Learn More

The power of AI enhanced VR experiences

Helen believes that AI and VR are so useful in a learning context because they are particularly good at enhancing active learning experiences.

Helen starts off with a quote that she feels sums up one of the most important, and often forgotten, points about effective learning. For context, in her previous role Helen built an active learning platform for pre-schoolers.

“Tell me and I forget, Teach me and I remember, Involve me and I learn” 

The point is that when children are actively involved in a learning experience, that is when they actually learn. If we just tell them something, they might completely forget it or retain only an abstract memory of it, possibly out of context altogether.

When you watch children engaged in active learning, you can instantly tell that they learn far more by acting things out and physicalising the experience than by just being told. Most people are happy to accept that this kind of activity is good for children, but we can sometimes forget how powerful active learning is for adults too.

Why do we learn best through everyday active experiences?

We learn more through our everyday experiences than through books or lessons; we learn through active experiences every day. Take a child, for example. They climb onto a chair, topple off and hurt themselves a little. They are learning the concept of pain, and internalising the idea that they need to be careful when climbing on furniture. They learn that through the actual experience, far more than they would if they were just told to be careful. It is a completely different thing, and it’s the same for adults.

Take public speaking. You give a speech at a conference; some things go well, some things don’t. Overall, we internalise the experience of actually doing it, and every time we do something like that we get a little better. We clearly learn more by doing than by being told.

There is absolutely no getting away from the fact that whether you are a three-year-old or a 65-year-old senior director, as humans we all learn best through active experience. Obviously we can’t rely solely on first-hand experiences for all learning, so we need to find other methods for training.

As learning designers, when we find a way to simulate those real-world experiences in a training environment, we get really excited. This is where virtual reality comes in: a great tool to help us as adults immerse ourselves in our imagination.

For example, in VirtualSpeech it could work like this: you put a headset on and find yourself completely immersed in a big conference room, able to look all around you, with ambient sounds. This can really trick part of your brain into believing that it’s actually happening right now. Just like a child being absorbed in a story, we become absorbed in the experience.

The immersion triggers the imagination, and the imagination triggers an emotional response: the nerves and anxiety that inevitably come with a real-life speaking situation. It’s by experiencing public speaking with this heightened feeling of stress and anxiety that we really improve. Through VR it feels like we are having the same emotions we would have in the real world, and that’s when we really see it working: that’s when we learn and grow.

But what about AI? Where does AI come into it?

On a general level, AI has improved interactivity and allows for personalised learning. Let’s start with feedback on delivery.

Whether you have just delivered a presentation, answered interview questions or contributed in a meeting, you can get feedback on the way you delivered it. At the end you see a results panel: how long you spoke for, your eye contact (a split of left versus right, and whether you were looking down or up), how many filler words you used, your speaking pace (too fast or too slow) and your volume (too loud or too quiet). Listenability is a measure of how easy it is to understand someone when they are speaking.
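To make those metrics concrete, here is a minimal sketch of what such a delivery-feedback record might look like as a data structure. The field names, types and thresholds are illustrative assumptions, not VirtualSpeech’s actual data model.

```python
from dataclasses import dataclass

@dataclass
class DeliveryFeedback:
    """Hypothetical summary of one practice session (illustrative only)."""
    duration_seconds: float        # how long the learner spoke for
    eye_contact_left_pct: float    # share of time looking left vs right
    eye_contact_right_pct: float
    eye_contact_down_pct: float    # time spent looking down (e.g. at notes)
    filler_word_count: int         # "um", "er", "like", ...
    words_per_minute: float        # speaking pace
    volume_db: float               # average loudness
    listenability: float           # 0-1 score: how easy the speech is to follow

def summary(fb: DeliveryFeedback) -> str:
    """Turn the raw metrics into a short human-readable report."""
    pace = ("fast" if fb.words_per_minute > 160
            else "slow" if fb.words_per_minute < 110
            else "comfortable")
    return (f"You spoke for {fb.duration_seconds:.0f}s at a {pace} pace "
            f"({fb.words_per_minute:.0f} wpm) with {fb.filler_word_count} filler words.")
```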

This AI feedback is really useful to VirtualSpeech because all of their activities are designed to be practiced repeatedly. The idea is that you go in and practice, get your feedback, and then reflect on it. The feedback informs your self-reflection, it helps you think about how you did, and you can also play back the audio recording of yourself. They have recently introduced a playback function that lets you see your body language, what your head is doing and so on, which is very useful when trying to improve.

All of this helps you self-reflect and think about what went well. You can then make a plan for how to improve and go back and practice again.

Next is AI feedback on the content of what you have said. This is very new at VirtualSpeech, and they have used ChatGPT for it. Some activities lend themselves really well to feedback on content; three examples are active listening, the elevator pitch and roleplay. For active listening, VirtualSpeech has an exercise where an avatar gives you a speech generated by ChatGPT. You then have to summarise it in 20 seconds, and the avatar gives you feedback on that summary (a rough sketch of how the two steps might be wired together is shown below).
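VirtualSpeech hasn’t published how this is built, so the following is only a minimal sketch of how the two ChatGPT calls in such an active-listening exercise could be wired together, assuming the OpenAI Python SDK. The model name, prompts and function names are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()        # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"    # illustrative model choice

def generate_listening_speech(topic: str) -> str:
    """Step 1: ask the model for a short speech the avatar will deliver."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Write a 150-word speech for a listening exercise."},
            {"role": "user", "content": f"Topic: {topic}"},
        ],
    )
    return resp.choices[0].message.content

def feedback_on_summary(speech: str, learner_summary: str) -> str:
    """Step 2: ask the model to assess the learner's 20-second summary."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "You assess how well a summary captures the key points of a speech. Be brief and constructive."},
            {"role": "user",
             "content": f"Speech:\n{speech}\n\nLearner's summary:\n{learner_summary}"},
        ],
    )
    return resp.choices[0].message.content
```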

Similarly, for the elevator pitch, they give some training on how you might break the pitch down and structure it. You then deliver your pitch to an avatar, which tells you what you did well and how you could improve.

AI for questions is next. Questions can really help personalise your learning experience, and there are different ways to use them. For example, if you are giving a speech, or are in a meeting or an interview, you can generate AI audience questions by typing a topic of your choice into the system; you then receive questions on that topic, some of which you will not have prepared for because they are AI generated. Even more exciting is when the AI listens to your speech and, at the end, asks you questions based entirely on what you have said.

Next we have AI for roleplay. We all know how roleplay can help people learn, and AI can be a really valuable tool for role-playing a range of scenarios. Helen and Sophie outlined some key differences between traditional roleplay and AI-powered roleplay.

Traditional roleplay can be time-restricted, it requires another person to practice with, people can feel a little awkward, and if that partner isn’t knowledgeable on the subject at hand then the exercise is of limited use.

AI-powered roleplays can alleviate some of these problems. Roleplay scripts and prompts can be written once and fed into the system, so anyone can roleplay different scenarios on demand, which is more scalable and accessible at the point of need. You can even program your roleplaying avatars to be pleasant or angry, for example in a customer service situation, which is much more realistic practice than a branching scenario that becomes predictable after a couple of run-throughs.

Feedback in AI roleplays is really interesting because the AI can provide contextualised feedback. In an interview, for example, it will tell you how you performed and suggest things you could have said instead of what you did say: it responds directly to what you said and offers an alternative. Another benefit of conducting these roleplays with AI is that you can do so in different languages. For the first seven years of VirtualSpeech’s existence, all of their content was purely in English; in the last couple of months they have added eight languages to the AI roleplays, so people who speak languages other than English now have the option to practice in their own language.

Another key thing to consider if you want to bring AI-generated content into your learning is the ability to use custom prompts, tailored to your team or your industry. VirtualSpeech have only recently launched this, so that their customers can build their own roleplay scenarios: they tell the avatar the context and which role to take on, and then let the conversation flow from there. A rough sketch of what such a custom prompt might look like follows below.
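Purely as an illustration (not VirtualSpeech’s actual prompt format), a custom roleplay scenario could be expressed as a system prompt built from a few customer-supplied fields. The template wording, field names and example values here are all assumptions.

```python
# Illustrative sketch: building a roleplay system prompt from customer-supplied fields.
ROLEPLAY_TEMPLATE = """You are playing the role of {role} in a {industry} scenario.
Context: {context}
Attitude: behave in a {attitude} manner throughout the conversation.
Stay in character and respond only as this person would."""

def build_roleplay_prompt(role: str, industry: str, context: str, attitude: str) -> str:
    """Fill the template so the avatar knows its role, context and temperament."""
    return ROLEPLAY_TEMPLATE.format(role=role, industry=industry,
                                    context=context, attitude=attitude)

# Example: an angry-customer scenario for customer-service practice.
prompt = build_roleplay_prompt(
    role="a customer whose delivery arrived two weeks late",
    industry="retail customer service",
    context="the learner is a support agent trying to resolve the complaint",
    attitude="frustrated but open to a fair resolution",
)
```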

Why use AI in your learning journey, and what are the benefits?

1. Increased Realism

You can practice two-way interactions in a realistic way. The questions asked at the end of a presentation, the feedback given to you after you have finished practicing and the free-flowing conversations are all much more realistic and adaptive with AI-enhanced avatars than with things like branching scenarios, which are much more static.

2. Instant Feedback

AI can give you instant feedback, results and advice on your delivery, the content of your dialogue and its context.

3. Personalised

This feedback is entirely personalised to the individual, so it can be tailored to any topic or industry, which really opens up the training material.

4. Scalable

In terms of scalability, simulations mean that we no longer have to wait for the opportunity to learn through real-life experience. We can actively learn through experience via simulations, and use tools like VR to trigger the imagination and an emotional response to the learning material, which is crucial for lasting behavioural change.

5. Roleplay Conversations

The ability for learners to roleplay on demand from their workplace or home means they can practice situations whenever they need to build their skills and confidence.

From a business perspective, that is more efficient in terms of both cost and time, and the ability to repeat on demand makes the training more effective.

6. Customisable

Finally, in VirtualSpeech you can add your own prompts (or, on other platforms, program them in yourself), so the experience is designed entirely around your organisation’s training needs.

Since 2015, VirtualSpeech have been using VR in their training, with various levels of AI to support it. They started off with feedback on how dialogue is delivered, and here are some of the key things they have learnt in that time.

Best practices for using AI and VR in Learning

1. Lead with the learning, not the tech

VR and AI are additional tools in the learning toolbox and, like any tool, should be used when they’re the best method for the problem you’re trying to solve.

For example, Sophie explains that they advocate starting with the learning problem and working backwards to the right method, rather than picking the technology first and hoping it solves your problem.

2. Think like a learner

It is also important to bear in mind that, at this stage, the technology will likely be very unfamiliar to most learners, which can bring a level of vulnerability for some people.

For example, at conferences VirtualSpeech quickly learnt that once people put a headset on they can feel quite exposed and vulnerable, perhaps because their friends want to take a picture of them wearing it.

Onboarding sessions are another useful thing to have: some people are less tech-savvy, and these sessions give them the opportunity to check they are comfortable and to learn how to use the different technologies.

3. Assessment

AI isn’t perfect and has its limitations, just like humans do, so use VR and AI for practice, reflection and learning rather than for assessment.

4. Accessibility

It is always a good idea to have a plan B for your learning, because there may be a small percentage of learners who can’t use the technology, or won’t be able to, for whatever reason.

For example, VirtualSpeech started off as a virtual reality app, then added eLearning, and now all of the virtual reality practice can be done directly within your browser. People who don’t have access to a headset can simply go online and do the practice exercises instead.

5. Prompts

Your learning material is only as good as your prompts. You will need to add parameters to keep the avatar focused on the learning objective that you have asked it to help with. 

It’s not quite as simple as playing around with ChatGPT at home and asking it random questions. You need to put some parameters into the learning to make sure the avatar stays focused and doesn’t go off on a tangent or bring in things you don’t actually want it to. A small sketch of what these guardrails might look like follows below.
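As a purely illustrative example (not VirtualSpeech’s actual implementation), constraining an avatar usually comes down to adding explicit boundaries to its system prompt. The parameter names and constraint wording below are assumptions.

```python
# Illustrative guardrail parameters for a learning-focused avatar.
GUARDRAILS = {
    "learning_objective": "practice handling objections in a sales call",
    "allowed_topics": ["pricing", "delivery times", "product features"],
    "max_reply_sentences": 3,
    "tone": "professional but sceptical",
}

def constrained_system_prompt(g: dict) -> str:
    """Turn the guardrail parameters into explicit instructions for the avatar."""
    return (
        f"You are a roleplay avatar helping a learner to {g['learning_objective']}.\n"
        f"Only discuss these topics: {', '.join(g['allowed_topics'])}.\n"
        f"Keep every reply to at most {g['max_reply_sentences']} sentences.\n"
        f"Maintain a {g['tone']} tone. If the learner drifts off-topic, "
        "politely steer the conversation back to the learning objective."
    )
```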

Discover our Professional Diploma in Digital Learning Design to become an expert in all things AI and VR, or click on the button below to book a call with one of our education consultants!

Watch now