
How to Leverage Mixed Reality for Medical Simulation and Training: Your Questions Answered

September 13, 2023
by Ben Krynski, Co-Founder, Real Response | Training and Simulation

During our latest webinar, “How to Leverage Mixed Reality for Medical Simulation and Training,” with Real Response and the Royal Australian Air Force, we were flooded with so many great questions from participants that we couldn’t address all of them in the live session. In this blog post, Real Response experts answer the remaining questions. Access the complete recording of the webinar below.

Answers provided by Ben Krynski, Co-Founder of Real Response

Q: Does the physical doll provide haptic feedback that synchronises with the virtual doll?  

A: The manikin (doll) you see in the footage is a real physical object that the trainee is able to interact with. The student can physically insert an IV and perform all the interventions they would normally perform on a physical manikin. If the manikin provides haptic feedback, the student feels it.

Q: Is it possible for AI to make suggestions in an emergency? For example, if someone is hurt and the patient needs stitches, could the AI suggest how many stitches are needed and where, highlighting the points on the body?

A: Not with the current state of AI image/video detection. For example, with a wound that may require stitches, the AI has no way of knowing whether it’s a scratch or a deeper cut, and the fidelity of the manikin or moulage also affects this.

We have experimented with AI detection of medical interventions, but there are issues detecting many medical items with 100% (or close to 100%) accuracy. A tourniquet, for example, presents many different shapes and angles. Likewise, translucent items such as syringes can vary enormously in appearance depending on reflections, the background, and the angle and state of the item. Our conclusion was that AI is not yet suitable for uses where accuracy is paramount; however, we are working with partners so that we can incorporate it into our system as soon as it is ready.
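To make that accuracy constraint concrete, here is a minimal, hypothetical sketch (in Python, not BlueRoom’s actual system) of routing detections below a strict confidence threshold to human review instead of logging them automatically; the `Detection` class, the labels, and the 0.99 threshold are all illustrative assumptions:

```python
# Hypothetical sketch only: gating auto-logged training events behind a
# strict detection-confidence threshold. Not BlueRoom code.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # e.g. "tourniquet" or "syringe"
    confidence: float  # model confidence in [0, 1]


# Accuracy is paramount for assessment records, so the bar is set very high.
REQUIRED_CONFIDENCE = 0.99


def triage(detections: list[Detection]) -> tuple[list[Detection], list[Detection]]:
    """Split detections into auto-logged events and items needing human review."""
    auto = [d for d in detections if d.confidence >= REQUIRED_CONFIDENCE]
    review = [d for d in detections if d.confidence < REQUIRED_CONFIDENCE]
    return auto, review


# Variably shaped tourniquets and translucent syringes tend to score low,
# so with today's models most detections would land in the review pile.
auto, review = triage([Detection("tourniquet", 0.82), Detection("syringe", 0.61)])
print(f"auto-logged: {auto}")
print(f"needs review: {review}")
```

The point is not the particular threshold but that anything short of near-perfect confidence pushes the work back onto a human facilitator, which is why detection is not yet worthwhile for assessment.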

Q: Do you have similar training solutions targeting the Oil & Gas industry?  

A: We experimented with an engineering solution onboard a submarine – see here. The idea was to practise the fine motor skills of repairing a leak in a pipe while immersed in an MR experience on a sub. Although that is not Oil & Gas, we think the ability to practise maintenance repairs on pipes correlates directly with use cases in this industry. If you are interested in leveraging our work for a specific use case, please reach out to us at Real Response.

Q: Can you briefly explain the networking requirements and challenges when deploying and developing these solutions?

A: BlueRoom uses a network switch to connect the Facilitator PC and the headset PCs. There is a lot of activity over the network as data flows between Mission Control and the headset PCs in both directions. One of the main challenges we faced was the video data, which is streamed in real time from the headsets to Mission Control. We had to optimise the logic as well as the size of the data by increasing the compression and adjusting the resolution of the videos.
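As a rough illustration of that trade-off, here is a minimal sketch (in Python with OpenCV, not BlueRoom’s actual implementation) of streaming camera frames over a LAN while trading resolution and JPEG quality for bandwidth; the address, scale factor, and quality value are illustrative assumptions:

```python
# Minimal sketch only (not BlueRoom's implementation): push camera frames to a
# facilitator PC over a LAN, trading resolution and JPEG quality for bandwidth.
import socket
import struct

import cv2  # OpenCV

FACILITATOR_ADDR = ("192.168.1.10", 9000)  # hypothetical Mission Control PC
SCALE = 0.5        # downscale frames to half resolution before sending
JPEG_QUALITY = 60  # 0-100; lower values mean stronger compression, smaller frames


def stream_camera() -> None:
    cap = cv2.VideoCapture(0)  # stand-in for the headset's video feed
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(FACILITATOR_ADDR)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Reduce resolution first: this cuts the encoded size roughly fourfold.
            frame = cv2.resize(frame, None, fx=SCALE, fy=SCALE)
            ok, buf = cv2.imencode(".jpg", frame,
                                   [cv2.IMWRITE_JPEG_QUALITY, JPEG_QUALITY])
            if not ok:
                continue
            data = buf.tobytes()
            # Length-prefix each frame so the receiver can split the byte stream.
            sock.sendall(struct.pack(">I", len(data)) + data)
    finally:
        cap.release()
        sock.close()


if __name__ == "__main__":
    stream_camera()
```

Length-prefixing each frame is a common way to delimit messages over TCP; a production real-time system would more likely use a dedicated video codec and a UDP-based transport, but the compression and resolution knobs are the same ones described above.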

Q: How much training is needed for people who operate the technology? Second question: is it possible to rent the system?

A: We allow a three-hour session to train someone to operate the technology, though this may take less or more time depending on the operator’s technological proficiency.

We hope to be in a position to rent systems and also rent out our MR training space in the future. 

Q: This is incredibly exciting and inspirational. Question for the Blue Room team. What sort of timescale is involved with developing virtual environments? I work in Search and Rescue and we’re looking at multiple environments, as well as aircraft layout. 

A: Thank you, we really appreciate your feedback. The time really depends: a model can take anywhere from a few hours to many months. The level of detail should follow the learning objectives; for example, an aeromedical scenario may require less detail on the fire extinguisher but more on the medical objects. Not everything needs to be high fidelity, only what adds value to the learner.

If you would like to discuss a specific use case, please reach out – we would love to go into detail with you.

Q: How is eye tracking used for this operational purpose?

A: Eye tracking is active, but we are not using it: we record video from the user’s perspective, so we already know what they are looking at.

Q: Does the training include interaction with the aircrew, e.g., role-playing loadmaster or pilot interactions?

A: Absolutely it does. The students are able to interact with both real and virtual characters as required by the learning objectives.

Q: Have you conducted any work with the police, and how could this work when there are so many variants, such as different crime scenes or stop-and-search situations?

A: Law enforcement is an area of interest and we would love to support police with mixed reality training systems. We have some exciting ideas of how BlueRoom can be used and are looking for a law enforcement partner to support this capability.

Q: Are wireless headsets something that’ll be available in the near future?  

A: Nothing to share on that today. We’re focused on powering the highest-end XR experiences that require a lot of computing.

Q: Is there any future technology in development to simulate scalpels or fine medical instruments with controllers? What are the constraints in developing such technology?

A: BlueRoom bypasses the need for controllers and allows trainees to use their own hands and real objects. Thus, BlueRoom in its current form allows the use of scalpels and other fine medical instruments.  

Q: In reference to airway management – are video laryngoscopes being used in BlueRoom? Can the student and facilitator see airway problems, e.g., an anterior larynx?

A: They certainly are. We work closely with Teleflex and have used their video laryngoscopes and EZ-IO successfully in BlueRoom.

Q: Where is it possible to meet you and to see and test your solutions in the near future?

A: This is a link to our upcoming events. You are also welcome to visit us in Melbourne, Australia.

Q: Are there any use cases for medical education in the university setting? Such as remote teaching and simulation?  

A: Not yet, but we would love to work with medical, nursing, and paramedical schools in a university setting in the near future.

Q: Is there a possibility of utilising the BlueRoom tech for a system that gives trainees more mobility? Does the system lend itself to wireless solutions (e.g., backpack PCs or wireless video transfer)?

A: At this point, probably not. We are using the Varjo headset with a laptop, but the power requirements mean that a wireless solution is not really feasible at the moment; you would also need to carry a bulky battery pack capable of powering the laptop and headset (not to mention any other peripherals).

As for a wireless version of the headset: having experimented with other such systems that use a wireless adapter, we found they suffer regular dropouts and, once again, need a lot of power, with battery changes required every couple of hours. That means lots of resetting and tweaking, something we are keen to avoid wherever possible for our use cases.

For the near future, any standalone headset will only be possible by sacrificing one or more of the features the Varjo headset achieves: high resolution, chroma keying, and near-zero latency.

Want to learn more about using virtual and mixed reality for medical simulation?

Download free e-book
