
WO2020073103A1 - Virtual reality system - Google Patents

Virtual reality system

Info

Publication number
WO2020073103A1
WO2020073103A1 (PCT/AU2019/051114)
Authority
WO
WIPO (PCT)
Prior art keywords
user
real
virtual
life
manikin
Prior art date
Application number
PCT/AU2019/051114
Other languages
French (fr)
Inventor
Jeremy Burton
Tayla Rachelle JAMES
Yang Beng NG
Peter Mark CAREY
Timothy MOLONY
Benjamin Ward
Original Assignee
St John Ambulance Western Australia Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2018903876A0
Application filed by St John Ambulance Western Australia Ltd
Publication of WO2020073103A1

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B23/00 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/28 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
    • G09B23/288 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine for artificial respiration or heart massage
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/0007 Signalling

Definitions

  • the present invention relates to a virtual reality system.
  • the invention has been devised particularly, although not necessarily solely, in relation to a virtual reality training system and assessment system, in particular a virtual reality training system and assessment system in connection with medical procedures such as first-aid procedures.
  • Provision of accredited training courses requires trainers in the form of human beings; this is particularly true for training courses involving medical activity such as first-aid courses. This makes delivery of these courses cumbersome and relatively expensive, particularly because in many first-aid procedures such as CPR the trainers (when issuing to a trainee accreditation of completion of the first-aid course) need to be present in order to confirm that the trainee has acquired the skills for conducting CPR correctly. Also, the need for the trainers to be present means that first-aid courses typically cannot be conducted outside working hours.
  • a system for providing a virtual environment comprises sensory devices and a computer system adapted for transferring signals representative of particular information between the sensory devices and the computer system for generation of the virtual environment comprising at least one virtual object, means for immersing the user in the VR environment, and at least one real-life object permitting the user to interact with the real-life object, the real-life object being adapted to interchange signals with the computer system, wherein the signals being representative of particular information related to the interaction between the user and the real-life object.
  • the system is configured for providing a first VR environment including scenes absent of particular virtual objects for viewing by the user, the particular objects having counterpart real-life objects that are present in the area where the system is located, and subsequently providing a second VR environment including scenes comprising the first VR environment plus the particular virtual objects for viewing by the user, the particular objects having counterpart real-life objects that are present in the area where the system is located.
  • the system is adapted to provide instructions for communication between the user and the real-life object concerning the interaction.
  • the interaction between the user and the real-life object comprises training sessions for the user to conduct particular activities of the training sessions while interacting with the real-life object or virtual object.
  • the system is configured to generate virtual objects counterpart to the real-life objects.
  • the system is configured to generate virtual objects counterpart of the body parts of the user.
  • the body parts of the user that have counterpart virtual objects comprise at least one hand.
  • the system is configured to permit interaction between the user and at least one virtual object with at least one virtual object counterpart to a body part of the user.
  • the user’s body part having a counterpart virtual object comprises at least one hand of the user.
  • the system is configured to provide the user simultaneously with (1) a hands-on experience while interacting with the real-life object and (2) a virtual experience while manipulating the real-life object.
  • the system is configured to provide feedback to the user regarding the interaction while the user is interacting with the real-life object and/or the virtual object.
  • the system is configured to provide the feedback through the virtual objects comprising providing instructions and information to the user.
  • the virtual objects comprise graphical elements adapted to provide information to the user related to the interaction between the user and the real-life object.
  • the system is configured for generating virtual buttons and pop-up menus for permitting the user to select the type of interaction the user will have with the real-life and/or virtual objects and for controlling the interaction.
  • the system is configured to permit the user to select, deselect and skip particular interactions.
  • the virtual user’s hand comprises the virtual buttons.
  • the menu comprises pop-up menus popping up in the virtual environment where the user is immersed.
  • the pop-up menu pops out from the virtual user’s hand.
  • the system is configured for receiving and emitting commands for controlling the interaction.
  • the commands comprise voice commands and commands resulting from sign-language generated by the user or virtual objects.
  • the system is configured for providing a virtual assistant for providing the voice commands and commands resulting from sign-language.
  • the commands comprise introduction content, explaining the activity to be conducted by the user and providing assistance and feedback when the user is conducting the interaction.
  • the system is configured for providing a human-like appearance to the virtual assistant.
  • the system is configured for the virtual assistant to communicate via voice and sign language.
  • the system is configured for the virtual assistant to interact with the user through virtual body parts of the user.
  • the system is configured for providing first-aid sessions.
  • the real-life object comprises a manikin configured for permitting the user to conduct particular activities of the selected first-aid sessions.
  • the particular activities comprise CPR, MMR and Patient Handling.
  • the manikin communicates with the computer system wirelessly.
  • the manikin comprises sensor devices for monitoring the actions of the user during CPR, MMR and/or Patient Handling.
  • the sensor devices comprise at least one pressure sensor for generating signals representative of the pressure the user is applying to the chest of the manikin while the user is conducting CPR.
  • the sensor devices comprise at least one sensor for generating signals representative of the periodicity with which the user is applying pressure to the chest of the manikin while the user is conducting CPR.
  • the system is configured to provide a virtual object for providing feedback to the user while the user is conducting CPR.
  • the virtual object comprises a graphical element providing an indication to the user of the amount of pressure being applied by the user to the chest region of the manikin.
  • the graphical element comprises a bar comprising an outer border defining an inner region divided into a plurality of regions, permitting visualisation by the user of the amount of pressure applied to the chest region by the percentage of the inner region that is highlighted.
  • the sensor devices comprise at least one flow sensor for generating signals representative of the breathing activity of the user while the user is conducting MMR.
  • the sensor devices comprise at least one proximity sensor for generating signals representative of the location of the open mouth of the user with respect to the open mouth of the manikin while the user is conducting MMR.
  • the sensor devices comprise at least one gyroscope for generating signals representative of the orientation of the manikin while the user is conducting patient handling activities.
  • the system is configured for providing a VR environment that includes activities absent of any virtual object that is a counterpart version of a real-life object located at the location of the system.
  • a method for delivering training sessions to users so that the users, while immersed in the VR environment, acquire a plurality of skills while interacting with virtual and/or real-life objects, the method comprising the steps of: a. generating at least one virtual object that is a counterpart version of a real-life object adapted for interaction with the user; b. recording the interaction between the user and the real-life object for generating feedback content for the user; and c. generating feedback content and transducing the feedback content into signals representative of the feedback content for generation of either visual or aural messages, or virtual objects of the VR environment, for transmitting the feedback content to the user.
  • the method further comprises generating instructional content related to instructions for interacting with the real-life objects or virtual objects for provision to the user and transducing the instructional content into signals representative of the instructional content for generation of either visual or aural messages, or virtual objects of the VR environment, for transmitting the instructional content to the user.
  • the method further comprises receiving commanding content from the user related to commands for interacting with the real-life objects or virtual objects and transducing the commanding content into signals representative of the commanding content for controlling the interaction between the user and either the virtual objects of the VR environment or the real-life objects.
  • the interaction between the user and the real-life object comprises performing first-aid procedures for training purposes and subsequent accreditation.
  • the method further comprises the step of tailoring the training sessions based on prior performances of the user.
  • the method further comprises generating virtual buttons and pop-up menus for permitting the user to select the type of interaction the user will have with the real-life and/or virtual objects and for controlling the interaction.
  • the interaction comprises conducting first-aid activities on a real-life manikin while the user, being immersed in the virtual reality, simultaneously interacts with a virtual object that is a counterpart version of the real-life manikin.
  • the interaction comprises CPR, MMR and Patient Handling.
  • the method further comprises the step of recording the pressure the user is applying to the chest of the manikin while the user is conducting CPR.
  • the method further comprises the step of recording the periodicity with which the user is applying pressure to the chest of the manikin while the user is conducting CPR.
  • the method further comprises, based on the signals representative of the pressure applied by the user and the periodicity with which the user is applying pressure, generating feedback content and transducing the feedback content into signals representative of the feedback content for generation of either visual or aural messages, or virtual objects of the VR environment, for transmission to the user.
  • the method further comprises the step of recording the location of the open mouth of the user with respect to the open mouth of the manikin.
  • the method further comprises the step of recording the breathing activity of the user while the open mouth of the user and the open mouth of the manikin are fluidly connected.
  • the method further comprises, based on the signals representative of the location of the open mouth of the user and the breathing activity of the user, generating feedback content and transducing the feedback content into signals representative of the feedback content for generation of virtual objects of the VR environment for transmission to the user.
  • the method further comprises the step of recording the location and orientation of the manikin while the user is conducting patient handling exercises with the manikin.
  • the method further comprises the step of storing the information recorded during the interaction between the user and the real-life object and/or virtual object and comparing the recorded information with standard information representative of the interaction between the user and the real-life object and/or virtual object that is required for accreditation purposes.
  • a real-life object for interaction by a user, the real-life object comprising sensor devices for monitoring the interaction between the user and the real-life object, the real-life object being adapted for operative connection with a system configured for generating a VR environment comprising at least one virtual object being a counterpart version of the real-life object.
  • the real-life object is adapted for operative connection with the system in accordance with the first aspect of the invention.
  • the real-life object comprises sensor devices for detecting actions of the user during the interaction by the user.
  • the real-life object comprises a manikin configured for permitting first-aid activities to be conducted on the manikin.
  • the first-aid activities comprise CPR, MMR and/or Patient Handling conducted on the manikin.
  • the sensor devices comprise at least one pressure sensor for generating signals representative of the pressure the user is applying to the chest of the manikin while the user is conducting CPR.
  • the sensor devices comprise at least one sensor for generating signals representative of the periodicity with which the user is applying pressure to the chest of the manikin while the user is conducting CPR.
  • the sensor devices comprise at least one flow sensor for generating signals representative of the breathing activity of the user while the user is conducting MMR.
  • the sensor devices comprise at least one proximity sensor for generating signals representative of the location of the open mouth of the user with respect to the open mouth of the manikin while the user is conducting MMR.
  • the sensor devices comprise at least one gyroscope for generating signals representative of the orientation of the manikin while the user is conducting patient handling exercises.
  • the manikin is configured for wirelessly connecting with the computer system.
  • the real-life object comprises electric circuitry for managing the real-time stream of data from the sensors and sending that data to the computer system.
  • processing includes reading data streams with information representative of the interaction between the user and the manikin, sorting which sensor data is required and then using it to affect the VR environment experienced by the user.
  • the electric circuitry operates at a lower voltage than conventional electric circuitry to improve battery life.
  • the electric circuitry accepts commands from the computing device to enable or disable sensor data streams.
  • Figure 2 shows a user wearing a head mounted device (HMD) connected to the computing device for immersion in a VR environment;
  • HMD head mounted device
  • Figure 3 shows a flowchart illustrating a particular arrangement of a training method in accordance with the present embodiment of the invention
  • Figures 4 and 5 show images of particular animated scenes of the training method shown in figure 3 including a VR assistant in accordance with the present embodiment of the invention
  • FIG. 6 shows a schematic view of the VR assistant shown in figures 4 and 5 holding a human heart during training method in accordance with the present embodiment of the invention
  • Figure 7 shows a user wearing a head mounted device (HMD) looking at her/his hand;
  • Figure 8 shows a top perspective view of the virtual hand being seen by the user shown in figure 7;
  • Figure 9 shows a bottom perspective view of the virtual hand being seen by the user shown in figure 7, including an activatable hand menu;
  • Figure 10 shows the user’s virtual hands shown in figure 7 immersed in a particular VR environment for conducting a first-aid activity for a particular patient;
  • Figure 11 shows the user’s virtual hands shown in figure 7 approaching the patient shown in figure 10;
  • Figure 12 shows the user’s real-life hands performing a cardio-pulmonary resuscitation (CPR) activity on the real-life manikin;
  • Figure 13 is a schematic top view of a virtual manikin in accordance with the present embodiment of the invention during a CPR activity;
  • CPR cardio-pulmonary resuscitation
  • Figure 14 is a schematic top view of the virtual hands shown in figure 13 and a graphical element for visually representing performance of the CPR activity conducted by the user shown in figure 7;
  • Figure 15 is a schematic top view of the manikin shown in figure 1 used for the CPR activity of the training method shown in figure 3;
  • Figure 16 is a plan view of a control circuit, in accordance with the present embodiment of the invention, of the manikin shown in figure 15;
  • Figure 17 shows a flowchart illustrating a particular arrangement of the training activity for the MMR exercise in accordance with the present invention;
  • Figures 18 to 21 show images of particular animated scenes of another arrangement of the training method in accordance with the present embodiment of the invention.
  • Figure 22 is a block diagram showing a particular arrangement of the system architecture of the invention.
  • Figure 1 shows a particular arrangement of a system 10 in accordance with an embodiment of the invention to provide a particular virtual reality (VR) experience to a user 12.
  • VR virtual reality
  • the system 10 in accordance with the present embodiment of the invention comprises sensory devices (such as sensors 22 and 24) and a computer system 17 adapted for transferring electric signals (representative of particular information) between each other for generation of the VR environment.
  • the system 10 in accordance with a particular arrangement of the present embodiment of the invention also comprises real-life objects such as, for example, a manikin 16 permitting a user 12 to interact with the real-life objects. Interaction between the user 12 and the real-life objects allows, for example, the user to practice particular activities of training sessions such as CPR on the manikin 16 for training purposes.
  • the real-life objects are adapted to interchange electric signals (representative of particular information) with the computer system 17 in order to provide to the user 12 instructions of the training process and feedback while the user 12 is conducting the particular activities on the real life objects such as manikin 16.
  • the computer system 17 may be adapted to be connected to either the internet or an internal network. Alternatively, the computer system 17 may be a standalone computer system.
  • the system 10 is adapted to generate a VR environment for immersion of a user 12 in the VR environment.
  • the VR environment comprises virtual objects 38 for viewing by the user 12 when immersed in the VR environment.
  • the virtual objects 38 are counterpart objects with respect to the real-life objects (such as the manikin 16) or body parts (for example, hands 21) of the user 12.
  • the VR environment further comprises virtual objects that may not necessarily have counterpart real-life objects.
  • examples of such virtual objects are the pop-up menus 66 and activation buttons 58 shown on the virtual hands 56 in figures 8 and 9, and the interactive graphs 70 shown in figures 13 and 14.
  • a further example is the VR assistant 36 (see, for example, figures 4 and 5) for providing assistance to the user 12 during the virtual training sessions.
  • creation of the virtual counterpart version of the manikin 16 permits the user 12 (while conducting a particular activity to acquire a particular skill) to manipulate a real-life object (such as the manikin 16) while viewing a counterpart virtual version (the virtual manikin 38) of the real-life object.
  • the system 10 comprises a VR system 14 and one or more objects (such as a manikin 16) for the user 12 to interact with the object (the manikin 16) for conducting particular activities, for example, to acquire one or more skills of training sessions such as sessions for training users 12 to conduct first-aid procedures.
  • examples of first-aid activities are resuscitation procedures, including cardio-pulmonary resuscitation (CPR) and mouth-to-mouth resuscitation (MMR), as well as handling of patients that, for example, have been injured in an accident or have been victims of crime.
  • CPR cardio-pulmonary resuscitation
  • MMR mouth-to-mouth resuscitation
  • the VR system 14 is configured to present virtual reality content to a user 12 in such a manner that the user 12 is immersed in a VR environment.
  • the VR system at least comprises a computing device 18, a user interface 20, and a plurality of sensor devices such as sensors 22 and 24 shown in figure 1 and sensors 76 and 74 shown in figure 15.
  • the computing device 18 is configured to communicate over the internet with a server 26; for this, the computing device 18 includes communication lines, or ports to enable the exchange of information with the server 26 over the internet.
  • the server 26 is configured to provide the content for generation of the VR environment in which the user 12 will be immersed by hosting the processor and storage devices; for this, the server 26 is configured to communicate with the computing device 18 over the internet or a particular network.
  • Server 26 includes electronic storage devices, one or more processors, and communication components, among others.
  • Server 26 includes communication lines, or ports, to enable the exchange of information over the internet with the computing device 18 for generation of the content producing the VR to be provided to the user and/or other computing platforms via a network.
  • Server 26 includes a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server 26. In a particular arrangement, the server 26 may be implemented by a cloud of computing platforms operating together as a server 26.
  • the computing device 18 may be a standalone computing device 18 including a plurality of hardware, software, and/or firmware components and storage devices (for storing user profiles and performance history as well as information used for producing the content that will be provided to the user interface) operating together to provide the functionality attributed herein to the computing device 18.
  • Computing device 18 may include, for example, a cellular telephone, a smartphone, a laptop, a tablet computer, a desktop computer, a television set-top box, smart TV, a gaming console, a virtual reality headset, and/or other devices.
  • the content that the VR system 14 generates is to be provided to the user 12 for immersion in the VR environment.
  • Immersion in the VR environment occurs when the user 12 interacts with a particular user interface 20 such as a particular enclosed area or devices that permit displaying to the user’s eyes the content in a 3D configuration.
  • HMD head mounted device 28
  • the HMD 28 acts as the user interface 20 to provide the content to the user 12 - see figure 2.
  • the user interface 20 comprises a HMD 28 that is worn on the head 23 of a user 12.
  • the VR content may be presented to the user 12 via a display included in the HMD 28.
  • the HMD 28 is configured such that a perception of a three-dimensional space is created by two stereoscopic movies, one generated for each of the user’s eyes. Each movie is rendered in real time and then displayed for viewing by the user’s corresponding eye. The convergence of these two movies in real time (along with how those views react to the rotation and position in space of the user’s head 23) creates a 3D effect generating a virtual environment.
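  • By way of illustration only (this sketch is not part of the specification, and its function and parameter names are assumptions), the per-eye rendering described above can be expressed as deriving two render positions from one tracked head pose:

```python
import numpy as np

def eye_positions(head_position, head_rotation, ipd=0.064):
    """Derive left/right eye render positions from one tracked HMD pose.

    head_position: (3,) world-space position of the HMD.
    head_rotation: (3, 3) rotation matrix whose columns are the head's
        local axes expressed in world space.
    ipd: interpupillary distance in metres (0.064 m is a common default).
    """
    right_axis = head_rotation[:, 0]          # head's local +X axis
    half_offset = (ipd / 2.0) * right_axis
    left_eye = head_position - half_offset    # each movie is rendered
    right_eye = head_position + half_offset   # from its own eye position
    return left_eye, right_eye
```

Rendering the scene once from each returned position would yield the two stereoscopic movies whose real-time convergence creates the 3D effect.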
  • the HMD 28 comprises a chamber 30 comprising the display to be presented to the user’s eyes for the user 12 to perceive the immersive 3D effect mentioned above, permitting immersion of the user 12 in the VR environment.
  • the HMD 28 is operatively connected (wired or wirelessly) to the computing device 18 (connected via the internet or a network to the servers 26) for receiving the content that generates the VR environment in which the user 12 will be immersed.
  • the HMD 28 comprises headsets 32 for providing aural information to the user 12.
  • the aural information may comprise music, descriptions and instructions to permit the user 12 to navigate within the VR environment.
  • a microphone may also be provided to permit the user 12 to, for example, complete objectives using voice commands or give verbal instructions to a virtual assistant 36 (to be described at a later stage; see figure 5).
  • the system 10 also comprises sensor devices such as sensors 22 and 24.
  • the sensor devices (for example, 22 and 24) are configured to generate output signals conveying information related to the direction that the user 12 is gazing (the view direction) and/or other information such as the particular location of the user’s body and a particular user’s body part such as the user’s head 23 and hands 21.
  • the view direction of the user 12 may comprise (1) a physical direction toward which a particular gaze of the user 12 is directed, (2) an orientation of one or more parts of the user's body (for example, the user's head 23 may be tilted or the user 12 may be leaning over), and (3) a position of the user 12 within the area of the location of the system 10, and/or other directional information.
  • the view direction may include a first view direction that corresponds to a first physical direction toward which the gaze of the user 12 is directed (for example, the user 12 may be looking in a forward direction).
  • the user 12 may rotate to look at her/his surroundings changing the view direction to second view directions that correspond to second physical directions toward which the gaze of the user is directed.
  • the sensor devices may be configured to generate output signals conveying information related to any number of different view directions of the user.
  • the sensor devices 22 and 24 may include one or more of a GPS sensor, a gyroscope, an accelerometer, an altimeter, a compass, a camera-based sensor, a magnetic sensor, an optical sensor, an infrared sensor, a motion-tracking sensor, an inertial sensor, a CCD sensor, a hand-tracking sensor, a facial-tracking sensor, an eye-tracking sensor, and one or more body-tracking sensors, among others.
  • the sensor devices comprises a first set of sensors 22 adapted to track orientation and location of the user 12, and a second sensor 24 to track the user’s body parts such as the user’s hands 21.
  • the manikin 16 comprises a plurality of sensor devices such as pressure, proximity and air-flow sensors (76 and 74).
  • gyroscopes are used for recording information generated while the user 12 is conducting the CPR and/or MMR routines on the manikin 16; storing in memory information related to the CPR and/or MMR routines for each user 12 permits providing feedback to the user 12 as well as storing the information in the user’s profile for testing the user’s performance and subsequent accreditation, if applicable.
  • the sensors 22 are configured and located at particular positions at the location of the system 10 to permit tracking of the user 12 and in particular of the HMD 28 that is mounted on the user’s head 23. Tracking of the user’s head 23 permits establishing the particular location and orientation of the user 12 at any particular moment in time. This information permits configuring the VR environment in accordance with the view direction of the user 12.
  • tracking of the HMD 28 permits the system 10 to detect when the user’s head 23 is located at the correct location for conducting mouth-to-mouth resuscitation, allowing the VR environment to be configured to reflect that fact; this also permits providing feedback to the user 12 when the user’s head 23 is in an incorrect location.
  • the second sensor 24 is configured and located at a particular location of the HMD (see figure 7) to permit tracking of the user’s hands 21 for identifying the location of each hand 21. Tracking of the user’s hands 21 permits establishing, using the sensor 24, the particular location and orientation of each hand 21 at any particular moment in time. This allows configuring the VR environment so as to show the user’s virtual hands 56 (see figure 10) at a particular location within the VR environment that corresponds to the location of the counterpart real-life hand 21 (see figure 7).
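  • As a minimal hypothetical sketch of this hand-to-virtual-hand mapping (the transform name and calibration below are assumptions, not taken from the specification):

```python
import numpy as np

def place_virtual_hand(hand_pos_tracker, tracker_to_world):
    """Map a tracked real-life hand position into the VR scene.

    hand_pos_tracker: (3,) hand position reported by the hand-tracking
        sensor 24, in the tracker's own coordinate frame.
    tracker_to_world: (4, 4) homogeneous transform calibrated so that
        tracking-space coordinates line up with the VR world frame.
    Returns the world-space position at which virtual hand 56 is drawn.
    """
    p = np.append(hand_pos_tracker, 1.0)   # homogeneous coordinates
    return (tracker_to_world @ p)[:3]
```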
  • the system 10 also comprises real-life objects for interaction by the user 12 during the training sessions.
  • the system 10 comprises a manikin 16 configured for permitting the user 12 to interact with the manikin 16 in real life for conducting particular activities to acquire skills that form part of the training sessions that the system 10 provides to the user 12 and that the user 12 may select.
  • figures 3 to 20 refer to a particular arrangement of a preferred operation for the system 10 in accordance with the present embodiment of the invention.
  • the particular arrangement of the method comprises delivering training sessions to provide users 12 of the system 10 with a plurality of skills for performing a particular activity such as performing first-aid procedures for training purposes and subsequent accreditation.
  • the system 10 comprises storage devices for recording the performance (the performance data) of each user 12 involved in the procedures of a particular training session such as a first-aid training session selected by the user 12.
  • once a particular user 12 has completed all of the activities of the selected training session, the system 10 proceeds to process the performance data of the particular user 12 and compare the user’s performance data against the standard performance data set at the level required for passing the selected training session.
  • if the user’s performance data meets the required standard, accreditation proceeds.
  • the user 12 may continue the training by re-selecting the training sessions that the user 12 has not passed.
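  • A minimal sketch of this pass/retry comparison, assuming per-exercise numeric scores (the score representation and names below are illustrative assumptions, not the specification's data model):

```python
def assess_session(performance, standard):
    """Compare a user's recorded performance data against the pass standard.

    performance / standard: dicts keyed by exercise name (e.g. "CPR",
    "MMR", "Patient Handling") mapping to a numeric score. An exercise
    passes when the user's score meets or exceeds the standard level;
    accreditation proceeds only when every exercise passes.
    """
    failed = [exercise for exercise, required in standard.items()
              if performance.get(exercise, 0.0) < required]
    return {"accredited": not failed, "to_retry": failed}
```

The `to_retry` list would correspond to the training sessions the user 12 may re-select.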
  • FIG. 3 shows a preferred operation for the system 10 wherein a user 12 accesses system 10 by interacting with the computing device 18 (and ergo the server 26) to start generating the VR content for delivery to the HMD 28 to immerse the user 12 in the VR environment. Immersion of the user 12 into the VR environment occurs as the user 12 mounts the HMD 28 onto her/his head 23.
  • the process of accessing the system 10 may comprise signing up by providing the user’s details for generating a user’s profile and issuance of a password for security purposes (or signing in, if the user has already signed up).
  • once signed in, the user 12 may, at this instance, select the particular training session(s), allowing the user 12 to tailor the training experience to her/his needs using the system 10.
  • selection of particular training session(s) by the user 12 may occur while the user 12 is immersed in the VR environment (these alternative arrangements are shown in figure 3).
  • the selection while being immersed in the VR environment occurs via virtual sensors (to be described at a later stage with reference to figures 7 to 9) that permit the user 12 to select or deselect particular procedures of the selected training session; the user 12 may also deselect a previously selected training session and choose another training session.
  • FIG. 4 shows a detail of a particular scene of the VR environment. This particular scene relates to the process in which the user 12 is introduced to the preferred operation of the system 10.
  • FIGS. 4 to 14 and 17 to 20 depict one of the preferred operations of the system 10; it relates to a first-aid training session.
  • the VR environment comprises a virtual scenario 34 including the VR assistant 36, the virtual manikin 38 mounted on a virtual surface 40 and virtual equipment 42 contained in a virtual suitcase 44.
  • the virtual manikin 38 is the virtual counterpart version of the manikin 16. Creation of the virtual counterpart version (the virtual manikin 38) of the manikin 16 and the presence of the real-life manikin 16 permit the user 12 (while conducting a particular activity to acquire a particular skill) to manipulate a real-life object (such as the manikin 16) while viewing a counterpart virtual version (the virtual manikin 38) of the real-life object instead of viewing the real-life manikin 16.
  • the VR assistant 36 acts as a trainer, introducing the user 12 to the capabilities of the system 10 and how to operate the system 10, providing the user 12 with step-by-step indications for completing the activities to acquire the skills required to pass the selected training session, and providing feedback to the user 12.
  • FIG. 5 shows a close up view of the VR assistant 36.
  • the VR assistant 36 comprises human-like features and human-like body parts.
  • the VR assistant 36 comprises a body 46 and a head 48 as well as arms 50 with hands 52 including fingers 54.
  • the hand 48 with fingers 54 permit greeting and communicating with the user 12 via sign-language. It is particularly useful that the VR assistant 36 is able to communicate with users 20 via sign-language because it permits deaf persons to use the system 10.
  • the VR assistant 36 is able to directly interact with the user 12 (direct virtual contact) for teaching purposes by, for example, using the VR assistant 36 to engage the VR hands 56 (see figures 8 and 9) of the user 12 for illustrating a particular hand posture or body posture required for handling a virtual patient.
  • the VR assistant 36 may engage a virtual version of a body part of the user 12 (such as the user’s virtual hand 56) and move it in a particular manner (having as its purpose teaching the user 12 a particular activity); this permits the user 12 to move her/his real-life hand 21 to the particular posture or location towards which the virtual version of the body part (for example, the hand 21a) has been moved by the VR assistant 36.
  • the VR assistant 36 is able to provide demonstrations to the user 12 while being immersed in the VR environment.
  • the VR assistant 36 can provide an explanation of the effects on a human heart during the CPR routine (comprising periodic compression of the chest) by directly interacting with the human heart.
  • the VR assistant 36 can hold a virtual version of a human heart to explain the mechanism used during CPR that permits resuscitation of a patient (in particular, reinstatement of the heartbeat).
  • the fact that the virtual assistant 36 has arms 50 and hands 52 is particularly useful because it allows replicating demonstrations typically performed by real-life trainers but using virtual objects such as the virtual heart shown in figure 6.
  • the system 10 is configured to receive commands from the user 12 for providing instructions to the VR assistant 36 as well as for controlling the computing device 18 (and server 26). Being able to control the computing device 18 and server 26 permits the user 12 to direct the VR assistant 36 to perform particular actions such as repeating a particular activity which the user 12 may not have comprehended completely or may have missed due to lack of concentration.
  • the commands for directing the VR assistant 36 may be voice commands and languages that use the visual-manual modality to convey meaning such as sign-language commands based on, for example, hand movement.
  • the use of these particular commands (voice commands and sign-language commands) for directing the VR assistant 36 is particularly useful because it makes the training sessions between the user 12 and the VR assistant 36 more user-friendly. It is also particularly useful that the VR assistant 36 may be directed by users 12 via sign language because it permits deaf persons to use the system 10.
  • the user 12 may control the computing device 18 (and server 26) with other types of commands; for example, these particular types of commands may be in the form of direct contact of the user’s body parts.
  • these particular types of commands may be generated by direct contact of the user’s real-life body parts, such as the fingers of one of the user’s hands 21a tapping particular locations of the other hand 21b of the user 12.
  • virtual menus and other types of commands may be generated through the movements by the user 12 of her/his hands 21 and fingers for tapping particular locations of virtual menus 66 popping up in the VR environment (see figure 9).
  • the particular signal representing the particular command (such as the finger of one hand 21a of the user 12 tapping a particular location of the other hand 21b or of the menu 66) is generated when the sensor 24 (used for tracking the user’s hands 21) detects the specific hand movement (a finger from one hand tapping a particular location of the other hand) that represents tapping of the finger on the particular location.
  • the signal is generated when the spatial coordinates of the finger (in particular, of the fingertip) coincide with the spatial coordinates of the button (located on the other hand 21b or on the virtual menu 66), prompting the processing means of the computing unit 18 (or server 26) to generate the particular signal representing the particular command.
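  • A minimal sketch of this coordinate-coincidence test follows; the tolerance value is an assumption for illustration, since the specification only requires the coordinates to coincide:

```python
import numpy as np

def button_tapped(fingertip_pos, button_pos, tolerance=0.015):
    """Detect a virtual button press by coordinate coincidence.

    Fires when the tracked fingertip comes within `tolerance` metres of
    the button located on the other hand 21b or on the pop-up menu 66.
    """
    fingertip = np.asarray(fingertip_pos, dtype=float)
    button = np.asarray(button_pos, dtype=float)
    return np.linalg.norm(fingertip - button) <= tolerance
```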
  • the signal is also generated upon activation of particular locations in the virtual menus 66.
  • the user 12 may activate particular features of the system 10 through speech; for this, the computing device 18 (or server 26) comprises speech-recognition software that, upon emission of a voice message by the user 12 and recognition of the voice message, will prompt particular commands for conducting a particular action.
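  • A hypothetical sketch of how a recognised voice message could be routed to a command; the phrases and actions below are invented examples, not taken from the specification:

```python
def handle_utterance(recognised_text, commands):
    """Route a recognised voice message to a system action, if any."""
    action = commands.get(recognised_text.strip().lower())
    if action is not None:
        action()

# Example wiring (illustrative only):
commands = {
    "repeat demonstration": lambda: print("VR assistant 36 repeats the activity"),
    "open menu": lambda: print("pop-up menu 66 is shown"),
    "skip exercise": lambda: print("current exercise is skipped"),
}
handle_utterance("Open menu", commands)   # prints: pop-up menu 66 is shown
```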
  • figures 8 and 9 show one of the virtual hands 56 of the user 12 shown in figure 7.
  • the virtual hand 56 comprises virtual buttons 58 and 60 on its fingers 62 and dorsal side of the virtual hand 56.
  • the palmar side of the virtual hand 56 comprises a virtual button 64 that, when activated, generates the virtual menu 66 defined by the virtual pop-out screen 66 incorporated in the VR environment.
  • Control of the computing device 18 for implementation of the command so as to obtain the desired outcome is done by transferring the signal (generated by the sensors 22 or 24 due to detecting a particular body part movement, detection of coinciding coordinates during virtual tapping of virtual buttons, or microphones capturing a particular sound such as a voice command as described before) to the computing device 18.
  • the computing device 18 (or the server 26) comprises processing units including circuitry and software (stored in memory devices) configured for running particular algorithms for transducing signals generated by: (1) the sensors 22 and 24 or any other sensors, (2) the detection of coinciding coordinates during virtual tapping of virtual buttons or menus 66, or (3) the microphones capturing a particular sound such as the user’s voice commands, providing instructions for generating the desired outcome such as selecting a particular training session.
  • Each training session comprises a plurality of training exercises that need to be completed.
  • the training session related to first-aid comprises several training exercises including CPR and Patient Handling Techniques (PHT) for, for example, properly handling patients injured in accidents or during criminal activity.
  • PHT Patient Handling Techniques
  • Figures 10 to 13 show particular stages of the CPR exercise.
  • Figure 10 shows the user 12 (shown in figure 7) immersed in a particular VR environment representing an occurrence of heart failure in a particular patient 68.
  • After the user 12 has received the training, including demonstrations of the CPR exercise performed by the VR assistant 36, the user 12 is ready to practice CPR using the real-life manikin 16 while being immersed in the VR environment and viewing the virtual manikin 38.
  • the user 12 is immersed in the VR environment including the virtual patient 68.
  • the user 12, for conducting CPR on the virtual patient 68, kneels down to permit her/his virtual hands 56 to approach the chest region of the patient 68.
  • the user 12 engages her/his real-life hands 21 such that one hand is on top of the other hand with the fingers interlocked, as shown in figure 12.
  • as the real-life hands 21 engage each other, the user 12, being immersed in the VR environment, visualises the virtual hands engaging each other as they approach the chest region of the virtual patient 68.
  • as the real-life hands 21 of the user 12 reach the upper surface of the chest of the real-life manikin 16, the user’s virtual hands 56 rest on the chest region of the patient 68, as can be appreciated in figure 13. At this stage, the user 12 has her/his hands 21 resting on the manikin 16 and can start periodically applying pressure (compressions) to the chest region of the manikin 16 with the objective of conducting the CPR exercise on the manikin 16.
  • the system 10 provides feedback to the user 12 while the user is conducting the CPR exercise on the manikin 16.
  • the feedback may be through voice messages and indications, including aural indications from the VR assistant 36.
  • the computer system 17 may receive signals from the manikin 16 representative of, for example, (1) the amount of pressure the user applies to the chest at each compression, (2) the periodicity with which the compression is applied, and (3) the particular location of the interlocked hands 57 with respect to the correct location for CPR to function effectively. These signals are processed by the computing device 18 (or by the servers 26) and compared with the standard values required for resuscitating a patient that has undergone heart failure, with the objective of providing feedback to the user 12.
  • the values generated during the CPR exercise may also be stored for future reference. Storage of this information is particularly useful during accreditation of the user 12 after completion of each exercise of each particular training session, or for tailoring future training activities where the user’s poor performance requires further training in a particular activity.
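  • A minimal sketch of the comparison step described above; the numeric bands default to commonly cited adult-CPR guideline values (roughly 50-60 mm depth at 100-120 compressions per minute), since the specification itself does not fix particular numbers:

```python
def cpr_feedback(depth_mm, rate_per_min,
                 depth_band=(50, 60), rate_band=(100, 120)):
    """Compare measured compressions with target bands and word feedback."""
    messages = []
    if depth_mm < depth_band[0]:
        messages.append("Push harder")
    elif depth_mm > depth_band[1]:
        messages.append("Push more gently")
    if rate_per_min < rate_band[0]:
        messages.append("Compress faster")
    elif rate_per_min > rate_band[1]:
        messages.append("Compress slower")
    return messages or ["Good compressions"]
```

Each returned message could then be voiced by the VR assistant 36 or rendered as virtual content.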
  • the feedback may be provided simultaneously while the user 12 is conducting the CPR.
  • the system 10 may provide the feedback via voice messages (for example, via the VR assistant 36) generated based on the comparison of the standard CPR values with the values generated by the user 12.
  • feedback may also be provided via virtual content incorporated into the VR environment; figures 13 and 14 show a particular arrangement of such virtual content. Aural elements representing a successful or unsuccessful outcome of the activity may also be generated.
  • Figures 13 and 14 show a particular graphical element in a shape of a semicircular bar 70 surrounding the interlocked virtual hands 56.
  • the bar 70 comprises an outer border defining an inner region 72 divided into a plurality of regions 72a to 72c, permitting visualisation of the amount of pressure applied to the chest region by the percentage of the inner region 72 that is, for example, coloured; if the entire inner region is fully coloured at the end of a compression, it is an indication to the user 12 (and to the system 10) that the correct amount of pressure has been applied during the compression.
  • this permits the user 12 to visualise how well she/he is performing, giving the user 12 the opportunity to continuously improve the performance until the proper compression standard is reached.
  • graphical or aural elements are also provided for providing a measure of the periodicity with which the user 12 is repeatedly conducting the chest compressions.
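  • A minimal sketch of how the fill level of the bar 70 could be computed per compression (the target pressure and segment count are assumptions for illustration):

```python
def bar_fill(pressure, target_pressure, segments=3):
    """Fraction of inner region 72 to colour for one compression, plus how
    many of the sub-regions (72a..72c for three segments) are fully lit."""
    fraction = max(0.0, min(1.0, pressure / target_pressure))
    return fraction, int(fraction * segments)
```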
  • figures 15 and 16 depict a schematic view of the manikin 16.
  • the manikin 16 is adapted to provide signals representative of (1) the pressure applied to the chest of the manikin and the periodicity with which the chest is compressed by the user 12, and (2) whether the interlocked hands 57 are located at the proper location for CPR to be effective.
  • the manikin 16 is provided with sensor devices 74 for measuring the amount of pressure applied to the chest, measuring the periodicity with which the pressure is applied, and detecting whether the interlocked hands are at the correct location.
  • the sensor devices 74 are operatively connected to the computing device 18 (or servers) via cabling 80.
  • communication between the manikin 16 and the computer system 17 may be wireless using, for example, Bluetooth® technology or any other suitable wireless technology, such as WiFi, capable of transferring signals between the manikin 16 and the computer system 17.
  • MMR Mouth-to-Mouth Resuscitation
  • the system 10 may also provide feedback to the user 12 during the MMR exercise.
  • the manikin 16 is provided with sensors 76 for measuring the particular strength and the periodicity of the breathing activity during the MMR.
  • the sensor 74 may comprise a proximity sensor for detecting the presence of the user’s face as the user approaches the head of the manikin 16 for conducting the MMR exercise.
  • Figure 17 shows a flowchart illustrating a particular arrangement of the training activity for the MMR exercise in accordance with the present embodiment of the invention using the manikin 16.
  • Figure 16 shows the sensor 76 and the electric circuitry 78 incorporated in the manikin 16.
  • the electronic circuitry 78 is configured to receive the electric signals generated by the sensor 76 and to condition the electric signals prior to transfer to the computing device 18 for processing in the computing device 18 or server 26.
  • the electronic circuitry 78 is adapted to receive the signals generated by the sensors 74 and 76 and to transfer them via cabling 80 (or wirelessly) to the computing device 18 for processing in the computing device 18 or server 26.
  • the electric circuitry 78 is designed to manage the real-time stream of data from the CPR sensors 74 and send that data to the computing device 18.
  • the electric circuitry 78 reads data from all of the sensors of the manikin 16, including the breathing sensors, compression speed and depth, hand location on the chest, and the gyroscope that measures the x, y, z position of the real-life manikin 16 in real physical space. The data is sent via cabling 80 (or wirelessly) to the computing device 18 for processing therein or in the servers 26.
  • processing includes reading the data stream, sorting which sensor data is required and then using it to affect the VR environment experienced by the user 12.
  • the electric circuitry 78 operates at a higher voltage than conventional electric circuitry to improve the accuracy of all of the sensor data by increasing granularity. There is custom filtering on the electric circuitry 78 to reduce noise and increase the accuracy of the data signal.
  • the electric circuitry 78 also accepts commands from the computing device 18 to enable or disable sensor data streams.
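  • A hypothetical sketch of this data-stream role of the circuitry 78 (the sensor names and polling interface are assumptions): gather readings, honour enable/disable commands from the computing device, and forward one frame of the enabled streams:

```python
class ManikinStreamer:
    """Minimal stand-in for circuitry 78's sensor-stream management."""

    def __init__(self, sensors):
        # sensors: dict mapping a stream name to a zero-argument callable
        # that returns the sensor's current reading.
        self.sensors = sensors
        self.enabled = {name: True for name in sensors}

    def command(self, name, enable):
        """Enable or disable a sensor data stream on request."""
        self.enabled[name] = bool(enable)

    def poll(self):
        """Read every enabled sensor; the result is one frame of data to
        send (via cabling 80 or wirelessly) to the computing device 18."""
        return {name: read() for name, read in self.sensors.items()
                if self.enabled[name]}
```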
  • the previously described exercises include the use of real-life objects such as the manikin 16.
  • in the exercises described below, the VR environment is composed exclusively of virtual objects which do not have counterpart real-life objects included in the system 10.
  • alternatively, the exercises described below may be implemented using virtual objects having real-life counterpart objects.
  • the exercise depicted in figures 18 to 21 relates to handling of an emergency situation involving a potentially injured person 82 and potential hazards that may be surrounding the person.
  • this particular arrangement illustrates the use of virtual messages contained in the VR environment for guiding the user 12 through a particular exercise.
  • Figure 18 shows a particular scene of a VR environment depicting the potentially injured person 82 lying on the floor.
  • a virtual message 85 is included in the VR environment at this particular stage of the exercise. The message indicates that the first step required when encountering a potentially injured person is to check whether any danger is present.
  • the danger consists of the presence of a VR sharp knife 84.
  • Figures 19 and 20 show the process of removing the VR sharp knife 84.
  • the process of removing the VR sharp knife 84 comprises moving one of the real-life hands 21 in such a manner that the virtual hand 56 (counterpart to the real-life hand 21) approaches the VR sharp knife 84, and closing the hand 56 when hovering above the VR sharp knife 84 in order for the virtual hand 56 to engage the VR sharp knife 84. The VR sharp knife 84 can then be removed by moving the real-life hand 21 in such a manner that the virtual hand 56 moves towards the counter 86 for storage in a safe location.
  • the next step in the process is to check for a response from the potentially injured person 82; as indicated in the message 87, this is done by squeezing the shoulders of the person.
  • this particular arrangement is particularly useful for guiding the user 12 step by step through procedures comprising a plurality of tasks that perhaps do not require particular skill; instead, it is important that all tasks are conducted, and conducted in the required order, such as detecting and removing any danger prior to handling the potentially injured person 82.
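  • A minimal sketch of the hover-and-close grab used in the knife-removal step above (the grab radius is an assumption for illustration):

```python
import numpy as np

def try_grab(hand_pos, hand_closed, object_pos, grab_radius=0.10):
    """The virtual hand 56 engages an object such as the VR sharp knife 84
    only while hovering within `grab_radius` metres of it with the hand
    closed; while engaged, the object would follow the hand's movement."""
    near = np.linalg.norm(np.asarray(hand_pos, dtype=float) -
                          np.asarray(object_pos, dtype=float)) <= grab_radius
    return hand_closed and near
```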
  • Figure 22 shows a particular arrangement of the hardware architecture of the system 10 for practising the previously described methods.
  • This particular hardware architecture relates to the system 10 comprising the server 26 and the computing device 18 operatively connected to the server 26.
  • the sensor devices and the HMD are operatively connected to the computing device 18 for transferring the data they generate to the server 26 for processing, and for receiving in return the processed information in the form of the content that generates the VR environments.
  • the server 26 comprises storage devices for storing the users’ profiles and training sessions and the information required by the simulation controller 27 for generating the content that generates the VR environments.
  • the simulation controller 27 comprises the control processor, the judgment processor, the decisions processor and the display processor required for generating the content that generates the VR environments.
  • the control processor controls the overall organization and operation of the application.
  • the judgment processor evaluates user input to determine performance of the users and generates auditory feedback to be presented to the user via headphones or the VR environment to provide feedback.
  • the display processor handles the creation and animation of the VR environments.
  • performance of users can be evaluated in an adaptive way in order to progress to successive skills when the users exhibit successful performance.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Medicinal Chemistry (AREA)
  • Mathematical Optimization (AREA)
  • Medical Informatics (AREA)
  • Chemical & Material Sciences (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Cardiology (AREA)
  • Human Computer Interaction (AREA)
  • Entrepreneurship & Innovation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A system for providing a virtual environment (VR), the system comprising sensory devices and a computer system adapted for transferring signals representative of particular information between the sensory devices and the computer system for generation of the virtual environment comprising at least one virtual object, means for immersing the user in the VR environment, and at least one real-life object permitting the user to interact with the real-life object, the real-life object being adapted to interchange signals with the computer system, wherein the signals are representative of particular information related to the interaction between the user and the real-life object. Also provided are a method for delivering training sessions to users while the users are immersed in the VR environment, and a real-life object for interaction by a user, the real-life object comprising sensor devices for monitoring the interaction between the user and the real-life object.

Description

Virtual Reality System

TECHNICAL FIELD
[0001] The present invention relates to a virtual reality system.
[0002] The invention has been devised particularly, although not necessarily solely, in relation to a virtual reality training system and assessment system, in particular a virtual reality training system and assessment system in connection with medical procedures such as first-aid procedures.
BACKGROUND ART
[0003] The following discussion of the background art is intended to facilitate an understanding of the present invention only. The discussion is not an acknowledgement or admission that any of the material referred to is or was part of the common general knowledge as at the priority date of the application.
[0004] Provision of accredited training courses requires trainers in the form of human beings; this is particularly true for courses involving medical activity, such as first-aid courses. This makes delivery of these courses cumbersome and relatively expensive, particularly because in many first-aid procedures, such as CPR, the trainers (when issuing to a trainee accreditation of completion of the first-aid course) need to be present in order to confirm that the trainee has acquired the skills for conducting CPR correctly. Also, the need for the trainers to be present means that first-aid courses typically cannot be conducted outside working hours.
[0005] It is against this background that the present invention has been developed.
SUMMARY OF INVENTION
[0006] According to a first aspect of the invention there is provided a system for providing a virtual environment (VR), the system comprising sensory devices and a computer system adapted for transferring signals representative of particular information between the sensory devices and the computer system for generation of the virtual environment comprising at least one virtual object, means for immersing the user in the VR environment, and at least one real-life object permitting the user to interact with the real-life object, the real-life object being adapted to interchange signals with the computer system, wherein the signals are representative of particular information related to the interaction between the user and the real-life object.
[0007] Preferably, the system is configured for providing a first VR environment including scenes absent of particular virtual objects for viewing by the user, the particular objects having counterpart real-life objects that are present in the area where the system is located, and subsequently providing a second VR environment comprising the scenes of the first VR environment together with the particular virtual objects for viewing by the user, the particular objects having counterpart real-life objects that are present in the area where the system is located.
[0008] Preferably, the system is adapted to provide instructions for communication between the user and the real-life object concerning the interaction.
[0009] Preferably, the interaction between the user and the real-life object comprises training sessions for the user to conduct particular activities of the training sessions while interacting with the real-life object or virtual object.
[0010] Preferably, the system is configured to generate virtual objects counterpart to the real-life objects.
[0011] Preferably, the system is configured to generate virtual objects counterpart to the body parts of the user.
[0012] Preferably, the body parts of the user that have counterpart virtual objects comprise at least one hand.
[0013] Preferably, the system is configured to permit interaction between the user and at least one virtual object with at least one virtual object counterpart to a body part of the user.
[0014] Preferably, the user's body part having a counterpart virtual object comprises at least one hand of the user.
[0015] Preferably, the system is configured to provide the user simultaneously with (1) a hands-on experience while interacting with the real-life object and (2) a virtual experience while manipulating the real-life object.
[0016] Preferably, the system is configured to provide feedback to the user regarding the interaction while the user is interacting with the real-life object and/or the virtual object.
[0017] Preferably, the system is configured to provide the feedback through the virtual objects comprising providing instructions and information to the user.
[0018] Preferably, the virtual objects comprise graphical elements adapted to provide information to the user related to the interaction between the user and the real- life object.
[0019] Preferably, the system is configured for generating virtual buttons and pop-up menus for permitting the user to select the type of interaction the user will have with the real-life and/or virtual objects and for controlling the interaction.
[0020] Preferably, the system is configured to permit the user to select, deselect and skip particular interactions.
[0021] Preferably, the virtual user's hand comprises the virtual buttons.
[0022] Preferably, the menu comprises pop-up menus popping up in the virtual environment where the user is immersed.
[0023] Preferably, the pop-up menu pops out from the virtual user’s hand.
[0024] Preferably, the system is configured for receiving and emitting commands for controlling the interaction.
[0025] Preferably, the commands comprise voice commands and commands resulting from sign-language generated by the user or virtual objects.
[0026] Preferably, the system is configured for providing a virtual assistant for providing the voice commands and commands resulting from sign-language.
[0027] Preferably, the commands comprise introduction content, explaining the activity to be conducted by the user and providing assistance and feedback when the user is conducting the interaction.
[0028] Preferably, the system is configured for providing a human like appearance to the virtual assistant.
[0029] Preferably, the system is configured for the virtual assistant to communicate via voice and sign language.
[0030] Preferably, the system is configured for the virtual assistant to interact with the user through virtual body parts of the user.
[0031] Preferably, the system is configured for providing first-aid sessions.
[0032] Preferably, the real-life object comprises a manikin configured for permitting the user to conduct particular activities of the selected first-aid sessions.
[0033] Preferably, the particular activities comprise CPR, MMR and Patient Handling.
[0034] Preferably, the manikin communicates with the computer system wirelessly.
[0035] Preferably, the manikin comprises sensor devices for monitoring the actions of the user during CPR, MMR and/or Patient Handling.
[0036] Preferably, the sensor devices comprise at least one pressure sensor for generating signals representative of the pressure the user is applying to the chest of the manikin while the user is conducting CPR.
[0037] Preferably, the sensor devices comprise at least one sensor for generating signals representative of the periodicity with which the user is applying pressure to the chest of the manikin while the user is conducting CPR.
[0038] Preferably, the system is configured to provide a virtual object for providing feedback to the user while conducting CPR.
[0039] Preferably, the virtual object comprises a graphical element providing an indication to the user of the amount of pressure being applied by the user to the chest region of the manikin.
[0040] Preferably, the graphical element comprises a bar comprising an outer border defining an inner region divided into a plurality of regions, permitting visualisation by the user of the amount of pressure applied to the chest region by the percentage of the inner region that is highlighted.
[0041] Preferably, the sensor devices comprise at least one flow sensor for generating signals representative of the breathing activity of the user while the user is conducting MMR.
[0042] Preferably, the sensor devices comprise at least one proximity sensor for generating signals representative of the location of the open mouth of the user with respect to the open mouth of the manikin while the user is conducting MMR.
[0043] Preferably, the sensor devices comprise at least one gyroscope for generating signals representative of the orientation of the manikin while the user is conducting patient handling activities.
[0044] In a particular arrangement, the system is configured for providing a VR environment that includes activities absent of any virtual object that is a counterpart version of a real-life object located at the location of the system.
[0045] According to a second aspect of the invention there is provided a method for delivering training sessions to users, while the users are immersed in the VR environment, to acquire a plurality of skills while interacting with virtual and/or real-life objects, the method comprising the steps of:
a. generating at least one virtual object that is a counterpart version of a real-life object adapted for interaction with the user;
b. recording the interaction between the user and the real-life object for generating feedback content to the user; and
c. generating feedback content and transducing the feedback content into signals representative of the feedback content for generation of either visual or aural messages, or virtual objects of the VR environment, for transmitting the feedback content to the user.
[0046] Preferably, the method further comprises generating instructional content related to instructions for interacting with the real-life objects or virtual objects for provision to the user and transducing the instructional content into signals representative of the instructional content for generation of either visual or aural messages, or virtual objects of the VR environment, for transmitting the instructional content to the user.
[0047] Preferably, the method further comprises receiving commanding content from the user related to commands for interacting with the real-life objects or virtual objects and transducing the commanding content into signals representative of the commanding content for controlling the interaction between the user and either the virtual objects of the VR environment or the real-life objects.
[0048] Preferably, the interaction between the user and the real-life object comprises performing first-aid procedures for training purposes and subsequent accreditation.
[0049] Preferably, the method further comprises the step of tailoring the training sessions based on prior performances of the user.
[0050] Preferably, the method further comprises generating virtual buttons and pop-up menus for permitting the user to select the type of interaction the user will have with the real-life and/or virtual objects and for controlling the interaction.
[0051] Preferably, the interaction comprises conducting first-aid activities on a real-life manikin while the user, being immersed in the virtual reality, is simultaneously interacting with a virtual object that is a counterpart version of the real-life manikin.
[0052] Preferably, the interaction comprises CPR, MMR and Patient Handling.
[0053] Preferably, the method further comprises the step of recording the pressure the user is applying to the chest of the manikin while the user is conducting CPR.
[0054] Preferably, the method further comprises the step of recording the periodicity that the user is applying pressure to the chest of the manikin while the user is conducting CPR.
[0055] Preferably, the method further comprises, based on the signals representative of the pressure applied by the user and the periodicity that the user is applying pressure, generating feedback content and transducing the feedback content into signals representative of the feedback content for generation of either visual or aural messages, or virtual objects of the VR environment for transmitting to the user.
[0056] Preferably, the method further comprises the step of recording the location of the open mouth of the user with respect to the open mouth of the manikin.
[0057] Preferably, the method further comprises the step of recording the breathing activity of the user while the open mouth of the user and the open mouth of the manikin are fluidly connected.
[0058] Preferably, the method further comprises, based on the signals representative of the location of the open mouth of the user and the breathing activity of the user, generating feedback content and transducing the feedback content into signals representative of the feedback content for generation of virtual objects of the VR environment for transmitting to the user.
[0059] Preferably, the method further comprises the step of recording the location and orientation of the manikin while the user is conducting patient handling exercises with the manikin.
[0060] Preferably, the method further comprises the step of storing the information recorded during interaction between the user and the real-life object and/or virtual object and comparing the recorded information with a standard information representative of the interaction between the user and the real-life object and/or virtual object that is required for accreditation purposes.
[0061] According to a third aspect of the invention there is provided a real-life object for interaction by a user, the real-life object comprising sensor devices for monitoring the interaction between the user and the real-life object, the real-life object being adapted for operative connection with a system configured for generating a VR environment comprising at least one virtual object being a counterpart version of the real-life object.
[0062] Preferably, the real-life object is adapted for operative connection with the system in accordance with the first aspect of the invention.
[0063] Preferably, the real-life object comprises sensor devices for detecting action of the user during interaction by a user.
[0064] Preferably, the real-life object comprises a manikin configured for conducting first-aid activities on the manikin.
[0065] Preferably, the first-aid activities comprise CPR, MMR and/or Patient Handling conducted on the manikin.
[0066] Preferably, the sensor devices comprise at least one pressure sensor for generating signals representative of the pressure the user is applying to the chest of the manikin while the user is conducting CPR.
[0067] Preferably, the sensor devices comprise at least one sensor for generating signals representative of the periodicity with which the user is applying pressure to the chest of the manikin while the user is conducting CPR.
[0068] Preferably, the sensor devices comprise at least one flow sensor for generating signals representative of the breathing activity of the user while the user is conducting MMR.
[0069] Preferably, the sensor devices comprise at least one proximity sensor for generating signals representative of the location of the open mouth of the user with respect to the open mouth of the manikin while the user is conducting MMR.
[0070] Preferably, the sensor devices comprise at least one gyroscope for generating signals representative of the orientation of the manikin while the user is conducting patient handling exercises.
[0071] Preferably, the manikin is configured for wirelessly connecting with the computer system.
[0072] Preferably, the real-life object comprises electric circuitry for managing a real-time stream of data from the sensors and sending that data to the computer system.
[0073] Preferably, processing includes reading data streams with information representative of the interaction between the user and the manikin, sorting which sensor data is required and then using it to affect the VR environment experienced by the user.
[0074] Preferably, the electric circuitry has a lower voltage than the conventional electric circuitry to improve battery life.
[0075] Preferably, there is custom filtering on the electric circuitry to reduce noise and increase the accuracy of the data signal and the sensor data by increasing granularity.
[0076] Preferably, the electric circuitry accepts commands from the computing device to enable or disable sensor data streams.
BRIEF DESCRIPTION OF THE DRAWINGS
[0077] Further features of the present invention are more fully described in the following description of several non-limiting embodiments thereof. This description is included solely for the purposes of exemplifying the present invention. It should not be understood as a restriction on the broad summary, disclosure or description of the invention as set out above. The description will be made with reference to the accompanying drawings in which:
Figure 1 shows a particular arrangement of a system in accordance with an embodiment of the invention to provide a particular virtual reality (VR) to a user;
Figure 2 shows a user wearing a head mounted device (HMD) connected to the computing device for immersion in a VR environment;
Figure 3 shows a flowchart illustrating a particular arrangement of a training method in accordance with the present embodiment of the invention;
Figures 4 and 5 show images of particular animated scenes of the training method shown in figure 3 including a VR assistant in accordance with the present embodiment of the invention;
Figure 6 shows a schematic view of the VR assistant shown in figures 4 and 5 holding a human heart during the training method in accordance with the present embodiment of the invention;
Figure 7 shows a user wearing a head mounted device (HMD) looking at her/his hand;
Figure 8 shows a top perspective view of the virtual hand seen by the user shown in figure 7;
Figure 9 shows a bottom perspective view of the virtual hand seen by the user shown in figure 7, including an activable hand menu;
Figure 10 shows the user’s virtual hands shown in figure 7 immersed in a particular VR environment for conducting a first-aid activity for a particular patient;
Figure 11 shows the user's virtual hands shown in figure 7 approaching the patient shown in figure 10;
Figure 12 shows the user's real-life hands performing a cardio-pulmonary resuscitation (CPR) activity on the real-life manikin shown in figure 1;
Figure 13 is a schematic top view of a virtual manikin in accordance with the present embodiment of the invention during a CPR activity;
Figure 14 is a schematic top view of the virtual hands shown in figure 13 and a graphical element for visually representing performance of the CPR activity conducted by the user shown in figure 7;
Figure 15 is a schematic top view of the manikin shown in figure 1 used for the CPR activity of the training method shown in figure 3;
Figure 16 is a plan view of a control circuit, in accordance with the present embodiment of the invention, of the manikin shown in figure 10;
Figure 17 shows a flowchart illustrating a particular arrangement of the training activity for the MMR exercise in accordance with the present invention;
Figures 18 to 21 show images of particular animated scenes of another arrangement of the training method in accordance with the present embodiment of the invention; and
Figure 22 is a block diagram showing a particular arrangement of the system architecture of the invention.
DESCRIPTION OF EMBODIMENT(S)
[0078] Figure 1 shows a particular arrangement of a system 10 in accordance with an embodiment of the invention to provide a particular virtual reality (VR) to a user 12.
[0079] The system 10 in accordance with the present embodiment of the invention comprises sensory devices (such as sensors 22 and 24) and a computer system 17 adapted for transferring electric signals (representative of particular information) between each other for generation of the VR environment. The system 10 in accordance with a particular arrangement of the present embodiment of the invention also comprises real-life objects such as, for example, a manikin 16 permitting a user 12 to interact with the real-life objects. Interaction between the user 12 and the real-life objects allows, for example, the user to practice particular activities of training sessions, such as CPR on the manikin 16, for training purposes. As will be described at a later stage, the real-life objects are adapted to interchange electric signals (representative of particular information) with the computer system 17 in order to provide the user 12 with instructions for the training process and feedback while the user 12 is conducting the particular activities on the real-life objects such as the manikin 16.
[0080] The computer system 17 may be adapted to be connected to either the internet or an internal network. Alternatively, the computer system 17 may be a standalone computer system.
[0081] The system 10 is adapted to generate a VR environment for immersion of a user 12 in the VR environment. In accordance with a particular arrangement, the VR environment comprises virtual objects 38 for viewing by the user 12 when immersed in the VR environment. The virtual objects 38 are counterpart objects with respect to the real-life objects (such as the manikin 16) or body parts (for example, hands 21) of the user 12.
[0082] Further, the VR environment comprises virtual objects that may not necessarily have counterpart real-life objects. Examples of these particular virtual objects are the pop-up menus 66 and activation buttons 58 shown on the virtual hands 56, as shown in figures 8 and 9, and the interactive graphs 70 shown in figures 13 and 14. Other examples of virtual objects that may not necessarily have counterpart real-life objects are one or more VR assistants 36 (see, for example, figures 4 and 5) for providing assistance to the user 12 during the virtual training sessions.
[0083] In a particular arrangement, creation of the virtual counterpart version of the manikin 16 permits the user 12 (while conducting a particular activity to acquire a particular skill) to manipulate a real-life object (such as the manikin 16) while viewing a counterpart virtual version (the virtual manikin 38) of the real-life object.
[0084] Being able to view the virtual manikin 38 while conducting the particular activity on the real-life manikin 16 is particularly advantageous because it provides the user 12 simultaneously with (1) a hands-on experience while manipulating the real-life manikin 16 and (2) a virtual experience that permits - as will be described below - providing instant feedback on how the user 12 is performing the particular activity of the training session. The instant feedback may be in the form of virtual objects, such as virtual graphical elements popping up in the virtual reality, and instructions and suggestions from the VR assistant 36 to improve the user's performance.
[0085] The system 10 comprises a VR system 14 and one or more objects (such as a manikin 16) for the user 12 to interact with the object (the manikin 16) for conducting particular activities, for example, to acquire one or more skills of training sessions such as sessions for training users 12 to conduct first-aid procedures. Examples of first-aid activity are resuscitation procedures including cardio-pulmonary resuscitation (CPR) including mouth-to-mouth resuscitation (MMR) as well as handling of patients that, for example, have been injured in an accident or have been victims of crime.
[0086] The VR system 14 is configured to present virtual reality content to a user 12 in such a manner that the user 12 is immersed in a VR environment. For this, the VR system 14 at least comprises a computing device 18, a user interface 20, and a plurality of sensor devices such as sensors 22 and 24 shown in figure 1 and sensors 76 and 74 shown in figure 15.
[0087] In the particular arrangement shown in figure 1, the computing device 18 is configured to communicate over the internet with a server 26; for this, the computing device 18 includes communication lines, or ports, to enable the exchange of information with the server 26 over the internet. In the arrangement shown in figure 1, individual components of system 10 (e.g., display 20 and sensor devices such as 22, 24, 76 and 74) are coupled (either wired or wirelessly) to the computing device 18.
[0088] The server 26 is configured to provide the content for generation of the VR environment in which the user 12 will be immersed, by hosting the processor and storage devices; for this, the server 26 is configured to communicate with the computing device 18 over the internet or a particular network.
[0089] Server 26 includes electronic storage devices, one or more processors, and communication components, among others. Server 26 includes communication lines, or ports, to enable the exchange of information over the internet with the computing device 18 for generation of the content producing the VR to be provided to the user and/or other computing platforms via a network. Server 26 includes a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server 26. In a particular arrangement, the server 26 may be implemented by a cloud of computing platforms operating together as a server 26.
[0090] In an alternative arrangement, the computing device 18 may be a standalone computing device 18 including a plurality of hardware, software, and/or firmware components and storage devices (for storing users' profiles and performance history as well as information used for producing the content that will be provided to the user interface) operating together to provide the functionality attributed herein to the computing device 18. Computing device 18 may include, for example, a cellular telephone, a smartphone, a laptop, a tablet computer, a desktop computer, a television set-top box, a smart TV, a gaming console, a virtual reality headset, and/or other devices. In the arrangement shown in figure 1, individual components of system 10 (e.g., display 20 (a user interface 20), and sensors 22 and 24) are adapted to communicate (wired or wirelessly) with the computing device 18.
[0091] As mentioned before, the content that generates the VR environment is to be provided by the VR system 14 to the user 12 for immersion in the VR environment. Immersion in the VR environment occurs when the user 12 interacts with a particular user interface 20, such as a particular enclosed area or devices that permit displaying to the user's eyes the content in a 3D configuration.
[0092] In the particular arrangement shown in figure 1, immersion of the user 12 into the VR environment occurs when the user 12 mounts a head mounted device 28 (HMD) on her/his head 23. The HMD 28 acts as the user interface 20 to provide the content to the user 12 - see figure 2.
[0093] As shown in figure 2, the user interface 20 comprises a HMD 28 that is worn on the head 23 of a user 12. The VR content may be presented to the user 12 via a display included in the HMD 28. The HMD 28 is configured such that a perception of a three-dimensional space is created by two stereoscopic movies. Each movie is generated for one of the user's eyes. Each movie is rendered in real time and then displayed for viewing by the corresponding eye of the user 12. The convergence of these two movies in real time (along with how those views are reactive to the rotation and position in space of the user's head 23) creates a 3D effect generating a virtual environment.
[0094] The HMD 28 comprises a chamber 30 comprising the display to be presented to the user's eyes for the user 12 to perceive the immersive 3D effect mentioned above, permitting immersion of the user 12 in the VR environment.
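By way of illustration only, the per-eye rendering described in paragraph [0093] can be sketched as follows. This is a minimal sketch, not taken from the patent: the helper name, the use of numpy, and the fixed interpupillary distance are all assumptions. Each eye's view matrix is derived by offsetting the tracked head pose laterally by half the interpupillary distance, which is what makes the two stereoscopic movies converge into a 3D effect.

```python
import numpy as np

IPD = 0.064  # assumed interpupillary distance in metres (a typical adult value)

def eye_view_matrix(head_position, head_rotation, eye):
    """Build a view matrix for one eye from the tracked head pose.

    head_position: (3,) world-space position of the HMD
    head_rotation: (3, 3) rotation matrix of the HMD
    eye: -1 for the left eye, +1 for the right eye
    """
    # Offset the eye laterally along the head's local x-axis.
    eye_position = head_position + head_rotation @ np.array([eye * IPD / 2, 0.0, 0.0])
    # The view matrix is the inverse of the eye's world transform.
    view = np.eye(4)
    view[:3, :3] = head_rotation.T
    view[:3, 3] = -head_rotation.T @ eye_position
    return view

# Each frame, one movie frame is rendered per eye from the tracked pose:
left_view = eye_view_matrix(np.zeros(3), np.eye(3), eye=-1)
right_view = eye_view_matrix(np.zeros(3), np.eye(3), eye=+1)
```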
[0095] In the particular arrangement shown in figure 2, the HMD 28 is operatively connected (wired or wirelessly) to the computing device 18 (connected via the internet or a network to the servers 26) for receiving the content that generates the VR environment in which the user 12 will be immersed.
[0096] Further, the HMD 28 comprises headsets 32 for providing aural information to the user 12. The aural information may comprise music, descriptions and instructions to permit the user 12 to navigate within the VR environment. A microphone may also be provided to permit the user 12 to, for example, complete objectives using voice commands or give verbal instructions to a virtual assistant 36, to be described at a later stage (see figure 5).
[0097] Furthermore, as mentioned before, the system 10 also comprises sensor devices such as sensors 22 and 24. The sensor devices (for example, 22 and 24) are configured to generate output signals conveying information related to the direction in which the user 12 is gazing (the view direction) and/or other information such as the particular location of the user's body and of a particular body part such as the user's head 23 and hands 21.
[0098] The view direction of the user 12 comprises (1) a physical direction toward which a particular gaze of the user 12 is directed, (2) an orientation of one or more parts of the user's body (for example, the user's head 23 may be tilted or the user 12 may be leaning over), and (3) a position of the user 12 within the area of the location of the system 10, and/or other directional information. For example, the view direction may include a first view direction that corresponds to a first physical direction toward which the gaze of the user 12 is directed (for example, the user 12 may be looking in a forward direction). Or, the user 12 may rotate to look at her/his surroundings, changing the view direction to second view directions that correspond to second physical directions toward which the gaze of the user is directed. Further, the sensor devices may be configured to generate output signals conveying information related to any number of different view directions of the user. In some implementations, the sensor devices 22 and 24 may include one or more of a GPS sensor, a gyroscope, an accelerometer, an altimeter, a compass, a camera-based sensor, a magnetic sensor, an optical sensor, an infrared sensor, a motion tracking sensor, an inertial sensor, a CCB sensor, a hand-tracking sensor, a facial tracking sensor, an eye tracking sensor, and one or more body tracking sensors, among others.
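As a rough sketch of how the heterogeneous readings above might be combined into a single view-direction record, the structure below is hypothetical; the patent does not prescribe any particular data layout, and the field names are invented for illustration.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ViewDirection:
    gaze: np.ndarray         # (3,) unit vector: physical direction of the user's gaze
    orientation: np.ndarray  # (3, 3) rotation of the tracked body part (e.g. the head 23)
    position: np.ndarray     # (3,) position of the user 12 within the tracked area

def view_direction_from_sensors(head_rotation: np.ndarray,
                                head_position: np.ndarray) -> ViewDirection:
    # Take the forward (-z) axis of the head rotation as the gaze direction.
    gaze = head_rotation @ np.array([0.0, 0.0, -1.0])
    return ViewDirection(gaze=gaze, orientation=head_rotation, position=head_position)
```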
[0099] In the particular arrangement shown in figure 1, the sensor devices comprise a first set of sensors 22 adapted to track the orientation and location of the user 12, and a second sensor 24 to track the user's body parts such as the user's hands 21.
[00100] Further, as will be described at a later stage, the manikin 16 comprises a plurality of sensor devices such as pressure, proximity and air-flow sensors (76 and 74), and gyroscopes. These particular sensor devices are used for recording information generated while the user 12 is conducting the CPR and/or MMR routines on the manikin 16; storing in memory the information related to the CPR and/or MMR routines for each user 12 permits providing feedback to the user 12 as well as storing it in the user's profile for testing the user's 12 performance and subsequent accreditation, if applicable.
[00101] Referring to figure 1, the sensors 22 are configured and located at particular locations within the area of the system 10 to permit tracking of the user 12 and, in particular, of the HMD 28 that is mounted on the user's head 23. Tracking of the user's head 23 permits establishing the particular location and orientation of the user 12 at any particular moment in time. This information permits configuring the VR environment in accordance with the view direction of the user 12. For example, as will be described with reference to the activity for providing mouth-to-mouth resuscitation, tracking of the HMD 28 permits the system 10 to detect when the user's head 23 is located at the correct location for conducting mouth-to-mouth resuscitation, allowing the VR environment to be configured to reflect the fact that the user's head 23 is located at the correct location for conducting mouth-to-mouth resuscitation; this permits providing feedback to the user 12 when the user's head 23 is in the incorrect location.
[00102] The second sensor 24 is configured and located at a particular location of the HMD 28 (see figure 7) to permit, in particular, tracking of the user's hands 21 for identifying the location of the user's hands 21. The sensor 24 permits establishing the particular location and orientation of the user's hands 21 at any particular moment in time. This allows configuring the VR environment so as to show the user's virtual hands 56 (see figure 10) at a particular location within the VR environment that corresponds to the location where the user's real-life hand 21 (counterpart to the user's virtual hands 56) is located - see figure 7.
[00103] Moreover, as mentioned before, the system 10 also comprises real-life objects for interaction by the user 12 during the training sessions.
[00104] In the particular arrangement shown in figure 1, the system 10 comprises a manikin 16 configured for permitting the user 12 to interact with the manikin 16 in real life for conducting particular activities to acquire skills that form part of the training sessions that the system 10 provides to the user 12 and that the user 12 may select.
[00105] Figures 3 to 20 refer to a particular arrangement of a preferred operation for the system 10 in accordance with the present embodiment of the invention. In particular, this arrangement of the method comprises delivering training sessions to provide users 12 of the system 10 with a plurality of skills for performing a particular activity such as performing first-aid procedures for training purposes and subsequent accreditation.
[00106] As mentioned before, the system 10 comprises storage devices for recording the performance (the performance data) of each user 12 involved in the procedures of a particular training session, such as a first-aid training session selected by the user 12. Once a particular user 12 has completed all of the activities of the selected training session, the system 10 proceeds to process the performance data of the particular user 12 and compare the user's performance data against the standard performance data set at the level required for passing the selected training session. As shown in figure 3, if the user 12 has passed the selected training session, accreditation proceeds. However, if the user 12 has not passed the selected training session, accreditation will not proceed, and the user 12 may continue the training by re-selecting the training sessions that the user 12 has not passed. The user 12 will be able to, for example, repeat the activities of the particular training sessions that the user 12 has not passed. In view that the user's performance data is stored in storage devices, the system 10 may tailor the training activities (provided, for example, by the VR assistant 36) of the procedures of the particular training sessions that the user 12 has not passed with the objective of strengthening the user's skills in those particular procedures.
[00107] FIG. 3 shows a preferred operation for the system 10 wherein a user 12 accesses system 10 by interacting with the computing device 18 (and ergo the server 26) to start generating the VR content for delivery to the HMD 28 to immerse the user 12 in the VR environment. Immersion of the user 12 into the VR environment occurs as the user 12 mounts the HMD 28 onto her/his head 23.
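A minimal sketch of the pass/fail comparison described in paragraph [00106] is given below. The metric names and threshold ranges are assumptions made purely for illustration; the actual standard performance data would be set by the accrediting body.

```python
# Hypothetical standard performance data for a CPR training session.
STANDARD = {
    "compression_depth_mm": (50, 60),    # acceptable range per compression
    "compression_rate_cpm": (100, 120),  # compressions per minute
    "hand_placement_ok_ratio": (0.9, 1.0),
}

def session_passed(performance: dict) -> bool:
    """Compare a user's recorded performance data against the standard."""
    return all(
        lo <= performance.get(metric, float("-inf")) <= hi
        for metric, (lo, hi) in STANDARD.items()
    )

def sessions_to_repeat(results: dict) -> list:
    """Return the training sessions the user 12 must re-select for further training."""
    return [name for name, perf in results.items() if not session_passed(perf)]
```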
[00108] The process of accessing the system 10 may comprise signing up by providing the user's details for generating a user's profile and issuance of a password for security purposes (or signing in, if the user has already signed up). Once signed in, the user 12 may, at this instance, select the particular training session(s), allowing the user 12 to tailor the training experience to her/his needs using the system 10. As will be described at a later stage, in alternative arrangements, selection of particular training session(s) by the user 12 may occur while the user 12 is immersed in the VR environment (these alternative arrangements are shown in figure 3). The selection, while the user is immersed in the VR environment, occurs via virtual buttons (to be described at a later stage with reference to figures 7 to 9) that permit the user 12 to select or deselect particular procedures of the selected training session; also, the user 12 may deselect a previously selected training session and choose another training session.
[00109] Once the user 12 has accessed the system 10 and immersed her/himself in the VR environment by mounting the HMD 28 on her/his head 23, the user 12 visualises the VR environment. Figure 4 shows a detail of a particular scene of the VR environment. This particular scene shown in figure 4 relates to the process in which the user 12 is introduced to the preferred operation for the system 10.
[00110] The particular arrangement shown in figures 4 to 14 and 17 to 20 depicts one of the preferred operations for the system 10, and it relates to a first-aid training session.
[00111] As shown in figure 4, the VR environment comprises a virtual scenario 34 including the VR assistant 36, the virtual manikin 38 mounted on a virtual surface 40 and virtual equipment 42 contained in a virtual suitcase 44. The virtual manikin 38 is the virtual counterpart version of the manikin 16. Creation of the virtual counterpart version (the virtual manikin 38) of the manikin 16 and the presence of the real-life manikin 16 permit the user 12 (while conducting a particular activity to acquire a particular skill) to manipulate a real-life object (such as the manikin 16) while viewing a counterpart virtual version (the virtual manikin 38) of the real-life object instead of viewing the real-life manikin 16.
[00112] Being able to view the virtual manikin 38 while conducting the particular activity on the manikin 16 is particularly advantageous because it provides the user 12 simultaneously with (1) a hands-on experience while manipulating the real-life manikin 16 and (2) a virtual experience that permits - as will be described below - providing instant feedback to the user 12 on how the user 12 is performing the particular activity. The instant feedback may be in the form of virtual graphical elements popping up in the virtual reality and instructions and suggestions from the VR assistant 36 to improve the user's performance.
[00113] Thus, the VR assistant 36 acts as a trainer, introducing the user 12 to the capabilities of the system 10 and how to operate the system 10, providing the user 12 with step-by-step indications for completing the activities to acquire the skills required to pass the selected training session, and providing feedback to the user 12.
[00114] Figure 5 shows a close-up view of the VR assistant 36. The VR assistant 36 comprises human-like features and human-like body parts. As shown in figure 5, the VR assistant 36 comprises a body 46 and a head 48 as well as arms 50 with hands 52 including fingers 54.
[00115] The hands 52 with fingers 54 permit greeting and communicating with the user 12 via sign-language. It is particularly useful that the VR assistant 36 is able to communicate with users 12 via sign-language because it permits deaf persons to use the system 10.
[00116] Also, the VR assistant 36 is able to directly interact with the user 12 (direct virtual contact) for teaching purposes by, for example, engaging the VR hands 56 (see figures 8 and 9) of the user 12 for illustrating a particular hand posture or body posture required for handling a virtual patient.
[00117] In a particular arrangement, the VR assistant 36 may engage a virtual version of a body part of the user 12 (such as the user's virtual hand 56) and move the virtual version of the body part in a particular manner (having as its purpose teaching the user 12 a particular activity); this permits the user 12 to move her/his real-life hands 21 to the particular posture or location towards which the virtual version of the body part of the user 12 (for example, the hand 21a) has been moved by the VR assistant 36.
[00118] Referring to figure 6, the VR assistant 36 is able to provide demonstrations to the user 12 while being immersed in the VR environment. For example, the VR assistant 36 can provide an explanation of the effects on a human heart during the CPR routine comprising periodical compression of the chest by directly interacting with the human heart. In particular, the VR assistant 36 can hold a virtual version of a human heart to explain the mechanism that permits resuscitation of a patient (in particular, reinstatement of the heart beats) used during CPR. As shown in figure 6, the fact that the virtual assistant 36 has arms 50 and hands 52 is particularly useful because it allows replicating demonstrations typically performed by real-life trainers but using virtual objects such as the virtual heart shown in figure 6.
[00119] Furthermore, the system 10 is configured to receive commands from the user 12 for providing instructions to the VR assistant 36 as well as for controlling the computing device 18 (and server 26). Being able to control the computing device 18 and server 26 permits the user 12 to direct the VR assistant 36 to perform particular actions such as repeating a particular activity which the user 12 may not have comprehended completely or may have missed due to lack of concentration.
[00120] In a preferred arrangement, the commands for directing the VR assistant 36 may be voice commands and languages that use the visual-manual modality to convey meaning, such as sign-language commands based on, for example, hand movement. The use of these particular commands (voice commands and sign-language commands) for directing the VR assistant 36 is particularly useful because it makes the training sessions between the user 12 and the VR assistant 36 more user-friendly. It is also particularly useful that the VR assistant 36 may be directed by users 12 via sign-language because it permits deaf persons to use the system 10.
[00121] The user 12 may control the computing device 18 (and server 26) with other types of commands; for example, these particular types of commands may be in the form of direct contact of the user's body parts. For example, a particular type of command may be generated by direct contact of the user's real-life body parts, such as the fingers of one of the user's hands 21a tapping particular locations of the other hand 21b of the user 12.
[00122] Further, virtual menus and other types of commands may be generated through movements by the user 12 of her/his hands 21 and fingers for tapping particular locations of virtual menus 66 popping up in the VR environment (see figure 9).
[00123] Generation of these particular commands (in the form of virtual direct contact between the user's body parts and the virtual menus 66) is due to the fact that a signal is generated that interacts with the computing device 18 (and the server 26) for prompting particular actions in the system 10 such as a direction to the VR assistant, or skipping or repeating a particular activity.
[00124] The particular signal representing the particular command (such as the finger of one hand 21a of the user 12 tapping a particular location of the other hand 21b or of the menu 66) is generated when the sensor 24 (used for tracking the user's hands 21) detects the specific hand movement (a finger from one hand tapping a particular location of the other hand) that represents tapping of the finger on the particular location.
[00125] In particular, the signal is generated when the spatial coordinates of the finger (in particular, of the end of the finger) coincide with the spatial coordinates of the button (located on the other hand 21b or on the virtual menu 66), prompting the processing means of the computing unit 18 (or server 26) to generate the particular signal representing the particular command. The same occurs upon activation of particular locations in the virtual menus 66.
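The coordinate-coincidence test described in paragraphs [00124] and [00125] can be pictured as a simple proximity check between the tracked fingertip and a virtual button. The sketch below is illustrative only; the tolerance radius and the command names are assumptions, not values from the patent.

```python
import numpy as np

TAP_RADIUS = 0.015  # assumed tolerance in metres for a fingertip "tap"

def detect_tap(fingertip_position, buttons):
    """Return the command of the first virtual button whose spatial coordinates
    coincide with the tracked fingertip, if any.

    buttons: list of (command_name, (3,) button position) pairs, e.g. the
    buttons 58, 60 and 64 on the virtual hand 56 or entries of the menu 66.
    """
    tip = np.asarray(fingertip_position, dtype=float)
    for command, button_position in buttons:
        if np.linalg.norm(tip - np.asarray(button_position, dtype=float)) < TAP_RADIUS:
            return command  # e.g. "open_menu", "skip_activity"
    return None
```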
[00126] Similarly, each particular configuration of the user's virtual hand 56, while the user 12 is using sign-language with her/his real-life hand 21, when coinciding with a particular configuration of a meaningful sign of a particular sign language stored in the memory device of the computing device 18 (or server 26), prompts particular commands for conducting a particular action.
[00127] Further, the user 12 may activate particular features of the system 10 through speech; for this, the computing device 18 (or server 26) comprises speech recognition software that, upon emission of a voice message by the user 12 and recognition of the voice message, will prompt particular commands for conducting a particular action.
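How recognised voice messages might be mapped to system actions is sketched below; this is not the patent's implementation, and the phrase set and handler names are invented for illustration.

```python
# Hypothetical mapping of recognised phrases to actions in the system 10.
VOICE_COMMANDS = {
    "repeat": lambda sim: sim.repeat_current_activity(),
    "skip": lambda sim: sim.skip_current_activity(),
    "open menu": lambda sim: sim.show_menu(),
}

def on_speech_recognised(transcript: str, sim) -> bool:
    """Prompt the matching command when the speech recogniser returns a phrase."""
    action = VOICE_COMMANDS.get(transcript.strip().lower())
    if action is None:
        return False
    action(sim)
    return True
```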
[00128] Figures 8 and 9 show one of the virtual hands 56 of the user 12 shown in figure 7. As shown, the virtual hand 56 comprises virtual buttons 58 and 60 on its fingers 62 and on the dorsal side of the virtual hand 56. Further, the palmar side of the virtual hand 56 comprises a virtual button 64 that, when activated, generates the virtual menu 66 defined by the virtual pop-out screen 66 incorporated in the VR environment.
[00129] Control of the computing device 18 for implementation of the command so as to obtain the desired outcome (for example, selecting a particular training session) is done by transferring the signal (generated by the sensors 22 or 24 due to detecting a particular body part movement, detection of coinciding coordinates during virtual tapping of virtual buttons, or microphones capturing a particular sound such as a voice command as described before) to the computing device 18. The computing device 18 (or the server 26) comprises processing units including circuitry and software (stored in memory devices) configured for running particular algorithms for transducing signals generated by: (1) the sensors 22 and 24 or any other sensors, (2) the detection of coinciding coordinates during virtual tapping of virtual buttons or menus 66, or (3) the microphones capturing a particular sound such as the user's voice commands, providing instructions for generating the desired outcome such as selecting a particular training session.
[00130] Referring now back to figure 3, after the introduction process which presents the features of the system 10 to the user 12, the user 12 has the option to select particular training sessions as described in the previous paragraphs.
[00131] Once a particular training session has been selected, the VR training process will begin. Each training session comprises a plurality of training exercises that need to be completed. For example, the training session related to first aid comprises several training exercises including CPR and Patient Handling Techniques (PHT) for, for example, properly handling patients injured in accidents or during criminal activity.
[00132] Figures 10 to 13 show particular stages of the CPR exercise.
[00133] Figure 10 shows the user 12 (shown in figure 7) immersed in a particular VR environment representing an occurrence of heart failure in a particular patient 68. After the user 12 has received the training, including demonstrations of the CPR exercise performed by the VR assistant 36, the user 12 is now ready to practice CPR using the real-life manikin 16 while being immersed in the VR environment and viewing the virtual manikin 38.
[00134] As shown in figure 11, the user 12 is immersed in the VR environment including the virtual patient 68. The user 12, for conducting CPR on the virtual patient 68, kneels down to permit her/his virtual hands 56 to approach the chest region of the patient 68. While approaching the chest region, the user 12 engages her/his real-life hands 21 such that one hand is on top of the other hand with their fingers interlocked with respect to each other, as shown in figure 12. As the real-life hands 21 engage each other, the user 12, due to being immersed in the VR environment, visualises the virtual hands engaging each other as they approach the chest region of the virtual patient 68.
[00135] Further, as the real-life hands 21 of the user 12 reach the upper surface of the chest of the real-life manikin 16, the user's virtual hands 56 rest on the chest region of the patient 68, as can be appreciated in figure 13. At this stage, the user 12 has her/his hands 21 resting on the manikin 16 and can start periodically applying pressure (compressions) to the chest region of the manikin 16 with the objective of conducting the CPR exercise on the manikin 16.
[00136] In accordance with the present embodiment of the invention, the system 10 provides feedback to the user 12 while conducting the CPR exercise on the manikin 16. The feedback may be through voice messages and indications, including aural indications from the VR assistant 36. For example, the computer system 17 may receive signals from the manikin 16 representative of, for example, (1) the amount of pressure the user applies to the chest at each compression, (2) the periodicity with which the compression is applied, and (3) the particular location of the interlocked hands 57 with respect to the correct location for CPR to function effectively. These signals are processed by the computing device 18 (or by the servers 26) and compared with the standard values required for resuscitating a patient that has undergone heart failure, with the objective of providing feedback to the user 12. The values generated during the CPR exercise may also be stored for future reference. Storage of this information is particularly useful during accreditation of the user 12 after having completed each exercise of each particular training session, or for tailoring future training activities in light of poor performance by the user requiring further training in the particular activity.
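Purely as an illustration of the comparison described in paragraph [00136], a sketch of evaluating a single compression follows. The depth and rate thresholds reflect widely published CPR guidance (roughly 5-6 cm depth at 100-120 compressions per minute) rather than values taken from the patent, and the function name is an assumption.

```python
def evaluate_compression(depth_mm: float, interval_s: float) -> list:
    """Compare one chest compression with standard CPR values and return
    feedback messages to present to the user (an empty list means the
    compression met the standard)."""
    feedback = []
    if depth_mm < 50:
        feedback.append("Push harder.")
    elif depth_mm > 60:
        feedback.append("Push slightly less hard.")
    rate_cpm = 60.0 / interval_s if interval_s > 0 else 0.0
    if rate_cpm < 100:
        feedback.append("Compress faster.")
    elif rate_cpm > 120:
        feedback.append("Compress slower.")
    return feedback
```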
[00137] In a particular arrangement, the feedback may be provided simultaneously while the user 12 is conducting the CPR. For example, the system 10 may provide the feedback via voice messages (for example, via the VR assistant 36) generated based on the comparison of the standard CPR values with the values generated by the user 12. Alternatively, the system 10 (in particular, the computing device 18 or server 26) may generate particular virtual content that permits visualisation of virtual graphical elements representative of the performance of the user 12. Figures 13 and 14 show a particular arrangement of such virtual content. Aural elements representing a successful or unsuccessful outcome of the activity may also be generated.
[00138] Figures 13 and 14 show a particular graphical element in the shape of a semicircular bar 70 surrounding the interlocked virtual hands 56. The bar 70 comprises an outer border defining an inner region 72 divided into a plurality of regions 72a to 72c, permitting visualisation of the amount of pressure applied to the chest region by the percentage of the inner region 72 that is, for example, coloured: if the entire inner region is fully coloured at the end of a compression, it is an indication to the user 12 (and to the system 10) that the correct amount of pressure has been applied during the compression. This permits the user 12 to visualise how well she/he is performing, giving the user 12 the opportunity to continuously improve the performance until the proper compression standard is reached. Similarly, graphical or aural elements are provided for giving a measure of the periodicity with which the user 12 repeats the chest compressions.
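A minimal sketch of driving the semicircular bar 70 is given below, assuming the fill fraction is simply the measured pressure normalised by the target compression pressure; the names and the normalisation are assumptions for illustration.

```python
def bar_fill(pressure: float, target_pressure: float, regions: int = 3) -> tuple:
    """Map the applied chest pressure to the fraction of the inner region 72
    to colour, and to the number of sub-regions (72a to 72c) to highlight."""
    fraction = max(0.0, min(pressure / target_pressure, 1.0))
    highlighted = round(fraction * regions)
    return fraction, highlighted  # fraction == 1.0 signals a correct compression
```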
[00139] Figures 15 and 16 depict a schematic view of the manikin 16.
[00140] As mentioned before, the manikin 16 is adapted to provide signals representative of (1) the pressure and the periodicity with which the chest of the manikin is compressed by the user 12 and (2) whether the interlocked hands 57 are located at the proper location for CPR to be effective. For this, the manikin 16 is provided with sensor devices 74 for measuring the amount of pressure applied to the chest, for measuring the periodicity with which the pressure is applied, and for detecting the interlocked hands at the correct location. The sensor devices 74 are operatively connected to the computing device 18 (or servers) via cabling 80. In an alternative arrangement, communication between the manikin 16 and the computer system 17 may be wireless using, for example, Bluetooth® technology or any other suitable wireless technology, such as WIFI, capable of transferring signals between the manikin 16 and the computer system 17.
[00141] Moreover, another resuscitation technique that may be practiced by the user using the manikin 16 is Mouth-to-Mouth Resuscitation (MMR). MMR comprises fluidly connecting the open mouths of the manikin 16 and the user 12 and exhaling and inhaling with the specific strength and periodicity required to, in a real scenario with a real patient, reinstate the breathing activity of the real patient.
[00142] As is the case with CPR, the system 10 may also provide feedback to the user 12 during the MMR exercise. For this, the manikin 16 is provided with sensors 76 for measuring the particular strength and the periodicity of the breathing activity during the MMR.
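As with the CPR sketch above, the MMR feedback might be computed along the following lines; the flow and timing thresholds here are placeholders invented for illustration, not clinical guidance and not values from the patent.

```python
def evaluate_breath(peak_flow_lpm: float, interval_s: float) -> list:
    """Compare one rescue breath, as measured by the flow sensors 76,
    with assumed target values and return feedback messages."""
    feedback = []
    if peak_flow_lpm < 20.0:            # hypothetical minimum breath strength
        feedback.append("Breathe out more strongly.")
    if not 4.0 <= interval_s <= 6.0:    # hypothetical spacing between breaths
        feedback.append("Adjust the pacing of your breaths.")
    return feedback
```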
[00143] In a particular arrangement, the sensor 74 may comprise a proximity sensor for detecting the presence of the user's face as the user approaches the head of the manikin 16 for conducting the MMR exercise. Figure 17 shows a flowchart illustrating a particular arrangement of the training activity for the MMR exercise in accordance with the present embodiment of the invention using the manikin 16.
[00144] Figure 16 shows the sensor 76 and the electric circuitry 78 incorporated in the manikin 16. The electronic circuitry 78 is configured to receive the electric signals generated by the sensor 76 and to condition the electric signals prior to transfer to the computing device 18 for processing in the computing device 18 and server 26.
[00145] The electronic circuitry 78 is adapted to receive the signals generated by the sensors 74 and 76 and to transfer them via cabling 80 (or wirelessly) to the computing device 18 for processing in the computing device 18 or server 26. In particular, the electric circuitry 78 is designed to manage the real-time stream of data from the CPR sensors 74 and send that data to the computing device 18. The electric circuitry 78 reads data from all of the sensors of the manikin 16, including the breathing sensors, compression speed and depth, hand location on the chest, and the gyroscope that measures the x, y, z position of the real-life manikin 16 in the real physical space. The data is sent via cabling 80 (or wirelessly) to the computing device 18 for processing therein or in the servers 26. Processing includes reading the data stream, sorting which sensor data is required and then using it to affect the VR environment experienced by the user 12. The electric circuitry 78 has a higher voltage than conventional electric circuitry to improve the accuracy of all of the sensor data by increasing granularity. There is custom filtering on the electric circuitry 78 to reduce noise and increase the accuracy of the data signal. The electric circuitry 78 also accepts commands from the computing device 18 to enable or disable sensor data streams.
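The stream handling described in paragraph [00145] - reading all sensor channels, sorting out which ones the current exercise needs, and honouring enable/disable commands from the computing device 18 - might be sketched as follows. The frame format and class names are assumptions for illustration only.

```python
def sort_frame(frame: dict, enabled: set) -> dict:
    """Keep only the sensor channels that the current exercise has enabled.

    frame: one real-time sample, e.g. {"pressure": 412, "flow": 0.0,
           "hand_pos": (0.10, 0.02), "gyro": (0.0, 0.1, 9.8)}
    """
    return {name: value for name, value in frame.items() if name in enabled}

class SensorStream:
    """Illustrative stand-in for the data management of the circuitry 78."""

    def __init__(self):
        self.enabled = set()

    def command(self, channel: str, enable: bool):
        """Enable or disable a sensor data stream on command."""
        (self.enabled.add if enable else self.enabled.discard)(channel)

    def handle(self, frame: dict) -> dict:
        return sort_frame(frame, self.enabled)
```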
[00146] Referring now to figures 18 to 21: the previously described exercises include the use of real-life objects such as the manikin 16. In the particular arrangement shown in figures 18 to 21, the VR environment is composed exclusively of virtual objects which do not have counterpart real-life objects included in the system 10. However, in alternative arrangements, the exercises to be described below may be implemented using virtual objects having real-life counterpart objects.
[00147] The exercise depicted in figures 18 to 21 relates to handling of an emergency situation involving a potentially injured person 82 and potential hazards that may be surrounding the person. This particular arrangement illustrates the use of virtual messages contained in the VR environment for guiding the user 12 through a particular exercise.
[00148] Figure 18 shows a particular scene of a VR environment depicting the potentially injured person 82 lying on the floor. As shown in figure 18, a virtual message 85 is included in the VR environment at this particular stage of the exercise. The message indicates that the first step required when encountering a potentially injured person is to check whether any danger is present. In this particular case, the danger consists of the presence of a VR sharp knife 84.
[00149] Figures 19 and 20 show the process of removing the VR sharp knife 84. As shown in figure 20, the process of removing the VR sharp knife 84 comprises moving one of the real-life hands 21 in such a manner that the virtual hand 56 (counterpart to the real-life hand 21) approaches the VR sharp knife 84, and closing the hand 56 when hovering above the VR sharp knife 84 in order for the virtual hand 56 to engage the VR sharp knife 84. The VR sharp knife 84 can then be removed by moving the real-life hand 21 in such a manner that the virtual hand 56 (through movement of the user's real-life hand 21) moves towards the counter 86 for storage in a safe location.
[00150] After finding and removing the dangers as explained with reference to figures 18 to 20, the next step in the process is to check for a response from the potentially injured person 82; as indicated in the message 87, this is done by squeezing the shoulders of the person.
[00151] As can be appreciated, this particular arrangement is particularly useful for guiding the user 12 step by step through procedures comprising a plurality of tasks that perhaps do not require particular skill; what matters instead is that all tasks are conducted, and conducted in the required order, such as detecting and removing any danger prior to handling the potentially injured person 82.
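One way of enforcing such an ordered procedure is a simple step sequencer, sketched below under the assumption that each completed task reports a task identifier; the task names and prompts are illustrative only and are not taken from the disclosure.

```python
class GuidedProcedure:
    """Advances through an ordered task list, one virtual message at a time."""

    def __init__(self, steps):
        self.steps = steps   # ordered list of (task_id, prompt) pairs
        self.index = 0

    @property
    def current_prompt(self) -> str:
        if self.index < len(self.steps):
            return self.steps[self.index][1]   # shown as a virtual message
        return "Exercise complete."

    def report(self, task_id: str) -> bool:
        """Accept a completed task only if it is the next required one."""
        if self.index < len(self.steps) and task_id == self.steps[self.index][0]:
            self.index += 1
            return True
        return False   # out-of-order actions do not advance the exercise

exercise = GuidedProcedure([
    ("remove_danger", "Check for danger: move the knife to the counter."),
    ("check_response", "Check for a response: squeeze the person's shoulders."),
])
assert not exercise.report("check_response")   # ordering is enforced
assert exercise.report("remove_danger")        # now the next prompt appears
```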
[00152] Figure 23 shows a particular arrangement of the hardware architecture of the system 10 for practicing the previously described methods. This particular hardware architecture relates to the system 10 comprising the server 26 and the computing device 18 operatively connected to the server 26.
[00153] The sensor devices and the HMD are operatively connected to the computing device 18, which transfers the data generated by the sensor devices and the HMD to the server 26 for processing and receives in return the processed information in the form of the content that generates the VR environments.
[00154] The server 26 comprises storage devices for storing the users' profiles and training sessions, and the information required by the simulation controller 27 for generating the content that generates the VR environments. The simulation controller 27 comprises the control processor, the judgment processor, the decision processor and the display processor required for generating the content that generates the VR environments.
[00155] The control processor controls the overall organization and operation of the application. The judgment processor evaluates user input to determine the performance of the users and generates auditory feedback, presented to the user via headphones or through the VR environment. The display processor handles the creation and animation of the VR environments. The decision processor evaluates the performance of users in an adaptive way so that users progress to successive skills when they exhibit successful performance.
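The division of responsibilities among the four processors can be pictured with the schematic sketch below; the class and method names, the placeholder scoring rule and the 0.8 threshold are assumptions chosen only to make the structure concrete, not details of the simulation controller 27.

```python
class JudgmentProcessor:
    """Evaluates user input to determine performance ([00155])."""
    def evaluate(self, user_input: dict) -> float:
        # Placeholder scoring: the fraction of required actions performed.
        return user_input.get("done", 0) / max(user_input.get("required", 1), 1)

class DecisionProcessor:
    """Adaptively decides whether to progress to successive skills."""
    def next_skill(self, history: list) -> str:
        return "advance" if history and history[-1] >= 0.8 else "repeat"

class DisplayProcessor:
    """Handles the creation and animation of the VR environments."""
    def render(self, scene_state: dict) -> dict:
        return {"frame": scene_state}   # stand-in for actual rendering

class ControlProcessor:
    """Controls the overall organization: coordinates the other processors."""
    def __init__(self):
        self.judgment = JudgmentProcessor()
        self.decision = DecisionProcessor()
        self.display = DisplayProcessor()

    def tick(self, user_input: dict, scene_state: dict, history: list) -> dict:
        history.append(self.judgment.evaluate(user_input))
        scene_state["progression"] = self.decision.next_skill(history)
        return self.display.render(scene_state)
```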
[00156] Modifications and variations as would be apparent to a skilled addressee are deemed to be within the scope of the present invention.
[00157] Further, it should be appreciated that the scope of the invention is not limited to the scope of the embodiments disclosed.
[00158] Throughout this specification, unless the context requires otherwise, the word "comprise" or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated integer or group of integers but not the exclusion of any other integer or group of integers.

Claims

1. A system for providing a virtual environment (VR), the system comprises sensory devices and a computer system adapted for transferring signals representative of particular information between the sensory devices and the computer system for generation of the virtual environment comprising at least one virtual object, means for immersing the user in the VR environment, and at least one real-life object permitting the user to interact with the real-life object, the real-life object being adapted to interchange signals with the computer system, wherein the signals are representative of particular information related to the interaction between the user and the real-life object.
2. A system according to claim 1 wherein the system is configured for providing a first VR environment including scenes being absent of particular virtual objects for viewing by the user, the particular objects having counterpart real-life objects that are present in the area where the system is located, and subsequently providing a second VR environment including scenes comprising the first VR environment and further including the particular virtual objects for viewing by the user, the particular objects having counterpart real-life objects that are present in the area where the system is located.
3. A system according to claims 1 or 2 wherein the system is adapted to provide instructions for communication between the user and the real-life object concerning the interaction.
4. A system according to any one of claims 1 to 3 wherein the interaction between the user and the real-life object comprises training sessions for the user to conduct particular activities of the training sessions while interacting with the real-life object or virtual object.
5. A system according to any one of claims 1 to 4 wherein the system is configured to generate virtual objects counterpart to the real-life objects.
6. A system according to any one of claims 1 to 5 wherein the system is configured to generate virtual objects counterpart to the body parts of the user.
7. A system according to claim 6 wherein the body parts of the user that have counterpart virtual objects comprise at least one hand.
8. A system according to any one of claims 6 or 7 wherein the system is configured to permit interaction between the user and at least one virtual object with at least one virtual object counterpart to a body part of the user.
9. A system according to any one of claims 6 to 8 wherein the user’s body part having a counterpart virtual object comprises at least one hand of the user.
10. A system according to any one of claims 1 to 9 wherein the system is configured to provide the user simultaneously with (1) a hands-on experience while interacting with the real-life object and (2) a virtual experience while manipulating the real-life object.
11. A system according to any one of claims 1 to 10 wherein the system is configured to provide feedback to the user regarding the interaction while the user is interacting with the real-life object and/or the virtual object.
12. A system according to claim 11 wherein the system is configured to provide the feedback through the virtual objects comprising providing instructions and information to the user.
13. A system according to claims 11 or 12 wherein the virtual objects comprise graphical elements adapted to provide information to the user related to the interaction between the user and the real-life object.
14. A system according to any one of claims 1 to 13 wherein the system is configured for generating virtual buttons and pop-up menus for permitting the user to select the type of interaction the user will have with the real-life and/or virtual objects and controlling the interaction.
15. A system according to any one of claims 1 to 14 wherein the system is configured to permit the user to select, deselect and skip particular interactions.
16. A system according to any one of claims 9 to 15 wherein the virtual user's hand comprises the virtual buttons.
17. A system according to any one of claims 14 to 16 wherein the menu comprises pop-up menus popping up in the virtual environment where the user is immersed.
18. A system according to claim 17 wherein the pop-up menu pops out from the virtual user’s hand.
19. A system according to any one of claims 1 to 18 wherein the system is configured for receiving and emitting commands for controlling the interaction.
20. A system according to claim 19 wherein the commands comprise voice commands and commands resulting from sign-language generated by the user or virtual objects.
21. A system according to claim 20 wherein the system is configured for providing a virtual assistant for providing the voice commands and commands resulting from sign-language.
22. A system according to claim 21 wherein the commands comprise introduction content, explaining the activity to be conducted by the user and providing assistance and feedback when the user is conducting the interaction.
23. A system according to claims 21 or 22 wherein the system is configured for providing a human-like appearance to the virtual assistant.
24. A system according to any one of claims 21 to 23 wherein the system is configured for the virtual assistant to communicate via voice and sign language.
25. A system according to any one of claims 21 to 24 wherein the system is configured for the virtual assistant to interact with the user through virtual body parts of the user.
26. A system according to any one of claims 1 to 25 wherein the system is configured for providing first-aid sessions.
27. A system according to any one of claims 1 to 26 wherein the real-life object comprises a manikin configured for permitting the user to conduct particular activities of the selected first-aid sessions.
28. A system according to claim 27 wherein the particular activities comprise CPR, MMR and Patient Handling.
29. A system according to claims 27 or 28 wherein the manikin communicates with the computer system wirelessly.
30. A system according to any one of claims 27 to 29 wherein the manikin comprises sensor devices for monitoring the actions of the user during CPR, MMR and/or Patient Handling.
31. A system according to claim 30 wherein the sensor devices comprise at least one pressure sensor for generating signals representative of the pressure the user is applying to the chest of the manikin while the user is conducting CPR.
32. A system according to claims 30 or 31 wherein the sensor devices comprise at least one sensor for generating signals representative of the periodicity with which the user is applying pressure to the chest of the manikin while the user is conducting CPR.
33. A system according to any one of claims 31 or 32 wherein the system is configured to provide a virtual object for providing feedback to the user while conducting CPR.
34. A system according to claim 33 wherein the virtual object comprises a graphical element providing an indication to the user of the amount of pressure being applied by the user to the chest region of the manikin.
35. A system according to claim 34 wherein the graphical element comprises a bar comprising an outer border defining an inner region divided into a plurality of regions, permitting visualisation by the user of the amount of pressure applied to the chest region by the percentage of the inner region that is highlighted.
36. A system according to any one of claims 30 to 35 wherein the sensor devices comprise at least one flow sensor for generating signals representative of the breathing activity of the user while the user is conducting MMR.
37. A system according to any one of claims 30 to 36 wherein the sensor devices comprise at least one proximity sensor for generating signals representative of the location of the open mouth of the user with respect to the open mouth of the manikin while the user is conducting MMR.
38. A system according to any one of claims 30 to 37 wherein the sensor devices comprise at least one gyroscope for generating signals representative of the orientation of the manikin while the user is conducting patient handling activities.
39. A system according to claim 1 wherein the system is configured for providing a VR environment that includes activities absent of any virtual object that is a counterpart version of a real-life object located at the location of the system.
40. A method for delivering training sessions to users while being immersed in the VR environment to acquire a plurality of skills while interacting with virtual and/or real-life objects, the method comprises the steps of:
a. generating at least one virtual object that is a counterpart version of a real-life object adapted for interaction with the user;
b. recording the interaction between the user and the real-life object for generating feedback content for the user; and
c. generating feedback content and transducing the feedback content into signals representative of the feedback content for generation of either visual or aural messages, or virtual objects of the VR environment, for transmitting the feedback content to the user.
41. A method according to claim 40 wherein the method further comprises generating instructional content related to instructions for interacting with the real-life objects or virtual objects for provision to the user and transducing the instructional content into signals representative of the instructional content for generation of either visual or aural messages, or virtual objects of the VR environment, for transmitting the instructional content to the user.
42. A method according to claims 40 or 41 wherein the method further comprises receiving commanding content from the user related to commands for interacting with the real-life objects or virtual objects and transducing the commanding content into signals representative of the commanding content for controlling the interaction between the user and either the virtual objects of the VR environment or the real-life objects.
43. A method according to any one of claims 40 to 42 wherein the interaction between the user and the real-life object comprises performing first-aid training sessions and subsequent accreditation.
44. A method according to claim 43 wherein the method further comprises the step of tailoring the training sessions based on prior performances of the user.
45. A method according to any one of claims 40 to 44 wherein the method further comprises generating virtual buttons and pop-up menus for permitting the user to select the type of interaction the user will have with the real-life and/or virtual objects and controlling the interaction.
46. A method according to any one of claims 40 to 45 wherein the interaction comprises conducting first-aid activities on a real-life manikin while the user, being immersed in the virtual reality, is simultaneously interacting with a virtual object that is a counterpart version of the real-life manikin.
47. A method according to claim 46 wherein the interaction comprises CPR, MMR and Patient Handling.
48. A method according to claim 47 wherein the method further comprises the step of recording the pressure the user is applying to the chest of the manikin while the user is conducting CPR.
49. A method according to claims 47 or 48 wherein the method further comprises the step of recording the periodicity with which the user is applying pressure to the chest of the manikin while the user is conducting CPR.
50. A method according to claims 48 or 49 wherein the method further comprises, based on the signals representative of the pressure applied by the user and/or the periodicity with which the user is applying pressure, generating feedback content and transducing the feedback content into signals representative of the feedback content for generation of either visual or aural messages, or virtual objects of the VR environment, for transmitting to the user.
51. A method according to any one of claims 40 to 50 wherein the method further comprises the step of recording the location of the open mouth of the user with respect to the open mouth of the manikin.
52. A method according to any one of claims 40 to 51 wherein the method further comprises the step of recording the breathing activity of the user while the open mouth of the user and the open mouth of the manikin are fluidly connected.
53. A method according to claims 51 or 52 wherein the method further comprises, based on the signals representative of the location of the open mouth of the user and/or the breathing activity of the user, generating feedback content and transducing the feedback content into signals representative of the feedback content for generation of virtual objects of the VR environment for transmitting to the user.
54. A method according to any one of claims 40 to 53 wherein the method further comprises the step of recording the location and orientation of the manikin while the user is conducting patient handling exercises with the manikin.
55. A method according to any one of claims 40 to 54 wherein the method further comprises the step of storing the information recorded during interaction between the user and the real-life object and/or virtual object and comparing the recorded information with standard information representative of the interaction between the user and the real-life object and/or virtual object that is required for accreditation purposes.
56. A real-life object for interaction by a user, the real-life object comprising sensor devices for monitoring the interaction between the user and the real-life object, the real-life object being adapted for operative connection with a system configured for generating VR environment comprising at least one virtual object being a counterpart version of the real-life object.
57. A real-life object according to claim 56 wherein the real-life object is adapted for operative connection with the system as defined in any one of claims 1 to 39.
58. A real-life object according to claim 56 or 57 wherein the real-life object comprises sensor devices for detecting actions of the user during interaction by the user.
59. A real-life object according to any one of claims 56 to 58 wherein the real-life object comprises a manikin configured for first-aid activities to be conducted on the manikin.
60. A real-life object according to claim 59 wherein the first-aid activities comprise CPR, MMR and/or Patient Handling conducted on the manikin.
61. A real-life object according to any one of claims 58 to 60 wherein the sensor devices comprise at least one pressure sensor for generating signals representative of the pressure the user is applying to the chest of the manikin while the user is conducting CPR.
62. A real-life object according to any one of claims 58 to 61 wherein the sensor devices comprise at least one sensor for generating signals representative of the periodicity with which the user is applying pressure to the chest of the manikin while the user is conducting CPR.
63. A real-life object according to any one of claims 58 to 62 wherein the sensor devices comprise at least one flow sensor for generating signals representative of the breathing activity of the user while the user is conducting MMR.
64. A real-life object according to any one of claims 58 to 63 wherein the sensor devices comprise at least one proximity sensor for generating signals representative of the location of the open mouth of the user with respect to the open mouth of the manikin while the user is conducting MMR.
65. A real-life object according to any one of claims 58 to 64 wherein the sensor devices comprise at least one gyroscope for generating signals representative of the orientation of the manikin while the user is conducting patient handling exercises.
66. A real-life object according to any one of claims 58 to 60 wherein the manikin is configured for wirelessly connecting with the computer system.
67. A real-life object according to any one of claims 58 to 60 wherein the real-life object comprises electric circuitry for managing a real-time stream of data from the sensors and sending that data to the computer system.
68. A real-life object according to any one of claims 59 to 67 wherein processing includes reading data streams with information representative of the interaction between the user and the manikin, sorting which sensor data is required and then using it to affect the VR environment experienced by the user.
69. A real-life object according to claims 67 or 68 wherein the electric circuitry has a higher voltage than conventional electric circuitry to improve the accuracy of all of the sensor data by increasing granularity.
70. A real-life object according to any one of claims 67 to 69 wherein there is custom filtering on the electric circuitry to reduce noise and increase the accuracy of the data signal.
71. A real-life object according to any one of claims 67 to 70 wherein the electric circuitry accepts commands from the computing device to enable or disable sensor data streams.
PCT/AU2019/051114 2018-10-12 2019-10-14 Virtual reality system WO2020073103A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2018903876A AU2018903876A0 (en) 2018-10-12 Virtual Reality System
AU2018903876 2018-10-12

Publications (1)

Publication Number Publication Date
WO2020073103A1 true WO2020073103A1 (en) 2020-04-16

Family

ID=70163650

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2019/051114 WO2020073103A1 (en) 2018-10-12 2019-10-14 Virtual reality system

Country Status (1)

Country Link
WO (1) WO2020073103A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6500008B1 (en) * 1999-03-15 2002-12-31 Information Decision Technologies, Llc Augmented reality-based firefighter training system and method
US20170213473A1 (en) * 2014-09-08 2017-07-27 SimX, Inc. Augmented and virtual reality simulator for professional and educational training
US20180293802A1 (en) * 2017-04-07 2018-10-11 Unveil, LLC Systems and methods for mixed reality medical training
WO2018200692A1 (en) * 2017-04-26 2018-11-01 The Trustees Of The University Of Pennsylvania Methods and systems for virtual and augmented reality training for responding to emergency conditions

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408761A (en) * 2021-07-14 2021-09-17 喻海帅 Communication infrastructure maintenance skill training system based on VR virtual reality technology
CN113408761B (en) * 2021-07-14 2022-06-10 喻海帅 Communication infrastructure maintenance skill training system based on VR virtual reality technology


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19870239; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19870239; Country of ref document: EP; Kind code of ref document: A1)