CN108169761A - Fire scene task determination method, apparatus, system and computer-readable storage medium - Google Patents
Fire scene task determination method, apparatus, system and computer-readable storage medium
- Publication number
- CN108169761A (application CN201810050662.8A)
- Authority
- CN
- China
- Prior art keywords
- target
- task
- rescued
- user
- fire scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G01S17/88 — Lidar systems specially adapted for specific applications
- G01S17/93 — Lidar systems specially adapted for anti-collision purposes
- G01C21/20 — Instruments for performing navigational calculations
- G01J5/0014 — Radiation pyrometry for sensing the radiation from gases, flames
- G01J5/0025 — Radiation pyrometry for sensing the radiation of moving living bodies
- G01J5/485 — Thermography; temperature profile
- G01J2005/0077 — Radiation pyrometry; imaging
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Automation & Control Theory (AREA)
- Alarm Systems (AREA)
Abstract
Embodiments of the present disclosure disclose a fire scene task determination method, apparatus, system, and computer-readable storage medium. The method includes: acquiring, from a first head-mounted device worn by at least one first user, environmental information of the fire scene environment where the first user is located; determining tasks to be executed and their priorities according to the environmental information; and sending at least one task to be executed and its priority to a second head-mounted device worn by a second user, to be displayed on the transparent display unit of the second head-mounted device, where the second user may be the same as or different from the first user. Embodiments of the disclosure can automatically plan rescue task priorities from the acquired environmental information, so that rescue tasks are executed more effectively during a rescue operation. The scheme operates in real time, lets users such as firefighters obtain tasks quickly and conveniently, does not compromise the safety of current fire-fighting helmets, and can reduce the casualty rate and material losses.
Description
Technical Field
The disclosure relates to the technical field of intelligent identification, and in particular to a fire scene task determination method, apparatus, system, and computer-readable storage medium.
Background
With the continuous development of society, fire accidents occur frequently, their causes are complex and varied, and safety problems have become increasingly severe; rescuing people in danger more quickly and effectively has therefore become an increasingly important subject in the field of fire fighting. At present, fire-fighting standards protect firefighters as much as possible, and large numbers of fire-fighting devices are installed inside and outside buildings to prevent and extinguish fires.
Disclosure of Invention
The embodiment of the disclosure provides a fire scene task determination method, a fire scene task determination device, a fire scene task determination system and a computer readable storage medium.
In a first aspect, an embodiment of the present disclosure provides a fire scene task determining method, including:
acquiring, from a first head-mounted device worn by at least one first user, environmental information of the fire scene environment where the first user is located;
determining tasks to be executed and priorities thereof according to the environment information;
and sending at least one task to be executed and its priority to a second head-mounted device worn by a second user, to be displayed on the transparent display unit of the second head-mounted device; the second user may be the same as or different from the first user.
Optionally, a laser radar module and a thermal imaging module are arranged on the first head-mounted device; wherein acquiring the environmental information of the fire scene environment where the first user is located from the first head-mounted device worn by at least one first user includes:
acquiring radar information detected by the laser radar module and thermal distribution information detected by the thermal imaging module;
determining a three-dimensional scene model of the fire scene environment of the first user according to the radar information;
and superimposing the thermal distribution information onto the three-dimensional scene model to obtain a three-dimensional image, including the thermal distribution information, of the fire scene environment where the first user is located.
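As an illustrative sketch (not claimed implementation detail), the superposition step can be pictured as attaching a temperature to every point of the 3-D scene model; the calibration function `project`, which maps a 3-D point to a pixel of the thermal image, is an assumption:

```python
import numpy as np

def build_thermal_scene(lidar_points, thermal_image, project):
    """Attach a temperature reading to every point of the 3-D scene model.

    lidar_points  : (N, 3) array of points reconstructed from the radar information
    thermal_image : (H, W) array of temperatures from the thermal imaging module
    project       : hypothetical calibration function mapping a 3-D point to a
                    (row, col) pixel of the thermal image
    Returns an (N, 4) array of [x, y, z, temperature] -- the "three-dimensional
    image including the thermal distribution information".
    """
    h, w = thermal_image.shape
    temps = np.zeros(len(lidar_points))
    for i, point in enumerate(lidar_points):
        row, col = project(point)
        if 0 <= row < h and 0 <= col < w:  # point visible to the thermal imager
            temps[i] = thermal_image[row, col]
    return np.column_stack([lidar_points, temps])
```

Points that fall outside the thermal imager's field of view keep a default temperature of zero here; a real implementation would mark them as "no reading" instead.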
Optionally, a vital sign detection module is further disposed on the first head-mounted device; wherein acquiring the environmental information of the fire scene environment where the first user is located from the first head-mounted device worn by at least one first user further includes:
acquiring the vital signs, detected by the vital sign detection module, in the fire scene environment where the first user is located.
Optionally, determining the task to be executed and the priority thereof according to the environment information includes:
identifying the position and category of a target to be rescued according to the target shape, target temperature and/or vital signs in the three-dimensional image;
and determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
Optionally, determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued includes:
determining the weight of the target to be rescued according to the category of the target to be rescued;
determining the distance between the target to be rescued and the second user according to the position of the target to be rescued;
calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the target to be rescued and the second user; or calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued, the distance between the target to be rescued and the second user and the temperature of the target to be rescued.
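A minimal sketch of such a priority calculation, with hypothetical category weights and a hypothetical scoring formula (the disclosure does not fix a specific formula):

```python
# Hypothetical category weights: trapped people outrank property (assumption).
CATEGORY_WEIGHT = {"person": 10.0, "pet": 5.0, "valuables": 1.0}

def rescue_priority(category, distance_m, temperature_c=None):
    """Higher score means rescue sooner.

    category      : category of the target to be rescued, giving its weight
    distance_m    : distance between the target and the second user, in metres
    temperature_c : optional temperature of the target; hotter surroundings
                    raise the urgency (assumption about the third claim variant)
    """
    weight = CATEGORY_WEIGHT.get(category, 1.0)
    score = weight / (1.0 + distance_m)  # nearby, high-weight targets score higher
    if temperature_c is not None:
        # scale up as the temperature exceeds a nominal 40 C safety level
        score *= 1.0 + max(temperature_c - 40.0, 0.0) / 100.0
    return score
```

With this scoring, a person one metre away outranks valuables at the same distance, and the same person in a 140 C environment outranks one in a cool environment, matching the two claim variants (weight and distance, optionally plus temperature).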
Optionally, determining the task to be executed and the priority thereof according to the environment information includes:
and taking the environment information as input, and determining the task to be executed and the priority thereof by utilizing a pre-trained priority recognition model.
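As a hedged stand-in for the pre-trained priority recognition model (the disclosure does not specify its architecture), a linear scorer over per-task feature vectors illustrates the input/output contract — environment information in, ordered tasks and priorities out:

```python
import numpy as np

def predict_priorities(env_features, weights, bias=0.0):
    """Stand-in for the pre-trained priority recognition model.

    env_features : (T, F) array, one feature vector per candidate task,
                   derived from the environmental information
    weights/bias : parameters learned offline; a linear scorer is used here,
                   but any trained classifier or regressor would fit the claim
    Returns task indices ordered from highest to lowest priority, plus scores.
    """
    scores = env_features @ weights + bias
    order = np.argsort(-scores)  # descending score = descending priority
    return order.tolist(), scores
```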
Optionally, determining the task to be executed and the priority thereof according to the environment information, further comprising:
and determining a rescue route and an evacuation route of the task to be executed according to the environment information and the map information of the fire scene.
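One plausible way to realize the route determination, assuming the fire scene map information is represented as a weighted graph whose edge costs fold in hazard penalties derived from the environmental information (an assumption), is ordinary shortest-path search:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra shortest path over a fire-scene map graph.

    graph : {node: [(neighbor, cost), ...]} -- costs may combine physical
            distance with hazard penalties from the environmental information
    Returns the node list from start to goal, or None if goal is unreachable.
    """
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

The same search serves both claimed routes: run toward the target for the rescue route and toward an exit for the evacuation route.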
Optionally, sending at least one task to be executed and its priority to a second head-mounted device worn by a second user for display on the transparent display unit of the second head-mounted device includes:
transmitting one or more of the following to the second head-mounted device, to be displayed on its transparent display unit: the content of the at least one task to be executed and its priority, task execution location information, environmental information, safety countdown information, a rescue route, and an evacuation route.
Optionally, determining the task to be executed and the priority thereof according to the environment information, further comprising:
and determining at least one rescue worker as the second user according to the information of the target to be rescued corresponding to the task to be executed and the current states of a plurality of rescue workers.
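A toy sketch of selecting the second user from the rescuers' current states, assuming hypothetical state fields for availability and distance to the rescue target (the disclosure leaves the selection criteria open):

```python
def assign_rescuer(rescuers):
    """Pick the second user for a task: the nearest idle rescuer.

    rescuers : list of dicts with hypothetical keys
               'id', 'busy' (current state), and
               'distance_to_target' (distance to the task's rescue target)
    Returns the chosen rescuer's id, or None if nobody is available.
    """
    idle = [r for r in rescuers if not r["busy"]]
    if not idle:
        return None  # no rescuer currently available for this task
    return min(idle, key=lambda r: r["distance_to_target"])["id"]
```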
In a second aspect, an embodiment of the present disclosure provides a fire scene task determining method, which is executed on a first head-mounted device, and includes:
acquiring environmental information of a fire scene environment where a first user wearing the first head-mounted device is located;
determining tasks to be executed and priorities thereof according to the environment information;
and displaying the task to be executed and the priority of the task to be executed on a transparent display unit of the first head-mounted device.
Optionally, a laser radar module and a thermal imaging module are arranged on the first head-mounted device; wherein, the environmental information of the fire scene environment where the first user wearing the first head-mounted device is located is obtained, including:
determining a three-dimensional scene model of the fire scene environment of the first user according to the radar information detected by the laser radar module;
and superimposing the thermal distribution information detected by the thermal imaging module onto the three-dimensional scene model to obtain a three-dimensional image, including the thermal distribution information, of the fire scene environment where the first user is located.
Optionally, a vital sign detection module is further disposed on the first head-mounted device; wherein acquiring the environmental information of the fire scene environment where the first user wearing the first head-mounted device is located includes:
detecting, through the vital sign detection module, the vital signs in the fire scene environment where the first user is located.
Optionally, determining the task to be executed and the priority thereof according to the environment information includes:
identifying the position and category of a target to be rescued according to the target shape, target temperature and/or vital signs in the three-dimensional image;
and determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
Optionally, determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued includes:
determining the weight of the target to be rescued according to the category of the target to be rescued;
determining the distance between the target to be rescued and the second user according to the position of the target to be rescued;
calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the target to be rescued and the second user; or calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued, the distance between the target to be rescued and the second user and the temperature of the target to be rescued.
Optionally, determining the task to be executed and the priority thereof according to the environment information includes:
and taking the environment information as input, and determining the task to be executed and the priority thereof by utilizing a pre-trained priority recognition model.
Optionally, determining the task to be executed and the priority thereof according to the environment information, further comprising:
and determining a rescue route and an evacuation route of the task to be executed according to the environment information and the map information of the fire scene.
Optionally, displaying the task to be performed and the priority thereof on a transparent display unit of the first head mounted device includes:
displaying one or more of the content of the task to be performed and its priority, task execution location information, environmental information, safety countdown information, rescue route, and evacuation route on a transparent display unit of the first head-mounted device.
Optionally, displaying the task to be performed and the priority thereof on a transparent display unit of the first head mounted device includes:
displaying, superimposed on the target to be rescued corresponding to the task to be executed, one or more of the priority of the task, the task type, and the reason the task was generated.
Optionally, the method further comprises:
receiving other users' tasks to be executed and their priorities, sent by the head-mounted devices worn by those users and/or by a server;
and displaying the tasks to be executed and the priorities of the other users on a transparent display unit of the first head-mounted device.
Optionally, the priority of the tasks to be executed is ranked either across all tasks to be executed at the fire scene, or only across the tasks to be executed by the first user.
Optionally, the method further comprises:
identifying changes in the shapes of objects in the fire scene environment where the first user is located according to the environmental information;
determining the likelihood that a danger will occur according to the changes in object shape;
and issuing warning information when the likelihood that a danger will occur is greater than a predetermined threshold.
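The three steps above might be sketched as follows, with the "likelihood of danger" approximated by the fraction of scene elements whose state changed between scans (an assumption; the disclosure leaves the measure open):

```python
def hazard_warning(prev_shapes, curr_shapes, threshold=0.3):
    """Estimate danger from shape change between consecutive scene scans.

    prev_shapes / curr_shapes : equal-length sequences of per-element state
                                labels (e.g. occupancy of scene voxels)
    threshold                 : predetermined warning threshold (assumption)
    Returns (likelihood, warn) where warn is True when warning info
    should be issued.
    """
    changed = sum(a != b for a, b in zip(prev_shapes, curr_shapes))
    likelihood = changed / max(len(prev_shapes), 1)
    return likelihood, likelihood > threshold
```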
In a third aspect, an embodiment of the present disclosure provides a fire scene task determining device, including:
the system comprises a first acquisition module, a second acquisition module and a control module, wherein the first acquisition module is configured to acquire environmental information of a fire scene environment where at least one first user is located from first head-mounted equipment worn by the first user;
the first determination module is configured to determine the tasks to be executed and the priorities thereof according to the environment information;
a sending module configured to send at least one task to be executed and its priority to a second head-mounted device worn by a second user, to be displayed on the transparent display unit of the second head-mounted device; the second user may be the same as or different from the first user.
Optionally, a laser radar module and a thermal imaging module are arranged on the first head-mounted device; wherein, the first obtaining module comprises:
a first acquisition sub-module configured to acquire radar information detected by the lidar module and thermal distribution information detected by the thermal imaging module;
the first determining submodule is configured to determine a three-dimensional scene model of a fire scene environment where the first user is located according to the radar information;
and the second acquisition sub-module is configured to superimpose the thermal distribution information onto the three-dimensional scene model to obtain a three-dimensional image, including the thermal distribution information, of the fire scene environment where the first user is located.
Optionally, a vital sign detection module is further disposed on the first head-mounted device; wherein, the first obtaining module further comprises:
and the third acquisition sub-module is configured to acquire the vital signs, detected by the vital sign detection module, in the fire scene environment where the first user is located.
Optionally, the first determining module includes:
the first identification submodule is configured to identify the position and category of a target to be rescued according to the target shape, target temperature and/or vital signs in the three-dimensional image;
the second determination submodule is configured to determine the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
Optionally, the first determining module includes:
a third determining submodule configured to determine a weight of the target to be rescued according to the category of the target to be rescued;
a fourth determining submodule configured to determine the distance between the target to be rescued and the second user according to the position of the target to be rescued;
the first calculation submodule is configured to calculate the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the second user and the target to be rescued; or calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued, the distance between the target to be rescued and the second user and the temperature of the target to be rescued.
Optionally, the first determining module includes:
and the fifth determining submodule is configured to use the environment information as input and determine the tasks to be executed and the priorities thereof by utilizing a pre-trained priority recognition model.
Optionally, the first determining module further includes:
a sixth determining submodule configured to determine a rescue route and an evacuation route of the task to be performed according to the environment information and the fire scene map information.
Optionally, the sending module includes:
a transmission sub-module configured to transmit one or more of content of the at least one task to be performed and its priority, task execution location information, environmental information, safety countdown information, a rescue route, and an evacuation route to the second head-mounted device to be displayed on a transparent display unit of the second head-mounted device.
Optionally, the first determining module further includes:
and the seventh determining submodule is configured to determine at least one rescuer as the second user according to the information of the target to be rescued corresponding to the task to be executed and the current states of a plurality of rescuers.
In a fourth aspect, an embodiment of the present disclosure provides a fire scene task determining apparatus, which runs on a first head-mounted device and includes:
the second acquisition module is configured to acquire environmental information of a fire scene environment where a first user wearing the first head-mounted device is located;
the second determination module is configured to determine the tasks to be executed and the priorities thereof according to the environment information;
a first display module configured to display the task to be performed and the priority thereof on a transparent display unit of the first head mounted device.
Optionally, a laser radar module and a thermal imaging module are arranged on the first head-mounted device; wherein the second obtaining module includes:
the eighth determining submodule is configured to determine a three-dimensional scene model of the fire scene environment where the first user is located according to the radar information detected by the laser radar module;
and the fourth acquisition sub-module is configured to superimpose the thermal distribution information detected by the thermal imaging module onto the three-dimensional scene model to obtain a three-dimensional image, including the thermal distribution information, of the fire scene environment where the first user is located.
Optionally, a vital sign detection module is further disposed on the first head-mounted device; wherein the second obtaining module includes:
the detection sub-module is configured to detect, through the vital sign detection module, the vital signs in the fire scene environment where the first user is located.
Optionally, the second determining module includes:
the second identification submodule is configured to identify the position and category of a target to be rescued according to the target shape, target temperature and/or vital signs in the three-dimensional image;
and the ninth determining submodule is configured to determine the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
Optionally, the second determining module includes:
a tenth determining submodule configured to determine a weight of the target to be rescued according to the category of the target to be rescued;
an eleventh determining submodule configured to determine a distance to the second user according to the position of the target to be rescued;
the second calculation submodule is configured to calculate the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the second user and the target to be rescued; or calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued, the distance between the target to be rescued and the second user and the temperature of the target to be rescued.
Optionally, the second determining module includes:
and the twelfth determining submodule is configured to use the environment information as input and determine the tasks to be executed and the priorities thereof by using a pre-trained priority recognition model.
Optionally, the second determining module further includes:
a thirteenth determination submodule configured to determine a rescue route and an evacuation route of the task to be performed according to the environment information and the fire scene map information.
Optionally, the first display module includes:
a first display sub-module configured to display one or more of contents of the task to be performed and a priority thereof, task execution location information, environmental information, safety countdown information, a rescue route, and an evacuation route on a transparent display unit of the first head-mounted device.
Optionally, the first display module includes:
and the second display submodule is configured to display one or more of the priority, the task type and the reason for generating the task to be executed in a superposition manner on the target to be rescued corresponding to the task to be executed.
Optionally, the apparatus further comprises:
the receiving module is configured to receive other users' tasks to be executed and their priorities, sent by the head-mounted devices worn by those users and/or by a server;
a second display module configured to display the tasks to be performed and the priorities thereof of the other users on the transparent display unit of the first head mounted device.
Optionally, the priority of the tasks to be executed is ranked either across all tasks to be executed at the fire scene, or only across the tasks to be executed by the first user.
Optionally, the apparatus further comprises:
the identification module is configured to identify changes in the shapes of objects in the fire scene environment where the first user is located according to the environmental information;
a third determination module configured to determine the likelihood of danger from the changes in object shape;
an early warning module configured to issue warning information when the likelihood of danger is greater than a predetermined threshold.
These functions may be implemented in hardware, or in hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In a fifth aspect, an embodiment of the present disclosure provides a server, including a memory and a processor; wherein,
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to perform the method steps of the first aspect.
In a sixth aspect, embodiments of the present disclosure provide a head-mounted device comprising a transparent display unit, a memory, and a processor; wherein,
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps of the second aspect.
In a seventh aspect, an embodiment of the present disclosure provides a fire scene task determining system, including: a server, at least one first head mounted device and at least one second head mounted device;
the at least one first headset includes a first lidar module and a first thermal imaging module; the first laser radar module is used for detecting radar information in a fire scene environment where a first user wearing the first head-mounted device is located; the first thermal imaging module is used for detecting the thermal distribution information in the fire scene environment of a first user wearing the first head-wearing device;
the server determines at least one task to be executed and the priority thereof according to the radar information and the thermal distribution information received from the first head-mounted device, and sends the at least one task to be executed and the priority thereof to the at least one second head-mounted device;
the at least one second head-mounted device includes a second transparent display unit, which displays the received at least one task to be executed and its priority.
Optionally, the first head-mounted device further includes a first vital sign detection module, configured to detect vital signs in a fire scene environment of a first user wearing the first head-mounted device.
Optionally, the first headset further comprises a first transparent display unit for displaying at least one task to be performed of the first user and a priority thereof.
Optionally, the second head-mounted device further comprises a second lidar module and a second thermal imaging module; the second lidar module is used for detecting radar information in the fire scene environment where a second user wearing the second head-mounted device is located; the second thermal imaging module is used for detecting the thermal distribution information of that environment.
Optionally, the second head-mounted device further includes a second vital signs detection module, configured to detect vital signs in a fire scene environment where a second user wearing the second head-mounted device is located.
In an eighth aspect, the disclosed embodiments provide a computer-readable storage medium storing computer instructions for a fire scene task determination device, the computer instructions being used to perform the fire scene task determination method of the first aspect or the second aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the embodiment of the disclosure provides a fire scene task determination scheme, in which a head-mounted device worn on the head of a user collects fire scene environment information, a corresponding rescue task and its priority are formulated according to the environment information, and the task to be executed and its priority are distributed to the corresponding user and displayed on that user's head-mounted device. Because the embodiment of the disclosure can automatically plan the priorities of rescue tasks according to the acquired environmental information, rescue tasks can be executed more effectively and in real time during the rescue process; users such as firefighters can acquire tasks quickly and conveniently without compromising the safety of current fire-fighting head-mounted equipment, which can reduce the casualty rate and material losses.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1 illustrates a flow chart of a fire scene task determination method according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of step S101 according to the embodiment shown in FIG. 1;
FIG. 3 illustrates a flow chart of a fire scene task determination method according to yet another embodiment of the present disclosure;
FIG. 4 illustrates a block diagram of a fire scene task determination device according to an embodiment of the present disclosure;
FIG. 5 illustrates a block diagram of a fire scene task determination device according to yet another embodiment of the present disclosure;
FIG. 6 illustrates a block diagram of a fire scene task determination system according to an embodiment of the present disclosure;
FIG. 7 illustrates a schematic structural diagram of a server suitable for implementing a fire scene task determination method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof may be present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
However, the prior art has at least the following problems: existing fire-fighting equipment is not intelligent and cannot provide an environmental geographic model; persons or entities to be rescued cannot be identified quickly; a fire scene thermodynamic diagram cannot be provided; and a rescue priority order cannot be set.
In view of the above technical problems in the prior art, the embodiments of the present disclosure provide a fire scene task determination scheme, in which a head-mounted device worn on the head of a user collects environment information of a fire scene, a corresponding rescue task and its priority are formulated according to the environment information, and the task to be executed and its priority are distributed to the corresponding user and displayed on that user's head-mounted device. Because the embodiment of the disclosure can automatically plan the priorities of rescue tasks according to the acquired environmental information, rescue tasks can be executed more effectively and in real time during the rescue process; users such as firefighters can acquire tasks quickly and conveniently without compromising the safety of current fire-fighting head-mounted equipment, which can reduce the casualty rate and material losses.
Fig. 1 shows a flow chart of a fire scene task determination method according to an embodiment of the present disclosure. As shown in fig. 1, the fire scene task determination method includes the following steps S101 to S103:
in step S101, obtaining environmental information of a fire scene environment where at least one first user is located from a first head mounted device worn by the first user;
in step S102, determining a task to be executed and a priority thereof according to the environment information;
in step S103, sending at least one of the task to be executed and the priority thereof to a second head-mounted device worn by a second user for display on a transparent display unit of the second head-mounted device; the second user may be the same as or different from the first user.
In this embodiment, the fire scene task determination method may be performed on a background server. The background server exchanges information, through a communication network, with head-mounted devices worn by users in the fire scene, such as firefighters. It acquires environment information of the fire scene from the head-mounted devices of one or more first users, determines a task to be executed and its priority according to the environment information, and then sends the task and its priority to the head-mounted device of a second user suitable for executing the task, such as a firefighter, so that the task is displayed on the display unit of that head-mounted device and the second user can execute the rescue task in time according to the task priority.
In this embodiment, the first head-mounted device and the second head-mounted device may be smart devices that can be worn on the head of a human body, inside which components such as a processor, a memory, and a display device may be disposed. The display device may be a transparent display unit. When the head-mounted device is worn, the transparent display unit sits directly in the wearer's field of view, so that data displayed on it can be viewed without manually moving the head-mounted device. Meanwhile, light naturally reflected from entities in the environment passes through the transparent display unit, so that the wearer can observe the surrounding environment and objects through it without obstruction of the line of sight. The transparent display unit may also let external light pass through, so that the displayed image reaches the wearer's eye together with the background light, enabling modification and enhancement of the background image. The head-mounted device may be a helmet, eyeglasses, or another physical device.
In this embodiment, some or all of the head-mounted devices in the fire scene collect environmental information of the fire scene environment where the first user wearing the device is located. The background server reconstructs a fire scene environment model from the environmental information collected by the head-mounted devices and generates corresponding tasks based on the targets to be rescued in the scene environment model. Here, the first head-mounted device may be some or all of the head-mounted devices in the fire scene, and likewise for the second head-mounted device; the first and second head-mounted devices may be the same or different, that is, the first user and the second user may be the same or different. The background server may reconstruct a fire scene environment model at the position of the firefighter wearing a single head-mounted device from the environment information acquired by that device, or it may jointly reconstruct all or part of the fire scene environment model based on the environment information acquired by a plurality of head-mounted devices. In the former case, if a task to be executed is generated and its priority planned based on environmental information collected by a first head-mounted device, the background server assigns the task to a second user wearing a second head-mounted device, where the second user may be the first user or another user, and likewise the second head-mounted device may or may not be the first head-mounted device. Because the background server plans the rescue tasks and their priorities, the rescue priorities can be planned globally and distributed to a plurality of firefighters, who execute the rescue tasks according to the planned priorities, improving the efficiency and accuracy of task execution.
In an optional implementation manner of this embodiment, a laser radar module and a thermal imaging module are disposed on the first head-mounted device; as shown in fig. 2, the step S101 of obtaining the environmental information of the fire scene environment of the first user from the first head-mounted device worn by at least one first user further includes the following steps S201 to S203:
in step S201, radar information detected by the laser radar module and thermal distribution information detected by the thermal imaging module are acquired;
in step S202, determining a three-dimensional scene model of the fire scene environment where the first user is located according to the radar information;
in step S203, the thermal distribution information is superimposed on the three-dimensional scene model to obtain a three-dimensional image, including the thermal distribution information, of the fire scene environment where the first user is located.
In this optional implementation, the laser radar module and the thermal imaging module may be integrated on the first head-mounted device, and the laser radar module scans the fire scene in real time to obtain radar information of each entity in the fire scene environment. Specifically, signals transmitted by the laser radar return to the laser radar receiving module after encountering an obstacle; from the round-trip time and signal strength of the returned signals, together with the corresponding azimuth and elevation angles, radar information such as the distance, direction, height, and even shape of each entity in the fire scene can be determined. The thermal imaging module detects the thermal radiation of the surfaces of the entities in the fire scene and generates colored pictures representing the thermal distribution information of the entities in the scene. After the radar information detected by the laser radar module and the thermal distribution information detected by the thermal imaging module are obtained, a three-dimensional scene model of the fire scene environment is established from the radar information, and the thermal distribution information is then superimposed on the three-dimensional scene model to construct a three-dimensional image with thermal distribution information. In this image the surfaces of the entities are displayed in different colors representing different temperatures, so that when a user such as a firefighter views the fire scene environment through the transparent display unit, each entity seen through the unit is overlaid with colors representing its temperature, and the user can view the entities in the fire scene more clearly and intuitively.
For example, when the smart helmet scans a burning square table in a room, the three-dimensional model includes a model of the square table with its table top and legs, and the temperature of each burning part of the table is displayed on the corresponding surface of the table model according to the thermal distribution information detected by the thermal imaging module. Specifically, the coordinates in the three-dimensional scene model established by the laser radar may be aligned with the coordinates in the thermal distribution map generated by the thermal imaging module, so that the thermal distribution map can be accurately matched to different regions of the three-dimensional scene model.
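The coordinate alignment described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the laser radar and the thermal camera share one origin and orientation, and uses a simple pinhole model with made-up intrinsics (`fx`, `fy`, `cx`, `cy`) to look up a temperature for each scanned point.

```python
from dataclasses import dataclass

@dataclass
class Point3D:
    x: float  # metres in the shared lidar/camera frame
    y: float
    z: float  # depth along the viewing axis

def project_temperatures(points, thermal, fx=200.0, fy=200.0, cx=32.0, cy=24.0):
    """Assign a temperature to each lidar point by projecting it into the
    thermal image with a pinhole model (assumed shared origin/orientation)."""
    h, w = len(thermal), len(thermal[0])
    textured = []
    for p in points:
        if p.z <= 0:  # behind the thermal camera: no reading available
            continue
        u = int(fx * p.x / p.z + cx)  # column in the thermal image
        v = int(fy * p.y / p.z + cy)  # row in the thermal image
        if 0 <= u < w and 0 <= v < h:
            textured.append((p, thermal[v][u]))
    return textured
```

In a real system the two sensors would be related by a calibrated rigid transform rather than an identity, but the lookup step stays the same.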
In an optional implementation manner of this embodiment, a vital sign detection module is further disposed on the first head-mounted device; in step S101, the step of obtaining environmental information of a fire scene environment where at least one first user is located from a first head-mounted device worn by the first user further includes the following steps:
and acquiring the vital signs, detected by the vital sign detection module, in the fire scene environment where the first user is located.
In this optional implementation, in order to detect trapped people in a fire scene environment, a vital sign detection module may be provided on the head-mounted device. The vital sign detection module may be an infrared heat detection module or another module for detecting human physiological features, such as detection equipment for human heartbeat or pulse. If a person is trapped in the fire scene environment where the first user is located, the vital sign detection module may detect the vital signs in the environment, and the position of the person to be rescued can further be determined from the detected vital signs.
In an optional implementation manner of this embodiment, a vital sign detection module is further disposed on the first head-mounted device; wherein, the step S102, namely the step of determining the task to be executed and the priority thereof according to the environment information, further includes the following steps:
identifying the position and the category of a target to be rescued according to the target shape, the target temperature, and/or the vital signs in the three-dimensional image;
and determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
In this optional implementation, after the three-dimensional image with thermal distribution is obtained via the laser radar module and the thermal imaging module, the position and category of a target to be rescued, such as a person or an entity, may be identified from the shape and surface temperature of each target in the three-dimensional image and/or from the vital signs detected by the vital sign detection module. The vital sign detection module can detect whether a target is a person, as well as the person's temperature and position. When identifying a target in the fire scene environment, the target to be rescued and its category can be recognized from the target shape in the three-dimensional scene model constructed from the laser radar information and the target temperature in the thermal distribution map detected by the thermal imaging module; whether a person is present, together with the person's temperature and position, can also be determined by the vital sign detection module. For example, the target category is determined from the shape contour constructed from the laser radar information, or whether the target is a person to be rescued is determined from the temperature information or from the vital signs. After the target to be rescued and its category (object or person) are identified, the priority of the task of rescuing the target is determined according to the weight of the target, its distance from a firefighter, and the like.
In an optional implementation manner of the embodiment of the present disclosure, determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued includes:
determining the weight of the target to be rescued according to the category of the target to be rescued;
determining the distance between the target to be rescued and the second user according to the position of the target to be rescued;
calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the target to be rescued and the second user; or calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued, the distance between the target to be rescued and the second user and the temperature of the target to be rescued.
In one embodiment, the task priority is obtained by a predefined priority calculation, which may be based on the weight of the target and its distance from the rescuer, with different weights set for different categories of targets to be rescued. For example, the rescue priority may be set according to the priority formula P = w / (10 + d), where w is the weight of the target (the weight of a person is higher than that of an object) and d is the relative distance to the firefighter. For example, if a person has weight 100 and is 20 meters from the nearest firefighter, while an object has weight 10 and is 5 meters from the nearest firefighter, then P(person) = 100 / (10 + 20) ≈ 3.3 and P(object) = 10 / (10 + 5) ≈ 0.67, so the person has the higher priority.
In another embodiment, the calculation may further include the temperature of the target's zone: P = w × t / (10 + d), where t is a temperature index converted from the measured temperature in degrees Fahrenheit. The higher the temperature index, the higher the priority of the person or entity, so this formula directs firefighters to rescue the persons or entities in the hottest areas first and better reflects the danger faced by the person to be rescued.
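The two priority formulas above translate directly into code; the function below reproduces the worked example from the text (person: 100/(10+20) ≈ 3.3, object: 10/(10+5) ≈ 0.67). The function name is illustrative, not from the disclosure.

```python
def rescue_priority(weight, distance, temperature=None):
    """Priority of a rescue target.

    Without a temperature reading: P = w / (10 + d).
    With one:                      P = w * t / (10 + d),
    where t is a dimensionless index derived from the measured temperature.
    """
    base = weight / (10 + distance)
    return base if temperature is None else base * temperature

# Person (weight 100) 20 m away vs. object (weight 10) 5 m away:
p_person = rescue_priority(100, 20)   # 100 / 30, about 3.3
p_object = rescue_priority(10, 5)     # 10 / 15, about 0.67
```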
In an optional implementation manner of this embodiment, the step S102, that is, the step of determining the task to be executed and the priority thereof according to the environment information, further includes:
and taking the environment information as input, and determining the task to be executed and the priority thereof by utilizing a pre-trained priority recognition model.
In this optional implementation, the priority can also be obtained by a machine learning-based method: a priority recognition model is trained in advance on a large amount of labeled training data, the training data comprising environmental information such as three-dimensional scene maps and thermodynamic distribution diagrams together with labeled priorities of the persons or objects to be rescued. In use, the priority recognition model automatically calculates the priorities of different targets in different scenes; the currently acquired environmental information is used as input to the model to obtain the targets to be rescued, their priorities, and the like.
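As an illustration of this idea only: once such a model has been trained offline, inference can be as simple as scoring features extracted from the scene with learned coefficients. The feature set and the coefficient values below are hypothetical, not values from the disclosure, and a real model would likely be far richer than a linear one.

```python
def predict_priority(features, coef, bias=0.0):
    """Score a rescue target with a pre-trained linear model.

    `features` might be e.g. [is_person, temperature_index, 1/(10+distance)]
    extracted from the 3-D scene model and heat map; `coef` comes from
    offline training on labeled rescue data.
    """
    return sum(c * f for c, f in zip(coef, features)) + bias

coef = [2.5, 0.8, 30.0]                      # illustrative learned weights
score = predict_priority([1.0, 1.2, 1 / 30], coef)  # a person, hot, 20 m away
```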
In an optional implementation manner of this embodiment, the step S102, that is, the step of determining the task to be executed and the priority thereof according to the environment information, further includes:
and determining a rescue route and an evacuation route of the task to be executed according to the environment information and the map information of the fire scene.
In this optional implementation, the rescue route and the evacuation route of the task to be executed can be determined from the environment information acquired by the head-mounted devices, such as the three-dimensional scene model and the thermal distribution diagram of the fire scene, together with map information of the fire scene. Rescue routes and evacuation routes may be formulated based on temperature, fire, smoke level, whether there is danger along the route, the distance of the route, ease of walking, and the like.
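One plausible way to realize such route planning is a shortest-path search over a grid map whose cell costs encode the factors listed above (temperature, smoke, passability). The sketch below uses Dijkstra's algorithm under those assumptions; it is not the disclosed implementation, and the cost values are hypothetical.

```python
import heapq

def safest_route(grid, start, goal):
    """Dijkstra over a 2-D grid; each cell holds a traversal cost combining
    distance with temperature/smoke danger, or None if impassable.
    Returns the list of cells on the cheapest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            path = [cell]
            while cell in prev:          # walk predecessors back to start
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue                     # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None
```

With a cost of 9 assigned to a hot cell, the planner detours around it even though the detour is geometrically longer, which is exactly the trade-off the text describes.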
In an optional implementation manner of this embodiment, in step S103, that is, the step of sending at least one of the to-be-executed task and the priority thereof to a second head-mounted device worn by a second user for displaying on a transparent display unit of the second head-mounted device further includes:
transmitting one or more of the content of the at least one task to be performed and its priority, task execution location information, environmental information, safety countdown information, a rescue route, and an evacuation route to the second head-mounted device to be displayed on a transparent display unit of the second head-mounted device.
In this optional implementation, after the rescue task and its priority are determined, information related to the task, such as one or more of the task execution position, the environmental information of the target to be rescued, safety countdown information, a rescue route, and an evacuation route, may be sent to the second head-mounted device for display in addition to the content of the task to be executed and its priority. The environmental information of the target to be rescued may be a three-dimensional image of the fire scene with a thermal distribution diagram, from which the second user can learn the geography of the rescue environment, the state of the fire, and the like. The safety countdown information may be a safe time estimated from the current fire and other danger factors; the second user is prompted to complete the rescue task within the safe time, as otherwise the target to be rescued or the firefighters may be exposed to danger.
In an optional implementation manner of the embodiment of the present disclosure, determining the task to be executed and the priority thereof according to the environment information further includes:
and determining at least one rescue worker as the second user according to the information of the target to be rescued corresponding to the task to be executed and the current states of a plurality of rescue workers.
In this optional implementation, when the background server determines a rescue task and its priority based on the environmental information acquired by one or more head-mounted devices, and identifies the target to be rescued from the environmental information and other sensing data, it may find from the information of the target and the states of the surrounding firefighters that several rescuers could execute the rescue task. In that case an optimal rescuer can be selected from them, and the rescue task, its priority, and/or other related information sent to the second head-mounted device of that rescuer. The information of the target to be rescued includes the target category, the distance between the target and the rescuers, the danger level of the target's surroundings, and the like; the current state of a rescuer may include the rescuer's current physical condition and rescue ability and whether other rescue tasks are currently being performed. For example, when a plurality of rescuers wearing head-mounted devices are around the target to be rescued, factors such as each firefighter's distance to the target, whether the firefighter is currently performing other rescue tasks, whether the firefighter can complete the rescue task within the safety countdown, whether there is a dangerous obstacle between the firefighter and the target that is difficult to cross, the firefighter's current physical state, and the firefighter's rescue ability can be comprehensively considered to select an optimal firefighter to perform the rescue task. That optimal firefighter is designated as the second user, and information related to the rescue task is sent to the second head-mounted device worn by that firefighter.
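A minimal sketch of such a selection step is shown below; the rescuer fields (`distance`, `eta_min`, `busy`, `fitness`) and the scoring rule are hypothetical stand-ins for the factors listed above, not the disclosure's actual criteria.

```python
def pick_rescuer(rescuers, safety_countdown):
    """Choose the best available rescuer for a task.

    Each rescuer is a dict with: distance to the target (m), estimated
    minutes to reach it, whether the rescuer is already busy, and a 0-1
    fitness score summarizing physical condition and ability."""
    candidates = [
        r for r in rescuers
        if not r["busy"] and r["eta_min"] <= safety_countdown
    ]
    if not candidates:
        return None
    # Prefer fit rescuers who are close to the target.
    return min(candidates, key=lambda r: r["distance"] / max(r["fitness"], 0.1))
```

A nearby but busy firefighter, or one who cannot arrive within the safety countdown, is filtered out before scoring, mirroring the comprehensive consideration described above.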
Fig. 3 illustrates a flow chart of a fire scene task determination method according to another embodiment of the present disclosure. As shown in fig. 3, the fire scene task determination method is executed on the first head-mounted device, and includes the following steps S301 to S303:
in step S301, obtaining environmental information of a fire scene environment where a first user wearing the first head-mounted device is located;
in step S302, determining a task to be executed and a priority thereof according to the environment information;
in step S303, the task to be executed and the priority thereof are displayed on the transparent display unit of the first head-mounted device.
In this embodiment, each head-mounted device may itself determine the task to be executed by the first user wearing it, and the priority of that task, based on the environment information the device collects. That is, the fire scene task determination method may be executed on a head-mounted device: similarly to the method executed on the background server in the embodiment shown in fig. 1, the tasks to be executed and their priorities are determined from the fire scene environment information collected by the head-mounted device, and the tasks and priorities are displayed on the head-mounted device of the task performer. The difference is that the method of the embodiment shown in fig. 1 may formulate tasks to be performed (including rescue tasks, evacuation tasks, and the like) from a global perspective based on environmental information collected by one or more head-mounted devices, whereas in this embodiment the task to be executed by the first user wearing the first head-mounted device and its priority are determined from the environment information collected by that device alone and displayed on its transparent display unit. For details of the above method steps, reference may be made to the description of the embodiment shown in fig. 1, which is not repeated here.
In an optional implementation manner of the embodiment of the present disclosure, a laser radar module and a thermal imaging module are disposed on the first head-mounted device; in step S301, that is, the step of obtaining the environmental information of the fire scene environment where the first user wearing the first head-mounted device is located includes:
determining a three-dimensional scene model of the fire scene environment of the first user according to the radar information detected by the laser radar module;
and superimposing the thermal distribution information detected by the thermal imaging module on the three-dimensional scene model to obtain a three-dimensional image, including the thermal distribution information, of the fire scene environment where the first user is located.
In this alternative implementation, as described in the embodiment shown in fig. 2, the first head-mounted device is integrated with a laser radar module and a thermal imaging module, and is configured to detect environmental information of a fire scene environment where a first user wearing the first head-mounted device is located, and specific detection details may be referred to the above description of the embodiment of fig. 2.
In an optional implementation manner of the embodiment of the present disclosure, the first head-mounted device is further provided with a vital sign detection module; in step S301, the step of obtaining the environmental information of the fire scene environment where the first user wearing the first head-mounted device is located further includes:
detecting, by the vital sign detection module, the vital signs in the fire scene environment where the first user is located.
In this alternative implementation, a vital signs detection module is provided on the first headgear to detect a living being in a fire scene environment. For specific details, reference may be made to the related description in the above fire scene task determination method, and details are not described herein again.
In an optional implementation manner of the embodiment of the present disclosure, the step S302, namely, the step of determining the task to be executed and the priority thereof according to the environment information, includes:
identifying the position and the category of a target to be rescued according to the target shape, the target temperature, and/or the vital signs in the three-dimensional image;
and determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
In an optional implementation manner of the embodiment of the present disclosure, determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued includes:
determining the weight of the target to be rescued according to the category of the target to be rescued;
determining the distance between the target to be rescued and the second user according to the position of the target to be rescued;
calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the target to be rescued and the second user; or calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued, the distance between the target to be rescued and the second user and the temperature of the target to be rescued.
In this alternative implementation, the task priority is determined by a priority calculation. For specific details, reference may be made to the related description in the above fire scene task determination method, and details are not described herein again.
In an optional implementation manner of the embodiment of the present disclosure, the step S302, that is, the step of determining the task to be executed and the priority thereof according to the environment information, further includes:
and taking the environment information as input, and determining the task to be executed and the priority thereof by utilizing a pre-trained priority recognition model.
In this alternative implementation, the task to be executed and its priority are determined using a machine learning model. For specific details, reference may be made to the related description in the above fire scene task determination method, and details are not described herein again.
In an optional implementation manner of the embodiment of the present disclosure, the step S302, that is, the step of determining the task to be executed and the priority thereof according to the environment information, further includes:
and determining a rescue route and an evacuation route of the task to be executed according to the environment information and the map information of the fire scene.
In this alternative implementation, while the task to be executed and its priority are being formulated, the rescue route and the evacuation route can also be determined according to the environmental information and the fire scene map. For specific details, reference may be made to the related description in the above fire scene task determination method, and details are not described herein again.
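A minimal sketch of route determination from fire scene map information follows. The disclosure does not name a path-planning algorithm, so breadth-first search on a simple occupancy grid is used here as an assumption; '#' cells stand for impassable areas such as fire or collapse:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search for a shortest route on a fire-scene floor grid.

    `grid` is a list of strings where '#' marks an impassable cell and
    '.' a passable one; `start` and `goal` are (row, col) tuples.
    Returns the list of cells on a shortest route, or None if no safe
    route exists.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no safe route found
```

The same routine can serve both the rescue route (user to target) and the evacuation route (target to exit) by swapping the start and goal cells.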
In an optional implementation manner of the embodiment of the present disclosure, the step S303 of displaying the task to be executed and the priority thereof on the transparent display unit of the first head-mounted device includes:
displaying one or more of the content of the task to be performed and its priority, task execution location information, environmental information, safety countdown information, rescue route, and evacuation route on a transparent display unit of the first head-mounted device.
In this alternative implementation, one or more of the above-mentioned items of information may be displayed directly at a predetermined position of the transparent display unit of the head-mounted device, that is, at a position that does not obstruct the user's line of sight, or may be superimposed in an augmented reality manner on a rescue target or an environmental entity within the user's visual range. For example, if the target to be rescued is within the user's visual range, a rescue mark can be superimposed on it, its display brightness can be enhanced, or the display of other entities around it can even be weakened to highlight it. Specifically, this can be realized by shielding or attenuating the natural light reflected by the entities around the target to be rescued, or by enhancing the natural light reflected by the target itself, so that a firefighter wearing the head-mounted device sees the target to be rescued more prominently through the transparent display unit while the surrounding environment appears darker. It will be appreciated that more than the target to be rescued may be displayed in this manner; the rescue route, the evacuation route, and the like may also be display-enhanced in this way so that the firefighter can quickly and accurately perform rescue tasks and/or evacuation tasks.
In an optional implementation manner of the embodiment of the present disclosure, the step S303 of displaying the task to be executed and the priority thereof on the transparent display unit of the first head-mounted device further includes:
and superimposing and displaying one or more of the priority, the task type, and the reason for generating the task to be executed on the target to be rescued corresponding to the task to be executed.
In this alternative implementation, the rescue priority of the target to be rescued, the task type, the reason for the task to be executed, and the like can be superimposed and displayed in an augmented reality manner. For example, colors or marks representing different priorities are superimposed on the target to be rescued, such as red for high-priority targets. The task type can be a rescue task, a fire extinguishing task, and the like, and easily understood marks for rescue, fire extinguishing, and so on can be superimposed on the target to be rescued. The reasons for the task to be executed can include that the target to be rescued is a trapped person, a flammable or explosive article, a toxic substance, and the like, and these reasons are superimposed on the corresponding target through different marks. By means of this augmented reality display mode, firefighters can be reminded of task-related information in the most conspicuous way without their vision being affected, and can grasp the basic situation of the task to be executed immediately and without any manual operation.
In an optional implementation manner of the embodiment of the present disclosure, the fire scene task determination method further includes:
receiving the tasks to be executed and the priorities thereof of other users, sent by the head-mounted devices worn by the other users and/or by a server;
and displaying the tasks to be executed and the priorities of the other users on a transparent display unit of the first head-mounted device.
In this alternative implementation, the head-mounted device may also receive the tasks to be executed and the priorities thereof of other firefighters from other head-mounted devices or a server and display them on the transparent display unit. The tasks and priorities of other firefighters can be superimposed in an augmented reality manner on the corresponding target to be rescued when that target appears in the line of sight of the first user wearing the first head-mounted device, so as to remind the first user of other tasks to be executed that exist in the fire scene environment, so that the first user can take necessary measures when executing his or her own task or when an emergency arises.
In an optional implementation manner of the embodiment of the present disclosure, the priority of the task to be executed is assigned based on all tasks to be executed in the fire scene, or based only on the tasks to be executed by the first user.
In this alternative implementation, the first head-mounted device may prioritize only the tasks to be executed by the first user, prompting the first user to execute them in order from high priority to low; or, after exchanging information with other head-mounted devices and/or a server, it may prioritize all tasks to be executed in the fire scene, so that the first user can respond appropriately upon encountering, during rescue, a high-priority task assigned to another user. For example, while executing his or her own task, the first user may encounter a higher-priority task being executed by another user, such as rescuing a person, and may estimate that the other user cannot complete it alone or in time; if the first user's own task priority is low, the first user can choose to temporarily give up the task to be executed and help the other user complete the high-priority task. The specific setting can be made according to the actual situation and is not limited herein.
In an optional implementation manner of the embodiment of the present disclosure, the method for determining a fire scene task further includes:
identifying the change of the object form in the fire scene environment of the first user according to the environment information;
determining the possibility of danger according to the object form change;
and issuing early warning information when the possibility of a danger occurring is greater than a predetermined threshold.
In this alternative implementation, the three-dimensional scene model of the fire scene environment can be reconstructed in real time from distance information and the like detected by the laser radar module, the possibility of danger (such as a house collapse) is determined from changes in object form in the three-dimensional scene model, and early warning information is issued when the possibility is high. The early warning information can be a sound, a flashing light, or a mark superimposed at the location of the danger. In this way, the possibility of danger can be predicted and early warning information provided to firefighters, further ensuring their safety.
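The warning step above could be sketched as follows, assuming two successive lidar snapshots of the same structure are compared point by point; the displacement metric, the mapping to a probability, and the threshold are all illustrative assumptions:

```python
def collapse_risk(prev_points, curr_points, threshold=0.5):
    """Illustrative early-warning check based on object form change.

    Compares two lidar snapshots (lists of (x, y, z) points, same order)
    and warns when the mean point displacement suggests the structure is
    deforming. The metric and threshold are assumed values.
    Returns (likelihood, warn).
    """
    displacements = [
        ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
        for (x0, y0, z0), (x1, y1, z1) in zip(prev_points, curr_points)
    ]
    if not displacements:
        return 0.0, False
    mean_shift = sum(displacements) / len(displacements)
    likelihood = min(1.0, mean_shift)  # crude mapping of shift to probability
    return likelihood, likelihood > threshold
```

When `warn` is true, the device would emit the sound, flashing light, or superimposed danger mark described above.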
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 4 shows a block diagram of a fire scene task determination device according to an embodiment of the present disclosure, which may be implemented as part or all of an electronic device by software, hardware, or a combination of both. As shown in fig. 4, the fire scene task determination device includes a first obtaining module 401, a first determining module 402, and a sending module 403:
a first obtaining module 401, configured to obtain, from a first head-mounted device worn by at least one first user, environmental information of a fire scene environment where the first user is located;
a first determining module 402 configured to determine the task to be executed and the priority thereof according to the environment information;
a sending module 403 configured to send the at least one high-priority task to be executed to a second head-mounted device worn by a second user for display on a transparent display unit of the second head-mounted device; the second user may be the same as or different from the first user.
In an optional implementation manner of the embodiment of the present disclosure, a laser radar module and a thermal imaging module are disposed on the first head-mounted device; the first obtaining module 401 includes:
a first acquisition sub-module configured to acquire radar information detected by the lidar module and thermal distribution information detected by the thermal imaging module;
the first determining submodule is configured to determine a three-dimensional scene model of a fire scene environment where the first user is located according to the radar information;
and the second acquisition sub-module is configured to superimpose the thermal distribution information on the three-dimensional scene model to obtain a three-dimensional image of the fire scene environment where the first user is located, wherein the three-dimensional image comprises the thermal distribution information.
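The superimposition of thermal distribution information onto the three-dimensional scene model might be sketched as below; the point-cloud representation and the projection from a 3-D point to a thermal reading are assumptions, since the disclosure does not fix a data format:

```python
def fuse_thermal(point_cloud, thermal_lookup):
    """Attach a temperature reading to each lidar point.

    `point_cloud` is a list of (x, y, z) points from the laser radar
    module; `thermal_lookup(x, y)` returns the temperature at a point's
    projection onto the thermal image (this projection is an assumed
    simplification). The result is the "three-dimensional image
    including the thermal distribution information" described above.
    """
    return [(x, y, z, thermal_lookup(x, y)) for (x, y, z) in point_cloud]
```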
In an optional implementation manner of the embodiment of the present disclosure, the first head-mounted device is further provided with a vital sign detection module; the first obtaining module 401 further includes:
and the third obtaining sub-module is configured to obtain the vital signs, detected by the vital sign detection module, in the fire scene environment where the first user is located.
In an optional implementation manner of the embodiment of the present disclosure, the first determining module 402 includes:
the first identification submodule is configured to identify the position and the category of a target to be rescued according to the shape of the target, the temperature of the target and/or the characteristics of the life body in the three-dimensional image;
the second determination submodule is configured to determine the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
In an optional implementation manner of the embodiment of the present disclosure, the first determining module 402 includes:
a third determining submodule configured to determine a weight of the target to be rescued according to the category of the target to be rescued;
a fourth determining submodule configured to determine the distance between the target to be rescued and the second user according to the position of the target to be rescued;
the first calculation submodule is configured to calculate the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the second user and the target to be rescued; or calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued, the distance between the target to be rescued and the second user and the temperature of the target to be rescued.
In an optional implementation manner of the embodiment of the present disclosure, the first determining module 402 includes:
and the fifth determining submodule is configured to use the environment information as input and determine the tasks to be executed and the priorities thereof by utilizing a pre-trained priority recognition model.
In an optional implementation manner of the embodiment of the present disclosure, the first determining module 402 further includes:
a sixth determining submodule configured to determine a rescue route and an evacuation route of the task to be performed according to the environment information and the fire scene map information.
In an optional implementation manner of the embodiment of the present disclosure, the sending module 403 includes:
a transmission sub-module configured to transmit one or more of content of the at least one task to be performed and its priority, task execution location information, environmental information, safety countdown information, a rescue route, and an evacuation route to the second head-mounted device to be displayed on a transparent display unit of the second head-mounted device.
In an optional implementation manner of the embodiment of the present disclosure, the first determining module 402 further includes:
and the seventh determining submodule is configured to determine at least one rescuer as the second user according to the information of the target to be rescued corresponding to the task to be executed and the current states of a plurality of rescuers.
The above-mentioned fire scene task determining device corresponds to and is consistent with the fire scene task determining method described in the embodiment and relevant portions shown in fig. 1, and specific details can be referred to the description of the embodiment and relevant portions shown in fig. 1.
Fig. 5 shows a block diagram of a fire scene task determination apparatus according to an embodiment of the present disclosure, which may be implemented as part or all of a head-mounted device by software, hardware, or a combination of both. As shown in fig. 5, the fire scene task determining apparatus includes a second obtaining module 501, a second determining module 502, and a first displaying module 503:
a second obtaining module 501, configured to obtain environmental information of a fire scene environment where a first user wearing the first head-mounted device is located;
a second determining module 502 configured to determine the task to be executed and the priority thereof according to the environment information;
a first display module 503 configured to display the task to be performed and the priority thereof on a transparent display unit of the first head mounted device.
In an optional implementation manner of the embodiment of the present disclosure, a laser radar module and a thermal imaging module are disposed on the first head-mounted device; the second obtaining module 501 includes:
the eighth determining submodule is configured to determine a three-dimensional scene model of the fire scene environment where the first user is located according to the radar information detected by the laser radar module;
and the fourth obtaining sub-module is configured to superimpose the thermal distribution information detected by the thermal imaging module onto the three-dimensional scene model, so as to obtain a three-dimensional image of the fire scene environment where the first user is located, wherein the three-dimensional image includes the thermal distribution information.
In an optional implementation manner of the embodiment of the present disclosure, the first head-mounted device is further provided with a vital sign detection module; the second obtaining module 501 includes:
the detection sub-module is configured to detect, through the vital sign detection module, vital signs in the fire scene environment where the first user is located.
In an optional implementation manner of the embodiment of the present disclosure, the second determining module 502 includes:
the second identification submodule is configured to identify the position and the category of a target to be rescued according to the shape of the target, the temperature of the target and/or the characteristics of the life body in the three-dimensional image;
and the ninth determining submodule is configured to determine the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
In an optional implementation manner of the embodiment of the present disclosure, the second determining module 502 includes:
a tenth determining submodule configured to determine a weight of the target to be rescued according to the category of the target to be rescued;
an eleventh determining submodule configured to determine the distance between the target to be rescued and the second user according to the position of the target to be rescued;
the second calculation submodule is configured to calculate the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the second user and the target to be rescued; or calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued, the distance between the target to be rescued and the second user and the temperature of the target to be rescued.
In an optional implementation manner of the embodiment of the present disclosure, the second determining module 502 includes:
and the twelfth determining submodule is configured to use the environment information as input and determine the tasks to be executed and the priorities thereof by using a pre-trained priority recognition model.
In an optional implementation manner of the embodiment of the present disclosure, the second determining module 502 further includes:
a thirteenth determination submodule configured to determine a rescue route and an evacuation route of the task to be performed according to the environment information and the fire scene map information.
In an optional implementation manner of the embodiment of the present disclosure, the first display module 503 includes:
a first display sub-module configured to display one or more of contents of the task to be performed and a priority thereof, task execution location information, environmental information, safety countdown information, a rescue route, and an evacuation route on a transparent display unit of the first head-mounted device.
In an optional implementation manner of the embodiment of the present disclosure, the first display module 503 includes:
and the second display submodule is configured to display one or more of the priority, the task type and the reason for generating the task to be executed in a superposition manner on the target to be rescued corresponding to the task to be executed.
In an optional implementation manner of the embodiment of the present disclosure, the apparatus further includes:
the receiving module is configured to receive tasks to be executed and priorities thereof of other users, which are sent by head-mounted equipment and/or a server worn by the other users;
a second display module configured to display the tasks to be performed and the priorities thereof of the other users on the transparent display unit of the first head mounted device.
In an optional implementation manner of the embodiment of the present disclosure, the priority of the to-be-executed task is divided based on all to-be-executed tasks in the fire scene, or is divided based on to-be-executed tasks to be executed by the first user.
In an optional implementation manner of the embodiment of the present disclosure, the apparatus further includes:
the identification module is configured to identify changes in object form in the fire scene environment where the first user is located according to the environmental information;
a third determining module configured to determine the possibility of danger according to the object form change;
an early warning module configured to issue early warning information when the likelihood of a hazard being present is greater than a predetermined threshold.
The above-mentioned fire scene task determining device corresponds to and is consistent with the fire scene task determining method described in the embodiment and relevant portions shown in fig. 3, and specific details can be referred to the description of the embodiment and relevant portions shown in fig. 3.
Fig. 6 shows a block diagram of a fire scene task determination system according to an embodiment of the present disclosure. As shown in fig. 6, the fire scene task determination system includes a server 601, at least one first head-mounted device 602, and at least one second head-mounted device 603;
the at least one first head-mounted device 602 includes a first lidar module 6021 and a first thermal imaging module 6022; the first laser radar module 6021 is configured to detect radar information in a fire scene environment where a first user wearing the first head-mounted device 602 is located; the first thermal imaging module 6022 is configured to detect thermal distribution information in the fire scene environment where the first user wearing the first head-mounted device 602 is located;
the server 601 determines at least one task to be executed and its priority according to the radar information and the thermal distribution information received from the first head mounted device 602, and transmits the at least one task to be executed and its priority to the at least one second head mounted device 603;
the at least one second head-mounted device 603 includes a second transparent display unit 6034 for displaying the received at least one task to be executed and its priority on the second transparent display unit 6034.
The first head-mounted device 602 also includes a first vital sign detection module 6023 that detects vital signs in the fire scene environment of the first user wearing the first head-mounted device 602.
In an optional implementation manner of the embodiment of the present disclosure, the first head mounted device 602 further includes a first transparent display unit 6024 configured to display at least one task to be performed of the first user and a priority thereof.
In an optional implementation manner of the embodiment of the present disclosure, the second head-mounted device 603 further includes a second laser radar module 6031 and a second thermal imaging module 6032; the second laser radar module 6031 is configured to detect radar information in a fire scene environment where a second user wearing the second head-mounted device 603 is located; the second thermal imaging module 6032 is configured to detect thermal distribution information in the fire scene environment where the second user wearing the second head-mounted device 603 is located.
In an optional implementation manner of the embodiment of the present disclosure, the second head-mounted device 603 further includes a second vital sign detecting module 6034, configured to detect a vital sign in a fire scene environment where a second user wearing the second head-mounted device 603 is located.
The server 601 cooperates with the first head-mounted device 602 and the second head-mounted device 603 to complete the method for determining the fire scene task in the embodiment and the related parts shown in fig. 1, and the first head-mounted device 602 and the second head-mounted device 603 may also respectively complete the method for determining the fire scene task in the embodiment and the related parts shown in fig. 3, for specific details, reference may be made to the above description of the embodiment and the related parts shown in fig. 1, and the embodiment and the related parts shown in fig. 3, and details are not described here again.
Fig. 7 is a schematic structural diagram of a server suitable for implementing the fire scene task determination method according to the embodiment of the present disclosure.
As shown in fig. 7, the electronic apparatus 700 includes a Central Processing Unit (CPU) 701, which can execute various processes in the embodiment shown in fig. 1 described above according to a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the electronic apparatus 700. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to embodiments of the present disclosure, the method described above with reference to fig. 1 may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a computer-readable medium, the computer program comprising program code for performing the fire scene task determination method of fig. 1. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711.
The embodiment of the disclosure also discloses a head-mounted device, which comprises a transparent display unit, a memory and a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps of the embodiment shown in fig. 3 and the related portions.
The head-mounted device further comprises a laser radar module and a thermal imaging module; the laser radar module is used for detecting radar information in a fire scene environment where a user wearing the head-mounted device is located; the thermal imaging module is used for detecting thermal distribution information in the fire scene environment where the user wearing the head-mounted device is located.
The head-mounted device further comprises a vital sign detection module used for detecting vital signs in a fire scene environment where a user wearing the head-mounted device is located.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is made without departing from the inventive concept. For example, technical solutions may be formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Claims (47)
1. A method for determining a fire scene task, comprising:
acquiring, from a first head-mounted device worn by at least one first user, environmental information of a fire scene environment where the first user is located;
determining tasks to be executed and priorities thereof according to the environment information;
sending at least one task to be executed and the priority of the task to be executed to a second head-mounted device worn by a second user to be displayed on a transparent display unit of the second head-mounted device; the second user may be the same as or different from the first user.
2. The fire scene task determination method according to claim 1, wherein a laser radar module and a thermal imaging module are provided on the first head-mounted device; wherein acquiring the environmental information of the fire scene environment where the first user is located from the first head-mounted device worn by at least one first user comprises:
acquiring radar information detected by the laser radar module and thermal distribution information detected by the thermal imaging module;
determining a three-dimensional scene model of the fire scene environment of the first user according to the radar information;
and superposing the thermal distribution information to the three-dimensional scene model to obtain a three-dimensional image including the thermal distribution information of the fire scene environment where the user is located.
3. The fire scene task determination method according to claim 1 or 2, wherein a vital sign detection module is further provided on the first head-mounted device; wherein acquiring the environmental information of the fire scene environment where the first user is located from the first head-mounted device worn by at least one first user further comprises:
and acquiring the vital signs, detected by the vital sign detection module, in the fire scene environment where the first user is located.
4. The fire scene task determination method according to claim 3, wherein determining the tasks to be executed and the priorities thereof according to the environment information comprises:
identifying the position and the category of a target to be rescued according to the target shape, the target temperature and/or the vital signs in the three-dimensional image;
and determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
5. The fire scene task determination method according to claim 4, wherein determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued comprises:
determining the weight of the target to be rescued according to the category of the target to be rescued;
determining the distance between the target to be rescued and the second user according to the position of the target to be rescued;
calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the target to be rescued and the second user; or calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued, the distance between the target to be rescued and the second user and the temperature of the target to be rescued.
6. The fire scene task determination method according to claim 1, wherein determining the tasks to be executed and the priorities thereof according to the environment information includes:
and taking the environment information as input, and determining the task to be executed and the priority thereof by utilizing a pre-trained priority recognition model.
7. The fire scene task determination method according to claim 1, wherein determining the tasks to be executed and the priorities thereof according to the environment information further comprises:
and determining a rescue route and an evacuation route of the task to be executed according to the environment information and the map information of the fire scene.
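Claim 7 derives rescue and evacuation routes from the environment information and the fire-scene map but names no path-planning method. One common choice for a gridded map is breadth-first search over passable cells; the sketch below is an assumption for illustration (the occupancy-grid representation and the function name are not from the claims).

```python
from collections import deque

def find_route(grid, start, goal):
    """BFS shortest path on a 2-D occupancy grid.

    grid[r][c] is True where the cell is blocked (fire/obstacle).
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as parent map
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct path by walking parents
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                   # goal unreachable

# The blocked cell at (0, 1) forces a detour through the middle row.
grid = [[False, True, False],
        [False, False, False],
        [False, True, False]]
route = find_route(grid, (0, 0), (0, 2))
```

An evacuation route could be planned the same way with the exit as the goal; hot cells from the thermal data would simply be marked blocked.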
8. The fire scene task determination method according to claim 1, wherein sending at least one of the tasks to be performed and the priority thereof to a second head-mounted device worn by a second user to be displayed on a transparent display unit of the second head-mounted device comprises:
sending one or more of the content of the at least one task to be performed, the priority of the task to be performed, task performance location information, environmental information, safety countdown information, rescue routes, and evacuation routes to the second head-mounted device for display on a transparent display unit of the second head-mounted device.
9. The fire scene task determination method according to claim 1, wherein determining the tasks to be executed and the priorities thereof according to the environment information further comprises:
and determining at least one rescue worker as the second user according to the information of the target to be rescued corresponding to the task to be executed and the current states of a plurality of rescue workers.
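Claim 9 selects a second user from the current states of several rescue workers but leaves the selection rule open. A minimal sketch, assuming the rule is "nearest idle rescuer" (the state labels, data layout, and function name are hypothetical):

```python
import math

def assign_rescuer(target_pos, rescuers):
    """Pick the closest rescuer whose state is 'idle', or None.

    rescuers maps name -> (state, (x, y)); this layout is an assumption.
    """
    best, best_dist = None, math.inf
    for name, (state, pos) in rescuers.items():
        if state != "idle":
            continue                      # skip rescuers already on a task
        dist = math.dist(target_pos, pos)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best, best_dist = name, dist
    return best

rescuers = {
    "A": ("busy", (1.0, 1.0)),
    "B": ("idle", (10.0, 0.0)),
    "C": ("idle", (3.0, 4.0)),
}
chosen = assign_rescuer((0.0, 0.0), rescuers)
```

Rescuer A is closest but busy, so the idle rescuer C (distance 5.0) is chosen over B (distance 10.0).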
10. A fire scene task determination method, the method running on a first head-mounted device, comprising:
acquiring environmental information of a fire scene environment where a first user wearing the first head-mounted device is located;
determining tasks to be executed and priorities thereof according to the environment information;
and displaying the task to be executed and the priority of the task to be executed on a transparent display unit of the first head-mounted device.
11. The fire scene task determination method according to claim 10, wherein a laser radar module and a thermal imaging module are provided on the first head-mounted device; and wherein acquiring the environmental information of the fire scene environment where the first user wearing the first head-mounted device is located comprises:
determining a three-dimensional scene model of the fire scene environment of the first user according to the radar information detected by the laser radar module;
and superposing the thermal distribution information detected by the thermal imaging module onto the three-dimensional scene model to obtain a three-dimensional image, including the thermal distribution information, of the fire scene environment where the first user is located.
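Claims 2 and 11 superimpose the thermal-imaging data onto the lidar-derived scene model without specifying the fusion step. One plausible reading is to project each model vertex into the thermal image and tag it with the temperature read there; the pinhole projection below is an illustrative assumption (focal length, principal point, and data layout are invented, not recited in the claims).

```python
def fuse_thermal(vertices, thermal, focal=100.0, cx=2, cy=2):
    """Tag 3-D model vertices with temperatures from a thermal image.

    vertices: list of (x, y, z) in the camera frame, z > 0.
    thermal: 2-D grid of temperatures indexed [row][col].
    Returns a list of (x, y, z, temperature); vertices projecting
    outside the image are dropped.
    """
    fused = []
    rows, cols = len(thermal), len(thermal[0])
    for x, y, z in vertices:
        u = int(round(focal * x / z)) + cx   # pixel column (assumed pinhole)
        v = int(round(focal * y / z)) + cy   # pixel row
        if 0 <= v < rows and 0 <= u < cols:
            fused.append((x, y, z, thermal[v][u]))
    return fused

# 5x5 thermal image with a hot spot on the optical axis.
thermal = [[20.0] * 5 for _ in range(5)]
thermal[2][2] = 300.0
fused = fuse_thermal([(0.0, 0.0, 1.0)], thermal)
```

A vertex on the optical axis projects to the center pixel and picks up the 300-degree hot spot; rendering these tagged vertices yields the claimed three-dimensional image containing the thermal distribution.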
12. The fire scene task determination method according to claim 10, wherein a vital sign detection module is further provided on the first head-mounted device; and wherein acquiring the environmental information of the fire scene environment where the first user wearing the first head-mounted device is located comprises:
detecting the vital signs of the first user in the fire scene environment through the vital sign detection module.
13. The fire scene task determination method according to claim 10, wherein determining the tasks to be executed and the priorities thereof according to the environment information includes:
identifying the position and the category of a target to be rescued according to the target shape, the target temperature and/or the vital signs in the three-dimensional image;
and determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
14. The fire scene task determination method according to claim 13, wherein determining the rescue priority of the target to be rescued according to the position and the category of the target to be rescued comprises:
determining the weight of the target to be rescued according to the category of the target to be rescued;
determining the distance between the target to be rescued and the second user according to the position of the target to be rescued;
calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the target to be rescued and the second user; or calculating the rescue priority of the target to be rescued according to the weight of the target to be rescued, the distance between the target to be rescued and the second user and the temperature of the target to be rescued.
15. The fire scene task determination method according to claim 10, wherein determining the tasks to be executed and the priorities thereof according to the environment information includes:
and taking the environment information as input, and determining the task to be executed and the priority thereof by utilizing a pre-trained priority recognition model.
16. The fire scene task determination method according to claim 10, wherein determining the task to be executed and the priority thereof according to the environment information further comprises:
and determining a rescue route and an evacuation route of the task to be executed according to the environment information and the map information of the fire scene.
17. The fire scene task determination method according to claim 10, wherein displaying the task to be executed and the priority thereof on a transparent display unit of the first head-mounted device includes:
displaying one or more of content of the task to be performed, priority of the task to be performed, task performance location information, environmental information, safety countdown information, a rescue route, and an evacuation route on a transparent display unit of the first head-mounted device.
18. The fire scene task determination method according to claim 10, wherein displaying the task to be executed and the priority thereof on a transparent display unit of the first head-mounted device includes:
and displaying, superimposed on the target to be rescued corresponding to the task to be executed, one or more of the priority of the task to be executed, the task type, and the reason for generating the task.
19. The fire scene task determination method according to claim 10, further comprising:
receiving the tasks to be executed of other users and the priorities thereof, sent by the head-mounted devices worn by the other users and/or by a server;
and displaying the tasks to be executed and the priorities of the other users on a transparent display unit of the first head-mounted device.
20. The fire scene task determination method according to claim 10, wherein the priority of the tasks to be executed is divided based on all the tasks to be executed inside the fire scene, or is divided based on the tasks to be executed by the first user.
21. The fire scene task determination method according to claim 10, further comprising:
identifying the change of the object form in the fire scene environment of the first user according to the environment information;
determining the possibility of danger according to the object form change;
and issuing warning information when the likelihood of a hazard occurring exceeds a predetermined threshold.
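Claim 21 estimates the likelihood of danger from changes in object form and warns above a threshold, without fixing how "likelihood" is computed. A minimal sketch, assuming shapes are sets of occupied cells and likelihood is the changed fraction (this mapping and all names are assumptions):

```python
def hazard_warning(prev_shape, curr_shape, threshold=0.5):
    """Estimate hazard likelihood from how much an object's shape changed.

    Shapes are sets of occupied grid cells; likelihood is the fraction of
    cells in the symmetric difference. Returns (likelihood, warn_flag).
    """
    changed = len(prev_shape ^ curr_shape)        # cells that appeared/vanished
    total = max(len(prev_shape | curr_shape), 1)  # avoid division by zero
    likelihood = changed / total
    return likelihood, likelihood > threshold

# An unchanged beam vs. one whose cells have all shifted down a row.
stable = hazard_warning({(0, 0), (0, 1)}, {(0, 0), (0, 1)})
collapsing = hazard_warning({(0, 0), (0, 1)}, {(1, 0), (1, 1)})
```

The unchanged shape yields likelihood 0.0 and no warning; the fully shifted one yields 1.0, crossing the 0.5 threshold and triggering the claimed warning information.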
22. A fire scene task determination device, comprising:
a first acquisition module configured to acquire environmental information of a fire scene environment where at least one first user is located from a first head-mounted device worn by the first user;
a first determination module configured to determine the tasks to be executed and the priorities thereof according to the environment information;
and a sending module configured to send at least one high-priority task to be executed to a second head-mounted device worn by a second user for display on a transparent display unit of the second head-mounted device; wherein the second user may be the same as or different from the first user.
23. The fire scene task determination device according to claim 22, wherein a laser radar module and a thermal imaging module are provided on the first head-mounted device; and wherein the first acquisition module comprises:
a first acquisition sub-module configured to acquire the radar information detected by the laser radar module and the thermal distribution information detected by the thermal imaging module;
the first determining submodule is configured to determine a three-dimensional scene model of a fire scene environment where the first user is located according to the radar information;
and a second acquisition sub-module configured to superimpose the thermal distribution information on the three-dimensional scene model to obtain a three-dimensional image, including the thermal distribution information, of the fire scene environment where the first user is located.
24. The fire scene task determination device according to claim 22 or 23, wherein a vital sign detection module is further provided on the first head-mounted device; and wherein the first acquisition module further comprises:
a third acquisition sub-module configured to acquire the vital signs of the first user in the fire scene environment detected by the vital sign detection module.
25. The fire scene task determination device of claim 24, wherein the first determination module comprises:
a first identification submodule configured to identify the position and the category of a target to be rescued according to the target shape, the target temperature and/or the vital signs in the three-dimensional image;
the second determination submodule is configured to determine the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
26. The fire scene task determination device of claim 25, wherein the first determination module comprises:
a third determining submodule configured to determine the weight of the target to be rescued according to the category of the target to be rescued;
a fourth determining submodule configured to determine the distance between the target to be rescued and the second user according to the position of the target to be rescued;
and a first calculation submodule configured to calculate the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the target to be rescued and the second user, or according to the weight of the target to be rescued, the distance between the target to be rescued and the second user, and the temperature of the target to be rescued.
27. The fire scene task determination device of claim 22, wherein the first determination module comprises:
and the fifth determining submodule is configured to use the environment information as input and determine the tasks to be executed and the priorities thereof by utilizing a pre-trained priority recognition model.
28. The fire scene task determination device of claim 22, wherein the first determination module further comprises:
a sixth determining submodule configured to determine a rescue route and an evacuation route of the task to be performed according to the environment information and the fire scene map information.
29. The fire scene task determination device of claim 22, wherein the sending module comprises:
a transmission sub-module configured to transmit one or more of content of the at least one task to be performed, priority of the task to be performed, task execution location information, environmental information, safety countdown information, a rescue route, and an evacuation route to the second head-mounted device for display on a transparent display unit of the second head-mounted device.
30. The fire scene task determination device of claim 22, wherein the first determination module further comprises:
and the seventh determining submodule is configured to determine at least one rescuer as the second user according to the information of the target to be rescued corresponding to the task to be executed and the current states of a plurality of rescuers.
31. A fire scene task determination apparatus, the apparatus running on a first head-mounted device, comprising:
the second acquisition module is configured to acquire environmental information of a fire scene environment where a first user wearing the first head-mounted device is located;
the second determination module is configured to determine the tasks to be executed and the priorities thereof according to the environment information;
a first display module configured to display the task to be performed and the priority thereof on a transparent display unit of the first head mounted device.
32. The fire scene task determination apparatus according to claim 31, wherein a laser radar module and a thermal imaging module are provided on the first head-mounted device; and wherein the second acquisition module comprises:
the eighth determining submodule is configured to determine a three-dimensional scene model of the fire scene environment where the first user is located according to the radar information detected by the laser radar module;
and a fourth acquisition sub-module configured to superimpose the thermal distribution information detected by the thermal imaging module onto the three-dimensional scene model to obtain a three-dimensional image, including the thermal distribution information, of the fire scene environment where the first user is located.
33. The fire scene task determination apparatus according to claim 31, wherein a vital sign detection module is further provided on the first head-mounted device; and wherein the second acquisition module comprises:
a detection sub-module configured to detect the vital signs of the first user in the fire scene environment through the vital sign detection module.
34. The fire scene task determination apparatus of claim 31, wherein the second determination module comprises:
a second identification submodule configured to identify the position and the category of a target to be rescued according to the target shape, the target temperature and/or the vital signs in the three-dimensional image;
and the ninth determining submodule is configured to determine the rescue priority of the target to be rescued according to the position and the category of the target to be rescued.
35. The fire scene task determination apparatus of claim 34, wherein the second determination module comprises:
a tenth determining submodule configured to determine the weight of the target to be rescued according to the category of the target to be rescued;
an eleventh determining submodule configured to determine the distance between the target to be rescued and the second user according to the position of the target to be rescued;
and a second calculation submodule configured to calculate the rescue priority of the target to be rescued according to the weight of the target to be rescued and the distance between the target to be rescued and the second user, or according to the weight of the target to be rescued, the distance between the target to be rescued and the second user, and the temperature of the target to be rescued.
36. The fire scene task determination apparatus of claim 31, wherein the second determination module comprises:
and the twelfth determining submodule is configured to use the environment information as input and determine the tasks to be executed and the priorities thereof by using a pre-trained priority recognition model.
37. The fire scene task determination apparatus of claim 31, wherein the second determination module further comprises:
a thirteenth determination submodule configured to determine a rescue route and an evacuation route of the task to be performed according to the environment information and the fire scene map information.
38. The fire scene task determination apparatus of claim 31, wherein the first display module comprises:
a first display sub-module configured to display one or more of content of the task to be performed, priority of the task to be performed, task execution location information, environmental information, safety countdown information, a rescue route, and an evacuation route on a transparent display unit of the first head-mounted device.
39. The fire scene task determination apparatus of claim 31, wherein the first display module comprises:
a second display submodule configured to display, superimposed on the target to be rescued corresponding to the task to be executed, one or more of the priority of the task to be executed, the task type, and the reason for generating the task.
40. The fire scene task determination apparatus of claim 31, further comprising:
a receiving module configured to receive the tasks to be executed of other users and the priorities thereof, sent by the head-mounted devices worn by the other users and/or by a server;
a second display module configured to display the tasks to be performed and the priorities thereof of the other users on the transparent display unit of the first head mounted device.
41. The fire scene task determination apparatus of claim 31, wherein the priority of the tasks to be executed is divided based on all the tasks to be executed inside the fire scene or based on the tasks to be executed by the first user.
42. The fire scene task determination apparatus of claim 31, further comprising:
an identification module configured to identify the change of the object form in the fire scene environment of the first user according to the environment information;
a third determination module configured to determine the likelihood of a hazard according to the object form change;
and an early warning module configured to issue warning information when the likelihood of a hazard occurring exceeds a predetermined threshold.
43. A server, comprising a memory and a processor; wherein,
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps of any of claims 1-9.
44. A head-mounted device comprising a transparent display unit, a memory, and a processor; wherein,
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps of any of claims 10-21.
45. A fire scene task determination system, comprising: the server of claim 43, at least one first head-mounted device, and at least one second head-mounted device.
46. A fire scene task determination system, comprising: a server, at least one first head-mounted device, and at least one second head-mounted device;
the at least one first head-mounted device includes a first laser radar module and a first thermal imaging module; the first laser radar module is used for detecting radar information in the fire scene environment where a first user wearing the first head-mounted device is located; the first thermal imaging module is used for detecting the thermal distribution information in the fire scene environment where the first user wearing the first head-mounted device is located;
the server determines at least one task to be executed and the priority thereof according to the radar information and the thermal distribution information received from the first head-mounted device, and sends the at least one task to be executed and the priority thereof to the at least one second head-mounted device;
the at least one second head-mounted device comprises a second transparent display unit, which displays the received at least one task to be executed and the priority thereof.
47. A computer-readable storage medium having stored thereon computer instructions, characterized in that the computer instructions, when executed by a processor, carry out the method steps of any of claims 1-21.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810050662.8A CN108169761B (en) | 2018-01-18 | 2018-01-18 | Fire scene task determination method, device and system and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108169761A true CN108169761A (en) | 2018-06-15 |
CN108169761B CN108169761B (en) | 2024-08-06 |
Family
ID=62515235
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810050662.8A Active CN108169761B (en) | 2018-01-18 | 2018-01-18 | Fire scene task determination method, device and system and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108169761B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108765872A (en) * | 2018-08-06 | 2018-11-06 | 上海瀚莅电子科技有限公司 | A kind of estimation method, system and the intelligent wearable device of stranded object environment parameter |
CN109672875A (en) * | 2018-11-30 | 2019-04-23 | 迅捷安消防及救援科技(深圳)有限公司 | Fire-fighting and rescue intelligent helmet, fire-fighting and rescue method and Related product |
CN109717537A (en) * | 2018-11-30 | 2019-05-07 | 迅捷安消防及救援科技(深圳)有限公司 | Fire-fighting and rescue intelligent helmet, fire-fighting and rescue method and Related product |
CN109920099A (en) * | 2019-01-29 | 2019-06-21 | 迅捷安消防及救援科技(深圳)有限公司 | Removable module wisdom fire-fighting Support Equipment on duty and Related product |
CN109965434A (en) * | 2019-01-29 | 2019-07-05 | 迅捷安消防及救援科技(深圳)有限公司 | Removable module wisdom fire-fighting Support Equipment on duty and Related product |
CN110624196A (en) * | 2019-07-15 | 2019-12-31 | 国网浙江省电力有限公司嘉兴供电公司 | Automatic flame-retardant control method applied to intelligent fire fighting |
CN112138297A (en) * | 2020-09-28 | 2020-12-29 | 烟台艾睿光电科技有限公司 | Fire scene rescue wearing equipment, fire scene auxiliary rescue method, device and equipment |
CN116229024A (en) * | 2022-12-14 | 2023-06-06 | 广州视享科技有限公司 | Augmented reality display device, control method, and computer-readable storage medium |
Citations (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020196202A1 (en) * | 2000-08-09 | 2002-12-26 | Bastian Mark Stanley | Method for displaying emergency first responder command, control, and safety information using augmented reality |
WO2003060830A1 (en) * | 2002-01-15 | 2003-07-24 | Information Decision Technologies, Llc | Method and system to display both visible and invisible hazards and hazard information |
CN101794432A (en) * | 2010-02-05 | 2010-08-04 | 民政部国家减灾中心 | Disaster information collection and supporting method and system |
CN101867797A (en) * | 2010-07-09 | 2010-10-20 | 公安部上海消防研究所 | Helmet type infrared detection and image transmission processing system |
US20110214082A1 (en) * | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Projection triggering through an external marker in an augmented reality eyepiece |
CN103065413A (en) * | 2012-12-13 | 2013-04-24 | 中国电子科技集团公司第十五研究所 | Method and device of acquiring fire class information |
CN103150012A (en) * | 2011-11-30 | 2013-06-12 | 微软公司 | Shared collaboration using head-mounted display |
CN103210434A (en) * | 2010-09-15 | 2013-07-17 | 大陆-特韦斯贸易合伙股份公司及两合公司 | Visual driver information and warning system for driver of motor vehicle |
CN203523884U (en) * | 2013-10-09 | 2014-04-09 | 武汉理工大学 | Fire-protection intelligent helmet with environment monitoring function |
CN104427327A (en) * | 2013-08-23 | 2015-03-18 | 索尼公司 | Image capturing device, image capturing method, and information distribution system |
CN104606836A (en) * | 2014-12-29 | 2015-05-13 | 中国人民解放军信息工程大学 | Fire disaster rescue system and information processing method |
CN104660995A (en) * | 2015-02-11 | 2015-05-27 | 尼森科技(湖北)有限公司 | Disaster relief visual system |
CN104731894A (en) * | 2015-03-18 | 2015-06-24 | 百度在线网络技术(北京)有限公司 | Thermodynamic diagram display method and device |
CN105077865A (en) * | 2015-08-31 | 2015-11-25 | 郑州捷利工业设备有限公司 | Environment display system applicable to fire fighting helmet |
CN105228100A (en) * | 2015-08-31 | 2016-01-06 | 湖南汇博电子技术有限公司 | Rescue system and method |
CN105574797A (en) * | 2014-10-09 | 2016-05-11 | 东北大学 | Fireman-oriented synergetic head-wearing information integration device and method |
CN105632049A (en) * | 2014-11-06 | 2016-06-01 | 北京三星通信技术研究有限公司 | Pre-warning method and device based on wearable device |
KR20160070503A (en) * | 2014-12-10 | 2016-06-20 | 한국정보공학 주식회사 | Method and system for providing a position of co-operated firemen by using a wireless communication, method for displaying a position of co-operated firefighter, and fire hat for performing the method |
CN105701964A (en) * | 2014-12-15 | 2016-06-22 | 西门子公司 | Dynamic virtual fencing for a hazardous environment |
KR101656873B1 (en) * | 2015-05-29 | 2016-09-12 | 한국교통대학교산학협력단 | Helmet for fire protection using recognizing three-dimension object |
CN106101164A (en) * | 2015-04-30 | 2016-11-09 | 许耿祯 | Building rescue information system |
CN106210627A (en) * | 2016-07-04 | 2016-12-07 | 广东天米教育科技有限公司 | A kind of unmanned plane fire dispatch system |
CN106530576A (en) * | 2016-12-29 | 2017-03-22 | 广东小天才科技有限公司 | Fire disaster dangerous case early warning method and device based on mobile equipment |
CN106730556A (en) * | 2016-12-16 | 2017-05-31 | 三汽车制造有限公司 | Fire-fighting equipment and its control system and method |
US20170161004A1 (en) * | 2015-12-02 | 2017-06-08 | Samsung Electronics Co., Ltd. | Method and apparatus for providing search information |
CN106820394A (en) * | 2017-02-24 | 2017-06-13 | 深圳凯达通光电科技有限公司 | The crash helmet that a kind of fire rescue personnel wear |
CN106991681A (en) * | 2017-04-11 | 2017-07-28 | 福州大学 | A kind of fire boundary vector information extract real-time and method for visualizing and system |
CN107261374A (en) * | 2017-06-30 | 2017-10-20 | 魏涵潇 | A kind of multi-functional wear-type intelligent fire-pretection system and its control method |
CN107506886A (en) * | 2017-06-28 | 2017-12-22 | 湖南统科技有限公司 | Search and rescue the distribution method and system of resource |
CN208921868U (en) * | 2018-01-18 | 2019-05-31 | 上海瀚莅电子科技有限公司 | Helmet |
Also Published As
Publication number | Publication date |
---|---|
CN108169761B (en) | 2024-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108169761B (en) | Fire scene task determination method, device and system and computer readable storage medium | |
US11282248B2 (en) | Information display by overlay on an object | |
CN108458790B (en) | Fire scene danger degree and fire source point determining method and device and head-mounted equipment | |
CN108378450B (en) | Method for realizing intelligent fire-fighting helmet for sensing explosion accident and predicting risk | |
JP5553405B2 (en) | Augmented reality-based system and method for indicating the location of personnel and sensors in a closed structure and providing enhanced situational awareness | |
US20020196202A1 (en) | Method for displaying emergency first responder command, control, and safety information using augmented reality | |
CN112119396A (en) | Personal protective equipment system with augmented reality for security event detection and visualization | |
US20030210228A1 (en) | Augmented reality situational awareness system and method | |
CN108765872B (en) | Method and system for inferring environmental parameters of trapped object and intelligent wearable equipment | |
CN108279419A (en) | Fire scene environment display method, device, helmet and readable storage medium |
CN109672875A (en) | Fire-fighting and rescue intelligent helmet, fire-fighting and rescue method and related products |
TWI755834B (en) | Visual image location system | |
EP1466300A1 (en) | Method and system to display both visible and invisible hazards and hazard information | |
CN208921868U (en) | Helmet | |
Arregui et al. | An augmented reality framework for first responders: the RESPOND-a project approach | |
CN109717537A (en) | Fire-fighting and rescue intelligent helmet, fire-fighting and rescue method and related products |
Streefkerk et al. | Evaluating a multimodal interface for firefighting rescue tasks | |
CN109717536A (en) | Fire-fighting and rescue intelligent helmet, fire-fighting and rescue method and related products |
CN109284008A (en) | Split-type VR system |
Wilson et al. | Head-mounted display efficacy study to aid first responder indoor navigation | |
CN110536248A (en) | Fire-fighting data processing method, device, readable storage medium and equipment |
KR102700093B1 (en) | Disaster Evacuation Training System with Augmented Reality | |
Park et al. | SAFT: Firefighting Environment recognition improvement for firefighters | |
US20240184507A1 (en) | Remote Sensing System and Method for Article of Personal Protective Equipment | |
CN108261188A (en) | Control method, device and system for life detection equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||