CN108279419A - Fire scene environment display method and device, head-mounted device, and computer-readable storage medium - Google Patents
Fire scene environment display method and device, head-mounted device, and computer-readable storage medium
- Publication number
- CN108279419A CN108279419A CN201810050656.2A CN201810050656A CN108279419A CN 108279419 A CN108279419 A CN 108279419A CN 201810050656 A CN201810050656 A CN 201810050656A CN 108279419 A CN108279419 A CN 108279419A
- Authority
- CN
- China
- Prior art keywords
- entity
- user
- information
- category
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Helmets And Other Head Coverings (AREA)
Abstract
The embodiments of the present disclosure disclose a fire scene environment display method and device, a head-mounted device, and a computer-readable storage medium. The method runs on a head-mounted device and includes: acquiring environmental information of the fire scene environment where a user wearing the head-mounted device is located; identifying the category of an entity in the fire scene environment according to the environmental information; and performing preset processing on the entity displayed on the transmissive display unit of the head-mounted device according to the category of the entity. The preset processing includes at least one of enhanced display, modified display, and masked display of the image of the entity shown on the transmissive display unit. The disclosure can automatically identify the categories of entities the user pays attention to in the fire scene environment, display the entities of interest and related entities more prominently, and attenuate the display of interfering entities, so that a user such as a firefighter can quickly and accurately observe important entities in the fire scene environment, which helps the user carry out rescue tasks and further safeguards the user's personal safety.
Description
Technical Field
The disclosure relates to the technical field of intelligent identification, in particular to a fire scene environment display method and device, a head-mounted device and a computer-readable storage medium.
Background
With the rapid development of augmented reality technology and continued enterprise research and development, augmented reality has shown clear advantages and value in fields such as medicine, the military, firefighting, entertainment, and daily life, and disaster relief systems based on augmented reality technology continue to emerge.
Disclosure of Invention
The embodiment of the disclosure provides a fire scene environment display method and device, a head-mounted device and a computer readable storage medium.
In a first aspect, an embodiment of the present disclosure provides a fire scene environment display method, which is executed on a head-mounted device, and includes:
acquiring environmental information of a fire scene environment where a user wearing the head-mounted device is located;
identifying the category of the entity in the fire scene environment according to the environment information;
performing preset processing on the entity displayed on a transmissive display unit of the head-mounted device according to the category of the entity; the preset processing includes at least one of enhanced display, modified display, and masked display of the image of the entity displayed on the transmissive display unit.
Optionally, a laser radar module and a thermal imaging module are arranged on the head-mounted device, wherein acquiring the environmental information of the environment where the user wearing the head-mounted device is currently located includes:
detecting distance information in a fire scene environment where the user is located through the laser radar module, and establishing a three-dimensional scene model of the fire scene environment where the user is located;
and detecting the thermal distribution information in the fire scene environment of the user through the thermal imaging module, and superposing the thermal distribution information to the three-dimensional scene model to obtain a three-dimensional image comprising the thermal distribution information of the fire scene environment of the user.
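As a hypothetical sketch of these two acquisition steps (the coordinate convention, function names, and data layout below are illustrative assumptions, not part of the patent), lidar returns can be converted into a point-cloud scene model, and each point can then be tagged with a temperature from the thermal imaging module:

```python
import math

def lidar_point(distance, azimuth_deg, elevation_deg):
    """Convert one lidar return (range, azimuth, elevation) into an x/y/z point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (distance * math.cos(el) * math.cos(az),
            distance * math.cos(el) * math.sin(az),
            distance * math.sin(el))

def build_scene_model(lidar_returns):
    """Build a simple point-cloud 'three-dimensional scene model' from lidar returns."""
    return [lidar_point(d, az, el) for d, az, el in lidar_returns]

def overlay_thermal(points, thermal_lookup):
    """Attach a temperature (as supplied by the thermal imaging module) to each point."""
    return [(p, thermal_lookup(p)) for p in points]

# Toy example: two returns, with a constant 400-degree surface temperature.
scene = build_scene_model([(5.0, 0.0, 0.0), (3.0, 90.0, 0.0)])
thermal = overlay_thermal(scene, lambda p: 400.0)
```

A real implementation would fuse many thousands of returns per scan and use the device's calibrated sensor geometry.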
Optionally, a vital sign detection module and/or an image sensor is further disposed on the head-mounted device, wherein acquiring the environmental information of the environment where the user wearing the head-mounted device is currently located further includes:
detecting vital signs in the fire scene environment where the user is located through the vital sign detection module; and/or
detecting image information in the fire scene environment where the user is located through the image sensor.
Optionally, identifying the category of the entity in the fire scene environment according to the environment information includes:
identifying the shape contour of the entity according to the distance information, and identifying the category of the entity according to the shape contour of the entity; or,
identifying the shape contour of the entity according to the distance information detected by the laser radar module, and identifying the category of the entity in combination with the thermal distribution information, the image information, and/or the vital signs.
Optionally, identifying the category of the entity in the fire scene environment according to the environment information includes:
taking one or more of the distance information, the thermal distribution information, the image information, and the vital signs as input, and identifying the category of the entity according to a pre-trained entity recognition model.
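The patent does not disclose the recognition model itself, so the following is only a rule-based stand-in showing the multi-modal fusion interface (all names and thresholds are assumptions): a trained classifier would replace the hand-written rules.

```python
def classify_entity(distance_profile, mean_temp, has_vital_signs):
    """Hypothetical stand-in for the pre-trained entity recognition model:
    fuse multi-modal cues (lidar shape profile, thermal reading, vital signs)
    and return a coarse category label."""
    if has_vital_signs:
        return "life_body"   # a person or animal awaiting rescue
    if mean_temp is not None and mean_temp > 300.0:
        return "hazard"      # a burning or dangerously hot object
    return "object"          # an ordinary article
```

In practice `distance_profile` would feed a shape-contour feature extractor; here it is only carried through to show the expected inputs.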
Optionally, performing preset processing on the entity displayed on the transmissive display unit of the head-mounted device according to the category of the entity includes:
if the entity belongs to a first preset category, enhancing the display intensity of the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or
if the entity belongs to a second preset category, shielding the natural light reflected by the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or
if the entity belongs to a third preset category, modifying the display content of the entity when the entity is displayed through the transmissive display unit of the head-mounted device.
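The three preset-category branches above amount to a dispatch from category to display treatment. A minimal sketch (the category labels and treatment names are illustrative assumptions):

```python
def preset_processing(category):
    """Map an entity's preset category to the display treatment applied on the
    transmissive display unit (labels are illustrative, not from the patent)."""
    if category == "first":    # e.g. a trapped person: make it more visible
        return "enhance"
    if category == "second":   # e.g. smoke or another interfering entity: hide it
        return "mask"
    if category == "third":    # e.g. a hot surface: annotate or recolor it
        return "modify"
    return "passthrough"       # anything else: show the natural image unchanged
```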
Optionally, identifying the category of the entity in the fire scene environment according to the environment information further includes:
determining attention information of a user wearing the head-mounted device;
determining a first entity concerned by the user according to the attention information of the user and the environment information;
a category of the first entity is identified.
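One plausible way to realize "determine a first entity concerned by the user" is to pick the entity whose direction best aligns with the gaze direction. The dot-product heuristic below is an assumption for illustration; the patent does not specify the attention model:

```python
def first_entity_of_interest(gaze_dir, entities):
    """Return the entity whose viewing direction (2-D unit vector) best aligns
    with the user's gaze direction, using a simple dot product as the score."""
    gx, gy = gaze_dir
    return max(entities, key=lambda e: gx * e["direction"][0] + gy * e["direction"][1])

entities = [{"name": "door", "direction": (0.0, 1.0)},
            {"name": "person", "direction": (1.0, 0.0)}]
focus = first_entity_of_interest((1.0, 0.0), entities)
```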
Optionally, after determining the first entity currently noticed by the user according to the attention information of the user and the environment information, the method further includes:
determining a second entity related to the first entity;
a category of the second entity is identified.
Optionally, determining attention information of a user wearing the head-mounted device comprises:
determining attention information of the user according to the eyeball image characteristics of the user and the physiological characteristics of the user.
Optionally, after identifying the category of the entity in the fire scene environment according to the environment information, the method further includes:
determining a task currently performed by the user by analyzing the attention information of the user and the category of the first entity of interest.
Optionally, the categories of the entities are divided according to a relationship between the entities and the tasks executed by the user in the fire scene environment.
In a second aspect, an embodiment of the present disclosure provides a fire scene environment display apparatus, which operates on a head-mounted device, including:
the first acquisition module is configured to acquire a three-dimensional image of the current environment where a user wearing the head-mounted device is located and parameters of an entity in the three-dimensional image;
an identification module configured to identify a category of an entity in the three-dimensional image according to the parameter;
a processing module configured to perform preset processing on the entity displayed on a transmissive display unit of the head-mounted device according to a category of the entity; the preset processing includes one or more of enhanced display, modified display, and masked display of the image of the entity displayed on the transmissive display unit.
Optionally, a laser radar module and a thermal imaging module are arranged on the head-mounted device; wherein, the first obtaining module comprises:
the establishing submodule is configured to detect distance information in a fire scene environment where the user is located through the laser radar module and establish a three-dimensional scene model of the fire scene environment where the user is located;
the first obtaining sub-module is configured to detect thermal distribution information in a fire scene environment where the user is located through the thermal imaging module, and superimpose the thermal distribution information on the three-dimensional scene model to obtain a three-dimensional image including the thermal distribution information in the fire scene environment where the user is located.
Optionally, a vital sign detection module and/or an image sensor is further disposed on the head-mounted device, wherein the first obtaining module further includes:
a first detection submodule configured to detect vital signs in the fire scene environment where the user is located through the vital sign detection module; and/or
a second detection submodule configured to detect image information in the fire scene environment where the user is located through the image sensor.
Optionally, the identification module includes:
a first identification submodule configured to identify a shape contour of an entity according to the distance information and identify a category of the entity according to the shape contour of the entity; or,
a second identification sub-module configured to identify a shape profile of an entity from the distance information detected by the lidar module and identify a category of the entity in combination with the thermal distribution information, the image information, and/or the vital body characteristics.
Optionally, the identification module includes:
a third identification submodule configured to take one or more of the distance information, the thermal distribution information, the image information, and the vital signs as input, and to identify the category of the entity according to a pre-trained entity recognition model.
Optionally, the processing module includes:
a first display sub-module configured to, if the entity belongs to a first preset category, enhance the display intensity of the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or
a second display sub-module configured to, if the entity belongs to a second preset category, shield the natural light reflected by the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or
a third display sub-module configured to, if the entity belongs to a third preset category, modify the display content of the entity when the entity is displayed through the transmissive display unit of the head-mounted device.
Optionally, the identification module further includes:
a first determination sub-module configured to determine attention information of a user wearing the head-mounted device;
a second determination submodule configured to determine a first entity of interest to the user from the attention information of the user and the environment information;
a fourth identification submodule configured to identify a category of the first entity.
Optionally, the apparatus further includes, after the second determining sub-module:
a third determination submodule configured to determine a second entity related to the first entity;
a fifth identification submodule configured to identify a category of the second entity.
Optionally, the first determining sub-module includes:
a third determining sub-module configured to determine attention information of the user according to eyeball image characteristics of the user and physiological characteristics of the user.
Optionally, the apparatus further includes, after the identification module:
a determination module configured to determine a task currently performed by the user by analyzing the attention information of the user and the category of the first entity of interest.
Optionally, the categories of the entities are divided according to a relationship between the entities and the tasks executed by the user in the fire scene environment.
These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible design, the fire scene environment display apparatus includes a memory and a processor. The memory stores one or more computer instructions that support the apparatus in executing the fire scene environment display method of the first aspect, and the processor is configured to execute the computer instructions stored in the memory. The fire scene environment display apparatus may also include a communication interface for communicating with other devices or a communication network.
In a third aspect, embodiments of the present disclosure provide a head-mounted device, including a transmissive display unit, a memory, and a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps of the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer-readable storage medium storing the computer instructions used by the fire scene environment display apparatus, including the computer instructions for executing the fire scene environment display method of the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the embodiment of the disclosure, the environmental information in the fire scene environment is automatically acquired, the entity type in the environment is identified according to the environmental information, and then the entity type is displayed on the transparent display unit of the head-mounted device in a display enhancement, shielding display or modification display mode. By the aid of the method and the device, the entity types concerned by the user in the fire scene environment can be automatically identified, the concerned entities and the related entities are displayed in a more obvious mode, and the display of the interference entities is weakened, so that the user such as a fireman can quickly and accurately observe important entities in the fire scene environment, the user such as the fireman can be helped to execute rescue tasks, and the personal safety of the user can be further guaranteed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1 illustrates a flow chart of a method of displaying a fire scene environment according to an embodiment of the present disclosure;
FIG. 2 shows a flow chart of step S101 according to the embodiment shown in FIG. 1;
FIG. 3 shows a block diagram of a fire scene environment display device according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a head-mounted device suitable for implementing a fire scene environment display method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof may be present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Existing enhancement technology mainly takes the following forms. One approach uses physiological signals and other information to estimate the user's current cognitive load, and adaptively adjusts task-related elements and enhancement information according to that real-time cognitive load. Although this keeps the user's cognitive load and the enhancement information in dynamic balance, it can neither remove objects that interfere with the user's line of sight nor visually highlight the objects the user is focusing on. Another approach tracks changes in the user's head pose in real time through a camera during equipment operation, and renders the operated equipment in three dimensions from different viewing angles according to a three-dimensional model of the operated object. Although this helps the user observe a real-time operation scene from different angles, it cannot automatically identify the specific objects the user is attending to, nor process each object by category.
Therefore, the present disclosure provides an adaptive enhanced display scheme based on a head-mounted device: environmental information of the fire scene environment is acquired automatically, entity categories in the environment are identified from that information, and the entities are then displayed on the transmissive display unit of the head-mounted device in an augmented reality manner. This allows a user in the fire scene environment to learn about and visually check entity information in time, helps users such as firefighters carry out rescue tasks, and further ensures their personal safety.
Fig. 1 shows a flowchart of a fire scene environment display method according to an embodiment of the present disclosure. As shown in fig. 1, the fire scene environment display method includes the following steps S101 to S103:
in step S101, obtaining environmental information of a fire scene environment where a user wearing the head-mounted device is located;
in step S102, identifying a category of an entity in the fire scene environment according to the environment information;
in step S103, performing preset processing on the entity displayed on the transmissive display unit of the head-mounted device according to the category of the entity; the preset processing includes at least one of enhanced display, modified display, and masked display of the image of the entity displayed on the transmissive display unit.
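Steps S101 to S103 compose into one per-frame acquire/identify/process pipeline. The sketch below is an illustrative assumption (the function names and toy category labels are not from the patent):

```python
def display_pipeline(acquire, identify, process):
    """Compose the three steps S101-S103 into a single per-frame pipeline."""
    def run_frame():
        env = acquire()                           # S101: environmental information
        categories = identify(env)                # S102: entity categories
        return [process(c) for c in categories]   # S103: preset processing per entity
    return run_frame

# Toy wiring: one sofa and one person seen in the scene.
frame = display_pipeline(
    lambda: ["sofa", "person"],
    lambda env: ["object" if e == "sofa" else "life_body" for e in env],
    lambda c: "enhance" if c == "life_body" else "passthrough")
```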
In this embodiment, the fire scene environment display method may be implemented on a head-mounted device: an intelligent device that can be worn on the head of a human body, internally provided with a processor, a memory, a display device, and other components. The display device may be a transmissive display unit. After the head-mounted device is worn, the transmissive display unit sits directly in the wearer's field of view, so data displayed on it can be viewed without manually moving the head-mounted device. At the same time, natural light reflected from objects in the environment passes normally through the transmissive display unit, so the user can view the surroundings through it without the wearer's line of sight being affected. The transmissive display unit may also allow external light to pass through, so that the displayed image reaches the wearer's eyes together with the background light source, enabling modification and enhancement of the background image.
In this embodiment, after a user, such as a firefighter, wears the head-mounted device to enter a fire scene, the head-mounted device collects environmental information of the fire scene environment where the user is located in real time, and the collected environmental information may be used to reconstruct a fire scene environment model, identify a fire source point, an entity and the like in the fire scene environment, and determine a risk factor and the like of the fire scene environment.
In this embodiment, after the environmental information of the fire scene environment is acquired, information such as the shape contour and temperature of indoor entities may be determined from the three-dimensional fire scene model constructed from that information, and the category of each entity may then be identified with the shape contour and temperature as auxiliary conditions. Entity categories may include living bodies, such as people and animals, as well as articles. The categories may be divided based on how the entities affect the rescue task performed by users such as firefighters: for example, the difference between living bodies and objects affects the rescue method and priority of the rescue task, so they fall into different categories, and dangerous and non-dangerous objects likewise affect the rescue method and priority differently, so they too fall into different categories, and so on.
In this embodiment, after the category of an entity is identified, preset processing, such as augmented reality display preprocessing, may be applied to the display of that entity on the transmissive display unit of the head-mounted device according to its category. The preset processing may include at least one of enhanced display, modified display, and masked display of the entity image. That is, the transmissive display unit processes the entity in the fire scene environment through a preset adaptive augmented reality rendering computation and projects the processed result to the user's eyes, forming a combination of virtual and real images that achieves the intended display effect: the preset entity the user observes through the transmissive display unit is the post-processing combination of virtual and real images. For example, a real surface inside the fire scene environment observed through the transmissive display unit may carry a color matched to its temperature (a color the real surface does not actually have), and so on.
In one embodiment, enhanced display may superimpose on, and increase the display intensity of, one or more entities in the fire scene environment observed through the transmissive display unit, giving the user a stronger perception of the object. Masked display may shield the natural light of one or more observed entities, so that the user's perception of the object is weakened and the masked entity may even become invisible to the naked eye. Modified display may change the display content of one or more observed entities so that the imaged content the user sees differs from that of the real entity, rather than being masked, attenuated, or enhanced. That is, modified display covers display treatments other than masking and enhancement, such as overlaying content on the entity image, changing the display content somewhere on the entity image, deleting part of the content of the entity image, or adding display content around the entity image.
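The three treatments can be pictured as a toy per-pixel blend of the natural scene with a rendered overlay. The grey-level arithmetic below is an illustrative assumption only; a real transmissive display modulates light optically, not numerically:

```python
def composite_pixel(background, overlay, mode):
    """Blend one grey-level pixel (0-255) of natural light passing through the
    display (background) with the rendered overlay, per display treatment."""
    if mode == "enhance":                 # brighten: add the overlay on top
        return min(255, background + overlay)
    if mode == "mask":                    # block the natural light entirely
        return 0
    if mode == "modify":                  # replace the content with the overlay
        return overlay
    return background                     # passthrough: unprocessed natural light
```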
In an optional implementation manner of this embodiment, as shown in fig. 2, a laser radar module and a thermal imaging module are disposed on the head-mounted device; the step S101, namely the step of acquiring the environment information of the current environment of the user wearing the head-mounted device, further includes the following steps S201 to S202:
in step S201, detecting distance information in a fire scene environment where the user is located through the laser radar module, and establishing a three-dimensional scene model of the fire scene environment where the user is located;
in step S202, the thermal imaging module detects thermal distribution information in the fire scene environment where the user is located, and superimposes the thermal distribution information on the three-dimensional scene model, so as to obtain a three-dimensional image including the thermal distribution information in the fire scene environment where the user is located.
In this optional implementation, the lidar module and the thermal imaging module may be integrated on the head-mounted device. The lidar module scans the fire scene in real time to obtain distance information for each entity in the fire scene environment: the signal emitted by the lidar returns to the module after meeting an obstacle, and parameters such as the distance, direction, height, and even shape of each object, along with the corresponding azimuth and elevation angles, can be determined from the travel time and strength of the returned signal. The thermal imaging module detects the thermal radiation of all object surfaces in the fire scene and generates colored pictures representing the thermal distribution of each object in the scene. After the radar information detected by the lidar module and the thermal distribution information detected by the thermal imaging module are obtained, a three-dimensional scene model of the fire scene environment is established from the radar information, and the thermal distribution information is then superimposed on that model to construct a three-dimensional image carrying the thermal distribution. In this image, the surface of each entity is rendered in a different color representing a different temperature, so when a user such as a firefighter views the fire scene environment through the transmissive display unit, the entity surfaces seen through the unit are overlaid with colors representing their temperatures, letting the user view the entities in the fire scene more clearly and intuitively.
For example, when the intelligent helmet scans a burning square table in a room, the three-dimensional scene model contains a model of the square table, including the table top and the table legs, and the temperature of the burning part of the table is displayed on each surface of the table model according to the thermal distribution information detected by the thermal imaging module. Specifically, the coordinates in the three-dimensional scene model established by the lidar may be aligned with the coordinates in the thermal distribution map generated by the thermal imaging module, so that the thermal distribution map can be accurately mapped onto the corresponding regions of the three-dimensional scene model.
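As a concrete illustration of this coordinate alignment, the following minimal sketch projects a 3D point from the scene model into the thermal image using a pinhole camera model and samples the temperature there; the camera intrinsics (`fx`, `fy`, `cx`, `cy`) and image size are illustrative assumptions, not values from this disclosure.

```python
def project_to_thermal(point, fx=500.0, fy=500.0, cx=160.0, cy=120.0):
    """Project a 3D point (x, y, z) in the thermal camera frame to a pixel (u, v).
    Intrinsics are hypothetical placeholders for a real calibration."""
    x, y, z = point
    if z <= 0:
        return None  # point is behind the thermal camera
    return (fx * x / z + cx, fy * y / z + cy)

def sample_temperature(thermal_map, pixel):
    """Look up the temperature at the nearest pixel of the thermal map."""
    if pixel is None:
        return None
    u, v = int(round(pixel[0])), int(round(pixel[1]))
    rows, cols = len(thermal_map), len(thermal_map[0])
    if 0 <= v < rows and 0 <= u < cols:
        return thermal_map[v][u]
    return None

# A point straight ahead of the camera lands on the principal point,
# so it samples the hot spot placed at the image centre.
thermal_map = [[20.0] * 320 for _ in range(240)]
thermal_map[120][160] = 300.0  # hot spot (degrees Celsius, illustrative)
temp = sample_temperature(thermal_map, project_to_thermal((0.0, 0.0, 2.0)))
```

In a real system the lidar-to-camera extrinsic transform would be applied before this projection; the sketch assumes the point is already expressed in the thermal camera frame.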
In an optional implementation manner of this embodiment, a vital sign detection module and/or an image sensor are further disposed on the head-mounted device; the step S101, namely, the step of obtaining the environmental information of the current environment where the user wearing the head-mounted device is located, further includes the following steps:
detecting vital signs in the fire scene environment where the user is located through the vital sign detection module; and/or,
and detecting image information in the fire scene environment of the user through the image sensor.
In this optional implementation, in order to find trapped people in the fire scene environment, the vital sign detection module arranged on the head-mounted device can be used to detect people to be rescued at the fire scene. The vital sign detection module can be an infrared heat detection module, or another detection module for detecting human physiological features, such as equipment for detecting the human heartbeat, pulse, and the like. If a person is trapped in the fire scene environment, the vital sign detection module can detect the corresponding vital signs, and the position information and the like of the person to be rescued can further be determined from the detected vital signs. In addition, an image sensor can be arranged on the head-mounted device to further acquire image information of the fire scene environment, and this image information can be combined with the distance information detected by the lidar module to form a three-dimensional image of the fire scene environment.
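One simple way to combine a vital sign detection with the lidar range, sketched below under a simplifying flat 2D geometry assumption (the bearing and range values are illustrative, not from this disclosure), is to place the detected person at the lidar range measured along the bearing reported by the vital sign detector:

```python
import math

def locate_life_sign(bearing_deg, lidar_range_m):
    """Estimate a trapped person's 2D position (x, y) in the helmet frame
    from the bearing at which a vital sign was detected and the lidar
    range measured along that bearing. Geometry is simplified to a plane."""
    theta = math.radians(bearing_deg)
    return (lidar_range_m * math.cos(theta), lidar_range_m * math.sin(theta))

# A vital sign detected straight ahead at 5 m lies at (5, 0) in the helmet frame.
pos = locate_life_sign(0.0, 5.0)
```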
In an optional implementation manner of this embodiment, the step S102, namely the step of identifying the category of the entity in the fire scene environment according to the environment information, further includes the following steps:
identifying the shape contour of the entity according to the distance information, and identifying the category of the entity according to the shape contour of the entity; or,
and identifying the shape contour of the entity according to the distance information detected by the lidar module, and identifying the category of the entity in combination with the thermal distribution information, the image information, and/or the vital signs.
In this optional implementation manner, the shape contour of the entity may be obtained by reconstructing an entity model from the distance information of each entity in the fire scene environment detected by the lidar module, and the category of the entity may then be identified from that shape contour. On the basis of the shape contour, the category of the entity can also be identified in combination with the surface temperature of the entity given by the thermal distribution information, the image acquired by the image sensor, and/or the detected vital signs.
In addition, given the complexity of a fire scene, a single sensor may not provide sufficient information to complete the classification task. For example, lighting conditions and smoke interference may render the image sensor unusable, while the lidar is still able to obtain sufficient range information. Therefore, the embodiment of the disclosure improves the accuracy of entity category identification by arranging multiple sensors, such as a lidar module, a thermal imaging module, a vital sign detection module, and/or an image sensor, and by fusing the data of these sensors for analysis and identification. For example, a classification model may be trained in advance, and the sensor data may be input into the classification model to obtain accurate classifications of the different entities in the fire scene environment; such a model can perform contour fitting on the point cloud obtained by the lidar module and thereby identify a wall. The vital sign detection module can detect vital sign signals and fuse them with the distance information detected by the lidar module to obtain the position of the living body, and the like.
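The fall-back behavior described above can be sketched with a simple score-level fusion: each sensor contributes per-category scores, and a sensor that produced no reading (for example, an image sensor blinded by smoke) is simply skipped. The sensor names, categories, and scores below are illustrative assumptions, not part of the disclosure.

```python
def fuse_scores(sensor_scores):
    """Average per-category scores over the sensors that produced a reading.
    A value of None marks a sensor unavailable in the current scene."""
    totals, counts = {}, {}
    for scores in sensor_scores.values():
        if scores is None:
            continue  # e.g. image sensor blinded by smoke
        for label, score in scores.items():
            totals[label] = totals.get(label, 0.0) + score
            counts[label] = counts.get(label, 0) + 1
    return {label: totals[label] / counts[label] for label in totals}

def classify(sensor_scores):
    """Pick the category with the highest fused score."""
    fused = fuse_scores(sensor_scores)
    return max(fused, key=fused.get)

# With the image sensor unavailable, lidar and vital-sign data still
# classify the entity as a person rather than a wall.
label = classify({
    "lidar":      {"wall": 0.3, "person": 0.6},
    "vital_sign": {"wall": 0.0, "person": 0.9},
    "image":      None,
})
```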
In an optional implementation manner of this embodiment, the step S102, namely the step of identifying the category of the entity in the fire scene environment according to the environment information, further includes the following steps:
and taking one or more of the distance information, the thermal distribution information, the image information, and the vital signs as input, and identifying the category of the entity according to a pre-trained entity identification model.
In this optional implementation, the information acquired by the various sensors, namely one or more of the distance information detected by the lidar module, the thermal distribution information detected by the thermal imaging module, the image information detected by the image sensor, and the vital signs detected by the vital sign detection module, is input into a pre-trained entity recognition model, and the model outputs the entities in the fire scene environment and their categories. In this implementation, an entity recognition model obtained by a machine learning algorithm classifies the environmental information to yield the entities, entity categories, and the like in the fire scene environment.
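The multi-sensor input to such a model can be assembled as a single fixed-length feature vector. The sketch below (modality names and dimensions are illustrative assumptions, not from the disclosure) zero-fills any modality that was not captured and prefixes each modality with a validity flag, so the model can distinguish missing data from genuine zeros:

```python
# Feature dimensions per modality -- illustrative placeholders.
MODALITY_DIMS = {"distance": 3, "thermal": 2, "image": 4, "vital": 2}

def build_feature_vector(readings):
    """Concatenate per-modality readings into one fixed-length vector,
    prefixing each modality with a 1.0/0.0 validity flag."""
    vector = []
    for name, dim in MODALITY_DIMS.items():
        values = readings.get(name)
        if values is None:
            vector.append(0.0)          # modality absent
            vector.extend([0.0] * dim)  # zero-filled placeholder block
        else:
            vector.append(1.0)          # modality present
            vector.extend(values)
    return vector

# Only the lidar and the vital sign detector produced readings here.
vec = build_feature_vector({"distance": [1.2, 0.4, 2.0], "vital": [72.0, 0.8]})
```

The resulting vector always has the same length, so it can be fed directly to a pre-trained classifier regardless of which sensors were usable in the scene.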
In an optional implementation manner of this embodiment, the step of performing preset processing on the entity displayed on the transmissive display unit of the head-mounted device according to the category of the entity further includes the following steps:
if the entity belongs to a first preset category, enhancing the display intensity of the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or,
if the entity belongs to a second preset category, shielding natural light reflected by the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or,
if the entity belongs to a third preset category, modifying the display content of the entity when the entity is displayed through the transmissive display unit of the head-mounted device.
In this optional implementation, after the category of the entity is determined, different preset processing may be applied to the entity according to its category when the user views it through the transmissive display unit, so that the image of the entity observed by the user differs from its real appearance.
The preset processing mode may include at least one of the following:
when an entity of the first preset category in the fire scene environment is identified, the natural light reflected by the entity is enhanced, so that when the user views the entity through the transmissive display unit, its image is more obvious and clear in the field of view. For example, enhancing the display brightness of a person or object to be rescued makes it more visible to a user such as a firefighter.
When an entity of the second preset category in the fire scene environment is identified, the natural light reflected by the entity is shielded, so that the user cannot perceive the entity, or perceives it only weakly, when looking through the transmissive display unit. For example, to help a user such as a firefighter spot potential living people, the images of flames, smoke, obstructions, and the like may be removed or weakened.
When an entity of the third preset category in the fire scene environment is identified, the image content of the entity on the transmissive display unit is modified, for example by superimposing a virtual image, a mark, or a color on the surface of the entity, so that a user such as a firefighter can learn other potential information about the entity, such as its degree of danger, its category, and the like.
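The three preset processing modes above amount to a dispatch on the entity's category. The sketch below makes that explicit; the category names and their assignment to the three preset categories are illustrative assumptions, not definitions from this disclosure.

```python
# Illustrative groupings of entity categories into the three preset categories.
RESCUE_TARGETS = {"person", "animal"}           # first preset category
INTERFERENCE   = {"smoke", "flame", "debris"}   # second preset category
HAZARDS        = {"gas_cylinder", "chemical"}   # third preset category

def preset_processing(category):
    """Map an entity category to the processing applied on the
    transmissive display unit."""
    if category in RESCUE_TARGETS:
        return "enhance"      # brighten so the firefighter sees it clearly
    if category in INTERFERENCE:
        return "mask"         # shield the natural light it reflects
    if category in HAZARDS:
        return "modify"       # overlay a hazard marker or color
    return "passthrough"      # render unchanged
```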
In an optional implementation manner of this embodiment, after the step S102, that is, after the step of identifying the category of the entity in the fire scene environment according to the environment information, the method further includes the following steps:
determining attention information of a user wearing the head-mounted device;
determining a first entity concerned by the user according to the attention information of the user and the environment information;
a category of the first entity is identified.
In the above alternative implementation, the category of the first entity that the user is focusing on is identified according to the attention information of the user. The attention information relates to the entity the user is focusing on: when the attention of the user is concentrated on a certain position in the fire scene environment, the entity at that position is likely the target of the task the user is performing, so this approach can identify whether an entity in the fire scene environment belongs to the target category of the user's task.
In an optional implementation manner of this embodiment, after determining, according to the attention of the user and the environment information, a first entity that the user is currently paying attention to, the method further includes:
determining a second entity related to the first entity;
a category of the second entity is identified.
In this alternative implementation, once the entity concerned by the user has been identified from the user's attention information, a second entity related to that entity may also be identified. For example, to facilitate observation by a firefighter, the non-concerned entities around the entity the firefighter is focusing on can be identified and given shielding-display processing, thereby eliminating interference factors in the firefighter's field of view, such as smoke, flames, blocked passages, and the like.
In an optional implementation manner of this embodiment, determining attention information of a user wearing the head-mounted device includes:
and determining attention information of the user according to the eyeball image characteristics of the user and the physiological characteristics of the user.
In this optional implementation, the attention information may be determined from information such as the eyeball image features and the physiological features of the user, for example by determining the user's line of sight, eyeball angle, physiological features, and the like. The physiological features include, for example, the user's heart rate, pulse rate, electroencephalogram information, voice features, and other characteristics that represent the user's current psychological expectation and task focus. For example, when a firefighter pays attention to an entity to be rescued or a fire source to be extinguished in the fire scene environment, not only are the eyeball angle, line of sight, and the like fixed on the entity, but the heartbeat, pulse, and the like may also accelerate; this can be distinguished from the situation in which the user's gaze merely rests on an entity while staring blankly.
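A minimal sketch of combining these two signals is given below; the dwell-time and heart-rate thresholds are assumed values for illustration only. An entity counts as attended to only when the gaze dwells on it long enough and physiological arousal rises, which separates genuine focus from an idle stare:

```python
def is_attending(gaze_dwell_s, heart_rate_bpm,
                 min_dwell_s=1.5, resting_bpm=80.0):
    """Combine gaze fixation and a physiological signal into a single
    attention decision. Thresholds are illustrative assumptions."""
    fixated = gaze_dwell_s >= min_dwell_s      # eyes stayed on the entity
    aroused = heart_rate_bpm > resting_bpm     # heart rate above baseline
    return fixated and aroused
```

A long fixation with a resting heart rate (staring blankly) and a brief glance with an elevated heart rate both return `False`; only the combination signals attention.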
In an optional implementation manner of this embodiment, after the step S102, that is, after the step of identifying the category of the entity in the fire scene environment according to the environment information, the method further includes:
determining a task currently performed by the user by analyzing the attention information of the user and the category of the first entity of interest.
In this alternative implementation, the specific intention of the user is determined by analyzing and learning from the attention information of the user once the entity concerned by the user has been identified (the analysis may use image processing, fuzzy recognition, big data analysis, machine learning, and the like); that is, the potential purpose of the user is identified from the attention behavior. After the entity concerned by the user has been identified, its specific position, temperature information, and the like in the scene are obtained by combining the three-dimensional image of the fire scene environment, and classification and identification are then carried out according to the relationship between all the entity information in the three-dimensional image and the entity concerned by the user. For example, the entity concerned by the user may be a trapped person, but scene and thermal information related to the trapped person may directly affect the rescue, so joint classification processing may be performed on the related entities to form local scene recognition, and the recognition result is displayed through the transmissive display unit. For instance, potential dangerous goods around the trapped person can be display-enhanced to remind the firefighter to handle them, and the smoke and temperature around the trapped person can likewise be display-enhanced to remind the firefighter to evacuate the trapped person as soon as possible. The entity information in the scene can also be classified according to the specific relationship between the task performed by the user and the entities in the scene.
For example, in a fire scene, the entities concerned by the user are automatically identified and analyzed, and the attention is accumulated over multiple observations; if the entities the user mainly pays attention to are determined to be trapped people, it is judged that the user is performing a rescue task in the fire, and all entities in the scene can be automatically classified and display-enhanced accordingly, for example the trapped people can be display-enhanced according to the task being performed and their biological features. In another scenario, it may be identified that the current user's task is to search for and extinguish a fire, in which case the high-temperature areas may be display-enhanced to help the firefighter find the fire source quickly.
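The cumulative-attention idea can be sketched as counting the categories of entities the user has attended to and inferring a task when one category dominates. The category names, the mapping from dominant category to task, and the dominance threshold below are all assumptions for illustration, not part of the disclosure.

```python
from collections import Counter

# Illustrative mapping from a dominant attended category to an inferred task.
TASK_BY_CATEGORY = {"trapped_person": "rescue", "fire_source": "extinguish"}

def infer_task(attended_categories, min_share=0.5):
    """Infer the user's current task from accumulated attention targets.
    Returns None when no category dominates or none maps to a task."""
    if not attended_categories:
        return None
    counts = Counter(attended_categories)
    category, n = counts.most_common(1)[0]
    if n / len(attended_categories) >= min_share:
        return TASK_BY_CATEGORY.get(category)
    return None

# Three of four attention events targeted trapped people -> rescue task.
task = infer_task(["trapped_person", "smoke", "trapped_person", "trapped_person"])
```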
In an optional implementation manner of this embodiment, the categories of the entities are divided according to a relationship between the entities and the tasks executed by the user in the fire scene environment.
In this alternative implementation, the categories of entities may be divided according to their relationship to the tasks performed by a user, such as a firefighter, in the fire scene environment, for example entities related to the rescue task and entities unrelated to it; entities unrelated to the rescue task may further be divided into entities indirectly related to the rescue task and entities completely unrelated to it. For example, one category of entities comprises fire-fighting interference factors, such as smoke, flames, blocked passages, and the like; another comprises fire rescue targets, such as people, animals, and the like; and another comprises potentially dangerous factors, such as combustibles, dangerous chemicals, and the like.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 3 is a block diagram illustrating a fire scene environment display device according to an embodiment of the present disclosure, which may be implemented as part or all of an electronic device by software, hardware, or a combination of both. As shown in fig. 3, the fire scene environment display device includes a first obtaining module 301, an identifying module 302, and a processing module 303:
a first obtaining module 301 configured to obtain a three-dimensional image of an environment where a user wearing the head-mounted device is currently located and parameters of entities in the three-dimensional image;
an identification module 302 configured to identify a category of an entity in the three-dimensional image according to the parameter;
a processing module 303 configured to perform preset processing on the entity displayed on the transmissive display unit of the head-mounted device according to the category of the entity; the preset processing includes one or more of enhanced display, modified display, and masked display of the image of the entity displayed on the transmissive display unit.
In an optional implementation manner of this embodiment, a laser radar module and a thermal imaging module are disposed on the head-mounted device; wherein, the first obtaining module comprises:
the establishing submodule is configured to detect distance information in a fire scene environment where the user is located through the laser radar module and establish a three-dimensional scene model of the fire scene environment where the user is located;
the first obtaining sub-module is configured to detect thermal distribution information in a fire scene environment where the user is located through the thermal imaging module, and superimpose the thermal distribution information on the three-dimensional scene model to obtain a three-dimensional image including the thermal distribution information in the fire scene environment where the user is located.
In an optional implementation manner of this embodiment, a vital sign detection module and/or an image sensor are further disposed on the head-mounted device; wherein, the first obtaining module further comprises:
the first detection submodule is configured to detect vital signs in the fire scene environment where the user is located through the vital sign detection module; and/or,
and the second detection submodule is configured to detect image information in the fire scene environment where the user is located through the image sensor.
In an optional implementation manner of this embodiment, the first identifying module includes:
a first identification submodule configured to identify a shape contour of an entity according to the distance information and identify a category of the entity according to the shape contour of the entity; or,
a second identification sub-module configured to identify a shape contour of an entity from the distance information detected by the lidar module and identify a category of the entity in combination with the thermal distribution information, the image information, and/or the vital signs.
In an optional implementation manner of this embodiment, the first identifying module includes:
and the third identification submodule is configured to take one or more of the distance information, the thermal distribution information, the image information, and the vital signs as input, and identify the category of the entity according to a pre-trained entity identification model.
In an optional implementation manner of this embodiment, the processing module includes:
a first display sub-module configured to, if the entity belongs to a first preset category, enhance a display intensity of the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or,
a second display sub-module configured to, if the entity belongs to a second preset category, shield natural light reflected by the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or,
a third display sub-module configured to modify display content of the entity when the entity is displayed through a transmissive display unit of the head-mounted device if the entity belongs to a third preset category.
In an optional implementation manner of this embodiment, the identification module further includes:
a first determination sub-module configured to determine attention information of a user wearing the head-mounted device;
a second determination submodule configured to determine a first entity of interest to the user from the attention information of the user and the environment information;
a fourth identification submodule configured to identify a category of the first entity.
In an optional implementation manner of this embodiment, in addition to the second determination submodule, the identification module further includes:
a third determination submodule configured to determine a second entity related to the first entity;
a fifth identification submodule configured to identify a category of the second entity.
In an optional implementation manner of this embodiment, the first determining sub-module includes:
a third determining sub-module configured to determine attention information of the user according to eyeball image characteristics of the user and physiological characteristics of the user.
In an optional implementation manner of this embodiment, the device further includes:
a determination module configured to determine a task currently performed by the user by analyzing the attention information of the user and the category of the first entity of interest.
In an optional implementation manner of this embodiment, the categories of the entities are divided according to a relationship between the entities and the tasks executed by the user in the fire scene environment.
The above fire scene environment display device corresponds to the fire scene environment display method described in the embodiment and the related parts shown in fig. 1, and specific details can be referred to the description of the fire scene environment display method described in the embodiment and the related parts shown in fig. 1, which are not described herein again.
Fig. 4 is a schematic structural diagram of a head-mounted device suitable for implementing a fire scene environment display method according to an embodiment of the present disclosure.
As shown in fig. 4, the head-mounted device 400 includes a Central Processing Unit (CPU) 401 that can execute various processes in the embodiment shown in fig. 1 described above according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage section 408 into a Random Access Memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the head-mounted device 400. The CPU 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including a display device such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), or a transmissive display unit, and a speaker; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card, a modem, or the like. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as necessary, so that a computer program read out from it can be installed into the storage section 408.
In particular, according to embodiments of the present disclosure, the method described above with reference to fig. 1 may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the fire scene environment display method of fig. 1. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 409, and/or installed from the removable medium 411.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with features of similar function disclosed (but not limited to those disclosed) in this disclosure.
Claims (24)
1. A fire scene environment display method, wherein the method is executed on a head-mounted device, and comprises:
acquiring environmental information of a fire scene environment where a user wearing the head-mounted equipment is located;
identifying the category of the entity in the fire scene environment according to the environment information;
performing preset processing on the entity displayed on a transmissive display unit of the head-mounted device according to the category of the entity; the preset processing includes at least one of enhancement display, modification display, and mask display of the image of the entity displayed on the transmissive display unit.
2. The display method according to claim 1, wherein a laser radar module and a thermal imaging module are provided on the head-mounted device, and wherein acquiring the environmental information of the fire scene environment where the user wearing the head-mounted device is located includes:
detecting distance information in a fire scene environment where the user is located through the laser radar module, and establishing a three-dimensional scene model of the fire scene environment where the user is located;
and detecting the thermal distribution information in the fire scene environment of the user through the thermal imaging module, and superposing the thermal distribution information to the three-dimensional scene model to obtain a three-dimensional image comprising the thermal distribution information of the fire scene environment of the user.
3. The display method according to claim 2, wherein a vital sign detection module and/or an image sensor is further provided on the head-mounted device, and wherein acquiring the environmental information of the environment where the user wearing the head-mounted device is currently located further includes:
detecting vital signs in the fire scene environment where the user is located through the vital sign detection module; and/or,
and detecting image information in the fire scene environment of the user through the image sensor.
4. The display method of claim 2 or 3, wherein identifying the category of the entity in the fire scene environment according to the environment information comprises:
identifying the shape contour of the entity according to the distance information, and identifying the category of the entity according to the shape contour of the entity; or,
and identifying the shape contour of the entity according to the distance information, and identifying the category of the entity in combination with the thermal distribution information, the image information, and/or the vital signs.
5. The display method of any one of claims 3-4, wherein identifying the category of the entity in the fire scene environment from the environmental information comprises:
and taking one or more of the distance information, the thermal distribution information, the image information, and the vital signs as input, and identifying the category of the entity according to a pre-trained entity identification model.
6. The display method according to claim 1, wherein the preset processing of the entity displayed on the transmissive display unit of the head-mounted device according to the category of the entity includes:
if the entity belongs to a first preset category, enhancing the display intensity of the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or,
if the entity belongs to a second preset category, shielding natural light reflected by the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or,
if the entity belongs to a third preset category, modifying the display content of the entity when the entity is displayed through the transmissive display unit of the head-mounted device.
7. The display method of claim 1, wherein identifying the category of the entity in the fire scene environment based on the environmental information further comprises:
determining attention information of a user wearing the head-mounted device;
determining a first entity concerned by the user according to the attention information of the user and the environment information;
a category of the first entity is identified.
8. The display method according to claim 7, wherein after determining the first entity focused on by the user according to the attention information of the user and the environment information, further comprising:
determining a second entity related to the first entity;
identifying a category of the second entity.
9. The display method of claim 7, wherein determining attention information of a user wearing the head-mounted device comprises:
determining the attention information of the user according to eyeball image features of the user and physiological features of the user.
10. The display method of claim 7, wherein after identifying the category of the entity in the fire scene environment according to the environment information, the method further comprises:
determining a task currently performed by the user by analyzing the attention information of the user and the category of the first entity of interest.
11. The display method of claim 1, wherein the categories of the entities are divided according to the relationship between the entities and the tasks performed by the user in the fire scene environment.
12. A fire scene environment display device, wherein the device runs on a head-mounted device, the device comprising:
a first acquisition module configured to acquire a three-dimensional image of the current environment where a user wearing the head-mounted device is located and parameters of an entity in the three-dimensional image;
an identification module configured to identify a category of an entity in the three-dimensional image according to the parameter;
a processing module configured to perform preset processing on the entity displayed on a transmissive display unit of the head-mounted device according to a category of the entity; the preset processing includes one or more of enhanced display, modified display, and masked display of the image of the entity displayed on the transmissive display unit.
13. The display device according to claim 12, wherein a lidar module and a thermal imaging module are disposed on the head-mounted device; and the first acquisition module comprises:
an establishing submodule configured to detect distance information in a fire scene environment where the user is located through the lidar module and establish a three-dimensional scene model of the fire scene environment;
a first obtaining submodule configured to detect thermal distribution information in the fire scene environment through the thermal imaging module, and superimpose the thermal distribution information on the three-dimensional scene model to obtain a three-dimensional image including the thermal distribution information of the fire scene environment where the user is located.
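The two submodules of claim 13 amount to annotating lidar geometry with thermal readings. A minimal sketch, assuming the thermal image has already been registered to the point cloud so each point can be looked up (the function names are hypothetical):

```python
import numpy as np

def build_thermal_scene(points_xyz, thermal_lookup):
    """Fuse the lidar scene model with thermal-imaging data: keep the
    (x, y, z) geometry and append a temperature channel per point."""
    pts = np.asarray(points_xyz, dtype=float)          # (N, 3) from lidar
    temps = np.array([thermal_lookup(x, y, z) for x, y, z in pts])
    return np.column_stack([pts, temps])               # (N, 4): x, y, z, T

# Toy registration: temperature rises with depth into the room.
lookup = lambda x, y, z: 20.0 + 10.0 * z
scene = build_thermal_scene([[0, 0, 1], [0, 0, 2]], lookup)
```

Registering the thermal camera to the lidar frame (extrinsic calibration) is the hard part in practice and is abstracted away here into `thermal_lookup`.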
14. The display device according to claim 13, wherein a vital sign detection module and/or an image sensor is further disposed on the head-mounted device; and the first acquisition module further comprises:
a first detection submodule configured to detect vital signs in the fire scene environment where the user is located through the vital sign detection module; and/or,
a second detection submodule configured to detect image information in the fire scene environment where the user is located through the image sensor.
15. The display device according to claim 13 or 14, wherein the identification module comprises:
a first identification submodule configured to identify a shape contour of an entity according to the distance information and identify the category of the entity according to the shape contour; or,
a second identification submodule configured to identify a shape contour of an entity according to the distance information and identify the category of the entity in combination with the thermal distribution information, the image information and/or the vital signs.
16. The display device according to claim 14 or 15, wherein the identification module comprises:
a third identification submodule configured to take one or more of the distance information, the thermal distribution information, the image information and the vital signs as input, and identify the category of the entity according to a pre-trained entity recognition model.
17. The display device according to claim 12, wherein the processing module comprises:
a first display sub-module configured to, if the entity belongs to a first preset category, enhance the display intensity of the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or,
a second display sub-module configured to, if the entity belongs to a second preset category, shield natural light reflected by the entity when the entity is displayed through the transmissive display unit of the head-mounted device; and/or,
a third display sub-module configured to modify display content of the entity when the entity is displayed through a transmissive display unit of the head-mounted device if the entity belongs to a third preset category.
18. The display device of claim 12, wherein the identification module further comprises:
a first determination sub-module configured to determine attention information of a user wearing the head-mounted device;
a second determination submodule configured to determine a first entity of interest to the user from the attention information of the user and the environment information;
a fourth identification submodule configured to identify a category of the first entity.
19. The display device of claim 18, wherein the identification module further comprises:
a third determination submodule configured to determine a second entity related to the first entity;
a fifth identification submodule configured to identify a category of the second entity.
20. The display device according to claim 18, wherein the first determination submodule includes:
a third determining sub-module configured to determine attention information of the user according to eyeball image characteristics of the user and physiological characteristics of the user.
21. The display device of claim 18, further comprising:
a determination module configured to determine a task currently performed by the user by analyzing the attention information of the user and the category of the first entity of interest.
22. The display device of claim 12, wherein the categories of the entities are divided according to the relationship between the entities and the tasks performed by the user in the fire scene environment.
23. A head-mounted device comprising a transmissive display unit, a memory, and a processor; wherein,
the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method steps of any one of claims 1-11.
24. A computer-readable storage medium having stored thereon computer instructions, characterized in that the computer instructions, when executed by a processor, implement the method steps of any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810050656.2A CN108279419A (en) | 2018-01-18 | 2018-01-18 | Fire scene environment display method and device, head-mounted device, and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108279419A true CN108279419A (en) | 2018-07-13 |
Family
ID=62804001
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810050656.2A Pending CN108279419A (en) | 2018-01-18 | 2018-01-18 | Fire scene environment display method and device, head-mounted device, and readable storage medium
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108279419A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109507686A (en) * | 2018-11-08 | 2019-03-22 | 歌尔科技有限公司 | Control method, head-mounted display device, electronic device, and storage medium |
CN109948536A (en) * | 2019-03-19 | 2019-06-28 | 新华三技术有限公司 | Daze recognition method and device |
CN110286714A (en) * | 2019-05-31 | 2019-09-27 | 深圳龙图腾创新设计有限公司 | Laboratory security system and method |
CN110348900A (en) * | 2019-07-03 | 2019-10-18 | 联保(北京)科技有限公司 | Data processing method, system and device |
CN113608355A (en) * | 2021-08-06 | 2021-11-05 | 湖南龙特科技有限公司 | Interactive display method based on millimeter-wave radar and infrared thermal imager |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1046411A2 (en) * | 1999-04-23 | 2000-10-25 | GB Solo Limited | A helmet |
CN103177618A (en) * | 2013-04-17 | 2013-06-26 | 中国人民解放军海军军训器材研究所 | Naval vessel fire-extinguishing training simulation system and using method thereof |
US20130278631A1 (en) * | 2010-02-28 | 2013-10-24 | Osterhout Group, Inc. | 3d positioning of augmented reality information |
CN106156751A (en) * | 2016-07-25 | 2016-11-23 | 上海肇观电子科技有限公司 | Method and device for playing audio information to a target object |
CN106843491A (en) * | 2017-02-04 | 2017-06-13 | 上海肇观电子科技有限公司 | Smart device and electronic equipment with augmented reality |
CN106909215A (en) * | 2016-12-29 | 2017-06-30 | 深圳市皓华网络通讯股份有限公司 | Three-dimensional visualization command system for firefighting operations based on accurate positioning and augmented reality |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108279419A (en) | Fire scene environment display method and device, head-mounted device, and readable storage medium | |
CN108169761B (en) | Fire scene task determination method, device and system and computer readable storage medium | |
CN108458790B (en) | Fire scene danger degree and fire source point determining method and device and head-mounted equipment | |
US20200020145A1 (en) | Information display by overlay on an object | |
US11610292B2 (en) | Cognitive load reducing platform having image edge enhancement | |
Bindemann et al. | Face, body, and center of gravity mediate person detection in natural scenes. | |
US20120242801A1 (en) | Vision Enhancement for a Vision Impaired User | |
US20080157946A1 (en) | Interactive data view and command system | |
WO2018134172A1 (en) | Augmented reality for radiation dose monitoring | |
KR102069094B1 (en) | Method for detecting space in smokes using lidar sensor | |
US11475641B2 (en) | Computer vision cameras for IR light detection | |
US10657448B2 (en) | Devices and methods to navigate in areas using a machine learning model | |
WO2020148697A2 (en) | Method and system for monitoring a person using infrared and visible light | |
CA2819216A1 (en) | A response detection system and associated methods | |
Atabaki et al. | Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting | |
NL2019927B1 (en) | A computer controlled method of and apparatus and computer program product for supporting visual clearance of physical content. | |
Streefkerk et al. | Evaluating a multimodal interface for firefighting rescue tasks | |
Eberle et al. | Visible laser dazzle | |
Doll et al. | Robust, sensor-independent target detection and recognition based on computational models of human vision | |
Gardony et al. | Aided target recognition visual design impacts on cognition in simulated augmented reality | |
Kim et al. | Is “Σ” purple or green? Bistable grapheme-color synesthesia induced by ambiguous characters | |
Fotios et al. | Measuring the impact of lighting on interpersonal judgements of pedestrians at night-time | |
McDuff et al. | Inphysible: Camouflage against video-based physiological measurement | |
JP2004113755A (en) | Visual point detection camera and visual point automatic analyzing apparatus | |
US12141349B2 (en) | Attention cues for head-mounted display (HMD) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20180713 |