CN113327329A - Indoor projection method, device and system based on three-dimensional model
- Publication number: CN113327329A
- Application number: CN202011472216.XA
- Authority: CN (China)
- Legal status: Granted
Classifications
- G06T19/006 - Mixed reality (under G06T19/00 - Manipulating 3D models or images for computer graphics; G06T - Image data processing or generation; G06 - Computing; G - Physics)
- G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The embodiment of the invention discloses an indoor projection method based on a three-dimensional model. The method comprises: acquiring scene acquisition data of a target indoor scene, and determining three-dimensional model data of the target indoor scene according to the scene acquisition data; and determining, in a preset projection material library and according to the determined three-dimensional model data, indoor three-dimensional projection material corresponding to the three-dimensional model data, the indoor three-dimensional projection material being used for projection in the target indoor scene to form an augmented reality scene. The invention thus provides a technique for projecting into an indoor scene, based on three-dimensional model data of that scene, to form an augmented reality scene; it can effectively improve the degree of fusion between the projected virtual information and the real indoor scene in the augmented reality scene, and bring users a more vivid experience.
Description
Technical Field
The invention relates to the technical field of augmented reality, in particular to an indoor projection method, device and system based on a three-dimensional model.
Background
Augmented reality technology aims to use computers and related technologies to simulate and superimpose entity information that is otherwise difficult to experience within the spatial range of the real world, applying the virtual information content effectively in the real world where it can be perceived by the human senses, thereby producing a sensory experience that goes beyond reality.
In prior-art augmented reality technology, head-mounted devices are generally used to present the simulated virtual information content together with the real picture to the user. However, head-mounted devices are inconvenient to move with, which restricts the user's movement and degrades the user experience. The idea of constructing augmented reality scenes with projection technology has therefore emerged. Existing approaches that construct augmented reality scenes by projection, however, do not take differences between indoor environments into account: they build the scene with monotonous three-dimensional projection alone, cannot blend well with the real environment, and leave the user with a poor experience in which a sense of reality is difficult to achieve.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an indoor projection method, device and system based on a three-dimensional model, which can effectively improve the degree of fusion between the virtual information projected in an augmented reality scene and the real indoor scene, and bring users a more realistic experience.
In order to solve the technical problem, a first aspect of the present invention discloses an indoor projection method based on a three-dimensional model, including:
acquiring scene acquisition data of a target indoor scene, and determining three-dimensional model data of the target indoor scene according to the scene acquisition data of the target indoor scene;
determining, in a preset projection material library and according to the determined three-dimensional model data, indoor three-dimensional projection material corresponding to the three-dimensional model data; the indoor three-dimensional projection material is used for projection in the target indoor scene to form an augmented reality scene.
As an alternative implementation, in the first aspect of the present invention, the method further includes:
determining projection position information of the three-dimensional model data;
and projecting the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form an augmented reality scene.
As an optional implementation manner, in the first aspect of the present invention, the acquiring scene acquisition data of a target indoor scene, and determining three-dimensional model data of the target indoor scene according to the scene acquisition data of the target indoor scene includes:
acquiring a plurality of scene acquisition information of a target indoor scene, and determining scene characteristic information in each scene acquisition information;
and performing a three-dimensional reconstruction operation on all the scene acquisition information based on the scene characteristic information in all the scene acquisition information to determine three-dimensional model data of the target indoor scene.
As an optional implementation manner, in the first aspect of the present invention, the determining, according to the three-dimensional model data obtained by the determining, an indoor three-dimensional projection material corresponding to the three-dimensional model data in a preset projection material library includes:
determining a plurality of sub-model data in the three-dimensional model data;
determining the sub-indoor projection material corresponding to each piece of the sub-model data;
and determining the sub-indoor projection materials corresponding to all the sub-model data as the indoor three-dimensional projection material corresponding to the three-dimensional model data.
As an optional implementation manner, in the first aspect of the present invention, the determining the sub-indoor projection material corresponding to each piece of the sub-model data includes:
for each sub-model data, acquiring outline information of the sub-model data;
determining the projection type of the sub-model data according to the contour information of the sub-model data; the projection type is used for representing the position or object of the target indoor scene corresponding to the sub-model data;
and determining the sub-indoor projection material corresponding to the sub-model data according to the projection type of the sub-model data.
As an alternative implementation manner, in the first aspect of the present invention, the determining projection position information of the three-dimensional model data includes:
for each sub-model data in the three-dimensional model data, acquiring outline information of the sub-model data;
and determining sub-projection position information corresponding to the sub-model data according to the contour information of the sub-model data, and determining the sub-projection position information corresponding to all the sub-model data as the projection position information of the three-dimensional model data.
As an optional implementation manner, in the first aspect of the present invention, the projecting the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form an augmented reality scene includes:
acquiring sub-projection position information and sub-indoor projection materials corresponding to each sub-model data in the three-dimensional model data;
and projecting the sub-indoor projection materials corresponding to all the sub-model data in the target indoor scene based on the sub-projection position information corresponding to all the sub-model data to form an augmented reality scene.
A second aspect of the invention discloses an indoor projection device based on a three-dimensional model, which comprises:
a first determining module, configured to acquire scene acquisition data of a target indoor scene and determine three-dimensional model data of the target indoor scene according to the scene acquisition data of the target indoor scene;
and a second determining module, configured to determine, in a preset projection material library and according to the determined three-dimensional model data, indoor three-dimensional projection material corresponding to the three-dimensional model data; the indoor three-dimensional projection material is used for projection in the target indoor scene to form an augmented reality scene.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further comprises:
a third determining module, configured to determine projection position information of the three-dimensional model data;
and a projection module, configured to project the indoor three-dimensional projection material in the target indoor scene, based on the projection position information of the three-dimensional model data, to form an augmented reality scene.
As an alternative implementation, in the second aspect of the present invention, the first determining module includes:
an acquiring unit, configured to acquire a plurality of pieces of scene acquisition information of a target indoor scene;
a determining unit, configured to determine scene characteristic information in each piece of scene acquisition information;
and an executing unit, configured to perform a three-dimensional reconstruction operation on all the scene acquisition information, based on the scene characteristic information in all the scene acquisition information, to determine three-dimensional model data of the target indoor scene.
As an optional implementation manner, in the second aspect of the present invention, the specific manner in which the second determining module determines, in a preset projection material library and according to the determined three-dimensional model data, the indoor three-dimensional projection material corresponding to the three-dimensional model data includes:
determining a plurality of sub-model data in the three-dimensional model data;
determining the sub-indoor projection material corresponding to each piece of the sub-model data;
and determining the sub-indoor projection materials corresponding to all the sub-model data as the indoor three-dimensional projection material corresponding to the three-dimensional model data.
As an optional implementation manner, in the second aspect of the present invention, the specific manner in which the second determining module determines the sub-indoor projection material corresponding to each piece of the sub-model data includes:
for each sub-model data, acquiring outline information of the sub-model data;
determining the projection type of the sub-model data according to the contour information of the sub-model data;
and determining the sub-indoor projection material corresponding to the sub-model data according to the projection type of the sub-model data.
As an optional implementation manner, in the second aspect of the present invention, a specific manner of determining the projection position information of the three-dimensional model data by the third determining module includes:
for each sub-model data in the three-dimensional model data, acquiring outline information of the sub-model data;
and determining sub-projection position information corresponding to the sub-model data according to the contour information of the sub-model data, and determining the sub-projection position information of all the sub-model data as the projection position information of the three-dimensional model data.
As an optional implementation manner, in the second aspect of the present invention, a specific manner of projecting, by the projection module, the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form an augmented reality scene includes:
acquiring sub-projection position information and sub-indoor projection materials corresponding to each sub-model data in the three-dimensional model data;
and projecting the sub-indoor projection materials corresponding to all the sub-model data in the target indoor scene based on the sub-projection position information corresponding to all the sub-model data to form an augmented reality scene.
A third aspect of the invention discloses another indoor projection device based on a three-dimensional model, which comprises:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute part or all of the steps of the three-dimensional model-based indoor projection method disclosed by the first aspect of the embodiment of the invention.
A fourth aspect of the embodiment of the invention discloses an indoor projection system based on a three-dimensional model, which comprises an automatically guided mobile device, an information acquisition device and a projection device; the system is used for executing part or all of the steps of the indoor projection method based on the three-dimensional model disclosed in the first aspect of the embodiment of the invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, scene acquisition data of a target indoor scene is acquired, and three-dimensional model data of the target indoor scene is determined according to the scene acquisition data; indoor three-dimensional projection material corresponding to the three-dimensional model data is then determined in a preset projection material library according to the determined three-dimensional model data, the indoor three-dimensional projection material being used for projection in the target indoor scene to form an augmented reality scene. The invention can therefore provide a technique for projecting into an indoor scene based on three-dimensional model data of that scene to form an augmented reality scene, can effectively improve the degree of fusion between the projected virtual information and the real indoor scene in the augmented reality scene, and brings users a more vivid experience.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flow chart of an indoor projection method based on a three-dimensional model according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another indoor projection method based on a three-dimensional model according to the embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an indoor projection apparatus based on a three-dimensional model according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another indoor projection apparatus based on a three-dimensional model according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of yet another indoor projection apparatus based on a three-dimensional model according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, apparatus, article, or article that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or article.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The invention discloses an indoor projection method, device and system based on a three-dimensional model, which can provide a technique for projecting into an indoor scene, based on three-dimensional model data of that scene, to form an augmented reality scene; the technique can effectively improve the degree of fusion between the projected virtual information in the augmented reality scene and the real indoor scene, and bring users a more realistic experience. Details are described below.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of an indoor projection method based on a three-dimensional model according to an embodiment of the present invention. The method described in fig. 1 may be applied to a corresponding computing terminal, a corresponding computing device, or a corresponding server, where the server may be a local server or a cloud server, and the embodiment of the present invention is not limited. As shown in fig. 1, the three-dimensional model-based indoor projection method may include the following operations:
101. Scene acquisition data of the target indoor scene is acquired, and three-dimensional model data of the target indoor scene is determined according to the scene acquisition data of the target indoor scene.
In the embodiment of the present invention, the target indoor scene may be an indoor scene of a commercial entertainment place, an indoor scene of a civil residence, or an indoor scene of a military facility, and may be an internal scene of any closed or semi-closed place, which is not limited in the present invention. Optionally, the scene acquisition data of the target indoor scene may be acquired by using a common image pickup device, or may be acquired by using a special passive three-dimensional image pickup device, such as a monocular/binocular vision camera, or an RGB-D camera, or by using a special active three-dimensional information sensing device to acquire pulse sensing data or infrared sensing data, and such a special three-dimensional image pickup device may itself acquire the scene acquisition data and implement the construction of the three-dimensional model by using a built-in or external computing device. In the embodiment of the invention, the three-dimensional model data of the target indoor scene obtained by determination can be a voxel data model, a point cloud data model or a grid data model.
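As an illustration of how scene acquisition data from an RGB-D camera could yield a point cloud data model, the following minimal Python sketch back-projects a depth image through an assumed pinhole camera model; the intrinsic parameters fx, fy, cx, cy stand in for values that would come from camera calibration, and the function name is hypothetical:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a camera-frame point cloud.

    A minimal sketch of turning RGB-D scene acquisition data into point-cloud
    model data; assumes a calibrated pinhole camera.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx          # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading
```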
102. According to the determined three-dimensional model data, indoor three-dimensional projection material corresponding to the three-dimensional model data is determined in a preset projection material library.
In the embodiment of the invention, the obtained indoor three-dimensional projection material is used for projecting in a target indoor scene to form an augmented reality scene. In the embodiment of the present invention, the preset projection material library may include a plurality of complete three-dimensional stereoscopic projection materials, that is, projection materials conforming to a closed or semi-closed complete scene, or may include a plurality of separate stereoscopic projection materials, and optionally, a part of the plurality of separate stereoscopic projection materials may be a component of a complete three-dimensional stereoscopic projection material. In the embodiment of the present invention, the preset projection material library may also include a plurality of separate planar projection materials, and optionally, a part of the planar projection materials in the plurality of planar projection materials may be a component of an integrated three-dimensional stereoscopic projection material.
Therefore, the method described by the embodiment of the invention can provide a technology for projecting the indoor scene based on the three-dimensional model data of the indoor scene to form the augmented reality scene, can effectively improve the fusion degree of the projected virtual information in the augmented reality scene and the real indoor scene, and brings more realistic experience for users.
In an optional implementation manner, the acquiring scene acquisition data of the target indoor scene in step 101, and determining three-dimensional model data of the target indoor scene according to the scene acquisition data of the target indoor scene includes:
acquiring a plurality of scene acquisition information of a target indoor scene, and determining scene characteristic information in each scene acquisition information;
and performing three-dimensional reconstruction operation on all scene acquisition information based on the scene characteristic information in all scene acquisition information to determine three-dimensional model data of the target indoor scene.
In this alternative embodiment, the plurality of pieces of scene acquisition information may be a plurality of pictures of the target indoor scene taken at a plurality of positions or from a plurality of angles, captured either by a single photographing apparatus over multiple shots or by a plurality of photographing apparatuses at the same time. Optionally, the scene characteristic information in the scene acquisition information may include one or more of pixel position information, depth information, or camera calibration information in the picture. In this optional embodiment, an existing three-dimensional reconstruction algorithm may be used to perform three-dimensional reconstruction operations such as feature point matching and three-dimensional reconstruction on the plurality of image data according to the image feature information, finally forming the three-dimensional model data of the target indoor scene. Optionally, the three-dimensional reconstruction algorithm may adopt SfM, REMODE, SVO, and similar algorithms, which are applicable to a plurality of image data acquired by a single image capturing device; or SGM, SGBM, and similar algorithms, which are applicable to a plurality of image data acquired by a binocular or multi-view vision camera device.
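As a concrete illustration of the binocular case, the following sketch uses OpenCV's SGBM implementation (one of the algorithms named above) to recover a depth map from a rectified stereo pair; the file names, calibration values and matcher parameters are purely illustrative assumptions:

```python
import cv2
import numpy as np

# Minimal sketch of binocular depth recovery with SGBM; assumes the
# stereo pair is already rectified. Parameter values are illustrative.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

fx, baseline = 700.0, 0.12   # assumed calibration: focal length (px), baseline (m)
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = fx * baseline / disparity[valid]  # Z = f * B / d
```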
In this alternative embodiment, it should be noted that the type of the scene acquisition information is not limited, so an active three-dimensional reconstruction method may also be selected. For example, a structured light method may be used to determine the three-dimensional model data: a projector projects encoded structured light onto the object to be photographed, and a camera then photographs the object. Because different parts of the photographed object differ in distance and direction relative to the camera, the size and shape of the structured light encoded pattern change accordingly. In this case the scene acquisition information is the deformed structured-light pattern acquired by the camera; scene characteristic information such as depth information is then recovered by an arithmetic unit, finally yielding the three-dimensional model data of the target indoor scene. As another example, a TOF (time-of-flight) laser method may be used to perform the three-dimensional reconstruction operation: light pulses are continuously emitted toward the target to be photographed, the scene acquisition information is the returned light acquired by a sensor, and the time of flight and phase difference of the returned light are used to calculate the distance to the target, so as to obtain scene characteristic information such as depth information and finally the three-dimensional model data of the target indoor scene.
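The distance computation underlying the TOF method reduces to simple relations between the speed of light and the measured round-trip time or phase shift; a minimal sketch, with illustrative function names:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance_pulse(round_trip_time_s: float) -> float:
    """Pulsed TOF: the light travels to the target and back, so halve the path."""
    return C * round_trip_time_s / 2.0

def tof_distance_phase(phase_shift_rad: float, modulation_freq_hz: float) -> float:
    """Continuous-wave TOF: distance from the phase difference of the returned
    modulated light; unambiguous only up to C / (2 * modulation_freq_hz)."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)
```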
Therefore, the implementation of the optional embodiment can execute the three-dimensional reconstruction operation on all the scene acquisition information based on the determined scene characteristic information in all the scene acquisition information, and finally obtain the three-dimensional model data of the target indoor scene, which is beneficial to improving the accuracy and comprehensiveness of the determined three-dimensional model data so as to more intuitively reflect the structure of the target indoor scene.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart of another indoor projection method based on a three-dimensional model according to an embodiment of the present invention. The method described in fig. 2 is applied to a corresponding computing terminal, computing device, or server, where the server may be a local server or a cloud server, and the embodiment of the present invention is not limited thereto. As shown in fig. 2, the three-dimensional model-based indoor projection method may include the following operations:
201. Scene acquisition data of the target indoor scene is acquired, and three-dimensional model data of the target indoor scene is determined according to the scene acquisition data of the target indoor scene.
202. According to the determined three-dimensional model data, indoor three-dimensional projection material corresponding to the three-dimensional model data is determined in a preset projection material library.
In the embodiment of the present invention, for the specific technical details and explanations of technical terms in steps 201-202, reference may be made to the description of steps 101-102 in the first embodiment; details are not repeated herein.
203. Projection position information of the three-dimensional model data is determined.
In the embodiment of the invention, the projection position information is used to indicate the projection position of the indoor three-dimensional projection material in the target indoor scene. Specifically, the projection position information may be determined by identifying a specific three-dimensional contour in the three-dimensional model data. For example, a three-dimensional matching algorithm may be used to match three-dimensional contour information of a specific piece of furniture in the three-dimensional model data of the target indoor scene; when a corresponding result is matched, a specific feature point in the matched three-dimensional contour information is determined as a projection position point, and the projection equipment can later project the projection material according to the determined projection position point. For example, if the preset projection scheme is to project onto a sofa, the three-dimensional matching algorithm may match the three-dimensional model data against a preset three-dimensional model of a sofa to determine the three-dimensional contour information of the sofa in the three-dimensional model data. After the sofa's contour information is determined, feature points on it are selected, such as edge points on the front side surface of the sofa's backrest and edge points on the top side surface of the sofa's cushion; during subsequent projection, the projection device can project the projection material onto the backrest or cushion of the sofa according to these feature points.
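One way the matching step described above could be realized is sketched below, using Open3D's ICP registration as a stand-in for the unspecified three-dimensional matching algorithm; the file names and the acceptance threshold are assumptions for illustration:

```python
import numpy as np
import open3d as o3d

# Sketch: register a preset sofa template against the scene model with ICP,
# then read projection position points off the fitted template.
scene = o3d.io.read_point_cloud("indoor_scene.ply")        # hypothetical files
sofa_template = o3d.io.read_point_cloud("sofa_template.ply")

result = o3d.pipelines.registration.registration_icp(
    sofa_template, scene, max_correspondence_distance=0.05,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

if result.fitness > 0.6:                  # illustrative acceptance threshold
    sofa_template.transform(result.transformation)   # applied in place
    # e.g. select edge points of the fitted backrest face as projection position points
    fitted_points = np.asarray(sofa_template.points)
```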
204. The indoor three-dimensional projection material is projected in the target indoor scene, based on the projection position information of the three-dimensional model data, to form an augmented reality scene.
Therefore, the embodiment of the invention can project the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form the augmented reality scene, so that the indoor three-dimensional projection material can be projected to the target indoor scene more accurately, the fusion degree of the indoor three-dimensional projection material and the target indoor scene is improved, and the sense of reality experienced by a user is improved.
In an alternative embodiment, the projection in step 204 of the indoor three-dimensional projection material in the target indoor scene, based on the projection position information of the three-dimensional model data, to form the augmented reality scene may be performed by the fixed device or mobile device that collects the scene acquisition data of the target indoor scene, or by another independent projection device. When the operation is performed by the fixed device that acquired the scene acquisition data of the target indoor scene, the device can directly project the three-dimensional projection material to the corresponding position according to the projection position information; since the coordinates of the fixed device have not changed, the actual coordinates do not need to be determined or converted.
However, when the mobile device or another independent projection device performs projection, the coordinates of the device are changed, and the actual projection position cannot be determined by using the original coordinate information, so that the step 204 of projecting the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form the augmented reality scene includes:
acquiring the coordinate information of the data acquisition equipment during acquisition and the current coordinate information of the projection equipment, and converting the projection position information into actual projection coordinate information according to the coordinate information of the data acquisition equipment during acquisition and the current coordinate information of the projection equipment;
and based on the actual projection coordinate information, projecting the indoor three-dimensional projection material in the target indoor scene to form an augmented reality scene.
In this alternative embodiment, the data acquisition device is a device for acquiring scene acquisition data of a target indoor scene, and the projection device is a device responsible for projecting an indoor three-dimensional projection material, and the two devices may be independent devices or devices integrated on the same apparatus.
Specifically, when the acquisition-time coordinate information of the data acquisition device and the current coordinate information of the projection device are acquired, a relative coordinate relationship between the two may be calculated. This relative relationship represents the relative position between the data acquisition device and the projection device when they are independent devices, or the relative position between the acquisition-time position and the current position when they are integrated in the same device. Then, according to the determined relative coordinate relationship, the coordinate offset of the three-dimensional model data relative to the projection equipment is determined. From this coordinate offset, the projection equipment can calculate its own position within the current three-dimensional model data and compute the actual projection coordinate information corresponding to the projection position information, after which the projection material can be projected according to the actual projection coordinate information.
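The coordinate conversion described above amounts to composing homogeneous transforms; a minimal numpy sketch, assuming 4x4 device poses expressed in a common world frame are available (how such poses are obtained is not specified in this disclosure):

```python
import numpy as np

def to_projector_frame(p_model, T_world_acq, T_world_proj):
    """Convert a projection position point expressed in the acquisition-time
    model frame into the projector's current frame.

    T_world_acq / T_world_proj are assumed 4x4 homogeneous poses of the
    acquisition device at capture time and of the projection device now,
    both relative to the same world frame.
    """
    # relative pose: acquisition (model) frame -> current projector frame
    T_proj_acq = np.linalg.inv(T_world_proj) @ T_world_acq
    p = np.append(np.asarray(p_model, dtype=float), 1.0)  # homogeneous point
    return (T_proj_acq @ p)[:3]
```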
Therefore, this optional embodiment can convert the projection position information into actual projection coordinate information according to the acquisition-time coordinate information of the data acquisition equipment and the current coordinate information of the projection equipment, and, based on the actual projection coordinate information, project the indoor three-dimensional projection material in the target indoor scene to form the augmented reality scene. This solves the problem that the model's initial coordinates no longer match the coordinates during projection when a mobile device is used, or when the data acquisition equipment and the projection equipment are separate devices; the coordinate-system conversion is realized and the projection accuracy is improved.
In another optional embodiment, the determining in step 202, according to the determined three-dimensional model data, of the indoor three-dimensional projection material corresponding to the three-dimensional model data in a preset projection material library includes:
determining a plurality of sub-model data in the three-dimensional model data;
determining the sub-indoor projection material corresponding to each piece of sub-model data;
and determining the sub-indoor projection materials corresponding to all the sub-model data as the indoor three-dimensional projection material corresponding to the three-dimensional model data.
In this alternative embodiment, the plurality of sub-model data of the three-dimensional model data may be determined according to a preset model division rule. For example, continuous planar model data and independent three-dimensional model data in the three-dimensional model data may be separated to form a plurality of sub-model data of two different types, so that, for instance, an indoor wall model, individual furniture models and object models can be separated into a plurality of sub-model data for subsequent processing. Optionally, three-dimensional profile data meeting a preset profile rule may also be separated from the three-dimensional model data to screen out specific sub-model data; for example, the three-dimensional model data may be matched according to a preset furniture profile matching rule, such as a seat profile matching rule, and the plurality of seat models obtained through matching determined as the plurality of sub-model data for subsequent processing. A sketch of one possible division rule follows.
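The following sketch illustrates one such division rule with Open3D: dominant planes (walls, floor, ceiling) are peeled off by RANSAC plane segmentation, and the remaining points are split into independent objects by DBSCAN clustering; all thresholds and the input file name are illustrative assumptions:

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("indoor_scene.ply")  # hypothetical file

planes, rest = [], pcd
for _ in range(4):  # extract up to four dominant planes
    model, inliers = rest.segment_plane(distance_threshold=0.02,
                                        ransac_n=3, num_iterations=1000)
    if len(inliers) < 5000:      # stop when no large plane remains
        break
    planes.append(rest.select_by_index(inliers))
    rest = rest.select_by_index(inliers, invert=True)

labels = np.asarray(rest.cluster_dbscan(eps=0.1, min_points=50))
objects = [rest.select_by_index(np.where(labels == k)[0])
           for k in range(labels.max() + 1)]
# planes + objects together form the plurality of sub-model data
```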
Therefore, this optional embodiment can determine a plurality of sub-model data in the three-dimensional model data and determine the sub-indoor projection material corresponding to each piece of sub-model data, so that the scene can be classified more finely and different projection materials can be selected for different parts of the scene, improving the fidelity of the finally projected augmented reality scene and giving users a better experience.
In yet another alternative embodiment, determining the sub-indoor projection material corresponding to each piece of sub-model data includes:
for each sub-model data, acquiring outline information of the sub-model data;
determining the projection type of the sub-model data according to the contour information of the sub-model data;
and determining the sub-indoor projection material corresponding to the sub-model data according to the projection type of the sub-model data.
In this optional embodiment, the contour information of each sub-model data may be matched according to a preset type matching rule, so as to determine a projection type of the sub-model data, where the projection type is used to represent a position or an object of a target indoor scene corresponding to the sub-model data. Alternatively, the projection type may include one or more of a wall surface, a ceiling, a door, a window, a cabinet, a table, a chair, a sofa, a television and a curtain, and in practical applications, an operator may determine the projection type according to the object or indoor structure of the actual scene.
In this optional embodiment, the sub-indoor projection material may be one of a plurality of complete three-dimensional stereoscopic projection materials in the preset projection material library, that is, projection materials conforming to a closed or semi-closed complete scene, or one of a plurality of separate stereoscopic projection materials or planar projection materials. Optionally, the sub-indoor projection materials in the preset projection material library may be projection materials associated in advance with specific projection types; after the projection type of the sub-model data is determined, the sub-indoor projection materials meeting the association condition can be automatically screened out according to the determined projection type, as the sketch below illustrates.
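A minimal sketch of such a type-to-material association, with invented type names and material identifiers standing in for a real preset projection material library:

```python
from dataclasses import dataclass, field

@dataclass
class MaterialLibrary:
    # Each projection type is associated in advance with the identifiers of
    # the sub-indoor projection materials that may be projected onto it.
    by_type: dict = field(default_factory=lambda: {
        "wall":    ["forest_backdrop", "starry_sky"],
        "sofa":    ["velvet_texture", "ocean_wave"],
        "curtain": ["waterfall_loop"],
    })

    def materials_for(self, projection_type: str) -> list:
        """Screen out the sub-indoor projection materials whose association
        condition matches the determined projection type."""
        return self.by_type.get(projection_type, [])

library = MaterialLibrary()
print(library.materials_for("sofa"))  # -> ['velvet_texture', 'ocean_wave']
```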
Therefore, this optional embodiment can determine the projection type of the sub-model data according to the contour information of the sub-model data, and determine the sub-indoor projection material corresponding to the sub-model data according to that projection type. Corresponding projection material can thus be determined for each individual piece of sub-model data, which improves the pertinence of the projection, refines the selection of projection materials, improves the fineness of detail in the finally constructed augmented reality scene, and gives users a better sense of immersion.
In yet another alternative embodiment, the determining projection location information of the three-dimensional model data in step 203 comprises:
for each sub-model data in the three-dimensional model data, acquiring outline information of the sub-model data;
and determining sub-projection position information corresponding to the sub-model data according to the contour information of the sub-model data, and determining the sub-projection position information corresponding to all the sub-model data as the projection position information of the three-dimensional model data.
In this optional embodiment, the sub-projection position information is used to indicate the projection position of the sub-indoor projection material in the target indoor scene. Specifically, the sub-projection position information may be determined by first performing category identification on the three-dimensional contour in the sub-model data to identify the specific category of the sub-model data, and then selecting feature points using the feature-point matching rule corresponding to the identified category. For example, the three-dimensional contour of the sub-model data may first be identified as belonging to the curtain category; the feature-point matching rule corresponding to curtains is then applied, for instance matching the four edge points of the front side surface of the curtain model as feature points. The specific feature points obtained by matching are determined as sub-projection position points, and during projection the projection device can project the corresponding sub-indoor projection material, according to the determined sub-projection position points, onto the part of the target indoor scene corresponding to the sub-model data (in this example, the front side surface of the curtain).
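As an illustration of a category-specific feature-point matching rule, the following sketch approximates the "four edge points of the front side surface" for a curtain by the near face of an axis-aligned bounding box; the camera-frame convention (z pointing into the scene) and the function name are assumptions:

```python
import numpy as np

def front_face_corners(curtain_points: np.ndarray) -> np.ndarray:
    """Sketch of a feature-point matching rule for the 'curtain' category:
    take the four corner points of the front side surface, approximated here
    by the axis-aligned bounding-box face nearest the camera (smallest z).
    """
    mins, maxs = curtain_points.min(axis=0), curtain_points.max(axis=0)
    z_front = mins[2]                      # front face: nearest-z plane
    return np.array([[mins[0], mins[1], z_front],
                     [mins[0], maxs[1], z_front],
                     [maxs[0], mins[1], z_front],
                     [maxs[0], maxs[1], z_front]])
```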
Therefore, according to the optional embodiment, the sub-projection position information corresponding to the sub-model data can be determined according to the contour information of the sub-model data, and the sub-projection position information of all the sub-model data is determined as the projection position information of the three-dimensional model data, so that the whole projection position information of the three-dimensional model data is subdivided, and the part of the target indoor scene corresponding to each sub-model data can be projected in a targeted manner during subsequent projection, so that the projection fineness is improved, the detail fineness in the finally constructed augmented reality scene is improved, and a user obtains better immersion experience.
In yet another alternative embodiment, the projecting the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data in step 204 to form the augmented reality scene includes:
acquiring sub-projection position information and sub-indoor projection materials corresponding to each sub-model data in the three-dimensional model data;
and projecting the sub-indoor projection materials corresponding to all the sub-model data in the target indoor scene based on the sub-projection position information corresponding to all the sub-model data to form an augmented reality scene.
As can be seen, in the optional embodiment, based on the sub-projection position information corresponding to all the sub-model data obtained in the previous step, the sub-indoor projection materials corresponding to all the sub-model data are projected in the target indoor scene to form the augmented reality scene, so that the corresponding projection materials can be projected for part of the target indoor scene corresponding to each sub-model data, the projection pertinence is improved, the selection of the projection materials is refined, the detail fineness in the finally constructed augmented reality scene is improved, and a user obtains better immersion experience.
Example three
Referring to fig. 3, fig. 3 is a schematic structural diagram of an indoor projection apparatus based on a three-dimensional model according to an embodiment of the present invention. The apparatus described in fig. 3 may be applied to a corresponding computing terminal, a corresponding computing device, or a corresponding server, where the server may be a local server or a cloud server, and the embodiment of the present invention is not limited thereto. As shown in fig. 3, the apparatus may include:
the first determining module 301 is configured to obtain scene acquisition data of a target indoor scene, and determine three-dimensional model data of the target indoor scene according to the scene acquisition data of the target indoor scene.
In the embodiment of the present invention, the target indoor scene may be an indoor scene of a commercial entertainment place, an indoor scene of a civil residence, or an indoor scene of a military facility, and may be an internal scene of any closed or semi-closed place, which is not limited in the present invention. Optionally, the scene acquisition data of the target indoor scene may be acquired by using a common image pickup device, or may be acquired by using a special passive three-dimensional image pickup device, such as a monocular/binocular vision camera, or an RGB-D camera, or by using a special active three-dimensional information sensing device to acquire pulse sensing data or infrared sensing data, and such a special three-dimensional image pickup device may itself acquire the scene acquisition data and implement the construction of the three-dimensional model by using a built-in or external computing device. In the embodiment of the invention, the three-dimensional model data of the target indoor scene obtained by determination can be a voxel data model, a point cloud data model or a grid data model.
The second determining module 302 is configured to determine, according to the three-dimensional model data obtained by the determination, an indoor three-dimensional projection material corresponding to the three-dimensional model data in a preset projection material library.
In the embodiment of the invention, the obtained indoor three-dimensional projection material is used for projecting in a target indoor scene to form an augmented reality scene. In the embodiment of the present invention, the preset projection material library may include a plurality of complete three-dimensional stereoscopic projection materials, that is, projection materials conforming to a closed or semi-closed complete scene, or may include a plurality of separate stereoscopic projection materials, and optionally, a part of the plurality of separate stereoscopic projection materials may be a component of a complete three-dimensional stereoscopic projection material. In the embodiment of the present invention, the preset projection material library may also include a plurality of separate planar projection materials, and optionally, a part of the planar projection materials in the plurality of planar projection materials may be a component of an integrated three-dimensional stereoscopic projection material.
Therefore, the device described in the embodiment of fig. 3 can provide a technology for projecting the indoor scene based on the three-dimensional model data of the indoor scene to form the augmented reality scene, so that the fusion degree of the projected virtual information in the augmented reality scene and the real indoor scene can be effectively improved, and more realistic experience is brought to the user.
In an alternative embodiment, as shown in fig. 4, the apparatus further comprises:
a third determining module 303, configured to determine projection position information of the three-dimensional model data;
in the embodiment of the invention, the projection position information is used for indicating the projection position of the indoor three-dimensional projection material in the target indoor scene, and specifically, the projection position information can be determined by identifying the specific three-dimensional contour in the three-dimensional model data. For example, a three-dimensional matching algorithm may be used to match three-dimensional contour information of a specific piece of furniture in three-dimensional model data of a target indoor scene, when a corresponding result is matched, a specific feature point in the matched three-dimensional contour information is determined as a projection position point, and then, when projection is performed, projection equipment may perform projection of a projection material according to the determined projection position point. For example, the preset projection idea is to project on a sofa, the three-dimensional matching algorithm may be used to match the three-dimensional model data with a preset three-dimensional model of the sofa to determine three-dimensional contour information of the sofa in the three-dimensional model data, after the three-dimensional contour information of the sofa is determined, feature points on the three-dimensional contour information of the sofa, such as edge points on the front side surface of the backrest of the sofa and edge points on the top side surface of the cushion of the sofa, are selected, and during subsequent projection, the projection device may project the projection material onto the backrest or the cushion of the sofa according to the feature points.
And the projection module 304 is configured to project the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form an augmented reality scene.
Therefore, the device described in the implementation of fig. 4 can project the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form an augmented reality scene, so that the indoor three-dimensional projection material can be projected to the target indoor scene more accurately, the fusion degree of the indoor three-dimensional projection material and the target indoor scene is improved, and the sense of reality of user experience is improved.
In another alternative embodiment, as shown in fig. 4, the first determining module 301 may include:
an obtaining unit 3011, configured to obtain multiple pieces of scene acquisition information of a target indoor scene;
a determining unit 3012, configured to determine scene feature information in each scene acquisition information;
and the execution unit 3013 is configured to perform a three-dimensional reconstruction operation on all the scene acquisition information, based on the scene characteristic information in all the scene acquisition information, to determine three-dimensional model data of the target indoor scene.
In this alternative embodiment, the plurality of pieces of scene acquisition information may be a plurality of pictures of the target indoor scene taken at a plurality of positions or from a plurality of angles, captured either by a single photographing apparatus over multiple shots or by a plurality of photographing apparatuses at the same time. The acquiring unit 3011 may be a single image acquiring device or a plurality of image acquiring devices. Optionally, the scene characteristic information in the scene acquisition information may include one or more of pixel position information, depth information, or camera calibration information in the picture. In this optional embodiment, an existing three-dimensional reconstruction algorithm may be used to perform three-dimensional reconstruction operations such as feature point matching and three-dimensional reconstruction on the plurality of image data according to the image feature information, finally forming the three-dimensional model data of the target indoor scene. Optionally, the three-dimensional reconstruction algorithm may adopt SfM, REMODE, SVO, and similar algorithms, which are applicable to a plurality of image data acquired by a single image capturing device; or SGM, SGBM, and similar algorithms, which are applicable to a plurality of image data acquired by a binocular or multi-view vision camera device.
In this alternative embodiment, it should be noted that the type of the scene acquisition information is not limited, so the execution unit 3013 may also select an active three-dimensional reconstruction method. For example, a structured light method may be used to determine the three-dimensional model data: a projector projects encoded structured light onto the object to be photographed, and a camera then photographs the object. Because different parts of the photographed object differ in distance and direction relative to the camera, the size and shape of the structured light encoded pattern change accordingly. In this case the scene acquisition information is the deformed structured-light pattern acquired by the camera; scene characteristic information such as depth information is then recovered by an arithmetic unit, finally yielding the three-dimensional model data of the target indoor scene. As another example, a TOF (time-of-flight) laser method may be used to perform the three-dimensional reconstruction operation: light pulses are continuously emitted toward the target to be photographed, the scene acquisition information is the returned light acquired by a sensor, and the time of flight and phase difference of the returned light are used to calculate the distance to the target, so as to obtain scene characteristic information such as depth information and finally the three-dimensional model data of the target indoor scene.
Therefore, the device described in fig. 4 can execute the three-dimensional reconstruction operation on all the scene acquisition information based on the determined scene characteristic information in all the scene acquisition information, and finally obtain the three-dimensional model data of the target indoor scene, which is beneficial to improving the accuracy and comprehensiveness of the determined three-dimensional model data, so as to more intuitively reflect the structure of the target indoor scene.
In yet another alternative embodiment, the projection module 304 for projecting the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form the augmented reality scene may be integrated with the first determination module 301 into a fixed device or a mobile device for acquiring scene acquisition data of the target indoor scene, or may be another independent projection device. When the first determining module 301 is integrated with a fixed device for acquiring scene acquisition data of a target indoor scene, the device may directly project the three-dimensional projection material to a corresponding position according to the projection position information, and at this time, since the coordinates of the fixed device are not changed, it is not necessary to determine or convert actual coordinates.
However, when the projection module 304 and the first determining module 301 are integrated into a mobile device for acquiring the scene acquisition data of the target indoor scene, or when the projection module 304 is an independent projection device, the coordinates of the device change, so the original coordinate information cannot be used to determine the actual projection position. In that case, the specific manner in which the projection module 304 projects the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form the augmented reality scene includes:
acquiring the acquisition-time coordinate information of the data acquisition device and the current coordinate information of the projection device, and converting the projection position information into actual projection coordinate information according to the acquisition-time coordinate information of the data acquisition device and the current coordinate information of the projection device;
and based on the actual projection coordinate information, projecting the indoor three-dimensional projection material in the target indoor scene to form an augmented reality scene.
In this alternative embodiment, the data acquisition device is a device for acquiring scene acquisition data of a target indoor scene, and the projection device is a device responsible for projecting an indoor three-dimensional projection material, and the two devices may be independent devices or devices integrated on the same apparatus.
Specifically, after the acquisition-time coordinate information of the data acquisition device and the current coordinate information of the projection device are acquired, the relative coordinate relationship between the two may be calculated from them. When the data acquisition device and the projection device are independent devices, this relative relationship represents the relative position between the two devices; when they are integrated in the same apparatus, it represents the relative position between the acquisition-time position and the current position. Then, according to the determined relative coordinate relationship, the coordinate offset of the three-dimensional model data relative to the projection device is determined. From this offset, the projection device can calculate its own position within the current three-dimensional model data and compute the actual projection coordinate information corresponding to the projection position information, after which the projection material can be projected according to the actual projection coordinate information.
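A minimal sketch of this coordinate conversion, assuming both poses are available as 4x4 homogeneous transforms (e.g. from odometry or calibration; the names are illustrative, not from the patent):

```python
import numpy as np

def to_projector_coords(p_model, T_world_capture, T_world_projector):
    """Convert a projection position from model (acquisition-time)
    coordinates into the projector's current coordinate frame."""
    # Relative transform projector <- capture: the "relative coordinate
    # relationship" between the two devices (or the two poses of one device).
    T_proj_capture = np.linalg.inv(T_world_projector) @ T_world_capture
    p_h = np.append(p_model, 1.0)      # homogeneous point
    return (T_proj_capture @ p_h)[:3]  # actual projection coordinates
```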
Therefore, this optional embodiment can convert the projection position information into actual projection coordinate information according to the acquisition-time coordinate information of the data acquisition device and the current coordinate information of the projection device, and, based on the actual projection coordinate information, project the indoor three-dimensional projection material in the target indoor scene to form the augmented reality scene. This solves the problem that the model's initial coordinates no longer match the coordinates at projection time when a mobile device is used or when the data acquisition device and the projection device are arranged independently, realizes the conversion of the coordinate system, and improves projection accuracy.
In yet another optional embodiment, the specific manner in which the second determining module 302 determines, in a preset projection material library according to the determined three-dimensional model data, the indoor three-dimensional projection material corresponding to the three-dimensional model data includes:
determining a plurality of sub-model data in the three-dimensional model data;
determining the sub-indoor projection material corresponding to each sub-model data;
and determining the sub-indoor projection materials corresponding to all the sub-model data as indoor three-dimensional projection materials corresponding to the three-dimensional model data.
In this alternative embodiment, the plurality of sub-model data may be determined in the three-dimensional model data according to a preset model division rule. For example, continuous planar model data and independent three-dimensional model data in the three-dimensional model data may be separated to form a plurality of sub-model data of two different types; in this way, an indoor wall model and the individual furniture and object models can be separated into a plurality of sub-model data for subsequent processing. Optionally, three-dimensional profile data meeting a preset profile rule may also be separated from the three-dimensional model data, so as to screen out a plurality of specific sub-model data. For example, the three-dimensional model data may be matched against a preset profile matching rule for a specific kind of furniture, such as a preset seat profile matching rule, and the plurality of seat models obtained through matching are determined as the plurality of sub-model data for subsequent processing.
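A minimal sketch of one such division rule, assuming the three-dimensional model data is available as a point cloud and using Open3D's RANSAC plane segmentation and DBSCAN clustering (the thresholds are illustrative assumptions, not values from the patent):

```python
import numpy as np
import open3d as o3d

def split_sub_models(pcd: o3d.geometry.PointCloud, n_planes: int = 4):
    """Separate continuous planar structure (walls/floor/ceiling) from
    independent object sub-models (furniture, etc.)."""
    planes, rest = [], pcd
    for _ in range(n_planes):  # peel off the dominant planes one by one
        _, inliers = rest.segment_plane(
            distance_threshold=0.02, ransac_n=3, num_iterations=1000)
        planes.append(rest.select_by_index(inliers))
        rest = rest.select_by_index(inliers, invert=True)

    # Cluster the remaining points into independent object sub-models.
    labels = np.asarray(rest.cluster_dbscan(eps=0.05, min_points=50))
    n_clusters = labels.max() + 1 if labels.size else 0
    objects = [rest.select_by_index(np.where(labels == k)[0])
               for k in range(n_clusters)]
    return planes, objects
```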
Therefore, this optional embodiment can determine a plurality of sub-model data in the three-dimensional model data and determine the sub-indoor projection material corresponding to each sub-model data, so that the scene is classified more finely and projection materials can be selected per sub-scene, improving the fidelity of the finally projected augmented reality scene and giving the user a better experience.
In yet another alternative embodiment, the specific manner in which the second determining module 302 determines the sub-indoor projection material corresponding to each sub-model data includes:
for each sub-model data, acquiring outline information of the sub-model data;
determining the projection type of the sub-model data according to the contour information of the sub-model data;
and determining the sub-indoor projection material corresponding to the sub-model data according to the projection type of the sub-model data.
In this optional embodiment, the contour information of each sub-model data may be matched according to a preset type matching rule to determine the projection type of the sub-model data, where the projection type is used to represent the position or object of the target indoor scene to which the sub-model data corresponds. Optionally, the projection type may include one or more of a wall surface, a ceiling, a door, a window, a cabinet, a table, a chair, a sofa, a television, and a curtain; in practical applications, an operator may define the projection types according to the objects or indoor structures of the actual scene.
In this optional embodiment, the sub-indoor projection material may be one of a plurality of complete three-dimensional projection materials in the preset projection material library, that is, a projection material conforming to a closed or semi-closed complete scene, or one of a plurality of separate three-dimensional or planar projection materials. Optionally, the sub-indoor projection materials in the preset projection material library may be associated in advance with specific projection types; after the projection type of the sub-model data is determined, the sub-indoor projection materials meeting the association condition can be automatically screened out according to the determined projection type.
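A minimal sketch of such a type-keyed material library (the type names and material identifiers are hypothetical placeholders; in practice the associations are preset by the operator in the projection material library):

```python
from typing import Dict, List

# Hypothetical library: projection types -> associated sub-indoor materials.
MATERIAL_LIBRARY: Dict[str, List[str]] = {
    "wall":    ["forest_backdrop", "starry_sky"],
    "window":  ["ocean_view"],
    "curtain": ["waterfall_overlay"],
    "sofa":    ["moss_texture"],
}

def materials_for(projection_type: str) -> List[str]:
    """Screen out the sub-indoor projection materials that satisfy the
    association condition for the determined projection type."""
    return MATERIAL_LIBRARY.get(projection_type, [])
```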
Therefore, this optional embodiment can determine the projection type of the sub-model data according to the contour information of the sub-model data, and determine the corresponding sub-indoor projection material according to that projection type. Corresponding projection material can thus be determined for each individual sub-model data, which improves the pertinence of projection, refines the selection of projection materials, improves the detail fineness of the finally constructed augmented reality scene, and gives the user a better immersion experience.
In yet another alternative embodiment, the specific manner in which the third determining module 303 determines the projection position information of the three-dimensional model data includes:
for each sub-model data in the three-dimensional model data, acquiring outline information of the sub-model data;
and determining sub-projection position information corresponding to the sub-model data according to the contour information of the sub-model data, and determining the sub-projection position information of all the sub-model data as the projection position information of the three-dimensional model data.
In this optional embodiment, the sub-projection position information is used to indicate the projection position of the sub-indoor projection material in the target indoor scene. Specifically, the sub-projection position information may be determined by first performing category identification on the three-dimensional profile in the sub-model data to identify the specific category of the sub-model data, and then selecting feature points using the feature point matching rule corresponding to the identified category. For example, suppose the sub-model data is identified as a curtain by category identification on its three-dimensional contour. A feature point matching rule corresponding to the curtain category is then adopted, for example taking the four edge points of the front face of the curtain model as feature points; the sub-model data is matched, and the specific feature points obtained by matching are determined as sub-projection position points. During projection, the projection device can then project the corresponding sub-indoor projection material onto the part of the target indoor scene corresponding to the sub-model data (the curtain in this example) according to the determined sub-projection position points, that is, onto the front face of the curtain.
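A minimal sketch of the curtain example, approximating the front face by the near face of the sub-model's axis-aligned bounding box (a simplifying assumption; a real system would apply a per-category matching rule):

```python
import numpy as np

def front_face_corners(points: np.ndarray) -> np.ndarray:
    """points: N x 3 sub-model point cloud. Returns the four corner points
    of the near (front) face of its axis-aligned bounding box, used as
    sub-projection position points."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    z = mins[2]  # the face nearest the camera/projector in this convention
    return np.array([[mins[0], mins[1], z],
                     [maxs[0], mins[1], z],
                     [maxs[0], maxs[1], z],
                     [mins[0], maxs[1], z]])
```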
Therefore, this optional embodiment can determine the sub-projection position information corresponding to each sub-model data according to its contour information, and determine the sub-projection position information of all the sub-model data as the projection position information of the three-dimensional model data. The overall projection position information of the three-dimensional model data is thereby subdivided, so that during subsequent projection the part of the target indoor scene corresponding to each sub-model data can be projected in a targeted manner, which improves projection fineness, improves the detail fineness of the finally constructed augmented reality scene, and gives the user a better immersion experience.
In yet another alternative embodiment, the specific manner in which the projection module 304 projects the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form the augmented reality scene includes:
acquiring sub-projection position information and sub-indoor projection materials corresponding to each sub-model data in the three-dimensional model data;
and projecting the sub-indoor projection materials corresponding to all the sub-model data in the target indoor scene based on the sub-projection position information corresponding to all the sub-model data to form an augmented reality scene.
As can be seen, this optional embodiment projects, in the target indoor scene, the sub-indoor projection materials corresponding to all the sub-model data based on the sub-projection position information obtained in the previous steps, forming the augmented reality scene. Corresponding projection material can thus be projected onto the part of the target indoor scene corresponding to each sub-model data, which improves the pertinence of projection, refines the selection of projection materials, improves the detail fineness of the finally constructed augmented reality scene, and gives the user a better immersion experience.
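Putting the pieces together, a sketch of the per-sub-model projection loop (reusing the front_face_corners and materials_for sketches above; Projector is a hypothetical stand-in for the device driven by projection module 304):

```python
class Projector:
    """Hypothetical stand-in for the physical projection device."""
    def project(self, material: str, corners) -> None:
        # A real device would warp `material` onto the surface patch
        # bounded by `corners`; here we just report the intent.
        print(f"projecting {material} at {corners.tolist()}")

def project_scene(sub_models, projector: Projector) -> None:
    # sub_models: iterable of {"points": N x 3 array, "projection_type": str}
    for sm in sub_models:
        corners = front_face_corners(sm["points"])             # where
        for material in materials_for(sm["projection_type"]):  # what
            projector.project(material, corners)
```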
Example four
Referring to fig. 5, fig. 5 is a schematic structural diagram of another indoor projection apparatus based on a three-dimensional model according to an embodiment of the present invention. As shown in fig. 5, the apparatus may include:
a memory 401 storing executable program code;
a processor 402 coupled with the memory 401;
the processor 402 calls the executable program code stored in the memory 401 to execute part or all of the steps of the three-dimensional model-based indoor projection method disclosed in the first embodiment or the second embodiment of the present invention.
Example five
The embodiment of the invention discloses a computer storage medium storing computer instructions that, when called, execute part or all of the steps of the three-dimensional model-based indoor projection method disclosed in the first embodiment or the second embodiment of the present invention.
Example six
The embodiment of the invention discloses an indoor projection system based on a three-dimensional model. The system is used for executing part or all of the steps in the three-dimensional model-based indoor projection method disclosed by the first aspect of the embodiment of the invention.
Specifically, in this embodiment, the system may be integrated on a single device, for example an AGV cart provided with a projection device and a camera; alternatively, the system may be arranged separately on independent devices, for example an AGV cart and a projection device in pre-established communication connection with each other, where the AGV cart is provided with a camera.
Next, the two scenarios are described in more detail. In the first scenario, the indoor projection system of this embodiment is an AGV cart provided with a projection device and a camera. The AGV cart can move in the target indoor scene along a preset guide route and photograph the target indoor scene from a plurality of angles with the camera during the movement, obtaining an image data set of the target indoor scene. Then the processor in the AGV cart, or a connected server, can perform three-dimensional reconstruction on the image data set to obtain the three-dimensional model data of the target indoor scene and the corresponding indoor three-dimensional projection material, after which the AGV cart can project directly with its onboard projection device to establish the augmented reality scene.
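A high-level sketch of this single-device scenario (the AGV interface with drive_to(), camera.capture() and a projector attribute is a hypothetical placeholder; reconstruct_two_views and project_scene refer to the sketches above, and only the first image pair is used for brevity):

```python
def run_integrated_agv(agv, guide_route, K):
    """Drive the preset guide route, photograph at several angles,
    reconstruct the scene, then project in place."""
    images = []
    for waypoint in guide_route:  # preset guide route
        agv.drive_to(waypoint)
        images.append(agv.camera.capture())
    # Reconstruct (a real system would fuse all views, not one pair):
    points = reconstruct_two_views(images[0], images[1], K)
    # sub_models = ...split `points` and select materials as sketched above...
    # project_scene(sub_models, agv.projector)
```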
Therefore, the AGV cart in the first scenario integrates the whole indoor projection system; it can be sold and used on its own, and lets a user experience a three-dimensional augmented reality scene without installing any fixed facilities indoors, giving it a certain market competitiveness.
In the second scenario, the indoor projection system is arranged on two independent devices; for example, it may include an AGV cart and a projection device in pre-established communication connection. In this scenario, the camera-equipped AGV cart moves in the target indoor scene along a preset guide route and photographs the target indoor scene from a plurality of angles during the movement, obtaining an image data set of the target indoor scene. Then the processor in the AGV cart, or a connected server, performs three-dimensional reconstruction on the image data set to obtain the three-dimensional model data of the target indoor scene and the corresponding indoor three-dimensional projection material. The projection device can be preset indoors and is in communication connection with the AGV cart; upon receiving a projection instruction, it projects the indoor three-dimensional projection material in the target indoor scene to form the augmented reality scene.
Therefore, the second scenario can use a projection device preset indoors for projection while the flexible AGV cart travels indoors to build the three-dimensional scene. A projection device fixedly installed in advance is generally more precise and advanced than one integrated on the AGV cart as in the first scenario, so the second scenario is suitable for building augmented reality scenes with higher precision and higher requirements.
The above-described embodiments of the apparatus are merely illustrative, and the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above detailed description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a necessary general hardware platform, or by hardware. Based on such understanding, the above technical solutions may be embodied in the form of a software product stored in a computer-readable storage medium, where the storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk memory, a magnetic disk memory, a tape memory, or any other computer-readable medium that can be used to carry or store data.
Finally, it should be noted that the method, apparatus, and system for indoor projection based on a three-dimensional model disclosed in the embodiments of the present invention are only preferred embodiments, and are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (16)
1. An indoor projection method based on a three-dimensional model, which is characterized by comprising the following steps:
acquiring scene acquisition data of a target indoor scene, and determining three-dimensional model data of the target indoor scene according to the scene acquisition data of the target indoor scene;
according to the three-dimensional model data obtained through determination, determining indoor three-dimensional projection materials corresponding to the three-dimensional model data in a preset projection material library; the indoor three-dimensional projection material is used for projecting in the target indoor scene to form an augmented reality scene.
2. The three-dimensional model based indoor projection method of claim 1, further comprising:
determining projection position information of the three-dimensional model data;
and projecting the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form an augmented reality scene.
3. The three-dimensional model based indoor projection method of claim 1, wherein the acquiring scene acquisition data of a target indoor scene and determining three-dimensional model data of the target indoor scene according to the scene acquisition data of the target indoor scene comprises:
acquiring a plurality of scene acquisition information of a target indoor scene, and determining scene characteristic information in each scene acquisition information;
and performing a three-dimensional reconstruction operation on all the scene acquisition information based on the scene characteristic information in all the scene acquisition information to determine three-dimensional model data of the target indoor scene.
4. The three-dimensional model-based indoor projection method according to claim 2, wherein the determining, in a preset projection material library, the indoor three-dimensional projection material corresponding to the three-dimensional model data according to the three-dimensional model data obtained through determination comprises:
determining a plurality of sub-model data in the three-dimensional model data;
determining the sub-indoor projection material corresponding to each sub-model data;
and determining the sub-indoor projection materials corresponding to all the sub-model data as the indoor three-dimensional projection material corresponding to the three-dimensional model data.
5. The three-dimensional model-based indoor projection method of claim 4, wherein the determining of the sub-indoor projection material corresponding to each sub-model data comprises:
for each sub-model data, acquiring outline information of the sub-model data;
determining the projection type of the sub-model data according to the contour information of the sub-model data; the projection type is used for representing the position or object of the target indoor scene corresponding to the sub-model data;
and determining the sub-indoor projection material corresponding to the sub-model data according to the projection type of the sub-model data.
6. The three-dimensional model-based indoor projection method according to claim 5, wherein the determining projection position information of the three-dimensional model data comprises:
for each sub-model data in the three-dimensional model data, acquiring outline information of the sub-model data;
and determining sub-projection position information corresponding to the sub-model data according to the contour information of the sub-model data, and determining the sub-projection position information corresponding to all the sub-model data as the projection position information of the three-dimensional model data.
7. The three-dimensional model based indoor projection method of claim 6, wherein the projecting the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form an augmented reality scene comprises:
acquiring sub-projection position information and sub-indoor projection materials corresponding to each sub-model data in the three-dimensional model data;
and projecting the sub-indoor projection materials corresponding to all the sub-model data in the target indoor scene based on the sub-projection position information corresponding to all the sub-model data to form an augmented reality scene.
8. An indoor projection apparatus based on a three-dimensional model, the apparatus comprising:
the system comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for acquiring scene acquisition data of a target indoor scene and determining three-dimensional model data of the target indoor scene according to the scene acquisition data of the target indoor scene;
the second determination module is used for determining indoor three-dimensional projection materials corresponding to the three-dimensional model data in a preset projection material library according to the three-dimensional model data obtained through determination; the indoor three-dimensional projection material is used for projecting in the target indoor scene to form an augmented reality scene.
9. The three-dimensional model based indoor projection apparatus of claim 8, wherein the apparatus further comprises:
the third determining module is used for determining the projection position information of the three-dimensional model data;
and the projection module is used for projecting the indoor three-dimensional projection material in the target indoor scene based on the projection position information of the three-dimensional model data to form an augmented reality scene.
10. The three-dimensional model based indoor projection apparatus of claim 8, wherein the first determining module comprises:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a plurality of scene acquisition information of a target indoor scene;
the determining unit is used for determining scene characteristic information in each scene acquisition information;
and the execution unit is used for executing three-dimensional reconstruction operation on all the scene acquisition information based on the scene characteristic information in all the scene acquisition information so as to determine three-dimensional model data of the target indoor scene.
11. The indoor projection device based on the three-dimensional model according to claim 9, wherein the specific manner in which the second determining module determines, in a preset projection material library according to the three-dimensional model data obtained through determination, the indoor three-dimensional projection material corresponding to the three-dimensional model data includes:
determining a plurality of sub-model data in the three-dimensional model data;
determining the sub-indoor projection material corresponding to each sub-model data;
and determining the sub-indoor projection materials corresponding to all the sub-model data as the indoor three-dimensional projection material corresponding to the three-dimensional model data.
12. The three-dimensional model-based indoor projection device of claim 11, wherein the specific manner in which the second determining module determines the sub-indoor projection material corresponding to each sub-model data includes:
for each sub-model data, acquiring outline information of the sub-model data;
determining the projection type of the sub-model data according to the contour information of the sub-model data;
and determining the sub-indoor projection material corresponding to the sub-model data according to the projection type of the sub-model data.
13. The three-dimensional model based indoor projection device of claim 12, wherein the specific manner in which the third determining module determines the projection position information of the three-dimensional model data includes:
for each sub-model data in the three-dimensional model data, acquiring outline information of the sub-model data;
and determining sub-projection position information corresponding to the sub-model data according to the contour information of the sub-model data, and determining the sub-projection position information of all the sub-model data as the projection position information of the three-dimensional model data.
14. The three-dimensional model based indoor projection device of claim 13, wherein the projection module projects the indoor three-dimensional projection material in the target indoor scene to form an augmented reality scene based on the projection position information of the three-dimensional model data by:
acquiring sub-projection position information and sub-indoor projection materials corresponding to each sub-model data in the three-dimensional model data;
and projecting the sub-indoor projection materials corresponding to all the sub-model data in the target indoor scene based on the sub-projection position information corresponding to all the sub-model data to form an augmented reality scene.
15. An indoor projection apparatus based on a three-dimensional model, the apparatus comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the three-dimensional model-based indoor projection method according to any one of claims 1 to 7.
16. An indoor projection system based on a three-dimensional model is characterized by comprising an automatic guide moving device, an information acquisition device and a projection device; the system is used for executing the three-dimensional model based indoor projection method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011472216.XA CN113327329B (en) | 2020-12-15 | 2020-12-15 | Indoor projection method, device and system based on three-dimensional model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113327329A true CN113327329A (en) | 2021-08-31 |
CN113327329B CN113327329B (en) | 2024-06-14 |
Family
ID=77413253
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011472216.XA Active CN113327329B (en) | 2020-12-15 | 2020-12-15 | Indoor projection method, device and system based on three-dimensional model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113327329B (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104869372A (en) * | 2014-02-25 | 2015-08-26 | 联想(北京)有限公司 | Projection method and electronic equipment |
US20180150148A1 (en) * | 2015-06-30 | 2018-05-31 | Iview Displays (Shenzhen) Company Ltd. | Handheld interactive device and projection interaction method therefor |
CN107071388A (en) * | 2016-12-26 | 2017-08-18 | 深圳增强现实技术有限公司 | A kind of three-dimensional augmented reality display methods and device |
CN109840947A (en) * | 2017-11-28 | 2019-06-04 | 广州腾讯科技有限公司 | Implementation method, device, equipment and the storage medium of augmented reality scene |
CN110392251A (en) * | 2018-04-18 | 2019-10-29 | 广景视睿科技(深圳)有限公司 | A kind of dynamic projection method and system based on virtual reality |
CN108597035A (en) * | 2018-05-02 | 2018-09-28 | 福建中锐海沃科技有限公司 | A kind of three-dimensional object display methods, storage medium and computer based on augmented reality |
CN109242958A (en) * | 2018-08-29 | 2019-01-18 | 广景视睿科技(深圳)有限公司 | A kind of method and device thereof of three-dimensional modeling |
CN110876046A (en) * | 2018-08-31 | 2020-03-10 | 深圳光峰科技股份有限公司 | Projection method, projection apparatus, and computer-readable storage medium |
CN109584377A (en) * | 2018-09-04 | 2019-04-05 | 亮风台(上海)信息科技有限公司 | A kind of method and apparatus of the content of augmented reality for rendering |
CN110062216A (en) * | 2019-04-18 | 2019-07-26 | 北京森焱精创科技有限公司 | Outdoor scene exchange method, system, computer equipment and storage medium |
CN110597397A (en) * | 2019-09-29 | 2019-12-20 | 深圳传音控股股份有限公司 | Augmented reality implementation method, mobile terminal and storage medium |
CN110688018A (en) * | 2019-11-05 | 2020-01-14 | 广东虚拟现实科技有限公司 | Virtual picture control method and device, terminal equipment and storage medium |
CN111665842A (en) * | 2020-06-09 | 2020-09-15 | 山东大学 | Indoor SLAM mapping method and system based on semantic information fusion |
Non-Patent Citations (1)
Title |
---|
XU Weipeng et al.: "Dynamic Projection Calibration for Spatial Augmented Reality Based on a Depth Camera" (基于深度相机的空间增强现实动态投影标定), Journal of System Simulation (《系统仿真学报》), 15 September 2013 (2013-09-15), pages 2097-2104 *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114529678A (en) * | 2022-02-23 | 2022-05-24 | 北京航空航天大学 | Large-range indoor space three-dimensional reconstruction method based on multi-local model splicing |
Also Published As
Publication number | Publication date |
---|---|
CN113327329B (en) | 2024-06-14 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| TA01 | Transfer of patent application right | Effective date of registration: 20211102. Applicant after: GUANGZHOU FUGANG LIFE INTELLIGENT TECHNOLOGY Co.,Ltd., 501-2, Guangzheng Science and Technology Industrial Park, No. 11, Nanyun Fifth Road, Science City, Huangpu District, Guangzhou, Guangdong Province, 510663. Applicant before: GUANGZHOU FUGANG WANJIA INTELLIGENT TECHNOLOGY Co.,Ltd., 501-1, Guangzheng Science and Technology Industrial Park, No. 11, Yunwu Road, Science City, Huangpu District, Guangzhou City, Guangdong Province, 510700.
| GR01 | Patent grant |