CN111652981A - Space capsule special effect generation method and device, electronic equipment and storage medium
- Publication number: CN111652981A
- Application number: CN202010509232.5A
- Authority: CN (China)
- Prior art keywords: capsule, real object, indoor environment, virtual, space capsule
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T19/006: Mixed reality (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T19/00: Manipulating 3D models or images for computer graphics)
- G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality (G06F: Electric digital data processing; G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer, and output arrangements for transferring data from the processing unit to the output unit; G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer)
Abstract
The disclosure provides a space capsule special effect generation method and device, an electronic device and a storage medium. The method comprises: determining information of each real object in an indoor environment based on an acquired scene image; determining, according to the information of each real object in the indoor environment, the fusion mode and fusion position of each virtual space capsule element in an AR space capsule scene image to be generated, wherein the fusion modes comprise supplementary display in the indoor environment and replacement of real objects in the indoor environment; and controlling an AR device to display, based on the determined fusion modes and fusion positions, the AR space capsule scene image in which each virtual space capsule element is fused into the indoor environment. The virtual space environment constructed through the AR device gives the user an immersive outer space experience at low cost, and because the virtual space environment can be built around a wide variety of real scenes, the scheme has good applicability.
Description
Technical Field
The disclosure relates to the technical field of AR (augmented reality) devices, and in particular to a space capsule special effect generation method and device, an electronic device and a storage medium.
Background
As living standards improve, more and more users look for new experiences, and outer space remains a place that people have long explored yet which has lost none of its mystery.
At present, human exploration of outer space is still at a preliminary stage: visiting the real outer space environment is extremely expensive, and potential threats deter many people from travelling there, so the desire to experience the outer space environment first-hand is difficult to satisfy.
Disclosure of Invention
The embodiments of the disclosure provide at least one space capsule special effect generation scheme in which an AR device displays an AR space capsule scene image. The virtual space environment constructed by the AR device gives the user an immersive outer space experience at low cost; moreover, the virtual space environment can be built around a wide variety of real scenes, so the scheme has good applicability.
In a first aspect, an embodiment of the present disclosure provides a space capsule special effect generating method, where the method includes:
acquiring a scene image including an indoor environment;
determining information of each real object in the indoor environment based on the acquired scene image;
determining, according to the information of each real object in the indoor environment, the fusion mode and the fusion position of each virtual space capsule element in an AR space capsule scene image to be generated, wherein the fusion modes comprise supplementary display in the indoor environment and replacement of real objects in the indoor environment;
and controlling an AR device to display, based on the determined fusion mode and fusion position of each virtual space capsule element, the AR space capsule scene image in which each virtual space capsule element is fused into the indoor environment.
In one embodiment, the determining information of each real object in the indoor environment based on the acquired scene image includes:
performing target detection on the scene image based on a trained target detection network, and extracting the image area of each real object in the scene image;
and performing feature extraction on the image area of each real object based on a trained feature extraction network to obtain the attribute features of that real object, the attribute features serving as the information of the real object.
In one embodiment, the determining, according to the information of each real object in the indoor environment, a fusion manner and a fusion position of each virtual capsule element in the AR capsule scene image to be generated includes:
for each real object in the indoor environment, if it is determined that a first virtual space capsule element matching the real object exists, determining the fusion mode as replacing the real object with the matching first virtual space capsule element, and determining the fusion position of that first virtual space capsule element in the AR space capsule scene image to be generated according to the position information of the real object in the whole indoor environment.
In one embodiment, determining, for any real object, the first virtual space capsule element matching that real object comprises:
determining the type of first virtual space capsule element matching the real object according to the object type of the real object;
and acquiring a three-dimensional model corresponding to the determined first virtual space capsule element type, and adjusting the attribute features of the three-dimensional model according to the attribute features of the real object, to obtain the adjusted first virtual space capsule element matching the real object.
In one embodiment, the determining, according to the information of each real object in the indoor environment, a fusion manner and a fusion position of each virtual capsule element in the AR capsule scene image to be generated includes:
for each virtual space capsule element to be presented for which it is determined that no matching real object exists (a second virtual space capsule element), determining the fusion mode corresponding to that second virtual space capsule element as supplementary display in the indoor environment, and determining the fusion position of each second virtual space capsule element in the AR space capsule scene image to be generated according to the position information of each real object in the whole indoor environment.
In one embodiment, the attribute features include at least one of:
size, posture.
In one embodiment, the virtual space capsule element to be rendered is determined according to the following steps:
and selecting a virtual space capsule element set matched with the scene type from each stored virtual space capsule element set according to the scene type corresponding to the indoor environment.
In a second aspect, an embodiment of the present disclosure further provides a special effect generating device for a space capsule, where the device includes:
an acquisition module for acquiring a scene image including an indoor environment;
the first determination module is used for determining information of each real object in the indoor environment based on the acquired scene image;
a second determination module, configured to determine, according to the information of each real object in the indoor environment, the fusion mode and fusion position of each virtual space capsule element in the AR space capsule scene image to be generated, wherein the fusion modes comprise supplementary display in the indoor environment and replacement of real objects in the indoor environment;
and a control module, configured to control an AR device to display, based on the determined fusion mode and fusion position of each virtual space capsule element, the AR space capsule scene image in which each virtual space capsule element is fused into the indoor environment.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the steps of the space capsule special effects generation method according to the first aspect and any of its various embodiments.
In a fourth aspect, the disclosed embodiments also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the space capsule special effect generation method according to the first aspect and any of the various embodiments thereof.
With the space capsule special effect generation scheme described above, the information of each real object in the indoor environment is first determined based on the acquired scene image; the fusion mode and fusion position of each virtual space capsule element in the AR space capsule scene image to be generated are then determined according to that information; finally, the AR device is controlled to display, according to the determined fusion modes and fusion positions, the AR space capsule scene image in which each virtual space capsule element is fused into the indoor environment. In other words, the AR device displays an AR space capsule scene image containing each virtual space capsule element. The virtual space environment so constructed gives the user an immersive outer space experience at low cost. In addition, because the displayed AR space capsule scene image can be obtained through several fusion modes, the virtual space environment can be built around a wide variety of real scenes, and the scheme has good applicability.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings are incorporated into and constitute a part of this specification; they show embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; for those skilled in the art, other related drawings can be derived from them without inventive effort.
Fig. 1 is a flowchart illustrating a method for generating a special effect of a space capsule according to a first embodiment of the disclosure;
fig. 2(a) is a schematic application diagram of the space capsule special effect generation method provided in the first embodiment of the present disclosure;
fig. 2(b) is a schematic application diagram of the space capsule special effect generation method provided in the first embodiment of the present disclosure;
Fig. 3 is a schematic diagram illustrating a special effect generating device for a space capsule provided in a second embodiment of the disclosure;
fig. 4 shows a schematic diagram of an electronic device provided in a third embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of configurations. Thus, the following detailed description is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art from the embodiments of the disclosure without creative effort shall fall within its scope of protection.
Research shows that visiting the real outer space environment is currently very expensive, and potential threats deter people from travelling there, so the desire to experience the outer space environment first-hand is difficult to satisfy.
Based on this research, the present disclosure provides at least one space capsule special effect generation scheme: an AR device displays an AR space capsule scene image, and the virtual space environment thus constructed gives the user an immersive outer space experience at low cost; moreover, the virtual space environment can be built around a wide variety of real scenes, so the scheme has good applicability.
The defects of the existing solutions described above are results obtained by the inventor through practice and careful study; therefore, the discovery of the above problems, and the solutions to them proposed below, should be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, the space capsule special effect generation method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the method is generally an electronic device with a certain computing capability, for example: a terminal device, which may be User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device or a wearable device; or a server or other processing device. The wearable device may be an Augmented Reality (AR) device such as AR glasses or an AR helmet. In some possible implementations, the space capsule special effect generation method may be implemented by a processor calling computer-readable instructions stored in a memory.
The following describes the space capsule special effect generation method provided by the embodiments of the present disclosure, taking a server as the execution subject.
Example one
Referring to fig. 1, a flowchart of the space capsule special effect generation method provided in the embodiment of the present disclosure is shown. The method includes steps S101 to S104, wherein:
S101, obtaining a scene image comprising an indoor environment;
S102, determining information of each real object in the indoor environment based on the acquired scene image;
S103, determining the fusion mode and fusion position of each virtual space capsule element in an AR space capsule scene image to be generated according to the information of each real object in the indoor environment, wherein the fusion modes comprise supplementary display in the indoor environment and replacement of real objects in the indoor environment;
S104, controlling the AR device to display, based on the determined fusion mode and fusion position of each virtual space capsule element, the AR space capsule scene image in which each virtual space capsule element is fused into the indoor environment.
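For illustration, the flow of steps S101 to S104 can be sketched in Python as follows; every name in the sketch (detect_objects, extract_attributes, plan_fusion, render_ar_scene and the parameters) is a hypothetical helper introduced here for explanation, not part of the disclosure.

```python
# Illustrative sketch of steps S101-S104; all helper names are assumptions.

def generate_capsule_effect(scene_image, elements_to_show, element_library, ar_device):
    # S101/S102: acquire the scene image, detect the real objects in it
    # and extract their attribute features (e.g. size, posture)
    objects = detect_objects(scene_image)
    infos = [extract_attributes(scene_image, obj) for obj in objects]

    # S103: decide, per virtual space capsule element, whether it replaces
    # a matched real object or is supplemented into the indoor environment,
    # and compute its fusion position
    fusion_plan = plan_fusion(infos, elements_to_show, element_library)

    # S104: control the AR device to display the fused AR capsule scene
    ar_device.display(render_ar_scene(scene_image, fusion_plan))
```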
Here, to facilitate understanding of the space capsule special effect generation method provided by the embodiments of the present disclosure, its application scenario is first described. The method can be applied to any indoor environment, such as a home or an office, so that when a user wearing an AR device enters an indoor scene in which the method is deployed, AR space capsule scene images can be provided to the user. That is, the user can view the AR space capsule scene images through the AR device, and the constructed virtual space environment gives the user an immersive outer space experience at low cost.
The AR capsule scene image may be generated after each virtual capsule element is merged into an indoor environment. In the process of element fusion, the space capsule special effect generation method provided by the embodiment of the disclosure combines the fusion mode of virtual space capsule elements, and the displayed AR space capsule scene images can be obtained based on various fusion modes, so that various real scenes can be considered for constructing a virtual space environment, and the applicability is better.
The space capsule special effect generation method provided by the embodiments of the present disclosure obtains the fusion mode corresponding to each virtual space capsule element by performing image analysis on the scene image including the indoor environment, and also determines the corresponding fusion position.
In order to determine the fusion information corresponding to the virtual capsule element, information of each real object in the indoor environment may be determined based on the acquired scene image, and the virtual capsule element corresponding to each real object may be determined based on each real object.
In order to determine the information of the real objects, the embodiments of the present disclosure may perform a target detection process followed by a feature extraction process. In practice, target detection is first performed on the scene image based on a trained target detection network, and the image area of each real object in the scene image is extracted; feature extraction is then performed on the image area of each real object based on a trained feature extraction network, and the resulting attribute features serve as the information of that real object. The attribute features in the embodiments of the present disclosure may be the size of the real object, the posture of the real object, or any combination of the two.
The target detection network may be trained on pre-labeled scene image samples. A scene image sample may be labeled pixel by pixel, with the pixel points belonging to the same real object sharing the same label.
Therefore, the acquired scene image is input to the target detection network, and the image area of each real object in the scene image can be extracted.
In addition, the feature extraction network may be obtained by training a recurrent neural network, and is used to extract the attribute features of each real object in the indoor environment.
It should be noted that, besides using the above target detection network to extract the image areas, the space capsule special effect generation provided by the embodiments of the present disclosure may also be implemented with related image processing techniques. Similarly, the attribute features may be extracted with the feature extraction network or with related image processing techniques, which is not described again here.
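As a minimal sketch of this detection stage, a generic pretrained detector could stand in for the trained target detection network; torchvision's Faster R-CNN, the 0.5 confidence threshold and the bounding-box-based size feature below are all assumptions made for illustration.

```python
import torch
import torchvision

# Generic pretrained detector standing in for the trained target
# detection network (an assumption for illustration).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
detector.eval()

def real_object_info(image_tensor):
    """image_tensor: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        pred = detector([image_tensor])[0]
    objects = []
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if score < 0.5:  # assumed confidence threshold
            continue
        x1, y1, x2, y2 = box.tolist()
        objects.append({
            # a real system would map the label id to a category name
            # such as "lamp" before element matching
            "type": int(label),
            "region": (x1, y1, x2, y2),
            # stand-in for the feature extraction network: the size comes
            # from the bounding box; posture estimation is omitted here
            "size": (x2 - x1, y2 - y1),
        })
    return objects
```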
In the embodiments of the disclosure, when the fusion modes corresponding to the virtual space capsule elements differ, the determined fusion positions also differ. The fusion mode adopted may be supplementary display in the indoor environment, or replacement of a real object in the indoor environment, as explained below in two aspects.
In a first aspect: for each real object in the indoor environment, when it is determined that a first virtual space capsule element matching the real object exists, the determined fusion mode is to replace the real object with that first virtual space capsule element.
Here, the fusion position of the first virtual space capsule element matching the real object in the AR space capsule scene image to be generated may be determined according to the position information of the real object in the entire indoor environment.
In a specific application, based on the conversion relation between coordinate systems, the position information of a real object in the two-dimensional image coordinate system of the scene image can be converted into the fusion position of the matching first virtual space capsule element in the three-dimensional coordinate system of the AR space capsule scene image to be generated.
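A sketch of that conversion under an assumed pinhole camera model follows; the intrinsics matrix K and the depth value would in practice come from the AR device, and all numbers here are invented for illustration.

```python
import numpy as np

def pixel_to_camera_space(u, v, depth, K):
    """Back-project pixel (u, v) at the given depth using intrinsics K."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example: fuse a matched element at the centre of a detected object's
# bounding box (all numbers are made up for illustration).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
fusion_position = pixel_to_camera_space(320.0, 240.0, depth=2.5, K=K)
```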
In the embodiments of the present disclosure, in order to determine the first virtual space capsule element corresponding to a real object, the type of first virtual space capsule element matching the real object may first be determined according to the object type of the real object. After the three-dimensional model corresponding to the determined first virtual space capsule element type is acquired, the attribute features of the three-dimensional model may be adjusted according to the attribute features of the real object, yielding the adjusted first virtual space capsule element matching the real object.
For example, a light fixture in the indoor environment may correspond to a three-dimensional light fixture inside the virtual space capsule environment.
It should be noted that, for any real object, the first virtual space capsule element corresponding to that real object may also be determined directly by feature matching. Here, the matching degree between the attribute features of the real object and the attribute features of each pre-stored virtual space capsule element is determined first, and the virtual space capsule element with the highest matching degree is selected as the first virtual space capsule element corresponding to the real object.
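A minimal sketch of these two matching strategies follows; the TYPE_TO_ELEMENT table, the similarity measure and its 0.5 threshold are all assumptions introduced for illustration.

```python
# Hypothetical lookup table: real-object type -> capsule element name.
TYPE_TO_ELEMENT = {"lamp": "capsule_lamp", "desk": "capsule_console"}

def similarity(element, real_obj):
    # Toy matching degree in [0, 1] based on sizes alone; a real system
    # would compare richer attribute features such as posture as well.
    w1, h1 = element["size"]
    w2, h2 = real_obj["size"]
    return 1.0 - min(1.0, (abs(w1 - w2) + abs(h1 - h2)) / (w2 + h2))

def match_element(real_obj, element_library):
    # Strategy 1: determine the element type from the object type, then
    # adjust the 3D model's attribute features to the real object's.
    kind = TYPE_TO_ELEMENT.get(real_obj["type"])
    if kind in element_library:
        model = dict(element_library[kind])
        model["size"] = real_obj["size"]
        return model
    # Strategy 2: fall back to feature matching and pick the pre-stored
    # element whose matching degree with the real object is highest.
    best = max(element_library.values(),
               key=lambda e: similarity(e, real_obj),
               default=None)
    if best is not None and similarity(best, real_obj) >= 0.5:
        return dict(best)
    return None
```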
In a second aspect: when it is determined that a virtual space capsule element to be presented matches no real object (a second virtual space capsule element), the fusion mode corresponding to that second virtual space capsule element may be determined as supplementary display in the indoor environment.
Here, the fusion position of each second virtual space capsule element in the AR space capsule scene image to be generated may be determined according to the position information of each real object in the entire indoor environment.
In the embodiments of the present disclosure, when determining the virtual space capsule elements to be presented, a virtual space capsule element set matching the scene type may be selected from the stored virtual space capsule element sets according to the scene type corresponding to the indoor environment.
The scene type of the indoor environment may be a large exhibition hall, a general office environment, a medium exhibition hall, and so on; for each scene type, a corresponding virtual space capsule element set may be determined. For a general office environment, for example, the determined virtual space capsule element set may include virtual space capsule elements such as a lamp, an office desk and a projector.
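A sketch of this selection step follows; the scene-type keys and element names merely mirror the examples above, and the rest is assumption.

```python
# Hypothetical stored element sets, keyed by scene type.
ELEMENT_SETS = {
    "general_office": ["capsule_lamp", "capsule_desk", "capsule_projector"],
    "medium_exhibition_hall": ["capsule_lamp", "capsule_display_wall"],
    "large_exhibition_hall": ["capsule_lamp", "capsule_display_wall",
                              "capsule_observation_dome"],
}

def elements_to_present(scene_type):
    """Select the virtual capsule element set matching the scene type."""
    return ELEMENT_SETS.get(scene_type, [])
```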
To facilitate understanding of the implementation process of the space capsule special effect generation method, the following description is made with reference to the display effect diagrams of an AR device shown in figs. 2(a) and 2(b).
As shown in fig. 2(a), after the AR device acquires a scene image including the indoor environment, the fusion mode and fusion position of each virtual space capsule element matching a real object in the indoor environment can be determined according to the information of each real object. Where a first virtual space capsule element matching a first real object is determined to exist in a preset virtual space capsule element library, the determined fusion mode is replacement; where a second virtual space capsule element is determined to match no real object, the determined fusion mode is supplement, as shown in fig. 2(b).
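Putting the two aspects together, the decision illustrated by figs. 2(a) and 2(b) could be sketched as below; match_element is the hypothetical matcher sketched earlier, pick_free_position is a hypothetical placement helper, and each library entry is assumed to carry a "name" key.

```python
def plan_fusion(real_objects, element_names, element_library):
    """Decide replace vs. supplement for each virtual capsule element."""
    plan, matched = [], set()
    # First aspect: replace real objects for which a matching first
    # virtual space capsule element exists.
    for obj in real_objects:
        element = match_element(obj, element_library)
        if element is not None:
            matched.add(element["name"])
            plan.append(("replace", obj["region"], element))
    # Second aspect: supplement the remaining (second) elements into
    # the indoor environment at free positions.
    for name in element_names:
        if name not in matched:
            position = pick_free_position(real_objects)  # hypothetical helper
            plan.append(("supplement", position, element_library[name]))
    return plan
```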
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict execution order or any limitation on implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same inventive concept, the embodiments of the present disclosure further provide a space capsule special effect generation device corresponding to the space capsule special effect generation method. Since the principle by which the device solves the problem is similar to that of the method, the implementation of the device can refer to the implementation of the method and is not described again.
Example two
Referring to fig. 3, an architectural schematic diagram of the space capsule special effect generation device provided in the embodiment of the present disclosure is shown. The device includes: an acquisition module 301, a first determination module 302, a second determination module 303 and a control module 304, wherein:
an acquisition module 301, configured to acquire a scene image including an indoor environment;
a first determination module 302, configured to determine information of each real object in the indoor environment based on the acquired scene image;
a second determination module 303, configured to determine, according to the information of each real object in the indoor environment, the fusion mode and fusion position of each virtual space capsule element in the AR space capsule scene image to be generated, wherein the fusion modes comprise supplementary display in the indoor environment and replacement of real objects in the indoor environment;
and a control module 304, configured to control the AR device to display, based on the determined fusion mode and fusion position of each virtual space capsule element, the AR space capsule scene image in which each virtual space capsule element is fused into the indoor environment.
With the space capsule special effect generation device described above, the information of each real object in the indoor environment is first determined based on the acquired scene image; the fusion mode and fusion position of each virtual space capsule element in the AR space capsule scene image to be generated are then determined according to that information; finally, the AR device is controlled to display, according to the determined fusion modes and fusion positions, the AR space capsule scene image in which each virtual space capsule element is fused into the indoor environment. In other words, the AR device displays an AR space capsule scene image containing each virtual space capsule element. The virtual space environment constructed by the device gives the user an immersive outer space experience at low cost. In addition, because the displayed AR space capsule scene image can be obtained through several fusion modes, the virtual space environment can be built around a wide variety of real scenes, and the scheme has good applicability.
In one embodiment, the first determining module 302 is configured to determine information of each real object in the indoor environment according to the following steps:
carrying out target detection on the scene image based on the trained target detection network, and extracting image areas of all real objects in the scene image;
and performing feature extraction on the image area of each real object based on the trained feature extraction network to obtain the attribute features of the real objects, and taking the attribute features as the information of each real object.
In one embodiment, the second determining module 303 is configured to determine a fusion manner and a fusion position of each virtual capsule element in the AR capsule scene image to be generated according to the following steps:
for each real object in the indoor environment, if it is determined that a first virtual space capsule element matching the real object exists, determining the fusion mode as replacing the real object with the matching first virtual space capsule element, and determining the fusion position of that first virtual space capsule element in the AR space capsule scene image to be generated according to the position information of the real object in the whole indoor environment.
In one embodiment, the second determining module 303 is configured to determine the first virtual space capsule element matching any of the real objects according to the following steps:
determining the type of first virtual space capsule element matching the real object according to the object type of the real object;
and acquiring a three-dimensional model corresponding to the determined first virtual space capsule element type, and adjusting the attribute features of the three-dimensional model according to the attribute features of the real object, to obtain the adjusted first virtual space capsule element matching the real object.
In one embodiment, the second determining module 303 is configured to determine a fusion manner and a fusion position of each virtual capsule element in the AR capsule scene image to be generated according to the following steps:
for each virtual space capsule element to be presented for which it is determined that no matching real object exists (a second virtual space capsule element), determining the fusion mode corresponding to that second virtual space capsule element as supplementary display in the indoor environment, and determining the fusion position of each second virtual space capsule element in the AR space capsule scene image to be generated according to the position information of each real object in the whole indoor environment.
In one embodiment, the attribute features include at least one of:
size, posture.
In one embodiment, the second determining module 303 is configured to determine the virtual space capsule element to be presented according to the following steps:
and according to the scene type corresponding to the indoor environment, selecting a virtual space capsule element set matched with the scene type from the stored virtual space capsule element sets.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Example three
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 4, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, and the electronic device includes: a processor 401, a memory 402, and a bus 403. The memory 402 stores machine-readable instructions executable by the processor 401, the processor 401 and the memory 402 communicating via the bus 403 when the electronic device is operating, the machine-readable instructions when executed by the processor 401 performing the following:
acquiring a scene image including an indoor environment;
determining information of each real object in the indoor environment based on the acquired scene image;
determining, according to the information of each real object in the indoor environment, the fusion mode and the fusion position of each virtual space capsule element in the AR space capsule scene image to be generated, wherein the fusion modes comprise supplementary display in the indoor environment and replacement of real objects in the indoor environment;
and controlling the AR device to display, based on the determined fusion mode and fusion position of each virtual space capsule element, the AR space capsule scene image in which each virtual space capsule element is fused into the indoor environment.
In one embodiment, the instructions executed by the processor 401 for determining information of each real object in the indoor environment based on the acquired scene image include:
carrying out target detection on the scene image based on the trained target detection network, and extracting image areas of all real objects in the scene image;
and performing feature extraction on the image area of each real object based on the trained feature extraction network to obtain the attribute features of the real objects, and taking the attribute features as the information of each real object.
In an embodiment, the instructions executed by the processor 401 for determining a fusion mode and a fusion position of each virtual capsule element in the AR capsule scene image to be generated according to information of each real object in the indoor environment includes:
for each real object in the indoor environment, if it is determined that a first virtual space capsule element matching the real object exists, determining the fusion mode as replacing the real object with the matching first virtual space capsule element, and determining the fusion position of that first virtual space capsule element in the AR space capsule scene image to be generated according to the position information of the real object in the whole indoor environment.
In one embodiment, the instructions executed by the processor 401 for determining, for any real object, a first virtual space capsule element matching with the any real object includes:
determining a first virtual space capsule element type matched with any real object according to the object type corresponding to the real object;
and acquiring a three-dimensional model corresponding to the determined type of the first virtual space capsule element, and adjusting the attribute characteristics of the three-dimensional model according to the attribute characteristics of any real object to obtain the adjusted first virtual space capsule element matched with any real object.
In an embodiment, the instructions executed by the processor 401 for determining a fusion mode and a fusion position of each virtual capsule element in the AR capsule scene image to be generated according to information of each real object in the indoor environment includes:
for each virtual space capsule element to be presented for which it is determined that no matching real object exists (a second virtual space capsule element), determining the fusion mode corresponding to that second virtual space capsule element as supplementary display in the indoor environment, and determining the fusion position of each second virtual space capsule element in the AR space capsule scene image to be generated according to the position information of each real object in the whole indoor environment.
In one embodiment, the attribute features include at least one of:
size, posture.
In one embodiment, the processor 401 executes instructions that determine the virtual capsule elements to be rendered according to the following steps:
and according to the scene type corresponding to the indoor environment, selecting a virtual space capsule element set matched with the scene type from the stored virtual space capsule element sets.
The disclosed embodiment also provides a computer readable storage medium, which stores thereon a computer program, and when the computer program is executed by the processor 401, the computer program performs the steps of the space capsule special effect generation method in the above method embodiment. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the space capsule special effect generating method provided by the embodiment of the present disclosure includes a computer readable storage medium storing a program code, where instructions included in the program code may be used to execute the steps of the space capsule special effect generating method in the above method embodiment, which may specifically refer to the above method embodiment, and are not described herein again.
The embodiments of the present disclosure also provide a computer program, which when executed by a processor implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in this disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions in actual implementation, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may also be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can still modify or easily conceive of changes in the technical solutions described in the foregoing embodiments or equivalent substitutions of some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. A space capsule special effect generation method is characterized by comprising the following steps:
acquiring a scene image including an indoor environment;
determining information of each real object in the indoor environment based on the acquired scene image;
determining, according to the information of each real object in the indoor environment, the fusion mode and the fusion position of each virtual space capsule element in an AR space capsule scene image to be generated, wherein the fusion modes comprise supplementary display in the indoor environment and replacement of real objects in the indoor environment;
and controlling an AR device to display, based on the determined fusion mode and fusion position of each virtual space capsule element, the AR space capsule scene image in which each virtual space capsule element is fused into the indoor environment.
2. The method of claim 1, wherein the determining information about each real object in the indoor environment based on the acquired scene images comprises:
carrying out target detection on the scene image based on a trained target detection network, and extracting image areas of all real objects in the scene image;
and performing feature extraction on the image area of each real object based on the trained feature extraction network to obtain the attribute features of the real object, and taking the attribute features as the information of each real object.
3. The method according to claim 1 or 2, wherein the determining the fusion mode and the fusion position of each virtual capsule element in the AR capsule scene image to be generated according to the information of each real object in the indoor environment comprises:
for each real object in the indoor environment, if it is determined that a first virtual space capsule element matching the real object exists, determining the fusion mode as replacing the real object with the matching first virtual space capsule element, and determining the fusion position of that first virtual space capsule element in the AR space capsule scene image to be generated according to the position information of the real object in the whole indoor environment.
4. The method of claim 3, wherein determining, for any real object, a first virtual capsule element that matches the any real object comprises:
determining a first virtual space capsule element type matched with any real object according to the object type corresponding to the real object;
and acquiring a three-dimensional model corresponding to the determined type of the first virtual space capsule element, and adjusting the attribute characteristics of the three-dimensional model according to the attribute characteristics of any real object to obtain the adjusted first virtual space capsule element matched with any real object.
5. The method according to claim 1 or 2, wherein the determining the fusion mode and the fusion position of each virtual capsule element in the AR capsule scene image to be generated according to the information of each real object in the indoor environment comprises:
for each virtual space capsule element to be presented for which it is determined that no matching real object exists (a second virtual space capsule element), determining the fusion mode corresponding to that second virtual space capsule element as supplementary display in the indoor environment, and determining the fusion position of each second virtual space capsule element in the AR space capsule scene image to be generated according to the position information of each real object in the whole indoor environment.
6. The method of claim 2 or 5, wherein the attribute features comprise at least one of:
size, posture.
7. The method of claim 5, wherein the virtual space capsule element to be rendered is determined according to the following steps:
and selecting a virtual space capsule element set matched with the scene type from each stored virtual space capsule element set according to the scene type corresponding to the indoor environment.
8. A space capsule special effect generating device, the device comprising:
an acquisition module for acquiring a scene image including an indoor environment;
the first determination module is used for determining information of each real object in the indoor environment based on the acquired scene image;
the second determination module is used for determining, according to the information of each real object in the indoor environment, the fusion mode and the fusion position of each virtual space capsule element in the AR space capsule scene image to be generated, wherein the fusion modes comprise supplementary display in the indoor environment and replacement of real objects in the indoor environment;
and the control module is used for controlling an AR device to display, based on the determined fusion mode and fusion position of each virtual space capsule element, the AR space capsule scene image in which each virtual space capsule element is fused into the indoor environment.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the space capsule special effects generation method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the space capsule special effects generation method according to any one of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202010509232.5A | 2020-06-07 | 2020-06-07 | Space capsule special effect generation method and device, electronic equipment and storage medium |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN111652981A | 2020-09-11 |
Family

- Family ID: 72347356
- Family application: CN202010509232.5A (publication CN111652981A, pending)
- Country: CN
Patent Citations (5)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN107251100A | 2015-02-27 | 2017-10-13 | 微软技术许可有限责任公司 | Molding and anchoring physically constrained virtual environments to real-world environments |
| CN106383587A | 2016-10-26 | 2017-02-08 | 腾讯科技(深圳)有限公司 | Augmented reality scene generation method, device and equipment |
| CN110392251A | 2018-04-18 | 2019-10-29 | 广景视睿科技(深圳)有限公司 | Dynamic projection method and system based on virtual reality |
| CN108597030A | 2018-04-23 | 2018-09-28 | 新华网股份有限公司 | Shadow effect display method and device of augmented reality (AR), and electronic equipment |
| CN109727314A | 2018-12-20 | 2019-05-07 | 初速度(苏州)科技有限公司 | Fusion and display method of augmented reality scenes |
Cited By (2)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN116310918A | 2023-02-16 | 2023-06-23 | 东易日盛家居装饰集团股份有限公司 | Indoor key object identification and positioning method, device and equipment based on mixed reality |
| CN116310918B | 2023-02-16 | 2024-01-09 | 东易日盛家居装饰集团股份有限公司 | Indoor key object identification and positioning method, device and equipment based on mixed reality |
Legal Events

| Code | Title | Description |
| --- | --- | --- |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2020-09-11 |