WO2021228200A1 - Method for realizing interaction in a three-dimensional space scene, apparatus and device
- Publication number
- WO2021228200A1 (PCT/CN2021/093628)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- dimensional model
- user terminal
- pixel
- information
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/239—Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
Definitions
- the present disclosure relates to virtual reality panoramic technology and streaming media technology, and in particular to a method for realizing three-dimensional space scene interaction, a device for realizing three-dimensional space scene interaction, a storage medium, and electronic equipment.
- VR panoramic technology is an emerging rich media technology. Because VR panoramic technology can present users with 720-degree three-dimensional space scenes without blind spots and bring users an immersive visual experience, it is widely used in fields such as online shopping malls, travel services, and real estate services. How to enable VR panoramic technology to bring users a richer experience is a technical issue worthy of attention.
- three-dimensional models can give people a stronger visual perception: any view of the object can be presented to the user, and the correct projection relationship can be maintained between the views.
- while a user terminal is presenting a three-dimensional model, real-time voice interaction between user terminals can be supported on the same screen; that is, during presentation of the three-dimensional model, the voice of the opposite-end user can be transmitted to the user terminal in real time, and the voice acquired by the user terminal can likewise be transmitted to the opposite terminal in real time.
- a method for realizing interaction in a three-dimensional space scene, including: in response to detecting a user operation of setting footprint information in the three-dimensional space scene, determining a first pixel in the current view corresponding to the user's current perspective in the three-dimensional space scene; determining the three-dimensional model corresponding to the first pixel; determining the position of the user's footprint information in the three-dimensional model, where the footprint information is used for display when the three-dimensional space scene is browsed; and setting the user's footprint information at that position.
- an interaction method based on a three-dimensional model, including: at a first user terminal presenting a user interface: in response to detecting a user's target interaction operation on the user interface, sending an interaction request for the target interaction operation to the server that provides page data for the user interface, where the user interface is used to present a three-dimensional model, and the three-dimensional model establishes an association relationship in advance with the user account logged in on the second user terminal; receiving the streaming video obtained by the server from the second user terminal; and presenting the streaming video and the three-dimensional model on the user interface.
- an interaction method based on a three-dimensional model, including: at a second user terminal: in response to receiving an interaction request sent by a server, acquiring a streaming video, where the interaction request indicates that the first user terminal has detected the user's target interaction operation on the user interface presented by the first user terminal, the user interface is used to present a three-dimensional model, and the three-dimensional model establishes an association relationship in advance with the user account logged in on the second user terminal; and sending the streaming video to the server, where the server is used to send the streaming video to the first user terminal so that the first user terminal presents the streaming video and the three-dimensional model on the user interface.
- a device for realizing interaction in a three-dimensional space scene, including: a device for executing any one of the above methods.
- an interaction device based on a three-dimensional model, which is provided in a first user terminal, the device including: a device for executing any one of the above methods.
- an interaction device based on a three-dimensional model, which is provided in a second user terminal, the device including: a device for executing any one of the above methods.
- an interactive system based on a three-dimensional model, including: a first user terminal for presenting a user interface; a second user terminal; and a server in communication connection with both the first user terminal and the second user terminal.
- the first user terminal is configured to: in response to detecting the user's target interaction operation on the user interface, send an interaction request for the target interaction operation to the server, where the user interface is used to present the three-dimensional model, and the three-dimensional model establishes an association relationship in advance with the user account logged in on the second user terminal; the second user terminal is configured to: obtain a streaming video and send the streaming video to the server; the server is configured to: send the streaming video to the first user terminal; and the first user terminal is further configured to: present the streaming video and the three-dimensional model on the user interface.
- a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements any one of the above methods.
- an electronic device including: a processor; and a memory for storing processor-executable instructions, where the processor-executable instructions, when executed by the processor, implement any one of the above methods.
- a computer program product including a computer program which, when executed by a computer, causes the computer to implement any one of the above methods.
- FIG. 1 is a schematic diagram of an embodiment of an applicable scenario of the present disclosure.
- FIG. 2 is a flowchart of an embodiment of a method for realizing interaction in a three-dimensional space scene of the present disclosure.
- FIG. 3 is a flowchart of an embodiment of determining the three-dimensional model corresponding to a first pixel in the present disclosure.
- FIG. 4 is a flowchart of another embodiment of determining the three-dimensional model corresponding to a first pixel in the present disclosure.
- FIG. 5 is a flowchart of an embodiment of presenting footprint information to browsing users in the present disclosure.
- FIG. 6 is a schematic structural diagram of an embodiment of an apparatus for realizing interaction in a three-dimensional space scene of the present disclosure.
- FIG. 7 is a flowchart of an embodiment of the first interaction method based on a three-dimensional model of the present disclosure.
- FIGS. 8A-8C are schematic diagrams of application scenarios for the embodiment of FIG. 7.
- FIG. 9 is a flowchart of another embodiment of the first interaction method based on a three-dimensional model of the present disclosure.
- FIG. 10 is a flowchart of another embodiment of the first interaction method based on a three-dimensional model of the present disclosure.
- FIG. 11 is a flowchart of an embodiment of the second interaction method based on a three-dimensional model of the present disclosure.
- FIG. 12 is a flowchart of another embodiment of the second interaction method based on a three-dimensional model of the present disclosure.
- FIG. 13 is a schematic structural diagram of an embodiment of the first interaction device based on a three-dimensional model of the present disclosure.
- FIG. 14 is a schematic structural diagram of an embodiment of the second interaction device based on a three-dimensional model of the present disclosure.
- FIG. 15 is a schematic diagram of interaction of an embodiment of the interactive system based on a three-dimensional model of the present disclosure.
- FIG. 16 is a structural diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
- in the present disclosure, "plural" may refer to two or more, and "at least one" may refer to one, two, or more.
- the term "and/or" in the present disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B can mean: A exists alone, A and B exist at the same time, or B exists alone.
- the character "/" in the present disclosure generally indicates that the associated objects before and after are in an "or" relationship.
- the embodiments of the present disclosure can be applied to electronic devices such as terminal devices, computer systems, servers, etc., which can operate with many other general-purpose or special-purpose computing system environments or configurations.
- Examples of well-known terminal devices, computing systems, environments and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, or servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, small computer systems, large computer systems, and distributed cloud computing technology environments including any of the above systems.
- Electronic devices such as terminal devices, computer systems, and servers can be described in the general context of computer system executable instructions (such as program modules) executed by the computer system.
- program modules may include routines, programs, object programs, components, logic, data structures, etc., which perform specific tasks or implement specific abstract data types.
- the computer system/server can be implemented in a distributed cloud computing environment.
- tasks can be performed by remote processing devices linked through a communication network.
- program modules may be located on a storage medium of a local or remote computing system including a storage device.
- the inventor found that, in the process of experiencing a three-dimensional space scene by adjusting the current perspective, a user often generates feelings such as emotions and thoughts. If the user can set footprint information characterizing these feelings into the three-dimensional space scene, it not only helps improve the user's sense of participation, but the footprint information left by the user can also bring a richer VR panoramic experience to other users watching the three-dimensional space scene.
- VR panoramic technology can be used to set a three-dimensional space scene for a house to be rented or a house to be sold. Any user can access through the network and watch the three-dimensional space scene of the corresponding house anytime and anywhere.
- the present disclosure allows the user to leave his own footprint information for the house he is browsing, and can present the user's own footprint information together with the footprint information left by other users for the house.
- the footprint information 120 left by other users for the three-dimensional space scene of the two-bedroom, one-living-room house includes: "I like this group of sofas, it's great", "This decorative partition is good", "This sofa is good, high-end", "The combination and collocation are very thoughtful, praise", and "The design of the tea table is very unique", as well as the three-dimensional model 110 shown in the upper right corner of FIG. 1.
- the user who browses the three-dimensional space scene of the house is presented with the footprint information 120 left by other users for the three-dimensional space scene of the house, which helps the user understand other users' feelings about the house, thereby deepening the user's cognition of the house and improving the user's browsing experience of the house.
- the user can also express his own feelings about the house, that is, leave his own footprint information in the three-dimensional space scene.
- for example, the user can set footprint information such as "this pillar makes the house look more distinctive" at the position of the pillar shown in FIG. 1.
- the footprint information set by the user can be instantly displayed in the three-dimensional space scene shown in FIG. 1; that is, the user can see the footprint information left by himself while viewing the three-dimensional space scene of the house, which helps enhance the user's sense of participation.
- all other footprint information set by users for the house that does not belong to the view shown in FIG. 1 can be presented to the user in the form of a bullet screen 130, which helps raise the user's interest in browsing the three-dimensional space scene at other locations of the house.
- the technology for realizing interaction in three-dimensional space scenes provided by the present disclosure can also be applied to various other scenes. For example, when a user browses a three-dimensional space scene of a library, the user can set corresponding footprint information for a book, a chair, or a coffee in the library.
- the footprint information set by the user for the book may be the user's impression of the book or the number of pages currently read by the user.
- the scenarios where the technology for realizing the interaction of three-dimensional space scenes provided by the present disclosure can be applied will not be illustrated one by one.
- FIG. 2 is a flowchart of an embodiment of a method for realizing interaction of a three-dimensional space scene of the present disclosure.
- the method 200 of the embodiment shown in FIG. 2 includes steps 210 to 240. Each step is described separately below.
- step 210 in response to detecting the user operation of setting the footprint information in the three-dimensional space scene, determine the first pixel in the current view corresponding to the user's current perspective in the three-dimensional space scene.
- a three-dimensional space scene may refer to a space scene with a three-dimensional sense that is presented to the user by using a preset panoramic image and a three-dimensional model.
- the three-dimensional space scene may be a three-dimensional space scene set for a library, a three-dimensional space scene set for a house, a three-dimensional space scene set for a cafe, or a three-dimensional space scene set for a shopping mall.
- in the embodiment of the present disclosure, when the user triggers the function of setting footprint information in the three-dimensional space scene, it can be detected that the user needs to set footprint information in the three-dimensional space scene. For example, when the user clicks a button for setting footprint information or a corresponding option on a menu, the embodiment of the present disclosure can detect that the user needs to set footprint information in the three-dimensional space scene. For another example, the user can use a preset shortcut to trigger the function of setting footprint information in the three-dimensional space scene.
- the user's footprint information may be information that can indicate that the user has visited the three-dimensional space scene. The footprint information can be considered as the visit trace information of the user.
- the current perspective of the user in the three-dimensional space scene may refer to the position and angle at which the user currently views the three-dimensional space scene.
- the user's current perspective in the three-dimensional space scene usually changes with the user's operation. For example, the user can control his current perspective in the three-dimensional scene by performing operations such as dragging on the touch screen.
- the user's current perspective in the three-dimensional space scene determines the content/area of the panorama that the user can currently see, that is, the user's current perspective in the three-dimensional space scene determines the current view.
- the first pixel point is one pixel point in the current view.
- the first pixel point can be obtained according to a preset default rule.
- the first pixel may be a specific pixel in the current view, or it may be any pixel in the current view.
- step 220 the three-dimensional model corresponding to the first pixel is determined.
- a three-dimensional space scene is generally formed by a plurality of three-dimensional models.
- the three-dimensional space scene may also be formed by a three-dimensional model.
- a pixel point in the current view seen by the user may be a representation of a point in the three-dimensional model.
- a pixel in the current view that the user sees may not be a representation of any point in any three-dimensional model. That is to say, under normal circumstances, any point in any three-dimensional model in the three-dimensional space scene can be presented in the panorama, while the points in the panoramic image are not necessarily all points in the three-dimensional models of the scene.
- the present disclosure does not exclude the possibility that some points in the three-dimensional model in the three-dimensional space scene are not presented in the panoramic image.
- if the first pixel is used to present a point in a three-dimensional model, the three-dimensional model where that point is located is the three-dimensional model corresponding to the first pixel.
- if the first pixel is not used to present a point in any three-dimensional model, the three-dimensional model corresponding to the first pixel may be the three-dimensional model corresponding to another pixel in the current view that is close to the first pixel and is used to present a point in a three-dimensional model. That is to say, when the first pixel presents a point that is not in any three-dimensional model and the first pixel is not updated, the three-dimensional model corresponding to another pixel in the current view can be used as the three-dimensional model corresponding to the first pixel.
- step 230 the position of the user's footprint information in the three-dimensional model is determined, where the footprint information is used to display when the three-dimensional space scene is browsed.
- the positions of the first pixel or the other pixels in the three-dimensional model can be obtained. This location is the location of the user's footprint information.
- all three-dimensional models in a three-dimensional space scene may be respectively provided with their own three-dimensional coordinate systems, or may have the same three-dimensional coordinate system.
- the position of the user's footprint information in the three-dimensional model can be represented by (x, y, z); that is, the user's footprint information can have depth.
- step 240 the user's footprint information is set at the location.
- setting the user's footprint information at the location may include: setting a three-dimensional model identifier and three-dimensional coordinates for the user's footprint information, and storing the correspondence among the three-dimensional model identifier, the three-dimensional coordinates, and the user's footprint information.
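As a minimal sketch of this setting step, the record below bundles the three-dimensional model identifier, the three-dimensional coordinates, and the footprint content; the field names and the in-memory array are illustrative assumptions rather than a layout fixed by the disclosure.

```typescript
// Illustrative footprint record and store, assuming the fields named in the
// paragraph above; a real system might persist this server-side instead.
interface Footprint {
  modelId: string;                               // identifier of the 3D model
  position: { x: number; y: number; z: number }; // coordinates in that model
  content: string;                               // text, or a URI for picture/audio/video/3D model
  userId: string;
}

const footprintStore: Footprint[] = [];

function setFootprint(fp: Footprint): void {
  // Store the correspondence of model ID, 3D coordinates, and content.
  footprintStore.push(fp);
}
```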
- the user's footprint information may be used to display to browsing users (such as all browsing users or some browsing users) of the three-dimensional space scene.
- the browsing user of the three-dimensional space scene may include the user who sets the footprint information.
- the three-dimensional model corresponding to the first pixel and the position of the footprint information in the three-dimensional model are obtained, so that the footprint information set by the user can be associated with the corresponding position in the corresponding three-dimensional model.
- the footprint information includes: at least one of text, picture, audio, video, and a three-dimensional model.
- the text can be considered as a message in the form of characters (such as text, letters, numbers, or symbols, etc.).
- a picture can be considered as a message in the form of an image (such as a photo or emoticon, etc.).
- Audio can be thought of as a voice message (also called memo, etc.).
- Video can be thought of as a message in the form of moving images.
- the three-dimensional model can be considered as a three-dimensional message.
- the user's footprint information may be referred to as the user's message.
- a piece of footprint information set by the user may include one or more of text, picture, audio, video, and three-dimensional model at the same time.
- by enabling the user's footprint information to include at least one of text, pictures, audio, video, and a three-dimensional model, the expression forms of the user's footprint information are enriched, thereby enriching the ways in which the user can interact with the three-dimensional space scene.
- obtaining the first pixel in the current view corresponding to the user's current perspective in the three-dimensional space scene may be: obtaining the central pixel of the current view corresponding to the user's current perspective in the three-dimensional space scene, and using the central pixel as the first pixel.
- for example, the user may trigger the function of setting footprint information in the three-dimensional space scene by clicking a button or an option on a menu.
- the center pixel can be considered as the default pixel set for the user's footprint information, and the user can change the default pixel by dragging and other methods.
- the central pixel can be considered as a pixel in the central area of the current view.
- the central area of the current view may include one pixel or multiple pixels.
- obtaining the first pixel in the current view corresponding to the current perspective of the user in the three-dimensional space scene may also be: in response to the user's operation of setting the target position of the footprint information in the current view corresponding to the user's current perspective in the three-dimensional space scene, obtaining the pixel in the current view corresponding to the target position of the footprint information, and regarding that pixel as the first pixel. That is, when the user performs the operation of setting the target position of the footprint information, the pixel where the target position formed by the operation is located in the current view may be used as the first pixel.
- the operation of setting the target position of the footprint information may be an operation used to determine the starting target position of the footprint information, an operation used to determine the ending target position of the footprint information, or an operation used to determine the center target position of the footprint information.
- the operation of setting the target location of the footprint information may specifically be a click operation or a scroll operation or drag operation based on a tool such as a mouse or a keyboard, and may also be a click operation or a drag operation based on a touch screen.
- the present disclosure does not limit the specific operation of setting the target position of the footprint information.
- by determining the first pixel according to the user's operation of setting the target position of the footprint information, the footprint information set by the user can be located at the position the user desires, which improves the flexibility of setting the footprint information and makes the location of the footprint information more appropriate.
- the user triggers the function of setting footprint information in the 3D space scene by clicking a button or an option on the menu.
- the user can use the left mouse button to click, the keyboard up, down, left, and right keys to move the cursor or click the corresponding position on the touch screen to set the position of the desired footprint information in the current view.
- the pixel at this position can be used as the first pixel.
- for another example, after the user triggers the function of setting footprint information in the 3D space scene by clicking a button or an option on the menu, the pixel at the position selected by the user is regarded as the first pixel.
- an implementation of determining the three-dimensional model corresponding to the first pixel (step 220) may be as shown in FIG. 3. As shown in FIG. 3, step 220 further includes steps 310 to 340.
- step 310 the central pixel of the current view is determined as the first pixel.
- the central pixel may be considered as the default pixel set for the user's footprint information.
- if the current view is an image of (2n+1) × (2m+1) pixels (where n and m are both integers greater than 1), the pixel (n+1, m+1) in the current view can be used as the central pixel.
- if the current view is an image of 2n × 2m pixels, the pixels (n, m), (n+1, m), (n, m+1), and (n+1, m+1) in the current view can be used as the central area of the current view, and any pixel in the central area can be used as the central pixel.
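A minimal sketch of the central-pixel rule above, using 0-based pixel indices (the text's (n+1, m+1) is 1-based):

```typescript
// Center of the current view: unique for odd dimensions; for even dimensions
// this picks one of the four central-area pixels.
function centralPixel(width: number, height: number): { x: number; y: number } {
  return { x: Math.floor(width / 2), y: Math.floor(height / 2) };
}
```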
- step 320 it is determined whether a three-dimensional model is set for the first pixel. If a three-dimensional model is set for the first pixel point, go to step 330. If the three-dimensional model is not set for the first pixel point, go to step 340.
- step 330 in response to the determination that the three-dimensional model is set for the first pixel, the three-dimensional model set for the first pixel is used as the three-dimensional model corresponding to the first pixel.
- step 340 in response to the determination that the three-dimensional model is not set for the first pixel, the three-dimensional model set for other pixels in the current view is used as the three-dimensional model corresponding to the first pixel.
- the other pixels in the current view are pixels in the current view for which a three-dimensional model is set.
- the pixels where the three-dimensional model is set can be found according to preset rules.
- the other pixels found may be the pixels closest to the first pixel in a certain direction (such as the left direction, the right direction, the upper direction, or the lower direction).
- the first pixel can be used as a starting point, and the pixels in the current view corresponding to the current perspective in the three-dimensional space scene can be checked according to a preset inspection rule; if a pixel for which a three-dimensional model is set is found, the three-dimensional model corresponding to the first pixel is obtained and the inspection process is stopped. For example, starting from the first pixel, the pixels in the current view can be checked one by one to the left, determining whether a three-dimensional model is set for the currently checked pixel; once such a pixel is found, the inspection process is stopped, and the three-dimensional model obtained by the current inspection is used as the three-dimensional model corresponding to the first pixel.
- the first pixel point may be updated by using the detected pixel point provided with the three-dimensional model.
- the first pixel may not be updated.
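A minimal sketch of this inspection rule, assuming a hypothetical per-pixel lookup `modelIdAt` (the disclosure does not fix a data structure) and a leftward check order:

```typescript
// Leftward inspection starting at the first pixel: stop at the first pixel
// for which a 3D model is set and return that model's ID.
function findModelLeftward(
  modelIdAt: (x: number, y: number) => string | null,
  startX: number,
  y: number,
): { modelId: string; x: number } | null {
  for (let x = startX; x >= 0; x--) {
    const id = modelIdAt(x, y);
    if (id !== null) return { modelId: id, x }; // optionally update the first pixel to (x, y)
  }
  return null; // no pixel with a 3D model on this row
}
```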
- step 220 may be as shown in FIG. 4.
- step 220 may include step 410 to step 450.
- step 410 in response to the user's operation of setting the target position of the footprint information in the current view, a pixel point in the current view corresponding to the target position of the footprint information is determined as the first pixel point.
- the user may be allowed to set the specific location of the footprint information (that is, the target location of the footprint information) in the current view.
- the user can set the target position of the footprint information in the current view by tapping, sliding, dragging and other operations on the touch screen.
- for example, when the footprint information includes text, the target position of the footprint information may be the upper left vertex, the lower left vertex, the upper right vertex, or the lower right vertex of the text box.
- for another example, when the footprint information includes a picture, the target position of the footprint information may be the upper left vertex, the lower left vertex, the upper right vertex, or the lower right vertex of the picture.
- the target location of the footprint information may be a pixel point in the current view, and the pixel point is the first pixel point.
- step 420 it is determined whether a three-dimensional model is set for the first pixel. If a three-dimensional model is set for the first pixel point, go to step 430. If a three-dimensional model is not set for the first pixel point, go to step 440.
- step 430 in response to the determination that the three-dimensional model is set for the first pixel, the three-dimensional model set for the first pixel is used as the three-dimensional model corresponding to the first pixel.
- step 440 in response to the determination that the three-dimensional model is not set for the first pixel point, prompt information for updating the target position of the footprint information is output.
- the prompt information is used to prompt the user to update the target location of the footprint information currently set. That is, the prompt information is used to prompt the user that the current target location of the footprint information cannot be set to the footprint information, and the user should reset the target location of the footprint information.
- the prompt information can be output in the form of text, audio, or graphics. After outputting the prompt information, wait for the user's subsequent operations. If the user triggers the function of canceling the setting of the footprint information at this time, the process shown in Figure 4 ends.
- step 450 in response to the determination that the pixel in the current view corresponding to the target position of the updated footprint information is set with a three-dimensional model, the pixel set with the three-dimensional model is taken as the first pixel. Then, the flow returns to step 420.
- the target position of the footprint information obtained again may also correspond to a pixel in the current view, and this pixel is the first pixel; that is, the previously obtained first pixel is updated according to the newly obtained target position of the footprint information.
- when the first pixel is provided with a three-dimensional model, since the first pixel in the current view has a mapping relationship with a point in the three-dimensional model, the point in the three-dimensional model corresponding to the first pixel can be obtained based on that mapping relationship; the position of that point is the position of the first pixel in the three-dimensional model.
- the position of the first pixel in the three-dimensional model can be directly used as the position of the user's footprint information in the three-dimensional model, which is beneficial to quickly and accurately obtaining the position of the user's footprint information in the three-dimensional model.
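As a hedged sketch of this mapping step, the helper below assumes the rendering layer exposes a per-pixel raycast result carrying the hit model and the hit point; the disclosure leaves the concrete mapping mechanism open.

```typescript
// Hypothetical mapping from a view pixel to a 3D-model position, assuming
// the rendering layer exposes a raycast result with the hit model and point.
interface RaycastHit {
  modelId: string;
  point: { x: number; y: number; z: number }; // position in the model's coordinate system
}

function footprintPosition(
  raycastFromPixel: (x: number, y: number) => RaycastHit | null,
  px: number,
  py: number,
): RaycastHit | null {
  // The hit point doubles as the position of the user's footprint information.
  return raycastFromPixel(px, py);
}
```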
- in the process of a browsing user viewing the three-dimensional space scene, the browsing user may be presented with the footprint information left in the three-dimensional space scene by at least one user.
- An example is shown in Figure 5.
- step 510 for any browsing user browsing the three-dimensional space scene, the footprint area corresponding to the current perspective of the browsing user in the three-dimensional space scene is determined.
- the browsing user includes a user who sets his footprint information in the three-dimensional space scene.
- the footprint area can be considered as an area set for the footprint information that needs to be displayed.
- the footprint area can be a footprint area based on the current view, or a footprint area based on a three-dimensional model.
- the size of the footprint area can be preset.
- the shape of the footprint area can be rectangle, circle, triangle, etc.
- an implementation of determining the footprint area may be: first, obtain the central pixel of the current view corresponding to the current viewing angle of the browsing user in the three-dimensional space scene; then, with the central pixel as the center of a circle and a predetermined length as the radius (such as 1.5 meters in the three-dimensional space scene, converted to a length in the current view), determine the footprint area in the current view. Since at least some of the pixels in the footprint area of the current view have a mapping relationship with points in the three-dimensional model, the footprint information that currently needs to be displayed can be easily obtained by using the footprint area in the current view. In addition, the footprint area in the current view can be regarded as a circle; that is, the footprint area in the current view does not have depth information.
- another implementation of determining the footprint area may be: first, obtain the central pixel of the current view corresponding to the current perspective of the browsing user in the three-dimensional space scene, and determine whether a three-dimensional model is set for the central pixel; if so, determine the position of the central pixel in the three-dimensional model, and then, with that position as the center and a predetermined length (such as 1.5 meters in the three-dimensional space scene) as the radius, determine the footprint area in the three-dimensional model.
- the footprint area may be completely in one 3D model, or it may span multiple 3D models.
- the footprint area in the three-dimensional model can be considered as a cylinder, that is, the footprint area in the three-dimensional model has depth information.
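A minimal sketch of selecting the footprint information inside a model-space footprint area, reusing the Footprint record from the storage sketch above; for brevity the area is treated as a sphere of the given radius, a simplification of the cylinder described here.

```typescript
// Footprints within the footprint area: within the given radius of the
// center point in model space (a sphere here; the text describes a cylinder,
// i.e. an area with depth).
function footprintsInArea(
  all: Footprint[],
  center: { x: number; y: number; z: number },
  radius: number, // e.g. 1.5 meters in the 3D space scene
): Footprint[] {
  return all.filter((fp) => {
    const dx = fp.position.x - center.x;
    const dy = fp.position.y - center.y;
    const dz = fp.position.z - center.z;
    return Math.sqrt(dx * dx + dy * dy + dz * dz) <= radius;
  });
}
```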
- step 520 the footprint information belonging to the footprint area in the three-dimensional model is determined.
- the embodiment of the present disclosure can check whether each pixel point in the footprint area has a mapping relationship with a point in the three-dimensional model. If there is a mapping relationship, then it is determined whether the points in the three-dimensional model that have a mapping relationship with the pixel points are provided with footprint information. If the footprint information is set, the footprint information can be regarded as the footprint information belonging to the footprint area.
- the embodiments of the present disclosure can check whether each point in the footprint area is provided with footprint information. If the footprint information is set, the footprint information can be regarded as the footprint information belonging to the footprint area.
- step 530 in the current view corresponding to the current perspective of the browsing user in the three-dimensional space scene, the footprint information belonging to the footprint area is displayed.
- the position of each piece of footprint information belonging to the footprint area in the current view can be determined according to the position of that footprint information in the three-dimensional model, so that each piece of footprint information can be displayed at its position in the current view.
- in the process of displaying footprint information, overlapping display of different footprint information in the current view can be avoided.
- the obtained multiple footprint information may have different positions, or may have the same position (that is, the position of the footprint information conflicts).
- each footprint information may be displayed in the current view directly according to the image positions of the multiple footprint information in the current view.
- the displayed footprint information can be allowed to partially overlap, and the location control can also be used to make the footprint information not overlap each other.
- different image positions may be assigned to different footprint information in the current view, and the footprint information may be displayed in the current view according to the assigned image positions; assigning different image positions to footprint information that shares the same position helps avoid overlapping display of different footprint information in the current view.
- all the footprint information belonging to the footprint area can be displayed, or part of the footprint information belonging to the footprint area can be displayed.
- part of the footprint information can be selected from it according to a predetermined rule, and the selected part of the footprint information can be displayed in the current view.
- a predetermined number of footprint information can be randomly selected from all the footprint information belonging to the footprint area, and part of the randomly selected footprint information can be displayed in the current view.
- the form of a bullet screen may be used to display, for the browsing user, the footprint information outside the current view. For example, all the footprint information in the three-dimensional model that does not belong to the current view can first be determined, and then all or part of that footprint information can be displayed, in the form of a bullet screen, in the current view corresponding to the current perspective of the browsing user in the three-dimensional space scene.
- the form of a bullet screen may also be used to display, for the browsing user, the footprint information outside the footprint area. For example, all the footprint information in the three-dimensional model that does not belong to the footprint area can first be determined, and then all or part of that footprint information can be displayed, in the form of a bullet screen, in the current view corresponding to the current perspective of the browsing user in the three-dimensional space scene.
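Building on the two sketches above, the out-of-area footprint information that a bullet screen would carry can be selected as the complement of the footprint area; this is an illustrative reading, not a mandated implementation.

```typescript
// Out-of-area footprints shown as a bullet screen ("barrage"): everything in
// the store that is not inside the current footprint area. Reuses Footprint
// and footprintsInArea from the sketches above.
function barrageFootprints(
  all: Footprint[],
  center: { x: number; y: number; z: number },
  radius: number,
): Footprint[] {
  const inArea = new Set(footprintsInArea(all, center, radius));
  return all.filter((fp) => !inArea.has(fp));
}
```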
- FIG. 6 is a schematic structural diagram of an embodiment of an apparatus for realizing interaction in a three-dimensional space scene of the present disclosure.
- the device of this embodiment can be used to implement the foregoing method embodiments of the present disclosure.
- the device of this embodiment includes: a pixel point acquiring module 600, a three-dimensional model determining module 601, a position determining module 602, and a footprint information setting module 603.
- the device may further include: a footprint area determination module 604, a footprint information determination module 605, a footprint information display module 606, and a bullet screen display module 607.
- the pixel obtaining module 600 is configured to determine the first pixel in the current view corresponding to the current perspective of the user in the three-dimensional space scene in response to detecting the user operation of setting the footprint information in the three-dimensional space scene.
- the footprint information may include: at least one of text, picture, audio, video, and a three-dimensional model.
- the pixel point acquiring module 600 may include: a first sub-module 6001.
- the first sub-module 6001 is used to determine the center pixel of the current view as the first pixel.
- the pixel point obtaining module 600 may include: a fifth sub-module 6002.
- the fifth sub-module 6002 is configured to determine the pixel points in the current view corresponding to the target position of the footprint information in response to the user's operation of setting the target position of the footprint information in the current view corresponding to the current perspective in the three-dimensional space scene.
- the fifth sub-module 6002 can use the pixel as the first pixel.
- the three-dimensional model determining module 601 is used to determine the three-dimensional model corresponding to the first pixel obtained by the pixel obtaining module 600.
- the three-dimensional model determining module 601 may include: a second sub-module 6011, a third sub-module 6012, and a fourth sub-module 6013.
- the second sub-module 6011 is used to determine whether a three-dimensional model is set for the first pixel.
- the third sub-module 6012 is configured to, if the determination result of the second sub-module 6011 is that a three-dimensional model is set for the first pixel, use the three-dimensional model set for the first pixel as the three-dimensional model corresponding to the first pixel.
- the fourth sub-module 6013 is configured to, if the judgment result of the second sub-module 6011 is that no three-dimensional model is set for the first pixel, use the three-dimensional model set for other pixels in the current view as the three-dimensional model corresponding to the first pixel. For example, in that case the fourth sub-module 6013 can take the first pixel as a starting point and, according to the preset inspection rules, check other pixels in the current view corresponding to the current angle of view in the three-dimensional space scene; if a pixel with a three-dimensional model is detected, the first pixel is updated to that pixel, the three-dimensional model corresponding to the first pixel is obtained, and the inspection is stopped.
- alternatively, the three-dimensional model determining module 601 may include: a sixth sub-module 6014, a seventh sub-module 6015, and an eighth sub-module 6016.
- the sixth sub-module 6014 is used to determine whether a three-dimensional model is set for the first pixel. If the determination result of the sixth sub-module 6014 is that a three-dimensional model is set for the first pixel, the seventh sub-module 6015 uses the three-dimensional model set for the first pixel as the three-dimensional model corresponding to the first pixel.
- if the judgment result of the sixth sub-module 6014 is that no three-dimensional model is set for the first pixel, the eighth sub-module 6016 may output prompt information for updating the target position of the footprint information; when the sixth sub-module 6014 determines that the pixel in the current view corresponding to the updated target position of the footprint information is set with a three-dimensional model, that pixel is used as the first pixel.
- the eighth sub-module 6016 thereby obtains the three-dimensional model corresponding to the first pixel.
- the position determining module 602 is used to determine the position of the user's footprint information in the three-dimensional model determined by the three-dimensional model determining module 601. For example, the position determining module 602 may obtain the position of the first pixel in the three-dimensional model, and the position determining module 602 may use the position of the first pixel in the three-dimensional model as the position of the user's footprint information in the three-dimensional model.
- the footprint information setting module 603 is used to set the user's footprint information at the location determined by the location determining module 602.
- the user's footprint information set by the footprint information setting module 603 is displayed to users who browse the three-dimensional space scene.
- the footprint area determination module 604 is used to determine, for any browsing user who browses the three-dimensional space scene, the footprint area corresponding to the current perspective of the browsing user in the three-dimensional space scene. For example, the footprint area determination module 604 may first determine the central pixel of the current view corresponding to the current perspective of the browsing user in the three-dimensional space scene, and then take the central pixel as the center of a circle and a predetermined length as the radius to determine the footprint area in the current view.
- the footprint information determining module 605 is used to determine the footprint information belonging to the footprint area determined by the footprint area determining module 604 in the three-dimensional model.
- the footprint information display module 606 is configured to display, in the current view corresponding to the current perspective of the browsing user in the three-dimensional space scene, the footprint information that the footprint information determining module 605 determined to belong to the footprint area.
- for example, the footprint information display module 606 may display the multiple pieces of footprint information in the current view according to their respective image positions in the current view.
- for another example, the footprint information display module 606 may assign different image positions to different footprint information in the current view and display the footprint information in the current view according to the assigned image positions.
- the barrage display module 607 is used to determine at least one piece of footprint information in the three-dimensional model that does not belong to the footprint area/current view.
- the barrage display module 607 displays the at least one footprint information in the current view corresponding to the current perspective of the browsing user in the three-dimensional space scene in the form of a barrage.
- FIG. 7 shows a process 700 of an embodiment of the first three-dimensional model-based interaction method according to the present disclosure.
- the three-dimensional model-based interaction method is applied to a first user terminal, and the first user terminal is presented with a user interface, and the three-dimensional model-based interaction method includes:
- Step 710 In response to detecting the user's target interaction operation on the user interface, send an interaction request for the target interaction operation to the server that provides page data for the user interface, where the user interface is used to present the three-dimensional model, the three-dimensional model and the second user The user account logged in by the terminal establishes an association relationship.
- the user can use the first user terminal to interact with the server through the network.
- the first user terminal may be various electronic devices, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and so on.
- the first user terminal may be installed with various client applications, such as real estate transaction software.
- the aforementioned user interface may be a page in an application installed by the first user terminal.
- the user can interact with the server through the user interface, thereby realizing interaction with other user terminals (for example, the second user terminal).
- the first user terminal may send an interaction request for the target interaction operation to a server that provides page data for the user interface.
- the aforementioned user interface is used to present a three-dimensional model.
- the three-dimensional model establishes an association relationship with the user account logged in by the second user terminal in advance.
- the aforementioned target interaction operation may be various operations for instructing the first user terminal to request interaction (information interaction) with the second user terminal.
- the target interaction operation may indicate video communication with the second user terminal.
- the foregoing interaction request may be used to indicate a user request of the first user terminal to interact with the second user terminal.
- the foregoing interaction request may be used to instruct the user of the first user terminal to request video communication with the second user terminal.
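As an illustrative (not prescribed) shape for such an interaction request, the sketch below carries the operation type and the identifiers the server would need to resolve the second user terminal; all field names and the endpoint path are assumptions.

```typescript
// Hypothetical interaction-request message from the first user terminal to
// the server that provides the page data; field names are illustrative.
interface InteractionRequest {
  type: "video_communication"; // the target interaction operation
  modelId: string;             // 3D model presented on the user interface
  fromUserId: string;          // user of the first user terminal
  // The server resolves the second user terminal from the user account
  // associated in advance with the 3D model.
}

async function sendInteractionRequest(serverUrl: string, req: InteractionRequest): Promise<void> {
  await fetch(`${serverUrl}/interaction`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
}
```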
- the user interface of the first user terminal may present the above-mentioned three-dimensional model, or may not present the three-dimensional model.
- each three-dimensional model can be associated with a user account in advance. Therefore, for a specific three-dimensional model, the associated user account can be determined, the user terminal logged in to that account can then be determined, and that terminal (i.e., the second user terminal) is the one used to interact with the first user terminal.
- the above-mentioned three-dimensional model may be a three-dimensional model of any object.
- the three-dimensional model may be a three-dimensional model inside a cell, or a three-dimensional model of a house interior.
- Step 720 Receive the streaming video obtained by the server from the second user terminal.
- the above-mentioned first user terminal may receive the streaming video obtained by the server from the second user terminal.
- the aforementioned interaction confirmation information may be used to instruct the user of the second user terminal to confirm (agree) to perform the interaction indicated by the aforementioned interaction request with the first user terminal.
- the foregoing interactive confirmation information may be used to instruct the user of the second user terminal to confirm (agree) to conduct video communication with the first user terminal.
- the aforementioned streaming video may include images and/or voice.
- the image acquisition device and/or the voice acquisition device of the second user terminal can be used to acquire the aforementioned streaming video.
- the server may use streaming media technology to continuously send the images and/or voice (ie streaming media video) collected by the second user terminal to the first user terminal.
- streaming media technology refers to technology in which media is continuously played in real time over the network using streaming transmission.
- the second user terminal may send the continuous image and sound information collected by it to the server after compression processing.
- the server transmits each compressed package to the first user terminal sequentially or in real time, so that users who use the first user terminal can watch and listen while downloading.
- the server may send the streaming video collected by the second user terminal to the first user terminal directly, or may perform image processing (such as beautification), voice processing (such as denoising), transcoding, recording, pornography detection, and other operations on the streaming video collected by the second user terminal before sending the processed streaming video to the first user terminal.
- the first user terminal may perform step 720 again.
- in the case where the second user terminal sends the interaction confirmation information in response to the interaction request, the first user terminal can present the streaming video through the subsequent steps; if the second user terminal does not send the interaction confirmation information, the first user terminal does not present the streaming video. Therefore, the streaming video and the three-dimensional model are presented on the user interface of the first user terminal only after obtaining the permission of the user of the second user terminal (for example, after the second user terminal connects to the video call initiated by the first user terminal). This helps improve privacy protection for the user of the second user terminal, and gives that user preparation time before the streaming video is presented to the user of the first user terminal.
- the first user terminal may also directly execute the foregoing step 720 (without the interaction confirmation information sent by the second user terminal in response to the interaction request).
- the user of the second user terminal may be in a state of shooting a streaming video (for example, a live broadcast) for users of other user terminals.
- the first user terminal can receive the streaming video obtained by the server from the second user terminal at any time, thereby improving the real-time performance of the streaming video presentation.
- the first user terminal may adopt the following steps to receive the streaming video obtained by the server from the second user terminal:
- the current network speed value of the first user terminal is sent to the server.
- the streaming media video obtained and sent by the server from the second user terminal is received, and the streaming media video has a resolution matching the current network speed value.
- the resolution can be positively correlated with the network speed value.
- by receiving a streaming video whose resolution matches the current network speed value, the resolution of the streaming video received by the first user terminal can be reduced when the network is poor, thereby improving the real-time performance of streaming video transmission.
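A schematic resolution ladder matching resolution to the reported network speed; the tiers and thresholds below are assumptions for illustration, since the disclosure only requires that resolution be positively correlated with the network speed value.

```typescript
// Illustrative resolution ladder keyed to the reported network speed; the
// thresholds and tiers are assumptions, not values from the disclosure.
function resolutionForNetworkSpeed(mbps: number): "1080p" | "720p" | "480p" | "240p" {
  if (mbps >= 8) return "1080p";
  if (mbps >= 4) return "720p";
  if (mbps >= 1.5) return "480p";
  return "240p"; // poorest networks get the lowest resolution
}
```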
- Step 730 Present the streaming video and the three-dimensional model on the user interface.
- the first user terminal may present the streaming video and the three-dimensional model on the same screen on the user interface.
- the above-mentioned user interface of the first user terminal may be divided into two parts, and the above-mentioned two parts may respectively present a streaming video and a three-dimensional model.
- the three-dimensional model can also be used as the background of the aforementioned user interface, and the streaming video is presented in a part of the page area of the user interface.
- FIGS. 8A-8C are schematic diagrams of application scenarios for the embodiment of FIG. 7.
- the first user terminal may send an interaction request for the target interaction operation 810 to the server that provides page data for the user interface.
- the user interface presents a three-dimensional model of the house of XX home.
- the three-dimensional model has a pre-established association relationship with the user account logged in by the second user terminal.
- the first user terminal presents a streaming video 830 and a three-dimensional model on the user interface.
- the interaction method based on the three-dimensional model provided by the above-mentioned embodiments of the present disclosure can send an interaction request for the target interaction operation to a server that provides page data for the user interface when the user's target interaction operation for the user interface is detected.
- the user interface is used for presenting a three-dimensional model, and the three-dimensional model establishes an association relationship with the user account logged in by the second user terminal in advance.
- the streaming video obtained by the server from the second user terminal is received.
- the streaming video and 3D model are presented on the user interface.
- the first user terminal may also perform the following steps:
- the model adjustment information sent by the server is received, where the model adjustment information indicates an adjustment operation of the user who uses the second user terminal on the three-dimensional model presented on the second user terminal.
- the adjustment operation includes at least one of the following: zoom, rotate, move, and switch viewpoints.
- the user can perform at least one operation of zooming, rotating, moving, and switching viewpoints on the three-dimensional model.
- the same adjustment operation is performed on the three-dimensional model presented on the user interface.
- the operations performed by the user of the second user terminal on the three-dimensional model can be synchronized to the first user terminal. Therefore, when the streaming video collected by the second user terminal is related to the three-dimensional model (for example, the user of the second user terminal explains or introduces the three-dimensional model), it is convenient for the user of the first user terminal to acquire the information in the streaming video with reference to the same three-dimensional model presented by the second user terminal, thereby improving the pertinence of information acquisition.
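- A minimal sketch of how such synchronization might be replayed on the receiving terminal, assuming a hypothetical ModelView handle and a JSON message format (both illustrative, not from the disclosure):

```python
import json

class ModelView:
    """Hypothetical handle to the presented 3D model; the method names
    are assumptions made for illustration."""
    def zoom(self, factor): print(f"zoom x{factor}")
    def rotate(self, yaw, pitch): print(f"rotate yaw={yaw} pitch={pitch}")
    def move(self, dx, dy): print(f"move dx={dx} dy={dy}")
    def switch_viewpoint(self, name): print(f"viewpoint -> {name}")

def apply_adjustment(view: ModelView, message: str) -> None:
    """Replay on this terminal the adjustment described by the model
    adjustment information received from the server."""
    info = json.loads(message)
    op, args = info["op"], info.get("args", {})
    dispatch = {
        "zoom": lambda: view.zoom(args["factor"]),
        "rotate": lambda: view.rotate(args["yaw"], args["pitch"]),
        "move": lambda: view.move(args["dx"], args["dy"]),
        "switch_viewpoint": lambda: view.switch_viewpoint(args["name"]),
    }
    dispatch[op]()

apply_adjustment(ModelView(), '{"op": "zoom", "args": {"factor": 1.5}}')
```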
- the first user terminal may also perform the following steps:
- the feedback information may include but is not limited to at least one of the following: likes, ratings, comments, and so on.
- the feedback information may be used to characterize the evaluation of the user of the first user terminal on the streaming video of the user of the second user terminal.
- the feedback information is sent to the server, where the server is used to establish an association relationship between the feedback information and the user account.
- an associative storage method can be used to establish an association relationship between the feedback information and the user account.
- establishing an association relationship between the feedback information and the user account can reflect the satisfaction of the user of the first user terminal with the object indicated by the three-dimensional model and with the user of the second user terminal, so that information can be pushed to the first user terminal in a more targeted manner.
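- A minimal associative-storage sketch, assuming a hypothetical in-memory store and record schema (both illustrative):

```python
from collections import defaultdict

# Feedback records keyed by the user account of the second user terminal;
# the record schema is an assumption for illustration.
feedback_store = defaultdict(list)

def store_feedback(user_account: str, feedback: dict) -> None:
    """Associatively store feedback under the user account, establishing
    the association relationship between the two."""
    feedback_store[user_account].append(feedback)

store_feedback("agent_001", {"kind": "like"})
store_feedback("agent_001", {"kind": "rating", "score": 5})
print(feedback_store["agent_001"])
```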
- FIG. 9 is a flow 900 of another embodiment of the first three-dimensional model-based interaction method of the present disclosure.
- the three-dimensional model-based interaction method is applied to a first user terminal, and the first user terminal is presented with a user interface, and the method includes:
- Step 910 In response to detecting the user's target interaction operation on the user interface, send an interaction request for the target interaction operation to a server that provides page data for the user interface.
- Step 920 Receive the streaming video obtained by the server from the second user terminal.
- Step 930 Present the streaming video and the three-dimensional model on the user interface.
- step 910 to step 930 are basically the same as step 710 to step 730 in the embodiment corresponding to FIG. 7, and will not be repeated here.
- Step 940 In response to the current network speed value of the first user terminal being less than or equal to the preset network speed threshold, adjust the target user image based on each frame of voice in the streaming video to generate a new video different from the streaming video.
- the first user terminal may adjust the target user image based on each frame of voice in the streaming video to generate a new video.
- the new video characterizes the user indicated by the target user image performing the actions that each frame of voice indicates.
- the user indicated by the target user image may be a user using the second user terminal.
- the new video may be a streaming video that is sent in segments and instantly transmitted based on the network, or it may be a video that is generated locally without being based on the network.
- the first user terminal may generate a new video in the following manner: for each frame of voice in the streaming video, input the frame of voice into a predetermined image frame generation model to obtain an image, matching that frame of voice, of the user indicated by the target user image. The obtained image frames matching each frame of voice in the streaming video are then merged with the corresponding frames of voice to obtain the new video. In each obtained image, the user's action matches the voice frame from which the image was generated.
- For example, if a frame of voice is the sound "ah", the mouth shape of the user in the matching image may be the lip shape of pronouncing "ah"; if the voice conveys fright, the action may be one made in a state of fright.
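- A minimal sketch of the per-frame generation and merging step, assuming a hypothetical generate_frame callable that stands in for the trained image frame generation model:

```python
from typing import Callable, Iterable, List, Tuple

Frame = bytes        # stand-in for a decoded image frame
VoiceFrame = bytes   # stand-in for one frame of voice

def build_new_video(
    voice_frames: Iterable[VoiceFrame],
    target_user_image: Frame,
    generate_frame: Callable[[VoiceFrame, Frame], Frame],
) -> List[Tuple[Frame, VoiceFrame]]:
    """For every frame of voice, generate the matching image frame, then
    merge each image frame with its voice frame to form the new video
    (represented here as a list of audio/video pairs)."""
    return [(generate_frame(v, target_user_image), v) for v in voice_frames]

# Toy stand-in for the trained image frame generation model.
new_video = build_new_video(
    voice_frames=[b"ah", b"oh"],
    target_user_image=b"<portrait>",
    generate_frame=lambda voice, image: image,
)
print(len(new_video))  # 2 merged audio/video frames
```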
- the aforementioned image frame generation model may be a recurrent neural network model or a convolutional neural network model obtained by training using a machine learning algorithm based on training samples including voice frames, target user images, and image frames matching the voice frames.
- An image frame generation model can be trained for each user, and the target user image in each training sample used to train the user’s image frame generation model can be the same.
- For each voice frame of the user, the image frame matching that voice frame is determined, and a training sample set used to train the image frame generation model of that user is thereby obtained.
- the image frame generation model may also be a two-dimensional table or database that stores the voice frame, the target user image, and the image frame matching the voice frame in association with each other.
- each record of the database may include a voice frame, a target user image, and the image frame matching the voice frame.
- the target user image in each record can be the same.
- Such a database of voice frames and matching image frames is, in effect, the image frame generation model.
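- A minimal sketch of this table/database variant, assuming voice frames can be reduced to hashable keys (an illustrative simplification):

```python
# Each record associates a voice frame (reduced here to a hashable key),
# a target user image identifier, and the matching image frame; the key
# derivation is an assumption for illustration.
frame_db = {
    ("ah", "portrait_01"): "frame_mouth_open",
    ("oh", "portrait_01"): "frame_mouth_round",
}

def lookup_image_frame(voice_key: str, target_image_id: str):
    """Retrieve the image frame matching one voice frame for one user."""
    return frame_db.get((voice_key, target_image_id))

print(lookup_image_frame("ah", "portrait_01"))  # frame_mouth_open
```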
- the first user terminal may also determine the target user image by any of the following methods:
- based on the images in the streaming video, the target user image is generated.
- for example, an image whose proportion among the images in the streaming video is greater than a preset threshold may be regarded as the target user image.
- the user can upload an image through the user account he uses as the target user image; or after logging in the account he uses, select an image from a predetermined image set as the target user image.
- through the above-mentioned optional implementations, the target user image can be automatically generated from the images in the streaming video, or manually set by the user, so that, based on multiple ways of determining the target user image, the ways of generating the new video are more diversified.
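- A minimal sketch of the automatic route, under the assumption that the "ratio" test means the share of frames in which an image appears; the threshold value is illustrative:

```python
from collections import Counter

def pick_target_user_image(stream_image_ids, threshold: float = 0.5):
    """Regard the image whose share of the streaming video's frames
    exceeds the preset threshold as the target user image (an assumed
    reading of the 'ratio' test; the threshold value is illustrative)."""
    if not stream_image_ids:
        return None
    image_id, count = Counter(stream_image_ids).most_common(1)[0]
    return image_id if count / len(stream_image_ids) > threshold else None

print(pick_target_user_image(["a", "a", "a", "b"]))  # a (ratio 0.75)
```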
- Step 950 Use the new video to replace the streaming video for presentation.
- the first user terminal may use a new video to replace the streaming video for presentation.
- the streaming video can be hidden (that is, no longer presented).
- the first user terminal can locally generate a new video to replace the streaming video presentation. Therefore, the first user terminal only needs to continuously obtain voice from the server, and does not need to continuously obtain video, thereby reducing the occupation of network resources. When the current network speed value of the first user terminal is relatively small, this can improve the real-time performance of video presentation on the first user terminal.
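- A minimal sketch of this decision, with an assumed threshold value:

```python
SPEED_THRESHOLD_MBPS = 1.0  # preset network speed threshold (assumed value)

def choose_presentation(current_speed_mbps: float) -> str:
    """Below the threshold, fetch only voice and synthesize video locally;
    otherwise present the streaming video as received from the server."""
    if current_speed_mbps <= SPEED_THRESHOLD_MBPS:
        return "generate_new_video_locally"
    return "present_streaming_video"

print(choose_presentation(0.6))  # generate_new_video_locally
```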
- the first user terminal may also send camera shutdown confirmation information to the server.
- the camera shutdown confirmation information is used to determine whether the second user terminal closes the camera.
- the server may send to the second user terminal information for determining whether the second user terminal turns off the camera. Therefore, the user of the second user terminal can reduce the occupation of network resources by the second user terminal by turning off the camera.
- FIG. 10 is a flowchart of another embodiment of the first three-dimensional model-based interaction method of the present disclosure.
- the interaction method based on the three-dimensional model is applied to a first user terminal, and the first user terminal presents a user interface.
- the process 1000 of the interaction method based on the three-dimensional model includes:
- Step 1010 In response to detecting the user's target interaction operation on the user interface, send an interaction request for the target interaction operation to a server that provides page data for the user interface.
- the user interface is used for presenting a three-dimensional model, and the three-dimensional model establishes an association relationship with the user account logged in by the second user terminal in advance.
- Step 1020 Receive the streaming video obtained by the server from the second user terminal.
- Step 1030 Present the streaming video and the three-dimensional model on the user interface.
- step 1010 to step 1030 are basically the same as step 710 to step 730 in the embodiment corresponding to FIG. 7, and will not be repeated here.
- the three-dimensional model includes three-dimensional sub-models of multiple sub-space scenes, and the sub-space scenes in the multiple sub-space scenes correspond to keywords in a predetermined keyword set.
- Step 1040 Perform voice recognition on the voice in the streaming video to obtain a voice recognition result.
- the first user terminal may perform voice recognition on the voice in the streaming video to obtain the voice recognition result.
- the voice recognition result can represent the text corresponding to the voice in the streaming video.
- Step 1050 In response to the determination that the voice recognition result includes keywords in the keyword set, present on the user interface a three-dimensional sub-model of the corresponding sub-space scene among the multiple sub-space scenes corresponding to the keywords included in the voice recognition result .
- the first user terminal may present, on the aforementioned user interface, the three-dimensional sub-model of the subspace scene corresponding to the keywords contained in the voice recognition result.
- the above-mentioned three-dimensional model is a three-dimensional model inside a house.
- the house includes a bedroom, a living room, a kitchen, and a bathroom, with a total of four sub-space scenes. That is, the above-mentioned three-dimensional model includes a three-dimensional sub-model of a bedroom, a three-dimensional sub-model of a living room, a three-dimensional sub-model of a kitchen, and a three-dimensional sub-model of a bathroom.
- the keyword set includes bedroom, living room, kitchen, bathroom.
- the keyword corresponding to the subspace scene bedroom can be "bedroom"; the keyword corresponding to the subspace scene kitchen can be "kitchen"; the keyword corresponding to the subspace scene living room can be "living room"; the keyword corresponding to the subspace scene bathroom can be "toilet". Further, as an example, if the voice recognition result includes the keyword "bedroom", then the first user terminal may present a three-dimensional sub-model of the bedroom on the aforementioned user interface.
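- A minimal sketch of keyword-driven sub-model selection, assuming an illustrative keyword-to-sub-model mapping and plain-text recognizer output:

```python
# Assumed keyword-to-sub-model mapping for the house example; names are
# illustrative, not taken from the disclosure.
KEYWORD_TO_SUBMODEL = {
    "bedroom": "bedroom_submodel",
    "living room": "living_room_submodel",
    "kitchen": "kitchen_submodel",
    "toilet": "bathroom_submodel",
}

def submodels_to_present(recognized_text: str):
    """Return the 3D sub-models of the subspace scenes whose keywords
    appear in the voice recognition result."""
    text = recognized_text.lower()
    return [model for kw, model in KEYWORD_TO_SUBMODEL.items() if kw in text]

print(submodels_to_present("Let me show you the bedroom next"))
```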
- the embodiment of the present application may also include the same or similar features and effects as the embodiment corresponding to FIG. 7 and/or FIG. 9, and details are not described herein again.
- the viewpoint switching of the three-dimensional model can be realized by voice, thereby presenting the three-dimensional sub-model of the subspace scene corresponding to the keywords contained in the voice recognition result. As a result, the convenience of browsing the three-dimensional model is improved, and the matching between the presented three-dimensional model and the voice acquired by the second user terminal is improved.
- FIG. 11 shows a process 1100 of an embodiment of the second three-dimensional model-based interaction method according to the present disclosure.
- the three-dimensional model-based interaction method is applied to a second user terminal, and the user account logged in by the second user terminal establishes an association relationship with the three-dimensional model in advance.
- the interactive method based on the 3D model includes:
- Step 1110 In response to receiving the interactive request sent by the server, obtain the streaming video.
- the user can use the second user terminal to interact with the server and the first user terminal through the network.
- the second user terminal may be various electronic devices, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, and so on.
- the second user terminal may be installed with various client applications, such as real estate transaction software.
- upon receiving the interaction request sent by the server, the streaming video is acquired.
- the interaction request indicates that the first user terminal detects the user's target interaction operation on the user interface presented by the first user terminal.
- the aforementioned interaction request may be used to instruct the user of the first user terminal to request video communication with the second user terminal.
- the user interface is used to present the three-dimensional model.
- Streaming videos can contain images and/or voice.
- the image acquisition device and/or the voice acquisition device of the second user terminal can be used to acquire the aforementioned streaming video.
- the first user terminal may send an interaction request for the target interaction operation to a server that provides page data for the user interface.
- the user interface is used to present the three-dimensional model.
- the three-dimensional model establishes an association relationship with the user account logged in by the second user terminal in advance.
- the aforementioned target interaction operation may be various operations for instructing the first user terminal to request interaction (information interaction) with the second user terminal.
- the target interaction operation may indicate video communication with the second user terminal.
- the user interface of the first user terminal may present the above-mentioned three-dimensional model, or may not present the three-dimensional model.
- Step 1120 Send the streaming video to the server.
- the second user terminal may send the streaming video to the server.
- the server is used to send the streaming video to the first user terminal, so that the first user terminal presents the streaming video and the three-dimensional model on the user interface.
- the server can use streaming media technology to continuously send the images and/or voice (that is, streaming video) collected by the second user terminal to the first user terminal.
- Streaming media technology refers to the technology of transmitting media in a continuous stream so that it can be played over the network in real time.
- the second user terminal may send the continuous image and sound information collected by it to the server after compression processing.
- the server transmits each compressed package to the first user terminal sequentially or in real time, so that users who use the first user terminal can watch and listen while downloading.
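- A minimal sketch of the compress, relay, and decompress flow, using zlib as a stand-in for a real audio/video codec (an illustrative assumption):

```python
import zlib

def compress_segment(av_segment: bytes) -> bytes:
    """Second user terminal: compress a captured audio/video segment
    before uploading it to the server (zlib stands in for a real codec)."""
    return zlib.compress(av_segment)

def decompress_segment(package: bytes) -> bytes:
    """First user terminal: decompress each package as it arrives, so the
    user can watch and listen while later segments still download."""
    return zlib.decompress(package)

segment = b"raw audio+video bytes " * 100
package = compress_segment(segment)
print(len(segment), "->", len(package))
assert decompress_segment(package) == segment
```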
- the server may send the streaming video collected by the second user terminal to the first user terminal directly, or may perform operations such as image processing (for example, beautification), voice processing (for example, denoising), transcoding, recording, and content moderation on the streaming video collected by the second user terminal, and then send the processed streaming video to the first user terminal.
- the second three-dimensional model-based interaction method provided by the foregoing embodiment of the present disclosure is applied to a second user terminal, and the user account logged in by the second user terminal establishes an association relationship with the three-dimensional model in advance.
- the second user terminal may determine whether the user's confirmation operation for the interaction request is detected in the case of receiving the interaction request sent by the server.
- the interaction request indicates that the first user terminal detects the user's target interaction operation on the user interface presented by the first user terminal, and the user interface is used to present a three-dimensional model. Afterwards, if the confirmation operation is detected, the streaming video is obtained.
- the streaming video is sent to the server, where the server is used to send the streaming video to the first user terminal, so that the first user terminal presents the streaming video and the three-dimensional model on the user interface.
- by presenting the streaming media video and the three-dimensional model on the same page of the terminal device, it is helpful to use the streaming media video to present information related to the three-dimensional model to the user, thereby increasing the diversity of interaction modes.
- users can browse the three-dimensional model at their leisure, which increases browsing time and helps meet users' more diversified interaction needs.
- the foregoing step 1110 may include the following steps:
- the confirmation operation indicates that the user of the second user terminal confirms (agrees) to interact with the first user terminal (for example, video communication).
- if the second user terminal sends the interaction confirmation information, the first user terminal may present the streaming video; if the second user terminal does not send the interaction confirmation information, the first user terminal does not present the streaming video. Therefore, the streaming video and the three-dimensional model can be presented on the user interface of the first user terminal only after the permission of the user of the second user terminal is obtained (for example, after the video call initiated by the first user terminal is answered). This helps to improve privacy protection for the user of the second user terminal, and provides preparation time before the streaming media video is presented to the user of the first user terminal.
- the second user terminal may also directly obtain the streaming video and send it to the first user terminal through the server, without interaction confirmation information sent by the user of the second user terminal in response to the interaction request.
- the user of the second user terminal may be in a state of shooting a streaming video (for example, a live broadcast) to users of other user terminals.
- the first user terminal can receive the streaming video obtained by the server from the second user terminal at any time, thereby improving the real-time performance of the streaming video presentation.
- the second user terminal may receive the camera shutdown confirmation information from the server and display the camera shutdown confirmation information.
- the camera shutdown confirmation information is used to determine whether the second user terminal closes the camera.
- the server may send to the second user terminal information for determining whether the second user terminal turns off the camera. Therefore, the user of the second user terminal can reduce the occupation of network resources by the second user terminal by turning off the camera.
- the second user terminal may send model adjustment information indicating the adjustment operation to the server, so that the server controls the first user terminal to perform the same adjustment operation on the three-dimensional model presented on the user interface according to the adjustment operation indicated by the model adjustment information.
- the adjustment operation includes at least one of the following: zoom, rotate, move, and switch viewpoints.
- the user can perform at least one operation of zooming, rotating, moving, and switching viewpoints on the three-dimensional model.
- the operations performed by the user of the second user terminal on the three-dimensional model can be synchronized to the first user terminal. Therefore, when the streaming video collected by the second user terminal is related to the three-dimensional model (for example, the user of the second user terminal explains or introduces the three-dimensional model), it is convenient for the user of the first user terminal to acquire the information in the streaming video with reference to the same three-dimensional model presented by the second user terminal, thereby improving the pertinence of information acquisition.
- the second user terminal may, according to the adjustment operation indicated by the model adjustment information, perform the same adjustment operation on the three-dimensional model presented by the second user terminal.
- the adjustment operation includes at least one of the following: zoom, rotate, move, and switch viewpoints.
- the user can perform at least one operation of zooming, rotating, moving, and switching viewpoints on the three-dimensional model.
- the operations performed by the user of the first user terminal on the three-dimensional model can be synchronized to the second user terminal.
- the second user terminal may perform an operation that matches the feedback information.
- the feedback information may include but is not limited to at least one of the following: likes, ratings, comments, and so on.
- the feedback information may be used to characterize the evaluation of the user of the first user terminal on the streaming video of the user of the second user terminal.
- the second user terminal may present an operation that matches the feedback information, for example, displaying "XX gave you a like!".
- FIG. 12 is a flow 1200 of another embodiment of the second three-dimensional model-based interaction method of the present disclosure.
- the three-dimensional model-based interaction method is applied to a second user terminal, and the method includes:
- Step 1210 In response to receiving the interactive request sent by the server, obtain the streaming video.
- Step 1220 Send the streaming video to the server.
- step 1210 to step 1220 are basically the same as step 1110 to step 1120 in the embodiment corresponding to FIG. 11, and will not be repeated here.
- the three-dimensional model includes three-dimensional sub-models of multiple sub-space scenes, and the sub-space scenes in the multiple sub-space scenes correspond to keywords in a predetermined keyword set.
- Step 1230 Perform voice recognition on the voice acquired by the first user terminal to obtain a voice recognition result.
- the second user terminal may perform voice recognition on the voice acquired by the first user terminal to obtain a voice recognition result.
- the voice recognition result can represent the text corresponding to the voice in the streaming video.
- Step 1240 in response to the determination that the voice recognition result contains keywords in the keyword set, present on the user interface a three-dimensional sub-model of the corresponding sub-space scene among the multiple sub-space scenes corresponding to the keywords contained in the voice recognition result .
- the second user terminal may present, on the user interface, a three-dimensional sub-model of the subspace scene corresponding to the keywords contained in the voice recognition result.
- the above-mentioned three-dimensional model is a three-dimensional model inside a house.
- the house includes a bedroom, a living room, a kitchen, and a bathroom, with a total of four sub-space scenes, that is, the above-mentioned three-dimensional model includes a three-dimensional sub-model of the bedroom, a three-dimensional sub-model of the living room, a three-dimensional sub-model of the kitchen, and a three-dimensional sub-model of the bathroom.
- the keyword set includes bedroom, living room, kitchen, bathroom.
- the keyword corresponding to the subspace scene bedroom can be "bedroom"; the keyword corresponding to the subspace scene kitchen can be "kitchen"; the keyword corresponding to the subspace scene living room can be "living room"; the keyword corresponding to the subspace scene bathroom can be "toilet". Further, as an example, if the voice recognition result includes the keyword "bedroom", then the second user terminal may present a three-dimensional sub-model of the bedroom on the aforementioned user interface.
- the viewpoint switching of the three-dimensional model can be realized by voice, thereby presenting the three-dimensional sub-model of the subspace scene corresponding to the keywords contained in the voice recognition result. As a result, the convenience of browsing the three-dimensional model is improved, and the matching between the presented three-dimensional model and the voice acquired by the first user terminal is improved.
- the present disclosure provides an embodiment of an interaction device based on a three-dimensional model.
- the device embodiment may also include the same or corresponding features as the method embodiments shown in FIGS. 7, 9, and 10, and produce the same or corresponding effects as those method embodiments.
- the interaction apparatus 1300 based on the three-dimensional model of this embodiment is set in a first user terminal, and the first user terminal presents a user interface.
- the device 1300 includes: a first sending unit 1310, configured to, in response to detecting a user's target interaction operation on the user interface, send an interaction request for the target interaction operation to a server that provides page data for the user interface, where the user interface is used for presenting the three-dimensional model, and the three-dimensional model is pre-associated with the user account logged in on the second user terminal; a first receiving unit 1320, configured to receive the streaming video obtained by the server from the second user terminal; and a first presenting unit 1330, configured to present the streaming video and the three-dimensional model on the user interface.
- the first sending unit 1310 of the interaction device 1300 based on the three-dimensional model may, in response to detecting the user's target interaction operation on the user interface, send an interaction request for the target interaction operation to the server that provides page data for the user interface.
- the user interface is used for presenting a three-dimensional model, and the three-dimensional model establishes an association relationship with the user account logged in by the second user terminal in advance.
- the first receiving unit 1320 may receive the streaming video obtained by the server from the second user terminal.
- the first presentation unit 1330 may present the streaming video and the three-dimensional model on the user interface.
- the first receiving unit is further configured to: in response to the server receiving the interaction confirmation information sent by the second user terminal in response to the interaction request, receive the streaming media video obtained by the server from the second user terminal.
- the device 1300 further includes: a first adjustment unit (not shown in the figure), configured to, in response to the current network speed value of the first user terminal being less than or equal to a preset network speed threshold, adjust the target user image based on each frame of voice in the streaming video to generate a new video, where the new video characterizes the user indicated by the target user image performing the actions that each frame of voice indicates; and a second presentation unit (not shown in the figure), configured to use the new video to replace the streaming video for presentation.
- the device 1300 further includes: a first generating unit (not shown in the figure), configured to generate a target user image based on images in the streaming video; or a first determining unit (not shown in the figure), configured to determine the user image associated with the user account as the target user image.
- the device 1300 further includes: a second sending unit (not shown in the figure), configured to send camera shutdown confirmation information to the server in response to the new video being presented on the user interface, where the camera shutdown confirmation information is used to determine whether the second user terminal closes the camera.
- the first receiving unit is further configured to: send the current network speed value of the first user terminal to the server; and receive the streaming video that the server obtains from the second user terminal and sends, where the streaming video has a resolution that matches the current network speed value.
- the device 1300 further includes: a second receiving unit (not shown in the figure), configured to receive model adjustment information sent by the server, where the model adjustment information indicates an adjustment operation of the user of the second user terminal on the three-dimensional model presented on the second user terminal, and the adjustment operation includes at least one of the following: zoom, rotate, move, and viewpoint switch; and a second adjustment unit (not shown in the figure), configured to perform, according to the adjustment operation indicated by the model adjustment information, the same adjustment operation on the three-dimensional model presented on the user interface.
- the three-dimensional model includes three-dimensional sub-models of multiple sub-space scenes, and the sub-space scenes in the multiple sub-space scenes correspond to keywords in a predetermined keyword set; and,
- the device 1300 also includes: a first recognition unit (not shown in the figure), configured to perform voice recognition on the voice in the streaming video to obtain a voice recognition result; and a third presentation unit (not shown in the figure), configured to, in response to determining that the voice recognition result contains keywords in the keyword set, present on the user interface a three-dimensional sub-model of the subspace scene corresponding to the keywords contained in the voice recognition result.
- the device 1300 further includes: a first acquiring unit (not shown in the figure), configured to acquire the user's feedback information for the streaming media video; and a third sending unit (not shown in the figure), configured to send the feedback information to the server, where the server is used to establish an association relationship between the feedback information and the user account.
- the interaction device based on the three-dimensional model provided by the above-mentioned embodiments of the present disclosure is set in a first user terminal, and the first user terminal presents a user interface.
- the first sending unit 1310 may send an interaction request for the target interaction operation to a server that provides page data for the user interface, where the user interface is used for presenting the three-dimensional model, and the three-dimensional model is pre-associated with the user account logged in on the second user terminal.
- the first receiving unit 1320 receives the streaming video obtained by the server from the second user terminal.
- the first presenting unit 1330 presents the streaming video and the three-dimensional model on the user interface.
- In this way, streaming media videos and 3D models can be presented on the same page of the terminal device, which helps use streaming media videos to present information related to the 3D model to users and improves the diversity of interaction methods.
- Users can browse the 3D model at their leisure, which increases browsing time and helps meet users' more diversified interaction needs.
- the present disclosure provides an embodiment of a second interaction device based on a three-dimensional model.
- the device embodiment may also include the same or corresponding features as the method embodiments shown in FIGS. 11 and 12, and produce the same or corresponding effects as those method embodiments.
- the interaction device 1400 based on the three-dimensional model of this embodiment is set in the second user terminal.
- the device 1400 includes: a second determining unit 1410, configured to obtain a streaming video in response to receiving an interaction request sent by a server, where the interaction request indicates that the first user terminal has detected the user's target interaction operation on the user interface presented by the first user terminal, the user interface is used to present a three-dimensional model, and the three-dimensional model has a pre-established association relationship with the user account logged in on the second user terminal; and a fourth sending unit 1420, configured to send the streaming video to the server, where the server is used to send the streaming video to the first user terminal, so that the first user terminal presents the streaming video and the three-dimensional model on the user interface.
- the second determining unit 1410 may obtain the streaming video.
- the interaction request indicates that the first user terminal detects a user's target interaction operation on the user interface presented by the first user terminal, and the user interface is used to present a three-dimensional model.
- the fourth sending unit 1420 may be configured to send the streaming video to the server, where the server is used to send the streaming video to the first user terminal, so that the first user terminal displays the streaming video on the user interface. And three-dimensional models.
- the second determining unit 1410 is further configured to: in response to receiving the interaction request sent by the server, determine whether a confirmation operation of the user for the interaction request is detected; in response to detecting the confirmation Operation to obtain streaming video.
- the device 1400 further includes: a third receiving unit (not shown in the figure), configured to, in response to the current network speed value of the first user terminal being less than or equal to a preset network speed threshold, receive camera shutdown confirmation information from the server and present the camera shutdown confirmation information, where the camera shutdown confirmation information is used to determine whether the second user terminal closes the camera.
- the device 1400 further includes: an adjustment unit (not shown in the figure), configured to, in response to receiving from the server model adjustment information indicating an adjustment operation of the user of the first user terminal on the three-dimensional model, perform the same adjustment operation on the three-dimensional model presented on the second user terminal according to the adjustment operation indicated by the model adjustment information, where the adjustment operation includes at least one of the following: zoom, rotate, move, and viewpoint switch.
- the device 1400 further includes: a fifth sending unit (not shown in the figure), configured to, in response to detecting an adjustment operation of the user on the three-dimensional model presented on the second user terminal, send model adjustment information indicating the adjustment operation to the server, so that the server controls the first user terminal to perform the same adjustment operation on the three-dimensional model presented on the user interface according to the adjustment operation indicated by the model adjustment information, where the adjustment operation includes at least one of the following: zoom, rotate, move, and viewpoint switch.
- the three-dimensional model includes three-dimensional sub-models of multiple sub-space scenes, and the sub-space scenes in the multiple sub-space scenes correspond to keywords in a predetermined keyword set; and,
- the device 1400 further includes: a second recognition unit (not shown in the figure), configured to perform voice recognition on the voice acquired by the first user terminal to obtain a voice recognition result; and a fourth presentation unit (not shown in the figure), configured to, in response to determining that the voice recognition result contains keywords in the keyword set, present on the user interface a three-dimensional sub-model of the subspace scene corresponding to the keywords contained in the voice recognition result.
- the apparatus 1400 further includes: an execution unit (not shown in the figure), configured to, in response to receiving, from the server, feedback information of the user of the first user terminal for the streaming media video, perform an operation that matches the feedback information.
- the interaction device based on the three-dimensional model provided by the above-mentioned embodiment of the present disclosure is set in the second user terminal, and the user account logged in by the second user terminal establishes an association relationship with the three-dimensional model in advance.
- the second determining unit 1410 may obtain the streaming video, where the interaction request indicates that the first user terminal detects the user's target interaction operation on the user interface presented by the first user terminal, and the user interface is used to present the three-dimensional model.
- the fourth sending unit 1420 may send the streaming video to the server, where the server is used to send the streaming video to the first user terminal, so that the first user terminal can present the streaming video and the three-dimensional model on the user interface.
- In this way, streaming videos and 3D models can be presented on the same page of the terminal device, which helps use streaming videos to present information related to the 3D model to users and improves the diversity of interaction methods. Users can browse the 3D model at their leisure, which increases browsing time and helps meet users' more diversified interaction needs.
- FIG. 15 is a schematic diagram of interaction of an embodiment 1500 of the interactive system based on a three-dimensional model of the present disclosure.
- the interactive system based on the three-dimensional model includes a first user terminal, a second user terminal, and a server.
- the first user terminal presents a user interface
- the server is respectively communicatively connected with the first user terminal and the second user terminal.
- the first user terminal, the second user terminal, and the server in the interactive system based on the three-dimensional model can perform the following steps:
- Step 1501 The first user terminal detects the user's target interaction operation on the user interface.
- the first user terminal detects the user's target interaction operation on the user interface.
- the user interface is used for presenting a three-dimensional model, and the three-dimensional model establishes an association relationship with the user account logged in by the second user terminal in advance.
- Step 1502 The first user terminal sends an interaction request for the target interaction operation to the server.
- the first user terminal may send an interaction request for a target interaction operation to the server.
- Step 1503 The second user terminal obtains the streaming video.
- the second user terminal can obtain the streaming video.
- Step 1504 The second user terminal sends the streaming video to the server.
- the second user terminal may send the streaming video to the server.
- Step 1505 The server sends the streaming video to the first user terminal.
- the server may send the streaming video to the first user terminal.
- Step 1506 The first user terminal presents the streaming video and the three-dimensional model on the user interface.
- the first user terminal may present the streaming video and the three-dimensional model on the user interface.
- for step 1501 to step 1506, reference may also be made to the technical features in the embodiments of the first three-dimensional model-based interaction method, the second three-dimensional model-based interaction method, and the third three-dimensional model-based interaction method described above.
- this embodiment may also include the same or corresponding features as the above-mentioned embodiment of the interaction method based on the three-dimensional model, and produce the same or corresponding effects, which will not be repeated here.
- the interactive system based on the three-dimensional model provided by the above-mentioned embodiments of the present disclosure includes a first user terminal, a second user terminal, and a server.
- the first user terminal presents a user interface
- the server is respectively communicatively connected with the first user terminal and the second user terminal.
- the first user terminal is configured to: in response to detecting the user's target interaction operation on the user interface, send an interaction request for the target interaction operation to the server, where the user interface is used to present the three-dimensional model, and the three-dimensional model establishes an association relationship with the user account logged in on the second user terminal in advance;
- the second user terminal is configured to: obtain the streaming video; send the streaming video to the server;
- the server is also configured to: send the streaming video to the first user terminal;
- the first user terminal is further configured to: present the streaming video and the three-dimensional model on the user interface.
- In this way, streaming media videos and 3D models can be presented on the same page of the terminal device, which helps use streaming media videos to present information related to the 3D model to users and improves the diversity of interaction methods.
- Users can browse the 3D model at their leisure, which increases browsing time and helps meet users' more diversified interaction needs.
- FIG. 16 shows a block diagram of an electronic device 1600 according to an embodiment of the present disclosure.
- the electronic device 1600 includes one or more processors 1611 and a memory 1612.
- the processor 1611 may be a central processing unit (CPU) or another form of processing unit having three-dimensional scene interaction and/or instruction execution capabilities, and may control other components in the electronic device 1600 to perform desired functions.
- the memory 1612 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
- the volatile memory for example, may include: random access memory (RAM) and/or cache memory (cache).
- the non-volatile memory for example, may include: read-only memory (ROM), hard disk, flash memory, and the like.
- One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 1611 may run the program instructions to implement the various methods described above and/or other desired functions.
- Various contents such as input signals, signal components, noise components, etc. can also be stored in the computer-readable storage medium.
- the electronic device 1600 may further include: an input device 1613, an output device 1614, etc., and these components are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
- the input device 1613 may include, for example, a keyboard, a mouse, and so on.
- the output device 1614 can output various information to the outside.
- the output device 1614 may include, for example, a display, a speaker, a printer, a communication network and a remote output device connected to it, and so on.
- the electronic device 1600 may also include any other appropriate components.
- the embodiments of the present disclosure may also be computer program products, which include computer program instructions that, when run by a processor, cause the processor to perform the steps in the methods according to various embodiments of the present disclosure.
- the computer program product may use any combination of one or more programming languages to write program codes for performing the operations of the embodiments of the present disclosure.
- the programming languages include object-oriented programming languages, such as Java and C++, and also include conventional procedural programming languages, such as the "C" language or similar programming languages.
- the program code can be executed entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
- embodiments of the present disclosure may also be a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are run by a processor, the processor is caused to execute the steps in the methods according to various embodiments of the present disclosure.
- the computer-readable storage medium may adopt any combination of one or more readable media.
- the readable medium may be a readable signal medium or a readable storage medium.
- the readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the above, for example. More specific examples (non-exhaustive list) of readable storage media may include: electrical connections with one or more wires, portable disks, hard disks, random access memory (RAM), read-only memory (ROM), Erasable programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
- the method and apparatus of the present disclosure may be implemented in many ways.
- the method and apparatus of the present disclosure can be implemented by software, hardware, firmware or any combination of software, hardware, and firmware.
- the above-mentioned order of the steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above, unless specifically stated otherwise.
- the present disclosure can also be implemented as programs recorded in a recording medium, and these programs include machine-readable instructions for implementing methods according to embodiments of the present disclosure.
- the present disclosure also covers a recording medium storing a program for executing a method according to an embodiment of the present disclosure.
- each component or each step can be decomposed and/or recombined. These decomposition and/or recombination should be regarded as equivalent solutions of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to a method for realizing interaction in a three-dimensional space scene, comprising: in response to detecting a user operation of setting footprint information in a three-dimensional space scene, determining a first pixel in the current view corresponding to the user's current viewing angle in the three-dimensional space scene; determining a three-dimensional model corresponding to the first pixel; determining the position of the user's footprint information in the three-dimensional model, the footprint information being displayed when the three-dimensional space scene is viewed; and setting the user's footprint information at the position.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010401813.7A CN111562845B (zh) | 2020-05-13 | 2020-05-13 | 用于实现三维空间场景互动的方法、装置和设备 |
CN202010401813.7 | 2020-05-13 | ||
CN202010698810.4 | 2020-07-20 | ||
CN202010698810.4A CN111885398B (zh) | 2020-07-20 | 2020-07-20 | 基于三维模型的交互方法、装置、系统、电子设备和存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021228200A1 true WO2021228200A1 (fr) | 2021-11-18 |
Family
ID=78525899
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/093628 WO2021228200A1 (fr) | 2020-05-13 | 2021-05-13 | Procédé pour réaliser une interaction dans une scène spatiale tridimensionnelle, appareil et dispositif |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021228200A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114241132A (zh) * | 2021-12-16 | 2022-03-25 | 北京字跳网络技术有限公司 | 场景内容展示控制方法、装置、计算机设备及存储介质 |
CN115499641A (zh) * | 2022-09-20 | 2022-12-20 | 北京三月雨文化传播有限责任公司 | 一种快速构建数字展览文件的方法及智能终端 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107710284A (zh) * | 2015-06-30 | 2018-02-16 | 奇跃公司 | 用于在虚拟图像生成系统中更有效地显示文本的技术 |
CN108874471A (zh) * | 2018-05-30 | 2018-11-23 | 链家网(北京)科技有限公司 | 一种房源的功能间附加元素添加方法及系统 |
CN110531847A (zh) * | 2019-07-26 | 2019-12-03 | 中国人民解放军军事科学院国防科技创新研究院 | 一种基于增强现实的新型社交方法及系统 |
CN110891167A (zh) * | 2019-11-30 | 2020-03-17 | 北京城市网邻信息技术有限公司 | 一种信息交互方法、第一终端和计算机可读存储介质 |
CN110944140A (zh) * | 2019-11-30 | 2020-03-31 | 北京城市网邻信息技术有限公司 | 远程展示方法、远程展示系统、电子装置、存储介质 |
CN111047717A (zh) * | 2019-12-24 | 2020-04-21 | 北京法之运科技有限公司 | 一种对三维模型进行文字标注的方法 |
CN111562845A (zh) * | 2020-05-13 | 2020-08-21 | 贝壳技术有限公司 | 用于实现三维空间场景互动的方法、装置和设备 |
CN111885398A (zh) * | 2020-07-20 | 2020-11-03 | 贝壳技术有限公司 | 基于三维模型的交互方法、装置和系统 |
- 2021-05-13: WO PCT/CN2021/093628 patent/WO2021228200A1/ active Application Filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107710284A (zh) * | 2015-06-30 | 2018-02-16 | 奇跃公司 | 用于在虚拟图像生成系统中更有效地显示文本的技术 |
CN108874471A (zh) * | 2018-05-30 | 2018-11-23 | 链家网(北京)科技有限公司 | 一种房源的功能间附加元素添加方法及系统 |
CN110531847A (zh) * | 2019-07-26 | 2019-12-03 | 中国人民解放军军事科学院国防科技创新研究院 | 一种基于增强现实的新型社交方法及系统 |
CN110891167A (zh) * | 2019-11-30 | 2020-03-17 | 北京城市网邻信息技术有限公司 | 一种信息交互方法、第一终端和计算机可读存储介质 |
CN110944140A (zh) * | 2019-11-30 | 2020-03-31 | 北京城市网邻信息技术有限公司 | 远程展示方法、远程展示系统、电子装置、存储介质 |
CN111047717A (zh) * | 2019-12-24 | 2020-04-21 | 北京法之运科技有限公司 | 一种对三维模型进行文字标注的方法 |
CN111562845A (zh) * | 2020-05-13 | 2020-08-21 | 贝壳技术有限公司 | 用于实现三维空间场景互动的方法、装置和设备 |
CN111885398A (zh) * | 2020-07-20 | 2020-11-03 | 贝壳技术有限公司 | 基于三维模型的交互方法、装置和系统 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114241132A (zh) * | 2021-12-16 | 2022-03-25 | 北京字跳网络技术有限公司 | 场景内容展示控制方法、装置、计算机设备及存储介质 |
CN114241132B (zh) * | 2021-12-16 | 2023-07-21 | 北京字跳网络技术有限公司 | 场景内容展示控制方法、装置、计算机设备及存储介质 |
CN115499641A (zh) * | 2022-09-20 | 2022-12-20 | 北京三月雨文化传播有限责任公司 | 一种快速构建数字展览文件的方法及智能终端 |
CN115499641B (zh) * | 2022-09-20 | 2023-09-12 | 广东鸿威国际会展集团有限公司 | 一种快速构建数字展览文件的方法及智能终端 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11899900B2 (en) | Augmented reality computing environments—immersive media browser | |
WO2021109652A1 (fr) | Procédé et appareil permettant d'offrir un cadeau virtuel de personnage, dispositif et support de stockage | |
US10134364B2 (en) | Prioritized display of visual content in computer presentations | |
CN111178191B (zh) | 信息播放方法、装置、计算机可读存储介质及电子设备 | |
JP5901151B2 (ja) | 仮想環境におけるオブジェクトの選択方法 | |
US8117281B2 (en) | Using internet content as a means to establish live social networks by linking internet users to each other who are simultaneously engaged in the same and/or similar content | |
WO2022116751A1 (fr) | Procédé et appareil d'interaction, ainsi que terminal, serveur et support de stockage | |
KR20220115824A (ko) | 콘텐츠를 공간 3d 환경에 매칭 | |
WO2018152455A1 (fr) | Système et procédé de création d'une session virtuelle collaborative | |
US20180160194A1 (en) | Methods, systems, and media for enhancing two-dimensional video content items with spherical video content | |
US20240153186A1 (en) | Sentiment-based interactive avatar system for sign language | |
CN112596694B (zh) | 一种房源信息的处理方法和装置 | |
KR20190047144A (ko) | 인터액티브 비디오 생성 | |
WO2021228200A1 (fr) | Procédé pour réaliser une interaction dans une scène spatiale tridimensionnelle, appareil et dispositif | |
US20220319063A1 (en) | Method and apparatus for video conferencing | |
US20230409632A1 (en) | Systems and methods for using conjunctions in a voice input to cause a search application to wait for additional inputs | |
US20240283994A1 (en) | Display apparatus and method for person recognition and presentation | |
CN111885398B (zh) | 基于三维模型的交互方法、装置、系统、电子设备和存储介质 | |
TW201403501A (zh) | 虛擬社群建立系統及方法 | |
CN112051956A (zh) | 一种房源的交互方法和装置 | |
CN111562845B (zh) | 用于实现三维空间场景互动的方法、装置和设备 | |
CN116017082A (zh) | 一种信息处理方法和电子设备 | |
US20240185530A1 (en) | Information interaction method, computer-readable storage medium and communication terminal | |
CN116762333A (zh) | 将电话会议参与者的图像与共享文档叠加 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21805221 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21805221 Country of ref document: EP Kind code of ref document: A1 |