
CN115729393A - Prompting method and device in information processing process, electronic equipment and storage medium - Google Patents

Prompting method and device in information processing process, electronic equipment and storage medium

Info

Publication number
CN115729393A
CN115729393A
Authority
CN
China
Prior art keywords
target
space
mark
elements
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211457010.9A
Other languages
Chinese (zh)
Inventor
Name withheld upon request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd filed Critical Beijing Chengshi Wanglin Information Technology Co Ltd
Priority to CN202211457010.9A
Publication of CN115729393A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention provides a prompting method and device in an information processing process, an electronic device, and a storage medium, wherein the method comprises the following steps: displaying a space contour map in an editing state corresponding to a target space; in response to an editing instruction for a target structure element on the space contour map, displaying the space live-action image corresponding to the target structure element; in response to acquiring, in the space live-action image, at least two mark elements that overlap on the same target medium image, respectively acquiring the panoramic area of each mark element in the space live-action image; acquiring, according to the panoramic area corresponding to each mark element, the target contour element to which the mark element maps in the space contour map and the contour position on that target contour element; and displaying, in the space contour map, the target contour element and the contour position to which each mark element maps, so as to prompt that the different mark elements overlap.

Description

Prompting method and device in information processing process, electronic equipment and storage medium
Technical Field
The present invention relates to the field of interface interaction technologies, and in particular, to a method and an apparatus for prompting in an information processing process, an electronic device, and a computer-readable storage medium.
Background
With the development of panoramic technology, VR (Virtual Reality), AR (Augmented Reality), and related technologies, these techniques have been widely applied in fields such as online house viewing, marketing, and exhibition. By building virtual scenes and objects, information about the real environment is presented, effectively reproducing reality and recording on-site information.
However, entering real-world information into the virtual scene still requires manual operation, for example drawing walls, doors, and windows in a VR house listing, or marking water and electricity routing, the building structure, and house dimensions. Because these tasks depend on manual operation by an editor, and subjective errors are inevitable during manual editing, deviations easily arise between the edited information and the live-action information, leading to poor quality of the edited house source information.
Disclosure of Invention
The embodiment of the invention provides a prompting method and device in an information processing process, electronic equipment and a computer readable storage medium, and aims to solve or partially solve the problem of poor house source information quality caused by inaccurate editing in the house information editing process in the related art.
The embodiment of the invention discloses a prompting method in an information processing process, which comprises the following steps:
displaying a space contour map corresponding to a target space in an editing state, wherein the space contour map comprises at least contour elements and at least one structure element located on the contour elements, the structure element being an element mapped onto the corresponding contour element in the space contour map according to a target medium image identified in a space live-action image, and the space live-action image being the panoramic image, among the panoramic images acquired at at least one acquisition point, that is used for identifying the target medium image;
in response to an editing instruction of a target structure element on the spatial outline, showing the spatial live-action diagram corresponding to the target structure element;
responding to the acquisition of at least two marking elements overlapping the same target medium image in a space live-action image, and respectively acquiring a panoramic area of each marking element in the space live-action image;
acquiring a target contour element mapped in the space contour map by the marking element and a contour position on the target contour element according to the panoramic area corresponding to the marking element;
and displaying, in the space contour map, the target contour element to which each mark element maps and the contour position on that target contour element, so as to prompt that the different mark elements overlap.
Optionally, the acquiring, in response to acquiring, in a space live-action map, at least two mark elements overlapping with the same target medium image, a panoramic area of each mark element in the space live-action map includes:
in response to at least two mark elements which are overlapped and added to the same target medium image in the space live-action image, outputting prompt information aiming at the at least two mark elements;
and acquiring panoramic areas corresponding to the at least two marking elements.
Optionally, the outputting prompt information for at least two mark elements in response to at least two mark elements that overlap with each other in the same target medium image in the spatial live-action map includes:
displaying, in response to a marking instruction for a target medium image in the space live-action image, a first mark element for the target medium image;
responding to the editing operation for the first mark element, and if a corresponding second mark element already exists in the target medium image and a spatial overlap occurs between the first mark element and the second mark element after or during the editing of the first mark element, outputting prompt information for the first mark element and the second mark element.
Optionally, the outputting, in response to the editing operation for the first mark element, prompt information for the first mark element and the second mark element if a corresponding second mark element already exists in the target medium image and a spatial overlap occurs between the first mark element and the second mark element after or during the editing of the first mark element, includes:
responding to an editing operation for the first marking element, determining a first marking position of the first marking element according to the editing operation, and if the first marking position and a second marking position of a second marking element in the space live-action image are overlapped in space, outputting prompt information for the first marking element and the second marking element.
Optionally, the first mark position includes a first coordinate of the first mark element in a panoramic coordinate system, and the outputting prompt information for the first mark element and the second mark element if a spatial overlap occurs between the first mark position and a second mark position of a second mark element in the space live-action image includes:
acquiring second coordinates of second mark elements in the space live-action image except the first mark elements in the panoramic coordinate system;
and taking a second mark element to which a second coordinate overlapped with the first coordinate belongs and the first mark element as abnormal mark elements, and outputting prompt information aiming at the abnormal mark elements.
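The coordinate-based overlap check in the two steps above can be sketched as follows. This sketch is an illustrative assumption rather than the patent's actual implementation: each mark element is modeled as an axis-aligned rectangle in panoramic pixel coordinates, and the first mark element plus every second mark element whose rectangle intersects it are flagged as abnormal.

```python
from dataclasses import dataclass

@dataclass
class MarkElement:
    """A mark element as a rectangle in panoramic pixel coordinates (illustrative)."""
    mark_id: str
    x_min: float  # left edge on the panorama
    x_max: float  # right edge
    y_min: float  # top edge
    y_max: float  # bottom edge

def rects_overlap(a: MarkElement, b: MarkElement) -> bool:
    """True when the two panoramic rectangles share any area."""
    return a.x_min < b.x_max and b.x_min < a.x_max \
       and a.y_min < b.y_max and b.y_min < a.y_max

def find_abnormal_marks(first: MarkElement, others: list[MarkElement]) -> list[MarkElement]:
    """Return the first mark plus every second mark whose coordinates overlap it;
    an empty list means no overlap and hence no prompt information is needed."""
    clashing = [m for m in others if rects_overlap(first, m)]
    return [first] + clashing if clashing else []

# Example: two door marks drawn over the same wall region, plus a separate window mark
m1 = MarkElement("door-1", 100, 200, 50, 300)
m2 = MarkElement("door-2", 180, 260, 60, 280)
m3 = MarkElement("window-1", 500, 620, 40, 200)
abnormal = find_abnormal_marks(m1, [m2, m3])
# abnormal contains door-1 and door-2; the prompt would target those two
```

In a real editor the second coordinates would come from the already-stored mark elements of the current space live-action image, and the prompt rendering would be driven by the returned list.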
Optionally, the first mark position includes a first endpoint position corresponding to an endpoint of the first mark element or a first boundary position corresponding to a boundary of the first mark element, and the outputting the prompt information for the first mark element and the second mark element if a spatial overlap occurs between the first mark position and a second mark position of a second mark element in the space live-action image includes:
acquiring a second endpoint position corresponding to an endpoint, or a second boundary position corresponding to a boundary, of each second mark element other than the first mark element in the space live-action image;
taking, as an abnormal mark element, a second mark element whose second endpoint position marks the same target medium image as the first endpoint position, or a second mark element whose second boundary position marks the same target medium image as the first boundary position;
and outputting prompt information aiming at the abnormal marking element.
Optionally, the determining, in response to an editing operation for the first markup element, a first markup position of the first markup element according to the editing operation includes:
displaying an editing control group for the first mark element, wherein the editing control group comprises at least an endpoint control and a move control;
in response to a trigger for at least one endpoint control, after the endpoint control completes a first editing operation, determining, according to the first editing operation, the medium image marked by each endpoint of the first mark element in the space live-action image;
and/or, in response to a trigger for the move control, after the move control completes a second editing operation, acquiring a first coordinate of the first mark element in the space live-action image according to the position of the second editing operation.
Optionally, the outputting the prompt information for the first markup element and the second markup element includes:
switching the first mark element from a first display style to a second display style or switching the second mark element from the first display style to the second display style, and displaying text prompt information aiming at the first mark element;
the first display style and the second display style are different display modes.
Optionally, the panoramic area includes panoramic pixel coordinates of the mark element in the space live-action image, and the acquiring, according to the panoramic area corresponding to the mark element, a target contour element mapped in the space contour map by the mark element and a contour position on the target contour element includes:
mapping the panoramic pixel coordinate corresponding to each marking element into a three-dimensional point cloud coordinate;
and respectively positioning a target contour element corresponding to the three-dimensional point cloud coordinate corresponding to each marking element and a contour position on the target contour element from the space contour map.
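The mapping from a panoramic pixel coordinate to a three-dimensional point cloud coordinate can be sketched with a standard equirectangular unprojection. The projection model and the availability of a per-pixel depth are assumptions made for illustration; the patent does not specify either.

```python
import math

def pano_pixel_to_point(u: float, v: float, width: int, height: int, depth: float):
    """Unproject an equirectangular panorama pixel (u, v) to a 3-D point.

    Illustrative sketch: columns are assumed to span yaw [-pi, pi), rows to
    span pitch [pi/2, -pi/2], and `depth` is the distance from the
    acquisition point along the viewing ray.
    """
    yaw = (u / width) * 2.0 * math.pi - math.pi     # horizontal angle
    pitch = math.pi / 2.0 - (v / height) * math.pi  # vertical angle
    # Spherical -> Cartesian, with the acquisition point at the origin
    x = depth * math.cos(pitch) * math.sin(yaw)
    y = depth * math.sin(pitch)
    z = depth * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# A pixel at the panorama centre looks straight ahead (yaw = 0, pitch = 0)
pt = pano_pixel_to_point(u=2048, v=1024, width=4096, height=2048, depth=3.0)
# pt is approximately (0.0, 0.0, 3.0)
```

Locating the target contour element would then amount to projecting such 3-D points onto the floor plane and matching them against the contour segments of the space contour map.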
Optionally, the displaying in the spatial profile according to the mapped target profile element of each mark element in the spatial profile and the profile position on the target profile element to prompt that there is an overlap between different mark elements includes:
displaying corresponding structural elements in the space outline according to the mapped target outline elements of the marking elements in the space outline and the outline positions on the target outline elements;
if the positions of the structural elements corresponding to the at least two mark elements on the spatial contour map overlap, displaying, on the spatial contour map, a first prompt identifier for the at least two structural elements with overlapping positions, to prompt that the different mark elements overlap.
Optionally, the displaying, in response to an edit instruction for a target structure element on the spatial outline diagram, the spatial live-action diagram corresponding to the target structure element includes:
in response to an editing instruction of a target structure element on the spatial outline diagram, extracting a spatial live-action diagram corresponding to the target structure element at a current observation visual angle from the spatial live-action diagram;
acquiring a target observation point corresponding to the current observation visual angle and a target observation area corresponding to the target observation point, wherein the target observation point is a mapping point of the target acquisition point in the space profile, and the target observation area is a mapping area of the space live-action image in the space profile;
and displaying a space contour map corresponding to the space live-action map, and displaying the target observation point or the target observation point and the target observation area in the space contour map.
Optionally, the target acquisition point is an optimal acquisition point of the at least one acquisition point in the target space relative to the medium corresponding to the target structural element, and the method further comprises:
selecting, from the at least one acquisition point in the target space, the acquisition point closest to the medium corresponding to the target structure element as the optimal acquisition point, which serves as the target acquisition point; or, alternatively,
selecting, from the at least one acquisition point in the target space, an acquisition point close to the forward shooting direction of the medium corresponding to the target structure element as the optimal acquisition point, which serves as the target acquisition point.
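The two selection strategies above can be sketched as follows. The 2-D floor-plan coordinates, the `medium_center`, and the `medium_normal` representation are hypothetical; the patent does not define the underlying data shapes.

```python
import math

def nearest_acquisition_point(points, medium_center):
    """Strategy 1: pick the acquisition point closest to the medium (e.g. a door)."""
    return min(points, key=lambda p: math.dist(p, medium_center))

def most_frontal_acquisition_point(points, medium_center, medium_normal):
    """Strategy 2: pick the acquisition point most nearly in front of the medium,
    scored by the cosine between the medium's outward normal and the direction
    from the medium to the acquisition point."""
    def frontal_score(p):
        vx, vy = p[0] - medium_center[0], p[1] - medium_center[1]
        norm = math.hypot(vx, vy) or 1.0
        return (vx * medium_normal[0] + vy * medium_normal[1]) / norm
    return max(points, key=frontal_score)

# Hypothetical 2-D floor-plan coordinates in metres
capture_points = [(1.0, 1.0), (4.0, 1.0), (2.0, 5.0)]
door_center = (2.0, 0.0)
door_normal = (0.0, 1.0)  # the door faces into the room, along +y

best_near = nearest_acquisition_point(capture_points, door_center)    # (1.0, 1.0)
best_front = most_frontal_acquisition_point(capture_points, door_center,
                                            door_normal)              # (2.0, 5.0)
```

Note that the two strategies can disagree, as here: the nearest point sees the door at an oblique angle, while the frontal point is farther away but faces it head-on.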
Optionally, the target space includes at least one functional space, and the method further includes:
and in response to a storage instruction for the mark elements, detecting the panoramic image corresponding to each functional space, and if a plurality of mark elements marking the same medium image exist in the panoramic image corresponding to at least one functional space, outputting, in the spatial contour map corresponding to the target space, a second prompt identifier indicating that the same medium image is marked.
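The save-time duplicate check described above can be sketched as follows. The `marks_by_space` data shape, mapping each functional space to `(mark_id, medium_image_id)` pairs, is a hypothetical representation introduced for illustration.

```python
from collections import defaultdict

def spaces_with_duplicate_marks(marks_by_space):
    """Return, per functional space, the medium-image ids marked more than once.

    A second prompt identifier would be shown in the space contour map for
    each functional space present in the returned dict.
    """
    flagged = {}
    for space, marks in marks_by_space.items():
        marks_per_medium = defaultdict(list)
        for mark_id, medium_id in marks:
            marks_per_medium[medium_id].append(mark_id)
        duplicates = {m: ids for m, ids in marks_per_medium.items() if len(ids) > 1}
        if duplicates:
            flagged[space] = duplicates
    return flagged

# Example: the living room has two marks on the same door image
marks = {
    "living_room": [("m1", "door-A"), ("m2", "door-A"), ("m3", "window-B")],
    "kitchen": [("m4", "window-C")],
}
flagged = spaces_with_duplicate_marks(marks)
# {'living_room': {'door-A': ['m1', 'm2']}}
```

Running the check once per storage instruction keeps the scan cheap, and the returned mapping carries enough detail to both place the prompt identifier and later jump to the offending mark elements.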
Optionally, the method further comprises:
responding to the selection operation aiming at the second prompt mark, determining a target prompt mark, positioning a target function space corresponding to the target prompt mark in the target space, and displaying a panoramic image corresponding to the target function space;
and determining the mark elements which are subjected to spatial overlapping and correspond to the target prompt identification, selecting a selected mark element from the mark elements which are subjected to spatial overlapping, and displaying an editing control group aiming at the mark element so as to edit the mark element through the editing control group.
The embodiment of the invention also discloses a prompting device in an information processing process, the device including:
the contour map display module is used for displaying a space contour map corresponding to a target space in an editing state, wherein the space contour map comprises at least contour elements and at least one structure element located on the contour elements, the structure element being an element mapped onto the corresponding contour element in the space contour map according to a target medium image identified in a space live-action image, and the space live-action image being the panoramic image, among the panoramic images acquired at the at least one acquisition point, that is used for identifying the target medium image;
the live-action diagram display module is used for responding to an editing instruction of a target structure element on the space outline diagram and displaying the space live-action diagram corresponding to the target structure element;
the panoramic area acquisition module is used for responding to the acquisition of at least two marking elements overlapping the same target medium image in a space live-action image and respectively acquiring the panoramic area of each marking element in the space live-action image;
the contour position determining module is used for acquiring a target contour element mapped in the space contour map by the marking element and a contour position on the target contour element according to the panoramic area corresponding to the marking element;
and the structural element display module is used for displaying, in the space contour map, the target contour element to which each mark element maps and the contour position on that target contour element, so as to prompt that the different mark elements overlap.
Optionally, the panoramic area obtaining module is specifically configured to:
in response to at least two mark elements which are overlapped and added to the same target medium image in the space live-action image, outputting prompt information aiming at the at least two mark elements;
and acquiring panoramic areas corresponding to the at least two marking elements.
Optionally, the panoramic area obtaining module is specifically configured to:
displaying, in response to a marking instruction for a target medium image in the space live-action image, a first mark element for the target medium image;
responding to the editing operation for the first mark element, and if a corresponding second mark element already exists in the target medium image and a spatial overlap occurs between the first mark element and the second mark element after or during the editing of the first mark element, outputting prompt information for the first mark element and the second mark element.
Optionally, the panoramic area obtaining module is specifically configured to:
responding to an editing operation for the first marking element, determining a first marking position of the first marking element according to the editing operation, and if a spatial overlap occurs between the first marking position and a second marking position of a second marking element in the spatial live-action image, outputting prompt information for the first marking element and the second marking element.
Optionally, the first marker position includes a first coordinate of the first marker element in a panoramic coordinate system, and the panoramic area obtaining module is specifically configured to:
acquiring second coordinates of second mark elements in the space live-action image except the first mark elements in the panoramic coordinate system;
and taking a second mark element to which a second coordinate overlapping with the first coordinate belongs and the first mark element as abnormal mark elements, and outputting prompt information aiming at the abnormal mark elements.
Optionally, the first mark position includes a first endpoint position corresponding to an endpoint of the first mark element or a first boundary position corresponding to a boundary of the first mark element, and the panoramic area obtaining module is specifically configured to:
acquiring a second endpoint position corresponding to an endpoint, or a second boundary position corresponding to a boundary, of each second mark element other than the first mark element in the space live-action image;
taking, as an abnormal mark element, a second mark element whose second endpoint position marks the same target medium image as the first endpoint position, or a second mark element whose second boundary position marks the same target medium image as the first boundary position;
and outputting prompt information aiming at the abnormal marking element.
Optionally, the panoramic area obtaining module is specifically configured to:
displaying an editing control group for the first mark element, wherein the editing control group comprises at least an endpoint control and a move control;
in response to a trigger for at least one endpoint control, after the endpoint control completes a first editing operation, determining, according to the first editing operation, the medium image marked by each endpoint of the first mark element in the space live-action image;
and/or, in response to a trigger for the move control, after the move control completes a second editing operation, acquiring a first coordinate of the first mark element in the space live-action image according to the position of the second editing operation.
Optionally, the panoramic area obtaining module is specifically configured to:
switching the first mark element from a first display style to a second display style or switching the second mark element from the first display style to the second display style, and displaying text prompt information aiming at the first mark element;
the first display style and the second display style are different display modes.
Optionally, the panoramic area includes panoramic pixel coordinates of the mark element in the space live-action image, and the contour position determining module is specifically configured to:
mapping the panoramic pixel coordinate corresponding to each marking element into a three-dimensional point cloud coordinate;
and respectively positioning a target contour element corresponding to the three-dimensional point cloud coordinate corresponding to each marking element and a contour position on the target contour element from the space contour map.
Optionally, the structural element display module is specifically configured to:
respectively displaying corresponding structural elements in the space outline according to the mapped target outline elements of the marking elements in the space outline and the outline positions on the target outline elements;
if the positions of the structural elements corresponding to the at least two mark elements on the spatial contour map overlap, displaying, on the spatial contour map, a first prompt identifier for the at least two structural elements with overlapping positions, to prompt that the different mark elements overlap.
Optionally, the live-action picture display module is specifically configured to:
in response to an editing instruction of a target structure element on the spatial outline map, extracting a spatial live-action map corresponding to the target structure element at a current observation visual angle from the spatial live-action map;
acquiring a target observation point corresponding to the current observation visual angle and a target observation area corresponding to the target observation point, wherein the target observation point is a mapping point of the target acquisition point in the space contour map, and the target observation area is a mapping area of the space live-action image in the space contour map;
and displaying a space contour map corresponding to the space live-action map, and displaying the target observation point or the target observation point and the target observation area in the space contour map.
Optionally, the target acquisition point is an optimal acquisition point of the at least one acquisition point in the target space relative to the medium corresponding to the target structural element, and the apparatus further comprises:
an acquisition point determining module, configured to select, as an optimal acquisition point, an acquisition point that is closest to a medium corresponding to a target structural element from among at least one acquisition point in the target space, and use the optimal acquisition point as the target acquisition point; or selecting an acquisition point close to the forward shooting direction of the medium corresponding to the target structural element from at least one acquisition point in the target space as an optimal acquisition point as the target acquisition point.
Optionally, the target space includes at least one functional space, and the apparatus further includes:
and the second identifier output module is configured to detect the panoramic image corresponding to each functional space in response to the storage instruction for the marker element, and if there are multiple marker elements in the panoramic image corresponding to at least one functional space to mark the same medium image, output a second prompt identifier that marks the same medium image in the spatial profile corresponding to the target space.
Optionally, the method further comprises:
the panoramic image display module is used for responding to the selection operation aiming at the second prompt identifier, determining a target prompt identifier, positioning a target function space corresponding to the target prompt identifier in the target space and displaying a panoramic image corresponding to the target function space;
and the marking element determining module is used for determining the marking elements which are subjected to the spatial overlapping and correspond to the target prompt identifier, selecting a selected marking element from the marking elements which are subjected to the spatial overlapping, and displaying an editing control group aiming at the marking element so as to edit the marking element through the editing control group.
The embodiment of the invention also discloses an electronic device, which includes a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method according to the embodiment of the present invention when executing the program stored in the memory.
Also disclosed is a computer-readable storage medium having instructions stored thereon, which, when executed by one or more processors, cause the processors to perform a method according to an embodiment of the invention.
The embodiment of the invention has the following advantages:
In the embodiment of the present invention, during the editing of house information, the terminal may display a space contour map corresponding to a target space in an editing state, where the space contour map includes at least contour elements and at least one structure element located on the contour elements, the structure element being an element mapped onto the corresponding contour element in the space contour map according to a target medium image identified in a space live-action image, and the space live-action image being the panoramic image, among the panoramic images acquired at at least one acquisition point, used for identifying the target medium image. While the user edits the space contour map, the terminal may, in response to an editing instruction for a target structure element on the space contour map, display the space live-action image corresponding to that structure element, so that the user can mark the space live-action image to edit the space contour map. During marking, in response to acquiring at least two mark elements that overlap on the same target medium image in the space live-action image, the terminal may acquire the panoramic area of each mark element in the space live-action image, then acquire, according to each panoramic area, the target contour element to which the mark element maps in the space contour map and the contour position on that target contour element, and display these in the space contour map to prompt that different mark elements overlap. In this way, editing anomalies are prompted from a global perspective through the space contour map, so that the user can handle them globally; prompting the anomalies that occur during editing can effectively improve the accuracy of house information editing and the quality of house source information.
Drawings
Fig. 1 is a flowchart illustrating steps of a method for prompting in an information processing process according to an embodiment of the present invention;
FIG. 2 is a schematic illustration of data acquisition provided in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a spatial structure diagram provided in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a panoramic editing interface provided in an embodiment of the present invention;
FIG. 5 is a schematic illustration of a panoramic editing interface provided in embodiments of the present invention;
fig. 6 is a block diagram of a prompting device in an information processing process according to an embodiment of the present invention;
fig. 7 is a block diagram of an electronic device provided in an embodiment of the present invention.
Detailed Description
To make the aforementioned objects, features, and advantages of the present invention more comprehensible, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
As an example, with the application of VR technology in fields such as house viewing, car viewing, marketing, and exhibition, environment information is presented through the environments and objects in a VR scene, achieving the effects of reproducing the scene and recording on-site information. Entering the real-world information about a house into a VR scene relies on manual operation, for example drawing the corresponding house layout in the VR scene. However, because these tasks depend on manual operation by an editor, and subjective errors are inevitable during manual editing, deviations easily arise between the edited information and the live-action information, which degrades the quality of the edited house source information.
One of the core inventive points of the present invention is that, when a user edits house information, the terminal can identify anomalies in the editing result or the editing process through the mapping relationship of mark element → structure element → spatial structure (medium, medium image), output corresponding prompt information in the space live-action image, and synchronously output a corresponding prompt identifier in the space contour map.
In order to make those skilled in the art better understand the technical solution of the present invention, some technical features related to the embodiments of the present invention are explained and illustrated below:
the first image acquisition data may be point cloud data acquired by the electronic terminal on at least one acquisition point of the target space. Optionally, the acquisition point for acquiring the point cloud data may be used as a first acquisition point, and a corresponding point cloud plan may be constructed according to the point cloud data corresponding to at least one first acquisition point, and the basic outline of the target space may be presented through the point cloud plan.
And the second image acquisition data can be panoramic image data acquired by the electronic terminal on at least one acquisition point of the target space. Optionally, the acquisition point for acquiring the panoramic image data may be used as a second acquisition point, and a spatial live-action image corresponding to the target space may be determined by using at least one piece of panoramic image data acquired at the second acquisition point, and a spatial structure corresponding to the target space may be presented by using the spatial live-action image, so as to present more real and three-dimensional spatial information for the user and improve spatial perception of the user on the target space.
The spatial house type graph, which may correspond to the spatial house type of the target space, may include several different structural elements, for example: door structural elements, window structural elements, and the like, which are used to present the spatial structure corresponding to the target space. Here, the target space may be understood as a single independent target space, or as an overall target space composed of a plurality of independent target spaces; for example, when an individual independent target space is a living room, a dining room, a kitchen, a bedroom, a bathroom, or the like, the overall target space may be a target space composed of at least two of a living room, a dining room, a kitchen, a bedroom, a bathroom, and the like. The spatial house type graph can be obtained through corresponding editing processing based on the point cloud plan of the target space, or through corresponding operation processing based on the panorama of the target space.
The space outline map can be an outline map which is preliminarily constructed and used for representing the overall outline of the target space, a plurality of outline elements can be included on the space outline map, each outline element corresponds to a solid wall body in the target space, and meanwhile, corresponding structural elements can be further included on the outline elements. After the user finishes editing the space outline diagram, the outline elements can be converted into corresponding wall structure elements.
The spatial structure map may be a local area of the space outline map; for example, when the space outline map includes areas corresponding to a plurality of independent target spaces, the spatial structure map may be the outline map corresponding to one of those areas. For example, the former can be displayed in a global editing interface, so that a user can browse the overall outline information corresponding to the target space from a global perspective; the latter can be displayed in a panoramic editing interface, so that when the user edits a certain structural element, the convenience and flexibility of editing are improved through linkage between the spatial structure map and the panoramic image.
The medium may be a spatial structure located in a target space, such as a wall, a door, a window, a water line, and an electric line, where the target space is understood to be a single independent target space.
The medium image may be an image of a spatial structure located in a spatial live-action image, such as an image of a wall, an image of a door, an image of a window, an image of a water line, and an image of an electric line, which correspond to the spatial structure.
The structural elements may be used to represent a spatial structure of a target space in a spatial house type diagram, and may include wall structural elements, door structural elements, window structural elements, water pipeline structural elements, electric wire structural elements, and other structural elements used to represent a spatial structure in a target space.
The mark elements may be used as interface elements for marking in the spatial live-action diagram, and different structure elements may correspond to different mark elements, for example, different structure elements, and mark elements of different display styles, so as to be distinguished by different display modes.
Specifically, referring to fig. 1, a flowchart illustrating steps of a prompting method in an information processing process provided in an embodiment of the present invention is shown, which may specifically include the following steps:
step 101, displaying a spatial profile map in an editing state corresponding to a target space, where the spatial profile map includes profile elements and at least one structural element located on the profile elements, where the structural element is an element mapped onto a corresponding profile element in the spatial profile map according to a target medium image identified by a spatial live-action map, and the spatial live-action map is a panoramic image used for identifying the target medium image in panoramic data acquired by at least one acquisition point in the target space;
the house pattern editing related in the embodiment of the present invention may be a process of editing immediately after data acquisition is performed on a target space, may also be a process of supplementary editing of a space house pattern corresponding to a certain target space in an entire house pattern after the space house pattern of a plurality of target spaces is spliced to obtain the entire house pattern of the entire space, and may also be a process of continuously editing at a breakpoint. A user can hold the electronic terminal by hand to search a proper acquisition point in a target space and acquire an image of the target space at the acquisition point to obtain corresponding image data.
The electronic terminal can be an intelligent terminal (a terminal described below) or a camera, and the intelligent terminal can run a corresponding application program (such as an image acquisition program) and can be positioned by a sensor of the intelligent terminal in the acquisition process, and the current position in the target space where the intelligent terminal is located is output in real time in a graphical user interface, so that a user can execute a corresponding image acquisition strategy through the real-time position, and similarly, the camera can also execute corresponding operation. In addition, for the electronic terminal, it may include at least two types of sensors, and in the process of performing image acquisition on the target space, the electronic terminal may acquire the point cloud data corresponding to the target space through the laser scanning device on the one hand, and may acquire the panoramic image corresponding to the target space through the panoramic camera on the other hand, so that in the process of image acquisition, a point cloud plan corresponding to the target space may be constructed based on the point cloud data, and a space live view corresponding to the target space may be constructed through the panoramic image, and the like, which is not limited in this respect.
In an example, referring to fig. 2, a schematic diagram of data acquisition provided in the embodiment of the present invention is shown, assuming that a user performs data acquisition on a target space through three acquisition points in the target space through a terminal, including an acquisition point (1), an acquisition point (2), and an acquisition point (3), the acquired data may include point cloud data a and panoramic data a corresponding to the acquisition point (1), point cloud data B and panoramic data B corresponding to the acquisition point (2), and point cloud data C and panoramic data C corresponding to the acquisition point (3), so that in an image acquisition process, a point cloud plan corresponding to the target space may be constructed based on the point cloud data, a space live view corresponding to the target space may be constructed through a panoramic image, and the like.
It should be noted that, when each acquisition point performs data acquisition, and when one acquisition point triggers to perform one-time data acquisition, the terminal may perform corresponding data acquisition operations respectively through the laser scanning device and the image acquisition sensor based on the same acquisition point, so as to obtain different types of data such as point cloud data and panoramic data acquired at the time, so that the terminal performs different data processing operations based on the different types of data. The invention is not limited in this regard.
Further, the point cloud data corresponding to each acquisition point can be obtained through either of the following two methods:
taking the acquisition point (1), the acquisition point (2) and the acquisition point (3) as an example, assuming that the acquisition point (1), the acquisition point (2) and the acquisition point (3) are in a sequential acquisition order, the sequentially acquired data may include point cloud data a and panoramic data a corresponding to the acquisition point (1), point cloud data B and panoramic data B corresponding to the acquisition point (2) and point cloud data C and panoramic data C corresponding to the acquisition point (3), wherein the point cloud data a ' currently acquired at the acquisition point (1) may be directly used as the point cloud data a, the point cloud data B ' currently acquired at the acquisition point (2) may be directly used as the point cloud data B, and the point cloud data C ' currently acquired at the acquisition point (3) may be directly used as the point cloud data C.
In the second method, with the same acquisition order, the point cloud data a' currently acquired at the acquisition point (1) may be directly used as the point cloud data a, the point cloud data b' currently acquired at the acquisition point (2) is point-cloud-fused with the point cloud data a to obtain the point cloud data b, and the point cloud data c' currently acquired at the acquisition point (3) is point-cloud-fused with the point cloud data b (and the point cloud data a) to obtain the point cloud data c.
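The two methods above can be sketched in a few lines of illustrative Python (the function name is an assumption, and the points are assumed to already share one world coordinate frame; a real pipeline would register the scans, e.g. with ICP, before merging):

```python
def fuse_point_clouds(accumulated, current):
    """Fuse the current scan with the previously accumulated cloud
    (the second method above). Points are (x, y, z) tuples assumed to
    share one world coordinate frame; a real pipeline would register
    the scans (e.g. with ICP) before merging them."""
    return list(accumulated) + list(current)

# Method one: each acquisition point keeps only its own scan.
cloud_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # point cloud data a'
scan_b = [(1.0, 1.0, 0.0)]                      # point cloud data b'

# Method two: acquisition point (2) fuses its scan with point cloud
# data a, so point cloud data b covers both acquisition points.
cloud_b = fuse_point_clouds(cloud_a, scan_b)
print(len(cloud_b))  # 3
```

Method two trades extra memory for a cloud that grows to cover the whole target space, which is what lets the space outline map be refined as acquisition proceeds.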
Based on the point cloud data and the panoramic data collected at the collection point, a corresponding space outline graph can be constructed, meanwhile, the space outline graph can be displayed by the terminal and placed in an editable state, so that a user can edit the space outline graph, and corresponding structural elements are added, modified and deleted on the space outline graph, so that the information contained in the space outline graph is matched with the space structure of a target space. The space profile map may include a plurality of profile elements and at least one structural element located on the profile elements, where the structural element may be an element that maps a target medium image identified according to the space real map onto a corresponding profile element in the space profile map, and the space real map is a panoramic image used for identifying the target medium image in panoramic data acquired by at least one acquisition point in the target space. For example, for a contour element, it may be used to represent a solid wall structure of a target space, and a structural element may be used to represent a solid structure of a door, a window, etc. of the target space, and different doors and windows may be located on corresponding walls, so that the spatial structure of the target space may be represented by the contour element and the structural element.
102, responding to an editing instruction of a target structure element on the space outline map, and displaying the space live-action map corresponding to the target structure element;
After the editing of the outline elements is completed, the outline elements on the spatial outline drawing can basically present the outline of the target space. As for the structural elements located on the outline elements, which are identified by the terminal according to the panoramic data, the spatial structure and display position they represent do not necessarily correspond to the spatial structure of the target space; for this reason, the user is required to further verify the spatial structure.
In one case, when the terminal fails to identify the corresponding spatial structure such as a door window through the panoramic data, the user may select a contour element, which needs to add a structural element such as a door window, from the spatial contour map for editing, specifically, the terminal may select a target contour element corresponding to the selection operation in response to the selection operation for the contour element, and display an editing control group for the target contour element, where the editing control group includes at least a mark control, and then may display a spatial live-action map corresponding to the target contour element in response to the selection operation for the mark control, so that the user may add the corresponding structural element on the spatial contour map based on the spatial live-action map by presenting the corresponding live-action content.
In another case, when the terminal identifies a corresponding spatial structure such as a door or a window through the panoramic data, but the identification is wrong (a door is identified as a window, a window is identified as a door, etc.) or the display on the spatial profile is wrong, the user may select the structural element to be edited from the spatial profile for editing. Specifically, the terminal may, in response to a selection operation on a structural element, select the target structural element corresponding to the selection operation and display an editing control group for the target structural element, where the editing control group at least includes a mark control; it may then, in response to a selection operation on the mark control, display the spatial live-action diagram corresponding to the target structural element, so that by presenting the corresponding live-action content, the user can adjust the incorrectly identified structural element on the spatial profile based on the spatial live-action diagram.
In addition, no matter whether the user selects the contour element to edit the spatial contour map or selects the structural element to edit the spatial contour map, the terminal may respond to an editing instruction for a target structural element on the spatial contour map, extract the spatial real map corresponding to the target structural element at the current observation angle from the spatial real map, then obtain a target observation point corresponding to the current observation angle and a target observation region corresponding to the target observation point, where the target observation point is a mapping point of a target acquisition point in the spatial contour map, and the target observation region is a mapping region of the spatial real map in the spatial contour map, then display the spatial structure map corresponding to the spatial real map, and display the target observation point and the target observation region in the spatial structure map.
Before the space outline map is edited through the space real image, the terminal can establish a mapping relation between the space outline map and the space real image, and can conveniently identify the space real image in the follow-up process through establishing the mapping relation between the space outline map and the space real image, and the purpose of editing the space outline map according to an identification result is achieved. Specifically, the terminal can acquire a first panoramic pixel coordinate corresponding to a target medium image from the space live-action image and determine a corresponding first three-dimensional point cloud coordinate in the target point cloud data, and then map the first panoramic pixel coordinate to a second three-dimensional point cloud coordinate in a three-dimensional point cloud coordinate system or map the first three-dimensional point cloud coordinate to a second panoramic pixel coordinate in the panoramic pixel coordinate system according to a relative pose relationship between the target point cloud data and equipment for acquiring the space live-action image, so that data conversion in different coordinate systems is realized by taking the point cloud coordinate system or the panoramic coordinate system as a reference coordinate system, and further, the editing of the space profile image is conveniently realized by subsequently editing the space live-action image by establishing a coordinate mapping relationship between the space profile image and the space live-action image.
Illustratively, panoramic data b can be acquired according to the acquisition point (2), image recognition is carried out on the panoramic data b, when an image of a target medium is recognized in a panoramic image, panoramic pixel coordinates of the target medium in a corresponding panoramic image can be acquired according to the target medium image, and the panoramic pixel coordinates of the target medium are mapped to a coordinate system of a three-dimensional point cloud image of a target space to obtain three-dimensional point cloud coordinates. For example, panoramic pixel coordinates corresponding to the outlines of the door body and the window body can be mapped into three-dimensional point cloud coordinates.
Optionally, according to the mapping relationship between panoramic pixel coordinates and spherical coordinates, the panoramic pixel coordinates respectively corresponding to the contours of the media (wall, door, window, electric wire, water pipeline, etc.) are mapped into spherical space to obtain the corresponding spherical coordinates; further, according to the relative pose relationship between the panoramic camera and the laser scanning device, and in combination with the mapping relationship between spherical coordinates and three-dimensional point cloud coordinates, the spherical coordinates respectively corresponding to the medium contours are mapped into the three-dimensional point cloud coordinate system. Optionally, when mapping the panoramic pixel coordinates corresponding to a medium contour to spherical coordinates, the pixel at the upper left corner of the panoramic image is taken as the origin; assuming that the height and width of the panoramic image are H and W respectively, and the pixel coordinate of each pixel is Pixel(x, y), the longitude Lon and latitude Lat of the spherical coordinate mapped from each panoramic pixel coordinate are:
Lon=(x/W-0.5)*360;
Lat=(0.5-y/H)*180;
further, an origin O1 (0, 0) of the spherical coordinate system is established, and assuming that the radius of the spherical coordinate system is R, the spherical coordinates (X, Y, Z) of each panoramic pixel coordinate after mapping are respectively:
X=R*cos(Lon)*cos(Lat);
Y=R*sin(Lat);
Z=R*sin(Lon)*cos(Lat);
further, when mapping from the spherical coordinate system to the three-dimensional point cloud coordinate system, the medium may be scanned by the laser scanning device, and each spherical coordinate is mapped to the corresponding coordinate P = Q * (X + x0, Y + y0, Z + z0) after rotation and translation transformation; wherein x0, y0 and z0 are the components of the origin O2 (x0, y0, z0) of the three-dimensional point cloud coordinate system, rotationY is the rotation angle of the laser scanning device around the Y axis of the world coordinate system, and Q is the quaternion obtained from rotationY through a system quaternion function.
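Putting the Lon/Lat and X/Y/Z formulas above together, the pixel-to-sphere mapping can be sketched as follows (illustrative Python; the function name and unit radius are assumptions, and degrees are converted to radians before applying the trigonometric functions):

```python
import math

def pixel_to_sphere(x, y, w, h, r=1.0):
    """Map a panoramic pixel (origin at the top-left corner of the
    panorama) to spherical coordinates per the formulas above."""
    lon = (x / w - 0.5) * 360.0      # longitude Lon in degrees
    lat = (0.5 - y / h) * 180.0      # latitude Lat in degrees
    lon_r, lat_r = math.radians(lon), math.radians(lat)
    X = r * math.cos(lon_r) * math.cos(lat_r)
    Y = r * math.sin(lat_r)
    Z = r * math.sin(lon_r) * math.cos(lat_r)
    return X, Y, Z

# The image centre (x = W/2, y = H/2) maps to Lon = 0, Lat = 0,
# i.e. the point (r, 0, 0) on the sphere.
print(pixel_to_sphere(2048, 1024, 4096, 2048))  # (1.0, 0.0, 0.0)
```

The subsequent rotation and translation into the point cloud coordinate system would then be applied to the (X, Y, Z) result using the device pose.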
Optionally, when determining the three-dimensional point cloud coordinates corresponding to a medium contour, the three-dimensional point cloud coordinates corresponding to a designated spatial position in each functional space may be used as reference coordinates, and the three-dimensional point cloud coordinates corresponding to the medium contour may be determined according to the relationship between the spherical coordinates and the reference coordinates. The embodiment of the present invention does not limit the specific position of the designated spatial position in the target house. Optionally, the three-dimensional point cloud coordinates corresponding to the medium contour in each functional space may be used as reference coordinates; further, the reference coordinates are mapped to a corresponding set of reference spherical coordinates, the intersection between the ray from the origin O1 to the point P in the spherical coordinate system and the reference spherical coordinates is determined, and the three-dimensional point cloud coordinate corresponding to that intersection point is used as the three-dimensional point cloud coordinate corresponding to the medium contour.
Of course, the spherical coordinates corresponding to a known object in the target house may also be used as the reference spherical coordinates. For example, with the spherical coordinates corresponding to the ground as the reference, the intersection between the ray from the origin O1 to the point P and the reference spherical coordinates, that is, the intersection with the plane where the ground lies, may be determined, and the three-dimensional point cloud coordinate corresponding to that intersection point may be used as the three-dimensional point cloud coordinate corresponding to the medium contour, so as to determine the mapping area of the target medium image in the space profile. Through the above coordinate conversion process, the mapping relationship between the space profile and the panorama can be established, so that the user can edit the space profile by marking the panorama.
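The ray-intersection step can be sketched for the simplest reference surface, a horizontal ground plane (an assumption for illustration; the embodiment also allows intersecting a set of reference spherical coordinates). Function and variable names are illustrative:

```python
def ray_plane_intersection(p, plane_y):
    """Intersect the ray from the spherical origin O1 through point P
    with a horizontal reference plane y = plane_y (e.g. the ground),
    returning the 3D point used as the point cloud coordinate of the
    medium contour, or None when no forward intersection exists."""
    px, py, pz = p
    if abs(py) < 1e-12:          # ray parallel to the plane
        return None
    t = plane_y / py             # O1 + t * P lies on the plane
    if t <= 0:                   # plane is behind the viewing ray
        return None
    return (t * px, t * py, t * pz)

# Looking 45 degrees downward from a camera 1.5 m above the floor,
# the ray hits the floor about 1.5 m in front of the camera.
hit = ray_plane_intersection((0.7071, -0.7071, 0.0), plane_y=-1.5)
```

Rays pointing above the horizon return None, which is why a reference surface that the marked contour actually touches (such as the floor for a door bottom edge) has to be chosen.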
In addition, the space profile may include a mapping point corresponding to the second target collection point, after the space profile corresponding to the target space is obtained, the terminal may further display the space profile, and simultaneously, take the mapping point of the second target collection point on the space profile as a target observation point, and take the target observation point as an origin point and point to the direction of the structural element as a target observation area, then display a space structure diagram corresponding to the space profile, and display the target observation point and the target observation area in the space structure diagram. For example, the terminal may display a spatial profile corresponding to the whole house (e.g., a profile composed of different areas such as a living room, a kitchen, a bedroom, and a bathroom) in the global editing interface, and may display a partial spatial profile corresponding to the spatial live view (e.g., a house type map corresponding to a single spatial structure such as a living room, a kitchen, a bedroom, and a bathroom) in the panoramic editing interface, and the partial spatial profile displayed in the panoramic editing interface may be a spatial structure map used to represent a profile corresponding to a specific functional space in the target space, which is not limited in this respect.
For a target acquisition point, the spatial live-action map may be an image region of the medium corresponding to at least a part of the target structural element acquired from the panoramic data acquired at a second acquisition point in the target space, where the second acquisition point may be an optimal acquisition point of the medium corresponding to the target structural element among the acquisition point (1), the acquisition point (2) and the acquisition point (3) in fig. 2.
In one example, the best acquisition point among the acquisition point (1), the acquisition point (2) and the acquisition point (3) is the acquisition point closest to the medium corresponding to the target structural element, and as the second acquisition point, for example, for a certain solid wall in the target space, the corresponding distances from the acquisition point (1), the acquisition point (2) and the acquisition point (3) are respectively 2 meters, 3 meters and 5 meters, the acquisition point (1) can be used as the best acquisition point relative to the solid wall.
In another example, among the acquisition point (1), the acquisition point (2) and the acquisition point (3), the acquisition point closest to the forward shooting direction of the medium corresponding to the target structural element is taken as the optimal acquisition point and used as the second acquisition point. For example, taking the camera as the origin and a ray emitted from it as the forward shooting direction, for the same solid wall in the target space, a smaller included angle between the line connecting the wall and the camera and that ray indicates that the wall is closer to the forward shooting direction, so the acquisition point with the smallest included angle can be used as the optimal acquisition point relative to the solid wall.
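The two selection strategies in the examples above can be sketched as follows (illustrative Python over a 2D plan view; function names and the geometry simplification are assumptions, and forward directions are assumed to be unit vectors):

```python
import math

def best_acquisition_point(points, wall_center, forward_dirs=None):
    """Pick the index of the best acquisition point for a wall.

    points        - list of (x, y) acquisition-point positions
    wall_center   - (x, y) centre of the medium (wall) being edited
    forward_dirs  - optional list of unit forward-shooting directions;
                    when given, the point whose forward ray makes the
                    smallest angle with the line to the wall wins,
                    otherwise the nearest point wins.
    """
    def dist(p):
        return math.hypot(p[0] - wall_center[0], p[1] - wall_center[1])

    if forward_dirs is None:
        return min(range(len(points)), key=lambda i: dist(points[i]))

    def angle(i):
        vx = wall_center[0] - points[i][0]
        vy = wall_center[1] - points[i][1]
        n = math.hypot(vx, vy) or 1e-12
        cos_a = (vx * forward_dirs[i][0] + vy * forward_dirs[i][1]) / n
        return math.acos(max(-1.0, min(1.0, cos_a)))

    return min(range(len(points)), key=angle)

# Distances of 2 m, 3 m and 5 m, as in the first example:
pts = [(2.0, 0.0), (3.0, 0.0), (5.0, 0.0)]
print(best_acquisition_point(pts, (0.0, 0.0)))  # 0
```

A production implementation could combine both criteria, e.g. preferring small angles and breaking ties by distance.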
Specifically, as described above, a user may perform data acquisition on a target space at least one acquisition point in that space; each acquisition point corresponds to point cloud data and panoramic data, where the point cloud data is used to construct the corresponding spatial house type diagram and the panoramic data is used to construct the space live-action view (i.e., the panorama). When the user performs data acquisition at multiple acquisition points in the same target space, different acquisition points may correspond to different acquisition perspectives, and the panoramic data acquired from different acquisition points may overlap; for example, the acquisition perspectives of two different acquisition points may both cover the same wall. In this case, when editing the wall structural element corresponding to that wall, the terminal may select the optimal acquisition point relative to the wall from the two acquisition points involved. Thus, after determining the first structural element to be edited in the spatial house type diagram, the terminal may derive the optimal first acquisition point relative to that structural element based on the relationship "panoramic data - structural element - acquisition point", obtain the panoramic data corresponding to the first acquisition point, and then extract the corresponding image area from that panoramic data as the space live-action view.
It can be understood that, based on the above description, in order to fully display an image area corresponding to a structural element that needs to be edited, in the process of editing a spatial house type diagram, the spatial house type diagram may be constructed based on point cloud data a acquired by an acquisition point (1), when the structural element in the spatial house type diagram is edited, exemplarily, by the method for determining an optimal acquisition point, it is determined that an acquisition point (2) is an optimal acquisition point with respect to a medium corresponding to the structural element, then panoramic data b corresponding to the acquisition point (2) is called, and an image area at least covering part of the medium corresponding to the structural element is acquired according to the panoramic data b to acquire a spatial live view for display.
In addition, based on the above scheme, while the terminal displays the space live-action view, it may acquire the target observation point corresponding to the current observation angle and the target observation region corresponding to that point, where the target observation point may be the mapping point of the acquisition point (2) in the space house type diagram, and the target observation region is the mapping region of the space live-action view in the space house type diagram; exemplarily, the mapping region may be represented by a sector region centered on the mapping point of the acquisition point (2). The space house type diagram corresponding to the space live-action view is displayed in the graphical user interface, and the target observation point and the target observation region are displayed in the space house type diagram; at the same time, the space live-action view, which includes an image area covering at least part of the medium corresponding to the structural element, and the space house type diagram of the target space are displayed together in the graphical user interface and linked with each other. This improves the richness of information display during house type diagram editing, realizes linkage between marking in the space live-action view and display of the space house type diagram, uses the space live-action view to assist in editing the space house type diagram, visually presents the marking results during editing, and improves global perception of the marked content of the target space.
In an example, referring to fig. 3, which shows a schematic diagram of a space structure diagram provided in the embodiment of the present invention, while displaying a space real view 310 corresponding to a current observation angle, a terminal may simultaneously display a space structure diagram 320 corresponding to the space real view 310 in a graphical user interface, and based on a determined target observation point and a target observation region, select a corresponding observation point 330 and display an observation region 340 (a sector region in the diagram) corresponding to the observation point 330 in the space structure diagram 320, where as an observation angle of the space real view 310 by a user changes, the observation region 340 may also dynamically change along with the change of the space real view displayed in the graphical user interface, so as to implement linkage of displaying house information contents.
103, in response to the fact that at least two marking elements overlapping the same target medium image are obtained in the space live-action image, respectively obtaining a panoramic area of each marking element in the space live-action image;
After the corresponding space live-action views are displayed, the terminal may perform image recognition on the space live-action view displayed in the current graphical user interface to recognize the media included in it, and then display the mark elements corresponding to those media. During automatic identification, the terminal may make identification errors, resulting in at least two overlapping mark elements being added for the same medium image in the displayed space live-action view. In addition, when the user manually marks the space live-action view, at least two overlapping mark elements may be added to the same medium image due to misoperation or the like; in this case, the terminal may output corresponding prompt information while updating the space outline map based on the mark elements, so as to prompt the user of the abnormal marking condition.
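A minimal sketch of the overlap check in step 103, assuming each mark element's panoramic area is approximated by an axis-aligned rectangle in panoramic pixel coordinates (the embodiment does not prescribe the region representation, and wrap-around at the panorama seam is ignored here):

```python
def regions_overlap(a, b):
    """Check whether two mark-element regions overlap. Each region is
    (x_min, y_min, x_max, y_max) in panoramic pixel coordinates."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def overlapping_pairs(regions):
    """Return index pairs of mark elements whose panoramic areas
    overlap, i.e. candidates for the duplicate-mark prompt."""
    pairs = []
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            if regions_overlap(regions[i], regions[j]):
                pairs.append((i, j))
    return pairs

# Two door marks over the same door image, one window mark elsewhere:
marks = [(100, 200, 300, 600), (150, 250, 320, 610), (900, 200, 1100, 600)]
print(overlapping_pairs(marks))  # [(0, 1)]
```

Each flagged pair would then be mapped, via the panoramic-to-contour coordinate relationship described above, to the target outline element and outline position where the prompt identifier is displayed.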
In a specific implementation, after the corresponding space live-action image is displayed, if the corresponding mark element exists in the displayed space live-action image, the terminal may display the mark element so that the user can edit the mark element; if the corresponding mark elements do not exist, the user can add the corresponding mark elements through a toolbar displayed in the panoramic editing interface, so that the real scene content in the space real scene graph is marked by adding the corresponding mark elements, and then the terminal can update the related house information according to the marking result, such as updating the house type graph corresponding to the target house. The toolbar may include a plurality of structure marking controls corresponding to different marking elements, each structure marking control corresponds to a marking element, each marking element corresponds to a structure element, and different spatial structures are represented.
In an example, referring to fig. 4, which illustrates a schematic diagram of a panoramic editing interface provided in an embodiment of the present invention, the terminal may display a corresponding panoramic editing interface 40 according to an operation of the user, and may display, in the panoramic editing interface 40, a spatial real view 410, a spatial outline 420, a mark element 430 (which may be a mark element already existing in the spatial real view or one added by the user), and an editing control group 440 corresponding to the mark element 430, so that the user may edit the mark element 430 through the editing control group 440. Meanwhile, as the user edits the mark element 430, the structure element 450 corresponding to the mark element 430 may be updated in the spatial outline 420, thereby implementing linkage between the spatial real view and the spatial outline: the editing result of the mark element in the spatial real view is mapped onto the spatial outline of the two-dimensional plane and presented to the user in real time, improving the user's global perception of the target space.
In addition, at least one space point location 460 may be further included in the space outline 420, and the user may jump to another acquisition point of the same functional space by selecting the corresponding space point location 460, so that the user may browse the real-scene content corresponding to the functional space from the browsing perspective of that acquisition point, or edit the markup elements there. In addition, when a plurality of acquisition points exist in the corresponding functional space, a space point location list 470 corresponding to each acquisition point, such as "living room 1", "living room 2", and "living room 3", may be displayed in the panoramic editing interface 40, so that the user may switch between space live-action diagrams not only through the space point locations in the space outline 420 but also through the space point location list 470, thereby improving convenience during editing.
The marking elements corresponding to different spatial structures may be displayed in different display styles, for example, the marking elements may be displayed in yellow, green, red, white, etc. for the door, the window, the water line, the electric wire, etc. to distinguish different spatial structures.
It should be noted that the following embodiments take, as an example, the case where at least two mark elements are added to the same medium in the space live-action diagram during manual editing; it may be understood that the present invention is equally applicable to automatic addition by a machine, and the present invention is not limited thereto.
In the process of editing the mark elements, the terminal may output prompt information for at least two mark elements in response to at least two mark elements that are added to the same target medium image in the spatial live-action diagram and overlap with each other, and then acquire panoramic areas corresponding to the at least two mark elements, so as to display corresponding structural elements on the spatial outline diagram according to the panoramic areas corresponding to the mark elements. Specifically, as for the output process of the prompt information, in response to a marking instruction for a target medium image in the spatial live-action image, a first marking element for the target medium image is displayed, and then, in response to an editing operation for the first marking element, if a corresponding second marking element already exists in the target medium image and spatial overlapping occurs between the first marking element and the second marking element after the first marking element is edited or in the editing process, the prompt information for the first marking element and the second marking element is output.
It should be noted that, because mapping relationships such as "spatial structure - structure element - mark element" and "target space - space live-action map - space outline map" exist, a user can edit the space outline map by adding a corresponding mark element to the corresponding medium image in the space live-action map. Specifically, when the user selects any mark element for editing, if at least two mark elements spatially overlap in the space live-action map during or at the end of editing, the terminal can judge that an editing abnormality has occurred, output corresponding prompt information in the space live-action map, and synchronously output, in the space outline map, a first prompt identifier for the at least two spatially overlapping mark elements. On one hand, the user can intuitively learn of the abnormal condition in the editing process through the prompt information fed back in real time in the space live-action map; on the other hand, the user can handle the editing abnormality from a global perspective through the abnormality prompt given by the space outline map. By prompting abnormalities occurring in the editing process, the accuracy of house information editing and the quality of house source information can be effectively improved.
In a specific implementation, when the user selects any mark element for editing, the terminal may determine, in response to the selection operation, a first mark element corresponding to the selection operation and display an editing control group for the first mark element. Then, in response to an editing operation input by the user through the editing control group, the terminal may determine a first mark position of the first mark element according to the editing operation, and if the first mark position spatially overlaps a second mark position of a second mark element in the spatial real view, output prompt information for the first mark element and the second mark element. If only the first mark element exists in the space live-action image, no editing performed on it by the user through the editing control group can cause an abnormal editing condition. The first mark position may be the relative position of the mark element in the space live-action image; the prompt information may include text prompt information, voice prompt information, image prompt information, and the like; and the prompt identifier may be image prompt information, text prompt information, and the like.
It should be noted that spatial overlapping between at least two mark elements may cover the case where the same spatial structure in the space live-action diagram is marked by different mark elements, or the case where it is marked by multiple identical mark elements, and the like, which is not limited in the present invention.
During the editing of a mark element, the terminal may respond to a trigger for at least one endpoint control so that, after the endpoint control completes a first editing operation, the medium image marked by each endpoint of the first mark element is determined in the space live-action diagram according to the first editing operation; and/or respond to a trigger for a move control so that, after the move control completes a second editing operation, a first coordinate of the first mark element in the space live-action image is acquired according to the position of the second editing operation. In addition, the terminal can delete the first mark element from the space live-action diagram in response to a selection operation on a delete control, and can also switch the first mark element from a first display style to a second display style, or switch the second mark element from the first display style to the second display style, and display text prompt information for the first mark element, so that the mark element can be edited through the controls in the editing control group.
In specific implementation, when a user edits a first mark element through an editing control group, a terminal can acquire a panoramic area of the first mark element in a target panoramic image in real time, and for the panoramic area, the panoramic area can include panoramic pixel coordinates mapped by the first mark element in the target panoramic image, so that on one hand, by comparing panoramic pixel coordinates corresponding to different mark elements, whether at least two mark elements overlapping the same target medium image exist in a space live-action image can be judged, on the other hand, corresponding structural elements can be displayed in a space contour map based on the panoramic pixel coordinates, the user can handle editing abnormality from a global angle, and further, by prompting the abnormality occurring in the editing process, the accuracy of house information editing and the quality of house source information can be effectively improved.
In an optional embodiment, the first mark position may include a first coordinate (panoramic pixel coordinate) of the first mark element in a panoramic coordinate system. For coordinate overlap, the terminal may construct the corresponding panoramic coordinate system based on panoramic data corresponding to the acquisition point; as the user edits the first mark element, the terminal may acquire, in real time, the second coordinate in the panoramic coordinate system of each second mark element other than the first mark element in the space live-action map, take the second mark element whose second coordinate overlaps the first coordinate, together with the first mark element, as abnormal mark elements, and then output prompt information for the abnormal mark elements in the space live-action map.
In an example, taking the case where the user has finished editing the first mark element, the terminal may obtain the first coordinate corresponding to the first mark element, whose abscissa range is x1-x20 and whose ordinate is y1. Meanwhile, the terminal may further obtain the second coordinates of other mark elements within the current visual angle in the spatial real view; assuming a second mark element exists whose abscissa range is x11-x18 and whose ordinate is y2, comparison shows an overlapping range "x11-x18" on the abscissa between the first mark element and the second mark element. The terminal may therefore determine that spatial overlap exists between them, take the first mark element and the second mark element as abnormal mark elements, and output corresponding prompt information in the panoramic editing interface, such as the text prompt information "marks cannot be placed in an overlapping manner".
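The abscissa-range comparison in the example above amounts to a closed-interval intersection test. A minimal sketch, with illustrative coordinate values that are not taken from the patent itself:

```python
def ranges_overlap(a_min, a_max, b_min, b_max):
    """True when two closed abscissa ranges share at least one coordinate."""
    return max(a_min, b_min) <= min(a_max, b_max)

# Illustrative panoramic pixel abscissas: the first mark element spans
# x1..x20, the second spans x11..x18, so they overlap on x11..x18.
first_mark = (1, 20)
second_mark = (11, 18)
is_abnormal = ranges_overlap(*first_mark, *second_mark)  # True -> abnormal mark elements
```

The same test applied to both axes yields a full rectangle-overlap check when the ordinates also differ per mark element.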
In another optional embodiment, the first mark position includes a first end point position corresponding to an end point of the first mark element, or a first boundary position corresponding to a boundary of the first mark element. For overlapping of spatial structures, the terminal may identify the spatial structure marked by each mark element in the space live-action diagram, and then compare the spatial structures marked by different mark elements to determine whether different mark elements mark the same spatial structure, or whether multiple identical mark elements mark the same spatial structure. Specifically, the terminal may obtain in real time, for each second mark element other than the first mark element in at least part of the space live-action diagram, a second end point position corresponding to its end point or a second boundary position corresponding to its boundary; take as abnormal mark elements the second mark element whose second end point position marks the same medium as the first end point position, or whose second boundary position marks the same medium as the first boundary position, together with the first mark element; and then output prompt information for the abnormal mark elements in the graphical user interface. Optionally, the end point position, the boundary position, and the like may also be based on the panoramic pixel coordinates in the foregoing embodiments; for example, the end point position may be the end point coordinate corresponding to an end point of the mark element, and the boundary position may be a series of coordinates corresponding to the boundary of the mark element, which is not limited in this disclosure.
It should be noted that, the terminal may determine, through the endpoint, the media image marked by the mark element, and may also determine, by taking the mark element as a whole, the media image marked by the mark element as a whole, and then compare the media images marked by different mark elements, to determine whether the same media image is marked by different mark elements, or the same media image is marked by the same mark element, and the like, which is not limited in the present invention.
In another example, again taking the case where the user has finished editing the first mark element, the terminal may obtain, by image recognition, the first medium image marked by the first mark element in the space live view — say a medium image of a door body — and may also obtain the second medium images marked by other mark elements in the space live view. If a second mark element exists, the first mark element and the second mark element represent the same medium, and the second medium image marked by the second mark element is, as obtained by comparison, also the medium image of that door body, then both mark the same medium image; the terminal may determine that spatial overlap exists between the first mark element and the second mark element, take both as abnormal mark elements, and output corresponding prompt information in the graphical user interface, such as the text prompt information "marks cannot be placed in an overlapping manner". If the first mark element and the second mark element represent different media — say the first mark element is a door body mark element and the second mark element is a window body mark element — but the first medium image marked by the first mark element in the space live view is a medium image of a window body and the second medium image marked by the second mark element is that same medium image of the window body, then the same medium image is again marked twice, and the terminal may likewise determine that spatial overlap exists, take the first mark element and the second mark element as abnormal mark elements, and output corresponding prompt information in the graphical user interface, such as the text prompt information "marks cannot be placed in an overlapping manner".
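Both cases above reduce to grouping mark elements by the medium image they mark and flagging any medium image marked more than once. A minimal sketch; the (mark_id, medium_image_id) pair representation is an assumption for illustration, since the patent leaves the data model open:

```python
from collections import defaultdict

def find_same_medium_marks(marks):
    """marks: (mark_id, medium_image_id) pairs, where medium_image_id
    identifies the medium image a mark element marks in the space live view
    (e.g. obtained by image recognition). Returns the medium images that are
    marked by more than one mark element, i.e. the abnormal groups."""
    groups = defaultdict(list)
    for mark_id, medium_id in marks:
        groups[medium_id].append(mark_id)
    return {m: ids for m, ids in groups.items() if len(ids) > 1}

# A door mark and a window mark both placed on the same window image:
marks = [("door_mark_1", "window_A"),
         ("window_mark_1", "window_A"),
         ("window_mark_2", "window_B")]
abnormal = find_same_medium_marks(marks)  # flags "window_A"
```

Note that the check is keyed on the marked medium image, not on the mark element type, so it catches both identical and mismatched mark elements placed on the same medium.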
Optionally, for the mark element, the mark element may include mark marks of different display styles such as a mark line segment, a mark surface, a three-dimensional mark, and the like, in the process of outputting the prompt information, the terminal may identify a target medium image marked in the space live-action image by the second mark element spatially overlapped with the first mark element, then may display a mask layer for the target medium image marked by the second mark element in the space live-action image, and display the prompt information for the first mark element in the panorama editing interface, so that content of a mark abnormality is instantly and intuitively presented in the space live-action image through the mask layer, a user can conveniently and instantly adjust editing of the mark element, accuracy of the mark element to the space live-action image mark is ensured, and further, matching between the house source information and the entity house is improved. In addition, the terminal can also switch the first mark element from the first display style to the second display style or switch the second mark element from the first display style to the second display style, and display the text prompt information aiming at the first mark element, wherein the first display style and the second display style are different display modes, so that a user can intuitively and quickly know that the mark is abnormal in the house information editing process through differentiated display, and the user can conveniently and timely adjust the mark element.
In addition, besides judging whether different mark elements overlap in the space live-action image, the terminal can also obtain the panoramic area of each mark element in the space live-action image, so that the corresponding structural element is displayed on the space outline image through the panoramic area. The mark element may be an element that "covers" a corresponding image area on the space real image, and the display size (width and height) of the mark element may be determined based on the panoramic pixel coordinates of the upper, lower, left, and right ends of the image area that the mark element "covers" on the space real image. As can be seen from the foregoing, the panoramic area may be represented by panoramic pixel coordinates, i.e. the position and size of the mark element in the panoramic image.
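Deriving the panoramic area (position plus width and height) from the covered pixel coordinates can be sketched as a bounding-box computation; the point-list representation is assumed for illustration:

```python
def panoramic_area(pixel_coords):
    """pixel_coords: (x, y) panoramic pixel coordinates covered by a mark
    element on the space real image. Returns (left, top, width, height),
    i.e. the position and display size of the mark element in the panorama,
    from the extreme coordinates of the covered image area."""
    xs = [x for x, _ in pixel_coords]
    ys = [y for _, y in pixel_coords]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)
```

For a rectangular mark element the four corner pixels suffice; for an irregular covered area the same formula uses all boundary pixels.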
Step 104: acquiring, according to the panoramic area corresponding to each marking element, the target contour element onto which the marking element is mapped in the space contour map and the contour position on the target contour element;
in a specific implementation, the panoramic area includes the panoramic pixel coordinates of the mark element in the space live-action image. The terminal may map the panoramic pixel coordinates corresponding to each mark element into three-dimensional point cloud coordinates based on the mapping relationship between the space live-action image and the space contour image, then separately locate, in the space contour map, the target contour element corresponding to the three-dimensional point cloud coordinates of each mark element and the contour position on that target contour element, and determine which kind of structural element needs to be displayed according to the structural identifier corresponding to the mark element. In this way, through the constructed mapping relationship between the space contour map and the space live-action map, the user can edit the house type graph by marking the corresponding media in the space live-action map, which greatly simplifies the flow of editing the house type graph, improves both the convenience and the efficiency of editing, and, combined with the presented live-action content, improves the accuracy of the marked content. A preset proportional mapping relationship may be used to convert the display size of a mark element into the display size of a structural element, for example a 100:1 relationship; specifically, assuming the width of a door body mark element is 1 meter, the width of the corresponding door body structural element on the spatial profile may be 1 centimeter, which is not limited in the present invention.
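The proportional mapping in the example above (1 meter of real width becoming 1 centimeter on the spatial profile) amounts to a unit conversion under an N:1 scale; the function below is an illustrative sketch, with 100:1 being only the example ratio from the text:

```python
def mark_to_contour_width(mark_width_m, ratio=100.0):
    """Convert a mark element's real-world width in meters into the display
    width, in centimeters, of the corresponding structural element on the
    spatial profile, under an N:1 proportional mapping (100:1 by default,
    as in the example above)."""
    return mark_width_m * 100.0 / ratio  # 100 cm per meter, scaled down by the ratio

door_width_cm = mark_to_contour_width(1.0)  # a 1 m door -> 1 cm on the profile
```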
Optionally, the terminal may obtain the panoramic pixel coordinates of each mark element in the spatial live-action image, where the spatial live-action image may be acquired at a second acquisition point in the target space as exemplary second image acquisition data; and map the panoramic pixel coordinates corresponding to each marking element into the coordinate system of a three-dimensional point cloud image of the target space to obtain three-dimensional point cloud coordinates, where the three-dimensional point cloud image, as exemplary first image acquisition data, is acquired at a first acquisition point of the target space, which is not limited in the present invention.
The coordinate mapping process is exemplarily described below by taking the mapping between the panoramic pixel coordinates and the three-dimensional point cloud coordinates corresponding to the outline of a door body and/or a window body (an exemplary target medium) as an example.
Specifically, to map the panoramic pixel coordinates corresponding to the door body outline and the window body outline into three-dimensional point cloud coordinates, the panoramic pixel coordinates are first mapped into a sphere space according to the mapping relation between panoramic pixel coordinates and sphere coordinates to obtain the corresponding sphere coordinates; further, according to the relative pose relation between the panoramic camera and the laser scanning device and the mapping relation between sphere coordinates and three-dimensional point cloud coordinates, the sphere coordinates respectively corresponding to the door body outline and the window body outline are mapped into the three-dimensional point cloud coordinate system. Optionally, when the panoramic pixel coordinates corresponding to the door body contour and the window body contour are mapped to sphere coordinates, the pixel coordinate at the upper left corner of the panoramic image may be used as the origin. Assuming that the width and height of the panoramic image are W and H, respectively, and the pixel coordinate corresponding to each pixel point is Pixel(x, y), the longitude Lon and the latitude Lat of the sphere coordinate mapped from each panoramic pixel coordinate are respectively:
Lon=(x/W-0.5)*360;
Lat=(0.5-y/H)*180;
further, the origin O1(0, 0, 0) of the spherical coordinate system is established, and assuming that the radius of the spherical coordinate system is R, the sphere coordinates (X, Y, Z) mapped from each panoramic pixel coordinate are:
X=R*cos(Lon)*cos(Lat);
Y=R*sin(Lat);
Z=R*sin(Lon)*cos(Lat);
further, when the door body and the window body are scanned by the laser scanning device, the sphere coordinates are mapped from the spherical coordinate system into the three-dimensional point cloud coordinate system through a rotation and translation transformation, for example P' = Q·(X, Y, Z) + (x0, y0, z0); where (x0, y0, z0) is the origin O2(x0, y0, z0) of the three-dimensional point cloud coordinate system, rotationY is the rotation angle of the laser scanning device around the Y axis of the world coordinate system, and Q is the quaternion obtained from rotationY through a system quaternion function.
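The mapping chain above can be sketched as follows. The Lon/Lat and (X, Y, Z) formulas are taken directly from the text; the system quaternion function is not specified, so under that assumption the Y-axis rotation is written out as a plain rotation matrix, which is equivalent to applying a quaternion for a pure Y-axis rotation:

```python
import math

def pixel_to_sphere(x, y, W, H, R=1.0):
    """Panoramic pixel (origin at the top-left corner of a W x H panorama)
    -> sphere coordinates, using the longitude/latitude formulas above."""
    lon = math.radians((x / W - 0.5) * 360.0)
    lat = math.radians((0.5 - y / H) * 180.0)
    return (R * math.cos(lon) * math.cos(lat),   # X
            R * math.sin(lat),                   # Y
            R * math.sin(lon) * math.cos(lat))   # Z

def sphere_to_point_cloud(p, rotation_y_deg, origin_o2):
    """Sphere coordinates -> three-dimensional point cloud coordinates:
    rotate around the world Y axis by rotationY, then translate to the
    point cloud origin O2(x0, y0, z0)."""
    a = math.radians(rotation_y_deg)
    X, Y, Z = p
    Xr = X * math.cos(a) + Z * math.sin(a)
    Zr = -X * math.sin(a) + Z * math.cos(a)
    x0, y0, z0 = origin_o2
    return (Xr + x0, Y + y0, Zr + z0)
```

As a sanity check, the center pixel of the panorama maps to Lon = 0, Lat = 0, i.e. the point (R, 0, 0) on the sphere.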
Optionally, when determining the three-dimensional point cloud coordinates corresponding to the door body contour and the window body contour, the three-dimensional point cloud coordinates corresponding to a specified spatial position in each functional space may be used as reference coordinates, so that the three-dimensional point cloud coordinates respectively corresponding to the door body contour and the window body contour are determined according to the relationship between the sphere coordinates and the reference coordinates. The embodiment of the present invention does not limit the specific position of the designated spatial position in the target house. Optionally, the three-dimensional point cloud coordinates corresponding to the wall contour in each functional space may be used as reference coordinates; the reference coordinates are then mapped to a corresponding reference sphere coordinate set, the intersection of the ray from the origin O1 through the point P in the spherical coordinate system with the reference sphere coordinates is determined, and the three-dimensional point cloud coordinate corresponding to that intersection is used as the three-dimensional point cloud coordinate corresponding to the door body contour or the window body contour. Of course, the sphere coordinates corresponding to a known object in the target house may also be used as the reference sphere coordinates; for example, if the sphere coordinates corresponding to the ground are used as the reference sphere coordinates, the intersection of the ray from the origin O1 through the point P with the reference sphere coordinates, that is, with the plane where the ground is located, may be determined, and the three-dimensional point cloud coordinate corresponding to that intersection may be used as the three-dimensional point cloud coordinate corresponding to the door body contour or the window body contour.
Further, the three-dimensional point cloud coordinates may be two-dimensionally mapped onto the spatial contour map, and the target contour element onto which each mark element is mapped, together with the contour position on that target contour element, may be determined based on the mapping result, so that it can be obtained on which contour element of the spatial contour map each mark element is located, and at what position on that contour element.
Step 105: displaying, in the space contour map, the target contour element onto which each marking element is mapped and the contour position on the target contour element, so as to prompt that different marking elements overlap.
Specifically, after the terminal displays the corresponding structural elements in the space contour map according to the target contour element onto which each mark element is mapped and the contour position on that target contour element, the terminal may detect the structural elements on the space contour map; if it is detected that the structural elements respectively corresponding to at least two mark elements overlap in position on the space contour map, a first prompt identifier for the at least two position-overlapping structural elements is displayed on the space contour map to prompt that overlap exists between different mark elements. Thus, in the process of editing the house information, based on the positional relationship between at least two mark elements in the space real view, the terminal can recognize the abnormal condition occurring during editing and output the corresponding prompt identifier in the space contour map, thereby giving an abnormality prompt from a global perspective through the space contour map. The user can then handle the editing abnormality from that global perspective, and by prompting the abnormalities occurring during editing, the accuracy of house information editing and the quality of house source information can be effectively improved.
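The detection step above can be sketched as comparing the spans that structural elements occupy on the contour elements of the spatial profile; the (contour_element_id, start, end) span representation is an assumption for illustration:

```python
def overlapping_structure_pairs(structures):
    """structures: list of (contour_element_id, start, end) spans occupied
    by structural elements on the spatial profile. Returns the index pairs
    whose spans overlap on the same contour element — the candidates for a
    first prompt identifier."""
    pairs = []
    for i in range(len(structures)):
        for j in range(i + 1, len(structures)):
            c1, s1, e1 = structures[i]
            c2, s2, e2 = structures[j]
            if c1 == c2 and max(s1, s2) < min(e1, e2):
                pairs.append((i, j))
    return pairs

# Two door structural elements overlapping on the same wall contour:
flagged = overlapping_structure_pairs([
    ("wall_1", 0.0, 2.0),
    ("wall_1", 1.5, 3.0),
    ("wall_2", 0.0, 2.0),
])  # only the pair on wall_1 is flagged
```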
In the above process, the first prompt identifier displayed in the spatial profile not only prompts the user as to which structural elements are abnormal but also enables positioning of the mark elements. Specifically, the terminal may, in response to a selection operation on the first prompt identifier, display in the spatial live-action map the at least two mark elements corresponding to the first prompt identifier, select one of them, and display the editing control group for that mark element, so that the user can edit the abnormally marked element through the editing control group, ensuring the accuracy of the marking.
It should be noted that, through the real-time abnormality prompt in the editing process, the user can quickly learn which mark elements are abnormally marked through the prompt information output in the three-dimensional panoramic space; and positioning the mark elements through the prompt identifiers is suitable for scenarios in which the user does not know which mark elements are abnormally marked or needs quick positioning, so that abnormally marked elements are quickly located through the prompt identifiers, which can greatly improve the efficiency of editing house information.
In an example, referring to fig. 5, which shows a schematic diagram of a panoramic editing interface provided in an embodiment of the present invention, when different mark elements spatially overlap at the end of editing or during editing, the terminal may output corresponding prompt information in the panoramic editing interface 50. Specifically, while the user edits the first mark element 510, the terminal detects that the first mark element 510 and the second mark element 520 spatially overlap. On the one hand, the terminal can recognize the spatial structure marked by the second mark element 520 in the spatial live-action diagram 530 (the door body in fig. 5), display a mask layer 540 over that spatial structure to prompt the user of the abnormal editing in a visual manner, and at the same time output corresponding prompt information 550 in the graphical user interface, such as "marks cannot be placed in an overlapping manner". On the other hand, owing to the mapping relationship between structure elements and mark elements, the terminal can also synchronously output the corresponding prompt identifier 570 in the spatial profile 560. Thus, when the user edits the house information, the terminal can, through the mapping relationship "mark element - structure element - spatial structure", identify the editing result of the house information or the abnormal condition occurring during editing, output the corresponding prompt information in the spatial real-scene graph, and synchronously output the corresponding prompt identifier in the spatial profile.
In the embodiment of the present invention, after the user finishes editing the mark elements through the above process, the corresponding editing result may be saved through a saving control provided in the panoramic editing interface. Specifically, the terminal may respond to a saving instruction for the mark elements (for example, generated through the saving control) by detecting the panoramic image corresponding to each functional space, and if multiple mark elements mark the same medium image in the panoramic image corresponding to at least one functional space, output, in the spatial profile corresponding to the target space, a second prompt identifier indicating that the same medium image is marked repeatedly.
In a specific implementation, because a plurality of functional spaces exist in the target house, the terminal can often only display the space live-action diagram corresponding to one functional space while a mark element is being edited, which easily leads to an abnormal mark in the space live-action diagram of some functional space going unnoticed. If an abnormal mark exists, a corresponding second prompt identifier can be output in the floor plan corresponding to the target house, and the user can directly locate the abnormal mark element through the second prompt identifier. Specifically, the terminal can respond to a selection operation for the second prompt identifier, determine a target prompt identifier, locate the target functional space corresponding to the target prompt identifier in the target space, display the panoramic image corresponding to the target functional space, determine the overlapping mark elements corresponding to the target prompt identifier, select one of the overlapping mark elements, display an editing control group for that mark element, and edit the mark element through the editing control group to ensure marking accuracy. On one hand, this optimizes the editing process through one-step positioning; on the other hand, by detecting the editing result and outputting the prompt identifier, the editing accuracy of the house information is effectively guaranteed.
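For illustration only, the save-time detection described above (grouping the mark elements of each functional space by the medium image they mark, and flagging any space in which the same medium image is marked more than once) might be sketched as follows; this is not part of the claimed method, and all names and data shapes are hypothetical:

```python
from collections import defaultdict

def find_duplicate_marks(spaces):
    """spaces: {space_id: [(mark_id, medium_image_id), ...]}.
    Returns {space_id: [medium_image_id, ...]} for every functional space
    whose panorama has several mark elements on the same medium image."""
    flagged = {}
    for space_id, marks in spaces.items():
        by_medium = defaultdict(list)
        for mark_id, medium_id in marks:
            by_medium[medium_id].append(mark_id)
        # A medium image marked by more than one element is abnormal.
        dup = [m for m, ids in by_medium.items() if len(ids) > 1]
        if dup:
            flagged[space_id] = dup
    return flagged
```

Each flagged space would then receive a second prompt identifier in the floor plan, which the user can select to jump to the corresponding panorama.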
It should be noted that the embodiment of the present invention includes but is not limited to the above examples, and it is understood that, under the guidance of the idea of the embodiment of the present invention, a person skilled in the art may also set the method according to actual requirements, and the present invention is not limited to this.
In the embodiment of the present invention, during the process of editing house information, the terminal may display a spatial profile in an editing state corresponding to a target space, where the spatial profile includes at least contour elements and at least one structure element located on the contour elements. A structure element is an element mapped onto the corresponding contour element in the spatial profile according to a target medium image identified in a spatial live-action diagram, and the spatial live-action diagram is a panoramic image, among the panoramic images acquired at at least one acquisition point, used for identifying the target medium image. During editing of the spatial profile, the terminal may display, in response to an editing instruction for a target structure element on the spatial profile, the spatial live-action diagram corresponding to the target structure element, so that the user marks the spatial live-action diagram to edit the spatial profile. During marking, in response to obtaining at least two mark elements overlapping the same target medium image in the spatial live-action diagram, the terminal may respectively obtain the panoramic area of each mark element in the spatial live-action diagram, obtain, according to the panoramic area corresponding to each mark element, the target contour element mapped by that mark element in the spatial profile and the contour position on the target contour element, and then display these in the spatial profile to prompt that overlap exists between different mark elements. Through the mapping relationship between mark elements in the spatial live-action diagram and contour elements in the spatial profile, abnormalities occurring during editing can be prompted in the spatial profile from a global perspective, so that the user can handle editing abnormalities globally; by prompting the abnormalities occurring in the editing process, the accuracy of house information editing and the quality of house source information can be effectively improved.
It should be noted that, for simplicity of description, the method embodiments are expressed as a series of action combinations, but those skilled in the art should know that the embodiments of the present invention are not limited by the described order of actions, because some steps may be performed in other orders or simultaneously according to the embodiments. Further, those skilled in the art should also know that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Referring to fig. 6, a block diagram of a structure of a prompting device in an information processing process provided in the embodiment of the present invention is shown, and the prompting device specifically includes the following modules:
a contour diagram display module 601, configured to display a spatial contour diagram in an editing state corresponding to a target space, where the spatial contour diagram at least includes contour elements and at least one structural element located on the contour elements, the structural element is an element mapped onto the corresponding contour element in the spatial contour diagram according to a target medium image identified in a spatial live-action diagram, and the spatial live-action diagram is a panoramic image, among the panoramic images acquired at at least one acquisition point, used for identifying the target medium image;
a live-action diagram display module 602, configured to display, in response to an edit instruction for a target structure element on the spatial outline diagram, the spatial live-action diagram corresponding to the target structure element;
a panoramic area obtaining module 603, configured to, in response to obtaining at least two mark elements overlapping with the same target medium image in a space live-action image, obtain a panoramic area of each mark element in the space live-action image;
a contour position determining module 604, configured to obtain, according to the panoramic area corresponding to the markup element, a target contour element mapped in the spatial contour map by the markup element and a contour position on the target contour element;
a structural element displaying module 605, configured to display, in the spatial outline diagram, a target outline element mapped in the spatial outline diagram by each of the mark elements and an outline position on the target outline element, so as to prompt that there is overlap between different mark elements.
In an optional embodiment, the panoramic area obtaining module 603 is specifically configured to:
in response to at least two mark elements which are overlapped and added to the same target medium image in the space live-action image, outputting prompt information aiming at the at least two mark elements;
and acquiring panoramic areas corresponding to the at least two marking elements.
In an optional embodiment, the panoramic area obtaining module 603 is specifically configured to:
displaying a first marking element for a target medium image in the space live-action figure in response to a marking instruction for the target medium image;
responding to the editing operation for the first mark element, and if a corresponding second mark element already exists in the target medium image and a spatial overlap occurs between the first mark element and the second mark element after or during the editing of the first mark element, outputting prompt information for the first mark element and the second mark element.
In an optional embodiment, the panoramic area obtaining module 603 is specifically configured to:
responding to an editing operation for the first marking element, determining a first marking position of the first marking element according to the editing operation, and if the first marking position and a second marking position of a second marking element in the space live-action image are overlapped in space, outputting prompt information for the first marking element and the second marking element.
In an optional embodiment, the first mark position includes a first coordinate of the first mark element in a panoramic coordinate system, and the panoramic area obtaining module 603 is specifically configured to:
acquiring second coordinates of second mark elements in the space live-action image except the first mark elements in the panoramic coordinate system;
and taking a second mark element to which a second coordinate overlapped with the first coordinate belongs and the first mark element as abnormal mark elements, and outputting prompt information aiming at the abnormal mark elements.
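As an illustrative sketch of the coordinate-based overlap check described above: in an equirectangular panorama the horizontal (yaw) axis wraps around at the image seam, so a first coordinate region near the right edge can still overlap a second region near the left edge. The wraparound assumption and all names below are hypothetical, not the patent's definition:

```python
def intervals_overlap(a0, a1, b0, b1, width=None):
    """Check 1-D interval overlap; if width is given, treat the axis as
    cyclic (the panorama yaw axis wraps around at the image seam)."""
    if width is None:
        return a0 < b1 and b0 < a1
    # Test the interval and its two shifted copies around the seam.
    return any(intervals_overlap(a0 + s, a1 + s, b0, b1)
               for s in (-width, 0, width))

def marks_overlap(first, second, pano_width):
    """first/second: (x0, y0, x1, y1) axis-aligned mark regions in
    panoramic pixel coordinates. x wraps; y (pitch) does not."""
    fx0, fy0, fx1, fy1 = first
    sx0, sy0, sx1, sy1 = second
    return (intervals_overlap(fx0, fx1, sx0, sx1, width=pano_width)
            and intervals_overlap(fy0, fy1, sy0, sy1))
```

A second mark element whose region overlaps the first element's region would then be treated, together with the first element, as an abnormal mark element.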
In an optional embodiment, the first mark position includes a first endpoint position corresponding to an endpoint of the first mark element or a first boundary position corresponding to a boundary of the first mark element, and the panoramic area acquisition module 603 is specifically configured to:
acquiring a second endpoint position or a second boundary position corresponding to a boundary, which corresponds to an endpoint of a second mark element except the first mark element, in the space live-action image;
taking a second mark element belonging to a second endpoint position of the same target medium image as the first endpoint position mark as an abnormal mark element, or taking a second mark element belonging to a second boundary position of the same target medium image as the first boundary position mark as an abnormal mark element;
and outputting prompt information aiming at the abnormal marking element.
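The endpoint-based variant can be sketched similarly: two mark elements are treated as marking the same target medium image when any of their endpoint positions coincide within a small tolerance. The tolerance and all names are illustrative assumptions, not the patent's definition:

```python
def find_abnormal_marks(first_endpoints, others, tol=2.0):
    """first_endpoints: endpoint coordinates of the first mark element.
    others: {mark_id: [endpoint, ...]} for the remaining mark elements.
    Two marks are flagged when any pair of their endpoints coincide
    within `tol` pixels (i.e. they mark the same target medium image)."""
    def close(p, q):
        return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol

    abnormal = []
    for mark_id, endpoints in others.items():
        if any(close(p, q) for p in first_endpoints for q in endpoints):
            abnormal.append(mark_id)
    return abnormal
```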
In an optional embodiment, the panoramic area obtaining module 603 is specifically configured to:
displaying an editing control group for the first markup element, wherein the editing control group at least comprises an endpoint control and a mobile control;
in response to the triggering of at least one endpoint control, after the endpoint control completes executing a first editing operation, determining a medium image marked by each endpoint in the first marking element in the space live-action image according to the first editing operation;
and/or responding to the trigger aiming at the mobile control, so that after the mobile control completes the execution of the second editing operation, the first coordinate of the first marking element in the space live-action image is obtained according to the position of the second editing operation.
In an optional embodiment, the panoramic area obtaining module 603 is specifically configured to:
switching the first mark element from a first display style to a second display style or switching the second mark element from the first display style to the second display style, and displaying text prompt information aiming at the first mark element;
the first display style and the second display style are different display modes.
In an alternative embodiment, the panoramic area includes panoramic pixel coordinates of the markup element in the real space scene map, and the contour position determining module 604 is specifically configured to:
mapping the panoramic pixel coordinate corresponding to each marking element into a three-dimensional point cloud coordinate;
and respectively positioning a target contour element corresponding to the three-dimensional point cloud coordinate corresponding to each marking element and a contour position on the target contour element from the space contour map.
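The pixel-to-point-cloud mapping above can be illustrated under the common assumption that the panorama is an equirectangular projection: a panoramic pixel is converted to yaw and pitch angles, those angles give a unit viewing direction, and a depth value (for example, from a point cloud lookup) scales that direction into a 3-D point. The projection convention and all names below are assumptions for illustration, not the patent's implementation:

```python
import math

def panoramic_pixel_to_point(px, py, pano_w, pano_h, depth, cam_pos=(0.0, 0.0, 0.0)):
    """Map an equirectangular panorama pixel to a 3-D point.

    px, py  : pixel coordinates in the panorama
    depth   : distance along the viewing ray (e.g. from a point cloud)
    cam_pos : acquisition-point position in the scene coordinate system
    """
    yaw = (px / pano_w) * 2.0 * math.pi - math.pi    # range [-pi, pi)
    pitch = math.pi / 2.0 - (py / pano_h) * math.pi  # range [pi/2, -pi/2]
    # Unit direction on the viewing sphere (z up, y forward).
    dx = math.cos(pitch) * math.sin(yaw)
    dy = math.cos(pitch) * math.cos(yaw)
    dz = math.sin(pitch)
    return tuple(c + depth * d for c, d in zip(cam_pos, (dx, dy, dz)))
```

Projecting the resulting point onto the floor plane would then give the contour position on the target contour element in the spatial profile.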
In an optional embodiment, the structural element displaying module 605 is specifically configured to:
respectively displaying corresponding structural elements in the space outline according to the mapped target outline elements of the marking elements in the space outline and the outline positions on the target outline elements;
if the positions of the structural elements corresponding to the at least two marking elements on the spatial contour map are overlapped, displaying a first prompt identifier for the at least two structural elements with overlapped positions on the spatial contour map to prompt that the different marking elements are overlapped.
In an optional embodiment, the live-action-picture displaying module 602 is specifically configured to:
in response to an editing instruction of a target structure element on the spatial outline diagram, extracting a spatial live-action diagram corresponding to the target structure element at a current observation visual angle from the spatial live-action diagram;
acquiring a target observation point corresponding to the current observation visual angle and a target observation area corresponding to the target observation point, wherein the target observation point is a mapping point of the target acquisition point in the space profile, and the target observation area is a mapping area of the space real image in the space profile;
and displaying a space contour map corresponding to the space live-action map, and displaying the target observation point or the target observation point and the target observation area in the space contour map.
In an optional embodiment, the target acquisition point is an optimal acquisition point of the at least one acquisition point of the target space relative to the medium corresponding to the target structural element, and the apparatus further comprises:
an acquisition point determining module, configured to select, as an optimal acquisition point, an acquisition point that is closest to a medium corresponding to a target structural element from among at least one acquisition point in the target space, and use the optimal acquisition point as the target acquisition point; or selecting an acquisition point close to the forward shooting direction of the medium corresponding to the target structural element from at least one acquisition point in the target space as an optimal acquisition point as the target acquisition point.
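The two selection strategies of the acquisition point determining module (nearest to the medium, or closest to the medium's forward shooting direction) can be sketched as simple minimizations; the 2-D geometry below is a hypothetical illustration, not the patent's implementation:

```python
import math

def pick_acquisition_point(points, medium_center, medium_normal, mode="closest"):
    """points: list of (x, y) acquisition-point positions.
    medium_center / medium_normal: position and outward-facing normal of
    the medium (e.g. a door) for the target structural element."""
    def distance(p):
        return math.dist(p, medium_center)

    def frontality(p):
        # Cosine between the medium's normal and the direction to the
        # point; larger means the point faces the medium more head-on.
        vx, vy = p[0] - medium_center[0], p[1] - medium_center[1]
        norm = math.hypot(vx, vy) or 1e-9
        cos_a = (vx * medium_normal[0] + vy * medium_normal[1]) / norm
        return -cos_a  # minimizing -cos maximizes frontality

    key = distance if mode == "closest" else frontality
    return min(points, key=key)
```

Either criterion yields the optimal acquisition point whose panorama is then used as the spatial live-action diagram for the target structural element.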
In an alternative embodiment, the target space comprises at least one functional space, the apparatus further comprising:
and the second identifier output module is configured to detect the panoramic image corresponding to each functional space in response to the storage instruction for the marker element, and if there are multiple marker elements in the panoramic image corresponding to at least one functional space to mark the same medium image, output a second prompt identifier that marks the same medium image in the spatial profile corresponding to the target space.
In an optional embodiment, further comprising:
the panoramic image display module is used for responding to the selection operation aiming at the second prompt identifier, determining a target prompt identifier, positioning a target function space corresponding to the target prompt identifier in the target space, and displaying a panoramic image corresponding to the target function space;
and the marking element determining module is used for determining the marking elements which correspond to the target prompt identifier and between which spatial overlap occurs, selecting a selected marking element from the marking elements between which spatial overlap occurs, displaying an editing control group for the selected marking element, and editing the selected marking element through the editing control group.
For the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
In addition, an embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements each process of the above prompting method embodiment in the information processing process and can achieve the same technical effect; to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned prompting method embodiment in the information processing process, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device for implementing various embodiments of the present invention.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards the data to the processor 710 for processing, and it sends uplink data to the base station. Generally, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 702, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may provide audio output related to a specific function performed by the electronic apparatus 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the graphics processor 7041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sounds and may be capable of processing such sounds into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 701 and then output.
The electronic device 700 also includes at least one sensor 705, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 7061 and/or a backlight when the electronic device 700 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as horizontal and vertical screen switching, related games, and magnetometer posture calibration) and vibration-identification-related functions (such as a pedometer and tapping); the sensors 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by the user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 710, receives a command from the processor 710, and executes the command. In addition, the touch panel 7071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 707 may include other input devices 7072 in addition to the touch panel 7071. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 7071 may be overlaid on the display panel 7061, and when the touch panel 7071 detects a touch operation on or near it, the touch operation is transmitted to the processor 710 to determine the type of the touch event; the processor 710 then provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 7 as two separate components to implement the input and output functions of the electronic device, in some embodiments the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 708 is an interface for connecting an external device to the electronic apparatus 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 700 or may be used to transmit data between the electronic apparatus 700 and the external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 709 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 710 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby monitoring the whole electronic device. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The electronic device 700 may also include a power supply 711 (e.g., a battery) for providing power to the various components, and preferably, the power supply 711 may be logically coupled to the processor 710 via a power management system, such that functions of managing charging, discharging, and power consumption may be performed via the power management system.
In addition, the electronic device 700 includes some functional modules that are not shown, and are not described in detail here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part thereof which substantially contributes to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (17)

1. A prompting method in an information processing process is characterized by comprising the following steps:
displaying a space contour map in an editing state corresponding to a target space, wherein the space contour map comprises at least contour elements and at least one structural element located on the contour elements, the structural element is an element mapped to the corresponding contour element in the space contour map according to a target medium image identified in a space live-action map, and the space live-action map is a panoramic image, among the panoramic images acquired at the at least one acquisition point, used for identifying the target medium image;
in response to an editing instruction for a target structural element on the space contour map, showing the space live-action map corresponding to the target structural element;
in response to acquiring, in the space live-action map, at least two mark elements overlapping the same target medium image, respectively acquiring the panoramic area of each mark element in the space live-action map;
acquiring, according to the panoramic area corresponding to each mark element, a target contour element mapped by the mark element in the space contour map and a contour position on the target contour element;
and displaying, in the space contour map, the target contour element to which each mark element is mapped and the contour position on the target contour element, so as to prompt that an overlap exists between different mark elements.
2. The method according to claim 1, wherein the obtaining a panoramic area of each marking element in the spatial live-action map in response to obtaining at least two marking elements overlapping the same target medium image in the spatial live-action map comprises:
in response to at least two mark elements being added in an overlapping manner for the same target medium image in the space live-action map, outputting prompt information for the at least two mark elements;
and acquiring panoramic areas corresponding to the at least two marking elements.
3. The method according to claim 2, wherein the outputting prompt information for the at least two mark elements, in response to at least two mark elements being added in an overlapping manner for the same target medium image in the space live-action map, comprises:
in response to a marking instruction for a target medium image in the space live-action map, displaying a first mark element for the target medium image;
in response to an editing operation for the first mark element, if a corresponding second mark element already exists for the target medium image and a spatial overlap occurs between the first mark element and the second mark element after or during the editing of the first mark element, outputting prompt information for the first mark element and the second mark element.
4. The method according to claim 3, wherein the outputting, in response to the editing operation for the first mark element, prompt information for the first mark element and the second mark element if a corresponding second mark element already exists for the target medium image and a spatial overlap occurs between the first mark element and the second mark element after or during the editing of the first mark element comprises:
in response to an editing operation for the first mark element, determining a first mark position of the first mark element according to the editing operation, and if a spatial overlap occurs between the first mark position and a second mark position of a second mark element in the space live-action map, outputting prompt information for the first mark element and the second mark element.
5. The method of claim 4, wherein the first mark position comprises a first coordinate of the first mark element in a panoramic coordinate system, and the outputting prompt information for the first mark element and a second mark element in the space live-action map if a spatial overlap occurs between the first mark position and a second mark position of the second mark element comprises:
acquiring second coordinates, in the panoramic coordinate system, of second mark elements other than the first mark element in the space live-action map;
and taking the second mark element to which a second coordinate overlapping the first coordinate belongs, together with the first mark element, as abnormal mark elements, and outputting prompt information for the abnormal mark elements.
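As an illustration only (not part of the claims), the coordinate-overlap test recited in claims 4 and 5 can be sketched in a few lines. The representation of a mark element's panoramic coordinates as a (start, end) range, and all names below, are assumptions for this sketch, not the patented implementation:

```python
# Hypothetical sketch of the overlap check in claims 4-5: a mark
# element's position in the panoramic coordinate system is modeled as
# a (start, end) coordinate range; mark elements whose ranges intersect
# the edited one are flagged as abnormal. The data layout is assumed.

def intervals_overlap(a, b):
    """Return True if two (start, end) coordinate ranges intersect."""
    return max(a[0], b[0]) < min(a[1], b[1])

def find_abnormal_marks(first_mark, other_marks):
    """Collect mark elements whose coordinates overlap the edited one."""
    abnormal = [m for m in other_marks
                if intervals_overlap(first_mark["coords"], m["coords"])]
    if abnormal:
        # Per claim 5, the edited (first) mark element is abnormal too.
        abnormal.append(first_mark)
    return abnormal
```

A caller would output prompt information for every element the sketch returns.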
6. The method according to claim 4, wherein the first mark position comprises a first endpoint position corresponding to an endpoint of the first mark element or a first boundary position corresponding to a boundary of the first mark element, and the outputting prompt information for the first mark element and a second mark element in the space live-action map if a spatial overlap occurs between the first mark position and a second mark position of the second mark element comprises:
acquiring a second endpoint position corresponding to an endpoint, or a second boundary position corresponding to a boundary, of each second mark element other than the first mark element in the space live-action map;
taking, as an abnormal mark element, a second mark element whose second endpoint position marks the same target medium image as the first endpoint position, or taking, as an abnormal mark element, a second mark element whose second boundary position marks the same target medium image as the first boundary position;
and outputting prompt information aiming at the abnormal marking element.
7. The method of claim 4, wherein the determining, in response to an editing operation for the first mark element, a first mark position of the first mark element according to the editing operation comprises:
displaying an editing control group for the first mark element, wherein the editing control group comprises at least an endpoint control and a move control;
in response to triggering of at least one endpoint control, after the endpoint control finishes executing a first editing operation, determining, according to the first editing operation, the medium image marked by each endpoint of the first mark element in the space live-action map;
and/or, in response to triggering of the move control, after the move control finishes executing a second editing operation, obtaining the first coordinate of the first mark element in the space live-action map according to the position of the second editing operation.
8. The method of claim 2, wherein the outputting prompt information for the first mark element and the second mark element comprises:
switching the first mark element from a first display style to a second display style, or switching the second mark element from the first display style to the second display style, and displaying text prompt information for the first mark element;
the first display style and the second display style are different display modes.
9. The method according to claim 2, wherein the panoramic area comprises panoramic pixel coordinates of the mark element in the space live-action map, and the acquiring, according to the panoramic area corresponding to the mark element, a target contour element mapped by the mark element in the space contour map and a contour position on the target contour element comprises:
mapping the panoramic pixel coordinates corresponding to each mark element into three-dimensional point cloud coordinates;
and respectively locating, from the space contour map, the target contour element corresponding to the three-dimensional point cloud coordinates of each mark element and the contour position on the target contour element.
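The claims do not specify how panoramic pixel coordinates become three-dimensional point cloud coordinates. As a non-authoritative sketch, assuming a standard equirectangular panorama (an assumption; the patent names no projection), a pixel can be converted to a unit direction ray, which could then be intersected with reconstructed geometry to obtain a 3D point:

```python
import math

def panoramic_pixel_to_ray(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit direction
    vector. Intersecting this ray with scene geometry would yield a 3D
    point cloud coordinate. The equirectangular projection model and all
    names here are assumptions, not the patent's method."""
    lon = (u / width) * 2.0 * math.pi - math.pi    # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (v / height) * math.pi   # latitude in [-pi/2, pi/2]
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))
```

For example, the panorama's center pixel maps to the direction (1, 0, 0) in this convention.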
10. The method of claim 9, wherein the displaying, in the space contour map, the target contour element to which each mark element is mapped and the contour position on the target contour element, so as to prompt that an overlap exists between different mark elements, comprises:
displaying corresponding structural elements in the space contour map according to the target contour element to which each mark element is mapped in the space contour map and the contour position on the target contour element;
if the positions, on the space contour map, of the structural elements corresponding to the at least two mark elements overlap, displaying, on the space contour map, a first prompt identifier for the at least two structural elements with overlapping positions, to prompt that an overlap exists between different mark elements.
11. The method according to claim 1, wherein the showing the space live-action map corresponding to the target structural element in response to an editing instruction for the target structural element on the space contour map comprises:
in response to an editing instruction for a target structural element on the space contour map, extracting, from the panoramic image, the space live-action map corresponding to the target structural element at a current observation visual angle;
acquiring a target observation point corresponding to the current observation visual angle and a target observation area corresponding to the target observation point, wherein the target observation point is the mapping point of the target acquisition point in the space contour map, and the target observation area is the mapping area of the space live-action map in the space contour map;
and displaying the space contour map corresponding to the space live-action map, and displaying, in the space contour map, the target observation point, or the target observation point and the target observation area.
12. The method of claim 11, wherein the target acquisition point is an optimal acquisition point, among the at least one acquisition point of the target space, for the medium corresponding to the target structural element, and the method further comprises:
selecting, from the at least one acquisition point in the target space, the acquisition point closest to the medium corresponding to the target structural element as the optimal acquisition point to serve as the target acquisition point; or
selecting, from the at least one acquisition point in the target space, an acquisition point close to the forward shooting direction of the medium corresponding to the target structural element as the optimal acquisition point to serve as the target acquisition point.
13. The method of claim 1, wherein the target space comprises at least one functional space, the method further comprising:
in response to a storage instruction for the mark elements, detecting the panoramic image corresponding to each functional space, and if a plurality of mark elements in the panoramic image corresponding to at least one functional space mark the same medium image, outputting, in the space contour map corresponding to the target space, a second prompt identifier indicating that the same medium image is marked multiple times.
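Purely as illustrative commentary (not claim text), the save-time check in claim 13 amounts to grouping each functional space's mark elements by the medium image they mark and flagging any image marked more than once. The data layout and all names in this sketch are assumptions:

```python
# Hypothetical sketch of claim 13's save-time detection: for each
# functional space, count how many mark elements point at each medium
# image; any image with more than one mark is flagged for a prompt.
from collections import Counter

def find_duplicate_marks(functional_spaces):
    """Return {space_name: [medium image ids marked by >1 element]}."""
    flagged = {}
    for name, marks in functional_spaces.items():
        counts = Counter(m["medium_image_id"] for m in marks)
        dupes = [img for img, n in counts.items() if n > 1]
        if dupes:
            flagged[name] = dupes
    return flagged
```

Each flagged space would then receive a second prompt identifier in the space contour map.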
14. The method of claim 13, further comprising:
in response to a selection operation for the second prompt identifier, determining a target prompt identifier, locating, in the target space, a target functional space corresponding to the target prompt identifier, and displaying a panoramic image corresponding to the target functional space;
and determining the mark elements, corresponding to the target prompt identifier, between which a spatial overlap occurs, selecting one mark element from the spatially overlapping mark elements as a selected mark element, and displaying an editing control group for the selected mark element, so as to edit the mark element through the editing control group.
15. A presentation apparatus in an information processing process, comprising:
the contour map display module is used for displaying a space contour map in an editing state corresponding to a target space, wherein the space contour map comprises at least contour elements and at least one structural element located on the contour elements, the structural element is an element mapped to the corresponding contour element in the space contour map according to a target medium image identified in a space live-action map, and the space live-action map is a panoramic image, among the panoramic images acquired at the at least one acquisition point, used for identifying the target medium image;
the live-action diagram display module is used for responding to an editing instruction of a target structure element on the space outline diagram and displaying the space live-action diagram corresponding to the target structure element;
the panoramic area acquisition module is used for responding to the acquisition of at least two marking elements which are overlapped with the same target medium image in the space live-action image, and respectively acquiring the panoramic area of each marking element in the space live-action image;
the contour position determining module is used for acquiring a target contour element mapped in the space contour map by the marking element and a contour position on the target contour element according to the panoramic area corresponding to the marking element;
and the structural element display module is used for displaying, in the space contour map, the target contour element to which each mark element is mapped and the contour position on the target contour element, so as to prompt that an overlap exists between different mark elements.
16. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method of any one of claims 1-14 when executing the program stored on the memory.
17. A computer-readable storage medium having stored thereon instructions, which when executed by one or more processors, cause the processors to perform the method of any one of claims 1-14.
CN202211457010.9A 2022-11-21 2022-11-21 Prompting method and device in information processing process, electronic equipment and storage medium Pending CN115729393A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211457010.9A CN115729393A (en) 2022-11-21 2022-11-21 Prompting method and device in information processing process, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211457010.9A CN115729393A (en) 2022-11-21 2022-11-21 Prompting method and device in information processing process, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115729393A true CN115729393A (en) 2023-03-03

Family

ID=85296891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211457010.9A Pending CN115729393A (en) 2022-11-21 2022-11-21 Prompting method and device in information processing process, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115729393A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118096805A (en) * 2024-04-24 2024-05-28 广州开得联智能科技有限公司 Full-scenic spot layout method and device, electronic equipment and medium

Similar Documents

Publication Publication Date Title
US11887312B2 (en) Fiducial marker patterns, their automatic detection in images, and applications thereof
CN111417028B (en) Information processing method, information processing device, storage medium and electronic equipment
CN111145352A (en) House live-action picture display method and device, terminal equipment and storage medium
CN109947886B (en) Image processing method, image processing device, electronic equipment and storage medium
US7120519B2 (en) Remote-controlled robot and robot self-position identification method
JP6469706B2 (en) Modeling structures using depth sensors
US20120105447A1 (en) Augmented reality-based device control apparatus and method using local wireless communication
KR101330805B1 (en) Apparatus and Method for Providing Augmented Reality
CN107330978B (en) Augmented reality modeling experience system and method based on position mapping
CN107782314A (en) A kind of augmented reality indoor positioning air navigation aid based on barcode scanning
US9733896B2 (en) System, apparatus, and method for displaying virtual objects based on data received from another apparatus
CN111968247B (en) Method and device for constructing three-dimensional house space, electronic equipment and storage medium
CN112068752B (en) Space display method and device, electronic equipment and storage medium
CN108921894A (en) Object positioning method, device, equipment and computer readable storage medium
WO2020042968A1 (en) Method for acquiring object information, device, and storage medium
CN111951374A (en) House decoration data processing method and device, electronic equipment and storage medium
CN110944139B (en) Display control method and electronic equipment
CN111078819A (en) Application sharing method and electronic equipment
CN115729393A (en) Prompting method and device in information processing process, electronic equipment and storage medium
CN115904188B (en) Editing method and device for house type diagram, electronic equipment and storage medium
CN110717964A (en) Scene modeling method, terminal and readable storage medium
CN115731349A (en) Method and device for displaying house type graph, electronic equipment and storage medium
CN115830280A (en) Data processing method and device, electronic equipment and storage medium
CN111147750B (en) Object display method, electronic device, and medium
US20230215092A1 (en) Method and system for providing user interface for map target creation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination