
WO2022082704A1 - Model correction method, apparatus, and device


Info

Publication number: WO2022082704A1
Authority: WO (WIPO, PCT)
Prior art keywords: space, models, model, boundary line, model correction
Application number: PCT/CN2020/123136
Other languages: English (en), French (fr)
Inventors: 蔡锫, 徐猛, 汪意伟, 徐涛, 董浩, 杨挺志
Original Assignee: 上海亦我信息技术有限公司
Application filed by 上海亦我信息技术有限公司
Priority to CN202080002532.2A (patent CN112424837B)
Priority to PCT/CN2020/123136 (WO2022082704A1)
Publication of WO2022082704A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/13 Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/04 Architectural design, interior design

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to a model correction method, apparatus, device, and storage medium.
  • the present disclosure is made to solve the above problems, and its purpose is to provide a fast, efficient, intuitive and accurate model correction method, apparatus, device, and storage medium.
  • an embodiment of the present disclosure provides a model correction method, which adopts the following technical solutions:
  • the rough adjustment step is to roughly adjust the positions of the models of the first space and the second space according to the corresponding structures, so that the corresponding structures meet the preset connection establishment conditions and establish a connection relationship;
  • the positions of the models of the first space and the second space are finely adjusted according to the established connection relationship
  • the embodiments of the present disclosure also provide a model correction device, which adopts the following technical solutions, including:
  • a structure determination module for determining corresponding structures in the respective models of the first space and the second space
  • a rough adjustment module configured to roughly adjust the positions of the models of the first space and the second space according to the corresponding structures, so that the corresponding structures meet preset connection establishment conditions and establish a connection relationship;
  • a fine-adjustment module configured to fine-tune the positions of the models of the first space and the second space according to the established connection relationship
  • the embodiments of the present disclosure also provide a computer device, which adopts the following technical solutions, including:
  • a memory and a processor, wherein a computer program is stored in the memory, and the processor implements the method described above when executing the computer program.
  • the embodiments of the present disclosure also provide a computer-readable storage medium, which adopts the following technical solutions, including:
  • a computer program is stored on the computer-readable storage medium, and the computer program implements the aforementioned method when executed by a processor.
  • the present disclosure can realize intuitive understanding of relative spatial positions and corresponding relationships, greatly improve the efficiency of generating spatial models, and improve user experience.
  • FIG. 1 is an exemplary system architecture diagram to which the present disclosure may be applied;
  • FIG. 2 is a flowchart of one embodiment of a model correction method according to the present disclosure
  • FIG. 3A is a schematic diagram of a stage before generating a three-dimensional model in this embodiment;
  • FIG. 3B is a schematic diagram of a stage in the process of generating a three-dimensional model in this embodiment;
  • FIG. 4 is a schematic diagram of one embodiment of a model correction device according to the present disclosure.
  • FIG. 5 is a schematic structural diagram of an embodiment of a computer device according to the present disclosure.
  • the system structure 100 may include terminal devices 101 , 102 , 103 , and 104 , a network 105 and a server 106 .
  • the network 105 is used to provide a communication link between the terminal devices 101 , 102 , 103 , 104 and the server 106 .
  • the electronic device (for example, the terminal device 101 , 102 , 103 or 104 shown in FIG. 1 ) on which the method according to the embodiment of the present disclosure operates can transmit various information through the network 105 .
  • the network 105 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • wireless connection methods may include, but are not limited to, a 3G/4G/5G connection, Wi-Fi connection, Bluetooth connection, WiMAX connection, Zigbee connection, UWB connection, Local Area Network ("LAN"), Wide Area Network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as other means of network connectivity now known or developed in the future.
  • the network 105 can communicate using any currently known or future developed network protocol, such as HTTP (Hypertext Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • the user can use the terminal devices 101, 102, 103, 104 to interact with the server 106 through the network 105 to receive or send messages and the like.
  • the terminal device 101, 102, 103 or 104 may be various electronic devices having a touch screen display and/or supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer 3) players, MP4 (Moving Picture Experts Group Audio Layer 4) players, head-mounted display devices, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PMPs (Portable Multimedia Players), mobile terminals such as vehicle-mounted terminals (e.g., in-vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers.
  • the server 106 may be a server that provides various services, such as a background server that provides support for pages displayed on, or data transmitted by, the terminal devices 101, 102, 103 or 104.
  • terminal devices, networks and servers in FIG. 1 are merely illustrative. There can be any number of terminal devices, networks and servers according to implementation needs.
  • the terminal device, independently or in cooperation with other electronic terminal devices, can run applications on various operating systems, such as the Android system, to implement the method of the embodiments of the present disclosure, and can also run applications on other operating systems, such as the iOS system, the Windows system, the Hongmeng (HarmonyOS) system, or the like, to implement the method of the embodiments of the present disclosure.
  • the model is a three-dimensional model and/or a two-dimensional model established based on images captured inside the first space and the second space. For example, images taken at different positions in each space, or images taken while moving through the spaces, can be imported into the marking system to generate the three-dimensional model and/or two-dimensional model of each space, or can be used to automatically generate the 3D model and/or 2D model of each space; the 3D models and/or 2D models of the individual spaces are then spliced together to generate the overall 3D model and/or 2D model. Alternatively, a point cloud model can be obtained by laser scanning, TOF scanning, structured light scanning, or the like.
  • in the case of point cloud scanning, the room structure (such as the locations of walls) is extracted from the point cloud model, thereby forming a 3D model and/or a 2D model in which each space is independent or connected.
  • errors exist in current modeling technology, including errors in spatial boundary recognition caused by the spatial shape, the shooting angle, and limitations of the recognition technology, as well as errors in positioning (position and direction), so that the final generated overall model is erroneous or imprecise.
  • the model correction method according to the present disclosure is therefore needed to correct the three-dimensional model and/or two-dimensional model of each space more intuitively, more efficiently, and more accurately.
  • the model correction method of the present disclosure can be realized by at least three steps, including a placement step, a rough adjustment step and a fine adjustment step, so as to correct the boundary of a single model, the positions between models, and the connection relationships, and then generate and correct the final model;
  • the rough adjustment step, for example, uses internal adsorption (as shown in the oval box in FIG. 3B), external adsorption (as shown in the rectangular box in FIG. 3B), and the like to determine the connection or correspondence between single models or model groups, so as to achieve rough positioning between individual models or model groups.
  • internal adsorption, external adsorption, and the like allow relatively large positional flexibility between models or model groups that have corresponding relationships, so that even when a corresponding relationship exists between rooms, the models or model groups can still be moved, which ensures that models without an overlapping relationship are not made to overlap.
  • the "what you see is what you get” approach is used to confirm the boundary line of each model or model group, such as the position of the wall.
  • in the model correction method of the present disclosure, there may be situations in which the original boundary line position of a single model is incorrect due to identification errors.
  • in the rough adjustment step, for example, when the models formed from images captured at different shooting positions of the same space are initially positioned, the outer boundary of that space can be visually distinguished (as shown in FIG. 3B, the overall outer boundary line of the living room 6 and the dining room 13 is shown as a solid line, indicating that the room actually has a wall at the solid line) from the original boundary lines enclosed within this same space (as shown in FIG. 3B, the inner original boundary line of the living room 6 and the dining room 13 is shown as a dashed line, indicating that the room actually has no wall at the dashed line and that there is a boundary error caused by inaccurate identification).
  • in the fine adjustment step, the boundary line confirmed in the rough adjustment step (a solid line or a dashed line in the model corresponds to the room having or not having a wall, respectively) is not changed.
  • the coarse adjustment step can confirm the final boundary line in a flexible and convenient manner; the fine adjustment step can adjust and optimize the error without changing the nature of the boundary line (solid line or dashed line, i.e., wall or no wall).
  • the above steps constitute an efficient and accurate 3D model correction method.
  • the model correction method includes the following steps:
  • the corresponding structures are corresponding openings or borderlines of the first space and the second space.
  • the space may be, for example, a room
  • the corresponding structure of the room is, for example, one of a door, a window, an opening, a wall corner, or a wall line of the room.
  • determining the corresponding structures in the respective models of the first space and the second space may be done, for example, through a marking system that determines the structure of the space.
  • the captured images of each space are first synthesized into a 360-degree panorama, and the panorama is then marked in correspondence with the three-dimensional space to determine the corresponding structures, for example, by clicking punctuation points on the structures of the space in the marking system.
  • the punctuation points, and the marking lines formed between them, are determined according to the actual position of each structure in the panorama, such as the walls of the room or the basic object structures of various rooms in the marking system, and are adjusted by dragging in the marking system according to the actual structures in the image.
  • the punctuation points or marking lines of each corresponding structure are extended, or the basic object structures of various rooms are added, such as doors, open spaces, ordinary windows, bay windows, stairs, and the like.
  • basic object structures can also be added on the walls.
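  • a minimal data-structure sketch of the marking described above is given below, assuming a simple Python representation of punctuation points, marking lines, and basic object structures attached to a wall; the field names are illustrative assumptions and not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]   # a punctuation point clicked in the panorama


@dataclass
class MarkedStructure:
    kind: str            # e.g. "door", "opening", "ordinary_window", "bay_window", "stairs"
    points: List[Point]  # punctuation points outlining the structure


@dataclass
class MarkedWall:
    points: List[Point]                                            # punctuation points along the wall
    objects: List[MarkedStructure] = field(default_factory=list)   # structures added on the wall


# Example: a wall of the dining room with one door marked on it
wall = MarkedWall(points=[(0.10, 0.52), (0.34, 0.52)])
wall.objects.append(MarkedStructure("door", [(0.18, 0.52), (0.24, 0.52)]))
print(wall)
```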
  • in the coarse adjustment step, according to the corresponding structures, the positions of the models of the first space and the second space are coarsely adjusted so that the corresponding structures meet the preset connection establishment conditions and a connection relationship is established;
  • a preset connection effect indicating that the two have established a connection relationship appears between the models of the first space and the second space.
  • the preset connection establishment conditions include making the included angle between the corresponding structures of the first space and the second space smaller than the preset angle and/or the distance smaller than the preset distance.
  • the included angles of corresponding structures of the models of different spaces are less than 30° and the distance between the model connection interfaces is less than 1 cm, for example.
  • the preset connection effect may include, for example, one of changing to the same color, an adsorption effect appearing, or a connection logo appearing.
  • here, when the included angle between the corresponding structures of the models of different spaces, for example the "doors" of the rooms, is less than 30° and the distance between the model connection interfaces is less than 1 cm, the doors of the two rooms become the same color, or an adsorption (snap) effect, a connection logo, or any other associated effect appears, so that it can be visually determined that there is a correspondence between the two.
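  • purely as an illustration, a minimal sketch of such a connection-establishment check is given below; the 30° and 1 cm thresholds follow the example values in the text, while the segment representation and function names are assumptions for illustration only.

```python
import math

def angle_between(seg_a, seg_b):
    """Orientation-independent angle (degrees) between two structure segments."""
    (ax1, ay1), (ax2, ay2) = seg_a
    (bx1, by1), (bx2, by2) = seg_b
    va = (ax2 - ax1, ay2 - ay1)
    vb = (bx2 - bx1, by2 - by1)
    dot = va[0] * vb[0] + va[1] * vb[1]
    na, nb = math.hypot(*va), math.hypot(*vb)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
    return min(ang, 180.0 - ang)


def midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)


def meets_connection_condition(door_a, door_b, max_angle_deg=30.0, max_dist_cm=1.0):
    """True when the corresponding structures may establish a connection relationship."""
    ma, mb = midpoint(door_a), midpoint(door_b)
    dist = math.hypot(ma[0] - mb[0], ma[1] - mb[1])
    return angle_between(door_a, door_b) < max_angle_deg and dist < max_dist_cm


# Example: two nearly aligned doors about 0.5 cm apart -> connection established
door_dining = ((0.0, 0.0), (90.0, 0.0))
door_kitchen = ((0.2, 0.5), (90.2, 0.5))
print(meets_connection_condition(door_dining, door_kitchen))  # True
```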
  • later, when the system connects or splices the space models, the spaces with the corresponding relationship are automatically adjusted, aligned, and connected or spliced together.
  • when the models of the first space and the second space are models respectively established for different spaces, in the rough adjustment step S22 the positions of the models of the first space and the second space are roughly adjusted in such a way that the models of the first space and the second space do not overlap;
  • when the models of the first space and the second space are models respectively established for the same space, the positions of the models of the first space and the second space are roughly adjusted in such a way that the models of the first space and the second space at least partially overlap.
  • the model boundary line of the first space and the second space is the first boundary line;
  • when the models overlap, the model boundary line of the overlapping part is the second boundary line; the fine adjustment step S23 described below finely adjusts the positions of the models of the first space and the second space according to the first boundary line and the second boundary line.
  • the coarse adjustment step is used to make the first boundary line and the second boundary line between the spaces correct; for example, the first boundary line and the second boundary line are respectively displayed in preset manners, and the preset manners include, for example, solid line or dashed line display, clear or blurred effect display, different contrast display, different color display, and so on.
  • a solid line indicates the position of the first boundary line of each space, such as a wall
  • a dotted line indicates the second boundary line of the overlapped portion, such as the original boundary line included.
  • a panorama comparison is used to ensure the accuracy of, for example, the solid lines representing the walls of each space; for inaccurate locations, modifications and adjustments are made in the panorama based on the actual location by the marking system described above.
  • a step of pre-arranging the models of the first space and the second space according to the positions and/or directions when the respective images are taken is further included.
  • the models of each space can be placed first according to the positions and/or directions of the images of the three-dimensional model and/or the two-dimensional model at different shooting positions or during the moving process.
  • alternatively, the three-dimensional model of each space may first be converted into a two-dimensional model, and the two-dimensional models of each space arranged according to the positions and/or directions of shooting at the different shooting positions or during the movement process; the technology used in the subsequent steps to achieve this function is not limited;
  • the position and/or orientation at the time of shooting refers to the position and/or orientation when the image used to generate the three-dimensional model was captured at the different shooting positions or during movement; the position and/or orientation can be obtained, for example, from sensors of the photographing device such as a positioning sensor and an orientation sensor.
  • alternatively, the relative displacement and photographing direction information of each photographing location can be obtained by performing feature point matching on images of nearby photographing locations, which is not limited.
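  • a minimal sketch of this placement step is given below, assuming each space model is a 2D polygon and the shooting pose is a position plus a heading angle (e.g. obtained from a positioning sensor and an orientation sensor); the representation and names are illustrative assumptions only.

```python
import math

def place_model(vertices, shot_position, shot_heading_deg):
    """Rotate a model by the shooting heading, then translate it to the shooting
    position, so that the models of different spaces land roughly where they
    belong before the coarse adjustment step."""
    rad = math.radians(shot_heading_deg)
    cos_h, sin_h = math.cos(rad), math.sin(rad)
    px, py = shot_position
    placed = []
    for x, y in vertices:
        rx = x * cos_h - y * sin_h
        ry = x * sin_h + y * cos_h
        placed.append((rx + px, ry + py))
    return placed


# Example: a 3 m x 4 m room captured facing 90 degrees at position (10, 2)
room = [(0, 0), (3, 0), (3, 4), (0, 4)]
print(place_model(room, shot_position=(10, 2), shot_heading_deg=90))
```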
  • At least one of the models of the first space and the second space is a model group composed of models of multiple spaces.
  • for example, the movement may be too fast, so that there are not enough feature points in two adjacent frames of images to match, or there may be interference or changes in the environment during the movement process, so that the position and/or direction cannot be determined continuously.
  • in this case, the models of the multiple spaces whose positions and/or directions can be determined relative to one another form a model group, and the models before and after the interruption are divided into different model groups.
  • the model groups are distinguished, for example, in a preset manner, and each model group is placed separately; the preset manner can be, for example, displaying the three-dimensional models and/or two-dimensional models of the respective model groups before and after the interruption with different border colors, although other ways of distinguishing them can of course also be used.
  • in the fine adjustment step, the openings or boundary lines that have established connection relationships are aligned.
  • the boundary lines where the corresponding structures of the roughly adjusted models of the first space and the second space are located are merged, and the midpoints of the corresponding structures are coincident.
  • the positions of the first boundary line and the second boundary line, such as the solid line and the dashed line, of the model after the rough adjustment is completed remain unchanged.
  • when the models of the first space and the second space are models respectively established for the same space, based on images captured at different shooting locations or by point cloud scanning, in the fine adjustment step S23 only the first boundary line is retained for the two models, so as to correct the errors caused by the connection of the corresponding connection structures.
  • the fine-tuning step further adjusts the spatial overlaps caused by errors and checks the overlaps in all models: when at least one of the models of the first space and the second space has an overlapping part with the model of a third space, the two roughly parallel boundary lines within the preset distance in the overlapping part are merged; that is, the boundary lines in the overlapping part that were originally solid first boundary lines are checked, and if the solid first boundary lines of the originally non-overlapping spaces are substantially parallel, the two first boundary lines are merged into one.
  • for example, when the solid first boundary lines of the originally non-overlapping spaces are substantially parallel and the two boundary lines are within a preset distance, for example 1 cm in the schematic diagram of the model, the two first boundary lines are merged to the middle position between them.
  • a second boundary line, such as a dashed line, in the overlapping portion is deleted to complete the final model.
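  • a minimal sketch of this merging rule is given below, assuming boundary lines are 2D segments with consistently ordered endpoints; the parallelism tolerance and the 1 cm distance follow the example in the text only as illustrative values.

```python
import math

def _direction(seg):
    (x1, y1), (x2, y2) = seg
    length = math.hypot(x2 - x1, y2 - y1)
    return ((x2 - x1) / length, (y2 - y1) / length)


def roughly_parallel(seg_a, seg_b, max_angle_deg=5.0):
    da, db = _direction(seg_a), _direction(seg_b)
    dot = abs(da[0] * db[0] + da[1] * db[1])
    return math.degrees(math.acos(min(1.0, dot))) < max_angle_deg


def merge_parallel_boundaries(seg_a, seg_b, max_dist=1.0):
    """Merge two roughly parallel first-boundary lines (solid walls) to the
    middle position between them when their midpoints lie within `max_dist`."""
    if not roughly_parallel(seg_a, seg_b):
        return None
    (ax1, ay1), (ax2, ay2) = seg_a
    (bx1, by1), (bx2, by2) = seg_b
    mid_dist = math.hypot((ax1 + ax2) / 2 - (bx1 + bx2) / 2,
                          (ay1 + ay2) / 2 - (by1 + by2) / 2)
    if mid_dist > max_dist:
        return None
    # new wall at the middle position of the two original boundary lines
    return (((ax1 + bx1) / 2, (ay1 + by1) / 2),
            ((ax2 + bx2) / 2, (ay2 + by2) / 2))


wall_a = ((0.0, 0.0), (0.0, 4.0))   # vertical wall of one space
wall_b = ((0.8, 0.1), (0.8, 3.9))   # nearly parallel wall of the overlapping space
print(merge_parallel_boundaries(wall_a, wall_b))  # merged middle wall
```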
  • the method further includes: converting the fine-tuned three-dimensional model into a two-dimensional model; determining the orientation of the two-dimensional model according to the entrance of the space.
  • the orientation of the two-dimensional model can be adjusted so that the entrance of the space is located at the top of the two-dimensional model.
  • the position of the entrance of the space can be adjusted according to the actual orientation of each space obtained during the shooting, so as to adjust the overall direction of the two-dimensional model.
  • the position and/or direction at the time of shooting refers to the position and/or direction when the image used to generate the model was captured at different shooting positions or during the movement process; the position and/or direction can be obtained, for example, from sensors of the shooting device such as a positioning sensor and an orientation sensor. Of course, it is also possible to perform feature point matching on images of nearby shooting positions to obtain the relative displacement and shooting direction information of each shooting position, which is not limited.
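  • a minimal sketch of this orientation step is given below, assuming the 2D model is a list of vertices and the entrance is a single point; rotating the plan so the entrance direction points upward is one possible reading of "located above the two-dimensional model", and all names are illustrative assumptions.

```python
import math

def orient_by_entrance(vertices, entrance, center=(0.0, 0.0)):
    """Rotate the floor plan about `center` so the entrance direction points up."""
    ex, ey = entrance[0] - center[0], entrance[1] - center[1]
    # angle needed to bring the entrance direction onto the +y axis
    rot = math.pi / 2 - math.atan2(ey, ex)
    cos_r, sin_r = math.cos(rot), math.sin(rot)
    rotated = []
    for x, y in vertices:
        dx, dy = x - center[0], y - center[1]
        rotated.append((center[0] + dx * cos_r - dy * sin_r,
                        center[1] + dx * sin_r + dy * cos_r))
    return rotated


# Example: rotate a 6 m x 8 m plan so the entrance on its right side ends up on top
plan = [(0, 0), (6, 0), (6, 8), (0, 8)]
print(orient_by_entrance(plan, entrance=(6, 4), center=(3, 4)))
```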
  • an application scenario of the present disclosure is, for example, that during the capture of images for generating a 3D model, the movement is too fast, resulting in insufficient feature points for two adjacent frames of images to match; or during the movement process there is interference or the environment changes, such as entering an unfinished (rough) room or an environment with poor lighting (too dark or too bright); or the shooting is interrupted by external factors, such as an interruption of the shooting route caused by answering the phone.
  • in this case, the three-dimensional model and/or two-dimensional model of at least one space is grouped with the three-dimensional models and/or two-dimensional models of other spaces, and the models of the multiple spaces whose positions and/or orientations can be determined are divided into different model groups.
  • an example in which the space to be corrected is a room is used for description below, which mainly includes the following steps:
  • Step 1: arrange the three-dimensional models and/or two-dimensional models of each space in advance according to the positions and/or directions of the respective images.
  • the spaces in the house include different rooms, such as the master bedroom, secondary bedroom, main bathroom, corridor, and so on, so the established models of each room are pre-arranged according to the positions and/or orientations of their respective images;
  • for example, the movement may be too fast, so that there are not enough feature points for matching between two adjacent frames of images; or during the movement process there may be interference or changes in the environment, for example entering an unfinished (rough) room or an environment with poor lighting (too dark or too bright); or the shooting may be interrupted by external factors, such as an interruption caused by answering the phone. If such an interruption occurs while shooting the living room 6 (or the dining room 13), the models cannot all be preliminarily placed according to the position and/or direction at the time of shooting, and the models of all the spaces shot before and after the point at which the shooting route of the living room 6 (or the dining room 13) was interrupted are grouped separately.
  • as shown in FIG. 3A, which is a schematic diagram of each model group reflected in the two-dimensional model, spaces such as the dining room 13 are divided into one model group and are marked with a dark frame to differentiate them.
  • Step 2: determine the corresponding structures in the respective models of the first space and the second space.
  • the space structure is marked by a marking system.
  • the panorama image is marked in correspondence with the three-dimensional space to determine the corresponding structures; for example, by clicking the punctuation points of the dining room 13 in the marking system, or the marking lines formed between the punctuation points, the walls of the dining room 13 are modified so as to determine the wall position of each room, which is indicated by a solid line.
  • the opening between the dining room 13 and the balcony 7 is adjusted (i.e., the punctuation points or marking lines corresponding to the actual position of the opening are clicked in the marking system), and the opening between the living room 6 and the balcony 7 is adjusted likewise.
  • Step 3: the coarse adjustment step; according to the corresponding structures, coarsely adjust the positions of the models in each space, so that the corresponding structures meet the preset connection establishment conditions, and establish the connection relationship.
  • take, for example, the model group marked with a dark frame line.
  • the door of the dining room 13 and the door of the kitchen 12 are corresponding structures of different rooms, and the positions of the models of the dining room 13 and the kitchen 12 are roughly adjusted in such a way that the model of the dining room 13 and the model of the kitchen 12 do not overlap.
  • for example, when the angle between the two is less than 30° and the distance between the model connection interfaces is less than 1 cm, a preset connection effect appears between the corresponding connection structures; for example, an adsorption effect may appear between the two spaces, and as shown in FIG. 3B, the door of the dining room 13 establishes a connection relationship with the door of the kitchen 12 (shown in the rectangular frame).
  • later, when the system splices the space models, the spaces with the connection relationship are automatically adjusted, aligned and spliced together. Therefore, when the user adjusts the positional relationship between two spaces, it is not necessary to precisely align the edges of the two spaces or the connecting structures such as doors and windows, which greatly reduces the user's workload; the system automatically aligns and splices the two corresponding spaces based on the corresponding relationship, so the splicing accuracy of the model is also greatly improved compared to purely manual adjustment.
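  • purely as an illustration, a minimal sketch of such automatic alignment is given below: once the connection between the two doors is established, one model is rigidly rotated and translated so that the door midpoints coincide and the doors become parallel; the geometry and names are assumptions, not the disclosed implementation.

```python
import math

def _midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)


def _heading(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1)


def snap_model(model_vertices, own_door, target_door):
    """Rigidly transform a model so that `own_door` lines up with `target_door`."""
    rot = _heading(target_door) - _heading(own_door)
    cos_r, sin_r = math.cos(rot), math.sin(rot)
    pivot = _midpoint(own_door)
    target = _midpoint(target_door)

    def transform(p):
        dx, dy = p[0] - pivot[0], p[1] - pivot[1]
        return (target[0] + dx * cos_r - dy * sin_r,
                target[1] + dx * sin_r + dy * cos_r)

    return [transform(p) for p in model_vertices]


# Example: snap a kitchen model so its door coincides with the dining-room door
kitchen = [(0, 0), (3, 0), (3, 3), (0, 3)]
kitchen_door = ((1.0, 0.0), (1.9, 0.0))   # door on the kitchen wall
dining_door = ((5.0, 2.0), (5.0, 2.9))    # corresponding dining-room door
print(snap_model(kitchen, kitchen_door, dining_door))
```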
  • for example, the shooting route may be interrupted in the living room 6 (or the dining room 13), or the shooting position and direction may not be determinable there, and images of the living room 6 (or the dining room 13) continue to be captured after the interruption.
  • the living room 6 is then modeled based on the images of two different shooting positions or shooting routes, and each of the two resulting models respectively forms a model group with the other space models captured before and after the route was interrupted.
  • the two models of the living room 6 (or the dining room 13) need to be connected or spliced into one, and the two model groups should be connected or spliced at the same time.
  • the opening of the dining room 13 and the balcony 7 and the opening of the living room 6 and the balcony 7 are corresponding structures in the same space between different model groups.
  • the positions of the models of the living room 6 and the dining room 13 are roughly adjusted so that the models of the living room 6 and the dining room 13 at least partially overlap.
  • the living room 6 (or the dining room 13) is modeled based on the images of two different shooting positions or shooting routes, and respectively forms a model group with other space models before and after the route is interrupted.
  • alternatively, the shooting route between the two different spaces of the dining room 13 and the kitchen 12 may be interrupted, or the shooting position and direction may not be determinable, and the dining room 13 and the kitchen 12 then respectively form a model group with the other space models before and after the route interruption.
  • the connection relationship is established through the door of the dining room 13 and the door of the kitchen 12, similar to the above model group, and the connection effect and method will not be repeated.
  • when the opening of the dining room 13 and the balcony 7 and the opening of the living room 6 and the balcony 7 meet the preset connection establishment conditions, for example, when the angle between the two is less than 30° and the distance between the model connection interfaces is less than 1 cm, a preset connection effect appears between the corresponding structures, such as an adsorption effect between the two.
  • the dining room 13 and the living room 6 thus establish a connection relationship between models generated from images obtained at different shooting positions in the same space (shown in the oval box). Later, when the system connects or splices the space models, the spaces with corresponding relationships are automatically adjusted, aligned and spliced together.
  • the outer boundary (first boundary line) of each space is represented by a solid line.
  • the model will have overlapping parts.
  • the guest bathroom 8 and the kitchen 12 have overlapping parts due to the establishment of connection relationships with other models, so the original solid first boundary line is displayed as the dashed second boundary line in the overlapping part, and the panorama image is used for comparison to ensure that the solid outer boundary line is accurate.
  • Step 4: the fine adjustment step; according to the established connection relationship, finely adjust the position of the model of each space. For example, in this embodiment, align the opening of the dining room 13 and the balcony 7 with the opening of the living room 6 and the balcony 7, which have established a connection relationship, or align the door of the dining room 13 and the door of the kitchen 12 that have been connected.
  • the models of the dining room 13 and the living room 6 are models of the same space, established based on images captured at different shooting positions or by point cloud scanning; in the fine adjustment step, only one set of solid first boundary lines is kept for the models of the dining room 13 and the living room 6.
  • the corresponding structures of the models of the dining room 13 and the living room 6 after rough adjustment, that is, the boundary lines where the opening of the dining room 13 and the balcony 7 and the opening of the living room 6 and the balcony 7 are located, are merged; for example, the midpoints of the corresponding openings are made to coincide, so that the shorter boundary line of the dining room 13 is merged into the longer boundary line of the living room 6, or the two boundary lines are closely attached in parallel, and so on, which is not limited. To ensure accuracy after overlapping, it is preferable to make the midpoints of the corresponding structures coincide;
  • the second boundary line is displayed in the form of a dashed line, and of course it can also be displayed by means of a blurring effect, a reduced contrast, a color different from that of the first boundary line, and the like.
  • the original boundary line between the guest bathroom 8 and the kitchen 12 is incorrect, and there is an overlapping portion.
  • preliminary positioning of the guest bathroom 8 and the kitchen 12 is carried out, so that the outer boundaries of the guest bathroom 8 and the kitchen 12, that is, the first boundary lines (solid lines), can be visually distinguished, indicating that there is actually a wall between the guest bathroom 8 and the kitchen 12.
  • the original boundary lines between the guest bathroom 8 and the kitchen 12 are solid lines, so the two roughly parallel vertical boundary lines will be combined into one; for example, when the distance between the two substantially parallel vertical boundary lines in FIG. 3B is less than 1 cm, the middle position between the two vertical boundary lines is taken as the position of the new solid line.
  • the dashed second boundary lines generated by the overlapping parts of models photographed at different positions in the same space are then removed to complete the fine-tuning and generate the final model.
  • Step 5: convert the overall three-dimensional model of the space into a two-dimensional model, and determine the orientation of the two-dimensional model according to the entrance of the space.
  • the direction of the two-dimensional model can be adjusted so that the entrance of the space is located at the top of the two-dimensional model.
  • the position of the entrance of the space can be adjusted according to the actual orientation of each space obtained during shooting, so as to adjust the overall direction of the two-dimensional model.
  • the aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a read only memory (ROM), or a random access memory (RAM).
  • an embodiment of the present disclosure provides a model correction device, which can be applied to various electronic terminal devices, as shown in FIG. 4 , including: a structure determination module 401 , a coarse adjustment module 402 , a fine adjustment module 403 , and a placement module 404 .
  • a structure determination module 401 configured to determine the corresponding structures in the respective models of the first space and the second space;
  • the corresponding structures are corresponding openings or borderlines of the first space and the second space.
  • the space may be, for example, a room
  • the corresponding structure of the room is, for example, one of a door, a window, an opening, a wall corner, or a wall line of the room.
  • determining the corresponding structures in the respective models of the first space and the second space may be done, for example, through a marking system that determines the structure of the space.
  • the captured images of each space are first synthesized into a 360-degree panorama, and the panorama is then marked in correspondence with the three-dimensional space to determine the corresponding structures, for example, by clicking punctuation points on the structures of the space in the marking system.
  • the punctuation points, and the marking lines formed between them, are determined according to the actual position of each structure in the panorama, such as the walls of the room or the basic object structures of various rooms in the marking system, and are adjusted by dragging in the marking system according to the actual structures in the image.
  • the punctuation points or marking lines of each corresponding structure are extended, or the basic object structures of various rooms are added, such as doors, open spaces, ordinary windows, bay windows, stairs, and the like.
  • basic object structures can also be added on the walls.
  • the coarse adjustment module 402 is configured to coarsely adjust the positions of the models of the first space and the second space according to the corresponding structures, so that the corresponding structures meet the preset connection establishment conditions and establish a connection relationship;
  • a preset connection effect indicating that the two have established a connection relationship appears between the models of the first space and the second space.
  • the preset connection establishment conditions include making the included angle between the corresponding structures of the first space and the second space smaller than the preset angle and/or the distance smaller than the preset distance.
  • the included angles of corresponding structures of the models of different spaces are less than 30° and the distance between the model connection interfaces is less than 1 cm, for example.
  • the preset connection effect may include, for example, one of changing to the same color, an adsorption effect appearing, or a connection logo appearing.
  • here, when the included angle between the corresponding structures of the models of different spaces, for example the "doors" of the rooms, is less than 30° and the distance between the model connection interfaces is less than 1 cm, the doors of the two rooms become the same color, or an adsorption (snap) effect, a connection logo, or any other associated effect appears, so that it can be visually determined that there is a correspondence between the two.
  • later, when the system connects or splices the space models, the spaces with the corresponding relationship are automatically adjusted, aligned, and connected or spliced together.
  • when the models of the first space and the second space are models respectively established for different spaces, the coarse adjustment module 402 coarsely adjusts the positions of the models of the first space and the second space in such a way that the models of the first space and the second space do not overlap;
  • when the models of the first space and the second space are models respectively established for the same space, the coarse adjustment module 402 roughly adjusts the positions of the models of the first space and the second space in such a way that the models of the first space and the second space at least partially overlap.
  • the model boundary lines of the first space and the second space are first boundary lines; when the models of the first space and the second space adjusted by the coarse adjustment module 402 overlap, the model boundary line of the overlapping part is the second boundary line; the fine adjustment module 403 described below fine-tunes the positions of the models of the first space and the second space according to the first boundary line and the second boundary line.
  • the coarse adjustment module 402 is configured to make the first boundary line and the second boundary line between each space correct, for example, the first boundary line and the second boundary line are displayed in a preset manner, respectively.
  • the preset manners include, for example, display with solid lines or dotted lines, display with clear or blurred effects, display with different contrasts, display with different colors, and the like.
  • a solid line indicates the position of the first boundary line of each space, such as a wall
  • a dotted line indicates the second boundary line of the overlapped portion, such as the original boundary line included.
  • a panorama comparison is used to ensure the accuracy of, for example, the solid lines representing the walls of each space; for inaccurate locations, modifications and adjustments are made in the panorama based on the actual location by the marking system described above.
  • an arrangement module 404 is further included, so that the models of the first space and the second space are pre-arranged according to the positions and/or directions when the respective images were taken.
  • the models of each space can be placed first according to the positions and/or directions of the images of the three-dimensional model and/or the two-dimensional model at different shooting positions or during the moving process.
  • alternatively, the three-dimensional model of each space may first be converted into a two-dimensional model, and the two-dimensional models of each space arranged according to the positions and/or directions of shooting at the different shooting positions or during the movement process; the technology used in the subsequent steps to achieve this function is not limited;
  • the position and/or orientation at the time of shooting refers to the position and/or orientation when the image used to generate the three-dimensional model was captured at the different shooting positions or during movement; the position and/or orientation can be obtained, for example, from sensors of the photographing device such as a positioning sensor and an orientation sensor.
  • alternatively, the relative displacement and photographing direction information of each photographing location can be obtained by performing feature point matching on images of nearby photographing locations, which is not limited.
  • At least one of the models of the first space and the second space is a model group composed of models of multiple spaces.
  • for example, the movement may be too fast, so that there are not enough feature points in two adjacent frames of images to match, or there may be interference or changes in the environment during the movement process, so that the position and/or direction cannot be determined continuously.
  • in this case, the models of the multiple spaces whose positions and/or directions can be determined relative to one another form a model group, and the models before and after the interruption are divided into different model groups.
  • the model groups are distinguished, for example, in a preset manner, and each model group is placed separately; the preset manner can be, for example, displaying the three-dimensional models and/or two-dimensional models of the respective model groups before and after the interruption with different border colors, although other ways of distinguishing them can of course also be used.
  • a fine adjustment module 403, configured to finely adjust the positions of the models of the first space and the second space according to the established connection relationship
  • the fine-tuning module 403 aligns the openings or boundary lines that have established connections.
  • the boundary lines where the corresponding structures of the roughly adjusted models of the first space and the second space are located are merged, and the midpoints of the corresponding structures are coincident. The position of the solid line and the dotted line of the model after the rough adjustment is not changed.
  • when the models of the first space and the second space are models respectively established for the same space, based on images captured at different shooting locations or by point cloud scanning, the fine-tuning module 403 retains only the first boundary line for the two models, so as to correct the error caused by the connection of the corresponding connection structures.
  • the fine-tuning module 403 further adjusts the spatial overlaps caused by errors and checks the overlaps in all models: when at least one of the models of the first space and the second space has an overlapping part with the model of a third space, the two roughly parallel boundary lines within the preset distance in the overlapping part are merged; that is, the boundary lines in the overlapping part that were originally solid first boundary lines are checked, and if the solid first boundary lines of the originally non-overlapping spaces are substantially parallel, the two first boundary lines are merged into one.
  • for example, when the solid first boundary lines of the originally non-overlapping spaces are substantially parallel and the two boundary lines are within a preset distance, for example 1 cm in the schematic diagram of the model, the two first boundary lines are merged to the middle position between them.
  • a second boundary line, such as a dashed line, in the overlapping portion is deleted to complete the final model.
  • in one or more embodiments, the device further includes a conversion determination module (not shown) for converting the overall three-dimensional model of the space into a two-dimensional model and determining the orientation of the two-dimensional model according to the entrance of the space; here, for example, the orientation of the 2D model can be adjusted so that the entrance of the space is located at the top of the 2D model.
  • alternatively, the position of the entrance of the space can be adjusted according to the actual orientation of each space obtained during shooting, so as to adjust the overall direction of the 2D model.
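  • a minimal sketch of how the modules named above (the arrangement module 404, the structure determination module 401, the coarse adjustment module 402, the fine adjustment module 403, and the conversion determination module) could be composed into one device is given below; the class layout and callable interfaces are illustrative assumptions only, not the disclosed implementation.

```python
class ModelCorrectionDevice:
    """Composes the functional modules into the overall correction flow."""

    def __init__(self, arrangement, structure_determination,
                 coarse_adjustment, fine_adjustment, conversion):
        self.arrangement = arrangement                            # module 404: pre-arrange by shooting pose
        self.structure_determination = structure_determination    # module 401: find corresponding structures
        self.coarse_adjustment = coarse_adjustment                # module 402: establish connection relationships
        self.fine_adjustment = fine_adjustment                    # module 403: align and merge boundary lines
        self.conversion = conversion                              # convert 3D -> 2D and orient by entrance

    def correct(self, models, shooting_poses):
        self.arrangement(models, shooting_poses)
        structures = self.structure_determination(models)
        connections = self.coarse_adjustment(models, structures)
        self.fine_adjustment(models, connections)
        return self.conversion(models)
```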
  • each block in the block diagrams of the accompanying drawings may represent a module, a portion of which may contain one or more executable instructions for implementing the specified logical function; the modules are not necessarily executed in sequential order.
  • the modules and functional units in the device embodiments of the present disclosure may be integrated into one processing module, or each unit may exist physically alone, or two or more modules or functional units may be integrated into one module.
  • Each of the above integrated modules can be implemented in the form of hardware or in the form of software function modules. If the integrated modules are implemented in the form of software functional modules and sold or used as independent products, they may also be stored in a computer-readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, and the like.
  • FIG. 5 shows a schematic structural diagram of an electronic device 500 (e.g., the terminal device or server in FIG. 1) suitable for implementing an embodiment of the present disclosure.
  • the terminal device in the embodiment of the present disclosure may be various terminal devices in the above-mentioned system.
  • the electronic device shown in FIG. 5 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 500 may include a processing device (eg, a central processing unit, a graphics processor, etc.) 501 for controlling the overall operation of the electronic device.
  • the processing device may include one or more processors to execute instructions to perform all or part of the steps of the above-described methods.
  • the processing device 501 may also include one or more modules for processing interactions with other devices.
  • the storage device 502 is used to store various types of data, and may include various types of computer-readable storage media or combinations thereof, such as electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to, an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • the sensor device 503 for sensing the specified measured information and converting it into a usable output signal according to a certain law, may include one or more sensors.
  • it may include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor, etc., for detecting changes in the electronic device's open/closed state, relative positioning, acceleration/deceleration, temperature, humidity, and light, etc.
  • the processing device 501 , the storage device 502 and the sensor device 503 are connected to each other by a bus 504 .
  • An input/output (I/O) interface 505 is also connected to bus 504 .
  • the multimedia device 506 may include input devices such as a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, etc., to receive input signals from the user, and the various input devices may cooperate with various sensors of the sensor device 503 to complete, for example, gesture operation input, image recognition input, distance detection input, etc.; the multimedia device 506 may also include output devices such as a liquid crystal display (LCD), a speaker, a vibrator, and the like.
  • the power supply device 507, used to provide power to the various devices in the electronic device, may include a power management system, one or more power supplies, and components that distribute power to other devices.
  • the communication means 508 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data.
  • the above-mentioned devices can also be connected to the I/O interface 505 to realize the application of the electronic device 500 .
  • although FIG. 5 shows an electronic device having various means, it should be understood that not all of the illustrated means are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from a network via a communication device, or installed from a storage device.
  • when the computer program is executed by the processing device, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • a model correction method characterized in that:
  • the rough adjustment step is to roughly adjust the positions of the models of the first space and the second space according to the corresponding structures, so that the corresponding structures meet the preset connection establishment conditions and establish a connection relationship;
  • the positions of the models of the first space and the second space are finely adjusted according to the established connection relationship
  • a model correction method characterized in that:
  • the preset connection establishment condition includes making the included angle between the corresponding structures of the first space and the second space smaller than a preset angle and/or a distance smaller than a preset distance.
  • a model correction method characterized in that:
  • when the models of the first space and the second space are models respectively established for different spaces, in the rough adjustment step, the positions of the models of the first space and the second space are coarsely adjusted in such a way that the models of the first space and the second space do not overlap;
  • when the models of the first space and the second space are models respectively established for the same space, the positions of the models of the first space and the second space are coarsely adjusted in such a way that the models of the first space and the second space at least partially overlap.
  • a model correction method characterized in that:
  • the model boundary line of the first space and the second space is the first boundary line
  • the model boundary line of the overlapping portion is the second boundary line
  • the fine-tuning step fine-tunes the positions of the models of the first space and the second space according to the first boundary line and the second boundary line.
  • a model correction method characterized in that:
  • a model correction method characterized in that:
  • the fine-tuning step further includes removing the second boundary line.
  • a model correction method characterized in that:
  • the corresponding structure is the corresponding opening or boundary line of the first space and the second space;
  • the openings or boundary lines that have established a connection relationship are aligned.
  • a model correction method characterized in that:
  • the boundary lines where the corresponding structures of the roughly adjusted models of the first space and the second space are located are merged, and the midpoints of the corresponding structures are coincident.
  • a model correction method characterized in that:
  • two substantially parallel boundary lines within a predetermined distance range in the overlapping portion are merged.
  • a model correction method characterized in that:
  • a model correction method characterized in that:
  • the model is a three-dimensional model and/or a two-dimensional model established based on images captured inside the first space and the second space;
  • the rough adjustment step also includes a step of prearranging the models of the first space and the second space according to the positions and/or directions when the respective images were taken.
  • a model correction method characterized in that:
  • At least one of the models of the first space and the second space is a model group composed of models of a plurality of spaces.
  • a model correction method characterized in that:
  • the first space and the second space are rooms, and the corresponding structure is at least one of a door, or a window, or an opening, or a wall corner or a wall line of the room.
  • a model correction method characterized in that:
  • the preset connection effect includes at least one of changing to the same color, appearing an adsorption effect, or appearing a connection logo.
  • a model correction device characterized in that it includes:
  • a structure determination module for determining corresponding structures in the respective models of the first space and the second space
  • a rough adjustment module configured to roughly adjust the positions of the models of the first space and the second space according to the corresponding structures, so that the corresponding structures meet preset connection establishment conditions and establish a connection relationship;
  • a fine-adjustment module configured to fine-tune the positions of the models of the first space and the second space according to the established connection relationship
  • a model correction device characterized in that:
  • the preset connection establishment conditions include making the included angle between the corresponding structures of the first space and the second space smaller than a preset angle and/or the distance between them smaller than a preset distance;
  • the preset connection effect includes at least one of a change to the same color, the appearance of a snapping (adsorption) effect, or the appearance of a connection indicator.
  • a model correction device characterized in that:
  • when the models of the first space and the second space are established for different spaces, the coarse adjustment module coarsely adjusts the positions of the models of the first space and the second space in such a way that the two models do not overlap;
  • when the models of the first space and the second space are respectively established for the same space, the coarse adjustment module coarsely adjusts the positions of the models of the first space and the second space in such a way that the two models at least partially overlap.
  • a model correction device characterized in that:
  • when the models of the first space and the second space determined by the coarse adjustment module do not overlap, the model boundary lines of the first space and the second space are first boundary lines;
  • when the models of the first space and the second space determined by the coarse adjustment module overlap, the model boundary line of the overlapping portion is a second boundary line;
  • the fine-tuning module fine-tunes the positions of the models of the first space and the second space according to the first boundary line and the second boundary line.
  • a model correction device characterized in that:
  • when the models of the first space and the second space are respectively established for the same space, the fine adjustment module, for the two models, retains only the first boundary line and removes the second boundary line.
  • a model correction device characterized in that:
  • the corresponding structure is the corresponding opening or boundary line of the first space and the second space;
  • the fine-tuning module aligns the openings or boundary lines for which the connection relationship has been established.
  • a model correction device characterized in that:
  • the fine adjustment module merges the boundary lines on which the corresponding structures of the roughly adjusted models of the first space and the second space lie, and makes the midpoints of the corresponding structures coincide; if, after that merging, at least one of the models of the first space and the second space overlaps the model of a third space, two substantially parallel boundary lines within a preset distance range in the overlapping portion are merged, the two boundary lines being merged to a position midway between them.
  • a model correction device characterized in that:
  • the model is a three-dimensional model and/or a two-dimensional model established based on images captured inside the first space and the second space;
  • the apparatus further includes an arrangement module, which enables the models of the first space and the second space to be pre-arranged according to the positions and/or directions when the respective images are taken.
  • a model correction device characterized in that:
  • At least one of the models of the first space and the second space is a model group composed of models of a plurality of spaces
  • the first space and the second space are rooms, and the corresponding structure is at least one of a door, or a window, or an opening, or a wall corner or a wall line of the room.
  • a computer device characterized by comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor, when executing the computer program, implements the method of any one of the above.
  • a computer-readable storage medium characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method of any one of the above is implemented.

Abstract

A model correction method, apparatus, device, and storage medium. The model correction method comprises: determining corresponding structures in the respective models of a first space and a second space (S21); a rough adjustment step of coarsely adjusting, according to the corresponding structures, the positions of the models of the first space and the second space so that the structures meet a preset connection establishment condition and a connection relationship is established (S22); and a fine adjustment step of finely adjusting the positions of the models of the first space and the second space according to the established connection relationship (S23). The method enables a user to intuitively understand the relative positions and correspondence of the spaces, greatly improves the efficiency of generating a three-dimensional space model, and improves the user experience.

Description

模型修正方法、装置、设备
技术领域
本公开涉及图像处理领域,尤其涉及一种模型修正方法、装置、设备、存储介质。
背景技术
随着互联网数字化社会的发展,在很多方面例如建筑工程、室内设计、装修、房屋买卖、出租等场景下往往需要将实际的空间结构例如房屋结构转化为虚拟的空间模型,以便用户直观感受该空间的布局和实景信息。现有的空间模型一般利用建模软件来进行构建,需要通过系统的学习后才可以掌握,一般的用户难以应用。并且在空间模型的构建过程中操作繁琐,导致空间模型制作时间非常长。
现有技术中已提出利用移动终端设备进行空间模型例如房屋模型的自动化生成的技术,但当该房屋包含多个房间时,分别生成的各房间的模型并进行修正成为难点,有时仍需要人工进行各房间模型的修正。然而,由于不易准确了解多个房间模型的相对位置和对应关系,导致修正过程费时费力,操作难度、精度都有待改善。
发明内容
本公开是为了解决上述课题而完成的,其目的在于提供一种快速高效、直观准确的模型修正方法、装置、设备、存储介质。
本公开提供该发明内容部分以便以简要的形式介绍构思,这些构思将在后面的具体实施方式部分被详细描述。该发明内容部分并不旨在标识要求保护的技术方案的关键特征或必要特征,也不旨在用于限制所要求的保护的技术方案的范围。
为了解决上述技术问题,本公开实施例提供一种模型修正方法,采用 了如下所述的技术方案,
确定第一空间和第二空间各自的模型中的对应结构;
粗调步骤,根据所述对应结构,粗调所述第一空间和第二空间的模型的位置,使所述对应结构符合预设的连接建立条件,建立连接关系;
精调步骤,按照已建立的连接关系,精调所述第一空间和第二空间的模型的位置;
其中,当所述对应结构符合所述预设的连接建立条件时,在所述第一空间和所述第二空间的模型之间出现表示二者已建立连接关系的预设的连接效果。
为了解决上述技术问题,本公开实施例还提供一种模型修正装置,采用了如下所述的技术方案,包括:
结构确定模块,用于确定第一空间和第二空间各自的模型中的对应结构;
粗调模块,用于根据所述对应结构,粗调所述第一空间和第二空间的模型的位置,使所述对应结构符合预设的连接建立条件,建立连接关系;
精调模块,用于按照已建立的连接关系,精调所述第一空间和第二空间的模型的位置;
其中,当所述对应结构符合所述预设的连接建立条件时,在所述第一空间和所述第二空间的模型之间出现表示二者已建立连接关系的预设的连接效果。
为了解决上述技术问题,本公开实施例还提供一种计算机设备,采用了如下所述的技术方案,包括:
存储器和处理器,所述存储器中存储有计算机程序,所述处理器执行所述计算机程序时实现如前述所述的方法。
为了解决上述技术问题,本公开实施例还提供一种计算机可读存储介质,采用了如下所述的技术方案,包括:
所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如前述所述的方法。
根据本公开所公开的技术方案,与现有技术相比,本公开可以实现直观了解空间相对位置和对应关系,大幅度提高生成空间模型的效率,提升了用户体验。
附图说明
图1是本公开可以应用于其中的示例性系统架构图;
图2是根据本公开的模型修正方法的一个实施例的流程图;
图3是根据本公开的模型修正方法的一个实施例的示意图,其中A为该实施例中生成三维模型前的一个阶段的示意图,B为该实施例中生成三维模型过程中的一个阶段的示意图;
图4是根据本公开的模型修正装置的一个实施例的示意图;
图5是根据本公开的计算机设备的一个实施例的结构示意图。
结合附图并参考以下具体实施方式,本公开各实施例的上述和其他特征、优点及方面将变得更加明显。贯穿附图中,相同或相似的附图标记表示相同或相似的元素。应当理解附图是示意性的,元件和元素不一定按照比例绘制。
具体实施方式
除非另有定义,本文所使用的所有的技术和科学术语与属于本公开的技术领域的技术人员通常理解的含义相同;本文中在申请的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本公开;本公开的说明书和权利要求书及上述附图说明中的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。本公开的说明书和权利要求书或上述附图中的术语“第一”、“第二”等是用于区别不同对象,而不是用于描述特定顺序。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本公开的至少一个实施例中。在说明书中的各个位置出现该短语并不一定均是指相同的实施例,也不是与其它实施例 互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
为了使本技术领域的人员更好地理解本公开方案,下面将结合附图,对本公开实施例中的技术方案进行清楚、完整地描述。
[系统结构]
首先,说明本公开的一个实施例的系统的结构。如图1所示,系统结构100可以包括终端设备101、102、103、104,网络105和服务器106。网络105用以在终端设备101、102、103、104和服务器106之间提供通信链路。
本公开的实施例方法运行于其上的电子设备(例如图1所示的终端设备101、102、103或104)可以通过网络105进行各种信息的传输。网络105可以包括各种连接类型,例如有线、无线通信链路或者光纤电缆等等。需要指出的是,上述无线连接方式可以包括但不限于3G/4G/5G连接、Wi-Fi连接、蓝牙连接、WiMAX连接、Zigbee连接、UWB连接、局域网(“LAN”)、广域网(“WAN”)、网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络)以及其他现在已知或将来开发的网络连接方式。网络105可以利用诸如HTTP(Hyper Text Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。
用户可以使用终端设备101、102、103、104通过网络105与服务器106交互,以接收或发送消息等。终端设备101、102、103或104可以是具有触摸显示屏和/或支持网页浏览的各种电子设备,包括但不限于智能手机、平板电脑、电子书阅读器、MP3(动态影像专家压缩标准音频层面3)播放器、MP4(动态影像专家压缩标准音频层面4)播放器、头戴式显示设备、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等。
服务器106可以是提供各种服务的服务器,例如对终端设备101、102、103或104上显示的页面或传输的数据提供支持的后台服务器。
应该理解,图1中的终端设备、网络和服务器的数目仅仅是示意性的。根据实现需要,可以具有任意数目的终端设备、网络和服务器。
这里,终端设备可以独立或通过与其他电子终端设备配合运行各类操作系统例如安卓系统中的应用实现本公开的实施例方法,也可以运行其他操作系统中的应用例如iOS系统、Windows系统、鸿蒙系统等的应用实现本公开的实施例方法。
[模型修正方法]
首先,说明本公开的一个实施例的应用场景,在生成空间的模型的过程中,模型是基于对第一空间和第二空间的内部拍摄得到的图像而建立的三维模型和/或二维模型,例如可以通过将在不同空间中的位置拍摄的图像或者在不同空间中移动的过程中拍摄的图像导入标记系统以生成各个空间的三维模型和/或二维模型,或者可以通过在不同空间中的位置拍摄的图像或者在不同空间中移动的过程中拍摄的图像自动生成各个空间的三维模型和/或二维模型,在生成各个空间的三维模型和/或二维模型后,根据拍摄位置或移动过程中的相对位置将各个空间的三维模型和/或二维模型进行拼接,以生成整体的三维模型和/或二维模型,或者通过激光扫描、TOF扫描、结构光扫描等方式获取点云模型(下称点云扫描),从点云模型中提取房间结构(例如墙的位置),从而形成各个空间独立或连接的三维模型和/或二维模型。但是目前的建模技术存在各种误差,包括因空间形状、拍摄角度以及识别技术的局限等导致空间边界识别的误差,以及定位(位置和方向)的误差,导致最终生成的整体模型错误或者不准确。
在拼接生成整体的三维模型和/或二维模型时,需要根据本公开的模型修正方法更直观、更高效率、更准确地修正各个空间的三维模型和/或二维模型。
为便于理解本公开的模型修正方法,首先将本公开的主要技术方案 大致概括为下述几点内容:
1.例如可以通过包括摆放步骤、粗调步骤和精调步骤等至少三个步骤来实现本公开的模型修正方法,以实现修正包括单个模型的边界、模型之间的位置和连接关系等功能进而实现最终模型的生成和修正;
2.其中,粗调步骤例如使用内吸附(如图3B椭圆形框所示)、外吸附(如图3B矩形框所示)等方式进行单个模型或模型组之间的连接或对应关系的确定,实现单个模型或模型组之间的粗略定位,这里,内吸附、外吸附等方式能够实现允许存在对应关系的模型或模型组之间存在比较大的位置灵活性,使得即使当房间中的对应结构例如门、窗、边界等有少量误差时,也可以通过模型或模型组的移动来确保没有重叠关系的模型不会被重叠摆放。
3.其中,使用“所见即所得”的方式来确认各个模型或模型组的边界线例如墙的位置。在本公开的模型修正方法中,可能存在由于识别误差,使得单个模型的原始边界线位置不正确的情况。在粗调步骤中,例如当同一个空间的在不同的拍摄位置拍摄的图像形成的模型初步定位时,能够直观地区分同一空间的外边界(如图3B所示,客厅6和餐厅13的整体外边界线以实线表示,表示房间在实线处实际有墙),以及被包括在该同一空间内的原始边界线(如图3B所示,客厅6和餐厅13的内部原始边界线以虚线表示,表示房间在虚线处实际无墙,存在因识别不准导致的边界错误)。在精调步骤中,确保粗调步骤中确认过的边界线(模型中的实线或虚线分别对应为房间中有墙或无墙)不被改变。
在实际操作中,粗调步骤能够实现确认最终的边界线,并且灵活方便;精调步骤能够实现在对误差进行调整优化的同时,不改变边界线的性质(实线虚线或有墙无墙)。上述步骤是高效和准确的3D模型修正方法。
参考图2,示出了根据本公开的模型修正方法的一个实施例的流程图。所述模型修正方法,包括以下步骤:
S21,确定第一空间和第二空间各自的模型中的对应结构;
在一个或多个实施例中,对应结构是第一空间和第二空间的对应的开口或边界线。
在一个或多个实施例中,空间例如可以为房间,房间的对应结构例如是房间的门、或窗、或开口、或墙角或墙线之一。
在一个或多个实施例中,确定第一空间和第二空间各自的模型中的对应结构例如可以通过标记系统来确定空间的结构。具体而言,例如首先将拍摄的每个空间的图像合成360度全景图,然后将该全景图对应于三维空间进行标记来确定对应结构,例如通过在标记系统中点击该空间的结构的标点或标点之间组成的标线,根据全景图中各个结构的实际位置来确定标记系统中的例如房间的墙体或各类房间的基本物体结构,并通过按照图像中实际的结构拖动标记系统中各个对应结构的标点或标线进行扩展或添加各类房间的基本物体结构,比如门、开放空间、普通窗、飘窗、楼梯等,当然也可以在墙面上添加基本物体结构。
S22,粗调步骤,根据所述对应结构,粗调所述第一空间和第二空间的模型的位置,使所述对应结构符合预设的连接建立条件,建立连接关系;
在一个或多个实施例中,当对应结构符合预设的连接建立条件时,在第一空间和第二空间的模型之间出现表示二者已建立连接关系的预设的连接效果。
在一个或多个实施例中,预设的连接建立条件,包括使第一空间和第二空间的对应结构之间的夹角小于预设角度和/或距离小于预设距离。例如不同的空间的模型的对应结构(在此例如为房间的“门”)的夹角小于30°并且例如在模型连接界面的距离小于1cm。
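A minimal 2D sketch of this connection check follows. It assumes that corresponding structures (for example doors) are represented as line segments in floor-plan coordinates measured in meters, and that the thresholds are the 30° angle and 1 cm (0.01 in model units) mentioned in this example; the helper names `midpoint`, `angle_between` and `meets_connection_condition` are illustrative assumptions, not part of the original disclosure:

```python
import math

def midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def angle_between(seg_a, seg_b):
    """Smallest angle, in degrees, between the directions of two (undirected) segments."""
    (ax1, ay1), (ax2, ay2) = seg_a
    (bx1, by1), (bx2, by2) = seg_b
    da = math.atan2(ay2 - ay1, ax2 - ax1)
    db = math.atan2(by2 - by1, bx2 - bx1)
    diff = abs(da - db) % math.pi
    return math.degrees(min(diff, math.pi - diff))

def meets_connection_condition(door_a, door_b, max_angle_deg=30.0, max_dist=0.01):
    """True when the two corresponding structures are nearly parallel and their
    midpoints are within max_dist, i.e. the preset connection condition is met."""
    (xa, ya), (xb, yb) = midpoint(door_a), midpoint(door_b)
    close_enough = math.hypot(xa - xb, ya - yb) < max_dist
    return close_enough and angle_between(door_a, door_b) < max_angle_deg

# e.g. while the user drags one room model, show the snapping/highlight effect when
# meets_connection_condition(door_of_room_a, door_of_room_b) becomes True.
```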
在一个或多个实施例中,预设的连接效果例如可以包括变为相同颜色、出现吸附效果、或出现连接标识之一的效果。例如可以是当不同的空间的模型的对应结构(在此例如为房间的“门”)的夹角小于30°并且例如在模型连接界面的距离小于1cm时,两个房间的门之间变为相同颜色或出现吸附效果或出现连接标识或其他任意的关联效果,用于直观地确定两者 之间存在对应关系。之后当系统进行空间模型的连接或拼接时,具有该对应关系的空间被自动调整对齐并连接或拼接在一起。由此,用户调整不同空间的位置关系时,不必使不同空间的边线或门、窗等连接结构精确对准,从而能够大幅度减少用户的工作量,而系统基于对应关系自动将相对应的两空间对齐、拼接,故模型的拼接精度也比纯手动调整时大大提高。
在一个或多个实施例中,当第一空间和第二空间的模型是针对不同的空间分别建立的模型时,在粗调步骤S22中,以使第一空间和第二空间的模型不重叠的方式,粗调第一空间和第二空间的模型的位置;
在一个或多个实施例中,当第一空间和第二空间的模型是针对同一空间,基于在不同拍摄位置拍摄的图像或通过点云扫描而分别建立的模型时,在粗调步骤S22中,以使第一空间和第二空间的模型至少部分重叠的方式,粗调第一空间和第二空间的模型的位置。
在一个或多个实施例中,粗调步骤中确定的第一空间和第二空间的模型不重叠时,第一空间和第二空间的模型边界线为第一边界线;粗调步骤中确定的第一空间和第二空间的模型重叠时,重叠部分的模型边界线为第二边界线;下面将描述的精调步骤S23将根据第一边界线和第二边界线精调第一空间和第二空间的模型的位置。在一个或多个实施例中,粗调步骤用于使得各空间之间的第一边界线和第二边界线正确,例如将第一边界线和第二边界线分别以预设方式显示,预设方式例如包括实线或虚线方式显示、清晰或模糊效果显示、对比度不同方式显示、不同颜色显示等等方式。例如以实线表示各空间的第一边界线例如墙的位置,以虚线表示被重叠部分的第二边界线例如包含的原边界线。
在一个或多个实施例中,例如使用全景图比对确保例如表示各空间的墙的实线的准确,对于不准确的位置,通过上述的标记系统根据实际位置在全景图中进行修改和调整。
在一个或多个实施例中,粗调步骤S22之前,还包括使第一空间和第二空间的模型按照拍摄各自的图像时的位置和/或方向预先摆放的步骤。
在一个或多个实施例中,例如可以先将各个空间的模型按照三维模型 和/或二维模型的图像在不同拍摄位置或移动过程中拍摄时的位置和/或方向摆放,当然也可以先将各个空间的三维模型转换为二维模型,通过将各个空间的二维模型按照在不同拍摄位置或移动过程中拍摄时的位置和/或方向摆放,以实现后面的各个步骤,实现该功能的技术并不做限定;
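As an illustrative sketch of this pre-arrangement only: assuming each room model is a list of 2D boundary vertices in its own local frame, stored together with the camera position and heading recorded at capture time (the data layout and the name `pre_arrange` are assumptions):

```python
import math

def pre_arrange(room_models):
    """Place every room model according to the camera position and heading recorded
    when its images were captured, before any manual coarse adjustment.
    room_models: iterable of (local_points, (cam_x, cam_y, heading_radians))."""
    arranged = []
    for local_points, (cam_x, cam_y, heading) in room_models:
        cos_t, sin_t = math.cos(heading), math.sin(heading)
        arranged.append([(cam_x + cos_t * x - sin_t * y,
                          cam_y + sin_t * x + cos_t * y) for x, y in local_points])
    return arranged
```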
在一个或多个实施例中,拍摄时的位置和/或方向是指在不同拍摄位置或者在移动过程中拍摄用于生成三维模型的图像时的位置和/或方向,该位置和/或方向例如可以通过拍摄装置的定位传感器和方向传感器等传感器获得,当然也可以通过相近拍摄位置的图像进行特征点匹配,来获得各拍摄位置的相对位移和拍摄方向信息,并不做限定。
在一个或多个实施例中,第一空间和第二空间的模型的至少一者,是多个空间的模型构成的模型组。例如在拍摄用于生成三维模型和/或二维模型的图像过程中移动过快造成相邻两帧图像没有足够多的特征点进行匹配,或是在移动过程中,环境中存在干扰或环境发生改变,例如进入毛坯房或者光线条件差(过暗或过强)的环境;或是在拍摄过程中被外部因素中断拍摄,例如接电话导致拍摄中断等造成拍摄路线的中断,导致当生成三维模型和/或二维模型的图像在不同拍摄位置或移动过程中拍摄时的位置和/或方向无法确定时,将能够确定位置和/或方向的多个空间的模型分成不同的模型组。
在一个或多个实施例中,模型组例如通过预设方式区分并将各个模型组分别放置;其中,预设方式例如可以通过对中断前后的各个模型组的三维模型和/或二维模型以不同的边框颜色进行区分,当然也可以通过其他方式进行区分。
S23,精调步骤,按照已建立的连接关系,精调所述第一空间和第二空间的模型的位置,以生成最终模型;
在一个或多个实施例中,在精调步骤S23中,使已建立连接关系的开口或边界线对准。
在一个或多个实施例中,将粗调后的第一空间和第二空间的模型的对应结构所在的边界线合并,并使对应结构的中点重合。粗调完成后的模型 的第一边界线和第二边界线例如实线和虚线的位置不变。
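One possible 2D realization of this alignment is sketched below. It assumes the room model being moved is given as a list of boundary vertices and that the corresponding structures (e.g. the two door segments) are known; the function and parameter names are illustrative only:

```python
import math

def _direction(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1)

def _midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def align_to_corresponding_structure(model_points, struct_src, struct_dst):
    """Rigidly move a room model so that its corresponding structure (struct_src,
    e.g. a door segment) becomes parallel to struct_dst and their midpoints coincide."""
    theta = _direction(struct_dst) - _direction(struct_src)
    cx, cy = _midpoint(struct_src)            # rotate about the source midpoint ...
    tx, ty = _midpoint(struct_dst)            # ... then move it onto the target midpoint
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    moved = []
    for x, y in model_points:
        rx = cos_t * (x - cx) - sin_t * (y - cy)
        ry = sin_t * (x - cx) + cos_t * (y - cy)
        moved.append((rx + tx, ry + ty))
    return moved
```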
在一个或多个实施例中,当第一空间和第二空间的模型是针对同一空间,基于在不同拍摄位置拍摄的图像或通过点云扫描而分别建立的模型时,在精调步骤S23中,对于两模型,仅保留第一边界线,用于修正对应的连接结构连接后导致的误差。
在一个或多个实施例中,精调步骤进一步调整误差导致的空间重叠部分,检查所有模型中的重叠部分,例如若第一空间和第二空间的模型的对应结构所在的边界线合并后,第一空间和第二空间的模型至少一者与第三空间的模型存在重叠部分,则将重叠部分中预设距离范围内的两条大致平行的边界线合并,即检查重叠部分中原来为实线的第一边界线,如果原来不重叠的空间的原来为第一边界线的实线基本平行,则将这两条第一边界线合并为一条。
在一个或多个实施例中,如果原来不重叠的空间的原来为第一边界线的实线基本平行,并且两条边界线符合预设距离时,例如在模型示意图中的距离为1cm时,则将两条第一边界线合并至第一边界线的中间位置。
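A simplified sketch of this merge step, assuming the two solid (first) boundary lines are segments whose endpoints are listed in corresponding order and that the gap is measured between their midpoints; the 0.01 threshold again stands in for the preset distance and is an assumption:

```python
import math

def _midpoint(seg):
    (x1, y1), (x2, y2) = seg
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def merge_parallel_walls(seg_a, seg_b, max_gap=0.01):
    """Merge two roughly parallel first-boundary-line (solid) wall segments that ended
    up within max_gap of each other, replacing both with a wall halfway between them.
    Returns None when the walls are farther apart and should both be kept."""
    (xa, ya), (xb, yb) = _midpoint(seg_a), _midpoint(seg_b)
    if math.hypot(xa - xb, ya - yb) >= max_gap:
        return None
    # endpoints of seg_a and seg_b are assumed to be listed in corresponding order
    return tuple(((pa[0] + pb[0]) / 2.0, (pa[1] + pb[1]) / 2.0)
                 for pa, pb in zip(seg_a, seg_b))
```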
在一个或多个实施例中,删除重叠部分中的第二边界线例如虚线,以完成最终的模型。
在一个或多个实施例中,还包括,将精调完成的三维模型转换为二维模型;根据空间的入口确定二维模型的朝向,这里,例如可以调整二维模型的方向,以使空间的入口位于二维模型的上方,当然也可以根据拍摄时获得的各个空间的实际朝向来调整空间的入口的位置,以调整二维模型整体的方向。
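A small sketch of the re-orientation described above, assuming the 2D model is a list of vertices and the outward direction of the entrance is known as an angle in radians (both assumptions made for illustration):

```python
import math

def rotate_plan_so_entrance_faces_up(plan_points, entrance_heading):
    """Rotate a 2D floor plan so that the outward direction of its entrance
    (entrance_heading, in radians) points towards the top of the drawing (+y)."""
    theta = math.pi / 2.0 - entrance_heading
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return [(cos_t * x - sin_t * y, sin_t * x + cos_t * y) for x, y in plan_points]
```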
应该理解的是,虽然附图的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,附图的流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不 必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
[模型修正方法实施例]
如上所述,拍摄时的位置和/或方向是指在不同拍摄位置或者在移动过程中拍摄用于生成模型的图像时的位置和/或方向,该位置和/或方向例如可以通过拍摄装置的定位传感器和方向传感器等传感器获得,当然也可以通过相近拍摄位置的图像进行特征点匹配,来获得各拍摄位置的相对位移和拍摄方向信息,并不做限定。
本公开的一个应用场景为,例如在拍摄用于生成三维模型的图像过程中移动过快造成相邻两帧图像没有足够多的特征点进行匹配,或是在移动过程中,环境中存在干扰或环境发生改变,例如进入毛坯房或者光线条件差(过暗或过强)的环境;或是在拍摄过程中被外部因素中断拍摄,例如接电话导致拍摄中断等造成拍摄路线的中断,导致当生成三维模型和/或二维模型的图像在不同拍摄位置或移动过程中拍摄时的位置和/或方向无法确定时,对至少一个空间的三维模型和/或二维模型按照无法确定的时间将至少一个空间的三维模型和/或二维模型进行分组,将能够确定位置和/或方向的多个空间的模型分成不同的模型组。
下面,参见图3,说明本公开存在不同的模型组的情况下的一个实施例,在本实施例中,以模型修正对象的空间为房间为例进行说明,主要包括以下步骤:
步骤1,将各个空间的三维模型和/或二维模型按照拍摄各自的图像时的位置和/或方向预先摆放,例如在本实施例中,房屋中的空间包括不同的房间:主卧、次卧、主卫、走廊等等不同的空间,使建立起的各房间的模型按照各自的图像时的位置和/或方向预先摆放;
在本实施例中,例如在拍摄用于生成模型的图像过程中移动过快造成相邻两帧图像没有足够多的特征点进行匹配,或是在移动过程中,环境中存在干扰或环境发生改变,例如进入毛坯房或者光线条件差(过暗或过强)的环境;或是在拍摄过程中被外部因素中断拍摄,例如接电话导致拍摄中 断等造成拍摄客厅6(或餐厅13)时的路线发生中断,导致客厅6(或餐厅13)的模型无法按照拍摄时的位置和/或方向进行初步放置时,对所有的空间按照拍摄客厅6(或餐厅13)时中断的时间的路线前后的模型进行拼接并将多个空间的模型分为两个模型组,例如图3A所示,为各个模型组反映在二维模型中的示意图,餐厅13等空间分为一个模型组,并以深色框线进行区分。
步骤2,确定第一空间和第二空间各自的模型中的对应结构,在本实施例中,例如通过标记系统标记空间结构,例如首先将餐厅13的照片合成360度全景图,然后将餐厅13的全景图对应于三维空间进行标记来确定对应结构,例如通过在标记系统中点击餐厅13的标点或标点之间组成的标线,来对餐厅13的墙体进行修改,以确定各个房间的墙的位置并以实线进行表示。例如在本实施例中,对餐厅13的与阳台7的开口进行调整(即在标记系统中点击标点或标线对应开口的实际位置),并对客厅6的与阳台7的开口进行调整。还例如,对于餐厅13等深色框线的一个模型组中的不同的房间餐厅13的门与厨房12的门进行调整,还例如对另一个模型组中次卧2-1的门与过道5的门(见图3B上侧矩形框)进行调整等等。
步骤3,粗调步骤,根据对应结构,粗调各个空间的模型的位置,使对应结构符合预设的连接建立条件,建立连接关系,在本实施例中,例如深色框线的一个模型组中的不同的房间餐厅13的门与厨房12的门为对应结构,还例如另一个模型组中次卧2-1的门与过道5的门为对应结构,在粗调步骤中,以使餐厅13与厨房12的模型不重叠的方式,粗调餐厅13与厨房12的模型的位置。当用户调整餐厅13和厨房12的模型的方向和/或位置,使得餐厅13的门与厨房12的门符合预设的连接建立条件时,例如两者的夹角小于30°并且例如在模型连接界面的距离小于1cm时,判定用户意图使该两个空间连接,进而使该对应的连接结构之间出现预设连接效果,例如可以是两者之间出现吸附果,如图3B所示,餐厅13的门与厨房12的门建立连接关系(矩形框所示)。之后系统进行空间模型的拼 接时,具有连接关系的空间被自动调整对齐并拼接在一起。由此,用户调整两空间的位置关系时,不必使两空间的边线或门、窗等连接结构精确对准,从而能够大幅度减少用户的工作量,而系统基于对应关系自动将相对应的两空间对齐、拼接,故模型的拼接精度也比纯手动调整时大大提高。
在本实施例中,例如在拍摄客厅6(或餐厅13)的图像的过程中拍摄路线发生中断或者拍摄位置和方向无法确定,而中断后继续拍摄的客厅6(或餐厅13)的图像则能按照新的拍摄路线或新的能够确定的拍摄位置和方向,因此当最终完成图像拍摄后,客厅6(或餐厅13)被基于两个不同拍摄位置或拍摄路线的图像分别建立了模型,并且分别与路线中断前后的其它空间模型构成了模型组,由此,当建立连接关系时,需要将该两个客厅6(或餐厅13)的模型连接或拼接成为一个,同时使两组模型组进行连接或拼接。餐厅13的与阳台7的开口和客厅6的与阳台7的开口为不同模型组之间的同一个空间内的对应结构。在粗调步骤中,以使客厅6和餐厅13的模型至少部分重叠的方式,粗调客厅6和餐厅13的模型的位置。在本实施例中,客厅6(或餐厅13)被基于两个不同拍摄位置或拍摄路线的图像分别建立了模型,并且分别与路线中断前后的其它空间模型构成了模型组,在实际应用中,例如还可以是餐厅13与厨房12两个不同的空间之间的拍摄路线发生中断或者拍摄位置和方向无法确定,并且餐厅13与厨房12分别与路线中断前后的其它空间模型构成了模型组,当餐厅13与厨房12分别对应的模型组建立连接关系时,同上述模型组内类似,通过餐厅13的门与厨房12的门建立连接关系,连接效果和方式不再赘述。
当餐厅13的与阳台7的开口与客厅6的与阳台7的开口符合预设的连接建立条件时,例如两者的夹角小于30°并且例如在模型连接界面的距离小于1cm时,该对应结构之间出现预设的连接效果,例如可以是两者之间出现吸附效果,如图3B所示,餐厅13与客厅6为同一空间中的不同拍摄位置获取的图像生成的模型建立的连接关系(椭圆形框所示)。之后系统进行空间模型的连接或拼接时,具有对应关系的空间被自动调整 对齐并拼接在一起。由此,用户调整两空间的位置关系时,不必使两空间的边线或门、窗等连接结构精确对准,从而能够大幅度减少用户的工作量,而系统基于对应关系自动将相对应的两空间对齐、拼接,故模型的拼接精度也比纯手动调整时大大提高。
在本实施例中,例如各个空间的外边界(第一边界线)均以实线进行表示,当进行粗调步骤时,由于连接关系的建立,导致模型产生重叠部分,例如图3B中,客卫8与厨房12由于分别与其他模型建立连接关系而产生重叠部分,则将原本的第一边界线的实线显示为第二边界线的虚线,并使用全景图比对确保外边界的实线准确。
步骤4,精调步骤,按照已建立的连接关系,精调各个空间的模型的位置;例如在本实施例中,使已建立连接关系的餐厅13的与阳台7的开口与客厅6的与阳台7的开口对准,或使已建立连接关系的餐厅13的门与厨房12的门对准。
在本实施例中,餐厅13和客厅6的模型是针对同一空间,基于在不同拍摄位置拍摄的图像或通过点云扫描而分别建立的模型,在精调步骤中,对于餐厅13和客厅6的模型,仅保留一组第一边界线的实线。
在本实施例中,例如将粗调后的餐厅13和客厅6的模型的对应结构即餐厅13的与阳台7的开口与客厅6的与阳台7的开口所在的边界线合并,例如将对应的开口的中点重合,从而将长度较短的餐厅13的边界线合并至长度较长的客厅6的边界线,当然其他的对应结构在精调时也可以将长度相同的两条边界线合并为一条,或者将两条边界线平行贴紧等,并不做限定,为保证重合后的准确性,优选将对应结构的中点重合;
在本实施例中,当粗调后的餐厅13和客厅6的模型存在部分重叠时,例如在本实施例中餐厅13的部分容纳在客厅6中,则将重叠部分内部的餐厅13的第二边界线以虚线方式显示,当然也可以利用模糊效果显示、对比度降低显示、与第一边界线不同颜色显示等等方式。
在本实施例中,对于客卫8与厨房12的位置关系,在步骤3粗调步骤完成前例如由于存在识别误差,使得客卫8与厨房12的原始边界线不 正确,存在重叠部分。经过步骤3粗调步骤后,将客卫8与厨房12进行初步定位,以能够直观区分客卫8与厨房12的外边界即第一边界(实线),表示客卫8与厨房12之间实际有墙,在精调步骤前,客卫8与厨房12并不存在重叠关系,并且原来客卫8与厨房12的边界线均为实线,则将大致相互平行的两条竖直边界线合并为一条,例如当大致相互平行的两条竖直边界线在图3B中的距离小于1cm时,取这两条竖直边界线中间的位置作为新的实线的位置。
在本实施例中,当各个空间的模型位置确定并且重叠部分被精调优化后,去除同一空间不同位置拍摄的模型导致的重叠部分而产生的第二边界线的虚线,以完成精调,用于生成最终的模型。
步骤5,在本实施例中,还包括,将空间的整体三维模型转换为二维模型;根据空间的入口确定二维模型的朝向,这里,例如可以调整二维模型的方向,以使空间的入口位于二维模型的上方,当然也可以根据拍摄时获得的各个空间的实际朝向来调整空间的入口的位置,以调整二维模型整体的方向。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,该计算机程序可存储于一计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,前述的存储介质可为磁碟、光盘、只读存储记忆体(ROM)等非易失性存储介质,或随机存储记忆体(RAM)等。
[模型修正装置]
为了实现本公开实施例中的技术方案,本公开的一个实施例提供了一种模型修正装置,该装置具体可以应用于各种电子终端设备中,如图4所示,包括:结构确定模块401、粗调模块402、精调模块403、摆放模块404。
结构确定模块401,用于确定第一空间和第二空间各自的模型中的对应结构;
在一个或多个实施例中,对应结构是第一空间和第二空间的对应的开口或边界线。
在一个或多个实施例中,空间例如可以为房间,房间的对应结构例如是房间的门、或窗、或开口、或墙角或墙线之一。
在一个或多个实施例中,确定第一空间和第二空间各自的模型中的对应结构例如可以通过标记系统来确定空间的结构。具体而言,例如首先将拍摄的每个空间的图像合成360度全景图,然后将该全景图对应于三维空间进行标记来确定对应结构,例如通过在标记系统中点击该空间的结构的标点或标点之间组成的标线,根据全景图中各个结构的实际位置来确定标记系统中的例如房间的墙体或各类房间的基本物体结构,并通过按照图像中实际的结构拖动标记系统中各个对应结构的标点或标线进行扩展或添加各类房间的基本物体结构,比如门、开放空间、普通窗、飘窗、楼梯等,当然也可以在墙面上添加基本物体结构。
粗调模块402,用于根据对应结构,粗调第一空间和第二空间的模型的位置,使对应结构符合预设的连接建立条件,建立连接关系;
在一个或多个实施例中,当对应结构符合预设的连接建立条件时,在第一空间和第二空间的模型之间出现表示二者已建立连接关系的预设的连接效果。
在一个或多个实施例中,预设的连接建立条件,包括使第一空间和第二空间的对应结构之间的夹角小于预设角度和/或距离小于预设距离。例如不同的空间的模型的对应结构(在此例如为房间的“门”)的夹角小于30°并且例如在模型连接界面的距离小于1cm。
在一个或多个实施例中,预设的连接效果例如可以包括变为相同颜色、出现吸附效果、或出现连接标识之一的效果。例如可以是当不同的空间的模型的对应结构(在此例如为房间的“门”)的夹角小于30°并且例如在模型连接界面的距离小于1cm时,两个房间的门之间变为相同颜色或出现吸附效果或出现连接标识或其他任意的关联效果,用于直观地确定两者之间存在对应关系。之后当系统进行空间模型的连接或拼接时,具有该对 应关系的空间被自动调整对齐并连接或拼接在一起。由此,用户调整不同空间的位置关系时,不必使不同空间的边线或门、窗等连接结构精确对准,从而能够大幅度减少用户的工作量,而系统基于对应关系自动将相对应的两空间对齐、拼接,故模型的拼接精度也比纯手动调整时大大提高。
在一个或多个实施例中,当第一空间和第二空间的模型是针对不同的空间分别建立的模型时,在粗调模块402使第一空间和第二空间的模型不重叠的方式,粗调第一空间和第二空间的模型的位置;
在一个或多个实施例中,当第一空间和第二空间的模型是针对同一空间,基于在不同拍摄位置拍摄的图像或通过点云扫描而分别建立的模型时,粗调模块402使第一空间和第二空间的模型至少部分重叠的方式,粗调第一空间和第二空间的模型的位置。
在一个或多个实施例中,在一个或多个实施例中,粗调模块402中确定的第一空间和第二空间的模型不重叠时,第一空间和第二空间的模型边界线为第一边界线;粗调模块402确定的第一空间和第二空间的模型重叠时,重叠部分的模型边界线为第二边界线;下面将描述的精调模块403将根据第一边界线和第二边界线精调第一空间和第二空间的模型的位置。在一个或多个实施例中,粗调模块402用于使得各空间之间的第一边界线和第二边界线正确,例如将第一边界线和第二边界线分别以预设方式显示,预设方式例如包括实线或虚线方式显示、清晰或模糊效果显示、对比度不同方式显示、不同颜色显示等等方式。例如以实线表示各空间的第一边界线例如墙的位置,以虚线表示被重叠部分的第二边界线例如包含的原边界线。
在一个或多个实施例中,例如使用全景图比对确保例如表示各空间的墙的实线的准确,对于不准确的位置,通过上述的标记系统根据实际位置在全景图中进行修改和调整。
在一个或多个实施例中,还包括摆放模块404,使第一空间和第二空间的模型按照拍摄各自的图像时的位置和/或方向预先摆放。
在一个或多个实施例中,例如可以先将各个空间的模型按照三维模型 和/或二维模型的图像在不同拍摄位置或移动过程中拍摄时的位置和/或方向摆放,当然也可以先将各个空间的三维模型转换为二维模型,通过将各个空间的二维模型按照在不同拍摄位置或移动过程中拍摄时的位置和/或方向摆放,以实现后面的各个步骤,实现该功能的技术并不做限定;
在一个或多个实施例中,拍摄时的位置和/或方向是指在不同拍摄位置或者在移动过程中拍摄用于生成三维模型的图像时的位置和/或方向,该位置和/或方向例如可以通过拍摄装置的定位传感器和方向传感器等传感器获得,当然也可以通过相近拍摄位置的图像进行特征点匹配,来获得各拍摄位置的相对位移和拍摄方向信息,并不做限定。
在一个或多个实施例中,第一空间和第二空间的模型的至少一者,是多个空间的模型构成的模型组。例如在拍摄用于生成三维模型和/或二维模型的图像过程中移动过快造成相邻两帧图像没有足够多的特征点进行匹配,或是在移动过程中,环境中存在干扰或环境发生改变,例如进入毛坯房或者光线条件差(过暗或过强)的环境;或是在拍摄过程中被外部因素中断拍摄,例如接电话导致拍摄中断等造成拍摄路线的中断,导致当生成三维模型和/或二维模型的图像在不同拍摄位置或移动过程中拍摄时的位置和/或方向无法确定时,将能够确定位置和/或方向的多个空间的模型分成不同的模型组。
在一个或多个实施例中,模型组例如通过预设方式区分并将各个模型组分别放置;其中,预设方式例如可以通过对中断前后的各个模型组的三维模型和/或二维模型以不同的边框颜色进行区分,当然也可以通过其他方式进行区分。
精调模块403,用于按照已建立的连接关系,精调所述第一空间和第二空间的模型的位置;
在一个或多个实施例中,精调模块403使已建立连接关系的开口或边界线对准。
在一个或多个实施例中,将粗调后的第一空间和第二空间的模型的对应结构所在的边界线合并,并使对应结构的中点重合。粗调完成后的模型 的实线和虚线的位置不变。
在一个或多个实施例中,当第一空间和第二空间的模型是针对同一空间,基于在不同拍摄位置拍摄的图像或通过点云扫描而分别建立的模型时,精调模块403对于两模型,仅保留第一边界线,用于修正对应的连接结构连接后导致的误差。
在一个或多个实施例中,精调步骤进一步调整误差导致的空间重叠部分,检查所有模型中的重叠部分,例如若第一空间和第二空间的模型的对应结构所在的边界线合并后,第一空间和第二空间的模型至少一者与第三空间的模型存在重叠部分,则将重叠部分中预设距离范围内的两条大致平行的边界线合并,即检查重叠部分中原来为实线的第一边界线,如果原来不重叠的空间的原来为第一边界线的实线基本平行,则将这两条第一边界线合并为一条。
在一个或多个实施例中,如果原来不重叠的空间的原来为第一边界线的实线基本平行,并且两条边界线符合预设距离时,例如在模型示意图中的距离为1cm时,则将两条第一边界线合并至第一边界线的中间位置。
在一个或多个实施例中,删除重叠部分中的第二边界线例如虚线,以完成最终的模型。
转换确定模块(未图示),在一个或多个实施例中,还包括转换确定模块,用于将空间的整体三维模型转换为二维模型;根据空间的入口确定二维模型的朝向,这里,例如可以调整二维模型的方向,以使空间的入口位于二维模型的上方,当然也可以根据拍摄时获得的各个空间的实际朝向来调整空间的入口的位置,以调整二维模型整体的方向。
应该理解的是,虽然附图的框图中的每个方框可以代表一个模块,该模块的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令,但是这些模块并不是必然按照顺序依次执行。本公开中装置实施例中的各模块及功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上的模块或功能单元集成在一个模块中。上述集成的各个模块既可以采用硬件的形式实现,也可 以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。上述提到的存储介质可以是只读存储器,磁盘或光盘等。
[模型修正设备]
下面参考图5,其示出了适于用来实现本公开实施例的电子设备(例如图1中的终端设备或服务器)500的结构示意图。本公开实施例中的终端设备可以是上述系统中的各种终端设备。图5示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图5所示,电子设备500可以包括处理装置(例如中央处理器、图形处理器等)501,用于控制电子设备的整体操作。处理装置可以包括一个或多个处理器来执行指令,以完成上述的方法的全部或部分步骤。此外,处理装置501还可以包括一个或多个模块,用于处理和其他装置之间的交互。
存储装置502用于存储各种类型的数据,存储装置502可以是包括各种类型的计算机可读存储介质或者它们的组合,例如可以是电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
传感器装置503,用于感受规定的被测量的信息并按照一定的规律转换成可用输出信号,可以包括一个或多个传感器。例如,其可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器 等,用于检测电子设备的打开/关闭状态、相对定位、加速/减速、温度、湿度和光线等的变化。
处理装置501、存储装置502以及传感器装置503通过总线504彼此相连。输入/输出(I/O)接口505也连接至总线504。
多媒体装置506可以包括触摸屏、触摸板、键盘、鼠标、摄像头、麦克风等的输入装置用以接收来自用户的输入信号,在各种输入装置可以与上述传感器装置503的各种传感器配合完成例如手势操作输入、图像识别输入、距离检测输入等;多媒体装置506还可以包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置。
电源装置507,用于为电子设备中的各种装置提供电力,可以包括电源管理系统、一个或多个电源及为其他装置分配电力的组件。
通信装置508,可以允许电子设备500与其他设备进行无线或有线通信以交换数据。
上述各项装置也均可以连接至I/O接口505以实现电子设备500的应用。
虽然图5示出了具有各种装置的电子设备,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置从网络上被下载和安装,或者从存储装置被安装。在该计算机程序被处理装置执行时,执行本公开实施例的方法中限定的上述功能。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。
要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、RF(射频)等等,或者上述的任意合适的组合。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征在于,
确定第一空间和第二空间各自的模型中的对应结构;
粗调步骤,根据所述对应结构,粗调所述第一空间和第二空间的模型的位置,使所述对应结构符合预设的连接建立条件,建立连接关系;
精调步骤,按照已建立的连接关系,精调所述第一空间和第二空间的模型的位置;
其中,当所述对应结构符合所述预设的连接建立条件时,在所述第一空间和所述第二空间的模型之间出现表示二者已建立连接关系的预设的连接效果。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征在于,
所述预设的连接建立条件,包括使所述第一空间和第二空间的所述对应结构之间的夹角小于预设角度和/或距离小于预设距离。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征 在于,
当所述第一空间和第二空间的模型是针对不同的空间分别建立的模型时,在所述粗调步骤中,以使所述第一空间和所述第二空间的模型不重叠的方式,粗调所述第一空间和第二空间的模型的位置;
当所述第一空间和第二空间的模型是针对同一空间分别建立的模型时,在所述粗调步骤中,以使所述第一空间和所述第二空间的模型至少部分重叠的方式,粗调所述第一空间和第二空间的模型的位置。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征在于,
所述粗调步骤中确定的所述第一空间和第二空间的模型不重叠时,所述第一空间和第二空间的模型边界线为第一边界线;
所述粗调步骤中确定的所述第一空间和第二空间的模型重叠时,所述重叠部分的模型边界线为第二边界线;
所述精调步骤根据所述第一边界线和所述第二边界线精调所述第一空间和第二空间的模型的位置。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征在于,
当所述第一空间和第二空间的模型是针对同一空间分别建立的模型时,在所述精调步骤中,对于两模型,仅保留所述第一边界线。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征在于,
所述精调步骤还包括,去除所述第二边界线。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征在于,
所述对应结构,是所述第一空间和所述第二空间的对应的开口或边界线;
在所述精调步骤中,使已建立连接关系的所述开口或边界线对准。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征 在于,
将粗调后的所述第一空间和第二空间的模型的所述对应结构所在的边界线合并,并使所述对应结构的中点重合。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征在于,
若所述第一空间和第二空间的模型的所述对应结构所在的边界线合并后,所述第一空间和第二空间的模型至少一者与第三空间的模型存在重叠部分,则将所述重叠部分中预设距离范围内的两条大致平行的边界线合并。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征在于,
当所述重叠部分中预设距离范围内的两条大致平行的边界线合并时,将所述两条边界线合并至所述两条边界线的中间位置。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征在于,
所述模型是基于对所述第一空间和第二空间的内部拍摄得到的图像而建立的三维模型和/或二维模型;
在所述粗调步骤之前,还包括使所述第一空间和第二空间的所述模型按照拍摄各自的所述图像时的位置和/或方向预先摆放的步骤。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征在于,
所述第一空间和第二空间的模型的至少一者,是多个空间的模型构成的模型组。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征在于,
所述第一空间和第二空间为房间,所述对应结构至少是所述房间的门、或窗、或开口、或墙角或墙线之一。
根据本公开的一个或多个实施例,提供了一种模型修正方法,其特征 在于,
所述预设的连接效果至少包括变为相同颜色、出现吸附效果、或出现连接标识之一。
根据本公开的一个或多个实施例,提供了一种模型修正装置,其特征在于,包括:
结构确定模块,用于确定第一空间和第二空间各自的模型中的对应结构;
粗调模块,用于根据所述对应结构,粗调所述第一空间和第二空间的模型的位置,使所述对应结构符合预设的连接建立条件,建立连接关系;
精调模块,用于按照已建立的连接关系,精调所述第一空间和第二空间的模型的位置;
其中,当所述对应结构符合所述预设的连接建立条件时,在所述第一空间和所述第二空间的模型之间出现表示二者已建立连接关系的预设的连接效果。
根据本公开的一个或多个实施例,提供了一种模型修正装置,其特征在于,
所述预设的连接建立条件,包括使所述第一空间和第二空间的所述对应结构之间的夹角小于预设角度和/或距离小于预设距离;
所述预设的连接效果至少包括变为相同颜色、出现吸附效果、或出现连接标识之一。
根据本公开的一个或多个实施例,提供了一种模型修正装置,其特征在于,
当所述第一空间和第二空间的模型是针对不同的空间分别建立的模型时,所述粗调模块以使所述第一空间和所述第二空间的模型不重叠的方式,粗调所述第一空间和第二空间的模型的位置;
当所述第一空间和第二空间的模型是针对同一空间分别建立的模型时,所述粗调模块以使所述第一空间和所述第二空间的模型至少部分重叠的方式,粗调所述第一空间和第二空间的模型的位置。
根据本公开的一个或多个实施例,提供了一种模型修正装置,其特征在于,
所述粗调模块确定的所述第一空间和第二空间的模型不重叠时,所述第一空间和第二空间的模型边界线为第一边界线;
所述粗调模块确定的所述第一空间和第二空间的模型重叠时,所述重叠部分的模型边界线为第二边界线;
所述精调模块根据所述第一边界线和所述第二边界线精调所述第一空间和第二空间的模型的位置。
根据本公开的一个或多个实施例,提供了一种模型修正装置,其特征在于,
当所述第一空间和第二空间的模型是针对同一空间分别建立的模型时,所述精调模块对于两模型,仅保留所述第一边界线并去除所述第二边界线。
根据本公开的一个或多个实施例,提供了一种模型修正装置,其特征在于,
所述对应结构,是所述第一空间和所述第二空间的对应的开口或边界线;
所述精调模块使已建立连接关系的所述开口或边界线对准。
根据本公开的一个或多个实施例,提供了一种模型修正装置,其特征在于,
所述精调模块将粗调后的所述第一空间和第二空间的模型的所述对应结构所在的边界线合并,并使所述对应结构的中点重合;
若所述第一空间和第二空间的模型的所述对应结构所在的边界线合并后,所述第一空间和第二空间的模型至少一者与第三空间的模型存在重叠部分,则将所述重叠部分中预设距离范围内的两条大致平行的边界线合并;
当所述重叠部分中预设距离范围内的两条大致平行的边界线合并时,将所述两条边界线合并至所述两条边界线的中间位置。
根据本公开的一个或多个实施例,提供了一种模型修正装置,其特征在于,
所述模型是基于对所述第一空间和第二空间的内部拍摄得到的图像而建立的三维模型和/或二维模型;
所述装置还包括摆放模块,使所述第一空间和第二空间的所述模型按照拍摄各自的所述图像时的位置和/或方向预先摆放。
根据本公开的一个或多个实施例,提供了一种模型修正装置,其特征在于,
所述第一空间和第二空间的模型的至少一者,是多个空间的模型构成的模型组;
所述第一空间和第二空间为房间,所述对应结构至少是所述房间的门、或窗、或开口、或墙角或墙线之一。
根据本公开的一个或多个实施例,提供了一种计算机设备,其特征在于,包括存储器和处理器,所述存储器中存储有计算机程序,所述处理器执行所述计算机程序时实现如上述任一项所述的方法。
根据本公开的一个或多个实施例,提供了一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如上述任一项所述的方法。
以上描述仅为本公开的较佳实施例以及对所运用技术原理的说明。本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的技术方案,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它技术方案。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的技术方案。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行来执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的 限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (25)

  1. 一种模型修正方法,其特征在于,包括:
    确定第一空间和第二空间各自的模型中的对应结构;
    粗调步骤,根据所述对应结构,粗调所述第一空间和第二空间的模型的位置,使所述对应结构符合预设的连接建立条件,建立连接关系;
    精调步骤,按照已建立的连接关系,精调所述第一空间和第二空间的模型的位置;
    其中,当所述对应结构符合所述预设的连接建立条件时,在所述第一空间和所述第二空间的模型之间出现表示二者已建立连接关系的预设的连接效果。
  2. 如权利要求1所述的模型修正方法,其特征在于,
    所述预设的连接建立条件,包括使所述第一空间和第二空间的所述对应结构之间的夹角小于预设角度和/或距离小于预设距离。
  3. 如权利要求1或2所述的模型修正方法,其特征在于,
    当所述第一空间和第二空间的模型是针对不同的空间分别建立的模型时,在所述粗调步骤中,以使所述第一空间和所述第二空间的模型不重叠的方式,粗调所述第一空间和第二空间的模型的位置;
    当所述第一空间和第二空间的模型是针对同一空间分别建立的模型时,在所述粗调步骤中,以使所述第一空间和所述第二空间的模型至少部分重叠的方式,粗调所述第一空间和第二空间的模型的位置。
  4. 如权利要求3所述的模型修正方法,其特征在于,
    所述粗调步骤中确定的所述第一空间和第二空间的模型不重叠时,所述第一空间和第二空间的模型边界线为第一边界线;
    所述粗调步骤中确定的所述第一空间和第二空间的模型重叠时,所述重叠部分的模型边界线为第二边界线;
    所述精调步骤根据所述第一边界线和所述第二边界线精调所述第一空间和第二空间的模型的位置。
  5. 如权利要求4所述的模型修正方法,其特征在于,
    当所述第一空间和第二空间的模型是针对同一空间分别建立的模型时,在所述精调步骤中,对于两模型,仅保留所述第一边界线。
  6. 如权利要求4所述的模型修正方法,其特征在于,
    所述精调步骤还包括,去除所述第二边界线。
  7. 如权利要求1所述的模型修正方法,其特征在于,
    所述对应结构,是所述第一空间和所述第二空间的对应的开口或边界线;
    在所述精调步骤中,使已建立连接关系的所述开口或边界线对准。
  8. 如权利要求1所述的模型修正方法,其特征在于,
    将粗调后的所述第一空间和第二空间的模型的所述对应结构所在的边界线合并,并使所述对应结构的中点重合。
  9. 如权利要求8所述的模型修正方法,其特征在于,
    若所述第一空间和第二空间的模型的所述对应结构所在的边界线合并后,所述第一空间和第二空间的模型至少一者与第三空间的模型存在重叠部分,则将所述重叠部分中预设距离范围内的两条大致平行的边界线合并。
  10. 如权利要求9所述的模型修正方法,其特征在于,
    当所述重叠部分中预设距离范围内的两条大致平行的边界线合并时,将所述两条边界线合并至所述两条边界线的中间位置。
  11. 如权利要求10所述的模型修正方法,其特征在于,
    所述模型是基于对所述第一空间和第二空间的内部拍摄得到的图像而建立的三维模型和/或二维模型;
    在所述粗调步骤之前,还包括使所述第一空间和第二空间的所述模型按照拍摄各自的所述图像时的位置和/或方向预先摆放的步骤。
  12. 如权利要求1所述的模型修正方法,其特征在于,
    所述第一空间和第二空间的模型的至少一者,是多个空间的模型构成的模型组。
  13. 如权利要求1所述的模型修正方法,其特征在于,
    所述第一空间和第二空间为房间,所述对应结构至少是所述房间的门、或窗、或开口、或墙角或墙线之一。
  14. 如权利要求1所述的模型修正方法,其特征在于,
    所述预设的连接效果至少包括变为相同颜色、出现吸附效果、或出现连接标识之一。
  15. 一种模型修正装置,其特征在于,包括:
    结构确定模块,用于确定第一空间和第二空间各自的模型中的对应结构;
    粗调模块,用于根据所述对应结构,粗调所述第一空间和第二空间的模型的位置,使所述对应结构符合预设的连接建立条件,建立连接关系;
    精调模块,用于按照已建立的连接关系,精调所述第一空间和第二空间的模型的位置;
    其中,当所述对应结构符合所述预设的连接建立条件时,在所述第一空间和所述第二空间的模型之间出现表示二者已建立连接关系的预设的连接效果。
  16. 如权利要求15所述的模型修正装置,其特征在于,
    所述预设的连接建立条件,包括使所述第一空间和第二空间的所述对应结构之间的夹角小于预设角度和/或距离小于预设距离;
    所述预设的连接效果至少包括变为相同颜色、出现吸附效果、或出现连接标识之一。
  17. 如权利要求15所述的模型修正装置,其特征在于,包括:
    当所述第一空间和第二空间的模型是针对不同的空间分别建立的模型时,所述粗调模块以使所述第一空间和所述第二空间的模型不重叠的方式,粗调所述第一空间和第二空间的模型的位置;
    当所述第一空间和第二空间的模型是针对同一空间分别建立的模型时,所述粗调模块以使所述第一空间和所述第二空间的模型至少部分重叠的方式,粗调所述第一空间和第二空间的模型的位置。
  18. 如权利要求17所述的模型修正装置,其特征在于,还包括:
    所述粗调模块确定的所述第一空间和第二空间的模型不重叠时,所述第一空间和第二空间的模型边界线为第一边界线;
    所述粗调模块确定的所述第一空间和第二空间的模型重叠时,所述重叠部分的模型边界线为第二边界线;
    所述精调模块根据所述第一边界线和所述第二边界线精调所述第一空间和第二空间的模型的位置。
  19. 如权利要求18所述的模型修正装置,其特征在于,包括:
    当所述第一空间和第二空间的模型是针对同一空间分别建立的模型时,所述精调模块对于两模型,仅保留所述第一边界线并去除所述第二边界线。
  20. 如权利要求15所述的模型修正装置,其特征在于,包括:
    所述对应结构,是所述第一空间和所述第二空间的对应的开口或边界线;
    所述精调模块使已建立连接关系的所述开口或边界线对准。
  21. 如权利要求15所述的模型修正装置,其特征在于,包括:
    所述精调模块将粗调后的所述第一空间和第二空间的模型的所述对应结构所在的边界线合并,并使所述对应结构的中点重合;
    若所述第一空间和第二空间的模型的所述对应结构所在的边界线合并后,所述第一空间和第二空间的模型至少一者与第三空间的模型存在重叠部分,则将所述重叠部分中预设距离范围内的两条大致平行的边界线合并;
    当所述重叠部分中预设距离范围内的两条大致平行的边界线合并时,将所述两条边界线合并至所述两条边界线的中间位置。
  22. 如权利要求15所述的模型修正装置,其特征在于,包括:
    所述模型是基于对所述第一空间和第二空间的内部拍摄得到的图像而建立的三维模型和/或二维模型;
    所述装置还包括摆放模块,使所述第一空间和第二空间的所述模型按照拍摄各自的所述图像时的位置和/或方向预先摆放。
  23. 如权利要求15所述的模型修正装置,其特征在于,包括:
    所述第一空间和第二空间的模型的至少一者,是多个空间的模型构成的模型组;
    所述第一空间和第二空间为房间,所述对应结构至少是所述房间的门、或窗、或开口、或墙角或墙线之一。
  24. 一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机程序,所述处理器执行所述计算机程序时实现如权利要求1-14中任一项所述的方法。
  25. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1-14中任一项所述的方法。
PCT/CN2020/123136 2020-10-23 2020-10-23 模型修正方法、装置、设备 WO2022082704A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080002532.2A CN112424837B (zh) 2020-10-23 2020-10-23 模型修正方法、装置、设备
PCT/CN2020/123136 WO2022082704A1 (zh) 2020-10-23 2020-10-23 模型修正方法、装置、设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/123136 WO2022082704A1 (zh) 2020-10-23 2020-10-23 模型修正方法、装置、设备

Publications (1)

Publication Number Publication Date
WO2022082704A1 true WO2022082704A1 (zh) 2022-04-28

Family

ID=74783021

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123136 WO2022082704A1 (zh) 2020-10-23 2020-10-23 模型修正方法、装置、设备

Country Status (2)

Country Link
CN (1) CN112424837B (zh)
WO (1) WO2022082704A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972579A (zh) * 2022-06-22 2022-08-30 北京城市网邻信息技术有限公司 户型图构建方法、装置、设备及存储介质
CN115373570A (zh) * 2022-07-05 2022-11-22 北京乐新创展科技有限公司 图像处理方法、装置、电子设备及存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591193A (zh) * 2021-08-05 2021-11-02 广东三维家信息科技有限公司 一种图形位置调整方法、装置、电子设备及存储介质
CN113760463B (zh) * 2021-09-08 2023-07-28 北京世冠金洋科技发展有限公司 子模型组件在父模型组件各边位置的调整方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1695170A (zh) * 2002-10-11 2005-11-09 英特莱格公司 生成计算机可读模型的方法
CN102609584A (zh) * 2012-02-09 2012-07-25 孙华良 室内软装3d效果图设计输出装置和方法
CN103839293A (zh) * 2014-03-07 2014-06-04 武汉蜗牛科技有限责任公司 一种三维房屋装饰方法与系统
CN104751517A (zh) * 2015-04-28 2015-07-01 努比亚技术有限公司 图形处理方法及装置
CN108228986A (zh) * 2017-12-22 2018-06-29 北京城建设计发展集团股份有限公司 地铁车站三维建筑模型自动生成方法
CN108717726A (zh) * 2018-05-11 2018-10-30 北京家印互动科技有限公司 三维户型模型生成方法及装置
CN109598783A (zh) * 2018-11-20 2019-04-09 西南石油大学 一种房间3d建模方法及家具3d预览系统

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025685B (zh) * 2017-04-11 2020-03-17 南京林业大学 拓扑感知下的机载建筑屋顶点云建模方法
CN108416747B (zh) * 2018-02-27 2020-07-10 平安科技(深圳)有限公司 元素位置修正方法、装置、计算机设备及存储介质
CN109448115B (zh) * 2018-10-31 2023-10-27 广州凡拓动漫科技有限公司 三维模型的处理方法、装置和计算机设备
CN110232733B (zh) * 2019-05-29 2024-03-15 武汉华正空间软件技术有限公司 三维模型建模方法与系统、存储介质和计算机
CN110505463A (zh) * 2019-08-23 2019-11-26 上海亦我信息技术有限公司 基于拍照的实时自动3d建模方法
CN111080804B (zh) * 2019-10-23 2020-11-06 贝壳找房(北京)科技有限公司 三维图像生成方法及装置
CN111127655B (zh) * 2019-12-18 2021-10-12 北京城市网邻信息技术有限公司 房屋户型图的构建方法及构建装置、存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1695170A (zh) * 2002-10-11 2005-11-09 英特莱格公司 生成计算机可读模型的方法
CN102609584A (zh) * 2012-02-09 2012-07-25 孙华良 室内软装3d效果图设计输出装置和方法
CN103839293A (zh) * 2014-03-07 2014-06-04 武汉蜗牛科技有限责任公司 一种三维房屋装饰方法与系统
CN104751517A (zh) * 2015-04-28 2015-07-01 努比亚技术有限公司 图形处理方法及装置
CN108228986A (zh) * 2017-12-22 2018-06-29 北京城建设计发展集团股份有限公司 地铁车站三维建筑模型自动生成方法
CN108717726A (zh) * 2018-05-11 2018-10-30 北京家印互动科技有限公司 三维户型模型生成方法及装置
CN109598783A (zh) * 2018-11-20 2019-04-09 西南石油大学 一种房间3d建模方法及家具3d预览系统

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972579A (zh) * 2022-06-22 2022-08-30 北京城市网邻信息技术有限公司 户型图构建方法、装置、设备及存储介质
CN115373570A (zh) * 2022-07-05 2022-11-22 北京乐新创展科技有限公司 图像处理方法、装置、电子设备及存储介质
CN115373570B (zh) * 2022-07-05 2023-11-21 北京乐新创展科技有限公司 图像处理方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN112424837B (zh) 2021-09-28
CN112424837A (zh) 2021-02-26

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20958286

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20958286

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.10.2023)
