
CN110910504A - Method and device for determining a three-dimensional model of a region

Info

Publication number
CN110910504A
Authority
CN
China
Prior art keywords
dimensional model
video frame
frame picture
point
point locations
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911194400.XA
Other languages
Chinese (zh)
Inventor
周明瑞 (Zhou Mingrui)
孙锐 (Sun Rui)
唐萌 (Tang Meng)
麻广伟 (Ma Guangwei)
石清华 (Shi Qinghua)
熊继林 (Xiong Jilin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Cennavi Technologies Co Ltd
Original Assignee
Beijing Cennavi Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Cennavi Technologies Co Ltd filed Critical Beijing Cennavi Technologies Co Ltd
Priority to CN201911194400.XA
Publication of CN110910504A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method and a device for determining a three-dimensional model of a region, relating to the technical field of data processing and used to construct a scene model in a timely and accurate manner. The method comprises the following steps: acquiring a three-dimensional model of a target area; acquiring video data of the target area and extracting a video frame picture from the video data; determining coordinate information of M point locations in the three-dimensional model according to the video frame picture, where M is a positive integer; and superimposing the video frame picture onto the three-dimensional model according to the coordinate information of the M point locations.

Description

Method and device for determining three-dimensional model of region
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for determining a three-dimensional model of a region.
Background
At present, a three-dimensional model of a region is determined mainly by manual modeling, which comprises the following steps: a staff member collects geographic position information of the region and description information such as photos and videos; the staff member determines the specific position of the region according to the geographic position information and determines the three-dimensional form of the region according to the description information; and the staff member then builds the three-dimensional model of the region in three-dimensional modeling software according to the specific position and the three-dimensional form.
However, when the buildings or equipment around a scene change, manual modeling consumes a large amount of time on data acquisition and modeling, so a scene model cannot be constructed in a timely and accurate manner.
Disclosure of Invention
The application provides a method and a device for determining a three-dimensional model of a region, which are used to construct the three-dimensional model of the region in a timely and accurate manner.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in a first aspect, a method for determining a three-dimensional model of a region is provided, the method comprising: the server obtains a three-dimensional model of the target area. The server acquires video data of a target area and extracts a video frame picture from the video data. And the server determines the coordinate information of M point locations in the three-dimensional model according to the video frame picture, wherein M is a positive integer. And the server superimposes the video frame picture on the three-dimensional model of the target area according to the coordinate information of the M point locations.
Based on the scheme, the server obtains the three-dimensional model of the target area and obtains the video data of the target area. Because the video data of the target area can reflect the actual state of the target area in time, the server can extract the video frame picture from the video data as new texture picture data of the three-dimensional model of the target area. And the server determines the coordinate information of M point positions in the three-dimensional model according to the video frame picture. Then, the server superimposes the video frame picture on the three-dimensional model according to the coordinate information of the M point locations, so that the three-dimensional model of the target area can truly present the actual situation of the target area. In the embodiment of the application, the server superimposes the real-time video data of the real scene on the three-dimensional model of the target area, so that the surrounding facilities and equipment of the target area can be viewed in a three-dimensional manner in the three-dimensional model of the target area, and the actual condition of the target area can be viewed in real time.
In a second aspect, an apparatus for determining a three-dimensional model of a region is provided, where the apparatus may be a server or a chip applied to the server. The apparatus may include: a communication unit, configured to acquire a three-dimensional model of a target area, acquire video data of the target area, and extract a video frame picture from the video data; and a processing unit, configured to determine the coordinate information of M point locations in the three-dimensional model according to the video frame picture, the processing unit being further configured to superimpose the video frame picture onto the three-dimensional model according to the coordinate information of the M point locations.
In a third aspect, a readable storage medium is provided, in which instructions are stored, which when executed, implement the method for determining a three-dimensional model of a region as in the first aspect.
In a fourth aspect, there is provided a computer program product comprising at least one instruction which, when run on a computer, causes the computer to perform the method of determining a three-dimensional model of a region as in the first aspect.
In a fifth aspect, a chip is provided, the chip comprising at least one processor and a communication interface, the communication interface being coupled to the at least one processor, the at least one processor being configured to execute a computer program or instructions to implement the method for determining a three-dimensional model of a region of the first aspect.
The apparatuses, computer storage media, computer program products, and chips described above are all configured to execute the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding schemes in the corresponding methods; details are not described here again.
Drawings
Fig. 1 is a schematic structural diagram of a communication system according to an embodiment of the present application;
fig. 2 is a first flowchart illustrating a method for determining a three-dimensional model of a region according to an embodiment of the present disclosure;
FIG. 3 is a first schematic diagram of a three-dimensional model of a region provided by an embodiment of the present application;
FIG. 4a is a schematic diagram of texture map data of a three-dimensional model according to an embodiment of the present application;
FIG. 4b is a schematic diagram of point cloud data of a three-dimensional model according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a geometric model provided by an embodiment of the present application;
fig. 6a is a first schematic diagram illustrating a video frame picture in video data of a target area according to an embodiment of the present application;
fig. 6b is a second schematic diagram illustrating a video frame picture in video data of a target area according to an embodiment of the present application;
FIG. 7 is a second schematic diagram of a three-dimensional model of a target region according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a space structure according to an embodiment of the present disclosure;
fig. 9 is a flowchart illustrating a second method for determining a three-dimensional model of a region according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a method for determining texture coordinates of a geometric model by the Mercator projection method according to an embodiment of the present disclosure;
FIG. 11 is a third schematic diagram of a three-dimensional model of a target region provided in an embodiment of the present application;
fig. 12 is a first schematic structural diagram of an apparatus for determining a three-dimensional model of a region according to an embodiment of the present application;
fig. 13 is a second schematic structural diagram of an apparatus for determining a three-dimensional model of a region according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
To facilitate a clear description of the technical solutions of the embodiments of the present application, the terms "including" and "having" and any variations thereof in the description of the present application are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It is noted that, in the present application, words such as "exemplary" or "for example" are used to mean exemplary, illustrative, or descriptive. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of associated objects and means that there may be three relationships; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be single or multiple.
In order to facilitate understanding of the technical solutions of the present application, some technical terms are described below.
1. Rasterization
Rasterization refers to the process of converting the mathematical description of an object, together with the color information associated with the object, into pixels at the corresponding positions on the screen and the colors used to fill those pixels.
2. Web Graphics Library (WebGL)
WebGL is a 3D graphics protocol. With it, web developers can display 3D scenes and models smoothly in a browser by means of the system graphics card, and can also build complex navigation and data visualization.
3. Open Graphics Library (OpenGL)
OpenGL is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics.
4. Application Programming Interface (API)
An API is a set of predefined functions or conventions for connecting the different components of a software system. Its purpose is to give applications and developers the ability to access a set of routines based on certain software or hardware without having to access the native code or understand the details of the internal workings.
5. Binary Open Scene Graph (OSGB)
OSGB is a data format for three-dimensional models produced by oblique photography, typically stored as binary data with embedded or linked textures.
6、FBX
FBX is a three-dimensional model format defined by Autodesk corporation that can provide interoperability among most 3D software.
7. City Information Model (CIM)
CIM is an organic complex of a three-dimensional city-space model and city information established on the basis of city information data. It is formed largely of three-dimensional model data and belongs to the basic data of smart-city construction.
8、3DS
3DS is a three-dimensional model format defined by Autodesk; it is a relatively early three-dimensional format.
9. Geographic information system (geographic information system, GIS)
GIS is a specific and very important spatial information system: a technical system that, supported by computer hardware and software, collects, stores, manages, operates on, analyzes, displays, and describes geographically distributed data over all or part of the Earth's surface (including the atmosphere).
10. Web geographic information system (WebGIS)
WebGIS refers to a GIS working on the Web. It is an extension and development of the traditional GIS onto the network, retains the characteristics of a traditional GIS, and can realize basic GIS functions such as spatial data retrieval, query, map output, and editing.
11. Hypertext markup language (HTML)
HTML is a markup language. Through its tags, document formats on the network can be unified, so that scattered Internet resources are connected into a logical whole. HTML5 refers to the latest specification of this language.
12. Three-dimensional model
A three-dimensional model is a model representing the three-dimensional structure of an object. It is composed of a texture map and a geometric model; the geometric model is made up of a plurality of points, each of which has corresponding three-dimensional coordinates.
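To make this definition concrete, the sketch below writes the two constituents out as data types. It is a minimal TypeScript sketch; the type and field names are assumptions for illustration and are not taken from any of the model formats described above.

```typescript
// One point of the geometric model. Per the description, point location data
// carries coordinate information (longitude, latitude, height), a normal
// vector, and a texture coordinate.
interface ModelPoint {
  position: [lon: number, lat: number, height: number]; // coordinate information
  normal: [number, number, number];                     // normal vector
  uv: [number, number];                                 // texture coordinate
}

// A three-dimensional model = texture map + geometric model.
interface ThreeDimensionalModel {
  textureMap: ImageBitmap; // static picture, e.g. JPEG or PNG
  points: ModelPoint[];    // points of the geometric model
  indices: Uint32Array;    // triangles over those points
}
```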
Fig. 1 is a schematic structural diagram of a communication system according to an embodiment of the present application. The communication system includes: a monitoring device 10, and a server 20 communicatively coupled to the monitoring device 10.
The monitoring device 10 is configured to obtain video data of an area and send the video data to the server 20.
The monitoring device 10 has corresponding location information, which may be longitude and latitude or spatial geographic information, for example, community A, street B, city C.
And the server 20 is used for receiving the video data from the monitoring device 10 and determining a three-dimensional model of the area according to the video data.
Alternatively, the number of the monitoring devices 10 in the communication system of the embodiment of the present application may be plural.
In fig. 1, the monitoring apparatus 10 may communicate with the server 20 by a wired method (e.g., a communication cable) or the like. Of course, the monitoring device 10 may also communicate with the server 20 by wireless means. For example, the monitoring device 10 communicates with the server 20 via a network, such as wireless fidelity (WiFi) or the like.
It should be noted that the server 20 may be a computer. The server 20 may use WebGL as a computer graphics API to implement the technical solution of the present application. The server 20 may also use OpenGL as a computer graphics API to implement the technical solution of the present application.
A method for determining a three-dimensional model of a region according to an embodiment of the present application will be described in detail with reference to fig. 2 to 11.
It should be noted that the embodiments of the present application may reference one another; for example, the same or similar steps in the method embodiments and the apparatus embodiments may reference each other, without limitation.
As shown in fig. 2, a method for determining a three-dimensional model of a region provided in an embodiment of the present application may include:
step 101, a server obtains a three-dimensional model of a target area.
The three-dimensional model can be an oblique-photography three-dimensional model or a CIM three-dimensional model. The data of the three-dimensional model includes texture map data and point location data, where the point location data includes coordinate information, normal vectors, texture coordinates, and other data. The coordinate information is the spatial position of the point, including longitude and latitude, height, and the like.
Illustratively, as shown in fig. 3, a three-dimensional model of a target region is provided in an embodiment of the present application. The three-dimensional model may be composed of the texture map data shown in FIG. 4a and the plurality of points shown in FIG. 4 b.
The data of the three-dimensional model may be in any of a plurality of data formats, such as the OSGB, 3DS, or FBX data format. A texture map is a static picture in one of various formats, such as Joint Photographic Experts Group (JPEG) or Portable Network Graphics (PNG).
In one possible implementation, the server is pre-provisioned with three-dimensional models of a plurality of regions. The three-dimensional model of each region has a unique identification, which may be the name of the area, e.g., community A or road B. The server determines the three-dimensional model of the target area in response to an input operation by a staff member. For example, the server has an input device through which the staff member inputs the identification of the target area, and the server determines the three-dimensional model of the target area from among the three-dimensional models of the plurality of areas in response to the input operation.
Alternatively, the server may obtain the three-dimensional model of the target area in other manners, for example, the server obtains the three-dimensional model of the target area from a database communicatively connected to the server.
In another possible implementation manner, the server may process a plurality of point data of the three-dimensional model to obtain a geometric model of the target area. And the server pastes the texture mapping to the geometric model according to the texture coordinates of the point data to obtain a three-dimensional model of the target area.
Illustratively, the server has a WebGIS tool, a central processing unit (CPU), and a graphics processing unit (GPU). After the server obtains data of a target area (such as oblique photography data), the server parses the oblique photography data in the CPU through the WebGIS tool to obtain the vertex coordinates, normal vectors, texture coordinates, texture information, and other data of the point location data, and sends the processed data to the GPU. Through the GPU, the server may construct an image from the multiple point data in fig. 4b using a preset drawing unit (such as a triangle), and calculate and store the texture coordinates corresponding to each point, to obtain the geometric model shown in fig. 5. The server then pastes the texture map onto the geometric model according to the texture coordinates of the texture map data to obtain the three-dimensional model of the target area.
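The hand-off from parsed point location data to the GPU can be sketched with plain WebGL buffer calls, WebGL being one of the graphics APIs named above. This is a hedged sketch, assuming the CPU-side parsing has already produced flat attribute arrays; the function name and argument layout are illustrative, not part of the patent.

```typescript
// Upload parsed point data (positions, texture coordinates) and triangle
// indices (the "preset drawing unit") to the GPU as WebGL buffers.
function uploadGeometry(
  gl: WebGLRenderingContext,
  positions: Float32Array, // x, y, z per point
  uvs: Float32Array,       // texture coordinate per point
  indices: Uint16Array     // triangles over the points
): { positionBuf: WebGLBuffer; uvBuf: WebGLBuffer; indexBuf: WebGLBuffer } {
  const positionBuf = gl.createBuffer()!;
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuf);
  gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

  const uvBuf = gl.createBuffer()!;
  gl.bindBuffer(gl.ARRAY_BUFFER, uvBuf);
  gl.bufferData(gl.ARRAY_BUFFER, uvs, gl.STATIC_DRAW);

  const indexBuf = gl.createBuffer()!;
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuf);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

  return { positionBuf, uvBuf, indexBuf };
}
```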
Step 102, the server obtains video data of the target area.
In one possible implementation, the server may obtain the video data of the target area in real time through the monitoring apparatus 10 shown in fig. 1. Alternatively, the server may obtain the video data of the target area according to a preset time period through the monitoring apparatus shown in fig. 1.
It should be noted that the video data of the target area can be used to characterize the actual situation of the target area within the preset time period. Video data is composed of a plurality of consecutive video frame pictures. Each video frame picture has temporal information.
Illustratively, as shown in fig. 6a, a video frame picture in video data of a target area is provided in an embodiment of the present application. The time information of the video frame picture is 14:42 on Wednesday, September 18, 2019. The monitoring area of the monitoring device 10 is a city street in Fuzhou, Fujian province, and the position information of the monitoring device 10 is the roof of the city street branch office in Fuzhou, Fujian province.
Illustratively, the data format of the video data may include MP4 format, AVI format, WMV format, and the like.
And 103, extracting a video frame picture from the video data of the target area by the server.
The video frame picture comprises a plurality of pixels, and each pixel has corresponding position information. The video frame picture can be in JPEG, PNG or other picture formats.
In a possible implementation manner, the server is provided with a video frame picture extraction tool in advance. The server may extract the video frame picture from the video data through a video frame picture extraction tool.
Illustratively, take the WebGIS tool as the video frame picture extraction tool. The server may load the video data into the WebGIS tool in the form of a video tag in HTML5. The server may then extract and store the video frame picture corresponding to the target area from the video data of the target area through the WebGIS tool (as shown in fig. 6a).
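One plausible realization of this extraction in a browser context, using only standard HTML5 APIs — the description names the HTML5 video tag; materializing the frame through a canvas is an assumption about how the extraction tool works:

```typescript
// Seek an HTML5 <video> element to a given time and capture that frame
// as a PNG data URL.
function grabFrame(video: HTMLVideoElement, atSeconds: number): Promise<string> {
  return new Promise((resolve, reject) => {
    video.addEventListener(
      "seeked",
      () => {
        const canvas = document.createElement("canvas");
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        const ctx = canvas.getContext("2d");
        if (!ctx) return reject(new Error("no 2d context"));
        ctx.drawImage(video, 0, 0);             // copy the current frame
        resolve(canvas.toDataURL("image/png")); // PNG picture format
      },
      { once: true }
    );
    video.currentTime = atSeconds; // triggers the seek
  });
}
```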
In another possible implementation manner, the video data may cover areas other than the target area; for example, in order to capture the video data of the target area as completely as possible, the shooting range of the monitoring device may exceed the range of the target area. In this case, the server may identify the video frame picture corresponding to the target area through image recognition or instruction recognition. These two modes are described separately below.
1. Image recognition. The server may perform image recognition on the video frame picture and determine the video frame picture corresponding to the target area. Alternatively, the server may divide the video frame picture into a plurality of sub-video-frame pictures, each corresponding to one area. That is, the server may automatically determine the video frame picture of the target area.
2. Instruction recognition. The server may determine the video frame picture corresponding to the target area in response to a manually input operation instruction. For example, after the video frame picture of the monitoring device is determined, a staff member selects boundary points or a boundary line of the target area in the video frame picture through the WebGIS tool of the server (as shown in fig. 6b, the box of white lines represents the boundary line of the target area). The server then determines the video frame picture of the target area according to the boundary points or boundary line.
And step 104, the server determines coordinate information of M point locations in the three-dimensional model according to the video frame picture.
Wherein M is a positive integer. The video frame picture includes a plurality of pixels, each pixel having corresponding position information.
In a possible implementation manner, the server may determine the coordinate information of the M point locations in the three-dimensional model in the following manner.
1. The server selects N point locations from the three-dimensional model of the target area according to the video frame picture of the target area, and acquires the position information of the N point locations.
The position information of a point location comprises coordinate information and height information. The pixels of the video frame picture have a correspondence with the N point locations; N is a positive integer.
For example, the server may select a plurality of points of the target area in the three-dimensional model of the target area, for example, vertices of the target area in the three-dimensional model. As shown in fig. 7, points 1, 2, 3, and 4 represent a plurality of points of the target region in the three-dimensional model. Each point location has corresponding location information.
2. The server determines a space structure corresponding to the target area according to the position information of the N point locations.
For example, the server may establish the space structure according to the coordinate information of the vertices of the target area. The server translates vertex 1, vertex 2, vertex 3, and vertex 4 of the target area in fig. 7 downwards by a preset distance to obtain point 1', point 2', point 3', and point 4', and upwards by a preset distance to obtain point 1", point 2", point 3", and point 4", respectively. By connecting point 1', point 2', point 3', point 4', point 1", point 2", point 3", and point 4", the server can obtain the space structure shown in fig. 8.
3. The server determines the coordinate information of the M point locations according to the intersection of the space structure and the three-dimensional model of the target area.
In the embodiment of the application, the space structure that the server constructs according to the coordinate information of the vertices of the target area has an intersecting surface with the three-dimensional model of the target area, and the intersecting surface carries a plurality of points. Because the intersecting surface is located on the surface of the three-dimensional model, the server can acquire the coordinate information of the M point locations on the intersecting surface.
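A sketch of steps 2 and 3 under simplifying assumptions: the N point locations are treated as a horizontal boundary polygon, the space structure as that polygon translated down and up by the preset distance, and the intersection as the model points that fall inside the translated extent. This is one plausible reading of the construction, not the only one, and all names are illustrative.

```typescript
interface Pt { x: number; y: number; z: number; }

// Translate the N boundary point locations down/up by distance d to get the
// corners 1'..N' and 1"..N" of the space structure.
function spaceStructure(boundary: Pt[], d: number): { lower: Pt[]; upper: Pt[] } {
  return {
    lower: boundary.map(p => ({ ...p, z: p.z - d })),
    upper: boundary.map(p => ({ ...p, z: p.z + d })),
  };
}

// Ray-casting point-in-polygon test in the horizontal (x, y) plane.
function insidePolygon(p: Pt, poly: Pt[]): boolean {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const a = poly[i], b = poly[j];
    if ((a.y > p.y) !== (b.y > p.y) &&
        p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x) {
      inside = !inside;
    }
  }
  return inside;
}

// The M point locations: model surface points inside the space structure.
function intersectWithModel(modelPoints: Pt[], boundary: Pt[], d: number): Pt[] {
  const zs = boundary.map(p => p.z);
  const zMin = Math.min(...zs) - d;
  const zMax = Math.max(...zs) + d;
  return modelPoints.filter(
    p => p.z >= zMin && p.z <= zMax && insidePolygon(p, boundary)
  );
}
```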
And 105, the server superimposes the video frame picture on the three-dimensional model according to the coordinate information of the M point locations.
In one possible implementation, as shown in fig. 9, step 105 can be implemented by steps 1051-1053:
step 1051, the server determines texture coordinates of the M point locations according to the coordinate information of the M point locations.
The texture coordinates are two-dimensional coordinates, and each texture coordinate corresponds one-to-one to the three-dimensional coordinates of a point location in the three-dimensional model.
In one possible implementation manner, the server determines texture coordinates of the target area according to coordinate information of a plurality of points and a preset projection method.
Illustratively, the preset projection method is taken as the Mercator projection method. As shown in fig. 10, in the three-dimensional coordinate system XYZ, the curved surface ABCD is a surface of the geometric model, and the curved surface ABCD includes a plurality of points, each having corresponding three-dimensional coordinates. The server can project the three-dimensional coordinates of the points of the curved surface ABCD onto a plane by the Mercator projection method to obtain the texture coordinates corresponding to those points. For example, the server projects the points of the curved surface ABCD onto the plane formed by the X-axis and the Y-axis, resulting in the plane A'B'C'D' in fig. 10. The plane A'B'C'D' comprises a plurality of points, each corresponding to a point in the curved surface ABCD: point A corresponds to point A', point B to point B', point C to point C', and point D to point D'. The coordinates of each point of the plane A'B'C'D' represent texture coordinates of the geometric model.
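The Mercator step follows directly from the standard spherical Mercator formulas; normalizing the projected coordinates over the target area's bounding box then yields texture coordinates in [0, 1]. The normalization is an assumption here — the text only says the points are projected onto a plane.

```typescript
// Spherical Mercator: longitude/latitude in degrees -> planar coordinates.
function mercator(lonDeg: number, latDeg: number): [number, number] {
  const R = 6378137; // equatorial radius in metres
  const lon = (lonDeg * Math.PI) / 180;
  const lat = (latDeg * Math.PI) / 180;
  return [R * lon, R * Math.log(Math.tan(Math.PI / 4 + lat / 2))];
}

// Project a set of (lon, lat) points and normalize to [0,1]^2 texture coordinates.
function toTextureCoords(points: Array<[number, number]>): Array<[number, number]> {
  const projected = points.map(([lon, lat]) => mercator(lon, lat));
  const xs = projected.map(p => p[0]);
  const ys = projected.map(p => p[1]);
  const [x0, x1] = [Math.min(...xs), Math.max(...xs)];
  const [y0, y1] = [Math.min(...ys), Math.max(...ys)];
  return projected.map(([x, y]) => [(x - x0) / (x1 - x0), (y - y0) / (y1 - y0)]);
}
```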
Step 1052, the server determines texture coordinates corresponding to pixels of the video frame picture.
In one possible implementation, the server may translate the location of each pixel in the texture map data for the target region into texture coordinates for the three-dimensional model. In this way, the server can determine a mapping relationship between texture coordinates of the three-dimensional model and the plurality of pixels. The server can determine texture coordinates corresponding to the pixels of the video frame picture through the mapping relation between the texture coordinates and the position information of the pixels.
Illustratively, the server determines the location of each pixel in the plane A'B'C'D' in fig. 10 based on the location of that pixel in the texture map data. That is, the server converts the position of each pixel into the two-dimensional coordinate system in which the plane A'B'C'D' lies, so that each pixel corresponds to a point in the plane A'B'C'D'. For example, pixel a corresponds to point A', pixel b corresponds to point B', pixel c corresponds to point C', and pixel d corresponds to point D'.
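The per-pixel half of this mapping reduces to normalizing a pixel's column and row against the picture dimensions. A minimal sketch; the vertical flip reflects the usual WebGL/OpenGL texture convention and is an assumption:

```typescript
// Map pixel (column i, row j) of a w x h picture to a texture coordinate
// in [0,1]^2, flipping the vertical axis as WebGL textures expect.
function pixelToUV(i: number, j: number, w: number, h: number): [number, number] {
  return [i / (w - 1), 1 - j / (h - 1)];
}
```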
And 1053, the server superimposes the video frame picture on the three-dimensional model according to the texture coordinates corresponding to the pixels of the video frame picture.
In one possible implementation, step 1053 may be implemented as follows:
1. after determining the texture coordinates corresponding to each pixel, the server may determine the position information of each pixel in the three-dimensional model according to the texture coordinates.
For example, the server may determine the position information of each pixel in the three-dimensional model according to the correspondence between the plane and the curved surface. The server determines that pixel a corresponds to point A', and the coordinates (i.e., texture coordinates) of point A' are the texture coordinates of pixel a; the server may thus determine that the position information of pixel a in the three-dimensional model is the position of point A. Likewise, the server determines that pixel b corresponds to point B', the coordinates of point B' are the texture coordinates of pixel b, and the position information of pixel b in the three-dimensional model is the position of point B. By analogy, the server may determine the position information of a plurality of pixels in the three-dimensional model.
2. The server can superimpose the video frame picture on the three-dimensional model according to the position information of the pixels of the video frame picture in the three-dimensional model.
Illustratively, as shown in fig. 11, a three-dimensional model of a target region is provided in an embodiment of the present application.
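When the renderer is WebGL (one of the graphics APIs suggested for the server above), the superposition can be realized by uploading the video frame as the model's texture; texImage2D accepts an HTMLVideoElement directly. A hedged sketch — the shader that samples this texture at the computed texture coordinates is assumed to exist:

```typescript
// Upload the video element's current frame as a 2D texture. Video frames are
// rarely power-of-two sized, so clamp wrapping and linear filtering are used.
function overlayVideoFrame(gl: WebGLRenderingContext, video: HTMLVideoElement): WebGLTexture {
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  return tex;
}
```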
It should be noted that, in the embodiment of the present application, the server may repeat steps 102 to 105 according to a preset time period, so that the three-dimensional model of the target area is continuously updated, so as to ensure the accuracy of the three-dimensional model.
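That periodic repetition maps naturally onto a timer loop that re-runs the overlay. A sketch with the period left as a parameter, reusing overlayVideoFrame from the sketch above; the default period is an illustrative assumption:

```typescript
// Refresh the model's texture from the live video on a preset period so the
// three-dimensional model keeps tracking the target area.
function startModelRefresh(
  gl: WebGLRenderingContext,
  video: HTMLVideoElement,
  periodMs = 1000 // preset time period; an illustrative default
): number {
  return window.setInterval(() => {
    if (video.readyState >= HTMLMediaElement.HAVE_CURRENT_DATA) {
      overlayVideoFrame(gl, video);
    }
  }, periodMs);
}
```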
In the embodiment of the application, the server obtains the three-dimensional model of the target area and obtains the video data of the target area. Because the video data of the target area can reflect the actual state of the target area in time, the server can extract the video frame picture from the video data as new texture picture data of the three-dimensional model of the target area. And the server determines the coordinate information of M point positions in the three-dimensional model according to the video frame picture. Then, the server superimposes the video frame picture on the three-dimensional model according to the coordinate information of the M point locations, so that the three-dimensional model of the target area can truly present the actual situation of the target area. In the embodiment of the application, the server superimposes the real-time video data of the real area on the three-dimensional model of the target area, so that the surrounding facilities and equipment of the target area can be viewed in a three-dimensional manner in the three-dimensional model of the target area, and the actual situation of the target area can be viewed in real time.
Optionally, in order to enable the three-dimensional model to have a better visual effect, after the three-dimensional model of the target area is obtained, the server may further perform rasterization processing on the three-dimensional model through the GPU.
Optionally, in order to improve the accuracy of the three-dimensional model of the target region, after determining the three-dimensional model of the target region, the server may further use a marker of the target region as a reference object, and adjust the position of the texture mapping data on the geometric model, so that the three-dimensional model is more accurate.
Illustratively, the server takes an intersection road marking as the reference object and adjusts the position of the texture mapping data so that the road markings in the texture mapping data coincide with the intersection road markings in the three-dimensional city model.
In a possible embodiment, in the present application, the server may further determine a three-dimensional model of the target area in the following manner.
Step 201, the server determines a geometric model of the target area according to the three-dimensional model of the target area.
In a possible implementation manner, the server may delete texture map data in the three-dimensional model of the target area to obtain the geometric model of the target area.
The surface of the geometric model is formed by splicing a plurality of geometric figures, and the plurality of points are the vertexes of the geometric figures. For example, as shown in fig. 5, the surface of the geometric model is formed by splicing a plurality of triangles, each triangle includes three vertices, and two adjacent triangles have two vertices in common.
Step 202, the server determines a plurality of point locations of the target area on the geometric model.
It should be noted that the plurality of point positions may be a set of vertices of a plurality of geometric figures of the geometric model.
And step 203, the server superimposes the video frame picture on the geometric model according to the coordinate information of the point positions.
It should be noted that step 202 and step 203 may specifically refer to step 104 and step 105, and are not described herein again.
In the embodiment of the application, the server can directly superimpose the video frame picture of the target area on the three-dimensional model of the target area, so that the three-dimensional model of the target area can be updated in time. The server may also delete the texture map data of the three-dimensional model of the target region to obtain the geometric model of the target region, and then superimpose the latest video frame picture of the target area on the geometric model to obtain the latest three-dimensional model of the target area. In this way, the latest three-dimensional model is not affected by the texture map data from before the update.
In the embodiment of the present application, the determination device for three-dimensional models of areas may divide the functional modules or the functional units according to the above method examples, for example, each functional module or functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module or a functional unit. The division of the modules or units in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
An embodiment of the present application provides a device for determining a three-dimensional model of an area, where the device may be a server or a chip applied to the server, and as shown in fig. 12, the device may include:
a communication unit 121 for acquiring a three-dimensional model of a target region; video data of a target area is acquired.
And the processing unit 122 is configured to extract a video frame picture from the video data.
The processing unit 122 is further configured to determine, according to the video frame picture, coordinate information of M point locations in the three-dimensional model; m is a positive integer.
The processing unit 122 is further configured to superimpose the video frame picture onto the three-dimensional model according to the coordinate information of the M point locations.
Optionally, the processing unit 122 is specifically configured to: selecting N point locations from a three-dimensional model according to a video frame picture, and acquiring position information of the N point locations, wherein the position information of the point locations comprises coordinate information and height information; the pixels of the video frame picture and the N point positions have corresponding relations; n is a positive integer; constructing a space structure according to the position information of the N point locations; and determining an intersection between the space structure body and the three-dimensional model, wherein the intersection comprises M point positions and coordinate information of the M point positions.
Optionally, the processing unit 122 is specifically configured to: determining texture coordinates of the M point locations according to the coordinate information of the M point locations; determining texture coordinates corresponding to pixels of the video frame picture; and superposing the video frame picture to the three-dimensional model according to the texture coordinate corresponding to the pixel of the video frame picture.
Optionally, the processing unit 122 is further configured to: and adjusting the position of the video frame picture on the three-dimensional model by taking the marker of the target area as a reference object.
Fig. 13 shows a schematic view of a further possible configuration of the device for determining a three-dimensional model of a region involved in the above-described embodiment. When the determination device is a server, the determination device includes: one or more processors 131 and a communication interface 132. The processor 131 is used to control and manage the actions of the device, for example, to perform the steps performed by the processing unit 122 described above, and/or to perform other processes for the techniques described herein.
In particular implementations, processor 131 may include one or more CPUs, for example, CPU0 and CPU1 in fig. 13.
In particular implementations, for one embodiment, a communication device may include multiple processors, such as processor 131 in fig. 13. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
Optionally, the apparatus may further comprise a memory 133 and a communication line 134, the memory 133 being used for storing program codes and data of the apparatus.
Fig. 14 is a schematic structural diagram of a chip 140 according to an embodiment of the present disclosure. Chip 140 includes one or more (including two) processors 1410 and a communication interface 1430.
Optionally, the chip 140 further includes a memory 1440, and the memory 1440 may include a read-only memory and a random access memory, and provides operating instructions and data to the processor 1410. A portion of the memory 1440 may also include non-volatile random access memory (NVRAM).
In some embodiments, memory 1440 stores elements, execution modules, or data structures, or a subset thereof, or an expanded set thereof.
In the embodiment of the present application, the operation instruction stored in the memory 1440 (which may be stored in an operating system) is called to perform the corresponding operation.
The processor 1410 may implement or execute various exemplary logical blocks, units and circuits described in connection with the disclosure herein. The processor may be a central processing unit, general purpose processor, digital signal processor, application specific integrated circuit, field programmable gate array or other programmable logic device, transistor logic device, hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, units, and circuits described in connection with the disclosure. The processor may also be a combination of computing functions, e.g., comprising one or more microprocessors, DSPs, and microprocessors, among others.
Memory 1440 may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid state disk; the memory may also comprise a combination of memories of the kind described above.
The bus 1420 may be an Extended Industry Standard Architecture (EISA) bus or the like. The bus 1420 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one line is shown in FIG. 14, but it is not intended that there be only one bus or one type of bus.
It is clear to those skilled in the art from the foregoing description of the embodiments that, for convenience and simplicity of description, the foregoing division of the functional units is merely used as an example, and in practical applications, the above function distribution may be performed by different functional units according to needs, that is, the internal structure of the device may be divided into different functional units to perform all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
The embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed by a computer, the computer executes each step in the method flow shown in the above method embodiment.
The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), registers, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, any other suitable form of computer-readable storage medium, or any suitable combination of the foregoing. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). In embodiments of the present application, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Embodiments of the present invention provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of determining a three-dimensional model of a region described with reference to fig. 2 and fig. 9.
Since the determining apparatus, the computer-readable storage medium, and the computer program product of the three-dimensional model of the region in the embodiments of the present invention may be applied to the method described above, the technical effects obtained by the determining apparatus, the computer-readable storage medium, and the computer program product may also refer to the method embodiments described above, and the details of the embodiments of the present invention are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of determining a three-dimensional model of a region, comprising:
acquiring a three-dimensional model of a target area;
acquiring video data of the target area, and extracting a video frame picture from the video data;
determining coordinate information of M point locations in the three-dimensional model according to the video frame picture; m is a positive integer;
and superposing the video frame picture to the three-dimensional model according to the coordinate information of the M point locations.
2. The method according to claim 1, wherein the determining, according to the video frame picture, coordinate information of M point locations in the three-dimensional model includes:
selecting N point locations from the three-dimensional model according to the video frame picture, and acquiring position information of the N point locations, wherein the position information of the point locations comprises coordinate information and height information; the pixels of the video frame picture and the N point positions have a corresponding relation; n is a positive integer;
constructing a space structure according to the position information of the N point locations;
and determining an intersection between the space structure and the three-dimensional model, wherein the intersection comprises the M point positions and coordinate information of the M point positions.
3. The method according to claim 1 or 2, wherein the superimposing the video frame picture onto the three-dimensional model according to the coordinate information of the M point locations includes:
determining texture coordinates of the M point locations according to the coordinate information of the M point locations;
determining texture coordinates corresponding to pixels of the video frame picture;
and superposing the video frame picture to the three-dimensional model according to the texture coordinate corresponding to the pixel of the video frame picture.
4. The method of claim 3, further comprising:
and adjusting the position of the video frame picture on the three-dimensional model by taking the marker of the target area as a reference object.
5. An apparatus for determining a three-dimensional model of a region, comprising:
a communication unit for acquiring a three-dimensional model of a target region;
the communication unit is further configured to acquire video data of the target area and extract a video frame picture from the video data;
the processing unit is used for determining the coordinate information of M point positions in the three-dimensional model according to the video frame picture; m is a positive integer;
and the processing unit is further configured to superimpose the video frame picture onto the three-dimensional model according to the coordinate information of the M point locations.
6. The determination apparatus according to claim 5, wherein the processing unit is specifically configured to:
selecting N point locations from the three-dimensional model according to the video frame picture, and acquiring position information of the N point locations, wherein the position information of the point locations comprises coordinate information and height information; the pixels of the video frame picture and the N point positions have a corresponding relation; n is a positive integer;
constructing a space structure according to the position information of the N point locations;
and determining an intersection between the space structure and the three-dimensional model, wherein the intersection comprises the M point positions and coordinate information of the M point positions.
7. The determination apparatus according to claim 5 or 6, wherein the processing unit is specifically configured to:
determining texture coordinates of the M point locations according to the coordinate information of the M point locations;
determining texture coordinates corresponding to pixels of the video frame picture;
and superposing the video frame picture to the three-dimensional model according to the texture coordinate corresponding to the pixel of the video frame picture.
8. The apparatus according to claim 7, wherein the processing unit is further configured to:
and adjusting the position of the video frame picture on the three-dimensional model by taking the marker of the target area as a reference object.
9. A readable storage medium having stored therein instructions which, when executed, implement the method of any one of claims 1 to 4.
10. A chip comprising at least one processor and a communication interface, the communication interface being coupled to the at least one processor, the at least one processor being configured to execute a computer program or instructions to implement the method of any one of claims 1 to 4.
CN201911194400.XA 2019-11-28 2019-11-28 Method and device for determining three-dimensional model of region Pending CN110910504A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911194400.XA CN110910504A (en) 2019-11-28 2019-11-28 Method and device for determining three-dimensional model of region

Publications (1)

Publication Number Publication Date
CN110910504A true CN110910504A (en) 2020-03-24

Family ID: 69820364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911194400.XA Pending CN110910504A (en) 2019-11-28 2019-11-28 Method and device for determining three-dimensional model of region

Country Status (1)

Country Link
CN (1) CN110910504A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106131535A (en) * 2016-07-29 2016-11-16 传线网络科技(上海)有限公司 Video capture method and device, video generation method and device
CN106373148A (en) * 2016-08-31 2017-02-01 中国科学院遥感与数字地球研究所 Equipment and method for realizing registration and fusion of multipath video images to three-dimensional digital earth system
US20180268516A1 (en) * 2017-03-20 2018-09-20 Qualcomm Incorporated Adaptive perturbed cube map projection
CN109920048A (en) * 2019-02-15 2019-06-21 北京清瞳时代科技有限公司 Monitored picture generation method and device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626971A (en) * 2020-05-26 2020-09-04 南阳师范学院 Smart city CIM real-time imaging method with image semantic perception
CN111626971B (en) * 2020-05-26 2021-09-07 南阳师范学院 Smart city CIM real-time imaging method with image semantic perception
CN113034688A (en) * 2021-04-25 2021-06-25 中国电子系统技术有限公司 Three-dimensional map model generation method and device
CN113034688B (en) * 2021-04-25 2024-01-30 中国电子系统技术有限公司 Three-dimensional map model generation method and device
CN113487747A (en) * 2021-06-25 2021-10-08 山东齐鲁数通科技有限公司 Model processing method, device, terminal and storage medium
CN113487747B (en) * 2021-06-25 2024-03-29 山东齐鲁数通科技有限公司 Model processing method, device, terminal and storage medium
CN114218638A (en) * 2021-12-14 2022-03-22 深圳须弥云图空间科技有限公司 Panorama generation method and device, storage medium and electronic equipment
CN116048531A (en) * 2023-03-30 2023-05-02 南京砺算科技有限公司 Instruction compiling method, graphic processing unit, storage medium and terminal equipment
CN116048531B (en) * 2023-03-30 2023-08-08 南京砺算科技有限公司 Instruction compiling method, graphic processing device, storage medium and terminal equipment

Similar Documents

Publication Publication Date Title
CN110910504A (en) Method and device for determining three-dimensional model of region
CN112884875B (en) Image rendering method, device, computer equipment and storage medium
US9183666B2 (en) System and method for overlaying two-dimensional map data on a three-dimensional scene
US8274506B1 (en) System and methods for creating a three-dimensional view of a two-dimensional map
US9519999B1 (en) Methods and systems for providing a preloader animation for image viewers
Lerones et al. A practical approach to making accurate 3D layouts of interesting cultural heritage sites through digital models
US20130257862A1 (en) System, apparatus, and method of modifying 2.5d gis data for a 2d gis system
WO2014143689A1 (en) Overlaying two-dimensional map elements over terrain geometry
CN107909541B (en) Map conversion method and device
US20150332481A1 (en) Indexed uniform styles for stroke rendering
CN110503718B (en) Three-dimensional engineering model lightweight display method
CN109741431B (en) Two-dimensional and three-dimensional integrated electronic map frame
CN112053440A (en) Method for determining individualized model and communication device
WO2023231793A9 (en) Method for virtualizing physical scene, and electronic device, computer-readable storage medium and computer program product
CN112330785A (en) Image-based urban road and underground pipe gallery panoramic image acquisition method and system
Virtanen et al. Browser based 3D for the built environment
CN111324658A (en) Ocean temperature visual analysis method, intelligent terminal and storage medium
US20230065027A1 (en) Gpu-based digital map tile generation method and system
CN112825198B (en) Mobile tag display method, device, terminal equipment and readable storage medium
KR102275621B1 (en) Apparatus and method for integrating
CN115187709A (en) Geographic model processing method and device, electronic equipment and readable storage medium
Habib et al. Integration of lidar and airborne imagery for realistic visualization of 3d urban environments
KR20120066709A (en) System and method for web based 3d visualisation of geo-referenced data
Hairuddin et al. Development of a 3d cadastre augmented reality and visualization in malaysia
CN111599011A (en) WebGL technology-based rapid construction method and system for power system scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200324