CN111369668A - Method for automatically drawing 3D model - Google Patents
- Publication number
- CN111369668A (application number CN202010152685.7A)
- Authority
- CN
- China
- Prior art keywords
- model
- internet
- information
- things equipment
- height
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general)
- G06T15/00: 3D [Three-dimensional] image rendering (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general)
Abstract
The embodiment of the invention provides a method for automatically drawing a 3D model, comprising the following steps: determining a target area and the Internet of things devices located in the target area; acquiring feature information of those devices, including the longitude and latitude at which each device is located, its height, and the video and picture information it captures; and drawing the 3D model according to the feature information. Because the model is drawn from the feature information of the Internet of things devices in the target area, it has multiple levels of detail: rendering resources are allocated according to the position and importance of each object-model node in the display environment, and the face count and detail of unimportant objects are reduced, so that efficient rendering is obtained and a general-purpose 3D model is effectively drawn.
Description
Technical Field
The invention relates to the technical field of geographic information systems, in particular to a method for automatically drawing a 3D model.
Background
3D models offer elegance and accuracy, but compared with alternatives such as 2D maps they are more expensive to render and harder to navigate, which is why 2D maps dominate commercial presentations such as driving maps and hotel reservations.
However, effectively representing data from real-environment sources such as cameras and sensors requires extra precision and a richer presentation, which only 3D models provide.
For most cities in the world, 2D model information (the basis of driving-map and hotel-booking displays) is available, but 3D models are typically either unavailable or not in the public domain. At present, no general-purpose 3D model can be generated or produced that meets users' normal needs.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a method for automatically drawing a 3D model, so as to generate or produce a general-purpose 3D model that meets users' normal usage needs. The specific technical scheme is as follows:
the embodiment of the invention provides a method for automatically drawing a 3D model, which is characterized by comprising the following steps:
determining a target area and Internet of things equipment located in the target area;
acquiring feature information of the Internet of things equipment, wherein the feature information comprises longitude and latitude where the Internet of things equipment is located, height of the Internet of things equipment, video information acquired by the Internet of things equipment and picture information acquired by the Internet of things equipment;
and drawing a 3D model according to the feature information.
Optionally, the target area is determined to be any one of a campus area, a city area and a country area.
Optionally, according to the feature information, the drawing a 3D model includes:
determining the boundary of the target area according to the longitude and latitude information;
drawing the height of the three-dimensional building according to the height information of the Internet of things equipment located at the same longitude and latitude;
drawing the outline of the three-dimensional building according to the picture information around the three-dimensional building;
generating a three-dimensional building with height and outline;
drawing outdoor parks, streets and tunnels according to the picture information and the longitude and latitude information;
the 3D model is composed of the three-dimensional building, the outdoor park, the street and the tunnel together.
Optionally, according to the feature information, the drawing a 3D model includes:
determining the boundary of the target area according to the longitude and latitude information;
drawing the height of the three-dimensional building according to the height information of the Internet of things equipment located at the same longitude and latitude;
drawing the outline of the three-dimensional building according to the boundary of the existing 2D map;
generating a three-dimensional building with height and outline;
drawing outdoor parks, streets and tunnels according to the picture information and the longitude and latitude information;
the 3D model is composed of the three-dimensional building, the outdoor park, the street and the tunnel together.
Optionally, each Internet of things device is located at a height of 2 m.
Optionally, when the picture information of any one of the outdoor park, the street and the tunnel contains no height information, the height information is set to 4 m by default.
Optionally, the drawing the height of the three-dimensional building according to the height information of the internet of things devices located in the same longitude and latitude includes:
when an irregular multi-story building exists, the system automatically derives a 3D model, and the automatically derived 3D model is displayed at a height of 10 m.
Optionally, after the 3D model is drawn according to the feature information, the method further includes:
generating a 2D map according to the bottom interface of the 3D model;
making and generating a user interface, wherein the user interface comprises a left window, a middle window and a right window; the left window displays the 2D map, the right window displays the 3D model, and the middle window manages query operation and displays query results.
Optionally, the query operations include: querying the access path of a target person, querying the travel path of a target vehicle, querying video information of the target person at the Internet of things devices with a video-shooting function, and planning an equipment maintenance path.
Optionally, querying the access path of the target person includes:
determining a target person; the target person is a person needing to inquire an access path in the target area;
acquiring image information of a target person shot by Internet of things equipment with a shooting function; wherein, the image information is a front image or a back image;
extracting features to be detected from the image information according to a preset method, wherein the features to be detected comprise at least one of human body features, facial features and gait features;
acquiring picture information or video information of target people shot by all the Internet of things equipment according to the characteristics to be detected;
acquiring the shooting time and the shooting place of the picture information or the video information;
connecting the shooting places according to the sequence of the shooting time to obtain an access path of the target person;
and displaying the access path of the target person in the middle window.
Optionally, after the middle window displays the access path of the target person, the method includes:
clicking the icon of the current Internet of things device in the 3D model,
and displaying, in the middle window, the video clip of the target person passing the current Internet of things device.
In summary, the method provided by the embodiment of the invention draws the 3D model from the feature information of the Internet of things devices in the target area. The model has multiple levels of detail: rendering resources are allocated according to the position and importance of each object-model node in the display environment, and the face count and detail of unimportant objects are reduced, so that efficient rendering is obtained and a general-purpose 3D model is effectively drawn.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flowchart of a method for automatically rendering a 3D model according to an embodiment of the present invention.
Fig. 2 is a 3D model diagram according to an embodiment of the present invention.
Fig. 3 is a cabinet model storage list provided in the embodiment of the present invention.
Fig. 4 is a user interface diagram according to an embodiment of the present invention.
FIG. 5 is the gallery expander in a user interface diagram provided by an embodiment of the present invention.
Fig. 6 is an access path displayed in a user interface diagram according to an embodiment of the present invention.
Fig. 7 is a schematic view of playing a video clip in a user interface diagram according to an embodiment of the present invention.
Fig. 8 is a clock tool in a user interface diagram provided by an embodiment of the present invention.
Fig. 9 is a detailed description of a cloud data center server as one of the building GUI models according to the embodiment of the present invention.
FIG. 10 is a graphical user interface provided by embodiments of the present invention to present consistent interactions between various use cases.
Fig. 11 is a schematic navigation diagram of a large event history record according to an embodiment of the present invention.
Fig. 12 is a schematic diagram illustrating two modes of statistical data according to an embodiment of the present invention.
Fig. 13 is a schematic diagram illustrating a method for representing a visual maintenance path according to an embodiment of the present invention.
FIG. 14 is a schematic diagram of creating a 3D model from polygonal shapes in a 2D map according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
In order to generate or produce a general-purpose 3D model that meets users' normal needs, an embodiment of the present invention provides a method for automatically drawing a 3D model.
Referring to fig. 1 and fig. 2, an embodiment of the present invention provides a method for automatically rendering a 3D model, including:
s110, determining a target area and Internet of things equipment located in the target area;
s120, acquiring feature information of the Internet of things equipment, wherein the feature information comprises longitude and latitude of the Internet of things equipment, height of the Internet of things equipment, video information acquired by the Internet of things equipment and picture information acquired by the Internet of things equipment;
and S130, drawing a 3D model according to the feature information.
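As an illustrative sketch only, steps S110 to S120 might be organized as below; the data structure and function names are hypothetical and not part of the patent:

```python
from dataclasses import dataclass, field

@dataclass
class IoTDevice:
    """Feature information of one device (field names are illustrative)."""
    lat: float            # latitude at which the device is located
    lng: float            # longitude at which the device is located
    height_m: float       # mounting height of the device
    videos: list = field(default_factory=list)    # video information it captured
    pictures: list = field(default_factory=list)  # picture information it captured

def devices_in_target_area(devices, lat_min, lat_max, lng_min, lng_max):
    """S110: keep only the devices whose coordinates fall inside the target area."""
    return [d for d in devices
            if lat_min <= d.lat <= lat_max and lng_min <= d.lng <= lng_max]
```

The filtered list then carries all the feature information S130 needs for drawing.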
Specifically, by drawing the 3D model from the feature information of the Internet of things devices in the target area, the method supports multiple levels of detail: rendering resources are allocated according to the position and importance of each object-model node in the display environment, and the face count and detail of unimportant objects are reduced, so that efficient rendering is obtained and a general-purpose 3D model is effectively drawn.
Further, the target area is determined to be any one of a campus area, a city area and a country area.
Further, according to the feature information, rendering the 3D model includes:
determining the boundary of the target area according to the longitude and latitude information;
drawing the height of the three-dimensional building according to the height information of the Internet of things equipment located at the same longitude and latitude;
drawing the outline of the three-dimensional building according to the picture information around the three-dimensional building;
generating a three-dimensional building with height and outline;
drawing outdoor parks, streets and tunnels according to the picture information and the longitude and latitude information;
the 3D model is composed of the three-dimensional building, the outdoor park, the street and the tunnel together.
Further, according to the feature information, rendering the 3D model includes:
determining the boundary of the target area according to the longitude and latitude information;
drawing the height of the three-dimensional building according to the height information of the Internet of things equipment located at the same longitude and latitude;
drawing the outline of the three-dimensional building according to the boundary of the existing 2D map;
generating a three-dimensional building with height and outline;
drawing outdoor parks, streets and tunnels according to the picture information and the longitude and latitude information;
the 3D model is composed of the three-dimensional building, the outdoor park, the street and the tunnel together.
Further, each Internet of things device is located at a height of 2 m.
Further, when the picture information of any one of the outdoor park, the street and the tunnel contains no height information, the height information is set to 4 m by default.
Specifically, referring to fig. 2, in the method for automatically drawing a 3D model according to the embodiment of the present invention, the automatically generated three-dimensional model falls between LOD 1.5 and LOD 2 (estimated or accurate), depending on whether two-dimensional boundary markers exist for the model region. Objects are classified into four types: outdoor parks (3.3), streets (3.4), buildings (3.2) and tunnels. A building or tunnel may have multiple floors and is represented by a cube; parks and streets are represented by ground planes. Zone boundaries, whether cubes or planes, are generated from the minimum and maximum values of latitude and longitude and, where applicable, the minimum and maximum floors. Without explicit dimensions, each floor is shown as 4 meters high. Each sensor element (3.1) is centered on its longitude and latitude, 2 meters above the floor level that contains it.
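The boundary and elevation rules above (zone bounds from the min/max latitude and longitude, a 4-meter default floor height, and sensors centered 2 meters above their floor) can be sketched as follows; all names are illustrative, not from the patent:

```python
FLOOR_HEIGHT_M = 4.0    # default floor height when no explicit dimensions exist
SENSOR_OFFSET_M = 2.0   # each sensor sits 2 m above its floor level

def zone_bounds(devices):
    """Zone boundary from the min/max latitude and longitude of its devices."""
    lats = [d["lat"] for d in devices]
    lngs = [d["lng"] for d in devices]
    return (min(lats), min(lngs)), (max(lats), max(lngs))

def cube_height(min_floor, max_floor):
    """Height of a building/tunnel cube spanning the given floor range."""
    return (max_floor - min_floor + 1) * FLOOR_HEIGHT_M

def sensor_elevation(floor):
    """Center elevation of a sensor on a given floor (floor 1 -> 2 m)."""
    return (floor - 1) * FLOOR_HEIGHT_M + SENSOR_OFFSET_M
```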
A limitation of automatically generated geometry is that the rendered geometry is greater than or equal to the actual physical boundary. This can be improved if area-marker sources, such as a 2D map database, are available. Note that the elements are drawn, not labeled. The model can be thought of as the basic concept of a thematic 3D map: it can be rendered and animated according to query results, and the drawing method can effectively create a 3D model even for blank areas.
It should also be noted that the presentation of such a graphical interface is highly compatible with city planning, allowing 3D models to be created without 2D or 3D map data at all, and that the graphical interface will also automatically generate 2D map models from the underlying object geometry. In other words, given a space in an existing map representing future developments, the method provided by embodiments of the present invention will fill the space.
It should be noted that, in the embodiment of the present invention, a cabinet model is introduced to store information, for example, referring to fig. 3, a graphical interface topology may be derived from an underlying cabinet structure in the data cloud CDN. Line 13 defines the camera, line 9 defines the sector containing the map, and line 8 defines the area containing the map.
For the camera information (line 13), column K defines the camera as park (green) class, column L defines it as GPU face class, column G indicates that the camera is on level 1 (so its center elevation is 2 meters), and columns I and J define its latitude and longitude.
The green class indicates that a ground plane must be generated. Columns C, D and E define the area, sector and region for the camera. The boundary, in longitude and latitude, of all cameras sharing the same area is the ground plane of the three-dimensional model. The boundaries of a sector are the maximum and minimum values of its areas, and the boundary of a region is the maximum and minimum of all sectors in the region.
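The area ground-plane rule (the longitude/latitude boundary of all cameras sharing the same area) might be computed as in this sketch; the record layout is assumed for illustration, not taken from the cabinet model itself:

```python
from collections import defaultdict

def area_ground_planes(cameras):
    """Ground plane per area: the lat/lng bounding rectangle of all cameras
    that share the same (region, sector, area) triple."""
    by_area = defaultdict(list)
    for cam in cameras:
        by_area[(cam["region"], cam["sector"], cam["area"])].append(cam)
    planes = {}
    for key, cams in by_area.items():
        lats = [c["lat"] for c in cams]
        lngs = [c["lng"] for c in cams]
        # (min_lat, min_lng, max_lat, max_lng) delimits the ground plane
        planes[key] = (min(lats), min(lngs), max(lats), max(lngs))
    return planes
```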
With the cabinet model, the longitude and latitude, height, image, video and other information of each Internet of things device can be stored effectively and retrieved in real time; in the method provided by the embodiment of the invention, the feature information can therefore be obtained quickly and accurately from the cabinet model to complete the drawing of the 3D model. The following table is the information list that one Internet of things device stores in the data cloud:
| Device information | Meaning | Device information | Meaning |
| --- | --- | --- | --- |
| Opt=cam | The device is a camera | Zone=1 | Zone code |
| Reg=30 | Region code | Floor=1 | First floor |
| Sect=1 | Sector code | Cam=1 | Camera code |
| Area=1 | Area code | Lat=34.2540882 | Latitude |
| C1=g | Park (green) class | Lng=108.9423662 | Longitude |
| C2=face | GPU face class | | |
As the table shows, this Internet of things device is defined as park (green) class and GPU face class, is located on the 1st floor, and has a center elevation of 2 m. The green class indicates that a ground plane must be generated. The boundary, in longitude and latitude, of all cameras sharing the same area is the ground plane of the three-dimensional model.
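A record like the one in the table above could be parsed as in this sketch, which assumes the "key=value" layout shown; the function and field names are illustrative, not an interface defined by the patent:

```python
def parse_device_record(record):
    """Parse one device's 'key=value' information string into a dict."""
    fields = dict(pair.split("=", 1) for pair in record.split())
    return {
        "is_camera": fields.get("Opt") == "cam",          # Opt=cam
        "floor": int(fields["Floor"]),                    # Floor=1 -> first floor
        "lat": float(fields["Lat"]),                      # latitude
        "lng": float(fields["Lng"]),                      # longitude
        "classes": [fields[k] for k in ("C1", "C2") if k in fields],
    }
```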
Further, according to the height information of the internet of things devices located in the same longitude and latitude, drawing the height of the three-dimensional building comprises the following steps:
when an irregular multi-story building exists, the system automatically derives a 3D model, and the automatically derived 3D model is displayed at a height of 10 m.
Further, referring to fig. 4, after the 3D model is drawn according to the feature information, the method further includes:
generating a 2D map according to the bottom interface of the 3D model;
making and generating a user interface, wherein the user interface comprises a left window, a middle window and a right window; the left window displays the 2D map, the right window displays the 3D model, and the middle window manages query operation and displays query results.
Specifically, in the embodiment of the invention, the 3D model can be drawn using the Internet of things devices in the target area, and after the 3D model is drawn, the system automatically generates the 2D map model from the underlying object geometry. In the user interface, above the three panels (3.3, 3.4, 3.5) there is a three-part top bar (3.1, 3.2); the left panel displays a 2D map, the middle panel displays query information such as the thumbnail library, and the right panel displays the 3D model. Primary navigation is 2D, and the map can be panned. Clicking a map sector or area icon positions the 3D model correctly. The rendering of the two-dimensional map and the three-dimensional model is driven by the query results, which the middle panel manages.
Referring to fig. 7, fig. 7 shows a video clip displayed by clicking the camera in the 3D model.
In an embodiment of the present invention, referring to fig. 9, a 3D graphical user interface is provided; the figure describes in detail the cloud data center server as one of the buildings of the GUI model. Disks are represented above ground, CPUs and GPUs below ground, and the network above ground. Servers (8.1) in the data cloud topology are associated with the region but not with a sector, so the servers within a region appear as a single model. Disk icons (8.4) and CPU/GPU icons may be colored (8.5) to indicate fault status and performance. Network traffic between servers and from clients appears as animated pipes, similar to a building's HVAC presentation.
Referring to fig. 5, the embodiment of the present invention further adds a gallery expander, which provides a thumbnail-based navigation mechanism. The top toolbar icon (4.1) toggles this view, expanding the thumbnail library to cover the 3D model portion, so that the user can navigate through the thumbnails rather than the default map navigation. Clicking a thumbnail jumps directly to the access record for that face.
Referring to fig. 8, the embodiment of the present invention adds a clock tool that allows the time scale to be changed from the default value "current" (within 10 minutes) to a defined interval. This can be used to backtrack through historical traffic data and through camera and Internet of things device data in a 3D smart city.
Further, the query operations include: querying the access path of a target person, querying the travel path of a target vehicle, querying video information of the target person at the Internet of things devices with a video-shooting function, and planning an equipment maintenance path.
Specifically, referring to FIG. 6, after querying the thumbnail library (5.1) for the most recent access, clicking on the thumbnail will display the access path on the 2D map (5.2) and the 3D model (5.3).
Hovering the mouse over a place in the 2D map area displays the access time, as does hovering over a camera icon in the 3D map. Clicking a camera in the 3D model displays a video clip; the file path of the clip is uploaded as part of the most recent query result in the matching thumbnail library.
Further, querying the access path of the target person comprises:
determining a target person; the target person is a person needing to inquire an access path in the target area;
acquiring image information of a target person shot by Internet of things equipment with a shooting function; wherein, the image information is a front image or a back image;
extracting features to be detected from the image information according to a preset method, wherein the features to be detected comprise at least one of human body features, facial features and gait features;
acquiring picture information or video information of target people shot by all the Internet of things equipment according to the characteristics to be detected;
acquiring the shooting time and the shooting place of the picture information or the video information;
connecting the shooting places according to the sequence of the shooting time to obtain an access path of the target person;
and displaying the access path of the target person in the middle window.
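The path-building steps above (obtain the shooting time and place of each capture, sort by shooting time, then connect the shooting places) reduce to a short sketch; the input format is assumed for illustration:

```python
def access_path(sightings):
    """Connect capture places in order of capture time.

    `sightings` is a list of (shooting_time, shooting_place) pairs; the
    returned ordered list of places is the target person's access path.
    """
    return [place for _, place in sorted(sightings, key=lambda s: s[0])]
```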
Further, after the middle window displays the access path of the target person, the method includes:
clicking the icon of the current Internet of things device in the 3D model,
and displaying, in the middle window, the video clip of the target person passing the current Internet of things device.
Referring to fig. 10, the 3D model drawn by the method provided by the embodiment of the present invention supports the following functions: face search (9.1, stage 1), camera state (9.2, stage 1), cloud state (9.3, stage 1), exhibition-hall maintenance (9.4, stage 1), gait detection (9.5, stage 3), clothing detection (9.6, stage 2) and crowd statistics (9.7, stage 1).
Referring to FIG. 11, with the heat-map navigation function, a large event history can be navigated easily. A series of events is presented as clickable heat-map charts (10.1). Moving the mouse over a small icon (10.2) displays a detailed illustration (10.3).
Referring to fig. 12, in the embodiment of the present invention, statistical data can be represented in two modes: by position and by order. Ordinal heat icons (11.1) are drawn on the 3D model, and the central heat map (11.2) is organized by number. These presentations are driven by AI/BI, so in combination they can represent any number of factors.
Referring to fig. 13, in an embodiment of the present invention, the visual maintenance path can be represented both as a to-do list and as a visual path. The central heat map (12.1) is a to-do list, organized for example by person and time. The set of operations to be performed is represented as a path (12.2) from one physical component to another. This mechanism applies both to real environments (camera maintenance) and to cloud data centers (disk-drive replacement).
Referring to FIG. 14, with polygon interpolation, polygon shapes from a 2D map can be used to create a 3D model: the two-dimensional map boundary (13.1) is used to create a three-dimensional polygonal model (13.2). This mechanism can present the accurate outline of a building from a small amount of component information, the components being Internet of things devices; for example, a building with only one surveillance camera. Conversely, when a building is an irregular multi-story structure, the model derived from its component objects is more representative. Shell-based representations of object sets are ubiquitous.
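The polygon-interpolation idea, lifting a 2D footprint into a prism at the default derived height of 10 m, can be sketched as follows; the names and the vertex representation are illustrative assumptions:

```python
DERIVED_HEIGHT_M = 10.0  # default display height of an automatically derived model

def extrude_polygon(footprint, height_m=DERIVED_HEIGHT_M):
    """Lift a 2D footprint (list of (lat, lng) vertices) into a 3D prism:
    a base ring at elevation 0 plus a matching top ring at `height_m`."""
    base = [(lat, lng, 0.0) for lat, lng in footprint]
    top = [(lat, lng, height_m) for lat, lng in footprint]
    return base + top
```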
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A method of automatically rendering a 3D model, comprising:
determining a target area and Internet of things equipment located in the target area;
acquiring feature information of the Internet of things equipment, wherein the feature information comprises longitude and latitude where the Internet of things equipment is located, height of the Internet of things equipment, video information acquired by the Internet of things equipment and picture information acquired by the Internet of things equipment;
and drawing a 3D model according to the feature information.
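The feature information enumerated in claim 1 can be pictured as a simple per-device record. This is an illustrative sketch only; the class and field names (`DeviceFeatures`, `video_urls`, and so on) are assumptions, not terminology from the application:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DeviceFeatures:
    """Hypothetical record of the feature information gathered per IoT device."""
    device_id: str          # identifier of the Internet of things device
    latitude: float         # latitude where the device is located, in degrees
    longitude: float        # longitude where the device is located, in degrees
    height_m: float         # height of the device, in metres
    video_urls: List[str] = field(default_factory=list)  # video information acquired by the device
    image_urls: List[str] = field(default_factory=list)  # picture information acquired by the device
```

A collection of such records, one per device in the target area, would then be the input to the drawing step.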
2. The method of automatically rendering a 3D model according to claim 1, wherein a target area is determined, the target area being any one of a campus area, a city area, and a country area.
3. The method of automatically rendering a 3D model according to claim 1, wherein rendering a 3D model according to the feature information comprises:
determining the boundary of the target area according to the longitude and latitude information;
drawing the height of the three-dimensional building according to the height information of the Internet of things equipment located at the same longitude and latitude;
drawing the outline of the three-dimensional building according to the picture information around the three-dimensional building;
generating a three-dimensional building with height and outline;
drawing outdoor parks, streets and tunnels according to the picture information and the longitude and latitude information;
wherein the 3D model is jointly composed of the three-dimensional building, the outdoor park, the street, and the tunnel.
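One plausible reading of the height-drawing step in this claim, sketched in Python: devices sharing the same longitude and latitude are grouped together, and the tallest device height at a position stands in for the drawn height of the building there. The function name and the dict-based device records are assumptions for illustration, not the claimed implementation:

```python
from collections import defaultdict

def building_heights(devices):
    """Group devices by identical (latitude, longitude) and take the highest
    device height as the drawn height of the building at that position."""
    by_pos = defaultdict(list)
    for d in devices:
        by_pos[(d["lat"], d["lon"])].append(d["height_m"])
    # One building height per distinct position.
    return {pos: max(heights) for pos, heights in by_pos.items()}
```

Under this reading, a building hosting devices at 3 m and 9 m would be drawn 9 m tall; a position with a single device takes that device's height directly.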
4. The method of automatically rendering a 3D model according to claim 1, wherein rendering a 3D model according to the feature information comprises:
determining the boundary of the target area according to the longitude and latitude information;
drawing the height of the three-dimensional building according to the height information of the Internet of things equipment located at the same longitude and latitude;
drawing the outline of the three-dimensional building according to the boundary of the existing 2D map;
generating a three-dimensional building with height and outline;
drawing outdoor parks, streets and tunnels according to the picture information and the longitude and latitude information;
wherein the 3D model is jointly composed of the three-dimensional building, the outdoor park, the street, and the tunnel.
5. The method for automatically rendering a 3D model according to claim 3 or 4, wherein each Internet of things device is located at a height of 2 m; and when the picture information of any one of the outdoor park, the street, and the tunnel does not contain height information, the height information is set to 4 m by default.
6. The method for automatically drawing a 3D model according to claim 3 or 4, wherein drawing the height of the three-dimensional building according to the height information of the Internet of things devices located at the same longitude and latitude comprises:
when an irregular multi-story building exists, the system automatically derives a 3D model, wherein the height of the automatically derived 3D model is displayed as 10 m.
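The fallback heights stated in claims 5 and 6 can be collected into one small helper. This is a sketch of the stated rules only; the constant and function names are invented for illustration:

```python
DEVICE_HEIGHT_M = 2.0             # claim 5: each IoT device is located at 2 m
DEFAULT_STRUCTURE_HEIGHT_M = 4.0  # claim 5: fallback when picture info lacks height
AUTO_DERIVED_HEIGHT_M = 10.0      # claim 6: display height of an auto-derived model

def display_height(height_m=None, irregular_multistory=False):
    """Pick the height to draw, applying the claimed defaults."""
    if irregular_multistory:
        # Irregular multi-story buildings get an automatically derived model
        # displayed at a fixed 10 m.
        return AUTO_DERIVED_HEIGHT_M
    if height_m is None:
        # Parks, streets, and tunnels without height information default to 4 m.
        return DEFAULT_STRUCTURE_HEIGHT_M
    return height_m
```

So a street segment with no height information would be drawn at 4 m, while an irregular multi-story building is always displayed at 10 m regardless of any measured height.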
7. The method for automatically rendering a 3D model according to claim 1, wherein rendering the 3D model based on the feature information further comprises:
generating a 2D map according to the bottom interface of the 3D model;
making and generating a user interface, wherein the user interface comprises a left window, a middle window and a right window; the left window displays the 2D map, the right window displays the 3D model, and the middle window manages query operation and displays query results.
8. The method for automatically rendering a 3D model according to claim 7, wherein the query operation comprises: inquiring an access path of a target person, inquiring a travel path of a target vehicle, inquiring video information of the target person at the Internet of things equipment with a video shooting function, and planning an equipment maintenance path.
9. The method of automatically rendering a 3D model of claim 8, wherein querying the access path of the target person comprises:
determining a target person, the target person being a person whose access path within the target area is to be queried;
acquiring image information of a target person shot by Internet of things equipment with a shooting function; wherein, the image information is a front image or a back image;
extracting features to be detected from the image information according to a preset method, wherein the features to be detected comprise at least one of human body features, facial features and gait features;
acquiring picture information or video information of the target person shot by all the Internet of things equipment according to the features to be detected;
acquiring the shooting time and the shooting place of the picture information or the video information;
connecting the shooting places according to the sequence of the shooting time to obtain an access path of the target person;
and displaying the access path of the target person in the middle window.
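The last two querying steps (sorting sightings by shooting time and connecting the shooting places in order) amount to a one-line sort. The function name `access_path` and the `(timestamp, place)` tuple shape are assumptions for illustration:

```python
def access_path(sightings):
    """sightings: iterable of (shooting_time, shooting_place) tuples.
    Returns the shooting places connected in shooting-time order,
    i.e. the access path of the target person."""
    return [place for _, place in sorted(sightings, key=lambda s: s[0])]
```

Timestamps in a uniform sortable format (e.g. ISO 8601 strings or datetime objects) are assumed, so that lexical or chronological order coincide.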
10. The method of automatically rendering a 3D model of claim 9, comprising, after the middle window displays the access path of the target person:
clicking an icon of the current Internet of things device located in the 3D model;
and displaying, in the middle window, a video clip of the target person passing through the current Internet of things device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010152685.7A CN111369668A (en) | 2020-03-06 | 2020-03-06 | Method for automatically drawing 3D model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111369668A true CN111369668A (en) | 2020-07-03 |
Family
ID=71208681
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010152685.7A Pending CN111369668A (en) | 2020-03-06 | 2020-03-06 | Method for automatically drawing 3D model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111369668A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105571588A (en) * | 2016-03-10 | 2016-05-11 | 赛度科技(北京)有限责任公司 | Method for building three-dimensional aerial airway map of unmanned aerial vehicle and displaying airway of three-dimensional aerial airway map |
CN106408640A (en) * | 2016-09-14 | 2017-02-15 | 李娜 | 3D map modeling surface contour fast linkage rendering method |
CN109711249A (en) * | 2018-11-12 | 2019-05-03 | 平安科技(深圳)有限公司 | Personage's motion profile method for drafting, device, computer equipment and storage medium |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112164140A (en) * | 2020-09-18 | 2021-01-01 | 华航环境发展有限公司 | Three-dimensional data model construction method |
CN117078868A (en) * | 2023-10-17 | 2023-11-17 | 北京太极信息系统技术有限公司 | Virtual reality engine based on information creation software and hardware and modeling and rendering method thereof |
CN117078868B (en) * | 2023-10-17 | 2023-12-15 | 北京太极信息系统技术有限公司 | Virtual reality engine based on information creation software and hardware and modeling and rendering method thereof |
Similar Documents
Publication | Title |
---|---|
US11238652B2 (en) | Presenting integrated building information using building models |
US20210183136A1 (en) | Method for Representing Virtual Information in a Real Environment |
US8706718B2 (en) | Searching a database that stores information about individual habitable units |
CN100423007C (en) | Modeling approach used for trans-media digital city scenic area |
US20130222373A1 (en) | Computer program, system, method and device for displaying and searching units in a multi-level structure |
KR20130139302A (en) | Creating and linking 3D spatial objects with dynamic data, and visualizing said objects in geographic information systems |
KR20140123019A (en) | Visual representation of map navigation history |
JP2017505923A (en) | System and method for geolocation of images |
JP2014527667A (en) | Generation and rendering based on map feature saliency |
US20140309925A1 (en) | Visual positioning system |
US20140372031A1 (en) | Systems, methods and computer-readable media for generating digital wayfinding maps |
US10459598B2 (en) | Systems and methods for manipulating a 3D model |
CN111369668A (en) | Method for automatically drawing 3D model |
KR102497681B1 (en) | Digital map based virtual reality and metaverse online platform |
US10489965B1 (en) | Systems and methods for positioning a virtual camera |
Jian et al. | Augmented virtual environment: fusion of real-time video and 3D models in the digital earth system |
CN116610236B (en) | Online cloud exhibition hall management system based on digital visualization technology |
WO2014170758A2 (en) | Visual positioning system |
US20220058862A1 (en) | Three dimensional structural placement representation system |
Zhou et al. | Customizing visualization in three-dimensional urban GIS via web-based interaction |
Asharsinyo et al. | Degree Level of Publicness Through Meaning of Public Sphere In Bandung City, West Java, Indonesia |
US9230366B1 | Identification of dynamic objects based on depth data |
Glander et al. | Cell-based generalization of 3D building groups with outlier management |
Blettery et al. | A spatio-temporal web-application for the understanding of the formation of the Parisian metropolis |
US10108882B1 | Method to post and access information onto a map through pictures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200703 |