CN114066999B - Target positioning system and method based on three-dimensional modeling - Google Patents
Target positioning system and method based on three-dimensional modeling
- Publication number: CN114066999B (application CN202111440449.6A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/2462 — Approximate or statistical queries
- G06F16/248 — Presentation of query results
- G06F16/29 — Geographical information databases
- G06T17/05 — Geographic models
- G06T17/20 — Finite element generation, e.g. wire-frame surface description, tessellation
Description
Technical Field

The present invention relates to the technical field of emergency rescue, and more specifically to a target positioning system and method based on three-dimensional modeling.

Background Art

In recent years, natural disasters have occurred frequently across China, posing a considerable challenge to the timeliness and efficiency of emergency rescue. After a natural disaster occurs, quickly obtaining the terrain of the disaster area and the accurate locations of the victims is key to making emergency rescue decisions. Most current mainstream target positioning methods were not developed for emergency rescue scenarios: positioning takes a long time, the data is highly redundant, and the positioning data cannot be displayed intuitively once acquired, making these methods unsuitable for emergency rescue. In view of this, it is necessary to study and improve computer-implemented methods for rapidly locating and displaying victims in emergency rescue scenarios.
Summary of the Invention

One purpose of the present invention is to address the above shortcomings by providing a target positioning system and method based on three-dimensional modeling, in the hope of solving the technical problems of comparable prior-art positioning methods: long positioning time, large data redundancy, inability to display positioning data intuitively, and unsuitability for use in emergency rescue scenarios.

To solve the above technical problems, the present invention adopts the following technical solutions:

A first aspect of the present invention provides a target positioning system based on three-dimensional modeling, the system comprising: an image recognition module, configured to identify the target on each image in the original image data through an artificial intelligence algorithm; and a target analysis module, configured to analyze and determine, for each image, whether the positions of the respective targets on the ground are the same, and if they are the same, to consider the current two or more targets to be the same target. Determining whether the targets' positions on the ground are the same means comparing whether the distance between the targets' ground positions exceeds a specified threshold; if it does not, the ground positions are considered the same, otherwise the current two or more targets are considered not to be the same target. The target analysis module is further configured to deduplicate the targets across the images, re-count them, and then position the re-counted targets for display on a three-dimensional ground model; the three-dimensional ground model is generated from the original image data; deduplication means counting two or more targets judged to be the same target as one target.
As a preferred further technical solution, the position of the target on the ground is obtained as follows: let the target on each image be point P; the direction v of the ray passing through the camera center and point P is obtained by:

v = R′K⁻¹P (1)

In formula (1), R is the image attitude rotation matrix and K is the camera intrinsic parameter matrix. The parametric equation of any point on the ray starting from the camera center and passing through point P is then:

P = c + μv (2)

In formula (2), c is the vector of the image position and μ is the distance from a point on the ray to the camera center. Through a ray-triangle intersection algorithm, the point I where the ray intersects a triangle of the three-dimensional ground model is determined, thereby obtaining the coordinates of the current target's position on the ground.
A further technical solution: the target is a person in each image.

A further technical solution: the original image data is a plurality of images captured by an unmanned aerial vehicle and then compressed.

A further technical solution: the system is used in emergency management to locate people in disaster areas and to support rescue decisions.
A second aspect of the present invention provides a target positioning method based on three-dimensional modeling, the method comprising the following steps:

identifying the target on each image in the original image data through an artificial intelligence algorithm;

analyzing and determining, for each image, whether the positions of the respective targets on the ground are the same, and if they are the same, considering the current two or more targets to be the same target; determining whether the targets' ground positions are the same means comparing whether the distance between the targets' ground positions exceeds a specified threshold: if it does not, the ground positions are considered the same, otherwise the current two or more targets are considered not to be the same target;

deduplicating the targets across the images, re-counting them, and positioning the re-counted targets for display on a three-dimensional ground model; the three-dimensional ground model is generated from the original image data; deduplication means counting two or more targets judged to be the same target as one target.
As a preferred further technical solution, the position of the target on the ground is obtained as follows: let the target on each image be point P; the direction v of the ray passing through the camera center and point P is obtained by:

v = R′K⁻¹P (1)

In formula (1), R is the image attitude rotation matrix and K is the camera intrinsic parameter matrix. The parametric equation of any point on the ray starting from the camera center and passing through point P is then:

P = c + μv (2)

In formula (2), c is the vector of the image position and μ is the distance from a point on the ray to the camera center.

Through a ray-triangle intersection algorithm, the point I where the ray intersects a triangle of the three-dimensional ground model is determined, thereby obtaining the coordinates of the current target's position on the ground.
A further technical solution: the target is a person in each image; the original image data is a plurality of images captured by an unmanned aerial vehicle and then compressed.

A further technical solution: the system is used in emergency management to locate people in disaster areas and to support rescue decisions.

A third aspect of the present invention provides a computer-readable storage medium storing instructions which, when executed by a computer, cause the computer to perform the above method.

Compared with the prior art, one beneficial effect of the present invention is as follows: targets are identified directly from the original image data by an artificial intelligence algorithm, their positions are calculated, and deduplication is then performed, which effectively reduces the amount of target data and prevents the same target from being counted multiple times, thereby improving the accuracy of target statistics. By displaying the counted target positions on the three-dimensional model generated from the same image data, target positioning becomes more intuitive and better provides the necessary support for rapid decision-making assistance in emergency management.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic block diagram of a system for illustrating an embodiment of the present invention.

FIG. 2 is a schematic block diagram of the structure of a target analysis module for illustrating an embodiment of the present invention.

FIG. 3 is a flowchart of a method for illustrating an embodiment of the present invention.

FIG. 4 is a flowchart of a method for generating a three-dimensional ground model for illustrating an embodiment of the present invention.

DETAILED DESCRIPTION

The present invention is further described below in conjunction with the accompanying drawings.
The present invention identifies targets from original image data, locates the targets by calculation, and displays the located targets directly on a three-dimensional model, where that model is generated rapidly to meet the technical requirements of emergency management. On the basis of the foregoing summary, one embodiment of the present invention is a target positioning system based on three-dimensional modeling, mainly used in emergency management to locate people in disaster areas and to support rescue decisions. Specifically, referring to FIG. 1 and divided by module function, the system mainly comprises two parts, an image recognition module and a target analysis module, between which data can be transmitted.

Specifically, the image recognition module identifies the target on each image in the original image data through an artificial intelligence (AI) algorithm; target recognition may follow comparable AI video recognition techniques in the prior art. Generally, in this system's application, the targets identified by AI on the images are people, more specifically disaster victims. The aforementioned original image data generally refers to a plurality of images captured by an unmanned aerial vehicle and then compressed.
After the target data in the images is obtained in the above manner, it is transmitted to the target analysis module, which analyzes and determines, for each image, whether the respective targets' positions on the ground are the same; if they are, the current two or more targets are considered to be the same target. The aforementioned ground position refers to ground coordinates, and these ground coordinates lie in the same coordinate system as the three-dimensional model.

In the target analysis module, determining whether the targets' ground positions are the same specifically means comparing whether the distance between the targets' ground positions exceeds a specified threshold; if it does not, the ground positions are considered the same, otherwise the current two or more targets are considered distinct targets and must be counted separately.

As mentioned in the target judgment above, to prevent the same target appearing in multiple images from being counted multiple times and inflating the amount of target data, the target analysis module must deduplicate the targets across the images before re-counting. Deduplication means that when two or more targets are judged to be the same target, they are counted as one target.
Specifically, for two targets T1 and T2 on two images taken at similar times, suppose their calculated ground positions are L1 and L2. If the distance between L1 and L2 is less than the specified threshold, T1 and T2 are considered to be the same target; otherwise T1 and T2 are counted as two targets and no deduplication is performed on them. Since the targets of the present invention are people in a disaster area, the displacement of the same person between two images taken at similar times is usually small, so the above assumption can be considered reasonable.
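The threshold-based deduplication described above can be sketched as a short greedy pass over the computed ground positions. This is only an illustrative sketch, not the patent's implementation; the function name and the 2-metre threshold are assumptions for the example.

```python
import math

def deduplicate(ground_positions, threshold=2.0):
    """Greedily merge detections whose ground positions lie within
    `threshold` metres of an already-kept target (counted as one target).

    ground_positions: list of (x, y) ground coordinates, one per detection.
    Returns the list of unique target positions after deduplication.
    """
    unique = []
    for pos in ground_positions:
        # Keep the detection only if it is farther than the threshold
        # from every target already counted.
        if all(math.dist(pos, kept) > threshold for kept in unique):
            unique.append(pos)
    return unique

# Two detections 0.5 m apart collapse into one target (same person seen
# in two images); a third detection 10 m away remains a separate target.
detections = [(100.0, 200.0), (100.3, 200.4), (110.0, 200.0)]
print(deduplicate(detections))  # → [(100.0, 200.0), (110.0, 200.0)]
```

A production version would also use the images' capture timestamps, since the "same position" assumption only holds for images taken at similar times.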
The target analysis module positions the re-counted targets for display on the three-dimensional ground model. The ground model is generated from the aforementioned original image data, i.e. the same original image data on which the AI identifies targets.

As mentioned above, the ground coordinates used for target positioning are in the same coordinate system as the three-dimensional model, so once a target is positioned it can be displayed directly against the three-dimensional model as background; that is, the positioned targets are displayed on the three-dimensional ground model.

In the above embodiment, the target analysis module must first obtain each target's position on the ground before the deduplication judgment can be made. This embodiment provides a preferred method by which the target analysis module obtains a target's ground position, i.e. its ground coordinates, from the original image.
Specifically, let the target on each image be point P; the direction v of the ray passing through the camera center and point P is obtained by:

v = R′K⁻¹P (1)

In formula (1), R is the image attitude rotation matrix and K is the camera intrinsic parameter matrix; both can be computed from the original image data by an aerial triangulation optimization algorithm, and in this embodiment they are treated as known quantities. The parametric equation of any point on the ray starting from the camera center and passing through point P is then:

P = c + μv (2)

In formula (2), c is the vector of the image position and μ is the distance from a point on the ray to the camera center. As noted above, the P in formula (2) refers to any point on the ray starting from the camera center and passing through point P, which naturally includes point P itself.
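Formulas (1) and (2) can be sketched in a few lines of NumPy. The intrinsic matrix, rotation matrix, camera center, and pixel coordinates below are illustrative values chosen for the example, not values from the patent.

```python
import numpy as np

# Illustrative camera parameters (assumed values, not from the patent).
K = np.array([[1000.0,    0.0, 640.0],   # intrinsics: focal lengths fx, fy
              [   0.0, 1000.0, 360.0],   # and principal point (cx, cy)
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                            # image attitude rotation matrix
c = np.array([10.0, 20.0, 500.0])        # camera center (image position vector)

# Formula (1): ray direction through pixel P, given in homogeneous
# image coordinates (u, v, 1).
P_pixel = np.array([700.0, 400.0, 1.0])
v = R.T @ np.linalg.inv(K) @ P_pixel     # v = R'K^-1 P
v /= np.linalg.norm(v)                   # normalize so mu is a metric distance

# Formula (2): any point on the ray at distance mu from the camera center.
mu = 100.0
point_on_ray = c + mu * v
```

Intersecting this ray with the terrain model's triangles, as described next, then yields the target's ground coordinates.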
Through a ray-triangle intersection algorithm, the point I where the above ray intersects a triangle of the three-dimensional ground model is determined, thereby obtaining the coordinates of the current target's position on the ground. Specifically, since the three-dimensional model represents the ground terrain, the ray intersecting a triangle of the model means it intersects the ground; thus, when the target is taken as a point on the ray, the coordinates of the target's ground position are obtained.
The ray-triangle intersection algorithm can be implemented with reference to the following process:

Assume a point lies on the triangle (V0, V1, V2), where V0, V1, V2 are the coordinates of the triangle's three vertices. The point can then be expressed, for example, as:

T(u,v) = (1-u-v)×V0 + u×V1 + v×V2

where u+v ≤ 1, u ≥ 0, v ≥ 0.

A ray is generally expressed by the following equation:

R(t) = O + t×D (where O is the starting point of the ray and D is the direction of the ray)

So, given that the two intersect, the intersection can be obtained directly by setting:

O + t×D = (1-u-v)×V0 + u×V1 + v×V2

This forms a system of three linear equations in three unknowns, from which the parameters t, u, and v can be solved. If 0 < u < 1, 0 < v < 1, and u+v ≤ 1, the solved point lies inside the triangle, meaning the ray intersects the triangle; otherwise the two do not intersect.

In the above linear system, u and v are the two parameters expressing a point on the triangle's plane in terms of the triangle's three vertex coordinates: u is the normalized component of the point along segment V0V1, and v is the normalized component along segment V0V2. That is, if the intersection lies inside the triangle, then 0 < u < 1, 0 < v < 1, and u+v ≤ 1; if these conditions are not met, the intersection lies outside the triangle. t is the parameter of a point on the ray, representing the signed distance of the point from the ray's origin: t > 0 means the point lies in the ray's positive direction, and t < 0 in its negative direction.
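The linear system above is what the well-known Möller–Trumbore algorithm solves in closed form using cross and dot products instead of a general 3×3 solve. The following NumPy sketch is one possible implementation under that reading, not the patent's reference code; the example triangle and ray are illustrative.

```python
import numpy as np

def ray_triangle_intersect(O, D, V0, V1, V2, eps=1e-9):
    """Solve O + t*D = (1-u-v)*V0 + u*V1 + v*V2 for (t, u, v).

    Returns the intersection point I, or None when the ray misses the
    triangle or runs parallel to its plane.
    """
    e1, e2 = V1 - V0, V2 - V0
    p = np.cross(D, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:              # ray parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    s = O - V0
    u = np.dot(s, p) * inv_det      # barycentric parameter along V0V1
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(D, q) * inv_det      # barycentric parameter along V0V2
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det     # signed distance along the ray
    if t < 0.0:                     # intersection behind the ray origin
        return None
    return O + t * D                # the ground point I

# A vertical ray cast from 100 m above a ground triangle at z = 0
# intersects the terrain at (0.2, 0.2, 0.0).
O = np.array([0.2, 0.2, 100.0])
D = np.array([0.0, 0.0, -1.0])
I = ray_triangle_intersect(O, D,
                           np.array([0.0, 0.0, 0.0]),
                           np.array([1.0, 0.0, 0.0]),
                           np.array([0.0, 1.0, 0.0]))
```

In practice the ray from the camera center would be tested against the terrain mesh's triangles (typically via a spatial index rather than brute force), and the first hit taken as the target's ground position.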
To ensure that the present invention is sufficiently disclosed to those skilled in the art, referring to FIG. 2, a further embodiment of the present invention is the hardware structure of the target analysis module used in the above embodiment to determine whether targets are identical and to position targets. In this embodiment, it comprises a processor, a memory, and a data interface. The targets on each image in the original image data are received through the data interface and stored in the memory; the memory also stores the necessary computer instructions, and the processor reads the per-image targets from the memory and executes the corresponding computer instructions, thereby realizing the functions of the target analysis module in the above embodiment.

In a specific implementation, the processor may include one or more central processing units (CPUs); here, a processor may refer to one or more devices, circuits, and/or processing cores for processing data (e.g. computer program instructions).

The memory may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store the desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through a bus, or may be integrated with the processor.

The data interface may be any data interface of computer hardware commonly used in the art, allowing image data to be received and written to a processor-readable storage medium within the intelligent modeling unit.
Referring to FIG. 3, and based on the above system, another embodiment of the present invention is a target positioning method based on three-dimensional modeling, comprising the following steps:

Step S101: identify the target on each image in the original image data through an artificial intelligence algorithm. The method is applied in emergency management scenarios, so the target generally refers to a person on each image in the original image data, in particular a disaster victim; the original image data mainly comes from a plurality of images captured by an unmanned aerial vehicle and then compressed.

Step S102: analyze and calculate each target's position on the ground.

Step S103: determine whether the targets on the images are at the same ground position; if so, the current two or more targets are considered to be the same target.

Specifically, determining whether the targets' ground positions are the same means comparing whether the distance between the targets' ground positions exceeds a specified threshold; if it does not, the ground positions are considered the same, otherwise the current two or more targets are considered distinct targets and must be counted separately.

Step S104: deduplicate the targets across the images, re-count them, and position the re-counted targets for display on the three-dimensional ground model. Deduplication means counting two or more targets judged to be the same target as one target.

Specifically, the above three-dimensional ground model is generated from the original image data.
Preferably, the target's ground position is obtained through the following steps:

Step S1041: let the target on each image be point P; obtain the direction v of the ray passing through the camera center and point P by:

v = R′K⁻¹P (1)

In formula (1), R is the image attitude rotation matrix and K is the camera intrinsic parameter matrix.

Step S1042: using the ray direction v obtained above, construct the parametric equation of any point on the ray starting from the camera center and passing through point P:

P = c + μv (2)

In formula (2), c is the vector of the image position and μ is the distance from a point on the ray to the camera center. As noted above, the P in formula (2) refers to any point on the ray starting from the camera center and passing through point P, which naturally includes point P itself.

Step S1043: through the ray-triangle intersection algorithm, determine the point I where the ray intersects a triangle of the three-dimensional ground model, thereby obtaining the coordinates of the current target's position on the ground. The ray-triangle intersection algorithm in this step can be implemented with reference to the method of the above embodiment.
基于计算机软件类产品的一般形态,本发明的还一个实施例提供了一种计算机可读的存储介质,该计算机可读的存储介质中存储有指令,当计算机执行所述指令时,使得计算机执行上述实施例中的基于三维建模的目标定位方法。Based on the general form of computer software products, another embodiment of the present invention provides a computer-readable storage medium, in which instructions are stored. When a computer executes the instructions, the computer executes the target positioning method based on three-dimensional modeling in the above embodiment.
The computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, RAM, ROM, erasable programmable read-only memory (EPROM), a register, an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, any suitable combination of the above, or any other form of computer-readable storage medium known in the art. An exemplary storage medium is coupled to a processor so that the processor can read information from, and write information to, the storage medium. The storage medium may, of course, also be an integral part of the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). In the embodiments of the present application, the computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus, or device.
More preferably, as noted above, the ground three-dimensional model is generated from the original image data, and the ground coordinates of the target lie in the same coordinate system as the model. The target can therefore be displayed against the three-dimensional model immediately after positioning: directly overlaying the two realizes the positioning and display of people in the disaster area, makes the target display more intuitive and reliable, and better supports rapid decision-making assistance for emergency management.
In the above embodiments of the present invention, the position on the ground of targets such as people in a disaster area can be located precisely from the position and attitude information of the images together with the three-dimensional surface model, which effectively reduces the chance of the same target being counted multiple times across overlapping images and improves the accuracy of target statistics.
Referring to FIG. 4, the above-mentioned ground three-dimensional model can be generated quickly from the original image data by the following method:
Step S001: Obtain the compressed original image data and construct a scene graph from it. When constructing the scene graph, based on the spatial position (longitude and latitude) at which each image was acquired, select a certain number (for example, 50) of the images nearest to it as its neighboring images according to the spatial-neighbor principle, so that the scene graph can be built quickly.
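The spatial-neighbor rule of step S001 can be sketched as a k-nearest-neighbor selection over capture positions. The positions and k below are invented example values, and planar distance is used for brevity; real longitude/latitude data would call for a haversine or projected distance.

```python
# Sketch of step S001: for each image, pick the k closest images
# (by capture position) as its scene-graph neighbours.
import math

def build_scene_graph(positions, k=50):
    """positions: {image_id: (lon, lat)}.  Returns {image_id: [k nearest ids]}.
    Ties are broken deterministically by image id."""
    graph = {}
    for img, (x, y) in positions.items():
        others = [(math.hypot(x - ox, y - oy), other)
                  for other, (ox, oy) in positions.items() if other != img]
        others.sort()                      # by distance, then by id
        graph[img] = [other for _, other in others[:k]]
    return graph

positions = {"a": (0.0, 0.0), "b": (0.1, 0.0), "c": (5.0, 5.0), "d": (0.0, 0.1)}
print(build_scene_graph(positions, k=2))
```

With the 50-neighbor setting from the text, matching is attempted only within each image's neighbor list rather than against every other image, which is what makes the scene-graph construction fast.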
Step S002: Extract feature points on each image with SIFT (Scale-Invariant Feature Transform) or a similar algorithm to perform image matching. The feature points used are the leading ones after the feature points of each image are sorted by scale in descending order. Optionally, only a certain number (for example, 2000-4000) of the largest-scale feature points are retained, instead of using all feature points for matching, which greatly reduces the matching time.
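The scale-based pruning of step S002 amounts to a sort-and-truncate. A minimal sketch, with keypoints modeled as plain (scale, x, y) tuples; with OpenCV one would sort cv2.KeyPoint objects by their .size attribute instead (that mapping is an assumption here, not stated in the patent).

```python
# Sketch of step S002's pruning: keep only the n largest-scale
# keypoints per image before matching.

def keep_largest_scale(keypoints, n=2000):
    """Sort keypoints by scale, descending, and keep the first n."""
    return sorted(keypoints, key=lambda kp: kp[0], reverse=True)[:n]

kps = [(1.2, 10, 10), (4.8, 40, 32), (0.6, 7, 90), (3.1, 55, 8)]
print(keep_largest_scale(kps, n=2))   # [(4.8, 40, 32), (3.1, 55, 8)]
```

Large-scale keypoints tend to be the most stable across viewpoint changes, which is why truncating to them loses little matching quality while cutting the pairwise matching cost sharply.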
Step S003: Determine the position and attitude of each image at the time of capture with an aerial triangulation optimization algorithm. Aerial triangulation is a common algorithm in image processing: given the matched feature points, it finds the optimal camera positions and attitudes and the three-dimensional coordinates of the feature points so as to minimize the sum of squared reprojection errors of the three-dimensional points.
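The objective that aerial triangulation minimizes can be illustrated by evaluating it for a toy axis-aligned pinhole camera. This sketch only computes the sum of squared reprojection errors; an actual solver would iterate over camera poses and point coordinates (typically bundle adjustment with Levenberg-Marquardt), which is omitted here. All values are made-up examples.

```python
# Sketch of the step S003 objective: sum of squared reprojection errors.

def project(point3d, camera_pos, f):
    """Toy pinhole projection: camera axis-aligned, looking along +z."""
    x, y, z = (point3d[i] - camera_pos[i] for i in range(3))
    return (f * x / z, f * y / z)

def reprojection_sse(points3d, observations, camera_pos, f):
    """Sum, over all observations (point index, observed pixel), of the
    squared pixel distance between observation and reprojection."""
    sse = 0.0
    for idx, (u_obs, v_obs) in observations:
        u, v = project(points3d[idx], camera_pos, f)
        sse += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return sse

pts = [(0.0, 0.0, 10.0), (1.0, 0.0, 10.0)]
obs = [(0, (0.0, 0.0)), (1, (90.0, 0.0))]   # observed pixel coordinates
print(reprojection_sse(pts, obs, camera_pos=(0.0, 0.0, 0.0), f=1000.0))   # 100.0
```

The optimizer's job is to adjust every camera pose and every 3D point jointly until this scalar is as small as possible over all images at once.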
Step S004: Generate a dense point cloud from the capture position and attitude of each image with a stereo-pair matching algorithm. As shown in FIG. 4, dense point cloud generation downsamples each image by a preset image downsampling coefficient (2 by default) and specifies several point cloud sampling density levels, generating a full-pixel depth map only at the high density level. Stereo-pair matching is likewise a common algorithm in image processing: from the positional relationship between each image and its neighboring images it determines, for every pixel, the corresponding (same-name) pixel in the neighboring images, and then uses a forward intersection algorithm to determine the position of that pixel, yielding the dense point cloud.
Optionally, the point cloud sampling density levels include a high density level, a medium density level, and a low density level. Except at the high density level, where a full-pixel depth map is generated, depth maps are generated at pixel intervals: at the medium density level a depth value is computed at every other pixel in the horizontal and vertical directions of the image, and at the low density level at every third pixel (skipping two) in the horizontal and vertical directions. Specifying the downsampling coefficient and the point cloud sampling density greatly reduces the number of dense points generated and the generation time, and also saves time in the subsequent three-dimensional model generation.
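The three density levels translate into simple pixel strides for depth-map generation. A sketch under the interpretation above (stride 1 at high density, 2 at medium, 3 at low; the stride values are this reading of "every other pixel" and "every two pixels", not literal constants from the patent):

```python
# Sketch of the per-level depth-map sampling grid from step S004's
# optional density levels.

STRIDE = {"high": 1, "medium": 2, "low": 3}

def depth_pixels(width, height, level):
    """Yield the (x, y) grid positions at which a depth value is computed."""
    s = STRIDE[level]
    for y in range(0, height, s):
        for x in range(0, width, s):
            yield (x, y)

# An 8x8 image: 64 depth samples at high density, 16 at medium, 9 at low.
print([sum(1 for _ in depth_pixels(8, 8, lv))
       for lv in ("high", "medium", "low")])   # [64, 16, 9]
```

Combined with the default downsampling coefficient of 2, the low density level computes roughly 1/36 as many depth values as a full-resolution, full-pixel pass, which is where the large time savings come from.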
Step S005: Generate a triangulated mesh model from the dense point cloud, then texture-map the mesh with the image data to produce a three-dimensional model close to a real-time image of the modeling area. The triangulated mesh is generated from the dense point cloud by a Delaunay tetrahedralization algorithm and a graph-cut method.
In this step, optionally, a Delaunay tetrahedral subdivision of space is first generated from the dense point cloud and a global optimization graph is constructed: the nodes of the graph are the tetrahedra of the subdivision, and its edges are the triangular faces shared by adjacent tetrahedra. Next, for each point, the triangular faces of the subdivision crossed by the line from that point to each camera that sees it are determined, and a weight of 1 is accumulated on the corresponding edge of the graph, giving a global optimization graph constrained by the visible lines of sight. Finally, the graph is partitioned with a max-flow/min-cut algorithm to determine whether each tetrahedron lies inside or outside the model surface, and the triangular faces shared by adjacent inside/outside tetrahedra are extracted to form the final triangulated mesh. For texturing, the nearest visible camera is chosen as the associated camera of each triangular face, spatially connected faces with the same associated camera are grouped together, the corresponding image patches captured by the associated camera are obtained, and all patches are combined into a single texture image by a packing algorithm, completing the texture mapping of the mesh and yielding the above-mentioned three-dimensional model identical or close to a real-time image of the modeling area.
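The visibility-weighting step described above can be sketched in isolation: each point-to-camera line of sight contributes weight 1 to every triangular face it crosses. Face crossings are supplied directly with made-up face ids here; a real pipeline would compute them via ray/triangle intersection and then feed the accumulated weights into the max-flow/min-cut partition as edge capacities.

```python
# Sketch of the visible-line-constraint weighting: accumulate weight 1
# on each graph edge (triangular face) crossed by a line of sight.
from collections import defaultdict

def accumulate_visibility_weights(lines_of_sight):
    """lines_of_sight: list of lists of face ids crossed by each
    point-to-camera segment.  Returns {face_id: accumulated weight}."""
    weights = defaultdict(int)
    for crossed_faces in lines_of_sight:
        for face in crossed_faces:
            weights[face] += 1
    return dict(weights)

sights = [["f1", "f2"], ["f2"], ["f2", "f3"]]
print(accumulate_visibility_weights(sights))   # {'f1': 1, 'f2': 3, 'f3': 1}
```

Heavily crossed faces become expensive to cut, so the min-cut prefers to place the surface along faces that few lines of sight pass through, which is exactly the free-space constraint the weighting encodes.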
In addition to the above, it should be noted that references in this specification to "one embodiment", "another embodiment", "an embodiment", and the like mean that a specific feature, structure, or characteristic described in connection with that embodiment is included in at least one embodiment generally described in this application. The appearance of the same expression in multiple places in the specification does not necessarily refer to the same embodiment. Further, when a specific feature, structure, or characteristic is described in connection with any embodiment, it is claimed that implementing that feature, structure, or characteristic in connection with other embodiments also falls within the scope of the present invention.
Although the present invention has been described herein with reference to a number of illustrative embodiments, it should be understood that those skilled in the art can devise many other modifications and implementations that will fall within the scope and spirit of the principles disclosed in this application. More specifically, within the scope of the present disclosure, the drawings, and the claims, various variations and improvements can be made to the component parts and/or layout of the subject combination arrangement. Besides variations and improvements to the component parts and/or layout, other uses will also be apparent to those skilled in the art.
Claims (8)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111151403 | 2021-09-29 | ||
CN2021111514032 | 2021-09-29 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114066999A (en) | 2022-02-18 |
CN114066999B true CN114066999B (en) | 2024-11-05 |
Family
ID=80277210
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111440449.6A Active CN114066999B (en) | 2021-09-29 | 2021-11-30 | Target positioning system and method based on three-dimensional modeling |
CN202111438187.XA Pending CN114092651A (en) | 2021-09-29 | 2021-11-30 | Intelligent modeling system and method for emergency management |
CN202111438495.2A Pending CN114067060A (en) | 2021-09-29 | 2021-11-30 | A Fast Generation Method of Dense Point Clouds for 3D Modeling |
CN202210545528.1A Active CN114782219B (en) | 2021-09-29 | 2022-05-19 | Personnel flow data analysis method and device |
CN202210554244.9A Pending CN114969153A (en) | 2021-09-29 | 2022-05-20 | Personnel distribution data determination method and device |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111438187.XA Pending CN114092651A (en) | 2021-09-29 | 2021-11-30 | Intelligent modeling system and method for emergency management |
CN202111438495.2A Pending CN114067060A (en) | 2021-09-29 | 2021-11-30 | A Fast Generation Method of Dense Point Clouds for 3D Modeling |
CN202210545528.1A Active CN114782219B (en) | 2021-09-29 | 2022-05-19 | Personnel flow data analysis method and device |
CN202210554244.9A Pending CN114969153A (en) | 2021-09-29 | 2022-05-20 | Personnel distribution data determination method and device |
Country Status (1)
Country | Link |
---|---|
CN (5) | CN114066999B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116227929B (en) * | 2023-03-07 | 2024-03-19 | 广州爱浦路网络技术有限公司 | Communication data analysis method, device, equipment and storage medium |
CN117314081B (en) * | 2023-09-26 | 2024-09-24 | 选房宝(珠海横琴)数字科技有限公司 | Method, device, equipment and storage medium for customer acquisition |
CN118488426B (en) * | 2024-07-11 | 2024-10-11 | 广州市突发事件预警信息发布中心(广州市气象探测数据中心) | Emergency information release method and system based on mobile phone signaling data |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113344002A (en) * | 2021-07-29 | 2021-09-03 | 北京图知天下科技有限责任公司 | Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7986271B2 (en) * | 2009-06-18 | 2011-07-26 | Bae Systems Information And Electronic Systems Integration Inc. | Tracking of emergency personnel |
CN102243074B (en) * | 2010-05-13 | 2014-06-18 | 中国科学院遥感应用研究所 | Method for simulating geometric distortion of aerial remote sensing image based on ray tracing technology |
KR20140033672A (en) * | 2012-09-10 | 2014-03-19 | 삼성전자주식회사 | Method and device for trasmitting information related to event |
CN104750895B (en) * | 2013-12-30 | 2018-01-16 | 深圳先进技术研究院 | Real-time city emergency evacuation emulation method and system based on cell phone data |
CN104715471B (en) * | 2014-01-03 | 2018-01-02 | 杭州海康威视数字技术股份有限公司 | Target locating method and its device |
WO2019090480A1 (en) * | 2017-11-07 | 2019-05-16 | 深圳市大疆创新科技有限公司 | Three-dimensional reconstruction method, system and apparatus based on aerial photography by unmanned aerial vehicle |
CN108391223B (en) * | 2018-02-12 | 2020-08-11 | 中国联合网络通信集团有限公司 | Method and device for determining lost user |
CN109005500A (en) * | 2018-07-09 | 2018-12-14 | 京信通信系统(中国)有限公司 | Emergency rescue method, apparatus, system, computer storage medium and equipment |
CN109360270B (en) * | 2018-11-13 | 2023-02-10 | 盎维云(深圳)计算有限公司 | 3D face pose alignment method and device based on artificial intelligence |
US20200193685A1 (en) * | 2018-12-13 | 2020-06-18 | Advanced Micro Devices, Inc. | Water tight ray triangle intersection without resorting to double precision |
CN109767452A (en) * | 2018-12-24 | 2019-05-17 | 深圳市道通智能航空技术有限公司 | A target positioning method and device, and an unmanned aerial vehicle |
CN109886096B (en) * | 2019-01-09 | 2021-09-14 | 武汉中联智诚科技有限公司 | Wisdom tourism supervision and safe emergency management linkage command system |
CN109640355B (en) * | 2019-01-22 | 2022-02-11 | 中国联合网络通信集团有限公司 | Method and device for determining personal safety of personnel in disaster area |
CN109949399B (en) * | 2019-03-15 | 2023-07-14 | 西安因诺航空科技有限公司 | Scene three-dimensional reconstruction method based on unmanned aerial vehicle aerial image |
CN111599001B (en) * | 2020-05-14 | 2023-03-14 | 星际(重庆)智能装备技术研究院有限公司 | Unmanned aerial vehicle navigation map construction system and method based on image three-dimensional reconstruction technology |
CN111629193B (en) * | 2020-07-28 | 2020-11-10 | 江苏康云视觉科技有限公司 | Live-action three-dimensional reconstruction method and system |
CN112308325B (en) * | 2020-11-05 | 2024-06-04 | 腾讯科技(深圳)有限公司 | Thermodynamic diagram generation method and device |
- 2021
- 2021-11-30 CN CN202111440449.6A patent/CN114066999B/en active Active
- 2021-11-30 CN CN202111438187.XA patent/CN114092651A/en active Pending
- 2021-11-30 CN CN202111438495.2A patent/CN114067060A/en active Pending
- 2022
- 2022-05-19 CN CN202210545528.1A patent/CN114782219B/en active Active
- 2022-05-20 CN CN202210554244.9A patent/CN114969153A/en active Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113344002A (en) * | 2021-07-29 | 2021-09-03 | 北京图知天下科技有限责任公司 | Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium |
Non-Patent Citations (1)
Title |
---|
Research on UAV GPS remote sensing positioning technology in post-earthquake areas; Shi Shengchun; 地震工程学报 (China Earthquake Engineering Journal); 2018-04-15; 350-355 *
Also Published As
Publication number | Publication date |
---|---|
CN114092651A (en) | 2022-02-25 |
CN114782219B (en) | 2024-11-05 |
CN114969153A (en) | 2022-08-30 |
CN114067060A (en) | 2022-02-18 |
CN114066999A (en) | 2022-02-18 |
CN114782219A (en) | 2022-07-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114066999B (en) | Target positioning system and method based on three-dimensional modeling | |
WO2020206903A1 (en) | Image matching method and device, and computer readable storage medium | |
WO2020001168A1 (en) | Three-dimensional reconstruction method, apparatus, and device, and storage medium | |
WO2020093950A1 (en) | Three-dimensional object segmentation method and device and medium | |
US9286539B2 (en) | Constructing contours from imagery | |
WO2021056516A1 (en) | Method and device for target detection, and movable platform | |
US11651533B2 (en) | Method and apparatus for generating a floor plan | |
CN114332415A (en) | Three-dimensional reconstruction method and device of power transmission line corridor based on multi-view technology | |
US20160232420A1 (en) | Method and apparatus for processing signal data | |
CN117581232A (en) | Accelerated training of NeRF-based machine learning models | |
CN111639147A (en) | Map compression method, system and computer readable storage medium | |
CN116109799B (en) | Method, device, computer equipment and storage medium for training adjustment model | |
CN113379826A (en) | Method and device for measuring volume of logistics piece | |
CN117635875A (en) | Three-dimensional reconstruction method, device and terminal | |
CN114757822B (en) | Binocular-based human body three-dimensional key point detection method and system | |
CN114522420B (en) | Game data processing method, device, computer equipment and storage medium | |
US10861174B2 (en) | Selective 3D registration | |
WO2022252036A1 (en) | Method and apparatus for acquiring obstacle information, movable platform and storage medium | |
CN116957999A (en) | Depth map optimization method, device, equipment and storage medium | |
CN116188565A (en) | Position area detection method, device, apparatus, storage medium and program product | |
CN115661421A (en) | Point cloud outlier removal method, point cloud processing method, device and related equipment | |
CN114943809A (en) | Map model generation method and device and storage medium | |
CN111145081A (en) | 3D model view projection method and system based on spatial volume feature | |
CN114812540B (en) | Picture construction method and device and computer equipment | |
CN118189934B (en) | Map updating method, map updating device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |