
CN109816724B - Method and device for 3D feature extraction based on machine vision - Google Patents


Info

Publication number: CN109816724B
Authority: CN (China)
Prior art keywords: measured, detected, position information, image, feature point
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201811474153.4A
Other languages: Chinese (zh)
Other versions: CN109816724A (en)
Inventors: 沈震, 熊刚, 李志帅, 彭泓力, 郭超, 董西松, 商秀芹, 王飞跃
Current Assignee: Institute of Automation of Chinese Academy of Science (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Institute of Automation of Chinese Academy of Science

Events:
- Application filed by Institute of Automation of Chinese Academy of Science
- Priority to CN201811474153.4A (CN109816724B)
- Publication of CN109816724A
- Priority to PCT/CN2019/105962 (WO2020114035A1)
- Application granted
- Publication of CN109816724B
- Current legal status: Active; anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of machine vision and specifically provides a method and device for extracting three-dimensional features based on machine vision. The invention aims to solve the problems in the prior art that three-dimensional model reconstruction is complicated, time-consuming, and difficult to popularize. To this end, the machine-vision-based three-dimensional feature extraction method of the invention comprises the steps of: acquiring multi-angle images containing preset feature points to be measured of a target; extracting the position information of the feature points to be measured in each image; obtaining the spatial position information of the feature points to be measured from their position information in each image; and, based on the spatial position information and a preset three-dimensional feature category, calculating first distance information and/or second distance information corresponding to a given feature point to be measured. Machine vision acquires images containing the feature points to be measured from different angles, the spatial position information of the feature points is then recovered, and the distance information of the target is computed from it.


Description

Method and device for 3D feature extraction based on machine vision

Technical Field

The invention belongs to the field of machine vision, and in particular relates to a method and device for three-dimensional feature extraction based on machine vision.

Background

With the development of cloud manufacturing and cloud computing and the approach of "Industry 4.0", social manufacturing, i.e., a mode of production customized for individual customers, has emerged. Social manufacturing is characterized by its ability to convert consumer demand directly into products. Grounded in social computing theory and built on mobile Internet technology, social media, and 3D printing, it lets the public participate fully in the entire product manufacturing life cycle through crowdsourcing and similar mechanisms, enabling personalized, real-time, and economical patterns of production and consumption. In other words, in social manufacturing every consumer can take part in every stage of the product life cycle, including design, manufacture, and consumption. Taking shoemaking as an example, social manufacturing lets users customize and select shoes according to their own needs, which requires a simple, fast, and accurate way to obtain the three-dimensional characteristics of the user's foot shape.

However, traditional manual measurement yields only a few foot-shape parameters and cannot describe the foot accurately; accurate measurements require professional shoemaking tools. To let non-professionals obtain reasonably accurate foot parameters and thereby enable personalized shoe customization, the present invention proposes a method that obtains foot parameters by building and computing a model. Because arch height and the angle between the toes and the sole plane differ from person to person, foot length and foot width alone cannot accurately reflect the differences between individual feet of the same nominal size; a three-dimensional model of the foot must therefore be reconstructed to obtain accurate parameters. At present, a three-dimensional foot model can be reconstructed with equipment such as laser 3D scanners, but this approach is complicated and time-consuming to operate, the hardware is expensive, and it is difficult to popularize. A simpler three-dimensional modelling method is therefore needed to obtain foot parameters accurately.

Accordingly, there is a need in the art for a new three-dimensional model reconstruction method to solve the above problems.

Summary of the Invention

To solve the above problems in the prior art, namely that existing three-dimensional model reconstruction is complicated, time-consuming, and difficult to popularize, a first aspect of the present invention discloses a machine-vision-based three-dimensional feature extraction method comprising the following steps: acquiring multi-angle images containing a reference object and preset feature points to be measured on a target arranged relative to the reference object; extracting the position information of the feature points to be measured in each image; obtaining the spatial position information of the feature points to be measured from their position information in each image; and, based on the spatial position information and a preset three-dimensional feature category, calculating first distance information and/or second distance information corresponding to a given feature point to be measured. The first distance information is the distance between that feature point and other feature points to be measured; the second distance information is the vertical distance between that feature point and a preset plane. The feature point in question, the other feature points, and the plane all depend on the three-dimensional feature category.

In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the step of "extracting the position information of the feature points to be measured in each image" comprises: obtaining the pixel position of a feature point to be measured in one of the images by manual marking; and extracting the corresponding pixel positions of that feature point in the other images using a preset feature-point matching method and the obtained pixel position.

In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the step of "extracting the position information of the feature points to be measured in each image" comprises: obtaining the region shape corresponding to the region of the target in which a feature point to be measured lies; obtaining the region to be measured in each image according to that region shape; and obtaining the position information of the feature point in each image from the relative position between the feature point and the region shape together with each region to be measured.

In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the step of "extracting the position information of the feature points to be measured in each image" comprises: obtaining the position information of the feature points to be measured in each image with a pre-built neural network, where the neural network is a deep neural network trained on a preset training set using deep-learning algorithms.

In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the step of "obtaining the spatial position information of the feature points to be measured according to their position information in each image" comprises: obtaining the Euclidean position of the feature points to be measured by triangulation, from their position information in each image and the camera's intrinsic and extrinsic parameters.

In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the step of "obtaining the spatial position information of the feature points to be measured according to their position information in each image" comprises: building a sparse model with the incremental SFM method and the position information of the feature points in each image, and computing the spatial position information of the feature points in the world coordinate system by triangulation; and restoring the spatial position information so obtained with a pre-acquired scale coefficient to obtain the real positions of the feature points to be measured.

In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, before "restoring the spatial position information of the feature points in the world coordinate system obtained in the above step with the pre-acquired scale coefficient to obtain the real positions of the feature points to be measured", the method further comprises: obtaining the coordinates of the reference object's vertices in the world coordinate system, using the sparse model and the pixel positions of those vertices in the camera coordinate system (note that the vertex coordinates in the world coordinate system differ from the true spatial positions by a scale coefficient λ); and computing the scale coefficient λ from the coordinates of the reference object's vertices in the world coordinate system and their true spatial positions.

In a preferred technical solution of the above machine-vision-based three-dimensional feature extraction method, the triangulation method comprises: obtaining the projective-space position of a feature point to be measured from the camera's intrinsic and extrinsic parameters and the feature point's position information in each image, and homogenizing the projective-space position to obtain the feature point's Euclidean-space position.

Those skilled in the art will understand that in the technical solution of the present invention, images of the target are acquired from different angles, the positions of the feature points to be measured are extracted from the images, and the spatial positions of the feature points in the world coordinate system are then computed either by triangulation or by solving a sparse reconstruction problem; the first distance information and/or second distance information between feature points is computed from these spatial positions. The three-dimensional feature extraction method of the present invention can quickly determine the three-dimensional feature points of a target from multi-angle images taken with an ordinary photographing device and then compute the target's distance information, without expensive and operationally complex hardware such as laser 3D scanners, simplifying the three-dimensional reconstruction process.

In preferred technical solutions of the present invention, the pixel position of each feature point to be measured in each image is determined either by manual marking or automatically; the automatic methods include locating the region to be measured in each image from the region shape in which the feature point lies, or obtaining the position information with a pre-built neural network. The true spatial positions of the feature points are then found either by automatically calibrating the camera parameters with a reference object and triangulating, or by solving a sparse reconstruction problem. No model of the entire target needs to be reconstructed, which reduces computation and simplifies model building. Finally, the distance information corresponding to the feature points is computed from their true spatial positions and the preset three-dimensional feature category.

A second aspect of the present invention provides a storage device storing a plurality of programs adapted to be loaded by a processor to execute any of the machine-vision-based three-dimensional feature extraction methods described above.

It should be noted that the storage device has all the technical effects of the aforementioned machine-vision-based three-dimensional feature extraction method, which are not repeated here.

A third aspect of the present invention provides a control device comprising a processor and a storage device, the storage device being adapted to store a plurality of programs adapted to be loaded by the processor to execute any of the machine-vision-based three-dimensional feature extraction methods described above.

It should be noted that the control device has all the technical effects of the aforementioned machine-vision-based three-dimensional feature extraction method, which are not repeated here.

Description of the Drawings

The machine-vision-based three-dimensional feature extraction method of the present invention is described below with reference to the accompanying drawings, using a foot shape as the example. In the drawings:

Fig. 1 is a flow chart of the main steps of a machine-vision-based method for extracting three-dimensional foot-shape features in an embodiment of the present invention;

Fig. 2 is a schematic diagram, in such a method, of detecting feature points with the generalized Hough transform using a circle as the template;

Fig. 3 is another schematic diagram of detecting feature points with the generalized Hough transform using a circle as the template, from a different viewing angle;

Fig. 4 is a further schematic diagram of detecting feature points with the generalized Hough transform using a circle as the template, from yet another viewing angle;

Fig. 5 is a schematic diagram of detecting the reference object with the generalized Hough transform using straight lines as the template;

Fig. 6 is a schematic diagram of the triangulation process for solving the spatial position information of feature points;

Fig. 7 is a schematic diagram of the sparse reconstruction process for solving the spatial position information of feature points.

Detailed Description

Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments only explain the technical principle of the present invention and are not intended to limit its scope of protection. For example, although the present invention is described using a foot shape as the example, the target may also be any other object that can be converted into a product through modelling, such as clothing. In addition, the present invention is described with A4 paper as the reference object, but other objects of known dimensions (such as floor tiles) may also be used. Those skilled in the art can adapt the method as needed for specific applications.

It should be noted that in the description of the present invention the terms "first", "second" and "third" are used for descriptive purposes only and should not be understood as indicating or implying relative importance.

The machine-vision-based method for extracting three-dimensional foot-shape features provided by the present invention is described below with reference to the accompanying drawings.

In one embodiment of the present invention, the extraction of the three-dimensional foot-shape parameters is reduced to determining the spatial positions of the corresponding feature points, after which the required foot parameters are computed with the Euclidean distance formula. Obtainable basic foot parameters include foot length, foot girth, instep-girth height, arch bend-point height, foot width, big-toe height, heel-convexity-point height, lateral-malleolus center height, and other parameters needed for shoemaking. A possible implementation of the machine-vision-based method is illustrated below using three parameters: foot length, foot width, and ankle-point height.

Referring first to Fig. 1, which shows the main steps of the method in an embodiment of the present invention, the machine-vision-based method for extracting three-dimensional foot-shape features may comprise the following steps:

Step S100: acquire multi-angle images containing the preset feature points to be measured of the target.

Specifically, the foot is placed squarely on a sheet of A4 paper, and a mobile photographing device such as a camera captures images of the foot from multiple angles, so that the foot shape is fully represented and enough feature points to be measured are captured: for example, the longest-toe vertex and the heel-convexity point for computing foot length, the lateral point of the ball of the big toe and the lateral point of the little-toe root for computing foot width, and the ankle point for computing ankle height. Note that at least three images of the foot should be taken; the more images contain the feature points to be measured, the more accurate the foot parameters computed from them.

Step S200: extract the position information of the feature points to be measured in each image.

Specifically, in a preferred implementation of this embodiment, the three-dimensional feature extraction method shown in Fig. 1 may obtain the pixel position (x, y) of a feature point to be measured in each image through the following steps:

First, the pixel position of the feature point to be measured is marked manually in one image; a feature-point matching method such as Scale-Invariant Feature Transform (SIFT) or Iterative Closest Point (ICP) then finds the corresponding pixel positions of that feature point in the other images. Taking ankle-height measurement as an example: select one image containing the ankle point, manually mark the ankle point's pixel position in that image, and then use SIFT or ICP matching to find the ankle point in the images taken from other angles that contain it. In this way the pixel positions of a feature point in all images can be found quickly without manually marking every image, improving the efficiency of obtaining feature-point pixel positions.
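
As an illustration of this matching step, the following Python sketch (an illustrative assumption of this description, not code from the patent) describes the manually marked point with a single SIFT descriptor and searches for its nearest-neighbour match among the keypoints of another image:

```python
import cv2

def propagate_marked_point(img_marked, point_xy, img_other, patch_size=31.0):
    """Find the pixel in img_other matching a manually marked point.

    img_marked, img_other: grayscale images; point_xy: the marked (x, y).
    Returns the matched (x, y) position in img_other.
    """
    sift = cv2.SIFT_create()
    # Describe the marked location with a single SIFT descriptor.
    kp = [cv2.KeyPoint(float(point_xy[0]), float(point_xy[1]), patch_size)]
    _, desc_marked = sift.compute(img_marked, kp)
    # Detect and describe candidate keypoints in the other image.
    kps_other, desc_other = sift.detectAndCompute(img_other, None)
    # The nearest neighbour in descriptor space is taken as the correspondence;
    # a ratio test or RANSAC step would make this more robust in practice.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    best = matcher.match(desc_marked, desc_other)[0]
    return kps_other[best.trainIdx].pt
```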

Alternatively, in another preferred implementation of this embodiment, the three-dimensional feature extraction method shown in Fig. 1 may obtain the pixel position (x, y) of a feature point to be measured in each image as follows:

Exploiting the uniqueness of the shape of the region in which a feature point to be measured lies, a feature-detection method such as the generalized Hough transform detects that specific shape and thereby determines the feature point's position information in each image. Specifically, the region shape corresponding to the feature point's region is determined first; the generalized Hough transform then automatically finds the matching region to be measured in each image; finally, the feature point's position information in each image is obtained from the relative position between the feature point and the region shape together with the region detected in each image. A possible implementation, using a circle as the template and the generalized Hough transform to find feature points, is described below.

Referring to Figs. 2, 3 and 4, which show the detection of feature points with the generalized Hough transform using a circle as the template, from different viewing angles: the ankle around the ankle center is circular, and as the figures show, this circular contour is unique within the foot shape. Accordingly, when the generalized Hough transform is applied with a circle as the template, the circle's position is found automatically in each image (the dashed circle template in Figs. 2-4); that position is where the ankle lies, and the center G of the detected circle is the position of the ankle feature point in the image.
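
A minimal sketch of this detection, using OpenCV's circular Hough transform as a stand-in for the generalized Hough transform with a circle template (the radius bounds and voting parameters are illustrative assumptions):

```python
import cv2

def find_ankle_point(img_gray, r_min=20, r_max=60):
    """Return the (x, y) center of the strongest detected circle (point G)."""
    blurred = cv2.medianBlur(img_gray, 5)  # suppress noise before voting
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
        param1=120, param2=40, minRadius=r_min, maxRadius=r_max)
    if circles is None:
        return None
    x, y, _r = circles[0][0]  # the first circle has the highest vote count
    return float(x), float(y)
```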

It will be appreciated that when determining the position information of the longest-toe vertex, the contour of the longest toe can serve as the generalized-Hough template: the image is searched for the toe contour, and once found, the pixel position of the feature point is determined from the relative position between that contour and the longest-toe vertex.

Alternatively, in another preferred implementation of this embodiment, the three-dimensional feature extraction method shown in Fig. 1 may obtain the pixel position (x, y) of a feature point to be measured in each image as follows:

A deep neural network is built with deep-learning algorithms from a sufficiently large set of foot-image samples with labelled feature points, and this network is then used to obtain the position information of the feature point to be measured in each image. Specifically, during training the input is image data containing the feature point and the output is the feature point's pixel position (x, y) in the image; the output comprises the actual output and the expected output, where the final fully connected layer of the network actually outputs the pixel position (x, y) of the feature point in the image, and the expected output is the feature point's labelled ground-truth pixel position. The error between the actual and expected outputs is back-propagated to train the whole network, iterating until the network converges. Once trained, the network takes an image containing a feature point to be measured and automatically outputs the feature point's pixel position in that image. Taking the ankle point as an example: a sufficient number of image samples with labelled ankle points are selected as the training set, a deep neural network is built and trained on that set, and after training, the network automatically outputs the ankle point's pixel position for an input test image containing the ankle point. It will be appreciated that for any other feature point, the pre-built deep network is trained with the image samples labelled for that point, and an image to be measured containing that point is then fed in to obtain its pixel position.
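
The patent does not fix a network architecture; the following PyTorch sketch shows one plausible form of such a coordinate-regression network and one training step, with the layer sizes and the 128x128 input resolution assumed purely for illustration:

```python
import torch
import torch.nn as nn

class KeypointRegressor(nn.Module):
    """Small CNN regressing one (x, y) pixel position from a 128x128 image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(64, 2)  # final fully connected layer -> (x, y)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = KeypointRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # error between actual and expected pixel positions

def train_step(images, target_xy):
    """images: (B, 3, 128, 128) tensor; target_xy: (B, 2) labelled positions."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), target_xy)
    loss.backward()   # back-propagate the output error through the network
    optimizer.step()
    return loss.item()
```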

Step S300: obtain the spatial position information of the feature points to be measured from their position information in each image.

Specifically, in a preferred implementation of this embodiment, the three-dimensional feature extraction method shown in Fig. 1 may obtain the spatial position information of the feature points to be measured through the following steps:

First the camera parameters are calibrated with a reference object, and the spatial position information of the feature points is then computed by triangulation. Specifically, taking A4 paper as the reference object: the foot is placed on the A4 paper, and a photographing device such as a camera acquires several images from different angles, each containing the outline of the paper. These images are used to calibrate the camera, determining the intrinsic parameter matrix K and the extrinsic parameters, i.e., the rotation matrix R and translation vector t relative to the world coordinate system. Then, from the pixel positions (x, y) of the feature points obtained in step S200, triangulation followed by homogenization yields the spatial positions (X, Y, Z) of the feature points in the world coordinate system. A possible implementation of obtaining the true spatial positions of feature points by triangulation is described below with reference to Figs. 5 and 6.

Referring to Fig. 5, a schematic diagram of detecting the reference object with the generalized Hough transform using straight lines as the template: a straight-line template and the randomized Hough transform detect the edge lines of the A4 paper in the image. Four edge lines are detected, intersecting pairwise, and the intersections give the pixel positions (xi, yi), i = 1, 2, 3, 4, of the four vertices A, B, C and D of the A4 paper. Referring again to Figs. 2, 3 and 4, spatial geometric transformation gives the following relation for point A between Euclidean space and projective space:

s(xA, yA, 1)^T = K[R|t](XA, YA, 0, 1)^T = K[r1 r2|t](XA, YA, 1)^T = H(XA, YA, 1)^T    (1)

In formula (1), the parameters K, R and t are respectively the camera intrinsic parameter matrix and the rotation matrix and translation vector of the camera relative to the world coordinate system ([R|t] together is called the camera extrinsic parameter matrix). The symbol "|" denotes an augmented matrix, and r1, r2, r3 are the columns of the rotation matrix R; in the matrix multiplication, r3 is multiplied by the zero element and drops out.

Here (xA, yA, 1)^T is the homogeneous pixel position of vertex A of the A4 paper, (XA, YA, ZA)^T is its real position in the world coordinate system, and K[R|t] comprises the camera's intrinsic and extrinsic parameters. The homography matrix H = K[r1 r2|t] has 8 degrees of freedom. With the world coordinate system established at vertex A of the A4 paper, the world coordinates of the four vertices of the paper are (0, 0, 0), (X, 0, 0), (0, Y, 0) and (X, Y, 0), where X = 210 mm and Y = 297 mm. Each vertex can be written in the form of formula (1) to construct two linear equations; the four vertices therefore yield eight linear equations, and H is solved by the direct linear transform (DLT).
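
The per-view DLT solve for H can be sketched with OpenCV as follows (the corner ordering A, B, C, D is an assumption of this sketch):

```python
import numpy as np
import cv2

# World positions of the A4 corners A, B, C, D on the Z = 0 plane, in mm.
WORLD_XY = np.array([[0, 0], [210, 0], [0, 297], [210, 297]], dtype=np.float32)

def homography_for_view(corner_pixels):
    """corner_pixels: 4x2 float array of detected A4 corner pixels.
    Returns the 3x3 homography H mapping world (X, Y, 1) to image pixels."""
    H, _ = cv2.findHomography(WORLD_XY, corner_pixels.astype(np.float32))
    return H
```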

Since the three photographs are taken from different angles, their camera poses differ. Applying the same procedure to each view gives three homographies H1, H2, H3 from the world coordinate system into the camera.

K can be recovered from the homography matrix H. Since H = [h1 h2 h3] = K[r1 r2|t], it follows that:

K^-1[h1 h2 h3] = [r1 r2|t]    (2)

In formula (2), the parameters K^-1, R, t and H are respectively the inverse of the camera intrinsic parameter matrix, the rotation matrix of the camera relative to the world coordinate system, the translation vector, and the homography; r1 and r2 are the first two columns of the rotation matrix, and h1, h2, h3 are the columns of the homography obtained for one view.

Here R = [r1 r2 r3] is a rotation matrix and therefore orthogonal, i.e. r1^T r2 = 0 and ‖r1‖ = ‖r2‖ = 1. Hence h1^T K^-T K^-1 h2 = 0, and furthermore:

h1^T K^-T K^-1 h1 = h2^T K^-T K^-1 h2    (3)

In formula (3), K^-T and K^-1 are the inverse transpose and the inverse of the camera intrinsic parameter matrix, h1 and h2 are the first two columns of a view's homography, and h1^T, h2^T are their transposes. Each view's homography thus contributes two constraint equations on the camera's intrinsic parameters.

The camera intrinsic parameter matrix K is upper triangular, and w = K^-T K^-1 is symmetric. From the three differently angled images of Figs. 2, 3 and 4, w is solved linearly by DLT, and K is then obtained by orthogonal decomposition. From formula (1), [r1 r2|t] = K^-1[h1 h2 h3]; combining the previously recovered h1, h2, h3 with K yields r1, r2 and t, and the orthogonality of the rotation matrix gives r3 = r1 × r2, so R = [r1 r2 r3]. This procedure yields the camera's intrinsic and extrinsic parameters K[R1|t1], K[R2|t2] and K[R3|t3] for the shots of Figs. 2, 3 and 4.
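
As a compact stand-in for the derivation above, the following sketch calibrates the camera from the four A4 corners detected in at least three views using OpenCV's planar calibration, which internally implements the same homography-based, Zhang-style scheme; fixing the distortion to zero is a choice of this sketch, since four coplanar points per view are too few to estimate it reliably:

```python
import numpy as np
import cv2

# World coordinates of the A4 corners A, B, C, D (Z = 0 plane), in mm.
A4_CORNERS = np.array([[0, 0, 0], [210, 0, 0], [0, 297, 0], [210, 297, 0]],
                      dtype=np.float32)

def calibrate_from_a4(corner_pixels_per_view, image_size):
    """corner_pixels_per_view: list (>= 3 views) of 4x2 float32 arrays of
    detected A4 corner pixels ordered A, B, C, D; image_size: (w, h)."""
    obj_pts = [A4_CORNERS] * len(corner_pixels_per_view)
    img_pts = [c.astype(np.float32) for c in corner_pixels_per_view]
    flags = (cv2.CALIB_ZERO_TANGENT_DIST | cv2.CALIB_FIX_K1 |
             cv2.CALIB_FIX_K2 | cv2.CALIB_FIX_K3)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, image_size, None, None, flags=flags)
    # Convert each Rodrigues vector to a rotation matrix R, paired with t.
    poses = [(cv2.Rodrigues(r)[0], t) for r, t in zip(rvecs, tvecs)]
    return K, poses  # intrinsics K and one [R|t] per view
```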

Referring to Fig. 6, a schematic diagram of the triangulation process for solving the spatial position information of feature points: taking the ankle point G in Figs. 3 and 4 (Image1 and Image2) as an example, with G's pixel positions x1 and x2 from step S200 and the camera parameters P1 = K1[R1|t1], P2 = K2[R2|t2] obtained in the calibration above, the sum of squared reprojection errors min Σi ‖Pi X − xi‖ is minimized to obtain the feature point's position in projective space, X = (M, N, O, w). Here P1 and P2 are the camera's intrinsic and extrinsic parameters when Image1 and Image2 were taken, K1 and K2 the intrinsic parameter matrices, R1 and R2 the rotation matrices relative to the world coordinate system, and t1 and t2 the translation vectors. Finally, homogenizing the projective-space coordinates gives the Euclidean-space position of the feature point G, X = (M/w, N/w, O/w) = (X, Y, Z), where M, N, O and w are G's coordinates in projective space.
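
A sketch of this triangulation-plus-homogenization step with OpenCV (the wrapper function and variable names are illustrative):

```python
import numpy as np
import cv2

def triangulate_point(K1, R1, t1, K2, R2, t2, x1, x2):
    """Triangulate one feature point from two views and dehomogenize.

    x1, x2: (x, y) pixel positions of the point in Image1 and Image2.
    Returns (X, Y, Z) in the world coordinate system.
    """
    P1 = K1 @ np.hstack([R1, np.reshape(t1, (3, 1))])  # 3x4 projections
    P2 = K2 @ np.hstack([R2, np.reshape(t2, (3, 1))])
    p1 = np.asarray(x1, dtype=float).reshape(2, 1)
    p2 = np.asarray(x2, dtype=float).reshape(2, 1)
    Xh = cv2.triangulatePoints(P1, P2, p1, p2)  # (M, N, O, w) projective
    M, N, O, w = Xh.ravel()
    return np.array([M / w, N / w, O / w])      # homogenization step
```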

Alternatively, in another preferred implementation of this embodiment, the three-dimensional feature extraction method shown in Fig. 1 may obtain the true spatial positions of the feature points to be measured as follows:

The three-dimensional reconstruction problem is converted into a sparse reconstruction problem over the feature points to be measured: an incremental SFM (structure-from-motion) method builds a sparse model, and triangulation solves the sparse reconstruction. Specifically, from the pixel positions (x, y) of the feature points in the several images obtained in step S200, and unlike the previous implementation, the incremental SFM method directly solves for the camera intrinsic parameter matrix K, the rotation matrix R, the translation t relative to world coordinates, and the feature-point coordinates λ(X, Y, Z) in the world coordinate system, skipping the reference-object calibration; a reference object of known specifications then determines the scale coefficient λ, giving the true spatial position coordinates (X, Y, Z) of the feature points. A possible implementation of solving the sparse reconstruction problem with the incremental SFM method, using three differently angled images as an example, is described below with reference to Fig. 7.

Referring to Fig. 7, a schematic diagram of the sparse reconstruction process for solving the spatial position information of feature points, the steps of solving the sparse reconstruction problem with the incremental SFM method are as follows (a code sketch follows step 6):

Step 1: Randomly select two images, Image1 and Image2, from the three angles to form the initial image pair, and use the incremental SFM method to compute initial values of the [R|t] matrices of the cameras that captured Image1 and Image2: from the five feature-point pairs in Image1 and Image2 (the longest-toe vertex, heel-convexity point, lateral point of the ball of the big toe, lateral point of the little-toe root, and ankle point), the five-point method computes the essential matrices E1 and E2 for Image1 and Image2, from which the camera rotation matrices R1, R2 and the translations t1, t2 relative to world coordinates can be decomposed. The initial sparse model is then built from the pixel positions, in the camera coordinate system, of the feature points in Image1 and Image2 obtained in step S200;

Step 2: From the initial sparse model built in step 1, compute by triangulation the position coordinates λ(X1, Y1, Z1) and λ(X2, Y2, Z2) of the feature points to be measured, in the world coordinate system, for Image1 and Image2;

Step 3: Input the pixel positions of the feature points of Image3 (obtained in step S200, in the camera coordinate system) into the initial sparse model from step 2, so that the camera parameters [R|t], i.e. the rotation matrix R3 and the translation t3 relative to world coordinates, can be re-estimated; use these parameters to correct the initial sparse model;

Step 4: From the sparse model corrected in step 3, compute by triangulation the spatial position coordinates λ(X3, Y3, Z3) of the feature points to be measured, in the world coordinate system, for Image3;

Step 5: Refine the position coordinates of the feature points obtained in steps 2 and 4 with the bundle adjustment (BA) method to obtain the optimized sparse model.

In step 5, bundle adjustment is applied repeatedly to the differing coordinate positions obtained for the feature points from the remaining images, until the error between the feature-point coordinates λ(X, Y, Z) computed in two successive iterations is no greater than a preset threshold.

Although the present invention only details the incremental-SFM solution of the feature points' spatial position information for three images, those skilled in the art will understand that the incremental SFM method extends to any number of differently angled images: while building the sparse model, the pixel positions of the feature points in each new image (in the camera coordinate system) are substituted in, the camera's intrinsic and extrinsic parameters are re-estimated, and the sparse model is corrected with them, until all images have been added to the sparse model. It will be appreciated that the more viewing angles are acquired, the more iterations are performed, the more accurate the recovered camera parameters, and the more accurate the feature points' world-coordinate positions computed from the resulting sparse model.

Step 6: With point A in Fig. 4 as the coordinate origin, use the pixel position of A4-paper vertex D in the camera coordinate system (obtained in step S200) and the sparse model from step 5 to compute D's spatial coordinates (M, N, 0). Since the true spatial position of vertex D is (210 mm, 297 mm, 0), the scale coefficient is λ = 210 mm / M = 297 mm / N. Scaling the feature-point world coordinates obtained in step 5 by this coefficient λ then yields the true spatial positions (X, Y, Z) of the feature points to be measured.
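
The following sketch compresses steps 1, 2 and 6 for the initial image pair, assuming the intrinsic matrix K is known and using OpenCV's five-point essential-matrix solver; the incremental addition of Image3 (steps 3 and 4) and the bundle adjustment of step 5 are omitted for brevity:

```python
import numpy as np
import cv2

def initial_pair_reconstruction(K, pts1, pts2, ref_model_coord,
                                ref_true_mm=210.0):
    """Steps 1, 2 and 6 of the incremental-SFM flow for the initial pair.

    pts1, pts2: Nx2 float arrays of matched feature-point pixels in Image1/2.
    ref_model_coord: model-space coordinate of a reference vertex along one
    A4 edge (e.g. M for vertex D); ref_true_mm: the true edge length in mm.
    """
    # Step 1: five-point method -> essential matrix, decomposed into R and t.
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    # Step 2: triangulate the feature points in the scale-free model frame.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    Xh = cv2.triangulatePoints(P1, P2,
                               pts1.T.astype(float), pts2.T.astype(float))
    X_model = (Xh[:3] / Xh[3]).T               # Nx3 model coordinates
    # Step 6: fix the scale with the reference object of known size.
    lam = ref_true_mm / ref_model_coord
    return X_model * lam                       # true positions in mm
```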

Step S400: based on the spatial position information and the preset three-dimensional feature category, compute the first distance information and/or the second distance information corresponding to a given feature point to be measured.

It should be noted that the first distance information is the distance between a given feature point and other feature points to be measured (e.g., a length), while the second distance information is the vertical distance between a given feature point and a preset plane (e.g., a height).

Specifically, taking the foot shape as the example, the spatial positions of the five feature points computed in step S300 are: the longest-toe vertex (X1, Y1, Z1), the heel-convexity point (X2, Y2, Z2), the lateral point of the ball of the big toe (X3, Y3, Z3), the lateral point of the little-toe root (X4, Y4, Z4), and the ankle point (X5, Y5, Z5). Using a distance formula such as the Euclidean distance

d = sqrt((Xa - Xb)^2 + (Ya - Yb)^2 + (Za - Zb)^2),

the following computations are obtained:

L = sqrt((X1 - X2)^2 + (Y1 - Y2)^2 + (Z1 - Z2)^2)
W = sqrt((X3 - X4)^2 + (Y3 - Y4)^2 + (Z3 - Z4)^2)
H = Z5    (4)

In formula (4), the parameters L, W and H are the foot length, foot width and ankle height respectively.
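
A small numeric sketch of formula (4); the dictionary keys naming the five feature points are assumptions of this sketch, and the world frame is taken with Z = 0 on the A4 plane:

```python
import numpy as np

def foot_parameters(points):
    """points: dict of 3D positions (in mm) recovered in step S300, with keys
    'toe' (longest-toe vertex), 'heel' (heel-convexity point), 'ball'
    (lateral ball of the big toe), 'root' (lateral little-toe root) and
    'ankle' (ankle point). Returns (L, W, H) per formula (4)."""
    dist = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
    L = dist(points['toe'], points['heel'])   # foot length (first distance)
    W = dist(points['ball'], points['root'])  # foot width (first distance)
    H = float(points['ankle'][2])             # ankle height above the plane
    return L, W, H
```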

In this way the three parameters foot length, foot width, and ankle-point height are obtained. Although the present invention only details the computation of these three parameters from extracted three-dimensional feature points, those skilled in the art will understand that other foot parameters can be computed in the same way. To compute instep height, for example, the images from the different angles must all contain the instep feature point, and the steps of the three-dimensional feature extraction method of the present invention described in the above embodiment are then applied in turn to compute the instep height.

In summary, in a preferred technical solution of the present invention, a photographing device captures images from five different angles containing the five feature points to be measured: the longest-toe vertex, heel-convexity point, lateral point of the ball of the big toe, lateral point of the little-toe root, and ankle point. The pixel position of each feature point in each image is determined by manual marking or by an automatic method; the true spatial positions of the feature points are then found either by manually calibrating the camera parameters and triangulating, or by solving a sparse reconstruction problem. No model of the entire target needs to be reconstructed, which reduces computation and simplifies model building. Finally, from the spatial positions of the five feature points, the Euclidean distance formula yields the three foot parameters: foot length, foot width, and ankle-point height. By analogy, acquiring images of other feature points from different angles allows the corresponding foot parameters to be computed; for example, from differently angled images containing the instep point, the instep point's spatial position information and hence the instep height can be computed by the steps above.

Further, based on the above method embodiments, the present invention also provides a storage device in which a plurality of programs are stored, the programs being adapted to be loaded by a processor to execute the machine-vision-based three-dimensional feature extraction method described in the above method embodiments.

Still further, based on the above method embodiments, the present invention also provides a control device comprising a processor and a storage device, wherein the storage device is adapted to store a plurality of programs, and the programs are adapted to be loaded by the processor to execute the machine-vision-based three-dimensional feature extraction method described in the above method embodiments.

So far, the technical solutions of the present invention have been described with reference to the preferred embodiments shown in the accompanying drawings. However, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principles of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will all fall within the protection scope of the present invention.

Claims (7)

1. A three-dimensional feature extraction method based on machine vision is characterized by comprising the following steps:
acquiring, through a mobile camera, multi-angle images containing a reference object and preset feature points to be detected of a target object arranged relative to the reference object, wherein the number of the multi-angle images is at least three;
extracting the position information of the feature point to be detected in each image, wherein the feature point to be detected is on a target object;
acquiring spatial position information of the feature points to be detected according to the position information of the feature points to be detected in each image; specifically, calibrating camera parameters by using the reference object, and calculating the spatial position information of the feature points to be detected by using a triangulation method;
calculating first distance information and/or second distance information corresponding to a certain feature point to be detected based on the spatial position information and a preset three-dimensional feature category;
the first distance information is distance information between the certain characteristic point to be measured and other characteristic points to be measured, and the second distance information is vertical distance information between the certain characteristic point to be measured and a preset plane; the certain feature point to be measured, the other feature points to be measured and the plane all depend on the three-dimensional feature category;
wherein the reference object is an object of known dimensions;
wherein the multi-angle image simultaneously comprises the reference object and the target object arranged relative to the reference object;
the step of "extracting the position information of the feature point to be detected in each image" is to obtain the pixel position (x, y) of the feature point to be detected in each image;
wherein model reconstruction is not performed on the whole target object, and the step of acquiring the spatial position information of the feature point to be detected according to the position information of the feature point to be detected in each image is to acquire the real spatial position (X, Y, Z) of the feature point to be detected.
2. The machine-vision-based three-dimensional feature extraction method according to claim 1, wherein the step of "extracting the position information of the feature point to be measured in each image" includes:
acquiring the pixel position of the characteristic point to be detected in a certain image by using a manual marking method;
and extracting the corresponding pixel positions of the feature points to be detected in other images by using a preset feature point matching method and according to the acquired pixel positions.
3. The machine-vision-based three-dimensional feature extraction method according to claim 1, wherein the step of "extracting the position information of the feature point to be measured in each image" includes:
acquiring the area shape corresponding to the area where the characteristic point to be detected in the target object is located;
acquiring a region to be detected corresponding to each image according to the region shape;
and acquiring the position information of the feature point to be detected in each image according to the relative position between the feature point to be detected and the shape of the region and each region to be detected.
4. The machine-vision-based three-dimensional feature extraction method according to claim 1, wherein the step of "extracting the position information of the feature point to be measured in each image" includes:
acquiring the position information of the feature point to be detected in each image by utilizing a pre-constructed neural network;
the neural network is a deep neural network which is based on a preset training set and trained by using a deep learning correlation algorithm.
5. The machine-vision-based three-dimensional feature extraction method according to any one of claims 1 to 4, wherein the step of acquiring spatial position information of the feature point to be measured from position information of the feature point to be measured in each of the images comprises:
and acquiring the position of the feature point to be detected in Euclidean space by using a triangulation method according to the position information of the feature point to be detected in each of the images and the internal and external parameters of the camera.
6. A storage device having stored therein a plurality of programs, characterized in that said programs are adapted to be loaded by a processor for performing the method of machine vision based three-dimensional feature extraction according to any of claims 1-5.
7. A control apparatus comprising a processor and a storage device adapted to store a plurality of programs, characterized in that the programs are adapted to be loaded by the processor to perform the machine vision based three-dimensional feature extraction method of any one of claims 1-5.
CN201811474153.4A 2018-12-04 2018-12-04 Method and device for 3D feature extraction based on machine vision Active CN109816724B (en)

Priority Applications (2)

CN201811474153.4A (priority date 2018-12-04, filing date 2018-12-04): CN109816724B (en), Method and device for 3D feature extraction based on machine vision
PCT/CN2019/105962: WO2020114035A1 (en), Three-dimensional feature extraction method and apparatus based on machine vision

Applications Claiming Priority (1)

CN201811474153.4A (priority date 2018-12-04, filing date 2018-12-04): CN109816724B (en), Method and device for 3D feature extraction based on machine vision

Publications (2)

Publication Number Publication Date
CN109816724A CN109816724A (en) 2019-05-28
CN109816724B true CN109816724B (en) 2021-07-23

Family

ID=66601919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811474153.4A Active CN109816724B (en) 2018-12-04 2018-12-04 Method and device for 3D feature extraction based on machine vision

Country Status (2)

Country Link
CN (1) CN109816724B (en)
WO (1) WO2020114035A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816724B (en) * 2018-12-04 2021-07-23 中国科学院自动化研究所 Method and device for 3D feature extraction based on machine vision
CN110133443B (en) * 2019-05-31 2020-06-16 中国科学院自动化研究所 Method, system and device for detection of power transmission line components based on parallel vision
CN110223383A (en) * 2019-06-17 2019-09-10 重庆大学 A kind of plant three-dimensional reconstruction method and system based on depth map repairing
CN110796705B (en) * 2019-10-23 2022-10-11 北京百度网讯科技有限公司 Model error elimination method, device, equipment and computer readable storage medium
CN112070883A (en) * 2020-08-28 2020-12-11 哈尔滨理工大学 A three-dimensional reconstruction method of 3D printing process based on machine vision
CN112487979B (en) * 2020-11-30 2023-08-04 北京百度网讯科技有限公司 Target detection method, model training method, device, electronic equipment and medium
CN112541936B (en) * 2020-12-09 2022-11-08 中国科学院自动化研究所 Method and system for determining visual information of actuator operating space
CN114113163B (en) * 2021-12-01 2023-12-08 北京航星机器制造有限公司 Automatic digital ray detection device and method based on intelligent robot
CN114299228A (en) * 2021-12-28 2022-04-08 浙江大学 A progressive high-precision human foot shape reconstruction method
CN114723977A (en) * 2022-04-08 2022-07-08 广东工业大学 Stable feature point identification method for visual SLAM system
CN114841959B (en) * 2022-05-05 2023-04-04 广州东焊智能装备有限公司 Automatic welding method and system based on computer vision
CN115112098B (en) * 2022-08-30 2022-11-08 常州铭赛机器人科技股份有限公司 Monocular vision one-dimensional two-dimensional measurement method
CN116672082B (en) * 2023-07-24 2024-03-01 苏州铸正机器人有限公司 Navigation registration method and device of operation navigation ruler
CN118010751B (en) * 2024-04-08 2025-01-03 杭州汇萃智能科技有限公司 Machine vision detection method and system for workpiece defect detection
CN118466390A (en) * 2024-04-29 2024-08-09 浙江永盛工具有限公司 Precision control system, method and medium for multi-specification drill production machine tools

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7580546B2 (en) * 2004-12-09 2009-08-25 Electronics And Telecommunications Research Institute Marker-free motion capture apparatus and method for correcting tracking error
CN106204727A (en) * 2016-07-11 2016-12-07 北京大学深圳研究生院 The method and device that a kind of foot 3-D scanning is rebuild
CN108305286B (en) * 2018-01-25 2021-09-07 哈尔滨工业大学深圳研究生院 Method, system and medium for 3D measurement of foot shape with multi-eye stereo vision based on color coding
CN109816724B (en) * 2018-12-04 2021-07-23 中国科学院自动化研究所 Method and device for 3D feature extraction based on machine vision

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04349583A (en) * 1991-05-27 1992-12-04 Nippon Telegr & Teleph Corp <Ntt> Generalized hough transform circuit
WO2002025592A2 (en) * 2000-09-22 2002-03-28 Hrl Laboratories, Llc Sar and flir image registration method
CN102376089A (en) * 2010-12-09 2012-03-14 深圳大学 Target correction method and system
CN102157013A (en) * 2011-04-09 2011-08-17 温州大学 System for fully automatically reconstructing foot-type three-dimensional surface from a plurality of images captured by a plurality of cameras simultaneously
CN102354457A (en) * 2011-10-24 2012-02-15 复旦大学 General Hough transformation-based method for detecting position of traffic signal lamp
CN105184857A (en) * 2015-09-13 2015-12-23 北京工业大学 Scale factor determination method in monocular vision reconstruction based on dot structured optical ranging
JP2017191022A (en) * 2016-04-14 2017-10-19 有限会社ネットライズ Method for imparting actual dimension to three-dimensional point group data, and position measurement of duct or the like using the same
CN106127258A (en) * 2016-07-01 2016-11-16 华中科技大学 A kind of target matching method
CN107767442A (en) * 2017-10-16 2018-03-06 浙江工业大学 A kind of foot type three-dimensional reconstruction and measuring method based on Kinect and binocular vision

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"A generalized Hough transform template and its applications in computer vision";Zhu T等;《Journal of Computational Information Systems》;20050930;第1卷(第3期);全文 *
"Trinocular stereovision by generalized Hough transform";Jun Shen等;《Intelligent Robots and Computer Vision XIV: Algorithms, Techniques, Active Vision, and Materials Handling》;19951003;全文 *
"低成本多目立体视觉脚型三维测量方法研究";秦绪功;《中国优秀硕士学位论文全文数据库·信息科技辑》;20180215;第2018年卷(第2期);第2章-第4章 *
"面向大型装备的工业摄影测量技术及实现";史传飞等;《航空制造技术》;20181001;第61卷(第19期);第24-30页 *
秦绪功."低成本多目立体视觉脚型三维测量方法研究".《中国优秀硕士学位论文全文数据库·信息科技辑》.2018,第2018年卷(第2期),第I138-2199页. *

Also Published As

Publication number Publication date
CN109816724A (en) 2019-05-28
WO2020114035A1 (en) 2020-06-11

Similar Documents

Publication Publication Date Title
CN109816724B (en) Method and device for 3D feature extraction based on machine vision
CN107767442B (en) Foot type three-dimensional reconstruction and measurement method based on Kinect and binocular vision
CN108305286B (en) Method, system and medium for 3D measurement of foot shape with multi-eye stereo vision based on color coding
US10460517B2 (en) Mobile device human body scanning and 3D model creation and analysis
US10813715B1 (en) Single image mobile device human body scanning and 3D model creation and analysis
CN107111833B (en) Fast 3D model adaptation and anthropometry
CN106164978B (en) The method and system of personalized materialization is constructed using deformable mesh is parameterized
US20130259403A1 (en) Flexible easy-to-use system and method of automatically inserting a photorealistic view of a two or three dimensional object into an image using a cd,dvd or blu-ray disc
Läbe et al. Automatic relative orientation of images
JP6571225B2 (en) Camera posture estimation method and system
US8326022B2 (en) Stereoscopic measurement system and method
CN106705849B (en) Calibrating Technique For The Light-strip Sensors
US11176738B2 (en) Method for calculating the comfort level of footwear
Zhang et al. A novel method for measuring the volume and surface area of egg
US10650584B2 (en) Three-dimensional modeling scanner
CN107610215B (en) A high-precision multi-angle oral three-dimensional digital imaging model construction method
CN105787464A (en) A viewpoint calibration method of a large number of pictures in a three-dimensional scene
US7046839B1 (en) Techniques for photogrammetric systems
CN112258538A (en) Method and device for acquiring three-dimensional data of human body
CN112002016A (en) Continuous curved surface reconstruction method, system and device based on binocular vision
JP2011022084A (en) Device and method for measuring three-dimensional pose
Skabek et al. Comparison of photgrammetric techniques for surface reconstruction from images to reconstruction from laser scanning
CN115409876A (en) Method for reconstructing three-dimensional foot model by using depth camera and measuring parameters
Ni et al. 3D reconstruction of small plant from multiple views
Ni et al. Plant or tree reconstruction based on stereo vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant