
CN105335399B - An information processing method and electronic device - Google Patents

An information processing method and electronic device

Info

Publication number
CN105335399B
CN105335399B (application CN201410344552.4A)
Authority
CN
China
Prior art keywords
match point
coordinate system
unit
image information
acquisition unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410344552.4A
Other languages
Chinese (zh)
Other versions
CN105335399A (en)
Inventor
申浩 (Shen Hao)
李南君 (Li Nanjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201410344552.4A
Publication of CN105335399A
Application granted
Publication of CN105335399B
Legal status: Active


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an information processing method and an electronic device. The method comprises: acquiring first image information of the current environment through an image acquisition unit; performing feature extraction on the first image information to obtain N different characteristic parameters, each characteristic parameter describing the first image information; retrieving in a preset image feature database using at least one of the characteristic parameters to obtain T matching results, T being an integer greater than 1; obtaining corresponding match points according to the T matching results and screening the match points to obtain a first screening set; and performing coordinate transformation and estimation on the match points in the first screening set to obtain the match points in the global space coordinate system.

Description

An information processing method and electronic device
Technical field
The present invention relates to electronic technology, and in particular to an information processing method and an electronic device.
Background technique
Vision-based localization technology has great application value and broad market prospects; for example, visual localization is needed while building a map, such as during simultaneous localization and mapping (SLAM, Simultaneous Localization and Mapping). Achieving stable, fast localization in large-scale scenes is the key to the successful application of visual localization technology.
Summary of the invention
In view of this, the embodiments of the present invention provide an information processing method and an electronic device capable of stable, fast localization in large-scale scenes.
The technical solutions of the embodiments of the present invention are implemented as follows:
In a first aspect, an embodiment of the present invention provides an information processing method applied to an electronic device having an image acquisition unit. The method comprises:
acquiring first image information of the current environment through the image acquisition unit;
performing feature extraction on the first image information to obtain N different characteristic parameters, each characteristic parameter describing the first image information;
retrieving in a preset image feature database using at least one of the characteristic parameters to obtain T matching results, T being an integer greater than 1;
obtaining corresponding match points according to the T matching results and screening the match points to obtain a first screening set;
performing coordinate transformation and estimation on the match points in the first screening set to obtain the match points in the global space coordinate system.
In a second aspect, an embodiment of the present invention provides an electronic device having an image acquisition unit. The electronic device further comprises a first acquisition unit, an extraction unit, a retrieval unit, a screening unit and a transformation unit, wherein:
the first acquisition unit is configured to acquire first image information of the current environment through the image acquisition unit;
the extraction unit is configured to perform feature extraction on the first image information to obtain N different characteristic parameters, each characteristic parameter describing the first image information;
the retrieval unit is configured to retrieve in a preset image feature database using at least one of the characteristic parameters to obtain T matching results, T being an integer greater than 1;
the screening unit is configured to obtain corresponding match points according to the T matching results and screen the match points to obtain a first screening set;
the transformation unit is configured to perform coordinate transformation and estimation on the match points in the first screening set to obtain the match points in the global space coordinate system.
The information processing method and electronic device provided in the embodiments of the present invention acquire first image information of the current environment through the image acquisition unit; perform feature extraction on the first image information to obtain N different characteristic parameters; retrieve in a preset image feature database using at least one of the characteristic parameters to obtain T matching results; obtain corresponding match points according to the T matching results and screen the match points to obtain a first screening set; and perform coordinate transformation and estimation on the match points in the first screening set to obtain the match points in the global space coordinate system. In this way, stable, fast localization can be achieved in large-scale scenes.
Detailed description of the invention
Fig. 1 is a schematic flowchart of the information processing method of Embodiment 1 of the present invention;
Fig. 2 is a schematic flowchart of the information processing method of Embodiment 2 of the present invention;
Fig. 3 is a schematic flowchart of the information processing method of Embodiment 3 of the present invention;
Fig. 4-1 is a schematic flowchart of the information processing method of Embodiment 4 of the present invention;
Fig. 4-2 is a diagram of the effect obtained with the related technology;
Fig. 4-3 is a diagram of the effect obtained with the embodiment of the present invention;
Fig. 5 is a schematic diagram of the composition of the electronic device of Embodiment 5 of the present invention;
Fig. 6 is a schematic diagram of the composition of the electronic device of Embodiment 6 of the present invention;
Fig. 7 is a schematic diagram of the composition of the electronic device of Embodiment 7 of the present invention;
Fig. 8 is a schematic diagram of the composition of the electronic device of Embodiment 8 of the present invention.
Specific embodiment
The following embodiments of the present invention are based on the following process: the image information acquired by the image acquisition unit is first processed to obtain characteristic parameters; feature matching is then performed in an image feature database using these characteristic parameters to obtain matching results; the match points corresponding to the matching results are then estimated using an estimation algorithm, so as to obtain the current position of the image acquisition unit and complete the visual localization process.
Here, the estimation algorithms generally include the least squares method, the random sample consensus (RANSAC, Random Sample Consensus) algorithm, and so on, where the least squares method includes the partial least squares method. When estimating the current position of the image acquisition unit with an estimation algorithm, the RANSAC algorithm can achieve a stable localization effect only when the noise-point ratio is below 40%, and the least squares method is suitable only when the noise-point ratio is even lower.
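To illustrate how RANSAC tolerates outliers, the following is a minimal sketch, not the patent's implementation: it estimates a pure 3-D translation between matched point sets by sampling a single correspondence per hypothesis and keeping the hypothesis with the most inliers. The `ransac_translation` helper and the 30% outlier ratio are assumptions made for the demonstration.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=0.05):
    """Estimate a 3-D translation mapping src -> dst despite outlier
    correspondences, by repeatedly sampling one pair and counting inliers."""
    rng = np.random.default_rng(0)
    best_t, best_inliers = None, 0
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: one correspondence
        t = dst[i] - src[i]                   # candidate translation
        residuals = np.linalg.norm(src + t - dst, axis=1)
        inliers = int((residuals < tol).sum())
        if inliers > best_inliers:            # keep the hypothesis with most support
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

rng = np.random.default_rng(1)
src = rng.uniform(-1.0, 1.0, (100, 3))        # points in the local frame
true_t = np.array([0.3, -0.2, 0.5])
dst = src + true_t                            # same points in the global frame
dst[:30] = rng.uniform(-5.0, 5.0, (30, 3))    # corrupt 30% of the matches
t, n_inliers = ransac_translation(src, dst)
print(n_inliers)                              # the 70 uncorrupted matches survive
```

With 30% corruption the translation is still recovered; as the text notes, once the noise ratio approaches 80% this style of voting breaks down, which is what motivates the screening step below.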
In visual localization in large-scale scenes, the current image information contains many image features, the image feature database contains a great deal of scene information, and there is a large amount of similar scene information. The noise-point ratio among the match points obtained from the matching results is therefore high, sometimes even above 80%, so that the estimation algorithm fails to localize. To address this problem, the embodiments of the present invention add a processing step for the obtained match points, namely: the match points are screened to obtain a screening set. This screening is intended to remove noise points, so that the match points can then be coordinate-transformed and estimated with an estimation algorithm, finally obtaining the match points in the global space coordinate system and completing the visual localization process.
The technical solutions of the present invention are further elaborated below with reference to the drawings and specific embodiments.
Embodiment one
An embodiment of the present invention provides an information processing method applied to an electronic device having an image acquisition unit. Fig. 1 is a schematic flowchart of the information processing method of Embodiment 1. As shown in Fig. 1, the method comprises:
Step 101: acquire first image information of the current environment through the image acquisition unit.
Here, the first image information refers to the image information about the current environment acquired by the image acquisition unit. The word "first" in "first image information" is only a nominal distinction and has no specific meaning; for example, "first image information" and "second image information" refer to two pieces of image information whose actual content may or may not be the same. The "first" in the "first screening set" mentioned below is used in the same way and is not explained again.
Here, the electronic device may be an intelligent robot (hereinafter, robot); a robot with an image acquisition unit is taken as a preferred example. In a specific implementation, the image acquisition unit may be a particular image acquisition device; for example, it may be a three-dimensional (3D, 3 Dimensions) camera electrically connected to the main body of the electronic device. The 3D camera may be an RGB-D sensor, where R, G and B stand for red (Red), green (Green) and blue (Blue), and D stands for depth (Depth); the most representative RGB-D sensor is Microsoft's Kinect 3D sensor. "RGB-D sensor" refers to a sensor that can simultaneously obtain the color information (RGB) and the depth information (Depth) of the environment.
Here, the embodiments of the present invention relate to the field of 3D scanning. When an intelligent robot performs a 3D scan of a target object, the image acquisition unit on the robot usually makes a full turn around the current environment to acquire the first image information of the current environment. In a specific implementation, this first image information is a group of image frames, each of which contains a large amount of point cloud data (point cloud). Each point contains three-dimensional coordinates, such as (x, y, z), indicating its spatial position; besides the spatial position, the point cloud data also contains color information (RGB), and some even contain intensity (Intensity) information. The color information is usually obtained by capturing a color image with a color camera and assigning the color of the pixel at the corresponding position to the corresponding point in the point cloud. The intensity information is the echo intensity collected by a laser sensor; it is related to the surface material, roughness and incidence angle of the target, as well as the emission energy and laser wavelength of the device.
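The RGB-D frame described above can be sketched as a colored point cloud by back-projecting each depth pixel through a pinhole camera model. The intrinsics (FX, FY, CX, CY) below are placeholder Kinect-like values, not taken from the patent.

```python
import numpy as np

# Placeholder pinhole intrinsics (assumed, Kinect-like).
FX = FY = 525.0
CX, CY = 319.5, 239.5

def frame_to_cloud(depth, rgb):
    """depth: (H, W) in metres; rgb: (H, W, 3) uint8 -> (N, 6) rows [x y z r g b]."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX          # back-project pixel (u, v) at depth z
    y = (v - CY) * depth / FY
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    cols = rgb.reshape(-1, 3).astype(float)
    valid = pts[:, 2] > 0              # drop pixels with no depth reading
    return np.hstack([pts[valid], cols[valid]])

depth = np.full((480, 640), 2.0)       # toy frame: a flat wall 2 m away
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
cloud = frame_to_cloud(depth, rgb)
print(cloud.shape)                     # (307200, 6)
```

Each row of the result is one point of the cloud: a spatial position (x, y, z) plus its color (RGB), matching the per-point layout the text describes.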
Step 102: perform feature extraction on the first image information to obtain N different characteristic parameters.
Here, each characteristic parameter describes the first image information.
Here, image features are explained first. An image feature is an interesting part of a digital image and an important carrier of image information. Feature extraction is a primary operation in image processing, that is, it is the first processing step performed on a piece of image information: it examines each pixel (hereinafter, point) to determine whether that pixel represents a feature. Feature extraction is the starting point of many image analyses, so its most important property is repeatability: the features extracted from different images of the same scene should be identical.
Image features generally include shape features, color features, texture features and spatial relationship features. Shape features include edges (edge), regions (patch), and so on. In the field of visual localization involved in the embodiments of the present invention, corners (corner) are usually used as the image feature, for two reasons: first, a corner is uniquely identifiable; second, a corner is stable, in other words, even a small movement of the point produces an obvious change. Other shape features, such as edges (edge) and regions (patch), vary little when described mathematically, so they are not distinctive enough.
Here, the characteristic parameter is closely related to the chosen image feature; the color feature is taken as an example. The color feature is a global feature that describes the surface properties of the scene corresponding to an image or image region. Common color-feature extraction methods include the color histogram, color moments and the color coherence vector, among which the color histogram is the most common representation of the color feature; its advantage is that it is unaffected by image rotation and translation and, after normalization, is also unaffected by changes in image scale. Taking the color histogram as an example to illustrate the characteristic parameter in step 102: when the color histogram is used, the characteristic parameter may be a color bin.
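A color-histogram descriptor of the kind just described can be sketched as follows. This is a minimal illustration; the choice of 8 bins per channel is an assumption, and the L1 normalization is what makes the descriptor independent of image scale, as the text notes.

```python
import numpy as np

def color_histogram(rgb, bins=8):
    """Concatenated per-channel histogram, L1-normalised so the descriptor
    does not depend on the number of pixels in the image."""
    hist = np.concatenate([
        np.histogram(rgb[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

img = np.zeros((10, 10, 3), dtype=np.uint8)
img[..., 0] = 255                   # a pure red image
h = color_histogram(img)
print(h.shape)                      # (24,): 8 bins per channel
```

Rotating or translating the image permutes pixels but not their colors, so this descriptor is unchanged, which is exactly the invariance claimed for the color histogram above.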
Step 103: retrieve in a preset image feature database using at least one of the characteristic parameters to obtain T matching results.
Here, T is an integer greater than 1; a matching result refers to a piece of scene information in the image feature database.
Here, step 103 is in fact a feature matching process. Preferably, a k-d tree (kd-tree, k-dimensional tree) may also be built from the characteristic parameters, and the k-d tree is then used to quickly retrieve the scene information similar to the characteristic parameters.
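The k-d-tree retrieval can be sketched with SciPy's `cKDTree`. The database size and descriptor dimension below are arbitrary stand-ins for the patent's image feature base, chosen only for the demonstration.

```python
import numpy as np
from scipy.spatial import cKDTree

# Stand-in for the image feature base: 10,000 stored 32-D descriptors.
rng = np.random.default_rng(0)
db = rng.normal(size=(10_000, 32))
tree = cKDTree(db)                     # build the k-d tree once

# Query with slightly perturbed copies of three stored descriptors.
queries = db[[5, 42, 7]] + 1e-4
dist, idx = tree.query(queries, k=1)   # nearest stored descriptor per query
print(idx.tolist())                    # [5, 42, 7]
```

Building the tree once and querying it many times is what makes retrieval fast enough for online localization; raising `k` in `tree.query` would return the T nearest matching results instead of one.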
Step 104: obtain corresponding match points according to the T matching results, and screen the match points to obtain a first screening set.
Here, a match point refers to a data point in the first image information that is matched against the scene information.
Step 105: perform coordinate transformation and estimation on the match points in the first screening set to obtain the match points in the global space coordinate system.
Here, two coordinate systems are involved in visual localization: the robot coordinate system and the global coordinate system. The robot coordinate system is also called the local coordinate system, and the global coordinate system is also called the global space coordinate system. The image information acquired by the robot is in the robot coordinate system, while the result of visual localization refers to the robot's absolute coordinates in the global coordinate system; in a three-dimensional environment these are written X = (x, y, z, θ), where x, y, z are the coordinates in the global coordinate system and θ is the pose (or viewing angle) at that point. In the initial state the robot coordinate system coincides with the global coordinate system, but as the robot moves, that is, essentially, as the image acquisition unit moves, the two no longer coincide; coordinate transformation and estimation are therefore needed to complete the visual localization.
In this embodiment of the present invention, after step 105, the method further comprises: completing visual localization using the match points in the global space coordinate system.
In this embodiment of the present invention, step 105 specifically comprises:
Step A1: estimate a transformation matrix using an estimation algorithm, the transformation matrix being used to transform the match points from the local coordinate system of the image acquisition unit to the corresponding global space coordinate system;
Step A2: transform the match points from the local coordinate system to the corresponding global space coordinate system according to the transformation matrix.
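Steps A1 and A2 can be sketched with the standard least-squares (Kabsch) solution for a rigid transform. This is one possible estimation algorithm under the least-squares family the text mentions, not necessarily the patent's exact procedure.

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))                   # match points, local frame
a = 0.4                                        # rotation about z by 0.4 rad
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true                      # same points, global frame

R, t = estimate_rigid_transform(P, Q)          # step A1: estimate the transform
P_global = P @ R.T + t                         # step A2: apply it to the points
print(np.allclose(P_global, Q))                # True
```

This closed-form solution is exact on noise-free correspondences; on real match points it minimizes the squared residual, which is why the screening steps that remove noise points matter so much for its accuracy.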
In this embodiment of the present invention, the estimation algorithms include the least squares method, the RANSAC algorithm and so on, where the least squares method includes the partial least squares method.
In this embodiment of the present invention, feature extraction includes feature detection and feature description. Methods of feature detection and description include scale-invariant feature transform (SIFT, Scale Invariant Feature Transform), speeded-up robust features (SURF, Speeded-Up Robust Features) and oriented BRIEF (ORB, Oriented BRIEF). Since feature extraction can be implemented with these related technologies, it is not described further in this embodiment.
The information processing method provided in this embodiment of the present invention acquires first image information of the current environment through the image acquisition unit; performs feature extraction on the first image information to obtain N different characteristic parameters; retrieves in a preset image feature database using at least one of the characteristic parameters to obtain T matching results; obtains corresponding match points according to the T matching results and screens them to obtain a first screening set; and performs coordinate transformation and estimation on the match points in the first screening set to obtain the match points in the global space coordinate system. The technical solution provided by the invention thus has the advantage that, by screening the match points obtained from the T matching results to obtain the first screening set, a large number of redundant noise points can be eliminated, so that stable, fast localization can be achieved in large-scale scenes.
Embodiment two
Based on Embodiment 1, an embodiment of the present invention provides an information processing method applied to an electronic device having an image acquisition unit. Fig. 2 is a schematic flowchart of the information processing method of Embodiment 2. As shown in Fig. 2, the method comprises:
Step 201: acquire first image information of the current environment through the image acquisition unit.
Step 202: perform feature extraction on the first image information to obtain N different characteristic parameters.
Here, each characteristic parameter describes the first image information.
Step 203: retrieve in a preset image feature database using at least one of the characteristic parameters to obtain T matching results.
Here, T is an integer greater than 1.
Step 204: obtain corresponding match points according to the T matching results, and screen the match points to obtain a first screening set.
Step 205: estimate a transformation matrix using an estimation algorithm, the transformation matrix being used to transform the match points from the local coordinate system of the image acquisition unit to the corresponding global space coordinate system.
Here, two coordinate systems are involved in visual localization: the robot coordinate system and the global coordinate system. The robot coordinate system is also called the local coordinate system, and the global coordinate system is also called the global space coordinate system. The image information acquired by the robot is in the robot coordinate system, while the result of visual localization refers to the robot's absolute coordinates in the global coordinate system; in a three-dimensional environment these are written X = (x, y, z, θ), where x, y, z are the coordinates in the global coordinate system and θ is the pose (or viewing angle) at that point. In the initial state the robot coordinate system coincides with the global coordinate system, but as the robot moves, that is, essentially, as the image acquisition unit moves, the two no longer coincide; coordinate transformation and estimation are therefore needed to complete the visual localization.
Step 206: transform the match points from the local coordinate system to the corresponding global space coordinate system according to the transformation matrix.
In this embodiment of the present invention, after step 206, the method further comprises: completing visual localization using the match points in the global space coordinate system.
In step 205 of this embodiment, the estimation algorithms include the least squares method, the RANSAC algorithm and so on, where the least squares method includes the partial least squares method.
Embodiment three
An embodiment of the present invention provides an information processing method applied to an electronic device having an image acquisition unit. Fig. 3 is a schematic flowchart of the information processing method of Embodiment 3. As shown in Fig. 3, the method comprises:
Step 301: acquire first image information of the current environment through the image acquisition unit.
Step 302: perform feature extraction on the first image information to obtain N different characteristic parameters.
Here, each characteristic parameter describes the first image information.
Step 303: retrieve in a preset image feature database using at least one of the characteristic parameters to obtain T matching results.
Here, T is an integer greater than 1.
Step 304: obtain corresponding match points according to the T matching results, and cluster the match points to obtain multiple clustering results.
Step 305: determine the clustering result containing the largest number of match points as the first screening set.
Here, steps 304 and 305 are one implementation of step 104 in Embodiment 1: a clustering algorithm is used to cluster the match points corresponding to the T matching results, yielding multiple clustering results. A clustering result refers to a cluster of several match points centered on a cluster center; the cluster is generally spherical. The clustering algorithm may be the k-means algorithm.
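Steps 304 and 305 can be sketched as follows: noisy match points are clustered with k-means and only the most populated cluster is kept. The point counts, noise model and choice of k = 3 are assumptions made for this illustration, not values from the patent.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# 80 geometrically consistent matches near one location, 20 scattered noise points.
rng = np.random.default_rng(0)
good = rng.normal(loc=[0.0, 0.0, 2.0], scale=0.2, size=(80, 3))
noise = rng.uniform(-10.0, 10.0, size=(20, 3))
pts = np.vstack([good, noise])

np.random.seed(0)                        # kmeans2 draws its seeds from np.random
_, labels = kmeans2(pts, 3, minit='++')  # step 304: cluster the match points
biggest = np.bincount(labels).argmax()   # step 305: most populated cluster
screened = pts[labels == biggest]        # the first screening set
print(len(screened))
```

Because correct matches agree on one location while noise is scattered, the densest cluster collects the correct matches, which is the intuition behind choosing the largest clustering result.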
Step 306: perform coordinate transformation and estimation on the match points in the first screening set to obtain the match points in the global space coordinate system.
In this embodiment of the present invention, step 306 specifically comprises:
Step A1: estimate a transformation matrix using an estimation algorithm, the transformation matrix being used to transform the match points from the local coordinate system of the image acquisition unit to the corresponding global space coordinate system;
Step A2: transform the match points from the local coordinate system to the corresponding global space coordinate system according to the transformation matrix.
In this embodiment of the present invention, after step 306, the method further comprises: completing visual localization using the match points in the global space coordinate system.
This embodiment of the present invention provides a way of screening the match points to obtain the first screening set: first, the match points are clustered to obtain multiple clustering results; then the clustering result containing the largest number of match points is determined as the first screening set. In this way, the technical solution screens the match points by clustering them, which eliminates a large number of redundant noise points and thus achieves stable, fast localization in large-scale scenes.
Embodiment four
Based on Embodiment 1, an embodiment of the present invention provides an information processing method applied to an electronic device having an image acquisition unit. Fig. 4-1 is a schematic flowchart of the information processing method of Embodiment 4. As shown in Fig. 4-1, the method comprises:
Step 401: acquire first image information of the current environment through the image acquisition unit.
Step 402: perform feature extraction on the first image information to obtain N different characteristic parameters.
Here, each characteristic parameter describes the first image information.
Here, feature extraction includes feature detection and feature description. Methods of feature detection and description include scale-invariant feature transform (SIFT, Scale Invariant Feature Transform), speeded-up robust features (SURF, Speeded-Up Robust Features) and oriented BRIEF (ORB, Oriented BRIEF). Since feature extraction can be implemented with these related technologies, it is not described further in this embodiment.
Step 403: retrieve in a preset image feature database using at least one of the characteristic parameters to obtain T matching results.
Here, T is an integer greater than 1.
Step 404: obtain corresponding match points according to the T matching results, and cluster the match points to obtain a first screening set.
Here, a clustering algorithm is used to cluster the match points corresponding to the T matching results, yielding multiple clustering results. A clustering result refers to a cluster of several match points centered on a cluster center; the cluster is generally spherical. The clustering algorithm may be the k-means algorithm.
Here, the first screening set obtained in step 404 is the inlier set after clustering.
Step 405: obtain the performance parameters of the image acquisition unit itself, and determine, according to the performance parameters, the spatial distribution range of the first image information collected by the image acquisition unit.
Here, the performance parameters of the image acquisition unit mainly refer to depth-value parameters; they may include the measurable range of the image acquisition unit and/or its effective range. In terms of the ranges covered, the measurable range contains the effective range. For example, the measurable depth range of the Kinect 3D camera is 0.4 m to 8 m, but in practice its effective depth range is about 0.8 m to 4 m, so the measurable range contains the effective range.
Here, the spatial distribution range is a range covered by the performance parameters. In general, the performance parameters of the image acquisition unit determine the spatial distribution range of the first image information it collects; the only choice is whether to use the measurable range, the effective range, or some range between the two. For example, given the Kinect 3D camera's measurable depth range of 0.4 m to 8 m and effective range of about 0.8 m to 4 m, the spatial distribution range of the first image information can be determined as 0.4 m to 8 m, or as 0.8 m to 4 m.
Step 406: using the spatial distribution range as the clustering boundary condition, adjust the first screening set to obtain the adjusted first screening set.
Specifically, the clustering result containing the largest number of match points is determined under the clustering boundary condition, and that clustering result is taken as the adjusted first screening set. This step corrects the inlier set obtained in step 404, thereby removing noise points.
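The boundary-condition adjustment can be sketched as a depth-range filter: a match whose camera-frame depth falls outside the sensor's working range cannot correspond to a real observation. The `apply_range_boundary` helper is hypothetical; the 0.4 m to 8 m limits are the Kinect measurable range quoted above.

```python
import numpy as np

# Kinect-style measurable depth range quoted in the text.
DEPTH_MIN, DEPTH_MAX = 0.4, 8.0

def apply_range_boundary(matches):
    """Keep only match points whose camera-frame depth (z) lies inside the
    sensor's working range; anything outside must be a noise point."""
    z = matches[:, 2]
    return matches[(z >= DEPTH_MIN) & (z <= DEPTH_MAX)]

pts = np.array([[0.0, 0.0, 2.0],    # plausible match
                [1.0, 0.5, 9.5],    # beyond the sensor's reach: noise
                [0.2, 0.1, 0.1]])   # closer than the sensor can measure
kept = apply_range_boundary(pts)
print(len(kept))                    # 1
```

Using the wider measurable range rather than the effective range makes the filter conservative: it never discards a point the sensor could genuinely have observed.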
Here, it should be noted that Embodiment 3 also involves a clustering boundary condition; there, the clustering boundary condition may be specified in advance.
Step 407: perform coordinate transformation and estimation on the match points in the adjusted first screening set to obtain the match points in the global space coordinate system.
In this embodiment of the present invention, the performance parameters include the measurable range.
Accordingly, determining the spatial distribution range of the image information collected by the image acquisition unit according to the performance parameters comprises:
determining the spatial distribution range of the image information collected by the image acquisition unit according to the maximum value of the measurable range.
Here, continuing the example in step 405: suppose the measurable range is 0.4 m to 8 m; then the spatial distribution range of the collected image information, determined as above, is 8 m. Determining the spatial distribution range from the maximum value of the measurable range guarantees the robustness of the technical solution and likewise achieves stable, fast visual localization in large-scale scenes.
In the embodiment of the present invention, after step 407, the method further includes: completing vision positioning using the match points under the global space coordinate system.
In the embodiment of the present invention, step 407 specifically includes:
Step A1, estimating a transformation matrix using an estimation algorithm, the transformation matrix being used to transform the match points from the local coordinate system of the image acquisition unit to the corresponding global space coordinate system;
Step A2, transforming the match points from the local coordinate system to the corresponding global space coordinate system according to the transformation matrix.
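Steps A1 and A2 can be sketched as below. This is a minimal least-squares (Kabsch/SVD) estimate under the assumption that the matched point pairs are already inliers; in practice a RANSAC loop would wrap this estimate, and the function names are illustrative:

```python
import numpy as np

def estimate_transform(local_pts, global_pts):
    """Step A1 (sketch): least-squares rigid transform via SVD (Kabsch),
    returning R, t such that global ~= R @ local + t."""
    P = np.asarray(local_pts, dtype=float)
    Q = np.asarray(global_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

def to_global(points, R, t):
    """Step A2 (sketch): map match points from the local frame to the
    global space coordinate system."""
    return np.asarray(points, dtype=float) @ R.T + t
```

Because the screening of steps 404-406 has already removed most noise points, a closed-form least-squares estimate like this one remains stable.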
In the embodiment of the present invention, the performance parameter of the image acquisition unit itself is used to determine the spatial distribution characteristic of the match point set, that is, the spatial distribution range of the match points, which serves as the cluster boundary condition for screening the obtained matched point pairs; the adjusted inlier set is obtained, and an estimation algorithm is then used to realize vision positioning. Compared with traditional technical solutions, the technical solution provided by the embodiment of the present invention has the following advantages: on the one hand, the interference of a large number of noise points with positioning accuracy can be removed; on the other hand, by filtering out a large number of noise points, the positioning time is reduced.
In the embodiment of the present invention, the related technology and the above technical solution were each applied to the same scene (an office area) for vision positioning, as shown in Fig. 4-2 and Fig. 4-3: Fig. 4-2 is a schematic diagram of the effect obtained using the related technology, and Fig. 4-3 is a schematic diagram of the effect obtained using the embodiment of the present invention. The difference between the related technology and the technical solution provided by the embodiment of the present invention is whether the match points are screened; the embodiment of the present invention screens the match points. In operation, 3D data collected by a 3D camera was used, 2084 frames of data in total (2084 frames each of RGB image information and depth image information); the scene information collected for the image feature database covers two office areas of about 1800 square meters, and the image feature database contains 677,000 features in total. Using the related technology, 326 positioning points in total were produced in Fig. 4-2; using the technical solution provided by the embodiment of the present invention, 1224 positioning points in total were produced in Fig. 4-3. It can be seen that, on the same test data set and in the same scene, the positioning success rate of the technical solution provided by the embodiment of the present invention is 3.7 times that of the related technology. Achieving stable, fast positioning in large-scale scenes is therefore the key point for the successful application of vision positioning technology.
Embodiment five
Based on the above information processing method, the embodiment of the present invention provides an electronic device having an image acquisition unit. Fig. 5 is a schematic diagram of the composition structure of the electronic device of embodiment five of the present invention; as shown in Fig. 5, the electronic device further includes a first acquisition unit 501, an extraction unit 502, a retrieval unit 503, a screening unit 504 and a converter unit 505, in which:
The first acquisition unit 501 is configured to obtain first image information in the current environment through the image acquisition unit;
The extraction unit 502 is configured to perform feature extraction on the first image information to obtain N different characteristic parameters, each characteristic parameter being used to describe the first image information;
The retrieval unit 503 is configured to perform retrieval in a preset image feature database using at least one of the characteristic parameters to obtain T matching results, T being an integer greater than 1;
The screening unit 504 is configured to obtain corresponding match points according to the T matching results, and to screen the match points to obtain a first screening set;
The converter unit 505 is configured to coordinate-transform and estimate the match points in the first screening set to obtain the match points under the global space coordinate system.
In the embodiment of the present invention, the electronic device further includes a positioning unit, configured to complete vision positioning using the match points under the global space coordinate system.
Here, the first image information refers to image information about the current environment obtained by the image acquisition unit; the "first" in "first image information" is merely a nominal distinction and has no specific meaning. For example, "first image information" and "second image information" refer to two pieces of image information, and in substance the first image information may be identical to or different from the second image information. The embodiment of the present invention also involves a first screening set below; the "first" in "first screening set" functions like the "first" in "first image information", and is therefore not described again.
Here, the electronic device may be an intelligent robot; a robot having an image acquisition unit serves as a preferable example. In specific implementation, the image acquisition unit may be a particular image capture device; for example, the image acquisition unit may be a three-dimensional camera electrically connected to the main body of the electronic device. In specific implementation, the three-dimensional camera may be an RGB-D sensor, where R, G and B in "RGB-D sensor" denote red, green and blue, and D denotes depth; the most representative RGB-D sensor is Microsoft's Kinect 3D sensor. An RGB-D sensor refers to a sensor that can simultaneously obtain the color information and the depth information of the environment.
Here, the embodiment of the present invention relates to the field of 3D scanning technology. When an intelligent robot performs 3D scanning of a target object, the image acquisition unit on the robot usually rotates a full circle around the current environment to obtain the first image information of the current environment. In specific implementation, the first image information is a group of image frames, each containing massive point cloud data; each point includes a three-dimensional coordinate such as (x, y, z) indicating its spatial position. Besides spatial position, point cloud data also includes color information, and some points even include intensity information. The color information is usually obtained by capturing a color image with a color camera and then assigning the color of the pixel at the corresponding position to the corresponding point in the point cloud. The intensity information is the strength of the echo collected by a laser sensor, and is related to the surface material, roughness and incident angle of the target, as well as to the emitted energy and laser wavelength of the device.
Here, image features are explained first. An image feature is an interesting part of a digital image and is an important part of expressing image information. Feature extraction is a primary operation in image processing, that is, it is the first calculation performed on image information: it checks each pixel (hereinafter referred to as a point) to determine whether the pixel represents a feature. Feature extraction is the starting point of many image analyses, so the most important property of feature extraction is repeatability: the features extracted from different images of the same scene should be identical.
Image features generally include shape features, color features, texture features and spatial relationship features. Shape features include edges, regions and the like. In the vision positioning field involved in the embodiment of the present invention, corner points are usually used as image features, for two reasons: first, a corner point is uniquely recognizable; second, a corner point is stable; in other words, a small movement of the point produces an apparent variation. Other shape features, such as edges and regions, vary less when described in the language of mathematics, so they are not distinctive enough as features.
Here, the characteristic parameter is closely related to the chosen image feature; the color feature is taken as the image feature for illustration. The color feature is a global feature describing the surface property of the scene or object corresponding to an image or an image region. Common methods for extracting color features include color histograms, color moments and color coherence vectors, among which the color histogram is the most common way to express a color feature; its advantage is that it is not affected by image rotation and translation, and after normalization it is also unaffected by changes of image scale. Taking the color histogram as an example to illustrate the characteristic parameter in the extraction unit 502: when the color histogram is used, the characteristic parameter may be the color bins.
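As an illustration of a color histogram serving as a characteristic parameter, the sketch below computes a normalized RGB histogram; the bin count and function name are illustrative choices, not values from the original:

```python
import numpy as np

def color_histogram(rgb_image, bins_per_channel=8):
    """Normalized RGB color histogram: a characteristic parameter
    describing the color distribution of an image region."""
    pixels = np.asarray(rgb_image).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=bins_per_channel,
                             range=[(0, 256)] * 3)
    hist = hist.ravel().astype(float)
    return hist / hist.sum()   # normalization removes scale effects
```

Normalization makes the histogram insensitive to image scale, matching the property noted above.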
Here, a matching result refers to scene information in the image feature database.
Here, the process executed by the retrieval unit 503 is actually a process of feature matching. Preferably, the retrieval unit 503 may also construct a k-d tree according to the characteristic parameter, and then use the k-d tree to quickly retrieve scene information similar to the characteristic parameter.
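The retrieval step amounts to a nearest-neighbor search over descriptor vectors. The sketch below uses a brute-force linear scan for clarity; for a feature database with hundreds of thousands of entries, a k-d tree would replace the scan. All names are illustrative:

```python
import numpy as np

def retrieve_matches(query_descriptors, database_descriptors, top_t=5):
    """Return indices of the top-T database entries closest (Euclidean)
    to each query descriptor; these play the role of the T matching
    results in the text."""
    Q = np.asarray(query_descriptors, dtype=float)
    D = np.asarray(database_descriptors, dtype=float)
    # pairwise squared distances, shape (len(Q), len(D))
    d2 = ((Q[:, None, :] - D[None, :, :]) ** 2).sum(axis=-1)
    return np.argsort(d2, axis=1)[:, :top_t]
```

Each row of the returned index array gives the T best-matching database entries for one characteristic parameter.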
Here, a match point refers to a data point in the first image information that is matched against the scene information.
In the embodiment of the present invention, the estimation algorithm includes the least squares method, the RANSAC algorithm and the like, where the least squares method includes the partial least squares method.
In the embodiment of the present invention, the feature extraction includes feature detection and feature description, where the methods of feature detection and feature description include the scale-invariant feature transform, speeded-up robust features, and binary simple descriptors; since the manner of extracting features can be realized by the above related technologies, it is not described again in this embodiment.
In the embodiment of the present invention, the first image information in the current environment is obtained by the image acquisition unit; feature extraction is performed on the first image information to obtain N different characteristic parameters; retrieval is performed in a preset image feature database using at least one of the characteristic parameters to obtain T matching results; corresponding match points are obtained according to the T matching results, and the match points are screened to obtain a first screening set; the match points in the first screening set are coordinate-transformed and estimated to obtain the match points under the global space coordinate system. In this way, the technical solution provided by the present invention has the advantage that, by screening the match points obtained according to the T matching results to obtain the first screening set, a large number of redundant noise points can be eliminated, so that stable, fast positioning can be realized in large-scale scenes.
Embodiment six
Based on the above embodiment five, the embodiment of the present invention provides an electronic device having an image acquisition unit. Fig. 6 is a schematic diagram of the composition structure of the electronic device of embodiment six of the present invention; as shown in Fig. 6, the electronic device further includes a first acquisition unit 601, an extraction unit 602, a retrieval unit 603, a screening unit 604 and a converter unit 605, wherein the converter unit 605 includes an estimation module 651 and a conversion module 652, in which:
The first acquisition unit 601 is configured to obtain first image information in the current environment through the image acquisition unit;
The extraction unit 602 is configured to perform feature extraction on the first image information to obtain N different characteristic parameters, each characteristic parameter being used to describe the first image information;
The retrieval unit 603 is configured to perform retrieval in a preset image feature database using at least one of the characteristic parameters to obtain T matching results, T being an integer greater than 1;
The screening unit 604 is configured to obtain corresponding match points according to the T matching results, and to screen the match points to obtain a first screening set;
The estimation module 651 is configured to estimate a transformation matrix using an estimation algorithm, the transformation matrix being used to transform the match points from the local coordinate system of the image acquisition unit to the corresponding global space coordinate system;
The conversion module 652 is configured to transform the match points from the local coordinate system to the corresponding global space coordinate system according to the transformation matrix.
In the embodiment of the present invention, the electronic device further includes a positioning unit, configured to complete vision positioning using the match points under the global space coordinate system.
Here, two coordinate systems exist in the vision positioning process: the robot coordinate system and the global coordinate system. The robot coordinate system is also known as the local coordinate system, and the global coordinate system is also known as the global space coordinate system. The image information acquired by the robot is image information under the robot coordinate system, while the result of vision positioning refers to the robot's absolute coordinates under the global coordinate system; in a three-dimensional environment this is denoted as X = (x, y, z, θ), where x, y, z denote the coordinates under the global coordinate system and θ denotes the posture (or viewing angle) at that point. In the initial state, the robot coordinate system coincides with the global coordinate system; but as the robot moves (which here essentially means the movement of the image acquisition unit), the robot coordinate system no longer coincides with the global coordinate system. It is therefore necessary to perform coordinate transformation and estimation to complete vision positioning.
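As an illustration of the coordinate relationship described above, the sketch below maps points from the robot (local) frame into the global frame given a pose X = (x, y, z, θ); treating θ as a yaw rotation about the z axis is an assumption made for this example:

```python
import numpy as np

def robot_to_global(points, pose):
    """Map points from the robot (local) coordinate system into the
    global coordinate system, given the robot pose X = (x, y, z, theta);
    theta is assumed here to be a yaw rotation about the z axis."""
    x, y, z, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return np.asarray(points, dtype=float) @ R.T + np.array([x, y, z])
```

In the initial state the pose is (0, 0, 0, 0) and the mapping is the identity, matching the statement that the two coordinate systems initially coincide.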
In the embodiment of the present invention, the estimation algorithm includes the least squares method, the RANSAC algorithm and the like, where the least squares method includes the partial least squares method.
Embodiment seven
Based on the above embodiments, the embodiment of the present invention provides an electronic device having an image acquisition unit. Fig. 7 is a schematic diagram of the composition structure of the electronic device of embodiment seven of the present invention; as shown in Fig. 7, the electronic device further includes a first acquisition unit 701, an extraction unit 702, a retrieval unit 703, a screening unit 704 and a converter unit 705, wherein the screening unit 704 includes a cluster module 741 and a determining module 742, in which:
The first acquisition unit 701 is configured to obtain first image information in the current environment through the image acquisition unit;
The extraction unit 702 is configured to perform feature extraction on the first image information to obtain N different characteristic parameters, each characteristic parameter being used to describe the first image information;
The retrieval unit 703 is configured to perform retrieval in a preset image feature database using at least one of the characteristic parameters to obtain T matching results, T being an integer greater than 1;
The cluster module 741 is configured to obtain corresponding match points according to the T matching results, and to cluster the match points to obtain multiple cluster results;
Specifically, clustering is performed on the match points corresponding to the T matching results using a clustering algorithm to obtain multiple cluster results; a cluster result refers to a cluster of several match points centered on a cluster center, and such a cluster is generally a spherical cluster. The clustering algorithm may be the k-means algorithm.
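The clustering and selection performed by the cluster module 741 and the determining module 742 can be sketched as below; the k-means loop is hand-rolled only to keep the example self-contained, and the value of k, the iteration count and the function names are illustrative:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means, returning one cluster label per point."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():    # skip empty clusters
                centers[j] = pts[labels == j].mean(axis=0)
    return labels

def first_screening_set(match_points, k=2):
    """Cluster the match points and keep the most populated cluster."""
    pts = np.asarray(match_points, dtype=float)
    labels = kmeans(pts, k)
    best = np.bincount(labels).argmax()
    return pts[labels == best]
```

With a dense set of correct match points and a few scattered noise points, the most populated cluster is the inlier set.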
The determining module 742 is configured to determine the cluster result containing the most match points as the first screening set.
The converter unit 705 is configured to coordinate-transform and estimate the match points in the first screening set to obtain the match points under the global space coordinate system.
In the embodiment of the present invention, the converter unit 705 further comprises an estimation module and a conversion module, in which:
The estimation module is configured to estimate a transformation matrix using an estimation algorithm, the transformation matrix being used to transform the match points from the local coordinate system of the image acquisition unit to the corresponding global space coordinate system;
The conversion module is configured to transform the match points from the local coordinate system to the corresponding global space coordinate system according to the transformation matrix.
In the embodiment of the present invention, the electronic device further includes a positioning unit, configured to complete vision positioning using the match points under the global space coordinate system.
In the embodiment of the present invention, a way of screening the match points to obtain the first screening set is provided: first, the match points are clustered to obtain multiple cluster results; then the cluster result containing the most match points is determined as the first screening set. In this way, the technical solution provided by the embodiment of the present invention realizes the screening of the match points by clustering them, so that a large number of redundant noise points can be eliminated, and stable, fast positioning can be realized in large-scale scenes.
Embodiment eight
Based on the above embodiments, the embodiment of the present invention provides an electronic device having an image acquisition unit. Fig. 8 is a schematic diagram of the composition structure of the electronic device of embodiment eight of the present invention; as shown in Fig. 8, the electronic device further includes a first acquisition unit 801, an extraction unit 802, a retrieval unit 803, a screening unit 804 and a converter unit 805, and the electronic device further includes a second acquisition unit 806 and an adjustment unit 807, in which:
The first acquisition unit 801 is configured to obtain first image information in the current environment through the image acquisition unit;
The extraction unit 802 is configured to perform feature extraction on the first image information to obtain N different characteristic parameters, each characteristic parameter being used to describe the first image information;
Here, the feature extraction includes feature detection and feature description, where the methods of feature detection and feature description include the scale-invariant feature transform, speeded-up robust features, and binary simple descriptors; since the manner of extracting features can be realized by the above related technologies, it is not described again in this embodiment.
The retrieval unit 803 is configured to perform retrieval in a preset image feature database using at least one of the characteristic parameters to obtain T matching results, T being an integer greater than 1;
The screening unit 804 is configured to obtain corresponding match points according to the T matching results, and to cluster the match points to obtain a first screening set;
Here, clustering is performed on the match points corresponding to the T matching results using a clustering algorithm to obtain multiple cluster results; a cluster result refers to a cluster of several match points centered on a cluster center, and such a cluster is generally a spherical cluster. The clustering algorithm may be the k-means algorithm.
The second acquisition unit 806 is configured to obtain the performance parameter of the image acquisition unit itself, and to determine, according to the performance parameter, the distribution space range of the first image information collected by the image acquisition unit;
Here, the performance parameter of the image acquisition unit itself mainly refers to the performance parameter of the depth value; the performance parameter may include the measurable range of the image acquisition unit and/or the effective range of the image acquisition unit. In terms of the size of the range covered, the measurable range includes the effective range. For example, the measurable range of the depth value of a Kinect 3D camera is 0.4m-8m, but in application the effective range of the depth value of a Kinect 3D camera is about 0.8m-4m; it can be seen that the measurable range includes the effective range.
Here, the distribution space range is a range bounded by the performance parameter. In general, the performance parameter of the image acquisition unit itself determines the distribution space range of the first image information collected by the image acquisition unit; what differs is whether the measurable range, the effective range, or some range lying between the measurable range and the effective range is used. For example, the measurable range of the depth value of a Kinect 3D camera is 0.4m-8m and its effective range is about 0.8m-4m, so the distribution space range of the first image information may be determined as 0.4m-8m, or as 0.8m-4m.
The adjustment unit 807 is configured to adjust the first screening set using the distribution space range as the cluster boundary condition, to obtain the adjusted first screening set;
Specifically, the cluster result containing the most match points is determined under the cluster boundary condition, and that cluster result is determined as the adjusted first screening set. The adjustment unit 807 corrects the inlier set obtained in the screening unit 804, thereby removing noise points.
Here, it should be noted that embodiment three also involves a cluster boundary condition; the cluster boundary condition in embodiment three may be specified in advance.
The converter unit 805 is further configured to coordinate-transform and estimate the match points in the adjusted first screening set to obtain the match points under the global space coordinate system.
In the embodiment of the present invention, the performance parameter includes the measurable range;
Accordingly, the second acquisition unit 806 is further configured to determine, according to the maximum value in the measurable range, the distribution space range of the image information collected by the image acquisition unit.
Here, continuing the example in the second acquisition unit 806: assuming the measurable range is 0.4m-8m, the distribution space range of the image information collected by the image acquisition unit, determined in the above manner, is 8m. Using the maximum value in the measurable range as the distribution space range of the image information collected by the image acquisition unit guarantees the robustness of the technical solution provided by the embodiment of the present invention, and also enables stable, fast vision positioning in large-scale scenes.
In the embodiment of the present invention, the converter unit 805 further comprises an estimation module and a conversion module, in which:
The estimation module is configured to estimate a transformation matrix using an estimation algorithm, the transformation matrix being used to transform the match points from the local coordinate system of the image acquisition unit to the corresponding global space coordinate system;
The conversion module is configured to transform the match points from the local coordinate system to the corresponding global space coordinate system according to the transformation matrix.
In the embodiment of the present invention, the electronic device further includes a positioning unit, configured to complete vision positioning using the match points under the global space coordinate system.
In the embodiment of the present invention, the performance parameter of the image acquisition unit itself is used to determine the spatial distribution characteristic of the match point set, that is, the spatial distribution range of the match points, which serves as the cluster boundary condition for screening the obtained matched point pairs; the adjusted inlier set is obtained, and an estimation algorithm is then used to realize vision positioning. Compared with traditional technical solutions, the technical solution provided by the embodiment of the present invention has the following advantages: on the one hand, the interference of a large number of noise points with positioning accuracy can be removed; on the other hand, by filtering out a large number of noise points, the positioning time is reduced.
In the embodiment of the present invention, the related technology and the above technical solution were each applied to the same scene (an office area) for vision positioning, as shown in Fig. 4-2 and Fig. 4-3: Fig. 4-2 is a schematic diagram of the effect obtained using the related technology, and Fig. 4-3 is a schematic diagram of the effect obtained using the embodiment of the present invention. The difference between the related technology and the technical solution provided by the embodiment of the present invention is whether the match points are screened; the embodiment of the present invention screens the match points. In operation, 3D data collected by a 3D camera was used, 2084 frames of data in total (2084 frames each of RGB image information and depth image information); the scene information collected for the image feature database covers two office areas of about 1800 square meters, and the image feature database contains 677,000 features in total. Using the related technology, 326 positioning points in total were produced in Fig. 4-2; using the technical solution provided by the embodiment of the present invention, 1224 positioning points in total were produced in Fig. 4-3. It can be seen that, on the same test data set and in the same scene, the positioning success rate of the technical solution provided by the embodiment of the present invention is 3.7 times that of the related technology. Achieving stable, fast positioning in large-scale scenes is therefore the key point for the successful application of vision positioning technology.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be realized in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other division manners in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may all be integrated into one processing unit, or each unit may serve individually as one unit, or two or more units may be integrated into one unit; the integrated unit may be realized in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions; the aforementioned program can be stored in a computer-readable storage medium, and when the program is executed, the steps of the above method embodiments are executed. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
Alternatively, if the above integrated unit of the present invention is realized in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiment of the present invention, in essence, or the part contributing to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the method of each embodiment of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk or an optical disc.
The above description is merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any person familiar with the art can easily think of changes or replacements within the technical scope disclosed by the present invention, and these should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention should be based on the protection scope of the claims.

Claims (8)

1. An information processing method, applied to an electronic device having an image acquisition unit, the method comprising:
obtaining first image information in a current environment through the image acquisition unit;
performing feature extraction on the first image information to obtain N different characteristic parameters, each characteristic parameter being used to describe the first image information;
performing retrieval in a preset image feature database using at least one of the characteristic parameters to obtain T matching results, T being an integer greater than 1;
obtaining corresponding match points according to the T matching results, and screening the match points to obtain a first screening set;
obtaining a performance parameter of the image acquisition unit itself, and determining, according to the performance parameter, a distribution space range of the first image information collected by the image acquisition unit;
using the distribution space range as a cluster boundary condition, adjusting the first screening set to obtain an adjusted first screening set;
coordinate-transforming and estimating the match points in the adjusted first screening set to obtain match points under a global space coordinate system.
2. The method according to claim 1, wherein performing coordinate transformation and estimation on the match points in the adjusted first screening set, to obtain match points in the global space coordinate system, comprises:
estimating a transformation matrix using an estimation algorithm, the transformation matrix being used to transform the match points from a local coordinate system of the image acquisition unit to a corresponding global space coordinate system;
transforming the match points from the local coordinate system to the corresponding global space coordinate system according to the transformation matrix.
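As an illustration of the transformation step recited in claims 1 and 2, the sketch below maps match points from the local coordinate system of the image acquisition unit into the global space coordinate system. For simplicity it assumes the two frames differ only by a translation estimated from known point correspondences; the claims' "estimation algorithm" and full transformation matrix (for example, a rigid transform fitted by a sample-consensus method) are not fixed here, and all function names are hypothetical.

```python
def estimate_transform(local_pts, global_pts):
    """Estimate the local->global transform from known correspondences.
    Illustrative simplification: assume the two coordinate systems differ
    only by a translation, estimated as the mean offset over all pairs
    (the least-squares solution for a pure translation)."""
    n = len(local_pts)
    return tuple(
        sum(g[i] - l[i] for l, g in zip(local_pts, global_pts)) / n
        for i in range(3)
    )

def apply_transform(t, pts):
    """Map match points from the local coordinate system of the image
    acquisition unit into the global space coordinate system."""
    return [tuple(p[i] + t[i] for i in range(3)) for p in pts]

# two correspondences fix the (assumed pure-translation) transform
local_pts = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5)]
global_pts = [(10.0, -3.0, 1.0), (11.0, -1.0, 1.5)]
t = estimate_transform(local_pts, global_pts)
print(apply_transform(t, [(2.0, 2.0, 2.0)]))  # -> [(12.0, -1.0, 3.0)]
```

A full implementation would estimate rotation as well (e.g. a rigid or similarity transform), but the two-step structure — estimate the transform, then apply it to every surviving match point — matches the estimation module and transformation module of claim 6.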
3. The method according to claim 1 or 2, wherein screening the match points to obtain the first screening set comprises:
clustering the match points to obtain a plurality of clustering results;
determining the clustering result containing the largest number of match points as the first screening set.
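The screening of claim 3 — cluster the match points, then keep the cluster with the most members — can be sketched as follows. The claims do not fix a particular clustering algorithm; the grid-cell clustering below is only an assumed stand-in chosen for brevity.

```python
from collections import defaultdict

def screen_match_points(points, cell=1.0):
    """Cluster 3-D match points into axis-aligned grid cells of side
    `cell` and return the cluster containing the most points -- the
    'first screening set' of claim 3.  The grid clustering is an
    illustrative substitute for whatever clustering the method uses."""
    clusters = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)  # cell index per axis
        clusters[key].append(p)
    # keep the clustering result with the largest number of match points
    return max(clusters.values(), key=len)

points = [(0.1, 0.2, 0.0), (0.3, 0.1, 0.1), (0.2, 0.4, 0.0),  # dense cluster
          (5.0, 5.0, 5.0)]                                     # isolated outlier
first_set = screen_match_points(points)
print(len(first_set))  # -> 3 (the outlier is screened out)
```

Majority-cluster selection of this kind discards spatially isolated mismatches before the coordinate transformation, which is the point of producing the first screening set.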
4. The method according to claim 1, wherein the performance parameter comprises a measurable range;
correspondingly, determining, according to the performance parameter, the distribution space range of the first image information acquired by the image acquisition unit comprises:
determining, according to a maximum value in the measurable range, the distribution space range of the first image information acquired by the image acquisition unit.
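The boundary condition of claims 1 and 4 can be illustrated as a simple range test: a match point lying farther from the image acquisition unit than the maximum of its measurable range cannot correspond to validly acquired image information and is excluded from the adjusted first screening set. The Euclidean-distance-from-origin test below is an assumed concretization of the "distribution space range".

```python
def within_distribution_range(point, max_range):
    """Return True if a 3-D match point (in the local coordinate
    system, origin at the image acquisition unit) lies within the
    sensor's maximum measurable range -- the clustering boundary
    condition used to adjust the first screening set."""
    dist = sum(c * c for c in point) ** 0.5  # Euclidean distance
    return dist <= max_range

print(within_distribution_range((1.0, 2.0, 2.0), 4.0))   # -> True (dist 3.0)
print(within_distribution_range((10.0, 0.0, 0.0), 4.0))  # -> False
```

Applying this predicate to every point of the first screening set yields the adjusted first screening set that the transformation step then operates on.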
5. An electronic device having an image acquisition unit, the electronic device further comprising a first acquiring unit, an extracting unit, a retrieving unit, a screening unit, a second acquiring unit, an adjusting unit, and a transforming unit, wherein:
the first acquiring unit is configured to acquire first image information in a current environment through the image acquisition unit;
the extracting unit is configured to perform feature extraction on the first image information to obtain N different feature parameters, each feature parameter being used to describe the first image information;
the retrieving unit is configured to perform retrieval in a preset image feature database using at least one of the feature parameters, to obtain T matching results, T being an integer greater than 1;
the screening unit is configured to obtain corresponding match points according to the T matching results, and to screen the match points to obtain a first screening set;
the second acquiring unit is configured to acquire a performance parameter of the image acquisition unit itself, and to determine, according to the performance parameter, a distribution space range of the first image information acquired by the image acquisition unit;
the adjusting unit is configured to adjust the first screening set using the distribution space range as a clustering boundary condition, to obtain an adjusted first screening set;
the transforming unit is configured to perform coordinate transformation and estimation on the match points in the adjusted first screening set, to obtain match points in a global space coordinate system.
6. The electronic device according to claim 5, wherein the transforming unit comprises an estimation module and a transformation module, wherein:
the estimation module is configured to estimate a transformation matrix using an estimation algorithm, the transformation matrix being used to transform the match points from a local coordinate system of the image acquisition unit to a corresponding global space coordinate system;
the transformation module is configured to transform the match points from the local coordinate system to the corresponding global space coordinate system according to the transformation matrix.
7. The electronic device according to claim 5 or 6, wherein the screening unit comprises a clustering module and a determination module, wherein:
the clustering module is configured to obtain corresponding match points according to the T matching results, and to cluster the match points to obtain a plurality of clustering results;
the determination module is configured to determine the clustering result containing the largest number of match points as the first screening set.
8. The electronic device according to claim 5, wherein the performance parameter comprises a measurable range;
correspondingly, the electronic device further comprises a determination unit configured to determine, according to a maximum value in the measurable range, the distribution space range of the first image information acquired by the image acquisition unit.
CN201410344552.4A 2014-07-18 2014-07-18 Information processing method and electronic device Active CN105335399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410344552.4A CN105335399B (en) 2014-07-18 2014-07-18 Information processing method and electronic device


Publications (2)

Publication Number Publication Date
CN105335399A CN105335399A (en) 2016-02-17
CN105335399B true CN105335399B (en) 2019-03-29

Family

ID=55285936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410344552.4A Active CN105335399B (en) Information processing method and electronic device

Country Status (1)

Country Link
CN (1) CN105335399B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106355197A (en) * 2016-08-24 2017-01-25 广东宝乐机器人股份有限公司 Navigation image matching filtering method based on K-means clustering algorithm

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106688A (en) * 2013-02-20 2013-05-15 北京工业大学 Indoor three-dimensional scene rebuilding method based on double-layer rectification method
CN103530881A (en) * 2013-10-16 2014-01-22 北京理工大学 Outdoor augmented reality mark-point-free tracking registration method applicable to mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011039974A (en) * 2009-08-18 2011-02-24 Kddi Corp Image search method and system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Matching with PROSAC - progressive sample consensus; O. Chum et al.; 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; 2005-07-25; pp. 1-7
Research on vision-based localization algorithms for unmanned aerial vehicle flight; Ma Yuan et al.; Electronics Optics & Control; Nov. 2013; Vol. 20, No. 11; pp. 42-46



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant