CN111259843B - Multimedia navigator testing method based on visual stability feature classification registration - Google Patents
Multimedia navigator testing method based on visual stability feature classification registration
- Publication number
- CN111259843B CN111259843B CN202010070309.3A CN202010070309A CN111259843B CN 111259843 B CN111259843 B CN 111259843B CN 202010070309 A CN202010070309 A CN 202010070309A CN 111259843 B CN111259843 B CN 111259843B
- Authority
- CN
- China
- Prior art keywords
- feature
- image
- registration
- multimedia
- navigator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C25/00—Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
Abstract
A multimedia navigator testing method based on visual stability feature classification and registration relates to multimedia navigator testing methods. The invention aims to solve the problem that image detail information cannot be captured because the background is too bright during testing of a multimedia navigation display interface. The method comprises the following specific steps: step one, prepare a temporary sample library in advance, perform an initial sample collection of the target, and name and store the pictures in the corresponding order; step two, perform SURF feature point detection on all collected samples; step three, randomly extract part of the feature point sets; step four, complete the training phase; step five, adopt a multi-scene voting scheme; step six, match all feature points in the currently acquired image against all feature points of the matched image in the sample library with a fast approximate nearest-neighbor search algorithm; and step seven, let the camera capture frames continuously at a fixed frame rate. The invention belongs to the technical field of communication.
Description
Technical Field
The invention relates to a multimedia navigator testing method, in particular to one based on visual stability feature classification and registration, and belongs to the technical field of communication.
Background
The development of computer technology has pushed human production and daily life steadily toward intelligence and visualization. Computer vision is applied ever more widely across industrial fields, giving machines human-like visual systems so that they can handle complex tasks more intelligently and flexibly. Digital image processing is one way to realize computer vision, but traditional image processing algorithms are strongly affected by illumination changes and noise interference and lack robust anti-interference capability. For scenes or objects that emit their own light, as in the background of this patent, it is difficult for a single algorithm to remain highly adaptable to all targets and tasks, and online real-time processing in particular performs poorly because of the many uncertain factors involved. Even a slight change in lighting that looks insignificant to the human eye raises every pixel value substantially when analyzed at the pixel level, and globally the average level of the image changes markedly; such variation can be decisive for the final processing result. For this reason, image processing algorithms that start from global statistics are unstable, and their adaptability to different environments rarely meets practical requirements. Considered locally, however, random disturbances or changes within a controllable range can hardly annihilate stable local features, so this design mainly adopts the approach of extracting and analyzing local feature information.
Image registration is a technique for retrieving and recognizing unknown scenes and is an important technical means of realizing tasks such as visual navigation and image information retrieval. The core of registration is generally the detection and matching of features. Descriptors such as SIFT and SURF can extract the local key feature information of an image even under heavy interference, so registration still works well in complex environments. Feature detection is a fundamental algorithm in computer vision and image processing: it processes image information with computer algorithms, decides from the digital image whether each pixel belongs to a feature of the image, or divides the points of the image into different subsets and determines whether each belongs to an isolated point, a continuous curve, or a continuous region. It performs local analysis at certain positions in the image rather than examining the whole image, and a sufficiently stable local feature is judged to be a key point of the image. Feature detection is widely used in target recognition, image registration, visual tracking, three-dimensional reconstruction, and related fields.
As described above, one of the difficulties in detecting the luminous targets addressed by this patent is brightness variation. An ordinary target only reflects external light, so its brightness is determined by the ambient light level and does not change much over a short time. A self-luminous target, however, may become darker or brighter, or alternate between bright and dark, at any moment, so the quality of the acquired images is uneven: the brightness captured at one instant may be appropriate while the next frame is overexposed. One solution is to adjust the exposure time automatically, but frequent adjustment over a short period is a severe challenge for the camera, strongly degrades system reliability in long-term industrial use, and complicates the control flow. Considering these factors, we instead set the camera exposure to a lower level, ensuring that imaging at the target's maximum brightness still maintains high quality and that local key information is preserved at lower brightness even though global quality is lost. On this basis, the specific position of the luminous object in a dark environment can be recovered by computing the homography matrix from the relative positions of matched key points in the real-time image and the sample image, even when a person without judgment experience could not locate the target from the image.
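The homography computation mentioned above can be illustrated with a minimal direct linear transform (DLT) sketch in NumPy. The patent does not specify its implementation (in practice a RANSAC-wrapped routine such as OpenCV's findHomography would be used); the four correspondences below are synthetic and the function names are our own:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT.

    src, dst: (N, 2) arrays of matched key-point coordinates, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular value).
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so that H[2, 2] == 1

def project(H, pts):
    """Apply H to (N, 2) points in homogeneous coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]

# Synthetic example: a known transform recovered from 4 correspondences.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.0, 0.9, -3.0],
                   [1e-4, 0.0, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0]])
dst = project(H_true, src)
H_est = estimate_homography(src, dst)
```

With noisy real matches, more than four correspondences and a RANSAC loop around this estimator would be needed, which is what step six of the method below calls for.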
Feature point detection mainly comprises extracting local feature points, describing them, and matching them. The best-known and most widely applied feature detection algorithms include the Scale-Invariant Feature Transform (SIFT) and the Speeded-Up Robust Feature (SURF). Both accomplish the task of extracting features from an image. The former approximates the LoG by the DoG, builds a scale space in which local extreme points are located and refined, and describes each feature with a 128-dimensional vector; its handling of feature point detail is very thorough. The latter is optimized for speed and is much faster than SIFT, while remaining acceptably stable with respect to the scale and rotation invariance of images. Compared with SIFT it greatly improves execution efficiency: in addition to using the Hessian matrix for detection, it reduces the dimensionality of the key point description, using only a 64-dimensional vector per feature. Feature vectors, also called feature descriptors, store information about a key point and its surrounding neighborhood. The description of a feature point does not contain its own information in isolation but embodies a complete description of a local range; each feature point forms a multi-dimensional feature vector that describes the feature and all of its surroundings in detail.
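As a concrete illustration of descriptor matching, the sketch below matches 64-dimensional descriptors by nearest neighbour with Lowe's ratio test; the descriptors are synthetic random vectors, and the 0.7 ratio threshold is a conventional choice, not taken from the patent:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.7):
    """Match 64-dim descriptors by nearest neighbour with Lowe's ratio test.

    desc_a, desc_b: (N, 64) and (M, 64) float arrays.
    Returns a list of (i, j) index pairs judged to be good matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)  # distance to each candidate
        order = np.argsort(dist)
        best, second = order[0], order[1]
        # Keep the match only if it is clearly better than the runner-up.
        if dist[best] < ratio * dist[second]:
            matches.append((i, int(best)))
    return matches

# Toy data: a bank of three 64-dim "descriptors" and noisy copies of two.
rng = np.random.default_rng(0)
bank = rng.normal(size=(3, 64))
probes = bank[:2] + 0.01 * rng.normal(size=(2, 64))
good = match_descriptors(probes, bank)
```

A fast approximate nearest-neighbor index (e.g. FLANN, as the method uses in step six) replaces the exhaustive distance computation when the sample library is large.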
To improve the efficiency of the control algorithm, raise the detection and recognition speed further, and compare the performance of different algorithms experimentally, the SURF algorithm is adopted. At the same time, the algorithm is improved to increase the detection precision and accuracy of the system: the traditional approaches of brute-force matching of key points, or matching and recognizing them directly with a fast approximate nearest-neighbor search algorithm, are not used. Traditional feature point matching requires that, after each sampling, the extracted feature points be matched one-to-one against the N pictures in the sample library, and the best-matching sample is taken as the recognition result; but the time overhead of this scheme grows very large as the number of scene types increases, and it can hardly meet the requirement of online real-time detection. The alternative idea is to classify all feature points according to their scene labels, train an SVM decision maker that performs well on this classification task from the sample data features, rapidly classify the feature points of a newly collected scene or target, and judge whether the current scene exists in the scene library; if so, the best match is returned. The Support Vector Machine (SVM) mentioned here is a generalized linear classifier that performs binary classification of data by supervised machine learning; its decision boundary is the maximum-margin separating hyperplane solved from the learning samples. Its core idea is to classify the samples linearly. Although most tasks are linearly inseparable in the original space, a nonlinear classification effect can be obtained by mapping the input data to a high-dimensional space with a nonlinear function and then applying the hyperplane selected by a linear SVM.
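The kernel idea at the end of the paragraph can be seen in miniature on the XOR problem, which is not linearly separable in the plane but becomes separable after a nonlinear feature map; the map and the hand-picked hyperplane below are purely illustrative (a real SVM would learn the weights from data):

```python
import numpy as np

# XOR: not linearly separable in the original 2-D space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

def lift(p):
    """Map (x1, x2) -> (x1, x2, x1*x2): a nonlinear feature map under which
    XOR becomes linearly separable."""
    return np.array([p[0], p[1], p[0] * p[1]])

# In the lifted space the hyperplane x1 + x2 - 2*x1*x2 = 0.5 separates the
# two classes (weights chosen by hand for illustration, not trained).
w, b = np.array([1.0, 1.0, -2.0]), -0.5
pred = np.array([int(w @ lift(p) + b > 0) for p in X])
```

Kernel SVMs achieve the same effect implicitly, via a kernel function, without ever materializing the lifted coordinates.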
SVMs were originally proposed for the two-class problem; the traditional SVM only functions when there are exactly two possible classes. How to extend it effectively to multi-class classification is one of the important topics in current support vector machine research. The solutions proposed so far include the one-against-one and one-against-rest methods, but both classifier performance and the existence of unclassifiable regions remain problems to be solved. A general SVM classifier also has the obvious disadvantage of ignoring the distribution of each class, and it shows a clear bias when handling data with unbalanced sample distributions: for example, if far more key points are detected in scene 1 than in scene 2, the trained classifier will favor scene 1 in its decisions.
Therefore, for the stable-feature classification and registration problem addressed by this patent, a complete binary tree SVM multi-classification algorithm based on a spherical structure is used to train and learn the feature points of all samples uniformly; classes are formed according to the distribution of the feature descriptors in the hyperspace, and an excellent feature classifier is learned from that classification.
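The patent does not detail the sphere-structured complete binary tree, so the following is only an assumed sketch of the layout it implies: the N classes are ordered by the distance of their descriptor centroids from the global centroid, each internal node halves the remaining classes, and a query descends with one binary decision per level, i.e. O(log N) decisions instead of N comparisons. A nearest-centroid rule stands in here for the two-class SVM that a real implementation would train at each node:

```python
import numpy as np

def build_tree(classes):
    """Recursively halve an ordered class list into a complete binary tree."""
    if len(classes) == 1:
        return classes[0]
    mid = len(classes) // 2
    return {"left": classes[:mid], "right": classes[mid:],
            "lnode": build_tree(classes[:mid]),
            "rnode": build_tree(classes[mid:])}

def classify(tree, x, centroids):
    """Descend the tree with one binary decision per level; a nearest-centroid
    rule stands in for the per-node two-class SVM."""
    while isinstance(tree, dict):
        d_l = min(np.linalg.norm(x - centroids[c]) for c in tree["left"])
        d_r = min(np.linalg.norm(x - centroids[c]) for c in tree["right"])
        tree = tree["lnode"] if d_l <= d_r else tree["rnode"]
    return tree

# Four well-separated toy "scene" classes in a 2-D descriptor space.
rng = np.random.default_rng(1)
centers = {0: np.array([0.0, 0.0]), 1: np.array([10.0, 0.0]),
           2: np.array([0.0, 10.0]), 3: np.array([10.0, 10.0])}
samples = {c: centers[c] + 0.3 * rng.normal(size=(20, 2)) for c in centers}
centroids = {c: samples[c].mean(axis=0) for c in centers}

# Order the classes by distance of their centroid from the global centroid
# (the "spherical structure" ordering assumed here), then build the tree.
g = np.mean(list(centroids.values()), axis=0)
order = sorted(centroids, key=lambda c: np.linalg.norm(centroids[c] - g))
tree = build_tree(order)
```

With a trained SVM at each node, only O(log N) decision functions are evaluated per feature point, which is what makes the per-point classification fast enough for the online detection described below.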
Disclosure of Invention
The invention provides a multimedia navigator testing method based on visual stability feature classification and registration, which aims to solve the problem that image detail information cannot be captured because the background is too bright during testing of a multimedia navigation display interface.
The technical scheme adopted by the invention to solve this problem comprises the following specific steps:
step one, prepare a temporary sample library in advance, perform an initial sample collection of the target, and name and store the pictures in the corresponding order;
step two, perform SURF feature point detection on all collected samples;
step three, randomly extract part of the feature point sets, train them with a supervised machine learning classification algorithm, namely a complete binary tree SVM (support vector machine) based on a spherical structure, find a suitable decision function, and use the remaining feature points to adjust the learning strategy and parameters and optimize continuously until the system can accurately classify the feature point sets into the N types of scenes matching the scene labels;
step four, with the training phase finished, the camera can start to collect images; each image is preprocessed and handed to the feature extractor, keeping the detection parameters consistent with those used to train the classifier, to obtain a number of local invariant feature points;
step five, adopt a multi-scene voting scheme: if feature point K_i belongs to interface X_i, add 1 to that category's vote count; after the same operation is completed for all feature points, the scene category with the most votes is regarded as the best registration result for the current interface;
step six, match all feature points in the currently acquired image against all feature points of the matched image in the sample library with a fast approximate nearest-neighbor search algorithm to find the best-matching feature point pairs, and compute the homography matrix with a random sample consensus (RANSAC) algorithm; the homography matrix describes the transformation between the two positions of the target in the two-dimensional image space, and the position of the navigator in the unknown scene image is thereby detected;
and step seven, the camera captures frames continuously at a fixed frame rate; as the content of the liquid crystal screen changes, the system detects and outputs the registration result in real time, and the background cumulatively records the detection results and saves them locally as a log.
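Step five's voting scheme reduces to a simple tally once each feature point has been classified; a minimal sketch with hypothetical per-point labels:

```python
from collections import Counter

def vote_scene(feature_labels):
    """Step five: each classified feature point casts one vote for its scene;
    the scene with the most votes is taken as the registration result."""
    tally = Counter(feature_labels)
    scene, votes = tally.most_common(1)[0]
    return scene, votes

# Hypothetical per-point classifications for one captured frame: most points
# are assigned to scene "X2", a few are misclassified.
labels = ["X2", "X2", "X1", "X2", "X3", "X2", "X2", "X1"]
best, n = vote_scene(labels)
```

Because the decision aggregates over all feature points, a handful of misclassified points (as in the toy frame above) does not change the registration result.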
Furthermore, when the display interfaces corresponding to all keys on the multimedia navigator are photographed in step one, the subsequent algorithm designs in margins for changes in the environment, the platform installation, and similar conditions to ensure the stability of the system, so barring special circumstances this step only needs to be performed once.
Furthermore, in step two, the detection parameters must be kept consistent during the SURF feature point detection performed on all collected samples, and each sample is required to yield at least n feature points.
Further, assuming that there are N scene categories, a set of k_i feature points is extracted under scene i, where i = 1, 2, 3, ..., N and k_i > n, and each feature point is described by a 64-dimensional vector.
Further, when the screen registration result does not match the expectation in step seven, the system automatically records the current time and stores that frame of the image, to facilitate further manual analysis.
The invention has the following beneficial effects: it can complete an automated operate-and-detect test flow for multimedia navigators of different models. The traditional navigator test stage depends on manual operation on the production line; the workload is heavy, the efficiency is low, and missed and false detections easily occur over long periods. An automated computer vision approach can replace manual work entirely; relying on the multimedia intelligent test platform of this design, the whole process requires no human participation at all, which greatly improves test efficiency and suits large-batch online detection.
Because of its complexity, traditional detection and recognition of luminous objects is mostly done with neural-network-based deep learning, which needs a large number of samples, a long training period, and demanding, expensive hardware, and has poor extensibility: the software must be retrained and redesigned whenever label scenes are added or a different type of navigator is substituted. The feature classification and registration technique proposed in this patent greatly improves the stability and practicability of the platform while fully realizing the basic multimedia test functions. A large number of experiments prove that the stability and accuracy of the system are maintained under different lighting conditions, positions, and installation modes, and even under artificially added external interference (such as partially occluding the target or shining strong light on it).
To enhance the adaptability of the design, various jigs matched to the mechanical arm's operation are prepared at the end of the arm, and software extensibility is also considered: when new interface content is replaced or added, the new interface only needs to be sampled, added to the sample library, and given a label, so navigators of different models and scenes can be tested without redesign, which facilitates practical application and gives wide applicability. The design also provides a feasible scheme for recognizing and locating targets in dark-light conditions. In conclusion, the significance of this detection platform is that it uses an intelligent, machine-vision-based approach to test the key functions of a multimedia navigator; it can also be used to recognize and locate any luminous object, improving production efficiency and enterprise competitiveness.
Drawings
FIG. 1 is a block diagram of the algorithm flow of the present invention.
Detailed Description
The first embodiment: this embodiment is described with reference to fig. 1. The specific steps of the method for testing a multimedia navigator based on visual stability feature classification and registration in this embodiment are as follows:
step one, prepare a temporary sample library in advance, perform an initial sample collection of the target, and name and store the pictures in the corresponding order;
step two, perform SURF feature point detection on all collected samples;
step three, randomly extract part of the feature point sets, train them with a supervised machine learning classification algorithm, namely a complete binary tree SVM (support vector machine) based on a spherical structure, find a suitable decision function, and use the remaining feature points to adjust the learning strategy and parameters and optimize continuously until the system can accurately classify the feature point sets into the N types of scenes matching the scene labels;
step four, with the training phase finished, the camera can start to collect images; each image is preprocessed and handed to the feature extractor, keeping the detection parameters consistent with those used to train the classifier, to obtain a number of local invariant feature points;
step five, adopt a multi-scene voting scheme: if feature point K_i belongs to interface X_i, add 1 to that category's vote count; after the same operation is completed for all feature points, the scene category with the most votes is regarded as the best registration result for the current interface;
step six, match all feature points in the currently acquired image against all feature points of the matched image in the sample library with a fast approximate nearest-neighbor search algorithm to find the best-matching feature point pairs, and compute the homography matrix with a random sample consensus (RANSAC) algorithm; the homography matrix describes the transformation between the two positions of the target in the two-dimensional image space, and the position of the navigator in the unknown scene image is thereby detected;
and step seven, the camera captures frames continuously at a fixed frame rate; as the content of the liquid crystal screen changes, the system detects and outputs the registration result in real time, and the background cumulatively records the detection results and saves them locally as a log.
The second embodiment: this embodiment is described with reference to fig. 1. In step one of the method for testing a multimedia navigator based on visual stability feature classification and registration in this embodiment, when the display interfaces corresponding to all keys on the multimedia navigator are photographed, the subsequent algorithm has designed in margins for changes in the environment, the platform installation, and other conditions to ensure the stability of the system, so barring special circumstances this step is performed only once.
The third embodiment: this embodiment is described with reference to fig. 1. In step two of the method for testing a multimedia navigator based on visual stability feature classification and registration in this embodiment, the detection parameters should be kept consistent in the SURF feature point detection performed on all collected samples, and each sample is required to yield at least n feature points.
The fourth embodiment: this embodiment is described with reference to fig. 1. In the method for testing a multimedia navigator based on visual stability feature classification and registration in this embodiment, it is assumed that there are N scene categories, and a set of k_i feature points is extracted under scene i, where i = 1, 2, 3, ..., N and k_i > n; each feature point is described by a 64-dimensional vector.
The fifth embodiment: in step seven of the method for testing a multimedia navigator based on visual stability feature classification and registration in this embodiment, when the screen registration result does not match the expected result, the system automatically records the current time and stores that frame of the image, which facilitates further manual analysis.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (3)
1. A multimedia navigator testing method based on visual stability feature classification and registration, characterized in that the method specifically comprises the following steps:
step one, prepare a temporary sample library in advance, perform an initial sample collection of the target, and name and store the pictures in the corresponding order;
step two, perform SURF feature point detection on all collected samples; the detection parameters are kept consistent throughout the SURF feature point detection, each sample is required to yield at least n feature points, and with N scene categories a set of k_i feature points is extracted under scene i, where i = 1, 2, 3, ..., N and k_i > n; each feature point is described by a 64-dimensional vector;
step three, randomly extract part of the feature point sets, train them with a supervised machine learning classification algorithm, namely a complete binary tree SVM (support vector machine) based on a spherical structure, find a decision function, and use the remaining feature points to adjust the learning strategy and parameters and optimize continuously until the system can accurately classify the feature point sets into the N types of scenes matching the scene labels;
step four, with the training phase finished, the camera starts to collect images; each image is preprocessed and handed to the feature extractor, keeping the detection parameters consistent with those used to train the classifier, to obtain a number of local invariant feature points;
step five, adopt a multi-scene voting scheme: if a feature point belongs to interface X_i, add 1 to that category's vote count; after the same operation is completed for all feature points, the scene category with the most votes is regarded as the best registration result for the current interface;
step six, match all feature points in the currently acquired image against all feature points of the matched image in the sample library with a fast approximate nearest-neighbor search algorithm to find the best-matching feature point pairs, and compute the homography matrix with a random sample consensus (RANSAC) algorithm; the homography matrix describes the transformation between the two positions of the target in the two-dimensional image space, and the position of the navigator in the unknown scene image is thereby detected;
and step seven, the camera captures frames continuously at its inherent frame rate; as the content of the liquid crystal screen changes, the system detects and outputs the registration result in real time, and the background cumulatively records the detection results and saves them locally as a log.
2. The method for testing a multimedia navigator based on visual stability feature classification and registration according to claim 1, characterized in that: in step one, when the display interfaces corresponding to all keys on the multimedia navigator are photographed, the subsequent algorithm designs in margins for changes in the environment and the platform installation conditions to ensure the stability of the system, and this step only needs to be performed once.
3. The method for testing a multimedia navigator based on visual stability feature classification and registration according to claim 1, characterized in that: in step seven, when the screen registration result does not match the expectation, the system automatically records the current time and stores that frame of the image, facilitating further manual analysis.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010070309.3A CN111259843B (en) | 2020-01-21 | 2020-01-21 | Multimedia navigator testing method based on visual stability feature classification registration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111259843A CN111259843A (en) | 2020-06-09 |
CN111259843B true CN111259843B (en) | 2021-09-03 |
Family
ID=70952475
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010070309.3A Active CN111259843B (en) | 2020-01-21 | 2020-01-21 | Multimedia navigator testing method based on visual stability feature classification registration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111259843B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104504723A (en) * | 2015-01-14 | 2015-04-08 | 西安电子科技大学 | Image registration method based on remarkable visual features |
CN106530293A (en) * | 2016-11-07 | 2017-03-22 | 上海交通大学 | Manual assembly visual detection error prevention method and system |
CN107229560A (en) * | 2016-03-23 | 2017-10-03 | 阿里巴巴集团控股有限公司 | A kind of interface display effect testing method, image specimen page acquisition methods and device |
CN108229485A (en) * | 2018-02-08 | 2018-06-29 | 百度在线网络技术(北京)有限公司 | For testing the method and apparatus of user interface |
CN108596226A (en) * | 2018-04-12 | 2018-09-28 | 武汉精测电子集团股份有限公司 | A kind of defects of display panel training method and system based on deep learning |
CN109084955A (en) * | 2018-07-02 | 2018-12-25 | 北京百度网讯科技有限公司 | Display screen quality determining method, device, electronic equipment and storage medium |
CN110308346A (en) * | 2019-06-24 | 2019-10-08 | 中国航空无线电电子研究所 | Cockpit display system automatic test approach and system based on image recognition |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6766456B1 (en) * | 2000-02-23 | 2004-07-20 | Micron Technology, Inc. | Method and system for authenticating a user of a computer system |
US10120474B2 (en) * | 2010-12-09 | 2018-11-06 | T-Mobile Usa, Inc. | Touch screen testing platform for engaging a dynamically positioned target feature |
CN105894036B (en) * | 2016-04-19 | 2019-04-09 | 武汉大学 | A kind of characteristics of image template matching method applied to mobile phone screen defects detection |
CN106157310B (en) * | 2016-07-06 | 2018-09-14 | 南京汇川图像视觉技术有限公司 | The TFT LCD mura defect inspection methods combined with multichannel based on mixed self-adapting Level Set Models |
JP2019531547A (en) * | 2016-09-08 | 2019-10-31 | エイアイキュー ピーティーイー.リミテッド | Object detection with visual search queries |
US10380449B2 (en) * | 2016-10-27 | 2019-08-13 | Entit Software Llc | Associating a screenshot group with a screen |
CN107292279B (en) * | 2017-06-30 | 2021-02-23 | 宇龙计算机通信科技(深圳)有限公司 | Display method, display device, terminal and computer-readable storage medium |
US9870615B2 (en) * | 2017-07-19 | 2018-01-16 | Schwalb Consulting, LLC | Morphology identification in tissue samples based on comparison to named feature vectors |
CN109902541B (en) * | 2017-12-10 | 2020-12-15 | 彼乐智慧科技(北京)有限公司 | Image recognition method and system |
CN108171748B (en) * | 2018-01-23 | 2021-12-07 | 哈工大机器人(合肥)国际创新研究院 | Visual identification and positioning method for intelligent robot grabbing application |
CN109740696B (en) * | 2019-01-30 | 2021-09-24 | 亮风台(上海)信息科技有限公司 | Method and equipment for identifying pressing plate |
2020-01-21: application CN202010070309.3A granted as patent CN111259843B/en (legal status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN111259843A (en) | 2020-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107506703B (en) | Pedestrian re-identification method based on unsupervised local metric learning and reordering | |
CN107133569B (en) | Monitoring video multi-granularity labeling method based on generalized multi-label learning | |
Mathur et al. | Crosspooled FishNet: transfer learning based fish species classification model | |
CN110929593B (en) | Real-time significance pedestrian detection method based on detail discrimination | |
CN109829467A (en) | Image labeling method, electronic device and non-transient computer-readable storage medium | |
US20140079314A1 (en) | Method and Apparatus for Improved Training of Object Detecting System | |
Wang et al. | Tree leaves detection based on deep learning | |
CN110334703B (en) | Ship detection and identification method in day and night image | |
Lin et al. | Live Face Verification with Multiple Instantialized Local Homographic Parameterization. | |
CN108038515A (en) | Unsupervised multi-target detection tracking and its storage device and camera device | |
CN113763424B (en) | Real-time intelligent target detection method and system based on embedded platform | |
CN108805102A (en) | A kind of video caption detection and recognition methods and system based on deep learning | |
Zhao et al. | Real-time pedestrian detection based on improved YOLO model | |
CN116597438A (en) | Improved fruit identification method and system based on Yolov5 | |
Chen et al. | Single‐Object Tracking Algorithm Based on Two‐Step Spatiotemporal Deep Feature Fusion in a Complex Surveillance Scenario | |
CN114743257A (en) | Method for detecting and identifying image target behaviors | |
Luo et al. | RBD-Net: robust breakage detection algorithm for industrial leather | |
Gong et al. | Research on an improved KCF target tracking algorithm based on CNN feature extraction | |
CN116994049A (en) | Full-automatic flat knitting machine and method thereof | |
Villamizar et al. | Online learning and detection of faces with low human supervision | |
CN111259843B (en) | Multimedia navigator testing method based on visual stability feature classification registration | |
CN110728316A (en) | Classroom behavior detection method, system, device and storage medium | |
Meena Deshpande | License plate detection and recognition using yolo v4 | |
Ren et al. | Implementation of vehicle and license plate detection on embedded platform | |
Goyal et al. | Moving Object Detection in Video Streaming Using Improved DNN Algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | | Inventors after: Sun Jingting, Chen Hui, Cui Peng, Jiang Di, Wang Yuanze. Inventors before: Sun Jingting, Chen Hui, Cui Peng. |
GR01 | Patent grant | ||