
CN112906557B - Multi-granularity feature aggregation target re-identification method and system under multi-view angle - Google Patents

Multi-granularity feature aggregation target re-identification method and system under multi-view angle

Info

Publication number
CN112906557B
CN112906557B
Authority
CN
China
Prior art keywords
target
granularity
neural network
hypergraph
queried
Prior art date
Legal status
Active
Application number
CN202110183597.8A
Other languages
Chinese (zh)
Other versions
CN112906557A (en)
Inventor
彭德光
Current Assignee
Chongqing Zhaoguang Technology Co ltd
Original Assignee
Chongqing Zhaoguang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Zhaoguang Technology Co ltd filed Critical Chongqing Zhaoguang Technology Co ltd
Priority to CN202110183597.8A priority Critical patent/CN112906557B/en
Publication of CN112906557A publication Critical patent/CN112906557A/en
Application granted granted Critical
Publication of CN112906557B publication Critical patent/CN112906557B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a multi-granularity feature aggregation target re-identification method and system under multiple viewing angles, comprising the following steps: constructing a multi-view neural network, and acquiring target features of a target object at multiple views through the multi-view neural network; constructing a multi-granularity hypergraph based on the target features of each target object within a set time period; inputting a target image to be queried, and acquiring a neighbor feature set of the target image to be queried from the multi-granularity hypergraph; and comparing the similarity of the neighbor feature set of the target image to be queried with the neighbor feature set of each target object in the multi-granularity hypergraph to obtain a target object re-identification result. The invention can effectively improve re-identification precision.

Description

Multi-granularity feature aggregation target re-identification method and system under multi-view angle
Technical Field
The invention relates to the field of computer vision, and in particular to a multi-granularity feature aggregation target re-identification method and system under multiple viewing angles.
Background
Pedestrian re-identification based on video sequences is widely studied because rich temporal information can be used to resolve visual ambiguity. The classical approach to video pedestrian re-identification applies deep learning to project video sequences into a high-dimensional feature space and then performs identity matching and ranking by computing distances between samples. Its two main lines are: representing video pedestrian features by aggregating frame-level temporal features with a recurrent neural network, and learning temporal features by extracting dynamic temporal information from video frames with an optical flow field. The prior art has the following disadvantages. 1. Video learning based on recurrent neural networks cannot learn the most discriminative features, and training such models on long video segments is complex and time-consuming. 2. Methods that extract temporal features by exploring the optical flow structure are prone to optical flow estimation errors, because adjacent frames of a video segment may be misaligned. To solve these problems, the invention provides a video pedestrian re-identification method based on multi-granularity feature aggregation under multiple viewing angles, which simultaneously captures multi-granularity spatial and temporal information of a video sequence and adopts a simple, efficient hypergraph construction to preserve and enhance diverse, discriminative feature representations at different spatial granularities.
Disclosure of Invention
In view of the problems in the prior art, the invention provides a multi-granularity feature aggregation target re-identification method and system under multiple viewing angles, which mainly solve the problems of long training time and low accuracy in the prior art.
In order to achieve the above and other objects, the present invention adopts the following technical scheme.
A multi-granularity characteristic aggregation target re-identification method under a multi-view angle comprises the following steps:
constructing a multi-view neural network, and acquiring target characteristics of multiple views of a target object through the multi-view neural network;
constructing a multi-granularity hypergraph based on the target characteristics of each target object in a set time period;
inputting a target image to be queried, and acquiring a neighboring feature set of the target image to be queried from the multi-granularity hypergraph;
and comparing the similarity between the adjacent feature set of the target image to be queried and the adjacent feature set of each target object in the multi-granularity hypergraph to obtain a target object re-identification result.
Optionally, the multi-view neural network comprises a convolutional neural network and a classification output layer; after an image undergoes feature extraction by the convolutional neural network, the extracted features are input into the classification output layer to obtain target feature outputs for different views.
Optionally, the multi-view neural network is pre-trained by inputting a set of images containing pre-labeled different views into the multi-view neural network, constructing a loss function by cross entropy, and updating network parameters by using back propagation.
Optionally, the loss function is expressed as:

$$L = -\sum_{i=1}^{N} y_i \log \hat{y}_i,$$

wherein $y_i$ is the label of the corresponding view, $\hat{y}_i$ is the classification prediction result, and N is the number of views.
Optionally, the target object includes a pedestrian or a vehicle.
Optionally, acquiring the adjacent feature set of the target image to be queried from the multi-granularity hypergraph includes:
calculating Euclidean distances between target features in the multi-granularity hypergraph, and acquiring the K target features nearest in feature distance to the target image to be queried;

and acquiring a neighbor set of each of the K target features, and selecting the neighbor sets that contain the corresponding feature of the target image to be queried to form the neighbor feature set of the target image to be queried.
Optionally, performing similarity comparison between the adjacent feature set of the target image to be queried and the adjacent feature set of each target object in the multi-granularity hypergraph to obtain a target object re-identification result, which includes:
and selecting a target object corresponding to the adjacent feature set, the similarity of which reaches a set threshold value, as re-identification output by measuring the similarity between the adjacent feature sets through the Jaccard distance.
Optionally, the similarity calculation is expressed as:

$$s(I_i, I_j) = \frac{|R(I_i, k) \cap R(I_j, k)|}{|R(I_i, k) \cup R(I_j, k)|},$$

wherein $I_i$, $I_j$ respectively represent two frames of images and $R(I_i, k)$ represents the k-reciprocal neighbor set of image $I_i$.
A multi-granularity feature aggregation target re-identification system at multiple perspectives, comprising:
the network construction module is used for constructing a multi-view neural network, and acquiring target characteristics of multiple views of a target object through the multi-view neural network;
the hypergraph construction module is used for constructing a multi-granularity hypergraph based on the target characteristics of each target object in a set time period;
the feature set acquisition module is used for inputting a target image to be queried and acquiring a neighboring feature set of the target image to be queried from the multi-granularity hypergraph;
and the identification module is used for comparing the similarity between the adjacent feature set of the target image to be queried and the adjacent feature set of each target object in the multi-granularity hypergraph to obtain a target object re-identification result.
As described above, the multi-granularity feature aggregation target re-identification method and system under multiple viewing angles have the following beneficial effects:
viewing-angle information is added, overcoming problems such as occlusion and viewing-angle differences; and re-identification precision is enhanced through the neighbor feature set.
Drawings
FIG. 1 is a flowchart of a multi-granularity feature aggregation target re-identification method under a multi-view in an embodiment of the invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific examples, and other advantages and effects of the invention will become readily apparent to those skilled in the art from this disclosure. The invention may also be practiced or applied through other different embodiments, and the details in this description may be modified or varied based on different viewpoints and applications without departing from the spirit of the invention. It should be noted that the following embodiments and the features in the embodiments may be combined with each other without conflict.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present invention by way of illustration, and only the components related to the present invention are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
Referring to fig. 1, the present invention provides a multi-granularity feature aggregation target re-identification method under multi-view, which includes steps S01-S04.
In step S01, a multi-view neural network is constructed, and target characteristics of multiple views of a target object are acquired through the multi-view neural network:
in one embodiment, the target object may comprise a pedestrian, a vehicle, etc., and the video image containing the target object is acquired in advance, and the video sequence is acquired as an input to the multi-view neural network.
In an embodiment, the multi-view neural network comprises a convolutional neural network and a classification output layer; after feature extraction by the convolutional neural network, the extracted features are input into the classification output layer to obtain target feature outputs for different views.
In one embodiment, a set of images containing pre-labeled different perspectives is input into a multi-perspective neural network, and the multi-perspective neural network is pre-trained by constructing a loss function through cross entropy and updating network parameters by using back propagation.
Specifically, a ternary classification output layer is added after a conventional CNN. A labeled image $x_i$ is used as input and its corresponding view label $y_i$ as the supervisory signal; the prediction $\hat{y}_i$ is supervised with cross entropy, and the cross-entropy loss function can be expressed as:

$$L = -\sum_{i=1}^{N} y_i \log \hat{y}_i.$$

The forward-backward algorithm is used to complete the update calculation of the loss function.
Extracting video frame features;

For a video sequence $I = \{I_1, I_2, \ldots, I_T\}$ containing T frames, the constructed multi-view neural network extracts the features of each image, which can be expressed as:

$$F_i = \mathrm{CNN}(I_i), \quad i = 1, \ldots, T,$$

where each $F_i$ is a three-dimensional tensor of dimension $C \times H \times W$, with C the channel size and H, W the height and width of the feature map.
In step S02, a multi-granularity hypergraph is constructed based on the target features of each target object within a set period of time:
and (3) dividing the image features extracted in the step (S01) into p E {1,2,4,8} horizontal blocks according to a horizontal division mode, and carrying out average combination on the divided feature images to construct partial feature vectors. For each granularity, the entire sequence generates N p T×p partial-level features, respectively denoted as
Figure BDA0002942141000000043
The first granularity of a video sequence contains a single global feature vector, and the other granularity consists of partial feature vectors.
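As an illustration of this horizontal partition, the following sketch pools CNN feature maps into part-level vectors for each granularity; the tensor shapes (a 2048-channel map of height 16) are assumptions chosen so the division is exact.

```python
# Sketch of the multi-granularity horizontal partition, p ∈ {1, 2, 4, 8};
# shapes are illustrative assumptions.
import torch

def part_features(frame_maps: torch.Tensor, granularities=(1, 2, 4, 8)):
    """frame_maps: (T, C, H, W) feature maps of one video sequence.
    Returns {p: (T*p, C)} part-level feature vectors per granularity."""
    T, C, H, W = frame_maps.shape
    parts = {}
    for p in granularities:
        # split each map into p horizontal stripes and average-pool each stripe
        stripes = frame_maps.reshape(T, C, p, H // p, W)
        pooled = stripes.mean(dim=(3, 4))                     # (T, C, p)
        parts[p] = pooled.permute(0, 2, 1).reshape(T * p, C)  # N_p = T*p features
    return parts

feats = part_features(torch.randn(6, 2048, 16, 8))   # T = 6 frames
print({p: f.shape for p, f in feats.items()})        # e.g. 8 -> (48, 2048)
```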
The nodes $v_i \in V_p$, $i \in \{1, 2, \ldots, N_p\}$, are first used as the candidate nodes needed for building the hypergraph, and a group of hyperedges $E_p$ is defined to capture temporal information and model short- to long-term correlations in the hypergraph. Specifically, for any candidate node $v_i$, its K most similar neighboring nodes within a temporal range $T_t$ are selected, and the resulting K + 1 nodes are related by a hyperedge:

$$e_i = \{v_i\} \cup \{\, v_j : v_j \text{ is among the } K \text{ nearest neighbors of } v_i \text{ within } T_t \,\}.$$
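A minimal sketch of this hyperedge construction follows, assuming Euclidean distance as the similarity measure and illustrative values for K and the temporal window.

```python
# Sketch of temporal hyperedge construction: each node is joined with its
# K most similar neighbors inside a temporal window (K and the window
# size are illustrative assumptions).
import torch

def build_hyperedges(node_feats, frame_ids, K=3, window=4):
    """node_feats: (N_p, C); frame_ids: (N_p,) frame index of each node.
    Returns a list of hyperedges, each a list of K+1 node indices."""
    dist = torch.cdist(node_feats, node_feats)         # pairwise Euclidean distances
    hyperedges = []
    for i in range(node_feats.size(0)):
        in_window = (frame_ids - frame_ids[i]).abs() <= window
        in_window[i] = False                           # exclude the node itself
        cand = torch.nonzero(in_window).squeeze(1)
        k = min(K, cand.numel())
        nearest = cand[dist[i, cand].topk(k, largest=False).indices]
        hyperedges.append([i] + nearest.tolist())      # hyperedge of K+1 nodes
    return hyperedges

edges = build_hyperedges(torch.randn(12, 64), torch.arange(12) // 2)
```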
hypergraph feature updating;
for a node v of hypergraph i Definition of
Figure BDA0002942141000000053
Representing all supersides associated with the point, since the point associated with one superside has extremely strong association, the feature of defining the superside with the aggregation operation is as follows:
Figure BDA0002942141000000054
wherein,,
Figure BDA0002942141000000055
representing v j Node characteristics at the layer. To calculate the association relation between node features and the association feature of the superside, calculate its similarity +.>
Figure BDA0002942141000000056
Figure BDA0002942141000000057
Wherein,,
Figure BDA0002942141000000058
representing the similarity between features. In addition, the softMax is adopted to normalize the similarity weight and aggregate the superside information to respectively calculate and obtain ++>
Figure BDA0002942141000000059
Figure BDA00029421410000000510
Figure BDA00029421410000000511
After the aggregated superside information is acquired, node characteristics can be associated by a full connection layer:
Figure BDA00029421410000000512
wherein W is l Representing a weight matrix, σ represents the excitation equation. Thus repeating the update mechanism more than L times, a series of output node characteristics can be calculated
Figure BDA00029421410000000513
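The sketch below implements one such update layer; mean pooling for AGG, dot-product similarity, and concatenation before the fully connected layer are all assumptions made for illustration, not details fixed by the patent.

```python
# Sketch of one hypergraph update layer: hyperedge features by mean
# aggregation, SoftMax-weighted hyperedge aggregation per node, then a
# fully connected fusion (design details are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypergraphLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.fc = nn.Linear(2 * dim, dim)   # W^l applied to [x_i ; m_i]

    def forward(self, x, hyperedges):
        # x: (N, C) node features; hyperedges: list of node-index lists
        edge_feats = torch.stack([x[e].mean(dim=0) for e in hyperedges])  # h_e
        new_x = []
        for i in range(x.size(0)):
            eids = [j for j, e in enumerate(hyperedges) if i in e]  # E(v_i)
            h = edge_feats[eids]                       # (|E(v_i)|, C)
            sim = h @ x[i]                             # dot-product similarity
            w = F.softmax(sim, dim=0)                  # normalized weights
            m = (w.unsqueeze(1) * h).sum(dim=0)        # aggregated hyperedge info
            new_x.append(F.relu(self.fc(torch.cat([x[i], m]))))
        return torch.stack(new_x)                      # features at layer l+1

layer = HypergraphLayer(64)
out = layer(torch.randn(12, 64),
            [[i, (i + 1) % 12, (i + 2) % 12] for i in range(12)])
```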
Hypergraph feature aggregation based on an attention mechanism;
after obtaining the final updated node characteristics for each hypergraph, it is considered that in one hypergraph, different nodes have different importance. For example: the lower the importance of the blocked part or background is, the better the feature discrimination is. Thus, discriminant computation based on the mechanism of attention is designed, with nodes of each hypergraph being noted
Figure BDA0002942141000000061
Figure BDA0002942141000000062
Wherein W is u Representing a weight matrix. The hypergraph feature can thus be computed as a weight aggregation of node features:
Figure BDA0002942141000000063
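A small sketch of this attention readout follows, assuming a single learned projection for $W_u$ that scores each node:

```python
# Sketch of the attention-based hypergraph readout: per-node scores from a
# learned weight matrix W_u, normalized with SoftMax (an assumed,
# illustrative form of the attention described above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionReadout(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.w_u = nn.Linear(dim, 1, bias=False)    # W_u

    def forward(self, node_feats):                  # (N, C) final node features
        scores = self.w_u(node_feats).squeeze(1)    # one importance score per node
        attn = F.softmax(scores, dim=0)             # occluded/background parts get low weight
        return (attn.unsqueeze(1) * node_feats).sum(dim=0)   # graph-level feature

readout = AttentionReadout(64)
graph_feat = readout(torch.randn(12, 64))           # (64,) hypergraph feature
```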
Aggregating the multi-granularity hypergraphs based on the mutual-information minimization loss;

To optimize the framework, the training process is jointly supervised with a cross-entropy loss and a triplet loss:

$$L_{xent} = -\frac{1}{N} \sum_{i=1}^{N} \log p\big(y_i \mid F_i\big),$$

$$L_{tri} = \sum_{p} \max\Big(0,\ m + d\big(F_a^p, F_+^p\big) - d\big(F_a^p, F_-^p\big)\Big),$$

where $y_i$ denotes the label of feature $F_i$; N and C respectively denote the mini-batch size and the number of training-set categories; and $F_a^p$, $F_+^p$, $F_-^p$ respectively denote the query (anchor), positive and negative samples at division granularity p. After the model is trained with these two loss terms, each hypergraph outputs a distinct graph-level feature.

To acquire features that fuse multi-granularity hypergraph information, a mutual-information minimization loss is adopted: reducing the mutual information between different hypergraph features further increases the overall information content of the final video representation formed by combining all features. Thus, for hypergraph features of different granularities p, the mutual-information minimization loss is defined as:

$$L_{MI} = \sum_{p \neq q} \kappa\big(F^p, F^q\big),$$

where κ measures the mutual information between different hypergraph features. Finally, the loss terms of all parts are combined as in formula (13), and the forward-backward algorithm is used to complete the update calculation:

$$L_{all} = L_{xent} + L_{tri} + L_{MI}. \tag{13}$$
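The sketch below assembles the three loss terms; since the text does not spell out κ, a squared cosine similarity between graph-level features is used here purely as a stand-in for the mutual-information measure, and all shapes are illustrative.

```python
# Sketch of the joint objective L_all = L_xent + L_tri + L_MI; the squared
# cosine similarity is a stand-in for the unspecified kappa.
import torch
import torch.nn.functional as F

def total_loss(logits, labels, anchor, pos, neg, graph_feats, margin=0.3):
    l_xent = F.cross_entropy(logits, labels)                     # identity loss
    l_tri = F.triplet_margin_loss(anchor, pos, neg, margin=margin)
    l_mi = 0.0
    for p in range(len(graph_feats)):                            # pairwise over granularities
        for q in range(p + 1, len(graph_feats)):
            l_mi = l_mi + F.cosine_similarity(
                graph_feats[p], graph_feats[q], dim=0) ** 2      # stand-in for kappa
    return l_xent + l_tri + l_mi

feats = [torch.randn(64, requires_grad=True) for _ in range(4)]  # one per granularity
loss = total_loss(torch.randn(8, 10), torch.randint(0, 10, (8,)),
                  torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64),
                  feats)
loss.backward()
```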
In step S03, inputting a target image to be queried, and acquiring a set of neighboring features of the target image to be queried from the multi-granularity hypergraph:
In an embodiment, the Euclidean distances between target features in the multi-granularity hypergraph are calculated, and the K target features nearest in feature distance to the target image to be queried are acquired;

the neighbor set of each of the K target features is then acquired, and the neighbor sets that contain the corresponding feature of the target image to be queried are selected to form the neighbor feature set of the target image to be queried.
Specifically, the Euclidean distance $d(F'_i, F'_j)$ between the hypergraph features obtained in step S02 is calculated, and the k nearest neighbors of the query image (the probe) form the neighbor set N(probe, k), which contains both positive and negative samples and is defined as:

$$N(probe, k) = \{t_1, t_2, \ldots, t_k\},$$

where $t_1, t_2, \ldots, t_k$ denote the k samples nearest to the probe in Euclidean distance. Meanwhile, each element $t_i$ of the neighbor set N has its own neighbor set $N(t_i, k)$; if the probe is contained in $N(t_i, k)$, the probe and $t_i$ are mutual neighbors, otherwise they are not. The k-reciprocal neighbor set R of the probe can thus be obtained, in which every element is a target object that is a mutual neighbor of the probe:

$$R(probe, k) = \{\, t_i \mid t_i \in N(probe, k) \wedge probe \in N(t_i, k) \,\}. \tag{16}$$
This set can be regarded as the k-reciprocal neighbor feature of the probe, which is better suited than the raw hypergraph feature for measuring similarity between pedestrians.
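A sketch of computing this k-reciprocal neighbor set from hypergraph features follows; the gallery size and k are illustrative assumptions.

```python
# Sketch of the k-reciprocal neighbor set R(probe, k) built from Euclidean
# distances between hypergraph features.
import torch

def k_reciprocal(probe_feat, gallery_feats, k=5):
    """Return indices of gallery samples that are mutual k-nearest
    neighbors of the probe (the set R(probe, k))."""
    d_pg = torch.cdist(probe_feat.unsqueeze(0), gallery_feats).squeeze(0)  # probe-gallery
    n_probe = d_pg.topk(k, largest=False).indices.tolist()                 # N(probe, k)
    d_gg = torch.cdist(gallery_feats, gallery_feats)                       # gallery-gallery
    reciprocal = []
    for t in n_probe:
        d_t = torch.cat([d_gg[t], d_pg[t].unsqueeze(0)])   # distances from t; probe last
        d_t[t] = float("inf")                              # exclude t itself
        n_t = d_t.topk(k, largest=False).indices.tolist()  # N(t, k)
        if len(gallery_feats) in n_t:                      # probe's index is len(gallery)
            reciprocal.append(t)                           # t and probe are mutual neighbors
    return reciprocal

R = k_reciprocal(torch.randn(64), torch.randn(100, 64), k=5)
```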
In step S04, similarity comparison is performed between the adjacent feature set of the target image to be queried and the adjacent feature set of each target object in the multi-granularity hypergraph, and a target object re-recognition result is obtained:
In an embodiment, the similarity between neighbor feature sets is measured by the Jaccard distance, and the target object whose neighbor feature set similarity reaches a set threshold is selected as the re-identification output. Specifically:
To describe, from a set perspective, the difference between the nearest-neighbor sets of any two images $I_i$ and $I_j$, the Jaccard distance between the two neighbor sets is defined as:

$$d_J(I_i, I_j) = 1 - \frac{|R(I_i, k) \cap R(I_j, k)|}{|R(I_i, k) \cup R(I_j, k)|}.$$

The similarity between target objects is measured by this distance, and the query target object is re-identified accordingly.
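Finally, a sketch of the Jaccard-distance comparison with a thresholded re-identification output; the threshold value is an assumption.

```python
# Sketch of Jaccard distance between two k-reciprocal neighbor sets and a
# thresholded match decision (threshold is illustrative).
def jaccard_distance(r_i: set, r_j: set) -> float:
    if not r_i and not r_j:
        return 0.0
    return 1.0 - len(r_i & r_j) / len(r_i | r_j)

def reidentify(probe_set: set, gallery_sets: dict, threshold: float = 0.5):
    """gallery_sets: {object_id: k-reciprocal set}. Returns matching ids."""
    return [obj for obj, s in gallery_sets.items()
            if 1.0 - jaccard_distance(probe_set, s) >= threshold]

matches = reidentify({1, 4, 7}, {"a": {1, 4, 9}, "b": {2, 3}}, threshold=0.4)
print(matches)   # ["a"]: similarity 2/4 = 0.5 >= 0.4
```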
The embodiment also provides a multi-granularity characteristic aggregation target re-identification system under the multi-view angle, which is used for executing the multi-granularity characteristic aggregation target re-identification method under the multi-view angle in the embodiment of the method. Since the technical principle of the system embodiment is similar to that of the foregoing method embodiment, the same technical details will not be repeated.
In one embodiment, a multi-granularity feature aggregation target re-identification system under multiple perspectives comprises: the device comprises a network construction module, a hypergraph construction module, a feature set acquisition module and an identification module, wherein the network construction module is used for assisting in executing the step S01 in the embodiment of the method; the hypergraph construction module is used for assisting in executing the step S02 in the embodiment of the method; the feature set obtaining module is used for assisting in executing step S03 in the method embodiment; the identification module is used to assist in executing step S04 in the foregoing method embodiment.
In summary, in the multi-granularity feature aggregation target re-identification method and system under multiple viewing angles, ternary view classification makes pedestrian features carry view information in subsequent processing, overcoming problems such as occlusion and view differences; the hypergraph neural network structure simultaneously extracts the spatial features and temporal dependencies of video frames, and the mutual-information minimization loss preserves and enhances the diversity of the hypergraphs corresponding to different spatial granularities; and the k-reciprocal neighbor encoding improves pedestrian re-identification precision and overcomes the tendency of hypergraph learning to focus too much on local information. The invention therefore effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and variations completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the invention.

Claims (6)

1. The multi-granularity characteristic aggregation target re-identification method under the multi-view angle is characterized by comprising the following steps of:
constructing a multi-view neural network, and acquiring target features of a target object at multiple views through the multi-view neural network, wherein the multi-view neural network comprises a convolutional neural network and a classification output layer, and after an image undergoes feature extraction by the convolutional neural network, the extracted features are input into the classification output layer to obtain target feature outputs for different views;
constructing a multi-granularity hypergraph based on the target characteristics of each target object in a set time period;
inputting a target image to be queried, and acquiring a neighboring feature set of the target image to be queried from the multi-granularity hypergraph;
performing similarity comparison between the neighbor feature set of the target image to be queried and the neighbor feature set of each target object in the multi-granularity hypergraph to obtain a target object re-identification result, comprising: measuring the similarity between neighbor feature sets by the Jaccard distance, and selecting the target object whose neighbor feature set similarity reaches a set threshold as the re-identification output, wherein the similarity calculation is expressed as:

$$s(I_i, I_j) = \frac{|R(I_i, k) \cap R(I_j, k)|}{|R(I_i, k) \cup R(I_j, k)|},$$

wherein $I_i$, $I_j$ respectively represent two frames of images and $R(I_i, k)$ represents the k-reciprocal neighbor set of image $I_i$.
2. The multi-granularity feature aggregation target re-recognition method under multi-view according to claim 1, wherein a set of images including pre-labeled different views is input into the multi-view neural network, and the multi-view neural network is pre-trained by constructing a loss function through cross entropy and updating network parameters by adopting back propagation.
3. The multi-granularity feature aggregation target re-identification method under multiple viewing angles according to claim 2, wherein the loss function is expressed as:

$$L = -\sum_{i=1}^{N} y_i \log \hat{y}_i,$$

wherein $y_i$ is the label of the corresponding view, $\hat{y}_i$ is the classification prediction result, and N is the number of views.
4. The multi-granularity feature aggregation target re-recognition method under multiple perspectives of claim 1, wherein the target object comprises a pedestrian or a vehicle.
5. The multi-granularity feature aggregation target re-identification method under the multi-view angle according to claim 1, wherein the acquiring the adjacent feature set of the target image to be queried from the multi-granularity hypergraph comprises:
calculating Euclidean distances between target features in the multi-granularity hypergraph, and acquiring the K target features nearest in feature distance to the target image to be queried;

and acquiring a neighbor set of each of the K target features, and selecting the neighbor sets that contain the corresponding feature of the target image to be queried to form the neighbor feature set of the target image to be queried.
6. A multi-granularity feature aggregation target re-identification system under multiple viewing angles, comprising:
a network construction module, configured to construct a multi-view neural network and acquire target features of a target object at multiple views through the multi-view neural network, wherein the multi-view neural network comprises a convolutional neural network and a classification output layer, and after an image undergoes feature extraction by the convolutional neural network, the extracted features are input into the classification output layer to obtain target feature outputs for different views;
the hypergraph construction module is used for constructing a multi-granularity hypergraph based on the target characteristics of each target object in a set time period;
the feature set acquisition module is used for inputting a target image to be queried and acquiring an adjacent feature set of the target image to be queried from the multi-granularity hypergraph;
and an identification module, configured to perform similarity comparison between the neighbor feature set of the target image to be queried and the neighbor feature set of each target object in the multi-granularity hypergraph to obtain a target object re-identification result, comprising: measuring the similarity between neighbor feature sets by the Jaccard distance, and selecting the target object whose neighbor feature set similarity reaches a set threshold as the re-identification output, wherein the similarity calculation is expressed as:

$$s(I_i, I_j) = \frac{|R(I_i, k) \cap R(I_j, k)|}{|R(I_i, k) \cup R(I_j, k)|},$$

wherein $I_i$, $I_j$ respectively represent two frames of images and $R(I_i, k)$ represents the k-reciprocal neighbor set of image $I_i$.
CN202110183597.8A 2021-02-08 2021-02-08 Multi-granularity feature aggregation target re-identification method and system under multi-view angle Active CN112906557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110183597.8A CN112906557B (en) 2021-02-08 2021-02-08 Multi-granularity feature aggregation target re-identification method and system under multi-view angle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110183597.8A CN112906557B (en) 2021-02-08 2021-02-08 Multi-granularity feature aggregation target re-identification method and system under multi-view angle

Publications (2)

Publication Number Publication Date
CN112906557A CN112906557A (en) 2021-06-04
CN112906557B true CN112906557B (en) 2023-07-14

Family

ID=76123514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110183597.8A Active CN112906557B (en) 2021-02-08 2021-02-08 Multi-granularity feature aggregation target re-identification method and system under multi-view angle

Country Status (1)

Country Link
CN (1) CN112906557B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299128A (en) * 2021-12-30 2022-04-08 咪咕视讯科技有限公司 Multi-view positioning detection method and device
CN114419349B (en) * 2022-03-30 2022-07-15 中国科学技术大学 Image matching method and device


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7428998B2 (en) * 2003-11-13 2008-09-30 Metrologic Instruments, Inc. Automatic hand-supportable image-based bar code symbol reader having image-processing based bar code reading subsystem employing simple decode image processing operations applied in an outwardly-directed manner referenced from the center of a captured narrow-area digital image of an object bearing a 1D bar code symbol
US8024193B2 (en) * 2006-10-10 2011-09-20 Apple Inc. Methods and apparatus related to pruning for concatenative text-to-speech synthesis
CN103959308A (en) * 2011-08-31 2014-07-30 Metaio有限公司 Method of matching image features with reference features
CN102663374B (en) * 2012-04-28 2014-06-04 北京工业大学 Multi-class Bagging gait recognition method based on multi-characteristic attribute
CN104281572A (en) * 2013-07-01 2015-01-14 中国科学院计算技术研究所 Target matching method and system based on mutual information
CN104061907A (en) * 2014-07-16 2014-09-24 中南大学 Viewing-angle greatly-variable gait recognition method based on gait three-dimensional contour matching synthesis
CN106096532B (en) * 2016-06-03 2019-08-09 山东大学 A kind of across visual angle gait recognition method based on tensor simultaneous discriminant analysis
CN106780551A (en) * 2016-11-18 2017-05-31 湖南拓视觉信息技术有限公司 A kind of Three-Dimensional Moving Targets detection method and system
CN109543602A (en) * 2018-11-21 2019-03-29 太原理工大学 A kind of recognition methods again of the pedestrian based on multi-view image feature decomposition
CN110738146A (en) * 2019-09-27 2020-01-31 华中科技大学 target re-recognition neural network and construction method and application thereof
CN111814584A (en) * 2020-06-18 2020-10-23 北京交通大学 Vehicle weight identification method under multi-view-angle environment based on multi-center measurement loss
CN112132014A (en) * 2020-09-22 2020-12-25 德州学院 Target re-identification method and system based on non-supervised pyramid similarity learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Human Action Recognition Algorithms Based on Hybrid Collaborative Training; Jing Chenyong et al.; Computer Science; pp. 275-278 *

Also Published As

Publication number Publication date
CN112906557A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN109948425B (en) Pedestrian searching method and device for structure-aware self-attention and online instance aggregation matching
CN111259786B (en) Pedestrian re-identification method based on synchronous enhancement of appearance and motion information of video
CN106096561B (en) Infrared pedestrian detection method based on image block deep learning features
CN103971386B (en) A kind of foreground detection method under dynamic background scene
CN109712105B (en) Image salient object detection method combining color and depth information
CN110263697A (en) Pedestrian based on unsupervised learning recognition methods, device and medium again
CN109165540B (en) Pedestrian searching method and device based on prior candidate box selection strategy
CN110414368A (en) A kind of unsupervised pedestrian recognition methods again of knowledge based distillation
CN112507901B (en) Unsupervised pedestrian re-identification method based on pseudo tag self-correction
CN105809672B (en) A kind of image multiple target collaboration dividing method constrained based on super-pixel and structuring
CN112906557B (en) Multi-granularity feature aggregation target re-identification method and system under multi-view angle
CN109492583A (en) A kind of recognition methods again of the vehicle based on deep learning
CN105869178A (en) Method for unsupervised segmentation of complex targets from dynamic scene based on multi-scale combination feature convex optimization
CN111461043B (en) Video significance detection method based on deep network
CN112070010B (en) Pedestrian re-recognition method for enhancing local feature learning by combining multiple-loss dynamic training strategies
CN113313123B (en) Glance path prediction method based on semantic inference
CN108846416A (en) The extraction process method and system of specific image
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
CN110569706A (en) Deep integration target tracking algorithm based on time and space network
CN114372523A (en) Binocular matching uncertainty estimation method based on evidence deep learning
CN113591545A (en) Deep learning-based multistage feature extraction network pedestrian re-identification method
CN104463962B (en) Three-dimensional scene reconstruction method based on GPS information video
CN116402690A (en) Road extraction method, system, equipment and medium in high-resolution remote sensing image based on multi-head self-attention mechanism
CN115690549A (en) Target detection method for realizing multi-dimensional feature fusion based on parallel interaction architecture model
Han et al. Light-field depth estimation using RNN and CRF

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 400000 6-1, 6-2, 6-3, 6-4, building 7, No. 50, Shuangxing Avenue, Biquan street, Bishan District, Chongqing

Applicant after: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.

Address before: 400000 2-2-1, 109 Fengtian Avenue, tianxingqiao, Shapingba District, Chongqing

Applicant before: CHONGQING ZHAOGUANG TECHNOLOGY CO.,LTD.

GR01 Patent grant
GR01 Patent grant