
CN112507893A - Distributed unsupervised pedestrian re-identification method based on edge computing - Google Patents

Distributed unsupervised pedestrian re-identification method based on edge computing

Info

Publication number
CN112507893A
CN112507893A
Authority
CN
China
Prior art keywords
pedestrian
layer
pedestrian image
edge device
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011464486.6A
Other languages
Chinese (zh)
Inventor
吕建明
胡超杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202011464486.6A
Publication of CN112507893A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a distributed unsupervised pedestrian re-identification method based on edge computing, which comprises the following steps: 1) constructing a distributed system consisting of edge devices; 2) each edge device processes the collected pedestrian image data and obtains local pseudo labels; 3) each edge device trains a convolutional neural network model with its local pedestrian image data and local pseudo labels; 4) the convolutional neural network model is trained in a random-walk manner in the distributed system; 5) the pedestrian image features extracted by the convolutional neural network model are interactively clustered among the edge devices and neighbor lists are established; 6) each edge device computes global pseudo labels for its pedestrian image data according to the neighbor lists; 7) each edge device trains the convolutional neural network model with local pedestrian image data and global pseudo labels; 8) the convolutional neural network model is trained by random walk in the distributed system using the global pseudo labels; 9) features of a test picture are extracted and sent to the edge devices, where pedestrian re-identification is realized by similarity ranking.

Description

Distributed unsupervised pedestrian re-identification method based on edge computing
Technical Field
The invention relates to the technical fields of edge computing and computer vision, in particular to a distributed unsupervised pedestrian re-identification method based on edge computing.
Background
Pedestrian re-identification matches images of the same pedestrian across different camera scenes, and serves as an important auxiliary task for identifying pedestrians when their identity information cannot be accurately acquired. It is widely applied in intelligent video surveillance, intelligent security, and related fields.
Traditional pedestrian re-identification mainly faces the problem that pedestrians show large appearance differences under different cameras: differences in posture, imaging color, and environmental occlusion. Secondly, the tens of thousands of cameras in a city generate a large amount of video data, whose transmission and analysis place a huge burden on cloud servers, further increasing the difficulty of a re-identification task with strict timeliness requirements.
At present, pedestrian re-identification research is mainly divided into two categories: supervised and unsupervised. Supervised pedestrian re-identification is trained on labeled datasets, generally with deep learning methods; the labels accurately guide the direction of model optimization, and supervised methods have achieved strong results on mainstream datasets. However, a model trained on one labeled dataset suffers an obvious performance drop when transferred to other datasets for testing, so unsupervised pedestrian re-identification has gradually become a mainstream research direction in the field. Current unsupervised work focuses on transfer learning, incrementally training the model with labeled source-domain data and unlabeled target-domain data so that it generalizes better.
At present, most pedestrian re-identification algorithms are centralized: massive data are delivered to a server for batch processing. In a real scene, the tens of thousands of cameras in a city generate massive video data at all times, and a centralized algorithm puts great pressure on servers and network bandwidth, making real-time processing of the data difficult to guarantee.
In addition, current pedestrian re-identification algorithms are trained on datasets of small scale, and no million-scale dataset is available for research. In a real scene, the pedestrian data acquired by cameras grows dynamically in real time, yet most current mainstream algorithms do not consider how to optimize and incrementally train a model in an environment where data grows dynamically.
Edge computing is defined as follows: edge computing refers to an open platform that integrates network, computing, storage, and application core capabilities on the side close to the object or data source, providing nearest-end services nearby. Applications are initiated on the edge side, producing faster network service responses and meeting the industry's basic requirements for real-time business, application intelligence, security, and privacy protection.
The distributed algorithm is defined as follows: a distributed algorithm is an algorithm designed to run on computer hardware consisting of distributed processors that are able to communicate with each other.
The centralized algorithm is defined as follows: a centralized algorithm is designed to run on a large central host; terminals connected to the host serve only as input and output devices for data and have no processing capability.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a distributed unsupervised pedestrian re-identification method based on edge computing.
The purpose of the invention can be achieved by adopting the following technical scheme:
a distributed unsupervised pedestrian re-identification method based on edge calculation comprises the following steps:
s1, constructing a distributed system consisting of a plurality of edge devices, wherein each edge device is used for collecting pedestrian data in a monitoring range, and each edge device is also preset with a convolutional neural network model pre-trained by ImageNet, wherein the pedestrian data comprises pedestrian image data, the collection time of the pedestrian image data and the position information of the pedestrian image data;
s2, each edge device generates a countermeasure network cycleGAN to carry out style migration on the image through circulation consistency on the pedestrian image data acquired locally, and assigns the pedestrian image data with space-time continuity to the same local pseudo label by combining the acquisition time of the pedestrian image data and the position information of the pedestrian image data;
s3, inputting the pedestrian image data obtained through the processing in the step S2 into a convolutional neural network model by each edge device to extract pedestrian image features, then inputting the pedestrian image features and the pedestrian image local pseudo labels into a classifier initialized at random, and training a minimized first cross entropy loss function until the convolutional neural network model and the classifier converge, wherein the pedestrian image features extracted in the training process are stored locally in the current edge device;
s4, carrying out random walk on the converged convolutional neural network model trained in the step S3 in a distributed system, extracting pedestrian image features of local pedestrian image data in the current edge device after reaching one edge device, inputting the pedestrian image features and the local pseudo labels of the pedestrian image data into a classifier initialized randomly, training the minimum cross entropy loss until the convolutional neural network model and the classifier converge, wherein the pedestrian image features extracted in the training process are stored in the current edge device locally, the converged convolutional neural network model replaces the convolutional neural network model stored locally in the current edge device and then continues to carry out random walk in the distributed system until the walk turns reach a preset value, and then ending the walk;
s5, each edge device interacts the locally stored pedestrian image features with other edge devices in the distributed system, and the pedestrian image features between different edge devices establish a neighbor list by using a clustering algorithm which is the nearest neighbor of each other;
s6, inputting each neighbor list established according to the pedestrian image characteristics in the step S5 into a hash function by each edge device to obtain a globally unique hash value in the distributed system, and taking the hash value as a global pseudo label of the pedestrian image data in the distributed system;
s7, extracting pedestrian image features through a convolutional neural network model stored in each edge device after the random walk is finished in the step S4, inputting the pedestrian image features and global pseudo labels of the pedestrian images into a classifier initialized at random, and training a minimized second cross entropy loss function until the convolutional neural network model and the classifier converge, wherein the pedestrian image features extracted in the training process are stored in the current edge device locally;
s8, the converged convolutional neural network model and the classifier trained in the step S7 perform random walk in the distributed system, pedestrian image features in local pedestrian image data of the current edge device are extracted after each edge device is reached, the pedestrian image features and the global pseudo labels of the pedestrian image data are input into the classifier, the minimum cross entropy loss is trained until the convolutional neural network model and the classifier are converged, wherein the pedestrian image features extracted in the training process are stored in the local edge device, the converged convolutional neural network model and the classifier replace the convolutional neural network model and the classifier which are stored in the local edge device, then the random walk is continuously performed in the distributed system, and the walk is finished until the number of rounds of the walk reaches a preset value;
s9, extracting the pedestrian image features of the pedestrian image data to be detected by using the convolutional neural network model obtained by training in the distributed system, sending the pedestrian image features to edge equipment in the distributed system, and performing feature similarity sequencing with the local pedestrian image features of the current edge equipment to obtain a pedestrian re-recognition result, so that the pedestrian re-recognition is realized.
Further, in step S1, each edge device acquires pedestrian image data by using a YOLO target detection algorithm and records the acquisition time of the pedestrian image data and the position information of the pedestrian image data;
presetting a convolutional neural network model in each edge device, wherein the convolutional neural network model adopts a ResNet-50 network as the backbone network, and the ResNet-50 network is pre-trained on the ImageNet dataset so that it obtains good initial weights;
wherein the network structure of ResNet-50 is as follows: a convolutional layer conv1, a BN layer bn1, and a max pooling layer max_pool, followed by four stages of bottleneck blocks: stage layer1 with three blocks (layer1.0-layer1.2), stage layer2 with four blocks (layer2.0-layer2.3), stage layer3 with six blocks (layer3.0-layer3.5), and stage layer4 with three blocks (layer4.0-layer4.2); each block consists of convolutional layers conv1, conv2, and conv3 with corresponding BN layers bn1, bn2, and bn3 (e.g. layer1.0.conv1, layer1.0.bn1, ..., layer1.0.conv3, layer1.0.bn3), and the first block of each stage additionally contains a downsampling layer (layer1.0.downsample, layer2.0.downsample, layer3.0.downsample, layer4.0.downsample); the stages are followed by a global average pooling layer and a fully connected layer.
Further, the cycle-consistent generative adversarial network CycleGAN includes a first generative adversarial network and a second generative adversarial network, wherein the first generative adversarial network includes a forward generator and a forward discriminator, and the second generative adversarial network includes a reverse generator and a reverse discriminator; the objective equation of CycleGAN is as follows:
L(G, F, D_A, D_B) = L_GAN(G, D_B, A, B) + L_GAN(F, D_A, B, A) + L_cyc(G, F)
where L_GAN(G, D_B, A, B) is the loss function of the first generative adversarial network, L_GAN(F, D_A, B, A) is the loss function of the second generative adversarial network, L_cyc(G, F) is the cycle-consistency loss function, G is the forward generator, F is the reverse generator, D_A is the forward discriminator, D_B is the reverse discriminator, A is the original-style data, and B is the target-style data.
Further, the process of step S3 is as follows:
Each edge device inputs the pedestrian image data processed in step S2 into the convolutional neural network model to extract pedestrian image features. The pedestrian images are defined as x^m_{n,k}, where x^m_{n,k} denotes the k-th image of the n-th pedestrian image class in the m-th edge device. The pedestrian image data class label is defined as y^m_n, denoting the n-th pedestrian image data class label in the m-th edge device; the number of classes of pedestrian image data in each edge device equals the number of local pseudo labels determined according to the spatio-temporal continuity of the images in step S2. The pedestrian image features extracted by the convolutional neural network model are defined as:
F^m = {f^m_1, f^m_2, ..., f^m_N}
where N denotes the number of pedestrian image data classes in the edge device, m denotes the edge device number, and f^m_n denotes the image feature of the n-th pedestrian class on the m-th edge device;
the pedestrian image features and the local pseudo labels are input into a randomly initialized classifier consisting of a fully connected layer FC1 and a softmax layer; the output dimension of FC1 equals the number of local pseudo labels, and the softmax layer normalizes the output of the fully connected layer into the probability that an image feature belongs to each class. The first cross-entropy loss function is minimized during training until the convolutional neural network model and the classifier converge; the first cross-entropy loss function used for training optimization is:
L_1 = - Σ_{n=1}^{N} Σ_{k=1}^{K_n} log p(y^m_n | x^m_{n,k})
where p(y^m_n | x^m_{n,k}) is the predicted probability that image x^m_{n,k} belongs to class y^m_n, N denotes the number of pedestrian image data classes in the edge device, K_n is the number of pedestrian images in each class of pedestrian image data, and m denotes the edge device number.
Further, in step S4, a specified number of edge devices are selected from the distributed system, and the converged convolutional neural network model trained in step S3 performs random walk in the edge devices selected from the distributed system.
Further, the mutual nearest-neighbor clustering rule in step S5 is as follows:
f^j_v = argmin_{c ∈ M_j} d(f^i_u, c),  f^i_u = argmin_{c ∈ M_i} d(f^j_v, c)
where f^i_u denotes the pedestrian image feature of the u-th pedestrian class in edge device i, f^j_v denotes the pedestrian image feature of the v-th pedestrian class in edge device j, M_i denotes the set of all pedestrian image features in edge device i, M_j denotes the set of all pedestrian image features in edge device j, c denotes an image feature in the feature set of an edge device, and d(·,·) denotes the L2 distance between pedestrian image features, the L2 distance being the distance between two points in Euclidean space. If two pedestrian image features belonging to two different edge devices are each other's nearest neighbor under this distance metric, an association is established between the pedestrian image data corresponding to the two features: a neighbor list is created for the pedestrian image data, and the basic information of the associated pedestrian image data on other edge devices is added to the list, the basic information including the number of the edge device to which the pedestrian image data belongs and its local pseudo label.
Further, the mapping formula of the hash function in step S6 is:
l^m_n = H(f^m_n, {f^h_j})
where l^m_n denotes the global pseudo label of the n-th pedestrian class on the m-th edge device, {f^h_j} denotes the set of pedestrian image features on other edge devices that have established an association with f^m_n, and H(·,·) denotes a hash function that computes a hash value from the two input parameters; f^h_j, the image feature of the j-th pedestrian on the h-th edge device, has an association relationship with f^m_n, the image feature of the n-th pedestrian on the m-th edge device.
Further, the process of step S7 is as follows:
The pedestrian image features and the global pseudo labels of the pedestrian images are input into a randomly initialized classifier consisting of a fully connected layer FC1 and a softmax layer; the output dimension of FC1 equals the total number of global pseudo labels computed in step S6, and the softmax layer normalizes the output of the fully connected layer into the probability that an image feature belongs to each class. The second cross-entropy loss function is minimized during training until the convolutional neural network model and the classifier converge; the second cross-entropy loss function used for training optimization is:
L_2 = - Σ_{n=1}^{N} Σ_{k=1}^{K_n} log p(l^m_n | x^m_{n,k})
where p(l^m_n | x^m_{n,k}) is the predicted probability that image x^m_{n,k} belongs to class l^m_n, N denotes the number of image data classes in the edge device, K_n is the number of pedestrian images in each class of pedestrian image data, m is the edge device number, and l^m_n, the global pseudo label of the pedestrian image computed by the hash function in step S6, denotes the class information to which the pedestrian image belongs.
The invention is the first to provide a distributed unsupervised pedestrian re-identification method based on edge computing, which can incrementally train a pedestrian re-identification model in a distributed edge-computing environment. Compared with current mainstream research methods, it has the following advantages and effects:
1) the pedestrian data are kept in the edge device, so that the privacy of the pedestrians can be well protected;
2) the training of the convolutional neural network model and the pedestrian re-identification task are completed at the edge equipment end, so that the data transmission bandwidth can be greatly saved;
3) the method is completely unsupervised and requires no labeled pedestrian data for training; it has good scalability and fast model training convergence, and has high practical value in the construction of smart cities.
Drawings
FIG. 1 is a diagram of a neural network on a single edge device in an embodiment of the present invention;
fig. 2 is a schematic diagram of cross-camera feature vector clustering and neighbor list screening in the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
As shown in fig. 1, this embodiment provides a method for training a pedestrian re-identification model in a distributed environment, comprising the following steps:
s1, constructing a distributed system composed of a plurality of edge devices, wherein each edge device is used for collecting pedestrian data in a monitoring range, and each edge device is also preset with a convolutional neural network model pre-trained by ImageNet, wherein the pedestrian data comprises pedestrian image data, the collecting time of the pedestrian image data and the position information of the pedestrian image data.
The convolutional neural network model in step S1 is a ResNet-50 network, and the ResNet-50 network is pre-trained on the ImageNet data set to obtain an ideal initial value for the ResNet-50 network.
The network structure of ResNet-50 is as follows: a convolutional layer conv1, a BN layer bn1, and a max pooling layer max_pool, followed by four stages of bottleneck blocks: stage layer1 with three blocks (layer1.0-layer1.2), stage layer2 with four blocks (layer2.0-layer2.3), stage layer3 with six blocks (layer3.0-layer3.5), and stage layer4 with three blocks (layer4.0-layer4.2); each block consists of convolutional layers conv1, conv2, and conv3 with corresponding BN layers bn1, bn2, and bn3 (e.g. layer1.0.conv1, layer1.0.bn1, ..., layer1.0.conv3, layer1.0.bn3), and the first block of each stage additionally contains a downsampling layer; the stages are followed by a global average pooling layer and a fully connected layer.
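For concreteness, a minimal sketch of presetting such a backbone on an edge device is given below, assuming PyTorch/torchvision; the helper name build_backbone is hypothetical, and the final classification layer is replaced with an identity so the model outputs 2048-dimensional pedestrian features:

```python
import torch.nn as nn
import torchvision

def build_backbone():
    """ResNet-50 backbone pre-trained on ImageNet, as preset on each edge device."""
    # Classic torchvision API; newer versions use
    # weights=torchvision.models.ResNet50_Weights.IMAGENET1K_V1 instead.
    model = torchvision.models.resnet50(pretrained=True)
    model.fc = nn.Identity()  # strip the ImageNet classifier so the model outputs 2048-d features
    return model
```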
S2, each edge device performs style migration on its locally acquired pedestrian image data through the cycle-consistent generative adversarial network CycleGAN, and, combining the acquisition time and the position information of the pedestrian image data, assigns pedestrian image data with spatio-temporal continuity the same local pseudo label.
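The patent does not specify concrete spatio-temporal thresholds, so the sketch below illustrates this pseudo-label assignment under assumed time and distance gaps; MAX_TIME_GAP, MAX_DIST_GAP, and the Detection record are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical thresholds; the patent gives no concrete values.
MAX_TIME_GAP = 2.0  # seconds within which detections are treated as continuous
MAX_DIST_GAP = 1.5  # meters within which detections are treated as continuous

@dataclass
class Detection:
    image_path: str   # pedestrian image data
    timestamp: float  # acquisition time of the pedestrian image
    position: tuple   # (x, y) position information of the pedestrian image

def assign_local_pseudo_labels(detections):
    """Give detections with spatio-temporal continuity the same local pseudo label."""
    detections = sorted(detections, key=lambda d: d.timestamp)
    labels, tracks, next_label = {}, [], 0  # tracks: list of (last_detection, label)
    for det in detections:
        for i, (last, label) in enumerate(tracks):
            dt = det.timestamp - last.timestamp
            dx = ((det.position[0] - last.position[0]) ** 2 +
                  (det.position[1] - last.position[1]) ** 2) ** 0.5
            if dt <= MAX_TIME_GAP and dx <= MAX_DIST_GAP:
                labels[det.image_path] = label  # continue an existing track
                tracks[i] = (det, label)
                break
        else:  # no open track is spatio-temporally continuous with this detection
            labels[det.image_path] = next_label
            tracks.append((det, next_label))
            next_label += 1
    return labels
```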
The cycle-consistent generative adversarial network CycleGAN includes a first generative adversarial network and a second generative adversarial network; the first generative adversarial network comprises a forward generator and a forward discriminator, and the second generative adversarial network comprises a reverse generator and a reverse discriminator.
The objective equation of CycleGAN is as follows:
L(G, F, D_A, D_B) = L_GAN(G, D_B, A, B) + L_GAN(F, D_A, B, A) + L_cyc(G, F)
where L_GAN(G, D_B, A, B) is the loss function of the first generative adversarial network, L_GAN(F, D_A, B, A) is the loss function of the second generative adversarial network, and L_cyc(G, F) is the cycle-consistency loss function; G is the forward generator, F is the reverse generator, D_A is the forward discriminator, D_B is the reverse discriminator, A is the original-style data, and B is the target-style data.
In this embodiment, CycleGAN performs style migration on the pedestrian images of each edge device so that the styles of images acquired by different edge devices tend to be consistent, which alleviates the loss of retrieval precision caused by image style differences in pedestrian re-identification.
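A minimal PyTorch-style sketch of this objective follows; G, F, D_A, and D_B are assumed to be nn.Module generators and discriminators, the least-squares adversarial loss is one common choice, and the cycle weight lam is an assumption (the CycleGAN paper weights L_cyc by λ = 10):

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()  # least-squares adversarial loss, a common choice for CycleGAN
l1 = nn.L1Loss()    # cycle-consistency loss

def cyclegan_objective(G, F, D_A, D_B, real_a, real_b, lam=1.0):
    """Generator-side objective L_GAN(G,D_B,A,B) + L_GAN(F,D_A,B,A) + lam * L_cyc(G,F)."""
    fake_b = G(real_a)  # forward translation A -> B
    fake_a = F(real_b)  # reverse translation B -> A

    # Adversarial terms: generators try to make the discriminators output "real" (1).
    pred_b, pred_a = D_B(fake_b), D_A(fake_a)
    loss_gan_g = mse(pred_b, torch.ones_like(pred_b))
    loss_gan_f = mse(pred_a, torch.ones_like(pred_a))

    # Cycle consistency: F(G(a)) should reconstruct a, and G(F(b)) should reconstruct b.
    loss_cyc = l1(F(fake_b), real_a) + l1(G(fake_a), real_b)
    return loss_gan_g + loss_gan_f + lam * loss_cyc
```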
S3, each edge device inputs the pedestrian image data processed in step S2 into the convolutional neural network model to extract pedestrian image features, then inputs the features together with the local pseudo labels into a randomly initialized classifier, and minimizes the first cross-entropy loss function during training until the convolutional neural network model and the classifier converge. The pedestrian image features extracted in the training process are stored locally on the edge device.
In the above step S3, each edge device inputs the pedestrian image data processed in step S2 into the convolutional neural network model described in step S1 to extract pedestrian image features, which are defined as:
F^m = {f^m_1, f^m_2, ..., f^m_N}
where N denotes the number of pedestrian image data classes in the edge device, m denotes the edge device number, and f^m_n denotes the image feature of the n-th pedestrian class on the m-th edge device.
The pedestrian image features and the local pseudo labels are input into a randomly initialized classifier consisting of a fully connected layer FC1 and a softmax layer; the output dimension of FC1 equals the number of local pseudo labels, the softmax layer normalizes the output of the fully connected layer into the probability that an image feature belongs to each class, and the cross-entropy loss is minimized during training until the convolutional neural network model and the classifier converge. The first cross-entropy loss function used for training optimization is:
L_1 = - Σ_{n=1}^{N} Σ_{k=1}^{K_n} log p(y^m_n | x^m_{n,k})
where p(y^m_n | x^m_{n,k}) is the predicted probability that image x^m_{n,k} belongs to class y^m_n, N is the number of local pedestrian image data classes, K_n is the number of pedestrian images in each class of pedestrian image data, and m is the edge device number.
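A minimal sketch of this local training step is given below, assuming PyTorch, a hypothetical data loader yielding (image, local pseudo label) batches, and a 2048-dimensional backbone output; the softmax layer is folded into CrossEntropyLoss, and the hyperparameters are illustrative only:

```python
import torch
import torch.nn as nn

def train_local(model, loader, num_pseudo_labels, device="cuda", epochs=10):
    """One edge device's training step: backbone plus randomly initialized FC1 classifier,
    minimizing cross-entropy (softmax is folded into CrossEntropyLoss)."""
    classifier = nn.Linear(2048, num_pseudo_labels).to(device)  # FC1; assumes 2048-d features
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(
        list(model.parameters()) + list(classifier.parameters()), lr=0.01, momentum=0.9)
    feature_store = []  # extracted features stay local to the edge device
    model = model.to(device)
    model.train()
    for _ in range(epochs):
        for images, pseudo_labels in loader:  # local pseudo labels from step S2
            images, pseudo_labels = images.to(device), pseudo_labels.to(device)
            feats = model(images)             # pedestrian image features
            loss = criterion(classifier(feats), pseudo_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            feature_store.append(feats.detach().cpu())
    return model, classifier, torch.cat(feature_store)
```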
S4, the convolutional neural network model trained to convergence in step S3 performs random walk in the distributed system. After reaching an edge device, it extracts the pedestrian image features of that device's local pedestrian image data; the features and the local pseudo labels are input into a randomly initialized classifier, and the cross-entropy loss is minimized during training until the convolutional neural network model and the classifier converge. The pedestrian image features extracted during training are stored locally on the current edge device. The converged convolutional neural network model replaces the model stored locally on that device and then continues its random walk in the distributed system until the number of walk rounds reaches a preset value, after which the walk ends.
In step S4, a specified number of edge devices are selected from the distributed system, and the converged convolutional neural network model trained in step S3 performs random walk among the selected edge devices. In the present invention the number of selected edge devices is 1, and the preset number of walk rounds is 60.
Through random-walk learning, the convolutional neural network model can asynchronously learn the pedestrian data acquired by each edge device, improving the accuracy of the pedestrian features it extracts without any pedestrian data being exchanged.
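A minimal sketch of the random-walk loop follows, reusing the train_local sketch above; the device objects and their loader/num_pseudo_labels attributes are hypothetical stand-ins for edge devices, and the 60 rounds match the preset value stated above:

```python
import random

def random_walk_training(devices, model, rounds=60):
    """Pass the model around the distributed system, fine-tuning on each visited device."""
    current = random.choice(devices)
    for _ in range(rounds):
        # Fine-tune on the current device's local data and local pseudo labels
        # (train_local is the local training step sketched earlier; hypothetical API).
        model, _, feats = train_local(model, current.loader, current.num_pseudo_labels)
        current.local_model = model       # converged model replaces the device's local copy
        current.local_features = feats    # features extracted during training stay local
        current = random.choice(devices)  # walk on to a randomly chosen device
    return model
```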
S5, each edge device interacts the locally stored pedestrian image features with other edge devices in the distributed system, and the pedestrian image features between different edge devices establish a neighbor list by using a clustering algorithm which is the nearest neighbor of each other.
In the step S5, the edge device interacts the locally stored image features of the pedestrian with all other edge devices, and the image features of the pedestrian between different edge devices establish a neighbor list by using a clustering algorithm that is nearest to each other. The mutual nearest neighbor clustering algorithm formula is as follows:
f^j_v = argmin_{c ∈ M_j} d(f^i_u, c),  f^i_u = argmin_{c ∈ M_i} d(f^j_v, c)
where f^i_u denotes the pedestrian image feature of the u-th pedestrian class in edge device i, f^j_v denotes the pedestrian image feature of the v-th pedestrian class in edge device j, M_i denotes the set of all pedestrian image features in edge device i, M_j denotes the set of all pedestrian image features in edge device j, c denotes an image feature in the feature set of an edge device, and d(·,·) denotes the L2 distance between pedestrian image features, i.e. the distance between two points in Euclidean space, a common distance metric for image features. If two pedestrian image features belonging to two different edge devices are each other's nearest neighbor under this metric, an association is established between the corresponding pedestrian image data: a neighbor list is created for the pedestrian image data, and the basic information of the associated pedestrian image data on other edge devices is added to the list, the basic information including the number of the edge device to which the pedestrian image data belongs and its local pseudo label.
And S6, each edge device inputs the neighbor list established by each pedestrian image feature in the step S5 into a hash function to obtain a globally unique hash value in the distributed system, the pedestrian image features with the same neighbor list obtain the same hash value, and the hash value is used as a global pseudo label of the pedestrian image data in the distributed system.
In the above step S6, each edge device inputs each neighbor list established according to the pedestrian image features in step S5 into a hash function to obtain a hash value that is globally unique within the distributed system, and uses the hash value as the global pseudo label of the pedestrian image data. The hash mapping formula is:
l^m_n = H(f^m_n, {f^h_j})
where H(·,·) denotes a hash function that computes a hash value from the two input parameters, and {f^h_j} denotes the set of image features f^h_j, each the image feature of the j-th pedestrian on the h-th edge device, that have an association relationship with the image feature of the n-th pedestrian on the m-th edge device. The hash function used in this embodiment is SHA256.
And S7, extracting the pedestrian image features through the convolutional neural network model stored in each edge device after the random walk is finished in the step S4, inputting the pedestrian image features and the global pseudo labels of the pedestrian images into a classifier initialized at random, and training a minimum second cross entropy loss function until the convolutional neural network model and the classifier converge. The pedestrian image features extracted in the training process are stored locally in the current edge device.
In the above step S7, each edge device inputs the pedestrian image data processed in step S2 into the convolutional neural network model trained in step S4 to extract pedestrian image features, inputs the features and the global pseudo labels obtained in step S6 into a randomly initialized classifier, and minimizes the cross-entropy loss during training until the convolutional neural network model and the classifier converge. The pedestrian image features extracted during training are stored locally on the edge device. The loss function used for training optimization is:
L_2 = - Σ_{n=1}^{N} Σ_{k=1}^{K_n} log p(l^m_n | x^m_{n,k})
where p(l^m_n | x^m_{n,k}) is the predicted probability that image x^m_{n,k} belongs to class l^m_n, N denotes the number of image data classes in the edge device, K_n is the number of pedestrian images in each class of pedestrian image data, m is the edge device number, and l^m_n, the global pseudo label of the pedestrian image computed by the hash function in step S6, denotes the class information to which the pedestrian image belongs.
S8, the convolutional neural network model and the classifier trained to convergence in step S7 perform random walk in the distributed system. After reaching an edge device, the model extracts the pedestrian image features of that device's local pedestrian image data; the features and the global pseudo labels are input into the classifier, and the cross-entropy loss is minimized during training until the convolutional neural network model and the classifier converge. The pedestrian image features extracted during training are stored locally on the edge device; the converged convolutional neural network model and classifier replace the model and classifier stored locally on that device, and the random walk then continues in the distributed system until the number of walk rounds reaches a preset value, after which the walk ends.
In step S8, a specified number of edge devices are selected from the distributed system, and the converged convolutional neural network model and classifier trained in step S7 perform random walk among the selected edge devices. In the present invention the number of selected edge devices is 1, and the preset number of walk rounds is 60.
Through random-walk learning and the use of global pseudo labels, the convolutional neural network model can learn the associations of pedestrians across different edge devices without any pedestrian data being exchanged; such associations are an important supervision signal for pedestrian re-identification and significantly improve the accuracy of the features the model extracts.
S9, extracting pedestrian image features of the pedestrian image data to be detected with the convolutional neural network model obtained by training in the distributed system, sending the features to an edge device in the distributed system, and ranking them by feature similarity against the pedestrian image features local to that edge device to obtain the pedestrian re-identification result, thereby realizing pedestrian re-identification.
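A minimal NumPy sketch of this final retrieval step is given below; cosine similarity over normalized features is an assumed concrete choice, since the patent specifies only feature-similarity ranking:

```python
import numpy as np

def rank_gallery(query_feat, gallery_feats, gallery_ids, top_k=10):
    """Rank the edge device's locally stored gallery features by similarity to a query."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity of each gallery feature to the query
    order = np.argsort(-sims)[:top_k]  # most similar first
    return [(gallery_ids[i], float(sims[i])) for i in order]
```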
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (8)

1. A distributed unsupervised pedestrian re-identification method based on edge computing, characterized by comprising the following steps:
s1, constructing a distributed system consisting of a plurality of edge devices, wherein each edge device is used for collecting pedestrian data in a monitoring range, and each edge device is also preset with a convolutional neural network model pre-trained by ImageNet, wherein the pedestrian data comprises pedestrian image data, the collection time of the pedestrian image data and the position information of the pedestrian image data;
s2, each edge device generates a countermeasure network cycleGAN to carry out style migration on the image through circulation consistency on the pedestrian image data acquired locally, and assigns the pedestrian image data with space-time continuity to the same local pseudo label by combining the acquisition time of the pedestrian image data and the position information of the pedestrian image data;
s3, inputting the pedestrian image data obtained through the processing in the step S2 into a convolutional neural network model by each edge device to extract pedestrian image features, then inputting the pedestrian image features and the pedestrian image local pseudo labels into a classifier initialized at random, and training a minimized first cross entropy loss function until the convolutional neural network model and the classifier converge, wherein the pedestrian image features extracted in the training process are stored locally in the current edge device;
s4, carrying out random walk on the converged convolutional neural network model trained in the step S3 in a distributed system, extracting pedestrian image features of local pedestrian image data in the current edge device after reaching one edge device, inputting the pedestrian image features and the local pseudo labels of the pedestrian image data into a classifier initialized randomly, training the minimum cross entropy loss until the convolutional neural network model and the classifier converge, wherein the pedestrian image features extracted in the training process are stored in the current edge device locally, the converged convolutional neural network model replaces the convolutional neural network model stored locally in the current edge device and then continues to carry out random walk in the distributed system until the walk turns reach a preset value, and then ending the walk;
s5, each edge device interacts the locally stored pedestrian image features with other edge devices in the distributed system, and the pedestrian image features between different edge devices establish a neighbor list by using a clustering algorithm which is the nearest neighbor of each other;
s6, inputting each neighbor list established according to the pedestrian image characteristics in the step S5 into a hash function by each edge device to obtain a globally unique hash value in the distributed system, and taking the hash value as a global pseudo label of the pedestrian image data in the distributed system;
s7, extracting pedestrian image features through a convolutional neural network model stored in each edge device after the random walk is finished in the step S4, inputting the pedestrian image features and global pseudo labels of the pedestrian images into a classifier initialized at random, and training a minimized second cross entropy loss function until the convolutional neural network model and the classifier converge, wherein the pedestrian image features extracted in the training process are stored in the current edge device locally;
s8, the converged convolutional neural network model and the classifier trained in the step S7 perform random walk in the distributed system, pedestrian image features in local pedestrian image data of the current edge device are extracted after each edge device is reached, the pedestrian image features and the global pseudo labels of the pedestrian image data are input into the classifier, the minimum cross entropy loss is trained until the convolutional neural network model and the classifier are converged, wherein the pedestrian image features extracted in the training process are stored in the local edge device, the converged convolutional neural network model and the classifier replace the convolutional neural network model and the classifier which are stored in the local edge device, then the random walk is continuously performed in the distributed system, and the walk is finished until the number of rounds of the walk reaches a preset value;
s9, extracting the pedestrian image features of the pedestrian image data to be detected by using the convolutional neural network model obtained by training in the distributed system, sending the pedestrian image features to edge equipment in the distributed system, and performing feature similarity sequencing with the local pedestrian image features of the current edge equipment to obtain a pedestrian re-recognition result, so that the pedestrian re-recognition is realized.
2. The distributed unsupervised pedestrian re-identification method based on edge computing as claimed in claim 1, wherein in step S1, each edge device acquires pedestrian image data by using a YOLO target detection algorithm and records the acquisition time of the pedestrian image data and the position information of the pedestrian image data;
presetting a convolutional neural network model in each edge device, wherein the convolutional neural network model adopts a ResNet-50 network as the backbone network, and the ResNet-50 network is pre-trained on the ImageNet dataset so that it obtains good initial weights;
wherein the network structure of ResNet-50 is as follows: a convolutional layer conv1, a BN layer bn1, and a max pooling layer max_pool, followed by four stages of bottleneck blocks: stage layer1 with three blocks (layer1.0-layer1.2), stage layer2 with four blocks (layer2.0-layer2.3), stage layer3 with six blocks (layer3.0-layer3.5), and stage layer4 with three blocks (layer4.0-layer4.2); each block consists of convolutional layers conv1, conv2, and conv3 with corresponding BN layers bn1, bn2, and bn3, and the first block of each stage additionally contains a downsampling layer; the stages are followed by a global average pooling layer and a fully connected layer.
3. The distributed unsupervised pedestrian re-identification method based on edge computing as claimed in claim 1, wherein the cycle-consistent generative adversarial network CycleGAN includes a first generative adversarial network and a second generative adversarial network, wherein the first generative adversarial network includes a forward generator and a forward discriminator, the second generative adversarial network includes a reverse generator and a reverse discriminator, and the objective equation of CycleGAN is as follows:
L(G, F, D_A, D_B) = L_GAN(G, D_B, A, B) + L_GAN(F, D_A, B, A) + L_cyc(G, F)
where L_GAN(G, D_B, A, B) is the loss function of the first generative adversarial network, L_GAN(F, D_A, B, A) is the loss function of the second generative adversarial network, L_cyc(G, F) is the cycle-consistency loss function, G is the forward generator, F is the reverse generator, D_A is the forward discriminator, D_B is the reverse discriminator, A is the original-style data, and B is the target-style data.
4. The distributed unsupervised pedestrian re-identification method based on edge computing as claimed in claim 1, wherein the process of step S3 is as follows:
each edge device inputs the pedestrian image data processed in step S2 into the convolutional neural network model to extract pedestrian image features; the pedestrian images are defined as x^m_{n,k}, where x^m_{n,k} denotes the k-th image of the n-th pedestrian image class in the m-th edge device; the pedestrian image data class label is defined as y^m_n, denoting the n-th pedestrian image data class label in the m-th edge device, the number of classes of pedestrian image data in each edge device being the number of local pseudo labels determined according to the spatio-temporal continuity of the images in step S2; the pedestrian image features extracted by the convolutional neural network model are defined as:
F^m = {f^m_1, f^m_2, ..., f^m_N}
where N denotes the number of pedestrian image data classes in the edge device, m denotes the edge device number, and f^m_n denotes the image feature of the n-th pedestrian class on the m-th edge device;
the pedestrian image features and the local pseudo labels are input into a randomly initialized classifier consisting of a fully connected layer FC1 and a softmax layer, the output dimension of FC1 being the number of local pseudo labels, the softmax layer normalizing the output of the fully connected layer into the probability that an image feature belongs to each class; the first cross-entropy loss function is minimized during training until the convolutional neural network model and the classifier converge, the first cross-entropy loss function used for training optimization being:
L_1 = - Σ_{n=1}^{N} Σ_{k=1}^{K_n} log p(y^m_n | x^m_{n,k})
where p(y^m_n | x^m_{n,k}) is the predicted probability that image x^m_{n,k} belongs to class y^m_n, N denotes the number of pedestrian image data classes in the edge device, K_n is the number of pedestrian images in each class of pedestrian image data, and m denotes the edge device number.
5. The distributed unsupervised pedestrian re-identification method based on edge computing as claimed in claim 1, wherein in step S4, a specified number of edge devices are selected from the distributed system, and the converged convolutional neural network model trained in step S3 performs random walk among the edge devices selected from the distributed system.
6. The distributed unsupervised pedestrian re-identification method based on edge computing as claimed in claim 1, wherein the mutual nearest neighbor clustering algorithm in step S5 is formulated as

$$f_v^j = \arg\min_{c \in M_j} d\!\left(f_u^i, c\right) \quad\text{and}\quad f_u^i = \arg\min_{c \in M_i} d\!\left(f_v^j, c\right)$$

where $f_u^i$ denotes the pedestrian image features of the $u$-th pedestrian class in edge device $i$, $f_v^j$ denotes the pedestrian image features of the $v$-th pedestrian class in edge device $j$, $M_i$ and $M_j$ denote the sets of all pedestrian image features in edge devices $i$ and $j$ respectively, $c$ denotes an image feature in such a set, and $d(\cdot,\cdot)$ denotes the L2 distance between pedestrian image features, i.e. the Euclidean distance between two points. If two pedestrian image features belonging to two different edge devices are each other's nearest neighbor under this distance, an association is established between the corresponding pedestrian image data: a neighbor list is created for each pedestrian image datum, and the basic information of the associated pedestrian image data on the other edge device is added to that list, the basic information comprising the number of the edge device to which the pedestrian image data belongs and its local pseudo label.
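A minimal numpy sketch of this mutual-nearest-neighbour test, assuming each device's feature set is a matrix with one row per pedestrian class:

```python
import numpy as np

def mutual_nearest_pairs(M_i, M_j):
    """Return index pairs (u, v) such that row u of M_i and row v of M_j
    are each other's nearest neighbour under the L2 distance."""
    # Pairwise Euclidean distances between the two devices' feature sets.
    d = np.linalg.norm(M_i[:, None, :] - M_j[None, :, :], axis=2)
    nn_of_i = d.argmin(axis=1)   # nearest feature in M_j for each row of M_i
    nn_of_j = d.argmin(axis=0)   # nearest feature in M_i for each row of M_j
    return [(u, v) for u, v in enumerate(nn_of_i) if nn_of_j[v] == u]

# Each returned pair would then populate both sides' neighbor lists with the
# partner's edge-device number and local pseudo label.
rng = np.random.default_rng(0)
pairs = mutual_nearest_pairs(rng.normal(size=(4, 8)), rng.normal(size=(5, 8)))
```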
7. The distributed unsupervised pedestrian re-identification method based on edge computing as claimed in claim 1, wherein the mapping formula of the hash function in step S6 is:

$$\hat{y}_n^m = H\!\left(f_n^m,\ \Omega_n^m\right)$$

where $\Omega_n^m$ denotes the set of pedestrian image features on other edge devices that have established an association with $f_n^m$, $H(\cdot,\cdot)$ denotes a hash function that computes a corresponding hash value from its two input parameters, and $f_j^h \in \Omega_n^m$ indicates that the image features of the $j$-th pedestrian class on the $h$-th edge device and the image features of the $n$-th pedestrian class on the $m$-th edge device have an established association.
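Since the claim does not fix the concrete hash, the sketch below assumes one plausible convention: the association group is canonicalized to its smallest (device, label) pair before hashing, so that associated pedestrian classes on different devices receive the same global pseudo label. The function name and SHA-1 truncation are illustrative choices.

```python
import hashlib

def global_pseudo_label(device_id, local_label, neighbors):
    """Map a (device, local label) pair plus its cross-device associations to
    one global pseudo label, so every member of the group hashes alike."""
    group = sorted(neighbors + [(device_id, local_label)])
    anchor = group[0]                     # canonical representative (assumed convention)
    return hashlib.sha1(f"{anchor[0]}:{anchor[1]}".encode()).hexdigest()[:8]

# e.g. class 3 on device 2 is associated with class 7 on device 5:
assert global_pseudo_label(2, 3, [(5, 7)]) == global_pseudo_label(5, 7, [(2, 3)])
```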
8. The distributed unsupervised pedestrian re-identification method based on edge computing as claimed in claim 1, wherein step S7 is performed as follows:

the pedestrian image features and the global pseudo labels of the pedestrian images are input into a randomly initialized classifier comprising a fully connected layer FC1 and a softmax layer, wherein the output dimension of FC1 equals the total number of global pseudo labels computed in step S6 and the softmax layer normalizes the output of FC1 to yield the probability that an image feature belongs to a given class; training minimizes a second cross-entropy loss until the convolutional neural network model and the classifier converge, the second cross-entropy loss adopted by the training optimization being

$$L_{CE2}^m = -\sum_{n=1}^{N}\sum_{k=1}^{K_n} \log p\!\left(\hat{y}_n^m \mid x_{n,k}^m\right)$$

where $p(\hat{y}_n^m \mid x_{n,k}^m)$ is the predicted probability that image $x_{n,k}^m$ belongs to class $\hat{y}_n^m$, $N$ denotes the number of pedestrian image data classes in the edge device, $K_n$ is the number of pedestrian images in each class, $m$ denotes the edge device number, and $\hat{y}_n^m$ is the global pseudo label computed by the hash function in step S6, representing the class to which the pedestrian image belongs.
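Assuming the hypothetical `LocalClassifier` and `train_step` helpers from the sketch after claim 4, this second training stage differs only in its labels and output dimension; the backbone shape, label count, and data below are placeholder values, not part of the claim.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder backbone: flattens a 3x64x32 pedestrian crop into a 2048-d feature.
backbone = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 32, 2048))
# FC1 re-initialized so its output dimension equals the number of global pseudo labels.
classifier = LocalClassifier(feat_dim=2048, num_local_labels=500)
optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(classifier.parameters()), lr=0.01)

# Dummy batch standing in for local images paired with their global pseudo labels from S6.
images = torch.randn(16, 3, 64, 32)
global_labels = torch.randint(0, 500, (16,))
loader = DataLoader(TensorDataset(images, global_labels), batch_size=8)

for batch_images, batch_labels in loader:
    loss = train_step(backbone, classifier, optimizer, batch_images, batch_labels)
```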
CN202011464486.6A 2020-12-14 2020-12-14 Distributed unsupervised pedestrian re-identification method based on edge calculation Pending CN112507893A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011464486.6A CN112507893A (en) 2020-12-14 2020-12-14 Distributed unsupervised pedestrian re-identification method based on edge calculation

Publications (1)

Publication Number Publication Date
CN112507893A (en) 2021-03-16




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210316)