CN110738309B - DDNN training method and DDNN-based multi-view target identification method and system - Google Patents
DDNN training method and DDNN-based multi-view target identification method and system
- Publication number
- CN110738309B, CN201910931384.1A
- Authority
- CN
- China
- Prior art keywords
- ddnn
- model
- cloud
- edge
- sample image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a DDNN training method, a DDNN-based multi-view target identification method, and a DDNN-based multi-view target identification system, and belongs to the field of cloud computing. The method comprises the following steps: acquiring the information entropy of the cloud-side model of the distributed deep neural network on a sample image; constructing the DDNN objective function based on the information entropy of the sample image; and jointly training the edge-side model and the cloud-side model of the DDNN according to the DDNN objective function. The invention focuses on the knowledge-transfer method of a "teacher-student" network and provides a sample-weighting-based adaptive training method in the setting of the hierarchical multi-exit DDNN. The cloud-side model guides the whole training process of the edge-side model, and the edge-side model learns from the true labels and from the knowledge transferred by the cloud side at the same time.
Description
Technical Field
The invention belongs to the field of cloud computing, and particularly relates to a DDNN training method, a DDNN-based multi-view target identification method and a DDNN-based multi-view target identification system.
Background
Deep Neural Networks (DNNs) have a multi-layer structure, and their representation learning is likewise hierarchically distributed. For an input vector, layer-by-layer transmission introduces delay at the deeper layers of the DNN, and as the operational parameters accumulate, the computational energy consumption grows layer by layer, which is unfavorable for real-time control of the radio resources of next-generation mobile networks. On this basis, the Distributed Deep Neural Network (DDNN) model has been proposed. It has a distributed computing hierarchy: an edge-oriented DDNN maps sections of a single DNN onto distributed heterogeneous devices, including the cloud, the edge, and geographically distributed terminal devices.
When camera memory is limited, the task of recognizing targets near the multi-view data sources with deep-learning-based artificial intelligence becomes a distributed edge-intelligence problem. The DDNN has multiple exits, and a sample image has a feature representation of a different level at each exit. If a shallow layer of the DDNN can already recognize the target in an image correctly, the classification result can be output at the edge side, and no mid-level or even high-level feature extraction needs to be performed on that sample image at the cloud side. The computational overhead should instead be spent on the sample images that the cloud-side model considers complex, whereas a sample image that the edge-side model considers complex may still be a simple sample image for the cloud-side model. The cloud-side model is therefore expected to skip the simple sample images and be used mainly for processing relatively complex sample images, so that both the edge side and the cloud side are well trained and the overall performance of the DDNN improves. The training process resembles a teacher (the cloud-side model) telling a student (the edge-side model) which points deserve attention, and which knowledge is far above the student's current level and can therefore be neglected for now.
However, different sample images have different complexities, and it is difficult to directly specify, for each model, which sample images are suitable for it.
Disclosure of Invention
In view of the limited overall accuracy of existing DDNN training methods for multi-view target recognition, the present invention provides a DDNN training method, a DDNN-based multi-view target recognition method, and a DDNN-based multi-view target recognition system, aiming to improve the classification accuracy on both the edge side and the cloud side and to reduce the traffic transmitted from the edge side to the cloud side.
To achieve the above object, according to a first aspect of the present invention, there is provided a method for training DDNN, the method comprising the steps of:
S1, acquiring the information entropy of the cloud-side model of the distributed deep neural network (DDNN) on a sample image;
S2, constructing the DDNN objective function based on the information entropy of the sample image;
S3, jointly training the edge-side model and the cloud-side model of the DDNN according to the DDNN objective function.
Specifically, the information entropy of the cloud-side model for classifying the i-th sample image is calculated as follows:

score_i = -Σ_{c ∈ C} p_{i,c} · log p_{i,c}

where p_i represents the probability vector output by the cloud-side model softmax classifier for the i-th sample image, p_{i,c} denotes its c-th component, and C represents the label set.
Specifically, the constructed DDNN objective function is as follows:
where N represents the total number of sample images used for cloud-side and edge-side training, and L_(i,edge) and L_(i,cloud) represent the loss functions of the i-th sample image on the edge side and on the cloud side, respectively.
To achieve the above object, according to a second aspect of the present invention, there is provided a training method of DDNN, comprising the steps of:
S1, calculating the probability, as judged by the cloud-side model of the distributed deep neural network, that a sample image belongs to each class;
S2, determining the confidence of the cloud-side model in the sample image based on the probability that the sample image belongs to each class;
S3, constructing the DDNN objective function based on the confidence of the sample image;
S4, jointly training the edge-side model and the cloud-side model of the DDNN according to the DDNN objective function.
Specifically, the cloud-side model computes the classification result of a sample image as follows:

p_{i,k} = exp(z_{i,k}) / Σ_{c ∈ C} exp(z_{i,c})

where p_i represents the probability vector output by the cloud-side model softmax classifier for the i-th sample image, p_{i,k} represents the probability, as judged by the cloud-side model, that the i-th sample image belongs to the k-th class, z_i represents the i-th sample image's input vector to the softmax classifier of the cloud-side model, and z_{i,c} denotes the c-th value of z_i.
Specifically, the confidence w_i of the cloud-side model in the i-th sample image is calculated as follows:

w_i = y_i · p_i^T
specifically, the constructed DDNN objective function is as follows:
where N represents the number of all sample images for cloud-side and end-side training, L(i,edge)And L(i,cloud)Loss functions on the edge side and the cloud side of the ith sample image are respectively represented.
To achieve the above object, according to a third aspect of the present invention, there is provided a DDNN-based multi-view object recognition method, wherein the DDNN used in the multi-view object recognition method is trained with the DDNN training method according to the first aspect or the second aspect.
To achieve the above object, according to a fourth aspect of the present invention, there is provided a DDNN-based multi-view object recognition system, wherein the DDNN used in the multi-view object recognition system is trained with the DDNN training method according to the first aspect or the second aspect.
To achieve the above object, according to a fifth aspect of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the training method of the DDNN according to the first aspect or the second aspect.
Generally, by the above technical solution conceived by the present invention, the following beneficial effects can be obtained:
(1) The invention focuses on the knowledge-transfer method of a "teacher-student" network and provides a sample-weighting-based adaptive training method in the setting of the hierarchical multi-exit DDNN (distributed deep neural network). The method maintains good classification accuracy while minimizing communication traffic, thereby further improving the multi-view target recognition accuracy.
(2) The cloud-side model (teacher network) guides the whole training process of the edge-side model (student network), and the edge-side model learns from the true labels and from the knowledge transferred by the cloud side at the same time. Moreover, the branch weights on the DDNN edge side are not shared: the edge exit obtains a low-level semantic representation fusing all views, while the cloud-side classifier obtains a high-level semantic representation fusing all views, so the diversity of the multiple views is preserved.
Drawings
Fig. 1 is a schematic diagram of a training framework of DDNN provided in an embodiment of the present invention;
fig. 2 is an example of a multi-view picture provided by an embodiment of the present invention;
fig. 3 is an example of a sample data set provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The DDNN model is treated as a teacher-student network: in training the DDNN, the teacher network (the cloud-side model) guides the student network (the edge-side model) to learn. First, the cloud-side model predicts each sample image and obtains an evaluation score; this score is then used to evaluate how easy the sample image is for the model to learn. If the cloud-side model evaluates it as a simple sample image, the weight of this sample image is increased on the edge side and decreased on the cloud side, and vice versa. Finally, the weighted edge-side model and the weighted cloud-side model are trained simultaneously. The training result of the edge-side model on simple sample images is then closer to the true labels, and the knowledge learned on simple sample images is closer to the knowledge learned by the cloud-side model, which improves the representation capability of the DDNN, reduces the wireless communication burden, and improves the DDNN classification accuracy.
The DDNN is a deep neural network model, based on BranchyNet, that supports "cloud-edge-end" cooperative computing: classifiers are arranged on the device side, the edge side, and the cloud side, forming a multi-exit cascade classifier. The training framework of the DDNN is shown in Fig. 1; it consists of two parts, a cloud-side model and an edge-side model. The edge-side classifier exit and the cloud-side classifier exit of the DDNN are regarded as two cascaded classifiers, so that the inference results of simple sample images are output from the edge-side model as far as possible, while the inference results of complex sample images are output from the cloud-side model.
The structure is similar to a "teacher-student" network. The teacher network and the student network share the lower layers of the DDNN, namely the convolutional, pooling, and normalization layers in the left blue box. Each view of the DDNN has its own convolutional feature-extraction module and fully connected layer; the output vectors of all fully connected layers are fused and then fed to the softmax activation function on the edge side to obtain the classification result of the student network. Denote the softmax activation vector of the student network by p(x), p(x) = softmax(s(x)), where s(x) is the logits vector (the weighted sum of the layer preceding the student network's softmax). Similarly, denote the softmax activation vector of the teacher network by q(x), q(x) = softmax(z(x)), where z(x) is the logits vector of the layer preceding the teacher network's softmax.
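As an illustration of the cascaded-exit inference described above, the following is a minimal Python sketch. The entropy-based exit criterion, the threshold value, and the model interfaces are assumptions made only for this sketch; the patent itself only states that simple sample images should exit at the edge and complex ones at the cloud.

```python
import numpy as np

def entropy(p):
    """Information entropy of a probability vector (natural log)."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def cascaded_predict(x_views, edge_model, cloud_model, threshold=0.5):
    """Two-exit cascade: return the edge prediction when the edge
    classifier is confident enough, otherwise forward to the cloud exit.

    edge_model(x_views) is assumed to return (edge_probs, fused_features);
    cloud_model(fused_features) is assumed to return cloud_probs.
    The entropy threshold is an illustrative choice.
    """
    edge_probs, fused_features = edge_model(x_views)
    if entropy(edge_probs) < threshold:        # confident -> exit at the edge
        return int(np.argmax(edge_probs)), "edge"
    cloud_probs = cloud_model(fused_features)  # uncertain -> use the cloud exit
    return int(np.argmax(cloud_probs)), "cloud"
```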
The invention provides a DDNN training method, which comprises the following steps:
s1, acquiring information entropy of a distributed deep neural network cloud side model on a sample image.
The information entropy of the cloud-edge cooperative distributed deep neural network's cloud-side model for classifying the i-th sample image is calculated as follows:

score_i = -Σ_{c ∈ C} p_{i,c} · log p_{i,c}

where p_i represents the probability vector output by the cloud-side model softmax classifier for the i-th sample image, p_{i,c} denotes its c-th component, and C represents the label set.
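For illustration, the following is a minimal NumPy sketch of this per-sample entropy computation; the variable names and the example batch are illustrative only.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over logits z of shape (N, |C|)."""
    z = z - z.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sample_entropy(p):
    """score_i = -sum_c p_{i,c} * log p_{i,c} for every sample i."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=1)       # shape (N,)

# illustrative batch: cloud-side logits for 8 sample images, 3 classes
z_cloud = np.random.randn(8, 3)                 # person / car / bus
p_cloud = softmax(z_cloud)
scores = sample_entropy(p_cloud)                # small score -> "simple" sample
```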
S2, constructing the DDNN objective function based on the information entropy of the sample image.
The information entropy of the cloud-side model for a sample image can be regarded as the confidence for the i-th sample image. The smaller the cloud side's score for the sample image, score_i, the simpler the i-th sample image is and the more it should be processed on the edge side; the edge-side and cloud-side losses in the objective loss function are therefore weighted with this information entropy.
The constructed DDNN objective function is as follows:
where N represents the total number of sample images used for cloud-side and edge-side training, and L_(i,edge) and L_(i,cloud) represent the loss functions of the i-th sample image on the edge side and on the cloud side, respectively.
The present invention uses the cross-entropy loss. The cloud-side model acts as the teacher network and the edge-side model as the student network; through the feedback of the teacher network, the student network learns which knowledge deserves more attention and reinforces its learning of that knowledge.
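The exact weighted objective is defined by the formula given above; purely as an illustration of the weighting idea, the following sketch gives more edge-side weight to low-entropy (simple) samples and more cloud-side weight to high-entropy (complex) samples. The [0, 1] normalization of the entropy and the exact form of the weights are assumptions made for this sketch, not the patent's formula.

```python
import numpy as np

def entropy_weighted_loss(edge_losses, cloud_losses, scores, num_classes=3):
    """Illustrative entropy-weighted joint loss over N sample images.

    edge_losses, cloud_losses : per-sample cross-entropy losses, shape (N,)
    scores                    : per-sample cloud-side entropy score_i, shape (N,)
    """
    s = scores / np.log(num_classes)   # normalized entropy in [0, 1] (assumption)
    w_edge = 1.0 - s                   # simple (low-entropy) samples -> edge side
    w_cloud = s                        # complex (high-entropy) samples -> cloud side
    return np.mean(w_edge * edge_losses + w_cloud * cloud_losses)
```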
S3, jointly training the edge-side model and the cloud-side model of the DDNN according to the DDNN objective function.
The DDNN is trained using a gradient-descent algorithm.
With this information-entropy-based adaptive training strategy, the student network's training result on simple samples is closer to the true labels, and the knowledge it learns on simple samples is closer to the knowledge learned by the teacher network, which improves the representation capability and reduces the wireless communication burden.
The invention also provides a DDNN training method, which comprises the following steps:
and S1, calculating the probability that the cloud side model of the distributed deep neural network judges that the sample image belongs to each class.
The cloud-side model computes the classification result of a sample image as follows:

p_{i,k} = exp(z_{i,k}) / Σ_{c ∈ C} exp(z_{i,c})

where p_{i,k} represents the probability that the i-th sample image belongs to the k-th class, and z_i represents the i-th sample image's input vector to the softmax function of the cloud-side model.
S2, determining the confidence of the cloud-side model in the sample image based on the probability that the sample image belongs to each class.
The confidence w_i of the cloud-side model in the i-th sample image is calculated as follows:

w_i = y_i · p_i^T

where y_i represents the true label of the i-th sample image, obtained by one-hot encoding.
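A minimal NumPy sketch of this confidence computation (the example label and probability values are illustrative):

```python
import numpy as np

def confidence(y_onehot, p_cloud):
    """w_i = y_i · p_i^T: the probability the cloud-side model assigns
    to the true class of the i-th sample image."""
    return np.sum(y_onehot * p_cloud, axis=1)    # shape (N,)

# illustrative example with 3 classes (person, car, bus)
y = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
p = np.array([[0.1, 0.8, 0.1], [0.3, 0.4, 0.3]])
print(confidence(y, p))                          # -> [0.8, 0.3]
```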
S3, constructing the DDNN objective function based on the confidence of the sample image.
The DDNN objective function is as follows:
where L_(i,edge) and L_(i,cloud) represent the loss functions on the edge side and on the cloud side, respectively.
S4, jointly training the edge-side model and the cloud-side model of the DDNN according to the DDNN objective function.
The DDNN is trained using a gradient-descent algorithm.
This probability-based adaptive training strategy uses weighted back-propagation: the sample images passed on to the cloud-side model are mostly those that are difficult to classify, and their cross-entropy loss in the cloud-side model is large, so the cloud-side model is trained specifically on complex sample images.
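As an illustration of the weighted back-propagation described above, the following TensorFlow sketch performs one joint training step with confidence-based weights. The assumption that the weight w_i multiplies the edge-side loss while (1 - w_i) multiplies the cloud-side loss is made only for this sketch, as is the two-output model interface.

```python
import tensorflow as tf

@tf.function
def weighted_train_step(model, optimizer, x_views, y_onehot):
    """One joint training step with confidence-weighted back-propagation.

    model(x_views) is assumed to return (edge_probs, cloud_probs); the
    weighting form below is illustrative, not the patent's exact objective.
    """
    with tf.GradientTape() as tape:
        edge_probs, cloud_probs = model(x_views, training=True)
        ce = tf.keras.losses.categorical_crossentropy
        edge_loss = ce(y_onehot, edge_probs)               # per-sample, shape (N,)
        cloud_loss = ce(y_onehot, cloud_probs)             # per-sample, shape (N,)
        w = tf.reduce_sum(y_onehot * cloud_probs, axis=1)  # w_i = y_i · p_i^T
        w = tf.stop_gradient(w)                            # weights are not trained
        loss = tf.reduce_mean(w * edge_loss + (1.0 - w) * cloud_loss)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```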
A DDNN-based multi-view target identification method comprises the following steps:
s1, training the DDNN by adopting the method.
And S2, inputting the sample image to be recognized into the trained DDNN to obtain a multi-view target recognition result.
The data set of this embodiment is a multi-view data set for DDNN training: video sequences shot synchronously by multiple cameras on the EPFL university campus. There are six cameras in total; one is installed about 2 m above the ground, two are located on the first floor, and the remaining three are installed on the second floor. Together they cover an area about 22 m long and wide containing a bus stop, parking spaces, and a pedestrian crossing. The frames taken by the six cameras at the same moment are shown in Fig. 2. Taking view 2 as an example, the labeled regions mark the extent of each target: the car, the bus, and the person are each enclosed by a bounding box. In an IoT network, the edge layer generally consists of IoT devices, IoT gateways, and local-area-network access points, and the cloud layer comprises the Internet and cloud servers. For the experimental evaluation, each camera is assumed to be connected to one IoT device, and the edge devices can transmit captured images to the cloud over a wireless network.
The video duration is 23 minutes and 57 seconds, each video has 242 frames, and there are 1297 people, 3553 cars, and 56 buses. A single image may contain multiple bounding boxes, each labeling a different class of object. To prepare the data set, a frame is first selected from the video of one camera and the object in its bounding box is extracted; the objects in the corresponding frames of the other cameras are then extracted and resized to 32 × 32 RGB, and finally the objects of each frame are manually synchronized and organized into the data set. When a given object is not present within a camera's capture range, the image for that view is replaced with an all-black picture of the same size, as shown in Fig. 3. The training set consists of 4080 pictures and the test set of 1026 pictures. To simulate a collaborative-computing scenario with multi-view IoT devices, the data sets of the branches are not shared, and completely black images are allowed in the training set, which highlights the benefit of DDNN multi-view fusion.
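A minimal sketch of the per-frame preparation step just described (crop the labeled objects, resize to 32 × 32 RGB, and substitute an all-black image for views in which the object is not visible). Function and variable names are illustrative, and OpenCV is assumed only for the resize.

```python
import numpy as np
import cv2  # assumed only for resizing the cropped objects

IMG_SIZE = 32
NUM_VIEWS = 6

def prepare_multiview_sample(view_crops):
    """Build one training sample from the six camera views.

    view_crops: list of length NUM_VIEWS; each entry is an HxWx3 crop of the
    target object, or None when the object is outside that camera's range.
    Returns an array of shape (NUM_VIEWS, IMG_SIZE, IMG_SIZE, 3) scaled to [0, 1].
    """
    sample = []
    for crop in view_crops:
        if crop is None:
            # object not visible in this view -> all-black placeholder
            img = np.zeros((IMG_SIZE, IMG_SIZE, 3), dtype=np.float32)
        else:
            img = cv2.resize(crop, (IMG_SIZE, IMG_SIZE)).astype(np.float32) / 255.0
        sample.append(img)
    return np.stack(sample, axis=0)
```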
First, the structure of the DDNN model. The edge-side model used in the experiments contains convolutional layers, pooling layers, BN layers, and Dropout. The cloud-side model is deeper than the edge-side model: its convolutional layers, pooling layers, BN layers, and Dropout are all twice as many as in the edge-side model, and all of them use the ReLU activation function. The fully connected layer of the cloud-side model has 256 neurons with a Sigmoid activation function. The number of convolution channels of the edge-side model is set to 4, and the two convolutional layers of the cloud-side model have 32 and 64 channels, respectively. Next, the hyper-parameter settings for model training: the optimization algorithm chosen for training the DDNN is Adam with the beta parameter set to 0.9, and the remaining hyper-parameters use the TensorFlow defaults. Each set of experiments is repeated ten times, and the mean and variance over the ten runs are taken as the final experimental result. The number of training iterations is set to 100; the learning rate is 0.02 for the first 50 iterations and 0.005 for the last 50. The batch size is set to 32 and the Dropout parameter to 0.8.
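The following Keras sketch shows one way to realize the architecture and hyper-parameters described above: a 4-channel convolution per view on the edge side, 32- and 64-channel convolutions on the cloud side, a 256-unit Sigmoid fully connected layer, softmax exits on both sides, and Adam with beta_1 = 0.9. Kernel sizes, the size of the per-view fully connected layer, the Dropout rate, and the exact placement of BN/Dropout are assumptions where the description does not fix them, and the learning-rate schedule is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_VIEWS, NUM_CLASSES, IMG_SIZE = 6, 3, 32

def build_ddnn(dropout_rate=0.2):
    inputs, edge_vectors, edge_maps = [], [], []
    for v in range(NUM_VIEWS):
        x_in = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3), name=f"view_{v}")
        # per-view edge branch: 4-channel convolution (kernel size assumed)
        x = layers.Conv2D(4, 3, padding="same", activation="relu")(x_in)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D()(x)
        x = layers.Dropout(dropout_rate)(x)
        inputs.append(x_in)
        edge_maps.append(x)
        # per-view fully connected layer (size assumed)
        edge_vectors.append(layers.Dense(64, activation="relu")(layers.Flatten()(x)))

    # edge exit: fuse the per-view fully connected outputs, then softmax
    edge_fused = layers.Concatenate()(edge_vectors)
    edge_out = layers.Dense(NUM_CLASSES, activation="softmax", name="edge_exit")(edge_fused)

    # cloud side: deeper network on the fused per-view feature maps
    c = layers.Concatenate(axis=-1)(edge_maps)
    for channels in (32, 64):
        c = layers.Conv2D(channels, 3, padding="same", activation="relu")(c)
        c = layers.BatchNormalization()(c)
        c = layers.MaxPooling2D()(c)
        c = layers.Dropout(dropout_rate)(c)
    c = layers.Flatten()(c)
    c = layers.Dense(256, activation="sigmoid")(c)   # cloud fully connected layer
    cloud_out = layers.Dense(NUM_CLASSES, activation="softmax", name="cloud_exit")(c)

    model = Model(inputs=inputs, outputs=[edge_out, cloud_out])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02, beta_1=0.9),
                  loss="categorical_crossentropy")
    return model
```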
With single-exit inference, the accuracy of the cloud-side classifier is 97.08% and that of the edge-side classifier is 97.02%, while the overall accuracy of cloud-edge cooperative inference (98.42%) is higher than both, and the transmitted traffic is also improved. It can therefore be concluded that multi-exit cooperative inference does improve performance.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (4)
1. A multi-view target identification method based on a DDNN model is characterized by comprising the following steps:
a training stage:
s1, acquiring a multi-view image data set, wherein the multi-view image data set is a video sequence synchronously shot by a plurality of cameras, and each camera in the plurality of cameras is connected to an IoT (Internet of things) device;
s2, selecting a frame of image from a video shot by each camera, extracting target objects, wherein the target objects are people, cars and buses, extracting the target objects in corresponding frames of other cameras, synchronizing the target objects of each frame, obtaining a multi-view image training set, and inputting the multi-view image training set to a DDNN model, wherein the DDNN model comprises a cloud side model and an edge model; each view angle is provided with an independent convolution feature extraction module and a full-connection layer, output vectors of all the full-connection layers are fused and sent to a softmax activation function, and a classification result is obtained; the number of the convolution layers, the pooling layers, the BN layers and the Dropout of the cloud side model is twice that of the edge model;
s3, acquiring the information entropy of the sample images in the DDNN cloud side model to the multi-view image training set:
wherein p isiRepresenting a probability vector output by the cloud side model softmax classifier on the ith sample image, wherein C represents a label set;
s4, constructing a DDNN target function based on the information entropy of the sample image:
where N represents the number of all sample images for cloud-side and end-side training, L(i,edge)And L(i,cloud)Respectively representing loss functions of an edge side and a cloud side of the ith sample image;
s5, jointly training an edge side model and a cloud side model of the DDNN according to the DDNN target function to obtain a trained DDNN model;
and (3) identification:
and inputting the sample image to be recognized into the trained DDNN model to obtain a multi-view target recognition result.
2. A multi-view target identification method based on a DDNN model is characterized by comprising the following steps:
a training stage:
s1, acquiring a multi-view image data set, wherein the multi-view image data set is a video sequence synchronously shot by a plurality of cameras, and each camera in the plurality of cameras is connected to an IoT (Internet of things) device;
s2, selecting a frame of image from a video shot by each camera, extracting target objects, wherein the target objects are people, cars and buses, extracting the target objects in corresponding frames of other cameras, synchronizing the target objects of each frame, obtaining a multi-view image training set, and inputting the multi-view image training set to a DDNN model, wherein the DDNN model comprises a cloud side model and an edge model; each view angle is provided with an independent convolution feature extraction module and a full-connection layer, output vectors of all the full-connection layers are fused and sent to a softmax activation function, and a classification result is obtained; the number of the convolution layers, the pooling layers, the BN layers and the Dropout of the cloud side model is twice that of the edge model;
s3, calculating a DDNN cloud side model to judge the probability that the sample images in the multi-view image training set belong to each class;
s4, determining the confidence w of the cloud side model to the sample image based on the probability that the sample image in the multi-view image training set belongs to each classi:
wi=yipi T
Wherein, yiTrue label, p, representing the ith sample imageiRepresenting a probability vector output by the cloud side model softmax classifier on the ith sample image;
s5, constructing a DDNN target function based on the confidence degrees of the sample images in the multi-view image training set:
where N represents the number of all sample images for cloud-side and end-side training, L(i,edge)And L(i,cloud)Respectively representing loss functions of an edge side and a cloud side of the ith sample image;
s6, jointly training an edge side model and a cloud side model of the DDNN according to the DDNN target function to obtain a trained DDNN model;
and (3) identification:
and inputting the sample image to be recognized into the trained DDNN model to obtain a multi-view target recognition result.
3. The method of claim 2, wherein the cloud-side model calculates the classification result of the sample images in the multi-view image training set as follows:

p_{i,k} = exp(z_{i,k}) / Σ_{c ∈ C} exp(z_{i,c})

where p_{i,k} represents the probability that the i-th sample image belongs to the k-th class, z_i represents the i-th sample image's input vector to the softmax classifier of the cloud-side model, and z_{i,c} denotes the c-th value of z_i.
4. A DDNN-based multi-perspective target recognition system, comprising: a computer-readable storage medium and a processor;
the computer-readable storage medium is used for storing executable instructions;
the processor is configured to read executable instructions stored in the computer-readable storage medium, and execute the DDNN model-based multi-view object recognition method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910931384.1A CN110738309B (en) | 2019-09-27 | 2019-09-27 | DDNN training method and DDNN-based multi-view target identification method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910931384.1A CN110738309B (en) | 2019-09-27 | 2019-09-27 | DDNN training method and DDNN-based multi-view target identification method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110738309A CN110738309A (en) | 2020-01-31 |
CN110738309B true CN110738309B (en) | 2022-07-12 |
Family
ID=69269807
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910931384.1A Expired - Fee Related CN110738309B (en) | 2019-09-27 | 2019-09-27 | DDNN training method and DDNN-based multi-view target identification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110738309B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111382782B (en) * | 2020-02-23 | 2024-04-26 | 华为技术有限公司 | Method and device for training classifier |
CN111639744B (en) * | 2020-04-15 | 2023-09-22 | 北京迈格威科技有限公司 | Training method and device for student model and electronic equipment |
CN111985562B (en) * | 2020-08-20 | 2022-07-26 | 复旦大学 | End cloud collaborative training system for protecting end-side privacy |
CN112685176A (en) * | 2020-12-25 | 2021-04-20 | 国网河北省电力有限公司信息通信分公司 | Resource-constrained edge computing method for improving DDNN (distributed neural network) |
CN112735198A (en) * | 2020-12-31 | 2021-04-30 | 深兰科技(上海)有限公司 | Experiment teaching system and method |
CN112910806B (en) * | 2021-01-19 | 2022-04-08 | 北京理工大学 | Joint channel estimation and user activation detection method based on deep neural network |
CN113657747B (en) * | 2021-08-12 | 2023-06-16 | 中国安全生产科学研究院 | Intelligent assessment system for enterprise safety production standardization level |
CN113807349B (en) * | 2021-09-06 | 2023-06-20 | 海南大学 | Multi-view target identification method and system based on Internet of things |
CN116049347B (en) * | 2022-06-24 | 2023-10-31 | 荣耀终端有限公司 | Sequence labeling method based on word fusion and related equipment |
CN115545198B (en) * | 2022-11-25 | 2023-05-26 | 成都信息工程大学 | Edge intelligent collaborative inference method and system based on deep learning model |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108012121A (en) * | 2017-12-14 | 2018-05-08 | 安徽大学 | A kind of edge calculations and the real-time video monitoring method and system of cloud computing fusion |
CN109543829A (en) * | 2018-10-15 | 2019-03-29 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | Method and system for hybrid deployment of deep learning neural network on terminal and cloud |
CN109977094A (en) * | 2019-01-30 | 2019-07-05 | 中南大学 | A method of the semi-supervised learning for structural data |
CN110009045A (en) * | 2019-04-09 | 2019-07-12 | 中国联合网络通信集团有限公司 | The recognition methods of internet-of-things terminal and device |
CN110111214A (en) * | 2019-04-24 | 2019-08-09 | 北京邮电大学 | User uses energy management method and system to one kind priority-based |
CN110147709A (en) * | 2018-11-02 | 2019-08-20 | 腾讯科技(深圳)有限公司 | Training method, device, terminal and the storage medium of vehicle attribute model |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3455684B1 (en) * | 2016-05-09 | 2024-07-17 | Strong Force Iot Portfolio 2016, LLC | Methods and systems for the industrial internet of things |
WO2019133052A1 (en) * | 2017-12-28 | 2019-07-04 | Yang Shao Wen | Visual fog |
-
2019
- 2019-09-27 CN CN201910931384.1A patent/CN110738309B/en not_active Expired - Fee Related
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108012121A (en) * | 2017-12-14 | 2018-05-08 | 安徽大学 | A kind of edge calculations and the real-time video monitoring method and system of cloud computing fusion |
CN109543829A (en) * | 2018-10-15 | 2019-03-29 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | Method and system for hybrid deployment of deep learning neural network on terminal and cloud |
CN110147709A (en) * | 2018-11-02 | 2019-08-20 | 腾讯科技(深圳)有限公司 | Training method, device, terminal and the storage medium of vehicle attribute model |
CN109977094A (en) * | 2019-01-30 | 2019-07-05 | 中南大学 | A method of the semi-supervised learning for structural data |
CN110009045A (en) * | 2019-04-09 | 2019-07-12 | 中国联合网络通信集团有限公司 | The recognition methods of internet-of-things terminal and device |
CN110111214A (en) * | 2019-04-24 | 2019-08-09 | 北京邮电大学 | User uses energy management method and system to one kind priority-based |
Non-Patent Citations (1)
Title |
---|
"Distributed Deep Neural Networks over the Cloud, the Edge and End Devices";Surat Teerapittayanon et al.;《arXiv》;20170906;全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN110738309A (en) | 2020-01-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110738309B (en) | DDNN training method and DDNN-based multi-view target identification method and system | |
WO2020192736A1 (en) | Object recognition method and device | |
Budiharto et al. | Fast object detection for quadcopter drone using deep learning | |
CN112446398B (en) | Image classification method and device | |
CN112990211B (en) | Training method, image processing method and device for neural network | |
Mendes et al. | Exploiting fully convolutional neural networks for fast road detection | |
CN113705769A (en) | Neural network training method and device | |
CN111797983A (en) | Neural network construction method and device | |
CN113065645B (en) | Twin attention network, image processing method and device | |
CN111368972A (en) | Convolution layer quantization method and device thereof | |
CN115167442A (en) | Power transmission line inspection path planning method and system | |
CN112489072B (en) | Vehicle-mounted video perception information transmission load optimization method and device | |
CN113297972B (en) | Transformer substation equipment defect intelligent analysis method based on data fusion deep learning | |
CN114694089B (en) | Novel multi-mode fusion pedestrian re-recognition method | |
CN115661246A (en) | Attitude estimation method based on self-supervision learning | |
CN115170746A (en) | Multi-view three-dimensional reconstruction method, system and equipment based on deep learning | |
CN113359820A (en) | DQN-based unmanned aerial vehicle path planning method | |
CN112418032A (en) | Human behavior recognition method and device, electronic equipment and storage medium | |
Shariff et al. | Artificial (or) fake human face generator using generative adversarial network (GAN) machine learning model | |
CN109740455B (en) | Crowd evacuation simulation method and device | |
Koo et al. | A jellyfish distribution management system using an unmanned aerial vehicle and unmanned surface vehicles | |
CN118379535A (en) | Method, system, equipment and medium for dynamically capturing and identifying targets of electric robot | |
CN113065506B (en) | Human body posture recognition method and system | |
CN110705564B (en) | Image recognition method and device | |
CN117576149A (en) | Single-target tracking method based on attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20220712 |