WO2022168274A1 - Information processing device, selection output method, and selection output program - Google Patents
Information processing device, selection output method, and selection output program
- Publication number
- WO2022168274A1 (PCT/JP2021/004388)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- learning data
- unlabeled
- object detection
- unlabeled learning
- information processing
- Prior art date
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computing arrangements based on biological models
        - G06N3/02—Neural networks
          - G06N3/04—Architecture, e.g. interconnection topology
            - G06N3/045—Combinations of networks
            - G06N3/0464—Convolutional networks [CNN, ConvNet]
          - G06N3/08—Learning methods
            - G06N3/091—Active learning
    - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
      - G06V10/00—Arrangements for image or video recognition or understanding
        - G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
          - G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
            - G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
              - G06V10/7753—Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
            - G06V10/776—Validation; Performance evaluation
          - G06V10/87—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using selection of the recognition techniques, e.g. of a classifier in a multiple classifier system
      - G06V20/00—Scenes; Scene-specific elements
        - G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
Definitions
- The present disclosure relates to an information processing device, a selection output method, and a selection output program.
- In object detection, a device performs deep learning using a large amount of teacher data (also called a learning data set).
- The training data includes the region of each detection target object in an image and a label indicating the type of the object.
- The training data is created by a labeling operator; this creation work is called labeling. Labeling places a heavy burden on the labeling operator. Therefore, active learning has been devised to reduce this burden. In active learning, labeled images with a high learning effect are used as teacher data.
- For example, an active learning device uses a discriminator trained with labeled learning data to calculate a discrimination score for unlabeled learning data.
- The active learning device generates a plurality of clusters by clustering the unlabeled learning data.
- The active learning device then selects learning data to be used for active learning from the unlabeled learning data based on the plurality of clusters and the discrimination scores.
- In the above technique, learning data is selected using unlabeled learning data and a discriminator obtained by learning with labeled learning data in one particular method. The discriminator is hereinafter referred to as a trained model.
- The selected learning data has a high learning effect when learning is performed with that same method. However, when learning is performed with a different method, the learning data selected by the above technique is not necessarily suitable. Therefore, the problem is how to select learning data with a high learning effect.
- The purpose of the present disclosure is to select learning data with a high learning effect.
- The information processing apparatus includes: an acquisition unit that acquires a plurality of trained models that detect objects by different methods and a plurality of unlabeled learning data, which are a plurality of images including objects; an object detection unit that performs object detection on each of the plurality of unlabeled learning data using the plurality of trained models; a calculation unit that calculates, based on the plurality of object detection results, a plurality of information amount scores indicating the value of the plurality of unlabeled learning data; and a selection output unit that selects a preset number of unlabeled learning data from the plurality of unlabeled learning data based on the plurality of information amount scores and outputs the selected unlabeled learning data.
- FIG. 1 is a block diagram showing functions of the information processing apparatus according to Embodiment 1.
- FIG. 2 illustrates hardware included in the information processing apparatus according to Embodiment 1.
- FIGS. 3A and 3B are diagrams for explaining IoU according to Embodiment 1.
- FIG. 4 is a diagram showing the relationship between Precision, Recall, and AP according to Embodiment 1.
- FIGS. 5A and 5B are diagrams (part 1) showing examples of output of selected images.
- FIGS. 6A and 6B are diagrams (part 2) showing examples of output of selected images.
- FIG. 7 is a block diagram showing functions of the information processing apparatus according to Embodiment 2.
- FIG. 8 is a flowchart showing an example of processing executed by the information processing apparatus according to Embodiment 2.
- FIG. 1 is a block diagram showing functions of the information processing apparatus according to the first embodiment.
- The information processing apparatus 100 is a device that executes the selection output method.
- The information processing apparatus 100 has a first storage unit 111, a second storage unit 112, an acquisition unit 120, learning units 130a and 130b, an object detection unit 140, a calculation unit 150, and a selection output unit 160.
- FIG. 2 illustrates hardware included in the information processing apparatus according to the first embodiment.
- The information processing apparatus 100 has a processor 101, a volatile storage device 102, and a nonvolatile storage device 103.
- The processor 101 controls the information processing apparatus 100 as a whole.
- The processor 101 is, for example, a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), or the like. The processor 101 may be a multiprocessor.
- The information processing apparatus 100 may instead have a processing circuit. The processing circuit may be a single circuit or multiple circuits.
- The volatile storage device 102 is the main storage device of the information processing apparatus 100 and is, for example, RAM (Random Access Memory).
- The nonvolatile storage device 103 is an auxiliary storage device of the information processing apparatus 100 and is, for example, an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
- The first storage unit 111 and the second storage unit 112 may be implemented as storage areas secured in the volatile storage device 102 or the nonvolatile storage device 103.
- Part or all of the acquisition unit 120, the learning units 130a and 130b, the object detection unit 140, the calculation unit 150, and the selection output unit 160 may be realized by a processing circuit, or may be implemented as modules of a program executed by the processor 101.
- The program executed by the processor 101 is also called a selection output program. The selection output program is recorded on a recording medium, for example.
- The information processing apparatus 100 generates the trained models 200a and 200b. The process up to the generation of the trained models 200a and 200b will be described.
- First, the first storage unit 111 will be described. The first storage unit 111 may store labeled learning data.
- The labeled learning data includes an image, the regions of one or more detection target objects in the image, and labels indicating the types of the objects. Information including the regions of the objects and the labels is also called label information. For example, when the image includes a road, the types are a four-wheeled vehicle, a two-wheeled vehicle, a truck, and the like.
- The acquisition unit 120 acquires the labeled learning data. For example, the acquisition unit 120 acquires the labeled learning data from the first storage unit 111, or from an external device (for example, a cloud server).
- The learning units 130a and 130b generate the trained models 200a and 200b by performing object detection learning by different methods using the labeled learning data.
- For example, the methods include Faster R-CNN (Regions with Convolutional Neural Networks), YOLO (You Only Look Once), and SSD (Single Shot MultiBox Detector). Note that a method may also be called an algorithm.
- In this way, the learning units 130a and 130b generate the trained models 200a and 200b, which detect objects by different methods. For example, the trained model 200a performs object detection using Faster R-CNN, and the trained model 200b performs object detection using YOLO.
- FIG. 1 shows two learning units, but the number of learning units is not limited to two. The same number of trained models as learning units are generated, so the number of trained models is also not limited to two.
- A trained model may also be referred to as a detector or detector information.
- The generated trained models 200a and 200b may be stored in the volatile storage device 102 or the nonvolatile storage device 103, or may be stored in an external device.
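As a concrete illustration of trained models that detect objects by different methods, the sketch below wraps two off-the-shelf detectors behind a common interface, standing in for the trained models 200a and 200b. This is a minimal sketch assuming torchvision is available; the `Detector` wrapper and the model choices (Faster R-CNN and SSD) are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch: two trained models that detect objects by different
# methods (Faster R-CNN and SSD), exposed through one interface.
# The wrapper class and model choices are illustrative assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn, ssd300_vgg16

class Detector:
    def __init__(self, model):
        self.model = model.eval()  # inference mode

    @torch.no_grad()
    def detect(self, image):
        # image: float tensor of shape (3, H, W) with values in [0, 1].
        out = self.model([image])[0]
        # Return (class, box) pairs, mirroring the inference-label idea.
        return list(zip(out["labels"].tolist(), out["boxes"].tolist()))

trained_model_a = Detector(fasterrcnn_resnet50_fpn(weights="DEFAULT"))
trained_model_b = Detector(ssd300_vgg16(weights="DEFAULT"))
```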
- Next, the second storage unit 112 will be described. The second storage unit 112 may store a plurality of unlabeled learning data.
- Each of the plurality of unlabeled learning data contains no label information.
- The plurality of unlabeled learning data are a plurality of images, and each of the images includes an object. For example, the objects are humans, animals, and the like.
- The acquisition unit 120 acquires the plurality of unlabeled learning data, for example from the second storage unit 112 or from an external device. The acquisition unit 120 also acquires the trained models 200a and 200b, for example from the volatile storage device 102, the nonvolatile storage device 103, or an external device.
- The object detection unit 140 performs object detection on each of the plurality of unlabeled learning data using the trained models 200a and 200b. For example, when there are two unlabeled learning data, the object detection unit 140 performs object detection using the first unlabeled learning data and the trained models 200a and 200b, and then performs object detection using the second unlabeled learning data and the trained models 200a and 200b.
- Consider one unlabeled learning data. The object detection unit 140 performs object detection using that unlabeled learning data and the trained model 200a, and likewise using that unlabeled learning data and the trained model 200b. Object detection is thus performed by different methods, and an object detection result is output for each trained model. The object detection result is denoted D_i, where i is an integer from 1 to N.
- The object detection result D_i is also called an inference label R_i. An inference label R_i is represented as "(c, x, y, w, h)", where c indicates the type of the object, x and y indicate the coordinates (x, y) of the center of the object's image region, w indicates the width of the object, and h indicates the height of the object.
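Because each inference label is the tuple (c, x, y, w, h), it maps naturally onto a small data structure. A minimal sketch follows; the class name and the corner-format conversion helper are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class InferenceLabel:
    """One detection result: type c plus a center-format box (x, y, w, h)."""
    c: int      # type of the object
    x: float    # x coordinate of the center of the object's image region
    y: float    # y coordinate of the center
    w: float    # width of the object
    h: float    # height of the object

    def to_corners(self):
        """Convert to (x1, y1, x2, y2) corner format, convenient for IoU."""
        return (self.x - self.w / 2, self.y - self.h / 2,
                self.x + self.w / 2, self.y + self.h / 2)
```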
- The calculation unit 150 calculates an information amount score using the object detection results D_i.
- The information amount score indicates the value of the unlabeled learning data: the larger the score, the higher the value of the data as learning data. Concretely, the score is large when the detection results assign different types to image regions with high similarity, or when they assign the same type to significantly different image regions.
- Here, mAP stands for mean Average Precision, and IoU stands for Intersection over Union.
- The information amount score is calculated using Equation (1). In the following, the object detection result output from the trained model 200a is denoted D1, and the object detection result output from the trained model 200b is denoted D2.
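Equation (1) itself is an image in the original publication and does not survive extraction. Given the later statement that the calculation unit computes the score as "1 - mAP", a plausible reconstruction is:

```latex
% Plausible reconstruction of Equation (1); the original equation image
% is not reproduced in this text. Score for two detection results:
S = 1 - \mathrm{mAP@0.5}(D_1, D_2) \tag{1}
```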
- mAP@0.5 is one of the evaluation methods in object detection, and IoU is a concept used in that evaluation. When object detection is performed using labeled learning data, IoU is expressed using Equation (2).
- R_gt indicates the true-value region.
- R_d indicates the detection region.
- A indicates an area.
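Equation (2) is likewise an image in the original publication. From the definitions of R_gt, R_d, and A just given, it is presumably the standard IoU formula:

```latex
% Presumed reconstruction of Equation (2) from the surrounding definitions:
\mathrm{IoU} = \frac{A(R_{gt} \cap R_d)}{A(R_{gt} \cup R_d)} \tag{2}
```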
- FIGS. 3A and 3B are diagrams for explaining IoU according to the first embodiment.
- FIG. 3A shows a specific example of the true-value region R_gt and the detection region R_d, and shows how much the two regions overlap.
- When object detection is performed on unlabeled learning data, there is no true-value region, so IoU cannot be expressed using Equation (2) as it is. Therefore, IoU is expressed as follows: the region indicated by one object detection result is treated as the true-value region, and the region indicated by the other object detection result is treated as the detection region. For example, in FIG. 3B, the region R_gt1 indicated by the object detection result D1 is the true-value region, and the region R_d1 indicated by the object detection result D2 is the detection region. Using the example of FIG. 3B, IoU is expressed using Equation (3), that is, Equation (2) with R_gt1 and R_d1 in place of R_gt and R_d.
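A minimal sketch of this pairwise IoU in code, taking boxes in the (x, y, w, h) center format of the inference labels. The function names are illustrative; one model's box plays the role of R_gt1 and the other's the role of R_d1, as in Equation (3).

```python
def center_to_corners(box):
    """(x, y, w, h) center format -> (x1, y1, x2, y2) corner format."""
    x, y, w, h = box
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

def iou(box_gt, box_d):
    """IoU of two boxes: intersection area over union area.

    box_gt plays the role of R_gt1 (one model's result, treated as the
    true value) and box_d the role of R_d1 (the other model's result).
    """
    ax1, ay1, ax2, ay2 = center_to_corners(box_gt)
    bx1, by1, bx2, by2 = center_to_corners(box_d)
    # Intersection rectangle (zero width/height if the boxes do not overlap).
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```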
- TP stands for True Positive, FP for False Positive, and FN for False Negative.
- TP indicates that the trained model detected an object existing in the image of the unlabeled learning data; that is, the detection region R_d1 and the region R_gt1 exist at substantially the same position, so the model detected the true value.
- FP indicates that the trained model detected an object that is not present in the image of the unlabeled learning data; in other words, the detection region exists at a position deviated from the region R_gt1, so the model made an erroneous detection.
- FN indicates that the trained model did not detect an object present in the image of the unlabeled learning data; in other words, no detection region exists at the position of the region R_gt1.
- Precision is expressed using TP and FP; specifically, it is expressed using Equation (4). Precision indicates the ratio of actually positive data among the data predicted to be positive, and is also referred to as the matching ratio.
- Recall is expressed using TP and FN; specifically, it is expressed using Equation (5). Recall indicates the ratio of data predicted to be positive among the actually positive data, and is also referred to as the recall rate.
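Equations (4) and (5) are also images in the original publication; from the TP/FP/FN definitions above they are presumably the standard formulas:

```latex
% Presumed reconstructions of Equations (4) and (5):
\mathrm{Precision} = \frac{TP}{TP + FP} \tag{4}
\qquad
\mathrm{Recall} = \frac{TP}{TP + FN} \tag{5}
```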
- FIG. 4 is a diagram showing the relationship between Precision, Recall, and AP according to the first embodiment.
- The vertical axis indicates Precision. The horizontal axis indicates Recall.
- AP (Average Precision) is calculated using Precision and Recall; that is, the area of the region labeled "AP" in FIG. 4 is calculated as AP.
- When a plurality of objects exist in the image of the unlabeled learning data, the calculation unit 150 calculates TP, FP, and FN for each of the objects.
- The calculation unit 150 calculates the Precision and Recall of each of the objects using Equations (4) and (5).
- The calculation unit 150 calculates AP for each object (that is, for each class) based on the Precision and Recall of each object. For example, when the objects are a cat and a dog, the cat's AP "0.4" and the dog's AP "0.6" are calculated.
- The calculation unit 150 calculates the average of the per-object APs as mAP. In the example above, the calculation unit 150 calculates mAP = (0.4 + 0.6) / 2 = "0.5". Note that if only one object exists in the image of the unlabeled learning data, one AP is calculated, and that AP becomes the mAP.
- The calculation unit 150 then calculates the information amount score using mAP and Equation (1); that is, it calculates the information amount score as "1 - mAP". The information amount score is thereby obtained.
- When there are more than two trained models, say N, the information amount score is calculated using Equation (6). That is, the calculation unit 150 forms the combinations of two trained models out of the N trained models, calculates a value using Equation (1) for each combination, and divides the total of the calculated values by N to obtain the information amount score.
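A minimal sketch of this score computation for one unlabeled image. The helper `mean_average_precision`, computing mAP@0.5 between two detection results with one result treated as the true values, is a hypothetical caller-supplied function; the patent does not prescribe a specific implementation.

```python
from itertools import combinations

def information_score(detections_per_model, mean_average_precision):
    """Information amount score for one unlabeled image.

    detections_per_model: list of N object detection results D_1..D_N,
    one per trained model. mean_average_precision(d_gt, d_d) is a
    hypothetical helper returning mAP@0.5 with d_gt taken as true values.
    """
    n = len(detections_per_model)
    if n == 2:
        # Equation (1): score = 1 - mAP for the two detection results.
        d1, d2 = detections_per_model
        return 1.0 - mean_average_precision(d1, d2)
    # Equation (6) as described in the text: sum the Equation (1) value
    # over all two-model combinations, then divide the total by N.
    total = sum(1.0 - mean_average_precision(di, dj)
                for di, dj in combinations(detections_per_model, 2))
    return total / n
```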
- In this way, the calculation unit 150 calculates the information amount score corresponding to a single unlabeled learning data.
- The information processing apparatus 100 (that is, the object detection unit 140 and the calculation unit 150) performs similar processing on each of the plurality of unlabeled learning data.
- The information processing apparatus 100 thereby obtains a plurality of information amount scores corresponding to the plurality of unlabeled learning data.
- In short, the information processing apparatus 100 calculates the plurality of information amount scores based on the plurality of object detection results, specifically using mAP and the plurality of object detection results.
- The selection output unit 160 selects a preset number of unlabeled learning data from the plurality of unlabeled learning data based on the plurality of information amount scores. In other words, based on the plurality of information amount scores, the selection output unit 160 selects, from the plurality of unlabeled learning data corresponding to those scores, unlabeled learning data with a high learning effect, that is, unlabeled learning data expected to contribute to learning.
- The information amount score is a value ranging from 0 to 1.
- When the information amount score is "0", the detection results of the trained models 200a and 200b are substantially the same. Unlabeled learning data corresponding to an information amount score of "0" therefore has little need to be used as learning data, and is considered to have little utility value.
- When the information amount score is "1", the detection results of the trained models 200a and 200b differ significantly. Unlabeled learning data corresponding to an information amount score of "1" can be regarded as a special case that is very difficult to detect.
- Therefore, the selection output unit 160 excludes, from the plurality of unlabeled learning data corresponding to the plurality of information amount scores, the unlabeled learning data corresponding to information amount scores of "0" and "1". After the exclusion, the selection output unit 160 selects the top n (n is a positive integer) unlabeled learning data by information amount score as the unlabeled learning data with a high learning effect.
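A minimal sketch of this selection step; the function and parameter names are illustrative, not taken from the patent.

```python
def select_images(unlabeled_data, scores, n):
    """Select the top-n unlabeled images by information amount score.

    Scores of exactly 0 (the models agree, so the image adds little) and
    exactly 1 (the models disagree completely, a hard special case) are
    excluded first, as in the description above.
    """
    candidates = [(s, d) for s, d in zip(scores, unlabeled_data)
                  if 0.0 < s < 1.0]
    candidates.sort(key=lambda pair: pair[0], reverse=True)
    return [d for _, d in candidates[:n]]
```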
- The selection output unit 160 outputs the selected unlabeled learning data.
- The selection output unit 160 may also output, as an inference label, the object detection result obtained by performing object detection on the selected unlabeled learning data (hereinafter referred to as the selected image).
- FIGS. 5A and 5B are diagrams (part 1) showing examples of output of selected images.
- FIG. 5A shows the case where the selected image is output to the volatile storage device 102 or the nonvolatile storage device 103. In this case, for example, the labeling operator uses the information processing apparatus 100 to label the selected image.
- FIG. 5B shows the case where the selected image and the inference label are output to the volatile storage device 102 or the nonvolatile storage device 103. In this case, the labeling operator uses the information processing apparatus 100 and the inference label to label the selected image, so the labeling work of the labeling operator is reduced.
- FIGS. 6A and 6B are diagrams (part 2) showing examples of output of selected images.
- FIG. 6A shows the case where the selected image is output to a labeling tool. Outputting the selected image to the labeling tool in this way reduces the labeling work of the labeling operator.
- FIG. 6B shows the case where the selected image and the inference label are output to the labeling tool. The labeling operator uses the labeling tool to label the selected images while correcting the inference labels.
- The images selected by the selection output unit 160 are images selected using trained models that detect objects by different methods. A selected image is therefore suitable as learning data not only for learning with one method but also for learning with another method. In this sense, the selected image is learning data with a high learning effect. According to Embodiment 1, the information processing apparatus 100 can thus select learning data with a high learning effect.
- Furthermore, learning data with a high learning effect is selected automatically by the information processing apparatus 100. Therefore, the information processing apparatus 100 can select learning data with a high learning effect efficiently.
- Embodiment 2. Next, Embodiment 2 will be described, focusing mainly on the matters that differ from Embodiment 1; descriptions of items common to Embodiment 1 are omitted.
- FIG. 7 is a block diagram showing functions of the information processing apparatus according to the second embodiment. Components in FIG. 7 that are the same as those shown in FIG. 1 are assigned the same reference numerals as in FIG. 1.
- The information processing apparatus 100 retrains the trained models 200a and 200b. The details of the retraining are explained later.
- FIG. 8 is a flowchart showing an example of processing executed by the information processing apparatus according to the second embodiment.
- (Step S11) The acquisition unit 120 acquires labeled learning data. Note that the amount of labeled learning data may be small.
- The learning units 130a and 130b generate the trained models 200a and 200b by performing object detection learning by different methods using the labeled learning data.
- (Step S12) The acquisition unit 120 acquires a plurality of unlabeled learning data.
- The object detection unit 140 performs object detection using the plurality of unlabeled learning data and the trained models 200a and 200b.
- (Step S13) The calculation unit 150 calculates, based on the plurality of object detection results, a plurality of information amount scores corresponding to the plurality of unlabeled learning data.
- (Step S14) The selection output unit 160 selects unlabeled learning data with a high learning effect from the plurality of unlabeled learning data based on the plurality of information amount scores.
- (Step S15) The selection output unit 160 outputs the selected unlabeled learning data (that is, the selected images). For example, the selection output unit 160 outputs the selected images as illustrated in FIG. 5 or FIG. 6.
- The labeling operator uses the selected images for labeling, and labeled learning data is thereby generated.
- The labeled learning data includes a selected image, the regions of one or more detection target objects in the image, and labels indicating the types of the objects.
- The labeled learning data may be stored in the first storage unit 111. Note that the labeling work may also be performed by an external device.
- (Step S16) The acquisition unit 120 acquires the labeled learning data, for example from the first storage unit 111 or from an external device.
- (Step S17) The learning units 130a and 130b retrain the trained models 200a and 200b using the labeled learning data.
- (Step S18) The information processing apparatus 100 determines whether or not the learning termination condition is satisfied. The termination condition is stored, for example, in the nonvolatile storage device 103. If the termination condition is satisfied, the process ends; if not, the process returns to step S12.
- According to Embodiment 2, the information processing apparatus 100 can improve the object detection accuracy of the trained models by repeating the addition of labeled learning data and retraining.
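Putting steps S11 to S18 together, the overall loop of Embodiment 2 can be sketched as follows. All function arguments (train_models, detect_all, compute_scores, select_images, label_images, termination) are caller-supplied stand-ins for the units described above; this is an illustrative outline under those assumptions, not the patent's code.

```python
def active_learning_loop(labeled, unlabeled, train_models, detect_all,
                         compute_scores, select_images, label_images,
                         termination, n):
    """Sketch of the Embodiment 2 loop (steps S11 to S18).

    Every function argument is a hypothetical stand-in for a unit in
    FIG. 7; only the control flow mirrors the flowchart of FIG. 8.
    """
    models = train_models(labeled)                      # S11
    while not termination(models):                      # S18
        results = detect_all(models, unlabeled)         # S12
        scores = compute_scores(results)                # S13
        selected = select_images(unlabeled, scores, n)  # S14, S15
        newly_labeled = label_images(selected)          # labeling work
        labeled = labeled + newly_labeled               # S16
        models = train_models(labeled)                  # S17: retrain
    return models
```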
Abstract
An information processing apparatus (100) acquires a plurality of trained models that detect objects by different methods and a plurality of unlabeled learning data that are a plurality of images including objects, performs object detection on each of the unlabeled learning data using the plurality of trained models, calculates a plurality of information amount scores indicating the value of the unlabeled learning data based on the object detection results, selects a preset number of unlabeled learning data based on the scores, and outputs the selected unlabeled learning data.
Description
FIG. 1 is a block diagram showing the functions of the information processing apparatus of Embodiment 1. The information processing apparatus 100 is a device that executes the selection output method. The information processing apparatus 100 has a first storage unit 111, a second storage unit 112, an acquisition unit 120, learning units 130a and 130b, an object detection unit 140, a calculation unit 150, and a selection output unit 160.
FIG. 2 is a diagram showing the hardware of the information processing apparatus of Embodiment 1. The information processing apparatus 100 has a processor 101, a volatile storage device 102, and a nonvolatile storage device 103.
Returning to FIG. 1, the functions of the information processing apparatus 100 will be described.
Part or all of the acquisition unit 120, the learning units 130a and 130b, the object detection unit 140, the calculation unit 150, and the selection output unit 160 may be realized by a processing circuit, or may be realized as modules of a program executed by the processor 101. The program executed by the processor 101 is also called a selection output program. The selection output program is recorded on a recording medium, for example.
First, the first storage unit 111 will be described. The first storage unit 111 may store labeled learning data. The labeled learning data includes an image, the regions of one or more detection target objects in the image, and labels indicating the types of the objects. Information including the regions of the objects and the labels is also called label information. For example, when the image includes a road, the types are a four-wheeled vehicle, a two-wheeled vehicle, a truck, and the like.
Next, the second storage unit 112 will be described. The second storage unit 112 may store a plurality of unlabeled learning data. Each of the plurality of unlabeled learning data contains no label information. The plurality of unlabeled learning data are a plurality of images, each of which includes an object. For example, the objects are humans, animals, and the like.
The acquisition unit 120 acquires the trained models 200a and 200b. For example, the acquisition unit 120 acquires the trained models 200a and 200b from the volatile storage device 102 or the nonvolatile storage device 103, or from an external device.
In this way, the object detection unit 140 performs object detection on each of the plurality of unlabeled learning data using the trained models 200a and 200b.
The object detection unit 140 performs object detection using the one unlabeled learning data and the trained models 200a and 200b. For example, the object detection unit 140 performs object detection using the unlabeled learning data and the trained model 200a, and likewise using the unlabeled learning data and the trained model 200b. Object detection is thus performed by different methods, and an object detection result is output for each trained model. The object detection result is denoted D_i, where i is an integer from 1 to N. The object detection result D_i is also called an inference label R_i. An inference label R_i is expressed as "(c, x, y, w, h)", where c indicates the type of the object, x and y indicate the coordinates (x, y) of the center of the object's image region, w indicates the width of the object, and h indicates the height of the object.
FIGS. 3A and 3B are diagrams for explaining the IoU of Embodiment 1. FIG. 3A shows a specific example of the true-value region R_gt and the detection region R_d, and shows how much the true-value region R_gt and the detection region R_d overlap.
FIG. 4 is a diagram showing the relationship between Precision, Recall, and AP in Embodiment 1. The vertical axis indicates Precision, and the horizontal axis indicates Recall. AP (Average Precision) is calculated using Precision and Recall; that is, the area of the region labeled "AP" in FIG. 4 is calculated as AP.
Next, Embodiment 2 will be described. In Embodiment 2, mainly matters that differ from Embodiment 1 are described, and descriptions of items common to Embodiment 1 are omitted.
The information processing apparatus 100 retrains the trained models 200a and 200b. The details of the retraining are described later.
FIG. 8 is a flowchart showing an example of processing executed by the information processing apparatus of Embodiment 2.
(Step S11) The acquisition unit 120 acquires labeled learning data. The amount of labeled learning data may be small.
The learning units 130a and 130b generate the trained models 200a and 200b by performing object detection learning by different methods using the labeled learning data.
The object detection unit 140 performs object detection using the plurality of unlabeled learning data and the trained models 200a and 200b.
(Step S13) The calculation unit 150 calculates, based on the plurality of object detection results, a plurality of information amount scores corresponding to the plurality of unlabeled learning data.
(Step S14) The selection output unit 160 selects unlabeled learning data with a high learning effect from the plurality of unlabeled learning data based on the plurality of information amount scores.
(Step S15) The selection output unit 160 outputs the selected unlabeled learning data (that is, the selected images). For example, the selection output unit 160 outputs the selected images as illustrated in FIG. 5 or FIG. 6.
(Step S17) The learning units 130a and 130b retrain the trained models 200a and 200b using the labeled learning data.
Claims (6)
1. An information processing apparatus comprising:
an acquisition unit that acquires a plurality of trained models that perform object detection by mutually different methods, and a plurality of unlabeled learning data that are a plurality of images including objects;
an object detection unit that performs object detection on each of the plurality of unlabeled learning data using the plurality of trained models;
a calculation unit that calculates, based on a plurality of object detection results, a plurality of information amount scores indicating the value of the plurality of unlabeled learning data; and
a selection output unit that selects a preset number of unlabeled learning data from the plurality of unlabeled learning data based on the plurality of information amount scores and outputs the selected unlabeled learning data.
2. The information processing apparatus according to claim 1, wherein the selection output unit outputs, as an inference label, an object detection result obtained by performing object detection on the selected unlabeled learning data.
3. The information processing apparatus according to claim 1 or 2, wherein the calculation unit calculates the plurality of information amount scores using mean Average Precision and the plurality of object detection results.
4. The information processing apparatus according to any one of claims 1 to 3, further comprising a plurality of learning units, wherein the acquisition unit acquires labeled learning data including the selected unlabeled learning data, and the plurality of learning units retrain the plurality of trained models using the labeled learning data.
5. A selection output method in which an information processing apparatus:
acquires a plurality of trained models that perform object detection by mutually different methods, and a plurality of unlabeled learning data that are a plurality of images including objects;
performs object detection on each of the plurality of unlabeled learning data using the plurality of trained models;
calculates, based on a plurality of object detection results, a plurality of information amount scores indicating the value of the plurality of unlabeled learning data;
selects a preset number of unlabeled learning data from the plurality of unlabeled learning data based on the plurality of information amount scores; and
outputs the selected unlabeled learning data.
6. A selection output program that causes an information processing apparatus to execute processing of:
acquiring a plurality of trained models that perform object detection by mutually different methods, and a plurality of unlabeled learning data that are a plurality of images including objects;
performing object detection on each of the plurality of unlabeled learning data using the plurality of trained models;
calculating, based on a plurality of object detection results, a plurality of information amount scores indicating the value of the plurality of unlabeled learning data;
selecting a preset number of unlabeled learning data from the plurality of unlabeled learning data based on the plurality of information amount scores; and
outputting the selected unlabeled learning data.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022579270A JP7511690B2 (ja) | 2021-02-05 | 2021-02-05 | Information processing device, selection output method, and selection output program |
DE112021006984.5T DE112021006984T5 (de) | 2021-02-05 | 2021-02-05 | Information processing device, selection output method, and selection output program |
CN202180092367.9A CN116802651A (zh) | 2021-02-05 | 2021-02-05 | Information processing device, selection output method, and selection output program |
US18/273,278 US20240119723A1 (en) | 2021-02-05 | 2021-02-05 | Information processing device, and selection output method |
PCT/JP2021/004388 WO2022168274A1 (ja) | 2021-02-05 | 2021-02-05 | Information processing device, selection output method, and selection output program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/004388 WO2022168274A1 (ja) | 2021-02-05 | 2021-02-05 | Information processing device, selection output method, and selection output program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022168274A1 (ja) | 2022-08-11 |
Family
ID=82742068
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/004388 WO2022168274A1 (ja) | 2021-02-05 | 2021-02-05 | Information processing device, selection output method, and selection output program |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240119723A1 (ja) |
JP (1) | JP7511690B2 (ja) |
CN (1) | CN116802651A (ja) |
DE (1) | DE112021006984T5 (ja) |
WO (1) | WO2022168274A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024161535A1 (ja) * | 2023-02-01 | 2024-08-08 | 三菱電機株式会社 | Information processing device, program, information processing system, and information processing method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007304782A (ja) * | 2006-05-10 | 2007-11-22 | Nec Corp | Data set selection device and experiment planning system |
JP2020528623A (ja) * | 2017-08-31 | 2020-09-24 | 三菱電機株式会社 | Active learning system and method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6364037B2 (ja) | 2016-03-16 | 2018-07-25 | セコム株式会社 | Learning data selection device |
WO2019182590A1 (en) | 2018-03-21 | 2019-09-26 | Visa International Service Association | Automated machine learning systems and methods |
GB201805302D0 (en) | 2018-03-29 | 2018-05-16 | Benevolentai Tech Limited | Ensemble Model Creation And Selection |
JP7233251B2 (ja) | 2019-02-28 | 2023-03-06 | キヤノン株式会社 | Information processing apparatus, control method of information processing apparatus, and program |
-
2021
- 2021-02-05 CN CN202180092367.9A patent/CN116802651A/zh active Pending
- 2021-02-05 WO PCT/JP2021/004388 patent/WO2022168274A1/ja active Application Filing
- 2021-02-05 US US18/273,278 patent/US20240119723A1/en active Pending
- 2021-02-05 JP JP2022579270A patent/JP7511690B2/ja active Active
- 2021-02-05 DE DE112021006984.5T patent/DE112021006984T5/de active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007304782A (ja) * | 2006-05-10 | 2007-11-22 | Nec Corp | Data set selection device and experiment planning system |
JP2020528623A (ja) * | 2017-08-31 | 2020-09-24 | 三菱電機株式会社 | Active learning system and method |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024161535A1 (ja) * | 2023-02-01 | 2024-08-08 | 三菱電機株式会社 | Information processing device, program, information processing system, and information processing method |
Also Published As
Publication number | Publication date |
---|---|
JP7511690B2 (ja) | 2024-07-05 |
JPWO2022168274A1 (ja) | 2022-08-11 |
US20240119723A1 (en) | 2024-04-11 |
DE112021006984T5 (de) | 2023-11-16 |
CN116802651A (zh) | 2023-09-22 |
Similar Documents
Publication | Title |
---|---|
US10997746B2 | Feature descriptor matching |
Zhu et al. | Learning object-specific distance from a monocular image |
Jana et al. | YOLO based Detection and Classification of Objects in video records |
US10474713B1 | Learning method and learning device using multiple labeled databases with different label sets and testing method and testing device using the same |
WO2017059576A1 | Apparatus and method for pedestrian detection |
US10262214B1 | Learning method, learning device for detecting lane by using CNN and testing method, testing device using the same |
US8036468B2 | Invariant visual scene and object recognition |
CN117015813A | Device, system, method and medium for adaptive enhancement of point cloud data sets used for training |
Tsintotas et al. | Appearance-based loop closure detection with scale-restrictive visual features |
WO2022168274A1 | Information processing device, selection output method, and selection output program |
Melotti et al. | Reducing overconfidence predictions in autonomous driving perception |
Kuppusamy et al. | Traffic Sign Recognition for Autonomous Vehicle Using Optimized YOLOv7 and Convolutional Block Attention Module |
CN104680194A | Online target tracking method based on random ferns and random projection |
CN117789160A | Multimodal fusion target detection method and system based on clustering optimization |
CN117237911A | Image-based fast dynamic obstacle detection method and system |
US20230267175A1 | Systems and methods for sample efficient training of machine learning models |
US11928593B2 | Machine learning systems and methods for regression based active learning |
US20220398494A1 | Machine Learning Systems and Methods For Dual Network Multi-Class Classification |
Xiong et al. | Hinge-Wasserstein: Estimating Multimodal Aleatoric Uncertainty in Regression Tasks |
CN115527083A | Image annotation method, device, and electronic equipment |
JP7306460B2 | Adversarial example detection system, method, and program |
Fujita et al. | Fine-tuned Surface Object Detection Applying Pre-trained Mask R-CNN Models |
JP2022150552A | Data processing apparatus and method |
Fakharurazi et al. | Object Detection in Autonomous Vehicles |
Shi et al. | A dynamically class-wise weighting mechanism for unsupervised cross-domain object detection under universal scenarios |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21924669; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2022579270; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 18273278; Country of ref document: US |
| WWE | Wipo information: entry into national phase | Ref document number: 202180092367.9; Country of ref document: CN |
| WWE | Wipo information: entry into national phase | Ref document number: 112021006984; Country of ref document: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21924669; Country of ref document: EP; Kind code of ref document: A1 |