WO2024224751A1 - Image processing method and image processing device
- Publication number: WO2024224751A1
- Application number: PCT/JP2024/004653
- Authority: WIPO (PCT)
- Prior art keywords: template, image, image processing, model, posture
Description
- This disclosure relates to an image processing method and an image processing device.
- a known example of a conventional determination process is template matching, in which a prepared template (for example, an image) of a part is compared with an image of the part captured by a camera installed in the factory and a matching process is performed.
- Patent Document 1 discloses a template creation device that creates a set of templates used in an object recognition device that recognizes objects by template matching.
- the template creation device acquires multiple templates from multiple images of an object in different poses, calculates the similarity of image features between two templates selected from the multiple templates, and performs clustering to divide the multiple templates into multiple groups based on the similarity. For each of the multiple groups, the template creation device integrates all the templates in the group into a single integrated template, and generates a template set with an integrated template for each group.
- the object recognition device performs a hierarchical search by creating a hierarchical template set, performing rough recognition using a low-resolution template set, and then using the results to perform detailed recognition using a high-resolution template set.
- matching processing must be performed in at least two stages, such as recognition processing using a low-resolution template set and recognition processing using a high-resolution template set, which inevitably increases the processing load on the object recognition device.
- the present disclosure has been devised in consideration of the conventional circumstances, and aims to provide an image processing method and an image processing device that achieve highly accurate template matching of an object even when the orientation of the object relative to an imaging device changes as the imaging device moves.
- the present disclosure provides an image processing method performed by an image processing device capable of communicating with a movable camera that can capture an image of an object. The method performs a first process of acquiring a captured image of the object, detecting a position or orientation of the object from the captured image, and generating a template of the object, and, while the first process is being performed, executes a second process multiple times in which template matching is performed based on the captured image and the template of the object and information related to the movement of the object is acquired. The first process predicts the orientation of the object based on the detected position or orientation of the object and the information related to the movement of the object acquired over the multiple executions of the second process, and generates a template of the object corresponding to the predicted orientation.
- the present disclosure also provides an image processing device that includes an acquisition unit that acquires a captured image of an object from a movable camera capable of capturing an image of the object, a first processing unit that detects a position or posture of the object from the captured image and generates a template of the object, and a second processing unit that performs template matching based on the captured image and the template of the object multiple times while the first processing unit is generating the template, and acquires information related to the movement of the object. The first processing unit predicts the posture of the object based on the detected position or posture of the object and the information related to the movement of the object acquired multiple times by the second processing unit, and generates a template of the object corresponding to the predicted posture.
- according to the present disclosure, highly accurate template matching of an object can be achieved even in a situation where the orientation of the object relative to the imaging device changes as the imaging device moves.
- FIG. 3 is a functional block diagram illustrating functions of a first processing unit and a second processing unit in the first embodiment.
- FIG. 4 is a diagram for explaining an example of an overall operation procedure of a picking system according to the first embodiment.
- FIG. 5 is a flowchart showing an example of the second process procedure of the image processing device according to the first embodiment.
- FIG. 6 is a flowchart showing an example of the first process procedure of the image processing device according to the first embodiment.
- FIG. 7 is a diagram showing an example of a template display screen.
- FIG. 8 is a diagram for explaining an example of an overall operation procedure of a picking system according to a first modified example of the first embodiment.
- the object recognition device described above assumes a template creation device that creates templates for objects on a belt conveyor using images captured by a camera installed at a fixed position that cannot move relative to the production line or the like.
- the object recognition device uses a template set generated based on an integrated template for each group as is.
- one method is to stop or slow down the belt conveyor in accordance with the matching process speed, but when this method is adopted, the part picking efficiency (in other words, production efficiency) decreases. Therefore, there has been a demand for an object recognition device that can realize part matching and part picking without stopping the operation of the belt conveyor and end effector.
- a template registration device (for example, an image processing device) captures an image of an object with an imaging device whose imaging position relative to the object can be changed by moving, and registers information based on the input image of the object (see below) in a storage unit, in association with position information of the imaging device, as templates to be used for template matching.
- FIG. 1 is a diagram for explaining an example of the configuration of a picking system.
- FIG. 2 is a block diagram showing an example of the internal configuration of a picking system.
- the picking system 100 includes an actuator AC, a camera CM, an image processing device P1, a display 13, and an operation device 14.
- the actuator AC and the image processing device P1, the camera CM and the image processing device P1, and the image processing device P1 and the operation device 14 are connected so as to enable input/output (transmission/reception) of data signals.
- the positional relationship between the camera CM and the target object Tg will be described with reference to FIG. 1. Note that the description of FIG. 1 is applicable not only to the first embodiment but also to the modifications of the first embodiment described below.
- the object Tg is an object that is picked by the end effector EF of the picking system 100 deployed in a factory, and is, for example, an industrial part, an industrial product, etc. If it is an industrial part, for example, after being picked, it is moved to another lane (production line) for assembling a finished product. If it is an industrial product, for example, after being picked, it is stored in a box such as a cardboard box. It goes without saying that the type of object Tg is not limited to the industrial parts and industrial products described above.
- the actuator AC controls the camera CM to be movable three-dimensionally, thereby changing the positional relationship between the object Tg moving on the belt conveyor, the end effector EF that picks the object Tg, and the camera CM fixed to the end effector EF.
- the actuator AC controls the end effector EF and the camera CM attached to the end effector EF so that they can be moved in three dimensions using multiple axes.
- the actuator AC can control the recognition, maintenance, or change of the three-dimensional position (coordinates) of the camera CM.
- the end effector EF is, for example, a robot hand provided at the tip of a robot arm deployed in correspondence with the picking system 100, and approaches the target object Tg under the control of the actuator AC, and picks up the target object Tg.
- the camera CM is placed near the end effector EF and moves together with the end effector EF under the control of the actuator AC to capture an image of the object Tg.
- the camera CM captures the object Tg at a predetermined frame rate (e.g., 1000 frames per second (hereinafter referred to as "fps")) and transmits the captured image of the object Tg (an example of an input image) obtained each time it is captured to the image processing device P1.
- the image processing device P1 acquires the captured image of the object Tg transmitted from the camera CM.
- the image processing device P1 is configured by a computer capable of executing a first process (see Figs. 4 and 6) and a second process (see Figs. 4 and 5). In the first process, the image processing device P1 detects the posture of the object Tg using the captured image of the object Tg transmitted from the camera CM, observes the movement of the object Tg based on the movement information obtained by the second process while the posture detection is in progress, predicts the posture of the object Tg at the timing when the posture detection process ends based on the detected posture and the movement information, and generates a template corresponding to the predicted posture of the object Tg. In the second process, the image processing device P1 performs feature matching using the template obtained by the first process and the captured image captured by the camera CM, estimates the position information of the object Tg, and transmits the estimated position information to the actuator AC.
- the image processing device P1 may be, for example, a personal computer (hereinafter referred to as "PC"), or may be a dedicated hardware device specialized for executing each of the above-mentioned first and second processes.
- the image processing device P1 realizes recognition processing of the position and orientation of the target object Tg picked by the end effector EF by executing each of the above-mentioned first and second processes.
- the image processing device P1 includes a communication unit 10, a processor 11, a memory 12, and a 3D model database DB.
- the image processing device P1 accepts a user operation and, based on the user operation, generates a template display screen SC (see FIG. 7) and displays it on the display 13. The template display screen SC includes a template TP1 (see FIG. 7) of the object Tg viewed from the position and posture of the icon PP1 described below, a prediction template TP2 (see FIG. 7) of the object Tg corresponding to the predicted posture predicted by the first process, and a detection template TP3 (see FIG. 7) of the object Tg corresponding to the posture detected by the first process.
- the communication unit 10 (an example of an acquisition unit) is connected to the actuator AC, the camera CM, the display 13, and the operation device 14 so that data can be communicated between them, and transmits and receives data.
- the communication unit 10 outputs the captured image transmitted from the camera CM and the control command transmitted from the operation device 14 to the processor 11.
- the communication unit 10 transmits the template display screen SC (see FIG. 7) output from the processor 11 to the display 13.
- the processor 11 is configured using, for example, a Central Processing Unit (CPU) or a Field Programmable Gate Array (FPGA), and performs various processes and controls in cooperation with the memory 12. Specifically, the processor 11 references the programs and data stored in the memory 12 and executes the programs to realize the respective functions of the first processing unit 110 and the second processing unit 120.
- the first processing unit 110 executes a detection process for the object Tg and executes a first process (see FIG. 4 and FIG. 6) for generating a prediction template TP2 for the object Tg.
- the first process is an advanced image process using deep learning, and takes a longer time (e.g., 17 ms) than the time required for executing the second process.
- the position and posture of the object Tg continue to change while the first process is being executed. Therefore, in order to maintain the validity of the detection template TP3 output at the timing when the first process ends, the first processing unit 110 predicts template candidates for the object Tg using the results of the second process, which the second processing unit 120 executes multiple times while the first process is being executed.
- the first processing unit 110 can track the changes in posture of the object Tg that change during the execution of the first process in real time, and predict template candidates that are more suitable for the feature matching executed by the second processing unit 120.
- the second processing unit 120 executes a second process (see Figures 4 and 5) in which feature matching is performed using the prediction template TP2 obtained by the first process and the captured image captured by the camera CM, the position information of the object Tg is estimated, and the estimated position information of the object Tg is transmitted to the actuator AC.
- the second process is a simple image process using feature matching, and requires a shorter time (e.g., 1 ms) than the time required to execute the first process.
- the second processing unit 120 executes the second process multiple times while the first processing unit 110 executes the first process once.
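To illustrate the timing relationship just described, the following is a minimal sketch, assuming a Python threading setup that is not part of the disclosure: a slow first process refreshes a shared template while a fast second process keeps matching every incoming frame against whatever template was fed back most recently. The timings, function names, and data structures are placeholders.

```python
import threading
import time

latest_template = None            # prediction template fed back by the first process
template_lock = threading.Lock()

def first_process(frame):
    """Slow path (roughly 17 ms in the description): detection, 3D matching, prediction."""
    time.sleep(0.017)             # stand-in for deep-learning detection and template generation
    return ("template_for", frame)

def run_first_process(frame):
    global latest_template
    template = first_process(frame)
    with template_lock:
        latest_template = template            # feedback to the second process

def second_process(frame):
    """Fast path (roughly 1 ms): feature matching with the most recent template."""
    with template_lock:
        template = latest_template
    if template is None:
        return None
    time.sleep(0.001)             # stand-in for feature matching and position fitting
    return (frame, template)

# One execution of the first process overlaps many executions of the second process.
threading.Thread(target=run_first_process, args=("frame_000",)).start()
for i in range(20):
    second_process(f"frame_{i:03d}")
```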
- Memory 12 has, for example, Random Access Memory (RAM) as a working memory used when executing each process of processor 11, and Read Only Memory (ROM) that stores programs and data that define the operation of processor 11. Data or information generated or acquired by processor 11 is temporarily stored in RAM. Programs that define the operation of processor 11 are written in ROM.
- the 3D model database DB (an example of a database) is, for example, a flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
- the 3D model database DB stores (registers) 3D model data of at least one object Tg to be picked, and information about the object Tg (for example, the name and identification number of each object) for each object Tg.
- the display 13 is a device that outputs (displays) the template display screen SC (see FIG. 7) generated by the image processing device P1, and is configured, for example, by a Liquid Crystal Display (LCD) or an organic electroluminescence (EL) device.
- the operation device 14 is an interface that detects user operation input, and is composed of, for example, a mouse, a keyboard, or a touch panel. When the operation device 14 receives a user operation, it generates an electrical signal based on the user operation and transmits it to the image processing device P1.
- FIG. 3 is a functional block diagram illustrating the functions of the first processing unit 110 and the second processing unit 120 in the first embodiment.
- the first processing unit 110 includes an object detection unit 111, a 3D model selection unit 112, a first time prediction unit 113, and a 3D model synthesis unit 117.
- the object detection unit 111 uses Deep Learning to perform image recognition processing on the captured image transmitted from the camera CM and detects an object (target object Tg) from the captured image.
- the object detection unit 111 outputs information on the detected object (target object Tg) to the 3D model selection unit 112 and the first time prediction unit 113.
- the Deep Learning used by the object detection unit 111 may be any learning method suitable for detecting the target object Tg, such as a Convolutional Neural Network (CNN).
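As one hedged illustration of such a detection step, the sketch below uses an off-the-shelf torchvision detector as a stand-in; the disclosure does not name a specific network, so the model choice, the score threshold, and the helper name detect_objects are assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained general-purpose detector used purely as a placeholder for the
# Deep Learning model of the object detection unit 111.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(image_rgb, score_threshold=0.5):
    """image_rgb: HxWx3 uint8 array from the camera CM; returns boxes and labels."""
    with torch.no_grad():
        prediction = model([to_tensor(image_rgb)])[0]
    keep = prediction["scores"] > score_threshold
    return prediction["boxes"][keep], prediction["labels"][keep]
```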
- the 3D model selection unit 112 selects a 3D model MD that corresponds to information about an object (object Tg) specified by a user operation from among the 3D models of at least one object Tg registered in the 3D model database DB.
- the 3D model selection unit 112 outputs the selected 3D model MD of the object Tg to each of the 3D matching unit 114 and the 3D model synthesis unit 117.
- the first time prediction unit 113 tracks (observes) changes in the position and posture of the object Tg based on the movement information of the object Tg obtained by the second process while the detection process of the object Tg is being performed by the first process, and generates a prediction template TP2 corresponding to the posture of the object Tg at the time when the first process ends.
- the first time prediction unit 113 includes a 3D matching unit 114, a template prediction unit 115, and a prediction model update unit 116.
- the 3D matching unit 114 executes 3D matching in which the information about the object (target object Tg) output from the object detection unit 111 and the 3D model MD of the target object Tg output from the 3D model selection unit 112 are matched in a three-dimensional space, and recognizes the posture of the target object Tg that appears in the captured image captured by the camera CM (hereinafter referred to as the "detected posture").
- the 3D matching unit 114 associates information about the detected posture of the target object Tg with the 3D model MD of the target object Tg used in the 3D matching, and outputs the information to the template prediction unit 115 and the 3D model synthesis unit 117.
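The text does not specify the 3D matching algorithm. The following sketch assumes that 2D-3D correspondences between the captured image and the 3D model MD are already available and recovers the detected posture with OpenCV's PnP solver, which is only one possible realization of this step.

```python
import cv2
import numpy as np

def estimate_detected_pose(model_points_3d, image_points_2d, camera_matrix):
    """model_points_3d: (N, 3) points on the 3D model MD; image_points_2d: (N, 2) matches in the frame."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_points_3d, dtype=np.float32),
        np.asarray(image_points_2d, dtype=np.float32),
        camera_matrix,
        None,                              # assume an undistorted (rectified) image
        flags=cv2.SOLVEPNP_ITERATIVE)
    return (rvec, tvec) if ok else None    # rotation and translation = detected posture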
- the template prediction unit 115 generates a prediction template TP2 corresponding to the posture of the object Tg at the time when the first process ends, based on information on the detected posture of the object Tg and the 3D model MD of the object Tg sent from the 3D matching unit 114, and the prediction model output from the prediction model update unit 116.
- the prediction model here is a prediction model in which the posture of the object Tg is predicted, and is a mathematical model based on information on the movement of the object Tg obtained by the second process that has been executed multiple times.
- based on the prediction result, the template prediction unit 115 generates a 2D image (hereinafter referred to as a "prediction template") obtained when the 3D model MD is viewed from the predicted posture (angle) of the object Tg relative to the camera CM at the time the next captured image is captured.
- the template prediction unit 115 associates information about the posture of the object Tg with the generated prediction template TP2 of the object Tg (see Figure 7), and outputs the information to the 3D model synthesis unit 117 and the template update unit 121, respectively.
- the prediction model update unit 116 updates the prediction model for predicting changes in the posture of the object Tg that occur while the detection process for the object Tg is being performed by the first process, based on the feature matching result output from the feature matching unit 123 and the movement of the object Tg output from the position fitting unit 124.
- the prediction model update unit 116 outputs the updated prediction model to the template prediction unit 115.
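The disclosure only states that the prediction model is a mathematical model built from the movement information gathered over repeated second processes. The constant-velocity extrapolation below is a minimal sketch of one such model, with the class name and the 6-DoF pose layout assumed.

```python
import numpy as np

class ConstantVelocityPoseModel:
    """Toy prediction model: extrapolates a 6-DoF pose (x, y, z, roll, pitch, yaw)."""

    def __init__(self):
        self.last_pose = None
        self.last_time = None
        self.velocity = np.zeros(6)

    def update(self, pose, timestamp):
        """Called with the pose and time observed by each execution of the second process."""
        pose = np.asarray(pose, dtype=float)
        if self.last_pose is not None and timestamp > self.last_time:
            self.velocity = (pose - self.last_pose) / (timestamp - self.last_time)
        self.last_pose, self.last_time = pose, timestamp

    def predict(self, future_time):
        """Predict the pose at the time the first process is expected to finish."""
        if self.last_pose is None:
            return None
        return self.last_pose + self.velocity * (future_time - self.last_time)
```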
- the 3D model synthesis unit 117 acquires the detected orientation of the object Tg output from the 3D matching unit 114 and the detection template TP3 (see FIG. 7), and generates an icon PP3 indicating the angle at which the object Tg in the captured image was captured based on the detected orientation of the object Tg.
- the 3D model synthesis unit 117 acquires the predicted orientation of the object Tg output from the template prediction unit 115 and the prediction template TP2, and generates an icon PP2 indicating the angle of the object Tg to be captured in the next captured image based on the predicted orientation of the object Tg.
- the 3D model synthesis unit 117 generates a template display screen SC based on the 3D model MD of the object Tg, the prediction template TP2, the detection template TP3, and each of the icons PP1, PP2, and PP3, and transmits it to the display 13.
- the second processing unit 120 includes a template update unit 121, a feature extraction unit 122, a feature matching unit 123, a position fitting unit 124, a second time prediction unit 125, and a control unit 126.
- the template update unit 121 acquires the prediction template TP2 output from the template prediction unit 115 of the first processing unit 110, and updates the template (2D data) of the object Tg used for feature matching to the acquired prediction template TP2.
- the feature extraction unit 122 extracts the feature amount of the object Tg from the captured image transmitted from the camera CM.
- the feature extraction unit 122 outputs the extracted feature amount of the object Tg to the feature matching unit 123.
- the feature matching unit 123 matches the feature amount of the object Tg included in the prediction template TP2 output from the template update unit 121 with the feature amount of the object Tg output from the feature extraction unit 122.
- the feature matching unit 123 outputs the matching result to the prediction model update unit 116 and the position fitting unit 124.
- the position fitting unit 124 acquires the matching results output from the feature matching unit 123.
- the position fitting unit 124 fits the position information of the object Tg appearing in the captured image based on the result of the feature matching.
- the position fitting unit 124 outputs the position information of the object Tg after position fitting to the prediction model update unit 116 and the second time prediction unit 125.
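As a concrete, hedged illustration of the feature extraction, feature matching, and position fitting chain (units 122 to 124), the sketch below uses ORB features and a RANSAC homography from OpenCV; the actual feature type and fitting method are not specified in the text, and the function name is an assumption.

```python
import cv2
import numpy as np

def fit_object_position(template_img, frame_img, min_matches=10):
    """Return the 2D position of the template center in the frame, or None."""
    orb = cv2.ORB_create()
    kp_t, des_t = orb.detectAndCompute(template_img, None)   # features of the template
    kp_f, des_f = orb.detectAndCompute(frame_img, None)      # features of the captured image
    if des_t is None or des_f is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_f), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None

    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    h, w = template_img.shape[:2]
    center = cv2.perspectiveTransform(np.float32([[[w / 2.0, h / 2.0]]]), H)
    return tuple(center[0, 0])             # fitted position of the object in the frame
```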
- the second time prediction unit 125 predicts the movement of the object Tg while the second process is being performed based on the position information of the object Tg output from the position fitting unit 124, and predicts the position of the object Tg at the time when the second process ends.
- the second time prediction unit 125 outputs information on the predicted position of the object Tg to the control unit 126.
- the control unit 126 outputs the information on the predicted position of the object Tg output from the second time prediction unit 125 to the actuator AC.
- FIG. 4 is a diagram illustrating an example of the overall operation procedure of the picking system 100 according to the first embodiment.
- the example of the overall operation procedure of the picking system 100 shown in FIG. 4 is just one example, and is not limited to this.
- FIG. 4 in order to make it easier to understand the relationship between the first process and the second process, an example is shown in which the first process is executed once and the second process is executed N times (N: an integer of 3 or more) when picking one target object Tg, but the number of times that the first process and the second process are executed is not limited to this. It goes without saying that the picking system 100 may execute the first process multiple times when picking one target object Tg.
- the actuator AC picks up the object Tg being transported on the belt conveyor while the camera CM captures images of the object Tg at a predetermined frame rate (e.g., 1000 fps).
- FIG. 4 shows a part of the picking process of the object Tg by the actuator AC (time t11 to time t1N); this process is executed repeatedly, for example, until the object Tg is picked.
- the image processing device P1 acquires an image of the object Tg captured by the camera CM at a predetermined frame rate, and performs a second process (step St100) on the acquired image.
- the image processing device P1 executes the first process (step St200) based on the captured image captured by the camera CM, the feature matching results (in other words, matching tendency) obtained by the multiple second processes executed during the first process, and the amount of movement of the object Tg (in other words, movement information).
- the image processing device P1 feeds back to the second process the prediction template TP2 (see FIG. 7), which is a template candidate for the object Tg obtained by the first process.
- the camera CM captures an image of the object Tg at time t11 and transmits the captured image Img11 to the image processing device P1.
- the image processing device P1 acquires the first captured image Img11 (image data) transmitted from the camera CM and executes the first process and the second process using the first captured image Img11.
- the image processing device P1 outputs the matching tendency of the object Tg obtained by the second process and the movement information of the object Tg to the first processing unit 110, and transmits information on the predicted position (x1, y1, z1) of the object Tg to the actuator AC.
- the actuator AC moves the end effector EF toward the acquired three-dimensional predicted position (x1, y1, z1) of the object Tg.
- the camera CM captures an image of the object Tg.
- the camera CM transmits the captured image Img12 to the image processing device P1.
- the image processing device P1 acquires the second captured image Img12 (image data) transmitted from the camera CM and executes the second process using the second captured image Img12.
- the image processing device P1 outputs the matching tendency of the object Tg obtained by the second process and the movement information of the object Tg to the first processing unit 110, and transmits information on the predicted position (x2, y2, z2) of the object Tg to the actuator AC.
- the actuator AC moves the end effector EF toward the acquired predicted position (x2, y2, z2) of the object Tg.
- the camera CM captures an image of the object Tg.
- the camera CM transmits the captured image Img13 to the image processing device P1.
- the image processing device P1 acquires the third captured image Img13 (image data) transmitted from the camera CM and executes the second process using the third captured image Img13.
- the image processing device P1 outputs the matching tendency of the object Tg obtained by the second process and the movement information of the object Tg to the first processing unit 110, and transmits information on the predicted position (not shown) of the object Tg to the actuator AC.
- the actuator AC moves the end effector EF toward the acquired predicted position of the object Tg.
- the camera CM captures an image of the object Tg.
- the camera CM transmits the captured image Img1 (N-2) to the image processing device P1.
- the image processing device P1 acquires the (N-2)th captured image Img1 (N-2) (image data) transmitted from the camera CM and executes the second process using the (N-2)th captured image Img1 (N-2).
- the image processing device P1 outputs the matching tendency of the object Tg and the movement information of the object Tg obtained by the second process to the first processing unit 110, and transmits information on the predicted position (not shown) of the object Tg to the actuator AC.
- the actuator AC moves the end effector EF toward the acquired predicted position of the object Tg.
- the camera CM captures an image of the object Tg.
- the camera CM transmits the captured image Img1 (N-1) to the image processing device P1.
- the image processing device P1 acquires the (N-1)th captured image Img1 (N-1) (image data) transmitted from the camera CM and executes the second process using the (N-1)th captured image Img1 (N-1).
- the image processing device P1 outputs the matching tendency of the object Tg and the movement information of the object Tg obtained by the second process to the first processing unit 110, and transmits information on the predicted position (not shown) of the object Tg to the actuator AC.
- the actuator AC moves the end effector EF toward the acquired predicted position of the object Tg.
- the camera CM captures an image of the object Tg.
- the camera CM transmits the captured image Img1N to the image processing device P1.
- the image processing device P1 acquires the Nth captured image Img1N (image data) transmitted from the camera CM and executes the second processing using the Nth captured image Img1N.
- the image processing device P1 outputs the matching tendency of the object Tg obtained by the second processing and the movement information of the object Tg to the first processing unit 110, and transmits information on the predicted position (xN, yN, zN) of the object Tg to the actuator AC.
- the actuator AC moves the end effector EF toward the acquired predicted position (xN, yN, zN) of the object Tg to pick up the object Tg.
- the image processing device P1 feeds back the template candidate (prediction template TP2) of the object Tg obtained by the first process at time t1(N+1) to the second processing unit 120, and updates the template candidate (prediction template TP2).
- the image processing device P1 performs feature matching using the latest obtained prediction template TP2 until a new prediction template TP2 is fed back again by the first process.
- FIG. 5 is a flowchart showing an example of the second process procedure (step St100) of the image processing device P1 in embodiment 1.
- the second processing unit 120 sets the update flag for updating the template of the target object Tg to "1" based on the control command notifying the start of the picking process acquired via the operation device 14 (St11).
- the second processing unit 120 acquires the captured image transmitted from the camera CM (St12).
- the second processing unit 120 determines whether the currently set update flag is "1" (St13).
- If the second processing unit 120 determines in the processing of step St13 that the currently set update flag is "1" (St13, YES), it updates the template (template TP1 or prediction template TP2) of the object Tg used for feature matching (St14).
- the second processing unit 120 outputs the captured image of the object Tg for generating a template candidate (prediction template TP2) to the first processing unit 110 (St15), and sets the update flag to "0" (St16).
- If the second processing unit 120 determines in the processing of step St13 that the currently set update flag is not "1" (St13, NO), it determines whether or not there is a template (prediction template TP2) of the object Tg to be used for feature matching (St17).
- If the second processing unit 120 determines in step St17 that there is a template (prediction template TP2) of the object Tg to be used for feature matching (St17, YES), it extracts features from the captured image (St18).
- If the second processing unit 120 determines in the processing of step St17 that there is no template (prediction template TP2) of the object Tg to be used for feature matching (St17, NO), it returns to the processing of step St13 and waits until a template (template TP1 or prediction template TP2) of the object Tg is fed back from the first processing unit 110.
- the second processing unit 120 performs feature matching between the features extracted from the captured image (i.e., the features of the captured image) and the features of the object Tg based on the template (i.e., the features of the template). Based on the matching result, the second processing unit 120 fits the position of the object Tg appearing in the captured image captured by the camera CM (St19).
- the second processing unit 120 calculates the amount of movement of the object Tg between the captured image used for feature matching (i.e., the most recent captured image) and the captured image captured immediately before this captured image. Based on the calculated amount of movement, the second processing unit 120 predicts the amount of movement of the object Tg while the second process is being performed, predicts the position of the object Tg at the time when the second process ends, and obtains the predicted position of the object Tg (St20). The second processing unit 120 transmits the predicted position of the object Tg to the actuator AC (St21).
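To make the movement prediction of steps St20 and St21 concrete, here is a small worked sketch with assumed numbers: at the 1000 fps frame rate the frame interval is 1 ms, and the second process itself takes about 1 ms, so a constant-velocity assumption simply adds roughly one more frame interval of motion.

```python
frame_interval_s = 1.0 / 1000.0    # camera CM at 1000 fps
second_process_s = 0.001           # roughly 1 ms second process (from the description)

prev_pos = (10.0, 5.0, 300.0)      # position from the previous fitting (assumed, in mm)
curr_pos = (10.5, 5.0, 300.0)      # position from the latest fitting (assumed, in mm)

# Velocity from the two most recent fittings, then extrapolation over the processing time.
velocity = tuple((c - p) / frame_interval_s for p, c in zip(prev_pos, curr_pos))
predicted_pos = tuple(c + v * second_process_s for c, v in zip(curr_pos, velocity))
# predicted_pos == (11.0, 5.0, 300.0) is what would be transmitted to the actuator AC
```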
- the second processing unit 120 outputs the matching tendency, which is the result of feature matching, and the amount of movement (movement information) of the object Tg to the first processing unit 110 (St22).
- the second processing unit 120 determines whether the picking process of the object Tg has been completed based on a control command notifying the completion of the picking process of the object Tg by the actuator AC (St23).
- If the second processing unit 120 determines in step St23 that the picking process of the target object Tg has been completed (St23, YES), it terminates the second process (step St100) shown in FIG. 5.
- If the second processing unit 120 determines in the processing of step St23 that the picking process of the target object Tg has not been completed (St23, NO), the processing returns to step St11.
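Putting steps St11 to St23 together, the following is a hedged Python sketch of the second-process loop; the camera, actuator, and first-process hooks are passed in as callables, the shared dictionary stands in for the update flag and feedback path, and the flow is simplified rather than a literal transcription of the flowchart.

```python
def second_process_loop(capture, match_and_fit, predict_end_position,
                        send_to_actuator, hand_frame_to_first_process, shared):
    shared["update_flag"] = True                       # St11: request a template update
    template = None
    prev_position = None
    while not shared.get("picking_done", False):       # St23: stop when picking completes
        frame = capture()                              # St12: acquire a captured image
        if shared["update_flag"]:                      # St13
            template = shared.get("template")          # St14: adopt the fed-back template
            hand_frame_to_first_process(frame)         # St15: give the first process a frame
            shared["update_flag"] = False              # St16
        if template is None:                           # St17: wait until a template is fed back
            continue
        position = match_and_fit(template, frame)      # St18-St19: features plus position fitting
        predicted = predict_end_position(prev_position, position)   # St20
        send_to_actuator(predicted)                    # St21
        shared["motion_info"] = (prev_position, position)           # St22: to the first process
        prev_position = position
```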
- the image processing device P1 in the first embodiment can predict the amount of movement of the object Tg while the second process is being performed based on the captured images captured at a high frame rate by the second process, and can predict the position of the object Tg at the timing when the second process ends. This allows the image processing device P1 to support the real-time tracking of the object Tg by the actuator AC.
- FIG. 6 is a flowchart showing an example of the first process procedure (step St200) of the image processing device P1 in the first embodiment.
- the first processing unit 110 acquires the captured image output from the second processing unit 120 (St31).
- the first processing unit 110 detects the object Tg from the acquired captured image using advanced image processing techniques such as Deep Learning (St32).
- the first processing unit 110 selects a 3D model MD corresponding to the object Tg from among the 3D models stored in the 3D model database DB.
- the first processing unit 110 performs 3D matching between the selected 3D model MD and the detected object Tg to obtain the detected orientation of the object Tg in the captured image (in other words, the captured orientation of the object Tg when it is captured) (St33). This allows the first processing unit 110 to obtain the detected orientation for generating the detection template TP3 (see FIG. 7).
- the first processing unit 110 updates a prediction model for predicting the orientation of the object Tg (in other words, the captured orientation) based on the feature matching result and the movement information of the object Tg obtained from the second processing unit 120 (St34).
- the first processing unit 110 predicts a template candidate corresponding to the posture of the object Tg at the timing (time) when the first processing ends based on a prediction model for predicting the updated posture of the object Tg (in other words, the imaging posture) and the detected posture of the object Tg (St35), and feeds back (outputs) the predicted template candidate (prediction template TP2) to the second processing unit 120 (St36).
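As a hedged sketch of how a prediction template might be produced from the 3D model MD and a predicted posture, the code below simply projects the model's vertices with the predicted rotation and translation; a real implementation would more likely use an offscreen renderer, and all names here are assumptions rather than the disclosed method.

```python
import cv2
import numpy as np

def render_prediction_template(model_vertices, rvec, tvec, camera_matrix, size=(256, 256)):
    """model_vertices: (N, 3) points of the 3D model MD; rvec/tvec: predicted posture."""
    points_2d, _ = cv2.projectPoints(
        np.asarray(model_vertices, dtype=np.float32),
        np.asarray(rvec, dtype=np.float32),
        np.asarray(tvec, dtype=np.float32),
        camera_matrix, None)
    template = np.zeros(size, dtype=np.uint8)
    for x, y in points_2d.reshape(-1, 2):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < size[0] and 0 <= xi < size[1]:
            template[yi, xi] = 255           # sparse point rendering, for illustration only
    return template
```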
- the first processing unit 110 sets the update flag for updating the template of the object Tg to "1" (St37).
- the first processing unit 110 determines whether the picking process of the object Tg has been completed based on a control command notifying the completion of the picking process of the object Tg by the actuator AC (St38).
- If the first processing unit 110 determines in step St38 that the picking process of the target object Tg has been completed (St38, YES), it ends the first process (step St200) shown in FIG. 6.
- If the first processing unit 110 determines in the processing of step St38 that the picking process of the target object Tg has not been completed (St38, NO), the processing returns to step St31.
- the image processing device P1 in embodiment 1 can perform more advanced image recognition processing through the first processing, and thereby select with higher accuracy a 3D model MD of the object Tg for generating a prediction template TP2 to be used in the second processing. Furthermore, the image processing device P1 can track changes in the posture of the object Tg that change during the first processing, based on the feature matching results and movement information of the object Tg output from the second processing unit 120, which has a short processing time, and generate a template (prediction template TP2) of the object Tg that is closer to the actual posture of the object Tg. This allows the image processing device P1 to improve the feature matching accuracy of the second processing unit 120, as well as improve the tracking accuracy of tracking the object Tg.
- FIG. 7 is a diagram showing an example of the template display screen SC. Note that the template display screen SC shown in FIG. 7 is an example and is not limited to this.
- the first processing unit 110 generates a template display screen SC based on the result of the first processing, and transmits the generated template display screen SC to the display 13 for display.
- the template display screen SC includes a first display area AR1, a second display area AR2, a third display area AR3, a fourth display area AR4, a fifth display area AR5, and a registration button BT.
- the first display area AR1 includes a 3D model MD of the object Tg obtained by image processing in the first process or selected by user operation in a modified example of the first embodiment described below, and an XYZ coordinate system set in the data of the 3D model MD of the object Tg.
- the first display area AR1 also includes an icon PP1 indicating the imaging angle of the camera CM corresponding to the template TP1, an icon PP2 indicating the imaging angle of the camera CM corresponding to the prediction template TP2 (an example of the first imaging position), and an icon PP3 indicating the imaging angle of the camera CM corresponding to the detection template TP3 (an example of the second imaging position).
- the icon PP1 can receive user operations via the operation device 14.
- when the position of the icon PP1 is changed by a user operation, the image processing device P1 generates a template (2D) of the 3D model MD of the object Tg viewed from the position (angle) of the icon PP1.
- the image processing device P1 generates a template display screen SC in which the generated template (2D) of the 3D model MD is displayed as the template TP1 in the third display area AR3, and transmits it to the display 13 for display.
- the icon PP2 indicates the position (angle) of the camera CM from which the template (2D) of the 3D model MD corresponding to the prediction template TP2 can be captured, that is, the imaging position of the object Tg to be imaged next.
- the image processing device P1 predicts the predicted attitude of the object Tg by the first process based on the predicted position of the object Tg obtained by the second process
- the image processing device P1 updates the position of the icon PP2 and the prediction template TP2 displayed in the fourth display area AR4 based on the predicted attitude of the object Tg.
- the image processing device P1 generates a template display screen SC in which the position of the icon PP2 and the prediction template TP2 have been updated, and transmits it to the display 13 for display.
- the icon PP3 indicates the position (angle) of the camera CM from which the template (2D) of the 3D model MD corresponding to the detection template TP3 can be captured, that is, the imaging position of the object Tg detected from the captured image on which image processing has been performed.
- the image processing device P1 updates the position of the icon PP3 and the detection template TP3 displayed in the fifth display area AR5 based on the posture of the object Tg detected by the first processing.
- the image processing device P1 generates a template display screen SC in which the position of the icon PP3 and the detection template TP3 have been updated, and transmits it to the display 13 for display.
- the second display area AR2 includes a 3D model MD of at least one object stored in the 3D model database DB.
- the second display area AR2 shown in FIG. 7 includes a 3D model MD (3D) of object "A12", a 3D model MD (3D) of object "A13", a 3D model MD (3D) of object "A14", and a 3D model MD (3D) of object "A15".
- the second display area AR2 can receive a user operation to select one of the objects via the operation device 14.
- the image processing device P1 receives a user operation to select one of the selection areas SL1, SL2, SL3, and SL4 corresponding to each object, and displays a 3D model corresponding to any one of the selection areas SL1 to SL4 specified by the user operation in the first display area AR1.
- the image processing device P1 in the second variation of the first embodiment accepts a user operation on the icon PP1 displayed in the first display area AR1, and displays a 3D model template (2D) corresponding to the position of the icon PP1 in the third display area AR3 based on the position of the icon PP1 moved by the user operation.
- the image processing device P1 updates (registers) the template (2D) displayed in the third display area AR3 as a template to be used for feature matching in the second process.
- the third display area AR3 includes a template TP1 (2D) when the 3D model MD of the object Tg is imaged from the position (angle) of the icon PP1.
- the fourth display area AR4 includes the prediction template TP2 (2D) obtained when the 3D model MD of the object Tg is imaged from the position (angle) of the icon PP2, that is, from the predicted posture.
- the fifth display area AR5 includes the detection template TP3 (2D) obtained when the 3D model MD of the object Tg is imaged from the position (angle) of the icon PP3, that is, from the posture in which the object Tg was detected.
- the registration button BT is a button that can accept the creation of a template (2D) that corresponds to the icon PP1 that has been moved based on a user operation.
- the picking system 100 according to the first embodiment has shown an example of performing template prediction by 3D matching using a 3D model.
- the picking system 100 according to the first modification of the first embodiment will be described as performing template prediction based on the position of the target object Tg detected in the first process, information on the movement of the target object Tg obtained in multiple second processes performed while the first process is being performed, and captured images used in the multiple second processes.
- the internal configuration example of the picking system 100 according to the first variation of the first embodiment has almost the same configuration as the internal configuration example of the picking system 100 according to the first embodiment, so a description thereof will be omitted.
- FIG. 8 is a diagram illustrating an example of the overall operation procedure of the picking system 100 according to the first modification of the first embodiment.
- the example of the overall operation procedure of the picking system 100 shown in FIG. 8 is just one example, and is not limited to this.
- FIG. 8 in order to make it easier to understand the relationship between the first process and the second process, an example is shown in which the first process is executed once and the second process is executed N times when picking one target object Tg, but the number of times that the first process and the second process are executed is not limited to this. It goes without saying that the picking system 100 may execute the first process multiple times when picking one target object Tg.
- the actuator AC picks up the object Tg being transported on the belt conveyor while the camera CM captures images of the object Tg at a predetermined frame rate (e.g., 1000 fps).
- FIG. 8 shows a part of the picking process of the object Tg by the actuator AC (time t11 to time t1N); this process is executed repeatedly, for example, until the object Tg is picked.
- the image processing device P1 acquires an image of the object Tg captured by the camera CM at a predetermined frame rate, and performs a second process (step St100) on the acquired image.
- the image processing device P1 executes the first process (step St200A) based on the captured image captured by the camera CM, information related to the movement of the object Tg, such as the movement information, obtained by the multiple second processes executed during the first process, and the captured image used in the multiple second processes.
- the image processing device P1 feeds back to the second process the prediction template TP2 (see FIG. 7), which is a template candidate for the object Tg obtained by the first process.
- the camera CM captures an image of the object Tg at time t11 and transmits the captured image Img11 to the image processing device P1.
- the image processing device P1 acquires the first captured image Img11 (image data) transmitted from the camera CM and executes the first process and the second process using the first captured image Img11.
- the image processing device P1 outputs the movement information (information related to movement) of the object Tg obtained by the second process and the first captured image Img11 used in the second process to the first processing unit 110, and transmits information on the predicted position (x1, y1, z1) of the object Tg to the actuator AC.
- the actuator AC moves the end effector EF toward the acquired three-dimensional predicted position (x1, y1, z1) of the object Tg.
- the camera CM captures an image of the object Tg.
- the camera CM transmits the captured image Img12 to the image processing device P1.
- the image processing device P1 acquires the second captured image Img12 (image data) transmitted from the camera CM and executes a second process using the second captured image Img12.
- the image processing device P1 outputs the movement information (information related to movement) of the object Tg obtained by the second process and the second captured image Img12 used in the second process to the first processing unit 110, and transmits information on the predicted position (x2, y2, z2) of the object Tg to the actuator AC.
- the actuator AC moves the end effector EF toward the acquired predicted position (x2, y2, z2) of the object Tg.
- the picking system 100 repeatedly executes the same process from time t13 to time t1(N-1).
- the camera CM captures an image of the object Tg.
- the camera CM transmits the captured image Img1N to the image processing device P1.
- the image processing device P1 acquires the Nth captured image Img1N (image data) transmitted from the camera CM and executes the second process using the Nth captured image Img1N.
- the image processing device P1 outputs the movement information (information related to movement) of the object Tg obtained by the second process and the Nth captured image Img1N used in the second process to the first processing unit 110, and transmits information on the predicted position (xN, yN, zN) of the object Tg to the actuator AC.
- the actuator AC moves the end effector EF toward the acquired predicted position (xN, yN, zN) of the object Tg to pick up the object Tg.
- the image processing device P1 feeds back the template Img31 of the object Tg selected by the user's operation to the second processing unit 120.
- the image processing device P1 according to the first modification of the first embodiment can perform template prediction without using the 3D model MD of the object Tg.
- the picking system 100 according to the first embodiment has shown an example in which a template is predicted using the matching tendency obtained by the second process and information on the movement of the target object Tg.
- the picking system 100 according to the second modification of the first embodiment will be described below with reference to an example in which a template is generated using a template TP1 obtained based on a user operation.
- the internal configuration example of the picking system 100 according to the second variation of the first embodiment has almost the same configuration as the internal configuration example of the picking system 100 according to the first embodiment, so a description thereof will be omitted.
- FIG. 9 is a flowchart showing an example of the first process procedure (step St200B) of the image processing device P1 in the second modification of the first embodiment.
- the first processing unit 110 determines whether or not a template TP1 of the target object Tg based on a user operation of operating the icon PP1 and pressing the registration button BT has been registered (St30A).
- If the first processing unit 110 determines in the processing of step St30A that a template TP1 of the target object Tg has been registered (St30A, YES), it feeds back the registered template TP1 to the second processing unit 120 instead of the prediction template TP2 (St30B).
- If the first processing unit 110 determines in the processing of step St30A that the template TP1 of the object Tg has not been registered (St30A, NO), it acquires the captured image output from the second processing unit 120 (St31).
- the image processing device P1 in the second modification of the first embodiment performs feature matching using the template TP1 specified by the user instead of the prediction template TP2, thereby improving the feature matching accuracy of the second processing unit 120 and also improving the tracking accuracy of tracking the target object Tg.
- the image processing device P1 is capable of communicating with a movable camera CM capable of capturing an image of an object Tg, and executes a first process (steps St200, St200A) in which the image of the object Tg is captured, the position or posture of the object Tg is detected from the captured image, and a prediction template TP2 (an example of a template) of the object Tg is generated.
- the image processing device P1 executes feature matching (an example of template matching) based on the captured image and the prediction template TP2 of the object Tg, and executes a second process multiple times to acquire information regarding the movement of the object Tg.
- the first process predicts the posture of the object Tg based on the detected position or posture of the object Tg and the information regarding the movement of the object Tg acquired multiple times in the second process, and generates a prediction template TP2 of the object Tg corresponding to the predicted posture of the object Tg.
- the image processing device P1 generates a prediction template TP2 by the first process, and tracks the change in posture of the object Tg that changes during the first process based on the feature matching result and the position information of the object Tg output from the second processing unit 120, and can generate a template (prediction template TP2) of the object Tg that is closest to the posture of the actual object Tg.
- the image processing device P1 according to the first embodiment and the first modification of the first embodiment extracts features of the object Tg from the captured image, and performs feature matching based on the extracted features of the object Tg and the features of the object Tg depicted in the prediction template TP2. This allows the image processing device P1 according to the first embodiment and the first modification of the first embodiment to obtain the position of the object Tg depicted in the captured image by position fitting based on feature matching.
- the first process in the image processing device P1 according to embodiment 1 performs 3D matching based on the object Tg detected from the captured image and the 3D model MD recorded in a 3D model database DB (an example of a database) to identify the posture of the object Tg, predicts the posture of the object Tg based on information related to the movement of the object Tg obtained by the second process executed multiple times, and generates a prediction template TP2 for the object Tg. This allows the image processing device P1 according to embodiment 1 to obtain the posture of the object Tg appearing in the captured image.
- the image processing device P1 acquires a first imaging position of the camera CM relative to the 3D model MD based on the predicted orientation of the object Tg, generates a prediction template TP2 corresponding to the predicted orientation based on the 3D model MD and the predicted orientation of the object Tg, and outputs the prediction template TP2 to the display 13 by associating the 3D model MD, the first imaging position relative to the 3D model MD, and the prediction template TP2.
- This allows the image processing device P1 according to the first embodiment to visualize to the user that the prediction template TP2 is a template (2D image) when the 3D model MD of the object Tg is imaged from the first imaging position.
- the user can visually confirm whether the 3D model MD of the object Tg recognized by the image processing device P1 is the correct 3D model MD based on the prediction template TP2 and the first imaging position.
- the image processing device P1 detects the object Tg from the captured image, identifies the orientation of the detected object Tg (the detected orientation), acquires the second imaging position of the camera CM relative to the 3D model MD based on the detected orientation, generates a detection template TP3 corresponding to the detected orientation based on the 3D model MD and the detected orientation, associates the 3D model MD, the second imaging position relative to the 3D model MD, and the detection template TP3, and outputs them to the display 13.
- This allows the image processing device P1 according to the first embodiment to visualize to the user that the detection template TP3 is a template (2D image) when the 3D model MD of the object Tg is imaged from the second imaging position. Based on the detection template TP3 and the second imaging position, the user can visually confirm whether the 3D model MD of the object Tg recognized by the image processing device P1 is the correct 3D model MD.
- the image processing device P1 according to the first embodiment also acquires specification information that specifies the 3D model MD, and generates a prediction template TP2 of the object Tg based on the 3D model MD that corresponds to the specification information.
- the image processing device P1 according to the first embodiment generates a template whose background (that is, the area other than the object Tg) is clean, i.e., contains little noise, enabling more accurate position identification in the feature matching of the second process.
- the image processing device P1 also generates a prediction template TP2 in which the position or posture of the object Tg that changes during the first process is corrected, thereby enabling real-time tracking of changes in posture or position of the object Tg captured by the camera CM.
- the image processing device P1 according to the second modification of the first embodiment also acquires designation information that designates the template TP1, extracts features of the object Tg from the captured image, and performs feature matching based on the extracted features of the object Tg and the features of the object Tg appearing in the template TP1 that corresponds to the designation information. This allows the image processing device P1 according to the second modification of the first embodiment to acquire the position of the object Tg appearing in the captured image by position fitting based on feature matching.
- the second process is executed on the images captured by the camera CM at a frame rate different from that of the first process.
- the image processing device P1 according to the first embodiment and the first and second modifications of the first embodiment can track changes in the posture or position of the object Tg captured by the camera CM in real time by combining advanced but slower image processing (the first process) with lightweight, high-speed image processing (the second process).
- the image processing device P1 includes a communication unit 10 (an example of an acquisition unit) that acquires an image of the object Tg captured by a movable camera CM capable of capturing an image of the object Tg, a first processing unit 110 that detects the position or posture of the object Tg from the captured image and generates a template of the object Tg, and a second processing unit 120 that performs template matching based on the captured image and the template of the object Tg multiple times while the first processing unit 110 is generating the template of the object Tg, and acquires information regarding the movement of the object Tg.
- the first processing unit 110 predicts the posture of the object Tg based on the detected position or posture of the object Tg and the information regarding the movement of the object Tg acquired multiple times by the second processing unit 120, and generates a template of the object Tg corresponding to the predicted posture of the object Tg.
- by generating the prediction template TP2 in the first process and tracking, based on the feature matching results and the position information of the object Tg output from the second processing unit 120, the change in posture of the object Tg that occurs during the first process, the image processing device P1 is able to generate a template (prediction template TP2) that is closest to the actual posture of the object Tg.
- the present disclosure is useful as an image processing method and an image processing device that register highly accurate templates of an object that can be used for template matching even in a situation where the orientation of the object relative to the imaging device changes as the imaging device moves.
Landscapes
- Image Analysis (AREA)
Abstract
This image processing method: executes a first process for acquiring a captured image of a target object, detecting the position or posture of the target object from the captured image, and generating a template of the target object; and executes, during the execution of the first process, a second process a plurality of times for performing template matching on the basis of a captured image and the template of the target object to obtain information regarding the movement of the target object. In the first process, a template of the target object corresponding to a predicted posture obtained by predicting the posture of the target object is generated on the basis of the detected position or posture of the target object and the information regarding the movement of the target object as obtained a plurality of times by the second process.
Description
本開示は、画像処理方法および画像処理装置に関する。
This disclosure relates to an image processing method and an image processing device.
工場内の生産工程では、ロボットハンド等のエンドエフェクタによりピッキングしようとする部品が正しい部品(例えば工業製品の生産に使用する部品)であるか否かを判定することがある。このような判定の際には、判定処理をできるだけ高速に行うことにより生産工程のタクトタイムを低下させないことが求められる。従来の判定処理として、例えば予め用意された部品のテンプレート(例えば画像)と工場内に設置されたカメラにより撮像された部品の画像とを比較してマッチング処理するテンプレートマッチング法が知られている。
In production processes in factories, it is sometimes necessary to determine whether a part being picked by an end effector such as a robot hand is the correct part (for example, a part to be used in the production of an industrial product). When making such a determination, it is necessary to perform the determination process as quickly as possible to avoid reducing the takt time of the production process. A known example of a conventional determination process is the template matching method, which compares a prepared template (for example, an image) of the part with an image of the part captured by a camera installed in the factory and performs a matching process.
特許文献1は、テンプレートマッチングにより物体の認識を行う物体認識装置で用いられるテンプレートのセットを作成するテンプレート作成装置を開示している。テンプレート作成装置は、一つの物体の異なる姿勢に対する複数の画像のそれぞれから複数のテンプレートを取得し、複数のテンプレートから選ばれる2つのテンプレート間の画像特徴の類似度を計算し、類似度に基づき複数のテンプレートを複数のグループに分けるクラスタリングを行う。テンプレート作成装置は、複数のグループのそれぞれについてグループ内の全てのテンプレートを1つの統合テンプレートへ統合し、グループごとに統合テンプレートを有したテンプレートセットを生成する。
Patent Document 1 discloses a template creation device that creates a set of templates used in an object recognition device that recognizes objects by template matching. The template creation device acquires multiple templates from multiple images of an object in different poses, calculates the similarity of image features between two templates selected from the multiple templates, and performs clustering to divide the multiple templates into multiple groups based on the similarity. For each of the multiple groups, the template creation device integrates all the templates in the group into a single integrated template, and generates a template set with an integrated template for each group.
特許文献1では、物体認識装置は、階層的なテンプレートセットを作成し、解像度の低いテンプレートセットによるラフな認識を行い、その結果を用いて解像度の高いテンプレートセットによる詳細な認識を行う、といった階層的探索を行う。ところが、解像度の低いテンプレートセットを用いた認識処理、解像度の高いテンプレートセットを用いた認識処理のように少なくとも二段階でマッチング処理を行う必要があり、物体認識装置の処理負荷の増大を免れない。
In Patent Document 1, the object recognition device performs a hierarchical search by creating a hierarchical template set, performing rough recognition using a low-resolution template set, and then using the results to perform detailed recognition using a high-resolution template set. However, matching processing must be performed in at least two stages, such as recognition processing using a low-resolution template set and recognition processing using a high-resolution template set, which inevitably increases the processing load on the object recognition device.
また、上述した工場内の生産工程においてエンドエフェクタによりピッキングしようとする部品が正しい部品であるかを判定するためにエンドエフェクタおよびカメラを移動させてピッキングしようとする部品をカメラで撮像する際に、特許文献1の技術を適用しようとすると次のような課題が生じる。具体的には、エンドエフェクタの移動に伴ってカメラも移動するとなると、エンドエフェクタの位置変化に伴ってカメラからの部品の見え方(言い換えると、部品の姿勢)が変化する。このため、テンプレートマッチングの際に、エンドエフェクタの位置(言うなれば、カメラの位置)を考慮しなければ、予め生成されたテンプレートセットを使っても効率的なテンプレートマッチングを行うことができず、テンプレートマッチングの信頼性も向上しない。
Furthermore, when attempting to apply the technology of Patent Document 1 to the above-mentioned production process in a factory, in which the end effector and the camera are moved to capture an image of the part to be picked by the camera in order to determine whether the part to be picked by the end effector is the correct part, the following problem arises. Specifically, if the camera moves as the end effector moves, the way the part appears to the camera (in other words, the posture of the part) changes as the position of the end effector changes. For this reason, unless the position of the end effector (the position of the camera, so to speak) is taken into consideration during template matching, efficient template matching cannot be performed even if a pre-generated template set is used, and the reliability of template matching does not improve.
本開示は、従来の事情に鑑みて案出され、撮像装置の移動に伴って撮像装置に対する対象物の姿勢が変化する場合でも対象物の高精度なテンプレートマッチングを実現する画像処理方法および画像処理装置を提供することを目的とする。
The present disclosure has been devised in consideration of the conventional circumstances, and aims to provide an image processing method and image processing device that achieves highly accurate template matching of an object even when the orientation of the object relative to an imaging device changes as the imaging device moves.
本開示は、移動可能であって、かつ、対象物を撮像可能なカメラとの間で通信可能な画像処理装置が行う画像処理方法であって、前記対象物が撮像された撮像画像を取得し、前記撮像画像から前記対象物の位置または姿勢の検出を行い、前記対象物のテンプレートを生成する第1処理を実行し、前記第1処理を実行中に、前記撮像画像と前記対象物のテンプレートとに基づくテンプレートマッチングを実行し、前記対象物の移動に関する情報を取得する第2処理を複数回実行し、前記第1処理は、検出された前記対象物の位置または姿勢と、前記第2処理で複数回取得された前記対象物の移動に関する情報とに基づいて、前記対象物の姿勢を予測し、前記対象物の予測姿勢に対応する前記対象物のテンプレートを生成する、画像処理方法を提供する。
The present disclosure provides an image processing method performed by an image processing device capable of communicating with a movable camera that can capture an image of an object. The method executes a first process of acquiring a captured image of the object, detecting the position or orientation of the object from the captured image, and generating a template of the object; while the first process is being executed, the method executes, multiple times, a second process of performing template matching based on the captured image and the template of the object and acquiring information related to the movement of the object; and the first process predicts the orientation of the object based on the detected position or orientation of the object and the information related to the movement of the object acquired multiple times in the second process, and generates a template of the object corresponding to the predicted orientation of the object.
また、本開示は、移動可能であって、かつ、対象物を撮像可能なカメラにより撮像された前記対象物の撮像画像を取得する取得部と、前記撮像画像から前記対象物の位置または姿勢の検出を行い、前記対象物のテンプレートを生成する第1処理部と、前記第1処理部による前記対象物のテンプレートの生成中に、複数回、前記撮像画像と前記対象物のテンプレートとに基づくテンプレートマッチングを実行し、前記対象物の移動に関する情報を取得する第2処理部と、を備え、前記第1処理部は、検出された前記対象物の位置または姿勢と、前記第2処理部で複数回取得された前記対象物の移動に関する情報とに基づいて、前記対象物の姿勢を予測し、前記対象物の予測姿勢に対応する前記対象物のテンプレートを生成する、画像処理装置を提供する。
The present disclosure also provides an image processing device that includes an acquisition unit that acquires an image of an object captured by a movable camera capable of capturing an image of the object, a first processing unit that detects a position or posture of the object from the captured image and generates a template of the object, and a second processing unit that performs template matching based on the captured image and the object template multiple times while the first processing unit is generating the template of the object, and acquires information related to the movement of the object, and the first processing unit predicts the posture of the object based on the detected position or posture of the object and the information related to the movement of the object acquired multiple times by the second processing unit, and generates a template of the object corresponding to the predicted posture of the object.
本開示によれば、撮像装置の移動に伴って撮像装置からの対象物の姿勢が可変となる状況下でも対象物の高精度なテンプレートマッチングを実現できる。
According to the present disclosure, highly accurate template matching of an object can be achieved even in a situation where the orientation of the object relative to the imaging device changes as the imaging device moves.
(本開示に至る経緯)
(Background to this disclosure)
従来、生産ライン等に対して移動不能な固定箇所に設置されたカメラから取り込まれた画像を用いて、ベルトコンベア上の物体のテンプレート作成装置を備える物体認識装置が開示されている。また、物体認識装置は、実際にマッチング処理を行う際に、グループごとの統合テンプレートに基づいて生成されたテンプレートセットがそのまま使用される。
Conventionally, an object recognition device has been disclosed that includes a template creation device for objects on a belt conveyor, using images captured by a camera installed at a fixed location that cannot move relative to the production line or the like. In addition, when the object recognition device actually performs matching processing, it uses, as is, a template set generated based on the integrated template for each group.
ところが、上述した工場内の生産工程においてエンドエフェクタによりピッキング対象である部品が正しい部品であるかを判定するために、エンドエフェクタおよびカメラを移動させてピッキング対象である部品をカメラで撮像する場合、特許文献1(日本国特開2016-207147号公報)の技術を適用しようとすると次のような課題が生じる。具体的には、エンドエフェクタの移動に伴ってカメラも移動するとなると、エンドエフェクタの位置変化に伴ってカメラからの部品の見え方(言い換えると、部品の姿勢)が変化する。このため、テンプレートマッチングの際に、エンドエフェクタの位置(言うなれば、カメラの位置)を考慮しなければ、予め生成されたテンプレートセットを使っても効率的なテンプレートマッチングを行うことができず、テンプレートマッチングの信頼性も向上しない。
However, when moving the end effector and camera to capture an image of the part to be picked by the camera in order to determine whether the part to be picked by the end effector in the above-mentioned production process in the factory is the correct part, the following problem occurs when attempting to apply the technology of Patent Document 1 (JP Patent Publication JP 2016-207147 A). Specifically, if the camera moves as the end effector moves, the way the part appears to the camera (in other words, the posture of the part) changes as the position of the end effector changes. For this reason, if the position of the end effector (the position of the camera, so to speak) is not taken into consideration during template matching, efficient template matching cannot be performed even if a pre-generated template set is used, and the reliability of template matching does not improve.
そこで、このような課題を解決において、マッチング処理速度に合わせてベルトコンベアを停止したり低速させたりする方法があるが、このような方法を採用した場合、部品のピッキング効率(言い換えると、生産効率)が低下する。したがって、ベルトコンベアおよびエンドエフェクタの動作を停止させずに部品のマッチング処理および部品のピッキング処理を実現可能にする物体認識装置が要望されていた。
In order to solve this problem, one method is to stop or slow down the belt conveyor in accordance with the matching process speed, but when this method is adopted, the part picking efficiency (in other words, production efficiency) decreases. Therefore, there has been a demand for an object recognition device that can realize part matching and part picking without stopping the operation of the belt conveyor and end effector.
そこで、以下の各実施の形態では、撮像装置の移動に伴って撮像装置からの対象物の姿勢が可変となる状況下でもテンプレートマッチングに使用可能な対象物の高精度なテンプレートを登録する画像処理方法および画像処理装置の例を説明する。
In the following embodiments, we therefore describe examples of image processing methods and image processing devices that register highly accurate templates of an object that can be used for template matching even in situations where the orientation of the object relative to the imaging device changes as the imaging device moves.
以下、添付図面を適宜参照しながら、本開示に係る画像処理方法および画像処理装置を具体的に開示した各実施の形態を詳細に説明する。但し、必要以上に詳細な説明は省略する場合がある。例えば、既によく知られた事項の詳細説明や実質的に同一の構成に対する重複説明を省略する場合がある。これは、以下の説明が不必要に冗長になるのを避け、当業者の理解を容易にするためである。なお、添付図面及び以下の説明は、当業者が本開示を十分に理解するために提供されるのであって、これらにより特許請求の範囲に記載の主題を限定することは意図されていない。
Below, with appropriate reference to the attached drawings, each embodiment that specifically discloses the image processing method and image processing device according to the present disclosure will be described in detail. However, more detailed explanation than necessary may be omitted. For example, detailed explanation of already well-known matters and duplicate explanation of substantially identical configurations may be omitted. This is to avoid the following explanation becoming unnecessarily redundant and to make it easier for those skilled in the art to understand. Note that the attached drawings and the following explanation are provided to enable those skilled in the art to fully understand the present disclosure, and are not intended to limit the subject matter described in the claims.
(実施の形態1)
(Embodiment 1)
実施の形態1では、例えば工場内の生産工程において、ロボットハンド等のエンドエフェクタによりピッキングしようとする部品(例えば工業製品の生産に使用する部品)を正しく認識するか否かをテンプレートマッチングによって判定するに際して、カメラにより撮像された撮像画像に物体認識処理を実行し、テンプレートマッチングに必要となるテンプレートを予測するユースケースを例示して説明する。本開示に係るテンプレート登録装置(例えば画像処理装置)は、対象物を撮像、かつ、移動により対象物に対する撮像位置が変更可能な撮像装置により撮像された対象物の入力画像に基づく情報(後述参照)と、対象物の入力画像に基づく情報(後述参照)とをテンプレートマッチングに用いるテンプレートとして、撮像装置の位置情報と対象物の入力画像に基づく情報(後述参照)とを関連付けて記憶部に登録する。
In the first embodiment, a use case is described in which, for example, in a production process in a factory, when template matching is used to determine whether a part to be picked by an end effector such as a robot hand (for example, a part used in the production of an industrial product) is correctly recognized, object recognition processing is performed on an image captured by a camera and the template required for template matching is predicted. A template registration device (for example, an image processing device) according to the present disclosure associates position information of an imaging device, which images the object and whose imaging position relative to the object can be changed by moving, with information based on an input image of the object captured by the imaging device (see below), and registers them in a storage unit as a template to be used for template matching.
図1は、ピッキングシステムの構成例を説明する図である。図2は、ピッキングシステムの内部構成例を示すブロック図である。図2に示すように、ピッキングシステム100は、アクチュエータACと、カメラCMと、画像処理装置P1と、ディスプレイ13と、操作デバイス14とを含む。アクチュエータACと画像処理装置P1との間、カメラCMと画像処理装置P1との間、画像処理装置P1と操作デバイス14との間は、それぞれデータ信号の入出力(送受信)が可能となるように接続されている。
FIG. 1 is a diagram for explaining an example of the configuration of a picking system. FIG. 2 is a block diagram showing an example of the internal configuration of a picking system. As shown in FIG. 2, the picking system 100 includes an actuator AC, a camera CM, an image processing device P1, a display 13, and an operation device 14. The actuator AC and the image processing device P1, the camera CM and the image processing device P1, and the image processing device P1 and the operation device 14 are connected so as to enable input/output (transmission/reception) of data signals.
カメラCMと対象物Tgとの間の位置関係について、図1を参照して説明する。なお、図1の説明は、実施の形態1だけでなく後述する実施の形態1の変形例にも同様に適用可能である。
The positional relationship between the camera CM and the target object Tg will be described with reference to FIG. 1. Note that the description of FIG. 1 is applicable not only to the first embodiment but also to a modified version of the first embodiment described below.
以下の説明において、対象物Tgは、工場内に配備されるピッキングシステム100のエンドエフェクタEFによりピッキングされる対象物であり、例えば工業部品、工業製品等である。工業部品であれば、例えばピッキングされた後に完成品を組み立てるために別のレーン(生産ライン)に移動される。工業製品であれば、例えばピッキングされた後に段ボール等の箱に収納される。なお、対象物Tgの種類は、上述した工業部品、工業製品に限定されないことは言うまでもない。
In the following description, the object Tg is an object that is picked by the end effector EF of the picking system 100 deployed in a factory, and is, for example, an industrial part, an industrial product, etc. If it is an industrial part, for example, after being picked, it is moved to another lane (production line) for assembling a finished product. If it is an industrial product, for example, after being picked, it is stored in a box such as a cardboard box. It goes without saying that the type of object Tg is not limited to the industrial parts and industrial products described above.
図1に示すように、アクチュエータACは、カメラCMを3次元的に移動可能に制御することにより、ベルトコンベア上を移動する対象物Tgと、対象物Tgのピッキングを行うエンドエフェクタEFおよびエンドエフェクタEFに固定設置されたカメラCMとの間の位置関係を変更可能に制御する。
As shown in Figure 1, the actuator AC controls the camera CM to be movable three-dimensionally, thereby changing the positional relationship between the object Tg moving on the belt conveyor, the end effector EF that picks the object Tg, and the camera CM fixed to the end effector EF.
アクチュエータACは、複数軸によりエンドエフェクタEFと、エンドエフェクタEFに備えられたカメラCMとをそれぞれ3次元で移動可能に制御する。つまり、アクチュエータACは、カメラCMの3次元位置(座標)の認識、維持、あるいは変更を制御可能である。
The actuator AC controls the end effector EF and the camera CM attached to the end effector EF so that they can be moved in three dimensions using multiple axes. In other words, the actuator AC can control the recognition, maintenance, or change of the three-dimensional position (coordinates) of the camera CM.
エンドエフェクタEFは、例えばピッキングシステム100に対応して配備されたロボットアームの先端部に設けられているロボットハンドであり、アクチュエータACによる制御で対象物Tgに接近し、対象物Tgをピッキングする。
The end effector EF is, for example, a robot hand provided at the tip of a robot arm deployed in correspondence with the picking system 100, and approaches the target object Tg under the control of the actuator AC, and picks up the target object Tg.
カメラCMは、エンドエフェクタEFの近傍に配置され、アクチュエータACの制御によってエンドエフェクタEFと一体的に移動して、対象物Tgを撮像する。カメラCMは、対象物Tgを所定のフレームレート(例えば、1000frame per rate(以降、「fps」と表記))で撮像し、この撮像の度に得られた対象物Tgの撮像画像(入力画像の一例)を都度、画像処理装置P1に送信する。
The camera CM is placed near the end effector EF and moves together with the end effector EF under the control of the actuator AC to capture an image of the object Tg. The camera CM captures the object Tg at a predetermined frame rate (e.g., 1,000 frames per second (hereinafter referred to as "fps")) and transmits the captured image of the object Tg (an example of an input image) obtained at each capture to the image processing device P1.
画像処理装置P1は、カメラCMから送信された対象物Tgの撮像画像を取得する。画像処理装置P1は、カメラCMから送信された対象物Tgの撮像画像を用いて、対象物Tgの姿勢を検出し、対象物Tgの姿勢を検出する間に、第2処理により得られる対象物Tgの移動に関する情報に基づいて、対象物Tgの動きを観察し、検出された対象物Tgの姿勢と、対象物Tgの動き情報とに基づいて、対象物Tgの姿勢検出処理が終了するタイミングにおける対象物Tgの姿勢を予測し、予測された対象物Tgの姿勢(検出姿勢の一例)に対応する検出テンプレートを生成する第1処理(図4、図6参照)と、対象物Tgの特徴量を抽出し、抽出された特徴量に基づく特徴マッチング(言い換えると、テンプレートマッチング)して対象物Tgの位置情報を取得し、取得された対象物Tgの位置情報をアクチュエータACに常時あるいは周期的に送信(フィードバック)する第2処理(図4、図5参照)とを実行可能なコンピュータにより構成される。
The image processing device P1 acquires the captured image of the object Tg transmitted from the camera CM. The image processing device P1 is configured by a computer capable of executing a first process (see FIGS. 4 and 6) and a second process (see FIGS. 4 and 5). In the first process, the device detects the posture of the object Tg using the captured image transmitted from the camera CM, observes the movement of the object Tg based on information on the movement of the object Tg obtained by the second process while the posture is being detected, predicts, based on the detected posture of the object Tg and the movement information of the object Tg, the posture of the object Tg at the timing when the posture detection process ends, and generates a detection template corresponding to the predicted posture of the object Tg (an example of a detected posture). In the second process, the device extracts features of the object Tg, performs feature matching (in other words, template matching) based on the extracted features to acquire position information of the object Tg, and constantly or periodically transmits (feeds back) the acquired position information of the object Tg to the actuator AC.
画像処理装置P1は、例えばPersonal Computer(以降、「PC」と表記)でもよいし、上述した第1処理および第2処理のそれぞれの実行に特化した専用のハードウェア機器でもよい。画像処理装置P1は、上述した第1処理および第2処理のそれぞれを実行することにより、エンドエフェクタEFによりピッキングされる対象物Tgの位置および姿勢の認識処理を実現する。画像処理装置P1は、通信部10と、プロセッサ11と、メモリ12と、3DモデルデータベースDBとを含む。
The image processing device P1 may be, for example, a personal computer (hereinafter referred to as "PC"), or may be a dedicated hardware device specialized for executing each of the above-mentioned first and second processes. The image processing device P1 realizes recognition processing of the position and orientation of the target object Tg picked by the end effector EF by executing each of the above-mentioned first and second processes. The image processing device P1 includes a communication unit 10, a processor 11, a memory 12, and a 3D model database DB.
画像処理装置P1は、ユーザ操作を受け付け、ユーザ操作に基づいて後述するアイコンPP1の位置および姿勢から見た対象物TgのテンプレートTP1(図7参照)と、第1処理により予測された対象物Tgの予測姿勢に対応する対象物Tgの予測テンプレートTP2(図7参照)と、第1処理により検出された対象物Tgの姿勢に対応する対象物Tgの検出テンプレートTP3(図7参照)とを含むテンプレート表示画面SC(図7参照)を生成してディスプレイ13に表示する。
The image processing device P1 accepts a user operation, and based on the user operation, generates a template display screen SC (see FIG. 7) including a template TP1 (see FIG. 7) of the object Tg viewed from the position and posture of the icon PP1 described below, a prediction template TP2 (see FIG. 7) of the object Tg corresponding to the predicted posture of the object Tg predicted by the first process, and a detection template TP3 (see FIG. 7) of the object Tg corresponding to the posture of the object Tg detected by the first process, and displays it on the display 13.
通信部10(取得部の一例)は、アクチュエータAC、カメラCM、ディスプレイ13、および操作デバイス14との間でそれぞれデータ通信可能に接続され、データの送受信を実行する。通信部10は、カメラCMから送信された撮像画像と、操作デバイス14から送信された制御指令とをそれぞれプロセッサ11に出力する。通信部10は、プロセッサ11から出力されたテンプレート表示画面SC(図7参照)をディスプレイ13に送信する。
The communication unit 10 (an example of an acquisition unit) is connected to the actuator AC, the camera CM, the display 13, and the operation device 14 so that data can be communicated between them, and transmits and receives data. The communication unit 10 outputs the captured image transmitted from the camera CM and the control command transmitted from the operation device 14 to the processor 11. The communication unit 10 transmits the template display screen SC (see FIG. 7) output from the processor 11 to the display 13.
プロセッサ11は、例えばCentral Processing Unit(CPU)またはField Programmable Gate Array(FPGA)を用いて構成されて、メモリ12と協働して、各種の処理および制御を行う。具体的には、プロセッサ11はメモリ12に保持されたプログラムおよびデータを参照し、そのプログラムを実行することにより、第1処理部110および第2処理部120のそれぞれの機能を実現する。
The processor 11 is configured using, for example, a Central Processing Unit (CPU) or a Field Programmable Gate Array (FPGA), and performs various processes and controls in cooperation with the memory 12. Specifically, the processor 11 references the programs and data stored in the memory 12 and executes the programs to realize the respective functions of the first processing unit 110 and the second processing unit 120.
第1処理部110は、対象物Tgの検出処理を実行し、対象物Tgの予測テンプレートTP2を生成する第1処理(図4、図6参照)を実行する。第1処理は、Deep Learning(深層学習)を用いた高度な画像処理であって、第2処理の実行に要する時間よりも長い時間(例えば、17ms)を要する処理である。ここで、第1処理を実行する間に、対象物Tgの位置および姿勢が変化し続ける。よって、第1処理部110は、第1処理が終了するタイミングで出力された検出テンプレートTP3の有効性を保つため、第1処理を実行している間に複数回実行される第2処理部120により実行される第2処理の結果をそれぞれ用いて、対象物Tgのテンプレート候補を予測する。これにより、第1処理部110は、第1処理の実行中に変化する対象物Tgの姿勢変化にリアルタイムに追従し、第2処理部120により実行される特徴マッチングにより適したテンプレート候補を予測することができる。
The first processing unit 110 executes a detection process for the object Tg and executes a first process (see FIG. 4 and FIG. 6) for generating a prediction template TP2 for the object Tg. The first process is an advanced image process using deep learning, and takes a longer time (e.g., 17 ms) than the time required for executing the second process. Here, the position and posture of the object Tg continue to change while the first process is being executed. Therefore, in order to maintain the validity of the detection template TP3 output at the timing when the first process ends, the first processing unit 110 predicts template candidates for the object Tg using the results of the second process executed by the second processing unit 120, which is executed multiple times while the first process is being executed. As a result, the first processing unit 110 can track the changes in posture of the object Tg that change during the execution of the first process in real time, and predict template candidates that are more suitable for the feature matching executed by the second processing unit 120.
第2処理部120は、第1処理によって得られた予測テンプレートTP2と、カメラCMにより撮像された撮像画像とを用いて特徴マッチングを実行し、対象物Tgの位置情報を推測し、推測された対象物Tgの位置情報をアクチュエータACに送信する第2処理(図4、図5参照)を実行する。第2処理は、特徴マッチングを用いた簡易な画像処理であって、第1処理の実行に要する時間よりも短い時間(例えば、1ms)を要する処理である。第2処理部120は、第1処理部110が第1処理を1回実行する間に、第2処理を複数回実行する。
The second processing unit 120 executes a second process (see Figures 4 and 5) in which feature matching is performed using the prediction template TP2 obtained by the first process and the captured image captured by the camera CM, the position information of the object Tg is estimated, and the estimated position information of the object Tg is transmitted to the actuator AC. The second process is a simple image process using feature matching, and requires a shorter time (e.g., 1 ms) than the time required to execute the first process. The second processing unit 120 executes the second process multiple times while the first processing unit 110 executes the first process once.
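The timing relationship described above (one slow first process running concurrently with many fast second-process iterations, with the prediction template fed back between them) can be pictured with a minimal Python sketch. This is an illustrative skeleton only, not the implementation of this disclosure: the 17 ms and 1 ms delays stand in for the actual processing times, and run_first_process, run_second_process, get_frame, and send_to_actuator are hypothetical placeholders.

```python
import threading
import time

latest_template = None            # prediction template shared between the two processes
template_lock = threading.Lock()
stop_event = threading.Event()

def run_first_process(frame):
    """Hypothetical slow path (~17 ms): detect the object and build a prediction template."""
    time.sleep(0.017)             # stands in for deep-learning detection + 3D matching
    return {"pose": (0.0, 0.0, 0.0), "rendered_view": frame}

def run_second_process(frame, template):
    """Hypothetical fast path (~1 ms): feature matching against the current template."""
    time.sleep(0.001)             # stands in for feature extraction + matching + fitting
    return {"position": (0.0, 0.0, 0.0)}

def first_process_loop(get_frame):
    global latest_template
    while not stop_event.is_set():
        template = run_first_process(get_frame())
        with template_lock:       # feed the new template back to the second process
            latest_template = template

def second_process_loop(get_frame, send_to_actuator):
    while not stop_event.is_set():
        with template_lock:
            template = latest_template
        if template is None:      # wait until the first process has produced a template
            continue
        result = run_second_process(get_frame(), template)
        send_to_actuator(result["position"])

if __name__ == "__main__":
    get_frame = lambda: "frame"              # placeholder for images from the camera CM
    send_to_actuator = lambda pos: None      # placeholder for feedback to the actuator AC
    threads = [
        threading.Thread(target=first_process_loop, args=(get_frame,)),
        threading.Thread(target=second_process_loop, args=(get_frame, send_to_actuator)),
    ]
    for t in threads:
        t.start()
    time.sleep(0.1)               # let the two loops run concurrently for a moment
    stop_event.set()
    for t in threads:
        t.join()
```

As in the behaviour described for FIG. 4, the fast loop keeps matching against the most recently received template until the slow loop feeds a new one back.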
メモリ12は、例えばプロセッサ11の各処理を実行する際に用いられるワークメモリとしてのRandom Access Memory(RAM)と、プロセッサ11の動作を規定したプログラムおよびデータを格納するRead Only Memory(ROM)とを有する。RAMには、プロセッサ11により生成あるいは取得されたデータもしくは情報が一時的に保存される。ROMには、プロセッサ11の動作を規定するプログラムが書き込まれている。
Memory 12 has, for example, Random Access Memory (RAM) as a working memory used when executing each process of processor 11, and Read Only Memory (ROM) that stores programs and data that define the operation of processor 11. Data or information generated or acquired by processor 11 is temporarily stored in RAM. Programs that define the operation of processor 11 are written in ROM.
3DモデルデータベースDB(データベースの一例)は、例えばフラッシュメモリ、Hard Disk Drive(HDD)あるいはSolid State Drive(SSD)である。3DモデルデータベースDBは、ピッキング対象である少なくとも1つの対象物Tgの3Dモデルのデータと、対象物Tgに関する情報(例えば、対象物ごとの名称、識別番号等)とを対象物Tgごとに格納(登録)する。
The 3D model database DB (an example of a database) is, for example, a flash memory, a hard disk drive (HDD), or a solid state drive (SSD). The 3D model database DB stores (registers) 3D model data of at least one object Tg to be picked, and information about the object Tg (for example, the name and identification number of each object) for each object Tg.
ディスプレイ13は、画像処理装置P1により生成されたテンプレート表示画面SC(図7参照)を出力(表示)するデバイスであり、例えばLiquid Crystal Display(LCD)あるいは有機Electroluminescence(EL)デバイスにより構成される。
The display 13 is a device that outputs (displays) the template display screen SC (see FIG. 7) generated by the image processing device P1, and is configured, for example, by a Liquid Crystal Display (LCD) or an organic electroluminescence (EL) device.
操作デバイス14は、ユーザ操作の入力を検知するインターフェースであり、例えばマウス、キーボードあるいはタッチパネルにより構成される。操作デバイス14は、ユーザ操作を受け付けると、ユーザ操作に基づく電気信号を生成して画像処理装置P1に送信する。
The operation device 14 is an interface that detects user operation input, and is composed of, for example, a mouse, a keyboard, or a touch panel. When the operation device 14 receives a user operation, it generates an electrical signal based on the user operation and transmits it to the image processing device P1.
次に、図3を参照して、第1処理部110および第2処理部120のそれぞれにより実現される機能について説明する。図3は、実施の形態1における第1処理部110および第2処理部120の機能を説明する機能ブロック図である。
Next, the functions realized by the first processing unit 110 and the second processing unit 120 will be described with reference to FIG. 3. FIG. 3 is a functional block diagram illustrating the functions of the first processing unit 110 and the second processing unit 120 in the first embodiment.
第1処理部110は、物体検出部111と、3Dモデル選択部112と、第1の時間予測部113と、3Dモデル合成部117とを含む。
The first processing unit 110 includes an object detection unit 111, a 3D model selection unit 112, a first time prediction unit 113, and a 3D model synthesis unit 117.
物体検出部111は、Deep Learning(深層学習)を用いて、カメラCMから送信された撮像画像に画像認識処理を実行し、撮像画像から物体(対象物Tg)を検出する。物体検出部111は、検出された物体(対象物Tg)の情報を3Dモデル選択部112および第1の時間予測部113にそれぞれ出力する。なお、物体検出部111が用いるDeep Learning(深層学習)は、例えば、Convolutional Neural Network(CNN)等の対象物Tgの検出に適した任意の学習手法が用いられてよい。
The object detection unit 111 uses Deep Learning to perform image recognition processing on the captured image transmitted from the camera CM and detects an object (target object Tg) from the captured image. The object detection unit 111 outputs information on the detected object (target object Tg) to the 3D model selection unit 112 and the first time prediction unit 113. Note that the Deep Learning used by the object detection unit 111 may be any learning method suitable for detecting the target object Tg, such as a Convolutional Neural Network (CNN).
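As a rough illustration of the kind of deep-learning detector the object detection unit 111 might correspond to, the sketch below runs a generic Faster R-CNN from torchvision on a dummy frame. The choice of model, the weights keyword of recent torchvision versions, the untrained weights, and the 0.5 score threshold are all assumptions made for this example; the disclosure itself only states that a learning method suited to detecting the object Tg may be used.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# weights=None / weights_backbone=None keep the sketch self-contained; in practice a
# network trained (or fine-tuned) on the target parts would be loaded here instead.
model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=2)
model.eval()

frame = torch.rand(3, 480, 640)  # placeholder for one captured image from the camera CM

with torch.no_grad():
    detections = model([frame])[0]  # dict with 'boxes', 'labels' and 'scores'

keep = detections["scores"] > 0.5        # keep only confident detections of the object Tg
print(detections["boxes"][keep].shape)   # (num_detections, 4): (x1, y1, x2, y2) in pixels
```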
3Dモデル選択部112は、3DモデルデータベースDBに登録された少なくとも1つの対象物Tgの3Dモデルのうち、ユーザ操作により指定された物体(対象物Tg)の情報に対応する3DモデルMDを選出する。3Dモデル選択部112は、選出された対象物Tgの3DモデルMDを3Dマッチング部114および3Dモデル合成部117のそれぞれに出力する。
The 3D model selection unit 112 selects a 3D model MD that corresponds to information about an object (object Tg) specified by a user operation from among the 3D models of at least one object Tg registered in the 3D model database DB. The 3D model selection unit 112 outputs the selected 3D model MD of the object Tg to each of the 3D matching unit 114 and the 3D model synthesis unit 117.
第1の時間予測部113は、第1処理により対象物Tgの検出処理を実行している間に、第2処理により得られた対象物Tgの動き情報に基づいて、対象物Tgの位置および姿勢の変化を追跡(観察)し、第1処理が終了するタイミングにおける対象物Tgの姿勢に対応する予測テンプレートTP2を生成する。第1の時間予測部113は、3Dマッチング部114と、テンプレート予測部115と、予測モデル更新部116とを含む。
The first time prediction unit 113 tracks (observes) changes in the position and posture of the object Tg based on the movement information of the object Tg obtained by the second process while the detection process of the object Tg is being performed by the first process, and generates a prediction template TP2 corresponding to the posture of the object Tg at the time when the first process ends. The first time prediction unit 113 includes a 3D matching unit 114, a template prediction unit 115, and a prediction model update unit 116.
3Dマッチング部114は、物体検出部111から出力された物体(対象物Tg)の情報と、3Dモデル選択部112から出力された対象物Tgの3DモデルMDとを3次元空間でマッチングする3Dマッチングを実行し、カメラCMにより撮像された撮像画像に写る対象物Tgの姿勢(以降、「検出姿勢」と表記)を認識する。3Dマッチング部114は、対象物Tgの検出姿勢に関する情報と、3Dマッチングに用いられた対象物Tgの3DモデルMDとを対応付けて、テンプレート予測部115および3Dモデル合成部117のそれぞれに出力する。
The 3D matching unit 114 executes 3D matching in which the information about the object (target object Tg) output from the object detection unit 111 and the 3D model MD of the target object Tg output from the 3D model selection unit 112 are matched in a three-dimensional space, and recognizes the posture of the target object Tg that appears in the captured image captured by the camera CM (hereinafter referred to as the "detected posture"). The 3D matching unit 114 associates information about the detected posture of the target object Tg with the 3D model MD of the target object Tg used in the 3D matching, and outputs the information to the template prediction unit 115 and the 3D model synthesis unit 117.
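The pose recovery that 3D matching performs can be sketched, under simplifying assumptions, as a perspective-n-point (PnP) solve over 2D-3D correspondences between the captured image and the 3D model MD. The correspondences, camera intrinsics, and point coordinates below are invented for the example; the disclosure does not prescribe PnP as the matching method.

```python
import numpy as np
import cv2

# Hypothetical 3D points on the model MD (object coordinates, e.g. millimetres)
# and the matching 2D pixel locations detected in the captured image.
model_points = np.array([[0, 0, 0], [50, 0, 0], [50, 30, 0], [0, 30, 0],
                         [0, 0, 20], [50, 0, 20]], dtype=np.float64)
image_points = np.array([[320, 240], [400, 238], [402, 300], [318, 302],
                         [322, 200], [398, 198]], dtype=np.float64)

# Assumed pinhole intrinsics of the camera CM (focal length and principal point in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume no lens distortion for the sketch

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation: orientation of Tg relative to the camera
    print("detected posture (rotation):\n", R)
    print("detected position (translation):", tvec.ravel())
```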
テンプレート予測部115は、3Dマッチング部114から送信された対象物Tgの検出姿勢に関する情報および対象物Tgの3DモデルMDと、予測モデル更新部116から出力された予測モデルとに基づいて、第1処理が終了するタイミングにおける対象物Tgの姿勢に対応する予測テンプレートTP2を生成する。なお、ここでいう予測モデルは、対象物Tgの姿勢が予測された予測モデルであって、複数回実行された第2処理により得られた対象物Tgの移動に関する情報に基づく数学的なモデルである。
The template prediction unit 115 generates a prediction template TP2 corresponding to the posture of the object Tg at the time when the first process ends, based on information on the detected posture of the object Tg and the 3D model MD of the object Tg sent from the 3D matching unit 114, and the prediction model output from the prediction model update unit 116. Note that the prediction model here is a prediction model in which the posture of the object Tg is predicted, and is a mathematical model based on information on the movement of the object Tg obtained by the second process that has been executed multiple times.
テンプレート予測部115は、予測結果に基づいて、次に取得される撮像画像が撮像される時のカメラCMの姿勢であって、対象物Tgの予測姿勢(角度)から3DモデルMDを見た場合に得られる2D画像(以降、「予測テンプレート」と表記)を生成する。テンプレート予測部115は、対象物Tgの姿勢に関する情報と、生成された対象物Tgの予測テンプレートTP2(図7参照)とを対応付けて、3Dモデル合成部117と、テンプレート更新部121とにそれぞれ出力する。
Based on the prediction result, the template prediction unit 115 generates a 2D image (hereinafter referred to as a "prediction template") obtained by viewing the 3D model MD from the predicted posture (angle) of the object Tg, that is, from the posture the camera CM will have when the next captured image is taken. The template prediction unit 115 associates information about the posture of the object Tg with the generated prediction template TP2 of the object Tg (see FIG. 7), and outputs them to the 3D model synthesis unit 117 and the template update unit 121, respectively.
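One way to picture how a predicted posture could be turned into a 2D prediction template is to project points of the 3D model MD through a pinhole camera placed at the predicted viewpoint. The sparse point cloud, intrinsics, and predicted pose below are illustrative assumptions, and a real implementation would render the full model rather than scattered points:

```python
import numpy as np
import cv2

def render_prediction_template(model_points, rvec, tvec, K, size=(480, 640)):
    """Project 3D model points under the predicted pose and rasterise them
    into a binary 2D image that can serve as a (very rough) prediction template."""
    projected, _ = cv2.projectPoints(model_points, rvec, tvec, K, np.zeros(5))
    template = np.zeros(size, dtype=np.uint8)
    for (u, v) in projected.reshape(-1, 2):
        u, v = int(round(u)), int(round(v))
        if 0 <= v < size[0] and 0 <= u < size[1]:
            cv2.circle(template, (u, v), 2, 255, -1)
    return template

# Hypothetical sparse point cloud of the model MD and a predicted pose of Tg.
model_points = np.random.rand(200, 3) * 50.0   # object points in millimetres
rvec = np.array([0.1, -0.2, 0.05])             # predicted rotation (Rodrigues vector)
tvec = np.array([0.0, 0.0, 400.0])             # predicted translation: 400 mm in front of the camera
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

template_tp2 = render_prediction_template(model_points, rvec, tvec, K)
print(template_tp2.shape, int(template_tp2.max()))
```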
予測モデル更新部116は、特徴マッチング部123から出力された特徴マッチング結果と、位置フィッティング部124から出力された対象物Tgの動きとに基づいて、第1処理により対象物Tgの検出処理を実行している間に変化する対象物Tgの姿勢の変化を予測するための予測モデルを更新する。予測モデル更新部116は、更新後の予測モデルをテンプレート予測部115に出力する。
The prediction model update unit 116 updates the prediction model for predicting changes in the posture of the object Tg that occur while the detection process for the object Tg is being performed by the first process, based on the feature matching result output from the feature matching unit 123 and the movement of the object Tg output from the position fitting unit 124. The prediction model update unit 116 outputs the updated prediction model to the template prediction unit 115.
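The prediction model is described only as a mathematical model built from the movement information returned by the repeated second process. A minimal, purely illustrative version is a constant-velocity model whose velocity estimate is refreshed each time the second process reports a new fitted position; the class name, the smoothing factor, and the position-only state are assumptions of this sketch:

```python
import numpy as np

class ConstantVelocityPredictor:
    """Toy prediction model: estimates a velocity from successive positions reported
    by the second process and extrapolates the pose to a future time."""

    def __init__(self, smoothing=0.5):
        self.smoothing = smoothing      # how strongly new observations update the velocity
        self.position = None            # last fitted position (x, y, z)
        self.velocity = np.zeros(3)
        self.last_time = None

    def update(self, position, timestamp):
        """Called once per second-process result (position fitting output)."""
        position = np.asarray(position, dtype=float)
        if self.position is not None:
            dt = timestamp - self.last_time
            if dt > 0:
                observed_v = (position - self.position) / dt
                self.velocity = (1 - self.smoothing) * self.velocity + self.smoothing * observed_v
        self.position = position
        self.last_time = timestamp

    def predict(self, horizon):
        """Extrapolate `horizon` seconds ahead, e.g. to when the first process finishes."""
        if self.position is None:
            return None
        return self.position + self.velocity * horizon

# Example: three fast second-process updates, then a prediction 17 ms ahead.
model = ConstantVelocityPredictor()
for t, pos in [(0.000, (100.0, 50.0, 0.0)), (0.001, (100.4, 50.0, 0.0)), (0.002, (100.8, 50.0, 0.0))]:
    model.update(pos, t)
print(model.predict(horizon=0.017))
```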
3Dモデル合成部117は、3Dマッチング部114から出力された対象物Tgの検出姿勢と、検出テンプレートTP3(図7参照)とを取得し、対象物Tgの検出姿勢に基づいて、撮像画像に写る対象物Tgが撮像された角度を示すアイコンPP3を生成する。3Dモデル合成部117は、テンプレート予測部115から出力された対象物Tgの予測姿勢と、予測テンプレートTP2とを取得し、対象物Tgの予測姿勢に基づいて、次の撮像画像で撮像される対象物Tgの角度を示すアイコンPP2を生成する。
The 3D model synthesis unit 117 acquires the detected orientation of the object Tg output from the 3D matching unit 114 and the detection template TP3 (see FIG. 7), and generates an icon PP3 indicating the angle at which the object Tg in the captured image was captured based on the detected orientation of the object Tg. The 3D model synthesis unit 117 acquires the predicted orientation of the object Tg output from the template prediction unit 115 and the prediction template TP2, and generates an icon PP2 indicating the angle of the object Tg to be captured in the next captured image based on the predicted orientation of the object Tg.
3Dモデル合成部117は、対象物Tgの3DモデルMDと、予測テンプレートTP2と、検出テンプレートTP3と、アイコンPP1,PP2,PP3のそれぞれとに基づいて、テンプレート表示画面SCを生成し、ディスプレイ13に送信する。
The 3D model synthesis unit 117 generates a template display screen SC based on the 3D model MD of the object Tg, the prediction template TP2, the detection template TP3, and each of the icons PP1, PP2, and PP3, and transmits it to the display 13.
第2処理部120は、テンプレート更新部121と、特徴抽出部122と、特徴マッチング部123と、位置フィッティング部124と、第2の時間予測部125と、制御部126とを含む。
The second processing unit 120 includes a template update unit 121, a feature extraction unit 122, a feature matching unit 123, a position fitting unit 124, a second time prediction unit 125, and a control unit 126.
テンプレート更新部121は、第1処理部110のテンプレート予測部115から出力された予測テンプレートTP2を取得し、特徴マッチングに使用される対象物Tgのテンプレート(2Dデータ)を、取得された予測テンプレートTP2に更新する。
The template update unit 121 acquires the prediction template TP2 output from the template prediction unit 115 of the first processing unit 110, and updates the template (2D data) of the object Tg used for feature matching to the acquired prediction template TP2.
特徴抽出部122は、カメラCMから送信された撮像画像から対象物Tgの特徴量を抽出する。特徴抽出部122は、抽出された対象物Tgの特徴量を特徴マッチング部123に出力する。
The feature extraction unit 122 extracts the feature amount of the object Tg from the captured image transmitted from the camera CM. The feature extraction unit 122 outputs the extracted feature amount of the object Tg to the feature matching unit 123.
特徴マッチング部123は、テンプレート更新部121から出力された予測テンプレートTP2に含まれる対象物Tgの特徴量と、特徴抽出部122から出力された対象物Tgの特徴量とをマッチングする。特徴マッチング部123は、マッチング結果を予測モデル更新部と位置フィッティング部124とにそれぞれ出力する。
The feature matching unit 123 matches the feature amount of the object Tg included in the prediction template TP2 output from the template update unit 121 against the feature amount of the object Tg output from the feature extraction unit 122. The feature matching unit 123 outputs the matching result to the prediction model update unit 116 and the position fitting unit 124.
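The feature extraction and matching steps can be illustrated with standard descriptor matching from OpenCV. ORB features and a brute-force Hamming matcher are choices made for this sketch only; the disclosure does not specify a particular feature type or matcher.

```python
import numpy as np
import cv2

# Placeholders: in the system these would be the captured image from the camera CM
# and the current prediction template TP2 (a 2D image of the model at the predicted pose).
captured = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
template = np.random.randint(0, 256, (200, 200), dtype=np.uint8)

orb = cv2.ORB_create(nfeatures=500)
kp_img, des_img = orb.detectAndCompute(captured, None)   # features of the captured image
kp_tpl, des_tpl = orb.detectAndCompute(template, None)   # features of the template

matches = []
if des_img is not None and des_tpl is not None:
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_tpl, des_img), key=lambda m: m.distance)

print(f"{len(matches)} tentative matches between template and captured image")
```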
位置フィッティング部124は、特徴マッチング部123から出力されたマッチング結果を取得する。位置フィッティング部124は、特徴マッチングの結果に基づいて、撮像画像に写る対象物Tgの位置情報をフィッティングする。
The position fitting unit 124 acquires the matching results output from the feature matching unit 123. The position fitting unit 124 fits the position information of the object Tg appearing in the captured image based on the result of the feature matching.
位置フィッティング部124は、位置フィッティング後の対象物Tgの位置情報を予測モデル更新部116および第2の時間予測部125にそれぞれ出力する。
The position fitting unit 124 outputs the position information of the object Tg after position fitting to the prediction model update unit 116 and the second time prediction unit 125.
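Position fitting from the match result can likewise be sketched as estimating a 2D similarity transform between matched keypoints with RANSAC. The transform model, the minimum-match threshold, and the assumed 200x200 template size are illustrative, and the function reuses the hypothetical matches, kp_tpl, and kp_img names from the matching sketch above.

```python
import numpy as np
import cv2

def fit_position(matches, kp_tpl, kp_img, min_matches=10):
    """Estimate where the template lies in the captured image from matched keypoints.
    Returns the estimated 2D centre of the object, or None if fitting is not possible."""
    if len(matches) < min_matches:
        return None
    src = np.float32([kp_tpl[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_img[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    if M is None:
        return None
    # Map the template centre through the estimated transform to get the object position.
    h, w = 200, 200                       # template size assumed by this sketch
    centre = np.array([w / 2.0, h / 2.0, 1.0])
    x, y = M @ centre
    return float(x), float(y)

# position = fit_position(matches, kp_tpl, kp_img)   # using the variables from the previous sketch
```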
第2の時間予測部125は、位置フィッティング部124から出力された対象物Tgの位置情報に基づいて、第2処理を実行する間の対象物Tgの動きを予測して、第2処理が終了するタイミングの対象物Tgの位置を予測する。第2の時間予測部125は、予測された対象物Tgの予測位置の情報を制御部126に出力する。
The second time prediction unit 125 predicts the movement of the object Tg while the second process is being performed based on the position information of the object Tg output from the position fitting unit 124, and predicts the position of the object Tg at the time when the second process ends. The second time prediction unit 125 outputs information on the predicted position of the object Tg to the control unit 126.
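The second time prediction amounts to compensating for the latency of the second process itself: the position sent to the actuator is the position expected when the result becomes available, not the position at capture time. A tiny sketch, with the frame interval, latency, and positions invented for the example:

```python
def predict_position_at_output(prev_pos, curr_pos, frame_interval, processing_latency):
    """Extrapolate the fitted position forward by the second-process latency,
    assuming the motion between the last two frames continues unchanged."""
    velocity = [(c - p) / frame_interval for p, c in zip(prev_pos, curr_pos)]
    return [c + v * processing_latency for c, v in zip(curr_pos, velocity)]

# Example: 1000 fps capture (1 ms between frames) and roughly 1 ms second-process latency.
prev_pos = (100.0, 50.0, 0.0)
curr_pos = (100.4, 50.0, 0.0)
print(predict_position_at_output(prev_pos, curr_pos, frame_interval=0.001, processing_latency=0.001))
```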
制御部126は、第2の時間予測部125から出力された対象物Tgの予測位置の情報をアクチュエータACに出力する。
The control unit 126 outputs the information on the predicted position of the object Tg output from the second time prediction unit 125 to the actuator AC.
次に、図4を参照して、ピッキングシステム100の全体動作手順について説明する。図4は、実施の形態1に係るピッキングシステム100の全体動作手順例を説明する図である。
Next, the overall operation procedure of the picking system 100 will be described with reference to FIG. 4. FIG. 4 is a diagram illustrating an example of the overall operation procedure of the picking system 100 according to the first embodiment.
なお、図4に示すピッキングシステム100の全体動作手順例は一例であって、これに限定されない。図4では、第1処理と第2処理との関係を分かりやすくするために1個の対象物Tgのピッキングにおいて第1処理が1回、第2処理がN(N:3以上に整数)回実行される例を示しているが、第1処理および第2処理がそれぞれ実行される回数は、これに限定されない。ピッキングシステム100は、1個の対象物Tgのピッキングにおいて第1処理を複数回実行してもよいことは言うまでもない。
Note that the example of the overall operation procedure of the picking system 100 shown in FIG. 4 is just one example, and is not limited to this. In FIG. 4, in order to make it easier to understand the relationship between the first process and the second process, an example is shown in which the first process is executed once and the second process is executed N times (N: an integer of 3 or more) when picking one target object Tg, but the number of times that the first process and the second process are executed is not limited to this. It goes without saying that the picking system 100 may execute the first process multiple times when picking one target object Tg.
アクチュエータACは、カメラCMによってベルトコンベア上を搬送される対象物Tgを所定のフレームレート(例えば、1000fps)で撮像しながら、ピッキングする。図4に示すアクチュエータACは、対象物Tgのピッキングプロセスの一部(時刻t11~時刻t1N)を示しており、例えば、対象物Tgをピッキングするまで繰り返し実行される。
The actuator AC picks the object Tg being transported on the belt conveyor while the camera CM captures images of the object Tg at a predetermined frame rate (e.g., 1000 fps). FIG. 4 shows a part of the picking process of the object Tg (time t11 to time t1N); this process is repeated, for example, until the object Tg is picked.
画像処理装置P1は、カメラCMによって所定のフレームレートで撮像された対象物Tgの撮像画像を取得し、取得された撮像画像に第2処理(ステップSt100)を実行する。
The image processing device P1 acquires an image of the object Tg captured by the camera CM at a predetermined frame rate, and performs a second process (step St100) on the acquired image.
また、画像処理装置P1は、第2処理と並列に、カメラCMによって撮像された撮像画像と、第1処理中に実行された複数回の第2処理により得られた特徴マッチングの結果(言い換えると、マッチング傾向)および対象物Tgの移動量(言い換えると、動き情報)とに基づいて、第1処理(ステップSt200)を実行する。画像処理装置P1は、第1処理により得られた対象物Tgのテンプレート候補である予測テンプレートTP2(図7参照)を第2処理にフィードバックする。
In addition, in parallel with the second process, the image processing device P1 executes the first process (step St200) based on the captured image captured by the camera CM, the feature matching results (in other words, matching tendency) obtained by the multiple second processes executed during the first process, and the amount of movement of the object Tg (in other words, movement information). The image processing device P1 feeds back to the second process the prediction template TP2 (see FIG. 7), which is a template candidate for the object Tg obtained by the first process.
図4に示す例において、カメラCMは、時刻t11で対象物Tgを撮像し、撮像された撮像画像Img11を画像処理装置P1に送信する。画像処理装置P1は、カメラCMから送信された1枚目の撮像画像Img11(画像データ)を取得し、1枚目の撮像画像Img11を用いて第1処理および第2処理のそれぞれを実行する。画像処理装置P1は、第2処理により得られた対象物Tgのマッチング傾向と、対象物Tgの動き情報とを第1処理部110に出力するとともに、対象物Tgの予測位置(x1,y1,z1)の情報をアクチュエータACに送信する。アクチュエータACは、取得された対象物Tgの3次元の予測位置(x1,y1,z1)に向かってエンドエフェクタEFを移動させる。
In the example shown in FIG. 4, the camera CM captures an image of the object Tg at time t11 and transmits the captured image Img11 to the image processing device P1. The image processing device P1 acquires the first captured image Img11 (image data) transmitted from the camera CM and executes the first process and the second process using the first captured image Img11. The image processing device P1 outputs the matching tendency of the object Tg obtained by the second process and the movement information of the object Tg to the first processing unit 110, and transmits information on the predicted position (x1, y1, z1) of the object Tg to the actuator AC. The actuator AC moves the end effector EF toward the acquired three-dimensional predicted position (x1, y1, z1) of the object Tg.
時刻t12において、カメラCMは、対象物Tgを撮像する。カメラCMは、撮像された撮像画像Img12を画像処理装置P1に送信する。画像処理装置P1は、カメラCMから送信された2枚目の撮像画像Img12(画像データ)を取得し、2枚目の撮像画像Img12を用いて第2処理を実行する。画像処理装置P1は、第2処理により得られた対象物Tgのマッチング傾向と、対象物Tgの動き情報とを第1処理部110に出力するとともに、対象物Tgの予測位置(x2,y2,z2)の情報をアクチュエータACに送信する。アクチュエータACは、取得された対象物Tgの予測位置(x2,y2,z2)に向かってエンドエフェクタEFを移動させる。
At time t12, the camera CM captures an image of the object Tg. The camera CM transmits the captured image Img12 to the image processing device P1. The image processing device P1 acquires the second captured image Img12 (image data) transmitted from the camera CM and executes the second process using the second captured image Img12. The image processing device P1 outputs the matching tendency of the object Tg obtained by the second process and the movement information of the object Tg to the first processing unit 110, and transmits information on the predicted position (x2, y2, z2) of the object Tg to the actuator AC. The actuator AC moves the end effector EF toward the acquired predicted position (x2, y2, z2) of the object Tg.
時刻t13において、カメラCMは、対象物Tgを撮像する。カメラCMは、撮像された撮像画像Img13を画像処理装置P1に送信する。画像処理装置P1は、カメラCMから送信された3枚目の撮像画像Img13(画像データ)を取得し、3枚目の撮像画像Img13を用いて第2処理を実行する。画像処理装置P1は、第2処理により得られた対象物Tgのマッチング傾向と、対象物Tgの動き情報とを第1処理部110に出力するとともに、対象物Tgの予測位置(図示略)の情報をアクチュエータACに送信する。アクチュエータACは、取得された対象物Tgの予測位置に向かってエンドエフェクタEFを移動させる。
At time t13, the camera CM captures an image of the object Tg. The camera CM transmits the captured image Img13 to the image processing device P1. The image processing device P1 acquires the third captured image Img13 (image data) transmitted from the camera CM and executes the second process using the third captured image Img13. The image processing device P1 outputs the matching tendency of the object Tg obtained by the second process and the movement information of the object Tg to the first processing unit 110, and transmits information on the predicted position (not shown) of the object Tg to the actuator AC. The actuator AC moves the end effector EF toward the acquired predicted position of the object Tg.
時刻t1(N-2)において、カメラCMは、対象物Tgを撮像する。カメラCMは、撮像された撮像画像Img1(N-2)を画像処理装置P1に送信する。画像処理装置P1は、カメラCMから送信された(N-2)枚目の撮像画像Img1(N-2)(画像データ)を取得し、(N-2)枚目の撮像画像Img1(N-2)を用いて第2処理を実行する。画像処理装置P1は、第2処理により得られた対象物Tgのマッチング傾向と、対象物Tgの動き情報とを第1処理部110に出力するとともに、対象物Tgの予測位置(図示略)の情報をアクチュエータACに送信する。アクチュエータACは、取得された対象物Tgの予測位置に向かってエンドエフェクタEFを移動させる。
At time t1 (N-2), the camera CM captures an image of the object Tg. The camera CM transmits the captured image Img1 (N-2) to the image processing device P1. The image processing device P1 acquires the (N-2)th captured image Img1 (N-2) (image data) transmitted from the camera CM and executes the second process using the (N-2)th captured image Img1 (N-2). The image processing device P1 outputs the matching tendency of the object Tg and the movement information of the object Tg obtained by the second process to the first processing unit 110, and transmits information on the predicted position (not shown) of the object Tg to the actuator AC. The actuator AC moves the end effector EF toward the acquired predicted position of the object Tg.
時刻t1(N-1)において、カメラCMは、対象物Tgを撮像する。カメラCMは、撮像された撮像画像Img1(N-1)を画像処理装置P1に送信する。画像処理装置P1は、カメラCMから送信された(N-1)枚目の撮像画像Img1(N-1)(画像データ)を取得し、(N-1)枚目の撮像画像Img1(N-1)を用いて第2処理を実行する。画像処理装置P1は、第2処理により得られた対象物Tgのマッチング傾向と、対象物Tgの動き情報とを第1処理部110に出力するとともに、対象物Tgの予測位置(図示略)の情報をアクチュエータACに送信する。アクチュエータACは、取得された対象物Tgの予測位置に向かってエンドエフェクタEFを移動させる。
At time t1 (N-1), the camera CM captures an image of the object Tg. The camera CM transmits the captured image Img1 (N-1) to the image processing device P1. The image processing device P1 acquires the (N-1)th captured image Img1 (N-1) (image data) transmitted from the camera CM and executes the second process using the (N-1)th captured image Img1 (N-1). The image processing device P1 outputs the matching tendency of the object Tg and the movement information of the object Tg obtained by the second process to the first processing unit 110, and transmits information on the predicted position (not shown) of the object Tg to the actuator AC. The actuator AC moves the end effector EF toward the acquired predicted position of the object Tg.
時刻t1Nにおいて、カメラCMは、対象物Tgを撮像する。カメラCMは、撮像された撮像画像Img1Nを画像処理装置P1に送信する。画像処理装置P1は、カメラCMから送信されたN枚目の撮像画像Img1N(画像データ)を取得し、N枚目の撮像画像Img1Nを用いて第2処理を実行する。画像処理装置P1は、第2処理により得られた対象物Tgのマッチング傾向と、対象物Tgの動き情報とを第1処理部110に出力するとともに、対象物Tgの予測位置(xN,yN,zN)の情報をアクチュエータACに送信する。アクチュエータACは、取得された対象物Tgの予測位置(xN,yN,zN)に向かってエンドエフェクタEFを移動させて、対象物Tgをピッキングする。画像処理装置P1は、時刻t1(N+1)で第1処理により得られた対象物Tgのテンプレート候補(予測テンプレートTP2)を第2処理部120にフィードバックし、テンプレート候補(予測テンプレートTP2)を更新する。
At time t1N, the camera CM captures an image of the object Tg. The camera CM transmits the captured image Img1N to the image processing device P1. The image processing device P1 acquires the Nth captured image Img1N (image data) transmitted from the camera CM and executes the second processing using the Nth captured image Img1N. The image processing device P1 outputs the matching tendency of the object Tg obtained by the second processing and the movement information of the object Tg to the first processing unit 110, and transmits information on the predicted position (xN, yN, zN) of the object Tg to the actuator AC. The actuator AC moves the end effector EF toward the acquired predicted position (xN, yN, zN) of the object Tg to pick up the object Tg. The image processing device P1 feeds back the template candidate (prediction template TP2) of the object Tg obtained by the first process at time t1(N+1) to the second processing unit 120, and updates the template candidate (prediction template TP2).
なお、図4に示す例では図示されていないが、画像処理装置P1は、以降に実行される第2処理において、再度第1処理により新たな予測テンプレートTP2がフィードバックされるまでの間、取得された最新の予測テンプレートTP2を用いて特徴マッチングを実行する。
Note that, although not shown in the example shown in FIG. 4, in the second process executed subsequently, the image processing device P1 performs feature matching using the latest obtained prediction template TP2 until a new prediction template TP2 is fed back again by the first process.
次に、図5を参照して、画像処理装置P1の第2処理について説明する。図5は、実施の形態1における画像処理装置P1の第2処理手順(ステップSt100)例を示すフローチャートである。
Next, the second process of the image processing device P1 will be described with reference to FIG. 5. FIG. 5 is a flowchart showing an example of the second process procedure (step St100) of the image processing device P1 in embodiment 1.
第2処理部120は、操作デバイス14を介して取得されたピッキング処理の開始を通知する制御指令に基づいて、対象物Tgのテンプレートを更新するための更新フラグを「1」に設定する(St11)。第2処理部120は、カメラCMから送信された撮像画像を取得する(St12)。
The second processing unit 120 sets the update flag for updating the template of the target object Tg to "1" based on the control command notifying the start of the picking process acquired via the operation device 14 (St11). The second processing unit 120 acquires the captured image transmitted from the camera CM (St12).
第2処理部120は、現在設定されている更新フラグが「1」であるか否かを判定する(St13)。
The second processing unit 120 determines whether the currently set update flag is "1" (St13).
第2処理部120は、ステップSt13の処理において、現在設定されている更新フラグが「1」であると判定した場合(St13,YES)、特徴マッチングに用いられる対象物Tgのテンプレート(テンプレートTP1または予測テンプレートTP2)を更新する(St14)。第2処理部120は、テンプレート候補(予測テンプレートTP2)を生成するための対象物Tgの撮像画像を第1処理部110に出力し(St15)、更新フラグを「0」に設定する(St16)。
If the second processing unit 120 determines in the processing of step St13 that the currently set update flag is "1" (St13, YES), it updates the template (template TP1 or prediction template TP2) of the object Tg used for feature matching (St14). The second processing unit 120 outputs the captured image of the object Tg for generating a template candidate (prediction template TP2) to the first processing unit 110 (St15), and sets the update flag to "0" (St16).
一方、第2処理部120は、ステップSt13の処理において、現在設定されている更新フラグが「1」でないと判定した場合(St13,NO)、特徴マッチングに用いられる対象物Tgのテンプレート(予測テンプレートTP2)があるか否かを判定する(St17)。
On the other hand, if the second processing unit 120 determines in the processing of step St13 that the currently set update flag is not "1" (St13, NO), it determines whether or not there is a template (prediction template TP2) of the object Tg to be used for feature matching (St17).
第2処理部120は、ステップSt17の処理において、特徴マッチングに用いられる対象物Tgのテンプレート(予測テンプレートTP2)があると判定した場合(St17,YES)、撮像画像から特徴量を抽出する(St18)。
If the second processing unit 120 determines in step St17 that there is a template (prediction template TP2) of the object Tg to be used for feature matching (St17, YES), it extracts features from the captured image (St18).
一方、第2処理部120は、ステップSt17の処理において、特徴マッチングに用いられる対象物Tgのテンプレート(予測テンプレートTP2)がないと判定した場合(St17,NO)、ステップSt13の処理に戻り、第1処理部110から対象物Tgのテンプレート(テンプレートTP1または予測テンプレートTP2)がフィードバックされるまで待機する。
On the other hand, if the second processing unit 120 determines in the processing of step St17 that there is no template (prediction template TP2) of the object Tg to be used for feature matching (St17, NO), it returns to the processing of step St13 and waits until a template (template TP1 or prediction template TP2) of the object Tg is fed back from the first processing unit 110.
The second processing unit 120 performs feature matching between the features extracted from the captured image (that is, the features of the captured image) and the features of the object Tg based on the template (that is, the features of the template). Based on the matching result, the second processing unit 120 fits the position of the object Tg appearing in the image captured by the camera CM (St19).
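As a concrete illustration of this feature matching and position fitting (St19), the following Python sketch estimates the 2D position of the object in the captured image from the template. It assumes OpenCV is available and uses ORB features with a RANSAC-fitted similarity transform; the function name, the choice of ORB, and all thresholds are illustrative assumptions, not the actual implementation of the second processing unit 120.

```python
# Illustrative sketch only: one possible realization of the feature matching and
# position fitting of step St19. Both images are assumed to be 8-bit grayscale.
import cv2
import numpy as np

def fit_object_position(captured, template):
    """Return an estimated (x, y) of the object in `captured`, or None."""
    orb = cv2.ORB_create(nfeatures=500)
    kp_t, des_t = orb.detectAndCompute(template, None)   # template features
    kp_c, des_c = orb.detectAndCompute(captured, None)   # captured-image features
    if des_t is None or des_c is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_c), key=lambda m: m.distance)[:50]
    if len(matches) < 4:
        return None
    src = np.float32([kp_t[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_c[m.trainIdx].pt for m in matches])
    # Fit a similarity transform (template -> image) and map the template centre,
    # which then serves as the fitted position of the object in the image.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return None
    h, w = template.shape[:2]
    x, y = M @ np.array([w / 2.0, h / 2.0, 1.0])
    return float(x), float(y)
```

In such a sketch, the mapped template centre plays the role of the fitted object position used in the subsequent motion prediction.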
The second processing unit 120 calculates the amount of movement of the object Tg between the captured image used for feature matching (that is, the most recent captured image) and the captured image taken immediately before it. Based on the calculated amount of movement, the second processing unit 120 predicts how far the object Tg will move while the second process is being executed, predicts the position of the object Tg at the time when the second process ends, and obtains the predicted position of the object Tg (St20). The second processing unit 120 transmits the predicted position of the object Tg to the actuator AC (St21).
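The motion prediction of steps St20 and St21 can be pictured with a minimal constant-velocity model: the displacement observed between the two most recent frames is scaled by the time the processing is expected to take. This is only a sketch under that assumption; the actual prediction performed by the second processing unit 120 is not limited to a constant-velocity model.

```python
# Minimal sketch: predict where the object will be when the second process finishes,
# assuming constant velocity between the last two captured frames.
import numpy as np

def predict_position(prev_pos, curr_pos, frame_interval_s, processing_time_s):
    """prev_pos/curr_pos: [x, y, z] measured in two consecutive frames."""
    velocity = (np.asarray(curr_pos) - np.asarray(prev_pos)) / frame_interval_s
    return np.asarray(curr_pos) + velocity * processing_time_s

# Example: 1000 fps imaging (1 ms between frames) and 1 ms of processing time.
predicted = predict_position([10.0, 5.0, 0.0], [10.2, 5.1, 0.0], 0.001, 0.001)
print(predicted)  # -> approximately [10.4, 5.2, 0.0]
```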
The second processing unit 120 outputs the matching tendency, which is the result of the feature matching, and the motion information (movement information) of the object Tg to the first processing unit 110 (St22).
The second processing unit 120 determines whether the picking process of the object Tg has been completed, based on a control command notifying the completion of the picking process of the object Tg by the actuator AC (St23).
If the second processing unit 120 determines in step St23 that the picking process of the object Tg has been completed (St23, YES), it terminates the second process (step St100) shown in FIG. 5.
On the other hand, if the second processing unit 120 determines in step St23 that the picking process of the object Tg has not been completed (St23, NO), the processing returns to step St11.
As described above, the image processing device P1 in the first embodiment can predict, based on the images captured at a high frame rate for the second process, the amount of movement of the object Tg while the second process is being executed, and can predict the position of the object Tg at the time when the second process ends. This allows the image processing device P1 to support real-time tracking of the object Tg by the actuator AC.
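For reference, the control flow of the second process in FIG. 5 (steps St11 to St23) can be summarized as a single loop. The sketch below is structural only: the queue-based exchange with the first process and every helper passed in as an argument are hypothetical stand-ins, not actual APIs of the picking system.

```python
# Structural sketch of the second process in FIG. 5 (St11-St23).
# All arguments are hypothetical helpers; queues model the exchange with the first process.
import queue

def second_process(get_image, template_from_first, image_to_first,
                   send_position, picking_finished, match_and_predict):
    update_flag = 1                                   # St11: picking started
    template, prev_image = None, None
    while not picking_finished():                     # St23
        image = get_image()                           # St12
        if not template_from_first.empty():
            update_flag = 1                           # raised when the first process feeds back (St37)
        if update_flag == 1:                          # St13, YES
            try:
                template = template_from_first.get_nowait()   # St14: adopt the new template
            except queue.Empty:
                pass
            image_to_first.put(image)                 # St15: hand the image to the first process
            update_flag = 0                           # St16
        if template is None:                          # St17, NO: keep waiting for a template
            prev_image = image
            continue
        predicted = match_and_predict(image, prev_image, template)  # St18-St20
        send_position(predicted)                      # St21
        prev_image = image
```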
Next, the first process of the image processing device P1 will be described with reference to FIG. 6. FIG. 6 is a flowchart showing an example of the first process procedure (step St200) of the image processing device P1 in the first embodiment.
The first processing unit 110 acquires the captured image output from the second processing unit 120 (St31). The first processing unit 110 detects the object Tg from the acquired captured image using advanced image processing techniques such as deep learning (St32).
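A sketch of the detection step St32, assuming a generic pretrained detector. The use of torchvision's Faster R-CNN and the 0.5 confidence threshold are purely illustrative assumptions; the document does not specify which deep-learning detector the first processing unit 110 uses.

```python
# Illustrative sketch of step St32: detect candidate objects in the captured image
# with a generic pretrained detector (assumed, not the actual detector of the system).
import torch
import torchvision

def detect_objects(image_rgb):
    """image_rgb: HxWx3 float array with values in [0, 1]; returns (boxes, labels)."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    tensor = torch.as_tensor(image_rgb).permute(2, 0, 1).float()
    with torch.no_grad():
        output = model([tensor])[0]          # dict with 'boxes', 'labels', 'scores'
    keep = output["scores"] > 0.5            # assumed confidence threshold
    return output["boxes"][keep], output["labels"][keep]
```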
Based on the detected object Tg, the first processing unit 110 selects the 3D model MD corresponding to the object Tg from among the 3D models stored in the 3D model database DB. The first processing unit 110 performs 3D matching between the selected 3D model MD and the detected object Tg to obtain the detected posture of the object Tg appearing in the captured image (in other words, the posture in which the object Tg was imaged) (St33). This allows the first processing unit 110 to obtain the detected posture for generating the detection template TP3 (see FIG. 7). The first processing unit 110 updates the prediction model for predicting the posture of the object Tg (in other words, the imaging posture) based on the feature matching result and the motion information of the object Tg acquired from the second processing unit 120 (St34).
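One common way to realize the 3D matching of step St33, that is, obtaining the detected posture of the object against its 3D model, is to solve a Perspective-n-Point problem from 2D-3D correspondences. The sketch below assumes OpenCV's solvePnP, a calibrated camera, and correspondence data prepared elsewhere; it is not necessarily the 3D matching algorithm actually used here.

```python
# Sketch of one possible realization of the 3D matching in St33: estimate the
# object's pose (rotation and translation) from correspondences between 2D image
# points and 3D model points, assuming a calibrated camera.
import cv2
import numpy as np

def estimate_pose(model_points_3d, image_points_2d, camera_matrix):
    """model_points_3d: Nx3 array, image_points_2d: Nx2 array (N >= 6)."""
    dist_coeffs = np.zeros(5)  # assume no lens distortion for simplicity
    ok, rvec, tvec = cv2.solvePnP(
        model_points_3d.astype(np.float32),
        image_points_2d.astype(np.float32),
        camera_matrix.astype(np.float32),
        dist_coeffs,
    )
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix of the detected posture
    return rotation, tvec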
Based on the updated prediction model for predicting the posture of the object Tg (in other words, the imaging posture) and the detected posture of the object Tg, the first processing unit 110 predicts a template candidate corresponding to the posture of the object Tg at the time when the first process ends (St35), and feeds back (outputs) the predicted template candidate (prediction template TP2) to the second processing unit 120 (St36).
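The prediction of step St35 can be pictured as extrapolating the detected posture forward by the time remaining until the first process ends, using the motion observed through the second process. The sketch below assumes a constant angular velocity expressed with rotation vectors (SciPy); the real prediction model is not limited to this.

```python
# Sketch of the posture prediction in St35: extrapolate the detected posture by the
# motion observed between two recent detections, assuming constant angular velocity.
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_posture(rot_prev, rot_curr, dt_observed, dt_ahead):
    """rot_prev/rot_curr: 3x3 rotation matrices observed dt_observed seconds apart.
    Returns the predicted 3x3 rotation dt_ahead seconds after rot_curr."""
    delta = R.from_matrix(rot_curr) * R.from_matrix(rot_prev).inv()  # relative motion
    omega = delta.as_rotvec() / dt_observed        # angular velocity (axis * rad/s)
    step = R.from_rotvec(omega * dt_ahead)         # motion accumulated until the
    return (step * R.from_matrix(rot_curr)).as_matrix()  # first process finishes
```

The predicted rotation would then be used to render the prediction template TP2 from the 3D model MD.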
The first processing unit 110 sets the update flag for updating the template of the object Tg to "1" (St37).
The first processing unit 110 determines whether the picking process of the object Tg has been completed, based on a control command notifying the completion of the picking process of the object Tg by the actuator AC (St38).
If the first processing unit 110 determines in step St38 that the picking process of the object Tg has been completed (St38, YES), it terminates the first process (step St200) shown in FIG. 6.
On the other hand, if the first processing unit 110 determines in step St38 that the picking process of the object Tg has not been completed (St38, NO), the processing returns to step St31.
As described above, the image processing device P1 in the first embodiment executes more advanced image recognition processing in the first process, and can thereby select with higher accuracy the 3D model MD of the object Tg for generating the prediction template TP2 used in the second process. Furthermore, based on the feature matching results and the motion information of the object Tg output from the second processing unit 120, whose processing time is short, the image processing device P1 can track the changes in posture of the object Tg that occur during the first process and generate a template of the object Tg (prediction template TP2) that is closer to the actual posture of the object Tg. This allows the image processing device P1 to improve the feature matching accuracy of the second processing unit 120 and the accuracy with which the object Tg is tracked.
Next, the template display screen will be described with reference to FIG. 7. FIG. 7 is a diagram showing an example of the template display screen SC. Note that the template display screen SC shown in FIG. 7 is an example and is not limited to this.
The first processing unit 110 generates the template display screen SC based on the result of the first process, and transmits the generated template display screen SC to the display 13 for display. The template display screen SC includes a first display area AR1, a second display area AR2, a third display area AR3, a fourth display area AR4, a fifth display area AR5, and a registration button BT.
The first display area AR1 includes the 3D model MD of the object Tg obtained by the image processing in the first process, or selected by a user operation in the modification of the first embodiment described later, and the XYZ coordinate system set in the data of the 3D model MD of the object Tg.
The first display area AR1 also includes an icon PP1 indicating the imaging angle of the camera CM corresponding to the template TP1, an icon PP2 indicating the imaging angle of the camera CM corresponding to the prediction template TP2 (an example of the first imaging position), and an icon PP3 indicating the imaging angle of the camera CM corresponding to the detection template TP3 (an example of the second imaging position).
The icon PP1 can receive user operations via the operation device 14. When the position of the icon PP1 is changed by a user operation, the image processing device P1 generates a template (2D) of the 3D model MD of the object Tg as viewed from the position (angle) of the icon PP1. The image processing device P1 generates a template display screen SC in which the generated template (2D) of the 3D model MD is displayed as the template TP1 in the third display area AR3, and transmits it to the display 13 for display.
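Generating a template (2D) of the 3D model MD as seen from the position (angle) of the icon PP1 amounts to projecting the model from that viewpoint. The following is a minimal point-projection sketch with an assumed pinhole camera; an actual template generation step would render surfaces rather than isolated points.

```python
# Minimal sketch: project 3D model points into a 2D view from a chosen camera pose,
# as a stand-in for generating a template (2D) from the position of icon PP1.
import numpy as np

def project_model(points_3d, rotation, translation, focal_px, image_size):
    """points_3d: Nx3 model points; rotation: 3x3; translation: 3-vector; image_size: (width, height)."""
    cam = points_3d @ rotation.T + translation        # model -> camera coordinates
    cam = cam[cam[:, 2] > 1e-6]                       # keep points in front of the camera
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    u = focal_px * cam[:, 0] / cam[:, 2] + cx         # pinhole projection
    v = focal_px * cam[:, 1] / cam[:, 2] + cy
    template = np.zeros((image_size[1], image_size[0]), dtype=np.uint8)
    px = np.clip(u.astype(int), 0, image_size[0] - 1)
    py = np.clip(v.astype(int), 0, image_size[1] - 1)
    template[py, px] = 255                            # mark projected model points
    return template
```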
The icon PP2 represents the position (angle) of the camera CM from which the template (2D) of the 3D model MD corresponding to the prediction template TP2 can be captured, and indicates the imaging position at which the object Tg will be imaged next. When the image processing device P1 predicts the predicted posture of the object Tg in the first process based on the predicted position of the object Tg obtained by the second process, it updates the position of the icon PP2 and the prediction template TP2 displayed in the fourth display area AR4 based on the predicted posture of the object Tg. The image processing device P1 generates a template display screen SC in which the position of the icon PP2 and the prediction template TP2 have been updated, and transmits it to the display 13 for display.
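The relationship between a predicted posture and the position of the icon PP2 on the view sphere around the 3D model can be sketched as follows; the reference viewing direction and the sphere radius are assumptions made only for illustration.

```python
# Sketch: derive a camera viewpoint (icon PP2 position) on a sphere around the
# 3D model from a predicted posture, assuming a fixed reference viewing direction.
import numpy as np

def icon_position(predicted_rotation, radius=1.0):
    """predicted_rotation: 3x3 rotation of the object relative to its reference pose."""
    reference_view = np.array([0.0, 0.0, 1.0])      # assumed reference camera direction
    # Rotating the object one way is equivalent to moving the camera the other way.
    direction = predicted_rotation.T @ reference_view
    return radius * direction                       # point on the view sphere
```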
The icon PP3 represents the position (angle) of the camera CM from which the template (2D) of the 3D model MD corresponding to the detection template TP3 can be captured, and indicates the imaging position of the object Tg detected from the captured image on which image processing was performed. The image processing device P1 updates the position of the icon PP3 and the detection template TP3 displayed in the fifth display area AR5 based on the posture of the object Tg detected by the first process. The image processing device P1 generates a template display screen SC in which the position of the icon PP3 and the detection template TP3 have been updated, and transmits it to the display 13 for display.
The second display area AR2 includes the 3D models MD of at least one object stored in the 3D model database DB. The second display area AR2 shown in FIG. 7 includes a 3D model MD (3D) of object "A12", a 3D model MD (3D) of object "A13", a 3D model MD (3D) of object "A14", and a 3D model MD (3D) of object "A15".
The second display area AR2 can receive, via the operation device 14, a user operation for selecting one of the objects. The image processing device P1 receives a user operation selecting one of the selection areas SL1, SL2, SL3, and SL4 corresponding to the respective objects, and displays the 3D model corresponding to the selection area SL1 to SL4 designated by the user operation in the first display area AR1.
In addition, the image processing device P1 in the second modification of the first embodiment accepts a user operation on the icon PP1 displayed in the first display area AR1, and displays a template (2D) of the 3D model corresponding to the position of the icon PP1, as moved by the user operation, in the third display area AR3. When the registration button BT is selected (pressed) by a user operation, the image processing device P1 updates (registers) the template (2D) displayed in the third display area AR3 as the template to be used for feature matching in the second process.
The third display area AR3 includes the template TP1 (2D) obtained when the 3D model MD of the object Tg is imaged from the position (angle) of the icon PP1.
The fourth display area AR4 includes the prediction template TP2 (2D) obtained when the 3D model MD of the object Tg is imaged from the position (angle) of the icon PP2, that is, from the predicted posture.
The fifth display area AR5 includes the detection template TP3 (2D) obtained when the 3D model MD of the object Tg is imaged from the position (angle) of the icon PP3, that is, from the posture in which the object Tg was detected.
The registration button BT is a button that can accept the generation of the template (2D) corresponding to the icon PP1 moved based on a user operation.
(First Modification of the First Embodiment)
The picking system 100 according to the first embodiment was described as performing template prediction by 3D matching using a 3D model. The picking system 100 according to the first modification of the first embodiment performs template prediction based on the position of the object Tg detected in the first process, information on the movement of the object Tg obtained in the multiple second processes executed while the first process is being executed, and the captured images used in those second processes.
Note that the internal configuration of the picking system 100 according to the first modification of the first embodiment is substantially the same as that of the picking system 100 according to the first embodiment, and a description thereof is therefore omitted.
Next, the overall operation procedure of the picking system 100 will be described with reference to FIG. 8. FIG. 8 is a diagram illustrating an example of the overall operation procedure of the picking system 100 according to the first modification of the first embodiment.
Note that the overall operation procedure of the picking system 100 shown in FIG. 8 is only an example and is not limited to this. In FIG. 8, to make the relationship between the first process and the second process easier to understand, an example is shown in which the first process is executed once and the second process is executed N times in picking one object Tg, but the number of times each of the first process and the second process is executed is not limited to this. It goes without saying that the picking system 100 may execute the first process multiple times in picking one object Tg.
The actuator AC picks the object Tg conveyed on the belt conveyor while the camera CM captures images of the object Tg at a predetermined frame rate (for example, 1000 fps). The actuator AC shown in FIG. 8 illustrates a part of the picking process of the object Tg (time t11 to time t1N), which is repeated, for example, until the object Tg is picked.
The image processing device P1 acquires the images of the object Tg captured by the camera CM at the predetermined frame rate, and executes the second process (step St100) on each acquired image.
In parallel with the second process, the image processing device P1 executes the first process (step St200A) based on the image captured by the camera CM, information related to the movement of the object Tg, such as motion information, obtained by the multiple second processes executed during the first process, and the captured images used in those second processes. The image processing device P1 feeds back the prediction template TP2 (see FIG. 7), which is a template candidate of the object Tg obtained by the first process, to the second process.
In the example shown in FIG. 8, the camera CM captures an image of the object Tg at time t11 and transmits the captured image Img11 to the image processing device P1. The image processing device P1 acquires the first captured image Img11 (image data) transmitted from the camera CM and executes the first process and the second process using the first captured image Img11. The image processing device P1 outputs the motion information (information related to movement) of the object Tg obtained by the second process and the first captured image Img11 used in the second process to the first processing unit 110, and transmits information on the predicted position (x1, y1, z1) of the object Tg to the actuator AC. The actuator AC moves the end effector EF toward the acquired three-dimensional predicted position (x1, y1, z1) of the object Tg.
At time t12, the camera CM captures an image of the object Tg and transmits the captured image Img12 to the image processing device P1. The image processing device P1 acquires the second captured image Img12 (image data) transmitted from the camera CM and executes the second process using the second captured image Img12. The image processing device P1 outputs the motion information (information related to movement) of the object Tg obtained by the second process and the second captured image Img12 used in the second process to the first processing unit 110, and transmits information on the predicted position (x2, y2, z2) of the object Tg to the actuator AC. The actuator AC moves the end effector EF toward the acquired predicted position (x2, y2, z2) of the object Tg.
Thereafter, the picking system 100 repeats the same processing from time t13 to time t1(N-1).
At time t1N, the camera CM captures an image of the object Tg and transmits the captured image Img1N to the image processing device P1. The image processing device P1 acquires the Nth captured image Img1N (image data) transmitted from the camera CM and executes the second process using the Nth captured image Img1N. The image processing device P1 outputs the motion information (information related to movement) of the object Tg obtained by the second process and the Nth captured image Img1N used in the second process to the first processing unit 110, and transmits information on the predicted position (xN, yN, zN) of the object Tg to the actuator AC. The actuator AC moves the end effector EF toward the acquired predicted position (xN, yN, zN) of the object Tg to pick up the object Tg. The image processing device P1 feeds back the template Img31 of the object Tg selected by the user operation to the second processing unit 120.
As described above, the image processing device P1 in the first modification of the first embodiment can perform template prediction without using the 3D model MD of the object Tg.
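One way to picture template prediction without a 3D model is to warp the template region detected in the first process by the 2D motion accumulated over the second processes executed in the meantime. The sketch below assumes OpenCV and a simple translation-plus-rotation motion model; it is not necessarily the exact procedure of this modification.

```python
# Sketch of template prediction without a 3D model (first modification): warp the
# template detected by the first process with the 2D motion accumulated by the
# second processes run in the meantime. Illustrative assumptions only.
import cv2
import numpy as np

def predict_template_2d(detected_template, displacements, rotation_deg_per_frame=0.0):
    """detected_template: HxW image; displacements: list of per-frame (dx, dy)."""
    h, w = detected_template.shape[:2]
    dx = float(sum(d[0] for d in displacements))
    dy = float(sum(d[1] for d in displacements))
    angle = rotation_deg_per_frame * len(displacements)
    # Rotation about the template centre combined with the accumulated translation.
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    M[0, 2] += dx
    M[1, 2] += dy
    return cv2.warpAffine(detected_template, M, (w, h))
```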
(Modification 2 of the First Embodiment)
The picking system 100 according to the first embodiment was described as predicting a template using the matching tendency obtained by the second process and information on the movement of the object Tg. The picking system 100 according to the second modification of the first embodiment generates a template using a template TP1 obtained based on a user operation.
Note that the internal configuration of the picking system 100 according to the second modification of the first embodiment is substantially the same as that of the picking system 100 according to the first embodiment, and a description thereof is therefore omitted.
Next, the first process of the image processing device P1 will be described with reference to FIG. 9. FIG. 9 is a flowchart showing an example of the first process procedure (step St200B) of the image processing device P1 in the second modification of the first embodiment.
Note that, in the operation procedure of the first process shown in FIG. 9, steps St31 to St32 and steps St34 to St38 are the same as in the operation procedure of the first process shown in FIG. 6, and a description thereof is therefore omitted.
The first processing unit 110 determines whether a template TP1 of the object Tg has been registered based on a user operation of moving the icon PP1 and pressing the registration button BT (St30A).
If the first processing unit 110 determines in step St30A that a template TP1 of the object Tg has been registered (St30A, YES), it feeds back the registered template TP1 to the second processing unit 120 instead of the prediction template TP2 (St30B).
On the other hand, if the first processing unit 110 determines in step St30A that a template TP1 of the object Tg has not been registered (St30A, NO), it acquires the captured image output from the second processing unit 120 (St31).
As described above, the image processing device P1 in the second modification of the first embodiment executes feature matching using the template TP1 designated by the user instead of the prediction template TP2, and can thereby improve the feature matching accuracy of the second processing unit 120 and the accuracy with which the object Tg is tracked.
As described above, the image processing device P1 according to the first embodiment and the first modification of the first embodiment is capable of communicating with a camera CM that is movable and capable of capturing an image of the object Tg, and executes a first process (steps St200, St200A) of acquiring a captured image of the object Tg, detecting the position or posture of the object Tg from the captured image, and generating a prediction template TP2 (an example of a template) of the object Tg. While the first process is being executed, the image processing device P1 executes, multiple times, a second process of performing feature matching (an example of template matching) based on the captured image and the prediction template TP2 of the object Tg and acquiring information on the movement of the object Tg. The first process predicts the posture of the object Tg based on the detected position or posture of the object Tg and the information on the movement of the object Tg acquired multiple times in the second process, and generates a prediction template TP2 of the object Tg corresponding to the predicted posture of the object Tg.
As a result, the image processing device P1 according to the first embodiment and the first modification of the first embodiment generates the prediction template TP2 in the first process and, based on the feature matching results and the position information of the object Tg output from the second processing unit 120, tracks the changes in posture of the object Tg that occur during the first process, so that it can generate a template of the object Tg (prediction template TP2) that is closer to the actual posture of the object Tg.
The image processing device P1 according to the first embodiment and the first modification of the first embodiment also extracts the features of the object Tg from the captured image, and performs feature matching based on the extracted features of the object Tg and the features of the object Tg appearing in the prediction template TP2. This allows the image processing device P1 according to the first embodiment and the first modification of the first embodiment to obtain the position of the object Tg appearing in the captured image by position fitting based on feature matching.
In the image processing device P1 according to the first embodiment, the first process performs 3D matching based on the object Tg detected from the captured image and the 3D model MD recorded in the 3D model database DB (an example of a database) to identify the posture of the object Tg, predicts the posture of the object Tg based on the information on the movement of the object Tg acquired by the second process executed multiple times, and generates the prediction template TP2 of the object Tg. This allows the image processing device P1 according to the first embodiment to obtain the posture of the object Tg appearing in the captured image.
The image processing device P1 according to the first embodiment also acquires the first imaging position of the camera CM relative to the 3D model MD based on the predicted posture of the object Tg, generates the prediction template TP2 corresponding to the predicted posture based on the 3D model MD and the predicted posture of the object Tg, and outputs the 3D model MD, the first imaging position relative to the 3D model MD, and the prediction template TP2 to the display 13 in association with each other. This allows the image processing device P1 according to the first embodiment to visualize to the user that the prediction template TP2 is a template (2D image) obtained when the 3D model MD of the object Tg is imaged from the first imaging position. Based on the prediction template TP2 and the first imaging position, the user can visually confirm whether the 3D model MD of the object Tg recognized by the image processing device P1 is the correct 3D model MD.
The image processing device P1 according to the first embodiment also detects the object Tg from the captured image, identifies the detected posture of the detected object Tg, acquires the second imaging position of the camera CM relative to the 3D model MD based on the detected posture, generates the detection template TP3 corresponding to the detected posture based on the 3D model MD and the detected posture, and outputs the 3D model MD, the second imaging position relative to the 3D model MD, and the detection template TP3 to the display 13 in association with each other. This allows the image processing device P1 according to the first embodiment to visualize to the user that the detection template TP3 is a template (2D image) obtained when the 3D model MD of the object Tg is imaged from the second imaging position. Based on the detection template TP3 and the second imaging position, the user can visually confirm whether the 3D model MD of the object Tg recognized by the image processing device P1 is the correct 3D model MD.
The image processing device P1 according to the first embodiment also acquires designation information designating the 3D model MD, and generates the prediction template TP2 of the object Tg based on the 3D model MD corresponding to the designation information. By generating a template whose background, other than the object Tg, is clean, that is, contains little noise, the image processing device P1 according to the first embodiment enables more accurate position identification in the feature matching of the second process. In addition, by generating a prediction template TP2 that corrects for the changes in position or posture of the object Tg occurring while the first process is being performed, the image processing device P1 can track changes in the posture or position of the object Tg captured by the camera CM in real time.
The image processing device P1 according to the second modification of the first embodiment also acquires designation information designating the template TP1, extracts the features of the object Tg from the captured image, and performs feature matching based on the extracted features of the object Tg and the features of the object Tg appearing in the template TP1 corresponding to the designation information. This allows the image processing device P1 according to the second modification of the first embodiment to obtain the position of the object Tg appearing in the captured image by position fitting based on feature matching.
Furthermore, in the image processing device P1 according to the first embodiment and the first and second modifications of the first embodiment, the second process is executed at a frame rate different from that of the first process on the images captured by the camera CM. By combining advanced low-speed image processing (the first process) with simple high-speed image processing (the second process), the image processing device P1 according to the first embodiment and the first and second modifications of the first embodiment can track changes in the posture or position of the object Tg captured by the camera CM in real time.
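Architecturally, this combination can be organized as two loops running concurrently at different rates and exchanging the captured image and the template through small queues. The thread-based sketch below only illustrates that structure; the rates, queue sizes, and placeholder processing are assumptions.

```python
# Structural sketch: the fast second process runs every frame while the slow first
# process runs concurrently and feeds templates back. Names and timing are assumed.
import threading
import queue
import time

images_to_first = queue.Queue(maxsize=1)
templates_to_second = queue.Queue(maxsize=1)

def first_process_loop(stop):                      # slow, accurate (e.g. ~10 Hz)
    while not stop.is_set():
        image = images_to_first.get()
        time.sleep(0.1)                            # stands in for detection + 3D matching
        template = image                           # placeholder for the prediction template
        if templates_to_second.empty():
            templates_to_second.put(template)      # feedback to the second process

def second_process_loop(stop, frames):             # fast (e.g. ~1000 Hz)
    template = None
    for image in frames:
        if stop.is_set():
            break
        if not templates_to_second.empty():
            template = templates_to_second.get()   # adopt the latest fed-back template
        if images_to_first.empty():
            images_to_first.put(image)             # give the first process a fresh frame
        if template is not None:
            pass                                   # feature matching + position prediction
        time.sleep(0.001)

stop = threading.Event()
frames = [f"frame{i}" for i in range(50)]          # dummy frame stream
threading.Thread(target=first_process_loop, args=(stop,), daemon=True).start()
second_process_loop(stop, frames)
stop.set()
```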
The image processing device P1 according to the first embodiment and the first modification of the first embodiment also includes a communication unit 10 (an example of an acquisition unit) that acquires a captured image of the object Tg captured by a camera CM that is movable and capable of capturing an image of the object Tg, a first processing unit 110 that detects the position or posture of the object Tg from the captured image and generates a template of the object Tg, and a second processing unit 120 that performs template matching based on the captured image and the template of the object Tg multiple times while the first processing unit 110 is generating the template of the object Tg, and acquires information on the movement of the object Tg. The first processing unit 110 predicts the posture of the object Tg based on the detected position or posture of the object Tg and the information on the movement of the object Tg acquired multiple times by the second processing unit 120, and generates a template of the object Tg corresponding to the predicted posture of the object Tg.
As a result, because the image processing device P1 according to the first embodiment and the first modification of the first embodiment generates the prediction template TP2 in the first process, it can track the changes in posture of the object Tg that occur during the first process based on the feature matching results and the position information of the object Tg output from the second processing unit 120, and can generate a template of the object Tg (prediction template TP2) that is closer to the actual posture of the object Tg.
Although various embodiments have been described above with reference to the attached drawings, the present disclosure is not limited to these examples. It is clear that a person skilled in the art can conceive of various changes, modifications, substitutions, additions, deletions, and equivalents within the scope of the claims, and it is understood that these also fall within the technical scope of the present disclosure. Furthermore, the components in the various embodiments described above may be combined in any manner without departing from the spirit of the invention.
This application is based on a Japanese patent application (Patent Application No. 2023-073618) filed on April 27, 2023, the contents of which are incorporated herein by reference.
The present disclosure is useful as an image processing method and an image processing device that register a highly accurate template of an object usable for template matching even in a situation where the posture of the object as seen from the imaging device changes as the imaging device moves.
Reference Signs List
10 Communication unit
11 Processor
12 Memory
13 Display
14 Operation device
110 First processing unit
120 Second processing unit
AC Actuator
CM Camera
DB 3D model database
MD 3D model
P1 Image processing device
Tg Object
Claims (9)
1. An image processing method performed by an image processing device capable of communicating with a camera that is movable and capable of capturing an image of an object, the method comprising: executing a first process of acquiring a captured image of the object, detecting a position or a posture of the object from the captured image, and generating a template of the object; and executing, a plurality of times while the first process is being executed, a second process of performing template matching based on the captured image and the template of the object and acquiring information on a movement of the object, wherein the first process predicts a posture of the object based on the detected position or posture of the object and the information on the movement of the object acquired a plurality of times in the second process, and generates a template of the object corresponding to the predicted posture of the object.
2. The image processing method according to claim 1, further comprising: extracting a feature amount of the object from the captured image; and performing the template matching based on the extracted feature amount of the object and a feature amount of the object appearing in the template.
3. The image processing method according to claim 1, wherein the first process performs 3D matching between the object detected from the captured image and a 3D model recorded in a database to identify the posture of the object, predicts the posture of the object based on the information on the movement of the object acquired in the second process executed a plurality of times, and generates the template of the object.
4. The image processing method according to claim 3, further comprising: acquiring a first imaging position of the camera relative to the 3D model based on the predicted posture of the object; generating a prediction template corresponding to the predicted posture based on the 3D model and the predicted posture of the object; and outputting the 3D model, the first imaging position relative to the 3D model, and the prediction template to a display in association with each other.
5. The image processing method according to claim 3, further comprising: detecting the object from the captured image and identifying a detected posture of the detected object; acquiring a second imaging position of the camera relative to the 3D model based on the detected posture; generating a detection template corresponding to the detected posture based on the 3D model and the detected posture; and outputting the 3D model, the second imaging position relative to the 3D model, and the detection template to a display in association with each other.
6. The image processing method according to claim 3, further comprising: acquiring designation information designating the 3D model; and generating the template of the object based on the 3D model corresponding to the designation information.
7. The image processing method according to claim 1, further comprising: acquiring designation information designating the template; extracting a feature amount of the object from the captured image; and performing the template matching based on the extracted feature amount of the object and a feature amount of the object appearing in the template corresponding to the designation information.
8. The image processing method according to claim 1, wherein the second process is executed on the captured image captured by the camera at a frame rate different from that of the first process.
9. An image processing device comprising: an acquisition unit that acquires a captured image of an object captured by a camera that is movable and capable of capturing an image of the object; a first processing unit that detects a position or a posture of the object from the captured image and generates a template of the object; and a second processing unit that performs template matching based on the captured image and the template of the object a plurality of times while the first processing unit is generating the template of the object, and acquires information on a movement of the object, wherein the first processing unit predicts a posture of the object based on the detected position or posture of the object and the information on the movement of the object acquired a plurality of times by the second processing unit, and generates a template of the object corresponding to the predicted posture of the object.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
JP2023-073618 | 2023-04-27 | |
JP2023073618A (JP2024158428A) | 2023-04-27 | 2023-04-27 | Image processing method and image processing device
Publications (1)
Publication Number | Publication Date
---|---
WO2024224751A1 (en) | 2024-10-31
Family
ID=93255889
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
PCT/JP2024/004653 (WO2024224751A1) | 2023-04-27 | 2024-02-09 | Image processing method and image processing device
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024224751A1 (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002208009A (en) * | 2000-11-09 | 2002-07-26 | Yaskawa Electric Corp | Object detection method |
JP2013036988A (en) * | 2011-07-08 | 2013-02-21 | Canon Inc | Information processing apparatus and information processing method |
WO2016175150A1 (en) * | 2015-04-28 | 2016-11-03 | オムロン株式会社 | Template creation device and template creation method |
JP2019185239A (en) * | 2018-04-05 | 2019-10-24 | オムロン株式会社 | Object recognition processor and method, and object picking device and method |