
US20160300356A1 - Measurement device that measures shape of object to be measured, measurement method, system, and article production method - Google Patents

Measurement device that measures shape of object to be measured, measurement method, system, and article production method

Info

Publication number
US20160300356A1
US20160300356A1 (US 2016/0300356 A1); Application US 15/091,374 (US201615091374A)
Authority
US
United States
Prior art keywords
luminance
measured
measurement device
image
lines
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/091,374
Inventor
Tsuyoshi Kitamura
Takumi Tokimitsu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Assigned to CANON KABUSHIKI KAISHA (assignment of assignors interest; see document for details). Assignors: KITAMURA, TSUYOSHI; TOKIMITSU, TAKUMI
Publication of US20160300356A1
Legal status: Abandoned

Classifications

    • G06T7/0057
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/254Projection of a pattern, viewing through a pattern, e.g. moiré
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1687Assembly, peg and hole, palletising, straight line, weaving pattern movement
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2513Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with several lines being projected in more than one direction, e.g. grids, patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0014Image feed-back for automatic industrial control, e.g. robot with camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G06T7/602
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145Illumination specially adapted for pattern recognition, e.g. using gratings
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/45Nc applications
    • G05B2219/45063Pick and place manipulator
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/12Acquisition of 3D measurements of objects
    • G06V2201/121Acquisition of 3D measurements of objects using special illumination

Definitions

  • As illustrated in FIG. 6, a measurement error also occurs in the edge position E near the dot. Accordingly, information on the shape of the object to be measured can also be obtained while excluding the edge positions near the dots from the detection points.
  • In this case, the pattern detection unit 43 uses the acquired image to obtain detection points by calculating the peak positions P or the negative peak positions NP, together with the edge positions E, for each position in the Y direction from the luminosity distribution (evaluation sections) in the X direction. The positions of the lines of the pattern light are then detected from the detection points.
  • The edge position is not limited to the extremal value of the luminance gradient, but may be a position that is determined from an evaluation value that is an evaluation of the luminance gradient.
  • The edge position may also be obtained by calculating a position that is a median value between the maximum and minimum values of the luminance, or by calculating the intermediate point between the peak position P and the negative peak position NP. In other words, the intermediate position between the peak position P and the negative peak position NP may be detected.
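  • The alternative edge definitions listed above can be compared with a small sketch such as the one below, applied to a single rising transition of an evaluation section; the linear interpolation of the median-level crossing is an assumed detail, and the profile is assumed to start below the median level.

```python
import numpy as np

def edge_candidates(profile):
    """Three alternative edge definitions for one rising transition of a
    1-D luminance profile (assumed to start below the median level)."""
    y = np.asarray(profile, dtype=float)

    # (a) extremum of the luminance gradient
    e_gradient = int(np.argmax(np.gradient(y)))

    # (b) crossing of the level halfway between the minimum and maximum luminance
    level = 0.5 * (y.max() + y.min())
    k = int(np.nonzero(y >= level)[0][0])
    e_median = k - 1 + (level - y[k - 1]) / (y[k] - y[k - 1])

    # (c) midpoint between the peak position P and the negative peak position NP
    e_midpoint = 0.5 * (int(np.argmax(y)) + int(np.argmin(y)))

    return e_gradient, e_median, e_midpoint

# Example: a blurred step from dark (10) to bright (200)
print(edge_candidates([10, 10, 12, 40, 120, 185, 198, 200, 200]))
```
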
  • The pattern detection unit 43 specifies the edge positions that are to be excluded from the detection points among the plurality of detection points obtained from the luminosity distribution of the evaluation sections. Then, using the detection points other than the excluded edge positions, that is, the remaining edge positions together with the negative peak positions or the peak positions, the calculation unit 44 calculates the range information and obtains information on the shape of the object to be measured.
  • The edge positions may be employed as the detection points when the range information is calculated.
  • The following method may be considered for determining whether the edge positions are excluded from the detection points: edge positions that are determined to be unsuitable detection points are excluded, so that the detection points in which measurement errors occur are removed and, as a result, the measurement accuracy can be increased.
  • In the above exemplary embodiments, the duty ratio between each bright line and each dark line of the dot line pattern PT is 1:1; however, the duty ratio does not necessarily have to be 1:1. Nevertheless, a ratio of 1:1 is favorable for detecting the edge positions, as FIGS. 8 and 9 illustrate.
  • FIG. 8 relates to a pattern light having a duty ratio of 1:4. The axis of abscissas is the position in the X direction orthogonal to the direction in which each line extends, and the axis of ordinates is the luminance.
  • The luminosity distribution in a case in which image pickup is performed with the image pickup element in the best focused state (optimal focus) and the luminosity distribution in a case in which image pickup is performed in a defocused state (out of focus) shifted by 40 mm from the best focused position are illustrated.
  • The edge position (a white hollow triangle) detected from the luminosity distribution of the image picked up with optimal focus and the edge position (a black triangle) detected from the luminosity distribution of the image picked up out of focus are displaced with respect to each other.
  • The displacement amount of the edge position is 267 μm. Accordingly, it can be understood that with the pattern light having a duty ratio of 1:4, defocus displaces the edge and thereby causes an error in the calculated distance.
  • FIG. 9 relates to the pattern in which the duty ratio between each bright line and each dark line of the dot line pattern PT is 1:1 and illustrates the luminosity distribution obtained under the same conditions as those of FIG. 8.
  • In FIG. 9, it can be seen that no displacement occurs between the edge position (a white hollow triangle) detected from the luminosity distribution of the image picked up with optimal focus and the edge position (the white hollow triangle) detected from the luminosity distribution of the image picked up out of focus.
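  • The behaviour shown in FIGS. 8 and 9 can be reproduced qualitatively with a one-dimensional simulation such as the sketch below: a periodic line profile with a given duty ratio is blurred with Gaussian kernels of two different widths (standing in for the focused and the defocused state), and the gradient-extremum edge is located in each case. With a 1:4 duty ratio the edge drifts as the blur grows, whereas with 1:1 it stays in place by symmetry. The pattern period and kernel widths are assumed values and do not model the 40 mm defocus of the actual device.

```python
import numpy as np

def blurred_edge_position(duty_bright, period_px=50, sigma_px=4.0):
    """Locate the gradient-extremum (falling) edge of one bright-to-dark
    transition after Gaussian blurring.  duty_bright is the bright fraction
    of the period, e.g. 0.2 for a 1:4 duty ratio and 0.5 for 1:1."""
    x = np.arange(4 * period_px)
    profile = ((x % period_px) < duty_bright * period_px).astype(float)
    k = np.arange(-3 * sigma_px, 3 * sigma_px + 1)
    kernel = np.exp(-0.5 * (k / sigma_px) ** 2)
    blurred = np.convolve(profile, kernel / kernel.sum(), mode="same")
    grad = np.gradient(blurred)
    window = slice(period_px, 2 * period_px)      # falling edge in the 2nd period
    return period_px + int(np.argmin(grad[window]))

for duty, label in ((0.2, "1:4"), (0.5, "1:1")):
    shift = blurred_edge_position(duty, sigma_px=8.0) - blurred_edge_position(duty, sigma_px=1.0)
    print(f"duty ratio {label}: edge shift under defocus = {shift} px")
```
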
  • The pattern that is generated by the pattern generation unit 22 and projected on the object 5 to be measured is not limited to a dot line pattern.
  • The pattern may be any pattern that includes a plurality of lines, such as a tone pattern or a multicolor pattern.
  • The lines may be straight lines or curved lines.
  • The distinguishing portion does not have to be a dot and may be any mark that allows the lines to be distinguished from each other, such as a round-shaped portion or a portion with a narrowed width.
  • The areas that the dots occupy may be larger than the areas that the bright portions occupy.
  • The measurement device 1 may be used while being supported by a support member.
  • As an example, a control system in which the measurement device 1 is used while attached to a robot arm 300 (a holding device), as illustrated in FIG. 10, will be described.
  • The measurement device 1 performs image pickup by projecting a pattern light onto an object 210 to be inspected placed on a support 350 and acquires an image.
  • The measurement device 1 includes a control unit 310 including the processing unit 4 described above.
  • The processing unit 4 obtains information on the shape of the object 210 to be inspected from the acquired image.
  • The control unit 310 of the measurement device 1, or an external control unit connected to the control unit 310, obtains the position and orientation of the object to be inspected, thereby acquiring information on the position and orientation. Based on this information, the control unit 310 transmits a driving command to the robot arm 300 and controls the robot arm 300.
  • The robot arm 300 holds the object 210 to be inspected with a robot hand (a holding portion) at its distal end and moves and rotates the object 210 to be inspected. Furthermore, by installing the object 210 to be inspected onto another component with the robot arm 300, an article that includes a plurality of components, such as an electronic circuit board or a machine, can be manufactured.
  • The control unit 310 includes an arithmetic unit, such as a CPU, and a storage device, such as a memory. Furthermore, the measurement data and the image obtained with the measurement device 1 may be displayed on a display unit 320, such as a display.
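  • The control flow described above (measure, estimate the position and orientation, command the robot arm) might be glued together as in the sketch below. The class and method names are invented for illustration; the patent text does not define a software interface for the control unit 310 or the robot arm 300.

```python
# Hypothetical glue code for the control system of FIG. 10.  MeasurementDevice,
# PoseEstimator and RobotArm are assumed interfaces, not APIs from the patent.

class PickAndPlaceController:
    """Drives one measure-and-handle cycle for the object 210 to be inspected."""

    def __init__(self, measurement_device, pose_estimator, robot_arm):
        self.device = measurement_device   # projects the pattern and captures the image
        self.estimator = pose_estimator    # shape information -> position and orientation
        self.arm = robot_arm               # holding device with a robot hand

    def handle_object(self, place_pose):
        shape = self.device.measure_shape()                     # pattern projection measurement
        position, orientation = self.estimator.estimate(shape)  # pose of the object
        self.arm.move_to(position, orientation)                 # driving command from the pose
        self.arm.grasp()                                        # hold with the robot hand
        self.arm.move_to(*place_pose)                           # move / rotate the object
        self.arm.release()                                      # e.g. install onto another component
```
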
  • Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
  • The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
  • The computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
  • The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)
  • Image Processing (AREA)

Abstract

A measurement device measures a shape of an object and includes a processing unit that obtains information on the shape of the object based on an image obtained by imaging the object onto which a pattern light has been projected, the pattern light including a plurality of lines provided with a distinguishing portion that distinguishes the lines from each other. In the measurement device, the processing unit acquires, in a luminosity distribution of the image taken in a direction intersecting the lines, a plurality of positions including the positions at which the luminance is the largest and the smallest, specifies, on the basis of the position of the distinguishing portion, a position to be excluded from among those positions, and obtains the information on the shape of the object based on the positions other than the specified position.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present disclosure relates to a measurement device that measures a shape of an object to be measured, a measurement method, a system, and an article production method.
  • 2. Description of the Related Art
  • As a technique to measure a shape of an object to be measured, an optical measurement device is known. There are various methods used by the optical measurement device, and one of the methods is referred to as a pattern projection method. In the pattern projection method, the shape of the object to be measured is obtained by projecting a predetermined pattern onto the object to be measured and picking up an image thereof, detecting the pattern in the taken image, and calculating range information at each pixel position using the principle of triangulation. Various patterns are used in the pattern projection method; a representative example is a pattern (a dot line pattern) in which disconnecting dots (dots) are disposed on a pattern of alternating bright lines and dark lines (see Japanese Patent No. 2517062). Information on the coordinates of the detected dots provides indexes that indicate to which line on the pattern of the mask, which is the pattern generation unit, each of the projected lines corresponds, such that the projected lines can be distinguished from each other. As described above, the dots serve as distinguishing portions that distinguish the lines from each other.
  • Random noise in the taken image is one of the factors that decrease the measurement accuracy of the pattern projection method. In detecting the pattern in the taken image, typically, the coordinates of the pattern are specified by detecting the peak at which the luminance value of the image of the pattern is the largest. In the Meeting on Image Recognition and Understanding (MIRU 2009), pp. 222, in addition to such a peak, a negative peak at which the luminance value of the image of the pattern is the smallest is also detected, which increases the density (the number of detection points per unit area) of the detection points. By increasing the detection points when detecting the pattern in the taken image, the S/N ratio is improved and the influence of the random noise of the taken image can be reduced. In the Meeting on Image Recognition and Understanding (MIRU 2009), pp. 222, however, measurement is performed by projecting a grid pattern, and no dot line pattern is disclosed. It has been found that, in the pattern projection method using a dot line pattern, when the negative peak is detected as in the Meeting on Image Recognition and Understanding (MIRU 2009), pp. 222, an error occurs in the detected position of the negative peak in the area around the dot (the distinguishing portion). In other words, a positional error may occur at a detection point near the dot (the distinguishing portion).
  • SUMMARY OF THE INVENTION
  • A measurement device that is an aspect of the present disclosure and that overcomes the above problem is a measurement device that measures a shape of an object to be measured, including a processing unit that obtains information on the shape of the object to be measured on a basis of an image obtained by imaging the object to be measured onto which a pattern light has been projected, the pattern light including a plurality of lines provided with a distinguishing portion that distinguishes the lines from each other. In the measurement device, the processing unit acquires, in a luminosity distribution of the image in a direction intersecting the lines, a plurality of positions including a position at which the luminance is largest and a position at which the luminance is smallest, specifies, on a basis of the position of the distinguishing portion, a position to be excluded from among the positions at which the luminance is the largest and the positions at which the luminance is the smallest, and obtains the information on the shape of the object to be measured based on the plurality of positions except for the position that has been specified.
  • Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic diagram illustrating a configuration of a measurement device that is an aspect of the present disclosure.
  • FIG. 2 is a diagram illustrating an example of a dot line pattern projected on an object to be measured.
  • FIG. 3 is a diagram illustrating an image of an area around a dot.
  • FIG. 4 is a diagram illustrating luminosity distribution of evaluation sections of an image.
  • FIG. 5 is a diagram illustrating the luminosity distribution in which the portion around the dot has been enlarged.
  • FIG. 6 is a diagram illustrating a relationship between a distance from the dot and a measurement error.
  • FIG. 7 is a diagram illustrating a flow of the measurement.
  • FIG. 8 is a diagram illustrating luminosity distribution of an image in a case in which the duty ratio of the pattern light is 1:4.
  • FIG. 9 is a diagram illustrating luminosity distribution of an image in a case in which the duty ratio of the pattern light is 1:1.
  • FIG. 10 illustrates a diagram of a control system including a measurement device and a robot arm.
  • DESCRIPTION OF THE EMBODIMENTS
  • Hereinafter, preferred embodiments of the present disclosure will be described with reference to the accompanying drawings. Note that in each drawing, the same reference numerals are attached to the same members and redundant description thereof will be omitted.
  • First Exemplary Embodiment
  • FIG. 1 is a schematic diagram illustrating a configuration of a measurement device 1 that is an aspect of the present disclosure. The measurement device 1 measures the shape (a three-dimensional shape, a two-dimensional shape, and a position and orientation, for example) of an object 5 to be measured by using a pattern projection method. As illustrated in FIG. 1, the measurement device 1 includes a projection unit 2, an image pickup unit 3, and a processing unit 4.
  • The projection unit 2 includes, for example, a light source unit 21, a pattern generation unit 22, and an optical projection system 23, and projects a predetermined pattern onto the object 5 to be measured. The light source unit 21 performs, for example, Koehler illumination such that light radiated from a light source illuminates the pattern generation unit 22 uniformly. The pattern generation unit 22 creates a pattern light that is projected onto the object 5 to be measured and, in the present exemplary embodiment, is a mask on which a pattern is formed by performing chrome etching on a glass substrate. Note that the pattern generation unit 22 may instead be a digital light processing (DLP) projector, a liquid crystal projector, or a digital micromirror device (DMD), which is capable of generating any pattern. The optical projection system 23 is an optical system that projects the pattern light generated by the pattern generation unit 22 onto the object 5 to be measured.
  • FIG. 2 is a diagram illustrating a dot line pattern PT that is an example of a pattern that is generated by the pattern generation unit 22 and projected on the object 5 to be measured. As illustrated in FIG. 2, the dot line pattern PT is a periodical pattern alternately including a bright line BP, in which bright portions (white) and dots DT (dark portions, black) are formed continuously in one direction, and a dark line DP (black), which extends in the same direction. The dots DT are each provided on a bright line BP, between the bright portions, so as to disconnect the bright portions from each other in the direction in which the bright portions extend. The dots are distinguishing portions that distinguish the bright lines from each other. Since the positions of the dots on each bright line are different, information on the coordinates (positions) of the detected dots provides indexes that indicate to which line on the pattern generation unit 22 each projected bright line corresponds, thus enabling the projected bright lines to be distinguished from each other. The ratio (hereinafter referred to as a “duty ratio”) between the width (line width) LW_BP of each bright line BP of the dot line pattern PT and the width LW_DP of each dark line DP is assumed to be 1:1; a concrete layout is sketched below.
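  • As a concrete illustration of the layout described in the preceding paragraph, the following sketch renders a small dot line pattern as a binary image. It is a schematic reconstruction only, not the actual mask data of the pattern generation unit 22; the line width, dot length, and per-line dot offsets are assumed values chosen for illustration.

```python
import numpy as np

def make_dot_line_pattern(height=200, width=200, line_width=10,
                          dot_length=4, dot_offsets=(30, 80, 130)):
    """Schematic dot line pattern: alternating bright and dark vertical lines
    (duty ratio 1:1) with dark dots interrupting each bright line.  The dot
    positions are shifted per line so that they can serve as line indexes."""
    img = np.zeros((height, width), dtype=np.uint8)
    period = 2 * line_width                       # one bright + one dark line
    for i, x0 in enumerate(range(0, width, period)):
        img[:, x0:x0 + line_width] = 255          # bright line BP
        for y0 in dot_offsets:                    # dots DT on this bright line
            y = (y0 + 17 * i) % height            # assumed per-line shift
            img[y - dot_length // 2:y + dot_length // 2, x0:x0 + line_width] = 0
    return img

pattern = make_dot_line_pattern()                 # e.g. inspect with matplotlib
```
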
  • The image pickup unit 3 includes, for example, an image-pickup optical system 31 and an image pickup element 32, and obtains an image by imaging the object 5 to be measured. In the present exemplary embodiment, the image pickup unit 3 performs image pickup of the object 5 to be measured on which the dot line pattern PT has been projected to acquire a so-called range image, that is, an image that includes the portion corresponding to the dot line pattern PT. The image-pickup optical system 31 is an image-forming optical system that forms an image of the dot line pattern PT projected on the object 5 to be measured on the image pickup element 32. The image pickup element 32 is an image sensor including a plurality of pixels that performs image pickup of the object 5 to be measured on which the pattern has been projected, and is, for example, a CMOS sensor or a CCD sensor.
  • Based on the image acquired with the image pickup unit 3, the processing unit 4 obtains the shape of the object 5 to be measured. The processing unit 4 includes a control unit 41, a memory 42, a pattern detection unit 43, and a calculation unit 44, and is constituted by a processor such as a CPU, a RAM, a controller chip, and the like. The control unit 41 controls the operations of the projection unit 2 and the image pickup unit 3; specifically, the control unit 41 controls the projection of the pattern onto the object 5 to be measured and the image pickup of the object 5 to be measured on which the pattern has been projected. The memory 42 stores the image acquired by the image pickup unit 3. Using the image stored in the memory 42, the pattern detection unit 43 detects the peaks, the edges, and the dots (the positions subject to detection) of the pattern light in the image to obtain the coordinates of the pattern, in other words, the position of the pattern light in the image. Using the information of the detected positions (coordinates) and the line indexes distinguished by means of the dots, the calculation unit 44 calculates the range information (three-dimensional information) of the object 5 to be measured at each pixel position of the image pickup element 32 using the principle of triangulation.
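  • The triangulation at the end of this processing chain is standard: once a detection point has been matched, via the line index derived from the dots, to a specific line of the pattern generation unit 22, the distance follows from the geometry between the projection unit 2 and the image pickup unit 3. The sketch below assumes a simplified, rectified projector-camera geometry; the baseline and focal length are invented example values, not parameters of the actual device.

```python
# Minimal triangulation sketch for a rectified projector-camera pair.
# BASELINE_MM and FOCAL_PX are assumed example values.
BASELINE_MM = 100.0   # separation between projection and imaging pupils
FOCAL_PX = 2000.0     # camera focal length expressed in pixels

def range_from_disparity(x_camera_px, x_projector_px):
    """Distance along the optical axis from the disparity between the detected
    line position in the camera image and the matched line position of the
    projected pattern, both expressed in camera pixel coordinates."""
    disparity = x_camera_px - x_projector_px
    if disparity == 0:
        raise ValueError("zero disparity corresponds to a point at infinity")
    return BASELINE_MM * FOCAL_PX / disparity

# Example: a peak detected at pixel 812.4, matched to a pattern line at 792.1,
# gives roughly 9852 mm in this toy geometry.
z_mm = range_from_disparity(812.4, 792.1)
```
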
  • Hereinafter, pattern detection with the pattern detection unit 43 will be described in detail. The pattern detection unit 43 detects the image of the dot line pattern PT included in the range image and specifies the position of the dot line pattern PT in the range image. Specifically, the pattern detection unit 43 specifies the positions of the lines of the dot line pattern PT in the range image from the optical image information, in other words, from the luminosity distribution (the light intensity distribution), of evaluation sections each extending in a direction intersecting the lines of the dot line pattern PT, for example, a direction orthogonal to the lines.
  • FIG. 3 illustrates an image around a position DT′ that corresponds to a center position of a dot DT when the dot line pattern PT is projected onto a reference plane. Here, the image is calculated from a simulation. The axis of abscissas x and the axis of ordinates y of the image correspond to positions on the image pickup surface of the image pickup element 32. Referring to FIGS. 3 and 4, the positions subject to detection (detection points) will be described. The coordinates of a detection point are calculated from the optical image information (the luminosity distribution) of an evaluation section extending in, for example, an X direction that is orthogonal to a Y direction in which the lines of the dot line pattern PT extend.
  • FIG. 4 illustrates an optical image (the luminosity distribution) of an evaluation section A that extends in the X direction and does not pass through the position corresponding to the dot DT, in other words, that is not affected by the dot DT; an optical image of an evaluation section B that extends in the X direction in the vicinity of the position DT′ that corresponds to the center position of the dot DT; and an optical image of an evaluation section C that extends in the X direction and passes through the position DT′. The axis of abscissas in FIG. 4 is the pixel position on the image pickup element 32, and the axis of ordinates is the luminance. In FIG. 4, in each of the optical images, a peak position P at which the luminance value becomes the largest (at its maximum) in the portion around the zero point is indicated with a circle, an edge position E is indicated with a triangle, and a negative peak position NP at which the luminance value becomes the smallest (at its minimum) is indicated with a square. The peak position and the negative peak position can be obtained by calculating the extremal values of the luminosity distribution, and the edge position can be obtained by calculating the extremal value of the luminance gradient obtained by first order differentiation of the luminosity distribution. Regarding the edge position, although there are two edges, one at which the luminance gradient is at its maximum and one at which it is at its minimum, FIG. 4 illustrates the edge position at which the luminance gradient is at its maximum. Furthermore, the edge position is not limited to the extremal value of the luminance gradient and may be a position that is determined from an evaluation value (an extremal value or a reference value) that is an evaluation of the luminance gradient. The edge position may also be obtained by calculating a position that is a median value between the maximum and minimum values of the luminance, or by calculating the intermediate point between the peak position P and the negative peak position NP. In other words, other than the peak position P and the negative peak position NP, the intermediate position between the peak position P and the negative peak position NP may be detected. A sketch of these detection steps follows.
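  • A minimal sketch of the detection just described, for one evaluation section (one row of the image crossing the lines), is given below: local maxima of the luminance give peak positions P, local minima give negative peak positions NP, and extrema of the first-order difference give edge positions E. The parabolic sub-pixel refinement is an assumed implementation detail, not something prescribed by the above description.

```python
import numpy as np

def _refine(values, i):
    """Parabolic sub-pixel refinement around an integer extremum index i."""
    if i <= 0 or i >= len(values) - 1:
        return float(i)
    denom = values[i - 1] - 2.0 * values[i] + values[i + 1]
    return float(i) if denom == 0 else i + 0.5 * (values[i - 1] - values[i + 1]) / denom

def detect_points(profile):
    """Peak positions P, negative peak positions NP and edge positions E for
    one evaluation section given as a 1-D luminance profile."""
    y = np.asarray(profile, dtype=float)
    g = np.abs(np.gradient(y))                 # magnitude of the luminance gradient
    peaks, npeaks, edges = [], [], []
    for i in range(1, len(y) - 1):
        if y[i] >= y[i - 1] and y[i] > y[i + 1]:
            peaks.append(_refine(y, i))        # local maximum of luminance -> P
        if y[i] <= y[i - 1] and y[i] < y[i + 1]:
            npeaks.append(_refine(y, i))       # local minimum of luminance -> NP
        if g[i] >= g[i - 1] and g[i] > g[i + 1]:
            edges.append(_refine(g, i))        # extremum of the gradient -> E
    return peaks, npeaks, edges
```
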
  • FIG. 5 is an enlarged view of the portion near the negative peak positions illustrated in FIG. 4. It can be seen in FIG. 5 that the negative peak positions of the evaluation sections A, B, and C are displaced from each other. The displacement of the negative peak positions of the evaluation sections A, B, and C causes an error in calculating the range information of the object 5 to be measured. Specifically, when establishing correspondence with the pattern on the pattern generation unit 22, since the negative peak position is correlated with the position of a dark line on the pattern generation unit 22, the displacement of the negative peak positions represents a displacement of the position of the dark line generated by the pattern generation unit 22. Owing to the displacement of the position of the dark line, each piece of range information differs. However, in the evaluation sections A, B, and C, since the distances to the reference plane are the same, the difference leads to a measurement error. If the distances to the object 5 to be measured are different, the positions of the lines of the dot line pattern PT are displaced. Since the displacement of the line positions due to the displacement of the negative peak positions and the displacement of the line positions due to the difference in the distances to the object 5 to be measured are both calculated without any discrimination, a measurement error occurs.
  • A relationship between the measurement error and the distance from the dot in the Y direction, in which the lines of the dot line pattern extend, will be described next. FIG. 6 illustrates the relationship between the distance from the dot in the Y direction and the measurement error. The axis of abscissas in FIG. 6 represents the distance, in pixels (pix), from the position DT′ corresponding to the center position of the dot DT, measured in the Y direction in which the lines of the dot line pattern PT extend. The position at DT′, which corresponds to the center position of the dot DT, or the position closest to DT′, is represented by 0. The axis of ordinates in FIG. 6 represents the measurement error (displacement) of the calculated distance. In FIG. 6, the measurement error related to the peak position P at which the luminance value becomes the largest (at its maximum) is indicated with a circle, the measurement error related to the edge position E is indicated with a triangle, and the measurement error related to the negative peak position NP at which the luminance value becomes the smallest (at its minimum) is indicated with a square.
  • As for the peak position P, regardless of the distance from the dot DT, since there is no displacement of the detection point, there is almost no measurement error. As for the edge position E, a measurement error of 42 μm occurs at the position closest to the dot DT due to the displacement of the detection point caused by the dot DT. Note that the occurrence of the same amount of measurement error has been confirmed in the evaluation of the edge with the smallest luminance gradient as well. As for the negative peak position NP, a measurement error of 380 μm occurs at the position closest to the dot DT due to the displacement of the detection point caused by the dot DT.
  • When, in addition to the peak positions P, the negative peak positions NP are included as detection points of the pattern light detected by the pattern detection unit 43, the density of the detection points (the number of detection points per unit area) is doubled. Furthermore, when the two edge positions, namely, the positions at which the luminance gradient is at its maximum and the positions at which it is at its minimum, are also included, the density of the detection points is quadrupled. Accordingly, the data for calculating the distance increase with the density of the detection points, and the S/N ratio with respect to the random noise of the image pickup element 32 is improved, enabling measurement to be performed with higher accuracy.
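  • The gain described here is the usual averaging effect: if each detection point carries independent random noise, the uncertainty of a quantity estimated from N points scales roughly with 1/sqrt(N), so doubling or quadrupling the detection-point density reduces the noise contribution by factors of about 1.4 and 2. The short Monte Carlo check below illustrates this; the noise level is an arbitrary assumed value.

```python
import numpy as np

rng = np.random.default_rng(0)
noise_sigma = 0.05          # assumed per-detection-point noise (arbitrary units)

for n_points in (1, 2, 4):  # peaks only / plus negative peaks / plus both edge types
    samples = noise_sigma * rng.standard_normal((10000, n_points))
    print(f"{n_points} detection points -> std of averaged estimate "
          f"{samples.mean(axis=1).std():.4f}")
```
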
  • However, as described above, regarding the detection points around the dots, the measurement error at the negative peak positions NP is larger than the effect of the random noise of the image pickup element 32, which is on the order of tens of micrometers. Accordingly, depending on the dot density and the number of lines in the dot line pattern PT, there are cases in which the measurement accuracy improves when the negative peak positions NP are not employed as detection points.
  • Accordingly, in the present exemplary embodiment, information on the shape of the object to be measured is obtained while the negative peak positions NP near the dots are excluded from the detection points. A flow of the measurement is illustrated in FIG. 7. First, an image of the object to be measured on which the pattern light has been projected is picked up and the image is stored in the memory 42 (S100). Subsequently, the pattern detection unit 43 of the processing unit 4 acquires the image of the object to be measured stored in the memory 42 (S101). Then, the pattern detection unit 43 uses the acquired image to obtain the peak positions P and the negative peak positions NP as detection points for each position in the Y direction through calculation using the luminosity distribution (evaluation sections) in the X direction, and detects the positions of the lines of the pattern light (S102). At this point, whether to perform detection of the edge positions E (the positions between the peak positions P and the negative peak positions NP) is optional. Regarding the peak position P, there may be cases in which the portion nearest (closest) to the peak has a certain width in the luminosity distribution. In such a case, a position within that portion, or its center position, may be selected as the largest (maximum) position. The same applies to the negative peak position NP. Regarding the detection of the lines, for example, by smoothing the luminance values of the bright portions and the dots in each bright line by applying a Gaussian filter or the like to the image, even if there are portions in the bright line that are disconnected by the dots, each bright line can be detected as a single continuous line. Subsequently, the pattern detection unit 43 detects the positions of the dots in each line (S103). Specifically, the positions of the dots can be detected from the luminosity distribution of detection lines that are configured by connecting, in the Y direction, the peak positions P detected in the evaluation sections. For example, the position at which the luminosity distribution of the detection line has its minimum value may be obtained as the center position of the dot (dot detection processing). Subsequently, based on the positions of the dots, the pattern detection unit 43 specifies the negative peak positions NP that are to be excluded from the detection points (S104). Specifically, since the measurement errors at the negative peak positions NP in the evaluation section C passing through the dot DT and in the evaluation section B near (around) the dot DT are large, these negative peak positions NP are excluded from the detection points. In other words, the negative peak positions at or around the dot are excluded from the detection points. The positions that are excluded are positions that are affected by the displacement caused by the dot and, in the examples in FIGS. 3 and 4, are positions between the first bright line in which the dot is provided and the second bright lines that are next to the first bright line. Furthermore, the excluded negative peak positions NP may be specified based on the distance (the number of pixels) in the Y direction from the position DT′ that corresponds to the center position of the dot DT. As illustrated in FIG. 6, the negative peak positions NP whose distances (in pixels) from the dot DT are 0, 1, or 2 may be excluded from the detection points.
Subsequently, the calculation unit 44 obtains information on the shape of the object to be measured by calculating the range information on the basis of the peak positions P, the negative peak positions NP other than the excluded negative peak positions NP (such as those of the evaluation section A), and, when the edge positions E are detected, the edge positions E (S105).
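  • As a rough illustration of steps S101 to S105, the following Python/NumPy sketch traces a single bright line: it finds the peak and negative peak positions in each evaluation section, locates the dot centers along the detection line, and discards the negative peaks within a few pixels of a dot center. The function and parameter names (detect_points_excluding_dots, exclusion_radius_px), the simplification to one line, and the simple argmax/argmin and local-minimum estimates are assumptions made for illustration and are not the disclosed implementation of the processing unit 4.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def detect_points_excluding_dots(image, exclusion_radius_px=2):
        # S102: smooth so that the dark dots no longer disconnect the bright line
        smoothed = gaussian_filter(image.astype(float), sigma=1.0)

        peaks, neg_peaks = [], []
        for y, row in enumerate(smoothed):              # one evaluation section per Y position
            peaks.append((y, int(np.argmax(row))))      # peak position P (largest luminance)
            neg_peaks.append((y, int(np.argmin(row))))  # negative peak position NP (smallest luminance)

        # S103: luminosity along the detection line (peak positions connected in Y);
        # local minima of this profile are taken as the dot center rows
        line_profile = np.array([image[y, x] for y, x in peaks], dtype=float)
        dot_rows = [y for y in range(1, len(line_profile) - 1)
                    if line_profile[y] < line_profile[y - 1]
                    and line_profile[y] < line_profile[y + 1]]

        # S104: exclude the negative peaks whose Y distance from a dot center
        # is 0, 1, or 2 pixels (the evaluation sections B and C around the dot)
        kept_neg_peaks = [(y, x) for y, x in neg_peaks
                          if all(abs(y - yd) > exclusion_radius_px for yd in dot_rows)]

        # S105: the remaining detection points would then be handed to the
        # calculation unit 44 for the range (distance) calculation
        return peaks, kept_neg_peaks, dot_rows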
  • It has been described that, in the present exemplary embodiment, displacement occurs in the detection results of the edges and the negative peaks near the dots. Since the dot positions are specified by the dot detection described above, the detected negative peaks that are near the dot positions may be selected and excluded. For the negative peaks that are not near the dots, almost no displacement occurs in the detection result. Note that since the dots are shorter than the bright portions in the bright lines and the number of detection points in portions other than the vicinities of the dots is larger than the number of detection points in the vicinities of the dots, the advantageous effect obtained through the increase in the number of detection points can be sufficiently obtained even if the detection points in the vicinities of the dots are excluded.
  • As described above, in the present exemplary embodiment, the measurement accuracy is improved by the increase in the density of the detection points, while information on the shape of the object to be measured is obtained with a higher accuracy by not using, as detection points, the negative peak positions with relatively low measurement accuracy. Furthermore, with the increase in the density of the detection points, it becomes possible to measure a smaller object to be measured.
  • Second Exemplary Embodiment
  • Description of a second exemplary embodiment will be given next. In the present exemplary embodiment, the dot line pattern is different from that of the first exemplary embodiment. Note that description that overlaps the first exemplary embodiment will be omitted.
  • In the present exemplary embodiment, the dot line pattern is a periodic pattern alternately including dark lines, in which dark portions and dots (bright portions) continue in a single direction, and bright lines extending in the single direction. The dots are each provided on a dark line, between the dark portions, so as to disconnect the dark portions from each other in the direction in which the dark portions extend. The dots are distinguishing portions that distinguish the dark lines from each other. In other words, in the pattern of the present exemplary embodiment, the bright portions and the dark portions of the first exemplary embodiment are inverted with respect to each other.
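  • For concreteness, the following sketch generates an illustrative dot line pattern and its bright/dark-inverted counterpart described above. The dimensions used here (line pitch, dot length, dot period, and the per-line phase shift of the dots) are arbitrary example values, not values disclosed for the pattern generation unit 22.

    import numpy as np

    def dot_line_pattern(height=240, width=320, pitch=8, dot_len=2,
                         dot_period=40, inverted=False):
        # Bright lines extending in the Y direction at a fixed pitch in X (duty ratio 1:1),
        # each interrupted by short dark dots; inverted=True swaps bright and dark,
        # giving dark lines containing bright dots as in the second exemplary embodiment.
        pattern = np.zeros((height, width), dtype=np.uint8)
        for i, x in enumerate(range(0, width, pitch)):
            pattern[:, x:x + pitch // 2] = 255                   # bright line
            phase = (i * 7) % dot_period                         # shift the dots per line so the lines can be told apart
            for y in range(phase, height, dot_period):
                pattern[y:y + dot_len, x:x + pitch // 2] = 0     # dot: a short dark break in the bright line
        return 255 - pattern if inverted else pattern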
  • As illustrated in FIGS. 4 and 5 for the first exemplary embodiment, almost no displacement occurs at the peak position P, whereas displacements occur at the negative peak position NP. Accordingly, when the bright and dark portions are inverted with respect to each other as in the second exemplary embodiment, almost no displacement occurs at the negative peak position, whereas displacements occur at the peak position, at which the luminance is the largest (maximum) in the luminosity distribution.
  • Accordingly, in the present exemplary embodiment, based on the positions of the dots, the pattern detection unit 43 specifies the largest (maximum) peak positions that are to be excluded from the plurality of detection points obtained from the luminosity distribution of the evaluation sections. The positions that are excluded are positions that are affected by the displacement caused by the dot and are located on the dot or around the dot; for example, the peak positions located between the first dark line in which the dot is provided and the second dark lines that are next to the first dark line are excluded from the detection points. Subsequently, using the positions of the detection points (the negative peaks and the peaks) other than the peak positions that have been excluded, the calculation unit 44 calculates the range information and obtains information on the shape of the object to be measured.
  • As described above, in the pattern of the second exemplary embodiment as well, by calculating the distance while excluding the detection points in which the measurement errors occur, an advantageous effect that is similar to that of the first exemplary embodiment is obtained.
  • Third Exemplary Embodiment
  • Description of a third exemplary embodiment will be given next. Note that description that overlaps the first exemplary embodiment will be omitted.
  • While in the first exemplary embodiment an example in which the negative peak positions near the dots are excluded from the detection points has been described, in the present exemplary embodiment an example in which the edge positions near the dots are excluded from the detection points will be given.
  • As illustrated in FIG. 6, a measurement error occurs in the edge position E as well. Accordingly, information on the shape of the object to be measured can be obtained while excluding the edge positions near the dots from the detection points.
  • In the present exemplary embodiment, the pattern detection unit 43 uses the acquired image to obtain detection points by calculating, for each position in the Y direction, the peak positions P or the negative peak positions NP, together with the edge positions E, from the luminosity distribution (evaluation sections) in the X direction. Then, the positions of the lines of the pattern light are detected from the detection points. Note that, similarly to the first exemplary embodiment, the edge position is not limited to the position of the extremal value of the luminance gradient and may be a position determined from an evaluation value based on the luminance gradient. Furthermore, the edge position may be obtained by calculating the position corresponding to the median value between the maximum and the minimum luminance, or by calculating the intermediate point between the peak position P and the negative peak position NP. In other words, the intermediate position between the peak position P and the negative peak position NP may be detected.
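  • As a rough illustration of these alternatives, the following sketch computes, for one evaluation section (a one-dimensional luminosity distribution in the X direction), an edge estimate from the luminance gradient, from the median level between the maximum and minimum luminance, and from the midpoint between the peak position P and the negative peak position NP. The function name and the simple argmax/argmin estimates are assumptions made for illustration.

    import numpy as np

    def edge_position_candidates(section):
        section = np.asarray(section, dtype=float)
        grad = np.gradient(section)

        # (a) position of the extremal value of the luminance gradient
        edge_from_gradient = int(np.argmax(np.abs(grad)))

        # (b) position whose luminance is closest to the median value
        #     between the maximum and the minimum luminance
        half_level = 0.5 * (section.max() + section.min())
        edge_from_level = int(np.argmin(np.abs(section - half_level)))

        # (c) intermediate point between the peak position P and the
        #     negative peak position NP
        edge_from_midpoint = 0.5 * (int(np.argmax(section)) + int(np.argmin(section)))

        return edge_from_gradient, edge_from_level, edge_from_midpoint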
  • Subsequently, based on the positions of the dots, the pattern detection unit 43 specifies the edge positions that are to be excluded from the plurality of detection points obtained from the luminosity distribution of the evaluation sections. Then, using the detection points other than the excluded edge positions, that is, the remaining edge positions and the negative peak positions or the peak positions, the calculation unit 44 calculates the range information and obtains information on the shape of the object to be measured.
  • Note that since the measurement errors at the edge positions E are small compared with those at the negative peak positions, the edge positions may be employed as detection points when the range information is calculated under conditions in which the measurement accuracy would otherwise be low, for example, because of a low density of detection points or because of the influence of random noise of the image pickup element.
  • The following method may be considered for determining whether the edge positions are to be excluded from the detection points. When a comparison between the positions of the dots detected through the dot detection processing and the edge positions near the dots detected through the edge detection processing shows a large deviation therebetween, it can be considered that there are errors in the positions of those detection points. Such edge positions may be determined to be unsuitable detection points and excluded, so that the detection points in which measurement errors occur are removed, and, as a result, the measurement accuracy can be increased.
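  • One possible reading of this rule is sketched below: an edge detected near a dot is kept only if it lies close to where the dot position found by the dot detection processing says an undisturbed edge should be. The choice of comparing along the X direction, the expected offset (for example, roughly a quarter of the line pitch), and the threshold value are all assumptions; the description does not specify the deviation metric or its limit.

    def keep_edge_near_dot(edge_x, dot_x, expected_offset_px, max_deviation_px=1.0):
        # The dot center lies on the line, so an undisturbed edge is expected at a
        # roughly fixed offset from the dot position in the X direction. If the
        # detected edge deviates strongly from that expectation, treat it as an
        # unsuitable detection point and exclude it from the range calculation.
        return abs((edge_x - dot_x) - expected_offset_px) <= max_deviation_px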
  • Exemplary embodiments of the present disclosure have been described above; however, the present disclosure is not limited by the exemplary embodiments, and various modifications can be made without departing from the scope of the disclosure.
  • In the exemplary embodiments described above, the duty ratio of each bright line and each dark line of the dot line pattern PT is 1:1; however, the duty ratio does not necessarily have to be 1:1. It is, however, favorable that the ratio is 1:1 when the edge positions are detected. FIG. 8 illustrates a pattern light designed with a duty ratio of bright line:dark line = 1:4 and the luminosity distributions of the measured images. The axis of abscissas is the position in the X direction orthogonal to the direction in which each line extends, and the axis of ordinates is the luminance. As the measured images, the luminosity distribution obtained when the image pickup element performs image pickup in the best focused state (optimal focus) and the luminosity distribution obtained when image pickup is performed in a defocused state (out of focus) shifted by 40 mm from the best focused position are illustrated.
  • According to FIG. 8, it can be seen that the edge position (a white hollow triangle) detected from the luminosity distribution of the image picked up with optimal focus and the edge position (a black triangle) detected from the luminosity distribution of the image picked up out of focus are displaced with respect to each other. When converted into a distance calculation error, the above displacement of the edge position amounts to 267 μm. Accordingly, it can be understood that, with the pattern light having a duty ratio of 1:4, defocus causes a distance calculation error due to the edge displacement.
  • Meanwhile, FIG. 9 relates to the pattern in which the duty ratio of each bright line and each dark line of the dot line pattern PT is 1:1 and illustrates the luminosity distributions obtained by evaluation under the same conditions as those of FIG. 8. According to FIG. 9, it can be seen that no displacement occurs between the edge position (a white hollow triangle) detected from the luminosity distribution of the image picked up with optimal focus and the edge position (the white hollow triangle) detected from the luminosity distribution of the image picked up out of focus. It is assumed that, when the pattern with the duty ratio of 1:1 is projected, although the contrast of the luminosity distribution of the image is changed by the defocus, no displacement occurs in the peak positions, the negative peak positions, or the edge position that is substantially the intermediate point thereof. Accordingly, considering the influence exerted by the detection displacement in a defocused state, it is desirable that the duty ratio be near 1:1 when the edge positions are used as the detection points.
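  • The tendency described above can be reproduced with a simple one-dimensional model, sketched below under stated assumptions: an ideal binary line profile is blurred with a Gaussian to imitate defocus, and the edge is taken as the crossing of the level midway between the maximum and minimum luminance. The pattern period, blur width, and crossing search used here are arbitrary modeling choices, not the conditions of FIGS. 8 and 9; the sketch only illustrates that the mid-level edge stays put for a 1:1 duty ratio and shifts for an asymmetric one.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def edge_shift(duty_bright, period=50, blur_sigma=6.0):
        # Ideal (focused) binary profile: bright for the first duty_bright
        # fraction of each period, dark for the rest.
        x = np.arange(10 * period)
        focused = ((x % period) < duty_bright * period).astype(float)
        defocused = gaussian_filter1d(focused, blur_sigma)      # crude defocus model

        def mid_level_crossing(p):
            level = 0.5 * (p.max() + p.min())
            # start the search at a dark point just before the rising edge near x = 4 * period
            start = 3 * period + int(np.argmin(p[3 * period:4 * period]))
            idx = start + int(np.argmax(p[start:] >= level))    # first sample at or above the mid level
            x0, x1 = idx - 1, idx
            return x0 + (level - p[x0]) / (p[x1] - p[x0])       # sub-pixel crossing by linear interpolation

        return mid_level_crossing(defocused) - mid_level_crossing(focused)

    # In this toy model, edge_shift(0.5) stays essentially zero, while
    # edge_shift(0.2) (bright:dark = 1:4) comes out clearly nonzero,
    # mirroring the behavior described for FIGS. 8 and 9.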
  • Furthermore, the pattern that is generated by the pattern generation unit 22 and projected onto the object 5 to be measured is not limited to a dot line pattern. The pattern is not limited to bright portions and dark portions and may be any pattern that includes a plurality of lines, such as a tone pattern or a multicolor pattern. Furthermore, the lines may be straight lines or curved lines. Furthermore, the distinguishing portion does not have to be a dot and may be any mark that allows the lines to be distinguished from each other, such as a round-shaped portion or a portion with a narrowed width. Furthermore, in the bright line BP, the areas that the dots occupy may be larger than the areas that the bright portions occupy.
  • Fourth Exemplary Embodiment
  • The measurement device 1 according to one or more of the exemplary embodiments described above may be used while being supported by a support member. In the present exemplary embodiment, a control system that is used while being attached to a robot arm 300 (holding device), as in FIG. 10, will be described as an example. The measurement device 1 projects a pattern light onto an object 210 to be inspected placed on a support 350, performs image pickup, and acquires an image. The measurement device 1 includes a control unit 310 including the processing unit 4 described above. The processing unit 4 obtains information on the shape of the object 210 to be inspected from the acquired image. Then, the control unit 310 of the measurement device 1, or an external control unit connected to the control unit 310, obtains the position and orientation of the object to be inspected and acquires information on the obtained position and orientation. Based on the information on the position and orientation, the control unit 310 transmits a driving command to the robot arm 300 and controls the robot arm 300. The robot arm 300 holds the object 210 to be inspected with a robot hand (a holding portion) at its distal end and moves and rotates the object 210 to be inspected. Furthermore, by installing the object 210 to be inspected onto another component with the robot arm 300, an article including a plurality of components, such as an electronic circuit board or a machine, can be manufactured. The control unit 310 includes an arithmetic unit, such as a CPU, and a storage device, such as a memory. Furthermore, the measurement data and the image obtained with the measurement device 1 may be displayed on a display unit 320, such as a display.
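  • The control flow of this system can be summarized in the following sketch. Every class and method name here (MeasurementControlSystem, capture_with_pattern, estimate_pose, and so on) is hypothetical; the actual interfaces of the measurement device 1, the control unit 310, and the robot arm 300 are not disclosed, so this is only an outline of the sequence of measuring, estimating the position and orientation, and driving the arm.

    class MeasurementControlSystem:
        def __init__(self, measurement_device, robot_arm, display=None):
            self.device = measurement_device    # measurement device 1 (projection, image pickup, processing unit 4)
            self.arm = robot_arm                # robot arm 300 with a robot hand at the distal end
            self.display = display              # optional display unit 320

        def pick_and_install(self, install_pose):
            image = self.device.capture_with_pattern()   # project the pattern light and pick up an image
            shape = self.device.compute_shape(image)     # shape information from the processing unit 4
            pose = self.device.estimate_pose(shape)      # position and orientation of the object 210
            self.arm.move_to(pose)                       # driving command based on the position and orientation
            self.arm.grip()                              # hold the object with the robot hand
            self.arm.move_to(install_pose)               # install it onto another component to manufacture an article
            self.arm.release()
            if self.display is not None:
                self.display.show(image, shape)          # measurement data and image may be displayed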
  • Other Embodiments
  • Operation of the processing unit or the control unit according to one or more of the exemplary embodiments described above may be performed with the following configuration.
  • Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
  • While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
  • This application claims the benefit of Japanese Patent Application No. 2015-081064, filed Apr. 10, 2015, which is hereby incorporated by reference herein in its entirety.

Claims (22)

What is claimed is:
1. A measurement device that measures a shape of an object to be measured, comprising:
a processing unit that obtains information on the shape of the object to be measured on a basis of an image obtained by imaging the object to be measured on which a pattern light including a plurality of lines in which a distinguishing portion that distinguishes the lines from each other has been provided, wherein
the processing unit acquires, in a luminosity distribution of the image, in a direction intersecting the lines, the plurality of positions including a position in which luminance is largest and a position in which the luminance is smallest,
the processing unit specifies a position to be excluded from the position in which the luminance is the largest and from the position in which the luminance is the smallest on a basis of the position of the distinguishing portion, and
the processing unit obtains the information on the shape of the object to be measured based on the plurality of positions except for the position that has been specified.
2. The measurement device according to claim 1, wherein
the position that has been specified is at least one of the position in which the luminance is the largest and the position in which the luminance is the smallest, the at least one of the positions being located on the distinguishing portion or around the distinguishing portion.
3. The measurement device according to claim 1, wherein
the position that has been specified is a position that is affected by a displacement caused by the distinguishing portion.
4. The measurement device according to claim 1, wherein
the position that is to be excluded from the position in which the luminance is the largest and the position in which the luminance is the smallest is specified based on a number of pixels from a center position of the distinguishing portion in a direction in which the plurality of lines in the image extends.
5. The measurement device according to claim 1, wherein
the pattern light includes a bright line and a dark line alternating each other, and
the distinguishing portion is a distinguishing portion that distinguishes the bright line or the dark line.
6. The measurement device according to claim 5, wherein
the distinguishing portion is a dark portion that is provided in the bright line, and
the position that has been specified is a position between a first bright line in which the distinguishing portion is provided and a second bright line next to the first bright line.
7. The measurement device according to claim 5, wherein
the distinguishing portion is a dark portion that is provided in the bright line, and
the position that has been specified is the position in which the luminance is the smallest that is located around the distinguishing portion.
8. The measurement device according to claim 5, wherein
the distinguishing portion is a bright portion that is provided in the dark line, and
the position that has been specified is a position between a first dark line in which the distinguishing portion is provided and a second dark line next to the first dark line.
9. The measurement device according to claim 5, wherein
the distinguishing portion is a bright portion that is provided in the dark line, and
the position that has been specified is the position in which the luminance is the largest that is located around the distinguishing portion.
10. The measurement device according to claim 1, wherein
the plurality of positions include an intermediate position between the position in which the luminance is the largest and the position in which the luminance is the smallest, and
the position that has been specified includes the intermediate position.
11. The measurement device according to claim 10, wherein
the intermediate position is a position determined by an evaluation value of a luminance gradient obtained from the luminosity distribution of the image in the direction intersecting the lines.
12. The measurement device according to claim 11, wherein
the intermediate position is a position where a value of the luminance gradient is extremal.
13. The measurement device according to claim 10, wherein
the intermediate position is a middle point between the position in which the luminance is the largest and the position in which the luminance is the smallest.
14. A measurement device that measures a shape of an object to be measured, comprising:
a processing unit that obtains information on the shape of the object to be measured on a basis of an image obtained by imaging the object to be measured on which a pattern light including a plurality of lines in which a distinguishing portion that distinguishes the lines from each other has been provided, wherein
the processing unit acquires, in a luminosity distribution of the image, in a direction intersecting the lines, the plurality of positions including a position in which luminance is largest and a position in which the luminance is smallest and an intermediate position between the position in which the luminance is the largest and the position in which the luminance is the smallest,
the processing unit specifies the intermediate position to be excluded on the basis of the position of the distinguishing portion, and
the processing unit obtains the information on the shape of the object to be measured based on the plurality of positions except for the position that has been specified.
15. The measurement device according to claim 1, wherein
the processing unit detects the position of the distinguishing portion from the luminosity distribution of the image, and
the processing unit specifies the position that is to be excluded on a basis of the position of the distinguishing portion that has been detected.
16. The measurement device according to claim 14, wherein
the processing unit detects the position of the distinguishing portion from the luminosity distribution of the image, and
the processing unit specifies the position that is to be excluded on a basis of the position of the distinguishing portion that has been detected.
17. A method of measuring a shape of an object to be measured, the method comprising:
obtaining information on the shape of the object to be measured on a basis of an image of the object to be measured obtained by imaging the object to be measured on which a pattern light including a plurality of lines in which a distinguishing portion that distinguishes the lines from each other has been provided,
acquiring, in the obtaining step and in a luminosity distribution of the image, in a direction intersecting the lines, the plurality of positions including a position in which luminance is largest and a position in which the luminance is smallest,
specifying a position to be excluded from the position in which the luminance is the largest and from the position in which the luminance is the smallest on a basis of the position of the distinguishing portion, and
obtaining the information on the shape of the object to be measured based on the plurality of positions except for the position that has been specified.
18. A method of measuring a shape of an object to be measured, the method comprising:
obtaining information on the shape of the object to be measured on a basis of an image obtained by picking up an image of the object to be measured on which a pattern light including a plurality of lines in which a distinguishing portion that distinguishes the lines from each other has been provided,
acquiring, in the obtaining step and in a luminosity distribution of the image, in a direction intersecting the lines, the plurality of positions including a position in which luminance is largest and a position in which the luminance is smallest and an intermediate position between the position in which the luminance is the largest and the position in which the luminance is the smallest,
specifying, on the basis of the position of the distinguishing portion, the intermediate position to be excluded, and
obtaining the information on the shape of the object to be measured based on the plurality of positions except for the position that has been specified.
19. A system, comprising:
the measurement device according to claim 1, the measurement device measuring an object to be measured; and
a robot that moves the object to be measured on a basis of a measurement result of the measurement device.
20. A method of manufacturing an article, comprising:
moving a component with the robot of the system according to claim 19; and
manufacturing an article by installing the component to another component with the robot.
21. The measurement device according to claim 1, further comprising:
a projection unit that projects, onto the object to be measured, the pattern light; and
an image pickup unit that acquires an image of the object to be measured by imaging the object to be measured on which the pattern light has been projected.
22. The measurement device according to claim 14, further comprising:
a projection unit that projects, onto the object to be measured, the pattern light; and
an image pickup unit that acquires an image of the object to be measured by imaging the object to be measured on which the pattern light has been projected.
US15/091,374 2015-04-10 2016-04-05 Measurement device that measures shape of object to be measured, measurement method, system, and article production method Abandoned US20160300356A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015081064A JP6512912B2 (en) 2015-04-10 2015-04-10 Measuring device for measuring the shape of the object to be measured
JP2015-081064 2015-04-10

Publications (1)

Publication Number Publication Date
US20160300356A1 true US20160300356A1 (en) 2016-10-13

Family

ID=55661318

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/091,374 Abandoned US20160300356A1 (en) 2015-04-10 2016-04-05 Measurement device that measures shape of object to be measured, measurement method, system, and article production method

Country Status (4)

Country Link
US (1) US20160300356A1 (en)
EP (1) EP3081900B1 (en)
JP (1) JP6512912B2 (en)
CN (1) CN106052591B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180120087A1 (en) * 2016-10-31 2018-05-03 Omron Corporation Control system, and control method and program for control system
US20180126649A1 (en) 2016-11-07 2018-05-10 Velo3D, Inc. Gas flow in three-dimensional printing
US10272525B1 (en) 2017-12-27 2019-04-30 Velo3D, Inc. Three-dimensional printing systems and methods of their use
US10286603B2 (en) 2015-12-10 2019-05-14 Velo3D, Inc. Skillful three-dimensional printing
US10286452B2 (en) 2016-06-29 2019-05-14 Velo3D, Inc. Three-dimensional printing and three-dimensional printers
CN109855682A (en) * 2019-02-01 2019-06-07 广东康利达物联科技有限公司 Cargo measuring system and cargo with lamplight pointing function measure indicating means
US10315252B2 (en) 2017-03-02 2019-06-11 Velo3D, Inc. Three-dimensional printing of three-dimensional objects
US10357957B2 (en) 2015-11-06 2019-07-23 Velo3D, Inc. Adept three-dimensional printing
US10434573B2 (en) 2016-02-18 2019-10-08 Velo3D, Inc. Accurate three-dimensional printing
US10449696B2 (en) 2017-03-28 2019-10-22 Velo3D, Inc. Material manipulation in three-dimensional printing
CN110398215A (en) * 2018-04-24 2019-11-01 佳能株式会社 Image processing apparatus and method, system, article manufacturing method, storage medium
US10493564B2 (en) 2014-06-20 2019-12-03 Velo3D, Inc. Apparatuses, systems and methods for three-dimensional printing
US10611092B2 (en) * 2017-01-05 2020-04-07 Velo3D, Inc. Optics in three-dimensional printing
US20220113131A1 (en) * 2019-06-28 2022-04-14 Canon Kabushiki Kaisha Measurement apparatus, image capturing apparatus, measurement system, control method, and storage medium
US11999110B2 (en) 2019-07-26 2024-06-04 Velo3D, Inc. Quality assurance in formation of three-dimensional objects
US12070907B2 (en) 2016-09-30 2024-08-27 Velo3D Three-dimensional objects and their formation

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7064404B2 (en) * 2018-08-13 2022-05-10 株式会社キーエンス Optical displacement meter

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2517062Y2 (en) * 1988-02-03 1996-11-13 株式会社吉野工業所 Small atomizer
US20110221891A1 (en) * 2010-03-10 2011-09-15 Canon Kabushiki Kaisha Information processing apparatus, processing method therefor, and non-transitory computer-readable storage medium
US20120031682A1 (en) * 2010-08-05 2012-02-09 Weber Maschinenbau Gmbh Breidenbach Apparatus and method for handling portions of products
US20140006319A1 (en) * 2012-06-29 2014-01-02 International Business Machines Corporation Extension to the expert conversation builder
JP2014199193A (en) * 2013-03-29 2014-10-23 キヤノン株式会社 Three-dimensional measuring device, three-dimensional measuring method, and program

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2517062B2 (en) * 1988-04-26 1996-07-24 三菱電機株式会社 3D measuring device
PL1969307T3 (en) * 2005-11-28 2010-12-31 3Shape As Coded structured light
WO2011013373A1 (en) * 2009-07-29 2011-02-03 Canon Kabushiki Kaisha Measuring apparatus, measuring method, and program
US9982995B2 (en) * 2011-05-24 2018-05-29 Koninklijke Philips N.V. 3D scanner using structured lighting
US9182221B2 (en) * 2011-06-13 2015-11-10 Canon Kabushiki Kaisha Information processing apparatus and information processing method
JP2013064644A (en) * 2011-09-16 2013-04-11 Nikon Corp Shape-measuring device, shape-measuring method, system for manufacturing structures, and method for manufacturing structures
US9448064B2 (en) * 2012-05-24 2016-09-20 Qualcomm Incorporated Reception of affine-invariant spatial mask for active depth sensing
JP5816773B2 (en) * 2012-06-07 2015-11-18 ファロ テクノロジーズ インコーポレーテッド Coordinate measuring machine with removable accessories
JP2014052209A (en) * 2012-09-05 2014-03-20 Canon Inc Three-dimensional shape measurement device, three-dimensional shape measurement method, program, and recording medium
US9389067B2 (en) * 2012-09-05 2016-07-12 Canon Kabushiki Kaisha Three-dimensional shape measuring apparatus, three-dimensional shape measuring method, program, and storage medium


Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10507549B2 (en) 2014-06-20 2019-12-17 Velo3D, Inc. Apparatuses, systems and methods for three-dimensional printing
US10493564B2 (en) 2014-06-20 2019-12-03 Velo3D, Inc. Apparatuses, systems and methods for three-dimensional printing
US10357957B2 (en) 2015-11-06 2019-07-23 Velo3D, Inc. Adept three-dimensional printing
US10688722B2 (en) 2015-12-10 2020-06-23 Velo3D, Inc. Skillful three-dimensional printing
US10286603B2 (en) 2015-12-10 2019-05-14 Velo3D, Inc. Skillful three-dimensional printing
US10434573B2 (en) 2016-02-18 2019-10-08 Velo3D, Inc. Accurate three-dimensional printing
US10286452B2 (en) 2016-06-29 2019-05-14 Velo3D, Inc. Three-dimensional printing and three-dimensional printers
US12070907B2 (en) 2016-09-30 2024-08-27 Velo3D Three-dimensional objects and their formation
US20180120087A1 (en) * 2016-10-31 2018-05-03 Omron Corporation Control system, and control method and program for control system
US10661341B2 (en) 2016-11-07 2020-05-26 Velo3D, Inc. Gas flow in three-dimensional printing
US20180126649A1 (en) 2016-11-07 2018-05-10 Velo3D, Inc. Gas flow in three-dimensional printing
US10611092B2 (en) * 2017-01-05 2020-04-07 Velo3D, Inc. Optics in three-dimensional printing
US10315252B2 (en) 2017-03-02 2019-06-11 Velo3D, Inc. Three-dimensional printing of three-dimensional objects
US10442003B2 (en) 2017-03-02 2019-10-15 Velo3D, Inc. Three-dimensional printing of three-dimensional objects
US10369629B2 (en) 2017-03-02 2019-08-06 Veo3D, Inc. Three-dimensional printing of three-dimensional objects
US10888925B2 (en) 2017-03-02 2021-01-12 Velo3D, Inc. Three-dimensional printing of three-dimensional objects
US10357829B2 (en) 2017-03-02 2019-07-23 Velo3D, Inc. Three-dimensional printing of three-dimensional objects
US10449696B2 (en) 2017-03-28 2019-10-22 Velo3D, Inc. Material manipulation in three-dimensional printing
US10272525B1 (en) 2017-12-27 2019-04-30 Velo3D, Inc. Three-dimensional printing systems and methods of their use
CN110398215A (en) * 2018-04-24 2019-11-01 佳能株式会社 Image processing apparatus and method, system, article manufacturing method, storage medium
CN109855682A (en) * 2019-02-01 2019-06-07 广东康利达物联科技有限公司 Cargo measuring system and cargo with lamplight pointing function measure indicating means
US20220113131A1 (en) * 2019-06-28 2022-04-14 Canon Kabushiki Kaisha Measurement apparatus, image capturing apparatus, measurement system, control method, and storage medium
US11999110B2 (en) 2019-07-26 2024-06-04 Velo3D, Inc. Quality assurance in formation of three-dimensional objects

Also Published As

Publication number Publication date
EP3081900A1 (en) 2016-10-19
CN106052591A (en) 2016-10-26
JP2016200503A (en) 2016-12-01
JP6512912B2 (en) 2019-05-15
EP3081900B1 (en) 2018-08-29
CN106052591B (en) 2019-09-03

Similar Documents

Publication Publication Date Title
EP3081900B1 (en) Measurement devices and methods for measuring the shape of an object to be measured, and method of manufacturing an article
US9621793B2 (en) Information processing apparatus, method therefor, and measurement apparatus
KR101461068B1 (en) Three-dimensional measurement apparatus, three-dimensional measurement method, and storage medium
US10024653B2 (en) Information processing apparatus, information processing method, and storage medium
US9613425B2 (en) Three-dimensional measurement apparatus, three-dimensional measurement method and program
US10006762B2 (en) Information processing apparatus, information processing method, and storage medium
US20160267668A1 (en) Measurement apparatus
US20160356596A1 (en) Apparatus for measuring shape of object, and methods, system, and storage medium storing program related thereto
US10240913B2 (en) Three-dimensional coordinate measuring apparatus and three-dimensional coordinate measuring method
US9759549B2 (en) Distance detecting device
US20190325593A1 (en) Image processing apparatus, system, method of manufacturing article, image processing method, and non-transitory computer-readable storage medium
JP2015184056A (en) Measurement device, method, and program
JP2014155063A (en) Chart for resolution measurement, resolution measurement method, positional adjustment method for camera module, and camera module manufacturing method
US9739604B2 (en) Information processing apparatus, information processing method, and storage medium
JP5883688B2 (en) Installation state detection system, installation state detection device, and installation state detection method
JP2020046229A (en) Three-dimensional measuring device and three-dimensional measuring method
JP2016063336A (en) Calibration method, calibration program, and calibration device
JP2016176723A (en) Measurement device
JP2019174216A (en) Lens mark pattern center determination method, and device of the same, as well as program making computer implement determination method and recording medium of the same
JP5371015B2 (en) Cross mark detection apparatus and method, and program
JP6515946B2 (en) Image processing apparatus, image processing method, program
JP5261891B2 (en) Alignment mark and position measurement method
US20240013420A1 (en) Image processing apparatus, image processing method, and storage medium
JP4484041B2 (en) Edge position detection device
JP2017032449A (en) Measurement device and measurement method

Legal Events

Date Code Title Description
AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KITAMURA, TSUYOSHI;TOKIMITSU, TAKUMI;REEL/FRAME:039177/0726

Effective date: 20160322

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION