
WO2018195797A1 - 一种视觉检测方法、检测设备以及机器人 - Google Patents

一种视觉检测方法、检测设备以及机器人 Download PDF

Info

Publication number
WO2018195797A1
WO2018195797A1 PCT/CN2017/081989 CN2017081989W WO2018195797A1 WO 2018195797 A1 WO2018195797 A1 WO 2018195797A1 CN 2017081989 W CN2017081989 W CN 2017081989W WO 2018195797 A1 WO2018195797 A1 WO 2018195797A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
detected
identification area
target
detecting device
Prior art date
Application number
PCT/CN2017/081989
Other languages
English (en)
French (fr)
Inventor
阳光
韩琨
张志明
Original Assignee
深圳配天智能技术研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳配天智能技术研究院有限公司 filed Critical 深圳配天智能技术研究院有限公司
Priority to CN201780089296.0A priority Critical patent/CN110476056B/zh
Priority to PCT/CN2017/081989 priority patent/WO2018195797A1/zh
Publication of WO2018195797A1 publication Critical patent/WO2018195797A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination

Definitions

  • the invention belongs to the technical field of detection, and in particular relates to a visual detection method, a detection device and a robot.
  • In the prior art, a black box is used: the robot and the object to be detected are placed in the black box and photographed under artificial illumination, thereby avoiding the failure of detection caused by a change in overall brightness when the ambient light is uncontrollable.
  • the embodiment of the invention provides a visual detection method, a detection device and a robot, so that the implementation of the visual detection is flexible and easy to operate.
  • a first aspect of the embodiments of the present invention provides a visual detection method, including:
  • the detecting device acquires a current image to be detected
  • the detecting device determines an identification area in the image to be detected, and the identification area includes one or more areas;
  • the detecting device acquires a target image parameter of the identification area
  • the detecting device performs correction processing on the image to be detected and sets a target detection threshold according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image;
  • the detecting device visually detects the corrected image to be detected according to the target detection threshold.
  • a second aspect of the embodiments of the present invention provides a detecting apparatus, including:
  • a first acquiring unit configured to acquire a current image to be detected
  • a first determining unit configured to determine an identification area in the image to be detected, where the identification area includes one or more areas
  • a second acquiring unit configured to acquire a target image parameter of the identified area
  • a first correcting unit configured to perform a correction process on the image to be detected and set a target detection threshold according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image;
  • a detecting unit configured to perform visual detection on the corrected image to be detected according to the target detection threshold.
  • a third aspect of the embodiments of the present invention provides a detecting apparatus, including:
  • the sensor is configured to acquire a current image to be detected
  • the memory is configured to store an operation instruction
  • the processor is configured, by calling the operation instruction, to: control the sensor to acquire the current image to be detected; determine an identification area in the image to be detected, the identification area including one or more areas; acquire a target image parameter of the identification area; perform correction processing on the image to be detected and set a target detection threshold according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image; and visually detect the corrected image to be detected according to the target detection threshold.
  • a fourth aspect of the embodiments of the present invention provides a robot, including:
  • a robot arm and a detecting device, the detecting device comprising a sensor, the sensor of the detecting device being mounted on the robot arm;
  • the detecting device is configured to: control the sensor to acquire a current image to be detected; determine an identification area in the image to be detected, the identification area including one or more areas; acquire a target image parameter of the identification area; perform correction processing on the image to be detected and set a target detection threshold according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image; and visually detect the corrected image to be detected according to the target detection threshold.
  • the technical solution provided by the embodiments of the present invention has the following advantages: the image to be detected is corrected according to the target image parameter of the identification area in the image to be detected and the target image parameter of the identification area in the reference image, a better detection threshold is set, and the corrected image is detected with that threshold, thereby solving the problem that detection cannot be performed normally when the ambient light changes.
  • the visual inspection method provided by the present invention does not limit the measurement environment, and the implementation of the visual inspection is flexible and easy to operate.
  • FIG. 1a is a schematic diagram of an application scenario according to an embodiment of the present invention.
  • FIG. 1b is a schematic diagram of another application scenario according to an embodiment of the present invention.
  • FIG. 2 is a flow chart of an embodiment of a visual detection method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of another embodiment of a visual detection method according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a technical method for determining an equivalent light source according to a plurality of pixel points according to an embodiment of the present invention
  • Figure 5 is a relationship diagram of pre-stored image parameter difference values and detection threshold values
  • FIG. 6 is a block diagram of an embodiment of a detecting device according to an embodiment of the present invention.
  • FIG. 7 is a block diagram of another embodiment of a detecting device according to an embodiment of the present invention.
  • FIG. 8 is a device diagram of an embodiment of a detecting device according to an embodiment of the present invention.
  • the embodiments of the present invention are applicable to the application scenarios shown in FIG. 1a and FIG. 1b.
  • the robot arm 101 in FIG. 1a and the robot arm 110 in FIG. 1b are the same robot arm of the same robot at different positions. When the position of the robot arm changes, the illumination intensity on the detected object 102 shown in FIG. 1a changes to the illumination intensity on the detected object 120 shown in FIG. 1b. This change in illumination intensity causes the threshold used for visual detection to change, so that visual detection can no longer be performed normally.
  • in view of this, in the embodiments of the present invention, the detecting device acquires the current image to be detected; the detecting device determines an identification area in the image to be detected, the identification area including one or more areas; the detecting device acquires a target image parameter of the identification area; the detecting device performs correction processing on the image to be detected and sets a target detection threshold according to the target image parameter of the identification area and the target image parameter of the identification area in the reference image; and the detecting device visually detects the corrected image to be detected according to the target detection threshold.
  • an embodiment of the visual detection method in the embodiment of the present invention includes:
  • the detecting device acquires a current image to be detected.
  • in industrial assembly-line work, the detecting device performs visual detection on the object to be detected.
  • the detecting device adopts an image capturing device such as a camera to perform image capturing on the object to be detected to obtain an image to be detected of the object to be detected for detection and analysis.
  • the detecting device determines an identification area in the image to be detected.
  • after acquiring the current image to be detected, the detecting device determines an identification area in that image.
  • the identification area may be one area or multiple areas, which is not limited herein.
  • in one embodiment, the position of the identification area relative to the detecting device is fixed, that is, no matter how the detecting device moves, the relative position between the two remains constant; for example, the identification area is 30 cm directly in front of the camera.
  • the location of the identification area and the detection device may not be relatively fixed, for example, the identification area is located on the object to be detected, or is located on a platform where the object to be detected is located.
  • the detecting device acquires a target image parameter of the identification area.
  • after the detecting device determines the identification area in the image to be detected, it extracts the target image parameter of the identification area, where the target image parameter is the gray value of a pixel.
  • the detecting device performs correction processing on the image to be detected according to the target image parameter and the target image parameter of the identification area in the reference image, and sets a target detection threshold.
  • after obtaining the target image parameter of the identification area, the detecting device corrects the image to be detected and sets the target detection threshold according to that parameter and the target image parameter of the identification area in the reference image, where the reference image is an image acquired by the camera under reference parameters and is obtained in advance. The specific processing is described below.
  • the detecting device performs visual detection on the corrected image to be detected according to the target detection threshold.
  • after the detecting device performs the correction processing on the image to be detected and resets the target detection threshold, it visually detects the corrected image according to the reset target detection threshold. It should be noted that the method of visually detecting an image with a given detection threshold may be any existing visual detection method, which is not limited herein.
  • it can be seen from the above technical solution that, in the embodiment of the present invention, after the detecting device acquires the current image to be detected, it determines the identification area in the image to be detected, performs correction processing on the image to be detected and sets a target detection threshold according to the target image parameter of the identification area and the target image parameter of the identification area in the reference image, and then performs visual detection according to that threshold.
  • because the image to be detected is corrected according to the target image parameter of the identification area in the image to be detected and the target image parameter of the identification area in the reference image, a better detection threshold is set, and the corrected image is detected with that threshold, the problem that detection cannot be performed normally when the ambient light changes is solved (an end-to-end sketch follows).
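  • To make the flow above concrete, the following minimal sketch (in Python with NumPy, which the patent itself does not prescribe) strings the steps together; the boolean mask id_mask marking the identification area and lookup_threshold standing in for the stored difference/threshold relationship of FIG. 5 are illustrative assumptions, and the individual steps are sketched in more detail where they are described below.

    import numpy as np

    def detect(image, reference_image, id_mask, reference_threshold,
               empirical_value, lookup_threshold):
        """End-to-end sketch of the method; `lookup_threshold` stands in for
        the stored difference/threshold relationship described later."""
        ref_mean = reference_image[id_mask].mean()   # identification area, reference
        cur_mean = image[id_mask].mean()             # identification area, current

        # Correction: scale the current image so the identification areas match.
        corrected = image.astype(np.float64) * (ref_mean / cur_mean)

        # Target detection threshold: keep the reference threshold unless the
        # brightness difference exceeds the empirical value.
        diff = ref_mean - cur_mean
        threshold = (lookup_threshold(diff) if abs(diff) > empirical_value
                     else reference_threshold)

        # Visual detection: flag pixels whose gray value differs from the pixel
        # to their right by more than the threshold (one simple defect criterion).
        defects = np.abs(np.diff(corrected, axis=1)) > threshold
        return corrected, threshold, defects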
  • FIG. 3 is a flowchart of another embodiment of a visual detection method according to an embodiment of the present invention.
  • since the identification area may include one area or multiple areas, FIG. 3 is described by taking a plurality of areas as an example.
  • the detecting device acquires a reference image.
  • to visually detect the object to be detected, the detecting device obtains a reference image from local memory; the reference image is an image of the detected object taken by the camera under a reference condition or a normal condition.
  • the detecting device acquires a reference parameter, wherein the reference parameter includes an image parameter of the reference image and a reference threshold.
  • the reference threshold is the acceptable error when the detecting device detects defects: if the difference between the target image parameter of a pixel in the reference image and that of an adjacent pixel is greater than the reference threshold, the detecting device regards the pixel as a defect point.
  • in practice, the reference threshold may also be a reference value used when detecting defects; for example, a pixel whose target image parameter in the reference image is greater than the reference threshold is a defect point. The definition of the reference threshold is therefore not limited herein.
  • the image parameters of the reference image include gray value, contrast, or blurriness, and the reference threshold corresponds to the reference image parameter and accordingly includes a gray-difference threshold, a contrast threshold, or a blurriness threshold, which is not limited herein (a simple sketch of threshold-based defect flagging follows).
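  • As a concrete illustration of how such a threshold could be applied during detection, the sketch below flags a pixel as a defect point when its gray value differs from an adjacent pixel by more than the threshold; this is only one plausible reading of the rule above (8-bit grayscale images as NumPy arrays are assumed), not a definitive implementation taken from the patent.

    import numpy as np

    def defect_points(gray, threshold):
        """Mark pixels whose gray value differs from the pixel to their right or
        below by more than `threshold` (one simple notion of "adjacent pixel")."""
        gray = gray.astype(np.int32)            # avoid uint8 wrap-around
        defects = np.zeros(gray.shape, dtype=bool)

        # Horizontal neighbour differences.
        dx = np.abs(gray[:, 1:] - gray[:, :-1]) > threshold
        defects[:, 1:] |= dx
        defects[:, :-1] |= dx

        # Vertical neighbour differences.
        dy = np.abs(gray[1:, :] - gray[:-1, :]) > threshold
        defects[1:, :] |= dy
        defects[:-1, :] |= dy
        return defects

    # Example: a bright spot on a uniform background exceeds a threshold of 20.
    img = np.full((5, 5), 100, dtype=np.uint8)
    img[2, 2] = 180
    print(defect_points(img, 20))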
  • the detecting device acquires a current image to be detected.
  • step 302 is similar to step 201 in FIG. 2, and details are not described herein again.
  • the detecting device finds the identification area in the image to be detected according to preset identification area information.
  • after acquiring the current image to be detected, the detecting device searches for the identification area in it according to the preset identification area information. In practice, the technical means used include at least image segmentation and pattern matching. When image segmentation is used, the preset identification area information may include block area, block average brightness, block aspect ratio, and so on; when pattern matching is used, the preset identification area information may be an identification area image or the like. For example, if the preset identification area information specifies that the identification area to be found is a regular hexagon, then every regular hexagonal area the detecting device finds in the image to be detected is an identification area. The preset identification area information is therefore not limited herein.
  • the detecting device can thus find the identification area in several ways: for example, it finds, according to a pre-stored identification area image, the areas in the image to be detected whose shape matches that image, or it takes corner areas of the image to be detected whose size meets a preset condition as identification areas, which is not limited herein (a pattern-matching sketch is given below).
  • the detecting device may find one or more identification areas that match the preset identification area information, and it stores the information of the determined identification areas locally, so the number of identification areas determined by the detecting device is not limited herein.
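  • The pattern-matching variant could, for example, be realized with OpenCV template matching as sketched below; the library choice, the normalized-correlation score, and the 0.8 acceptance threshold are assumptions for illustration and are not prescribed by the patent.

    import cv2
    import numpy as np

    def find_identification_areas(image_gray, template_gray, score=0.8):
        """Return (x, y, w, h) boxes whose content matches the pre-stored
        identification-area image (normalized cross-correlation)."""
        result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(result >= score)
        h, w = template_gray.shape
        # Each hit is a candidate identification area of the template's size.
        return [(x, y, w, h) for x, y in zip(xs, ys)]

    # Usage sketch (file names are placeholders):
    # img = cv2.imread("to_detect.png", cv2.IMREAD_GRAYSCALE)
    # tpl = cv2.imread("id_area_template.png", cv2.IMREAD_GRAYSCALE)
    # areas = find_identification_areas(img, tpl)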
  • the detecting device determines a position of the equivalent light source according to the target image parameter of the plurality of pixel points in the identification area.
  • the difference of the gray value of each pixel on the image to be detected is related to two factors, including the light intensity of the light source irradiated to the pixel and the reflectivity of the pixel itself.
  • the difference in light intensity at which a light source illuminates a pixel is related to two factors, including the light intensity of the light source and the distance between the light source and the pixel.
  • after determining the identification area in the image to be detected according to the preset identification area information, the detecting device determines the position of the equivalent light source according to the target image parameters of a plurality of pixels in the identification area. FIG. 4 is a technical diagram, provided in this embodiment, of determining an equivalent light source from a plurality of pixels.
  • in FIG. 4, P_Light is the light source, and l_11, l_n1 and l_nm are any three pixels selected by the detecting device in the identification area. In this embodiment the three pixels are located in three different identification areas; in other embodiments they may also be located in one identification area or in two identification areas. It should be noted that the three pixels have the same reflectance. d_1, d_2 and d_3 denote the distances from the three pixels to the light source; for example, d_1 is the distance from point l_11 to the light source. Because the illumination intensity at a pixel is inversely proportional to the square of its distance from the light source, it can be written as l = K / d^2, where l is the illumination intensity at the pixel, d is the distance from the pixel to the light source, and K is a constant related to the intensity of the light source; once the intensity of the equivalent light source is set, the value of K is determined.
  • the illumination intensities of the three pixels are determined from their gray values, and d_1, d_2 and d_3 are then computed from the formula above. Taking each of the three pixels as a sphere centre with radius d_1, d_2 and d_3 respectively, the intersection of the three spheres gives the position of the equivalent light source. There are obviously two intersection points, and either one may be chosen as the equivalent light source (a code sketch follows).
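  • A compact sketch of this step is given below: it converts the three gray values into distances through the inverse-square relation and then trilaterates the source position. It assumes the three pixels' 3-D positions are known (for example, in the camera coordinate frame) and uses the gray value directly as the illumination intensity; the constant K and all coordinates are illustrative only.

    import numpy as np

    def equivalent_light_source(points, grays, K=1.0e5):
        """Estimate the equivalent light source from three pixels.

        points : sequence of three 3-D pixel positions
        grays  : gray values of the three pixels (used as illumination intensity l)
        K      : constant of the inverse-square law l = K / d**2
        Returns the two candidate source positions (the sphere intersections).
        """
        p1, p2, p3 = (np.asarray(p, dtype=float) for p in points)
        r1, r2, r3 = (np.sqrt(K / g) for g in grays)      # d = sqrt(K / l)

        # Standard trilateration: build a local frame from the three centres.
        ex = (p2 - p1) / np.linalg.norm(p2 - p1)
        i = np.dot(ex, p3 - p1)
        ey = p3 - p1 - i * ex
        ey /= np.linalg.norm(ey)
        ez = np.cross(ex, ey)
        d = np.linalg.norm(p2 - p1)
        j = np.dot(ey, p3 - p1)

        x = (r1**2 - r2**2 + d**2) / (2 * d)
        y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
        z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))        # clamp small negatives
        base = p1 + x * ex + y * ey
        return base + z * ez, base - z * ez               # the two intersections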
  • the detecting device obtains, from the position of the equivalent light source and the position of each pixel in the image to be detected, the error at each pixel caused by the light source position;
  • after the position of the equivalent light source has been determined from the target image parameters of the pixels in the identification area, the positions of all pixels in the image to be detected are known, so the detecting device can determine the distance from the equivalent light source to each pixel and then compute, using the formula l = K / d^2 mentioned in step 304, the error at each pixel caused by the light source position.
  • here, the error refers to the gray-value error caused by the fact that the distances from the light source to the individual pixels differ.
  • the detecting device corrects the image to be detected according to the error.
  • after obtaining the error caused by the light source position at each pixel of the image to be detected, the detecting device removes that error from the image, i.e. corrects the image to be detected according to the error. For example, since the error caused by the light source position is known for every pixel, the detecting device subtracts the corresponding error from the image parameter of each pixel in the image to be detected, thereby correcting the image, that is, eliminating the error caused by each pixel being at a different distance from the light source.
  • suppose the image to be detected has 3×3 pixels, and Table 1 gives the gray value of each pixel. The detecting device selects one pixel of the image as a specific pixel (say the pixel in the upper-left corner) and, assuming the other pixels have the same reflectance as that pixel, computes the gray value at each pixel position from the position of the equivalent light source and the positions of the pixels, giving Table 2. Subtracting the gray value of the specific pixel from the gray values of all pixels in Table 2 gives the error caused by the light source position at each pixel, i.e. Table 3. Subtracting the corresponding errors in Table 3 from all pixels in Table 1 yields the corrected image to be detected (Table 4); a small sketch follows.
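  • The worked example above can be reproduced with a few lines of NumPy, as sketched below. The light-source position, the constant K, and the choice of the upper-left pixel as the reference are illustrative assumptions, so the numbers will not match Tables 1 to 4 exactly, but the procedure (predict, difference against the reference pixel, subtract) is the same.

    import numpy as np

    def correct_for_light_position(gray, pixel_xyz, source_xyz, K=1.0e6):
        """Remove the gray-value error caused by unequal distances to the light source.

        gray       : (H, W) measured gray values (Table 1 in the example)
        pixel_xyz  : (H, W, 3) 3-D position of every pixel
        source_xyz : position of the equivalent light source
        """
        source_xyz = np.asarray(source_xyz, dtype=float)

        # Predicted gray value at each pixel if reflectance were uniform (Table 2).
        d2 = np.sum((pixel_xyz - source_xyz) ** 2, axis=-1)
        predicted = K / d2

        # Error relative to the chosen reference pixel, here the upper-left one (Table 3).
        error = predicted - predicted[0, 0]

        # Corrected image: measured values minus the position-induced error (Table 4).
        return gray - error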
  • steps 304 to 306 are used to correct the error of the image to be detected due to the position of the light source.
  • the error of the image to be detected due to the position of the light source may also be corrected by other methods.
  • step 304 to step 306 may be omitted directly, that is, step 307 is directly executed after step 303 is performed.
  • the detecting device acquires the target image parameter of the identification area.
  • in this embodiment, the detecting device acquires the target image parameter of the identification area from the corrected image to be detected, i.e. the gray value of each pixel of the identification area is taken from the corrected image.
  • in another embodiment, if steps 304 to 306 are omitted, the detecting device acquires the target image parameter of the identification area from the image to be detected, i.e. the gray value of each pixel of the identification area is taken from the uncorrected image to be detected.
  • the detecting device performs correction processing on the image to be detected according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image;
  • after obtaining the target image parameter of the identification area in the image to be detected and that of the identification area in the reference image, the detecting device divides the average of the target image parameters of the identification area in the reference image by the average of the target image parameters of the identification area in the image to be detected to obtain a target coefficient, and then multiplies the target image parameter of each pixel in the image to be detected by this coefficient to correct the image. In other words, the detecting device divides the average gray value of the pixels of the identification area in the reference image by the average gray value of the pixels of the identification area in the image to be detected to obtain the target coefficient, and then multiplies the gray value of every pixel in the image to be detected by that coefficient to perform the correction (see the sketch below).
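  • A minimal sketch of this brightness correction, assuming grayscale NumPy arrays and a boolean mask id_mask marking the identification area (the mask is an illustrative representation, not the patent's data structure):

    import numpy as np

    def brightness_correct(image, reference_image, id_mask):
        """Scale the image to be detected so that its identification area has the
        same mean gray value as the identification area of the reference image."""
        target_coeff = reference_image[id_mask].mean() / image[id_mask].mean()
        corrected = image.astype(np.float64) * target_coeff
        # Keep the result in the usual 8-bit gray range (a practical choice,
        # not something the patent specifies).
        return np.clip(corrected, 0, 255).astype(np.uint8)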
  • the detecting device subtracts the average value of the target image parameters of the identification area in the image to be detected from the average value of the target image parameters of the identification area in the reference image to obtain a difference;
  • that is, after obtaining the target image parameter of the identification area in the image to be detected and the target image parameter of the identification area in the reference image, the detecting device subtracts one average from the other to obtain the difference.
  • the detecting device determines whether the difference is greater than a preset empirical value; if so, step 311 is executed; if not, step 312 is executed;
  • as shown in FIG. 5, which is a diagram of the pre-stored relationship between image parameter differences and detection thresholds, the relationship stores the optimal detection threshold corresponding to each image parameter difference, that is, for each difference it stores a detection threshold with which defect points can still be detected.
  • the horizontal axis of the graph in FIG. 5 is the difference between the average of the target image parameters of the identification area in the reference image and the average of the target image parameters of the identification area in the image to be detected, and the vertical axis is the detection threshold. The value at which the empirical boundary line intersects the horizontal axis is the preset empirical value.
  • when the image parameter difference is smaller than the preset empirical value, the corresponding detection threshold tends towards a stable value; when the difference is greater than the preset empirical value, the corresponding detection threshold varies considerably. In other words, when the difference is smaller than the empirical value the detection threshold changes little, and the reference threshold can simply be used as the target detection threshold of the image to be detected; when the difference is greater than the empirical value the threshold changes a lot, and the detection threshold corresponding to the difference must be used as the target detection threshold of the image to be detected.
  • the detecting device determines, in the preset relationship between image parameter differences and detection thresholds, the detection threshold corresponding to the difference as the target detection threshold;
  • after the detecting device determines that the difference is greater than the preset empirical value, it adjusts the detection threshold of the image to be detected: using the obtained difference, it looks up, in the locally preset relationship between image parameter differences and detection thresholds, the detection threshold corresponding to that difference, and uses it as the target detection threshold of the image to be detected.
  • the detecting device sets the reference threshold of the preset reference image as the target detection threshold;
  • when the detecting device determines that the difference is not greater than the preset empirical value, it uses the reference threshold of the preset reference image as the target detection threshold of the image to be detected (see the sketch below).
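  • Steps 310 to 312 amount to the threshold-selection rule sketched below. The use of linear interpolation over sampled points of the difference/threshold curve is an assumption made here for illustration; the patent only requires that the stored relationship map each difference to a detection threshold.

    import numpy as np

    def select_threshold(diff, empirical_value, reference_threshold,
                         curve_diffs, curve_thresholds):
        """Pick the target detection threshold for the image to be detected.

        diff                : difference of identification-area averages (reference - current)
        empirical_value     : preset empirical value (boundary line in FIG. 5)
        reference_threshold : reference threshold of the pre-stored reference image
        curve_diffs, curve_thresholds : sampled points of the stored relationship
        """
        if abs(diff) > empirical_value:
            # Large illumination change: read the threshold off the stored relation.
            return float(np.interp(abs(diff), curve_diffs, curve_thresholds))
        # Small change: the reference threshold is still good enough.
        return float(reference_threshold)

    # Usage sketch with made-up curve samples:
    # thr = select_threshold(diff=35, empirical_value=20, reference_threshold=12,
    #                        curve_diffs=[0, 20, 40, 60], curve_thresholds=[12, 13, 22, 35])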
  • the detecting device performs visual detection according to the target detection threshold.
  • the step 313 is similar to the step 205 in FIG. 2, and details are not described herein again.
  • in this embodiment of the present invention, the detecting device obtains the current image to be detected of the detected object, acquires the target image parameter of the identification area, and then corrects the image to be detected and adjusts its target detection threshold according to that parameter and the target image parameter of the identification area of the reference image. Because the embodiment builds a visual model within the visual measurement itself, the measurement environment is not restricted, and the visual detection is flexible and easy to implement.
  • FIG. 6 is a block diagram of an embodiment of the detection device according to an embodiment of the present invention, including:
  • the first acquiring unit 601 is configured to acquire a current image to be detected
  • a first determining unit 602 configured to determine an identification area in the image to be detected, where the identification area includes one or more areas;
  • a second acquiring unit 603, configured to acquire a target image parameter of the identification area
  • a first correcting unit 604 configured to perform a correction process on the image to be detected according to the target image parameter of the identified area and the target image parameter of the identified area in the reference image stored in advance, and set a target detection threshold;
  • the detecting unit 605 is configured to perform visual detection on the corrected image to be detected according to the detection threshold.
  • in this embodiment, the first acquiring unit obtains the current image to be detected of the detected object, the second acquiring unit acquires the target image parameter of the identification area, and the correcting unit then performs correction processing and sets the target detection threshold according to the target image parameter of the identification area of the image to be detected and the target image parameter of the identification area in the pre-stored reference image. The measurement environment is therefore not restricted, and the visual detection is flexible and easy to implement.
  • FIG. 7 is a block diagram of another embodiment of the detecting device in the embodiment of the present invention.
  • the correcting unit 704 can include:
  • a first correction module 7041 configured to perform a correction process on the image to be detected according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image;
  • the operation module 7042 is configured to subtract the average value of the target image parameter of the identification area in the reference image and the average value of the target image parameter of the identification area in the image to be detected to obtain a difference;
  • the determining module 7043 is configured to determine whether the difference is greater than a preset empirical value;
  • the first setting module 7044 is configured to set the target detection threshold according to the relationship between preset image parameter differences and detection thresholds when the difference is greater than the preset empirical value;
  • the second setting module 7045 is configured to set the reference threshold of the preset reference image as the target detection threshold when the difference is not greater than the preset empirical value.
  • the first setting module 7044 may specifically include:
  • the determining submodule 70441 is configured to determine, as a target detection threshold, a detection threshold corresponding to the difference value in a relationship between the preset image parameter difference value and the detection threshold.
  • the first correction module 7041 may specifically include:
  • the first operation sub-module 70411 is configured to divide the average value of the target image parameter of the identification area in the reference image and the average value of the target image parameter of the identification area in the image to be detected to obtain a target coefficient;
  • the second operation sub-module 70412 is configured to multiply the target image parameter of each pixel in the image to be detected with the target coefficient to correct the detected image.
  • the first determining unit 702 includes:
  • the searching module 7021 is configured to find an identification area in the image to be detected according to the preset identification area information.
  • the detecting device further includes:
  • a second determining unit 706, configured to determine a position of the equivalent light source according to the target image parameters of a plurality of pixels in the identification area;
  • a third determining unit 707, configured to obtain the error caused by the light source position at each pixel of the image to be detected according to the position of the equivalent light source and the position of each pixel in the image to be detected;
  • a second correcting unit 708, configured to correct an image to be detected according to an error
  • the first correction module 7041 is further configured to perform further correction processing on the corrected image to be detected according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image.
  • the present invention further provides a detection device.
  • referring to FIG. 8, which is a device diagram of a detection device according to an embodiment of the present invention.
  • the detection device 80 includes a sensor 810, a memory 820, and a processor 830.
  • the sensor 810 is configured to acquire a current image to be detected.
  • a memory 820 configured to store an operation instruction
  • the processor 830 is configured to perform the following steps by calling an operation instruction stored in the memory 820:
  • control the sensor 810 to acquire a current image to be detected; determine an identification area in the image to be detected, the identification area including one or more areas; acquire the target image parameters of the identification area; perform correction processing on the image to be detected and set a target detection threshold according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image; and perform visual detection according to the detection threshold.
  • in this embodiment, the processor 830 may also be referred to as a central processing unit (CPU).
  • the memory 820 is configured to store operation instructions and data so that the processor 830 can call the operation instructions to perform the corresponding operations; it may include read-only memory and random access memory, and a portion of the memory 820 may also include non-volatile random access memory (NVRAM).
  • the detection device 80 also includes a bus system 840 that couples various components of the detection device 80, including the sensor 810, the memory 820, and the processor 830, wherein the bus system 840 includes a data bus. In addition, it can also include a power bus, a control bus, and a status signal bus. However, for clarity of description, various buses are labeled as bus system 840 in the figure.
  • processor 830 may be an integrated circuit chip with signal processing capabilities.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor 830 or an instruction in the form of software.
  • the processor 830 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention may be implemented or carried out.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 820, and the processor 830 reads the information in the memory 820 and completes the steps of the above method in combination with its hardware.
  • in another possible embodiment, the processor 830 may also call the operation instructions in the memory 820 to perform the following steps: perform correction processing on the image to be detected according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image; subtract the average of the target image parameters of the identification area in the image to be detected from the average of the target image parameters of the identification area in the reference image to obtain a difference; determine whether the difference is greater than a preset empirical value; when the difference is greater than the preset empirical value, set the target detection threshold according to the relationship between preset image parameter differences and detection thresholds; and when the difference is not greater than the preset empirical value, set the reference threshold of the preset reference image as the target detection threshold.
  • the specific implementation of the processor 830 to set the detection threshold according to the relationship between the preset image parameter difference value and the detection threshold may be:
  • a detection threshold corresponding to the difference is determined as a target detection threshold in a relationship between the preset image parameter difference value and the detection threshold.
  • a specific implementation in which the processor 830 operates on the target image parameter of the identification area in the reference image and the target image parameter of the identification area in the image to be detected to correct the image to be detected may be: divide the average of the target image parameters of the identification area in the reference image by the average of the target image parameters of the identification area in the image to be detected to obtain a target coefficient; and multiply the target image parameter of each pixel in the image to be detected by the target coefficient to correct the image to be detected.
  • processor 830 can also call the operation instruction in the memory 820 to perform the following steps:
  • the control sensor finds the identification area in the image to be detected according to the preset identification area information.
  • in another possible embodiment, the processor 830 may also call the operation instructions in the memory 820 to perform the following steps: determine the position of the equivalent light source according to the target image parameters of a plurality of pixels in the identification area; obtain, from the position of the equivalent light source and the position of each pixel in the image to be detected, the error caused by the light source position at each pixel; correct the image to be detected according to the error; and perform further correction processing on the corrected image to be detected according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image.
  • in the above embodiments, the detecting device acquires the current image to be detected and, after determining the identification area in it, acquires the target image parameter of the identification area, then corrects the image to be detected and sets a target detection threshold according to that parameter and the target image parameter of the identification area in the reference image, and finally performs visual detection according to the detection threshold.
  • the detection device does not restrict the measurement environment, so the visual detection is flexible and easy to implement.
  • the embodiment of the present invention further provides a robot. The robot includes a robot arm and a detecting device, the detecting device includes a sensor, and the sensor of the detecting device is mounted on the robot arm. Through the detecting device, the robot performs the functions of the embodiments corresponding to FIG. 2 and FIG. 3, so that when the overall brightness of the external environment changes and the overall brightness of the detected object changes with it, the robot can adjust the image parameters of the image to be detected and reset its target detection threshold, and can therefore complete visual detection normally without having to isolate itself and the detected object from ambient light.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division into units is only a division by logical function; in actual implementation there may be other ways of dividing them. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in another form.
  • the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention which is essential or contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A visual detection method, a detection device, and a robot, used to make visual detection flexible and easy to implement. The method includes: the detection device acquires a current image to be detected; the detection device determines an identification area in the image to be detected, the identification area including one or more areas; the detection device acquires a target image parameter of the identification area; the detection device performs correction processing on the image to be detected and sets a target detection threshold according to the target image parameter of the identification area and a target image parameter of the identification area in a pre-stored reference image; and the detection device performs visual detection according to the detection threshold. A detection device is also provided, which can solve the problem that detection cannot be performed normally because of changes in ambient light.

Description

一种视觉检测方法、检测设备以及机器人 技术领域
本发明属于检测技术领域,尤其涉及一种视觉检测方法、检测设备以及机器人。
背景技术
随着工业产品精确程度的提高,一些传统的检测方法在速度、精度和成本等方面已经完全无法满足生产的要求。计算机视觉检测可以高速、可靠和不间断的对工业产品的质量问题准确的检测,能取代以往效率低下的人工检测方法。然而工业流水中被检查物体的光照强度经常发生变化,比如日照角度引起的环境整体亮度变化、因机器人的机械臂在不同的位置时拍照等,而视觉检测的阈值也会受光强的改变影响,导致无法检测。
现有技术中,采用黑箱的方式,即将机器人和被检测物体放置于黑箱中,在人工光照下对被检测物体进行拍摄,避免了外界环境光不可控时整体亮度发生变化导致的无法对被检测物体正常进行视觉检测的问题。
但是,现有技术中,需要使用黑箱将机器人和被检测物体隔离于外界环境光,黑箱在设计上不易被实施,依旧无法有效进行视觉检测,实现难度很大。
发明内容
本发明实施例提供了一种视觉检测方法、检测设备以及机器人,使视觉检测的实现方式灵活易操作。
本发明实施例的第一方面提供一种视觉检测方法,包括:
检测设备获取当前的待检测图像;
所述检测设备在所述待检测图像中确定标识区域,所述标识区域包括一个或多个区域;
所述检测设备获取所述标识区域的目标图像参数;
所述检测设备根据所述标识区域的目标图像参数和预先存储的参考图像中标识区域的目标图像参数对所述待检测图像进行校正处理并设定目标检测 阈值;
所述检测设备根据所述目标检测阈值对校正后的所述待检测图像进行视觉检测。
本发明实施例的第二方面提供一种检测设备,包括:
第一获取单元,用于获取当前的待检测图像;
第一确定单元,用于在所述待检测图像中确定标识区域,所述标识区域包括一个或多个区域;
第二获取单元,用于获取所述标识区域的目标图像参数;
第一校正单元,用于根据所述标识区域的目标图像参数和预先存储的参考图像中标识区域的目标图像参数对所述待检测图像进行校正处理并设定目标检测阈值;
检测单元,用于根据所述目标检测阈值对校正后的所述待检测图像进行视觉检测。
本发明实施例的第三方面提供了一种检测设备,包括:
传感器、存储器和处理器;
所述传感器,用于获取当前的待检测图像;
所述存储器,用于存储操作指令;
所述处理器通过调用所述操作指令,用于:
控制所述传感器获取所述当前的待检测图像;
在所述待检测图像中确定标识区域,所述标识区域包括一个或多个区域;
获取所述标识区域的目标图像参数;
根据所述标识区域的目标图像参数和预先存储的参考图像中标识区域的目标图像参数对所述待检测图像进行校正处理并设定目标检测阈值;
根据所述目标检测阈值对校正后的所述待检测图像进行视觉检测。
本发明实施例第四方面提供了一种机器人,包括:
机器臂和检测装置,所述检测装置包括传感器,所述检测装置的传感器安装在所述机械臂上;
所述检测装置,用于控制所述传感器获取当前的待检测图像;
还用于在所述待检测图像中确定标识区域,所述标识区域包括一个或多个 区域;
还用于获取所述标识区域的目标图像参数;
还用于根据所述标识区域的目标图像参数和预先存储的参考图像中标识区域的目标图像参数对所述待检测图像进行校正处理并设定目标检测阈值;
还用于根据所述目标检测阈值对校正后的所述待检测图像进行视觉检测。
本发明实施例提供的技术方案中,具有如下优点:根据目标图像中标识区域的目标图像参数和参考图像中标识区域的目标图像参数对待检测图像进行了校正,并设定了更优的检测阈值,并利用设定的检测阈值对校正后的待检测图像进行检测,从而解决了因为环境光变化而导致的无法正常检测的问题。本发明所提供的视觉检测方法不限定测量环境,使视觉检测的实现方式灵活易操作。
附图说明
图1a为本发明实施例的一个应用场景示意图;
图1b为本发明实施例的另一应用场景示意图;
图2为本发明实施例中视觉检测方法的一个实施例的流程图;
图3为本发明实施例中视觉检测方法的另一实施例的流程图;
图4为本发明实施例中根据多个像素点确定等效光源的一个技术示意图;
图5为预存的图像参数差值和检测阈值的关系图;
图6为本发明实施例中检测设备的一个实施例的模块图;
图7为本发明实施例中检测设备的另一实施例的模块图;
图8为本发明实施例中检测设备的一个实施例的装置图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第 三”、“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
本发明实施例适用于如图1a和图1b所示的应用场景。图1a中机器人的机械臂101和图1b中机器人的机械臂110为同一个机器人的机械臂在不同的位置,当机械臂的位置发生变化时,如图1a所示,导致被检测物体102的光照强度变化成图1b所示的被检测物体120的光照强度,光照强度的变化引起视觉检测的阈值发生变化,进而使得视觉检测无法正常进行。
有鉴于此,本发明实施例中,检测设备获取当前的待检测图像;检测设备在待检测图像中确定标识区域,标识区域包括一个或多个区域;检测设备获取标识区域的目标图像参数;检测设备根据标识区域的目标图像参数和参考图像中标识区域的目标图像参数对待检测图像进行校正处理并设定目标检测阈值;检测设备根据目标检测阈值对校正后的待检测图像进行视觉检测。
为便于理解,下面对本发明实施例中的具体流程进行描述,请参阅图2,本发明实施例中视觉检测方法的一个实施例包括:
201、检测设备获取当前的待检测图像;
在工业流水作业中,检测设备对待检测物体进行视觉检测。检测设备采用相机等摄像装置对待检测物体进行图像采集得到待检测物体当前的待检测图像以进行检测分析。
202、检测设备在待检测图像中确定标识区域;
检测设备获取到当前的待检测图像后,在该待检测图像中确定标识区域。其中标识区域可以是一个区域或者多个区域,具体此处不做限定。
在一个实施例中,所述标识区域与检测设备的位置相对固定,即不论检测设备如何移动变化,两者之间的相对位置一定,例如,标识区域在相机正前方30cm等。
在其他实施例中,所述标识区域与所述检测设备的位置也可不相对固定,例如所述标识区域位于所述待检测物体上,或者位于所述待检测物体所处的平台上。
203、检测设备获取标识区域的目标图像参数;
检测设备在待检测图像中确定了标识区域后,检测设备再提取出该标识区域中的目标图像参数,其中目标图像参数为像素点的灰度值。
204、检测设备根据目标图像参数和参考图像中标识区域的目标图像参数对待检测图像进行校正处理并设定目标检测阈值;
检测设备得到标识区域的目标图像参数后,再根据该目标图像参数和参考图像中标识区域的目标图像参数对待检测图像进行校正处理并设定目标检测阈值,其中,参考图像为参考参数下相机获取到的图像,参考图像为预先获取的。具体的处理过程,请参考下文。
205、检测设备根据目标检测阈值对校正后的待检测图像进行视觉检测。
检测设备对待检测图像进行校正处理并重新设定了目标检测阈值后,检测设备根据重新设定的目标检测阈值对待检测图像进行视觉检测。需要说明的是,利用一指定的检测阈值对图像进行视觉检测的方法,可以是现有的视觉检测方法,在此不做限定。
从以上技术方案可以看出,本发明实施例中,检测设备获取当前的待检测图像后,在待检测图像中确定标识区域,并根据标识区域的目标图像参数和参考图像中标识区域的目标图像参数对待检测图像进行校正处理并设定目标检测阈值,以根据检测阈值进行视觉检测。由于本发明实施例中,根据目标图像中标识区域的目标图像参数和参考图像中标识区域的目标图像参数对待检测图像进行了校正,并设定了更优的检测阈值,并利用设定的检测阈值对校正后的待检测图像进行检测,从而解决了因为环境光变化而导致的无法正常检测的问题。
为便于理解,下面将对本发明实施例的视觉检测方法进行详细描述。请参阅图3,图3为本发明实施例中视觉检测方法的另一实施例的流程图。由于标识区域可以包括一个区域或者多个区域,图3中以标识区域为多个区域为例来进行说明。
本发明实施例中另一种视觉检测方法包括:
301、检测设备获取参考图像;
检测设备对待检测物体进行视觉检测,从本地存储器中获得参考图像,该参考图像为在参考条件或者是正常条件下相机对被检测物体拍照得到的图像。检测设备获取到参考参数,其中参考参数包括参考图像的图像参数和参考阈值。参考阈值为检测设备检测缺陷时的可接受误差,若参考图像中某个像素点的目标图像参数与相邻像素点的目标图像参数的差值大于该参考阈值时,检测设备认为该像素点为缺陷点。实际应用中,参考阈值也可以为检测设备检测缺陷时的参考值,例如在参考图像中目标图像参数大于该参考阈值的像素点即为缺陷点,故参考阈值的定义具体此处不做限定。
参考图像的图像参数包括灰度值、对比度或模糊度等,参考阈值与参考图像参数对应,也包括灰度差阈值、对比度阈值或模糊度阈值等,具体此处不做限定。
302、检测设备获取当前的待检测图像;
本发明实施例中,步骤302与图2中的步骤201类似,具体不再赘述。
303、检测设备根据预置的标识区域信息在待检测图像中找到标识区域;
检测设备获取到当前的待检测图像后,根据预置的标识区域信息在待检测图像中寻找标识区域。在实际应用中,实现的技术手段至少包括图像分割或模式匹配,在使用图像分割中,预置的标识区域信息可以包括区块面积、区块平均亮度或者区块的长宽比等,在使用模式匹配时,预置的标识区域信息可以为标识区域图像等,例如预置的标识区域信息为要查找的标识区域是正六边形,则检测设备在待检测图像中查找到的所有正六边形区域都是标识区域,故预置的标识区域信息具体此处不做限定。因此检测设备找到标识区域的方式有多种,例如检测设备根据预存的标识区域图像,在待检测图像中找出与该预存的标识区域图像形状匹配的的标识区域,或者将待检测图像中大小符合预置条件的边角区域作为标识区域,具体此处不做限定。
显然,检测设备找出的符合该预置的标识区域信息的标识区域可以有一个或者多个,并将确定的标识区域的信息存储至本地,故检测设备确定的标识区域的个数具体此处不做限定。
304、检测设备根据标识区域中的多个像素点的目标图像参数确定等效光源的位置;
需要说明的是,待检测图像上各像素点的灰度值的不同与两个因素有关,包括光源照射到该像素点的光强以及该像素点自身的反射率。光源照射到某个像素点的光强的不同与两个因素有关,包括光源的光照强度以及光源与像素点的距离。
检测设备根据预置的标识区域信息在待检测图像中确定标识区域后,根据标识区域中的多个像素点的目标图像参数确定出等效光源的位置,如图4所示,为本发明实施例提供的一种根据多个像素点确定等效光源的技术示意图,在图4中,PLight为光源,l11、ln1和lnm为检测设备在标识区域中选择的任意三个像素点。本实施例中,该三个像素点分别位于三个不同的标识区域。在其他实施例中,该三个像素点也可以位于一个标识区域,或者位于两个标识区域。需要说明一下,该三个像素点的反射率相同。d1、d2和d3分别表示为该三个像素点与光源的距离,如d1表示点l11到光源的距离,由于三个像素点的光照强度是已知的,且光照强度与距离的平方成反比,可用公式l=K/d2表示,l表示各像素点的光照强度,d表示各像素点到光源的位置,K为与光源光强相关的常量。当设定好等效光源的光照强度时,即可确定所述K的值。再根据所述三个像素点的灰度值确定三个像素点的光照强度,然后即可根据上述公式计算得到所述d1、d2和d3的值。再分别以该三个像素点作为球心,各点到光源的距离d1、d2和d3为半径做球形,得出的三个球的交汇点即为等效光源的位置,显然,交汇点有两个,任选其中一个作为等效光源的位置即可。
305、检测设备根据等效光源的位置和待检测图像上各个像素点的位置得到待检测图像上各个像素点因光源位置而导致的误差;
检测设备在根据标识区域中的多个像素点的目标图像特征确定了等效光源的位置后,由于待检测图像中所有像素点的位置已知,因此能通过待检测图像中各像素点的位置确定等效光源到待检测图像中各像素点的距离,再根据步骤304中提到的公式l=K/d2计算得出待检测图像中各像素点的由于光源位置引起的误差。本实施例中,所述误差指的是因光源位置到达各个像素点的距离不一样而导致的灰度值误差。
306、检测设备根据所述误差校正所述待检测图像;
检测设备得到待检测图像上各个像素点因光源位置而导致的误差后,在待检测图像上将所述误差消除,也即根据所述误差校正待检测图像。例如,待检测图像上每一个像素点的由于光源位置而引起的误差已知,所述检测设备将待检测图像中各像素点的图像参数与相应的误差进行减法运算,进而校正所述待检测图像,也即消除所述待检测图像上各个像素点因为与光源的距离不同而导致的误差。
假设待检测图像的像素点有3*3个,表1为待检测图像中各个像素点的灰度值。检测设备从所述待检测图像中任选一个像素点作为特定像素点(假设为左上角的像素点),并在假设其他像素点与该像素点的反射率相同的情况下,根据所述等效光源的位置以及各个像素点的位置来计算各个像素点位置的灰度值,得到表2。将表2中的所有像素点的灰度值减去所述特定像素点的灰度值,以得到所述待检测图像上各个像素点因光源位置而导致的误差,也即表3。将表1中的所有像素点减去表3中相应位置的误差,从而得到校正后的待检测图像。
表1
76 160 172
180 187 194
185 201 209
表2
76 81 84
85 92 101
90 95 107
表3
0 5 8
9 16 25
14 19 31
表4
76 155 164
171 171 169
171 182 178
需要说明的是,步骤304~步骤306用于校正所述待检测图像因为光源位置而导致的误差。在其他实施例中,也可以通过其他方法来校正所述待检测图像因为光源位置而导致的误差。在另一实施例中,步骤304~步骤306也可以直接省略,也即执行完步骤303就直接执行步骤307。
307、检测设备获取所述标识区域的目标图像参数;
在本发明实施例中,所述检测设备从校正后的待检测图像上获取所述标识区域的目标图像参数。也即,从校正后的待检测图像上获取所述标识区域的各个像素点的灰度值。
在另一实施例中,若步骤304~步骤306被省略,则所述检测设备从所述待检测图像上获取所述标识区域的目标图像参数。也即,从所述待检测图像上获取所述标识区域的各个像素点的灰度值。
308、检测设备根据所述标识区域的目标图像参数和预先存储的参考图像中标识区域的目标图像参数对所述待检测图像进行校正处理;
检测设备获得待检测图像中标识区域的目标图像参数和参考参数中标识区域的目标图像参数后,将参考图像中标识区域的目标图像参数的平均值和待检测图像中标识区域的目标图像参数的平均值进行除法运算得到目标系数。基于该目标系数,检测设备将待检测图像中各像素点的目标图像参数与该目标系数进行乘法运算来对待检测图像进行校正处理。也即,所述检测设备将所述参考图像中标识区域的各像素点的灰度值的平均值和待检测图像中标识区域的各像素点的灰度值的平均值进行除法运算得到目标系数,然后将所述待检测图像中各像素点的灰度值与该目标系数进行乘法运算来对所述待检测图像进行校正处理。
309、检测设备将参考图像中标识区域的目标图像参数的平均值和待检测图像中标识区域的目标图像参数的平均值进行减法运算得到差值;
检测设备获得待检测图像中标识区域的目标图像参数和参考图像中标识 区域的目标图像参数后,将待检测图像中标识区域的目标图像参数与参考图像中标识区域的目标图像参数相减得到差值。
310、检测设备判断所述差值是否大于预置的经验值;若是,则执行步骤311;若否,则执行步骤312;
如图5所示,是预存的图像参数差值和检测阈值的关系图。该关系图中存储着与每个参考图像差值对应的最优检测阈值,也即存储着与每个参考图像差值对应的能够检测到缺陷点的检测阈值。图5中的坐标图的横轴为参考图像中标识区域的目标图像参数的平均值与待检测图像中标识区域的目标图像参数的平均值的差值,纵轴为检测阈值。图中经验性分界线与横轴的交汇处的值即为预置的经验值。当图像参数差值小于该预置的经验值时,该图像参数差值对应的检测阈值趋于稳定值;当图像参数差值大于该预置的经验值时,该图像参数差值对应的检测阈值变化较大。也即,当图像参数差值小于该预置的经验值时,对应的检测阈值变化较小,将参考阈值作为待检测图像的目标检测阈值即可。当图像参数差值大于该预置的经验值时,对应的检测阈值变化较大,则需要将与该图像参数差值对应的检测阈值作为待检测图像的目标检测阈值。
311、检测设备在预设的图像参数差值与检测阈值的关系中确定与差值对应的检测阈值作为目标检测阈值;
检测设备确定差值大于预设的经验值后,检测设备调整该待检测图像的检测阈值,检测设备根据获得的差值在本地预设的图像参数差值与检测阈值关系找出与差值对应的检测阈值以作为待检测图像的目标检测阈值。
312、检测设备将预设的参考图像的参考阈值设定为目标检测阈值;
检测设备确定该差值小于预置的经验值时,检测设备将预设的参考图像的参考阈值作为待检测图像的目标检测阈值。
313、检测设备根据目标检测阈值进行视觉检测。
本发明实施例中,步骤313与图2中的步骤205类似,具体不再赘述。
本发明实施例中,检测设备获得检测物体当前的待检测图像,并获取到标识区域的目标图像参数,进而根据该目标图像参数和参考图像的标识区域的目标图像参数校正待检测图像并调整待检测图像的目标检测阈值。由于本发明实施例中,采用视觉测量中建立视觉模型的方式,不限定测量环境,使视觉检测 的实现方式灵活易操作。
上面对本发明实施例中的视觉检测方法进行了描述,下面对本发明实施例中的检测设备进行描述,请参阅图6,为本发明实施例中检测设备一个实施例的模块图,包括:
第一获取单元601,用于获取当前的待检测图像;
第一确定单元602,用于在待检测图像中确定标识区域,标识区域包括一个或多个区域;
第二获取单元603,用于获取标识区域的目标图像参数;
第一校正单元604,用于根据所述标识区域的目标图像参数和预先存储的参考图像中标识区域的目标图像参数对待检测图像进行校正处理并设定目标检测阈值;
检测单元605,用于根据检测阈值对校正后的待检测图像进行视觉检测。
本发明实施例中,第一获取单元获得检测物体当前的待检测图像,第二获取单元获取到标识区域的目标图像参数,进而校正单元根据待检测图像的标识区域的目标图像参数和预先存储的参考图像中标识区域的目标图像参数校正处理并设定目标检测阈值。由于本发明实施例中,采用视觉测量中建立视觉模型的方式,不限定测量环境,使视觉检测的实现方式灵活易操作。
为便于理解,下面对本发明实施例中的检测设备进行详细描述,在上述图6所示的基础上,请参阅7,为本发明实施例中检测设备的另一个实施例的模块图,第一校正单元704可以包括:
第一校正模块7041,用于根据标识区域的目标图像参数和预先存储的参考图像中标识区域的目标图像参数对待检测图像进行校正处理;
运算模块7042,用于将参考图像中标识区域的目标图像参数的平均值和待检测图像中标识区域的目标图像参数的平均值进行减法运算得到差值;
判断模块7043,用于判断差值是否大于预置的经验值;
第一设定模块7044,当所述差值大于预置的经验值时,则用于根据预设的图像参数差值与检测阈值的关系设定目标检测阈值;
第二设定模块7045,当所述差值不大于预置的经验值,则用于将预设的参考图像的参考阈值设定为目标检测阈值。
其中,第一设定模块7044具体可包括:
确定子模块70441,用于在预设的图像参数差值与检测阈值的关系中确定与差值对应的检测阈值作为目标检测阈值。
其中,第一校正模块7041具体可包括:
第一运算子模块70411,用于将参考图像中标识区域的目标图像参数的平均值和待检测图像中标识区域的目标图像参数的平均值进行除法运算得到目标系数;
第二运算子模块70412,用于将待检测图像中各像素点的目标图像参数与目标系数进行乘法运算以校正检测图像。
其中,第一确定单元702包括:
查找模块7021,用于根据预置的标识区域信息在待检测图像中找出标识区域。
其中,所述检查设备还包括:
第二确定单元706,用于根据标识区域中的多个像素点的目标图像参数确定等效光源的位置;
第三确定单元707,用于根据等效光源的位置和待检测图像上各像素点的位置得到待检测图像中各个像素点因为光源位置而导致的误差;
第二校正单元708,用于根据误差校正待检测图像;
第一校正模块7041,还用于根据标识区域的目标图像参数和预先存储的参考图像中标识区域的目标图像参数对校正后的待检测图像做进一步的校正处理。
上述图6至图7中的模块的具体实现方案请参见上述的视觉检测方法部分,在此不做赘述。
本发明还提供了一种检测设备,请参阅图8,为本发明实施例中检测设备的装置图,其中所述检测设备80包括:传感器810、存储器820、处理器830。
其中,传感器810,用于获取当前的待检测图像。
存储器820,用于存储操作指令;
处理器830通过调用存储器820存储的操作指令,用于执行如下步骤:
控制传感器810获取当前的待检测图像;在待检测图像中确定标识区域, 标识区域包括一个或多个区域;获取标识区域的目标图像参数;根据标识区域的目标图像参数和预先存储的参考图像中标识区域的目标图像参数对待检测图像进行校正处理并设定目标检测阈值;根据检测阈值进行视觉检测。
需要说明的是,本实施例中,处理器830还可以称为中央处理单元(英文全称:Central Processing Unit,英文缩写:CPU)。
存储器820,用于存储操作指令和数据,以便处理器830调用上述操作指令实现相应操作,可以包括只读存储器和随机存取存储器。存储器820的一部分还可以包括非易失性随机存取存储器(英文全称:Non-Volatile Random Access Memory,英文缩写:NVRAM)。
所述检测设备80还包括总线系统840,所述总线系统840将检测设备80的各个组件耦合在一起,上述各个组件包括传感器810、存储器820、处理器830,其中总线系统840除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都标为总线系统840。
本实施例中,还需要说明的是,上述本发明实施例揭示的方法可以应用于处理器830中,或者由处理器830实现。处理器830可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器830中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器830可以是通用处理器、数字信号处理器(英文全称:Digital Signal Processing,英文缩写:DSP)、专用集成电路(英文全称:Application Specific Integrated Circuit,英文缩写:ASIC)、现成可编程门阵列(英文全称:Field-Programmable Gate Array,英文缩写:FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本发明实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本发明实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器820,处理器830读取存储器820中的信息,结合其硬件完成上述方法的步骤。
在另一个可能的实施例中,处理器830还可以调用存储器820中的操作指令,执行如下步骤:
根据标识区域的目标图像参数和预先存储的所述参考图像中标识区域的目标图像参数对所述待检测图像进行校正处理;
将参考图像中标识区域的目标图像参数的平均值和待检测图像中标识区域的目标图像参数的平均值进行减法运算得到差值;
判断差值是否大于预置的经验值;
当所述差值大于预置的经验值时,则根据预设的图像参数差值与检测阈值的关系设定目标检测阈值;
当所述差值不大于预置的经验值时,则将预设的参考图像的参考阈值设定为目标检测阈值。
在上述实施例中处理器830根据预设的图像参数差值与检测阈值的关系设定检测阈值的具体实现可为:
在预设的图像参数差值与检测阈值的关系中确定与差值对应的检测阈值作为目标检测阈值。
处理器830将参考图像中标识区域的目标图像参数和待检测图像中标识区域的目标图像参数做预设的运算以对待检测图像进行校正处理的具体实现可为:
将参考图像中标识区域的目标图像参数的平均值和待检测图像中标识区域的目标图像参数的平均值进行除法运算得到目标系数;
将待检测图像中各像素点的目标图像参数与目标系数进行乘法运算以校正待检测图像。
在另一个可能的实施例中,处理器830还可以调用存储器820中的操作指令,执行如下步骤:
控制传感器根据预置的标识区域信息在待检测图像中找出标识区域。
在另一个可能的实施例中,处理器830还可以调用存储器820中的操作指令,执行如下步骤:
根据所述标识区域中的多个像素点的目标图像参数确定等效光源的位置;
根据等效光源的位置和待检测图像上各个像素点的位置得到待检测图像 上各个像素点因为光源位置而导致的误差;
根据误差校正待检测图像;
根据标识区域的目标图像参数和预先存储的参考图像中标识区域的目标图像参数对校正后的所述待检测图像做进一步的校正处理。
以上实施例中,检测设备获取当前的待检测图像,并在待检测图像中确定标识区域后,获取标识区域的目标图像参数,再根据标识区域的目标图像参数和参考图像中标识区域的目标图像参数对待检测图像进行校正处理并设定目标检测阈值,检测设备再根据检测阈值进行视觉检测。该检测设备不限定测量环境,使视觉检测的实现方式灵活易操作。使视觉检测的实现方式灵活易操作。
本发明实施例还提供了一种机器人,上述机器人包括机器臂和检测装置,所述检测装置包括传感器,上述检测装置的传感器安装于上述机器臂本身上,上述机器人通过上述检测装置执行上述图2和图3对应实施例的功能,以使得上述机器人可以在外界环境整体亮度发生变化导致被检测物体的整体亮度也发生变化时,能调整待检测图像的图像参数并重新设定待检测图像的目标检测参数,使上述机器人不必和被检测物体隔离于外界环境光便可正常完成视觉检测。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部 单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (14)

  1. A visual detection method, comprising:
    a detection device acquiring a current image to be detected;
    the detection device determining an identification area in the image to be detected, the identification area comprising one or more areas;
    the detection device acquiring a target image parameter of the identification area;
    the detection device performing correction processing on the image to be detected and setting a target detection threshold according to the target image parameter of the identification area and a target image parameter of the identification area in a pre-stored reference image; and
    the detection device performing visual detection on the corrected image to be detected according to the target detection threshold.
  2. The visual detection method according to claim 1, wherein the target image parameter is a gray value of a pixel, and the detection device performing correction processing on the image to be detected and setting the target detection threshold according to the target image parameter of the identification area and the target image parameter of the identification area in the reference image comprises:
    the detection device performing correction processing on the image to be detected according to the target image parameter of the identification area and the pre-stored target image parameter of the identification area in the reference image;
    the detection device subtracting the average value of the target image parameters of the identification area in the image to be detected from the average value of the target image parameters of the identification area in the reference image to obtain a difference;
    the detection device determining whether the difference is greater than a preset empirical value;
    when the difference is greater than the preset empirical value, the detection device setting the target detection threshold according to a preset relationship between image parameter differences and detection thresholds; and
    when the difference is not greater than the preset empirical value, the detection device setting a preset reference threshold of the reference image as the target detection threshold.
  3. The visual detection method according to claim 2, wherein the detection device setting the detection threshold according to the preset relationship between image parameter differences and detection thresholds comprises:
    the detection device determining, in the preset relationship between image parameter differences and detection thresholds, the detection threshold corresponding to the difference as the target detection threshold.
  4. The visual detection method according to claim 2, wherein the detection device performing correction processing on the image to be detected according to the target image parameter of the identification area and the pre-stored target image parameter of the identification area in the reference image comprises:
    the detection device dividing the average value of the target image parameters of the identification area in the reference image by the average value of the target image parameters of the identification area in the image to be detected to obtain a target coefficient; and
    the detection device multiplying the target image parameter of each pixel in the image to be detected by the target coefficient to correct the image to be detected.
  5. The visual detection method according to claim 1, wherein the detection device determining the identification area in the image to be detected comprises:
    the detection device finding the identification area in the image to be detected according to preset identification area information.
  6. The visual detection method according to any one of claims 2 to 5, wherein before the detection device performs correction processing on the image to be detected according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image, the method further comprises:
    the detection device determining a position of an equivalent light source according to target image parameters of a plurality of pixels in the identification area;
    the detection device obtaining, according to the position of the equivalent light source and the position of each pixel in the image to be detected, an error at each pixel of the image to be detected caused by the light source position; and
    the detection device correcting the image to be detected according to the error;
    and wherein the detection device performing correction processing on the image to be detected according to the target image parameter of the identification area and the pre-stored target image parameter of the identification area in the reference image comprises:
    the detection device performing further correction processing on the corrected image to be detected according to the target image parameter of the identification area and the pre-stored target image parameter of the identification area in the reference image.
  7. A detection device, comprising:
    a first acquiring unit, configured to acquire a current image to be detected;
    a first determining unit, configured to determine an identification area in the image to be detected, the identification area comprising one or more areas;
    a second acquiring unit, configured to acquire a target image parameter of the identification area;
    a first correcting unit, configured to perform correction processing on the image to be detected and set a target detection threshold according to the target image parameter of the identification area and a target image parameter of the identification area in a pre-stored reference image; and
    a detecting unit, configured to perform visual detection on the corrected image to be detected according to the target detection threshold.
  8. The detection device according to claim 7, wherein the first correcting unit comprises:
    a first correction module, configured to perform correction processing on the image to be detected according to the target image parameter of the identification area and the pre-stored target image parameter of the identification area in the reference image;
    an operation module, configured to subtract the average value of the target image parameters of the identification area in the image to be detected from the average value of the target image parameters of the identification area in the reference image to obtain a difference;
    a determining module, configured to determine whether the difference is greater than a preset empirical value;
    a first setting module, configured to set the target detection threshold according to a preset relationship between image parameter differences and detection thresholds when the difference is greater than the preset empirical value; and
    a second setting module, configured to set a preset reference threshold of the reference image as the target detection threshold when the difference is not greater than the preset empirical value.
  9. The detection device according to claim 8, wherein the first setting module comprises:
    a determining sub-module, configured to determine, in the preset relationship between image parameter differences and detection thresholds, the detection threshold corresponding to the difference as the target detection threshold.
  10. The detection device according to claim 8, wherein the first correction module comprises:
    a first operation sub-module, configured to divide the average value of the target image parameters of the identification area in the reference image by the average value of the target image parameters of the identification area in the image to be detected to obtain a target coefficient; and
    a second operation sub-module, configured to multiply the target image parameter of each pixel in the image to be detected by the target coefficient to correct the image to be detected.
  11. The detection device according to claim 7, wherein the first determining unit comprises:
    a searching module, configured to find the identification area in the image to be detected according to preset identification area information.
  12. The detection device according to any one of claims 8 to 11, wherein the detection device further comprises:
    a second determining unit, configured to determine a position of an equivalent light source according to target image parameters of a plurality of pixels in the identification area;
    a third determining unit, configured to obtain, according to the position of the equivalent light source and the position of each pixel in the image to be detected, an error at each pixel of the image to be detected caused by the light source position; and
    a second correcting unit, configured to correct the image to be detected according to the error;
    wherein the first correction module is further configured to perform further correction processing on the corrected image to be detected according to the target image parameter of the identification area and the target image parameter of the identification area in the pre-stored reference image.
  13. A detection device, comprising:
    a sensor, a memory, and a processor;
    the sensor being configured to acquire a current image to be detected;
    the memory being configured to store operation instructions; and
    the processor being configured, by calling the operation instructions, to:
    control the sensor to acquire the current image to be detected;
    determine an identification area in the image to be detected, the identification area comprising one or more areas;
    acquire a target image parameter of the identification area;
    perform correction processing on the image to be detected and set a target detection threshold according to the target image parameter of the identification area and a target image parameter of the identification area in a pre-stored reference image; and
    perform visual detection on the corrected image to be detected according to the target detection threshold.
  14. A robot, comprising:
    a robot arm and a detecting device, the detecting device comprising a sensor, the sensor of the detecting device being mounted on the robot arm;
    the detecting device being configured to control the sensor to acquire a current image to be detected;
    and further configured to determine an identification area in the image to be detected, the identification area comprising one or more areas;
    and further configured to acquire a target image parameter of the identification area;
    and further configured to perform correction processing on the image to be detected and set a target detection threshold according to the target image parameter of the identification area and a target image parameter of the identification area in a pre-stored reference image;
    and further configured to perform visual detection on the corrected image to be detected according to the target detection threshold.
PCT/CN2017/081989 2017-04-26 2017-04-26 一种视觉检测方法、检测设备以及机器人 WO2018195797A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780089296.0A CN110476056B (zh) 2017-04-26 2017-04-26 一种视觉检测方法、检测设备以及机器人
PCT/CN2017/081989 WO2018195797A1 (zh) 2017-04-26 2017-04-26 一种视觉检测方法、检测设备以及机器人

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/081989 WO2018195797A1 (zh) 2017-04-26 2017-04-26 一种视觉检测方法、检测设备以及机器人

Publications (1)

Publication Number Publication Date
WO2018195797A1 true WO2018195797A1 (zh) 2018-11-01

Family

ID=63919356

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/081989 WO2018195797A1 (zh) 2017-04-26 2017-04-26 一种视觉检测方法、检测设备以及机器人

Country Status (2)

Country Link
CN (1) CN110476056B (zh)
WO (1) WO2018195797A1 (zh)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414511A (zh) * 2019-07-30 2019-11-05 深圳市普渡科技有限公司 合作标志识别方法及系统
CN110796149A (zh) * 2019-10-09 2020-02-14 陈浩能 食品追溯的图像比对方法及相关装置
CN111243015A (zh) * 2018-11-29 2020-06-05 合肥泰禾光电科技股份有限公司 货箱位置检测方法及装置
CN111369513A (zh) * 2020-02-28 2020-07-03 广州视源电子科技股份有限公司 一种异常检测方法、装置、终端设备及存储介质
CN111654955A (zh) * 2020-04-30 2020-09-11 钱丽丽 基于图像分析的室内环境光变化因素识别方法
CN111723724A (zh) * 2020-06-16 2020-09-29 东软睿驰汽车技术(沈阳)有限公司 一种路面障碍物识别方法和相关装置
CN111862050A (zh) * 2020-07-22 2020-10-30 无锡先导智能装备股份有限公司 一种物料检测系统、方法及设备
CN111882541A (zh) * 2020-07-28 2020-11-03 广州柔视智能科技有限公司 缺陷检测方法、装置、设备及计算机可读存储介质
CN112150544A (zh) * 2020-09-24 2020-12-29 西门子(中国)有限公司 吊钩到位检测方法、装置和计算机可读介质
CN112255973A (zh) * 2019-07-02 2021-01-22 库卡机器人(广东)有限公司 工业生产系统中的目标检测方法、检测终端及存储介质
CN113128302A (zh) * 2019-12-30 2021-07-16 深圳云天励飞技术有限公司 图像检测方法及相关产品
CN113776629A (zh) * 2021-09-08 2021-12-10 广州计量检测技术研究院 玻璃量器自动校准系统及控制方法、装置
CN115002320A (zh) * 2022-05-27 2022-09-02 北京理工大学 基于视觉检测的光强调节方法、装置、系统及处理设备
CN115283296A (zh) * 2022-10-08 2022-11-04 烟台台芯电子科技有限公司 基于igbt生产的视觉检测系统及方法
CN116993716A (zh) * 2023-09-22 2023-11-03 山东力加力钢结构有限公司 一种混凝土搅拌过程视觉检测方法

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112730424A (zh) * 2020-12-11 2021-04-30 上海大学 一种航空插头针脚缺陷检测装置及方法
CN113176270B (zh) * 2021-06-29 2021-11-09 中移(上海)信息通信科技有限公司 一种调光方法、装置及设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105091790A (zh) * 2014-05-23 2015-11-25 旺矽科技股份有限公司 视觉检测系统及视觉检测方法
JP2016078150A (ja) * 2014-10-15 2016-05-16 Jfeスチール株式会社 鋼板表面欠陥研削装置および方法
CN106091996A (zh) * 2016-05-26 2016-11-09 东华大学 一种铺料平整度在线视觉检测方法
CN106204540A (zh) * 2016-06-29 2016-12-07 上海晨兴希姆通电子科技有限公司 视觉检测方法
CN106378514A (zh) * 2016-11-22 2017-02-08 上海大学 基于机器视觉的不锈钢非均匀细微多焊缝视觉检测系统和方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4174487B2 (ja) * 2005-03-24 2008-10-29 アドバンスド・マスク・インスペクション・テクノロジー株式会社 画像補正方法
CN105323459B (zh) * 2015-05-25 2018-10-16 维沃移动通信有限公司 图像处理方法及移动终端
CN105578063B (zh) * 2015-07-14 2018-04-10 宇龙计算机通信科技(深圳)有限公司 一种图像处理方法和终端
KR20170032602A (ko) * 2015-09-15 2017-03-23 삼성전자주식회사 결함 촬상 장치, 이를 구비하는 결함 검사 시스템 및 이를 이용한 결함 검사 방법
CN106529380B (zh) * 2015-09-15 2019-12-10 阿里巴巴集团控股有限公司 图像的识别方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105091790A (zh) * 2014-05-23 2015-11-25 旺矽科技股份有限公司 视觉检测系统及视觉检测方法
JP2016078150A (ja) * 2014-10-15 2016-05-16 Jfeスチール株式会社 鋼板表面欠陥研削装置および方法
CN106091996A (zh) * 2016-05-26 2016-11-09 东华大学 一种铺料平整度在线视觉检测方法
CN106204540A (zh) * 2016-06-29 2016-12-07 上海晨兴希姆通电子科技有限公司 视觉检测方法
CN106378514A (zh) * 2016-11-22 2017-02-08 上海大学 基于机器视觉的不锈钢非均匀细微多焊缝视觉检测系统和方法

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111243015A (zh) * 2018-11-29 2020-06-05 合肥泰禾光电科技股份有限公司 货箱位置检测方法及装置
CN111243015B (zh) * 2018-11-29 2023-05-12 合肥泰禾智能科技集团股份有限公司 货箱位置检测方法及装置
CN112255973A (zh) * 2019-07-02 2021-01-22 库卡机器人(广东)有限公司 工业生产系统中的目标检测方法、检测终端及存储介质
CN110414511A (zh) * 2019-07-30 2019-11-05 深圳市普渡科技有限公司 合作标志识别方法及系统
CN110796149A (zh) * 2019-10-09 2020-02-14 陈浩能 食品追溯的图像比对方法及相关装置
CN110796149B (zh) * 2019-10-09 2023-10-27 陈浩能 食品追溯的图像比对方法及相关装置
CN113128302A (zh) * 2019-12-30 2021-07-16 深圳云天励飞技术有限公司 图像检测方法及相关产品
CN113128302B (zh) * 2019-12-30 2024-06-11 深圳云天励飞技术有限公司 图像检测方法及相关产品
CN111369513A (zh) * 2020-02-28 2020-07-03 广州视源电子科技股份有限公司 一种异常检测方法、装置、终端设备及存储介质
CN111369513B (zh) * 2020-02-28 2023-10-20 广州视源电子科技股份有限公司 一种异常检测方法、装置、终端设备及存储介质
CN111654955B (zh) * 2020-04-30 2022-07-29 钱丽丽 基于图像分析的室内环境光变化因素识别方法
CN111654955A (zh) * 2020-04-30 2020-09-11 钱丽丽 基于图像分析的室内环境光变化因素识别方法
CN111723724A (zh) * 2020-06-16 2020-09-29 东软睿驰汽车技术(沈阳)有限公司 一种路面障碍物识别方法和相关装置
CN111723724B (zh) * 2020-06-16 2024-04-02 东软睿驰汽车技术(沈阳)有限公司 一种路面障碍物识别方法和相关装置
CN111862050A (zh) * 2020-07-22 2020-10-30 无锡先导智能装备股份有限公司 一种物料检测系统、方法及设备
CN111882541A (zh) * 2020-07-28 2020-11-03 广州柔视智能科技有限公司 缺陷检测方法、装置、设备及计算机可读存储介质
CN112150544B (zh) * 2020-09-24 2024-03-19 西门子(中国)有限公司 吊钩到位检测方法、装置和计算机可读介质
CN112150544A (zh) * 2020-09-24 2020-12-29 西门子(中国)有限公司 吊钩到位检测方法、装置和计算机可读介质
CN113776629A (zh) * 2021-09-08 2021-12-10 广州计量检测技术研究院 玻璃量器自动校准系统及控制方法、装置
CN115002320A (zh) * 2022-05-27 2022-09-02 北京理工大学 基于视觉检测的光强调节方法、装置、系统及处理设备
CN115283296A (zh) * 2022-10-08 2022-11-04 烟台台芯电子科技有限公司 基于igbt生产的视觉检测系统及方法
CN115283296B (zh) * 2022-10-08 2023-03-10 烟台台芯电子科技有限公司 基于igbt生产的视觉检测系统及方法
CN116993716B (zh) * 2023-09-22 2023-12-19 山东力加力钢结构有限公司 一种混凝土搅拌过程视觉检测方法
CN116993716A (zh) * 2023-09-22 2023-11-03 山东力加力钢结构有限公司 一种混凝土搅拌过程视觉检测方法

Also Published As

Publication number Publication date
CN110476056A (zh) 2019-11-19
CN110476056B (zh) 2022-02-18

Similar Documents

Publication Publication Date Title
WO2018195797A1 (zh) 一种视觉检测方法、检测设备以及机器人
CN111340752B (zh) 屏幕的检测方法、装置、电子设备及计算机可读存储介质
CN110544258B (zh) 图像分割的方法、装置、电子设备和存储介质
CN111308448B (zh) 图像采集设备与雷达的外参确定方法及装置
CN109002795B (zh) 车道线检测方法、装置及电子设备
US10504244B2 (en) Systems and methods to improve camera intrinsic parameter calibration
CN111028213A (zh) 图像缺陷检测方法、装置、电子设备及存储介质
US9646370B2 (en) Automatic detection method for defects of a display panel
CN109993800A (zh) 一种工件尺寸的检测方法、装置及存储介质
EP4411633A1 (en) Method and device for detecting stability of vision system
JP2012032370A (ja) 欠陥検出方法、欠陥検出装置、学習方法、プログラム、及び記録媒体
CN105957041B (zh) 一种广角镜头红外图像畸变校正方法
US20210103741A1 (en) Detection method and apparatus for automatic driving sensor, and electronic device
CN110596120A (zh) 玻璃边界缺陷检测方法、装置、终端及存储介质
Ouellet et al. Precise ellipse estimation without contour point extraction
CN108871185B (zh) 零件检测的方法、装置、设备以及计算机可读存储介质
CN112345534B (zh) 一种基于视觉的泡罩板中颗粒的缺陷检测方法及系统
CN111025701A (zh) 一种曲面液晶屏幕检测方法
CN111681186A (zh) 图像处理方法、装置、电子设备及可读存储介质
CN115564723A (zh) 一种晶圆缺陷检测方法及应用
US9239230B2 (en) Computing device and method for measuring widths of measured parts
CN110310239B (zh) 一种基于特性值拟合消除光照影响的图像处理方法
WO2024016715A1 (zh) 检测系统的成像一致性的方法、装置和计算机存储介质
CN111062984A (zh) 视频图像区域面积的测量方法、装置、设备及存储介质
CN117173090A (zh) 焊接缺陷类型识别方法、装置、存储介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17907666

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17907666

Country of ref document: EP

Kind code of ref document: A1