CN113838144B - Method for positioning an object on a UV printer based on machine vision and deep learning
- Publication number: CN113838144B
- Application number: CN202111073232.6A
- Authority: CN (China)
- Prior art keywords: worktable, article, image, segmentation, calibration
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/0004 — Industrial image inspection
- G06T7/11 — Region-based segmentation
- G06T7/70 — Determining position or orientation of objects or cameras
- H04N23/695 — Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
Abstract
The invention provides a method for positioning objects on a UV printer based on machine vision and deep learning. An image of the worktable area is captured by an image acquisition module mounted directly above the worktable of the UV printer, the image is analyzed with deep learning methods to calculate the precise position of each object in the worktable area, and drawing software composes the current print image from these positions and the patterns to be printed, then delivers it to the UV printer for printing. In this way, objects placed arbitrarily on the worktable are printed accurately with their intended patterns. The method achieves high object-positioning accuracy and therefore high printing accuracy, imposes no strict requirements on the type or placement of the objects, and improves the effectiveness and utilization of the UV printer.
Description
Technical Field
The invention belongs to the technical field of object positioning, and in particular relates to a method for positioning objects on a UV printer based on machine vision and deep learning.
Background
UV printing is one of the most common and widely applied technologies in the printing industry. It works on almost any material, requires no plate making, prints on demand, and offers high precision, high speed, economy and environmental friendliness, so it is used in a wide range of flat-surface printing scenarios. A UV printer is simple to operate: it inkjet-prints onto the surface of an article placed on the worktable according to a drawing in the accompanying software. However, the drawing's pattern area must correspond precisely to the article's surface area on the worktable; otherwise the pattern is printed inaccurately.
The industry typically avoids this problem with molds: a mold grid of the same size as the article to be printed is fixed on the worktable, and articles are placed in the grid during printing. This solves the problem to some extent but introduces drawbacks: a fixed mold can only accommodate the article type it was made for, which limits the printer's flexibility.
Disclosure of Invention
The invention aims to solve the above technical problems and provides a method for positioning objects on a UV printer based on machine vision and deep learning.
To achieve this purpose, the invention adopts the following technical scheme:
A method for positioning objects on a UV printer based on machine vision and deep learning, the method comprising:
S1, calibrating the worktable image acquisition module, the calibration method comprising:
S101, preparing a calibration plate by printing an m × n black-and-white checkerboard for calibration;
S102, placing the calibration plate on the worktable at different positions and inclination angles, and photographing each placement with the worktable image acquisition module mounted above the worktable;
S103, performing checkerboard detection on all calibration images and computing the camera's intrinsic matrix, extrinsic matrix and distortion coefficients, collectively called the camera parameters;
S2, the worktable image acquisition module capturing an image of the area on the worktable of the UV printer;
S3, after receiving the image sent by the worktable image acquisition module, the worktable-image object detection module preprocessing the captured worktable image and feeding it into the object detection network, which outputs the rectangular position of each object; the module thus detects the rectangular region in which each object in the image lies;
S4, after receiving the region image of each object, the worktable-image object segmentation module preprocessing the object region images and feeding them into the object segmentation network, which outputs the object segmentation masks; the edge contour of each mask is extracted, and the minimum bounding rectangle of each segmented region is derived from its contour to obtain each object's pixel position;
S5, calculating the coordinate position of each object relative to the worktable from its pixel position and the camera parameters.
In S1, a specific calibration code is placed on the worktable of the UV printer; its world coordinates relative to the worktable are known and recorded as the true value. An image of the worktable area is then captured, the pixel coordinates of the calibration code in the image are detected, and the estimated world coordinates of the calibration code relative to the worktable are computed from the intrinsics of the worktable image acquisition module. The Euclidean distance between the estimated and true coordinates is the reference error of the camera calibration and positioning module. If this error exceeds a certain threshold, the camera and worktable must be adjusted and the calibration repeated until the error meets the condition.
As a preferred embodiment, in S4 the position of each object's minimum bounding rectangle is converted using the camera parameters to obtain the rectangle's position and rotation angle in world coordinates relative to the worktable.
As a preferred embodiment, for objects of different heights a height parameter is added and combined with the camera parameters for the world-coordinate conversion.
As a preferred embodiment, for objects with straight edges line detection is added, and the angle computed from the longest detected peripheral line is used to correct the deviation of the angle computed from the minimum bounding rectangle.
As a preferred embodiment, before S1 is executed the following judgment is made: check whether camera parameters exist and whether the calibration error exceeds the threshold; if no camera parameters exist or the calibration error exceeds the threshold, execute S1; if camera parameters exist and the calibration error does not exceed the threshold, execute S2.
As a preferred embodiment, in S3 the object detection model is generated by the object detection model training module and loaded into the object detection network.
As a preferred embodiment, the object detection model training module generates the object detection model as follows:
S301, acquiring object detection samples based on image synthesis;
S302, preprocessing the object detection samples and scaling them to a fixed size as training samples, the corresponding label being the top-left coordinates, width and height of the object's rectangular position in the image;
S303, training after the training conditions are set, and saving the object detection model when training finishes.
As a preferred embodiment, in S4 the object segmentation model is generated by the object segmentation model training module and loaded into the object segmentation network.
As a preferred embodiment, the object segmentation model training module generates the object segmentation model as follows:
S401, acquiring object segmentation samples based on manual annotation;
S402, acquiring object segmentation samples based on image synthesis;
S403, preprocessing the object segmentation samples and scaling them to a fixed size as training samples, the corresponding label being the binary image of the object segmentation mask;
S404, training after the training conditions are set, and saving the object segmentation model when training finishes.
With the above technical scheme, the invention has the following advantages:
An image of the worktable area is captured by the worktable image acquisition module mounted directly above the worktable of the UV printer, the image is analyzed with deep learning methods to calculate the precise position of each object in the worktable area, and drawing software composes the current print image from these positions and the patterns to be printed, then delivers it to the UV printer for printing. In this way, objects placed arbitrarily on the worktable are printed accurately with their intended patterns. The method achieves high object-positioning accuracy and therefore high printing accuracy, imposes no strict requirements on the type or placement of the objects, and improves the effectiveness and utilization of the UV printer.
Detailed Description
The present invention will be described in further detail with reference to specific examples.
A method for positioning objects on a UV printer based on machine vision and deep learning, the method comprising:
S0, judge whether camera parameters exist and whether the calibration error exceeds the threshold. If no camera parameters exist or the calibration error exceeds the threshold, execute S1; if camera parameters exist and the calibration error does not exceed the threshold, execute S2.
This step is performed after the worktable image acquisition module is first installed, when its position has shifted significantly, or when the printing accuracy of the UV printer becomes problematic. In this embodiment the worktable image acquisition module is a still camera and/or a video camera.
S1, calibrate the worktable image acquisition module. The calibration method comprises:
S101, prepare a calibration plate. Its size is chosen relative to the worktable, for example with 25 mm × 25 mm or 60 mm × 60 mm squares and the plate about 1/3 the size of the worktable, and its surface should be as flat as possible. Print an m × n black-and-white checkerboard for calibration; in this embodiment the specification is 12 × 9.
S102, place the calibration plate on the worktable at different positions and inclination angles, and photograph each placement with the worktable image acquisition module above the worktable, capturing 20 to 40 images in total.
S103, perform checkerboard detection on all calibration images and compute the camera's intrinsic matrix, extrinsic matrix and distortion coefficients, collectively called the camera parameters. These serve two main purposes: 1. correcting distortion in images captured by the camera; 2. converting the pixel coordinates of an object in a captured image into its world coordinates relative to the worktable.
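The checkerboard detection and parameter computation in S101–S103 map directly onto a standard calibration API. The following is a minimal sketch assuming OpenCV (the patent does not name a library); the image folder, square size and inner-corner count are illustrative:

```python
# Minimal checkerboard calibration sketch (assumes OpenCV).
import glob
import cv2
import numpy as np

PATTERN = (11, 8)   # inner corners of a 12x9 checkerboard (assumed)
SQUARE_MM = 25.0    # checker square size in mm (assumed)

# 3D corner positions on the board plane, z = 0
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):   # hypothetical folder of the 20-40 shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

assert img_points, "no checkerboards detected"
# Intrinsic matrix K, distortion coefficients, and per-view extrinsics
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```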
S104, to verify the accuracy of the camera parameters, a specific calibration code is placed on the worktable; its world coordinates relative to the worktable are known and recorded as the true value. An image of the worktable area is then captured and the pixel coordinates of the calibration code in the image are detected.
S105, the estimated world coordinates of the calibration code relative to the worktable are computed from the camera intrinsics. The Euclidean distance between the estimated and true coordinates is the reference error of the camera calibration and positioning module. If the error exceeds a certain threshold, for example 0.5 mm, the camera and worktable must be adjusted and the calibration repeated until the error meets the condition.
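A hedged sketch of this accuracy check follows. The patent does not specify how the calibration code is detected or how pixels are mapped to the table plane; here a table-plane homography `H_table2img` (derivable from the extrinsics of a view of the worktable surface) is assumed, and `pixel_to_table` is an illustrative helper:

```python
# Sketch of the S104-S105 check: map the fiducial's detected pixel back to
# worktable coordinates and compare with the known ground-truth position.
import cv2
import numpy as np

def pixel_to_table(pt_px, K, dist, H_table2img):
    """Undistort a pixel and map it to table-plane (world) coordinates."""
    und = cv2.undistortPoints(
        np.array([[pt_px]], np.float32), K, dist, P=K).reshape(2)
    world = cv2.perspectiveTransform(
        und.reshape(1, 1, 2), np.linalg.inv(H_table2img)).reshape(2)
    return world

# true_xy: known table coordinates of the calibration code (ground truth)
# est_xy = pixel_to_table(detected_px, K, dist, H_table2img)
# error = np.linalg.norm(est_xy - true_xy)   # re-calibrate if error > 0.5 mm
```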
S2, the worktable image acquisition module captures an image of the area on the worktable of the UV printer. Triggering, shooting, transmission and result retrieval of the worktable image acquisition are usually completed through coordinated hardware operation. The captured worktable-area image shows several objects placed on the worktable in arbitrary poses.
S3, after receiving the image sent by the worktable image acquisition module, the worktable-image object detection module preprocesses the captured worktable image and feeds it into the object detection network, which outputs the rectangular position of each object; the module thus detects the rectangular region in which each object in the image lies.
The captured worktable image is preprocessed as follows: scale the picture so that its longest side is 640 pixels, store the horizontal and vertical scaling factors, and pad the short side with the pixel value (128, 128, 128) until the image is 640 × 640; then subtract 128 from each pixel value on the RGB channels and divide by 128.
The processed 3-channel 640 × 640 data is fed into the object detection network, which outputs the rectangle of each object's position in the image, namely the top-left coordinates (x, y), width and height, together with the object's class label and confidence.
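A sketch of the stated preprocessing (longest side to 640, gray padding, (x − 128)/128 normalization), assuming OpenCV and a BGR input image; `preprocess` is an illustrative name:

```python
# Sketch of the described preprocessing for the detection network input.
import cv2
import numpy as np

def preprocess(img_bgr):
    h, w = img_bgr.shape[:2]
    scale = 640.0 / max(h, w)                             # longest side -> 640
    resized = cv2.resize(img_bgr, (round(w * scale), round(h * scale)))
    canvas = np.full((640, 640, 3), 128, np.uint8)        # (128,128,128) padding
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    tensor = (canvas.astype(np.float32) - 128.0) / 128.0  # (x - 128) / 128
    return tensor, scale

# The network (described only as "25 convolutional layers") is assumed to
# return per item: top-left (x, y), width, height, class label and confidence
# in the 640x640 frame; dividing the box by `scale` maps it back to the
# original image pixels.
```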
The object detection model is generated by the object detection model training module and loaded into the object detection network.
The object detection model training module generates the object detection model as follows:
S301, acquiring object detection samples based on image synthesis;
S302, preprocessing the object detection samples and scaling them to a fixed size as training samples, the corresponding label being the top-left coordinates, width and height of the object's rectangular position in the image;
S303, training after the training conditions are set, and saving the object detection model when training finishes.
The collected images of objects on the worktable are manually annotated with rectangular boxes, and the box coordinates of each target region are recorded.
The object detection samples are preprocessed as follows: finished-product images of customized content are synthesized from customized-content product preview images and worktable background images, and the rectangular box coordinates of each target region are recorded. Each sample picture is preprocessed by scaling it so that its longest side is 640 pixels, storing the horizontal and vertical scaling factors, and padding the short side with the pixel value (128, 128, 128) until the image is 640 × 640, then subtracting 128 from each pixel value on the RGB channels and dividing by 128. The processed sample pictures and rectangle coordinates are fed into the object detection network, a convolutional neural network consisting of 25 convolutional layers. The detection network and training pipeline are built with PyTorch, the initial learning rate is set to 0.01 and the number of terminating iteration epochs to 300, SGD is selected as the optimizer, and the object detection model is finally output.
S4, after receiving the region image of each object, the worktable-image object segmentation module preprocesses the object region images and feeds them into the object segmentation network, which outputs the object segmentation masks; the edge contour of each mask is extracted, and the minimum bounding rectangle of each segmented region is derived from its contour to obtain each object's pixel position.
The object region image is preprocessed as follows: scale the picture so that its longest side is 640 pixels and center it in a 640 × 640 image whose remaining pixels are (128, 128, 128); then subtract 128 from each pixel value on the RGB channels and divide by 128. The processed 3-channel 640 × 640 data is fed into the object segmentation network, which outputs a 1-channel 640 × 640 binary image as the object segmentation mask. Edges are extracted from the binary mask, and the minimum bounding rectangle is computed from those edges.
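The mask-to-rectangle step has a direct formulation in OpenCV (assumed here, as the patent names no library): extract contours from the binary mask and take the minimum-area rotated rectangle. `mask_to_min_rect` is an illustrative helper:

```python
# Sketch of the post-processing: contours of the binary segmentation mask,
# then the minimum-area bounding rectangle of the item's outline.
import cv2

def mask_to_min_rect(mask):
    """mask: uint8 binary image (0/255). Returns ((cx, cy), (w, h), angle)."""
    contours, _ = cv2.findContours(
        mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)   # keep the item's outline
    return cv2.minAreaRect(largest)                # center, size, rotation angle
```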
The position of each object's minimum bounding rectangle is converted using the camera parameters to obtain the rectangle's position and rotation angle in world coordinates relative to the worktable.
In particular, for objects of different heights a height parameter is added and combined with the camera parameters for the world-coordinate conversion.
In particular, for objects with straight edges line detection is added, and the angle computed from the longest detected peripheral line is used to correct the deviation of the angle computed from the minimum bounding rectangle, as sketched below.
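One plausible reading of this straight-edge correction is to detect line segments in the item region and substitute the angle of the longest one for the rectangle's angle. A hedged sketch using Hough line detection (the patent does not name the detector; the Canny and Hough thresholds are illustrative):

```python
# Sketch of the straight-edge angle refinement.
import cv2
import numpy as np

def refined_angle(gray_crop, rect_angle):
    edges = cv2.Canny(gray_crop, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        return rect_angle                  # fall back to the minAreaRect angle
    x1, y1, x2, y2 = max(                  # pick the longest detected segment
        (l[0] for l in lines),
        key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
```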
The object segmentation model is generated by the object segmentation model training module and loaded into the object segmentation network.
The object segmentation model training module generates the object segmentation model as follows:
S401, acquiring object segmentation samples based on manual annotation;
S402, acquiring object segmentation samples based on image synthesis;
S403, preprocessing the object segmentation samples and scaling them to a fixed size as training samples, the corresponding label being the binary image of the object segmentation mask;
S404, training after the training conditions are set, and saving the object segmentation model when training finishes.
The object segmentation samples are preprocessed as follows: pictures of objects placed on the worktable are collected, the contour points of each object region image are manually annotated, and the corresponding binary mask images are generated from the contour points; these serve as the labels for training the object segmentation model. Preset image augmentations are applied to the object region images, in this scheme: random color transformation, random rotation, random Gaussian noise and random Gaussian blur. Each sample picture is preprocessed by scaling it to 640 × 640 along its longest side, then subtracting 128 from each pixel value on the RGB channels and dividing by 128. The processed sample pictures and binary mask images are fed into the object segmentation network, a convolutional neural network consisting of 30 convolutional layers. The segmentation network and training pipeline are built with PyTorch, the initial learning rate is set to 0.01 and the number of terminating iteration epochs to 100, SGD is selected as the optimizer, and the object segmentation model is finally output.
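The four stated augmentations (random color transformation, rotation, Gaussian noise, Gaussian blur) might be applied jointly to an item image and its mask as below; geometric transforms must hit both, photometric ones only the image. OpenCV usage and all parameter ranges are assumptions:

```python
# Sketch of the stated augmentations for a segmentation training pair.
import cv2
import numpy as np

def augment(img, mask, rng=np.random.default_rng()):
    # random rotation about the center (applied to image AND mask)
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), rng.uniform(-180, 180), 1.0)
    img = cv2.warpAffine(img, M, (w, h), borderValue=(128, 128, 128))
    mask = cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)
    # random color shift, Gaussian noise, Gaussian blur (image only)
    img = cv2.convertScaleAbs(img, alpha=rng.uniform(0.8, 1.2),
                              beta=rng.uniform(-20, 20))
    img = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
    if rng.random() < 0.5:
        img = cv2.GaussianBlur(img, (5, 5), 0)
    return img, mask
```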
S5, calculate the coordinate position of each object relative to the worktable from its pixel position and the camera parameters.
After the device camera is calibrated, the objects to be printed are placed on the worktable of the UV printer, the device camera captures an image of the worktable area, the position and angle of each object in the image are located based on machine vision and deep learning and converted into a position and angle relative to the worktable, and the surface region of each object is thus printed accurately. Because accurate printing no longer depends on where the objects are placed or what type they are, a single UV printer can print objects flexibly and efficiently.
In addition to the preferred embodiments described above, the invention admits other embodiments. Those skilled in the art can make various changes and modifications without departing from the spirit of the invention, whose scope is defined by the appended claims.
Claims (2)
1. A method for positioning objects on a UV printer based on machine vision and deep learning, characterized in that the method comprises the following steps:
S1, calibrating the worktable image acquisition module, the calibration method comprising:
S101, preparing a calibration plate by printing an m × n black-and-white checkerboard for calibration;
S102, placing the calibration plate on the worktable at different positions and inclination angles, and photographing each placement with the worktable image acquisition module above the worktable;
S103, performing checkerboard detection on all calibration images and computing the camera's intrinsic matrix, extrinsic matrix and distortion coefficients, collectively called the camera parameters;
wherein in S1 a specific calibration code is placed on the worktable of the UV printer, its world coordinates relative to the worktable being known and recorded as the true value; an image of the worktable area is captured, the pixel coordinates of the calibration code in the image are detected, the estimated world coordinates of the calibration code relative to the worktable are computed from the intrinsics of the worktable image acquisition module, and the Euclidean distance between the estimated and true coordinates is the reference error of the camera calibration and positioning module; if the error exceeds a certain threshold, the camera and worktable are adjusted and the calibration repeated until the error meets the condition;
S2, the worktable image acquisition module capturing an image of the area on the worktable of the UV printer;
S3, after receiving the image sent by the worktable image acquisition module, the worktable-image object detection module preprocessing the captured worktable image and feeding it into the object detection network, which outputs the rectangular position of each object, thereby detecting the rectangular region in which each object in the image lies;
wherein the object detection model is generated by the object detection model training module and loaded into the object detection network as follows:
S301, acquiring object detection samples based on image synthesis;
S302, preprocessing the object detection samples and scaling them to a fixed size as training samples, the corresponding label being the top-left coordinates, width and height of the object's rectangular position in the image;
S303, training after the training conditions are set, and saving the object detection model when training finishes;
S4, after receiving the region image of each object, the worktable-image object segmentation module preprocessing the object region images and feeding them into the object segmentation network, which outputs the object segmentation masks; extracting the edge contour of each mask and deriving the minimum bounding rectangle of each segmented region from its contour to obtain each object's pixel position;
wherein the position of each object's minimum bounding rectangle is converted using the camera parameters to obtain the rectangle's position and rotation angle in world coordinates relative to the worktable; for objects of different heights a height parameter is added and combined with the camera parameters for the world-coordinate conversion; for objects with straight edges line detection is added, and the angle computed from the longest detected peripheral line is used to correct the deviation of the angle computed from the minimum bounding rectangle;
wherein the object segmentation model is generated by the object segmentation model training module and loaded into the object segmentation network as follows:
S401, acquiring object segmentation samples based on manual annotation;
S402, acquiring object segmentation samples based on image synthesis;
S403, preprocessing the object segmentation samples and scaling them to a fixed size as training samples, the corresponding label being the binary image of the object segmentation mask;
S404, training after the training conditions are set, and saving the object segmentation model when training finishes;
S5, calculating the coordinate position of each object relative to the worktable from its pixel position and the camera parameters;
and the drawing software composing the current print image from each object's coordinate position relative to the worktable and its pattern to be printed, then delivering it to the UV printer for printing.
2. The method for positioning objects on a UV printer based on machine vision and deep learning according to claim 1, characterized in that before S1 is executed the following judgment is made: check whether camera parameters exist and whether the calibration error exceeds the threshold; if no camera parameters exist or the calibration error exceeds the threshold, execute S1; if camera parameters exist and the calibration error does not exceed the threshold, execute S2.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202111073232.6A | 2021-09-14 | 2021-09-14 | Method for positioning object on UV printer based on machine vision and deep learning

Publications (2)

Publication Number | Publication Date
---|---
CN113838144A | 2021-12-24
CN113838144B | 2023-05-19

Family

ID=78959141

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202111073232.6A (Active) | CN113838144B | 2021-09-14 | 2021-09-14

Country Status (1)

Country | Link
---|---
CN | CN113838144B (en)
Families Citing this family (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN116416020A | 2021-12-29 | 2023-07-11 | 霍夫纳格智能科技(嘉兴)有限公司 | Pattern printing method for vending machine and vending machine
CN114463752A | 2022-01-20 | 2022-05-10 | 湖南视比特机器人有限公司 | Vision-based code spraying positioning method and device
CN116080290B | 2022-12-29 | 2024-08-27 | 上海魅奈儿科技有限公司 | Three-dimensional high-precision fixed-point printing method and device
CN117495961A | 2023-11-01 | 2024-02-02 | 广州市森扬电子科技有限公司 | Detection method, equipment and storage medium for mark point positioning printing based on 2D vision

Family Cites Families (3)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN109784297A | 2019-01-26 | 2019-05-21 | 福州大学 | A kind of Three-dimensional target recognition based on deep learning and Optimal Grasp method
JP2020121503A | 2019-01-31 | 2020-08-13 | セイコーエプソン株式会社 | Printer, machine learning device, machine learning method and printing control program
CN112700499B | 2020-11-04 | 2022-09-13 | 南京理工大学 | Deep learning-based visual positioning simulation method and system in irradiation environment
Legal Events

Date | Code | Title
---|---|---
 | PB01 | Publication
 | SE01 | Entry into force of request for substantive examination
 | GR01 | Patent grant