
CN114264661A - Definition self-adaptive coiled material detection method, device and system - Google Patents


Info

Publication number
CN114264661A
Authority
CN
China
Prior art keywords
image
area
flaw
images
detection
Prior art date
Legal status: Granted
Application number
CN202111480868.2A
Other languages
Chinese (zh)
Other versions
CN114264661B (en)
Inventor
余建安
陈浙泊
潘凌锋
陈镇元
叶雪旺
张一航
林建宇
陈一信
陈龙威
吴荻苇
Current Assignee
Research Institute of Zhejiang University Taizhou
Original Assignee
Research Institute of Zhejiang University Taizhou
Priority date
Filing date
Publication date
Application filed by Research Institute of Zhejiang University Taizhou
Priority to CN202111480868.2A
Priority to CN202410586785.9A
Publication of CN114264661A
Application granted
Publication of CN114264661B
Legal status: Active

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Processing (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention relates to a sharpness-adaptive coiled material detection method, device and system. The device comprises an operating console, support frames, a sliding rod, a camera module and a light source module. There are two support frames, arranged symmetrically on the two sides of the production line; a sliding carriage is mounted on each support frame and can slide up and down along it; the sliding rod is hinged on the sliding carriages of the two support frames; the camera module is slidably mounted on the sliding rod through a horizontal adjusting device; the light source modules are mounted on the support frames, above and below the coiled material on the production line respectively; the operating console stands on the ground and is in communication connection with the camera module and the light source module. Precise position adjustment of the camera module is achieved by adjusting the height adjusting device on the support frame, rotating the sliding rod, and adjusting the position of the camera module along the sliding rod.

Description

Sharpness-adaptive coiled material detection method, device and system
Technical Field
The invention relates to the field of image detection, and in particular to a sharpness-adaptive coiled material detection method, device and system.
Background
At present, coiled materials are conveyed on a roller production line during production and processing; the coiled material is kept unrolled on the rollers and moves forward lying flat. To find defects in time, flaw detection is usually performed while the coiled material is conveyed along the line. Different types of coiled material show different flaws: coiled materials of PVB mainly show crystal points, insect spots, black spots and bubbles, while coiled materials of synthetic leather mainly show scratches, bubbles, burrs and uneven gluing. Flaw detection comprises two main steps: image acquisition by a camera and image analysis by a processor.
For image acquisition, current flaw-detection equipment mostly uses line-scan cameras fixed on a support so that the camera lens faces the production line, working with a fixed light source to capture images of the coiled material. In such equipment the camera position, angle, exposure parameters and so on must be adjusted in advance, usually by hand, which is very inconvenient; moreover, once the line is running, an operator can hardly reach the camera, so dynamic adaptive adjustment of the flaw-detection equipment on the coiled material line is impossible, degrading the quality of the images the camera acquires.
For image analysis, current flaw-detection methods are inefficient, so flaw detection lags behind acquisition: warnings are slow, flaw positions are missed or judged inaccurately, and the coiled material must be rechecked manually, wasting time and labour and reducing production efficiency. In addition, images captured while the line is running can be blurred, which affects the analysis results, so the images must first be filtered. Existing analysis methods usually apply the same filtering parameters to the whole image; however, because different parts of the same image are blurred to different degrees, uniform filtering parameters give unsatisfactory results in some regions.
Therefore, existing detection devices and methods for coiled material production lines can hardly meet the requirements of real-time, efficient, high-quality flaw detection.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and provides a sharpness-adaptive coiled material detection method, device and system.
To this end, the invention adopts the following technical scheme:
A coiled material detection device comprises an operating console, support frames, a sliding rod, a camera module and a light source module. There are two support frames, arranged symmetrically on the two sides of the production line; a sliding carriage is mounted on each support frame and can slide up and down along it; the sliding rod is hinged on the sliding carriages of the two support frames; the camera module is slidably mounted on the sliding rod through a horizontal adjusting device; the light source modules are mounted on the support frames, above and below the coiled material on the production line respectively; the operating console stands on the ground and is in communication connection with the camera module and the light source module; the console comprises a display and a processor.
A sharpness-adaptive coiled material detection method comprises the following steps:
step S1: the console acquires the images collected by all the line-scan cameras and counts the number of images;
step S2: judge whether an acquired image is a multi-channel image; if so, convert it into a grayscale image and enter step S3; otherwise enter step S3 directly;
step S3: select the images in turn, calculate the image sizes, crop each group of images according to the left and right non-detection areas, and acquire the gray values of the images; each group of images consists of the images collected by the different line-scan cameras at the same moment;
step S4: evaluate the overall image background to judge whether periodic stripes exist; if periodic texture exists, the material is textured and a de-texturing step is required, after which the flow enters the next step; otherwise enter the next step directly;
step S5: complete the sharpness-adaptive process according to the gray values of the image, judge the corresponding filter level, filter the image, and obtain the total defect area in the image;
step S6: perform adjacent multi-defect processing on the total defect area based on a clustering method, and connect the defect regions that meet the requirements;
step S7: obtain a flaw output priority according to the area of each flaw region or connected flaw region, and sort the flaws by that priority;
step S8: acquire the flaw information and judge, in priority order, whether each flaw meets the set threshold range; if it does, put it into the output queue in turn;
step S9: judge whether the number of flaws in the output queue exceeds the set upper limit on flaw output; if it does, output the set number of flaws in priority order and end the step; otherwise output all the flaw information in the queue and end the step.
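As a sketch of the queueing in steps S7 to S9, the following Python fragment orders flaws by area, screens them against a threshold range, and caps the output queue. The flaw record layout, the threshold range and the output limit are illustrative assumptions, not values from the patent.

```python
def queue_flaws(flaws, area_range=(5.0, 1e6), max_output=10):
    """Steps S7-S9 sketch: sort flaws by area (larger area = higher
    output priority), keep those inside the set threshold range, and
    cap the queue at the configured output limit."""
    ordered = sorted(flaws, key=lambda f: f["area"], reverse=True)   # S7
    lo, hi = area_range
    queue = [f for f in ordered if lo <= f["area"] <= hi]            # S8
    return queue[:max_output]                                        # S9
```

For example, with flaws of area 3, 50 and 20 and a range of (5, 100), only the 50- and 20-pixel flaws are queued, largest first.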
Further, the cropping and gray-value acquisition of step S3 comprises the following steps:
step S31: acquire a group of images and judge whether the group contains one image or more than one; if one, enter step S32; if more than one, enter step S33;
step S32: the group contains a single image: compare the widths of the left and right crop areas, which are derived from the left and right non-detection areas, with the image width; if the sum of the two crop widths is greater than the image width, enter step S35; if it is less than or equal to the image width, enter step S36;
step S33: the group contains more than one image: compare the width of the first image with the width of the left crop area; if the first image is narrower than the left crop area, enter step S35; otherwise enter step S34;
step S34: compare the width of the last image with the width of the right crop area; if the last image is narrower than the right crop area, enter step S35; otherwise enter step S36;
step S35: the crop area is too large, so the image detection is judged abnormal; the step ends and the detection algorithm terminates;
step S36: take one image of the group in turn and judge whether it is the first image of the group; if so, enter step S37; if not, enter step S38;
step S37: the image is the first of the group: judge whether the group contains only this one image; if so, derive the left and right crop areas from the left and right non-detection areas, crop the image, and enter step S310; if not, calculate only the left crop area of the image, crop the image, and enter step S310;
step S38: the image is not the first of the group: judge whether it is the last of the group; if so, enter step S39; if not, enter step S310 directly;
step S39: the image is the last of the group and not the first: calculate the right crop area of the image, crop the image, and enter step S310;
step S310: calculate the gray value of the resulting image; the step ends.
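The branching of steps S31 to S39 reduces to: a single-image group is cropped on both sides, a multi-image group is cropped only on its outer edges, and detection aborts when a crop area is wider than the edge image. A compact pure-Python sketch; the (start, end) column convention is an assumption:

```python
def crop_group(widths, left_cut, right_cut):
    """Given the widths of the images in one group (one per line-scan
    camera, left to right) and the widths of the left/right non-detection
    areas, return per-image (start, end) crop columns, or None when a
    crop area is too large (detection abnormal)."""
    n = len(widths)
    if n == 1:
        if left_cut + right_cut > widths[0]:
            return None  # S35: crop wider than the image itself
        return [(left_cut, widths[0] - right_cut)]
    # More than one image: the left cut applies to the first image only,
    # the right cut to the last image only (S33-S39)
    if widths[0] < left_cut or widths[-1] < right_cut:
        return None  # S35: cut region wider than the edge image
    crops = [(left_cut, widths[0])]
    crops += [(0, w) for w in widths[1:-1]]
    crops.append((0, widths[-1] - right_cut))
    return crops
```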
Further, in step S310, after the gray value of the image is obtained, its average gray value is also computed; if the average gray value is higher than a set value Y1 or lower than a set value Y2, the image is considered too bright or too dark respectively, the image is judged abnormal, and the detection algorithm terminates.
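A minimal sketch of this brightness check; the Y1/Y2 defaults here are placeholders, not values from the patent:

```python
import numpy as np

def brightness_ok(gray_img, y1=230.0, y2=20.0):
    """Return False when the mean gray value is above Y1 (too bright)
    or below Y2 (too dark), in which case detection is aborted."""
    mean = float(np.asarray(gray_img, dtype=np.float64).mean())
    return y2 <= mean <= y1
```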
Further, the de-texturing step in step S4 comprises:
step S41: obtain the cropped image and calculate its Width and Height;
step S42: extract a sub-image of size (1/2 Width) × (1/2 Height) at a random position in the image;
step S43: place two mutually perpendicular straight lines L1 and L2 at random positions in the sub-image;
step S44: apply bilateral filtering to the sub-image to remove sharp noise while preserving the edges without blurring, perform edge enhancement, and calculate the second-derivative images of the sub-image in the width and height directions respectively;
step S45: from the second-derivative images in the width and height directions, obtain the linear regions in the second-derivative images;
step S46: extract the skeletons of the linear regions from the second-derivative images according to the linear regions and the light-to-dark polarity change, and convert the linear objects to obtain stripes;
step S47: calculate the Hough-transform values of all the extracted straight lines;
step S48: merge the stripes that belong to the same linear-region skeleton, have the same angle, and are shorter than a set pixel length L3;
step S49: after merging, eliminate the stripes shorter than a set pixel length L4;
step S410: acquire the remaining stripes and select those that intersect straight line L1 or straight line L2;
step S411: according to the intersections with straight line L1, extract the stripes whose intersection distances and included angles repeat; do the same according to the intersections with straight line L2;
step S412: judge whether the stripes extracted in step S411 form included angles with straight lines L1 and L2; if a stripe forms no angle with either line, it is considered parallel to L1 or L2 and the flow enters step S414; if included angles exist, the stripe is considered parallel to neither L1 nor L2 and the flow enters step S413;
step S413: from the included angles between the stripes and lines L1 and L2 and from the intersection distances, acquire the projections of the intersection distances onto the stripes, including the projection lengths and positions; the projection length is the period of the periodic stripes;
step S414: acquire the periodicity information of the stripes, comprising the projection lengths and positions of the intersection distances on the stripes, the intersections of the stripes with line L1 or L2, and the angles between the stripes and lines L1 and L2;
step S415: generate a periodic function from the stripe periodicity information and obtain the periodic frequency of the stripes by Fourier-series expansion;
step S416: obtain a specific spatial filter from the periodic frequencies of the first n Fourier-series terms;
step S417: convolve the cropped image obtained in step S41 with the specific spatial filter to complete the de-texturing of the image; the step ends.
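The core of steps S411 to S417 is estimating the stripe period and cancelling it with a matched spatial filter. A simplified one-dimensional sketch: the FFT-based period estimate and the averaging kernel below are stand-ins for the patented intersection-projection and Fourier-series construction, not the method itself.

```python
import numpy as np

def stripe_period(profile):
    """Estimate the dominant period (in pixels) of a 1-D intensity
    profile taken across the stripes, via the strongest non-DC
    frequency of its spectrum."""
    x = profile - profile.mean()          # remove the DC component
    spec = np.abs(np.fft.rfft(x))
    k = int(np.argmax(spec[1:])) + 1      # strongest non-DC frequency
    return len(profile) / k

def mean_kernel(period):
    """Averaging kernel one period long: convolving across the stripes
    with it cancels a periodic pattern of that period (one possible
    'specific spatial filter' in the sense of S416-S417)."""
    p = max(1, int(round(period)))
    return np.ones(p) / p
```

Convolving a pure 16-pixel-period stripe profile with `mean_kernel(16)` drives the interior of the profile to zero, which is exactly the de-striping effect sought in step S417.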
Further, the sharpness-adaptive process of step S5 comprises cutting the image into sub-images according to the gray-value distribution and obtaining the gray variance D(x), the contrast cont, the gray energy ASM, the inverse difference Homo, and the correlation Corr of each cut sub-image;
the contrast cont reflects the sharpness of the image and the depth of the texture grooves: the higher the contrast, the clearer the image; the lower the contrast, the blurrier the image;
the gray energy ASM reflects the uniformity of the gray distribution of the image: when the image is blurred, the gray distribution is more uniform and the energy value is larger; when the image is clear, the energy value is smaller;
the inverse difference Homo reflects the degree of local variation of the image texture: when the image is blurred, the gray distribution is more uniform and the inverse difference is larger; when the image is clear, the inverse difference is smaller;
the correlation Corr reflects the similarity of the overall gray values of the cut sub-images: when the image is blurred, the gray values change little, the correlation is good and the value is large; when the image is clear, the gray values change sharply, the correlation is poor and the value is low;
the gray variance D(x), contrast cont, gray energy ASM, inverse difference Homo and correlation Corr of the cut sub-image are counted and combined into a weighted mean Ambiguity:
Ambiguity = χ·D(x) + ε·cont + η·ASM + α·Homo + β·Corr
where χ, ε, η, α and β are set weights; the higher the weighted mean Ambiguity, the more blurred the image, and the smaller the kernel of the Gaussian filter that needs to be used.
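The five statistics above are standard gray-level co-occurrence (GLCM) features. A compact NumPy sketch computes them for one sub-image and combines them into a weighted score; the horizontal pixel offset, the 8-level quantization and the all-ones default weights are illustrative assumptions, since the patent leaves χ, ε, η, α, β as set parameters.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast (cont), energy (ASM), inverse difference (Homo) and
    correlation (Corr) from a horizontal-offset co-occurrence matrix,
    plus the gray variance D(x)."""
    q = (img.astype(np.float64) * levels / 256).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                       # count horizontal neighbour pairs
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    cont = ((i - j) ** 2 * p).sum()
    asm = (p ** 2).sum()
    homo = (p / (1.0 + (i - j) ** 2)).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    corr = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12)
    return {"D": float(img.var()), "cont": cont, "ASM": asm,
            "Homo": homo, "Corr": corr}

def ambiguity(feats, w=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Weighted combination of the five statistics; the weights
    (chi, epsilon, eta, alpha, beta) are hypothetical placeholders."""
    chi, eps, eta, alpha, beta = w
    return (chi * feats["D"] + eps * feats["cont"] + eta * feats["ASM"]
            + alpha * feats["Homo"] + beta * feats["Corr"])
```

On a sharp checkerboard versus a flat patch, the flat patch shows higher ASM and Homo and lower contrast, matching the blur behaviour described above.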
Further, the adjacent-area multi-defect processing flow in step S6 comprises the following steps:
step S61: obtain the whole filtered image and filter it again with level-one to level-four Gaussian filters respectively, obtaining four re-filtered images; the kernel size of the level-one Gaussian filter is 64 × 64, of the level-two filter 32 × 32, of the level-three filter 16 × 16, and of the level-four filter 8 × 8;
step S62: acquire the pixel sets of the ordinary dark areas, the very dark areas, the large-area dark areas, the bright areas and the hole-defect areas in the image;
step S63: acquire the dark area, which comprises the ordinary dark, very dark and large-area dark areas;
step S64: acquire the total defect area, which comprises the dark areas, the bright areas and the hole-defect areas;
step S65: apply a closing operation to the total defect area to connect adjacent regions;
step S66: calculate the connected domains of the total defect area and separate all closed, unconnected regions;
step S67: calculate the area and centre-point coordinates of every connected domain in the total defect area; the step ends.
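Steps S66 and S67 amount to connected-component labelling with per-region area and centroid. A dependency-free sketch over a boolean defect mask; 4-connectivity is an assumption, since the patent does not state the connectivity used:

```python
import numpy as np
from collections import deque

def connected_regions(mask):
    """Label 4-connected components of a boolean defect mask and return
    each region's pixel area and centre point (row, col)."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    regions = []
    h, w = mask.shape
    for sr in range(h):
        for sc in range(w):
            if mask[sr, sc] and not seen[sr, sc]:
                q, pix = deque([(sr, sc)]), []   # breadth-first flood fill
                seen[sr, sc] = True
                while q:
                    r, c = q.popleft()
                    pix.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not seen[nr, nc]:
                            seen[nr, nc] = True
                            q.append((nr, nc))
                area = len(pix)
                cy = sum(p[0] for p in pix) / area
                cx = sum(p[1] for p in pix) / area
                regions.append({"area": area, "center": (cy, cx)})
    return regions
```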
Further, in step S62: a pixel whose gray value in the level-one filtered image is smaller than the gray value of the corresponding pixel in the level-three filtered image minus the ordinary-dark threshold Z1 is assigned to the ordinary dark area; a pixel whose level-one value is smaller than the level-three value minus the very-dark threshold Z2 is assigned to the very dark area; a pixel whose level-two value is smaller than the level-four value minus the large-area-dark threshold Z3 is assigned to the large-area dark area; a pixel whose level-one value is greater than the level-three value plus the ordinary-bright threshold is assigned to the bright area; and pixels whose gray values before the re-filtering of step S61 lie between 250 and 255 are assigned to the hole-defect area.
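Written as array operations, these rules compare each pixel across the four re-filtered images. The threshold values Z1 to Z3 and the bright threshold below are placeholders; only the comparison structure follows the description.

```python
import numpy as np

def classify_pixels(f1, f2, f3, f4, raw, z1=20, z2=50, z3=30, z_bright=20):
    """Pixel sets of step S62: ordinary dark (level 1 vs level 3, Z1),
    very dark (level 1 vs level 3, Z2), large-area dark (level 2 vs
    level 4, Z3), bright (level 1 vs level 3, bright threshold), and
    hole defects (raw gray value 250-255 before re-filtering)."""
    ordinary_dark = f1 < (f3 - z1)
    very_dark     = f1 < (f3 - z2)
    large_dark    = f2 < (f4 - z3)
    bright        = f1 > (f3 + z_bright)
    hole          = (raw >= 250) & (raw <= 255)
    return ordinary_dark, very_dark, large_dark, bright, hole
```

The dark area of step S63 is then the union `ordinary_dark | very_dark | large_dark`, and the total defect area of step S64 additionally ORs in `bright` and `hole`.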
Further, the flaw information in step S8 includes the flaw type; the flaw types include black spots, bubbles, insect spots, crystal points and lint. The flaw type is obtained by a flaw-feature identification algorithm comprising the following steps:
step S81: acquire the image and detect the dark areas in it, a dark area being an area whose gray value is below a set threshold Y3;
step S82: judge whether the number of dark areas in the image is one; if there is exactly one dark area, enter step S83; if there are none or more than one, enter step S84;
step S83: there is only one dark area: judge whether its outline approximates a solid circle, a solid rectangle, or another shape; if the outline is approximately circular, the flaw is judged a large black spot and the step ends; if approximately rectangular, the flaw is judged lint and the step ends; if the outline is another shape, enter step S84;
step S84: detect the bright areas in the image, a bright area being an area whose gray value is above a set threshold Y4; if the numbers of bright and dark areas are both 0, the image is considered flawless; otherwise enter step S85;
step S85: judge whether the edge of the bright area is surrounded by a dark area; if so, the flaw is considered a bubble and the step ends; if not, enter step S86;
step S86: the bright-area edge is not surrounded by a dark area: count the bright and dark areas and acquire the barycentric coordinates of all bright and dark areas in the image;
step S87: sort the barycentric coordinates of the bright and dark areas by abscissa and judge whether they can be fitted to a straight line; if so, enter step S88; otherwise enter step S89;
step S88: the barycentric coordinates fit a straight line: judge further whether the bright and dark areas alternate along it; if they alternate, the flaw is judged an insect spot and the step ends; if not, enter step S89;
step S89: acquire the skeleton of the total bright-and-dark area and judge whether the skeleton is arranged in an arrow shape; if so, the flaw is considered a crystal point and the step ends; otherwise enter step S810;
step S810: the flaw is classified as other: acquire the centroid distances of adjacent regions, aggregate the regions, and calculate the aggregated regions and their areas; the regions include the bright and dark areas.
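Two of the geometric tests above are easy to isolate: the straight-line fit of step S87 and the bright/dark alternation of step S88. A sketch; the tolerance and the least-squares fit are assumptions, since the patent does not specify the fitting method.

```python
import numpy as np

def fits_line(points, tol=1.5):
    """S87 sketch: sort the centroids by abscissa and test whether they
    lie (within tol pixels) on one least-squares straight line."""
    pts = sorted(points)
    x = np.array([p[0] for p in pts], float)
    y = np.array([p[1] for p in pts], float)
    if len(pts) < 3:
        return True
    a, b = np.polyfit(x, y, 1)
    return float(np.abs(y - (a * x + b)).max()) <= tol

def alternates(labels):
    """S88 sketch: do bright ('B') and dark ('D') regions alternate
    along the fitted line? Alternation indicates an insect spot."""
    return all(a != b for a, b in zip(labels, labels[1:]))
```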
A coiled material detection system, based on the above detection method, operates as follows:
step 1: the console receives a power-on instruction and the processor starts the boot initialization flow; after initialization is complete, the display shows the start interface, which contains an "enter system" button and an "exit system" button corresponding to the "enter system" and "exit system" instructions respectively;
step 2: the console receives an operation instruction from the start interface and judges whether it is an "exit system" or an "enter system" instruction; if "exit system", the console shuts down and the step ends; if "enter system", the display jumps from the start interface to the flaw detection interface and the acquisition threads of the several line-scan cameras are started; the flaw detection interface contains "clear", "detect", "pause", "history roll", "roll change", "settings", "forward", "back" and "exit" buttons, each corresponding to the instruction of the same name; the flaw detection interface also contains an image display area that shows the images collected by the camera module in real time; a detection-area slider is arranged in the image display area and corresponds to the detection-area division instruction;
step 3: the console judges whether an operation instruction of the flaw detection interface has been received; if so, enter step 4; otherwise repeat step 3;
step 4: judge the type of the operation instruction; if "clear", enter step 5; if "detect/pause", enter step 6; if "history roll", enter step 7; if "roll change", enter step 8; if "settings", enter step 9; if "forward" or "back", enter step 10; if "exit", enter step 11; if "detection-area division", enter step 12;
step 5: the console receives the "clear" instruction and asks the user whether to clear the history information prompts; if a confirmation is received, the prompts are cleared; if a refusal is received, return to step 3;
step 6: the console receives the "detect/pause" instruction and judges which it is; if "detect", enter the real-time detection flow until a "pause" instruction is received, then return to step 3; if "pause", end the real-time detection flow and return to step 3; note that on first entry to the flaw detection interface after power-on a "detect" button is shown, corresponding to the "detect" instruction; clicking it switches it to a "pause" button, which corresponds to the "pause" instruction;
step 7: the console receives the "history roll" instruction, the display enters the history-roll interface and its flow starts; after the flow finishes, return to step 3;
step 8: the console receives the "roll change" instruction and judges whether the real-time detection flow is running; if it is, the display prompts "please pause the real-time detection first" and the flow returns to step 3; if not, update the roll number and roll length according to the input, release the display image resources, write the corresponding data to the database, and return to step 3;
step 9: the console receives the "settings" instruction and creates and enters the settings interface; after leaving the settings interface, return to step 3;
step 10: the console receives the "forward" or "back" instruction and judges which it is; if "forward", judge whether the current image is on the last page; if so, prompt the user and return to step 3; if not, advance one page and return to step 3; if "back", judge whether the current image is on the first page; if so, prompt the user and return to step 3; if not, go back one page and return to step 3;
step 11: the console receives the "exit" instruction and asks the user to confirm exiting the system; if the exit is confirmed, the step ends; if not, return to step 3;
step 12: the console receives the detection-area division instruction, modifies the values of the left and right non-detection areas of the image accordingly, saves them to the system parameters, and returns to step 3.
The invention has the beneficial effects that:
the accurate position adjustment of the camera module is realized by adjusting the height adjusting device on the supporting frame, rotating the sliding rod and adjusting the position of the camera module on the sliding rod;
the fans are arranged in the camera module, and the four fans are respectively responsible for air inlet and air outlet, so that the line scanning camera is effectively cooled;
the air guide pipe is arranged corresponding to the fan responsible for air outlet, and the air outlet of the air guide pipe is positioned on the outer side of the glass of the camera module shell, so that dust is prevented from settling on the outer side of the glass;
the light source module comprises a front light source and a backlight source, so that the brightness requirement within the lens range of the line scanning camera is met for flaw detection of coiled materials of various materials, satisfying the requirement of image shooting;
the backlight source module comprises the light source and the light shading sheets, the light shading sheets are arranged on two sides of the light source, and the installation height of the light shading sheets can be adjusted, so that the scattering range of the light source can be accurately adjusted;
connecting a camera module through a system initialization process, acquiring image data, displaying in real time, and completing flaw detection on the coiled material according to a set algorithm;
the method comprises the steps of controlling intelligent switching of a light source and automatic adjustment of light source brightness and camera exposure through setting an image gray value dynamic adjustment flow, so that the gray value of an image acquired by a camera module is ensured to reach a set range;
by operating the flaw marking process, the positions of flaws on the coiled material can be accurately recorded and displayed in real time, so that the current flaw and historical flaw information can be checked;
splicing and displaying images of the line scanning cameras in real time, so that a system flaw real-time interface can completely display the breadth of the coiled material;
by extracting gray abnormal areas in the image, mainly areas whose gray values deviate from the image background, and distinguishing different flaws after morphological characteristic judgment, identification and distinction of flaw types is realized;
through image de-fringe processing, a fringe region is extracted, a spatial filter is derived from the fringe period, filtering aimed at the regularly distributed fringes is achieved, and interference of the fringes with the subsequent definition self-adaptive process and flaw judgment is avoided;
by carrying out segmented definition self-adaptive detection on the image, different segments of the same wide image can each be filtered with a corresponding filter kernel according to parameters such as gray value, so that a better filtering effect is obtained and the filtered image is ensured to meet the image processing requirement;
the defects are subjected to connected-domain processing based on a clustering algorithm, so that defects meeting the conditions can be connected, which improves algorithm efficiency and realizes detection of defects that are small but densely distributed.
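The image gray value dynamic adjustment described above (automatic control of light source brightness and camera exposure until the acquired image gray value reaches the set range) can be sketched as a bounded feedback step. The function below is an illustrative sketch only, not the actual implementation; the default limits are taken from the camera configuration parameter table later in this description (gray range 120 to 160, exposure 50 to 950, adjustment scale 10), and the function name is hypothetical.

```python
def adjust_exposure(mean_gray, exposure,
                    gray_low=120, gray_high=160,
                    exp_min=50, exp_max=950, step=10):
    """One feedback step: nudge the exposure toward the target
    gray range, clamped to the camera's exposure limits.
    Defaults mirror the camera configuration table defaults."""
    if mean_gray < gray_low:
        # Image too dark: raise exposure, but never above the maximum.
        exposure = min(exposure + step, exp_max)
    elif mean_gray > gray_high:
        # Image too bright: lower exposure, but never below the minimum.
        exposure = max(exposure - step, exp_min)
    # Within range: leave the exposure unchanged.
    return exposure
```

Called once per acquired frame, the exposure converges step by step into the band where the mean gray value sits inside the configured limits; the same pattern can drive light source brightness.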
Drawings
Fig. 1 is a perspective view of a console according to a first embodiment of the present invention;
FIG. 2 is a front view of a console according to a first embodiment of the present invention;
fig. 3 is a schematic view showing the connection between the camera frame and the carriage according to the first embodiment of the present invention;
fig. 4 is a front view of a front panel of an open case of a camera module according to a first embodiment of the present invention;
FIG. 5 is a bottom view of a camera module according to a first embodiment of the present invention;
fig. 6 is a front view of a light source holder and a light source module according to a first embodiment of the invention;
fig. 7 is a perspective view of a light source frame and a light source module according to a first embodiment of the invention;
FIG. 8 is a perspective view of the first embodiment of the present invention, excluding the console;
FIG. 9 is a general flow chart of a system according to a first embodiment of the present invention;
FIG. 10 is a block diagram illustrating a boot initialization procedure according to a first embodiment of the present invention;
FIG. 11 is a block diagram illustrating a database connection and reading process according to a first embodiment of the present invention;
FIG. 12 is a block diagram illustrating an initialization process of an alarm module according to a first embodiment of the present invention;
fig. 13 is a block diagram illustrating a boot defect information display process according to a first embodiment of the invention;
FIG. 14 is a block diagram illustrating a real-time defect detection process according to a first embodiment of the present invention;
FIG. 15 is a block diagram illustrating a dynamic adjustment process of gray level of an image according to a first embodiment of the present invention;
FIG. 16 is a block diagram illustrating a defect photo wall information updating process according to a first embodiment of the present invention;
FIG. 17 is a block diagram of a real-time defect marking process according to a first embodiment of the present invention;
FIG. 18 is a block diagram of a fault detection alarm process according to a first embodiment of the present invention;
FIG. 19 is a block diagram illustrating a process of displaying defect information of a history according to a first embodiment of the present invention;
FIG. 20 is a block diagram illustrating a flow chart of generating a fault detection report for a history volume according to a first embodiment of the present invention;
FIG. 21 is a flowchart of a parameter operation interface according to a first embodiment of the present invention;
FIG. 22 is a flowchart illustrating a defect detection parameter setting process according to a first embodiment of the present invention;
FIG. 23 is a flowchart illustrating defect classification parameter setting according to a first embodiment of the invention;
FIG. 24 is a schematic diagram of a real-time defect detection interface according to a first embodiment of the present invention;
FIG. 25 is a diagram of a parameter setting interface according to a first embodiment of the present invention;
FIG. 26 is a diagram illustrating a system parameter setting interface according to a first embodiment of the present invention;
FIG. 27 is a schematic view of a defect detection parameter setting interface according to a first embodiment of the present invention;
FIG. 28 is a schematic diagram of a defect classification setting interface according to a first embodiment of the invention;
FIG. 29 is a diagram of a history volume information display interface according to a first embodiment of the present invention;
FIG. 30 is a diagram illustrating a defect statistics interface of a history volume according to a first embodiment of the present invention;
FIG. 31 is a flowchart of a web inspection algorithm according to a first embodiment of the present invention;
FIG. 32 is a flowchart illustrating image cropping according to a first embodiment of the present invention;
FIG. 33 is a flow chart illustrating a web de-texturing process according to a first embodiment of the present invention;
FIG. 34 is a schematic diagram of an outline of a first embodiment of the present invention;
FIG. 35 is a graph of contrast and variance trend of the image S according to the first embodiment of the present invention;
FIG. 36 shows gray scale energy, inverse difference distance and correlation trend of the image S according to the first embodiment of the present invention;
FIG. 37 is a flowchart illustrating a critical area multi-defect process according to a first embodiment of the present invention;
FIG. 38 is a flowchart illustrating a method for determining defective output priority according to a first embodiment of the present invention;
FIG. 39 is a flowchart of a flaw identification feature algorithm according to a second embodiment of the present invention;
FIG. 40 is a defect type and image schematic of example two of the present invention.
Description of reference numerals: the device comprises a support frame 1, a height adjusting device 11, a stand column 12, a camera frame 13, a light source frame 14, a carriage 2, a rotation adjusting device 21, a sliding rod 3, a horizontal adjusting device 31, a camera module 4, a camera shell 41, an air port 42, an air guide pipe 43, a fan 44, a line scanning camera 45, a light source module 5, a front light source 51, an arc-shaped lampshade 52, a light transmitting gap 53, a backlight source 54, a light source 55, a shading sheet 56 and a coiled material 6.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
The first embodiment is as follows:
as shown in fig. 1 and 2, a coil detection device based on a line scan camera comprises an operation table, a support frame 1, a carriage 2, a slide bar 3, a camera module 4 and a light source module 5; the number of the support frames 1 is two, and the two support frames 1 are symmetrically arranged on two sides of the assembly line; the sliding frame 2 is arranged on the support frame 1 in a sliding mode, and the sliding frame 2 can slide up and down along the support frame 1; a height adjusting device 11 is arranged between the sliding frame 2 and the supporting frame 1; the sliding rods 3 are hinged on the sliding frames 2 of the two support frames 1; the camera module 4 is arranged on the sliding rod 3 in a sliding manner through the horizontal adjusting device 31; the light source modules 5 are arranged on the support frame 1 and are respectively positioned above and below the coiled material 6 on the production line; the operating platform is arranged on the ground and is in communication connection with the camera module 4 and the light source module 5; the console includes a display and a processor.
As shown in fig. 3, the support frame 1 includes two upright columns 12, a camera frame 13 and a light source frame 14, wherein the two upright columns 12 are symmetrically arranged at two sides of the production line; the camera frame 13 and the light source frame 14 are disposed on the two columns 12, and the camera frame 13 is disposed above the light source frame 14. The camera frame 13 comprises two vertical rods and a cross rod, the cross rod is arranged at the bottoms of the two vertical rods, and the two vertical rods are arranged in parallel; the vertical rod is provided with a slide rail; the sliding frame 2 is arranged between the two vertical rods, is connected with the sliding rails on the vertical rods in a sliding manner and can slide along the sliding rails; the sliding frame 2 is parallel to the cross bar; a height adjusting device 11 is arranged between the cross bar and the carriage 2, the height adjusting device 11 can be an electric push rod or an air cylinder, in this example, the height adjusting device 11 is an electric push rod with 150mm specification, and two ends of the electric push rod are respectively hinged with the cross bar and the carriage 2. The height of the carriage 2 can be adjusted by electric push-rod action. The light source frame 14 is of a square structure, and the two light source frames 14 are symmetrically arranged on the two upright posts 12.
The sliding frame 2 is hinged with a sliding rod 3, the sliding rod 3 is positioned between the camera frames 13 on the two supporting frames 1, the sliding rod 3 can rotate around the hinged part, and in the embodiment, the sliding rod 3 rotates around the central axis. A rotation adjusting device 21 is further disposed between the sliding rod 3 and the sliding frame 2, in this example, the rotation adjusting device 21 is a stepping motor, the stepping motor is disposed at a hinge portion at one end of the sliding rod 3, and the rotation of the sliding rod 3 is controlled by the stepping motor, so as to control an angle of the camera module 4 on the sliding rod 3. The slide rod is provided with a horizontal adjusting device 31, the horizontal adjusting device 31 can be a rodless cylinder, a screw structure or a hand-cranking module, and in this example, the horizontal adjusting device 31 adopts a 1100mm long hand-cranking module. The camera module 4 is arranged on the sliding block of the hand-cranking module, and can realize position adjustment along with the action of the hand-cranking module. The horizontal adjusting device 31 is provided with at least one camera module 4, and the line scan cameras 45 in different camera modules 4 are connected to the same trigger to ensure that images are acquired simultaneously.
As shown in fig. 4 and 5, the camera module 4 includes a camera housing 41, a fan 44, and a line scan camera 45, wherein the line scan camera 45 and the fan 44 are disposed inside the camera housing 41. In this embodiment, four fans 44 are disposed in the camera housing 41; two fans 44 form a group, with one group disposed at the upper portion and one at the lower portion of the line scan camera 45. The fan group at the upper portion cools the line scan camera 45, and the fan group at the lower portion prevents dust from settling on the lens of the line scan camera 45 and on the inner side of the glass at the bottom of the housing; it should be noted that the fans 44 do not block the lens area of the line scan camera 45. In this example, one fan 44 in each group is responsible for air intake and the other for air exhaust, and the two fans 44 on the same side of the upper and lower portions are responsible for intake and exhaust respectively; for example, the upper-left fan 44 takes in air while the lower-left fan 44 exhausts, forming an air duct circulating up and down inside the housing of the camera module 4 and improving the cooling efficiency of the line scan camera 45.
An air port 42 or an air guide pipe 43 is arranged on the housing corresponding to the position of each fan 44; in this embodiment, the air guide pipe 43 is disposed at the position of the exhaust fan 44 at the lower part of the line scan camera 45, one end of the air guide pipe 43 is connected to the fan 44, and the other end faces the outside of the glass at the bottom of the housing, so as to prevent dust from settling on the outside of the glass at the bottom of the housing, thereby ensuring the quality of the image collected by the line scan camera 45; the other three fans 44 have air ports 42 correspondingly arranged on the housing.
As shown in fig. 6 to 8, the light source module 5 includes a front light source 51 and a backlight 54, the front light source 51 is disposed on the upper rod of the light source frame 14, and the backlight 54 is disposed on the lower rod of the light source frame 14; the web 6 on the line passes between the front light source 51 and the backlight 54; the front light source 51 is also located between the pipeline and the camera module 4. The front light source 51 is a dome light source and comprises an arc-shaped lampshade 52; a light transmitting gap 53 is arranged on the arc-shaped lampshade 52, the light transmitting gap 53 is linear, and the light transmitting gap 53 is opposite to the line scan camera 45 in the camera module 4. In this example, the front light source 51 is slidably disposed on the upper rod of the light source frame 14, so that the horizontal position of the front light source 51 on the production line is adjustable, which makes it convenient to align the line scan camera 45 with the light-transmitting slit 53. The backlight 54 is hinged at both ends to the light source stand 14 so that the backlight 54 can rotate about the hinge. The backlight 54 includes a light source 55 and light-shielding sheets 56, wherein two ends of the light source 55 are hinged to the stand; the light-shielding sheets 56 are disposed on two sides of the light source and engage with the inner wall of the housing of the light source 55, and the light-shielding sheets 56 are open at their upper and lower ends, so that light emitted from the light source 55 can pass between the light-shielding sheets 56 and exit, while the light-shielding sheets 56 limit the divergence of the light.
The periphery of the light shielding sheet 56 is provided with a columnar protrusion, a waist-shaped hole is correspondingly formed in the shell of the light source 55, the waist-shaped hole is matched with the columnar protrusion, and the waist-shaped hole is vertically arranged in the light shielding sheet 56, so that the light shielding sheet 56 can be vertically adjusted in the waist-shaped hole, the height of the light source 55 extending out of the light shielding sheet 56 is controlled, and the scattering range of light is further controlled.
In the implementation process, the accurate position adjustment of the camera module 4 is realized by adjusting the height adjusting device on the support frame 1, rotating the slide rod 3 and adjusting the position of the camera module 4 on the slide rod 3; the fans 44 are arranged in the camera module 4, and the four fans 44 are respectively responsible for air intake and exhaust, so that the line scan camera 45 is effectively cooled; the air guide pipe 43 is arranged corresponding to the fan 44 responsible for exhaust, and the air outlet 42 of the air guide pipe 43 is positioned outside the glass of the shell of the camera module 4, so that dust is prevented from settling on the outside of the glass; by arranging the light source module 5 to comprise the front light source 51 and the backlight 54, sufficient brightness within the lens range of the line scan camera 45 is fully ensured, meeting the requirement of image shooting; by arranging the backlight 54 to include the light source 55 and the light-shielding sheets 56, with the light-shielding sheets 56 embedded in the light source 55 at an adjustable depth, the emission angle of the light source 55 can be adjusted and the illumination range of the light source module accurately controlled.
As shown in fig. 9, the workflow of a web inspection system based on a line scan camera includes the following steps:
step 1: the operation panel receives a starting-up instruction, and the processor starts a starting-up initialization process; after the startup initialization process is completed, the display displays a startup interface; the starting interface comprises a system entering button and a system exiting button which respectively correspond to a system entering instruction and a system exiting instruction;
step 2: the operation console receives an operation instruction of the starting interface and judges whether the operation instruction is a system exit instruction or a system entry instruction; if the instruction is "exit system", the operation console is closed, and the step is ended; if the instruction is a system entry instruction, the display jumps from the starting interface to a flaw detection interface, and a plurality of line scan camera acquisition threads are started; the flaw detection interface comprises a clear button, a detect button, a pause button, a history volume button, a roll change button, a set button, a forward button, a backward button and an exit button, corresponding respectively to the "clear", "detect", "pause", "history volume", "roll change", "set", "forward", "backward" and "exit" instructions; the flaw detection interface also comprises an image display area, and the image display area displays images acquired by the camera module in real time; a detection area slide bar is arranged in the image display area, and the detection area slide bar corresponds to a detection area division instruction;
step 3: the operating platform judges whether an operation instruction of the flaw detection interface is received; if an operation instruction is received, entering step 4; otherwise, returning to the step 3;
step 4: judging the type of the operation instruction; if the instruction is "clear", entering step 5; if the instruction is "detection/pause", entering step 6; if the instruction is a "history volume" instruction, entering step 7; if the instruction is a roll change instruction, entering step 8; if the instruction is a "set" instruction, entering step 9; if the instruction is a forward instruction or a backward instruction, entering step 10; if the instruction is an exit instruction, entering step 11; if the instruction is "detection area division", entering step 12;
step 5: the console receives the "clear" instruction and prompts the user whether to clear the history information; in this example, the history information is historical program exception information and the like; if a confirmation instruction is received, the history information prompt is cleared; if a negative confirmation instruction is received, returning to the step 3;
step 6: the console receives the detection/pause instruction and judges whether the detection/pause instruction is the detection instruction or the pause instruction; if the command is a detection command, entering a real-time detection process until a pause command is received, and returning to the step 3; if the command is a pause command, ending the real-time detection process and returning to the step 3; it should be noted that, when entering the flaw detection interface for the first time after starting up, a "detection" button is displayed, corresponding to a "detection" instruction, and the "detection" button is switched to a "pause" button after being clicked, and the "pause" button corresponds to a "pause" instruction;
step 7: the operation desk receives the "history volume" instruction, controls the display to enter a history volume interface, starts the history volume interface flow, and returns to the step 3 after the history volume interface flow is finished;
step 8: the operation desk receives a roll change instruction and judges whether the real-time detection process is started; if the real-time detection process is started, "please pause the real-time detection first" is prompted on the display, and the flow returns to the step 3; if the real-time detection process is not started, information such as the volume number and volume length is updated according to the input, display image resources are released, corresponding data is written into the database, and the flow returns to the step 3;
step 9: the operation desk receives the "setting" instruction, and creates and enters a setting interface; returning to the step 3 after exiting the setting interface;
step 10: the operating platform receives a "forward" or "backward" instruction and judges which of the two it is; if it is a "forward" instruction, further judging whether the current image is at its last page, prompting and returning to the step 3 if so, and advancing one page and returning to the step 3 if not; if it is a "backward" instruction, further judging whether the current image is at its first page, prompting and returning to the step 3 if so, and going back one page and returning to the step 3 if not;
step 11: the console receives the exit command and prompts the user whether to confirm exiting the system; if the system is confirmed to be exited, the step is ended; if not, returning to the step 3;
step 12: and (3) when the console receives the detection area division instruction, correspondingly modifying the values of the left and right undetected areas of the image, storing the modified values into system parameters, and returning to the step 3.
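Steps 4 to 12 above amount to a single dispatch on the instruction type, with every branch eventually returning to the waiting state of step 3. The mapping below is an illustrative sketch of that dispatch (the command strings and the table are hypothetical, not the actual program's identifiers):

```python
# Each interface instruction maps to the step that handles it;
# anything unrecognized falls through to None, i.e. keep waiting
# (the "return to step 3" branch of the flow above).
HANDLERS = {
    "clear":                   "step 5",
    "detect/pause":            "step 6",
    "history volume":          "step 7",
    "roll change":             "step 8",
    "set":                     "step 9",
    "forward":                 "step 10",
    "backward":                "step 10",
    "exit":                    "step 11",
    "detection area division": "step 12",
}

def dispatch(command):
    """Return which step handles the instruction, or None
    when no handler matches and the console keeps waiting."""
    return HANDLERS.get(command)
```

A table-driven dispatch like this keeps the step-4 branching in one place, so adding a new interface button only means adding one table entry.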
As shown in fig. 10, the boot initialization procedure in step 1 includes the following steps:
step 101: the operation panel completes initialization of a starting detection interface;
step 102: after the interface initialization is finished, a database connection process is carried out;
step 103: after the database connection process is completed, reading database data; the database data comprises system configuration parameters, camera configuration parameters, flaw detection internal parameters, flaw detection product parameters and the like;
step 104: the storage device in the console is connected with the line scan camera, so that data collected by the line scan camera can be stored in the storage device; in this example, the storage device is an acquisition card;
step 105: connecting serial ports and checking each serial port channel;
step 106: checking the initialization state, including checking whether the process from the step 101 to the step 105 is successfully completed;
if the abnormal step exists, prompting the user to select to still enter the system or exit the system; if the user still enters the system, entering a starting interface, and ending the step; if the user selects to quit the system, quitting the system, and ending the step;
if no abnormal step exists, directly entering a starting interface and ending the step.
The initialization process of the power-on detection interface of the step 101 includes the following steps:
step 1011: the operation console is connected through a network to acquire the current time and date;
step 1012: starting a timer for date acquisition and display;
step 1013: initializing startup detection related variables, wherein the related variables comprise a result flag bit for database connection and reading, a storage device and line scan camera connection result flag bit and a serial port module connection result flag bit; wherein, the system of the operation desk can set the value of the flag bit according to the initialized state in the running process;
step 1014: acquiring a starting interface control and finishing the display setting of the starting interface control; displaying a starting interface control when a starting interface is entered;
step 1015: creating a thread for connecting to and reading the database, a thread for connecting the storage device and the line scan camera, and a thread for connecting the serial-to-IO module, and ending the steps.
The purpose of creating the flag bits in step 1013 is to facilitate checking the initialization state in step 106; by reading the flag bits, the completion status of steps 102 to 105 can be determined.
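The flag-bit mechanism above can be sketched as a simple bitmask, one bit per initialization result; the constant names and helper functions below are illustrative assumptions, not the actual program's identifiers:

```python
# One bit per initialization result that step 106 must check.
FLAG_DATABASE = 0b001  # database connected and read (steps 102-103)
FLAG_CAMERA   = 0b010  # storage device and line scan camera connected (step 104)
FLAG_SERIAL   = 0b100  # serial port channels checked (step 105)

ALL_OK = FLAG_DATABASE | FLAG_CAMERA | FLAG_SERIAL

def init_state_ok(flags):
    """Step 106: initialization succeeded only if every flag is set."""
    return flags == ALL_OK

def failed_steps(flags):
    """Name the subsystems that did not come up, for the user prompt."""
    names = {FLAG_DATABASE: "database",
             FLAG_CAMERA: "camera",
             FLAG_SERIAL: "serial"}
    return [name for bit, name in names.items() if not flags & bit]
```

Each initialization thread sets its own bit on success, so the step 106 check reduces to a single comparison regardless of how many subsystems are added later.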
As shown in fig. 11, the database connection process in step 102 includes the following steps:
step 1021: acquiring setting information, calling the database connection function, and inputting the setting information; the setting information is either built into the console or entered in advance, and includes the database server name, user name and password;
step 1022: acquiring a return value of the database connection function, and judging the database connection condition according to the return value, wherein the database connection condition includes database connection success, database connection failure and database connection abnormality; if the database connection is successful, go to step 1023; if the database connection fails, go to step 1024; if the database connection is abnormal, go to step 1025;
step 1023: if the database is successfully connected, setting a database connection flag bit as a database connection successful state, setting a control to display that the database connection is successful, and entering step 1026;
step 1024: setting a database connection flag bit as a database connection failure state, and setting a control to display as a database connection failure; writing the database connection failure information into a system log file, popping up a database connection failure prompt dialog box, and ending the step;
step 1025: setting the database connection flag bit to the database connection abnormal state and setting the control to display the database connection abnormality; writing the abnormal database connection information into the system log file, popping up a database connection failure prompt dialog box, and ending the step; a database connection abnormality represents any result of the database connection function other than connection success or connection failure;
step 1026: if the database is successfully connected, judging whether a project database exists in the database; if the project database exists, the step is ended; otherwise, creating project database and ending the step.
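Steps 1022 to 1025 reduce to a three-way branch on the connection function's return value. The sketch below illustrates that branching; the return codes and the log interface are assumptions for illustration, not the actual API:

```python
def handle_db_connection(code, log):
    """Branch on a hypothetical return code: 0 = success,
    1 = failure, anything else = abnormal result.
    Mirrors steps 1023-1025: success just proceeds, while
    failure and abnormality are logged before prompting."""
    if code == 0:
        # Step 1023: set the flag bit, show success, continue to step 1026.
        return "connected"
    state = "failed" if code == 1 else "abnormal"
    # Steps 1024/1025: record the problem in the system log,
    # then (in the real program) pop up the failure dialog.
    log.append(f"database connection {state}")
    return state
```

Treating "abnormal" as the catch-all branch matches the text's definition: any result the connection function reports other than explicit success or explicit failure.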
The project database in step 1026 includes a system parameter table, a camera configuration parameter table, a product defect parameter table, a real-time detection parameter table, a history volume information record table, and a history volume defect information table. In the process of creating the project database, these six tables need to be configured, and at the same time, for each data table, a table creation command, a command for judging whether the table exists, a command for querying the data table and a command for inserting into the data table are set. In this example, each data table includes the following contents:
system parameter table, named systemparameter: "camera number", "camera for system configuration", "camera for detection parameter", "serial number", "left edge detection position", "right edge detection position", "software version number", "administrator password", "operator password", "technician password";
camera configuration parameter table, named CameraParameter: "camera type", "maximum exposure", "minimum exposure", "adjustment scale", "upper gray limit", "lower gray limit", "magnification", "upper defect number limit", "dynamic dark threshold", "dynamic light threshold", "dynamic extremely dark threshold", "dynamic extremely light threshold", "ordinary dark threshold", "ordinary light threshold", "large area dynamic dark threshold", "number of segments", "detection segment", "equalization convolution kernel 1", "equalization convolution kernel 2", "equalization convolution kernel 3", "equalization convolution kernel 4";
product defect parameter table, named productdefactparameter: "product name", "defect name", "priority level", "upper limit of width", "lower limit of width", "upper limit of length", "lower limit of length", "length-to-width ratio upper limit", "length-to-width ratio lower limit", "width-to-length ratio upper limit", "width-to-length ratio lower limit", "upper limit of area", "lower limit of area", "upper limit of bright area ratio", "lower limit of bright area ratio", "upper limit of hole area ratio", "lower limit of hole area ratio", "upper limit of dark area ratio", "lower limit of dark area ratio", "whether or not to display a marker symbol", "font style", "font size", "marker character", "color R", "color G", "color B", "red light alarm switch", "red light alarm time period", "green light alarm switch", "green light alarm time period", "yellow light alarm switch", "yellow light alarm time period";
real-time detection parameter table: "current shift", "current roll number", "product currently inspected", "defect currently inspected";
history roll information record table: "roll number", "width", "detected length";
history roll defect information table: "roll number", "defect number", "camera number", "product name", "defect name", "image address", "width", "length", "longitudinal position", "transverse position", "working edge distance", "conveying edge distance", "area", "dark/bright/hole area ratio", "line-width-to-length ratio", "detection time".
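As a concrete illustration of the six-table layout, the sketch below creates two of the tables with Python's built-in sqlite3 module and implements the "judge whether the table exists" command. The patent does not specify the database engine or column types, so the SQL schema, table names and column names here are translated assumptions, not the actual implementation.

```python
import sqlite3

# Illustrative DDL for two of the six project-database tables.
DDL = """
CREATE TABLE IF NOT EXISTS SystemParameter (
    camera_count INTEGER, config_camera_no INTEGER, serial_port TEXT,
    left_edge_pos REAL, right_edge_pos REAL, software_version TEXT,
    admin_password TEXT, operator_password TEXT, technician_password TEXT
);
CREATE TABLE IF NOT EXISTS HistoryRollDefect (
    roll_no INTEGER, defect_no INTEGER, camera_no INTEGER,
    product_name TEXT, defect_name TEXT, image_path TEXT,
    width REAL, length REAL, area REAL, detect_time TEXT
);
"""

def create_project_database(path=":memory:"):
    """Create the project database and return the open connection."""
    conn = sqlite3.connect(path)
    conn.executescript(DDL)
    return conn

def table_exists(conn, name):
    """Counterpart of the 'judge whether the table exists' command."""
    row = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type='table' AND name=?", (name,)
    ).fetchone()
    return row is not None
```

A table-creation, existence-check, query and insert command per table, as the text requires, would follow the same pattern for the remaining four tables.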
In step 103, reading the database first requires judging whether data exists in the project database, the data being the specific rows of the tables from step 102. If data exists, it is read and the corresponding devices are configured from it; if no data exists, default data is written first, then every data table is read, the values are stored in the program, and the corresponding device configuration is completed. The default data of the project database in this example includes:
default data of the system parameter table: number of cameras, 2; camera selection for camera configuration parameters, 1; serial port number, COM5; left edge detection position, 100 (mm); right edge detection position, 100 (mm); software version number, V1.0; administrator password, 123456; operator password, 123456; technician password, 123456;
default data of the camera configuration parameter table: camera number, incremented from 1 up to the number of cameras; maximum exposure time, 950; minimum exposure time, 50; exposure adjustment scale, 10; gray upper limit, 160; gray lower limit, 120; camera magnification, 8.822; defect count upper limit, 10; dynamic dark threshold, 50; dynamic bright threshold, 50; dynamic extreme dark threshold, 60; dynamic extreme bright threshold, 60; common dark threshold, 45; common bright threshold, 45; large-area dynamic dark threshold, 60; number of segments, 10; detection segment, 5; averaging convolution kernels 1 to 4, 5, 30, 80 and 130 respectively;
The defect names in the product defect parameter table default to nine types: small, medium and large black dots; small, medium and large white dots; and small, medium and large holes. For each type the defect parameters are set as follows:
Small black dot: product name, PVB; defect name, small black dot; priority, 1; width upper and lower limits, 0; length upper and lower limits, 0; line-width-to-length ratio upper and lower limits, 0; width-to-length ratio upper and lower limits, 0; area upper limit, 0.35; area lower limit, 0.1; bright area ratio upper and lower limits, 0; hole area ratio upper and lower limits, 0; dark area ratio upper limit, 1.1; dark area ratio lower limit, 0.4; display marker symbol, True; font style, SimSun; font size, 12; marker character, b; color R, 0; color G, 0; color B, 0; red light alarm switch, True; red light alarm duration, 1; green light alarm switch, False; green light alarm duration, 0; yellow light alarm switch, False; yellow light alarm duration, 0.
Medium black dot: product name, PVB; defect name, medium black dot; priority, 2; width upper and lower limits, 0; length upper and lower limits, 0; line-width-to-length ratio upper and lower limits, 0; width-to-length ratio upper and lower limits, 0; area upper limit, 0.5; area lower limit, 0.35; bright area ratio upper and lower limits, 0; hole area ratio upper and lower limits, 0; dark area ratio upper limit, 1.1; dark area ratio lower limit, 0.4; display marker symbol, True; font style, SimSun; font size, 12; marker character, B; color R, 0; color G, 0; color B, 0; red light alarm switch, True; red light alarm duration, 1; green light alarm switch, False; green light alarm duration, 0; yellow light alarm switch, False; yellow light alarm duration, 0.
Large black dot: product name, PVB; defect name, large black dot; priority, 3; width upper and lower limits, 0; length upper and lower limits, 0; line-width-to-length ratio upper and lower limits, 0; width-to-length ratio upper and lower limits, 0; area upper limit, 2000; area lower limit, 0.5; bright area ratio upper and lower limits, 0; hole area ratio upper and lower limits, 0; dark area ratio upper limit, 1.1; dark area ratio lower limit, 0.4; display marker symbol, True; font style, SimSun; font size, 12; marker character, B; color R, 255; color G, 0; color B, 0; red light alarm switch, True; red light alarm duration, 1; green light alarm switch, False; green light alarm duration, 0; yellow light alarm switch, False; yellow light alarm duration, 0.
Small white dot: product name, PVB; defect name, small white dot; priority, 4; width upper and lower limits, 0; length upper and lower limits, 0; line-width-to-length ratio upper and lower limits, 0; width-to-length ratio upper and lower limits, 0; area upper limit, 0.35; area lower limit, 0.1; bright area ratio upper limit, 1.1; bright area ratio lower limit, 0.4; hole area ratio upper and lower limits, 0; dark area ratio upper and lower limits, 0; display marker symbol, True; font style, SimSun; font size, 12; marker character, w; color R, 0; color G, 0; color B, 0; red light alarm switch, True; red light alarm duration, 1; green light alarm switch, False; green light alarm duration, 0; yellow light alarm switch, False; yellow light alarm duration, 0.
Medium white dot: product name, PVB; defect name, medium white dot; priority, 5; width upper and lower limits, 0; length upper and lower limits, 0; line-width-to-length ratio upper and lower limits, 0; width-to-length ratio upper and lower limits, 0; area upper limit, 0.5; area lower limit, 0.35; bright area ratio upper limit, 1.1; bright area ratio lower limit, 0.4; hole area ratio upper and lower limits, 0; dark area ratio upper and lower limits, 0; display marker symbol, True; font style, SimSun; font size, 12; marker character, W; color R, 0; color G, 0; color B, 0; red light alarm switch, True; red light alarm duration, 1; green light alarm switch, False; green light alarm duration, 0; yellow light alarm switch, False; yellow light alarm duration, 0.
Large white dot: product name, PVB; defect name, large white dot; priority, 6; width upper and lower limits, 0; length upper and lower limits, 0; line-width-to-length ratio upper and lower limits, 0; width-to-length ratio upper and lower limits, 0; area upper limit, 2000; area lower limit, 0.5; bright area ratio upper limit, 1.1; bright area ratio lower limit, 0.4; hole area ratio upper and lower limits, 0; dark area ratio upper and lower limits, 0; display marker symbol, True; font style, SimSun; font size, 12; marker character, W; color R, 255; color G, 0; color B, 0; red light alarm switch, True; red light alarm duration, 1; green light alarm switch, False; green light alarm duration, 0; yellow light alarm switch, False; yellow light alarm duration, 0.
Small hole: product name, PVB; defect name, small hole; priority, 7; width upper and lower limits, 0; length upper and lower limits, 0; line-width-to-length ratio upper and lower limits, 0; width-to-length ratio upper and lower limits, 0; area upper limit, 0.35; area lower limit, 0.1; bright area ratio upper and lower limits, 0; hole area ratio upper limit, 1.1; hole area ratio lower limit, 0.4; dark area ratio upper and lower limits, 0; display marker symbol, True; font style, SimSun; font size, 12; marker character, h; color R, 0; color G, 0; color B, 0; red light alarm switch, True; red light alarm duration, 1; green light alarm switch, False; green light alarm duration, 0; yellow light alarm switch, False; yellow light alarm duration, 0.
Medium hole: product name, PVB; defect name, medium hole; priority, 8; width upper and lower limits, 0; length upper and lower limits, 0; line-width-to-length ratio upper and lower limits, 0; width-to-length ratio upper and lower limits, 0; area upper limit, 0.5; area lower limit, 0.35; bright area ratio upper and lower limits, 0; hole area ratio upper limit, 1.1; hole area ratio lower limit, 0.4; dark area ratio upper and lower limits, 0; display marker symbol, True; font style, SimSun; font size, 12; marker character, H; color R, 0; color G, 0; color B, 0; red light alarm switch, True; red light alarm duration, 1; green light alarm switch, False; green light alarm duration, 0; yellow light alarm switch, False; yellow light alarm duration, 0.
Large hole: product name, PVB; defect name, large hole; priority, 9; width upper and lower limits, 0; length upper and lower limits, 0; line-width-to-length ratio upper and lower limits, 0; width-to-length ratio upper and lower limits, 0; area upper limit, 2000; area lower limit, 0.5; bright area ratio upper and lower limits, 0; hole area ratio upper limit, 1.1; hole area ratio lower limit, 0.4; dark area ratio upper and lower limits, 0; display marker symbol, True; font style, SimSun; font size, 12; marker character, H; color R, 255; color G, 0; color B, 0; red light alarm switch, True; red light alarm duration, 1; green light alarm switch, False; green light alarm duration, 0; yellow light alarm switch, False; yellow light alarm duration, 0;
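The nine default classes partition defects by area range (small 0.1-0.35, medium 0.35-0.5, large 0.5-2000 mm²) and by which region ratio (dark, bright or hole) falls within 0.4-1.1. A minimal sketch of that screening logic follows; the function name and table structure are illustrative and not taken from the patent.

```python
# (name, area lower limit, area upper limit, which region ratio applies)
# Values come from the default defect parameters listed above.
DEFECT_CLASSES = [
    ("small black dot", 0.1, 0.35, "dark"),
    ("medium black dot", 0.35, 0.5, "dark"),
    ("large black dot", 0.5, 2000.0, "dark"),
    ("small white dot", 0.1, 0.35, "bright"),
    ("medium white dot", 0.35, 0.5, "bright"),
    ("large white dot", 0.5, 2000.0, "bright"),
    ("small hole", 0.1, 0.35, "hole"),
    ("medium hole", 0.35, 0.5, "hole"),
    ("large hole", 0.5, 2000.0, "hole"),
]

def classify_defect(area_mm2, dark_ratio, bright_ratio, hole_ratio,
                    ratio_low=0.4, ratio_high=1.1):
    """Return the first default class whose area and region-ratio limits match."""
    ratios = {"dark": dark_ratio, "bright": bright_ratio, "hole": hole_ratio}
    for name, area_low, area_high, key in DEFECT_CLASSES:
        if area_low <= area_mm2 < area_high and ratio_low <= ratios[key] <= ratio_high:
            return name
    return None  # no class matches: not reported as a defect
```

Classes are tried in priority order (1 through 9), matching the priority defaults above.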
default data of the real-time detection parameter table: current shift, 1; current roll number, 1; currently inspected product, PVB; currently inspected defect, small black dot;
default data of the history roll information record table: the fields roll number, width and detected length, populated at run time;
default data of the history roll defect information table: likewise populated at run time, with the fields roll number, defect number, camera number, product name, defect name, image address, width, length, longitudinal position, transverse position, working edge distance, conveying edge distance, area, dark/bright/hole area ratio, line-width-to-length ratio and detection time.
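Step 103's read-or-initialize behaviour can be sketched as follows; the key/value table layout and the function name are simplifications assumed for illustration, and the default values are taken from the system parameter defaults above.

```python
import sqlite3

# A subset of the system parameter defaults listed in the text.
DEFAULTS = {
    "camera_count": 2,
    "serial_port": "COM5",
    "software_version": "V1.0",
    "admin_password": "123456",
}

def load_or_init_system_params(conn):
    """Read the system parameters; if the table is empty, write the
    defaults first, then read them back (the step 103 flow)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS SystemParameter (key TEXT PRIMARY KEY, value TEXT)"
    )
    if conn.execute("SELECT COUNT(*) FROM SystemParameter").fetchone()[0] == 0:
        conn.executemany(
            "INSERT INTO SystemParameter VALUES (?, ?)",
            [(k, str(v)) for k, v in DEFAULTS.items()],
        )
        conn.commit()
    return dict(conn.execute("SELECT key, value FROM SystemParameter"))
```

Device configuration would then proceed from the returned dictionary, exactly as the text describes for the read-data path.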
As shown in fig. 12, the serial-port check in step 105 opens the serial port using the serial port number and baud rate read from the database and returns the result of that operation. The process of connecting and checking the serial port comprises the following steps:
step 1051: adding a serial port callback function, and setting processing modes of return signals of different serial ports;
step 1052: opening a serial port;
step 1053: querying the serial port connection state and returning it;
step 1054: judging whether the serial port is connected or not according to the returned serial port state; if the serial port is connected, go to step 1055; otherwise, prompting and ending the step;
step 1055: reading the address of the connected serial port, setting the serial port address to 0, and reading the return value; if the return value is "FE 42 00 AD 24", the serial port address was set successfully; otherwise, prompting the user and ending the step;
step 1056: sequentially opening and closing each channel of the serial port, sending a self-checking command to the opened channel, and judging whether the channel is normal according to a return value;
step 1057: and finishing the verification of each channel of the serial port and finishing the step.
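Steps 1051-1057 can be sketched as below. The command bytes, the treatment of "FE 42 00 AD 24" as a five-byte acknowledgement, and the transport interface are assumptions made for illustration; a real implementation would sit on a serial library such as pySerial and use the device's actual protocol.

```python
# Expected acknowledgement after setting the serial address to 0 (assumed framing).
ADDRESS_ACK = bytes.fromhex("FE4200AD24")

def check_serial(transport, channel_count):
    """Open the port, set address 0, then self-check every channel.

    Returns (connected, per-channel-ok list). `transport` is any object with
    open() -> bool, write(bytes) and read(n) -> bytes."""
    if not transport.open():
        return False, []                     # step 1054: port not connected
    transport.write(b"\x00")                 # step 1055: set address to 0
    if transport.read(5) != ADDRESS_ACK:
        return False, []                     # wrong acknowledgement
    ok = []
    for ch in range(channel_count):          # step 1056: per-channel self-check
        transport.write(bytes([0xA0 | ch]))  # hypothetical self-check command
        ok.append(transport.read(1) == b"\x01")
    return True, ok                          # step 1057: all channels verified
```

The callback registration of step 1051 is omitted; in practice it would be a handler attached to the transport before open() is called.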
The image display area of the flaw detection interface in step 2 has an upper part and a lower part: the upper part displays the marked image produced by the real-time detection process, and the lower part displays the images acquired by the line scan cameras in real time, which in this embodiment is the stitched line-scan image. Note that when the flaw detection interface enters the real-time detection process, the "clear", "history roll", "change", "set", "forward", "backward" and "exit" buttons are disabled and only the "detect/pause" button remains enabled.
The method for displaying the images acquired by the line scanning camera in real time comprises the following steps:
step 21: the line scan cameras acquire images and transmit the camera image cache to the console;
step 22: the console receives the image cache and converts it into image data through the callback function of the line scan camera;
step 23: store the image data into the image queues in order and check whether every queue holds cached image data; if all queues do, take the first image from each queue to form a real-time image list; if any queue is empty, return to step 21;
step 24: after the first image has been taken from each queue, clear the remaining image buffer of every queue;
step 25: stitch the retrieved images, display the stitched image in the image display area, and end the step.
The number of image queues in step 23 equals the number of line scan cameras: for example, if ten camera modules are arranged on the slide bar for image acquisition, ten image queues are created, and the image data acquired by each line scan camera is stored in its corresponding queue.
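Steps 21-25 amount to a per-camera FIFO with an all-ready barrier before stitching. A minimal sketch, modelling images as lists of pixel rows rather than camera SDK buffers (the class and method names are illustrative):

```python
from collections import deque

class StitchingDisplay:
    """One FIFO queue per line scan camera; stitch only when all are ready."""

    def __init__(self, camera_count):
        self.queues = [deque() for _ in range(camera_count)]

    def on_frame(self, camera_index, image):
        """Camera callback (steps 21-22): enqueue one converted frame."""
        self.queues[camera_index].append(image)

    def try_stitch(self):
        """Steps 23-25: if every queue has data, pop one frame from each,
        clear the leftover buffers, and return the stitched image."""
        if any(not q for q in self.queues):
            return None                       # some camera not ready: wait
        frames = [q.popleft() for q in self.queues]
        for q in self.queues:
            q.clear()                         # step 24: drop stale buffers
        # Stitch side by side: concatenate each pixel row across cameras.
        return [sum((f[r] for f in frames), []) for r in range(len(frames[0]))]
```

Clearing the residual buffers in step 24 keeps the display current at the cost of dropping frames when cameras outpace the display loop.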
Two detection-area sliders are provided at the top of the image display area, one on the left and one on the right. In this example, the region to the right of the left slider and to the left of the right slider is the detection area of the image; the regions to the left of the left slider and to the right of the right slider are non-detection areas.
The flaw detection interface further includes a flaw photo wall area, located on the right side of the interface in this embodiment, which displays defect screenshots; a defect screenshot is the image of a defect region cropped from a defect image and is produced by the real-time detection process. In this embodiment each screenshot is accompanied by its length and width and by the detected roll length of the image data it belongs to, the detected roll length indicating the length of web corresponding to the acquired image data. "Forward" and "backward" buttons in the photo wall area control which photos are displayed. The interface also includes a real-time web defect information display area, located at the top of the interface in this embodiment, which shows the details of the current roll: product name, shift number, roll number, detected length, width, speed, defect count, evenness and system time. Finally, a latest defect information display area, on the left side of the interface in this embodiment, shows the type, longitudinal position, transverse position, working edge distance, conveying edge distance, dark/bright/hole area ratio, line-width-to-length ratio, width, length, area, camera number, roll number, serial number and detection time of the most recent defect.
The flaw detection interface further includes a message prompt box, located at the lower left of the interface in this embodiment, with the "clear" button attached; it displays prompts such as program exceptions or automatic exposure adjustments. The bottom of the interface displays data such as the number of images captured by the cameras, the number of buffered camera images, and the number of processed images.
As shown in fig. 13, the data displayed on the flaw detection interface is updated in real time, including the detected roll length, width and speed in the defect information display area and the counts of captured, buffered and processed camera images. The update proceeds as follows: first, the length, speed, captured image count, processed image count and buffered image count are updated from the length of the roll currently being inspected, the transport speed of the production line, and the numbers of images the cameras have captured, processed and not yet processed; second, the width is determined and updated from the detection-area sliders in the image display area; finally, the shift number and roll number of the web are determined and updated, both set from user input in this embodiment.
As shown in fig. 14, the real-time detection process in step 6 includes the following steps:
step 61: the console acquires image data through the line scan cameras and makes a deep copy of it; a deep copy produces a copy that is fully independent of the source object and is a standard copying technique;
step 62: initializing a return parameter of the detection algorithm, and converting a current product detection flaw parameter into an algorithm data format; the algorithm data format is a format corresponding to the algorithm detection flow;
step 63: input the product defect parameters converted to the algorithm data format, the deep-copied image data and the initialized return parameters into the configured detection algorithm flow, which returns the detection result; the algorithm flow detects defects, gray values and other quantities in the image data, the defects in this example being small, medium and large black dots, white dots and holes;
step 64: traversing the detection algorithm result, and screening out the detection algorithm result judged as a flaw;
step 65: judging in turn whether the position of each defect result lies within the detection range; in this example, whether the abscissa of the result's position falls within the range set by the "detection region division" instruction;
and step 66: acquiring defect information of defect points in the detection range, adding the defect information into a defect list and a marking queue, and recording the defect information into a database; the flaw information comprises detected flaw images, flaw positions, flaw types, flaw screenshots and the like, and the flaw images are spliced image data comprising flaws;
step 67: acquiring image data, finishing a gray value dynamic adjusting process according to the image data, and adjusting the exposure of the line scanning camera, the brightness of the light source module and the like;
step 68: finishing a flaw processing flow according to the flaw information;
step 69: and releasing the resources used in the detection thread, and ending the step.
To acquire the image data in step 61, the image cache of the camera module is obtained first, and the cache is then converted into image data by the callback function of the line scan camera.
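A compact sketch of steps 61 and 63-66 follows. The detect() callback, the dictionary fields and the list-based "database" are placeholders, since the patent does not fix these interfaces; only the control flow (deep copy, run algorithm, screen defects, filter by detection range, record) comes from the text.

```python
import copy

def process_frame(image, params, detect, x_min, x_max,
                  defect_list, mark_queue, db):
    """One pass of the real-time detection flow (steps 61, 63-66 sketch).

    detect(frame, params) is a stand-in for the patent's detection
    algorithm and must return dicts with 'is_defect' and 'center_x'."""
    frame = copy.deepcopy(image)        # step 61: copy independent of source
    results = detect(frame, params)     # step 63: run the detection algorithm
    for r in results:                   # step 64: traverse the results
        if not r["is_defect"]:
            continue                    # keep only results judged as defects
        if not (x_min <= r["center_x"] <= x_max):
            continue                    # step 65: outside the detection range
        defect_list.append(r)           # step 66: add to defect list,
        mark_queue.append(r)            #          marking queue,
        db.append(r)                    #          and database record
    return len(defect_list)
```

Steps 67-69 (gray-value adjustment, defect handling, resource release) would follow this function in the detection thread.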
The detection defect parameters in the algorithm data format in step 62 include:
a) defect name.
b) defect screening length threshold lower limit (mm), detectLengthArrayFrom.
c) defect screening length threshold upper limit (mm), detectLengthArrayTo.
d) defect screening width threshold lower limit (mm), detectWidthArrayFrom.
e) defect screening width threshold upper limit (mm), detectWidthArrayTo.
f) defect screening area threshold lower limit (mm²), 0 meaning the criterion is not used, detectAreaArrayFrom.
g) defect screening area threshold upper limit (mm²), 0 meaning the criterion is not used, detectAreaArrayTo.
h) defect screening width-to-length ratio lower limit, detectWidthLengthArrayFrom.
i) defect screening width-to-length ratio upper limit, detectWidthLengthArrayTo.
j) defect screening line-width-to-length ratio lower limit, detectLineWidthLengthArrayFrom.
k) defect screening line-width-to-length ratio upper limit, detectLineWidthLengthArrayTo.
l) defect bright area ratio lower limit, detectLightRegionPercentsArrayFrom.
m) defect bright area ratio upper limit, detectLightRegionPercentsArrayTo.
n) defect dark area ratio lower limit, detectDarkRegionPercentsArrayFrom.
o) defect dark area ratio upper limit, detectDarkRegionPercentsArrayTo.
p) defect hole area ratio lower limit, detectHoleRegionPercentsArrayFrom.
q) defect hole area ratio upper limit, detectHoleRegionPercentsArrayTo.
r) left undetected region length, leftDetectOneImageAbandonWidth.
s) right undetected region length, rightDetectOneImageAbandonWidth.
t) maximum number of defects detected, maxDefectNumber.
u) common dark threshold, CommonDarkThresh.
v) dynamic dark threshold, DynamicDarkThresh.
w) dynamic extreme dark threshold, DynamicVeryDarkThresh.
x) large-area dynamic dark threshold, DynamicBigAreaDarkThresh.
y) common bright threshold, CommonLightThresh.
z) dynamic bright threshold, DynamicLightThresh.
aa) dynamic extreme bright threshold, DynamicVeryLightThresh.
bb) image magnification (pixels/mm), CameraImageMagnification.
cc) number of camera gray adjustment segments, CameraImageGrayAdjustImageSectionNum.
dd) camera gray adjustment detection segment, CameraImageGrayAdjustDetectSection.
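The parameter block can be pictured as one structure handed to the algorithm. The sketch below shows a representative subset of items a)-dd) as a Python dataclass; grouping them into a single type is an assumption, and the defaults are taken from the camera configuration defaults given earlier in the text.

```python
from dataclasses import dataclass

@dataclass
class DetectParams:
    """Representative subset of the algorithm-format detection parameters."""
    defectName: list                        # a) defect names, one per class
    detectLengthArrayFrom: list             # b) length lower limits (mm)
    detectLengthArrayTo: list               # c) length upper limits (mm)
    detectAreaArrayFrom: list               # f) area lower limits (mm^2); 0 = unused
    detectAreaArrayTo: list                 # g) area upper limits (mm^2); 0 = unused
    maxDefectNumber: int = 10               # t) default from the camera table
    CommonDarkThresh: int = 45              # u)
    DynamicDarkThresh: int = 50             # v)
    CameraImageMagnification: float = 8.822  # bb) pixels per mm
```

The per-class thresholds are arrays because each of the nine defect classes carries its own screening limits.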
In step 63, the returned detection algorithm results include:
a) return code, result (0: detection qualified; 1: defect found; -1: detection anomaly; -2: detection parameter error; -3: image empty; 10: image too bright; 11: image too dark).
b) gray value, ImageGray.
c) number of photos, picNumber.
d) defect class list, arrayDefectsClass.
e) defect center X coordinate array, arrayDefectCenterX.
f) defect center Y coordinate array, arrayDefectCenterY.
// minimum circumscribed rectangle of each defect:
g) arrayDefectRectangleX.
h) arrayDefectRectangleY.
i) arrayDefectRectangleWidth.
j) arrayDefectRectangleHeight.
k) defect area array, arrayDefectArea.
l) defect length array, arrayDefectLength.
m) defect width array, arrayDefectWidth.
n) defect width-to-length ratio array, arrayDefectWidthLength.
o) defect line-width-to-length ratio array, arrayDefectLineWidthLength.
p) defect hole area ratio array, arrayDefectHoleRegionPercents.
q) defect bright area ratio array, arrayDefectLightRegionPercents.
r) defect dark area ratio array, arrayDefectDarkRegionPercents.
s) average gray level, imageMean.
t) gray variance, imageDeviation.
u) image width (mm), imageWidth.
v) image height (mm), imageHeight.
w) detection region width (mm), detectRegionWidth.
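The return codes in item a) lend themselves to a small lookup table; the mapping below restates them, with an illustrative helper name not taken from the patent.

```python
# Return codes of the detection algorithm, as listed in item a).
RESULT_CODES = {
    0: "detection qualified",
    1: "defect found",
    -1: "detection anomaly",
    -2: "detection parameter error",
    -3: "image empty",
    10: "image too bright",
    11: "image too dark",
}

def describe_result(result):
    """Map a numeric return code to its meaning; unknown codes are flagged."""
    return RESULT_CODES.get(result, "unknown result code")
```

Codes 10 and 11 are what feed the gray-value dynamic adjustment in step 67.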
Before the detection algorithm results are traversed in step 64, their algorithm data format is converted into the C# data format so that the results can be screened conveniently.
As shown in fig. 15, the dynamic adjustment of the gray scale value in step 67 includes the following steps:
step 6701: the console acquires the real-time stitched image data A1 from all the line scan cameras and measures the overall gray value of A1;
step 6702: judge the relation between the overall gray value of image A1 and the set limit X1; an overall gray value below X1 indicates that image A1 is close to pure black; if the overall gray value is below X1, start the light source switching process, clear the exposure adjustment flag of the line scan camera once switching completes, and go to step 6703; otherwise, go directly to step 6703;
step 6703: judging whether unprocessed detection threads exist or not; if the unprocessed detection thread exists, ending the step; otherwise, go to step 6704; the detection thread represents the algorithm detection of the image;
step 6704: with no real-time detection process running on the console, segmenting the image data A1 according to the set segmentation parameters; in this example the image is divided into 10 segments;
step 6705: calculating the overall gray value of each segment of the segmented image, and judging the relation between a set value X2 and the ratio of the difference between the overall gray value of a set segment image A2 and the average gray value of the other segment images, wherein the set segment image A2 is the 5th segment in this example; if the ratio is greater than the set value X2 (0.1 in this example), the image is considered abnormal and the step ends; if the ratio is less than or equal to the set value X2, entering step 6706;
step 6706: judging whether the current image data A1 is the first frame image collected by the line scan camera; if the first frame image is the first frame image, go to step 6712 directly; otherwise go to step 6707;
step 6707: acquiring the previous n frames of image data captured by the line scan camera before the current image data A1; if fewer than n frames exist before the image data A1, acquiring all images preceding the image data A1; in this example n is 10;
step 6708: calculating the average gray value of the previous image data obtained in the step 6707, comparing the average gray value with the overall gray value of the segment image A2, and judging the relationship between the ratio of the difference value of the average gray value and the overall gray value of the segment image A2 and a set value X3; if the difference ratio is greater than the set value X3, the image is considered to be abnormal, and the step is ended, wherein the value X3 is set to 0.1 in the example; if the ratio is less than or equal to the set value X3, go to step 6709;
step 6709: acquiring new image data A3 formed by splicing data acquired by all line cameras, and acquiring the whole gray value of the image data A3;
step 6710: comparing the entire gradation value of the image data a3 with the set system gradation value range (X3, X4); if the overall gray-scale value of the stitched image data a3 is greater than the upper limit X4 of the system gray-scale value, go to step 6711; if the overall gray value of the spliced image data A3 is smaller than the lower limit X3 of the system gray value, entering step 6712; otherwise, go directly to step 6713;
step 6711: judging whether the overall gray value of the image data A3 is greater than the upper limit X4 of the system gray value and whether the brightness of the light source module is greater than the set minimum value X5; if the brightness of the light source module is greater than the set minimum value X5, decreasing the brightness of the light source module, and clearing the exposure adjustment flag bit of the line scan camera, and then entering step 6713; otherwise, go directly to step 6713; wherein the brightness of the light source module is measured and adjusted by power;
step 6712: judging whether the overall gray value of the image data A3 is less than the lower limit X3 of the system gray value, and judging whether the brightness of the light source module is less than the set maximum value X6; if the brightness of the light source module is less than the set maximum value X6, the brightness of the light source module is enhanced, and the exposure adjustment flag bit of the line scan camera is cleared, and the process proceeds to step 6713; otherwise, go directly to step 6713;
step 6713: acquiring exposure adjustment zone bits of all line cameras, and judging whether exposure adjustment is finished or not; if the exposure adjustment is completed, go to step 6723; otherwise, go to step 6714;
step 6714: acquiring the numbers of the line scan cameras that have not finished exposure adjustment; the number is a preset value of the line scan camera used to distinguish the different line scan cameras in the system, and in this embodiment the number of a line scan camera corresponds to its image storage queue; controlling the line scan camera with the set number to acquire image data A4; in this example, the line scan camera with the smallest number among those that have not finished exposure adjustment is controlled;
step 6715: judging whether a gray value accurate adjustment flag bit of the line scanning camera is opened or not; if the gray-level value fine adjustment flag is on, go to step 6716; otherwise, go to step 6719; the gray value accurate adjusting zone bit can be switched on and off through external input;
step 6716: with the gray value accurate adjustment flag bit open, judging the relation between the overall gray value of the image data A4 and the high-precision gray value range (X7, X8); if the overall gray value of the image data A4 is less than X7, entering step 6717; if it is greater than X8, entering step 6718; otherwise the gray value of the image data A4 conforms to the high-precision gray value range, so the gray value accurate adjustment flag bit is closed, the line scan camera exposure adjustment flag bit is set to indicate that exposure adjustment has been performed, and the process enters step 6722;
step 6717: the overall gray value of the image data A4 is smaller than X7, and the relationship between the current exposure of the line scan camera and the set exposure upper limit X9 is judged; if the current exposure amount of the line scan camera is larger than the exposure upper limit X9, prompting that the adjustment cannot be performed, inputting an exposure adjustment flag bit of the line scan camera, and entering step 6722; otherwise, increasing the exposure of the line scan camera, inputting an exposure adjustment flag bit of the line scan camera, and entering step 6722;
step 6718: the overall gray value of the image data A4 is larger than X8, and the relation between the current exposure of the line scan camera and the set lower exposure limit X10 is judged; if the current exposure amount of the line scan camera is smaller than the lower exposure limit X10, prompting that the adjustment cannot be performed, inputting an exposure adjustment flag bit of the line scan camera, and entering step 6722; otherwise, the exposure of the line scan camera is reduced, and the line scan camera exposure adjustment flag bit is inputted, and the process goes to step 6722;
step 6719: if the gray value accurate adjustment flag bit is not opened, judging the relation between the overall gray value of the image data A4 and the gray value adjustment range (X11, X12); if the overall gray value of the image data A4 is less than X11, entering step 6720; if the overall gray value of the image data A4 is greater than X12, entering step 6721; otherwise, the gray value of the image data A4 conforms to the gray value adjustment range, so the line scan camera exposure adjustment flag bit is set, and the process proceeds to step 6722;
step 6720: the overall gray value of the image data A4 is smaller than X11, and the relationship between the current exposure of the line scan camera and the set exposure upper limit X9 is judged; if the current exposure amount of the line scan camera is larger than the exposure upper limit X9, prompting that the adjustment cannot be performed, inputting an exposure adjustment flag bit of the line scan camera, and entering step 6722; otherwise, increasing the exposure of the line scan camera, inputting an exposure adjustment flag bit of the line scan camera, and entering step 6722;
step 6721: the overall gray value of the image data A4 is larger than X12, and the relation between the current exposure of the line scan camera and the set lower exposure limit X10 is judged; if the current exposure amount of the line scan camera is smaller than the lower exposure limit X10, prompting that the adjustment cannot be performed, inputting an exposure adjustment flag bit of the line scan camera, and entering step 6722; otherwise, the exposure of the line scan camera is reduced, and the line scan camera exposure adjustment flag bit is inputted, and the process goes to step 6722;
step 6722: acquiring exposure adjustment zone bits of all line cameras, and judging whether exposure adjustment is finished or not; if the exposure adjustment is completed, go to step 6723; otherwise, returning to the step 6714;
step 6723: and finishing the dynamic gray value adjustment after all the line cameras finish the exposure adjustment.
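The per-camera exposure loop of steps 6713-6722 can be sketched as follows. The linear gray-versus-exposure camera model, the concrete limit values, and the dictionary layout are assumptions made purely for illustration; the patent does not specify how the captured gray value responds to exposure.

```python
def adjust_exposures(cameras, lo=100, hi=160, exp_min=0, exp_max=1000, step=50):
    """One-camera-at-a-time exposure adjustment, modeled on steps 6713-6722.

    `cameras` maps a camera number to a dict with 'exposure' and 'gain'
    (simulated gray value per unit exposure). The loop always picks the
    lowest-numbered camera that has not finished adjustment, as in step 6714.
    """
    done = {n: False for n in cameras}
    while not all(done.values()):
        n = min(k for k, d in done.items() if not d)   # smallest unfinished number
        cam = cameras[n]
        gray = cam['gain'] * cam['exposure']           # simulated captured gray value
        if gray < lo and cam['exposure'] + step <= exp_max:
            cam['exposure'] += step                    # step 6720: raise exposure
        elif gray > hi and cam['exposure'] - step >= exp_min:
            cam['exposure'] -= step                    # step 6721: lower exposure
        else:
            done[n] = True                             # in range, or limit reached
    return cameras

cams = adjust_exposures({1: {'exposure': 100, 'gain': 0.5},
                         2: {'exposure': 600, 'gain': 0.5}})
print(cams)
```

Each pass adjusts one camera by one step, mirroring how the patent re-enters step 6714 until every exposure adjustment flag bit is set.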
In step 6701, the line scan cameras that need to be adjusted are determined, and images of the line scan cameras are acquired, so as to adjust the exposure of each line scan camera, respectively, and ensure accurate adjustment of the exposure of the line scan cameras.
In the light source switching process of step 6702, the on-off states of the front light source and the backlight source in the light source module are determined first; it should be noted that, in general, only one of the front light source and the backlight source is turned on. The brightness of the active light source is then increased and the resulting change in the gray value of the acquired image data is judged: if the increase in the overall gray value of the image is greater than the set threshold X2, the light source does not need to be changed; otherwise the lamp group is switched, from the front light source to the backlight source or from the backlight source to the front light source.
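The switching decision described above can be sketched as a small function; the source names and the way the before/after gray values are passed in are illustrative assumptions.

```python
def choose_light_source(active, gray_before, gray_after, threshold):
    """Light-source switching decision from step 6702 (illustrative sketch).

    After boosting the active source's brightness, a gray value increase
    above `threshold` means the source still illuminates the web, so it is
    kept; otherwise the lamp group is switched (front <-> back).
    """
    if gray_after - gray_before > threshold:
        return active
    return 'back' if active == 'front' else 'front'

print(choose_light_source('front', 10, 40, 20))  # brightness boost was effective
print(choose_light_source('front', 10, 15, 20))  # no effect: switch to backlight
```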
In step 6704, the image data is segmented, the detection segment image is set, and the detection segment image is compared with other segment images or previous frame image data, because in some coiled materials, only the coiled materials within the set range width need to be ensured to meet the requirements, unnecessary detection is avoided, and the detection speed is ensured.
In steps 6704-6708, the gray value relationships between the current segment image A2, the other segment images, and the previous image data are judged to handle the cases of a roll change or of a non-single-tone web being transported; the gray value of the image changes greatly after the web is changed or a web of a different tone is transported, so comparing against the gray values of the previous images reveals whether the web has been changed and avoids unnecessary detection and adjustment. In addition, comparing the current segment image A2 with the other segment images handles the situation where the tail end of the web is not flush, allowing a quick and accurate judgment in that case. It should be noted that even if the light source switching process occurs in step 6702, the accuracy of the judgment in this step is not affected, because the gray value relationships here mainly serve to confirm that the same web is still being transported.
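The segment-consistency check of steps 6704-6705 can be sketched as follows, assuming the per-segment mean gray values have already been computed (the 10-segment split, the 5th watched segment, and the 0.1 threshold follow the example above).

```python
def segment_gray_check(segments, watch_index=4, x2=0.1):
    """Segment-consistency check sketched from steps 6704-6705.

    `segments` holds one mean gray value per segment (10 segments in the
    example; `watch_index` 4 is the 5th segment, i.e. segment image A2).
    The check passes when the relative difference between A2 and the
    average of the other segments is at most x2.
    """
    a2 = segments[watch_index]
    others = [g for i, g in enumerate(segments) if i != watch_index]
    mean_others = sum(others) / len(others)
    return abs(a2 - mean_others) / mean_others <= x2

print(segment_gray_check([100] * 10))                 # uniform web: passes
anomalous = [100] * 10
anomalous[4] = 200                                    # bright patch in segment 5
print(segment_gray_check(anomalous))                  # flagged as abnormal
```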
In step 6710 and step 6714, newly stitched real-time image data A3 and image data A4 are acquired because the light source switching in step 6702 and the brightness adjustments in steps 6711-6712 affect the image data collected by the line scan cameras; the real-time image data therefore needs to be re-acquired so that the light source changes do not distort the gray values of the new image data, ensuring accurate dynamic gray value adjustment of the line scan cameras.
In step 6716 and step 6719, the high-precision gray value range (X7, X8) is a subrange of the gray value adjustment range (X11, X12).
An exposure adjustment flag bit is set in the dynamic gray value adjustment process: the flag is set after a line scan camera completes exposure adjustment, and is cleared after the light source module is adjusted, so that every camera performs exposure adjustment during dynamic gray value adjustment and the line scan camera exposure is re-adapted after any light source change, ensuring image quality while reducing unnecessary exposure adjustment. It should be noted that in some other embodiments the exposure adjustment flag of the line scan camera is reset after a set time or after a roll change, so that exposure adaptation can be performed for different coiled materials at the first opportunity, ensuring the quality of the acquired images. The exposure adjustment flag can also be set from external input.
As shown in fig. 16, fig. 16 is a flow chart of updating a defect photo wall, and the defect processing flow in step 68 includes the following steps:
step 681: updating display data of a flaw detection interface according to the flaw information, wherein the display data comprises a flaw information display area, a flaw photo wall and the like;
step 682: finishing a flaw marking process according to the marking queue and the flaw information;
step 683: and performing flaw alarm according to the flaw information.
As shown in fig. 17, the defect marking process in step 682 includes the following steps:
step 6821: acquiring defect information and a marking queue, wherein the defect information comprises a defect image; the flaw image is spliced image data comprising flaws;
step 6822: respectively acquiring the roll length and the width of a flaw image to be marked, and judging whether the roll length and the width are 0; if either the length or the width of the roll is 0, ending the step; otherwise, go to step 6823; wherein the roll length of the defect image represents the length of the roll displayed in the defect image, and the width of the defect image represents the width of the roll displayed in the defect image;
step 6823: generating a white background image with the same length and width according to the roll length and the width of the defective image;
step 6824: printing a detection roll length scale on a set position of the white background image; detecting the roll length which represents the roll length of the coiled material corresponding to the acquired image data;
step 6825: finding the position of the flaw point on the white background image according to the position information in the flaw information, and marking to obtain a marking image; the mark comprises information such as characters, patterns, colors and the like;
step 6826: judging whether a marking image is displayed in an image display area of a flaw detection interface; if the marking image is displayed, connecting the newly obtained marking image to the lower part of the old marking image; if the real-time marking image is not displayed, directly displaying the marking image;
step 6827: judging whether the pixel length of the marking image displayed in the image display area is greater than the set control size upper limit Y; if it is greater than Y, cropping the marking image displayed in the image display area and keeping an image of Y1 pixels at the tail of the marking image, the tail being the side of the marking image close to the real-time detection roll length; in this example Y is set to 40000 and Y1 to 0, i.e. the marking image is emptied;
step 6828: controlling the marking image displayed in the image display area to slide to the tail end;
step 6829: releasing image resources and updating the volume data; and finishing the step.
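The marking flow above can be sketched in two small pieces: generating the marked background image (steps 6822-6825) and the length bookkeeping for the displayed image (steps 6826-6827). A grid of characters stands in for the white background image; the data layout is an assumption made for illustration only.

```python
def make_mark_image(roll_length, roll_width, defects):
    """Mark-image generation sketched from steps 6822-6825.

    '.' represents white background, 'X' a defect mark placed from the
    position information in the flaw info. Returns None when either
    dimension is 0, mirroring the early exit in step 6822.
    """
    if roll_length == 0 or roll_width == 0:
        return None
    image = [['.'] * roll_width for _ in range(roll_length)]
    for x, y in defects:
        image[y][x] = 'X'
    return image

def displayed_length_after_append(displayed_len, new_len, y=40000, y1=0):
    """Length bookkeeping from steps 6826-6827: the new mark image is
    joined below the old one; past the control size upper limit Y, only
    the last y1 pixels at the tail are kept (y1 = 0 in the example, so
    the marking image is emptied)."""
    total = displayed_len + new_len
    return y1 if total > y else total

img = make_mark_image(4, 3, [(1, 2)])
print(img)
print(displayed_length_after_append(39990, 20))
```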
As shown in fig. 18, in the fault alarm of step 683, the fault information needs to be acquired first; then traversing the flaw information to obtain the flaw with the highest alarm priority, wherein the alarm priority is set according to the type of the flaw, and different alarm priorities correspond to different alarm lamp flashing time, alarm lamp colors and the like; then, taking the maximum value of the flashing time of the alarm lamps with different colors of the flaw point, and determining the maximum value as the alarm time of the alarm, wherein the maximum value comprises the flashing time of a red lamp, the flashing time of a green lamp and the flashing time of a yellow lamp; and finally, controlling the alarm to give an alarm according to the alarm time of the alarm, and controlling the alarm lamp to flash according to the flashing time of the alarm lamps with different colors to finish flaw alarm.
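The alarm selection just described can be sketched as follows; the per-defect dictionary layout and the concrete field names are illustrative assumptions.

```python
def plan_alarm(defects):
    """Alarm planning sketched from step 683: pick the defect with the
    highest alarm priority (smallest number, 1 = highest) and take the
    maximum of its red/green/yellow lamp blink times as the alarm
    duration."""
    top = min(defects, key=lambda d: d['priority'])
    duration = max(top['red_ms'], top['green_ms'], top['yellow_ms'])
    return top['type'], duration

kind, ms = plan_alarm([
    {'type': 'stain', 'priority': 3, 'red_ms': 0, 'green_ms': 20, 'yellow_ms': 0},
    {'type': 'hole', 'priority': 1, 'red_ms': 80, 'green_ms': 0, 'yellow_ms': 30},
])
print(kind, ms)
```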
As shown in fig. 19, the history volume interface in step 7 includes a defective roll number area, a historical marking image display area, and a historical flaw photo wall area; the defective roll number area shows the date and the numbers of the defective rolls of the current day, displayed as a tree diagram for convenient viewing and selection; the historical marking image display area displays the marking images corresponding to the defective roll numbers, and the historical flaw photo wall area displays the flaw screenshots captured for the corresponding defective roll numbers. The history volume interface is also provided with operation buttons, including a 'view report' button, a 'forward' button, a 'backward' button, and a 'return' button; the 'view report' button jumps to the flaw statistical report interface and is arranged in the defective roll number area in this example; the 'forward' and 'backward' buttons are arranged in the historical marking image display area and the historical flaw photo wall area and are used to move the marking images and flaw photos forward and backward, respectively. The history volume interface further comprises a page number input box for quick jumping of the image displayed in the historical marking image display area. The 'view report', 'forward', 'backward', and 'return' buttons correspond to a 'view report' instruction, a 'forward' instruction, a 'backward' instruction, and a 'return' instruction, respectively. The history volume interface flow comprises the following steps:
step 71: reading historical roll information in a database by an operation console, wherein the historical roll information comprises historical marking images and historical flaw screenshots;
step 72: generating a historical volume tree diagram according to the read historical volume information, and loading and displaying the historical volume tree diagram in a defective volume number area; in this example, the tree diagram of the history volume is a three-layer tree diagram, including the annual month-day-flaw volume number;
step 73: judging whether an input instruction of a user through a history volume tree diagram is received or not; if receiving an input instruction, loading corresponding history volume information from a database; otherwise, return to step 73;
step 74: loading and displaying marking images in the corresponding historical roll information in a historical marking image display area, and loading and displaying a flaw screenshot in a historical flaw photo wall area;
step 75: judging whether an operation instruction sent by a user through an operation button or a history volume tree diagram is received; if the operation instruction is received, the operation flow is completed according to the operation instruction; otherwise, the procedure returns to step 75.
As shown in fig. 20, the defect statistics report interface includes a defect statistics table, a defect histogram, and roll information, where the defect statistics table counts the defect types, the number, and the ratio of the defect types and the number of the rolls corresponding to the number of the defective roll; drawing a defect histogram according to the defect statistical table, wherein the abscissa of the defect histogram is the defect type and the ordinate is the defect number; the roll information includes information such as product name, roll length, roll number, total defect number, width, shift number, etc., and the roll information is acquired by input from a sensor or an operation panel. The flaw statistical report interface also comprises a PDF file export button and a return button; the 'export PDF file' button is used for printing the image displayed on the flaw statistical report form interface into a PDF format and outputting the image; the "back" button is used to return to the history volume interface.
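The per-type counts and ratios used by the statistics table and histogram can be sketched as below; the list-of-type-names input is an assumption made for the example (the patent's table also carries the roll information described above).

```python
from collections import Counter

def defect_statistics(defect_types):
    """Per-type defect count and ratio for the statistics table and the
    defect histogram (sketch). Input is one type name per detected flaw
    of the selected defective roll."""
    counts = Counter(defect_types)
    total = len(defect_types)
    return {t: (n, n / total) for t, n in counts.items()}

stats = defect_statistics(['hole', 'hole', 'scratch', 'stain'])
print(stats)
```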
Updating the display information of the flaw detection interface in the step 8, wherein the updating comprises updating the coil number, the coil parameters and emptying the real-time flaw information display area of the coil, and the length and the width of the real-time coil are set to be 0; freeing up display image resources includes clearing up the image display area and the defect photo wall area. In addition, in this example, automatic roll change is set every other day.
As shown in fig. 21-30, the setting interface in step 9 includes a "system parameter" button, a "defect classification" button, a "detection parameter" button, and a "return" button; the system parameter button, the detection parameter button, the defect classification button and the return button are all jump buttons; the system parameter button corresponds to a system configuration interface; the detection parameter button corresponds to a detection parameter setting interface; the 'flaw classification' button corresponds to a flaw classification setting interface; the "back" button corresponds to the flaw detection interface.
In this example, the system configuration interface includes parameter settings for camera number, maximum exposure value, minimum exposure value, adjustment scale, upper gray limit, lower gray limit, number of cameras, serial port, left edge, and right edge. The camera number identifies the line scan camera object modified by the user. The maximum exposure value is the upper limit of the exposure adjustable during dynamic gray value adjustment; the variable is an integer ranging from the minimum exposure value to 1000 Hv, in Hv. The minimum exposure value is the lower limit of the exposure adjustable during dynamic gray value adjustment; the variable is an integer ranging from 0 Hv to the maximum exposure value, in Hv. The adjustment scale is the camera exposure adjusted per step during dynamic gray value adjustment; the variable is an integer ranging from 0 Hv to 100 Hv, in Hv. The upper gray limit is the gray value above which dynamic gray value adjustment is started: if the gray value of the returned image is greater than this upper limit, the exposure is adjusted; the variable is an integer ranging from the lower gray limit to 255. The lower gray limit is the gray value below which dynamic gray value adjustment is started: if the gray value of the returned image is smaller than this lower limit, the exposure is adjusted; the variable is an integer ranging from 0 to the upper gray limit. The number of cameras is the number of enabled line scan cameras; the variable is an integer ranging from 1 to 4. The serial port is the port number connected to the serial module; the variable is an integer ranging from 1 to 20. The left edge is the length of the left undetected area in the flaw detection process; the variable is an integer ranging from 0 mm to 200 mm, in mm. The right edge is the length of the right undetected area in the flaw detection process; the variable is an integer ranging from 0 mm to 200 mm, in mm.
The detection parameter setting interface comprises parameter settings for camera number, upper limit of the number of flaws, dynamic dark threshold, dynamic quantity threshold, dynamic extremely dark threshold, dynamic extremely bright threshold, common dark threshold, common bright threshold, large-area dark threshold, mean convolution 1, mean convolution 2, mean convolution 3, and mean convolution 4. The camera number identifies the line scan camera object modified by the user. The upper limit of the number of flaws is the maximum number of flaws detectable in each flaw detection of a camera image; the variable is an integer ranging from 0 to 20. The dynamic dark threshold, dynamic quantity threshold, dynamic extremely dark threshold, dynamic extremely bright threshold, common dark threshold, common bright threshold, and large-area dark threshold are the corresponding standard thresholds of image detection; each is an integer ranging from 5 to 150. Mean convolutions 1 to 4 are the mean convolutions of image detection; each is an integer ranging from 5 to 150.
The flaw classification setting interface comprises parameter settings for product name, flaw name, transverse width upper limit, transverse width lower limit, lengthwise upper limit, lengthwise lower limit, lengthwise/linewidth upper limit, lengthwise/linewidth lower limit, linewidth/lengthwise upper limit, linewidth/lengthwise lower limit, area upper limit, area lower limit, bright area upper limit, bright area lower limit, hole area upper limit, hole area lower limit, dark area upper limit, dark area lower limit, priority, mark display, mark font, mark characters, green alarm lamp, yellow alarm lamp, red alarm lamp, green alarm time, yellow alarm time, and red alarm time. The product name identifies the product object modified by the user; the flaw name identifies the specified flaw object within that product. The transverse width upper limit is the upper limit of the transverse width of a detected flaw, an integer ranging from the transverse width lower limit to 100; the transverse width lower limit is an integer ranging from 0 to the transverse width upper limit. The lengthwise upper limit is the upper limit of the lengthwise size of a detected flaw, an integer ranging from the lengthwise lower limit to 100; the lengthwise lower limit is an integer ranging from 0 to the lengthwise upper limit. The lengthwise/linewidth upper limit is an integer ranging from the lengthwise/linewidth lower limit to 100; the lengthwise/linewidth lower limit is an integer ranging from 0 to the lengthwise/linewidth upper limit. The linewidth/lengthwise upper limit is an integer ranging from the linewidth/lengthwise lower limit to 100; the linewidth/lengthwise lower limit is an integer ranging from 0 to the linewidth/lengthwise upper limit. The area upper limit is the upper limit of the area of a detected flaw, an integer ranging from the area lower limit to 100; the area lower limit is an integer ranging from 0 to the area upper limit. The bright area upper limit is an integer ranging from the bright area lower limit to 100; the bright area lower limit is an integer ranging from 0 to the bright area upper limit. The hole area upper limit is an integer ranging from the hole area lower limit to 100; the hole area lower limit is an integer ranging from 0 to the hole area upper limit. The dark area upper limit is an integer ranging from the dark area lower limit to 100; the dark area lower limit is an integer ranging from 0 to the dark area upper limit. The priority indicates the display priority of the flaw, with 1 the highest; flaws with high priority are alarmed and marked first; the variable is an integer ranging from 0 to 100. Mark display indicates whether the flaw is marked after detection (checked for marking, unchecked for no marking); the mark font sets the size, color, and style of the marking font; the mark characters are the characters used as the mark. The green, yellow, and red alarm lamps indicate whether the corresponding lamp is lit after the flaw is detected (checked for lit, unchecked for not lit). The green, yellow, and red alarm times are the durations for which the corresponding lamp is lit after the flaw is detected; each is an integer ranging from 1 ms to 100 ms, in ms.
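As a minimal illustration of how such per-class limits might screen a detected flaw, the sketch below checks measured features against per-class (lower, upper) ranges. The class names, feature names, and values are invented for the example; the patent's actual classifier and full feature set are not reproduced here.

```python
def classify_defect(measured, classes):
    """Threshold-based classification sketched from the flaw
    classification settings: `classes` maps a flaw name to per-feature
    (lower, upper) limits; the first class whose limits all contain the
    measured values wins, otherwise None."""
    for name, limits in classes.items():
        if all(lo <= measured[f] <= hi for f, (lo, hi) in limits.items()):
            return name
    return None

classes = {
    'scratch': {'area': (0, 5), 'width': (0, 1)},
    'hole': {'area': (5, 50), 'width': (1, 10)},
}
print(classify_defect({'area': 20, 'width': 4}, classes))
```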
The values of the left and right undetected areas of the image in step 12 are determined according to the detection area slider of the image display area of the defect detection interface.
The console is also provided with external mechanical buttons, which are in communication connection with the console through serial ports; the mechanical buttons in this example are a detect/pause mechanical button and a change mechanical button. A button serial-port control process is realized through the mechanical buttons and comprises the following steps:
step a1: the console sends an optocoupler query signal, at a set time interval via a set timer, to the serial port connected to each mechanical button;
step a2: the mechanical button receives the optocoupler query signal and returns an optocoupler query result signal according to its pressing state;
step a3: the console receives the returned optocoupler query result signal, acquires the pressing state of the mechanical button, and determines the type of the connected mechanical button from the serial port on which the signal returned;
step a4: the console executes the corresponding action according to the type and pressing state of the mechanical button, and the step ends.
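The polling handshake of steps a1 to a4 can be sketched as follows; the transport interface, query bytes and button names are hypothetical, since the actual serial framing is not specified in the text:

```python
# Sketch of the steps a1-a4 polling handshake. The transport interface,
# query bytes and button names are hypothetical; the real protocol runs
# over serial ports whose framing is not given in the text.
def poll_buttons(ports, query):
    """Send the optocoupler query to every button port and collect the
    returned pressing state, keyed by the button type bound to the port."""
    states = {}
    for button_type, transport in ports.items():
        transport.write(query)         # step a1: console sends query signal
        pressed = transport.read()     # steps a2-a3: button returns its state
        states[button_type] = pressed  # step a4 dispatches on this mapping
    return states

class FakePort:
    """Stand-in for a serial port, for illustration only."""
    def __init__(self, pressed):
        self._pressed = pressed
    def write(self, data):
        pass                           # a real port would transmit the query
    def read(self):
        return self._pressed           # a real port would parse the reply

state = poll_buttons({"detect_pause": FakePort(True),
                      "change": FakePort(False)}, b"QRY")
```

Binding the button type to the port, as in step a3, means the reply itself only needs to carry the pressing state.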
In the implementation process, an operation platform is connected with a camera module to obtain image data, real-time display is carried out according to the obtained image data, and flaw detection of the coiled material is completed according to a set algorithm; the exposure, the brightness and the like of the camera module and the light source module are controlled by setting a gray value dynamic adjusting process, so that the gray value of an image acquired by the camera module meets a set range; by setting the flaw marking process, the positions of flaws on the coiled material can be accurately judged and displayed, so that the historical flaws can be conveniently checked; the images of the line scanning camera are spliced and displayed in real time, so that the line scanning camera can collect and display the coiled material image with complete width on an operation table.
As shown in fig. 31, a web inspection method based on a line scanning camera, which can be used as an inspection algorithm in a web real-time inspection process, includes the following steps:
step S1: the operation desk acquires images acquired by all the line scanning cameras and counts the number of the images;
step S2: judging whether the acquired image is a multi-channel image; if the image is a multi-channel image, converting the image into a gray image, and entering step S3; otherwise, directly entering step S3;
step S3: sequentially selecting images, calculating the size of the images, finishing the cutting of each group of images according to left and right undetected areas and acquiring the gray value of the images; each group of images represent images collected by different line scanning cameras at the same moment;
step S4: judging whether periodic stripes exist according to an overall evaluation of the image background; if periodic stripes exist, the material is a textured material and a de-texturing step is required, and the next step is entered after de-texturing is finished; otherwise, directly entering the next step;
step S5: completing a definition self-adaption process of the image according to the gray value of the image, judging the corresponding filtering stage number, completing image filtering, and obtaining a total defect area in the image;
step S6: performing adjacent-area multi-defect processing on the total defect region based on a clustering method, and connecting defect regions that meet the requirements;
step S7: obtaining a flaw output priority according to the area of each flaw region or flaw connected region; sorting the flaws according to flaw output priority;
step S8: acquiring flaw information, and sequentially judging whether the flaw information meets the requirement of a set threshold range according to a flaw output priority sequence; if the flaw information meets the requirement of a set threshold, sequentially putting the flaw information into an output queue;
step S9: judging whether the flaw information in the output queue exceeds the set output quantity upper limit of the flaw information or not; if the set output quantity upper limit of the flaw information is exceeded, outputting the flaw information of the set output quantity according to the flaw output priority sequence, and ending the step; otherwise, outputting all the flaw information in the queue, and ending the step.
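Steps S7 to S9 (priority ranking, threshold screening, and the output cap) can be sketched as a minimal selection routine; the flaw record layout and the single area threshold range are simplifying assumptions:

```python
# Minimal sketch of steps S7-S9: rank flaws by area (a larger area means
# a higher output priority), keep those inside one illustrative area
# threshold range, and cap the number of outputs.
def select_flaws(flaws, area_range, max_output):
    lo, hi = area_range
    ranked = sorted(flaws, key=lambda f: f["area"], reverse=True)  # step S7
    queue = [f for f in ranked if lo <= f["area"] <= hi]           # step S8
    return queue[:max_output]                                      # step S9

flaws = [{"id": 1, "area": 5}, {"id": 2, "area": 50}, {"id": 3, "area": 20}]
out = select_flaws(flaws, area_range=(10, 100), max_output=1)
```

In the full method the screening of step S8 checks every field of the flaw information against its own threshold pair, not just the area.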
As shown in fig. 32, the process of completing the cropping of the image and acquiring the gray scale value in step S3 includes the following steps:
step S31: acquiring a group of images, and judging whether the group contains one image or more than one; if the number of images is one, the process proceeds to step S32; if the number of images is more than one, the process proceeds to step S33;
step S32: if the group of images only contains one image, judging the width of a left clipping area and a right clipping area and the relation between the width of the left clipping area and the width of the right clipping area and the width of the image, wherein the left clipping area and the right clipping area are obtained through a left non-detection area and a right non-detection area; if the sum of the widths of the left and right trimming areas is greater than the image width, the process proceeds to step S35; if the sum of the widths of the left cut region and the right cut region is equal to or less than the image width, the process proceeds to step S36;
step S33: if the number of the images contained in the group of images is more than one, judging the relation between the width of the first image and the width of the left cutting area; if the width of the first image is smaller than the width of the left cutting area, the step S35 is executed; otherwise, the process goes to step S34;
step S34: judging the relation between the width of the last image and the width of the right cutting area; if the width of the last image is smaller than the width of the right cutting area, the step S35 is executed; otherwise, the process goes to step S36;
step S35: if the image cutting area is too large, judging that the image detection is abnormal, ending the step and ending the detection algorithm;
step S36: sequentially acquiring one image in the group of images, and judging whether the image is the first image of the group of images; if the first image is the set of images, the process proceeds to step S37; if not, the process proceeds to step S38;
step S37: if the image is the first image of the group of images, judging whether the group of images only has one image; if only one image exists, acquiring left and right cutting areas of the image according to the left and right undetected areas, finishing the cutting of the image, and entering the step S310; if not, calculating a left clipping area of the image, completing clipping of the image, and entering step S310;
step S38: if the image is not the first image of the group of images, judging whether the image is the last image of the group of images; if the last image of the group of images, go to step S39; if the image is not the last image of the group of images, directly entering step S310;
step S39: if the image is the last image of the group of images and is not the first image, calculating a right cutting area of the image, finishing the cutting of the image, and entering step S310;
step S310: and calculating the gray value of the obtained image, and ending the step.
In step S310, after the gray scale value of the image is obtained, the average gray scale value of the image is also obtained, and if the average gray scale value of the image is higher than the set value Y1 or lower than the set value Y2, the image is considered to be too bright or too dark, the image is abnormal, and the detection algorithm is ended; in this example, the set value Y1 is 230 and Y2 is 30.
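The cropping decisions of steps S31 to S39 can be sketched for one group of images as follows; widths are in pixels, the abnormal-detection branch of step S35 is reduced to an exception, and the names are illustrative:

```python
# Sketch of steps S31-S39 for one group of images (widths in pixels).
# The abnormal-detection branch of step S35 is reduced to an exception;
# the function and parameter names are illustrative.
def crop_group(widths, left_cut, right_cut):
    n = len(widths)
    if n == 1 and left_cut + right_cut > widths[0]:
        raise ValueError("crop area too large")          # steps S32, S35
    if n > 1 and (widths[0] < left_cut or widths[-1] < right_cut):
        raise ValueError("crop area too large")          # steps S33-S35
    spans = []
    for idx, width in enumerate(widths):
        left = left_cut if idx == 0 else 0               # step S37
        right = right_cut if idx == n - 1 else 0         # steps S38-S39
        spans.append((left, width - right))              # kept column range
    return spans

spans = crop_group([1024, 1024], left_cut=100, right_cut=50)
```

Only the first image of a group loses its left undetected area and only the last loses its right one, which matches the branch structure above.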
As shown in fig. 33, the de-texturing step in step S4 includes:
step S41: obtaining the cut image, and calculating the Width and Height of the image;
step S42: extracting a sub-image of size 1/2 Width × 1/2 Height at a random area of the image;
step S43: two mutually perpendicular straight lines L1, L2 are arranged at random positions in the sub-image; in this example, the straight lines L1 and L2 are also parallel to the width and height edges of the sub-image respectively, and pass through the center point of the sub-image;
step S44: carrying out bilateral filtering on the sub-image to remove sharp noise while preserving edges without blurring, performing edge enhancement, and calculating the second-derivative images of the sub-image in the width and height directions respectively;
step S45: acquiring the linear regions in the second-derivative images according to the second-derivative images of the sub-image in the width and height directions;
step S46: extracting the linear-region skeletons from the second-derivative images according to the linear regions and the light-to-dark polarity change, and converting the linear objects to obtain stripes; the stripes are bright areas in this example;
step S47: calculating the Hough transform values (ρ, θ) of all the extracted straight lines; the Hough transform converts rectangular coordinates into polar coordinates, which makes it convenient to determine included angles and complete position location;
step S48: combining the stripes which are in the same linear region framework and have the same angle and are lower than the set pixel point length L3; l3 is 4 pixels long in this example;
step S49: after the stripes are combined, the stripes with the length less than the set pixel point length L4 are eliminated; l4 is 20 pixels long in this example;
step S410: acquiring residual stripes, and screening out the stripes intersected with the straight line L1 or the straight line L2;
step S411: extracting stripes with repeated intersection point distance and included angle according to intersection points of the stripes and the straight line L1; extracting stripes with repeated intersection point distance and included angle according to intersection points of the stripes and the straight line L2;
step S412: judging whether included angles exist between the stripes extracted in the step S411 and the straight lines L1 and L2; if there is no included angle with any of the straight lines L1 and L2, the stripe is considered to be parallel to the straight line L1 or L2, and the process proceeds to step S414; if included angles exist, the stripe is considered to be not parallel to the straight line L1 and the straight line L2, and the step S413 is executed;
step S413: acquiring projections of the intersection point distances on the stripes according to the included angles between the stripes and the straight lines L1 and L2 and the intersection point distances, wherein the projections comprise projection lengths and projection positions, and the projection lengths are the distances of the periodic stripes;
step S414: acquiring the periodicity information of the stripes, wherein the periodicity information comprises the projection length and the projection position of the intersection point distance on the stripes, the intersection points of the stripes on a straight line L1 or L2, and the angles of the stripes and the straight lines L1 and L2;
step S415: generating a periodic function according to the periodic line information of the stripes, and obtaining the periodic frequency of the stripes through Fourier series continuation expansion;
step S416: obtaining a spatial filter according to the periodic frequency of n items before the Fourier series;
step S417: and (5) performing convolution calculation on the cropped sub-image obtained in the step (S41) through a spatial filter to realize image de-texture operation, and ending the step.
It should be noted that the stripes referred to in this application are periodic stripes, including vertical, horizontal or oblique stripes, squares, etc.
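One piece of the de-texturing flow, the period estimate from repeated intersection distances in steps S411 to S415, can be sketched in isolation. The full method uses Hough transforms and a Fourier-series expansion, which this illustrative fragment omits; the names and tolerance are assumptions:

```python
# Illustrative fragment of steps S411-S415: if the stripe crossings along
# a reference line repeat at a nearly constant spacing, that spacing is
# taken as the stripe period. Names and the tolerance are assumptions.
def stripe_period(crossings, tolerance=1):
    gaps = [b - a for a, b in zip(crossings, crossings[1:])]
    if not gaps:
        return None
    mean_gap = sum(gaps) / len(gaps)
    # treat the stripes as periodic only if the gaps are nearly constant
    if all(abs(g - mean_gap) <= tolerance for g in gaps):
        return mean_gap
    return None

period = stripe_period([12, 32, 52, 72])   # evenly spaced crossings
```

A period estimate like this is what feeds the spatial filter construction of steps S415 to S417.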
As shown in fig. 34 to 36, the adaptive flow of the sharpness of the image in step S5 is specifically implemented by the following method:
firstly, for the image obtained in step S4, the central coordinate position of the image is obtained, and a contour line parallel to the X-axis direction is drawn through the central coordinate position, where the X-axis represents the width direction of the coiled material in the image; gray information of the pixel points on the contour line is extracted to obtain the maximum gray value Gmax, the minimum gray value Gmin and the average gray value Gmean; the division gray scale is set to n levels, of which n1 levels lie above the average gray value Gmean and n2 levels lie below it, with n1 + n2 = n; in this example n1 = n2 = n/2. The gray level difference a1 above the average gray value Gmean and the gray level difference a2 below it are obtained as:
a1=(Gmax-Gmean)/n1
a2=(Gmean-Gmin)/n2
The gray values in the image are divided according to the n levels, where the gray level difference above the average gray value Gmean is a1 and that below it is a2. The pixel points on the contour line are divided according to the n-level gray values, and image cutting is finished according to the division positions along the direction perpendicular to the contour line; a continuous region of uniform gray level on the contour line is divided into one block until pixel points of a different gray level appear, and the number of finally cut regions is denoted Nw. After image cutting is finished, the regions are encoded according to position information to obtain a two-dimensional array [Nw, n], i.e. Nw × n image sub-regions;
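The n-level division can be sketched as a per-pixel level assignment, with n1 = n2 = n/2 as in the example; the clamping of out-of-range levels is an added assumption:

```python
# Sketch of the n-level gray division above, with n1 = n2 = n // 2 as in
# the example; clamping out-of-range levels is an added assumption.
def gray_level(g, gmin, gmax, gmean, n):
    """Map a gray value g to one of n levels: levels below the mean use
    step a2 = (Gmean - Gmin) / n2, levels above use a1 = (Gmax - Gmean) / n1."""
    n1 = n2 = n // 2
    a1 = (gmax - gmean) / n1
    a2 = (gmean - gmin) / n2
    if g >= gmean:
        return n2 + min(n1 - 1, int((g - gmean) / a1))   # upper-half levels
    return min(n2 - 1, int((g - gmin) / a2))             # lower-half levels

# with Gmin = 0, Gmax = 200, Gmean = 100 and n = 4 levels:
lvl = gray_level(150, 0, 200, 100, 4)
```

Runs of contour-line pixels that map to the same level form the blocks whose boundaries define the cutting positions.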
The size of each cropped sub-image is acquired, and the gray mean, the gray variance D(x) and the gray level co-occurrence matrix of each sub-image are calculated; the gray level co-occurrence matrix GLCM for cropped sub-images whose gray level is lower than the mean gray value Gmean is expressed as:
GLCM = [p(i, j)], i, j = 1, 2, ..., n'
wherein n' = 1, 2, 3, ..., n2; the gray level co-occurrence matrix GLCM for cropped sub-images whose gray level is higher than the mean gray value Gmean is expressed as:
GLCM = [p(i, j)], i, j = 1, 2, ..., n'
wherein n' = 1, 2, 3, ..., n1; p(i, j) = counter(i = f(x, y), j = f(x, y+1)) / (a × a), which represents the probability that the adjacent gray-value pair (i, j) occurs across the entire cropped sub-image, with a = a1 if the gray value of the cropped sub-image is higher than the average gray value Gmean, and a = a2 if it is lower than the average gray value Gmean; f(x, y) represents the gray value at position (x, y) of the cropped sub-image; counter(i = f(x, y), j = f(x, y+1)) represents the number of times the adjacent gray-value pair (i, j) appears when traversing the entire cropped sub-image;
A contrast cont of the cropped sub-image is subsequently obtained from the gray level co-occurrence matrix GLCM, where
cont = Σi Σj (i - j)^2 × p(i, j)
wherein i = f(x, y) and j = f(x, y+1). The contrast of the image mainly reflects the definition of the image and the depth of the texture grooves: the higher the contrast, the clearer the image; the lower the contrast, the blurrier the image.
The sum of the squares of the elements of the gray level co-occurrence matrix GLCM for the cropped sub-images, i.e. the ASM energy, is then calculated as:
ASM = Σi Σj p(i, j)^2
wherein the energy reflects the degree of uniformity of the gray level distribution of the image; when the image is blurred, the gray distribution is more uniform and the energy value is larger; when the image is sharp, the energy value is small. The inverse difference distance Homogeneity, abbreviated to Homo, of the cropped sub-image is then calculated and expressed as:
Homo = Σi Σj p(i, j) / (1 + (i - j)^2)
Homogeneity reflects the degree of local change in the image texture. When the image is blurred, the gray distribution is more uniform and the inverse difference value is larger; when the image is sharp, the inverse difference value is small.
The correlation Corr of the grey values of the cropped sub-images in the row or column direction is then calculated,
Corr = Σi Σj (i - ui)(j - uj) × p(i, j) / (δi × δj)
ui = Σi Σj i × p(i, j)
uj = Σi Σj j × p(i, j)
δi^2 = Σi Σj (i - ui)^2 × p(i, j)
δj^2 = Σi Σj (j - uj)^2 × p(i, j)
wherein ui and uj represent average values of the gray level co-occurrence matrix in the horizontal direction and the vertical direction, respectively, and δ i and δ j represent variance values of the horizontal direction and the vertical direction. The size of the correlation reflects the similarity degree of the whole gray value of the clipped sub-image; when the image is fuzzy, the gray level change is small, the correlation is good, and the numerical value is large; when the image is clear, the gray scale changes dramatically, the correlation is poor, and the numerical value is low.
Finally, the gray variance D(x), the contrast cont, the gray energy ASM, the inverse difference distance Homo and the correlation Corr of the cropped sub-image are counted, and a weighted mean value Ambiguity is obtained,
Ambiguity = (χ × D(x) + ε × cont + η × ASM + α × Homo + β × Corr) / (χ + ε + η + α + β)
wherein χ, ε, η, α and β are set weights. The higher the weighted mean Ambiguity, the blurrier the image, and a Gaussian filter with a smaller kernel size is required. If Ambiguity < A1, a one-level Gaussian filter is selected; if A1 <= Ambiguity < A2, a two-level Gaussian filter is selected; if A2 <= Ambiguity < A3, a three-level Gaussian filter is selected; if A3 <= Ambiguity, a four-level Gaussian filter is selected. In this example A1, A2 and A3 are all set values; the kernel size of the one-level Gaussian filter is 64 × 64, that of the two-level filter is 32 × 32, that of the three-level filter is 16 × 16, and that of the four-level filter is 8 × 8. The Gaussian filter of the corresponding level is selected to complete the filtering of each cropped sub-image until all cropped sub-images have been filtered.
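The texture features above follow the standard gray level co-occurrence matrix definitions, which can be sketched on a tiny image using horizontal neighbour pairs; the gray-level quantisation, the weights, and the filter selection are omitted here:

```python
# Standard GLCM texture features on horizontal neighbour pairs
# (i, j) = (f(x, y), f(x, y + 1)) of a tiny gray image; the gray-level
# quantisation, weights and filter selection described above are omitted.
def glcm_features(img):
    pairs, total = {}, 0
    for row in img:
        for i, j in zip(row, row[1:]):
            pairs[(i, j)] = pairs.get((i, j), 0) + 1
            total += 1
    cont = asm = homo = 0.0
    for (i, j), count in pairs.items():
        p = count / total
        cont += (i - j) ** 2 * p            # contrast: sharper -> larger
        asm += p * p                        # energy: more uniform -> larger
        homo += p / (1 + (i - j) ** 2)      # inverse difference distance
    return cont, asm, homo

flat = [[1, 1, 1], [1, 1, 1]]               # perfectly uniform image
cont, asm, homo = glcm_features(flat)       # no contrast, maximal energy
```

A uniform (blurred-looking) patch gives zero contrast and maximal energy and homogeneity, matching the interpretation given in the text.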
As shown in fig. 37, the adjacent-area multi-defect processing procedure in step S6 includes the following steps:
step S61: obtaining a whole image after filtering, and filtering the image by using a Gaussian filter of one-level to four-level respectively to obtain four re-filtered images;
step S62: acquiring a pixel point set of a common dark area, a pixel point set of a very dark area, a pixel point set of a large-area dark area, a pixel point set of a bright area and a pixel point set of a hole defect area in an image;
step S63: acquiring a dark area; in this example, the dark regions include a normal dark region, a very dark region, and a large-area dark region;
step S64: acquiring a total defect area; in this example, the total defect area includes a dark area, a light area, and a hole defect area;
step S65: performing closed operation on the total defect area to communicate with the adjacent area;
step S66: calculating a connected domain of the total defect area, and separating all closed and unconnected areas;
step S67: and calculating the area size and the center point coordinate of all connected domains in the total defect area, and ending the step.
In step S62, in this example, a pixel whose gray value in the one-level filtered image is lower than (the gray value of the corresponding pixel in the three-level filtered image - the ordinary dark threshold Z1) is regarded as an ordinary dark region; a pixel whose gray value in the one-level filtered image is lower than (the gray value of the corresponding pixel in the three-level filtered image - the very dark threshold Z2) is regarded as a very dark region; a pixel whose gray value in the two-level filtered image is lower than (the gray value of the corresponding pixel in the four-level filtered image - the large-area dark threshold Z3) is regarded as a large-area dark region; a pixel whose gray value in the one-level filtered image is higher than (the gray value of the corresponding pixel in the three-level filtered image + the ordinary bright threshold) is regarded as a bright region; and pixels whose gray values before the filtering of step S61 lie in the range (250, 255) are regarded as hole defect regions.
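The per-pixel rules of step S62 can be sketched as a single classifier with assumed names; the large-area dark rule (two- versus four-level filtered images, threshold Z3) is omitted for brevity, and the half-open hole range is an assumption:

```python
# Sketch of the per-pixel rules of step S62 with assumed names; the
# large-area dark rule (two- vs four-level filtered images, threshold Z3)
# is omitted for brevity, and the half-open hole range is an assumption.
def classify_pixel(g1, g3, z1, z2, bright_t, raw):
    """g1, g3: gray value in the one-/three-level filtered images;
    raw: gray value before filtering (used for hole defects)."""
    if 250 < raw <= 255:
        return "hole"                  # hole defect region, range (250, 255)
    if g1 < g3 - z2:
        return "very_dark"             # very dark threshold Z2
    if g1 < g3 - z1:
        return "dark"                  # ordinary dark threshold Z1
    if g1 > g3 + bright_t:
        return "bright"                # ordinary bright threshold
    return "normal"

kind = classify_pixel(g1=60, g3=120, z1=30, z2=50, bright_t=40, raw=60)
```

Comparing a finely filtered image against a coarsely filtered one, as here, makes each threshold a deviation from the local background rather than an absolute gray value.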
As shown in fig. 38, the defective output priority in step S7 is determined by the size of the connected component area, and in this example, the larger the connected component area, the higher the defective output priority. Before determining the output priority in step S7, it is necessary to screen the defective connected components, where the screening condition is that the area of the connected component is greater than the set "upper limit threshold of defective area".
In step S8, the flaw information includes the flaw area, the length and width of the flaw's minimum bounding rectangle, the aspect ratio, the flaw line width, the flaw line width-to-length ratio, the flaw dark-area ratio, the flaw bright-area ratio and the flaw hole-area ratio; where flaw line width = flaw area / length of the flaw's minimum bounding rectangle; flaw line width-to-length ratio = flaw line width / length of the minimum bounding rectangle; flaw dark-area ratio = (intersection of the dark flaw region and the current flaw region) / current flaw region; flaw bright-area ratio = (intersection of the bright flaw region and the current flaw region) / current flaw region; flaw hole-area ratio = 1 - bright-area ratio - dark-area ratio. In this example, if the flaw information satisfies the threshold range, the flaw information is output. The flaw information also includes the flaw type, the flaw center-point coordinates, and the like. The flaw types comprise black dots, white dots and holes, which are judged through the gray values of the pixel points, and each type of flaw is divided into three grades (large, medium and small) according to the flaw area; in this example, pixels determined to be in the very dark region in the adjacent-area multi-defect processing flow of step S6 are considered black dots, pixels determined to be in the bright region are considered white dots, and pixels determined to be in the hole defect region are considered holes.
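The ratio definitions above can be sketched directly; the argument names are illustrative, and the areas are pixel counts:

```python
# Direct sketch of the ratio definitions in step S8 (areas are pixel
# counts; rect_len is the flaw's minimum bounding rectangle length;
# argument names are illustrative).
def flaw_ratios(area, rect_len, dark_overlap, bright_overlap):
    line_width = area / rect_len                 # flaw line width
    line_ratio = line_width / rect_len           # flaw line width-to-length
    dark_ratio = dark_overlap / area             # dark-region overlap share
    bright_ratio = bright_overlap / area         # bright-region overlap share
    hole_ratio = 1 - bright_ratio - dark_ratio   # remainder is hole area
    return line_width, line_ratio, dark_ratio, bright_ratio, hole_ratio

vals = flaw_ratios(area=200, rect_len=20, dark_overlap=120, bright_overlap=40)
```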
In the method, abnormal gray regions in the image are extracted, mainly regions whose gray level deviates from the image background, and different flaws are distinguished after morphological feature judgment, realizing identification and classification of flaw types. Gray-scale change information along a transverse contour line of the image is acquired, and segmented sharpness-adaptive detection is performed on the image, so that a single wide image achieves a better filtering effect and the filtered images are uniformly clear. Different flaws are finely classified by judging shape, contour, and the correlated positions of bright and dark regions; connected-domain processing of flaws based on a clustering algorithm allows flaws that satisfy the condition to be connected, so that multiple adjacent defects are output at one concentrated position, which reduces the marking output, helps improve the efficiency of the algorithm, and ensures that small, dense flaws can still be found.
Example two:
as shown in fig. 39 and 40, this embodiment is obtained by modifying the first embodiment, wherein the flaw types in the flaw information output in step S8 are obtained by a flaw identification feature algorithm, and the flaw types include black dots, bubbles, insect spots, crystal dots and lint. In this example, a flaw characterized as fully black is identified as a black dot and further graded as large, medium or small according to its area; a flaw characterized by concentric black and white circles is identified as a bubble; a flaw characterized by a serrated black arrangement is identified as an insect spot; a flaw characterized by a black area adjacent to a white area is identified as a crystal dot; and a flaw characterized by a black line is identified as lint. The flaw identification feature algorithm comprises the following steps:
step S81: acquiring an image, and detecting a dark area in the image; wherein the dark area is an area where the image gray value is lower than a set threshold value Y3;
step S82: judging whether the number of dark areas in the image is one or not; if there is only one dark area, go to step S83; if the number of dark regions is 0 or more than one, the process proceeds to step S84;
step S83: the number of the dark areas is only one, and the outline of the dark areas is further judged to be approximate to a solid circle, approximate to a solid rectangle or other shapes; if the outline of the dark area is approximately circular, judging that the flaw is a large black spot, and ending the step; if the dark area is approximately rectangular in outline, judging that the defect is plush, and ending the step; if the outline of the dark area is other shape, go to step S84;
step S84: detecting a bright area in the image; judging whether the number of the bright areas and the dark areas is 0 or not, and if so, determining that the image is flawless; otherwise, the process goes to step S85; wherein the bright area is an area with an image gray value higher than a set threshold value Y4;
step S85: judging whether the edge of the bright area is surrounded by the dark area; if the edge of the bright area is surrounded by the dark area, the flaw is considered as a bubble, and the step is finished; if the bright area edge is not surrounded by the dark area, the process proceeds to step S86:
step S86: if the edge of the bright area is not surrounded by the dark area, counting the number of the bright area and the dark area, and acquiring barycentric coordinates of all the bright areas and the dark areas in the image;
step S87: sorting the barycentric coordinates of the bright area and the dark area according to the size of the abscissa, and judging whether the barycentric coordinates can be fitted into a straight line; if the barycentric coordinates can be fitted to a straight line, the process proceeds to step S88; otherwise, the process proceeds to step S89;
step S88: if the barycentric coordinates can be fitted into a straight line, further judging whether the bright area and the dark area appear alternately; if the bright area and the dark area appear alternately, judging the flaw as the insect spot, and ending the step; if the bright area and the dark area do not appear alternately, the process proceeds to step S89;
step S89: acquiring a total area framework of a bright area and a dark area, and judging whether the shape of the framework is in an arrow arrangement shape; if the area skeleton is in an arrow arrangement shape, the flaw is regarded as a crystal point, and the step is finished; otherwise, go to step S810;
step S810: considering the flaws as others, acquiring the gravity center distance of adjacent regions, aggregating the regions, and calculating the aggregated regions and areas; wherein the regions include light regions and dark regions.
The dark region in step S81 and the bright region in step S84 are obtained by the adjacent-area multi-defect processing flow of step S6 in the first embodiment.
In the process of counting the bright areas and the dark areas in step S86, the bright areas and the dark areas are respectively marked with 1 and 0, and the area of each area and the minimum circumscribed circle of each area are counted.
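The decision order of steps S81 to S89 can be compressed into a sketch in which the geometric tests (circle fit, rectangle fit, line fit of the barycenters, arrow-shaped skeleton) are reduced to boolean flags supplied by the caller; all names are illustrative:

```python
# Compressed sketch of the identification order in steps S81-S89: the
# geometric tests (circle/rectangle fits, line fit of barycenters,
# arrow-shaped skeleton) are reduced to boolean flags supplied by the
# caller, and all names are illustrative.
def classify_flaw(n_dark, n_bright, dark_shape=None,
                  bright_inside_dark=False, centers_on_line=False,
                  alternating=False, arrow_skeleton=False):
    if n_dark == 1 and n_bright == 0:           # step S83
        if dark_shape == "circle":
            return "black_dot"
        if dark_shape == "rectangle":
            return "lint"
    if n_dark == 0 and n_bright == 0:
        return "no_flaw"                        # step S84
    if bright_inside_dark:
        return "bubble"                         # step S85
    if centers_on_line and alternating:
        return "insect_spot"                    # steps S87-S88
    if arrow_skeleton:
        return "crystal_dot"                    # step S89
    return "other"                              # step S810

kind = classify_flaw(1, 0, dark_shape="circle")
```

The ordering matters: a single solid dark region is resolved before any bright-region test, just as step S83 runs before the bright-area detection of step S84.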
The above description is only one specific embodiment of the present invention and should not be construed as limiting it in any way. It will be apparent to persons skilled in the relevant art that, with the benefit of this disclosure, various modifications and changes in form and detail can be made without departing from the principles and structure of the invention; such modifications are nevertheless encompassed by the appended claims.

Claims (8)

1. The coiled material detection device is characterized by comprising an operation table, a support frame, a sliding rod, a camera module and a light source module; the number of the support frames is two, and the two support frames are symmetrically arranged on two sides of the assembly line; the sliding frame is arranged on the support frame in a sliding mode and can slide up and down along the support frame; the sliding rods are hinged on the sliding frames of the two supporting frames; the camera module is arranged on the sliding rod in a sliding manner through the horizontal adjusting device; the light source module is arranged on the support frame and is respectively positioned above and below the coiled material on the production line; the operating platform is arranged on the ground and is in communication connection with the camera module and the light source module; the console includes a display and a processor.
2. A definition adaptive coiled material detection method is characterized by comprising the following steps:
step S1: the operation desk acquires images acquired by all the line scanning cameras and counts the number of the images;
step S2: judging whether the acquired image is a multi-channel image; if the image is a multi-channel image, converting the image into a gray image, and entering step S3; otherwise, directly entering step S3;
step S3: sequentially selecting images, calculating the size of the images, finishing the cutting of each group of images according to left and right undetected areas and acquiring the gray value of the images; each group of images represent images collected by different line scanning cameras at the same moment;
step S4: judging whether periodic stripes exist according to an overall evaluation of the image background; if periodic stripes exist, the material is a textured material and a de-texturing step is required, and the next step is entered after de-texturing is finished; otherwise, directly entering the next step;
step S5: completing a definition self-adaption process of the image according to the gray value of the image, judging the corresponding filtering stage number, completing image filtering, and obtaining a total defect area in the image;
step S6: performing adjacent-area multi-defect processing on the total defect region based on a clustering method, and connecting defect regions that meet the requirements;
step S7: obtaining a flaw output priority according to the area of each flaw region or flaw connected region; sorting the flaws according to flaw output priority;
step S8: acquiring flaw information, and sequentially judging whether the flaw information meets the requirement of a set threshold range according to a flaw output priority sequence; if the flaw information meets the requirement of a set threshold, sequentially putting the flaw information into an output queue;
step S9: judging whether the flaw information in the output queue exceeds the set output quantity upper limit of the flaw information or not; if the set output quantity upper limit of the flaw information is exceeded, outputting the flaw information of the set output quantity according to the flaw output priority sequence, and ending the step; otherwise, outputting all the flaw information in the queue, and ending the step.
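Steps S7-S9 describe a rank-filter-truncate pipeline for flaw output. A minimal sketch in Python (illustrative only; the dictionary keys, thresholds and the function name `select_flaws` are invented, not taken from the claims):

```python
# Hedged sketch of steps S7-S9: rank flaws by connected-region area
# (S7), keep those inside the configured threshold range (S8), and cap
# the reported count at the output limit (S9).
def select_flaws(flaws, min_area, max_area, max_output):
    ranked = sorted(flaws, key=lambda f: f["area"], reverse=True)        # S7
    queue = [f for f in ranked if min_area <= f["area"] <= max_area]     # S8
    return queue[:max_output]                                            # S9

flaws = [{"id": 1, "area": 50}, {"id": 2, "area": 500},
         {"id": 3, "area": 120}, {"id": 4, "area": 8}]
print(select_flaws(flaws, min_area=10, max_area=1000, max_output=2))
```

Larger area maps to higher priority here; the claims leave the priority rule open, so any monotone scoring function could replace the `area` key.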
3. The definition self-adaptive coiled material detection method according to claim 2, wherein the process of completing the image cutting and obtaining the gray values in step S3 comprises the following steps:
step S31: acquiring a group of images, and judging whether the group contains one image or more than one; if the number of images is one, proceeding to step S32; if more than one, proceeding to step S33;
step S32: if the group of images only contains one image, judging the width of a left clipping area and a right clipping area and the relation between the width of the left clipping area and the width of the right clipping area and the width of the image, wherein the left clipping area and the right clipping area are obtained through a left non-detection area and a right non-detection area; if the sum of the widths of the left and right trimming areas is greater than the image width, the process proceeds to step S35; if the sum of the widths of the left cut region and the right cut region is equal to or less than the image width, the process proceeds to step S36;
step S33: if the number of the images contained in the group of images is more than one, judging the relation between the width of the first image and the width of the left cutting area; if the width of the first image is smaller than the width of the left cutting area, the step S35 is executed; otherwise, the process goes to step S34;
step S34: judging the relation between the width of the last image and the width of the right cutting area; if the width of the last image is smaller than the width of the right cutting area, the step S35 is executed; otherwise, the process goes to step S36;
step S35: if the image cutting area is too large, judging that the image detection is abnormal, ending the step and ending the detection algorithm;
step S36: sequentially acquiring one image in the group of images, and judging whether the image is the first image of the group of images; if the first image is the set of images, the process proceeds to step S37; if not, the process proceeds to step S38;
step S37: if the image is the first image of the group of images, judging whether the group of images only has one image; if only one image exists, acquiring left and right cutting areas of the image according to the left and right undetected areas, finishing the cutting of the image, and entering the step S310; if not, calculating a left clipping area of the image, completing clipping of the image, and entering step S310;
step S38: if the image is not the first image of the group of images, judging whether the image is the last image of the group of images; if the last image of the group of images, go to step S39; if the image is not the last image of the group of images, directly entering step S310;
step S39: if the image is the last image of the group of images and is not the first image, calculating a right cutting area of the image, finishing the cutting of the image, and entering step S310;
step S310: calculating the gray value of the obtained image, and ending the step.
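The per-group cropping rules of steps S31-S39 reduce to: crop the left undetected area from the first image, the right undetected area from the last image, and abort when a cut region exceeds the image width. A sketch under those assumptions (illustrative Python; `crop_group` and its error handling are invented names, not claim language):

```python
import numpy as np

def crop_group(images, left_cut, right_cut):
    widths = [img.shape[1] for img in images]
    # Single image (S32/S37): both cuts apply to the same image.
    if len(images) == 1:
        if left_cut + right_cut > widths[0]:           # S35: abnormal
            raise ValueError("detection abnormal: cut area too large")
        return [images[0][:, left_cut:widths[0] - right_cut]]
    # Multiple images (S33/S34): cuts apply to first and last only.
    if widths[0] < left_cut or widths[-1] < right_cut:  # S35
        raise ValueError("detection abnormal: cut area too large")
    out = [images[0][:, left_cut:]]                     # S37: left cut
    out += images[1:-1]                                 # middle images untouched
    out.append(images[-1][:, :widths[-1] - right_cut])  # S39: right cut
    return out

a, b = np.zeros((4, 8)), np.ones((4, 8))
print([im.shape for im in crop_group([a, b], left_cut=3, right_cut=2)])
# → [(4, 5), (4, 6)]
```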
4. The definition self-adaptive coiled material detection method according to claim 2, wherein the de-texturing step of step S4 comprises:
step S41: obtaining the cut image, and calculating the Width and Height of the image;
step S42: extracting a sub-image of size (1/2 Width) × (1/2 Height) at a random position in the image;
step S43: placing two mutually perpendicular straight lines L1 and L2 at random positions in the sub-image;
step S44: performing bilateral filtering on the sub-image to remove sharp noise while preserving edges, performing edge enhancement, and calculating second-derivative images of the sub-image in the width and height directions respectively;
step S45: acquiring linear regions in the second-derivative images according to the second-derivative images of the sub-image in the width and height directions;
step S46: extracting linear-region skeletons from the second-derivative images according to the linear regions and the light-to-dark polarity change, and converting the linear objects to obtain stripes;
step S47: calculating Hough transform values of all the extracted straight lines;
step S48: merging stripes that lie on the same linear-region skeleton, have the same angle, and are shorter than the set pixel length L3;
step S49: after the stripes are merged, eliminating stripes shorter than the set pixel length L4;
step S410: acquiring residual stripes, and screening out the stripes intersected with the straight line L1 or the straight line L2;
step S411: extracting stripes with repeated intersection point distance and included angle according to intersection points of the stripes and the straight line L1; extracting stripes with repeated intersection point distance and included angle according to intersection points of the stripes and the straight line L2;
step S412: judging whether the stripes extracted in step S411 form included angles with the straight lines L1 and L2; if a stripe forms no included angle with one of the lines, the stripe is considered parallel to that line (L1 or L2), and the process proceeds to step S414; if included angles exist, the stripe is considered parallel to neither L1 nor L2, and the process proceeds to step S413;
step S413: acquiring the projection of the intersection-point distances on the stripes according to the included angles between the stripes and the lines L1 and L2 and the intersection-point distances, the projection comprising the projection length and projection position, the projection length being the period of the periodic stripes;
step S414: acquiring the periodicity information of the stripes, the periodicity information comprising the projection length and position of the intersection-point distances on the stripes, the intersection points of the stripes on line L1 or L2, and the angles between the stripes and the lines L1 and L2;
step S415: generating a periodic function from the periodicity information of the stripes, and obtaining the period frequencies of the stripes through Fourier-series extension and expansion;
step S416: obtaining a specific spatial filter according to the first n period frequencies of the Fourier series;
step S417: performing convolution calculation on the cropped image obtained in step S41 with the specific spatial filter to realize the image de-texturing operation, and ending the step.
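Steps S415-S417 boil down to: estimate the stripe period, build a matched filter, and convolve the stripes away. The claim derives the period geometrically from line intersections; the sketch below takes an illustrative shortcut and notches the dominant frequency directly in the Fourier domain (all names are invented, and this is a stand-in for, not a reproduction of, the claimed geometric procedure):

```python
import numpy as np

def remove_vertical_stripes(img, n_peaks=1):
    # Estimate and suppress the strongest periodic component along the
    # image width; equivalent in spirit to filtering with a matched
    # spatial filter built from the stripe frequency (S416-S417).
    f = np.fft.fft(img, axis=1)            # spectrum along the width
    mag = np.abs(f).mean(axis=0)           # average column spectrum
    half = mag[1:len(mag) // 2].copy()     # skip the DC component
    for _ in range(n_peaks):
        k = int(np.argmax(half)) + 1       # strongest non-DC bin
        f[:, k] = 0
        f[:, -k] = 0                       # its conjugate-symmetric bin
        half[k - 1] = 0
    return np.real(np.fft.ifft(f, axis=1))

rows, cols = 32, 64
x = np.arange(cols)
img = 100 + np.tile(10 * np.sin(2 * np.pi * 8 * x / cols), (rows, 1))
clean = remove_vertical_stripes(img)   # stripes removed, flat background kept
```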
5. The definition self-adaptive coiled material detection method according to claim 2, wherein the sharpness-adaptive process of the image in step S5 comprises cropping the image according to the gray-value distribution and acquiring, for each cropped sub-image, the gray variance D(x), the contrast cont, the gray energy ASM, the inverse difference Homo and the correlation Corr;
the contrast cont represents the sharpness of the image and the depth of the texture grooves; the higher the contrast, the clearer the image; the lower the contrast, the blurrier the image;
the gray energy ASM reflects the uniformity of the gray distribution of the image; when the image is blurred, the gray distribution is more uniform and the energy value is larger; when the image is clear, the energy value is smaller;
the inverse difference Homo reflects the degree of local variation of the image texture; when the image is blurred, the gray distribution is more uniform and the inverse difference value is larger; when the image is clear, the inverse difference value is smaller;
the correlation Corr reflects the degree of similarity of the overall gray values of the cropped sub-images; when the image is blurred, the gray variation is small, the correlation is good and the value is large; when the image is clear, the gray varies sharply, the correlation is poor and the value is low;
counting the gray variance D(x), the contrast cont, the gray energy ASM, the inverse difference Homo and the correlation Corr of the cropped sub-images, and acquiring the weighted mean Ambiguity:

Ambiguity = χ·D(x) + ε·cont + η·ASM + α·Homo + β·Corr

wherein χ, ε, η, α and β are set weights; the higher the weighted mean Ambiguity, the more blurred the image, and the smaller the kernel size of the Gaussian filter that should be used.
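The five statistics of claim 5 are standard grey-level co-occurrence matrix (GLCM) features. A sketch in plain NumPy (the quantisation level, the horizontal-neighbour offset, and the inversion of variance and contrast so the score rises with blur are all assumptions of this sketch; the claim leaves the weights χ…β unspecified):

```python
import numpy as np

def glcm_features(img, levels=16):
    # Quantise grey levels and count horizontal-neighbour co-occurrences.
    q = (img.astype(float) / 256 * levels).astype(int)
    p = np.zeros((levels, levels))
    np.add.at(p, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p /= p.sum()
    i, j = np.indices((levels, levels))
    cont = ((i - j) ** 2 * p).sum()               # contrast
    asm = (p ** 2).sum()                          # grey energy (ASM)
    homo = (p / (1 + (i - j) ** 2)).sum()         # inverse difference
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    corr = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12)
    return cont, asm, homo, corr

def ambiguity(img, chi=1, eps=1, eta=1, alpha=1, beta=1):
    # Blurred images: low variance/contrast, high ASM/homogeneity/correlation,
    # so variance and contrast are inverted to make the score rise with blur.
    var = img.astype(float).var()
    cont, asm, homo, corr = glcm_features(img)
    return (chi / (1 + var) + eps / (1 + cont)
            + eta * asm + alpha * homo + beta * corr) / 5
```

With equal weights a flat (blurred) patch scores about 0.8 while a noisy (sharp) patch scores near zero, so a threshold on the score can select the Gaussian-filter stage.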
6. The definition self-adaptive coiled material detection method according to claim 2, wherein the adjacent multi-defect processing procedure of step S6 comprises the following steps:
step S61: obtaining the filtered whole image, and filtering it with first-level to fourth-level Gaussian filters respectively to obtain four re-filtered images; the kernel size of the first-level Gaussian filter is 64 x 64, of the second-level 32 x 32, of the third-level 16 x 16, and of the fourth-level 8 x 8;
step S62: acquiring a pixel point set of a common dark area, a pixel point set of a very dark area, a pixel point set of a large-area dark area, a pixel point set of a bright area and a pixel point set of a hole defect area in an image;
step S63: acquiring a dark area; the dark area comprises a common dark area, a very dark area and a large-area dark area;
step S64: acquiring a total defect area; the total defect area comprises a dark area, a bright area and a hole defect area;
step S65: performing a closing operation on the total defect area to connect adjacent areas;
step S66: calculating a connected domain of the total defect area, and separating all closed and unconnected areas;
step S67: and calculating the area size and the center point coordinate of all connected domains in the total defect area, and ending the step.
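Steps S61-S67 can be approximated with scipy.ndimage: blur at several scales (illustrative stand-ins for the claimed 64/32/16/8 kernels), threshold dark and bright pixels into a total defect mask, close the mask, then label connected regions. Thresholds, sigmas and names below are assumptions, not the claimed parameters:

```python
import numpy as np
from scipy import ndimage

def defect_regions(img, dark_thr=80, bright_thr=180):
    # S61: filter the image at four scales (coarse to fine).
    blurred = [ndimage.gaussian_filter(img.astype(float), s) for s in (8, 4, 2, 1)]
    dark = np.zeros(img.shape, bool)
    bright = np.zeros(img.shape, bool)
    for b in blurred:                      # S62-S64: build the total mask
        dark |= b < dark_thr
        bright |= b > bright_thr
    # S65: closing connects neighbouring defect pixels.
    total = ndimage.binary_closing(dark | bright, structure=np.ones((3, 3)))
    # S66-S67: label connected domains, report areas and centre points.
    labels, n = ndimage.label(total)
    idx = range(1, n + 1)
    areas = ndimage.sum(np.ones(img.shape), labels, idx)
    centers = ndimage.center_of_mass(total, labels, idx)
    return labels, list(areas), centers
```

A Gaussian sigma is not a box-kernel width, so the (8, 4, 2, 1) scales only mimic the coarse-to-fine intent of the claimed filter bank.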
7. The definition self-adaptive coiled material detection method according to claim 2, wherein the flaw information in step S8 includes the flaw category; the flaw categories include black spots, bubbles, insect spots, crystal points and lint; the flaw category is obtained through a flaw identification feature algorithm comprising the following steps:
step S81: acquiring an image, and detecting a dark area in the image; wherein the dark area is an area where the image gray value is lower than a set threshold value Y3;
step S82: judging whether the number of dark areas in the image is one or not; if there is only one dark area, go to step S83; if the number of dark regions is 0 or more than one, the process proceeds to step S84;
step S83: there is only one dark area; further judging whether the outline of the dark area approximates a solid circle, a solid rectangle, or another shape; if the outline is approximately circular, judging the flaw to be a large black spot, and ending the step; if the outline is approximately rectangular, judging the flaw to be lint, and ending the step; if the outline is another shape, proceeding to step S84;
step S84: detecting a bright area in the image; judging whether the number of the bright areas and the dark areas is 0 or not, and if so, determining that the image is flawless; otherwise, the process goes to step S85; wherein the bright area is an area with an image gray value higher than a set threshold value Y4;
step S85: judging whether the edge of the bright area is surrounded by a dark area; if so, the flaw is considered a bubble, and the step ends; if the bright area edge is not surrounded by a dark area, proceeding to step S86;
step S86: if the edge of the bright area is not surrounded by the dark area, counting the number of the bright area and the dark area, and acquiring barycentric coordinates of all the bright areas and the dark areas in the image;
step S87: sorting the barycentric coordinates of the bright area and the dark area according to the size of the abscissa, and judging whether the barycentric coordinates can be fitted into a straight line; if the barycentric coordinates can be fitted to a straight line, the process proceeds to step S88; otherwise, the process proceeds to step S89;
step S88: if the barycentric coordinates can be fitted into a straight line, further judging whether the bright area and the dark area appear alternately; if the bright area and the dark area appear alternately, judging the flaw as the insect spot, and ending the step; if the bright area and the dark area do not appear alternately, the process proceeds to step S89;
step S89: acquiring a total area framework of a bright area and a dark area, and judging whether the shape of the framework is in an arrow arrangement shape; if the area skeleton is in an arrow arrangement shape, the flaw is regarded as a crystal point, and the step is finished; otherwise, go to step S810;
step S810: classifying the flaw as "other", acquiring the gravity-center distances between adjacent regions, aggregating the regions, and calculating the aggregated regions and their areas; wherein the regions include bright areas and dark areas.
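The shape tests of step S83 (solid circle → black spot, solid rectangle → lint) can be sketched by comparing a region's pixel area with the area of ideal shapes fitted to its bounding box. The fill-ratio thresholds below are invented for illustration:

```python
import numpy as np

def classify_dark_region(mask):
    # Bounding box of the region.
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    area = mask.sum()
    rect_fill = area / (h * w)                         # 1.0 for a solid rectangle
    circle_fill = area / (np.pi * (h / 2) * (w / 2))   # ~1.0 for a solid ellipse
    if abs(circle_fill - 1.0) < 0.15:
        return "black spot"
    if rect_fill > 0.95:
        return "lint"
    return "other"
```

The thresholds 0.15 and 0.95 are illustrative; a production classifier would tune them on labelled flaw samples.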
8. A coiled material detection system, characterized in that the detection system is based on the detection method according to any one of claims 2-7, the detection system operating according to the following steps:
step 1: the operation panel receives a starting-up instruction, and the processor starts a starting-up initialization process; after the startup initialization process is completed, the display displays a startup interface; the starting interface comprises a system entering button and a system exiting button which respectively correspond to a system entering instruction and a system exiting instruction;
step 2: the operation console receives an operation instruction from the starting interface and judges whether it is an "exit system" instruction or an "enter system" instruction; if it is an "exit system" instruction, the operation console shuts down and the step ends; if it is an "enter system" instruction, the display jumps from the starting interface to the flaw detection interface, and a plurality of line-scan camera acquisition threads are started; the flaw detection interface comprises a clearing button, a detection button, a pause button, a history volume button, a roll change button, a setting button, a forward button, a backward button and an exit button, corresponding respectively to the "clearing", "detection", "pause", "history volume", "roll change", "setting", "forward", "backward" and "exit" instructions; the flaw detection interface further comprises an image display area, which displays the images acquired by the camera module in real time; a detection area slide bar is arranged in the image display area and corresponds to the "detection area division" instruction;
step 3: the operation console judges whether an operation instruction of the flaw detection interface is received; if an operation instruction is received, proceeding to step 4; otherwise, repeating step 3;
step 4: judging the type of the operation instruction; if it is a "clearing" instruction, proceeding to step 5; if a "detection/pause" instruction, proceeding to step 6; if a "history volume" instruction, proceeding to step 7; if a "roll change" instruction, proceeding to step 8; if a "setting" instruction, proceeding to step 9; if a "forward" or "backward" instruction, proceeding to step 10; if an "exit" instruction, proceeding to step 11; if a "detection area division" instruction, proceeding to step 12;
step 5: the console receives the "clearing" instruction and prompts the user whether to clear the history information; if a confirmation instruction is received, the history information is cleared; if a negative instruction is received, returning to step 3;
step 6: the console receives the detection/pause instruction and judges whether the detection/pause instruction is the detection instruction or the pause instruction; if the command is a detection command, entering a real-time detection process until a pause command is received, and returning to the step 3; if the command is a pause command, ending the real-time detection process and returning to the step 3; it should be noted that, when entering the flaw detection interface for the first time after starting up, a "detection" button is displayed, corresponding to a "detection" instruction, and the "detection" button is switched to a "pause" button after being clicked, and the "pause" button corresponds to a "pause" instruction;
step 7: the operation console receives the "history volume" instruction, controls the display to enter the history volume interface, and starts the history volume interface flow; after the flow is finished, returning to step 3;
step 8: the operation console receives the "roll change" instruction and judges whether the real-time detection process is running; if it is running, the display prompts "please pause the real-time detection first" and the flow returns to step 3; if it is not running, updating the volume number and volume length information according to the input, releasing the display image resources, writing the corresponding data into the database, and returning to step 3;
step 9: the operation console receives the "setting" instruction, and creates and enters the setting interface; after exiting the setting interface, returning to step 3;
step 10: the operating platform receives the forward command or the backward command and judges whether the command is the forward command or the backward command; if the instruction is a 'forward' instruction, further judging whether the instruction is the last page of the current image, if so, prompting and returning to the step 3, and if not, advancing one page and returning to the step 3; if the image is a 'back' instruction, further judging whether the image is the first page of the current image, if the image is the first page, prompting and returning to the step 3, and if the image is not the first page, backing one page and returning to the step 3;
step 11: the console receives the exit command and prompts the user whether to confirm exiting the system; if the system is confirmed to be exited, the step is ended; if not, returning to the step 3;
step 12: when the console receives the "detection area division" instruction, correspondingly modifying the values of the left and right undetected areas of the image, storing the modified values into the system parameters, and returning to step 3.
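Steps 3-12 form a command-dispatch loop. A toy sketch of two of its rules, the detect/pause toggle (step 6) and the roll-change guard (step 8), in Python; the instruction strings, messages and function name are invented, not taken from the claim:

```python
# Hedged sketch of the claim-8 event loop: each console instruction is
# dispatched to a handler until "exit" is received.
def run_console(commands):
    log = []
    detecting = False
    for cmd in commands:
        if cmd == "detect/pause":
            detecting = not detecting          # step 6: toggle detection
            log.append("detecting" if detecting else "paused")
        elif cmd == "change roll":
            # step 8: a roll change is refused while detection is running
            log.append("please pause first" if detecting else "roll changed")
        elif cmd == "exit":
            log.append("exit")                 # step 11: leave the loop
            break
        else:
            log.append(f"unhandled: {cmd}")
    return log

print(run_console(["detect/pause", "change roll", "detect/pause",
                   "change roll", "exit"]))
```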
CN202111480868.2A 2021-12-06 2021-12-06 Definition self-adaptive coiled material detection method, device and system Active CN114264661B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111480868.2A CN114264661B (en) 2021-12-06 2021-12-06 Definition self-adaptive coiled material detection method, device and system
CN202410586785.9A CN118671067A (en) 2021-12-06 2021-12-06 Image definition self-adaption method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111480868.2A CN114264661B (en) 2021-12-06 2021-12-06 Definition self-adaptive coiled material detection method, device and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410586785.9A Division CN118671067A (en) 2021-12-06 2021-12-06 Image definition self-adaption method

Publications (2)

Publication Number Publication Date
CN114264661A true CN114264661A (en) 2022-04-01
CN114264661B CN114264661B (en) 2024-05-31

Family

ID=80826387

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410586785.9A Pending CN118671067A (en) 2021-12-06 2021-12-06 Image definition self-adaption method
CN202111480868.2A Active CN114264661B (en) 2021-12-06 2021-12-06 Definition self-adaptive coiled material detection method, device and system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202410586785.9A Pending CN118671067A (en) 2021-12-06 2021-12-06 Image definition self-adaption method

Country Status (1)

Country Link
CN (2) CN118671067A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082445A (en) * 2022-07-25 2022-09-20 山东鲁泰防水科技有限公司 Method for detecting surface defects of building waterproof roll
WO2023102952A1 (en) * 2021-12-06 2023-06-15 浙江大学台州研究院 Coiled material detection device, system and method capable of achieving real-time detection
WO2024012438A1 (en) * 2022-07-12 2024-01-18 厦门兴全龙机械有限公司 Grey fabric detecting device and detecting method suitable for open-width fabric winding machine

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070076027A1 (en) * 2005-10-05 2007-04-05 Fuji Xerox Co., Ltd. Method for driving a liquid droplet ejecting head and liquid droplet ejecting device
CN101383479A (en) * 2008-09-28 2009-03-11 中国科学院上海光学精密机械研究所 Two-dimensional fiber laser array phase locking and aperture filling device
JP2010226259A (en) * 2009-03-19 2010-10-07 Rhythm Watch Co Ltd Detection system and signal processing method thereof
CN102590218A (en) * 2012-01-16 2012-07-18 安徽中科智能高技术有限责任公司 Device and method for detecting micro defects on bright and clean surface of metal part based on machine vision
US20150355104A1 (en) * 2014-06-09 2015-12-10 Keyence Corporation Inspection Apparatus, Inspection Method, And Program
CN105631854A (en) * 2015-12-16 2016-06-01 天津天地伟业数码科技有限公司 FPGA platform-based self-adaptive image definition evaluation algorithm
US20160313866A1 (en) * 2015-04-21 2016-10-27 Dell Products L.P. Managing Inputs at an Information Handling System by Adaptive Infrared Illumination and Detection
CN106251332A (en) * 2016-07-17 2016-12-21 西安电子科技大学 SAR image airport target detection method based on edge feature
EP3260505A1 (en) * 2016-06-22 2017-12-27 Agfa Nv Methods of manufacturing packaging for food, cosmetics and pharma
CN107894252A (en) * 2017-11-14 2018-04-10 江苏科沃纺织有限公司 It is a kind of to monitor the buried telescopic monitoring system for being sprayed filling device running status in real time
CN108259753A (en) * 2018-02-28 2018-07-06 中国航空工业集团公司洛阳电光设备研究所 A kind of camera auto-focusing method and device that climbing method is improved based on defocus estimation
CN108305234A (en) * 2018-01-17 2018-07-20 华侨大学 A kind of Double-histogram equalization methods based on optimal model
DE102017102664A1 (en) * 2017-02-10 2018-08-16 Retzlaff Schweißtechnik UG (haftungsbeschränkt) Method for underwater repair of a steel structure
CN109685766A (en) * 2018-11-23 2019-04-26 江苏大学 A kind of Fabric Defect detection method based on region fusion feature
CN111209876A (en) * 2020-01-10 2020-05-29 汕头大学 Oil leakage defect detection method and system
CN111707675A (en) * 2020-06-11 2020-09-25 圣山集团有限公司 Cloth surface flaw on-line monitoring device and monitoring method thereof
CN112330599A (en) * 2020-10-15 2021-02-05 浙江大学台州研究院 Size measurement scoring device, adjusting method and scoring method
CN113420810A (en) * 2021-06-22 2021-09-21 中国民航大学 Cable trench intelligent inspection system and method based on infrared and visible light
CN113567447A (en) * 2019-08-07 2021-10-29 浙江大学台州研究院 Synthetic leather hemming online detection method

Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
CAI NING: "Optimized dithering technique in frequency domain for high-quality three-dimensional depth data acquisition", Chinese Physics B, vol. 28, no. 8, 4 September 2019 (2019-09-04), pages 1 - 11 *
EIGENWILLIG CHRISTOPH M.: "K-space linear Fourier domain mode locked laser and applications for optical coherence tomography", Optics Express, vol. 16, no. 12, 9 June 2008 (2008-06-09), pages 8916 - 8937, XP002614206, DOI: 10.1364/OE.16.008916 *
SUTISNA D: "Flaw Detection in Welded Metal Using Magnetic Induction Tomography", Advanced Materials Research, vol. 896, 28 August 2014 (2014-08-28), pages 722 - 725 *
ZHANG Yazhou: "Surface defect detection method for light guide plates based on improved coherence-enhancing diffusion, texture energy measure and Gaussian mixture model", Journal of Computer Applications, no. 05, 31 December 2020 (2020-12-31), pages 309 - 316 *
ZHANG Yazhou; LU Xianling: "Surface defect detection method for light guide plates based on improved coherence-enhancing diffusion, texture energy measure and Gaussian mixture model", Journal of Computer Applications, no. 05, 31 December 2020 (2020-12-31), pages 309 - 316 *
ZHANG Tian: "Adaptive threshold image segmentation method based on sharpness evaluation", Journal of Northeastern University (Natural Science), no. 09, 15 September 2020 (2020-09-15), pages 17 - 24 *
ZHANG Tian; TIAN Yong; WANG Zi; WANG Zhaodong: "Adaptive threshold image segmentation method based on sharpness evaluation", Journal of Northeastern University (Natural Science), no. 09, 15 September 2020 (2020-09-15), pages 17 - 24 *
TANG Jianwen; WANG Renbo; WANG Haitao: "Research on a digital multi-channel trapezoidal shaping algorithm based on FPGA", Journal of Test and Measurement Technology, no. 05, 29 October 2018 (2018-10-29), pages 42 - 47 *
WANG Chenglong; SUN Peiyi; LIN Xiaopeng; HUANG Yufeng; CHEN Guozhuang: "Face recognition access control system based on SeetaFace", Manufacturing Automation, no. 08, 25 August 2018 (2018-08-25), pages 117 - 118 *
HU Jun; WANG Dong; SUN Tianyu: "Application and development of modern space optical imaging remote sensors", Chinese Optics and Applied Optics, no. 06, 31 December 2010 (2010-12-31), pages 519 - 533 *
CHEN Zhebo; XU Jin; LIN Bin; LU Zukang: "A high dynamic range image evaluation method based on information theory", Optical Technique, no. 01, 15 January 2010 (2010-01-15), pages 108 - 112 *


Also Published As

Publication number Publication date
CN114264661B (en) 2024-05-31
CN118671067A (en) 2024-09-20

Similar Documents

Publication Publication Date Title
CN114689591A (en) Coiled material detection device, system and detection method based on line scanning camera
CN114486903B (en) Gray-scale self-adaptive coiled material detection system, device and algorithm
CN114264661A (en) Definition self-adaptive coiled material detection method, device and system
JP4416795B2 (en) Correction method
US8009208B2 (en) Detection and removal of blemishes in digital images utilizing original images of defocused scenes
CN102221559B (en) Online automatic detection method of fabric defects based on machine vision and device thereof
US7424170B2 (en) Automated statistical self-calibrating detection and removal of blemishes in digital images based on determining probabilities based on image analysis of single images
US20080152255A1 (en) Automated statistical self-calibrating detection and removal of blemishes in digital images dependent upon changes in extracted parameter values
US7310450B2 (en) Method of detecting and correcting dust in digital images based on aura and shadow region analysis
US20100259622A1 (en) Determination of need to service a camera based on detection of blemishes in digital images
US7308156B2 (en) Automated statistical self-calibrating detection and removal of blemishes in digital images based on a dust map developed from actual image data
CN116559183B (en) Method and system for improving defect judging efficiency
JP5659623B2 (en) Exposure attribute setting method and computer-readable storage medium
CN110618134A (en) Steel plate surface quality defect detection and rating system and method
CN106604005A (en) Automatic projection TV focusing method and system
CN107766784A (en) A kind of novel video people counting algorithm
CN118097305B (en) Method and system for detecting quality of semiconductor light-emitting element
CN112329893A (en) Data-driven heterogeneous multi-target intelligent detection method and system
CN116256366A (en) Chip defect detection method, detection system and storage medium
CN113639630A (en) Dimension measuring instrument system based on multi-template matching and automatic focusing functions
JP2005164565A (en) Defect detection method for flat panel light- related plate element in low and high resolution images
CN116258703A (en) Defect detection method, defect detection device, electronic equipment and computer readable storage medium
CN113467664A (en) Size measuring instrument using method based on template matching
CN113645398A (en) Dimension measuring instrument system based on automatic focusing function
JP2001307067A (en) Defect inspecting device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant