CN111881730A - Wearing detection method for on-site safety helmet of thermal power plant - Google Patents
Wearing detection method for on-site safety helmet of thermal power plant
- Publication number
- CN111881730A (application number CN202010550623.1A)
- Authority
- CN
- China
- Prior art keywords
- safety helmet
- wearing detection
- model
- helmet wearing
- power plant
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Evolutionary Biology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Human Computer Interaction (AREA)
- Helmets And Other Head Coverings (AREA)
Abstract
The application discloses a method for detecting the wearing of safety helmets on site at a thermal power plant. The method comprises: obtaining historical video pictures that contain workers in a monitoring area to form a safety helmet wearing detection data set; labeling the data set and randomly dividing it into a training set and a test set; generating anchor boxes that correspond to the data set and suit the monitoring area; training an improved YOLOv3 model with the training set to obtain a safety helmet wearing detection model; adjusting the model confidence of the detection model; and detecting the wearing of safety helmets in the monitoring area in real time with the detection model. Based on the improved YOLOv3 model, this real-time detection method adapts to complex on-site scenes and improves the mAP of helmet wearing detection, which helps raise the safety awareness of workers and keeps on-site production running in an orderly way.
Description
Technical Field
The invention belongs to the technical field of intelligent video surveillance for safety and relates to a method for detecting the wearing of safety helmets on a thermal power plant site.
Background
Wearing a safety helmet is a basic measure against brain injury. Research shows that on construction and inspection sites nearly 90% of brain injuries are caused by safety helmets not being worn correctly, and that checking whether the relevant personnel wear their helmets can effectively reduce the incidence of related accidents.
Traditionally, construction and inspection sites have relied on dedicated safety supervisors to check whether workers wear their safety helmets, but such supervision cannot cover the whole site and its effectiveness cannot be guaranteed. A helmet wearing detection method that can supervise construction and inspection sites in real time while reducing supervision costs is therefore needed.
A general object detection method only has to decide whether a target is present in a picture, count the targets and mark their positions. A safety helmet wearing detection algorithm must go further: it needs real-time recognition on dynamic video, optimized in depth to reach high recognition and tracking accuracy; it must adapt to different environments such as strong light or overcast conditions; it must not be disturbed by occlusions such as glasses, beards, hairstyles or facial expressions; and it must not be affected by different postures such as facing forward, backward or sideways, running or lowering the head. In recent years researchers have produced many innovative studies on helmet wearing detection along two lines, sensor-based detection and image-processing-based detection. However, because of problems such as low positioning precision, slow processing and low accuracy, these approaches do not suit highly complex construction and inspection sites and cannot meet the practical requirements of helmet wearing detection, whereas detection algorithms of the YOLO family, with their simple networks, high detection speed and high accuracy, outperform traditional detection algorithms for helmet wearing detection.
YOLOv3 replaces the Darknet-19 of YOLOv2 with the residual network Darknet-53 as its backbone, extracting features through 53 convolutional layers and 5 down-sampling stages; it uses batch normalization and removes dropout to prevent overfitting, and its loss function replaces softmax with independent logistic classifiers. The YOLOv3 detection pipeline uses multi-scale training and applies the classifier to multiple locations and scales of the image; with an input picture size of 416 x 416 pixels, the feature map sizes are 13 x 13, 26 x 26 and 52 x 52.
However, for the specific on-site scenes addressed here, the standard YOLOv3 model and its parameters cannot achieve a sufficiently good detection result, so an improved model and a detection strategy tailored to the scene need to be formulated.
Disclosure of Invention
To overcome the shortcomings of the prior art, the present application provides a method for detecting the wearing of safety helmets on site at a thermal power plant. The method can detect from on-site video whether personnel are wearing safety helmets, and effectively reduces missed and false detections caused by complex conditions such as fast-moving personnel, object occlusion and dim lighting.
In order to achieve the above objective, the following technical solutions are adopted in the present application:
a thermal power plant field safety helmet donning detection method, the method comprising the steps of:
s1: acquiring a video picture with workers in a monitoring area to form a safety helmet wearing detection data set;
s2: whether a worker wears the safety helmet in the safety helmet wearing detection data set picture is marked, and the safety helmet wearing detection data set picture is randomly divided into a training set and a testing set;
s3: generating corresponding anchor frames suitable for the monitoring area according to the helmet wearing detection data set obtained in the step S1;
s4: training an improved YOLOv3 model by using a training set, detecting the improved YOLOv3 model after each generation of training by using a test set, and screening to obtain a safety helmet wearing detection model;
s5: adjusting the model confidence of the helmet wearing detection model to obtain a helmet wearing detection model with an optimal detection result;
s6: and detecting the wearing condition of the safety helmet in the monitoring area in real time based on the safety helmet wearing detection model obtained in the step S5.
The invention further comprises the following preferred embodiments:
step S2 specifically includes the steps of:
s201: manually marking whether the safety helmet is worn by the person in the safety helmet wearing detection data set one by one, wherein the marked part is used as a positive sample, and the unmarked part is used as a negative sample, so as to obtain an xml format file;
s202: acquiring the category and the detection frame of each labeled object in the xml format file and generating a corresponding txt format file;
s203: generating the paths of all txt format files corresponding to the helmet wearing detection data set and storing them in a dataset.txt file;
S204: randomly dividing the txt label files whose paths are stored in the dataset.txt file of step S203 into a training set and a test set.
In step S3, an anchor frame is generated by using a clustering method, the distance measurement method in clustering is an IOU distance, and the calculation formula is:
D=1-IOU(box,clusters)
where D is the distance between a labeled box and a cluster centre, box is the labeled box, and clusters are the cluster centre boxes.
Here, IOU stands for Intersection over Union.
The improved YOLOv3 model of step S4 distinguishes 2 categories: a target wearing a safety helmet is marked hat and a target not wearing one is marked none;
the improved YOLOv3 model having an input picture size of 576 x 576;
the improved YOLOv3 model comprises a first YOLO layer, a second YOLO layer and a third YOLO layer, which are located at the 89th, 101st and 113th layers respectively;
the number of anchor frames generated by the first, second and third YOLO layers is 3, 3 and 5 respectively, and the anchor frames are used for detecting large targets, medium targets and small targets, wherein the size of the large targets is larger than 96 x 96, the size of the medium targets is larger than 32 x 32 and smaller than or equal to 96 x 96, and the size of the small targets is smaller than or equal to 32 x 32;
the first, second and third YOLO layers have feature sizes of 9 × 9, 36 × 36 and 72 × 72, respectively;
the step size of the improved YOLOv3 model is 2 at the 88th layer, and the 112th layer has 35 filters (output feature maps); the number of filters is calculated as:
filters=(classes+5)*anchors
where classes = 2 is the number of model categories, anchors = 5 is the number of anchor boxes of this YOLO layer, and the 5 added values per anchor are the x and y coordinates, the width, the height and the background/objectness confidence.
Step S4 uses focal loss as the loss function during training, defined as:
FL(p_t) = -α_t · (1 - p_t)^γ · log(p_t)
where p_t is the predicted probability of the true class, and α and γ are both hyperparameters;
swish is taken as an activation function, and the activation function formula is as follows:
f(x)=x·sigmoid(x);
where x is the tensor of the input image.
The step S4 includes the following sub-steps:
s401: uniformly adjusting the pictures in the training set to the input picture size of the improved YOLOv3 model;
s402: performing image enhancement processing on the picture subjected to size adjustment in the step S401;
s403: setting iteration times, the number of images of each batch of training, an initial learning rate and a learning rate updating rule;
s404: training an improved Yolov3 model using the image enhanced processed pictures;
s405: and detecting mAP of the improved YOLOv3 model after each generation of training by using the test set, and selecting the improved YOLOv3 model with the highest mAP as a safety helmet wearing detection model.
In step S402, image enhancement is performed by using image flipping, cropping, color changing, brightness modification, contrast modification, and saturation modification.
In step S403, the number of iterations is set to 10000, each training batch contains 32 images, the initial learning rate is 0.001, and the learning rate is multiplied by 0.5 whenever no better result is obtained for 200 consecutive iterations.
In step S5, the model confidence is set to 0.65.
Step S6 specifically includes the following steps:
s601: collecting video pictures in a monitoring area in real time;
s602: the safety helmet wearing detection model judges whether workers exist in the collected video pictures or not;
s603: separating the persons from the background in the video pictures that contain workers, obtaining the position of the anchor box and judging whether the workers wear safety helmets;
if the safety helmet is worn, only the staff is marked, otherwise, a prompt is sent and the staff is marked;
s604: and displaying the pictures marked by the safety helmet wearing detection model in real time.
The beneficial effects achieved by the present application:
The method realizes real-time detection of helmet wearing on a thermal power plant site based on an improved YOLOv3 model. Compared with the original model, the improved YOLOv3 model obtains its anchor boxes from the data set, so the anchor values better match the on-site scenes; the input picture size of 576 × 576 carries more information, making the model more accurate; the output feature map of the third YOLO layer is enlarged to 72 × 72 and its number of anchor boxes increased to 5, giving higher accuracy on small-target detection; focal loss addresses the class imbalance problem, and the Swish activation function avoids excessive saturation. Through these improvements the model adapts to complex on-site scenes, and the precision, recall and mAP (mean average precision) of the model in on-site application are improved, with the advantage most pronounced for small-target detection. This raises the safety awareness of workers and helps on-site production proceed in an orderly way.
Drawings
FIG. 1 is a flow chart of a method for detecting the wearing of a thermal power plant field safety helmet according to the present application;
FIG. 2 is a network architecture diagram of the improved YOLOv3 model of the present application;
fig. 3 is a flow chart of real-time detection of the wearing condition of the safety helmet in the monitoring area based on the safety helmet wearing detection model.
Detailed Description
The present application is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present application is not limited thereby.
As shown in fig. 1, a method for detecting wearing of a safety helmet in a thermal power plant site according to the present application includes the following steps:
s1: acquiring a video picture with workers in a monitoring area to form a safety helmet wearing detection data set;
In the embodiment of the application, video of the monitoring area is obtained from an on-site camera and converted into individual pictures with OpenCV, and the pictures containing workers are then screened out to form the safety helmet wearing detection data set;
The collected historical video pictures are not restricted in scale, lighting, style, color or the presence of occlusion.
S2: whether a worker wears the safety helmet in the safety helmet wearing detection data set picture is marked, and the safety helmet wearing detection data set picture is randomly divided into a training set and a testing set according to the proportion of 8: 2; the method specifically comprises the following steps:
s201: manually marking whether the safety helmet is worn or not in the safety helmet wearing detection data set one by one, wherein the marked part is used as a positive sample, and the unmarked part is used as a negative sample, namely a background class, so as to obtain an xml format file;
any image annotation software can be used to manually annotate the person wearing the crash helmet in the crash helmet wear detection dataset one by one, such as, but not limited to, Labelme, labelImg, or yolo _ mark.
S202: acquiring the type and detection frame of each labeled object in the xml format file, namely a bounding box, and generating a corresponding txt format file;
s203: generating the paths of all txt format files corresponding to the helmet wearing detection data set and storing them in a dataset.txt file;
S204: randomly dividing the txt label files whose paths are stored in the dataset.txt file of step S203 into a training set and a test set at a ratio of 8:2.
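For illustration, steps S201 to S204 can be sketched as follows, assuming labelImg-style Pascal VOC xml annotations with the class names hat and none; the file layout and helper names are assumptions rather than the patent's exact tooling.

```python
import glob
import random
import xml.etree.ElementTree as ET

CLASSES = ["hat", "none"]  # helmet worn / not worn, as labeled in step S201

def voc_xml_to_yolo_txt(xml_path, txt_path):
    """Convert one labelImg-style VOC xml annotation into a YOLO txt label file."""
    root = ET.parse(xml_path).getroot()
    w = float(root.find("size/width").text)
    h = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls = CLASSES.index(obj.find("name").text)
        b = obj.find("bndbox")
        xmin, ymin = float(b.find("xmin").text), float(b.find("ymin").text)
        xmax, ymax = float(b.find("xmax").text), float(b.find("ymax").text)
        # YOLO txt format: class x_center y_center width height, all normalized.
        cx, cy = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
        bw, bh = (xmax - xmin) / w, (ymax - ymin) / h
        lines.append(f"{cls} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    with open(txt_path, "w") as f:
        f.write("\n".join(lines))

# dataset.txt stores the paths of all txt label files (S203),
# which are then randomly split 8:2 into training and test sets (S204).
paths = sorted(glob.glob("labels/*.txt"))
with open("dataset.txt", "w") as f:
    f.write("\n".join(paths))
random.shuffle(paths)
cut = int(0.8 * len(paths))
with open("train.txt", "w") as f:
    f.write("\n".join(paths[:cut]))
with open("test.txt", "w") as f:
    f.write("\n".join(paths[cut:]))
```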
S3: generating a corresponding anchor frame suitable for the monitoring area according to the helmet wearing detection data set obtained in the step S1, so that the current detection environment can be better adapted, and higher detection accuracy can be obtained;
in step S3, an anchor frame is generated by using a clustering method, the distance measurement method in clustering is an IOU distance, and the calculation formula is:
D=1-IOU(box,clusters)
where D is the distance between a labeled box and a cluster centre, box is the labeled box, and clusters are the cluster centre boxes.
In step S3 the number of anchor boxes is set according to the actual on-site scene, and the anchor boxes themselves are derived from the data set, so their values better fit that scene; in this embodiment the number of anchor boxes is set to 11, and specifically, for the model with a 576 × 576 input, they are:
[6,12],[7,17],[9,18],[10,16],[27,39],[39,58],[49,77],[68,91],[123,191],[168,228],[293,446], where [6,12] denotes a rectangular box of width 6 and height 12, and so on.
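A minimal sketch of this anchor generation by k-means clustering with the D = 1 - IOU distance is given below; it assumes the labeled boxes are supplied as (width, height) pairs already scaled to the 576 × 576 input, and uses k = 11 as in this embodiment.

```python
import numpy as np

def iou_wh(boxes, clusters):
    """IOU between (w, h) box sizes and cluster centres, both anchored at the origin."""
    w = np.minimum(boxes[:, None, 0], clusters[None, :, 0])
    h = np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (clusters[:, 0] * clusters[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=11, iters=300, seed=0):
    """Cluster labeled box sizes with the distance D = 1 - IOU(box, cluster)."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        dist = 1.0 - iou_wh(boxes, clusters)     # distance matrix, shape (N, k)
        assign = dist.argmin(axis=1)             # nearest cluster for each box
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else clusters[i] for i in range(k)])
        if np.allclose(new, clusters):
            break
        clusters = new
    return clusters[np.argsort(clusters.prod(axis=1))]  # smallest to largest area
```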
S4: training an improved YOLOv3 model by using a training set, detecting the improved YOLOv3 model after each generation of training by using a test set, and screening to obtain a safety helmet wearing detection model;
as shown in FIG. 2, the modified Yolov3 model was divided into 2 categories, one category of worn helmets labeled hat and unworn helmets labeled none;
the input picture size of the improved YOLOv3 model is 576 x 576, the larger pixel size increases the information content, and the model is more accurate;
the improved YOLOv3 model comprises a first YOLO layer, a second YOLO layer and a third YOLO layer, which are located at the 89th, 101st and 113th layers respectively;
the number of anchor frames generated by the first, second and third YOLO layers is 3, 3 and 5 respectively, and the anchor frames are used for detecting large targets, medium targets and small targets, wherein the size of the large targets is larger than 96 x 96, the size of the medium targets is larger than 32 x 32 and smaller than or equal to 96 x 96, and the size of the small targets is smaller than or equal to 32 x 32;
the first, second and third YOLO layers have feature sizes of 9 × 9, 36 × 36 and 72 × 72, respectively; the size of the output feature graph of the third YOLO layer is 72 x 72, the number of anchor frames is increased to 5, and higher accuracy is achieved on small target detection;
the improved YOLOv3 model has a step size of 2 at the 88 th layer, 35 characteristic maps at the 112 th layer, and a calculation formula of the characteristic maps is as follows:
filters=(classes+5)*anchors
where classes = 2 is the number of model categories, anchors = 5 is the number of anchor boxes of this YOLO layer, and the 5 added values per anchor are the x and y coordinates, the width, the height and the background/objectness confidence.
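With these values, filters = (2 + 5) × 5 = 35, which matches the 35 filters at the 112th layer.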
The processing of a picture by the improved YOLOv3 model is as follows: the 576 × 576 input first enters the Darknet feature extraction network. On the first path it passes through six 3 × 3 convolutions with stride 2 (down-sampling), producing an output feature map of size 9 × 9. On the second path, the features before the last down-sampling are tensor-concatenated with the features that have passed through the Darknet backbone, 6 DBL units and a first up-sampling, giving a second feature map of size 36 × 36; each DBL unit consists of one convolution, one regularization and one activation function. On the third path, the features after 11 residual (res) operations are tensor-concatenated once more with the features of the second path after a further DBL and up-sampling, giving the third feature map of size 72 × 72.
In fig. 2, DBL denotes convolution + batch normalization (BN) + Leaky ReLU activation, which is the smallest component of YOLOv3;
resn: n is a number (res1, res2, res8, etc.) indicating how many res_units the res_block contains; resn is the larger building block of YOLOv3, and its basic component is the DBL unit;
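A minimal PyTorch-style sketch of these two building blocks is given below; it is an illustration under stated assumptions, not the patent's exact 113-layer configuration, and the channel widths are placeholders.

```python
import torch.nn as nn

class DBL(nn.Module):
    """DBL unit from fig. 2: convolution + batch normalization + activation."""
    def __init__(self, c_in, c_out, k=3, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, stride, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        # Original YOLOv3 uses Leaky ReLU here; the patent adopts Swish,
        # i.e. nn.SiLU() (f(x) = x * sigmoid(x)), as its activation function.
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ResUnit(nn.Module):
    """res_unit: a 1x1 DBL followed by a 3x3 DBL with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(DBL(channels, channels // 2, k=1),
                                   DBL(channels // 2, channels, k=3))

    def forward(self, x):
        return x + self.block(x)
```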
during training, FOCAL LOSS is used as a LOSS function, and the LOSS function formula is as follows:
FL(pt)=-αt(1-pt)γlog(pt)
wherein p is the predicted probability, and both alpha and gamma are hyperparameters;
swish is taken as an activation function, and the activation function formula is as follows:
f(x)=x·sigmoid(x);
where x is the tensor of the input image.
Focal loss is used to address the class imbalance problem, and the Swish activation function is used to avoid excessive saturation;
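A minimal sketch of these two ingredients, the binary focal loss and the Swish activation, is given here for illustration; the α = 0.25 and γ = 2.0 values are assumptions taken from the original focal-loss paper, since the patent does not fix them.

```python
import torch

def swish(x):
    """Swish activation: f(x) = x * sigmoid(x)."""
    return x * torch.sigmoid(x)

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    targets holds 0/1 labels; alpha=0.25 and gamma=2.0 follow the original
    focal-loss paper and are assumptions, since the patent does not fix them.
    """
    p = torch.sigmoid(logits)
    p_t = torch.where(targets == 1, p, 1 - p)
    alpha_t = torch.where(targets == 1,
                          torch.full_like(p, alpha),
                          torch.full_like(p, 1 - alpha))
    return (-alpha_t * (1 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-7))).mean()
```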
by the improvement, the model adapts to the complex scene of the site and the accuracy, the recall rate and the mAP of the model applied on the site are improved
The training process of the improved YOLOv3 model is as follows: the input picture is preprocessed to the specified size and fed into the model, which contains convolutional layers, pooling layers, BN regularization layers, activation functions and so on; the model outputs a set of coordinate centre points, anchor boxes are generated around these centre points, and non-maximum suppression gives the corresponding result, namely the class and position of every target predicted in the picture (the prediction boxes); the predicted classes and positions are then compared with the real classes and positions (the labeled boxes), a loss value is obtained through the loss function, back-propagation follows the direction of the largest gradient, and the model parameters are updated.
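The loop just described can be summarized by a generic, framework-level training step; the sketch below is an assumption-level illustration in PyTorch, with model, loss_fn and the target format standing in for the improved YOLOv3 network and its loss.

```python
import torch

def train_step(model, optimizer, images, targets, loss_fn):
    """One generic training iteration: forward pass, loss against the labeled
    boxes, back-propagation and parameter update. model, loss_fn and the
    target format are placeholders for the improved YOLOv3 network and its loss."""
    model.train()
    optimizer.zero_grad()
    predictions = model(images)           # predicted classes and box parameters
    loss = loss_fn(predictions, targets)  # compare with the real classes/positions
    loss.backward()                       # back-propagate along the gradient
    optimizer.step()                      # update the model parameters
    return loss.item()
```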
Specifically, the step S4 includes the following sub-steps:
s401: uniformly adjusting the pictures in the training set to 576 x 576;
s402: performing image enhancement processing on the picture subjected to size adjustment in the step S401;
Specifically, image enhancement is performed using operations such as image flipping, cropping, color changes, and brightness, contrast and saturation adjustments.
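For illustration, the listed augmentations could be composed as in the hedged sketch below, using torchvision; the probabilities and jitter ranges are assumptions, and the geometric transforms would also require the box labels to be adjusted accordingly.

```python
from torchvision import transforms

# Illustrative pipeline; probabilities and jitter ranges are assumptions, and
# geometric transforms (flip, crop) also require the box labels to be adjusted.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomResizedCrop(576, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.1),
    transforms.ToTensor(),
])
```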
S403: the number of iterations (epochs) is set to 10000, each batch (batch_size) contains 32 images, the initial learning rate is 0.001, and the learning rate is multiplied by 0.5 whenever no better result is obtained for 200 consecutive iterations;
S404: training an improved Yolov3 model using the image enhanced processed pictures;
s405: and detecting mAP of the improved YOLOv3 model after each generation of training by using the test set, and selecting the improved YOLOv3 model with the highest mAP as a safety helmet wearing detection model.
S5: the test set is evaluated multiple times with different confidence thresholds, and the model confidence of the helmet wearing detection model obtained in step S4 is adjusted to obtain the helmet wearing detection model with the best detection result, thereby improving detection precision; the model with the highest detection precision is selected first, and the confidence threshold is then tuned on that model to obtain the final application model.
In the embodiment of the present application, the model confidence is set to 0.65. A higher threshold reduces the model's recall, because many samples that should be detected are missed; a lower threshold reduces the model's precision, because many samples that should not be detected are reported. A well-tuned confidence threshold balances recall and precision and yields the optimal mAP.
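One simple way to pick this threshold is to sweep a range of values on the test set and keep the one with the best mAP; the sketch below assumes a placeholder evaluate_map function that runs the detector at a given threshold and returns the resulting mAP.

```python
import numpy as np

def sweep_confidence(evaluate_map, thresholds=np.arange(0.30, 0.91, 0.05)):
    """Evaluate the detector at several confidence thresholds and keep the best.

    evaluate_map(threshold) is a placeholder that runs the trained model on the
    test set with the given confidence threshold and returns its mAP.
    """
    scores = [(float(t), evaluate_map(float(t))) for t in thresholds]
    return max(scores, key=lambda s: s[1])  # (best_threshold, best_map)
```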
S6: and detecting the wearing condition of the safety helmet in the monitoring area in real time based on the safety helmet wearing detection model obtained in the step S5.
The real-time detection process comprises: obtaining the video, preprocessing it, letting the model judge whether a target is present, obtaining the centre points, applying the anchor boxes and non-maximum suppression (NMS) to obtain the result, letting the model classify whether the target wears a helmet, obtaining the coordinates of the prediction box from the model's regression branch, drawing the result, and displaying the drawn image on a monitor.
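For reference, a minimal NumPy sketch of the non-maximum suppression step mentioned above follows; the 0.45 IOU threshold is an illustrative value, not one specified in the patent.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression; boxes are (x1, y1, x2, y2) arrays."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        # Overlap of the current best box with the remaining candidates.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]
    return keep
```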
As shown in fig. 3, step S6 specifically includes the following steps:
s601: collecting video pictures in a monitoring area in real time;
s602: the safety helmet wearing detection model judges whether a worker, namely a target, exists in the acquired video picture;
s603: separating the persons from the background in the video pictures that contain workers, obtaining the positions of the anchor boxes and judging whether the workers wear safety helmets; that is, the detection model performs classification and regression, where classification assigns each detection to a positive sample or a negative sample (the background class) and regression obtains the positions of the predicted points.
The feature maps have sizes of 9 × 9, 36 × 36 and 72 × 72; whichever grid cell the predicted centre coordinate falls into generates prediction boxes from the corresponding anchor sizes, their confidences are computed, and the box with the highest confidence is kept as the prediction box.
If the safety helmet is worn, the rectangular frame is directly drawn, otherwise, a prompt is sent and the rectangular frame of the object without the safety helmet is drawn;
for objects with safety helmets, green rectangular boxes can be used for labeling and have a hat character, and for objects without safety helmets, red rectangular boxes can be used for labeling and have a none character.
S604: and displaying the pictures marked by the safety helmet wearing detection model in real time.
According to the method, a safety helmet wearing detection data set is built and labeled from video pictures collected in the monitored area, a clustering algorithm is used to obtain anchor boxes that correspond to the data set and suit the monitored area, and, by modifying the input picture size, the number of convolution kernels, the output feature map sizes, the number of anchor boxes of each YOLO layer, the loss function and the activation function, the trained model adapts to complex on-site scenes with improved precision, recall and mAP in on-site application. The safety awareness of workers is improved, and on-site production proceeds safely and in an orderly way.
The present applicant has described and illustrated embodiments of the present invention in detail with reference to the accompanying drawings, but it should be understood by those skilled in the art that the above embodiments are merely preferred embodiments of the present invention, and the detailed description is only for the purpose of helping the reader to better understand the spirit of the present invention, and not for limiting the scope of the present invention, and on the contrary, any improvement or modification made based on the spirit of the present invention should fall within the scope of the present invention.
Claims (10)
1. A method for detecting the wearing of safety helmets on a thermal power plant site, characterized in that the method comprises the following steps:
s1: acquiring a video picture with workers in a monitoring area to form a safety helmet wearing detection data set;
s2: whether a worker wears the safety helmet in the safety helmet wearing detection data set picture is marked, and the safety helmet wearing detection data set picture is randomly divided into a training set and a testing set;
s3: generating corresponding anchor frames suitable for the monitoring area according to the helmet wearing detection data set obtained in the step S1;
s4: training an improved YOLOv3 model by using a training set, detecting the improved YOLOv3 model after each generation of training by using a test set, and screening to obtain a safety helmet wearing detection model;
s5: adjusting the model confidence of the helmet wearing detection model to obtain a helmet wearing detection model with an optimal detection result;
s6: and detecting the wearing condition of the safety helmet in the monitoring area in real time based on the safety helmet wearing detection model obtained in the step S5.
2. The thermal power plant site safety helmet wearing detection method according to claim 1, characterized in that:
step S2 specifically includes the steps of:
s201: manually marking whether the safety helmet is worn by the person in the safety helmet wearing detection data set one by one, wherein the marked part is used as a positive sample, and the unmarked part is used as a negative sample, so as to obtain an xml format file;
s202: acquiring the category and the detection frame of each labeled object in the xml format file and generating a corresponding txt format file;
s203: generating the paths of all txt format files corresponding to the helmet wearing detection data set and storing them in a dataset.txt file;
S204: randomly dividing the txt label files whose paths are stored in the dataset.txt file of step S203 into a training set and a test set.
3. The thermal power plant site safety helmet wearing detection method according to claim 1, characterized in that:
in step S3, an anchor frame is generated by using a clustering method, the distance measurement method in clustering is an IOU distance, and the calculation formula is:
D=1-IOU(box,clusters)
where D is the distance between a labeled box and a cluster centre, box is the labeled box, and clusters are the cluster centre boxes.
4. The thermal power plant site safety helmet wearing detection method according to claim 1, characterized in that:
the improved YOLOv3 model of step S4 is divided into 2 categories, one category of which is marked hat when the safety helmet is worn and none when the safety helmet is not worn;
the improved YOLOv3 model having an input picture size of 576 x 576;
the improved YOLOv3 model comprises a first YOLO layer, a second YOLO layer and a third YOLO layer, which are located at the 89th, 101st and 113th layers respectively;
the number of anchor frames generated by the first, second and third YOLO layers is 3, 3 and 5 respectively, and the anchor frames are used for detecting large targets, medium targets and small targets, wherein the size of the large targets is larger than 96 x 96, the size of the medium targets is larger than 32 x 32 and smaller than or equal to 96 x 96, and the size of the small targets is smaller than or equal to 32 x 32;
the first, second and third YOLO layers have feature sizes of 9 × 9, 36 × 36 and 72 × 72, respectively;
the step size of the improved YOLOv3 model is 2 at the 88 th layer, 35 feature maps exist at the 112 th layer, and the number calculation formula of the feature maps is as follows:
filters=(classes+5)*anchors
where classes = 2 is the number of model categories, anchors = 5 is the number of anchor boxes of this YOLO layer, and the 5 added values per anchor are the x and y coordinates, the width, the height and the background/objectness confidence.
5. The thermal power plant site safety helmet wearing detection method according to claim 1, characterized in that:
step S4 uses FOCAL LOSS as a LOSS function during training, where the LOSS function formula is:
FL(pt)=-αt(1-pt)γlog(pt)
wherein p is the predicted probability, and both alpha and gamma are hyperparameters;
swish is taken as an activation function, and the activation function formula is as follows:
f(x)=x·sigmoid(x);
where x is the tensor of the input image.
6. The thermal power plant site safety helmet wearing detection method according to claim 1, characterized in that:
the step S4 includes the following sub-steps:
s401: uniformly adjusting the pictures in the training set to the input picture size of the improved YOLOv3 model;
s402: performing image enhancement processing on the picture subjected to size adjustment in the step S401;
s403: setting iteration times, the number of images of each batch of training, an initial learning rate and a learning rate updating rule;
s404: training an improved Yolov3 model using the image enhanced processed pictures;
s405: and detecting mAP of the improved YOLOv3 model after each generation of training by using the test set, and selecting the improved YOLOv3 model with the highest mAP as a safety helmet wearing detection model.
7. The thermal power plant site safety helmet wearing detection method according to claim 6, characterized in that:
in step S402, image enhancement is performed by using image flipping, cropping, color changing, brightness modification, contrast modification, and saturation modification.
8. The thermal power plant site safety helmet wearing detection method according to claim 6, characterized in that:
in step S403, the number of iterations is set to 10000, the initial learning rate is 0.001 for each 32 images, and the learning rate is updated to 0.5 times of the original learning rate if there is no more optimal result for 200 consecutive iterations.
9. The thermal power plant site safety helmet wearing detection method according to claim 1, characterized in that:
in step S5, the model confidence is set to 0.65.
10. The thermal power plant site safety helmet wearing detection method according to claim 1, characterized in that:
step S6 specifically includes the following steps:
s601: collecting video pictures in a monitoring area in real time;
s602: the safety helmet wearing detection model judges whether workers exist in the collected video pictures or not;
s603: separating the persons from the background in the video pictures that contain workers, obtaining the position of the anchor box and judging whether the workers wear safety helmets;
if the safety helmet is worn, only the staff is marked, otherwise, a prompt is sent and the staff is marked;
s604: and displaying the pictures marked by the safety helmet wearing detection model in real time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010550623.1A CN111881730A (en) | 2020-06-16 | 2020-06-16 | Wearing detection method for on-site safety helmet of thermal power plant |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010550623.1A CN111881730A (en) | 2020-06-16 | 2020-06-16 | Wearing detection method for on-site safety helmet of thermal power plant |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111881730A true CN111881730A (en) | 2020-11-03 |
Family
ID=73156776
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010550623.1A Pending CN111881730A (en) | 2020-06-16 | 2020-06-16 | Wearing detection method for on-site safety helmet of thermal power plant |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111881730A (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112347943A (en) * | 2020-11-09 | 2021-02-09 | 哈尔滨理工大学 | Anchor optimization safety helmet detection method based on YOLOV4 |
CN112381005A (en) * | 2020-11-17 | 2021-02-19 | 温州大学 | Safety helmet detection system for complex scene |
CN112597902A (en) * | 2020-12-24 | 2021-04-02 | 上海核工程研究设计院有限公司 | Small target intelligent identification method based on nuclear power safety |
CN112633174A (en) * | 2020-12-23 | 2021-04-09 | 电子科技大学 | Improved YOLOv4 high-dome-based fire detection method and storage medium |
CN112668628A (en) * | 2020-12-24 | 2021-04-16 | 山东大学 | Quality detection and visualization method for air conditioner outdoor unit |
CN112861646A (en) * | 2021-01-18 | 2021-05-28 | 浙江大学 | Cascade detection method for oil unloading worker safety helmet in complex environment small target recognition scene |
CN113052133A (en) * | 2021-04-20 | 2021-06-29 | 平安普惠企业管理有限公司 | Yolov 3-based safety helmet identification method, apparatus, medium and equipment |
CN113139437A (en) * | 2021-03-31 | 2021-07-20 | 成都飞机工业(集团)有限责任公司 | Helmet wearing inspection method based on YOLOv3 algorithm |
CN113158851A (en) * | 2021-04-07 | 2021-07-23 | 浙江大华技术股份有限公司 | Wearing safety helmet detection method and device and computer storage medium |
CN113392857A (en) * | 2021-08-17 | 2021-09-14 | 深圳市爱深盈通信息技术有限公司 | Target detection method, device and equipment terminal based on yolo network |
CN113553979A (en) * | 2021-07-30 | 2021-10-26 | 国电汉川发电有限公司 | Safety clothing detection method and system based on improved YOLO V5 |
CN113553977A (en) * | 2021-07-30 | 2021-10-26 | 国电汉川发电有限公司 | Improved YOLO V5-based safety helmet detection method and system |
CN113688709A (en) * | 2021-08-17 | 2021-11-23 | 长江大学 | Intelligent detection method, system, terminal and medium for wearing safety helmet |
CN113705476A (en) * | 2021-08-30 | 2021-11-26 | 国网四川省电力公司营销服务中心 | Neural network-based field operation violation behavior analysis method and system |
CN113780342A (en) * | 2021-08-04 | 2021-12-10 | 杭州国辰机器人科技有限公司 | Intelligent detection method and device based on self-supervision pre-training and robot |
CN113936294A (en) * | 2021-09-13 | 2022-01-14 | 微特技术有限公司 | Construction site personnel identification method, readable storage medium and electronic device |
CN114332773A (en) * | 2022-01-05 | 2022-04-12 | 苏州麦科斯工程科技有限公司 | Intelligent construction site safety helmet wearing identification control system based on Yolo v4 improved model |
CN114627425A (en) * | 2021-06-11 | 2022-06-14 | 珠海路讯科技有限公司 | Method for detecting whether worker wears safety helmet or not based on deep learning |
CN114831378A (en) * | 2022-05-17 | 2022-08-02 | 杭州品茗安控信息技术股份有限公司 | Method, device, equipment and storage medium for monitoring wearing state of lower jaw belt |
CN114973080A (en) * | 2022-05-18 | 2022-08-30 | 深圳能源环保股份有限公司 | Method, device, equipment and storage medium for detecting wearing of safety helmet |
CN113792629B (en) * | 2021-08-31 | 2023-07-18 | 华南理工大学 | Safety helmet wearing detection method and system based on deep neural network |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110399905A (en) * | 2019-07-03 | 2019-11-01 | 常州大学 | The detection and description method of safety cap wear condition in scene of constructing |
KR102095152B1 (en) * | 2019-06-07 | 2020-03-30 | 건국대학교 산학협력단 | A method of recognizing a situation and apparatus performing the same |
CN111222474A (en) * | 2020-01-09 | 2020-06-02 | 电子科技大学 | Method for detecting small target of high-resolution image with any scale |
AU2020100711A4 (en) * | 2020-05-05 | 2020-06-11 | Chang, Cheng Mr | The retrieval system of wearing safety helmet based on deep learning |
-
2020
- 2020-06-16 CN CN202010550623.1A patent/CN111881730A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102095152B1 (en) * | 2019-06-07 | 2020-03-30 | 건국대학교 산학협력단 | A method of recognizing a situation and apparatus performing the same |
CN110399905A (en) * | 2019-07-03 | 2019-11-01 | 常州大学 | The detection and description method of safety cap wear condition in scene of constructing |
CN111222474A (en) * | 2020-01-09 | 2020-06-02 | 电子科技大学 | Method for detecting small target of high-resolution image with any scale |
AU2020100711A4 (en) * | 2020-05-05 | 2020-06-11 | Chang, Cheng Mr | The retrieval system of wearing safety helmet based on deep learning |
Non-Patent Citations (4)
Title |
---|
XIAOTONG ZHAO et al.: "Aggregated Residual Dilation-Based Feature Pyramid Network for Object Detection", IEEE ACCESS, vol. 7, pages 134014 - 134027, XP011747567, DOI: 10.1109/ACCESS.2019.2941892 *
HE CHAO: "Research on a Safety Helmet Detection System Based on Improved YOLOv3" (in Chinese), China Master's Theses Full-text Database: Information Science and Technology, vol. 1, no. 3, pages 221 - 224 *
农民工小陈 (blog): "YOLOv3 Notes (yolov3 logistics)" (in Chinese), pages 1 - 4, Retrieved from the Internet <URL:https://blog.csdn.net/weixin_42102248/article/details/102665185> *
WU WEIHAO et al.: "Defect Detection of Electrical Connectors Based on Improved YOLO v3" (in Chinese), Chinese Journal of Sensors and Actuators, vol. 33, no. 2, pages 299 - 307 *
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112347943A (en) * | 2020-11-09 | 2021-02-09 | 哈尔滨理工大学 | Anchor optimization safety helmet detection method based on YOLOV4 |
CN112381005A (en) * | 2020-11-17 | 2021-02-19 | 温州大学 | Safety helmet detection system for complex scene |
CN112633174A (en) * | 2020-12-23 | 2021-04-09 | 电子科技大学 | Improved YOLOv4 high-dome-based fire detection method and storage medium |
CN112633174B (en) * | 2020-12-23 | 2022-08-02 | 电子科技大学 | Improved YOLOv4 high-dome-based fire detection method and storage medium |
CN112597902A (en) * | 2020-12-24 | 2021-04-02 | 上海核工程研究设计院有限公司 | Small target intelligent identification method based on nuclear power safety |
CN112668628A (en) * | 2020-12-24 | 2021-04-16 | 山东大学 | Quality detection and visualization method for air conditioner outdoor unit |
CN112861646A (en) * | 2021-01-18 | 2021-05-28 | 浙江大学 | Cascade detection method for oil unloading worker safety helmet in complex environment small target recognition scene |
CN113139437A (en) * | 2021-03-31 | 2021-07-20 | 成都飞机工业(集团)有限责任公司 | Helmet wearing inspection method based on YOLOv3 algorithm |
CN113158851A (en) * | 2021-04-07 | 2021-07-23 | 浙江大华技术股份有限公司 | Wearing safety helmet detection method and device and computer storage medium |
CN113158851B (en) * | 2021-04-07 | 2022-08-09 | 浙江大华技术股份有限公司 | Wearing safety helmet detection method and device and computer storage medium |
CN113052133A (en) * | 2021-04-20 | 2021-06-29 | 平安普惠企业管理有限公司 | Yolov 3-based safety helmet identification method, apparatus, medium and equipment |
CN114627425A (en) * | 2021-06-11 | 2022-06-14 | 珠海路讯科技有限公司 | Method for detecting whether worker wears safety helmet or not based on deep learning |
CN114627425B (en) * | 2021-06-11 | 2024-05-24 | 珠海路讯科技有限公司 | Method for detecting whether worker wears safety helmet or not based on deep learning |
CN113553977A (en) * | 2021-07-30 | 2021-10-26 | 国电汉川发电有限公司 | Improved YOLO V5-based safety helmet detection method and system |
CN113553979B (en) * | 2021-07-30 | 2023-08-08 | 国电汉川发电有限公司 | Safety clothing detection method and system based on improved YOLO V5 |
CN113553979A (en) * | 2021-07-30 | 2021-10-26 | 国电汉川发电有限公司 | Safety clothing detection method and system based on improved YOLO V5 |
CN113780342A (en) * | 2021-08-04 | 2021-12-10 | 杭州国辰机器人科技有限公司 | Intelligent detection method and device based on self-supervision pre-training and robot |
CN113392857A (en) * | 2021-08-17 | 2021-09-14 | 深圳市爱深盈通信息技术有限公司 | Target detection method, device and equipment terminal based on yolo network |
CN113392857B (en) * | 2021-08-17 | 2022-03-11 | 深圳市爱深盈通信息技术有限公司 | Target detection method, device and equipment terminal based on yolo network |
CN113688709A (en) * | 2021-08-17 | 2021-11-23 | 长江大学 | Intelligent detection method, system, terminal and medium for wearing safety helmet |
CN113688709B (en) * | 2021-08-17 | 2023-12-05 | 广东海洋大学 | Intelligent detection method, system, terminal and medium for wearing safety helmet |
CN113705476A (en) * | 2021-08-30 | 2021-11-26 | 国网四川省电力公司营销服务中心 | Neural network-based field operation violation behavior analysis method and system |
CN113792629B (en) * | 2021-08-31 | 2023-07-18 | 华南理工大学 | Safety helmet wearing detection method and system based on deep neural network |
CN113936294A (en) * | 2021-09-13 | 2022-01-14 | 微特技术有限公司 | Construction site personnel identification method, readable storage medium and electronic device |
CN114332773A (en) * | 2022-01-05 | 2022-04-12 | 苏州麦科斯工程科技有限公司 | Intelligent construction site safety helmet wearing identification control system based on Yolo v4 improved model |
CN114831378A (en) * | 2022-05-17 | 2022-08-02 | 杭州品茗安控信息技术股份有限公司 | Method, device, equipment and storage medium for monitoring wearing state of lower jaw belt |
CN114973080A (en) * | 2022-05-18 | 2022-08-30 | 深圳能源环保股份有限公司 | Method, device, equipment and storage medium for detecting wearing of safety helmet |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111881730A (en) | Wearing detection method for on-site safety helmet of thermal power plant | |
AU2020100705A4 (en) | A helmet detection method with lightweight backbone based on yolov3 network | |
CN110059694B (en) | Intelligent identification method for character data in complex scene of power industry | |
CN111860160B (en) | Method for detecting wearing of mask indoors | |
CN111414887B (en) | Secondary detection mask face recognition method based on YOLOV3 algorithm | |
CN110084165B (en) | Intelligent identification and early warning method for abnormal events in open scene of power field based on edge calculation | |
CN110502965A (en) | A kind of construction safety helmet wearing monitoring method based on the estimation of computer vision human body attitude | |
CN111967393A (en) | Helmet wearing detection method based on improved YOLOv4 | |
CN110348312A (en) | A kind of area video human action behavior real-time identification method | |
CN113553977B (en) | Improved YOLO V5-based safety helmet detection method and system | |
CN113324864B (en) | Pantograph carbon slide plate abrasion detection method based on deep learning target detection | |
CN110728252B (en) | Face detection method applied to regional personnel motion trail monitoring | |
CN112633308A (en) | Detection method and detection system for whether power plant operating personnel wear safety belts | |
CN113642474A (en) | Hazardous area personnel monitoring method based on YOLOV5 | |
CN110852179B (en) | Suspicious personnel invasion detection method based on video monitoring platform | |
CN111626169A (en) | Image-based railway dangerous falling rock size judgment method | |
CN112163572A (en) | Method and device for identifying object | |
CN112365497A (en) | High-speed target detection method and system based on Trident Net and Cascade-RCNN structures | |
CN111626170A (en) | Image identification method for railway slope rockfall invasion limit detection | |
CN112184773A (en) | Helmet wearing detection method and system based on deep learning | |
CN110674887A (en) | End-to-end road congestion detection algorithm based on video classification | |
CN113343926A (en) | Driver fatigue detection method based on convolutional neural network | |
CN106570440A (en) | People counting method and people counting device based on image analysis | |
Isa et al. | CNN transfer learning of shrimp detection for underwater vision system | |
CN113887455B (en) | Face mask detection system and method based on improved FCOS |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |