CN113537153A - Meter image identification method and device, electronic equipment and computer readable medium - Google Patents
Meter image identification method and device, electronic equipment and computer readable medium Download PDFInfo
- Publication number
- CN113537153A CN113537153A CN202110959804.4A CN202110959804A CN113537153A CN 113537153 A CN113537153 A CN 113537153A CN 202110959804 A CN202110959804 A CN 202110959804A CN 113537153 A CN113537153 A CN 113537153A
- Authority
- CN
- China
- Prior art keywords
- image
- recognition
- pointer
- instrument
- identification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 56
- 238000012937 correction Methods 0.000 claims description 38
- 238000005070 sampling Methods 0.000 claims description 36
- 238000012545 processing Methods 0.000 claims description 31
- 238000000605 extraction Methods 0.000 claims description 14
- 238000001514 detection method Methods 0.000 claims description 13
- 230000008569 process Effects 0.000 claims description 12
- 238000012549 training Methods 0.000 claims description 12
- 238000004590 computer program Methods 0.000 claims description 9
- 239000011159 matrix material Substances 0.000 claims description 7
- 238000013507 mapping Methods 0.000 claims description 6
- 238000002372 labelling Methods 0.000 claims description 5
- 238000007499 fusion processing Methods 0.000 claims description 2
- 238000010586 diagram Methods 0.000 description 10
- 230000006870 function Effects 0.000 description 8
- 238000004422 calculation algorithm Methods 0.000 description 7
- 238000004891 communication Methods 0.000 description 6
- 230000009466 transformation Effects 0.000 description 5
- 230000003287 optical effect Effects 0.000 description 4
- 230000011218 segmentation Effects 0.000 description 4
- 230000008878 coupling Effects 0.000 description 3
- 238000010168 coupling process Methods 0.000 description 3
- 238000005859 coupling reaction Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000010606 normalization Methods 0.000 description 3
- 230000004913 activation Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000004927 fusion Effects 0.000 description 2
- 230000000644 propagated effect Effects 0.000 description 2
- 238000003491 array Methods 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000015556 catabolic process Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 238000006731 degradation reaction Methods 0.000 description 1
- 239000006185 dispersion Substances 0.000 description 1
- 238000003708 edge detection Methods 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 238000003064 k means clustering Methods 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 239000013307 optical fiber Substances 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 239000004984 smart glass Substances 0.000 description 1
- 230000001629 suppression Effects 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the disclosure discloses a meter image identification method and device, electronic equipment and a computer readable medium. One embodiment of the method comprises: carrying out image recognition on a pre-acquired instrument image through a preset instrument image recognition model so as to generate recognition information; intercepting an image area corresponding to the identification frame included in the identification information from the instrument image to obtain an intercepted image; correcting the intercepted image to generate a corrected instrument image and a corrected scale point coordinate set; performing pointer identification on the corrected instrument image through a preset pointer identification model to obtain pointer identification information; and generating a meter image recognition result based on the corrected meter image, the corrected scale point coordinate set, the pointer recognition information and the recognition frame center coordinate and the recognition frame width value included in the recognition information. The embodiment can improve the accuracy of generating the instrument image recognition result.
Description
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a meter image identification method, a meter image identification device, electronic equipment and a computer readable medium.
Background
The instrument image recognizing method is one technology for recognizing instrument data in instrument image. At present, when instrument image recognition is performed, the method generally adopted is as follows: and directly carrying out model training by using the collected instrument images, and then carrying out instrument image recognition by using the trained model.
However, when the meter image recognition is performed in the above manner, there are often technical problems as follows:
firstly, simultaneously, the pointer and dial plate recognition is carried out on the instrument image, so that the pointer characteristic and the dial plate characteristic are mutually influenced, higher coupling performance is generated, and the accuracy rate of instrument image recognition is reduced;
secondly, it is difficult to acquire pointer state pictures of all ranges to train the instrument image recognition model, thereby resulting in low robustness of the instrument image recognition model generated in the conventional method. Thus, the efficiency of meter image recognition is reduced.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a meter image recognition method, apparatus, electronic device, and computer readable medium to solve one or more of the technical problems set forth in the background section above.
In a first aspect, some embodiments of the present disclosure provide a meter image recognition method, including: carrying out image recognition on a pre-acquired instrument image through a preset instrument image recognition model to generate recognition information, wherein the recognition information comprises a recognition frame, a recognition frame center coordinate and a recognition frame width value, and the recognition frame is composed of a frame coordinate group; intercepting an image area corresponding to the identification frame included in the identification information from the instrument image to obtain an intercepted image; based on preset template image information, correcting the intercepted image to generate a corrected instrument image and a corrected scale point coordinate set; performing pointer identification on the corrected instrument image through a preset pointer identification model to obtain pointer identification information; and generating a meter image recognition result based on the corrected meter image, the corrected scale point coordinate set, the pointer recognition information and the recognition frame center coordinate and the recognition frame width value included in the recognition information, wherein the meter image recognition result includes a meter image pointer scale value.
In a second aspect, some embodiments of the present disclosure provide a meter image recognition apparatus, including: the system comprises an image recognition unit, a processing unit and a display unit, wherein the image recognition unit is configured to perform image recognition on a pre-acquired instrument image through a preset instrument image recognition model so as to generate recognition information, the recognition information comprises a recognition frame, a recognition frame center coordinate and a recognition frame width value, and the recognition frame is composed of a frame coordinate group; an image capture unit configured to capture an image area corresponding to the identification frame included in the identification information from the meter image to obtain a captured image; the correction processing unit is configured to perform correction processing on the intercepted image based on preset template image information so as to generate a corrected instrument image and a corrected scale point coordinate set; the pointer identification unit is configured to perform pointer identification on the corrected instrument image through a preset pointer identification model to obtain pointer identification information; a generating unit configured to generate a meter image recognition result based on the corrected meter image, the corrected scale point coordinate set, the pointer recognition information, and a recognition frame center coordinate included in the recognition information and a width value of a recognition frame, wherein the meter image recognition result includes a meter image pointer scale value.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: by the instrument image identification method of some embodiments of the present disclosure, the accuracy of instrument image identification can be improved. Specifically, the reason why the accuracy of the meter image recognition is reduced is that: and simultaneously, the pointer and dial plate identification is carried out on the instrument image, so that the pointer characteristic and the dial plate characteristic are mutually influenced, and higher coupling performance is generated. Based on this, the meter image recognition method of some embodiments of the present disclosure introduces a meter image recognition model and a pointer recognition model. Therefore, the instrument image pointer identification process and the instrument image dial plate identification process can be separated, so that the decoupling effect is achieved. Thus, the accuracy of instrument image recognition can be improved. In addition, preset template image information is introduced to correct the intercepted image. And the generated instrument image identification result is more accurate based on the corrected instrument image, the corrected scale point coordinate set, the pointer identification information and the identification frame center coordinate and the identification frame width value included by the identification information. Thus, the accuracy of the meter image recognition can be further improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and elements are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of one application scenario of a meter image recognition method of some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a meter image identification method according to the present disclosure;
FIG. 3 is a flow chart of further embodiments of a meter image identification method according to the present disclosure;
FIG. 4 is a schematic block diagram of some embodiments of a meter image recognition device according to the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of the obstacle information generation method of some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may perform image recognition on a pre-acquired meter image 103 through a preset meter image recognition model 102 to generate recognition information 104, where the recognition information 104 includes a recognition frame 1041, a recognition frame center coordinate 1042, and a recognition frame width value 1043, and the recognition frame 1041 is formed by a frame coordinate set. Next, the computing device 101 may cut out an image area corresponding to the identification frame 1041 included in the identification information 104 from the meter image 103 to obtain a cut-out image 105. Then, the computing device 101 may perform correction processing on the above-described cut image 105 based on preset template image information 106 to generate a corrected meter image 107 and a corrected scale point coordinate set 108. Thereafter, the computing device 101 may perform pointer recognition on the corrected meter image 107 through a preset pointer recognition model 109 to obtain pointer recognition information 110. Finally, the computing device 101 may generate a meter image recognition result 111 based on the corrected meter image 107, the corrected scale point coordinate set 108, the pointer recognition information 110, the recognition frame center coordinate 1042 included in the recognition information 104, and the recognition frame width value 1043, wherein the meter image recognition result 111 includes a meter image pointer scale value 1111.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or may be implemented as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices enumerated above. It may be implemented, for example, as multiple software or software modules to provide distributed services, or as a single software or software module. And is not particularly limited herein.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of a meter image identification method according to the present disclosure is shown. The flow 200 of the instrument image identification method comprises the following steps:
In some embodiments, an executing subject of the meter image recognition method (such as the computing device 101 shown in fig. 1) may perform image recognition on a pre-acquired meter image through a preset meter image recognition model to generate the recognition information. The identification information may include an identification frame, a center coordinate of the identification frame, and a width value of the identification frame. The recognition frame may be formed by a frame coordinate set. The preset instrument image recognition model may be: YOLO (young only look once) model, mobilenetV2 (lightweight object detection) model, FPN (Feature Pyramid Networks), and the like. The pre-acquired meter image may be an image taken of a single meter. The recognition frame can be a circumscribed rectangle frame of an image area representing the meter in the meter image.
In some embodiments, the execution subject may capture an image area corresponding to the identification frame included in the identification information from the meter image to obtain a captured image. The image capturing may be performed by using the recognition border as a capturing boundary. Thus, the image area surrounded by the recognition frame can be determined as the clipped image. Through image interception, other image areas in the preset instrument image can be removed. Interference to instrument image recognition is avoided.
In some optional implementation manners of some embodiments, the executing body may intercept, from the meter image, an image area corresponding to an identification border included in the identification information to obtain an intercepted image, and may include the following steps:
firstly, adjusting the identification frame included in the identification information to obtain an adjusted frame. The input of the subsequent pointer segmentation link is usually a square picture. Therefore, the center point of the recognition frame can be kept unchanged. For example, the longer side (the maximum value of the width and height values of the recognition frame) is enlarged by 20% with reference to the longer side. The other side length is changed along with the change, so that the adjusted frame is also square.
As an example, the width value of the recognition border may be: 100 pixels, the height value may be: 80 pixels. Then, the left and right bounding box distances may be increased by 10 pixels for each side and 20 pixels for each top and bottom bounding box, resulting in a square bounding box of 120 pixels by 120 pixels.
And secondly, intercepting an image area corresponding to the adjusted frame from the instrument image to obtain an intercepted image. The image area included in the adjusted frame may be cut out from the meter image as a cut-out image.
And 203, based on the preset template image information, correcting the intercepted image to generate a corrected instrument image and a corrected scale point coordinate set.
In some embodiments, the executing body may perform a correction process on the captured image based on preset template image information to generate a corrected instrument image and a corrected scale point coordinate set. The preset template image information may be label information generated after labeling the preset template image. The preset template image can be a clear template image of the instrument to be identified, which is shot by the instrument. The labeling information can be an instrument area circumscribed rectangle frame manually labeled on the instrument area in the instrument image to be identified. The rectified meter image may be generated by:
firstly, an image of a rectangular frame area externally connected with the instrument area is cut out from the template image of the instrument to be identified and used as a cut template image.
And secondly, adjusting the intercepted template image to the same size as the intercepted image to obtain an adjusted image. Wherein, the intercepted template image is adjusted. Accordingly, the bounding box of the adjusted image may be determined as the replacement rectangular box. The replacement rectangular box may be comprised of a set of replacement coordinate values.
And thirdly, fusing the replacing rectangular frame and the identification frame to obtain a fused frame. The fusing may be to determine each replacement coordinate value in the replacement rectangular frame and a midpoint coordinate value of the frame coordinate value corresponding to the identification frame as a fused coordinate value, so as to obtain a fused coordinate value group. The set of fused coordinate values can be used to characterize the fused bounding box. In addition, the replacement coordinate value in the replacement rectangular frame and the frame coordinate value closest to the recognized frame may be determined as the basic correspondence relationship. Then, based on the replacement coordinate value and the frame coordinate value having the basic correspondence relationship, in the same direction on the replacement rectangular frame and the recognition frame, the replacement coordinate value and the frame coordinate value adjacent to the replacement coordinate value and the frame coordinate value are respectively determined as the correspondence relationship.
And fourthly, intercepting the image of the fusion frame region from the instrument image to obtain the corrected instrument image.
Therefore, key point identification can be carried out on the corrected instrument image to generate a correction scale point coordinate set. Each of the set of calibration scale point coordinates may be used to characterize a scale value on the meter in the meter image.
And 204, carrying out pointer identification on the corrected instrument image through a preset pointer identification model to obtain pointer identification information.
In some embodiments, the executing body may perform pointer recognition on the corrected meter image through a preset pointer recognition model to obtain pointer recognition information. The preset pointer identification model may be: BiSeNet (real-time Semantic Segmentation algorithm), Deep lab (Semantic Image Segmentation with Deep computational networks, and atom Segmentation and full Connected CRFs, which employ Deep Convolutional networks, and the like). The pointer identification information may include a semantically segmented image and a frame of the target region in the corresponding semantically segmented image. The frame of the target area may be formed by a target area coordinate set. The border of the target area may be used to characterize the pointer area in the meter image.
In some embodiments, the executing body may generate the meter image recognition result based on the corrected meter image, the corrected scale point coordinate set, the pointer recognition information, and a recognition frame center coordinate included in the recognition information, and a width value of the recognition frame. Wherein the instrument image recognition result may be generated by:
firstly, carrying out image recognition on the corrected instrument image through the instrument image recognition model so as to generate corrected recognition information. Wherein the corrected identification information may include corrected identification center coordinates.
And secondly, determining the middle point of a connecting line between the center coordinates of the identification frame included in the identification information and the corrected identification center coordinates as a target center point to obtain the coordinates of the target center point.
And thirdly, determining a connecting line between the target center point coordinate and the target area coordinate which is farthest away in a target area coordinate group forming a frame of the target area in the pointer identification information as a pointer central line.
And fourthly, determining the coordinate of the correction scale point with the shortest distance between the ray and the coordinate set of the correction scale point as the coordinate of the first target correction scale point by taking the coordinate of the target central point as a ray formed by an end point and the central line of the pointer. And determining the coordinate of the correction scale point with the shortest distance between the ray and the correction scale point coordinate set except the first target correction scale point coordinate as a second target correction scale point coordinate.
And fifthly, determining a connecting line between the coordinate of the first target correction scale point and the coordinate of the target center point as a first standard scale mark. And determining a connecting line between the second target correction scale point coordinate and the target central point coordinate as a second standard scale mark.
And sixthly, determining an included angle between the first standard scale mark and the central line of the pointer to obtain a first angle value. And determining an included angle between the second standard scale mark and the central line of the pointer to obtain a second angle value.
And seventhly, determining the scale value represented by the coordinate of the first target correction scale point as a first scale value. And determining the scale value represented by the second target correction scale point coordinate as a second scale value.
And eighthly, generating a meter image recognition result through the following formula:
where K represents a meter image recognition result. Theta1Representing the first angle value described above. Theta2Representing the second angle value. v. of1Representing the first scale value. v. of2Representing the second scale value.
In some optional implementations of some embodiments, the meter image recognition model is generated by training in the following manner:
in a first step, a meter image set and a natural image set are acquired. The executing body can acquire the instrument image set natural image set in a wired mode or a wireless mode. The meter image in the above-described meter image set may be an image of a single meter taken in advance. The natural images in the natural image set may be images of any natural scene acquired.
And secondly, fusing each instrument image in the instrument image set and each natural image in the natural image set to generate an image set to be processed. The fusion processing may be to fuse and superimpose an instrument image on any natural image in the natural image set to obtain an image to be processed. Therefore, the recognition capability of the generated instrument image recognition model to instruments in different backgrounds can be improved. Therefore, robustness of the instrument image recognition model is improved.
And thirdly, carrying out sample processing on each image to be processed in the image set to be processed to generate a sample set. The sample processing may be image labeling of the to-be-processed image in the to-be-processed image set to generate a sample set. The image annotation may be a circumscribed rectangle frame for annotating the meter image area in each image to be processed in the image set to be processed. Each sample in the sample set may include a post-annotation image to be processed and a sample label.
And fourthly, training an initial instrument image recognition model based on the sample set to generate the instrument image recognition model. The image to be processed after the labeling included in each sample in the sample set can be input to the initial instrument image recognition model for model training. Thus, a meter image recognition model can be generated. In some optional implementations of some embodiments, the pointer recognition model is generated by training in the following manner:
firstly, performing dial area interception on each instrument image in the instrument image set to obtain a dial area image set. The intercepting may be to intercept an image area where the marked circumscribed rectangular frame is located from each meter image in the meter image set to generate a dial area image, so as to obtain a dial area image set.
And secondly, performing pointer annotation on each dial area image in the dial area image set to obtain a pointer annotation image set. The pointer label can indicate the area of the pointer in the dial area image. For example, the image area where the pointer is located may be approximately marked as a circumscribed quadrangle.
And thirdly, carrying out image enhancement on each pointer labeled image in the pointer labeled image set to obtain an enhanced image set. Wherein, the enhancement can be through operations such as projection transformation, rotation, etc., enriching the direction and form of the pointer. Therefore, the recognition capability of the generated instrument image recognition model to pointers at different angles is improved. Thus, the robustness of the pointer identification model can be improved.
And fourthly, training an initial pointer identification model by using the enhanced image to generate the pointer identification model. Wherein each enhanced image in the set of enhanced images may be input to the pointer recognition model for model training. Thus, a pointer recognition model may be generated.
The above embodiments of the present disclosure have the following advantages: by the instrument image identification method of some embodiments of the present disclosure, the accuracy of instrument image identification can be improved. Specifically, the reason why the accuracy of the meter image recognition is reduced is that: and simultaneously, the pointer and dial plate identification is carried out on the instrument image, so that the pointer characteristic and the dial plate characteristic are mutually influenced, and higher coupling performance is generated. Based on this, the meter image recognition method of some embodiments of the present disclosure introduces a meter image recognition model and a pointer recognition model. Therefore, the instrument image pointer identification process and the instrument image dial plate identification process can be separated, so that the decoupling effect is achieved. Thus, the accuracy of instrument image recognition can be improved. In addition, preset template image information is introduced to correct the intercepted image. And the generated instrument image identification result is more accurate based on the corrected instrument image, the corrected scale point coordinate set, the pointer identification information and the identification frame center coordinate and the identification frame width value included by the identification information. Thus, the accuracy of the meter image recognition can be further improved.
With further reference to fig. 3, a flow 300 of further embodiments of a meter image identification method is shown. The process 300 of the meter image recognition method includes the following steps:
In some embodiments, an executing entity (e.g., the computing device 101 shown in fig. 1) of the meter image recognition method may perform semantic extraction processing on the meter image through the semantic extraction network to generate the first semantic feature, the second semantic feature, and the third semantic feature. Wherein, the instrument image recognition model may include: the system comprises a semantic extraction network, a feature sampling network and an instrument image recognition network. In addition, the semantic extraction network may include a first semantic convolution module, a second semantic convolution module, and a feature mapping module. Thus, the first semantic features may be generated by:
firstly, the instrument image is input to a first semantic convolution module included in the semantic extraction network to generate a first semantic feature. The first semantic convolution module may include a first convolution layer, a first batch normalization layer, and a first activation layer. The number of channels of the input feature may be increased by the convolution operation of the first convolution layer.
As an example, the convolution kernel size of the first convolution layer may be 1 × 1.
And secondly, inputting the first semantic features into a second semantic convolution module included in the semantic extraction network to generate second semantic features. The second semantic convolution module may include a second convolution layer, a second batch normalization layer, and a second activation layer. In addition, the second convolution layer may be a separable convolution. Through the convolution operation of the second convolution layer, the spatial feature of the first semantic feature can be learned channel by channel so as to reduce the calculation amount.
As an example, the convolution kernel size of the second convolution layer may be 3 × 3.
And thirdly, inputting the second semantic features into a feature mapping module included in the semantic extraction network to generate third semantic features. The feature mapping module may include a third convolution layer and a batch normalization layer. This reduces the number of channels of the feature, and makes the number of channels of the output tensor equal to the number of channels of the input tensor. And then a residual error connection mode can be used at last, namely a residual error structure is introduced, and the gradient dispersion problem and the degradation problem of the deep network are reduced. Therefore, the recognition accuracy of the meter image model can be improved.
In some embodiments, the execution subject may perform feature sampling processing on the first semantic feature, the second semantic feature, and the third semantic feature through the feature sampling network to generate a first sampling feature, a second sampling feature, and a third sampling feature. First, the first semantic feature, the second semantic feature, and the third semantic feature may be respectively upsampled to obtain a first upsampling feature, a second upsampling feature, and a third upsampling feature. Then, the smoothing operation of the channel number may be performed through standard convolution combination, and the first upsampling feature, the second upsampling feature, and the third upsampling feature are merged into a fused feature. Finally, feature extraction can be carried out on the fusion features through convolution combination, and a first sampling feature, a second sampling feature and a third sampling feature are obtained. The standard convolution combination may be the same as the structure of the first semantic convolution module.
As an example, the number of channels of the first sampling feature may be 64. The number of channels of the second sampling characteristic may be 128. The number of channels of the third sampling characteristic may be 256.
In some embodiments, the execution subject may perform feature mapping processing on the first sampling feature, the second sampling feature, and the third sampling feature through the meter image recognition network to generate the identification information. The instrument image recognition network may include a first head network, a second head network, and a third head network. The above identification information may be generated by:
first, the first sampling feature is subjected to feature recognition through a first head network, and first recognition information is generated. The first identification information may include at least one identification frame and a confidence corresponding to each identification frame, and a center coordinate of the identification frame and a width value of the identification frame. The identification box may be formed by a set of identification coordinates for characterizing the image of the identified meter region.
And secondly, performing feature recognition on the second sampling features through a second head network to generate second recognition information. The second identification information may include at least one identification box and a confidence corresponding to each identification box.
And thirdly, performing feature recognition on the third sampling feature through a third head network to generate third recognition information. The third identification information may include at least one identification frame and a confidence corresponding to each identification frame.
And fourthly, selecting the recognition frame larger than a preset confidence coefficient threshold value from the first recognition information, the second recognition information and the third recognition information as a target recognition frame to obtain a target recognition frame group.
And fifthly, selecting an optimal target recognition frame from the target recognition frame group as a recognition frame through a non-maximum suppression algorithm, and determining the center coordinates of the recognition frame corresponding to the recognition frame and the width value of the recognition frame as recognition information.
And 304, capturing an image area corresponding to the identification frame included in the identification information from the instrument image to obtain a captured image.
In some embodiments, the specific implementation manner and technical effects of step 304 may refer to step 202 in those embodiments corresponding to fig. 2, and are not described herein again.
And 305, based on preset template image information, correcting the intercepted image to generate a corrected instrument image and a corrected scale point coordinate set.
In some embodiments, the executing body may perform a correction process on the captured image based on preset template image information to generate a corrected instrument image and a corrected scale point coordinate set. The template image information may include a template image and a set of key points labeled to a target region in the template image. The target area may be an area where a dial of the meter is located in the template image. The corrected instrument image and the set of corrected tick point coordinates may be generated by:
firstly, feature point detection is carried out on the intercepted image to obtain a detection feature point set. The feature point detection can be performed on the captured image through AKAZE (estimated-KAZE, local feature matching algorithm), so as to obtain a detection feature point set.
And secondly, matching each detection characteristic point in the detection characteristic point set with each key point in the key point set to generate a matched characteristic point set. Wherein, the matching process may be: and clustering each detection characteristic point in the detection characteristic point set and each key point in the key point set by a k-means clustering algorithm to generate a clustering characteristic point group set. Then, the cluster feature points of which the distance value from the cluster center is greater than a preset distance threshold value in the cluster feature point group can be removed to generate extracted cluster feature points, so that a cluster feature point set after removal is obtained. Finally, the cluster feature point group set after the removal can be determined as a matching feature point set.
And thirdly, generating a homography matrix based on the template image. The template image can be sampled by a random sampling consistency algorithm to generate a homography matrix.
And fourthly, correcting the intercepted image by using the homography matrix to generate a corrected instrument image and a corrected scale point coordinate set. And performing perspective transformation on the intercepted image through the homography matrix to obtain a corrected instrument image. Because the perspective transformation is carried out on the intercepted image, the positions of the feature points in the intercepted image are changed along with the perspective transformation. Therefore, the homography matrix can be used for carrying out coordinate transformation on the matching feature points in the matching feature point set to obtain a correction scale point coordinate set. Through correction processing, the influence of image characteristic errors generated by operations such as dial plate inclination and rotation on subsequent steps can be eliminated.
And step 306, carrying out pointer identification on the corrected instrument image through a preset pointer identification model to obtain pointer identification information.
In some embodiments, the executing body may perform pointer recognition on the corrected meter image through a preset pointer recognition model to obtain pointer recognition information. The backbone Network part of the lightweight semantic segmentation Network (binary Network) can be replaced by the semantic extraction Network. Thus, the network structure of the pointer recognition model can be obtained. The pointer identification information may include a feature map after identification. The pixel point of the area where the pointer is located in the identified feature map may be marked as 1, and the other areas may be 0.
And 307, generating a meter image recognition result based on the corrected meter image, the corrected scale point coordinate set, the recognition frame center coordinate included by the pointer recognition information and the width value of the recognition frame.
In some embodiments, the executing body may generate the meter image recognition result based on the corrected meter image, the corrected scale point coordinate set, the pointer recognition information, and a recognition frame center coordinate included in the recognition information, and a width value of the recognition frame. The meter image recognition result may include a meter image pointer scale value. The meter image recognition result may be generated by:
firstly, arc fitting is carried out on each correction scale point coordinate in the correction scale point coordinate set to generate an arc equation and an arc center point coordinate. And performing arc fitting on each correction scale point coordinate in the correction scale point coordinate set by a curve fitting method to generate an arc equation and an arc center point coordinate.
And secondly, performing binarization processing on the corrected instrument image based on the pointer identification information to obtain a binarized instrument image. The pixel point of the area where the pointer is located in the identified feature map may be marked as 1, and the other areas may be 0. Therefore, the corrected instrument image is subjected to binarization processing, and the obtained binarized instrument image can obviously identify the image area representing the pointer.
And thirdly, generating a pointer fitting linear equation based on the binaryzation instrument image, the arc line central point coordinate and the identification frame central coordinate and the identification frame width value included by the identification information. If the width value of the identification frame is larger than a preset width threshold value, the minimum circumscribed triangle of the pointer area in the binarized instrument image can be determined by using an edge detection algorithm. And finally, determining a connecting line of the vertex of the circumscribed triangle and the central coordinate of the identification frame as a pointer fitting linear equation. If the width value of the identification frame is smaller than or equal to the preset width threshold value, a pointer fitting linear equation can be generated through the formula and the related content, and then an instrument image identification result can be generated.
And fourthly, generating an instrument image recognition result based on the pointer fitting linear equation, the arc equation and the correction scale point coordinate set. Preferably, the coordinates of the intersection point of the pointer fitting linear equation and the arc equation can be determined. Then, the correction scale point coordinate closest to the intersection coordinate in the correction scale point coordinate set may be determined as a first intersection scale coordinate. Then, in addition to the first intersection scale coordinates, the correction scale point coordinates closest to the intersection coordinates in the correction scale point coordinate set may be determined as second intersection scale coordinates. Next, the scale value corresponding to the first intersection scale coordinate may be set as a first intersection scale value. And taking the scale value corresponding to the second intersection scale coordinate as a second intersection scale value. Then, the arc length between the intersection point coordinate and the first intersection point scale coordinate can be determined to obtain a first intersection point arc length. And determining the arc length between the intersection point coordinate and the second intersection point scale coordinate to obtain a second intersection point arc length. Finally, the meter image recognition result may be generated by the following formula:
where K represents a meter image recognition result. L is1Indicating the first intersection arc length. L is2Indicating the second intersection arc length. a is1The first intersection scale value is expressed. a is2The second intersection scale value is expressed.
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the flow 300 of the meter image recognition method in some embodiments corresponding to fig. 3 embodies the steps of generating the recognition information, the corrected meter image, the corrected scale point coordinate set, and the meter image recognition result. By introducing the instrument image recognition model, the pointer recognition model and the preset template image information, the instrument image recognition model and the pointer recognition model can be subjected to model training and instrument image recognition by using a small number of template images. Therefore, the acquisition of pointer state pictures of all ranges is avoided. And because a pointer identification model and related contents for generating an instrument image identification result are introduced, the high-quality requirement on a shot natural scene image can be avoided, and the identification capability of the instrument image with scenes such as dial plate fouling, light reflection and shadow is improved. Therefore, the robustness of the instrument image recognition model is improved. Further, the efficiency of meter image recognition is improved. In addition, the neural network algorithms according to one or more embodiments corresponding to fig. 2 and 3 are based on separable convolution and inverse residual error modules designed for the mobile terminal computing power, and are improved in light weight structure. Thus, the mobile device can be used for low-calculation-force mobile devices such as smart glasses.
With further reference to fig. 4, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of a meter image recognition apparatus, which correspond to those of the method embodiments illustrated in fig. 2, and which may be particularly applicable in various electronic devices.
As shown in fig. 4, the meter image recognition apparatus 400 of some embodiments includes: an image recognition unit 401, an image cutout unit 402, a correction processing unit 403, a pointer recognition unit 404, and a generation unit 405. The image recognition unit 401 is configured to perform image recognition on a pre-acquired instrument image through a preset instrument image recognition model to generate recognition information, where the recognition information includes a recognition frame, a recognition frame center coordinate, and a recognition frame width value, and the recognition frame is composed of a frame coordinate set; an image capture unit 402 configured to capture an image area corresponding to an identification frame included in the identification information from the meter image, and obtain a captured image; a correction processing unit 403, configured to perform correction processing on the captured image based on preset template image information to generate a corrected instrument image and a corrected scale point coordinate set; a pointer identification unit 404 configured to perform pointer identification on the corrected instrument image through a preset pointer identification model to obtain pointer identification information; a generating unit 405 configured to generate a meter image recognition result based on the corrected meter image, the corrected scale point coordinate set, the pointer recognition information, and a recognition frame center coordinate included in the recognition information and a width value of a recognition frame, wherein the meter image recognition result includes a meter image pointer scale value.
It will be understood that the elements described in the apparatus 400 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 400 and the units included therein, and will not be described herein again.
Referring now to FIG. 5, a block diagram of an electronic device (e.g., computing device 101 of FIG. 1)500 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be included in the electronic device described above, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: carry out image recognition on a pre-acquired instrument image through a preset instrument image recognition model to generate recognition information, wherein the recognition information comprises a recognition frame, a recognition frame center coordinate and a recognition frame width value, and the recognition frame is composed of a frame coordinate group; intercept an image area corresponding to the recognition frame included in the recognition information from the instrument image to obtain an intercepted image; correct the intercepted image based on preset template image information to generate a corrected instrument image and a corrected scale point coordinate set; perform pointer identification on the corrected instrument image through a preset pointer identification model to obtain pointer identification information; and generate a meter image recognition result based on the corrected instrument image, the corrected scale point coordinate set, the pointer identification information, and the recognition frame center coordinate and the recognition frame width value included in the recognition information, wherein the meter image recognition result includes a meter image pointer scale value.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described as: a processor including an image recognition unit, an image intercepting unit, a correction processing unit, a pointer identification unit, and a generating unit. The names of these units do not, in some cases, limit the units themselves; for example, the generating unit may also be described as a "unit that generates a meter image recognition result".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.
Claims (10)
1. A meter image recognition method includes:
carrying out image recognition on a pre-acquired instrument image through a preset instrument image recognition model to generate recognition information, wherein the recognition information comprises a recognition frame, a recognition frame center coordinate and a recognition frame width value, and the recognition frame is composed of a frame coordinate group;
intercepting an image area corresponding to the recognition frame included in the recognition information from the instrument image to obtain an intercepted image;
based on preset template image information, correcting the intercepted image to generate a corrected instrument image and a corrected scale point coordinate set;
performing pointer identification on the corrected instrument image through a preset pointer identification model to obtain pointer identification information;
and generating a meter image recognition result based on the corrected instrument image, the corrected scale point coordinate set, the pointer identification information, and the recognition frame center coordinate and the recognition frame width value included in the recognition information, wherein the meter image recognition result includes a meter image pointer scale value.
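By way of illustration only, and not as part of the claims, the pipeline of claim 1 could be composed along the following lines in Python. Every callable passed in below (detect_meter, rectify, detect_pointer, read_scale) is a hypothetical placeholder standing in for the preset models and processing steps named in the claim, not an implementation defined by this disclosure.

```python
# Illustrative only: a rough composition of the claimed steps with hypothetical helpers.
from typing import Callable

import numpy as np


def recognize_meter(
    meter_image: np.ndarray,
    detect_meter: Callable,    # stands in for the preset meter image recognition model
    rectify: Callable,         # stands in for the template-based correction processing
    detect_pointer: Callable,  # stands in for the preset pointer identification model
    read_scale: Callable,      # stands in for the arc/line fitting and value mapping
) -> float:
    # 1. Image recognition: recognition frame coordinates, frame centre and frame width.
    (x1, y1, x2, y2), frame_center, frame_width = detect_meter(meter_image)

    # 2. Intercept the image area corresponding to the recognition frame.
    cropped = meter_image[int(y1):int(y2), int(x1):int(x2)]

    # 3. Correction against the template image: corrected image + corrected scale points.
    corrected_img, scale_points = rectify(cropped)

    # 4. Pointer identification on the corrected instrument image.
    pointer_info = detect_pointer(corrected_img)

    # 5. Fuse everything into the meter image pointer scale value.
    return read_scale(corrected_img, scale_points, pointer_info,
                      frame_center, frame_width)
```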
2. The method of claim 1, wherein the meter image recognition model comprises: a semantic extraction network, a feature sampling network and an instrument image recognition network; and
the performing image recognition on the pre-acquired instrument image through the preset instrument image recognition model to generate recognition information comprises:
performing semantic extraction processing on the instrument image through the semantic extraction network to generate a first semantic feature, a second semantic feature and a third semantic feature;
performing feature sampling processing on the first semantic feature, the second semantic feature and the third semantic feature through the feature sampling network to generate a first sampling feature, a second sampling feature and a third sampling feature;
and performing feature mapping processing on the first sampling feature, the second sampling feature and the third sampling feature through the instrument image recognition network to generate the identification information.
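As a rough, non-limiting sketch of the three-part structure named in claim 2, a toy PyTorch module with a three-stage backbone (semantic extraction), top-down upsampling (feature sampling), and per-scale prediction heads (feature mapping) might look as follows. The channel widths, activations, and head outputs are assumptions for illustration, not the patented network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMeterDetector(nn.Module):
    """Toy three-scale detector in the spirit of claim 2 (not the patented network)."""

    def __init__(self, num_outputs: int = 5):  # e.g. box x, y, w, h and objectness
        super().__init__()
        # Semantic extraction: three successive stages -> three semantic features.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Feature sampling: lateral 1x1 convolutions plus top-down upsampling.
        self.lat1 = nn.Conv2d(16, 32, 1)
        self.lat2 = nn.Conv2d(32, 32, 1)
        self.lat3 = nn.Conv2d(64, 32, 1)
        # Feature mapping: one prediction head per sampling feature.
        self.heads = nn.ModuleList(nn.Conv2d(32, num_outputs, 1) for _ in range(3))

    def forward(self, x: torch.Tensor):
        f1 = self.stage1(x)   # first semantic feature
        f2 = self.stage2(f1)  # second semantic feature
        f3 = self.stage3(f2)  # third semantic feature
        p3 = self.lat3(f3)
        p2 = self.lat2(f2) + F.interpolate(p3, size=f2.shape[-2:], mode="nearest")
        p1 = self.lat1(f1) + F.interpolate(p2, size=f1.shape[-2:], mode="nearest")
        # Three sampling features mapped to per-scale recognition outputs.
        return [head(p) for head, p in zip(self.heads, (p1, p2, p3))]


# Example: outs = TinyMeterDetector()(torch.randn(1, 3, 256, 256)) gives three scales.
```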
3. The method of claim 1, wherein the intercepting, from the instrument image, an image area corresponding to the recognition frame included in the recognition information to obtain an intercepted image comprises:
adjusting the recognition frame included in the recognition information to obtain an adjusted frame;
and intercepting an image area corresponding to the adjusted frame from the instrument image to obtain an intercepted image.
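One plausible reading of claim 3, sketched below, is to enlarge the recognition frame by a small relative margin before cropping so that the dial edge is not clipped; the 10% margin is an illustrative assumption rather than a value given in the disclosure.

```python
import numpy as np


def crop_with_margin(image: np.ndarray, frame, margin: float = 0.1) -> np.ndarray:
    """Expand the recognition frame by a relative margin, then intercept that area.

    The 10% margin is an illustrative assumption, not a value taken from the patent.
    """
    x1, y1, x2, y2 = frame
    h, w = image.shape[:2]
    dx, dy = int((x2 - x1) * margin), int((y2 - y1) * margin)
    x1, y1 = max(0, int(x1) - dx), max(0, int(y1) - dy)
    x2, y2 = min(w, int(x2) + dx), min(h, int(y2) + dy)
    return image[y1:y2, x1:x2]
```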
4. The method of claim 1, wherein the template image information comprises a template image and a set of keypoints labeling a target region in the template image; and
the correcting the intercepted image based on the preset template image information to generate a corrected instrument image and a corrected scale point coordinate set comprises:
carrying out feature point detection on the intercepted image to obtain a detection feature point set;
matching each detection feature point in the detection feature point set with each key point in the key point set to generate a matched feature point set;
generating a homography matrix based on the template image;
and correcting the intercepted image by utilizing the homography matrix to generate a corrected instrument image and a corrected scale point coordinate set.
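A generic OpenCV sketch of the correction step of claim 4 is given below: feature points detected in the intercepted image are matched against the template key points, a homography is estimated, and the crop is warped into template coordinates, where the labelled scale points can be reused directly. The choice of ORB features, brute-force matching, and RANSAC is an assumption for illustration; the claim does not prescribe a particular detector or estimator.

```python
import cv2
import numpy as np


def rectify_against_template(cropped, template, template_scale_points):
    """Match features against the template and warp the crop into template coordinates."""
    orb = cv2.ORB_create(1000)
    kp_c, des_c = orb.detectAndCompute(cropped, None)
    kp_t, des_t = orb.detectAndCompute(template, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_c, des_t), key=lambda m: m.distance)[:100]

    src = np.float32([kp_c[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_t[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # homography matrix

    h, w = template.shape[:2]
    corrected = cv2.warpPerspective(cropped, H, (w, h))
    # After warping into the template frame, the template's scale point labels apply.
    return corrected, np.asarray(template_scale_points, dtype=np.float32)
```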
5. The method of claim 1, wherein the generating a meter image recognition result based on the corrected instrument image, the corrected scale point coordinate set, the pointer identification information, and the recognition frame center coordinate and the recognition frame width value included in the recognition information comprises:
performing arc fitting on each corrected scale point coordinate in the corrected scale point coordinate set to generate an arc equation and an arc center point coordinate;
performing binarization processing on the corrected instrument image based on the pointer identification information to obtain a binarized instrument image;
generating a pointer fitting line equation based on the binarized instrument image, the arc center point coordinate, and the recognition frame center coordinate and the recognition frame width value included in the recognition information;
and generating the meter image recognition result based on the pointer fitting line equation, the arc equation and the corrected scale point coordinate set.
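To make the geometry of claim 5 concrete, the sketch below fits a circle through the corrected scale points (a least-squares stand-in for the arc fitting), fits a line through the pointer pixels of an already binarized instrument image, and converts the pointer direction into a reading by interpolating between the scale-point angles. It assumes the dial spans less than a full turn and that the reading of each scale mark is known; it is not the calculation prescribed by the patent.

```python
import cv2
import numpy as np


def fit_circle(points: np.ndarray):
    """Least-squares (Kasa) circle fit through the corrected scale points."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    return cx, cy, np.sqrt(sol[2] + cx ** 2 + cy ** 2)


def read_pointer_value(pointer_mask, scale_points, scale_values):
    """Turn a binarized pointer mask and the scale marks into a pointer scale value."""
    pts = np.asarray(scale_points, dtype=np.float64)
    cx, cy, _ = fit_circle(pts)                        # arc equation + arc centre

    ys, xs = np.nonzero(pointer_mask)                  # pointer pixels
    vx, vy, _, _ = cv2.fitLine(np.column_stack([xs, ys]).astype(np.float32),
                               cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    # Orient the fitted line from the arc centre towards the pointer tip.
    if np.dot([xs.mean() - cx, ys.mean() - cy], [vx, vy]) < 0:
        vx, vy = -vx, -vy
    pointer_angle = np.arctan2(vy, vx)

    # Angles of the scale marks around the arc centre, made monotonic.
    angles = np.unwrap(np.arctan2(pts[:, 1] - cy, pts[:, 0] - cx))
    # Wrap the pointer angle into the same branch as the first scale mark.
    pointer_angle = angles[0] + np.angle(np.exp(1j * (pointer_angle - angles[0])))
    if angles[0] > angles[-1]:                         # np.interp needs an increasing axis
        angles, pointer_angle = -angles, -pointer_angle
    return float(np.interp(pointer_angle, angles, scale_values))
```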
6. The method of claim 1, wherein the meter image recognition model is generated by training in the following manner:
acquiring an instrument image set and a natural image set;
performing fusion processing on each instrument image in the instrument image set and each natural image in the natural image set to generate an image set to be processed;
performing sample processing on each image to be processed in the image set to be processed to generate a sample set;
training an initial meter image recognition model based on the sample set to generate the meter image recognition model.
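A minimal sketch of one way the fusion processing of claim 6 could be realised is shown below: a meter image is resized and pasted at a random position onto a natural background image, and the paste location becomes the ground-truth recognition frame of the resulting sample. The 640x640 canvas and the scale range are illustrative assumptions, not values taken from the disclosure.

```python
import random

import cv2
import numpy as np


def fuse_meter_into_background(meter_img: np.ndarray, natural_img: np.ndarray):
    """Paste a meter image onto a natural image; return the fused image and its box label."""
    canvas = cv2.resize(natural_img, (640, 640))
    side = int(640 * random.uniform(0.3, 0.6))         # illustrative size range
    meter = cv2.resize(meter_img, (side, side))
    x = random.randint(0, 640 - side)
    y = random.randint(0, 640 - side)
    canvas[y:y + side, x:x + side] = meter
    # The paste location serves as the ground-truth recognition frame of the sample.
    return canvas, (x, y, x + side, y + side)
```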
7. The method of claim 6, wherein the pointer recognition model is generated by training in the following manner:
performing dial area interception on each instrument image in the instrument image set to obtain a dial area image set;
performing pointer annotation on each dial area image in the dial area image set to obtain a pointer annotation image set;
performing image enhancement on each pointer labeled image in the pointer labeled image set to obtain an enhanced image set;
training an initial pointer recognition model by using the enhanced image set to generate the pointer recognition model.
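For the image enhancement step of claim 7, a minimal sketch (assuming the pointer annotations are pixel masks) might apply paired geometric and photometric jitter to a dial-area image and its annotation, for example as follows; flip and brightness jitter are illustrative choices, since the patent does not enumerate the enhancement operations.

```python
import random

import cv2
import numpy as np


def augment_dial_sample(dial_img: np.ndarray, pointer_mask: np.ndarray):
    """Jointly enhance a dial-area image and its pointer annotation mask."""
    img, mask = dial_img.copy(), pointer_mask.copy()
    if random.random() < 0.5:              # horizontal flip applied to image and mask
        img, mask = cv2.flip(img, 1), cv2.flip(mask, 1)
    gain = random.uniform(0.7, 1.3)        # photometric jitter on the image only
    img = np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    return img, mask
```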
8. A meter image recognition device comprising:
an image recognition unit configured to perform image recognition on a pre-acquired instrument image through a preset instrument image recognition model to generate recognition information, wherein the recognition information comprises a recognition frame, a recognition frame center coordinate and a recognition frame width value, and the recognition frame is composed of a frame coordinate group;
an image intercepting unit configured to intercept, from the instrument image, an image area corresponding to the recognition frame included in the recognition information to obtain an intercepted image;
a correction processing unit configured to perform correction processing on the intercepted image based on preset template image information to generate a corrected instrument image and a corrected scale point coordinate set;
a pointer identification unit configured to perform pointer identification on the corrected instrument image through a preset pointer identification model to obtain pointer identification information;
a generating unit configured to generate a meter image recognition result based on the corrected instrument image, the corrected scale point coordinate set, the pointer identification information, and the recognition frame center coordinate and the recognition frame width value included in the recognition information, wherein the meter image recognition result includes a meter image pointer scale value.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110959804.4A CN113537153A (en) | 2021-08-20 | 2021-08-20 | Meter image identification method and device, electronic equipment and computer readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113537153A (en) | 2021-10-22
Family
ID=78091925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110959804.4A (Pending) | Meter image identification method and device, electronic equipment and computer readable medium | 2021-08-20 | 2021-08-20
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113537153A (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109543682A (en) * | 2018-11-23 | 2019-03-29 | 电子科技大学 | A kind of readings of pointer type meters method based on deep learning |
CN109948469A (en) * | 2019-03-01 | 2019-06-28 | 吉林大学 | The automatic detection recognition method of crusing robot instrument based on deep learning |
CN111950330A (en) * | 2019-05-16 | 2020-11-17 | 杭州测质成科技有限公司 | Pointer instrument indicating number detection method based on target detection |
CN110427819A (en) * | 2019-06-26 | 2019-11-08 | 深圳市容会科技有限公司 | The method and relevant device of PPT frame in a kind of identification image |
CN110443242A (en) * | 2019-07-31 | 2019-11-12 | 新华三大数据技术有限公司 | Read frame detection method, Model of Target Recognition training method and relevant apparatus |
CN110796095A (en) * | 2019-10-30 | 2020-02-14 | 浙江大华技术股份有限公司 | Instrument template establishing method, terminal equipment and computer storage medium |
CN111368825A (en) * | 2020-02-25 | 2020-07-03 | 华南理工大学 | Pointer positioning method based on semantic segmentation |
CN111401377A (en) * | 2020-03-13 | 2020-07-10 | 北京市商汤科技开发有限公司 | Instrument data reading method and device, electronic equipment and storage medium |
CN112115895A (en) * | 2020-09-24 | 2020-12-22 | 深圳市赛为智能股份有限公司 | Pointer type instrument reading identification method and device, computer equipment and storage medium |
CN112115893A (en) * | 2020-09-24 | 2020-12-22 | 深圳市赛为智能股份有限公司 | Instrument panel pointer reading identification method and device, computer equipment and storage medium |
CN112749813A (en) * | 2020-10-29 | 2021-05-04 | 广东电网有限责任公司 | Data processing system, method, electronic equipment and storage medium |
CN112906694A (en) * | 2021-03-25 | 2021-06-04 | 中国长江三峡集团有限公司 | Reading correction system and method for inclined pointer instrument image of transformer substation |
CN112990179A (en) * | 2021-04-20 | 2021-06-18 | 成都阿莱夫信息技术有限公司 | Single-pointer type dial reading automatic identification method based on picture processing |
Non-Patent Citations (2)
Title |
---|
Cui Shengmin: "Intelligent and Connected Vehicle Technology" (Modern Mechanical Engineering Series; Emerging Engineering higher-education automotive series), 31 January 2021, China Machine Press, pages 87-106 *
Dong Hongyi: "Deep Learning with PyTorch: Object Detection in Practice", China Machine Press, pages 99-106 *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114863129A (en) * | 2022-04-11 | 2022-08-05 | 珠海优特电力科技股份有限公司 | Instrument numerical analysis method, device, equipment and storage medium |
CN115797934A (en) * | 2022-12-01 | 2023-03-14 | 北京百度网讯科技有限公司 | Instrument number indicating method and device, electronic equipment and storage medium |
CN115797934B (en) * | 2022-12-01 | 2023-12-01 | 北京百度网讯科技有限公司 | Meter registration method, apparatus, electronic device and storage medium |
CN116310285A (en) * | 2023-02-16 | 2023-06-23 | 武汉科技大学 | Automatic pointer instrument reading method and system based on deep learning |
CN116310285B (en) * | 2023-02-16 | 2024-02-27 | 科大集智技术湖北有限公司 | Automatic pointer instrument reading method and system based on deep learning |
CN116844058A (en) * | 2023-08-30 | 2023-10-03 | 广州市扬新技术研究有限责任公司 | Pointer instrument indication recognition method, device, equipment and storage medium |
CN116844058B (en) * | 2023-08-30 | 2024-03-12 | 广州市扬新技术研究有限责任公司 | Pointer instrument indication recognition method, device, equipment and storage medium |
CN118279640A (en) * | 2024-01-29 | 2024-07-02 | 中国人民解放军陆军炮兵防空兵学院 | FPGA-based large target key feature recognition method and device |
CN118279640B (en) * | 2024-01-29 | 2024-10-18 | 中国人民解放军陆军炮兵防空兵学院 | FPGA-based large target key feature recognition method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113537153A (en) | Meter image identification method and device, electronic equipment and computer readable medium | |
WO2020125495A1 (en) | Panoramic segmentation method, apparatus and device | |
CN108446698B (en) | Method, device, medium and electronic equipment for detecting text in image | |
CN111369427B (en) | Image processing method, image processing device, readable medium and electronic equipment | |
CN110517214B (en) | Method and apparatus for generating image | |
CN113869293B (en) | Lane line recognition method and device, electronic equipment and computer readable medium | |
CN111292420B (en) | Method and device for constructing map | |
CN111414879B (en) | Face shielding degree identification method and device, electronic equipment and readable storage medium | |
CN111915480B (en) | Method, apparatus, device and computer readable medium for generating feature extraction network | |
CN112085775B (en) | Image processing method, device, terminal and storage medium | |
CN114399588B (en) | Three-dimensional lane line generation method and device, electronic device and computer readable medium | |
CN110211195B (en) | Method, device, electronic equipment and computer-readable storage medium for generating image set | |
WO2023072015A1 (en) | Method and apparatus for generating character style image, device, and storage medium | |
US20240221126A1 (en) | Image splicing method and apparatus, and device and medium | |
CN111325792A (en) | Method, apparatus, device, and medium for determining camera pose | |
CN112597788B (en) | Target measuring method, target measuring device, electronic apparatus, and computer-readable medium | |
CN115393815A (en) | Road information generation method and device, electronic equipment and computer readable medium | |
CN111209856B (en) | Invoice information identification method and device, electronic equipment and storage medium | |
CN114723640B (en) | Obstacle information generation method and device, electronic equipment and computer readable medium | |
CN115100536B (en) | Building identification method and device, electronic equipment and computer readable medium | |
CN114742707B (en) | Multi-source remote sensing image splicing method and device, electronic equipment and readable medium | |
CN114140427B (en) | Object detection method and device | |
CN115393826A (en) | Three-dimensional lane line generation method and device, electronic device and computer readable medium | |
CN114882308A (en) | Biological feature extraction model training method and image segmentation method | |
CN113239943B (en) | Three-dimensional component extraction and combination method and device based on component semantic graph |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |