CN114842205B - Vehicle loss detection method, device, equipment and storage medium - Google Patents
- Publication number
- CN114842205B CN114842205B CN202210602497.9A CN202210602497A CN114842205B CN 114842205 B CN114842205 B CN 114842205B CN 202210602497 A CN202210602497 A CN 202210602497A CN 114842205 B CN114842205 B CN 114842205B
- Authority
- CN
- China
- Prior art keywords
- damage
- image
- information
- processed
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/08—Insurance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention relates to artificial intelligence and provides a vehicle damage detection method, apparatus, device, and storage medium. The method comprises: detecting an image to be processed based on a basic damage detection model to obtain first damage information; performing segmentation processing on the image to be processed based on a component segmentation model to obtain initial component images and component position information; identifying whether cross-component damage exists in the image to be processed according to the initial component images, the component position information, and the first damage information; if cross-component damage exists, screening a target component image out of the initial component images based on the first damage information; inputting the target component image into a cross-component damage detection model to obtain second damage information; and accurately generating a vehicle damage detection result according to the second damage information and the first damage information. The invention further relates to blockchain technology: the damage detection result can be stored in a blockchain.
Description
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a method, an apparatus, a device, and a storage medium for detecting vehicle damage.
Background
With the development of the Internet, vehicle damage detection based on deep learning has gradually replaced manual inspection. However, current vehicle damage detection schemes still cannot accurately detect damage that spans multiple components, so vehicle damage detection accuracy remains low.
Disclosure of Invention
In view of the foregoing, it is necessary to provide a vehicle damage detection method, apparatus, device, and storage medium, which can accurately detect damage conditions of a cross-component damage type, and improve vehicle damage detection accuracy.
In one aspect, the present invention provides a vehicle loss detection method, where the vehicle loss detection method includes:
when a vehicle loss detection request is received, acquiring an image to be processed according to the vehicle loss detection request;
Detecting the image to be processed based on a pre-trained basic damage detection model to obtain first damage information;
dividing the image to be processed based on a pre-trained part dividing model to obtain a plurality of initial part images and part position information of a part to be detected in each initial part image on the image to be processed;
Identifying whether cross-component damage exists in the image to be processed according to the plurality of initial component images, the component position information and the first damage information;
if cross-component damage exists in the image to be processed, screening out a target component image from the plurality of initial component images based on the first damage information;
inputting the target part image into a pre-trained cross-part damage detection model to obtain second damage information of the target part image;
And generating a vehicle damage detection result of the image to be processed according to the second damage information and the first damage information.
According to a preferred embodiment of the present invention, the first damage information includes a damage component detection frame, damage location information, and a damage type, the basic damage detection model includes a feature extraction network layer, a candidate frame detection network layer, and an output network layer, and the detecting the image to be processed based on the basic damage detection model trained in advance includes:
Extracting image features of the image to be processed based on a convolution layer in the feature extraction network layer;
Detecting the image features based on the candidate frame detection network layer to obtain the damaged part detection frame;
Acquiring a position identification layer and a logistic regression layer of the output network layer;
Performing position recognition on the image candidate frame based on the position recognition layer to obtain the damage position information;
and carrying out regression processing on the image features based on the logistic regression layer to obtain the damage type.
According to a preferred embodiment of the present invention, the identifying whether there is cross-component damage in the image to be processed according to the plurality of initial component images, the component position information, and the first damage information includes:
Screening out an initial part image overlapped with the damaged part detection frame based on the damaged position information and the part position information to serve as a part image to be detected;
counting the number of images of the part images to be detected;
If the number of the images is greater than or equal to a preset number, detecting whether a plurality of to-be-detected part images are connected in the to-be-processed image or not based on the part position information;
And if the plurality of to-be-detected part images are connected in the to-be-processed image, determining that cross-part damage exists in the to-be-processed image.
According to a preferred embodiment of the present invention, the plurality of initial component images include the plurality of component images to be detected, and the screening the target component image from the plurality of initial component images based on the first damage information includes:
Identifying an overlapping area of each part image to be detected and the damaged part detection frame based on the damaged position information and the part position information;
Extracting the damage type of each part image to be detected from the first damage information, and identifying the damage grade corresponding to the damage type of each part image to be detected;
Calculating the area ratio of the overlapping area on the damaged part detection frame;
and if the damage levels in the damage part detection frames are the same, determining the part image to be detected corresponding to the overlapping area with the smallest area ratio in each damage part detection frame as the target part image.
According to a preferred embodiment of the invention, the method further comprises:
If the plurality of damage grades are different in the damage part detection frame, obtaining a grade score of each damage grade;
calculating a damage score of the overlapping region on the damage component detection frame based on the rank score and the area ratio;
and determining the part image to be detected corresponding to the overlapping area with the minimum damage score in each damage part detection frame as the target part image.
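The selection described in the two embodiments above can be sketched as follows. The list structure, the per-level scores, and the use of a simple product of level score and area ratio as the "damage score" are illustrative assumptions; the patent only states that the score is calculated from the level score and the area ratio.

```python
def select_target_component(overlaps):
    """overlaps: list of (component_name, level_score, area_ratio) for the
    component images overlapping one damage component detection frame.
    Returns the component with the minimum damage score, where the score
    is assumed to be level_score * area_ratio (an illustrative combination)."""
    def damage_score(item):
        _, level_score, area_ratio = item
        return level_score * area_ratio
    return min(overlaps, key=damage_score)[0]

# Hypothetical overlaps within a single damage detection frame:
target = select_target_component([("hood", 3, 0.6), ("fender", 1, 0.2), ("bumper", 2, 0.5)])
```

When all damage levels in the frame are equal, the same function reduces to picking the smallest area ratio, matching the earlier embodiment.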
According to a preferred embodiment of the present invention, before the segmentation processing is performed on the image to be processed based on a pre-trained part segmentation model, the method further comprises:
Acquiring a plurality of vehicle training images, each vehicle training image including a plurality of vehicle components;
masking each vehicle training image based on any vehicle component to obtain a plurality of masking images of each vehicle training image;
Carrying out semantic segmentation processing on each vehicle training image based on a preset network to obtain a plurality of prediction segmentation images of each vehicle training image;
counting the number of first pixel points in each prediction segmentation image as a first number, and counting the number of second pixel points in each mask image as a second number;
calculating the absolute value of the difference between each first quantity and each second quantity as a prediction difference;
calculating the average value of the predicted difference values to obtain a segmentation damage value of the preset network;
And adjusting network parameters in the preset network based on the segmentation damage value to obtain the part segmentation model.
According to a preferred embodiment of the present invention, the generating the vehicle damage detection result of the image to be processed according to the second damage information and the first damage information includes:
determining the rest part images to be detected except the target part image as processed part images;
extracting first damage position information and first damage type of the processed part image from the first damage information, and extracting second damage position information and second damage type of the target part image from the first damage information;
Identifying a first predicted damage degree of the basic damage detection model to the target part image according to the second damage position information and the second damage type, and identifying a second predicted damage degree of the cross-part damage detection model to the target part image according to the second damage information;
Screening target damage position information and target damage types of the target part image from the second damage position information, the second damage type and the second damage information according to the first predicted damage degree and the second predicted damage degree;
And generating the vehicle damage detection result according to the part name of the part to be detected corresponding to the processed part image, the mapping relation between the first damage position information and the first damage type, and the part name of the part to be detected corresponding to the target part image, the mapping relation between the target damage position information and the target damage type.
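A minimal sketch of the final assembly step: the result maps each component name to its damage position and damage type, drawing the processed components' entries from the first damage information and the target component's entry from the screened target information. The dict layout and field names are illustrative choices, not mandated by the patent.

```python
def build_detection_result(processed_parts, target_parts):
    """Assemble the vehicle damage detection result as a mapping from
    component name to its damage position information and damage type.
    processed_parts / target_parts: lists of (name, position, damage_type)."""
    result = {}
    for name, position, damage_type in processed_parts:   # from first damage information
        result[name] = {"position": position, "type": damage_type}
    for name, position, damage_type in target_parts:      # from screened target damage info
        result[name] = {"position": position, "type": damage_type}
    return result

# Hypothetical example with (x, y, w, h) positions:
res = build_detection_result(
    [("door", (10, 20, 50, 40), "scratch")],
    [("fender", (60, 20, 30, 30), "tear")],
)
```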
On the other hand, the invention also provides a vehicle damage detection device, which comprises:
the acquisition unit is used for acquiring an image to be processed according to the vehicle loss detection request when the vehicle loss detection request is received;
the detection unit is used for detecting the image to be processed based on a pre-trained basic damage detection model to obtain first damage information;
the segmentation unit is used for carrying out segmentation processing on the image to be processed based on a pre-trained part segmentation model to obtain a plurality of initial part images and part position information of a part to be detected in each initial part image on the image to be processed;
the identifying unit is used for identifying whether cross-component damage exists in the image to be processed according to the plurality of initial component images, the component position information and the first damage information;
the screening unit is used for screening target component images from the plurality of initial component images based on the first damage information if cross-component damage exists in the image to be processed;
The input unit is used for inputting the target component image into a pre-trained cross-component damage detection model to obtain second damage information of the target component image;
And the generating unit is used for generating a vehicle damage detection result of the image to be processed according to the second damage information and the first damage information.
In another aspect, the present invention also proposes an electronic device, including:
A memory storing computer readable instructions; and
And the processor executes the computer readable instructions stored in the memory to realize the vehicle loss detection method.
In another aspect, the present invention also proposes a computer readable storage medium having stored therein computer readable instructions that are executed by a processor in an electronic device to implement the vehicle loss detection method.
According to the technical scheme, whether cross-component damage exists in the image to be processed can be accurately identified by combining the plurality of initial component images, the component position information, and the first damage information. Further, when cross-component damage exists in the image to be processed, the target damage position information and target damage type of the target component image can be accurately generated by combining the basic damage detection model and the cross-component damage detection model, thereby improving the accuracy of the vehicle damage detection result.
Drawings
FIG. 1 is a flow chart of a method for detecting vehicle loss according to a preferred embodiment of the present invention.
FIG. 2 is a functional block diagram of a vehicle loss detection device according to a preferred embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present invention for implementing the vehicle loss detection method.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flow chart of a method for detecting vehicle loss according to a preferred embodiment of the present invention. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs.
The vehicle damage detection method can acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The vehicle damage detection method is applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored computer readable instructions, and its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be any electronic product that can interact with a user, such as a personal computer, a tablet computer, a smartphone, a Personal Digital Assistant (PDA), a game console, an Internet Protocol Television (IPTV), a smart wearable device, and the like.
The electronic device may comprise a network device and/or a user device. The network device includes, but is not limited to, a single network electronic device, a group of electronic devices made up of multiple network electronic devices, or a cloud composed of a large number of hosts or network electronic devices based on cloud computing.
The network in which the electronic device is located includes, but is not limited to: the internet, wide area networks, metropolitan area networks, local area networks, virtual private networks (Virtual Private Network, VPN), etc.
And S10, when a vehicle loss detection request is received, acquiring an image to be processed according to the vehicle loss detection request.
In at least one embodiment of the present invention, the vehicle damage detection request may be a request triggered by a vehicle insurance agent, or a request triggered by a claimant when uploading the image to be processed in a vehicle insurance claims system.
The image to be processed refers to an image on which vehicle damage detection needs to be performed.
In at least one embodiment of the present invention, the electronic device obtaining the image to be processed according to the vehicle loss detection request includes:
analyzing a request message of the vehicle loss detection request to obtain data information carried by the request message;
extracting a generation address and an image identifier of the vehicle damage detection request from the data information;
and acquiring an image corresponding to the image identifier from the configuration library of the generated address as the image to be processed.
The generated address is address information corresponding to a system triggering the generation of the vehicle loss detection request.
By combining the generated address and the image identifier, the image to be processed can be accurately acquired.
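A minimal sketch of this parsing step, assuming a JSON request message; the field names (`source_system_address`, `image_id`) are hypothetical, since the patent does not specify the wire format.

```python
import json

def parse_damage_request(request_message: str) -> dict:
    """Parse the request message of a vehicle damage detection request and
    extract the generation address and image identifier. The JSON field
    names here are illustrative assumptions, not the patent's actual format."""
    data = json.loads(request_message)
    return {
        "generation_address": data["source_system_address"],
        "image_id": data["image_id"],
    }

msg = '{"source_system_address": "http://claims.example/cfg", "image_id": "IMG-001"}'
info = parse_damage_request(msg)
```

The image itself would then be looked up in the configuration library at `generation_address` under `image_id`, as the embodiment describes.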
And S11, detecting the image to be processed based on a pre-trained basic damage detection model to obtain first damage information.
In at least one embodiment of the present invention, the basic impairment detection model comprises a feature extraction network layer, a candidate box detection network layer, and an output network layer.
The first damage information comprises a damage component detection frame, damage position information, and a damage type. The damage component detection frame is a candidate frame containing damage information in the image to be processed. The damage position information comprises the coordinate values of the upper left corner of the damage component detection frame together with its length and width. The damage types include, but are not limited to: scratches, scrapes, dents, creases, severe folds, tears, missing parts, and the like.
In at least one embodiment of the present invention, the electronic device detecting the image to be processed based on a pre-trained basic damage detection model, and obtaining the first damage information includes:
Extracting image features of the image to be processed based on a convolution layer in the feature extraction network layer;
Detecting the image features based on the candidate frame detection network layer to obtain the damaged part detection frame;
Acquiring a position identification layer and a logistic regression layer of the output network layer;
Performing position recognition on the image candidate frame based on the position recognition layer to obtain the damage position information;
and carrying out regression processing on the image features based on the logistic regression layer to obtain the damage type.
Wherein the image features refer to feature information in the image to be processed.
By the method, the first damage information can be rapidly extracted from the image to be processed.
Specifically, the electronic device detecting the image feature based on the candidate frame detection network layer, and obtaining the damage component detection frame includes:
The electronic device identifies the bounding box of the image features in the image to be processed and transforms the bounding box until the transformed bounding box contains all damage features in the image to be processed; the transformed bounding box is then determined as the damage component detection frame.
According to the embodiment, the damage part detection frame can be ensured to contain all damage characteristics in the image to be processed, so that the situation that the vehicle damage detection result cannot be accurately determined can be avoided.
Specifically, the electronic device performs regression processing on the image feature based on the logistic regression layer, and obtaining the damage type includes:
Acquiring pixel information of the image features;
obtaining class pixel values of a plurality of preset classes from the logistic regression layer;
And carrying out matching processing on the pixel information and a plurality of category pixel values, and determining a preset category corresponding to the category pixel value successfully matched with the pixel information as the damage type.
Wherein the plurality of preset categories include, but are not limited to: scratches, scrapes, dents, creases, severe folds, tears, missing parts, and the like.
The pixel information is matched with a plurality of category pixel values, so that the matching can be performed on the basis of the basic information, interference information is avoided, and the accuracy of determining the damage type is improved.
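The matching step above can be sketched as follows. The patent only says the pixel information is "matched" against category pixel values; nearest-value matching and the scalar pixel summaries used here are assumptions for illustration.

```python
def match_damage_type(pixel_value, category_pixel_values):
    """Match a feature's pixel value against preset category pixel values
    and return the damage type whose category pixel value is closest.
    Nearest-value matching is an assumed criterion; the patent does not
    specify how the match is computed."""
    return min(category_pixel_values, key=lambda c: abs(c[1] - pixel_value))[0]

# Hypothetical category pixel values drawn from the logistic regression layer:
categories = [("scratch", 0.12), ("dent", 0.48), ("tear", 0.85)]
damage_type = match_damage_type(0.50, categories)
```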
S12, carrying out segmentation processing on the image to be processed based on a pre-trained part segmentation model to obtain a plurality of initial part images and part position information of a part to be detected in each initial part image on the image to be processed.
In at least one embodiment of the invention, the component segmentation model is trained from a preset network, where the preset network is typically a segmentation network from the DeepLab series.
Each initial component image carries a corresponding component to be detected. The component position information comprises coordinate information of the component to be detected on the image to be processed.
In at least one embodiment of the present invention, before performing the segmentation processing on the image to be processed based on the pre-trained component segmentation model, the method further includes:
Acquiring a plurality of vehicle training images, each vehicle training image including a plurality of vehicle components;
masking each vehicle training image based on any vehicle component to obtain a plurality of masking images of each vehicle training image;
Carrying out semantic segmentation processing on each vehicle training image based on a preset network to obtain a plurality of prediction segmentation images of each vehicle training image;
counting the number of first pixel points in each prediction segmentation image as a first number, and counting the number of second pixel points in each mask image as a second number;
calculating the absolute value of the difference between each first quantity and each second quantity as a prediction difference;
calculating the average value of the predicted difference values to obtain a segmentation damage value of the preset network;
And adjusting network parameters in the preset network based on the segmentation damage value to obtain the part segmentation model.
The first pixel points refer to all pixel points in the prediction segmentation image.
The second pixel point is a pixel point in the mask image, where the pixel value of the pixel point is not a preset value. The preset value may be 0 and the preset value may also be 1.
The segmentation damage value can be accurately determined through the number of the first pixel points in each prediction segmentation image and the number of the second pixel points in each mask image, and then the part segmentation model can be accurately adjusted through the segmentation damage value.
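The loss computation described above can be sketched as follows. Interpreting the "first pixel points" as the pixels belonging to the predicted segment is an assumption; the per-image counts, absolute differences, and batch mean follow the steps in the embodiment.

```python
import numpy as np

def segmentation_loss(pred_masks, gt_masks, preset_value=0):
    """Segmentation damage value per the embodiment: for each training image,
    count the predicted-segment pixels (first number) and the mask pixels whose
    value differs from the preset value (second number), take the absolute
    difference, and average the differences over the batch."""
    diffs = []
    for pred, gt in zip(pred_masks, gt_masks):
        first = int(np.count_nonzero(pred))                  # pixels in predicted segment
        second = int(np.count_nonzero(gt != preset_value))   # non-preset mask pixels
        diffs.append(abs(first - second))
    return float(np.mean(diffs))
```

Network parameters would then be adjusted to reduce this value, yielding the component segmentation model.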
S13, identifying whether cross-component damage exists in the image to be processed according to the initial component images, the component position information and the first damage information.
In at least one embodiment of the present invention, the presence of cross-component damage in the image to be processed means that there is damage of a plurality of different components in the same damaged component detection frame, and the plurality of different components are connected to each other.
In at least one embodiment of the present invention, the electronic device identifying whether there is cross-component damage in the image to be processed according to the plurality of initial component images, the component position information, and the first damage information includes:
Screening out an initial part image overlapped with the damaged part detection frame based on the damaged position information and the part position information to serve as a part image to be detected;
counting the number of images of the part images to be detected;
If the number of the images is greater than or equal to a preset number, detecting whether a plurality of to-be-detected part images are connected in the to-be-processed image or not based on the part position information;
And if the plurality of to-be-detected part images are connected in the to-be-processed image, determining that cross-part damage exists in the to-be-processed image.
Wherein the preset number is typically set to 2.
By combining the damage position information and the component position information, the component images to be detected can be accurately screened out; further, when there are multiple component images to be detected, whether cross-component damage exists in the image to be processed can be accurately determined by detecting whether those component images are connected in the image to be processed.
In other embodiments, if the number of images is less than the preset number, or the plurality of to-be-detected component images are not connected in the to-be-processed image, determining that there is no cross-component damage in the to-be-processed image.
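The screening, counting, and connectivity check of step S13 could be sketched as follows. Representing each part and the damaged part detection frame as an axis-aligned (x, y, width, height) box, and treating touching boxes as "connected", are simplifying assumptions for illustration:

```python
def boxes_overlap(a, b):
    """Strict overlap test on axis-aligned (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def boxes_touch_or_overlap(a, b):
    """Overlap test that also counts edge-adjacent boxes as connected."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax <= bx + bw and bx <= ax + aw and ay <= by + bh and by <= ay + ah

def has_cross_component_damage(damage_box, part_boxes, preset_number=2):
    # Screen out part images whose boxes overlap the damaged part detection frame.
    candidates = [box for box in part_boxes if boxes_overlap(damage_box, box)]
    if len(candidates) < preset_number:
        return False
    # Cross-component damage additionally requires the candidate parts to be connected.
    return any(
        boxes_touch_or_overlap(candidates[i], candidates[j])
        for i in range(len(candidates))
        for j in range(i + 1, len(candidates))
    )
```

With the preset number left at its typical value of 2, a single overlapping part, or multiple overlapping parts that are not connected to one another, yields no cross-component damage.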
In other embodiments, if there is no cross-component damage in the image to be processed, the first damage information is determined as a vehicle damage detection result of the image to be processed.
Because the basic damage detection model can accurately determine the damage condition of the non-cross-component damage type, the vehicle damage detection result determination efficiency can be improved through the implementation mode.
S14, if cross-component damage exists in the image to be processed, screening out target component images from the initial component images based on the first damage information.
In at least one embodiment of the present invention, the target part image refers to an initial part image whose damage area ratio or damage score is too small.
In at least one embodiment of the present invention, the plurality of initial component images include the plurality of component images to be detected, and the electronic device screening the target component image from the plurality of initial component images based on the first damage information includes:
Identifying an overlapping area of each part image to be detected and the damaged part detection frame based on the damaged position information and the part position information;
Extracting the damage type of each part image to be detected from the first damage information, and identifying the damage grade corresponding to the damage type of each part image to be detected;
Calculating the area ratio of the overlapping area on the damaged part detection frame;
and if the damage levels in the damage part detection frames are the same, determining the part image to be detected corresponding to the overlapping area with the smallest area ratio in each damage part detection frame as the target part image.
The damage level corresponds to the damage type of each to-be-detected part image. For example, if the damage type of a to-be-detected part image is scratch or scrape, the corresponding damage level is low; if the damage type is dent, the corresponding damage level is medium; if the damage type is crease, dead fold, tear, or missing, the corresponding damage level is high.
According to the above embodiment, when the plurality of damage levels in the damage part detection frame are the same, damage with a small area ratio is prone to false detection; therefore, the part image to be detected corresponding to the overlapping area with the smallest area ratio is selected as the target part image, so that vehicle damage detection can subsequently be performed on it, improving its detection accuracy.
In other embodiments, if the plurality of lesion levels are different in the lesion part detection frame, obtaining a level score for each lesion level;
calculating a damage score of the overlapping region on the damage component detection frame based on the rank score and the area ratio;
and determining the part image to be detected corresponding to the overlapping area with the minimum damage score in each damage part detection frame as the target part image.
According to the above embodiment, when the plurality of damage levels in the damage part detection frame differ, the target part image is selected by combining the damage level and the area ratio, which improves the rationality of the selection.
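A hedged sketch of the selection logic of step S14, covering both branches (same levels → smallest area ratio; different levels → smallest damage score). The (part_name, area_ratio, damage_level) tuples, the level scores, and the product-based damage score are illustrative assumptions, since the patent describes the score only qualitatively:

```python
LEVEL_SCORES = {"low": 1.0, "medium": 2.0, "high": 3.0}  # assumed grade scores

def select_target_part(candidates):
    """candidates: list of (part_name, area_ratio, damage_level) for one
    damaged part detection frame; returns the name of the target part image."""
    levels = {level for _, _, level in candidates}
    if len(levels) == 1:
        # Same damage level everywhere: pick the smallest area ratio.
        return min(candidates, key=lambda c: c[1])[0]
    # Different levels: combine grade score and area ratio into a damage
    # score (a simple product is assumed here) and pick the smallest.
    return min(candidates, key=lambda c: LEVEL_SCORES[c[2]] * c[1])[0]
```

In the mixed-level case, a low-level damage with a moderate area ratio can still yield the smallest damage score and thus be selected for re-detection.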
And S15, inputting the target part image into a pre-trained cross-part damage detection model to obtain second damage information of the target part image.
In at least one embodiment of the present invention, the model structure of the cross-component damage detection model is the same as that of the basic damage detection model, and is not described in detail here.
In at least one embodiment of the present invention, the second damage information includes predicted damage location information and predicted damage type of the cross-component damage detection model to the target component image.
S16, generating a vehicle damage detection result of the image to be processed according to the second damage information and the first damage information.
It should be emphasized that, to further ensure the privacy and security of the vehicle loss detection result, the result may also be stored in a node of a blockchain.
In at least one embodiment of the present invention, the generating, by the electronic device, the vehicle damage detection result of the image to be processed according to the second damage information and the first damage information includes:
determining the rest part images to be detected except the target part image as processed part images;
extracting first damage position information and first damage type of the processed part image from the first damage information, and extracting second damage position information and second damage type of the target part image from the first damage information;
Identifying a first predicted damage degree of the basic damage detection model to the target part image according to the second damage position information and the second damage type, and identifying a second predicted damage degree of the cross-part damage detection model to the target part image according to the second damage information;
Screening target damage position information and target damage types of the target part image from the second damage position information, the second damage type and the second damage information according to the first predicted damage degree and the second predicted damage degree;
And generating the vehicle damage detection result according to the part name of the part to be detected corresponding to the processed part image, the mapping relation between the first damage position information and the first damage type, and the part name of the part to be detected corresponding to the target part image, the mapping relation between the target damage position information and the target damage type.
The first predicted damage degree is determined according to the damage area of the target part image and the damage grade corresponding to the second damage type: the larger the damage area of the target part image, the larger the first predicted damage degree; likewise, the higher the damage grade corresponding to the second damage type, the larger the first predicted damage degree, and vice versa.
The manner of determining the second predicted damage degree is similar to that of the first predicted damage degree, and is not repeated here.
And by combining the basic damage detection model and the cross-component damage detection model, the target damage position information and the target damage type of the target component image are accurately generated, so that the accuracy of the vehicle damage detection result is improved.
Specifically, the electronic device screening the target damage position information and the target damage type of the target component image from the second damage position information, the second damage type and the second damage information according to the first predicted damage degree and the second predicted damage degree includes:
comparing the first predicted damage level with the second predicted damage level;
If the first predicted damage degree is smaller than the second predicted damage degree, determining the second damage position information as the target damage position information, and determining the second damage type as the target damage type; or alternatively
And if the first predicted damage degree is greater than or equal to the second predicted damage degree, the predicted damage position information in the second damage information is used as the target damage position information, and the predicted damage type in the second damage information is determined as the target damage type.
By determining the information corresponding to the lower predicted damage degree as the target damage position information and the target damage type, the false detection rate can be reduced.
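The comparison rule above amounts to keeping the prediction with the lower predicted damage degree. A minimal sketch, with the degree values assumed to be precomputed and each model's prediction reduced to a (position, type) pair:

```python
def screen_target_damage(first_degree, second_degree,
                         second_damage_info, cross_model_info):
    """second_damage_info: (position, type) taken from the first damage
    information (basic model); cross_model_info: (position, type) predicted
    by the cross-part damage detection model."""
    if first_degree < second_degree:
        # Basic model predicts less damage: keep its position and type.
        return second_damage_info
    # Otherwise keep the cross-part model's predicted position and type.
    return cross_model_info
```

Either pair then supplies the target damage position information and target damage type used to assemble the vehicle damage detection result.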
According to the technical scheme, whether cross-component damage exists in the image to be processed or not can be accurately identified by combining the plurality of initial component images, the component position information and the first damage information, and further when cross-component damage exists in the image to be processed, the target damage position information and the target damage type of the target component image can be accurately generated by combining the basic damage detection model and the cross-component damage detection model, so that the accuracy of the vehicle damage detection result is improved.
FIG. 2 is a functional block diagram of a vehicle loss detection device according to a preferred embodiment of the present invention. The vehicle loss detection device 11 includes an acquisition unit 110, a detection unit 111, a division unit 112, an identification unit 113, a screening unit 114, an input unit 115, a generation unit 116, a masking unit 117, a calculation unit 118, an adjustment unit 119, and a determination unit 120. A module/unit referred to herein is a series of computer readable instructions, stored in the memory 12, that can be retrieved by the processor 13 to perform a fixed function. In the present embodiment, the functions of the respective modules/units will be described in detail in the following embodiments.
When a loss detection request is received, the acquisition unit 110 acquires an image to be processed according to the loss detection request.
In at least one embodiment of the present invention, the vehicle loss detection request may be a request triggered by a vehicle insurance agent, or a request triggered by a claimant when uploading the image to be processed in a vehicle insurance claims system.
The image to be processed refers to an image on which vehicle loss detection needs to be performed.
In at least one embodiment of the present invention, the acquiring unit 110 acquires the image to be processed according to the vehicle loss detection request includes:
analyzing a request message of the vehicle loss detection request to obtain data information carried by the request message;
extracting a generation address and an image identifier of the vehicle damage detection request from the data information;
and acquiring an image corresponding to the image identifier from the configuration library of the generated address as the image to be processed.
The generation address is the address information corresponding to the system that triggered generation of the vehicle loss detection request.
By combining the generated address and the image identifier, the image to be processed can be accurately acquired.
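A hypothetical sketch of the acquisition steps above (parse the request message, extract the generation address and image identifier, fetch the image from that address's configuration library). The request layout as a dict with a "data" field, and the configuration-library structure, are illustrative assumptions:

```python
def acquire_image_to_process(request_message, config_libraries):
    data = request_message["data"]              # data information carried by the message
    address = data["generation_address"]        # system that issued the request
    image_id = data["image_id"]                 # identifier of the requested image
    # Fetch the image matching the identifier from that address's library.
    return config_libraries[address][image_id]
```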
The detection unit 111 detects the image to be processed based on a pre-trained basic damage detection model, and obtains first damage information.
In at least one embodiment of the present invention, the basic impairment detection model comprises a feature extraction network layer, a candidate box detection network layer, and an output network layer.
The first damage information comprises a damage part detection frame, damage position information and a damage type. The damage part detection frame is a candidate frame with damage information in the image to be processed. The damage position information comprises the coordinate values of the upper left corner of the damage part detection frame and the length and width of the damage part detection frame. Damage types include, but are not limited to: scratch, scrape, dent, crease, dead fold, tear, missing, etc.
In at least one embodiment of the present invention, the detecting unit 111 detects the image to be processed based on a pre-trained basic damage detection model, and the obtaining the first damage information includes:
Extracting image features of the image to be processed based on a convolution layer in the feature extraction network layer;
Detecting the image features based on the candidate frame detection network layer to obtain the damaged part detection frame;
Acquiring a position identification layer and a logistic regression layer of the output network layer;
Performing position recognition on the damaged part detection frame based on the position recognition layer to obtain the damage position information;
and carrying out regression processing on the image features based on the logistic regression layer to obtain the damage type.
Wherein the image features refer to feature information in the image to be processed.
By the method, the first damage information can be rapidly extracted from the image to be processed.
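The four-stage pipeline above (feature extraction, candidate-frame detection, position recognition, logistic-regression classification) can be shown structurally. The layers here are placeholder callables rather than real network layers, so this is a compositional sketch, not a working detector:

```python
def detect_damage(image, feature_layer, candidate_layer,
                  position_layer, regression_layer):
    features = feature_layer(image)              # convolutional feature extraction
    detection_frame = candidate_layer(features)  # damaged part detection frame
    position = position_layer(detection_frame)   # damage position information
    damage_type = regression_layer(features)     # damage type via regression
    return {"frame": detection_frame, "position": position, "type": damage_type}
```

In a real system each callable would be a trained network layer; here any functions with matching signatures can be plugged in.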
Specifically, the detecting unit 111 detects the image feature based on the candidate frame detection network layer, and the obtaining the damaged component detection frame includes:
The electronic equipment identifies the bounding box of the image features in the image to be processed, performs transformation processing on the bounding box until the transformed bounding box contains all damage features in the image to be processed, and determines the transformed bounding box as the damage component detection box.
According to the embodiment, the damage part detection frame can be ensured to contain all damage characteristics in the image to be processed, so that the situation that the vehicle damage detection result cannot be accurately determined can be avoided.
Specifically, the detecting unit 111 performs regression processing on the image feature based on the logistic regression layer, and the obtaining the damage type includes:
Acquiring pixel information of the image features;
obtaining class pixel values of a plurality of preset classes from the logistic regression layer;
And carrying out matching processing on the pixel information and a plurality of category pixel values, and determining a preset category corresponding to the category pixel value successfully matched with the pixel information as the damage type.
Wherein the plurality of preset categories include, but are not limited to: scratch, scrape, dent, crease, dead fold, tear, missing, etc.
By matching the pixel information against the plurality of category pixel values, matching is performed on basic pixel information, which avoids interference information and improves the accuracy of determining the damage type.
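A hedged sketch of that matching step: the pixel information of the image features is compared against per-category pixel values obtained from the logistic regression layer. The category-to-pixel-value table and the exact-match rule are assumptions for demonstration, as the patent does not specify the matching criterion:

```python
CATEGORY_PIXEL_VALUES = {  # assumed mapping from preset category to pixel value
    "scratch": 10, "scrape": 20, "dent": 30,
    "crease": 40, "dead fold": 50, "tear": 60, "missing": 70,
}

def match_damage_type(pixel_value, category_pixel_values=CATEGORY_PIXEL_VALUES):
    """Return the preset category whose pixel value matches, or None."""
    for category, value in category_pixel_values.items():
        if value == pixel_value:
            return category
    return None
```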
The segmentation unit 112 performs segmentation processing on the image to be processed based on a pre-trained component segmentation model, so as to obtain a plurality of initial component images and component position information of a component to be detected in each initial component image on the image to be processed.
In at least one embodiment of the invention, the component segmentation model is obtained by training a preset network, where the preset network generally refers to a segmentation network of the DeepLab family.
Each initial component image carries a corresponding component to be detected. The component position information comprises coordinate information of the component to be detected on the image to be processed.
In at least one embodiment of the present invention, the acquiring unit 110 acquires a plurality of vehicle training images, each including a plurality of vehicle components, before performing the segmentation process on the image to be processed based on a pre-trained component segmentation model;
Masking unit 117 performs masking processing on each vehicle training image based on any vehicle component, resulting in a plurality of mask images for each vehicle training image;
The segmentation unit 112 performs semantic segmentation processing on each vehicle training image based on a preset network to obtain a plurality of prediction segmentation images of each vehicle training image;
The calculation unit 118 counts the number of first pixels in each of the prediction divided images as a first number, and counts the number of second pixels in each of the mask images as a second number;
The calculation unit 118 calculates the absolute value of the difference between each first number and each second number as a predicted difference;
The calculating unit 118 calculates an average value of the prediction difference values to obtain a segmentation damage value of the preset network;
The adjustment unit 119 adjusts network parameters in the preset network based on the segmentation damage value, so as to obtain the component segmentation model.
The first pixel points refer to all pixel points in the prediction segmentation image.
The second pixel point is a pixel point in the mask image whose pixel value is not a preset value; the preset value may be 0 or 1.
The segmentation damage value can be accurately determined through the number of the first pixel points in each prediction segmentation image and the number of the second pixel points in each mask image, and then the part segmentation model can be accurately adjusted through the segmentation damage value.
The identifying unit 113 identifies whether there is a cross-component damage in the image to be processed according to the plurality of initial component images, the component position information, and the first damage information.
In at least one embodiment of the present invention, the presence of cross-component damage in the image to be processed means that a single damaged component detection frame contains damage on a plurality of different components, and the plurality of different components are connected to each other.
In at least one embodiment of the present invention, the identifying unit 113 identifying whether there is cross-component damage in the image to be processed according to the plurality of initial component images, the component position information, and the first damage information includes:
Screening out an initial part image overlapped with the damaged part detection frame based on the damaged position information and the part position information to serve as a part image to be detected;
counting the number of images of the part images to be detected;
If the number of the images is greater than or equal to a preset number, detecting whether a plurality of to-be-detected part images are connected in the to-be-processed image or not based on the part position information;
And if the plurality of to-be-detected part images are connected in the to-be-processed image, determining that cross-part damage exists in the to-be-processed image.
Wherein the preset number is typically set to 2.
By combining the damage position information and the component position information, the component images to be detected can be accurately screened out; further, when there are multiple component images to be detected, whether cross-component damage exists in the image to be processed can be accurately determined by detecting whether those component images are connected in the image to be processed.
In other embodiments, if the number of images is less than the preset number, or the plurality of to-be-detected component images are not connected in the to-be-processed image, determining that there is no cross-component damage in the to-be-processed image.
In other embodiments, if there is no cross-component damage in the image to be processed, the first damage information is determined as a vehicle damage detection result of the image to be processed.
Because the basic damage detection model can accurately determine the damage condition of the non-cross-component damage type, the vehicle damage detection result determination efficiency can be improved through the implementation mode.
If there is a cross-component damage in the image to be processed, the screening unit 114 screens out a target component image from the plurality of initial component images based on the first damage information.
In at least one embodiment of the present invention, the target part image refers to an initial part image whose damage area ratio or damage score is too small.
In at least one embodiment of the present invention, the plurality of initial part images include the plurality of part images to be detected, and the screening unit 114 screens a target part image from the plurality of initial part images based on the first damage information includes:
Identifying an overlapping area of each part image to be detected and the damaged part detection frame based on the damaged position information and the part position information;
Extracting the damage type of each part image to be detected from the first damage information, and identifying the damage grade corresponding to the damage type of each part image to be detected;
Calculating the area ratio of the overlapping area on the damaged part detection frame;
and if the damage levels in the damage part detection frames are the same, determining the part image to be detected corresponding to the overlapping area with the smallest area ratio in each damage part detection frame as the target part image.
The damage level corresponds to the damage type of each to-be-detected part image. For example, if the damage type of a to-be-detected part image is scratch or scrape, the corresponding damage level is low; if the damage type is dent, the corresponding damage level is medium; if the damage type is crease, dead fold, tear, or missing, the corresponding damage level is high.
According to the above embodiment, when the plurality of damage levels in the damage part detection frame are the same, damage with a small area ratio is prone to false detection; therefore, the part image to be detected corresponding to the overlapping area with the smallest area ratio is selected as the target part image, so that vehicle damage detection can subsequently be performed on it, improving its detection accuracy.
In other embodiments, if the plurality of damage levels are different in the damage component detection frame, the obtaining unit 110 obtains a level score of each damage level;
The calculation unit 118 calculates a damage score of the overlapping region on the damage component detection frame based on the rank score and the area ratio;
The determining unit 120 determines, as the target component image, a component image to be detected corresponding to an overlapping region where the damage score is minimum in each damaged component detection frame.
According to the above embodiment, when the plurality of damage levels in the damage part detection frame differ, the target part image is selected by combining the damage level and the area ratio, which improves the rationality of the selection.
The input unit 115 inputs the target component image into a pre-trained cross-component damage detection model, and obtains second damage information of the target component image.
In at least one embodiment of the present invention, the model structure of the cross-component damage detection model is the same as that of the basic damage detection model, and is not described in detail here.
In at least one embodiment of the present invention, the second damage information includes predicted damage location information and predicted damage type of the cross-component damage detection model to the target component image.
The generating unit 116 generates a vehicle damage detection result of the image to be processed according to the second damage information and the first damage information.
It should be emphasized that, to further ensure the privacy and security of the vehicle loss detection result, the result may also be stored in a node of a blockchain.
In at least one embodiment of the present invention, the generating unit 116 generates the vehicle damage detection result of the image to be processed according to the second damage information and the first damage information, including:
determining the rest part images to be detected except the target part image as processed part images;
extracting first damage position information and first damage type of the processed part image from the first damage information, and extracting second damage position information and second damage type of the target part image from the first damage information;
Identifying a first predicted damage degree of the basic damage detection model to the target part image according to the second damage position information and the second damage type, and identifying a second predicted damage degree of the cross-part damage detection model to the target part image according to the second damage information;
Screening target damage position information and target damage types of the target part image from the second damage position information, the second damage type and the second damage information according to the first predicted damage degree and the second predicted damage degree;
And generating the vehicle damage detection result according to the part name of the part to be detected corresponding to the processed part image, the mapping relation between the first damage position information and the first damage type, and the part name of the part to be detected corresponding to the target part image, the mapping relation between the target damage position information and the target damage type.
The first predicted damage degree is determined according to the damage area of the target part image and the damage grade corresponding to the second damage type: the larger the damage area of the target part image, the larger the first predicted damage degree; likewise, the higher the damage grade corresponding to the second damage type, the larger the first predicted damage degree, and vice versa.
The manner of determining the second predicted damage degree is similar to that of the first predicted damage degree, and is not repeated here.
And by combining the basic damage detection model and the cross-component damage detection model, the target damage position information and the target damage type of the target component image are accurately generated, so that the accuracy of the vehicle damage detection result is improved.
Specifically, the generating unit 116 screening the target damage position information and the target damage type of the target component image from the second damage position information, the second damage type and the second damage information according to the first predicted damage degree and the second predicted damage degree includes:
comparing the first predicted damage level with the second predicted damage level;
If the first predicted damage degree is smaller than the second predicted damage degree, determining the second damage position information as the target damage position information, and determining the second damage type as the target damage type; or alternatively
And if the first predicted damage degree is greater than or equal to the second predicted damage degree, the predicted damage position information in the second damage information is used as the target damage position information, and the predicted damage type in the second damage information is determined as the target damage type.
By determining the information corresponding to the lower predicted damage degree as the target damage position information and the target damage type, the false detection rate can be reduced.
According to the technical scheme, whether cross-component damage exists in the image to be processed or not can be accurately identified by combining the plurality of initial component images, the component position information and the first damage information, and further when cross-component damage exists in the image to be processed, the target damage position information and the target damage type of the target component image can be accurately generated by combining the basic damage detection model and the cross-component damage detection model, so that the accuracy of the vehicle damage detection result is improved.
Fig. 3 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present invention for implementing the vehicle loss detection method.
In one embodiment of the invention, the electronic device 1 includes, but is not limited to, a memory 12, a processor 13, and computer readable instructions, such as a vehicle loss detection program, stored in the memory 12 and executable on the processor 13.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the electronic device 1 and does not constitute a limitation of the electronic device 1; it may include more or fewer components than illustrated, combine certain components, or use different components. For example, the electronic device 1 may further include input-output devices, network access devices, buses, etc.
The processor 13 may be a central processing unit (Central Processing Unit, CPU), or another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor, or any conventional processor, etc. The processor 13 is the operation core and control center of the electronic device 1; it connects the various parts of the entire electronic device 1 using various interfaces and lines, and executes the operating system of the electronic device 1 and the various installed applications, program codes, etc.
Illustratively, the computer readable instructions may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 13 to complete the present invention. The one or more modules/units may be a series of computer readable instructions capable of performing a specific function, the computer readable instructions describing a process of executing the computer readable instructions in the electronic device 1. For example, the computer-readable instructions may be divided into an acquisition unit 110, a detection unit 111, a division unit 112, an identification unit 113, a screening unit 114, an input unit 115, a generation unit 116, a masking unit 117, a calculation unit 118, an adjustment unit 119, and a determination unit 120.
The memory 12 may be used to store the computer-readable instructions and/or modules, and the processor 13 implements the various functions of the electronic device 1 by running or executing the computer-readable instructions and/or modules stored in the memory 12 and invoking the data stored in the memory 12. The memory 12 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required for at least one function (such as a sound playing function or an image playing function); the data storage area may store data created through use of the electronic device. The memory 12 may include non-volatile and volatile memory, for example: a hard disk, main memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another storage device.
The memory 12 may be an external memory and/or an internal memory of the electronic device 1. Further, the memory 12 may be a physical memory, such as a memory module or a TF (TransFlash) card.
If the integrated modules/units of the electronic device 1 are implemented in the form of software functional units and sold or used as separate products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may also be implemented by instructing the associated hardware, via computer-readable instructions, to carry out all or part of the processes in the methods of the embodiments described above. The computer-readable instructions may be stored in a computer-readable storage medium and, when executed by a processor, implement the steps of the respective method embodiments described above.
The computer-readable instructions comprise computer-readable instruction code, which may be in source-code form, object-code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), or a random access memory (RAM).
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association using cryptographic methods, each block containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of its information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
In connection with fig. 1, the memory 12 in the electronic device 1 stores computer-readable instructions implementing a vehicle loss detection method, which the processor 13 executes to implement:
when a vehicle loss detection request is received, acquiring an image to be processed according to the vehicle loss detection request;
detecting the image to be processed based on a pre-trained basic damage detection model to obtain first damage information;
segmenting the image to be processed based on a pre-trained part segmentation model to obtain a plurality of initial part images and the part position information, on the image to be processed, of the part to be detected in each initial part image;
identifying whether cross-part damage exists in the image to be processed according to the plurality of initial part images, the part position information, and the first damage information;
if cross-part damage exists in the image to be processed, screening out a target part image from the plurality of initial part images based on the first damage information;
inputting the target part image into a pre-trained cross-part damage detection model to obtain second damage information of the target part image;
and generating a vehicle damage detection result of the image to be processed according to the second damage information and the first damage information.
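The steps above can be sketched as a short pipeline. This is a minimal illustration using plain callables as stand-ins for the three pre-trained models; all function names, dictionary keys, and return shapes are assumptions made for the sketch, not an API defined by the patent.

```python
# Hypothetical sketch of the processor-implemented steps. The three model
# arguments are stand-in callables; "overlaps_damage_frame" is an assumed
# field marking a part image that overlaps a damaged-part detection frame.

def detect_vehicle_loss(image, base_model, part_model, cross_model,
                        preset_number=2):
    first_damage = base_model(image)            # first damage information
    parts = part_model(image)                   # initial part images + positions
    # part images overlapping the damage frame are the "part images to be detected"
    candidates = [p for p in parts if p["overlaps_damage_frame"]]
    if len(candidates) >= preset_number:        # cross-part damage exists
        second_damage = [cross_model(p) for p in candidates]
    else:
        second_damage = []
    # the final detection result merges both kinds of damage information
    return {"first": first_damage, "second": second_damage}
```

With stub models, the function returns the merged first and second damage information; with fewer than the preset number of overlapping parts, the second list stays empty.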
The specific manner in which the processor 13 implements the computer-readable instructions may be found in the description of the relevant steps in the embodiment corresponding to fig. 1, and is not repeated here.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into modules is only a logical functional division, and other divisions are possible in actual implementation.
The computer-readable storage medium has computer-readable instructions stored thereon, wherein the computer-readable instructions, when executed by the processor 13, implement the following steps:
when a vehicle loss detection request is received, acquiring an image to be processed according to the vehicle loss detection request;
detecting the image to be processed based on a pre-trained basic damage detection model to obtain first damage information;
segmenting the image to be processed based on a pre-trained part segmentation model to obtain a plurality of initial part images and the part position information, on the image to be processed, of the part to be detected in each initial part image;
identifying whether cross-part damage exists in the image to be processed according to the plurality of initial part images, the part position information, and the first damage information;
if cross-part damage exists in the image to be processed, screening out a target part image from the plurality of initial part images based on the first damage information;
inputting the target part image into a pre-trained cross-part damage detection model to obtain second damage information of the target part image;
and generating a vehicle damage detection result of the image to be processed according to the second damage information and the first damage information.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. Multiple units or devices may also be implemented by a single unit or device through software or hardware. The terms "first", "second", and so on are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications and equivalent substitutions may be made to the technical solution of the present invention without departing from its spirit and scope.
Claims (7)
1. A vehicle loss detection method, characterized by comprising the following steps:
when a vehicle loss detection request is received, acquiring an image to be processed according to the vehicle loss detection request;
detecting the image to be processed based on a pre-trained basic damage detection model to obtain first damage information;
segmenting the image to be processed based on a pre-trained part segmentation model to obtain a plurality of initial part images and the part position information, on the image to be processed, of the part to be detected in each initial part image, wherein the initial part images comprise a plurality of part images to be detected;
identifying whether cross-part damage exists in the image to be processed according to the plurality of initial part images, the part position information, and the first damage information, comprising: screening out, based on the damage position information in the first damage information and the part position information, the initial part images overlapping a damaged part detection frame in the first damage information as the part images to be detected; counting the number of the part images to be detected; if the number of images is greater than or equal to a preset number, detecting, based on the part position information, whether the plurality of part images to be detected are connected in the image to be processed; and if the plurality of part images to be detected are connected in the image to be processed, determining that cross-part damage exists in the image to be processed; wherein the number of images being greater than or equal to the preset number indicates that there are multiple part images to be detected, and cross-part damage in the image to be processed means that damage to a plurality of different, mutually connected parts exists within the same damaged part detection frame;
if cross-part damage exists in the image to be processed, screening out a target part image from the plurality of initial part images based on the first damage information, comprising: identifying, based on the damage position information and the part position information, the overlapping region of each part image to be detected with the damaged part detection frame; extracting the damage type of each part image to be detected from the first damage information, and identifying the damage level corresponding to each damage type; calculating the area ratio of each overlapping region within the damaged part detection frame; if the damage levels within a damaged part detection frame are all the same, determining the part image to be detected corresponding to the overlapping region with the smallest area ratio in that detection frame as the target part image; and if the damage levels within a damaged part detection frame differ, obtaining the level score of each damage level, calculating a damage score for each overlapping region within that detection frame based on the level score and the area ratio, and determining the part image to be detected corresponding to the overlapping region with the smallest damage score in that detection frame as the target part image;
inputting the target part image into a pre-trained cross-part damage detection model to obtain second damage information of the target part image;
and generating a vehicle damage detection result of the image to be processed according to the second damage information and the first damage information.
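The cross-part damage test and the target-part screening spelled out in claim 1 can be sketched as follows. This is an illustrative reading, not the patented implementation: boxes are assumed to be `(x1, y1, x2, y2)` tuples, part connectivity is approximated by box overlap, and the level scores and preset number are placeholder values.

```python
# Illustrative sketch of claim 1's cross-part identification and screening.
# All geometry conventions and scoring constants here are assumptions.

def overlap_area(a, b):
    """Intersection area of two axis-aligned boxes, 0 if disjoint."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def box_area(b):
    return (b[2] - b[0]) * (b[3] - b[1])

def has_cross_part_damage(damage_box, part_boxes, preset_number=2):
    """Cross-part damage: at least `preset_number` mutually connected parts
    overlap the same damaged-part detection frame."""
    hit = [b for b in part_boxes if overlap_area(damage_box, b) > 0]
    if len(hit) < preset_number:
        return False
    # connectivity check: every hit part touches at least one other hit part
    return all(any(overlap_area(a, b) > 0 for b in hit if b is not a)
               for a in hit)

def pick_target_part(damage_box, parts, level_scores):
    """parts: list of (part_box, damage_level). When all levels tie, pick
    the part with the smallest overlap area ratio; otherwise pick the
    smallest damage score = level_score * area_ratio."""
    ratios = [overlap_area(damage_box, b) / box_area(damage_box)
              for b, _ in parts]
    levels = [lvl for _, lvl in parts]
    if len(set(levels)) == 1:        # all damage levels the same
        key = ratios
    else:                            # weight the ratio by the level score
        key = [level_scores[lvl] * r for lvl, r in zip(levels, ratios)]
    return min(range(len(parts)), key=key.__getitem__)
```

For a damage frame covering two adjoining parts with equal damage levels, the part with the smaller overlap ratio is returned; with differing levels, the level-weighted score decides instead.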
2. The vehicle damage detection method according to claim 1, wherein the first damage information includes a damage type, the basic damage detection model includes a feature extraction network layer, a candidate frame detection network layer, and an output network layer, and the detecting the image to be processed based on the pre-trained basic damage detection model comprises:
extracting image features of the image to be processed based on a convolution layer in the feature extraction network layer;
detecting the image features based on the candidate frame detection network layer to obtain the damaged part detection frame;
acquiring a position recognition layer and a logistic regression layer of the output network layer;
performing position recognition on the candidate frame based on the position recognition layer to obtain the damage position information;
and performing regression processing on the image features based on the logistic regression layer to obtain the damage type.
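A toy illustration of the two output heads described in claim 2, assuming the convolutional feature-extraction layer has already produced a flat feature vector. The weights, biases, and class names are placeholders, and the softmax-style logistic regression is one plausible reading of the claim, not the patent's trained model.

```python
import math

# Hypothetical output heads: a linear position-recognition head and a
# logistic-regression head over damage types. Weights are illustrative.

def position_head(features, w, b):
    """Position recognition: linear map from features to box values."""
    return [sum(wi * f for wi, f in zip(row, features)) + bi
            for row, bi in zip(w, b)]

def damage_type_head(features, w, b, classes):
    """Logistic-regression head: softmax over damage-type logits."""
    logits = [sum(wi * f for wi, f in zip(row, features)) + bi
              for row, bi in zip(w, b)]
    m = max(logits)                      # stabilize the exponentials
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return classes[probs.index(max(probs))], probs
```

The position head yields the damage position values; the regression head yields the damage type with its probability distribution.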
3. The vehicle loss detection method according to claim 1, wherein, before segmenting the image to be processed based on the pre-trained part segmentation model, the method further comprises:
acquiring a plurality of vehicle training images, each vehicle training image including a plurality of vehicle parts;
masking each vehicle training image based on any one of the vehicle parts to obtain a plurality of mask images of each vehicle training image;
performing semantic segmentation on each vehicle training image based on a preset network to obtain a plurality of predicted segmentation images of each vehicle training image;
counting the number of first pixel points in each predicted segmentation image as a first number, and counting the number of second pixel points in each mask image as a second number;
calculating the absolute value of the difference between each first number and the corresponding second number as a prediction difference;
calculating the average of the prediction differences to obtain a segmentation loss value of the preset network;
and adjusting the network parameters of the preset network based on the segmentation loss value to obtain the part segmentation model.
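The loss computation in claim 3 reduces to a pixel-count comparison. Below is a minimal sketch under the assumption that the predicted segmentation images and mask images are binary 2-D maps; the "segmentation damage value" is then the mean absolute pixel-count gap used to adjust the network parameters.

```python
# Sketch of claim 3's segmentation loss: compare foreground pixel counts
# of each predicted segmentation image against its mask image, then average.

def pixel_count(binary_image):
    """Number of foreground (non-zero) pixels in a 2-D binary map."""
    return sum(px != 0 for row in binary_image for px in row)

def segmentation_loss(predicted_maps, mask_maps):
    """Mean absolute difference between predicted and masked pixel counts
    (the segmentation damage value used for parameter adjustment)."""
    diffs = [abs(pixel_count(p) - pixel_count(m))
             for p, m in zip(predicted_maps, mask_maps)]
    return sum(diffs) / len(diffs)
```

A loss of zero means every predicted segmentation covers exactly as many pixels as its mask; larger values drive the parameter adjustment.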
4. The vehicle damage detection method according to claim 1, wherein generating the vehicle damage detection result of the image to be processed according to the second damage information and the first damage information comprises:
determining the part images to be detected other than the target part image as processed part images;
extracting first damage position information and a first damage type of the processed part images from the first damage information, and extracting second damage position information and a second damage type of the target part image from the first damage information;
identifying a first predicted damage degree of the basic damage detection model for the target part image according to the second damage position information and the second damage type, and identifying a second predicted damage degree of the cross-part damage detection model for the target part image according to the second damage information;
screening out target damage position information and a target damage type of the target part image from the second damage position information, the second damage type, and the second damage information according to the first predicted damage degree and the second predicted damage degree;
and generating the vehicle damage detection result according to the part name of the part to be detected corresponding to each processed part image together with the mapping relation between its first damage position information and first damage type, and the part name of the part to be detected corresponding to the target part image together with the mapping relation between the target damage position information and the target damage type.
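Assembling the result in claim 4 is essentially a merge of two part-name-to-damage mappings: processed parts keep the base model's damage information, while the target part carries the information screened from the base and cross-part predictions. The sketch below uses assumed field names for illustration only.

```python
# Hypothetical sketch of claim 4's result assembly. Inputs map each part
# name to a (damage_position, damage_type) tuple; the "source" field is an
# illustrative annotation, not something the patent specifies.

def build_result(processed_parts, target_parts):
    result = {}
    for name, (pos, dmg_type) in processed_parts.items():
        result[name] = {"position": pos, "type": dmg_type,
                        "source": "base model"}
    for name, (pos, dmg_type) in target_parts.items():
        result[name] = {"position": pos, "type": dmg_type,
                        "source": "cross-part model"}
    return result
```

The final dictionary pairs every part name with its damage position and type, which is the mapping relation the claim describes.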
5. A vehicle damage detection device, characterized in that the vehicle damage detection device comprises:
an acquisition unit, configured to acquire, when a vehicle loss detection request is received, an image to be processed according to the vehicle loss detection request;
a detection unit, configured to detect the image to be processed based on a pre-trained basic damage detection model to obtain first damage information;
a segmentation unit, configured to segment the image to be processed based on a pre-trained part segmentation model to obtain a plurality of initial part images and the part position information, on the image to be processed, of the part to be detected in each initial part image, wherein the initial part images comprise a plurality of part images to be detected;
an identification unit, configured to identify whether cross-part damage exists in the image to be processed according to the plurality of initial part images, the part position information, and the first damage information, comprising: screening out, based on the damage position information in the first damage information and the part position information, the initial part images overlapping a damaged part detection frame in the first damage information as the part images to be detected; counting the number of the part images to be detected; if the number of images is greater than or equal to a preset number, detecting, based on the part position information, whether the plurality of part images to be detected are connected in the image to be processed; and if the plurality of part images to be detected are connected in the image to be processed, determining that cross-part damage exists in the image to be processed; wherein the number of images being greater than or equal to the preset number indicates that there are multiple part images to be detected, and cross-part damage in the image to be processed means that damage to a plurality of different, mutually connected parts exists within the same damaged part detection frame;
a screening unit, configured to screen out, if cross-part damage exists in the image to be processed, a target part image from the plurality of initial part images based on the first damage information, comprising: identifying, based on the damage position information and the part position information, the overlapping region of each part image to be detected with the damaged part detection frame; extracting the damage type of each part image to be detected from the first damage information, and identifying the damage level corresponding to each damage type; calculating the area ratio of each overlapping region within the damaged part detection frame; if the damage levels within a damaged part detection frame are all the same, determining the part image to be detected corresponding to the overlapping region with the smallest area ratio in that detection frame as the target part image; and if the damage levels within a damaged part detection frame differ, obtaining the level score of each damage level, calculating a damage score for each overlapping region within that detection frame based on the level score and the area ratio, and determining the part image to be detected corresponding to the overlapping region with the smallest damage score in that detection frame as the target part image;
an input unit, configured to input the target part image into a pre-trained cross-part damage detection model to obtain second damage information of the target part image;
and a generation unit, configured to generate a vehicle damage detection result of the image to be processed according to the second damage information and the first damage information.
6. An electronic device, characterized in that the electronic device comprises:
a memory storing computer-readable instructions; and
a processor executing the computer-readable instructions stored in the memory to implement the vehicle loss detection method of any one of claims 1 to 4.
7. A computer-readable storage medium, characterized in that: the computer-readable storage medium stores computer-readable instructions that are executed by a processor in an electronic device to implement the vehicle loss detection method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210602497.9A CN114842205B (en) | 2022-05-30 | 2022-05-30 | Vehicle loss detection method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114842205A CN114842205A (en) | 2022-08-02 |
CN114842205B true CN114842205B (en) | 2024-05-07 |
Family
ID=82571448
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210602497.9A Active CN114842205B (en) | 2022-05-30 | 2022-05-30 | Vehicle loss detection method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114842205B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018205467A1 (en) * | 2017-05-10 | 2018-11-15 | 平安科技(深圳)有限公司 | Automobile damage part recognition method, system and electronic device and storage medium |
CN112907576A (en) * | 2021-03-25 | 2021-06-04 | 平安科技(深圳)有限公司 | Vehicle damage grade detection method and device, computer equipment and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9152744B2 (en) * | 2012-03-29 | 2015-10-06 | Airbus Operations (S.A.S.) | Methods, systems, and computer readable media for generating a non-destructive inspection model for a composite part from a design model of the composite part |
US12106213B2 (en) * | 2016-02-01 | 2024-10-01 | Mitchell International, Inc. | Systems and methods for automatically determining adjacent panel dependencies during damage appraisal |
CN108062712B (en) * | 2017-11-21 | 2020-11-06 | 创新先进技术有限公司 | Processing method, device and processing equipment for vehicle insurance loss assessment data |
US11454595B2 (en) * | 2019-12-06 | 2022-09-27 | Saudi Arabian Oil Company | Systems and methods for evaluating a structural health of composite components by correlating positions of displaced nanoparticles |
- 2022-05-30: CN application CN202210602497.9A filed; published as CN114842205B, status active
Also Published As
Publication number | Publication date |
---|---|
CN114842205A (en) | 2022-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112052787A (en) | Target detection method and device based on artificial intelligence and electronic equipment | |
CN110705583A (en) | Cell detection model training method and device, computer equipment and storage medium | |
CN113689436B (en) | Image semantic segmentation method, device, equipment and storage medium | |
CN110969200B (en) | Image target detection model training method and device based on consistency negative sample | |
CN113449725B (en) | Object classification method, device, equipment and storage medium | |
CN111783812B (en) | Forbidden image recognition method, forbidden image recognition device and computer readable storage medium | |
CN112232203B (en) | Pedestrian recognition method and device, electronic equipment and storage medium | |
CN110582783A (en) | Training device, image recognition device, training method, and program | |
TWI803243B (en) | Method for expanding images, computer device and storage medium | |
CN114972771B (en) | Method and device for vehicle damage assessment and claim, electronic equipment and storage medium | |
CN114996109B (en) | User behavior recognition method, device, equipment and storage medium | |
CN113705468B (en) | Digital image recognition method based on artificial intelligence and related equipment | |
JP2019220014A (en) | Image analyzing apparatus, image analyzing method and program | |
CN113627576B (en) | Code scanning information detection method, device, equipment and storage medium | |
CN113486848B (en) | Document table identification method, device, equipment and storage medium | |
CN114418398A (en) | Scene task development method, device, equipment and storage medium | |
CN113420545A (en) | Abstract generation method, device, equipment and storage medium | |
CN114898155B (en) | Vehicle damage assessment method, device, equipment and storage medium | |
CN115037790B (en) | Abnormal registration identification method, device, equipment and storage medium | |
CN114842205B (en) | Vehicle loss detection method, device, equipment and storage medium | |
CN116452802A (en) | Vehicle loss detection method, device, equipment and storage medium | |
CN113902302B (en) | Data analysis method, device, equipment and storage medium based on artificial intelligence | |
CN114003784A (en) | Request recording method, device, equipment and storage medium | |
CN114820409A (en) | Image anomaly detection method and device, electronic device and storage medium | |
CN113283421B (en) | Information identification method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||