
CN112733845A - Interest area problem identification method, interest area inspection method and device - Google Patents

Interest area problem identification method, interest area inspection method and device

Info

Publication number
CN112733845A
CN112733845A
Authority
CN
China
Prior art keywords
interest, area, information, image, unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011624229.4A
Other languages
Chinese (zh)
Inventor
张哲维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xaircraft Technology Co Ltd
Original Assignee
Guangzhou Xaircraft Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xaircraft Technology Co Ltd filed Critical Guangzhou Xaircraft Technology Co Ltd
Priority to CN202011624229.4A priority Critical patent/CN112733845A/en
Publication of CN112733845A publication Critical patent/CN112733845A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method for identifying a problem in an interest area, and a method and a device for inspecting an interest area. The method for identifying a problem in an interest area comprises the following steps: acquiring a first image of an interest area in an operation area; inputting the first image into a problem recognition model, determining whether the interest area has a set problem, and obtaining first problem information of the interest area with the set problem, wherein the first problem information comprises: a problem type for the interest area; and outputting the first problem information. The present application can avoid operational errors, and can also greatly improve both the efficiency and the accuracy of identification.

Description

Interest area problem identification method, interest area inspection method and device
Technical Field
The application relates to the technical field of unmanned aerial vehicles, in particular to a method for identifying a problem of an interest area, a method and a device for inspecting the interest area.
Background
With the continuous development of unmanned aerial vehicle technology, the use of industrial-application unmanned aerial vehicles is becoming more and more widespread. Among industrial-grade application unmanned aerial vehicles, agricultural unmanned aerial vehicles occupy an important position. Agricultural unmanned aerial vehicles can be used for plant protection operations in farmland, farmland inspection, and the like, and can improve operating efficiency and increase operating income in agricultural operations. An unmanned aerial vehicle used for farmland inspection may be called an inspection unmanned aerial vehicle.
In the prior art, when an inspection unmanned aerial vehicle is used to check a farmland for problems, a worker manually remote-controls the inspection unmanned aerial vehicle to inspect the farmland, and the inspection unmanned aerial vehicle collects images of the farmland during the inspection. After the inspection is finished, the worker manually screens the images collected by the inspection unmanned aerial vehicle to judge whether the farmland has a problem.
However, this conventional method relies on manual work, which may cause problems such as operational errors and low working efficiency.
Disclosure of Invention
The objective of the present application is to provide a method for identifying a problem in an interest area, and a method and a device for inspecting an interest area, so as to solve at least the operational errors, low operating efficiency, and other problems caused by the manual work involved in farmland inspection in the prior art.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides a method for identifying a problem in an interest area, including:
acquiring a first image of an interest area in an operation area;
inputting the first image into a problem recognition model, determining whether the interest area has a set problem, and obtaining first problem information of the interest area with the set problem, wherein the first problem information comprises: a problem type for the interest area;
and outputting the first problem information.
As an alternative implementation, the problem identification model includes: a plurality of sub-problem identification models, each sub-problem identification model for identifying a problem type;
the inputting the first image into a problem recognition model, determining whether the interest area has a set problem and obtaining first problem information of the interest area with the set problem comprises:
inputting the first image into each sub-problem recognition model, respectively, to obtain probability information of the problem type output by each sub-problem recognition model;
and determining, according to the probability information of the various problem types, whether the interest area has a set problem and the problem type of the interest area with the set problem.
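The two determination steps above can be sketched as follows, assuming each sub-problem recognition model is a callable returning a probability for its problem type, and using an illustrative 0.5 decision threshold (neither the model interface nor the threshold is specified in the disclosure):

```python
# Minimal sketch of the sub-model aggregation step. The model names,
# threshold, and callable interface are illustrative assumptions.

THRESHOLD = 0.5  # assumed decision threshold per problem type

def identify_problems(first_image, sub_models):
    """Run one image through every sub-problem model and collect the
    problem types whose predicted probability exceeds the threshold."""
    detected = []
    for problem_type, model in sub_models.items():
        probability = model(first_image)  # each sub-model scores one type
        if probability >= THRESHOLD:
            detected.append((problem_type, probability))
    # A non-empty list means the interest area has a set problem.
    return detected

# Toy stand-ins for trained sub-problem recognition models:
sub_models = {
    "weed":    lambda img: 0.91,
    "lodging": lambda img: 0.12,
    "pest":    lambda img: 0.64,
}
print(identify_problems("roi.jpg", sub_models))
```

In practice each callable would wrap an actual trained classifier; the aggregation logic stays the same.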
As an optional implementation manner, the first problem information further includes: a first target position, in the first image, of the problem in the interest area.
As an optional implementation manner, the outputting the first problem information includes:
displaying the first image;
and when the interest area has a set problem, superimposing, at the first target position in the first image, the problem type corresponding to the interest point of the interest area.
As an optional implementation manner, after outputting the first question information, the method further includes:
for an interest area with a set problem, acquiring a second image of the interest area, wherein the acquisition time of the second image is later than that of the first image;
inputting the second image into the problem identification model, determining whether the interest area has a set problem, and obtaining second problem information of the interest area with the set problem, wherein the second problem information comprises: the problem type of the interest area with the set problem, and a second target position, in the second image, of the problem in the interest area;
outputting prompt information according to the first problem information and the second problem information; the prompt information is used for indicating: whether the problem at the first target position is resolved.
As an optional implementation manner, the outputting prompt information according to the first problem information and the second problem information includes:
determining whether the first target position and the second target position correspond to the same geographical area;
when the first target position and the second target position correspond to the same geographical area, outputting prompt information indicating that the problem at the first target position is not resolved;
and when the first target position and the second target position do not correspond to the same geographical area, outputting prompt information indicating that the problem at the first target position is resolved.
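A minimal sketch of the same-geographical-area decision above; the disclosure does not specify how target positions are compared, so a great-circle distance against an assumed 5 m tolerance is used here purely for illustration:

```python
import math

AREA_RADIUS_M = 5.0  # assumed tolerance for "same geographical area"

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def prompt_for(first_pos, second_pos):
    """Return the prompt text for a (lat, lon) pair of target positions."""
    same = haversine_m(*first_pos, *second_pos) <= AREA_RADIUS_M
    return ("problem at first target position not resolved" if same
            else "problem at first target position resolved")

print(prompt_for((23.10, 113.20), (23.10, 113.20)))
print(prompt_for((23.10, 113.20), (23.20, 113.20)))
```

Any geodesic comparison (or a pixel-to-ground projection) could replace the haversine check without changing the surrounding logic.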
As an optional implementation manner, the prompt information is further used for indicating: whether a set problem exists in areas of the interest area other than the first target position;
the outputting, when the first target position and the second target position do not correspond to the same geographical area, prompt information indicating that the problem at the first target position is resolved comprises:
when the first target position and the second target position do not correspond to the same geographical area, the output prompt information further indicating that: a set problem exists in the other areas.
As an optional implementation manner, the acquiring a first image of an interest area in the operation area includes:
sending the position information of the interest point corresponding to the interest area to the unmanned aerial vehicle;
acquiring a first image of the interest area shot by the unmanned aerial vehicle at the interest point according to the position information;
wherein the position information of the interest point is obtained by at least one of the following methods:
obtaining it based on an interest point selected by the user on the map corresponding to the operation area;
identifying a map corresponding to the operation area through a pre-trained interest point identification model to obtain an interest point;
and identifying the map corresponding to the operation area through a pre-trained interest point identification model to obtain a plurality of interest points, and acquiring the interest points selected by the user from the interest points.
As an optional implementation manner, the operation area corresponds to at least one interest point group, each interest point group corresponds to at least one interest point, and each interest point group corresponds to one problem type;
the sending of the location information of the interest point corresponding to the interest area to the unmanned aerial vehicle includes:
acquiring a target interest point group to be inspected;
determining target interest points to be inspected according to the target interest point groups;
sending the position information of the target interest point to an unmanned aerial vehicle, or sending the position information of the target interest point and a route generated based on the target interest point to the unmanned aerial vehicle;
and/or, after the first problem information of the interest area with the set problem is obtained, the method further comprises:
associating the interest point with the corresponding interest point group according to the first problem information and the interest point corresponding to the interest area.
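The grouping step above can be pictured with a minimal Python sketch; the data layout (a mapping from problem type to a list of interest points) and the field names are illustrative assumptions, not part of the disclosure:

```python
from collections import defaultdict

# Map each problem type to the interest points associated with it.
poi_groups = defaultdict(list)

def associate(interest_point, first_problem_info):
    """Associate an interest point with the interest point group
    matching the problem type in its first problem information."""
    poi_groups[first_problem_info["problem_type"]].append(interest_point)

associate({"id": 1, "lat": 23.10, "lon": 113.20}, {"problem_type": "weed"})
associate({"id": 2, "lat": 23.11, "lon": 113.21}, {"problem_type": "weed"})
print(sorted(poi_groups))       # which groups exist
print(len(poi_groups["weed"]))  # how many points the "weed" group holds
```

With this layout, "acquiring the target interest point group to be inspected" reduces to one dictionary lookup by problem type.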
As an optional implementation manner, before the acquiring a first image of an interest area in the operation area, the method further includes:
acquiring a current image of the operation area;
processing the current image by using a pre-trained problem recognition model, and determining an interest area with a set problem in the operation area;
determining, based on the interest area, a shooting point whose camera shooting range covers the interest area, to obtain original information of the interest point corresponding to the interest area, wherein the original information comprises: the position information of the interest point.
As an optional implementation manner, the original information further includes: the course angle, the flying height and the pitch angle of the camera when the unmanned aerial vehicle shoots the image at the interest point.
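As one way to picture the shooting-point determination, the sketch below places a nadir shot above the centre of the interest area and derives the flying height from an assumed camera field of view; the field-of-view value, coverage margin, and returned field names are illustrative assumptions rather than details from the disclosure:

```python
import math

FOV_DEG = 60.0  # assumed field of view of the drone camera

def shooting_point_for_roi(roi_center, roi_diameter_m, margin=1.2):
    """Return original information (position, heading, altitude, pitch)
    for a nadir shot whose ground footprint covers the interest area.
    Footprint diameter on the ground = 2 * altitude * tan(FOV / 2)."""
    half_fov = math.radians(FOV_DEG / 2)
    altitude = (roi_diameter_m * margin) / (2 * math.tan(half_fov))
    lat, lon = roi_center
    return {"lat": lat, "lon": lon,
            "altitude_m": round(altitude, 1),
            "heading_deg": 0.0,      # heading is free for a nadir shot
            "pitch_deg": -90.0}      # camera pointing straight down

print(shooting_point_for_roi((23.10, 113.20), roi_diameter_m=40.0))
```

An oblique shot would instead solve for the camera pitch and a standoff position, but the same original-information fields (position, heading, altitude, pitch) apply.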
In a second aspect, an embodiment of the present application provides a method for inspecting an interest area, including:
controlling an unmanned aerial vehicle to inspect the interest area based on the original information of the interest area in the operation area; the original information includes: location information of points of interest;
wherein, the original information based on the interest area in the operation area controls the unmanned aerial vehicle to patrol the interest area, including:
determining, based on a current image of the operation area, an interest area with a set problem in the operation area; determining, based on the interest area, a shooting point whose camera shooting range covers the interest area, to obtain original information of the interest point corresponding to the interest area; and controlling the unmanned aerial vehicle to patrol the interest area according to the position information of the interest point;
or, the operation area corresponds to at least one interest point group, each interest point group corresponds to at least one interest point, and each interest point group corresponds to one problem type; the unmanned aerial vehicle is controlled to patrol the interest area based on the original information of the interest area in the operation area, and the method comprises the following steps:
acquiring interest point groups required to be inspected; determining interest points to be inspected according to the interest point groups; and controlling the unmanned aerial vehicle to patrol the interest area according to the position information of the interest point.
As an optional implementation manner, the controlling the drone to patrol the interest area according to the location information of the interest point includes:
when the interest area to be inspected comprises a plurality of interest areas, generating an inspection route based on the starting waypoint, each interest point and the return waypoint;
and controlling the unmanned aerial vehicle to patrol according to the inspection route; when the unmanned aerial vehicle flies to any interest point along the inspection route, controlling the unmanned aerial vehicle to fly at the course angle and flying height corresponding to the current interest point, and controlling a camera carried by the unmanned aerial vehicle to shoot the interest area at the pitch angle corresponding to the current interest point.
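The route-generation step can be illustrated with a simple nearest-neighbour ordering between the starting waypoint and the return waypoint; the patent does not prescribe any ordering heuristic, so this is an assumed sketch in planar coordinates:

```python
import math

# Build an inspection route: start waypoint, a visiting order over the
# interest points, then the return waypoint. Nearest-neighbour ordering
# is an illustrative heuristic, not part of the disclosure.

def dist(a, b):
    """Euclidean distance between two planar waypoints."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_route(start, pois, home):
    route, remaining, current = [start], list(pois), start
    while remaining:
        nxt = min(remaining, key=lambda p: dist(current, p))  # closest next stop
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(home)  # finish at the return waypoint
    return route

start = (0.0, 0.0)
pois = [(5.0, 5.0), (1.0, 1.0), (3.0, 2.0)]
print(build_route(start, pois, start))
```

Each waypoint in the resulting route would then carry its associated course angle, flying height, and camera pitch from the interest point's original information.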
In a third aspect, an embodiment of the present application provides an apparatus for identifying a problem in an area of interest, including:
the acquisition module is used for acquiring a first image of an interest area in the operation area;
a processing module, configured to input the first image into a problem recognition model, determine whether a set problem exists in the interest area, and obtain first problem information of the interest area with the set problem, where the first problem information includes: a problem type for the interest area;
and the output module is used for outputting the first problem information.
As an alternative implementation, the problem identification model includes: a plurality of sub-problem identification models, each sub-problem identification model for identifying a problem type;
the processing module is specifically configured to:
inputting the first image into each sub-problem recognition model, respectively, to obtain probability information of the problem type output by each sub-problem recognition model; and determining, according to the probability information of the various problem types, whether the interest area has a set problem and the problem type of the interest area with the set problem.
As an optional implementation manner, the first problem information further includes: a first target position, in the first image, of the problem in the interest area.
As an optional implementation manner, the output module is specifically configured to:
displaying the first image; and when the interest area has a set problem, superimposing, at the first target position in the first image, the problem type corresponding to the interest point of the interest area.
As an optional implementation manner, the processing module is further configured to:
for an interest area with a set problem, acquiring a second image of the interest area, wherein the acquisition time of the second image is later than that of the first image; inputting the second image into the problem identification model, determining whether the interest area has a set problem, and obtaining second problem information of the interest area with the set problem, wherein the second problem information comprises: the problem type of the interest area with the set problem, and a second target position, in the second image, of the problem in the interest area; and outputting prompt information according to the first problem information and the second problem information, the prompt information being used for indicating: whether the problem at the first target position is resolved.
As an optional implementation manner, the processing module is specifically configured to:
determining whether the first target position and the second target position correspond to the same geographical area;
when the first target position and the second target position correspond to the same geographical area, the output prompt information is used for indicating that the problem at the first target position is not resolved;
and when the first target position and the second target position do not correspond to the same geographical area, the output prompt information is used for indicating that the problem at the first target position is resolved.
As an optional implementation manner, the prompt information is further used for indicating: whether a set problem exists in areas of the interest area other than the first target position.
The processing module is specifically configured to:
when the first target position and the second target position do not correspond to the same geographical area, output prompt information further indicating that: a set problem exists in the other areas.
As an optional implementation manner, the obtaining module is specifically configured to:
and sending the position information of the interest point corresponding to the interest area to the unmanned aerial vehicle.
And acquiring a first image of the interest area shot by the unmanned aerial vehicle at the interest point according to the position information.
Wherein the position information of the interest point is obtained by at least one of the following methods:
obtaining it based on an interest point selected by the user on the map corresponding to the operation area;
identifying a map corresponding to the operation area through a pre-trained interest point identification model to obtain an interest point;
and identifying the map corresponding to the operation area through a pre-trained interest point identification model to obtain a plurality of interest points, and acquiring the interest points selected by the user from the interest points.
As an optional implementation manner, the operation area corresponds to at least one interest point group, each interest point group corresponds to at least one interest point, and each interest point group corresponds to one problem type.
As an optional implementation manner, the processing module is specifically configured to:
and acquiring a target interest point group required to be inspected.
And determining the target interest points to be inspected according to the target interest point groups.
And sending the position information of the target interest point to the unmanned aerial vehicle, or sending the position information of the target interest point and the route generated based on the target interest point to the unmanned aerial vehicle.
And/or associating the interest point with the corresponding interest point group according to the first problem information and the interest point corresponding to the interest area.
As an optional implementation manner, the processing module is further configured to:
acquiring a current image of the operation area;
processing the current image by using a pre-trained problem recognition model, and determining an interest area with a set problem in the operation area;
determining, based on the interest area, a shooting point whose camera shooting range covers the interest area, to obtain original information of the interest point corresponding to the interest area, wherein the original information comprises: the position information of the interest point.
As an optional implementation manner, the original information further includes: the course angle, the flying height and the pitch angle of the camera when the unmanned aerial vehicle shoots the image at the interest point.
In a fourth aspect, an embodiment of the present application provides an interest area inspection device, including:
and the acquisition module is used for acquiring the original information of the interest area in the operation area.
The processing module is used for controlling the unmanned aerial vehicle to inspect the interest area based on the original information of the interest area in the operation area; the original information includes: location information of points of interest.
Wherein, the original information based on the interest area in the operation area controls the unmanned aerial vehicle to patrol the interest area, including:
determining, based on a current image of the operation area, an interest area with a set problem in the operation area; determining, based on the interest area, a shooting point whose camera shooting range covers the interest area, to obtain original information of the interest point corresponding to the interest area; and controlling the unmanned aerial vehicle to patrol the interest area according to the position information of the interest point;
or, the operation area corresponds to at least one interest point group, each interest point group corresponds to at least one interest point, and each interest point group corresponds to one problem type; the unmanned aerial vehicle is controlled to patrol the interest area based on the original information of the interest area in the operation area, and the method comprises the following steps:
acquiring interest point groups required to be inspected; determining interest points to be inspected according to the interest point groups; and controlling the unmanned aerial vehicle to patrol the interest area according to the position information of the interest point.
As an optional implementation manner, the processing module is specifically configured to:
when the interest area to be inspected comprises a plurality of interest areas, generating an inspection route based on the starting waypoint, each interest point and the return waypoint;
and controlling the unmanned aerial vehicle to patrol according to the inspection route; when the unmanned aerial vehicle flies to any interest point along the inspection route, controlling the unmanned aerial vehicle to fly at the course angle and flying height corresponding to the current interest point, and controlling a camera carried by the unmanned aerial vehicle to shoot the interest area at the pitch angle corresponding to the current interest point.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor, when the electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to perform the steps of the method for identifying a problem in a region of interest according to the first aspect or the steps of the method for inspecting a region of interest according to the second aspect.
In a sixth aspect, an embodiment of the present application provides a smart agriculture management system, including the electronic device, the control terminal of the unmanned aerial vehicle, and the unmanned aerial vehicle described in the fifth aspect, or including the area of interest problem identification device described in the third aspect and/or the area of interest inspection device described in the fourth aspect.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to perform the steps of the method for identifying a problem in an area of interest according to the first aspect or the steps of the method for inspecting an area of interest according to the second aspect.
The beneficial effect of this application is:
After the image of the interest area is acquired, the image is input into a problem recognition model obtained by pre-training, so that whether the interest area has a set problem and the problem type of the interest area with the set problem can be obtained, and the image of the interest area and the problem identification information can be output for the user to check. Compared with the existing manner of performing problem identification manually, this embodiment uses a pre-trained model to automatically identify problems in the interest area; therefore, even when the number of images is large, problem identification can be completed quickly, which greatly improves the operating efficiency of identification. In addition, the problem recognition model is obtained in advance by training on a large number of training samples, so the robustness of the model is high, and the accuracy of the problem types identified by the model can be ensured.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can also be obtained from these drawings without inventive effort.
Fig. 1 is a schematic diagram illustrating an architecture of an intelligent agriculture management system according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for identifying a problem in an interest area according to an embodiment of the present application;
fig. 3 is another schematic flowchart of a method for identifying a problem in an interest area according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for identifying a problem in an interest area according to an embodiment of the present application;
FIG. 5 is a schematic interface diagram showing the type of question for a region of interest superimposed on a first image;
fig. 6 is a further flowchart of a method for identifying a problem in an interest area according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a method for identifying a problem in an interest area according to an embodiment of the present application to obtain a first image;
fig. 8 is a schematic flowchart of a method for routing inspection of an interest area according to an embodiment of the present application;
fig. 9 is a block diagram of a device for identifying a problem in an area of interest according to an embodiment of the present application;
fig. 10 is a block diagram of a module of an interest area inspection device according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of an electronic device 110 according to an embodiment of the present disclosure.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
In the scene that unmanned aerial vehicle was patrolled and examined in the farmland was patrolled and examined in current use, the problem discernment of process and later stage of patrolling and examining all need rely on artificial mode to go on. Since the inspection process is manually controlled, it may cause an operational error. For example, a certain field that needs to be inspected is not inspected, and for example, a certain field that does not need to be inspected is inspected. In addition, problem identification in later stage needs manual work to be accomplished, leads to the operating efficiency low, and especially when the scope that unmanned aerial vehicle patrolled and examined is great, the image quantity that unmanned aerial vehicle gathered is huge for the work load of discernment is huge, the operating efficiency is low.
In view of the problems of operational errors and low operating efficiency caused by manual operation in the prior art, the embodiments of the present application provide the following inventive concept: automatically controlling the unmanned aerial vehicle to inspect the areas of interest that need inspection, and automatically identifying problems existing in the areas of interest through image processing. In this way, operational errors can be avoided and operating efficiency can be greatly improved, while the accuracy of problem identification in the areas of interest can also be guaranteed.
Fig. 1 is a schematic diagram of an architecture of an intelligent agriculture management system according to an embodiment of the present application, and as shown in fig. 1, the intelligent agriculture management system 100 includes: a network 110, an electronic device 120, a control terminal 130 of the drone, and a drone 140.
In some embodiments, the electronic device 120 may include a processor. The processor may process information and/or data related to the drone to perform one or more of the functions described herein. In some embodiments, a processor may include one or more processing cores (e.g., a single-core processor (S) or a multi-core processor (S)). Merely by way of example, a Processor may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction Set Processor (ASIP), a Graphics Processing Unit (GPU), a Physical Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computer (RISC), a microprocessor, or the like, or any combination thereof.
In some embodiments, the electronic device 120 may refer to a cloud platform, or may also refer to a control terminal of a drone. It should be understood that when the electronic device 120 refers to a control terminal of a drone, the electronic device 120 represents the same device as the control terminal 130 of the drone described above.
In some embodiments, the control terminal 130 of the drone may communicate with the electronic device and the drone 140 via the network 110. Network 110 may be used for the exchange of information and/or data. In some embodiments, the network 110 may be any type of wired or wireless network, or a combination thereof. Merely by way of example, the Network 110 may include a wired Network, a Wireless Network, a fiber optic Network, a telecommunications Network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth Network, a ZigBee Network, a Near Field Communication (NFC) Network, or the like, or any combination thereof. In some embodiments, network 110 may include one or more network access points. For example, the network 110 may include wired or wireless network access points, such as base stations and/or network switching nodes.
In some embodiments, the control terminal 130 of the drone may be a control device located at the ground end for remote manipulation of the drone 140. In some embodiments, the control terminal of the drone may be one or more of a remote control, a smartphone, a desktop computer, a laptop computer, or a wearable device (e.g., a watch or wristband). In fig. 1, the control terminal 130 of the drone is schematically illustrated as a remote controller 1301 and a terminal device 1302. The terminal device 1302 is, for example, a smart phone, a wearable device, or a tablet computer, but the embodiments of the present application are not limited thereto.
In some embodiments, the drone 140 may be in wireless communication with the control terminal 130 of the drone and the electronic device 120. The drone 140 may include a camera component, a power system, a flight control system, a frame, a plurality of holders carried on the frame, a battery, and the like. In some embodiments, the drone 140 may be a rotorcraft, or other types of drones, and the form of the drone 140 is not limited in the embodiments of the present application.
The method for identifying a problem in an area of interest and the method for inspecting an area of interest provided by the embodiment of the present application will be described in detail below with reference to the content described in the intelligent agriculture management system 100 shown in fig. 1.
For convenience of description, in the following embodiments of the present application, a control terminal of an unmanned aerial vehicle is simply referred to as a control terminal.
Fig. 2 is a flowchart illustrating a method for identifying a problem in an area of interest according to an embodiment of the present disclosure, where an execution subject of the method may be, for example, the electronic device 120 shown in fig. 1, but the present disclosure is not limited thereto. For convenience of description, the following description will be made by taking an electronic device as an example. As shown in fig. 2, the method includes:
s201, acquiring a first image of a region of interest in the operation region.
Optionally, the interest region may be one interest region or a plurality of interest regions. When there are a plurality of regions of interest, the following steps may be performed for each region of interest, respectively.
The region of interest may be a predetermined region of interest, for example, the region of interest may be set by a user in advance, or the region of interest may be automatically determined by the electronic device. The process of determining the region of interest will be described in detail in the following embodiments.
Optionally, the first image of the interest area may be acquired by an unmanned aerial vehicle, or may also be acquired by a field camera disposed in the work area.
After determining the region of interest, the electronic device may indicate the original information of the region of interest to the control terminal. In the embodiment of the present application, the interest region may correspond to one or more interest points. The interest point can refer to a point where the unmanned aerial vehicle is located when acquiring an image, and one interest point can be represented through position information. When the unmanned aerial vehicle shoots on the interest point, the covered area is the interest area corresponding to the interest point. The original information of the interest area may refer to original information of a point of interest corresponding to the interest area, and the original information may include location information of the point of interest. The control terminal controls the unmanned aerial vehicle to fly to the interest point corresponding to the interest area and collects a first image of the interest area according to the indication of the electronic equipment. Or, the electronic device may also directly control the unmanned aerial vehicle to fly to a point of interest corresponding to the area of interest and acquire the first image of the area of interest.
After the unmanned aerial vehicle collects an image, it can send the image to the control terminal, which then forwards it to the electronic device. Alternatively, if there is a communication connection between the drone and the electronic device, the drone may send the image directly to the electronic device without going through the control terminal. The drone can send images in a batch after one inspection run is completed, or send each newly collected frame in real time.
After receiving the image from the control terminal or the unmanned aerial vehicle, the electronic device acquires a first image of the interest area.
S202, inputting the first image of the region of interest into a problem recognition model, determining whether the region of interest has a set problem, and obtaining first problem information of the region of interest having the set problem, where the first problem information includes: the type of problem in the area of interest.
Alternatively, the problem recognition model may be a machine learning model. For example, the machine learning model may use an image classification recognition method to perform problem recognition of the region of interest. The problem recognition model can be obtained by training in advance by using a training sample. The training sample may include an image of the region of interest and whether a set problem exists in the region of interest, a type of the problem, a location of the type of the problem in the image of the region of interest, and the like. Using these training samples, the problem recognition model described above can be trained.
Furthermore, in this step, the image of the interest area acquired by the unmanned aerial vehicle is input into the problem identification model, and the model outputs the probability that each problem type occurs in the interest area. Based on the probabilities output by the model, it can be determined whether the interest area has a set problem and, if so, which problem types exist.
Taking a farmland inspection scene as an example, the above problem types may include: weeds, insect pests, crop lodging, water leakage at field ridges, standing water in plots, and the like.
It should be understood that the problem types resulting from the problem identification model described above may be one type or multiple types. For example, where both weeds and pests may be present in a particular area of interest, two types of problems may be identified from the problem identification model.
It should be noted that, for the same interest area, the image acquired by the drone may be one frame or multiple frames. When multiple frames are collected, the electronic device can screen them according to image sharpness, shooting angle, and the like, so as to select the frame with the best quality and input it into the problem recognition model for processing.
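As an illustration of the frame-screening step above, the sketch below picks the best-quality frame using a simple sharpness proxy (mean squared horizontal intensity difference). The function names and the metric are hypothetical assumptions, not part of the described system; a real pipeline would more likely use a library-based measure such as the variance of the Laplacian, and would also weigh the shooting angle.

```python
def sharpness_score(gray):
    # Mean squared horizontal intensity difference over the whole frame;
    # larger values indicate more high-frequency detail (a sharper image).
    # gray is a 2D list of pixel intensities (0-255).
    diffs = [(row[x + 1] - row[x]) ** 2
             for row in gray
             for x in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def select_best_frame(frames):
    # Keep only the frame with the highest sharpness score.
    return max(frames, key=sharpness_score)
```

A uniformly gray frame scores zero, while a frame with strong pixel-to-pixel transitions scores high, so the sharpest candidate is forwarded to the recognition model.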
And S203, outputting the first question information.
Optionally, the outputting the first question information may refer to displaying and/or storing the first question information.
After the electronic device obtains the first question information through the processing, the first image and the first question information may be directly displayed, or the first image and the first question information may be first stored and displayed under the trigger of some conditions. For example, an interest area question viewing interface is provided on the electronic device, and after a user selects a certain interest area on the interface, the electronic device can read the stored image of the interest area and the question information of the interest area. The process of the electronic device displaying the image of the region of interest and the question information will be described in detail in the following embodiments.
Optionally, when the electronic device stores the first image and the first question information, the electronic device may store the first image and the first question information in an associated manner, so that an association relationship between the first image and the first question information is maintained.
In this embodiment, after the image of the interest area is acquired, it is input into the problem identification model obtained through pre-training, so that whether the interest area has a set problem, and the problem type if so, can be obtained, and the image of the interest area together with the problem identification information can be output for the user to check. Compared with the existing manner of manual problem identification, this embodiment uses a pre-trained model to automatically identify problems in the interest area; therefore, even when the number of images is large, problem identification can be completed quickly, which greatly improves identification efficiency. In addition, since the problem identification model is trained in advance on a large number of training samples, the model is robust, and the accuracy of the problem types it identifies can be ensured.
In an alternative embodiment, the problem identification model may be a model that outputs probabilities that regions of interest belong to various problem types.
In another alternative embodiment, the problem identification model may be formed by combining a plurality of sub-problem identification models, that is, the problem identification model includes a plurality of sub-problem identification models, each of which is used to identify one problem type. Taking the foregoing farmland inspection scene as an example, the problem types may include: weeds, insect pests, crop lodging, water leakage at field ridges, and standing water in plots; correspondingly, the sub-problem recognition models may include: a sub-problem recognition model for recognizing weeds, one for recognizing insect pests, one for recognizing crop lodging, one for recognizing water leakage at field ridges, and one for recognizing standing water in plots.
The sub-problem recognition models can be obtained by training using pre-labeled training samples respectively.
When the problem recognition model includes a plurality of sub problem recognition models, an alternative manner of the step S202 is the following process.
Fig. 3 is another schematic flow chart of a method for identifying a problem in an area of interest according to an embodiment of the present application, and as shown in fig. 3, an alternative manner of the step S202 includes:
and S301, inputting the first image into each subproblem recognition model respectively to obtain probability information of the problem type output by each subproblem recognition model respectively.
S302, determining whether the interest area has a setting problem or not and the problem type of the interest area with the setting problem according to the probability information of various problem types.
Optionally, each sub-problem recognition model may identify and output the probability that its corresponding problem type occurs in the interest region. In a specific implementation, a preset threshold may be set; when the probability output by a sub-problem identification model is greater than the preset threshold, the corresponding problem type exists in the interest area. The electronic device evaluates the probability value output by each sub-problem identification model to obtain the complete set of problem types for the interest area.
Assume that the sub-problem recognition models of the problem recognition model include: a sub-problem recognition model for recognizing weeds, one for recognizing insect pests, one for recognizing crop lodging, one for recognizing water leakage at field ridges, and one for recognizing standing water in plots. In one example, if only the probability information output by the sub-problem recognition model for recognizing weeds is greater than the preset threshold and the probability information output by the other sub-problem recognition models is less than the preset threshold, the electronic device may determine that the problem type of the area of interest is weeds. In another example, if the probabilities output by the sub-problem recognition models for recognizing weeds and insect pests are both greater than the preset threshold, and the probabilities output by the other sub-problem recognition models are all less than the preset threshold, the electronic device may determine that the problem types of the area of interest are weeds and insect pests.
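The thresholding decision described above can be sketched as follows. The threshold value of 0.5 and all names are illustrative assumptions, not part of the described system.

```python
THRESHOLD = 0.5  # hypothetical preset decision threshold

def detect_problems(probabilities, threshold=THRESHOLD):
    # probabilities: problem type -> probability output by that sub-model.
    # A type is reported only when its sub-model's probability exceeds
    # the preset threshold, as described in S302.
    return [ptype for ptype, p in probabilities.items() if p > threshold]
```

For example, probabilities of 0.83 for weeds and 0.12 for pests would report only weeds, while 0.8 and 0.7 would report both, matching the two examples in the text.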
In this embodiment, the problem identification model includes a plurality of sub-problem identification models, each sub-problem identification model is used to identify a problem type, the electronic device can obtain the complete problem type of the interest region based on the probability information output by each sub-problem identification model, and since one sub-problem identification model is only used to identify one problem type, the robustness of the model is higher, and the accuracy of the identified result is higher.
As an optional implementation manner, the first question information obtained based on the question recognition model further includes: a first target position, in the first image, of the problem existing in the region of interest.
The first target position of the problem type in the first image may be a pixel coordinate of the problem type in the image, or a range of the pixel coordinate.
It should be understood that the problem types resulting from the problem identification model described above may be one type or multiple types. When multiple, the problem identification model may output the location in the image where each problem type occurs separately. For example, if both weeds and pests may be present in a particular area of interest, then the problem identification model may determine the types of problems, the locations of the weeds in the image, and the locations of the pests in the image.
When the problem identification model comprises a plurality of sub problem identification models, each sub problem identification model can respectively determine a first target position of the problem type existing in the interest area in the first image.
In one example, if only the probability information output by the sub-problem recognition model for recognizing weeds is greater than a preset threshold and the probability information output by the other sub-problem recognition models is less than the preset threshold, the electronic device may determine that the problem type of the area of interest is weeds and the first target position of occurrence of weeds is the position output by the sub-problem recognition model for recognizing weeds. In another example, if the probabilities of the sub-problem recognition models for identifying weeds and the sub-problem recognition models for identifying pests are both greater than a preset threshold value and the probabilities of the outputs of the other sub-problem recognition models are both less than the preset threshold value, the electronic device may determine that the problem type of the area of interest is weeds and pests, the target position of occurrence of weeds is the position of the output of the sub-problem recognition models for identifying weeds, and the target position of occurrence of pests is the position of the output of the sub-problem recognition models for identifying pests.
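The combination of per-type probabilities and per-type positions described above might be sketched as follows, under the assumption that each sub-model returns a (probability, bounding box) pair; the (x1, y1, x2, y2) pixel format, the 0.5 threshold, and all names are hypothetical.

```python
def collect_findings(sub_model_outputs, threshold=0.5):
    # sub_model_outputs: problem type -> (probability, bounding box).
    # Keep the predicted location only for types whose probability
    # exceeds the threshold; boxes are assumed (x1, y1, x2, y2) pixels.
    return {ptype: box
            for ptype, (p, box) in sub_model_outputs.items()
            if p > threshold}
```

Only the types that pass the threshold survive, each paired with the position its own sub-model predicted, mirroring the weeds-and-pests example above.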
As described above, the electronic device may directly display the first image and the first question information after obtaining the first image and the first question information, or may store the first image and the first question information before displaying the first image and the first question information under a specific trigger condition. The process of this display will be explained below.
Fig. 4 is a further flowchart of a method for identifying a problem in an interest area according to an embodiment of the present application, and as shown in fig. 4, an alternative manner of the step S203 includes:
and S401, displaying the first image.
S402, when the interest area has a set problem, displaying the problem type of the interest area in an overlaid manner at the first target position of the first image.
It is to be noted that the above steps S401 and S402 may be performed simultaneously.
Optionally, the overlaid identification information may be text, an icon, or the like representing the problem type of the interest area.
For example, assuming that the target position on the first image is a coordinate range with top-left vertex (a, b) and bottom-right vertex (c, d), the electronic device may calculate the center of that range. While displaying the first image, the electronic device overlays the text, icon, and the like of the problem type of the interest area at that center position.
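The center-position computation in this example can be written as a small helper; the function name is hypothetical:

```python
def overlay_anchor(box):
    # box = (a, b, c, d): (a, b) is the top-left vertex, (c, d) the
    # bottom-right vertex, matching the coordinate range in the text.
    a, b, c, d = box
    return ((a + c) / 2, (b + d) / 2)
```

The returned point is where the problem-type label would be anchored on the first image.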
Optionally, the electronic device may overlay the text, icon, and the like of the problem type of the interest region on the target position of the first image by way of layer compositing: the first image serves as the lower layer, and the text or icon of the problem type is placed on an upper layer at the target coordinates. When the layers are composited, the upper layer may be rendered opaque or partially transparent.
Fig. 5 is a schematic interface diagram showing the problem type of the interest area superimposed on the first image, and as shown in fig. 5, if the problem type of the interest area corresponding to the first image is a weed, the text and icon of the weed are displayed at the position where the problem type occurs in the first image.
In this embodiment, by displaying the first image with the problem type of the interest area overlaid at the target position, the user can intuitively and conveniently view both the position where the problem occurs and the specific problem type, which facilitates subsequent problem resolution and therefore greatly improves the user experience.
The above describes a process in which the electronic device identifies and outputs the problem type for the area of interest. As an optional implementation, the electronic device may further perform identification on the interest area with the set problem again and compare the results to determine whether the problem has been solved; new problem types can also be found through this comparison of the two images.
Fig. 6 is a further flow chart of the method for identifying a problem in an area of interest according to the embodiment of the present application, and as shown in fig. 6, after step S203, the method further includes:
s601, for the interest area with the setting problem, acquiring a second image of the interest area, wherein the acquisition time of the second image is later than that of the first image.
As described above, after obtaining the first question information of the interest area, the electronic device may store the first image and the first question information in association. Optionally, when storing, the identifier of the interest area may be used as an index; the identifier may be, for example, an area ID or the latitude and longitude information of the point of interest corresponding to the interest area. After a certain time interval, the electronic device can send the indexed interest area information to the control terminal, which controls the unmanned aerial vehicle to collect an image of the interest area again, or the electronic device can send the information directly to the unmanned aerial vehicle, which then collects the image again. The second image is thereby obtained.
S602, inputting the second image into the problem recognition model, determining whether the region of interest has a set problem, and obtaining second problem information of the region of interest having the set problem, where the second problem information includes: the problem type of the region of interest in which a set problem exists, and a second target position of the problem in the second image.
The method for obtaining the second question information using the question identification model is the same as the method for obtaining the first question information using the question identification model, and reference may be made to the foregoing embodiments, which are not described herein again.
S603, outputting a prompt message according to the first question information and the second question information, the prompt message being used to indicate: whether the problem at the first target position is resolved.
As an alternative implementation, it may be determined whether the first target location and the second target location correspond to the same geographic area, and further, when the first target location and the second target location correspond to the same geographic area, the output prompt information is used to indicate that the problem at the first target location is not solved, and when the first target location and the second target location do not correspond to the same geographic area, the output prompt information is used to indicate that the problem at the first target location is solved.
Optionally, whether the first target location and the second target location correspond to the same geographic area may be determined by judging whether the first target location and the second target location are the same. In a specific implementation, when the unmanned aerial vehicle acquires images of the interest area at different times, the images can be acquired with the same parameters, which may include, for example: the shooting point, heading angle, flying height, camera pitch angle, and the like. In this way, the same geographic area occupies the same position in two images acquired at different times. Accordingly, if the first target location is the same as the second target location, i.e., they correspond to the same geographic area, a set problem exists at both locations, indicating that the problem at the first target location is not solved. If the first target location is different from the second target location, i.e., they correspond to different geographic areas, the problem at the first target location has been solved.
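Assuming identical shooting parameters so that equal pixel positions map to the same geographic area, the comparison logic above might be sketched as follows; the function name and return strings are illustrative assumptions.

```python
def resolution_status(first_pos, second_pos):
    # Both images are assumed captured with identical parameters
    # (shooting point, heading, altitude, camera pitch), so equal
    # pixel positions correspond to the same geographic area.
    if second_pos is None:
        # No problem detected anywhere in the second image.
        return "solved"
    if first_pos == second_pos:
        # Same geographic area still flagged: problem persists.
        return "not solved"
    # The original spot is clear, but a problem appears elsewhere.
    return "solved, new problem elsewhere"
```

The third branch corresponds to the case discussed next, where the prompt also indicates a set problem in areas other than the first target position.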
In addition, when the first target location and the second target location correspond to different geographic areas, the problem at the first target location is solved, but a set problem occurs at the second target location. Therefore, as an optional mode, the prompt message is further used to indicate: whether a set problem exists in areas of the interest region other than the first target position.
Optionally, when the first target location and the second target location do not correspond to the same geographic area, the output prompt information is further used to indicate that a set problem exists in the other areas.
Optionally, for an unresolved problem, the electronic device may output alarm information for the interest area, for example: "The water leakage at the field ridge of interest area A has not been solved; please pay attention." The alarm information can take the form of text, voice, and the like. By outputting the alarm information, the user can be reminded in time to pay attention to the problems in the interest area.
In this embodiment, image acquisition and problem identification are performed again on the interest area where a problem exists, and whether the problem has been solved is judged by comparing the two identification results. This allows the user to conveniently learn whether the problem in the interest area has been solved and to take measures in time when it has not, thereby helping the user further improve operating efficiency.
As an alternative implementation manner, the obtaining of the first image of the interest area in the working area in step S201 may be obtained by controlling the drone to shoot at a point of interest corresponding to the interest area. This mode will be explained below.
Fig. 7 is a schematic flowchart of a method for identifying a problem in a region of interest according to an embodiment of the present application, where as shown in fig. 7, the process includes:
s701, sending the position information of the interest points corresponding to the interest areas to the unmanned aerial vehicle.
Optionally, the electronic device may send the position information directly to the unmanned aerial vehicle, or may send the position information to the control terminal, which then forwards it to the unmanned aerial vehicle, so as to control the unmanned aerial vehicle to collect images at the point of interest.
Wherein, the position information of the interest point can be obtained by any one of the following manners.
In the first mode, the information is obtained based on the interest points selected by the user in the map corresponding to the operation area.
In this manner, the user may indicate location information of the point of interest based on the map. Accordingly, the electronic device may obtain location information of the point of interest indicated by the user based on the map. In addition, the user can also indicate the flight height, the pitch angle, the course angle and other routing inspection parameters of the interest point.
Optionally, the electronic device may display a satellite map or a high-definition map in the inspection area on the user operation interface, where the high-definition map may be generated based on the acquired image after the unmanned aerial vehicle acquires the image in the inspection area. In turn, the user may set points of interest for one or more plots on the map. For example, a user selects a certain parcel as an interest area, the electronic device may prompt the user to input the flying height, the pitch angle, the course angle and the like of an interest point corresponding to the interest area, the electronic device uses the longitude and latitude of the parcel selected by the user as the position of the interest point, and the information input by the user is used as the inspection parameter of the interest point. Optionally, the user may set the same inspection parameters in advance for a certain type of interest point, and then, after selecting one interest point, the set inspection parameters may be directly applied to the interest point. The above-mentioned certain kind of interest points may refer to, for example, interest points for which it is necessary to identify whether there are weeds, interest points for which it is necessary to identify whether there are pests, and the like.
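The template mechanism described above (pre-set inspection parameters applied to a newly selected point of interest of the same class) might be sketched as follows; all field names and values are illustrative assumptions, not part of the described system.

```python
# Hypothetical parameter template for one class of interest points
# (e.g. points checked for weeds); all field names are assumptions.
WEED_CHECK_TEMPLATE = {"altitude_m": 30, "pitch_deg": -90, "heading_deg": 0}

def make_point_of_interest(lat, lon, template):
    # Combine a map-selected location with the pre-set inspection
    # parameters shared by all interest points of the same class.
    poi = {"lat": lat, "lon": lon}
    poi.update(template)
    return poi
```

After the user taps a plot on the map, the selected longitude and latitude become the point's position, and the class template fills in the flying height, pitch angle, and heading without further input.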
In the second mode, a map corresponding to the operation area is identified through a pre-trained interest point identification model to obtain an interest point.
In this way, the interest point identification model may be obtained by training in advance using training samples, and may be, for example, a neural network model. An overall image of the operation area can be collected by the unmanned aerial vehicle and input into the model, yielding the interest points corresponding to multiple interest areas in the operation area of the unmanned aerial vehicle and the position information of these interest points. In addition, the model can also output the inspection parameters of the unmanned aerial vehicle at each interest point.
In the third mode, a map corresponding to the operation area is identified through a pre-trained interest point identification model to obtain a plurality of interest points, and the interest points selected by the user from the interest points are obtained.
In this way, the interest point identification model may be obtained in advance by training with training samples. The interest point identification model may be, for example, a neural network model. A whole image of the operation area can be collected by the unmanned aerial vehicle and input into the interest point identification model to obtain a plurality of selectable interest points of a plurality of interest areas in the inspection area and the position information of these selectable interest points. The electronic device may then display the obtained selectable interest points to the user, from which the user may select the interest points that need to be inspected, and/or the user may further adjust or add interest points based on those displayed by the electronic device. After the user finishes the selection, the electronic device records the position information of each interest point selected by the user. In addition, the interest point identification model can also output the inspection parameters of the unmanned aerial vehicle at each interest point.
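The model-driven modes above can be sketched in Python as follows. This is a minimal, hypothetical stand-in: the 2D `score_grid` plays the role of the trained model's per-cell output for the whole-area image, and the default inspection parameters (3 m flying height, 45° pitch) are illustrative values, not taken from this application.

```python
from dataclasses import dataclass

@dataclass
class PointOfInterest:
    lat: float          # latitude of the interest point
    lon: float          # longitude of the interest point
    height_m: float     # flying height at this interest point
    pitch_deg: float    # camera pitch angle
    course_deg: float   # course (heading) angle

def identify_points_of_interest(score_grid, origin, cell_deg, threshold=0.5):
    """Stand-in for the trained interest point identification model:
    grid cells whose score clears the threshold become interest points
    with default inspection parameters attached."""
    lat0, lon0 = origin
    pois = []
    for r, row in enumerate(score_grid):
        for c, score in enumerate(row):
            if score >= threshold:
                pois.append(PointOfInterest(
                    lat=lat0 + r * cell_deg,
                    lon=lon0 + c * cell_deg,
                    height_m=3.0, pitch_deg=45.0, course_deg=0.0))
    return pois
```

In the third mode, the returned list would then be shown to the user for selection or adjustment before the retained points are recorded.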
Optionally, when the position information of the interest points is sent to the unmanned aerial vehicle, the inspection parameters of the unmanned aerial vehicle at the interest points may also be sent, and the unmanned aerial vehicle is controlled to perform inspection according to these inspection parameters.
S702, acquiring a first image of the interest area shot by the unmanned aerial vehicle at the interest point according to the position information.
After the position information of the interest points corresponding to the interest areas is sent to the unmanned aerial vehicle, the unmanned aerial vehicle can fly to each interest point accordingly, acquire images according to the inspection parameters of that interest point, and send the acquired images to the electronic device.
In this embodiment, the electronic device obtains the position information of the interest points through user instruction and/or automatic model generation, and the unmanned aerial vehicle automatically performs inspection according to the inspection parameters of the interest points. Compared with the prior art, this can avoid operation errors and improve operation efficiency.
In an optional implementation manner, when the electronic device sends the position information of the interest points corresponding to the interest areas to the unmanned aerial vehicle, the position information may be sent in the form of interest point groups. The operation area corresponds to at least one interest point group, each interest point group may correspond to at least one interest point, and each interest point group may correspond to one problem type. For example, if a certain interest point group corresponds to the weed type, the interest points in that group mainly need to pay attention to weed problems, and accordingly the inspection parameters of the interest points in the group may use the same or similar parameter values.
Optionally, the interest point groups may be divided according to the types of the problems that need to be paid attention to by each interest point, so that each interest point in one interest point group has the same or similar patrol inspection parameters.
In this manner, the electronic device may divide the interest points into at least one interest point group in the following two ways.
In the first way, the user may indicate the interest points and then indicate the interest point groups.
For example, if the user indicates 5 interest points on the high-definition map, and 3 of the 5 interest points need to focus on weeds while the other 2 need to focus on pests, the user may select the 3 interest points and group them into one interest point group, and select the 2 interest points and group them into another interest point group. The electronic device stores the interest point group information according to the user's instruction.
In a second manner, the interest points indicated by the user may be input into an interest point grouping model obtained by pre-training, so as to obtain the at least one interest point group.
In this way, the interest point grouping model can be obtained in advance by training with training samples. For example, for the interest point group that needs attention to weeds, the model can set the flying height to 1-5 meters, the pitch angle range to 30-75 degrees, and the like; if the information of a certain interest point is consistent with the set information, the model can determine that the interest point belongs to the interest point group that needs attention to weeds. Automatically dividing interest points into groups using the grouping model improves division efficiency while ensuring the accuracy of the division result.
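The rule side of such a grouping decision can be sketched as follows. The weed ranges (1-5 m flying height, 30-75° pitch) come from the example above, while the pest ranges and the rule-matching logic itself are hypothetical stand-ins for the trained grouping model.

```python
# Parameter ranges each problem-type group accepts. The "weeds" ranges
# follow the example in the text; the "pests" ranges are illustrative.
GROUP_RULES = {
    "weeds": {"height_m": (1.0, 5.0),  "pitch_deg": (30.0, 75.0)},
    "pests": {"height_m": (5.0, 15.0), "pitch_deg": (60.0, 90.0)},
}

def assign_group(height_m, pitch_deg):
    """Assign an interest point to the first group whose parameter
    ranges it matches; stand-in for the trained grouping model."""
    for group, rules in GROUP_RULES.items():
        lo_h, hi_h = rules["height_m"]
        lo_p, hi_p = rules["pitch_deg"]
        if lo_h <= height_m <= hi_h and lo_p <= pitch_deg <= hi_p:
            return group
    return "ungrouped"
```

An interest point whose inspection parameters fall outside every rule would be left ungrouped for the user to place manually.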
Based on the above-mentioned interest point groups, when the position information of the interest points is sent to the unmanned aerial vehicle in the above-mentioned step S701, the position information may be sent by interest point group.
Optionally, the target interest point group to be inspected may first be acquired, the target interest points to be inspected are then determined according to the target interest point group, and the position information of the target interest points is sent to the unmanned aerial vehicle, or the position information of the target interest points and a route generated based on the target interest points are sent to the unmanned aerial vehicle.
The target interest point group may be indicated by the user or selected by the electronic device. For example, the electronic device selects the interest point group of a corresponding problem type as the target interest point group according to a certain period. After the target interest point group is obtained, the interest points belonging to it are taken as target interest points, and their position information and inspection parameters are sent to the unmanned aerial vehicle. Optionally, a route generated based on the target interest points may also be sent at the same time.
The route information may be generated based on the order of the interest points in the interest point group, the starting waypoint, and the return waypoint. For example, the electronic device may generate route information according to the order of the interest points in the interest point group, for example an order by longitude and latitude, together with the starting waypoint and the return waypoint, and the unmanned aerial vehicle performs inspection according to the route information.
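The route assembly just described can be sketched as follows; sorting by longitude then latitude is one concrete choice of the "order by longitude and latitude" mentioned above, and waypoints are plain `(lat, lon)` tuples.

```python
def build_route(points, start_waypoint, return_waypoint):
    """Order the group's interest points and bracket them with the
    starting and return waypoints to form the inspection route."""
    # Sort by longitude, then latitude (one simple ordering choice).
    ordered = sorted(points, key=lambda p: (p[1], p[0]))
    return [start_waypoint, *ordered, return_waypoint]
```

The resulting list can be sent to the unmanned aerial vehicle alongside the per-point inspection parameters.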
In addition, as an optional mode, after the first problem information of an interest area with a set problem is obtained, the corresponding interest point may be associated to the corresponding interest point group according to the first problem information and the interest point corresponding to that interest area.
For example, after a certain inspection and its image recognition are completed, the problem type corresponding to a certain interest area is obtained, and the interest point corresponding to that interest area may then be associated to the interest point group corresponding to that problem type. When the electronic device next instructs the unmanned aerial vehicle to acquire images according to that interest point group, the unmanned aerial vehicle will also inspect the newly associated interest point.
Hereinafter, a process of determining the region of interest mentioned in the foregoing step S201 will be explained.
Alternatively, before the first image of the region of interest is acquired in the foregoing step S201, the following process may be performed to determine the region of interest.
First, a current image of the operation area is obtained, and the current image is processed using a pre-trained problem recognition model to determine the interest areas with set problems in the operation area. Then, based on each interest area, a shooting point whose camera shooting range covers the interest area is determined to obtain the original information of the interest point corresponding to that interest area, where the original information includes: the position information of the interest point.
Optionally, the original information may further include: the course angle, the flying height and the pitch angle of the camera when the unmanned aerial vehicle shoots the image at the interest point. This information may be referred to as the tour inspection parameters of the drone at the point of interest.
The current image of the operation area may be collected in advance by an unmanned aerial vehicle or a field camera. After the current image is obtained, it can be input into the problem recognition model to obtain the interest areas with set problems in the operation area. A shooting point whose camera shooting range covers an interest area can then be used as the interest point corresponding to that interest area. The electronic device can record the position information of the interest point, together with the course angle, the flying height and the camera pitch angle used by the unmanned aerial vehicle when shooting at that point, as the original information of the interest point.
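The geometric relationship between an interest area and its shooting point can be sketched as follows, under stated assumptions: pitch is measured down from horizontal, the drone backs off from the area centre opposite to its course angle so the optical axis hits the centre, and a flat-earth approximation (1° of latitude ≈ 111320 m) is used. None of this geometry is spelled out in the application text; it is one plausible construction.

```python
import math

def shooting_point(roi_lat, roi_lon, height_m, pitch_deg, course_deg):
    """Place the drone so the camera's optical axis hits the interest
    area centre. At pitch 90 degrees the drone hovers directly above."""
    # Horizontal standoff from the area centre for the given pitch.
    standoff = height_m / math.tan(math.radians(pitch_deg))
    # Back off opposite to the course direction (flat-earth offsets).
    dlat = -standoff * math.cos(math.radians(course_deg)) / 111_320.0
    dlon = -standoff * math.sin(math.radians(course_deg)) / (
        111_320.0 * math.cos(math.radians(roi_lat)))
    return roi_lat + dlat, roi_lon + dlon
```

The returned coordinates, plus the flying height, pitch angle and course angle used to compute them, together form the original information of the interest point.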
It should be noted that, the manner of determining the interest point of the interest area may be selected from or used in combination with the manner of obtaining the location information of the interest point in the foregoing step S701, which is not specifically limited in this application.
In the specific implementation process, when the electronic device is a cloud platform or a control terminal of an unmanned aerial vehicle, the electronic device can complete automatic inspection and problem identification through direct interaction with the unmanned aerial vehicle. Alternatively, when the electronic device is a cloud platform, it can also control the inspection of the unmanned aerial vehicle through the control terminal. In this case, the electronic device sends the position information and inspection parameters of the interest points corresponding to the determined interest areas to the control terminal, and the control terminal controls the unmanned aerial vehicle to perform inspection. Or, when the electronic device is the control terminal of the unmanned aerial vehicle, the control terminal can itself determine the position information and inspection parameters of the interest points corresponding to the interest areas and control the unmanned aerial vehicle to perform inspection.
The following describes a process of controlling the unmanned aerial vehicle to patrol by the electronic device.
Fig. 8 is a schematic flowchart of a method for inspecting an interest area according to an embodiment of the present application, where an execution subject of the method is the electronic device. For convenience of understanding, the following description will take an electronic device as an example of a control terminal. As shown in fig. 8, the method includes:
S801, determining original information of an interest area in the operation area.
S802, controlling the unmanned aerial vehicle to patrol the interest area based on the original information of the interest area in the operation area.
Wherein, the original information may include: the location information of the interest point corresponding to the interest area may further include: the course angle, the flying height, the pitch angle of the camera and the like when the unmanned aerial vehicle takes images at the interest points.
In an alternative mode, the control terminal may determine, based on a current image of the operation area, the interest areas in the operation area where set problems exist, and determine, based on each interest area, a shooting point whose camera shooting range covers the interest area, so as to obtain the original information of the interest point corresponding to that interest area. For the specific processing procedure of this method, reference may be made to the foregoing embodiments, which are not described here again. On this basis, the control terminal can control the unmanned aerial vehicle to inspect the interest area according to the position information of the interest point.
In another alternative, the operation area corresponds to at least one interest point group, each interest point group corresponds to at least one interest point, and each interest point group corresponds to one problem type. The control terminal acquires the interest point group that needs to be inspected, and determines the interest points that need to be inspected according to that interest point group. For the specific processing procedure of this method, reference may be made to the foregoing embodiments, which are not described here again. On this basis, the control terminal can control the unmanned aerial vehicle to inspect the interest area according to the position information of the interest points.
As an optional implementation manner, when the control terminal inspects the interest area according to the location information of the interest point, the following process may be performed:
When the interest areas to be inspected include a plurality of interest areas, an inspection route is generated based on the starting waypoint, the interest points and the return waypoint. The unmanned aerial vehicle is then controlled to inspect along the inspection route; when the unmanned aerial vehicle travels to any interest point along the route, it is controlled to fly at the course angle and flying height corresponding to the current interest point, and the camera carried by the unmanned aerial vehicle is controlled to shoot the interest area at the pitch angle corresponding to the current interest point.
For example, the control terminal may generate route information according to an order of the points of interest in the point of interest group, for example, an order of longitude and latitude, and the starting waypoint and the returning waypoint, and the unmanned aerial vehicle performs routing inspection according to the route information. When the unmanned aerial vehicle runs to a certain interest point along the air route, the control terminal can control the unmanned aerial vehicle to fly at the course angle and the flying height corresponding to the interest point, and meanwhile, the camera of the unmanned aerial vehicle is controlled to shoot according to the pitch angle corresponding to the interest point, so that the image of the interest area is obtained.
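The per-waypoint control loop just described can be sketched as a flat command sequence. The `fly_to` and `shoot` command names are hypothetical, not any real flight-controller API; the point is only that each interest point contributes its own course angle, flying height and pitch angle.

```python
from collections import namedtuple

POI = namedtuple("POI", "lat lon height_m course_deg pitch_deg")

def inspection_commands(route_pois):
    """For each interest point on the route: fly at that point's course
    angle and flying height, then shoot at its pitch angle."""
    cmds = []
    for p in route_pois:
        cmds.append(("fly_to", p.lat, p.lon, p.height_m, p.course_deg))
        cmds.append(("shoot", p.pitch_deg))
    return cmds
```

Because interest points in one group share the same or similar parameters, consecutive commands rarely change these values, which is the efficiency point made below.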
In this embodiment, the control terminal controls the unmanned aerial vehicle to inspect along the inspection route and to acquire images at the course angle, flying height and pitch angle corresponding to each interest point, so redundant flight can be avoided during inspection. Meanwhile, since each interest point group corresponds to one problem type, the unmanned aerial vehicle flying by group can also avoid the performance degradation caused by frequently modifying inspection parameters such as the course angle, flying height and pitch angle.
Based on the same inventive concept, the embodiments of the present application further provide an interest area problem identification device corresponding to the interest area problem identification method, and an interest area inspection device corresponding to the interest area inspection method. Since the principle and technical effect with which these devices solve the problem are similar to those of the interest area problem identification method and the interest area inspection method in the embodiments of the present application, the implementation of the devices can refer to the implementation of the methods, and repeated parts are not described again.
Fig. 9 is a block diagram of a device for identifying a problem in an area of interest according to an embodiment of the present application, where as shown in fig. 9, the device includes:
an obtaining module 901, configured to obtain a first image of a region of interest in a work region.
A processing module 902, configured to input the first image into a problem identification model, determine whether a set problem exists in the region of interest, and obtain first problem information of the region of interest with the set problem, where the first problem information includes: a problem type of the region of interest.
An output module 903, configured to output the first question information.
In an alternative embodiment, the problem identification model comprises: a plurality of sub-problem identification models, each sub-problem identification model for identifying a problem type;
the processing module 902 is specifically configured to:
respectively inputting the first image into each sub-problem identification model to obtain the probability information of the problem type output by each sub-problem identification model; and determining, according to the probability information of the various problem types, whether the interest area has a set problem and the problem type of the interest area with the set problem.
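The fusion step can be sketched as follows: each sub-problem identification model contributes one probability for its problem type, and any type whose probability clears a threshold is reported for the region. The 0.5 threshold and the dict-of-probabilities interface are illustrative assumptions, not fixed by the application.

```python
def detected_problem_types(type_probs, threshold=0.5):
    """type_probs maps a problem type to the probability output by its
    sub-problem identification model for the first image; the region
    has a set problem iff at least one probability clears the threshold."""
    return {ptype for ptype, prob in type_probs.items() if prob >= threshold}

def has_set_problem(type_probs, threshold=0.5):
    # The region has a set problem when any type is detected.
    return bool(detected_problem_types(type_probs, threshold))
```

The detected set becomes the problem-type part of the first problem information.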
In an optional implementation, the first question information further includes: a first target location in the first image of a problem in the region of interest.
In an optional implementation manner, the output module 903 is specifically configured to:
displaying the first image; when the interest area has a set problem, displaying the corresponding problem type of the interest point corresponding to the interest area on the first target position of the first image in an overlapping mode.
In an alternative embodiment, the processing module 902 is further configured to:
for an interest area with a setting problem, acquiring a second image of the interest area, wherein the acquisition time of the second image is later than that of the first image; inputting the second image into the problem identification model, determining whether the interest area has a set problem, and obtaining second problem information of the interest area with the set problem, wherein the second problem information comprises: a question type of a region of interest in which a set question exists, a second target position of the question in the region of interest in the second image; outputting prompt information according to the first question information and the second question information; the prompt message is used for indicating that: whether the problem at the first target location is resolved.
In an optional implementation, the processing module 902 is specifically configured to:
determining whether the first target location and the second target location correspond to the same geographic area.
When the first target position and the second target position correspond to the same geographical area, the output prompt information is used for indicating that the problem at the first target position is not solved.
When the first target position and the second target position do not correspond to the same geographical area, the output prompt message is used for indicating that the problem at the first target position is solved.
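The "same geographic area" test above can be sketched as a distance tolerance on the geolocated target positions. The haversine great-circle distance and the 2-meter tolerance are illustrative choices; the application does not fix how the comparison is made.

```python
import math

def same_geographic_area(pos_a, pos_b, radius_m=2.0):
    """Treat two (lat, lon) target positions, in degrees, as the same
    geographic area when their great-circle distance is within a
    tolerance (2 m here is an assumed value)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*pos_a, *pos_b))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    dist = 2 * 6_371_000 * math.asin(math.sqrt(a))  # mean Earth radius in m
    return dist <= radius_m
```

If the positions match, the prompt reports the first problem as unsolved; otherwise it reports it as solved.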
In an optional embodiment, the prompt message is further used to indicate: and whether setting problems exist in other areas except the first target position in the interest area or not.
The processing module 902 is specifically configured to:
when the first target position and the second target position do not correspond to the same geographical area, the output prompt message is further used for indicating that: the other areas have setup problems.
In an optional implementation, the obtaining module 901 is specifically configured to:
and sending the position information of the interest point corresponding to the interest area to the unmanned aerial vehicle.
And acquiring a first image of the interest area shot by the unmanned aerial vehicle at the interest point according to the position information.
Wherein the position information of the interest point is obtained by at least one of the following methods:
obtaining the data based on the interest points selected by the user in the map corresponding to the operation area;
identifying a map corresponding to the operation area through a pre-trained interest point identification model to obtain an interest point;
and identifying the map corresponding to the operation area through a pre-trained interest point identification model to obtain a plurality of interest points, and acquiring the interest points selected by the user from the interest points.
In an optional implementation manner, the work area corresponds to at least one interest point group, each interest point group corresponds to at least one interest point, and each interest point group corresponds to one question type.
In an optional implementation, the processing module 902 is specifically configured to:
and acquiring a target interest point group required to be inspected.
And determining the target interest points to be inspected according to the target interest point groups.
And sending the position information of the target interest point to the unmanned aerial vehicle, or sending the position information of the target interest point and the route generated based on the target interest point to the unmanned aerial vehicle.
And/or associating the interest points with corresponding interest point groups according to the first question information and the interest points corresponding to the interest areas.
In an alternative embodiment, the processing module 902 is further configured to:
acquiring a current image of a working area;
processing the current image by using a pre-trained problem recognition model, and determining an interest area with a set problem in the operation area;
determining shooting points of which the camera shooting range covers the interest area based on the interest area to obtain original information of the interest points corresponding to the interest area, wherein the original information comprises: location information of points of interest.
In an optional implementation, the original information further includes: the course angle, the flying height and the pitch angle of the camera when the unmanned aerial vehicle shoots the image at the interest point.
Fig. 10 is a block diagram of a device for inspecting an area of interest according to an embodiment of the present disclosure, and as shown in fig. 10, the device includes:
an obtaining module 1001 is configured to obtain original information of a region of interest in a work region.
The processing module 1002 is configured to control the unmanned aerial vehicle to patrol the interest area based on original information of the interest area in the work area; the original information includes: location information of points of interest.
Controlling the unmanned aerial vehicle to inspect the interest area based on the original information of the interest area in the operation area includes:
determining and obtaining an interest area with a set problem in a working area based on a current image of the working area; determining shooting points of which the camera shooting range covers the interest area based on the interest area to obtain original information of the interest points corresponding to the interest area; controlling an unmanned aerial vehicle to patrol the interest area according to the position information of the interest point;
or, the operation area corresponds to at least one interest point group, each interest point group corresponds to at least one interest point, and each interest point group corresponds to one problem type; in this case, controlling the unmanned aerial vehicle to inspect the interest area based on the original information of the interest area in the operation area includes:
acquiring interest point groups required to be inspected; determining interest points to be inspected according to the interest point groups; and controlling the unmanned aerial vehicle to patrol the interest area according to the position information of the interest point.
In an optional implementation, the processing module 1002 is specifically configured to:
when the interest area to be inspected comprises a plurality of interest areas, generating an inspection route based on the starting waypoint, each interest point and the return waypoint;
and controlling the unmanned aerial vehicle to patrol according to the patrol route, controlling the unmanned aerial vehicle to fly at a course angle and a flying height corresponding to the current interest point when the unmanned aerial vehicle runs to any interest point along the patrol route, and controlling a camera carried by the unmanned aerial vehicle to shoot the interest area at a pitch angle corresponding to the current interest point.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
The embodiment of the application further provides an electronic device 1100, and the electronic device can be a cloud platform or a control terminal of an unmanned aerial vehicle. Fig. 11 is a schematic structural diagram of an electronic device 1100 according to an embodiment of the present application, and as shown in fig. 11, the electronic device 1100 includes: a processor 1101, a memory 1102, and a bus 1103. The memory 1102 stores machine-readable instructions executable by the processor 1101, the processor 1101 and the memory 1102 communicating via the bus 1103 when the electronic device 1100 is operating, the machine-readable instructions when executed by the processor 1101 perform the method steps performed by the electronic device of the above-described method embodiments.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the method for identifying a problem in an area of interest or the method for inspecting an area of interest.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to corresponding processes in the method embodiments, and are not described in detail in this application. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and there may be other divisions in actual implementation, and for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.

Claims (17)

1. A method for identifying a problem in a region of interest, comprising:
acquiring a first image of a region of interest in a working region;
inputting the first image into a problem recognition model, determining whether the interest area has a set problem, and obtaining first problem information of the interest area with the set problem, wherein the first problem information comprises: a question type for the area of interest;
and outputting the first question information.
2. The method of claim 1, wherein the problem identification model comprises: a plurality of sub-problem identification models, each sub-problem identification model for identifying a problem type;
the inputting the first image into a problem recognition model, determining whether the interest area has a set problem and obtaining first problem information of the interest area with the set problem comprises:
respectively inputting the first image into each subproblem recognition model to obtain probability information of the problem types respectively output by each subproblem recognition model;
and determining whether the interest area has a set problem or not and the problem type of the interest area with the set problem according to the probability information of various problem types.
3. The method of claim 1, wherein the first question information further comprises: a first target location in the first image of a problem in the region of interest.
4. The method of claim 1, wherein after outputting the first question information, further comprising:
for an interest area with a setting problem, acquiring a second image of the interest area, wherein the acquisition time of the second image is later than that of the first image;
inputting the second image into the problem identification model, determining whether the interest area has a set problem, and obtaining second problem information of the interest area with the set problem, wherein the second problem information comprises: a question type of a region of interest in which a set question exists, a second target position of the question in the region of interest in the second image;
outputting prompt information according to the first question information and the second question information; the prompt message is used for indicating that: whether the problem at the first target location is resolved.
5. The method of claim 4, wherein the outputting prompt information according to the first problem information and the second problem information comprises:
determining whether the first target position and the second target position correspond to the same geographical area;
when the first target position and the second target position correspond to the same geographical area, outputting prompt information for indicating that the problem at the first target position is not resolved;
and when the first target position and the second target position do not correspond to the same geographical area, outputting prompt information for indicating that the problem at the first target position is resolved.
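The position comparison in claim 5 can be sketched as follows (a hypothetical Python sketch, not part of the claims; the distance tolerance used to define "the same geographical area" and the planar coordinates are assumptions introduced for illustration):

```python
# Illustrative sketch of claim 5: decide whether the problem at the first
# target position is resolved by checking whether the second detection falls
# in the same geographical area, here approximated by a distance tolerance.
import math

TOLERANCE_M = 5.0  # assumed radius for "same geographical area"

def same_geographic_area(pos_a, pos_b, tolerance=TOLERANCE_M):
    """Positions are assumed to be planar (x, y) coordinates in metres."""
    dx, dy = pos_a[0] - pos_b[0], pos_a[1] - pos_b[1]
    return math.hypot(dx, dy) <= tolerance

def prompt_for(first_pos, second_pos):
    if same_geographic_area(first_pos, second_pos):
        return "problem at first target position not resolved"
    return "problem at first target position resolved"
```

With the assumed 5 m tolerance, a second detection 1.4 m away yields a "not resolved" prompt, while one 140 m away yields a "resolved" prompt.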
6. The method of claim 5, wherein the prompt information is further used for indicating whether a set problem exists in areas of the interest area other than the first target position;
the outputting, when the first target position and the second target position do not correspond to the same geographical area, prompt information for indicating that the problem at the first target position is resolved comprises:
when the first target position and the second target position do not correspond to the same geographical area, the output prompt information is further used for indicating that a set problem exists in the other areas.
7. The method of any one of claims 1 to 6, wherein the obtaining a first image of an interest area in the operation area comprises:
sending the position information of the interest point corresponding to the interest area to the unmanned aerial vehicle;
acquiring a first image of the interest area shot by the unmanned aerial vehicle at the interest point according to the position information;
wherein the position information of the interest point is obtained by at least one of the following methods:
obtained based on an interest point selected by the user in a map corresponding to the operation area;
obtained by identifying the map corresponding to the operation area through a pre-trained interest point identification model;
and obtained by identifying the map corresponding to the operation area through a pre-trained interest point identification model to obtain a plurality of interest points, and acquiring the interest point selected by the user from the plurality of interest points.
8. The method of claim 7, wherein
the operation area corresponds to at least one interest point group, each interest point group corresponds to at least one interest point, and each interest point group corresponds to one problem type;
the sending of the location information of the interest point corresponding to the interest area to the unmanned aerial vehicle includes:
acquiring a target interest point group to be inspected;
determining target interest points to be inspected according to the target interest point groups;
sending the position information of the target interest point to an unmanned aerial vehicle, or sending the position information of the target interest point and a route generated based on the target interest point to the unmanned aerial vehicle;
and/or,
after obtaining the first problem information of the interest area with the set problem, the method further comprises:
associating the interest point with a corresponding interest point group according to the first problem information and the interest point corresponding to the interest area.
9. The method of claim 1, wherein before obtaining the first image of the interest area in the operation area, the method further comprises:
acquiring a current image of a working area;
processing the current image by using a pre-trained problem recognition model, and determining an interest area with a set problem in the operation area;
determining shooting points of which the camera shooting range covers the interest area based on the interest area to obtain original information of the interest points corresponding to the interest area, wherein the original information comprises: location information of points of interest.
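The shooting-point determination in claim 9 can be illustrated as follows (a hypothetical Python sketch, not part of the claims; the nadir-facing camera, the field-of-view value, and the coverage margin are assumptions introduced for illustration):

```python
# Illustrative sketch of claim 9's shooting-point step: choose a flying
# height at which the camera's ground footprint covers the interest area,
# and place the shooting point directly above the area's centre.
import math

def shooting_point(area_center, area_radius_m, fov_deg=60.0, margin=1.2):
    """Return (x, y, height) such that the camera footprint covers the area.

    For a nadir camera with full field of view fov_deg, the footprint radius
    at height h is h * tan(fov_deg / 2); solve for the height that covers
    the area radius with the given margin.
    """
    half_fov = math.radians(fov_deg) / 2.0
    height = (area_radius_m * margin) / math.tan(half_fov)
    x, y = area_center
    return (x, y, height)
```

The returned triple is the raw position information of the interest point; a real system would also account for terrain height and camera tilt.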
10. The method of claim 3, wherein the outputting the first problem information comprises:
displaying the first image;
and when the interest area has a set problem, displaying identification information in an overlaid manner at the first target position of the first image, wherein the identification information is used for identifying the problem type corresponding to the interest point of the interest area.
11. A method for inspecting an interest area, comprising:
controlling an unmanned aerial vehicle to inspect the interest area based on the original information of the interest area in the operation area; the original information includes: location information of points of interest;
wherein, the original information based on the interest area in the operation area controls the unmanned aerial vehicle to patrol the interest area, including:
determining, based on a current image of the operation area, an interest area with a set problem in the operation area; determining shooting points of which the camera shooting range covers the interest area, based on the interest area, to obtain original information of the interest point corresponding to the interest area; and controlling the unmanned aerial vehicle to patrol the interest area according to the position information of the interest point;
or, the operation area corresponds to at least one interest point group, each interest point group corresponds to at least one interest point, and each interest point group corresponds to one problem type; the unmanned aerial vehicle is controlled to patrol the interest area based on the original information of the interest area in the operation area, and the method comprises the following steps:
acquiring interest point groups required to be inspected; determining interest points to be inspected according to the interest point groups; and controlling the unmanned aerial vehicle to patrol the interest area according to the position information of the interest point.
12. The method of claim 11, wherein controlling the drone to patrol the area of interest according to the location information of the point of interest comprises:
when the interest area to be inspected comprises a plurality of interest areas, generating an inspection route based on the starting waypoint, each interest point and the return waypoint;
and controlling the unmanned aerial vehicle to patrol according to the inspection route, and, when the unmanned aerial vehicle flies to any interest point along the inspection route, controlling the unmanned aerial vehicle to fly at the course angle and flying height corresponding to the current interest point, and controlling a camera carried by the unmanned aerial vehicle to shoot the interest area at the pitch angle corresponding to the current interest point.
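The route generation of claim 12 can be sketched as follows (a hypothetical Python sketch, not part of the claims; the greedy nearest-neighbour ordering of interest points is an assumption introduced for illustration — the claim only requires a route through the start waypoint, each interest point, and the return waypoint):

```python
# Illustrative sketch of claim 12: build an inspection route from a start
# waypoint through every interest point back to a return waypoint. Each
# interest-point waypoint could also carry its own course angle, flying
# height, and camera pitch angle.
import math

def build_route(start, points, finish):
    """start/finish/points are dicts with at least a 'pos' (x, y) entry."""
    route = [start]
    remaining = list(points)
    current = start
    while remaining:  # greedy nearest-neighbour ordering (an assumption)
        nxt = min(remaining, key=lambda p: math.dist(current["pos"], p["pos"]))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(finish)
    return route

route = build_route(
    {"pos": (0.0, 0.0)},
    [{"pos": (10.0, 0.0)}, {"pos": (5.0, 0.0)}],
    {"pos": (0.0, 0.0)},
)
```

Starting from (0, 0), the greedy ordering visits (5, 0) before (10, 0) and then returns, which is what a distance-minimising pass over these two points would do.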
13. An interest area problem identification apparatus, comprising:
the acquisition module is used for acquiring a first image of an interest area in the operation area;
a processing module, configured to input the first image into a problem identification model, determine whether a set problem exists in the interest area, and obtain first problem information of the interest area with the set problem, wherein the first problem information comprises: a problem type of the interest area;
and the output module is used for outputting the first problem information.
14. An interest area inspection device, comprising:
the acquisition module is used for acquiring original information of an interest area in the operation area;
the control module is used for controlling the unmanned aerial vehicle to inspect the interest area based on the original information of the interest area in the operation area; the original information includes: location information of points of interest;
wherein, the original information based on the interest area in the operation area controls the unmanned aerial vehicle to patrol the interest area, including:
determining, based on a current image of the operation area, an interest area with a set problem in the operation area; determining shooting points of which the camera shooting range covers the interest area, based on the interest area, to obtain original information of the interest point corresponding to the interest area; and controlling the unmanned aerial vehicle to patrol the interest area according to the position information of the interest point;
or, the operation area corresponds to at least one interest point group, each interest point group corresponds to at least one interest point, and each interest point group corresponds to one problem type; the unmanned aerial vehicle is controlled to patrol the interest area based on the original information of the interest area in the operation area, and the method comprises the following steps:
acquiring interest point groups required to be inspected; determining interest points to be inspected according to the interest point groups; and controlling the unmanned aerial vehicle to patrol the interest area according to the position information of the interest point.
15. An intelligent agricultural management system, comprising the area-of-interest problem recognition apparatus of claim 13 and/or the area-of-interest inspection apparatus of claim 14.
16. An electronic device, comprising: a processor and a storage medium, wherein the storage medium stores machine-readable instructions executable by the processor, and the processor, when executing the machine-readable instructions, implements the steps of the interest area problem identification method according to any one of claims 1 to 10 or the steps of the interest area inspection method according to claim 11 or 12.
17. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the interest area problem identification method according to any one of claims 1 to 10 or the steps of the interest area inspection method according to claim 11 or 12.
CN202011624229.4A 2020-12-31 2020-12-31 Interest area problem identification method, interest area inspection method and device Pending CN112733845A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011624229.4A CN112733845A (en) 2020-12-31 2020-12-31 Interest area problem identification method, interest area inspection method and device

Publications (1)

Publication Number Publication Date
CN112733845A true CN112733845A (en) 2021-04-30

Family

ID=75609628

Country Status (1)

Country Link
CN (1) CN112733845A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205144A (en) * 2021-05-13 2021-08-03 北京三快在线科技有限公司 Model training method and device
CN113325872A (en) * 2021-06-10 2021-08-31 广州极飞科技股份有限公司 Plant inspection method, device and system and aircraft

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109446959A (en) * 2018-10-18 2019-03-08 广州极飞科技有限公司 Partitioning method and device, the sprinkling control method of drug of target area
CN111027422A (en) * 2019-11-27 2020-04-17 国网山东省电力公司电力科学研究院 Emergency unmanned aerial vehicle inspection method and system applied to power transmission line corridor
CN112073681A (en) * 2020-07-14 2020-12-11 中国电力科学研究院有限公司 Processing method and system for routing inspection, positioning and shooting images of overhead power line unmanned aerial vehicle
CN112101088A (en) * 2020-07-27 2020-12-18 长江大学 Automatic unmanned aerial vehicle power inspection method, device and system
CN112154447A (en) * 2019-09-17 2020-12-29 深圳市大疆创新科技有限公司 Surface feature recognition method and device, unmanned aerial vehicle and computer-readable storage medium

Similar Documents

Publication Publication Date Title
US11869192B2 (en) System and method for vegetation modeling using satellite imagery and/or aerial imagery
CN109978820B (en) Unmanned aerial vehicle route acquisition method, system and equipment based on laser point cloud
JP2021051767A (en) System and method for supporting operation using drone
CN111582234B (en) Large-scale oil tea tree forest fruit intelligent detection and counting method based on UAV and deep learning
WO2020103108A1 (en) Semantic generation method and device, drone and storage medium
CN111008561B (en) Method, terminal and computer storage medium for determining quantity of livestock
CN112733845A (en) Interest area problem identification method, interest area inspection method and device
CN113252053B (en) High-precision map generation method and device and electronic equipment
JP2020091640A (en) Object classification system, learning system, learning data generation method, learned model generation method, learned model, discrimination device, discrimination method, and computer program
CN108124479A (en) Map labeling method and device, cloud server, terminal and application program
CN112529836A (en) High-voltage line defect detection method and device, storage medium and electronic equipment
CN114332435A (en) Image labeling method and device based on three-dimensional reconstruction
CN115228092A (en) Game battle force evaluation method, device and computer readable storage medium
CN112236727B (en) Unmanned aerial vehicle simulation flight method and device and recording medium
JP6847424B2 (en) Detection device, detection method, computer program and learning model
CN113822348A (en) Model training method, training device, electronic device and readable storage medium
CN111860344A (en) Method and device for determining number of target objects in image
CN112632309A (en) Image display method and device, electronic equipment and storage medium
JP6495559B2 (en) Point cloud processing system
CN116501093B (en) Aircraft landing navigation method and device based on air corridor
CN113052941B (en) Regional power facility state analysis method and image acquisition method
US20240005770A1 (en) Disaster information processing apparatus, disaster information processing system, disaster information processing method, and program
CN115909101A (en) Power equipment fault identification and marking method and system based on color gamut fusion
CN114240921A (en) Area inspection method, device, electronic equipment, system and readable storage medium
CN116912335A (en) Color difference detection method, system, equipment and storage medium for tile image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination