
WO2024154504A1 - Information processing method, information processing system, and program - Google Patents


Info

Publication number
WO2024154504A1
Authority
WO
WIPO (PCT)
Prior art keywords
lines, line, information, straight, image
Prior art date
Application number
PCT/JP2023/045096
Other languages
French (fr), Japanese (ja)
Inventor
雅弘 小川
裕樹 丸目
竜司 山田
Original Assignee
株式会社センシンロボティクス
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社センシンロボティクス
Publication of WO2024154504A1 publication Critical patent/WO2024154504A1/en

Links

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B64 - AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U - UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U 20/00 - Constructional aspects of UAVs
    • B64U 20/80 - Arrangement of on-board electronics, e.g. avionics systems or wiring
    • B64U 20/87 - Mounting of imaging devices, e.g. mounting of gimbals
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 - Systems specially adapted for particular applications
    • G01N 21/88 - Investigating the presence of flaws or contamination
    • G01N 21/89 - Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles
    • G01N 21/892 - Investigating the presence of flaws or contamination in moving material, e.g. running paper or textiles, characterised by the flaw, defect or object feature examined
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/60 - Analysis of geometric attributes
    • H - ELECTRICITY
    • H02 - GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02G - INSTALLATION OF ELECTRIC CABLES OR LINES, OR OF COMBINED OPTICAL AND ELECTRIC CABLES OR LINES
    • H02G 1/00 - Methods or apparatus specially adapted for installing, maintaining, repairing or dismantling electric cables or lines
    • H02G 1/02 - Methods or apparatus specially adapted for installing, maintaining, repairing or dismantling electric cables or lines for overhead lines or cables

Definitions

  • the present invention relates to an information processing method, an information processing system, and a program.
  • Patent Document 1 discloses a system that uses flying objects to photograph and inspect power lines.
  • When identifying a target power line in order to track the power line under inspection, one possible method is to perform edge detection on the captured image, extract the edges of the power line, and identify its position.
  • However, edge detection may extract a large number of edges that become noise, making it difficult to identify the edges of the target power line.
  • The edge of an unrelated linear object (especially a long and thin object) may end up being tracked instead, causing the target power line to be lost and making it difficult to perform a proper inspection.
  • The power lines strung on each cross arm of a support, such as a steel tower, may each consist of a single power line (single-conductor system), but may also be of a multi-conductor system in which each cross arm carries two or more power lines (conductors).
  • For a multi-conductor power line, it is necessary to identify which of the edges of the multiple linear objects (particularly long and thin objects) extracted from the captured image belong to the target power line, which is more difficult than identifying a single target power line.
  • The present invention was made in light of this background, and one of its objectives is to provide an information processing method, information processing system, and program that can extract straight lines corresponding to power lines of the multi-conductor type, in addition to the single-conductor type, from images captured by an aerial vehicle.
  • One aspect is an information processing method for extracting straight lines corresponding to a plurality of multi-conductor power lines from an image captured by an unmanned aerial vehicle, including: a step of generating an edge image from the captured image by an edge image generating unit; and a step of extracting, by a specific straight line extracting unit, from the straight lines in the edge image, straight lines corresponding to the expected number that fall under the extraction conditions, based on straight line extraction condition information indicating extraction conditions for the straight lines constituting the plurality of power lines, the straight line extraction condition information including at least conditions related to straight line voting number information obtained by a Hough transform of the edges in the generated edge image and expected number information indicating the expected number of straight lines of the plurality of power lines expected in the edge image.
  • the present invention provides an information processing method, information processing system, and program that can extract straight lines corresponding to multiple multi-conductor power lines from images captured by an aircraft.
  • FIG. 1 is a diagram showing an overall configuration of an embodiment of the present invention.
  • FIG. 2 is a diagram showing a system configuration of an information processing system according to an embodiment of the present invention.
  • FIG. 3 is a block diagram showing a hardware configuration of the terminal of FIG. 2.
  • FIG. 4 is a block diagram showing a hardware configuration of the server of FIG. 2.
  • FIG. 5 is a block diagram showing a hardware configuration of the unmanned aerial vehicle of FIG. 2.
  • FIG. 6 is a block diagram showing the functions of the terminal and the server of FIG. 2.
  • FIG. 7 is an example of an image acquired from the imaging unit of an unmanned aerial vehicle.
  • FIG. 8 is an example of an edge image generated from the image of FIG. 7.
  • FIG. 9 is another example of an edge image.
  • FIG. 10 is an example of an image in which straight lines estimated using straight line voting number information are visualized.
  • FIGS. 11 to 13 are examples of images in which extracted specific straight lines are visualized.
  • FIG. 14 is an example of an image in which a tracking target position is visualized.
  • FIG. 15 is a flowchart showing a process for implementing an edge extraction method by the information processing system according to the present embodiment.
  • An information processing method, an information processing system, and a program according to the embodiments of the present invention have the following configuration.
  • [Item 1] An information processing method for extracting straight lines corresponding to multiple power lines of a multi-conductor system from an image captured by an unmanned aerial vehicle, comprising: a step of generating an edge image from the captured image by an edge image generating unit; and a step of extracting, by a specific line extraction unit, from the lines in the edge image, lines corresponding to the assumed number that fall under the extraction conditions, based on line extraction condition information indicating extraction conditions for the lines constituting the multiple power lines, the line extraction condition information including at least conditions related to line vote number information obtained by a Hough transform of the edges in the generated edge image and assumed number information indicating the assumed number of lines of the multiple power lines assumed in the edge image.
  • [Item 2] The information processing method according to item 1, wherein the number of assumed straight lines is twice the number of the plurality of power lines, and the extraction condition is that there are at least the assumed number of straight lines parallel to a first straight line having the largest number of line votes, including the first straight line.
  • [Item 3] The information processing method according to item 1, further including a step of detecting, by the edge image generation unit, in each pixel row in a predetermined direction, a portion where the pixel value increases by more than a reference value as rising edge information and a portion where the pixel value decreases by more than a reference value as falling edge information, wherein the number of assumed straight lines is the same as the number of the plurality of power lines, and the extraction condition is that lines parallel to a first line having the largest number of line votes, among either the lines corresponding to the rising edge information or the lines corresponding to the falling edge information, each form a pair with a nearby line of the other kind, and the number of such pairs is the assumed number of lines.
  • [Item 4] The information processing method according to item 1, further including a step of determining, by a gradient determination unit, by gradient voting, a first gradient that is the most prevalent among the gradients of the plurality of straight lines detected by the specific straight line extraction unit, wherein the number of assumed straight lines is twice the number of the plurality of power lines, the extraction condition is that at least the assumed number of straight lines have a gradient within a predetermined range of the first gradient determined by the gradient determination unit, and the specific line extraction unit extracts, from the lines having a gradient within the predetermined range of the determined most common gradient, a number of lines equal to the assumed number, in descending order of the number of line votes.
  • [Item 5] The information processing method according to item 1, further including: a step of detecting, by the edge image generation unit, in each pixel row in a predetermined direction, a portion where the pixel value increases by more than a reference value as rising edge information and a portion where the pixel value decreases by more than a reference value as falling edge information; and a step of determining, by a gradient determining unit, by gradient voting, a first gradient that is the most prevalent gradient among the straight lines corresponding to either the rising edge information or the falling edge information, wherein the assumed number of straight lines is the number of the plurality of power lines, and the extraction condition is that a plurality of pairs are formed, each by a straight line corresponding to either the rising edge information or the falling edge information that is determined to have a gradient within a predetermined range of the first gradient, and a straight line corresponding to the other of the rising edge information or the falling edge information in the vicinity thereof.
  • [Item 6] The information processing method according to item 1, further including: a step of detecting, by the edge image generation unit, in each pixel row in a predetermined direction, a portion where the pixel value increases by more than a reference value as rising edge information and a portion where the pixel value decreases by more than a reference value as falling edge information; and a step of determining, by a gradient determining unit, by gradient voting, a first gradient that is the most prevalent among the gradients of a plurality of pairs, each formed by a straight line corresponding to either the rising edge information or the falling edge information and an adjacent straight line corresponding to the other, wherein the assumed number of straight lines is the number of the plurality of power lines, the extraction condition is that the number of pairs determined to have a gradient within a predetermined range of the first gradient is at least the assumed number of straight lines, and the lines corresponding to those pairs are extracted.
  • [Item 7] The information processing method according to item 1, further including a step of setting, by a tracking target position setting unit, a target position to be tracked by the imaging unit of the unmanned aerial vehicle, based on the straight lines extracted by the specific straight line extraction unit.
  • An information processing system comprising the above units. [Item 9] A program for causing a computer to execute an information processing method for extracting straight lines corresponding to a plurality of multi-conductor power lines from an image captured by an unmanned aerial vehicle, the information processing method including: a step of generating an edge image from the captured image by an edge image generating unit; and a step of extracting, by a specific line extraction unit, from the lines in the edge image, lines corresponding to the assumed number that fall under the extraction conditions, based on the line extraction condition information.
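As a rough, non-authoritative sketch of the gradient voting described in the items above (the tuple layout, the one-decimal bucketing, and all names are assumptions for illustration, not the patent's implementation):

```python
from collections import Counter

def most_common_gradient(lines, expected_count, slope_tol):
    """Gradient-voting sketch: find the most frequent slope among the
    detected lines (the "first gradient"), then keep up to
    `expected_count` lines whose slope lies within `slope_tol` of it,
    in descending order of their Hough vote counts.

    `lines` is a list of (slope, votes) tuples (illustrative format).
    """
    # Vote on slopes, bucketed to one decimal place so near-equal
    # slopes land in the same bin.
    slope_votes = Counter(round(slope, 1) for slope, _ in lines)
    first_gradient = slope_votes.most_common(1)[0][0]
    # Keep lines close in slope to the first gradient, best-voted first.
    near = [ln for ln in lines if abs(ln[0] - first_gradient) <= slope_tol]
    near.sort(key=lambda ln: ln[1], reverse=True)
    return first_gradient, near[:expected_count]

# Four nearly parallel conductor edges plus one steep noise line; the
# noise line has the most Hough votes but loses the gradient vote.
lines = [(0.5, 90), (0.5, 80), (0.52, 70), (0.48, 60), (2.0, 95)]
gradient, selected = most_common_gradient(lines, 4, slope_tol=0.1)
```

Note how this filters on the *prevalence* of a slope rather than on individual vote counts, which is what lets it reject a strongly voted but differently sloped background line.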
  • the information processing system in this embodiment extracts straight lines corresponding to a plurality of power lines (particularly multi-conductor power lines, for example, two, three, four, six, etc. power lines separated by span spacers) from an image captured along the plurality of power lines extending side by side, for example, on a support (for example, a steel tower, etc.).
  • images of the plurality of power lines may be captured by remotely controlling a camera mounted on an unmanned aerial vehicle 4 as shown in Fig. 1, which flies autonomously or remotely, based on instructions from a terminal 1 owned by a user.
  • The information processing system in this embodiment generates an edge image from an image captured by the aerial vehicle, and extracts from the edge image the estimated number of straight lines that satisfy the straight line extraction conditions, based on straight line voting information obtained by a Hough transform of the edges in the edge image and on straight line extraction conditions that reflect estimated number information indicating the number of straight lines related to the power lines expected in the captured image.
  • the information processing system in this embodiment has a terminal 1, a server 2, and an unmanned aerial vehicle 4.
  • the terminal 1, the server 2, and the unmanned aerial vehicle 4 may be connected to each other so as to be able to communicate with each other via a network NW.
  • the illustrated configuration is an example, and is not limited to this, and for example, the unmanned aerial vehicle 4 may not be connected to the network NW.
  • For example, the unmanned aerial vehicle 4 may be operated by a transmitter (so-called radio control) operated by a user. Image data acquired by a camera of the unmanned aerial vehicle 4 may be stored in an auxiliary storage device (for example, a memory card such as an SD card, or a USB memory) connected to the unmanned aerial vehicle 4, and the user may read the image data out of the auxiliary storage device into the terminal 1 or the server 2 after the fact. The unmanned aerial vehicle 4 may also be connected to the network NW only for the purpose of operation or of storing image data.
  • <Hardware configuration of terminal 1> FIG. 3 is a diagram showing a hardware configuration of the terminal 1 in this embodiment. Note that the illustrated configuration is an example, and the terminal 1 may have other configurations.
  • the terminal 1 includes at least a processor 10, a memory 11, a storage 12, a transmission/reception unit 13, an input/output unit 14, etc., which are electrically connected to each other via a bus 15.
  • the terminal 1 may be, for example, a general-purpose computer such as a workstation or a personal computer.
  • the processor 10 is a computing device that controls the operation of the entire terminal 1, controls the transmission and reception of data between each element, and performs information processing necessary for application execution and authentication processing.
  • the processor 10 is a CPU (Central Processing Unit) and/or a GPU (Graphics Processing Unit), and executes programs stored in the storage 12 and deployed in the memory 11 to perform various information processing operations.
  • Memory 11 includes a main memory consisting of a volatile storage device such as DRAM (Dynamic Random Access Memory) and an auxiliary memory consisting of a non-volatile storage device such as a flash memory or HDD (Hard Disc Drive). Memory 11 is used as a work area for processor 10, and also stores the BIOS (Basic Input/Output System) that is executed when terminal 1 is started up, various setting information, etc.
  • Storage 12 stores various programs such as application programs.
  • a database that stores data used for each process may be constructed in storage 12.
  • A storage unit 130, which will be described later, may be provided in part of the storage area.
  • the transmission/reception unit 13 is a communication interface that enables the terminal 1 to communicate with an external device (not shown) or the unmanned aerial vehicle 4 via a communication network.
  • the transmission/reception unit 13 may further include a Bluetooth (registered trademark) or BLE (Bluetooth Low Energy) short-range communication interface, a USB (Universal Serial Bus) terminal, etc.
  • the input/output unit 14 includes information input devices such as a keyboard and mouse, and output devices such as a display.
  • the bus 15 is commonly connected to each of the above elements and transmits, for example, address signals, data signals, and various control signals.
  • <Server 2> As shown in FIG. 4, the server 2 also includes a processor 20, a memory 21, a storage 22, a transmission/reception unit 23, an input/output unit 24, etc., which are electrically connected to each other via a bus 25.
  • the server 2 may be a general-purpose computer such as a workstation or a personal computer, or may be logically realized by cloud computing.
  • the functions of each element can be configured in the same way as the terminal 1 described above, so detailed description of each element will be omitted.
  • <Unmanned Aerial Vehicle 4> FIG. 5 is a block diagram showing a hardware configuration of the unmanned aerial vehicle 4.
  • the flight controller 41 can have one or more processors, such as a programmable processor (e.g., a central processing unit (CPU)).
  • Flight controller 41 also has and can access memory 411.
  • Memory 411 stores logic, code, and/or program instructions that the flight controller can execute to perform one or more steps.
  • Flight controller 41 may also include sensors 412, such as inertial sensors (accelerometers, gyro sensors), GPS sensors, and proximity sensors (e.g., lidar).
  • the memory 411 may include, for example, a separable medium such as an SD card or random access memory (RAM) or an external storage device. Data acquired from the camera/sensors 42 may be directly transmitted to and stored in the memory 411. For example, still image/video data captured by a camera or the like may be recorded in an internal memory or an external memory, but is not limited to this, and may be recorded in at least one of the terminal 1 and the server 2 from the camera/sensor 42 or the internal memory via the network NW.
  • the camera 42 is installed on the unmanned aerial vehicle 4 via a gimbal 43.
  • the flight controller 41 includes a control module (not shown) configured to control the state of the unmanned aerial vehicle 4.
  • The control module controls the propulsion mechanism (motor 45, etc.) of the unmanned aerial vehicle 4 via an ESC 44 (Electric Speed Controller) to adjust the spatial arrangement, speed, and/or acceleration of the unmanned aerial vehicle 4, which has six degrees of freedom (translational motion x, y, and z, and rotational motion θx, θy, and θz).
  • The propellers 46 are rotated by the motor 45, which is powered by a battery 48, thereby generating lift for the unmanned aerial vehicle 4.
  • the control module can control one or more of the states of the onboard units and sensors.
  • the flight controller 41 can communicate with a transceiver 47 configured to transmit and/or receive data from one or more external devices (e.g., a transceiver 49, a terminal 1, a display device, or other remote control).
  • a transceiver 49 can use any suitable communication means, such as wired or wireless communication.
  • the transceiver unit 47 can utilize one or more of a local area network (LAN), a wide area network (WAN), infrared, wireless, WiFi, a point-to-point (P2P) network, a telecommunications network, cloud communications, etc.
  • the transmitter/receiver 47 can transmit and/or receive one or more of the following: data acquired by the sensors 42, processing results generated by the flight controller 41, specific control data, user commands from a terminal or a remote controller, etc.
  • the sensors 42 in this embodiment may include inertial sensors (accelerometers, gyro sensors), GPS sensors, proximity sensors (e.g., lidar), or vision/image sensors (e.g., cameras).
  • FIG. 6 is a block diagram illustrating functions implemented in the terminal 1 and the server 2.
  • the terminal 1 includes an image acquisition unit 115, a processing unit 120, and a storage unit 130.
  • the processing unit 120 includes an edge image generation unit 121, a specific straight line extraction unit 122, a slope determination unit 123, and a tracking target position setting unit 124.
  • the storage unit 130 includes an information/image storage unit 131.
  • The various functional units are illustrated as functional units of the processor 10 of the terminal 1, but some or all of them may be realized in any of the processor 10 of the terminal 1, the processor 20 of the server 2, and the controller 41 of the unmanned aerial vehicle 4, depending on the processing capabilities of each.
  • the communication unit 110 communicates with the server 2 and the unmanned aerial vehicle 4 via the network NW.
  • the communication unit 110 also functions as a reception unit that receives various requests and data from the server 2, the unmanned aerial vehicle 4, etc.
  • the image acquisition unit 115 acquires images captured by a digital camera mounted on the unmanned aerial vehicle 4 or a digital camera used by a user from the digital camera, for example, by wireless communication via a communication interface or wired communication via a USB terminal or the like.
  • the image acquisition unit 115 may be configured to acquire images via a storage medium such as a USB memory or SD memory, but it is more preferable to configure the image acquisition unit 115 to acquire images in real time, particularly by wireless communication from the unmanned aerial vehicle 4 or within the unmanned aerial vehicle 4.
  • the processing unit 120 has various functional units that execute a series of processes to extract straight lines corresponding to multiple power lines from an image acquired by the image acquisition unit 115 (for example, an image captured along multiple power lines extending alongside one another, or an image that has been grayscaled or binarized).
  • the edge image generating unit 121 detects edge information in the image acquired by the image acquiring unit 115, and generates an edge image based on the edge information (including the xy coordinates in the image where the detected edge exists).
  • The edge detection method may be any known method (such as a differential filter) as long as the outer edge of the power line in the image can be determined, but may also be a method in which, in each pixel row in a specified direction in the image, a portion where the pixel value (luminance) increases by more than a reference value is detected as rising edge information, and a portion where the pixel value (luminance) decreases by more than a reference value is detected as falling edge information.
  • As for the pixel value information, when an image is binarized the pixel value is 1 for white and 0 for black, and when the image is grayscaled the pixel value is 255 for the whitest pixel and 0 for the blackest pixel. Adjacent pixels in each pixel row in a specified direction in the image are compared to calculate a difference value; pixels whose difference value is equal to or greater than a preset reference value are set to white and the other pixels to black, and an edge image can thus be generated.
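As a toy illustration of this rising/falling detection (the function name, threshold value, and single-row input format are invented for the sketch; the actual unit operates on full images):

```python
def detect_edges(row, threshold):
    """Scan one pixel row (list of 0-255 luminance values) and return
    (rising, falling): the column indices where the value jumps up, or
    drops down, by more than `threshold` relative to the previous pixel.
    """
    rising, falling = [], []
    for x in range(1, len(row)):
        diff = row[x] - row[x - 1]
        if diff > threshold:        # dark background -> bright conductor
            rising.append(x)
        elif diff < -threshold:     # bright conductor -> dark background
            falling.append(x)
    return rising, falling

# A dark background (20) with one bright conductor (200) at columns 4-6:
# the left boundary is a rising edge, the right boundary a falling edge.
row = [20, 20, 20, 20, 200, 200, 200, 20, 20]
rising, falling = detect_edges(row, threshold=100)
```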
  • When generating the edge image of FIG. 8 from the captured image of FIG. 7, if rising and falling edges are determined as described above, each edge can be visualized in the edge image as shown in FIG. 8 by setting, for example, rising edges to green, falling edges to blue, and everything else to black.
  • the processing unit 120 of the present invention further has the following functional units.
  • the specific line extraction unit 122 extracts lines that correspond to the expected number of lines that fall under the extraction conditions from the detected lines, based on line extraction condition information indicating the extraction conditions for lines that constitute the power lines, which includes at least conditions related to line vote number information obtained by performing a Hough transform on edge information in the generated edge image and expected number information indicating the expected number of lines related to the power lines expected in the edge image.
  • the line vote number information is obtained by first executing a Hough transform on the edge information in the edge image by the specific line extraction unit 122.
  • The Hough transform is a method for detecting lines contained in an image by majority vote. For example, when a perpendicular is drawn from the origin to a given line, with ρ the length of the perpendicular and θ the angle between the perpendicular and the x-axis, the xy coordinates of each point constituting an edge in the image are used to vote in the ρθ coordinate space; line vote number information indicating the number of votes is obtained, and the combinations of ρ and θ indicating lines are estimated from it by majority vote.
  • As for the rising edge information and falling edge information, for example, a rising edge image containing only the rising edges and a falling edge image containing only the falling edges are generated, and a Hough transform is performed on each image; this yields line vote number information in which the lines corresponding to rising edges and the lines corresponding to falling edges can be distinguished from each other.
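As a rough pure-Python sketch of this ρθ voting (the accumulator layout, angle discretization, and names are assumptions made for illustration, not the patent's implementation):

```python
import math
from collections import Counter

def hough_votes(edge_points, theta_steps=18):
    """Accumulate straight-line votes in (rho, theta) space.

    For each edge point (x, y) and each discretized angle theta,
    rho = x*cos(theta) + y*sin(theta) is the (rounded) signed distance
    of the candidate line from the origin; the accumulator cell
    (rho, theta_index) receives one vote.  Cells with many votes
    correspond to straight lines passing through many edge points.
    """
    votes = Counter()
    for x, y in edge_points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(rho, t)] += 1
    return votes

# Two vertical "conductor edges" at x = 2 and x = 7: every point of each
# line votes for the same cell (theta index 0), so those cells dominate.
points = [(2, y) for y in range(20)] + [(7, y) for y in range(20)]
votes = hough_votes(points)
top_two = {cell for cell, _ in votes.most_common(2)}
```

Running the transform separately on rising-edge and falling-edge point sets, as the passage above describes, simply means calling a function like this once per edge image.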
  • straight lines as shown in FIG. 10 are estimated by majority vote from the edge image illustrated in FIG. 9.
  • Since the Hough transform obtains "straight lines" that continue infinitely rather than "line segments" of finite length, straight lines that make up power lines can be extracted (estimated) even when their edges are interrupted; on the other hand, even an unrelated long and thin object that is interrupted in the background, as in FIG. 9, can be extracted as a straight line.
  • the expected number information indicating the expected number of straight lines related to the power lines expected in the edge image is information that can be set by the user via the terminal 1 or the like.
  • the expected number information (i.e., the expected number of straight lines) may be information indicating twice the number of lines constituting multiple power lines (e.g., the number of lines constituting a multi-conductor power line and the number of lines bundled together by a span spacer). This is because one power line is composed of two straight lines (outer edges).
  • the expected number information may be information indicating the same number as the number of lines constituting multiple power lines (e.g., the number of lines constituting a multi-conductor power line).
  • one power line is composed of a straight line extracted by one rising edge and a straight line extracted by a falling edge.
  • When the background of the power line has many areas whose pixel values are darker than the power line (e.g., forest, trees, soil, grass, etc.), the boundary where the dark background changes to the power line is detected as rising edge information, and the boundary where the power line returns to the dark background is detected as falling edge information, because in both cases the change in pixel value (brightness) exceeds the reference value; it can therefore be said that one power line is composed of both edges.
  • When the background has many areas whose pixel values are brighter than the power line (e.g., when snow has piled up), the relationship between the rising edge and the falling edge is reversed, but as a result one power line is likewise composed of both edges.
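The observation that one power line yields one rising and one falling edge suggests a simple pairing rule. A minimal sketch, assuming each line in a family of parallel lines is reduced to its ρ value and that `max_gap` approximates the conductor's outer diameter in pixels (all names illustrative):

```python
def pair_edges(rising_rhos, falling_rhos, max_gap):
    """Pair each rising-edge line with the nearest unused falling-edge
    line within `max_gap`; each (rising, falling) pair is taken to be
    one power line."""
    pairs, used = [], set()
    for r in rising_rhos:
        candidates = [f for f in falling_rhos
                      if f not in used and abs(f - r) <= max_gap]
        if candidates:
            f = min(candidates, key=lambda f: abs(f - r))
            used.add(f)
            pairs.append((r, f))
    return pairs

# Two conductors (~3 px wide) and one spurious falling edge at 200,
# which finds no rising partner and is discarded.
pairs = pair_edges([10, 50], [13, 53, 200], max_gap=5)
```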
  • the line extraction condition information indicating the extraction conditions of the lines constituting the multiple power lines is information that can be set by the user via the terminal 1 or the like.
  • The first extraction condition is that there are at least the expected number of straight lines (twice the number of lines constituting the multiple power lines) parallel to the first straight line with the highest number of line votes, including the first straight line; when the specific line extraction unit 122 determines that this extraction condition is satisfied, it extracts the corresponding lines (i.e., the lines constituting the multiple power lines). That is, in FIG.
  • Based on the line vote count information, among the lines extending to the left and right that are determined to be parallel to one another, the first line with the most votes may be selected as a non-target line; alternatively or in addition, the Hough transform may be run again.
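A minimal sketch of the first extraction condition, assuming each candidate line is a (ρ, θ, votes) tuple and parallelism is checked with an angle tolerance (the tuple layout and thresholds are illustrative assumptions):

```python
def first_condition_lines(lines, expected_count, angle_tol):
    """If at least `expected_count` lines (including the top-voted one)
    are parallel to the line with the most Hough votes, return those
    parallel lines; otherwise return an empty list."""
    top = max(lines, key=lambda ln: ln[2])
    parallel = [ln for ln in lines if abs(ln[1] - top[1]) <= angle_tol]
    return parallel if len(parallel) >= expected_count else []

# Four conductors -> eight near-vertical edge lines, plus a noise line
# at a different angle that gets filtered out.
lines = [(10, 0.0, 46), (13, 0.0, 44), (40, 0.0, 43), (43, 0.0, 42),
         (70, 0.0, 41), (73, 0.0, 40), (100, 0.0, 39), (103, 0.0, 38),
         (60, 1.2, 20)]
extracted = first_condition_lines(lines, expected_count=8, angle_tol=0.05)
```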
  • the extracted straight lines may be paired with each other if they are located close to each other (within a predetermined reference distance range, in particular with a width that is approximately the same as the outer diameter of the power line (and furthermore with a width that is approximately the same as the outer diameter of the power line taking into account the front and back sides of the multiple power lines)), and the number of pairs may be the expected number of straight lines (in this case, the same number as the number of lines that make up the multiple power lines).
  • a second extraction condition is to pair a straight line parallel to the first straight line that has the highest number of straight line votes for either the rising edge or the falling edge with a straight line (particularly a straight line parallel to the first straight line) based on the other of either the rising edge or the falling edge that is in its vicinity (within a specified reference distance range, particularly a width that approximately matches the outer diameter of the power line (and furthermore a width that approximately matches the outer diameter of the power line taking into account the front and back sides of the multiple power lines)), and the number of such pairs, including the pair that includes the first straight line, is the expected number of straight lines (the same number as the number of lines that make up the multiple power lines), and when the specific straight line extraction unit 122 determines that this extraction condition is satisfied, the corresponding straight line (i.e., the straight lines that make up the multiple power lines) is extracted.
  • that is, if any of the diagonally extending straight lines (straight lines shown by solid lines in FIG. 12) corresponding to the rising edge is the first straight line, straight lines parallel to the first straight line are determined, and pairs are formed with the straight lines corresponding to the falling edge (straight lines shown by dotted lines in FIG. 12) in the vicinity of each determined straight line; since the number of pairs (four pairs) corresponding to the expected number of straight lines (four) exists, the straight lines constituting the power line are extracted as shown in FIG. 11.
  • in contrast, the straight lines extending to the left and right are not extracted, because the number of pairs they form (two pairs) does not reach the expected number of straight lines.
  • if the first straight line with the largest number of straight line votes belongs to a pair of straight lines extending to the left and right that are determined to be parallel to each other, that straight line may be excluded as a non-target and the selection performed again, or, alternatively or in addition, the Hough transform may be executed again.
  • a slope determination unit 123 may be provided that determines which of the slopes of the detected multiple straight lines is the first slope with the most votes based on the number of slope votes.
  • the determination method based on the number of slope votes may be, for example, based on information about the angle (θ) included in the straight line vote number information obtained by the Hough transform, and the slope determination unit 123 may determine the first slope with the most votes based on that angle (the slope vote number).
  • a third extraction condition is that there are at least the expected number of straight lines (twice the number of lines constituting the multiple power lines) whose inclination is within a predetermined range with respect to the first inclination determined by the inclination determination unit 123, and when the specific straight line extraction unit 122 determines that this extraction condition is satisfied, the specific straight line extraction unit 122 extracts the expected number of straight lines from among the straight lines whose inclination is within a predetermined range with respect to the first inclination in order of the number of straight line votes indicated by the straight line vote number information. That is, in FIG.
  • a fourth extraction condition is to determine a line corresponding to either a rising edge or a falling edge, the slope of which is within a predetermined range with respect to the first slope determined by the slope determination unit 123, and pair the determined line with a line based on either the rising edge or the falling edge (particularly a line parallel to the first slope) that is in the vicinity of the determined line (within a predetermined reference distance range, particularly a width that is approximately the same as the outer diameter of the power line (and furthermore a width that is approximately the same as the outer diameter of the power line taking into account the front and rear sides of the multiple power lines)), and the pairs include at least the expected number of straight lines (the same number as the number of lines that make up the multiple power lines).
  • straight line extraction unit 122 determines that this extraction condition is satisfied, only straight lines that correspond to the pairs of the expected number of straight lines are extracted from the pairs in order of the number of straight line votes indicated by the straight line vote number information. That is, in FIG. 9, for example, if the slope of any of the diagonally extending straight lines (straight lines shown by solid lines in FIG. 12) corresponding to the rising edge is determined to be the first slope, straight lines (other diagonally extending straight lines) with a slope within a predetermined range with respect to the first slope are determined, and a pair is formed with a diagonally extending straight line (straight line shown by dotted lines in FIG.
  • a fifth extraction condition is to pair a straight line corresponding to either a rising edge or a falling edge with a straight line corresponding to the other of either a rising edge or a falling edge in its vicinity (within a specified reference distance range), and the slope determination unit 123 determines which of the pairs has the most common first slope based on the number of slope votes (for example, the slope of the pair is the average slope of the slopes of both straight lines constituting the pair), and there are at least the expected number of straight lines (the same number as the number of lines constituting the multiple power lines) of pairs with slopes within a specified range relative to the first slope, and when the specific straight line extraction unit 122 determines that this extraction condition is satisfied, only straight lines corresponding to the pairs of the expected number of straight lines are extracted from the pairs in order of the most number of straight line votes indicated by the straight line vote number information.
  • a straight line corresponding to a rising edge (a straight line shown by a solid line in FIG. 13) is paired with a straight line corresponding to a nearby falling edge (a straight line shown by a dotted line in FIG. 13), and the slope determination unit 123 determines which slope is the most common first slope based on the number of slope votes (in FIG. 13, the slope of the line extending diagonally is determined to be the first slope), and pairs of slopes within a predetermined range relative to the first slope are extracted in descending order of the number of line votes, which is the expected number of straight lines, that is, four pairs. Even in this case, the slope of a line extending to the left or right is unlikely to be determined to be the first slope, so the possibility of erroneous detection can be reduced.
  • the tracking target position setting unit 124 sets a target position to be tracked by the camera (photographing unit) of the unmanned aerial vehicle 4 based on the straight lines extracted by the specific straight line extraction unit 122 (i.e., the straight lines constituting the multiple power lines).
  • the tracking target position setting unit 124 may calculate an average straight line at the average position of the multiple straight lines extracted by the specific straight line extraction unit 122 (visualize and add to the image as necessary), and set the average straight line as the target position (target straight line) to be tracked.
  • FIG. 14 illustrates an example of an image in which the average straight line is visualized and added by a dotted line.
  • the tracking target position setting unit 124 may also set one of the multiple straight lines or pairs extracted by the specific straight line extraction unit 122 as the target position (target straight line) to be tracked. Then, the tracking target position setting unit 124 generates instruction information for controlling at least one of the photographing unit or the unmanned aerial vehicle itself so that the set target position is included in the image photographed by the photographing unit, and transmits the instruction information to the unmanned aerial vehicle 4 via the communication unit 110.
  • more specifically, if a predetermined position (e.g., a central position or an approximate central position) of the target straight line is not within a predetermined range in the image (particularly, a range near the center of the image, based on the central position or approximate central position of the image), difference information between the predetermined position of the target straight line and the predetermined range is generated, and instruction information for controlling at least one of the state of the shooting unit (e.g., shooting direction information, angle of view information, etc.) or the state of the unmanned aerial vehicle itself (e.g., current position information (latitude, longitude, altitude coordinates), attitude information, rotor motor output information, etc.) is generated based on the difference information.
  • the difference information may be any information that allows the amount of deviation and the direction of deviation from the predetermined range to be grasped, and may be, for example, information on the direction and distance (number of pixels) on the image from the center of the image to the predetermined position of the target straight line.
  • the instruction information may be generated based on, for example, at least one of the difference information, correspondence information between directions and distances on the image and control directions and control distances for the shooting unit or the unmanned aerial vehicle, and the current state information of the unmanned aerial vehicle, but is not limited thereto. If it is determined that the predetermined position of the target straight line is within the predetermined range in the image, the state of the shooting unit or the unmanned aerial vehicle is maintained without generating new instruction information.
  • the information/image storage unit 131 of the storage unit 130 at least temporarily stores, in addition to the images acquired by the image acquisition unit 115, edge images and edge information generated by the edge image generation unit 121, line vote number information generated by the specific line extraction unit 122 and information on the lines that make up multiple power lines, information on the first line, slope vote number information and information on the first slope generated by the slope determination unit 123, an image in which the target line is visualized by the tracking target position setting unit 124, expected number information set by the user, reference value information for rising edges or falling edges, reference range information indicating the vicinity, and other information and data generated in the processing by each of the functional units 121 to 126 of the processing unit 120.
  • FIG. 15 is a flowchart showing a process for implementing the straight line extraction method by the information processing system according to the present embodiment.
  • the image acquisition unit 115 of the terminal 1 acquires images such as images captured by a camera mounted on the unmanned aerial vehicle 4 (S101).
  • the edge image generating unit 121 of the terminal 1 detects edge information in the image acquired by the image acquiring unit 115, and generates an edge image based on the edge information (including the x and y coordinates in the image where the detected edge exists) (S102).
  • the specific line extraction unit 122 of the terminal 1 extracts lines from the detected lines according to the line extraction condition information indicating the extraction conditions for the lines constituting the power lines, which includes at least conditions related to the line vote number information obtained by the Hough transform of the edge information in the generated edge image and the expected number information indicating the expected number of lines related to the power lines expected in the edge image (S103).
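The per-row edge detection of step S102 and the rising/falling pairing used in the second extraction condition can be sketched in plain Python. This is an illustrative sketch only, not the patented implementation: the function names (`detect_edges_1d`, `pair_edges`), the threshold value, and the synthetic pixel row are assumptions introduced for the example.

```python
# Illustrative sketch only: detect rising/falling edges in one pixel row and
# pair them by proximity, as in the second extraction condition above.
# Function names, the threshold, and the synthetic row are assumptions.

def detect_edges_1d(row, threshold):
    """Return (rising, falling) indices where the brightness change
    between neighbouring pixels exceeds the reference value."""
    rising, falling = [], []
    for x in range(1, len(row)):
        delta = row[x] - row[x - 1]
        if delta > threshold:        # dark background -> bright conductor
            rising.append(x)
        elif delta < -threshold:     # bright conductor -> dark background
            falling.append(x)
    return rising, falling

def pair_edges(rising, falling, max_width):
    """Pair each rising edge with the nearest falling edge within
    max_width pixels (roughly the conductor's apparent outer diameter)."""
    pairs = []
    for r in rising:
        candidates = [f for f in falling if 0 < f - r <= max_width]
        if candidates:
            pairs.append((r, min(candidates)))
    return pairs

# Synthetic row: dark background (20) with two bright conductors (200).
row = [20] * 10 + [200] * 3 + [20] * 10 + [200] * 3 + [20] * 10
rising, falling = detect_edges_1d(row, threshold=100)
pairs = pair_edges(rising, falling, max_width=5)
print(pairs)  # [(10, 13), (23, 26)]
```

Run over every pixel row of an image, the resulting pair count could then be compared against the expected number (e.g., four pairs for a four-conductor bundle); for a bright background such as snow, the roles of the two edge types are simply reversed, as noted above.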


Abstract

In this information processing method for extracting straight lines corresponding to a plurality of multi-conductor type power lines from a captured image captured by an unmanned aerial vehicle, a computer executes: a step (S102) for employing an edge image generating unit to generate an edge image from the captured image; and a step (S103) for employing a specific straight line extracting unit to extract, from among a plurality of straight lines in the edge image, an expected number of straight lines meeting an extraction condition for straight lines forming the plurality of power lines, on the basis of straight line vote number information acquired by subjecting edges in the generated edge image to a Hough transform, and straight line extraction condition information indicating the extraction condition, which includes at least a condition relating to expected number information indicating the expected number of the plurality of power lines expected to be in the edge image.

Description

Information processing method, information processing system, and program
 The present invention relates to an information processing method, an information processing system, and a program.
 In recent years, flying objects such as drones and unmanned aerial vehicles (UAVs) (hereinafter collectively referred to as "flying objects") have begun to be used in industry. In this context, Patent Document 1 discloses a system that photographs and inspects power lines with a flying object.
JP 2020-196355 A
 Here, for example, when identifying a target power line in order to track the power line under inspection, one possible method is to perform edge detection on the captured image, extract the edges of the power line, and thereby identify its position. However, since various power lines and linear structures such as buildings and roads appear in the background of the captured image, edge detection can extract a large number of noise edges, which may make it difficult to identify the edges of the target power line. In particular, if the edge of a linear object (especially a long, thin object) in the background that is not subject to inspection is extracted, that object may be tracked instead, so that the target power line is lost and proper inspection becomes difficult.
 Although Patent Document 1 does not specifically mention this, the power lines strung on each cross arm of a support such as a steel tower are not always single power lines (single-conductor system); in a multi-conductor system, the power line strung on each cross arm is composed of two or more power lines (conductors). When edge detection as described above is performed on a multi-conductor power line, it is necessary to identify, among the edges of the multiple linear objects (particularly long, thin objects) extracted from the captured image, all of those that belong to the target power lines, which is more difficult than identifying a single target power line.
 The present invention was made in light of this background, and one of its objectives is to provide an information processing method, an information processing system, and a program capable of extracting straight lines corresponding to power lines (in particular, multi-conductor power lines, in addition to single-conductor ones) from an image captured by a flying object.
 According to one aspect of the present invention, there is provided an information processing method for extracting straight lines corresponding to a plurality of multi-conductor power lines from an image captured by an unmanned aerial vehicle, the method including: a step of generating, by an edge image generating unit, an edge image from the captured image; and a step of extracting, by a specific straight line extracting unit, from the plurality of straight lines in the edge image, the expected number of straight lines that satisfy the extraction conditions, based on straight line extraction condition information indicating extraction conditions for the straight lines constituting the plurality of power lines, the information including at least conditions relating to straight line vote number information obtained by a Hough transform of edges in the generated edge image and expected number information indicating the expected number of straight lines of the plurality of power lines expected in the edge image.
 According to the present invention, it is possible to provide an information processing method, an information processing system, and a program capable of extracting straight lines corresponding to a plurality of multi-conductor power lines from an image captured by a flying object.
FIG. 1 is a diagram showing the overall configuration of an embodiment of the present invention.
FIG. 2 is a diagram showing the system configuration of an information processing system according to an embodiment of the present invention.
FIG. 3 is a block diagram showing the hardware configuration of the terminal of FIG. 2.
FIG. 4 is a block diagram showing the hardware configuration of the server of FIG. 2.
FIG. 5 is a block diagram showing the hardware configuration of the unmanned aerial vehicle of FIG. 2.
FIG. 6 is a block diagram showing the functions of the terminal and the server of FIG. 2.
FIG. 7 is an example of an image acquired from the imaging unit of the unmanned aerial vehicle.
FIG. 8 is an example of an edge image generated from the image of FIG. 7.
FIG. 9 is another example of an edge image.
FIG. 10 is an example of an image in which straight lines estimated from the straight line vote number information are visualized.
FIG. 11 is an example of an image in which extracted specific straight lines are visualized.
FIG. 12 is an example of an image in which extracted specific straight lines are visualized.
FIG. 13 is an example of an image in which extracted specific straight lines are visualized.
FIG. 14 is an example of an image in which the tracking target position is visualized.
FIG. 15 is a flowchart showing a process for implementing the edge extraction method by the information processing system according to the present embodiment.
The contents of the embodiments of the present invention will be listed and described below. An information processing method, an information processing system, and a program according to the embodiments of the present invention have the following configuration.
[Item 1]
An information processing method for extracting straight lines corresponding to multiple power lines of a multi-conductor system from an image captured by an unmanned aerial vehicle, comprising:
generating an edge image from the captured image by an edge image generating unit;
a step of extracting lines corresponding to the assumed number of lines that fall under extraction conditions from the lines in the edge image based on line extraction condition information indicating extraction conditions for lines constituting the multiple power lines, the line extraction condition information including at least conditions related to line vote number information obtained by a Hough transform for edges in the generated edge image and assumed number information indicating the assumed number of lines of the multiple power lines assumed in the edge image, from the multiple lines in the edge image by a specific line extraction unit;
An information processing method comprising:
[Item 2]
The number of assumed straight lines is twice the number of the plurality of power lines,
The extraction condition is that the number of lines parallel to the first line having the largest number of line votes is the number of lines assumed to be the number of lines including the first line.
2. The information processing method according to item 1,
[Item 3]
The method further includes a step of detecting, by the edge image generation, a portion in which an amount of change in pixel value increases and is greater than a reference value in each pixel row in a predetermined direction as rising edge information, and detecting a portion in which an amount of change in pixel value decreases and is greater than a reference value as falling edge information,
The number of assumed straight lines is the same as the number of the plurality of power lines,
The extraction condition is that a line parallel to a first line having the largest number of line votes in either one of the lines corresponding to the rising edge information or the lines corresponding to the falling edge information forms a plurality of pairs with the other of the lines corresponding to the rising edge information or the lines corresponding to the falling edge information located nearby the first line, and the number of pairs is the expected number of lines.
2. The information processing method according to item 1,
[Item 4]
The method further includes a step of determining, by a gradient determination unit, which of the gradients of the plurality of straight lines detected by the specific straight line extraction unit is a first gradient that is most prevalent, by gradient voting;
The number of assumed straight lines is twice the number of the plurality of power lines,
the extraction condition is that at least the number of assumed straight lines is a straight line having a slope within a predetermined range with respect to the first slope determined by the slope determination unit;
The specific line extraction unit further includes a step of extracting lines, the number of which is equal to the number of expected lines, from lines having a slope within a predetermined range with respect to the determined most common slope in descending order of the number of line votes.
2. The information processing method according to item 1,
[Item 5]
detecting, by the edge image generation, a portion in each pixel row in a predetermined direction where the amount of change in pixel value increases is greater than a reference value as rising edge information, and detecting, by the edge image generation, a portion in each pixel row in a predetermined direction where the amount of change in pixel value decreases is greater than a reference value as falling edge information;
and determining, by a gradient determining unit, which of the gradients is a first gradient having the most prevalent gradient in a straight line corresponding to either the rising edge information or the falling edge information, by gradient voting;
The assumed number of straight lines is the number of the plurality of power lines,
the extraction condition is that a plurality of pairs are formed by a straight line corresponding to either the rising edge information or the falling edge information, the straight line being determined to have a gradient within a predetermined range with respect to the first gradient determined by the determining unit, and a straight line corresponding to the other of the rising edge information or the falling edge information in the vicinity of the straight line, and the number of pairs is at least the number of the assumed straight lines;
extracting lines corresponding to the pairs of the expected number of lines from the pairs in descending order of the number of line votes by the specific line extraction unit,
2. The information processing method according to item 1,
[Item 6]
a step of detecting, by the edge image generation, a portion in each pixel row in a predetermined direction where a change amount in pixel value increases is greater than a reference value as rising edge information, and detecting a portion in each pixel row in a predetermined direction where a change amount in pixel value decreases is greater than a reference value as falling edge information;
and determining, by a gradient determining unit, which of a plurality of pairs of gradients formed by a straight line corresponding to either the rising edge information or the falling edge information and a straight line adjacent thereto corresponding to the other of the rising edge information or the falling edge information is a first gradient having the most number of gradients by gradient voting,
The assumed number of straight lines is the number of the plurality of power lines,
the extraction condition is that the number of pairs determined to have a gradient within a predetermined range with respect to the first gradient determined by the determining unit is at least the number of assumed straight lines;
extracting lines corresponding to the pairs of the expected number of lines from the pairs in descending order of the number of line votes by the specific line extraction unit,
2. The information processing method according to item 1,
[Item 7]
The method further includes a step of setting a target position to be tracked by the image capture unit of the unmanned aerial vehicle based on the straight line extracted by the specific straight line extraction unit, by a tracking target position setting unit.
7. The information processing method according to any one of items 1 to 6.
[Item 8]
An information processing system for extracting straight lines corresponding to multiple power lines of a multi-conductor type from an image captured by an unmanned aerial vehicle,
an edge image generating unit that generates an edge image from the captured image;
a specific line extraction unit that extracts lines that correspond to the assumed number of lines that fall under the extraction conditions from the lines in the edge image based on line extraction condition information indicating extraction conditions for lines that constitute the multiple power lines, the line extraction condition information including at least conditions related to line vote number information obtained by performing a Hough transform on edges in the generated edge image and assumed number information indicating the assumed number of lines of the multiple power lines assumed in the edge image; and
An information processing system comprising:
[Item 9]
A program for causing a computer to execute an information processing method for extracting straight lines corresponding to a plurality of multi-conductor power lines from an image captured by an unmanned aerial vehicle, comprising:
The information processing method includes:
generating an edge image from the captured image by an edge image generating unit;
a step of extracting, by a specific line extraction unit, from the multiple lines in the edge image, lines corresponding to the assumed number of lines that fall under extraction conditions, based on line extraction condition information indicating extraction conditions for lines constituting the multiple power lines, the line extraction condition information including at least conditions related to line vote number information obtained by a Hough transform for edges in the generated edge image and assumed number information indicating the assumed number of lines of the multiple power lines assumed in the edge image;
A program that causes the computer to execute the above method.
<Details of the embodiment>
Hereinafter, an information processing system according to an embodiment of the present invention will be described. In the accompanying drawings, identical or similar elements are given identical or similar reference symbols and names, and duplicated descriptions of identical or similar elements may be omitted in the description of each embodiment. Furthermore, features shown in each embodiment may be applied to other embodiments as long as they are not mutually inconsistent.
<Outline of this embodiment>
As shown in Fig. 1, the information processing system in this embodiment extracts straight lines corresponding to a plurality of power lines (particularly multi-conductor power lines, for example bundles of two, three, four, or six conductors held together by span spacers) from an image captured along the plurality of power lines extending side by side and provided, for example, on a support (for example, a steel tower). As an example, images of the plurality of power lines may be captured by remotely controlling a camera mounted on an unmanned aerial vehicle 4 as shown in Fig. 1, which flies autonomously or by remote control based on instructions from a terminal 1 owned by a user.
More specifically, the information processing system in this embodiment generates an edge image from an image captured by the aerial vehicle, and makes it possible to extract from the edge image the assumed number of straight lines that satisfy a straight-line extraction condition, the condition indicating how the straight lines constituting the power lines are extracted and being based on straight-line vote count information obtained by a Hough transform of the edges in the edge image and on assumed-number information indicating the number of straight lines expected for the power lines in the captured image.
In this way, by using information about the expected number of straight lines associated with power lines expected in the captured image (especially the expected number of straight lines corresponding to twice the number of power lines in a multi-conductor system), it is possible to extract an appropriate number of straight lines from within the captured image.
<System Configuration>
As shown in FIG. 2, the information processing system in this embodiment has a terminal 1, a server 2, and an unmanned aerial vehicle 4. The terminal 1, the server 2, and the unmanned aerial vehicle 4 may be communicably connected to each other via a network NW. Note that the illustrated configuration is an example and is not limiting; for example, the unmanned aerial vehicle 4 need not be connected to the network NW. In that case, the unmanned aerial vehicle 4 may be operated by a transmitter (a so-called radio-control transmitter) operated by the user, and image data acquired by the camera of the unmanned aerial vehicle 4 may be stored in an auxiliary storage device connected to the unmanned aerial vehicle 4 (for example, a memory card such as an SD card, or a USB memory) and later read out from the auxiliary storage device by the user and stored in the terminal 1 or the server 2. Alternatively, the unmanned aerial vehicle 4 may be connected to the network NW only for the purpose of operation or only for the purpose of storing image data.
<Hardware configuration of terminal 1>
FIG. 3 is a diagram showing the hardware configuration of the terminal 1 in this embodiment. Note that the illustrated configuration is an example, and the terminal 1 may have other configurations.
The terminal 1 includes at least a processor 10, a memory 11, a storage 12, a transmission/reception unit 13, an input/output unit 14, etc., which are electrically connected to each other via a bus 15. The terminal 1 may be, for example, a general-purpose computer such as a workstation or a personal computer.
The processor 10 is a computing device that controls the operation of the entire terminal 1, controls the transmission and reception of data between each element, and performs information processing necessary for application execution and authentication processing. For example, the processor 10 is a CPU (Central Processing Unit) and/or a GPU (Graphics Processing Unit), and executes programs stored in the storage 12 and deployed in the memory 11 to perform various information processing operations.
The memory 11 includes a main memory consisting of a volatile storage device such as a DRAM (Dynamic Random Access Memory) and an auxiliary memory consisting of a non-volatile storage device such as a flash memory or an HDD (Hard Disc Drive). The memory 11 is used as a work area for the processor 10, and also stores the BIOS (Basic Input/Output System) executed when the terminal 1 is started up, various setting information, and the like.
The storage 12 stores various programs such as application programs. A database that stores data used for each process may be constructed in the storage 12. In addition, a storage unit 130, which will be described later, may be provided in part of the storage area.
The transmission/reception unit 13 is a communication interface that enables the terminal 1 to communicate with an external device (not shown), the unmanned aerial vehicle 4, and the like via a communication network. The transmission/reception unit 13 may further include a short-range communication interface such as Bluetooth (registered trademark) or BLE (Bluetooth Low Energy), a USB (Universal Serial Bus) terminal, and the like.
The input/output unit 14 includes information input devices such as a keyboard and mouse, and output devices such as a display.
The bus 15 is commonly connected to each of the above elements and transmits, for example, address signals, data signals, and various control signals.
<Server 2>
The server 2 shown in FIG. 4 also includes a processor 20, a memory 21, a storage 22, a transmission/reception unit 23, an input/output unit 24, etc., which are electrically connected to each other via a bus 25. The server 2 may be a general-purpose computer such as a workstation or a personal computer, or may be logically realized by cloud computing. The functions of each element can be configured in the same way as those of the terminal 1 described above, so detailed descriptions of each element are omitted.
<Unmanned Aerial Vehicle 4>
FIG. 5 is a block diagram showing the hardware configuration of the unmanned aerial vehicle 4. The flight controller 41 can have one or more processors, such as a programmable processor (e.g., a central processing unit (CPU)).
The flight controller 41 also has a memory 411 and can access the memory. The memory 411 stores logic, code, and/or program instructions that the flight controller can execute to perform one or more steps. The flight controller 41 may also include sensors 412, such as inertial sensors (acceleration sensors, gyro sensors), a GPS sensor, and proximity sensors (e.g., lidar).
The memory 411 may include, for example, a separable medium such as an SD card or random access memory (RAM), or an external storage device. Data acquired from the cameras/sensors 42 may be directly transmitted to and stored in the memory 411. For example, still image/video data captured by a camera or the like may be recorded in an internal memory or an external memory; however, without being limited to this, the data may be recorded in at least one of the terminal 1 and the server 2 from the camera/sensors 42 or the internal memory via the network NW. The camera 42 is mounted on the unmanned aerial vehicle 4 via a gimbal 43.
The flight controller 41 includes a control module (not shown) configured to control the state of the unmanned aerial vehicle 4. For example, the control module controls the propulsion mechanism (motors 45, etc.) of the unmanned aerial vehicle 4 via an ESC 44 (Electric Speed Controller) to adjust the spatial arrangement, velocity, and/or acceleration of the unmanned aerial vehicle 4, which has six degrees of freedom (translational motion x, y, and z, and rotational motion θx, θy, and θz). The propellers 46 are rotated by the motors 45 powered by a battery 48, thereby generating lift for the unmanned aerial vehicle 4. The control module can control one or more of the states of the mounted units and the sensors.
The flight controller 41 can communicate with a transmission/reception unit 47 configured to transmit and/or receive data from one or more external devices (e.g., a transmitter (radio-control transmitter) 49, the terminal 1, a display device, or another remote controller). The transmitter 49 can use any suitable communication means, such as wired or wireless communication.
For example, the transmission/reception unit 47 can utilize one or more of a local area network (LAN), a wide area network (WAN), infrared, wireless, WiFi, a point-to-point (P2P) network, a telecommunications network, cloud communication, and the like.
The transmission/reception unit 47 can transmit and/or receive one or more of the following: data acquired by the sensors 42, processing results generated by the flight controller 41, predetermined control data, user commands from a terminal or a remote controller, and the like.
The sensors 42 in this embodiment may include inertial sensors (acceleration sensors, gyro sensors), a GPS sensor, proximity sensors (e.g., lidar), or vision/image sensors (e.g., cameras).
<Functions of Terminal 1>
FIG. 6 is a block diagram illustrating functions implemented in the terminal 1 and the server 2. In this embodiment, the terminal 1 includes an image acquisition unit 115, a processing unit 120, and a storage unit 130. The processing unit 120 includes an edge image generation unit 121, a specific straight line extraction unit 122, a slope determination unit 123, and a tracking target position setting unit 124. The storage unit 130 includes an information/image storage unit 131. Note that the various functional units are illustrated as functional units in the processor 10 of the terminal 1, but some or all of the various functional units may be realized in any of the configurations of the processor 10 of the terminal 1 or the processor 20 of the server 2, and the controller 41 of the unmanned aerial vehicle 4, depending on the capabilities of the processor 10 of the terminal 1 or the processor 20 of the server 2, and the controller 41 of the unmanned aerial vehicle 4.
The communication unit 110 communicates with the server 2 and the unmanned aerial vehicle 4 via the network NW. The communication unit 110 also functions as a reception unit that receives various requests, data, and the like from the server 2, the unmanned aerial vehicle 4, and so on.
The image acquisition unit 115 acquires images captured by a digital camera mounted on the unmanned aerial vehicle 4 or a digital camera used by a user from those digital cameras, for example, by wireless communication via a communication interface or wired communication via a USB terminal or the like. The image acquisition unit 115 may be configured to acquire images via a storage medium such as a USB memory or an SD memory, but a configuration that acquires images in real time, in particular by wireless communication from the unmanned aerial vehicle 4 or within the unmanned aerial vehicle 4, is more preferable.
The processing unit 120 includes functional units that execute a series of processes for extracting straight lines corresponding to the multiple power lines from an image acquired by the image acquisition unit 115 (for example, an image captured along multiple power lines extending side by side, or a grayscaled or binarized version of such a captured image).
The edge image generation unit 121 detects edge information in the image acquired by the image acquisition unit 115, and generates an edge image based on the edge information (including the xy coordinates in the image at which each detected edge exists). The edge detection method may be any known method (for example, a differential filter) as long as the outer edges of the power lines in the image can be determined. In particular, it may be a method that, in each pixel row in a predetermined direction in the image, detects a portion where the pixel value (luminance) increases by more than a reference amount as rising edge information, and a portion where the pixel value (luminance) decreases by more than a reference amount as falling edge information. For example, when the image is binarized, the pixel value information is set to 1 for white and 0 for black; when the image is grayscaled, the pixel value information is set to 255 for the brightest white and 0 for the darkest black. Adjacent pixels in each pixel row in the predetermined direction in the image are then compared to calculate a difference value, and an edge image can be generated by setting, as described above, pixels whose difference value is equal to or greater than a preset reference value to white and the other pixels to black.
For example, when generating the edge image of FIG. 8 from the captured image of FIG. 7, if rising edges and falling edges are determined as described above, each edge can be visualized in an edge image such as that of FIG. 8 by setting, for example, rising edges to green, falling edges to blue, and everything else to black.
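The row-wise rising/falling edge detection described above can be sketched in Python as follows. This is only an illustrative toy, not the claimed implementation: the function name, the threshold value of 40, and the 8x4 test image are hypothetical.

```python
def detect_edges(gray, threshold=40):
    """Detect rising/falling edges along each horizontal pixel row.

    gray: list of rows, each a list of pixel values (0 = darkest).
    Returns (rising, falling): lists of (x, y) image coordinates where
    the value increases (resp. decreases) by at least `threshold`
    relative to the left-hand neighbour pixel.
    """
    rising, falling = [], []
    for y, row in enumerate(gray):
        for x in range(1, len(row)):
            diff = row[x] - row[x - 1]
            if diff >= threshold:        # dark background -> bright line
                rising.append((x, y))
            elif diff <= -threshold:     # bright line -> dark background
                falling.append((x, y))
    return rising, falling

# Toy image: a dark background (value 20) crossed by one bright vertical
# "power line" (value 200) occupying columns 3-4 of every row.
img = [[20, 20, 20, 200, 200, 20, 20, 20] for _ in range(4)]
rising, falling = detect_edges(img)
# Each row yields one rising edge at x=3 and one falling edge at x=5,
# i.e. the two outlines of the single wire.
```

Against a dark background, the left outline of the bright wire appears as a rising edge and the right outline as a falling edge, which matches the two-outlines-per-power-line observation used later when setting the assumed number of straight lines.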
Here, in the example of FIG. 8, not only the edges of the power lines but also the edges of grass and the like in the background are detected in detail, making it difficult to extract only the edge information related to the multiple power lines from such noise. In particular, as in the example of FIG. 9, when not only background trees and grass but also linear objects (particularly elongated objects) with a shape and structure similar to the power lines appear in the image (for example, when four linear objects extending diagonally are power lines and two linear objects extending to the left and right are unrelated elongated objects), many linear edges are detected, making it even more difficult to extract only the edge information related to the multiple power lines from the image. Therefore, the processing unit 120 of the present invention further includes the following functional units.
The specific line extraction unit 122 extracts lines that correspond to the expected number of lines that fall under the extraction conditions from the detected lines, based on line extraction condition information indicating the extraction conditions for lines that constitute the power lines, which includes at least conditions related to line vote number information obtained by performing a Hough transform on edge information in the generated edge image and expected number information indicating the expected number of lines related to the power lines expected in the edge image.
More specifically, the line vote number information is obtained by first executing a Hough transform on the edge information in the edge image by the specific line extraction unit 122. The Hough transform is a method for detecting lines contained in an image by majority vote. For example, when a perpendicular line is drawn from the origin to a certain line, ρ is the length, and θ is the angle between the perpendicular line and the x-axis, the xy coordinate information of each point constituting an edge in the image is used to vote for the ρθ coordinate, line vote number information indicating the number of votes is obtained, and a combination of ρθ indicating a line is estimated by majority vote from the line vote number information. In particular, when distinguishing between rising edge information and falling edge information, for example, a rising edge image in which only the rising edges are extracted and a falling edge image in which only the falling edges are extracted are generated, and a Hough transform is performed on each image to obtain line vote number information in which the lines corresponding to the rising edges and the lines corresponding to the falling edges can be distinguished from each other.
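The ρθ voting just described can be sketched with a deliberately naive pure-Python accumulator; a production implementation would typically use OpenCV's `cv2.HoughLines` instead, and the discretization parameters below are hypothetical.

```python
import math
from collections import Counter

def hough_votes(points, theta_steps=180, rho_step=1.0):
    """Accumulate Hough votes for a set of edge points.

    Every point (x, y) casts one vote per candidate angle theta for the
    cell (round(rho / rho_step), theta index), where
    rho = x*cos(theta) + y*sin(theta) is the length of the perpendicular
    dropped from the origin onto the candidate line.
    """
    votes = Counter()
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(round(rho / rho_step), t)] += 1
    return votes

# Edge points on the horizontal line y = 5, with a gap in the middle
# (an "interrupted" edge):
pts = [(x, 5) for x in range(30) if not 10 <= x < 20]
votes = hough_votes(pts)
# All 20 points fall into the single cell rho=5, theta=pi/2
# (theta index 90), despite the gap.
```

Because every collinear point votes for the same (ρ, θ) cell regardless of gaps, an interrupted edge still yields one strongly voted "straight line", as noted for FIG. 9. To distinguish rising-edge lines from falling-edge lines, the same accumulator can simply be run separately on the two point sets.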
As a result, when the specific straight line extraction unit 122 refers only to the straight line vote number information, for example, straight lines as shown in FIG. 10 (eight lines extending diagonally and four lines extending to the left and right) are estimated by majority vote from the edge image illustrated in FIG. 9. Note that since the Hough transform obtains "straight lines" that continue infinitely and not "line segments" of finite length, it is possible to extract (estimate) straight lines that make up power lines even when edges are interrupted, while even an unrelated long and thin object that is interrupted in the background as shown in FIG. 9 can be extracted as a straight line.
Next, the assumed number information indicating the assumed number of straight lines related to the power lines expected in the edge image is information that can be set by the user via the terminal 1 or the like. The assumed number information (i.e., the assumed number of straight lines) may be information indicating twice the number of conductors constituting the multiple power lines (e.g., the number of conductors constituting a multi-conductor power line, bundled together by span spacers). This is because one power line is composed of two straight lines (its outer edges). Alternatively, when distinguishing between rising edge information and falling edge information, the assumed number information may be information indicating the same number as the number of conductors constituting the multiple power lines (e.g., the number of conductors constituting a multi-conductor power line). This is because one power line is composed of one straight line extracted from a rising edge and one straight line extracted from a falling edge.
More specifically, for example, if the background of the power line is a background with many areas of pixel values darker than the power line (e.g., forest trees, soil, grass, etc.), when the above-mentioned edge detection method example is used, in each pixel row in a predetermined direction in the image, the boundary portion where the pixel value of the dark background changes to the pixel value of the power line is detected as rising edge information because the amount of change in the pixel value (brightness) is greater than the reference value, and the boundary portion where the pixel value of the power line returns to the pixel value of the dark background is detected as falling edge information because the amount of change in the pixel value (brightness) is greater than the reference value, so it can be said that one power line is composed of both edges. Also, if the background is a background with many areas of pixel values brighter than the power line (e.g., when snow is piled up), the relationship between the rising edge and the falling edge is reversed, but as a result, it can be said that one power line is similarly composed of both edges.
The straight-line extraction condition information indicating the extraction condition for the straight lines constituting the multiple power lines is information that can be set by the user via the terminal 1 or the like. For example, a first extraction condition is that there are at least the assumed number of straight lines (twice the number of conductors constituting the multiple power lines) parallel to the first straight line having the largest number of straight-line votes, including the first straight line itself; when the specific straight line extraction unit 122 determines that this extraction condition is satisfied, it extracts the corresponding straight lines (i.e., the straight lines constituting the multiple power lines). That is, in FIG. 9, for example, if one of the diagonally extending straight lines is selected as the first straight line based on the straight-line vote count information, there exist mutually parallel straight lines in a number corresponding to the assumed number of straight lines (eight), so the straight lines constituting the power lines are extracted as shown in FIG. 11. If, on the other hand, one of the straight lines extending to the left and right is selected as the first straight line based on the straight-line vote count information, the assumed number of eight straight lines does not exist, so no straight lines are extracted.
In this case, the straight-line vote count information for the straight lines extending to the left and right that were determined to be parallel to each other may be excluded, and the first straight line having the largest number of votes may be selected again; alternatively or additionally, the process may be re-executed from the Hough transform. When combining the two, for example, if the number of straight lines indicated by the assumed number information (e.g., twice the number of conductors constituting the multiple power lines) is not extracted even after repeating the reselection multiple times, the process may be re-executed from the Hough transform. Note that "parallel" here also includes substantially parallel lines that can be approximated as parallel; the same applies hereinafter.
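One possible reading of the first extraction condition, including the reselection of the first straight line after a parallel group is rejected, can be sketched as follows. The dict layout, vote counts, and the 2-degree angle tolerance are illustrative assumptions, and the re-run of the Hough transform on repeated failure is omitted for brevity.

```python
def extract_parallel_group(lines, assumed_count, angle_tol=2.0):
    """First extraction condition: keep the group of (nearly) parallel
    lines containing the top-voted line, if the group is large enough.

    lines: list of dicts {"theta": degrees, "rho": ..., "votes": ...}.
    Returns the selected group, or None if no angle group reaches
    `assumed_count` lines.
    """
    remaining = sorted(lines, key=lambda l: l["votes"], reverse=True)
    while remaining:
        first = remaining[0]                       # most-voted candidate
        group = [l for l in remaining
                 if abs(l["theta"] - first["theta"]) <= angle_tol]
        if len(group) >= assumed_count:
            # keep the `assumed_count` strongest lines of the group
            return sorted(group, key=lambda l: l["votes"],
                          reverse=True)[:assumed_count]
        # condition not met: discard this parallel group, reselect
        remaining = [l for l in remaining if l not in group]
    return None

# Eight diagonal lines (4 power lines x 2 outlines) plus two horizontal
# intruders that happen to collect the highest vote counts:
cands = ([{"theta": 0.0, "rho": r, "votes": 400} for r in (50, 120)] +
         [{"theta": 35.0, "rho": 10 * i, "votes": 300 - i}
          for i in range(8)])
group = extract_parallel_group(cands, assumed_count=8)
# The horizontal pair is rejected (only 2 parallel lines), and the
# eight diagonal lines are returned instead.
```

This mirrors the FIG. 9 scenario: the left-right intruders win the vote count but fail the "at least the assumed number of parallel lines" test, so the diagonal group is extracted.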
Furthermore, instead of the first extraction condition, the extraction condition may be that, among the extracted straight lines, straight lines located close to each other (within a predetermined reference distance range, in particular within a width substantially matching the outer diameter of a power line (further, a width substantially matching the outer diameter of a power line taking into account the front and rear sides of the multiple power lines)) are paired, and that there are the assumed number of such pairs (in this case, the same number as the number of conductors constituting the multiple power lines).
 また、立ち上がりエッジ情報と立ち下がりエッジ情報を区別する場合、例えば第2の抽出条件として、立ち上がりエッジまたは立ち下がりエッジのいずれか一方において直線投票数が一番多い第1直線に平行な直線と、その近傍(所定の基準距離範囲内、特に電力線の外径と略一致の幅(さらには複数の電力線のうち手前側と奥側を考慮した電力線の外径と略一致の幅))にある立ち上がりエッジまたは立ち下がりエッジのいずれか他方に基づく直線(特に第1直線に平行な直線)とをペアリングし、そのペアが第1直線を含むペアを含めて想定直線本数(複数の電力線を構成する本数と同一の本数)あることであって、この抽出条件を満たすと特定直線抽出部122が判定した場合に該当する直線(すなわち、複数の電力線を構成する直線)を抽出する。すなわち、図9においては、例えば立ち上がりエッジに対応する直線投票数情報に基づき斜めに延伸する直線(図12において実線にて示される直線)のいずれかが第1直線と選択された場合、第1直線に平行な直線を判定し、判定された各直線の近傍にある立ち下がりエッジに対応する直線(図12において点線にて示される直線)とそれぞれペアを形成し、これらのペアが想定直線本数(4本)に対応する数(4ペア)存在しているため、図11に示されるように電力線を構成する直線が抽出されることとなる。また、例えば立ち上がりエッジに対応する直線投票数情報に基づき左右に延伸する直線のいずれかが第1直線として選択された場合には、形成されるペア数(2ペア)が想定直線本数の4本存在していないため抽出されない。この場合、互いに平行と判定された左右に延伸する直線のペアに関する直線投票数情報は非対象として一番多い第1直線を再度選択してもよいし、これに代えて、または、加えて、ハフ変換から再度実行するようにしてもよい。両者を組み合わせる場合には、例えば、複数回再選択を繰り返しても想定数情報が示す本数(例えば複数の電力線を構成する本数と同一の本数)のペアが抽出されない場合に、ハフ変換から再度実行するようにしてもよい。 Furthermore, when distinguishing between rising edge information and falling edge information, for example, a second extraction condition is to pair a straight line parallel to the first straight line that has the highest number of straight line votes for either the rising edge or the falling edge with a straight line (particularly a straight line parallel to the first straight line) based on the other of either the rising edge or the falling edge that is in its vicinity (within a specified reference distance range, particularly a width that approximately matches the outer diameter of the power line (and furthermore a width that approximately matches the outer diameter of the power line taking into account the front and back sides of the multiple power lines)), and the number of such pairs, including the pair that includes the first straight line, is the expected number of straight lines (the same number as the number of lines that make up the multiple power lines), and when the specific straight line extraction unit 122 determines that this extraction condition is satisfied, the corresponding straight 
line (i.e., the straight lines that make up the multiple power lines) is extracted. That is, in FIG. 9, for example, when one of the diagonally extending straight lines (shown as solid lines in FIG. 12) is selected as the first straight line based on the straight line vote number information corresponding to the rising edge, straight lines parallel to the first straight line are determined, and each is paired with a nearby straight line corresponding to the falling edge (shown as dotted lines in FIG. 12); since the number of pairs (four) matches the expected number of straight lines (four), the straight lines constituting the power lines are extracted as shown in FIG. 11. On the other hand, when, for example, one of the horizontally extending straight lines is selected as the first straight line based on the straight line vote number information corresponding to the rising edge, no extraction is performed because the number of pairs formed (two) does not reach the expected number of straight lines (four). In this case, the straight line vote number information for the pairs of horizontally extending lines judged to be parallel to each other may be excluded and the first straight line with the largest vote count reselected, or alternatively or additionally, the process may be rerun from the Hough transform. When combining the two approaches, the process may, for example, be rerun from the Hough transform if the number of pairs indicated by the expected number information (e.g., the same number as the number of lines making up the multiple power lines) is still not obtained after repeated reselection.
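Distinguishing rising-edge and falling-edge lines, the second extraction condition can be sketched as below. The (ρ, θ, votes) tuple layout, tolerances, and reference width are assumed for illustration.

```python
def extract_by_edge_pairing(rising, falling, ref_width, n_expected,
                            angle_tol=0.05, width_tol=2.0):
    """Second-condition sketch: take the most-voted rising-edge line as the
    first straight line, collect rising-edge lines parallel to it, pair each
    with a nearby falling-edge line, and accept only when the pair count
    equals the expected number of power lines (else return None)."""
    first = max(rising, key=lambda l: l[2])              # first straight line
    parallel = [l for l in rising if abs(l[1] - first[1]) <= angle_tol]
    pairs = []
    for r in parallel:
        for f in falling:
            if (abs(f[1] - r[1]) <= angle_tol
                    and abs(abs(f[0] - r[0]) - ref_width) <= width_tol):
                pairs.append((r, f))
                break
    return pairs if len(pairs) == n_expected else None
```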
 さらに、検出された複数の直線の傾きのうち、いずれの傾きが一番多い第1傾きであるかを傾き投票数により判定する傾き判定部123を備えてもよい。傾き投票数による判定手法は、例えばハフ変換により取得する直線投票数情報に含まれる角度(θ)に関する情報に基づいて、角度基準での投票数(傾き投票数)が最も多い第1傾きを傾き判定部123により判定してもよい。 Furthermore, a slope determination unit 123 may be provided that determines, by slope vote count, which of the slopes of the detected straight lines is the most common first slope. As a determination method based on the slope vote count, the slope determination unit 123 may, for example, determine as the first slope the slope having the largest angle-based vote count (slope vote count), based on the angle (θ) information included in the straight line vote number information obtained by the Hough transform.
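The slope vote can be sketched by binning the Hough angles θ and taking the mode; the bin width is an assumed discretization, not a value from the application.

```python
from collections import Counter

def first_slope(lines, bin_width=0.05):
    """Slope-vote sketch: lines are (rho, theta) tuples; the first slope is
    the centre of the most-voted theta bin."""
    bins = Counter(round(theta / bin_width) for _, theta in lines)
    top_bin, _ = bins.most_common(1)[0]
    return top_bin * bin_width
```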
 そして、上記傾き判定部123を備える場合、例えば第3の抽出条件として、傾き判定部123により判定された第1傾きに対して所定範囲内の傾きの直線が少なくとも想定直線本数(複数の電力線を構成する本数の2倍の本数)あること、であって、この抽出条件を満たすと特定直線抽出部122が判定した場合には、第1傾きに対して所定範囲内の傾きの直線のうち、直線投票数情報が示す直線投票数が多い順に想定直線本数だけ直線を抽出する。すなわち、図9においては、例えば斜めに延伸する直線のいずれかの傾きが第1傾きと判定された場合、第1傾きに対して所定範囲の傾きの直線(斜めに延伸する他の直線)が想定直線本数(8本)に対応する本数存在しているため、図11に示されるように電力線を構成する直線が抽出されることとなる。この場合においては、左右に延伸する直線の傾きが第1傾きと判定される可能性が少ないため、誤検出の可能性がより低くなり得る。 In the case where the inclination determination unit 123 is provided, for example, a third extraction condition is that there are at least the expected number of straight lines (twice the number of lines constituting the multiple power lines) whose inclination is within a predetermined range with respect to the first inclination determined by the inclination determination unit 123, and when the specific straight line extraction unit 122 determines that this extraction condition is satisfied, the specific straight line extraction unit 122 extracts the expected number of straight lines from among the straight lines whose inclination is within a predetermined range with respect to the first inclination in order of the number of straight line votes indicated by the straight line vote number information. That is, in FIG. 9, for example, when the inclination of any of the straight lines extending diagonally is determined to be the first inclination, there are straight lines (other straight lines extending diagonally) whose inclination is within a predetermined range with respect to the first inclination in a number corresponding to the expected number of straight lines (8 lines), so that the straight lines constituting the power lines are extracted as shown in FIG. 11. In this case, since the inclination of the straight lines extending to the left and right is less likely to be determined to be the first inclination, the possibility of erroneous detection may be lower.
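The third extraction condition above can be sketched as a slope filter followed by a vote-ordered selection; the (ρ, θ, votes) tuple layout and the tolerance are illustrative assumptions.

```python
def extract_by_slope_count(lines, base_slope, n_expected, slope_tol=0.1):
    """Third-condition sketch: lines are (rho, theta, votes) tuples; when at
    least n_expected lines have a slope within slope_tol of the first slope
    (base_slope), keep the n_expected most-voted of them, else return None."""
    cand = [l for l in lines if abs(l[1] - base_slope) <= slope_tol]
    if len(cand) < n_expected:
        return None
    return sorted(cand, key=lambda l: -l[2])[:n_expected]
```

Note that, as in the text, lines outside the slope range are excluded even if their vote counts are high, which is what suppresses the horizontally extending background lines.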
 また、上記傾き判定部123を備える場合であって、立ち上がりエッジ情報と立ち下がりエッジ情報を区別する場合、例えば第4の抽出条件として、立ち上がりエッジまたは立ち下がりエッジのいずれか一方に対応する直線において傾き判定部123により判定された第1傾きに対して所定範囲内の傾きの直線を判定し、判定された直線の近傍(所定の基準距離範囲内、特に電力線の外径と略一致の幅(さらには複数の電力線のうち手前側と奥側を考慮した電力線の外径と略一致の幅))にある立ち上がりエッジまたは立ち下がりエッジのいずれか他方に基づく直線(特に第1傾きと平行な直線)とをペアリングし、そのペアが少なくとも想定直線本数(複数の電力線を構成する本数と同一の本数)あること、であって、この抽出条件を満たすと特定直線抽出部122が判定した場合には、ペアのうち、直線投票数情報が示す直線投票数が多い順に想定直線本数のペアに対応する直線だけを抽出する。すなわち、図9においては、例えば立ち上がりエッジに対応する斜めに延伸する直線(図12において実線にて示される直線)のいずれかの傾きが第1傾きと判定された場合、第1傾きに対して所定範囲の傾きの直線(斜めに延伸する他の直線)を判定し、判定された直線の近傍にある立ち下がりエッジに対応する斜めに延伸する直線(図12において点線にて示される直線)とペアを形成し、そのペアが想定直線本数(4本)に対応する本数存在しているため、図11に示されるように電力線を構成する直線が抽出されることとなる。この場合においても、左右に延伸する直線の傾きが第1傾きと判定される可能性が少ないため、誤検出の可能性がより低くなり得る。 Furthermore, in the case where the above-mentioned slope determination unit 123 is provided and the rising edge information and the falling edge information are to be distinguished from each other, for example, a fourth extraction condition is to determine a line corresponding to either a rising edge or a falling edge, the slope of which is within a predetermined range with respect to the first slope determined by the slope determination unit 123, and pair the determined line with a line based on either the rising edge or the falling edge (particularly a line parallel to the first slope) that is in the vicinity of the determined line (within a predetermined reference distance range, particularly a width that is approximately the same as the outer diameter of the power line (and furthermore a width that is approximately the same as the outer diameter of the power line taking into account the front and rear sides of the multiple power lines)), and the pairs include at least the expected number of straight lines (the same number as the number of lines that make up the multiple power lines). 
If the specific straight line extraction unit 122 determines that this extraction condition is satisfied, only straight lines that correspond to the pairs of the expected number of straight lines are extracted from the pairs in order of the number of straight line votes indicated by the straight line vote number information. That is, in FIG. 9, for example, if the slope of any of the diagonally extending straight lines (straight lines shown by solid lines in FIG. 12) corresponding to the rising edge is determined to be the first slope, straight lines (other diagonally extending straight lines) with a slope within a predetermined range with respect to the first slope are determined, and a pair is formed with a diagonally extending straight line (straight line shown by dotted lines in FIG. 12) corresponding to the falling edge in the vicinity of the determined straight line, and since there are a number of such pairs corresponding to the expected number of straight lines (4), the straight lines constituting the power line are extracted as shown in FIG. 11. Even in this case, the slope of the straight lines extending to the left and right is unlikely to be determined to be the first slope, so the possibility of erroneous detection can be reduced.
 また、上記傾き判定部123を備える場合であって、立ち上がりエッジ情報と立ち下がりエッジ情報を区別する場合、例えば第5の抽出条件として、立ち上がりエッジまたは立ち下がりエッジのいずれか一方に対応する直線とその近傍(所定の基準距離範囲内)にある立ち上がりエッジまたは立ち下がりエッジのいずれか他方に対応する直線とをペアリングし、それらペアに対して傾き判定部123によりいずれの傾きが一番多い第1傾きであるかを傾き投票数により判定し(例えば、ペアの傾きとは、ペアを構成する両直線の傾きの平均傾きなど)、第1傾きに対して所定範囲内の傾きのペアが少なくとも想定直線本数(複数の電力線を構成する本数と同一の本数)あること、であって、この抽出条件を満たすと特定直線抽出部122が判定した場合には、ペアのうち、直線投票数情報が示す直線投票数が多い順に想定直線本数のペアに対応する直線だけを抽出する。すなわち、図9においては、例えば立ち上がりエッジに対応する直線(図13において実線にて示される直線)と近傍の立ち下がりエッジに対応する直線(図13において点線にて示される直線)とでペアを形成し、傾き判定部123によりいずれの傾きが一番多い第1傾きであるかを傾き投票数により判定し(図13においては斜めに延伸する直線の傾きを第1傾きと判定)、第1の傾きに対して所定範囲内の傾きのペアを直線投票数が多い順に想定直線本数である4ペアだけ抽出する。この場合においても、左右に延伸する直線の傾きが第1傾きと判定される可能性が少ないため、誤検出の可能性がより低くなり得る。 Furthermore, in the case where the above-mentioned slope determination unit 123 is provided and the rising edge information and the falling edge information are to be distinguished from each other, for example, a fifth extraction condition is to pair a straight line corresponding to either a rising edge or a falling edge with a straight line corresponding to the other of either a rising edge or a falling edge in its vicinity (within a specified reference distance range), and the slope determination unit 123 determines which of the pairs has the most common first slope based on the number of slope votes (for example, the slope of the pair is the average slope of the slopes of both straight lines constituting the pair), and there are at least the expected number of straight lines (the same number as the number of lines constituting the multiple power lines) of pairs with slopes within a specified range relative to the first slope, and when the specific straight line extraction unit 122 determines that this extraction condition is satisfied, only straight lines corresponding to the pairs of the expected number of straight lines are extracted from the pairs in order of the most number of straight line votes indicated by the straight line vote number information. That is, in FIG. 
9, for example, each straight line corresponding to a rising edge (shown as a solid line in FIG. 13) is paired with a nearby straight line corresponding to a falling edge (shown as a dotted line in FIG. 13), the slope determination unit 123 determines by slope vote count which slope is the most common first slope (in FIG. 13, the slope of the diagonally extending lines is determined to be the first slope), and pairs whose slope lies within a predetermined range of the first slope are extracted in descending order of line vote count, up to the expected number of straight lines, namely four pairs. In this case as well, the slope of the horizontally extending lines is unlikely to be determined to be the first slope, so the possibility of erroneous detection can be reduced.
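The fifth extraction condition can be sketched as below, taking a pair's slope as the average of the two line slopes as in the example given in the text; the bin width and tolerance are assumptions.

```python
from collections import Counter

def extract_pairs_by_slope(pairs, n_expected, bin_width=0.05, slope_tol=0.1):
    """Fifth-condition sketch: each pair is ((theta_rising, theta_falling),
    votes); a pair's slope is the mean of its two thetas, the first slope is
    the most-voted slope bin, and pairs within slope_tol of it are kept in
    descending vote order, up to n_expected (else None)."""
    slopes = [(tr + tf) / 2 for (tr, tf), _ in pairs]
    bins = Counter(round(s / bin_width) for s in slopes)
    first = bins.most_common(1)[0][0] * bin_width
    cand = [p for p, s in zip(pairs, slopes) if abs(s - first) <= slope_tol]
    if len(cand) < n_expected:
        return None
    return sorted(cand, key=lambda p: -p[1])[:n_expected]
```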
 このように、特に背景にある対象外の直線的な物体(特に細長い物体)のエッジを検出した場合においても、多導体方式の複数の電力線を構成する直線のみを画像内から抽出することが可能となる。 In this way, even when detecting the edges of non-target linear objects (especially elongated objects) in the background, it is possible to extract only the straight lines that make up multiple power lines of a multi-conductor system from within the image.
 追跡対象位置設定部124は、特定直線抽出部122により抽出された直線(すなわち、複数の電力線を構成する直線)に基づいて、無人飛行体4のカメラ(撮影部)により追跡する対象位置を設定する。より具体的な例として、追跡対象位置設定部124は、特定直線抽出部122により抽出された複数の直線の平均の位置に平均直線を算出し(必要に応じて画像中に可視化して追加し)、当該平均直線を追跡する対象位置(対象直線)として設定してもよい。図14には、平均直線が点線により可視化されて追加されている画像が例示されている。また、追跡対象位置設定部124は、特定直線抽出部122により抽出された複数の直線またはペアのうちの1つを追跡する対象位置(対象直線)として設定してもよい。そして、追跡対象位置設定部124は、設定した対象位置が撮影部により撮影している画像中に含まれるように撮影部または無人飛行体自体の少なくともいずれかを制御するための指示情報を生成し、通信部110を介して無人飛行体4へ送信する。より具体的な例としては、まず、対象直線の所定位置(例えば、中心位置または略中心位置など)が画像中の所定範囲(特に、画像の中心近傍であり、さらに画像の中心位置または略中心位置を基準とした範囲)に含まれているかを判定する。そして、所定範囲に含まれていないと判定された場合には、対象直線の所定位置と所定範囲との差分情報を生成し、当該差分情報に基づき撮影部の状態(例えば、撮影方向情報、画角情報など)または無人飛行体自体の状態(例えば、現在位置情報(緯度経度高度座標)や、姿勢情報、回転翼のモータ出力情報など)の少なくともいずれかを制御するための指示情報を生成する。差分情報は、所定範囲からのずれ量及びずれ方向が把握可能な情報であればどのような情報であってもよいが、例えば、画像の中心から対象直線の所定位置までの画像上での方向及び距離(画素数)に関する情報であり得る。指示情報は、例えば、差分情報と、画像上の方向及び距離と撮影部または無人飛行体撮影部における制御方向及び制御距離の対応情報、並びに、無人飛行体の少なくとも何れかの現在状態情報に基づき生成されてもよいが、これに限定されない。なお、対象直線の所定位置が画像中の所定範囲に含まれていると判定された場合には、指示情報を新たに生成せずに撮影部または無人飛行体の状態を維持させる。 The tracking target position setting unit 124 sets a target position to be tracked by the camera (photographing unit) of the unmanned aerial vehicle 4 based on the straight lines extracted by the specific straight line extraction unit 122 (i.e., the straight lines constituting the multiple power lines). As a more specific example, the tracking target position setting unit 124 may calculate an average straight line at the average position of the multiple straight lines extracted by the specific straight line extraction unit 122 (visualize and add to the image as necessary), and set the average straight line as the target position (target straight line) to be tracked. FIG. 14 illustrates an example of an image in which the average straight line is visualized and added by a dotted line. 
The tracking target position setting unit 124 may also set one of the multiple straight lines or pairs extracted by the specific straight line extraction unit 122 as the target position (target straight line) to be tracked. Then, the tracking target position setting unit 124 generates instruction information for controlling at least one of the photographing unit or the unmanned aerial vehicle itself so that the set target position is included in the image photographed by the photographing unit, and transmits the instruction information to the unmanned aerial vehicle 4 via the communication unit 110. As a more specific example, first, it is determined whether a predetermined position (e.g., a central position or an approximate central position, etc.) of the target straight line is included in a predetermined range in the image (particularly, a range near the center of the image and further based on the central position or the approximate central position of the image). Then, if it is determined that the target straight line is not included in the predetermined range, difference information between the predetermined position of the target straight line and the predetermined range is generated, and instruction information for controlling at least one of the state of the shooting unit (e.g., shooting direction information, angle of view information, etc.) or the state of the unmanned aerial vehicle itself (e.g., current position information (latitude, longitude, altitude coordinates), attitude information, rotor motor output information, etc.) is generated based on the difference information. The difference information may be any information that allows the amount of deviation and the direction of deviation from the predetermined range to be grasped, and may be, for example, information on the direction and distance (number of pixels) on the image from the center of the image to the predetermined position of the target straight line. 
The instruction information may be generated based on, for example, the difference information, the correspondence information between the direction and distance on the image and the control direction and control distance in the shooting unit or the unmanned aerial vehicle shooting unit, and at least one of the current state information of the unmanned aerial vehicle, but is not limited thereto. If it is determined that the specified position of the target line is within a specified range in the image, the state of the imaging unit or unmanned aerial vehicle is maintained without generating new instruction information.
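The centre-range check and the difference information described above can be sketched as follows; the choice of reference point, the tolerance ratio defining the central region, and the pixel units are illustrative assumptions.

```python
def difference_info(target_pos, img_size, tol_ratio=0.1):
    """Tracking-check sketch: target_pos is the reference point (e.g. the
    centre) of the target straight line in pixels, img_size is (width,
    height). Returns None when the point lies inside the central region
    (the current state is maintained), otherwise the (dx, dy) offset from
    the image centre, from which instruction information would be built."""
    (tx, ty), (w, h) = target_pos, img_size
    cx, cy = w / 2, h / 2
    dx, dy = tx - cx, ty - cy
    if abs(dx) <= w * tol_ratio and abs(dy) <= h * tol_ratio:
        return None
    return dx, dy
```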
 次に、記憶部130の情報・画像記憶部131は、画像取得部115が取得した画像の他、エッジ画像生成部121が生成したエッジ画像やエッジ情報、特定直線抽出部122が生成した直線投票数情報や複数の電力線を構成する直線に関する情報、第1直線に関する情報、傾き判定部123が生成した傾き投票数情報や第1傾きに関する情報、追跡対象位置設定部124により対象直線が可視化された画像、ユーザが設定した想定数情報や立ち上がりエッジまたは立ち下がりエッジのための基準値情報、近傍を示す基準範囲情報など、または、処理部120の各機能部121~126による処理に生成されたその他の情報・データなどを、少なくとも一時的に記憶する。 Next, the information/image storage unit 131 of the storage unit 130 at least temporarily stores, in addition to the images acquired by the image acquisition unit 115, edge images and edge information generated by the edge image generation unit 121, line vote number information generated by the specific line extraction unit 122 and information on the lines that make up multiple power lines, information on the first line, slope vote number information and information on the first slope generated by the slope determination unit 123, an image in which the target line is visualized by the tracking target position setting unit 124, expected number information set by the user, reference value information for rising edges or falling edges, reference range information indicating the vicinity, and other information and data generated in the processing by each of the functional units 121 to 126 of the processing unit 120.
<直線抽出方法の一例>
 続いて、図15等を参照して、本実施形態にかかる情報処理システムによる直線抽出方法について説明する。図15は、本実施形態にかかる情報処理システムによる直線抽出方法を実施する処理を示すフローチャートである。
<An example of a line extraction method>
Next, a straight line extraction method by the information processing system according to the present embodiment will be described with reference to Fig. 15 etc. Fig. 15 is a flowchart showing a process for implementing the straight line extraction method by the information processing system according to the present embodiment.
 最初に、端末1の画像取得部115により、無人飛行体4に搭載されたカメラにより撮影された撮影画像などの画像を取得する(S101)。 First, the image acquisition unit 115 of the terminal 1 acquires images such as images captured by a camera mounted on the unmanned aerial vehicle 4 (S101).
 次に、端末1のエッジ画像生成部121により、画像取得部115が取得した画像内のエッジ情報を検出し、エッジ情報(検出されたエッジが存在する画像内のxy座標を含む)に基づきエッジ画像を生成する(S102)。 Next, the edge image generating unit 121 of the terminal 1 detects edge information in the image acquired by the image acquiring unit 115, and generates an edge image based on the edge information (including the x and y coordinates in the image where the detected edge exists) (S102).
 次に、端末1の特定直線抽出部122により、生成されたエッジ画像内のエッジ情報に対するハフ変換により取得される直線投票数情報及びエッジ画像内に想定される電力線に関連する想定直線本数を示す想定数情報に関する条件を少なくとも含む、電力線を構成する直線の抽出条件を示す直線抽出条件情報に基づき、検出した複数の直線から抽出条件下にある想定直線本数の直線を抽出する(S103)。 Next, the specific line extraction unit 122 of the terminal 1 extracts lines from the detected lines according to the line extraction condition information indicating the extraction conditions for the lines constituting the power lines, which includes at least conditions related to the line vote number information obtained by the Hough transform of the edge information in the generated edge image and the expected number information indicating the expected number of lines related to the power lines expected in the edge image (S103).
 このように、本実施形態の端末1によれば、特に背景にある対象外の直線的な物体(特に細長い物体)のエッジを検出した場合においても、多導体方式の複数の電力線を構成する直線のみを画像内から抽出することが可能となる。 In this way, with the terminal 1 of this embodiment, even when detecting the edges of non-target linear objects (especially elongated objects) in the background, it is possible to extract only the straight lines that make up multiple power lines of the multi-conductor system from within the image.
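Steps S101 to S103 can be sketched end to end on a toy image. The row-wise gradient threshold stands in for the edge image generation unit 121 (detecting rising and falling edges per pixel row) and the hand-rolled accumulator for the Hough vote; a real implementation might instead use OpenCV's cv2.Canny and cv2.HoughLines. Thresholds and resolutions here are assumptions.

```python
import math

def detect_edges(img, thresh=1):
    """S102 sketch: scan each pixel row; a rise in pixel value larger than
    thresh is a rising edge, a drop larger than thresh a falling edge."""
    rising, falling = [], []
    for y, row in enumerate(img):
        for x in range(1, len(row)):
            d = row[x] - row[x - 1]
            if d > thresh:
                rising.append((x, y))
            elif d < -thresh:
                falling.append((x, y))
    return rising, falling

def hough_votes(points, n_theta=180):
    """S103 sketch: accumulate (rho, theta-index) votes for edge points."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc
```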
 上述した実施の形態は、本発明の理解を容易にするための例示に過ぎず、本発明を限定して解釈するためのものではない。本発明は、その趣旨を逸脱することなく、変更、改良することができると共に、本発明にはその均等物が含まれることは言うまでもない。 The above-described embodiment is merely an example to facilitate understanding of the present invention, and is not intended to limit the interpretation of the present invention. The present invention can be modified and improved without departing from the spirit of the invention, and it goes without saying that the present invention includes equivalents.
 1    端末
 2    サーバ
 4    無人飛行体

 
1 Terminal 2 Server 4 Unmanned aerial vehicle

Claims (9)

  1.  コンピュータが無人飛行体により撮影した撮影画像から複数の電力線に対応する直線を抽出する情報処理方法であって、
     エッジ画像生成部により、前記撮影画像からエッジ画像を生成するステップと、
     特定直線抽出部により、生成された前記エッジ画像内のエッジに対するハフ変換により取得される直線投票数情報及び前記エッジ画像内に想定される前記複数の電力線の想定直線本数を示す想定数情報に関する条件を少なくとも含む、前記複数の電力線を構成する直線の抽出条件を示す直線抽出条件情報に基づき、前記エッジ画像内の複数の直線から前記抽出条件下にある前記想定直線本数の直線を抽出するステップと、
     を前記コンピュータが実行する情報処理方法。
    An information processing method for extracting straight lines corresponding to a plurality of power lines from an image captured by an unmanned aerial vehicle by a computer, comprising:
    generating an edge image from the captured image by an edge image generating unit;
    a step of extracting lines corresponding to the assumed number of lines that fall under extraction conditions from the lines in the edge image based on line extraction condition information indicating extraction conditions for lines constituting the multiple power lines, the line extraction condition information including at least conditions related to line vote number information obtained by a Hough transform for edges in the generated edge image and assumed number information indicating the assumed number of lines of the multiple power lines assumed in the edge image, from the multiple lines in the edge image by a specific line extraction unit;
    The information processing method is executed by the computer.
  2.  前記想定直線本数は、前記複数の電力線の本数の2倍の数であって、
     前記抽出条件は、前記直線投票数が一番多い第1直線と平行な直線が当該第1直線を含めて前記想定直線本数あること、である、
     ことを特徴とする請求項1に記載の情報処理方法。
    The number of assumed straight lines is twice the number of the plurality of power lines,
    The extraction condition is that the number of lines parallel to the first line having the largest number of line votes is the number of lines assumed to be the number of lines including the first line.
    2. The information processing method according to claim 1,
  3.  前記エッジ画像生成部により、所定方向の各画素列において画素値が高くなる変化量が基準値より大きい部分を立ち上がりエッジ情報として検出して、画素値が低くなる変化量が基準値より大きい部分を立ち下がりエッジ情報として検出するステップをさらに含み、
     前記想定直線本数は、前記複数の電力線の本数と同一の数であって、
     前記抽出条件は、前記立ち上がりエッジ情報に対応する直線または前記立ち下がりエッジ情報に対応する直線のいずれか一方において前記直線投票数が一番多い第1直線に平行な直線と、その近傍にある前記立ち上がりエッジ情報に対応する直線または前記立ち下がりエッジ情報に対応する直線のいずれか他方とで複数のペアを形成し、そのペアが前記想定直線本数あること、である、
     ことを特徴とする請求項1に記載の情報処理方法。
    The method further includes a step of detecting, by the edge image generating unit, a portion in which an amount of change in pixel value increases and is greater than a reference value in each pixel row in a predetermined direction as rising edge information, and detecting a portion in which an amount of change in pixel value decreases and is greater than a reference value as falling edge information,
    The number of assumed straight lines is the same as the number of the plurality of power lines,
    The extraction condition is that a line parallel to a first line having the largest number of line votes in either one of the lines corresponding to the rising edge information or the lines corresponding to the falling edge information forms a plurality of pairs with the other of the lines corresponding to the rising edge information or the lines corresponding to the falling edge information located nearby the first line, and the number of pairs is the expected number of lines.
    2. The information processing method according to claim 1,
  4.  さらに、傾き判定部により、前記特定直線抽出部により検出された複数の直線の傾きのうち、いずれの傾きが一番多い第1傾きかを傾き投票で判定するステップをさらに含み、
     前記想定直線本数は、前記複数の電力線の2倍の数であって、
     前記抽出条件は、前記傾き判定部により判定された第1傾きに対して所定範囲内の傾きの直線が少なくとも前記想定直線本数あること、であり、
     前記特定直線抽出部により、前記判定された一番多い傾きに対して所定範囲内の傾きの直線のうち、前記直線投票数が多い順に前記想定直線本数だけ直線を抽出するステップをさらに含む、
     ことを特徴とする請求項1に記載の情報処理方法。
    The method further includes a step of determining, by a gradient determination unit, which of the gradients of the plurality of straight lines detected by the specific straight line extraction unit is a first gradient that is most prevalent, by gradient voting;
    The number of assumed straight lines is twice the number of the plurality of power lines,
    the extraction condition is that at least the number of assumed straight lines is a straight line having a slope within a predetermined range with respect to the first slope determined by the slope determination unit;
    The method further includes a step of extracting, by the specific line extraction unit, lines having a slope within a predetermined range from the determined most common slope, the number of lines corresponding to the number of expected lines in descending order of the number of line votes.
    2. The information processing method according to claim 1,
  5.  前記エッジ画像生成により、所定方向の各画素列において画素値が高くなる変化量が基準値より大きい部分を立ち上がりエッジ情報として検出して、画素値が低くなる変化量が基準値より大きい部分を立ち下がりエッジ情報として検出ステップと、
     傾き判定部により、前記立ち上がりエッジ情報または前記立ち下がりエッジ情報のいずれか一方に対応する直線において、いずれの傾きが一番多い第1傾きかを傾き投票で判定するステップと、をさらに含み、
     前記想定直線本数は、前記複数の電力線の数であって、
     前記抽出条件は、前記判定部により判定された第1傾きに対して所定範囲内の傾きであると判定された前記立ち上がりエッジ情報または前記立ち下がりエッジ情報のいずれか一方に対応する直線と、その近傍にある前記立ち上がりエッジ情報または前記立ち下がりエッジ情報のいずれか他方に対応する直線とで複数のペアを形成し、そのペアの数が少なくとも前記想定直線本数あること、であり、
     前記特定直線抽出部により、前記ペアから前記直線投票数が多い順に前記想定直線本数のペアに対応する直線を抽出するステップをさらに含む、
     ことを特徴とする請求項1に記載の情報処理方法。
    detecting, by the edge image generation, a portion in each pixel row in a predetermined direction where a change amount in which a pixel value increases is greater than a reference value as rising edge information, and detecting, by the edge image generation, a portion in which a change amount in which a pixel value decreases is greater than a reference value as falling edge information;
    and determining, by a gradient determining unit, which of the gradients is a first gradient having the most prevalent gradient in a straight line corresponding to either the rising edge information or the falling edge information, by gradient voting;
    The assumed number of straight lines is the number of the plurality of power lines,
    the extraction condition is that a plurality of pairs are formed by a straight line corresponding to either the rising edge information or the falling edge information, the straight line being determined to have a gradient within a predetermined range with respect to the first gradient determined by the determining unit, and a straight line corresponding to the other of the rising edge information or the falling edge information in the vicinity of the straight line, and the number of pairs is at least the number of the assumed straight lines;
    extracting lines corresponding to the pairs of the expected number of lines from the pairs in descending order of the number of line votes by the specific line extraction unit,
    2. The information processing method according to claim 1,
  6.  前記エッジ画像生成により、所定方向の各画素列において画素値が高くなる変化量が基準値より大きい部分を立ち上がりエッジ情報として検出して、画素値が低くなる変化量が基準値より大きい部分を立ち下がりエッジ情報として検出するステップと、
     傾き判定部により、前記立ち上がりエッジ情報または前記立ち下がりエッジ情報のいずれか一方に対応する直線と、その近傍にある前記立ち上がりエッジ情報または前記立ち下がりエッジ情報のいずれか他方に対応する直線により形成された複数のペアの傾きのうち、いずれの傾きが一番多い第1傾きかを傾き投票で判定するステップと、をさらに含み、
     前記想定直線本数は、前記複数の電力線の数であって、
     前記抽出条件は、前記判定部により判定された第1傾きに対して所定範囲内の傾きであると判定されたペアの数が少なくとも前記想定直線本数あること、であり、
     前記特定直線抽出部により、前記ペアから前記直線投票数が多い順に前記想定直線本数のペアに対応する直線を抽出するステップをさらに含む、
     ことを特徴とする請求項1に記載の情報処理方法。
    a step of detecting, by the edge image generation, a portion in each pixel row in a predetermined direction where a change amount in pixel value increases is greater than a reference value as rising edge information, and detecting a portion in each pixel row in a predetermined direction where a change amount in pixel value decreases is greater than a reference value as falling edge information;
    and determining, by a gradient determining unit, which of a plurality of pairs of gradients formed by a straight line corresponding to either the rising edge information or the falling edge information and a straight line adjacent thereto corresponding to the other of the rising edge information or the falling edge information is a first gradient having the most number of gradients by gradient voting,
    The assumed number of straight lines is the number of the plurality of power lines,
    the extraction condition is that the number of pairs determined to have a gradient within a predetermined range with respect to the first gradient determined by the determining unit is at least the number of assumed straight lines;
    extracting lines corresponding to the pairs of the expected number of lines from the pairs in descending order of the number of line votes by the specific line extraction unit,
    2. The information processing method according to claim 1,
  7.  追跡対象位置設定部により、前記特定直線抽出部により抽出された直線に基づいて、前記無人飛行体の撮影部により追跡する対象位置を設定するステップをさらに含む、
     ことを特徴とする請求項1乃至6のいずれかに記載の情報処理方法。
    The method further includes a step of setting a target position to be tracked by the image capture unit of the unmanned aerial vehicle based on the straight line extracted by the specific straight line extraction unit, by a tracking target position setting unit.
    7. The information processing method according to claim 1,
  8.  無人飛行体により撮影した撮影画像から多導体方式の複数の電力線に対応する直線を抽出する情報処理システムであって、
     前記撮影画像からエッジ画像を生成するエッジ画像生成部と、
     生成された前記エッジ画像内のエッジに対するハフ変換により取得される直線投票数情報及び前記エッジ画像内に想定される前記複数の電力線の想定直線本数を示す想定数情報に関する条件を少なくとも含む、前記複数の電力線を構成する直線の抽出条件を示す直線抽出条件情報に基づき、前記エッジ画像内の複数の直線から前記抽出条件下にある前記想定直線本数の直線を抽出する特定直線抽出部と、
     を備える情報処理システム。
    An information processing system for extracting straight lines corresponding to multiple power lines of a multi-conductor type from an image captured by an unmanned aerial vehicle,
    an edge image generating unit that generates an edge image from the captured image;
    a specific line extraction unit that extracts lines that correspond to the assumed number of lines that fall under the extraction conditions from the lines in the edge image based on line extraction condition information indicating extraction conditions for lines that constitute the multiple power lines, the line extraction condition information including at least conditions related to line vote number information obtained by performing a Hough transform on edges in the generated edge image and assumed number information indicating the assumed number of lines of the multiple power lines assumed in the edge image; and
    An information processing system comprising:
  9.  無人飛行体により撮影した撮影画像から多導体方式の複数の電力線に対応する直線を抽出する情報処理方法をコンピュータに実行させるプログラムであって、
     前記情報処理方法として、
     エッジ画像生成部により、前記撮影画像からエッジ画像を生成するステップと、
     特定直線抽出部により、生成された前記エッジ画像内のエッジに対するハフ変換により取得される直線投票数情報及び前記エッジ画像内に想定される前記複数の電力線の想定直線本数を示す想定数情報に関する条件を少なくとも含む、前記複数の電力線を構成する直線の抽出条件を示す直線抽出条件情報に基づき、前記エッジ画像内の複数の直線から前記抽出条件下にある前記想定直線本数の直線を抽出するステップと、
     を実行させるプログラム。

     
    A program for causing a computer to execute an information processing method for extracting straight lines corresponding to a plurality of multi-conductor power lines from an image captured by an unmanned aerial vehicle, comprising:
    The information processing method includes:
    generating an edge image from the captured image by an edge image generating unit;
    a step of extracting lines corresponding to the assumed number of lines that fall under extraction conditions from the lines in the edge image based on line extraction condition information indicating extraction conditions for lines constituting the multiple power lines, the line extraction condition information including at least conditions related to line vote number information obtained by a Hough transform for edges in the generated edge image and assumed number information indicating the assumed number of lines of the multiple power lines assumed in the edge image, from the multiple lines in the edge image by a specific line extraction unit;
    A program that executes the following.

PCT/JP2023/045096 2023-01-17 2023-12-15 Information processing method, information processing system, and program WO2024154504A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2023-005472 2023-01-17
JP2023005472A JP7487900B1 (en) 2023-01-17 2023-01-17 Information processing method, information processing system, and program

Publications (1)

Publication Number Publication Date
WO2024154504A1 true WO2024154504A1 (en) 2024-07-25

Family

ID=91082676

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/045096 WO2024154504A1 (en) 2023-01-17 2023-12-15 Information processing method, information processing system, and program

Country Status (2)

Country Link
JP (2) JP7487900B1 (en)
WO (1) WO2024154504A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006027448A (en) * 2004-07-16 2006-02-02 Chugoku Electric Power Co Inc:The Aerial photographing method and device using unmanned flying body
JP2008269264A (en) * 2007-04-19 2008-11-06 Central Res Inst Of Electric Power Ind Method, device and program for tracking multiconductor cable by image processing, and method, device and program for detecting abnormality of multiconductor cable using the same
JP2019185407A (en) * 2018-04-10 2019-10-24 国立大学法人岐阜大学 Image analysis program
JP2022018517A (en) * 2020-07-15 2022-01-27 日本電気通信システム株式会社 Cable detection apparatus, method, and program
JP2022055463A (en) * 2020-09-29 2022-04-08 株式会社日立製作所 Charged particle beam device and sample observation method using the same
JP2022089285A (en) * 2020-12-04 2022-06-16 アルプスアルパイン株式会社 Control device, unmanned aircraft, and control method of camera mounted on unmanned aircraft


Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
KATANO, Yusuke; ONOGUCHI, Kazunori: "Lane detection using multiple road information", PROCEEDINGS OF THE 74TH NATIONAL CONVENTION OF IPSJ (3), 8 March 2010 (2010-03-08), pages 201 - 202, XP093193563 *
LIU XIONG, HOU LIN, JU XIONGMING: "A method for detecting power lines in UAV aerial images", 2017 3RD IEEE INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATIONS (ICCC), IEEE, 1 December 2017 (2017-12-01) - 16 December 2017 (2017-12-16), pages 2132 - 2136, XP093193562, DOI: 10.1109/CompComm.2017.8322913 *
NASSERI M. H.; MORADI H.; NASIRI S.M.; HOSSEINI R.: "Power Line Detection and Tracking Using Hough Transform and Particle Filter", 2018 6TH RSI INTERNATIONAL CONFERENCE ON ROBOTICS AND MECHATRONICS (ICROM), IEEE, 23 October 2018 (2018-10-23), pages 130 - 134, XP033524732, DOI: 10.1109/ICRoM.2018.8657568 *
TIAN FENG; WANG YAPING; ZHU LINLIN: "Power line recognition and tracking method for UAVs inspection", 2015 IEEE INTERNATIONAL CONFERENCE ON INFORMATION AND AUTOMATION, IEEE, 8 August 2015 (2015-08-08), pages 2136 - 2141, XP033222729, DOI: 10.1109/ICInfA.2015.7279641 *
ZHANG XU; WANG KE; CAI XIAOBO; YAN ZIHUA; SHU LIANG; HU ZHIHUI: "Conventional detection methods for the combination of power lines and power towers", 2022 IEEE 5TH INTERNATIONAL CONFERENCE ON AUTOMATION, ELECTRONICS AND ELECTRICAL ENGINEERING (AUTEEE), IEEE, 18 November 2022 (2022-11-18), pages 27 - 31, XP034265869, DOI: 10.1109/AUTEEE56487.2022.9994310 *
ZOU KUANSHENG; JIANG ZHENBANG; ZHAO SHUAI QIANG; ZHANG QIAN: "Power Line Extraction Based on Combined Clustering of Line Segments", 2022 34TH CHINESE CONTROL AND DECISION CONFERENCE (CCDC), IEEE, 15 August 2022 (2022-08-15), pages 1200 - 1205, XP034294068, DOI: 10.1109/CCDC55256.2022.10033890 *
NUMADA, Munetoshi: "Robust-Fast Hough Transform by Using High-Speed Algorithm for M-Estimator" (in Japanese, non-official translation), IMAGE LABORATORY, 10 March 2011, vol. 22, no. 3, pages 15 - 20 *
FUJIWARA, Takayuki et al.: "Transmission Power Line Detection Based on Optical Flow and Graph Cuts" (in Japanese, non-official translation), PAPERS OF TECHNICAL MEETING, IEE JAPAN, 26 March 2015, pages 39 - 43 *

Also Published As

Publication number Publication date
JP2024101569A (en) 2024-07-29
JP7487900B1 (en) 2024-05-21
JP2024101468A (en) 2024-07-29

Similar Documents

Publication Publication Date Title
CN109144097B (en) Obstacle or ground recognition and flight control method, device, equipment and medium
JP2019532268A (en) Determination of stereo distance information using an imaging device integrated in the propeller blades
CN108508916B (en) Control method, device and equipment for unmanned aerial vehicle formation and storage medium
WO2019082301A1 (en) Unmanned aircraft control system, unmanned aircraft control method, and program
WO2021199449A1 (en) Position calculation method and information processing system
JP2021100234A (en) Aircraft imaging method and information processing device
EP3982332A1 (en) Methods and systems for generating training data for horizon and road plane detection
JP6807092B1 (en) Inspection system and management server, program, crack information provision method
JP6807093B1 (en) Inspection system and management server, program, crack information provision method
JP6775748B2 (en) Computer system, location estimation method and program
WO2024154504A1 (en) Information processing method, information processing system, and program
JP7149569B2 (en) Building measurement method
CN116805397A (en) System and method for detecting and identifying small objects in images using machine learning algorithms
WO2024069669A1 (en) Information processing system, program, information processing method, terminal, and server
WO2021124579A1 (en) Image capturing method of flight vehicle and information processing device
WO2021035746A1 (en) Image processing method and device, and movable platform
JP2022053447A (en) Inspection system, management server, program, and crack information providing method
WO2022113482A1 (en) Information processing device, method, and program
JP7401068B1 (en) Information processing system, information processing method and program
JP7228310B1 (en) Information processing system and program, information processing method, server
JP7217570B1 (en) Information processing system and program, information processing method, server
JP7370045B2 (en) Dimension display system and method
JP7385332B1 (en) Information processing system and program, information processing method, server
US20240013460A1 (en) Information processing apparatus, information processing method, program, and information processing system
EP4250251A1 (en) System and method for detecting and recognizing small objects in images using a machine learning algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23917715

Country of ref document: EP

Kind code of ref document: A1