
WO2020059292A1 - Autonomous traveling cleaner - Google Patents


Info

Publication number
WO2020059292A1
WO2020059292A1 (PCT/JP2019/029120; JP2019029120W)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
autonomous traveling
imaging unit
control unit
vacuum cleaner
Prior art date
Application number
PCT/JP2019/029120
Other languages
French (fr)
Japanese (ja)
Inventor
浅井 幸治
杉本 博子
Original Assignee
Panasonic Intellectual Property Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Intellectual Property Management Co., Ltd.
Publication of WO2020059292A1

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L9/00: Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
    • A47L9/28: Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras

Definitions

  • the present invention relates to an autonomous traveling cleaner for autonomously traveling and cleaning a floor.
  • conventionally, there is known an autonomous traveling vacuum cleaner that calculates, from an image obtained by a camera, the distance from the shooting point of the camera to an object included in the image (for example, see Patent Literature 1).
  • the autonomous traveling vacuum cleaner disclosed in Patent Literature 1 estimates its self-position and detects obstacles with a single camera. In this arrangement, when obstacles are detected by imaging the floor surface and the self-position is estimated from the resulting feature points, the self-position is easily lost when the cleaner turns. Conversely, if a camera with a wide angle of view images obstacles on the ceiling and the floor at one time, the self-position can be estimated using feature points on the ceiling, and loss of the self-position can be avoided. In that case, however, the distortion of the obtained image becomes so large that the value of the distance to an obstacle contains a large error, and the error of the self-position estimation grows accordingly.
  • the present invention provides an autonomous traveling vacuum cleaner capable of estimating its own position and imaging an obstacle on the floor with a single imaging unit.
  • One embodiment of the present invention is an autonomous traveling cleaner that autonomously travels and performs cleaning.
  • the autonomous traveling cleaner includes an imaging unit that captures images of objects present in the surroundings, a drive mechanism that can change the attitude of the imaging unit at least around one axis along a horizontal plane, and an estimating unit that performs self-position estimation based on image information obtained from the imaging unit. It further includes a drive control unit that controls the drive mechanism to change the attitude of the imaging unit.
  • each process by the autonomous traveling cleaner may be realized by a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM. Further, each processing by the autonomous traveling cleaner may be realized by an arbitrary combination of a system, a method, an integrated circuit, a computer program, and a recording medium.
  • the present invention thus provides an autonomous traveling cleaner that, with a single imaging unit, can accurately detect an obstacle existing on the floor surface and accurately estimate its own position.
  • FIG. 1 is a plan view showing the autonomous traveling cleaner according to the embodiment from above.
  • FIG. 2 is a plan view showing the autonomous traveling vacuum cleaner from below.
  • FIG. 3 is a side view showing the autonomous traveling cleaner from a side.
  • FIG. 4 is a block diagram showing a functional configuration of the autonomous traveling vacuum cleaner together with a mechanical configuration and the like.
  • FIG. 5 is a flowchart showing the operation of the autonomous traveling vacuum cleaner.
  • each drawing is a schematic diagram and is not necessarily strictly illustrated.
  • the same components are denoted by the same reference numerals.
  • the orthogonal direction means not only exactly orthogonal but also substantially orthogonal, that is, allowing an angular deviation of, for example, a few percent. The same applies to expressions using other directions.
  • FIG. 1 is a plan view showing the autonomous traveling vacuum cleaner 100 according to the present embodiment from above.
  • FIG. 2 is a plan view showing the autonomous traveling cleaner 100 from below.
  • FIG. 3 is a side view showing the autonomous traveling cleaner 100 from the side.
  • the autonomous traveling vacuum cleaner 100 according to the embodiment is a robot-type vacuum cleaner that autonomously travels on a floor of a house having a ceiling and sucks dust on the floor.
  • the autonomous traveling cleaner 100 of the present embodiment includes a main body 110, a driving unit 120, a cleaning unit 130, a suction unit 140, and a control unit 150.
  • the drive unit 120 moves the main body 110 on the floor.
  • the cleaning unit 130 collects refuse existing on the floor.
  • the suction unit 140 sucks dust collected by the cleaning unit 130 into the main body 110.
  • the control unit 150 controls operations of the drive unit 120, the cleaning unit 130, the suction unit 140, and the like.
  • the main body 110 forms a housing that houses the drive unit 120, the control unit 150, and the like.
  • the main body 110 includes a lower part and an upper part, and the upper part is configured to be removable with respect to the lower part.
  • the main body 110 includes a bumper (not shown) provided on the outer peripheral portion so as to be displaceable with respect to the main body 110. Further, the main body 110 has a suction port 111 at a lower portion for sucking dust into the main body 110.
  • Drive unit 120 causes autonomous traveling cleaner 100 to travel on the floor based on an instruction from control unit 150.
  • one drive unit 120 is arranged on each of the left and right sides with respect to the center in the width direction of main body 110 in plan view.
  • the number of drive units 120 is not limited to two, but may be one, or three or more.
  • the drive unit 120 includes a wheel (not shown) that travels on the floor, a travel motor that applies torque to the wheel, a housing that houses the travel motor, and the like.
  • the wheel is housed in a recess (not shown) formed on the lower surface of the lower part of the main body 110 and is rotatably attached to the main body 110.
  • the drive unit 120 may include an encoder 121 (see FIG. 4) for detecting odometry information used for calculating the traveling direction of the autonomous traveling cleaner 100.
  • in this description, the traveling direction of the autonomous traveling vacuum cleaner 100 (the direction in which the autonomous traveling vacuum cleaner 100 travels) refers not only to the direction in which it is actually traveling but also to the direction in which it is about to travel.
  • the autonomous traveling cleaner 100 is configured by a two-wheel opposed drive system including the casters 129 as auxiliary wheels. By independently controlling the rotation of the two wheels, the autonomous traveling cleaner 100 can travel freely, such as going straight, retreating, turning left, turning right, and so on.
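  • the kinematics of the two-wheel opposed drive described above can be sketched numerically. The following is a minimal illustration, not code from the patent; the function and parameter names are hypothetical:

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheel_base, dt):
    # Average wheel speed moves the body forward; the speed
    # difference divided by the wheel spacing turns it.
    v = (v_left + v_right) / 2.0
    omega = (v_right - v_left) / wheel_base
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds: going straight. Opposite speeds: turning in place.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = diff_drive_step(*pose, v_left=0.2, v_right=0.2,
                           wheel_base=0.25, dt=0.1)
# pose is now roughly (0.2, 0.0, 0.0): 0.2 m straight ahead
```

Independently controlling `v_left` and `v_right` yields straight travel, retreat, and left or right turns, as stated above.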
  • the cleaning unit 130 is a unit for sucking dust from the suction port 111.
  • the cleaning unit 130 includes a main brush (not shown) disposed in the suction port 111, a brush drive motor for rotating the main brush, and the like.
  • the suction unit 140 is arranged inside the main body 110.
  • the suction unit 140 includes a fan case (not shown), an electric fan disposed inside the fan case, and the like.
  • the electric fan sucks air inside the trash box unit 141 and discharges the air outside the main body 110.
  • the refuse sucked from the suction port 111 is stored in the trash box unit 141.
  • the control unit 150 is arranged inside the main body 110.
  • the control unit 150 includes a memory, a CPU (Central Processing Unit) that executes a control program stored in the memory, and the like. Each processing unit realized by the control unit 150 will be described later.
  • the image capturing section 160 is configured by a so-called digital camera or the like that captures an image of an object such as an obstacle existing around the main body 110 and outputs image information composed of digital signals.
  • the imaging unit 160 according to the present embodiment includes a single-lens optical system having a single optical axis. Therefore, the image information obtained from the imaging unit 160 does not include information indicating the distance from the imaging unit 160 to the object.
  • the angle of view of the optical system included in the imaging unit 160 is, for example, 90 degrees or less. The imaging unit 160 therefore employs an optical system that cannot capture the entire circumference of the space above the main body 110 in a single image. This suppresses distortion of the image obtained from the imaging unit 160, differences in resolution depending on position in the image, and the like.
  • the drive mechanism 170 includes a mechanism for changing the posture of the imaging unit 160 at least around one axis along a horizontal plane, as shown in FIG. 3.
  • the drive mechanism 170 of the present embodiment changes the posture of the imaging unit 160 around a first rotation axis extending in the width direction (the X-axis direction in FIG. 3) of the autonomous traveling cleaner 100, as indicated by the broken arrow in FIG. 3.
  • the drive mechanism 170 is configured to change the attitude of the imaging unit 160 around a second rotation axis extending in a vertical direction indicated by a chain line in FIG.
  • the driving mechanism 170 may be configured so that the attitude of the imaging unit 160 can be changed around a rotation axis extending in other directions other than the X-axis direction and the vertical direction.
  • FIG. 3 illustrates an example in which the imaging unit 160 and the drive mechanism 170 protrude from the top surface of the main body 110.
  • a configuration may be adopted in which a concave portion is provided on the top surface of the main body 110, and the imaging section 160 and the driving mechanism 170 are installed in the concave portion.
  • Battery 180 (see FIG. 4) is electrically connected to the electronic device included in autonomous traveling vacuum cleaner 100.
  • the battery 180 is, for example, a secondary battery that supplies power to the connected electronic devices.
  • the storage unit 190 is a memory such as a ROM (Read Only Memory) or a RAM (Random Access Memory) in which control programs executed by various processing units are stored.
  • the storage unit 190 is realized by, for example, a hard disk drive (HDD), a flash memory, or the like.
  • the storage unit 190 stores, for example, correlation information, size information, a floor map, and the like.
  • the correlation information is information on the estimated size of the pixel in the real space with respect to the pixel position of the image generated by the imaging unit 160.
  • the size information is information on the size of the autonomous traveling vacuum cleaner 100 (specifically, the width of the main body 110, etc.).
  • the floor map is information indicating a floor plan of a house or the like. Note that the lateral width of the main body 110 is the width of the autonomous traveling cleaner 100 in a direction orthogonal to the traveling direction of the autonomous traveling cleaner 100 when viewed from above.
  • the autonomous traveling vacuum cleaner 100 includes various sensors, exemplified below, such as an obstacle sensor 273, a distance measurement sensor 274, a collision sensor (not shown), a floor sensor 276, and a dust amount sensor 277.
  • the obstacle sensor 273 is a sensor that detects an obstacle such as a peripheral wall or furniture existing in front of the main body 110, which is an obstacle to traveling.
  • as the obstacle sensor 273, for example, an ultrasonic sensor is used.
  • the obstacle sensor 273 includes a transmitting unit 271, a receiving unit 272, and the like.
  • Transmitting section 271 is disposed at the front center of main body 110 and transmits ultrasonic waves forward.
  • the receiving units 272 are arranged on both sides of the transmitting unit 271 in the X direction, and receive the ultrasonic waves transmitted from the transmitting unit 271. That is, the obstacle sensor 273 causes the receiving unit 272 to receive the reflected ultrasonic wave transmitted from the transmitting unit 271 and reflected by the obstacle and returned. Thereby, the obstacle sensor 273 detects the distance to the obstacle, the position of the obstacle, and the like.
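  • the distance calculation behind this echo measurement is simple time-of-flight arithmetic. A hedged sketch follows; the 343 m/s speed of sound and the function name are illustrative assumptions, not values from the patent:

```python
def ultrasonic_distance_m(echo_delay_s, speed_of_sound_m_s=343.0):
    # The ultrasonic pulse travels to the obstacle and back,
    # so halve the round-trip path to get the one-way distance.
    return speed_of_sound_m_s * echo_delay_s / 2.0

# An echo arriving after about 5.83 ms puts the obstacle near 1 m away.
d = ultrasonic_distance_m(0.00583)
```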
  • the distance measuring sensor 274 is a sensor that detects a distance between the main body 110 and an object such as an obstacle existing around the main body 110.
  • the distance measuring sensor 274 is configured by, for example, an infrared sensor having a light emitting unit and a light receiving unit. That is, the distance measurement sensor 274 measures the distance to an obstacle based on the time elapsed between emission of infrared light from the light emitting unit and reception, at the light receiving unit, of the light reflected back by the obstacle.
  • the distance measurement sensor 274 is arranged, for example, near the right front top and near the left front top.
  • the distance measuring sensor 274 on the right side outputs light (infrared rays) toward the diagonally right front of the main body 110.
  • the distance measuring sensor 274 on the left side outputs light (infrared light) toward the front left side of the main body 110.
  • the floor sensor 276 is a sensor that is disposed at a plurality of locations on the bottom surface of the main body 110 and detects the state of the floor.
  • the floor sensor 276 is formed of, for example, an infrared sensor having a light emitting unit and a light receiving unit. That is, the floor sensor 276 detects the state of the floor surface, for example whether the floor is wet, based on the amount of light (infrared light) emitted from the light emitting unit, reflected by the floor, and received by the light receiving unit.
  • the dust amount sensor 277 is, for example, a sensor including a light-emitting element and a light-receiving element, and the amount of light emitted from the light-emitting element is detected by the light-receiving element and output.
  • the control unit 150 associates the amount of received light with the amount of dust based on the output information. Specifically, control unit 150 determines that the smaller the amount of received light, the greater the amount of dust. Then, the control unit 150 generates dust amount information indicating that.
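  • the inverse relationship the control unit 150 applies (less received light means more dust) might look like the simple normalization below; the function name and the 0-to-1 scale are illustrative assumptions:

```python
def dust_level(received_light, emitted_light):
    # Dust passing between the emitter and receiver blocks light,
    # so a smaller received amount maps to a larger dust level
    # (0.0 = clean, 1.0 = all light blocked).
    if emitted_light <= 0:
        raise ValueError("emitted_light must be positive")
    blocked = 1.0 - received_light / emitted_light
    return max(0.0, min(1.0, blocked))
```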
  • the obstacle sensor 273, the distance measurement sensor 274, the floor sensor 276, and the dust amount sensor 277 described above are examples of sensors. Therefore, the autonomous traveling vacuum cleaner 100 does not need to include all the above sensors. In addition, the autonomous traveling cleaner 100 may include a sensor having a different form from the above.
  • the autonomous traveling cleaner 100 may further include a collision sensor, an encoder 121 (see FIG. 4), an acceleration sensor, an angular velocity sensor, and the like, which are not shown.
  • the collision sensor is composed of, for example, a switch contact displacement sensor, and is disposed on a bumper attached around the main body 110.
  • the switch contact displacement sensor is turned on when the obstacle comes into contact with the bumper and the bumper is pushed into the main body 110.
  • the encoder 121 is disposed in the drive unit 120.
  • the encoder 121 detects each rotation angle of a pair of wheels rotated by the traveling motor. Based on the information from the encoder 121, for example, the traveling amount, the turning angle, the speed, the acceleration, the angular speed, and the like of the autonomous traveling cleaner 100 are calculated.
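  • a minimal sketch of how per-wheel encoder counts can be turned into the traveling amount and turning angle mentioned above; the tick and geometry parameters are hypothetical, not from the patent:

```python
import math

def odometry_from_ticks(ticks_left, ticks_right,
                        ticks_per_rev, wheel_radius_m, wheel_base_m):
    # Distance rolled per encoder tick, from the wheel circumference.
    per_tick = 2.0 * math.pi * wheel_radius_m / ticks_per_rev
    d_left = ticks_left * per_tick
    d_right = ticks_right * per_tick
    ds = (d_left + d_right) / 2.0               # forward travel (m)
    dtheta = (d_right - d_left) / wheel_base_m  # heading change (rad)
    return ds, dtheta
```

Integrating successive (ds, dtheta) pairs yields the traveling amount and turning angle; speed, acceleration, and angular speed follow by dividing by the sampling interval.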
  • the acceleration sensor detects the acceleration when the autonomous traveling cleaner 100 travels.
  • the angular velocity sensor detects an angular velocity when the autonomous traveling cleaner 100 turns.
  • the information detected by the acceleration sensor and the angular velocity sensor is used, for example, as information for correcting an error caused by idling of the wheel.
  • FIG. 4 is a block diagram showing a functional configuration of the autonomous traveling cleaner 100 according to the embodiment, together with a mechanical configuration and the like.
  • the autonomous traveling cleaner 100 includes, as processing units executed by the control unit 150, an estimation unit 151, a drive control unit 152, a type identification unit 153, a traveling control unit 154, a cleaning control unit 155, and the like.
  • here, the control unit 150 is described as a single functional block; however, the processing units may instead be divided into a plurality of functional blocks, each realized by a corresponding CPU.
  • the estimation unit 151 is a processing unit that performs self-position estimation based on image information obtained from the imaging unit 160.
  • the method of self-position estimation is not particularly limited, but in the case of the present embodiment, estimation section 151 performs self-position estimation using SLAM (Simultaneous Localization and Mapping) technology.
  • the estimation unit 151 executes the self-position estimation by integrating a plurality of feature points (landmarks) included in the image information acquired from the imaging unit 160, posture information indicating the posture of the imaging unit 160 obtained from the drive control unit 152, and signals from the encoder 121 incorporated in the drive unit 120. Based on the estimated self-position, the estimating unit 151 then generates self-position information.
  • the estimating unit 151 performs the self-position estimation while traveling. As a result, the estimating unit 151 also generates a map using the traveling results that are sets of self-position information generated at a plurality of locations. At this time, the position (area) of an obstacle such as a wall may be added as information to the map using information detected by various sensors.
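  • as a toy stand-in for that integration step, a dead-reckoned pose from the encoder could be blended with a camera-derived landmark fix. The linear blend below is a deliberately simplified illustration (a real SLAM filter would weight by uncertainty and handle heading wrap-around); all names are hypothetical:

```python
def fuse_pose(odometry_pose, landmark_pose, landmark_weight=0.3):
    # Naive linear blend of (x, y, theta) tuples. The more feature
    # matches we trust, the larger landmark_weight would be chosen.
    w = landmark_weight
    return tuple((1.0 - w) * o + w * l
                 for o, l in zip(odometry_pose, landmark_pose))

fused = fuse_pose((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), landmark_weight=0.5)
```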
  • the drive control unit 152 is a processing unit that controls the drive mechanism 170 so as to increase the accuracy of the self-position estimation performed by the estimation unit 151 and changes the attitude of the imaging unit 160.
  • for obstacle detection, the drive control unit 152 changes the orientation of the imaging unit 160 so that the floor is included in the angle of view.
  • hereinafter, the orientation of the imaging unit 160 used for self-position estimation may be described as the “estimated posture”, and the orientation used for obstacle detection as the “detected posture”.
  • the estimated posture of the imaging unit 160 at which the accuracy of the self-position estimation becomes high changes depending on the surrounding environment and the situation of the autonomous traveling cleaner 100.
  • the posture in which the number of feature points included in one piece of image information obtained from the imaging unit 160 is large is set as the estimated posture of the imaging unit 160. This tends to increase the accuracy of the self-position estimation.
  • the drive control unit 152 controls the drive mechanism 170 so that the angle of view includes many feature points, and changes the estimated attitude of the imaging unit 160.
  • the method for determining the estimated posture is not particularly limited.
  • the drive control unit 152 controls the drive mechanism 170 so that the posture of the imaging unit 160 changes in a predetermined order or randomly.
  • image information is sequentially acquired from the imaging unit 160.
  • if the number of feature points in a piece of image information exceeds a first threshold value, for example 1000, the posture in which that image information was acquired may be determined as the estimated posture of the imaging unit 160.
  • when the type specifying unit 153 obtains information indicating that ceiling illumination is included in the image information, the posture in which that image information was obtained may be determined as the estimated posture. Alternatively, the posture at which image information with the ceiling illumination at the center of the image is obtained may be determined as the estimated posture.
  • the drive control unit 152 controls the drive mechanism 170 so that the imaging unit 160 can acquire a plurality of pieces of image information covering the whole sphere or a range close to the whole sphere. Then, a posture in which the number of feature points within the angle of view of the imaging unit 160 exceeds the first threshold may be determined as the estimated posture of the imaging unit 160. Further, the posture having the most feature points within the angle of view of the imaging unit 160 may be determined as the estimated posture of the imaging unit 160. That is, as described above, the estimated attitude of the imaging unit 160 is determined using the number of feature points included in the angle of view. Thereby, for example, an estimation result obtained from processing such as self-position estimation performed by the estimation unit 151 can be obtained with high accuracy.
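  • the threshold-or-maximum selection rule described above can be expressed compactly. The threshold value of 1000 comes from the example in the text; the function and posture names are hypothetical:

```python
FIRST_THRESHOLD = 1000  # example threshold value from the description

def choose_estimated_posture(feature_counts):
    # feature_counts: posture id -> number of feature points seen there.
    # Take the first posture whose count exceeds the threshold;
    # otherwise fall back to the posture with the most feature points.
    for posture, count in feature_counts.items():
        if count > FIRST_THRESHOLD:
            return posture
    return max(feature_counts, key=feature_counts.get)

counts = {"ceiling": 1400, "wall": 300, "floor": 120}
best = choose_estimated_posture(counts)  # -> "ceiling"
```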
  • the drive control unit 152 controls the drive mechanism 170 so that the optical axis of the imaging unit 160 is directed upward, keeping the floor surface out of the angle of view, in order to determine an estimated posture that includes many feature points.
  • the image capturing section 160 may acquire a plurality of pieces of image information having different areas. Thus, the number of pieces of image information acquired to increase the number of feature points included in the image information can be suppressed. As a result, the processing required for determining the estimated attitude can be further speeded up.
  • the drive control unit 152 may control the drive mechanism 170 so that the type specifying unit 153 described below specifies, for example, ceiling illumination, and the image capturing unit 160 may acquire image information.
  • alternatively, the estimated posture of the imaging unit 160 with which the accuracy of the self-position estimation is high may be selected, based on, for example, a plurality of pieces of image information obtained from the imaging unit 160, as the posture having a large number of feature points that can be matched. As a result, the accuracy of the self-position estimation increases. That is, the drive control unit 152 controls the drive mechanism 170 so that the angle of view includes many matchable feature points, and changes the imaging unit 160 to the selected estimated posture.
  • the drive control unit 152 controls the drive mechanism 170 so that the imaging unit 160 can acquire a plurality of pieces of image information whose areas overlap, covering, for example, the whole sphere, a range close to the whole sphere, a range in which the floor surface does not fall within the angle of view, or a range in which ceiling illumination is included within the angle of view.
  • the drive control unit 152 may determine, as the estimated posture, a posture in which the number of matchable feature points within the angle of view of the imaging unit 160 exceeds a second threshold, for example 100.
  • the drive control unit 152 may determine, as the estimated posture, the posture having the most matching feature points within the angle of view of the imaging unit 160.
  • the type specifying unit 153 described above is a processing unit that specifies the type of a captured object from image information obtained from the imaging unit 160.
  • the method of specifying the type is not particularly limited.
  • an AI (Artificial Intelligence) technique can be exemplified.
  • the image capturing unit 160 first acquires a plurality of pieces of image information in advance in an environment (including a pseudo environment) in which the autonomous traveling cleaner 100 performs cleaning.
  • the type specifying unit 153 prepares the image of the object in the acquired image information and the type of the object corresponding to the image as data in the storage unit 190 or the like.
  • the type specifying unit 153 learns situations such as objects in an environment (including a pseudo environment) by using a technique such as deep learning, and constructs a type identification model that outputs the type of a learned object when an acquired image includes its image. At this time, the type specifying unit 153 may prepare as data, in addition to the image of an object and the type corresponding to it, the image position at which the object appears. The type specifying unit 153 can then also construct a model that outputs the image position along with the type of the object, and the actual distance between the object and the main body 110 can be estimated to some extent from that image position. The type specifying unit 153 may thus specify the type of an object using the models constructed as described above.
  • specifying the type of an object includes not only specifying its detailed type but also specifying whether the object can be an obstacle to the traveling of the autonomous traveling vacuum cleaner 100. Specifically, for example, a window appearing in the image information is not specified as an obstacle, whereas a small carpet on the floor is specified as one.
  • the type specifying unit 153 specifies the type of the object based on the image information obtained from the imaging unit 160 in the detected posture. As a result, the type identification unit 153 detects a moving obstacle. At this time, the type identification unit 153 may calculate the distance to the obstacle based on information on the angle of the imaging unit 160 in the detected posture acquired from the drive control unit 152, odometry information obtained from the encoder 121 and the like, the position of the identified obstacle in the image information, and the like. Thus, the operation of the autonomous traveling vacuum cleaner 100 can be changed depending on the type of obstacle, for example between obstacles to be sucked up during cleaning and obstacles to be avoided without approaching.
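  • one simple way such an angle-based distance estimate can work is flat-floor geometry: given the height of the imaging unit above the floor and its downward tilt in the detected posture, the point where the optical axis meets the floor lies at a fixed horizontal distance. A hedged sketch; the height, function name, and parameters are illustrative, not from the patent:

```python
import math

def ground_distance_m(camera_height_m, tilt_down_rad):
    # Where the tilted optical axis intersects a flat floor:
    # horizontal distance = height / tan(downward tilt).
    if tilt_down_rad <= 0:
        raise ValueError("optical axis must point below the horizon")
    return camera_height_m / math.tan(tilt_down_rad)

# Imaging unit 0.1 m above the floor, tilted 45 degrees down:
d = ground_distance_m(0.1, math.radians(45.0))  # about 0.1 m ahead
```

A shallower tilt pushes the intersection point farther away, which is one reason accuracy for distant obstacles degrades.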
  • the traveling control unit 154 is a processing unit that controls the traveling operation of the autonomous traveling vacuum cleaner 100. Specifically, for example, the self-position of the autonomous traveling cleaner 100 is estimated from information detected by various sensors including the imaging unit 160 included in the autonomous traveling cleaner 100. Then, the traveling control unit 154 controls the drive unit 120 based on the estimated self-position of the autonomous traveling cleaner 100. Thereby, the traveling control unit 154 controls traveling and steering of the autonomous traveling cleaner 100 to move on the floor.
  • the cleaning control unit 155 controls the suction unit 140 to suck dust from the suction port 111 of the main body 110. As a result, the floor is cleaned.
  • FIG. 5 is a flowchart showing the operation of the autonomous traveling vacuum cleaner 100.
  • the drive control unit 152 of the autonomous traveling cleaner 100 controls the drive mechanism 170 to change the attitude of the imaging unit 160 so that the floor to be cleaned is included in the angle of view, thereby setting the detected posture (step S101).
  • the type identification unit 153 detects whether there is an obstacle in the image information acquired from the imaging unit 160 (step S102). At this time, when there is an obstacle (Yes in step S102), the type identification unit 153 estimates the distance to the obstacle (step S103). On the other hand, when no obstacle is detected (No in step S102), the type identification unit 153 sets the estimated distance to the maximum detection distance (for example, about 1 meter) (step S104), and proceeds to step S105. The traveling operation can thus proceed in increments of the maximum detection distance. In addition, since the detection position accuracy for a distant obstacle tends to be low, setting a maximum detection distance makes it possible to capture obstacles accurately and travel while compensating for the reduced position accuracy.
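  • the fallback in steps S103/S104 amounts to capping the estimated distance at the maximum detection distance. A minimal sketch using the 1-meter example value from the text; the function name is an assumption:

```python
MAX_DETECTION_DISTANCE_M = 1.0  # example value from the description

def effective_distance_m(estimated_m):
    # No obstacle found (None): fall back to the cap, as in step S104.
    # Otherwise never trust an estimate beyond the cap, where the
    # detection position accuracy degrades.
    if estimated_m is None:
        return MAX_DETECTION_DISTANCE_M
    return min(estimated_m, MAX_DETECTION_DISTANCE_M)
```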
  • the drive control unit 152 controls the drive mechanism 170 so that, as described with reference to FIG. 3, the optical axis of the imaging unit 160 moves (rotates) around at least one axis along the horizontal plane (for example, around the first rotation axis) and turns upward, keeping the floor surface out of the angle of view (step S105). At this time, the optical axis of the imaging unit 160 may also be moved (rotated) around the second rotation axis.
  • the estimating unit 151 acquires image information obtained by imaging a plurality of different areas from the imaging unit 160, and determines whether or not the imaging unit 160 is in the estimated posture (step S106). At this time, if the posture is not the estimated posture (No in step S106), the process returns to step S105, and the subsequent steps are similarly executed. On the other hand, when the estimated posture is reached (Yes in step S106), the estimating unit 151 determines the estimated posture of the imaging unit 160. Note that an arbitrary method such as the method described above can be adopted as a method for determining the estimated posture.
  • the estimating unit 151 performs the self-position estimation based on the feature points in the image information obtained from the imaging unit 160 in the estimated posture (step S107).
  • the estimating unit 151 continuously performs the self-position estimation while moving the autonomous traveling cleaner 100 on the floor by the drive unit 120, and performs creation and updating of the map as needed (step S108).
  • the drive control unit 152 causes the storage unit 190 to store a region imaged in the determined estimated posture of the imaging unit 160, the posture of the imaging unit 160, and the like.
  • the estimated attitude of the imaging unit 160 may or may not be made to follow the traveling state of the autonomous traveling cleaner 100 so that the same area remains within the angle of view.
  • the estimation unit 151 can maintain high accuracy of the self-position estimation.
  • the estimation unit 151 can stably execute the self-position estimation with high accuracy.
  • the estimating unit 151 determines whether or not the self-position has been lost during the cleaning (step S109). At this time, if the self-position has been lost (Yes in step S109), the process returns to step S105, and the subsequent steps are executed. Specifically, the drive control unit 152 controls the drive mechanism 170 again to change the attitude of the imaging unit 160 so as to increase the accuracy of the self-position estimation (step S105). Subsequently, the estimating unit 151 determines again the estimated posture of the imaging unit 160 (Step S106). Further, the respective steps of the above-described self-position estimation (step S107) and map creation (step S108) are executed.
  • next, when the self-position has not been lost (No in step S109), it is determined whether the cleaning of the predetermined area has been completed (step S110). If the cleaning has not been completed (No in step S110), the process returns to step S107, and the subsequent steps are executed.
  • if the cleaning has been completed (Yes in step S110), the series of steps ends.
  • the autonomous traveling vacuum cleaner 100 operates as described above.
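The flow of steps S102 through S110 described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the constant, the function names, and the `robot` methods are all hypothetical assumptions introduced only to show the control flow.

```python
# Hypothetical sketch of the cleaning flow (steps S102-S110) described above.
# All names (estimate_obstacle_distance, run_cleaning, the robot methods) are
# illustrative assumptions, not an actual firmware API.

MAX_DETECTION_DISTANCE_M = 1.0  # fallback estimate used when no obstacle is seen (step S104)

def estimate_obstacle_distance(obstacle_found, raw_distance_m):
    """Steps S102-S104: use the measured distance when an obstacle is
    detected (far readings are unreliable, so they are capped as well);
    otherwise fall back to the maximum detection distance."""
    if obstacle_found:
        return min(raw_distance_m, MAX_DETECTION_DISTANCE_M)
    return MAX_DETECTION_DISTANCE_M

def run_cleaning(robot):
    """Skeleton of steps S105-S110: aim the camera upward until the
    estimated posture is reached, then clean while estimating the
    self-position, re-posing the camera whenever the position is lost."""
    while not robot.in_estimated_posture():   # S105-S106
        robot.rotate_camera()
    while not robot.cleaning_done():          # S110
        robot.estimate_self_position()        # S107
        robot.update_map()                    # S108
        if robot.self_position_lost():        # S109 -> back to S105
            while not robot.in_estimated_posture():
                robot.rotate_camera()
```

The key point is the fallback in steps S102-S104: a missing or distant detection is replaced by a bounded estimate so the traveling operation can proceed in well-defined increments.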
  • as described above, the autonomous traveling cleaner 100 includes the imaging unit 160, which has at least one optical system with a relatively narrow angle of view (for example, 30 degrees or more and 90 degrees or less) and small distortion. This makes it possible to detect a forward obstacle and to calculate the distance to the obstacle with high accuracy. Further, self-position estimation can be performed with high accuracy based on the feature points included in the image information obtained from the imaging unit 160 in an appropriate estimated posture. As a result, the autonomous traveling vacuum cleaner 100 can move smoothly without colliding with obstacles, and can thoroughly clean the floor, including around obstacles and in corners.
  • the present invention is not limited to the above embodiment.
  • another embodiment realized by arbitrarily combining the components described in this specification, or by excluding some of them, may also be an embodiment of the present invention.
  • modified examples obtained by applying various modifications conceivable by those skilled in the art to the above-described embodiment are also included in the present invention, as long as they do not depart from the meaning indicated by the words recited in the claims.
  • in the above embodiment, the flow in which the imaging unit 160 is first set to the detection posture to detect obstacles and the cleaning is then performed while executing the self-position estimation has been described as an example. However, the flow is not limited to this.
  • the imaging unit 160 is set to the estimated posture, and the self-position estimation is performed.
  • the imaging unit 160 is set to the detection posture, and the cleaning is performed while recognizing the obstacle and estimating the distance to the obstacle.
  • the imaging unit 160 may be set to the estimated posture again and the cleaning may be performed while executing the self-position estimation.
  • the optical axis of the imaging unit 160 may be directed vertically upward.
  • the probability that ceiling illumination is included in the angle of view of the imaging unit 160 then increases. Therefore, the posture in which the ceiling illumination is located at the center of the image may be used as the estimated posture of the imaging unit 160. The feature points of the ceiling illumination can thereby always be observed, and performing self-position estimation based on those feature points improves the estimation accuracy.
  • the estimated attitude of the imaging unit 160 is determined based on the number of feature points included in one piece of image information and the number of feature points that can be matched. For example, a posture at which these counts are large may be determined as the estimated posture.
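The selection criterion above can be illustrated with a small sketch. The candidate postures and their counts below are hypothetical example data, and the tie-breaking rule (prefer matched features, then total features) is an assumption introduced for illustration; the patent only states that both counts are used.

```python
# Illustrative sketch: choose the "estimated posture" as the camera
# orientation whose image yields the most feature points that could also be
# matched between frames. The data and the tie-break rule are assumptions.

def select_estimated_posture(candidates):
    """candidates: list of (posture_label, num_feature_points, num_matched).
    Returns the posture label maximizing matched features, breaking ties by
    the total number of feature points."""
    return max(candidates, key=lambda c: (c[2], c[1]))[0]
```

For example, a posture looking 30 degrees upward that yields 32 matchable features would be preferred over a forward posture yielding only 10.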
  • when it is desired to grasp more accurately the distance between an obstacle and the autonomous traveling cleaner 100, for example when the cleaner climbs onto a carpet recognized as an obstacle in order to clean it, the autonomous traveling cleaner 100 may be controlled as follows.
  • first, the autonomous traveling cleaner 100 moves closer to the obstacle according to the distance information, to a position where the leading edge of the main body 110 and the obstacle fall within the angle of view of the imaging unit 160. Then, the drive control unit 152 controls the drive mechanism 170 such that the leading edge of the autonomous traveling cleaner 100 and the obstacle fall within the angle of view of the imaging unit 160. Thereby, the distance between the obstacle and the autonomous traveling cleaner 100 can be grasped more accurately, and the floor can be cleaned thoroughly.
  • the present invention is applicable to an autonomous traveling vacuum cleaner that automatically cleans a floor surface in a house or the like.
  • Reference Signs List 100 autonomous traveling cleaner 110 main body 111 suction port 120 drive unit 121 encoder 129 caster 130 cleaning unit 140 suction unit 141 trash can unit 150 control unit 151 estimation unit 152 drive control unit 153 type identification unit 154 travel control unit 155 cleaning control unit 160 imaging Unit 170 Drive mechanism 180 Battery 190 Storage unit 271 Transmitting unit 272 Receiving unit 273 Obstacle sensor 274 Distance measuring sensor 276 Floor surface sensor 277 Dust amount sensor

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Electric Vacuum Cleaner (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

An autonomous traveling cleaner (100) comprises an imaging unit (160) for imaging an object present in the vicinity, and a drive mechanism (170) capable of changing the orientation of the imaging unit (160) about at least one axis along a horizontal plane. Furthermore, the autonomous traveling cleaner (100) comprises: an estimation unit for estimating the position of the autonomous traveling cleaner (100) on the basis of image information obtained from the imaging unit (160); and a drive control unit for controlling the drive mechanism (170) so that the accuracy of estimation of the position of the autonomous traveling cleaner (100) by the estimation unit rises, and changing the orientation of the imaging unit (160). There is thereby provided an autonomous traveling cleaner (100) that is capable of detecting obstacles and estimating the position of the autonomous traveling cleaner (100) using the imaging unit (160), e.g., at least one camera having a narrow angle of view.

Description

Autonomous traveling vacuum cleaner
The present invention relates to an autonomous traveling cleaner that autonomously travels on and cleans a floor surface.
In recent years, autonomous traveling vacuum cleaners have been proposed that use information from an on-board camera to estimate their own position from their movement and their positional relationship with the surroundings, grasp where in the room they are, and determine their operation based on that information.
There is also disclosed an autonomous traveling vacuum cleaner that calculates, from an image obtained by a camera, the distance from the camera's shooting point to an object included in the image (see, for example, Patent Document 1).
Specifically, the autonomous traveling vacuum cleaner disclosed in Patent Document 1 estimates its own position and detects obstacles with a single camera. In this configuration, when, for example, the floor surface is imaged to detect obstacles and the self-position is estimated from the obtained feature points, the self-position is easily lost when the cleaner turns. On the other hand, if a camera with a wide angle of view is used to image the ceiling and the obstacles on the floor at once, the self-position can be estimated using feature points on the ceiling, so the loss of the self-position due to turning can be avoided. In this case, however, the distortion of the obtained image becomes too large, so the calculated distance to an obstacle contains a large error and the error of the self-position estimation becomes large.
Japanese Patent No. 3215616
The present invention provides an autonomous traveling vacuum cleaner capable of estimating its own position and imaging obstacles on the floor with a single imaging unit.
One aspect of the present invention is an autonomous traveling cleaner that autonomously travels and performs cleaning. The autonomous traveling cleaner includes an imaging unit that images objects present in the surroundings, a drive mechanism that can change the posture of the imaging unit around at least one axis along a horizontal plane, and an estimating unit that performs self-position estimation based on image information obtained from the imaging unit. It further includes a drive control unit that controls the drive mechanism to change the posture of the imaging unit.
Each process performed by the autonomous traveling cleaner may be realized by a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or by any combination of these.
According to the present invention, it is possible to provide an autonomous traveling cleaner that, with a single imaging unit, accurately detects obstacles on the floor surface and accurately estimates its own position.
FIG. 1 is a plan view showing the autonomous traveling cleaner according to the embodiment from above. FIG. 2 is a plan view showing the autonomous traveling cleaner from below. FIG. 3 is a side view showing the autonomous traveling cleaner from the side. FIG. 4 is a block diagram showing the functional configuration of the autonomous traveling cleaner together with its mechanical configuration and the like. FIG. 5 is a flowchart showing the operation of the autonomous traveling cleaner.
Hereinafter, embodiments of the autonomous traveling vacuum cleaner according to the present invention will be described in detail with reference to the drawings. Each of the embodiments described below shows a specific example of the present invention. Therefore, the numerical values, shapes, materials, components, the arrangement and connection of the components, the steps and their order, and the like shown in the following embodiments are merely examples and are not intended to limit the present invention. Accordingly, among the components in the following embodiments, those not recited in the independent claims representing the broadest concept of the present invention are described as optional components.
Each drawing is a schematic diagram and is not necessarily drawn strictly to scale. In each drawing, the same components are denoted by the same reference numerals.
In the following embodiments, expressions indicating directions are used. For example, an orthogonal direction means not only a completely orthogonal direction but also a substantially orthogonal one, that is, one including an angular difference of, for example, about a few percent. The same applies to expressions using other directions.
(Embodiment)
Hereinafter, an autonomous traveling cleaner 100 according to an embodiment of the present invention will be described with reference to FIGS. 1 to 3.
FIG. 1 is a plan view showing the autonomous traveling vacuum cleaner 100 according to the present embodiment from above. FIG. 2 is a plan view showing the autonomous traveling cleaner 100 from below. FIG. 3 is a side view showing the autonomous traveling cleaner 100 from the side. The autonomous traveling vacuum cleaner 100 according to the embodiment is a robot-type vacuum cleaner that autonomously travels on the floor of a house having a ceiling and sucks up dust on the floor.
Specifically, as shown in FIGS. 1 to 3, the autonomous traveling cleaner 100 of the present embodiment includes a main body 110, a drive unit 120, a cleaning unit 130, a suction unit 140, a control unit 150, an imaging unit 160, a drive mechanism 170, a battery 180 (see FIG. 4), a storage unit 190 (see FIG. 4), and the like. The drive unit 120 moves the main body 110 on the floor. The cleaning unit 130 collects dust present on the floor. The suction unit 140 sucks the dust collected by the cleaning unit 130 into the main body 110. The control unit 150 controls the operations of the drive unit 120, the cleaning unit 130, the suction unit 140, and the like.
The main body 110 constitutes a housing that accommodates the drive unit 120, the control unit 150, and the like. The main body 110 includes a lower part and an upper part, and the upper part is removable from the lower part. The main body 110 includes a bumper (not shown) provided on its outer periphery so as to be displaceable with respect to the main body 110. Further, the main body 110 has, at its lower part, a suction port 111 for sucking dust into the main body 110.
The drive unit 120 causes the autonomous traveling cleaner 100 to travel on the floor based on instructions from the control unit 150. In the present embodiment, one drive unit 120 is arranged on each of the left and right sides of the center of the main body 110 in the width direction in plan view. The number of drive units 120 is not limited to two; it may be one, or three or more.
The drive unit 120 includes a wheel (not shown) that travels on the floor, a traveling motor that applies torque to the wheel, a housing that accommodates the traveling motor, and the like. The wheel is accommodated in a recess (not shown) formed in the lower surface of the lower part of the main body 110 and is rotatably attached to the main body 110.
The drive unit 120 may include an encoder 121 (see FIG. 4) for detecting odometry information used to calculate the traveling direction of the autonomous traveling cleaner 100. Hereinafter, the traveling direction of the autonomous traveling cleaner 100 (the direction in which it travels) includes not only the direction in which it is actually traveling but also the direction in which it is about to travel.
The autonomous traveling cleaner 100 adopts an opposed two-wheel drive system including casters 129 as auxiliary wheels. By independently controlling the rotation of the two wheels, the autonomous traveling cleaner 100 can travel freely: straight ahead, backward, turning left, turning right, and so on.
The cleaning unit 130 is a unit for sucking dust through the suction port 111. The cleaning unit 130 includes a main brush (not shown) disposed in the suction port 111, a brush drive motor that rotates the main brush, and the like.
The suction unit 140 is disposed inside the main body 110. The suction unit 140 includes a fan case (not shown), an electric fan disposed inside the fan case, and the like. The electric fan sucks the air inside the trash box unit 141 and discharges it to the outside of the main body 110. Thus, the dust sucked in through the suction port 111 is stored in the trash box unit 141.
The control unit 150 is disposed inside the main body 110. The control unit 150 includes a memory and a CPU (Central Processing Unit) that executes a control program stored in the memory. The processing units realized by the control unit 150 will be described later.
The imaging unit 160 is configured as a so-called digital camera or the like that images objects such as obstacles existing around the main body 110 and outputs image information in the form of digital signals. The imaging unit 160 of the present embodiment includes a monocular optical system with a single optical axis. Therefore, the image information obtained from the imaging unit 160 does not include information indicating the distance from the imaging unit 160 to an object.
The angle of view of the optical system included in the imaging unit 160 is selected to be, for example, 90 degrees or less. Therefore, the imaging unit 160 adopts an optical system that cannot capture an image of the entire circumference of the space above the main body 110. This suppresses distortion of the image obtained from the imaging unit 160 and differences in resolution depending on the position within the image.
As shown in FIG. 3, the drive mechanism 170 includes a mechanism for moving the posture of the imaging unit 160 around at least one axis along a horizontal plane. The drive mechanism 170 of the present embodiment is configured to change the posture of the imaging unit 160 around a first rotation axis extending in the width direction of the autonomous traveling cleaner 100 (the X-axis direction in FIG. 3), as indicated by the broken arrow in FIG. 3. Further, the drive mechanism 170 is configured to change the posture of the imaging unit 160 around a second rotation axis extending in the vertical direction, indicated by the dashed-dotted line in FIG. 3. The drive mechanism 170 may also be configured so that the posture of the imaging unit 160 can be changed around a rotation axis extending in a direction other than the X-axis direction and the vertical direction. Although FIG. 3 illustrates an example in which the imaging unit 160 and the drive mechanism 170 protrude from the top surface of the main body 110, the configuration is not limited to this. For example, a recess may be provided in the top surface of the main body 110, and the imaging unit 160 and the drive mechanism 170 may be installed in the recess. Since the posture of the imaging unit 160 is known, image information enabling more accurate self-position estimation can be acquired.
The battery 180 (see FIG. 4) is electrically connected to the electronic devices of the autonomous traveling cleaner 100. The battery 180 is a battery, such as a secondary battery, for supplying power to the connected electronic devices.
The storage unit 190 is a memory, such as a ROM (Read Only Memory) or a RAM (Random Access Memory), in which the control programs executed by the various processing units are stored. The storage unit 190 is realized by, for example, an HDD (Hard Disk Drive) or a flash memory.
The storage unit 190 also stores, for example, correlation information, size information, a floor map, and the like. Specifically, the correlation information is information on the estimated real-space size of a pixel with respect to the pixel position in an image generated by the imaging unit 160. The size information is information on the size of the autonomous traveling cleaner 100 (specifically, the lateral width of the main body 110 and the like). The floor map is information indicating the floor plan of a house or the like. The lateral width of the main body 110 is the width of the autonomous traveling cleaner 100 in the direction orthogonal to its traveling direction when viewed from above.
The autonomous traveling vacuum cleaner 100 further includes various sensors exemplified below, such as an obstacle sensor 273, a distance measurement sensor 274, a collision sensor (not shown), a floor surface sensor 276, and a dust amount sensor 277.
The obstacle sensor 273 is a sensor that detects obstacles to traveling, such as surrounding walls and furniture in front of the main body 110. In the present embodiment, for example, an ultrasonic sensor is used as the obstacle sensor 273. The obstacle sensor 273 includes a transmitting unit 271, receiving units 272, and the like. The transmitting unit 271 is disposed at the front center of the main body 110 and transmits ultrasonic waves forward. The receiving units 272 are disposed on both sides of the transmitting unit 271 in the X direction and receive the ultrasonic waves transmitted from the transmitting unit 271. That is, the obstacle sensor 273 receives, at the receiving units 272, the ultrasonic waves transmitted from the transmitting unit 271 and returned after being reflected by an obstacle. In this way, the obstacle sensor 273 detects the distance to the obstacle, the position of the obstacle, and the like.
 測距センサ274は、本体110の周囲に存在する障害物などの物体と、本体110との距離を検出するセンサである。本実施の形態においては、測距センサ274は、例えば発光部および受光部を有する赤外線センサで構成される。つまり、測距センサ274は、発光部から放射され、障害物で反射した赤外線が戻って受光部で受光されるまでの経過時間に基づいて、障害物との距離を測定する。 The distance measuring sensor 274 is a sensor that detects a distance between the main body 110 and an object such as an obstacle existing around the main body 110. In the present embodiment, the distance measuring sensor 274 is configured by, for example, an infrared sensor having a light emitting unit and a light receiving unit. That is, the distance measurement sensor 274 measures the distance to the obstacle based on the elapsed time from when the infrared light emitted from the light emitting unit and reflected by the obstacle returns and is received by the light receiving unit.
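Both the ultrasonic obstacle sensor and the elapsed-time distance measurement described above rely on the same round-trip relation: the wave travels to the object and back, so the distance is half the wave speed times the elapsed time. A minimal sketch (the function name is an assumption for illustration):

```python
def time_of_flight_distance(elapsed_s, wave_speed_m_s):
    """Round-trip time-of-flight ranging: the wave covers the distance to
    the obstacle twice (out and back), so distance = speed * time / 2.
    For the ultrasonic obstacle sensor, wave_speed_m_s would be the speed
    of sound (roughly 343 m/s in air at room temperature)."""
    return wave_speed_m_s * elapsed_s / 2.0
```

For example, an ultrasonic echo returning after 10 ms corresponds to an obstacle roughly 1.7 m away.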
Specifically, the distance measurement sensors 274 are disposed, for example, near the right front top and near the left front top. The right distance measurement sensor 274 outputs light (infrared light) diagonally forward to the right of the main body 110. The left distance measurement sensor 274 outputs light (infrared light) diagonally forward to the left of the main body 110. With this configuration, when the autonomous traveling cleaner 100 turns, the distance measurement sensors 274 detect the distance between the main body 110 and the surrounding object closest to the contour of the main body 110.
The floor surface sensors 276 are disposed at a plurality of locations on the bottom surface of the main body 110 and detect the state of the floor surface. In the present embodiment, the floor surface sensor 276 is configured as, for example, an infrared sensor having a light emitting unit and a light receiving unit. That is, the floor surface sensor 276 detects the state of the floor, for example whether the floor is wet, based on the amount of light (infrared light) emitted from the light emitting unit that returns and is received by the light receiving unit.
The dust amount sensor 277 is composed of, for example, a light emitting element and a light receiving element, and detects, with the light receiving element, the amount of light emitted from the light emitting element and outputs the result. Based on the output information, the control unit 150 associates the amount of received light with the amount of dust. Specifically, the control unit 150 determines that the smaller the amount of received light, the larger the amount of dust, and generates dust amount information indicating this.
The obstacle sensor 273, the distance measurement sensor 274, the floor surface sensor 276, the dust amount sensor 277, and the like described above are examples of sensors. Therefore, the autonomous traveling cleaner 100 does not need to include all of these sensors, and may include sensors of forms different from the above.
For example, the autonomous traveling cleaner 100 may further include a collision sensor (not shown), an encoder 121 (see FIG. 4), an acceleration sensor, an angular velocity sensor, and the like.
The collision sensor is composed of, for example, a switch contact displacement sensor, and is disposed on the bumper attached around the main body 110. The switch contact displacement sensor is turned on when an obstacle contacts the bumper and the bumper is pushed in toward the main body 110.
The encoder 121 is disposed in the drive unit 120. The encoder 121 detects the rotation angle of each of the pair of wheels rotated by the traveling motors. From the information from the encoders 121, for example, the travel amount, turning angle, speed, acceleration, and angular velocity of the autonomous traveling cleaner 100 are calculated.
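For the opposed two-wheel drive described earlier, the travel amount and turning angle can be derived from the per-wheel rolled distances with the standard differential-drive relations. The sketch below illustrates this; the function name and the use of rolled distances (wheel rotation angle times wheel radius) as inputs are assumptions for illustration, not the patent's actual calculation.

```python
def wheel_odometry(d_left_m, d_right_m, track_width_m):
    """Differential-drive odometry step. Inputs are the distances each
    wheel rolled since the last update (encoder rotation angle * wheel
    radius). Returns:
      travel  = (d_left + d_right) / 2   -- advance of the body center
      d_theta = (d_right - d_left) / track_width  -- heading change [rad]
    """
    travel = (d_left_m + d_right_m) / 2.0
    d_theta = (d_right_m - d_left_m) / track_width_m
    return travel, d_theta
```

Equal wheel distances give pure straight travel; opposite distances give a turn in place, which is how the cleaner can rotate without advancing.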
The acceleration sensor detects the acceleration when the autonomous traveling cleaner 100 travels.
The angular velocity sensor detects the angular velocity when the autonomous traveling cleaner 100 turns.
 加速度センサおよび角速度センサにより検出された情報は、例えばホイールの空回りによって発生する誤差を修正するための情報などに用いられる。 The information detected by the acceleration sensor and the angular velocity sensor is used, for example, as information for correcting an error caused by idling of the wheels.
 つぎに、制御ユニット150が実現する各処理部について、図4を用いて、説明する。 Next, each processing unit realized by the control unit 150 will be described with reference to FIG.
 図4は、実施の形態に係る自律走行掃除機100の機能構成を、機構構成などとともに示すブロック図である。 FIG. 4 is a block diagram showing a functional configuration of the autonomous traveling cleaner 100 according to the embodiment, together with a mechanical configuration and the like.
 図4に示すように、自律走行掃除機100は、制御ユニット150が実行する処理部として、推定部151と、駆動制御部152と、種類特定部153と、走行制御部154と、清掃制御部155などを備える。なお、本実施の形態は、制御ユニット150を一つのユニットとして、一つの機能ブロックに記載しているが、各処理部は、複数の機能ブロックに分かれ、それぞれに対応する複数のCPUにより実現されてもよい。 As shown in FIG. 4, the autonomous traveling cleaner 100 includes, as processing units executed by the control unit 150, an estimation unit 151, a drive control unit 152, a type identification unit 153, a traveling control unit 154, a cleaning control unit 155, and the like. In the present embodiment, the control unit 150 is described as a single unit in one functional block; however, each processing unit may be divided into a plurality of functional blocks and realized by a plurality of corresponding CPUs.
 推定部151は、撮像部160から得られる画像情報に基づいて、自己位置推定を行う処理部である。自己位置推定の方法は、特に限定されないが、本実施の形態の場合、推定部151は、SLAM(Simultaneous Localization and Mapping)技術を用いて、自己位置推定を行っている。具体的には、推定部151は、撮像部160から取得した画像情報中に含まれる複数の特徴点(ランドマーク)、駆動制御部152から得られる撮像部160の姿勢を示す姿勢情報、駆動ユニット120に組み込まれたエンコーダ121からの信号などを統合して、自己位置推定を実行する。そして、推定した自己位置に基づいて、推定部151は、自己位置情報を生成する。 The estimation unit 151 is a processing unit that performs self-position estimation based on image information obtained from the imaging unit 160. The method of self-position estimation is not particularly limited; in the present embodiment, the estimation unit 151 performs self-position estimation using SLAM (Simultaneous Localization and Mapping) technology. Specifically, the estimation unit 151 executes self-position estimation by integrating a plurality of feature points (landmarks) included in the image information acquired from the imaging unit 160, posture information indicating the posture of the imaging unit 160 obtained from the drive control unit 152, signals from the encoder 121 incorporated in the drive unit 120, and the like. Then, based on the estimated self-position, the estimation unit 151 generates self-position information.
 また、推定部151は、走行しながら自己位置推定を実行する。これにより、推定部151は、複数箇所で生成した自己位置情報の集合である走行実績を用いて、マップも生成している。このとき、マップに、各種センサで検出した情報を用いて、壁などの障害物の位置(領域)を情報として、付加してもよい。 The estimation unit 151 also performs self-position estimation while traveling. In this way, the estimation unit 151 also generates a map using the travel history, which is the set of self-position information generated at a plurality of locations. At this time, the position (area) of an obstacle such as a wall, detected by the various sensors, may be added to the map as information.
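The map building described above can be sketched as a coarse grid that accumulates self-position fixes and overlays obstacle cells detected by the sensors. The grid representation and the 0.1 m cell size are assumptions for illustration, not part of the disclosure.

```python
def build_map(positions, obstacles, cell=0.1):
    """Accumulate self-position fixes into a grid map.

    positions and obstacles are (x, y) tuples in metres; cell is a
    hypothetical grid resolution. Obstacle information overrides
    visited cells, mirroring how obstacle areas are added to the map.
    """
    grid = {}
    for x, y in positions:
        grid[(round(x / cell), round(y / cell))] = "visited"
    for x, y in obstacles:
        grid[(round(x / cell), round(y / cell))] = "obstacle"
    return grid
```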
 駆動制御部152は、推定部151が実行する自己位置推定の確度が高くなるように駆動機構170を制御して、撮像部160の姿勢を変更する処理部である。また、撮像部160から得られた画像情報に基づいて、床面に存在する障害物を検出する場合、駆動制御部152は、画角内に床面が含まれるように、撮像部160の姿勢を変更する。 The drive control unit 152 is a processing unit that controls the drive mechanism 170 to change the posture of the imaging unit 160 so as to increase the accuracy of the self-position estimation performed by the estimation unit 151. When an obstacle present on the floor is to be detected based on the image information obtained from the imaging unit 160, the drive control unit 152 changes the posture of the imaging unit 160 so that the floor is included in the angle of view.
 なお、以下では、自己位置推定のための撮像部160の姿勢を「推定姿勢」、障害物検出のための撮像部160の姿勢を「検出姿勢」と、記載して説明する場合がある。 In the following, the posture of the imaging unit 160 for self-position estimation may be referred to as the "estimated posture", and the posture of the imaging unit 160 for obstacle detection as the "detection posture".
 ここで、自己位置推定の確度が高くなる撮像部160の推定姿勢は、自律走行掃除機100の周辺環境や状況により変化する。この場合、例えば撮像部160から得られた1枚の画像情報内に含まれる特徴点の数が多い姿勢を、撮像部160の推定姿勢とする。これにより、自己位置推定の確度が高くなる傾向にある。そこで、駆動制御部152は、特徴点が多く含まれる画角となるように駆動機構170を制御して、撮像部160の推定姿勢を変更する。 Here, the estimated posture of the imaging unit 160 at which the accuracy of the self-position estimation becomes high changes depending on the surrounding environment and the situation of the autonomous traveling cleaner 100. In this case, for example, the posture in which the number of feature points included in one piece of image information obtained from the imaging unit 160 is large is set as the estimated posture of the imaging unit 160. This tends to increase the accuracy of the self-position estimation. Thus, the drive control unit 152 controls the drive mechanism 170 so that the angle of view includes many feature points, and changes the estimated attitude of the imaging unit 160.
 なお、本実施の形態において、推定姿勢の決定方法は、特に限定されない。具体的な推定姿勢の決定方法としては、まず、例えば駆動制御部152は、予め定められた順番、またはランダムに撮像部160の姿勢が変わるように駆動機構170を制御する。つぎに、順次、撮像部160から画像情報を取得する。そして、画像情報の中の特徴点が、第一閾値を超えた場合、例えば特徴点が1000を超えるような画像情報を取得した姿勢を、撮像部160の推定姿勢として決定してもよい。 In the present embodiment, the method of determining the estimated posture is not particularly limited. As a concrete method, first, the drive control unit 152 controls the drive mechanism 170 so that the posture of the imaging unit 160 changes in a predetermined order or randomly. Next, image information is sequentially acquired from the imaging unit 160. Then, when the number of feature points in a piece of image information exceeds a first threshold (for example, more than 1000 feature points), the posture in which that image information was acquired may be determined as the estimated posture of the imaging unit 160.
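The search just described (step through candidate postures, capture an image at each, and stop at the first whose feature count clears the first threshold) can be sketched as follows. `count_features` stands in for capturing an image and running a feature detector on it, and is an assumed interface, not an API from the disclosure.

```python
FIRST_THRESHOLD = 1000  # feature-point count used as an example above

def find_estimation_pose(poses, count_features, threshold=FIRST_THRESHOLD):
    """Scan candidate camera postures in the given order and return the
    first whose image contains more than `threshold` feature points,
    or None if no candidate qualifies."""
    for pose in poses:
        if count_features(pose) > threshold:
            return pose
    return None
```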
 また、種類特定部153が、画像情報の中に天井照明が含まれていることを示す情報を取得した場合、画像情報を取得した姿勢を推定姿勢として決定してもよい。さらに、天井照明の像が中央部に存在する画像情報を取得した姿勢を、推定姿勢として決定してもよい。 In addition, when the type identification unit 153 obtains information indicating that ceiling illumination is included in the image information, the posture in which that image information was acquired may be determined as the estimated posture. Furthermore, the posture in which the image of the ceiling illumination appears at the center of the image information may be determined as the estimated posture.
 また、駆動制御部152は、撮像部160が、全天球、または全天球に近い範囲をカバーする複数枚の画像情報を取得できるように、駆動機構170を制御する。そして、撮像部160の画角内の特徴点の数が第一閾値を超える姿勢を、撮像部160の推定姿勢として決定してもよい。さらに、撮像部160の画角内で最も特徴点の多い姿勢を、撮像部160の推定姿勢として決定してもよい。つまり、上述のように、画角内に含まれる特徴点の数を用いて、撮像部160の推定姿勢を決定する。これにより、例えば推定部151が実行する自己位置推定などの処理から得られる推定結果を高精度に得ることができる。 The drive control unit 152 also controls the drive mechanism 170 so that the imaging unit 160 can acquire a plurality of pieces of image information covering the whole sphere or a range close to the whole sphere. Then, a posture in which the number of feature points within the angle of view of the imaging unit 160 exceeds the first threshold may be determined as the estimated posture of the imaging unit 160. Alternatively, the posture with the most feature points within the angle of view may be determined as the estimated posture. That is, as described above, the estimated posture of the imaging unit 160 is determined using the number of feature points included in the angle of view. As a result, estimation results such as the self-position estimation performed by the estimation unit 151 can be obtained with high accuracy.
 また、駆動制御部152は、特徴点を多く含む推定姿勢を決定するために、床面が画角内に入らない上方方向に撮像部160の光軸が向くように駆動機構170を制御して、撮像部160で領域の異なる複数の画像情報を取得してもよい。これにより、画像情報内に含まれる特徴点を多くするために取得する画像情報の枚数を抑制できる。その結果、推定姿勢の決定に要する処理を、さらに高速化できる。 In addition, to determine an estimated posture containing many feature points, the drive control unit 152 may control the drive mechanism 170 so that the optical axis of the imaging unit 160 is directed upward, keeping the floor surface out of the angle of view, and may acquire a plurality of pieces of image information of different areas with the imaging unit 160. This reduces the number of images that must be acquired to capture many feature points. As a result, the processing required to determine the estimated posture can be further sped up.
 また、駆動制御部152は、後述する種類特定部153が、例えば天井照明を特定するように駆動機構170を制御して、撮像部160で画像情報を取得してもよい。これにより、画像情報内に含まれる特徴点を多くするために取得する画像情報の枚数を抑制できる。その結果、推定姿勢の決定に要する処理を、さらに高速化できる。 In addition, the drive control unit 152 may control the drive mechanism 170 so that the type specifying unit 153 described below specifies, for example, ceiling illumination, and the image capturing unit 160 may acquire image information. Thus, the number of pieces of image information acquired to increase the number of feature points included in the image information can be suppressed. As a result, the processing required for determining the estimated attitude can be further speeded up.
 一方、自己位置推定の確度が高くなる撮像部160の推定姿勢を、例えば撮像部160から得られた複数の画像情報に基づいて、マッチング可能な特徴点の数が多い推定姿勢として、選択してもよい。これにより、自己位置推定の確度が、高くなる。つまり、駆動制御部152は、マッチング可能な特徴点が多く含まれる画角となるように駆動機構170を制御し、撮像部160を選択した推定姿勢に変更する。 On the other hand, the estimated posture of the imaging unit 160 at which the accuracy of the self-position estimation is high may be selected, based on a plurality of pieces of image information obtained from the imaging unit 160, as a posture with a large number of matchable feature points. This increases the accuracy of the self-position estimation. That is, the drive control unit 152 controls the drive mechanism 170 so that the angle of view includes many matchable feature points, and changes the imaging unit 160 to the selected estimated posture.
 具体的には、駆動制御部152は、例えば全天球、全天球に近い範囲、床面が画角内に入らない範囲、または天井照明が画角内に含まれる範囲において、領域が重複する複数枚の画像情報を撮像部160が取得できるように、駆動機構170を制御する。そして、駆動制御部152は、撮像部160の画角内のマッチング可能な特徴点の数が第二閾値を超える姿勢、例えばマッチング可能な特徴点の数が100を超える場合を、推定姿勢として決定してもよい。さらに、駆動制御部152は、撮像部160の画角内で最もマッチング可能な特徴点の多い姿勢を、推定姿勢として決定してもよい。 Specifically, the drive control unit 152 controls the drive mechanism 170 so that the imaging unit 160 can acquire a plurality of pieces of image information with overlapping areas over, for example, the whole sphere, a range close to the whole sphere, a range in which the floor surface stays out of the angle of view, or a range in which the ceiling illumination is included in the angle of view. Then, the drive control unit 152 may determine, as the estimated posture, a posture in which the number of matchable feature points within the angle of view of the imaging unit 160 exceeds a second threshold, for example more than 100 matchable feature points. Alternatively, the drive control unit 152 may determine the posture with the most matchable feature points within the angle of view of the imaging unit 160 as the estimated posture.
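The matchable-feature criterion can be sketched by intersecting descriptor sets from overlapping images. A real system would match descriptors (for example ORB descriptors) by distance rather than equality, so the hashable tokens and the candidate-tuple interface below are simplifying assumptions.

```python
SECOND_THRESHOLD = 100  # matchable-feature count used as an example above

def matchable_count(desc_a, desc_b):
    """Number of feature descriptors common to two overlapping images,
    with descriptors simplified to hashable tokens."""
    return len(set(desc_a) & set(desc_b))

def pick_estimation_pose(candidates, threshold=SECOND_THRESHOLD):
    """candidates: (pose, descriptors_at_pose, descriptors_of_overlap)
    triples. Return the pose with the most matchable features if it
    clears the threshold, otherwise None."""
    pose, desc_a, desc_b = max(candidates,
                               key=lambda c: matchable_count(c[1], c[2]))
    if matchable_count(desc_a, desc_b) > threshold:
        return pose
    return None
```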
 上述した種類特定部153は、撮像部160から得られる画像情報から、撮像された物の種類を特定する処理部である。なお、種類の特定方法は、特に限定されない。種類の特定方法としては、例えばAI(Artificial Intelligence)技術などを例示できる。具体的には、撮像部160で、まず、自律走行掃除機100が掃除をする環境(疑似環境を含む)内において、複数の画像情報を事前に取得する。このとき、種類特定部153は、記憶部190などに、取得した画像情報内の物体の像と、像に対応する物体の種類などをデータとして用意しておく。つぎに、種類特定部153は、例えばディープラーニングなどの技術を用いて、環境(疑似環境を含む)内にある物など状況を、学習する。そして、取得した画像に学習した物体の像がある場合、種類特定部153は、物体の種類を出力するように、種類特定用のモデルを構築する。このとき、種類特定部153は、上記画像情報内の物体の像と、像に対応する物体の種類に加えて、物体が表示されている画像位置もデータとして用意していてもよい。これにより、取得した画像に学習した物体の像がある場合、種類特定部153は、物体の種類とともに、表示されている画像位置も出力できるモデルを構築することもできる。さらに、表示されている画像位置を元に、物体と本体110との実距離を、ある程度、推定することもできる。つまり、種類特定部153は、上記により構築した各モデルを利用して、物の種類を特定する場合を例示できる。 The type identification unit 153 described above is a processing unit that identifies the type of a captured object from the image information obtained from the imaging unit 160. The method of identifying the type is not particularly limited; AI (Artificial Intelligence) techniques can be cited as an example. Specifically, the imaging unit 160 first acquires a plurality of pieces of image information in advance in an environment (including a pseudo environment) in which the autonomous traveling cleaner 100 performs cleaning. At this time, the type identification unit 153 prepares, in the storage unit 190 or the like, the images of objects in the acquired image information and the types of the objects corresponding to those images as data. Next, the type identification unit 153 learns the situation, such as the objects present in the environment (including a pseudo environment), using a technique such as deep learning. Then, when an acquired image includes the image of a learned object, the type identification unit 153 constructs a type identification model so as to output the type of the object.
At this time, the type specifying unit 153 may prepare, in addition to the image of the object in the image information and the type of the object corresponding to the image, the image position where the object is displayed as data. Accordingly, when the acquired image includes the image of the learned object, the type specifying unit 153 can also construct a model that can output the displayed image position along with the type of the object. Further, the actual distance between the object and the main body 110 can be estimated to some extent based on the displayed image position. That is, the type specifying unit 153 can exemplify a case in which the type of an object is specified using each model constructed as described above.
 なお、物の種類を特定するとは、物の詳細な種類を特定するばかりでなく、自律走行掃除機100の走行に対する障害物になり得る物であるか否かを特定する場合も含まれる。具体的には、例えば画像情報中に現れる窓の像は、障害物と特定しない。一方、床面上の小型のカーペットの像は、障害物として特定する場合が例示される。 Note that identifying the type of an object includes not only identifying its detailed type but also determining whether it can be an obstacle to the traveling of the autonomous traveling vacuum cleaner 100. Specifically, for example, the image of a window appearing in the image information is not identified as an obstacle, while the image of a small carpet on the floor is identified as one.
 つまり、自律走行掃除機100の走行の障害となる障害物を検出する場合、種類特定部153は、検出姿勢の撮像部160から得られた画像情報に基づいて、物の種類を特定する。これにより、種類特定部153は、走行中の障害物を検出する。このとき、種類特定部153は、駆動制御部152から取得した検出姿勢における撮像部160の角度の情報、エンコーダ121などから得られるオドメトリ情報、および特定した障害物の画像情報内における位置などに基づいて、障害物までの距離を算出してもよい。これにより、障害物の種類を特定して、際まで掃除する障害物と、近づかずに回避する障害物とで、自律走行掃除機100の動作を変更できる。 In other words, when detecting an obstacle that hinders the travel of the autonomous traveling vacuum cleaner 100, the type identification unit 153 identifies the type of the object based on the image information obtained from the imaging unit 160 in the detection posture. In this way, the type identification unit 153 detects obstacles while traveling. At this time, the type identification unit 153 may calculate the distance to the obstacle based on information on the angle of the imaging unit 160 in the detection posture acquired from the drive control unit 152, odometry information obtained from the encoder 121 and the like, and the position of the identified obstacle within the image information. By identifying the type of an obstacle, the operation of the autonomous traveling vacuum cleaner 100 can thus be changed between obstacles to be cleaned right up to their edge and obstacles to be avoided without approaching.
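The text names the camera angle, odometry, and the obstacle's position in the image as inputs to the distance calculation, but gives no formula. One plausible flat-floor geometry, offered purely as an assumption, is:

```python
import math

def ground_distance(cam_height, tilt_down, pixel_row, rows, vfov):
    """Estimate the floor distance to a point imaged at pixel_row.

    cam_height [m] is the camera height above the floor, tilt_down the
    downward tilt of the optical axis [rad], vfov the vertical field
    of view [rad]; pixel_row 0 is the top image row. All parameter
    values are hypothetical.
    """
    # angle of this pixel's ray below the horizon
    offset = (pixel_row - (rows - 1) / 2.0) / (rows - 1) * vfov
    angle = tilt_down + offset
    if angle <= 0:
        return float("inf")  # ray never reaches the floor
    return cam_height / math.tan(angle)
```

Under this model, rows nearer the bottom of the image map to shorter floor distances, which is what lets image position stand in for range.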
 さらに、走行制御部154は、自律走行掃除機100の走行動作を制御する処理部である。具体的には、例えば自律走行掃除機100が備える撮像部160を含む各種センサが検出した情報から、自律走行掃除機100の自己位置を推定する。そして、走行制御部154は、推定された自律走行掃除機100の自己位置に基づいて、駆動ユニット120を制御する。これにより、走行制御部154は、自律走行掃除機100の走行や操舵などを制御して、床面上を移動させる。 Furthermore, the traveling control unit 154 is a processing unit that controls the traveling operation of the autonomous traveling vacuum cleaner 100. Specifically, for example, the self-position of the autonomous traveling cleaner 100 is estimated from information detected by various sensors, including the imaging unit 160, included in the autonomous traveling cleaner 100. Then, the traveling control unit 154 controls the drive unit 120 based on the estimated self-position of the autonomous traveling cleaner 100. In this way, the traveling control unit 154 controls the traveling and steering of the autonomous traveling cleaner 100 to move it over the floor.
 また、清掃制御部155は、吸引ユニット140を制御して、本体110の吸込口111から、ごみを吸い込ませる。これにより、床面の掃除が実行される。 The cleaning control unit 155 controls the suction unit 140 to suck dust in through the suction port 111 of the main body 110. As a result, the floor is cleaned.
 以下、自律走行掃除機100の動作について、図5を参照しつつ、説明する。図5は、自律走行掃除機100の動作を示すフローチャートである。 Hereinafter, the operation of the autonomous traveling vacuum cleaner 100 will be described with reference to FIG. FIG. 5 is a flowchart showing the operation of the autonomous traveling vacuum cleaner 100.
 まず、図5に示すように、自律走行掃除機100の駆動制御部152は、駆動機構170を制御して、画角内に掃除対象の床面が含まれるように、撮像部160の姿勢を検出姿勢にする(ステップS101)。 First, as illustrated in FIG. 5, the drive control unit 152 of the autonomous traveling cleaner 100 controls the drive mechanism 170 to change the attitude of the imaging unit 160 so that the floor to be cleaned is included in the angle of view. The detection posture is set (step S101).
 つぎに、種類特定部153は、撮像部160から取得した画像情報中に障害物があるか否かを検出する(ステップS102)。このとき、障害物が存在する場合(ステップS102のYes)、種類特定部153は、障害物までの距離を推定する(ステップS103)。一方、障害物が検出されない場合(ステップS102のNo)、種類特定部153は、推定する距離を検出最大距離(例えば、1メートル程度)に設定し(ステップS104)、ステップS105に移行する。これにより、検出最大距離ごとに刻みながら走行動作することができる。また、遠方の障害物に対する検出位置精度が低くなりがちである。そのため、検出最大距離の設定により、検出位置精度の低下を補正しながら、正確に障害物を捕捉し走行することができる。 Next, the type identification unit 153 detects whether there is an obstacle in the image information acquired from the imaging unit 160 (step S102). If an obstacle is present (Yes in step S102), the type identification unit 153 estimates the distance to the obstacle (step S103). On the other hand, if no obstacle is detected (No in step S102), the type identification unit 153 sets the estimated distance to the maximum detection distance (for example, about 1 meter) (step S104) and proceeds to step S105. This allows the cleaner to advance in increments of the maximum detection distance. Moreover, since the detected position of a distant obstacle tends to be less accurate, setting a maximum detection distance lets the cleaner travel while reliably capturing obstacles and compensating for this loss of position accuracy.
 つぎに、駆動制御部152は、駆動機構170を制御して、床面が画角内に含まれないように、撮像部160の光軸を、図3で説明したように、少なくとも水平面に沿う1軸周り(例えば、第一回転軸周り)で移動(回動)して、上方に向ける(ステップS105)。このとき、撮像部160の光軸を、第二回転軸周りで移動(回動)させてもよい。 Next, the drive control unit 152 controls the drive mechanism 170 to move (rotate) the optical axis of the imaging unit 160 around at least one axis along the horizontal plane (for example, around the first rotation axis), as described with reference to FIG. 3, and point it upward so that the floor surface is not included in the angle of view (step S105). At this time, the optical axis of the imaging unit 160 may also be moved (rotated) around the second rotation axis.
 つぎに、推定部151は、撮像部160から、異なる複数の領域を撮像した画像情報を取得し、撮像部160が推定姿勢になったか否かを判断する(ステップS106)。このとき、推定姿勢でない場合(ステップS106のNo)、ステップS105に戻って、以降のステップを、同様に、実行する。一方、推定姿勢になった場合(ステップS106のYes)、推定部151は、撮像部160の推定姿勢を決定する。なお、推定姿勢の決定方法は、上述した方法など、任意の方法を採用することができる。 Next, the estimating unit 151 acquires image information obtained by imaging a plurality of different areas from the imaging unit 160, and determines whether or not the imaging unit 160 is in the estimated posture (step S106). At this time, if the posture is not the estimated posture (No in step S106), the process returns to step S105, and the subsequent steps are similarly executed. On the other hand, when the estimated posture is reached (Yes in step S106), the estimating unit 151 determines the estimated posture of the imaging unit 160. Note that an arbitrary method such as the method described above can be adopted as a method for determining the estimated posture.
 つぎに、推定姿勢における撮像部160から得られた画像情報内の特徴点に基づいて、推定部151は、自己位置推定を実行する(ステップS107)。 Next, the estimating unit 151 performs the self-position estimation based on the feature points in the image information obtained from the imaging unit 160 in the estimated posture (step S107).
 つぎに、駆動ユニット120により自律走行掃除機100を床面上で移動させながら、推定部151は自己位置推定を継続して実行し、マップの作成および更新などを、随時行う(ステップS108)。ここで、駆動制御部152は、決定された撮像部160の推定姿勢で撮像される領域や、撮像部160の姿勢などを、記憶部190に記憶させる。このとき、自律走行掃除機100が床面上を移動しても、同じ領域が画角内に収まるように撮像部160の推定姿勢を、自律走行掃除機100の走行状態に追随させても構わない。これにより、自律走行掃除機100が移動しても、推定部151は、自己位置推定の確度を高く維持できる。その結果、推定部151は、自己位置推定を高い精度で安定して実行できる。 Next, while the drive unit 120 moves the autonomous traveling cleaner 100 over the floor, the estimation unit 151 continues to perform self-position estimation, creating and updating the map as needed (step S108). Here, the drive control unit 152 causes the storage unit 190 to store the area imaged in the determined estimated posture of the imaging unit 160, the posture of the imaging unit 160, and the like. At this time, the estimated posture of the imaging unit 160 may be made to follow the traveling state of the autonomous traveling cleaner 100 so that the same area remains within the angle of view even as the cleaner moves over the floor. In this way, even when the autonomous traveling cleaner 100 moves, the estimation unit 151 can keep the accuracy of the self-position estimation high. As a result, the estimation unit 151 can stably execute self-position estimation with high accuracy.
 つぎに、推定部151は、掃除の途中で、自己位置を喪失したか否かを判断する(ステップS109)。このとき、自己位置を喪失した場合(ステップS109のYes)、ステップS105に戻って、以降のステップを実行する。具体的には、駆動制御部152は、再度、駆動機構170を制御して自己位置推定の確度が高くなるように撮像部160の姿勢を変更する(ステップS105)。続いて、推定部151は、撮像部160の推定姿勢を、再度、決定する(ステップS106)。さらに、上述した自己位置推定(ステップS107)、マップ作成(ステップS108)の各ステップを実行する。 Next, the estimating unit 151 determines whether or not the self-position has been lost during the cleaning (step S109). At this time, if the self-position has been lost (Yes in step S109), the process returns to step S105, and the subsequent steps are executed. Specifically, the drive control unit 152 controls the drive mechanism 170 again to change the attitude of the imaging unit 160 so as to increase the accuracy of the self-position estimation (step S105). Subsequently, the estimating unit 151 determines again the estimated posture of the imaging unit 160 (Step S106). Further, the respective steps of the above-described self-position estimation (step S107) and map creation (step S108) are executed.
 一方、自己位置を喪失していない場合(ステップS109のNo)、予め定められた領域の掃除が終了したか否かを判断する(ステップS110)。このとき、掃除が終了していない場合(ステップS110のNo)、ステップS107に戻って、以降のステップを実行する。 On the other hand, if the self-position has not been lost (No in step S109), it is determined whether or not the cleaning of the predetermined area has been completed (step S110). If the cleaning has not been completed (No in step S110), the process returns to step S107, and the subsequent steps are executed.
 一方、掃除が終了している場合(ステップS110のYes)、一連のフローを終了する。 On the other hand, if the cleaning has been completed (Yes in step S110), a series of flows ends.
 以上のように、自律走行掃除機100は動作する。 As described above, the autonomous traveling vacuum cleaner 100 operates.
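The flow of FIG. 5 described above can be condensed into a control loop. The `robot` interface below is hypothetical glue bundling the drive control unit, type identification unit, estimation unit, and drive unit described in the text; none of its method names come from the disclosure.

```python
def cleaning_run(robot):
    """Mirror steps S101-S110 of the flowchart."""
    robot.set_detection_pose()                         # S101
    if robot.detect_obstacle():                        # S102
        distance = robot.estimate_obstacle_distance()  # S103
    else:
        distance = robot.MAX_DETECTION_DISTANCE        # S104 (about 1 m)

    while not robot.is_estimation_pose():              # S106
        robot.tilt_camera_up()                         # S105
    robot.estimate_self_position()                     # S107

    while not robot.cleaning_done():                   # S110
        robot.move_and_update_map(distance)            # S108
        if robot.position_lost():                      # S109
            while not robot.is_estimation_pose():      # back to S105/S106
                robot.tilt_camera_up()
        robot.estimate_self_position()                 # S107 again
    return distance
```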
 以上で説明したように、本実施の形態に係る自律走行掃除機100は、画角が比較的狭く(例えば画角が90度以下、30度以上)、歪の少ない光学系を有する、少なくとも一台の撮像部160を備える。これにより、前方の障害物を検出して、高い精度で障害物までの距離を算出できる。さらに、適切な推定姿勢の撮像部160から得られた画像情報に含まれる特徴点により、自己位置推定を高い確度で実行できる。その結果、自律走行掃除機100は、障害物に衝突することなくスムーズに、隈なく移動し、障害物と床面の隅部に至るまで、きれいに掃除できる。 As described above, the autonomous traveling cleaner 100 according to the present embodiment includes at least one imaging unit 160 having an optical system with a relatively narrow angle of view (for example, 90 degrees or less and 30 degrees or more) and little distortion. This makes it possible to detect an obstacle ahead and calculate the distance to it with high accuracy. Furthermore, the feature points included in the image information obtained from the imaging unit 160 in an appropriate estimated posture allow self-position estimation to be executed with high accuracy. As a result, the autonomous traveling vacuum cleaner 100 can move smoothly and thoroughly without colliding with obstacles, and can clean right up to obstacles and the corners of the floor.
 なお、本発明は、上記実施の形態に限定されるものではない。例えば、本明細書において記載した構成要素を任意に組み合わせて、また、構成要素のいくつかを除外して実現される別の実施の形態を本発明の実施の形態としてもよい。また、上記実施の形態に対して本発明の主旨、すなわち、請求の範囲に記載される文言が示す意味を逸脱しない範囲で当業者が思いつく各種変形を施して得られる変形例も本発明に含まれる。 The present invention is not limited to the above embodiment. For example, other embodiments realized by arbitrarily combining the components described in this specification, or by excluding some of them, may also be embodiments of the present invention. The present invention also includes modifications obtained by applying to the above embodiment various changes conceivable by those skilled in the art without departing from the gist of the present invention, that is, the meaning indicated by the wording of the claims.
 具体的には、上記実施の形態では、撮像部160を検出姿勢にして、障害物の検出を行った後、自己位置推定を実行しながら掃除を行うフローを例に説明したが、これに限られない。例えば、まず、撮像部160を推定姿勢にして、自己位置推定を行う。その後、撮像部160を検出姿勢にして、障害物の認識と障害物までの距離を推定しながら掃除を行う。そして、一定距離の走行後など所定の間隔で、再度、撮像部160を推定姿勢にして、自己位置推定を実行しながら掃除を行ってもよい。 Specifically, in the above embodiment, the flow in which the imaging unit 160 is first set to the detection posture to detect obstacles and cleaning is then performed while executing self-position estimation has been described as an example, but the present invention is not limited to this. For example, first, the imaging unit 160 may be set to the estimated posture and self-position estimation performed. Thereafter, the imaging unit 160 is set to the detection posture, and cleaning is performed while recognizing obstacles and estimating the distance to them. Then, at predetermined intervals, such as after traveling a certain distance, the imaging unit 160 may again be set to the estimated posture, and cleaning performed while executing self-position estimation.
 また、上記実施の形態において、自己位置推定を実行する場合、最初に、撮像部160の光軸を、鉛直上方に向けても構わない。この場合、撮像部160の画角内に、天井照明が含まれる確率が高くなる。そこで、天井照明が中央に位置する姿勢を、撮像部160の推定姿勢としてもよい。これにより、天井照明の特徴点を、常に観測することができる。そして、天井照明の特徴点に基づいて、自己位置推定を行うことにより、推定精度を高めることができる。 In the above embodiment, when the self-position estimation is performed, first, the optical axis of the imaging unit 160 may be directed vertically upward. In this case, the probability that ceiling illumination is included in the angle of view of the imaging unit 160 increases. Therefore, the posture in which the ceiling illumination is located at the center may be used as the estimated posture of the imaging unit 160. Thereby, the feature point of the ceiling lighting can be always observed. Then, by performing self-position estimation based on the feature points of the ceiling lighting, the estimation accuracy can be improved.
 また、上記実施の形態では、一枚の画像情報内に含まれる特徴点の数や、マッチング可能な特徴点の数に基づいて、撮像部160の推定姿勢を決定する例で説明したが、これに限られない。例えば、特徴量の信頼度の総計が最大、または第三閾値以上になる姿勢、例えば特徴量の信頼度を0~1の間で取得する場合に信頼度の総計を100以上にするなど、撮像部160の推定姿勢として決定してもよい。 In the above embodiment, an example has been described in which the estimated posture of the imaging unit 160 is determined based on the number of feature points included in one piece of image information or the number of matchable feature points, but the present invention is not limited to this. For example, a posture in which the total reliability of the feature values is maximized, or is equal to or greater than a third threshold (for example, a total of 100 or more when each reliability takes a value between 0 and 1), may be determined as the estimated posture of the imaging unit 160.
 また、上記実施の形態において、障害物として認識されたカーペット上に乗り上げて掃除を行う場合など、障害物と自律走行掃除機100との距離をより正確に把握したい場合、以下のように、自律走行掃除機100を制御してもよい。 In the above embodiment, when the distance between an obstacle and the autonomous traveling cleaner 100 needs to be grasped more accurately, such as when the cleaner climbs onto a carpet recognized as an obstacle to clean it, the autonomous traveling cleaner 100 may be controlled as follows.
 具体的には、まず、自律走行掃除機100の本体110の先端縁と、障害物とが、撮像部160の画角内に収まる位置まで、自律走行掃除機100を、予め推定した障害物までの距離の情報に従い、障害物に近づける。そして、自律走行掃除機100の先端縁と障害物とが撮像部160の画角内に収まるように、駆動制御部152は、駆動機構170を制御する。これにより、障害物と自律走行掃除機100との距離をより正確に把握して、床面上を、隅々まで隈なく掃除できる。 Specifically, first, the autonomous traveling cleaner 100 is brought closer to the obstacle, in accordance with the previously estimated distance information, up to a position where the leading edge of the main body 110 of the autonomous traveling cleaner 100 and the obstacle fall within the angle of view of the imaging unit 160. Then, the drive control unit 152 controls the drive mechanism 170 so that the leading edge of the autonomous traveling cleaner 100 and the obstacle remain within the angle of view of the imaging unit 160. This makes it possible to grasp the distance between the obstacle and the autonomous traveling cleaner 100 more accurately and to clean the floor right into every corner.
 本発明は、家屋内などの床面を自動的に掃除する自律走行掃除機に適用可能である。 The present invention is applicable to an autonomous traveling vacuum cleaner that automatically cleans a floor surface in a house or the like.
 100  自律走行掃除機
 110  本体
 111  吸込口
 120  駆動ユニット
 121  エンコーダ
 129  キャスター
 130  掃除ユニット
 140  吸引ユニット
 141  ごみ箱ユニット
 150  制御ユニット
 151  推定部
 152  駆動制御部
 153  種類特定部
 154  走行制御部
 155  清掃制御部
 160  撮像部
 170  駆動機構
 180  バッテリ
 190  記憶部
 271  発信部
 272  受信部
 273  障害物センサ
 274  測距センサ
 276  床面センサ
 277  塵埃量センサ
Reference Signs List: 100 autonomous traveling cleaner, 110 main body, 111 suction port, 120 drive unit, 121 encoder, 129 caster, 130 cleaning unit, 140 suction unit, 141 trash can unit, 150 control unit, 151 estimation unit, 152 drive control unit, 153 type identification unit, 154 traveling control unit, 155 cleaning control unit, 160 imaging unit, 170 drive mechanism, 180 battery, 190 storage unit, 271 transmitting unit, 272 receiving unit, 273 obstacle sensor, 274 distance measuring sensor, 276 floor surface sensor, 277 dust amount sensor

Claims (6)

  1. 自律的に走行し、掃除を行う自律走行掃除機であって、
    周囲に存在する物体を撮像する撮像部と、
    少なくとも水平面に沿う1軸周りで、前記撮像部の姿勢を変更可能な駆動機構と、
    前記撮像部から得られる画像情報に基づいて、自己位置推定を行う推定部と、
    前記駆動機構を制御して、前記撮像部の姿勢を変更する駆動制御部と、を備える、
    自律走行掃除機。
    An autonomous traveling vacuum cleaner that runs autonomously and performs cleaning,
    An imaging unit for imaging an object present in the vicinity,
    A drive mechanism that can change the attitude of the imaging unit at least around one axis along a horizontal plane;
    An estimating unit that performs self-position estimation based on image information obtained from the imaging unit,
    A drive control unit that controls the drive mechanism and changes the attitude of the imaging unit.
    Autonomous traveling vacuum cleaner.
  2. 前記駆動制御部は、前記駆動機構を制御して、前記画像情報内に含まれる特徴点の数が多い画角となるように、前記撮像部の姿勢を変更する、
    請求項1に記載の自律走行掃除機。
    The drive control unit controls the drive mechanism, and changes the orientation of the imaging unit so that the angle of view has a large number of feature points included in the image information.
    The autonomous traveling vacuum cleaner according to claim 1.
  3. 前記駆動制御部は、前記駆動機構を制御して、複数の前記画像情報に基づいて、マッチング可能な特徴点の数が多い画角となるように、前記撮像部の姿勢を変更する、
    請求項1に記載の自律走行掃除機。
    The drive control unit controls the drive mechanism to change the posture of the imaging unit so that, based on the plurality of pieces of image information, the angle of view includes a large number of matchable feature points.
    The autonomous traveling vacuum cleaner according to claim 1.
  4. 前記駆動制御部は、前記画像情報内に含まれる特徴点を多くするために、床面が画角内に入らない上方方向に前記撮像部の光軸が向くように、前記駆動機構を制御する、
    請求項1から請求項3のいずれか1項に記載の自律走行掃除機。
    The drive control unit controls the drive mechanism so that the optical axis of the imaging unit is directed upward, keeping the floor surface out of the angle of view, in order to increase the number of feature points included in the image information,
    The autonomous traveling vacuum cleaner according to any one of claims 1 to 3.
  5. The autonomous traveling vacuum cleaner according to any one of claims 1 to 4, further comprising a type identification unit that identifies a type of an object captured in the image information obtained from the imaging unit,
    wherein the drive control unit controls the drive mechanism to change the attitude of the imaging unit so that the type identification unit identifies ceiling lighting.
  6. The autonomous traveling vacuum cleaner according to any one of claims 1 to 5, wherein, when the estimation unit has lost its self-position, the drive control unit controls the drive mechanism to change the attitude of the imaging unit.
PCT/JP2019/029120 2018-09-21 2019-07-25 Autonomous traveling cleaner WO2020059292A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-177253 2018-09-21
JP2018177253A JP2020047188A (en) 2018-09-21 2018-09-21 Autonomous traveling cleaner

Publications (1)

Publication Number Publication Date
WO2020059292A1 true WO2020059292A1 (en) 2020-03-26

Family

ID=69888703

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/029120 WO2020059292A1 (en) 2018-09-21 2019-07-25 Autonomous traveling cleaner

Country Status (2)

Country Link
JP (1) JP2020047188A (en)
WO (1) WO2020059292A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7564737B2 (en) 2021-03-09 2024-10-09 株式会社東芝 Photographing device and photographing method
CN114617476A (en) * 2021-06-02 2022-06-14 北京石头创新科技有限公司 Self-moving equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150163993A1 (en) * 2013-12-12 2015-06-18 Hexagon Technology Center Gmbh Autonomous gardening vehicle with camera
JP2016086906A (en) * 2014-10-30 2016-05-23 三菱電機株式会社 Self-propelled vacuum cleaner
WO2016163492A1 (en) * 2015-04-10 2016-10-13 日本電気株式会社 Position determining device, position determining method, and program
JP2017093626A (en) * 2015-11-19 2017-06-01 シャープ株式会社 Self-travelling electronic apparatus
JP2017111606A (en) * 2015-12-16 2017-06-22 カシオ計算機株式会社 Autonomous mobile apparatus, autonomous mobile method, and program
KR20180085154A (en) * 2017-01-18 2018-07-26 엘지전자 주식회사 Robot cleaner


Also Published As

Publication number Publication date
JP2020047188A (en) 2020-03-26

Similar Documents

Publication Publication Date Title
CN111035327B (en) Cleaning robot, carpet detection method, and computer-readable storage medium
EP3459691B1 (en) Robot vacuum cleaner
EP3167341B1 (en) Method for detecting a measurement error in a robotic cleaning device
TWI603699B (en) Mobile robot and control method thereof
JP7487283B2 (en) Autonomous Mobile Robot Navigation
EP2888603B1 (en) Robot positioning system
KR101887055B1 (en) Robot cleaner and control method for thereof
US20160313741A1 (en) Prioritizing cleaning areas
JP7141220B2 (en) self-propelled vacuum cleaner
JP2013012200A (en) Robot cleaner and control method thereof
US11297992B2 (en) Robot cleaner and method for controlling the same
KR102565250B1 (en) Robot cleaner
WO2020235295A1 (en) Cleaning map display device, and cleaning map display method
WO2016005011A1 (en) Method in a robotic cleaning device for facilitating detection of objects from captured images
US11625043B2 (en) Robot cleaner and method for controlling the same
WO2020059292A1 (en) Autonomous traveling cleaner
KR20180037516A (en) Moving robot and control method thereof
CN112423639B (en) Autonomous walking type dust collector
JP7329125B2 (en) Mobile robot and its control method
JP2020052601A (en) Autonomous travel cleaner and control method
KR102467990B1 (en) Robot cleaner
KR102492947B1 (en) Robot cleaner
KR102203438B1 (en) a Moving robot and Controlling method for the moving robot
WO2020017239A1 (en) Self-propelled type vacuum cleaner and control method for self-propelled type vacuum cleaner
JP2019144849A (en) Autonomous traveling cleaner

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19863817

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19863817

Country of ref document: EP

Kind code of ref document: A1