CN113848902A - Target object determination method, mobile robot, storage medium, and electronic device - Google Patents
- Publication number
- CN113848902A (application CN202111116319.7A)
- Authority
- CN
- China
- Prior art keywords
- laser
- target object
- determining
- information
- mobile robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Optics & Photonics (AREA)
- Electromagnetism (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Optical Radar Systems And Details Thereof (AREA)
Abstract
The invention provides a target object determination method, a mobile robot, a storage medium, and an electronic device. The method includes: controlling a laser panel of an area array depth sensor to emit multiple groups of first laser light toward a target object located in the traveling direction of a mobile robot, where the laser panel is mounted on the front side of the mobile robot, the front side being the part of the robot that leads during travel, and any two groups of the first laser light differ in light intensity; and receiving, through the laser panel, multiple groups of second laser light formed by the multiple groups of first laser light reflecting off the target object. This technical solution solves the low detection accuracy of conventional methods when detecting objects in front of a sweeping robot.
Description
[ technical field ]
The present invention relates to the field of communications, and in particular, to a method for determining a target object, a mobile robot, a storage medium, and an electronic device.
[ background of the invention ]
With the development of society, more and more households have begun to use sweeping robots. While operating, a sweeping robot needs to recognize the area ahead and judge whether obstacles are present there, so that it can avoid them while moving.
Among existing sweeping robots, those with an active obstacle-avoidance function identify obstacles using a dot-matrix sensor or a linear-array depth sensor. However, a dot-matrix or linear-array sensor must scan and illuminate an object many times from different angles and combine the scan results to determine the obstacle's information, which is computationally expensive and inaccurate. In addition, the sensing range of such sensors is small, which limits the accuracy of active obstacle avoidance, degrades the user experience, leaves small obstacles uncleaned, and increases the probability that the sweeping robot gets stuck or its rolling brush becomes entangled.
No effective solution has yet been proposed for this problem of the prior art: the low detection accuracy of conventional methods when detecting objects in front of a sweeping robot.
Accordingly, the related art needs to be improved to overcome these disadvantages.
[ summary of the invention ]
The invention aims to provide a target object determination method, a mobile robot, a storage medium, and an electronic device that at least solve the problem of low detection accuracy when conventional methods detect an object in front of a sweeping robot.
The purpose of the invention is achieved by the following technical solution:
According to one aspect of the embodiments of the present invention, a target object determination method is provided, including: controlling a laser panel of an area array depth sensor to emit multiple groups of first laser light toward a target object located in the traveling direction of a mobile robot, where the laser panel is mounted on the front side of the mobile robot, the front side being the part of the robot that leads during travel, and any two groups of the first laser light differ in light intensity; receiving, through the laser panel, multiple groups of second laser light formed by the multiple groups of first laser light reflecting off the target object; and determining the obstacle type corresponding to the target object from the multiple groups of first laser light and the multiple groups of second laser light.
Further, determining the obstacle type corresponding to the target object from the multiple groups of first and second laser light includes: determining, from the multiple groups, a first laser light and a second laser light with the same light intensity, where the first laser light carries encoded information and the second laser light carries decoded information; and determining the obstacle type corresponding to the target object from the encoded information of the first laser light and the decoded information of the second laser light.
Further, determining the type of the obstacle corresponding to the target object according to the encoded information of the first laser and the decoded information of the second laser includes: determining the flight time of the first laser according to the coding information in the first laser and the decoding information in the second laser; determining three-dimensional information of the target object according to the flight time, and determining the type of the obstacle corresponding to the target object according to the three-dimensional information, wherein the three-dimensional information comprises at least one of the following: the height information of the target object, the length information of the target object and the width information of the target object.
Further, determining three-dimensional information of the target object according to the flight time comprises: determining the three-dimensional coordinates of the target object according to the flight time; separating the coordinate information of the ground where the mobile robot is located from the three-dimensional coordinates; and determining the three-dimensional information of the target object by taking the coordinate information of the ground as a reference.
Further, determining the type of the obstacle corresponding to the target object according to the three-dimensional information includes: acquiring a corresponding relation between preset three-dimensional information and the type of an obstacle; and determining the type of the obstacle corresponding to the three-dimensional information of the target object from the corresponding relation.
Further, after determining the obstacle type corresponding to the target object from the multiple groups of first and second laser light, the method further includes: determining an avoidance strategy corresponding to the obstacle type; and controlling the traveling route of the mobile robot according to the avoidance strategy so that the mobile robot avoids the target object.
According to another aspect of the embodiments of the present invention, a mobile robot is provided, including: an area array depth sensor arranged on the front side of the mobile robot, the front side being the part of the robot that leads during travel, and configured to use a laser panel to emit multiple groups of first laser light toward a target object located in the traveling direction of the mobile robot, where any two groups of the first laser light differ in light intensity; and a processor, either arranged inside the mobile robot and connected to the area array depth sensor or located inside the sensor itself, configured to receive multiple groups of second laser light from the laser panel and to determine the obstacle type corresponding to the target object from the multiple groups of first and second laser light, where the multiple groups of second laser light are received by the laser panel after the multiple groups of first laser light reflect off the target object.
Further, the processor is also configured to determine, from the multiple groups of first and second laser light, a first laser light and a second laser light with the same light intensity, where the first laser light carries encoded information and the second laser light carries decoded information.
According to a further aspect of embodiments of the present invention, there is provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is arranged to, when executed, perform the method of determining a target object as set forth in any one of the above.
According to a further aspect of the embodiments of the present invention, there is provided an electronic apparatus including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the method of determining a target object described in any one of the above.
With the present application, while the mobile robot moves, the laser panel of the area array depth sensor is controlled to emit multiple groups of first laser light toward a target object in the traveling direction; multiple groups of second laser light reflected off the target object are received through the laser panel; and the obstacle type corresponding to the target object is determined from the first and second laser light. This technical solution solves the low detection accuracy of conventional methods when detecting objects in front of a sweeping robot: detecting objects with an area array depth sensor improves detection accuracy.
[ description of the drawings ]
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a computer terminal of a target object determination method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of determining a target object according to an embodiment of the present invention;
fig. 3 is a schematic diagram (one) of detecting an obstacle according to a determination method of a target object of an embodiment of the present invention;
fig. 4 is a schematic diagram (two) of obstacle detection in the determination method of the target object according to the embodiment of the present invention;
fig. 5 is a schematic diagram (three) of the obstacle detection of the determination method of the target object according to the embodiment of the present invention;
fig. 6 is a block diagram of a mobile robot according to an embodiment of the present invention;
fig. 7 is a block diagram of a target object determination apparatus according to an embodiment of the present invention.
[ detailed description ]
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method provided by the embodiments of the invention can be executed on a computer terminal or a similar computing device. Taking a computer terminal as an example, fig. 1 is a block diagram of the hardware structure of a computer terminal running the target object determination method according to an embodiment of the present invention. As shown in fig. 1, the computer terminal may include one or more processors 102 (only one is shown in fig. 1), which may include but are not limited to a processing device such as a microprocessor (MPU) or a programmable logic device (PLD), and a memory 104 for storing data; optionally, the terminal may further include a transmission device 106 for communication and an input/output device 108. Those skilled in the art will understand that the structure shown in fig. 1 is merely illustrative and does not limit the structure of the computer terminal; for example, the terminal may include more or fewer components than shown in fig. 1, or an equivalent or different configuration.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the method for determining a target object in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to a computer terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In the present embodiment, a method for determining a target object is provided, and fig. 2 is a flowchart of a method for determining a target object according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, a laser panel of an area array depth sensor is controlled to emit multiple groups of first lasers to a target object located in the traveling direction of a mobile robot, wherein the laser panel is arranged on the front side of the mobile robot, the front side is used for indicating the most front part of the mobile robot in the traveling process, and any two groups of first lasers in the multiple groups of first lasers have different light intensities;
it should be noted that the area array depth sensor in the embodiment of the present application includes: time of Flight (TOF) sensors.
It should be noted that a dot-matrix laser sensor can measure only one point at a time, and a linear-array laser sensor can measure only the points along one line at a time; to determine the information of a single obstacle, a dot-matrix or linear-array sensor must scan and illuminate the object from many different angles and combine the scan results, which is computationally expensive and inaccurate. By contrast, the area array depth laser sensor of this application measures all points on a surface at once, requiring little computation while remaining accurate.
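To make the single-exposure advantage concrete, here is a minimal sketch (the patent contains no code; all names are invented for illustration) that converts an entire frame of per-pixel round-trip flight times into distances in one pass, where a dot-matrix sensor would need one such computation per scan position:

```python
# Sketch: converting a full frame of per-pixel round-trip times of flight
# (seconds) into one-way distances (meters) in a single pass.
C = 299_792_458.0  # speed of light, m/s

def frame_to_depth(tof_frame):
    """tof_frame: 2D list of round-trip flight times, one per pixel."""
    # distance = c * t / 2, since the light travels to the object and back
    return [[C * t / 2.0 for t in row] for row in tof_frame]

depth = frame_to_depth([[2e-9, 4e-9],
                        [4e-9, 6e-9]])  # a toy 2x2 "area array" frame
```

A 2 ns round trip corresponds to roughly 0.3 m of one-way distance, which matches the indoor ranges a sweeping robot deals with.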
Step S204, receiving multiple groups of second laser reflected by the multiple groups of first laser from the target object through the laser panel;
it can be understood that the laser panel of the area array depth sensor can realize the laser emission function and the laser receiving function, and mainly comprises: a laser transmitter, a laser receiver; in the moving process of the mobile robot, the laser transmitter in the area array depth sensor is controlled to emit multiple groups of laser to the front area of the mobile robot, if a target object is arranged in the front area, the multiple groups of laser can be reflected by the target object after contacting the target object to form multiple groups of reflected light, and then the multiple groups of reflected light are received by the laser receiver of the area array depth sensor. In order to better distinguish laser emitted by the laser emitter from reflected light received by the laser receiver, multiple groups of laser emitted by the laser emitter are defined as multiple groups of first laser, and multiple groups of reflected light received by the laser receiver are defined as multiple groups of second laser.
Step S206, determining the type of the obstacle corresponding to the target object according to the multiple groups of first laser and the multiple groups of second laser.
Through the above steps, while the mobile robot moves, the laser panel of the area array depth sensor is controlled to emit multiple groups of first laser light toward the target object in the traveling direction; multiple groups of second laser light reflected off the target object are received through the laser panel; and the obstacle type corresponding to the target object is determined from the first and second laser light. This technical solution solves the low detection accuracy of conventional methods when detecting objects in front of a sweeping robot: detecting objects with an area array depth sensor improves detection accuracy.
In the embodiments of the invention, the laser panel of the area array depth sensor is preferably installed at the foremost position of the mobile robot during travel, but it can also be installed elsewhere on the robot, for example on its upper surface, as long as it can detect objects in front of the robot.
It should be noted that, determining the type of the obstacle corresponding to the target object according to the multiple groups of first laser light and the multiple groups of second laser light is implemented by: determining a first laser and a second laser with the same light intensity from the multiple groups of first lasers and the multiple groups of second lasers; the first laser carries coded information, the second laser carries decoding information, and the type of the obstacle corresponding to the target object is determined according to the coded information of the first laser and the decoding information of the second laser.
In the embodiments of the present application, the area array depth sensor emits multiple groups of first laser light forward in one plane, for example multiple groups within 1 s, each group with a different brightness (light intensity). Reflection does not change the light intensity of the laser, so a first laser light and a second laser light with the same intensity can be matched across the multiple groups. When the area array depth sensor emits a first laser light, the light carries encoded information that is preset for the sensor and contains parameter information; after the first laser light is reflected by the target object, the parameter information in the encoded information changes, and for clarity the changed encoded information is defined as decoded information. The obstacle type of the target object can then be determined from the encoded information of the first laser light and the decoded information of the second laser light. It should be noted that the area array TOF depth sensor includes sensors using an infrared receiver or an infrared receiver plus an RGB receiver, and that the time of flight may be measured either directly or indirectly.
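The intensity-based pairing of emitted and reflected groups described above can be sketched as follows; the group representation and field names are assumptions for illustration, not from the patent:

```python
# Sketch: pairing emitted ("first") and received ("second") laser groups by
# light intensity, which reflection leaves unchanged. Names are illustrative.
def match_by_intensity(first_groups, second_groups):
    """Each group is a dict with an 'intensity' key plus payload fields."""
    received = {g["intensity"]: g for g in second_groups}
    pairs = []
    for emitted in first_groups:
        echo = received.get(emitted["intensity"])
        if echo is not None:           # ignore emissions with no echo
            pairs.append((emitted, echo))
    return pairs

first = [{"intensity": 10, "t_emit": 0.0},
         {"intensity": 20, "t_emit": 0.1}]
second = [{"intensity": 20, "t_recv": 0.1 + 4e-9},   # echoes arrive in any order
          {"intensity": 10, "t_recv": 5e-9}]
pairs = match_by_intensity(first, second)
```

Because the intensity survives reflection, it acts as the key that lets each echo be attributed to the group that produced it even when echoes arrive out of order.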
In order to better understand the above-mentioned determination of the type of the obstacle corresponding to the target object according to the encoded information of the first laser and the decoded information of the second laser, in an alternative embodiment, the determination may be implemented by: determining the flight time of the first laser according to the coding information in the first laser and the decoding information in the second laser; determining three-dimensional information of the target object according to the flight time, and determining the type of an obstacle corresponding to the target object according to the three-dimensional information, wherein the three-dimensional information comprises at least one of the following: the height information of the target object, the length information of the target object and the width information of the target object.
That is, the area array depth sensor can determine the time of flight of the first laser light from the encoded information in the first laser light and the decoded information in the second laser light. Specifically, the sensor determines the time of flight from the parameter information in the decoded information and the parameter information in the encoded information, determines the three-dimensional information of the target object from this time of flight, and then determines the obstacle type corresponding to the target object from the three-dimensional information. With this technical solution, the area array depth sensor can quickly determine the time of flight of the first laser light and, from it, the three-dimensional information of the target object.
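As a hedged sketch of the step from time of flight to three-dimensional coordinates: assuming each receiving pixel has a calibrated unit ray direction (the patent mentions calibration parameters of the receiver but gives no formulas), a 3D point follows from the one-way distance along that ray. Function and parameter names are invented:

```python
# Sketch: recovering a 3D point from a measured round-trip time of flight and
# the calibrated unit ray direction of the receiving pixel. Names are assumed.
C = 299_792_458.0  # speed of light, m/s

def point_from_tof(tof, ray):
    """tof: round-trip time (s); ray: unit direction (x, y, z) of the pixel."""
    dist = C * tof / 2.0                    # one-way distance to the surface
    return tuple(dist * r for r in ray)

p = point_from_tof(4e-9, (0.0, 0.0, 1.0))   # a point straight ahead
```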
Further, determining the three-dimensional information of the target object from the time of flight optionally involves: determining the three-dimensional coordinates of the target object from the time of flight; separating out the coordinate information of the ground on which the mobile robot stands; and determining the three-dimensional information of the target object relative to that ground.
That is, the space in which the mobile robot operates can be treated as a three-dimensional coordinate system. The area array depth sensor determines the three-dimensional coordinates of the target object in this coordinate system from the time of flight of the first laser light, for example (X, Y, Z). After the three-dimensional coordinates are determined, the sensor separates out the coordinate information of the ground on which the robot stands and determines the three-dimensional information of the target object relative to the ground. This technical solution yields a more accurate three-dimensional description of the target object.
In an optional embodiment, determining the type of the obstacle corresponding to the target object according to the three-dimensional information may be implemented by: acquiring a corresponding relation between preset three-dimensional information and the type of an obstacle; and determining the type of the obstacle corresponding to the three-dimensional information of the target object from the corresponding relation.
It should be noted that each piece of three-dimensional information determines one obstacle type. Specifically, a table mapping three-dimensional information to obstacle types can be maintained, listing the obstacle type corresponding to each kind of three-dimensional information. After the area array depth sensor obtains the three-dimensional information of the target object, it retrieves this table and looks up the obstacle type corresponding to that information. Depending on the three-dimensional information, the determined obstacle types may include table and chair legs, walls, and steps. With this technical solution, the sensor can quickly determine the obstacle type of the target object from its three-dimensional information.
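The correspondence table between three-dimensional information and obstacle type could look like the sketch below; the size thresholds and rule order are invented for illustration and are not from the patent:

```python
# Sketch: classifying an obstacle from its (width, length, height) in meters
# using a preset rule table. All thresholds are invented examples.
RULES = [
    ("table/chair leg", lambda w, l, h: w < 0.10 and l < 0.10 and h > 0.20),
    ("step",            lambda w, l, h: h <= 0.20 and w >= 0.50),
    ("wall",            lambda w, l, h: h > 0.20 and w >= 0.50),
]

def classify(width, length, height):
    # first matching rule wins; unmatched shapes are reported as unknown
    for name, rule in RULES:
        if rule(width, length, height):
            return name
    return "unknown"

kind = classify(0.05, 0.05, 0.70)   # a thin, tall object
```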
Further, after determining the obstacle type corresponding to the target object from the multiple groups of first and second laser light, the method further includes: determining an avoidance strategy corresponding to the obstacle type; and controlling the traveling route of the mobile robot according to the avoidance strategy so that the mobile robot avoids the target object.
It can be understood that, after the obstacle type corresponding to the target object is determined, a corresponding avoidance strategy can be selected according to the type: for example, backing up when the obstacle is a wall, and going around when it is a table or chair leg. The traveling route of the mobile robot is then controlled according to this strategy so that the robot avoids the target object. In this way the mobile robot can avoid the target object quickly, reducing the probability that it gets stuck or its rolling brush becomes entangled.
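A sketch of the avoidance-strategy lookup, echoing the wall and chair-leg examples in the text; the strategy names and the cautious default for unknown types are assumptions:

```python
# Sketch: selecting an avoidance action per obstacle type. The mapping and
# the default behavior are illustrative assumptions, not the patent's spec.
STRATEGIES = {
    "wall": "retreat",            # back up from large flat obstacles
    "table/chair leg": "bypass",  # go around thin vertical obstacles
    "step": "retreat",
}

def avoidance_action(obstacle_type):
    # default to a cautious retreat for unrecognized obstacle types
    return STRATEGIES.get(obstacle_type, "retreat")

action = avoidance_action("wall")
```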
For better understanding, a concrete example: the area array depth sensor emits multiple groups of laser light forward in one plane, for example multiple groups within 1 s, each with a different brightness (light intensity), and each brightness corresponds to a time point. The groups emitted within that second therefore carry encoded information, which can represent the emission time of the laser. The laser light reflected from an object carries the corresponding decoded information, so for light of the same brightness (light intensity) the round-trip time can be computed. The distance from the light source to the object then follows from this time of flight, and combining the distances from all points of the object to the light source yields the three-dimensional information of the object.
It is to be understood that the embodiments described above are only some, not all, of the embodiments of the present invention. For a better understanding of the target object determination method, the following describes the above process with reference to a specific embodiment, though the technical solution of the embodiments of the invention is not limited to it. Specifically:
in an alternative embodiment, a method for identifying obstacles with an area array TOF depth sensor is provided. Specifically, the laser transmitter of the sensor is installed facing forward, the laser receiver is installed adjacent to the laser transmitter, and together they form the area array TOF depth sensor. The active light emitted by the laser transmitter is reflected by an object and received by the laser receiver; the sensor calculates the flight time of the emitted laser and obtains the three-dimensional coordinates of objects in its field of view from the flight time and the calibration parameters of the laser receiver. First, the ground information belonging to the plane on which the sweeping robot (corresponding to the mobile robot in the above embodiment) stands is separated from the three-dimensional point information (corresponding to the three-dimensional information in the above embodiment); then, taking the ground as a reference and using the width, length, and height of the three-dimensional points, objects within the sensing range are classified and it is determined whether each object is an obstacle.
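The ground-separation step can be sketched as follows, assuming the robot's own plane is z = 0 and a small height tolerance; the tolerance value is an assumption chosen for illustration, not a value from the patent.

```python
# Points close to the robot's own plane (z = 0) are treated as ground; the
# remaining points describe candidate obstacles to be classified by size.

GROUND_TOLERANCE_M = 0.01  # points within 1 cm of the robot's plane are ground

def split_ground(points):
    """Split (x, y, z) points into (ground, obstacle) lists."""
    ground = [p for p in points if abs(p[2]) <= GROUND_TOLERANCE_M]
    obstacles = [p for p in points if abs(p[2]) > GROUND_TOLERANCE_M]
    return ground, obstacles

points = [(0.5, 0.0, 0.0), (0.5, 0.1, 0.005), (0.6, 0.0, 0.30)]
ground, obstacles = split_ground(points)  # the 0.30 m point is an obstacle
```

A production implementation would instead fit the ground plane (for example by RANSAC) rather than assume z = 0, but the reference-to-ground idea is the same.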
Fig. 3 is a schematic diagram (one) of obstacle detection of a target object determination method according to an embodiment of the present invention, Fig. 4 is a schematic diagram (two), and Fig. 5 is a schematic diagram (three). As shown in Figs. 3 to 5, the cylinder is a device using an area array TOF sensor (corresponding to the mobile robot in the above embodiment), the region between the cylinder and the rectangular parallelepiped is the measurement sensing range of the TOF sensor, and the rectangular parallelepiped is an obstacle in front of the device:
as shown in Fig. 3, according to the measured width, length, and height attributes of the object, the object can be classified as a table or chair leg;
as shown in Fig. 4, according to the measured width, length, and height attributes of the object, the object can be classified as a wall;
as shown in Fig. 5, according to the measured width, length, and height attributes of the object, the object can be classified as a step. According to the classification, the device can adopt different obstacle-avoidance or obstacle-crossing strategies.
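The three size-based classifications illustrated in Figs. 3 to 5 can be sketched as a simple rule set; the size thresholds below are assumptions for illustration only, not values disclosed in the patent.

```python
# Illustrative classifier: a narrow, tall object reads as a table or chair
# leg, a wide, tall one as a wall, and a low one as a step.

def classify_obstacle(width_m, length_m, height_m):
    if width_m < 0.1 and length_m < 0.1 and height_m > 0.2:
        return "table/chair leg"   # bypass candidate
    if width_m > 1.0 and height_m > 0.5:
        return "wall"              # retreat candidate
    if height_m < 0.05:
        return "step"              # may be climbed over
    return "unknown"
```

Each returned label would then select an avoidance or obstacle-crossing strategy, as described above.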
It should be noted that the area array TOF depth sensor includes sensors using an infrared receiver, or an infrared receiver plus an RGB receiver, and the sensor may employ either direct or indirect measurement of the time of flight.
In addition, with the technical scheme of the embodiment of the present invention, the method for identifying obstacles with the area array TOF depth sensor can identify various obstacles: when the mobile robot identifies obstacles such as a power cord, entanglement and collision are prevented; when table and chair legs are identified, they can be avoided at a shorter distance while passability is preserved; and when traversable objects within the robot's obstacle-crossing capability, such as a low step, a carpet, or a sliding-door rail, are identified, missed cleaning areas are prevented.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be substantially embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g., a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk), and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
The present invention also provides a mobile robot, and fig. 6 is a block diagram of a mobile robot according to an embodiment of the present invention, including:
the area array depth sensor 62 is arranged on the front side of the mobile robot and is configured to emit multiple groups of first laser light, using a laser panel, toward a target object located in the traveling direction of the mobile robot, wherein the laser panel is arranged on the front side of the mobile robot, the front side indicating the forwardmost part of the mobile robot during traveling, and any two groups of first laser light among the multiple groups have different light intensities;
the processor 64 is arranged in the mobile robot and connected with the area array depth sensor, or is located in the area array depth sensor, and is configured to receive multiple groups of second laser light passed on by the laser panel, and to determine the type of the obstacle corresponding to the target object according to the multiple groups of first laser light and the multiple groups of second laser light, wherein the multiple groups of second laser light are received by the laser panel after the multiple groups of first laser light are reflected from the target object.
According to the invention, during traveling of the mobile robot, the area array depth sensor 62 is controlled to emit multiple groups of first laser light toward the target object in the traveling direction using the laser panel; the processor 64 then receives the multiple groups of second laser light passed on by the laser panel and determines the type of the obstacle corresponding to the target object according to the multiple groups of first and second laser light. This technical solution solves the problem of low detection accuracy when detecting objects in front of a sweeping robot with conventional methods: detecting objects by controlling the area array depth sensor improves detection accuracy.
In the embodiment of the present invention, the installation position of the laser panel of the area array depth sensor is preferably the forwardmost position of the mobile robot during traveling, but the panel may also be installed at other positions of the mobile robot, for example on its upper surface, as long as it can detect objects in front of the mobile robot.
It should be noted that the processor 64 is further configured to determine, from the multiple groups of first laser light and the multiple groups of second laser light, a first laser and a second laser having the same light intensity, the first laser carrying encoded information and the second laser carrying decoded information, and to determine the type of the obstacle corresponding to the target object according to the encoded information of the first laser and the decoded information of the second laser.
In the embodiment of the present application, the area array depth sensor emits multiple groups of first laser light forward in one plane, for example within 1 s, with each group having a different brightness (light intensity). Since reflection does not change the light intensity of the laser, a first laser and a second laser with the same light intensity can be matched across the multiple groups. When the area array depth sensor emits the first laser, the first laser carries encoded information that is preset for the sensor and contains parameter information; after the first laser is reflected by the target object, the parameter information in the encoded information changes, and for better understanding the changed encoded information is defined as decoded information. The type of the obstacle corresponding to the target object can then be determined from the encoded information of the first laser and the decoded information of the second laser. It should be noted that the area array TOF depth sensor includes sensors using an infrared receiver, or an infrared receiver plus an RGB receiver, and the sensor may employ either direct or indirect measurement of the time of flight.
In an alternative embodiment, processor 64 is further configured to determine a time of flight of the first laser based on the encoded information in the first laser and the decoded information in the second laser; determining three-dimensional information of the target object according to the flight time, and determining the type of an obstacle corresponding to the target object according to the three-dimensional information, wherein the three-dimensional information comprises at least one of the following: the height information of the target object, the length information of the target object and the width information of the target object.
That is, the area array depth sensor may determine the flight time of the first laser from the encoded information in the first laser and the decoded information in the second laser. Specifically, the sensor compares the parameter information in the decoded information with that in the encoded information to obtain the flight time of the first laser, determines the three-dimensional information of the target object from that flight time, and then determines the type of the obstacle corresponding to the target object from the three-dimensional information. With this technical scheme, the area array depth sensor can quickly determine the flight time of the first laser, from which the three-dimensional information of the target object can be determined.
Further, the processor 64 is also configured to determine the three-dimensional coordinates of the target object according to the flight time, to separate the coordinate information of the ground on which the mobile robot is located from the three-dimensional coordinates, and to determine the three-dimensional information of the target object with the coordinate information of the ground as a reference.
That is, the space in which the mobile robot is located may be regarded as a three-dimensional coordinate system. The area array depth sensor determines the three-dimensional coordinates of the target object in that coordinate system from the flight time of the first laser, for example (X, Y, Z). After the three-dimensional coordinates are determined, the sensor separates the coordinate information of the ground on which the mobile robot stands from the three-dimensional coordinates of the target object, and determines the three-dimensional information of the target object with the ground coordinates as a reference. With this technical scheme, the three-dimensional information of the target object can be determined more accurately.
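One common way to obtain such a per-pixel (X, Y, Z) coordinate from flight time is a pinhole back-projection; this is a hedged sketch, and the calibration parameter names (focal length `f`, principal point `cx`, `cy`) are assumptions for illustration, not parameters disclosed in the patent.

```python
# Convert a pixel (u, v) plus its measured round-trip flight time into a
# 3-D coordinate using pinhole calibration parameters.

C = 299_792_458.0  # speed of light in m/s

def point_from_flight_time(u, v, t_round_trip_s, f, cx, cy):
    depth = C * t_round_trip_s / 2.0  # distance implied by the flight time
    x = (u - cx) * depth / f          # back-project the pixel through
    y = (v - cy) * depth / f          # the calibrated pinhole model
    return (x, y, depth)
```

A pixel at the principal point with a 20 ns round trip maps to roughly (0, 0, 3 m); subtracting the separately determined ground plane from a cloud of such points then yields the object's height, length, and width.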
In an optional embodiment, the processor 64 is further configured to obtain a corresponding relationship between preset three-dimensional information and a type of an obstacle; and determining the type of the obstacle corresponding to the three-dimensional information of the target object from the corresponding relation.
It should be noted that each piece of three-dimensional information may determine an obstacle type. Specifically, there may be a table of correspondences between three-dimensional information and obstacle types. After the area array depth sensor acquires the three-dimensional information of the target object, it consults this table and determines the obstacle type corresponding to that three-dimensional information; depending on the three-dimensional information, the determined obstacle type may include table and chair legs, walls, and steps. With this technical scheme, the area array depth sensor can quickly determine the obstacle type of the target object from its three-dimensional information.
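Such a correspondence table can be realized as a plain lookup keyed on discretized size classes; the entries and thresholds below are placeholders for illustration, not values from the patent.

```python
# Discretize the measured size into classes and look up the obstacle type.

OBSTACLE_TABLE = {
    ("narrow", "tall"): "table/chair leg",
    ("wide", "tall"): "wall",
    ("wide", "low"): "step",
}

def lookup_obstacle(width_m, height_m):
    w = "narrow" if width_m < 0.1 else "wide"
    h = "low" if height_m < 0.05 else "tall"
    return OBSTACLE_TABLE.get((w, h), "unknown")
```

Keeping the table as data (rather than hard-coded branches) makes it easy to add new obstacle types without touching the lookup logic.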
Further, the processor 64 is further configured to determine an avoidance strategy corresponding to the type of the obstacle, and to control the traveling route of the mobile robot according to the avoidance strategy so that the mobile robot successfully avoids the target object.
It can be understood that, after determining the type of the obstacle corresponding to the target object, the area array depth sensor may select a corresponding avoidance strategy according to that type: for example, when the obstacle is a wall, the robot retreats, and when the obstacle is a table or chair leg, the robot bypasses it. The traveling route of the mobile robot is then controlled according to the avoidance strategy so that the mobile robot successfully avoids the target object. In this way, the mobile robot can avoid the target object quickly, reducing the probability of the robot becoming stuck or its rolling brush becoming entangled.
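The type-to-strategy dispatch described above (retreat at a wall, bypass a table or chair leg) can be sketched as a small mapping; the "climb over" entry and the "stop" fallback are assumptions for illustration.

```python
# Map each obstacle type to an avoidance or obstacle-crossing strategy.

AVOIDANCE = {
    "wall": "retreat",
    "table/chair leg": "bypass",
    "step": "climb over",
}

def avoidance_strategy(obstacle_type):
    # Default to stopping when the obstacle type is unrecognized.
    return AVOIDANCE.get(obstacle_type, "stop")
```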
In this embodiment, a target object determination apparatus is further provided. The apparatus is used to implement the foregoing embodiments and preferred implementations, and what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 7 is a block diagram of a target object determination apparatus according to an embodiment of the present invention, as shown in fig. 7:
a sending module 72, configured to control a laser panel of the area array depth sensor to emit multiple groups of first laser light toward a target object located in the traveling direction of the mobile robot, wherein the laser panel is disposed on the front side of the mobile robot, the front side indicating the forwardmost part of the mobile robot during traveling, and any two groups of first laser light among the multiple groups have different light intensities;
a receiving module 74, configured to receive, through the laser panel, multiple sets of second laser light reflected from the target object by the multiple sets of first laser light;
and a determining module 76, configured to determine the type of the obstacle corresponding to the target object according to the multiple sets of the first laser light and the multiple sets of the second laser light.
Through the above modules, during traveling of the mobile robot, the laser panel of the area array depth sensor is controlled to emit multiple groups of first laser light toward the target object in the traveling direction, multiple groups of second laser light reflected by the first laser light from the target object are received through the laser panel, and the type of the obstacle corresponding to the target object is determined according to the multiple groups of first and second laser light. This technical solution solves the problem of low detection accuracy when detecting objects in front of a sweeping robot with conventional methods: detecting objects by controlling the area array depth sensor improves detection accuracy.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
S1, controlling a laser panel of the area array depth sensor to emit multiple groups of first laser light toward a target object located in the traveling direction of the mobile robot, wherein the laser panel is arranged on the front side of the mobile robot, the front side indicating the forwardmost part of the mobile robot during traveling, and any two groups of first laser light among the multiple groups have different light intensities;
S2, receiving, through the laser panel, multiple groups of second laser light reflected by the multiple groups of first laser light from the target object;
S3, determining the type of the obstacle corresponding to the target object according to the multiple groups of first laser light and the multiple groups of second laser light.
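Steps S1 to S3 can be sketched end to end; here the emission and reception of S1 and S2 are mocked as plain lists of (intensity, timestamp) tuples, which is an illustrative stand-in for the sensor driver, not the patent's actual interface.

```python
# Pair groups by intensity (the per-group code), convert each matched flight
# time to a depth, and return the nearest measured depth for classification.

C = 299_792_458.0  # speed of light in m/s

def nearest_obstacle_depth(emit_groups, receive_groups):
    """Return the nearest depth among matched groups, or None if none match."""
    emit_time = {intensity: t for intensity, t in emit_groups}
    depths = [
        C * (t_rx - emit_time[intensity]) / 2.0
        for intensity, t_rx in receive_groups
        if intensity in emit_time
    ]
    return min(depths) if depths else None
```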
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing a computer program, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, controlling a laser panel of the area array depth sensor to emit multiple groups of first laser light toward a target object located in the traveling direction of the mobile robot, wherein the laser panel is arranged on the front side of the mobile robot, the front side indicating the forwardmost part of the mobile robot during traveling, and any two groups of first laser light among the multiple groups have different light intensities;
S2, receiving, through the laser panel, multiple groups of second laser light reflected by the multiple groups of first laser light from the target object;
S3, determining the type of the obstacle corresponding to the target object according to the multiple groups of first laser light and the multiple groups of second laser light.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
Embodiments of the present invention also provide a robot comprising a body, a motion assembly and a controller arranged to perform the steps of any of the method embodiments described above.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.
Claims (10)
1. A method of determining a target object, comprising:
the method comprises the steps that a laser panel of an area array depth sensor is controlled to emit multiple groups of first laser light toward a target object located in the traveling direction of a mobile robot, wherein the laser panel is arranged on the front side of the mobile robot, the front side indicating the forwardmost part of the mobile robot during traveling, and any two groups of first laser light among the multiple groups have different light intensities;
receiving, by the laser panel, a plurality of groups of second laser light reflected by the plurality of groups of first laser light from the target object;
and determining the type of the obstacle corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers.
2. The method for determining the target object according to claim 1, wherein determining the type of the obstacle corresponding to the target object according to the plurality of sets of the first laser light and the plurality of sets of the second laser light includes:
determining a first laser and a second laser with the same light intensity from the multiple groups of first lasers and the multiple groups of second lasers; the first laser carries coding information, and the second laser carries decoding information;
and determining the type of the obstacle corresponding to the target object according to the coding information of the first laser and the decoding information of the second laser.
3. The method for determining the target object according to claim 2, wherein determining the type of the obstacle corresponding to the target object according to the encoded information of the first laser and the decoded information of the second laser includes:
determining the flight time of the first laser according to the coding information in the first laser and the decoding information in the second laser;
determining three-dimensional information of the target object according to the flight time, and determining the type of the obstacle corresponding to the target object according to the three-dimensional information, wherein the three-dimensional information comprises at least one of the following: the height information of the target object, the length information of the target object and the width information of the target object.
4. The method for determining a target object according to claim 3, wherein determining three-dimensional information of the target object based on the time of flight includes:
determining the three-dimensional coordinates of the target object according to the flight time;
separating the coordinate information of the ground where the mobile robot is located from the three-dimensional coordinates;
and determining the three-dimensional information of the target object by taking the coordinate information of the ground as a reference.
5. The method for determining the target object according to claim 3, wherein determining the type of the obstacle corresponding to the target object according to the three-dimensional information includes:
acquiring a corresponding relation between preset three-dimensional information and the type of an obstacle;
and determining the type of the obstacle corresponding to the three-dimensional information of the target object from the corresponding relation.
6. The method for determining the target object according to claim 1, wherein after determining the type of the obstacle corresponding to the target object according to the plurality of sets of the first laser light and the plurality of sets of the second laser light, the method further comprises:
determining an avoidance strategy corresponding to the type of the obstacle;
and controlling the traveling route of the mobile robot according to the avoiding strategy so as to control the mobile robot to successfully avoid the target object.
7. A mobile robot, comprising:
the area array depth sensor is arranged on the front side of the mobile robot and is configured to emit multiple groups of first laser light, using a laser panel, toward a target object located in the traveling direction of the mobile robot, wherein the laser panel is arranged on the front side of the mobile robot, the front side indicating the forwardmost part of the mobile robot during traveling, and any two groups of first laser light among the multiple groups have different light intensities;
the processor is arranged in the mobile robot and connected with the area array depth sensor, or the processor is positioned in the area array depth sensor and used for receiving a plurality of groups of second lasers sent by the laser panel; and determining the type of the obstacle corresponding to the target object according to the multiple groups of first lasers and the multiple groups of second lasers, wherein the multiple groups of second lasers are received by the laser panel after the multiple groups of first lasers are reflected from the target object.
8. The mobile robot of claim 7, wherein the processor is further configured to determine a first laser and a second laser having the same light intensity from the plurality of first laser light and the plurality of second laser light; the first laser carries coded information, and the second laser carries decoding information.
9. A computer-readable storage medium, comprising a stored program, wherein the program is operable to perform the method of any one of claims 1 to 6.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 6 by means of the computer program.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111116319.7A CN113848902A (en) | 2021-09-23 | 2021-09-23 | Target object determination method, mobile robot, storage medium, and electronic device |
PCT/CN2022/113312 WO2023045639A1 (en) | 2021-09-23 | 2022-08-18 | Method for determining target object, mobile robot, storage medium, and electronic apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113848902A true CN113848902A (en) | 2021-12-28 |
Family
ID=78979014
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111116319.7A Withdrawn CN113848902A (en) | 2021-09-23 | 2021-09-23 | Target object determination method, mobile robot, storage medium, and electronic device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113848902A (en) |
WO (1) | WO2023045639A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023045639A1 (en) * | 2021-09-23 | 2023-03-30 | 追觅创新科技(苏州)有限公司 | Method for determining target object, mobile robot, storage medium, and electronic apparatus |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130090438A (en) * | 2012-02-04 | 2013-08-14 | 엘지전자 주식회사 | Robot cleaner |
CN105866790B (en) * | 2016-04-07 | 2018-08-10 | 重庆大学 | A kind of laser radar obstacle recognition method and system considering lasing intensity |
CN110916562A (en) * | 2018-09-18 | 2020-03-27 | 科沃斯机器人股份有限公司 | Autonomous mobile device, control method, and storage medium |
CN110622085A (en) * | 2019-08-14 | 2019-12-27 | 珊口(深圳)智能科技有限公司 | Mobile robot and control method and control system thereof |
CN112749643A (en) * | 2020-12-30 | 2021-05-04 | 深圳市欢创科技有限公司 | Obstacle detection method, device and system |
CN113848902A (en) * | 2021-09-23 | 2021-12-28 | 追觅创新科技(苏州)有限公司 | Target object determination method, mobile robot, storage medium, and electronic device |
- 2021-09-23: CN application CN202111116319.7A, published as CN113848902A (status: withdrawn)
- 2022-08-18: WO application PCT/CN2022/113312, published as WO2023045639A1 (active, application filing)
Also Published As
Publication number | Publication date |
---|---|
WO2023045639A1 (en) | 2023-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10932635B2 (en) | Vacuum cleaner | |
EP3798974B1 (en) | Method and apparatus for detecting ground point cloud points | |
US10478037B2 (en) | Method for operating a floor-cleaning device and floor-cleaning device | |
CN111427023B (en) | Laser radar anti-interference method, laser radar system and storage medium | |
CN110202569B (en) | Robot recharging method, device, system, electronic equipment and storage medium | |
KR100954232B1 (en) | Method and apparatus for avoiding vehicle collision | |
US11553452B2 (en) | Positioning control method and device, positioning system and storage medium | |
CN112214011B (en) | System and method for positioning charging seat of self-moving robot | |
CN112826393B (en) | Sweeping robot operation management method, sweeping robot, equipment and storage medium | |
CN113848902A (en) | Target object determination method, mobile robot, storage medium, and electronic device | |
CN111694360B (en) | Method and device for determining position of sweeping robot and sweeping robot | |
CN110471086A (en) | A kind of radar survey barrier system and method | |
US11867798B2 (en) | Electronic device including sensor and method of determining path of electronic device | |
CN113633221A (en) | Method, device and system for processing missed-scanning area of automatic cleaning equipment | |
CN112014830B (en) | Reflection filtering method of radar laser, sweeping robot, equipment and storage medium | |
US20170115161A1 (en) | Measuring device, measuring method, and programs therefor | |
CN109270548B (en) | Object detection for motor vehicles | |
CN110928296A (en) | Method for avoiding charging seat by robot and robot thereof | |
CN114879690A (en) | Scene parameter adjusting method and device, electronic equipment and storage medium | |
CN113902632A (en) | Method and device for removing laser data noise point, storage medium and electronic device | |
KR20210031828A (en) | Electronic device including sensor and path planning method of the electronic device | |
CN111114367A (en) | Automatic charging method and system for electric automobile | |
CN118161087A (en) | TOF optical system for sweeping robot and sweeping robot | |
CN112731421B (en) | Laser radar system and light intensity switching method thereof | |
CN115079145B (en) | Method and device for improving anti-interference capability of laser radar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20211228 |