
WO2019085716A1 - Interaction method and apparatus for a mobile robot, mobile robot, and storage medium - Google Patents

Interaction method and apparatus for a mobile robot, mobile robot, and storage medium

Info

Publication number
WO2019085716A1
WO2019085716A1 · PCT/CN2018/109626 · CN2018109626W
Authority
WO
WIPO (PCT)
Prior art keywords
mobile robot
prompt information
projection
path
obstacle
Prior art date
Application number
PCT/CN2018/109626
Other languages
English (en)
French (fr)
Inventor
陈超
吴伟
李成军
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2019085716A1 publication Critical patent/WO2019085716A1/zh
Priority to US16/597,484 priority Critical patent/US11142121B2/en

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q1/00Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
    • B60Q1/26Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic
    • B60Q1/50Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking
    • B60Q1/507Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to indicate the vehicle, or parts thereof, or to give signals, to other traffic for indicating other intentions or conditions, e.g. request for waiting or overtaking specific to autonomous vehicles
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/04Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q2400/00Special features or arrangements of exterior signal lamps for vehicles
    • B60Q2400/50Projected symbol or information, e.g. onto the road or car body
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/20Pc systems
    • G05B2219/25Pc structure of the system
    • G05B2219/25252Microprocessor
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/20Pc systems
    • G05B2219/25Pc structure of the system
    • G05B2219/25257Microcontroller

Definitions

  • the embodiments of the present invention relate to the field of artificial intelligence technologies, and in particular, to a mobile robot interaction method, device, mobile robot, and storage medium.
  • a mobile robot is a machine that automatically performs work.
  • the mobile robot can send a message to the user through a device such as a signal light or a speaker.
  • the information prompting mode of the mobile robot is mainly performed in a voice form.
  • the mobile robot receives a person's instruction through its microphone, determines the prompt information corresponding to the instruction, and plays a prompt sound to the person through its speaker.
  • because the prompt information is conveyed by the prompt sound, it is affected by factors such as the distance between the person and the mobile robot, the ambient sound, and regional differences in language, so it is difficult for the mobile robot to describe the prompt information to the person quickly and accurately.
  • the embodiments of the present application provide a mobile robot interaction method, device, mobile robot, and storage medium, which can be used to solve the problem that the related art does not provide a more effective interaction method for mobile robots.
  • the technical solution is as follows:
  • an interactive method for a mobile robot includes:
  • acquiring sensing data, where the sensing data is used to indicate the surrounding environment of the mobile robot during traveling;
  • determining path prompt information according to the sensing data, where the path prompt information includes a planned path of the mobile robot; and
  • projecting and displaying the path prompt information on a target projection surface.
  • an interactive device for a mobile robot disposed on the mobile robot, the device comprising:
  • An acquisition module configured to acquire sensing data, where the sensing data is used to indicate a surrounding environment of the mobile robot during traveling;
  • a first determining module configured to determine path prompt information according to the sensing data, where the path prompt information includes a planned path of the mobile robot;
  • a projection module configured to project the path prompt information on a target projection surface.
  • a mobile robot including a processor, a memory, and a projection component, wherein the memory stores at least one instruction, at least one program, a code set, or a set of instructions;
  • the processor is configured to acquire sensing data, where the sensing data is used to indicate a surrounding environment of the mobile robot during traveling;
  • the processor is further configured to determine path prompt information according to the sensing data, where the path prompt information includes a planned path of the mobile robot;
  • the projection component is configured to project the path prompt information determined by the processor on a target projection surface.
  • a computer readable storage medium having stored therein at least one instruction, at least one program, a code set, or a set of instructions, where the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the interactive method of the mobile robot provided by the first aspect.
  • the mobile robot can project and display, on the target projection surface, the planned path along which it is to move during traveling, so that nearby pedestrians can directly see the planned path of the mobile robot on the target projection surface; this avoids the limitations caused by the voice form in the related art and helps the mobile robot quickly and accurately inform pedestrians of the prompt information.
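The acquire–determine–project flow summarized above can be illustrated with a minimal sketch; the data fields, the simple straight-ahead-or-sidestep policy, and all names below are assumptions of this illustration, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SensingData:
    # Positions are (x, y) in meters in the robot's frame (illustrative).
    moving_obstacles: List[Tuple[float, float]] = field(default_factory=list)
    static_obstacles: List[Tuple[float, float]] = field(default_factory=list)

def determine_path_prompt(data: SensingData) -> List[Tuple[float, float]]:
    """Return a planned path as waypoints (hypothetical policy: go straight
    1 m unless a moving obstacle is directly ahead, then sidestep left)."""
    blocked = any(abs(y) < 0.5 and 0 < x < 1.0 for x, y in data.moving_obstacles)
    if blocked:
        # Sidestep 0.5 m to the left, then continue forward.
        return [(0.0, 0.0), (0.0, 0.5), (1.0, 0.5)]
    return [(0.0, 0.0), (1.0, 0.0)]

def project(path: List[Tuple[float, float]]) -> str:
    # Stand-in for the projection component: render the waypoints as text.
    return " -> ".join(f"({x:.1f},{y:.1f})" for x, y in path)

# One iteration of the interaction flow: sense -> plan -> project.
data = SensingData(moving_obstacles=[(0.8, 0.1)])
display = project(determine_path_prompt(data))
```

A real implementation would fill `SensingData` from the sensing component and drive the projection component rather than returning text.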
  • FIG. 1 is a block diagram showing the structure of a mobile robot 10 according to an embodiment of the present application.
  • FIG. 2 is a flowchart of an interaction method of the mobile robot 10 according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an interaction method of three different types of mobile robots 10 provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a projection form involved in an interaction method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a projection form involved in an interaction method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a projection form involved in an interaction method provided by an embodiment of the present application.
  • FIG. 7 is a flowchart of a method for interacting with a mobile robot according to another embodiment of the present application.
  • FIG. 8 is a schematic diagram of a projection form involved in an interaction method according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a projection form involved in an interaction method provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a projection form involved in an interaction method according to an embodiment of the present application.
  • FIG. 11 is a flowchart of a method for interacting with a mobile robot according to another embodiment of the present application.
  • FIG. 12 is a schematic diagram of a projection form involved in an interaction method provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of an interaction apparatus of a mobile robot according to an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of an interaction apparatus of a mobile robot according to another embodiment of the present application.
  • Mobile robot: a robot system with mobile functions.
  • the mobile robot includes at least one of: a wheeled mobile robot, a walking mobile robot (one-legged, two-legged, or multi-legged), a crawler-type mobile robot, a crawling robot, a peristaltic robot, and a swimming robot.
  • wheeled mobile robots include self-driving cars or driverless cars.
  • mobile robots can also be classified into the following types: by application environment, mobile robots include indoor mobile robots and outdoor mobile robots; by control architecture, mobile robots include functional (horizontal) structure robots, behavioral (vertical) structure robots, and hybrid robots; by function and use, mobile robots include medical robots, military robots, assistive robots for the disabled, cleaning robots, and the like.
  • the type of the mobile robot is not limited in the embodiment of the present application.
  • Perceptual data: includes data related to the mobile robot itself during traveling and data related to the surrounding environmental objects.
  • the data related to the mobile robot itself during traveling includes at least one of the traveling position, acceleration, angular velocity, tilt angle, and driving mileage of the mobile robot; the data related to the surrounding environmental objects during traveling includes at least one of a first position of a moving obstacle, a planned path of the moving obstacle, gesture information of the moving obstacle, a second position of a stationary obstacle, a type of the stationary obstacle, and a plane color of the target projection surface.
  • the sensory data is data collected by the mobile robot through the sensing component during the traveling process.
  • the sensing data may be the traveling position of the mobile robot collected by a camera, the acceleration and/or tilt angle of the mobile robot collected by a three-axis accelerometer, the angular velocity and/or tilt angle of the mobile robot collected by a gyroscope, the mileage of the mobile robot collected by an odometer, the distance between the mobile robot and an environmental object collected by a Laser Distance Sensor (LDS), or the distance between the mobile robot and an environmental object collected by a cliff sensor.
  • LDS: Laser Distance Sensor.
  • the data related to the mobile robot itself includes at least one of the current position, moving speed, and moving mileage of the mobile robot; the data related to the surrounding environmental objects includes at least one of the position of an obstacle, the type of the obstacle, and the pavement quality, which indicate the surrounding environment of the mobile robot during traveling.
  • the quality of the road surface is used to indicate the road surface condition of the road on which the mobile robot is located.
  • the basic conditions of the pavement include at least one of: whether there are pits, whether there is accumulated water, and whether there are deep wells.
  • the obstacle includes: a stationary obstacle and/or a moving obstacle, the stationary obstacle may be one or more, and the moving obstacle may be one or more. This embodiment does not limit the number and type of stationary obstacles and moving obstacles.
  • stationary obstacles may be furniture, home appliances, office equipment, brick walls, wooden walls, wires on the ground, door bars between rooms, and the like.
  • moving obstacles include people, moving vehicles, other mobile robots, and the like. In the following embodiments, only the case where the moving obstacle is a person is described as an example.
  • Prompt information: information that needs to be projected and displayed based on the perceptual data.
  • the prompt information includes path prompt information
  • the path prompt information includes a planned path of the mobile robot
  • the planned path of the mobile robot is the planned moving path of the mobile robot.
  • the prompt information further includes auxiliary prompt information, where the auxiliary prompt information includes an estimated arrival time and/or a moving speed of the mobile robot.
  • the estimated arrival time of the mobile robot is the estimated time that the mobile robot arrives at the target location while moving on the planned path.
  • the moving speed is the speed at which the mobile robot moves on the planned path.
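The estimated arrival time and moving speed described above follow directly from the planned path's geometry; a minimal sketch (the waypoint representation and the constant-speed assumption are this sketch's, not the patent's):

```python
import math
from typing import List, Tuple

def path_length(path: List[Tuple[float, float]]) -> float:
    """Total length of a polyline path given as (x, y) waypoints, in meters."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def estimated_arrival_s(path: List[Tuple[float, float]],
                        speed_m_s: float) -> float:
    """Estimated time to reach the last waypoint at a constant moving speed."""
    if speed_m_s <= 0:
        raise ValueError("speed must be positive")
    return path_length(path) / speed_m_s

path = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0)]    # 3 m east, then 4 m north
eta = estimated_arrival_s(path, speed_m_s=0.5)  # 7 m at 0.5 m/s
```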
  • FIG. 1 shows a structural block diagram of a mobile robot provided by an exemplary embodiment of the present application.
  • the mobile robot 10 includes a sensing unit 110, a control unit 120, a projection unit 130, and a driving unit 140.
  • the sensing unit 110 is configured to collect the sensing data of the mobile robot 10 in the traveling area through the sensing component.
  • the perceptual data includes data relating to itself by the mobile robot 10 during traveling and data related to environmental objects surrounding it.
  • the sensing component comprises at least one of a camera, a three-axis accelerometer, a gyroscope, an odometer, an LDS, an ultrasonic sensor, an infrared human body sensing sensor, a cliff sensor, and a radar.
  • the camera may be a monocular camera and/or a binocular camera. The number and type of sensing components are not limited in this application.
  • a camera is used to measure the travel position of the mobile robot 10
  • a three-axis accelerometer is used to acquire the acceleration and/or tilt angle of the mobile robot 10
  • the gyroscope is used to acquire the angular velocity and/or tilt angle of the mobile robot 10
  • the odometer is used to acquire the mileage of the mobile robot 10
  • the LDS is usually disposed at the top of the mobile robot 10 for measuring the distance between the mobile robot 10 and the environmental object using the laser
  • the ultrasonic sensor is usually disposed at the side of the mobile robot 10
  • the distance between the mobile robot 10 and the environmental object is measured using ultrasonic waves
  • the cliff sensor is typically disposed at the bottom of the mobile robot 10 for measuring the distance between the mobile robot 10 and the environmental object using infrared rays.
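The ultrasonic sensor and the LDS above both infer range from a round-trip signal time; a minimal time-of-flight sketch (the wave speeds are standard physical constants, not values from the patent):

```python
def time_of_flight_distance(round_trip_s: float, wave_speed_m_s: float) -> float:
    """Distance to an object from a round-trip echo time.
    The signal travels to the object and back, hence the division by 2."""
    if round_trip_s < 0:
        raise ValueError("round-trip time cannot be negative")
    return wave_speed_m_s * round_trip_s / 2.0

SPEED_OF_SOUND = 343.0          # m/s in air at ~20 degC (ultrasonic sensor)
SPEED_OF_LIGHT = 299_792_458.0  # m/s (laser distance sensor)

# An ultrasonic echo arriving after 10 ms corresponds to roughly 1.7 m.
d_ultrasonic = time_of_flight_distance(0.010, SPEED_OF_SOUND)
```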
  • the sensing unit 110 is electrically connected to the control unit 120, and sends the collected sensing data to the control unit 120.
  • the control unit 120 receives the sensing data collected by the sensing unit 110.
  • the control unit 120 includes a processing unit 122 and a storage unit 124.
  • the control unit 120 controls the overall operation of the mobile robot 10 through the processing unit 122.
  • the processing unit 122 is configured to determine path prompt information according to the sensing data, where the path prompt information includes a planned path of the mobile robot 10 .
  • the processing unit 122 is further configured to be able to control the mobile robot 10 to project and display the path prompt information on the target projection surface.
  • the target projection surface is the ground or a wall located in the area ahead in the moving direction.
  • the processing unit 122 is further configured to control the mobile robot 10 to travel on the planned path in a predetermined traveling mode after determining the planned path of the mobile robot 10 .
  • the control unit 120 stores at least one instruction through the storage unit 124.
  • the instructions include instructions for determining path prompt information from the perceptual data, instructions for projecting the path cue information on the target projection surface, instructions for performing travel on the planned path in a predetermined travel mode, and the like.
  • the storage unit 124 is also used to store the sensory data of the mobile robot 10 during travel.
  • the processing unit 122 includes a processor
  • the storage unit 124 includes a memory. At least one instruction, at least one program, code set or instruction set is stored in the memory, and at least one instruction, at least one program, code set or instruction set is loaded and executed by the processor to implement the mobile robot 10 provided by the following method embodiments. Interactive method.
  • the control unit 120 is electrically connected to the projection unit 130.
  • the projection unit 130 is configured to project and display the path prompt information on the target projection surface according to the control signal of the control unit 120.
  • the control unit 120 is electrically connected to the driving unit 140.
  • the drive unit 140 includes at least one drive wheel and a motor coupled to each of the drive wheels.
  • the driving unit 140 is configured to control a driving direction and a rotating speed of the at least one driving wheel according to a control signal of the control unit 120.
  • control unit 120 may be implemented by one or more application specific integrated circuits, digital signal processors, digital signal processing devices, programmable logic devices, field programmable gate arrays, controllers, microcontrollers, microprocessors, or other electronic components, for performing the interactive method of the mobile robot 10 in the embodiments of the present application.
  • a computer readable storage medium having stored therein at least one instruction, at least one program, a code set or a set of instructions, at least one instruction, at least one program, a code set Or the set of instructions is loaded and executed by the processor to implement the interactive method of the mobile robot 10 provided in the various method embodiments described above.
  • a computer readable storage medium can be a read-only memory, a random access memory, a magnetic tape, a floppy disk, or an optical data storage device.
  • FIG. 2 is a flowchart of a method for interacting with the mobile robot 10 according to an embodiment of the present application. This embodiment is described by applying the method to the mobile robot 10 shown in FIG. 1 as an example. The method includes:
  • Step 201: Acquire sensing data, where the sensing data is used to indicate the surrounding environment of the mobile robot 10 during traveling.
  • the mobile robot 10 acquires the sensing data through the sensing component, for example, the mobile robot 10 acquires the image frame in real time through the camera, or acquires the image frame through the camera every predetermined time interval, and acquires the sensing data according to the collected image frame.
  • the predetermined time interval is set by the mobile robot 10 by default, or is user-defined. This embodiment does not limit this.
  • the type of the sensing component employed by the mobile robot 10 is not limited in the embodiments of the present application. In the following, only the case where the mobile robot 10 acquires image frames through a camera and obtains the sensing data from the image frames is described as an example.
  • the sensing data includes: a first position of the moving obstacle, a planned path of the moving obstacle, a gesture information of the moving obstacle, a second position of the stationary obstacle, a type of the stationary obstacle, and a current position of the mobile robot 10 And at least one of the plane colors of the target projection surface.
  • the planned path of the moving obstacle is the predicted moving path of the moving obstacle
  • the gesture information of the moving obstacle is the information indicated by the gesture action of the moving obstacle
  • the gesture information includes at least one of: information instructing the mobile robot 10 to stop, information instructing the mobile robot 10 to make way, and information instructing the mobile robot 10 to move in a specified direction.
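The gesture categories above form a small command vocabulary; a hypothetical mapping from a recognizer's output label to a robot command (the gesture label names are invented for illustration, not from the patent):

```python
from enum import Enum
from typing import Optional

class Command(Enum):
    STOP = "stop"
    MAKE_WAY = "make_way"
    MOVE_DIRECTION = "move_direction"

# Hypothetical labels a gesture recognizer might emit.
GESTURE_TO_COMMAND = {
    "palm_out": Command.STOP,
    "wave_aside": Command.MAKE_WAY,
    "point": Command.MOVE_DIRECTION,
}

def interpret_gesture(label: str) -> Optional[Command]:
    """Map a recognized gesture label to a robot command, None if unknown."""
    return GESTURE_TO_COMMAND.get(label)
```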
  • Step 202 Determine path prompt information according to the sensing data, where the path prompt information includes a planned path of the mobile robot 10.
  • the planned path of the mobile robot 10 is a path for indicating the planned moving direction of the mobile robot 10.
  • the planned path of the mobile robot 10 is the moving path planned for the mobile robot 10 within a preset time period, or a moving path of a preset length planned for the mobile robot 10.
  • for example, the planned path of the mobile robot 10 is the moving path of the mobile robot 10 within the next 5 seconds.
  • for example, the planned path of the mobile robot 10 is the 1-meter segment along which the mobile robot 10 is planned to move.
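Clipping a full planned path to a preset time window or length, as in the 5-second and 1-meter examples above, can be sketched as follows (a time window becomes a length budget via length = speed × time; the interpolation detail is this sketch's assumption):

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def truncate_path(path: List[Point], max_length_m: float) -> List[Point]:
    """Return the leading portion of `path` whose total length is at most
    `max_length_m`, interpolating a final point on the last segment if
    the budget runs out mid-segment."""
    out = [path[0]]
    remaining = max_length_m
    for a, b in zip(path, path[1:]):
        seg = math.dist(a, b)
        if seg <= remaining:
            out.append(b)
            remaining -= seg
        else:
            t = remaining / seg
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
            break
    return out

# A 5 s window at 0.4 m/s is a 2 m budget on this 3 m straight path.
clipped = truncate_path([(0.0, 0.0), (3.0, 0.0)], max_length_m=0.4 * 5)
```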
  • the mobile robot 10 determines its planned path through a first preset strategy according to at least one of the first position of the moving obstacle, the planned path of the moving obstacle, the second position of the stationary obstacle, and the gesture information of the moving obstacle, and generates the path prompt information according to the planned path of the mobile robot 10.
  • the first preset policy can be referred to the related description in the following embodiments, and is not introduced here.
  • Step 203: Project and display the path prompt information on the target projection surface.
  • the target projection surface is a projection surface for projecting display path prompt information, for example, the target projection surface is a ground or a wall surface.
  • the first preset threshold or preset range is set by the mobile robot 10 by default, or is user-defined.
  • the first preset threshold is 1 meter
  • the preset range is a circular range with the current position of the mobile robot 10 as a center and a radius of 1 meter.
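The circular preset range above reduces to a point-in-circle test on positions taken from the sensing data; a minimal sketch (function and parameter names are illustrative):

```python
import math
from typing import Tuple

def within_preset_range(robot_pos: Tuple[float, float],
                        obstacle_pos: Tuple[float, float],
                        radius_m: float = 1.0) -> bool:
    """True if the obstacle lies inside the circular preset range centred
    on the robot's current position (radius defaults to the 1 m example)."""
    return math.dist(robot_pos, obstacle_pos) <= radius_m

# A pedestrian 0.8 m ahead falls inside the range; one 2 m away does not.
near = within_preset_range((0.0, 0.0), (0.8, 0.0))
far = within_preset_range((0.0, 0.0), (2.0, 0.0))
```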
  • FIG. 3 illustrates a schematic diagram of the interaction method for three different types of mobile robots 10.
  • the route prompt information 31 is projected and displayed on the ground;
  • the mobile robot 10 is a crawler mobile robot,
  • the path prompt information 32 is projected and displayed on the ground;
  • the route prompt information 33 is projected and displayed on the ground.
  • in the following, only a walking mobile robot is described as an example.
  • the displaying, by the mobile robot 10, the path prompt information on the target projection surface comprises: displaying the path prompt information in a first projection form on the target projection surface.
  • the first projection form includes at least one of a text projection form, an image projection form, an animation projection form, and a video projection form.
  • In a first possible projection form, the mobile robot 10 displays the path prompt information in a text projection form on the target projection surface.
  • the mobile robot 10 projects and displays text content on the target projection surface, and the text content is used to describe the planned path of the mobile robot 10.
  • the mobile robot 10 moves in the east direction, and the pedestrian 41 moves in the west direction.
  • the sensing data includes the first position of the pedestrian 41.
  • the mobile robot 10 determines its planned path based on the sensory data, generates the text content 42 "straight 5 meters" according to the planned path, and projects the text content 42 on the ground to describe the planned path of the mobile robot 10.
  • In a second possible projection form, the mobile robot 10 displays the path prompt information in an image projection form on the target projection surface.
  • the mobile robot 10 displays the path prompt information in the form of a preset image, where the preset image includes a diamond image, a rectangular image, a circular image, an irregular polygon image, or the like, which is not limited in this embodiment.
  • the mobile robot 10 determines the projection color of the path prompt information according to the plane color of the target projection surface, and displays the path prompt information in the form of the projection color on the target projection surface.
  • the projection color is different from the plane color.
  • the mobile robot 10 obtains the plane color of the target projection surface from the perceptual data, determines a projection color different from the plane color such that the color discrimination between the plane color and the projection color is higher than a predetermined discrimination threshold, and projects and displays the path prompt information on the target projection surface in that projection color.
  • the color discrimination between the plane color and the projection color is a degree of difference between at least one of a hue, a saturation, and a brightness of the plane color and the projected color.
  • the corresponding relationship between the plane color and the projected color is stored in advance in the mobile robot 10 .
  • the correspondence is shown in Table 1.
  • the plane color “yellow” corresponds to the projection color “blue”
  • the plane color “green” corresponds to the projection color “red”
  • the plane color “white” corresponds to the projection color “brown”.
  • for example, the plane color of the ground in the perceptual data acquired by the mobile robot 10 is yellow; the mobile robot 10 searches the stored correspondence for the projection color corresponding to the plane color "yellow", obtains "blue", and projects and displays the path prompt information on the target projection surface in the form of a blue pattern.
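The Table 1 lookup in the example above, with a fallback for plane colors not listed in the table, might be sketched as follows (the RGB values and the brightness-difference fallback rule are assumptions of this sketch, consistent with the discrimination-threshold description):

```python
# Table 1 from the description: plane color -> projection color.
COLOR_TABLE = {"yellow": "blue", "green": "red", "white": "brown"}

# Illustrative RGB values (0-255 per channel) for the colors involved.
RGB = {
    "yellow": (255, 255, 0), "blue": (0, 0, 255), "green": (0, 128, 0),
    "red": (255, 0, 0), "white": (255, 255, 255), "brown": (139, 69, 19),
    "black": (0, 0, 0),
}

def brightness(color: str) -> float:
    """Perceived brightness of a named color (ITU-R BT.601 luma weights)."""
    r, g, b = RGB[color]
    return 0.299 * r + 0.587 * g + 0.114 * b

def projection_color(plane_color: str, threshold: float = 100.0) -> str:
    """Use Table 1 when the plane color is listed; otherwise fall back to
    any color whose brightness differs from the plane color by more than
    the discrimination threshold (the fallback is this sketch's choice)."""
    if plane_color in COLOR_TABLE:
        return COLOR_TABLE[plane_color]
    for candidate in RGB:
        if candidate != plane_color and \
                abs(brightness(candidate) - brightness(plane_color)) > threshold:
            return candidate
    return "black"
```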
  • In a third possible projection form, the mobile robot 10 displays the path prompt information in an animation projection form on the target projection surface.
  • the mobile robot 10 displays the path prompt information in the form of an animated guide arrow and displays it on the target projection surface.
  • the animation guide arrow is used to indicate the moving direction of the mobile robot 10.
  • the mobile robot 10 moves in the east direction, and the pedestrian 43 moves in the west direction.
  • the mobile robot 10 acquires the sensing data, which includes the first position of the pedestrian 43, and determines the path prompt information 44 based on the sensing data.
  • using the width of the mobile robot 10 as the projection boundary, the path prompt information 44 is projected and displayed on the ground in the form of an animated guide arrow to present the planned path of the mobile robot 10 to the pedestrian 43.
  • the mobile robot 10 can display the path prompt information in the form of an animated guide arrow, or in the form of another preset animation, on the target projection surface; this is not limited in this embodiment. In the following, only projection and display in the form of an animated guide arrow is described as an example.
  • In a fourth possible projection form, the mobile robot 10 displays the path prompt information in a video projection form on the target projection surface.
  • the mobile robot 10 can project a dynamic video on the target projection surface for prompting the planned path of the mobile robot 10.
  • the mobile robot 10 presets the video size and/or resolution of the dynamic video.
  • any two, any three, or all four of the above possible projection forms may be implemented in combination. This can easily be understood by those skilled in the art from the above projection forms, and the embodiments of the present application do not repeat the ways in which these forms are combined.
  • the mobile robot 10 determines the projection area on the target projection surface according to the position of the obstacle, and there is no overlapping area between the projection area and the position of the obstacle. After determining the projection area, the mobile robot 10 projects the path prompt information on the projection area.
  • the mobile robot 10 projects the path prompt information on the projection area of the target projection surface in any one or more of the above projection forms.
  • the mobile robot 10 obtains the position of the obstacle from the sensory data, and thereby determines, on the target projection surface, a projection area that has no overlapping area with the position of the obstacle.
  • the mobile robot 10 moves in the east direction, the pedestrian 52 moves in the west direction, and the pedestrian 53 moves in the south direction.
  • the mobile robot 10 acquires the sensory data, which includes the first position of the pedestrian 52 and the first position of the pedestrian 53.
  • the mobile robot 10 determines the path prompt information 54 based on the sensing data, and determines a projection area that has no overlapping area with the first position of the pedestrian 52 or the first position of the pedestrian 53.
  • the path prompt information 54 is projected and displayed on the projection area.
  • the width of the projection area is greater than or equal to the width of the mobile robot 10.
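The obstacle-avoiding projection-area selection described above can be sketched in a few lines. This is a minimal 2D sketch under stated assumptions: obstacles and the projection area are modeled as axis-aligned rectangles on the ground plane, a simple sideways-shift search is used, and all function and parameter names are illustrative, not the patent's actual implementation:

```python
# Sketch: pick a projection rectangle ahead of the robot that does not
# overlap any obstacle rectangle. Rectangles are (x, y, width, height) in
# meters on the ground plane; the shift-search strategy is an assumption.

def overlaps(a, b):
    """Axis-aligned rectangle overlap test."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def choose_projection_area(robot_pos, robot_width, length, obstacles,
                           step=0.5, max_shift=3.0):
    """Return a projection area at least as wide as the robot (per the
    embodiment) with no overlapping area with any obstacle, shifting
    sideways in `step` increments when the area directly ahead is blocked."""
    x, y = robot_pos
    shift = 0.0
    while shift <= max_shift:
        for s in (shift, -shift):
            area = (x + s - robot_width / 2, y, robot_width, length)
            if not any(overlaps(area, o) for o in obstacles):
                return area
        shift += step
    return None  # no free projection area within max_shift

# Pedestrian standing ahead-right of the robot:
area = choose_projection_area((0.0, 0.0), 0.8, 3.0, [(0.6, 1.0, 0.5, 0.5)])
```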
  • in this embodiment, the mobile robot acquires sensing data indicating its surrounding environment during traveling, determines its planned path according to the sensing data, and projects path prompt information including the planned path on the target projection surface. The mobile robot can thus project the planned path it is about to follow on the target projection surface while traveling, so that nearby pedestrians can see that planned path directly on the target projection surface; this avoids the many limitations of the speech-based form in the related art and helps the mobile robot prompt pedestrians quickly and accurately.
  • this embodiment further displays the path prompt information on the target projection surface in a first projection form, the first projection form including at least one of a text projection form, an image projection form, an animation projection form, and a video projection form, which diversifies the projection forms of the path prompt information and improves its information display effect.
  • in addition to the path prompt information, the prompt information may include auxiliary prompt information, the auxiliary prompt information including the estimated arrival time and/or the moving speed of the mobile robot 10.
  • FIG. 7 shows a flowchart of an interaction method of the mobile robot 10 provided by another embodiment of the present application. This embodiment is described by using the method for the mobile robot 10 shown in FIG. 1 as an example.
  • the method includes:
  • Step 601: acquire the sensing data, the sensing data being used to indicate the surrounding environment of the mobile robot 10 during traveling.
  • the mobile robot 10 acquires the ith frame image frame captured by the camera, and determines the sensing data corresponding to the ith frame image frame according to the ith frame image frame.
  • the camera shoots at a predetermined shooting rate, such as: 24 fps (ie, 24 frames per second), and i is a natural number.
  • the sensing data includes at least one of: a first position of a moving obstacle, a planned path of the moving obstacle, gesture information of the moving obstacle, a second position of a stationary obstacle, a type of the stationary obstacle, a current position of the mobile robot 10, and a plane color of the target projection surface.
  • the first position of the moving obstacle, the second position of the stationary obstacle, the type of the stationary obstacle, and the plane color of the target projection surface are all obtained using a first acquisition strategy; the planned path of the moving obstacle is obtained using a second acquisition strategy; and the gesture information of the moving obstacle is obtained using a third acquisition strategy.
  • the first acquisition strategy includes: the mobile robot 10 calculates image features from the i-th image frame using a machine learning algorithm, and determines the sensing data according to the calculated image features.
  • when the machine learning algorithm is a traditional machine learning algorithm, the mobile robot 10 calculates the image features of the i-th image frame using the traditional machine learning algorithm.
  • the traditional machine learning algorithm is a target detection algorithm based on the Histogram of Oriented Gradient (HOG) feature and the Support Vector Machine (SVM) model.
  • when the machine learning algorithm is a neural network algorithm, the mobile robot 10 inputs the i-th image frame into a deep network, and the deep network extracts the image features of the i-th image frame.
  • the deep network is a neural network that performs feature extraction on the input image frame to obtain the image features of that frame.
  • the deep network includes at least one of a Convolutional Neural Network (CNN) model, a Deep Neural Network (DNN) model, a Recurrent Neural Network (RNN) model, an embedding model, a Gradient Boosting Decision Tree (GBDT) model, and a Logistic Regression (LR) model.
  • for example, the deep network is a Region-based Convolutional Neural Network (RCNN).
  • the mobile robot 10 determining the sensing data according to the calculated image features includes: estimating, from the calculated image features and the pre-calibrated camera parameters, at least one of the first position of the moving obstacle, the second position of the stationary obstacle, the type of the stationary obstacle, and the plane color of the target projection surface.
  • for example, the mobile robot 10 extracts image features of the i-th image frame through the deep network, including the pedestrian's foot position and/or body size, and estimates the distance between the pedestrian and the mobile robot 10 from the extracted features and the pre-calibrated camera parameters, thereby obtaining the first position of the pedestrian.
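The distance estimate from the pedestrian's foot position typically follows from ground-plane geometry with a pinhole camera model: the farther the foot pixel lies below the image's principal point, the closer the pedestrian. A minimal sketch; the flat-ground and level-camera assumptions and all parameter values are illustrative, not the patent's calibration procedure:

```python
def distance_from_foot_pixel(foot_v, cy, fy, cam_height):
    """Ground-plane pinhole model: a foot imaged foot_v pixel rows below
    the principal point row cy lies at ground distance
    fy * cam_height / (foot_v - cy), assuming flat ground and a camera
    whose optical axis is parallel to the ground."""
    dv = foot_v - cy
    if dv <= 0:
        raise ValueError("foot pixel must lie below the principal point")
    return fy * cam_height / dv

# Example: focal length 800 px, principal point at row 240, camera 1.2 m
# above the ground; a foot at image row 480 is estimated 4 m away.
d = distance_from_foot_pixel(foot_v=480, cy=240, fy=800, cam_height=1.2)
```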
  • the second acquisition strategy includes: the mobile robot 10 obtains the historical positions of the moving obstacle in at least two collected image frames (for example, the (i-m)-th to i-th image frames), and determines the planned path of the moving obstacle from at least two of these historical positions, where m is a positive integer less than i.
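The second acquisition strategy amounts to extrapolating the obstacle's recent track. A constant-velocity sketch over the last two historical positions (the sampling interval, the number of predicted steps, and the names are illustrative assumptions; the patent does not fix the prediction rule):

```python
def predict_obstacle_path(history, steps, dt=1.0):
    """Given historical (x, y) positions sampled every dt seconds, assume
    constant velocity over the last two samples and extrapolate the next
    `steps` positions as the moving obstacle's planned path."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, steps + 1)]

# A pedestrian seen at (0, 0) then (1, 0) is moving 1 m/s along x:
path = predict_obstacle_path([(0.0, 0.0), (1.0, 0.0)], steps=3)
```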
  • the third acquisition strategy includes: the mobile robot 10 detects a gesture action of the moving obstacle from the collected i-th image frame using a machine learning algorithm, and when the detected gesture action matches a stored gesture action, determines the gesture information corresponding to the detected gesture action.
  • the mobile robot 10 stores in advance the correspondence between gesture actions and gesture information, as shown in Table 2.
  • when the gesture action is the first preset action, that is, all five fingers extended without bending and the palm facing the mobile robot 10, the corresponding gesture information instructs the mobile robot 10 to stop; when the gesture action is the second preset action, that is, the thumb pointing upward and the other four fingers fully bent toward the palm, the corresponding gesture information instructs the mobile robot 10 to give way; when the gesture action is the third preset action, the corresponding gesture information instructs the mobile robot 10 to move in a specified direction.
  • Table 2:
    Gesture action           Gesture information
    First preset action      Stop
    Second preset action     Give way
    Third preset action      Move in the specified direction
  • for example, the mobile robot 10 detects the pedestrian's gesture action from the acquired i-th image frame using a machine learning algorithm, compares the detected gesture action with the stored gesture actions, and, when the detected gesture action matches the first preset action, determines that the gesture information corresponding to the detected gesture action instructs the mobile robot 10 to stop.
  • Step 602: determine the path prompt information and the auxiliary prompt information according to the sensing data, the auxiliary prompt information including the estimated arrival time and/or the moving speed of the mobile robot 10.
  • the mobile robot 10 determines its planned path by a preset rule according to at least one of the planned path of the moving obstacle, the second position of the stationary obstacle, and the gesture information of the moving obstacle, and generates the path prompt information according to the planned path of the mobile robot 10.
  • for the process of determining the path prompt information, refer to the related details in the foregoing embodiments; they are not repeated here.
  • when the gesture information of the moving obstacles includes gesture information corresponding to each of at least two moving obstacles, the mobile robot 10 compares the distance between each of the moving obstacles and itself, determines the gesture information of the nearest moving obstacle as the target gesture information, and generates the path prompt information according to the target gesture information.
  • for example, the mobile robot 10 detects, from the collected i-th image frame using a machine learning algorithm, the gesture actions of two pedestrians (pedestrian A and pedestrian B): the gesture action of pedestrian A is the third preset action, the gesture action of pedestrian B is the first preset action, and the distance between pedestrian A and the mobile robot 10 is less than the distance between pedestrian B and the mobile robot 10. The mobile robot 10 therefore determines the gesture information corresponding to pedestrian A's third preset action as the target gesture information, and determines its path prompt information according to the specified direction indicated by the target gesture information.
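The Table 2 correspondence together with the nearest-pedestrian priority rule above can be sketched as follows (the gesture encodings, distance values, and names are illustrative assumptions):

```python
# Table 2 correspondence, plus the rule that among several detected
# gestures, the gesture of the moving obstacle nearest the robot wins.
GESTURE_INFO = {
    "first_preset": "stop",
    "second_preset": "give way",
    "third_preset": "move in the specified direction",
}

def target_gesture_info(detections):
    """detections: list of (distance_to_robot_m, gesture_action) pairs.
    Returns the gesture information of the nearest detected gesture."""
    _, action = min(detections, key=lambda d: d[0])
    return GESTURE_INFO[action]

# Pedestrian A (third preset, 2 m away) and pedestrian B (first preset, 5 m):
info = target_gesture_info([(2.0, "third_preset"), (5.0, "first_preset")])
```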
  • the mobile robot 10 determines the estimated arrival time and/or the moving speed of the mobile robot 10 as auxiliary prompt information.
  • the moving speed of the mobile robot 10 may be preset or dynamically determined.
  • the moving speed can be either constant or variable.
  • this embodiment is described taking as an example the case where the moving speed of the mobile robot 10 is a dynamically determined constant speed.
  • when the auxiliary prompt information includes the estimated arrival time of the mobile robot 10, the manner in which the mobile robot 10 determines the auxiliary prompt information based on the sensing data includes: determining the estimated arrival time of the mobile robot 10 according to its planned path and its moving speed.
  • for example, the planned path of the mobile robot 10 is a 3-meter path starting from its current position and its moving speed is 1 m/s; the time for the mobile robot 10 to reach the target position when moving along the planned path is therefore determined to be 3 seconds, the target position being the end point of the planned path.
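The estimated arrival time above is simply the planned path length divided by the moving speed; a one-line sketch matching the 3 m / 1 m/s example (names are illustrative):

```python
def estimated_arrival_time(path_length_m, speed_m_per_s):
    """Time for the robot to reach the end point of its planned path."""
    return path_length_m / speed_m_per_s

eta = estimated_arrival_time(3.0, 1.0)  # 3.0 seconds
```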
  • when the auxiliary prompt information includes the moving speed of the mobile robot 10, the manner in which the mobile robot 10 determines the auxiliary prompt information according to the sensing data includes: the mobile robot 10 determines its moving speed by a preset rule according to the planned path of the moving obstacle and its own planned path.
  • the mobile robot 10 determines the moving speed of the mobile robot 10 to be 1 m/sec by a preset rule according to the planned path of the moving obstacle and the planned path of the mobile robot 10.
  • Step 603: project the path prompt information and the auxiliary prompt information on the target projection surface; and/or project the path prompt information on the target projection surface in a second projection form, the second projection form being used to indicate the auxiliary prompt information.
  • the mobile robot 10 projects the path prompt information and the auxiliary prompt information on the target projection surface.
  • the mobile robot 10 projects and displays the path prompt information on the target projection surface while projecting the auxiliary prompt information on the target projection surface.
  • for the projection form of the auxiliary prompt information, refer to the three possible projection forms of the path prompt information in the foregoing embodiments; details are not repeated here.
  • as an example, the mobile robot 10 moves eastward and the pedestrian 72 moves westward. The path prompt information 74 is projected on the target projection surface, and the estimated arrival time "3s" and/or the moving speed "1 m/s" of the mobile robot 10 is projected on the ground.
  • the mobile robot 10 projects the path prompt information in a second projection form on the target projection surface, and the second projection form is used to indicate the auxiliary prompt information.
  • when the auxiliary prompt information includes the estimated arrival time of the mobile robot 10, projecting the path prompt information on the target projection surface in the second projection form includes: applying a linear gradient of the projection color to the path prompt information according to the estimated arrival time of the mobile robot 10, and projecting the linearly gradated path prompt information on the target projection surface.
  • the path prompt information after the linear gradation includes n color saturations of the same projection color from high to low, and the n color saturations are positively correlated with the estimated arrival time of the mobile robot, and n is a positive integer.
  • color saturation (saturation) is used to indicate the purity of a color.
  • color saturation is positively correlated with the estimated arrival time of the mobile robot.
  • when the color saturation is "1", it indicates the first estimated arrival time of the mobile robot 10; when the color saturation is "0", it indicates the second estimated arrival time of the mobile robot 10. Because the color saturation "1" is higher than the color saturation "0", the first estimated arrival time is earlier than the second estimated arrival time.
  • the mobile robot 10 moves in the east direction, and the pedestrian 82 moves in the west direction.
  • a linear gradient of the projection color "brown" is applied to the path prompt information to obtain the linearly gradated path prompt information 84, which is projected on the target projection surface.
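The linear gradient above keeps one projection color and varies only its saturation along the path, higher saturation marking the segments the robot reaches sooner. A sketch using the standard-library HSV conversion; the number of segments and the hue chosen for "brown" are illustrative assumptions:

```python
import colorsys

def gradient_saturations(n):
    """n saturation levels of the same projection color, from high
    (segment reached soonest) to low (segment reached latest)."""
    return [1.0 - k / n for k in range(n)]

def gradient_rgb_stops(hue, n):
    """RGB stops of a single-hue gradient at full value, varying saturation."""
    return [colorsys.hsv_to_rgb(hue, s, 1.0) for s in gradient_saturations(n)]

sats = gradient_saturations(4)           # [1.0, 0.75, 0.5, 0.25]
stops = gradient_rgb_stops(30 / 360, 4)  # brown-ish hue, 4 gradient stops
```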
  • when the auxiliary prompt information includes the moving speed of the mobile robot 10, the mobile robot 10 projecting the path prompt information on the target projection surface in the second projection form includes: determining the moving length of the mobile robot 10 within a predetermined time period according to its moving speed, and projecting the path prompt information on the target projection surface with that moving length as the projection length of the path prompt information.
  • the projection length is used to indicate the moving speed of the mobile robot 10. The longer the projection length, the faster the moving speed of the mobile robot 10.
  • the mobile robot 10 moves in the east direction, and the pedestrian 92 moves in the west direction.
  • the mobile robot 10 determines its planned path based on the acquired sensing data, determines its moving length "5 m" within the predetermined time period "5s" based on the moving speed "1 m/s", and projects the 5-meter-long path prompt information 94 on the ground.
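The projection length above is just the distance the robot covers in the predetermined period; a sketch matching the 1 m/s over 5 s example (names and the default period are illustrative):

```python
def projection_length(speed_m_per_s, period_s=5.0):
    """Moving length within the predetermined period, used as the length
    of the projected path; a longer projection indicates a faster robot."""
    return speed_m_per_s * period_s

length_m = projection_length(1.0)  # 5.0 meters
```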
  • in this embodiment, the projection area that has no overlapping area with the position of the obstacle is determined according to the position of the obstacle, and the path prompt information is projected on that projection area; this avoids projecting the path prompt information onto the position of the obstacle, so the information display effect is better.
  • this embodiment further displays the path prompt information on the target projection surface in the second projection form, the second projection form being used to indicate the auxiliary prompt information, so that the mobile robot can provide pedestrians with the auxiliary prompt information while providing the path prompt information, enriching the content of the prompts.
  • optionally, the interaction method of the mobile robot 10 further includes: the mobile robot 10 determines whether the sensing data meets a preset condition; if the sensing data meets the preset condition, it determines to start the projection and performs step 203 or 603.
  • the step is included before step 603 as an example. Please refer to FIG. 11:
  • Step 1001: the mobile robot 10 determines whether the sensing data satisfies the preset condition.
  • before starting the projection, the mobile robot 10 determines whether the sensing data satisfies the preset condition. If the sensing data satisfies the preset condition, it determines to start the projection and performs step 605; if the sensing data does not satisfy the preset condition, it determines that no projection is required and ends the process.
  • the preset condition includes: a distance between the first position of the moving obstacle and the current position of the mobile robot 10 is less than a predetermined threshold, and/or the type of the stationary obstacle is an obstacle type satisfying the visual blind spot condition.
  • the predetermined threshold may be any value between 0 and N meters, and N is a positive integer. The value of the predetermined threshold is not limited in this embodiment.
  • when the mobile robot 10 passes through certain specific intersections, a large blind zone may exist between the mobile robot 10 and a moving obstacle due to occlusion by a stationary obstacle. To avoid collisions, the mobile robot 10 determines to start the projection if the type of the stationary obstacle in the acquired sensing data is an obstacle type satisfying the visual blind spot condition.
  • the type of obstacle that satisfies the visual blind spot condition includes at least one of an obstacle at the entrance and exit, an obstacle at the intersection, an obstacle at the fork, or an obstacle at the corner.
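The projection-trigger decision above (a moving obstacle closer than the predetermined threshold, and/or a stationary obstacle of a blind-spot type) can be sketched as follows; the threshold value and the type strings are illustrative assumptions, since the text leaves the threshold open:

```python
# Obstacle types satisfying the visual blind spot condition, per the text.
BLIND_SPOT_TYPES = {"entrance_exit", "intersection", "fork", "corner"}

def should_start_projection(moving_obstacle_distance_m, stationary_type,
                            threshold_m=5.0):
    """Start projecting when a moving obstacle is closer than threshold_m,
    and/or the stationary obstacle type satisfies the blind spot condition."""
    near = (moving_obstacle_distance_m is not None
            and moving_obstacle_distance_m < threshold_m)
    blind = stationary_type in BLIND_SPOT_TYPES
    return near or blind

a = should_start_projection(3.0, None)       # near pedestrian -> project
b = should_start_projection(10.0, "corner")  # blind corner -> project
c = should_start_projection(10.0, "wall")    # neither -> no projection
```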
  • when the type of the stationary obstacle acquired by the mobile robot 10 is an obstacle type satisfying the visual blind spot condition, the mobile robot 10 determines to start the projection, and projects the path prompt information and the auxiliary prompt information on the ground in the form of a specific warning image.
  • for example, the specific warning image is a zebra-crossing image; this embodiment does not limit the form of the specific warning image.
  • optionally, the mobile robot 10 issues a prompt sound signal while projecting the path prompt information and the auxiliary prompt information on the ground.
  • optionally, a prompt identifier is displayed on the wall surface, the prompt identifier being used to prompt that the mobile robot 10 will pass within a preset time period; for example, the prompt identifier includes an inverted triangle containing the text "Attention".
  • as an example, the mobile robot 10 moves northward and the pedestrian 112 moves westward. The mobile robot 10 determines its planned path based on the acquired sensing data. Because the acquired sensing data includes the wall surface 114 at the fork, the sensing data is determined to meet the preset condition, the projection is determined to be started, and the path prompt information and the auxiliary prompt information are projected on the ground in the form of the specific warning image 116.
  • in this embodiment, the mobile robot 10 first determines whether the sensing data satisfies the preset condition before starting the projection, and starts the projection only if it does; this avoids the energy loss that continuous real-time projection display would cause, greatly reducing the energy consumption of the mobile robot 10.
  • this embodiment also determines to start the projection when the type of the stationary obstacle in the sensing data acquired by the mobile robot 10 is an obstacle type satisfying the visual blind spot condition. Thus, when the mobile robot 10 passes through certain specific intersections, that is, when a large blind zone exists between the mobile robot 10 and a pedestrian due to occlusion by a stationary obstacle, the mobile robot 10 can still display information on the target projection surface, promptly prompting pedestrians who cannot currently see the mobile robot and greatly reducing the risk of collision.
  • FIG. 13 is a schematic structural diagram of an interaction device of a mobile robot according to an embodiment of the present application.
  • the interaction device may be implemented as a whole or a part of the mobile robot through a dedicated hardware circuit or a combination of software and hardware.
  • the interaction device includes an acquisition module 1210, a first determination module 1220, and a projection module 1230.
  • the obtaining module 1210 is configured to implement the foregoing step 201 or 601.
  • the first determining module 1220 is configured to implement the foregoing step 202.
  • the projection module 1230 is configured to implement the above step 203.
  • the projection module 1230 is further configured to project the path prompt information in a first projection form on the target projection surface;
  • the first projection form includes at least one of a text projection form, an image projection form, an animation projection form, and a video projection mode.
  • the projection module 1230 is further configured to determine the projection color of the path prompt information according to the plane color of the target projection surface, the projection color being different from the plane color, and to project the path prompt information on the target projection surface in that projection color.
  • the projection module 1230 being further configured to display the path prompt information on the target projection surface in an animation projection form includes: displaying the path prompt information on the target projection surface in the form of an animated guide arrow.
  • the apparatus further includes: a second determining module 1240.
  • the second determining module 1240 is configured to determine a projection area on the target projection surface according to the position of the obstacle, and there is no overlapping area between the projection area and the position of the obstacle;
  • the projection module 1230 is further configured to project and display the path prompt information on the projection area.
  • the apparatus further includes: a third determining module 1250.
  • the third determining module 1250 is configured to determine auxiliary prompt information according to the sensing data, where the auxiliary prompt information includes an estimated arrival time and/or a moving speed of the mobile robot;
  • the projection module 1230 is further configured to implement the above step 603.
  • the auxiliary prompt information includes an estimated arrival time of the mobile robot; and the projection module 1230 is further configured to: according to an estimated arrival time of the mobile robot, The path prompt information is subjected to a linear gradient of the projected color; the path prompt information after the linear gradient is projected and displayed on the target projection surface;
  • the path prompt information after the linear gradation includes n color saturations of the same projection color from high to low, and the n color saturations are positively correlated with the estimated arrival time of the mobile robot.
  • the auxiliary prompt information includes a moving speed of the mobile robot; and the projection module 1230 is further configured to determine the predetermined according to the moving speed of the mobile robot. The moving length of the mobile robot during the time period;
  • the moving length is the projection length of the path prompt information, and the path prompt information is projected and displayed on the target projection surface, and the projection length is used to indicate the moving speed of the mobile robot.
  • the sensing data includes a first position of the moving obstacle and/or a type of static obstacle; the apparatus further includes: a determining module 1260.
  • the determining module 1260 is configured to perform the step of projecting and displaying the path prompt information when the sensing data meets the preset condition;
  • the preset condition includes: a distance between the first position of the moving obstacle and the current position of the mobile robot is less than a predetermined threshold, and/or the type of the stationary obstacle is an obstacle type satisfying the visual blind spot condition.
  • the sensing data includes at least one of: a first position of the moving obstacle, a planned path of the moving obstacle, gesture information of the moving obstacle, a second position of the stationary obstacle, a type of the stationary obstacle, and a plane color of the target projection surface.
  • the obtaining module 1210 is further configured to implement any other implicit or disclosed functions related to the obtaining step in the foregoing method embodiments; the first determining module 1220 is further configured to implement any other implicit or disclosed functions related to the determining step in the foregoing method embodiments.
  • a mobile robot comprising a processor, a memory and a projection component, wherein the memory stores at least one instruction, at least one program, a code set or a set of instructions;
  • a processor configured to acquire sensing data, where the sensing data is used to indicate a surrounding environment of the mobile robot during traveling;
  • the processor is further configured to determine path prompt information according to the sensing data, where the path prompt information includes a planned path of the mobile robot;
  • the projection component is configured to project the path prompt information determined by the processor on the target projection surface.
  • the projection component is further configured to display the path prompt information in a first projection form on the target projection surface;
  • the first projection form includes at least one of a text projection form, an image projection form, an animation projection form, and a video projection mode.
  • the projection component is further configured to determine a projection color of the path prompt information according to a plane color of the target projection surface, the projection color is different from the plane color; and the path prompt information is projected and displayed on the target projection surface in the form of a projection color.
  • the projection component being further configured to display the path prompt information on the target projection surface in an animation projection form includes: displaying the path prompt information on the target projection surface in the form of an animated guide arrow.
  • the processor is further configured to determine a projection area on the target projection surface according to the position of the obstacle, and there is no overlapping area between the projection area and the position of the obstacle;
  • the projection component is further configured to project the path prompt information on the projection area.
  • the processor is further configured to determine, according to the sensing data, the auxiliary prompt information, where the auxiliary prompt information includes an estimated arrival time and/or a moving speed of the mobile robot;
  • the projection component is further configured to implement the above step 603.
  • the auxiliary prompt information includes an estimated arrival time of the mobile robot; the projection component is further configured to: perform a linear gradient of the projected color of the path prompt information according to the estimated arrival time of the mobile robot; and prompt the information of the path after the linear gradient, The projection is displayed on the target projection surface;
  • the path prompt information after the linear gradation includes n color saturations of the same projection color from high to low, and the n color saturations are positively correlated with the estimated arrival time of the mobile robot.
  • the auxiliary prompt information includes a moving speed of the mobile robot; the projection component is further configured to determine a moving length of the mobile robot in the predetermined time period according to the moving speed of the mobile robot; and the moving length is a projection length of the path prompt information, The path prompt information is projected on the target projection surface, and the projection length is used to indicate the moving speed of the mobile robot.
  • the sensing data includes the first position of the moving obstacle and/or the type of the stationary obstacle; the projection component is further configured to perform the step of projecting and displaying the path prompt information when the sensing data meets the preset condition;
  • the preset condition includes: a distance between the first position of the moving obstacle and the current position of the mobile robot is less than a predetermined threshold, and/or the type of the stationary obstacle is an obstacle type satisfying the visual blind spot condition.
  • the sensing data includes at least one of: a first position of the moving obstacle, a planned path of the moving obstacle, gesture information of the moving obstacle, a second position of the stationary obstacle, a type of the stationary obstacle, and a plane color of the target projection surface.
  • the processor is further configured to implement any other implicit or disclosed functions related to the processing steps in the foregoing method embodiments; the projection component is further configured to implement any other implicit or disclosed functions related to the projection steps in the foregoing method embodiments.
  • the steps performed by the mobile robot in the foregoing embodiments may be completed by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.


Abstract

An interaction method and apparatus for a mobile robot, a mobile robot, and a storage medium. The method includes: acquiring, by the mobile robot, sensing data indicating the surrounding environment of the mobile robot during travel (201); determining path prompt information according to the sensing data, the path prompt information including a planned path of the mobile robot (202); and projecting the path prompt information onto a target projection surface (203). Because the planned path along which the mobile robot is about to move is output onto the target projection surface by projection, nearby pedestrians can see the mobile robot's planned path directly on that surface. This avoids the many limitations caused by the voice form in the related art and helps the mobile robot prompt pedestrians quickly and accurately.

Description

Interaction method and apparatus for a mobile robot, mobile robot, and storage medium
The embodiments of this application claim priority to Chinese Patent Application No. 201711044804.1, entitled "Interaction Method and Apparatus for a Mobile Robot" and filed on October 31, 2017, which is incorporated herein by reference in its entirety.
Technical Field
The embodiments of this application relate to the field of artificial intelligence technologies, and in particular, to an interaction method and apparatus for a mobile robot, a mobile robot, and a storage medium.
Background
A mobile robot is a machine apparatus that performs work automatically. A mobile robot can issue information prompts to users through components such as signal lights and loudspeakers.
In the related art, a mobile robot prompts information mainly in voice form. For example, the mobile robot receives a person's instruction through a microphone, determines prompt information corresponding to the instruction, and emits a prompt sound through a loudspeaker, the prompt sound describing the content of the prompt information to the person.
In the foregoing method, the prompt information is conveyed by a prompt sound. Because the prompt sound is affected by factors such as the distance between the person and the mobile robot, ambient sound, and regional differences in language, it is difficult for the mobile robot to describe the prompt information to a person quickly and accurately. No more effective interaction method for mobile robots has been provided so far.
Summary
The embodiments of this application provide an interaction method and apparatus for a mobile robot, a mobile robot, and a storage medium, which can be used to solve the problem that no more effective interaction method for mobile robots has been provided in the related art. The technical solutions are as follows:
In one aspect, an interaction method for a mobile robot is provided, used in the mobile robot, the method including:
acquiring sensing data, the sensing data indicating the surrounding environment of the mobile robot during travel;
determining path prompt information according to the sensing data, the path prompt information including a planned path of the mobile robot;
projecting the path prompt information onto a target projection surface for display.
In another aspect, an interaction apparatus for a mobile robot is provided, disposed on the mobile robot, the apparatus including:
an acquiring module, configured to acquire sensing data, the sensing data indicating the surrounding environment of the mobile robot during travel;
a first determining module, configured to determine path prompt information according to the sensing data, the path prompt information including a planned path of the mobile robot;
a projection module, configured to project the path prompt information onto a target projection surface for display.
In another aspect, a mobile robot is provided, the mobile robot including a processor, a memory, and a projection component, the memory storing at least one instruction, at least one program, a code set, or an instruction set;
the processor being configured to acquire sensing data, the sensing data indicating the surrounding environment of the mobile robot during travel;
the processor being further configured to determine path prompt information according to the sensing data, the path prompt information including a planned path of the mobile robot;
the projection component being configured to project the path prompt information determined by the processor onto a target projection surface for display.
In another aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, and the code set or instruction set being loaded and executed by a processor to implement the interaction method for a mobile robot provided in the first aspect.
The beneficial effects of the technical solutions provided in the embodiments of this application include at least the following:
The mobile robot acquires sensing data indicating its surrounding environment during travel, determines the planned path of the mobile robot according to the sensing data, and projects path prompt information including the planned path onto a target projection surface for display. The mobile robot can thus project, while traveling, the planned path along which it is about to move onto the target projection surface, so that nearby pedestrians can see that planned path directly on the surface. This avoids the many limitations caused by the voice form in the related art and helps the mobile robot prompt pedestrians quickly and accurately.
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的移动机器人10的结构方框图;
图2是本申请一个实施例提供的移动机器人10的交互方法的流程图;
图3是本申请一个实施例提供的三种不同类型的移动机器人10的交互方法的示意图;
图4是本申请一个实施例提供的交互方法涉及的投影形式的示意图;
图5是本申请一个实施例提供的交互方法涉及的投影形式的示意图;
图6是本申请一个实施例提供的交互方法涉及的投影形式的示意图;
图7是本申请另一个实施例提供的移动机器人的交互方法的流程图;
图8是本申请一个实施例提供的交互方法涉及的投影形式的示意图;
图9是本申请一个实施例提供的交互方法涉及的投影形式的示意图;
图10是本申请一个实施例提供的交互方法涉及的投影形式的示意图;
图11是本申请另一个实施例提供的移动机器人的交互方法的流程图;
图12是本申请一个实施例提供的交互方法涉及的投影形式的示意图;
图13是本申请一个实施例提供的移动机器人的交互装置的结构示意图;
图14是本申请另一个实施例提供的移动机器人的交互装置的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
为了方便理解,下面对本申请实施例中出现的名词进行介绍。
移动机器人:是具有移动功能的机器人系统。
可选的,根据移动方式的不同,移动机器人包括:轮式移动机器人、步行移动机器人(单腿式、双腿式和多腿式)、履带式移动机器人、爬行机器人、蠕动式机器人和游动式机器人中的至少一种类型。其中,轮式移动机器人包括自动驾驶汽车或无人驾驶汽车。
可选的,根据其他分类方式,移动机器人还可以分为如下类型的机器人。 比如,根据工作环境的不同,移动机器人包括室内移动机器人和室外移动机器人;又比如,根据控制体系结构的不同,移动机器人包括功能式(水平式)结构机器人、行为式(垂直式)结构机器人和混合式机器人;又比如,根据功能和用途的不同,移动机器人包括:医疗机器人、军用机器人、助残机器人、清洁机器人等等。本申请实施例对移动机器人的类型不加以限定。
感知数据:包括移动机器人在行进过程中与自身有关的数据和与其周围的环境物体有关的数据。
可选的,移动机器人在行进过程中与自身有关的数据包括移动机器人的行驶位置、加速度、角速度、倾斜角度、行驶里程中的至少一种;移动机器人在行进过程中与其周围的环境物体有关的数据包括:移动障碍物的第一位置、移动障碍物的规划路径、移动障碍物的手势信息、静止障碍物的第二位置、静止障碍物的类型和目标投影面的平面颜色中的至少一种。
可选的,感知数据是移动机器人在行进过程中通过感知组件采集得到的数据。
感知数据可以是通过摄像头采集到的移动机器人的行驶位置,也可以是通过三轴加速度计采集到的移动机器人的加速度和/或倾斜角度,也可以是通过陀螺仪采集到的移动机器人的角速度和/或倾斜角度,也可以是通过里程计采集到的移动机器人的行驶里程,也可以是通过激光测距传感器(Laser Distance Sensor,LDS)采集到的移动机器人与环境物体之间的距离,也可以是通过悬崖传感器采集到的移动机器人与环境物体之间的距离。
可选的,与移动机器人自身有关的数据包括:移动机器人的当前位置、移动速度和移动里程中的至少一种;与其周围的环境物体有关的数据包括:障碍物的位置、障碍物的类型、路面质量中的至少一种,用于指示移动机器人在行进过程中的周围环境。
路面质量用于指示移动机器人所处道路的路面基础情况。路面基础情况包括是否存在凹坑、是否存在积水、是否存在深井中的至少一种。
障碍物包括:静止障碍物和/或移动障碍物,静止障碍物可以是一个或多个,移动障碍物可以是一个或多个。本实施例对静止障碍物和移动障碍物的数量和类型均不加以限定。
其中,静止障碍物可以是家具、家电、办公设备、砖墙墙体、木板墙体、地面上的电线、房间之间的过门条等,移动障碍物包括人、移动车辆、其它移动机器人等等。下述实施例中,仅以移动障碍物是人为例进行举例说明。
提示信息:是根据感知数据确定出的需要投影显示的信息。在本申请实施例中,提示信息包括路径提示信息,路径提示信息包括移动机器人的规划路径,移动机器人的规划路径为规划的该移动机器人的移动路径。
可选的,提示信息还包括辅助提示信息,辅助提示信息包括移动机器人的预计到达时间和/或移动速度。移动机器人的预计到达时间为该移动机器人在规划路径上移动时到达目标位置的预计时间。移动速度为该移动机器人在规划路径上移动时的速度。
请参考图1,其示出了本申请一个示例性实施例提供的移动机器人的结构方框图。该移动机器人10包括感知单元110、控制单元120、投影单元130和驱动单元140。
感知单元110用于通过感知组件采集移动机器人10在行进区域中的感知数据。感知数据包括:移动机器人10在行进过程中与自身有关的数据和与其周围的环境物体有关的数据。
可选的,感知组件包括摄像头、三轴加速度计、陀螺仪、里程计、LDS、超声波传感器、红外人体感应传感器、悬崖传感器和雷达中的至少一种。其中,摄像头还可以是单目摄像头和/或双目摄像头。本申请对感知组件的数量和类型不加以限定。
示意性的,摄像头用于测量移动机器人10的行驶位置,三轴加速度计用于获取移动机器人10的加速度和/或倾斜角度,陀螺仪用于获取移动机器人10的角速度和/或倾斜角度,里程计用于获取移动机器人10的行驶里程;LDS通常设置在移动机器人10的顶部,用于利用激光测量移动机器人10与环境物体之间的距离;超声波传感器通常设置在移动机器人10的侧边,用于利用超声波测量移动机器人10与环境物体之间的距离;悬崖传感器通常设置在移动机器人10的底部,用于利用红外线测量移动机器人10与环境物体之间的距离。
感知单元110与控制单元120电性连接,其将采集到的感知数据发送给控制单元120,对应的,控制单元120接收感知单元110采集的感知数据。
控制单元120包括处理单元122和存储单元124。控制单元120通过处理单元122控制移动机器人10的总体操作。处理单元122用于根据感知数据确定路径提示信息,该路径提示信息包括移动机器人10的规划路径。处理单元122还用于控制移动机器人10将该路径提示信息投影显示在目标投影面上。可选的,目标投影面是位于移动前方区域的地面或墙面。
可选的,处理单元122还用于在确定出移动机器人10的规划路径后,控制移动机器人10以预定的行进模式在规划路径上行进。
控制单元120通过存储单元124存储至少一个指令。这些指令包括用于根据感知数据确定路径提示信息的指令、用于将该路径提示信息投影显示在目标投影面上的指令、用于执行以预定的行进模式在规划路径上行进的指令等等。存储单元124还用于存储移动机器人10在行进过程中的感知数据。
可选的,处理单元122包括处理器,存储单元124包括存储器。存储器中存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现下面各个方法实施例所提供的移动机器人10的交互方法。
控制单元120与投影单元130电性连接,投影单元130用于根据控制单元120的控制信号,将该路径提示信息投影显示在目标投影面上。
控制单元120与驱动单元140电性连接。可选的,驱动单元140包括至少一个驱动轮和与每个驱动轮连接的电机。驱动单元140用于根据控制单元120的控制信号控制至少一个驱动轮的驱动方向和转动速度。
在示例性实施例中,控制单元120可以被一个或多个应用专用集成电路、数字信号处理器、数字信号处理设备、可编程逻辑器件、现场可编程门阵列、控制器、微控制器、微处理器或其他电子元件实现,用于执行本申请实施例中的移动机器人10的交互方法。
在示例性实施例中,还提供了一种计算机可读存储介质,计算机可读存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,至少一条指令、至少一段程序、代码集或指令集由处理器加载并执行以实现上述各个方法实施例中所提供的移动机器人10的交互方法。例如,计算机可读存储介质可以是只读存储器、随机存储器、磁带、软盘和光数据存储设备等。
请参考图2,其示出了本申请一个实施例提供的移动机器人10的交互方法的流程图,本实施例以该方法用于如图1所示的移动机器人10为例进行说明,该方法包括:
步骤201,获取感知数据,感知数据用于指示移动机器人10在行进过程中的周围环境。
可选的,移动机器人10通过上述的感知组件获取感知数据,比如,移动机器人10通过摄像头实时采集图像帧,或者通过摄像头每隔预定时间间隔采集图像帧,根据采集得到的图像帧获取感知数据。
预定时间间隔是移动机器人10默认设置的,或者是用户自定义设置的。本实施例对此不加以限定。
本申请实施例对移动机器人10所采用的感知组件的类型不作限定。下面仅以移动机器人10通过摄像头采集图像帧,根据图像帧获取感知数据为例进行说明。
可选的,感知数据包括:移动障碍物的第一位置、移动障碍物的规划路径、移动障碍物的手势信息、静止障碍物的第二位置、静止障碍物的类型、移动机器人10的当前位置和目标投影面的平面颜色中的至少一种。
其中,移动障碍物的规划路径是预计的该移动障碍物的移动路径,移动障碍物的手势信息是该移动障碍物的手势动作所指示的信息,手势信息包括用于指示移动机器人10停止的信息、用于指示移动机器人10让行的信息、用于指示移动机器人10向指定方向移动的信息中的至少一种。各种感知数据的获取方式可参考下面实施例中的相关描述,在此暂不介绍。
步骤202,根据感知数据确定路径提示信息,路径提示信息包括移动机器人10的规划路径。
移动机器人10的规划路径为用于指示规划的该移动机器人10的移动方向的路径。
移动机器人10的规划路径为规划的该移动机器人10在预设时间段内的移动路径,或者为规划的该移动机器人10移动预设长度对应的移动路径。
比如,预设时间段为5秒,则移动机器人10的规划路径为5秒内移动机器人10的移动路径。
又比如,预设长度为1米,则移动机器人10的规划路径为移动机器人10移动1米对应的移动路径。
可选的,移动机器人10根据移动障碍物的第一位置、移动障碍物的规划路径、静止障碍物的第二位置、移动障碍物的手势信息中的至少一种感知数据,通过第一预设策略确定移动机器人10的规划路径,根据移动机器人10的规划路径生成路径提示信息。其中,第一预设策略可参考下面实施例中的相关描述, 在此暂不介绍。
步骤203,将路径提示信息投影显示在目标投影面上。
其中,目标投影面为用于投影显示路径提示信息的投影面,比如,目标投影面为地面或者墙面。
目标投影面可以是预设的,也可以是根据采集得到的感知数据,通过第二预设策略动态确定的投影面。示意性的,第二预设策略包括若感知数据中静止障碍物的第二位置与移动机器人10的当前位置之间的距离大于或者等于第一预设阈值,则确定目标投影面为地面;若感知数据中静止障碍物的第二位置与移动机器人10的当前位置之间的距离小于第一预设阈值,且该移动机器人10的当前位置的预设范围内存在类型为墙面的静止障碍物,则确定目标投影面为墙面。本实施例对第二预设策略不加以限定。下面仅以目标投影面为地面为例进行说明。
第一预设阈值或者预设范围是移动机器人10默认设置的,或者是用户自定义设置的。示意性的,第一预设阈值为1米,预设范围为以移动机器人10的当前位置为圆心,1米为半径的圆形范围。
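As a hedged sketch, the "second preset strategy" above (project on the ground when stationary obstacles are at least the first preset threshold away, on a wall when a wall-type stationary obstacle lies within the preset range) could look like the following; the data layout, field names, and the 1-meter default are illustrative assumptions, not fixed by this application:

```python
import math

def choose_projection_surface(robot_pos, stationary_obstacles, threshold_m=1.0):
    """Pick the target projection surface from sensed stationary obstacles.

    stationary_obstacles: list of dicts like {"pos": (x, y), "type": "wall"}
    (this layout is an assumption, not from the patent text).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # A wall-type stationary obstacle inside the preset range -> project on the wall.
    for obs in stationary_obstacles:
        if obs["type"] == "wall" and dist(obs["pos"], robot_pos) < threshold_m:
            return "wall"
    # Otherwise (every stationary obstacle at or beyond the threshold) -> the ground.
    return "ground"
```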
在一个示意性的例子中,如图3所示,例举了三种不同类型的移动机器人10的交互方法的示意图。如图3(a)所示,当移动机器人10为步行移动机器人时,将路径提示信息31投影显示在地面上;如图3(b)所示,当移动机器人10为履带式移动机器人时,将路径提示信息32投影显示在地面上;如图3(c)所示,当移动机器人10为自动驾驶汽车时,将路径提示信息33投影显示在地面上。在下面的实施例中,仅以移动机器人为步行移动机器人为例进行说明。
可选的,移动机器人10将路径提示信息投影显示在目标投影面上包括:将路径提示信息以第一投影形式,投影显示在目标投影面上。其中,第一投影形式包括文本投影形式、图像投影形式、动画投影形式和视频投影方式中的至少一种。
下面,依次介绍上述四种可能的投影形式。
第一种可能的投影形式:移动机器人10将路径提示信息以文本投影形式,投影显示在目标投影面上。
可选的,移动机器人10在目标投影面上投影显示文本内容,该文本内容用于描述该移动机器人10的规划路径。
在一个示意性的例子中,如图4所示,移动机器人10沿正东方向移动,行人41沿正西方向移动,当移动机器人10获取感知数据,该感知数据包括行人41的第一位置时,移动机器人10根据感知数据确定移动机器人10的规划路径,移动机器人10根据移动机器人10的规划路径生成文本内容42“直行5米”,将该文本内容42投影显示在地面上,用于描述该移动机器人10的规划路径。
需要说明的是,为了方便介绍移动机器人10进行信息提示时采用的投影形式,下面各个示意性的例子仅以移动机器人10将路径提示信息投影显示在目标投影面上的俯视图为例进行说明。
第二种可能的投影形式:移动机器人10将路径提示信息以图像投影形式,投影显示在目标投影面上。
可选的,移动机器人10将路径提示信息以预设图像的形式进行投影显示,预设图像包括菱形图像、矩形图像、圆形图像或者不规则多边形图像等等,本实施例对此不加以限定。
可选的,移动机器人10根据目标投影面的平面颜色确定路径提示信息的投影颜色,将路径提示信息以该投影颜色的形式投影显示在目标投影面上。其中,投影颜色不同于平面颜色。
为了避免路径提示信息的投影颜色与目标投影面的平面颜色一致或者相似,导致信息显示效果较差的问题,移动机器人10获取感知数据中目标投影面的平面颜色,确定与平面颜色不同的投影颜色,且平面颜色与投影颜色的颜色区分度高于预定区分阈值,将路径提示信息以投影颜色的形式投影显示在目标投影面上。
可选的,平面颜色与投影颜色的颜色区分度为平面颜色与投影颜色之间色相、饱和度和明度中至少一种的差异程度。
可选的,移动机器人10中预先存储有平面颜色和投影颜色的对应关系。示意性的,该对应关系如表一所示。其中,平面颜色“黄色”对应投影颜色“蓝色”,平面颜色“绿色”对应投影颜色“红色”,平面颜色“白色”对应投影颜色“棕色”。
表一
平面颜色 投影颜色
黄色 蓝色
绿色 红色
白色 棕色
比如,基于表一所提供的对应关系,移动机器人10获取到的感知数据中地面的平面颜色为黄色,则移动机器人10从存储的对应关系中查找与平面颜色“黄色”对应的投影颜色“蓝色”,将路径提示信息以蓝色图案的形式投影显示在目标投影面上。
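The lookup described above amounts to a small mapping from the sensed plane color to a contrasting projection color, as in Table 1. A minimal sketch (the fallback color for surface colors not listed in the table is an assumption):

```python
# Correspondence from Table 1: plane (surface) color -> projection color.
SURFACE_TO_PROJECTION_COLOR = {
    "yellow": "blue",
    "green": "red",
    "white": "brown",
}

def pick_projection_color(surface_color, fallback="blue"):
    """Return a projection color different from the sensed surface color."""
    color = SURFACE_TO_PROJECTION_COLOR.get(surface_color, fallback)
    # Guarantee the projected color never matches the surface color.
    return color if color != surface_color else "red"
```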
第三种可能的投影形式:移动机器人10将路径提示信息以动画投影形式,投影显示在目标投影面上。
可选的,移动机器人10将路径提示信息以动画导向箭头的形式,投影显示在目标投影面上。其中,动画导向箭头用于指示该移动机器人10的移动方向。
在一个示意性的例子中,如图5所示,移动机器人10沿正东方向移动,行人43沿正西方向移动,当移动机器人10获取感知数据,该感知数据包括行人43的第一位置时,移动机器人10根据感知数据确定路径提示信息44,将移动机器人10的宽度作为投影边界,将路径提示信息44以动画导向箭头的形式投影显示在地面上,用于向行人43提示该移动机器人10的规划路径。
需要说明的是,移动机器人10除了可以将路径提示信息以动画导向箭头的形式进行投影显示,还可以将路径提示信息以其他的预设动画的形式投影显示在目标投影面上,本实施例对此不加以限定。下面仅以将路径提示信息以动画导向箭头的形式进行投影显示为例进行说明。
第四种可能的投影形式:移动机器人10将路径提示信息以视频投影方式,投影显示在目标投影面上。
由于路径提示信息通常是动态变化的,因此,移动机器人10可以在目标投影面上投影显示动态视频,该动态视频用于提示该移动机器人10的规划路径。
可选的,移动机器人10预先设置该动态视频的视频尺寸和/或分辨率。
需要说明的是,上述四种可能的投影形式中可以任意两种两两结合实施,或者任意三种结合实施,或者全部结合实施,此乃本领域技术人员根据上述各个投影形式所易于思及的,本申请实施例不对这几种投影形式结合实施的方式一一赘述。
在将路径提示信息投影显示在目标投影面上之前,移动机器人10根据障碍物的位置确定目标投影面上的投影区域,投影区域与障碍物的位置不存在重叠区域。在确定出投影区域之后,移动机器人10将路径提示信息投影显示在投影区域上。
可选的,移动机器人10将路径提示信息以上述任意一种或者几种投影形式投影显示在目标投影面的投影区域上。
为了避免将路径提示信息投影到障碍物的位置上,导致信息显示效果较差的问题,移动机器人10获取感知数据中障碍物的位置,从而在目标投影面上确定与障碍物的位置不存在重叠区域的投影区域。
在一个示意性的例子中,如图6所示,移动机器人10沿正东方向移动,行人52沿正西方向移动,行人53沿正南方向移动,当移动机器人10获取感知数据,该感知数据包括行人52的第一位置和行人53的第一位置时,移动机器人10根据感知数据确定路径提示信息54,确定与行人52的第一位置和行人53的第一位置均不存在重叠区域的投影区域,将路径提示信息54投影显示在投影区域上。
可选的,投影区域的宽度大于或等于移动机器人10的宽度。
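One way to realize the two constraints above (the projection region must not overlap any obstacle position, and its width should be at least the robot's width) is a simple candidate filter; the rectangle representation and point obstacles are illustrative assumptions:

```python
def region_overlaps(region, point):
    """Axis-aligned test: does the rectangular projection region
    (x_min, y_min, x_max, y_max) contain the obstacle point (x, y)?"""
    x0, y0, x1, y1 = region
    px, py = point
    return x0 <= px <= x1 and y0 <= py <= y1

def pick_projection_region(candidates, obstacle_positions, robot_width):
    """Return the first candidate region that overlaps no obstacle and is
    at least as wide as the robot; None when no candidate qualifies."""
    for region in candidates:
        wide_enough = (region[2] - region[0]) >= robot_width
        clear = not any(region_overlaps(region, p) for p in obstacle_positions)
        if wide_enough and clear:
            return region
    return None
```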
综上所述,本实施例通过移动机器人获取用于指示移动机器人在行进过程中的周围环境的感知数据,根据感知数据确定移动机器人的规划路径,将包括移动机器人的规划路径的路径提示信息投影显示在目标投影面上;使得移动机器人在行进过程中能够在目标投影面上投影显示出将要移动的规划路径,进而使得附近的行人能够直接在目标投影面上看到该移动机器人将要移动的规划路径,避免了相关技术中由于语音形式而导致的诸多限制,有利于移动机器人快速且精确对行人进行信息提示。
本实施例还通过将路径提示信息以第一投影形式投影显示在目标投影面上,第一投影形式包括文本投影形式、图像投影形式、动画投影形式和视频投影方式中的至少一种,使得路径提示信息的投影形式多样化,提高了路径提示信息的信息显示效果。
为了显示更多与移动机器人10在移动过程中的信息,该提示信息除了包括路径提示信息,还可以包括辅助提示信息,辅助提示信息包括:移动机器人10的预计到达时间和/或移动速度。下文以示意性的实施例介绍基于路径提示信息和辅助提示信息的交互方法。
请参考图7,其示出了本申请另一个实施例提供的移动机器人10的交互方法的流程图。本实施例以该方法用于如图1所示的移动机器人10为例进行说明,该方法包括:
步骤601,获取感知数据,感知数据用于指示移动机器人10在行进过程中的周围环境。
以感知组件包括摄像头为例,移动机器人10获取摄像头拍摄的第i帧图像帧,根据第i帧图像帧确定第i帧图像帧对应的感知数据。其中,该摄像头以预定的拍摄速率进行拍摄,如:24fps(即,每秒24帧),i为自然数。
感知数据包括:移动障碍物的第一位置、移动障碍物的规划路径、移动障碍物的手势信息、静止障碍物的第二位置、静止障碍物的类型、移动机器人10的当前位置和目标投影面的平面颜色中的至少一种。
可选的,移动障碍物的第一位置、静止障碍物的第二位置、静止障碍物的类型、目标投影面的平面颜色均是采用第一获取策略得到的,移动障碍物的规划路径是采用第二获取策略得到的,移动障碍物的手势信息是采用第三获取策略得到的。下面,依次介绍这三种获取策略。
第一种获取策略包括:移动机器人10根据第i帧图像帧,采用机器学习算法计算得到图像特征,根据计算得到的图像特征确定感知数据。
移动机器人10采用机器学习算法计算得到图像特征包括但不限于以下两种可能的实现方式:
在一种可能的实现方式中,机器学习算法为传统机器学习算法,移动机器人10通过传统机器学习算法计算得到第i帧图像帧的图像特征。
比如,传统机器学习算法为基于方向梯度直方图(Histogram of Oriented Gradient,HOG)特征和支持向量机(Support Vector Machine,SVM)模型的目标检测算法。
在另一种可能的实现方式中,机器学习算法为神经网络算法,移动机器人10将第i帧图像帧输入至深度网络,通过深度网络提取第i帧图像帧的图像特征。其中,深度网络是用于对输入的图像帧进行特征提取后,得到图像帧的图像特征的神经网络。
可选的,深度网络包括卷积神经网络(Convolutional Neural Network,CNN)模型、深度神经网络(Deep Neural Network,DNN)模型、循环神经网络(Recurrent Neural Networks,RNN)模型、嵌入(embedding)模型、梯度提升决策树(Gradient Boosting Decision Tree,GBDT)模型、逻辑回归(Logistic Regression,LR)模型中的至少一种。比如,深度网络为基于区域的卷积神经网络(Region-based Convolutional Neural Networks,RCNN)。
移动机器人10根据计算得到的图像特征确定感知数据包括:根据计算得到的图像特征和预先标定的摄像头参数估算得到移动障碍物的第一位置、静止障碍物的第二位置、静止障碍物的类型、目标投影面的平面颜色中的至少一种感知数据。
比如,以移动障碍物为行人为例,移动机器人10通过深度网络提取第i帧图像帧的图像特征包括该行人的双脚位置和/或人体尺寸等,根据提取得到的图像特征和预先标定的摄像头参数估算得到行人与该移动机器人10的距离,从而得到该行人的第一位置。
第二获取策略包括:移动机器人10根据采集到的至少两帧图像帧(包括第i-m帧图像帧至第i帧图像帧),得到移动障碍物在至少两帧图像帧中的历史位置,根据该移动障碍物的至少两个历史位置,确定移动障碍物的规划路径。其中,m为小于i的正整数。
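The second acquisition strategy only says the planned path is derived from at least two historical positions across frames; constant-velocity extrapolation is one simple estimator for it, sketched below (the estimator choice is an assumption):

```python
def extrapolate_planned_path(history, steps=3):
    """Estimate a moving obstacle's planned path from its positions in the
    last image frames by constant-velocity extrapolation.

    history: list of (x, y) positions, oldest first; at least two entries.
    """
    if len(history) < 2:
        raise ValueError("need at least two historical positions")
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = x1 - x0, y1 - y0  # displacement per frame
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, steps + 1)]
```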
第三获取策略包括:移动机器人10根据采集到的第i帧图像帧,采用机器学习算法检测移动障碍物的手势动作,当检测到的手势动作与存储的手势动作相匹配时,确定与该检测到的手势动作对应的手势信息。
可选的,移动机器人10中预先存储有手势动作和手势信息的对应关系。示意性的,该对应关系如表二所示。其中,当手势动作为第一预设动作,即五根手指均不弯曲,且掌心面对移动机器人10时,对应的手势信息为用于指示移动机器人10停止的信息;当手势动作为第二预设动作,即拇指朝上,且除拇指以外的其他四根手指向手心方向完全弯曲时,对应的手势信息为用于指示移动机器人10让行的信息;当手势动作为第三预设动作,即食指朝向指定方向,且除食指以外的其他四根手指向手心方向完全弯曲时,对应的手势信息为用于指示移动机器人10向指定方向移动的信息。
表二
手势动作 手势信息
第一预设动作 停止
第二预设动作 让行
第三预设动作 向指定方向移动
基于表二所提供的对应关系,在一个示意性的例子中,以移动障碍物为行人为例,移动机器人10根据采集到的第i帧图像帧,采用机器学习算法检测得到行人的手势动作,将检测得到的手势动作与存储的手势动作依次进行比较,当检测得到的手势动作与第一预设动作相匹配时,确定与该检测到的手势动作对应的手势信息用于指示移动机器人10停止。
步骤602,根据感知数据确定路径提示信息和辅助提示信息,辅助提示信息包括移动机器人10的预计到达时间和/或移动速度。
移动机器人10根据移动障碍物的规划路径、静止障碍物的第二位置、移动障碍物的手势信息中的至少一种感知数据,通过预设规则确定移动机器人10的规划路径,根据移动机器人10的规划路径生成路径提示信息。路径提示信息的确定过程可参考上述实施例中的相关细节,在此不再赘述。
可选的,当移动障碍物的手势信息包括至少两个移动障碍物各自对应的手势信息时,比较至少两个移动障碍物各自与移动机器人10之间的距离,将距离最近的移动障碍物的手势信息确定为目标手势信息,根据目标手势信息生成路径提示信息。
在一个示意性的例子中,基于表二所提供的对应关系,以移动障碍物为行人为例,移动机器人10根据采集到的第i帧图像帧,采用机器学习算法检测得到两个行人(分别为行人A和行人B)的手势动作,其中行人A的手势动作为第三预设动作,行人B的手势动作为第一预设动作,且行人A与移动机器人10的距离小于行人B与移动机器人10的距离,因此移动机器人10将行人A的第三预设动作对应的手势信息确定为目标手势信息,根据目标手势信息所指示的指定方向,确定移动机器人10的路径提示信息。
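The gesture handling above (the mapping of Table 2, plus taking the nearest gesturing obstacle's gesture as the target gesture information) can be sketched as follows; the tuple layout and gesture labels are illustrative assumptions:

```python
# Correspondence from Table 2: preset gesture action -> gesture information.
GESTURE_TO_INFO = {
    "first_preset": "stop",
    "second_preset": "yield",
    "third_preset": "move_in_direction",
}

def target_gesture_info(gesturing_obstacles):
    """When several moving obstacles gesture at once, take the gesture of
    the obstacle nearest to the robot as the target gesture information.

    gesturing_obstacles: list of (distance_to_robot_m, gesture) tuples.
    """
    nearest = min(gesturing_obstacles, key=lambda item: item[0])
    return GESTURE_TO_INFO.get(nearest[1])
```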
可选的,移动机器人10将移动机器人10的预计到达时间和/或移动速度确定为辅助提示信息。
其中,移动机器人10的移动速度可以是预设的,也可以是动态确定得到的。移动速度可以是匀速的,也可以是变速的。下面仅以移动机器人10的移动速度是动态确定的恒定速度为例进行说明。
辅助提示信息包括移动机器人10的预计到达时间,移动机器人10根据感知数据确定辅助提示信息的方式包括:根据移动机器人10的规划路径和移动机器人10的移动速度确定移动机器人10的预计到达时间。
比如,移动机器人10的规划路径为以该移动机器人10的当前位置为起点移动长度为3米的路径,移动机器人10的移动速度为1米/秒,则确定该移动机器人10在该规划路径上移动时到达目标位置的时间为3秒,目标位置为规划路径的终点。
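The worked example above reduces to dividing the planned path length by the moving speed; a minimal sketch, assuming a constant speed:

```python
def estimated_arrival_time_s(planned_path_length_m, moving_speed_m_per_s):
    """Estimated time for the robot to reach the end of its planned path,
    assuming a constant moving speed (a 3 m path at 1 m/s gives 3 s)."""
    if moving_speed_m_per_s <= 0:
        raise ValueError("moving speed must be positive")
    return planned_path_length_m / moving_speed_m_per_s
```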
辅助提示信息包括移动机器人10的移动速度,移动机器人10根据感知数据确定辅助提示信息的方式包括:移动机器人10根据移动障碍物的规划路径和移动机器人10的规划路径通过预设规则确定移动机器人10的移动速度。
比如,移动机器人10根据移动障碍物的规划路径和移动机器人10的规划路径,通过预设规则确定移动机器人10的移动速度为1米/秒。
步骤603,将路径提示信息和辅助提示信息,投影显示在目标投影面上;和/或,将路径提示信息以第二投影形式投影显示在目标投影面上,第二投影形式用于指示辅助提示信息。
在一种可能的实现方式中,移动机器人10将路径提示信息和辅助提示信息,投影显示在目标投影面上。
移动机器人10将路径提示信息投影显示在目标投影面上的同时,将辅助提示信息投影显示在目标投影面上。其中,辅助提示信息的投影形式可参考上述实施例中三种可能的路径提示信息的投影形式,在此不再赘述。
在一个示意性的例子中,如图8所示,移动机器人10沿正东方向移动,行人72沿正西方向移动,移动机器人10根据获取到的感知数据确定路径提示信息74后,在将路径提示信息74投影显示在目标投影面上的同时,将移动机器人10的预计到达时间“3s”和/或移动速度“1m/s”,投影显示在地面上。
在另一种可能的实现方式中,移动机器人10将路径提示信息以第二投影形式投影显示在目标投影面上,第二投影形式用于指示辅助提示信息。
可选的,辅助提示信息包括移动机器人10的预计到达时间,将路径提示信息以第二投影形式投影显示在目标投影面上,包括:根据移动机器人10的预计到达时间,将路径提示信息进行投影颜色的线性渐变,将线性渐变后的路径提示信息,投影显示在目标投影面上。
其中,线性渐变后的路径提示信息包括同一个投影颜色从高到低的n种色彩饱和度,n种色彩饱和度与移动机器人的预计到达时间呈正相关关系,n为正整数。其中,色彩饱和度(英文:Saturation)也称饱和度,用于指示色彩的纯度。
即移动机器人10将路径提示信息进行投影颜色的n种色彩饱和度从高至低的线性渐变,得到线性渐变后的路径提示信息。可选的,n种色彩饱和度的取值范围为0至1。
n种色彩饱和度与移动机器人的预计到达时间呈正相关关系包括:投影颜色的色彩饱和度越高,即该投影颜色越鲜明,表示移动机器人的预计到达时间越早。
比如,色彩饱和度为“1”时用于指示移动机器人10的第一预计到达时间,色彩饱和度为“0”时用于指示移动机器人10的第二预计到达时间,由于色彩饱和度“1”高于色彩饱和度“0”,因此第一预计到达时间早于第二预计到达时间。
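The linear gradient described above can be sketched as generating n saturation levels of one projection color, evenly spaced from 1.0 down to 0.0, with higher saturation marking an earlier estimated arrival; the even spacing is an assumption, since the text only requires a linear, positively correlated gradient:

```python
def saturation_gradient(n):
    """n saturation levels of one projection color, from high (1.0) down to
    low (0.0): the most saturated end of the gradient marks the earliest
    estimated arrival time, the least saturated end the latest."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    if n == 1:
        return [1.0]
    return [1.0 - k / (n - 1) for k in range(n)]
```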
在一个示意性的例子中,如图9所示,移动机器人10沿正东方向移动,行人82沿正西方向移动,移动机器人10根据获取到的感知数据确定路径提示信息后,将路径提示信息进行投影颜色“棕色”的线性渐变得到线性渐变后的路径提示信息84,将线性渐变后的路径提示信息84投影显示在目标投影面上。
可选的,辅助提示信息包括移动机器人10的移动速度,移动机器人10将路径提示信息以第二投影形式投影显示在目标投影面上,包括:根据移动机器人10的移动速度,确定预定时间段内移动机器人10的移动长度,以移动长度为路径提示信息的投影长度,将路径提示信息投影显示在目标投影面上。
其中,投影长度用于指示移动机器人10的移动速度。投影长度越长,表示移动机器人10的移动速度越快。
在一个示意性的例子中,如图10所示,移动机器人10沿正东方向移动,行人92沿正西方向移动,移动机器人10根据获取到的感知数据确定移动机器人10的规划路径后,移动机器人10根据移动速度“1m/s”,确定预定时间段“5s”内移动机器人10的移动长度“5m”,将5m长的路径提示信息94投影显示在地面上。
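The speed-as-length encoding in this example is simply speed multiplied by the predetermined period; a minimal sketch:

```python
def prompt_projection_length_m(moving_speed_m_per_s, period_s):
    """Projection length of the path prompt: the distance the robot will
    cover within the predetermined period, so a longer projection reads
    as a faster robot (1 m/s over a 5 s period gives a 5 m prompt)."""
    if moving_speed_m_per_s < 0 or period_s < 0:
        raise ValueError("speed and period must be non-negative")
    return moving_speed_m_per_s * period_s
```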
综上所述,本实施例还通过根据障碍物的位置确定与障碍物的位置不存在重叠区域的投影区域,将路径提示信息投影显示在投影区域上;避免了将路径提示信息投影到障碍物的位置上,导致信息显示效果较差的问题,使得该移动机器人的交互方法的信息显示效果较好。
本实施例还通过根据目标投影面的平面颜色,确定不同于平面颜色的投影颜色,将路径提示信息以投影颜色的形式投影显示在目标投影面上;避免路径提示信息的投影颜色与目标投影面的平面颜色一致或者相似,导致信息显示效果较差的问题,提高了信息显示效果。
本实施例还通过将路径提示信息以第二投影形式投影显示在目标投影面上,第二投影形式用于指示辅助提示信息,使得移动机器人在向行人提供路径提示信息的同时还能够向行人提供辅助提示信息,丰富了提示信息的内容。
可选的,在步骤203或603之前,该移动机器人10的交互方法还包括:移动机器人10判断感知数据是否满足预设条件,若感知数据满足预设条件,则确定开始投影,执行步骤203或603。下面,仅以在步骤603之前还包括该步骤为例进行说明,请参考图11:
步骤1001,移动机器人10判断感知数据是否满足预设条件。
为了减少移动机器人10因为实时投影显示而导致的能源损耗,移动机器人10在开始投影前,先判断感知数据是否满足预设条件,若感知数据满足预设条件,则确定开始投影,执行步骤603;若感知数据不满足预设条件,则确定不需要投影,结束进程。
其中,预设条件包括:移动障碍物的第一位置与移动机器人10的当前位置之间的距离小于预定阈值,和/或,静止障碍物的类型为满足视觉盲点条件的障碍物类型。预定阈值可以是0-N米之间任意数值,N为正整数。本实施例对预定阈值的取值不加以限定。
由于当移动机器人10通过某些特定的路口时,由于静止障碍物的遮挡,可能造成移动机器人10与移动障碍物相互之间存在较大的盲区。为了避免相互碰撞的情况发生,若移动机器人10获取到的感知数据中静止障碍物的类型为满足视觉盲点条件的障碍物类型,则确定开始投影。
可选的,满足视觉盲点条件的障碍物类型包括出入口处的障碍物、十字路口处的障碍物、叉路口处的障碍物或转角处的障碍物中的至少一个。
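The preset condition above can be sketched as a simple predicate over the sensing data; the dict layout, type labels, and the 2-meter default are assumptions (the text only bounds the threshold as some value between 0 and N meters):

```python
import math

# Obstacle types listed as satisfying the visual-blind-spot condition:
# entrances/exits, crossroads, forks, and corners.
BLIND_SPOT_TYPES = {"entrance_exit", "crossroads", "fork", "corner"}

def should_start_projecting(sensed, robot_pos, threshold_m=2.0):
    """Check the preset condition before projecting: a moving obstacle
    closer than the threshold, and/or a stationary obstacle whose type
    satisfies the visual-blind-spot condition."""
    for pos in sensed.get("moving_obstacle_positions", []):
        if math.hypot(pos[0] - robot_pos[0], pos[1] - robot_pos[1]) < threshold_m:
            return True  # a moving obstacle is closer than the threshold
    # ...or a stationary obstacle creates a potential visual blind spot.
    return any(t in BLIND_SPOT_TYPES
               for t in sensed.get("stationary_obstacle_types", []))
```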
可选的,若移动机器人10获取到的感知数据中静止障碍物的类型为满足视觉盲点条件的障碍物类型,则确定开始投影,将路径提示信息和辅助提示信息以特定示警图像的形式投影显示在地面上。
比如,特定示警图像为斑马线图像。本实施例对特定示警图像的形式不加以限定。
可选的,在将路径提示信息和辅助提示信息投影显示在地面上的同时,移动机器人10发出用于提示的声音信号。
可选的,在将路径提示信息和辅助提示信息投影显示在地面上的同时,将提示标识投影显示在墙面上,提示标识用于提示在预设时间段内存在移动机器人10经过。
比如,提示标识为包括倒三角形的标识,该标识中包括文本“注意”。
在一个示意性的例子中,如图12所示,移动机器人10沿正北方向移动,行人112沿正西方向移动,移动机器人10根据获取到的感知数据确定移动机器人10的规划路径,当获取到的感知数据中包括叉路口处的墙面114时,确定感知数据满足预设条件,确定开始投影,将路径提示信息和辅助提示信息以特定示警图像116的形式投影显示在地面上。
综上所述,本实施例还通过移动机器人10在开始投影前,先判断感知数据是否满足预设条件,若感知数据满足预设条件,则确定开始投影,避免了移动机器人10因为实时投影显示而导致的能源损耗问题,大大节省了移动机器人10的能源损耗。
本实施例还通过若移动机器人10获取到的感知数据中静止障碍物的类型为满足视觉盲点条件的障碍物类型,则确定开始投影,使得当移动机器人10通过某些特定的路口,即由于静止障碍物的遮挡,移动机器人10与行人之间存在较大的盲区时,移动机器人10能够通过在目标投影面上进行投影显示,向当前无法观察到该移动机器人的行人进行信息提示,大大降低了相互碰撞的风险。
下述为本申请装置实施例,可以用于执行本申请方法实施例。对于本申请装置实施例中未披露的细节,请参照本申请方法实施例。
请参考图13,其示出了本申请一个实施例提供的移动机器人的交互装置的结构示意图。该交互装置可以通过专用硬件电路,或者,软硬件的结合实现成为移动机器人的全部或一部分,该交互装置包括:获取模块1210、第一确定模块1220和投影模块1230。
获取模块1210,用于实现上述步骤201或601。
第一确定模块1220,用于实现上述步骤202。
投影模块1230,用于实现上述步骤203。
在基于图13所示实施例提供的一个可选实施例中,如图14所示,该投影模块1230,还用于将路径提示信息以第一投影形式,投影显示在目标投影面上;
其中,第一投影形式包括文本投影形式、图像投影形式、动画投影形式和视频投影方式中的至少一种。
在基于图13所示实施例提供的一个可选实施例中,如图14所示,该投影模块1230,还用于根据目标投影面的平面颜色确定路径提示信息的投影颜色,投影颜色不同于平面颜色;将路径提示信息以投影颜色的形式投影显示在目标投影面上。
在基于图13所示实施例提供的一个可选实施例中,如图14所示,该投影模块1230,还用于将路径提示信息以动画投影形式,投影显示在目标投影面上,包括:将路径提示信息以动画导向箭头的形式,投影显示在目标投影面上。
在基于图13所示实施例提供的一个可选实施例中,如图14所示,该装置,还包括:第二确定模块1240。
该第二确定模块1240,用于根据障碍物的位置确定目标投影面上的投影区域,投影区域与障碍物的位置不存在重叠区域;
该投影模块1230,还用于将路径提示信息投影显示在投影区域上。
在基于图13所示实施例提供的一个可选实施例中,如图14所示,该装置,还包括:第三确定模块1250。
该第三确定模块1250,用于根据感知数据确定辅助提示信息,辅助提示信息包括移动机器人的预计到达时间和/或移动速度;
该投影模块1230,还用于实现上述步骤603。
在基于图13所示实施例提供的一个可选实施例中,如图14所示,辅助提示信息包括移动机器人的预计到达时间;该投影模块1230,还用于根据移动机器人的预计到达时间,将路径提示信息进行投影颜色的线性渐变;将线性渐变后的路径提示信息,投影显示在目标投影面上;
其中,线性渐变后的路径提示信息包括同一个投影颜色从高到低的n种色彩饱和度,n种色彩饱和度与移动机器人的预计到达时间呈正相关关系。
在基于图13所示实施例提供的一个可选实施例中,如图14所示,辅助提示信息包括移动机器人的移动速度;该投影模块1230,还用于根据移动机器人的移动速度,确定预定时间段内移动机器人的移动长度;
以移动长度为路径提示信息的投影长度,将路径提示信息投影显示在目标投影面上,投影长度用于指示移动机器人的移动速度。
在基于图13所示实施例提供的一个可选实施例中,如图14所示,感知数据包括移动障碍物的第一位置和/或静止障碍物的类型;该装置,还包括:判断模块1260。
该判断模块1260,用于当感知数据满足预设条件时,执行将提示信息进行投影显示的步骤;
其中,预设条件包括:移动障碍物的第一位置与移动机器人的当前位置之间的距离小于预定阈值,和/或,静止障碍物的类型为满足视觉盲点条件的障碍物类型。
可选的,感知数据包括移动障碍物的第一位置、移动障碍物的规划路径、移动障碍物的手势信息、静止障碍物的第二位置、静止障碍物的类型和目标投影面的平面颜色中的至少一种。
相关细节可结合参考图2至图12所示的方法实施例。其中,获取模块1210还用于实现上述方法实施例中其他任意隐含或公开的与获取步骤相关的功能;第一确定模块1220还用于实现上述方法实施例中其他任意隐含或公开的与确定步骤相关的功能;投影模块1230还用于实现上述方法实施例中其他任意隐含或公开的与投影步骤相关的功能。
需要说明的是,上述实施例提供的装置,在实现其功能时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的装置与方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
在示例性实施例中,还提供了一种移动机器人,移动机器人包括处理器、存储器和投影组件,存储器中存储有至少一条指令、至少一段程序、代码集或指令集;
处理器,用于获取感知数据,感知数据用于指示移动机器人在行进过程中的周围环境;
处理器,还用于根据感知数据确定路径提示信息,路径提示信息包括移动机器人的规划路径;
投影组件,用于将处理器确定出的路径提示信息投影显示在目标投影面上。
可选的,该投影组件,还用于将路径提示信息以第一投影形式,投影显示在目标投影面上;
其中,第一投影形式包括文本投影形式、图像投影形式、动画投影形式和视频投影方式中的至少一种。
可选的,该投影组件,还用于根据目标投影面的平面颜色确定路径提示信息的投影颜色,投影颜色不同于平面颜色;将路径提示信息以投影颜色的形式投影显示在目标投影面上。
可选的,该投影组件,还用于将路径提示信息以动画投影形式,投影显示在目标投影面上,包括:将路径提示信息以动画导向箭头的形式,投影显示在目标投影面上。
可选的,该处理器,还用于根据障碍物的位置确定目标投影面上的投影区域,投影区域与障碍物的位置不存在重叠区域;
该投影组件,还用于将路径提示信息投影显示在投影区域上。
可选的,该处理器,还用于根据感知数据确定辅助提示信息,辅助提示信息包括移动机器人的预计到达时间和/或移动速度;
该投影组件,还用于实现上述步骤603。
可选的,辅助提示信息包括移动机器人的预计到达时间;该投影组件,还用于根据移动机器人的预计到达时间,将路径提示信息进行投影颜色的线性渐变;将线性渐变后的路径提示信息,投影显示在目标投影面上;
其中,线性渐变后的路径提示信息包括同一个投影颜色从高到低的n种色彩饱和度,n种色彩饱和度与移动机器人的预计到达时间呈正相关关系。
可选的,辅助提示信息包括移动机器人的移动速度;该投影组件,还用于根据移动机器人的移动速度,确定预定时间段内移动机器人的移动长度;以移动长度为路径提示信息的投影长度,将路径提示信息投影显示在目标投影面上,投影长度用于指示移动机器人的移动速度。
可选的,感知数据包括移动障碍物的第一位置和/或静止障碍物的类型;该投影组件,还用于当感知数据满足预设条件时,执行将提示信息进行投影显示的步骤;
其中,预设条件包括:移动障碍物的第一位置与移动机器人的当前位置之间的距离小于预定阈值,和/或,静止障碍物的类型为满足视觉盲点条件的障碍物类型。
可选的,感知数据包括移动障碍物的第一位置、移动障碍物的规划路径、移动障碍物的手势信息、静止障碍物的第二位置、静止障碍物的类型和目标投影面的平面颜色中的至少一种。
相关细节可结合参考图2至图12所示的方法实施例。其中,处理器还用于实现上述方法实施例中其他任意隐含或公开的与处理步骤相关的功能;投影组件还用于实现上述方法实施例中其他任意隐含或公开的与投影步骤相关的功能。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本领域普通技术人员可以理解实现上述实施例的移动机器人的交互方法中全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请的较佳实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (22)

  1. 一种移动机器人的交互方法,其特征在于,用于所述移动机器人中,所述方法包括:
    获取感知数据,所述感知数据用于指示所述移动机器人在行进过程中的周围环境;
    根据所述感知数据确定路径提示信息,所述路径提示信息包括所述移动机器人的规划路径;
    将所述路径提示信息投影显示在目标投影面上。
  2. 根据权利要求1所述的方法,其特征在于,所述将所述路径提示信息投影在目标投影面上进行显示,包括:
    将所述路径提示信息以第一投影形式,投影显示在所述目标投影面上;
    其中,所述第一投影形式包括文本投影形式、图像投影形式、动画投影形式和视频投影方式中的至少一种。
  3. 根据权利要求2所述的方法,其特征在于,所述将所述路径提示信息以第一投影形式,投影显示在所述目标投影面上,包括:
    根据所述目标投影面的平面颜色确定所述路径提示信息的投影颜色,所述投影颜色不同于所述平面颜色;
    将所述路径提示信息以所述投影颜色的形式投影显示在所述目标投影面上。
  4. 根据权利要求2所述的方法,其特征在于,所述将所述路径提示信息以第一投影形式,投影显示在所述目标投影面上,包括:
    将所述路径提示信息以动画导向箭头的形式,投影显示在所述目标投影面上。
  5. 根据权利要求1所述的方法,其特征在于,所述将所述路径提示信息投影在目标投影面上进行显示之前,还包括:
    根据障碍物的位置确定所述目标投影面上的投影区域,所述投影区域与所述障碍物的位置不存在重叠区域;
    所述将所述路径提示信息投影在目标投影面上进行显示,包括:
    将所述路径提示信息投影显示在所述投影区域上。
  6. 根据权利要求1至5任一所述的方法,其特征在于,所述方法,还包括:
    根据所述感知数据确定辅助提示信息,所述辅助提示信息包括所述移动机器人的预计到达时间和/或移动速度;
    所述将所述路径提示信息投影显示在目标投影面上,还包括:
    将所述路径提示信息和所述辅助提示信息,投影显示在所述目标投影面上;和/或,将所述路径提示信息以第二投影形式投影显示在所述目标投影面上,所述第二投影形式用于指示所述辅助提示信息。
  7. 根据权利要求6所述的方法,其特征在于,所述辅助提示信息包括所述移动机器人的预计到达时间;
    所述将所述路径提示信息以第二投影形式投影显示在所述目标投影面上,包括:
    根据所述移动机器人的预计到达时间,将所述路径提示信息进行投影颜色的线性渐变;
    将线性渐变后的所述路径提示信息,投影显示在所述目标投影面上;
    其中,所述线性渐变后的所述路径提示信息包括同一个所述投影颜色从高到低的n种色彩饱和度,所述n种色彩饱和度与所述移动机器人的预计到达时间呈正相关关系。
  8. 根据权利要求6所述的方法,其特征在于,所述辅助提示信息包括所述移动机器人的移动速度;
    所述将所述路径提示信息以第二投影形式投影显示在所述目标投影面上,包括:
    根据所述移动机器人的移动速度,确定预定时间段内所述移动机器人的移动长度;
    以所述移动长度为所述路径提示信息的投影长度,将所述路径提示信息投影显示在所述目标投影面上,所述投影长度用于指示所述移动机器人的移动速度。
  9. 根据权利要求1至8任一所述的方法,其特征在于,所述感知数据包括移动障碍物的第一位置和/或静止障碍物的类型;
    所述将所述路径提示信息投影显示在目标投影面上之前,还包括:
    当所述感知数据满足预设条件时,执行将所述提示信息进行投影显示的步骤;
    其中,所述预设条件包括:所述移动障碍物的第一位置与所述移动机器人的当前位置之间的距离小于预定阈值,和/或,所述静止障碍物的类型为满足视觉盲点条件的障碍物类型。
  10. 根据权利要求1至8任一所述的方法,其特征在于,所述感知数据包括移动障碍物的第一位置、所述移动障碍物的规划路径、所述移动障碍物的手势信息、静止障碍物的第二位置、所述静止障碍物的类型和所述目标投影面的平面颜色中的至少一种。
  11. 一种移动机器人的交互装置,其特征在于,设置在所述移动机器人上,所述装置包括:
    获取模块,用于获取感知数据,所述感知数据用于指示所述移动机器人在行进过程中的周围环境;
    第一确定模块,用于根据所述感知数据确定路径提示信息,所述路径提示信息包括所述移动机器人的规划路径;
    投影模块,用于将所述路径提示信息投影显示在目标投影面上。
  12. 根据权利要求11所述的装置,其特征在于,所述投影模块,还用于将所述路径提示信息以第一投影形式,投影显示在所述目标投影面上;
    其中,所述第一投影形式包括文本投影形式、图像投影形式、动画投影形式和视频投影方式中的至少一种。
  13. 根据权利要求12所述的装置,其特征在于,所述投影模块,还用于根据所述目标投影面的平面颜色确定所述路径提示信息的投影颜色,所述投影颜色不同于所述平面颜色;
    将所述路径提示信息以所述投影颜色的形式投影显示在所述目标投影面上。
  14. 根据权利要求12所述的装置,其特征在于,所述投影模块,还用于将所述路径提示信息以动画导向箭头的形式,投影显示在所述目标投影面上。
  15. 根据权利要求11所述的装置,其特征在于,所述装置还包括:第二确定模块;
    所述第二确定模块,用于根据障碍物的位置确定所述目标投影面上的投影区域,所述投影区域与所述障碍物的位置不存在重叠区域;
    所述投影模块,还用于将所述路径提示信息投影显示在所述投影区域上。
  16. 根据权利要求11至15任一所述的装置,其特征在于,所述装置,还包括:第三确定模块;
    所述第三确定模块,用于根据所述感知数据确定辅助提示信息,所述辅助提示信息包括所述移动机器人的预计到达时间和/或移动速度;
    所述投影模块,还用于将所述路径提示信息和所述辅助提示信息,投影显示在所述目标投影面上;和/或,将所述路径提示信息以第二投影形式投影显示在所述目标投影面上,所述第二投影形式用于指示所述辅助提示信息。
  17. 根据权利要求16所述的装置,其特征在于,所述辅助提示信息包括所述移动机器人的预计到达时间;
    所述投影模块,还用于根据所述移动机器人的预计到达时间,将所述路径提示信息进行投影颜色的线性渐变;将线性渐变后的所述路径提示信息,投影显示在所述目标投影面上;
    其中,所述线性渐变后的所述路径提示信息包括同一个所述投影颜色从高到低的n种色彩饱和度,所述n种色彩饱和度与所述移动机器人的预计到达时间呈正相关关系。
  18. 根据权利要求16所述的装置,其特征在于,所述辅助提示信息包括所述移动机器人的移动速度;
    所述投影模块,还用于根据所述移动机器人的移动速度,确定预定时间段内所述移动机器人的移动长度;以所述移动长度为所述路径提示信息的投影长度,将所述路径提示信息投影显示在所述目标投影面上,所述投影长度用于指示所述移动机器人的移动速度。
  19. 根据权利要求11至18任一所述的装置,其特征在于,所述感知数据包括移动障碍物的第一位置和/或静止障碍物的类型;所述装置还包括:判断模块;
    所述判断模块,用于当所述感知数据满足预设条件时,执行将所述提示信息进行投影显示的步骤;
    其中,所述预设条件包括:所述移动障碍物的第一位置与所述移动机器人的当前位置之间的距离小于预定阈值,和/或,所述静止障碍物的类型为满足视觉盲点条件的障碍物类型。
  20. 根据权利要求11至18任一所述的装置,其特征在于,所述感知数据包括移动障碍物的第一位置、所述移动障碍物的规划路径、所述移动障碍物的手势信息、静止障碍物的第二位置、所述静止障碍物的类型和所述目标投影面的平面颜色中的至少一种。
  21. 一种移动机器人,其特征在于,所述移动机器人包括处理器、存储器和投影组件,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集;
    所述处理器,用于获取感知数据,所述感知数据用于指示所述移动机器人在行进过程中的周围环境;
    所述处理器,还用于根据所述感知数据确定路径提示信息,所述路径提示信息包括所述移动机器人的规划路径;
    所述投影组件,用于将所述处理器确定出的所述路径提示信息投影显示在目标投影面上。
  22. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如权利要求1至10任一所述的移动机器人的交互方法。
PCT/CN2018/109626 2017-10-31 2018-10-10 移动机器人的交互方法、装置、移动机器人及存储介质 WO2019085716A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/597,484 US11142121B2 (en) 2017-10-31 2019-10-09 Interaction method and apparatus of mobile robot, mobile robot, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711044804.1A CN108303972B (zh) 2017-10-31 2017-10-31 移动机器人的交互方法及装置
CN201711044804.1 2017-10-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/597,484 Continuation US11142121B2 (en) 2017-10-31 2019-10-09 Interaction method and apparatus of mobile robot, mobile robot, and storage medium

Publications (1)

Publication Number Publication Date
WO2019085716A1 true WO2019085716A1 (zh) 2019-05-09

Family

ID=62869639

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/109626 WO2019085716A1 (zh) 2017-10-31 2018-10-10 移动机器人的交互方法、装置、移动机器人及存储介质

Country Status (3)

Country Link
US (1) US11142121B2 (zh)
CN (1) CN108303972B (zh)
WO (1) WO2019085716A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114274184A (zh) * 2021-12-17 2022-04-05 重庆特斯联智慧科技股份有限公司 一种基于投影引导的物流机器人人机交互方法及系统

Families Citing this family (25)

Publication number Priority date Publication date Assignee Title
CN108303972B (zh) * 2017-10-31 2020-01-17 腾讯科技(深圳)有限公司 移动机器人的交互方法及装置
CN110039535B (zh) * 2018-01-17 2022-12-16 阿里巴巴集团控股有限公司 机器人交互方法及机器人
CN109491875A (zh) * 2018-11-09 2019-03-19 浙江国自机器人技术有限公司 一种机器人信息显示方法、系统及设备
CN109572555B (zh) * 2018-11-13 2020-01-21 百度在线网络技术(北京)有限公司 一种应用于无人车的遮挡信息显示方法和系统
DE102018130779A1 (de) * 2018-12-04 2020-06-04 Still Gmbh Verfahren zum Betreiben eines autonomen Flurförderzeugs und intralogistisches System mit einem autonomen Flurförderzeug
CN111399492A (zh) * 2018-12-28 2020-07-10 深圳市优必选科技有限公司 一种机器人及其障碍物感知方法和装置
CN109927624B (zh) * 2019-01-18 2022-06-28 驭势(上海)汽车科技有限公司 车辆移动的目标区域的投影方法、hmi计算机系统及车辆
CN110335556B (zh) * 2019-07-03 2021-07-30 桂林电子科技大学 移动解说平台及其控制方法、控制系统、计算机介质
CN110442126A (zh) * 2019-07-15 2019-11-12 北京三快在线科技有限公司 一种移动机器人及其避障方法
CN111179148B (zh) * 2019-12-30 2023-09-08 深圳优地科技有限公司 数据展示方法及装置
CN111736699A (zh) * 2020-06-23 2020-10-02 上海商汤临港智能科技有限公司 基于车载数字人的交互方法及装置、存储介质
US11687086B2 (en) * 2020-07-09 2023-06-27 Brookhurst Garage, Inc. Autonomous robotic navigation in storage site
CN111844038B (zh) * 2020-07-23 2022-01-07 炬星科技(深圳)有限公司 一种机器人运动信息识别方法、避障方法及避障机器人、避障系统
WO2022063403A1 (en) * 2020-09-24 2022-03-31 Abb Schweiz Ag System and method for indicating a planned robot movement
JP7484758B2 (ja) * 2021-02-09 2024-05-16 トヨタ自動車株式会社 ロボット制御システム
CN115072626B (zh) * 2021-03-12 2023-07-18 灵动科技(北京)有限公司 搬运机器人、搬运系统及提示信息生成方法
CN113267179B (zh) * 2021-05-17 2023-05-30 北京京东乾石科技有限公司 提示信息生成方法和装置、存储介质、电子设备
CN113485351B (zh) * 2021-07-22 2024-08-23 深圳优地科技有限公司 移动机器人的控制方法、装置、移动机器人及存储介质
CN114281187A (zh) * 2021-11-16 2022-04-05 深圳市普渡科技有限公司 机器人系统、方法、计算机设备及存储介质
CN114265397B (zh) * 2021-11-16 2024-01-16 深圳市普渡科技有限公司 移动机器人的交互方法、装置、移动机器人和存储介质
JP2024524595A (ja) * 2021-11-16 2024-07-05 深▲せん▼市普渡科技有限公司 移動ロボットのインタラクション方法、装置、移動ロボット及び記憶媒体
WO2023088311A1 (zh) * 2021-11-16 2023-05-25 深圳市普渡科技有限公司 机器人系统、方法、计算机设备及存储介质
CN114245091B (zh) * 2022-01-27 2023-02-17 美的集团(上海)有限公司 投影位置修正方法、投影定位方法及控制装置、机器人
CN114683284B (zh) * 2022-03-24 2024-05-17 上海擎朗智能科技有限公司 一种控制方法、装置、自主移动设备和存储介质
CN118015554B (zh) * 2024-04-10 2024-06-21 南京派光智慧感知信息技术有限公司 多源数据融合的铁路场站监测方法、系统、设备及介质

Citations (6)

Publication number Priority date Publication date Assignee Title
JP4962940B2 (ja) * 2006-03-30 2012-06-27 株式会社国際電気通信基礎技術研究所 道案内システム
CN106406312A (zh) * 2016-10-14 2017-02-15 平安科技(深圳)有限公司 导览机器人及其移动区域标定方法
CN106471441A (zh) * 2014-08-25 2017-03-01 X开发有限责任公司 用于显示机器人设备动作的虚拟表示的增强现实的方法和系统
CN106814738A (zh) * 2017-03-30 2017-06-09 南通大学 一种基于体感控制技术的轮式机器人及其操控方法
CN106864361A (zh) * 2017-02-14 2017-06-20 驭势科技(北京)有限公司 车辆与车外人车交互的方法、系统、装置和存储介质
CN108303972A (zh) * 2017-10-31 2018-07-20 腾讯科技(深圳)有限公司 移动机器人的交互方法及装置

Family Cites Families (24)

Publication number Priority date Publication date Assignee Title
TW200526441A (en) * 2004-02-06 2005-08-16 Shih-Po Lan A vehicle light beam guide
US7201525B2 (en) 2004-07-21 2007-04-10 Allegiance Corporation Liquid antimicrobial solution applicator
US20060187010A1 (en) * 2005-02-18 2006-08-24 Herbert Berman Vehicle motion warning device
JP2007121001A (ja) * 2005-10-26 2007-05-17 Matsushita Electric Ind Co Ltd ナビゲーション装置
CN101393029B (zh) * 2007-09-17 2011-04-06 王保合 车载导航装置以及使用其的导航系统
US9230419B2 (en) * 2010-07-27 2016-01-05 Rite-Hite Holding Corporation Methods and apparatus to detect and warn proximate entities of interest
CN102829788A (zh) * 2012-08-27 2012-12-19 北京百度网讯科技有限公司 一种实景导航方法和实景导航装置
CN105162482A (zh) * 2012-12-22 2015-12-16 华为技术有限公司 一种眼镜式通信装置、系统及方法
US10499477B2 (en) * 2013-03-18 2019-12-03 Signify Holding B.V. Methods and apparatus for information management and control of outdoor lighting networks
CN103245345B (zh) * 2013-04-24 2016-04-06 浙江大学 一种基于图像传感技术的室内导航系统及导航、搜索方法
CN103353758B (zh) * 2013-08-05 2016-06-01 青岛海通机器人系统有限公司 一种室内机器人导航方法
CN203941451U (zh) * 2014-04-15 2014-11-12 桂林电子科技大学 基于手势识别的自动避障小车
CN104376731A (zh) * 2014-10-09 2015-02-25 苏州合欣美电子科技有限公司 基于人行横道信号灯的投影系统
CN104750448A (zh) * 2015-03-23 2015-07-01 联想(北京)有限公司 一种信息处理的方法、电子设备及可穿戴设备
CN110667466A (zh) * 2015-04-10 2020-01-10 麦克赛尔株式会社 图像投射装置和图像投射方法
CN104851146A (zh) * 2015-05-11 2015-08-19 苏州三体智能科技有限公司 一种互动式行车记录导航安全系统
CN104842860B (zh) * 2015-05-20 2018-06-19 浙江吉利汽车研究院有限公司 一种应用于智能驾驶汽车上的行驶路径指示方法及系统
FR3048219B1 (fr) * 2016-02-26 2020-12-25 Valeo Vision Dispositif d'eclairage pour vehicule avec presentation d'information d'aide a la conduite
CN205706411U (zh) * 2016-04-19 2016-11-23 北京奔驰汽车有限公司 一种行车安全信息交互装置及安装该装置的汽车
CN105929827B (zh) * 2016-05-20 2020-03-10 北京地平线机器人技术研发有限公司 移动机器人及其定位方法
CN105976457A (zh) * 2016-07-12 2016-09-28 百度在线网络技术(北京)有限公司 用于指示车辆行车动态的方法和装置
CN106878687A (zh) * 2017-04-12 2017-06-20 吉林大学 一种基于多传感器的车载环境识别系统及全方位视觉模块
CN107139832A (zh) * 2017-05-08 2017-09-08 杨科 一种汽车光学投影警示系统及其方法
DE102018203660A1 (de) * 2018-03-12 2019-09-12 Ford Global Technologies, Llc Verfahren zur Übertragung von einem Datensatz von einem Kraftfahrzeug zu einem HMI außerhalb des Kraftfahrzeugs

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
JP4962940B2 (ja) * 2006-03-30 2012-06-27 株式会社国際電気通信基礎技術研究所 道案内システム
CN106471441A (zh) * 2014-08-25 2017-03-01 X开发有限责任公司 用于显示机器人设备动作的虚拟表示的增强现实的方法和系统
CN106406312A (zh) * 2016-10-14 2017-02-15 平安科技(深圳)有限公司 导览机器人及其移动区域标定方法
CN106864361A (zh) * 2017-02-14 2017-06-20 驭势科技(北京)有限公司 车辆与车外人车交互的方法、系统、装置和存储介质
CN106814738A (zh) * 2017-03-30 2017-06-09 南通大学 一种基于体感控制技术的轮式机器人及其操控方法
CN108303972A (zh) * 2017-10-31 2018-07-20 腾讯科技(深圳)有限公司 移动机器人的交互方法及装置

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN114274184A (zh) * 2021-12-17 2022-04-05 重庆特斯联智慧科技股份有限公司 一种基于投影引导的物流机器人人机交互方法及系统
CN114274184B (zh) * 2021-12-17 2024-05-24 重庆特斯联智慧科技股份有限公司 一种基于投影引导的物流机器人人机交互方法及系统

Also Published As

Publication number Publication date
US20200039427A1 (en) 2020-02-06
CN108303972A (zh) 2018-07-20
US11142121B2 (en) 2021-10-12
CN108303972B (zh) 2020-01-17

Similar Documents

Publication Publication Date Title
WO2019085716A1 (zh) 移动机器人的交互方法、装置、移动机器人及存储介质
US11481024B2 (en) Six degree of freedom tracking with scale recovery and obstacle avoidance
CN108247647B (zh) 一种清洁机器人
US10593126B2 (en) Virtual space display system
US11001196B1 (en) Systems and methods for communicating a machine intent
US9704267B2 (en) Interactive content control apparatus and method
US20190278273A1 (en) Odometry system and method for tracking traffic lights
JP2024042693A (ja) 様々な環境照明条件において動作可能なモバイル機器のためのビジュアルナビゲーション
US20200257821A1 (en) Video Monitoring Method for Mobile Robot
WO2018103023A1 (zh) 人机混合决策方法和装置
CN113116224B (zh) 机器人及其控制方法
US11385643B2 (en) Artificial intelligence moving agent
JP2019008519A (ja) 移動体検出方法、移動体学習方法、移動体検出装置、移動体学習装置、移動体検出システム、および、プログラム
Derry et al. Automated doorway detection for assistive shared-control wheelchairs
Peasley et al. Real-time obstacle detection and avoidance in the presence of specular surfaces using an active 3D sensor
Bipin et al. Autonomous navigation of generic monocular quadcopter in natural environment
KR20190105214A (ko) 인공 지능을 통해 구속 상황을 회피하는 로봇 청소기 및 그의 동작 방법
JP7043376B2 (ja) 情報処理装置、車両制御装置および移動体制御方法
WO2023159591A1 (zh) 一种展陈场景智能讲解的系统及其方法
Kiy A new real-time method of contextual image description and its application in robot navigation and intelligent control
Dinaux et al. FAITH: Fast iterative half-plane focus of expansion estimation using optic flow
KR20240063820A (ko) 청소 로봇 및 그의 태스크 수행 방법
KR20230134109A (ko) 청소 로봇 및 그의 태스크 수행 방법
CN111975775B (zh) 基于多角度视觉感知的自主机器人导航方法及系统
US20230086153A1 (en) Information processing apparatus, information processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18872348

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18872348

Country of ref document: EP

Kind code of ref document: A1