
GB2500214A - Robot vision apparatus and control method and corresponding land robot - Google Patents

Robot vision apparatus and control method and corresponding land robot

Info

Publication number
GB2500214A
GB2500214A
Authority
GB
United Kingdom
Prior art keywords
robot
cameras
camera
images
terrain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1204409.5A
Other versions
GB201204409D0 (en)
Inventor
Thomas Ladyman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB1204409.5A priority Critical patent/GB2500214A/en
Publication of GB201204409D0 publication Critical patent/GB201204409D0/en
Publication of GB2500214A publication Critical patent/GB2500214A/en
Withdrawn legal-status Critical Current


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00Appliances for aiding patients or disabled persons to walk about
    • A61H3/06Walking aids for blind persons
    • A61H3/061Walking aids for blind persons with electronic detecting or guiding means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/008Manipulators for service tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00Appliances for aiding patients or disabled persons to walk about
    • A61H3/04Wheeled walking aids for patients or disabled persons
    • A61H2003/043Wheeled walking aids for patients or disabled persons with a drive mechanism
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/50Control means thereof
    • A61H2201/5007Control means thereof computer controlled
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/50Control means thereof
    • A61H2201/5058Sensors or detectors
    • A61H2201/5092Optical sensor
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Electromagnetism (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Rehabilitation Therapy (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Pain & Pain Management (AREA)
  • Epidemiology (AREA)
  • Public Health (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

A land based robot 2 has a base section 4 housing a drive system (30, fig. 2), and a camera mast 8 mounted on the base section 4 supporting at least a first pair of downwards facing cameras 12a, 12c with overlapping fields of view. Each camera captures an image of the robot base section 4 and the terrain immediately adjacent the base section 4, the images also overlapping. A controller (20, fig. 2) receives the images, generates a stereoscopic image of the terrain including a depth map, and controls the drive system (30, fig. 2) based upon the depth map to move through the terrain and avoid obstacles. A second pair of downwards facing cameras 12b, 12d may be supported by the mast 8, allowing the mast to be shorter for the same breadth of visual field than would be the case if only a single camera were used (fig. 4). The controller (20, fig. 2) may use the images from all the cameras 12a to 12d to generate the stereoscopic image, or just those from the two adjacent cameras lying closest to the direction of travel of the robot 2. The cameras 12a to 12d may be mounted on perpendicular cross beams 10a, 10b. The controller (20, fig. 2) may also use a further camera with a horizontal field of view and a GPS system. The controller may detect and track a marker on the footwear of an operator to allow the robot 2 to function as a guide dog for the operator. A grip or handle 16 may be used by the operator to control the robot 2.

Description

A ROBOT VISION APPARATUS AND CONTROL METHOD, AND CORRESPONDING ROBOT
This invention relates to a robot vision apparatus and control method, and in particular a computer vision apparatus and control method providing obstacle detection and avoidance in a land based robot.
The term computer vision refers to the process of capturing digital images of a scene using one or more digital cameras connected to a computer, and to the subsequent processing of those images by a computer in order to extract useful information relevant to the situation. In some cases, the information required can be details of the image content, such as in face or number plate recognition algorithms for example. Alternatively, in computer vision processes that are employed for the control of robotic systems or vehicles, cameras may be used to provide an artificial awareness of the robot's immediate surroundings so that the robot can guide itself around the terrain and avoid collisions. However, such systems are often technically complicated and require considerable processing and computer resources to operate reliably.
We have therefore appreciated that there is a need for a self-guiding land based robot system in which the visual recognition of the surrounding terrain is simplified in terms of necessary processing and associated costs. This would make such systems suitable for a wider range of commercial and specialist applications. We have further appreciated that in commercial applications, the overall size and weight of the robot is an important factor for adoption, and have recognised that smaller devices are more suitable.
One particular commercial application in which we envisage such a system being used is as a robotic guide dog for visually impaired persons. Almost two million people in the UK alone suffer from sight loss, and around 300,000 are registered as blind or partially sighted. Of these, approximately 180,000 are reported as saying that they do not go outside on their own, and so are completely dependent on other people. Guide dogs can assist but cost around £50,000 to train and support throughout their working lives. A robotic guide device would be considerably less expensive than a guide dog while providing the same freedom to its owner.
A number of robots for other commercial uses are known. USD627377 for example illustrates a self-balancing robot with shaft mounted head intended for remote teleconferencing and telepresence applications. The robot takes the place of
its operator at a meeting or at a facility inspection for example. A video camera mounted in the robot head transmits an image to the user who, despite being located remotely, can then guide the robot wirelessly around the terrain or facility. An infrared sensor in the robot prevents the robot from bumping into objects that are not in the camera's field of view.
Summary of the Invention
The invention is defined in the independent claims to which reference should now be made. Advantageous features are set forth in the dependent claims.
Brief Description of the Drawings
Example embodiments of the invention will now be described by way of example and with reference to the drawings, in which:
Figure 1 is an illustration of a robot exterior according to a first example;
Figure 2 is a schematic block diagram of the robot's internal components, in a first example;
Figure 3 illustrates the field of vision from a single robot mounted camera;
Figure 4 illustrates the field of vision from four, pair-wise, robot mounted cameras, as shown in Figure 1;
Figure 5 illustrates the stereoscopic field of vision for the robot of Figure 1;
Figure 6 illustrates a visual region of interest for the robot of Figure 1;
Figure 7 illustrates a visual region of interest in which depth information has been added.
Detailed Description of Example Embodiments
An example of the invention will now be described with reference to Figure 1. In this example, the obstacle detecting robot is used as a mobility guide robot for the visually impaired.
The self-guiding robot 2 comprises a chassis or base 4, which is supported on the ground by a plurality of wheels 6. The wheels 6 allow the robot to travel in forward and reverse directions, and to turn left and right under the control of a drive system controller (not shown). A power source and the drive system controller are housed in the base 4. Although wheels 6 are shown in this example, suitable robot drive systems may also include caterpillar tracks, runners, articulated leg designs, and air jets.
Extending from the top of the base 4 is a camera mounting pole 8 having at its upper end a camera mounting frame 10. The frame 10 comprises a pair of cross
beams 10a and 10b extending perpendicular to one another and substantially horizontally to the base 4 and to the ground. At the end of each cross beam is a respective digital camera 12a, 12b, 12c, and 12d mounted so that it points directly downwards towards the ground. Although in the example shown the cameras 12 point vertically downwards, it is possible to install them so that they are directed at a small angle to the vertical, outward and away from the base of the guide robot.
A further camera 14 may also be provided on the frame 10. Whereas cameras 12 are arranged to point downwards towards the visual region of interest immediately adjacent the robot base, camera 14 is arranged to look in the horizontal field of view, and to capture visual information from a deeper field of vision. Camera 14 is preferably mounted on a rotational mount and so can be turned to look in different directions by the drive system controller. The camera 14 may be a single camera in order to save on the build cost of the robot, or, in more developed models, may be one or more pairs of cameras arranged to look in diametrically opposed directions and provide a fuller field of vision. Pairs of cameras may also be arranged to look in the same direction to provide a stereoscopic view.
The cameras 12 and 14 are preferably digital CCD devices providing either a colour or monochrome image made up of a two dimensional array of pixels corresponding to the scene in their field of vision. The cameras 12 are connected via wired or wireless connection to the drive system controller housed in the base 4 of the robot 2. As will be described below, the drive system controller receives captured images from each of the cameras 12 and combines the images to form a three dimensional map of the terrain around the base 4 of the guide robot 2. This information is then used in fine control of the robot's motion. Based on the three dimensional map and any control instructions that have been previously provided to the controller for controlling the movement of the guide robot 2, the drive control system can operate the drive system and move the guide robot safely across the terrain.
In this example, the guide robot also comprises a lead or harness 16 which a user of the guide robot can take in their hands to be connected to the robot. As will be described later, the camera system of the robot can be arranged to track the user by following a marker placed on the user's feet. In this way, the robot responds to the user's motion, stopping when the user stops, and moving when the user moves. In this example, therefore, the robot behaves much like a real guide dog. In other embodiments, the lead may provide an optional control function for the robot drive system, allowing the user to control its speed setting. In this way, and where the robot has been preprogrammed with route information, the robot can guide
the user automatically, with the user simply following behind. In this example, the robot would still stop automatically, like a real guide dog, when an obstacle such as a road was encountered. However, on safe terrain, the robot would guide the user more positively rather than the other way round. Some mixing of the two modes of operation is also possible.
The guide robot chassis and the camera mounting pole 8 and frame can be made of hard, durable materials, such as steel, iron, aluminium, or plastic, depending on requirements and intended usage.
Figure 2 shows in more detail the interior components of the robot 2. The drive system controller 20 draws its power from a power source 22, such as a rechargeable battery. The controller 20 receives inputs from a camera system 24, including cameras 12 and 14, and optionally from a GPS (Global Positioning System) device 26, and a user control 28, where the active grip or handle 16 is present. The controller 20 subsequently provides an output to the drive system 30 to move the robot. The drive system controller is implemented as one or more computer processors, having respective memories.
Overall control of the robot can be carried out in a number of ways. In a first example embodiment, most closely modelling the behaviour of a real guide dog, the camera system 24 tracks a marker on the user's shoes or lower legs allowing the robot to follow the user's walking motion. The camera system is then only used for obstacle detection and to stop the robot (and the user) when an obstacle is detected.
In example embodiments involving more complex journey planning and general navigation, courses can be pre-programmed into memory in known fashion using start and end coordinates, as well as one or more optional way-station coordinates to fix the route from among any number of different available possibilities. The location of the robot received from the GPS device 26 is then fed to the drive control system 20 to carry out broad navigational control guiding the robot along the route. In this case, a compass to indicate the direction in which the robot is facing is also required. This provides an advantage over traditional guide dogs, which are unable to lead a blind person to a location unless the owner or guide dog knows the route. In such embodiments, the drive speed of the robot may advantageously be controlled through a user input device in the grip or handle 16, such as a squeeze trigger for example.
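By way of illustration only, the broad navigational control described above can be reduced to a simple heading correction computed from the GPS fix and the compass reading. The sketch below assumes a flat-earth approximation that is adequate over walking distances; the function names, coordinates and the equirectangular formula are illustrative and are not taken from the original disclosure.

    import math

    def bearing_to_waypoint(lat, lon, wp_lat, wp_lon):
        """Bearing (degrees clockwise from north) from the robot's GPS fix to
        the next way-station, using an equirectangular approximation that is
        adequate over the short distances of a walking route."""
        d_lat = math.radians(wp_lat - lat)
        d_lon = math.radians(wp_lon - lon) * math.cos(math.radians(lat))
        return math.degrees(math.atan2(d_lon, d_lat)) % 360.0

    def heading_correction(compass_heading, target_bearing):
        """Signed turn in degrees for the drive system; positive means turn
        clockwise (to the right)."""
        return (target_bearing - compass_heading + 180.0) % 360.0 - 180.0

    # Example: robot currently facing due east (90 degrees), next way-station
    # to the north-east; the result is the turn the drive system should make.
    turn = heading_correction(90.0,
                              bearing_to_waypoint(51.5007, -0.1246,
                                                  51.5014, -0.1190))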
As will now be discussed in detail, the cameras 12 and 14 provide fine control of the guide robot movement, allowing the robot to avoid collisions and navigate safely along the wider route. In the context of a guide dog robot, this can mean tracking along a kerb, without straying into the road, or navigating between cars. The
operation of the robotic vision and control apparatus will now be described in more detail and with reference to Figures 3, 4 and 5.
One of the most important qualities of the guide robot is the ability to accurately detect obstacles and dangers in the immediate area. Many robots use ultrasonic sensors to do this due to their advantageous detection range. However, ultrasonic sensors are not suitable for accurately determining the direction of an obstacle relative to the robot, due to problems with sensor resolution, interference, and the need to interface the sensors into the control system. A number of direction specific sensors have to be used, and this can greatly add to cost and to the complexity of the device. We have therefore appreciated that while such sensors are useful for detection of the proximity of obstacles, they are not ideal for use in the control of the guide robot itself.
As an alternative, it would be ideal if a live aerial image of the robot and its surroundings were available for guidance. However, in most applications, particularly commercial ones, this is either impossible or impractical. In order to overcome this, the robot shown in Figure 1 uses a special arrangement of downwardly pointing cameras which together emulate an aerial image.
Figure 3 is intended to illustrate the production of an aerial image using a single camera. If used with the robot of Figure 1, the camera could be mounted on the pole 8 and angled to look downwards towards the robot base 4 and the ground (in this example, it is assumed that the camera would look directly along the pole 8, and that the pole has a height of h1). Although the field of vision from a point like lens is essentially a cone, the image captured from a digital camera or indeed traditional film can be represented as a two dimensional rectangular array of pixels, and the field of vision can therefore be thought of as a rectangular or square based pyramid. The pole would of course appear in the image, but is omitted here for clarity.
We have appreciated therefore that if the situation illustrated in Figure 3 is modelled as a square-based pyramid, with the top point as the camera and the bottom as the image, then a preferable arrangement of cameras can be provided by cutting the pyramid parallel to the bottom face and providing a camera at each of the corners of the resulting square frustum. In practice, it is desirable to have the cameras arranged so that the cameras' fields of vision at least partially overlap with each other in a region 40 that corresponds to the field of vision of the single camera case.
The situation is illustrated in Figures 4 and 5, in which the desired field of vision 40 is represented as the shaded base of the pyramid in Figure 4 and the shaded square in Figure 5. The cameras are indicated with an X. In Figure 5, the
overlap between adjacent cameras is shown in more detail as the overlapping square fields. Square 40' for example represents the overlap between the fields of vision of cameras 12a and 12b. As the cameras' fields of vision will not overlap exactly, some of the field of vision of each camera lies outside of the desired field at the periphery. This area is labelled 41 in the figures.
As a result of the camera arrangement shown in Figures 4 and 5, the same image can then be produced but at a fraction of the height. In the present example, the height at which the cameras 12 must be placed to emulate an aerial image is reduced significantly from h1 to h2.
Figure 4 therefore illustrates the situation corresponding to the camera arrangement in Figure 1. In this case, the four cameras 12a, 12b, 12c, and 12d lie at respective corners of a square, and are angled downwards such that their respective fields of vision overlap with one another. By arranging the cross beams 10a and 10b on which the cameras are mounted to be of sufficient length, the image from the four cameras can be made to provide substantially the same area of coverage as the image from the single camera illustrated in Figure 3. This is advantageous for the guide robot as it means that the camera mounting pole 8 can be made shorter without losing the field of view around the robot base 4. This allows the robot to be smaller so that it can fit more easily through doors and into small rooms.
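A rough geometric sketch, not part of the original disclosure, illustrates why the mast can be shorter. For a square field of view of half-angle θ that must cover a square of side S on the ground, a single camera on the mast axis needs height h1 = (S/2)/tan θ, whereas a camera displaced a distance d along a cross beam only needs to reach max(d, S/2 − d) sideways, giving the smaller height h2. The numbers below are placeholders:

    import math

    def mast_height(coverage_side_m, half_angle_deg, beam_half_length_m=0.0):
        """Height needed for a downward-facing camera with a square field of
        view of the given half-angle to cover a square of side coverage_side_m,
        when the camera sits beam_half_length_m out from the mast centre.
        A geometric sketch only; real lenses and overlap needs will differ."""
        half_extent = max(beam_half_length_m,
                          coverage_side_m / 2.0 - beam_half_length_m)
        return half_extent / math.tan(math.radians(half_angle_deg))

    h1 = mast_height(2.0, 30.0)                            # single camera
    h2 = mast_height(2.0, 30.0, beam_half_length_m=0.5)    # corner camera
    # h2 is roughly half of h1 for the same area of coverage on the ground.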
A further advantage with the proposed arrangement of four cameras 12a, 12b, 12c and 12d, in particular the requirement that the fields of vision at least partially overlap, is that the images from two adjacent cameras can be combined to provide an image containing depth information, often referred to as a stereoscopic image disparity map. Although obstacle and edge detection techniques used in computer vision often rely on colour or contour boundary detection, we have found that this is unreliable for the intended use of the robot as pavement features such as markings, grating covers, or cracks between paving stones can appear erroneously as obstacles. Once calibrated, the four cameras of the present example can work as four stereoscopic pairs and produce a full three dimensional map of the area surrounding the robot. This allows obstacles to be detected based on their relative height facilitating more accurate navigation and obstacle avoidance.
Figure 5 illustrates how the fields of vision from the four different cameras might at least partially overlap in the overall field of view. In the figure, the image from cameras 12a and 12b is processed by the drive system controller in region 40' to provide a stereoscopic image. The same pair-wise processing is performed for each of the remaining pair-wise combinations 12b and 12c, 12c and 12d and 12d and 12a.
It will be appreciated that the position of the guide robot base 4 is essentially in the centre of the field of view, and the area of the image that is useful for navigation is the area surrounding the base 4. This is the part of the image that can be processed by the drive control system 20 to understand the surrounding terrain.
The area of this useful region of the image therefore depends on how widely the camera's field of view extends beyond the edges of the guide robot base 4. This is a function of the base 4 area, the camera lens that is used, as well as the dimensions of the camera imaging device, and is significantly improved by the arrangement of four cameras discussed above.
A further advantage of arranging the cameras to point downwards towards the base is that the cameras remain stationary relative to the base. This would not be the case for example if the cameras were mounted on a moveable head section as in the prior art document discussed above. This means that the base section of the robot will appear in each of the camera images always at the same location, allowing the robot to readily understand its position in the captured image without complicated processing. A calibration target or marker for the camera system can then be easily situated on the robot base. Further, the image of the robot can be easily deleted from the camera images (as it is stationary with respect to the images), which has been found to improve the obstacle detection technique.
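Because the base occupies the same pixels in every frame, deleting it can be as simple as applying a mask captured once at set-up. The following sketch assumes OpenCV; the mask file name is a placeholder and not part of the original disclosure.

    import cv2

    # Non-zero where terrain is visible, zero over the stationary robot base.
    # "base_mask.png" is a placeholder produced during set-up/calibration.
    terrain_mask = cv2.imread("base_mask.png", cv2.IMREAD_GRAYSCALE)

    def remove_base(frame, mask=terrain_mask):
        """Zero out the pixels occupied by the robot base so the base itself
        is never reported as an obstacle."""
        return cv2.bitwise_and(frame, frame, mask=mask)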
The way in which the captured images are used in navigation will now be described in more detail.
Referring again to Figure 5, it will be assumed that the guide robot is moving in the direction indicated by the arrow. In this case, the drive system controller selects the image from at least the two forward facing cameras 12a and 12b and combines them to provide a stereoscopic image of the overlapping region in front of the guide robot.
To generate a stereoscopic image it is necessary that the cameras first be calibrated. In an initial calibration phase of the cameras, a reference image such as a chessboard is used, and the image features are processed to generate distortion, rotation and translation matrices. This allows differences in camera image planes, issues with the cameras not pointing exactly along the same axis, and the shape of the lens not providing an entirely flat image to be taken into account. These matrices can subsequently be applied to the camera images to produce rectified images used in the generation of a disparity map.
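A minimal sketch of such a calibration and rectification step, assuming OpenCV's standard stereo calibration API, is given below. The chessboard dimensions, square size and the calibration_image_pairs() generator are placeholders rather than values or functions from the original disclosure.

    import cv2
    import numpy as np

    PATTERN = (9, 6)      # inner corners of the printed chessboard (placeholder)
    SQUARE_M = 0.025      # chessboard square size in metres (placeholder)

    # 3D coordinates of the chessboard corners in the board's own frame.
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_M

    obj_points, pts_left, pts_right, image_size = [], [], [], None
    for left, right in calibration_image_pairs():   # user-supplied image source
        ok_l, corners_l = cv2.findChessboardCorners(left, PATTERN)
        ok_r, corners_r = cv2.findChessboardCorners(right, PATTERN)
        if ok_l and ok_r:
            obj_points.append(objp)
            pts_left.append(corners_l)
            pts_right.append(corners_r)
            image_size = (left.shape[1], left.shape[0])

    # Intrinsic and distortion parameters for each camera on its own...
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_points, pts_left, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_points, pts_right, image_size, None, None)

    # ...then the rotation R and translation T between the pair.
    _, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_points, pts_left, pts_right, K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)

    # Rectification transforms so that epipolar lines become horizontal rows.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)

    def rectify(left, right):
        """Apply the calibration so the pair can feed a disparity computation."""
        return (cv2.remap(left, m1x, m1y, cv2.INTER_LINEAR),
                cv2.remap(right, m2x, m2y, cv2.INTER_LINEAR))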
As is known in the art, producing a stereoscopic image relies upon taking two essentially identical images that differ only in the viewing angle, rectifying the images as described above, identifying features of the two separate images that are identical but
are at different locations in the image as a result of the translated positions of the cameras and generating a disparity map, essentially a measure of displacement.
From the disparity map it is then straightforward to calculate depth information for the features in the image and assign depth values to pixels of the image. In this way, the two two-dimensional images can be combined into a single three dimensional array having depth information associated with each pixel. Techniques for producing a stereoscopic image from two images of the same scene are well known in the art and so will not be described in detail here.
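By way of illustration, and assuming OpenCV's block-matching disparity routine rather than any particular algorithm from the disclosure, a depth map for a rectified pair can be derived roughly as follows; the focal length and baseline are placeholder values that would come from the calibration above.

    import cv2
    import numpy as np

    FOCAL_PX = 700.0      # focal length in pixels (placeholder, from calibration)
    BASELINE_M = 0.30     # separation of the paired cameras in metres (placeholder)

    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)

    def depth_map(rect_left_gray, rect_right_gray):
        """Depth in metres per pixel of a rectified grayscale pair, using
        depth = focal_length * baseline / disparity."""
        disp = matcher.compute(rect_left_gray, rect_right_gray)
        disp = disp.astype(np.float32) / 16.0     # StereoBM returns fixed point
        depth = np.full(disp.shape, np.inf, np.float32)
        valid = disp > 0
        depth[valid] = FOCAL_PX * BASELINE_M / disp[valid]
        return depth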
Once a depth map of the terrain in the direction of travel of the guide robot has been produced, the controller determines from the image whether or not the guide robot can proceed in the direction of travel. This is achieved as shown in Figure 6 by breaking the visual region of interest 60 down into smaller areas 62 for comparison. This means that a complicated depth map can quickly be processed by the controller. An average depth value is then calculated for each rectangular or square area of the image 62, and compared with a threshold function indicating the maximum safe height over which the guide robot can travel. If the values in the depth map indicate that the terrain poses an obstacle to the guide robot, then the controller may stop the forward motion of the guide robot and turn the robot to the right or left to investigate the terrain further or navigate around. In Figure 7 for example, the shaded areas indicate one or more regions 64 in the visual field of interest that are greater than the threshold. It will be appreciated that the depth map is relative, as what is in fact most important is the depth difference between adjacent regions of the map. Obstacles can be identified by comparing the depth value calculated for adjacent regions against a threshold value which indicates a maximum permissible depth difference.
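The region-averaging and neighbour-comparison step just described might be sketched as below; the grid dimensions and the permitted depth step are arbitrary placeholders rather than values given in the disclosure.

    import numpy as np

    GRID = (8, 8)         # rows x columns of comparison areas 62 (placeholder)
    MAX_STEP_M = 0.05     # maximum permissible depth difference between
                          # adjacent areas before an obstacle is flagged (placeholder)

    def region_averages(depth, grid=GRID):
        """Average depth for each rectangular area of the region of interest."""
        rows, cols = grid
        h, w = depth.shape
        return np.array([[np.nanmean(depth[r*h//rows:(r+1)*h//rows,
                                           c*w//cols:(c+1)*w//cols])
                          for c in range(cols)] for r in range(rows)])

    def obstacle_mask(depth, grid=GRID, max_step=MAX_STEP_M):
        """True for any area whose average depth differs from a horizontal or
        vertical neighbour by more than the permitted step (e.g. a kerb edge)."""
        avg = region_averages(depth, grid)
        mask = np.zeros(avg.shape, bool)
        step_v = np.abs(np.diff(avg, axis=0)) > max_step   # vertical neighbours
        step_h = np.abs(np.diff(avg, axis=1)) > max_step   # horizontal neighbours
        mask[:-1, :] |= step_v
        mask[1:, :] |= step_v
        mask[:, :-1] |= step_h
        mask[:, 1:] |= step_h
        return mask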
For the guide robot of the present example, obstacles might be a kerb, staircase or wall that would impede the smooth forward motion of the robot and pose a safety hazard for its operator. The controller may also activate an audible alarm, such as the chiming of a bell, in this case, in order to alert the user to the proximity of an obstacle. Alternatively, the robot can stop, and transmit the signal to the user via a tug on the lead 16.
The drive system controller may select which of the camera images to use in order to carry out edge and obstacle detection (as discussed above with respect to Figures 6 and 7) based on the direction of travel. In practice, however, it is preferred if the images from all camera pairs are combined and the depth comparison is carried out over the full visual field available. This means that the robot essentially has aerial vision rather than vision limited to the direction of travel only. It also means that the
visual regions to the side of the robot can be used by way of reference to correct the robot's direction of travel. For example, using depth detection of the terrain to the side of the robot means that the robot can easily detect pavement features such as kerbs that lie parallel to the desired direction of travel, and track along these until the forward facing images detect an obstacle.
If desired, the processing carried out on the forward facing cameras can be carried out at a higher resolution to offer a greater degree of obstacle detection. In this case, the areas 62 shown in Figure 6 will be smaller for the forward facing regions than for the side and rear facing regions.
The drive system controller uses control algorithms to plot a course between a start and end location, and uses the continuous images produced by the cameras to handle the moment by moment fine level navigation. Kerbs and pavement features are detected using the depth technique described, allowing the robot to track a kerb or move around an obstacle. Where the robot has been programmed with information about the terrain, such as the location of roads and other areas occupied by vehicles, the robot will attempt to stay in a safe region, such as a pavement or pedestrian precinct, at all times. Safe regions are defined by making an initial assumption that the robot is in a safe region at start up, and subsequently ensuring that it does not stray beyond identified obstacles like kerbs. GPS coordinates can also be used, and/or map information can be projected onto the obtained aerial images, to identify safe areas that extend more widely.
In order to track the operator in operation, a small marker is placed on the user's foot or lower legs for detection by the controller 20. The drive controller 20 detects the marker via the cameras 12 and seeks to keep the marker within a predetermined region of the image. If the marker strays outside of the predetermined region then the drive system controller instructs the robot to move or increase its speed accordingly. In order to achieve this, the visual region can be divided into sub-regions that correspond to a forwards, backwards and central stop region. If the user marker is detected in the central region, no movement to keep up with the user is required. If the marker is detected in the forwards region, then the drive controller 20 instructs the robot to move forwards to catch up with the user. If the marker is detected in the rearwards region, then the drive controller 20 instructs the robot to move backwards in order to draw alongside the user. The same principle can be used with the left and right directions also. This allows the guide robot to follow the user's movement and remain in stride.
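A sketch of that region-based following logic is given below; the band boundaries are purely illustrative fractions of the image height and are not values from the disclosure.

    def follow_command(marker_y, image_height, forward_band=0.35, rear_band=0.65):
        """Map the vertical position of the detected foot marker to a drive
        command.  A marker in the central band produces no motion."""
        frac = marker_y / float(image_height)
        if frac < forward_band:
            return "forward"     # user has moved ahead: catch up
        if frac > rear_band:
            return "reverse"     # user has dropped behind: draw alongside
        return "stop"            # user is within the central stop region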
The drive control system also receives an image input from camera 14. Unlike cameras 12, camera 14 is operable under the control of the controller to determine
whether there is danger of collision between the guide robot and other moving objects. An example is when the guide robot is navigating across a road, and it is necessary to look for traffic coming from the left and right directions. In this case, the camera is turned to look in the direction of oncoming traffic and two images of the scene are taken in quick succession. A comparison is then made of one image with another to determine whether areas of the image that are different indicate rapid movement of an object towards the robot. Movement can be detected by looking for optical flow in the images, that is by calculating the motion vectors between portions of the image and determining if the motion vectors are of sufficient size to indicate that part of the image is moving. A single camera 14 can be used for this, and simply turned in real time to face all of the required detection directions. A stereoscopic pair of cameras, by providing depth detection, can allow the control system to distinguish between movement of smaller, closer objects, such as insects, and larger, more distant objects like cars. Additional cameras may also be provided diametrically opposed to the first camera, or at an angle, in order to reduce the amount of rotational movement needed to adequately survey the scene.
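One illustrative way to realise that optical flow check, assuming OpenCV's dense Farnebäck flow rather than any particular method from the disclosure, is shown below; the flow-magnitude threshold and moving-pixel fraction are placeholders.

    import cv2
    import numpy as np

    FLOW_THRESHOLD_PX = 4.0   # flow magnitude treated as significant motion (placeholder)
    MOVING_FRACTION = 0.02    # fraction of pixels that must exceed it (placeholder)

    def rapid_motion_present(prev_gray, next_gray):
        """True if a substantial part of the scene shows large motion vectors
        between two images from camera 14 taken in quick succession."""
        # Arguments: pyramid scale 0.5, 3 levels, window 15, 3 iterations,
        # poly_n 5, poly_sigma 1.2, no flags.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)
        return np.mean(magnitude > FLOW_THRESHOLD_PX) > MOVING_FRACTION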
Although an example guide robot has been described using wheels or rollers, any suitable propulsion system may be used, such as a rotary fan providing a cushion of air in the fashion of a hovercraft, or caterpillar tracks for example. Further, although in the present example a single camera mast has been described, it will be appreciated that in alternative examples one or more separate mountings for each camera could be used.
Although in the present example the arrangement of cameras has been described as a square based pyramid requiring four cameras, it will be appreciated that a similar vision effect could also be achieved with three cameras arranged on a truncated tetrahedron, or with a higher number of cameras arranged on other shapes of base. Four cameras are preferred however, as the arrangement provides the most efficient way of representing the terrain immediately adjacent to the periphery of the base on all sides.
Although the robot in this example has been described as a guide robot for obstacle avoidance, it will be appreciated from the optional GPS and navigation system that it may also be self-guiding. In alternative embodiments, the robot may also be operated under the control of user input, either by an active control grip or handle 16, or by remote control. In this sense the self-guiding feature can be complementary to external control or input.
A guide dog robot has therefore been described which is small enough to fit through standard doors, and which has good manoeuvrability and reliability. It will be
appreciated that the construction of the robot and the techniques for control need not be limited to guiding applications for the visually impaired and may be used in any situation where a robot is needed to orientate itself in and navigate around an environment.

Claims (21)

Claims
1. A land based robot comprising:
a base section having a drive system for moving the robot over a terrain;
at least one camera mast mounted on the base section and supporting at least a first pair of downwards facing cameras, each camera in the pair arranged to point downwards towards the base of the mast to capture an image of the region of terrain immediately adjacent the base section, wherein the fields of view of each camera at least partly overlap with each other, such that the regions of terrain represented in each image have an overlapping region that appears in both images; and
a controller arranged to:
receive respective images from the cameras in the at least a first pair of cameras;
based on the respective images, generate a stereoscopic image of the terrain in the overlapping region, the stereoscopic image including a depth map of the terrain; and
based on the depth map, control the drive system to move the robot through the terrain and avoid obstacles.
2. The robot of claim 1, wherein the camera mast supports a second pair of cameras, each camera in the second pair arranged to point downwards towards the base of the mast to capture an image of the region of terrain immediately adjacent the base section, wherein the field of view of a camera in the first pair at least partly
overlaps with the field of view of a camera in the second pair, and the first and second pairs of cameras provide images around the entire periphery of the base section.
3. The robot of claim 2, wherein when generating the stereoscopic image the controller selects from the two pairs of cameras, only the images from the two adjacent cameras that lie closest to the direction of travel of the robot to generate a first stereoscopic image.
4. The robot of claim 2 or 3, wherein when generating the stereoscopic image, the controller selects the images from all of the cameras in the first and second pairs of cameras, and generates a plurality of stereoscopic images which are combined into a depth map of the terrain around the entire periphery of the base section.
5. The robot of any preceding claim, wherein the controller is operable to:
divide the depth map into a plurality of exclusive regions;
calculate for each region an average depth value;
identify obstacles by comparing the depth value calculated for adjacent regions against a threshold value indicating a maximum permissible depth difference.
6. The robot of any preceding claim, wherein the controller is operable to detect in the images received from the cameras, a marker placed on the footwear of an operator, and to control the drive system based upon the location in the image of the marker.
7. The robot of any of claims 2 to 4, wherein the at least one camera mast
comprises a vertically extending central mast and two perpendicular cross beams, mounted on the camera mast in the horizontal plane, and supporting the first and second pairs of cameras, each respective camera in the pair located at a respective end of the cross beam.
8. The robot of any preceding claim wherein the downwards facing cameras are mounted in a fixed orientation relative to the base.
9. The robot of any preceding claim comprising:
a further camera arranged to look in the horizontal plane away from the robot; wherein the controller is operable to capture at least two images from the camera;
analyse the images to determine whether visual differences indicative of a moving object are present; and based on the analysis operate the drive system or activate an alarm.
10. The robot of any preceding claim comprising a GPS system for navigating along a preprogrammed course.
11. A computer implemented method of controlling a land based robot, the robot comprising a base section having a drive system for moving the robot over a terrain, and at least one camera mast mounted on the base section and supporting at least a first pair of downwards facing cameras, each camera in the pair arranged to
point downwards towards the base of the mast to capture an image of the region of terrain immediately adjacent the base section, wherein the fields of view of each camera at least partly overlap with each other, such that the regions of terrain represented in each image have an overlapping region that appears in both images; wherein the method comprises:
receiving respective images from the cameras in the at least a first pair of cameras;
based on the respective images, generating a stereoscopic image of the terrain in the overlapping region, the stereoscopic image including a depth map of the terrain; and based on the depth map, controlling the drive system to move the robot through the terrain and avoid obstacles.
12. The method of claim 11, wherein the camera mast supports a second pair of cameras, each camera in the second pair arranged to point downwards towards the base of the mast to capture an image of the region of terrain immediately adjacent the base section, wherein the field of view of a camera in the first pair at least partly overlaps with the field of view of a camera in the second pair, and the first and second pairs of cameras provide images around the entire periphery of the base section.
13. The method of claim 12, comprising when generating the stereoscopic image, selecting from the two pairs of cameras, only the images from the two adjacent cameras that lie closest to the direction of travel of the robot to generate a first
stereoscopic image.
14. The method of claim 12 or 13, comprising when generating the stereoscopic image, selecting the images from all of the cameras in the first and second pairs of cameras, and generating a plurality of stereoscopic images which are combined into
a depth map of the terrain around the entire periphery of the base section.
15. The method of any of claims 11 to 14, comprising:
dividing the depth map into a plurality of exclusive regions;
calculating for each region an average depth value; and
identifying obstacles by comparing the depth value calculated for adjacent regions against a threshold value indicating a maximum permissible depth difference.
16. The method of any of claims 11 to 15, comprising detecting in the images received from the cameras, a marker placed on the footwear of an operator, and controlling the drive system based upon the location in the image of the marker.
17. The method of any of claims 11 to 16, wherein the robot comprises a further camera arranged to look in the horizontal plane away from the robot, the method comprising:
capturing at least two images from the further camera;
analysing the images to determine whether visual differences in the image indicative of a moving object are present; and based on the analysis, operating the drive system or activating an alarm.
18. The method of any of claims 11 to 17 comprising receiving a GPS signal and operating the robot to navigate along a preprogrammed course.
19. A robotic guide dog for the visually impaired comprising the robot of any of claims 1 to 10.
20. A robot substantially as described herein and with reference to the drawings.
21. A computer implemented method of controlling a land based robot, substantially as described herein and with reference to the drawings.
GB1204409.5A 2012-03-13 2012-03-13 Robot vision apparatus and control method and corresponding land robot Withdrawn GB2500214A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1204409.5A GB2500214A (en) 2012-03-13 2012-03-13 Robot vision apparatus and control method and corresponding land robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1204409.5A GB2500214A (en) 2012-03-13 2012-03-13 Robot vision apparatus and control method and corresponding land robot

Publications (2)

Publication Number Publication Date
GB201204409D0 GB201204409D0 (en) 2012-04-25
GB2500214A true GB2500214A (en) 2013-09-18

Family

ID=46026459

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1204409.5A Withdrawn GB2500214A (en) 2012-03-13 2012-03-13 Robot vision apparatus and control method and corresponding land robot

Country Status (1)

Country Link
GB (1) GB2500214A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104827483A (en) * 2015-05-25 2015-08-12 山东理工大学 Method for grabbing object through mobile manipulator on basis of GPS and binocular vision positioning
CN104827470A (en) * 2015-05-25 2015-08-12 山东理工大学 Mobile manipulator control system based on GPS and binocular vision positioning
CN106890067A (en) * 2017-01-06 2017-06-27 南京邮电大学 Indoor blind man navigation robot
DE102016003816A1 (en) * 2016-03-26 2017-09-28 Audi Ag Industrial robot with a monitoring space carried by the manipulator
WO2018076814A1 (en) * 2016-10-25 2018-05-03 中兴通讯股份有限公司 Navigation method and system, and unmanned aerial vehicle
EP3238593A4 (en) * 2014-12-25 2018-08-15 Toshiba Lifestyle Products & Services Corporation Electric vacuum cleaner
EP3430879A4 (en) * 2016-03-17 2019-02-27 Honda Motor Co., Ltd. Unmanned traveling work vehicle
WO2020036910A1 (en) * 2018-08-13 2020-02-20 R-Go Robotics Ltd. System and method for creating a single perspective synthesized image
EP3906452B1 (en) * 2019-01-04 2023-03-01 Balyo Companion robot system comprising an autonomously guided machine

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114347064B (en) * 2022-01-31 2022-09-20 深圳市云鼠科技开发有限公司 Robot collision detection method and device based on optical flow, computer equipment and storage medium
CN116076387B (en) * 2023-02-09 2023-07-28 深圳市爱丰达盛科技有限公司 Guide dog training navigation intelligent management system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0896267A2 (en) * 1997-08-04 1999-02-10 Fuji Jukogyo Kabushiki Kaisha Position recognizing system of autonomous running vehicle
US20080297590A1 (en) * 2007-05-31 2008-12-04 Barber Fred 3-d robotic vision and vision control system
WO2011146254A2 (en) * 2010-05-20 2011-11-24 Irobot Corporation Mobile human interface robot

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0896267A2 (en) * 1997-08-04 1999-02-10 Fuji Jukogyo Kabushiki Kaisha Position recognizing system of autonomous running vehicle
US20080297590A1 (en) * 2007-05-31 2008-12-04 Barber Fred 3-d robotic vision and vision control system
WO2011146254A2 (en) * 2010-05-20 2011-11-24 Irobot Corporation Mobile human interface robot

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3238593A4 (en) * 2014-12-25 2018-08-15 Toshiba Lifestyle Products & Services Corporation Electric vacuum cleaner
US10314452B2 (en) 2014-12-25 2019-06-11 Toshiba Lifestyle Products & Services Corporation Vacuum cleaner
CN104827470A (en) * 2015-05-25 2015-08-12 山东理工大学 Mobile manipulator control system based on GPS and binocular vision positioning
CN104827483A (en) * 2015-05-25 2015-08-12 山东理工大学 Method for grabbing object through mobile manipulator on basis of GPS and binocular vision positioning
EP3430879A4 (en) * 2016-03-17 2019-02-27 Honda Motor Co., Ltd. Unmanned traveling work vehicle
US10725476B2 (en) 2016-03-17 2020-07-28 Honda Motor Co., Ltd. Unmanned operation vehicle
DE102016003816A1 (en) * 2016-03-26 2017-09-28 Audi Ag Industrial robot with a monitoring space carried by the manipulator
DE102016003816B4 (en) 2016-03-26 2019-05-29 Audi Ag Industrial robot with a monitoring space carried by the manipulator
WO2018076814A1 (en) * 2016-10-25 2018-05-03 中兴通讯股份有限公司 Navigation method and system, and unmanned aerial vehicle
CN106890067B (en) * 2017-01-06 2019-05-31 南京邮电大学 Indoor blind man navigation robot
CN106890067A (en) * 2017-01-06 2017-06-27 南京邮电大学 Indoor blind man navigation robot
WO2020036910A1 (en) * 2018-08-13 2020-02-20 R-Go Robotics Ltd. System and method for creating a single perspective synthesized image
CN112513931A (en) * 2018-08-13 2021-03-16 R-Go机器人有限公司 System and method for creating a single-view composite image
EP3906452B1 (en) * 2019-01-04 2023-03-01 Balyo Companion robot system comprising an autonomously guided machine

Also Published As

Publication number Publication date
GB201204409D0 (en) 2012-04-25

Similar Documents

Publication Publication Date Title
GB2500214A (en) Robot vision apparatus and control method and corresponding land robot
US11041958B2 (en) Sensing assembly for autonomous driving
CN107992052B (en) Target tracking method and device, mobile device and storage medium
US9896810B2 (en) Method for controlling a self-propelled construction machine to account for identified objects in a working direction
US11679961B2 (en) Method and apparatus for controlling a crane, an excavator, a crawler-type vehicle or a similar construction machine
KR101703177B1 (en) Apparatus and method for recognizing position of vehicle
CN107627957B (en) Working vehicle
JP5536125B2 (en) Image processing apparatus and method, and moving object collision prevention apparatus
KR100901311B1 (en) Autonomous mobile platform
WO2017219751A1 (en) Mobile suitcase having automatic following and obstacle avoidance functions, and using method therefor
US20170123425A1 (en) Salient feature based vehicle positioning
US20180039273A1 (en) Systems and methods for adjusting the position of sensors of an automated vehicle
JP6510654B2 (en) Autonomous mobile and signal control system
JP7520875B2 (en) Autonomous steering and parking of vehicles-trailers
CN110888448A (en) Limiting motion of a mobile robot
CN113085896B (en) Auxiliary automatic driving system and method for modern rail cleaning vehicle
JP6800166B2 (en) A device for determining the space in which a vehicle can run, running based on it, and a vehicle
WO2020100595A1 (en) Information processing apparatus, information processing method, and program
WO2022004494A1 (en) Industrial vehicle
JP2019050007A (en) Method and device for determining position of mobile body and computer readable medium
WO2018072908A1 (en) Controlling a vehicle for human transport with a surround view camera system
CN110945510A (en) Method for spatial measurement by means of a measuring vehicle
RU113395U1 (en) VIDEO SURVEILLANCE SYSTEM FROM VEHICLE IN MOTION
Perng et al. Vision-Based human following and obstacle avoidance for an autonomous robot in Intersections
RU124514U1 (en) VIDEO SURVEILLANCE SYSTEM FROM VEHICLE IN MOTION

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)