WO2010142689A2 - An object detection device - Google Patents
- Publication number
- WO2010142689A2 (application PCT/EP2010/058025; EP2010058025W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- object detection
- detection device
- image
- visually impaired
- time
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61H—PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
- A61H3/00—Appliances for aiding patients or disabled persons to walk about
- A61H3/06—Walking aids for blind persons
- A61H3/061—Walking aids for blind persons with electronic detecting or guiding means
- A61H2003/063—Walking aids for blind persons with electronic detecting or guiding means with tactile perception
- A61H2201/00—Characteristics of apparatus not provided for in the preceding codes
- A61H2201/50—Control means thereof
- A61H2201/5023—Interfaces to the user
- A61H2201/5043—Displays
- A61H2201/5046—Touch screens
Landscapes
- Health & Medical Sciences (AREA)
- Epidemiology (AREA)
- Pain & Pain Management (AREA)
- Physical Education & Sports Medicine (AREA)
- Rehabilitation Therapy (AREA)
- Life Sciences & Earth Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Traffic Control Systems (AREA)
- Rehabilitation Tools (AREA)
Abstract
According to the invention there is provided an object detection device for use by a visually impaired individual comprising a time of flight camera and an associated tactile feedback unit, whereby, the time of flight camera receives an image within a viewing range of the time of flight camera; the object detection device produces object detection information regarding: a presence of one or more objects detected in the image, an approximate size for each detected object, and, an approximate distance to each detected object; at least a portion of the object detection information is transmitted from the time of flight camera to the tactile feedback unit; and, the tactile feedback unit relays the transmitted object detection information to the visually impaired individual. The advantage is that a visually impaired individual will be provided with more object detection information than with any other known device. In particular, the time of flight camera allows distance information to be provided to the visually impaired individual, thus allowing the visually impaired individual to build a realistic three-dimensional representation of the surrounding environment. This greatly increases the visually impaired individual's ability to safely negotiate any obstacles and/or objects in the surrounding environment. Moreover, the visually impaired individual can interact in a more full and immersive manner than was previously possible with all other known types of prior art devices.
Description
"An object detection device"
Introduction
The present invention relates to an object detection device for a visually impaired individual.
Various devices have been proposed heretofore for assisting visually impaired individuals in detecting obstacles and/or objects in front of them. At the most basic end of the device options available to a visually impaired individual is a conventional cane, commonly known as a white cane, which is held by the visually impaired individual and is used to detect objects and/or obstacles such as steps, walls and other such obstructions in front of the visually impaired individual.
However, this type of device is quite basic and provides only the most rudimentary feedback to the visually impaired individual, as only objects immediately in front of the visually impaired individual can be detected. The use of a cane gives feedback in relation to obstacles on the ground but will not detect an overhanging obstacle, for example a branch of a tree. The length of the cane is dependent on the height of the person and is usually long enough to cover two strides. However, only two strides of warning does little to provide confidence and independence to many visually impaired individuals. Moreover, there is no feedback to the visually impaired individual regarding the size of the object, or the locations of other objects nearby or beyond the object immediately in front of the individual.
It is also known from the prior art to use a camera to obtain images of what is in front of a visually impaired individual, and provide information regarding objects in the image back to the visually impaired individual. For example, it is known to use a tongue display unit to provide feedback regarding the presence and size of objects in an image for a visually impaired individual. The tongue display unit receives signals regarding the light intensity of an object and provides sensory feedback to the user through their tongue by mildly stimulating the nerves in the tongue using small electrical currents. The obstacle information is first segmented by thresholding the light intensity of the objects. This prior art cannot detect the distance to an object; only the light intensity of the object is detected via a two-dimensional camera. This results in dark objects in the foreground not being displayed on the tongue display unit. It also results in objects that are multicoloured, having both high and low light intensity, being displayed incorrectly, as these objects are only partially displayed by the tongue display unit.
Whilst such a device offers an improvement over the traditional white cane, as the visually impaired individual's hands are completely free for carrying out other tasks, the device can only offer the same amount of basic information to the visually impaired individual, namely, the presence of either bright or dark objects. Moreover, the distance to objects is not presented, as that information is unknown.
Furthermore, using a white cane forces the visually impaired individual to carry the white cane everywhere with them. This is cumbersome and awkward for the visually impaired individual. The visually impaired individual must be very careful when stowing the white cane in public spaces such as restaurants, cinemas and the like. Moreover, the visually impaired individual, when using the white cane, will have one of their hands permanently occupied gripping the white cane. This is disadvantageous as it only leaves one free hand for the visually impaired individual to use.
Advancements to the conventional white cane have been made by equipping the canes with ultrasound technology. The ultrasound device relays information to the visually impaired individual using the cane via vibrating nodes in the handle of the cane or via variable frequency sounds which require the visually impaired user to wear earphones.
However, these devices have significant limitations in that the ultrasound is emitted downward and upward and bounces back off objects on a single plane. Thus, the ultrasound only gives two-dimensional information about the environment in front of the visually impaired user. The vibrating nodes are also poor at conveying to the visually impaired user exactly what type of obstacle is in front of them, as well as the location of the obstacle.
A well-known alternative is to use a guide dog. Using a guide dog, the visually impaired individual can avoid obstacles, safely cross the street and perform other tasks, as the dog can make decisions for the visually impaired individual based on its training. Having a guide dog is also perceived as useful on a busy street, as oncoming people will see the guide dog and move to avoid collision with the vision impaired individual. However, the use of a guide dog means that the vision impaired person has no real spatial awareness of the immediate environment and is very dependent on the guide dog. The associated costs and care required for the dog add additional burdens onto the visually impaired individual.
In order to provide the visually impaired individual with as much freedom as possible, attempts have been made to provide a "hands-free" device which allows visually impaired individuals to negotiate objects which are in their vicinity. However, as can be seen below, the majority of these devices do not provide sufficient information to the visually impaired individual to allow them to fully interact with the world around them. This results in the visually impaired individual feeling disengaged from their surroundings, and prevents the visually impaired individual from interacting in an immersive and fully engaged manner with their surrounding environment.
A further known prior art device which has been developed for use by visually impaired individuals is a voice guided GPS tracking device. The GPS tracking device comprises a GPS receiver which can lock on to the location of the visually impaired individual. Depending on the destination provided to the GPS tracking device, voice instructions are given by the GPS tracking device to direct the visually impaired individual to their selected destination. The GPS tracking device also incorporates a voice recognition system as an input device to allow the visually impaired user to input the destination information and control the use of the GPS tracking device.
Visually impaired individuals typically wish to receive as much information as possible regarding objects in their vicinity, as this minimises the possibility of injuring themselves as they routinely go about their day-to-day tasks. For example, a visually impaired individual would preferably wish to know the presence of an object, such as a step up into an entrance to a passageway, how high the object, such as the step(s), is, and, importantly, how far away the object is. This object information would allow the visually impaired individual to safely traverse the object.
In essence, the solutions which are currently available in the marketplace are not sufficient to allow visually impaired individuals to safely and fully interact with the objects in their vicinity. The majority of visually impaired individuals would ideally like to have as much knowledge as possible about the actual terrain in which they are located. Ideally, the visually impaired individual would have a fully three-dimensional "picture" or representation of their surroundings, including all objects and obstacles in their vicinity. This would include objects immediately in the foreground, in addition to objects which are further away but must be planned to be negotiated by the visually impaired individual.
It is a goal of the present invention to provide an apparatus/method that overcomes at least one of the above mentioned problems.
Summary of the Invention
According to the invention there is provided an object detection device for use by a visually impaired individual comprising a time of flight camera and an associated tactile feedback unit, whereby, the time of flight camera receives an image within a viewing range of the time of flight camera; the object detection device produces object detection information regarding: a presence of one or more objects detected in the image, an approximate size for each detected object, and, an approximate distance to each detected object; at least a portion of the object detection information is transmitted from the time of flight camera to the tactile feedback unit; and, the tactile feedback unit relays the transmitted object detection information to the visually impaired individual.
The advantage is that a visually impaired individual will be provided with more object detection information than with any other known device. In particular, the time of flight camera allows distance information to be provided to the visually impaired individual, thus allowing the visually impaired individual to build a realistic three-dimensional representation of the surrounding environment. This greatly increases the visually impaired individual's ability to safely negotiate any obstacles and/or objects in the surrounding environment. Moreover, the visually impaired individual can interact in a more full and immersive manner than was previously possible with all other known types of prior art devices.
In a further embodiment, the object detection device further comprises an audio feedback unit to receive at least a portion of the object detection information from the time of flight camera and relay the received object detection information to the visually impaired individual. The audio feedback unit can be used to provide a certain element of redundancy in the feedback given to the visually impaired individual. As visually impaired individuals typically have heightened senses to compensate for the loss of vision, the audio feedback is a particularly effective method of providing information to a visually impaired individual without requiring the visually impaired individual to use their hands.
In a further embodiment, the tactile feedback unit is a human machine interface.
In a further embodiment, the human machine interface is an actuated pin matrix.
In a further embodiment, the human machine interface is a feedback pad comprising a non-Newtonian fluid.
In a further embodiment, the human machine interface is a feedback pad comprising a shape memory alloy.
In a further embodiment, the human machine interface is a feedback pad comprising an electro active polymer.
In a further embodiment, the human machine interface is a feedback pad comprising an electro-rheological fluid.
In a further embodiment, the human machine interface is a feedback pad comprising a magneto-rheological fluid.
In a further embodiment, the human machine interface is a tongue display unit.
In a further embodiment, the human machine interface is a touchscreen unit. In a preferred embodiment, the touchscreen unit comprises a plurality of strips of piezoelectric material which are used to detect the location of a user's touch on the touchscreen.
In a further embodiment, the audio feedback unit comprises stereo headphones. As the image received from the time of flight camera is analysed in a sweep, for example from left to right, a corresponding audio tone can be provided to the user through stereo headphones, the intensity of which is swept, in a manner corresponding to the sweep performed by the analysis, from the left ear headphone to the right ear headphone.
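The patent does not give an implementation for this audio sweep; the following Python sketch (not part of the original disclosure) shows one plausible rendering, where the function name, the 440 Hz tone, the 7.5 m operating range and the one-second sweep duration are all illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz

def render_sweep_tone(column_distances, max_range_m=7.5,
                      duration_s=1.0, freq_hz=440.0,
                      sample_rate=SAMPLE_RATE):
    """Render a stereo tone whose pan sweeps left-to-right in step with
    the image analysis, and whose loudness encodes object proximity.

    column_distances: nearest-object distance (metres) per image column,
    ordered left to right; np.inf where no object was detected.
    Returns an (n_samples, 2) float array of left/right samples in [-1, 1].
    """
    n_samples = int(duration_s * sample_rate)
    t = np.arange(n_samples) / sample_rate
    tone = np.sin(2.0 * np.pi * freq_hz * t)

    # Map each output sample to the image column being "read" at that instant.
    cols = np.minimum((t / duration_s * len(column_distances)).astype(int),
                      len(column_distances) - 1)
    dist = np.asarray(column_distances, dtype=float)[cols]

    # Closer objects -> louder tone; silence where nothing is detected.
    loudness = np.where(np.isfinite(dist),
                        np.clip(1.0 - dist / max_range_m, 0.0, 1.0), 0.0)

    # Linear pan: intensity moves from the left earphone to the right one.
    pan = t / duration_s                # 0.0 = full left, 1.0 = full right
    left = tone * loudness * (1.0 - pan)
    right = tone * loudness * pan
    return np.stack([left, right], axis=1)
```

A linear pan is the simplest choice here; a constant-power pan (square-root weights) would keep the perceived loudness more even across the sweep.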
In a further embodiment, the object detection device comprises an image processing unit which receives the image from the time of flight camera and processes the image to detect any objects in the image.
In a further embodiment, the image processing unit sweeps across the received image to detect objects in the operational viewing range of the time of flight camera. The viewing range can be increased or decreased by the user using thresholding techniques.
In a further embodiment, object detection is based on determining the distance from the time of flight camera to a closest object in each of a plurality of sectors within the image, determining if a plurality of detected closest objects in adjacent sectors form a single object, and calculating the presence of one or more objects based on the determination.
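As a concrete reading of this sector-based detection, the sketch below (an illustration under stated assumptions, not the patented algorithm itself) divides the image into vertical sectors, takes the closest time of flight return in each, and merges adjacent sectors whose closest returns lie within a depth tolerance; the sector count, operating range and merge tolerance are all assumed values.

```python
import numpy as np

def detect_objects(depth_image, n_sectors=10, max_range_m=7.5,
                   merge_tolerance_m=0.3):
    """Find the closest return in each vertical sector of a depth image,
    then merge adjacent sectors at similar depths into single objects.

    depth_image: 2-D array of per-pixel distances (metres) from the TOF
    camera; values beyond max_range_m are treated as empty.
    Returns a list of (first_sector, last_sector, distance_m) tuples.
    """
    h, w = depth_image.shape
    bounds = np.linspace(0, w, n_sectors + 1, dtype=int)

    # Closest object distance per sector (inf if the sector is empty).
    closest = []
    for i in range(n_sectors):
        sector = depth_image[:, bounds[i]:bounds[i + 1]]
        valid = sector[sector <= max_range_m]
        closest.append(valid.min() if valid.size else np.inf)

    # Merge runs of adjacent sectors at similar depth into one object.
    objects, run_start = [], None
    for i, d in enumerate(closest):
        if np.isinf(d):
            if run_start is not None:
                objects.append((run_start, i - 1, min(closest[run_start:i])))
                run_start = None
        elif run_start is None:
            run_start = i
        elif abs(d - closest[i - 1]) > merge_tolerance_m:
            objects.append((run_start, i - 1, min(closest[run_start:i])))
            run_start = i
    if run_start is not None:
        objects.append((run_start, n_sectors - 1, min(closest[run_start:])))
    return objects
```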
In a further embodiment, the object detection device further comprises a solar cell to power the time of flight camera, the tactile feedback unit and/or the audio feedback unit.
In a further embodiment, the object detection device further comprises a kinetic energy converter to convert kinetic energy, generated by the visually impaired individual moving the object detection device, so as to power the time of flight camera, the tactile feedback unit and/or the audio feedback unit.
The present invention is further directed towards a method of detecting an object comprising the steps of receiving an image from a time of flight camera, whereby, the time of flight camera receives object detection information regarding: a presence of one or more objects detected within a viewing range of the time of flight camera, an approximate size for each detected object, and, an approximate distance to each detected object; transmitting at least a portion of the object detection information from the time of flight camera to the tactile feedback unit; and, relaying the transmitted object detection information from the tactile feedback unit to the visually impaired individual.
The advantage is that a visually impaired individual is provided with a greater amount of object detection information when compared with any other known devices which have been previously used in this field. The time of flight camera provides distance information to the visually impaired individual, and allows the visually impaired individual to build, in their mind, a three-dimensional representation of their surrounding environment which will greatly facilitate the visually impaired individual's safe passage around and/or over any obstacles and/or objects in their environment.
A further advantage is that the visually impaired individual can interact in a more full and immersive manner than was previously made possible with other types of known prior art devices.
In a further embodiment, the method further comprises the step of transmitting at least a portion of the object detection information to an audio feedback unit; and, relaying the transmitted object detection information from the audio feedback unit to the visually impaired individual.
In a further embodiment, the method further comprises the step of processing the image received from the time of flight camera to detect any objects in the image.
In a further embodiment, the step of processing the image comprises sweeping across the received image to detect objects in the viewing range of the time of flight camera.
In a further embodiment, the method further comprises the step of determining the distance from the time of flight camera to a closest object in each of a plurality of sectors within the image; determining if a plurality of detected closest objects in adjacent sectors form a single object; and, calculating the presence of one or more objects based on the determination.
In a further embodiment, the method further comprises the steps of determining if a detected object is substantially the same as one of a plurality of pre-stored object images; and, if the detected object is substantially the same as one of a plurality of pre-stored object images, alerting the visually impaired individual as to what the detected object resembles.
In a further embodiment, the method further comprises the steps of scanning the image for text; running a text recognition analysis on any text in the image; and, relaying the text to the visually impaired individual through an audio feedback unit.
Detailed Description of Embodiments
The invention will be more clearly understood from the following description of some embodiments thereof, given by way of example only with reference to the accompanying drawings, in which:
Figure 1 is a perspective view of a time of flight camera on a headband in accordance with an embodiment of the present invention;
Figure 2 is an exploded perspective view of a tactile feedback device used in accordance with an embodiment of the present invention;
Figure 3a is a bottom plan view of the haptic device of Figure 2;
Figure 3b is a perspective view of the haptic device of Figure 2;
Figure 4 is a perspective view of a further embodiment of a haptic device;
Figures 5a to 5c diagrammatically show the process of converting an image captured by the time of flight camera to a three-dimensional image;
Figures 6a to 6d diagrammatically show the process of processing an image captured by the time of flight camera;
Figure 7a is a photographic representation of a corridor;
Figure 7b is a front angled view of the haptic device representing the corridor of Figure 7a;
Figure 7c is a left perspective view of a haptic device of Figure 7b representing the corridor of Figure 7a;
Figure 7d is a right perspective view of a haptic device of Figure 7b representing the corridor of Figure 7a;
Figure 8 is a diagrammatic view of the process of the present invention;
Figure 9 is a flow diagram illustrating a part of the process of the present invention; and,
Figures 10a to 10e show an image obtained by the time of flight camera during various stages of processing by the present invention.
Referring to Figure 1, there is provided an image sensor indicated generally by reference numeral 100 comprising a time of flight (TOF) camera 102 having a front facing lens 104. The image sensor 100 is mounted on a headband 106 which may be worn by a visually impaired individual (not shown). It will be appreciated that the image sensor 100 may be mounted on, or carried by, a visually impaired individual using known techniques such as attaching the TOF camera 102 to a pair of shades (not shown) or mounting the TOF camera on a cap or hat (not shown).
With reference to Figures 2, 3a and 3b, there is provided a tactile feedback unit indicated generally by reference numeral 200. The tactile feedback unit comprises an actuated pin matrix indicated generally by reference numeral 202 comprising a base plate 204 mounting a plurality of actuatable pins 206. A wristband 208 is also provided to engage with the actuated pin matrix 202 so as to maintain the actuated pin matrix 202 in an in-use position beneath a hand 300 of a visually impaired individual.
In one embodiment of the invention, the actuated pin matrix 202 of the tactile feedback unit 200 comprises an array of one hundred pins 206, although any other number of pins 206 may be used on the base plate 204.
It is envisaged that there are many ways for moving the pins 206 on the actuated pin matrix 202, such as by means of any of the following: electric linear miniature actuators, shape memory metal alloys, electro active polymers, miniature pneumatic actuators, non-Newtonian fluids, or, electromagnets. The electric linear miniature actuators and electromagnet implementations have been found to be most useful and most practical in the sense that they are both small, lightweight, wireless and battery powered.
As the image sensor 100 receives and analyses images, objects in the images may be fed back to the visually impaired individual by raising and lowering some of the pins 206 in the actuated pin matrix 202. The pins may be lowered as indicated by reference numeral 302 or raised as indicated by pins 304, 306 in Figure 3b. It will be readily appreciated that numerous other types of tactile feedback devices could be used in replacement of the actuated pin matrix 202.
For example, a touchscreen (not shown) may be provided. The touchscreen may comprise a plurality of piezoelectric strips which vibrate when a user touches a section of the screen used to represent the fact that an object is in front of the visually impaired individual in a corresponding position in their vicinity. Alternatively, the touchscreen could be a normal capacitive touchscreen, the whole of which vibrates upon the touchscreen sensing the visually impaired individual's touch gliding across an area of the screen in which an object is to be represented, corresponding to an object in front of the visually impaired individual.
It will be understood that various types of haptic devices may be used to provide a sensory input to the visually impaired individual. For example, the present system may be used in conjunction with known sensory feedback devices such as the tongue display unit.
Referring to Figure 4, wherein like parts previously described have been assigned the same reference numerals, there is provided an alternative embodiment for a tactile feedback unit, indicated generally by reference numeral 400. The tactile feedback unit 400 comprises an actuated pin matrix 202 having a base plate 204 and a plurality of actuatable pins 206. The actuated pin matrix 202 is strapped to a user using an armband 402. The feedback from the tactile feedback unit 400 is provided to the user on their arm, and this is advantageous as it leaves both of the visually impaired individual's hands free to be used for carrying out other tasks. It could be possible to oscillate or vibrate each of the pins 206 to indicate the distance of an object from a user. Instead of an armband 402, the device could be mounted on any portion of the user's body.
Referring now to Figures 5a to 5c, the image sensor 100 obtains an image of an object, in this case a person 500, within a viewing range of the image sensor 100. An object detection device comprising the image sensor 100 and the tactile feedback unit 200 further comprises means for analysing the image obtained from the image sensor 100 to produce a three-dimensional image as can be seen in Figure 5b. The object 500 is shown in front of a background 502. Depending on the user's preferences in the means for analysing the image, a user can select how many layers of objects are to be recognised and presented to the visually impaired user ahead of a background. For example, a user may set the device such that objects at three different distances from the image sensor 100 are recognised in front of a background, or, a user may set the device such that only one object in front of the image sensor 100, which will be the closest object, is recognised and presented to the visually impaired individual.
For example, referring to Figures 6a to 6d, an object 600 is shown in front of a background 602 comprising a blackboard 606 mounted on a wall on a left-hand side of the image, and, a fan 604 sitting on a table on a right-hand side of the image. As the image is processed, the object 600 is detected as being the closest object to the TOF camera 102 and the background is steadily processed to remove all unwanted noise, such as the blackboard 606 and the fan 604, to arrive at a final processed image as can be seen in Figure 6d.
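One way to realise this background removal is sketched below; the patent does not prescribe this rule, and the layer width and 7.5 m range are illustrative assumptions. The idea is to keep only the closest depth layer(s), per the user's layer preference described above, and to mark everything deeper as background.

```python
import numpy as np

def suppress_background(depth_image, n_layers=1, layer_width_m=0.5,
                        max_range_m=7.5):
    """Keep only the n_layers closest depth 'layers' and blank out the rest.

    With n_layers=1 only the closest object region survives, mirroring the
    blackboard/fan example: everything deeper than the nearest return
    (plus layer_width_m of slack per layer) is treated as background.
    Returns a copy of depth_image with background pixels set to inf.
    """
    out = np.where(depth_image <= max_range_m, depth_image, np.inf)
    if not np.isfinite(out).any():
        return out                      # nothing in range at all
    nearest = out[np.isfinite(out)].min()
    cutoff = nearest + n_layers * layer_width_m
    return np.where(out <= cutoff, out, np.inf)
```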
A visually impaired individual using the object detection device can sense the surrounding environment and/or surrounding terrain from the pins 206 on the tactile feedback unit 200 when a software program, running the means for analysing the image from the TOF camera 102, is provided to represent the distance to any objects and obstacles in the vicinity of the visually impaired individual and represent the terrain in front of the visually impaired individual.
Referring to Figures 7a to 7d, for clarity and explanatory purposes only, the same reference numerals have been used to represent objects in the image in Figure 7a as have been used for the pins which correspond to those objects in Figures 7b to 7d; the reference numerals are not meant to indicate that these are the same technical features. Instead of sensing the whole terrain in front of a visually impaired individual, the distance to an object, such as a set of steps 700, could be fed to the visually impaired individual by altering the height of the pins 206 up and down.
As can be seen, walls 706, 708 on either side of the steps 700 are represented by raising the pins on each side of the base plate 204; the flat surface 701 directly in front of the visually impaired user is represented by maintaining a number of the pins in a fully lowered position. The radiator 704 and chairs 702 are represented by raising the pins to appropriate heights.
In this manner, the visually impaired user will be notified that there is a substantially flat area for a few steps directly in front of them before the set of steps appears. Furthermore, the visually impaired user will be aware that there are a number of objects to the left and right, so that they cannot walk in either of these directions.
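A minimal sketch of how a depth image might be mapped onto the actuated pin matrix 202 follows, assuming a 10x10 grid (matching the one hundred pins 206) and eight discrete pin heights; neither the mapping rule nor the level count is specified by the patent.

```python
import numpy as np

def depth_to_pin_heights(depth_image, rows=10, cols=10,
                         max_range_m=7.5, pin_levels=8):
    """Downsample a TOF depth image onto a rows x cols pin matrix.

    Each cell takes the nearest return in its image region; closer objects
    raise the corresponding pin higher (0 = fully lowered, pin_levels - 1 =
    fully raised), so walls on either side come up as raised columns while
    clear floor stays flat, as in the corridor example of Figures 7a to 7d.
    """
    h, w = depth_image.shape
    heights = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            cell = depth_image[r * h // rows:(r + 1) * h // rows,
                               c * w // cols:(c + 1) * w // cols]
            valid = cell[np.isfinite(cell) & (cell <= max_range_m)]
            if valid.size:
                nearness = 1.0 - valid.min() / max_range_m
                heights[r, c] = int(round(nearness * (pin_levels - 1)))
    return heights
```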
Referring to Figure 8, the image sensor 100 obtains an image using the TOF camera, which is analysed, preferably by software 800, and a three-dimensional representation 500 of the environment surrounding the visually impaired individual (not shown) is produced. This three-dimensional representation 500 is fed back to the visually impaired user through use of a haptic unit, such as a tactile feedback unit 200, 400.
Referring to Figure 9, the process of analysing an image from the time of flight camera 102 is shown. In step 900, a three-dimensional image is obtained by the time of flight camera 102. In step 902, a noise elimination filter is applied to the obtained image in order to remove distant objects from the image, as the distant objects do not reflect light and consequently there is no distance information available about these objects. A statistical analysis is carried out on the filtered image in step 904 in order to obtain the minimum, maximum, average and standard deviation of object distances to the time of flight camera 102. In step 906, it is shown that a user can specify how many objects they wish to detect, and at what maximum distance they wish to detect objects (i.e. the operational range of the time of flight camera). The statistical analysis results can be used to assist with specifying various thresholds which will determine how many objects will be displayed. A thresholding process is then applied in step 908 to the image. The final three-dimensional, processed image is output to the visually impaired individual via a haptic device as hereinbefore described and shown in step 910. The presence of an object, the shape of an object, the distance to an object, the velocity of an object, the size of an object and any associated information regarding text on an object can be inputted to any of the haptic devices as shown in step 912.
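Read as code, steps 900 to 910 might look like the following sketch. The rule tying the threshold to the statistics is an assumption on top of the description, which says only that the statistics assist with specifying thresholds; the 7.5 m range is likewise assumed.

```python
import numpy as np

def process_frame(depth_image, n_objects=1, max_range_m=7.5):
    """One pass of the Figure 9 pipeline, as sketched here: noise
    elimination (step 902), distance statistics (step 904), user-driven
    thresholding (steps 906/908), ready for haptic output (step 910).
    """
    # Step 902: distant points reflect no light -> no reliable distance.
    img = np.where(depth_image <= max_range_m, depth_image, np.inf)

    # Step 904: statistics over the remaining valid distances.
    valid = img[np.isfinite(img)]
    if valid.size == 0:
        return img, None
    stats = {"min": valid.min(), "max": valid.max(),
             "mean": valid.mean(), "std": valid.std()}

    # Steps 906/908: derive a depth threshold from the statistics and the
    # number of objects the user asked for, then apply it (assumed rule).
    threshold = stats["min"] + n_objects * stats["std"]
    processed = np.where(img <= threshold, img, np.inf)

    # Step 910: 'processed' is what gets rendered on the haptic device.
    return processed, stats
```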
With reference to Figures 10a to 10e, an image 1000 is obtained from the time of flight camera 102. The image 1000 contains an object, in this case a person, indicated by reference numeral 1002. A background indicated by reference numeral 1004 is also in the image 1000. As can be seen from the difference between the processed images shown in Figures 10b and 10c respectively, a suppression of background objects is applied so that only the foreground object 1002 is recognised. A left to right scan is then accomplished, as illustrated by line 1006 in Figure 10d, which is scanned to the right as shown in Figure 10e and represented by reference numeral 1008. This left to right scan produces the feedback signals which are presented to the visually impaired individual via the audio feedback unit and the haptic feedback unit.
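The left to right scan can be pictured as the sketch below, which sweeps a scan line across the processed depth image and emits one nearest-distance value per column. This per-column profile is an assumed intermediate representation, not something the patent defines; it is the kind of signal that could feed both the audio sketch given earlier and per-column pin heights.

```python
import numpy as np

def left_to_right_scan(processed_depth, max_range_m=7.5):
    """Sweep a vertical scan line across the processed image (as line
    1006/1008 in Figures 10d-10e) and emit one feedback value per column:
    the nearest remaining object distance, or inf for empty columns.
    """
    profile = np.full(processed_depth.shape[1], np.inf)
    for col in range(processed_depth.shape[1]):
        column = processed_depth[:, col]
        valid = column[np.isfinite(column) & (column <= max_range_m)]
        if valid.size:
            profile[col] = valid.min()
    return profile
```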
Additionally, it is envisaged that means will be provided, almost certainly by way of a computer program, to use the camera to read general signs that are used extensively, such as hazard warning signs, building entrances and business/occupant names, using optical character recognition (OCR) and particle analysis. It is also envisaged that this information may be transmitted audibly.
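As a sketch of that envisaged sign-reading program, assuming the camera also supplies a grayscale intensity image and that the pytesseract OCR wrapper and pyttsx3 speech engine are available (neither tool is named by the patent):

```python
import pytesseract           # wrapper around the Tesseract OCR engine
import pyttsx3               # offline text-to-speech
from PIL import Image

def read_signs_aloud(intensity_image: Image.Image) -> str:
    """Run OCR over the camera's intensity image and speak any text found."""
    text = pytesseract.image_to_string(intensity_image).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)         # relay the recognised sign text audibly
        engine.runAndWait()
    return text
```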
Further, it is envisaged that there will be provided a list in a database of various common objects such as lampposts, street benches, escalators, stairways and so on. Accordingly, when the camera captures the image of one of these objects, the database is consulted, the captured image is compared to an image stored in the database, and the necessary information is transmitted through the haptic device.
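One plausible form for this database lookup, assuming OpenCV template matching over a dictionary of pre-stored grayscale object images (the patent does not name a matching technique, and the 0.8 score cutoff is an assumed value):

```python
import cv2
import numpy as np

def identify_object(captured_gray: np.ndarray,
                    templates: dict[str, np.ndarray],
                    min_score: float = 0.8) -> str | None:
    """Compare the captured image against pre-stored object images and
    return the best-matching name (e.g. 'lamppost'), or None if nothing
    is substantially the same. Templates must be smaller than the image.
    """
    best_name, best_score = None, min_score
    for name, template in templates.items():
        result = cv2.matchTemplate(captured_gray, template,
                                   cv2.TM_CCOEFF_NORMED)
        score = float(result.max())
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```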
The core of the invention is relatively simple in concept, namely the use of a time of flight camera operatively connected to a haptic device. Obviously, other sensing devices could be used. Once the concept is understood, then clearly there are many variations of the manner in which the haptic device could be used, as it will be easily seen that the time of flight camera can be used to determine many features of the terrain and surrounding objects, and it would be relatively simple to use the haptic device to transfer that information to the user.
The terms "comprise" and "include", and any variations thereof required for grammatical reasons, are to be considered as used interchangeably and accorded the widest possible interpretation.
The invention is not limited to the embodiments hereinbefore described which may be varied in both construction and detail within the scope of the appended claims.
Claims
1. An object detection device for use by a visually impaired individual comprising a time of flight camera and an associated tactile feedback unit, whereby, the time of flight camera receives an image within a viewing range of the time of flight camera; the object detection device produces object detection information regarding: a presence of one or more objects detected in the image, an approximate size for each detected object, and, an approximate distance to each detected object; at least a portion of the object detection information is transmitted from the time of flight camera to the tactile feedback unit; and, the tactile feedback unit relays the transmitted object detection information to the visually impaired individual.
2. An object detection device as claimed in claim 1, wherein, the object detection device further comprises an audio feedback unit to receive at least a portion of the object detection information from the time of flight camera and relay the received object detection information to the visually impaired individual.
3. An object detection device as claimed in claim 1 or 2, wherein, the tactile feedback unit is a human machine interface.
4. An object detection device as claimed in claim 3, wherein, the human machine interface is an actuated pin matrix.
5. An object detection device as claimed in claim 3, wherein, the human machine interface is a feedback pad comprising a non-Newtonian fluid.
6. An object detection device as claimed in claim 3, wherein, the human machine interface is a feedback pad comprising a shape memory alloy.
7. An object detection device as claimed in claim 3, wherein, the human machine interface is a feedback pad comprising an electro active polymer.
8. An object detection device as claimed in claim 3, wherein, the human machine interface is a tongue display unit.
9. An object detection device as claimed in claim 3, wherein, the human machine interface is a touchscreen unit.
10. An object detection device as claimed in claim 3, wherein, the human machine interface is a feedback pad comprising an electro-rheological fluid.
11. An object detection device as claimed in claim 3, wherein, the human machine interface is a feedback pad comprising a magneto-rheological fluid.
12. An object detection device as claimed in claim 2, wherein, the audio feedback unit comprises stereo headphones.
13. An object detection device as claimed in any preceding claim, wherein, the object detection device comprises an image processing unit which receives the image from the time of flight camera and processes the image to detect any objects in the image.
14. An object detection device as claimed in claim 13, wherein, the image processing unit sweeps across the received image to detect objects in the operational viewing range of the time of flight camera.
15. An object detection device as claimed in claim 14, wherein, object detection is based on determining the distance from the time of flight camera to a closest object in each of a plurality of sectors within the image, determining if a plurality of detected closest objects in adjacent sectors form a single object, and calculating the presence of one or more objects based on the determination.
16. An object detection device as claimed in any preceding claim, wherein, the object detection device further comprises a solar cell to power the time of flight camera, the tactile feedback unit and/or the audio feedback unit.
17. An object detection device as claimed in any preceding claim, wherein, the object detection device further comprises a kinetic energy converter to convert kinetic energy, generated by the visually impaired individual moving the object detection device, so as to power the time of flight camera, the tactile feedback unit and/or the audio feedback unit.
18. An object detection device as claimed in any preceding claim, wherein, the object detection device further produces object detection information regarding object velocity.
19. A method of detecting an object comprising the steps of: receiving an image from a time of flight camera, whereby, the time of flight camera receives object detection information regarding: a presence of one or more objects detected within a viewing range of the time of flight camera, an approximate size for each detected object, and, an approximate distance to each detected object; transmitting at least a portion of the object detection information from the time of flight camera to a tactile feedback unit; and, relaying the transmitted object detection information from the tactile feedback unit to a visually impaired individual.
20. A method as claimed in claim 19, wherein, the method further comprises the steps of transmitting at least a portion of the object detection information to an audio feedback unit; and, relaying the transmitted object detection information from the audio feedback unit to the visually impaired individual.
21. A method as claimed in claim 19 or 20, wherein, the method further comprises the step of processing the image received from the time of flight camera to detect any objects in the image.
22. A method as claimed in claim 21, wherein, the step of processing the image comprises sweeping across the received image to detect objects in the viewing range of the time of flight camera.
23. A method as claimed in claim 22, wherein, the method further comprises the steps of determining the distance from the time of flight camera to a closest object in each of a plurality of sectors within the image; determining if a plurality of detected closest objects in adjacent sectors form a single object; and, calculating the presence of one or more objects based on the determination.
24. A method as claimed in any of claims 19 to 23, wherein, the method further comprises the steps of determining if a detected object is substantially the same as one of a plurality of pre-stored object images; and, if the detected object is substantially the same as one of a plurality of pre-stored object images, alerting the visually impaired individual as to what the detected object resembles.
25. A method as claimed in any of claims 19 to 23, wherein, the method further comprises the steps of scanning the image for text; running a text recognition analysis on any text in the image; and, relaying the text to the visually impaired individual through an audio feedback unit.
26. A method as claimed in any of claims 19 to 23, wherein, the method further comprises the step of determining object velocity from the received image.
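Claims 15 and 23 recite a sector-based detection step: determining the closest distance in each of a plurality of sectors within the image, then deciding whether closest objects in adjacent sectors form a single object. The following is a minimal sketch of that step under assumed parameters (the sector count, merge tolerance and function name are our own), not a definitive implementation of the claimed method.

```python
import numpy as np

def detect_objects_by_sector(depth_m, n_sectors=16, merge_tol_m=0.3):
    """Sketch of the sector-based detection of claims 15 and 23: find
    the closest valid distance in each vertical sector, then merge
    adjacent sectors with similar closest distances into one object."""
    _, w = depth_m.shape
    step = max(1, w // n_sectors)
    closest = []
    for left in range(0, w, step):
        sector = depth_m[:, left:left + step]
        valid = sector[~np.isnan(sector)]
        closest.append(float(valid.min()) if valid.size else None)

    objects, current = [], None
    for i, d in enumerate(closest):
        if d is None:                      # empty sector ends any object
            if current:
                objects.append(current)
            current = None
        elif current and abs(d - current["distance_m"]) <= merge_tol_m:
            current["sectors"].append(i)   # continuation of same object
            current["distance_m"] = min(current["distance_m"], d)
        else:                              # a new object begins here
            if current:
                objects.append(current)
            current = {"sectors": [i], "distance_m": d}
    if current:
        objects.append(current)
    return objects  # each entry: contiguous sectors and nearest distance
```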
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IE20090449 | 2009-06-08 | | |
IES2009/0449 | 2009-06-08 | | |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2010142689A2 true WO2010142689A2 (en) | 2010-12-16 |
WO2010142689A3 WO2010142689A3 (en) | 2011-02-17 |
Family
ID=43037026
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2010/058025 WO2010142689A2 (en) | 2009-06-08 | 2010-06-08 | An object detection device |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2010142689A2 (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE4241937C2 (en) * | 1992-12-12 | 2001-02-08 | Gunnar Matschulat | Tactile image display device |
NZ505891A (en) * | 1998-02-06 | 2002-11-26 | Wisconsin Alumni Res Found | Tongue placed tactile output device |
US5942970A (en) * | 1998-10-08 | 1999-08-24 | Norman; Jim | Image optical-to-tactile converter |
US8284989B2 (en) * | 2004-08-24 | 2012-10-09 | Koninklijke Philips Electronics N.V. | Method for locating an object associated with a device to be controlled and a method for controlling the device |
- 2010-06-08: WO PCT/EP2010/058025 patent/WO2010142689A2/en active Application Filing
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012040703A3 (en) * | 2010-09-24 | 2012-11-22 | Mesa Imaging Ag | White cane with integrated electronic travel aid using 3d tof sensor |
US8922759B2 (en) | 2010-09-24 | 2014-12-30 | Mesa Imaging Ag | White cane with integrated electronic travel aid using 3D TOF sensor |
WO2012167653A1 (en) * | 2011-06-10 | 2012-12-13 | 深圳典邦科技有限公司 | Visualised method for guiding the blind and intelligent device for guiding the blind thereof |
US9389431B2 (en) | 2011-11-04 | 2016-07-12 | Massachusetts Eye & Ear Infirmary | Contextual image stabilization |
US10571715B2 (en) | 2011-11-04 | 2020-02-25 | Massachusetts Eye And Ear Infirmary | Adaptive visual assistive device |
JP2013226345A (en) * | 2012-04-27 | 2013-11-07 | Nidek Co Ltd | Space recognition device |
EP3088996A1 (en) * | 2015-04-28 | 2016-11-02 | Immersion Corporation | Systems and methods for tactile guidance |
WO2016116182A1 (en) * | 2015-11-06 | 2016-07-28 | Telefonaktiebolaget Lm Ericsson (Publ) | Flexible device for guiding a user |
WO2017196251A1 (en) * | 2016-05-13 | 2017-11-16 | Andersson Runo | A system for translating visual movements into tactile stimulations |
US10134304B1 (en) | 2017-07-10 | 2018-11-20 | DISH Technologies L.L.C. | Scanning obstacle sensor for the visually impaired |
WO2019013962A1 (en) * | 2017-07-10 | 2019-01-17 | DISH Technologies L.L.C. | Scanning obstacle sensor for the visually impaired |
GB2599471A (en) * | 2021-05-20 | 2022-04-06 | Hope Tech Plus Ltd | System and method for guiding user |
GB2622184A (en) * | 2022-05-04 | 2024-03-13 | Kp Enview Ltd | Personal assistance systems and methods |
Also Published As
Publication number | Publication date |
---|---|
WO2010142689A3 (en) | 2011-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2010142689A2 (en) | An object detection device | |
Hoang et al. | Obstacle detection and warning system for visually impaired people based on electrode matrix and mobile Kinect | |
US20190294909A1 (en) | Apparatus and method for using background change to determine context | |
CN104427960B (en) | Self-adaptive visual servicing unit | |
KR101221513B1 (en) | Graphic haptic electronic board and method for transferring visual information to visually impaired people as haptic information | |
EP2677982B1 (en) | An optical device for the visually impaired | |
US9805619B2 (en) | Intelligent glasses for the visually impaired | |
KR20110098988A (en) | Information display device and information display method | |
CN107015638A (en) | Method and apparatus for being alarmed to head mounted display user | |
CN105306084A (en) | Eyewear type terminal and control method thereof | |
US10922998B2 (en) | System and method for assisting and guiding a visually impaired person | |
Dunai et al. | Obstacle detectors for visually impaired people | |
KR102242681B1 (en) | Smart wearable device, method and system for recognizing 3 dimensional face and space information using this | |
KR20150097043A (en) | Smart System for a person who is visually impaired using eyeglasses with camera and a cane with control module | |
KR102242719B1 (en) | Smart glasses tracking method and device, and smart glasses and storage media | |
US20210390882A1 (en) | Blind assist eyewear with geometric hazard detection | |
Tian | RGB-D sensor-based computer vision assistive technology for visually impaired persons | |
CN113678141A (en) | Stereophonic device for blind and visually impaired persons | |
Bala et al. | Design, development and performance analysis of cognitive assisting aid with multi sensor fused navigation for visually impaired people | |
Wei et al. | [Retracted] Innovative Design of Intelligent Health Equipment for Helping the Blind in Smart City | |
JP2006251596A (en) | Support device for visually handicapped person | |
JP2019159193A (en) | Behavior support device for visually impaired person | |
CN111128180A (en) | Auxiliary dialogue system for hearing-impaired people | |
Liu et al. | Electronic travel aids for the blind based on sensory substitution | |
JP6500139B1 (en) | Visual support device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10726460; Country of ref document: EP; Kind code of ref document: A2 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 10726460; Country of ref document: EP; Kind code of ref document: A2 |