INTERNATIONAL JOURNAL OF SCIENTIFIC & TECHNOLOGY RESEARCH VOLUME 8, ISSUE 08, AUGUST 2019 ISSN 2277-8616

Camera & Sensors-Based Assistive Devices For Visually Impaired Persons: A Systematic Review

Preetjot Kaur, Roopali Garg

Abstract— Assistive technology has removed numerous navigation barriers for visually impaired individuals. It promotes independence by enabling such people to perform tasks that were formerly challenging, such as obstacle detection, indoor/outdoor navigation and finding lost objects, with greater ease. This paper gives researchers a broad view of the field of obstacle detection for blind and partially sighted persons. It discusses several techniques contributed by numerous researchers for this purpose; these techniques are reviewed and categorized according to how they acquire visual information, and the research gaps in each are detailed. The critical challenges that visually impaired users face with assistive systems based on smartphones, IoT devices, sensors, etc. are discussed along with future directions. The advancements and research done in this field are surveyed, and the various research gaps are identified.

Index Terms — Assistive systems, Computer Vision, Camera-Based, Obstacle Detection, Sensors-Based, Visually Impaired

————————————————————
1 INTRODUCTION

Vision is one of the key sensory modalities that humans possess, and loss of vision makes life considerably more challenging. The White Cane and guide dogs [1] are the primary navigational aids used by blind or partially sighted persons. Both have been reliable tools for many years, but each has certain drawbacks. The White Cane cannot detect obstacles approaching from a distance, such as moving vehicles, and it does not guide the user in unfamiliar or emergency situations. Guide dogs have their own challenges [85]: they are costly and demand care, as shown by the survey of guide dogs in Great Britain [2]. The Probing Cane [3] now comes in many variants, such as the Folded Long Cane, Guide Cane, Green Cane [4], the smart cane [5] and more (e.g. [6], [7], [8], [9]). For many years, researchers have been trying to improve this most trusted tool of visually impaired persons.

As per the latest Global Vision Database, the number of blind persons increased by 6 million, and the number of moderately and severely visually impaired people by 56.7 million, between 1990 and 2015 [10]. Blindness is in fact a subset of the broader term visual impairment; the term legally blind refers to persons whose visual acuity is below 20/200 in the better eye, and such persons are eligible for the legal benefits provided by government agencies. Retinal detachment, cataract, head injury, glaucoma and a growing population are the major factors responsible for blindness among individuals. The current epoch in which we live is one of information explosion, where everyone needs a variety of information on a variety of topics to take decisions. People with visual impairments have difficulty accessing some of the information that sighted people take for granted [11], [12], [13]. Götzelmann and Winkler [14] improved existing Braille labels, keeping in view the limited discriminability of the human tactile sense. Researchers have been using several tools and techniques to build systems that help the community of blind persons. Nicoletta Noceti suggested using glasses equipped with a webcam [80], and some systems have even enabled a blind person to board a bus independently [81]. Nevertheless, many such systems have been rejected by this community because of heavy wearable gear, complex methods of operation, high cost, or because they overwhelm users with unnecessary information. It is also difficult to persuade visually impaired persons to switch from traditional tools to newer technology for detecting obstacles while navigating unfamiliar environments; building trust is the most difficult part. To be acceptable to this community, assistive devices have to be more effective and should detect obstacles that are beyond the reach of canes.

The domain of computing will soon outgrow the limits of long-established desktops [15]. IoT has taken a respectable place in the lives of differently abled people, especially the visually disabled: with the help of sensors, actuators, cameras and wearables on the body, the computer alerts people about the obstacles in front of them [16]. Assistive technologies in collaboration with IoT have been designed to assist such people and have become a powerful tool to improve independence and build interest among them [17]. Several systems based on stereovision technology, such as [18] and [19], satisfied users but lagged behind because, respectively, they do not provide information about holes on the road or they heat up during operation.

We audit the research and advancements within this field, also highlighting various research gaps. Several techniques have so far been devised or used to help people with visual disabilities; some of them are categorized below.

————————————————
• Preetjot Kaur is currently pursuing a PhD in Information Technology at UIET, Panjab University, Chandigarh, India. E-mail: reetjotkaur20@gmail.com
• Dr. Roopali Garg is presently serving as Associate Professor, Panjab University, Chandigarh. She is a former Coordinator of the Department of IT, UIET, Panjab University, Chandigarh, and an active Life Member of several international and national technical organizations such as ISTE, IETE and IEE. E-mail: roopali.garg@pu.ac.in

Fig. 1. Flowchart showing obstacle detection techniques on the basis of the criteria of taking input.

The rest of the paper is organized as follows. Section II reviews the different techniques proposed by several researchers, grouped by the way they take input, and discusses the open challenges and future directions. Section III is devoted to evaluation metrics and comparison charts of the different techniques. The concluding remarks of the paper are given in Section IV.

2 OBSTACLE DETECTION TECHNIQUES WITH DIFFERENT WAYS OF TAKING INPUT

2.1 Camera-Based Techniques

Adriano Mancini (2018) et al. developed a bulky embedded system based on a small camera and image processing algorithms to assist visually impaired individuals during walking and running [20]. The system comprises vibrators, robot-like controllers and gloves equipped with motors. Its three main components are the camera (global-shutter, monochromatic), the processing unit (image processing algorithms) and a chargeable haptic feedback device (gloves). The system was observed to be efficient in guiding the user in the right direction during navigation. Jamming should be reduced by the use of RADAR and a compact RTK L1 GPS receiver.

Bogdan Mocanu (2018) et al. introduced a system, DEEP-SEE FACE, to help visually impaired individuals communicate with their social environment, based on the integration of computer vision techniques with Convolutional Neural Networks (CNNs) [21]. The system uses the video camera of a smartphone, and the whole processing unit is carried in the user's backpack. The battery of the smartphone remains a major concern here, the ultrabook computer carried in the backpack is expensive, and the bone conduction headphones that provide the output also consume much energy. An accuracy of 92% is observed even under different lighting and position conditions.

Nawin Somyat (2018) et al. introduced an app, NavTU, for visually impaired persons in Thailand based on GPS and camera technologies [22]. It relies on Android smartphones with built-in GPS trackers, compasses, maps and digital cameras. The app treats electric poles on the sidewalks as emergency warnings for blind users, thereby ensuring their safety. NavTU fails to detect dynamic objects and stairs and does not operate properly in low-light regions. The user must hold a White Cane even while using this app.

Jindal (2018) et al. presented a cost-effective camera-based system to guide visually impaired users using the Speeded-Up Robust Features (SURF) algorithm [23]. The proposed framework is composed of several phases: video pre-processing (removing non-linear camera motion by using feature point matching and ROI searching), ground plane removal (using thresholding and region growing), extraction of localized points (using SURF), region-of-interest segmentation (active contour model), calculation of texture features (such as the Gray-Level Co-occurrence Matrix) and ROI classification. Accuracy is measured as

Accuracy = (TP + TN) / image_size            (1)

where TP is the number of true positives and TN the number of true negatives. In order to accurately detect changing ground planes, machine learning can be used.

S. Meers & K. Ward (2017) introduced a stick framework based on 3D vision using a head-mounted camera and electro-tactile gloves [24], which utilizes AI to assist visually handicapped individuals in navigating their environment autonomously. The user keeps the hands in the direction viewed by the camera, which gives a 3D perception of the environment. After 5 minutes of training in an unknown environment with the proposed system, users were comfortably able to use it. The stereo cameras used here cannot clearly detect featureless or textureless walls and surfaces; infrared sensors can be used to resolve this limitation.

Delahoz (2017) et al. proposed a method for the detection of floors using a smartphone camera [25]. The system contains 5 modules:
• smoothing, for the removal of noise from the image,
• edge detection, to detect edges in the image,
• line detection, to detect lines of different types (horizontal, vertical),
• floor-wall boundary detection, to remove lines that are not part of the floor-wall boundary,
• floor detection, which selects the floor area in the image.
The system achieved an accuracy of 82%, a precision of 90.3% and a recall of 75%. This work can be further extended to detect objects lying on the floor and to tripping-hazard analysis.

R. Tapu (2017) et al. introduced a new framework called DEEP-SEE, based on the integration of computer vision algorithms and CNNs, for visually impaired users [26]. It is based on the principle of alternating between tracking using motion information and modifying the predicted location on the basis of visual similarity. The proposed method attains an accuracy of about 90% in recognizing both static and dynamic obstacles. The system could be extended to detect crossings on the road, and features such as face detection and shopping assistance could be incorporated into it.
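Eq. (1) above is a plain pixel-count ratio over a binary obstacle mask. The short Python sketch below only illustrates how such a measure could be computed; the function name and the toy masks are ours, not from [23].

```python
import numpy as np

def pixel_accuracy(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Eq. (1): accuracy = (TP + TN) / image_size for binary obstacle masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # obstacle pixels correctly detected
    tn = np.logical_and(~pred, ~gt).sum()      # background pixels correctly rejected
    return float(tp + tn) / gt.size

# Toy example with two 4x4 masks: 12 of 16 pixels agree, so accuracy = 0.75
pred = np.array([[1, 1, 0, 0]] * 4)
gt = np.array([[1, 0, 0, 0]] * 4)
print(pixel_accuracy(pred, gt))                # 0.75
```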


Rumin Zhang (2017) et al. presented a real-time, depth-data-based technique using a binocular camera for helping visually impaired users detect obstacles [27]. The region of interest (ROI) is captured by stereo cameras, whose calibration follows the projection model in Eq. (2); disparity maps are then generated, the obstacles are detected through segmentation, and depth is recovered as in Eq. (3). The accuracy is acceptable for visually impaired users. The projection of a world point onto the image is

m = A (R W + t)            (2)

where W = [X, Y, Z]^T is the world coordinate, m = [u, v]^T is the image coordinate, A is the camera intrinsic matrix, R is a 3x3 orthonormal rotation matrix and t is the translation. The distance of a 3D point from the baseline is

Z = f B / d            (3)

where the disparity d = XL - XR (ranging from 0 to 255), f is the rectified focal length and B is the camera baseline. Color and depth information could be integrated into this technique to make it a more reliable assistive system.

Kang (2017) et al. gave an improved version of an obstacle detection technique based on a Deformable Grid (DG) [28]. The technique performs the following steps:
Step I: A vertex deformation function is defined using perspective projection geometry.
Step II: The collision risk is measured based on this DG; obstacles within a range of 2 m from the user are treated as a collision risk.
Three different cases were considered to obtain the ground-truth data. The system fails when it gets close to non-textured obstacles such as walls and doors; this can be resolved by using image segmentation techniques. On comparison with other systems, the proposed DG version gives better and more accurate results and more precise information about risks.

Zhang (2017) et al. presented a new framework based on a 6-DOF pose estimation method and a 3D camera, involving two graph SLAM processes to reduce accumulated pose errors in the device [29].
Step I: The floor plan is extracted from the 3D camera.
Step II: The wall lines are extracted.
Based on this pose estimation method, a navigational system is built that helps blind persons find their way in an indoor environment. The concept of an RNA, an enhanced White Cane with a 3D camera, is introduced here. Experimental results show that the proposed system provides more accurate poses in less time: the runtime of the proposed strategy is 59.4 ms per frame, while that of planar SLAM is 77.6 ms. Several errors observed in the SLAM algorithm can be reduced by employing the loop-closure detection algorithm mentioned in [30].

Trung-Kien Dao (2016) et al. introduced an indoor navigation prototype for visually impaired users that integrates several multimedia technologies [31]. It consists of four layers: physical, functional, logical and application. After modeling the environment, the user's location is determined through a combination of Wi-Fi and vision information. The system interacts with the user in their local Vietnamese language through voice synthesis. A version of this system deployed in schools revealed its effectiveness, but an accuracy of 97% is insufficient for real-time situations, so it is only suitable for safe environments.

Chan (2016) et al. proposed a modified sigmoid function (MSF) framework that is based on the IMU (Inertial Measurement Unit) [32]. As the camera in the smartphone moves, the topological structure of the MSF is estimated and the IMU estimates the blur levels to adapt the MSF. The concept of artificial intelligence is also used. Through this technique, both the number of errors and the computation time were reduced compared with other techniques such as using guide dogs or White Canes. The system needs to be checked for effectiveness on a public database containing more blurred images (video sequences).

Al-Khalifa (2016) et al. described an Arabic navigational aid, namely Ebsar, for partially sighted users [33]. It consists of wearable devices furnished with sensors and cameras. The application builds indoor maps from the movements of the blind person, produces QR-code area markers and provides voice guidance to its users in Arabic. The application demonstrated satisfactory results. The system can be improved in several ways, such as using Wi-Fi and reducing the dependency on QR codes to track the blind person's movements.

Khenkar (2016) et al. proposed ENVISION, a navigational aid for visually impaired smartphone users [34]. It is based on the fusion of GPS technology and supervised learning and uses new ways to detect both static and dynamic obstacles. The system is robust and works accurately by taking real-time video with the smartphone and making intelligent decisions. The results were recorded in terms of four models: PMhigh with a recall of 70% and precision of 85%, PMlow with a recall of 78% and precision of 83%, PMleft with a recall of 72% and precision of 85%, and PMright with a recall of 77% and precision of 79%. The system needs to be improved to better handle obstacle detection and to give a clearer understanding of the environment around blind users.

Mohane (2016) et al. proposed a system having one camera and using the features of the SIFT algorithm [35]. A movement-based technique is used to extract moving objects from the video made by the camera using K-means clustering. The SIFT algorithm is used to extract features from the object image; the extracted features are compared with features extracted from the dataset images, and if there is a match, the output is given in the form of speech. The system is robust and easy to use, but slow, and it is not suitable for emergency situations in a real-time environment.

Aravinda S. Rao (2016) et al. stressed the lack of devices for detecting potholes and uneven pavements at night or in the dark [36]. The framework involves projecting laser patterns, recording the patterns via monocular video, then analyzing the patterns to extract features and finally giving way signs to the visually impaired client. The HOI (Histogram of Intersections) descriptor is used to detect potholes and uneven areas. The proposed technique attained an accuracy of 90% in detecting potholes, but it works only at night or in dark areas.
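Eqs. (2) and (3) above are the standard pinhole projection and stereo triangulation relations. As a rough numpy sketch of the depth recovery in Eq. (3) only, with an illustrative focal length and baseline rather than values from [27]:

```python
import numpy as np

def depth_from_disparity(disparity: np.ndarray, f: float, B: float) -> np.ndarray:
    """Eq. (3): Z = f * B / d, with disparity d = XL - XR in pixels."""
    d = disparity.astype(np.float32)
    depth = np.full_like(d, np.inf)   # zero disparity corresponds to a point at infinity
    valid = d > 0
    depth[valid] = (f * B) / d[valid]
    return depth

# Toy 8-bit disparity map, focal length 700 px, baseline 0.12 m
disp = np.array([[64, 32], [16, 0]], dtype=np.uint8)
print(depth_from_disparity(disp, f=700.0, B=0.12))
# [[1.3125 2.625 ]
#  [5.25      inf]]
```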


Lukas Everding (2015) et al. proposed a wearable prototype that captures visual information through a Dynamic Vision Sensor (DVS) camera and outputs audio through headphones [37]. The DVS camera has many alluring features over a normal camera, such as lower power consumption and smaller storage requirements, which make it highly attractive for applications where power (battery) is a major concern. The system attained a success rate of close to 100%. However, it is only good for static subjects and fails in the case of moving subjects; moreover, it performs better only after a user is trained for 50-100 hours.

R. Tapu (2014) et al. introduced a new framework based on feature extraction [38]. First, features are extracted based on an image grid and tracked with the Lucas-Kanade algorithm. Then the background and other types of motion are identified using homographic transforms with the RANSAC algorithm and an agglomerative clustering technique, respectively. The distance of the user from an obstacle determines whether the situation is normal or urgent. The advantage of the approach is that it is robust and does not require any previous information about the obstacle. The average execution time was 18 ms/frame on a Windows 7 platform and 130 ms/frame on a Samsung Galaxy S4. A voice module can be incorporated into it for better output.

Praveen (2013) et al. proposed a depth estimation technique on the basis of a local depth hypothesis [39]. An image is captured by the camera and resized for computational efficiency; obstacles are extracted using edge detection followed by morphological image processing operations, their depth is compared to the depth estimated by the local depth hypothesis, and the obstacles are thus identified. The system does not require any prior information about the user's surroundings and can be used in both known and unknown environments, making it suitable for real-time use. No ultrasonic sensors are used in this system. The system failed to detect depth discontinuities between sub-segments within the same obstacle; this can be resolved using graph-based segmentation. The paper reports the deviations of the proposed system in terms of 8 bands.

R. Tapu (2013) et al. introduced a real-time obstacle detection system for visually impaired persons based on a smartphone camera [40]. Interest points are selected from an image by the HoG descriptor and tracked using the Lucas-Kanade algorithm. Homomorphic filtering techniques are used to detect camera and background motion. Obstacles are classified as urgent or normal on the basis of distance and are sent to a BoVW (Bag of Visual Words) classifier. The technique is computationally efficient and effective and achieves high accuracy rates, with an average recall of 83.75%, average precision of 91.5% and average F1 of 87%. The user-alerting system needs improvement; bone conduction headphones could be used instead of normal headphones.

Ali Israr (2012) et al. introduced a visuo-tactile assistive device for helping visually impaired users explore the world around them [41]. The system comprises a webcam, a capacitive touch panel, a touch-panel USB driver, a TeslaTouch driver, a USB hub and a connector. Several computer vision and color extraction techniques are used to build the prototype. Experiments reveal that training is required to operate the prototype. It should also be able to extract more features, such as edges, contours and contrasts, and detect small items such as keys or a coffee mug.

Lee (2012) et al. proposed a depth-based technique for the visually impaired [42]. A Time-Of-Flight (TOF) camera is used to generate depth and color images; by analyzing the depth map, segmentation and noise elimination are applied and obstacles are extracted. The proposed system detects all moving and standing obstacles within a range of 1-2 m from the user. Although the practical implementation of the system is still in progress, it achieved a detection rate of 96.17% in indoor and 93.7% in outdoor environments.

Liyanage (2012) et al. proposed a navigational system based on optical flow estimation [43]. A prototype comprising a virtual reality world was designed, using a headset with two stereo cameras, a portable computer and GPS, to demonstrate the concept. It uses auditory and tactile feedback to guide visually impaired persons. An existing optical flow algorithm is used along with other image processing techniques. Simulation of the technique shows its efficiency in a controlled environment, but real-time implementation is still awaited.

Fazli (2011) et al. proposed an algorithm for the detection of negative obstacles [44]. It is a fusion of stereo vision techniques and a two-stage dynamic programming technique. The paper also discusses several algorithms used for feature matching. The camera collects images during navigation, and the algorithm depicts important terrain features of obstacles that are below ground level, such as holes and drop-offs. Compared with algorithms such as Belief Propagation and Growing Correspondence Seeds (GCS), it performs better in terms of time taken, accuracy of the disparity map and RMS error; the proposed method is 28% faster than the GCS method, but it focuses only on negative obstacles in the environment.

Chen (2011) et al. proposed a technique based on dynamic fuzzy logic and the conversion of road images to Musical Instrument Digital Interface (MIDI) output, to help visually impaired persons move along roads [45]. In this technique, a low-cost camera is mounted on a scooter with an 18-degree downward pitch. The paper also defines the concept of an RGB ratio with respect to the reference colors of an image. The obstacle information includes the normalized image size, the image coordinates of the obstacle center and the lowest obstacle pixel. Although the system has not been tested on large databases, in 96.8% of cases users could detect the number of obstacles correctly, in 90.6% of cases they detected the orientation of obstacles correctly, and in 97.3% of cases they could correctly judge the size of the obstacles.

Costa (2011) et al. proposed a new algorithm to perceive landmarks placed on sidewalks [46]. The algorithm fuses Peano-Hilbert space-filling curves, used for reducing the dimensionality of the image captured by the camera, with Ensemble Empirical Mode Decomposition (EEMD), used to pre-process the image. Evaluation of the results shows that the proposed algorithm leads to a fast and efficient method for assisting the visually impaired in obstacle recognition. Certain improvements are still required: disparity maps and PH-EEMD image analysis could be used.
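Both of Tapu's frameworks above ([38], [40]) rest on tracking interest points across consecutive frames with the Lucas-Kanade algorithm. The OpenCV sketch below shows only that generic tracking step; the corner and window parameters are illustrative assumptions, not the configurations reported in those papers.

```python
import cv2
import numpy as np

def track_points(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    """Detect corners in the previous frame and track them into the current one
    with pyramidal Lucas-Kanade optical flow."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2), dtype=np.float32)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None,
                                                 winSize=(21, 21), maxLevel=3)
    return nxt[status.flatten() == 1].reshape(-1, 2)   # keep successfully tracked points

# Usage sketch: feed consecutive grayscale frames from the navigation camera
# tracked = track_points(frame_t, frame_t_plus_1)
```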


En Peng (2010) et al. devised a real-time obstacle detection mechanism implemented on a smartphone [47]. It helps visually impaired individuals detect any low-height object lying on the floor by combining three techniques:
• color histograms,
• edge cues,
• the pixel-depth relationship.
The paper also discusses several issues faced by the participants, such as how to hold the smartphone. The proposed system has an accuracy of 94% in all cases and needs only 7 ms to execute, but it fails to detect complex floor patterns and the objects lying on such floors.

Critical Challenges:
The critical challenges faced by camera-based techniques are as follows. In some cases the system fails to detect complex floor patterns, and objects lying on such floors are not detected; systems should be able to extract more features such as edges, contours and contrasts. Some systems execute at a very low rate, which may be due to the computer vision algorithm used to calculate the distance between the user and the obstacle. Systems that use laser patterns operate only at night or in dark areas, so a suitable laser source is required that can detect patterns in both day and night. The accuracy of most obstacle recognition systems should be improved, either by using more powerful GPUs (Graphics Processing Units) or other feature extraction tools, but GPUs are themselves very expensive. In most cases the results are not completely accurate; this may be due to random sampling, since not all samples have an equal probability of being chosen. An accuracy of 97% is still insufficient for real-time situations, so some systems are only good for safe and protected environments. High-resolution cameras are very expensive. The size of some systems becomes too big and heavy for a user to carry, and some devices require long training times; heavy and expensive components should be replaced by lighter and cheaper ones. Systems should be able to detect small everyday items such as keys or a coffee mug. Some systems are only good for static subjects and fail for moving subjects, and certain camera-based assistive systems perform well only after a user is trained for 50-100 hours. Detecting concave objects such as holes and downward steps remains very complex.

Future Directions:
In order to accurately detect changing ground planes, machine learning algorithms such as Naive Bayes classifiers and Support Vector Machines can be used. The systems need to be checked on large public databases containing more blurred images and video sequences. Camera-based systems can be practically implemented on a smartphone, tablet or wearable device. Some obstacle detection algorithms can be further extended using object classification algorithms, and voice output can be incorporated. Ultrasonic sensors work even on dark surfaces, where infrared sensors fail, and can therefore be used alongside cameras as required. The systems should be extended to detect road crossings and faces and to provide shopping assistance, thereby improving their usefulness in real-time environments. Bone conduction headphones can be used instead of normal headphones, allowing the user to hear environmental sounds as well as the sound from the device. Speed can be improved by using PH-EEMD (Peano-Hilbert Ensemble Empirical Mode Decomposition) image analysis for pre-processing the images. Disparity maps, which capture the pixel differences between a pair of stereo images, need to be integrated into the obstacle detection modules for depth sensing. Jamming can be reduced by using RADAR and a compact RTK L1 GPS receiver.

2.2 Sensors-Based Techniques

Nur Syazreen Ahmad (2018) et al. proposed a multi-sensor obstacle detection system implemented on a White Cane [48]. The system is based on a model-based state-feedback control strategy, which controls the detection angle of the sensors, thereby reducing false detections. It uses three ultrasonic sensors (detecting the left and right sides) and one infrared sensor (detecting holes and stairs) along with vibrotactile and audio feedback for the visually impaired user. The system performs well, with an accuracy of 97.95% on average. A wearable version could be made to promote hands-free movement, and a survey should be done to assess the usability of the proposed system among visually impaired users.

R. K. Katzschmann (2018) et al. presented a hands-free wearable device, the Array of Lidars and Vibrotactile Units (ALVU), for visually impaired users, based on Time-of-Flight (ToF) sensors [49]. The device comprises a sensor belt and a haptic feedback strap. ALVU creates a map by sensing the environment and projects it onto the user's body through haptic feedback. An overall accuracy of 82% is observed. The viewing range of ALVU needs to be adjusted according to the speed of the user, and its performance could be enhanced further by using turn-by-turn navigation.

R. K. Megalingam (2018) et al. introduced an intelligent navigation system for visually impaired individuals that is based on robotics and uses ultrasonic and infrared sensors [50]. It also comprises a buzzer, vibration motor, motor driver, power source, ISD 1820 voice module, Arduino Mega (ATMEGA1280) and keypad. There are three modes: learning, retrace and free-moving. A joystick directs the movement of the robot. The most useful feature of the system is that a user can stop while walking and restart again. It provides multiple warnings to the user and hence ensures safety.

Gonzalo Baez (2018) et al. proposed a 3D artificial vision system that converts 3D depth information into 3D sound using Head-Related Transfer Functions (HRTF) and structured-light sensors [51]. A Microsoft Kinect is used to register a 3D point cloud of the environment; directional spatial interpolation is the most important feature of the HRTF stage. Although the system shows an accuracy of 97%, it is ineffective in real-time situations due to its large size and external power requirements; it should therefore either be embedded in a smartphone or turned into a wearable technology.

Nabila Shahnaz Khan (2017) et al. introduced an assistive system for helping visually impaired users walk, based on ultrasonic sensors and a GPRS module [52]. The framework, consisting of an Arduino Uno, sensors, buzzers, switch buttons and a battery, is used to detect obstacles and manholes. The sensor has one opening for sending the sound pulse and another for receiving the echo, and the distance is calculated as

Distance = Time x Speed_of_sound / 2            (4)

The sensor is not affected by dust or water and covers a large range of distances while being held during walking. An Arduino digital magnetic compass could be added to help users find the right direction, together with audio notifications.

Tayla Froneman (2017) et al. developed a wearable prototype for the detection of obstacles in unfamiliar environments using low-cost ultrasonic sensors for visually impaired persons [53]. The sensors are worn on the user's waist, as it is considered the least-moving part of the body. The detection distance ranges from 0.02 m to 7 m. On testing, 100% obstacle detection sensitivity and 100% object discrimination were observed, but the system yields inaccurate results for objects such as tables and chairs. The cost of the prototype can be reduced by using a microcontroller instead of an expensive computer.

Van-Nam Hoang (2017) et al. presented a wearable obstacle detection and warning system for visually impaired users based on an electrode matrix and a mobile Kinect [54]. The system uses imaging, segmentation, Euclidean distance measurement and the Watershed algorithm. The other hardware includes a laptop in a backpack, an RF transmitter and a belt. It shows 50% accuracy in real-time conditions and 82% in a controlled environment. It is a simple, portable, hands- and ears-free prototype that gives timely warnings, but the bulky prototype makes it difficult for a user to wear on the body for a long time, and it requires training before use.

Desar Shahu (2017) et al. presented a low-cost obstacle detection system using two ultrasound sensors along with GPS and GSM modules for visually impaired users [55]. The system is based on estimating the time of flight (TOF) of the transmitted pulse reflected from the obstacle. Several scenarios, including typical indoor and outdoor environments, were considered to test the efficiency of the system. The system is lightweight, and obstacles of different materials and shapes are successfully detected, but it fails to detect inclined doors. The reliability of the system could be improved by using microwave sensors instead of ultrasound sensors.

Bruno Andò (2017) et al. presented a wearable active assistive system to help visually impaired persons navigate indoor environments based on multisensory technologies [56]. The system uses ultrasound sensors and a wireless sensor network along with an advanced trilateration paradigm (MTA), signal processing algorithms, UEI and UEC functionalities and Nelder-Mead nonlinear optimization to perform real-time localization. Audio output is given through a Bluetooth audio feedback device. Mispositioning of the environment nodes is also compensated. MTA needs more time, so there is a trade-off between localization performance and processing time.

Karimi (2017) et al. presented a context-aware smartphone-based approach to help the visually impaired in indoor environments [57]. The approach uses two consecutive frames, computes the optical flow and tracks texture features to detect the obstacles in front of blind users. The frames are set based on the fusion of sensors and machine learning concepts. The algorithm is more precise than comparable algorithms, with an accuracy of 79%, precision of 82%, recall of 69% and F-measure of 72% in comparison to the other techniques (sparse point dataset and predefined grid). In the case of lamps and reflective surfaces such as floors, inaccurate results are observed.

Zhou (2016) et al. proposed a novel technique combining ultrasonic sensors to sense the environment, GPS and Google Maps to determine the current location, Bluetooth devices for data flow using the smartphone, and voice commands [58]. The software has two parts: a wearable smart sensor and an app running on the smartphone. The prototype was built on the Android platform in the Eclipse IDE. The sensor data is relayed to the device and covers distances from 1 to 10 feet, giving users a hands-free interaction with the device. The system has an average standard deviation of 0.448 and could be improved to detect objects beyond the 10-foot range in a controlled environment.

Jee-Eun Kim (2016) et al. introduced a new solution, StaNavi, to address the problems faced by blind travelers at railway stations [59]. The system uses Bluetooth Low Energy (BLE) and the built-in compass of the smartphone. BLE is used as a localization module that is considered highly accurate, low cost ($5 per unit) and low power (running for several years). It provides voice instructions and was tested at one of the busiest railway stations of Tokyo. No external hardware was required, and it encouraged a feeling of independence and confidence among users. However, 38 deviations occurred in 32 trials, requiring users to re-route their paths; this large number of deviations should be reduced by integrating StaNavi with other positioning methods.

Chaitali Kishor Lakde (2015) et al. proposed a wearable navigational assistance system for visually impaired persons based on an infrared (IR) sensor and an RGB sensor [60]. The system is made up of an AVR 8/16 development board, headphones, a voice recording IC, a power supply and a vibrator. Several pattern matching algorithms are used along with the fusion of sensing and voice-based guidance. It is a simple, portable, low-cost and user-friendly system that does not require training. Measures should, however, be suggested for unfortunate situations such as fast-moving vehicles or accidents.

B. Andò (2015) et al. introduced a haptic device which aims to guide visually impaired users in daily routine tasks [61]. It comprises two ultrasonic sensors and an array of strain-gauge sensors, and is based on a codification strategy, detection simulation and threshold algorithms. It includes a contactless cane with vibrating actuators to warn the user about obstacles. It provides a natural codification of obstacles and improved user confidence. Handles should also be designed with left-handed persons in mind, and results could be improved by using more sensors.
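Eq. (4) above is the usual time-of-flight conversion used with ultrasonic rangers such as the one in [52]: the pulse travels to the obstacle and back, hence the halving. A minimal sketch, assuming sound travels at roughly 343 m/s in air at room temperature:

```python
SPEED_OF_SOUND = 343.0   # m/s, assumed value for air at about 20 degrees C

def echo_distance_m(echo_time_s: float) -> float:
    """Eq. (4): distance = (time of flight x speed of sound) / 2."""
    return echo_time_s * SPEED_OF_SOUND / 2.0

# Example: an echo received after 5.8 ms corresponds to roughly one metre
print(round(echo_distance_m(0.0058), 3))   # 0.995
```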


Uddin (2015) et al. proposed a system with two modules: direction guidance and obstacle detection [62]. Directions are provided by the smartphone via voice commands, while sensors detect obstacles and the minimum-distance route is found using Dijkstra's algorithm. The data is transmitted over Bluetooth. An accuracy of 100% was obtained for detecting obstacles, 84% for detecting holes and 93% for detecting turns, with an overall accuracy of 92.33%. Accuracy depends on the weather, and changing weather leads to inaccurate results; the Microsoft Bing map used is also not up to date.

Zhe Wang (2014) et al. observed the need for a robust obstacle detection system for blind persons and used scene segmentation and labeling to design an obstacle avoidance system [63]. The prototype uses a multiscale voxel technique to reduce noise effects and improve segmentation. The results are then combined with depth and color data, and a decision tree is trained to classify the different segment types. Testing this method on the NYU datasets shows that the system is fast, robust and effective; the proposed technique has been compared with other techniques with significant results. The method could be extended to sequences of images and videos and to cover a larger area.

Cheng-Lung Lee (2014) et al. proposed an obstacle detection device based on automobile parking sensors [64]. It consists of three modules:
• a sensing module (transmitting and receiving unit),
• a processing module (control host),
• a warning module (buzzer).
Several experiments were conducted, comparing the efficiency of the device with the White Cane and with the combination of the proposed device and the White Cane. The combination showed 0% collisions and the best results. The sample data used for testing is, however, very small, which limits users' trust in the results.

Nakajima (2013) et al. put forth a navigational system for blind users in indoor environments [65]. The system uses visible light communication technology with LED lights and the geomagnetic sensors available in a smartphone. The authors claim the system performs effectively for some visually impaired users, but the biggest problem is a direction error equivalent to a 1-hour clock difference. Moreover, for faster-moving users the spoken navigation does not keep up, and azimuth accuracy is not detected by the system.

Wai L. Khoo (2012) et al. used Microsoft Robotics Developer Studio to create a virtual environment for evaluating multimodal sensors that are used for assisting visually impaired users [66]. It uses multimodal sensors, including laser, ultrasonic and sonar sensors. The outcome is a wearable system which uses segmentation-based stereo vision algorithms, Microsoft Robotics Developer Studio, stereo cameras, an Xbox controller, BrainPort and Braille. The Xbox controller is easier to use than a mouse and keyboard, and laser sensors are more accurate for ranging distance and angle up to 80 m. Different combinations and placements of sensors, along with the user's learning curve, will provide a research platform for building different assistive tools for the visually impaired.

B. Mustapha (2012) et al. proposed a system composed of ultrasonic and optical sensors to detect a wide range of obstacles for visually impaired persons [67]. It uses ultrasonic (SRF10), infrared (GP2Y0A21YK), laser range and ultrasonic range (sonar) sensors, and it applies correlation distance measurement and Bluetooth technologies along with microcontrollers, smart transceivers and warning units for the user. The IR sensor performs better than the ultrasonic sensor in terms of speed and across material types and sizes. The sensor is attached to the shoe tip of the user and gives repeated warnings on detecting an obstacle. It detects small and large objects on walkways, stairs and uneven surfaces. The same experiments could be conducted using a proximity sensor to check its efficiency.

Atif Khan (2012) et al. proposed a wearable obstacle and human detection system for visually impaired users using depth information acquired by a Kinect-class sensor, the Xtion PRO LIVE [68]. The system utilizes the OpenNI framework, the Xtion PRO LIVE, a laptop, Bluetooth and a USB power cord. It provides an efficient and robust algorithm that can identify up to three people at a time, but it fails if the object is in motion; also, the SAPI feature used here reduces the efficiency of the output.

Gallagher (2012) et al. presented an indoor positioning system that runs on a smartphone and is based on a Kalman filter that fuses the information from all the sensors present on the smartphone [69]. The earlier part of the paper lists the requirements of blind or visually impaired users in indoor/outdoor environments. Compared with state-of-the-art Wi-Fi and NN algorithms, the system shows an improvement of 35% over the Wi-Fi algorithm and 50% over the classical NN algorithm. Large errors were detected due to the presence of metallic structures in a room. The system can be further improved by choosing suitable noise covariance matrices for the Kalman filter.

Mounir Bousbia-Salah (2011) et al. presented a wearable navigational aid to supplement the vision of blind persons, using ultrasonic sensors on the user's cane and mounted on the shoulders [70]. It consists of two vibrators, an ultrasonic cane, a microcontroller, an accelerometer, a footswitch, a speech synthesizer, a hexadecimal keypad and a mode switch. The system detects obstacles within a 6 m range and warns the user through vibrations and voice. The proposed system is low cost and effective in real time, but its efficiency could be improved by using GPS to track the user's position.

Bruno Andò (2011) et al. suggested a new platform to help the visually impaired interact with the environment using smart distributed sensors [71]. The Decision Support System (DSS) tool uses data from the sensor probes, the Management Tool (MT) manages interaction, and a Bluetooth audio interface conveys messages about the environment to the user. The system provides continuous perception of both the environment and the user. It is a flexible system and can even assist deaf people. Performance can be enhanced by increasing the complexity of the DSS.
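Uddin's direction module [62], described earlier in this subsection, reportedly finds minimum distances with Dijkstra's algorithm. The following self-contained sketch runs the algorithm over a made-up indoor graph; the node names and edge weights are purely illustrative.

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from `start` in a graph given as
    {node: [(neighbour, edge_length), ...]}."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry, already relaxed
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Hypothetical corridor junctions with distances in metres
graph = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)], "C": [("B", 1), ("D", 8)], "D": []}
print(dijkstra(graph, "A"))   # {'A': 0.0, 'B': 3.0, 'C': 2.0, 'D': 8.0}
```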


A. M. Kassim (2011) et al. proposed a wearable warning system called 'MY 2nd EYE' for visually impaired individuals based on distance measurement and infrared sensors [72]. It consists of an H-bridge motor driver, Pulse Width Modulation (PWM) and a vibration motor along with a PIC microcontroller, rechargeable battery, glove and wheel. It is a low-cost system in which the user only needs to wear gloves, and it vibrates on detecting obstacles. However, the system needs to be improved so that it can operate in crowded areas such as supermarkets and deal with high-speed obstacles such as cars.

Mitsuhiro Okayasu (2010) developed two systems to help visually impaired people visualize their environment using a thermographic camera (infrared sensor camera) along with an ultrasonic sensor [73]. One is implemented on a White Cane and the other is a wearable wrist band. The system uses depth data and a 3D view rendered with an array of pins, and it also includes a vibrator and tooling apparatus to guide the user. The White Cane is able to detect high as well as low-level objects ranging from 0.5 m to 5.5 m, and the 3D visual system provides information on the shape and distance of obstructions up to 7.5 m. Certain collisions were observed during execution, and the system is unable to detect concave objects such as holes and downward steps.

Critical Challenges:
Some of the critical challenges faced while using sensors as the input devices of assistive systems are as follows. Some systems do not detect objects beyond a range of 10 feet. Some systems do not yield accurate results in the case of lamps, reflective floors or other shiny surfaces. In many cases accuracy depends on the weather, and changes in the weather make the results highly unpredictable. Some systems yield inaccurate results, detecting a table as a chair and vice versa, and detecting inclined doors is still a complex task. Systems need to be improved for crowded areas such as supermarkets, and users are not warned of emergency obstacles such as high-speed cars. Some trilateration algorithms such as MTA provide very accurate results but take much time, so there is a trade-off between localization performance and processing time. Many sensors make the prototype too bulky for the user to wear on the body for long periods, and users require training before using some prototypes. A lack of confidence in using and adapting to newer technology is often observed among visually impaired users.

Future Directions:
An Arduino digital magnetic compass can be used to help users find the right direction, and adding audio notifications makes systems more user-friendly. System performance can be improved further by choosing suitable noise covariance matrices for the Kalman filter. The large number of deviations in the results of StaNavi could be reduced by integrating it with other positioning methods (such as dead reckoning). Systems should be extended to detect azimuth accurately. The reliability of some systems can be improved by using microwave sensors instead of ultrasound sensors. GPS can be used to track position-related information and should be incorporated into navigational devices. Some systems use smart sensors along with decision support systems; increasing the complexity of such systems will yield better performance. Proximity sensors can be used in experiments when testing the efficiency of a prototype. Measures should be suggested for unfortunate situations such as fast-moving vehicles or accidents. Sensors should be able to adjust themselves according to the speed of the user. Different combinations and placements of sensors, along with the user's learning curve, will provide a research platform for building different assistive tools for the visually impaired. Sensor-based systems that have handles should also consider left-handed people. Increasing the number of sensors will give more accurate results but will make the system very heavy, so the trade-off between accuracy and bulk has to be managed.

2.3 Both (Camera & Sensors)

Maria Cornacchia (2018) et al. proposed a small and low-cost obstacle detection assistance system for visually impaired users based on a camera and a patterned-light field sensor [74]. The system helps users navigate indoor and outdoor environments and detects obstacles along their way. It uses deep learning, specifically Convolutional Neural Networks (CNNs), to classify the obstacle frames, and a Long Short-Term Memory (LSTM) network to smooth the frame-level predictions, which improves detection and gives an accuracy of 98.37%. Real-time performance can be improved by using better shape detection algorithms and by also detecting different colors.

M. Vanitha (2018) et al. designed a smart walking stick using four ultrasonic sensors and one camera for assisting blind users and helping them detect obstacles [75]. Three sensors are used for detecting obstacles, while the fourth detects potholes. The camera also recognizes objects and text, acting like a virtual eye for blind users. The system successfully detects obstacles in a 360-degree view around the smart stick. Including a GPS module in this framework would help the guardians of a blind person know their exact location at any time, and the framework should be extended to recognize faces.

W. Elmannai (2018) makes use of fuzzy logic to guide the visually impaired around the obstacles in their way and reports 100% results [76]. The proposed technique is based on the integration of sensors and cameras using ORB (an extension of BRIEF) features for obstacle detection, with an accuracy rate of 98%. The technique does not detect all the obstacles in the image, which may be due to large-sized obstacles, and needs improvement.

D.-R. Chebat (2017) et al. provided a review of advanced technologies based on neural correlates and sensory substitution for visually impaired persons and suggested the sensorimotor loop as the basis for plastic changes in the brain [77]. The paper reviews sensory substitution devices that use cameras and sensors, and the neural correlates of route finding in sighted and blind persons.
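Elmannai's camera-sensor fusion [76] above relies on ORB descriptors (an extension of BRIEF). The OpenCV sketch below shows the kind of descriptor extraction and matching step such a pipeline typically builds on; the feature count and match limit are assumptions rather than the paper's settings.

```python
import cv2

def orb_matches(img_a, img_b, max_matches=50):
    """Detect ORB keypoints in two grayscale frames and return the best matches."""
    orb = cv2.ORB_create(nfeatures=500)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return []                                   # no keypoints found in one frame
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return matches[:max_matches]

# Usage sketch: match features between consecutive frames of the navigation camera
# matches = orb_matches(prev_frame_gray, curr_frame_gray)
```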


required to be developed to allow even more services through region. The feature recognition algorithms such as HOG,
IoT. The unfortunate situations such as failed obstacle LBPH are used in combination with SVM classifier on different
detection cases should be addressed. Solar batteries can be sets of data. This leads to the formation of efficient datasets,
used for the improvement of autonomy & sustainability of which can be further used for building navigational systems for
smart cane. blind users. Practical application of such a guidance system,
based on this approach is required for both local & urban
P. Kumar (2017) et al. proposed navigation system that areas.
provides brief & quick audio messages to visually impaired
persons based on neural networks & neural learning [79]. The Mocanu(2016) et al. suggests a wearable assistive device
proposed technique consists of 2 modules. These are that uses the ultrasonic sensors, mobile camera & machine
Intelligent Navigation Module (that uses Ultrasonic sensors, learning techniques (SVM Support Vector Machines) to
compass, Arduino Studio & Arduino SDK) and Face identify objects for visually impaired objects [92]. Munoz et al.
Recognition Module (uses smartphone camera, Artificial proposes an indoor staircase detection assistive system for
Neural system, and voice enabled interface). The system visually impaired based on RGB-D camera & SVM based
provides an accuracy of 90% for face recognition & 95% for multi-classifier in [84]. The system observed an F-score of
obstacle detection. But it works only for static faces. 82% & an accuracy of more than 90%. The system does not
Integration with IoT can make it more effective. give accurate results for some classes of objects such as
bicycles. Also, it does not provide navigational information,
S. Chinchole (2017) et al. presents a low-cost stick that uses which is necessary for a user to reach specific location.
the concept of artificial intelligence to detect obstacles for Enhancing features such as face detection could be used to
visually impaired persons [80]. It uses a smartphone camera, improve performance.
Bluetooth & several sensors for the perception of user
environment. The system was successful in performing the Parra (2015) et al. reviews all the published proposals that
task of obstacle recognition, obstacle detection & independent uses sensors like the accelerometer, gyroscope or light,
navigation. Beep sound is produced as a warning when any multimedia sensor [85]. The design & deployments of such
obstacle comes out of the safe distance set in the system. proposals in making applications for visually impaired
Hands-free assistance is required as hands of the user are especially elderly people, such as AAL (Ambient Assisted
always occupied due to smartphone holding. Living) & e-Health, is shown here. The collected information is
grouped into several categories & compared. There is a need
Muneshwara (2017) et al. builds a portable product, to develop such systems that have both microphone & camera
which is worn as a cap by visually impaired users [81]. in them.
Raspberry Pi processes the obstacles data‘ & inform the users
through headphones. The device is simple & robust. The Shripad Bhatlawande (2014) et al. has proposed a
special point about this device is that it is also helpful for the wearable assistive system for helping visually impaired
persons with other disabilities such as legs. The paper also individuals in navigation by using the combination of camera &
discusses about the advantages & disadvantages of Electronic ultrasonic sensor [86]. A user understands his environment by
Travelling Aids (ETAs) being used by persons with disability. using waist-belt & bracelet along with WhiteCane. The waist-
The device can be paired with GPS for improved detection of belt is made up of fabric & Velcro material, having battery &
obstacles & to analyze the environment more properly. Audio camera attached to it. The manufacturing cost of the system is
Feedback can be provided in regional languages and $300. In order to test the reactions of users towards the
accuracy can be improved by using high level cameras. system, System Capability & Utility Score (SCU) was
Rabia Jafri et al. (2016) present a depth-based obstacle detection system to help visually impaired users navigate indoor environments [82]. The system utilizes the Google Project Tango Tablet Development Kit, whose hardware includes an NVIDIA Tegra K1 processor with 192 CUDA cores, an infrared-based depth sensor, an ambient light sensor, an accelerometer, barometer, compass, GPS and gyroscope. The application relies on computer vision techniques and is a real-time, affordable, aesthetically acceptable, stand-alone mobile assistive application that helps users in unfamiliar situations. Errors & noise were observed in the point cloud scans, and the system only detects obstacles but does not identify them.
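A depth-based detector of this kind essentially asks whether anything in the depth map directly ahead of the user is closer than a chosen threshold. The Python sketch below shows only that core check on a depth frame held as a NumPy array of distances in metres; the region-of-interest bounds and the 1.5 m threshold are illustrative assumptions and are not taken from [82].

    import numpy as np

    def nearest_obstacle_m(depth_m, roi=(0.25, 0.75, 0.3, 1.0), min_valid=0.3):
        """Return the nearest valid depth (metres) inside a central region of interest.
        depth_m: HxW array of distances; zeros/NaNs mark pixels with no sensor reading.
        roi:     fractional (left, right, top, bottom) bounds of the window ahead of the user."""
        h, w = depth_m.shape
        l, r, t, b = roi
        window = depth_m[int(t * h):int(b * h), int(l * w):int(r * w)]
        valid = window[np.isfinite(window) & (window > min_valid)]  # drop missing/noisy pixels
        return float(valid.min()) if valid.size else float("inf")

    def obstacle_ahead(depth_m, warn_at_m=1.5):
        return nearest_obstacle_m(depth_m) < warn_at_m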
Daniel Koester et al. (2016) propose an approach for detecting zebra crossings in aerial images by using a geospatial database, which helps in the navigation of visually impaired persons [83]. The framework automatically learns an appearance model from accessible geospatial information for an analyzed region.

The work of Parra et al. [85] uses sensors like the accelerometer, gyroscope, light and multimedia sensors embedded in smartphones. The design & deployment of such proposals for applications aimed at visually impaired and especially elderly people, such as AAL (Ambient Assisted Living) & e-Health, is shown there, and the collected information is grouped into several categories & compared. There is a need to develop such systems that have both a microphone & a camera in them.

Shripad Bhatlawande et al. (2014) have proposed a wearable assistive system for helping visually impaired individuals in navigation by using the combination of a camera & an ultrasonic sensor [86]. The user perceives his environment by using a waist-belt & a bracelet along with the white cane. The waist-belt is made of fabric & Velcro material and has a battery & camera attached to it. The manufacturing cost of the system is $300. In order to test the reactions of users towards the system, the System Capability & Utility (SCU) score was calculated: the score was 20.93 with the white cane and 26.20 with the prototype. The system should also include object categorization, needs to be made portable, and requires testing on a larger sample.

Dunai et al. (2014) describe a new system named CASBliP (Cognitive Aid System for Blind People) based on artificial vision, which uses acoustic sounds & helps blind users while walking or travelling [87]. The initial part of the paper presents a review of Electronic Travel Aid (ETA) systems, which are classified into 3 categories: Input Interface (ultrasound, GPS, laser & artificial vision), Processing Interface (the techniques & software required for processing the information) and Output Interface (delivering the output to the user). The paper discusses 2 prototypes under development:
• TANIA (Tactic Acoustical Navigational & Information Retrieval) – uses MTx inertial sensors, and
• SWAN (System for Wearable Audio Navigation) – uses 4 cameras & light sensors.
Objects are detected within an area of 5 to 15 m & the user is warned. White cane users were highly satisfied with this prototype, but its heavy wearable helmet is a major drawback.
Fuzzy logic, a subset of AI, has also been used extensively in this field.

Jin-Hee Lee et al. (2014) propose a wearable assistive system for visually impaired persons that utilizes a camera, a GPS receiver, a magnetic compass & multiple ultrasonic sensors [88]. All the data taken from these inputs is sent to an embedded computer. The paper presents 2 navigation modules, one for indoor navigation & the other for outdoor navigation. Using the magnetic compass & GPS receiver, the proposed system shows an accuracy of 55%. The system gives inaccurate results if the user walks faster, so it only fits environments with fixed obstacles; it should also be made more lightweight.

Hotaka Takizawa et al. (2013) propose a novel system, implemented on a cane, for helping visually impaired persons detect objects in the environment. The Kinect has a 3-axis accelerometer, an infrared sensor and an RGB camera [89]. The system uses depth data and 3D computer vision techniques along with a white cane, a keypad-type controller and a tactile device. It recognizes 3D objects in less time than a normal white cane and provides only brief & necessary information to the user. The prototype is too big & can cause fatigue, so downsizing of the system is necessary, and the system is not executed in real time.
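Kinect-class sensors such as the one in [89] deliver a depth image that is usually back-projected into 3D points with the pinhole camera model, X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy, before any 3D object recognition is attempted. The snippet below shows only that standard conversion; the intrinsic parameters are typical illustrative values, not the calibration used in the paper.

    def depth_pixel_to_point(u, v, z_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
        """Back-project one depth pixel (u, v) with depth z_m (metres) to a 3D point
        (X, Y, Z) in the camera frame, using assumed Kinect-like intrinsics."""
        x = (u - cx) * z_m / fx
        y = (v - cy) * z_m / fy
        return (x, y, z_m)

    # A point 1.2 m away seen near the image centre lies almost on the optical axis:
    print(depth_pixel_to_point(320, 240, 1.2))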
Pundlik et al. (2013) developed a real-time collision detection system that uses a body-mounted camera & gyro-sensors [90]. The system computes sparse optical flow from the camera video, uses the gyro-sensors to predict & issue warnings, and estimates collision risk based on the motion estimates. It has been implemented on a generic laptop as well as on an embedded OMAP-3 compatible platform. The approach is successful in estimating collision risk for obstacles: out of 4 test sequences, the proposed technique had false rates of 2.05% & 2.56% in sequences 1 & 2 respectively. Collision warnings can be refined for better risk estimation; also, using positional tracking can improve performance.
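The first stage of a collision detector such as [90] is the sparse optical flow itself. The Python sketch below shows one common way to obtain it with OpenCV and to summarize it as a single motion magnitude; the parameter values are illustrative, and the full risk model of [90], which additionally fuses gyroscope data, is not reproduced here.

    import cv2
    import numpy as np

    def mean_flow_magnitude(prev_gray, curr_gray):
        """Track sparse corner features between two grayscale frames and return their mean
        displacement in pixels (a coarse motion cue, not a full collision-risk score)."""
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
        if pts is None:
            return 0.0
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
        good = status.ravel() == 1
        if not good.any():
            return 0.0
        flow = (nxt[good] - pts[good]).reshape(-1, 2)
        return float(np.linalg.norm(flow, axis=1).mean())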
Brian F. G. Katz et al. (2012) present a wearable architecture, Navigation Assisted by artificial VIsion and GNSS (NAVIG), consisting of a head unit and a backpack, for helping visually impaired users in navigation [91]. The system is based on a GPS sensor, 2 head-mounted cameras and a BumbleBee stereoscopic camera. The prototype is built using real-time embedded vision algorithms, fusion algorithms and bio-inspired vision, along with an ANGEO GPS, an XSens orientation tracker, headphones, a microphone, and a notebook computer. High-precision geolocalization is achieved and the user can select pedestrian routes. A 3D synthesis of relative location could make it more helpful; otherwise, much of this purpose can also be served by white canes and guide dogs.

Amit Kumar et al. (2011) present a wearable navigation aid to help blind persons in their travelling by using a USB camera along with ultrasonic sensors [92]. Face detection and cloth texture analysis are used to identify humans. The system is composed of a sonar, a USB camera and an eBox 2300™ embedded system. It detects obstacles up to 300 cm & humans within 120 cm & uses a beep sound; an accuracy of 95.45% is achieved. It is easy to carry, small, lightweight & convenient for the user, but the user needs to carry the eBox 2300™ along with sensors weighing 500 gm.

Angin et al. (2010) present a traffic light detector as an initial component of a context-aware model [93]. The approach is based on resources made available by cloud computing suppliers & location-specific resources present on the internet, with the aim of building a system that has limited dependence on infrastructure. The experiments done using this approach show that there is a need to build a more robust obstacle detection system; the camera mounting position is also still an issue that needs to be addressed.

Critical Challenges:
Certain challenges faced while using sensors along with cameras are as follows. Objects should be categorized so that they can be effectively detected and reported to the blind user, but object categorization is missing in some systems. Some techniques work only for static objects such as faces. In most of the smartphone-based systems, the hands of the user are always occupied by the smartphone; hence, hands-free assistance is required. Some systems show inaccurate results if the user walks faster, so they only fit environments with fixed obstacles. The size of the prototype becomes too large when a camera and sensors are combined (such as with the eBox 2300™); this can cause fatigue, so downsizing of the system is necessary. The camera mounting position is still an issue that needs to be addressed in most cases. A central server should be developed to allow more services through IoT. Unfortunate situations such as failed obstacle detections should be addressed. A heavy wearable helmet is a major drawback in some prototypes. In some cases, the proposed technique doesn't detect all the obstacles in the image; this may be due to large-sized obstacles and needs an adjustment of the system, which is difficult for a blind user. Some systems do not give accurate results for some classes of objects, such as bicycles. Errors in data and noise are observed in point cloud scans. Some systems only detect obstacles but do not identify (classify) them; this purpose can also be served by white canes and guide dogs, so it is difficult to motivate users to adopt the new technology.

Future Directions:
Including a GPS module in this type of framework will help the guardians of a blind person know his or her exact location at any time. Frameworks should be extended to recognize faces. All systems need to be portable so that they can operate at all places. Collision warnings need to be refined for better risk estimation. Using positional tracking through GPS can improve a system's performance and analyze the environment more thoroughly. The real-time performance of such systems can be improved by using better shape detection algorithms and by also detecting different colors. IoT can be embedded into such systems to make them more effective. Practical application of such guidance systems is required for both local & urban areas. Audio feedback can be provided in regional languages as well. Accuracy can be improved by using higher-grade (HD) cameras. 3D synthesis of relative location is helpful for object classification and detection. Solar batteries can be used to improve the autonomy & sustainability of smart canes.
2.4 Others
• RGB-D based:
Kailun Yang et al. (2018) proposed an effective wearable framework composed of smart glasses & a waist-worn path-finder that uses an RGB-D sensor to guide visually impaired persons in detecting obstacles [94]. The proposed system aims to achieve 2 goals, i.e. providing long-term traversability to blind users and detecting low-lying obstacles. The navigation system can be improved by making it achieve higher perception levels & offering more independence to its users.

Jinqiang Bai et al. (2017) present a low-cost and novel Electronic Travel Aid (ETA) in the form of smart glasses based on RGB-D sensors for visually impaired persons [95]. The system acquires depth information through the RGB-D depth sensor, which comprises a CMOS (Complementary Metal Oxide Semiconductor) image sensor. An accuracy of 98.93% is observed even without the ultrasonic sensor when facing frosted glass, but the results are inaccurate with purely transparent glass. The system produces promising results in comparison to the white cane, informing the user through a beep sound in crowded areas such as supermarkets.

W. C. Simões et al. (2016) present an audio assistive system based on visual markers, ultrasonic sensors & a camera for blind persons [96]. It consists of a pair of glasses with an RGB camera & ultrasonic sensors, and a low-cost mini-PC is used for storing the database & the Haar cascade classifier. An accuracy of 94.92% was observed in the recognition of markers & 98.33% in the detection of obstacles. The system needs many improvements, such as low-light detection, and it should support different languages for people of different regions. A better camera (an infrared camera) should be used.
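Marker recognition with a Haar cascade classifier, as used in [96], follows the standard OpenCV detection pattern sketched below. The cascade file name is a placeholder for a cascade trained on the system's own visual markers, and the detection parameters are illustrative rather than those of [96].

    import cv2

    # Placeholder path: [96] trains its own cascade for its visual markers.
    marker_cascade = cv2.CascadeClassifier("marker_cascade.xml")

    def detect_markers(frame_bgr):
        """Run the Haar cascade on a grayscale frame and return bounding boxes (x, y, w, h)."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)   # mild help under uneven indoor lighting
        return marker_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                               minSize=(24, 24))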
Vítor Filipe et al. (2016) propose a real-time navigation system to help visually impaired individuals avoid obstacles on their path by using the Microsoft Kinect sensor and acquiring RGB-D images of the objects [97]. It uses visual recognition of markers, RGB-D images, depth information, C# programming and neural networks, in coordination with FANN (an open-source library) and NyARToolkit. Proper detection of wall-mounted markers within a range of 1-4 m, and within a vertical field in the range of 0.8-4 m in front of the user, is observed. The system is not portable, so it is difficult for the user to carry; sometimes it also yields false results, which could be improved by using NFT markers. Moreover, the need for such a system is debatable, as blind users themselves tend to follow paths along walls.

Bing Li et al. (2016) present a novel wearable navigation system, ISANA, based on a depth (RGB-D) sensor & cameras for visually impaired persons [98]. The framework incorporates an indoor map editor & a Tango device app with multiple modules. It also supports multi-floor navigation by defining graph connections between floors, stairs, escalators or elevators. The system does not detect dynamic obstacles; also, it should focus on complete environment understanding.

Young Hoon Lee et al. (2016) presented an indoor navigational application based on an RGB-D camera & IMU sensors [99]. Real-time ego-motion estimation (using both sparse features and dense point clouds), mapping, path planning & a smartphone form the basis of this system. In order to attain real-time frame estimation, FOVIS (Fast Odometry from VISion) is used. The proposed method provides accurate results, with errors averaging 0.88 m over a 13.93 m trajectory, and a comparison with the white cane shows an improvement of about 57% over the cane. Inaccurate results were obtained in some cases due to severely blurred images. Further plans are being made to adopt a real-time image-based localization algorithm, since it is observed that a few places are visited multiple times by a user.

Huy-Hieu Pham et al. (2016) present a wearable support system for visually impaired individuals to help them detect obstacles in indoor environments based on the Kinect sensor and RGB-D 3D-image processing [100]. The system is comprised of a personal computer (PC) and a Tongue Display Unit (TDU). An accuracy of 90.69% is observed. It detects 4 types of obstacles (walls, floors, doors & stairs) by measuring the distance between the user & the obstacles. No satisfactory performance is observed while detecting downward stairs. Also, the Kinect sensor does not detect objects under strong lighting, which can be resolved by using a combination of color & depth information. The high execution time can be reduced by fusing the 4 algorithms & applying probabilistic approaches.

Aladren et al. (2016) present NAVI (Navigational Assistance for the Visually Impaired), based on an RGB-D camera and its range & visual information [101]. The range sensor provides depth information & differentiates between wild terrain & solid obstacles. The range camera is worn on the neck & the laptop is carried in a backpack. Real-life implementation of the system yields a precision of 99% & a recall of 95%, but it does not work in direct sunlight and can accurately measure distances only up to 3.5 m. These limitations can be addressed by using better color segmentation & floor segmentation techniques.
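Floor segmentation of the kind used in [101], and in several other RGB-D systems, is often bootstrapped by fitting a dominant plane to the point cloud and treating points well above that plane as obstacle candidates. The sketch below is a plain RANSAC plane fit over an N x 3 array of points; the thresholds and iteration count are illustrative, and the actual pipeline of [101] combines range data with color-based segmentation.

    import numpy as np

    def fit_floor_plane(points, iters=200, tol=0.03, rng=np.random.default_rng(0)):
        """RANSAC plane fit on an Nx3 point cloud (metres). Returns (unit normal, d, inlier mask)
        for the plane n.x + d = 0 supported by the most points within `tol` metres."""
        best_n, best_d, best_mask = None, None, np.zeros(len(points), dtype=bool)
        for _ in range(iters):
            p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p2 - p1, p3 - p1)
            norm = np.linalg.norm(n)
            if norm < 1e-9:                     # degenerate (collinear) sample
                continue
            n = n / norm
            d = -n.dot(p1)
            mask = np.abs(points @ n + d) < tol
            if mask.sum() > best_mask.sum():
                best_n, best_d, best_mask = n, d, mask
        return best_n, best_d, best_mask

    # Points far from the recovered plane are obstacle candidates;
    # points on the plane are traversable floor.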

Perez-Yus et al. (2016) present a wearable method for detecting stairs based on a chest-mounted RGB-D camera for visually impaired persons [102]. A 3D point cloud is computed by the sensor & depth images are retrieved. The paper deals with detecting staircases in indoor environments. On comparing the results of the proposed technique with other techniques, it was observed that the proposed system has 0% false positives (FP) & false negatives (FN). The system should be extended to outdoor environments as well, and it should also include information such as the number of stairs & the color of the obstacles.

M. Poggi & S. Mattoccia (2016) propose a technique based on 3D computer vision & deep learning to guide visually impaired users [103]. RGB-D sensors capture images, an embedded CPU board processes them, & deep learning techniques are used to categorize the detected obstacles. The proposed technique has an accuracy of 98% in obstacle detection & an accuracy of 72% in object categorization. The system needs to be trained on larger datasets in order to improve its categorization capability.
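For the categorization stage described in [103], a convolutional network assigns a class to image patches of already-detected obstacles. The toy network below, written with PyTorch, only illustrates that interface (patch in, class scores out); the embedded network, input size and class set actually used in [103] are different.

    import torch
    import torch.nn as nn

    class TinyObstacleNet(nn.Module):
        """A deliberately small CNN for categorizing cropped obstacle patches (schematic only)."""
        def __init__(self, num_classes=5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input crops

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # Example: class scores for a batch of four 64x64 RGB obstacle crops.
    logits = TinyObstacleNet()(torch.randn(4, 3, 64, 64))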

Wang et al. (2014) developed a framework based on RGB-D (red, green, blue & depth) images to detect pedestrians & crosswalks [104]. At first, the Hough transform is applied to extract features from the RGB channels, and then the depth information is used to identify stairs, crosswalks & pedestrians. The identification of stairs going up or down is also performed, along with the measurement of the distance between the user's camera & the stairs. The system attained an accuracy of more than 90% in the identification of crosswalks, stairs & pedestrians. The technique can be improved to detect several further types of obstacles & from different projections.
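The RGB stage described for [104] rests on standard edge detection followed by a Hough transform for line segments, roughly as sketched below with illustrative thresholds; as the paragraph above notes, the depth channel is then used to separate flat crosswalk stripes from receding stairs.

    import cv2
    import numpy as np

    def detect_stripe_lines(frame_bgr):
        """Canny edges plus probabilistic Hough transform; returns candidate line segments
        as (x1, y1, x2, y2) rows. Thresholds are illustrative, not those of [104]."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 80, 160)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                                minLineLength=40, maxLineGap=10)
        return np.empty((0, 4), dtype=int) if lines is None else lines.reshape(-1, 4)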

Critical Challenges:
Some challenges faced by individuals using systems based on RGB-D sensors are listed below. Some depth-based systems are non-portable and cannot be carried by the user. Low-light detection is a problem for many systems; a better camera, such as an infrared camera, should be used for systems such as smart glasses. Prediction of dynamic obstacles is also required along with static obstacles, and the system should focus on complete environment understanding rather than just focusing on big obstacles in an image. Inaccurate results were obtained in some cases due to severely blurred images. Some RGB-D based systems do not work in direct sunlight. The NAVI system can accurately measure distance only up to 3.5 m. Some systems do not give satisfactory performance while detecting downward and upward stairs.

Future Directions:
Further plans need to be made to adopt real-time image-based localization algorithms, as a few places are visited multiple times by a user. Some limitations can be improved by using better color segmentation and floor segmentation techniques. The Kinect sensor does not detect objects under strong lighting; this can be resolved by using a combination of color and depth information. High execution times can be reduced by fusing the 4 algorithms & applying probabilistic approaches. The systems should be extended to outdoor environments as well. Including information such as the count of stairs and color information is also required. The systems need to be trained on larger datasets in order to improve their categorization capability. The techniques can be improved to detect several types of obstacles and from different projections. False results can be reduced by using NFT markers.

• RFID Based:
Chumkamon et al. (2008) build a blind navigation system for indoor environments using RFID tags [105]. The system uses RFID tags to store location information and the user's destination, & a routing server to measure the user's current location relative to the destination by using GPRS. The specialty of the system lies in the fact that it can be used not only by blind persons, but also by tourists, or by firefighters entering a room full of smoke. The system shows promising results, and the proposed device operates on a rechargeable 9 V battery (6 hrs.). There are some delay problems, such as communication delay due to the GPRS modem and voice delay due to file transfer delay from the MMC module. These problems can be reduced by adding some common words to ROM and preloading them.

Mathankumar et al. (2013) proposed a framework that gives directions to visually impaired users to recognize and buy their items in a grocery store without anybody's help [106]. The system uses RFID for identification of products, and audio instructions are used to assist them further inside the supermarket, thereby eliminating queuing headaches. A Zigbee transceiver is used for sending & receiving information, leading to the formation of a convenient environment for blind users. The technique needs to be built on a portable device along with the trolley. Also, there is a need for ultrasonic sensors to avoid collisions among blind users in a supermarket.

Tsirmpas et al. (2015) present a navigation system for visually impaired elderly individuals in [107] for helping them navigate indoor environments. An RFID-based model of an indoor navigation framework, able to guide a sightless disabled senior person safely in a natural real-time environment, is presented. More specifically, a "mapping" process based on interpreting the blueprints of a building is proposed, along with an innovative localization and obstacle avoidance algorithm. In addition, a proper antenna circuit is built so as to enhance the properties of the proposed framework. The 99% success rate is not considered efficient due to other resource limitations. The system needs many improvements, taking into account the limitations of time & human resources, and should be able to accommodate different scenarios multiple times.

Kumar Yelamarthi (2010) used robots to build a navigation system to detect obstacles for visually impaired users [108]. The input is given through ultrasonic & infrared sensors & the output through speakers & vibrating motors. An RFID reader, GPS & analog components are used in building the prototype. The pilot study of the results shows the effectiveness of the system. However, the time to reach a certain destination is different every time the system is operated & is unpredictable; this loosens the interest of the user & demands attention towards refining the technique.

Domingo (2012) provides an overview of the usage of IoT in the field of obstacle detection for visually impaired persons [109]. He also proposed an architecture based on RFID databases. The benefits of using IoT & the different scenarios & challenges faced by visually impaired persons are highlighted. The proposed framework is divided into 3 layers: Perception, Network and Application. The various challenges presented in this paper provide scope for the researchers working in this field.

Rosen Ivanov (2010) developed a low-cost indoor navigation system based on mobile technology, NFC (Near Field Communication), a Java program & low-cost RFID [110]. It enables users to imagine the map of a room & stores this information in RFID tags in WBXML format. The audio messages are recorded in RFID tags in AMR format to build a voice-enabled guidance system for visually impaired users. The application has many advantages: it is low-cost, easily accessible, simple, user-friendly, audio-enabled, & more. The mean time to complete the task of finding a room is 136 s at 1.5 km/hr. Navigational data must be adjusted if the client moves away from the path between two reference points or gets lost; this can be improved by using a mobile phone that supports an electronic compass and accelerometer.
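At their core, the RFID navigation schemes above map each tag ID read by the user's reader to a stored location and a guidance message. The sketch below shows only that lookup step; the tag IDs, coordinates and messages are invented for illustration, and the real systems add routing over GPRS [105] or audio recorded in the tags themselves [110].

    # Hypothetical tag database: ID -> stored location and spoken hint.
    TAG_DB = {
        "E200341201": {"room": "Corridor A", "xy": (2.0, 5.5), "hint": "Lift is four metres ahead"},
        "E200341202": {"room": "Room 101",   "xy": (6.0, 5.5), "hint": "Door is on your right"},
    }

    def on_tag_read(tag_id, speak):
        """Resolve a scanned tag to its stored location and announce the guidance message."""
        entry = TAG_DB.get(tag_id)
        if entry is None:
            speak("Unknown tag, please keep to the marked path")
            return None
        speak(entry["hint"])
        return entry["xy"]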

M. Nassih et al. (2012) develop an obstacle recognition system for visually impaired people based on RFID and also give an overview of RFID-based techniques used in this field [111]. The paper presents how DGPS (Differential Global Positioning System) has fallen short in localizing people and objects and mentions the rise of techniques using RFID. The proposed system combines RFID with the very traditional cane equipped with Braille to build smart canes. The system is difficult to implement.

Critical Challenges:
Several challenges faced by RFID-based systems are discussed below. Communication delay is observed in some systems due to the GPRS modem, and voice delay is seen due to file transfer delay from the MMC module. The time to reach a certain destination is different every time the system is operated & is unpredictable; this loosens the interest of the user & demands attention towards refining the technique. Difficult implementation of RFID-based systems is also a common issue.

Future Directions:
Different types of delays can be reduced by adding some common words to ROM & preloading them. The systems using RFID techniques need to be built on a portable device along with the trolley. There is a need for ultrasonic sensors to avoid collisions among blind users in a supermarket. Navigational data must be adjusted if the client moves away from the path between two reference points or gets lost; this can be improved by using a mobile phone that supports an electronic compass and accelerometer.

3 EVALUATION CRITERIA
Evaluation is an endeavor to appraise the quality of a proposed technique. It is important to evaluate techniques because it becomes easier to decide the most suitable one among many works. Also, the user gets to know how the existing work can be improved based on certain parameters discussed below, & it leads to further expansion of research in that area. There are many metrics to evaluate obstacle detection techniques; some of them are listed below.

A null hypothesis is a general assumption about anything. Testing of the null hypothesis is done to accept or reject it; a contrasting statement to the null hypothesis is called the alternate hypothesis. There are 2 types of errors, called Type 1 & Type 2. Rejecting a correct null hypothesis leads to a Type 1 error, which is also called a False Positive (FP). The False Positive rate (FPR) is the extent to which a certain condition is reported that actually does not exist. The False Negative rate (FNR) is the extent to which a certain condition is missed that actually exists. The True Positive rate (TPR) is the extent of real positives that have been correctly identified. The True Negative rate (TNR) is the extent of real negatives that have been correctly identified. Sensitivity, also known as Recall, is defined as:

no. of TP / no. of (TP + FN)

and Specificity is the no. of TN / no. of (TN + FP). Precision is the quality of being completely correct and is defined as:

no. of TP / no. of (TP + FP)

Accuracy is the nearness of the measured value to the expected value.

While there are many criteria as discussed above to check the efficiency of the proposed techniques for obstacle detection for blind persons, we have evaluated a few techniques on the basis of the accuracy observed in them, as shown in Fig. 2, Fig. 3, Fig. 4 and Fig. 5.
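All of these measures follow directly from the four confusion counts (TP, FP, TN, FN), so they can be reported together by a small helper like the one below; the example counts are invented purely for illustration.

    def detection_metrics(tp, fp, tn, fn):
        """Compute the evaluation measures defined above from raw detection counts."""
        recall      = tp / (tp + fn)                 # sensitivity / TPR
        specificity = tn / (tn + fp)                 # TNR
        precision   = tp / (tp + fp)
        accuracy    = (tp + tn) / (tp + tn + fp + fn)
        return {"recall": recall, "specificity": specificity, "precision": precision,
                "accuracy": accuracy, "FPR": 1 - specificity, "FNR": 1 - recall}

    # Example with made-up counts: 95 hits, 5 misses, 4 false alarms, 96 correct rejections.
    print(detection_metrics(tp=95, fp=4, tn=96, fn=5))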

Fig. 2 Percentage-wise accuracy evaluation of different techniques based on using a camera for taking input. [Bar chart; y-axis: Accuracy (%).]

Fig. 3 Percentage-wise accuracy evaluation of different techniques based on using sensors for taking input. [Bar chart; y-axis: Accuracy (%).]

Fig. 4 Percentage-wise accuracy evaluation of different techniques based on the fusion of camera & sensors. [Bar chart; y-axis: Accuracy (%); techniques compared include W. Elmannai (2018), Kumar (2017), Mocanu (2016), Jin-Hee Lee (2014) and A. Kumar (2011).]
Fig. 5 Percentage-wise accuracy evaluation of different techniques based on RGB-D camera sensors. [Bar chart; y-axis: Accuracy (%).]

4 SUMMARY
In this paper, several technologies used for detecting obstacles for blind persons have been reviewed. The advantages and disadvantages of previous work have been studied thoroughly along with their critical challenges and future directions. The analysis and comparison of several techniques have been done, which may serve as research gaps for upcoming research in this field.
After reviewing many papers on this topic, it is observed that major improvements are required in existing systems so that they can operate accurately in crowded areas such as supermarkets, hospitals, airports etc. Also, some techniques yield accurate results but take more time during execution, thereby making the systems unfit for real-time situations. Different combinations and placements of sensors, along with the user's learning curve, will provide a research platform for making different assistive tools for Visually Impaired (VI) users. Motivating blind users towards new technology itself poses a big challenge. This paper will help in the ongoing research on this topic.

ACKNOWLEDGMENTS
This research work is supported by the Technical Education Quality Improvement Project III (TEQIP III) of MHRD, Government of India, assisted by the World Bank under Grant Number P154523 and sanctioned to UIET, Panjab University Chandigarh (India).

REFERENCES
[1] Whitmarsh, L. "The Benefits of Guide Dog Ownership." (2005): 27-42.
[2] "Dogs Monthly." Great British Dog Survey 2016: 1-28.
[3] "What Type of Cane Should I Use?" n.d. American Foundation for the Blind: Leading the Vision Loss Community. [Online] http://www.visionaware.org/info/everyday-living/essential-skills.
[4] Eugene A. Bourquin, Robert Wall Emerson, Dona Sauerburger, and Janet Barlow. "The Effect of the Color of a Long Cane Used by Individuals Who Are Visually Impaired on the Yielding Behavior of Drivers." Journal of Visual Impairment & Blindness (2017): 401-410.
[5] Do Ngoc Hung, Vo Minh-Thanh, Nguyen Minh-Triet, Quoc Luong Huy, Viet Trinh Cuong. "Design and Implementation of Smart Cane for Visually Impaired People." 6th International Conference on the Development of Biomedical Engineering in Vietnam (BME6). Vietnam: Springer, Singapore, 2017. 249-254.
[6] Sung Yeon Kim, Ki Joon Kim, S. Shyam Sundar, Frank Biocca. "Electronic Cane for Visually Impaired Persons: Empirical Examination of Its Usability and Effectiveness." Human Centric Technology and Service in Smart Space (2012): 71-76.
[7] Kozak, Roman. ""Virtual Cane" for the Blind, Powered by Arduino and Android." 2014. www.romanakozak.com. [Online] https://www.romanakozak.com/virtual-cane/
[8] Sonda Ammar Bouhamed, Jihen Frikha Eleuch, Imen Khanfir Kallel. "New electronic cane for visually impaired people for obstacle detection and recognition." IEEE International Conference on Vehicular Electronics and Safety (ICVES 2012). Istanbul, Turkey: IEEE, 2012.
[9] Varshini, G. Priya. "Smart Mobility Stick." International Journal of Scientific & Engineering Research 6.10 (2015): 89-94.
[10] Rupert R. A. Bourne, Seth R. Flaxman, Tasanee Braithwaite, Maria V. Cicinelli, Aditi Das, Jost B. Jonas, Jill Keeffe, John H. Kempen, Janet Leasher, Hans Limburg, Kovin Naidoo, Konrad Pesudovs, Serge Resnikoff, Alex Silvester, Gretchen A. Stevens, Nina Tahhan. "Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: a systematic review and meta-analysis." The Lancet Global Health 5.9 (2017): e888-e897.
[11] Kirsty Williamson, Steve Wright, Don Schauder, Amanda Bow. "The Internet for the Blind and Visually Impaired." Journal of Computer-Mediated Communication 7.1 (2001).
[12] Tella Adeyinka, James Abedayo Ayeni, and Olukemi Titilola Oleniyi. "Assessment of Information Seeking Behaviour of Physically Challenged Students in Selected Nigerian Tertiary Institutions." Journal of Balkan Libraries Union 5.2 (2017): 24-33.
[13] Kirsty Williamson, Don Schauder, Louise Stockfield, Steve with Application to Visually Impaired Navigational
Wright & Amanda Bow. "The role of the internet for people Assistance." Sensors 17.11 (2017).
with disabilities: issues of access and equity for public [27] Rumin Zhang, Wenyi Wang, Liaoyuan Zeng, Jianwen
libraries." The Australian Library Journal (2001): 157-174. Chen. "A Real-Time Obstacle Detection Algorithm for the
[14] Winkler, Timo Götzelmann and Klaus. "SmartTactMaps: a Visually Impaired Using Binocular Camera." 2017
smartphone-based approach to support blind persons in International Conference in Communications, Signal
exploring tactile maps." PETRA '15 Proceedings of the 8th Processing, and Systems. Springer, Singapore, 2018.
ACM International Conference on PErvasive Technologies 1412-1419.
Related to Assistive Environments. Corfu, Greece: ACM, [28] Kang, Mun-Cheon, et al. "An enhanced obstacle
2015. avoidance method for the visually impaired using
[15] Nicoletta Noceti, Luca Giuliani, Joan Sosa-Garciá, Luca deformable grid." IEEE Transactions on Consumer
Brayda, Andrea Trucco, Francesca Odone. "Designing Electronics 63.2 (2017): 169 - 177.
audio-visual tools to support multisensory disabilities." [29] He Zhang, Cang Ye. "An Indoor Wayfinding System
Recognition, Computer Vision and Pattern. Multimodal based on Geometric Features Aided Graph SLAM for the
Behaviour Analysis in the Wild. Academic Press Elsevier, Visually Impaired." IEEE Transactions on Neural Systems
2019. 79-102. and Rehabilitation Engineering 25.9 (2017): 1592 - 1604.
[16] Dr.P.C.Jain, K.P.Vijaygopalan. RFID and Wireless Sensor [30] Kin Leong Ho, Paul Newman. "Loop closure detection in
Networks. CDAC Noida: Proceedings of ASCNT, 2010. SLAM by combining visual and spatial appearance."
[17] Domingo, Mari Carmen. "An overview of the Internet of Robotics and Autonomous Systems 54.9 (2006): 740-749.
Things for people with disabilities." Journal of Network and [31] Trung-Kien Dao, Thanh-Hai Tran, Thi-Lan Le, Hai Vu,
Computer Applications 35 (2011): 584-596. Viet-Tung Nguyen, Dang-Khoa Mac, Ngoc-Diep Do,
[18] Alberto Rodríguez, J. Javier Yebes, Pablo F. Alcantarilla, Thanh-Thuy Pham. "Indoor navigation assistance system
Luis M. Bergasa, Javier Almazán and Andrés Cela. for visually impaired people using multimodal
"Assisting the Visually Impaired: Obstacle Detection and technologies." 14th International Conference on Control,
Warning System by Acoustic Feedback." SENSORS New Automation, Robotics and Vision (ICARCV). Phuket,
Trends towards Automatic Vehicle Control and Perception Thailand: IEEE, 2016.
–Systems (2012): 17476-17496. [32] Kit Yan Chan, Ulrich Engelke, Nimsiri Abhayasinghe. "An
[19] Senem Kursun Bahadir, Vladan Koncar, Fatma Kalaoglu. edge detection framework conjoining with IMU data for
"Wearable obstacle detection system fully integrated to assisting indoor navigation of visually impaired persons."
textile structures for visually impaired people." Sensors Expert Systems With Applications (2016).
and Actuators A: Physical 179 (2012): 297-311. [33] Gaurav Kumar, Pradeep Kumar Bhatia. "A Detailed
[20] Adriano Mancini, Emanuele Frontoni, Primo Zingaretti,. Review of Feature Extraction in Image Processing
"Mechatronic System to Help Visually Impaired Users Systems." Fourth International Conference on Advanced
During Walking and Running." IEEE Transactions on Computing & Communication Technologies. Rohtak,
Intelligent Transportation Systems 19.2 (2018): 649 - 660. India: IEEE, 2014.
[21] Bogdan Mocanu, Ruxandra Tapu, Titus Zaharia. "DEEP- [34] Shoroog Khenkar, Hanan Alsulaiman, Shahad Ismail,
SEE FACE: A Mobile Face Recognition System Dedicated Alaa Fairaq, Salma Kammoun Jarraya, and Hanêne Ben-
to Visually Impaired People." IEEE Access 6 (2018): Abdallah. "ENVISION: Assisted Navigation of Visually
51975 - 51985. Impaired Smartphone Users." Procedia Computer Science
[22] Nawin Somyat, Teepakorn Wongsansukjaroen, Wuttinan (2016): 128-135.
Longjaroen, Songyot Nakariyakul. "NavTU: Android [35] Mohane, Vikky and Chetan Gode. "Object recognition for
Navigation App for Thai People with Visual Impairments." blind people using portable camera." World Conference
2018 10th International Conference on Knowledge and on Futuristic Trends in Research and Innovation for Social
Smart Technology (KST). Chiang Mai, Thailand: IEEE, Welfare (Startup Conclave). Coimbatore, India: IEEE,
2018. 2016.
[23] A. Jindal, N. Aggarwal, S. Gupta. "An Obstacle Detection [36] Aravinda S. Rao, Jayavardhana Gubbi, Marimuthu
Method for Visually Impaired Persons by Ground Plane Palaniswami and Elaine Wong. "A vision-based system to
Removal Using Speeded-Up Robust Features and Gray detect potholes and uneven surfaces for assisting blind
Level Co-Occurrence Matrix." Pattern Recognition and people." IEEE International Conference on
Image Analysis 28.2 (2018): 288-300. Communications (ICC). Kuala Lumpur, Malaysia: IEEE,
[24] Simon Meers, Koren Ward. "A Vision System for Providing 2016.
3D Perception of the Environment via Transcutaneous [37] Lukas Everding, Lennart Walger, Viviane S. Ghaderi, and
Electro-Neural Stimulation." Proceedings of the Eighth Jorg Conradt. "A Mobility Device for the Blind with
International Conference on Information Visualisation Improved vertical resolution using dynamic vision
(IV‘04). IEEE, 2004. sensors." IEEE 18th International Conference on e-Health
[25] Yueng Delahoz, Miguel A. Labrador. "A Real-Time Networking, Applications and Services (Healthcom).
Smartphone-Based Floor Detection System For The Munich, Germany: IEEE, 2015.
Visually Impaired." 2017 IEEE International Symposium [38] R. TAPU, B. MOCANU and T. ZAHARIA. "Real time
on Medical Measurements and Applications (MeMeA). static/dynamic obstacle detection for visually impaired
Rochester, MN, USA: IEEE, 2017. persons." 2014 IEEE International Conference on
[26] Ruxandra Tapu, Bogdan Mocanu, Titus Zaharia. "DEEP- Consumer Electronics (ICCE). Las Vegas, NV, USA:
SEE: Joint Object Detection, Tracking and Recognition IEEE, 2014. 394-395.

[39] R. Gnana Praveen, Roy P Paily. "Blind Navigation [51] Gonzalo Baez, Pablo Prieto and Fernando A Auat Cheein.
Assistance for Visually Impaired Based on Local Depth "3D vision-based handheld system for visually impaired
Hypothesis from a Single Image." International people: preliminary results on echo-localization using
Conference on DESIGN AND MANUFACTURING, structured light sensors." Biomedical Physics &
IConDM 2013. ELSEVIER, 2013. 351 – 360. Engineering Express 4.4 (2018).
[40] Ruxandra Tapu, Bogdan Mocanu, Andrei Bursuc, Titus [52] Nabila Shahnaz Khan, Shusmoy Kundu, Sazid Al Ahsan,
Zaharia. "A Smartphone-Based Obstacle Detection and Moumita Sarker, Muhammad Nazrul Islam. "An Assistive
Classification System for Assisting Visually Impaired System of Walking for Visually Impaired." 4th International
People." International Conference on Computer Vision Conference on Computer, Communication, Chemical,
Workshops. Sydney, NSW, Australia: IEEE, 2013. 444- Material and Electronic Engineering (IC4ME2-2018).
451. Bangladesh: IEEE, 2018.
[41] Ali Israr, Olivier Bau, Seung-Chan Kim, Ivan Poupyrev. [53] Tayla Froneman, Dawie van den Heever, Kiran Dellimore.
"Tactile feedback on flat surfaces for the visually "Development of a wearable support system to aid the
impaired." CHI EA '12 CHI '12 Extended Abstracts on visually impaired in independent mobilization and
Human Factors in Computing Systems. Austin, Texas, navigation." 2017 39th Annual International Conference of
USA: ACM New York, NY, USA ©2012, 2012. 1571-1576. the IEEE Engineering in Medicine and Biology Society
[42] Lee, Chia-Hsiang, Yu-Chi Su and Liang-Gee Chen. "An (EMBC). Seogwipo, South Korea: IEEE, 2017.
intelligent depth-based obstacle detection system for [54] Van-Nam Hoang, Thanh-Huong Nguyen, Thi-Lan Le,
visually-impaired aid applications." 13th International Thanh-Hai Tran, Tan-Phu Vuong, Nicolas Vuillerme.
Workshop on Image Analysis for Multimedia Interactive "Obstacle detection and warning system for visually
Services. Dublin, Ireland: IEEE, 2012. impaired people based on electrode matrix and mobile
[43] D. K. Liyanage, M. U. S. Perera. "Optical flow based Kinect." Vietnam Journal of Computer Science 4.2 (2017):
obstacle avoidance for the visually impaired." 2012 IEEE 71-83.
Business, Engineering & Industrial Applications [55] Shahu, Desar, et al. "A low-cost mobility monitoring
Colloquium (BEIAC). Kuala Lumpur, Malaysia: IEEE, system for visually impaired users." International
2012. 284-289. Conference on Smart Systems and Technologies (SST).
[44] Saeid Fazli, Hajar Mohammadi Dehnavi, Payman Osijek, Croatia: IEEE, 2017.
Moallem. "A robust negative obstacle detection method [56] Bruno Andò, Salvatore Baglio, Cristian O. Lombardo,
using seed-growing and dynamic programming for Vincenzo Marletta. "Smart Multisensor Strategies for
visually-impaired/blind persons." Optical Review 18.6 Indoor Localization." Mobility of Visually Impaired People
(2011): 415–422. (2017): 585-595.
[45] Chieh-Li Chen, Yan-Fa Liao, Chung-Li Tai. "Image-to- [57] A.Karimi, PedramGharani & Hassan. "Context-aware
MIDI mapping based on dynamic fuzzy color obstacle detection for navigation by visually impaired."
segmentation for visually impaired people." Pattern Image and Vision computing 64 (2017): 103-115.
Recognition Letters 32.4 (2014): 549-560. [58] david Zhou, yonggao yang, and hanbing yan. "A Smart
[46] Paulo Costa, Hugo Fernandes, Verónica Vasconcelos, Virtual Eye- Mobile System for the Visually Impaired."
Paulo Coelho, oão BarrosoLeontios Hadjileontiadis. IEEE Potentials 35.6 (2016): 13-20.
"Landmarks Detection to Assist the Navigation of Visually [59] Jee-Eun Kim, Masahiro Bessho, Shinsuke Kobayashi,
Impaired People." Human-Computer Interaction. Towards Noboru Koshizuka, Ken Sakamura. "Navigating Visually
Mobile and Intelligent Interaction Environments 6763 Impaired Travelers in a Large Train Station Using
(2011): 293-300. Smartphone and Bluetooth Low Energy." SAC '16
[47] En Peng, Patrick Peursum, Ling Li & Svetha Venkatesh. Proceedings of the 31st Annual ACM Symposium on
"A Smartphone-Based Obstacle Sensor for the Visually Applied Computing. Pisa, Italy: ACM New York,USA,
Impaired." International Conference on Ubiquitous 2016. 604-611.
Intelligence and Computing. Lecture Notes in Computer [60] Chaitali Kishor Lakde, Dr. Prakash S. Prasad. "Navigation
Science. Springer, Berlin, Heidelberg, 2010. 590-604. system for visually impaired people." 2015 International
[48] NUR SYAZREEN AHMAD, NG LAI BOON and PATRICK Conference on Computation of Power, Energy,
GOH. "Challenged, Multi-Sensor Obstacle Detection Information and Communication (ICCPEIC). Chennai,
System Via Model-Based State-Feedback Control in India: IEEE, 2015. 93-98.
Smart Cane Design for the Visually." IEEE Access. vol. 6 [61] 82B. Ando, S. Baglio, V. Marletta, A. Valastro. "A Haptic
(2018): 64182-64192. Solution to Assist Visually Impaired in Mobility Tasks." :
[49] Robert K. Katzschmann, Brandon Araki, Daniela Rus. IEEE Transactions on Human-Machine Systems 45.5
"Safe Local Navigation for Visually Impaired Users With a (2015): 641 - 646.
Time-of-Flight and Haptic Feedback Device." IEEE [62] Md. Ashraf Uddin, Ashraful Hug Suny. "Shortest Path
Transactions on Neural Systems and Rehabilitation Finding and Obstacle Detection for Visually Impaired
Engineering 26.3 (2018): 583 - 593. People Using Smart Phone." 2nd Int'l Conf. on Electrical
[50] Rajesh Kannan Megalingam, Souraj Vishnu, Vishnu Engineering and Information & Communication
Sasikumar, Sajikumar Sreekumar. "Autonomous Path Technology (ICEEICT) 2015. Dhaka, Bangladesh: IEEE,
Guiding Robot for Visually Impaired People." Cognitive May 2015.
Informatics and Soft Computing.Advances in Intelligent [63] he Wang, Hong Liu, Xiangdong Wang, Yueliang Qian.
Systems and Computing,. Springer, Singapore, August "Segment and Label Indoor Scene Based on RGB-D for
2018. 257-266. the Visually Impaired." International Conference on

Multimedia Modeling, MMM 2014. Springer, Cham, 2014. Navigation in Blindness." Mobility of Visually Impaired
449-460. People (2017): 167-200.
[64] Cheng-Lung Lee, Chih-Yung Chen, Peng-Cheng Sung, [78] Alejandro R. García Ramirez, Israel González-Carrascor,
Shih-Yi Lu. "Assessment of a simple obstacle detection Gustavo Henrique Jasper, Amarilys Lima Lopez, Jose
device for the visually impaired." Applied Ergonomics 45.4 Luis Lopez-Cuadrado, Angel García-Crespo. "Towards
(2013). Human Smart Cities: Internet of Things for sensory
[65] Haruyama, Madoka Nakajima and Shinichiro. "New indoor impaired individuals." Computing 99.1 (2017): 107-126.
navigation system for visually impaired people using [79] Priyan Malarvizhi Kumar, Ushadevi Gandhi1, R.
visible light communication." EURASIP Journal on Varatharajan, Gunasekaran Manogaran, Jidhesh R.,
Wireless Communications and Networking (2013). Thanjai Vadivel. "Intelligent face recognition and
[66] Wai L. Khoo, Eric L. Seidel, and Zhigang Zhu. "Designing navigation system using neural learning for smart security
a Virtual Environment to Evaluate Multimodal Sensors for in Internet of Things." Cluster Computing (2017): 1-12.
Assisting the Visually Impaired." ICCHP 2012.Computers [80] Sandesh Chinchole, Samir Patel. "Artificial intelligence
Helping People with Special Needs 7383 (2012): 573-580. and sensors based assistive system for the visually
[67] B. Mustapha, A. Zayegh, R.K. Begg. "Multiple sensors impaired people." International Conference on Intelligent
based obstacle detection system." 4th International Sustainable Systems (ICISS). Palladam, India: IEEE,
Conference on Intelligent and Advanced Systems 2017. 16-19.
(ICIAS2012). Kuala Lumpur, Malaysia: IEEE, 2012. 562- [81] Mr. Muneshwara M S, Dr. Lokesh A, Mrs. Swetha M S3,
566. Dr. Thunagmani M. "Ultrasonic and Image Mapped Path
[68] Hotaka Takizawa, Shotaro Yamaguchi, Mayumi Aoyagi, Finder for the Blind People in the Real Time System."
Nobuo Ezaki and Shinji Mizuno. "Kinect cane: Object IEEE International Conference on Power, Control, Signals
recognition aids for the visually impaired." 2013 6th and Instrumentation Engineering (ICPCSI-2017). Chennai,
International Conference on Human System Interactions India, sept 2017. 964-969.
(HSI). Sopot, Poland: IEEE, 2013. 473-478. [82] Rabia Jafri, Marwa Mahmoud Khan. "Obstacle Detection
[69] Gallagher, Thomas, et al. "Indoor positioning system and Avoidance for the Visually Impaired in Indoors
based on sensor fusion for the Blind and Visually Environments Using Google‘s Project Tango Device."
Impaired." 2012 International Conference on Indoor ICCHP 2016: Computers Helping People with Special
Positioning and Indoor Navigation, 13-15th November Needs 9759 (2016): 179-185.
2012. Sydney, NSW, Australia: IEEE, 2012. [83] Daniel Koester, Björn Lunt, Rainer Stiefelhagen. "Zebra
[70] Mounir Bousbia-Salah, Maamar Bettayeb, Allal Larbi. "A Crossing Detection from Aerial Imagery Across
Navigation Aid for Blind People." Journal of Intelligent & Countries." International Conference on Computers
Robotic Systems SPRINGER 64.3-4 (2011): 387–400. Helping People with Special Needs. Springer, Cham,
[71] Bruno Andò, Salvatore Baglio, Salvatore La Malfa, and 2016. 27-34.
Vincenzo Marletta. "A Sensing Architecture for Mutual [84] Rai Munoz, Xuejian Rong, Yingli Tian. "DEPTH-AWARE
User-Environment Awareness Case of Study: A Mobility INDOOR STAIRCASE DETECTION AND RECOGNITION
Aid for the Visually Impaired." IEEE Sensors Journal 11.3 FOR THE VISUALLY IMPAIRED." IEEE International
(2011): 634-640. Conference on Multimedia & Expo Workshops (ICMEW).
[72] A. M. Kassim, M. H. Jamaluddin, M. R. Yaacob, N. S. N. Seattle, WA, USA: IEEE, 2016 . 1-6.
Anwar, Z. M. Sani and A. Noordin. "Design and [85] Lorena Parra, Sandra Sendra, José Miguel Jiménez,
Development of MY 2nd EYE for Visually Impaired Jaime Lloret. "Multimedia sensors embedded in
Person." IEEE Symposium on Industrial Electronics and smartphones for ambient assisted living and e-health."
Applications (ISIEA2011). Langkawi, Malaysia: IEEE, Multimedia Tools and Applications 21 (2015): 13271–
September 25-28, 2011. 700-703. 13297.
[73] Okayasu, Mitsuhiro. "Newly developed walking apparatus [86] Shripad Bhatlawande, Amar Sunkari, Manjunatha
for identification of obstructions by visually impaired Mahadevappa, Jayanta Mukhopadhyay, Mukul Biswas,
people." Journal of Mechanical Science and Technology Debabrata Das & Somedeb Gupta (2014): Electronic
24.6 (2010). Bracelet and Vision Enabled Waist-belt for Mobility of
[74] Maria Cornacchia, Burak Kakillioglu, Yu Zheng, Senem Visually Impaired People, Assistive Technology: The
Velipasalar. "Deep Learning-Based Obstacle Detection Official Journal of RESNA, DOI:
and Classification With Portable Uncalibrated Patterned 10.1080/10400435.2014.915896.
Light." IEEE Sensors Journal 18.20 (2018): 8416 - 8425.0. [87] Larisa Dunai Dunai, Ismael Lengua Lengua, Ignacio
[75] M.Vanitha, A. Rajiv, K. Elangovan, S.Vinoth Kumar. "A Tortajada, Fernando Brusola Simon. "Obstacle detectors
Smart walking stick for visually impaired using Raspberry for visually impaired people." 2014 International
pi." International Journal of Pure and Applied Mathematics Conference on Optimization of Electrical and Electronic
119.16 (2018): 3485-3490. Equipment (OPTIM). Bran, Romania: IEEE, 2014. 809-
[76] Wafa M. Elmannai, Khaled M. Elleithy. "A Novel Obstacle 816.
Avoidance System for Guiding the visually impaired [88] Lee, JH. Kim, D. & Shin, BS. Multimed Tools Appl
through the use of fuzzy control logic." 15th IEEE Annual (2016). Springer US. HYPERLINK "Multimedia Tools
Consumer Communications & Networking Conference and Applications‖ Multimedia Tools and Applications 75:
(CCNC). Las Vegas, NV, USA: IEEE, 2018. 15275. https://doi.org/10.1007/s11042-014-2385-4.
[77] Daniel-Robert Chebat, Vanessa Harrar, Ron Kupers, [89] Hotaka Takizawa, Shotaro Yamaguchi, Mayumi Aoyagi,
Shachar Maidenbaum, Amir Amedi and Maurice Ptito. Nobuo Ezaki and Shinji Mizuno. "Kinect cane: Object
"Sensory Substitution and the Neural Correlates of recognition aids for the visually impaired." 2013 6th

International Conference on Human System Interactions Wearable Depth Sensor. In: Agapito L., Bronstein M.,
(HSI). Sopot, Poland: IEEE, 2013. 473-478. Rother C. (eds) Computer Vision - ECCV 2014
[90] Shrinivas Pundlik, Matteo Tomasi, Gang Luo. "Collision Workshops. ECCV 2014. Lecture Notes in Computer
Detection for Visually Impaired from a Body-Mounted Science, vol. 8927. Springer, Cham.
Camera." 2013 IEEE Conference on Computer Vision and [103] Matteo Poggi, Stefano Mattoccia. "A Wearable
Pattern Recognition Workshops. IEEE, 2013. 41-47. Mobility Aid for the Visually Impaired based on embedded
[91] Brian F. G. Katz, Slim Kammoun, Gae ¨tan Parseihian, 3D Vision and Deep Learning." IEEE Symposium on
Olivier Gutierrez, Adrien Brilhault , Malika Auvray , Computers and Communication (ISCC). Messina, Italy:
Philippe Truillet, Michel Denis, Simon Thorpe, Christophe IEEE, 2016.
Jouffrais,Simon Thorpe ,Christophe Jouffrais. "NAVIG: [104] Shuihua Wang, Hangrong Pan, Chenyang Zhang,
augmented reality guidance system for the visually Yingli Tian. "RGB-D image-based detection of stairs,
impaired." Virtual Reality 16.4 (2012): 253-269. pedestrian crosswalks and traffic signs." Journal of Visual
[92] Amit Kumar, Rusha Patra, M. Manjunatha, J. Communication and Image Representation 25.2 (2014):
Mukhopadhyay and A. K. Majumdar. "An Electronic Travel 263-272.
Aid for Navigation of Visually Impaired Persons." Third [105] Sakmongkon Chumkamon, Peranitti
International Conference on Communication Systems and Tuvaphanthaphiphat, Phongsak Keeratiwintakorn. "A blind
Networks (COMSNETS 2011). Bangalore, India: IEEE, navigation system using RFID for indoor environments."
2011. 2008 5th International Conference on Electrical
[93] Pelin Angin, Bharat Bhargava, Sumi Helal. "A Mobile- Engineering/Electronics, Computer, Telecommunications
Cloud Collaborative Traffic Lights Detector for Blind and Information Technology. Krabi, Thailand: IEEE, 2008.
Navigation." Eleventh International Conference on Mobile 765-768).
Data Management. Kansas City, MO, USA: IEEE, 2010. [106] M. Mathankumar, N.Sugandhi. "A low cost smart
396-401. shopping facilitator for visually impaired." 2013
[94] Kailun Yang, Kaiwei Wang, Luis M. Bergasa, Eduardo International Conference on Advances in Computing,
Romera, Weijian Hu, Dongming Sun, Junwei Sun, Ruiqi Communications and Informatics (ICACCI). Mysore, India:
Cheng, Tianxue Chen, Elena López. Sensors IEEE, 2013. 1088-1092.
(Basel) 2018 May; 18(5): 1506. Published online 2018 [107] Charalampos Tsirmpas, Alexander Rompas, Orsalia
May 10. doi: 10.3390/s18051506. Fokou, Dimitris Koutsouris. "An indoor navigation system
[95] Bai, Jinqiang, et al. "Smart guiding glasses for visually for visually impaired and elderly people based on Radio
impaired people in indoor environment." IEEE Frequency Identification (RFID)." Information Sciences
Transactions on Consumer Electronics 63.3 (2017): 258 - 320 (2015): 288-305.
266. [108] Kumar Yelamarthi, Daniel Haas, Daniel Nielsen,
[96] W. C. S. S. Simões, V. F. de Lucena. "Blind user wearable Shawn Mothersell. "RFID and GPS integrated navigation
audio assistance for indoor navigation based on visual system for the visually impaired." 2010 53rd IEEE
markers and ultrasonic obstacle detection." 2016 IEEE International Midwest Symposium on Circuits and
International Conference on Consumer Electronics Systems. Seattle, WA, USA: IEEE, 2010.
(ICCE). Las Vegas, NV, USA: IEEE, 2016. [109] CarmenDomingo, Mari. "An overview of the Internet of
[97] Vítor Filipe, Nuno Faria, Hugo Paredes, Hugo Fernandes, Things for people with disabilities." Journal of Network &
João Barroso. "Assisted Guidance for the Blind Using the Computer Applications 35.2 (2012): 584-596.
Kinect Device." DSAI 2016 Proceedings of the 7th [110] Ivanov, Rosen. "Indoor navigation system for visually
International Conference on Software Development and impaired." CompSysTech '10 Proceedings of the 11th
Technologies for Enhancing Accessibility and Fighting International Conference on Computer Systems and
Info-exclusion. Vila Real, Portugal: ACM, 2016. 13-19. Technologies and Workshop for PhD Students in
[98] Li B., Muñoz J.P., Rong X., Xiao J., Tian Y., Arditi A. Computing on International Conference on Computer
(2016) ISANA: Wearable Context-Aware Indoor Systems and Technologies. Sofia, Bulgaria: ACM New
Assistive Navigation with Obstacle Avoidance for the York, NY, USA, 2010. 143-149.
Blind. In: Hua G., Jégou H. (eds) Computer Vision – [111] M. Nassih, I. Cherradi, Y. Maghous, B. Ouriaghli and
ECCV 2016 Workshops. ECCV 2016. Lecture Notes in Y. Salih-Alj. "Obstacles Recognition System for the Blind
Computer Science, vol 9914. Springer, Cham. People Using RFID." Sixth International Conference on
[99] Young HoonLee, GérardMedioni. "RGB-D camera based Next Generation Mobile Applications, Services and
wearable navigation system for the visually impaired." Technologies. Paris, France: IEEE, 2012. 60-63.
Computer Vision and Image Understanding 149 (2016): 3-
20. Preetjot Kaur is currently a PhD research
[100] Huy-Hieu Pham, Thi-Lan Le, Nicolas Vuillerme. "Real- scholar at UIET, Panjab University Chandigarh.
Time Obstacle Detection System in Indoor Environment She received her M.Tech degree in Computer
for the Visually Impaired Using Microsoft Kinect Sensor." Engineering from Punjabi University Patiala in
Journal of Sensors (2016): 1-13.. 2016. She is a B.Tech in Computer Science
[101] A. Aladrén, G. López-Nicolás, Luis Puig, Josechu J. Engineering from GNDEC Ludhiana in 2014.
Guerrero. "Navigation Assistance for the Visually Impaired Ms. Kaur has a teaching experience of 2.5yrs as an Assistant
Using RGB-D Sensor With Range Expansion." IEEE Professor in the School of Computer Science Engineering at
Systems Journal 10.3 (2014). Chitkara University Punjab. She has published high impact
[102] Pérez-Yus A., López-Nicolás G., Guerrero J.J. factor technical papers in International Journals. Also, she has
(2015) Detection and Modelling of Staircases Using a presented her research work at various conferences of

National and International repute. Her current research interests include Digital Image Processing, Deep Learning, and Cognitive Radios.

Roopali Garg is presently serving as Associate Professor, Panjab University, Chandigarh. She is a former Coordinator of the Department of IT, UIET, Panjab University, Chandigarh. She holds a PhD in Electronics and Communication Engineering, and did her M.Tech in Electronics and B.Tech in Electronics & Electrical Communication Engineering from PEC University of Technology, Chandigarh. She has been in academics for 15 years and has rich teaching and research experience. Dr. Garg has over 60 technical papers published in indexed high impact-factor International Journals and International Conferences. She has authored two books, namely Polarization Mode Dispersion and Digital Electronics, and has several book chapters to her credit. Her focused research areas are Wireless Communication, Optical Communication, and Cognitive Radio. She is active in research, supervising PhD and M.Tech research scholars, and has guided nearly 20 M.Tech theses. In addition, she has delivered talks and lectures at various conferences and workshops at different institutes, and has reviewed many research papers for various reputed journals. She is an active Life Member of several International and National technical organizations such as ISTE, IETE, and IEE. She was awarded the Administrator's Gold Medal by the Chandigarh Administration in 2000 for her excellent performance in curricular, co-curricular and extra-curricular activities.