Article
An ARCore Based User Centric Assistive Navigation
System for Visually Impaired People
Xiaochen Zhang 1 , Xiaoyu Yao 1 , Yi Zhu 1,2 and Fei Hu 1, *
1 Department of Industrial Design, Guangdong University of Technology, Guangzhou 510006, China;
xzhang@gdut.edu.cn (X.Z.); yaoxiaoyu@mail2.gdut.edu.cn (X.Y.); zhuyi@gdut.edu.cn or
zhuyi@gatech.edu (Y.Z.)
2 School of Industrial Design, Georgia Institute of Technology, GA 30332, USA
* Correspondence: hufei@gdut.edu.cn

Received: 17 February 2019; Accepted: 6 March 2019; Published: 9 March 2019

Featured Application: The navigation system can be implemented in smartphones. With affordable haptic accessories, it helps visually impaired people navigate indoors without using GPS or wireless beacons. In the meantime, the advanced path planning in the system benefits visually impaired navigation, since it minimizes the possibility of collision in application. Moreover, the haptic interaction allows a human-centric, real-time delivery of motion instructions, which improves on the conventional turn-by-turn waypoint-finding instructions. Since the system prototype has been developed and tested, a commercialized application that helps visually impaired people in real life can be expected.

Abstract: In this work, we propose an assistive navigation system for visually impaired people
(ANSVIP) that takes advantage of ARCore to acquire robust computer vision-based localization.
To complete the system, we propose adaptive artificial potential field (AAPF) path planning
that considers both efficiency and safety. We also propose a dual-channel human–machine
interaction mechanism, which delivers accurate and continuous directional micro-instruction via a
haptic interface and macro-long-term planning and situational awareness via audio. Our system
user-centrically incorporates haptic interfaces to provide fluent and continuous guidance superior
to the conventional turn-by-turn audio-guiding method; moreover, the continuous guidance keeps
the path under complete control in avoiding obstacles and risky places. The system prototype
is implemented with full functionality. Unit tests and simulations are conducted to evaluate the
localization, path planning, and human–machine interactions, and the results show that the proposed
solutions are superior to those of the present state-of-the-art solutions. Finally, integrated tests are
carried out with low-vision and blind subjects to verify the proposed system.

Keywords: assistive technology; ARCore; user centric design; navigation aids; haptic interaction;
adaptive path planning; visual impairment; SLAM

1. Introduction
According to statistics presented by the World Health Organization in October 2017, there are more
than 253 million visually impaired people worldwide. Compared to normally sighted people, they
are unable to access sufficient visual clues of the surroundings due to weakness in visual perception.
Consequently, visually impaired people face challenges in numerous aspects of daily life, including
when traveling, learning, entertaining, socializing, and working.
Visually impaired people have a strong dependency on travel aids. Self-driving vehicles achieved
SAE (Society of Automotive Engineers) Level 3 some time ago, which allows the vehicle to make

Appl. Sci. 2019, 9, 989; doi:10.3390/app9050989 www.mdpi.com/journal/applsci



decisions autonomously based on machine cognition. Autonomous robots and drones have also been dispatched for unmanned tasks. Obviously, the advances in robotics, computer vision, GIS (Geographic Information System), and sensors allow for integrated smart systems to perform mapping, positioning, and decision-making while executing in urban areas.

Human beings have the ability to interpret the surrounding environment using sensory organs. Over 90% of the information transmitted to the brain is visual, and the brain processes images tens of thousands of times faster than texts. This explains why human beings are called visual creatures; when traveling, visually impaired people have to face difficulties imposed by their visual impairment [1].

Traditional assistive solutions for visually impaired people include white canes, guide dogs, and volunteers. However, each of these solutions has its own restrictions. They either work only under certain situations, with limited capability, or are expensive in terms of extra manpower.

Modern assistive solutions for visually impaired people borrow power from mobile computing, robotics, and autonomous technology. They are implemented in various forms such as mobile terminals, portable computers, wearable sensor stations, and indispensable accessories. Most of these devices use computer vision or GIS/GPS to understand the surroundings, acquire a real-time location, and use turn-by-turn commands to guide the user. However, turn-by-turn commands are difficult for users to follow.

In this work, we propose an assistive navigation system for visually impaired people (ANSVIP, see Figure 1) using ARCore area learning; we introduce an adaptive artificial potential field path planning mechanism that generates smooth and safe paths; we design a user-centric dual-channel interaction that uses haptic sensors to deliver real-time traction information to the user. To verify the design of the system, we have the proposed system prototype implemented with full functionality, and have the prototype tested with blindfolded and blind subjects.

Figure 1. The components of the proposed ANSVIP system.

2. Related Works

2.1. Assistive Navigation System Frameworks

Recent advances in sensor technology support the design and integration of portable assistive navigation. Katz [2] designed an assistive device that aids in macro-navigation and micro-obstacle
avoidance. The prototype was a backpack-carried laptop equipped with a stereo camera and audio sensors. Zhang [3] proposed a hybrid assistive system built around a laptop with a head-mounted web camera and a belt-mounted depth camera along with an IMU (Inertial Measurement Unit). They used a robotics operating system to connect and manage the devices and ultimately help visually impaired people when roaming indoors. Ahmetovic [4] used a smartphone as the carrier of the system, but a considerable number of beacons had to be deployed beforehand to support the system. Furthermore, Bing [5] used a Project Tango Tablet with no extra sensors to implement their proposed system. The system allowed the on-board depth sensor to support area learning, but the computational burden was heavy. Zhu [6] proposed and implemented the ASSIST system on a Project Tango smartphone. However, with the arrival of the more advanced Google ARCore, Google Project Tango (introduced in 2014) has been deprecated [6] since 2017, and smartphones with the required capability are no longer available. To the best of our knowledge, the proposed ANSVIP system is the first assistive human–machine system using an ARCore-supported commercial smartphone.

2.2. Positioning and Tracking


Most indoor positioning and tracking technologies were borrowed from autonomous robotics and
computer vision. Methods using direct sensing and dead reckoning [7] are no longer qualified options.
Yang [8] proposed a Bluetooth RSSI-based sensing framework to localize users in large public venues.
The particle filter was applied to localize the subject. Jiao [9] used an RGB-D camera to reconstruct a
semantic map to support indoor positioning. They used an artificial neural network on reflectivity to
improve the accuracy of 3D localization. Moreover, Xiao [10,11] and Zhang [3,12] used hybrid sensors
to carry out fast visual odometry and feature-based loop closure in localization, while Zhu [6] and
Bing [5,13] used area learning (a pattern recognition method) to bond subjects with areas of interest.

2.3. Path Planning


As the most popular path planning in robotics, A* is also extensively used by assistive systems.
Xiao [10,11] and Zhang [3] used A* to connect areas of interest. Bing [5] applied greedy path planning
in the label-intensive semantic map. Meanwhile, Zhao [14] suggested the potential field as a candidate for local planning. Paths in References [15,16] were planned on well-labeled maps using global optimal methods such as Dijkstra [17] and its varieties. Most existing path-planning methods generate sharp turn-by-turn paths connecting corner or feature anchors. These paths are good for robots but impose an unbearable experience on humans.

2.4. Human–Machine Interaction


Most present systems use audio to deliver turn-by-turn directional instructions [5,11,18].
However, the human brain has its own understanding of positioning, direction, and velocity [19,20],
which is not robot-like. Some recent works proposed using haptic interfaces in obstacle
avoidance [1,3,6,10,13,21–23]. However, due to the restriction of turn-by-turn path planning, haptic
interaction is unlikely to be used as a continuous path-following interface in assistive systems.
Fernandes [7] used perceptual 3D audio as a solution; however, it was not easy to learn, and its
accuracy in real complex scenes needs to be improved. Ahmetovic [15] conducted a data-driven
analysis, which pointed out that turn-by-turn audio instructions have considerable drawbacks due to
latency in interaction and limited information per instruction. Guerreiro [24] stated that turn-by-turn
instructions may confuse visually impaired people’s navigation behaviors, and result in, for example,
deviating from the intended path. These behaviors lead to errors, confusion and longer recovery times
back to the right track. Such behaviors also emphasize that more effective real-time feedback interfaces
are necessary. Ahmetovic [15] studied the factors that cause rotation error in turn-by-turn navigation.
Rotation errors accompanying audio instructions significantly affect user experiences in navigation.
Rector [22] compared the accuracy of three different human guidance interfaces and provided insights
into the design of multimodal feedback mechanisms.

3. Design of ANSVIP

3.1. Information Flow in System Design

Most tasks in real life are difficult to accomplish using a single sensor or functional unit. Instead, they require collaboration (cooperation, competition, or coordination) from multiple functional units or sensing agents in intelligent systems to make the most favorable final synthesis. In such a collaborative context, each functional unit has its own duty and cooperates via the agreed channel, thereby maximizing the effectiveness of shared resources to achieve the goal.

Specifically, an assistive navigation system is composed of two parts: the cognitive system and the guidance system. The cognitive system aims to understand the world, including the micro-scale surroundings and the macro-scale scene; the guidance system aims to properly deliver the micro-scale guidance command, the macro-scale plan, as well as semantic scene understanding, to the user. The collaboration of the two allows the machine to understand the scene, and then the user acquires understanding from the machine, as shown in Figure 2.

Figure 2. The information flow in the assistive system: The assistive system core aims to understand the world and translate the essential understanding to the user.

The proposed ANSVIP uses an ARCore-supported smartphone as the major carrier and uses ARCore-based SLAM (Simultaneous Localization and Mapping) to track motion so as to create a scene understanding along with mapping. The human-scale understanding of motion and space is processed to produce a short and safe path towards the goal. The corresponding micro-motion guidance is delivered to the user using haptic interaction, while the macro-path clues are delivered using audio interaction.

Based on the information flow in an assistive navigation system, we design the ANSVIP structure as follows:

Firstly, the system should be fully aware of the information related to the user's location during navigation. Unlike GPS-based solutions that are commonly used outdoors, our system has to use computer vision-based SLAM since indoor GPS signals are unreliable. SLAM is based on Google ARCore, which integrates the vision and inertial sensor hardware to support area learning.

Secondly, the system should be capable of conveying the abstracted systemic cognition to the user. Unlike the conventional exclusive audio interaction, we propose a haptic-based cooperative mechanism. This allows us to replace the popular turn-by-turn guidance with a more continuous motion guidance.

The working logic among the ANSVIP components is shown in Figure 3. Details of the major components are discussed in the following subsections:

Figure 3. The working logic among the ANSVIP components: The physical components and soft components are shown on the left-hand side and right-hand side, respectively.

3.2. Real-Time Area Learning-Based Localization

The system relies on existing indoor scenario CAD maps, which are available as escape maps near elevators (as requested by fire departments). The map we use in this study is presented in Figure 4. We label the areas of interest on the map so as to allow the system to understand navigation requests and to plan the path accordingly.

Figure 4. The digital CAD map before (left) and after (right) being labeled.

Google ARCore is used to track the pose of the system in navigation. The sparse features are collected and stored in an area description dataset and subsequently used in re-localization. Specifically, a normal-sighted person has to build the sparse map of the indoor scenario by running the ARCore SLAM in advance. Then, the assistive system is capable of re-localizing itself on the pre-built map after entering the scenario. By observing and recognizing the labeled traceable objects, the system is able to re-localize itself after roaming in the system map. However, the mapping between points in the system feature-based map and those on the scenario CAD map has to be obtained.

We use a singular value decomposition method to find the transformation matrix A. Two groups of corresponding feature point sets are used for finding the homogeneous transformation matrix:
" # cos(θ ) − sin(θ ) t x
R 2×2 t 2×1
A= =  sin(θ ) cos(θ ) ty . (1)
 
0 1
0 0 1

Let ln = [ xn , yn ] T denote the point sets in the feature map, and let pn = [in , jn ] T denote the
corresponding point set on the scenario CAD map. We use the least squares method to find the rotation
R and translation t as follows:
N
( R, t) ← arg min ∑ k R × pi + t − li k2 . (2)
R,t i =1

N N
1 1
Denote l = N ∑ li , p = N ∑ pi , Equation (2) can be written as
i =1 i =1

N 2
R ← arg min ∑ R × pi − l i . (3)
R i =1

For M x = b,  
x1 − y1 1 0

 y1 x1 0 1  

M= .. .. .. .. 
. . . . , (4)


−yn
 
 xn 1 0 
yn xn 0 1

x = [cos(θ ), sin(θ ), t x , ty ] T , (5)

b = [i1 , j1 , ..., in , jn ] T . (6)

Using SVD to decompose M, we find

M N ×4 = U N × N S N ×4 V T 4×4 , (7)

where U denotes the feature matrix of MM T , S denotes the diagonal matrix with eigenvalue δi , V is
the eigenvector matrix of M T M, and we have A in (1) calculated by

−1
x = (Vdiag(δ1−1 , ..., δ4−1 )U T ) b. (8)
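As a concrete illustration, the linear system of Equations (4)–(6) and its SVD solution of Equations (7)–(8) can be sketched in a few lines of Python with NumPy. The function name and the synthetic test points below are our own; the paper does not provide reference code. Since a least-squares estimate of [cos θ, sin θ] is not guaranteed to be exactly unit-norm for noisy correspondences, the sketch re-normalizes the rotation with arctan2 before assembling A.

```python
import numpy as np

def estimate_rigid_transform(l_pts, p_pts):
    """Solve M x = b (Eqs. 4-6) by SVD (Eqs. 7-8) for the 2-D rigid transform
    mapping feature-map points (x_n, y_n) to CAD-map points (i_n, j_n)."""
    l_pts = np.asarray(l_pts, dtype=float)
    p_pts = np.asarray(p_pts, dtype=float)
    rows = []
    for x, y in l_pts:                     # two rows per correspondence
        rows.append([x, -y, 1.0, 0.0])
        rows.append([y,  x, 0.0, 1.0])
    M = np.array(rows)                     # Eq. (4)
    b = p_pts.reshape(-1)                  # Eq. (6): [i1, j1, ..., in, jn]
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    x = Vt.T @ np.diag(1.0 / s) @ U.T @ b  # Eq. (8): pseudoinverse solution
    theta = np.arctan2(x[1], x[0])         # re-normalize the rotation part
    c, s_, tx, ty = np.cos(theta), np.sin(theta), x[2], x[3]
    A = np.array([[c, -s_, tx],
                  [s_,  c, ty],
                  [0.0, 0.0, 1.0]])        # homogeneous form of Eq. (1)
    return A, theta
```

For exact correspondences the recovery is exact; with noisy points the same code returns the least-squares fit.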

3.3. Area Learning in ANSVIP

ARCore is an augmented reality framework for smartphones running the Android operating system. It is an advanced substitute for the deprecated Project Tango. Without an extra depth sensor, an ARCore-powered cell phone is able to track its pose and build a map of the surroundings in real time. In addition, ARCore enhances area-of-interest detection by estimating the average illumination intensity, which helps area segmentation during semantic mapping.

The smartphone is a remarkable feat of engineering. It integrates a great number of sensors, such as a gyroscope, camera, and GPS, into a small slab. Specifically, in our work, a HUAWEI P20 with a Kirin 970 CPU, gravity sensor, ambient light sensor, proximity sensor, gyroscope, and compass is used.

3.4. Adaptive Artificial Potential Field-Based Path Planning

In indoor navigation for the visually impaired, path planning has to consider both efficiency and safety. Specifically, our path planning considers the following issues.

1. The path should be planned to stay away from obstacles and risks: Whereas conventional robot path planning prefers the shortest path, the assistive system has more in-depth requirements. For visually impaired users, the path should stay away from obstacles and risks such as walls, pillars, and uneven steps, which may cause falls [25].

2. The path and guidance shall be updated in real time: Unlike autonomous robot systems, the assistive system cannot expect visually impaired users to proceed along the planned path exactly. When the user deviates from the planned path, there should be a corresponding real-time path evolution instead of asking the user to return to the planned track.

3. The mechanism shall be flexible enough to scale up with new elements: The path-planning algorithm should be able to easily expand with new elements, such as dynamic obstacle avoidance, functional unit integration, step warning, and extreme-case re-planning.

4. The path shall be planned in a human-friendly manner: Unlike robots, visually impaired users are unable to execute precise turning angles, and thus, it is difficult for them to follow conventional turn-by-turn paths [15]. Qualitative direction guidance is more suitable. Users prefer continuous guidance in navigation and a generally smooth plan.

Artificial potential field path planning is a suitable candidate for the above issues and challenges, since it has a simple structure, strong practicability, ease of implementation, and flexibility regarding expansion [14,17].

Therefore, we propose an adaptive artificial potential field path-planning mechanism for path generation.

Specifically, the target (goal) is considered an attractive potential, while walls are repulsive. The potential field is the combination of first-order attractive and repulsive potentials:

U = U_{att} + U_{rep}, (9)

U_{att}(X_{current}) = k \, \rho(X_{current}, X_{target}), (10)

U_{rep}(X_{current}) = \begin{cases} \eta \left( \frac{1}{\rho(X_{current}, X_{obs})} - \frac{1}{\rho_0} \right) & \text{if } \rho(X_{current}, X_{obs}) \le \rho_0, \\ 0 & \text{if } \rho(X_{current}, X_{obs}) > \rho_0, \end{cases} (11)

where \eta denotes the repulsive factor, \rho denotes a distance function, and \rho_0 denotes the effective radius. A path can be generated along the potential gradients.
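As a minimal numerical sketch of Equations (9)–(11), the code below descends the combined field with a finite-difference gradient. The constants, the step size, and the gradient-descent stepping are illustrative assumptions; the paper does not specify its numerical scheme at this level of detail.

```python
import numpy as np

def potential(X, X_target, obstacles, k=1.0, eta=1.0, rho0=1.0):
    # Eq. (9): U = U_att + U_rep, with the first-order terms of Eqs. (10)-(11)
    U = k * np.linalg.norm(X - X_target)          # attractive term, Eq. (10)
    for X_obs in obstacles:                       # repulsive term, Eq. (11)
        rho = np.linalg.norm(X - X_obs)
        if rho <= rho0:
            U += eta * (1.0 / max(rho, 1e-6) - 1.0 / rho0)
    return U

def gradient_step(X, X_target, obstacles, step=0.08, eps=1e-4, **kw):
    # Finite-difference gradient of the combined field; move downhill
    g = np.zeros(2)
    for d in range(2):
        e = np.zeros(2)
        e[d] = eps
        g[d] = (potential(X + e, X_target, obstacles, **kw)
                - potential(X - e, X_target, obstacles, **kw)) / (2 * eps)
    return X - step * g / (np.linalg.norm(g) + 1e-9)
```

Iterating gradient_step from the user's position traces a path that bends around any obstacle closer than ρ0 while still descending toward the target.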
However, local minima may block the path or cause redundant travel costs. Thus, we use the path length of the local-minimum-immune A* algorithm to control \rho_0 and solve this problem: \rho_0 \leftarrow \rho_0 - \Delta\rho while C_{AAPF} > \lambda C_{A*}, where \lambda denotes the control factor, and C_{AAPF} and C_{A*} denote the path lengths of the adaptive artificial potential field (AAPF) and A* from the current position, respectively. A sliding window is used to smooth the path to support and enhance the experience of motion guidance in human–machine interaction (Figure 5):

X(i) = \frac{1}{2N+1} \left( X(i+N) + X(i+N-1) + \ldots + X(i-N) \right). (12)

A case of a smoothed path is shown in Figure 5. Since the plan is discrete, the path (red dotted) is planned in taxicab style before smoothing. The sliding window described in Equation (12) updates each point on the path by averaging its position with those of the nearest 2N points on the path. Consequently, the path (dark curve) is smoothed after the process.
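The sliding window of Equation (12) is straightforward to implement. In the sketch below we pin the first and last N points so the start and goal stay fixed, an endpoint treatment the paper does not specify.

```python
import numpy as np

def smooth_path(path, N=2):
    # Eq. (12): replace each interior point by the mean of itself and its
    # N neighbors on each side (a 2N+1 window); endpoints are left pinned.
    path = np.asarray(path, dtype=float)
    out = path.copy()
    for i in range(N, len(path) - N):
        out[i] = path[i - N:i + N + 1].mean(axis=0)
    return out
```

Applied to a taxicab-style grid path, one pass visibly rounds the right-angle corners; repeated passes smooth the path further.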

Figure 5. A case of a smoothed path by sliding window: before smoothing (red dotted) versus after smoothing (dark curve).

3.5. Dual-Channel Human–Machine Interaction

The information transfer between the user and the system relies on human–machine interactions (HMIs). The HMI in an assistive navigation system has certain unique characteristics. First, the HMI does not rely on visual cognition. Second, the HMI is highly task-oriented. Third, different kinds of information have distinct delivery requirements, depending on urgency or the accuracy required. The most popular audio interaction for assistive navigation systems [16,26–28] suffers from the following aspects:

Instruction delay: The instruction delivery is not instantaneous, and the latency becomes a bottleneck when dealing with urgent interaction requests. This is vital when dealing with urgency in navigation.

Limited information: The amount of information per message/second is very limited and tends to cause ambiguity, which makes accomplishing tasks with multiple semantics difficult.

Vulnerable to interference: The user may not be able to access multiple instructions simultaneously, and environmental sounds may cause interference.

Result-oriented instructions: The conventional graphical interaction provides many individual small tasks to users, allowing them to choose among different combinations to achieve their goals. Audio instructions are usually goal-driven and result-oriented, and they are weak in procedure-oriented interaction tasks.

Thus, we design a hybrid haptic interaction mechanism as the major interface to deliver navigation instructions, especially micro-motion instructions. Audio is used to deliver less-sensitive macro-informative messages.

After a path to the target is determined, motion guidance is generated as shown in Figure 6.
Figure 6. The motion guidance is generated by intersecting the planned path and the awareness cycle.
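The guidance-point computation sketched in Figure 6 can be illustrated numerically. The snippet below is a minimal sketch under our own assumptions, not the system's actual implementation: the planned path is taken as a polyline of waypoints, the awareness cycle as a circle of radius `radius` around the user, and the guidance direction points toward the first path sample that leaves that circle. The function name and the per-segment sampling resolution are hypothetical.

```python
import math

def guidance_direction(path, user_pos, radius):
    """Return a unit direction vector toward the point where the planned
    path (a polyline of (x, y) waypoints) exits the awareness circle
    centered at user_pos. Falls back to the last waypoint if the whole
    remaining path lies inside the circle."""
    ux, uy = user_pos
    target = path[-1]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        # Walk each segment in small steps; take the first sample that
        # crosses the awareness circle as the local guidance point.
        steps = 20
        for i in range(steps + 1):
            t = i / steps
            px, py = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            if math.hypot(px - ux, py - uy) >= radius:
                dx, dy = px - ux, py - uy
                norm = math.hypot(dx, dy)
                return (dx / norm, dy / norm)
    dx, dy = target[0] - ux, target[1] - uy
    norm = math.hypot(dx, dy) or 1.0
    return (dx / norm, dy / norm)

# Example: user at origin, straight path along x, awareness radius 1
print(guidance_direction([(0, 0), (5, 0)], (0, 0), 1.0))  # → (1.0, 0.0)
```

Intersecting with the circle rather than steering at the final waypoint keeps the delivered direction local and smooth as the user advances along the path.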
To deliver the motion guidance in real time via haptic interaction, a numerical solution is needed. In this work, we design haptic gloves as shown in Figure 7.
Figure 7. The design of haptic gloves.
The left glove guides the motion, and the right glove warns of obstacles. Taking the middle finger as the subject's heading reference, the motion directional guidance can be instantaneously delivered to the user as soon as the motion plan is made.
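As an illustration of how such directional delivery could work, the sketch below assumes a hypothetical layout of five vibration motors, one per finger of the left glove, with the middle finger standing for straight ahead. The paper specifies only that the middle finger marks the heading, so the angle assignments and finger names here are our own.

```python
import math

# Hypothetical motor layout on the left glove: one vibrator per finger,
# with the middle finger taken as straight ahead (0 rad). Negative
# angles turn toward the thumb side, positive toward the little finger.
MOTOR_ANGLES = {
    "thumb": -math.pi / 2,
    "index": -math.pi / 4,
    "middle": 0.0,
    "ring": math.pi / 4,
    "little": math.pi / 2,
}

def motor_for_heading(heading_rad):
    """Pick the motor whose nominal angle is closest to the desired
    heading (relative to the user's current facing direction)."""
    return min(MOTOR_ANGLES, key=lambda f: abs(MOTOR_ANGLES[f] - heading_rad))

print(motor_for_heading(0.1))          # → middle (nearly straight ahead)
print(motor_for_heading(-math.pi / 3))  # → index (moderate turn)
```

A finer layout (more motors, or pulse-width coding of the angle) would refine the resolution, but the nearest-motor rule above captures the basic mapping from planned direction to a single vibration cue.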
4. System Prototyping and Evaluation

4.1. System Prototyping
We use the HUAWEI P20 as the ARCore-supported smartphone, use Arduino sensors to implement the haptic interactive glove, and use the BAIDU open API for audio recognition. The application is developed in Unity3D. Roberto Lopez Mendez ARCore SLAM is applied as the base for visual odometry and area learning. Bluetooth is used to connect the smartphone and the accessory. The ready-to-work human–machine prototype is shown in Figure 8.
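The Bluetooth link implies a simple command protocol between the app and the Arduino glove. The packet layout below (header byte, motor id, intensity, XOR checksum) is purely hypothetical and is shown only to illustrate how motor commands might be framed before being written to the serial port; it is not the protocol used by ANSVIP.

```python
def encode_glove_command(motor_id: int, intensity: int) -> bytes:
    """Encode a single haptic command as a 4-byte packet:
    header 0xA5, motor id (0-4), intensity (0-255), XOR checksum.
    Illustrative packet layout, not the actual ANSVIP protocol."""
    if not 0 <= motor_id <= 4 or not 0 <= intensity <= 255:
        raise ValueError("motor_id in 0..4, intensity in 0..255")
    body = bytes([0xA5, motor_id, intensity])
    checksum = body[0] ^ body[1] ^ body[2]
    return body + bytes([checksum])

pkt = encode_glove_command(2, 200)
print(pkt.hex())  # → a502c86f
```

In a prototype, such bytes would be written to the Bluetooth serial port (for example with pyserial's `Serial.write`) and decoded on the Arduino side, with the checksum guarding against corrupted frames.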
Figure 8. The implemented ANSVIP prototype with full functionality.
4.2. Localization
To validate the localization accuracy and reliability, we compare the localization of area learning with visual odometry in an indoor test. Two subjects wearing the system are asked to walk five times along a path in the corridor, one subject with area learning and the other with visual odometry (VO). The results in Figure 9 are consistent with our expectations: the VO trials suffer from accumulative errors, which cause localization drifts; meanwhile, in the area learning method, there are certain drifts in passing corners, but the system is able to swiftly correct the drift by recognizing learned areas.
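The drift behavior can be quantified with a per-sample deviation metric. The sketch below uses made-up trajectories, not the recorded data; it only illustrates why an area-learning correction yields a lower mean deviation than accumulating VO drift.

```python
import math

def mean_drift(ground_truth, estimate):
    """Mean Euclidean deviation between matched pose samples of a
    ground-truth trajectory and an estimated one (same length).
    Illustrative metric; the paper reports the drift qualitatively."""
    assert len(ground_truth) == len(estimate)
    return sum(math.dist(g, e) for g, e in zip(ground_truth, estimate)) / len(ground_truth)

gt = [(0, 0), (1, 0), (2, 0), (3, 0)]
vo = [(0, 0), (1, 0.1), (2, 0.2), (3, 0.3)]   # accumulative drift
al = [(0, 0), (1, 0.1), (2, 0.2), (3, 0.0)]   # drift corrected at a learned area
print(mean_drift(gt, vo) > mean_drift(gt, al))  # → True
```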
Figure 9. The ground truth and trajectories of the test trials.
4.3. Path Planning
Simulation comparisons on four different path planning mechanisms are conducted: the adaptive artificial potential field (AAPF), the adaptive artificial potential field without a sliding window (AAPF/S), the artificial potential field without a repulsive force and sliding window (AAPF/RS), and the A* path planning.

On the map, we set the elevator's location as the starting position. Then, 100 random destinations are generated outside a circle with a radius of 25 meters centered at the starting point, as shown in Figure 10. We use the four candidate path-planning mechanisms to generate the paths for the start–target pairs.
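As a rough sketch of the potential-field idea behind AAPF, the snippet below combines a unit attractive force toward the goal with repulsive forces from obstacles inside an influence radius. The adaptive gain scheduling and sliding-window smoothing of the full AAPF are omitted, and all gains, distances, and function names are illustrative assumptions.

```python
import math

def apf_step(pos, goal, obstacles, step=0.5, k_att=1.0, k_rep=2.0, d0=3.0):
    """One gradient step of an artificial potential field planner:
    attraction toward the goal plus repulsion from obstacles closer
    than the influence distance d0. Simplified sketch; the adaptive
    gains and sliding-window smoothing of AAPF are omitted."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    d_goal = math.hypot(gx, gy) or 1.0
    fx, fy = k_att * gx / d_goal, k_att * gy / d_goal   # unit attraction
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            # Repulsion grows rapidly as the obstacle gets closer
            w = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            fx += w * dx
            fy += w * dy
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

def plan(start, goal, obstacles, max_iter=200):
    """Follow the field until the goal neighborhood is reached."""
    path = [start]
    while math.dist(path[-1], goal) > 0.5 and len(path) < max_iter:
        path.append(apf_step(path[-1], goal, obstacles))
    return path

path = plan((0.0, 0.0), (10.0, 0.0), obstacles=[(5.0, 1.5)])
print(math.dist(path[-1], (10.0, 0.0)) <= 0.5)  # → True
```

The repulsive term is what bends the path away from obstacles in the comparisons below; removing it (as in AAPF/RS) leaves only the straight-line pull toward the goal.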
Figure 10. The 100 generated destinations (star) and the starting position (pentagram) on the map.
In Figure 11, we compare the path lengths generated by the four mechanisms. Obviously, the path lengths of AAPF are always lower than those of AAPF/S and AAPF/RS because the sliding window curves the sharp turns on the path into filleted turns; therefore, the path length is shorter, as expected. The path lengths of A* are always the lowest and are the best among the four. Because A* uses a greedy mechanism to generate the path, it is guaranteed to produce the shortest length when a global view is accessible. However, it is noted that the path length differences between A* and AAPF are very limited.
Figure 11. Simulation results on path planning cost.
In Figure 12, we collect the discrete distances from the path to obstacles along the paths. It is shown that AAPF and AAPF/S properly deal with the distance to obstacles, which is consistent with our design: the repulsive forces of obstacles keep the path away from them. AAPF/RS and A* do not have such repulsive forces, and thus, a good portion of their paths is close to obstacles. This is not desired in assistive navigation [6,11].
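The statistic behind Figure 12 can be reconstructed as a clearance profile: the distance from each path sample to its nearest obstacle, plus the share of samples inside a safety margin. The helper below and its toy paths are illustrative only; the margin value is our own assumption.

```python
import math

def clearance_profile(path, obstacles):
    """Distance from each path sample to its nearest obstacle - the
    quantity histogrammed in Figure 12 (illustrative reconstruction)."""
    return [min(math.dist(p, o) for o in obstacles) for p in path]

def unsafe_ratio(path, obstacles, margin=1.0):
    """Share of samples closer to an obstacle than the safety margin."""
    prof = clearance_profile(path, obstacles)
    return sum(d < margin for d in prof) / len(prof)

obstacles = [(2.0, 0.0)]
hugging = [(0.5, 0.5), (1.5, 0.5), (2.5, 0.5), (3.5, 0.5)]  # grazes the obstacle
detour  = [(0.0, 1.5), (1.0, 2.0), (2.0, 2.0), (3.0, 1.5)]  # keeps clear of it
print(unsafe_ratio(hugging, obstacles), unsafe_ratio(detour, obstacles))  # → 0.5 0.0
```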
Figure 12. Simulation results on distances to obstacles.
Although the path lengths of A* are a bit shorter than those of AAPF, considering the fact that subjects in navigation are prone to experiencing risk and panic if risky places are close to the paths, AAPF outperforms A* in keeping the path safe.
4.4. Haptic Guidance
To verify the directional guidance of the haptic device, we carry out unit tests of the haptic guidance glove. The guidance glove on the left hand and the Arduino joystick to be controlled on the right hand are shown in Figure 13. A series of programmed guidance commands are stored and sent to the glove so as to let the subject feel the guidance. A blindfolded subject is told to use the joystick to depict the directional instruction received. The joystick behaviors are recorded every half second.
Figure 13. (Left) Prototype of the haptic glove. (Right) Joystick for test purposes.
In Figure 14, the input guidance commands are compared with the joystick records. Obviously, there is a latency between the input and records. The latency is caused by three factors: the cognitive delay of human haptic sensibility, the delay from understanding the guidance to controlling the joystick, and the delay between joystick action and recording. The average delay is less than 0.4 s, which is acceptable in most cases. Note that the delays in later trials are much less than those in earlier trials. One of the reasons for this is that the subject is getting familiar with the haptic interaction. In other words, the subject is capable of efficiently and quickly converting the data perceived by the haptic interaction into their own perception after a few attempts. Thus, a cooperative cognition is built between the assistive system, haptic interaction, and human perceptions.
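The latency between command and response can be estimated by aligning the two logged signals over integer sample shifts, a discrete cross-correlation. The sketch below uses synthetic signals at the experiment's 0.5 s logging interval; it is not the analysis code used in the paper, and the signal encoding (-1/0/1 for left/neutral/right) is our own assumption.

```python
def estimate_delay(command, response, dt=0.5):
    """Estimate the lag (in seconds) between a command signal and the
    recorded joystick response by maximizing their alignment over
    integer sample shifts. Samples arrive every dt seconds, matching
    the half-second joystick logging in the experiment."""
    best_shift, best_score = 0, float("-inf")
    for shift in range(len(command)):
        # Correlate command[i] against response[i + shift]
        score = sum(c * r for c, r in zip(command, response[shift:]))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift * dt

cmd = [0, 0, 1, 1, 1, 0, 0, -1, -1, 0]
rsp = [0, 0, 0, 1, 1, 1, 0, 0, -1, -1]   # same pattern, one sample late
print(estimate_delay(cmd, rsp))  # → 0.5
```

With real logs, sub-sample delays such as the reported 0.4 s average would call for a finer sampling rate or interpolation between shifts.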
Figure 14. Haptic glove guidance versus joystick records.

4.5. Integration Test
To verify the prototype system, we conduct target-oriented navigation tests with three low-vision subjects and one blind subject. To evaluate the human–machine interaction that occurs in our systems, we administer navigation with two different interaction mechanisms: one with pure audio instructions [3] and the other with haptic instructions. Experience surveys are collected after the tests. A 5-minute tutorial on the navigation instructions is given prior to the tests, and all of the subjects are told that security personnel will intervene before any collision or risk is met. This gives the users peace of mind.
After the test, all four subjects believed they successfully followed the instructions to reach the target (5/5); most subjects agreed that the instructions were very easy to understand (4.5/5); and all subjects agreed that their cognition of haptic instructions improved a short while after beginning the experiment (5/5). Furthermore, all subjects agreed that the haptic instructions were less likely to cause hesitation than audio instructions (5/5); some subjects believed that they felt safer than expected (3.75/5); most believed that they had a better experience with haptic instructions than audio instructions in micro-guidance (4.75/5); and all believed that audio instructions were indispensable as macro-instructions (5/5). Two subjects believed the haptic glove would affect holding objects in daily life and suggested migrating the haptic component to the arm or the back of the hand.

5. Conclusions
In this work, we propose a human-centric navigation system to assist people with visual impairment while travelling indoors. The system takes a commercial smartphone as the carrier and uses Google ARCore vision SLAM for positioning. Compared with conventional visual odometry-supported travel aids, the system achieves better mapping and tracking. An adaptive artificial potential field-based path planning has been proposed for the system; it keeps the path away from obstacles so as to avoid risk and collision while generating a real-time smooth path. Finally, a dual-channel human–machine interaction mechanism is introduced in the system. The system user-centrically incorporates haptic interfaces to provide fluent and continuous guidance superior to the conventional turn-by-turn audio-guiding method. The haptic interaction can be carried out via different candidate devices, but our proposed haptic gloves benefit from an affordable cost and plug-and-play convenience.
Evaluation on field tests and simulations shows that the localization and path planning achieve the expected performance, and as such, the proposed ANSVIP system is welcomed by visually impaired subjects.

Author Contributions: Conceptualization, X.Z.; Investigation, Y.Z.; Methodology, X.Z. and F.H.; Project
administration, F.H.; Resources, Y.Z.; Software, X.Y.; Writing—original draft, X.Z.
Funding: This work was funded by the Humanity and Social Science Youth foundation of the Ministry of
Education of China, grant number 18YJCZH249, 17YJCZH275.
Acknowledgments: The authors would like to thank Bing Li, Jizhong Xiao and Wei Wang for their insightful
suggestions regarding this research. We thank LetPub for its linguistic assistance during the preparation of
this manuscript.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Horton, E.L.; Renganathan, R.; Toth, B.N.; Cohen, A.J.; Bajcsy, A.V.; Bateman, A.; Jennings, M.C.; Khattar, A.;
Kuo, R.S.; Lee, F.A.; et al. A review of principles in design and usability testing of tactile technology for
individuals with visual impairments. Assist. Technol. 2017, 29, 28–36. [CrossRef] [PubMed]
2. Katz, B.F.G.; Kammoun, S.; Parseihian, G.; Gutierrez, O.; Brilhault, A.; Auvray, M.; Truillet, P.; Denis, M.;
Thorpe, S.; Jouffrais, C. NAVIG: Augmented reality guidance system for the visually impaired. Virtual Reality
2012, 16, 253–269. [CrossRef]
3. Zhang, X. A Wearable Indoor Navigation System with Context Based Decision Making for Visually Impaired.
Int. J. Adv. Robot. Autom. 2016, 1, 1–11. [CrossRef]
4. Ahmetovic, D.; Gleason, C.; Kitani, K.M.; Takagi, H.; Asakawa, C. NavCog: Turn-by-turn smartphone
navigation assistant for people with visual impairments or blindness. In Proceedings of the 13th Web for All
Conference Montreal, Montreal, QC, Canada, 11–13 April 2016; pp. 90–99. [CrossRef]
5. Bing, L.; Munoz, J.P.; Rong, X.; Chen, Q.; Xiao, J.; Tian, Y.; Arditi, A.; Yousuf, M. Vision-based Mobile Indoor
Assistive Navigation Aid for Blind People. IEEE Trans. Mobile Comput. 2019, 18, 702–714.
6. Nair, V.; Budhai, M.; Olmschenk, G.; Seiple, W.H.; Zhu, Z. ASSIST: Personalized Indoor Navigation via
Multimodal Sensors and High-Level Semantic Information. In Proceedings of the 2018 European Conference
on Computer Vision, Munich, Germany, 8–14 September 2018; Volume 11134, pp. 128–143. [CrossRef]
7. Fernandes, H.; Costa, P.; Filipe, V.; Paredes, H.; Barroso, J. A review of assistive spatial orientation and
navigation technologies for the visually impaired. In Universal Access in the Information Society; Springer:
Berlin/Heidelberg, Germany, 2017. [CrossRef]
8. Yang, Z.; Ganz, A. A Sensing Framework for Indoor Spatial Awareness for Blind and Visually Impaired
Users. IEEE Access 2019, 7, 10343–10352. [CrossRef]
9. Jiao, J.C.; Yuan, L.B.; Deng, Z.L.; Zhang, C.; Tang, W.H.; Wu, Q.; Jiao, J. A Smart Post-Rectification Algorithm
Based on an ANN Considering Reflectivity and Distance for Indoor Scenario Reconstruction. IEEE Access
2018, 6, 58574–58586. [CrossRef]

10. Joseph, S.L.; Xiao, J.Z.; Zhang, X.C.; Chawda, B.; Narang, K.; Rajput, N.; Mehta, S.; Subramaniam, L.V.
Being Aware of the World: Toward Using Social Media to Support the Blind with Navigation. IEEE Trans.
Hum.-Mach. Syst. 2015, 45, 399–405. [CrossRef]
11. Xiao, J.; Joseph, S.L.; Zhang, X.; Li, B.; Li, X.; Zhang, J. An Assistive Navigation Framework for the Visually
Impaired. IEEE Trans. Hum.-Mach. Syst. 2017, 45, 635–640. [CrossRef]
12. Zhang, X.; Bing, L.; Joseph, S.L.; Xiao, J.; Yi, S.; Tian, Y.; Munoz, J.P.; Yi, C. A SLAM Based Semantic Indoor
Navigation System for Visually Impaired Users. In Proceedings of 2015 IEEE International Conference on
Systems, Man, and Cybernetics, Kowloon, China, 9–12 October 2015.
13. Bing, L.; Muñoz, J.P.; Rong, X.; Xiao, J.; Tian, Y.; Arditi, A. ISANA: Wearable Context-Aware Indoor Assistive
Navigation with Obstacle Avoidance for the Blind. In Proceedings of the 2016 European Conference on
Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016.
14. Zhao, Y.; Zheng, Z.; Liu, Y. Survey on computational-intelligence-based UAV path planning. Knowl.-Based
Syst. 2018, 158, 54–64. [CrossRef]
15. Ahmetovic, D.; Oh, U.; Mascetti, S.; Asakawa, C. Turn Right: Analysis of Rotation Errors in Turn-by-Turn
Navigation for Individuals with Visual Impairments. In Proceedings of the 20th International ACM Sigaccess
Conference on Computers and Accessibility, Assets’18, Galway, Ireland, 22–24 October 2018; pp. 333–339.
[CrossRef]
16. Balata, J.; Mikovec, Z.; Slavik, P. Landmark-enhanced route itineraries for navigation of blind pedestrians in
urban environment. J. Multimodal User Interfaces 2018, 12, 181–198. [CrossRef]
17. Soltani, A.R.; Tawfik, H.; Goulermas, J.Y.; Fernando, T. Path planning in construction sites: Performance
evaluation of the Dijkstra, A*, and GA search algorithms. Adv. Eng. Inform. 2002, 16, 291–303. [CrossRef]
18. Sato, D.; Oh, U.; Naito, K.; Takagi, H.; Kitani, K.; Asakawa, C. NavCog3 An Evaluation of a Smartphone-Based
Blind Indoor Navigation Assistant with Semantic Features in a Large-Scale Environment. In Proceedings of
the 19th International ACM SIGACCESS Conference on Computers and Accessibility, Baltimore, MD, USA,
20 October–1 November 2017. [CrossRef]
19. Epstein, R.A.; Patai, E.Z.; Julian, J.B.; Spiers, H.J. The cognitive map in humans: Spatial navigation and
beyond. Nat. Neurosci. 2017, 20, 1504–1513. [CrossRef] [PubMed]
20. Marianne, F.; Sturla, M.; Witter, M.P.; Moser, E.I.; May-Britt, M. Spatial representation in the entorhinal cortex.
Science 2004, 305, 1258–1264.
21. Papadopoulos, K.; Koustriava, E.; Koukourikos, P.; Kartasidou, L.; Barouti, M.; Varveris, A.; Misiou, M.;
Zacharogeorga, T.; Anastasiadis, T. Comparison of three orientation and mobility aids for individuals with
blindness: Verbal description, audio-tactile map and audio-haptic map. Assist. Technol. 2017, 29, 1–7.
[CrossRef] [PubMed]
22. Rector, K.; Bartlett, R.; Mullan, S. Exploring Aural and Haptic Feedback for Visually Impaired People on
a Track: A Wizard of Oz Study. In Proceedings of the 20th International ACM Sigaccess Conference on
Computers and Accessibility, Assets’18, Galway, Ireland, 22–24 October 2018. [CrossRef]
23. Papadopoulos, K.; Koustriava, E.; Koukourikos, P. Orientation and mobility aids for individuals with
blindness: Verbal description vs. audio-tactile map. Assist. Technol. 2018, 30, 191–200. [CrossRef] [PubMed]
24. Guerreiro, J.; Ohn-Bar, E.; Ahmetovic, D.; Kitani, K.; Asakawa, C. How Context and User Behavior Affect
Indoor Navigation Assistance for Blind People. In Proceedings of the 2018 Internet of Accessible Things,
Lyon, France, 23–25 April 2018. [CrossRef]
25. Kacorri, H.; Ohn-Bar, E.; Kitani, K.M.; Asakawa, C. Environmental Factors in Indoor Navigation Based on
Real-World Trajectories of Blind Users. In Proceedings of the 2018 CHI Conference on Human Factors in
Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–12. [CrossRef]
26. Boerema, S.T.; van Velsen, L.; Vollenbroek-Hutten, M.M.R.; Hermens, H.J. Value-based design for the elderly:
An application in the field of mobility aids. Assist. Technol. 2017, 29, 76–84. [CrossRef] [PubMed]
27. Mone, G. Feeling Sounds, Hearing Sights. Commun. ACM 2018, 61, 15–17. [CrossRef]
28. Martins, L.B.; Lima, F.J. Analysis of Wayfinding Strategies of Blind People Using Tactile Maps. Procedia
Manuf. 2015, 3, 6020–6027. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
