
Flight Simulation: Research Challenges and Flight Crew Assessments of Fidelity

Andrew Robinson¹ (Counterpoint - mtc, UK)
Katerina Mania² (University of Sussex, UK)

Abstract

The principal aim of simulation is to provide a platform on which environments or technology, either real or proposed, may be recreated for the purposes of training, visualisation and research. Simulators' fidelity ranges widely; some aim to recreate an environment or system to such a high degree that it is difficult to distinguish between the simulator and the real system, while others simply aim to recreate a small part of a system, or to present the system as a whole in a more compact and stylised fashion. The aim of this paper is to provide an overview of the technical challenges that face the simulation field as technology and requirements change and evolve. Focusing almost exclusively upon commercial Flight and Flight Systems simulation, it includes the results of an experimental study acquiring user assessments of fidelity, involving 'Expert Users' (Captains and Flight Instructors) from a variety of international airlines who have many hundreds of hours of experience of both the real, operational environment and the simulated equivalent.

Keywords: Flight Simulation, Fidelity.

¹ e-mail: andyr@couterpoint-mtc.co.uk
² e-mail: k.mania@sussex.ac.uk

1 Introduction

It could be argued that Flight Simulation is perhaps the most pervasive and successful area within the simulation arena. Within simulators flight crew can train to deal with emergency situations, gain familiarity with new aircraft types and learn airfield-specific procedures. Flight simulators vary considerably with regard to complexity, and range from fairly simple devices such as the Airbus flight-training device shown in Figure 2, to highly complex Full Flight simulators which incorporate motion and aim to present the most convincing facsimile of the real aircraft possible.

Figure 1. Krsko Power Plant Simulator (Courtesy of CAE).

Typically, a full flight simulator as shown in Figure 3 accurately represents a specific aircraft type by faithfully recreating the flight deck using actual aircraft avionics and instrumentation. The aerodynamic characteristics of the specific aircraft type are then mathematically modeled and used to drive the avionics, motion and visual systems. In this way a simulator may create a training or research environment that is highly convincing in its representation of reality. The degree to which a simulator recreates the intended aircraft is of course highly regulated and monitored by the relevant aviation authority (the FAA in America and the CAA in the UK, for example) and approved across four levels, from A & B, which have a rudimentary visual system and no motion, to C & D, which have a visual system with highly specific visual parameters and full motion. The challenges facing future generations of simulation devices may then be broken into the following four categories:

• Avionics & Instrumentation
• Motion Base
• Visual System
• Environmental

2 Avionics and Instrumentation

When considering the avionics fit within a specific aircraft type, a distinction needs to be made between a high fidelity full flight simulator and other lower fidelity devices.

3 Low Fidelity

If it is not a requirement to simulate an aircraft type exactly then there is clearly no need to use actual avionic devices; the appearance of each instrument can instead be recreated 'digitally' on a computer display. Such devices range in complexity from the FTD shown in Figure 2, which approximates the appearance and layout of a flight deck spread across multiple screens, to even simpler devices and applications which may be run on a standard personal computer or laptop with a single display, as shown in Figure 4. The great strength of such devices lies in their portability, which clearly stems from the lack of actual avionics. They may be used by aircrew to train 'Anytime, Anywhere', either as a classroom aid during initial training or even for basic type familiarisation prior to progression onto a full flight simulator, and then the real aircraft. The great portability of such devices has also led to the possibility of deploying them 'in the field' within the military arena, for the purposes of mission practice and rehearsal prior to flying the sortie for real. This clearly leads to increased mission success and survivability.

Figure 4. Low fidelity 'Simfinity' training application (courtesy of CAE).


A significant challenge for the development of these systems
however lies in the nature of the recreation of the avionics display
which in itself stems from the portability. In a high fidelity
simulator utilising actual avionics, only the data input needs to be
synthesised. Within these portable devices however the actual
avionic display must also be created, clearly within limited screen
space, especially with a single display device. The nature of
interaction will also be synthetic and unnatural. While this is largely unavoidable, careful application of Human-Computer Interaction practices is vital. With the advent and
pervasiveness of high bandwidth communication it is also
possible that these devices (as well as their high fidelity
counterparts) may be interconnected, either across buildings or
globally, to create a virtual training scenario regardless of trainee
location.
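The interconnection idea above can be caricatured in a few lines: each device publishes its own aircraft state and folds states received from remote devices into a shared traffic picture. The message layout below is invented purely for illustration; real networked training typically relies on standardised protocols such as DIS or HLA.

```python
import json

# Hedged sketch only: a minimal state-sharing scheme for networked
# training devices. The field names ("id", "pos", "att") are invented
# for illustration and do not correspond to any real protocol.

def encode_state(device_id, position, attitude):
    """Serialise one device's aircraft state for transmission."""
    return json.dumps({"id": device_id, "pos": position, "att": attitude})

def merge_state(traffic, message):
    """Fold a received message into the shared traffic picture,
    returning a new picture keyed by device id."""
    state = json.loads(message)
    traffic = dict(traffic)
    traffic[state["id"]] = {"pos": state["pos"], "att": state["att"]}
    return traffic
```

In practice each device would broadcast its message periodically and render the other entries in the traffic picture as remote aircraft.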

Figure 2. Airbus A320 'Simfinity' Flight Training Device (Courtesy of CAE).

4 High Fidelity
For a high fidelity simulator that recreates a true representation of the operational flight deck, the avionics fit is simply copied from the operational aircraft. This however reduces the portability of such devices, with the result that they are nearly always fixed in location. Obviously the advantage of this approach is a flight deck that is identical in appearance to the operational aircraft, in which a pilot may train upon more complete, complex scenarios. Indeed these devices are so convincing that a level D certified device may even be used for type conversion. The overall effect can be seen in Figure 5. Since real avionic devices are used, only the data input that each requires needs to be synthesised and routed. This is a complex task which requires that the sensor device upon which each instrument relies is modeled, and the data correctly formatted (ARINC for example) and sent to the device. Since this sensor data is dependent upon external factors (air temperature, pressure, airspeed etc.), the external elements must also be accurately reproduced. Moreover, as technology evolves and improves, new avionic devices are developed which typically rely on new types of sensor input. These must be examined and reproduced. Recent advances in avionic design have led to some fairly exotic devices becoming available and frequently installed in operational aircraft, such as Forward Looking Infrared (FLIR) and Millimetre Wave (MMW) Radar.

Figure 3. A Full Flight Simulator (courtesy of CAE).
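As a concrete illustration of the sensor modelling described above, the sketch below derives the impact (pitot minus static) pressure that would drive a simulated airspeed indicator, using the standard subsonic compressible-flow relation with ISA sea-level constants. The function names are our own; a real simulator host would additionally format the result (for example as an ARINC data word) and route it to the instrument.

```python
import math

# Hedged sketch of modelling the sensor behind one instrument: an
# airspeed indicator ultimately responds to impact pressure, so the
# simulator must derive that pressure from the modelled flight state.
# Constants are ISA sea-level values; function names are illustrative.

P0 = 101325.0   # ISA sea-level static pressure, Pa
A0 = 340.294    # ISA sea-level speed of sound, m/s

def impact_pressure(cas_ms: float) -> float:
    """Impact pressure qc (Pa) producing the given calibrated
    airspeed (m/s), via the subsonic compressible formula."""
    return P0 * ((1.0 + 0.2 * (cas_ms / A0) ** 2) ** 3.5 - 1.0)

def calibrated_airspeed(qc_pa: float) -> float:
    """Invert impact_pressure: recover CAS (m/s) from qc (Pa)."""
    return A0 * math.sqrt(5.0 * ((qc_pa / P0 + 1.0) ** (2.0 / 7.0) - 1.0))
```

Feeding the synthesised pressure through the same conversion the real instrument performs keeps the displayed airspeed consistent with the modelled flight state.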
Figure 5. A simulated Boeing 737 flight deck with level D visual (Courtesy of Boeing).

Such devices are employed to provide a clear outside view during low visibility or hazardous conditions. Clearly within a real environment the data that is displayed by such devices is simply dependent upon sensor information – the outside environment is really there and can be measured and displayed. Within a simulator however the environment is virtual, and cannot be measured with FLIR or MMW. The low visibility image must therefore be created from the visual scene, adjusted to be displayed from the point of view of the sensor (which will be different from the pilot's eye point), and displayed within the cockpit. Recent advances within visual image generator (IG) design provide for this particular example, as shown in Figure 6.

Figure 6. Simulated FLIR image (courtesy of CAE).

With the events of September the 11th still firmly in mind, another aspect of flight deck evolution focuses on security. Within a real aircraft it is clearly relatively simple to relay CCTV imagery to the flight deck to alert flight crew to any situation. This may also be fairly simply simulated by providing digital video footage as part of any appropriate training scenario. However since we are concerned with highly accurate simulation, this must mimic absolutely the type of display expected within the operational flight deck, down to the CCTV camera location (and therefore the relayed view), the location of the monitor within the cockpit and even the exact type of display device used within the real aircraft. While everything that is installed within a real aircraft can of course be replicated within a simulator, it is the speed of development and implementation of such devices that may present the most significant challenge for the flight simulation field [Bennett 2003].

5 Physiological Simulation

While the fidelity of the simulated avionics fit will clearly impact the nature, level and accuracy of the simulator overall, it is vital to consider kinesthetic aspects relevant to the human pilot in order to provide a truly convincing simulation rather than merely an expensive and highly complex machine. Kinesthesis (motion and touch), vision and hearing are the essential senses that must be simulated. Smell and taste are largely ignored, though smell may play some small role in the operation of a real aircraft. The information supplied to the auditory channel may be very representative of the real thing. Given the advent of modern digital signal processing, it is possible to reproduce noises that are indistinguishable from the real thing; indeed the fidelity of such sound is highly regulated, monitored and tested by the various regulatory bodies (FAA and CAA) during the acceptance of any simulator prior to training (and biannually thereafter). It is not practical however to accomplish near duplication of motion and visual sensations, particularly in the representation of the real, visual world [Rolfe 1986]. It is therefore important that we should analyse the simulation of the senses provided by motion and vision to determine those features of the total environment that are important.

A starting point is to endeavour to identify what information is received by the human sensory system and then to model the manner in which this information is interpreted. What is utilized, and the way in which it is utilized, is clearly task dependent. It would be highly convenient if everybody chose the same information and interpreted it in the same way for a given situation. There is however considerable evidence that the interpretation of a given set of limited information varies widely between individuals [Palmer & Petitt 1976]. We are born with certain in-built routines in the brain that allow basic survival, but with a 'large empty space' waiting for information. The way in which this information develops depends upon the experience of each individual. While it might be hoped that the decision to become a pilot, for example, is influenced by a predisposition to view the world in a certain way, it clearly will not represent general uniformity across all elements of the piloting task, and certainly not across the relatively limited elements of the perception of motion and visual inputs and their interpretation [Rolfe 1986]. This presents a potentially insuperable dilemma. It is possible to produce only a very limited subset of the information available in the real world, and this subset, while potentially useful to some individuals, may not be appropriate for others.

Fortunately the human being is very adaptable, and the world is full of redundant information, so if preferred information is unavailable then some other relevant, though perhaps not so easily interpreted, information will be selected. The limits of operation of aircraft are often determined by the reduction of information in the real world to a minimum, and the substitution of artificial aids. For example, the lighting patterns and runway markings of airfields are designed to allow aircraft to approach and land in conditions where information from the natural world is totally inadequate due to poor visibility or darkness.

The intrinsic capabilities of the human sensory system are fairly well understood, along with the adaptability of people in making use of available data cues, whether by past experience or specific training. Armed with this information, kinesthetic and visual information can be used in the creation of both motion and visual systems.
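Returning briefly to the FLIR example from Section 4: deriving the sensor picture from the visual scene rather than from measured radiation can be caricatured as a mapping from modelled surface temperature to grey-scale intensity. This is a deliberately minimal sketch under that assumption; a real IG would also re-project the scene to the sensor's eye point and model atmospheric attenuation.

```python
# Hedged sketch: map a 2-D grid of assumed per-surface temperatures
# (which we take the scene database to provide) to 8-bit grey levels,
# hotter surfaces rendered brighter ("white hot" polarity). Threshold
# values are illustrative, not drawn from any real sensor model.

def synthetic_flir(temperature_k, t_min=250.0, t_max=350.0):
    """Convert a grid (list of rows) of temperatures in Kelvin to
    8-bit grey-scale intensities, clamped to [t_min, t_max]."""
    def to_grey(t):
        norm = min(max((t - t_min) / (t_max - t_min), 0.0), 1.0)
        return int(round(norm * 255))
    return [[to_grey(t) for t in row] for row in temperature_k]
```

A production IG would perform the equivalent per-pixel mapping on the GPU, after rendering the scene from the simulated sensor position.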
5.1 Motion Base

Motion bases are used to create the sensation of motion within a simulator. Their use is not limited to flight simulation; many types of vehicle simulator employ motion bases, from car and truck simulators to tank and ship simulators. Motion bases may be broken into two categories, hydraulic and electric. Within the realm of Level C & D full flight simulators, hydraulic motion bases are used almost exclusively. This is due to two main factors:

• The weight of the equipment being moved
• The fidelity of the movement required

The principle by which hydraulic motion platforms operate is quite simple. Hydraulic oil is pressurised (to approximately 1500 psi) by a series of pumps and forced into hydraulic rams or jacks. If a jack is required to rise, a servo-operated valve is opened allowing oil into the cylinder, which forces the jack to extend. Conversely, when the jack is required to lower, the valve is closed causing the pressure to drop, and the oil is forced back out of the cylinder, causing the jack to lower. While there are several different techniques by which the simulator itself may be mounted on the motion base, by far the most common within civilian & commercial aviation simulation is to support the flight deck on six individual jacks as shown in Figure 3. By varying the pressure to each jack individually the simulator can be made to move through six degrees of freedom (DoF) to provide the sensation of pitch, roll, yaw, acceleration and deceleration as well as turbulence.

Since it is impossible to compress a fluid, the use of hydraulic oil gives the ability to simulate conditions such as 'Gear Up' forced landings, undercarriage collapse and other violent, heavy motion events. The simulator's main host computer relays data relating to the aircraft's attitude to the motion control host computer, which then controls the individual jacks to provide the sensation of authentic movement. Due to the nature of the hydraulic system, this movement can be controlled very rapidly, and while it is restricted in extent because of the limited stroke of the jacks, when coupled with a visual image that reflects the movement the results can be very convincing. A vital research area therefore is to examine the exact amount of time that is allowable between moving the motion base and reflecting the change in attitude within the visual scene (referred to as latency) [Guo et al. 2003]. The human motor system is very sensitive to changes in pitch when coupled with a visual image, and if the detected movement is out of sync across the senses, not only will the effect appear unconvincing; it can also result in motion sickness. Typically within a full flight simulator this latency is tuned to within 100 to 120 milliseconds (ms).

Excessive latency has long been known to hinder operator adaptation to other display distortions such as static displacement offset. Latency also degrades manual performance, forcing users to slow down to preserve manipulative stability, ultimately driving them to adopt a 'move and wait' strategy [Sheridan and Ferrell 1963]. Operator compensation for a delay usually requires the ability to predict the future state of a tracked element.

Interest has more recently been directed toward the subjective impact of system latency relevant to virtual reality simulations. Latency as well as update rate have been considered as factors affecting the operator's sense of presence in the environment. In a recent study, lower latencies were associated with a higher self-reported sense of presence and a statistically higher change in heart rate for users while in a stress-inducing (fear of heights), photorealistic environment involving walking around a narrow pit [Meehan et al. 2003].

System latency (time delay) and its visible consequences are fundamental virtual environment deficiencies that can hamper user perception and performance. The aim of this research is to quantify perceptual tolerance to Virtual Environment latency. In particular, the role of Virtual Environment scene content and resultant relative object motion on latency detection was examined by presenting observers in a head-tracked, stereoscopic head mounted display with environments having differing levels of complexity, ranging from simple geometrical objects to a radiosity-rendered scene representing a hypothetical real world setting. Such knowledge will help elucidate latency perception mechanisms and, in turn, guide VE designers in the development of latency countermeasures. In this study, a radiosity-rendered scene of two interconnected rooms was employed. Latency discrimination observed was compared with a previous study in which only simple geometrical objects, without radiosity rendering or a 'real-world' setting, were used. By investigating sensitivity to latency in VEs that could represent a real-world setting in direct comparison with previous research that utilized simple objects, it can be inferred that the Just Noticeable Difference (JND) for latency discrimination by trained observers averages ~15 ms or less, independent of scene complexity and real-world meaning [Ellis et al. 1999; Adelstein et al. 2003].

In summary, results from these studies suggest that virtual environment system designers should expect observers who are not burdened with any other performance tasks to generally be able to notice differences in latency as low as ~15 ms, regardless of the relative location of objects in the scene, the 'meaningfulness' of the scene context in relation to the real world, or possibly even the degree of photorealism in their rendering. These results will also serve as performance guidelines to aid in the design of predictive compensation algorithms.

While hydraulic motion is currently the mainstay of professional motion platforms aimed at representing accurate motion effects, there are a number of quite significant drawbacks. Hydraulic systems are by their nature economically inefficient: they use vast quantities of power and require very large and expensive hydraulic pumps that need to be housed away from the simulator itself. They also require frequent and specialised maintenance, which drives the running cost of the simulator up, and while they are not in themselves great environmental polluters, the waste oil does need to be disposed of in an environmentally safe fashion. The alternative is to replace the hydraulic system with a directly driven electrical system. Within the area of simulated motion, this is perhaps the most significant challenge. The problem with electric motion bases so far has proved to be twofold: weight and motion authenticity. Any simulator that uses systems and instrumentation taken from the real world, whether from an aircraft, a ship or a tank, will have a significant weight. When this is coupled with the superstructure that actually makes up the simulator, which has to be built to withstand quite significant load forces, the platform that needs to be manipulated can weigh several tons. Electric motion bases may use either linear motors, direct driven screw jacks or a 'Gas Spring' created by replacing the hydraulic oil with compressed gas to create the hydrostatic pressure required to move the platform [Denne 2003]. The most significant problem with all three of these approaches has so far been the weight that they can support, although this is rapidly becoming less of a problem as technology improves. The American motion base manufacturer Moog Incorporated has a system in development that it is claimed can manipulate a platform weighing up to 32 tons, more than enough to support even the heaviest simulator [Moog Incorporated 2003].
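The six-jack arrangement described above is commonly known as a Stewart (hexapod) platform, and its core geometry is simple: for a commanded platform pose, each jack's required length is the distance from its base anchor to the transformed platform anchor. The sketch below uses invented anchor coordinates and is not drawn from any particular motion-control system.

```python
import math

# Illustrative inverse kinematics for a six-jack (Stewart) platform:
# rotate and translate each platform anchor point, then measure the
# distance to the matching base anchor. All coordinates supplied by
# the caller are hypothetical examples.

def rotation(roll, pitch, yaw):
    """3x3 rotation matrix from roll, pitch, yaw in radians (Z-Y-X order)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def jack_lengths(base_pts, plat_pts, translation, roll, pitch, yaw):
    """Required length of each jack for the given platform pose."""
    R = rotation(roll, pitch, yaw)
    lengths = []
    for b, p in zip(base_pts, plat_pts):
        world = [sum(R[i][j] * p[j] for j in range(3)) + translation[i]
                 for i in range(3)]
        lengths.append(math.dist(world, b))
    return lengths
```

The motion control host solves exactly this kind of geometry every cycle, then commands each jack's valve (or motor) toward the computed length.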
The second problem facing the use of electric motion bases is that of motion fidelity. The nature of hydraulics provides a system that is ideally suited for the accurate creation of simulated motion; recreating this within an electric base is challenging. Linear motor and screw jack approaches may provide the most suitable overall results, while gas springs may be better for certain effects such as turbulence. It is unclear however whether a gas spring will be able to provide suitable motion for heavy events such as forced landings. Since gas is highly compressible, the weight of the simulator would need to be counteracted exactly, at exactly the right time during the heavy event. If it were not, then the weight of the simulator itself would compress the gas in the cylinder, causing a 'bounce' rather than a convincing hard stop. The move to electric motion bases is however very compelling due to socio-economic demands. A direct driven electric motion base is less expensive to run and doesn't require the same degree of specialised maintenance and care as its hydraulic counterpart, which will reduce the running cost of the simulator considerably. They are also far more environmentally friendly, with virtually no possibility of causing pollution, and have no waste products that require careful disposal.

5.2 G-loading

One very desirable feature that may be demanded of a motion base is that of simulating the G forces that a pilot may be exposed to. While not so significant for the majority of civilian simulation, within military fast jet simulation this would be highly desirable. Simulating the full range of G loading effects in a realistic manner would be a huge challenge for simulation due to the way in which it is caused. Acceleration or deceleration in a given direction is detected by the human motion system and interpreted as a change in speed. If this motion is across the vertical plane of the body then blood is forced into or out of the brain, leading to the possibility of grey out or blackout for the pilot - blackout normally occurring at around 8–9 G for a fit, experienced military pilot [Greene et al. 1992]. While a blackout will cause the pilot to lose consciousness, and therefore control of the aircraft, a grey out will at the very least cause an inability to concentrate and therefore operate the aircraft in a safe, efficient manner until the load is reduced. Within an actual military fast jet the pilot is equipped with a special 'G-Suit' that helps to counteract some of the effects of high G loads. These suits comprise compartments within the legs that inflate when high G loads are detected, restricting the flow of blood, thus helping to maintain an adequate supply to the brain. Under prolonged and increasing loads however the supply of oxygenated blood will still decrease, leading to an inadequate supply resulting in grey out, and then blackout. As the G load begins to increase, leading to the first stages of a grey out, peripheral vision begins to fail and vision starts to become 'fuzzy'. As this increases in intensity, vision may be totally lost until the pilot experiences a complete blackout, ultimately resulting in unconsciousness. Several approaches are already used within military fast jet & helicopter simulation that attempt to reproduce some of the effects of high G loading. The pilot may wear a G-suit, for example, that inflates when a high G maneuver is entered. This provides a physical cue informing the pilot of the calculated load factor that he and the aircraft are currently being subjected to. As this load continues or increases, the avionics and simulated visual scene may be dimmed and defocused, giving the impression of a grey out. This may continue until they have been dimmed to a point where they are no longer visible at all, giving the impression of a blackout. In this way it is possible to simulate certain effects associated with high G loading.

There are however other significant effects that cannot be simulated in such a fashion. At a load of 5G, for example, the pilot will feel as if he weighs 5 times more than he normally does, and will almost certainly be experiencing additional grey out effects such as a mild inability to focus on operational tasks, and may be finding it difficult to interpret information being relayed via the avionic displays due to the lack of oxygen. If these effects could also be simulated, then the overall training value for fast jet simulation would increase dramatically (though clearly rendering a pilot unconscious would be undesirable). This would be a significant challenge indeed, since the only way to simulate the full spectrum of high G loading effects is to reproduce the motion and velocities involved, and while certain approaches have been taken in the past, the Vertical Motion Simulator (VMS) at NASA Ames Research Centre in California (Figure 7) is the only one which is operational, and this is used for research rather than operational training [NASA Ames 2003]. Indeed the sheer cost, complexity and size of systems such as the VMS may make them economically unviable for mass training, and they are only likely to be developed should it become a requirement within the specification of future simulators.

Figure 7. The NASA VMS system.

6 Visual System

Perhaps the most rapidly advancing aspect of simulation is that of the visual representation of the environment as recreated by the visual system. Once again the visual system can be broken into two distinct and equally important parts, each one dependent upon the demands and abilities of the other:

• The image generator (IG)
• The display system

The IG generates the image from its database, dependent upon the aircraft position, attitude and the specified environmental conditions for the scenario, while the display system is responsible for reproducing this data in a visible form to the aircrew.
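The division of labour just described can be sketched as two stand-in functions; all names and data shapes below are invented for illustration and do not correspond to any real IG product.

```python
# Minimal sketch of the IG / display split described above. The image
# generator derives a frame purely from the aircraft state and the
# scenario's environmental conditions; the display system is the only
# component concerned with presenting that frame to the aircrew.

def generate_frame(database, position, attitude, conditions):
    """Stand-in for the IG: select scene content near the eyepoint and
    tag it with the viewing and environmental parameters."""
    visible = [obj for obj in database
               if abs(obj["x"] - position[0]) < conditions["visibility"]]
    return {"eyepoint": position, "attitude": attitude,
            "objects": visible, "conditions": conditions}

def present_frame(frame):
    """Stand-in for the display system: reduce the frame to something
    visible (here, just a summary string)."""
    return "%d objects at eyepoint %s" % (len(frame["objects"]),
                                          str(frame["eyepoint"]))
```

The point of the split is that either half can be upgraded independently: a new projector technology changes only `present_frame`, while a richer scene database changes only `generate_frame`.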
6.1 Image Generation as the CAE Tropos shown in Figure 7) each channel is rendered
by a series of commercially available GPU’s. The use of
The inclusion of image generation devices is relatively new within commercially available GPU’s allows other non simulation
the field of simulation, and has only become possible with the specific research and development to be taken advantage of,
advances in microcomputer technology. Early IG devises were effectively reducing the cost per channel and allowing
only capable of calculating the relative position of light points for advancements to be made at a greatly increased rate. With
displaying runways for example, which resulted in the limitation polygon counts now up to between 80,000 and 160,000 (peak) per
of only being able to run ‘Night time’ scenarios within the channel, it is arguable that the demand for greater polygon counts
simulator. Clearly computer technology has improved is reducing. While the progression of pixel and polygon capacity
exponentially since the inception of these early systems, and has will clearly continue in future generations of IG, the key benefits
been capable of depicting terrain, other air traffic, ground vehicles from the increase in power will come in the form of improved
and ground buildings for many years in the form of shaded processing. For example the inclusion of Phong shading will
polygons. Early versions of this form of IG technology were only allow for specular lighting on water surfaces and runway
capable of displaying a few hundred polygons per channel, and contaminants such as ice, something that has not been possible
could only employ ‘flat shading’ algorithms to give them until now due to limited pixel and vertex shader performance.
substance. With the advent of increased CPU speed, memory Anisotropic texture filtering and layered fog are already included
bandwidth and the development of texture memory however, this in the most modern systems, and these greatly improve the
flat shading was replaced with texture mapped surfaces that not realism of adverse weather conditions. By looking at research
only appear to have substance but also the appearance of realistic being conducted globally into the human visual and cognitive
surfaces, frequently created from photographs of the specific systems, it is challenging to create an image generation system of
object that requires simulated display. Modern IG’s are capable of high perceptual simulation fidelity by examining the relative
calculating and rendering many tens of thousands of polygons per importance of rendering aspects such as specular highlights and
channel in real time, and with the development of increased diffuse inter-reflection as well as simulating cognitive models of
texture handling subsystems within the dedicated Graphics spatial perception and tailoring the abilities of the IG to match.
Processing Units (GPU’s) can render some truly impressive This is an important research area for the future [Mania &
images as demonstrated in Figure 7. Until recently IG’s have been Robinson 2002; Mania et al. 2003; McNamara 2001; Mania &
custom designed and built by a small handful of companies such Chalmers 2001; Mania 2001].
as CAE, Evans & Sutherland, MacDonnell Douglas and Silicon
Graphics, each containing custom built boards housed in large A goal of Virtual Environment (VE) systems is to provide users
cabinets each about the size of a domestic fridge freezer. The with appropriate sensory stimulation so that they act and react in
displayed image typically has a horizontal field of view of 180 to similar ways in the virtual world as they would in the natural
220 degrees in a commercial flight simulator, and up to 360 degrees in other systems such as Air Traffic Control.

Figure 8. Image from a CAE Tropos IG (Courtesy of CAE).

These images are displayed by ‘tiling’ segments of typically 60 degrees together, each 60 degree segment being rendered by an individual IG channel, with data shared across channels by a high bandwidth backplane. With the development of high performance PCs and high speed networks, one area of research and development in this field is looking into basing the entire IG around a network of high speed PCs, with each individual PC rendering an individual channel of the visual scene – the so called ‘PC IG’ – with the clear goal of a vast reduction in cost per channel, since commercial off-the-shelf (COTS) products may be used. Another approach takes a similar line but maintains the very high speed available through a shared backplane, presenting a single unit with one point of interface and control.
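As a rough sketch of this channel arithmetic, the number of IG channels needed to tile a given field of view can be computed as below. The 60 degree segment size matches the text; the optional blend overlap parameter is an illustrative assumption, not a figure from any particular IG.

```python
import math

def channels_required(fov_deg, channel_deg=60.0, overlap_deg=0.0):
    """Number of IG channels needed to tile a horizontal field of view
    from segments of channel_deg, optionally sharing overlap_deg
    between neighbours for edge blending (illustrative model only)."""
    if fov_deg <= channel_deg:
        return 1
    # Each additional channel contributes its width minus the overlap.
    unique = channel_deg - overlap_deg
    return 1 + math.ceil((fov_deg - channel_deg) / unique)

print(channels_required(180))               # 3 segments of 60 degrees
print(channels_required(220))               # 4 segments for an offset 220 degree display
print(channels_required(360, overlap_deg=5))
```

With no overlap this reproduces the simple three-channel, 180 degree arrangement described in the text; adding a blend overlap increases the channel count accordingly.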
The research community is challenged to investigate the factors that make virtual reality technologies effective (simulation of spaces and humans). Realising the goals of virtual reality systems and harnessing them to successful applications could be accomplished by employing robust fidelity metrics based on human-centred experimentation.

What makes a simulation ‘feel real’ to a human observer? Can we use what is known about the human visual system and human cognition to help us produce more realistic synthetic images? Can our perception of the real world (space and people) around us ‘survive’ the transition to a graphics environment or to a virtual human? How can we use the attributes of the human visual system and human cognition to design computer graphics simulation systems in a way that a sense of ‘being there’ is communicated? Are there perceptual commonalities among applications, or are practical applications so independent that we cannot generalise findings from one application to another? These are significant questions for the research community to tackle.

It is increasingly important to provide quantitative data on the fidelity of rendered images. This can be done either by developing computational metrics which aim to predict the degree of fidelity of an image, or by carrying out psychophysical investigations into the degree of similarity between the original and rendered images. Psychophysics comprises a collection of methods used to conduct non-invasive experiments on humans, the purpose of which is to study mappings between events in an environment and levels of sensory responses to those events. The term visual fidelity refers to the degree to which visual features in the Virtual Environment conform to visual features in the real environment. Interface or interaction fidelity refers to the degree to which the simulator technology (visual and motor) is perceived by a trainee to duplicate the operational equipment and the actual task situation. It is not computationally feasible to immerse a person into an interactive artificial environment which exactly mimics the panoply and complexity of sensory experiences associated with a ‘real’ scene. For a start, it is technologically challenging to control all of the sensory modalities to render exactly the same sensory array as that produced by real world interaction. When visual (or interaction) fidelity is increased, system responsiveness decreases, resulting in reduced frame rate and added visual/tracking latency. It is argued that training in a VE with maximum fidelity would result in positive transfer equivalent to real-world training, since the two environments would be impossible to differentiate. Robust metrics are essential in order to assess the fidelity of VE implementations comprising computer graphics imagery, display technologies and 3D interaction metaphors across a range of application fields. A small study investigating subjective assessments of fidelity by aircrew using flight simulators is reported in this paper.
7 Display System

A level C or D flight simulator places some specific demands upon the display device, with the result that only cathode ray tube (CRT) devices are suitable. While other technologies such as LCD, DLP and plasma offer enhanced visual quality, none of these devices is capable of one vital requirement of a level C or D display – calligraphic light points. A calligraphic light point is a high intensity point created by focusing an electron beam at an electron sensitive raster. By varying the time the beam is focused, it is possible to vary the intensity of the light point. Non-calligraphic displays (including standard CRT) function by dividing the horizontal and vertical screen elements (the pixels) by the required refresh rate to determine the amount of time that each pixel may remain illuminated each frame. For example, at a resolution of 1024 × 768 and a refresh rate of 72 Hz (quite a common combination) we have

1024 × 768 = 786,432 pixels
1 second = 1000 ms (milliseconds)
1000 ms ÷ 72 ≈ 13.9 ms per frame
13.9 ms = 13,900 µs (microseconds)
13,900 µs ÷ 786,432 pixels ≈ 0.018 µs per pixel

Within a standard raster scan device, then, at a resolution of 1024 × 768 and a refresh rate of 72 Hz, each pixel can only be focused on for under 18 nanoseconds. There is no flexibility within this, so individual pixel intensity cannot be varied. The colour may be varied by controlling the amount of red, green and blue, but the intensity may not. A level C or D display explicitly requires high intensity light points that accurately reproduce the appearance of real world lights (such as runway or navigation lights). To achieve this, the background image (runway, buildings, mountains etc.) is displayed in a conventional raster scan, typically at 60 Hz for day and 40 Hz for night. This raster scanning is interlaced, which means that all of the odd numbered lines are drawn, followed by the even. This 60 Hz refresh rate is, however, simply the time it takes to draw the raster – 60 times a second, or once every 16.7 ms. In calligraphic displays the actual time it takes to draw each frame may be as little as 2 ms, effectively leaving around 14 ms per frame ‘free’. This free time is divided by two due to the interlaced nature of the display, which gives 7 ms at the end of each raster scan. During this 7 ms period, before starting the next raster scan, the electron beam can be focused at any individual pixel or group of pixels to form a very bright light point, the intensity of which is governed by the time the beam is focused before moving on to the next light point or starting the next raster scan.
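The figures above can be reproduced with a short calculation. The resolution and refresh rate match the worked example in the text; the 2 ms raster draw time is the text's illustrative value for a calligraphic display.

```python
def pixel_dwell_ns(width, height, refresh_hz):
    """Time a conventional raster scan can spend on each pixel, in ns."""
    frame_s = 1.0 / refresh_hz
    return frame_s / (width * height) * 1e9

def lightpoint_budget_ms(refresh_hz=60, raster_draw_ms=2.0, interlaced=True):
    """Idle time left for calligraphic light points per raster scan (ms)."""
    frame_ms = 1000.0 / refresh_hz          # e.g. ~16.7 ms at 60 Hz
    free_ms = frame_ms - raster_draw_ms     # time left after drawing the raster
    return free_ms / 2 if interlaced else free_ms

print(round(pixel_dwell_ns(1024, 768, 72), 1))   # ~17.7 ns per pixel
print(round(lightpoint_budget_ms(), 1))          # ~7.3 ms per scan
```

The text rounds the interlaced budget down to 7 ms; the point is simply that milliseconds, not nanoseconds, are available for each group of light points.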
Calligraphic CRT displays are large, cumbersome devices that require HT voltages and complex deflection, the net result of which is a display that has considerable weight. This is undesirable since they need to be mounted upon the motion base, and even at a horizontal field of view of 180 degrees, three such displays are required. In addition to this, the nature of the high intensity light points (coupled with a level D requirement to display the raster at 6 foot-lamberts) leads to quite rapid CRT tube degradation, resulting in time consuming and expensive replacement. Of major interest to the development and evolution of simulation devices is the development of a suitable replacement for the calligraphic CRT, in order to reduce weight on the motion platform (and therefore aid the deployment of electric bases), reduce the cost of tube replacement and increase the overall fidelity of the visual system. While it is possible that LCD or plasma displays may be developed to function in a similar way in order to produce calligraphic points, neither technology is capable of generating the required intensity. Three-chip digital light processing (DLP) projectors, such as those used in high fidelity cinema, may be a candidate; however these devices are also extremely complex and expensive. The most likely replacement will come from the current and ongoing development of laser projectors. These devices create the image in the normal raster scan fashion, but replace the individual RGB CRTs with individual red, green and blue lasers. Since they are capable of scan times in excess of CRT, refresh rate could be increased, leading to enhanced visual scene clarity, an increase in resolution when driven by more capable next generation IGs, and increased light points per frame. Given the possibility of increased resolution and rapid scan times offered by laser projectors, it should also be possible to scan the entire 180 – 220 degree field of view (FoV) with a single projector, removing the need for multiple projector heads. This would remove the current need for vastly time consuming and complex edge blending and colour matching across the boundaries between CRT projected images within the FoV. Moreover, since these devices do not rely on X & Y electron deflection, much of the complexity of CRT will be replaced with a single laser generation source, which may lead to increased reliability. This laser source may be placed up to 30 m away from the projection head, with the result that it can be mounted off-board, further reducing the weight on the motion platform. Research and development of next generation IG and projector systems is rapidly being pursued, representing a fundamental step within the evolution of next generation, high fidelity simulators.

8 Environmental Simulation

Environmental simulation refers to the simulation of all aspects external to the aircraft, from weather conditions and terrain to other air traffic and air traffic control. With advances in our understanding of the weather and the resulting development of more sophisticated weather radar and detection devices, it is vital that the simulation keeps pace. Simulating sensor data and driving avionic displays accordingly has already been discussed; however, of equal importance is how weather conditions are visually represented and displayed. With increasingly clear, accurate weather radar being installed within aircraft, for example, various weather effects can be viewed with greater efficiency. This must be represented within the simulated external view. If the weather radar shows a cloud formation of a specific shape and size, then this must be accurately represented within the visual system. Other weather effects play a subtler role and must also be accurately simulated for enhanced training value. A temperature inversion, for example, under certain conditions indicates an extremely hazardous condition known as wind shear [Thom 2002]. Modern aircraft, and therefore their simulated equivalents, carry sophisticated computers dedicated to the detection and warning of wind shear; however it is highly desirable to simulate the subtle visual conditions associated with this phenomenon to provide the most believable simulation possible. This is only now becoming possible with the enhanced capabilities of modern IG / GPU technology and is shown in Figure 9.

Figure 9. Cloud layer indicating a possible inversion (courtesy of CAE).

The ability to render and display such phenomena is of course only part of the problem. Such accurate effects take a considerable amount of time to model, which again leads to increased cost and less than desirable flexibility. Modeling tools must be developed that enable these conditions to be created efficiently, to reduce the workload of the modeler and enable more rapid and flexible model development.

The modeling of terrain can also be accomplished in a highly accurate way by developing tools that allow satellite data to be used to automatically generate terrain models, based around actual geo-specific information. This will lead to the most accurate possible depiction of airfield and route specific terrain, which will again enhance the training value of the simulator.
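The core of such a terrain tool, turning a regular grid of elevation samples into a render-ready triangle mesh, can be sketched as follows. The tiny grid and the 30 m post spacing are invented for illustration; real geo-specific pipelines work from actual DEM data.

```python
def grid_to_triangles(heights, spacing=30.0):
    """heights: 2D list of elevation samples on a regular grid (row-major).
    Returns (vertices, triangles): each grid cell is split into two
    triangles whose entries index into the vertex list."""
    rows, cols = len(heights), len(heights[0])
    vertices = [(x * spacing, y * spacing, heights[y][x])
                for y in range(rows) for x in range(cols)]
    triangles = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x                     # top-left corner of the cell
            triangles.append((i, i + 1, i + cols))
            triangles.append((i + 1, i + cols + 1, i + cols))
    return vertices, triangles

verts, tris = grid_to_triangles([[0, 1], [2, 3]])
print(len(verts), len(tris))  # 4 vertices, 2 triangles
```

A production tool would add level-of-detail selection and texture projection from the satellite imagery, but the grid-to-mesh step above is the structural heart of automatic terrain generation.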
Perhaps the most important aspect of environmental simulation relates to accurate representation of other air traffic, and a complete air traffic control system. While the Traffic Collision Avoidance System (TCAS) has been simulated for a reasonable amount of time, these systems only provide fairly limited and specific scenarios, which require the simulator pilot to take action as directed by the TCAS computer. These scenarios are selected by the flight instructor for relevant procedural practice; however they typically fail to take into account air traffic that is not in direct conflict. In reality airspace is extremely crowded, especially within an airfield control boundary, and is divided into sectors, each being controlled by a specific air traffic controller on an individual frequency while an aircraft is en route, an approach controller when approaching a terminal airfield, a ground controller while taxiing for parking or takeoff, and a departure controller when leaving an airfield. TCAS alone does not account for this vast quantity of air traffic or airspace complexity, and simply warns the pilot of any conflict and may offer avoiding action. Should Air Traffic Control (ATC) be required, it is normally the flight instructor that assumes the various roles at the appropriate times. This is less than ideal since

• It is the same instructor; all sector controllers sound the same.
• The instructor workload is increased, which may detract from his observation of the trainee.
• Sector frequency changes may not be accurate.
• There is no accurate display of other air traffic, either visually or on any installed radar.

One possible solution to this is to interconnect ATC and full flight simulators and run combined scenarios; indeed this approach has been used by Deutsche Flugsicherung (DFS), the German aviation authority. In their system four Boeing 747 simulators are connected via a serial data stream to a Raytheon ATC simulator. While this system may overcome some of the shortcomings of the purely instructor-led approach, it too has many disadvantages, such as

• One type of training has to take priority, either ATC or flight.
• It requires very meticulous scenario and exercise planning.
• It is vastly expensive in terms of both set up and ongoing running costs.
• It may be of limited use since it is not particularly flexible (if one pilot makes an error, for example, the whole exercise may need to be restarted for all trainees).

With ever increasing computer power leading to the ability to run applications with greater complexity, however, one proposed solution is to digitally recreate the various controllers’ voices and command set, and use voice recognition (Vrec) to interpret pilot requests. This could lead to the virtual recreation of real airspaces and place more accurate demands on the trainee pilot, as well as allowing for increased familiarity with specific routes. Sector handover or approach requests, for example, would have to be handled correctly, and more demanding situations could be set up and represented on all relevant devices (visual display, radar etc.). At the same time instructor workload would be reduced, allowing for greater monitoring of pilot activity. A real airspace is a highly dynamic environment; it is highly desirable to recreate this within the simulated aircraft. Research and development of such integrated ATC environments (such as CAE’s GATES system) is ongoing and represents a significant challenge. With increased IG capacity and visual clarity, coupled with enhanced avionics and computer power, simulating the ATC environment itself, while challenging, should be possible within a reasonably short time frame. The integration of Vrec, however, will require significant work to be done on existing systems. Voice recognition systems have been used for the past few years in such things as word processing, with some moderate degree of success. However these systems are nowhere near as demanding as those required for flight / ATC integration. For example, all voice types (male, female, high pitched and low pitched) must be equally acceptable to the system without significant time spent ‘coaching’ the system on individual voices. Previous attempts to integrate Vrec within simulation have used industry specific engines (such as SAPI 4 & 5) that are derived from more generic products such as word processing, and require high levels of coaching in order to build a dictionary of individual words and phrases based upon the waveform of each user’s individual voice [Bennett 2003]. In order to reduce the coaching time required, one very recent test bed for a USAF system used a ‘lookup table’ of possible responses, whereby if the exact phrase couldn’t be matched, the best ‘guess’ based on the input waveform was used. This approach worked with some degree of success; however the rate of ‘incorrect guesses’ was considered too high, and the system still required substantial amounts of coaching, averaging approximately 20 – 25 minutes per student [Tomlinson 2003].
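The ‘lookup table’ best-guess step can be illustrated with a toy matcher over transcribed text. The phrase table and similarity cutoff here are invented for illustration; a real system such as the USAF test bed matches on waveform features, not on text strings.

```python
import difflib

# Hypothetical table of expected pilot phraseology for one exercise.
PHRASE_TABLE = [
    "request descent flight level one zero zero",
    "request frequency change",
    "ready for departure",
    "going around",
]

def best_guess(transcription, cutoff=0.6):
    """Return the closest known phrase, or None when nothing is close
    enough - the equivalent of a guess being rejected as incorrect."""
    matches = difflib.get_close_matches(transcription.lower(),
                                        PHRASE_TABLE, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(best_guess("request frequncy change"))   # 'request frequency change'
print(best_guess("five by five"))              # None - no close match
```

Raising the cutoff reduces incorrect guesses at the price of more rejections, which mirrors the trade-off reported for the real system.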
An additional and very significant difference between Vrec in an office or ATC environment and a simulated flight deck is background noise. In an office or ATC simulator, background noise is minimal. Within an aircraft the noise levels can be significant, especially within a helicopter, where background noise can be extreme. This background noise is of course also present within the full flight simulator, and can reach levels of approximately 55 – 65 dBA (up to 106 dBA in a Blackhawk helicopter, for example). Existing Vrec technology is incapable of 100% recognition rates even with a background noise level of less than 20 dBA, as may be found in a small office. Ways must therefore be found not only to increase the fidelity of recognition, but also to compensate for background noise. Within a simulator the noise is of course artificially introduced into the flight deck; the waveform and volume are therefore known in advance. One possible solution therefore may be to send the waveform of this sound to the Vrec system along a separate channel to the incoming voice communication, and then employ a filter to remove it from the actual communication channel prior to interpreting the voice command. The integration of a fully automated ATC environment is a very significant challenge, and a vital direction for future development [Bennett 2003].
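Because the simulator generates the cab noise itself, the filter proposed above can, in its simplest form, be a subtraction of the known waveform from the microphone channel. A real implementation would need sample-accurate alignment and spectral processing; this time-domain sketch is purely illustrative.

```python
def remove_known_noise(mic_samples, noise_samples, gain=1.0):
    """Subtract the known, artificially generated cab noise from the
    incoming communication channel, sample by sample."""
    return [m - gain * n for m, n in zip(mic_samples, noise_samples)]

voice = [0.0, 0.5, -0.5, 0.25]               # what the pilot actually said
noise = [0.1, -0.2, 0.3, 0.05]               # waveform fed to the cab speakers
mic = [v + n for v, n in zip(voice, noise)]  # what the microphone hears

# Recovers the voice samples (to within floating point rounding).
print(remove_known_noise(mic, noise))
```

The `gain` parameter stands in for the acoustic path between speakers and microphone, which in practice would have to be measured and would vary with frequency.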
9 Qualitative Assessments of Fidelity

9.1 Simulation of physics

Computer graphics algorithms have long dealt with the simulation of physics: simulation of the geometry of a real-world space, simulation of light propagation in a real environment, and simulation of motor actions with appropriate tracking. Perception principles have subsequently been incorporated into rendering algorithms in order to save rendering computation, mainly following the generic idea of ‘do not render what we cannot see’. However, with VE simulator technologies trying to simulate real-world task situations, the research community is challenged to produce a much more complex system. We do not necessarily require accurate simulation of physics to induce reality. Much less detail is often adequate.

Recent research results have been produced where:

• Fidelity metrics for VE simulations based on task performance in real world and VE task situations have been complemented by investigations of cognitive processes or awareness states while completing tasks [Mania et al. 2003].
• Simulations of how humans perceive spaces from a cognitive point of view, rather than just simulation of physics [Mania & Robinson 2002].

We can therefore pose the following research questions: Could we interrogate cognitive systems which are activated by being in a scene of a specific context, to see if the same systems respond similarly to a VE version of the scene? And how could we match the capabilities of the VE system (related to visual and interaction fidelity) to the requirements of the human perceptual and motor systems?

A high fidelity system, which is not necessarily produced by slavish simulation of physics, could be produced by non-linear informative distortions of reality, since it is often the information uptake that matters. Due to limitations of displays, tracking and computer graphics algorithms, simulation of physics will often result in systems that do not simulate behaviour, cognition or perception processes as operating in the real world. Therefore, the challenge is to induce reality with ‘magic’, meaning inducing a sense of ‘reality’ by building systems which include non-linear distortions of the physics, taking into account not only the human cognitive and perceptual systems but how these will be transferred to the components of the VE system concerned, e.g. displays, tracking, computer graphics algorithms. How we scientifically define a system’s ability to ‘feel real’ when it is far from simulating physics due to, for instance, limitations of computational power, is a challenge for the research community.

9.2 Experimental study

A number of flight crew who are familiar with operational aircraft as well as the simulated equivalent were asked to participate in an investigation about fidelity perception in a simulator, as expert users. The flight crew, who gave qualitative survey input immediately after exposure to a commercial flight simulator, represented a number of international airlines, and were all qualified, operational Airbus A320 or Boeing 757 and 767 pilots. This survey was very brief and in no way exhaustive, concentrating on certain key areas of simulation such as

• Flight deck fidelity
• Accuracy of motion
• Visual scene image quality

Considering flight deck fidelity on both the 757 / 767 and A320 simulators that the crew are exposed to, all crew without exception stated that the flight deck is a 100% accurate representation of the aircraft they fly operationally, right down to the Captain & F/O seating (and even the floor carpet in the case of the A320). This is perhaps not surprising, since within the full flight simulator the flight deck is deliberately recreated from the original in terms of size and appearance, and all avionics systems present are actual aircraft systems.

When considering motion, the majority of crew were happy with most of the motion effects such as clear air turbulence, while take off and landing were considered adequate but not truly representative of a real aircraft. One instructor stated that forced landings ‘just didn’t feel right’ but still had the desired effects. Take off and landing motion discrepancies between the real and simulated aircraft are due almost entirely to the nature of the motion base. G load cannot be simulated (even the relatively minor G force associated with commercial flight), and the sensation of speed relies upon an illusion created by deliberately tilting the motion base back whilst keeping the visual appearing level – causing the sensation of acceleration. With the limited stroke of about 60 inches that these and the majority of motion bases have, it is difficult to envisage a way of improving this, and it is still deemed acceptable by the crew.
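The tilt illusion described above is commonly referred to as tilt coordination: a sustained forward acceleration is represented by the component of gravity along the pilot's chest, g·sin(θ). A minimal sketch of the angle involved, with an assumed tilt limit standing in for the base's real constraints, is:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tilt_angle_deg(accel_ms2, max_tilt_deg=20.0):
    """Pitch angle whose gravity component mimics a sustained forward
    acceleration, clipped to an assumed base tilt limit (illustrative)."""
    ratio = min(accel_ms2 / G, math.sin(math.radians(max_tilt_deg)))
    return math.degrees(math.asin(ratio))

# A gentle airliner take-off roll of ~2.5 m/s^2 needs roughly 15 degrees.
print(round(tilt_angle_deg(2.5), 1))
```

The tilt must also be applied slowly enough to stay below the vestibular rotation threshold, which is why onset cues still depend on the limited linear stroke of the base.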
Forced landings ‘don’t feel right’ is a very subjective statement, since (it is hoped) most aircrew will never experience the situation for real, which was indeed the case with the instructor who made it. All forms of motion are checked and ‘tweaked’ frequently to maintain parameters agreed with the various regulatory bodies, but this comment highlights concerns over the suitability of ‘gas spring’ systems as a hydraulic replacement. The instructor’s concern was that the motion of the forced landing (a single undercarriage collapse at touch down in this case) was not violent enough. If the hydraulic base was unable to reproduce the violence, then it is unlikely that a gas spring system will be. Linear motor and screw jack systems may well be able to replicate hydraulic systems in terms of fidelity, but there will always be structural limitations as well as safety concerns that will limit what can be done with simulated motion.

With regard to the visual systems, it should be pointed out that the two simulators are from two different generations, with the A320 being certified at level D (built in 2001) and the 757 / 767 certified at level C (built in 1991). The Airbus employs a CAE Maxvue Plus IG coupled with a 180 degree display, while the 757 / 767 uses a McDonnell Douglas Vital 7 IG coupled to an offset 220 degree display. Within both simulators all crew stated that the fidelity of the systems was such that it was adequate for the type of training that they perform, though not surprisingly the 757 / 767 was criticised for poor texture, weather and ambient lighting effects. The Maxvue system has since been superseded by the Tropos system, which has greatly enhanced polygon limits and texture handling abilities that will resolve many of the limitations of the existing system within the A320, such as limited airfield buildings and inaccurate water effects. The main complaint for both systems was ‘visible blend zones’ causing the appearance of vertical ‘pillars’ within the visual scene. This is caused by the nature of the display and highlights the limitation of current CRT technology. The blend zone is the region within the image where one 60 degree segment ends and is matched with the next. It is notoriously difficult to get these blend zones exact, and even more difficult to make them stay matched, since the different projector settings will drift over time by varying amounts. Various systems exist to remove some of this effect, using both opto-mechanical and digital means to automatically balance and adjust the colour within the blend zone, but these generally only succeed in reducing the size of the visible boundary. The only real solution would be to generate the image using a single projector; this is currently being researched.
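Digital blend-zone correction amounts to cross-fading the two overlapping channels so that their contributions always sum to unity across the shared region. The linear ramp below is a simplified sketch; real systems use shaped blend curves and per-colour adjustment.

```python
def blend_weights(position, zone_start, zone_end):
    """Return (left_channel, right_channel) intensity weights at a
    horizontal position across a blend zone between two projectors."""
    if position <= zone_start:
        return (1.0, 0.0)           # left channel only
    if position >= zone_end:
        return (0.0, 1.0)           # right channel only
    t = (position - zone_start) / (zone_end - zone_start)
    return (1.0 - t, t)             # cross-fade: weights sum to one

# Mid-way across a hypothetical blend zone spanning 55-60 degrees:
print(blend_weights(57.5, 55.0, 60.0))  # (0.5, 0.5)
```

The difficulty described in the text is that projector drift changes the effective intensities over time, so fixed weights like these gradually stop summing to a uniform result, and the boundary becomes visible again.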
Generally, flight simulation succeeds in its goal of producing highly convincing and accurate systems for training and research. However, in the highly dynamic world of aviation it is vital that simulation keeps up to date with advancements in flight technology. Additionally, it is vital that new ‘simulation specific’ technology that is not employed within operational aircraft, such as IGs, visual display devices, motion bases and Vrec systems, is developed to provide perceptual fidelity enhancement for future generations of flight simulators. Perceptual fidelity is not necessarily the same as physical simulation. Identifying ways to ‘induce’ reality rather than simulating the physics of reality is the greatest but also most fascinating research challenge of all.
Acknowledgments

Special thanks go to all at CAE Systems for all of the help and support. Special thanks also go to Alteon (formerly Flight Safety Boeing) for allowing us to approach flight crew with questions. Thank you to all of the flight crew who took part in the survey.
References

ADELSTEIN, B.D., LEE, T.G., ELLIS, S.R. 2003. Head tracking latency in Virtual Environments: Psychophysics and a model. In Proc. of the Human Factors and Ergonomics Society Meeting (HFES).

BENNETT, T. 2003. The Downside of New Advancements in Simulation Fidelity and Avionic Technologies. In Proc. of Royal Aeronautical Society Simulation of the Environment Conference.

DENNE, P. Virtual Motion and Electromagnetic Rams. From the Virtual Reality forum at http://www.q3000.com/pdf/sim7.pdf.

ELLIS, S.R., YOUNG, M.J., ADELSTEIN, B.D. & EHRLICH, S.M. 1999. Discrimination of changes in latency during head movement. Proc. Computer Human Interfaces, 1129-1133.

FARRINGTON, P. et al. 1999. Strategic directions in simulation research. Proceedings of the 1999 Winter Simulation Conference.

GREEN, R., MUIR, H., JAMES, D., GRADWELL, D., GREEN, R. 1992. Human Factors for Pilots, 14-16, Ashgate. ISBN 1 85628 177 9.

GUO, L., CARDULLO, F., TELBAN, R., HOUCK, J., KELLY, L. 2003. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated. Research Papers of the Link Foundation Fellows, AIAA-2003-5676.

MANIA, K. 2001. Connections between Lighting Impressions and Presence in Real and Virtual Environments. Afrigraph 2004, South Africa, 119-123, ACM Press.

MANIA, K., ADELSTEIN, B., ELLIS, S.R., HILL, M. 2004. Perceptual Sensitivity to Head Tracking Latency in Virtual Environments with Varying Degrees of Scene Complexity. ACM Symposium on Applied Perception in Graphics and Visualization, ACM Press.

MANIA, K. & CHALMERS, A. 2001. The Effects of Levels of Immersion on Presence and Memory in Virtual Environments: A Reality Centred Approach. Cyberpsychology & Behavior Journal, 4(2), 247-264.

MANIA, K., ROBINSON, A. 2002. Fidelity based on the Schema Memory Theory: An Experimental Study. 5th Annual International Workshop Presence, Portugal (in co-operation with ACM SIGCHI), 296-304.

MANIA, K., TROSCIANKO, T., HAWKES, R., CHALMERS, A. 2003. Fidelity Metrics for Virtual Environment Simulations based on Human Judgments of Spatial Memory Awareness States. Presence, Teleoperators and Virtual Environments, 12(3), MIT Press, 296-310.

McNAMARA, A. 2001. Visual Perception in Realistic Image Synthesis. Computer Graphics Forum, 20(4).

MEEHAN, M., RAZZAQUE, S., WHITTON, M., BROOKS, F. 2003. Effect of Latency on Presence in Stressful Virtual Environments. In Proc. of IEEE Virtual Reality ’03, 141-148.

Moog Incorporated, http://www.moog.com/

NASA Ames Research Center, http://www.simlabs.arc.nasa.gov/vms/vms.html

REISMAN, R., ELLIS, S.R. 2003. Augmented Reality for Air Traffic Control Towers. Technical sketch, ACM Siggraph 2003.

ROLFE, J.M. & STAPLES, K.J. 1999. Flight Simulation. Cambridge University Press.

SHERIDAN, T.B. & FERRELL, W.R. 1963. Remote manipulative control with transmission delay. IEEE Transactions on Human Factors in Electronics, 4(1), 25-29.

SIVAN, R. et al. 1982. An optimal control approach to the design of moving flight simulators. IEEE Transactions on Systems, Man and Cybernetics, 12(6), 818-827.

THOM, T. 2002. The Air Pilot's Manual Volume 2: Aviation Law and Meteorology. ISBN 1843360667.

TOMLINSON, D. 2003. Voice Recognition in an Air Traffic Control Simulation Environment. In Proc. of Royal Aeronautical Society Simulation of the Environment Conference.
