Remote Sensing From Air and Space 2nd Edition
Published by
SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: + 1 360.676.3290
Fax: + 1 360.647.1445
Email: books@spie.org
Web: http://spie.org
The content of this book reflects the work and thought of the author. Every effort has
been made to publish reliable and accurate information herein, but the publisher is not
responsible for the validity of the information or for any outcomes resulting from
reliance thereon.
in imaging from space that are not ready to be discussed here. These larger
fleets of satellites and newer focal plane technology imply more persistent
imaging. Video from space is a consequence of these new hardware designs,
with promising but uncertain utility. Also signaled by the success of Skybox
Imaging: remote sensing appears to be emerging as the third field, following
communications and navigation, to become economically viable in space.
This text is organized according to a fairly typical progression—optical
systems in the visible realm, followed by infrared and radar systems. New to
this textbook is a full chapter on LiDAR. The necessary physics is developed
for each domain, followed by a look at a few operational systems that are
appropriate. Somewhat unusual for a text of this sort is a chapter on how
orbital mechanics influences remote sensing, but ongoing experience shows
that this topic is essential.
I have added a radiometry component to the infrared (IR), radar (SAR),
and LiDAR sections. The IR section clearly needed this to address detection
issues and make temperature measurements more clearly founded. The
imaging radar material clearly needed the radar range equation, just as the
LiDAR chapter needed its corresponding range equation.
Finally: The first edition was pretty much a solo effort on my part. The
second edition has benefitted from the support of my technical team—my
thanks to Angela Kim, Jeremy Metcalf, Chad Miller, and Scott Runyon for
their contributions. Thanks to Donna Aikins and Jean Ferreira for help with
the many copyright issues. The reviewers did a great job and identified a
number of annoying elements in my writing style that clearly needed to be
adjusted. Thanks to the editor, Scott McNeill, for his persistence and
diligence.
R. C. Olsen
Naval Postgraduate School, Monterey, CA
June 2016
Remote sensing is a field designed to enable people to look beyond the range
of human vision. Whether it is over the horizon, beyond our limited range, or
in a spectral range outside our perception, we are in search of information.1
The focus in this text will be on imaging systems of interest for strategic,
tactical, and military applications, as well as information of interest to those
domains.
To begin, consider one of the first airborne remote-sensing images.
Figure 1.1 shows a photograph by Gaspard-Félix Tournachon2 (Tournachon
was also known by his pseudonym, Nadar). He took this aerial photo of Paris
in 1868 from the Hippodrome Balloon, tethered 1700 feet above the city.
Contrast this image with the photo taken by astronauts on Apollo 17, roughly
one hundred years later (Fig. 1.2).
Tournachon’s picture is a fairly classic remote-sensing image—a
representation from a specific place and time of an area of interest. What
sorts of things can be learned from such an image? Where, for instance, are the
streets? What buildings are there? What are the purposes of those buildings?
Which buildings are still there today? These are the elements of information
that people want to extract from such imagery.
The following material establishes a model for extracting information
from remote-sensing data. The examples used here are also meant to illustrate
the range of information that can be extracted from remote-sensing imagery,
as well as some of the consequences of wavelength and resolution choices
made with such systems.
1. The term “remote sensing” emerged as the imaging technology moved beyond film-based
aerial photography. The initial impetus for the term is attributed to Evelyn Pruitt and Walter
Bailey (ca 1960).
2. Tournachon was a notable portrait photographer from 1854 to 1860. He made the first
photographs with artificial light in 1861, descending into the Parisian sewers and catacombs
with magnesium flares. He apparently was also an early experimenter in ballooning, even
working with Jules Verne. http://www.getty.edu/art/collections/bio/a1622-1.html.
Figure 1.1 Gaspard-Félix Tournachon took his first aerial photographs in 1858, but those
earlier images did not survive.3 He applied for a patent for aerial surveying and photography
in 1858. Curiously, there is no surviving evidence of aerial photography during the
American Civil War, although balloons were used for reconnaissance by both sides. Image courtesy of the Brown
University Library Center for Digital Scholarship.
Figure 1.2 An image of earth (“The Blue Marble”), taken from near-geosynchronous orbit
by Apollo 17 on December 7, 1972.
Figure 1.3 Image of the Dolon air base in Chagan, Kazakhstan (50° 32′ 30″ N, 079° 11′
30″ E) taken by the Gambit (KH-7) system during Mission 4022 on October 4, 1965. The
inset is a close-up of the Tupolev Tu-95 (Bear) bombers along the 4-km runway. North is up.
The planes are 46 m in length, with a wingspan of 50 m. The spatial resolution in the
scanned image is 0.735 m per pixel. Image reference: DZB00402200056H012001; the film
was scanned at a 7-μm pitch.4
4. http://en.wikipedia.org/wiki/Tupolev_Tu-95.
Figure 1.4 This Gambit-1 image shows the Sary Shagan Hen House radar (centered at
46° 36′ 41″ N, 74° 31′ 22″ E). The image was taken on May 28, 1967. Similar images with a
1-m resolution date to 1964. These large Soviet radars were designed to watch for ballistic
missiles and satellites. The 25-MW system was designed to monitor the south and west with
two pairs of antennas, one transmitting and one receiving (bistatic). The image chip for two of
the radar systems is overlaid on the full frame image for context. The original film was
9 inches wide (the vertical direction in this frame).
5. CIA Center for the Study of Intelligence, Studies in Intelligence series, Vol. 11, No. 2,
pp. 59–66, Spring 1967; “Moon Bounce Elint,” Frank Eliot. https://www.cia.gov/library/
center-for-the-study-of-intelligence/kent-csi/vol11no2/html/v11i2a05p_0001.htm.
Figure 1.5 Worldview-2 image of STS-134 on the pad. North is up. This illustration uses
the near-infrared, red, and green bands in a false-color representation similar to that
obtained from infrared-color film in previous years. The vegetation appears bright red.
Somewhat coincidentally, the red external tank on the shuttle maintains a fairly orange color.
The color image has been “pan-sharpened” to the 0.6-m resolution of the panchromatic
sensor that collected these data. Imagery reprinted courtesy of DigitalGlobe.
Figure 1.6 On June 3, 2002, SPOT-5 took this picture of the ERS-2 satellite at about 23:00
UTC over the Southern Hemisphere. ERS-2, 42 kilometers below, overtakes SPOT-5 from
north-east to south-west at a relative velocity of 81 m/s. Image reprinted with permission of
the Centre National d’Études Spatiales (CNES).
arrays, radar antenna, and telemetry antenna are visible. Previously, SPOT-4
had imaged ERS-1 at a lower resolution; TerraSAR-X has shown a similar
capability in radar imaging, as illustrated below in Section 1.2.5.
Figure 1.7 Image of Kamchatka Submarine Base, Russia, taken by Earth Resources
Observation Satellite (EROS) on December 25, 2001 with a 1.8-m resolution.6 Located on
the far-eastern frontier of Russia and the former Soviet Union, this peninsula has always
been of strategic importance. Kamchatka is home to the Pacific nuclear submarine fleet,
housed across Avacha Bay from Petropavlovsk at the Rybachy base.
Figure 1.8 View of Egypt at night taken by the VIIRS aboard the Suomi National Polar-
orbiting Partnership (NPP) satellite on October 13, 2012.
Later in this chapter, Figs. 1.16 and 1.19 illustrate a different element of
IOB: lines of communication. The Coronado Bridge and associated roads
appear at a higher spatial resolution. Figure 4.17 in this book shows port
activity in a nighttime image, with roads made visible by cars and static light
sources.
These illustrations of orders of battle provide an idea of the types of
information that can be obtained. The following section briefly surveys the
various forms of remote-sensing data available historically and today. Visible,
infrared, LiDAR, and radar imagery are illustrated.
8. Geosynchronous orbit (GEO) satellite orbits have a radius of 6.6 earth radii.
Figure 1.9 GOES-9 visible image taken on June 9, 1995 at 18:15 UTC. Image courtesy of
NASA-Goddard Space Flight Center, with data from NOAA GOES.9
taken in daylight in the western hemisphere. The solid earth is hotter than the
oceans during the day and thus appears dark, particularly over the western
United States. The drier western states, with less vegetation, are hotter than
the eastern side of the country.
The earth’s atmosphere decreases monotonically in temperature with
altitude within the troposphere (the region with weather), and the cloud
temperatures vary along with the ambient atmosphere. The character of
infrared emission varies in wavelength with temperature, so the apparent color
of the clouds in this presentation reflects cloud temperatures and therefore
height.
9. http://goes.gsfc.nasa.gov/pub/goes/goes9.950609.1815.vis.gif.
Figure 1.10 GOES-15 infrared imagery taken April 26, 2010 at 17:30 UTC. This is the “first
light” image for the GOES-15 infrared sensors. The composite is made by using the 3.9-μm
infrared channel (G, B) and the long-wave infrared channel at 11 μm (R). The cloud colors
provide information about their height (which corresponds with temperature) and water
content. Here, higher-altitude clouds are colder and appear white.10 Related views are
shown in Fig. 8.15.
Figure 1.11 Landsat 7 image of San Diego taken June 14, 2001. The RGB “true color”
image is on the left (30-m pixels), and the thermal infrared (LWIR) image is on the right.
White is “hot” in this display. Temperatures are from 12–52 °C, or 53.6–125.6 °F.
Figure 1.12 Landsat 7 image enlarged (acquired June 14, 2001 at 18:12:08.07Z), with the
“true-color” image on the left and a Landsat thermal image on the right. The right image uses
IR wavelength bands 6 and 7; the red is 11 μm, and the green and blue are 2.2 μm.
harbor. Adjacent to the true color images in Figs. 1.11 and 1.12 are the
corresponding LWIR images from Landsat. The 60-m-resolution sensor is
the highest-spatial-resolution LWIR sensor flown to date on a civil or
commercial system.
Figure 1.13 Landsat 7 panchromatic channel. The high-spatial-resolution channel for the
Enhanced Thematic Mapper Plus (ETM+) has a 15-m resolution capability shown here. The
Coronado Bridge starts to appear clearly. The golf course is bright because this sensor’s
spectral response extends into the near-infrared (see Chapter 6).
In the visible sensor data, the Coronado Bridge is just visible crossing
from San Diego to Coronado Island. Long linear features, such as bridges and
roads, show up well in imagery even if they are narrow by comparison to the
pixel size. Reflective infrared and thermal IR data from Landsat are shown
encoded as an RGB triple on the right side of Fig. 1.12. The hot asphalt and
city features are bright in the red (thermal) frequencies, whereas parks are
green (cool, and highly reflective in short-wave IR).
Figure 1.13 shows the higher-spatial-resolution panchromatic channel
from the ETM+ sensor. In comparison to an imager like GOES, the penalty
paid for this high spatial resolution is a reduced field of view—nominally
185 km across for any image.
Figure 1.14 Full-frame image captured by UK-DMC2 on October 1, 2009, at 17:53:00Z.
The false-color “IR” image is 14,400 × 15,550 pixels and provides a 30-m ground sample
distance (GSD). Regions that appear red have significant healthy vegetation. UK-DMC2
image, October 1, 2009 ©2016 DMCii, all rights reserved. Supplied by DMC International
Imaging, U.K.
now part of Airbus).12 SSTL has designed and flown a number of small
satellites, selling many of them to countries without indigenous capabilities
in this area.
Figure 1.14 shows an image of southern California and northern Mexico
(Baja California), comparable to the Landsat system shown earlier in this
chapter, though limited in bands (green, red, and near-infrared). This newer
technology provides a much larger image area (3× in linear dimensions).
The system is limited in bandwidth and collection rates, with revisit times of
Figure 1.15 Zoomed-in view of the UK-DMC2 scene of Fig. 1.14, captured October 1, 2009
at 17:53:00Z, centered on San Diego. The false-color “IR” image provides a 30-m ground
sample distance (GSD); regions that appear red have significant healthy vegetation. UK-DMC2
image October 1, 2009, ©2016 DMCii, all rights reserved. Supplied by DMC International
Imaging, U.K.
two weeks being the norm for such low-earth orbiting systems. Surrey
addresses the revisit issue by providing its different customers a means to
team up, formally in the Disaster Monitoring Constellation (DMC). As
the fleet grows, the revisit time drops to about a day for the ensemble of
satellites.
Medium-resolution imaging systems like the DMC system are becoming
practical for the support of agriculture. Figure 1.14 shows a checkerboard
pattern of irrigated vegetation north and south of the Salton Sea in the
middle-right portion of the image. Figure 1.15 shows a zoomed-in image of
San Diego, emphasizing the similarity to the quality of the Landsat 7 data.
The very bright red regions in this figure are golf courses (natural vegetation is
not particularly healthy at this time of year in southern California).
The base system at this writing is focused on payloads with a 10-m
resolution in the panchromatic band and 32-m resolution in the multispectral
(e.g., color) bands. These robust systems cost about 10–20 million USD.
13. A video shows how the dunes marker has been built up over the years, largely through the
efforts of Armondo Morena, a San Diego city worker: https://www.youtube.com/watch?
v=Ag5n_1CPQ7M.
Figure 1.16 Worldview-3 image of Coronado Island, San Diego, California, September 16, 2014. North
is approximately to the right, and the sun is to the upper left. The Hotel del Coronado is shown
in the upper inset, and the carrier Midway is shown in the lower inset, using the higher-
resolution panchromatic data (0.30-m GSD). Imagery reprinted courtesy of DigitalGlobe.
Laser scanners are extensively used for mapping and surveying, with point
densities of 1–30 points/m2 depending on the application. In the illustration
here, the nominal point density is 3.5 pts/m2, which is typical for mapping at
the time of this image. The point density corresponds roughly to a ground
resolution of 0.5–1.0 m.
Figure 1.17 Image of Coronado Island, San Diego, California with LiDAR data from USGS.
The sensor used was an Optech, Inc. Airborne Laser Terrain Mapper (ALTM) 1225. The
LiDAR data were collected on March 24–25, 2006. The following settings were used for
these flights: 25-kHz laser pulse rate, 26-Hz scanner rate, ±20° scan angle, 300–600-m
AGL altitude, and 95–120-kts ground speed.
large image size ( > 400 megapixels) is a consequence of the relatively large
area being imaged at high resolution. Modern digital cameras used for
airborne mapping are frequently operated to image at a 4–6-inch GSD.
Figure 1.18 The images shown here are from an aerial photograph taken over San Diego
harbor in 2004: (a) the full frame, (b) a small chip from the 21,000 × 21,000-pixel image
scanned from the film image, and (c) a further zoomed-in view of the 1.3-gigabyte file. The
resolution is between 6 and 12 inches. Notice the glare on the water and how the wind-driven
water waves show from above. The carrier is the U.S.S. Midway, part of an exhibit at the San
Diego Maritime Museum.14 Images reprinted with permission from Lenska Aerial Images.
14. http://www.navsource.org/archives/02/41.htm.
15. http://www.nasa.gov/mission_pages/shuttle/shuttlemissions/sts123/multimedia/fd15/fd15_
gallery.html; http://www.dlr.de/en/desktopdefault.aspx/tabid-6840/86_read-22539/.
Figure 1.20 TerraSAR-X sub-meter imaging. The buildings have a peculiar look due to the
specular reflection of energy back to the satellite. Visible here besides the U.S.S. Midway are
three Nimitz-class carriers docked at North Island and the baseball stadium for the San Diego
Padres. The small streaks in the water are from smaller boats moving rapidly through the
scene. The resolution appears to be a bit better than 25 cm. By comparison to the WV3 and
airborne illustrations presented earlier, the aircraft on the deck of the Midway appear to have
vanished. In the inset image, cars are detectable in the parking lot to the north of the Midway.
A fourth axis, polarization, has long been an important parameter for passive and
active radar systems, and it has started to appear in the optical remote-sensing
community as an additional dimension of information.
1.4 Resources
There are some classic and modern remote sensing textbooks to note:
• The classic text—Fundamentals of Remote Sensing and Airphoto
Interpretation, by Thomas Eugene Avery and Graydon Lennis
Berlin—is rather dated even in its 5th edition (1992), but it is still a
great reference with many illustrations.
• Remote Sensing of the Environment: An Earth Resource Perspective, 2nd
edition, published in 2006 by John R. Jensen, is an excellent book by
one of the top people in remote sensing.
Figure 1.21 TerraSAR-X image of the International Space Station (ISS), collected on
March 13, 2008 (1325Z). TerraSAR-X passed the ISS at a distance of 195 km and at a
relative speed of 9.6 km/s. The resolution is about one meter, obtained in a 3-s exposure.
The image grayscale is inverted (dark indicates stronger returns). The size of the ISS is
roughly 110 m 100 m 30 m. The Space Shuttle Endeavour was docked at this time, so it
is in the image. The lower image is taken from STS-123 as it departed on March 24.
Reference NASA image S123E010155.16
Figure 1.22 Three dimensions for remote sensing: spatial resolution (SPOT; IKONOS and
Quickbird), spectral resolution (multispectral Landsat to hyperspectral Hyperion), and
temporal resolution (geosynchronous weather and missile-warning systems).
16. TerraSAR-X image of the month: The International Space Station (ISS); news release
dated: 4 March 2010. Image acquired 13 March 2008, image #SWE1-E1058981, http://
www.dlr.de/en/desktopdefault.aspx/tabid-6215/10210_read-22539/10210_page-4/.
• By far the best book on the topic of data analysis is Remote Sensing
Digital Image Analysis: An Introduction, by John A. Richards, 5th
edition (2012).
1.5 Problems
1. List 5–10 elements of information that could be determined for NOB from
imagery. Typical main elements are battle group, ships, submarines, ports,
weather, personnel, C3, and medical.
2. What wavelengths of EM radiation are utilized in the images shown in this
chapter? (This is really a review question, best answered after completing
Chapter 2.)
3. Construct a table/graph showing the relationship between the ground
resolution and area of coverage for the sensors shown in this chapter.
(Also a review question.)
4. Compare the various images of San Diego Harbor. What are the
differences in information content for the highest-resolution systems (e.g.,
IKONOS), the earth resources system (Landsat, visible, and IR), and the
radar system. Which is best for lines of communication? Terrain
categorization? Air order of battle? NOB?
Figure 2.0 The next two chapters follow the progression of energy (light) from the source
(generally the sun) to detectors that measure such energy. Concepts of target reflectance
and atmospheric transmission are developed, and the problem of getting data to the ground
is discussed.
1. ∯ E · dS = Q/ε₀ or ∇ · E = ρ/ε₀, (2.1a)
2. ∯ B · dS = 0 or ∇ · B = 0, (2.1b)
3. ∮ E · dl = −(∂/∂t) ∫∫ B · dS or ∇ × E = −∂B/∂t, (2.1c)
4. ∮ B · dl = μ₀i + μ₀ε₀ (∂/∂t) ∫∫ E · dS or ∇ × B = μ₀J + μ₀ε₀ ∂E/∂t. (2.1d)
Figure 2.1 Four cycles of a wave are shown, with wavelength λ or period τ. The wave has
an amplitude A equal to 3.
λf = c. (2.4)
Figure 2.2 An electromagnetic wave. The electric field is perpendicular to the magnetic
field (E ⊥ B), and both are perpendicular to the direction of propagation k. Following a typical
convention, E is in the x direction, B is in the y direction, and the wave is propagating in the z
direction. This same convention is used in Eq. (2.3).
Figure 2.3 Color photographs of Hermann Hall, on the campus of the Naval Postgraduate
School. The low-level clouds are visible against the darker blue sky in the left image; the
reflected light from the clouds is not highly polarized. The exposure settings are constant
with the Nikon D70 camera. A similar scene is shown in Chapter 6 (Fig. 6.25).
the micron instead. The angstrom (Å) is a nonmetric unit, but it is used widely
nevertheless, particularly by older physicists. The nanometer (nm) needs to be
carefully distinguished from the nautical mile. Visible-light wavelengths
correspond to a wavelength range from 0.38–0.75 μm and energies of 2–3 eV.
Examples
Consider the following illustrative calculations of the characteristics of optical
frequencies and energies:
Photons corresponding to the “green” portion of the spectrum have a
nominal wavelength of 0.5 μm, or a frequency of 6 × 10¹⁴ Hz. The energy for
such photons can be calculated in electron volts by using Planck’s constant:
E = hf = 4.136 × 10⁻¹⁵ eV·s × 6 × 10¹⁴ Hz ≈ 2.5 eV.
This energy is on the order of (or slightly less than) typical atomic binding
energies.
Energies of typical x-ray photons are in the 10⁴ to 10⁵ eV range, whereas
the photons of a 100-MHz radio signal are only 4 × 10⁻⁷ eV.
LiDAR systems (Chapter 11) generally put out short pulses (~5 ns) with
an energy of ~10 μJ. How many photons is this for a laser with a wavelength
of 1.064 μm? Starting with Energy = N · hf, where N is the number of photons,
the expression must be rewritten slightly in terms of wavelength:

E = N · h(c/λ) ⇒ N = E · λ/(hc)
= 10 × 10⁻⁶ joules × (1.064 × 10⁻⁶ m)/(6.626 × 10⁻³⁴ joule-seconds × 3 × 10⁸ m/s)
= 5.35 × 10¹³ photons.
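As a quick check of this arithmetic, the photon count is a one-line calculation; here is a minimal Python sketch using the same constants and pulse values as above (the function name is illustrative only):

```python
# Photon count in a laser pulse, N = E * lambda / (h * c).
H = 6.626e-34  # Planck's constant [J s]
C = 3.0e8      # speed of light [m/s]

def photons_in_pulse(pulse_energy_j, wavelength_m):
    """Number of photons in a pulse of the given energy and wavelength."""
    return pulse_energy_j * wavelength_m / (H * C)

n = photons_in_pulse(10e-6, 1.064e-6)  # ~10-uJ pulse at 1.064 um
print(f"N = {n:.3g} photons")          # -> N = 5.35e+13 photons
```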
Figure 2.5 Layout for demonstration of the photoelectric effect. The convention for the
current is that it flows in the direction opposite that of the electrons.
Figure 2.6 Results from a demonstration of the photoelectric effect using a mercury (Hg)
light source.1
E = hf = hc/λ
= (4.136 × 10⁻¹⁵ eV·s × 3 × 10⁸ m/s)/(435.8 × 10⁻⁹ m)
= (1.24 × 10⁻⁶)/(4.358 × 10⁻⁷) = 2.85 eV.
E = hf = KE + qΦ, (2.6)

where the total energy is E, the kinetic energy KE = qW, and the work
function gives the potential energy term qΦ. The magnitude of the electron
charge is q in this equation; it converts from eV to joules.
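A short sketch of Eq. (2.6) and the photon-energy calculation above; the 2.0-eV work function below is a hypothetical value chosen only to illustrate the stopping-potential arithmetic:

```python
# Photoelectric effect, Eq. (2.6): E = h*f = KE + q*Phi.
H_EV = 4.136e-15  # Planck's constant [eV s]
C = 3.0e8         # speed of light [m/s]

def photon_energy_ev(wavelength_m):
    """Photon energy E = h*c/lambda, in electron volts."""
    return H_EV * C / wavelength_m

e = photon_energy_ev(435.8e-9)        # Hg line from the demonstration
print(f"photon energy = {e:.2f} eV")  # -> 2.85 eV

PHI = 2.0  # hypothetical work function [eV] -- illustrative value only
print(f"stopping potential = {e - PHI:.2f} V")  # KE = E - q*Phi
```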
Figure 2.7 The Sternglass formula is the standard description for the yield of secondary
electrons as a function of incident electron energy. Sternglass published an expression for
the secondary current by electron impact using the yield function δ(E) = 7.4 δ_m (E/E_m)
exp[−2√(E/E_m)], where the maximum yield δ_m and the energy at which it occurs E_m vary from
material to material. Illustrative values for glass (SiO₂), for example, are δ_m = 2.9 and
E_m = 420 eV.2
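The Sternglass yield curve in Fig. 2.7 is straightforward to evaluate; a minimal sketch using the glass (SiO₂) parameters quoted in the caption:

```python
import math

def sternglass_yield(e_ev, delta_m, e_m):
    """Secondary-electron yield: delta(E) = 7.4*delta_m*(E/Em)*exp(-2*sqrt(E/Em))."""
    x = e_ev / e_m
    return 7.4 * delta_m * x * math.exp(-2.0 * math.sqrt(x))

# Glass (SiO2) parameters from the caption: delta_m = 2.9, Em = 420 eV.
for e in (100, 420, 1000, 2000):
    print(f"E = {e:4d} eV -> yield = {sternglass_yield(e, 2.9, 420):.2f}")
# The maximum yield (2.9) occurs at E = Em = 420 eV, as expected.
```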
waves are created. There are several major sources of EM radiation, all
ultimately associated in some form with the acceleration (change of energy) of
charged particles (mostly electrons). For remote sensing, these can be divided
into three categories:
2. Sternglass, E.J. (1954) Sci. Pap. 1772, Westinghouse Research Laboratory, Pittsburgh, PA.
3. https://www.hamamatsu.com/resources/pdf/etd/PMT_handbook_v3aE.pdf; or Photomulti-
plier Tubes, Sales Brochure, TPMO0005E01, June, 2002, Hamamatsu Photonics, KK.
4. References: all Hamamatsu Photonics, KK Rectangular MCP and Assembly Series TMCP
1006E02, December 1999, Circular MCP and Assembly Series, TMCP1007E04, December
1999, and Image Intensifiers, TII0001E2, September 2001. https://www.hamamatsu.com/
resources/pdf/etd/PMT_handbook_v3aE.pdf.
the lower (more negative) their energy levels. As energy is given to the electrons,
the radii of their orbits increase until they finally break free. Bohr hypothesized
that the radii of the orbits were constrained by quantum mechanics to have
certain values (really, a constraint on angular momentum), which produces a
set of discrete energy levels that are allowed for the electrons. Bohr also
assumed that the emission and absorption of energy (light) by an atom could
only occur for transitions between the discrete energy levels allowed to
electrons. Figure 2.10 illustrates the concept that photons are emitted (or
absorbed) in changes of these discrete energy levels. See Appendix 1 for more
careful analysis of the Bohr model.
A few pages of mathematics (in the appendix) give the formula for the
energy of the electrons orbiting in hydrogen-like atoms:
E = −(1/2)(Ze²/4πε₀)²(m/ℏ²)(1/n²) = −Z² E₁/n², (2.7)

where E₁ = 13.6 eV is the magnitude of the hydrogen ground-state energy.
λ = hc/ΔE = 121.6 nm = 1216 Å.

If ΔE is expressed in electron volts, which it usually is, then the constant hc in
the numerator can be written as hc = 1.24 × 10⁻⁶ eV·m, or 1240 eV·nm.
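These relations are easy to exercise numerically; a small sketch of Eq. (2.7) with E₁ = 13.6 eV and hc = 1240 eV·nm (function names are illustrative only):

```python
E1 = 13.6    # hydrogen ground-state energy magnitude [eV]
HC = 1240.0  # h*c [eV nm]

def level_ev(n, z=1):
    """Bohr energy level, Eq. (2.7): E_n = -Z^2 * E1 / n^2."""
    return -z * z * E1 / (n * n)

def transition_nm(n_upper, n_lower):
    """Wavelength of the photon emitted in the n_upper -> n_lower transition."""
    return HC / (level_ev(n_upper) - level_ev(n_lower))

print(f"Lyman-alpha (2 -> 1): {transition_nm(2, 1):.1f} nm")    # -> 121.6 nm
print(f"Balmer H-alpha (3 -> 2): {transition_nm(3, 2):.1f} nm") # -> 656.5 nm
```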
Figure 2.11 An energy-level diagram of a hydrogen atom, showing the possible transitions
corresponding to the different series. The numbers along the transitions are wavelengths in
units of angstroms, where 1 nm = 10 Å.5
Figure 2.12 The Balmer series: visible-region hydrogen spectra in emission and
absorption.6
energy level (the ground state) are called the Lyman series. The n = 2 to n = 1
transition is the Lyman alpha (α) transition. This ultraviolet (UV)
emission is one of the primary spectral (emission) lines of the sun’s upper
atmosphere. The emission (or absorption) lines in the visible portion of the
sun’s spectrum are the Balmer series, i.e., transitions from n > 2 to n = 2.
Higher-order series are of less importance for our purposes.
Although the Bohr model was ultimately replaced by the solution of the
Schrödinger equation and a more-general form of quantum mechanics, it
successfully predicts the observed energy levels for one-electron atoms and
illustrates the quantum nature of the atom and associated energy levels. It is
also a good beginning for understanding the interesting spectral characteristics
that reflected and radiated light may exhibit in remote-sensing applications.
Radiant exitance = M = (2πhc²/λ⁵) · 1/(e^(hc/λkT) − 1) [watts/(m² · μm)]. (2.11)
The difference is that the dependence on the angle of the emitted radiation has
been removed by integrating over the solid angle. This can be done for
blackbodies because they are “Lambertian” surfaces by definition—the
emitted radiation does not depend upon the angle, and M = πL.
For the purposes of this book, two aspects of the Planck curves are of
particular interest: the total power radiated, which is represented by the area
under the curve, and the wavelength at which the curve peaks, λmax.
The power radiated (integrated over all wavelengths) is given by the
Stefan–Boltzmann law:
R = σεT⁴ W/m², (2.12)

where R is the power radiated per square meter, ε is the emissivity (taken as
unity for a blackbody), σ = 5.67 × 10⁻⁸ W/m²K⁴ (Stefan’s constant), and T is
the temperature of the radiator (in K).
Wien’s displacement law gives the wavelength at which the peak in
radiation occurs:
λmax = a/T (2.13)

for a given temperature T. Wien’s constant a has the value 2898 μm·K.
Example
Assume that the sun radiates like a blackbody (which is not a bad assumption,
though two slightly different temperatures must be chosen to match the
observed quantities):
(a) Find the wavelength at which this radiation peaks, λmax. The solar
spectral shape in the visible is best matched by a temperature of
6000 K: λmax = 2898 μm·K/6000 K ≈ 0.48 μm.
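The blackbody quantities of Eqs. (2.12) and (2.13) reduce to a few lines of arithmetic; a minimal sketch:

```python
SIGMA = 5.67e-8  # Stefan's constant [W m^-2 K^-4]
WIEN_A = 2898.0  # Wien's constant [um K]

def radiant_exitance(t_k, emissivity=1.0):
    """Stefan-Boltzmann law, Eq. (2.12): R = sigma * eps * T^4 [W/m^2]."""
    return SIGMA * emissivity * t_k**4

def peak_wavelength_um(t_k):
    """Wien's displacement law, Eq. (2.13): lambda_max = a / T [um]."""
    return WIEN_A / t_k

print(f"Sun (6000 K): peak {peak_wavelength_um(6000):.2f} um, "
      f"R = {radiant_exitance(6000):.3g} W/m^2")  # ~0.48 um, ~7.35e7 W/m^2
print(f"Earth (300 K): peak {peak_wavelength_um(300):.1f} um")  # ~9.7 um (LWIR)
```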
Figure 2.14 The solar spectrum, based on the spectrum of Neckel and Labs. The peak
occurs at about 460 nm (blue or cyan). The data illustrated here represent the “top-
of-atmosphere” incident radiance. Reprinted with permission from “The solar radiation
between 3300 and 12500 angstrom,” Solar Physics 90, 205–258 (1984). Data file courtesy
of Bo-Cai Gao, NRL.
7. K. Phillips, Guide to the Sun, p. 83–84, Cambridge Press, Cambridge, U.K. (1992).
8. This section is adapted from the classic text by T. E. Avery and G. L. Berlin, Fundamentals
of Remote Sensing and Airphoto Interpretation, Macmillan Publishing Company, New
York (1992).
Figure 2.15 The four interactions defined here are somewhat artificial from a pure physics
perspective, but they are nonetheless extremely useful. Figure reprinted with permission
from Avery and Berlin (1992).8
2.5.1 Transmission
Transmission is the process by which incident radiation passes through matter
without measurable attenuation; the substance is thus transparent to the
radiation. Transmission through material media of different densities (e.g., air
to water) causes radiation to be refracted or deflected from a straight-line path
with an accompanying change in its velocity and wavelength; the frequency
always remains constant. In Fig. 2.15, it is observed that the incident beam of
light at angle θ₁ is deflected toward the normal when going from a low-density
medium to a denser one at angle θ₂. Emerging from the far side of the denser
medium, the beam is refracted from the normal at angle θ₃. The angle
relationships in Fig. 2.15 are θ₁ > θ₂ and θ₁ = θ₃.
The change in the EMR velocity is explained by the index of refraction n,
which is the ratio between the velocity of the EMR in a vacuum c and its
velocity in a material medium v:
n = c/v. (2.14)
The index of refraction for a vacuum (perfectly transparent medium) is equal
to 1, or unity. Because v is never greater than c, n can never be less than 1 for
any substance. Indices of refraction vary from 1.0002926 (for the earth’s
atmosphere) to 1.33 (for water) and 2.42 (for a diamond). The index of
refraction leads to Snell’s law: n₁ sin θ₁ = n₂ sin θ₂.
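A small sketch of Snell’s law using the indices quoted above; the 30° incidence angle is an arbitrary illustrative choice:

```python
import math

def refracted_angle_deg(theta1_deg, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2); angles measured from the normal."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

# Air (n ~ 1.0003) into water (n = 1.33): the beam bends toward the normal.
print(f"{refracted_angle_deg(30.0, 1.0003, 1.33):.1f} deg")  # -> 22.1 deg
```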
9. See, for example, E. Hecht, Optics, 4th Edition, Addison Wesley, 2001.
Figure 2.16 Fresnel equations. Both curves approach 1 (100% reflection) as the incident
angle approaches 90°. There is a range of incident angles for which the intensity of the
parallel component is very small, reaching zero at the Brewster angle.
2.5.3 Scattering
Scattering (also called diffuse reflection) occurs when incident radiation is
dispersed or spread out unpredictably in many directions, including the
direction from which it originated (Fig. 2.15). In the natural environment,
scattering is much more common than reflection. Scattering occurs with
surfaces that are rough relative to the wavelengths of incident radiation. Such
surfaces are called diffuse reflectors. The velocity and wavelength of
electromagnetic waves are not affected by scattering.
Variations in scattering manifest themselves in varying characteristics in
the bidirectional reflectance distribution function (BRDF). For an ideal,
Lambertian surface, this function would nominally be a cosine curve, but the
reality generally varies quite a bit. Aside from the world of remote sensing,
BRDF is also studied in computer graphics and visualization.
Figure 2.17 shows the consequences of variation in the scattering function,
as observed by the Multi-angle Imaging SpectroRadiometer (MISR) aboard the
NASA Terra satellite, orbiting at a 700-km altitude. At left is a “true-color” image
from the downward-looking (nadir) camera on the MISR. This image of a
snow-and-ice-dominated scene is mostly shades of gray. The false-color image
at right is a composite of red-band data taken by the MISR’s forward 45.6°,
Figure 2.17 Multi-angle Imaging SpectroRadiometer (MISR) images of Hudson Bay and
James Bay, Canada, February 24, 2000. This example illustrates how multi-angle viewing
can distinguish physical structures and textures. The images are about 400 km (250 miles)
wide with a spatial resolution of about 275 m (300 yards). North is toward the top. Photo
reprinted courtesy of NASA/GSFC/JPL, MISR Science Team (PIA02603).
nadir, and aft 45.6° cameras, displayed in blue, green, and red colors,
respectively. Color variations in the right image indicate differences in the
angular reflectance properties. The purple areas in the right image are low
clouds, and the light blue at the edge of the bay is due to increased forward
scattering by the fast (smooth) ice. The orange areas are rougher ice, which
scatters more light in the backward direction.
2.5.4 Absorption
Absorption is the process by which incident radiation is taken in by a medium.
For this to occur, the substance must be opaque to the incident radiation. A
portion of the absorbed radiation is converted into internal heat energy, which
is subsequently emitted or reradiated at longer thermal-infrared wavelengths.
2.6 Problems
1. MWIR radiation covers the 3–5-μm portion of the EM spectrum. What
energy range does this correspond to in eV?
Figure 3.1 Front-page of the New York Times on August 20, 1960. Note the other
headlines.
1. The United States’ Explorer-6 transmitted the first (electronic) space photograph of earth in
August 1959; the spin-stabilized satellite had television cameras. These images, apparently
now lost, predate the more-official first civilian images from space taken by TIROS 1,
launched on April 1, 1960 (see Chapter 8 and http://nssdc.gsfc.nasa.gov/, NSSDC ID: 59-
004A-05). Russian Luna-3 images of the far side of the moon were transmitted to earth in
October, 1959 (seventeen images from October 7–18, http://nssdc.gsfc.nasa.gov/database/
MasterCatalog?sc=1959-008A).
smaller imaging areas.2 Higher-resolution KH-8 data have not yet been
declassified, although the systems themselves have been.
3.1.2 Technology
The Corona concept uses film cameras to record images for a few days before
dropping the film via a recovery vehicle. The film containers were de-orbited and
recovered by Air Force C-119 (and C-130) aircraft while floating to earth on a
parachute. The system adapted aerial-photography technology with a constantly
rotating, stereo, panoramic-camera system [Figs. 3.2(a), 3.2(b), and 3.3]. The low
orbital altitudes (typically less than 100 miles) and slightly elliptical orbits eased
some of the problems associated with acquiring high-spatial-resolution imagery.
The “Gambit” series of KH-7 satellites flew at even lower altitudes with initial
perigees as low as 120 km. Appendix 2 provides details on the Corona missions.3
The cameras, codenamed “Keyhole,” began as variants on products
designed for aerial photography. The first cameras, the “C” series, were
designed by Itek and built by Fairchild. Two counter-rotating cameras,
pointing forward and aft and viewing overlapping regions, allowed for stereo
views (Fig. 3.4). The cameras used long filmstrips (2.2 inches × 30 feet) and an
f/5.0 Tessar lens with a focal length of 24”. The first images had a ground
resolution of 40 feet, based on a film resolution of 50–100 lines/mm.
(Advances in film technology by Kodak were some of the most important
technological advances in the Corona program. Kodak developed a special
polyester base to replace the original acetate film.)
Improved camera, lens, and film design led to the KH-4-series cameras
(Fig. 3.2), with Petzval f/3.5 lenses, still at a 24-inch focal length. With film
resolutions of 160 lines/mm, it was possible to resolve ground targets of six feet.
Corona (KH-4) ultimately used film ranging in speeds from ASA 2 to 8—only a
few percent of the sensitivity, or speed, of consumer film. This is the tradeoff for
a high film resolution, and one reason why very large optics were needed.4
The great majority of Corona’s imagery was black and white (panchro-
matic). Infrared film was flown on Mission 1104; color film was flown on
Missions 1105 and 1108. Photo-interpreters did not like the color film,
however, because the resolution was lower.5 Tests showed color as valuable
for mineral exploration and other earth-resources studies, and its advantages
led indirectly to the Landsat satellites.
Figure 3.2 (a) KH-4B (artist’s concept) and (b) KH-4B or -J3 camera (DISIC refers to a dual
improved stellar index camera). Both images reprinted courtesy of the National Reconnais-
sance Office.6
Figure 3.3 A USAF C-119, and later, a C-130 (shown here) modified with poles, lines, and
winches extending from the rear cargo door, was used to capture capsules ejected from the
Discoverer satellites. Reportedly, this step—catching a satellite in midair—was considered
by some to be the least likely part of the whole process.7
6. http://www.nro.gov/history/csnr/corona/imagery.html.
7. A recent NASA attempt to replicate this technique failed. The Genesis satellite crashed on
September 8, 2004 as the parachute failed to deploy properly. It made a sizable hole in the Utah
desert. AW&ST, Sept 12, 2004; http://www.nasa.gov/mission_pages/genesis/main/index.html.
Figure 3.4 The KH-4B cameras operated by mechanically scanning to keep the ground in
focus. The paired cameras provided stereo images, which are very helpful when estimating
the heights of cultural and natural features.
Satellite inclinations are not polar, and altitude is low. The 2014 mission was
observed to be in a 176 × 285-km orbit with an inclination of 81.4°. The
satellite lifetime is only a few months at those altitudes.8 There are
indications that Russia is moving to electronic systems.
3.1.3 Illustrations
The first Corona image was taken of the Mys Shmidta airfield. Figure 3.6
shows that the resolution was high enough to discern the runway and an adjacent
parking apron. Eventually, the systems were improved, and higher-resolution
images were acquired. Figure 3.7 shows two relatively high-resolution images
of the Pentagon and the Washington Monument in Washington D.C.
Declassified imagery is available from the U.S. Geological Service (USGS).9
8. http://www.russianspaceweb.com/kobalt_m.html; http://www.nasaspaceflight.com/2014/05/
soyuz-2-1a-kobalt-m-reconnaissance-satellite/.
9. http://pubs.usgs.gov/fs/2008/3054/.
Figure 3.5 Corona satellite in the Smithsonian. The recovery vehicle is to the right.
Figure 3.6 Mys Shmidta Airfield, U.S.S.R. This August 18, 1960 photograph is the first
intelligence target imaged from the first Corona mission. It shows a military airfield near Mys
Shmidta on the Chukchi Sea in far-northeastern Russia (Siberia, 68.900°N, 179.367°W, just
across some forbidding water from Barrow, Alaska, at a very similar latitude). North is at the
upper left. Image reprinted courtesy of the NRO.
Of course, the whole point was to track the activities in the Soviet Union.
Figure 3.8 shows the Severodvinsk shipyard, a White Sea port for the U.S.S.R.,
on February 10, 1969. The large rectangular building in the center is the
construction hall, and the square courtyard (drydock) to its left is where vessels
Figure 3.7 Photographs of Washington, D.C.: (a) one always-popular target, the
Pentagon, imaged September 25, 1967. (b) Note the shadow cast by the Washington
Monument in September 1967. Both images reprinted courtesy of the NRO.
(submarines) are launched. The curved trail of disturbed snow and ice reveals
where the subs are floated into the river. The satellite is on a southbound pass
over the port facility.10
The image of Severodvinsk is a chip from a much larger scene shown in
Figs. 3.9 and 3.10. The strips are perpendicular to satellite motion, which in
10. Eye in the Sky, The Story of the Corona Spy Satellites, page 224, D. A. Day, J. M.
Logsdon, and B. Latell, eds. (1998).
Figure 3.10 Three consecutive Corona images. The shipyard in Fig. 3.8 is in the bottom
frame, just under the word “Severodvinsk.” Note the ends of the film strips: “when the
satellite’s main camera snapped a picture of the ground, two small cameras took a picture of
the earth’s horizon at the same time on the same piece of film. The horizon cameras helped
interpreters calculate the position of the spacecraft relative to the earth and verify the
geographical area covered in the photo.”11
this frame was toward the southeast. There are horizon images on the edges of
the film strips, showing the rounded earth. These images from the horizon
cameras provided reference/timing information as the system scanned from
horizon to horizon.
11. http://airandspace.si.edu/exhibitions/space-race/online/sec400/sec431.htm.
Concurrent with the Corona series, several other film-return systems were
orbited. The most interesting of these, in some sense, were the Gambit systems,
codenamed KH-7 and KH-8. These were designed for a higher spatial
resolution and produced images over smaller areas. Illustrations from KH-7
appear at the beginning of Chapter 1.
Figure 3.11 Atmospheric absorption. The transmission curves were calculated using MOD-
TRAN 4.0, release 2. The U.S. standard atmosphere (1976) is defined in the NOAA
publication with that title, NOAA0S/T-1562, October, 1976, Washington, D.C., Stock # 003-
017-00323-0.
12. P. Slater, Manual of Remote Sensing, Vol. 2, 2nd edition, F. M. Henderson and A. J. Lewis,
Eds., p. 210, Wiley, New York (1983).
Figure 3.13 Atmospheric absorption and scattering. The transmission curve is calculated
using MODTRAN 4.0, release 2. The conditions are typical of mid-latitudes with a 1976 U.S.
standard atmosphere assumed. The overall broad shape is due to scattering by molecular
species and aerosols.
Figure 3.14 The apparent position of a star will fluctuate as the rays pass through time-
varying light paths.
Figure 3.15 This astronomical I band (850 nm) compensated image of the binary star
Kappa-Pegasus (κ-peg) was generated using the 756-active-actuator adaptive-optics
system. The two stars are separated by 1.4544 μradians. The images are 128 × 128 pixels;
each pixel subtends 120 nano-radians. The FWHM of the uncompensated spot is about
7.5 μradians—about 5 times the separation of the two stars.13 Note on nomenclature:
astronomy IR bands are H (1.65 μm), I (0.834 μm), J (1.25 μm), and K (2.2 μm).
1/f = 1/i + 1/o. (3.1)
Figure 3.17 Magnification: similar triangles. The object distance will be the altitude or
range; the image distance is typically the focal length.
Example
For example, consider a photographer using a 1000-mm lens on a modern
digital-single-lens-reflex (DSLR) camera at a football stadium. If he images a
player (2 m tall) from across the field (object distance 40 m), how large is the
image on the camera detector, or focal plane?
image size/focal length = object size/range ⇒ image size = (object size/range) · focal length;

image size = (2 m/40 m) · 1000 mm = 5 cm.
This is larger than the size of the detector on a modern DSLR—the image
would not include the entire player.
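The similar-triangles calculation of Fig. 3.17 in code form, reproducing the stadium example:

```python
def image_size_mm(object_size_m, range_m, focal_length_mm):
    """Similar triangles (Fig. 3.17): image size = (object size / range) * focal length."""
    return object_size_m / range_m * focal_length_mm

# A 2-m player at 40 m through a 1000-mm lens -> 50 mm (5 cm) at the focal plane.
print(f"{image_size_mm(2.0, 40.0, 1000.0):.0f} mm")
```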
f/# = focal length/diameter of the primary optic. (3.2)
Typical lenses found on amateur cameras will vary from f/2.8 to f/4. High-
quality standard lenses will be f/1.2 to f/1.4 for a modern DSLR. The longer
the focal length (the higher the magnification) is, the larger the lens must be to
maintain a similar light-gathering power. The longer the focal length is, the
harder it is to create a fast optic. A telephoto lens for a sports photographer
might be 500 mm and might at best be f/8 (question: what is the diameter
implied by that aperture?). Two different quantities are being referred to as
“f ” here, following optical convention. One is the focal length, and the other
is the aperture.
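The parenthetical question above (the diameter implied by a 500-mm f/8 lens) follows directly from Eq. (3.2); a one-function sketch:

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    """Rearranging Eq. (3.2): diameter = focal length / f-number."""
    return focal_length_mm / f_number

# The 500-mm f/8 telephoto mentioned above implies a 62.5-mm aperture.
print(f"{aperture_diameter_mm(500.0, 8.0):.1f} mm")
```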
As mentioned at the beginning of this chapter, the KH-4B cameras had a
focal length of 24 inches and were 5–10 inches in diameter (see the appendix
on Corona cameras). Apertures of f/4 to f/5.6 are typical of the early
systems. In contrast, the Hubble Space Telescope is characterized by
aperture values of f/24 and f/48, depending on the optics following the large
primary mirror.14
14. Supplemental topic: optical systems made with lenses need to pay attention to the
transparency of the material in the spectral range of interest. Glass is transparent from
400 nm into the infrared. UV sensors and MWIR/LWIR sensors need to be constructed
from special materials making them MUCH more expensive.
15. In photography, “normal” means that if you print the image on an 8 × 10 piece of paper
and hold it at arm’s length, it will appear as the scene did in real life. Modern “point and
shoot” cameras will typically have shorter focal lengths but still frequently refer to “35 mm
equivalent” as a way to standardize nomenclature.
Figure 3.18 Two images of a shoreline. The pinhole is effectively a 50-mm lens stopped
down to an aperture of 0.57 mm with f/100.
The pinhole image obtained is fuzzy; the smaller the pinhole, the sharper
the image will be. The problem with a small aperture is, of course, a relatively
long exposure time. A limit is eventually reached as diffraction effects emerge,
as described in the next section.
2π(ax/Rλ) = π, or x/R = λ/2a. (3.4)
The width of the central maximum is just twice this value, and the result is
well known: the angular width of the first bright region is Δθ = 2(x/R) = λ/a.
The secondary maxima outside this region can be important, particularly in
the radar and communications fields—these are the sidelobes in the antenna
pattern of a rectangular antenna.
How do these factors relate to optical systems? Diffraction implies
that for a finite aperture there is a fundamental limit to the angular resolution,
which is defined by the ratio of the wavelength to the aperture width or
diameter. This is the fundamental issue that determines the size of an optical
system. The diffraction formula applies nicely to rectangular apertures (as will
be seen in Chapter 9) and leads to the Rayleigh criterion:
Δθ = λ/D, (3.5)

where Δθ is the angular resolution, and D is the aperture width. This formula
must be modified for circular apertures. The result is effectively obtained by
16. Generally encountered in an introductory calculus class as a good illustration for concepts
of limits and L’Hospital’s Rule. The numerator and denominator go to zero for x ¼ 0, but
the ratio is well defined.
Figure 3.20 Single-slit diagram for the geometry of the diffraction pattern.
taking the Fourier transform of the aperture shape, which for a circular
aperture results in a formula involving Bessel functions, as normally
developed in a course in differential equations:
I ∝ [J₁(w)/w]², (3.6)

where w = (2πar)/(Rλ), J₁ is the “J” Bessel function of order 1, a is the lens
radius, r is the distance from the center line, R is the distance from the lens to
the screen, and λ is the wavelength. This function is illustrated in Fig. 3.21.
The first zero occurs where w = 3.832, which leads to a relatively famous
result: the radius of the “Airy disk” = 0.61λ · distance/lens radius, or the
angular resolution of a lens is

Δθ = 0.61 λ/a, or Δθ = 1.22 λ/diameter. (3.7)
Figure 3.21 Airy pattern for diffraction from a circular aperture. The background pattern is
shown in negative to improve visual contrast.
Figure 3.22 (a) Two objects (stars) separated by a distance x just corresponding to the
Rayleigh criteria for a cylindrical optic. (b) Two point targets at a range of 400 km. The
objects are separated by 10 m.
1.22 · (0.5 × 10⁻⁶ m/0.0244 m) · 400 × 10³ m = 10.0 m.
This is the case illustrated in Fig. 3.22(a), and for the second image from the
bottom of Fig. 3.23(b). The two targets illustrated at the top are separated by
twice the Rayleigh criteria, which is used in some engineering texts as a design
criterion.
As an illustrative example, consider a system like the Hubble Space
Telescope orbiting at an altitude of 200 nautical miles, or 370 km, and assume
it is nadir viewing, so that the range is just the altitude. The Rayleigh criteria
can be used to estimate the best possible ground resolution such a sensor could
produce: Δθ = 1.22 · (0.5 × 10⁻⁶ m/2.4 m) = 2.5 × 10⁻⁷ radians, which at a
370-km range gives a GSD of about 0.09 m.
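Both diffraction-limited estimates in this section follow from Eq. (3.7); a minimal sketch (aperture values taken from the examples above):

```python
def rayleigh_gsd_m(wavelength_m, aperture_m, range_m):
    """Diffraction-limited GSD from Eq. (3.7): 1.22 * lambda / D * range."""
    return 1.22 * wavelength_m / aperture_m * range_m

# Two-point-target example: 2.44-cm aperture at 400 km -> 10 m.
print(f"{rayleigh_gsd_m(0.5e-6, 0.0244, 400e3):.1f} m")
# Hubble-class 2.4-m aperture at 370 km -> ~0.09 m.
print(f"{rayleigh_gsd_m(0.5e-6, 2.4, 370e3):.3f} m")
```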
3.5 Detectors
Following optics, the next primary element in any sensor system is the
detector. Modern detector systems make use of solid state technology, as
discussed here.
17. See C. McCreight, “Infrared Detectors for Astrophysics,” Physics Today (Feb. 2005).
18. G. Rieke, Detection of Light from the Ultraviolet to the Sub-millimeter, Cambridge
University Press (2002).
19. S. M. Sze, Physics of Semiconductor Devices, Wiley, New York (1981), and Kittel,
Introduction to Solid State Physics, Wiley, New York (1971).
Thus, the visible and the first part of the near-infrared spectrum can be
detected with silicon detectors; this fact is reflected in their common use in
modern cameras. The energy band gap generally depends on the temperature.
The detection of longer-wavelength photons requires materials such as
HgCdTe or InSb. The relatively small band gaps cause a problem, however.
At room temperature, the electrons tend to rattle around, and every now and
then one will cross the gap due to thermal excitation. This process is largely
controlled by the exponential term that comes from the “Maxwell–
Boltzmann” distribution (or bell-shaped curve), which describes the velocities,
or energies, to be found in any collection of objects (whether electrons, atoms,
or molecules) in a thermal equilibrium:
N₂/N₁ = e^(−(E₂−E₁)/kT) ⇒ number ∝ e^(−band-gap energy/thermal energy(kT)). (3.8)
If these electrons are collected, this factor becomes part of a background noise
known as the dark current. To prevent this, the materials must be cooled—
typically to 50–70 K, which requires liquid nitrogen at least [and for some
applications, liquid helium (4 K)]. Mechanical refrigerators can also be used, but
they are problematic in space applications because they generally have relatively
short lifetimes and can introduce vibration into the focal plane, which is
undesirable. Some recent NASA missions have used a long-lived pulse-tube
technology, developed by TRW, with apparent success.
The importance of cooling is illustrated here with a calculation. Use
HgCdTe, assume a band gap of 0.1 eV, and compare the nominal number of
electrons above the band gap at room temperature (300 K) and at 4 K. The
conversion factor k in the term kT is
k = 1.38 × 10⁻²³ joules/kelvin ÷ 1.6 × 10⁻¹⁹ joules/eV = 8.62 × 10⁻⁵ eV/kelvin;
T = 300 K, kT = 0.026 eV; T = 4 K, kT = 0.00035 eV;
number ∝ e^(−bandgap energy/thermal energy(kT)) = e^(−0.1/0.026) = e^(−3.8) = 0.02 @ 300 K;
= e^(−0.1/0.00035) = e^(−286) ≈ 0 @ 4 K.
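A sketch of this Boltzmann-factor estimate, together with the cutoff wavelength λ_c = hc/E_gap implied by a detector’s band gap; the 70-K case reflects the typical operating temperature mentioned above:

```python
import math

K_EV = 8.62e-5   # Boltzmann constant [eV/K]
HC_EV_UM = 1.24  # h*c [eV um]

def thermal_factor(gap_ev, t_k):
    """Relative population above the band gap, from Eq. (3.8): exp(-E_gap / kT)."""
    return math.exp(-gap_ev / (K_EV * t_k))

def cutoff_wavelength_um(gap_ev):
    """Longest detectable wavelength: lambda_c = h*c / E_gap."""
    return HC_EV_UM / gap_ev

print(f"HgCdTe (0.1 eV): cutoff {cutoff_wavelength_um(0.1):.1f} um")  # ~12.4 um (LWIR)
print(f"300 K: {thermal_factor(0.1, 300):.2g}")  # ~0.02, as computed above
print(f" 70 K: {thermal_factor(0.1, 70):.2g}")   # ~6e-8 -- dark current suppressed
```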
20. Reproduced with permission from C. L. Littler and D. G. Seiler, “Temperature dependence
of the energy gap of InSb using nonlinear optical techniques,” Appl. Phys. Lett. 46(10)
(1985), Copyright 1985, AIP Publishing LLC.
Figure 3.25 Rough guide to the spectral ranges of use for different focal-plane materials.
HgCdTe is abbreviated as MCT (MerCadTelluride). The chart shows the wavelength and
temperature ranges that may be used for a variety of materials. Longer wavelengths fairly
uniformly require lower temperatures. Image courtesy the Rockwell International Electro-
optical Center.
the order of 5–10 μm. More exotic materials generally have somewhat
larger values for the detector pitch. A fairly typical linear CCD array is
illustrated in Chapter 6, where a Kodak 3 × 8000 linear array is shown in
Fig. 6.4.
A relatively new technology has emerged over the last decade, the
quantum well infrared photodetector (QWIP). QWIP focal planes have
recently been used on the Landsat Data Continuity Mission (aka Landsat 8).
The technology lends itself to larger, more-uniform arrays than the more
exotic InSb and HgCdTe materials. Coolers are still required.
Figure 3.26 A thermally isolated resistor (200 μm × 200 μm), used in a microscopic
Wheatstone bridge. The current enters from the top left and exits through the bottom
right. As the sensor heats, changes in its resistance can be measured with great
sensitivity.
respectively). This technique was used on the early Corona satellites, where
the film is moved in concert with the satellite motion for longer exposure
and better focus. Figure 3.27(a) illustrates such a system. The wide-field
planetary camera (WFPC) on the Hubble is an example of this approach,
as well as the early UoSat cameras (University of Surrey, Surrey Satellite
Technology Limited). Aerial photography systems use this approach,
notably the widely used Vexcel UltraCam, with current systems offering
260-megapixel panchromatic images. In November 2013, Skybox Imaging
(now called Terra Bella) launched Skysat-1, a high-spatial-resolution
system with a framing focal plane. This is the first “1-m” system with such
a focal plane.21
3.6.1.2 Cross-track (Landsat MSS, TM; AVIRIS)
Sensors such as those on the GOES weather satellite and Landsat system consist
of a small number of detectors—from 1 to 32 or so in the systems described later
in this book. The sensor is swept from side to side, typically via an oscillating
mirror, while the system flies along a track. The image is constructed by the
21. The UltraCam Eagle uses four separate camera “cones” to obtain the 260-megapixel
panchromatic images, with four additional cones for the four-color (multispectral) frames.
Figure 3.27 (a) Framing system,22 (b) cross-track scanner (whiskbroom), and (c) along-
track scanner (pushbroom).
combined motion of the optic and the platform (aircraft or satellite), as shown in
Fig. 3.27(b). Such sensors are called “whiskbroom” sensors.
3.7.2 Relay
Real-time systems (limited storage) can also operate through a relay. The
Hubble system is a good example, although it also uses onboard storage.
The Tracking and Data Relay Satellite System (TDRSS), described in the
appendix, gives a description of the NASA system.
Example
IKONOS was designed to capture a 10 km × 10 km scene at a 1-m spatial
resolution in 4 s. The dynamic range is 12 bits/pixel. The data acquisition
rate then becomes 10⁸ pixels × 12 bits/pixel ÷ 4 s = 300 Mbps.
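The same bookkeeping as a short sketch:

```python
def data_rate_mbps(scene_km, gsd_m, bits_per_pixel, seconds):
    """Acquisition rate for a square scene: total bits / collection time, in Mbit/s."""
    pixels_per_side = scene_km * 1000.0 / gsd_m
    return pixels_per_side**2 * bits_per_pixel / seconds / 1e6

# IKONOS: 10 km x 10 km scene, 1-m GSD, 12 bits/pixel, collected in 4 s.
print(f"{data_rate_mbps(10.0, 1.0, 12, 4.0):.0f} Mbps")  # -> 300 Mbps
```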
3.9 Problems
1. When was the first Corona launch?
2. When was the first successful launch and recovery of a Corona capsule?
Which number was it?
3. How many launches did it take before a film was successfully returned?
4. How did the date of this launch relate to that of the U-2 incident with
Gary Powers?
5. What was the best resolution (GSD) of the KH-4 cameras discussed here?
6. What was the swath width associated with the best-resolution KH-4
images?
7. How many Corona missions were there?
8. For a 24-inch focal length, f/3.5 lens, calculate the Rayleigh limit to the
GSD for a satellite at a 115-km altitude. Assume nadir viewing and visible
light (500 nm).
9. What diameter mirror would be needed to achieve 12-cm resolution
(GSD) at geosynchronous orbit? (Geosynchronous orbit has a radius of
6.6 earth radii (Re); this is not the altitude).
10. What are the three factors that constrain the resolution obtainable with an
imaging system?
11. Adaptive optics: compare the Rayleigh criteria for the 3.5-m Starfire
observations in Fig. 3.15 to results with and without the adaptive-optics
system.
12. What is the energy band gap for lead sulfide? What is the cutoff
wavelength for that value?
13. The Corona lenses were f/5.0 Tessar designs with a focal length of
24 inches. Calculate the diameter of these lenses.
14. For a 24-inch-focal-length camera, f/3.5, at an altitude of 115 km, calculate
the GSD corresponding to a 0.01-mm spot on the film (100 lines/mm).
Assume nadir viewing. This is a geometry problem.
15. What is the f/# for the 0.57-mm pinhole illustrated in Fig. 3.19? The focal
length is approximately 50 mm.
16. How large an optic (mirror) would you need on the moon to obtain a 0.66-m
GSD when viewing the earth? What is the angular resolution of this optic (Δθ
in radians)? Assume that the visible radiation λ = 0.5 μm = 5 × 10⁻⁷ m.
17. One of the most popular cameras used for airborne mapping today is the
Microsoft/Vexcel Ultracam. The Ultracam Eagle can be outfitted with a
variety of lenses, including a 210-mm, f/5.6 optic. A typical flight altitude
is 1000 m. The panchromatic image size is 20,010 × 13,080 pixels, and the
panchromatic physical pixel size (pitch) is 5.2 μm. Calculate the resolution
defined by the Rayleigh criteria at 1.0 mm and the resolution defined by
the geometry of the camera.25 The result should be 2.5 cm.
25. http://www.microsoft.com/ultracam/en-us/UltraCamEagle.aspx.
This chapter applies the knowledge from Chapter 3 to a few illustrative satellite
systems. The Hubble Space Telescope is one of the most impressive
illustrations of the technology, even after 25 years of service. Smaller
(commercial) systems are described as well, and illustrations are given of
nighttime imaging.
Figure 4.3 Image of the HST taken during STS-82, the second service mission, on
February 19, 1997 (S82E5937, 07:06:57). New solar arrays had not yet been deployed.
(FGSs) that lock onto guide stars to reduce drift and ensure pointing accuracy.
The HST’s pointing accuracy is 0.007 arcseconds (0.034 μradians).
Power to the system electronics and scientific instruments is provided by
two 2.4 × 12.1-m solar panels, which provide a nominal total power of 5 kW.
The power generated by the arrays is used by the satellite system (1.3 kW)
and scientific instruments (1.0–1.5 kW); it also charges the six nickel–
hydrogen batteries that power the spacecraft during the roughly 25 minutes
per orbit in which the HST is in the earth’s shadow.1
Communications with the HST are conducted via the tracking and data-
relay satellites (TDRS, see Appendix 3). Observations taken during the time
when the TDRS system is not visible from the spacecraft are recorded and
dumped during periods of visibility. The spacecraft also supports real-time
interactions with the ground system during times of TDRS visibility. The
primary data link is at 1024 kbps, using the S-band link to the TDRS.2 The
system routinely transfers a few gigabytes per day to the ground station. Data
are then forwarded to NASA/GSFC via landlines.
Figure 4.4 Hubble optics. The mirrors are hyperboloids, and the secondary is convex. The
primary has a focal length of 5.5 m and a radius of curvature of 11.042 m. The secondary
has a focal length of 0.7 m and a radius of curvature of 1.358 m. The bottom image is an
accurate ray trace for the Cassegrain telescope, courtesy of Lambda Research (OSLO).3
The Cassegrain design used for the Hubble is very common among satellite systems. The primary
mirror is constructed of ultra-low-expansion silica glass and coated with a thin
layer of pure aluminum to reflect visible light. A thinner layer of magnesium
fluoride is laid over the aluminum to prevent oxidation and reflect ultraviolet
light. The secondary mirror is constructed from Zerodur, a very-low-thermal-
expansion (optical) ceramic. The effective focal length is 57.6 m.
Figure 4.4 illustrates the optical design of the telescope. The distance
between the mirrors is 4.6 m, and the focal plane is 1.5 m from the front of the
primary mirror. The angular resolution at 400 nm is nominally 0.043
arcseconds (0.21 μradians). The fine guidance sensors, or star trackers, view
3. http://www.lambdares.com
Figure 4.5 The primary mirror of the Hubble telescope measures 2.4 m (8 ft) in diameter
and weighs about 826 kg (1820 lbs). By comparison, the Mt. Wilson 100-inch-solid-glass
mirror weighs some 9000 pounds.4 The center hole in the primary mirror has a diameter of
0.6 m.
through the primary optic. The off-axis viewing geometry does not interfere
with imaging, and it allows the use of the large primary optic for the necessary
detector resolution. Figure 4.5 shows a closeup of the 96-inch mirror. A
requirement for the spacecraft was a pointing accuracy (jitter) of 0.007
arcseconds, which was more easily achieved after the first set of solar arrays
was replaced. The original flexible array design vibrated rather badly every
time the satellite moved from sun to shadow or shadow to sun—that is, twice
an orbit. This design error required a redesign of the satellite-pointing-control
algorithms.
A more serious problem was found with the Hubble: the mirror was not
ground to the right prescription, and it suffered from spherical aberration (too flat by about 4 μm at the edges). As a consequence, new optics designs were
created, and a corrective optic was added for the existing instruments
(COSTAR). Figure 4.6 shows the before and after for the spherical aberration
problem. Subsequent scientific instruments, such as the WFPC2, built
corrections into the optics of the newer instruments. COSTAR was removed
during the last servicing mission because it was no longer needed.
4. http://www.mtwilson.edu, including a link to the 1906 article by George Hale describing the
new telescope.
Figure 4.6 On the top left, a FOC image of a star taken prior to the STS-61 shuttle mission to
service the HST, during which astronauts installed COSTAR. The broad halo (1-arcsecond
diameter) around the star is caused by scattered, unfocused starlight. On the right, following
installation, deployment, and alignment of COSTAR, starlight is concentrated into a
0.1-arcsecond radius circle. Images are reprinted courtesy of the Space Telescope Science
Institute (STScI), STScI-PRC1994-08. The bottom two images were taken of the center of NGC
1068 before and after COSTAR correction of Hubble’s aberration (STScI-PRC1994-07).5
The scientific instruments are mounted in bays behind the primary mirror. The WFPC2
occupied one of the radial bays, with an attached 45° pickoff mirror that allowed
it to receive the on-axis beam. (The best image quality is obtained on-axis.)
The WFPC2 field-of-view is distributed over four cameras by a four-
faceted pyramid mirror near the HST focal plane. Each of the cameras
5. http://www.spacetelescope.org/about/general/instruments/costar.html;http://hubblesite.org/
newscenter/archive/releases/1994/07/image/a/
Figure 4.7 WFPC2 optics. Light enters the optical train from the main telescope at left.
contains an 800 × 800 pixel Loral CCD detector. Three wide-field cameras operate at f/12.9, and each 15-μm pixel samples a 0.10-arcsecond portion of the sky. The three wide-field cameras cover an L-shaped field of view of 2.5 × 2.5 arcminutes. The fourth camera operates at 0.046″ (arcseconds, or 0.22 μradians) per pixel (f/28.3) and is referred to as the planetary camera. This sensor is therefore operating at the full resolution of the telescope. The fourth camera observes a smaller sky quadrant: a 34″ × 34″ field. This is a sufficiently large field of view to image all the planets but Jupiter. The spectral range extends from approximately 1150 to 10,500 Å. The exposure times range from 0.11 to 3000 s.
The WFPC2 was ultimately replaced by the Advanced Camera for
Surveys (ACS) and Wide-Field Camera 3 (WFC3), which used more
sophisticated detectors but had similar characteristics in terms of wavelength
coverage and angular resolution.
Figure 4.8 This image of Mars was taken by the HST using the WFPC2, on October 28,
2005, when Mars was near opposition, at a distance of approximately 70 million km from
earth. The image shows the blue, green, and red data from three filter wheel positions
(410 nm, 502 nm, and 631 nm). The spatial resolution is 10 km. Image is reprinted
courtesy of NASA, ESA, the Hubble Heritage Team (STScI/AURA), J. Bell (Cornell
University), and M. Wolff (Space Science Institute).6
Figure 4.9 For the Hubble/WFPC2 combination, altitude is 600 km, detector size is 15 μm,
and effective focal length is 57 m.
\[ \Delta\theta = 1.22 \cdot \frac{\lambda}{D_{\text{lens (mirror)}}}. \]

A few numbers are tested here, assuming a deep-blue wavelength (410 nm):

\[ \Delta\theta = 1.22 \cdot \frac{4.1 \times 10^{-7}\ \text{m}}{2.4\ \text{m}} = 2.08 \times 10^{-7}\ \text{radians}. \]

In order to compare this value to the given value of 0.043″, the given resolution is converted to radians:

\[ \Delta\theta = \frac{0.043\ \text{arcseconds}}{60\ \text{s/min} \cdot 60\ \text{min/deg}} \cdot \frac{2\pi\ \text{radians}}{360\ \text{deg}} = 2.08 \times 10^{-7}\ \text{radians}. \]
Applying this value to the hypothetical problem of the ground resolution that the Hubble would have if pointed down produces

\[ \text{GSD} = \Delta\theta \cdot \text{altitude} = 2.08 \times 10^{-7} \cdot 600 \times 10^{3}\ \text{m} = 0.125\ \text{m}. \]
Geometric resolution
The example thus far implies that the detector has infinite resolution. In
reality, however, it does not. The concept of similar triangles discussed
previously and this example’s values for the detector’s pixel size can be used to
compare the detector resolution to the best resolution offered by the telescope:
\[ \frac{\text{GSD}}{\text{altitude}} = \frac{\text{pixel size}}{\text{focal length}}, \]

or

\[ \text{GSD} = \frac{\text{pixel size}}{\text{focal length}} \cdot \text{altitude} = \frac{15 \times 10^{-6}}{57} \cdot 600 \times 10^{3} = 0.16\ \text{m}, \]
which is slightly worse than the best results that the telescope can provide—
the Airy disk from a distant star (or a small, bright light on the ground) would
not quite fill one detector pixel. The detector is undersampling the image in
the shortest wavelengths.
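Both numbers are easy to reproduce; a minimal Python sketch, using the values quoted above (the 600-km nadir view is the hypothetical case discussed):

```python
# Diffraction-limited vs. detector-limited (geometric) resolution for HST.
wavelength = 410e-9     # deep blue, m
diameter = 2.4          # primary mirror, m
altitude = 600e3        # hypothetical nadir view, m
pixel = 15e-6           # WFPC2 pixel pitch, m
focal_length = 57.0     # effective focal length, m

dtheta = 1.22 * wavelength / diameter      # Rayleigh criterion
print(dtheta * altitude)                   # ~0.125 m (diffraction limit)
print(pixel / focal_length * altitude)     # ~0.16 m (detector geometry)
```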
System, satellite, and instrument characteristics for these first two satellites
are enumerated in Table 4.2. IKONOS and Quickbird differ in design; the latter
is unique for not using a Cassegrain. Both use store-and-dump telemetry systems.
Space Imaging used a large number of ground stations; DigitalGlobe uses one or
two (northern) high-latitude ground stations. Both companies suffered system
loss in their initial launches. DigitalGlobe, launching after Space Imaging,
lowered the orbit of their satellite to provide a higher spatial resolution and give
an economic advantage over its competitor. A larger focal plane allowed them to
maintain a larger swath width.
A fleet of commercial satellites has followed IKONOS and Quickbird into orbit with ever-improving GSDs. The three (commercial) U.S. vendors have since been consolidated, with DigitalGlobe absorbing its competitors. The most recent systems have been designed to offer a spatial resolution of better than 0.5-m GSD for their panchromatic sensors. Imagery from Worldview-3, launched August 13, 2014, approaches a 0.35-m GSD.
7. M. Mecham, “IKONOS Launch to Open New Earth-Imaging Era,” Aviation Week & Space
Technology, McGraw-Hill, New York (October 4, 1999).
                 IKONOS                                Quickbird
Mass/size        726 kg at launch; main body           1024 kg (wet, extra hydrazine for
                 1.8 × 1.8 × 1.6 m                     low orbit); 3.04 m (10 ft) in length
Onboard storage  64 Gb                                 128 Gb
Payload data     X-band downlink at 320 Mbps           X-band downlink at 320 Mbps
8. Kodak Insights in Imaging Magazine (June 2003). See Fig. 6.4 for a similar sensor.
Figure 4.10 First light image from IKONOS of the Jefferson Memorial, taken September 30, 1999. Image reprinted courtesy of DigitalGlobe.
The multispectral arrays of 3375 pixels have a pitch of 48 μm. These 4:1 ratios for the detector array sizes and detector pitch are very typical of the design of these systems. The result is that the panchromatic sensors have 4× the resolution of the spectral sensors.
The digital processing unit compresses digital image files from 11 bits per
pixel (bpp) data to an average value of 2.6 bpp at a speed of 115 million pixels
per second. The compression is important for onboard storage and telemetry
purposes. The lossless, real-time compression of the imagery is a capability that
only recently has been made practical by modern computational resources. Importantly, IKONOS and Quickbird offer the extended dynamic range represented by 11 bits (DN = 0–2047), a significant improvement on the contemporary NASA systems. This topic is covered further in Chapter 7.
Figure 4.11 Image of the IKONOS satellite in the acoustic test cell at Lockheed Martin
Missile and Space in Sunnyvale, CA. It is basically a “baby brother” to the HST.
9. T. E. Lee et al., “The NPOESS VIIRS Day/Night Visible Sensor,” Bull. Am. Meteorol. Soc.
87, 191–199 (Feb. 2006); S. E. Mills et al., “Calibration of the VIIRS Day/Night Band
(DNB),” 6th Annual Symposium on Future National Operational Environmental Satellite
Systems-NPOESS and GOES-R; https://ams.confex.com/ams/90annual/techprogram/
paper_163765.htm
Figure 4.12 First light image from IKONOS, taken September 30, 1999. The Jefferson
Memorial is shown at a higher resolution in Fig. 4.10. North is to the right in this image
orientation.
Figure 4.13 The IKONOS telescope, built by Kodak, features three curved mirrors. Two
additional flat mirrors fold the imagery across the inside of the telescope, thereby
significantly reducing telescope length and weight. The telescope is an obscured, three-
mirror anastigmat with two fold mirrors, a 70-cm-diameter primary with a 16-cm central hole,
a 10.00-m focal length, and a 1.2-μrad instantaneous field of view (pixel).
Film-speed ratings can be used to estimate exposure time. CCD arrays have sensitivity not unlike regular daylight film, with a standard speed defined as ISO 100.10 An old photographic rule of thumb is that the exposure time at f/11 to f/16 is 1/ISO, or in this case 1/100 s. The f/14 IKONOS optics provide sufficient light under this rule of thumb.
10. International Organization for Standardization, or ISO [the successor to the American
Standards Association (ASA)], ratings will mostly be familiar to old film photographers.
Kodak Plus-X Pan and Kodak Kodacolor films were rated ISO 100. Kodachrome, as made
famous by National Geographic photographers and musician Paul Simon, was rated ASA 25.
Figure 4.15 Severodvinsk, as captured by IKONOS. Compare this image with the Corona
image in Fig. 3.8. Acquisition date and time: 06-13-2001, 08:48 GMT. Nominal collection
azimuth: 133.4306°. Nominal collection elevation: 79.30025°. Sun angle azimuth:
168.5252°. Sun angle elevation: 48.43104°.
Figure 4.16 This image of the continental United States at night is a composite assembled
from data acquired by the Suomi NPP satellite in April and October 2012. The nominal
imaging time is 1:30 AM in each orbit. The primary downlink connects to Svalbard, Norway.
Image reprinted courtesy of NASA Earth Observatory/NOAA NGDC.
Figure 4.17 Edited image of the Long Beach harbor at night, taken from the International
Space Station (ISS016-E-27162.JPG). Date and time: Feb 4, 2008, 07:44:37.24 GMT;
camera: Nikon D2Xs; exposure time: 1/20 s; f/2.8; and focal length: 400 mm.
4.5 Problems
1. What are the focal length, diameter, and f/# of the Hubble primary optic?
2. For a pushbroom scanner like IKONOS, calculate the data rate implicit in
a system with a 13,500-pixel linear array, assuming 16 bits/channel (i.e., per pixel), imaging pixels on the ground with a 0.8-m GSD. Data compression
by a factor of 4 reduces the required number of bits to 4 bits/channel.
Assume the spacecraft is moving at 7.5 km/s. How many bits/second must
the telemetry system be able to handle? To do this problem, calculate the
length of time it takes the spacecraft to move one meter. Then calculate
the number of bits acquired in that time. Compare to the known
bandwidth of the IKONOS satellite.
3. Skybox Imaging has flown a staring focal plane for high-resolution
imaging from low-earth orbit. The system can acquire 5 megapixel frames
at rates up to 30 Hz for up to 30 s. What bandwidth would be required for
such a sensor to operate in near real time? What is the data volume for one
panchromatic scene (30 s)? Assume 12 bits/pixel.
4. The Hubble telescope ACS cameras have optical systems of f/25 and f/70.
To what focal lengths do the two channels correspond?
5. At opposition, the distance from the earth to Mars can be as low as 65
million km. What is the best spatial GSD the Hubble WFPC2 could
produce at that range?
6. The Nikon camera used to take the image of Los Angeles in Fig. 4.17 has
a pixel pitch of 5.5 × 5.5 μm. What spatial resolution can the 400-mm lens
used for this image give under ideal circumstances? The ISS altitude is
333 km. Assume a nadir view. Compare to the distance the spacecraft
moved in 0.05 s (the exposure time).
\[ \vec{F} = G\,\frac{m_1 m_2}{r^2}\,\hat{r}, \tag{5.1} \]

\[ F = g_o m, \tag{5.2} \]

\[ F = g_o m \left(\frac{R_{\text{earth}}}{r}\right)^2, \tag{5.3} \]

where R_earth = 6380 km, and this example uses m_earth = 5.9736 × 10²⁴ kg. Although m_earth and G are not known to high accuracy, the product is GM_earth = (3.98600434 ± 2 × 10⁻⁸) × 10¹⁴ m³ s⁻². The ±2 × 10⁻⁸ in the parentheses is the error in the last digit of the expression—the term has nine significant digits.1
\[ v = \omega r, \]
Examples
1. Rees, 1990.
or

\[ \frac{r^3}{R_{\text{earth}}^3} = \frac{g_o}{\omega^2 R_{\text{earth}}} \;\Rightarrow\; \frac{r}{R_{\text{earth}}} = \left(\frac{g_o}{R_{\text{earth}}\,\omega^2}\right)^{1/3} = \left(\frac{9.8 \times (86400)^2}{6.38 \times 10^{6} \times (2\pi)^2}\right)^{1/3} = (290.45)^{1/3} = 6.62. \]
The geosynchronous orbit is 6.6 earth radii (geocentric). What is the velocity
of the satellite?
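A quick sketch of the answer, using the 6.62 Re radius just derived (our arithmetic, not the book's):

\[ v = \omega r = \frac{2\pi}{86400\ \text{s}} \times 6.62 \times 6.38 \times 10^{6}\ \text{m} \approx 3.07\ \text{km/s}. \]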
The orbits of the planets are elliptical, not circular. Kepler's three laws describing planetary
motion apply equally to satellites:
1. Planetary orbits are ellipses, with one focal point at the center of the sun.
2. Equal areas are swept out in equal times.
3. The square of the orbital period is proportional to the cube of the semi-
major axis.
It was one of the great triumphs of Newtonian mechanics that Kepler’s laws
could be derived from basic physics principles.
Figure 5.1 A graph of radius r versus angle θ for an elliptical orbit. In cylindrical or spherical coordinates, r = a(1 − ε²)/(1 + ε cos θ).
Figure 5.2 Earth is at one focus, x = 5.29; the radius (distance from earth) ranges from 2.71 to 13.29 Re (earth radii).
\[ v = \sqrt{GM\left(\frac{2}{r} - \frac{1}{a}\right)}, \]
where r is the instantaneous radius from the center of the earth, and a is the
semi-major axis.
The period depends only on the semi-major axis. Following the same calculation given earlier to derive the period for a geosynchronous orbit,

\[ v = \sqrt{\frac{g_o}{r}}\,R_{\text{earth}} \;\Rightarrow\; \omega = \frac{v}{r} = \sqrt{\frac{g_o}{r^3}}\,R_{\text{earth}} = \frac{2\pi}{\tau}, \]

or

\[ \tau = \frac{2\pi}{R_{\text{earth}}}\sqrt{\frac{r^3}{g_o}} \;\Rightarrow\; \tau^2 = \frac{4\pi^2}{g_o R_{\text{earth}}^2}\,r^3 = \frac{4\pi^2}{GM_{\text{earth}}}\,r^3. \tag{5.8} \]
This result is quickly obtained here for a circular orbit but is more generally
true. The value of the orbital period can be obtained by replacing the radius of
the circle with the semi-major axis.
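Equation (5.8) is easy to check numerically; a minimal Python sketch, using the GM value quoted earlier:

```python
import math

GM = 3.98600434e14   # m^3/s^2, from the value quoted above
R_EARTH = 6.38e6     # m

def period_s(semi_major_axis_m):
    """Orbital period from Eq. (5.8): tau^2 = 4 pi^2 a^3 / GM."""
    return 2 * math.pi * math.sqrt(semi_major_axis_m ** 3 / GM)

# Check against the geosynchronous result: r = 6.62 earth radii
print(period_s(6.62 * R_EARTH) / 3600)   # ~24 hours
```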
5.5.2 Eccentricity
The eccentricity ε (or e) determines the shape of the orbit: ε ¼ c/a. For a circle,
ε ¼ 0; for a straight line, ε ¼ 1. The latter would be a ballistic missile: straight
up and straight down.
Figure 5.3 Ground track for four orbits by a LEO satellite, Landsat 4, crossing the equator
during each orbit at 0940 local time. The solar sub-point is just above the coast of South
America, corresponding to the time of the satellite crossing the equator. During the 98-min
orbit, the earth has rotated 24.5°.
Figure 5.4 (a) LEO illustration. The two white lines indicate the orbit and the ground track of
the orbit. (b) The sensor on Landsat 4 sweeps along the orbit, aimed at the sub-satellite point
(in the nadir direction). Over sixteen days, the satellite will have observed the entire earth.
Sun-synchronous orbits are popular for civil radar satellites because they make a consistent solar-array orientation to the sun practical; a dawn–dusk orbit plane allows power systems to largely dispense with batteries (e.g., Radarsat).
Other, non-polar orbits are also used for LEO; for example, the Operationally
Responsive Space (ORS-1) satellite was launched into a 40° inclination orbit
to allow a focus on mid-latitude regions of interest.2
2. https://directory.eoportal.org/web/eoportal/satellite-missions/o/ors-1.
Figure 5.5 The Svalbard Satellite Station (SvalSat) on Platåberget (a mountain near
Longyearbyen, Norway) is ideally positioned as a ground station for polar-orbiting satellites.
From SvalSat, all of the 14 daily passes of a polar satellite can be seen, compared with only ten from the Tromsø or Kiruna stations. A 300-Mbps downlink could theoretically transfer 180 gigabits (22.5 GB) in a 10-min pass. (The satellite illustrated here has a 10-min access;
the tracks shown on the right range from 10–13-min access times at an altitude of 617 km.)
Compare this value to the amount of onboard storage on Ikonos and Quickbird, discussed in
Chapter 4 (Table 4.2).
Figure 5.6 MEO orbit, illustrated for two GPS orbit planes.
Figure 5.8 The TDRSS orbit and the field of view from the satellite.
Figure 5.9 TDRS views the earth. The GOES satellite views in Chapter 1 (Figs. 1.9 and 1.10) are similar to that shown here. TDRS-7, launched in 1995, has an apogee of 35,809 km, a perigee of 35,766 km, a period of 1436.1 min, and an inclination of 3.0°.
Late in the life of TDRS-1, the satellite had depleted its north–south
station-keeping capability, and the inclination had increased to the point
where it could view the Antarctic for part of the day. This situation allowed
for support from an NSF ground station at McMurdo. The image of the earth
taken from Apollo 17, as shown at the beginning of the book, was taken from
near-geosynchronous orbit. Compare the field of view to that seen in the
GOES illustrations in Chapter 1.
Figure 5.10 Molniya orbit ground track. The sub-solar point is centered on India.
Figure 5.11 (a) The view from Molniya orbit, corresponding to the location at apogee
illustrated above (06:54 UT). (b) Some twelve hours later, the view from the apogee over the
U.S. The sub-solar point is in the Caribbean (18:05 UT).
The altitude for the HEO orbit at perigee is 500 km—just high enough above
the atmosphere to avoid excessive atmospheric drag.
The Molniya orbit is only one of a variety of “magic” orbits with the inclination and eccentricity matched to keep the argument of perigee (and thus the apogee location) constant. The
Sirius radio satellites use a highly inclined orbit to allow the satellites to dwell
over North America, providing more direct access to users in the urban
canyons of cities in the United States. (By contrast, the XM system used a
geosynchronous orbit.)
Table 5.2 Illustrative values for satellites in the orbits discussed in this section.3
Orbit LEO MEO HEO (Molniya) GEO
3. These numbers are primarily from the Systems Tool Kit (STK) database; STK is a product
of Analytical Graphics, Inc.
\[ \text{bits} = \text{bandwidth} \cdot \text{time} = 10^{9}\ \text{bits/s} \times 300\ \text{s} = 3 \times 10^{11}\ \text{bits, or } 37.5\ \text{GB}. \]
Note that there are 8 bits (b) to the byte (B). A single Worldview-1
panchromatic image typically runs from 1–2 GB without compression.
DigitalGlobe normally applies a compression algorithm that reduces the size
of the image by a factor of 4 or so.
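In code form (a sketch; the 1-Gbps bandwidth and 300-s pass are the values assumed above):

```python
# Downlink-volume sketch for the 300-s pass discussed above.
bandwidth_bps = 1e9          # 1 Gbps
pass_time_s = 300
bits = bandwidth_bps * pass_time_s
print(bits / 8 / 1e9, "GB")  # 37.5 GB (8 bits to the byte)
```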
5.8 Problems
1. Calculate the angular velocity with respect to the center of the earth for a
geosynchronous orbit in radians/second.
2. Calculate the period for a circular orbit at an altitude of one earth radius
(r ¼ 2 Re).
3. Calculate the period for a circular orbit at the surface of the earth, at the
equator. What is the velocity? This is a “Herget” orbit and is considered
undesirable for a satellite.
4. Look up the orbits for the eight planets (and Pluto) and plot their period
versus their semi-major axis. Do they obey Kepler’s third law? This is best
done by using a log–log plot. Even better, plot the period to the two-thirds power versus the semi-major axis (or mean radius). The proper system of
units for this problem is earth-years and astronomical units (AU).
5. Derive the radius of the orbit for a geosynchronous orbit.
6. Can Antarctica be seen from geosynchronous orbit? Geostationary?
7. A satellite is in an elliptical orbit with a perigee of 1.5 earth radii
(geocentric) and an apogee of 3.0 earth radii (geocentric). If the velocity is
3.73 km/s at apogee, what is the velocity at perigee? What is the semi-
major axis? Hint: use the principle of conservation of angular momentum:
L = mvr = constant.
8. An ongoing desire of the intelligence, surveillance, and reconnaissance
(ISR) community is long-dwell imaging (LDI), or persistent surveillance.
If you could place a satellite at an altitude of 1.0 earth radius (2-Re
geocentric), how long an imaging window would it provide over a given
target? You need the period (or velocity) and a bit of geometry to answer
the question. Take the horizon to be ±45°.
9. The Sirius-1 satellite has an orbit of 53,432 km × 30,895 km (geocentric).
The inclination is 61.2°, and the apogee is over Canada. See Fig. 5.12 for
an illustration. What are the semi-major axis, eccentricity, and period of
the orbit?
10. A rather popular concept that has emerged over the last few years is the
concept of a tactical satellite—one that does not depend on a remote
ground station but instead directly downlinks data to soldiers “in theater.”
Assuming you had such a satellite (e.g., ORS-1) and a relatively restrictive
field unit (small dish antenna), how many 1-GB images could you
downlink in one pass over a ground station with a 100-Mbps (megabits/
second) capability in 100 s?
1. Termed the “red edge” in the Manual of Photographic Interpretation, ASPRS, 1997, the
signature might be more properly referred to as an infrared signature. It marks the boundary
between absorption by chlorophyll in the red visible region and scattering due to the leaf’s
internal structure in the NIR region. http://www.eumetrain.org/data/3/36/navmenu.php?
page=3.2.3.
Figure 6.1 Visible image of San Francisco from Landsat 7, taken April 23, 1999, on flight
day 9, orbit 117, 1830Z. The satellite is not yet in its final orbit and not on the standard
reference grid, WRS, so the scene is offset 31.9 km east of the nominal scene center (path
44, row 34). Landsat has been the premier earth resources satellite system for four decades.
Image reprinted with special thanks to Rebecca Farr, NESDIS/NOAA.
2. See also the Manual of Photographic Interpretation, page 67 ASPRS, 1997. The ASPRS
Manual cites Dartnall et al. 1983, “Microspectrophotometry of Human Photoreceptors,”
pages 69–80 in Color Vision, edited by Mollon and Sharpe.
Figure 6.2 Comparison of some synthetic and natural materials. The olive-green paint
mimics the grass spectrum in the visible to NIR but then deviates.
Figure 6.3 The white curves indicate the sensitivity level for the three types of cones. The
black curve indicates the sensitivity of the rods.3
Figure 6.4 The KODAK KLI-8023 Image Sensor is a multi-spectral, linear solid state image
sensor for color-scanning applications. The 8000-pixel, 3-row detector features a 9-μm pitch and filters for red, green, and blue. An enlarged view and a microscopic view of one
end are superimposed on the photograph showing the three rows and the individual pixels
that make up the detector.
One common design uses multiple camera heads: some contribute to the panchromatic image, and four contribute to the multi-spectral image. This approach
requires very highly controlled mounting and calibration of the different
cameras to produce a complete spectral image because each of the four multi-
spectral cameras uses a different color filter.
The dispersive elements are prisms (transmission) and gratings (typically
reflective). Prisms make use of the variation of the index of refraction with
wavelength in glass. This variation in velocity with wavelength is termed
dispersion. Prisms are not widely used in space systems, but they were used in
the airborne HYDICE sensor in the 1990s. Figure 6.5 shows the characteristic
rainbow of colors dispersed from a white light source.
A diffraction grating is traditionally a ruled pattern on a glass or metal
surface, with thousands of narrow lines in parallel grooves. A CD or DVD
surface will show a rainbow spectrum similar to the one shown in Fig. 6.5.
The physics of the grating follows the same principles of interference described
in Chapter 3, leading up to the Rayleigh criteria. Reflective (metal) gratings
are common in spectral imaging systems; the grating is frequently inscribed on
the surface of a reflective mirror, typically curved as part of the optical system.
One final comment on the technologies and terminology: Airborne and
satellite systems that measure spectral data in a few bands are termed
multispectral imagers (MSI), with 4 to 16 bands depending somewhat on the
satellite generation. Higher-spectral-resolution systems typically make
Figure 6.5 Light dispersion through a prism for a mercury source lamp. Image reprinted
courtesy of D-Kuru/Wikimedia Commons.4
6.4 Landsat
In late July 1972, NASA launched the first Earth-Resources Technology
Satellite, ERTS-1. The name of the satellite and those that followed was soon
changed to Landsat. These platforms have been the primary earth-resources
satellites ever since, utilizing MSI with a spatial resolution that has varied
from 30–100 m. After a decade-long hiatus in the operational pace, Landsat-8
(also called the Landsat Data Continuity Mission, or LDCM) was launched
in 2013. Table 6.1 shows some of the parameters for the sequence of missions.
The evolution in data storage technology, bandwidth, and the changes in
downlink technology shown here for Landsat mirror the evolution of the
industry. Resolution has gradually improved with time; Landsat 7 added a 15-m panchromatic band.
4. http://commons.wikimedia.org/wiki/File:Light_dispersion_of_a_mercury-vapor_lamp_with_a_
flint_glass_prism_IPNr%C2%B00125.jpg.
Table 6.1 Landsat parameters. Note that Landsat 6 failed at launch, and Landsat 7 suffered a mechanical failure in 2003 that has since limited its utility.

Satellite           On-Orbit/Operational Dates             Sensors    Resolution (m)   Equatorial Altitude (km)   Data Link
Landsat 1 (ERTS-A)  July 23, 1972 to January 6, 1978       MSS, RBV   80, 80           917                        Direct downlink with a tape recorder (15 Mbps)
Landsat 2           January 22, 1975 to February 25, 1982  MSS, RBV   80, 80           917
Landsat 3           March 5, 1978 to March 31, 1983        MSS, RBV   80, 30           917
Landsat 4           July 16, 1982 to December 14, 1993     MSS, TM    80, 30           705                        Direct downlink with TDRSS (85 Mbps)
Landsat 5           March 1, 1984 to January 2013          MSS, TM    80, 30           705
Landsat 6           October 5, 1993 (failed at launch)     ETM+       n/a              —
Landsat 7           April 15, 1999 to date                 ETM+       30 (pan 15)      705                        Direct downlink (150 Mbps) with solid state recorders (380 Gb)
Landsat 8 (LDCM)    February 11, 2013                      OLI, TIRS  15/30, 100       705                        Direct downlink (384 Mbps) with solid state recorders (3.8-Tb BOL / 3.1-Tb EOL)
Figure 6.7 Subsequent orbits are displaced 2500 km to the west. There are 233 unique
orbit tracks.
The orbit track for Landsat 7 is further illustrated in Fig. 6.8. The ground
track is illustrated for 2 orbits. The satellite is on an ascending node on the
night side, and it descends southward on the day side.
Figure 6.8 This orbit ground track corresponds to the San Francisco image in Fig. 6.1.
The yellow spot just below Mexico City is the sub-solar point, April 23, 1999, ~1830Z.
MSS Band   Wavelength (μm)
4          0.5–0.6
5          0.6–0.7
6          0.7–0.8
7          0.8–1.1
8          10.5–12.4
6. Landsat 1-5 Multispectral Scanner (MSS) Image Assessment System (IAS) Radiometric
Algorithm Description Document; USGS, June 2012.
7. http://landsathandbook.gsfc.nasa.gov/.
The telescope is a Cassegrain, as seen with several earlier systems. The primary mirror (outer) aperture is 40.64 cm; the clear inner aperture is 16.66 cm. The effective focal length is 2.438 m, f/6. The instantaneous field of view (IFOV) for one pixel of the high-resolution panchromatic sensor is 42.5 μrads.
The relay optics consist of a graphite-epoxy structure containing a folding
mirror and a spherical mirror that are used to relay the imaged scene from the
prime focal plane to the band 5, 6, and 7 detectors on the cold focal plane.
There is a mechanical scanning mirror at the beginning of the optical path
that oscillates at 7 Hz (Fig. 6.9). The scan-correction mirrors compensate for
the satellite’s forward motion as the sensor accumulates 6000 pixels in its
whiskbroom cross-track sampling. The scan-line correction mirror failed on Landsat 7 in 2003, causing the satellite to collect spatially distorted data for the remainder of the mission.
8. With thanks to Dr. Carl Schueler, Director Advanced Concepts, Raytheon, May 1999 and
http://landsathandbook.gsfc.nasa.gov/.
Bands 1–5 and 7 each have a 30-m resolution; the LWIR detector has a 60-m
resolution (an improvement over the 120-m resolution of the TM). The
detectors are arranged to have coincident 480-m coverage down track. The
focal plane is cooled to 85 K via a (passive) radiative cooler. Table 6.4 lists
each band’s parameters.
Figure 6.11 The Landsat 7 spectral-band base response functions are plotted here as a
function of wavelength. These values are from the ground calibration numbers provided by
NASA/GSFC. The higher-resolution panchromatic band covers the same region as bands
2–4 but does not extend into the blue in order to avoid atmospheric scattering at shorter
wavelengths.
Table 6.5 Spatial resolution and swath of Landsat 7. Note that band 6 has a 60-m
resolution; earlier missions featured a 120-m resolution in the LWIR.
Band Wavelength (nanometers) Detector Resolution (m)
1 Blue 450–520 Si 30
2 Green 520–600 Si 30
3 Red 630–690 Si 30
4 NIR 760–900 Si 30
5 SWIR 1 1550–1750 InSb 30
6 LWIR 10.40–12.5 μm HgCdTe 60
7 SWIR 2 2090–2350 InSb 30
8 Pan 520–900 Si 15
The band-numbering scheme, with band 6 seemingly out of order, is due to the temporal evolution of the TM design.9 Table 6.5 lists the specific parameters of each band.
6.4.2.3.6 ETM dynamic range
The dynamic range for the Landsat sensors is typical of the satellites flown in
the first decades of remote sensing. The TM and ETM sensors have an 8-bit
dynamic range—meaning that the digital number varies from 0 to 255, thus
defining the amount of data to be broadcast to the ground. By contrast, the
6-bit MSS data allow for grey levels from 0–63. As indicated in Section 4.2,
modern commercial systems offer an 11- or 12-bit dynamic range, allowing a
range of 0–2047 or 0–4095, respectively.
9. Professor David Landgrebe, private communication, 2002. The original design was to include
five reflective bands. When NASA allowed the additional reflective band at 2.1 mm, it became
band 7. The most recently added band, band 8, is the high-spatial-resolution panchromatic
channel. The numbering scheme finally changed in a significant way with Landsat 8.
10. It is important to discriminate between upper and lower case for the “b” in Mbps or Mb/s.
For a 185-km swath ⇒ (185 km)/(30 m/pixel) ≈ 6000 pixels/scan line, or 3.36 × 10⁵ (56 × 6000) bits per scan line. What is the time interval corresponding to one scan line?

\[ t = \frac{30\ \text{m}}{7.5 \times 10^{3}\ \text{m/s}} = 0.004\ \text{s, or } 4.0\ \text{ms}. \]
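A sketch of the implied data rate (our arithmetic; compare with the 85-Mbps TDRSS link in Table 6.1):

```python
# Implied ETM+ data rate (a sketch): 56 bits/pixel x 6000 pixels per
# 4.0-ms scan line.
bits_per_line = 56 * 6000      # 3.36e5 bits
line_time_s = 30 / 7.5e3       # 4.0 ms per 30-m scan line
print(bits_per_line / line_time_s / 1e6, "Mbps")   # ~84 Mbps
```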
Figure 6.12 Landsat 8 spectral bands.12 Atmospheric transmission values for this graphic
were calculated using MODTRAN for a USA 1976 Standard atmosphere, summertime, with
scattering. Band 1 (aerosol) is the narrow, dark-blue band on the left (unlabeled); the
panchromatic band (8) is indicated in grey between 0.5 and 0.7 mm. The panchromatic band
does not extend into the NIR, a major change from Landsat 7. The narrow cirrus band (9) is
in an absorption band for water vapor and is intended to enable atmospheric compensation.
Figure 6.13 The panchromatic sensor has almost no blue response in order to avoid image
degradation caused by atmospheric scattering. The panchromatic sensor extends well into the
NIR. In nanometers, blue = 450–520, green = 520–600, red = 630–690, and NIR = 760–900.
IKONOS has four spectral responses, all modeled after the first four Landsat spectral bands. The
calibration values for IKONOS are given in Fig. 6.13. The panchromatic-
sensor spectral response extends well into the near-infrared, and the response
in blue is relatively poor. This is by design, in some sense, to reduce the effect
of aerosols (scattering) in the high-spatial-resolution channel.
The response functions for most of the other commercial sensors are
all very similar. The subtle differences become important when spectral
quantities are estimated, such as vegetation health or area coverage. More
significant changes started to occur, however, when DigitalGlobe launched
the Worldview-2 sensor with 8 spectral (reflective) bands (October 8, 2009).
The sensor preceded the LDCM into orbit but has some similarities, including
the short-wavelength “coastal blue” band. The new Worldview design also
includes a yellow band, which is helpful in the study of shallow coastal water
for bathymetry. The Worldview-3 mission (August 13, 2014) carries the same
8-band VNIR sensor and a new 8-band SWIR focal plane that promises to
dramatically advance the art of spectral imaging from space.
The markets and applications for these sensors are still being created. Their
improved spatial resolution (a factor of ten greater than Landsat) is ideal for
studies of fields and forests by those who wish to observe and characterize
vegetation; presently, the largest market is in agriculture. Figure 6.14 shows the
spectral response functions, including the new short-wave bands as designed by
Fred Kruse and Sandra Perry.13 The design used hyperspectral data from
AVIRIS (described below) and follows in some sense the NASA/Terra/ASTER
sensor, offering great promise for geologic applications.
13. Kruse and Perry, “Mineral Mapping Using Simulated Worldview-3 Short-Wave-Infrared
Imagery,” Remote Sensing 5, 2688-2703 (2013).
Figure 6.14 The Worldview-3 response functions are illustrated here, superimposed on a
blackbody curve for the solar spectrum. The scale is not given here, but the SWIR bands are
in a portion of the curve that has less than 10% of the radiance found in the visible spectrum.
(The visible/near-IR response curves are the same for Worldview-2.) The high-resolution
panchromatic sensor has a resolution of one-third of a meter, and the VNIR bands are four
times that value, at 1.33 m. Several of the bands overlap, NIR-1 and NIR-2 in particular. It is
more difficult to see, but SWIR bands 5 and 6 also overlap. The panchromatic sensor still
extends into the blue.14
Figures 6.15 and 6.16 show the characteristics of the spectral data from
Worldview-3. The differences from frame to frame are subtle, but one fairly
obvious transition is seen in the second row of Fig. 6.15—the golf course on the
right side of the scene changes from dark to bright at the transition from the
visible to the near-infrared. Figure 6.16 shows plots for several characteristic
scene elements. The data have been converted to a rough reflectance using a
common assumption that the scene must contain pixels that range from 0 to
100% reflectance. (The technique is termed internal average relative
reflectance.) The spectra in Fig. 6.16 show the rather dramatic range in reflectance in the “grass” class, taken from an area just outside the field of view of the image chips shown in Fig. 6.15: a rise from 15–20% reflectance in the visible to 90% in the NIR.
14. With thanks to Giovanni Marchisio at DigitalGlobe for the wavelength response functions.
15. http://www.seos-project.eu/modules/agriculture/agriculture-c01-s03.html. There is at least
one portable commercial product designed to measure the NDVI at the ground level for
individual plants: the Trimble Greenseeker crop sensing system. It uses light emitting diodes
(LEDs) at 680 and 780 nm. The handheld unit is being marketed as of 2016.
Figure 6.15 The Worldview-3 sensor provides 16 unique images for each scene. Here, the
visible/near-IR imagery is collected at a 1.2-m GSD and the SWIR data at a 7.5-m GSD.
The “small multiples” technique (Edward Tufte) provides some insight into the variations, but
the differences here are subtle. The SWIR bands provide more differentiation between the
various “impervious” surfaces, i.e., concrete, asphalt, and similar.
Figure 6.16 Spectra from several characteristic regions in the scene shown in Fig. 6.15.
The radiance data have been converted to reflectance using a commonly used assumption
that the scene reflectances vary from 0–100%, i.e., there are perfectly dark and bright
targets. The ocean surface has near-zero reflectance in the SWIR. Vegetation shows a
characteristic peak in the “green” at 550 nm.
\[ \text{NDVI} = \frac{DN_{\text{NIR}} - DN_{\text{red}}}{DN_{\text{NIR}} + DN_{\text{red}}}, \]

where the digital number (DN) comes from bands 4 and 3 for Landsat TM
and ETM data. Similar systems such as Quickbird will have a similar pair,
typically 4 and 3 for the NIR and red bands, respectively. For Worldview-3,
as illustrated in Fig. 6.15, the ratio would be determined from band 7 (832 nm)
and band 5 (660 nm).
Figure 6.17 illustrates the NDVI for the San Diego scene shown
previously in Chapter 1 (Figs. 1.11–1.13). The NDVI plot is scaled from −0.4 to +0.2; healthy vegetation will have an NDVI > 0. Those healthy
vegetation regions mostly correspond to the golf courses and city parks in this
scene; they appear bright red in the false-color infrared figure on the right.
There are a number of variations on the NDVI designed to produce a quantity
that is more directly proportional to physical parameters such as biomass, but
the standard index has a great advantage in simplicity and relatively
widespread acceptance.
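A minimal NDVI sketch in Python; the first pixel uses the vegetation band means quoted below in Fig. 6.18 (DN 123 and 72), and the second pixel is purely hypothetical:

```python
import numpy as np

# NDVI for Landsat TM/ETM band numbering: band 4 = NIR, band 3 = red.
nir = np.array([123.0, 60.0])   # band 4 DN
red = np.array([72.0, 80.0])    # band 3 DN

ndvi = (nir - red) / (nir + red)
print(ndvi)   # healthy vegetation gives NDVI > 0
```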
Figure 6.17 Spectral data from Landsat 7, taken 2001-06-14. The NDVI is shown on the
left, the false-color infrared image on the right (NIR, red, and green bands appear as RGB).
The inset in the top right of each figure is the golf course adjacent to the Hotel del Coronado.
\[ \cos\theta = \frac{A \cdot B}{|A|\,|B|}. \tag{6.2} \]
Table 6.7 gives the values of the mean spectra for two regions, along with the
magnitudes of the two vectors. The values for bands 3 and 4 are those shown
in Fig. 6.18. Calculating the dot product of the two vectors yields a value of 84,446.1.
Calculating the spectral angle gives cos θ = 0.93, or θ = 21.8°. This is a
fairly arbitrary value without the context of the other data in the scene, but it
indicates a significant difference in spectral angles and a clear means of
distinguishing the two components of the scene.
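A sketch of the spectral-angle calculation (Eq. 6.2) in Python; the vegetation vector uses the band 3/4 means quoted in the text, the "other" vector is hypothetical, and a real calculation would use all six reflective bands:

```python
import numpy as np

vegetation = np.array([72.0, 123.0])   # band 3 and band 4 means
other = np.array([110.0, 60.0])        # hypothetical second class

cos_theta = vegetation @ other / (np.linalg.norm(vegetation) * np.linalg.norm(other))
print(np.degrees(np.arccos(cos_theta)))   # spectral angle in degrees
```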
Figure 6.18 Spectral data from Landsat 7, taken 2001-06-14. Data from two small regions
are plotted as a function of DN for bands 3 and 4. The labels for each of the two classes
correspond to the means, also given in Table 6.7, e.g., for vegetation, the mean of band 3 is
a DN of 72, and the mean for band 4 is 123. The calculation of the dot product for these
vectors shown in the text is done for the full set of six reflective bands.
Figure 6.19 AVIRIS line diagram. AVIRIS uses silicon (Si) detectors for the visible range
and indium-antimonide (InSb) for the near infrared, cooled by liquid nitrogen. The sensor has
a 30° total field of view (full 614 samples) and one-milliradian instantaneous field of view
(IFOV, one sample), calibrated to within 0.1 mrad. The dynamic range has varied over time;
10-bit data encoding was used through 1994, and 12-bit data have been recorded since
1995.
6.8.1 AVIRIS
AVIRIS is a world-class instrument in the realm of earth remote sensing, a
unique optical sensor that delivers calibrated images of the upwelling spectral
radiance in 224 contiguous spectral channels (also called bands), with
wavelengths from 380 to 2500 nm. The instrument typically flies aboard a
NASA ER-2 plane (a U-2 modified for increased performance) at
20 km above sea level and 730 km/h. In recent years the sensor has also
been flown on a Twin Otter at a 2–3-km altitude (6000–17,500 feet) for a
higher spatial resolution.
The AVIRIS instrument contains 224 detectors, each with a wavelength-
sensitive range (also known as spectral bandwidth) of approximately 10 nm,
allowing it to cover the range between 380 nm and 2500 nm. Plotted on a
graph, the data from each detector yields a spectrum that, compared with the
spectra of known substances, reveals the composition of the area under
surveillance.
AVIRIS uses a scanning mirror to sweep whiskbroom fashion, producing 614 pixels for the 224 detectors on each scan. For the original ER-2 data, the 1-mrad IFOV from a 20-km altitude corresponds to a ground pixel of roughly 20 m.
Figure 6.20 An AVIRIS “hypercube.” The 3D perspective shows a small image chip as a
false-color infrared image (750, 645, and 545 nm) with two spatial dimensions and the
wavelength in the third dimension. The data are in radiance, and the atmospheric absorption
bands are fairly obvious in this format. The intensity of the light in the water decreases
quickly with wavelength. Flight occurred on November 16, 2011, UTC 20:40, with a spatial
resolution of 7.5 m.
16. http://aviris.jpl.nasa.gov/.
Figure 6.21 This image from AVIRIS shows elements of a scene acquired on November
16, 2011. The mission was flown on a NASA ER-2 plane at an altitude of 7500 m (25,000
feet). The image on the left is roughly true color. Four characteristic regions of interest
(collections of pixels) were measured, with spectra shown in radiance (top) and reflectance
(bottom). The small characteristic peak in the green (550 nm) and IR ledge are evident in the
vegetation signature, particularly in the reflectance data. The “white-roof” spectra are from
the rooftops more clearly seen on the right side of Fig. 1.15. “Sand” is from the region of
open beach.
This brief description does little justice to the difficulty of the process. The
illustration here uses the FLAASH algorithm, which, like most such
approaches, is based on MODTRAN (as shown in Chapter 3).17
17. S. M. Adler-Golden et al., “Atmospheric correction for shortwave spectral imagery based on MODTRAN4,” Proc. SPIE 3753, 61–69 (1999); F. A. Kruse, “Comparison of ATREM, ACORN, and FLAASH Atmospheric Corrections using Low-Altitude AVIRIS Data of Boulder, Colorado,” Proc. 13th JPL Airborne Geoscience Workshop, Jet Propulsion Laboratory, Pasadena, CA, 31 March – 2 April 2004, JPL Publication 05-3.
6.8.2 Hyperion
The first major VNIR/SWIR hyperspectral sensor to fly in space was the
Hyperion sensor on the NASA EO-1 platform. EO-1 is a test bed for earth
resources instruments, launched in conjunction with Landsat 7 and designed
to test follow-on technology for NASA systems. EO-1/SAC-C was launched
November 21, 2000 from Vandenberg Air Force Base (VAFB) in a 705-km
orbit, trailing just after Landsat 7. The Hyperion sensor is the Thompson Ramo Wooldridge (TRW)-built cousin to the payload of NASA’s ill-fated
Lewis satellite effort. [Also on EO-1: the Advanced Landsat Imager (ALI), the
predecessor to the Landsat 8 OLI sensor.]
Hyperion offers a 30-m spatial resolution covering a 7.5-km swath. The
0.4–2.5-mm spectral range is analyzed at a 10-nm spectral resolution (220
bands). Figure 6.22 contains a nice illustration of the spectral nature of the
sensor and an unusual look at the beginnings of a blackbody curve for the hot
lava of Mount Etna, glowing at some 1000–2000 K. The lava curve (brown in
the line plot) shows a spectrum that rises above the intensity of reflected
sunlight beginning at 1600 nm and appears to peak around 2.4 mm. The
peak in the 2000–2400 nm range is due to the blackbody radiation from the
lava at a nominal temperature of 1000–2000 K. By contrast, a vegetation
Figure 6.22 Mount Etna. Hyperion offers 12-bit dynamic range.18 Data are in W/(m² · sr · μm), i.e., power per unit area, per solid angle, and per wavelength (μm).
18. J. Pearlman et al., “Development and Operations of the EO-1 Hyperion Imaging
Spectrometer,” Proc. SPIE 4135, 243 (2000);
signature (the green curve) shows the IR ledge expected of healthy vegetation
at 700 nm. The small arrows along the bottom of the curve (at 1234, 1639, and
2226 nm) indicate the spectral bands used to construct the image on the right,
coded as blue, green, and red, respectively.
Figure 6.23 RGB of the first scene, taken near Keenesburg, CO. The data are presented
as a false-color IR image—regions that appear red are areas of vegetation.20
Commercial-grade components were used, although some additional shielding was placed around critical components. The sensor operated nominally until the satellite was shut down.21
\[
S = \begin{bmatrix} S_0 \\ S_1 \\ S_2 \\ S_3 \end{bmatrix}
  = \begin{bmatrix} |E_x|^2 + |E_y|^2 \\ |E_x|^2 - |E_y|^2 \\ 2\,\mathrm{Re}\!\left(E_x E_y^*\right) \\ 2\,\mathrm{Im}\!\left(E_x E_y^*\right) \end{bmatrix}
  \propto \begin{bmatrix} I_0 + I_{90} \\ I_0 - I_{90} \\ I_{45} - I_{135} \\ I_L - I_R \end{bmatrix}, \tag{6.3}
\]
where S0 is the total intensity of the light, S1 is the difference between the
horizontal and vertical polarization, S2 is the difference between linear +45°
21. Yarbrough et al., “MightySat II.1 hyperspectral imager: summary of on-orbit perfor-
mance,” Proc. SPIE 4480, 186 (2002).
22. R. C. Olsen, M. Eyler, A. M. Puetz, and P. Smith, “Initial results using an LCD
polarization imaging camera,” Proc. SPIE 7303 (2009); Philip Smith, The Uses Of A
Polarimetric Camera, M.S. Thesis, Naval Postgraduate School, September 2008.
Figure 6.24 Optical polarimetric images of the Naval Postgraduate School campus,
showing I (S0), Q (S1), U (S2), and degree of linear polarization (DOLP). The panchromatic
camera has a green filter on the lens to limit the wavelength range for the polarization filter.
and −45° polarization, and S3 is the difference between right and left circular polarization. The latter terms are often normalized by S0 so that they have values between +1 and −1. (Radar jargon intrudes into the optical domain
with the use of I, Q, U, V, where the first two terms are in-phase and
quadrature in the radar domain.) The intensity term, S0 or I, is effectively the
unpolarized light or the overall intensity. The second term is the difference
between the measurements at 0° and 90°, and the third is the difference
between measurements at 45° and 135°. The last term describes circularly
polarized light, which is extremely rare in nature but frequently used in
satellite communications.
Figure 6.24 illustrates the first three elements of the Stokes vector for a
daytime scene. (The same scene was imaged with a color digital camera in
Fig. 2.3.) The top-left panel is just the intensity (the average, in this case, of all
four filter measurements.) The sun is to the right in this scene, and the sunlit
and shadowed sides of the main building can be seen on the right (Hermann
Hall). The top-right panel (Q) shows some of the expected polarization
elements—the sky is polarized due to Rayleigh scattering of the sunlight (blue
sky). The Hermann Hall rooftop elements appear bright because the relatively
smooth brick surfaces cause the reflected light to be polarized (Fresnel
equations). By contrast, the trees are dark in the optical (green wavelength)
image and in the Q term because reflectance from the natural features tends to
be unpolarized. The final term U contains some residual polarization
information, but these orientations primarily show noise. The gradual shift
in the grey level most obvious in the U image reflects the change in orientation
with respect to the sun in the sequence of images assembled here.
One of the problems with optical polarization is the dependence on both
illumination (direction) and viewing direction. One approach to analysis
examines the total polarization. The degree of linear polarization (DOLP) is the quadrature sum of Q and U normalized by the intensity, or in equation form,

\[ \text{DOLP} = \frac{\sqrt{S_1^2 + S_2^2}}{S_0}. \tag{6.4} \]
The fourth panel in Fig. 6.24 shows the DOLP for the scene. The sky is the
most highly polarized element of the scene; the windows in the lower part of
the building reflect the polarized skylight. Figure 2.3 shows a similar effect in
the sky. The Rayleigh scattered sunlight is fairly strongly polarized at the
30–50% level.
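A sketch of the Stokes and DOLP calculations (Eqs. 6.3 and 6.4) from four polarizer orientations; the image arrays here are random stand-ins for real frames:

```python
import numpy as np

I0, I45, I90, I135 = np.random.rand(4, 8, 8) + 0.5  # stand-in images

S0 = I0 + I90            # total intensity
S1 = I0 - I90            # Q
S2 = I45 - I135          # U
dolp = np.sqrt(S1**2 + S2**2) / S0
print(dolp.mean())
```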
6.10 Problems
Figure 6.25 Worldview-3 scatter plot for NIR (B7) versus red (B5).
Wavelength (nm)   425    480    545    605    660    725    832.5   950
Grass              2.2    2.9    8.8    8.8    8.04   45.4   88.3    93.6
Soil              16.2   19.1   24.9   32.5   37.9   43.4   46.1    50.1
Concrete          57.3   61.0   68.3   73.9   75.0   75.6   66.4    64.6
7.1.1 Shape
Shape is one of the most useful elements of recognition. One classic shape-
identified structure is the Pentagon (Fig. 7.1). The well-known shape and size
make it easily identifiable.
7.1.2 Size
Relative size is helpful when identifying objects, and mensuration (the
absolute measure of size) is extremely useful for extracting information from
imagery. The illustrations at the beginning of Chapter 1 show how runway
length can be obtained from properly calibrated imagery. The Hen House
radar sites (Fig. 1.4) display characteristic shapes and sizes, and the size
provides information about the capability.
1. Avery and Berlin, pages 52–57; Manual of Photographic Interpretation; Jensen, pages 121–133.
Figure 7.1 Early image of the Pentagon; the oblique view distorts the scene. The large number of cars in the parking lot indicates a high level of activity, even though it is a Saturday.
7.1.3 Shadow
Shadows separate targets from the background. They can also be used
to measure height, e.g., the Washington Monument as illustrated in
Fig. 7.2.
Figure 7.2 Image of the Washington Monument, acquired by Gambit (KH-7) on 2/19/1966
(Mission 4025, frame 3). The image is oriented with north as “up.” Based on these details,
estimate the time the image was taken.
Figure 7.3 U-2 image of a SAM site in Cuba, acquired November 10, 1962. These images
were taken from very low altitudes (less than 500 feet), which was dangerous work. Major
Rudolf Anderson was shot down on such a mission by an SA-2 on October 27, 1962.2
7.1.6 Texture
Texture is concerned with the spatial arrangement of tonal boundaries: the spatial arrangement of objects too small to be individually discerned. Texture depends on the image scale, but it can be used to distinguish objects that may not otherwise be resolved. The relative coarseness
or smoothness of a surface becomes a particularly important visual clue with
radar data (Figs. 1.19 and 1.20). Agricultural and forestry applications are
appropriate for this tool—individual trees may be poorly resolved, but
clusters of trees will have characteristic textures.
7.1.7 Pattern
Related to shape and texture is pattern, the overall spatial form of related
features. Figure 7.3 shows a Russian SAM site with characteristic patterns
that help detect missile sites, such as the Russian propensity for erecting three
concentric fences around important installations. In imagery from systems
like Landsat (30-m resolution), irrigated fields form characteristic circular
Figure 7.4 Landsat TM image (bands 4, 3, and 2) of the Bighorn Basin, located about 100 miles east of Yellowstone National Park in northern Wyoming. The circle is characteristic of irrigated crops. Bright red indicates the area is highly reflective in the near-infrared (TM band 4), which indicates vegetation. Compare this image to Fig. 6.15.
patterns in the American southwest (Fig. 7.4). Irrigated fields and patterns are
also evident in the DMC data shown in Fig. 1.14. Geological structures, too,
reveal themselves in characteristic patterns, a concept applied to the search for
water on Mars and for characteristic textures3 and patterns4 associated with
mineral hydration and water flow.
7.1.8 Association
Three elements of photo-interpretation are related to context, or the
relationship between objects in the scene to each other and to their
environment. These elements are site, association, and time.
Association is the spatial relationship of objects and phenomena,
particularly the relationship between scene elements. “Certain objects are
genetically linked to other objects, so that identifying one tends to indicate or
confirm the other. Association is one of the most helpful clues for identifying
cultural features.”5 Thermal power plants will be associated with large fuel
tanks or fuel lines. Nuclear power plants tend to be near a source of cooling
water (although this can also be considered an example of site or location). A
classic instance of cultural association from the Cold War was the detection of
Cuban forces in Angola by the presence of baseball fields in the African
countryside (1975–1976).
7.1.9 Site
Site is the relationship between an object and its geographic location or terrain.
This can be used to identify targets and their use. An otherwise poorly resolved
structure on the top of a hill might, for example, be a communications relay,
based on its location.
7.1.10 Time
The temporal relationships between objects can also provide information,
through time-sequential observations. Crops, for example, show characteristic
temporal evolutions that uniquely define the harvest. Change detection, in
general, is one of the most important tasks in remote sensing and follows from
this interpretation key. Time can also play a role in determining level of
activity, as in Fig. 7.1.
Figure 7.5 Model Susanna Olsen. The image chip on the right is reduced in resolution to
20% of the original.
Table 7.1 The digital number (DN) values are given here for the image chip of the eye in
Fig. 7.5. The DN values for 152 are highlighted. There are ten such values; compare them to
the image by locating the highest data value, DN ¼ 210.
1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
2 181 188 178 157 153 119 106 107 97 91 91 89 89 87 102 117 119 115 106 82
3 179 160 162 149 132 107 90 86 90 98 114 129 151 172 175 177 169 166 158 141
4 163 158 144 147 120 116 115 121 137 162 174 180 184 184 179 184 182 184 179 170
5 156 149 145 137 139 143 148 156 169 177 179 177 179 182 175 179 177 179 177 169
6 153 151 148 149 153 156 159 152 152 151 153 152 155 162 166 171 173 175 172 166
7 156 152 158 159 150 136 137 146 156 160 158 152 140 134 132 145 161 162 163 158
8 148 158 157 139 144 151 126 87 73 58 55 52 67 96 122 125 123 150 156 153
9 148 152 142 149 143 120 95 48 50 58 43 50 85 85 57 79 111 128 150 152
10 147 152 157 130 143 192 103 47 65 97 38 47 87 165 120 50 71 113 133 144
11 164 153 126 157 197 210 121 71 43 34 44 56 109 170 143 98 73 76 117 132
12 172 134 147 155 151 161 143 110 95 67 71 85 149 146 114 89 99 96 109 131
13 182 187 186 181 175 179 173 171 161 151 134 122 120 116 125 126 129 138 144 153
14 178 198 198 182 179 182 181 191 172 167 162 153 145 153 152 150 152 157 164 169
15 175 185 192 188 185 187 193 205 201 194 190 185 177 173 166 164 170 173 180 182
16 183 185 193 195 198 199 201 200 196 191 188 186 180 180 182 184 187 191 192 189
Figure 7.7 (a) The scanned film has a dynamic range of 12 bits or so. (b) Histogram values
with peaks at DN ¼ 340, 1080, and 3660.
Figure 7.8 (a) Digital image of a black cat. (b) Black-cat histogram.
Now consider a color image. The cat depicted in Fig. 7.8(a) was
photographed with a Canon digital camera (1600 × 1200); the exposure was adjusted to compensate for the nearly black fur. Histograms for the three colors are shown in Fig. 7.8(b). The peak at DN ≈ 30 is the very dark fur on the face; the lowest values are for the shadowed fur. The grass and brighter fur make up the mid-range, at around 100 or so. The red collar and white fur provide the peak, at DN ≈ 250. Such histograms are key in distinguishing
targets from backgrounds in both panchromatic and spectral imagery.
Figure 7.9 Histogram and target image. A model image simulates what you might obtain
from a 40 × 40 detector array attached to a telescope. The image is inset, and the small
bright region is supposed to be the target.
93 91 74 87 68 85 94 37 72 94 110
59 97 85 110 88 71 102 47 50 96 98
132 79 77 114 113 75 87 61 99 86 80
96 95 52 96 58 81 65 96 54 64 75
97 76 85 91 67 176 176 88 52 75 41
80 63 10 59 175 180 178 63 91 100 111
92 107 62 54 176 178 49 58 113 89 78
36 78 96 112 87 142 100 82 75 43 73
72 73 58 37 84 54 38 111 116 101 69
66 60 104 63 109 91 43 62 79 105 93
79 66 50 76 88 110 60 88 112 84 31
Compare this value to the numbers obtained via calculation: mean = 76.8,
variance = 580.3, skewness = 0.21, and kurtosis = 0.92.
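These moments are easy to verify numerically. The sketch below (Python with NumPy/SciPy is an assumption; the text prescribes no particular tool) computes the four statistics for the 11 × 11 chip printed above. Note that the quoted values describe the full 40 × 40 simulated image of Fig. 7.9, so the statistics of this small chip, which contains the bright target, will differ.

```python
import numpy as np
from scipy import stats

# The 11x11 image chip printed in the text, around the bright target.
chip = np.array([
    [ 93,  91,  74,  87,  68,  85,  94,  37,  72,  94, 110],
    [ 59,  97,  85, 110,  88,  71, 102,  47,  50,  96,  98],
    [132,  79,  77, 114, 113,  75,  87,  61,  99,  86,  80],
    [ 96,  95,  52,  96,  58,  81,  65,  96,  54,  64,  75],
    [ 97,  76,  85,  91,  67, 176, 176,  88,  52,  75,  41],
    [ 80,  63,  10,  59, 175, 180, 178,  63,  91, 100, 111],
    [ 92, 107,  62,  54, 176, 178,  49,  58, 113,  89,  78],
    [ 36,  78,  96, 112,  87, 142, 100,  82,  75,  43,  73],
    [ 72,  73,  58,  37,  84,  54,  38, 111, 116, 101,  69],
    [ 66,  60, 104,  63, 109,  91,  43,  62,  79, 105,  93],
    [ 79,  66,  50,  76,  88, 110,  60,  88, 112,  84,  31]], dtype=float)

x = chip.ravel()
print("mean     =", x.mean())
print("variance =", x.var(ddof=1))     # sample variance, N-1 normalization
print("skewness =", stats.skew(x))     # third standardized moment
print("kurtosis =", stats.kurtosis(x)) # excess kurtosis (0 for a Gaussian)
```

Conventions for skewness and kurtosis vary between references; SciPy's defaults (biased skew, excess kurtosis) are used here.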
Figure 7.10 (a) A red disk on grass background. (b) Histogram of red (1), green (2), and
blue (3) bands from the color image.
It is not immediately obvious how to tell a computer to distinguish the red pixels
from the background. The statistics that pertain to the histogram are given in
Table 7.3. The mean of the red values is 50.9 (the background), and the width,
or standard deviation, is 23.4. The target has DN values of 128–130.
Figure 7.11 shows a new data format: a 2D scatter plot. The high
correlation is apparent. This correlation, or redundancy, in the data is not a
major problem here, but it becomes much more so when you have higher
spectral dimensions to your data (six reflective bands with Landsat, 224 bands
with AVIRIS). The trick is to use the power of statistical analysis to make use
of redundancy in spectral data.
The details for the example in this section show a strong correlation
between the three bands. The correlation is unusually high because of the
homogeneity of the scene, but the concept is fairly general. Images taken at
varying wavelength will be highly correlated. This correlation can be
important in spectral analysis.
The correlation calculation has a closely related term: the covariance.
The two are related by a normalization factor, the product of the standard
deviations of the two bands. The diagonals in the covariance matrix in Table 7.4 are
just the squares of the standard deviations given in Table 7.3 (that is,
17.2² ≈ 291.5 for the blue channel).
Figure 7.11 Scatter plots of the occurrence of RGB triples: (a) red versus green, and
(b) blue versus green.
7.5 Filters
There are a number of standard ways to manipulate images for enhanced
appearance or to make it easier to extract information. Simple smoothing filters
reduce noise; more-sophisticated filters can reduce “speckle” in radar imagery.
A few simple concepts are illustrated here with a Landsat panchromatic
image from San Francisco taken on March 23, 2000. The data have a basic
resolution of 15 m. A small chip has been extracted from the northeast corner of
the peninsula (the Bay Bridge and Yerba Buena Island), as well as an image
Figure 7.12 The image in principal component space. The third PC is shown as an image
on the left and as a scatter plot in principal component space on the right. In PC3, the disk is
now clearly distinguished from the grassy background.
chip for San Francisco Airport. The filters are applied using an image kernel—a
concept adapted from calculus and transform theory.
7.5.1 Smoothing
Noisy images can be difficult to interpret. One approach is to smooth the
image, averaging adjacent pixels together through a variety of approaches,
with some specialized versions like the Lee filter used to reduce speckle in
radar images. The illustration here is not particularly apt because the data
quality is good, but a 3 × 3 filter block has been applied, with even weights for
each pixel: the center of each 3 × 3 pixel block is replaced by the average of all
nine pixels. Figure 7.14(b) shows the smoothed image.
A high-pass kernel, by contrast, enhances edges:

−1 −1 −1
−1  8 −1
−1 −1 −1

Here, the kernel takes the difference between the central pixel and its
immediate neighbors in all directions.
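A minimal sketch of how these two kernels might be applied (Python with SciPy assumed; the image here is a synthetic placeholder rather than the Landsat chip):

```python
import numpy as np
from scipy.ndimage import convolve

smooth_kernel = np.ones((3, 3)) / 9.0       # replace center pixel with 9-pixel mean
highpass_kernel = np.array([[-1, -1, -1],
                            [-1,  8, -1],
                            [-1, -1, -1]], dtype=float)

rng = np.random.default_rng(0)
image = rng.integers(0, 255, size=(64, 64)).astype(float)  # placeholder scene

smoothed = convolve(image, smooth_kernel, mode="nearest")
edges = convolve(image, highpass_kernel, mode="nearest")
```

Normalizing the smoothing kernel by 9 preserves the mean DN of the scene; the high-pass kernel sums to zero, so uniform regions map to zero and edges stand out.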
In Fig. 7.14, the bridge is enhanced, as are the edges of the island. A small
section of the bridge is shown in a magnified view to further illustrate the
filter’s result. The original data appear in Figs. 7.14(a) and (d); the filtered
output appears in Figs. 7.14(c), and (e).
The same high-pass filter is applied to the airport area, depicted in
Fig. 7.15. The runways are pulled out, along with the edges of the terminal
buildings. Similar approaches are used to sharpen images for analysis by the
human eye. Edge detection is important for automated processes in mapping,
for example.
Figure 7.14 Landsat panchromatic sensor: (a) raw data, (b) smoothed image, (c) high-
pass filter, (d) raw, and (e) high-pass filter.
Figure 7.15 Landsat (a) raw data and (b) high-pass filter (edge detection).
The mean is

$$ \text{mean} = \bar{x} = \frac{1}{N}\sum_{j=1}^{N} x_j. $$

The variance addresses the range of values about the mean. Spatially
homogeneous scenes will have a relatively low variance; scenes or scene
elements with a wide range of DNs will have a larger variance. The standard
deviation s is just the square root of the variance:

$$ \text{variance} = \frac{1}{N-1}\sum_{j=1}^{N}\left(x_j - \bar{x}\right)^2, \qquad s = \sqrt{\text{variance}}. $$

The correlation coefficient between two sets of values x and y is

$$ r = \frac{N\sum_{j=1}^{N} x_j y_j - \sum_{j=1}^{N} x_j \sum_{j=1}^{N} y_j}{\left[N\sum_{j=1}^{N} x_j^2 - \left(\sum_{j=1}^{N} x_j\right)^2\right]^{1/2} \left[N\sum_{j=1}^{N} y_j^2 - \left(\sum_{j=1}^{N} y_j\right)^2\right]^{1/2}}, $$

and the covariance is

$$ \text{covariance} = \frac{1}{N-1}\left[\sum_{j=1}^{N} x_j y_j - \frac{1}{N}\sum_{j=1}^{N} x_j \sum_{j=1}^{N} y_j\right] = \frac{1}{N-1}\sum_{j=1}^{N}\left(x_j - \bar{x}\right)\left(y_j - \bar{y}\right). $$
As a simple worked example, take X = [1, 2, 3] and Y = [2, 4, 6]:
mean(x) = 2, mean(y) = 4;
variance(x) = 1, variance(y) = 4;
s(x) = 1, s(y) = 2;
covariance = 2, correlation coefficient = 1.
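A quick numerical check of this example (NumPy assumed):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

print(x.mean(), y.mean())            # 2.0, 4.0
print(x.var(ddof=1), y.var(ddof=1))  # 1.0, 4.0 (N-1 normalization)
print(np.cov(x, y)[0, 1])            # covariance = 2.0
print(np.corrcoef(x, y)[0, 1])       # r = 1.0, since y is exactly 2x
```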
7.7 Problems
1. Figure 7.16 shows a small image chip of San Diego harbor (Coronado)
taken on February 7, 2000 by the IKONOS satellite. What can you tell
about the two ships? The carrier is 315 m long. What can you tell about the
other ship?
2. How could it be determined whether or not a road or rail line is intended
for missile transport?
3. For an otherwise uniform scene (Fig. 7.17), there is a target with higher
DN. The variance is 5106.4. Calculate the standard deviation s. Estimate
the distance between the target and background in units of s.
4. Three regions are identified in Fig. 7.18: water, a bright soil, and the old
Moss Landing refinery site (red), with some very bright white sand and soil.
Figure 7.19 provides the corresponding histogram. Describe what dynamic
ranges you would use to display the scene so as to enhance each region of
Figure 7.16 IKONOS image of San Diego harbor, taken February 7, 2000.
Figure 7.17 (a) Dark grey, cluttered background with bright target. (b) Histogram for DN
occurrence. The target DN is 250.
interest. As an example, the best display for the soil would be to scale the
data so that DN = 250–450 mapped to a digital display range of 0–255.
5. For a scene with four pixels, calculate the correlation between the pixels
and the covariance:
Figure 7.18 The Moss Landing Mineral Refractory was built ca. 1942. The white material
may be dolomite from the Gabilan Mountains or magnesium residue from the material
extracted from seawater.
Figure 7.19 Histogram for the Moss Landing/Elkhorn Slough area north of Monterey, CA.
In the histogram plot, the red line is for the soil, and the green line is for the dolomite. The cyan
region of interest is plotted in blue here. The black curve shows the values for the full scene.
1 40 50 60
2 20 25 28
3 30 30 30
4 15 16 14
6. The scene in Fig. 7.2 is oriented so that north is “up.” What is the time of
day? Where is the spacecraft relative to the Monument?
Figure 8.0 (a) Data from the Mars Global Surveyor (MGS) spacecraft taken by the Thermal
Emission Spectrometer. The image shows the daytime temperature measurements for one
day. Data are scaled from –125 °C to 20 °C.1 (b) MGS thermal inertia map, obtained by
comparing day/night temperature differences. The large region of dark blue on the left is
Olympus Mons. The scale ranges from 24–800 J m^-2 K^-1 s^-1/2.2
The imagery and data collected by tactical and strategic sensors operating in
the infrared portion of the electromagnetic spectrum are generated by
radiation emitted from targets and backgrounds, rather than by the reflected
radiation discussed in previous chapters. Such sensors can provide non-literal
information whose value differs from that of comparable panchromatic
(visible) imagery.
8.1 IR Basics
In the visible spectrum, humans mostly see by reflected light, typically
sunlight. In the IR, there is a reflected solar component (during the day), but
much of remote sensing is due to emitted IR, particularly in the mid-IR range
(3–5 µm) and LWIR range (8–13 µm). The spectral radiance of a blackbody
at temperature T is given by Planck's law:

$$ L(\lambda, T) = \frac{2hc^2}{\lambda^5}\,\frac{1}{e^{hc/\lambda kT} - 1} \qquad \left[\frac{\text{W}}{\text{m}^2 \cdot \mu\text{m} \cdot \text{sr}}\right], \tag{8.1} $$

where c = 3 × 10⁸ m/s, h = 6.626 × 10⁻³⁴ J·s, and k = 1.38 × 10⁻²³ J/K. Slightly
mixed units are indicated in the (normally) metric formula to emphasize that
the m² term is per unit area, the µm term reflects the per-unit-wavelength
element, and the steradian term is the per-unit-solid-angle component.
Figure 8.1 shows the blackbody curves for bodies at 5800 K (the solar
temperature) and 300 K (typical terrestrial temperature). The solar curve has
been normalized to “top-of-the-atmosphere” values for earth orbit. The figure
represents the amount of energy available as a function of wavelength for a
sensor at low-earth orbit.
Figure 2.13 showed how the location of the peak in the spectrum and the
amplitude of the radiation change with temperature. The formula is integrated
over all wavelengths to obtain the Stefan–Boltzmann law [see Eq. (8.2)].
However, for many sensors it is necessary to integrate over relatively narrow
wavelength ranges, which can be challenging in the case of Eq. (8.1). This
process can be done numerically or by approximating the value of the
radiance function over a narrow spectral range.
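A minimal sketch of such a numerical band integration (Python assumed; the 8–13-µm band and 300-K temperature are chosen as an example, matching the LWIR range and terrestrial temperature discussed above):

```python
import numpy as np

h = 6.626e-34   # Planck constant, J s
c = 3.0e8       # speed of light, m/s
k = 1.38e-23    # Boltzmann constant, J/K

def exitance(wl, T):
    """Blackbody spectral exitance M(lambda), W/(m^2 . m); wl in meters."""
    return 2 * np.pi * h * c**2 / wl**5 / np.expm1(h * c / (wl * k * T))

T = 300.0                                          # terrestrial temperature, K
wl = np.linspace(8e-6, 13e-6, 2001)                # 8-13 um bandpass
band = np.sum(exitance(wl, T)) * (wl[1] - wl[0])   # rectangle-rule integral
total = 5.669e-8 * T**4                            # Stefan-Boltzmann, all wavelengths
print(band, total, band / total)
```

For a 300-K source, this band captures roughly a third of the total Stefan–Boltzmann exitance.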
1. http://tes.asu.edu/tdaydaily.png.
2. N. E. Putzig, M. T. Mellon, K. A. Kretke, and R. E. Arvidson, Global thermal inertia
and surface properties of Mars from the MGS mapping mission, Icarus 173, 325-341, 2005.
http://www.mars.asu.edu/data/tes_putzigti_day/.
3. Wikipedia has an extensive, well-referenced discussion of Planck's law: www.wikipedia.com.
Figure 8.1 Blackbody curves. In the example of radiation from the sun, the sun acts like a
blackbody at about 6000 K. Of course, the radiation decreases as per the inverse square
law, and the incident radiation observed at the earth is decreased by that factor, i.e.,
(radius_sun/radius_earth orbit)². As a consequence, the 3–5-µm wavelength range is in the
middle of the transition region from dominance by reflected solar radiation to dominance by
emitted thermal radiation for terrestrial targets.
$$ S = \varepsilon\,\sigma T^4, \tag{8.2} $$

where σ is the constant σ = 5.669 × 10⁻⁸ W m⁻² K⁻⁴, and ε is the emissivity.
The emissivity for a blackbody is one. Real sensors with a more-limited
bandpass (say, 8–13 mm) will still see a monotonic increase in power with
temperature.
The concept of blackbody temperature shows up in places as ordinary as a
local hardware store, where GE fluorescent light bulbs are sold by color
temperature.4 The bulbs are not blackbodies, but the concept is still applied.
For example:
• GE Daylight Ultra, 3050 lumens, 6500 K;
• GE Daylight, 2550 lumens, 6250 K;
• GE Chrome 50, 2250 lumens, 5000 K;
• GE Residential, 3150 lumens, 4100 K;
4. http://www.gelighting.com/LightingWeb/emea/images/Linear_Flourescent_T5_LongLast_
Lamps_Data_sheet_EN_tcm181-12831.pdf.
The wavelength of the peak of the blackbody curve follows Wien's displacement law:

$$ \lambda_m = \frac{a}{T}, \tag{8.3} $$

where a = 2898 µm·K.
8.1.4 Emissivity
The assumption so far has generally been that emissive objects are
blackbodies, which are perfect absorbers and emitters of radiation. Real
objects all have an emissivity ε that is between zero and one. Table 8.1 shows
some average values for the 8–12-µm wavelength range. Just as with reflective
spectra, there are fine-scale variations in emissivity, which are unique to the
material. Gold emits poorly in the longwave infrared, with an emissivity of
only a few percent.
Figure 8.2 shows the variation in emissivity that occurs as a function of
wavelength in the longwave IR spectrum for some common minerals. The
figure is a “stack plot” with the scales of successive materials shifted upward
by small factors to keep them from overlapping. Each curve has a maximum
just below 1.0. The dip in the emissivity just above 11 µm for magnesite moves
5. His citation: Buettner and Kern, JGR 70, p. 1333, 1965. Also, http://www.infrared-thermography.com/material.htm.
Figure 8.2 Emissivity spectra for minerals in the LWIR. This figure comes from data in the
Arizona State University spectra library. http://speclib.asu.edu/.
to the right as the material varies from magnesite to dolomite to calcite. This
reflects changing bond strengths in the materials.
8.2 Radiometry
The objective in much of thermal imaging is the accurate extraction of
temperatures from observations. This process, called radiometry, depends on
an understanding of the target and its radiance, atmospheric absorption,
scattering and propagation, and the detector response function. As with
several elements of this text, a proper development of the topic is the content
of full textbooks,6 and thus only a few of the basic elements and results are
developed here. Elements of the presentation by Schott (1997) are used.7
Two simplified cases are considered: a point target (subpixel) and a
homogeneous, Lambertian surface. The former would be an unresolved target
such as a missile; the latter would be appropriate for systems like Landsat. In
both cases, an isotropic (Lambertian) radiation pattern will be assumed.
With this assumption, the angular dependence can be integrated out,
introducing an overall factor of π, and the radiant exitance M is used:

$$ \text{exitance} = M(\lambda) = \frac{2\pi h c^2}{\lambda^5}\,\frac{1}{e^{hc/\lambda kT} - 1} \qquad \left[\frac{\text{W}}{\text{m}^2 \cdot \mu\text{m}}\right]. \tag{8.4} $$
For the moment, the only remaining variable is the detector, in particular its size
(area). The detector responds to the irradiance, which is the power per unit area
at the sensor optics. Irradiance E has the same units as exitance (W/m²),
differing from that term in concept (emitted versus received/absorbed). As a
quick illustration of the distinction, from Chapter 2, the solar exitance is
6.42 × 10⁷ W/m²; the irradiance at earth is 1378 W/m².
The general formula for the irradiance from a point target, then, is

$$ E(\lambda) = M(\lambda) \cdot \frac{\text{area}_{\text{target}}}{4\pi r^2} \qquad \left[\frac{\text{W}}{\mu\text{m} \cdot \text{m}^2}\right]. \tag{8.5} $$
The measured energy then depends on the size of the aperture for the sensor,
typically defined by the diameter of the optics. The measured radiant flux is
then

$$ \text{radiant flux} = \Phi(\lambda) = M(\lambda) \cdot \text{area}_{\text{target}} \cdot \frac{\text{area}_{\text{detector}}}{4\pi r^2} \qquad \left[\frac{\text{W}}{\mu\text{m}}\right]. \tag{8.6} $$
Thus, all other things being equal, it will be much easier to detect a target that
is close than one that is farther away. The wavelength dependence can be
integrated out for a broadband detector, and the total amount of power
detected can be estimated as

$$ \text{power} = \sigma T^4 \cdot \text{area}_{\text{target}} \cdot \frac{\text{area}_{\text{detector}}}{4\pi r^2} \qquad [\text{W}]. \tag{8.7} $$
Example
Consider a hot reentry vehicle in the earth's atmosphere, as viewed from
geosynchronous orbit. (This could be a satellite or bolide burning up on
reentry, for example.) Take the surface area to be 20 m², the temperature to be
1500 K, and the range to be 6 earth radii (38 × 10⁶ m). Take the mirror to
have a diameter of 1 m. The power incident on the system, ultimately to be
detected, is calculated as

$$ \text{power} = \sigma T^4 \cdot \text{area}_{\text{target}} \cdot \frac{\text{area}_{\text{detector}}}{4\pi r^2} = 5.68\times 10^{-8}\,\frac{\text{W}}{\text{m}^2\,\text{K}^4} \cdot (1500\ \text{K})^4 \cdot 20\ \text{m}^2 \cdot \frac{0.25\pi\ \text{m}^2}{4\pi (38\times 10^6)^2\ \text{m}^2} = 0.25\times 10^{-9}\ \text{W}, $$

which does not seem like much power. (The thermal energy radiated is
6 MW.) The energy from the target peaks at 1.9 µm; a quick estimate for the
number of photons represented by that energy is the power divided by the
photon energy, hc/λ ≈ 1.0 × 10⁻¹⁹ J, or roughly 2.4 × 10⁹ photons per second.
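The arithmetic of this example is easily scripted; the sketch below (Python assumed) reproduces the power and photon-rate estimates with the numbers given above.

```python
import math

sigma = 5.68e-8                  # Stefan-Boltzmann constant, W m^-2 K^-4
T = 1500.0                       # target temperature, K
area_target = 20.0               # m^2
r = 38e6                         # range, m (~6 earth radii)
area_detector = 0.25 * math.pi   # m^2, for a 1-m-diameter mirror

power = sigma * T**4 * area_target * area_detector / (4 * math.pi * r**2)
print(power)                     # ~2.5e-10 W, i.e., 0.25e-9 W

h, c = 6.626e-34, 3.0e8
wl_peak = 1.9e-6                 # m, Wien peak for 1500 K
print(power / (h * c / wl_peak)) # ~2.4e9 photons per second
```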
Example: Landsat 7
For a system like Landsat 7, with a 60-m resolution at a 705-km range,

$$ \delta\theta = \frac{\text{GSD}}{\text{range}} = \frac{60\ \text{m}}{705\times 10^3\ \text{m}} = 8.5\times 10^{-5}\ \text{radians}, $$
$$ \delta\Omega \approx \theta^2\ \text{for small}\ \theta: \quad \delta\Omega = (8.5\times 10^{-5})^2 = 7.24\times 10^{-9}\ \text{sr}. $$
The Landsat 7 system has a mirror diameter of 40.64 cm and a clear inner
aperture with a 16.66-cm diameter. The effective area is then 0.11 m². For the
satellite in LEO, observing the earth at 300 K,

$$ \text{power} = \sigma T^4 \cdot \frac{\delta\Omega}{4\pi} \cdot \text{area}_{\text{detector}} = 5.68\times 10^{-8}\,\frac{\text{W}}{\text{m}^2\,\text{K}^4} \cdot (300\ \text{K})^4 \cdot \frac{7.24\times 10^{-9}\ \text{sr}}{4\pi} \cdot 0.11\ \text{m}^2 = 2.9\times 10^{-8}\ \text{W}. $$

The energy from the earth's surface peaks at 10 µm; a quick estimate of the
number of photons represented by that energy is the power divided by
hc/λ ≈ 2.0 × 10⁻²⁰ J, or roughly 1.5 × 10¹² photons per second.
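The corresponding sketch for this extended-source case (same assumptions as the point-target script above):

```python
import math

sigma = 5.68e-8                 # W m^-2 K^-4
T = 300.0                       # K
gsd, slant_range = 60.0, 705e3  # m
dtheta = gsd / slant_range      # 8.5e-5 rad
domega = dtheta**2              # 7.24e-9 sr, small-angle approximation
area_detector = 0.11            # m^2 effective aperture

power = sigma * T**4 * (domega / (4 * math.pi)) * area_detector
print(power)                    # ~2.9e-8 W

h, c = 6.626e-34, 3.0e8
print(power / (h * c / 10e-6))  # ~1.5e12 photons per second at the 10-um peak
```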
Figure 8.4 Radiometry elements for an area target. The area being imaged increases with
range, so that term is cancelled out in the power calculation for a fixed angular resolution.
The energy detected does not depend explicitly on range for a given detector.
As the range increases, a larger ground area is imaged, and the area imaged
increases with range-squared (R²). The energy will diminish with range if the
GSD is kept constant.
Because the emissivity ε is a number less than one, T_kinetic > T_radiative by a
factor that is just the fourth root of ε:

$$ T_{\text{radiative}} = \varepsilon^{1/4}\, T_{\text{kinetic}}. \tag{8.4} $$
8.3.3 Thermal inertia, conductivity, capacity, and diffusivity
Reflective observations depend primarily on the instantaneous values for the
incident radiation, but thermal IR observations are very much dependent on the
thermal history of the target region and the nature of the materials imaged.9
8.3.3.1 Heat capacity (specific heat)
Thermal heat capacity is a measure of the increase in thermal-energy content
(heat) per degree of temperature rise. It is measured as the number of calories
required to raise the temperature of 1 g of material by 1 °C and is given the
symbol C (cal g^-1 °C^-1).
Thermal storage is a closely related quantity, modified by the mass density:
c = ρC (cal cm^-3 °C^-1), where ρ is the mass density in g/cm³. The value for
water is very high (1.0), about five times that for rocks.
8.3.3.2 Thermal conductivity
Thermal conductivity is the rate at which heat passes through a material,
measured as the amount of heat (calories) flowing through a cross-section
Table 8.2 Thermal inertia and related characteristic values for various materials. Units are
cgs. Data from the Remote Sensing Tutorial.10 The columns are: Material; thermal
conductivity K (cal cm^-1 s^-1 °C^-1); heat capacity C (cal g^-1 °C^-1); density ρ (g cm^-3);
and thermal inertia P (cal cm^-2 °C^-1 s^-1/2).
10. The tutorial by Dr Nicholas Short is no longer present on the NASA web site. See also
Table 6-4 in Avery and Berlin, page 123; Campbell, Table 8.2, page 251; and Sabins
(2nd edition), page 133, Table 5.3.
Figure 8.5 Illustration of the temporal variations in temperature for various materials over
a day.
Figure 8.6 MWIR image taken on 10/23/2014 at 1200 local time. The asphalt is noticeably
cooler in recently vacated parking spots, as people have gone to lunch. The soil/grass area
appears to be as warm as or warmer than the asphalt, which is largely an artifact of the
emissivity differences.
Figure 8.7 Temperature profiles for the scene in Fig. 8.6. The air temperature is provided
by the NPS weather station. The red tile roof is shaded in the latter part of the afternoon and
thus drops relatively more quickly than some of the other synthetic materials, with thermal
crossover well before sunset, by contrast with the still illuminated surfaces. In situ water-
temperature measurements in the bay cluster around 63°F (17°C) on this day.
Figure 8.8 The figure to the left is the merged IR and panchromatic image; to the right is
the panchromatic (reflective) image. The frozen ocean is at the upper left. The fuel tanks and
runway are warmer than the background. North is up. Jim Storey of the EROS Data Center
resampled and enhanced this image.
8.4 Landsat
Infrared data from Landsat were shown in Chapter 1 for several areas in San
Diego. Here, a second example shows the Landsat 7 thermal-band data. The
illustration is from northern Greenland, at Thule AFB. In the color picture of
Fig. 8.8, the 60-m-resolution data from band 6 were resampled using cubic
convolution to an effective pixel size of 5 m as part of a sensor-calibration
study. The LWIR data were then combined with panchromatic band data to
create the RGB image, as shown in Fig. 8.8. Band 6 (LWIR) data are assigned
to the red channel, whereas the panchromatic-band data are assigned to the
green and blue channels.
Some features are revealed as warmer than the surrounding snow due to
heating from the 24-h sunlight at this high northern latitude. The runway and
various buildings on the base show relative warmth, with the southern sides of
the storage tanks near the base somewhat warmer than the northern sides.
Exposed rock on the hillsides to the north emits more thermal
radiation than the snow.11
The thermal channel on Landsat has not been widely exploited until fairly
recently. One application that seems to be emerging involves monitoring
water bodies. The LWIR data from Landsat provide an effective method for
tracking changes in natural and artificial bodies of water. A slightly different
illustration of the utility for LWIR data is given in Fig. 8.9. The image chip
for San Diego Harbor shows a ship and ship wake with temperature
calibration. In the daylight image, the surface water is a degree or two warmer
11. Resources in Earth Observation, 2000; CD-ROM, European Space Agency. Images
courtesy of NASA.
Figure 8.9 The main image shows the 60-m resolution LWIR channel (band 6); the inset is
the 15-m-spatial-resolution panchromatic channel (band 8). False color is introduced by
combining low- and high-gain channels for band 6. Similar wake features can be found in
synthetic aperture radar data.
than the water below the surface. The moving ship disturbs this surface layer,
bringing cooler water up from below.
8.5.1 TIROS
The Television Infrared Observation Satellite (TIROS) was the first series of
meteorological satellites to carry television cameras to photograph cloud cover
and demonstrate the value of spacecraft for meteorological research and weather
Figure 8.10 Image taken by TIROS on April 1, 1960. Image courtesy of NASA.
forecasting. The first TIROS was launched on April 1, 1960 and returned 22,952
cloud-cover photos. The satellite was tiny by modern standards: mass = 120 kg,
perigee = 656 km, apogee = 696 km, and inclination = 48.4°. RCA built the
small cylindrical vehicle (42-inch diameter and 19-inch height).
Figure 8.10, acquired by TIROS, is one of the first images of the earth
taken from space. TIROS has a complicated history of name changes and
aliases, sensor packages, and parameters. Between 1960 and 1965, ten TIROS
satellites were launched. They were eighteen-sided cylinders covered on the
sides and top by solar cells, with openings for two TV cameras on opposite
sides. Each camera could acquire sixteen images per orbit at 128-s intervals.
8.5.2 Nimbus
Named for a cloud formation, Nimbus—a second-generation meteorological
satellite—was larger and more complex than the TIROS satellites. Nimbus 1
was launched on August 28, 1964 and carried two television and two infrared
cameras. Nimbus 1 had only about a one-month lifespan; six subsequent
missions were launched, with Nimbus 7 operating from 1978 through 1993.
The spacecraft carried an advanced Vidicon camera system for recording
and storing remote cloud-cover pictures, an automatic-picture-transmission
camera for real-time cloud-cover images, and a lead-selenide detector
(3.4–4.2 µm) to complement the daytime TV coverage and measure nighttime
radiative temperatures of cloud tops and surface terrain. The radiometer had
an IFOV of 1.5°, which at a nominal spacecraft altitude (1000 km)
corresponds to a footprint of roughly 26 km at nadir.
Figure 8.11 MWIR image from Nimbus 2, showing Hurricane Gladys (left) and Hurricane
Inez (right). The darker regions are the ocean surface or low-altitude cloud; these are
warmer than the higher-altitude cloud tops, which are rendered as white in the image.
Images courtesy of NASA.12
8.6 GOES
8.6.1 Satellite and sensor
The Geostationary Operational Environmental Satellite (GOES) mission
provides the now-familiar weather pictures seen on newscasts worldwide.
Each satellite in the series carries two major instruments, an imager and a
sounder, which acquire high-resolution visible and infrared data, as well as
12. http://history.nasa.gov/SP-168/p15.htm.
Figure 8.12 The 120-kg module uses 120-W power and outputs 10-bit data at less than
2.62 Mbps. The Cassegrain telescope has a 31.1-cm (12.2-inch)-diameter aperture and f/6.8.13
Figure 8.13 GOES-15 spectral response function for the visible channel. A Gaussian fit to
the response function does not match particularly well, but it provides some measure of
where the response function is centered.
13. GOES N Series Data Book; Contract NAS5-98069 Rev D November 2009, published by
Boeing., http://goes.gsfc.nasa.gov/text/GOES-N_Databook/section03.pdf.
Figure 8.14 GOES-15 spectral response functions for the four infrared channels, with U.S.
Standard Atmosphere brightness temperature spectrum.14
Figure 8.15 GOES-15 full-disk infrared images taken April 26, 2010 at 1730 UTC. Top left:
0.6-µm channel (VIS); top center: 3.9-µm channel (IR2); top right: 6.7-µm water vapor
channel (IR3); bottom left: 10.7-µm channel (IR4); and bottom right: 13.3-µm channel (IR6).
“The VIS Moon (left) is a bit skewed by its apparent motion while being scanned back-and-
forth by the Imager. The infrared (3.9 mm) view of the moon is so hot that it is off-scale in the
temperature range used for earth-scanning.”15
Table 8.3 Specifications for GOES N, O, P. Note that channel 1 has a GSD = 28 µrad.16
The columns are: Channel; Wavelength (µm); Wavenumber (cm⁻¹); Detector Type;
GSD (km); and Purpose.
15. http://goes.gsfc.nasa.gov/pub/goes/100426_GOES15_firstir/index.html.
16. GOES N Series Data Book; Contract Report under NAS5-98069 Rev D November 2009,
published by Boeing.
17. The older GOES series used a slightly lower wavelength, and the channel-5 designation for
that sensor channel is maintained as a distinct sensor: λ = 12 µm (833 cm⁻¹).
Figure 8.16 The wavelength ranges used here are illustrated in Figs. 8.13 and 8.14. The
third frame, from the 6.7-µm channel, shows the greatest signal-to-background ratio.18
18. http://goes.gsfc.nasa.gov/text/goes8results.html.
1996, a little after 1445 UTC (1045 EDT). The vapor trail can be seen in the
visible image (present as the white cloud you see from the ground). The hot,
sub-pixel target is visible in all four infrared channels. The long-wave
channels (11 and 12 µm) do not show good contrast because the earth is
already bright at that wavelength. The highest contrast occurs in the 6.7-µm
channel because the atmospheric water vapor prevents the sensor from seeing
the extensive ground clutter.
Visible plumes had been seen before by GOES, but this is the first time the
unresolved heat from the rocket was seen in the 4- and 8-km IR pixels. The
window channels at 3.9 and 11 µm are consistent with a 493-K blackbody
occupying 0.42% of a 2-km-square pixel. The water vapor channels are
consistent with a 581-K blackbody occupying 0.55% of a 2-km pixel. The
shuttle burns liquid hydrogen and oxygen to create a hot exhaust that appears
bright in the water vapor bands.
19. Information is taken from Aviation Week & Space Technology (February 20, 1989;
November 18, 1991; December 2, 1991; February 10, 1997; March 3, 1997; January 5,
1998), and a variety of press releases by TRW. A sequence of three very thorough articles
was published by Dwayne Day in Spaceflight magazine, 1996.
Figure 8.17 The DSP satellites were usually sent aloft by the various generations of Titan
launchers; this photo depicts the shuttle launch of DSP Flight 16, “DSP Liberty,” launched
by the shuttle Atlantis (STS-44) on November 24, 1991. The shuttle crew deployed the
37,600-pound DSP/IUS stack at 0103 EST.
20. Aviation Week & Space Technology; November 18, 1991; Vol. 135, No. 20; Pg. 65.
Figure 8.18 Observations of two Titan II ICBM test launches. Reprinted with permission
from F. Simmons, Rocket Exhaust Plume Phenomenology, Aerospace Press (2000).
The IR sensor data from the DSP are not currently releasable, though
some results have surfaced in studies of natural phenomena such as
meteorites. Two illustrations of what can be obtained from these high-
temporal resolution, non-imaging sensors are shown here. IR data from
Missile Defense Alarm System (MIDAS) satellite tests are shown in Fig. 8.18.
MIDAS was a DSP precursor, conducting tests in the 1960s. The sensors are
lead sulfide, with filters designed to limit the response to the water-absorption
band (2.65–2.80 mm). The initial variation in radiant intensity is due to the
decrease in atmospheric absorption as the rocket rises in the atmosphere. The
subsequent decline is due to plume effects in the exhaust.21
The visible sensor data from DSP are illustrated in Fig. 8.19. This plot
shows the energy observed in a bolide, i.e., a meteor impact. The event is
distinguishable both from other natural phenomena (such as lightning) and
the sensor’s primary concern, ICBMs. The calculations for power are based
on an assumption that the target has a temperature of 6000 K. A temperature
assumption is necessary because the broadband sensor only gives power—it is
not known where the peak in the spectrum occurs or even if the source is a
blackbody.
The high temporal resolution of DSP sensors is a domain not normally
exploited in remote sensing. The IR sensors are also able to observe events
like forest fires (Defense Daily, April 29, 1999) and jet aircraft on
afterburners, and they have some capabilities in battlefield damage
assessment (BDA).
Figure 8.19 Chart of meteor trace. Reprinted with permission from Tagliaferri et al.,
“Detection of Meteoroid Impacts by Optical Sensors in Earth Orbit,” pp. 199–220 in Hazards
Due to Comets and Asteroids, T. Gehrels, ed. (1994).
22. Spectrally Enhanced Broadband Array Spectrograph System, from the Aerospace
Corporation.
23. These illustrations come from thesis work at the Naval Postgraduate School by Captain
Aimee Mares (USMC).
Figure 8.20 SEBASS data integrated over the LWIR spectral range.
Figure 8.22 The left image features a photograph of Kilauea; the right image presents
LWIR data from over the volcano.
Figure 8.23 Modeling the SO2 concentration requires a fair amount of information that
must be estimated or modeled.
Figure 8.24 The spectra are reasonably close to blackbody above 10 µm, and the background
temperature can be estimated from this portion of the spectrum. At shorter wavelengths, the
SO2 absorbs the upwelling radiation. The SO2 path density can be estimated from the
modeled values for absorption.24
The analysis of the data in Fig. 8.24 showed that AHI observed
concentrations of SO2 that could be successfully modeled at a few hundred
parts per million (ppm) in a layer estimated to be 150 m thick and at the same
temperature as the background air, which resulted in estimated plume
concentrations of 1 to 5 × 10⁴ ppm·m. These values are consistent with those
obtained for such phenomena using upward-viewing UV spectrometers under
the volcanic plumes.
8.9 Problems
1. At what wavelength does the radiation for targets at 300 K peak? What is
the ratio of the total power per unit area emitted by a person (300 K) and
a hot vehicle (1000 K)?
2. What are the tradeoffs between using MWIR (3–5 µm) versus LWIR
(8–13 µm)? Consider radiated energy, detector technology (cooling
issues), and Rayleigh criterion concerns.
3. Of the materials in Table 8.2, which will show the largest temperature
fluctuation during a 24-h heating/cooling cycle? Which will show the
smallest?
Figure 9.1 Image of San Francisco, California, taken by the JPL AIRSAR (C and L band,
VV polarization, 10-m GSD, and aircraft track of 135°) on October 18, 1996 at 71351
seconds GMT.
Figure 9.2 Definitions of terms for imaging radar. Reprinted with permission from Avery
and Berlin (1992).
Figure 9.3 Range resolution is a function of pulse length. Reprinted with permission from
D. J. Barr, “Use of Side-Looking Airborne Radar (SLAR) Imagery for Engineering Soils
Studies,” Technical Report 46-TR, U.S. Army Engineer Topographic Laboratories (1992).
[Figure 9.4 shows four received SIR-C/X-SAR pulses, each roughly 40 µs long, plotted as amplitude versus sample number.]
Figure 9.4 SIR-C X-SAR pulses recorded by a ground calibration receiver, sampling at
4 µs (250 kHz). The slight variation in power seen over this interval is due to the progress of
the shuttle over the ground site. Without signal shaping, the best range resolution that could
be obtained from such pulses would be 6 km. The resolution obtained via the techniques
described in Section 9.2.2 is 25 m, corresponding to the 10- or 20-MHz bandwidth.
Figure 9.5 Continuous wave and pulsed signals. The bandwidth = 1/pulse length.
Reprinted with permission from Elachi 2006, p. 229.
inverse of the pulse width in time, as indicated in the bottom half of Fig. 9.5.
For a square pulse modulated by a carrier frequency, the center of the sinc
function is shifted, but otherwise the shape of that function is unchanged.
This concept allows a slightly different definition of range resolution, as
follows:
$$ \Delta_{\text{range}} = \frac{c\,\tau}{2} = \frac{c}{2B}, \tag{9.4} $$
where τ is the pulse length, the bandwidth B is the inverse of τ, and c is the
speed of light. This definition, purely formal at first, provides a simple way of
understanding the next step, which is to modulate the frequency of the pulse.
Figure 9.6 A pulse varies linearly in frequency from fo to fo + Δf. The power is then localized
in frequency space (bandwidth).
The concept of frequency modulating the pulse, now termed “FM chirp,”
was devised by Suntharalingam Gnanalingam while at Cambridge after
World War II (1954).3 The technique was developed to study the ionosphere.
Figure 9.6 illustrates the chirp concept—the pulse is modulated with a
frequency that increases linearly with time. The value of this is that objects
that are illuminated by the pulse can still be distinguished by the difference in
frequencies of the returns, even if they overlap in time. Gnanalingam realized
that the transform of a chirped pulse would have a bandwidth in frequency
space that was defined by the frequency range of the modulation. Without
proof of any form, it is asserted here that by analogy to a finite pulse of
constant frequency, the bandwidth (1/τ) is replaced by the range of the
frequency sweep (Δf). As a result, the effective spatial resolution is

$$ \Delta_{\text{range}} = \frac{c}{2\,\Delta f}. \tag{9.5} $$
3. Gnanalingam, S., “An Apparatus for the Detection of Weak Ionospheric Echoes”, Proc.
IEE, Part III, Vol. 101, pp. 243–248, 1954. The argument for the bandwidth (BW)
determining the range resolution is obtained by inference. The result is rigorously obtained
by R. J. Sullivan, Microwave Radar, Imaging and Advanced Concepts, 2000.
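A small sketch of Eqs. (9.4) and (9.5) (Python assumed; the 40-µs pulse echoes the Fig. 9.4 caption, and the 10-MHz chirp is an illustrative value):

```python
c = 3.0e8  # speed of light, m/s

def range_res_from_pulse(tau_s):
    """Eq. (9.4): range resolution of an unmodulated pulse of length tau."""
    return c * tau_s / 2

def range_res_from_chirp(delta_f_hz):
    """Eq. (9.5): range resolution of an FM-chirped pulse with sweep delta_f."""
    return c / (2 * delta_f_hz)

print(range_res_from_pulse(40e-6))  # 40-us raw pulse -> 6 km (cf. Fig. 9.4)
print(range_res_from_chirp(10e6))   # 10-MHz chirp -> 15-m slant-range resolution
```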
Figure 9.9 SIR-C X-SAR azimuthal antenna pattern as observed from ground observa-
tions along the beam centerline (the center of the range antenna pattern).4 The vertical axis
in this figure is implicitly logarithmic (being in decibels); this allows the side lobes (secondary
maxima) to be visible in this plot. Figure 9.10 uses a vertical axis that is truly linear.
applies to all radar, but in particular to RAR, where the practical limit of
antenna length for aircraft stability is 5 m, and the all-weather capability of
radar is effectively reduced when the wavelength is decreased below about
3 cm. Because of these limitations, RARs are best suited for low-level, short-
range operations.
The resolution of a real-aperture imaging radar in the along-track
direction given earlier can be rewritten as
$$ R_a = \frac{\lambda h}{L\cos\theta}. \tag{9.9} $$
4. M. Zink and R. Bamler, “X-SAR Calibration and Data Quality,” IEEE TGARS 33(4),
pp. 840–847 (1995).
Figure 9.10 SIR-C X-SAR azimuthal antenna pattern on a linear scale and compared to
sinc2. This portion of the processed data comes from the region between the dashed
lines (±0.07°) for which the model is very accurate.4
Figure 9.11 SIR-C X-SAR range antenna pattern. This beam pattern needs to cover the
entire cross-track range of the system, e.g., 20–70 km, as illuminated from a 222-km
altitude.5
5. M. Zink and R. Bamler, “X-SAR Calibration and Data Quality,” IEEE TGARS, Vol. 33,
No. 4, July 1995, pp. 840–847.
The effective length of the synthetic aperture is

$$ l = \frac{2\lambda h}{L}. \tag{9.10} $$
Data are accumulated for as long as a given point on the ground is in view (see
Fig. 9.12).
Figure 9.12 The ship is illuminated by the radar for a time interval that depends on the
altitude of the radar and the beamwidth.
$$ R_a = h\,\theta_s = \frac{L}{2}. \tag{9.12} $$
This very counter-intuitive result is due to the fact that for a smaller antenna
(small L), the target is in the beam for a longer time. The time period that an
object is illuminated increases with increasing range, so the azimuthal
resolution is range independent.
The discussion in this section is correct for “scan-mode” SAR, where the
antenna orientation is fixed. If the antenna is rotated (physically or
electronically) in such a way as to continuously illuminate the target, a third
result is obtained (Fig. 9.13). In spotlight mode, radar energy is returned from
the target for an interval defined by the operator, which simulates an
arbitrarily large antenna. For example, if the shuttle radar illuminated a target
for 10 s, the effective antenna length would be some 75 km:
Figure 9.13 The synthetic antenna’s length is directly proportional to range: as the across-
track distance increases, so does the antenna length. This behavior produces a synthetic
beam with a constant width regardless of range for scan-mode SAR. Reprinted with
permission from Lockheed Martin Corporation (from original by Goodyear Aerospace Corp.).
$$ R_a = \frac{\lambda}{L_{\text{eff}}}\,h, \quad \text{where } L_{\text{eff}} = v_{\text{platform}} \cdot T_{\text{observe}}. \tag{9.13} $$
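A one-line calculation makes the spotlight example concrete (Python assumed; the 3-cm X-band wavelength is an assumed value):

```python
wavelength = 0.03    # m, X band (~3 cm), assumed
v_platform = 7.5e3   # m/s, shuttle ground-track speed from the text
t_observe = 10.0     # s, operator-defined dwell from the text
h = 222e3            # m, shuttle altitude used as the range here

L_eff = v_platform * t_observe   # 75-km synthetic antenna, Eq. (9.13)
R_a = wavelength / L_eff * h     # azimuth resolution
print(L_eff, R_a)                # 75000 m, ~0.09 m (an idealized limit)
```

Real systems fall well short of this idealized number; the point is that dwell time, not physical antenna length, sets the spotlight-mode limit.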
This Lambertian pattern follows for “rough” surfaces, which gives some idea
of the type of functional dependence on angle that one might obtain. The
details can be much more complicated, as illustrated by Skolnick.6
The discussion in this section so far has ignored the important topic of
polarization. Radar transmissions are polarized, with components nor-
mally termed vertical (V) and horizontal (H). Vertical means that the
electric vector is in the plane of incidence; horizontal means the electric
vector is perpendicular to the plane of incidence. The receiving antenna can
be selected for either V or H returns, which leads to a possible matrix of
return values so that the cross-section s is really a tensor:
$$ \sigma = \begin{bmatrix} \sigma_{HH} & \sigma_{HV} \\ \sigma_{VH} & \sigma_{VV} \end{bmatrix}. $$
The first subscript for each tensor element is determined by the transmit
state and the second by the receive state. These four complex (amplitude
and phase) components of the scattering matrix give a wealth of
information, much more than can be obtained from an optical system.
Generally speaking, scatterers that are aligned along the direction of
polarization give higher returns, and rough surfaces produce the cross-
terms. Water gives almost zero scattering in the cross terms, whereas
vegetation gives a relatively large cross-term.
The radar range equation relates the received power to the transmitted power:

$$ P_{\text{received}} = P_{\text{transmitted}} \cdot \frac{G_{\text{antenna}}}{4\pi R_{\text{range}}^2} \cdot \sigma \cdot \frac{A_{\text{antenna}}}{4\pi R_{\text{range}}^2} = P_{\text{transmitted}} \cdot G_{\text{antenna}} A_{\text{antenna}} \sigma \left(\frac{1}{4\pi R_{\text{range}}^2}\right)^2, \tag{9.15} $$
where Preceived is the received power, Ptransmitted is the transmitted power, s is
the radar cross-section (area), Aantenna is the antenna area, and Gantenna is the
antenna gain (dimensionless but proportional to antenna area).
There are a number of physical terms buried in the antenna gain and
cross-section not developed here. The antenna gain is proportional to the area,
inversely proportional to the square of the wavelength (due to the beam
pattern), and is dimensionless.7
The maximum antenna gain is defined by the physical area of the antenna
A and the wavelength:
$$ G_{\text{antenna}} = \frac{4\pi A}{\lambda^2}. \tag{9.16} $$
This term is representative of the beam pattern (as seen previously in the
Rayleigh pattern in Figs. 9.8 and 9.9).8 It approximates the ratio of energy on
a target for a given antenna, as compared to what is observed for an isotropic
radiator.
There are a number of variations to the range equation designed to
emphasize different symmetry elements of the range equation, particularly
with respect to the antenna gain. Here, the form is chosen to emphasize that
one of the limiting factors for space systems, in particular, is the R–4
dependence of the signal on range. This dependence is a fairly effective limit
on the altitudes for radar satellites.
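A small sketch of this range-equation bookkeeping (Python assumed; the transmit power, antenna area, and cross-section below are placeholders, not the parameters of any particular system):

```python
import math

def received_power(p_t, area, wavelength, sigma_rcs, r):
    """Eq. (9.15), using the maximum gain of Eq. (9.16)."""
    gain = 4 * math.pi * area / wavelength**2
    return (p_t * gain / (4 * math.pi * r**2)
            * sigma_rcs * area / (4 * math.pi * r**2))

# Doubling the range cuts the return by 16x: the R^-4 limit discussed above.
p_near = received_power(1e3, 10.0, 0.03, 1.0, 500e3)
p_far = received_power(1e3, 10.0, 0.03, 1.0, 1000e3)
print(p_near / p_far)   # 16.0
```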
9.5 Wavelength9
Choices for radar wavelength vary according to the goals of the system.
Table 9.1 lists most of the standard wavelength ranges and designations for
imaging radar systems. The variations in wavelength affect the behavior and
performance for imaging systems, with shorter wavelengths providing the
opportunity for a higher spatial resolution. In general, of course, radar
penetrates clouds, smoke, rain, and haze. There is some wavelength
dependence for rain penetration: at 15-cm wavelengths and longer (2 GHz
7. Antenna gain can be quite large: the massive Cassegrain antenna at Arecibo, with a diameter
of 305 m, has a gain of 1–2 × 10⁶ at 2.4 GHz. It has been used to image the surface of
Mercury. C. Drentea, Modern Communications Receiver Design and Technology, p. 369,
Artech House (2010).
8. M. Skolnik, Introduction to Radar Systems, 3rd Edition (2001).
9. RADARSAT/PCI notes, pages 11 and 43.
and below), rain is not a problem. At 5 GHz (6 cm), significant rain shadows
are seen. At 36 GHz (0.8 cm), moderate rainfall rates can cause significant
attenuation. Foliage penetration is enhanced with longer wavelengths. Shorter
wavelengths (X and C band) primarily interact with the surface, while longer
wavelengths (L and P band) penetrate forest canopies and soil. The current
generation of operational radar systems have primarily operated at the X-, C-,
and L-band. The Ku- and P-band have primarily been used on airborne
systems.
Figure 9.14 The real ε0 and imaginary ε00 components of the dielectric constant for a silty
loam mixture, as a function of water content. Absorption increases with moisture content.
Reflection (scattering) will increase as ε0 increases. Reprinted with permission from Ulaby
et al. 2006.10
bodies will appear dark. (For reference, a microwave oven works at a nominal
frequency of 2.45 GHz, or λ ≈ 12 cm.)
The ability of radar to penetrate dry soil is apparent in a variety of desert
observations. Figure 9.15 depicts ancient riverbeds under the eastern Sahara
sand. The location is the Selima Sand Sheet region in northwestern Sudan. A
50-km-wide path from the Shuttle Imaging Radar (SIR-A) mission over the
Sahara is shown superimposed on a Landsat image of the same area. The
radar penetrated 1–4 m beneath the desert sand to reveal subsurface
prehistoric river systems invisible on the Landsat image. The soil must be
10. Ulaby, Moore, & Fung, Microwave Remote Sensing, Active and Passive, Volume III,
Artech House, 1986, p. 2096. Data from Hallakainen et al., IEEE TGRS, GE-23, #1, 1985.
very dry (less than 1% water content), fine grained (small compared to the
radar wavelength), and homogeneous.11 The idea followed from a suggestion
by Charles Elachi in 1975.12
Figure 9.15 SIR-A observations of subsurface geological structures. The diagonal stripe is
the SIR-A data, and the orange background is the Landsat (visible) image. Work by Victor R.
Baker and Charles Elachi.13
Figure 9.16 The concept of rough and smooth must take into account the wavelength of
the radiation. Reprinted with permission from Lockheed Martin Corporation (from original by
Goodyear Aerospace Corp.).
9.6.2 Roughness
The effect of surface roughness is illustrated by Fig. 9.16.14 The figure is
somewhat schematic, but it emphasizes the variation in radar return with angle
and surface roughness. Roughness is relative to wavelength, so “smooth” means
surfaces like concrete walls (e.g., cultural objects), and “rough” tends to mean
things like vegetation.
The rule of thumb in radar imaging is that the brighter the backscatter on
the image is, the rougher the surface being imaged. Flat surfaces that reflect
little or no microwave energy appear dark in radar images. Vegetation is
usually moderately rough on the scale of most radar wavelengths and appears
gray or light gray in a radar image. Surfaces inclined toward the radar will
have a stronger backscatter than surfaces that slope away.
Figure 9.17 SAR impulse response, Death Valley, CA, and retroreflectors. Image courtesy of
NASA.
9.7 Problems
1. For a spotlight-mode SAR system, what azimuthal resolution could
be obtained with the X-band for a 10-s integration interval [assume that
v ¼ 7.5 km/s and take the range (altitude) to be 800 km]?
2. The intensity pattern for a 1D aperture is given by

$$ \text{intensity}(\theta) = \left[\frac{\sin(kL\sin\theta/2)}{kL\sin\theta/2}\right]^2 $$
Figure 9.18 SIR-C/X-SAR image of Los Angeles, CA on October 3, 1994. Shuttle Imaging
Radar data are displayed: C-Band/HV (red), C-Band/HH (green), and L-Band/HH (blue). The
large cyan area at the top is the city of San Fernando, bounded by Interstates 5 and 210,
with streets largely parallel and perpendicular to those freeways. These are, in turn, roughly
parallel to the STS-68 flight line because the shuttle flew diagonally through the scene to the
east (here oriented with north as up). In a similar way, the City of Santa Monica is oriented by
the direction defined by the coastline in that area. Buildings act like corner reflectors in the
four “cardinal” directions and give strong returns in the co-polarized C- and L-band data. The
reddish regions, most noticeable NW of Santa Monica, are defined by multiple scattering
caused by rough surfaces and vegetation, and relatively higher scattering of energy into the
cross-polarized receiver (sVH in Section 9.2).
(see Appendix 1 for derivation). The zeros of this equation then define the
beam pattern, as shown in Fig. 9.8. Plot this function for an L-band
antenna (l ¼ 24 cm). Take the antenna length to be 15 m [L ¼ 15 m, k ¼
(2p)/l ¼ (2p)/ 0.23 m] and plot for an angular range of u ¼ 0–0.05 radians
(3°). At what values of u do the first few zeros occur?
[Figure 9.19 plots received voltage (mV) versus time (s) for orbit 97.2, 10/11/84, ARC #120: the main lobe has a half-power width of 0.63 s, PtGt = 86.72 dBm, with sidelobes at −10 dB and −12 dB.]
Figure 9.19 SIR-B azimuth (along-track) antenna pattern. Image reprinted with permission
of Dobson et al., “External Calibration of SIR-B Imagery,” IEEE TGRS (July 1986).
3. For the conditions illustrated in Fig. 9.11, the shuttle was at a 222-km
altitude, and the antenna (shuttle) attitude was 27.1°. To what range does
the 27.1° ± 3° angular range (measured from nadir) correspond?
4. During the SIR-B flight, observations similar to those shown in Figs. 9.9
to 9.11 were made. Figure 9.19 shows the intensity as a function of time.
Given a vehicle velocity of 7.5 km/s, convert the variations in time displayed
here into a beam width in degrees. The wavelength is 23.5 cm. The local
angle of incidence is 31°. (The incidence angle is measured down from the
vertical.) Additional information is given in Table 9.2. What is the antenna
length implied by this antenna pattern?
5. Why would a radar satellite not be viable in a geostationary orbit?
6. Estimate the power that would be needed for a radar satellite orbiting at
an altitude of one earth radius by extrapolation from the SIR-B
parameters in Table 9.2, assuming all other parameters are kept
constant.
7. Estimate the imaging time that would be required for a radar satellite with
an altitude of one earth radius to obtain an azimuthal resolution of 1 m.
Assume a slant angle of 45°. How far has the satellite flown in this time?
(Hint: It is moving slower than 7.5 km/s.)
8. The Japanese satellite ALOS (Fig. 9.20) carried the L-band (23.6 cm)
PALSAR radar system. The antenna was 3.1 × 8.9 m in size (the long
dimension was along track). Estimate the size of the projected ellipse for
an incidence angle of 45°. The satellite altitude was 570 km.15
15. http://www.eorc.jaxa.jp/ALOS/en/about/palsar.htm.
Figure 10.0 These data were acquired on October 3, 1994 by the Spaceborne Imaging
Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle
Endeavour (Image P48773). The image here is a subset of the scene, rotated so that north is roughly up.
L-band and C-band data are shown. Two different polarization results are combined here:
horizontal transmit/horizontal receive (HH) and horizontal transmit/vertical receive (HV).
The foundations for radar imaging were established in the previous chapter.
This chapter examines some of the imaging radar systems used over the last
20 years, the types of imaging products they have produced, and some of the
non-literal analysis techniques that make use of interferometry. The Shuttle
Imaging Radar (SIR) is used for illustration first because it is the only system
to fly in space with multiple wavelengths and the first to offer multiple
polarizations (as in Fig. 10.0).
Figure 10.1 The X-, C-, and L-band antennas are all 12 m in length. In width, the X-band
(at the bottom of the figure) is 0.4 m wide, the L-band is 2.95 m, and the C-band panel is
0.75 m. The width values follow the same proportions as the wavelengths (X:C:L::3:6:24).
Figure 10.2 (a) The phased-array C- and L-band antennas are steered electronically.
Image reprinted courtesy of NASA.1 (b) The antenna is 4 m × 12 m overall.
1. C. A. Fowler, “Old radar types never die, they just phased array,” IEEE-AES Systems
Magazine, 24A–24L (Sept. 1998) (reference in Sullivan, Microwave Radar).
2. http://photojournal.jpl.nasa.gov/catalog/PIA01310.
Figure 10.3 (a) JPL Image PIA01310 of the Sahara Desert in North Africa and
(b) expanded view of the Kufra Oasis. North is toward the upper left in these images. Red
is L-band, horizontally transmitted and received. Blue is C-band horizontally transmitted and
received. Green is the average of the two HH bands. The well-irrigated soils are quite bright
in radar due to the increased dielectric constant, as illustrated previously in Fig. 9.14.
hyper-arid, receiving only a few millimeters of rainfall per year, and the
valleys are now dry “wadis,” or channels, mostly buried by windblown sand.
Prior to the SIR-C mission, the west branch of this paleodrainage system,
known as the Wadi Kufra (the dark channel along the left side of the image),
was recognized and much of its course outlined. The broader east branch of
the Wadi Kufra, running from the upper center to the right edge of the image,
was, however, unknown until the SIR-C imaging radar instrument was able to
observe the feature here. The east branch is at least 5 km wide and nearly
100 km long. The sand is probably only a few meters deep.
The two branches of the Wadi Kufra converge at the Kufra Oasis, at the
cluster of circular fields at the top of Fig. 10.3(b). The farms at Kufra depend
on irrigation water from the Nubian Aquifer System. The paleodrainage
structures suggest that the water supply at the oasis is a result of episodic
runoff and the movement of groundwater in the old stream channels.3
Figure 10.4 NASA/JPL PIA01803, taken October 9, 1994. The image is located at 19.25°
north and 71.34° east, and covers an area 20 km by 45 km (12.4 miles by 27.9 miles). The
color scheme is complementary: yellow regions reflect relatively more energy in the L-band,
and blue areas reflect relatively more in the C-band. Both bands are observed in VV
polarization.
4. High-resolution wind fields from ERS SAR, K. Mastenbroek, Earth Observation Quarterly,
#59, June 1998, http://www.esa.int/esapub/eoq/eoq59/MASTENBROEK.pdf.
5. Zhou et al., Satellite SAR Remote Sensing of Ocean Internal Waves, Asian Association of
Remote Sensing, Asian Conference on Remote Sensing, 1999.
6. http://www.asc-csa.gc.ca/eng/satellites/radarsat/radarsat-tableau.asp.
Figure 10.5 Ultrafine (2-m pixels) ship-detection image taken by RADARSAT-2 near
Singapore on May 5, 2009 at 22:46:33Z, HH polarization. The scene center is 1° 4′ 51″ N,
103° 52′ 13.7″ E. RADARSAT-2 data © Canadian Space Agency 2009. Data received by the
Canada Centre for Remote Sensing; data processed and distributed by RADARSAT
International.
Figure 10.6 TerraSAR-X data acquired May 12, 2008 at 06:30:00 Z in descending mode
(Strip-Mode, HH, GSD 1–3 m). The data are scaled logarithmically to slightly extend the
dynamic range. For the ship-wake illustration on the right, the mean DN in the wake is ≈40
and the mean of the adjacent water is ≈70, so there is a fairly significant difference that can
be detected and used for ship detection. Compare this figure with the thermal signature for a
wake illustrated in Fig. 8.9.
and, to some extent, ship identification. The structure along the length of the
ship reflects the ship structure (cranes, etc.) and the shipping containers on the
deck.7
7. http://www.crisp.nus.edu.sg/~research/ship_detect/ship_det.htm.
8. http://www.geo-airbusds.com/terrasar-x/.
Figure 10.6 illustrates the TerraSAR-X data for the Strait of Gibraltar.
The large scene on the left depicts the southern tip of Spain, Gibraltar, and the
north tip of Africa (Morocco). In the water, numerous bright spots represent
ships, documenting busy traffic in the Strait. There is a “wind-wake” in the
water NW of Africa. Ship wakes are just visible for several vessels. On the
right side, an enlarged view of a ship in the Strait is shown, with the wake
below the ship. The offset between the ship and wake is an artifact of the
ship’s velocity with respect to the satellite. In this illustration, the ships are
moving at 5–10 m/s, enough to give them an apparent displacement of tens of
meters.
Figure 10.7 The ERS-2 SAR image shows two moving ships and their wakes. From the
ship on the left, the speed is estimated at 6 m/s. For the ship on the right, the turbulent wake,
Kelvin envelopes, and transverse waves can be observed. Its speed is estimated to be
around 12.5 m/s. ERS-2; 3 April 1996, 03:29:29; incident angle: 23.0°; lat/long
+01.64/102.69; VV polarization.9
9. http://www.crisp.nus.edu.sg/~research/ship_detect/ship_det.htm.
the ERS. The displacement between the ships and their wakes indicates their
velocities. The ship velocity can be estimated by the formula

$$ V_{\text{ship}} = V_{\text{sat}}\,\frac{\Delta x}{R\cos(\varphi)}, \tag{10.1} $$
where Vship is the ship’s velocity, Vsat is the satellite’s orbit speed, Dx is the
ship’s displacement from its wake, R is the slant range, and w is the angle
between the ship’s velocity vector and the SAR look-direction (this last
angle will nominally be zero if the target and satellite motion are parallel).10
Both the measured velocity component and the observed displacement are effectively
in the along-track direction of the satellite motion.
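A sketch of Eq. (10.1) (Python assumed; the sample numbers are illustrative, not taken from a particular image):

```python
import math

def ship_velocity(v_sat, dx, slant_range, phi_deg=0.0):
    """Eq. (10.1): dx is the ship's displacement from its wake, in meters."""
    return v_sat * dx / (slant_range * math.cos(math.radians(phi_deg)))

# e.g., a 7.5-km/s satellite, 850-km slant range, 700-m ship/wake offset:
print(ship_velocity(7.5e3, 700.0, 850e3))   # ~6.2 m/s
```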
Figure 10.8 ERS-1 multi-temporal image of Rome, with an incidence angle of 23°, a
spatial resolution of 30 m, and a swath width of 100 km. Three color bands are encoded:
green (January 3, 1992), blue (March 6, 1992), and red (June 11 1992). Image © ESA, 1995.
Original data distributed by Eurimage.12
Figure 10.9 Washington D.C., imaged by the Sandia Ku-band airborne SAR.13
powerful additional uses for SAR data beyond the formation of literal images.
Two of the more important applications are topographic mapping and change
detection. Both exploit the fundamental concept that SAR images contain
both amplitude and phase information.
Figure 10.11 Coherent change detection (CCD) map with the original reference
synthetic aperture radar (SAR) images, pre- and post-activity, of the Hardin field parade
ground, with a temporal separation of 20 minutes. The data illustrate the detection and
progression of human footprints and mower activity. SOURCE: Courtesy of Sandia National
Laboratories.
relatively short time intervals. The latter is defined by the need to have
relatively few changes in the scene between observations. For satellites
such as ERS-1 and 2 and RADARSAT, these conditions are normally
obtained by comparing data from observations within a few days of one
another from nearly identical orbits. The desirability of producing such
products adds to the demand for strict constancy in the near-circular orbits
of these satellites.
The geometry is illustrated by Fig. 10.12. Targets 1 and 2 are imaged on
two separate orbits as illustrated. Given the offset in the satellite location (by a
distance indicated here as a baseline), there will be a relative difference in the
paths to the targets (s2′ − s2 ≠ s1′ − s1) that can be accurately determined to
within a fraction of a wavelength. This difference in phase can then be
translated into elevation differences.
The concept is illustrated in Fig. 10.13 by means of phase difference
observations from the SIR-C mission in October 1994. The complex images
taken a day apart are highly correlated with differences that are due to
elevation.
Figure 10.13 This image of Fort Irwin in California’s Mojave Desert shows the difference in
phase between two (complex) SAR images, taken on October 7–8, 1994 by the SIR-C
L- and C-band sensors. The image covers an area of about 25 km × 70 km. The color
contours shown are proportional to the topographic elevation. With a wavelength one-fourth
that of the L-band, the results from the C-band cycle through the color contours four times
faster for a given elevation change. One (C-band) cycle corresponds to a 2.8-cm ground
displacement parallel to the satellite line of sight for interferometric SAR.14
$$ \Phi = 2\pi B \frac{\sin(\theta)}{\lambda}, \qquad (10.2) $$
where B is the baseline length, and θ is the incidence angle. The phase Φ is in
radians. This equation can be inverted to obtain the height from the phase Φ,
yielding the topographic height:15
$$ \delta h = \frac{\lambda R}{2\pi L}\,\delta\Phi, \qquad (10.3) $$
where δh is the change in altitude associated with a change of phase δΦ
(here L denotes the baseline length).
As a quick illustration with the SIR-C parameters: take the baseline as
60 m, the range as 310 km (222 km altitude, depression angle of 45°), and a
wavelength of 6 cm. There is an assumption in Eq. (10.3) that the antenna is
14. Wang et al., Photogrammetric Engineering and Remote Sensing, p. 1157 (October 2004);
NASA Photojournal, image PIA01759.
15. Text adapted from R. Treuhaft, JPL; http://www2.jpl.nasa.gov/srtm/instrumentinterfmore.
html. See also R. J. Sullivan, Microwave Radar Imaging and Advanced Concepts, Artech
House, Norwood, MA (2000).
vertical with respect to the ground. Further adjustments in the formula are
required when this assumption is relaxed.16 For the SRTM described in the
next section, a vertical resolution of 10 m corresponds to a phase difference
of roughly 10°, as shown by
$$ \delta h = \frac{\lambda R}{2\pi L}\,\delta\Phi \;\Rightarrow\; \delta\Phi = \frac{2\pi L}{\lambda R}\,\delta h; $$
$$ \delta\Phi = \frac{2\pi \cdot 60}{0.06 \cdot 310 \times 10^3} \cdot 10\ \text{m} = 0.2\ \text{radians, or } 11°, $$
which is measurable:
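The same arithmetic can be scripted as a quick check. A minimal Python sketch using the numbers above (variable names are mine):

```python
import math

# SIR-C/SRTM-like parameters from the text
L_baseline = 60.0       # baseline (mast) length, m
wavelength = 0.06       # m (C band)
slant_range = 310e3     # m
dh = 10.0               # desired vertical resolution, m

# Eq. (10.3) inverted: phase change corresponding to a height change
dphi = 2 * math.pi * L_baseline / (wavelength * slant_range) * dh
print(dphi, math.degrees(dphi))  # ~0.20 rad, ~11.6 deg
```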
16. See also, Principles and Applications of Imaging Radar, Manual of Remote Sensing, Third
Edition, Volume 2, American Society for Photogrammetry and Remote Sensing, edited by
Floyd Henderson and Anthony Lewis, Chapter 6, pages 361-36, by Soren Madsen and
Howard Zebker.
17. http://spaceflight.nasa.gov/shuttle/archives/sts-99/.
Figure 10.16 (a) Mast fully deployed at AEC (shown from tip). (b) Mast with first few bays
deployed from canister at ATK-Able Engineering Company, Inc.
Mast Length 60 m
Nominal Mast Diameter 1.12 m
Nominal Bay Width at Longerons 79.25 cm
Nominal Bay Length 69.75 cm
Number of Bays 87
Stowed Height/Bay 1.59 cm
Total Stowed Height 128 cm
The mast supported a 360-kg antenna structure at its tip and carried
200 kg of stranded copper, coaxial, fiber optic cables, and thruster gas lines
along its length.
This remarkable technology worked largely as planned. The mast
deployed successfully, as illustrated in Fig. 10.17. Unfortunately, the
attitude-control jet at the end of the mast clogged, and it was only through
a remarkable bit of flying by shuttle astronauts that the system was able to
acquire useful data. Data analysis was slowed by this problem, and the accuracy
of the products was somewhat reduced.
Figure 10.17 SRTM mast, deployed. Shuttle glow is seen around the orbiter tail and along
the mast in the left figure; this is due to interactions between atomic oxygen in the upper
atmosphere and the surfaces of the spacecraft and antenna.
50° latitude, the postings are spaced at 1″ (one arcsecond) in latitude by
1″ in longitude. At the equator, these are spacings of approximately 30 m × 30 m.
Figure 10.18 illustrates some of the products of the mapping mission.
Starting from a known elevation (sea level), altitude is obtained by
unwrapping the variation in phase. Figure 10.18 shows how the phase varies
over Lanai and a portion of Maui. This portrayal can be compared to the
difference image from Ft. Irwin in Fig. 10.13.
Figure 10.18 DEMs for some Hawaiian islands. Figure reprinted from http://photojournal.jpl.nasa.gov/catalog/PIA02723.
10.9 Problems
1. For a SAR system such as SIR-C, does the nominal 12.5-m azimuthal
resolution for the German X-band system correspond well to the nominal
antenna width? What pulse length would be required to match that in
range resolution? Compare this value to the actual pulse width.
Figure 10.19 RADARSAT-2 orbital perspective for data acquisition. The satellite orbit track
and ground track below the satellite are traced in light blue.
Figure 10.20 Histogram for regions of interest in TSX data acquired May 12, 2008.
Figure 11.0 Point cloud elevation data for the Naval Postgraduate School campus
obtained from an airborne LiDAR system. Data are color coded by elevation, with red (high)
and green/blue (low) in this rainbow color scheme (6–32 m).
11.1 Introduction
Light amplification by stimulated emission of radiation, or the laser, dates to
1957, emerging in theoretical papers by Townes and Schawlow.1 The term
“laser” was coined by Gould, who eventually received credit for this.2 The
1. A. L. Schawlow, and C. H. Townes, “Infrared and Optical Masers,” Physical Rev. 112(6),
1940–1949 (December 15, 1958).
2. R. G. Gould, “The LASER, Light Amplification by Stimulated Emission of Radiation,”
Ann Arbor Conf. Optical Pumping, p. 128 (June 15–18, 1959).
Figure 11.1 Laser profile taken from 1000-foot altitude from a Douglas A-26 aircraft. There is a
two-foot “crown” on the field but also an overall drift due to the limitations in aircraft altitude estimates.7
7. B. Miller, “Laser Altimeter May Aid Photo Mapping,” Aviation Week & Space Technology,
page 60, March 29, 1965.
8. J. E. Geusic, H. M. Marcos, and L. G. van Uitert, “Laser oscillations in Nd-doped yttrium
aluminum, yttrium gallium and gadolinium garnets,” Applied Physics Letters 4, 182–184
(1964).
Figure 11.2 A pulse of laser light is emitted from the aerial platform. A sensor records the
returning energy as a function of the xy position, which then provides the z, or elevation
component. Such systems are occasionally designated 3D imagers. The imager depends on
a very accurate knowledge of the platform position, generally obtained from GPS.9
not greatly different from the early ruby lasers, but it has better heat
conduction. Nd:YAG lasers typically operate at 1.064 μm (1064 nm), a
fluorescence line for Nd³⁺ in the YAG structure. These neodymium-doped
crystals can be and are “frequency doubled” to 532 nm by using a KTP
crystal10 for bathymetric applications. The laser output can also be “tripled”
to 355 nm.
Also popular are semiconductor lasers, particularly at 1.55 μm. These are
used in the fiber optics community for communications, which motivates a
great deal of development. They have significant eye-safety benefits, but this
wavelength is more affected by water vapor than the shorter wavelength
systems.
An illustrative system with both IR and green output is the Coastal Zone
Mapping and Imaging LiDAR (CZMIL) system. The output power is 30 W
at 10 kHz, with a pulse length of < 2.5 ns FWHM at 532 nm, and 20 W of
residual power at 1064 nm.11 Individual pulses from the Nd:YVO4 laser are a
few tenths of a millijoule after amplification. Figure 11.3 shows a time profile
for the output pulse. The system uses a significantly higher power level than
typical terrestrial LiDAR scanners because of the significant losses in the water column.
9. K. Kraus and N. Pfeifer, “Determination of terrain models in wooded areas with airborne
laser scanner data,” ISPRS Journal of Photogrammetry and Remote Sensing 53(4), 193–203
(1998); with thanks to David Evans, MSU, Dept of Forestry.
10. potassium titanyl phosphate KTiOPO4 (KTP); http://www.lc-solutions.com/product/
ktp.php.
11. J. W. Pierce, E. Fuchs, S. Nelson, V. Feygels, and G. Tuell, “Development of a novel laser
system for the CZMIL lidar,” Proc. SPIE 7695, 76960V (2010).
Figure 11.3 CZMIL green-output-pulse temporal profile. The pulse is a bit less than 2 ns
wide, and the leading edge is a fraction of a nanosecond. Image reprinted with permission
from Pierce et al. (2010).
12. Some vendors are selling systems that measure the phase within a CW signal, as with the
early SpectraPhysics system in Fig. 11.1, for very fine range resolution at short ranges,
notably FARO. These are terrestrial systems used for short-range scanning, from tens of
meters out to 100 m.
The LiDAR formula for power is also similar to that for radar,13 but it is
generally true that the laser beam is small enough that all of the transmitted
energy reaches the detected area on the target. For a homogeneous target
(surface), a relatively simple form results. The return beam still falls off
according to the inverse-square formula.
The resulting formula is
$$ P_{\text{received}} = P_{\text{transmitted}} \cdot \alpha \, \frac{1}{4\pi R_{\text{range}}^2} \, G_{\text{detector}}, \qquad (11.2) $$
where P_received is the received power, P_transmitted is the transmitted power, α is
the albedo (reflectance), and G_detector is the detector gain (proportional to the
detector area and quantum efficiency). The LiDAR range equation for
imaging systems depends on the range squared, in contrast to imaging radar
systems, which depend on the fourth power of the range.
Example
To illustrate the values and implications of the formula, consider a nominal
airborne system operating at 1.06 μm with 100-μJ (0.1-mJ) pulses. Assume an aperture
with a diameter of 30 cm and a beam divergence of one milliradian.
Assuming an ideal detector for a moment, the gain is just the area of the
collecting optic. Typical albedos for vegetation are about 0.9, and we assume
isotropic scattering. Assume an altitude (range) of 500 m:
$$ P_{\text{received}} = 100 \times 10^{-6} \cdot 0.9 \cdot \frac{1}{4\pi (500)^2} \cdot (\pi \cdot 0.15^2) = 2.02 \times 10^{-12}\ \text{J}. $$
The 1.06-μm photons have an energy of 1.9 × 10⁻¹⁹ J, so the return pulse
contains about 10⁷ photons. A typical efficiency for the detectors in a commercial
system would be about 10%, so the sensor would count about 10⁶ photons
over a period of a few tens of nanoseconds.
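The example's arithmetic, scripted as a minimal Python sketch. The 0.1-mJ pulse energy is the value that makes the printed result and photon counts mutually consistent; all other numbers are from the example, and the variable names are mine.

```python
import math

h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s

E_tx = 100e-6          # transmitted pulse energy, J (0.1 mJ, assumed)
albedo = 0.9           # vegetation reflectance
radius = 0.15          # collecting-aperture radius, m (30-cm diameter)
R = 500.0              # range, m
wavelength = 1.06e-6   # m

# Ideal detector: gain is just the aperture area; isotropic scattering, Eq. (11.2)
area = math.pi * radius**2
E_rx = E_tx * albedo * area / (4 * math.pi * R**2)

photon_energy = h * c / wavelength
n_photons = E_rx / photon_energy

print(E_rx)               # ~2.0e-12 J returned
print(n_photons)          # ~1e7 photons in the return pulse
print(0.10 * n_photons)   # ~1e6 photons counted at 10% efficiency
```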
The detectors for most LiDAR systems are variations on the photomultiplier
tubes (PMTs) illustrated in Chapter 2. The solid-state version of a
PMT is a photodiode or, more particularly, an avalanche photodiode. Longer
wavelengths (1.55 μm) require high-speed (GHz) InGaAs photodiodes.14
This technology is part of the infrastructure for fiber optic communications,
so there is significant technology evolution at work in this area. Photon-
counting detector arrays have been developed and are starting to appear in
commercial systems.
13. A good development and a more detailed form are given by Wagner et al., ISPRS Journal
of Photogrammetry and Remote Sensing 60(2), 100–112 (April 2006).
14. L. E. Tarof, “Planar InP/InGaAs avalanche photodetector with gain-bandwidth product
in excess of 100 GHz,” Electron. Lett. 27(1), 34–36 (1991).
Figure 11.4 Elevation of Wolf River Basin, located near Memphis, Tennessee, taken in
September 1980.16
Figure 11.5 Optech systems: pulse repetition rate (or frequency) as a function of time. The
diameter of the symbol is proportional to the operational altitude. The ALTM 3100 was a key
system in the evolution of commercial imaging and has only recently been superseded in the
market by newer and faster systems. As of 2013, the Pegasus HA-500 was the highest-
altitude, fastest instrument in the Optech inventory, able to work at altitudes of 100 m to
5 km and a PRF of 100–500 kHz. The dual-laser system allows for multiple pulses in the air
(MPIA).
Figure 11.6 Leica ALS70, with flight electronics. The laser is a Nd:YAG operated at
1.064 μm. The ALS70 operates at a maximum laser pulse rate of 250 kHz, with a maximum
average optical output of 8 W. The energy per pulse under these conditions is 8 W / 250,000
Hz = 32 μJ. Higher power is possible at a lower PRF, limited by the heating of the laser. The
pulse length is 4.5 ns or 9 ns, depending on system settings. The detector is a Si APD.17
Figure 11.8 LiDAR image taken over the Los Angeles Coliseum. The goal posts are 120
yards apart. Bleachers are at the 420-m mark. Data courtesy of Airborne 1, Los Angeles, CA.
One of the more vexing problems in remote sensing involves power and
telephone lines, which are generally sub-pixel for any reasonable detector—
the wires simply do not show up in optical imagery. LiDAR, with a relatively
small spot size, illuminates the wires at a fairly regular interval, and Fig. 11.11
shows the wires detected quite accurately. Corridor mapping is one of the
major business areas for airborne laser mapping.
Figure 11.9 Intensity image produced by LiDAR active illumination at 1.06 μm.
Figure 11.10 Detailed view of the first/last returns, bare soil, and extracted feature returns
over a few trees to the west of the Coliseum.
11.5 Bathymetry
One powerful capability offered by LiDAR is the ability to survey for water
depth, that is, to conduct bathymetric surveys. Some of the first such
measurements by Hoge et al. are illustrated here for data taken over the
Atlantic Ocean by the Airborne Oceanographic LiDAR (AOL) in Fig. 11.12.
[These data are from the same system used by Krabill et al. (1984), as shown in
Fig. 11.4.] This early system successfully made measurements down to depths as
great as 10 m. Current systems typically work to depths of 50–100 m, at least
for clear water. The Avco model C-500 neon laser operated at 540.1 nm with a
Figure 11.11 Power lines adjacent to the NPS campus, collected with the Optech C-100
corridor mapper. The point density along the power lines ranges from 12.5–13.5 points/m
along the lines. The background point density on the surface ranges from 60–110 points/m²
here, with a peak in the 80–90 points/m² range. This overall point density is relatively high by
current mapping standards (2015), and the typical power-line point density will be a bit less.
Figure 11.12 Cross-section comparison of AOL data with NOAA launch data. Twenty
seconds of data are shown: the dots are the LiDAR returns, and the solid line is from the
in situ measurements (sonar). The vertical axis indicates the depth in meters, ranging from
0 to 5. Errors are likely due to navigation, i.e., position/time. Image reprinted with permission
from F. E. Hoge et al. (1980).18
Figure 11.13 Waveforms from a profile over Monastery Beach, Monterey, CA. The detector
is sampled at 1.8 GHz, so at roughly 0.5 ns intervals. In the graph at the top, time is increasing
to the right. The peaks in the samples in the 200–220 range are from the surface reflections; the
peaks centered around sample 320 are the bottom reflections. The water here is about 5 m
deep. The waterfall display inset reflects the scan pattern over the water, hence the scalloped
pattern most obvious in the bottom signature. The graph at the bottom represents the depth
profile obtained from the discrete returns. Color is altitude (or depth), with red a few meters
above ground level, green at sea level, and shades of blue as the water deepens.
400-Hz PRF. The 7-ns, 2-kW pulses were swept over the surface by a conical scanning
system; photon returns were obtained from a PMT, digitized, and then gated
at 2.5-ns intervals (a temporal resolution that is still respectable by modern
standards).
For comparison, a modern commercial LiDAR system was operated over
the Monterey Bay area in 2014; the two-color AHAB system measures the
waveforms of the returned laser signals, as illustrated in Fig. 11.13. These
systems are generally applied in clear coastal waters to depths of 10–20 m.
Figure 11.14 The topography of Mars, as measured by the Mars Orbiter Laser Altimeter (MOLA).
The elevation is scaled from 0–12 km in the color scale shown at the top right.19
19. O. Aharonson et al., “Mars: Northern hemisphere slopes and slope distributions,”
Geophys. Res. Lett. 25, 4413–4416 (1998). http://mola.gsfc.nasa.gov/images.html.
11.7 Problems
1. For a LiDAR to have a vertical resolution of 5 cm, what is the upper limit
on the pulse length, in time? Assume a square pulse, as with a radar.
2. For a LiDAR emitting a 32-μJ pulse, how much energy is
returned to the detector? How many photons are emitted and returned?
Assume a range of 1500 m, a wavelength of 1.55 μm, and a receiver
(telescope) with a diameter of 20 cm. Assume a perfect Lambertian surface
(reflectance α = 1).
3. For an airborne system, flying at 1000 AGL, calculate the transit time for
a laser pulse that is reflected from directly below the aircraft. Be sure to
include both the downward and upward propagation time. Compare your
result to the ALS-70 operational parameters.
4. For a satellite operating at an altitude of 705 km (e.g., ICESAT), calculate
the transit time for a laser pulse in a nadir view. What would be the
maximum frequency allowable for SPIA (single pulse in the air) conditions?
Figure 1 Topographical map of Terra Sirenum, the Martian Atlantis. It ranges in elevation
from −8000 to 3000 m. Copyright ESA/DLR/FU Berlin, CC BY-SA 3.0 IGO.
$$ F = \frac{q_1 q_2}{4\pi\varepsilon_0 r^2}, \qquad (A1.1) $$
$$ F = -\frac{Z e^2}{4\pi\varepsilon_0 r^2}. \qquad (A1.2) $$
The minus sign on the force term means that the force is “inward,” or
attractive.
$$ \frac{Z e^2}{4\pi\varepsilon_0 r^2} = \frac{m v^2}{r}, \qquad (A1.3) $$
and it is possible to solve for the radius versus velocity.
$$ v_n = \frac{n\hbar}{m r_n} \qquad (A1.5) $$
for the velocity of the electron in its orbit. There is an index n for the different
allowed orbits. It follows that
$$ E = U + T = -\frac{Z e^2}{4\pi\varepsilon_0 r} + \frac{1}{2}\frac{Z e^2}{(4\pi\varepsilon_0) r} = -\frac{1}{2}\frac{Z e^2}{(4\pi\varepsilon_0) r}. \qquad (A1.11) $$
The total energy is negative—a general characteristic of bound orbits. This
equation also indicates that if the radius of the orbit (r) is known, then the
energy E of the electron can be calculated.
Substituting the expression for r_n [Eq. (A1.7)] into Eq. (A1.11) produces
$$ E = -\frac{1}{2}\left(\frac{Z e^2}{4\pi\varepsilon_0}\right)\frac{1}{n^2}\frac{Z m e^2}{4\pi\varepsilon_0 \hbar^2}, $$
or
$$ E = -\frac{1}{2}\left(\frac{Z e^2}{4\pi\varepsilon_0}\right)^2 \frac{m}{\hbar^2 n^2} = -Z^2 \frac{E_1}{n^2}, \qquad (A1.12) $$
where
$$ E_1 = \frac{m e^4}{32\pi^2 \varepsilon_0^2 \hbar^2} = 13.58\ \text{eV} $$
is the energy of the electron in its lowest or “ground” state in the hydrogen
atom.
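As a numerical check of E₁ and Eq. (A1.12), here is a short Python sketch (standard physical constants; the function name is mine):

```python
import math

m_e = 9.109e-31        # electron mass, kg
e = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12       # vacuum permittivity, F/m
hbar = 1.055e-34       # reduced Planck constant, J*s

# Ground-state energy of hydrogen: E1 = m e^4 / (32 pi^2 eps0^2 hbar^2)
E1 = m_e * e**4 / (32 * math.pi**2 * eps0**2 * hbar**2)
print(E1 / e)          # ~13.6 eV

def E_n(Z, n):
    """Energy (eV) of level n for a hydrogen-like atom of charge Z, Eq. (A1.12)."""
    return -Z**2 * E1 / n**2 / e

print(E_n(1, 2))       # ~-3.4 eV for the first excited state of hydrogen
```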
$$ E_n = \frac{E_o}{r_n^2}\,(a_n e^{i w_n})\, e^{i k r_n}, \qquad (A1.15) $$
where E_n is the electric field component due to the nth array element, E_o is the
electric field magnitude defined by the sources (all the same in this case), a_n is
the amplitude contribution from the nth array element (here with the
dimensions of length squared), w_n is the phase for the nth array element, r_n is
the distance from the nth array element to the observation point, and k = 2π/λ is
the wavenumber. A classic principle of electricity and magnetism says that the
total electric field can be obtained by adding up all of the components,
keeping track of the phase, of course.
The total field from all the radiators is the sum of the elements:
Figure A1.2 Each of the array elements is the source of a spherically expanding wave,
which reaches the “screen” at the right after traveling a distance that depends on y.
$$ E_{\text{total}} = \sum_n \frac{E_o}{r_n^2}\,(a_n e^{i w_n})\, e^{i k r_n}. \qquad (A1.16) $$
Taking the amplitudes and phases to be equal for all elements, this becomes
$$ E_{\text{total}} = \frac{E_o}{r_o^2} \sum_n e^{i k r_n}, \qquad (A1.17) $$
where the slow inverse square variation in amplitude has been factored out.
The next part manipulates the complex term inside the summation to address
the question of how the exponent varies as n varies.
If it is assumed that y₀ = 0 corresponds to the n = 0 element, then
$$ r_o = \sqrt{x^2 + y^2}; \qquad r_n = \sqrt{x^2 + (y + nd)^2}. \qquad (A1.18) $$
This form is exact, but the trick is to factor out the ro term from the rn terms.
Such an operation is possible because d is small. First, expand the term inside
the square root:
$$ r_n = r_o\,\frac{\sqrt{x^2 + (y + nd)^2}}{\sqrt{x^2 + y^2}} = r_o\,\frac{\sqrt{x^2 + y^2 + 2ynd + n^2 d^2}}{\sqrt{x^2 + y^2}}. \qquad (A1.19) $$
Without any approximations, bring the denominator into the radical and
divide out:
$$ r_n = r_o \sqrt{\frac{x^2 + y^2 + 2ynd + n^2 d^2}{x^2 + y^2}} = r_o \sqrt{1 + \frac{2ynd}{x^2 + y^2} + \frac{n^2 d^2}{x^2 + y^2}}. \qquad (A1.20) $$
A subtle trick is used at this point. First, take the third term as very small and
then use an approximation for the square root, where the second term is small:
$$ r_n = r_o \sqrt{1 + \frac{2ynd}{x^2 + y^2} + 0} \;\approx\; r_o\left(1 + \frac{ynd}{x^2 + y^2}\right) = r_o\left(1 + \frac{ynd}{r_o^2}\right). \qquad (A1.21) $$
(For exercise, check that the third term is small by plugging in some typical
numbers: d = 1 cm, y = 500 m, and x = 2000 m.) Now use the familiar polar
form sin θ = y/r_o and simplify the remaining terms:
$$ E_{\text{total}} = \frac{E_o}{r_o^2} \sum_n e^{i k r_n} = \frac{E_o}{r_o^2} \sum_n e^{i k r_o}\, e^{i k n d \sin\theta} = \frac{E_o}{r_o^2}\, e^{i k r_o} \sum_n e^{i k n d \sin\theta}, \qquad (A1.22) $$
which defines the zeroes in the beam pattern. For a continuous antenna
element, the sum is replaced by an integral over an antenna of length L = nd:
$$ E_{\text{total}}(\theta) = \frac{E_o}{r_o^2}\, e^{i k r_o} \sum_n e^{i k (n d \sin\theta)} = \frac{E_o}{r_o^2}\, e^{i k r_o} \int_{-L/2}^{L/2} \frac{a_o}{L}\, e^{i k y \sin\theta}\, dy, \qquad (A1.23) $$
where the sum over the elements has been replaced by an integral over
y = −L/2 to L/2, and the amplitude factor has been put back in for a
moment, along with an inverse length to go with the integration variable. The
integral on the right side is simply the Fourier transform of the square
aperture of length L:
$$ E_{\text{total}}(\theta) = \frac{E_o}{r_o^2}\, e^{i k r_o}\, \frac{1}{L} \int_{-L/2}^{L/2} e^{i k y \sin\theta}\, dy = \frac{E_o}{r_o^2}\, e^{i k r_o}\, \frac{\sin[(k L \sin\theta)/2]}{(k L \sin\theta)/2}. \qquad (A1.24) $$
The power at any particular location will then be proportional to the square of
the electric field strength. The resulting function is then proportional to the
square of the sinc function: sin²(α)/α².
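A brief sketch of the resulting power pattern in Python/NumPy. The 3-cm wavelength and 1-m aperture are illustrative assumptions, not values from the text; note that NumPy's sinc is the normalized form, sin(πx)/(πx).

```python
import numpy as np

wavelength = 0.03          # m (assumed, roughly X band)
L = 1.0                    # aperture length, m (assumed)
k = 2 * np.pi / wavelength

theta = np.radians(np.linspace(-5, 5, 1001))
alpha = k * L * np.sin(theta) / 2

# sinc-squared power pattern of Eq. (A1.24): [sin(alpha)/alpha]^2
power = np.sinc(alpha / np.pi) ** 2
print(power.max())         # 1.0, at boresight (theta = 0)

# First nulls fall where alpha = +/- pi, i.e., sin(theta) = wavelength / L
first_null = np.degrees(np.arcsin(wavelength / L))
print(first_null)          # ~1.7 deg for these assumed values
```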
Date      Mission      Designator  Success¹  Remarks

1959
28 Feb    1959-002A    —           —         Discoverer I, Thor Agena A; orbited for 5 days (one year after Explorer 1, 2/21/58).
13 Apr    1959-003A    —           —         Discoverer II,² Thor Agena A.
3 Jun     —            —           —         Discoverer III failed to orbit.
25 Jun    9001         KH-1        no        Discoverer IV; Agena did not orbit.
13 Aug    9002         KH-1        no        Discoverer V; camera failed on Rev 1; RV not recovered.
19 Aug    9003         KH-1        no        Discoverer VI; camera failed on Rev 2; retrorocket malfunction; RV not recovered.
7 Nov     9004         KH-1        no        Discoverer VII; Agena failed to orbit.
20 Nov    9005         KH-1        no        Discoverer VIII; bad orbit; camera failure; no recovery.

1960
4 Feb     9006         KH-1        no        Discoverer IX; Agena failed to orbit.
19 Feb    9007         KH-1        no        Discoverer X; Agena failed to orbit.
15 Apr    9008         KH-1        no        Discoverer XI; camera operated; spin rocket failure; no recovery.
29 Jun    N/A          N/A         N/A       Discoverer XII diagnostic flight; Agena failed to orbit.
10 Aug    N/A          N/A         N/A       Discoverer XIII diagnostic flight successful.³
18 Aug    9009         KH-1        yes       Discoverer XIV; first successful KH-1 mission; first successful air recovery of an object sent into space.
13 Sep    9010         KH-1        no        Discoverer XV; camera operated; wrong pitch attitude on reentry; no recovery (capsule sank).
26 Oct    9011         KH-2        no        Discoverer XVI; Agena failed to orbit.
12 Nov    9012         KH-2        no        Discoverer XVII; air catch; payload malfunction.
7 Dec     9013         KH-2        yes       Discoverer XVIII; first successful KH-2 mission; air catch.
20 Dec    N/A          N/A         N/A       Discoverer XIX radiometric mission (MIDAS missile detection test).

1961
17 Feb    9014A        KH-5        no        Discoverer XX; first ARGON flight; orbital programmer failed; camera failed; no recovery.
18 Feb    N/A          N/A         N/A       Discoverer XXI radiometric mission.
30 Mar    9015         KH-2        no        Discoverer XXII; Agena failure; no orbit.
8 Apr     9016A        KH-5        no        Discoverer XXIII; camera OK; no recovery.
8 Jun     9018A        KH-5        no        Discoverer XXIV; Agena failure; power and guidance failure; no recovery.
16 Jun    9017         KH-2        yes       Discoverer XXV; water landing; recovery.
7 Jul     9019         KH-2        partial   Discoverer XXVI; camera failed on Rev 22; successful recovery.
21 Jul    9020A        KH-5        no        Discoverer XXVII; no orbit; Thor problem.
3 Aug     9021         KH-2        no        Discoverer XXVIII; no orbit; Agena guidance failure.
30 Aug    9023         KH-3        yes       Discoverer XXIX; first KH-3 flight; air recovery.
12 Sep    9022         KH-2        yes       Discoverer XXX; air recovery (fifth).
17 Sep    9024         KH-2        no        Discoverer XXXI; no recovery; power failure.
13 Oct    9025         KH-3        yes       Discoverer XXXII; air recovery.
23 Oct    9026         KH-2        no        Discoverer XXXIII; Agena failed to orbit.
5 Nov     9027         KH-3        no        Discoverer XXXIV; no recovery.
15 Nov    9028         KH-3        yes       Discoverer XXXV.
12 Dec    9029         KH-3        yes       Discoverer XXXVI.

1962
13 Jan    9030         KH-3        no        Discoverer XXXVII; Agena failed to orbit.
27 Feb    9031         KH-4        yes       Discoverer XXXVIII; first KH-4 flight; air recovery.
18 Apr    9032         KH-4        yes       Air recovery.
28 Apr    9033         KH-4        no        No recovery; failed to eject parachute.
15 May    9034A        KH-5        yes
30 May    9035         KH-4        yes
2 Jun     9036         KH-4        no        No recovery; torn parachute.
23 Jun    9037         KH-4        yes
28 Jun    9038         KH-4        yes
21 Jul    9039         KH-4        yes
28 Jul    9040         KH-4        yes
2 Aug     9041         KH-4        yes
29 Aug    9044         KH-4        yes
1 Sep     9042A        KH-5        yes
17 Sep    9043         KH-4        yes
29 Sep    9045         KH-4        yes
9 Oct     9046A        KH-5        yes
5 Nov⁴    9047         KH-4        yes
24 Nov    9048         KH-4        yes
4 Dec     9049         KH-4        yes
14 Dec    9050         KH-4        yes

1963
8 Jan     9051         KH-4        yes
28 Feb    9052         KH-4        no        Separation failure.
18 Mar    8001         KH-6        no        First KH-6 flight; no orbit; guidance failure (Agena).
1 Apr     9053         KH-4        yes
26 Apr    9055A        KH-5        no        No orbit; attitude sensor problem.
18 May    8002         KH-6        no        Orbit achieved; Agena failed in flight.
13 Jun    9054         KH-4        yes
Orbital Parameters: Epoch, P (min), h_P (km), h_A (km).
1. http://www.lib.cas.cz/www/space.40/1959/005A.HTM
1. http://tdrs.gsfc.nasa.gov/tdrsproject/about.htm
Figure A3.1 The second TDRSS ground terminal at White Sands Ground Station.
Figure A3.2 There are four nominal stations for the active TDRS constellation: TDE
(TDRS East), TDW (TDRS West), TDZ [TDRS Zone of Exclusion (ZOE)], and TDS (TDRS
Spare). The original plan involved only the first two stations. A zone of exclusion existed in
the original plan, and eventually the third station was added.
Figure A3.1 shows the ground station. These terminals are known as the White Sands
Ground Terminal (WSGT) and the Second TDRSS Ground Terminal
(STGT). The ground stations include three 18.3-m Ku-band antennas, three
19-m Ku-band antennas, and two 10-m S-band TT&C antennas.
Table A3.1 The TDRS fleet: satellite locations. TDRS-7 and -8 are controlled from the
Guam Remote Ground Terminal (GRGT).
Satellite Launch Date Location
Figure A3.3 TDRS 1-7 spacecraft: 45 feet wide, 57 feet long, 5000 pounds, and 1800-W
power (EOL).
satellites of this era (early 1980s). The total power output of the solar array is
approximately 1800 W. Spacecraft telemetry and commanding are performed
via a Ku-band communications system, with emergency backup provided by
an S-band system.2
2. http://tdrs.gsfc.nasa.gov/tdrsproject/tdrs1.htm#1
Figure A3.5 The TDRS fleet as of 2014. TDRS-1 and TDRS-4 are drifting with respect to
the earth’s surface, as indicated here in the day-long simulation. The active satellites are still
fluctuating with respect to the surface of the earth by a few degrees, illustrating the
difference between geosynchronous and geostationary. The latter is quite rare and would be
difficult to maintain. TDRS-9 and -10 are nearly geostationary, with inclinations of just a few
degrees.
A3.3.2 Payload3
The satellite payload is an ensemble of antennas designed to support the relay
mission:
• Two single-access (SA) antennas: Each antenna is a 4.9-m-diameter
molybdenum wire mesh antenna that can be used for Ku-band and
S-band links. Each antenna is steerable in 2 axes and communicates
with one target spacecraft at a time.
• One multiple-access (MA) S-band antenna array: This is an electronically
steerable phased array consisting of 30 fixed helix antennas. The MA
array can receive data from up to 20 user satellites simultaneously, with
one electronically steerable forward service (transmission) at a time.
Twelve of the helices can transmit and receive, with the remainder only
able to receive. Relatively low data rates are supported—100 bps to
50 kbps.4
• One space-to-ground-link (SGL) antenna: This is a 2-m parabolic
antenna operating at Ku-band that provides the communications link
between the satellite and the ground. All customer data is sent through
3. http://msl.jpl.nasa.gov/QuickLooks/tdrssQL.html
4. NASA Press Release, Tracking And Data Relay Satellite System (TDRSS) Overview;
Release No. 91-41; June 7, 1991
this dish, as are all regular TDRS command and telemetry signals. The
antenna is gimbaled on two axes.
• One S-band omni-antenna: a conical log spiral antenna used during the
satellite’s deployment phase and as a backup in the event of a spacecraft
emergency. This antenna does not support customer links.
5. http://spaceflightnow.com/atlas/ac139/000626tdrsh.html
A3.5 TDRS K, L, M
A new generation of TDRS satellites began to operate in 2013. They are
similar to the second series (based on the Boeing 601). Two have been
launched, and as of early 2014, they are in the checkout phase.
EM Waves
$$ \lambda f = c; \quad E = h f; \quad \lambda = \frac{hc}{\Delta E}; \quad c = 2.998 \times 10^8\ \text{m/s}; \quad 1\ \text{eV} = 1.602 \times 10^{-19}\ \text{J}; $$
$$ h = \text{Planck's constant} = 6.626 \times 10^{-34}\ \text{J·s} = 4.136 \times 10^{-15}\ \text{eV·s}. $$
Bohr Atom
$$ E_n = -\frac{1}{2}\left(\frac{Z e^2}{4\pi\varepsilon_0}\right)^2 \frac{m}{\hbar^2 n^2} = -Z^2 \frac{E_1}{n^2}; \quad E_1 = \frac{m e^4}{32\pi^2 \varepsilon_0^2 \hbar^2} = 13.58\ \text{eV}; $$
$$ \text{number} \propto e^{-\,\text{bandgap energy}/\text{thermal energy}\,(kT)}. $$
Blackbody Radiation
$$ c = 3 \times 10^8\ \frac{\text{m}}{\text{s}}; \quad h = 6.626 \times 10^{-34}\ \text{J·s}; \quad k = 1.38 \times 10^{-23}\ \frac{\text{J}}{\text{K}}; $$
$$ \text{radiance} = L = \frac{2 h c^2}{\lambda^5}\,\frac{1}{e^{hc/\lambda k T} - 1}; $$
$$ \text{Stefan--Boltzmann law: } R = \sigma \varepsilon T^4\ \left(\frac{\text{W}}{\text{m}^2}\right); $$
$$ \varepsilon = \text{emissivity}; \quad \sigma = 5.67 \times 10^{-8}\ \frac{\text{W}}{\text{m}^2\,\text{K}^4}; \quad T = \text{temperature (K)}; $$
$$ T_{\text{radiative}} = \varepsilon^{1/4}\, T_{\text{kinetic}}. $$
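These blackbody expressions are easy to evaluate numerically; here is a minimal Python sketch (constants as tabulated above; the function name and the 300-K, 10-μm example values are mine):

```python
import math

h = 6.626e-34     # Planck constant, J*s
c = 3.0e8         # speed of light, m/s
k = 1.38e-23      # Boltzmann constant, J/K
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def planck_radiance(wavelength, T):
    """Spectral radiance L(lambda, T) in W / (m^2 sr m)."""
    return (2 * h * c**2 / wavelength**5) / \
           (math.exp(h * c / (wavelength * k * T)) - 1)

# Example: a 300-K surface viewed at 10 um, and its total exitance (emissivity 1)
print(planck_radiance(10e-6, 300.0))   # ~9.9e6 W/(m^2 sr m)
print(sigma * 300.0**4)                # ~459 W/m^2
```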
Optics
$$ \frac{1}{f} = \frac{1}{i} + \frac{1}{o}; \quad f/\# = \frac{\text{focal length}}{\text{diameter}}; $$
$$ n = \frac{c}{v}; \quad n_1 \sin\theta_1 = n_2 \sin\theta_2; $$
$$ r_\perp = \frac{n_1\cos\theta_1 - n_2\cos\theta_2}{n_1\cos\theta_1 + n_2\cos\theta_2}; \quad r_\parallel = \frac{n_2\cos\theta_1 - n_1\cos\theta_2}{n_2\cos\theta_1 + n_1\cos\theta_2}; \quad R = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2. $$
Orbital Mechanics
$$ \vec{F} = -G\frac{m_1 m_2}{r^2}\hat{r}; \quad F = g_o m \left(\frac{R_{\text{earth}}}{r}\right)^2; \quad G = 6.67 \times 10^{-11}\ \text{N}\,\frac{\text{m}^2}{\text{kg}^2}; $$
$$ g_o = G\frac{m_{\text{earth}}}{R_{\text{earth}}^2} = 9.8\ \frac{\text{m}}{\text{s}^2}; \quad R_{\text{earth}} = 6.38 \times 10^6\ \text{m}; \quad m_{\text{earth}} = 5.9736 \times 10^{24}\ \text{kg}. $$
$$ v = \omega r; \quad \omega = 2\pi f; \quad \tau = \frac{1}{f} = \frac{2\pi}{\omega}; $$
$$ F_{\text{centripetal}} = m\frac{v^2}{r} = m\omega^2 r; \quad \text{circular motion: } v = R_{\text{earth}}\sqrt{\frac{g_o}{r}}; $$
$$ \text{Ellipses: } \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1; \quad \varepsilon = \frac{\sqrt{a^2 - b^2}}{a} \text{ or } \varepsilon = \sqrt{1 - \frac{b^2}{a^2}}; $$
$$ \text{Distance from center to focus: } c = \varepsilon a = \sqrt{a^2 - b^2}; $$
$$ \text{Elliptical orbit: } v^2 = GM\left(\frac{2}{r} - \frac{1}{a}\right); \quad \tau^2 = \frac{4\pi^2}{M_{\text{earth}} G}\, r^3 = \frac{4\pi^2}{g_o R_{\text{earth}}^2}\, r^3. $$
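As a worked instance of the period formula, a minimal Python sketch (constants as above; the 705-km altitude echoes the ICESAT problem in Chapter 11 and is otherwise just an example):

```python
import math

G = 6.67e-11            # gravitational constant, N m^2 / kg^2
m_earth = 5.9736e24     # kg
R_earth = 6.38e6        # m

def period_s(altitude_m):
    """Circular-orbit period from tau^2 = 4 pi^2 r^3 / (G m_earth)."""
    r = R_earth + altitude_m
    return 2 * math.pi * math.sqrt(r**3 / (G * m_earth))

print(period_s(705e3) / 60)   # ~99 minutes for a 705-km orbit
```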