Using Stereo Analyst For Arcgis: Geographic Imaging by Leica Geosystems Gis & Mapping
The information contained in this document is the exclusive property of Leica Geosystems GIS & Mapping, LLC. This work is protected under United States copyright law
and other international copyright treaties and conventions. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopying and recording, or by any information storage or retrieval system, except as expressly permitted in writing by Leica Geosystems GIS & Mapping, LLC. All
requests should be sent to Attention: Manager of Technical Documentation, Leica Geosystems GIS & Mapping, LLC, 2801 Buford Highway NE, Suite 400, Atlanta, GA,
30329-2137, USA.
CONTRIBUTORS
Contributors to this book and On-line Help for Stereo Analyst for ArcGIS include: Sam Megenta, Frank Obusek, Jay Pongonis, Russ Pouncey, Mladen Stojić, Ryan Strynatka,
and Lori Zastrow of Leica Geosystems GIS & Mapping, LLC.
Any software, documentation, and/or data delivered hereunder is subject to the terms of the License Agreement. In no event shall the U.S. Government acquire greater than
RESTRICTED/LIMITED RIGHTS. At minimum, use, duplication, or disclosure by the U.S. Government is subject to restrictions set forth in FAR §52.227-14 Alternates I,
II, and III (JUN 1987); FAR §52.227-19 (JUN 1987), and/or FAR §12.211/12.212 (Commercial Technical Data/Computer Software); and DFARS §252.227-7015 (NOV
1995) (Technical Data) and/or DFARS §227.7202 (Computer Software), as applicable. Contractor/Manufacturer is Leica Geosystems GIS & Mapping, LLC, 2801 Buford
Highway NE, Suite 400, Atlanta, GA, 30329-2137, USA.
ERDAS, ERDAS IMAGINE, IMAGINE OrthoBASE, and Stereo Analyst for ArcGIS are registered trademarks. IMAGINE OrthoBASE Pro and IMAGINE VirtualGIS are
trademarks.
ERDAS® is a wholly owned subsidiary of Leica Geosystems GIS & Mapping, LLC.
Other companies and products mentioned herein are trademarks or registered trademarks of their respective trademark owners.
Contents
Getting started
1 Introducing Stereo Analyst for ArcGIS 3
What can you do with Stereo Analyst for ArcGIS? 4
Learning about Stereo Analyst for ArcGIS 11
2 Quick-start tutorial 13
Exercise 1: Starting Stereo Analyst for ArcGIS 14
Exercise 2: Adding oriented images 18
Exercise 3: Converting features—3D to 2D and 2D to 3D 31
Exercise 4: Collecting features in 3D 42
Exercise 5: Editing existing features 58
What’s next? 66
Working in stereo
3 Working with oriented images 69
Creating oriented images 70
Using IMAGINE OrthoBASE to create oriented images 76
Using Image Analysis for ArcGIS to create oriented images 78
Importing IMAGINE OrthoBASE block files 79
Importing SOCET SET ® files 82
What’s next? 85
Appendices
A Capturing data using imagery 179
Collecting data for a GIS 180
Preparing imagery for a GIS 182
Using traditional approaches 186
Applying geographic imaging 188
Moving from imagery to a 3D GIS 190
Identifying workflow 191
Getting 3D GIS data from imagery 195
Scanning aerial photography 216
Understanding interior orientation 223
Understanding exterior orientation 226
Using digital mapping solutions 229
Glossary 233
References 249
Index 251
The data in a GIS needs to reflect reality, and snapshots of reality need to be
incorporated and accurately transformed into instantaneously ready, easy-to-use
information. From snapshots to digital reality, images are pivotal in creating and
maintaining the information infrastructure used by today’s society. Today’s
geographic information systems have been carefully created with features,
attributed behavior, analyzed relationships, and modeled processes.
There are five essential questions that any GIS needs to answer: Where, What,
When, Why, and How. Uncovering Why, When, and How are all done within the
GIS; images allow you to extract the Where and What. Precisely where is that
building? What is that parcel of land used for? What type of tree is that? The new
extensions developed by Leica Geosystems GIS & Mapping, LLC use imagery to
allow you to accurately address the questions Where and What, so you can then
derive answers for the other three.
But our earth is changing! Urban growth, suburban sprawl, industrial usage and
natural phenomena continually alter our geography. As our geography changes, so
does the information we need to understand it. Because an
image is a permanent record of features, behavior,
relationships, and processes captured at a specific moment in
time, using a series of images of the same area taken over
time allows you to more accurately model and analyze the
relationships and processes that are important to our earth.
Sincerely,
Mladen Stojić
Product Manager
Leica Geosystems GIS & Mapping, LLC
Section 1

1 Introducing Stereo Analyst for ArcGIS
IN THIS CHAPTER

• What can you do with Stereo Analyst for ArcGIS?
• Learning about Stereo Analyst for ArcGIS

Welcome to Stereo Analyst® for ArcGIS, the stereo feature collection extension for ArcGIS™. Stereo Analyst for ArcGIS adds unique image viewing and feature collection capabilities to your ArcGIS desktop and uses the existing feature editing and collection capabilities available in ArcMap™.

With Stereo Analyst for ArcGIS, you can access image and feature datasets directly from a geodatabase. You can also collect new feature datasets accurately using oriented imagery as a reference backdrop. If you already have some feature datasets, you can edit them reliably in stereo using ArcMap editing tools.
Using a three-dimensional (3D) digital view of the earth’s surface created with
oriented imagery, you can collect true, real-world 3D geographic information. By
analyzing oriented imagery with Stereo Analyst for ArcGIS, even more
information can be extracted from imagery. Geographic information system (GIS)
professionals are no longer limited to collecting two-dimensional (2D) GIS data.
This data can be used to build relationships into a GIS. Also, you can collect mass
points with X, Y, and Z coordinates for the creation of a digital terrain model
(DTM).
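To make the mass-point idea concrete, here is a minimal sketch of how scattered X, Y, Z mass points can be gridded into a simple DTM. Inverse-distance weighting (IDW) is just one common gridding method, and the point list, grid extents, and cell size below are invented for illustration; Spatial Analyst, 3D Analyst, and ERDAS IMAGINE each offer their own interpolation options.

```python
# Sketch: grid scattered X, Y, Z mass points into a simple DTM using
# inverse-distance weighting (IDW). Illustrative only -- the points,
# grid origin, and cell size are made up.

def idw_grid(points, xmin, ymin, cell, ncols, nrows, power=2.0):
    """Return a row-major list of interpolated Z values."""
    grid = []
    for r in range(nrows):
        for c in range(ncols):
            gx = xmin + c * cell
            gy = ymin + r * cell
            num = den = 0.0
            for (x, y, z) in points:
                d2 = (gx - x) ** 2 + (gy - y) ** 2
                if d2 == 0.0:            # grid node coincides with a point
                    num, den = z, 1.0
                    break
                w = 1.0 / d2 ** (power / 2.0)
                num += w * z
                den += w
            grid.append(num / den)
    return grid

# Four collected mass points (X, Y, Z in metres), gridded at 10 m cells:
mass_points = [(0.0, 0.0, 450.0), (10.0, 0.0, 452.0),
               (0.0, 10.0, 454.0), (10.0, 10.0, 456.0)]
dtm = idw_grid(mass_points, xmin=0.0, ymin=0.0, cell=10.0, ncols=2, nrows=2)
```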
What can you do with Stereo Analyst for ArcGIS?
Stereo Analyst for ArcGIS supplies you with the tools you'll need to update and create accurate and reliable feature datasets for use in a GIS. You can use Stereo Analyst for ArcGIS in conjunction with two applications you're probably already familiar with, ArcCatalog™ and ArcMap. These applications allow you to easily manage all of your raster and feature datasets used in Stereo Analyst for ArcGIS.

Creating your world in 3D

Stereo Analyst for ArcGIS creates an accurate 3D digital representation of the earth's surface and geography using imagery. Using GIS-ready images, the contents of an image are recreated and represented in a 3D view. This 3D digital representation of the earth is displayed on the screen.

Gaining more accuracy in GIS data collection

The 3D Floating Cursor has a 3D coordinate associated with it. As a result, wherever you move the 3D Floating Cursor, a new 3D coordinate is displayed. To ensure the accuracy of GIS feature data collected in Stereo Analyst for ArcGIS, the 3D Floating Cursor is positioned so that it rests on the feature being collected. When the 3D Floating Cursor rests on the ground or feature of interest, a new 3D coordinate is computed and then the feature can be accurately collected. (See "Adjusting the position of the 3D Floating Cursor" on page 133 for information about placing the 3D Floating Cursor on the feature of interest.)

Stereo Analyst for ArcGIS supports all of the raster and feature dataset formats currently supported by ArcMap. For example, raster format support includes ArcSDE rasters, ERDAS® 7.5 LAN, ERDAS IMAGINE® (.img), ERDAS Raw, ESRI Image Catalogs, GRID, GRID Stack, and Windows BMP.
Editing features
Using the editing tools you’re already familiar with in ArcMap, you
can edit features that you create and features that you have imported
in Stereo Analyst for ArcGIS. Simply display the Editor toolbar and
choose the edit tools you need to modify feature datasets.
Attribution information is updated accordingly.
Using Stereo Analyst for ArcGIS, feature datasets can be updated to reflect the geography on the earth’s surface as recorded by an image.
Highly accurate oriented images are used as a reference source for updating feature datasets. (See “Creating oriented images” on page 70
for information about oriented imagery.) Using Stereo Analyst for ArcGIS, updates to feature datasets are not only in 3D, but are also
accurate 2D (planimetric) updates.
In the picture on the left, before editing, this road is not positioned correctly. In the picture on the right, the road clearly follows the feature.
Using the Editor tools you’re probably already familiar with in ArcMap, Stereo Analyst for ArcGIS allows you to collect new features. To
ensure the accuracy of the GIS data collected using Stereo Analyst for ArcGIS, the 3D Floating Cursor must rest on the feature of interest
being collected.
The picture on the left shows features collected in ArcMap; the picture on the right shows the same features collected in the Stereo Window.
Stereo Analyst for ArcGIS allows you to collect elevation information directly from oriented imagery without requiring a digital elevation
model (DEM). Since an accurate 3D digital representation of the earth’s surface is created on the screen using imagery, accurate 3D point
feature datasets can be collected. With mass point data, you can easily create DEMs using Spatial Analyst™, 3D Analyst™, or ERDAS
IMAGINE.
The picture on the left shows mass points collected in Stereo Analyst for ArcGIS. The DTM on the right was generated from the mass points collected in Stereo
Analyst for ArcGIS. ERDAS IMAGINE was used to create the DTM.
Stereo Analyst for ArcGIS adds three toolbars and several new features to ArcMap when it is installed. The three toolbars include the Stereo
Analyst toolbar, the Stereo View toolbar, and the Stereo Enhancement toolbar. The Stereo Analyst toolbar is the main toolbar and provides
access to importers and exporters as well as preference settings. The Stereo View toolbar provides tools to manipulate data in the Stereo
Window. The Stereo Enhancement toolbar controls the operation of image enhancement in the Stereo Window.
The Stereo Window, which is the middle window in the above picture, can be docked inside the ArcMap application, or can be undocked.
You also have the option of embedding the Stereo Window within
the ArcMap window. The Stereo Window is an extension to the
ArcMap window and is used as the workspace for updating and
collecting new feature datasets. Work conducted in the Stereo
Window is simultaneously reflected in the ArcMap data view.
What can you do with ArcCatalog?

Finding answers to questions

This book describes the typical workflow involved in creating and updating GIS data for mapping projects. The chapters are set up so that you first learn the theory behind certain applications, then you are introduced to the typical workflow you'd apply to get the results you want. A glossary is provided to help you understand any terms you haven't seen before.

Getting help on your computer

Useful information can be found in the On-line Help system. Consult it to learn how to use Stereo Analyst for ArcGIS. To learn how to use Help, see the Using ArcMap book.

Leica Geosystems GIS & Mapping Education Solutions

Leica Geosystems offers instructor-based training about Stereo Analyst for ArcGIS. For more information, go to the Web site <www.gis.leica-geosystems.com> and follow the Training link to Training Centers, Course Schedules, and Course Registration.

ESRI education solutions

ESRI provides educational opportunities related to GISs, GIS applications, and technology. You can choose among instructor-led courses, Web-based courses, and self-study workbooks to find educational solutions that fit your learning style and pocketbook. For more information, visit the Web site <www.esri.com/education>.
Finally, in “Exercise 5: Editing existing features” on page 58, you’ll learn how to
update existing features.
To start this tutorial, you must have Stereo Analyst for ArcGIS and ArcGIS
installed on your system. Also, you must have access to the tutorial data that
accompanies the installation CD. Ask your administrator for the location of the
tutorial data if you can’t find it in the default installation directory.
Exercise 1: Starting Stereo Analyst for ArcGIS
In the following exercises, we've assumed that you are using a single-monitor workstation that is configured for use with ArcMap and Stereo Analyst for ArcGIS.

If you have a dual-monitor configuration, you may spread out the applications so that the ArcMap application is displayed on one monitor and the Stereo Window and the Stereo Analyst for ArcGIS toolbars are displayed on the other monitor. This type of setup is ideal for productive feature collection.

In this scenario, the ArcMap display serves as the cartographic station for verifying features that have been collected or edited, and the Stereo Window display serves as the main focus for collecting and editing feature datasets.

In this exercise, you'll learn how to start Stereo Analyst for ArcGIS and display all of the toolbars associated with Stereo Analyst for ArcGIS. You'll be able to use the toolbars to gain access to all of the key functionality in Stereo Analyst for ArcGIS.

Preparing

This exercise assumes that you have already successfully completed installing Stereo Analyst for ArcGIS on your computer. If you haven't installed Stereo Analyst for ArcGIS, do so now.

Starting ArcMap

1. Click the Start button on your desktop, then point to Programs, then point to ArcGIS.
2. Click ArcMap to start the application.

Adding the Stereo Analyst for ArcGIS extension

1. If the ArcMap dialog opens, keep the option to create a new map, then click OK.
2. In the ArcMap window, click the Tools menu, then click Extensions.
QUICK-START TUTORIAL 15
With the Stereo View toolbar, you can enable many of
the special Stereo Analyst for ArcGIS modes, such as
the Terrain Following Mode and Fixed Cursor Mode.
Additionally, you can use the Continuous Zoom Mode
and the image Roam Tool to adjust the extent of the
image pair display in the Stereo Window.
Other tools allow you to synchronize the ArcMap and Stereo Window displays, input coordinates to drive to a specific location, reverse the left and right images, and update the feature display.
Exercise 2: Adding oriented images

In this exercise, you'll learn how to add multiple oriented images (rasters) to ArcMap and the Stereo Window. The oriented images you'll be using were created by importing data from the SOCET SET® digital photogrammetry software product.

Using overlapping, GIS-ready images

2. In the Add Data dialog, navigate to the folder called \ArcTutor\StereoAnalyst\Images.
3. Shift-click to select the images named strip1_1.img and strip2_2.img. This selects all of the raster images in the list.
4. Click Add to add the images to the ArcMap data view.

You get this message if the images lack a projection, as in the case of these images.

A progress meter displays to show the status of the pyramid layer generation.
Once you’ve added the rasters, you can see that the data
view shows two overlapping strips of photography, and
each strip contains two overlapping photographs. The
aircraft that recorded the imagery flew south-east to
north-west, and as a result the images appear diagonal
within ArcMap.
Changing the ArcMap display
It is often helpful to have the orientation of the display in
ArcMap match that of the display in the Stereo Window. To
ensure that this is always the case within the same ArcMap
session, you can set an option on the Stereo Analyst Options
dialog.
1. On the Stereo Analyst toolbar, click the Stereo Analyst
dropdown list, then click Options.
3. Move your cursor into the data view and position it over the original area of overlap. Notice that the area becomes highlighted in yellow as you move your cursor inside the overlap area.
4. Click to select the overlap area of strip2_1.img/strip2_2.img and make it the active image pair.

Working with the Stereo Window

The Stereo Window is where the image pair displays to recreate a 3D digital representation of the specific area of interest as recorded in the oriented images. You'll learn how to open it next.

Using graphics cards

If your computer system doesn't have a graphics card that supports quad-buffered stereo, Stereo Analyst for ArcGIS uses anaglyph rendering techniques to recreate and display the 3D digital representation of the area of interest. In this case, you need red/blue anaglyph glasses to view the image pairs in 3D.
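Anaglyph rendering can be illustrated in a few lines of Python: the left image feeds the red channel and the right image the blue channel, so the glasses route a different image to each eye. This is a conceptual sketch, not Stereo Analyst's renderer; the tiny grayscale images are invented.

```python
# Sketch: compose a red/blue anaglyph. Each output pixel takes its red
# channel from the left image and its blue channel from the right image,
# so red/blue glasses send one image to each eye. The 2x2 grayscale
# "images" are invented for illustration.

def anaglyph(left, right):
    """left/right: 2D lists of grayscale values -> 2D list of (R, G, B)."""
    return [[(left[r][c], 0, right[r][c]) for c in range(len(left[0]))]
            for r in range(len(left))]

left_img = [[100, 120], [140, 160]]
right_img = [[90, 110], [130, 150]]
rgb = anaglyph(left_img, right_img)
```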
The cursor in the Stereo Window is called the 3D Floating Cursor because it can float on, below, or above a feature.

By adjusting the Z thumb wheel, the height of the 3D Floating Cursor is modified via the movement of the images of the image pair. You can see the elevation change in the coordinates of the location of the 3D Floating Cursor, which are displayed in the lower-left status bar located within the Stereo Window.

5. Click the Fixed Cursor Mode button again to exit that mode.

When you have exited Fixed Cursor Mode, the button no longer appears recessed on the Stereo View toolbar.

Adjusting the zoom ratio

1. Click the Zoom to Data Extent button to see the entire extent of the image pair displayed in the Stereo Window.
2. On the Stereo View toolbar, click the Zoom In By 2 button a number of times until you can comfortably see features on the earth's surface displayed in 3D within the Stereo Window.

If you are viewing in anaglyph mode, the left image of the image pair appears red for nonoverlap areas; the right image appears blue for nonoverlap areas; and the overlap area is grey. If you are viewing in quad-buffered stereo, the entire area appears grey, but you can see in 3D only in the overlap area.

3. Click the Roam Tool button, then move your cursor (which appears as a hand) into the Stereo Window and double-click to activate Auto Roam Mode.
4. The hand changes into an arrow in Auto Roam Mode. Move your mouse in any direction to adjust the image pair's position in the Stereo Window.
The arrow's proximity to the center of the Stereo Window determines the speed of the Auto Roam Mode. If you are close to the center of the Stereo Window, the speed is slow; if you are close to the edges of the Stereo Window, the speed is fast.

5. Once you find an area that interests you, double-click in the Stereo Window again to return to normal Roam Mode.

Notice that the 3D Floating Cursor recenters in the middle of the Stereo Window when you exit Auto Roam Mode.

Adjusting brightness and contrast

You can adjust the brightness and contrast of the image pair displayed in the Stereo Window to suit your viewing and feature collection needs. By default, both images are adjusted together; however, you can also adjust each image separately by changing the setting in the Adjust dropdown list.

1. On the Stereo Enhancement toolbar, make sure that the Adjust dropdown list shows Both Images.

The following picture shows the image pair with decreased brightness.
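The centre-proximity behaviour of Auto Roam Mode can be sketched as a simple speed function. This is an illustration of the behaviour described above, not the actual implementation; the linear scaling and the window size are assumptions.

```python
# Sketch: Auto Roam speed grows with the cursor's distance from the
# window centre. Illustration only -- the linear scaling and window
# size are assumptions, not the actual implementation.

def roam_speed(cursor_xy, window_wh, max_speed=100.0):
    cx, cy = window_wh[0] / 2.0, window_wh[1] / 2.0
    dx, dy = cursor_xy[0] - cx, cursor_xy[1] - cy
    dist = (dx * dx + dy * dy) ** 0.5
    max_dist = (cx * cx + cy * cy) ** 0.5   # centre-to-corner distance
    return max_speed * dist / max_dist

slow = roam_speed((410, 310), (800, 600))   # near the centre
fast = roam_speed((790, 590), (800, 600))   # near a corner
```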
6. Close the Stereo Window by clicking the Close button in
the top right corner of the window.
Exercise 3: Converting features—3D to 2D and 2D to 3D
3. In the Input Features dialog, navigate to the folder named \ArcTutor\StereoAnalyst\Geodatabase.
4. Click to select the geodatabase file named sampleAltdorfFME.mdb.
5. Click Open on the Input Features dialog to add the feature classes in the geodatabase to the Convert Features to 2D dialog.

Selecting feature classes to make 2D

1. Scroll to the top of the Select classes window, then position your cursor inside the window and click to select the feature class named CONTOUR_INDEX.
2. Hold the Ctrl key on the keyboard and click to select the classes: CONTOUR_INTERMEDIATE, FREEWAY, HOUSE, PAVED_ROAD, RAILROAD, RIVER, and SPOTHEIGHT.
Confirming features are 2D

You can use the Virtual 2D To 3D tab of the Stereo Analyst Options dialog to determine whether or not features are 2D. 3D features are not eligible for use in Virtual 2D To 3D and will not display on the Virtual 2D To 3D tab of the Stereo Analyst Options dialog. Since this tool is only operational with 2D features, you'll use it in this case to confirm that the features were converted to 2D.

For more information, see "Using Virtual 2D To 3D" on page 89 in chapter 4, "Working with 3D data".

1. From the Stereo Analyst toolbar, click the Stereo Analyst dropdown list and choose Options.
5. Click Open on the Input Features dialog to add the feature classes in the geodatabase to the Convert Features to 3D dialog.

Selecting feature classes to make 3D

By selecting a geodatabase, all the feature classes associated with the geodatabase are automatically listed in the Select classes list of the Convert Features to 3D dialog.

1. Notice that all of the classes you chose to convert from 3D to 2D in the previous section of this exercise are listed in the Select classes window. Those classes include: CONTOUR_INDEX, CONTOUR_INTERMEDIATE, FREEWAY, HOUSE, PAVED_ROAD, RAILROAD, RIVER, and SPOTHEIGHT.
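Conceptually, a 2D-to-3D conversion gives each 2D vertex a Z value taken from the raster DEM chosen as the elevation source. The sketch below illustrates the idea with a nearest-cell lookup; the sampling strategy, the tiny DEM, and the coordinates are all invented for illustration, not Stereo Analyst's internal method.

```python
# Sketch: the idea behind a 2D -> 3D conversion -- give each 2D vertex
# a Z value sampled from the raster DEM chosen as the elevation source.
# Nearest-cell lookup is used here for simplicity; the DEM and the
# coordinates are made up.

def sample_dem(dem, xmin, ymax, cell, x, y):
    """Nearest-cell lookup in a row-major DEM whose first row sits at ymax."""
    col = int((x - xmin) / cell)
    row = int((ymax - y) / cell)
    return dem[row][col]

def to_3d(vertices_2d, dem, xmin, ymax, cell):
    return [(x, y, sample_dem(dem, xmin, ymax, cell, x, y))
            for (x, y) in vertices_2d]

dem = [[454.0, 455.0],
       [456.0, 457.0]]                 # 2 x 2 grid of 10 m cells
road_2d = [(1.0, 19.0), (11.0, 9.0)]   # 2D vertices
road_3d = to_3d(road_2d, dem, xmin=0.0, ymax=20.0, cell=10.0)
```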
2. Make sure that all of the classes are selected (highlighted) in the list.
Ensuring accuracy

You might want to use the Source tab of the Table of contents to confirm which feature classes are the 2D feature classes, and which are the 3D feature classes.

Viewing 3D features in the Stereo Window

Now that you've successfully updated the feature datasets using a raster DEM as your elevation source, you can view the features in the 3D representation of the area, which is created using an image pair.

1. On the Stereo Analyst toolbar, click the Stereo Window button to open a Stereo Window.
All of the features you chose in the Convert Features to 3D dialog display in the ArcMap data view and the Stereo Window.

Aligning features

The features may not entirely align with the oriented images in ArcMap. This is common since the raw pixels in the oriented images have not been transformed and then projected to create a new raster dataset. This process is commonly referred to as orthorectification.

Viewing the same features and oriented images in the Stereo Window yields better results since Stereo Analyst for ArcGIS resamples raw pixels on the fly. To learn more about this, refer to "Applying epipolar correction" on page 124.
Entering and exiting Auto Roam Mode
Exercise 4: Collecting features in 3D
To collect features in Stereo Analyst for ArcGIS, you use tools in the ArcMap Editor you're probably already familiar with. You can use these tools to collect features in the Stereo Window—the difference, of course, is that you're collecting features in 3D.

Collecting features in 3D can be made easier by using the 2-Pane View of the Stereo Window. This method is described in detail in this exercise.

Preparing

If you're continuing the tutorial from "Exercise 3: Converting features—3D to 2D and 2D to 3D", proceed to the section named "Accessing device settings" on page 42.

If you're starting from scratch, you should have both ArcMap and Stereo Analyst for ArcGIS running on your machine. You should have an empty data view and Stereo Window, and the Stereo Analyst and Stereo View toolbars displayed. Proceed to "Adding images" below.

Adding images

1. Click the Add Data button to select the rasters.
2. In the Add Data dialog, navigate to the folder named \ArcTutor\StereoAnalyst\Images.
3. Ctrl-click to select the images named strip2_1.img and strip2_2.img.
4. Click Add to load the rasters in the Stereo Window and ArcMap.
5. If necessary, click OK on the dialog alerting you about the absence of spatial reference information.

If the display of the background values in ArcMap bothers you, refer to "Changing properties" on page 20 for instructions about how to fix the images' display.

Enhancing performance

If you want to enhance the performance of ArcMap, you can turn off the display of the oriented images in the ArcMap Table of contents. The footprint and overlap extents remain displayed. You can also do this via the Stereo Analyst Options. Click the Stereo Analyst dropdown list and choose Options. Click the ArcMap Display tab, then click to select the Footprints of oriented images option, then click OK.

Accessing device settings

To configure the system mouse, you need to access the Devices dialog.

1. On the Stereo Analyst toolbar, click the Stereo Analyst dropdown list.
2. Click the Devices option.
Controlling Z movement
6. Click Close on the Devices dialog.

Restoring default settings

The default settings files for the system mouse and all other supported devices are contained in the following location: \arcexe83\Raster\ButtonMappings\StereoAnalyst. You can also click Reset All on the corresponding Button Mapping dialog to return to the original, default settings.

Changing the 3D Floating Cursor

You can easily change the way the 3D Floating Cursor looks in the Stereo Window. This may make features easier to collect.

1. On the Stereo Analyst toolbar, click the Stereo Analyst dropdown list and choose Options.
2. Click the 3D Floating Cursor tab of the Stereo Analyst Options dialog.
3. Click the Cursor color dropdown list and choose a color other than white, which is the default, for the 3D Floating Cursor.
4. Click the up arrow to increase the Line width of the 3D Floating Cursor, which is measured in pixels (points), to 6.00.
5. Click the Cursor shape dropdown list and choose another 3D Floating Cursor shape from the list, such as Open X with dot.
6. Click Apply on the 3D Floating Cursor tab of the Stereo Analyst Options dialog.
7. Click OK to close the Stereo Analyst Options dialog.

Changing 3D Floating Cursor shapes

For more information about 3D Floating Cursor shapes, please refer to "Selecting 3D Floating Cursor options" on page 136 in chapter 6, "Applying the 3D Floating Cursor".

Adding feature classes

In this exercise, you'll learn some of the common techniques used in the collection of features. First, you add the feature classes.

1. On the ArcMap toolbar, click the Add Data button.
2. In the Add Data dialog, navigate to the \ArcTutor\StereoAnalyst\Geodatabase folder.
3. Double-click the file sampleAltdorfFME.mdb.
4. Ctrl-click to select the datasets named Buildings, Ground Points, Hydrography, and Transportation.

Setting up the Stereo Window

1. If the Stereo Window is not in 3-Pane View, click the 3-Pane View button at the bottom of the Stereo Window.
Collecting a polygon feature

Polygon features are created by collecting a number of vertices, which are eventually closed to create a shape. For example, a rectangular building may be represented by four connected vertices.

Locating the polygon feature

The easiest way to locate the first polygon you'll be digitizing is to use the 3D Position Tool. The coordinates you're going to enter correspond to the image pair strip2_1.img/strip2_2.img, so make sure it is active.

1. On the Stereo View toolbar, click the 3D Position Tool button.
5. On the Editor toolbar, click the Sketch Tool button.

7. Adjust the scroll wheel on the mouse up and down until the 3D Floating Cursor appears to rest on the same portion of the building in the 2-Pane View (the roof of which is approximately 456 meters).

If you are new to working in stereo, you may want to use the 2-Pane View frequently. It shows the individual left and right images of the image pair. When the position of the 3D Floating Cursor in the left pane matches the identical position in the right pane, you are at the correct X, Y, Z location.
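Matching the left- and right-pane positions is, in photogrammetric terms, removing x-parallax. The standard textbook relationship between a parallax difference and height illustrates why this matching pins down the Z value. This is general photogrammetry, not Stereo Analyst's internal computation, and the numbers below are invented.

```python
# Sketch: the textbook parallax-height relationship,
#     h = H * dp / (b + dp),
# linking a measured x-parallax difference dp to the height h above the
# reference point, given flying height H and photo base b. Standard
# photogrammetry; the values are invented for illustration.

def height_from_parallax(flying_height, photo_base, dp):
    """dp and photo_base in the same units; result in flying-height units."""
    return flying_height * dp / (photo_base + dp)

# A 2 mm parallax difference at 1500 m flying height, 90 mm photo base:
h = height_from_parallax(1500.0, 90.0, 2.0)
```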
Locating the polygon feature

1. In the X window of the 3D Position Tool dialog, type the X coordinate 691119.9.
2. Type the Y coordinate 191376.2. You don't need to enter a value in the Z window.
3. Click Apply on the 3D Position Tool dialog.

Collecting the polygon feature

1. Move the scroll wheel on the mouse up and down until the roof overlaps in both the left and right image of the image pair, and the 3D Floating Cursor appears to rest on the same portion of the roof in the 2-Pane View. The building's roof is approximately 457 meters.
2. Click to select vertices corresponding to the corners of the building.
3. Double-click to finish collecting the building (or press F2 on the keyboard).
Locating the polyline feature

1. In the X window of the 3D Position Tool dialog, type the X coordinate 691265.5.
2. Type the Y coordinate 191653.8. You don't need to enter a value in the Z window.
3. Click Apply on the 3D Position Tool dialog.

2. Move your cursor into the Stereo Window and click, then press the F3 key on the keyboard to activate the 3D Floating Cursor.
3. Move in X and Y to the part of the road just within the image pair overlap boundary.
4. Notice the current elevation of the 3D Floating Cursor, which displays at the bottom of the Stereo Window.

Using Snap To Ground moves the elevation of the 3D Floating Cursor to the ground level elevation (approximately 454 meters). This elevation is obtained from the elevation data source you specified on the Terrain Following Cursor tab of the Stereo Analyst Options dialog.
If you want to see the new feature in the ArcMap data view, you can click the Synchronize Geographic Displays button on the Stereo View toolbar.

Collecting point features

You collect point features, each of which has an X, Y, and Z coordinate, with a single click. You can use keyboard shortcuts and other tools you've learned about so far in the collection of point features.

Locating the area for point features

1. In the X window of the 3D Position Tool dialog, type the X coordinate 691350.4.

2. On the Editor toolbar, click the Target dropdown list and choose SPOTHEIGHT.
3. Make sure the Manually Toggle 3D Floating Cursor button is selected.
Exercise 5: Editing existing features
Feature editing in Stereo Analyst for ArcGIS differs from traditional ArcMap feature editing by operating in 3D. In Stereo Analyst for ArcGIS, existing features can be edited using a 3D digital representation of the earth's surface (created using overlapping, oriented images) as a reference backdrop for updating the existing dataset.

While it is still possible to edit features solely in X and Y, you can no longer ignore elevation. All data collected and edited in Stereo Analyst for ArcGIS is 3D. In Stereo Analyst for ArcGIS, each vertex of every feature has an X, Y, and Z (elevation) value associated with it.

This section focuses on editing polygon features, which involves editing vertices associated with a polygon and moving an entire polygon. The same editing procedures in Stereo Analyst for ArcGIS can be used regardless of the type of feature data (point, line, or polygon).

Preparing

This exercise assumes you're using a standard computer mouse (with a scroll wheel).

If you're continuing the tutorial from "Exercise 4: Collecting features in 3D", then you can proceed to "Adjusting a polygon feature" on page 58.

If you're starting this exercise from scratch, you should have an empty ArcMap data view and Stereo Window displayed. Also, you should have the Stereo Analyst, Stereo View, and Editor toolbars displayed. Then, proceed to "Adding images" below.

Adding images

1. Click the Add Data button to select the rasters.
2. In the Add Data dialog, navigate to the folder named \ArcTutor\StereoAnalyst\Images.
3. Ctrl-click to select the images named strip2_1.img and strip2_2.img.
4. Click Add to load the rasters in the Stereo Window and ArcMap.
5. If necessary, click OK on the dialog alerting you about the absence of spatial reference information.

If the display of the background values in ArcMap bothers you, refer to "Changing properties" on page 20 for instructions about how to fix the images' display.

Adding feature data

1. On the ArcMap toolbar, click the Add Data button.
2. In the Add Data dialog, navigate to the \ArcTutor\StereoAnalyst\Geodatabase folder.
3. Double-click the file sampleAltdorfFME.mdb.
4. Click to select the layer named Buildings. The Buildings feature dataset has the layers HOUSE, HOUSE_EXTENSION, and STORAGE_TANKS.
5. Click Add on the Add Data dialog.

Adjusting a polygon feature

Locating an existing polygon feature

The first feature to be edited is in the HOUSE feature category in image pair strip2_1.img/strip2_2.img, so make sure it's active. You'll use the 3D Position Tool to find the first feature quickly.

1. On the Stereo View toolbar, click the Scale dropdown list and select 150%.
QUICK-START TUTORIAL 59
Orienting the ArcMap display

To orient the ArcMap display to match the display in the Stereo Window, select the Stereo Analyst dropdown menu on the Stereo Analyst toolbar, then click Options. On the ArcMap Display tab, click the check box for Orient ArcMap document to Image Pair when Image Pair changes. Then click OK.

Preparing to adjust the polygon feature

1. Click the Editor dropdown list and choose Start Editing.
2. On the Editor toolbar, set the Task to Modify Feature.
3. On the Editor toolbar, set the Target to the HOUSE layer.
6. Click inside the Stereo Window, then press F3 on the keyboard to toggle on the 3D Floating Cursor. Remember that, while in the Stereo Window, this cursor is a 3D cursor that no longer functions as a regular 2D Windows cursor.
7. Press the "c" key on the keyboard to enter Fixed Cursor Mode.
8. Using the scroll wheel on the mouse, adjust parallax so that the 3D Floating Cursor is at the same elevation as the roof of the building (approximately 463 meters).
9. Press "c" again to exit Fixed Cursor Mode after you have removed parallax.
The following series of steps makes use of the 2-Pane View to ensure that the 3D Floating Cursor is located at the same position in both the left and right images.
Preparing to move the polygon feature

1. On the Editor toolbar, click the Task dropdown list and select Reshape Feature.
2. On the Editor toolbar, confirm that the Target layer is still HOUSE.
4. Click and hold the left mouse button, and move the polygon to the correct building location in the X and Y direction.
   Again, it is easiest to judge the correct location by looking at the corners of the building in the 2-Pane View.
   Since you already adjusted the 3D Floating Cursor elevation to the building's roof, there is no need to make further adjustments in Z.
What’s next?
This tutorial has introduced you to some of the basic
functions you can perform using Stereo Analyst for ArcGIS.
The following chapters go into more detail about each
element of the Stereo Analyst for ArcGIS suite of tools, and
include instructions on how to use them to your advantage.
Section 2
68 USING STEREO ANALYST FOR ARCGIS
3 Working with oriented images

IN THIS CHAPTER

• Creating oriented images
• Using IMAGINE OrthoBASE to create oriented images

Oriented images serve the most important role in collecting accurate and reliable information from imagery. In this chapter, you'll learn about oriented images: where they come from, how to create them, and how to import them into the ArcGIS environment.
Creating oriented images
To understand an oriented image, it is helpful to look at the process
used to create one.
This is the same relative area as depicted in the previous illustration, with
only feature outlines displayed.
Defining an oriented image
A GIS serves as a container for the feature datasets that have been extracted from oriented images. A GIS also maintains all of the
relationships, processes, and information associated with a feature dataset.
By tracing the ancestry of spatial information, it is evident that the reliability of information in a GIS is dependent on the accuracy of feature
data derived from oriented imagery. The following example illustrates the ancestry of information derived from imagery used to assess
what impact a residential housing development may have on a watershed drainage system.
5. Performing image-to-earth association (aerial triangulation).

Once the image-to-earth association has been completed, the information required to create an oriented image is available. IMAGINE OrthoBASE allows you to create one oriented image at a time, or multiple oriented images simultaneously.

You have the option of creating one oriented image at a time or creating multiple oriented images simultaneously. To create multiple oriented images simultaneously, select the OK to all option. Note that at least two overlapping oriented images are required in order to perform feature collection and editing using Stereo Analyst for ArcGIS.
The GeoCorrection tool in Image Analysis for ArcGIS allows you to create
oriented images that can be used by Stereo Analyst for ArcGIS.
Stereo Analyst for ArcGIS initially reads the block file to determine
the location of the related images for importing. If the images are in
the location specified in the block file, they can be oriented and
imported immediately.
If the files are not located in the directory specified by the block
file, Stereo Analyst for ArcGIS looks in the same directory as the
block file. If the images are not in that location, you’ll be prompted
to locate them with the following dialog.
This is the Import IMAGINE OrthoBASE Block File dialog.
The import process opens the block file, identifies the images
referenced in the block file, verifies that the images are located in
the directory specified by the block file, and then associates the
intelligent metadata to the original images. Once this process has
been completed, the oriented images can be added to ArcMap for
use by Stereo Analyst for ArcGIS.
Use this dialog to create a new block with the correct file location.
You can import and orient block files derived from frame
camera images, digital camera images, and sensors into
Stereo Analyst for ArcGIS using the IMAGINE OrthoBASE
block file importer.
1. On the Stereo Analyst toolbar, click the Stereo Analyst
dropdown list and choose Import IMAGINE OrthoBASE
Block File.
2. On the Import IMAGINE OrthoBASE Block File dialog,
click the Open button and navigate to the directory
containing the block file you want to import.
The Select images to orient window displays each image
in the block file. You must select the images you want to
orient. If you do not select any images, no importing
occurs.
3. Use the Shift and/or Ctrl keys on the keyboard to select the files you want to orient and import from the IMAGINE OrthoBASE block file. You don't have to select all of the images in the block file.
4. If you would like to view the images immediately, make
sure the Add imported oriented images to ArcMap
document check box is checked.
5. Click OK to start the import process.
The imported, oriented images display in the ArcMap data view and the Stereo Window.
This is the Import SOCET SET Project File import dialog.

A SOCET SET® project file (.prj) contains general project information associated with a photogrammetric mapping project. This file is an American Standard Code for Information Interchange (ASCII) file that is used as part of the import process.

The project file contains general mapping information associated with a project, such as the projection, the units used, and so on. The .prj file is what you select for import.

A SOCET SET® project file, altdorf.prj, is included with the example data that comes with Stereo Analyst for ArcGIS. You can find it in the directory \ArcTutor\StereoAnalyst\SocetSet. Note that if you load the project and support example data in a place other than the default, C:\arcgis\ArcTutor\StereoAnalyst\SocetSet, you will have to manually edit the SOCET SET® project and support files.

You can import and orient SOCET SET® projects into Stereo Analyst for ArcGIS using the SOCET SET® importer.

The following is an example of a SOCET SET® project file:

PROJECT_FILE f
DATA_PATH D:\Data\StereoAnalyst\Socetset\escon_demo
COORD_SYS 6
XY_UNITS 1
Z_UNITS 1
MINIMUM_X_OR_LAT 0.00000000000000e+000
MINIMUM_Y_OR_LON 0.00000000000000e+000
MINIMUM_Z 2.00000000000000e+002
MAXIMUM_X_OR_LAT 0.00000000000000e+000
MAXIMUM_Y_OR_LON 0.00000000000000e+000
MAXIMUM_Z 4.00000000000000e+002
GP_ORIGIN_Y 0.00000000000000e+000
GP_ORIGIN_X 0.00000000000000e+000
GP_ORIGIN_Z 0.00000000000000e+000
GP_SCALE_Y 1.00000000000000e+000
GP_SCALE_X 1.00000000000000e+000
GP_SCALE_Z 1.00000000000000e+000
ELLIPSOID WGS_84
VERTICAL_REFERENCE 0
A_EARTH 6.37813700000000e+006
E_EARTH 8.18191912720360e-002
ELLIPSOID_CENTER 0.00000000000000e+000
0.00000000000000e+000
0.00000000000000e+000
PROJECTION_TYPE UTM_PROJECTION
ZONE 11
FALSE_NORTHING_POS 0.00000000000000e+000
FALSE_NORTHING_NEG 0.00000000000000e+000
FALSE_EASTING_POS 5.00000000000000e+005
FALSE_EASTING_NEG 5.00000000000000e+005
GRID_NAME UTM_11N
IMAGE_LOCATION escon
Comparing 3D features and 3D models
Characterizing 3D features

A 3D feature can be a 3D point, 3D line, or a 3D polygon. A 3D feature has an X, Y, and Z coordinate associated with each vertex of that feature. The Z coordinate is the elevation value of that vertex. For example, a vertex corresponding to the corner of a house may have the X, Y, and Z coordinate values of 691402.4, 191111.6, and 466.6, respectively.

Characterizing 3D models

A 3D model not only has 3D coordinates in X, Y, and Z, but it also has volumetric information. The following is an example of a scene with many 3D building models (shown in grey, orange, and yellow).
The Virtual 2D To 3D capability temporarily transforms a dataset to 3D so that it can be superimposed on the 3D digital earth's surface displayed in the Stereo Window. This is achieved by referencing a user-defined elevation source at a particular X, Y location for Z coordinate information. The X, Y location of the vertex is obtained from the original feature dataset.

The Virtual 2D To 3D function does not create a new feature dataset. Rather, it simply references and queries an elevation source for Z coordinate information and then associates that information with each vertex in the feature dataset. This Virtual 2D To 3D process only occurs when the feature dataset is being displayed in the Stereo Window. Once all edits have been made and saved, only 2D (X and Y) coordinate information is written back to the original feature dataset.

To successfully perform the virtual conversion of a dataset from 2D to 3D, you need (1) a list of feature classes for conversion and (2) an elevation source, such as a constant elevation value or an external elevation file.

Understanding how it works

Once you define an input feature dataset and an elevation source, the Z coordinate associated with each vertex in the feature layer is assigned the elevation value located within the corresponding elevation source. The supported elevation sources include a constant elevation value, a DEM, and ESRI-type triangulated irregular network (TIN) files.

The conversion of the data to 3D is performed virtually; that is, your data is not actually edited. Stereo Analyst for ArcGIS simply uses elevation information contained in a DTM file or a constant elevation to project your feature datasets in 3D. The initialized Z value is for viewing purposes only. The feature data you display in the Stereo Window can be edited, but only X and Y information is saved.

If you want the elevation information associated with a feature dataset to be retained, use the Features to 3D option on the Stereo Analyst toolbar. Refer to "Using the 2D to 3D converter" on page 93 for more information about this capability.
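The Virtual 2D To 3D lookup described above can be sketched in a few lines. This is a hypothetical illustration, not Stereo Analyst code: the DEM is modeled as a plain row-major grid, and the helper names `sample_dem` and `virtual_2d_to_3d` are invented here.

```python
# Hypothetical sketch of the Virtual 2D To 3D idea: each 2D vertex keeps its
# X, Y from the feature dataset and borrows a Z from an elevation source.
# The original data is never modified; a new (x, y, z) view is built instead.

def sample_dem(dem, origin_x, origin_y, cell_size, x, y):
    """Return the elevation of the DEM cell containing (x, y)."""
    col = int((x - origin_x) / cell_size)
    row = int((origin_y - y) / cell_size)  # rows count down from the top edge
    return dem[row][col]

def virtual_2d_to_3d(vertices, dem, origin_x, origin_y, cell_size):
    """Attach a Z to each (x, y) vertex without altering the source data."""
    return [(x, y, sample_dem(dem, origin_x, origin_y, cell_size, x, y))
            for x, y in vertices]

# A 2 x 2 DEM whose top-left corner sits at (1000, 2000), with 10 m cells.
dem = [[450.0, 452.0],
       [449.0, 451.0]]
print(virtual_2d_to_3d([(1005.0, 1995.0), (1015.0, 1985.0)],
                       dem, 1000.0, 2000.0, 10.0))
```

The same pattern applies when the elevation source is a constant value: every vertex simply receives that single Z.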
You can only use ESRI-type TIN files with Stereo Analyst for ArcGIS. TINs generated in other applications, such as IMAGINE OrthoBASE Pro, cannot be used by Stereo Analyst for ArcGIS.

The Selected features list shows all of the current 2D feature layers in the ArcMap Table of contents. If multiple feature datasets have been added to ArcMap, all of the 2D feature classes associated with the feature datasets are shown in the list.
The output dataset is placed in the same folder as the input dataset
unless you specify otherwise. The output file is given the
designation “_3D” to distinguish it from the input file.
In the diagram on page 96, the points reflect the vertices associated
with the features. The bottom layer is the original feature dataset
assuming a zero (sea level) elevation is applied to the feature
dataset. The top layer illustrates an elevation source applied to the
original feature dataset.
Figure: original points and interpolated points draped on the terrain surface.
Point spacing is the distance between points used (sampled) during the interpolation process. The distance between the points is measured
in the same units as the image pair displayed in the Stereo Window. The distance you specify in the Point spacing window is not exceeded
when points are selected for interpolation.
Figure: original and interpolated points at a point spacing of 20 versus a point spacing of 10.
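The point-spacing rule described above (the specified distance is an upper bound that is never exceeded) can be sketched as follows; `densify` is an illustrative helper, not the product's actual sampling algorithm.

```python
# Illustrative sketch of densifying a line segment so that the sampled points
# are never farther apart than the specified point spacing.
import math

def densify(p0, p1, spacing):
    """Return points from p0 to p1 whose spacing never exceeds `spacing`."""
    length = math.dist(p0, p1)
    n = max(1, math.ceil(length / spacing))  # intervals needed to stay under the bound
    return [(p0[0] + (p1[0] - p0[0]) * i / n,
             p0[1] + (p1[1] - p0[1]) * i / n) for i in range(n + 1)]

pts = densify((0.0, 0.0), (100.0, 0.0), 20.0)  # 6 points, 20 map units apart
print(len(pts))
pts = densify((0.0, 0.0), (100.0, 0.0), 10.0)  # 11 points, 10 map units apart
print(len(pts))
```

Halving the point spacing roughly doubles the number of interpolated points, which is the trade-off the two diagrams above illustrate.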
The line thinning tolerance is only active when the Drape linear features on the terrain surface option is active. This option removes redundant points contained within the feature dataset based on a thinning tolerance that you define. It's useful when the variation in topography is minimal.

By setting a thinning tolerance, Stereo Analyst for ArcGIS checks to make sure that there aren't any duplicate points in collinear sections. If you don't want thinning, simply set the value to 0.

In the following diagrams, the green circle represents the current point, the black circles represent adjacent points, and the red line terminating in an arrow represents the thinning tolerance.

Here, the point is inside the thinning tolerance and will be eliminated.

A planar feature is a feature in which all vertices associated with the feature have the same elevation. These are commonly flat features such as building roofs.

In the Planar Features section of the Feature to 3D Options dialog, you can select certain classes to which a single elevation value is applied to all features contained within that feature class. For example, if a building feature is converted to 3D, you may want to constrain the building polygon to be flat so that all vertices associated with the polygon have the same elevation value.

The elevation value applied to each vertex of a particular feature can be determined in several ways. In the Elevation dropdown list of the Planar Features section, you can select one of the following techniques for computing the elevation value: At centroid, Minimum interpolated, Maximum interpolated, and Average interpolated.
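The line-thinning check described earlier can be sketched as a perpendicular-offset test: a vertex whose offset falls within the tolerance is eliminated, and a tolerance of 0 disables thinning. The helpers below are hypothetical, not the shipped implementation.

```python
# A sketch of the collinear-thinning idea: an interior vertex is dropped when
# its perpendicular offset from the line through the last kept vertex and the
# next vertex falls within the thinning tolerance.

def offset(a, b, c):
    """Perpendicular distance of point b from the line through a and c."""
    (x0, y0), (x1, y1), (x2, y2) = a, b, c
    num = abs((x2 - x0) * (y0 - y1) - (x0 - x1) * (y2 - y0))
    den = ((x2 - x0) ** 2 + (y2 - y0) ** 2) ** 0.5
    return num / den if den else 0.0

def thin(vertices, tolerance):
    """Drop interior vertices lying within the thinning tolerance."""
    if tolerance == 0:
        return list(vertices)          # thinning disabled
    kept = [vertices[0]]
    for i in range(1, len(vertices) - 1):
        if offset(kept[-1], vertices[i], vertices[i + 1]) > tolerance:
            kept.append(vertices[i])
    kept.append(vertices[-1])
    return kept

line = [(0.0, 0.0), (5.0, 0.2), (10.0, 0.0)]   # middle vertex nearly collinear
print(thin(line, 0.5))   # middle vertex eliminated
print(thin(line, 0.0))   # thinning disabled; all vertices kept
```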
The At centroid option takes the elevation from the physical center of the feature, the centroid. For example, in a polygon, the center pixel is used for the elevation value. The following illustration shows the At centroid option.

If you select the Maximum interpolated option, elevations are interpolated for each vertex making up the feature, then the largest value is used to assign the elevation to the feature. The following illustration shows the Maximum interpolated value.

Figure: Z values assigned using the At centroid option, and the maximum, average, and minimum interpolated values.
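The interpolated-elevation options for planar features amount to simple aggregations over the per-vertex elevations; the `planar_elevation` helper and its inputs below are illustrative only (the At centroid option would instead sample a single value at the feature's center).

```python
# Sketch of the planar-feature elevation options: interpolate an elevation
# for each vertex, then collapse them into one value for the whole feature.
# The interpolated vertex elevations are supplied directly for illustration.

def planar_elevation(vertex_elevations, method):
    """Collapse per-vertex elevations into one value for the whole feature."""
    if method == "minimum":
        return min(vertex_elevations)
    if method == "maximum":
        return max(vertex_elevations)
    if method == "average":
        return sum(vertex_elevations) / len(vertex_elevations)
    raise ValueError(f"unknown method: {method}")

roof = [462.0, 463.0, 464.0, 463.0]  # interpolated Z at each roof vertex
print(planar_elevation(roof, "minimum"))   # 462.0
print(planar_elevation(roof, "maximum"))   # 464.0
print(planar_elevation(roof, "average"))   # 463.0
```

Whichever value is chosen is then applied to every vertex of the polygon, which is what keeps the roof flat.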
If you’re familiar with the terrain in your data, you can enter an
elevation value to apply to all questionable points.
Using a minimum valid elevation value
Click the Use minimum elevation value check box and input the
value of the lowest possible elevation in your data. For example, if
you enter 30, then invalid elevation values are assigned a value no
lower than 30 map units, such as meters.
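The minimum-valid-elevation rule described above is effectively a clamp; `apply_minimum_elevation` is a hypothetical helper, not the product's code.

```python
# Minimal sketch of the minimum-valid-elevation rule: any elevation below the
# user-entered floor is treated as invalid and raised to that floor.

def apply_minimum_elevation(elevations, floor):
    """Raise any elevation below `floor` (e.g. 30 map units) up to it."""
    return [max(z, floor) for z in elevations]

print(apply_minimum_elevation([455.0, -9999.0, 12.5], 30.0))
```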
The Convert Features to 2D dialog allows you to remove the height attribute
from feature datasets.
Introducing stereo visualization
On a daily basis, we unconsciously perceive and measure depth using our eyes. Persons using both eyes to view an object have binocular
vision. Persons using one eye to view an object have monocular vision. The perception of depth through binocular vision is referred to as
stereoscopic viewing.
This anaglyph image shows a 1:1 image pixel to screen pixel ratio. With red/blue glasses, the drastic elevation differences in the region are obvious.
Using the 1-Pane View

In this view, the sensor model information associated with each oriented image in an image pair is used to visually superimpose the oriented images on one another, thereby creating a 3D digital representation of the earth's surface when viewed with the appropriate stereo viewing hardware.

The 1-Pane View button is located within the lower-left portion of the Stereo Window.

If the graphics card used by the computer does not support stereo viewing, Stereo Analyst for ArcGIS automatically reverts to anaglyph stereo mode. See the Web site <http://support.erdas.com/specs/specs.html> for a list of graphics cards supported for use with Stereo Analyst for ArcGIS.

In anaglyph, shown above, Stereo Analyst for ArcGIS displays the oriented images in red and green/blue to create a stereo view.
In the 3-Pane View, the 1-Pane View and the 2-Pane View are
embedded within the Stereo Window. This configuration was
designed so that you can collect feature data in the Stereo Window
while verifying data collected within the left and right mono panes.
The 2-Pane View shows the left image and the right image of the image pair.
The 3-Pane View is the default Stereo Window setup used in Stereo
Analyst for ArcGIS. The 3-Pane View can be enabled by selecting
the 3-Pane View button located within the lower-left portion of the
Stereo Window.
If you need to switch the left and right images, you can do so by
using the Invert Stereo Model button, which is located on the Stereo
View toolbar.
Figure: the Stereo View toolbar tools.

Image Pairs list: Lets you choose which image pair to display in the Stereo Window from a dropdown list.
Stereo Window: Click to open the Stereo Window.
Auto Toggle 3D Floating Cursor: Click to collect features without toggling.
Zoom In Tool: Click to magnify by a power of 2.
Zoom Out Tool: Click to reduce by a power of 2.
Zoom Out By 2: Click for a one-time application of zoom.
Default Zoom: Click for a 1:1 image pixel to screen pixel ratio.
Synchronize Geographic Displays: Click so that the Stereo Window and the ArcMap data view display the same data coverage.
Invert Stereo Model: Click so that the left image of the image pair displays as the right image and vice versa.
In cases where a mapping project uses many images (more than five
images) or large images (greater than 85 MB), the display of
oriented images in the ArcMap data view may be slow. Rather than
display each raster in ArcMap, Stereo Analyst for ArcGIS allows
you to display only the footprints of the oriented images. Selecting
this option on the ArcMap Display tab improves the display
performance in ArcMap.
Depending on the type of data you use, the minimum and maximum
values may necessarily be different. For example, you might be
working with images that have only a 30 percent overlap; therefore,
you would change the minimum threshold value to ensure that you
get image pairs you can use in Stereo Analyst for ArcGIS.
By default, Stereo Analyst for ArcGIS displays the entire image pair in the Stereo Window. However, if you only want to see the overlap region common to the two images of an image pair, you can select the Image Pair overlap region option. This only changes the display in the Stereo Window, not the ArcMap data view.

For more detailed information, see "Understanding the epipolar line" on page 206.
You can choose from the following types of contrast stretches: Two
standard deviations, Min/max, Linear, or None.
A Two standard deviations stretch uses the data that are between -2 and +2 standard deviations from the mean of the file values and stretches them to the complete range of output screen values.

A Min/max contrast stretch makes the range of the data values vary linearly from the minimum statistics value to the maximum statistics value in the input direction, and from 0 to the maximum brightness value in the output direction.
Figure: input and output histograms for the Two standard deviations and Min/max stretches (original histogram in grey, output histogram in yellow).
Using no stretch at all
If you select None, the data is displayed in raw form without any
contrast adjustment.
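The two stretches described above can be sketched as linear mappings. The helpers below are illustrative only, assuming 8-bit output (0 to 255); they are not the product's rendering code.

```python
# The Min/max stretch maps the data range linearly onto the output range; the
# Two standard deviations stretch does the same but clips the input at
# mean - 2*stddev and mean + 2*stddev before mapping.
from statistics import mean, pstdev

def linear_stretch(values, lo, hi, out_max=255):
    """Map [lo, hi] onto [0, out_max], clipping values outside the range."""
    span = hi - lo
    return [round(max(0.0, min(1.0, (v - lo) / span)) * out_max) for v in values]

def min_max_stretch(values):
    return linear_stretch(values, min(values), max(values))

def two_stddev_stretch(values):
    m, s = mean(values), pstdev(values)
    return linear_stretch(values, m - 2 * s, m + 2 * s)

data = [10, 20, 30, 40, 50]
print(min_max_stretch(data))      # [0, 64, 128, 191, 255]
print(two_stddev_stretch(data))
```

Selecting None corresponds to skipping both mappings and passing the raw values straight through.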
Using the 3D Floating Cursor
A 3D Floating Cursor consists of an independent cursor displayed for the left image and an independent cursor displayed for the right image of an image pair.

When images are not viewed in stereo, the 3D Floating Cursor simply appears to be two separate cursors that may or may not rest on the same feature. However, when viewed in stereo, the two cursors fuse to create the perception of a single 3D Floating Cursor.

The left and right cursors for the left and right images reference a location. When a feature is being collected in stereo, the image position of the cursor for the left and right image must be at the exact same feature and location. If this does not occur, the feature cannot be reliably collected. For example, if a road along a rolling hill is being collected, the elevation of the 3D Floating Cursor must be adjusted so that the 3D Floating Cursor rests on the surface of the road each time a point (vertex) for the road is collected.
Stereo Analyst for ArcGIS provides you with some custom tools for
use in controlling the position of the 3D Floating Cursor in the
Stereo Window. You can access these tools by selecting the Tools
menu, then Customize, then Commands. Click the Stereo Analyst
category to see the commands. You can drag the commands to any
existing toolbar.
The other category you can choose from is Leica Feature Editing.
Some notable Stereo Analyst for ArcGIS tools are described in the
following sections.
Use the Customize dialog to add commands that do not already appear on
toolbars.
The 3D Floating Cursor tab of the Stereo Analyst Options dialog is where you
make changes to the appearance of the 3D Floating Cursor in the Stereo
Window.
You may want a different color display for the 3D Floating Cursor,
which is white by default. While viewing in quad-buffered stereo,
an optimum 3D Floating Cursor color is red. While viewing in
anaglyph, optimum 3D Floating Cursor colors are yellow and
white.
The available 3D Floating Cursor shapes include Cross, Open Cross, and Open X.
In the setup above, the elevation source is a raster DEM file. Image correlation is set to 85 percent, which ensures acceptable accuracy.
With image correlation, Stereo Analyst for ArcGIS consults the images themselves to derive 3D coordinate information. Using sensor model information and the correlated image positions of a point on the ground, 3D coordinate information is computed directly from imagery without requiring an external elevation source.

Using an external elevation source, like a DEM, the images themselves are not consulted at all for elevation information. Instead, elevation information comes strictly from the DEM, which may be outdated due to construction, natural disaster, and so on since it was created.

Using the Correlation Options

Three correlation options are provided for optimizing the performance of the Terrain Following Mode when image correlation is used. These include the correlation threshold, terrain slope specification, and image contrast specification, and are all located on the Terrain Following Cursor tab of the Stereo Analyst Options dialog.

Selecting a low Minimum correlation threshold value increases the probability of a false match, whereas increasing the correlation threshold may yield no correlation at all. A high Minimum correlation threshold value is preferred in forested and urban areas (with shadows), where the probability of a false match is high. A low value is preferred in grassy areas and other areas where a specific land cover type is homogeneous in the area of interest.

Using Terrain slope

In images with a large amount of slope, correlation can be more difficult since the relief displacement on the ground creates a parallax effect that increases with terrain variation. Similarly, if each image of the image pair is collected at a radically different angle, the matching can be more computationally stressful to process. Therefore, in both instances you can set the Terrain slope slider bar to Steep. This forces Stereo Analyst for ArcGIS to perform more extensive computations to ensure that the match of points between images is correct. If the area of interest is flat with very little variation in elevation, the Terrain slope slider bar should be set to Flat.
This illustration shows the application of an elevation bias, 8 meters, to derive telephone pole feature height.

Regarding the diagram above, the steps to collect the telephone pole features are as follows:

1. Begin by putting the 3D Floating Cursor in Terrain Following Mode and position it at the base of the feature. The elevation displays in the status bar at the bottom of the Stereo Window. For example, the base of the telephone pole feature may be at an elevation of 450 meters.
2. Adjust the elevation of the 3D Floating Cursor so that it is at the top of the same feature and collect it. That elevation may add 8 additional meters, for a total elevation of 458 meters.
3. Collect all remaining similar features at the top of the feature. Each individual base height as determined by the Terrain Following Mode, plus the elevation bias of 8 meters, yields a total elevation for each separate feature.

Using elevation bias to affect Y-parallax

If, while you are viewing in stereo, you perceive Y-parallax, you'll notice that your perception of 3D may not be comfortable.

Y-parallax can be adjusted using the digitizing device, such as the system mouse. Position the 3D Floating Cursor inside the Stereo Window (you may have to press the F3 key to give the 3D Floating Cursor focus), then press and hold the "y" key on the keyboard. Then, click and hold the left mouse button and move the mouse up and down to adjust the Y-parallax of the images. Release the mouse button and the "y" key when you have the Y-parallax set to a comfortable viewing level.

If Allow elevation bias is turned on, once Y-parallax has been adjusted, Stereo Analyst for ArcGIS computes a correction that is applied to the elevation associated with the 3D Floating Cursor at the time of collecting a feature.

See "Correcting Y-parallax" on page 203 for more information.
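The elevation bias arithmetic from the telephone pole example above amounts to adding the configured bias to each terrain-following base elevation; a trivial, purely illustrative sketch:

```python
# The Terrain Following Mode supplies each base elevation, and a constant
# bias (8 meters in the example) is added to place the collected vertex at
# the feature top.

def top_elevation(base_elevation, elevation_bias):
    """Base elevation from the terrain plus the configured bias."""
    return base_elevation + elevation_bias

print(top_elevation(450.0, 8.0))  # 458.0, as in the telephone pole example
```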
Both CE90 and LE90 are computed based on the sensor model information that is part of the metadata associated with the oriented images. This information is derived photogrammetrically when the position and attitude of the sensor as it existed at the time of capture is computed.

CE90 and LE90 provide a quality index for the current position of the 3D Floating Cursor. CE90 refers to circular error and LE90 refers to linear error. The 90 refers to the level of confidence in the 3D coordinates of the point. For example, an LE90 of 1.765 meters means that the current position of the 3D Floating Cursor is reliable to ±1.765 meters.

The equations to compute CE90 and LE90 are as follows:

CE90 = (σX + σY) × 1.073
LE90 = σZ × 1.646

If the correlation has failed, indicated by a red colorblock, the numbers are not representative of anything. If the correlation has succeeded, indicated by the green colorblock, the numbers indicate the standard deviation of the point feature. This colorblock is only active when the 3D Floating Cursor is in Terrain Following Mode.

If the colorblock is green, as shown below, then the 3D Floating Cursor is correlated and is located on the same feature in both the left image and the right image of the image pair.

If the colorblock is red, as shown below, the 3D Floating Cursor is not correlated and is not located on the same feature in both the left image and the right image of the image pair.
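The CE90 and LE90 equations above translate directly to code; here `sigma_x`, `sigma_y`, and `sigma_z` stand for the standard deviations of the cursor position reported by the sensor model, and the constants come straight from the equations in the text.

```python
# CE90/LE90 quality indices computed from the position standard deviations.

def ce90(sigma_x, sigma_y):
    """Circular (horizontal) error at 90 percent confidence."""
    return (sigma_x + sigma_y) * 1.073

def le90(sigma_z):
    """Linear (vertical) error at 90 percent confidence."""
    return sigma_z * 1.646

print(ce90(0.8, 0.9))            # horizontal reliability in map units
print(round(le90(1.0724), 3))    # about 1.765, matching the LE90 example
```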
Any adjustment of the mouse's scroll wheel adjusts the elevation of the 3D Floating Cursor. Use the status bar at the bottom of the Stereo Window to see the current elevation. Press the F3 key again to exit Manually Toggle 3D Floating Cursor Mode; you'll see the standard Windows cursor (an arrow) display in the Stereo Window.

You'll see the standard Windows cursor (an arrow) as you move it into the Stereo Window. This is the best mode to use (if you don't have a special motion device like the Leica Geosystems TopoMouse). Remember, to reenter the Manually Toggle 3D Floating Cursor Mode, press the F3 key on the keyboard.

Of course, you might have the 3D Floating Cursor manually toggled on in conjunction with Fixed Cursor Mode. The Fixed Cursor Mode button, toggled on, is shown below.

Toggling automatically

When active, the Auto Toggle 3D Floating Cursor Mode eliminates the need to press the F3 key in order to use the 3D Floating Cursor in the Stereo Window. The button associated with the toggled-on Auto Toggle 3D Floating Cursor Mode is shown below.
Press F3 to toggle the 3D Floating Cursor on and off in the Stereo Window. This switches between the regular Windows cursor and the 3D Floating Cursor when the system mouse is positioned over the Stereo Window and this shortcut is selected. You may have to click inside the Stereo Window before pressing F3 to give the cursor focus.

Using F4

Press F4 to resynchronize the ArcMap display with the display in the Stereo Window. This shortcut modifies the ArcMap display so that it shows the same geographic area as the Stereo Window.

Using "t"

Press the "t" key to toggle the Terrain Following Mode. This shortcut should be used when you want the 3D Floating Cursor to follow the terrain's elevation. See "Using the Terrain Following Mode" on page 138 for more information.

Using "i"

Press the "i" key to toggle between Fixed Image Mode and Fixed Cursor Mode. See "Using "c"", above.

Using "a"

Press the "a" key to activate the Arrow tool (the standard Windows pointer), which can be used to select buttons and options.

Using "z"

Press the "z" key to zoom in the area of display by 1.5 in the Stereo Window.

Using "x"

Press the "x" key to zoom out of the area of display by 1.5 in the Stereo Window.

Using "s"

Press the "s" key to apply Snap To Ground. This shortcut is good when you're doing feature extraction in an area where it is difficult to accurately place the 3D Floating Cursor on the ground. The 3D Floating Cursor's elevation is automatically adjusted so that it is placed on the ground or feature of interest. See "Using Snap To Ground" on page 143 for more information.

Using "r"

Press the "r" key to recenter the area of the image pair displayed so that the 3D Floating Cursor is in the middle of the Stereo Window. This shortcut is useful when navigating near the edges of the Stereo Window. See "Recentering the stereo cursor" on page 127 for more information.
Section 3
7 Capturing GIS data

IN THIS CHAPTER

• Collecting features in different modes
• Using 3D Snap
• Using Squaring
• Using the Monotonic Mode
• Using digitizing devices

To collect features in Stereo Analyst for ArcGIS, you make use of the existing ArcGIS tools that you're probably already familiar with. These tools are located on the Editor toolbar, and can be applied both in ArcMap and the Stereo Window. Stereo Analyst for ArcGIS also provides you with some new tools to make feature collection and editing easy in the Stereo Window.

In this chapter, you'll learn how to determine the best mode for feature collection and whether you may be able to apply 3D Snap settings to collect adjacent 3D features.

You'll also learn about applying Squaring settings to collect 3D features, and using the Monotonic Mode for special applications.

Finally, you'll learn about the button mapping process for digitizing devices. If you need more detailed information about digitizing devices, you can find it in the Stereo Analyst for ArcGIS On-line Help.
155
Collecting features in different modes

Stereo Analyst for ArcGIS has different modes in which you can digitize features in the Stereo Window. These different modes are described in the following sections.

Using Fixed Cursor Mode

When the Stereo Window is in Fixed Cursor Mode, the 3D Floating Cursor is fixed in the center of the Stereo Window. Adjustments you make affect the position of the left image and right image of the currently displayed image pair. When you are in Fixed Cursor Mode, the Fixed Cursor Mode button on the Stereo View toolbar appears recessed.

Using Fixed Cursor Mode while collecting features is appropriate when working in the Manually Toggle 3D Floating Cursor Mode and when you are using a TopoMouse. It is particularly useful when the feature you're digitizing extends beyond the Stereo Window display.

You cannot use the Fixed Cursor Mode in conjunction with the Auto Toggle 3D Floating Cursor Mode.

Using Fixed Image Mode

When the Stereo Window is in Fixed Image Mode, the 3D Floating Cursor can move, but the images are fixed. Adjustments you make affect the separation and location of the 3D Floating Cursor. This mode is appropriate when working with a system mouse, and works best when the Auto Toggle 3D Floating Cursor Mode is in use. When you are in Fixed Image Mode, the Fixed Cursor Mode button on the Stereo View toolbar does not appear recessed.

Using Fixed Image Mode while collecting features is appropriate when the feature you're digitizing fits easily within the Stereo Window. Click inside the Stereo Window to give the cursor focus, then press F3 on the keyboard to apply the 3D Floating Cursor in the Stereo Window.

Using Terrain Following Mode

As you learned in "Using the Terrain Following Mode" on page 138, the Terrain Following Mode maintains the position of the 3D Floating Cursor on the ground or a feature of interest without your manual adjustment of the elevation of the 3D Floating Cursor. When the Terrain Following Mode is active, the Terrain Following Mode button on the Stereo View toolbar appears recessed.

You can use the Terrain Following Mode while digitizing so that the 3D Floating Cursor is on the feature. Make note of the CE90 and LE90 values and the red or green colorblock, which are located at the bottom right of the Stereo Window. These indicate whether or not the 3D Floating Cursor is correlated at that location to ensure accuracy as you collect features. See "Checking accuracy of 3D information" on page 146 for more information about CE90 and LE90.

Using Auto Toggle 3D Floating Cursor Mode

When you are in Auto Toggle 3D Floating Cursor Mode, you can move your 3D Floating Cursor freely inside and outside the Stereo Window without having to press the F3 key each time you want to activate the 3D Floating Cursor for collecting or editing features. When you are in Auto Toggle 3D Floating Cursor Mode, the Auto Toggle 3D Floating Cursor Mode button appears recessed in the Stereo View toolbar.

An advantage to using this mode is that you can freely change your selections on the Editor toolbar, then move right back into the Stereo Window to continue your work.
As you collect and edit features in the Stereo Window, you have access to shortcuts by clicking the right mouse button. These options are only available during feature collection and editing. The options you'll see on the menus change depending on the mode you're in. The tools specific to Stereo Analyst for ArcGIS are explained in the rest of this chapter. All of the other tools are well documented in the book Editing in ArcMap as well as the On-line Help.

The next steps tell you how to collect features in Fixed Image Mode in the Stereo Window. Fixed Image Mode is best used when the feature you want to collect displays wholly in the Stereo Window.
Digitizing features outside the display
The 3D Snap tab is where you set tolerance values for coordinates in the Z (elevation) direction.
Setting cache size

The cache size is used to store all features around the cursor position as the 3D Floating Cursor moves around in the Stereo Window. The default cache size of 10 means that the cache around the 3D Floating Cursor covers a range 10 times the planar snapping tolerance. The planar snapping tolerance is set on the General tab of the Editor Options dialog.

On the 3D Snap tab of the Editing Options dialog, you see a check box for Use elevation tolerance. The tolerance value, which is set to 1 by default, is measured in map units. That is, your 3D Floating Cursor must be within one map unit, such as a meter, in Z in order to snap to the vertex.

The snap type shortcuts are:

• Endpoint—Ctrl + F5

• Vertex—Ctrl + F6

• Midpoint—Ctrl + F7

• Edge—Ctrl + F8
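The snap test described above can be sketched as follows. This is a minimal illustration of the logic, not the product's API: the function name and arguments are hypothetical, and the planar test is assumed to be a simple XY distance check against the planar snapping tolerance.

```python
def snaps_to_vertex(cursor, vertex, planar_tol, use_z_tol=True, z_tol=1.0):
    """Illustrative 3D Snap test (names are hypothetical, not the product's
    API): the cursor snaps to a vertex when it lies inside the planar
    snapping tolerance and, if Use elevation tolerance is on, within
    z_tol map units in Z (1.0 by default, as described above)."""
    dx, dy = cursor[0] - vertex[0], cursor[1] - vertex[1]
    if (dx * dx + dy * dy) ** 0.5 > planar_tol:
        return False
    if use_z_tol and abs(cursor[2] - vertex[2]) > z_tol:
        return False
    return True

# In range in XY but 2.5 map units off in Z: no snap unless Z tolerance is off.
print(snaps_to_vertex((10.0, 10.0, 102.5), (10.2, 10.1, 100.0), planar_tol=0.5))        # → False
print(snaps_to_vertex((10.0, 10.0, 102.5), (10.2, 10.1, 100.0), 0.5, use_z_tol=False))  # → True
```

The example shows why the elevation tolerance matters: a cursor that looks correct in plan view can still be well above or below the vertex in Z.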
The figure shows a polygon with sides a, b, c, and d and their rotation angles ra, rb, rc, and rd. Vertices within tolerance are moved. Those outside tolerance aren't moved.
Setting rotation mode

You can choose from four methods to determine the alignment of the feature. The first choice, Weighted mean, uses the length-weighted angle of all sides to determine the alignment. First line uses the line formed by the first two digitized vertices of a feature as alignment. Longest line uses the longest side of a feature as alignment. Active view alignment makes the squared feature have sides either horizontal or vertical to the ArcMap data view.

Using the Weighted mean rotation mode

Weighted mean is the default rotation mode used by Squaring. Using the Weighted mean rotation mode means that the length-weighted mean angle (R) of all sides is used to determine the alignment. Once the alignment angle has been determined, the vertices are adjusted within the tolerance to square corners where possible. The Weighted mean rotation mode calculates the average rotation based upon the length and angle of each segment.

The Weighted mean function is expressed in the following equation, where a, b, c, and d are the lengths of the sides and ra, rb, rc, and rd are their rotation angles:

R = ((ra × a) + (rb × b) + (rc × c) + (rd × d)) / (a + b + c + d)

Using the First line rotation mode

Using the First line rotation mode means that the first and second vertices form the line which is used to square the feature. The tolerance value in each of the following two examples is 10.0. The red polygon represents the result of squaring.

If you digitize clockwise, the First line rotation mode uses the first line digitized as the basis for squaring. Figure A shows the original polygon; Figure B shows the squared polygon.
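The Weighted mean equation above can be sketched in a few lines. This is an illustration of the formula only, not the product's implementation; in particular, reducing each side's angle modulo 90 degrees (so that perpendicular sides vote for the same alignment) is an assumption.

```python
import math

def weighted_mean_rotation(vertices):
    """Length-weighted mean angle (R) of a polygon's sides, as in the
    Squaring equation above. The modulo-90 reduction of each angle is an
    assumption about how perpendicular sides are treated, not something
    the manual states."""
    total_len = weighted = 0.0
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        length = math.hypot(x2 - x1, y2 - y1)                      # side lengths a, b, c, d
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 90.0  # side angles ra, rb, rc, rd
        total_len += length
        weighted += angle * length
    return weighted / total_len                                    # R

# An exact rectangle rotated by 10 degrees: R comes out as 10.0.
theta = math.radians(10)
c, s = math.cos(theta), math.sin(theta)
rect = [(x * c - y * s, x * s + y * c) for x, y in [(0, 0), (10, 0), (10, 5), (0, 5)]]
print(round(weighted_mean_rotation(rect), 2))  # → 10.0
```

Because each angle is weighted by side length, a short, badly digitized segment has little influence on the alignment compared to the long sides of the feature.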
Digitizing the same line first but in the counter-clockwise direction can yield vastly different results.

Digitizing features

When using First line rotation, always digitize in a clockwise direction.

Using the Longest line rotation mode

Using the Longest line rotation mode means that the line with the greatest length is used to square the feature. In this mode, the order in which you digitize vertices does not matter.

The Longest line rotation mode uses the longest line of the feature as the basis for squaring. Figure A shows the original polygon; Figure B shows the squared polygon in red. NOTE: Segment 4 was not moved because it was not within the tolerance value.

Using the Active view alignment rotation mode

The Active view alignment rotation mode uses the borders of the ArcMap data view to square the feature in either a horizontal or vertical direction, or both if possible. The Active view alignment uses the ArcMap data view boundaries as a guide; sides 1 and 3 are adjusted to be parallel with the ArcMap data view.

Figure A shows the vertices outside tolerance. Figure B shows the squared polyline. Figure C shows the straightened polyline. NOTE: Resultant line (red) is shown slightly offset for clarity.

When you are using this mode, it is best to have the Orient ArcMap document to Image Pair when Image Pair changes option on. This option is located on the ArcMap Display tab of the Stereo Analyst Options dialog. For more information about the Orient ArcMap document to Image Pair when Image Pair changes option, see "Orienting displays" on page 119.
The Monotonic Mode bases its upward, same, or downward flow on the elevation change between the first two vertices you collect (flow is upward in the case that you start digitizing at the water's endpoint rather than its starting point). The rate of increase or decrease is determined by the 3D Floating Cursor elevation.
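The behavior described above can be illustrated with a small sketch. The function name and clamping logic here are hypothetical, not the product's implementation: the elevation change between the first two vertices fixes the flow direction, and later elevations are clamped so they never move against it.

```python
def enforce_monotonic_z(vertices):
    """Illustrative Monotonic Mode sketch (not the product's code): the
    first two vertices establish rising, falling, or level flow, and
    subsequent Z values are clamped to respect that direction."""
    if len(vertices) < 2:
        return list(vertices)
    rising = vertices[1][2] > vertices[0][2]
    falling = vertices[1][2] < vertices[0][2]
    out = [vertices[0]]
    for x, y, z in vertices[1:]:
        prev_z = out[-1][2]
        if falling:
            z = min(z, prev_z)
        elif rising:
            z = max(z, prev_z)
        else:              # level flow: hold the starting elevation
            z = prev_z
        out.append((x, y, z))
    return out

# A stream digitized downhill: the uphill blip at 99 m is clamped to 98 m.
print(enforce_monotonic_z([(0, 0, 100), (1, 0, 98), (2, 0, 99), (3, 0, 95)]))
# → [(0, 0, 100), (1, 0, 98), (2, 0, 98), (3, 0, 95)]
```

This kind of constraint is what keeps a digitized stream from appearing to flow uphill when the 3D Floating Cursor is placed slightly too high at one vertex.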
Mapping buttons

The Devices dialog is your starting point for all device-related settings. Next, you'll specify the COM port to which the digitizing device is attached in the Add Device dialog.
A Capturing data using imagery

IN THIS APPENDIX

• Collecting data for a GIS

• Preparing imagery for a GIS

• Identifying workflow

This appendix gives you examples of how imagery is useful in the collection of geographic data. This data is of primary importance for the creation and maintenance of a GIS. If the data and information contained within a GIS are inaccurate or outdated, the resulting analyses performed on the data do not reflect true, real-world applications and scenarios.
Collecting data for a GIS

Since its inception, GIS was designed to represent the earth and its associated geography. Vector data has been accepted as the primary format for representing geographic information. For example, a road is represented with a line, and a parcel of land is represented using a series of lines to form a polygon.

Various approaches have been used to collect the vector data used as the fundamental building blocks of a GIS. These include:

• Using a digitizing tablet to digitize features from cartographic, topographic, census, and survey maps. The resulting features are stored as vectors. Feature attribution occurs either during or after feature collection.

• Scanning and georeferencing existing hardcopy maps. The resulting images are georeferenced and then used to digitize and collect geographic information. For example, this includes scanning United States Geological Survey (USGS) 1:24,000 quad sheets and using them as the primary source for a GIS.

• Obtaining geographic information by ground surveying. Ground global positioning system (GPS) receivers, total stations, and theodolites are commonly used for recording the 3D locations of features. The resulting information is commonly merged into a GIS and associated with existing vector datasets.

• Outsourcing photogrammetric feature collection to service bureaus. Traditional stereo plotters and digital photogrammetric workstations are used to collect highly accurate geographic information such as orthorectified imagery, DTMs, and 3D vector datasets.

• Applying remote sensing techniques, such as multispectral classification, which traditionally have been used for extracting geographic information about the earth's surface.

These approaches have been widely accepted within the GIS industry as the primary techniques used to prepare, collect, and maintain the data contained within a GIS; however, GIS professionals throughout the world are beginning to face the following issues:

• The original sources of information used to collect GIS data are becoming obsolete and outdated. The same can be said for the GIS data collected from these sources. How can the data and information in a GIS be updated?

• The accuracy of the source data used to collect GIS data is questionable. For example, how accurate is the 1960 topographic map used to digitize contour lines?

• The amount of time required to prepare and collect GIS data from existing sources of information is great.

• The cost required to prepare and collect GIS data is high. For example, georectifying 500 photographs to map an entire county may take up to three months (which does not include collecting the GIS data). Similarly, digitizing hardcopy maps is time-consuming and costly, not to mention inaccurate.

• Most of the original sources of information used to collect GIS data provide only 2D information. For example, a building is represented with a polygon having only X and Y coordinate information. To create a 3D GIS involves creating DTMs, digitizing contour lines, or surveying the earth's geography to obtain 3D coordinate information. Once collected, the 3D information is merged with the 2D GIS to create a 3D GIS. Each approach is ineffective in terms of the time, cost, and accuracy associated with collecting the 3D information for a 2D GIS.

• The cost associated with outsourcing core digital mapping to specialty shops is expensive in both dollars and time. Also, performing regular GIS data updates requires additional outsourcing.
• Raw photography,
• Geocorrected imagery, and
• Orthorectified imagery.
The following examples describe the common practices used for the collection of geographic information from raw photographs and imagery. Raw imagery includes scanned hardcopy photography, digital camera imagery, videography, or satellite imagery that has not been processed to establish a geometric relationship between the imagery and the earth. In this case, the images are not referenced to a geographic projection or coordinate system.

Example 1: Collecting geographic information from hardcopy photography

Hardcopy photographs are widely used by professionals in several industries as one of the primary sources of geographic information. Foresters, geologists, soil scientists, engineers, environmentalists, and urban planners routinely collect geographic information directly from hardcopy photographs. The hardcopy photographs are commonly used during fieldwork and research. As such, the hardcopy photographs are a valuable source of information.

For the interpretation of 3D and height information, an adjacent set of photographs is used together with a stereoscope. While in the field, information and measurements collected on the ground are recorded directly onto the hardcopy photographs. Using the hardcopy photographs, information regarding the feature of interest is recorded both spatially (geographic coordinates) and nonspatially (text attribution).

Transferring the geographic information associated with the hardcopy photograph to a GIS involves the following steps:

This procedure is repeated for each photograph.

Example 2: Collecting geographic information from hardcopy photography using a transparency

Rather than measure and mark on the photographs directly, a transparency is placed on top of the photographs during feature collection. In this case, a stereoscope is placed over the photographs. Then, a transparency is placed over the photographs. Features and information (spatial and nonspatial) are recorded directly on the transparency. Once the information has been recorded, it is transferred to a GIS. The following steps are commonly used to transfer the information to a GIS:

• Either digitally scan the entire transparency using a desktop scanner, or digitize only the collected features using a digitizing tablet.

• The resulting image or set of digitized features is then georeferenced to the earth's surface. The information is georeferenced to an existing vector coverage, rectified map, rectified image, or is georeferenced using GCPs. Once the features have been georeferenced, geographic coordinates (X and Y) are associated with each feature.

• In a GIS, the recorded tabular (attribution) data is entered and merged with the digital set of georeferenced features.

This procedure is repeated for each transparency.
Conventional techniques generally process the images one at a time. They cannot provide an integrated solution for multiple images or photographs simultaneously and efficiently. It is very difficult, if not impossible, for conventional techniques to achieve a reasonable accuracy without a great number of GCPs when dealing with high-resolution imagery, images with severe systematic and/or nonsystematic errors, and images covering rough terrain such as mountain areas. Image misalignment is more likely to occur when mosaicking separately rectified images. This misalignment could result in inaccurate geographic information being collected from the rectified images. As a result, the GIS suffers.

Furthermore, it is impossible for geocorrection techniques to extract 3D information from imagery. There is no way for conventional techniques to accurately derive geometric information about the sensor that captured the imagery.

Solution

Techniques used in Stereo Analyst for ArcGIS and IMAGINE OrthoBASE overcome all of these problems by using sophisticated techniques to account for the various types of error in the input data sources. This solution is integrated and accurate. IMAGINE OrthoBASE can process hundreds of images or photographs with very few GCPs, while at the same time eliminating the misalignment problem associated with creating image mosaics. In short, less time, less money, less manual effort, and more geographic fidelity can be realized using the photogrammetric solution. Stereo Analyst for ArcGIS uses all of the information processed in IMAGINE OrthoBASE and accounts for inaccuracies during 3D feature collection, editing, and interpretation.

Geocorrected aerial photography and satellite imagery have large geometric distortions caused by various systematic and nonsystematic factors. Photogrammetric techniques used in IMAGINE OrthoBASE eliminate these errors most efficiently, and create the most reliable and accurate imagery from the raw imagery. IMAGINE OrthoBASE is unique in that it considers the image-forming geometry by using information between overlapping images and explicitly dealing with the third dimension, which is elevation.

Orthorectified images, or orthoimages, serve as the ideal information building blocks for collecting 2D geographic information required for a GIS. They can be used as reference image backdrops to maintain or update an existing GIS. Using digitizing tools in a GIS, features can be collected and then attributed to reflect their spatial and nonspatial characteristics. Multiple orthoimages can be mosaicked to form seamless orthoimage base maps.

Problems

Orthorectified images are limited to containing only 2D geometric information. Thus, geographic information collected from orthorectified images is georeferenced to a 2D system. Collecting 3D information directly from orthoimagery is impossible. The accuracy of the orthorectified imagery is highly dependent on the accuracy of the DTM used to model the terrain effects caused by the earth's surface. The DTM is an additional source of input during orthorectification. Acquiring a reliable DTM is another costly process. High-resolution DTMs can be purchased, but at great expense.

Where did the DTMs come from? How accurate are the DTMs? If the original source of the DTM is unknown, then the quality of the DTM is also unknown. As a result, any inaccuracies are translated into your GIS.
Example 5

Problem

The only solution that can address all of those issues involves the use of imagery. Imagery provides an up-to-date, highly accurate representation of the earth and its associated geography. Various types of imagery can be used, including aerial photography, satellite imagery, digital camera imagery, videography, and 35 millimeter photography. With the advent of high-resolution satellite imagery, GIS data can be updated accurately and immediately.

Synthesizing the concepts associated with photogrammetry, remote sensing, GIS, and 3D visualization introduces a new paradigm for the future of digital mapping: one that integrates the respective technologies into a single, comprehensive environment for the accurate preparation of imagery and the collection and extraction of 3D GIS data and geographic information. This paradigm is referred to as 3D geographic imaging. 3D geographic imaging techniques will be used for building the 3D GIS of the future. 3D information can be used for GIS analysis.

3D geographic imaging is the process associated with transforming imagery into GIS data or, more importantly, information. 3D geographic imaging prevents the inclusion of inaccurate or outdated information in a GIS. Sophisticated and automated techniques are used to ensure that highly accurate 3D GIS data can be collected and maintained using imagery. 3D geographic imaging techniques use a direct approach to collecting accurate 3D geographic information, thereby eliminating the need to digitize from a secondary data source like hardcopy or digital maps. These new tools significantly improve the reliability of GIS data and reduce the steps and time associated with populating a GIS with accurate information.

Using imagery
This workflow is generic and does not necessarily need to be repeated for every GIS data collection and maintenance project. For example, a bundle block adjustment does not need to be performed every time a 3D feature is collected from imagery.

Defining the sensor model

A sensor model describes the properties and characteristics associated with the camera or sensor used to capture photography and imagery. Since digital photogrammetry allows for the accurate collection of 3D information from imagery, all of the characteristics associated with the camera/sensor, the image, and the ground must be known and determined. Photogrammetric sensor modeling techniques define the specific information associated with a camera/sensor as it existed when the imagery was captured. This information includes both internal and external sensor model information.

External sensor model information describes the exact position and orientation of each image as they existed when the imagery was collected. The position is defined using 3D coordinates. The orientation of an image at the time of capture is defined in terms of rotation about three axes: omega (ω), phi (ϕ), and kappa (κ). Over the last several years, it has been common practice to collect airborne GPS and inertial navigation system (INS) information at the time of image collection. If this information is available, the external sensor model information can be directly input for use in photogrammetric processing. If external sensor model information is not available, most photogrammetric systems can determine the exact position and orientation of each image in a project using the bundle block adjustment approach.

Measuring GCPs

Unlike traditional georectification techniques, GCPs in digital photogrammetry have three coordinates: X, Y, and Z. The image locations of 3D GCPs are measured across multiple images. GCPs can be collected from existing vector files, orthorectified images, DTMs, and scanned topographic and cartographic maps.
Applying 3D GIS to forestry

Applying 3D GIS to telecommunications

IN THIS CHAPTER

• Understanding scaling, translation, and rotation
Learning principles of stereo viewing

Defining stereoscopic viewing

On a daily basis, we unconsciously perceive and measure depth using our eyes. Persons using both eyes to view an object have binocular vision. Persons using one eye to view an object have monocular vision. The perception of depth through binocular vision is referred to as stereoscopic viewing.

Using stereoscopic viewing, depth information can be perceived with great detail and accuracy. Stereo viewing allows the human brain to judge and perceive changes in depth and volume. In photogrammetry, stereoscopic depth perception plays a vital role in creating and viewing 3D representations of the earth's surface. As a result, geographic information can be collected to a greater accuracy in stereo as compared to traditional monoscopic techniques. Digital photogrammetric techniques used in Stereo Analyst for ArcGIS extend the perception and interpretation of depth to include the measurement and collection of 3D information.

Stereo feature collection techniques provide greater GIS data collection and update accuracy for the following reasons:

Understanding how stereo works

A true stereo effect is achieved when two overlapping images (an image pair), or photographs of a common area captured from two different vantage points, are rendered and viewed simultaneously. The stereo effect, or ability to view with measurable depth perception, is provided by a parallax effect generated from the two different acquisition points.

The stereo effect is analogous to the depth perception you achieve by looking at a feature with your two eyes. The distance between your eyes represents two vantage points like two independent photos, as in the following pictures.

During the stereo viewing process, the left eye concentrates on the object in the left image and the right eye concentrates on the object in the right image. As a result, a single 3D image is formed within the brain. The brain discerns height and variations in height by visually comparing the depths of various features. While the eyes move across the overlap area of the two photographs, a continuous 3D model of the earth is formulated within the brain since the eyes continuously perceive the change in depth as a function of change in elevation. This illustration shows a 3D model.
Correcting X-parallax

The following pictures show the image positions of two ground points (A and B) appearing in the overlapping area of two images. Ground point A is located at the top of a building, and ground point B is located on the ground.

The left and right images of an image pair have the same features, but at different locations.

This is a profile view of the image pair that illustrates the positions of point A and point B.

The following diagram illustrates that the parallax associated with ground point A, depicted in the illustration of the profile view above (Pa), is larger than the parallax associated with ground point B (Pb).

This diagram illustrates the parallax comparison between points. Parallax changes with increases and decreases in elevation.
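The relationship between parallax and elevation can be quantified. The formula below is the standard parallax-difference height equation found in photogrammetry texts; it is not stated in this manual, and the numbers in the example are invented for illustration.

```python
def height_from_parallax(flying_height, p_base, dp):
    """Height difference from x-parallax, using the standard parallax-
    difference formula from photogrammetry texts (not from this manual):

        dh = H * dp / (p_base + dp)

    flying_height  H, flying height above the reference point
    p_base         absolute x-parallax of the reference point (point B)
    dp             parallax difference, Pa - Pb, between the two points
    """
    return flying_height * dp / (p_base + dp)

# Building roof (A) vs. ground (B): H = 1500 m, Pb = 90.0 mm, Pa - Pb = 2.1 mm.
print(round(height_from_parallax(1500.0, 90.0, 2.1), 1))  # → 34.2
```

This is exactly the effect the diagram describes: because Pa is larger than Pb, point A must be higher than point B, and the size of the difference fixes the height.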
Rotating the left and right images adjusts for the large relative variation in orientation (that is, omega, phi, and kappa) between the left and right images. In the first picture, Y-parallax exists. The second picture displays the same stereo model without Y-parallax.
The following picture displays a DSM created without sensor model information. As a result of using automatic epipolar resampling display techniques, 3D GIS data can be collected to a higher accuracy.

Matching image points are located along the epipolar line: an image point collected from the left image of the image pair has its corresponding image point located along the epipolar line in the right image of the image pair. (Source: Keating et al 1975)

The following diagram illustrates the image matching process using the epipolar plane as a geometric constraint. The figure shows the epipolar plane, which is the plane defined by the two exposure stations (L1 and L2) and the ground point, P. The lines pk and k′p′ are the epipolar lines and are defined by the intersection of the images and the epipolar plane. The epipolar plane can be used as a geometric constraint to aid in the identification of matching points. Using the epipolar constraint in the matching process transforms the matching problem from a two-dimensional problem to a one-dimensional problem, and is therefore beneficial since it reduces both the search area and the computation time (Wolf 1983).

Epipolar geometry is also commonly associated with the coplanarity condition. The coplanarity condition states that the two exposure stations of an image pair, any ground point, and the corresponding image positions on the two images must all exist in a common plane.
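The reduction from a 2D to a 1D search can be sketched concretely. This is a generic illustration of epipolar-constrained matching, not the matching algorithm used by the product; the rows, window size, and search range are invented values.

```python
def match_along_epipolar(left_row, right_row, x, half=2, search=10):
    """1D matching sketch: after epipolar resampling, the match for a
    point lies on the same image row, so only the x coordinate is searched
    (sum of squared differences over a small window). Window size and
    search range are illustrative, not from the manual."""
    template = left_row[x - half:x + half + 1]
    best_x, best_ssd = None, float("inf")
    lo = max(half, x - search)
    hi = min(len(right_row) - half - 1, x + search)
    for cx in range(lo, hi + 1):
        window = right_row[cx - half:cx + half + 1]
        ssd = sum((t - w) ** 2 for t, w in zip(template, window))
        if ssd < best_ssd:
            best_x, best_ssd = cx, ssd
    return best_x

# Synthetic epipolar rows: the right row is the left row shifted 3 pixels.
left_row = [17, 42, 8, 99, 23, 56, 71, 5, 64, 88, 12, 30, 77, 49, 2, 91, 38, 60, 25, 83]
right_row = [0, 0, 0] + left_row[:-3]
print(match_along_epipolar(left_row, right_row, x=8))  # → 11
```

Without the epipolar constraint, the same search would have to cover a 2D neighborhood of candidate positions, which is the extra search area and computation time the text refers to.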
• Understanding exterior orientation

Learning principles of photogrammetry

Photogrammetric principles are used to extract topographic information from aerial photographs and imagery. The following picture illustrates rugged topography. This type of topography can be viewed in 3D using Stereo Analyst for ArcGIS.

The traditional, and largest, application of photogrammetry is to extract topographic and planimetric information (such as topographic maps) from aerial images. However, photogrammetric techniques have also been applied to process satellite images and close-range images to acquire topographic or nontopographic information about photographed objects. Topographic information includes spot height information, contour lines, and elevation data. Planimetric information includes the geographic location of buildings, roads, rivers, and so on.

The figure shows aerial photographs exposed at exposure stations along flight lines in the flying direction; adjacent strips (Strip 1 and Strip 2, along Flight Line 1 and its neighbor) have 20-30% sidelap.
These units usually scan only film because film is superior to paper, both in terms of image detail and geometry. These units usually have a root mean square error (RMSE) positional accuracy of 4 microns or less, and are capable of scanning at a maximum resolution of 5 to 10 microns (5 microns is equivalent to approximately 5,000 pixels per inch).

The required pixel resolution varies depending on the application. Aerial triangulation and feature collection applications often scan in the 10- to 15-micron range. Orthophoto applications often use 15- to 30-micron pixels. Color film is less sharp than panchromatic; therefore, color ortho applications often use 20- to 40-micron pixels. The optimum scanning resolution also depends on the desired photogrammetric output accuracy. Scanning at higher resolutions provides data with higher accuracy.

Choosing scanning resolutions

One of the primary factors contributing to the overall accuracy of 3D feature collection is the resolution of the imagery being used. Image resolution is commonly determined by the scanning resolution (if film photography is being used), or by the pixel resolution of the sensor.

In order to optimize the attainable accuracy of GIS data collection, the scanning resolution must be considered. The appropriate scanning resolution is determined by balancing the accuracy requirements against the size of the mapping project and the time required to process the project.

The following table lists the scanning resolutions associated with various scales of photography and image file size.

Using desktop scanners
The Ground Coverage column refers to the ground coverage per pixel. Thus, a 1:40000 scale black and white photograph scanned at 25
microns (1016 dots per inch) has a ground coverage per pixel of 1 meter × 1 meter. The resulting file size is approximately 85 MB, assuming
a square 9 × 9 inch photograph.
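The arithmetic behind the table's example can be checked directly. The function name and the 1-byte-per-pixel assumption (appropriate for an 8-bit black and white scan) are illustrative.

```python
MICRONS_PER_INCH = 25400.0

def scan_stats(scale, scan_microns, photo_inches=9.0):
    """Worked check of the table's example: ground coverage per pixel and
    the approximate file size of an 8-bit (black and white) scan. The
    1-byte-per-pixel assumption is illustrative, not from the manual."""
    ground_m = scale * scan_microns / 1e6                     # meters on the ground per pixel
    pixels = photo_inches * MICRONS_PER_INCH / scan_microns   # pixels per photo side
    size_mb = pixels * pixels / 1e6                           # 1 byte per pixel, in MB
    return ground_m, size_mb

# 1:40000 photography scanned at 25 microns (1016 dpi), 9 x 9 inch photo:
ground, size = scan_stats(40000, 25.0)
print(ground, round(size))  # → 1.0 84
```

A 25-micron pixel at 1:40000 scale covers 25 × 10⁻⁶ m × 40000 = 1 m on the ground, and a 9-inch side scans to 9144 pixels, giving roughly 84 million pixels, close to the "approximately 85 MB" figure above.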
Conceptually, photogrammetry involves establishing the relationship between the camera or sensor used to capture the imagery, the
imagery itself, and the ground. In order to understand and define this relationship, each of the three variables associated with the relationship
must be defined with respect to a coordinate space and coordinate system.
The file coordinates of a digital image are defined in a pixel coordinate system. A pixel coordinate system is usually a coordinate system
with its origin in the upper-left corner of the image, the x-axis pointing to the right, the y-axis pointing downward, and the units in pixels,
as shown by axes c and r in the following illustration. These file coordinates (c, r) can also be thought of as the pixel column and row
numbers, respectively.
[Figure: a digital image with pixel coordinate axes c (columns, rightward) and r (rows, downward) originating at the upper-left corner, and image coordinate axes x and y originating at the center o.]
This illustration shows the origin of the image coordinate system (x, y) and the origin of the pixel coordinate system (c, r).
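The relationship between the two systems can be sketched as a small conversion routine. This sketch assumes the image coordinate origin lies at the image center with x pointing right and y pointing up; that placement is an illustrative convention (real cameras establish it through fiducial marks and the principal point), not the product's implementation:

```python
# Minimal sketch: convert pixel coordinates (c, r) to image coordinates (x, y),
# assuming the image coordinate origin at the image center, x right, y up.

def pixel_to_image(col, row, n_cols, n_rows, pixel_size_mm):
    """Convert pixel (c, r) to image coordinates (x, y) in millimeters."""
    x = (col - (n_cols - 1) / 2.0) * pixel_size_mm   # x grows with column
    y = ((n_rows - 1) / 2.0 - row) * pixel_size_mm   # y flips: rows grow downward
    return x, y

# The center pixel of a 9000 x 9000 image maps to the image origin:
print(pixel_to_image(4499.5, 4499.5, 9000, 9000, 0.025))  # (0.0, 0.0)
```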
Photogrammetric applications associated with terrestrial or ground-based images utilize slightly different image and ground space
coordinate systems. The following figure illustrates the two coordinate systems associated with image space and ground space.
[Figure: ground space coordinate system (XG, YG, ZG) with rotation angles ω, ϕ, κ and ground point A at (XA, YA, ZA); image space coordinate system (x, y, z) with rotation angles ω′, ϕ′, κ′, perspective center L at (XL, YL, ZL), and image point a′ at (xa′, ya′).]
The image and ground space coordinate systems are right-handed coordinate systems. Most terrestrial applications use a ground space
coordinate system defined using a localized Cartesian coordinate system.
[Figure: radial lens distortion components Δr (radial) and Δt (tangential) at radial distance r from the principal point o.]

[Figure: exposure station at (Xo, Yo, Zo) and ground point P at (Xp, Yp, Zp) in the ground coordinate system (X, Y, Z).]

The rotation matrix M is defined by the three rotation angles omega (ω), phi (ϕ), and kappa (κ):

$$M = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix}$$

The vector from the exposure station O to the ground point P is

$$A = \begin{bmatrix} X_p - X_o \\ Y_p - Y_o \\ Z_p - Z_o \end{bmatrix}$$

In order for the image and ground vectors to be within the same coordinate system, the ground vector must be multiplied by the rotation matrix M. The following collinearity equations can be formulated:

$$x_p - x_o = -f \, \frac{m_{11}(X_p - X_o) + m_{12}(Y_p - Y_o) + m_{13}(Z_p - Z_o)}{m_{31}(X_p - X_o) + m_{32}(Y_p - Y_o) + m_{33}(Z_p - Z_o)}$$

$$y_p - y_o = -f \, \frac{m_{21}(X_p - X_o) + m_{22}(Y_p - Y_o) + m_{23}(Z_p - Z_o)}{m_{31}(X_p - X_o) + m_{32}(Y_p - Y_o) + m_{33}(Z_p - Z_o)}$$

where f is the focal length and (x_o, y_o) is the principal point.
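The collinearity relationship can be sketched directly in code. This is a minimal illustration assuming one common omega-phi-kappa factorization order; rotation conventions vary between sensor models, so verify against your own calibration before relying on it:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """M = R_kappa @ R_phi @ R_omega, angles in radians (one common convention)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    r_omega = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    r_phi = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    r_kappa = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return r_kappa @ r_phi @ r_omega

def collinearity(ground_pt, exposure, angles, f, principal=(0.0, 0.0)):
    """Image coordinates (x_p, y_p) of a ground point via the collinearity equations."""
    m = rotation_matrix(*angles)
    # Rotate the ground vector A = P - O into the image space system.
    u, v, w = m @ (np.asarray(ground_pt, float) - np.asarray(exposure, float))
    return float(principal[0] - f * u / w), float(principal[1] - f * v / w)

# A perfectly vertical photo over a point directly below the exposure station
# projects that point to the principal point:
print(collinearity([1000, 2000, 100], [1000, 2000, 1600], (0, 0, 0), f=153.0))
# (0.0, 0.0)
```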
• Exterior orientation parameters
• Interior orientation parameters
• Camera or sensor model information

Well-known obstacles in photogrammetry include defining the interior and exterior orientation parameters for each image in a project using a minimum number of GCPs. Due to the costs and labor-intensive procedures associated with collecting ground control, most photogrammetric applications do not have an abundant number of GCPs. Additionally, the exterior orientation parameters associated with an image are normally unknown.

Depending on the input data provided, photogrammetric techniques such as space resection, space forward intersection, and bundle block adjustment are used to define the variables required to perform orthorectification, automated DEM extraction, image pair creation, highly accurate point determination, and control point extension.

If a minimum of three GCPs is known in the X, Y, and Z directions, space resection techniques can be used to determine the six exterior orientation parameters associated with an image. Space resection assumes that camera information is available.

Space resection is commonly used to perform single frame orthorectification, where one image is processed at a time. If multiple images are being used, space resection techniques require a minimum of three GCPs on each image being processed.

Using the collinearity condition, the exterior orientation parameters are computed. Light rays originating from at least three GCPs intersect the image plane at the image positions of the GCPs and resect at the perspective center of the camera or sensor. Using least squares adjustment techniques, the most probable values of exterior orientation can be computed. Space resection techniques can be applied to one image or multiple images.
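A space-resection sketch follows, under loudly stated assumptions: synthetic data, an illustrative omega-phi-kappa rotation convention, and a bare-bones Gauss-Newton loop with a numeric Jacobian. Production triangulation software additionally weights observations and screens blunders; this only demonstrates the least-squares idea:

```python
import numpy as np

def rotation(omega, phi, kappa):
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    return (np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
            @ np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
            @ np.array([[1, 0, 0], [0, co, so], [0, -so, co]]))

def project(params, ground, f):
    """Collinearity projection; params = (X0, Y0, Z0, omega, phi, kappa)."""
    d = (ground - params[:3]) @ rotation(*params[3:]).T
    return -f * d[:, :2] / d[:, 2:3]

def space_resection(ground, image, f, p0, iters=20, h=1e-6):
    """Recover the six exterior orientation parameters from >= 3 GCPs."""
    p = np.asarray(p0, float).copy()
    for _ in range(iters):
        r = (project(p, ground, f) - image).ravel()
        jac = np.empty((r.size, 6))
        for j in range(6):  # forward-difference Jacobian, one parameter at a time
            dp = p.copy()
            dp[j] += h
            jac[:, j] = ((project(dp, ground, f) - image).ravel() - r) / h
        step = np.linalg.lstsq(jac, -r, rcond=None)[0]  # Gauss-Newton step
        p += step
        if np.linalg.norm(step) < 1e-10:
            break
    return p

# Synthetic roundtrip: simulate image measurements from a known exterior
# orientation, then recover it from a rough initial guess.
truth = np.array([500.0, 400.0, 1500.0, 0.01, -0.02, 0.05])
ground = np.array([[0.0, 0, 10], [900, 50, 40], [450, 900, 0],
                   [100, 800, 25], [850, 850, 15]])
image = project(truth, ground, f=153.0)
estimate = space_resection(ground, image, 153.0, [480.0, 420.0, 1450.0, 0, 0, 0])
print(np.allclose(estimate, truth, atol=1e-5))  # True
```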
Space forward intersection is a technique that is commonly used to determine the ground coordinates X, Y, and Z of points that appear in
the overlapping areas of two or more images based on known interior orientation and known exterior orientation parameters. The
collinearity condition is enforced, which states that the corresponding light rays from the two exposure stations pass through the
corresponding image points on the two images and intersect at the same ground point. The following diagram illustrates the concept
associated with space forward intersection.
[Figure: space forward intersection — exposure stations O1 and O2 with image points p1 and p2 and principal points o1 and o2; the corresponding light rays intersect at the ground point (Xp, Yp, Zp), with exposure station coordinates (Xo1, Yo1) and (Xo2, Yo2) in the ground coordinate system (X, Y, Z).]
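The intersection step can be sketched as a small least-squares problem: each measured image point defines a ray from its exposure station, and the ground point is taken as the point closest to all rays. The example below assumes perfectly vertical photographs (omega = phi = kappa = 0, so the rays need no rotation) and synthetic numbers:

```python
import numpy as np

def ray_direction(x, y, f):
    """Unit direction of the ray through image point (x, y); camera looks down."""
    d = np.array([x, y, -f])
    return d / np.linalg.norm(d)

def intersect_rays(origins, directions):
    """Point minimizing the summed squared distance to all rays."""
    a = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        p = np.eye(3) - np.outer(d, d)  # projector perpendicular to ray d
        a += p
        b += p @ o
    return np.linalg.solve(a, b)

# Two exposure stations 600 m apart at 1530 m flying height, f = 153 mm.
# The ground point (300, 0, 0) images at x = +30 mm and x = -30 mm respectively.
o1, o2 = np.array([0.0, 0.0, 1530.0]), np.array([600.0, 0.0, 1530.0])
d1 = ray_direction(30.0, 0.0, 153.0)
d2 = ray_direction(-30.0, 0.0, 153.0)
print(intersect_rays([o1, o2], [d1, d2]))  # approximately [300, 0, 0]
```

With more than two rays, or with measurement noise, the same normal equations return the least-squares ground point rather than an exact intersection.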
Glossary
This glossary defines terms commonly used in 3D GIS applications and photogrammetry.
Numerics
2D
Images or photos in X and Y coordinates only. There is no vertical element (Z) to 2D images.
Viewed in mono, 2D images are good for qualitative analysis.
3D
Images or photos in X, Y, and Z (vertical) coordinates. Viewed in stereo, 3D images approximate
true earth features.
3D feature
A 3D feature is a feature that has vertex coordinates in X, Y, and Z. The Z component is the
elevation of a particular vertex.
3D Floating Cursor
The 3D Floating Cursor is apparent when you have a DSM (that is, two images of approximately
the same area) displayed. The 3D Floating Cursor’s position is determined by the amount of
X-parallax evident in the DSM and your positioning of it on the ground or feature of interest. You
adjust the position of the 3D Floating Cursor using the keyboard and the system mouse. See also
X-parallax.
3D model
A 3D model has vertex coordinates in X, Y, and Z, where the Z coordinate indicates elevation. A
3D model displays in 3D (that is, a volumetric object).
Symbols
*.blk
An IMAGINE OrthoBASE block file. A block file can contain only one image, but usually
contains two or more images with approximately 60 percent overlap. Block files can be viewed in
3D using Stereo Analyst for ArcGIS.
*.img
An ERDAS IMAGINE image file. An .img file uses the hierarchical file format (HFA) structure to store many types of information in addition to the image data. For example, the .img format stores information about the file, sensor, layers, statistics, projection, and so on.
*.prj
A SOCET SET® project file, which contains sensor position and projection information about images in the project.
*.sup
A SOCET SET® support file, which contains geometric information about the image it supports.
κ
Kappa. The angle used to define angular orientation. Kappa is rotation about the Z-axis.
ω
Omega. An angle used to define angular orientation. Omega is rotation about the X-axis.
ϕ
Phi. An angle used to define angular rotation. Phi is rotation about the Y-axis.
Terms
additional parameter
(AP) In block triangulation, additional parameters characterize systematic error within the block of images and observations, such as lens distortion.
aerial photographs
Photographs taken from positions above the earth captured by aircraft. Photographs are used for planimetric mapping projects.
aerial triangulation
(AT) The process of establishing a mathematical relationship between images, a camera or sensor model, and the ground. The information derived is necessary for orthorectification, DEM generation, and image pair creation. This term is used when processing frame camera, digital camera, videography, and nonmetric camera imagery.
affine transformation
A 2D plane-to-plane transformation that uses six parameters to account for rotation, translation, scale, and nonorthogonality between the planes. It defines the relationship between two coordinate systems, such as a pixel and an image space coordinate system.
airborne GPS
A technique used to provide initial approximations of exterior orientation, which defines the position and orientation associated with an image as they existed during image capture. GPS provides the X, Y, and Z coordinates of the exposure station. See also global positioning system.
algorithm
“A procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation” (Merriam-Webster OnLine Dictionary 2001).
American Standard Code for Information Interchange
(ASCII) A “basis of character sets...to convey some control codes, space, numbers, most basic punctuation, and unaccented letters a–z and A–Z” (Free On-Line Dictionary of Computing 1999).
anaglyph
A 3D image composed of two oriented or nonoriented image pairs. To view an anaglyph, you require a pair of red/blue glasses. These glasses isolate your vision into two distinct parts corresponding with the left and right images of an image pair. This produces a 3D effect with vertical information.
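The six-parameter affine transformation defined in the glossary can be sketched as follows. The parameterization x' = a0 + a1·x + a2·y, y' = b0 + b1·x + b2·y is one standard form, and the fiducial-style numbers below are invented for illustration:

```python
import numpy as np

# Sketch of a six-parameter affine transformation: the coefficients absorb
# translation (a0, b0) plus scale, rotation, and nonorthogonality.

def affine(params, pts):
    a0, a1, a2, b0, b1, b2 = params
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([a0 + a1 * x + a2 * y, b0 + b1 * x + b2 * y])

def fit_affine(src, dst):
    """Solve the six coefficients from >= 3 point pairs by least squares."""
    design = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    (a0, a1, a2), *_ = np.linalg.lstsq(design, dst[:, 0], rcond=None)
    (b0, b1, b2), *_ = np.linalg.lstsq(design, dst[:, 1], rcond=None)
    return np.array([a0, a1, a2, b0, b1, b2])

# Illustrative fiducial-style calibration: pixel corners mapped to film
# coordinates in millimeters (y flips because pixel rows grow downward).
src = np.array([[0, 0], [9000, 0], [9000, 9000], [0, 9000.0]])
dst = np.array([[-113, 113], [113, 113], [113, -113], [-113, -113.0]])
coeffs = fit_affine(src, dst)
print(np.allclose(affine(coeffs, src), dst))  # True
```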
calibration report
In aerial photography, the manufacturer of the camera specifies the interior orientation of each camera in the form of a certificate or report. Information includes focal length, principal point offset, radial lens distortion data, and fiducial mark coordinates.
Cartesian coordinate system
“A coordinate system consisting of intersecting straight lines called axes, in which the lines intersect at a common origin. Usually it is a 2-dimensional surface in which a ‘x, y’ coordinate defines each point location on the surface. The ‘x’ coordinate refers to the horizontal distance and the ‘y’ to vertical distance. Coordinates can be either positive or negative, depending on their relative position from the origin. In a 3-dimensional space, the system can also include a ‘z’ coordinate, representing height or depth. The relative measurement of distance, direction and area are constant throughout the surface of the system” (Natural Resources Canada 2001).
CCD
See charge-coupled device.
centroid
The point whose coordinates are the averages of the corresponding coordinates of the vertices of the polygon.
charge-coupled device
(CCD) A device in a digital camera that contains an array of cells that record the intensity associated with a ground feature or object.
coefficient
One number in a matrix, or a constant in a polynomial expression.
collinearity
A nonlinear mathematical model that photogrammetric triangulation is based upon. Collinearity equations describe the relationship among image coordinates, ground coordinates, and orientation parameters.
collinearity condition
The condition that specifies that the exposure station, ground point, and its corresponding image point location must all be positioned along a straight line.
contrast stretch
The process of reassigning a range of values to another range, usually employing a linear function. Contrast stretching is often used in displaying continuous raster layers, since the range of data file values is commonly much narrower than the range of brightness values available to the display device.
control point
A point with known coordinates in a coordinate system, expressed in the units (such as meters, feet, pixels, or film units) of the specified coordinate system.
control point extension
The process of converting tie points to control points. This technique requires the manual measurement of ground points on photos of overlapping areas. The ground coordinates associated with GCPs are then determined using photogrammetric techniques.
coordinate system
“A system, based on mathematical rules, used to measure horizontal and vertical distance on a surface, in order to identify the location of points by means of unique sets of numerical or angular values” (Natural Resources Canada 2001).
coplanarity condition
The coplanarity condition is used to calculate relative orientation. It uses an iterative least squares adjustment to estimate five parameters (By, Bz, omega, phi, and kappa). The parameters explain the difference in position and rotation between two images making up the image pair.
correlation
Regions of separate images are matched for the purposes of tie point or mass point collection.
image
A picture or representation of an object or scene on paper or a display screen. Remotely sensed images are digital representations of the earth.
image center
The center of an aerial photo or satellite scene.
image pair
Two overlapping oriented images. A set of two remotely-sensed images that overlap, providing a 3D view of the terrain in the overlap area.
image scale
(SI) Expresses the ratio between a distance in the image and the same distance on the ground.
image space
Events and variables associated with the camera or sensor as it acquired the images. The area between the perspective center and the image.
image space coordinate system
A coordinate system composed of the image coordinate system with the addition of a Z-axis defined along the focal axis.
image-to-earth association
The 3D mathematical relationship between an image and the earth’s surface.
inertial navigation system
INS
See inertial navigation system.
interior orientation
Describes the internal geometry of a camera, such as the focal length, principal point, lens distortion, and fiducial mark coordinates for aerial photographs.
International Society of Photogrammetry and Remote Sensing
(ISPRS) An organization “devoted to the development of international cooperation for the advancement of photogrammetry and remote sensing and their application” (ISPRS 2000). For more information, visit the Web site <http://www.isprs.org>.
ISPRS
See International Society of Photogrammetry and Remote Sensing.
kappa
In a rotation system, kappa is positive rotation around the Z-axis.
least squares adjustment
A technique by which the most probable values are computed for a measured or indirectly determined quantity based upon a set of observations. It is based on the mathematical laws of probability and provides a systematic method for computing unique values of coordinates and other elements in photogrammetry based on a large number of redundant measurements of different kinds and weights.
omega, phi, kappa
A rotation system that defines the orientation of a camera/sensor as it acquired an image. Omega, phi, kappa is used most commonly, where omega is positive rotation around the X-axis, phi is positive rotation around the Y-axis, and kappa is positive rotation around the Z-axis. This rotation system follows the right-hand rule.
optical axis
“The line joining the centers of curvature of the spherical surfaces of the lens” (Wolf and Dewitt 2000).
orientation
The position of the camera or satellite as it captured the image. Usually represented by six coordinates: X, Y, Z, omega, phi, and kappa.
ORIENTATION MANAGEMENT
(ORIMA) Software designed to process and produce data detailing orientation and triangulation, in addition to bundle adjustment and so on. Output files can be imported into Stereo Analyst for ArcGIS.
oriented image
A first-generation data product derived from imagery with a sensor model and spatial reference. Combining multiple oriented images allows for the creation of DTMs and collection of 3D features.
oriented image pair
An image pair with known interior (camera or sensor internal geometry) and exterior (camera or sensor position and orientation) orientation. The Y-parallax of an oriented image pair has been improved. Additionally, an oriented image pair has geometric and geographic information concerning the earth’s surface and a ground coordinate system. Features and measurements taken from an oriented image pair have X, Y, and Z coordinates.
ORIMA
See ORIENTATION MANAGEMENT.
orthocalibration
A form of calibration that corrects for terrain displacement and can be used if a DEM of the study area is available. Unlike orthorectification, this method depends upon a transformation matrix to resample on the fly, thus leaving the image file (data) unaffected.
orthocorrection
A form of geometric correction that uses a DEM and sensor position information to correct distortions resulting from earth curvature and the like. See also orthorectification.
orthorectification
The process of lessening geometric errors inherent within photography and imagery caused by terrain displacement, lens distortion, and the like. Then, the photography or imagery is resampled to a specified resolution. Also called orthoresampling.
overlap
In a traditional frame camera, when two images overlap, they share a common area. For example, in a strip of photographs taken along the flight path, adjacent images typically overlap by 60 percent. This measurement is sometimes called endlap. See also sidelap.
parallax
“The apparent angular displacement of an object as seen in an aerial photograph with respect to a point of reference or coordinate system. Parallax is caused by a difference in altitude or point of observation” (Natural Resources Canada 2001).
perspective center
(1) The optical center of a camera lens. (2) A point in the image coordinate system defined by the x and y coordinates of the principal point and the focal length of the sensor. (3) After triangulation, a point in the ground coordinate system that defines the sensor’s position relative to the ground.
phi
In a rotation system, phi is rotation around the Y-axis.
point
(1) A feature that has X, Y, and (sometimes) Z coordinates. A point can represent a feature such as a telephone pole. You can also collect multiple points to create a DEM or TIN. (2) In the case of defining the size of the 3D Floating Cursor used in the Stereo Window, a point equals a pixel.
point spacing
The distance between points sampled in terrain interpolation.
polygon
A set of closed line segments defining an area, composed of multiple vertices. Polygons can be used to represent features such as buildings, and can contain elevation values.
A scanner in which all scanning parts are fixed and scanning is accomplished by the forward motion of the scanner.
pyramid layer
An image layer that is successively reduced by a power of two and resampled. Pyramid layers enable large images to be displayed faster at any resolution.
radial lens distortion
Imaged points are distorted along radial lines from the principal point. Also referred to as symmetric lens distortion.
rational polynomial coefficients
Coefficients, generally supplied by the data provider, that detail the position of a satellite at the time of image capture.
raw image
An image that does not have any projection associated with it. Raw images serve as a record of features, relationships between features, processes, and information.
reference coordinate system
A system that defines the geometric characteristics associated with events occurring in object space.
reference plane
In a topocentric coordinate system, the tangential plane at the center of the image on the earth ellipsoid, on which the three perpendicular coordinate axes are defined.
regular block of photos
A rectangular block in which the number of photos in each strip is the same. This includes a single strip or a single image pair.
rendering
Drawing an image in a view at the scale indicated by the zoom in or zoom out factor.
resample
The process of extrapolating data file values for the pixels in a new grid when the image is rescaled or rotated.
right-hand rule
A convention in 3D coordinate systems (X, Y, Z) that determines the location of the positive Z-axis. If you place your right-hand fingers on the positive X-axis and curl your fingers toward the positive Y-axis, the direction your thumb is pointing is the positive Z-axis direction.
RMSE
See root mean square error.
root mean square error
(RMSE) Used to measure how well a specific, calculated solution fits the original data. For each observation of a phenomenon, a variation can be computed between the actual observation and a calculated value. (The method of obtaining a calculated value is application-specific.) Each variation is then squared. The sum of these squared values is divided by the number of observations, and then the square root is taken. This is the RMSE value.
rotation matrix
A three-by-three matrix used in the aerial triangulation functional model. Determines the relationship between the image space coordinate system and the ground space coordinate system.
rubber sheeting
A 2D rectification technique (to correct nonlinear distortions) that involves the application of a nonlinear rectification (second order or higher).
screen dot pitch
Screen dot pitch is the size of the pixels on the screen, measured horizontally in X and vertically in Y. The more accurate the screen dot pitch values are, the more accurate scale representations are on the screen.
self-calibration
A technique used in bundle block adjustment to determine internal sensor model information.
sensor
A device that gathers energy, converts it to a digital value, and presents it in a form suitable for obtaining information about the environment.
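The RMSE computation described in words above fits in a few lines; a minimal sketch:

```python
import math

# Root mean square error: square each deviation between observed and
# calculated values, average the squares, and take the square root.

def rmse(observed, calculated):
    """RMSE between paired observations and calculated values."""
    n = len(observed)
    return math.sqrt(sum((o - c) ** 2 for o, c in zip(observed, calculated)) / n)

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0 (perfect fit)
print(rmse([0.0, 0.0], [3.0, 4.0]))            # sqrt((9 + 16) / 2) ≈ 3.5355
```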
strip of images/photographs
In traditional frame camera photography, consists of images captured along a flight line, normally with an overlap of 60 percent for stereo coverage. All photos in the strip are assumed to be taken at approximately the same flying height and with a constant distance between exposure stations. Camera tilt relative to the vertical is assumed to be minimal. See also cross-strips.
support file
A SOCET SET® file containing photogrammetric metadata associated with an image in a project file.
tangential lens distortion
Distortion that occurs at right angles to the radial lines from the principal point.
Terrain Following Mode
A mode in which the 3D Floating Cursor follows the elevation of the terrain displayed in the Stereo Window. This is accomplished either by using an external elevation source, such as a DEM, or by image correlation techniques.
thinning tolerance
A measure that prevents duplicate points within a certain distance in terrain interpolation (such as 5 meters).
threshold
Threshold is used during image correlation as a measure of probability that a point is the same in both the left image and the right image of an image pair. A high threshold value increases the probability of a correct match, but may take longer to process. Setting a low threshold increases the probability of a false match.
tie point
A point whose ground coordinates are not known, yet which can be recognized visually in the overlap or sidelap area between two images.
TIN
See triangulated irregular network.
topocentric coordinate system
A coordinate system that has its origin at the center of the image on the earth ellipsoid. The three perpendicular coordinate axes are defined on a tangential plane at this center point. The X-axis is oriented eastward, the Y-axis northward, and the Z-axis is vertical to the reference plane (up).
transformation
A series of coefficients describing the 3D mathematical relationship between an image, the sensor that captured it, and the ground it has recorded.
triangulated irregular network
(TIN) A specific representation of a DTM in which elevation points can occur at irregular intervals, forming triangles.
triangulation
The process of establishing the geometry of the camera or sensor relative to objects on the earth’s surface. See also aerial triangulation.
vector
A point, line, or polygon. A vector is a one-dimensional matrix, having either one row (1 by j) or one column (i by 1). Vectors typically represent objects such as road networks, buildings, and geographic features such as contour lines.
vertex
A component of a feature, typically made up of three axes: X, Y, and (sometimes) Z. The Z component corresponds to the elevation of the vertex. A feature can be composed of only one vertex (such as a point in a TIN) or many vertices (such as a polyline or polygon).
References
Asher & Adams. 1976. Asher & Adams’ Pictorial Album of American Industry: 1876. New
York: Rutledge Books.
Keating, T. J., P. R. Wolf, and F. L. Scarpace. 1975. “An Improved Method of Digital Image Correlation.” Photogrammetric Engineering and Remote Sensing 41, no. 8: 993.
Konecny, G. 1994. “New Trends in Technology, and their Application: Photogrammetry and
Remote Sensing—From Analog to Digital.” Paper presented at Thirteenth United
Nations Regional Cartographic Conference for Asia and the Pacific, Beijing, China, May
1994.
Natural Resources Canada. “Carto Corner - Glossary of Cartographic Terms: GPS, Global Positioning System.” 13 Jul. 2001
<http://www.atlas.gc.ca/english/carto/cartoglos.html#4>.
Natural Resources Canada. “Carto Corner - Glossary of Cartographic Terms: parallax.” 13 Jul. 2001 <http://www.atlas.gc.ca/english/
carto/cartoglos.html#4>.
Wang, Z. 1990. Principles of Photogrammetry (with Remote Sensing). Beijing, China: Press of Wuhan Technical University of Surveying and Mapping, and Publishing House of Surveying and Mapping.
Wolf, Paul R., and Bon A. Dewitt. 2000. Elements of Photogrammetry with Applications in GIS. 3rd ed. New York: McGraw-Hill, Inc.