T0 - Space Syntax Methodology
A teaching guide for the MRes/MSc Space Syntax course (version 5),
Bartlett School of Architecture, UCL
CONTENTS 4
DEPTHMAP EXERCISE 16
Files available for you to use in the exercise 16
TASKS 16
Before you start 16
TASK 1: STEPS TO PERFORM CONVEX ANALYSIS IN DEPTHMAP 16
TASK 2: STEPS TO PERFORM AXIAL ANALYSIS IN DEPTHMAP 21
Notes 26
OBSERVATION TECHNIQUES 39
DATA ANALYSIS 49
SEGMENT ANALYSIS 71
DEPTHMAP EXERCISE 79
Files available for you to use in the exercise 79
Task 79
Steps to perform axial and segment analysis in Depthmap 79
Notes 84
DEPTHMAP EXERCISE 92
Files available for you to use in the exercise 92
Tasks 92
Steps to perform axial and segment analysis in Depthmap 92
Notes 98
AGENT ANALYSIS 99
REFERENCES 107
APPENDICES 112
APPENDIX 1 113
APPENDIX 2 116
A PRACTICAL GUIDE TO SPACE SYNTAX METHODOLOGY
Introduction
The same description might also apply on an urban scale. Cities are aggregates
of buildings held together by a network of spaces flowing in-between the blocks.
This network connects a set of street spaces that form together a discrete
structure. The structure is the optimum result of shortest paths from all origins
to all destinations in the spatial system. It is what holds it all together. It has an
architecture, and by this we mean a certain geometry and a certain topology,
that is, a certain pattern of connections (figure 1.2).
Buildings can have different structures that relate strongly to their functionality. Prisons are normally hierarchical, reinforcing power and control in the form of access and visibility relationships; prison cells appear at the very end of these hierarchies. By contrast, museums are made up of continuous spaces that follow some sort of narrative. These kinds of building organisation have recognisable spatial characteristics. To expose these spatial characteristics we use graph-based representations and measure their structural properties. The structural properties might then be indicative of how the social organisation functions.
On the urban scale, spatial structures can take an organic, uniform or deformed
shape. These universal types of urban grid vary in the way they interweave, connecting the parts to the whole structure. They emerge at different scales and, as a
result, have different geometric properties. Topological and geometric analysis
of urban grids using UCL Depthmap software helps us understand the configu-
rational structure of urban spaces and its potential impact on social behaviour
and economic activity (Hillier, 1996a).
[Figure: maps of the urban area in 1750, 1850 and 2000]
CONVEX AND AXIAL ANALYSIS
THEORETICAL BACKGROUND ON AXIAL AND CONVEX ANALYSIS
Introduction
Space Syntax starts by defining movement and occupation as the fundamental functions of a layout, where permeability of all spaces is the priority condition for a functioning layout structure. A proposed representation of a spatial structure might be interpreted either in a convex map or in an axial map [3]. An example of both types of representation is displayed in figure 2.1.

[3] See appendix 1 for a mathematical definition of the axial and convex network.
Figure 2.1. An example of how convex and axial representations are mapped on House at Creek Vean,
Team 4. Source: Sarah Parsons, MSc AAS student work 2007 @UCL
In axial representations, depth is identified as the change in direction between one axial line and another. Depth is topological; in other words, it has no geometric value. Theoretically, axial maps are fundamental syntactic representations because they reflect many structural properties of urban street networks, e.g. line lengths, intelligibility and synergy.
An example of how an urban area might be represented using the Space Syntax model is demonstrated in figure 2.2. The urban space (a) might be represented by the set of fewest, longest, walkable axial lines (b). The axial lines are then represented by a graph (c), and the Connectivity (degree) value of each vertex is highlighted: vertices that have more connections to their immediate neighbours have higher Connectivity values (d). These Connectivity values are then illuminated on the axial map to reveal the local network structure of street spaces (e).
Figure 2.2 The axial representation of Space Syntax. An urban space represented by the fewest and longest axial lines (b); axial lines represented by a graph (c); the graph Connectivity highlighted in (d & e).
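The Connectivity (degree) computation in steps (c) and (d) amounts to reading off each vertex's degree. A minimal Python sketch, on an invented toy adjacency list (the line labels are hypothetical, not taken from the figure):

```python
# Toy axial map as an adjacency list: each key is an axial line,
# each value is the set of lines it intersects. (Hypothetical labels.)
axial_graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}

def connectivity(graph):
    """Connectivity (degree): the number of immediate neighbours of each node."""
    return {node: len(neighbours) for node, neighbours in graph.items()}

print(connectivity(axial_graph))  # line "b" intersects three others, so it scores highest
```

Colouring the axial map by these values, as in panel (e), reveals the local network structure.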
Another syntactic representation of architectural space is the convex map. A discrete convex map represents adjacency relationships by reducing the spatial complexity of a layout to the fewest and fattest convex spaces [4]. In each convex space, all pairs of points are inter-visible. Spaces that are immediately adjacent have one step of depth in-between; spaces that have a minimum of one space separating them have two steps of depth in-between, and so on. In other words, depth between two spaces is defined as the least number of syntactic steps (or shortest topological distance) in a graph needed to move from one space to the other.

[4] Note that this is only one possible representation of convex spaces (see Peponis et al., 1997 for alternative descriptions of convex break-up).
We can attribute the value of topological depth to each node (vertex) in an adjacency graph GC. The graph GC consists of two sets of information: a set of vertices (representing convex spaces) VC = {vC1, vC2, …, vCn}, and a set of edges LC = {lC1, lC2, …, lCm}, where each edge in GC represents an adjacency relationship between one convex space and another. Spatial adjacency is the fundamental relationship that characterises how structures might be configured in a spatial layout. Two spaces, i and j, are considered adjacent in the dual graph GC when it is possible to access one space directly from the other, without passing through intervening spaces. The mathematical description of the network representation of convex spaces is similar to that of axial networks.
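Topological depth, as defined above, is just breadth-first search over the adjacency graph. A minimal Python sketch on an invented five-space convex map (the space numbers are hypothetical):

```python
from collections import deque

def topological_depth(graph, root):
    """Least number of syntactic steps from `root` to every other
    node in an adjacency graph, via breadth-first search."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neigh in graph[node]:
            if neigh not in depth:
                depth[neigh] = depth[node] + 1
                queue.append(neigh)
    return depth

# Hypothetical convex map: spaces 1-2-3-4 in a chain, with space 5 off space 2.
convex_graph = {1: {2}, 2: {1, 3, 5}, 3: {2, 4}, 4: {3}, 5: {2}}
print(topological_depth(convex_graph, 1))  # {1: 0, 2: 1, 3: 2, 5: 2, 4: 3}
```

Adjacent spaces get depth 1, spaces separated by one intervening space get depth 2, and so on, exactly as in the definition above.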
Figure 2.3 shows an example of decoding an architectural layout designed by Frank Gehry using the convex space representation. The architectural space (a) might be represented by the set of fewest and fattest convex spaces. These spaces are linked wherever there is direct access from one space to another, forming a convex map (b). The convex map is then represented by a graph (c), and the Connectivity (degree) value of each vertex is highlighted: vertices that have more connections to their immediate neighbours have higher Connectivity values (d). These Connectivity values are then illuminated on the convex map to reveal the spatial structure of the building organisation (e).
Figure 2.3 The convex representation of Space Syntax. An architectural space represented by the fewest and fattest convex spaces (b); convex spaces represented by a graph (c); the graph Connectivity highlighted in (d & e).
Spatial relations between adjacent spaces in a layout can be represented using the descriptive method of justified graphs [5], first presented by Hillier & Hanson (1984). A justified graph reads a spatial network of convex spaces from one space (the root) to all others, representing each convex space with a circle and each permeable connection between two spaces with a line, as in figure 2.4. From a root space, all spaces that are one syntactic step away are put on the first level above the root space, all spaces that are two steps away are placed on the second level, and so on. A justified graph might be deep or shallow depending on the relationship of the root space to the other spaces. Spatial relationships might form branching trees or looping rings.

[5] A justified graph could be constructed using the JASS or PAJEK tools.
A spatial relationship between two spaces might be 'symmetrical' if, for example, "A connects to B" is equal to "B connects to A"; otherwise the relationship is considered 'asymmetrical'. The total amount of asymmetry in a plan from any point relates to its mean depth from that point, measured by its 'relative asymmetry' (RA). Spaces that are, in sum, spatially closest to all other spaces (low RA) are the most integrated in a spatial network; they characteristically afford dense traffic through them due to their central position in the network. Spaces that sit in deeper locations (high RA) are the most segregated. Integration and segregation are global attributes of the spatial network.
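Mean depth and RA can be computed directly from such a graph. The sketch below uses the standard formula RA = 2(MD - 1) / (k - 2), where MD is the mean depth from the root and k is the number of spaces (Hillier & Hanson, 1984); the five-space layout is invented for illustration:

```python
from collections import deque

def mean_depth(graph, root):
    """Mean topological depth from `root` to all other spaces (BFS)."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neigh in graph[node]:
            if neigh not in depth:
                depth[neigh] = depth[node] + 1
                queue.append(neigh)
    return sum(depth.values()) / (len(depth) - 1)

def relative_asymmetry(graph, root):
    """RA = 2(MD - 1) / (k - 2): low RA = integrated, high RA = segregated."""
    k = len(graph)
    return 2 * (mean_depth(graph, root) - 1) / (k - 2)

# Hypothetical five-space layout: a chain 1-2-3-4 with space 5 off space 2.
g = {1: {2}, 2: {1, 3, 5}, 3: {2, 4}, 4: {3}, 5: {2}}
print(relative_asymmetry(g, 2))  # the shallowest space has the lowest RA
print(relative_asymmetry(g, 4))  # the dead-end space is far more segregated
```

Space 2 reaches every other space in one or two steps, so its RA is low (integrated); space 4 sits at the end of the chain, so its RA is high (segregated).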
Hillier (1996) differentiated between four types of spaces: a-type spaces, which have a single link (dead-ends); b-type spaces, which have more than one link but lie on a tree, so that any journey through them must return by the same route; c-type spaces, which lie on a single ring; and d-type spaces, which lie on two or more rings.
The positioning of a-, b-, c- and d-type spaces within the local and global settings of the whole network determines the overall spatial depth of a layout. A local increase in the number of a-type spaces and a global increase in d-type spaces would minimise spatial depth, creating an integrated system, while a global increase in b-type spaces and a local increase in c-type spaces are likely to lead to maximised depth, resulting in a segregated system.
Figure 2.4 Different spatial typologies marked on a graph representing the relationships between convex spaces. The two graphs elucidated here are: a justified graph laid out from the point connecting the exterior to the interior of Frank Gehry's house (a), and an adjacency graph overlaid on top of a convex map (b).
Grid axiality

Grid axiality = (2√I + 2) / L        (1)

where I is the number of islands and L is the number of axial lines. The results vary between 0 and 1, with high values approximating a regular grid and low values an axially deformed system.
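Reading the equation as grid axiality = (2√I + 2) / L (the grid axiality formula of Hillier & Hanson, 1984), a quick numerical check in Python, with invented island and line counts:

```python
import math

def grid_axiality(islands, lines):
    """Grid axiality = (2 * sqrt(I) + 2) / L  (equation 1)."""
    return (2 * math.sqrt(islands) + 2) / lines

# A perfectly regular grid: a 6 x 6 set of streets gives 12 axial lines
# and 25 islands, so the measure reaches 1. A deformed system with the
# same number of islands but many more axial lines scores much lower.
print(grid_axiality(25, 12))  # 1.0
print(grid_axiality(25, 60))  # 0.2
```

The measure rewards systems where a few long lines cover many blocks, which is exactly what a regular grid does.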
Syntactic measures
In this section we explain four main topological measures that quantify the structural, configurational properties of a spatial graph. The calculation might account for the neighbourhood size of each node in the graph; by neighbourhood we mean the nodes linked to each node within a certain graph distance, which might be topological, metric or angular. For axial and convex analysis we use a topological distance, calculated from each node, to define the radius within which the different measures are computed. Radius n is usually used to find measure values for each node in relation to the whole system. Radius 2 (sometimes called radius 3) is used to measure the relationship between each node and the neighbours located up to two steps away from it.
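The radius restriction amounts to truncating a breadth-first search at a given topological distance. A minimal sketch, with an invented five-line chain system:

```python
from collections import deque

def neighbourhood(graph, node, radius):
    """All nodes within `radius` topological steps of `node`.
    Passing radius float('inf') corresponds to radius n (the whole system)."""
    depth = {node: 0}
    queue = deque([node])
    while queue:
        current = queue.popleft()
        if depth[current] >= radius:
            continue  # do not expand beyond the radius
        for neigh in graph[current]:
            if neigh not in depth:
                depth[neigh] = depth[current] + 1
                queue.append(neigh)
    return set(depth)

# Hypothetical chain of five axial lines: 1-2-3-4-5.
g = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(neighbourhood(g, 1, 2))             # {1, 2, 3}: radius-2 neighbourhood
print(neighbourhood(g, 1, float("inf")))  # whole system: radius n
```

Any of the syntactic measures can then be computed over this restricted node set instead of the whole graph.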
During the last four decades, Space Syntax researchers have developed many measures (see appendix 1) for the purpose of explaining social behaviour; some of the most important ones are listed here:
The correlation between some of these measures might describe characteristic properties of layouts that relate to wayfinding (Conroy Dalton, 2000). Intelligibility, for example, is the correlation coefficient between axial Connectivity and axial global integration; it helps identify how easy it is for someone in a local position to comprehend the global structure. Synergy, also measured in axial analysis, is the relationship between smaller radii of integration (i.e. integration HH R2) and larger radii (i.e. integration HH Rn). A relationship between smaller and larger radii is illustrative of the relation between the parts and the whole in an urban system.
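Intelligibility, as defined here, is an ordinary Pearson correlation computed across the lines of the map. A self-contained sketch, with invented per-line Connectivity and integration values:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented values for six axial lines: Connectivity and global integration (HH Rn).
connectivity = [6, 4, 4, 3, 2, 1]
integration_rn = [1.9, 1.4, 1.3, 1.1, 0.8, 0.6]

# A high r means local Connectivity is a good guide to the global
# structure, i.e. the layout is intelligible.
print(round(pearson_r(connectivity, integration_rn), 3))
```

Synergy is computed the same way, substituting radius-2 integration for Connectivity.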
DEPTHMAP EXERCISE
Axial and Convex Analysis Using Depthmap
TASKS
The first task is to reduce an architectural layout to a convex map and analyse its graph properties using Depthmap.
The second task in this exercise is to reduce an urban layout to an axial map and analyse its topological configuration.
In this exercise you will learn to do the following activities in Depthmap in order to draw convex break-up maps manually and perform convex analysis.
2. Prepare your convex map
In order to draw the convex spaces you will have to check the polygon icon at the top of your MAP window. You can then draw each convex space by clicking with the left mouse button to identify every point that defines the boundary of the convex space, closing the polygon by clicking on the starting point. You can snap the end points to the layout by holding CTRL+SHIFT while you are drawing.
You can cancel a polygon while you are drawing it by clicking the right mouse button.
colours do not mean anything at this stage). For every convex space you will also have a Connectivity attribute.
select NEW MAP TYPE to be CONVEX MAP. The
default name is CONVEX MAP. Give your convex
break-up map a unique name.
5. Convex analysis
Segregation in the architectural layout might be an indication of a higher or lower degree of power, depending upon the social organisation that occupies that space (e.g. head office, prisoners).
In order to read the differences between high and low values of each measure more clearly, you can change the colour range. This is purely for the purpose of data visualisation and has no effect on the values of the elements themselves.
Go to WINDOW---COLOUR RANGE.
A window will appear.
Use the browser to move to Depthmap classic
as a banding range type.
Adjust the sliders in such a way as to find a
satisfactory representation of the data values
in the axial or convex map.
TYPE drop down menu and give your axial map
a unique name.
and line length calculated for it immediately after you have drawn it (the colours indicate which band of the value range of the lines drawn on screen each line falls within).
In order to reduce this map to the fewest lines
you need to go to TOOLS→ AXIAL/CONVEX/
PESH→ REDUCE TO FEWEST LINE MAP.
file called GALLERY-axial.dxf from the Depthmap workshop's Moodle Folder. You will find that in the samples folder within exercise 1. You will have it imported as a drawing map.
After you click OK you will see different measures added in the lower left corner of your screen. The most meaningful in the first instance are INTEGRATION (HH) and CONNECTIVITY.
Notes
As usual, try to follow the rules from The Social Logic of Space to draw each axial line. However, if you are more adventurous, you may want to think a little more about how to draw the axial map; see Turner et al. (2005), in particular section 4.1, for a discussion of the role of depth minimisation.
Always check the integrity of your axial map before piling straight into its analysis: check that your axial lines are properly connected to each other. You can go through the map looking at the number of connections (just hover the mouse over a line), or check that all lines are connected by performing a 'step depth' calculation. Both of these methods can be found in the Depthmap axial analysis tutorial on the UCL Bartlett website.
Similar to what we have done with convex space linking/unlinking, you may need to unlink lines where it is not possible to get from one road to another, for example where there is a bridge. Go through the axial analysis tutorial to find out how to unlink lines, and unlink where necessary.
ISOVIST AND VGA ANALYSIS
THEORETICAL BACKGROUND ON ISOVIST AND VGA ANALYSIS
Introduction
Figure 4.1. An isovist field for the corridor space in Maggie
Edmond & Peter Corrigan Athan House 1989. Source: MSc
AAS student work 2007 @UCL
In this section, we explain three main topological measures for the visibility graph that give a high-resolution picture of the spatial configuration of a layout. The measures depend on the neighbourhood size. We explain here the local measure of clustering coefficient, the global measure of integration and the local measure of control.
Clustering coefficient is derived from the local configuration of each node and calculates the degree to which the nodes that are visible from one location are themselves inter-visible. Clustering coefficient is indicative of how much one loses in terms of visual information when moving from one location to another. Isovists that are closer to convex retain a high clustering coefficient, hence little visual information is lost when moving from these locations. Contrary to convex isovists, spiky ones correspond to a low clustering coefficient, hence more visual information is lost when moving away from these locations. Understanding these properties is vital for illuminating the relationship between navigation and wayfinding types of movement and how visual information changes in the system. For example, spaces that have a low clustering coefficient tend to correspond to locations where pedestrians make decisions on directions. The clustering coefficient might be representative of convexity in a layout, by illuminating how 'self-contained' visual information is in an isovist field. The measure also exposes how disruptive certain objects are to the visual perception of a layout.
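For a visibility graph stored as an adjacency list, the clustering coefficient of a node is the fraction of pairs of its neighbours that are themselves inter-visible. A minimal sketch on an invented four-node graph (this is an illustration of the general measure, not Depthmap's implementation):

```python
def clustering_coefficient(graph, node):
    """Fraction of pairs of this node's neighbours that are inter-visible:
    C = edges_among_neighbours / (k * (k - 1) / 2)."""
    neigh = graph[node]
    k = len(neigh)
    if k < 2:
        return 1.0  # a neighbourhood of one node is trivially self-contained
    # Count each inter-neighbour edge once (it is found from both ends).
    links = sum(1 for a in neigh for b in graph[a] if b in neigh) / 2
    return links / (k * (k - 1) / 2)

# Hypothetical visibility graph: from "x" you can see a, b and c,
# and a, b, c all see each other, so the neighbourhood is fully
# inter-visible and the clustering coefficient is 1.
vg = {"x": {"a", "b", "c"},
      "a": {"x", "b", "c"},
      "b": {"x", "a", "c"},
      "c": {"x", "a", "b"}}
print(clustering_coefficient(vg, "x"))  # 1.0
```

A spiky isovist would show up here as a neighbourhood whose members mostly cannot see one another, giving a value close to 0.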
Control calculates the area of the current neighbourhood with respect to the
total area of the immediately adjoining neighbourhood. Control is useful for
highlighting areas where observers can have a large view of the spatial layout.
DEPTHMAP EXERCISE
Isovist and VGA Analysis Using Depthmap
TASK
The task is to produce isovists and visibility graph analysis (VGA) to analyse the visual configuration of the layouts.
In this exercise you will learn to do the following activities in Depthmap in order
to perform isovist and VGA analysis.
3. Create isovists
You can choose to have a quarter isovist, a third isovist, a half isovist or a full isovist. Choose one of these options and an isovist will be created at the point you have located.
Using the line icon on your tool bar draw
the lines that define the path on your layout.
Make sure that you draw the lines following
one direction and make sure that the lines
intersect at their ends.
4. Prepare your graph for calculating visibility properties
Up to this stage you were dealing with the direct layout boundary to produce basic and localised visual properties of it. In order to analyse visibility relationships at a complex and global level you will have to use the visibility graph analysis model [10].

[10] The model was developed by Alasdair Turner and implemented by him in UCL Depthmap. It aims to calculate visibility relationships similar to those of Space Syntax, basing them on a finer level of representation: the scale of the human body, represented by a grid unit. Connections are then made on the basis of inter-visibility between two units in the grid that fills an area within a predefined boundary.
You can also try the context fill from the drop-down list next to the filling tool, although this tool is designed specifically for low-resolution grids.
You can also fill in grid spaces if you have a very precise pattern that you want to analyse. You can do that using the pencil tool in your toolbar. You can also block grid spaces by right-clicking them.
After you are done defining the spaces
you want to analyse, go to TOOLS→
VISIBILITY→MAKE VISIBILITY GRAPH.
you will be ready to analyse visibility relationships in your graph. You can click the selection tool, but you will still be able to see the green linking lines in the background of your visibility graph. If these lines disappear after a while, don't worry; you can always check them again when you switch to the joining mode.
If you choose to analyse isovist properties you will find that several isovist measures appear on the lower left-hand side. Some of these are physical properties that we have introduced previously. Other measures include compactness, drift, radial and occlusivity. An explanation of these measures is in the previous section.
6. Changing your colour range
Go to WINDOW---COLOUR RANGE.
OBSERVATION TECHNIQUES
INTRODUCTION TO OBSERVATION TECHNIQUES [12]

[12] This section on observations is adapted from a manual written for the Space Syntax Laboratory by Tad Grajewski in 1992 and rewritten by Laura Vaughan in 2001.

Introduction
This chapter addresses the description of field observations of pedestrian flow and static activities in the urban environment. Before designing and conducting observations, field visits are normally organised to build a preliminary understanding of the site conditions and settings, marking key functions or land uses in the layout and making initial decisions on where to allocate observation areas. For the purpose of constructing a quantitative description of movement behaviour in the public realm, consistent and well-structured observations on site are usually designed to measure real movement and occupational behaviour and to test spatial predictions. In the sections that follow, we explain the observation methods and how these observations are conducted on site. Observations are normally allocated to certain locations to ensure comprehensive coverage of the movement and occupation activities within the target areas. The methodologies explained here can only act as guidance; special considerations can be made for the particularities of each research or design project.
Why do we observe?
We observe in order to see how much we can learn about the environment without taking account of people's intentions. If asked, people normally say why they are going somewhere, not how they plan to get there or what they are going to do on their way. The collective activity within buildings or urban contexts gives rise to a pattern of use and movement that is independent of the intentions of individuals. Observations allow you to retrieve something that might be considered an objective view of human behaviour in the built environment. However, in doing observations some precautionary measures should be taken, especially when an observer is regarded as a participant in the social sphere that occupies the spatial scene.
1. Gate counts
The method must be applied with rigour and consistency at many locations.
In designing the research, observers should account for what is relevant to their research question and how the site is used. They should also note that in the UK, Fridays, Saturdays and Sundays tend to be different. So before starting their research they should think about how often and when they should conduct observations, and whether weekends are relevant at all.
Figure 6.1. Position of the observer in relation to the observed gate. The observer counts pedestrian movement that crosses an imaginary line (adapted from Laura Vaughan).
Figure 6.2. A sample of the gate counts table marking gate number, time of the observation task, age category, and a tally count of the number of people passing by the gate. Special notes were also recorded in relation to weather, use of IT equipment, and special site-related conditions such as the closure of an underground station, etc. Source: Screens in the Wild @UCL.
Figure 6.3. A map showing the average number of pedestrians per hour observed on a weekday crossing specific cordon lines in Covent Garden, London. Pedestrian flows are categorised to recognise tourists, locals and workers dressed in suits. Source: Space Syntax Ltd. 2007.
Figure 6.4. A map showing the average number of pedestrians per hour
observed at the weekend crossing specific cordon lines in Covent Garden,
London. Source: Space Syntax Ltd. 2007.
2. Static snapshots
Normally, static snapshots are conducted to record the use pattern of spaces
within buildings or public spaces in an urban context. The method is useful for
comparing static activities (standing, sitting) and movement. By tracking and
mapping these activities in time we may outline the patterns of space use in an
area and spot the locations where more potential interaction takes place natu-
rally. In general, snapshots might be comparable to a photograph taken from
above showing one moment of activities mapped onto the floor plan. They are
usually taken at consistent intervals during the day, to provide an objective view
of the invariant patterns of activity as well as different and peculiar behaviour
throughout the day.
Figure 6.5. Movement traces and static activities are drawn on a 1:50 plan of the target area.
Notes on behavioural patterns and special features and conditions are recorded. Source: Screens in the Wild @UCL.
Movement traces enable tracking and mapping of the collective flow dynamics through a predefined area. They help in understanding movement patterns and where people are likely to enter or exit the area (see figure 6.6). Observers might also be able to outline islands where no movement traffic is recorded. Similar to snapshots, target areas are normally chosen to have a convex layout that is easy to observe. The observers position themselves in locations that maximise their view of the layout and record movement for 5 minutes at several time intervals throughout the day. They are encouraged to use coloured pens to mark different categories on the layout.
Observations are usually conducted on site to empirically track and map human behaviour. They are mainly directed at testing the spatial models we derived earlier from the visual configurations of the urban layouts.
Where a correspondence between observation and space exists, it serves to validate and support our assumptions on the role of spatial visibility and access in promoting certain spaces as more hospitable for human encounter and interaction. Where there is less correspondence, further investigation is needed to identify any external attractors or outliers in the environment.
4. Traces (People-Following)
For the purpose of tracing, we first use a plan of the whole area of interest. In urban contexts it is useful to arrange the plan so that the pick-up point is at the centre.
To rule out bias in the reading of movement behaviour, observers should pick
up people randomly as they start a journey from a predefined point of origin
and follow them tracing their route. The tracing might be stopped either when
people leave the area of interest, reach a pre-defined destination, or after a
fixed period of time (e.g. ten minutes). It is important to be discreet in this
process – people should not become aware that someone is following them. It
is always good to account for a mix of people (age, gender, other categories of
interest) and note details for each trace. Tracing is a very useful technique when
comparing movement behaviour from a particular point of origin in a layout
(an entrance). It is usually used to display visual comparisons between spatial
analysis (VGA) and movement traces (see figure 6.7).
Figure 6.7. A comparison between agent traces (left) and observed movement traces (right), drawn
on the Tate Modern layout space. Source: Alasdair Turner@UCL.
5. Ethnographic Observations
Observation tools
DATA ANALYSIS
INTRODUCTION TO DATA AND STATISTICS [13]

[13] This section was originally prepared by Alan Penn. Parts of it are adapted from a paper produced by the Cathie Marsh Centre for Census and Survey Research at the University of Manchester. Other material is adapted from JMP and the Statview manual 'Using Statview', Abacus Concepts Inc, Berkeley, California.

Introduction
Data are collections of information. Traditionally in Space Syntax, data are used to understand and test the relationship between space and society. Data can take any form, for example interview transcripts and field notes. The type of data collected will influence the sort of analysis that can be used to interpret them, and academic disciplines develop research methods to reflect this.

Aspects of data
A key distinction made in social research is that between quantitative and qualitative methods. Although these methods have much in common, they differ in the sorts of data that are collected and the techniques applied to understand them.
Qualitative data are descriptive data, which can be collected by means of in-depth interviews, field observations or other sources such as newspapers. The data are generally quite detailed and aim to capture motives, understandings, feelings and social processes, particularly at the small scale. Researchers using this sort of data are usually attempting to explain social behaviour within a set context, and rarely attempt to make claims about behaviour outside of that context.
One very important type of quantitative data is that which contains a standard
amount of information about a number of different things or people. This is
the sort of information that is gathered by a survey; interviewers ask the same
questions of a large number of people so that they can investigate the extent to
which individuals’ responses differ, and whether there are discernible patterns.
This sort of data is generally (although not necessarily) collected from a repre-
sentative sample of individuals so that conclusions can be applied to a wider
population; a common way to choose individuals who are representative is to
take a random sample. When data is produced in this way it is appropriate to use
statistical methods and conclusions subsequently drawn are said to be ‘general-
isable’ to a specified population.
Quantitative data comes in different varieties. The extent to which each is used
and the way in which they are classified varies from discipline to discipline.
Some key distinctions are as follows:
Primary vs. Secondary Data
Primary data are data that have been specifically collected for a particular study,
either via a survey or other means (including interviews and observation).
Secondary data are those which have been collected by some other person or
organisation, but which can be re-analysed for other purposes.
There are advantages and disadvantages to each approach; primary analysis
allows researchers an in-depth understanding of how the data were collected,
but is expensive and time consuming.
Secondary data are generally cheaper to use and permit researchers to explore
very large datasets. However, the usefulness of the secondary data is limited by
the secondary analyst’s ability to identify a dataset that is fit for purpose and to
understand how the data was collected and what the values mean. Key sources
of secondary data include the large government datasets (e.g. the General
Household Survey, Farm Business Survey, British Household Panel Study and
Labour Force Survey) and the Census.
If one is interested in studying all the people living in the UK, it is very easy to define what the population is: it is simply the population of the UK!
When researchers design a research project they decide which set of people they
are interested in, so for example, a researcher interested in patterns of travel
to work may be interested in every individual aged over 18, or every individual
between 18 and retirement age.
Every single individual in that set of people (or households or businesses... etc.)
collectively constitute the population of interest. So if, for example, we were
interested in studying patterns of travel to work within central London, our
population would be everyone currently working in central London. If we were a governmental institute interested in discovering the transportation needs of London's commuters, we would want to ask this population a set of questions about their travel patterns and anticipated needs.
However, when we are collecting data we only very rarely try to collect informa-
tion from everyone in the population, it is usually too costly and too difficult. We
know that if we take a representative sample of the population we can cut our
costs, but obtain reliable estimates of the true characteristics of the population
as a whole.
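The point about sampling is easy to demonstrate numerically. The sketch below draws an invented 'population' of commute times and shows that a random sample of 400 already estimates the population mean closely:

```python
import random

# Hypothetical population: commute times (minutes) for 10,000 workers.
random.seed(42)  # fixed seed so the demonstration is repeatable
population = [random.gauss(35, 10) for _ in range(10_000)]

true_mean = sum(population) / len(population)

# Surveying a random sample of 400 costs a fraction of surveying
# everyone, yet gives a reliable estimate of the population mean.
sample = random.sample(population, 400)
sample_mean = sum(sample) / len(sample)

print(round(true_mean, 1), round(sample_mean, 1))
```

The quality of the estimate depends on the sample being drawn at random; a convenience sample (e.g. only morning commuters) would not enjoy this guarantee.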
Surveys obtain data for only a sample of the population. The extent to which the
sample is representative will depend on the sampling method chosen. There are
standard tables for determining a statistically significant sample; see the
appendix to this section.
Unless you collected your data yourself, it is quite possible to be mistaken about
its nature; you should do some research to find out where the data came from,
how it was collected, what it was collected for and how the data was coded.
The answers to these questions will be helpful in determining how useful the
dataset is likely to be for your own research.
Statistical Methods
- testing hypotheses.
Some techniques, especially the first two listed above, are designed for exploring
and describing data, and characteristics of the individuals from whom data was
collected. This is understandably known as ‘exploratory data analysis’. Others,
particularly the second pair in the list, are designed to confirm conclusions
drawn using exploratory analysis or to test theories.
Variables
Variables may be categorical or continuous (the former being the more common
in social survey or census data).
Variables in which the values can be understood as categories are called categor-
ical variables. All the values of the variable are defined and given a value label to
indicate the meaning of the value.
Categorical values have particular qualities. Most importantly, the numerical
values in the data, because they represent categories, have no arithmetic significance.
It would be total nonsense to claim that, for example:
1 shopping street + 2 residential streets = 1 office
Categorical values can have a natural order. For example, the class classifications
given by Booth, which amount to 7 categories, from ‘1-vicious lower class’ to ‘7-
upper class’, have a natural order from lowest to highest class.
If you were categorising streets according to these values, you would normally
have the lowest number for the lowest class, but this does not imply that the
difference between being a member of the lowest class and of the second
lowest class is the same as the difference between the two top classes.
The easiest way to summarise the values for a categorical variable is to look at
the frequencies.
A statistical package will produce a table of all valid values with a tabulation of
the number of times the value occurs in the data (see table 7.1).
In JMP you need to select Distribution; in the new window you will need to
select the column for which you want to display the frequency of data, here the
number of elements that have segment integration values falling within certain
bands.
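Outside JMP, the same frequency tabulation can be sketched in a few lines of Python; the street-type codes and labels below are invented for illustration:

```python
from collections import Counter

# Hypothetical street-type codes: 1 = shopping, 2 = residential, 3 = office
street_types = [1, 2, 2, 3, 1, 2, 2, 1, 3, 2]
labels = {1: "shopping", 2: "residential", 3: "office"}

# Tabulate how many times each valid value occurs in the data
freq = Counter(street_types)
for code in sorted(freq):
    print(f"{code} ({labels[code]}): {freq[code]}")
```

A statistical package produces exactly this kind of table of valid values and counts, usually with percentages alongside.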
Table 7.1. This histogram shows the statistical distribution of segment integration values (R 2000 metric) for Barcelona 1714. Source: Kinda Al Sayed@UCL.
In contrast, continuous variables have valid values that fall between a minimum
and maximum and any value between the two is possible. The minimum and
maximum do not have to be defined.
Age is generally measured as a continuous variable. The youngest age possible
is zero years; the oldest person ever recorded at the time of writing was 122. It is
possible for a person's age to fall anywhere between these extremes. Where, as
with age, a continuous variable has a meaningful zero point the variable is said to
be a ‘scalar’ or ‘ratio’ variable. You can do multiplication and division sums with
these variables as well as addition and subtraction. Being 0 years old means that
you've only just arrived on the planet; being 4 years old means that you're twice
as old as a 2-year-old.
If the exact date of birth had been recorded then age could be defined in exact
days and months. But if we look at year of birth instead of age in years the variable has different characteristics. The 2-year-old and 4-year-old given in the
example above could have been born in 1989 and 1987 respectively. The
difference between 1989 and 1987 is still 2, so addition and subtraction can still
be done, but multiplication and division cannot. The zero of the year scale relates
to a date in the Christian calendar rather than to a total absence of time or age,
so we say that the zero is arbitrary. Only when there is a real and meaningful
zero point is it possible to do multiplication and division calculations with the
values.
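The distinction can be made concrete in a few lines of Python, using the ages and years from the example above:

```python
# Age is a ratio variable: zero means a total absence of age,
# so ratios are meaningful.
age_a, age_b = 4, 2
assert age_a / age_b == 2.0  # a 4-year-old is twice as old as a 2-year-old

# Year of birth is an interval variable: the zero is arbitrary,
# so only differences are meaningful.
year_a, year_b = 1987, 1989
age_gap = year_b - year_a    # 2 years: a valid operation
ratio = year_a / year_b      # ~0.999: says nothing about relative age
print(age_gap, round(ratio, 3))
```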
Variable Types
Table 7.2.
Looking at variables
A single image can convey some very complex observations about the character-
istics of a dataset. While you will normally be expected to produce commentary
and analysis as well as data summaries, a well thought out and well-presented
graphic can be more quickly assimilated than a complex table or commentary.
Graphics are also a very helpful means of understanding the structure of a
dataset and therefore form an important part of exploratory data analysis.
Computer packages often require only a few clicks to produce graphics that are
pleasing to the eye and professional looking. But perhaps the most important
stage in producing any graphic or other statistical output is deciding which tech-
nique to use and why. The decision that you make will depend on your research
question.
DATA ANALYSIS
You are likely to have a variety of types of questions about your data:
What?
4. How do the ages of person types vary? I.e. are men’s and
women’s age distributions different?
Why?
1. Data types and analytical results (graphics)
- Pie Charts
- Bar charts
In general: you might want to lock the scale to
compare different samples for which the same
measure has been taken (e.g. control for a
whole map and a sub area of the map).
The techniques described above are those suited to variables with a small
number of discrete values. If continuous data were used in either a bar chart
or pie chart you would obtain a bar or slice for each value and the important
characteristics of the data would get lost in the (cramped) detail. To get around
this problem, there are other techniques which either group the values or which
summarise the characteristics of the variable, rather than simply counting the
frequencies for each value. These include histograms and box plots. We will only
describe Histograms here.
- Histograms
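As a rough illustration of the idea behind a histogram, the Python sketch below groups continuous values into equal-width bands and counts the frequency in each band; the integration values are invented:

```python
# Group continuous values into equal-width bands and count the
# frequency in each band, which is what a histogram displays.
values = [0.31, 0.45, 0.52, 0.58, 0.61, 0.64, 0.70, 0.73, 0.88, 0.95]
bin_width = 0.2

bins = {}
for v in values:
    lo = int(v / bin_width) * bin_width
    band = (round(lo, 2), round(lo + bin_width, 2))
    bins[band] = bins.get(band, 0) + 1

# A crude text rendering of the resulting histogram
for (lo, hi), count in sorted(bins.items()):
    print(f"{lo:.1f}-{hi:.1f}: {'#' * count} ({count})")
```

Banding in this way keeps the shape of the distribution visible where a bar chart of every distinct value would not.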
Sometimes a research question requires you to look at more than one variable,
often to see if there is a relationship between the two. Your questions might
include: Is there a relationship between radius n and radius 3 integration?
Figure 7.1. This scattergram shows an R-squared value of 0.56 between segment integration radius 2000 metric and segment integration radius n for Barcelona 1714. The Bivariate Fit is produced using JMP software.
Figure 7.2. The regression here is similar to figure 7.1. UCL Depthmap is used to produce the correlation scattergram. By selecting the group of elements that have the highest correspondence (closest to the regression line) you can see where they are placed on the map. Source: Kinda Al Sayed@UCL.
There are many other ways of ‘interrogating’ scattergrams. First, you should
always check the p-value, to see the probability that the result occurred by chance.
Second, look at outliers: which values are under-performing or over-performing
(not following the regression line)? Does their exclusion create a better correspondence? Only exclude if there is a logic to it, such as excluding one-connected
streets, and always mention that you have done this in the text.
Why do you think these values are not performing like the others? Is your data
accurate? Is there an anomaly in the results? You may discover new things
about your data through this process.
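For readers working outside JMP or Depthmap, the R-squared of a simple regression can be computed directly; the sketch below uses invented integration values, not the Barcelona data:

```python
import statistics as st

def r_squared(xs, ys):
    """R-squared of a simple linear fit, via Pearson's correlation."""
    mx, my = st.mean(xs), st.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    r = cov / (sx * sy)
    return r * r

# Invented values standing in for two radii of the integration measure:
rad_2000 = [0.8, 1.1, 1.4, 1.9, 2.2]
rad_n = [0.7, 1.0, 1.6, 1.8, 2.3]
print(round(r_squared(rad_2000, rad_n), 3))
```

Re-running the calculation with a suspected outlier removed (and saying so in the text) is one way to test whether a single element is dragging the correspondence down.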
5. T-Tests
A t-test works by comparing the average value of a group or sample with the
average value of the population as a whole, and asking how likely it is that the
average of the smaller sample would have been arrived at by chance. The degree
to which the two averages differ is indicated by a t-value where a high number
(positive or negative) indicates greater difference – this expresses the difference
between the mean and the hypothesized value in terms of the standard error.
The probability that this could have happened by chance is indicated by the
p-value: the smaller the number, the less likely the result is to have occurred by
chance and the greater the significance of the result. Probabilities of less than .05
are generally considered to be statistically significant; p-values close to 1 mean
that it is very likely that the hypothesised and sample means are the same.
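The t-value itself is straightforward to compute; the gate counts below are hypothetical, and the resulting t would still need to be compared against the t-distribution (in a statistics package, or from tables) to obtain the p-value:

```python
import statistics as st

def one_sample_t(sample, hypothesised_mean):
    """t = (sample mean - hypothesised mean) / standard error of the mean."""
    n = len(sample)
    standard_error = st.stdev(sample) / n ** 0.5
    return (st.mean(sample) - hypothesised_mean) / standard_error

# Hypothetical counts at five observation gates, tested against a
# hypothesised population mean of 2:
t = one_sample_t([1, 2, 3, 4, 5], 2)
print(round(t, 3))  # compare against the t-distribution with n - 1 = 4 df
```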
APPENDIX: GUIDE TO DETERMINING QUESTIONNAIRE SAMPLE SIZES
Table of sample sizes for different sizes of population at a 95% level of certainty
(assuming data are collected from all cases in the sample)14
MARGIN OF ERROR
Population 5% 3% 2% 1%
50 44 48 49 50
100 79 91 96 99
150 108 132 141 148
200 132 168 185 196
250 151 203 226 244
300 168 234 267 291
400 196 291 343 384
500 217 340 414 475
750 254 440 571 696
1000 278 516 706 906
2000 322 696 1091 1655
5000 357 879 1622 3288
10000 370 964 1936 4899
100000 383 1056 2345 8762
1000000 384 1066 2395 9513
10000000 384 1067 2400 9595
14 This table is based on a document provided to Laura Vaughan by RM Consultants.
What is a “margin of error”? If you remember that a sample of a population is
meant to reflect the characteristics of the population as a whole, the margin
reflects the percentage by which the results are likely to deviate (plus/minus)
from those characteristics. Thus, if your survey shows that 14% of the visitors to
Camden Market who responded to the survey are from abroad, you would
expect that 14% of the entire population of visitors would also be from abroad. If
your sample was within the 2% margin, you would know that the percentage of
visitors from abroad would range, at most, between 12% and 16%.
The higher the margin, the more built-in error there is likely to be in your results,
so a 2% margin is better than a 4% margin. The higher the population size, the
smaller the change in sample size needed to obtain a good margin of error. For
example, if your population is somewhere between 100 000 and 1 000 000,
the change in sample size required to obtain a 2% margin of error for the larger
population is only 50 more people.
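The table above can be reproduced (to within rounding) from the standard sample-size formula at a 95% level of certainty, with the finite-population correction applied. A minimal Python sketch:

```python
def sample_size(population, margin, z=1.96, p=0.5):
    """Sample size at 95% certainty (z = 1.96) for a given margin of
    error, with the finite-population correction applied."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return round(n0 / (1 + n0 / population))

print(sample_size(100, 0.05))        # 79, matching the table
print(sample_size(1_000_000, 0.02))  # 2395, matching the table
```

Here p = 0.5 is the conservative assumption that each answer is a 50/50 split, which maximises the required sample.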
Note that this table assumes, in the case of a questionnaire, that you obtain
answers from all people asked. If you have non-responses, you need to find extra
respondents to reach the required sample size. It is also important to understand the cause of non-responses: are they due to refusal to respond (typical of
street surveys); due to ineligibility to respond (respondents can't speak good
enough English or don't fit the profile you're seeking); or due to failure to
contact the respondent (when you're making house-to-house surveys, for example).
DEPTHMAP EXERCISE
Data Analysis Using Depthmap
TASK
In this exercise, spatial and visual data will be used to find correlations between
different visual and spatial measures. This will help us understand the nature
of the measures we have learned so far. Following that, we will search for
other types of data and enter these data into the Depthmap table. We will then
analyse correspondences between space and other variables.
In this exercise you will learn to do the following activities in Depthmap in order
to find correlations between space-to-space and vision-to-vision data.
Ideally, what is required for analysis is a previously processed Depthmap file that
contains some analysed spatial and visual data. We will use the file that was
produced in the last workshop.
4. Looking for values rather than colours
Until now we have been dealing with maps that render a colour spectrum. We
were also able to change the colour range to suit the representation that we need
to see. Whether with spatial or visual analysis, this allowed us to find out about
higher and lower values and how they distribute throughout the layout. If we
are to build models that rely less on inference from these maps, we need to go
further and look at the values themselves. There are several ways in which to
check the values of elements.
Another way of looking for values is to go to
ATTRIBUTES→ COLUMN PROPERTIES.
You can export this table as a .txt or .csv
file if you intend to perform more advanced
statistical analysis using specialised statistical
packages.
5. Pushing values
numbers aggregate to a very high value and
might increase exponentially. For that reason
we may need to take the logarithm of the
column. We will learn about that later. We can
tick the box that allows for recording
intersection counts just to have an idea of how
many objects from the origin map intersect
with destination objects.
You can export the scatter Plot by going to
EDIT→COPY SCREEN.
6. Modelling correlations
are averaged out for the day and multiplied by an hourly rate. In this case, and
because we did not count the number of people per gate, we are going to use
another type of data.
ments on the screen; by selecting them, the
boxes in their rows will be checked. You can
edit or enter a value in a certain column by
clicking on it once, typing the new value, and
clicking ENTER. At the moment each element
in the new column is assigned a value of -1.
You can now enter the values of Twitter rates
into the new column you have created. After
you are done entering the values, click on the
Twitter rate attribute; you will see the street
spaces assigned colours according to the
Twitter rates you have entered.
tered together and others floating away from
the regression line. You might hope that these
data will present a strong correlation with
their associated spatial attributes.
click OK.
SEGMENT
ANALYSIS
THEORETICAL BACKGROUND ON SEGMENT ANALYSIS
Introduction
In this chapter, we will explain a new syntactic representation of cities which applies
to both topological and geometric configurations of space. This representation is at
the level of street segments, considering their topological, metric and angular
connections. Along with other software, UCL Depthmap has the capacity to produce
different types of segment analysis using different radii. We are going to learn about
segment analysis, what it can do, how powerful it is as a tool for predicting social
and economic activity, and the different scales of measurement and graph distance
associated with it. We are also going to list the reasons to shift from axial analysis to
segment analysis, and how the measurements differ in this case. We will highlight
the most powerful tool for measuring accessibility in street networks: angular
depth with metric radius. Using this type of graph representation, we will calculate
integration and choice to measure accessibility and compare configurational properties of space with observed urban activity. These two measures, integration and
choice, will be used to identify to-movement and through-movement potentials.
Other measures, such as metric and topological analysis of segment maps, will be
explained in a later chapter.
It is important to emphasise that the metric radius of the measure refers to the
metric distance from each segment along all the available streets and roads from
that segment up to the radius distance. Following this definition, radius ‘n’ means
that each segment is related to every other segment in a city without any radius
restriction. This pattern changes as we reduce the radius of the measure. So if we
are to consider a metric radius as a “cookie cutter” in a network of nodes and con-
nections, resembling a segment map with segments and points of intersections, the
system will be analysed around that particular node or segment. In this case, a
radius of 400 metres (approximately 5 minutes' walking distance) will only calculate
angular turns of all nodes within 400m of the current node; any nodes beyond that
radius will not be calculated. This means that the system will only identify the local
relationships between segment elements within 400 metres along the neighbouring
segment lines, starting from each one of them.
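The "cookie cutter" idea can be sketched as a radius-restricted shortest-path search. The toy network, names and lengths below are invented; this illustrates the principle rather than Depthmap's implementation:

```python
import heapq

# A toy street network: edges carry metric length in metres.
edges = {
    "A": [("B", 150), ("C", 300)],
    "B": [("A", 150), ("D", 200)],
    "C": [("A", 300), ("D", 250)],
    "D": [("B", 200), ("C", 250), ("E", 500)],
    "E": [("D", 500)],
}

def within_radius(graph, origin, radius):
    """Metric shortest-path distances from origin, keeping only the
    nodes reachable within the given radius (in metres)."""
    dist = {origin: 0}
    queue = [(0, origin)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue
        for neighbour, length in graph[node]:
            nd = d + length
            if nd <= radius and nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

print(within_radius(edges, "A", 400))  # E (850 m away) is excluded
```

Raising the radius admits more of the network into each element's "cookie", until at radius n every element sees every other.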
Deciding on the minimum and maximum radius in an analysis is not restricted
to certain norms. Before beginning an analysis, a set of fundamental research
questions need to be posed depending on the nature of the investigation: what
radius measure would best correlate with block size parameters, segment length,
land values, Twitter activity, pollution rates or observed patterns of pedestrian and
vehicular movement within a certain urban area? Local movement is normally
best represented by a local radius measure – 800 metres, which is equivalent to a
10-minute walk. Market areas with a finer-scale grid pattern are better represented
at a lower radius such as 400 metres. Higher radius measures are then needed to
represent vehicular movement flows.
A segment map is a broken-up representation of an axial map, where segments are
the inter-junction lines between points of intersection in an axial map (stubs can be
considered if long enough). Angular depth calculation in segment analysis takes a
different form compared to that of axial analysis. Angular segment depth is calculated by adding up the weighted values of the edges, where each edge is weighted
by the angle of connection. To make it clearer, the intersection of two segments
at an angle of incidence of 47°, which might be approximated to 45°, might have a
weight of 0.5. If one of these segments intersects a different segment at 107°, then
that edge will have a weight of 1. If these three segment elements are connected in
the same direction then the depth between these street elements is the sum of their
weighted angular intersections, that is, 0.5 + 1 = 1.5. This angular sum can be
considered to be “the cost of a putative journey through the graph”, and from it a
“shortest path” is one that presents the least angular cost from one segment to all
others in a street network (Turner, 2001). The turn angle is always regarded as
positive and the calculation accounts only for directional movement, meaning that
the point where one leaves the segment, via a “forward” link, should be in the same
direction as the point at which one previously arrived at the segment’s “back” links.
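The least angular cost described above can be sketched as a shortest-path search in which each turn is weighted by its angle, a 90-degree turn costing 1. The three-segment example below mirrors the 0.5 + 1 = 1.5 calculation in the text; the segment names are invented:

```python
import heapq

def angular_weight(angle_degrees):
    # A 90-degree turn costs 1, so a 45-degree turn costs 0.5 (Turner, 2001)
    return angle_degrees / 90.0

# Hypothetical segments: s1 meets s2 at 45 degrees, s2 meets s3 at 90.
connections = {
    "s1": [("s2", 45)],
    "s2": [("s1", 45), ("s3", 90)],
    "s3": [("s2", 90)],
}

def least_angular_depth(graph, origin):
    """Least angular cost from the origin segment to every other segment."""
    depth = {origin: 0.0}
    queue = [(0.0, origin)]
    while queue:
        d, seg = heapq.heappop(queue)
        if d > depth.get(seg, float("inf")):
            continue
        for other, angle in graph[seg]:
            nd = d + angular_weight(angle)
            if nd < depth.get(other, float("inf")):
                depth[other] = nd
                heapq.heappush(queue, (nd, other))
    return depth

print(least_angular_depth(connections, "s1"))  # s3 at depth 0.5 + 1 = 1.5
```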
Tulip Analysis
- a turn of less than 22.5° might be assigned zero value
Angular Segment Analysis Measures17
17 See appendix 2. For further description about angular segment analysis measures please refer to Depthmap 4: A Researcher's Handbook (see references).
1. Angular Connectivity
The segment analysis measure of angular connectivity is considered to be
the cumulative turn angle to other lines. The turn angle is weighted so that
a 180° angle will be equivalent to a cumulative weight of 2 and an angle of
45° will correspond to a cumulative weight of 0.5. If we look at the example
in figure 9.1 we see that the calculation of the measure for segment B is made
by adding angle x to angle y;
18 Diagram source: Turner, A. (2005) Could A Road-centre Line Be An Axial Line In Disguise?
2. Step Depth
Step depth follows the shortest angular path from the selected segment to all other
segments within the system. The weighting used in the angular scale counts one
step as a 90° angle. Angles are cumulative. The depth from segment A to segment
B in figure 1 is 0.5 (a turn of 45°).
3. Node Count
Node count is the number of segments encountered on the route from the current
segment to all others. In the case of figure 1 we consider node count (NC) to be 3
because the shortest angular path goes through three segments.
4. Total Angular Depth
Total angular depth is the cumulative total of the shortest angular paths to all segments. In the case of segment ‘A’ in figure 1, the angular total depth is;
5. Mean Depth
The angular mean depth value for a line is the sum of the shortest angular paths
divided by the sum of all angular intersections in the system, rather than the
number of lines in the system. Mean depth in general terms is indicative of how
deep or shallow a node is in relation to the rest of the graph, a measure defined
as centrality. However, mean depth seems to have several problematic issues
with regard to its implementation and verification in angular segment analysis.
At a small radius such as 500 metres, angular mean depth is a meaningless
measure: it will simply approximate node count. At radius n, node count has
a constant value, because we are simply considering all the nodes (segments) in
the system. In this case, angular mean depth will be proportional to angular total
depth. In the example illustrated in figure 9.1, angular mean depth for segment
‘A’ is calculated in the following way;
7. Integration measure
Some approximations of the measure in special radius types;
8. Choice measure
Note that, both integration and choice could be combined in one measure. This
makes it possible to find the segments in a network which serve as both a potential
destination and route of movement. This measure will then narrow the focus on
fewer and more significant elements within the system that combine the attributes
of being a potentially desired destination and at the same time a desired route for
movement.
ferent sizes. It was necessary to introduce new normalisation methods for
angular distance in a graph, since it was not possible to use the diamond-type
value (D-value) that was initially used to normalise topological distance in axial and
convex graphs. The normalisation of choice was motivated by acknowledging the
relationship between high choice and high total depth: the more segregated
the system is, the higher the choice values are. Choice was therefore seen as a
necessary condition to overcome the cost of segregation in the street network. This is
a cost-benefit principle that was introduced by Tao Yang in Hillier et al (2012). The
new normalised angular choice measure was named NACH;
NACH = log(CH + 1) / log(TD + 3)
NAIN = NC^1.2 / TD
The initial testing of the measures, as explained in Hillier et al (2012), proved to be
very effective. However, a few issues have arisen in cases where applying the
measures at the local scale in less urbanised areas produced erroneous results.
To deal with that, it is recommended that fully urbanised areas and periurban
regions should be analysed separately. Another issue emerged when dealing
with road-centre line maps, where multiple segments on the same line were
adding to the values of choice, hence distorting the results. For that, an automated
procedure was developed at Space Syntax Ltd. to eliminate this effect. This problem
can also be solved by manually deleting the extra lines. It is also advised that stubs
up to 40% should be cleared from segment maps before running the analysis. This
is to ensure that no unnecessary depth is introduced to the system, as that might
increase segregation and distort the values of normalised choice.
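The two normalisation formulas quoted above can be written directly as functions; the per-segment values used below are invented for illustration:

```python
import math

def nach(choice, total_depth):
    """Normalised angular choice: NACH = log(CH + 1) / log(TD + 3)."""
    return math.log(choice + 1) / math.log(total_depth + 3)

def nain(node_count, total_depth):
    """Normalised angular integration, as given in the text:
    NAIN = NC^1.2 / TD."""
    return node_count ** 1.2 / total_depth

# Invented values for a single segment:
print(round(nach(120, 450), 3))
print(round(nain(300, 450), 3))
```

In practice these would be applied column-wise to the choice, total depth and node count attributes exported from a segment analysis.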
DEPTHMAP EXERCISE
Segment Analysis Using Depthmap
Task
The main task in this exercise is to get from an axial map, to a segment map16, to
some analytic measures of the spatial structure of urban environments.
16 Do not convert your map directly to a segment map! You must make an axial map first to preserve the links/unlinks, if there are any embedded in the file.
Steps to perform axial and segment analysis in Depthmap
In this exercise you will learn to do the following activities in Depthmap in order
to do segment analysis from axial lines data.
1. Download your files
Go to http://moodle.ucl.ac.uk/mod/resource/view.php?id=645834
2. Axial analysis
4. Calculating different types of segment step depth
5. Segment analysis18
18 Most description of segment analysis is rather technical, but it would make sense to read the papers about it: Hillier and Iida (2005) and Turner (2007). For some historical perspective, you might also like to look at Dalton et al. (2003), although note that Dalton et al.'s fractional analysis differs in detail from the later methods.
6. Combining integration and choice
NAIN
NACH
Notes
NOTE 01
When you need to draw axial lines data using map image files:
You may use whichever drawing package you prefer to create a vector drawing
of the axial map to import into Depthmap; however, MapInfo or other GIS packages are strongly preferred as they will allow you to register your map images to
the OS national grid, and thus link your axial map to other axial maps.
If you are using MapInfo to produce an axial map out of a map image file,
consider how to register the image. That is, at the moment, the image is simply
pixels on the screen, but in fact, each pixel corresponds to a location on the UK
national grid, and this is the preferred format. To register it, you will need to
download contemporary vector map data from OS to which to register the map.
Please consult your GIS lectures for how to register images within MapInfo.
Once your image is registered, you should start drawing the axial map. Make
sure that you use a new layer within MapInfo for your axial map.
If you are unable to register your image, at the very least set the correct scale
within your GIS or CAD package. If OS national grid units are not available, use
metres instead.
NOTE 02
In order to import the axial lines drawing file you need to export it from the
drawing package (CAD, GIS) you have used to draw the lines. From CAD you can
export a DXF file; from MapInfo it is probably easiest to export a MIF/MID com-
bination (although you may also export DXF from MapInfo if you prefer).
It is tempting to pile straight into the analysis of your axial map, but wait!
Always check the integrity of your axial map first: check that your axial lines are
properly connected to each other. You can go through the map looking at the
number of connections (just hover the mouse over a line), or check all lines are
connected by performing a ‘step depth’ calculation. Both of these methods can
be found in the Depthmap axial analysis tutorial on the UCL Bartlett website. If
you find lines that are unconnected but should be, fix the map either within
Depthmap or MapInfo (if you have imported a MapInfo file); once again, see the
tutorial. Actually edit the lines rather than using the ‘link’ tool. You may want to
export the edited map as a MapInfo MIF/MID file if you make significant
changes to your original map.
In rare circumstances, you may need to unlink lines where it is not possible to
get from one road to another, for example, if there is a bridge. Go through the
axial analysis tutorial to find out how to unlink lines, and unlink where necessary.
NOTE 03
If you don’t have a proper metric (scale) on your map, then Depthmap will allow
you to set integer radii, i.e. 1, 2, 3, ..., n, where 1 is a single one of the units in
which the map was drawn.
ADVANCED
AXIAL AND
SEGMENT
ANALYSIS
THEORETICAL BACKGROUND ON ADVANCED AXIAL AND
SEGMENT ANALYSIS
Foreground vs. Background analysis
Introduction
This chapter aims to explain some aspects of metric analysis in cities
and how it helps to spot localities with a homogeneous identity in terms of the
scale of grid structure. These localities appear as patchwork patterns in segment
maps. Along with these local metric signatures of neighbourhoods there seems
to be an associated global structure which can only be identified by rendering
its topo-geometric attributes. We will introduce these two models of segment
maps. While a patchwork pattern can be approximated using the metric depth
tool in UCL Depthmap, the topo-geometric structure of the grid needs more
experimentation on how to expose it. Generally, an angular measure of
integration or choice will do; however, this can vary according to the type of grid
and area that need to be represented. There is still not much to say about how
these studies can be beneficial, yet they do tend to help us acquire a better
understanding of cities and their generative mechanisms.
What are the main components of cities that distinguish the global structure from
the local structure?
In defining the global structures, or the foreground layer, of street networks, the
original syntactic axial model presented some deficiencies; hence there was a need
for a methodological intervention. Previous studies by Figueiredo and Amorim
(2004, 2005) recognised that axial analysis cannot identify continuous linearity in
the system. In response to this problem, Figueiredo and Amorim suggested “continuity lines” as a method to unify the long axial lines that intersect at smaller
angles, and hence consider these unified linear representations as single elements
when computing different configurational measures of Space Syntax. In this way,
“continuity lines” can be considered to represent a global structure highlighting
roads that are likely to afford higher movement flows due to their configurational
properties. The issues presented by semi-continuous streets were also discussed
by Dalton (2001), who proposed fractional analysis to recognise angular graph
distance in calculating depth in urban layouts. Through the use of fractional analysis, it was possible to highlight the role of Broadway in the grid-iron street network
of Manhattan. Another contribution was made by Dalton (2007) to empirically
define local structures, in a dedicated effort to find some quantitative description of
urban identity. In his approach, Dalton plots the values of point synergy and point
intelligibility of axial lines and accordingly arrives at a form of patchwork map that
shows fuzzy boundaries of neighbourhood vicinities. Synergy was recognised by
Hillier (1996) as the relationship between local and global integration, highlighting
some underlying relationship between the parts and the whole in cities. Intelligibility similarly relates a local measure such as connectivity to the measure of
integration in an axial map, and in this way the measure gives an idea about how
intelligible and permeable a space is for users.
More recent work by Hillier et al. (2009) revealed an interesting manifestation of
both the local and global patterns of grid structures: the local highlighted as
patchworks of metrically similar areas, and the global represented by semi-continuous lines in the grid highlighting shortest paths between all origins and
destinations in an urban system. The metric patchworks serve as a background
structure and correspond to some extent with known localities in politically
defined neighbourhood areas. The metric patchwork patterns are highly sensitive
to the scale of measurement. The longer continuities serve as a foreground layer
and stand for the shortest connections. They very often match busy streets in the
urban fabric.
consequences. If we take the metric mean depth of all elements within the analysed
urban system on the X axis and plot it against metric mean depth of radius 1000, for
example, we might be able to highlight a pattern of peaks and troughs that is representative of how far a metrically defined local cluster extends. It is important to
remember that the density of street structures and the performance of different
radii are also related to the distribution and size of urban blocks. At the centre of a
spatial system, as blocks shrink to a smaller size, street structures intensify and
metric mean depth decreases. The exact contrary happens when blocks grow larger
in size at the centre of an urban region. It must be recognised that city centres tend
to intensify their grid structures to minimise depth.
Figure 11.1 . The relationship between metric and angular graph distance in Barcelona. A set of correla-
tion coefficients that mark the relationship between MMD and angular total depth, and between MMD
and choice on different metric radii. Adapted from (Al_Sayed, 2013).
Global structures in cities are often recognised as the set of near-continuous linear
connections that afford shorter journeys from all origins to all destinations. It is
perhaps reasonable to recognise a global road infrastructure network as a sepa-
rate foreground layer in an urban grid. As we have seen, metric measures can only
function well on a small local radius whereas global radius is mostly topological
or topo-geometric. If we start calculating metric mean depth or node count at a
small radius from each segment element, the resulting values of physical distance
will start highlighting local structures (figure 11.2). Once we push the radius higher,
an overlapping effect starts to emerge (Hillier et al., 2009). More overlaps between
the ‘buffer zones’ around each single element arise, and this demands a less intuitive
measure to calculate the depth and journey cost relationships of the system as a
whole.
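The idea of a restricted metric radius can be sketched as a distance-bounded shortest-path search. The segment graph below is hypothetical, and the routine is an illustration of the principle rather than Depthmap's implementation:

```python
import heapq

def metric_mean_depth(graph, source, radius):
    """Mean metric distance from `source` to all segments reachable
    within `radius` along the weighted segment graph."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd <= radius and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    reached = [d for v, d in dist.items() if v != source]
    return sum(reached) / len(reached) if reached else 0.0

# Toy segment graph: edge weights are metric distances in metres
graph = {
    "a": [("b", 100.0), ("c", 400.0)],
    "b": [("a", 100.0), ("c", 150.0)],
    "c": [("a", 400.0), ("b", 150.0), ("d", 700.0)],
    "d": [("c", 700.0)],
}
print(metric_mean_depth(graph, "a", 500.0))  # 'd' lies beyond the 500 m radius
```

Increasing the radius draws more distant segments into each element's 'buffer zone', which is where the overlapping effect described above begins.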
If we look at the integration measure, the best way to illustrate important linear
connections is by calculating angular turns within a metric radius. At radius ‘n’,
experiments on this measure yield that a simple weighting of integration values by
segment length does not make much difference on the whole. As the system
increases in size, the configurational values of street elements follow a lognormal
distribution, reducing the difference between mean values. This does not apply to
indices of choice, mainly because the values of choice are likely to follow an expo-
nential distribution.
Figure 11.2. Overlaying the foreground structure on top of the background structure. Smaller radii
are applied to both metric and angular measures to render local patterns (top), whilst larger radii
are applied to show global patterns (bottom). The analysis reveals varying patterns of clustering in the
background and branching of semi-continuous lines in the foreground. Source: Al_Sayed, 2013, LSE.
In a large-scale system, choice does not seem to work well when applying metric
weighting or angular weighting. In fact, a seemingly trivial effect on a local segment
of a route can add up in computing values, having a negative impact on the rep-
resentation of the system as a whole. A restricted choice radius, however, performs
much better in measuring the network affordances for movement traffic, especially
when applying metric weighting. The best method for weighting choice would be
to weight the origin and destination of the shortest angular-turn journey (Hillier
et al., 2009). This allows for the inclusion of the block-size variable in
weighting origin and destination segments. This solution might be less effective
when computing shortest journeys within radius ‘n’. Yet the calculation of least
angular turns is more effective than calculating shortest metric distance. Hence,
whether calculated separately or combined in one measure, analyses of angular
choice and angular integration are usually reliable in highlighting the foreground
layer of street structures.
DEPTHMAP EXERCISE
Advanced Segment and Axial Analysis: Foreground vs. Background analysis
CityLondon_axialmap.MIF / CityLondon_axialmap.MID — MapInfo exchange files containing an intersecting series of line elements and their MID data sequence; a fewest-line map of the City of London, data copyright © University College London. Available at http://moodle.ucl.ac.uk/
CityLondon_unlinks_coord.MIF / CityLondon_unlinks_coord.MID — unlink coordinate data for the axial map. Available at http://moodle.ucl.ac.uk/
Tasks
Import a MIF file into Depthmap with its associated unlinks data
1. Go to http://moodle.ucl.ac.uk/
2. Prepare your axial map
3. Axial analysis
will have the file imported into your Depthmap
file as a data map containing the following
columns: REFERENCE NUMBER, X, Y.
4. Creating a segment map
might not distinguish any pattern. To start
distinguishing ‘patchworks’ in the map you will
have to adjust the colour range.
6. Foreground analysis [23]: calculating global integration and
choice values
23. Most descriptions of segment analysis are rather technical, but it makes sense to read the papers about it: Hillier and Iida (2005) and Turner (2007). For some historical perspective, you might also like to look at Dalton et al. (2003), although note that Dalton et al.’s fractional analysis differs in detail from the later methods.
the combination of the two).
Another representation of the foreground layer is identified through the global
integration measure.
Notes
Save your file whenever possible; Depthmap tends to crash sometimes when
analysing large graphs.
If you stop your analysis before it has finished, for whatever reason (often
because the analysis is taking too long), the graph will only be partially analysed,
so you will have to produce another active layer of analysis. If you do not want to
confuse the half-finished analysis with the finalised one within a single layer, you
will have to delete the columns of the partial analysis one by one. Note that some
columns cannot be deleted.
AGENT
ANALYSIS
THEORETICAL BACKGROUND ON THE AGENT MODEL
Introduction
This chapter introduces the cognitive agent-based model and the simulation tech-
niques that are incorporated with the Space Syntax tools in UCL Depthmap10.14.
The cognitive agents, developed by Alasdair Turner, introduce a dynamic descrip-
tion to the otherwise static representation of the Space Syntax model, in that they
account for the adaptive behaviour of individuals/agents in relation to space. This
section is dedicated to introducing the theoretical and modelling description of
Turner’s cognitive agents.
Different Agent tools are provided in the 2D and 3D graph windows to visualise
aggregate and individual agent movement. These tools initially formed the
experimental part of Alasdair Turner’s research; for this reason, further
testing is needed to validate the rules on different building typologies and on different
architectural and urban scales. The manual explains the use of the Agent Analysis
Setup toolbox in the graph window. The toolbox enables users to generate differ-
ent patterns of aggregate behaviour by controlling the parameters and rules in
the toolbox window. In addition, the manual explains some of the tools provided
in the 3D view window that control the visualisation of the standard agents. The
3D view helps in understanding how the individual movement behaviour of standard
automata/agents builds into aggregate patterns that might then be compared to
human behaviour in space.
The Agent analysis tools in the 2D view window (Map window) are used to gen-
erate aggregate models of agents’ movement in space. These aggregate models
are governed by global parameters as well as by parameters defining the behaviour
of individual agents. The global parameters determine the duration of the analysis
and when, where and how many agents are released into the system. They also
allow for exporting agent data to compare with movement traces and with
observed gate counts. The agent parameters define each agent’s field of vision
and the number of steps after which it decides to change direction. The
agents may follow different rules to see or take turns in the system. These rules
need further testing by measuring the correlation between the model
and observed movement patterns. Empirical testing would help in understanding
the basic cognitive mechanisms that drive explorative and planned movement
behaviour in relation to space.
DEPTHMAP EXERCISE
Agent analysis in Depthmap
In this tutorial we explain how to produce an Agent analysis using the
different parameter settings in Depthmap. First, create a new file in Depthmap
and import a drawing file (either a DXF or a MIF file). The drawing you are import-
ing should have closed boundaries so that you can make a visibility
graph before you start the agent tool. After you import the file, follow the steps
to make a visibility graph, first by setting the grid using the SET GRID button.
The default value is 0.04. You can adjust this value by entering a differ-
ent number, preferably 0.02 to match human scale; this is up to you and it
depends on the resolution of the analysis you need to obtain. After you set the
grid, you can fill the enclosed spaces using the FILL button. Click inside the
area you want to analyse and it will be filled with a different colour, marking the
enclosed space in which agents are going to move about. After you have done all
these steps to prepare a space for analysis, you have to make a visibility graph out
of two attributes: the grid and the boundary that you have filled. To do that, go to
TOOLS --- VISIBILITY --- MAKE VISIBILITY GRAPH
A window will appear providing you with visibility graph options. Just click OK
and it will make the visibility graph. Now your settings are ready for Agent Analysis.
In order to explain the window we have marked the different parameters with
numbers and a short explanation will be associated with each number.
1. Global setup
1.1. Analysis length (timesteps): sets the period of the analysis in timesteps.
1.2. Record gate counts in data map: records how many agents pass
through predefined gates in a new column. These gate-count values are stored
in a data map layer and can be compared to observed gate counts representing
pedestrian flow per time unit in the real built environment. Normally you will
need to log the values in both data map columns, because the values will be
exponentially distributed, and in order to calculate their correlation coefficient
they need to be approximately normally distributed. You can log the values by
simply adding a log function when updating the two columns.
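The logging step above can be sketched outside Depthmap. The gate-count figures below are hypothetical, and the correlation is computed from first principles rather than through Depthmap's own tools:

```python
import math

# Hypothetical gate counts: simulated agent counts vs observed pedestrian counts
agent_counts = [12, 250, 40, 3, 90, 710, 28, 5]
observed_counts = [20, 310, 55, 6, 120, 650, 35, 9]

# Log-transform the skewed counts (log(x + 1) guards against zero counts)
log_agents = [math.log(x + 1) for x in agent_counts]
log_observed = [math.log(x + 1) for x in observed_counts]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(log_agents, log_observed)
print(round(r, 3))
```

Correlating the raw counts instead of their logs would let the few very busy gates dominate the coefficient, which is why the log transform comes first.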
2.1. Release rate (agents per timestep): sets how many agents are released into the
system within each timestep.
2.2. Release from any location: a check box giving you the option of releasing
agents randomly from any location in the predefined space.
2.3. Release from selected locations: a check box giving you the option of
releasing agents from previously selected locations. You will have to select the
locations from which you want the agents to be released before you start your
Agent analysis. Normally you will need this to simulate the flow of people start-
ing from the entry points of the layout, such as the main entrance, staircases
or elevators. This technique also helps when the observer needs to compare
observed movement traces with the traces that agents leave behind when moving
from the same point from which the observer has followed people. To
select locations on your grid, press the left mouse button and keep it
pressed while you define a window containing the points you need, then release
it once you are done. If you want to add to your selection, hold
the SHIFT button while you define a new selection with your mouse.
3.1. Field of view (bins): this attribute defines the field of view that each
agent can see when moving in a certain direction. The default is 15 bins, which is
equivalent to roughly 170 degrees. It has proven to be most effective when compared
to natural movement patterns. However, it is up to the researcher to change this
field of view subject to the particularities of the case under investigation.
3.2. Steps before turn decision: these are the steps, or grid points, that the agent
passes through before choosing to randomly change direction in relation to the
environment surrounding it when it arrives at the last step. The
default is 3 steps, which has proven to correlate best with natural movement pat-
terns. Again, it is up to the researcher to change this.
3.3. Timesteps in system: these are the timesteps for which the agent moves about
in the system before it disappears. Normally this will be relative to the distance chosen
between the grid points and the walking distance that a pedestrian may cover in
a certain urban or building environment.
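The bin-to-degree arithmetic behind the field-of-view default can be made explicit. This is a sketch assuming Depthmap's convention of 32 bins to a full circle, under which 15 bins cover 168.75°, i.e. roughly the 170° quoted above:

```python
BINS_PER_CIRCLE = 32  # Depthmap divides the full 360-degree circle into 32 bins

def bins_to_degrees(bins):
    """Convert a field of view expressed in bins to degrees."""
    return bins * 360 / BINS_PER_CIRCLE

print(bins_to_degrees(15))  # 168.75 degrees, i.e. roughly 170
print(bins_to_degrees(32))  # 360 degrees, the full circle
```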
This option will export the movement traces to a file called ‘trails’, which will be
stored in the folder where you have your original graph file. You can import
that file after you have finished your agent analysis and Depthmap will store it as a
separate drawing layer.
There are different rules within this drop-down list. It is advised that you use the
standard rule, which is the default. The rest of these rules are part of ongoing
research and need to be tested further before being implemented in natural move-
ment simulations. Further information about these experiments can be found
in Turner (2007a). The occlusion rules are of particular interest; however,
these rules will not work properly unless you calculate isovist properties. For
that you have to go to
TOOLS --- VISIBILITY --- RUN VISIBILITY GRAPH ANALYSIS --- CALCULATE ISOVIST
PROPERTIES
In this section we demonstrate how to use the Agent tools in the 3D view
window. The first step is to open the 3D View from the Window menu.
A 3D View window like the one you see in Figure 13.2 will appear, and at the top
of the window you will see, as highlighted, a set of icons that may be
used to create, control, visualise and view agents’ movement. The functionality
of each tool is explained below.
Click this icon to drop a new agent within the scene.
Click this icon to resume agents’ movement after you have deliberately stopped it.
Click this icon to pause agents’ movement.
Click this icon to stop agents’ movement.
Click this icon to enable traces to be drawn tracking agents’ movement routes.
Click this icon to control the orbit zoom of the 3D view.
Click this icon to pan your view in different directions.
Click this icon to zoom in on your view.
Click this icon to have a continuous zoom.
Click this icon to see the gate-count values and how they
emerge and change as the agents move on the grid.
Figure 13.2. Agent tools that may be used to demonstrate the real-time behaviour of individual agents
in a 3D view window in Depthmap.
REFERENCES
Al_Sayed, K., Turner, A., Hanna, S., 2012, “Generative Structures in Cities”. In Proceed-
ings of the 8th International Space Syntax Symposium, Edited by Margarita Greene,
Santiago, Chile.
Bafna, S., 2003, “Space Syntax: A brief introduction to its logic and analytical
techniques”. Environment and Behavior, 35(1): 17-29. http://eab.sagepub.com/
content/35/1/17.short?rss=1&ssource=mfr
Becker, Howard S., 1998, “Tricks of the trade: How to think about your research
while you’re doing it” (Chicago guides to writing, editing and publishing;
Chicago/London: The University of Chicago Press).
Benedikt, M.L., 1979, “To take hold of space: Isovists and isovist fields”, Environ-
ment and Planning B, 6: 47 – 65
Dalton N., 2001, “Fractional configuration analysis and a solution to the Man-
hattan problem”, In Proceedings of the Third International Symposium on Space
Syntax, Atlanta, GA
Dalton, N, S, Peponis, J, Conroy Dalton, R, 2003, “To tame a TIGER one has to know
its nature: extending weighted angular integration analysis to the description
of GIS road-centerline data for large scale urban analysis.” In: 4th International
Space Syntax Symposium, London, UK. http://eprints.ucl.ac.uk/1109/
Desyllas, J., Duxbury, E., 2001, “Axial maps and visibility graph analysis: a compari-
son of their methodology and use in models of urban pedestrian movement”. In
Proceedings 3rd International Symposium on Space Syntax, Georgia Institute of
Technology, GA, May 2001.
Figueiredo, L., Amorim, L., 2004, “Continuity lines: Aggregating axial lines to
predict vehicular movement patterns”, Proceedings, 3rd Great Asian Streets Sym-
posium, Singapore, National University of Singapore.
Figueiredo, L., Amorim, L., 2005, “Continuity lines in the axial system”. In A. van Nes
(Ed.), Proceedings of the 5th International Space Syntax Symposium, TU Delft, Delft.
Hillier, B., Hanson, J., 1984. “The social logic of space”. Cambridge University Press.
Hillier, B., Hanson, J., Graham, H., 1987, “Ideas are in things - An application of the
Space Syntax method to discovering house genotypes”. Environment and Plan-
ning B, 14 (4): 363 - 385.
Hillier, B., Iida, S., 2005, “Network and psychological effects in urban movement”.
In: Cohn, A. G., Mark, D. M. (Eds.), Spatial Information Theory (COSIT 2005), Lecture
Notes in Computer Science 3693, Springer, Berlin, 475-490.
Hillier, B., Penn, A., Hanson, J. Grajewski, T., Xu, J., 1993, “Natural movement - or,
configuration and attraction in urban pedestrian movement”. Environment and
Planning B, 20 (1) 29 - 66.
Hillier, B., Turner, A., Yang, T., Park, H. T., 2009, “Metric and topo-geometric properties
of urban street networks: some convergences, divergences and new results”. The
Journal of Space Syntax, 1(2): 279.
Hillier, B., Yang, T., Turner, A., 2012, “Normalising least angle choice in Depthmap
- and how it opens up new perspectives on the global and local analysis of city
space”. The Journal of Space Syntax, 3(2): 155-193.
Kruger, M. J. T., 1989, “On node and axial grid maps: distance measures and related
topics”. Presented at: European Conference on the Representation and Manage-
ment of Urban Change, Cambridge, UK.
Penn, A., 2003, “Space Syntax and spatial cognition or why the axial line?”. Envi-
ronment and Behavior, 35(1): 30-65.
Peponis, J. (ed.), 1989, “A theme issue on Space Syntax”. Ekistics, 56: 334-335
Peponis, J., Wineman, J., Rashid, M., Hong Kim, S., and Bafna, S., 1997, “On the
description of shape and spatial configuration inside buildings: Convex parti-
tions and their local properties”. Environment and Planning B, 24: 761-782.
Peponis, J., Wineman, J., Bafna, S., Rashid, M., Kim, S H., 1998, “On the generation
of linear representations of spatial configuration”, Environment and Planning B
25: 559 - 576
Peponis, J., Bafna, S., Zhang, Z., 2008, “The connectivity of streets: Reach and
directional distance”, Environment and Planning B 35: 881-901
Ratti, C., 2004, “Space Syntax: Some inconsistencies”, Environment and Planning
B, 31:487-499.
Shpuza, E., Peponis, J., 2008, “The effect of floorplate shape upon office layout inte-
gration”. Environment and Planning B, 35(2), 318.
Teklenburg, J. A. F., Timmermans, H. J. P., Van Wagenberg, A. F., 1993, “Space syntax:
Standardised integration measures and some simulations”. Environment and Plan-
ning B, 20: 347-347.
Turner, A., Penn, A., 1999, “Making isovists syntactic: Isovist integration analysis”.
In Proceedings 2nd International Symposium on Space Syntax, Universidad de
Brasil, Brazil. http://www.vr.ucl.ac.uk/publications/turner1999-000.html
Turner, A., Doxa, M., O’Sullivan, D., and Penn, A., 2001, “From isovists to visibility
graphs: a methodology for the analysis of architectural space”, Environment and
Planning B 28(1): 103--121. http://www.vr.ucl.ac.uk/publications/turner2001-000.
html
Turner, A., 2001, “Angular analysis”. In Proceedings of the 3rd International Sympo-
sium on Space Syntax, Peponis et al. (eds), 30.1-30.11.
Turner, A., 2003, “Analysing the visual dynamics of spatial morphology”. Environ-
ment and Planning B 30: 657-676. http://eprints.ucl.ac.uk/2651/
Turner, A., Mottram, C., Penn, A., 2004, “An ecological approach to generative
design”. In Design Computing and Cognition ’04, Ed. J. S. Gero (Kluwer, Dordrecht),
259-274.
Turner, A, Penn, A, Hillier, B., 2005, “An algorithmic definition of the axial map”.
Environment and Planning B 32(3):425–444. http://eprints.ucl.ac.uk/624/
Turner, A., 2005, “Could a road-centre line be an axial line in disguise?”, In Pro-
ceedings of the 4th International Symposium on Space Syntax, in van Nes (ed),
145-159
Turner, A., 2007, “From axial to road-centre lines: A new representation for Space
Syntax and a new model of route choice for transport network analysis”. Envi-
ronment and Planning B 34(3):539–555 http://eprints.ucl.ac.uk/2092/
Turner, A., 2007a, “The ingredients of an exosomatic cognitive map: Isovists, agents
and axial lines”. In: Hölscher, C., Conroy Dalton, R., Turner, A. (Eds.), Space Syntax and
Spatial Cognition. Universität Bremen, Bremen, Germany.
Turner, A., 2007b, “To move through space: Lines of vision and movement”. In:
Kubat, A. S. (Ed.), Proceedings of the 6th International Symposium on Space
Syntax, İstanbul Teknik Üniversitesi, Istanbul.
Van Maanen, J., 1988, “Tales of the Field” (Chicago Guides to Writing, Editing and
Publishing; Chicago/London: The University of Chicago Press).
Vaughan, L., 2001, “Space Syntax Observation Manual”, (London: Space Syntax
Ltd.)
Yang, T., Hillier, B., 2007, “The Fuzzy Boundary: The Spatial Definition of Urban Areas”,
A.S. Kubat, Ö. Ertekin, Y.İ. Güney, E. Eyüboğlu (Eds.), In Proceedings of the 6th Inter-
national Space Syntax Symposium, Istanbul Technical University, Cenkler, Istanbul,
091.01-22.
ADDITIONAL WEB SOURCES
Depthmap online training platform, Kayvan Karimi, Tim Stonor, Space Syntax Ltd
https://otp.spacesyntax.net/
APPENDICES
APPENDIX 1
The global measures are derived from the graph topological depth, which accounts
for the distance between each axial line and all the others, where the shallowest
axial line is the closest to all other axial lines and the deepest is the furthest one.
Depth is a topological distance between vertices in the dual graph G. Two open
spaces, i and j, are said to be at depth d_ij if the least number of syntactic steps
needed to reach one vertex from the other is d_ij. The sum of all depths from a given
origin is computed as Total Depth:

$$TD_i = \sum_{j=1}^{n-1} d_{ij}, \quad i \neq j$$

The Mean Depth of the graph, representing the average distance of the i-th axial line
from all the other n − 1 lines in the dual graph G, is computed as follows:

$$MD_i = \frac{1}{n-1}\sum_{j=1}^{n-1} d_{ij}, \quad i \neq j \qquad (11)$$
Depth might be calculated for the whole graph G containing n vertices, or for a
certain number of neighbouring vertices within a predefined graph distance from
each vertex. For example, Mean Depth at radius 2 might be defined as the average
distance of the i-th axial line from the other w − 1 axial lines at a distance d_ij ≤ 2:

$$R2\,MD_i = \frac{1}{w-1}\sum_{j=1}^{w-1} d_{ij}, \quad i \neq j \qquad (12)$$
Using the topological measure of depth, it is possible to deduce the structural prop-
erties of the system by computing Relative Asymmetry (RA) values. RA represents
the centrality of an axial line by comparing its actual Mean Depth with the theoretical
highest and lowest values that Mean Depth could take in the given graph. Com-
pared to Mean Depth alone, Relative Asymmetry is a normalisation of depth that
fits values within the range [0, 1], given that min(MD_i) = 1 and max(MD_i) = n/2:

$$RA_i = \frac{2(MD_i - 1)}{n - 2} \qquad (13)$$
The problem with RA is that the limits that depth is being scaled to are quite
extreme. To enable a comparison between systems of different sizes, and between
local and global structures within the same graph, a normalisation of the graph
measures is needed. For this purpose, a dedicated ‘diamond’ D-value was com-
puted to normalise graphs that are representative of architectural or urban spaces
(Kruger, 1989). Normalisation using the D-value is obtained by comparing a
centrality measure of the i-th vertex of a graph with n vertices with the centrality
measure we would get if the vertex were at the root of a graph with the same number
of n vertices but justified in a diamond shape (Kruger, 1989; Teklenburg et al.,
1993; Hillier & Hanson, 1984). In such a graph, depth values are thought to follow a
normal distribution; therefore, comparing the RA value of its root to that of a vertex
in a graph with the same number of vertices is a way to compare a normal distribu-
tion with the actual distribution. For this type of graph, the D-value (Hillier & Hanson,
1984) is computed as:

$$D_n = \frac{2\left\{n\left[\log_2\left(\frac{n+2}{3}\right) - 1\right] + 1\right\}}{(n-1)(n-2)} \qquad (14)$$
In order to make the centrality measure of RA independent of the size of the
graph, Real Relative Asymmetry (RRA) values were computed to allow for a com-
parison between graphs of different sizes. RRA is derived by normalising RA values
by the D-value:

$$RRA_i = \frac{RA_i}{D_n} \qquad (15)$$
Using the aforementioned measures of street networks, Space Syntax has devel-
oped two major indices of centrality that capture the relative structural importance
of a street represented by a vertex in a dual graph. Centrality Closeness, defined
as Integration in Space Syntax terms, is expressed by a value that indicates the
degree to which a vertex is integrated with or segregated from the urban system as a
whole (global Integration), or from a partial system consisting of vertices that reside
within a neighbourhood defined within a certain number of steps away from
each vertex (local Integration). The global measure of Integration is computed as
follows:

$$Int_i = \frac{1}{RRA_i} = \frac{D_n}{RA_i} \qquad (16)$$
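The chain of measures in equations (11), (13), (14), (15) and (16) can be illustrated with a short sketch. The graph below is a hypothetical five-line axial system, and the code follows the appendix formulas directly rather than any Depthmap internals:

```python
import math
from collections import deque

def mean_depth(adj, i):
    """Mean topological depth from vertex i to all others (BFS over the dual graph)."""
    n = len(adj)
    dist = {i: 0}
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(dist.values()) / (n - 1)

def integration(adj, i):
    """Global integration 1/RRA_i, following equations (11) and (13)-(16)."""
    n = len(adj)
    md = mean_depth(adj, i)
    ra = 2 * (md - 1) / (n - 2)                       # equation (13)
    d_n = (2 * (n * (math.log2((n + 2) / 3) - 1) + 1)
           / ((n - 1) * (n - 2)))                     # equation (14)
    rra = ra / d_n                                    # equation (15)
    return 1 / rra                                    # equation (16)

# A toy axial graph: line 0 crosses lines 1-3; line 4 hangs off line 3
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
print(round(integration(adj, 0), 3))  # line 0 is the most integrated
print(round(integration(adj, 4), 3))  # line 4 is the most segregated
```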
The calculation of Choice within different radii can render the shortest-path routes
at different local and global scales. Unlike Integration, which is normally or lognormally
distributed, Choice values are distributed exponentially. Most axial lines take very
low values of Choice, whilst a minority of axial lines hold values higher than the
average and constitute the foreground of the urban fabric. Choice is computed as
follows:

$$Ch_i = \frac{\sigma_{s,t}(i)}{\sigma_{s,t}} \qquad (17)$$

where σ_{s,t} is the number of shortest paths between vertices s and t, and σ_{s,t}(i)
is the number of those paths that pass through vertex i.
APPENDIX 2
Figure 15.1. A segment map model and its graph representation to elucidate how angular graph
distance is calculated for a street network.
The distance cost between two line segments is measured by taking a “shortest
path” from one to the other, so the cost of travelling from a street segment s to a
street segment a can be notated as ω(π − θ) + ω(φ), while the cost between s
and b can be defined as ω(θ) + ω(π − φ). The least angular change (geometric)
distance cost is measured as “the sum of angular changes that are made on a route,
by assigning a weight to each intersection proportional to the angle of incidence of
two line segments at the intersection” (Hillier & Iida, 2005). The weight is defined so
that the distance gain is 1 when the turn is a right angle (90°), 2 when the angular
turn is 180°, and 0 when two segments continue straight on, as is the case when
segments belong to one axial line. This description is notated as follows:

$$\omega(\theta) \propto \theta \quad (0 \le \theta < \pi), \qquad \omega(0) = 0, \qquad \omega\!\left(\frac{\pi}{2}\right) = 1 \qquad (18)$$
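Equation (18) can be read as a simple linear weighting. The sketch below assumes the turn angle is supplied in radians:

```python
import math

def angular_weight(theta):
    """Angular distance gain for a turn of theta radians (equation 18):
    0 for straight continuation, 1 for a 90-degree turn, 2 for 180 degrees."""
    if not 0 <= theta <= math.pi:
        raise ValueError("theta must lie in [0, pi]")
    return theta / (math.pi / 2)  # linear in theta, normalised so w(pi/2) = 1

print(angular_weight(0))            # straight on
print(angular_weight(math.pi / 2))  # right angle
print(angular_weight(math.pi))      # U-turn
```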
This angular cost can then be applied as a weighting function to the centrality
measures of ‘Closeness’ and ‘Betweenness’, as originally defined in graph theory.
Closeness, or Integration in Syntactic terms, is defined as:

$$Int_\theta(x) = \left(\sum_{i=1}^{n} d_\theta(x, i)\right)^{-1} \qquad (19)$$

where d_θ(x, i) is the length of a geodesic (shortest path) between vertex x and
vertex i.

Angular Betweenness, or the Angular Choice value, for a segment x in a graph of n
segments is calculated as follows:

$$Ch_\theta(x) = \sum_{i=1}^{n} \sum_{j=i+1}^{n} \sigma(i, x, j) \qquad (20)$$

where σ(i, x, j) = 1 if the shortest path from i to j passes through x, and 0 otherwise.

Normalised Angular Integration NAIN is defined as follows:

$$NAIN_\theta(x) = \frac{(n+2)^{1.2}}{\sum_{i=1}^{n} d_\theta(x, i)} \qquad (21)$$

Normalised Angular Choice NACH is defined as follows (Hillier, Yang & Turner, 2012):

$$NACH_\theta(x) = \frac{\log(Ch_\theta(x) + 1)}{\log(TD_\theta(x) + 3)} \qquad (22)$$

where TD_θ(x) is the angular total depth of segment x.
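The normalisation of angular choice proposed by Hillier, Yang and Turner (2012), NACH = log(CH + 1) / log(TD + 3), can be sketched as follows; the segment values below are hypothetical:

```python
import math

def nach(choice, total_depth):
    """Normalised angular choice: log(CH + 1) / log(TD + 3)."""
    return math.log(choice + 1) / math.log(total_depth + 3)

# Hypothetical segment values: a foreground street carries high choice,
# a background street with similar total depth carries very little
print(round(nach(5000.0, 820.0), 3))
print(round(nach(12.0, 910.0), 3))
```

The +1 and +3 offsets keep both logarithms defined for segments with zero choice or very low total depth, so the measure stays comparable across systems of different sizes.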