Article

Exploring Landscape Composition Using 2D and 3D Open Urban Vectorial Data

1 Department of Computer Science, Université Lumière Lyon 2, 69007 Lyon, France
2 LIRIS UMR-CNRS 5205, Université de Lyon, CEDEX, 69622 Villeurbanne, France
3 CPE Lyon, CEDEX, 69616 Villeurbanne, France
4 ENSA Lyon, CEDEX, 69512 Vaulx-en-Velin, France
5 EIVP, ENSG, IGN, LASTIG, University Gustave Eiffel, 75019 Paris, France
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2022, 11(9), 479; https://doi.org/10.3390/ijgi11090479
Submission received: 14 June 2022 / Revised: 26 August 2022 / Accepted: 8 September 2022 / Published: 10 September 2022
Figure 1
View composition regarding city features.
Figure 2
Viewpoint from the south of Lyon.
Figure 3
Three-dimensional visualization of buildings and associated documents (see more examples in [17]).
Figure 4
Composition of the 3D view in four steps: Field of View Description (Step A), Intersecting Objects in the 3D Scene (Step B), Storing intersected objects (Step C) and Storing results in the database (Step D).
Figure 5
Discretization of the 3D space according to rays generated from the viewpoint of interest.
Figure 6
A 1 × 1 km tile of Lyon composed of four types of objects: buildings, terrain, roads and vegetation.
Figure 7
Three rays generated from the viewpoint intersect different kinds of objects.
Figure 8
Organization of a model city using a regular grid.
Figure 9
(Left): top view of the 3D skyline for a given point of view (purple). (Right): composition of this skyline.
Figure 10
View composition analysis. (Left): decomposition of the skyline according to the intersected object. (Right): the view composition.
Figure 11
Three-dimensional visualization of a tile (1 × 1 km) of the CityGML model of Lyon.
Figure 12
One-meter-resolution DEM used for a visibility analysis of the Lyon Metropolis: view of the whole DEM covering the city of Lyon (48 sq km) (left) and zoom on a specific part of the city (right).
Figure 13
Visual comparison between data describing the same location (Bellecour Square): raster data (1-m DEM, left) and 3D vector data (right).
Figure 14
Three-dimensional visualizations comparing the results of 2.5D raster (left) and 3D vector (right) visibility analyses from the vantage point of the Fourvière Basilica. In the raster analysis, the visible areas are in green, while in the vector one, we only see the visible 3D points colored according to their type. The 3D model of Saint Jean's Cathedral has been added to the bottom visualizations, which zoom in on the panoramic view of the top images.
Figure 15
Three-dimensional visualization using our tools, with each color corresponding to the CityGML category of the resulting 3D points (green: vegetation, grey: buildings, yellow: terrain, blue: water).
Figure 16
Raster analysis results regarding the visibility of the Fourvière Basilica (Bellecour Square). Green pixels indicate that the Basilica is seen, and red pixels that it is not. The results are displayed with some transparency on an aerial image of the square.
Figure 17
Same as Figure 16, with the addition of the visibility analysis from our tool (green: vegetation; yellow: terrain; red: roofs; white: building walls).
Figure 18
Raster analysis results regarding the visibility of the Fourvière Basilica (Bellecour Square). Green pixels indicate areas where the Basilica can be seen, and red pixels places from which it is not visible. The results are displayed on an aerial image with a little transparency. Building footprints are represented in black.
Figure 19
Buildings with a facade from which the Fourvière Basilica can be seen are shown in white. Buildings from which the Basilica is only visible from the rooftop are excluded.
Figure 20
Visualization of vantage points (each point is a vantage point). On the left, the number of landmarks seen from each vantage point (from red, five landmarks seen, to yellow, zero). On the right, the effort needed to access the vantage point on foot, in calories (from light blue, less than 20 calories, to dark blue, more than 70 calories).
Figure 21
Visualization of an imaginary high-rise project (on the right) in the existing business district of La Part-Dieu, seen from the belvedere of Fourvière.

Abstract

Methods and tools for assessing the visual impact of objects such as high-rises are rarely used in planning, despite the increase in opportunities to develop automated visual assessments, now that 3D urban data are acquired and used by municipalities as well as made available through open data portals. This paper presents a new method for assessing city visibility using a 3D model on a metropolitan scale. This method measures the view composition in terms of city objects visible from a given viewpoint and produces a georeferenced and semantically rich database of those visible objects in order to propose a thematic vision of the city and its urban landscape. As far as computational efficiency is concerned and considering the large amount of data needed, the method relies on a dedicated system of automatic data organization for analyzing visibility over vast areas (hundreds of square kilometers), offering various possibilities for uses on different scales. In terms of operational uses, as shown in our paper, the various results produced by the method (quantitative data, georeferenced databases and 3D schematic images) allow for a wide spectrum of applications in urban planning.

1. Introduction

Obtaining the best 2D view or image of a 3D structure [1] has been studied in several domains. The best view of a city is an important topic that interests not only tourists but also urban planners. Although such a view is highly subjective, criteria such as the urban skyline, the visibility of landmarks from a given point and the number of visible landmarks have been identified to qualify views. Two-dimensional, 2.5D and 3D [2] data models of the city can help in qualifying such viewpoints. A 2D data model gives only the localization of objects in 2D space and is commonly used in classical geographical information systems (GIS). A 2.5D data model adds a single elevation value to the localization of a spatial object and is frequently used for studying terrain elevations. Urban data may need to represent more complex objects, such as buildings; in this case, a 3D urban data model stores three coordinates for every point of an object. These data models are used for visual 3D analyses of landscapes and related applications [3,4], eye-level greenness visibility [5], etc. Urban studies [6] have even incorporated crowd-sourced imagery [7] and object-based image analysis of satellite images [8] to identify such viewpoints.
Consequently, in the quest for a sustainable city, one must also consider the possible alteration of relationships between urban societies and their landscape(s) in the context of the verticalization of urbanization [9]. The conflicts that have emerged from the return of high-rise buildings have called into question the very principles and tools of urban planning. In cities such as London, viewing corridors have been set up, then modified to accommodate high-rise pressure, whereas, in Paris, longstanding height limits have been removed altogether in outlying areas of the metropolis. In many cities, the visual assessment of high-rises has become strategic in determining tower planning applications. The tools often used to evaluate the visual impact of high-rises rely on renderings based on a selection of georeferenced photographs. The approach is rarely systematic, however, with the exception of research by Nijhuis [10,11] and Cassatella and Voghera [12], who have demonstrated the benefits of using GIS and spatial analysis tools based on simplified elevation models.
Based on the expertise of both computer scientists and geographers, the aim of this paper is to provide a precise quantification and qualification of skyline views for planning as well as view composition by using 3D geometry and semantics, as opposed to the usual analysis conducted on a short perimeter with 2.5D data without semantics. Using open data in a standardized format allows us to provide a generic method usable on other available datasets in other cities. Our method offers analytical tools for quantitatively and semantically assessing the visual impact of city objects taking into account topography, but also building types and forms, vegetation, hydrology and any other types of city features included in the 3D model over large areas (several hundred square kilometers). It measures the visual composition of a landscape from a given viewpoint: by using the 3D model composed of different types of city objects, we detect those that are visible and store their identification number and geographic coordinates in a 3D georeferenced database. We can then provide the list of visible objects and the semantic information linked in the 3D city model (name, address of a building, botanical species, etc.) to generate a database that can be queried and used for further analyses beyond a visual representation of the view in question. Moreover, thanks to the 3D coordinates of the visible objects, these results can be used together with external georeferenced data, such as socio-economic data describing the corresponding area, to perform specific analyses corresponding to user needs.
Thanks to geometry and semantics, city view composition studies can be conducted. Figure 1 illustrates this purpose with a view of Lyon (France). Landscape analysis can provide additional information, such as vegetation composition [5] and the location of significant buildings or roads [13,14,15].
Visual analysis can be carried out according to urban planners’ needs [16,17] (Figure 2 and Figure 3). This paper focuses on the production of these new methods to assess the visibility and visual impact of existing or planned features of the city described in a 3D geometrical and semantic city model in order to provide local authorities and associations with quantitative and qualitative data (through georeferenced geometrical and semantic data) for a fairer debate between stakeholders and decision-makers. Our argument is based on the shortcomings of data currently available to the public during consultations regarding high-rise building projects or urban view composition in a more general way. We propose a generic approach and provide open-source tools for processing various city models with different types of city objects. We also ensure the replicability of our work with the use of standards.
In order to present our interdisciplinary work and its implications, as well as its limitations, we will begin by offering insights both into previous scientific works and the needs expressed by practitioners taken into account in the development of our work in Section 2. We will then present the developed method in Section 3 and explain how it may be used with 3D city models to assess skylines and present some of its operational uses in Section 4. Finally, we will conclude by showing how our tool is opening up new possibilities for practitioners and examining how this work may be improved in Section 5.

2. Previous Scientific Works in Respect to Users’ Needs

Cities are evolving at a rapid rate, both horizontally and vertically. Because of their prominence in wide-angle and long-range panoramas, debates surrounding high-rises among governing authorities and the wider public have focused on skylines. Conflicts have emerged, especially in Paris, London [18,19,20], Barcelona, Turin [21], Vienna and even St. Petersburg [22], where tall building projects have revived the debate between advocates of modernity and those wanting to preserve the character of European cities. Hence, analyses of the impact of tall buildings on the skyline [23] have become a major part of urban planning. Several tools are currently being explored [24] to study the visibility of objects in different orientations. Recently, LiDAR data have also been used for visibility analysis [25].
One of the main difficulties when working on visibility analyses in operational contexts is the gap between scientific works and the tools and methods used in practice. The SKYLINE research project (2013–2016) aimed to counter the existing lack of research on the skyline as a contested dimension of the urban landscape at a time when skyscrapers are rapidly spreading across the globe (http://recherche.univ-lyon2.fr/skyline/wordpress/?page_id=452, (accessed on 7 September 2022)). We identified a lack of use of the existing visibility analysis tools. Although basic 2.5D visibility analysis tools have been available in GIS software for many years, they are rarely used in planning assessments [10,11]. More sophisticated methods are almost never used in practice, and practitioners are mostly unaware of their existence and potential. Nevertheless, practitioners listed many different contexts requiring visibility assessments, with various objectives depending on the institutional context, the actors involved, the spatial configuration, etc., so that various outputs may be sought/required depending on the context. An analysis of existing scientific works conducted at the beginning of our project shed light on the shortcomings of most methods proposed by scientists, particularly regarding the opportunities offered by the rapid development of the production of 3D city models, allowing for more precise and semantically richer analyses. During the SKYLINE project, we, therefore, worked alongside practitioners in order to determine their needs, as well as with computer scientists, who were able to develop new and efficiently tailored tools in order to allow for more precise and complete visibility analyses. The following paragraphs present a summarized cross-analysis between the state of the art of scientific developments and the tools and methods used in practice.
The vast majority of scientific and operational studies that have actually been produced and used by practitioners use 2D or 2.5D analysis. For example, 2D models can be used to classify the surroundings of a pedestrian pathway [26] or to measure landscape openness [27] by using isovist (a vector approach), defined by Benedikt [28] as “the set of all points visible from a given vantage point in space and with respect to an environment”. Bartie et al. [29] focus on specific landmark features in order to quantify visibility according to the current viewpoint by defining new metrics extracted from a 2.5D city model. Yang et al. [30] also use 2.5D models to develop their concept of “viewsphere analysis” consisting of estimating the visibility of a scene, defined by a viewpoint and its spherical surroundings, by computing volume-based indices. Visibility analyses were also performed for smaller cities based on 2.5D data using tools such as the Line of Sight Analyst toolbox [31] and Viewshed [32] tools in ArcGIS. We will also compare the results obtained from our proposed approach to those obtained from the ArcGIS tool in Section 4. The isovist approach [26,27] is a vector method based on ray casting strategies (GearScape Geoprocessing Language (GGL) [33], Isovist Analyst [34]).
While 2.5D raster analyses do provide information on the distance up to which a building can be seen and indications as to which areas may be affected by its visual impact, they do not address vertical surfaces, especially facades, as noted by Suleiman et al. [35]. In other words, such analyses mostly provide information about the visual impact of a landmark on a building from its higher vantage point. In this respect, the resulting images give little insight with regard to inhabitants whose views may be affected by a given building or landmark, even if methods that aim to be more realistic from a human perspective arise [14]. In the case of visibility of a landmark or project from a public space, the main limitation of 2.5D raster analysis lies in the precision of the raster used, which cannot properly describe building shapes, especially where the study area is large, due to computing limits.
Although rarely used for visibility analyses in practice, 3D city models are increasingly used in a wide spectrum of applications [36] and are becoming more widely available in open data formats for many cities. While the use of 3D models can improve the precision and relevance of the analyses produced, it is also important to note that they can also provide more immersive visibility analyses in which non-expert users can easily identify their surroundings and understand the corresponding results.
Practitioners stated that 3D visualizations are often used in association with 2.5D raster analysis, especially through photomontages. The vast majority of such photomontages are not produced from accurate spatial data, and some have been accused of conveying a wrongful image of a project, spoiling public debate and sometimes giving rise to legal action [37]. Other practitioners use 3D models provided by Google by inserting models representing projects into Google Earth (Figure 3). In this case, users have no control over the 3D data available in their area of interest, little control over what can be displayed in the Google Earth viewer and no way to perform analysis, limited to only visualization.
However, in the scientific field, multiple works exist for using 3D models to perform visibility analyses. For example, Morello and Ratti [38] and Suleiman et al. [39] propose 3D extensions for the isovist concept on Digital Elevation Models (DEM) defined in a raster voxel model and a polygonal model, respectively. Caldwell et al. [40] chose a different approach, precomputing a Complete Intervisibility Database on a Digital Elevation Model in order to be able to directly compute metrics supporting specific decision-making needs. For each point of the DEM, the viewshed (a raster approach), which is a concept close to that of the isovist (a vector approach) [41], is computed and stored for rapid access when needed by specific processes. These visibility results can, for example, be used to compute the least and most visible paths between two positions on the ground. Choudhury et al. [42] and Rabban et al. [43] propose methods to compute a Visibility Color Map from a specific viewpoint: a color is assigned to each point of its surroundings according to the visibility of this viewpoint from it. Rosa [13] links a viewshed analysis, which detects the most visible area of a Digital Elevation Model, with landscape values based on experts’ assessments in order to generate an index of visible landscape values: this makes it possible to detect which areas represent an important landscape with high visibility.
While these visibility analyses offer tools to qualify urban and natural landscapes, they can also be used to measure the visual impact of a specific city feature, such as a building (existing or planned), on such landscapes, as in the works of Hernández et al. [15] and Czyńska [44]; the former proposes a method to measure the visual impact of rural buildings on natural landscapes and compares the results to a public survey. Similarly, Danese et al. [45] examine the Visual Impact Assessment of new projects in various Italian cities by computing viewshed analyses before and after the addition of buildings and comparing the results in order to evaluate the impact on the visibility of landmarks.
Most works propose visibility studies of 3D models but do not address the scaling problem inherent to the applications related to these data or acknowledge that their process can only address a limited area. We thus opt for a generic method that can be used on large 3D vector datasets available for numerous open data cities. As a result of the visibility analysis being stored as 3D georeferenced vector entities, any necessary post-processing of the generated data can be conducted in order to include visibility distances/range, meteorological conditions, etc.
These works also focus on analyzing the impact of buildings but rarely address other types of city features such as vegetation, landmarks or other urban features that may have a certain impact in terms of visibility. Since these kinds of data are now correctly managed by international standards, such as CityGML, it is necessary to enhance methods to take them into account semantically and geometrically.
In order to overcome the shortcomings of available tools so as to provide practitioners with precise and useful tools, we worked with practitioners to provide a list of requirements a visibility tool should meet and then developed a method that could meet different practitioners’ needs. This approach is intended to be more general than scientific works proposing one tool or one visibility indicator to answer a particular need [44]. We thus aim to produce precise and complete results that can be stored as 3D georeferenced vector data with linked attributes, allowing for a wide range of GIS-based analyses to extract the necessary information, depending on the context of use.
Based on the needs expressed, a few requirements were listed:
  • the need for precise geometrical analysis, which requires the use of a 3D vector city model instead of simplified DEM or rasters,
  • the need for semantic data to identify as precisely as possible which feature is seen from a vantage point; if a building or landmark is concerned by the visual impact of a proposed building (concerned only by its roof or its facades and which floors may be impacted, etc.),
  • the need to be able to process large amounts of data, and especially rich, 3D vector data so as to obtain precise results on any area (a whole metropolis or region if needed),
  • the need for numerous outputs that can be used to generate multiple results (images, georeferenced databases and data quantification stored in spreadsheets), some of which may be used as is and others opening possibilities in terms of spatial analysis (i.e., interaction with other georeferenced data in GIS tools), depending on the end users’ objectives and their technical capabilities,
  • the need for a generic approach for processing various city models with different types of city objects,
  • the need for open-source tools that can be widely used by any stakeholder,
  • the need to ensure replicability with the use of standards.
The above requirements were identified based on the interactions between urban project practitioners and computer scientists to identify the scientific and technical challenges.
The next section presents the new methods developed. Various uses of the results are then presented, demonstrating the extent of the opportunities offered by the tool.

3. Measuring the Visual Composition of an Urban Landscape

In this section, we will describe the various steps comprising our method, depicted in Figure 4 and described in the following subsections.

3.1. Field of View Description (Step A)

The input parameters are the studied viewpoint and the Field Of View (FOV). They are defined by a given 3D position and the angular aperture, respectively. The FOV may be human, i.e., 200 degrees (FoVx, the field of view in the horizontal direction) × 135 degrees (FoVy, the field of view in the vertical direction) according to Wandell and Thomas [46], to study the view from the window of an apartment, for example; or it may be a complete 360 × 180-degree full panoramic field (4π sr solid angle), for example, to compute a view from a rooftop terrace at the top of a tower. In order to partition the FOV by a set of rays (Figure 5), it is also necessary to define a resolution, L × H (where L is the width and H is the height of the screen), which represents the number of rays to be generated.
Currently, 3D city models may cover several hundred square kilometers, and numerous objects and polygons must, therefore, be tested for intersection computations. For example, the Lyon open data in France (https://data.grandlyon.com/, (accessed on 7 September 2022)) covers a territory of 550 sq km (an example of a 1 × 1 km tile of Lyon is shown in Figure 6). Table 1 shows the number of triangles for the four different kinds of objects (buildings, roads, terrain and vegetation). Depending on the density (urban, peri-urban or rural areas), the number of triangles for these kinds of objects also varies.
Given the limits of visual accuracy, computing every possible ray intersection in the 3D space is both unnecessary and time-consuming. In our method, for a given viewpoint, each ray is followed only until it intersects its first object. Parameters based on human vision are chosen to decrease the number of rays to be computed (for instance, the average angular resolution of the human eye is between 0.02 and 0.03 degrees [47], which makes it possible to distinguish an object with a diameter of around 40 cm at a distance of 1 km). In any case, an object becomes less relevant as one moves away from the viewpoint. The 3D space is therefore decomposed into angular partitions, with two neighboring rays separated by

ΔFoVx = FoVx / L

and

ΔFoVy = FoVy / H.

A panoramic analysis can be carried out if FoVx equals 360°. Increasing the resolution L or H yields finer results but increases the number of rays and hence the computation cost.
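The discretization above can be sketched in a few lines of Python. This is a minimal illustration, not the 3DUSE implementation; the function and variable names are ours:

```python
import math

def generate_rays(fov_x_deg, fov_y_deg, width, height):
    """Partition the field of view into width x height ray directions.

    Yaw spans [-FoVx/2, FoVx/2] and pitch spans [-FoVy/2, FoVy/2] around
    the view axis, with neighboring rays separated by dFoVx = FoVx / L
    and dFoVy = FoVy / H, as in the text.
    """
    d_yaw = fov_x_deg / width
    d_pitch = fov_y_deg / height
    rays = []
    for j in range(height):
        pitch = math.radians(-fov_y_deg / 2 + (j + 0.5) * d_pitch)
        for i in range(width):
            yaw = math.radians(-fov_x_deg / 2 + (i + 0.5) * d_yaw)
            # Unit direction vector (x: right, y: view axis, z: up).
            rays.append((math.cos(pitch) * math.sin(yaw),
                         math.cos(pitch) * math.cos(yaw),
                         math.sin(pitch)))
    return rays

# A full 360 x 180 degree panorama at a modest resolution:
panorama = generate_rays(360, 180, 720, 360)
```

With a 720 × 360 resolution, neighboring rays are 0.5° apart; reaching the 0.02–0.03° angular resolution of the human eye quoted above would require on the order of 12,000 × 6,000 rays, which motivates the acceleration structures of Section 3.3.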

3.2. Intersecting Objects in the 3D Scene (Step B)

With the previously depicted discretization process, it is possible to obtain a set of rays. Each ray is propagated from the viewpoint into the 3D scene, retaining the RGB value of the first intersected object (a well-known process used in rendering pipelines, commonly named ray tracing). In our case, the semantic part of the data and the distance between the viewpoint and the object are also of interest. The diagram in Figure 7 illustrates this: three rays intersect different kinds of objects in a simplified view. For instance, the red ray intersects a building; the distance, the color of the object and the linked information given by its attributes are stored for this ray. Rays with no intersection are classified as out of scope; two cases remain: either the ray leaves the territory bounds (detected later) or it reaches the sky.
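Step B boils down to a nearest-hit loop over the triangles of the scene. The following sketch, which assumes a scene given as (object id, semantic class, triangle list) tuples, uses the standard Möller–Trumbore ray–triangle test; it illustrates the principle only and is not the 3DUSE code:

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])

def ray_triangle(origin, direction, tri, eps=1e-9):
    """Möller–Trumbore test: distance along the ray to the triangle, or None."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = cross(tvec, e1)
    v = dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

def first_hit(origin, direction, scene):
    """Return (object id, semantic class, distance) of the nearest
    intersected triangle, or None for an out-of-scope (sky) ray."""
    best = None
    for obj_id, kind, triangles in scene:
        for tri in triangles:
            t = ray_triangle(origin, direction, tri)
            if t is not None and (best is None or t < best[2]):
                best = (obj_id, kind, t)
    return best

# A ray looking along +y hits a wall triangle about 5 m away:
hit = first_hit((0, 0, 0), (0, 1, 0),
                [(42, 'building', [((-1, 5, -1), (1, 5, -1), (0, 5, 2))])])
```

Keeping the object id and semantic class alongside the distance is what allows the database of Step C/D to link each visible 3D point back to the CityGML attributes of the intersected object.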

3.3. Proposed Data Structure for a Large-Scale Study

Managing a large dataset with numerous rays is only possible with a dedicated data structure, since the entire dataset cannot be loaded at once. A tiling decomposition is necessary to manage such a large dataset: the city model is decomposed into a regular, configurable grid (Figure 8) by an automatic process. Thanks to this organization, only the necessary tiles are loaded, which makes it possible to manage a large area.
An accelerating structure is then provided as a bounding-box hierarchy based on a quadtree; more information can be found in [48]. Each ray is sorted according to the bounding boxes it intersects, so that each tile can be loaded one at a time.
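The tiling logic can be illustrated as follows. This is a simplified 2D sketch with hypothetical names; the actual tool adds a quadtree bounding-box hierarchy [48] on top of this idea:

```python
def tile_index(x, y, origin, tile_size):
    """Map a planar coordinate to its (column, row) tile in the regular grid."""
    return (int((x - origin[0]) // tile_size),
            int((y - origin[1]) // tile_size))

def ray_hits_tile(viewpoint, direction, tile_min, tile_max):
    """2D slab test: does the ray cross the tile's bounding box?"""
    t_near, t_far = 0.0, float('inf')
    for axis in (0, 1):
        if abs(direction[axis]) < 1e-12:
            # Ray parallel to this axis: must already lie inside the slab.
            if not tile_min[axis] <= viewpoint[axis] <= tile_max[axis]:
                return False
        else:
            t1 = (tile_min[axis] - viewpoint[axis]) / direction[axis]
            t2 = (tile_max[axis] - viewpoint[axis]) / direction[axis]
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

def sort_rays_by_tile(rays, viewpoint, tiles):
    """Group rays by the tiles they cross, so each tile is loaded only once.

    `tiles` maps a tile id to its (min corner, max corner) bounding box.
    """
    per_tile = {tile_id: [] for tile_id in tiles}
    for ray in rays:
        for tile_id, (tmin, tmax) in tiles.items():
            if ray_hits_tile(viewpoint, ray, tmin, tmax):
                per_tile[tile_id].append(ray)
    return per_tile
```

A regular 1 × 1 km grid such as the one in Figure 8 corresponds to `tile_size = 1000` in projected meters; sorting the rays per tile is what lets each tile be loaded and tested one at a time.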

3.4. Generating the Database (Step C and D)

The given intersection of each ray is the starting point for producing a database composed of georeferenced 3D points with a link to the intersected object and its attributes. It is then easy to provide simple query information for a given viewpoint, such as visible buildings, or advanced information with linked attributes (tree species, number of road lanes, etc.).
Another application may be to compute the view composition in terms of percentages for a given viewpoint, as well as the sky view factor (i.e., the delimitation between the sky and the other city objects for a given viewpoint [49,50]), with additional semantic information. An example of a synthetic view is provided in Figure 9 and Figure 10.
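Given the per-ray hit records of Step B, the view composition and the sky view factor reduce to simple counting, as in this hypothetical sketch (record layout and function names are ours):

```python
from collections import Counter

def view_composition(hits, n_rays):
    """Percentage of the field of view occupied by each object class.

    `hits` holds one (object id, semantic class, distance) record per
    intersecting ray; rays with no intersection count as sky.
    """
    counts = Counter(kind for _, kind, _ in hits)
    counts['sky'] = n_rays - len(hits)
    return {kind: 100.0 * n / n_rays for kind, n in counts.items()}

def sky_view_factor(hits, n_rays):
    """Fraction of the rays that reach the sky from the viewpoint."""
    return (n_rays - len(hits)) / n_rays
```

Because each record also carries the object id, the same counting can be refined by any linked attribute (tree species, building use, etc.) with a plain GIS or database query.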
To summarize this section, the method provides a rich panel of output data. All the results are reproducible by delivering our method via an open-source platform called 3DUSE (3D-Urban Scene Editor) (https://github.com/VCityTeam/3DUSE (accessed on 7 September 2022)) which also contains a collection of other components for manipulating, processing and visualizing urban data. 3DUSE is a desktop solution, and some of the developed components are now available as services for a web-based solution called UD-SV (Urban Data Services and Visualization) (https://github.com/VCityTeam/UD-SV (accessed on 7 September 2022)). Our goal is to incorporate the visibility analysis component into UD-SV in the near future. This work is developed in the context of the VCity (https://projet.liris.cnrs.fr/vcity/ (accessed on 7 September 2022)) project of the LIRIS (https://liris.cnrs.fr/ (accessed on 7 September 2022)) laboratory. The generic approach proposed in this article is ensured by using a standard and a large amount of open data already available (for instance, in CityGML format). This 3D geometrical and semantic information, often provided on a large scale, makes it possible to deliver numerous outputs with our method that are valuable for the applications presented in the next section.

4. Applications for Skyline Assessments in the Lyon Metropolitan Area

4.1. Data Used for Our Study

In order to test the tools presented in the previous section, we worked with practitioners from the Lyon Metropolis, using the freely available 3D data covering the entire metropolitan territory (more than 550 sq km of data are available). These data are composed of 3D models and semantic information provided in CityGML format (with buildings, terrain, roads and watercourses). Additional data, such as vectorial 2D databases, describe the territory (more than 1000 different datasets are available in the Lyon open data). For instance, LiDAR data and orthophotos can be used to generate an additional layer describing a 3D vegetal canopy over the entire territory (the description of this work on vegetation is out of our scope). Figure 11 shows how the urban landscape is represented through the CityGML files available on the open data portal of the Lyon Metropolis. Such data are used as input for the visibility analysis tools. Other layers can easily be computed, for instance, terrain areas, additional city furniture (for example, park benches, street lights, etc.), 3D roads, etc.

4.2. Geometrical Accuracy for a More Precise Description of the Skyline

Geometrical accuracy of the analysis is the primary advantage of our method, which provides accurate results regarding the masking effects of the geometry of the terrain, buildings and vegetation on the visibility of a given building or landmark (i.e., the results can be as precise as the original data). It is also possible to consider the terrains as discontinuity jumps while studying the skyline.
This gain in precision in terms of geometrical analysis was identified as a key point in the examination of high-rise construction projects, which could have impacts on places far from the building. Indeed, practitioners working in London explained to us that despite all the visibility analysis carried out for the presentation and discussion of the “Shard” Tower project [51], its visibility from several boroughs and their public areas had not been anticipated due to a lack of accurate analyses. As a result, some district councils were not contacted to discuss the project, and the councilors and residents of those areas discovered the impact of the Shard on their daily landscape once the building was built, which sparked both dissatisfaction and mistrust in consultation processes. In the case of high-rise buildings such as The Shard (https://en.wikipedia.org/wiki/The_Shard (accessed on 7 September 2022)), whose iconic aspect was put forward by its promoters, the lack of precision of the visibility analysis can, therefore, lead to errors in governance (absence of prior consultation of people affected by a project), which can negatively impact relationships between local governments and inhabitants and introduce long-term mistrust.
This gain in precision can be seen visually when exploring the results of the analysis, but we also wanted to quantify this performance by comparing our results with a 2.5D analysis.

4.2.1. Characterization of Geometrical Accuracy for the Visual Impact Assessment of a Specific Building

In our case study on the metropolis of Lyon (>550 sq km), using the visibility analysis tool of a common GIS software program (ESRI ArcGIS (http://www.esri.com/arcgis/about-arcgis (accessed on 7 September 2022))), we could only perform a 2.5D visibility analysis on a 1-m-resolution DEM covering 48 sq km (only the City of Lyon). Figure 12 shows how the urban morphology is rendered through a 1-m-resolution raster DEM, while Figure 13 makes it possible to visually compare the representation of a given area through a 1-m raster DEM and a 3D vector model. This sheds light on the crucial importance of processing 3D vector data, as offered by our tool, so as to use the most precise description of building shapes as input and to be able to take into account the masking effects of vegetation. Currently, existing 2.5D raster analyses, performed on raster data limited in size (by machine and/or software limitations), simplify the geometry of the vegetation and buildings when carried out over large areas (districts, cities, metropolis, etc.). On very large scales (metropolis, region, etc.), 2.5D raster data can hardly even take into account the geometry of built structures, let alone the masking effects of vegetation. In our work on the landscape of the metropolis of Lyon with ArcGIS, we had to downsize our DEM to a resolution of 25 m in order to work on an area larger than the city itself (70,000 sq km); the analysis of such a large area was nevertheless interesting, as the work on landscape composition included distant mountains visible under specific meteorological conditions (in our case, the visibility of Mont Blanc from the city center of Lyon).
In order to assess the benefit of our tool in terms of precision compared to raster analysis, we studied the visibility of a well-known Lyon monument, the Fourvière Basilica. Located on a hill in the city, it is visible from many places in the city, including from a distance. This allowed us to test the method and demonstrate its potential to our practitioner partners. We thus generated an analysis using a classic GIS tool (ArcGIS), using the raster Digital Elevation Model (with 1-m resolution) mentioned above, as well as an analysis using our tool from a 3D vector dataset in CityGML. The comparison of the results was carried out only on the area containing the city of Lyon (48 sq km) as it was not possible to produce precise results over a wider area with the ArcGIS raster analysis.
At first glance, a 2D visual comparison shows relatively similar results, with the spatial definition of the visible areas appearing rather close. However, the 3D visualization of the results clearly reveals a difference in geometrical precision (Figure 14).
The images on the right of Figure 14, obtained with our tool, demonstrate the geometrical precision of the results produced, which are well superimposed on the facades of the 3D buildings. Each point depicted on a facade, roof, vegetation or the terrain is visible from the Fourvière Basilica.

4.2.2. Quantitative Analysis of the Gain in Geometrical Accuracy

A more quantitative analysis of the results helps to highlight the strengths of our tool in terms of precision compared to the raster analysis. Table 2 shows the total number of visible points from a chosen viewpoint generated by our tool, the number of these points which are common to the raster analysis and those which were "missed" by the raster analysis. The first column gives the gross comparison of the number of points; the second shows the same comparison after adding a buffer of one meter around the points produced by our tool (each 3D point becomes a 1-m-diameter disk) in order to bring the results of our tool back to the resolution of the digital elevation model used for the raster analysis; the third gives the comparison for the results of the visibility analysis beyond 1 km around the observation point.
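The buffer-based point comparison can be sketched in a few lines. The following is a minimal illustration with made-up coordinates, not the actual datasets, which compared millions of points against a 1-m DEM:

```python
import math

def match_rate(tool_points, raster_points, buffer_m=1.0):
    """Fraction of the tool's visible 3D points lying within buffer_m
    (measured in 2D) of a point confirmed visible by the raster analysis."""
    matched = 0
    for x, y, _z in tool_points:
        if any(math.hypot(x - rx, y - ry) <= buffer_m for rx, ry in raster_points):
            matched += 1
    return matched / len(tool_points)

# Made-up sample: three visible points from the 3D tool, two raster cells.
tool = [(0.0, 0.0, 12.0), (0.4, 0.3, 15.0), (10.0, 10.0, 3.0)]
raster = [(0.0, 0.0), (5.0, 5.0)]
print(match_rate(tool, raster))  # two of the three points are matched
```

Points without a raster counterpart within the buffer correspond to the "missed" column of Table 2.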
Table 2 shows that the difference in precision between our results and those obtained from a raster analysis is greater for areas close to the chosen viewpoint. This is explained by the method used, as ray tracing produces a denser point sampling over short distances. Thus, for near distances (less than 1 km from the observation point), the precision gain is obvious, confirming the previous visual analysis of the results. For far-off distances, the precision enhancement is also not negligible and confirms the advantage of using our method for analyses of both close and far-off areas. Although the data are less precise over long distances, their accuracy remains higher than that of raster analyses, which are, as we have already mentioned, limited in resolution for analyses over large areas. From this particular viewpoint, our tool makes it possible to carry out a more precise analysis with reduced calculation times: a raster analysis over a large area takes several hours, whereas our tool takes a few minutes, or at most a few tens of minutes (30 min maximum for the analyses presented here).
To confirm our first results, we repeated these comparisons for visibility analyses from the top of four other landmarks in the urban Lyon landscape. The following tables (Table 3, Table 4 and Table 5) show these results in full.
There is a more notable difference for some monuments in terms of distant data (beyond 1 km). This is due to the urban morphology of Lyon and the positioning of these monuments, which are not visible from areas far from the city center because they are surrounded by several hills or hidden by tall buildings. Moreover, for the same monuments, integrated in a dense urban fabric, the results of our tool are much more precise for close areas (less than 1 km from the monument); in a nearby environment, our method remains more accurate in all cases. We can thus show the advantage of our tool, which is more precise than analyses derived from raster data and suitable both for generating analyses in dense urban centers and for obtaining reliable data over long distances. The tests were limited here to the city of Lyon (a 48-sq-km territory) because of the intrinsic limitations of the raster mode (which induces large datasets at precise resolutions). With the same computer used for the raster analysis limited to 48 sq km, we were able to carry out analyses on the whole territory of the metropolis (>500 sq km), integrating all buildings and vegetation.

4.3. Example of Uses of the Data Produced by Our Tool

Our tool generates multiple results for each viewpoint: quantitative data, 3D georeferenced data and several images. Regarding quantitative data, dashboards can be constructed on demand thanks to the data stored in the database. For instance, for a given area, the attributes linked to each point make it possible to differentiate the building, noticeable building, vegetation and terrain categories. The 3D georeferenced data may be used as colored 3D points with a list of attributes that can be highlighted in a 3D view (for instance, Figure 13); they can also be mixed with polygonal (CityGML) information. Images can also be produced with ray tracing: depth maps with dedicated colors, images with building categorization, etc. All these data can be used to provide visual support for collaborative discussion and decision-making. Some examples are detailed in the next sections.
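As an illustration, one such dashboard figure, the share of each category in the view, could be derived from the stored points as follows. The point records and category names here are hypothetical stand-ins for the database contents:

```python
from collections import Counter

def view_composition(points):
    """Percentage of visible points per CityGML category, as could be
    used to build an on-demand dashboard for a given viewpoint."""
    counts = Counter(p["category"] for p in points)
    total = sum(counts.values())
    return {cat: 100.0 * n / total for cat, n in counts.items()}

# Hypothetical result points carrying a `category` attribute.
pts = ([{"category": "building"}] * 6
       + [{"category": "vegetation"}] * 3
       + [{"category": "terrain"}])
print(view_composition(pts))  # {'building': 60.0, 'vegetation': 30.0, 'terrain': 10.0}
```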

4.4. GIS Analysis of 3D Georeferenced Results

In this section, we present two possible uses of the 3D georeferenced results produced by our tool as illustrations of some of the possibilities offered by our method. In fact, many analyses can be performed in GIS tools using 3D georeferenced data, a complete overview of which is out of the scope of this study.

4.4.1. Visual Impact of Buildings in Context

Our first illustrative example shows how selection based on semantic data and basic statistics can provide useful insights into the visual impact of a building on a district scale. This is useful for practitioners in order to verify the preservation of some particularly significant viewpoints from public spaces (views of monuments accessible by all citizens).
Our tool provides 3D points indicating a 3D location from where the target of our analysis is visible. Each point contains, in its attribute table, the nature of the corresponding CityGML element (whether terrain, vegetation, building, etc.) and its original CityGML identifier so that any data from the original 3D model can be linked to it (through a simple attribute join). For instance, in the case of the visibility analysis of the Fourvière Basilica presented above (see Section 4.2.1), we can derive precise results about its visibility in a given district from the data provided on the city of Lyon as a whole. The 2.5D raster visibility analysis provides a binary result (whether or not the target is seen) but does not provide data on what the resulting pixel entails (if it is a public space, a private building, etc.). Practitioners often use a spatial join to try to create such information (Apur 2014), but the accuracy of the result depends on the size of the pixel and the density of the urban fabric. In our case, each resulting 3D point corresponds to a unique element from the CityGML database (terrain, vegetation or building, etc.). It is then easy to count, for example, from how many buildings the target is seen and to precisely determine from which location the target is seen (which floor of a building or which area of a public space such as a square or a park).
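The attribute join described above can be illustrated with a small in-memory SQL sketch. The table layout, identifiers and values are illustrative assumptions, not the actual database schema:

```python
import sqlite3

# Hypothetical schema: each result point carries the CityGML id of the
# object it lies on; a join recovers any attribute of the original model.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE visible_points (x REAL, y REAL, z REAL, gml_id TEXT);
CREATE TABLE citygml_objects (gml_id TEXT PRIMARY KEY, type TEXT);
INSERT INTO citygml_objects VALUES
  ('BLD_1','building'), ('BLD_2','building'), ('VEG_1','vegetation');
INSERT INTO visible_points VALUES
  (0,0,5,'BLD_1'), (1,0,8,'BLD_1'), (2,1,6,'BLD_2'), (3,3,10,'VEG_1');
""")

# From how many distinct buildings is the target seen?
(n,) = con.execute("""
    SELECT COUNT(DISTINCT o.gml_id)
    FROM visible_points p JOIN citygml_objects o ON p.gml_id = o.gml_id
    WHERE o.type = 'building'
""").fetchone()
print(n)  # 2
```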
Figure 15 shows the results of the visibility analysis from the Fourvière Basilica with a different color attached to each resulting 3D point according to its type. Green points indicate the presence of vegetation in the landscape seen from the Basilica (our tool also provides a quantitative result of the percentages of vegetation, building and terrain in the view). However, those points would not generally indicate a location from which the Basilica can be seen by inhabitants (since few of them are likely to climb trees to view it). Yellow points indicate public spaces from which the Basilica can be seen, and using them together with grey points, representing buildings, we can draw up an analysis of the Fourvière Basilica’s visibility on the scale of a large square (Bellecour square, 62,000 sq m).
Figure 16 shows the results of the raster analysis of the visibility of the Fourvière Basilica from this square, and Figure 17 adds the results from our tool. The main difference between our tool and the raster analysis is the better precision of both the building and vegetation masking effects (thanks to the use of more precise 3D geometry) and the ability to easily distinguish which results correspond to viewpoints accessible to citizens.
Thanks to our data, we can determine from which buildings the Fourvière Basilica can be seen more precisely. In fact, results from most raster analyses, being 2.5D, only provide information from the highest point of a building, which is often its rooftop. Since it is difficult to know whether rooftops are accessible to citizens, this does not give a clear picture of the viewpoints of the Basilica that are actually accessible. Our tool allows for more precise analyses, thus enabling practitioners to tackle the issue of the extent of public access to remarkable views and to quantify the privatization of remarkable views in the case of the construction of high rises.
Figure 18 shows the results from the raster analysis. For most of the buildings around the square, the raster results show both green and red pixels. A simple GIS request makes it possible to count the number of green and red pixels in each building, but this only concerns building rooftops.
Our tool allows us to pinpoint exactly which building is concerned and to distinguish between the visibility of the Basilica from facades and from the roofs. We can thus select buildings that have a facade from which the Basilica can be seen (see Figure 19).
Quantitatively speaking, results from the raster analysis indicate that the Basilica is visible from 119 buildings around the square (be that from their facades or roofs). Our analysis shows that the Basilica is only seen from the facades of 56 buildings. We can even calculate, through SQL queries, the percentage of the facades of those buildings that are oriented towards the Basilica or the floors from which the Basilica can be seen.
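A query of that kind might look as follows; the table, the 3-m storey height and the values are illustrative assumptions, not the study's actual schema or data:

```python
import sqlite3

# Hypothetical table of facade points from which the target is seen;
# the floor is derived from point height, assuming ~3-m storeys.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE facade_points (gml_id TEXT, z_above_ground REAL);
INSERT INTO facade_points VALUES
  ('BLD_1', 1.5), ('BLD_1', 10.0),
  ('BLD_2', 13.2);
""")

# Lowest floor of each building from which the target is visible.
rows = con.execute("""
    SELECT gml_id, MIN(CAST(z_above_ground / 3.0 AS INTEGER)) AS lowest_floor
    FROM facade_points GROUP BY gml_id ORDER BY gml_id
""").fetchall()
print(rows)  # [('BLD_1', 0), ('BLD_2', 4)]
```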
Finally, our method offers more precise results than raster analysis and opens up possibilities for city planners to quantify and qualify public and private access to remarkable views in each district (even on the scale of each building). This can also be used to quantify the privatization of those views for a given project by performing the analysis on the existing dataset and again with the project inserted into the CityGML model, enhancing public debate.

4.4.2. Detecting the Visibility of a Building from Vantage Points

Our second illustrative example developed with practitioners is a work on the qualification of public belvederes, i.e., public spaces that offer vantage points with views of remarkable buildings and landscapes. This is of interest to practitioners since belvederes enable everyone to enjoy the urban scenery. As far as planning is concerned, identifying possible locations for public vantage points or belvederes and working on their accessibility and equipment (public benches, for instance) are ways to grant every citizen access to the urban scenery.
In the case of Lyon, we have used the visibility analysis from several landmarks in a GIS tool, along with other data (public transport, roads, topography and orientation), in order to create a database of existing belvederes and provide a diagnosis of their accessibility. For each vantage point, we have computed the number of landmarks seen, its altitude, its accessibility from the public transport network (walking distance from the nearest public transport stop), the effort required to walk up to it given the slope, and its exposure to the sun according to its orientation. This database also contains the equipment of each belvedere, drawn from existing data, and can be used to develop a public strategy to enhance citizens’ access to the skyline. Figure 20 shows visualizations of the data in a GIS tool: belvederes classified by the number of landmarks that can be seen and by the effort required to access them on foot (calculated in calories).
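The ranking of vantage points could be sketched as below. The effort coefficients, field names and sample values are rough illustrative assumptions, not the formula or data used in the study:

```python
def walking_effort_kcal(distance_m, elevation_gain_m, body_mass_kg=70.0):
    """Rough walking-effort estimate (kcal): a flat-ground cost plus an
    extra cost per metre climbed. Coefficients are illustrative only."""
    flat_kcal = 0.05 * distance_m * body_mass_kg / 70.0        # ~0.05 kcal/m at 70 kg
    climb_kcal = 0.4 * elevation_gain_m * body_mass_kg / 70.0  # ~0.4 kcal/m climbed
    return flat_kcal + climb_kcal

def rank_belvederes(belvederes):
    """Sort vantage points by landmarks seen (descending), then effort (ascending)."""
    return sorted(
        belvederes,
        key=lambda b: (-b["landmarks_seen"],
                       walking_effort_kcal(b["distance_m"], b["gain_m"])),
    )

# Hypothetical belvedere records.
spots = [
    {"name": "A", "landmarks_seen": 3, "distance_m": 800, "gain_m": 60},
    {"name": "B", "landmarks_seen": 5, "distance_m": 1500, "gain_m": 120},
    {"name": "C", "landmarks_seen": 5, "distance_m": 600, "gain_m": 40},
]
print([b["name"] for b in rank_belvederes(spots)])  # ['C', 'B', 'A']
```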
Our data were also used to work on pedestrian itineraries on the hills of Lyon in order to automatically determine how many landmarks can be seen from a tourist path and where particular vantage points are situated. These analyses require only an SQL query, since all the necessary data are available in georeferenced format. This is why we found it particularly useful to produce some of our results in a 3D georeferenced vector format.

4.5. Outlook: Use of Images Produced by Our Tools for the Visual Analysis of High-Rise Projects and Their Impact on the Skyline

While our 3D vector results offer a great number of possibilities in terms of analysis, our tool also produces images for each analysis performed. According to the practitioners working with us on the project, images are very useful to many actors (practitioners, elected representatives and citizens) for visual analysis when working on a construction project or the design of a district, especially when it contains high rises. These images can be particularly useful for testing various hypotheses and evaluating the impact of a new building on the skyline when designing a project.
Today, in professional practice, many projects are evaluated through photomontages, which are difficult to produce with accuracy. Several cases of such images misleading a public debate have already been documented [37]. For this reason, practitioners involved in the SKYLINE project expressed their wish for images that are geometrically reliable and graphically restrained (especially without texture, so that what is visually analyzed is actually the shape of the building). Figure 21 shows an example of an imaginary project in a central business district of Lyon, seen from a popular public vantage point. Such images can help practitioners and elected representatives work on designs that fit in with the existing skyline and present their proposals to a larger audience.
Along with the images, the quantitative data produced by our tool enabled us to draw up a list of 70 buildings that would lose their view of the Fourvière Basilica if the project were constructed.

4.6. Discussion

The study of the visibility of landmarks from a given viewpoint, or of a given landmark from multiple viewpoints, is important. The former ensures that certain landmarks remain visible from a given location, and the latter, as illustrated in Figure 20, helps to identify key vantage points that could be used by urban planners and even proposed to tourists. The use of 3D data, compared to 2D or 2.5D data, allowed us to obtain a much finer visibility analysis. In addition, by making use of standards such as CityGML, our approach can be extended to any other city in a reproducible manner, ensuring that the entire computational process is repeatable. Delivering data and code along with complete documentation also contributes to reproducibility. Compared to traditional tools, our approach can also be used to study the visibility of far-away objects. The visibility indicators can be used to qualify certain areas and can be combined with other indicators to propose a measure of the visual ambiance of these locations.
However, such analyses are sometimes cumbersome, especially when they involve objects separated by several kilometers. Furthermore, they require complete 3D models, since, in large-scale analyses, multiple objects may play a role in the overall visual impact of a particular object of interest. With the development of versioned city models that can propose competing versions of an urban area, as illustrated in Chaturvedi et al. [52] and Samuel et al. [53], visibility analyses can also be used to compare the visual impact of multiple competing urban projects. Finally, we have not considered the density of obstacles in this work.

5. Conclusions

Using 3D CityGML vector data, our tool provides accurate and reliable results in terms of visibility analysis and skyline assessment and can be used over very large areas. In this article, we demonstrated our proposed method of visibility analyses for the city of Lyon, especially for some of the major historical monuments by making use of 3D data models. Our analyses show a significant improvement compared to tools based on 2.5D raster data. Our tool, called 3DUSE, an open-source solution, also allows for quantitative and qualitative (through images) analyses of the skyline on various scales, enabling users to adapt the ray tracing resolution to their needs. The originality of our method lies in its versatility, the accuracy and diversity of the analysis results, as well as the optimization of the processing time. Our method opens up new opportunities for skyline assessment and can help practitioners in various areas (identifying accessible public spaces with remarkable views, quantifying privatization of access to remarkable views, producing quantitative and qualitative data for public debates on landscape projects affecting the skyline, etc.).
The ability of the visibility analysis to cover very large areas is particularly interesting for work on areas sharing a common landscape. It can, however, lead to time-consuming processing and large resulting datasets, which need to be explored, sorted and analyzed through specific queries, requiring some technical knowledge of GIS. The images and tables produced by our tool for every analysis can be used directly for discussion with a wider audience.
The accuracy of the results produced by our tool depends directly on the 3D vector data used as input (its geometrical precision and available semantic data). Nevertheless, our process could be improved in order to provide additional results to complete the visibility analysis. For instance, complementary processes could be developed in order to take into account meteorological data or human vision limitations [10] affecting visibility at a given time and place. Working with a temporal 3D model would thus be relevant in order to produce variable analyses according to the time of day/year. This would add further complexity with regard to handling the results, while practitioners and citizens have expressed a great need for accessible data. For this reason, subsequent work may also focus on providing results structured in a database with predefined queries adapted to the user’s objectives.

Author Contributions

Conceptualization, Frédéric Pedrinis and Gilles Gesquière; methodology, Frédéric Pedrinis; software, Frédéric Pedrinis; validation, Manuel Appert, Florence Jacquinod and Gilles Gesquière; formal analysis, Gilles Gesquière; investigation, Frédéric Pedrinis; resources, Gilles Gesquière; data curation, Frédéric Pedrinis; writing—original draft preparation, Frédéric Pedrinis; writing—review and editing, Frédéric Pedrinis and John Samuel; visualization, Frédéric Pedrinis, Manuel Appert, Florence Jacquinod and Gilles Gesquière; project administration, Manuel Appert and Gilles Gesquière; funding acquisition, Manuel Appert and Gilles Gesquière. All authors have read and agreed to the published version of the manuscript.

Funding

This work was performed within the framework of the LABEX IMU (ANR-10-LABX-0088) of Université de Lyon, as part of the “Investissements d’Avenir” (ANR-11-IDEX-0007) program operated by the French National Research Agency (ANR).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study are based on the open data of Lyon (https://data.grandlyon.com/ (accessed on 7 September 2022)).

Acknowledgments

The SKYLINE Project, which brought together researchers and practitioners, was funded by the French National Research Agency. We wish to thank all participants in the consortium, and especially practitioners from Lyon (François Brégnac) and London (Rosemarie McQueen and Jane Hamilton). We are very grateful for their involvement and contributions. Masters students (from the Universities of Saint-Etienne, Lyon, and Marne-La-Vallée) also helped us to work on methods and use cases (Anne-Thérèse Auger, Maximilien Brossard, Sylvain Jourdana, Antoine Laurent, Geoffrey Mouret, Naïme Osseni and Sébastien Truel): we would like to sincerely thank them for their work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Polonsky, O.; Patané, G.; Biasotti, S.; Gotsman, C.; Spagnuolo, M. What’s in an image? Towards the computation of the “best” view of an object. Vis. Comput. 2005, 21, 840–847. [Google Scholar] [CrossRef]
  2. ISO Technical Committee 211 Geographic Information/Geomatics. ISO 19107:2019. 2019. Available online: https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/standard/06/61/66175.html (accessed on 24 August 2022).
  3. Engel, J.; Döllner, J. Approaches Towards Visual 3D Analysis for Digital Landscapes and Its Applications. Digit. Landsc. Archit. Proc. 2009, 2009, 33–41. [Google Scholar]
  4. O’Sullivan, D.; Turner, A. Visibility graphs and landscape visibility analysis. Int. J. Geogr. Inf. Sci. 2001, 15, 221–237. [Google Scholar] [CrossRef]
  5. Labib, S.; Huck, J.J.; Lindley, S. Modelling and mapping eye-level greenness visibility exposure using multi-source data at high spatial resolutions. Sci. Total Environ. 2021, 755, 143050. [Google Scholar] [CrossRef]
  6. Esch, T.; Asamer, H.; Bachofer, F.; Balhar, J.; Boettcher, M.; Boissier, E.; d’Angelo, P.; Gevaert, C.M.; Hirner, A.; Jupova, K.; et al. Digital world meets urban planet—New prospects for evidence-based urban studies arising from joint exploitation of big earth data, information technology and shared knowledge. Int. J. Digit. Earth 2020, 13, 136–157. [Google Scholar] [CrossRef]
  7. Foltête, J.C.; Ingensand, J.; Blanc, N. Coupling crowd-sourced imagery and visibility modelling to identify landscape preferences at the panorama level. Landsc. Urban Plan. 2020, 197, 103756. [Google Scholar] [CrossRef]
  8. Shang, M.; Wang, S.; Zhou, Y.; Du, C.; Liu, W. Object-based image analysis of suburban landscapes using Landsat-8 imagery. Int. J. Digit. Earth 2019, 12, 720–736. [Google Scholar] [CrossRef]
  9. Drozdz, M.; Appert, M.; Harris, A. High-rise urbanism in contemporary Europe. Built Environ. 2018, 43, 469–480. [Google Scholar] [CrossRef]
  10. Nijhuis, S. GIS-based Landscape Design Research: Exploring Aspects of Visibility in Landscape Architectonic Compositions. In Geodesign by Integrating Design and Geospatial Sciences; Lee, D.J., Dias, E., Scholten, H.J., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 193–217. [Google Scholar] [CrossRef]
  11. Nijhuis, S. Applications of GIS in landscape design research. Res. Urban. Ser. 2016, 4, 43–56. [Google Scholar] [CrossRef]
  12. Cassatella, C.; Voghera, A. Indicators Used for Landscape. In Landscape Indicators: Assessing and Monitoring Landscape Quality; Cassatella, C., Peano, A., Eds.; Springer: Dordrecht, The Netherlands, 2011; pp. 31–46. [Google Scholar] [CrossRef]
  13. Rosa, D.L. The observed landscape: Map of visible landscape values in the province of Enna (Italy). J. Maps 2011, 7, 291–303. [Google Scholar] [CrossRef]
  14. Nutsford, D.; Reitsma, F.; Pearson, A.L.; Kingham, S. Personalising the viewshed: Visibility analysis from the human perspective. Appl. Geogr. 2015, 62, 1–7. [Google Scholar] [CrossRef]
  15. Hernández, J.; García, L.; Ayuga, F. Assessment of the visual impact made on the landscape by new buildings: A methodology for site selection. Landsc. Urban Plan. 2004, 68, 15–28. [Google Scholar] [CrossRef]
  16. La Fabrique du Paysage Métropolitain 2—Au Coeur de l’agglomération Parisienne, Quels Outils Pour Une Gestion Commune du Grand Paysage? 2014. Available online: https://www.apur.org/fr/nos-travaux/fabrique-paysage-metropolitain-2-coeur-agglomeration-parisienne-outils-une-gestion-commune-grand-paysage (accessed on 7 September 2022).
  17. Jaillot, V.; Rigolle, V.; Servigne, S.; Samuel, J.; Gesquière, G. Integrating multimedia documents and time-evolving 3D city models for web visualization and navigation. Trans. GIS 2021, 25, 1419–1438. [Google Scholar] [CrossRef]
  18. Appert, M. Ville globale versus ville patrimoniale? Des tensions entre libéralisation de la skyline de Londres et préservation des vues historiques. Rev. Géograph. l’Est 2008, 48, 1–2. [Google Scholar] [CrossRef]
  19. Harris, A. Livingstone versus Serota: The High-rise Battle of Bankside. Lond. J. 2008, 33, 289–299. [Google Scholar] [CrossRef]
  20. Appert, M.; Montes, C. Skyscrapers and the redrawing of the London skyline: A case of territorialisation through landscape control. Articulo 2015, 18. [Google Scholar] [CrossRef]
  21. Montanari, G. Privatizing the sky? Tall buildings in the historical urban landscape: The case of Turin. Géocarrefour 2017, 91. [Google Scholar] [CrossRef]
  22. Dixon, M. Gazprom versus the Skyline: Spatial Displacement and Social Contention in St. Petersburg. Int. J. Urban Reg. Res. 2010, 34, 35–54. [Google Scholar] [CrossRef]
  23. Puspitasari, A.W.; Kwon, J. A reliable method for visibility analysis of tall buildings and skyline: A case study of tall buildings cluster in Jakarta. J. Asian Archit. Build. Eng. 2021, 20, 356–367. [Google Scholar] [CrossRef]
  24. Bornaetxea, T.; Marchesini, I. r.survey: A tool for calculating visibility of variable-size objects based on orientation. Int. J. Geogr. Inf. Sci. 2021, 36, 429–452. [Google Scholar] [CrossRef]
  25. Peng, Y.; Nijhuis, S.; Zhang, G. Towards a Practical Method for Voxel-Based Visibility Analysis with Point Cloud Data for Landscape Architects: Jichang Garden (Wuxi, China) as an Example; Wichmann Verlag: Berlin, Germany, 2022. [Google Scholar]
  26. Leduc, T.; Chaillou, F.; Ouard, T. Towards a “typification” of the Pedestrian Surrounding Space: Analysis of the Isovist Using Digital processing Method. In Advancing Geoinformation Science for a Changing World; Geertman, S., Reinhardt, W., Toppen, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 275–292. [Google Scholar] [CrossRef]
  27. Weitkamp, G. Mapping landscape openness with isovists. In Exploring the Visual Landscape: Advances in Physiognomic Landscape Research in The Netherlands; IOS Press: Berlin, Germany, 2011; pp. 205–224. [Google Scholar] [CrossRef]
  28. Benedikt, M.L. To Take Hold of Space: Isovists and Isovist Fields. Environ. Plan. Plan. Des. 1979, 6, 47–65. [Google Scholar] [CrossRef]
  29. Bartie, P.; Reitsma, F.; Kingham, S.; Mills, S. Advancing visibility modelling algorithms for urban environments. Comput. Environ. Urban Syst. 2010, 34, 518–531. [Google Scholar] [CrossRef]
  30. Yang, P.P.J.; Putra, S.Y.; Li, W. Viewsphere: A GIS-Based 3D Visibility Analysis for Urban Design Evaluation. Environ. Plan. Plan. Des. 2007, 34, 971–992. [Google Scholar] [CrossRef]
  31. Caha, J. Line of Sight Analyst: ArcGIS Python Toolbox for visibility analyses. Geogr. Cassoviensis 2018, 12. Available online: https://uge-share.science.upjs.sk/webshared/GCass_web_files/articles/GC-2018-12-1/2018_1_Caha.pdf (accessed on 7 September 2022).
  32. Ruzickova, K.; Ruzicka, J.; Bitta, J. A new GIS-compatible methodology for visibility analysis in digital surface models of earth sites. Geosci. Front. 2021, 12, 101109. [Google Scholar] [CrossRef]
  33. Cortés, F.G.; Leduc, T. GGL: A geo-processing definition language that enhance spatial SQL with parameterization. In Proceedings of the 13th AGILE International Conference on Geographic Information Science, Guimarães, Portugal, 11–14 May 2010. [Google Scholar]
  34. Rana, S. Isovist Analyst—An Arcview extension for planning visual surveillance. In Proceedings of the ESRI International User Conference, San Diego, CA, USA, 10 August 2006. [Google Scholar]
  35. Suleiman, W.; Joliveau, T.; Favier, E. Une nouvelle méthode de calcul d’isovist en 2 et 3 dimensions. In Proceedings of the Actes de la Conférence Internationale de Géomatique et Analyse Spatiale-SAGEO, Liège, Belgium, 6–9 November 2012; pp. 366–386. [Google Scholar]
  36. Biljecki, F.; Stoter, J.; Ledoux, H.; Zlatanova, S.; Çöltekin, A. Applications of 3D City Models: State of the Art Review. ISPRS Int. J. Geo-Inf. 2015, 4, 2842–2889. [Google Scholar] [CrossRef]
  37. Mericskay, B. Cartographie en Ligne et Planification Participative: Analyse des Usages du géoweb et d’Internet dans le débat Public à travers le cas de la Ville de Québec. Ph.D. Thesis, Université Laval, Quebec City, QC, Canada, 2013. [Google Scholar]
  38. Morello, E.; Ratti, C. A Digital Image of the City: 3D Isovists in Lynch’s Urban Analysis. Environ. Plan. Plan. Des. 2009, 36, 837–853. [Google Scholar] [CrossRef]
  39. Suleiman, W.; Joliveau, T.; Favier, E. A New Algorithm for 3D Isovists. In Advances in Spatial Data Handling: Geospatial Dynamics, Geosimulation and Exploratory Visualization; Timpf, S., Laube, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 157–173. [Google Scholar] [CrossRef]
  40. Caldwell, D.; Mineter, M.; Dowers, S.; Gittings, B. Analysis and visualization of visibility surfaces. In Proceedings of the 7th International Conference on GeoComputation, Southampton, UK, 8–10 September 2003; University of Southampton: Southampton, UK, 2003. Available online: http://www.geocomputation.org/2003/ (accessed on 9 September 2022).
  41. Llobera, M. Extending GIS-based visual analysis: The concept of visualscapes. Int. J. Geogr. Inf. Sci. 2003, 17, 25–48. [Google Scholar] [CrossRef]
  42. Choudhury, F.M.; Ali, M.E.; Masud, S.; Nath, S.; Rabban, I.E. Scalable visibility color map construction in spatial databases. Inf. Syst. 2014, 42, 89–106. [Google Scholar] [CrossRef]
  43. Rabban, I.E.; Abdullah, K.; Ali, M.E.; Cheema, M.A. Visibility Color Map for a Fixed or Moving Target in Spatial Databases. In Proceedings of the Advances in Spatial and Temporal Databases; Claramunt, C., Schneider, M., Wong, R.C.W., Xiong, L., Loh, W.K., Shahabi, C., Li, K.J., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 197–215. [Google Scholar]
  44. Czyńska, K. Application of Lidar Data and 3D-City Models in Visual Impact Simulations of Tall Buildings. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2015, XL-7/W3, 1359–1366. [Google Scholar] [CrossRef]
  45. Danese, M.; Nolè, G.; Murgante, B. Visual Impact Assessment in Urban Planning. In Geocomputation and Urban Planning; Murgante, B., Borruso, G., Lapucci, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 133–146. [Google Scholar] [CrossRef]
  46. Wandell, B.; Thomas, S. Foundations of Vision. Psyccritiques 1997, 42, 1–61. [Google Scholar]
  47. Riggs, L.A. Visual acuity. In Vision and Visual Perception; John Wiley and Sons, Inc.: New York, NY, USA, 1965; pp. 321–349. [Google Scholar]
  48. Pedrinis, F. Représentations et Dynamique de la Ville Virtuelle. (Representations and Dynamics of the Virtual City). Ph.D. Thesis, Lumière University Lyon 2, Lyon, France, 2017. [Google Scholar]
  49. Watson, I.D.; Johnson, G.T. Graphical estimation of sky view-factors in urban environments. J. Climatol. 1987, 7, 193–197. [Google Scholar] [CrossRef]
  50. Brown, M.J.; Grimmond, S.; Ratti, C. Comparison of Methodologies for Computing Sky View Factor in Urban Environments; Technical Report LA-UR-01-4107; Los Alamos National Lab. (LANL): Los Alamos, NM, USA, 2001. [Google Scholar]
  51. Parker, J.; Sharratt, M.; Richmond, J. The Shard, London, UK: Response of arches to ground movements. In Proceedings of the Institution of Civil Engineers-Bridge Engineering; Thomas Telford Ltd.: London, UK, 2012; Volume 165, pp. 185–194. [Google Scholar] [CrossRef]
  52. Chaturvedi, K.; Smyth, C.S.; Gesquière, G.; Kutzner, T.; Kolbe, T.H. Managing Versions and History within Semantic 3D City Models for the Next Generation of CityGML. In Advances in 3D Geoinformation; Abdul-Rahman, A., Ed.; Lecture Notes in Geoinformation and Cartography; Springer International Publishing: Cham, Switzerland, 2017; pp. 191–206. [Google Scholar] [CrossRef]
  53. Samuel, J.; Servigne, S.; Gesquière, G. Representation of concurrent points of view of urban changes for city models. J. Geogr. Syst. 2020, 22, 335–359. [Google Scholar] [CrossRef]
Figure 1. View composition regarding city features.
Figure 2. Viewpoint from south of Lyon.
Figure 3. Three-dimensional visualization of buildings and associated documents (See more examples in [17]).
Figure 4. Composition of the 3D view in four steps: Field of View Description (Step A), Intersecting Objects in the 3D Scene (Step B), Storing intersected objects (Step C) and Storing results in the database (Step D).
Figure 5. Discretization of the 3D space according to rays generated from the viewpoint of interest.
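The discretization that the Figure 5 caption describes can be sketched as a loop over azimuth and elevation angles, one unit direction vector per ray. The field-of-view extents, the viewing axis (+X), and the step counts below are illustrative assumptions, not parameters taken from the paper.

```python
import math

def generate_rays(h_fov_deg, v_fov_deg, h_steps, v_steps):
    """Discretize a field of view into unit direction vectors.

    Azimuth spans [-h_fov/2, +h_fov/2] around the viewing axis (+X here),
    elevation spans [-v_fov/2, +v_fov/2]. Angles are in degrees; both step
    counts must be at least 2.
    """
    rays = []
    for i in range(h_steps):
        az = math.radians(-h_fov_deg / 2 + h_fov_deg * i / (h_steps - 1))
        for j in range(v_steps):
            el = math.radians(-v_fov_deg / 2 + v_fov_deg * j / (v_steps - 1))
            rays.append((math.cos(el) * math.cos(az),
                         math.cos(el) * math.sin(az),
                         math.sin(el)))
    return rays

# e.g. a 90° x 60° field of view sampled every half degree
rays = generate_rays(90, 60, 181, 121)
```

Each resulting vector has unit length by construction (cos²el·cos²az + cos²el·sin²az + sin²el = 1), so the distance returned by an intersection test along a ray is directly a distance in metres.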
Figure 6. A 1 × 1 km tile of Lyon composed of four types of objects: buildings, terrain, roads and vegetation.
Figure 7. Three rays generated from the viewpoint toward different kinds of objects.
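The per-ray visibility tests illustrated in Figure 7 boil down to ray/triangle intersections against the meshes counted in Table 1. A minimal, self-contained sketch using the classic Möller–Trumbore test follows; this is a standard formulation, not necessarily the paper's own implementation.

```python
def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray/triangle intersection test.

    Returns the distance t along the ray to the hit point, or None when
    the ray misses the triangle or is parallel to its plane.
    """
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < eps:            # ray parallel to the triangle's plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:      # hit point outside the triangle
        return None
    q = cross(s, edge1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(edge2, q)
    return t if t > eps else None   # hit must be in front of the viewpoint

# Horizontal triangle one metre above a viewpoint at the origin:
t = ray_triangle_intersect((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                           (0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0))
# t == 1.0
```

For each ray, keeping the triangle with the smallest t (and the city-object category attached to it) yields the first object seen in that direction; the grid of Figure 8 serves to limit how many triangles each ray must be tested against.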
Figure 8. Organization of a model city using a regular grid.
Figure 9. (Left): Top view of the 3D skyline for a given point of view (purple). (Right): Composition of this skyline.
Figure 10. View composition analysis. (Left): decomposition of the skyline according to the intersected object. (Right): the view composition.
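Once each ray's first intersected object is known, the view composition shown in Figure 10 amounts to a normalized histogram of hit categories, with rays that escape the scene counted as sky. A sketch, with hypothetical category names and counts:

```python
from collections import Counter

def view_composition(ray_hits):
    """Share of the field of view occupied by each feature type.

    `ray_hits` holds, for each ray, the category of the first object it
    intersects, or None when the ray escapes to the sky.
    """
    counts = Counter(hit if hit is not None else "sky" for hit in ray_hits)
    total = sum(counts.values())
    return {cat: 100.0 * n / total for cat, n in counts.items()}

# Hypothetical result for 100 rays:
hits = ["building"] * 50 + ["vegetation"] * 30 + [None] * 20
composition = view_composition(hits)
# {'building': 50.0, 'vegetation': 30.0, 'sky': 20.0}
```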
Figure 11. Three-dimensional visualization of a tile (1 × 1 km) of the CityGML model of Lyon.
Figure 12. One-meter-resolution DEM used for a visibility analysis for the Lyon Metropolis: view of the whole DEM covering the city of Lyon (48 sq km) (left) and a zoom on a specific part of the city (right).
Figure 13. Visual comparison between data describing the same location (Bellecour square): raster data (1-m DEM—left) and 3D vector data (right).
Figure 14. Three-dimensional visualizations to compare the results of 2.5D raster (left) and 3D vector (right) visibility analyses from the vantage point of the Fourvière Basilica. On the raster analysis, the visible areas are in green, while on the vector one, we only see the visible 3D points colored according to their type. The 3D model of Saint Jean’s Cathedral has been added to the bottom visualizations, which zoom in on the panoramic view of the top images.
Figure 15. Three-dimensional visualization using our tools, with each color corresponding to the CityGML category of the resulting 3D points (green: vegetation, grey: buildings, yellow: terrain, blue: water).
Figure 16. Raster analysis results regarding the visibility of the Fourvière Basilica (Bellecour Square). Green pixels indicate that the Basilica is seen, and red pixels that it is not seen. The results are overlaid, with transparency, on an aerial image of the square.
Figure 17. Same as Figure 16, with the addition of the visibility analysis from our tool (in green, vegetation; in yellow, terrain; in red, roofs; in white, building walls).
Figure 18. Raster analysis results regarding the visibility of the Fourvière Basilica (Bellecour Square). Green pixels indicate areas where the Basilica can be seen, and red pixels indicate places from which it is not visible. The results are displayed on an aerial image with a little transparency. Building footprints are represented in black.
Figure 19. Buildings that have a facade from which the Fourvière Basilica can be seen are shown in white. Buildings from which the Basilica is only visible from the rooftop are excluded.
Figure 20. Visualization of vantage points (each point is a vantage point). On the (left), the number of landmarks seen from the vantage points (from red, five landmarks seen, to yellow, zero landmarks seen). On the (right), the effort needed to reach the vantage point on foot, in calories (from light blue, less than 20 calories, to dark blue, more than 70 calories).
Figure 21. Visualization of an imaginary high-rise project (on the right) in the existing business district of La Part-Dieu from the belvedere of Fourvière.
Table 1. Different types of objects shown in Figure 6 and the associated number of triangles for a 1 × 1 km tile of Lyon.
Type of Object | Number of Triangles
Building       | 118,948
Terrain        | 26,785
Road           | 17,800
Vegetation     | 73,142
Total          | 236,675
Table 2. Difference between the results of a raster visibility analysis and those from our tool for the viewpoint from the Basilica of Fourvière.
 | Gross Comparison of Results | Comparison of Results by Adding a Buffer of 1 m to the Results of Our Tool | Comparison for Points beyond 1 km from Viewpoint
Number of points from our tool | 650,048 | 650,048 | 47,491
Number of points that are also part of the areas considered visible by the GIS visibility analysis | 415,976 | 497,068 | 44,262
Difference (points not considered as visible by GIS analysis) | 234,072 | 152,980 | 3229
Percentage difference | 36% | 23.5% | 7%
Table 3. Results from the gross comparison between our analysis and a raster analysis from different landmarks in the Lyon area.
 | Fourvière Basilica | City Hall | Opera | Saint-Jean Cathedral | Part-Dieu Tower
Number of points from our tool | 650,048 | 683,096 | 680,978 | 712,178 | 640,821
Number of points that are also part of the areas considered visible by the GIS visibility analysis | 415,976 | 395,146 | 488,047 | 304,519 | 570,783
Difference (points not considered as visible by GIS analysis) | 234,072 | 287,950 | 192,931 | 407,659 | 70,038
Percentage difference | 36% | 42.20% | 28.30% | 57.24% | 11%
Table 4. Results from the comparison between our analysis with 1 m buffer and a raster analysis from different landmarks in the Lyon area.
 | Fourvière Basilica | City Hall | Opera | Saint-Jean Cathedral | Part-Dieu Tower
Number of points from our tool | 650,048 | 683,096 | 680,978 | 712,178 | 640,821
Number of points that are also part of the areas considered visible by the GIS visibility analysis | 497,068 | 449,269 | 552,247 | 364,803 | 625,114
Difference (points not considered as visible by GIS analysis) | 152,980 | 233,827 | 128,731 | 347,375 | 15,707
Percentage difference | 23.5% | 34.23% | 18.90% | 48.78% | 2%
Table 5. Results from the comparison for points beyond 1 km between our analysis and a raster analysis from different landmarks in the Lyon area.
 | Fourvière Basilica | City Hall | Opera | Saint-Jean Cathedral | Part-Dieu Tower
Number of points from our tool | 47,491 | 11,102 | 12,091 | 6702 | 49,197
Number of points that are also part of the areas considered visible by the GIS visibility analysis | 44,262 | 11,102 | 12,091 | 6702 | 46,192
Difference (points not considered as visible by GIS analysis) | 3229 | 0 | 0 | 0 | 3005
Percentage difference | 7% | 0.00% | 0.00% | 0.00% | 6%
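The "Difference" and "Percentage difference" rows of Tables 2–5 follow directly from the first two rows. A quick recomputation for Table 3 (the reported percentages are rounded, so deviations of a few hundredths of a percent can appear for some landmarks):

```python
# First two rows of Table 3: (points from the 3D vector tool,
# points also considered visible by the raster GIS analysis).
table3 = {
    "Fourvière Basilica":   (650048, 415976),
    "City Hall":            (683096, 395146),
    "Opera":                (680978, 488047),
    "Saint-Jean Cathedral": (712178, 304519),
    "Part-Dieu Tower":      (640821, 570783),
}

for landmark, (ours, matched) in table3.items():
    difference = ours - matched
    percentage = 100.0 * difference / ours
    print(f"{landmark}: {difference} points not matched ({percentage:.2f}%)")
```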
Pedrinis, F.; Samuel, J.; Appert, M.; Jacquinod, F.; Gesquière, G. Exploring Landscape Composition Using 2D and 3D Open Urban Vectorial Data. ISPRS Int. J. Geo-Inf. 2022, 11, 479. https://doi.org/10.3390/ijgi11090479
