
1 Introduction

Today, 55% of the world’s population lives in urban areas, a proportion that is expected to increase to 68% by 2050 [30]. As urbanization continues, effectively engaging citizens in city governance holds the promise of improving the quality of life of urban residents, improving the governance of cities, and making cities prosperous, inclusive, sustainable, and resilient [21]. Thanks to the development of Internet technologies, citizens now also have the chance to become “netizens” of web-based social communities. More and more citizens adopt this dual identity to participate in city governance, yet participation can no longer be neatly divided into offline and online modes: the border between the two has become blurred by the popularization of ICT devices. For example, a citizen attending an on-site activity may simultaneously stream it online through an ICT device, while a large number of off-site citizens join the event synchronously and even feed their comments back to the site. The Internet and ICT devices thus blur the time-space status of citizens’ participation. The central issue of citizens’ participation is therefore no longer the channel of participation but the content that participation generates. The idea of user-generated content (UGC) [7, 11, 17] facilitates and accelerates citizens’ participation in city governance. From another point of view, UGC derived from citizens can also provide third-party evidence for more scientific city governance.

In this paper, we discuss a participatory sensing application and its geo-visualization method. We present the City Probe system, which integrates mobile-based and web-based interfaces for spatial identification and assessment driven by citizens’ participation, to demonstrate bottom-up city governance. Finally, an experiment conducted with City Probe shows how citizens can participate in the identification and assessment of contextual features.

2 Literature Review

2.1 Participatory Sensing

With the popularization of Internet-enabled and GPS-enabled devices, many people have become “netizens” of the Web social community [27]. The use of terminal devices to upload user-generated content (UGC) gives citizens the ability to act as sensors [25]. Compared with robotic sensors, citizens have the cognitive ability to perceive complex events or phenomena, such as the safety of a place, which is difficult to determine from robotic sensors alone. Detectable environmental data such as wind, light, heat, air, or water are therefore not the main reason to engage citizens; citizens’ participation in city governance should instead address the qualitative exploration of features of the living context. Another advantage of citizen engagement is mobility. Unlike fixed sensors, citizens can move through the city to reach almost any target place, which provides the opportunity to build a dynamic sensor network [13, 22].

Advances in location-aware technologies, coupled with ubiquitous citizens, have paved the way for an exciting paradigm shift in urban-scale sensing, known in the literature as participatory sensing [3]. Participatory sensing forms interactive, participatory sensor networks that enable public and professional users to gather, analyze, and share local knowledge [3, 15]. BikeNet [8] is a mobile sensing system for mapping the cyclist experience. It uses several sensors embedded in a cyclist’s bicycle to gather quantitative data such as current location, speed, CO2, burnt calories, and galvanic skin response. The peripheral sensors interact with the mobile phone over a wireless connection to upload data, and individual data can be merged with other cyclists’ data to build a complete map for the cycling community. The NoiseTube project [19] monitors noise pollution by involving citizens and their built environment, building upon the notions of participatory sensing and citizen science. It enables citizens to measure their noise exposure using GPS-equipped mobile phones; the geo-localized UGC is uploaded automatically and forms a collective noise map of cities.

The use of robotic sensors for environmental detection, as in the projects above, provides accurate source feedback; however, citizens play a limited role and merely act as carriers. The power of citizens lies not only in mobility but also in cognition. Streetscore [20] engages citizens’ perceptual power to assess spatial issues. Developed by the MIT Media Lab, Streetscore is a scene-understanding algorithm that predicts the perceived safety of a streetscape, using training data from an online survey with contributions from more than 7000 participants. One of the most interesting parts is the online participatory sensing interface [22], which randomly shows two geo-tagged images and asks participants to choose which one looks safer. The algorithm then analyzes the crowdsourced data and presents a safety map. The participants’ cognitive ability is thus genuinely engaged in identifying a safety context that sensors or computers find difficult to judge.

Participatory sensing applications can generate vast amounts of data from participants. How to visualize such location-based data to deliver meaningful information is discussed in the next section.

2.2 Geo-Visualization

Realizing the potential of participatory sensing results as a data source requires developing appropriate methodologies [9]. We need proper ways of drawing meaningful explanations from the massive scale and complexity of the digital data emerging through UGC. Techniques from information visualization, geo-visualization, and spatialization offer ways of reducing the complexity of information and clarifying relationships in big data [10, 28, 29]. For instance, Kramis developed an XML-based infrastructure to enhance interactive geo-visualization of large data sets [16]. Bailey and Grossardt use information visualization to create collaborative geospatial/geovisual decision support systems (C-GDSS) to support public decision making [1]. Both cases show that information visualization not only supports efficient data interpretation but also holds potential for decision support, whether for authorities or for individual citizens.

The key to valuable geo-visualization is to visualize big data by mapping it according to its coordinates, providing spatial clues that allow users to make sense of massive data sets and see the relationships within them. Currid and Williams use a unique data set from Getty Images, geo-coding over 6000 events and 300,000 photographic images taken in Los Angeles and New York City, and conduct GIS-based spatial statistics to analyze macro-geographical patterns [6]. The project effectively identifies hot spots for understanding cultural industries and urban geographic patterns. In addition to large-scale quantitative approaches, scholars in the social sciences argue that qualitative geo-visualization methods are equally important in drawing meaning from UGC [2, 14]. The terms computer-aided qualitative GIS and geo-narrative analysis describe adaptations of existing geospatial technologies for interpretive analysis of geographic information expressed qualitatively [5, 18]. Cidell shows how content clouds with geo-location can be used to summarize and compare information from different places on a single issue [4].

In sum, with Internet-enabled technologies, the basic components of a widespread participatory sensing network already exist. Citizen-based participatory sensing brings many opportunities to city governance, but it also means the vast production of UGC, which requires good visualization methods to analyze patterns and deliver meaningful information. Geo-visualization integrates location and information to raise context awareness. We believe the integration of participatory sensing and geo-visualization will enable citizens to co-govern the city in a new way.

3 City Probe

Bringing citizens into city governance can turn traditional top-down management into bottom-up collaborative governance. Citizens’ participation offers the potential to achieve (1) the reflection of citizens’ needs, (2) the deep and broad exploration of places, and (3) spatial interpretation based on human cognition. Under these premises, one of the most important issues is how to engage citizens’ cognitive ability for recognizing place features. The human brain can perceive a complex context and interpret it into an abstract but comprehensive concept, such as the safety of a place; some urban scholars call this the “sense of place” [13, 24]. Here we try to augment this human place-sensing ability and apply it to the identification and assessment of contextual features in the city.

We propose a participatory sensing system called City Probe to quantify citizens’ perception of cities [25, 26]. In the following sections, we introduce the methodology and interface design of City Probe to explain how we engage citizens in city governance. In addition, the visualization maps show how the collective data from citizens can be visualized to assist citizens’ decision making.

3.1 Identification and Assessment of Contextual Features

Compared with robotic sensors, taking citizens as sensors adds a qualitative dimension to the identification of city features. Citizens’ cognitive ability allows them to perceive complex and contextual phenomena. The City Probe system we developed tries to engage this cognitive ability for the identification and assessment of contextual features in the city. The critical challenges include (1) how to identify the contextual features, (2) how to record them, and (3) how to assess them.

The City Probe system defines a series of operating steps to identify and assess contextual features. First, the manager of the City Probe system presets several features to form the feature pool. The manager can create any kind of feature, but the features should be qualitative rather than calculable or static. For example, “I feel comfortable” is better than “the temperature is…”, because the former describes a comprehensive contextual phenomenon while the latter is only a single measurement; moreover, no robotic sensor can detect the former feature, only the human brain can.

Citizens then choose one of the features from the pool to start their participatory sensing. When a citizen picks a target feature, it means the place possesses or lacks this feature to a degree worth identifying. The City Probe system therefore needs a measuring rule to quantify citizens’ identifications. The measuring rule sets up a scale from −10 (most negative) to +10 (most positive). With this shared scale, citizens can quantify their identification into an assessable standard; at the same time, it makes contextual features countable for further urban-metrology applications.
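To make the measuring rule concrete, the minimal sketch below shows how a single identification could be recorded as a chosen feature, a score clamped to the shared −10 to +10 scale, a location, and a timestamp. It is written in Python with hypothetical names and example coordinates; the paper does not specify the actual implementation, so this is only one plausible way to structure such a record.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

SCALE_MIN, SCALE_MAX = -10, 10  # shared measuring rule: -10 (most negative) to +10 (most positive)

@dataclass
class Assessment:
    feature: str    # one feature chosen from the preset feature pool, e.g. "I feel comfortable"
    score: int      # citizen's rating on the shared scale
    lat: float      # latitude of the assessed place
    lng: float      # longitude of the assessed place
    timestamp: str  # when the identification was made

def make_assessment(feature: str, score: int, lat: float, lng: float) -> Assessment:
    """Quantify one citizen identification; reject scores outside the shared scale."""
    if not SCALE_MIN <= score <= SCALE_MAX:
        raise ValueError(f"score must be between {SCALE_MIN} and {SCALE_MAX}")
    return Assessment(feature, score, lat, lng,
                      datetime.now(timezone.utc).isoformat())

# Example: a citizen rates "having a great view" as strongly positive at arbitrary coordinates.
report = make_assessment("having a great view", 8, 25.0330, 121.5654)
```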

3.2 Interface Design

We present twin participatory sensing interfaces to quantify citizens’ perception of cities. The two interfaces, City Probe Mobile and City Probe Web (Fig. 1), are both based on place ratings from citizens. They share the same participatory sensing idea but target two different citizen groups: on-site citizens and off-site citizens. The on-site group consists of citizens who actually visit places and rate them via the City Probe Mobile app, while the off-site group consists of citizens who “surf” the street view and rate places via City Probe Web. Initially we provided only the City Probe Mobile app. After several small tests, we found that City Probe Mobile users could dig into city places and provide high-quality local ratings thanks to on-site observation and mobility. After a period, however, the amount and range of ratings declined because citizens were constrained to their familiar areas. We therefore developed City Probe Web, which presents a photorealistic street view to engage the off-site community in place rating. The two City Probe modes are complementary for the identification and assessment of contextual features in the city.

Fig. 1. City Probe interface design: mobile-based interface (left) and web-based interface (right)

When citizens use the City Probe interfaces, the system not only quantifies their assessments but also records their coordinates. The mobile-based interface reports the location via GPS, while the web-based interface extracts the location from the Google map. Based on the location, we can map the quantified assessment information and visualize it.
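Continuing the earlier record sketch, and again as an assumption rather than the system’s actual format, the snippet below shows how an assessment from either interface could be normalized into a single geo-tagged record for mapping, regardless of whether the coordinates came from the device GPS fix or from the location picked on the web map.

```python
def to_geojson_feature(a: Assessment) -> dict:
    """Convert one quantified assessment into a GeoJSON Feature so it can be placed on a map.
    The record looks the same whether the coordinates came from mobile GPS or the web map."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [a.lng, a.lat]},  # GeoJSON order: lng, lat
        "properties": {"feature": a.feature, "score": a.score, "timestamp": a.timestamp},
    }
```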

3.3 Visualization Methods

We design three map modes to visualize citizens’ participatory sensing outcomes for contextual features (Fig. 2). (1) DOT mode: the map shows the individual assessment locations and visualizes each score with a dot on a gradient from red (negative) to blue (positive), providing a view of the density and location of the assessment data. (2) BLOCK mode: the map computes the average score within each 50 m by 50 m square and visualizes it with blocks on the same red-to-blue gradient, providing a view of the regional assessment data. (3) HEAT mode: the map arranges variables in rows and columns and colors the cells, using a warm-to-cool color spectrum to show the intensity of the assessment data.
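As a rough sketch of how BLOCK mode could aggregate the scores (the exact projection and color mapping used by City Probe are not documented here, so the grid indexing and hex gradient below are assumptions), each assessment can be snapped to a 50 m grid cell and the scores averaged per cell before coloring:

```python
from collections import defaultdict
import math

CELL_M = 50  # BLOCK mode cell size: 50 m by 50 m

def grid_cell(lat: float, lng: float, cell_m: float = CELL_M) -> tuple[int, int]:
    """Snap a coordinate to a grid cell index using a simple equirectangular
    approximation, which is adequate at city scale."""
    meters_per_deg_lat = 111_320.0
    meters_per_deg_lng = 111_320.0 * math.cos(math.radians(lat))
    return (int(lat * meters_per_deg_lat // cell_m),
            int(lng * meters_per_deg_lng // cell_m))

def block_scores(assessments):
    """Average the -10..+10 scores of all assessments falling in each 50 m cell."""
    cells = defaultdict(list)
    for a in assessments:
        cells[grid_cell(a.lat, a.lng)].append(a.score)
    return {cell: sum(scores) / len(scores) for cell, scores in cells.items()}

def block_color(avg_score: float) -> str:
    """Map an average score to a red (negative) to blue (positive) gradient."""
    t = (avg_score + 10) / 20          # normalize -10..+10 to 0..1
    r, b = int(255 * (1 - t)), int(255 * t)
    return f"#{r:02x}00{b:02x}"
```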

Fig. 2. DOT, BLOCK, and HEAT modes visualize citizens’ participatory sensing of city places. (Color figure online)

4 Experiment Design and Analysis

The City Probe tool allows us to investigate city features through citizens’ participation. We conducted an experiment to demonstrate how City Probe can work with almost any contextual feature. We invited 60 student subjects and deployed the City Probe mobile app on their smartphones. First, we asked the subjects to discuss the issue of “dating place features”. After a series of discussions, the subjects submitted 10 potential features, including having a great view, having a secret path, and having an atmosphere. We adopted these features and created the feature pool in the City Probe qualitative-issue field. We then asked the subjects to start their field trip and use the City Probe app while walking for 180 min.

The outcomes from 60 subjects in 180 min were remarkable. First, the spread of feature identification was fast and wide. Figure 3 shows the progress and distribution of the subjects’ actions: they reported around 3000 data points covering an area of 4 km².

Fig. 3. The progress and spread of subjects’ place feature identification.

Second, the outcomes provided a new way to observe the distribution of different features in the city. The reported features, coupled with their coordinates, could be visualized on the Google map. Each feature was treated as a layer, so we could examine single or multiple features by turning layers on and off individually. For example, in Fig. 4 we chose BLOCK mode to visualize the reported data: the left figure shows the result with all layers turned on, the middle one with only the No. 5 (having a great view) and No. 7 (having a secret path) layers, and the right one with only the No. 7 layer.

Fig. 4. The feature layers can be turned on/off individually.

The visualized map provided three potential analysis methods via layer mapping, as sketched after the list below.

  1. Overall feature analysis: In Fig. 4 (left), we turned all layers on to map the 10 features in the same diagram. The diagram showed the overall assessment results, visualized with blue (positive score) and red (negative score) blocks. We could locate target areas according to the shade of the blocks, which helped us find the most positive or negative areas under the influence of all reported features.

  2. Single-feature analysis: In Fig. 4 (right), only one layer was turned on to show a single feature. This helped us concentrate on the target feature and find the most positive or negative areas under its influence.

  3. Cross-feature analysis: In Fig. 4 (middle), two layers were turned on to show the No. 5 and No. 7 features. This helped us locate the target areas under the influence of both target features; comparing the middle figure with the left and right figures also revealed the differences between the cross features.
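As a hedged illustration of the layer mechanics (the helper names below and the variable `reports`, a hypothetical list of the assessment records collected in the experiment, are assumptions rather than the system’s implementation), all three analyses can be expressed as filtering the assessments by the set of active feature layers before running the BLOCK aggregation sketched earlier:

```python
def filter_by_layers(assessments, active_features=None):
    """Keep only assessments whose feature layer is switched on.
    active_features=None corresponds to the overall analysis (all layers on)."""
    if active_features is None:
        return list(assessments)
    return [a for a in assessments if a.feature in active_features]

# Overall feature analysis: all 10 layers on.
overall = block_scores(filter_by_layers(reports))

# Single-feature analysis: only the "having a secret path" layer (No. 7 in Fig. 4).
single = block_scores(filter_by_layers(reports, {"having a secret path"}))

# Cross-feature analysis: the No. 5 and No. 7 layers together.
cross = block_scores(filter_by_layers(reports, {"having a great view", "having a secret path"}))
```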

In this experiment we demonstrated a novel way to engage citizens’ power through the City Probe system. The experiment was based on the City Probe Mobile app. However, we also noticed that the subjects’ mobility was limited by the time constraint. After the experiment, we therefore located the vacant areas on the visualized map and used City Probe Web to fill them in. In other words, City Probe Web served as a complementary tool to complete the identification and assessment of contextual features.

5 Conclusion

The City Probe system provides a promising method for engaging the power of citizens’ participation in city governance. Two interfaces, City Probe Mobile and City Probe Web, were developed to allow citizens’ on-site and off-site identification and assessment of contextual features. Both share the same measuring rule from −10 (most negative) to +10 (most positive), so individual citizens can quantify their identifications into an assessable standard. Three kinds of maps, the DOT, BLOCK, and HEAT modes, can then be generated to visualize the quantitative results.

The experiment based on the City Probe Mobile app yielded valuable results demonstrating how the City Probe system can be adopted for the identification and assessment of almost any contextual feature. However, City Probe Web served only as a complementary tool after the experiment. In the future, we may conduct a parallel experiment that adopts both City Probe interfaces at the same time to compare the differences between them.