
A gamified map application utilising crowdsourcing engaged citizens to refine the quality and accuracy of cadastral index map border markers

Pages 4726-4748 | Received 02 Jul 2023, Accepted 31 Oct 2023, Published online: 13 Nov 2023

ABSTRACT

Due to urban expansion, agriculture, and the long history of the cadastre in Finland, the cadastral index map has millions of border markers that have low spatial accuracy, incomplete feature properties, or both. The low quality of the border markers creates issues, such as forest cutting machines cutting from the wrong side of a border. As it is unfeasible for the national mapping agency to remeasure all these border markers, crowdsourcing is seen as a solution. However, the task of locating and measuring border markers requires motivated citizens. Therefore, in this study, a gamified map-based artefact enabling citizens to refine the quality of border markers in the cadastral index map was created. The artefact was designed, developed, demonstrated, and evaluated following the design science research approach. This study demonstrated with a high sample size that gamified crowdsourcing is viable for motivating citizens to perform even challenging tasks. Of the applied gamification affordances, progression, points, and the leaderboard were the most motivating. It was also found that involving stakeholders early in the creation process and focusing on the usability of the artefact resulted in a pleasing user experience for the citizens. The artefact even gave rise to a self-organised mapping party during its demonstration.

This article is part of the following collections:
Advances in Volunteered Geographic Information (VGI) and Citizen Sensing

1. Introduction

The cadastre and the physical border markers in Finland have a long history spanning centuries. After more than a century of building, maintenance, and updating, the cadastre has recently evolved into digital form. The cadastral index map is the digital representation of the physical border markers in the terrain, maintained by the national mapping and cadastral agency (NMCA), the National Land Survey of Finland (NLS). There are over 13 million border markers in the digital cadastral index map, and they are provided as open data. However, the cadastral index map does not perfectly represent the physical border markers in the terrain. According to the NMCA experts, measurements made in recent decades have high spatial accuracy; however, the digital cadastre contains millions of inaccurate and low-quality markers. This mismatch between the physical and the digital border markers creates various issues. From a legal standpoint, the physical border markers in the terrain are the binding markers, but the cadastral index map is what is used in many situations of practical life. For example, forest cutting machines rely on the digital markers when logging forest properties. Therefore, inconsistencies in the cadastral index map, such as when the stored position of a digital border marker does not match the physical border marker in the terrain or when a physical border marker has been lost, can lead to border-related issues and even conflicts between landowners. For instance, an inaccurate marker in a rural area can lead to forest cuts from the wrong side of the border, which in turn can result in complaints and demands for compensation.

This study is part of an effort of the NMCA in which possibilities for improving the quality and accuracy of the cadastral index map were sought. The goal of the NMCA is to improve the accuracy of all inaccurate border markers in the cadastral index map to below 1 m. There have been notable advances in mobile device location accuracy with the introduction of the GNSS raw observation API for Android devices in 2016 (Banville and van Diggelen 2016) and the dual-frequency GNSS capabilities of certain mobile devices in 2018 (Aggrey et al. 2020). These advances bring the NMCA goal of sub-1-m accuracy one step closer. Two themes were the focus of this work: gamified crowdsourcing and the location accuracy of consumer devices, which are covered extensively by Kontiokoski (2022) and Jussila (2023). The latter studies focus on how mobile phones perform when border marker locations are measured. Kontiokoski found that the practical accuracy in varying conditions of satellite geometry and positioning chip quality is around 5 m. However, Jussila found that post-processing multiple measurements of one border marker could produce 1.46-m accuracy, although this requires that the mobile device supports the collection of raw GNSS positioning data. The accuracy also stabilised at around 10 measurements, meaning that extra measurements of a border marker did not improve accuracy further, and the largest gains in accuracy were made with the first couple of measurements. The focus of this study, however, is on improving the quality of the cadastral index map. NMCAs face significant data acquisition challenges, with customers asking for data of higher spatial and temporal quality (Mooney, Crompvoets, and Lemmens 2018). One solution for this ever-growing need for better quality is crowdsourced mapping.
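As an illustration of why repeated measurements help, the sketch below averages repeated position fixes for one marker under the simplifying assumption of independent errors, in which case the error of the mean shrinks as 1/√n. This is a minimal model, not the raw-GNSS post-processing used by Jussila (2023), and the function and variable names are illustrative.

```python
import math

def average_fix(fixes):
    """Average repeated (lat, lon) fixes collected for one border marker."""
    n = len(fixes)
    lat = sum(f[0] for f in fixes) / n
    lon = sum(f[1] for f in fixes) / n
    return lat, lon

# Expected error of the averaged position if fix errors were independent:
single_fix_sigma_m = 5.0  # typical single-fix accuracy (Kontiokoski 2022)
for n in (1, 2, 5, 10, 25):
    print(f"{n:>2} fixes -> ~{single_fix_sigma_m / math.sqrt(n):.2f} m")
```

Under this naive model most of the gain indeed comes from the first few fixes, consistent with the stabilisation around 10 measurements reported above; real GNSS errors are correlated, so actual gains are smaller.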

1.1. Crowdsourcing and VGI

Crowdsourcing is an online distributed problem-solving approach that transforms problems and tasks into solutions by harnessing the potential of large groups of citizens via the web rather than traditional employees or suppliers (Morschheuser et al. 2017a). An example of this is when citizens helped find a missing airplane from satellite images using Tomnod (Baruch et al. 2016). There are also examples of NMCAs using crowdsourcing in mapping. A crowdsourcing-like campaign started in Finland in 1979, in which citizens made border markers more visible for remote sensing purposes. More recently, a mapping community called the National Map Corps has collected data to help the USGS update structures in support of the national map and US topographical maps (McCartney et al. 2015). In New Zealand, a study found that crowdsourcing could enable citizens to contribute and could also provide advances in data collection and maintenance processes (Clouston 2015). The Dutch Kadaster demonstrated a mobile application with which border markers of the Netherlands and Germany were located and reported by citizens (Olteanu-Raimond et al. 2017). In Greece, a model for crowdsourcing cadastral surveying was demonstrated in case studies using a mobile map application to crowdsource parcel boundaries, building footprints, and descriptive information (Apostolopoulos et al. 2018), and a study of how to include crowdsourcing in the Greek cadastre (Apostolopoulos and Potsiou 2022) has also been made. In Finland, citizens helped collect map features to improve the national topographic database (NTDB) using a web map provided by the NLS (Rönneberg, Laakso, and Sarjakoski 2019). There is also a web service for crowdsourcing entrance and waypoint data in Finland (Lemmens, Mooney, and Crompvoets 2020).

Crowdsourcing can be separated into four categories: crowd-processing, crowd-rating, crowd-solving, and crowd-creating, and these categories can be given properties that describe crowdsourced artefacts (Morschheuser et al. 2017a). Another categorisation can be made by the collection method, which can be either more crowd-based, where contributors can be passive, have low interaction, and perform simple tasks (Bilogrevic 2018; Gómez-Barrón et al. 2016; See et al. 2016), or more community-driven, where contributors need to be motivated, have higher interaction, and perform complex tasks (Gómez-Barrón et al. 2016). Volunteered geographic information (VGI), coined by Goodchild (2007), is crowdsourced information with openness and clarity about purposes and with the ability to control collection and reuse. Regarding the data collection task, VGI is well suited to complex crowdsourcing tasks (Gómez-Barrón et al. 2016; Morschheuser et al. 2017a). VGI is inherently voluntary (Bilogrevic 2018), which is relevant from the data collection and privacy points of view, as the citizen has more control over the data collection process. The privacy by design paradigm should be applied when creating crowdsourced artefacts, as it is an approach for countering threats of privacy violation without degrading the quality of the collected data (Monreale et al. 2014). The benefits of VGI come at a price, as contribution relies more heavily on the motivation of the citizen. However, effective methods for motivating citizens are available.

1.2. Gamification

Crowdsourcing problems can be solved by gamification or even by creating complete games around them. For example, Ingress has successfully crowdsourced Niantic's POI database used in Pokémon GO (Laato, Hyrynsalmi, and Paloheimo 2019). Gamification is the use of game design elements in non-game contexts (Deterding et al. 2011). Gamification affordances (Koivisto and Hamari 2019), such as points, badges, and trophies (Sailer et al. 2017), and social media functionality, such as profiles, ratings, and comments (Kietzmann et al. 2011), can be used to motivate citizens in various crowdsourcing tasks. Gamification can be used to motivate citizens to share content (Olteanu-Raimond et al. 2017) and to make repetitive tasks less tedious (Zichermann and Cunningham 2011). Eight player types have been identified: altruist, builder, adventurer, freelancer, keeper, achiever, profit-chaser, and socialiser (Gómez-Barrón, Manso-Callejo, and Alcarria 2019), and they can be used to improve the effect of gamification. Gamification affordances can be chosen to fit the player type(s) of the citizen, rather than having the same affordance for everyone. For example, the task for a socialiser can include co-operation, while the task for an achiever can include a daily goal. Different motivation methods motivate different player types (Gómez-Barrón, Manso-Callejo, and Alcarria 2019), meaning that in an artefact aimed at citizens in general, multiple different motivation methods should be applied.

In a geospatial context, Martella, Clementini, and Kray (2019) have created a gamification framework to help apply gamification to VGI. Successful motivation also relies on the user experience. Therefore, designing an intuitive and simple user interface using common usability and utility conventions for geospatial applications (Kuparinen 2016; Ricker and Roth 2018; Rönneberg 2022) should be set as a goal, as the majority of users are not experts. When regular citizens are involved in crowdsourcing, the utility–usability trade-off (Roth, Ross, and MacEachren 2015) should lean more towards usability rather than utility (functionality). An increase in usability may come at the cost of limiting advanced functionality, including the gamification approach. Gamification can also demotivate citizens; for example, competition was found to motivate some but demotivate others (Preist, Massung, and Coyle 2014). Therefore, designing gamification for an artefact requires careful consideration and has its own set of requirements for the creation process (Morschheuser et al. 2017b). Examples of gamified geospatial artefacts include Actionbound (Buchholtz et al. 2021), FotoQuest Austria (Laso Bayas et al. 2016), Geo-Wiki (Laso Bayas et al. 2021), Mapillary (Alvarez Leon and Quinn 2019), MapSwipe (Ullah et al. 2023; Watkinson et al. 2023), MapRoulette (Martella et al. 2015; Watkinson et al. 2023), StreetComplete (Watkinson et al. 2023), Urbanopoly (Celino et al. 2012), and Waze (Kim 2015; Martella et al. 2015).

1.3. Scope, research questions, and structure

The scope of this empirical study is the domain of gamified geospatial crowdsourcing. The goal is to enable citizens to refine the quality and accuracy of the border markers in the cadastral index map by using a gamified map application utilising crowdsourcing. Similar artefacts have been demonstrated before; however, according to Morschheuser et al. (2017a) and Koivisto and Hamari (2019), gamification studies have open issues: (1) larger sample sizes are needed, (2) both positive and negative participant perceptions should be studied, (3) contributor differences should be studied systematically, (4) incentive and cost efficiency should be explored, and (5) the effects of contextual factors need further understanding. This study aims to partially fill the identified gaps by answering research questions related to the use of crowdsourcing, the effect of gamification, and the sources of motivation. First, can crowdsourcing in general be utilised to create refined information about cadastral border markers (RQ1)? This research question focuses on the design, development, and demonstration of a map-based artefact utilising crowdsourcing. Second, are citizens motivated to contribute if gamification methods are applied (RQ2)? This research question focuses on citizens and their motivation to contribute. Third, is there a difference in motivation based on the player types the citizen has chosen (RQ3)? This research question focuses on different player profile types and how they affect the motivation of citizens.

This research was conducted following the design science research (DSR) approach (Baskerville et al. 2018; Dresch, Lacerda, and Antunes 2015; Johannesson and Perjons 2014; Peffers et al. 2007; Vaishnavi, Kuechler, and Petter 2004). DSR is used to identify a problem and elicit requirements, design and develop an artefact to solve the problem, demonstrate and evaluate the artefact, and then generalise a solution (Johannesson and Perjons 2014). The DSR process is iterative by nature, and new knowledge gained can be used to enhance previous phases (Dresch, Lacerda, and Antunes 2015). DSR was chosen as the research strategy because it is intended for solving a problem by creating a practical solution (Johannesson and Perjons 2014). A preliminary problem definition, the research strategy, and a short high-level design description of the artefact solving the identified problem have been described in Rönneberg and Kettunen (2021) and are further elaborated in this study. The structure of this study reflects the above DSR phases, where the (1) problem and requirements, (2) design, (3) development, (4) demonstration, and (5) evaluation are part of the results.

2. Methods

The gamification framework by Martella, Clementini, and Kray (2019) was followed to design the gamification aspects of the artefact. The player types used in this study were based on the classification by Gómez-Barrón, Manso-Callejo, and Alcarria (2019), as were the gamification affordances chosen to engage those player types. This provided the study with a foundation to build upon when creating the artefact following the DSR approach. The DSR approach provided structure and methods for the creation process and improved the integrity of the study through its use of literature reviews, problem definition, and evaluation methods. The inclusion of stakeholders early on was another benefit of the DSR approach.

The iterative DSR phases were conducted in order but overlapped, with returns to earlier phases to enhance them with newly gained knowledge. The main method for NMCA expert, developer, and researcher interaction was focus group meetings, where the problem, requirements, design issues, development decisions, and practicalities of the demonstrations were discussed. In the first DSR phase, the problem was further elaborated in focus groups (Johannesson and Perjons 2014). Root cause analysis (RCA) was used to create a detailed problem description. RCA is a problem-solving method for identifying the root causes of problems (Wilson, Dell, and Anderson 1993) and provides a structured approach to understanding the problem. Furthermore, the initial functional and non-functional requirements for the artefact were discussed in the focus group after the problem description. In the second phase, the artefact was designed, first by creating a concept image of the artefact, which was then further detailed in the focus group. In the third phase, agile software development (Abrahamsson et al. 2017) was used to implement the artefact, allowing the focus group to discuss issues and new ideas. In the fourth phase, the artefact was demonstrated first as a functional test and then with a closed group of participants to acquire feedback. After the feedback, focus group discussions, and further development, the artefact was made publicly available to be demonstrated. In the fifth phase, the artefact was evaluated by analysing the responses to the online questionnaire and the data citizens contributed with the artefact. Citizens were asked for consent to use their contributions in both the demonstration and the evaluation phase.

As the contributed data contained identifiers, the data was pruned of email addresses before the analysis was conducted. In the questionnaire, respondents could optionally include their username, so that their questionnaire responses could be linked to the data they contributed during the demonstration. The questionnaire focused on two themes, player profiles and usefulness (utility and usability), while also having some general questions. One goal of the questionnaire was to determine which player types started using the artefact and which motivation methods work for them. The questionnaire also had a section for general feedback. The questionnaire, feedback, and collected contribution data were analysed by the researchers. In addition, 400 randomly chosen border marker photos were manually reviewed by an NMCA expert to find out whether each photo actually contained a border marker.
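A minimal sketch of the pruning and linking step described above, assuming the record fields listed in Section 3.2.2; the field names and the helper function are illustrative, not the project's actual analysis code.

```python
def prune_and_link(contributions, questionnaire):
    """Drop email addresses, then link questionnaire responses to
    contributions via the optional, consent-based username field."""
    pruned = [{k: v for k, v in rec.items() if k != "email"}
              for rec in contributions]

    by_user = {}
    for rec in pruned:
        by_user.setdefault(rec.get("username"), []).append(rec)

    linked = []
    for response in questionnaire:
        username = response.get("username")  # given only with consent
        if username:
            linked.append((response, by_user.get(username, [])))
    return pruned, linked
```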

3. Results

The results of this study have been separated into three sections describing outcomes from the phases of the DSR process: the problem and the requirements; the design, the development, and the demonstration; and the evaluation.

3.1. Problem and requirements

The outputs of this phase were a detailed problem description, requirements for the artefact, and a description of the main crowdsourcing task.

3.1.1. Problem description

The problem identified in this study was that the quality and accuracy of Finnish cadastral information are insufficient for a sizeable portion of the border markers. The 13 million border markers of the cadastral index map are spread across the country, correlating with the built environment (Figure 1). Registered accuracy values below 1 m are considered spatially accurate by the NMCA. According to the focus group discussions with NMCA experts, the following was found. First, inaccurate markers are mainly in rural areas scattered around the remote terrains of the country. Second, large cities have very few of them, as border markers maintained by the cities are considered accurate by the NMCA. Third, when the registered accuracy is worse than 5 m, knowledge about the status of the marker in the terrain is considered very uncertain. Finally, according to the experts, an unknown number of the physical border markers have also been lost due to urban expansion, agriculture, and vegetation growth, among other reasons. Therefore, the cadastral index map data was filtered to contain only the markers that had a physical border marker in the terrain and a spatial accuracy worse than 1 m. This filtering resulted in the data of 2 million inaccurate border markers used in the artefact demonstration and evaluation.
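The filtering step could look like the following sketch; the GeoJSON property names ('accuracy_m', 'exists_in_terrain') are assumptions made for illustration, as the real cadastral index map schema is not described here.

```python
import json

def filter_inaccurate_markers(path):
    """Keep markers that still have a physical counterpart in the
    terrain and whose registered accuracy is worse than 1 m."""
    with open(path, encoding="utf-8") as f:
        features = json.load(f)["features"]
    return [
        f for f in features
        if f["properties"].get("exists_in_terrain")       # assumed flag
        and f["properties"].get("accuracy_m", 0.0) > 1.0  # worse than 1 m
    ]
```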

Figure 1. (A) The accumulated 13 million border markers and (B) the 2 million inaccurate border markers of the cadastral index map of Finland in a 10-km² grid. Pink areas, such as the capital region, have no inaccurate border markers.

The root causes were categorised into four groups based on the discussions in the focus group: the national mapping and cadastral agency (NMCA); users; the cadastre; and the markers (Figure 2). The NMCA has insufficient resources to audit the inaccurate or lost markers, as the task of visiting millions of markers scattered around the country in difficult-to-reach rural areas is unfeasible. However, audits are conducted when there is other work to be done on a real estate. On the other hand, the NMCA has difficulties in communicating about the quality issues of the markers. This leads to users not being aware of the quality issues, since the users rely on the information the NMCA provides. One of the main root causes of why the NMCA faces issues with cadastral border markers is the long history of the cadastre. The accuracy of some of the old markers in the cadastral index map is poor, and the feature properties of these markers are unreliable by modern standards due to evolved measurement methods. Therefore, the amount of inaccuracy can be unknown, the physical border marker type can be unknown, and even whether the physical border marker still exists in the terrain can be unknown.

Figure 2. Root causes of why the quality and accuracy of a significant portion of border markers are insufficient.

3.1.2. Requirement elicitation

The requirement elicitation advanced iteratively in two phases. The preliminary general-level functional and non-functional requirements were detailed in the focus group discussion in the requirement phase. The artefact was required to have the following utility: show inaccurate border markers on a mobile map; allow citizens to measure a selected border marker; and store the measured border markers. The artefact should be easy to use and have minimal data transfer due to possible low-bandwidth conditions in remote operating environments; offline support was also required for this reason. The artefact service should be available at all hours.

The requirements were further refined in later focus group discussions in the design and development phase, where they were categorised into map, registration, validation, gamification, and guidance. For the map, a custom vector tile background map was required. The citizen should be able to position themselves on the map, both for navigating to the border marker and for making the contribution. The citizens should create a profile to make contributions, and the profile would be used for gamification purposes. Validation should be done via photos attached to contributions. Gamification affordances should include points and a leaderboard. Guidance should be available in the form of a border marker gallery, tips for searching for a border marker, instructions for measuring a border marker, and instructions on how to behave in the terrain while playing the game. This requirement definition also led to a detailed description of the crowdsourcing task performed by the citizens (Figure 3).

Figure 3. The crowdsourcing task performed by citizens described in the requirement phase of the artefact creation process.

3.2. Artefact design, development, and demonstration

The artefact was designed and developed based on the requirements obtained from the detailed problem description phase. The design process was carried out in focus groups consisting of developers, researchers, and NMCA experts. The technical development process was carried out by the developers working closely with the researchers. The NMCA experts were involved when needed, for instance, when knowledge about the characteristics of border markers and their professional positioning was required. The outputs of this phase were a visualised concept, a design issue document, and the artefact ready to be demonstrated.

3.2.1. Design process

The concept of the artefact was formalised during the problem and requirement phase as a mobile map application enabling crowdsourced refinement of border markers. The artefact concept was visualised (Figure 4). The main design issues discovered during the creation of the artefact, e.g. in the focus group discussions, were documented (Table 1).

Figure 4. The artefact concept of a mobile application enabling crowdsourced refinement of border markers was depicted in the design phase.

Table 1. The main design issues during the creation of the artefact, their solutions, and the solution rationales, sorted by the DSR phase in which each issue emerged.

Three of the design issues are outlined here to further explain the design process. First, as the main idea of the artefact was to collect quality contributions from citizens, many issues regarding the contribution were considered early, during the problem phase. One of them was that citizens could accidentally or intentionally contribute to the wrong border marker. To remedy this issue, two solutions were suggested: the artefact should perform a proximity check on the citizen's location in relation to the border marker before allowing the citizen to contribute, and the artefact should require the citizen to take a photo of the border marker to be attached to the contribution. The design rationale was that validation is needed to ensure the contribution is made at the correct border marker.
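A minimal sketch of such a proximity check, assuming WGS84 coordinates and the 500-m threshold mentioned in Section 3.2.2; the function names are illustrative, not the artefact's actual implementation.

```python
import math

MAX_DISTANCE_M = 500  # threshold used in the demonstration (Section 3.2.2)

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def may_contribute(citizen, marker):
    """Allow a contribution only when the citizen is close to the marker."""
    return haversine_m(citizen[0], citizen[1], marker[0], marker[1]) <= MAX_DISTANCE_M
```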

Second, not everything can be considered during the earlier phases of artefact creation. As an example of this, during the demonstration phase citizens gave feedback that they would also want to report missing markers. The rationale for solving this design issue was that reported lost markers are a valuable source of information, which had not previously been considered. Therefore, the utility to report lost markers was added to the artefact during the demonstration. This also underlines the importance of applying an iterative creation process, such as DSR, as this issue was discovered late in the creation process of the artefact.

Finally, gamification was planned as one of the main motivators for citizen contributions. In the design phase, two main approaches to gamification were outlined. The complex approach included a conquer-type game mechanic in which citizens would compete over territory by measuring markers. The simpler approach included awarding points for each contribution while displaying a leaderboard with citizens and their points. Both approaches have citizens competing, but the complex approach included the possibility of losing one's territory, while the simple approach was less confrontational. The simple approach was chosen because it was easier for players to understand and more straightforward for the developers to implement. The complex approach also went against the collaborative, good-cause spirit of the artefact.

3.2.2. Artefact description

The artefact was designed to be used by citizens for refining the quality of cadastral border markers and has been categorised according to the literature (Table 2).

Table 2. The artefact categorisation (Gómez-Barrón et al. Citation2016; Morschheuser et al. Citation2017a).

The artefact was built on web map technology (HTML5), intended for mobile devices, and deployed in a secure cloud environment that hosted the official content (the background map and the inaccurate border markers) and the crowdsourced contributions (Figure 5). Vector tiles were used for the background map of the artefact, and it was customised from the NMCA data to include map features relevant for moving in the terrain on foot. For example, cadastral border lines were added to help citizens find border markers at their intersections. With the artefact, citizens can locate a border marker, take a picture of it, and measure its position. If the citizen does not find the border marker, it can be reported as missing. After the contribution is shared with the NMCA, the citizen is given points for their efforts. Following the privacy by design approach, pseudonyms and access control were implemented as key privacy methods. The citizen layer, visible to all citizens using the artefact, only showed the inaccurate markers and whether each marker had been measured or not. The contributions of the citizens were not shared with other citizens. To use the artefact, the citizen needed to register with an email address and choose a nickname visible on the leaderboard.

Figure 5. The map-based user interface of the artefact is simple. (A) The background map was customised for moving on foot in the terrain including cadastral border lines. The citizen layer consisted of unmeasured (green) and measured (orange) inaccurate border markers. (B) A measurement required a photo and the location measurement of the border marker. (C) Border markers measured by the citizen were displayed on the map with a special icon to follow one’s progression.

The user interface was designed to be easy to use, following common usability and utility conventions (Figure 5). In addition to contributing and self-locating, the gamification aspects of the artefact were emphasised in the map user interface. For example, the points the citizen had collected were shown in the map view. The UI had the following elements: main menu; my profile; my score and leaderboard; my location; help; and measure border marker. The citizen layer holds the inaccurate border markers, which are marked green when they have not yet been measured and orange when they have. The markers were presented on the map with the border marker number to avoid confusion between markers. An icon with a fuzzy border highlighted the inaccurate location of the markers on the map. Each marker could be opened to get the marker number and type, if available. To help citizens recognise the border markers, a document containing photos and detailed descriptions of all the different types of border markers was available in the main menu. When available, the border marker type was also displayed as a graphic in the artefact to assist the citizen in recognising the correct marker type.

During the reporting phase, a photo of the marker had to be attached to the contribution before the measurement could take place. This was to ensure that the contribution could be validated later. Once the contribution had been made, the marker icon colour changed for all citizens. In addition, a special icon was used to reflect the contribution for the contributor. The data citizens contributed was stored server-side in GeoJSON format, where one record consisted of a measurement or a missing-marker report. Each record had, among others, the following fields: ‘unique ID’, ‘marker ID’, ‘marker type’, ‘citizen ID’, ‘username’, ‘email’, ‘user agent’, ‘geolocation’, ‘image URL’, and ‘timestamp’.
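A hypothetical contribution record with the fields listed above could look as follows; every value is illustrative, not real data, and the exact property naming in the artefact may differ.

```python
contribution = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [25.7482, 62.2426]},  # geolocation
    "properties": {
        "unique_id": "c-000001",
        "marker_id": "m-123456",
        "marker_type": "boundary stone",
        "citizen_id": "u-0042",
        "username": "mapfan",
        "email": "redacted@example.com",        # pruned before analysis
        "user_agent": "Mozilla/5.0 (Android)",
        "kind": "measurement",                  # or a missing-marker report
        "image_url": "https://example.com/photos/c-000001.jpg",
        "timestamp": "2021-07-15T10:23:00Z",
    },
}
```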

Gamification affordances were included to motivate citizens to contribute. The gamification approach was chosen to be simple in the design phase. As such, a very simplistic points-for-contribution system was implemented. Early in the demonstration, each new measurement would yield two points, repeated measurements on others' border markers would yield one point, and repeat measurements on a self-measured border marker would yield no points. Later in the demonstration, when the utility for reporting border markers missing was introduced, the point system was changed to yield one point for missing-marker reports. Reporting a marker missing would also remove the marker from the map for the reporter; this was unintentional and was not noticed until the end of the demonstration. It is worth noting that both measuring and reporting a marker missing required the citizen to move close enough to the border marker (500 m), find it, take a photo of the marker, and then press a button to start the measurement. The measurement took from a few seconds to a minute to complete using the device's geolocation through the HTML5 Geolocation API. Due to the disparity in required effort, the point system awarded more points for measurements than for missing-marker reports. The accumulated points of each citizen were displayed on a leaderboard for everyone to see.
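The final point rules can be summarised in a few lines; the sketch below follows the rules as described above, with assumed state fields, and is not the artefact's actual implementation.

```python
def points_for(kind, marker, citizen_id):
    """Score a contribution under the demonstration's final rules:
    2 points for the first measurement of a marker, 1 point for
    remeasuring someone else's marker, 0 points for remeasuring
    one's own marker, and 1 point for a missing-marker report."""
    if kind == "reported_missing":
        return 1
    if not marker["measured_by"]:          # first measurement of this marker
        return 2
    if citizen_id in marker["measured_by"]:
        return 0                           # repeat on one's own marker
    return 1                               # repeat on someone else's marker

# Example: first measurement, then a repeat by the same citizen.
marker = {"measured_by": set()}
print(points_for("measurement", marker, "u-0042"))  # 2
marker["measured_by"].add("u-0042")
print(points_for("measurement", marker, "u-0042"))  # 0
```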

3.2.3. Artefact demonstration

The artefact was demonstrated in three phases. First, internal functional tests were made in the development phase, which included bug fixes and feedback from the focus group. Second, the functional tests continued prior to the launch of the artefact with a different focus group consisting of a few NMCA experts and a few enthusiastic citizens. Finally, the artefact was demonstrated for 4.5 months during the summer and autumn of 2021 (Figure 6). The evaluation data was collected from this demonstration phase, referred to as the pilot.

Figure 6. The artefact was demonstrated during the summer and autumn months with citizens.

A targeted marketing campaign was held during the pilot. The campaign consisted of three parts. A week-long targeted social media advertisement was run on Instagram, Facebook, LinkedIn, and Twitter every month. Bulletins for the media and the public were also issued monthly. Regional radio advertisements in a specific region in Southern Finland were run for a month (17.6–14.7). In addition, the top contributor of the week was given an ice cream gift card until the end of August. It is worth noting that a self-organised mapping party was held by a county (in Northern Finland) during the final month of the pilot. The mapping party lasted one month, during October. Citizens were encouraged by the county to look for border markers using the artefact. To further motivate contributions, the county offered the five most active citizens a prize, and lots were cast for five additional prizes. The winner of the event was interviewed by the county. At the later stages of the artefact demonstration, the need to get more measurements for a single border marker was realised. Resources were available for marketing, but not for altering the gamification affordances to motivate multiple measurements.

3.3. Evaluation

The evaluation period for the artefact began 16.6.2021 and lasted until 28.10.2021 (4.5 months, or 133 days). The artefact had 4652 users at the end of the evaluation period. Of the devices, 89% were Android-based while 11% were iPhone-based. Of the 4652 users, at least one contribution was made by 1916 registered (70%) and 825 unregistered (30%) users; in total, 2741 users made contributions, meaning that 59% of all users made at least one contribution. During the evaluation, a total of 22,166 contributions were made by citizens, of which 19,287 (87%) were measurements and 2879 (13%) were markers reported missing. 20,831 of the contributions (94%) were made by registered users, and 1522 registered users (79%) made more than one contribution. The majority of the contributions were concentrated in the southern part of the country, while there were notable hotspots in the north (Figure 7). The average number of contributions per citizen was 7.7, while the median was 2.0. The distribution of the contributions also followed the typical pattern of crowdsourcing platforms, where most of the contributions are made by a relatively small number of contributors: the top 10% of users (282) made 59% (11,390) of the total contributions. The contributions made by citizens cover roughly 1% of the total number of inaccurate markers available in the artefact. 2174 border markers were measured more than once, but only 158 markers were measured more than three times, and four markers were measured nine times. When the 400 randomly chosen photos attached to border marker measurements were reviewed by an expert, 85% were either ‘likely’ or ‘definitely’ a border marker. Most of the uncertain photos contained a rock without a number; it is still likely that many of these are border markers, but this cannot be determined from the photo.

Figure 7. Heatmap of accumulated contributions over the pilot period from 1 July to 28 October. Each heatmap displays the situation on the first day of the month, except for November, for which the last day of contributions, 28 October, was used.

The contributions are distributed across Finland but are more frequent in the southern parts of the country. The contribution accumulation roughly follows the population distribution of Finland, with a south-to-north decrease in contributions. While regional hotspots formed naturally, two hotspots emerged due to the area-specific marketing campaign and the self-organised mapping party (Figure 8). During the self-organised mapping party, the participants made over 650 contributions. The citizen who made the most contributions during the event reported in an interview that they had walked over 70 km in total and that their longest walks were over 3 h long.

Figure 8. (A) The areas with contributions (purple), the areas without contributions (yellow), and empty areas with no inaccurate border markers (white), displayed on a 10-km² grid. (B) The accumulated contributions on the 10-km² grid show regional hotspots. (C) The region with the marketing campaign (south-west) and the region with the self-organised mapping party (north-east).

The number of contributions per day shows a large spike, with a peak of 810 contributions in a day, when the second round of social media marketing had been running for a while and many people had started their summer vacation (Figure 9). The average number of contributions per day during the pilot was 164, and there were on average 35 registrations per day. 50% of the contributions were made during weekends (Friday to Sunday), while 38% were made on Saturdays and Sundays.

Figure 9. The number of contributions per day shows a large spike at the start of the vacation period in July and a drop at the end of the vacation period in September. The mean was 164 contributions per day, the median 135, and the peak 810.

In total, there were 5927 unique contribution days by the citizens. On average, citizens played on two days during their use of the artefact, while the median number of days played was 1.0. The maximum number of days played by a citizen was 41, meaning this citizen contributed at least once on 41 different days.

3.3.1. Questionnaire

The artefact main menu contained a link to an online questionnaire, which had a total of 423 respondents. 210 respondents gave consent to link their questionnaire data with their contribution data by giving their username in the questionnaire. These questionnaire respondents (n = 210) account for 11% of all registered users (1916) but for 6257 (28%) of the total contributions. The questionnaire respondents were therefore, on average, the more motivated players. The respondents could choose the player types they identified with (Figure 10). Each player type had a short description of the type of activities the citizen could be interested in. Altruist, builder, and adventurer were the most common choices, and 87% of citizens chose more than one player type. The three most chosen player type selections, disregarding ‘no answer’ and ‘none’, were altruist (n = 33), altruist + builder (n = 25), and adventurer (n = 17).

Figure 10. Chosen player types (multi-choice) in the questionnaire by citizens (n = 423).

When studying the ratios of measurements to markers reported missing, two player types differ from the rest (Figure 11). 34% of the contributions by profit-chasers are border markers reported missing, while the achievers' equivalent is only 11%. Socialisers, with 27%, and freelancers, with 15%, fall in between the two. The average ratio of reported missing is 20.4%.

Figure 11. Border marker measurements (blue) and border markers reported missing (orange) by the player types selected by citizens. Citizens could select multiple player types.

The contributions of each player type are as follows: altruists (4454), builders (4052), adventurers (3570), freelancers (2357), keepers (2529), achievers (2079), profit-chasers (1799), socialisers (1723), and none (167). The absolute values of the contributions made by each player type cannot be compared with the total number of contributions (22,166) because multiple player types could be chosen in the questionnaire. For example, the altruist contributions contain the contributions of every respondent who chose altruist as one of their player types. The median and average contributions by player type are shown in Figure 12. A one-way ANOVA performed to compare the effect of player type on contribution amounts revealed no statistically significant difference between the groups (F(7, 613) = 0.088, p = 0.999). Compared to the median of 2.0 and mean of 7.7 for the whole contribution data, the median for the questionnaire respondents was 9.0 and the mean was 34.1 (37.0 when the player type ‘none’ is excluded).
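The test reported above can be reproduced with SciPy as sketched below; the per-respondent contribution counts are not published, so the group data here are toy values for illustration only.

```python
from scipy import stats

# Toy per-respondent contribution counts grouped by player type;
# the study's real per-respondent data are not published.
groups = {
    "altruist":      [12, 3, 40, 7, 1],
    "builder":       [9, 2, 33, 5, 6],
    "adventurer":    [4, 8, 21, 2, 3],
    "freelancer":    [6, 1, 15, 9, 2],
    "keeper":        [7, 2, 18, 4, 5],
    "achiever":      [3, 11, 8, 2, 6],
    "profit-chaser": [5, 2, 14, 1, 7],
    "socialiser":    [2, 6, 10, 3, 4],
}

f_stat, p_value = stats.f_oneway(*groups.values())
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups.values()) - len(groups)
print(f"F({df_between}, {df_within}) = {f_stat:.3f}, p = {p_value:.3f}")
```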

Figure 12. The average and median of contributions by player types show that the differences between player types were not statistically significant.

The top 10% of each player type have made more than half of all the contributions of that player type, although the top 10% of achievers and socialisers have made fewer contributions than the rest (Figure 13). The 10 most active contributors among the questionnaire respondents account for 2314 (10%) of all the contributions (22,166). 13% of the top 100 contributors had selected only one player type, and of the top 10 contributors, nine had selected three or more player types.

Figure 13. The top 10% of each player type have made more than half of all the contributions of that player type.

The questionnaire respondents were asked what effects the utility of the artefact had on their motivation (Figure 14). The control questions about a nearby marker and a faraway marker show the range of the responses. Seeing one's own contributions on the map, a form of progression among the gamification affordances, was considerably more motivating than seeing the measurements of others. Being the first measurer, which yielded the most points, was considered more motivating than remeasuring. The leaderboard was found motivating. Points were awarded for reporting missing markers only after the third month (August) of the 4.5-month demonstration, but this did not affect the relatively poor reception of the affordance. Reporting a marker missing would also unintentionally remove the marker from the map for the reporter, which was a cause of confusion in the feedback from questionnaire respondents. The differences between player types regarding the asked affordances were small.

Figure 14. The utility of the artefact had mainly positive effects on respondent motivation. Faraway marker and nearby marker were added as control questions. The mean score of each utility is in parentheses.

The questionnaire respondents were asked what effects new gamification affordances would have on their motivation (Figure 15). The questions included challenges (‘Find five rare border markers’), levels (‘You have reached the level Grand Contributor’), and campaigns (‘Let’s find 1000 unfound border markers together’). Respondents welcomed these new forms of gamification affordances positively. As with the existing gamification affordances, the differences between player types were small.

Figure 15. New gamification affordances added to the artefact would have positive effects on respondent motivation.

As for the usability of the artefact, on a scale of 1 (very hard) to 5 (very easy), the artefact was considered easy to use both while searching for and while measuring the border marker (Figure 16). 61% of respondents found the searching easy (36%) or very easy (25%), with an average score of 3.76. 64% of respondents found the measuring easy (33%) or very easy (31%), with an average score of 3.78.

Figure 16. The artefact was considered easy to use both in the tasking and reporting phase by the respondents.

On a scale of 1 (very unpleasant) to 5 (very pleasant), the questionnaire respondents found the overall experience of using the artefact pleasant (Figure 17). 83% of respondents found the artefact experience pleasant (36%) or very pleasant (47%), with an average score of 4.23.

Figure 17. The questionnaire respondents found the experience of using the artefact pleasant.

3.3.2. Feedback from the respondents and the NMCA

Feedback was given by 251 respondents in the questionnaire via an open text field. Of the feedback (examples translated and shortened from the original input), 8% was clearly negative: ‘I got frustrated quite quickly as I could not find but 2 out of the 10 border markers near my home’; 66% neutral: ‘I would like to add a comment to the missing marker report’; and 26% clearly positive: ‘Very nice and useful. Hope the quest continues’. Of the 88 (35%) reported utility issues, 62 (25%) were related to low network bandwidth and positioning issues. 65 (26%) utility requests were made, while 12 (5%) usability issues were reported. Unsurprisingly, the crowdsourcing task (Figure 3) was reported as challenging by some respondents, especially if the border marker was in difficult-to-traverse terrain, if the marker was concealed by vegetation, or if there were unpleasant environmental conditions, such as mosquitos or heat. There were some usability complaints, but respondents mainly criticised and suggested improvements to the utility. For example, the utility for reporting a marker missing received many complaints and improvement suggestions, such as adding a comment field and not hiding the missing marker from the map after it has been reported. Another example was that border markers far away from other border markers should yield more points. Some respondents were also confused when already accurate border markers that they found in the terrain were not on the artefact map, because such markers had been filtered out. Many respondents hoped the artefact would remain playable even after the pilot period.

As for the NMCA, the artefact was considered a success both in terms of sample size and as a proof of concept. The NMCA was convinced that gamification will work for future crowdsourcing needs. The reports of missing markers are currently being processed by the NMCA, while all the contributions are being studied further.

4. Discussion

This study set out to find out whether crowdsourcing could be utilised to refine cadastral border marker information (RQ1), whether gamification motivates citizens to contribute (RQ2), and whether there is a difference in motivation based on the player types the citizen has chosen (RQ3). It was found that applying simple gamification affordances regardless of player type in a usable and pleasing map-based artefact utilising crowdsourcing resulted in useful contributions for refining cadastral border markers. To reflect on the research and its contributions, the novelty and value of the studied artefact are compared to existing ones, and the limitations of the study are identified along with future work.

4.1. Novelty and value of the study

As for the utilisation of crowdsourcing in refining border markers, three results emerged in the study: the creation process, user-centred design, and useful contributions. First, when considering the creation process of this artefact, most of the listed requirements for gamification projects (Morschheuser et al. 2017b) were fulfilled, such as identifying objectives, testing gamification early on, and following an iterative design process. These requirements have many similarities to those of the DSR approach (Johannesson and Perjons 2014). The outputs of the creation processes of numerous similar studies (Baer and Purves 2022; Laso Bayas et al. 2016; Watkinson et al. 2023) and of this study, e.g. the design issues and the artefact description, should prove valuable for future studies regardless of the research strategy. Second, the attention paid to usability in the creation process (Roth, Ross, and MacEachren 2015) paid off as a pleasant user experience. The artefact was found easy to use, and the experience overall was pleasant according to the citizens. The choice of creating the artefact web-based and requiring registration after contributing made it easy for citizens to try the artefact before enrolling. Both of these lowered the barrier of entry, in addition to having a simple tutorial and training elements, as suggested by Gómez-Barrón, Manso-Callejo, and Alcarria (2019). Third, a gamified crowdsourcing artefact similar to the one in this study, FotoQuest Austria (Laso Bayas et al. 2016), had 2234 contributions and 76 contributors during a half-year demonstration period. This is in line with the findings on comparable gamified crowdsourced artefacts in Morschheuser et al. (2017a), where most previous studies lasted less than a month and had sample sizes below 40. In comparison, the artefact in this study had a relatively long demonstration period (4.5 months) and a high sample size (over 4500 contributors and 22,000 contributions). The NMCA was satisfied with the spatial coverage, sample sizes, and usefulness of the contributions, thus considering the study a success, and, more importantly, was convinced that gamified crowdsourcing has potential.

This study showed that simple gamification affordances paired with other incentives and marketing motivated citizens to contribute. Similar results have been seen throughout the literature (Koivisto and Hamari 2019; Morschheuser et al. 2017a). The task of finding border markers in the terrain can be challenging at times, with the possibility of the citizen failing to find the marker altogether. For example, a border marker can be buried under the ground and covered by vegetation. However, this makes the crowdsourcing task more interesting, since the task of finding a new border marker is heterogeneous. Gamifying the challenging but interesting task proved to lessen the negative effects yet engage the citizens when they completed their task. Other incentives, such as the gift-card rewards (demonstration months 1, 2, 3, and 5), the advertisements (demonstration months 1–5), and the restrictions due to the pandemic, also motivated people to pursue this alternative way to spend their leisure time. Demonstration month 4 (September) had no monetary incentives present, and while there is a decline in contributions (Figure 9), some of it may be due to summer vacations ending.

Based on the results, there were no significant differences in motivation depending on the player types (Gómez-Barrón, Manso-Callejo, and Alcarria 2019) the citizen had chosen. In addition, respondents thought that new gamification affordances would be motivating, but the differences here were small too. However, Morschheuser et al. (2017a) have found that the perception and effectiveness of a gamification approach strongly depend on the users, their characteristics, and their individual goals. Furthermore, there is a difference between power contributors and free riders. As the questionnaire respondents in this study made more contributions than the average participant, they could be labelled power contributors, which might also explain the lack of effect of the chosen player types. This study did find some differences, though. According to Martella, Clementini, and Kray (2019), achievers are especially motivated by reward (e.g. points), competition (e.g. a leaderboard), and progression (e.g. levels). Gómez-Barrón, Manso-Callejo, and Alcarria (2019) further detail that achievers are driven by a sense of competence and prefer independent tasks. The ratio of measurements to missing reports tells something about the contributor, as reporting a border marker missing requires much less effort than finding the marker and measuring it. Some contributors may have chosen the path of least resistance, as can be seen from the ratios of contributions to missing reports: for profit-chasers this ratio was 34%, while for achievers it was only 11%. Perhaps achievers did not give up the search as easily, while profit-chasers saw contributing more as a game with rewards, regardless of the value of the contribution.

4.2. Limitations of the study

The gamification approach chosen was not compared to a non-gamified approach in the demonstration of the artefact, and the study therefore lacks the recommended A/B testing (Morschheuser et al. 2017b). This is the main limitation of this study, in addition to not comparing different gamification affordances in the demonstration. Other gamification affordances were asked about in the questionnaire but were not demonstrated or evaluated. Allowing multiple answers to the question about player type in the questionnaire makes interpreting some of the results challenging. In hindsight, the question could have been phrased so that the citizen ranked the player types they identified with. However, the majority of respondents identified with multiple player types, as described in the literature (Gómez-Barrón, Manso-Callejo, and Alcarria 2019). The point system of the artefact did not emphasise remeasuring the border markers, which led to most contributors measuring more different markers rather than making multiple measurements of one marker. The NMCA had two conflicting goals for the gamification to accomplish: first, to gain measurements for multiple different border markers, and second, to gain multiple measurements for the same border marker. The first goal was chosen, as this study focused on the spatial extent rather than the spatial accuracy of the crowdsourced data. Had the study focused solely on accuracy, the point system would have been set up so that the more times a marker had been measured, the more points it would yield. An A/B test between the new-measurement and remeasurement-focused point systems would provide further clarity. The issue of cheating was considered during the design and development of the artefact, for example, by making a location proximity check during contribution. However, cheating was still possible, for example, by spoofing the device location. Therefore, quality assurance strategies for crowdsourcing, such as fingerprinting or usability design (Daniel et al. 2018), should be further applied.

4.3. Future research

In varying conditions, current consumer devices can reach an average accuracy of 5 m (Kontiokoski Citation2022), which does not meet the submeter accuracy required by the NMCA involved in this study. This result is based on a single measurement per border marker and can be improved with multiple measurements, by increasing the measurement period, or, as Jussila (Citation2023) found, by utilising post-processing and the raw GNSS measurements available on some mobile devices to reach roughly 1.5-m accuracy, which is suitable for some NMCA needs. Regardless, current consumer devices struggle in suboptimal conditions, for example when the line of sight to satellites is limited by forest canopy. More research is needed to determine the number of repeat measurements or the measurement duration required for consumer devices to achieve submeter accuracy. If, for example, more than 20 quick measurements per border marker or a measurement period longer than a few minutes is required, the burden placed on simple gamification affordances to motivate citizens may be too high. Most citizens will likely not want to repeatedly visit the same place, nor spend much time on a single border marker, if the reward is only a few points on a leaderboard. However, Pokémon GO, for example, has both repeat visits and long waits in a specific place built into the game as Gyms and Raid Battles (Pokémon GO Citation2022). Such an added layer of gamification affordances could motivate citizens to complete even more challenging tasks than those present in this artefact. However, adding more complex affordances can raise the barrier to entry and even demotivate citizens from contributing (Preist, Massung, and Coyle Citation2014). Therefore, the balance of gamification needs to be carefully considered, while the retention aspects (Gómez-Barrón, Manso-Callejo, and Alcarria Citation2019) should be further studied.
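
The diminishing returns from repeated measurements can be illustrated with a simple Monte Carlo sketch. It assumes, optimistically, that each fix has an independent zero-mean Gaussian error with a 5-m standard deviation per axis, in which case the mean radial error of the averaged position shrinks roughly as 1/sqrt(n); correlated multipath errors under forest canopy would flatten the curve sooner, consistent with the plateau around 10 measurements discussed above.

import numpy as np

rng = np.random.default_rng(42)
SIGMA_M = 5.0    # assumed per-axis error of a single fix, in metres
TRIALS = 10_000  # Monte Carlo repetitions

for n in (1, 2, 5, 10, 20):
    # Average n independent 2D error vectors per trial.
    avg = rng.normal(0.0, SIGMA_M, size=(TRIALS, n, 2)).mean(axis=1)
    radial = np.hypot(avg[:, 0], avg[:, 1])
    expected = SIGMA_M * np.sqrt(np.pi / 2) / np.sqrt(n)
    print(f"n={n:2d}: simulated mean radial error {radial.mean():.2f} m, "
          f"theoretical {expected:.2f} m")

Under these assumptions, most of the gain comes from the first few fixes, so a point system that pays equally for every repeat measurement would reward effort long after it stops improving accuracy.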

5. Conclusion

In this study, a gamified map-based artefact utilising crowdsourcing enabled citizens to refine the accuracy and quality of border markers in the cadastral index map. The Marker Quest mobile map application was available to citizens during the summer of 2021, during which over 4600 registered citizens made over 22,000 border marker contributions. The results show that gamified artefacts utilising crowdsourcing can be considered viable data sources for governmental organisations. User-centred design and simple gamification affordances effective regardless of player type, paired with a relatively simple yet challenging and interesting crowdsourcing task, enabled a pleasant user experience and provided useful information for the NMCA with a high sample size. However, the lack of extensive A/B-testing leaves uncertainty about the effect of gamification on the results. The creation process and evaluation results also provide insight for future VGI studies and support development towards increasing the role of crowdsourcing in appropriate tasks of authoritative mapping.

Geolocation

The study area is the whole of Finland.

Acknowledgements

We would like to thank the National Land Survey of Finland for enabling this research. We would also like to thank all the citizens who participated in this research by playing Marker Quest (en) / Pyykkijahti (fi).

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data gathered in this study consist of contributions and questionnaire responses from citizens who gave their consent. The contributions contain both a unique identifier (an email address or username) and a geolocation (likely to reveal a movement pattern or the approximate location of residence). The questionnaire responses contain unique identifiers that can be connected to the contributions. Therefore, it would not be ethical to share the data as such.

References

  • Abrahamsson, P., O. Salo, J. Ronkainen, and J. Warsta. 2017. Agile software development methods: review and analysis. VTT Technical report. https://doi.org/10.48550/arXiv.1709.08439
  • Aggrey, J., S. Bisnath, N. Naciri, G. Shinghal, and S. Yang. 2020. “Multi-GNSS Precise Point Positioning with Next-Generation Smartphone Measurements.” Journal of Spatial Science 65 (1): 79–98. https://doi.org/10.1080/14498596.2019.1664944
  • Alvarez León, L. F., and S. Quinn. 2019. “The value of crowdsourced street-level imagery: examining the shifting property regimes of OpenStreetCam and Mapillary.” GeoJournal 84 (2): 395–414. https://doi.org/10.1007/s10708-018-9865-4.
  • Apostolopoulos, K., M. Geli, P. Petrelli, C. Potsiou, and C. Ioannidis. 2018. “A New Model for Cadastral Surveying Using Crowdsourcing.” Survey Review 50 (359): 122–133. https://doi.org/10.1080/00396265.2016.1253522
  • Apostolopoulos, K., and C. Potsiou. 2022. “Consideration on How to Introduce Gamification Tools to Enhance Citizen Engagement in Crowdsourced Cadastral Surveys.” Survey Review 54 (383): 142–152. https://doi.org/10.1080/00396265.2021.1888027
  • Baer, Manuel F., and Ross S. Purves. 2022. “Window Expeditions: A playful approach to crowdsourcing natural language descriptions of everyday lived landscapes.” Applied Geography 148: 102802. https://doi.org/10.1016/j.apgeog.2022.102802.
  • Banville, S., and F. van Diggelen. 2016. “Precision GNSS for Everyone.” GPS World 27: 43–48.
  • Baruch, A., A. May, and D. Yu. 2016. “The motivations, enablers and barriers for voluntary participation in an online crowdsourcing platform.” Computers in Human Behavior 64: 923–931. https://doi.org/10.1016/j.chb.2016.07.039.
  • Baskerville, R., A. Baiyere, S. Gregor, A. Hevner, and M. Rossi. 2018. “Design Science Research Contributions: Finding a Balance Between Artefact and Theory.” Journal of the Association for Information Systems 19 (5): 358–376. https://doi.org/10.17705/1jais.00495
  • Bilogrevic, I. 2018. “Privacy in Geospatial Applications and Location-Based Social Networks.” In Handbook of Mobile Data Privacy, edited by A. Gkoulalas-Divanis and C. Bettini, 195–228. Cham: Springer. https://doi.org/10.1007/978-3-319-98161-1_8.
  • Buchholtz, N. 2021. “Mathematical Modelling Education in East and West.” In International Perspectives on the Teaching and Learning of Mathematical Modelling, edited by F. K. S. Leung, G. A. Stillman, G. Kaiser, and K. L. Wong, 331–340. Cham: Springer. https://doi.org/10.1007/978-3-030-66996-6_28.
  • Celino, I., D. Cerizza, S. Contessa, M. Corubolo, D. Dellaglio, E. Della Valle, and S. Fumeo. 2012. “Urbanopoly: A Social and Location-Based Game with a Purpose to Crowdsource Your Urban Data.” In 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Conference on Social Computing, 910–913. IEEE. https://doi.org/10.1109/SOCIALCOM-PASSAT.2012.138.
  • Clouston, A. D. 2015. Crowdsourcing the Cadastre: The Applicability of Crowdsourced Geospatial Information to the New Zealand Cadastre.
  • Daniel, F., P. Kucherbaev, C. Cappiello, B. Benatallah, and M. Allahbakhsh. 2018. “Quality Control in Crowdsourcing: A Survey of Quality Attributes, Assessment Techniques, and Assurance Actions.” ACM Computing Surveys 51 (1): 1–40. https://doi.org/10.1145/3148148
  • Dresch, A., D. P. Lacerda, and J. A. V. Antunes. 2015. Design Science Research. Cham: Springer.
  • Gómez-Barrón, J. P., MÁ Manso-Callejo, and R. Alcarria. 2019. “Needs, Drivers, Participants and Engagement Actions: A Framework for Motivating Contributions to Volunteered Geographic Information Systems.” Journal of Geographical Systems 21 (1): 5–41. https://doi.org/10.1007/s10109-018-00289-5
  • Gómez-Barrón, J. P., MÁ Manso-Callejo, R. Alcarria, and T. Iturrioz. 2016. “Volunteered Geographic Information System Design: Project and Participation Guidelines.” ISPRS International Journal of Geo-Information 5 (7): 108. https://doi.org/10.3390/ijgi5070108
  • Goodchild, M. F. 2007. “Citizens as Sensors: The World of Volunteered Geography.” GeoJournal 69 (4): 211–221. https://doi.org/10.1007/s10708-007-9111-y
  • Johannesson, P., and E. Perjons. 2014. An Introduction to Design Science. Cham: Springer.
  • Jussila, A. 2023. Positioning Accuracy of Smartphones in Crowdsourcing Context. http://urn.fi/URN:NBN:fi:aalto-202305213319.
  • Kietzmann, J. H., K. Hermkens, I. P. McCarthy, and B. S. Silvestre. 2011. “Social Media? Get Serious! Understanding the Functional Building Blocks of Social Media.” Business Horizons 54 (3): 241–251. https://doi.org/10.1016/j.bushor.2011.01.005
  • Kim, B. 2015. “The popularity of gamification in the mobile and social era.” Library Technology Reports 51 (2): 5–9.
  • Koivisto, J., and J. Hamari. 2019. “The Rise of Motivational Information Systems: A Review of Gamification Research.” International Journal of Information Management 45: 191–210. https://doi.org/10.1016/j.ijinfomgt.2018.10.013
  • Kontiokoski, A. 2022. Enhancing Location Accuracy of Border Markers by Crowdsourced Smartphone Positioning. https://urn.fi/URN:NBN:fi:amk-202202252860.
  • Kuparinen, L. 2016. Validation and Extension of the Usability Heuristics for Mobile Map Applications. In ICC & GIS 2016: Proceedings of the 6th International Conference on Cartography & GIS (Vol. 1 and 2). Bulgarian Cartographic Association.
  • Laato, S., S. M. Hyrynsalmi, and M. Paloheimo. 2019. “Online Multiplayer Games for Crowdsourcing the Development of Digital Assets.” In Software Business. ICSOB 2019. Lecture Notes in Business Information Processing, vol 370, edited by S. Hyrynsalmi, M. Suoranta, A. Nguyen-Duc, P. Tyrväinen, and P. Abrahamsson. Cham: Springer. https://doi.org/10.1007/978-3-030-33742-1_31
  • Laso Bayas, J., L. See, S. Fritz, T. Sturn, C. Perger, M. Dürauer, M. Karner, et al. 2016. “Crowdsourcing In-Situ Data on Land Cover and Land Use Using Gamification and Mobile Technology.” Remote Sensing 8 (11): 905. https://doi.org/10.3390/rs8110905.
  • Laso Bayas, J. C., L. See, M. Lesiv, M. Dürauer, I. Georgieva, D. Schepaschenko, … S. Fritz. 2021. “Experiences from Recent Geo-Wiki Citizen Science Campaigns in the Creation and Sharing of New Reference Data Sets on Land Cover and Land Use.” EGU General Assembly Conference Abstracts, EGU21-10871. https://doi.org/10.5194/egusphere-egu21-10871.
  • Lemmens, R., P. Mooney, and J. Crompvoets. 2020. Crowdsourcing in National Mapping. VGI-Map of Europe: The State of Play.
  • Martella, R., E. Clementini, and C. Kray. 2019. “Crowdsourcing Geographic Information with a Gamification Approach.” Geodetski Vestnik 63 (02). https://doi.org/10.15292/geodetski-vestnik.2019.02.213-233
  • Martella, R., C. Kray, and E. Clementini. 2015. “A gamification framework for volunteered geographic information.” In AGILE 2015, Lecture Notes in Geoinformation and Cartography, edited by F. Bacao, M. Santos, and M. Painho, 73–89. Cham: Springer. https://doi.org/10.1007/978-3-319-16787-9_5.
  • McCartney, E. A., K. J. Craun, E. Korris, D. A. Brostuen, and L. R. Moore. 2015. “Crowdsourcing the National Map.” Cartography and Geographic Information Science 42 (sup1): 54–57. https://doi.org/10.1080/15230406.2015.1059187
  • Monreale, A., S. Rinzivillo, F. Pratesi, F. Giannotti, and D. Pedreschi. 2014. “Privacy-by-design in Big Data Analytics and Social Mining.” EPJ Data Science 3 (1): 1–26. https://doi.org/10.1140/epjds/s13688-014-0010-4
  • Mooney, P., J. Crompvoets, and R. Lemmens. 2018. “Crowdsourcing in National Mapping.” European Spatial Data Research Network Official Publication.
  • Morschheuser, B., J. Hamari, J. Koivisto, and A. Maedche. 2017a. “Gamified Crowdsourcing: Conceptualization, Literature Review, and Future Agenda.” International Journal of Human-Computer Studies 106: 26–43. https://doi.org/10.1016/j.ijhcs.2017.04.005
  • Morschheuser, B., J. Hamari, K. Werder, and J. Abe. 2017b. “How to Gamify? A Method for Designing Gamification.” In Proceedings of the 50th Hawaii International Conference on System Sciences (HICSS).
  • Olteanu-Raimond, A.-M., M. Laakso, V. Antoniou, C. C. Fonte, A. Fonseca, M. Grus, J. Harding, T. Kellenberger, M. Minghini, and A. Skopeliti. 2017. “VGI in National Mapping Agencies: Experiences and Recommendations.” In Mapping and the Citizen Sensor, edited by G. Foody, L. See, S. Fritz, P. Mooney, A.-M. Olteanu-Raimond, C. C. Fonte, and V. Antoniou, 299–326. London: Ubiquity Press. https://doi.org/10.5334/bbf.m.
  • Peffers, K., T. Tuunanen, M. A. Rothenberger, and S. Chatterjee. 2007. “A Design Science Research Methodology for Information Systems Research.” Journal of Management Information Systems 24 (3): 45–77. https://doi.org/10.2753/MIS0742-1222240302
  • Pokémon GO. 2022. Gym | Pokémon GO Wiki | Fandom, https://pokemongo.fandom.com/wiki/Gym (Accessed 22.12.2022).
  • Preist, C., E. Massung, and D. Coyle. 2014. “Competing or Aiming to Be Average? Normification as a Means of Engaging Digital Volunteers.” In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, 1222–1233.
  • Ricker, B., and R. E. Roth. 2018. “Mobile Maps and Responsive Design.” In The Geographic Information Science & Technology Body of Knowledge (2nd Quarter 2018 Edition), edited by John P. Wilson. https://doi.org/10.22224/gistbok/2018.2.5
  • Rönneberg, M. 2022. Approach for Creating Useful, Gamified and Social Map Applications Utilising Privacy-Preserving Crowdsourcing. Aalto University publication series DOCTORAL THESES 53/2022; FGI Publications 166.
  • Rönneberg, M., and P. Kettunen. 2021. “Enabling citizens to refine the location accuracy of cadastre boundary markers by gamified VGI.” Abstracts of the ICA 3: 1–2. https://doi.org/10.5194/ica-abs-3-252-2021.
  • Rönneberg, M., M. Laakso, and T. Sarjakoski. 2019. “Map Gretel: social map service supporting a national mapping agency in data collection.” Journal of Geographical Systems 21 (1): 43–59. https://doi.org/10.1007/s10109-018-0288-z.
  • Roth, R. E., K. S. Ross, and A. M. MacEachren. 2015. “User-Centered Design for Interactive Maps: A Case Study in Crime Analysis.” ISPRS International Journal of Geo-Information 4 (1): 262–301. https://doi.org/10.3390/ijgi4010262
  • Sailer, M., J. U. Hense, S. K. Mayr, and H. Mandl. 2017. “How Gamification Motivates: An Experimental Study of the Effects of Specific Game Design Elements on Psychological Need Satisfaction.” Computers in Human Behavior 69: 371–380. https://doi.org/10.1016/j.chb.2016.12.033
  • Deterding, S., D. Dixon, R. Khaled, and L. Nacke. 2011. “From Game Design Elements to Gamefulness: Defining ‘Gamification’.” In Proceedings of the 15th International Academic MindTrek Conference, 9–15.
  • See, L., P. Mooney, G. Foody, L. Bastin, A. Comber, J. Estima, … M. Rutzinger. 2016. “Crowdsourcing, Citizen Science or Volunteered Geographic Information? The Current State of Crowdsourced Geographic Information.” ISPRS International Journal of Geo-Information 5 (5): 55. https://doi.org/10.3390/ijgi5050055
  • Ullah, T., S. Lautenbach, B. Herfort, M. Reinmuth, and D. Schorlemmer. 2023. “Assessing Completeness of OpenStreetMap Building Footprints Using MapSwipe.” ISPRS International Journal of Geo-Information 12 (4): 143. https://doi.org/10.3390/ijgi12040143.
  • Vaishnavi, V., B. Kuechler, and S. Petter. 2004. Design Science Research in Information Systems. http://desrist.org/design-research-in-information-systems/ (Accessed 4.11.2021).
  • Watkinson, K., J. J. Huck, and A. Harris. 2023. “Using gamification to increase map data production during humanitarian volunteered geographic information (VGI) campaigns.” Cartography and Geographic Information Science 50 (1): 79–95. https://doi.org/10.1080/15230406.2022.2156389.
  • Wilson, Paul F., Larry D. Dell, and Gaylord F. Anderson. 1993. Root Cause Analysis: A Tool for Total Quality Management. Milwaukee, Wisconsin: ASQ Quality Press. ISBN 0-87389-163-5.
  • Zichermann, G., and C. Cunningham. 2011. Gamification by Design: Implementing Game Mechanics in web and Mobile Apps. Sebastopol, CA: O'Reilly Media, Inc.