Abstract

Forest maps are essential for understanding forest dynamics. Owing to the increasing availability of remote sensing data and machine learning models such as convolutional neural networks, forest maps can nowadays be created at large scales with high accuracy. Common methods usually predict a map from remote sensing images without deliberately considering intermediate semantic concepts that are relevant to the final map. This makes the mapping process difficult to interpret, especially when opaque deep learning models are used. Moreover, such a procedure is entirely agnostic to the definitions of the mapping targets (e.g., forest types depending on variables such as tree height and tree density). Common models can at best learn these rules implicitly from data, which greatly hinders trust in the produced maps. In this work, we aim to build an explainable deep learning model for forest mapping that leverages prior knowledge about forest definitions to provide explanations for its decisions. We propose a model that explicitly quantifies intermediate variables involved in the forest definitions, such as tree height and tree canopy density, corresponding to those used to create the forest maps on which the model is trained in the first place, and combines them accordingly. We apply our model to mapping forest types from very high resolution aerial imagery, with a particular focus on the treeline ecotone at high altitudes, where forest boundaries are complex and highly dependent on the chosen forest definition. Results show that our rule-informed model is able to quantify intermediate key variables and to predict forest maps that reflect the underlying forest definitions. Through its interpretable design, it can further reveal implicit patterns in the manually annotated forest labels, which facilitates the analysis of the produced maps and their comparison with other datasets.
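
To make the idea of a rule-informed model concrete, the sketch below shows one way such an architecture could look. This is not the authors' code: it assumes a PyTorch setup in which a fully convolutional backbone regresses the intermediate variables named in the abstract (tree height, canopy density), and a differentiable rule layer combines them into a forest probability map. The class name, layer sizes, and thresholds (e.g., 3 m height, 20 % canopy cover) are illustrative placeholders, not values from the paper.

import torch
import torch.nn as nn

class RuleInformedForestMapper(nn.Module):
    """Hypothetical sketch: predict intermediate variables, then apply a soft forest rule."""

    def __init__(self, height_thresh=3.0, density_thresh=0.2, temperature=10.0):
        super().__init__()
        # Shared fully convolutional backbone over aerial imagery (3-channel input)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # Per-pixel heads for the physically meaningful intermediate variables
        self.height_head = nn.Conv2d(32, 1, 1)    # tree height (metres)
        self.density_head = nn.Conv2d(32, 1, 1)   # canopy density in [0, 1]
        self.height_thresh = height_thresh        # placeholder: trees taller than 3 m
        self.density_thresh = density_thresh      # placeholder: canopy cover above 20 %
        self.temperature = temperature            # sharpness of the soft thresholds

    def forward(self, x):
        feats = self.backbone(x)
        height = torch.relu(self.height_head(feats))       # non-negative height
        density = torch.sigmoid(self.density_head(feats))  # density in [0, 1]
        # Differentiable version of the rule "forest if height > t_h AND density > t_d"
        tall = torch.sigmoid(self.temperature * (height - self.height_thresh))
        dense = torch.sigmoid(self.temperature * (density - self.density_thresh))
        forest_prob = tall * dense
        # Returning the intermediates keeps every decision inspectable
        return forest_prob, height, density

# Usage: forest probabilities plus the intermediate maps that explain them
model = RuleInformedForestMapper()
img = torch.rand(1, 3, 64, 64)
forest_prob, height, density = model(img)

Because the rule layer mirrors the forest definition used to create the training labels, the predicted height and density maps can be inspected directly wherever the forest map disagrees with the annotations.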
