Research on Building Extraction from Remote Sensing Images Based on Deep Learning
Authors:
Guokuan He, Yuhong Wang, Ping Nie
Keywords:
Deep learning; buildings; remote sensing imagery; U-net network model; feature extraction
Doi:
10.70114/acmsr.2026.6.1.P1
Abstract
In this study, experiments are carried out on the LoveDA dataset. In the data preprocessing stage, the sample size of the dataset is effectively expanded through normalization and data augmentation operations such as rotation and flipping, so that the model can learn buildings of different shapes and orientations during training. The U-Net network model is then trained on the processed dataset; throughout training, key indicators such as accuracy and F1 score are monitored, model parameters and hyperparameters are adjusted according to changes in these indicators, and the model weights are saved once training is complete. Finally, the model is evaluated through comparison experiments with other methods. The experimental results show that the proposed method extracts buildings with an accuracy of 0.9428, a recall of 0.8104, and an F1 score of 0.8227, improving accuracy by 20% over the traditional method, with significant gains in recall and F1 score as well. This demonstrates that U-Net has a clear advantage in building extraction from remote sensing images and provides strong support for urban planning and management.
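The augmentation and evaluation steps summarized above can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual implementation: the function names are ours, the augmentation set (three rotations plus two flips) is an assumption consistent with "rotating, flipping, etc.", and the metrics are computed on binary building masks.

```python
import numpy as np

def augment(image):
    """Return the original image plus rotated and flipped variants.

    Works for (H, W) masks or (H, W, C) images. Illustrative only;
    the paper's exact augmentation pipeline is not specified.
    """
    variants = [image]
    for k in (1, 2, 3):                      # 90°, 180°, 270° rotations
        variants.append(np.rot90(image, k))
    variants.append(np.fliplr(image))        # horizontal flip
    variants.append(np.flipud(image))        # vertical flip
    return variants

def precision_recall_f1(pred, truth):
    """Pixel-wise precision, recall, and F1 for binary building masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # building pixels correctly found
    fp = np.logical_and(pred, ~truth).sum()  # false alarms
    fn = np.logical_and(~pred, truth).sum()  # missed building pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

In practice, `augment` would be applied to each training tile (and its label mask identically), and `precision_recall_f1` would be run on the thresholded U-Net output against the ground-truth mask to track the indicators reported in the abstract.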