Artificial intelligence and machine learning
Research Article
Comparative Analysis of Backbone Architectures for Instance Segmentation of Objects in Aerial Imagery Using Mask R-CNN
Igor Victorovich Vinokurov¹, Daria Aleksandrovna Frolova², Andrey Ivanovich Ilyin³, Ivan Romanovich Kuznetsov⁴
¹⁻⁴ Financial University under the Government of the Russian Federation, Moscow, Russia
Abstract. This paper compares Mask R-CNN models with various pretrained backbone architectures for instance segmentation of real-estate objects in aerial images. The models were fine-tuned on a specialized dataset provided by PLC «Roskadastr».
Analysis of bounding-box detection and object segmentation mask accuracy identified the preferred architectures: the Swin transformers (Swin-S and Swin-T) and the ConvNeXt-T convolutional network. The high accuracy of these models is explained by their ability to capture global contextual dependencies in the image.
The results of the study allow us to formulate the following recommendations for choosing a backbone architecture: for real-time monitoring systems where performance is critical, lightweight models (EfficientNet-B3, ConvNeXt-T, Swin-T) are advisable; for offline tasks requiring maximum accuracy (such as real-estate mapping), the larger Swin-S model is recommended. (Linked article texts in English and in Russian.)
Keywords: instance segmentation, backbone, Mask R-CNN, ResNet, DenseNet, EfficientNet, ConvNeXt, Swin
MSC-2020
For citation: Igor V. Vinokurov, Daria A. Frolova, Andrey I. Ilyin, Ivan R. Kuznetsov. Comparative Analysis of Backbone Architectures for Instance Segmentation of Objects in Aerial Imagery Using Mask R-CNN. Program Systems: Theory and Applications, 2025, 16:4, pp. 173–216 (in English and in Russian). https://psta.psiras.ru/2025/4_173-216.
Full text of the bilingual article (PDF): https://psta.psiras.ru/read/psta2025_4_173-216.pdf (clicking the flag in the header switches the page language).
The article was submitted 22.09.2025; approved after reviewing 27.09.2025; accepted for publication 12.10.2025; published online 19.10.2025.