
    Crop mapping using deep learning and multi-source satellite remote sensing

    Crop mapping is a prerequisite for supporting decision-making and for providing accurate, timely crop inventories used to estimate crop production and monitor dynamic crop growth at various scales. However, in-situ crop mapping is often expensive and labour-intensive. Satellite remote sensing offers a more cost-effective alternative, delivering time-series data that can repeatedly capture the dynamics of crop growth at large scales and at regular revisit intervals. While most existing crop-type products are generated from remote sensing data with machine learning approaches, prediction accuracy can be low because misclassifications persist due to phenological similarities between crops and the complexity of real-world farming systems. Deep neural networks show great potential for capturing seasonal patterns and sequential relationships in time-series data owing to their end-to-end feature learning. This thesis presents a comprehensive exploration of advanced deep learning methodologies for large-scale agricultural crop mapping using multi-temporal, multi-source remote sensing data. Focusing on Bei'an County in Northeast China, the research develops and evaluates novel frameworks that produce accurate crop-specific map products, addressing challenges such as optimal satellite-based input feature selection, imbalanced crop-type distribution, model transferability, and model learning visualisation. These challenges are addressed in complex agricultural environments by introducing advanced deep learning architectures that employ multi-stream models and multi-source data fusion.
    The classification frameworks developed in this thesis show improved performance in mapping crops accurately, particularly with respect to model generalisability for inference over unseen areas, spatial and interannual transferability across different test sites, and interpretability for unveiling the model decision process, which contributes to a deeper understanding of how models learn the temporal growth patterns of crops. The findings highlight the importance of temporal dynamics, the integration of diverse data sources, and the effectiveness of ensemble learning in enhancing the accuracy and reliability of crop classification. A deep learning framework using radar-based features was developed, achieving F1 scores of 87% for maize, 86% for soybean, and 85% for other crops on an imbalanced crop dataset. This approach was then extended by integrating Sentinel-1 and Sentinel-2 data, yielding an overall accuracy of 91.7% and F1 scores of 93.7%, 92.2%, and 90.9% for maize, soybean, and wheat, respectively. Furthermore, the spatiotemporal transferability of pre-trained models was systematically evaluated across two test sites, giving overall accuracies of 96.2% and 90.7%, mean F1 scores of 92.7% and 88.6%, and mean IoUs of 86.9% and 79.7% for site A and site B, respectively.
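The per-class F1 and mean IoU figures reported above can both be derived from a confusion matrix. The following is a minimal illustrative sketch (not the thesis code) showing how these metrics are computed; the 3-class confusion matrix values are invented for the example, and the class ordering (maize, soybean, wheat) is an assumption.

```python
# Illustrative sketch: per-class F1 and IoU from a confusion matrix.
# conf[i][j] = number of pixels of true class i predicted as class j.
# The matrix values below are invented toy numbers, not thesis results.

def per_class_metrics(conf):
    """Return {class_index: {"f1": ..., "iou": ...}} for a square confusion matrix."""
    n = len(conf)
    metrics = {}
    for c in range(n):
        tp = conf[c][c]                                  # true positives
        fp = sum(conf[r][c] for r in range(n)) - tp      # column sum minus diagonal
        fn = sum(conf[c]) - tp                           # row sum minus diagonal
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
        metrics[c] = {"f1": f1, "iou": iou}
    return metrics

# Toy confusion matrix over (maize, soybean, wheat) -- an assumption for illustration.
conf = [
    [90, 5, 5],
    [4, 92, 4],
    [6, 3, 91],
]
m = per_class_metrics(conf)
mean_f1 = sum(v["f1"] for v in m.values()) / len(m)
mean_iou = sum(v["iou"] for v in m.values()) / len(m)
```

Note that IoU (tp / (tp + fp + fn)) is always at most the F1 score for the same class, which is consistent with the mean IoUs above being lower than the corresponding mean F1 scores.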

    Enhanced crop classification through integrated optical and SAR data: a deep learning approach for multi-source image fusion

    Agricultural crop mapping has advanced over recent decades thanks to improved approaches and the increased availability of image datasets at various spatial and temporal resolutions. Given the spatial and temporal dynamics of different crops during a growing season, multi-temporal classification frameworks are well suited to mapping crops at large scales. To address the challenges posed by imbalanced class distributions, our approach combines the strengths of different deep learning models in an ensemble learning framework, enabling more accurate and robust classification by capitalizing on their complementary capabilities. This research aims to enhance the classification of maize, soybean, and wheat in Bei’an County, Northeast China, by developing a novel deep learning architecture that combines a three-dimensional convolutional neural network (3D-CNN) with a variant of convolutional recurrent neural networks (ConvRNN). The proposed method fuses multi-temporal Sentinel-1 polarimetric features with Sentinel-2 surface reflectance data and achieves an overall accuracy of 91.7%, a Kappa coefficient of 85.7%, and F1 scores of 93.7%, 92.2%, and 90.9% for maize, soybean, and wheat, respectively. Compared with alternative data augmentation techniques, the proposed model maintains the highest mean F1 score (87.7%). The best-performing model was weakly supervised with ten per cent of the ground truth data collected in Bei’an in 2017 and was used to produce an annual crop map to measure the model’s generalizability. The learning reliability of the proposed method is interpreted through visualization of the model’s soft outputs and saliency maps.
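The ensemble idea described above — combining complementary branch models and inspecting their soft outputs — can be illustrated with a simple late-fusion sketch. This is an assumption for illustration, not the published architecture: it averages the per-class probability vectors of two hypothetical branches (a 3D-CNN on Sentinel-2 reflectance and a ConvRNN on Sentinel-1 polarimetric features) and takes the argmax as the fused prediction.

```python
# Illustrative late-fusion sketch (an assumption, not the published method):
# average the per-class soft outputs of two branch models, then take the
# argmax of the fused probabilities as the final class label.

def fuse_soft_outputs(probs_a, probs_b, weight_a=0.5):
    """Weighted average of two class-probability vectors of equal length."""
    weight_b = 1.0 - weight_a
    return [weight_a * a + weight_b * b for a, b in zip(probs_a, probs_b)]

def predict(probs):
    """Index of the most probable class."""
    return max(range(len(probs)), key=probs.__getitem__)

# Hypothetical soft outputs over (maize, soybean, wheat) for one pixel.
cnn_probs = [0.55, 0.30, 0.15]   # 3D-CNN branch (Sentinel-2 reflectance)
rnn_probs = [0.35, 0.50, 0.15]   # ConvRNN branch (Sentinel-1 polarimetry)
fused = fuse_soft_outputs(cnn_probs, rnn_probs)
label = predict(fused)           # argmax of the fused probabilities
```

Inspecting the fused probability vector itself, rather than only the hard label, is one way the abstract's "soft outputs" can support interpretation: a near-tie between classes flags low-confidence pixels.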