[Objective] Solar-induced chlorophyll fluorescence (SIF) data obtained from satellites suffer from low spatial and temporal resolution and discrete footprints because of the limitations imposed by satellite orbits. To address these problems and obtain higher-resolution SIF data, most reconstruction studies to date have been based on low-resolution satellite SIF. Moreover, the spatial resolution of most reconstructed SIF products is still insufficient for direct use in studying crop photosynthetic rates at the regional scale. Although some SIF products offer higher resolution, they are not reconstructed from the original satellite SIF data but are instead secondary reconstructions based on existing SIF reconstruction products. The Orbiting Carbon Observatory-2 (OCO-2) carries a high-resolution spectrometer, and OCO-2 SIF therefore has a higher spatial resolution (1.29 km×2.25 km) than other original SIF products, making it well suited to high-resolution SIF reconstruction, particularly for regional-scale crop studies. [Methods] This research explored SIF reconstruction at the regional scale, focusing on selected soybean planting regions in the United States. MODIS raw data were chosen after careful consideration of environmental conditions, the distinctive physiological attributes of soybean, and the factors most closely related to OCO-2 SIF within these planting regions. The primary tasks were to reconstruct high-resolution soybean SIF and to rigorously assess the quality of the reconstructed SIF. During dataset construction, SIF data from the multiple soybean planting regions traversed by the OCO-2 footprint were merged to retain as many of the available original SIF samples as possible, providing the subsequent reconstruction model with a rich source of SIF data. SIF data obtained along the satellite track were matched with MODIS datasets, including the enhanced vegetation index (EVI), the fraction of photosynthetically active radiation (FPAR), and land surface temperature (LST), to create a multisource remote sensing dataset for model training. Because this dataset covered the explanatory variables most relevant to soybean physiological structure and environmental conditions within each SIF footprint, the activation functions of a back-propagation (BP) neural network could capture the complex nonlinear relationships between the original SIF data and these MODIS products. Leveraging these nonlinear relationships, the effects of different combinations of explanatory variables on SIF reconstruction were compared using three indicators: the goodness of fit (R2), root mean square error (RMSE), and mean absolute error (MAE). The best reconstruction model was then selected to generate a regional-scale, spatially continuous, high-temporal-resolution (500 m, 8 d) soybean SIF reconstruction dataset (BPSIF).
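As an illustration of the footprint-to-MODIS matching step described above, the following Python sketch pairs each OCO-2 SIF sounding with the EVI, FPAR, and LST values of its 8-day MODIS composite period using a nearest-pixel lookup. The array layouts, column names, and centre-point sampling are simplifying assumptions rather than the study's exact procedure.

```python
# Illustrative pairing of OCO-2 SIF soundings with 8-day MODIS composites by
# nearest-pixel lookup; array layouts and column names are assumptions, and a
# real footprint would be averaged over every 500 m pixel it covers.
import numpy as np
import pandas as pd

def modis_period_start(date, year_start):
    """Map a date to the start date of its 8-day MODIS composite period."""
    return year_start + pd.Timedelta(days=((date - year_start).days // 8) * 8)

def sample_grid(grid, lat, lon, lat0, lon0, res):
    """Nearest-pixel lookup in a regular grid whose first row is at latitude lat0."""
    rows = np.clip(((lat0 - lat) / res).astype(int), 0, grid.shape[0] - 1)
    cols = np.clip(((lon - lon0) / res).astype(int), 0, grid.shape[1] - 1)
    return grid[rows, cols]

def build_training_table(sif_df, evi, fpar, lst, lat0, lon0, res):
    """sif_df has columns ['date', 'lat', 'lon', 'sif'] for one growing season;
    evi/fpar/lst are dicts of 2-D arrays keyed by composite start date."""
    year_start = pd.Timestamp(sif_df['date'].dt.year.iloc[0], 1, 1)
    periods = sif_df['date'].map(lambda d: modis_period_start(d, year_start))
    records = []
    for period, g in sif_df.groupby(periods):
        if period not in evi:                    # no matching composite -> skip
            continue
        rec = g[['sif']].copy()
        for name, product in (('evi', evi), ('fpar', fpar), ('lst', lst)):
            rec[name] = sample_grid(product[period], g['lat'].values,
                                    g['lon'].values, lat0, lon0, res)
        records.append(rec)
    return pd.concat(records, ignore_index=True)
```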
[Results and Discussions] The findings confirmed the strong performance of the reconstruction model in predicting soybean SIF. When EVI, FPAR, and LST were incorporated simultaneously as explanatory variables, the model achieved a goodness of fit of R2 = 0.84. This validated the model's ability to predict SIF and indicated that the reconstructed 8 d SIF data are reliable for crop photosynthesis research at the 500 m×500 m scale. The reconstructed SIF product (BPSIF) was generated from this optimal model. The Pearson correlation coefficient between the original OCO-2 SIF data and MODIS gross primary productivity (GPP) was a modest 0.53, whereas the correlation between BPSIF and MODIS GPP rose to 0.80. This increased correlation suggests that BPSIF reflects the dynamics of GPP during the soybean growing season more accurately, making it more reliable than the original SIF data. Selecting US soybean planting areas with relatively uniform cropping as the study area, together with the high spatial resolution (1.29 km×2.25 km) of the OCO-2 SIF data, greatly reduced vegetation heterogeneity within a single SIF footprint. [Conclusions] The proposed BPSIF significantly enhances the spatial and temporal continuity of OCO-2 SIF while preserving the temporal and spatial attributes of the original SIF dataset. Within the study area, BPSIF exhibits a markedly stronger correlation with MODIS GPP than the original OCO-2 SIF. The reconstruction method proposed in this study can therefore provide a more reliable SIF dataset, one with the potential to advance the understanding of soybean SIF at finer spatial and temporal scales and of its relationship with soybean GPP.
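The model fitting and the three evaluation indicators used above can be illustrated with a minimal sketch, here using scikit-learn's MLPRegressor as a stand-in for the BP neural network; the network size, train/test split, and other hyper-parameters are assumptions for illustration, not the configuration used in the study.

```python
# Illustrative BP-style regression of SIF on EVI, FPAR and LST with the three
# reported indicators; the network size, split and other hyper-parameters are
# assumptions, not the configuration used in the study.
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

def fit_and_evaluate(X, y):
    """X: (n, 3) array of [EVI, FPAR, LST]; y: (n,) array of OCO-2 SIF."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    model = MLPRegressor(hidden_layer_sizes=(32, 16), activation='relu',
                         max_iter=2000, random_state=0)
    model.fit(scaler.transform(X_tr), y_tr)
    y_hat = model.predict(scaler.transform(X_te))
    return {'R2': r2_score(y_te, y_hat),
            'RMSE': mean_squared_error(y_te, y_hat) ** 0.5,
            'MAE': mean_absolute_error(y_te, y_hat)}
```

Running the same routine on different subsets of the three explanatory variables reproduces the variable-combination comparison described above.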
[Objective] Accurately determining the suitable sowing date for winter wheat is of great significance for improving wheat yield and ensuring national food security. The traditional visual interpretation method is not only time-consuming and labor-intensive but also covers a relatively small area, while remote sensing monitoring is post-event monitoring and therefore exhibits a time lag. The aim of this research is to use the temperature threshold method and the accumulated thermal time requirement for wheat leaf appearance method to analyze the suitable sowing date for winter wheat at the township level within a county under long-term climate warming. [Methods] The research area comprised the townships of Qihe county, Shandong province. Based on European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data from 1997 to 2022, 16 meteorological grid points in Qihe county were selected. First, bilinear interpolation was used to interpolate the grid-point temperature data to the approximate center point of each township, yielding daily average temperatures for each township. Then, the temperature threshold method was used to determine the final dates of stable passage through 18, 16, 14 and 0 ℃. Key sowing date indicators, including the suitable sowing temperature for different wheat varieties, the growing degree days (GDD) ≥0 ℃ accumulated from different sowing dates to the onset of overwintering, and the multi-year daily average temperatures, were used for the statistical analysis of the suitable sowing date for winter wheat. Second, the accumulated thermal time requirement for wheat leaf appearance method was used to determine the sowing date that satisfies the GDD requirement for strong seedlings before winter by counting backward from the date of stable passage below 0 ℃: daily average temperatures above 0 ℃ were accumulated back to the date on which the GDD required for the formation of strong wheat seedlings was reached, and a range of ±3 days around this date was taken as the theoretical suitable sowing date. Finally, combined with actual production practice, the suitable sowing date for winter wheat in each township of Qihe county was determined under the trend of climate warming.
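A minimal sketch of the bilinear interpolation step described above, mapping a regular reanalysis temperature grid to a township centre point; the grid layout and the example coordinates are illustrative assumptions.

```python
# Bilinear interpolation of a regular reanalysis temperature grid to a township
# centre point; the grid axes and the example coordinates are illustrative.
import numpy as np

def bilinear_temperature(temp, lats, lons, lat_p, lon_p):
    """temp: (nlat, nlon) daily mean temperatures; lats and lons are ascending 1-D axes."""
    i = np.clip(np.searchsorted(lats, lat_p) - 1, 0, len(lats) - 2)
    j = np.clip(np.searchsorted(lons, lon_p) - 1, 0, len(lons) - 2)
    ty = (lat_p - lats[i]) / (lats[i + 1] - lats[i])   # fractional position in the cell
    tx = (lon_p - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - ty) * (1 - tx) * temp[i, j] + (1 - ty) * tx * temp[i, j + 1]
            + ty * (1 - tx) * temp[i + 1, j] + ty * tx * temp[i + 1, j + 1])

# e.g. township_t = bilinear_temperature(t2m_grid, grid_lats, grid_lons, 36.78, 116.75)
```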
[Results and Discussions] The results showed that, from November 1997 to early December 2022, both winter and annual average temperatures in Qihe county trended upward, confirming a clear warming trend across the townships of Qihe county. Judging from the multi-year daily average temperatures, the temperature fluctuation in November was the largest of the year, with a maximum standard deviation of 2.61 ℃, suggesting a higher likelihood of extreme weather in November; corresponding disaster prevention and mitigation measures therefore need to be taken in advance so that the growth and development of wheat are not affected. Under extreme weather conditions, determining the sowing date by temperature or GDD alone is of limited value. In cold winter years in particular, considering GDD alone is one-sided, and the range of GDD required by winter wheat before overwintering should be widened according to temperature changes to ensure its normal growth and development. The suitable sowing date obtained by the temperature threshold method was October 4th to October 16th for semi-winter wheat and September 27th to October 4th for winter wheat. Taking into account the GDD required for the formation of strong seedlings before winter, the suitable sowing date was October 3rd to October 13th for winter wheat and October 15th to October 24th for semi-winter wheat, consistent with the suitable sowing dates determined by the accumulated thermal time requirement for wheat leaf appearance method. Considering the winter wheat varieties planted in Qihe county, the suitable sowing period for winter wheat in the county was October 3rd to October 16th, with the optimal sowing date from October 5th to October 13th. With the gradual warming of the climate, the suitable sowing date for wheat in the townships of Qihe county in 2022 was later than that in 2002. However, the sowing date for winter wheat is still influenced by factors such as soil moisture, topography, and sowing quality, so the suitable sowing date in any given year still needs to be adapted to local conditions and adjusted flexibly according to that year's circumstances. [Conclusions] The experimental results demonstrated the feasibility of the temperature threshold method and the accumulated thermal time requirement for wheat leaf appearance method for determining the suitable sowing date of winter wheat. The temperature trend can be used to identify cold or warm winters, and the sowing date can be adjusted in a timely manner to enhance wheat yield and reduce the impact of excessively high or low temperatures on winter wheat. The research results can not only provide a decision-making reference for winter wheat yield assessment in Qihe county, but also provide an important theoretical basis for the scientific arrangement of agricultural production.
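A minimal sketch of the backward GDD accumulation described in the Methods above: starting from the date of stable passage below 0 ℃, daily mean temperatures above 0 ℃ are accumulated backward until the requirement for strong seedlings is met, and ±3 days around that date gives the theoretical sowing window. The 570 ℃·d requirement used here is an illustrative placeholder, not a value from the study.

```python
# Backward accumulation of GDD (>0 °C) from the date of stable passage below 0 °C;
# the 570 °C·d requirement for strong seedlings is an illustrative placeholder.
import pandas as pd

def theoretical_sowing_window(daily_mean, stable_zero_date, gdd_required=570.0, half_window=3):
    """daily_mean: pandas Series of daily mean temperature indexed by consecutive dates."""
    gdd, date = 0.0, pd.Timestamp(stable_zero_date)
    while gdd < gdd_required and date > daily_mean.index.min():
        date -= pd.Timedelta(days=1)
        gdd += max(daily_mean.loc[date], 0.0)   # only temperatures above 0 °C count
    # the theoretical suitable sowing window is +/- 3 days around the date found
    return date - pd.Timedelta(days=half_window), date + pd.Timedelta(days=half_window)
```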
[Objective] The number, location, and crown spread of nursery stock are fundamental data for its scientific management. The traditional approach of conducting nursery stock inventories through on-site plant-by-plant surveys is labor-intensive and time-consuming. Low-cost, convenient unmanned aerial vehicles (UAVs) are beginning to be used for on-site collection of nursery stock data, with nursery stock statistics then obtained through techniques such as image processing. During data collection, the number of trees in a single image increases with the UAV's flight altitude. Although bounding-box annotation can capture more information about each tree, the annotation cost becomes enormous for large numbers of densely populated tree images. To tackle the challenges of tree adhesion and scale variation in UAV images of nursery stock, and to reduce annotation costs, an improved dense detection and counting model using point-labeled data as the supervisory signal was proposed to accurately obtain the location, size, and number of targets. [Method] To enhance the diversity of nursery stock samples, the publicly available spruce, Yosemite, and KCL-London tree datasets were selected to construct a dense nursery stock dataset. A total of 1 520 nursery stock images were acquired and divided into training and testing sets at a ratio of 7:3. To improve the model's adaptability to trees at different scales and under different lighting, data augmentation methods such as contrast adjustment and image resizing were applied to the training set. After augmentation, the training set contained 3 192 images and the testing set 456 images. Given the large number of trees in each image, labeling was performed by marking the center point of each tree to reduce annotation cost. The LSC-CNN model was selected as the base model because it can estimate the number, location, and size of trees through point-supervised training, thereby yielding richer information about the trees. LSC-CNN was then improved to address the missed detections and false positives observed during testing. First, to address missed detections caused by severe adhesion of densely packed trees, the last convolutional layer of the feature extraction network was replaced with a dilated convolution, which enlarges the receptive field of the kernel while preserving detailed tree features, so that the model captures a broader range of contextual information and gains a better understanding of the overall scene. Second, the convolutional block attention module (CBAM) was introduced at the beginning of each scale branch, allowing the model to focus on key tree features at different scales and spatial locations and improving its sensitivity to multi-scale information. Finally, the model was trained with a label-smoothed cross-entropy loss and a grid winner-takes-all strategy, emphasizing the regions with the highest losses to strengthen tree feature learning.
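A minimal PyTorch sketch of the two architectural changes described above: a dilated 3×3 convolution that enlarges the receptive field without shrinking the feature map, and a CBAM block placed at the start of a scale branch. Channel counts, the dilation rate, and the label-smoothing value are illustrative assumptions rather than the exact LSC-CNN configuration.

```python
# Minimal PyTorch sketch of the modifications: a dilated 3x3 convolution that keeps
# the feature-map size (padding = dilation) while enlarging the receptive field, and
# a CBAM block at the start of a scale branch. Channel counts, dilation rate and the
# label-smoothing value are illustrative, not the exact LSC-CNN settings.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        chn = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * chn.view(b, c, 1, 1)                               # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))                  # spatial attention

dilated_conv = nn.Conv2d(512, 512, kernel_size=3, padding=2, dilation=2)
scale_branch = nn.Sequential(CBAM(512), dilated_conv, nn.ReLU())
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)               # label-smoothed CE

x = torch.randn(1, 512, 28, 28)
print(scale_branch(x).shape)   # torch.Size([1, 512, 28, 28])
```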
[Results and Discussions] Mean counting accuracy (MCA), mean absolute error (MAE), and root mean square error (RMSE) were adopted as the evaluation metrics. Ablation studies and comparative experiments were designed to demonstrate the performance of the improved LSC-CNN model. The ablation experiment showed that the improved model effectively resolved the missed detections and false positives of the original LSC-CNN caused by the density and large scale variations in the nursery stock dataset. IntegrateNet, PSGCNet, CANet, CSRNet, CLTR, and LSC-CNN were chosen as comparison models. The improved LSC-CNN model achieved an MCA, MAE, and RMSE of 91.23%, 14.24, and 22.22, respectively; compared with IntegrateNet, PSGCNet, CANet, CSRNet, CLTR, and LSC-CNN, MCA increased by 6.67%, 2.33%, 6.81%, 5.31%, 2.09%, and 2.34%, MAE decreased by 21.19, 11.54, 18.92, 13.28, 11.30, and 10.26, and RMSE decreased by 28.22, 28.63, 26.63, 14.18, 24.38, and 12.15, respectively. These results indicate that the improved LSC-CNN model achieves high counting accuracy and strong generalization ability. [Conclusions] The improved LSC-CNN model integrates the advantage of point-supervised learning from density estimation methods with the generation of target bounding boxes from detection methods. These improvements demonstrate the enhanced accuracy, precision, and reliability of the improved LSC-CNN model in detecting and counting trees. This study can serve as a practical reference for the statistical survey of other types of nursery stock.
[Objective] The picking of famous and high-quality tea is a crucial link in the tea industry, and identifying and locating the tender buds for picking is an important component of the modern tea-picking robot. Traditional neural network methods suffer from large model size, long training times, and difficulty in dealing with complex scenes. In this study, based on the actual scenario of the Xiqing Tea Garden in Hunan Province, a novel deep learning algorithm was proposed to solve the precise segmentation challenge of famous and high-quality tea picking points. [Methods] The primary technical innovation resided in the combination of a lightweight network architecture, MobileNetV2, with an efficient channel attention network (ECANet) attention mechanism, alongside optimization modules including atrous spatial pyramid pooling (ASPP). Initially, MobileNetV2 was employed as the feature extractor, substituting traditional convolution operations with depthwise separable convolutions. This led to a notable reduction in the model's parameter count and expedited model training. Subsequently, the fusion of the ECANet and ASPP modules constituted the ECA_ASPP module, intended to bolster the model's capacity for fusing multi-scale features, which is especially pertinent to the intricate recognition of tea shoots. This fusion strategy enabled the model to capture more nuanced features of delicate shoots, thereby improving segmentation accuracy. In the specific implementation, image inputs were fed through the improved network, whereupon MobileNetV2 was used to extract both shallow and deep features. Deep features were then fused via the ECA_ASPP module for multi-scale feature integration, reinforcing the model's resilience to intricate backgrounds and variations in tea shoot morphology. Shallow features, in contrast, proceeded directly to the decoding stage, undergoing channel reduction before being integrated with the upsampled deep features. This divide-and-conquer strategy effectively harnessed features at differing levels of abstraction and further heightened recognition performance through careful feature fusion. Ultimately, through a sequence of convolution and upsampling operations, a prediction map with the same resolution as the original image was generated, enabling precise demarcation of tea shoot picking points.
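A minimal PyTorch sketch of two of the building blocks referred to above: an ECA-style channel attention block and a depthwise separable convolution of the kind MobileNetV2 substitutes for standard convolution. Channel counts and the ECA kernel size are illustrative assumptions, not the exact configuration of the improved DeepLabV3+.

```python
# ECA-style channel attention and a depthwise separable convolution block;
# channel counts and the ECA kernel size are illustrative assumptions.
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Channel attention via a 1-D convolution over globally pooled channel descriptors."""
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):
        y = x.mean(dim=(2, 3))                       # (B, C) global average pooling
        y = self.conv(y.unsqueeze(1)).squeeze(1)     # 1-D conv across the channel axis
        return x * torch.sigmoid(y)[:, :, None, None]

def depthwise_separable(in_ch, out_ch, stride=1):
    """Depthwise 3x3 followed by pointwise 1x1 convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch), nn.ReLU6(),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch), nn.ReLU6())

x = torch.randn(1, 256, 32, 32)
block = nn.Sequential(ECA(), depthwise_separable(256, 256))
print(block(x).shape)   # torch.Size([1, 256, 32, 32])
```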
[Results and Discussions] The experimental outcomes indicated that the enhanced DeepLabV3+ model achieved an average intersection over union (IoU) of 93.71% and an average pixel accuracy of 97.25% on the tea shoot dataset. Compared with the original Xception-based model, the parameter count dropped substantially from 54.714 million to a mere 5.818 million, effectively accomplishing a significant lightweight redesign of the model. Further comparisons with other prevalent semantic segmentation networks revealed that the improved model exhibited remarkable advantages in pivotal metrics such as parameter count, training duration, and average IoU, highlighting its efficacy and precision in tea shoot recognition. This considerable decrease in parameter count not only enables a more resource-economical deployment but also shortens training, rendering the model highly suitable for real-time implementation in tea garden settings. The elevated mean IoU and pixel accuracy attest to the model's capacity for precise demarcation and identification of tea shoots even in intricate and varied datasets, demonstrating resilience and adaptability in practical contexts. [Conclusions] This study implements an efficient and accurate tea shoot recognition method through targeted model improvements and optimizations, furnishing crucial technical support for the practical application of intelligent tea-picking robots. The introduction of the lightweight DeepLabV3+ not only substantially enhances recognition speed and segmentation accuracy, but also reduces hardware requirements, thereby promoting the practical application of intelligent picking technology in the tea industry.
[Objective] Traditional object detection algorithms applied in agriculture, such as those used for crop growth monitoring and harvesting, often suffer from insufficient accuracy. This is particularly problematic for small crops like mushrooms, whose recognition and detection are more challenging. The introduction of small object detection technology promises to address these issues, potentially enhancing the precision, efficiency, and economic benefit of agricultural production management. However, achieving high accuracy in small object detection remains a significant challenge, especially when dealing with varying image sizes and target scales. Although the YOLO series models excel in speed and large object detection, they still fall short in small object detection. To address the issue of maintaining high accuracy amid changes in image size and target scale, a novel detection model, Multi-Strategy Handling YOLOv8 (MSH-YOLOv8), was proposed. [Methods] The proposed MSH-YOLOv8 model builds upon YOLOv8 with several key enhancements aimed at improving sensitivity to small-scale targets and overall detection performance. First, an additional detection head was added to increase the model's sensitivity to small objects. To address computational redundancy and improve feature extraction, the Swin Transformer detection structure was introduced into the input module of the head network, creating what was termed the "Swin Head (SH)". The model also integrated the C2f_Deformable Convolution v4 (C2f_DCNv4) structure, which incorporates deformable convolutions, and the Swin Transformer encoder structure, termed "Swinstage", to reconstruct the YOLOv8 backbone. This optimization enhanced feature propagation and extraction, increasing the network's ability to handle targets with significant scale variations. Additionally, the normalization-based attention module (NAM) was employed to improve performance without compromising detection speed or computational complexity. To further enhance training efficacy and convergence speed, the original CIoU loss was replaced with the wise intersection over union (WIoU) loss. Experiments were conducted using mushrooms as the research subject on the open Fungi dataset. Approximately 200 images with resolutions around 600×800 were selected as the main research material, along with 50 images each at resolutions around 200×400 and 1 000×1 200 to ensure representative and generalizable image sizes. During data augmentation, a generative adversarial network (GAN) was used for resolution reconstruction of low-resolution images, preserving semantic quality as much as possible. In post-processing, dynamic resolution training, multi-scale testing, soft non-maximum suppression (Soft-NMS), and weighted boxes fusion (WBF) were applied to enhance small object detection under varying scales.
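A minimal sketch of the Soft-NMS step named above, using the common Gaussian score-decay variant; the sigma value, score threshold, and box format are illustrative assumptions.

```python
# Minimal Gaussian Soft-NMS sketch; sigma, the score threshold and the
# [x1, y1, x2, y2] box format are illustrative assumptions.
import numpy as np

def iou(box, boxes):
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Decay the scores of overlapping boxes instead of deleting them outright."""
    boxes, scores, keep = boxes.copy(), scores.copy(), []
    while len(boxes) > 0:
        i = int(np.argmax(scores))
        keep.append((boxes[i], scores[i]))
        scores = scores * np.exp(-(iou(boxes[i], boxes) ** 2) / sigma)  # Gaussian penalty
        mask = np.arange(len(boxes)) != i
        boxes, scores = boxes[mask], scores[mask]
        survivors = scores > score_thresh
        boxes, scores = boxes[survivors], scores[survivors]
    return keep
```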
[Results and Discussions] The improved MSH-YOLOv8 achieved an average precision at an intersection-over-union threshold of 50% (AP50) of 98.49% and an AP@50-95 of 75.29%, with the small object detection metric APs reaching 39.73%. Compared with mainstream models such as YOLOv8, these metrics improved by 2.34%, 4.06%, and 8.55%, respectively; compared with the advanced TPH-YOLOv5 model, the improvements were 2.14%, 2.76%, and 6.89%, respectively. The ensemble model, MSH-YOLOv8-ensemble, showed even larger gains, with AP50 and APs reaching 99.14% and 40.59%, increases of 4.06% and 8.55% over YOLOv8. These results indicate the robustness and enhanced performance of MSH-YOLOv8, particularly in detecting small objects under varying conditions. Further application of this methodology to the Alibaba Cloud Tianchi "Tomato Detection" and "Apple Detection" datasets yielded the MSH-YOLOv8-t and MSH-YOLOv8-a models (collectively referred to as MSH-YOLOv8). Visual comparison of the detection results demonstrated that MSH-YOLOv8 significantly improved the recognition of dense and blurry small-scale tomatoes and apples, indicating strong cross-dataset generalization and effective recognition of small-scale targets. Beyond the quantitative improvements, qualitative assessment showed that MSH-YOLOv8 could handle complex scenarios involving occlusion, varying lighting conditions, and different crop growth stages, demonstrating the model's practical applicability in real-world agricultural settings where such challenges are common. [Conclusions] The MSH-YOLOv8 improvement method proposed in this study effectively enhances the detection accuracy of small mushroom targets under varying image sizes and target scales. The approach leverages multiple strategies to optimize both the architecture and the training process, resulting in a robust model capable of high-precision small object detection. Its application to other datasets, such as those for tomato and apple detection, further underscores its generalizability and its potential for broader use in agricultural monitoring and management.
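As an illustration of the box-level fusion used for the ensemble results above, the sketch below merges predictions from two models (or test scales) with weighted boxes fusion via the open-source ensemble-boxes package; the weights, thresholds, and normalized example boxes are illustrative assumptions.

```python
# Sketch of fusing predictions from multiple models or test scales with weighted
# boxes fusion via the ensemble-boxes package (pip install ensemble-boxes);
# weights, thresholds and the normalised example boxes are illustrative assumptions.
from ensemble_boxes import weighted_boxes_fusion

# One list entry per model / test scale; coordinates are normalised to [0, 1].
boxes_list = [[[0.10, 0.10, 0.40, 0.40], [0.55, 0.55, 0.90, 0.90]],
              [[0.12, 0.09, 0.41, 0.42], [0.56, 0.54, 0.88, 0.91]]]
scores_list = [[0.90, 0.75], [0.85, 0.80]]
labels_list = [[0, 0], [0, 0]]

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[1, 1], iou_thr=0.55, skip_box_thr=0.01)
print(boxes, scores, labels)
```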