2025 Volume 7 Issue 3 Published: 30 May 2025
  

  • Overview Article
    ZHANG Zhiyong, CAO Shanshan, KONG Fantao, LIU Jifang, SUN Wei

    [Significance] Estrus monitoring and identification in cows is a crucial aspect of breeding management in beef and dairy cattle farming. Innovation in precise sensing and intelligent identification methods and technologies for estrus is essential not only for scientific breeding, precise management, and smart breeding at the population level, but also plays a key supporting role in health management, productivity enhancement, and animal welfare improvement at the individual level. This review aims to provide a reference for scientific management and the study of modern production technologies in the beef and dairy cattle industry, as well as theoretical methodologies for the research and development of key technologies in precision livestock farming. [Progress] After describing the typical characteristics of normal and abnormal estrus in cows, this paper systematically categorizes and summarizes recent research progress, development trends, and methodological approaches in estrus monitoring and identification technologies, focusing on the monitoring and diagnosis of key physiological signs and behavioral characteristics during the estrus period. First, the paper outlines digital monitoring technologies for three critical physiological parameters: body temperature, rumination, and activity level, and their applications in cow estrus monitoring and identification. It analyzes the intrinsic causes of performance bottlenecks in body-temperature-based estrus monitoring models, compares the reliability issues faced by activity-based monitoring, and addresses the difficulty of balancing model generalization and robustness. Second, the paper examines estrus sensing and identification technologies based on three typical behaviors: feeding, vocalization, and sexual desire. It highlights recent applications of new artificial intelligence technologies, such as computer vision and deep learning, in estrus monitoring and points out their critical role in improving the accuracy and timeliness of monitoring. Finally, the paper focuses on multi-factor fusion technologies for estrus perception and identification, summarizing how researchers combine various physiological and behavioral parameters using diverse monitoring devices and algorithms to improve monitoring accuracy. It emphasizes that multi-factor fusion methods can raise detection rates and the precision of identification results, making them more reliable and applicable than single-factor methods, and underlines the importance and potential of multi-modal information fusion for enhancing monitoring accuracy and adaptability. Current shortcomings of multi-factor fusion methods are also analyzed, including the potential impact of parameter acquisition methods on animal welfare, the limited variety of model algorithms used to represent fused multi-factor information, and inadequate research on multi-factor feature extraction models and estrus identification decision algorithms. [Conclusions and Prospects] From the perspectives of system practicality, stability, environmental adaptability, cost-effectiveness, and ease of operation, several key issues that must be addressed in further research on precise sensing and intelligent identification of cow estrus are discussed in the context of high-quality development of digital livestock farming. These include improving monitoring accuracy under weak estrus conditions, overcoming the technical challenges of audio extraction and voiceprint construction amid complex background noise, enhancing the adaptability of computer vision monitoring technologies, and establishing comprehensive monitoring and identification models through multi-modal information fusion. The paper discusses the numerous challenges these issues pose to current research and explains that future work needs to focus not only on improving the timeliness and accuracy of monitoring technologies but also on balancing system cost-effectiveness and ease of use, so as to move smart farming from concept to practical implementation.
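
    The score-level, multi-factor fusion idea this review surveys can be illustrated with a small sketch: per-signal anomaly scores for body temperature, activity, and rumination are combined as a weighted sum against each cow's rolling baseline. This is a minimal illustration only; the window length, weights, sign conventions, and alert threshold below are assumptions, not values taken from the reviewed studies.

    ```python
    import numpy as np

    def zscore(series, window=24):
        """Z-score of the latest reading against the preceding rolling baseline."""
        baseline = series[-window - 1:-1]
        return (series[-1] - baseline.mean()) / (baseline.std() + 1e-8)

    def estrus_score(temp, activity, rumination, weights=(0.3, 0.5, 0.2)):
        """Score-level fusion: weighted sum of per-signal anomaly scores.
        Estrus typically raises body temperature and activity and lowers
        rumination, so the rumination z-score enters with a negative sign."""
        z = np.array([zscore(temp), zscore(activity), -zscore(rumination)])
        return float(np.dot(weights, z))

    # Usage on synthetic hourly readings: the final sample mimics an estrus event.
    rng = np.random.default_rng(0)
    temp = np.append(38.5 + 0.1 * rng.standard_normal(48), 39.1)      # deg C
    activity = np.append(120 + 10 * rng.standard_normal(48), 185.0)   # steps/h
    rumination = np.append(35 + 3 * rng.standard_normal(48), 24.0)    # min/h
    score = estrus_score(temp, activity, rumination)
    print(f"fused estrus score = {score:.2f}, alert = {score > 2.0}")
    ```

    Fusing at the score level, rather than on raw signals, mirrors why the review finds multi-factor methods more robust than single-factor ones: a weak deviation in any one signal can still trigger an alert when the other signals move in the expected directions.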

  • Information Processing and Decision Making
    HU Lingyan, GUO Ruiya, GUO Zhanjun, XU Guohui, GAI Rongli, WANG Zumin, ZHANG Yumeng, JU Bowen, NIE Xiaoyu

    [Objective] Within the field of plant phenotyping feature extraction, the accurate delineation of small-target boundaries and the adequate recovery of spatial details during upsampling have long been recognized as significant obstacles. To address these limitations, an improved U-Net architecture was proposed for greenhouse sweet cherry image segmentation. [Methods] Taking temporal phenotypic images of sweet cherries as the research subject, a U-Net segmentation model was employed to delineate the specific organ regions of the plant. The architecture integrates a self-supervised contrastive learning method for plant time-series images with priori distance embedding (PDE) pre-training and a graph convolutional network (GCN) skip connection. To accelerate model convergence, pre-trained weights derived from the PDE plant temporal image contrastive learning method were transferred to the segmentation network. Concurrently, a GCN local feature fusion layer was incorporated as a skip connection to optimize feature fusion, providing robust technical support for the image segmentation task. PDE pre-training required the construction of image pairs corresponding to different phenological periods; a classification distance loss function incorporating prior knowledge was employed to train an Encoder with adjusted parameters. The pre-trained weights were then effectively transferred and applied to the semantic segmentation task, enabling the network to accurately learn the semantic information and detailed textures of the various sweet cherry organs. The Encoder module performs multi-scale feature extraction through convolutional and pooling layers, hierarchically processing the semantic information embedded in the input image to construct representations that progress from low-level texture features to high-level semantic features. This allows consistent extraction of semantic features from images across scales and abstraction of the underlying information, enhancing feature discriminability and improving the modeling of complex targets. The Decoder module conducts upsampling operations, integrating features from diverse scales and restoring the original image resolution, which effectively reconstructs spatial details and improves the efficiency of model optimization. At the interface between the Encoder and Decoder modules, a GCN layer designed for local feature fusion was strategically integrated as a skip connection, enabling the network to better capture and learn local features in multi-scale images. [Results and Discussions] The model was rigorously assessed using a set of evaluation metrics including accuracy, precision, recall, and F1-Score. The improved U-Net model achieved superior performance in semantic segmentation of sweet cherry images, with an accuracy of up to 0.955 0. Ablation experiments further showed that the proposed method attained a precision of 0.932 8, a recall of 0.927 4, and an F1-Score of 0.912 8. The accuracy of the improved U-Net is higher by 0.069 9, 0.028 8, and 0.042 than the original U-Net, the U-Net with only the PDE plant temporal image contrastive learning method, and the U-Net with only GCN skip connections, respectively; the F1-Score is higher by 0.078 3, 0.033 8, and 0.043 8, respectively. In comparative experiments against the DeepLabV3, Swin Transformer, and Segment Anything Model segmentation methods, the proposed model surpassed them by 0.022 2, 0.027 6, and 0.042 2 in accuracy; 0.063 7, 0.147 1, and 0.107 7 in precision; 0.035 2, 0.065 4, and 0.050 8 in recall; and 0.076 8, 0.127 5, and 0.103 4 in F1-Score. [Conclusions] The PDE plant temporal image contrastive learning method and GCN techniques were combined to develop an advanced U-Net architecture specifically designed and optimized for sweet cherry plant phenotyping. The results demonstrate that the proposed method effectively addresses the boundary blurring and detail loss associated with small targets in complex orchard scenarios. It enables precise segmentation of the primary organs and background regions in sweet cherry images, thereby enhancing the segmentation accuracy of the original model. This improvement provides a solid foundation for subsequent crop modeling research and holds significant practical importance for the advancement of agricultural intelligence.
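
    As a rough illustration of a GCN used as a U-Net skip connection, the sketch below treats each spatial location of an encoder feature map as a graph node connected to its 4-neighbours and applies one propagation step before the features are handed to the decoder. The abstract does not specify the graph construction, so the grid adjacency, the single propagation layer, and the `GCNSkip` name are assumptions, not the authors' exact design.

    ```python
    import torch
    import torch.nn as nn

    class GCNSkip(nn.Module):
        """Sketch of a GCN layer acting as a U-Net skip connection."""
        def __init__(self, channels):
            super().__init__()
            self.proj = nn.Linear(channels, channels)
            self.act = nn.ReLU(inplace=True)

        @staticmethod
        def grid_adjacency(h, w, device):
            # Row-normalised adjacency (with self-loops) of an h x w 4-neighbour grid.
            idx = torch.arange(h * w, device=device).view(h, w)
            src = torch.cat([idx.flatten(),
                             idx[:, :-1].flatten(), idx[:, 1:].flatten(),
                             idx[:-1, :].flatten(), idx[1:, :].flatten()])
            dst = torch.cat([idx.flatten(),
                             idx[:, 1:].flatten(), idx[:, :-1].flatten(),
                             idx[1:, :].flatten(), idx[:-1, :].flatten()])
            adj = torch.zeros(h * w, h * w, device=device)
            adj[src, dst] = 1.0
            return adj / adj.sum(dim=1, keepdim=True)

        def forward(self, feat):                        # feat: (B, C, H, W)
            b, c, h, w = feat.shape
            x = feat.flatten(2).transpose(1, 2)         # (B, H*W, C) node features
            adj = self.grid_adjacency(h, w, feat.device)
            x = self.act(self.proj(adj @ x))            # one GCN propagation step
            return x.transpose(1, 2).view(b, c, h, w)   # back to feature-map layout

    # Usage: fuse a 64-channel encoder skip tensor before the decoder concatenates it.
    skip = GCNSkip(64)
    out = skip(torch.randn(2, 64, 16, 16))              # -> torch.Size([2, 64, 16, 16])
    ```

    The design intuition matches the abstract's claim: message passing over neighbouring locations lets the skip features carry aggregated local context, rather than raw per-pixel encoder activations, across to the decoder.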

  • Information Processing and Decision Making
    CHANG Jian, WANG Bingbing, YIN Long, LI Yanqing, LI Zhaoxin, LI Zhuang

    [Objective] Bee pollination is pivotal to plant reproduction and crop yield, making its identification and monitoring highly significant for agricultural production. However, practical detection of bee pollination poses various challenges, including the small size of bee targets, their low pixel occupancy in images, and the complexity of floral backgrounds. To scientifically evaluate pollination efficiency, accurately detect the pollination status of flowers, and provide reliable data to guide flower and fruit thinning in orchards, ultimately supporting the scientific management of bee colonies and enhancing agricultural efficiency, a lightweight recognition model that can effectively overcome these obstacles was proposed, thereby advancing the practical application of bee pollination detection technology in smart agriculture. [Methods] A specialized bee pollination dataset was constructed comprising three flower types: strawberry, blueberry, and chrysanthemum. High-resolution cameras were used to record videos of the pollination process, which were then subjected to frame sampling to extract representative images. These initial images underwent manual screening to ensure quality and relevance. To address challenges such as limited data diversity and class imbalance, a comprehensive data augmentation strategy was employed. Techniques including rotation, flipping, brightness adjustment, and mosaic augmentation were applied, significantly expanding the dataset's size and variability. The enhanced dataset was subsequently split into training and validation sets at an 8:2 ratio to ensure robust model evaluation. The base detection model was built upon an improved YOLOv10n architecture. The conventional C2f module in the backbone was replaced with a novel cross stage partial network_multi-scale edge information enhance (CSP_MSEE) module, which synergizes the cross-stage partial connections of cross stage partial network (CSPNet) with a multi-scale edge enhancement strategy. This design greatly improved feature extraction, particularly in scenarios involving fine-grained structures and small-scale targets like bees. For the neck, a hybrid-scale feature pyramid network (HS-FPN) was implemented, incorporating a channel attention (CA) mechanism and a dimension matching (DM) module to refine and align multi-scale features. These features were further integrated through a selective feature fusion (SFF) module, enabling the effective combination of low-level texture details and high-level semantic representations. The detection head was replaced with the lightweight shared detail enhanced convolutional detection head (LSDECD), an enhanced version of the lightweight shared convolutional detection head (LSCD). It incorporated detail enhancement convolution (DEConv) from DEA-Net to improve the extraction of fine-grained bee features. Additionally, the standard convolution_groupnorm (Conv_GN) layers were replaced with detail enhancement convolution_groupnorm (DEConv_GN) layers, significantly reducing model parameters and enhancing the model's sensitivity to subtle bee behaviors. This lightweight yet accurate design makes the model highly suitable for real-time deployment on resource-constrained edge devices in agricultural environments. [Results and Discussions] Experimental results on the three bee pollination datasets (strawberry, blueberry, and chrysanthemum) demonstrated the effectiveness of the proposed improvements over the baseline YOLOv10n model. The enhanced model achieved significant reductions in computational overhead, lowering computational complexity by 3.1 GFLOPs and the parameter count by 1.3 M; the improved model requires 5.1 GFLOPs and 1.3 M parameters. These reductions contribute to improved efficiency, making the model more suitable for deployment on edge devices with limited processing capabilities, such as mobile platforms or embedded systems used in agricultural monitoring. In terms of detection performance, the improved model showed consistent gains across all three datasets. Specifically, the recall rates reached 82.6% for strawberry flowers, 84.0% for blueberry flowers, and 84.8% for chrysanthemum flowers. The corresponding mAP50 (mean average precision at an IoU threshold of 0.5) scores were 89.3%, 89.5%, and 88.0%, respectively. Compared to the original YOLOv10n model, these results marked respective improvements of 2.1% in recall and 1.7% in mAP50 on the strawberry dataset, 2.0% and 2.6% on the blueberry dataset, and 2.1% and 2.2% on the chrysanthemum dataset. [Conclusions] The proposed YOLOv10n-CHL lightweight bee pollination detection model, through coordinated enhancements at multiple architectural levels, achieved notable improvements in both detection accuracy and computational efficiency across multiple bee pollination datasets. The model significantly improved detection performance for small objects while substantially reducing computational overhead, facilitating its deployment on edge computing platforms such as drones and embedded systems. This research could provide a solid technical foundation for the precise monitoring of bee pollination behavior and the advancement of smart agriculture. Nevertheless, the model's adaptability to extreme lighting and complex weather conditions remains an area for improvement. Future work will focus on enhancing the model's robustness in these scenarios to support its broader application in real-world agricultural environments.
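
    To illustrate the flavour of the Conv_GN-to-DEConv_GN replacement described above, the sketch below pairs a standard 3x3 convolution with a fixed Laplacian edge branch, followed by GroupNorm. DEA-Net's actual DEConv fuses several learned difference convolutions, so the `DEConvGN` class, the fixed Laplacian kernel, and the SiLU activation here are simplified illustrative assumptions, not the paper's implementation.

    ```python
    import math
    import torch
    import torch.nn as nn

    class DEConvGN(nn.Module):
        """Sketch of a detail-enhanced Conv + GroupNorm block."""
        def __init__(self, c_in, c_out, groups=16):
            super().__init__()
            self.conv = nn.Conv2d(c_in, c_out, 3, padding=1, bias=False)
            # Fixed depthwise Laplacian kernel as a simple, non-learned edge extractor.
            lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
            self.edge = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False)
            self.edge.weight.data.copy_(lap.expand(c_in, 1, 3, 3))
            self.edge.weight.requires_grad_(False)
            self.mix = nn.Conv2d(c_in, c_out, 1, bias=False)  # project the edge branch
            self.gn = nn.GroupNorm(math.gcd(groups, c_out), c_out)
            self.act = nn.SiLU(inplace=True)

        def forward(self, x):
            # Sum the vanilla and edge-enhanced branches, then normalise and activate.
            return self.act(self.gn(self.conv(x) + self.mix(self.edge(x))))

    # Usage: a drop-in block shape-compatible with a standard Conv-GN layer.
    blk = DEConvGN(32, 64)
    y = blk(torch.randn(1, 32, 80, 80))   # -> torch.Size([1, 64, 80, 80])
    ```

    The intent mirrors the abstract's rationale: an explicit edge-difference branch amplifies the fine, high-frequency cues that small bee targets produce, while GroupNorm keeps the head stable at the small batch sizes typical of edge deployment.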