[Significance] Digital agriculture is the core driving force of modern agricultural transformation. It fundamentally aims to achieve full-process digital mapping and intelligent management of production through the deep integration of advanced information technologies such as the Internet of Things, big data, artificial intelligence (AI), and remote sensing, with earth observation (EO) technology serving as the essential data engine that provides indispensable spatial information support for this systemic shift. However, the current landscape of digital agriculture development remains unbalanced, exhibiting a tendency to be "heavy on transactions and light on production": the core production links suffer from low digitalization penetration rates. Furthermore, the profound knowledge embedded within the vast corpus of EO data has yet to be fully extracted and interpreted; many established algorithms demonstrate insufficient robustness and universality when confronted with the complexity and diversity of global cropping systems, which limits their practical efficacy. Crucially, an over-reliance on technology to optimize production efficiency alone, without ecological guidance, can induce secondary environmental risks, such as exacerbating regional groundwater depletion or reducing biodiversity through agricultural landscape simplification. This necessitates an approach that promotes the deep coupling of EO technology with agronomic principles and local ecological practices to construct a resilient smart agricultural system that balances productivity, resource efficiency, and ecological integrity. [Progress] The current research frontiers of EO-driven digital agriculture primarily converge on three critical domains: intelligent crop condition monitoring, digital twin farming systems, and the enhancement of agricultural system resilience.
Intelligent monitoring utilizes the fusion of high-resolution remote sensing imagery and machine learning frameworks to enable large-scale, comprehensive crop mapping and the fine-grained identification of crop types at the field scale, with next-generation yield prediction models integrating advanced deep learning techniques to significantly improve accuracy, while remote sensing is also effectively employed for agricultural disaster monitoring. The digital twin farming system represents an advanced stage of precision agriculture, centered on digitally modeling all agricultural production elements to construct a highly consistent virtual replica of the physical environment, operating through a real-time closed-loop mechanism of perception, simulation and analysis, and decision-making support to guide optimal interventions; successful applications include intelligent water resource scheduling in Chinese irrigation districts and the use of AI vision algorithms to manage complex biological processes like crab farming, although the field must overcome the issue of "pseudo-twins" that focus on mere visualization rather than driving concrete operational decisions. Research on agricultural system resilience is supported by digital agriculture, which provides crucial spatial data on global crop yields, cultivated land distribution, and practices like terracing. To illustrate the practical efficacy of these technologies, this paper analyzes two representative application cases. First, the CropWatch system represents a paradigm shift in agricultural monitoring by constructing a "Cloud-Edge" collaborative ecosystem. It integrates machine learning with a "Pre-training, Prompting, and Fine-tuning" large language model (LLM) framework to automate remote sensing-based crop monitoring and report generation and to enhance decision-support intelligence.
Through open application programming interfaces (APIs) and multi-scale capabilities, CropWatch provides cross-scale information and decision support from macro-level policy support to micro-level farm management, serving as a global public good that bridges the digital divide in developing nations. Second, in the domain of agricultural water management, the ETWatch technical system demonstrates a robust solution for the precise governance of water resources. By achieving high-resolution evapotranspiration (ET) monitoring from basin to field scales, it enables the accurate assessment of water productivity and the optimization of irrigation schedules. Crucially, this technology is successfully embedded into institutional mechanisms, such as water rights allocation and tiered pricing based on actual consumption, thereby realizing a transformation from empirical water use to data-driven, precise regulation. [Conclusions and Prospects] In sum, digital (smart) agriculture is rapidly transcending its role as a mere extension of agricultural informatization to become the "new-quality productivity" driving high-quality agricultural development, achieving this by fundamentally restructuring production factors, enhancing resource efficiency, strengthening risk response capabilities, and promoting value chain upgrading, thereby offering critical momentum for constructing a more efficient, greener, and sustainable modern agricultural system. Given China's pronounced global advantages in the digital economy, information technology, remote sensing, and intelligent equipment, the nation is well-positioned to integrate these strengths to construct comprehensive, full-chain smart agricultural solutions whose mature systemic models and business paradigms can ultimately form a "China Card" in the global agricultural revolution, contributing Chinese wisdom and solutions towards the realization of global food security and the zero-hunger goal.
[Objective] Lodging is a major agronomic constraint that adversely affects both yield and quality in field crops, with flax (Linum usitatissimum L.) being especially vulnerable due to its slender stems and susceptibility to wind and rainfall. Precise delineation of lodged areas from field imagery remains a significant challenge owing to the complex and heterogeneous morphology of lodging patterns, irregular and blurred boundaries, and substantial background interference from upright plants, weeds, and soil textures. These factors necessitate the development of a segmentation framework that combines high precision and strong boundary adherence with computational efficiency, enabling deployment on resource-constrained agricultural monitoring platforms. In response to this need, a lightweight, accurate lodging segmentation approach based on an improved YOLOv11n-seg architecture was proposed to enhance fine-grained feature sensitivity, multi-scale representation capability, and boundary precision, while markedly reducing parameter count, giga floating-point operations (GFLOPs), and model size. [Methods] The proposed architecture integrated targeted modifications across the backbone, neck, and output stages. In the backbone, standard C3k2 modules were replaced with C3k2_SDW blocks, which combined a StarBlock structure with depthwise separable convolutions to reduce redundancy and computation without sacrificing spatial and contextual representational capacity. To counteract potential reductions in channel discrimination resulting from light-weighting, a multi-scale efficient channel attention (MS-ECA) mechanism was embedded within selected backbone layers, yielding C3k2_SDW_MS-ECA modules. These modules incorporated parallel convolution branches with varying kernel sizes to capture channel-wise dependencies across multiple receptive fields, thereby adaptively recalibrating lodging-related features with minimal computational overhead.
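As a rough illustration of the MS-ECA idea, the sketch below (plain Python; the function names are hypothetical, and uniform kernels stand in for what would be learned 1D convolution weights) pools each channel map to a scalar descriptor, runs parallel 1D convolutions with different kernel sizes over the descriptor vector, and rescales each channel by the averaged sigmoid response:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def conv1d_same(seq, kernel):
    # "Same"-padded 1D convolution over the channel descriptor vector
    k, pad = len(kernel), len(kernel) // 2
    padded = [0.0] * pad + list(seq) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k)) for i in range(len(seq))]

def global_avg_pool(fmap):
    return sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))

def ms_eca(channels, branch_kernels=(3, 5)):
    """Recalibrate per-channel feature maps with multi-scale channel attention."""
    desc = [global_avg_pool(c) for c in channels]      # one descriptor per channel
    weights = [0.0] * len(channels)
    for k in branch_kernels:                           # parallel branches, varying kernels
        kern = [1.0 / k] * k                           # uniform stand-in for learned weights
        for i, v in enumerate(conv1d_same(desc, kern)):
            weights[i] += sigmoid(v) / len(branch_kernels)
    return [[[w * v for v in row] for row in cmap]
            for w, cmap in zip(weights, channels)]
```

The multiple kernel sizes give the channel-interaction model several receptive fields at negligible cost, which is what allows the attention to stay lightweight while compensating for the discrimination lost to depthwise separable convolutions.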
In the neck, a bidirectional feature pyramid network (BiFPN) was introduced to facilitate efficient bidirectional information exchange between scales. By assigning normalized, trainable fusion weights, the BiFPN adaptively balanced contributions from low- and high-level feature maps, while a multi-stage semantic fusion strategy further enriched the integration of spatial details and contextual semantics, thereby improving the detection of small and fragmented lodged patches. At the output stage, a boundary refinement procedure was applied to the predicted masks, improving contour sharpness, enhancing boundary compactness, and mitigating false detections in complex visual environments. The experimental dataset comprised unmanned aerial vehicle (UAV) RGB imagery at a resolution of 4 032×2 268 pixels, acquired from flax fields in Dingxi, Gansu province. Lodged regions were manually annotated with polygonal masks. To increase robustness against variability in illumination, background complexity, and lodging morphology, data augmentation techniques, including random rotation, brightness and contrast adjustment, and blurring, were employed, expanding the dataset to 3 852 images. The dataset was divided into training, validation, and testing subsets in a 75%, 15%, and 10% split. Model training was conducted with 640×640 pixel inputs for 300 epochs using stochastic gradient descent (initial learning rate 0.01, momentum 0.937, weight decay 0.000 5) in PyTorch 2.0.0. Evaluation involved comparison with YOLACT, YOLOv7-seg, YOLOv8n-seg, and the original YOLOv11n-seg using precision (P), recall (R), mAP@0.5, mAP@0.5:0.95, parameter count, GFLOPs, and model size. [Results and Discussions] Ablation experiments demonstrated the incremental contributions of each architectural component. Substituting C3k2 with C3k2_SDW reduced parameters from 2.83 M to 2.14 M and computation from 10.2 to 8.1 GFLOPs, with slight performance improvements.
Incorporating BiFPN further lowered complexity to 1.68 M parameters and 7.7 GFLOPs, accompanied by notable gains in detection metrics. The addition of MS-ECA attention achieved the highest performance, delivering P of 92.6%, R of 92.0%, and mAP@0.5 of 95.2%, corresponding to improvements of 3.7 percentage points in precision and 2.1 percentage points in mAP@0.5 over the YOLOv11n-seg baseline, without increasing model size. Qualitative Grad-CAM visualizations revealed more precise focus on lodging regions and reduced false activations in upright stems and non-lodged soil areas. Generalization capability was further validated on the public WE3DS agricultural segmentation dataset, where the proposed model achieved average improvements of 4.3, 1.9, and 2.6 percentage points in precision, recall, and mAP@0.5, respectively, compared to the baseline. [Conclusions] The improved YOLOv11n-seg architecture achieves a superior balance between accuracy and efficiency for flax lodging segmentation by combining the C3k2_SDW_MS-ECA backbone, BiFPN with multi-stage semantic fusion in the neck, and output boundary refinement. This combination of high accuracy, lightweight design, and robust boundary delineation renders the model highly applicable to real-time, in-field deployment for intelligent lodging monitoring and precision agriculture. The results further suggest that the approach is transferable to broader agricultural segmentation tasks, providing a practical and scalable solution for modern smart farming applications.
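The weighted fusion at the heart of the BiFPN described in the Methods normalizes a non-negative learnable scalar per input scale; a minimal sketch (plain Python over equal-length flattened feature vectors, omitting the resizing step a real BiFPN applies before fusing scales) is:

```python
def bifpn_fuse(features, raw_weights, eps=1e-4):
    """Fast normalized fusion: out = sum(w_i * f_i) / (eps + sum(w_i)), w_i >= 0."""
    w = [max(0.0, rw) for rw in raw_weights]   # clamp keeps trainable weights non-negative
    total = sum(w) + eps
    return [sum(w[i] * f[j] for i, f in enumerate(features)) / total
            for j in range(len(features[0]))]
```

Because the weights are learned jointly with the rest of the network, the fusion adaptively rebalances low- and high-level contributions per layer, which is what helps recover small and fragmented lodged patches.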
[Objective] The precise quantification of rice seeds within individual cavities of seedling trays constitutes a critical operational parameter for optimizing seeding efficiency and fine-tuning the performance of air-vibration precision seeders. Achieving high accuracy in this task directly impacts resource utilization, seedling uniformity, and ultimately crop yield. However, the operational environment presents significant challenges, including complex backgrounds, seed overlap, variations in lighting and seed orientation, and the inherent difficulty of distinguishing individual seeds within dense clusters. These factors often lead to suboptimal performance in existing automated detection systems, manifesting as low detection accuracy and an inability to achieve robust, precise instance segmentation of individual rice seeds. To address these persistent limitations and advance the state-of-the-art in precision seeding monitoring, an integrated framework for rice seed instance segmentation was proposed. The core innovation lies in the synergistic combination of a cross-modal grounding generation (CGG) network with a pretrained model, which is designed to leverage complementary information from visual and textual domains. [Methods] The proposed methodology fundamentally aimed to bridge the gap between visual perception and semantic understanding within the specific context of rice seed detection. The CGG-pretrained model framework achieved this through deep joint alignment of visual features extracted from seedling tray images and textual features derived from contextual knowledge. This cross-modal grounding enabled collaborative learning, where the visual processing stream (handling object localization and pixel-level segmentation) was continuously informed and refined by the semantic understanding stream (interpreting context and relationships). 
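At its simplest, the region-to-concept correspondence that such cross-modal grounding establishes can be pictured as a similarity match between visual region features and text embeddings. The sketch below (plain Python; the feature vectors and labels are illustrative, and the actual CGG module learns this alignment rather than applying raw cosine similarity) shows the matching step:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def ground_regions(region_feats, text_feats, labels):
    """Assign each visual region the label of its most similar text embedding."""
    grounded = []
    for r in region_feats:
        sims = [cosine(r, t) for t in text_feats]
        best = max(range(len(sims)), key=sims.__getitem__)
        grounded.append((labels[best], sims[best]))
    return grounded
```

In the full model this matching runs in both directions, so semantic concepts also refine which regions are proposed, rather than only labeling regions after the fact.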
Specifically, the visual backbone network processes input imagery to generate feature maps, while the pretrained language model component, which utilized contextual embeddings, generated semantically rich textual representations. The CGG module acted as the fusion engine, establishing explicit correspondences between specific regions in the image (potential seeds or clusters) and relevant semantic concepts or descriptors provided by the pretrained model. This bidirectional interaction significantly enhanced the model's ability to disambiguate overlapping seeds, resolve occlusions, and accurately delineate individual seed boundaries under challenging conditions. Key technical innovations validated through rigorous ablation studies included: (1) The strategic use of the bootstrapping language-image pre-training (BLIP) model for generating high-quality pseudo-labels from unlabeled or weakly labeled image data, facilitating more effective semi-supervised learning and reducing annotation burden, and (2) the application of bidirectional encoder representations from transformers (BERT)-based word embeddings to capture deep semantic relationships and contextual nuances within textual descriptors related to seeds and seeding environments. [Results and Discussions] The ablation experiments demonstrated a pronounced synergistic effect when the core improvements were combined, resulting in a segmentation accuracy improvement exceeding 3 percentage points compared to the baseline model lacking this integration. Comprehensive experimental evaluation demonstrated the superior performance of the proposed CGG model against established benchmarks. Under the standard intersection over union (IoU) threshold of 0.5, the model achieved a mean average precision (mAP) of 90.7% for bounding box detection (denoted as mAP50bb for detection) and an outstanding 91.4% mAP for instance segmentation (denoted as mAP50seg for segmentation).
These results represented a statistically significant improvement over leading contemporary models, including the mask region-based convolutional neural network (Mask R-CNN) and Mask2Former, which highlighted the efficacy of the cross-modal grounding approach in accurately localizing and segmenting individual rice seeds. Further validation within realistic seeding trial scenarios, which involved direct comparison with meticulous manual annotations, confirmed the model's practical robustness. The CGG model attained the highest accuracy in two critical operational metrics: (1) precision in segmenting individual seed instances (single-seed segmentation accuracy), and (2) accuracy in determining the exact seed count per cavity, for which it achieved an average accuracy of 88%. Moreover, the model exhibited superior performance in minimizing estimation errors for cavity seed counts, as evidenced by its significantly lower error metrics: a root mean square error (RMSE) of 16.8 seeds, a mean absolute error (MAE) of 13.7 seeds, and a mean absolute percentage error (MAPE) of 2.46%. These error values were markedly lower than those recorded by the comparison models, which underscored the CGG model's enhanced reliability in practical counting tasks. The discussion contextualized these results and attributed the performance gains to the model's ability to leverage semantic context to resolve ambiguities inherent in visual-only approaches, particularly in dense and overlapping seed scenarios common in precision seeding trays. [Conclusions] The developed CGG-pretrained model integration presents a significant advancement in automated monitoring for precision rice seeding. The model successfully addresses the core challenges of low detection accuracy and imprecise instance segmentation for seeds in complex environments.
Its high accuracy in both individual seed segmentation and per-cavity seed count quantification, coupled with low error rates, demonstrates strong potential for practical deployment. Importantly, the model enables real-time detection of rice seeds during the image analysis stage; this functionality provides a quantifiable, data-driven basis for making immediate operational decisions, most notably enabling the targeted precision reseeding of empty or under-seeded cavities identified during the seeding process. By ensuring optimal seed placement and density from the outset, the technology contributes directly to improved resource efficiency (reducing seed waste), enhanced seedling uniformity, and potentially higher crop yields. Future work will focus on further optimizing inference speed for higher-throughput seeding lines and exploring generalization to other crop types and seeding mechanisms.
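The counting-error metrics reported above (RMSE, MAE, MAPE) follow their standard definitions; for reference, given predicted and ground-truth per-cavity counts they can be computed as:

```python
import math

def counting_errors(pred, true):
    """Return (RMSE, MAE, MAPE in %) between predicted and true seed counts."""
    n = len(pred)
    diffs = [p - t for p, t in zip(pred, true)]
    rmse = math.sqrt(sum(d * d for d in diffs) / n)   # penalizes large miscounts
    mae = sum(abs(d) for d in diffs) / n              # average absolute miscount
    mape = 100.0 * sum(abs(d) / t for d, t in zip(diffs, true)) / n
    return rmse, mae, mape
```

Reporting all three together, as the paper does, separates occasional large errors (reflected in RMSE) from typical relative error (reflected in MAPE).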
[Objective] Automated safflower harvesting faces several critical challenges, particularly inefficient path planning, suboptimal route quality, and limited decision-making capability in dynamic, complex environments. To address these issues, the problem was formulated as a three-dimensional traveling salesman problem, and an enhanced reinforcement learning model named the actor-critic reinforcement learning pointer network (AC-RL-PtrNet) was proposed, specifically designed for deployment on intelligent safflower-picking robots in agricultural settings. [Methods] First, to address the inherent limitations of conventional attention mechanisms in dynamic environments with complex spatial structures, an enhanced attention module was proposed based on the dynamic exponential moving average framework. By combining multi-head attention, spatial distance encoding, and adaptive exponential smoothing, the improved design allowed the model to better capture long-range dependencies and spatial context among safflowers. Meanwhile, to minimize computational cost while preserving inference quality, a structured pruning approach was adopted, which selectively removed redundant connections in the long short-term memory gates and fully connected layers. In parallel, the critic network was redesigned to improve learning stability and accuracy. This was achieved through the inclusion of batch normalization, residual feature aggregation, and a multi-layer value estimation head, all of which contributed to a tighter actor-critic synergy during policy training. [Results and Discussions] To quantitatively assess the impact of each component, ablation experiments were conducted across various configurations. The results confirmed that each module contributed distinct benefits, while their combination yielded the highest improvements in both planning precision and inference efficiency.
This coordinated actor-critic design effectively enhanced both trajectory quality and decision stability, which were critical in sequential robotic picking tasks. Experimental results also demonstrated that, compared with traditional swarm intelligence algorithms such as particle swarm optimization (PSO), ant colony optimization (ACO), and the non-dominated sorting genetic algorithm, the proposed AC-RL-PtrNet model achieved a planning time improvement ranging from -2.63% to 61.87% on the 25-target dataset and from 22.93% to 59.1% on the 31-target dataset. Meanwhile, the optimized paths were significantly shortened across different planning instances, indicating robust generalization capability under varied problem scales. Furthermore, field experiments provided concrete validation of the model's practical applicability. When deployed on a mobile picking robot in real safflower fields, the AC-RL-PtrNet achieved a 9.56% reduction in path length and 5.43% time saved for a 25-target picking task, and a 20.17% path reduction and 29.70% time saving for a 31-target scenario involving a different safflower variety. Overall, these results all indicated that the proposed method exhibited significant advantages in enhancing path planning efficiency and optimizing path quality. [Conclusions] This study offers a practical solution for achieving efficient and robust automatic picking by safflower picking robots and provides new insights into solving 3D combinatorial optimization problems.
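The exponential-smoothing component of the attention design above can be illustrated in isolation: successive attention score vectors are blended with their running average, damping step-to-step fluctuations. This is a minimal sketch with a fixed smoothing factor (the paper's mechanism adapts it dynamically, and the function name is hypothetical):

```python
def ema_smooth(score_seq, alpha=0.3):
    """Exponential moving average over a sequence of attention score vectors."""
    smoothed, state = [], None
    for scores in score_seq:
        if state is None:
            state = list(scores)                 # initialize with the first step
        else:
            state = [alpha * s + (1.0 - alpha) * prev
                     for s, prev in zip(scores, state)]
        smoothed.append(list(state))
    return smoothed
```

Smoothing of this kind trades a little responsiveness for stability, which matters when a pointer network must emit a consistent visiting order over many sequential decoding steps.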
[Objective] The vegetable supply chain is characterized by multiple production entities, diverse product varieties, and complex circulation processes, which often result in low data accuracy, label forgery, data tampering, and difficulties in cross-enterprise collaboration in traditional traceability systems. Furthermore, the rapid development of quantum computing poses significant threats to existing cryptographic foundations by enabling efficient factorization or discrete logarithm attacks. This study aimed to design and implement a vegetable supply chain anti-counterfeiting and traceability system that integrates the Internet of Things (IoT), blockchain technology, and a post-quantum enhanced elliptic curve integrated encryption scheme (PQ-ECIES). The system seeks to enhance the trustworthiness, privacy protection, and collaborative efficiency of supply chain data management, while maintaining practical performance for IoT devices and high-frequency data uploading scenarios. [Methods] The proposed system was constructed on an IoT framework incorporating nine categories of devices. A registration and admission mechanism was developed to establish a trusted mapping between "device–enterprise–data", effectively preventing unauthorized entities from uploading forged data. At the data layer, collected information was divided into public and private categories: Public data were uploaded directly to the blockchain, while private data were encrypted using PQ-ECIES before being stored on-chain. Smart contracts automated processes such as data classification, permission verification, and encrypted data querying, thus reducing human intervention and ensuring compliance. PQ-ECIES was designed by combining elliptic curve cryptography (ECC) and the Kyber algorithm from lattice-based post-quantum cryptography. 
A dual-key mechanism was employed to generate session keys, where an ECC-derived shared secret was combined with a Kyber-derived shared secret through SHA3-256 hashing, followed by key derivation for encryption and authentication. This design provided resilience against Shor's algorithm and other quantum attacks while maintaining efficiency compatible with IoT devices. The blockchain system was implemented using Hyperledger Fabric 1.4.4, with seven organizational nodes and the Raft consensus mechanism. Performance testing included evaluations of data collection accuracy, on-chain latency, query latency, and encryption performance across RSA, advanced encryption standard (AES), and PQ-ECIES. [Results and Discussions] The IoT-based data collection achieved significantly higher accuracy than manual input, particularly in large-scale sample scenarios such as pesticide residue testing. The average latency for data uploading to the blockchain was 2 879 ms, while data query latency averaged 122 ms, both of which met the practical requirements of vegetable supply chain applications. In cryptographic performance testing, PQ-ECIES achieved encryption and decryption of 128 B plaintext in approximately 10-30 ms, outperforming RSA (50-80 ms) and only slightly slower than AES (<10 ms). This result indicates that PQ-ECIES achieved an optimal trade-off between efficiency and security, offering asymmetric encryption benefits such as key distribution and identity verification, along with strong post-quantum resistance. Simulation under quantum attack models confirmed that traditional ECC and AES could be compromised within hours using Shor's and Grover's algorithms, whereas PQ-ECIES maintained resilience due to the lattice-based hardness assumptions of Kyber. From a system-level perspective, three major contributions were identified. 
First, trustworthiness was enhanced by binding IoT devices to enterprises through Bluetooth-based verification and blockchain's immutable ledger, ensuring data authenticity at the source. Second, privacy protection was achieved by adopting graded visibility: Consumers accessed only public data such as testing results and logistics status, while regulators could decrypt private information (e.g., production location and batch details) via authorized keys, balancing transparency with confidentiality. Third, collaboration across enterprises was improved through the consortium blockchain structure and Fabric channel mechanisms, which eliminated information silos and enabled selective data sharing in real time, reducing inter-organizational access time from weeks to minutes. Experimental validation confirmed that IoT-based collection significantly improved accuracy, blockchain integration achieved acceptable on-chain and query latency, and PQ-ECIES outperformed RSA while offering post-quantum resistance not available in AES. [Conclusions] This study proposed and implemented a vegetable supply chain traceability system that integrates IoT, blockchain, and PQ-ECIES. By deploying nine categories of IoT devices, establishing trusted device-enterprise mappings, and incorporating blockchain's decentralized and tamper-proof ledger, the system ensured reliable data collection and storage. The integration of PQ-ECIES provided dual cryptographic protection, balancing efficiency with long-term quantum security. Beyond technical performance, the system enhanced trust, privacy, and collaboration across the vegetable supply chain, effectively addressing common issues of data forgery, tampering, and cross-enterprise coordination. Overall, the proposed framework demonstrates high potential for real-world deployment in agricultural supply chains, offering a secure, efficient, and future-proof solution to ensure authenticity, reliability, and transparency in vegetable traceability.
The study also provides a reference model for extending post-quantum blockchain-based traceability to other agri-food sectors facing similar challenges.
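The dual-key session derivation described in the Methods, where an ECC-derived shared secret and a Kyber-derived shared secret are combined through SHA3-256 before key derivation, can be sketched with Python's `hashlib` (the derivation labels and key lengths below are illustrative assumptions, not the paper's exact KDF):

```python
import hashlib

def derive_session_keys(ecc_secret: bytes, kyber_secret: bytes):
    """Combine two shared secrets into separate encryption and authentication keys."""
    # Hybrid master secret: both key exchanges feed a single SHA3-256 hash
    master = hashlib.sha3_256(ecc_secret + kyber_secret).digest()
    # Domain-separated derivation for encryption and MAC keys (hypothetical labels)
    enc_key = hashlib.sha3_256(master + b"enc").digest()[:16]
    mac_key = hashlib.sha3_256(master + b"mac").digest()
    return enc_key, mac_key
```

Because the session key depends on both secrets, an attacker must break both the elliptic-curve exchange and the lattice-based exchange, which is the source of the scheme's resilience against Shor's algorithm.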
[Objective] In robotic-arm feed distribution for eel (Monopterus albus) farming systems, the main challenges are slow path planning, excessive trajectory redundancy, and suboptimal obstacle avoidance success rates within confined operational spaces. To mitigate these issues, an improved path planning algorithm based on the bidirectional rapidly-exploring random tree star (BI-RRT*) algorithm was proposed. The primary aim was to significantly enhance the motion efficiency and task success rate of robotic arms operating in complex, constrained environments. [Methods] The proposed improved BI-RRT* algorithm integrated an adaptive goal-biased strategy with an enhanced artificial potential field (APF) method. The algorithm's framework comprised three core components: a high-quality sampling strategy, an efficient search strategy, and a path optimization algorithm. For the high-quality sampling strategy, an adaptive goal-biased approach was introduced to overcome the limitations of inefficient random sampling and slow convergence rates characteristic of traditional BI-RRT algorithms in complex environments. This strategy dynamically adjusted the generation of sampling points, moving beyond purely random selection. Instead, it prioritized sampling regions in the vicinity of the target, guided by the target direction and a predefined bias probability. This mechanism substantially augmented the growth propensity of the search tree towards the target area, effectively reducing the stochasticity of random sampling and consequently accelerating the path search process. To enhance search efficiency and prevent the algorithm from converging to local optima, an improved APF was incorporated into the node expansion process. The APF was refined to achieve superior integration with the BI-RRT framework.
During each new node expansion, in addition to considering the inherent random exploration characteristics of BI-RRT, a directional attractive field was superimposed. This attractive field not only originated from the ultimate target point but also factored in the current growth orientation of the search tree and localized environmental information. Specifically, a composite attractive function was devised, which synergized the attractive force exerted by the target point on the current node with the attraction from potential "guide points". Concurrently, the computation of the repulsive field was optimized to more precisely delineate the geometry and proximity of obstacles, thereby circumventing common issues such as "oscillation" and "deadlock" prevalent in traditional APF. Through this methodology, the algorithm was able to more effectively steer the search tree to circumvent obstacles and rapidly converge towards the target region, significantly bolstering the directness of the search and successfully preventing the algorithm from becoming ensnared in suboptimal local solutions. For the path optimization algorithm, following the generation of an initial feasible path, a greedy optimization strategy was employed for path pruning and smoothing. This was executed to yield an optimal path characterized by reduced length, enhanced smoothness, and improved conformity with the kinematic properties of the robotic arm. Path pruning was initially applied to eliminate redundant nodes; if a collision-free direct connection existed between two non-adjacent nodes, intermediate nodes were excised, thereby substantially abbreviating the path length. Subsequently, path smoothing techniques, such as B-spline curves or cubic spline interpolation, were introduced to enable the robotic arm to execute movements with greater stability and efficiency during actual operation, mitigating impact and vibration. 
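The pruning step described above can be sketched as a greedy pass that, from each kept node, jumps to the farthest node reachable by a direct collision-free segment (plain Python; `collision_free` is a caller-supplied predicate, and the function name is illustrative):

```python
def prune_path(path, collision_free):
    """Greedily drop intermediate nodes that a direct collision-free segment can skip."""
    pruned, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1                      # try the farthest node first
        while j > i + 1 and not collision_free(path[i], path[j]):
            j -= 1                             # back off until a direct segment is clear
        pruned.append(path[j])
        i = j
    return pruned
```

The smoothing stage (e.g., B-spline or cubic spline interpolation) then operates on this shortened node sequence, so pruning and smoothing are complementary rather than redundant.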
This two-stage optimization procedure ensured that the final generated path was not merely feasible but also optimal across metrics of length, smoothness, and motion efficiency. [Results and Discussions] To comprehensively validate the performance of the proposed algorithm, a two-stage experimental verification was conducted. Initially, comparative simulations were performed in both two-dimensional (2D) and three-dimensional (3D) environments utilizing the Matlab platform. These simulation scenarios were meticulously engineered to encompass three archetypal environments—simple, complex, and narrow passages—thereby emulating the diverse obstacle configurations potentially encountered in industrialized eel aquaculture. The results demonstrated that, concerning both path planning speed and quality, the improved BI-RRT* algorithm significantly surpassed RRT, APF-RRT*, and traditional BI-RRT* algorithms across all tested environments, substantiating the theoretical superiority and inherent robustness of the improved BI-RRT* algorithm proposed in this study across varying complex environments. To further ascertain the engineering applicability and practical potential of the algorithm, an eel feeding robotic arm simulation system was constructed based on the robot operating system and MoveIt frameworks. This system precisely emulated the kinematics, dynamics, and obstacle distribution pertinent to an industrialized eel aquaculture environment. During simulated continuous feeding tasks, the improved BI-RRT* algorithm consistently exhibited outstanding performance. Its average running time was merely 2.1 s, representing a substantial 41.6% reduction compared to the traditional BI-RRT*. The average length of the planned path was recorded at only 1 680 mm, with an average of 180 nodes, indicating a significant reduction in path redundancy.
Furthermore, the algorithm achieved an impressive obstacle avoidance success rate of 96% in complex confined spaces. These empirical findings not only validated the algorithm's effectiveness but also underscored its immense potential for practical engineering applications. [Conclusions] The experimental results conclusively demonstrated that the improved BI-RRT* algorithm significantly enhanced the path planning efficiency and trajectory quality of robotic arms operating within confined spaces. It also exhibited exceptionally high reliability in obstacle avoidance, thereby effectively addressing the automated feeding requirements of industrialized eel aquaculture. The algorithmic framework possessed considerable generality, offering valuable theoretical insights and technical precedents for resolving analogous robotic arm path planning challenges in other agricultural automation contexts.
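For reference, the core of the goal-biased sampling strategy described in the Methods reduces to choosing, with a bias probability, between sampling near the target and sampling uniformly in the workspace. This is a minimal sketch; the fixed jitter radius is an illustrative stand-in for the adaptive adjustment the algorithm actually performs:

```python
import random

def goal_biased_sample(goal, bounds, bias_p, rng=random, radius=0.1):
    """With probability bias_p sample near the goal, otherwise sample uniformly."""
    if rng.random() < bias_p:
        # Jitter around the target to steer tree growth toward the goal region
        return tuple(g + rng.uniform(-radius, radius) for g in goal)
    return tuple(rng.uniform(lo, hi) for lo, hi in bounds)
```

Raising the bias probability accelerates convergence in open regions, while the residual uniform sampling preserves the probabilistic completeness that lets the tree escape cluttered or narrow passages.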