Most Read
  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    ZHAO Chunjiang, FAN Beibei, LI Jin, FENG Qingchun
    Smart Agriculture. 2023, 5(4): 1-15. https://doi.org/10.12133/j.smartag.SA202312030

    [Significance] Autonomous and intelligent agricultural machinery, characterized by green intelligence, energy efficiency, and reduced emissions, as well as high intelligence and man-machine collaboration, will serve as the driving force behind global agricultural technology advancements and the transformation of production methods in the context of smart agriculture development. Agricultural robots, which utilize intelligent control and information technology, have the unique advantage of replacing manual labor. They occupy the strategic commanding heights and competitive focus of global agricultural equipment and are also one of the key development directions for accelerating the construction of China's agricultural power. The world's agricultural powers, including China, have incorporated the research, development, manufacturing, and promotion of agricultural robots into their national strategies, each strengthening agricultural robot policy and planning in line with its own agricultural development characteristics, thus driving the agricultural robot industry into a stable growth period. [Progress] This paper first examines the concept and defining features of agricultural robots, together with the global policy and strategic planning landscape for their development, and sheds light on the growth of the global agricultural robotics industry. It then analyzes the industrial backdrop, cutting-edge advancements, developmental challenges, and key technologies of three representative types of agricultural robots: farmland robots, orchard picking robots, and indoor vegetable production robots. Finally, it summarizes the gap between Chinese agricultural robots and their foreign counterparts in terms of advanced technologies. (1) An agricultural robot is a multi-degree-of-freedom autonomous operating device that possesses accurate perception, autonomous decision-making, intelligent control, and automatic execution capabilities specifically designed for agricultural environments. When combined with artificial intelligence, big data, cloud computing, and the Internet of Things, agricultural robots form an agricultural robot application system. This system has relatively mature applications in key processes such as field planting, fertilization, pest control, yield estimation, inspection, harvesting, grafting, pruning, and transportation, as well as feeding, inspection, disinfection, and milking in livestock and poultry breeding. Globally, agricultural robots, represented by plant protection robots, have entered the industrial application phase and are gradually being commercialized, with vast market potential. (2) Compared to traditional agricultural machinery and equipment, agricultural robots possess advantages in performing hazardous tasks, executing repetitive batch work, managing complex field operations, and livestock breeding. In contrast to industrial robots, agricultural robots face technical challenges in three respects: first, the complexity and unstructured nature of the operating environment; second, the flexibility, mobility, and commodity nature of the operation objects; and third, the high level of technology and investment required. (3) Despite the increasing demand for unmanned and lightly manned operations in farmland production, China's agricultural robot research, development, and application started late and have progressed slowly.
    Existing agricultural operation equipment still falls well short of precision operation, digital perception, intelligent management, and intelligent decision-making, and the comprehensive performance of domestic products lags behind that of advanced foreign counterparts, indicating that industrial development and application still have a long way to go. Firstly, current agricultural robots predominantly use single actuators and operate as single machines, with multi-arm cooperative robots only just emerging; most of these robots perform rigid operations and exhibit limited flexibility, adaptability, and functionality. Secondly, the perception of multi-source agricultural environments and the autonomous operation of agricultural robot equipment still rely heavily on human input. Thirdly, progress on new teaching methods and technologies for natural human-machine interaction is slow. Lastly, operational infrastructure remains underdeveloped, resulting in a relatively low degree of "mechanization". [Conclusions and Prospects] The paper anticipates the opportunities arising from the rapid growth of the agricultural robotics industry in response to the escalating global shortage of agricultural labor. It outlines the emerging trends in agricultural robot technology, including autonomous navigation, self-learning, real-time monitoring, and operation control. In the future, path planning and navigation information perception for autonomous agricultural robots are expected to become more refined, and autonomous learning and cross-scenario operation performance will improve. Real-time operation monitoring of agricultural robots through digital twins will also progress, and cloud-based management and control of agricultural robots for comprehensive operations will grow significantly. Steady advances will be made in the innovation and integration of agricultural machinery and agronomic techniques.

  • Special Issue--Agricultural Information Perception and Models
    GUO Wang, YANG Yusen, WU Huarui, ZHU Huaji, MIAO Yisheng, GU Jingqiu
    Smart Agriculture. 2024, 6(2): 1-13. https://doi.org/10.12133/j.smartag.SA202403015

    [Significance] Big models, or foundation models, offer a new paradigm for smart agriculture. These models, built on the Transformer architecture, incorporate numerous parameters and have undergone extensive training, often showing excellent performance and adaptability, which makes them effective in addressing agricultural issues where data are limited. Integrating big models into agriculture promises to pave the way for a more comprehensive form of agricultural intelligence, capable of processing diverse inputs, making informed decisions, and potentially overseeing entire farming systems autonomously. [Progress] The fundamental concepts and core technologies of big models are first elaborated from five aspects: the origin and core principles of the Transformer architecture, scaling laws for extending big models, large-scale self-supervised learning, the general capabilities and adaptations of big models, and their emergent capabilities. Subsequently, the possible application scenarios of big models in the agricultural field are analyzed in detail, and the development status of big models is described for three types of models: large language models (LLMs), large vision models (LVMs), and large multi-modal models (LMMs). The progress of applying big models in agriculture is discussed, and the achievements to date are presented. [Conclusions and Prospects] The challenges and key tasks of applying big model technology in agriculture are analyzed. Firstly, the datasets currently used for agricultural big models are limited, and constructing them can be both expensive and potentially problematic in terms of copyright; larger, more openly accessible datasets are needed to facilitate future advances. Secondly, the complexity of big models, due to their extensive parameter counts, poses significant challenges for training and deployment; however, future methodological improvements are expected to streamline these processes by optimizing memory and computational efficiency, thereby enhancing the performance of big models in agriculture. Thirdly, these models demonstrate strong proficiency in analyzing image and text data, suggesting potential future applications that integrate real-time data from IoT devices and the Internet to make informed decisions, manage multi-modal data, and potentially operate machinery within autonomous agricultural systems. Finally, the dissemination and implementation of these big models in the public agricultural sphere are deemed crucial. Public availability is expected to refine their capabilities through user feedback and alleviate the workload on humans by providing sophisticated and accurate agricultural advice, which could revolutionize agricultural practices.
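    The abstract's account of Transformer-based big models centers on self-attention. The following Python/NumPy sketch of scaled dot-product attention, the core Transformer operation, is illustrative only and is not taken from the paper; function and variable names are assumptions.

    ```python
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Core Transformer step: softmax(Q K^T / sqrt(d_k)) V.

        Q, K, V have shape (seq_len, d_k). Masking and multi-head splitting
        are omitted for brevity.
        """
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                    # pairwise token similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
        return weights @ V                                 # weighted sum of value vectors

    # Toy self-attention over 4 tokens with 8-dimensional embeddings.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    print(scaled_dot_product_attention(x, x, x).shape)     # (4, 8)
    ```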

  • Topic--Intelligent Agricultural Sensor Technology
    WANG Rujing
    Smart Agriculture. 2024, 6(1): 1-17. https://doi.org/10.12133/j.smartag.SA202401017

    [Significance] Agricultural sensors are a key technology for developing modern agriculture. An agricultural sensor is a detection device that senses a physical signal related to the agricultural environment, plants, or animals and converts it into an electrical signal. Agricultural sensors can be applied to monitor crops and livestock across different agricultural environments, including weather, water, atmosphere, and soil. They are also an important driving force for the iterative upgrading of agricultural technology and the transformation of agricultural production methods. [Progress] Agricultural sensors are categorized, cutting-edge research trends are analyzed, and the current research status of agricultural sensors in different application scenarios is summarized. Moreover, a deep analysis and discussion of four major categories is conducted: agricultural environment sensors, animal and plant life information sensors, agricultural product quality and safety sensors, and agricultural machinery sensors. The research and development process, as well as the universality and limitations of the application, of these four types of agricultural sensors are summarized. Agricultural environment sensors are mainly used for real-time monitoring of key parameters in agricultural production environments, such as the quality of water, gas, and soil. Soil sensors provide data support for precision irrigation, rational fertilization, and soil management by monitoring indicators such as soil moisture, pH, temperature, nutrients, microorganisms, pests and diseases, heavy metals, and agricultural pollution. Monitoring of dissolved oxygen, pH, nitrate content, and organophosphorus pesticides in irrigation and aquaculture water through water sensors ensures the rational use of water resources and water quality safety. Gas sensors monitor atmospheric CO2, NH3, C2H2, and CH4 concentrations, among other information, providing appropriate environmental conditions for crop growth in greenhouses. Animal life information sensors capture an animal's growth, movement, and physiological and biochemical status, including movement trajectory, food intake, heart rate, body temperature, blood pressure, and blood glucose. Plant life information sensors monitor plant health and growth, such as volatile organic compounds of the leaves, surface temperature and humidity, phytohormones, and other parameters. In particular, flexible wearable plant sensors provide a new way to measure plant physiological characteristics accurately and to monitor the water status and physiological activities of plants non-destructively and continuously. Agricultural product quality and safety sensors are mainly used to detect various indicators in agricultural products, such as temperature and humidity, freshness, nutrients, and potentially hazardous substances (e.g., bacteria, pesticide residues, and heavy metals). Agricultural machinery sensors enable real-time monitoring and control of agricultural machinery, supporting real-time cultivation, planting, management, and harvesting, automated machinery operation, and accurate application of pesticides and fertilizers. [Conclusions and Prospects] Regarding the challenges and prospects of agricultural sensors, the core bottlenecks limiting their large-scale application at the present stage are analyzed in detail.
    These bottlenecks concern the low cost, specialization, high stability, and adaptive intelligence required of agricultural sensors. Furthermore, the concept of "ubiquitous sensing in agriculture" is proposed, providing ideas and references for the research and development of agricultural sensor technology.

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    WANG Ting, WANG Na, CUI Yunpeng, LIU Juan
    Smart Agriculture. 2023, 5(4): 105-116. https://doi.org/10.12133/j.smartag.SA202311005

    [Objective] The rural revitalization strategy imposes new requirements on agricultural technology extension; however, conventional extension methods face a contradiction between supply and demand, so further innovation in the supply of agricultural knowledge is needed. Recent advances in artificial intelligence, such as deep learning and large-scale neural networks, and particularly the advent of large language models (LLMs), make anthropomorphic, intelligent agricultural technology extension feasible. Oriented to the demand for fruit and vegetable agricultural technology knowledge services, an intelligent agricultural technology question-answering system was built in this research based on an LLM, providing agricultural technology extension services that include guidance on new agricultural knowledge and question answering. This makes it convenient for farmers to access high-quality agricultural knowledge at any time. [Methods] Through an analysis of the demands of strawberry farmers, the agricultural technology knowledge related to strawberry cultivation was categorized into six themes: basic production knowledge, variety screening, interplanting knowledge, pest diagnosis and control, disease diagnosis and control, and drug damage diagnosis and control. Considering the current situation of agricultural technology, two primary tasks were formulated: named entity recognition and question answering related to agricultural knowledge. A training corpus comprising entity type annotations and question-answer pairs was constructed using a combination of automatic machine annotation and manual annotation, ensuring a small yet high-quality sample. After comparing four existing large language models (Baichuan2-13B-Chat, ChatGLM2-6B, Llama 2-13B-Chat, and ChatGPT), the model exhibiting the best performance was chosen as the base LLM for the intelligent question-answering system for agricultural technology knowledge. Utilizing a high-quality corpus, pre-training, and fine-tuning, a deep neural network with semantic analysis, context association, and content generation capabilities was trained. This model served as a large language model for named entity recognition and question answering of agricultural knowledge, adaptable to various downstream tasks. For the named entity recognition task, the LoRA fine-tuning method was employed, fine-tuning only essential parameters to expedite model training and enhance performance. For the question-answering task, the prompt-tuning method was used to fine-tune the large language model, with adjustments made based on the model's generated content to achieve iterative optimization. Model performance was optimized from two perspectives: data and model design. In terms of data, redundant or unclear items were manually removed from the labeled corpus. In terms of the model, a strategy based on retrieval-augmented generation was employed to deepen the model's understanding of agricultural knowledge and keep that knowledge synchronized in real time, alleviating the problem of LLM hallucination. Drawing upon the constructed large language model, an intelligent question-answering system was developed for agricultural technology knowledge.
    This system can generate high-precision, unambiguous answers and also supports multi-round question answering and retrieval of information sources. [Results and Discussions] Accuracy and recall served as indicators to evaluate the named entity recognition performance of the large language models. The results indicated that performance was closely related to factors such as model structure, the scale of the labeled corpus, and the number of entity types. After fine-tuning, the ChatGLM model achieved the highest accuracy and recall. For the same number of entity types, a larger annotated corpus yielded higher accuracy. Fine-tuning affected different models differently; overall, it improved the average accuracy of all models across the different knowledge topics, with the ChatGLM, Llama, and Baichuan values all surpassing 85%. The average recall saw only a limited increase, and in some cases it was even lower than before fine-tuning. Assessing the question-answering task using hallucination rate and semantic similarity as indicators, data optimization and retrieval-augmented generation effectively reduced the hallucination rate by 10% to 40% and improved semantic similarity by more than 15%. These optimizations significantly enhanced the correctness, logic, and comprehensiveness of the models' generated content. [Conclusions] The pre-trained ChatGLM model exhibited superior performance in named entity recognition and question answering tasks in the agricultural field. Fine-tuning pre-trained large language models for downstream tasks and optimizing them with retrieval-augmented generation mitigated the problem of hallucination and markedly improved model performance. Large language model technology has the potential to innovate agricultural technology knowledge service modes and optimize agricultural knowledge extension, effectively reducing the time cost for farmers to obtain high-quality, useful knowledge and guiding more farmers towards agricultural technology innovation and transformation. However, due to challenges such as unstable performance, further research is needed to explore optimization methods for large language models and their application in specific scenarios.
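    The LoRA fine-tuning mentioned in the abstract trains only small low-rank adapter matrices inside the attention layers. The sketch below shows how such adapters are typically attached with the Hugging Face peft library; the base model name, rank, and target module name are illustrative assumptions rather than the paper's actual configuration.

    ```python
    # Minimal LoRA setup sketch (assumed transformers/peft usage; not the paper's code).
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "THUDM/chatglm2-6b"  # illustrative: one of the LLMs compared in the paper
    tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)

    lora_cfg = LoraConfig(
        r=8,                                 # low-rank dimension of the adapter matrices
        lora_alpha=32,                       # scaling applied to the adapter update
        lora_dropout=0.05,
        target_modules=["query_key_value"],  # assumed attention projection name for ChatGLM
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()       # typically well under 1% of all parameters
    ```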

  • Information Processing and Decision Making
    YANG Feng, YAO Xiaotong
    Smart Agriculture. 2024, 6(1): 147-157. https://doi.org/10.12133/j.smartag.SA202309010

    [Objective] To effectively tackle the unique attributes of wheat leaf pests and diseases in their native environment, a high-quality and efficient pest detection model named YOLOv8-SS (You Only Look Once Version 8-SS) was proposed. This model is engineered to accurately identify pests, thereby providing a solid scientific foundation for prevention and management strategies. [Methods] A total of 3 639 raw images of wheat leaf pests and diseases, covering 6 different wheat pests and diseases, were collected with mobile phones from various farmlands in the Yuchong County area of Gansu Province at different times. The dataset was constructed using the LabelImg software to accurately label the images with the targeted pest species. To guarantee the model's generalization capability, the dataset was divided into a training set and a test set in an 8:2 ratio. The dataset includes thorough observations and recordings of the wheat leaf blade's appearance, texture, and color, as well as other variables that could influence these characteristics, and proved to be a valuable asset for both training and validation. Building on the YOLOv8 algorithm, an enhanced lightweight convolutional neural network, ShuffleNetV2, was selected as the backbone network for feature extraction from images. This was accomplished by integrating a 3×3 depthwise convolution (DWConv) kernel, the h-swish activation function, and a Squeeze-and-Excitation Network (SENet) attention mechanism. These enhancements streamlined the model by reducing the parameter count and computational demands while sustaining high detection precision. The model employs the SENet attention mechanism module within both its Backbone and Neck components, significantly reducing computational load while bolstering accuracy. By integrating a dedicated small-target detection layer, the model's capabilities were augmented, enabling more efficient and precise pest and disease detection. The introduction of a new detection feature map, sized 160×160 pixels, enables the network to concentrate on identifying small-target pests and diseases, thereby enhancing recognition accuracy. [Results and Discussions] The YOLOv8-SS wheat leaf pest and disease detection model was significantly improved to accurately detect wheat leaf pests and diseases in their natural environment. By employing the refined ShuffleNetV2 within the DarkNet-53 framework, as opposed to the conventional YOLOv8, under identical experimental settings, the model exhibited a 4.53% increase in recognition accuracy and a 4.91% improvement in F1-Score compared to the initial model. Furthermore, the incorporation of a dedicated small-target detection layer led to a further rise in accuracy and F1-Score of 2.31% and 2.16%, respectively, despite a minimal increase in the number of parameters and computational requirements. The integration of the SENet attention mechanism module into the YOLOv8 model resulted in a detection accuracy increase of 1.85% and an F1-Score enhancement of 2.72%.
    Furthermore, by replacing the original backbone with the enhanced ShuffleNetV2 and appending a compact small-object detection layer (yielding YOLOv8-SS), the resulting model achieved a recognition accuracy of 89.41% and an F1-Score of 88.12%. YOLOv8-SS substantially outperformed the standard YOLOv8, with improvements of 10.11% in accuracy and 9.92% in F1-Score, illustrating its ability to balance speed with precision. Moreover, it converged more rapidly, requiring approximately 40 training epochs to surpass other well-known models such as Faster R-CNN, MobileNetV2, SSD, YOLOv5, YOLOX, and the original YOLOv8 in accuracy. Specifically, YOLOv8-SS achieved an average accuracy 23.01%, 15.13%, 11%, 25.21%, 27.52%, and 10.11% greater than that of these competing models, respectively. In a head-to-head trial involving a public dataset (LWDCD 2020) and a custom-built dataset, the LWDCD 2020 dataset yielded an accuracy of 91.30%, outperforming the custom-built dataset by a margin of 1.89% with the same YOLOv8-SS architecture. The AI Challenger 2018-6 and Plant-Village-5 datasets did not perform as strongly, achieving accuracy rates of 86.90% and 86.78%, respectively. The YOLOv8-SS model showed substantial improvements in both feature extraction and learning capability over the original YOLOv8, excelling particularly in natural environments with intricate, unstructured backgrounds. [Conclusions] The YOLOv8-SS model delivers high recognition accuracy while consuming minimal storage space. In contrast to conventional detection models, it exhibits superior detection accuracy and speed, making it valuable across various applications. This work serves as a resource for research on crop pest and disease detection within natural environments featuring complex, unstructured backgrounds. The method is versatile and yields significantly enhanced detection performance while maintaining a lean model architecture, making it highly appropriate for real-world, large-scale crop pest and disease detection.
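    The SENet attention described above is the standard squeeze-and-excitation channel attention block. A minimal PyTorch sketch follows; the reduction ratio and tensor sizes are illustrative choices, not values reported by the paper.

    ```python
    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-Excitation channel attention: global pooling, two FC layers, sigmoid gate."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)               # squeeze: global spatial average
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),                                  # per-channel gates in (0, 1)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w                                       # excite: reweight channels

    # Example: apply to a 256-channel backbone feature map.
    feat = torch.randn(2, 256, 40, 40)
    print(SEBlock(256)(feat).shape)                            # torch.Size([2, 256, 40, 40])
    ```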

  • Special Issue--Agricultural Information Perception and Models
    ZHANG Ronghua, BAI Xue, FAN Jiangchuan
    Smart Agriculture. 2024, 6(2): 49-61. https://doi.org/10.12133/j.smartag.SA202311007

    [Objective] Improving the efficiency and accuracy of crop pest detection in complex natural environments, and moving away from the current reliance on manual expert identification in agricultural production, is of great significance. Targeting the problems of small target size, mimicry with crops, low detection accuracy, and slow inference speed in crop pest detection, a crop pest detection algorithm for complex scenes named YOLOv8-Extend was proposed in this research. [Methods] Firstly, GSConv was introduced to enhance the model's receptive field and allow global feature aggregation. This mechanism enables feature aggregation at both the node and global levels simultaneously, obtaining local features from neighboring nodes through neighbor sampling and aggregation operations, thereby enhancing the model's receptive field and semantic understanding. Additionally, some convolutions were replaced with lightweight Ghost convolutions, and HorBlock was utilized to capture longer-term feature dependencies; its recursive gated convolution employs gating mechanisms to remember and transmit previous information, capturing long-term correlations. Furthermore, Concat was replaced with BiFPN for richer feature fusion: bidirectional fusion of deep features from top to bottom and from bottom to top enhances the transmission of feature information across different network layers. Using the VoVGSCSP module, feature maps of different scales were connected to create longer feature map vectors, increasing model diversity and enhancing small object detection. The convolutional block attention module (CBAM) attention mechanism was introduced to strengthen features of field pests and reduce background weights caused by scene complexity. Next, the Wise-IoU dynamic non-monotonic focusing mechanism was implemented to evaluate the quality of anchor boxes using an "outlier" measure instead of IoU. This mechanism also includes a gradient gain allocation strategy, which reduces the competitiveness of high-quality anchor boxes and minimizes harmful gradients from low-quality examples, allowing WIoU to concentrate on anchor boxes of average quality and improving the network's generalization ability and overall performance. Subsequently, the improved YOLOv8-Extend model was compared with the original YOLOv8 model, YOLOv5, YOLOv8-GSCONV, YOLOv8-BiFPN, and YOLOv8-CBAM to validate detection accuracy and precision. Finally, the model was deployed on edge devices for inference verification to confirm its effectiveness in practical application scenarios. [Results and Discussions] The results indicated that the improved YOLOv8-Extend model achieved notable improvements in the accuracy, recall, mAP@0.5, and mAP@0.5:0.95 evaluation indices, with increases of 2.6%, 3.6%, 2.4% and 7.2%, respectively, showcasing superior detection performance. When YOLOv8-Extend and YOLOv8 were run on the edge computing device Jetson Orin NX 16 GB and accelerated by TensorRT, mAP@0.5 improved by 4.6% and the frame rate reached 57.6 FPS, meeting real-time detection requirements. The YOLOv8-Extend model demonstrated better adaptability in complex agricultural scenarios and exhibited clear advantages in detecting small pests and pests sharing similar growth environments in practical data collection; the accuracy on challenging data saw a notable increase of 11.9%.
    Through algorithm refinement, the model showed improved capability in extracting and focusing on features in crop pest detection, addressing issues such as small targets, similar background textures, and difficult feature extraction. [Conclusions] The YOLOv8-Extend model introduced in this study significantly boosts detection accuracy and recognition rates while maintaining high operational efficiency. It is suitable for deployment on edge computing devices to facilitate real-time detection of crop pests, offering techniques and methodologies for developing cost-effective terminal-based automatic pest recognition systems. This research can also serve as a valuable reference for the intelligent detection of other small targets and for optimizing model structures.
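    One of the lightweighting steps above replaces ordinary convolutions with Ghost convolutions, which generate part of the output channels with a cheap depthwise operation. The PyTorch sketch below illustrates that idea; kernel sizes, the half-and-half channel split, and the activation choice are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    class GhostConv(nn.Module):
        """Ghost convolution sketch: half the output channels come from a normal convolution,
        the other half from a cheap depthwise convolution applied to those primary features."""
        def __init__(self, in_ch: int, out_ch: int, k: int = 1):
            super().__init__()
            primary = out_ch // 2
            self.primary = nn.Sequential(
                nn.Conv2d(in_ch, primary, k, padding=k // 2, bias=False),
                nn.BatchNorm2d(primary), nn.SiLU(),
            )
            self.cheap = nn.Sequential(                        # depthwise "ghost" features
                nn.Conv2d(primary, primary, 5, padding=2, groups=primary, bias=False),
                nn.BatchNorm2d(primary), nn.SiLU(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            y = self.primary(x)
            return torch.cat([y, self.cheap(y)], dim=1)

    print(GhostConv(64, 128)(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 128, 80, 80])
    ```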

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    CHEN Ruiyun, TIAN Wenbin, BAO Haibo, LI Duan, XIE Xinhao, ZHENG Yongjun, TAN Yu
    Smart Agriculture. 2023, 5(4): 16-32. https://doi.org/10.12133/j.smartag.SA202308006

    [Significance] As a research focus of future agricultural machinery, agricultural wheeled robots are developing in the direction of intelligence and multi-functionality. Advanced environmental perception technologies serve as a crucial foundation and key component for promoting intelligent operation of agricultural wheeled robots. However, given the unstructured and complex environments of agricultural field operations, the environmental information obtained through conventional 2D perception technologies is limited. Therefore, 3D environmental perception technologies are highlighted, as they provide additional dimensions of information, such as depth, thereby directly enhancing the precision and efficiency of unmanned agricultural machinery operation. This paper aims to provide a detailed analysis and summary of 3D environmental perception technologies, investigate the issues in the development of agricultural environmental perception technologies, and clarify the future key development directions of 3D environmental perception technologies for agricultural machinery, especially agricultural wheeled robots. [Progress] Firstly, an overview of the general status of wheeled robots was provided, considering their dominant role in environmental perception applications. It was concluded that multi-wheel robots, especially four-wheel robots, are more suitable for the agricultural environment owing to their favorable adaptability and robustness in various agricultural scenarios. In recent years, multi-wheel agricultural robots have gained widespread adoption and application globally. Further improvement of the universality, operational efficiency, and intelligence of agricultural wheeled robots is determined by the perception and control systems they employ. Therefore, agricultural wheeled robots equipped with novel 3D environmental perception technologies can obtain high-dimensional environmental information, which is significant for improving the accuracy of decision-making and control and enables effective ways to address the challenges of intelligent environmental perception. Secondly, the recent development status of 3D environmental perception technologies in the agricultural field was briefly reviewed, and the sensing equipment and corresponding key technologies were introduced. For the wheeled robots reported in agriculture, the applied environmental perception technologies, in terms of the primary sensor solutions employed, fall into three categories: LiDAR, vision sensors, and multi-sensor fusion-based solutions. Multi-line LiDAR performs better on many tasks when paired with point cloud processing algorithms. Compared with LiDAR, depth cameras such as binocular cameras, ToF cameras, and structured light cameras have been comprehensively investigated for application in agricultural robots; depth camera-based perception systems have shown superiority in cost and in providing abundant point cloud information. This study investigated and summarized the latest research on 3D environmental perception technologies employed by wheeled robots in agricultural machinery. In the reported application scenarios of agricultural environmental perception, state-of-the-art 3D environmental perception approaches have mainly focused on obstacle recognition, path recognition, and plant phenotyping.
    3D environmental perception technologies have the potential to enhance the ability of agricultural robot systems to understand and adapt to complex, unstructured agricultural environments. Furthermore, they can effectively address several challenges that traditional environmental perception technologies have struggled to overcome, such as partial sensor information loss, adverse weather conditions, and poor lighting. Current research results indicate that multi-sensor fusion-based 3D environmental perception systems outperform single-sensor-based systems, because fusion combines the advantages of various sensors while mitigating their individual shortcomings. [Conclusions and Prospects] The potential of 3D environmental perception technology for agricultural wheeled robots was discussed in light of the evolving demands of smart agriculture. Suggestions were made to improve sensor applicability, develop deep learning-based agricultural environmental perception technology, and explore intelligent, high-speed online multi-sensor fusion strategies. Currently, the sensors employed in agricultural wheeled robots may not fully meet practical requirements, and system cost remains a barrier to widespread deployment of 3D environmental perception technologies in agriculture. Therefore, there is an urgent need to enhance the agricultural applicability of 3D sensors and reduce production costs. Deep learning methods were highlighted as a powerful tool for processing information obtained from 3D environmental perception sensors, improving response speed and accuracy; however, the limited datasets in the agricultural field remain a key issue to be addressed. Additionally, multi-sensor fusion has been recognized for its potential to enhance perception performance in complex and changeable environments. As a result, 3D environmental perception technology based on multi-sensor fusion is clearly a future development direction for smart agriculture. To overcome challenges such as slow data processing, processing delays, and limited memory for storing data, it is essential to investigate effective fusion schemes to achieve online multi-source information fusion with greater intelligence and speed.
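    Since the review highlights depth cameras as a low-cost source of point cloud information, the sketch below shows the standard pinhole back-projection from a depth image to camera-frame 3D points; the camera intrinsics are illustrative placeholders, and this is not a method from any of the reviewed works.

    ```python
    import numpy as np

    def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
        """Back-project an (H, W) depth image in metres to an (N, 3) camera-frame point cloud
        using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]                   # drop invalid (zero-depth) pixels

    # Toy example with illustrative intrinsics for a 640x480 depth camera.
    depth = np.full((480, 640), 1.5)                # a flat surface 1.5 m in front of the camera
    cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
    print(cloud.shape)                              # (307200, 3)
    ```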

  • Topic--Technological Innovation and Sustainable Development of Smart Animal Husbandry
    ZHANG Yanqi, ZHOU Shuo, ZHANG Ning, CHAI Xiujuan, SUN Tan
    Smart Agriculture. 2024, 6(4): 53-63. https://doi.org/10.12133/j.smartag.SA202310001

    [Objective] Currently, pig farming facilities mainly rely on manual counting for tracking slaughtered and stocked pigs. This is not only time-consuming and labor-intensive, but also prone to counting errors due to pig movement and potential cheating. As breeding operations expand, periodic live-asset inventories put significant strain on human, material, and financial resources. Although methods based on electronic ear tags can assist in pig counting, these tags easily break and fall off in group housing environments. Most existing computer vision-based pig counting methods require capturing images from a top-down perspective, necessitating the installation of cameras above each hogpen or even the use of drones, resulting in high installation and maintenance costs. To address these challenges in the group pig counting task, a high-efficiency, low-cost pig counting method was proposed based on an improved instance segmentation algorithm and the WeChat public platform. [Methods] Firstly, a smartphone was used to collect pig image data in the breeding area from a human-view perspective, and each pig's outline in the images was annotated to establish a pig counting dataset. The training set contains 606 images and the test set contains 65 images. Secondly, an efficient global attention module was proposed by improving the convolutional block attention module (CBAM). The efficient global attention module first performs a dimension permutation operation on the input feature map to capture the interaction between its channel and spatial dimensions. The permuted features are aggregated using global average pooling (GAP), and one-dimensional convolution replaces the fully connected operation in CBAM, eliminating dimensionality reduction and significantly reducing the model's parameter count. This module was integrated into the YOLOv8 single-stage instance segmentation network to build the pig counting model YOLOv8x-Ours. By adding the efficient global attention module into each C2f layer of the YOLOv8 backbone network, the dimensional dependencies and feature information in the image can be extracted more effectively, thereby achieving high-accuracy pig counting. Lastly, with a focus on user experience and outreach, a pig counting WeChat mini program was developed based on the WeChat public platform and the Django web framework, and the counting model was deployed to count pigs using images captured by smartphones. [Results and Discussions] Compared with the existing methods Mask R-CNN, YOLACT (Real-time Instance Segmentation), PolarMask, SOLO, and YOLOv5x, the proposed pig counting model YOLOv8x-Ours exhibited superior accuracy and stability. Notably, YOLOv8x-Ours achieved the highest counting accuracy, with errors of fewer than 2 and 3 pigs on the test set; 93.8% of the test images had counting errors of fewer than 3 pigs. Compared with the two-stage instance segmentation algorithm Mask R-CNN and the YOLOv8x model using the CBAM attention mechanism, YOLOv8x-Ours showed performance improvements of 7.6% and 3%, respectively. Owing to the single-stage design and anchor-free architecture of YOLOv8, the processing time for a single image was only 64 ms, 1/8 that of Mask R-CNN. By embedding the model into the WeChat mini program platform, pig counting was conducted using smartphone images.
In cases where the model incorrectly detected pigs, users were given the option to click on the erroneous location in the result image to adjust the statistical outcomes, thereby enhancing the accuracy of pig counting. [Conclusions] The feasibility of deep learning technology in the task of pig counting was demonstrated. The proposed method eliminates the need for installing hardware equipment in the breeding area of the pig farm, enabling pig counting to be carried out effortlessly using just a smartphone. Users can promptly spot any errors in the counting results through image segmentation visualization and easily rectify any inaccuracies. This collaborative human-machine model not only reduces the need for extensive manpower but also guarantees the precision and user-friendliness of the counting outcomes.
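    The efficient global attention module described above replaces CBAM's fully connected layers with a one-dimensional convolution over globally pooled features, avoiding dimensionality reduction. The PyTorch sketch below renders that channel-attention step; the kernel size is an illustrative choice and the dimension-permutation interaction of the full module is omitted.

    ```python
    import torch
    import torch.nn as nn

    class EfficientChannelAttention(nn.Module):
        """Channel attention via global average pooling plus a 1D convolution
        (no fully connected layers, no channel reduction); an illustrative sketch only."""
        def __init__(self, kernel_size: int = 3):
            super().__init__()
            self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
            self.sigmoid = nn.Sigmoid()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            y = x.mean(dim=(2, 3))                       # global average pooling -> (b, c)
            y = self.conv(y.unsqueeze(1)).squeeze(1)     # local cross-channel interaction
            return x * self.sigmoid(y).view(b, c, 1, 1)  # reweight the feature map channels

    feat = torch.randn(2, 128, 52, 52)
    print(EfficientChannelAttention()(feat).shape)       # torch.Size([2, 128, 52, 52])
    ```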

  • Special Issue--Agricultural Information Perception and Models
    ZHANG Yuyu, BING Shuying, JI Yuanhao, YAN Beibei, XU Jinpu
    Smart Agriculture. 2024, 6(2): 118-127. https://doi.org/10.12133/j.smartag.SA202401005

    [Objective] The fresh cut rose industry has shown a positive growth trend in recent years and continues to develop. Considering that the current fresh cut rose grading process relies on simple manual grading, which results in low efficiency and accuracy, a new model named Flower-YOLOv8s was proposed for grading detection of fresh cut roses. [Methods] The flower head of a single rose against a uniform background was selected as the primary detection target, and fresh cut roses were categorized into four distinct grades: A, B, C, and D. These grades were determined based on factors such as color, size, and freshness, ensuring a comprehensive and objective grading system. A novel dataset containing 778 images, specifically tailored for rose fresh-cut flower grading and detection, was constructed. This dataset served as the foundation for the subsequent experiments and analysis. To further enhance the performance of the YOLOv8s model, two attention modules, the convolutional block attention module (CBAM) and the spatial attention module (SAM), were introduced separately for comparison experiments. These modules were integrated into the backbone network of the YOLOv8s model to enhance its ability to focus on salient features and suppress irrelevant information. Moreover, the SAM module was selected and optimized by reducing the number of convolution kernels, incorporating a depthwise separable convolution module, and reducing the number of input channels, improving the module's efficiency and helping to reduce the overall computational complexity of the model. The convolution layer (Conv) in the C2f module was replaced by depthwise separable convolution (DWConv), and this was combined with the Optimized-SAM introduced into the C2f structure, producing the Flower-YOLOv8s model. Precision, recall, and F1 score were used as evaluation indicators. [Results and Discussions] Ablation results showed that for the Flower-YOLOv8s model proposed in this study, namely YOLOv8s+DWConv+Optimized-SAM, the recall rate was 95.4%, 3.8% higher, and the average precision 0.2% higher, than for YOLOv8s with DWConv alone. When compared to the baseline YOLOv8s model, Flower-YOLOv8s exhibited a 2.1% increase in accuracy, reaching 97.4%, and mAP increased by 0.7%, demonstrating superior performance across the evaluation metrics and confirming the effectiveness of adding Optimized-SAM. Overall, the number of parameters of Flower-YOLOv8s was reduced by 2.26 M compared with the baseline YOLOv8s, and inference time was reduced from 15.6 to 5.7 ms. Therefore, the Flower-YOLOv8s model was superior to the baseline model in terms of accuracy, average precision, parameter count, detection time, and model size. The performance of the Flower-YOLOv8s network was compared with the two-stage detectors Fast R-CNN and Faster R-CNN and the one-stage detectors SSD, YOLOv3, YOLOv5s, and YOLOv8s to verify its superiority under the same conditions and on the same dataset. The average precision of the Flower-YOLOv8s model was 2.6%, 19.4%, 6.5%, 1.7%, 1.9% and 0.7% higher than that of Fast R-CNN, Faster R-CNN, SSD, YOLOv3, YOLOv5s, and YOLOv8s, respectively. Compared with YOLOv8s, which had a higher recall rate, Flower-YOLOv8s reduced model size, inference time, and parameter count by 4.5 MB, 9.9 ms, and 2.26 M, respectively.
Notably, the Flower-YOLOv8s model achieved these improvements while simultaneously reducing model parameters and computational complexity. [Conclusions] The Flower-YOLOv8s model not only demonstrated superior detection accuracy but also exhibited a reduction in model parameters and computational complexity. This lightweight yet powerful model is highly suitable for real-time applications, making it a promising candidate for flower grading and detection tasks in the agricultural and horticultural industries.
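    Two ingredients named above, depthwise separable convolution and a pared-down spatial attention module, are rendered below as a rough PyTorch sketch; channel counts and the single-convolution spatial gate are illustrative assumptions, not the paper's exact Optimized-SAM.

    ```python
    import torch
    import torch.nn as nn

    class DWConv(nn.Module):
        """Depthwise separable convolution: per-channel spatial conv followed by a 1x1 pointwise conv."""
        def __init__(self, in_ch: int, out_ch: int, k: int = 3):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch, bias=False)
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
            self.bn, self.act = nn.BatchNorm2d(out_ch), nn.SiLU()

        def forward(self, x):
            return self.act(self.bn(self.pointwise(self.depthwise(x))))

    class SimpleSpatialAttention(nn.Module):
        """Lightweight spatial attention: pool along channels, one small conv, sigmoid gate."""
        def __init__(self, k: int = 3):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, k, padding=k // 2, bias=False)

        def forward(self, x):
            avg = x.mean(dim=1, keepdim=True)
            mx, _ = x.max(dim=1, keepdim=True)
            return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

    block = nn.Sequential(DWConv(64, 128), SimpleSpatialAttention())
    print(block(torch.randn(1, 64, 40, 40)).shape)   # torch.Size([1, 128, 40, 40])
    ```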

  • Topic--Smart Agricultural Technology and Machinery in Hilly and Mountainous Areas
    ZHANG Jun, CHEN Yuyan, QIN Zhenyu, ZHANG Mengyao, ZHANG Jun
    Smart Agriculture. 2024, 6(3): 46-57. https://doi.org/10.12133/j.smartag.SA202312028

    [Objective] Accurate estimation of terraced field areas is crucial for addressing issues such as slope erosion control, water retention, soil conservation, and increasing food production, and extracting terraced field information from high-resolution remote sensing imagery is of significant importance for these purposes. However, as imaging sensor technologies continue to advance, traditional methods focused on shallow features may no longer be sufficient for precise and efficient extraction in complex terrains and environments. Deep learning techniques offer a promising solution for accurately extracting terraced field areas from high-resolution remote sensing imagery; with these algorithms, detailed terraced field characteristics can be identified and analyzed with a higher level of automation. The aim of this research is to explore a suitable deep learning algorithm for accurate terraced field area extraction from high-resolution remote sensing imagery. [Methods] Firstly, a terraced field dataset was created using high-resolution remote sensing images captured by the Gaofen-6 satellite during fallow periods. The dataset construction process involved data preprocessing, sample annotation, sample cropping, and dataset partitioning with training set augmentation. To ensure a comprehensive representation of terraced field morphologies, 14 typical regions were selected as training areas based on the topographical distribution characteristics of Yuanyang county. To address misclassifications near image edges caused by limited contextual information, a sliding window with a size of 256 pixels and a stride of 192 pixels in each direction was used to vary the positions of terraced fields in the images, as sketched after this entry. Additionally, geometric augmentation techniques were applied to both images and labels to enhance data diversity, resulting in a high-resolution terraced field remote sensing dataset. Secondly, an improved DeepLab v3+ model was proposed. In the encoder, the lightweight MobileNet v2 was used instead of Xception as the backbone network of the semantic segmentation model, and two shallow features from the 4th and 7th layers of the MobileNet v2 network were extracted to capture relevant information. To address the need for local details and global context simultaneously, a multi-scale feature fusion (MSFF) module was employed in place of the atrous spatial pyramid pooling (ASPP) module; the MSFF module uses a series of dilated convolutions with increasing dilation rates to counter information loss. Furthermore, a coordinate attention mechanism was applied to both shallow and deep features to enhance the network's understanding of targets. This design aimed to make the DeepLab v3+ model lightweight while maintaining segmentation accuracy, thus improving its efficiency for practical applications. [Results and Discussions] The research findings reveal the following key points: (1) The model trained using a combination of near-infrared, red, and green (NirRG) bands demonstrated the best overall performance, achieving precision, recall, F1-Score, and intersection over union (IoU) values of 90.11%, 90.22%, 90.17% and 82.10%, respectively. The classification results showed higher accuracy and fewer discrepancies, with an error relative to the reference area of only 12 hm2. (2) Spatial distribution patterns of terraced fields in Yuanyang county were identified through the deep learning model.
    The majority of terraced fields were found within the slope range of 8º to 25º, covering 84.97% of the total terraced area. Additionally, terraced fields were noticeably concentrated within the altitude range of 1 000 m to 2 000 m, accounting for 95.02% of the total terraced area. (3) A comparison with the original DeepLab v3+ network showed that the improved DeepLab v3+ model achieved gains in precision, recall, F1-Score, and IoU of 4.62%, 2.61%, 3.81% and 2.81%, respectively. Furthermore, the improved DeepLab v3+ outperformed UNet and the original DeepLab v3+ in terms of parameter count and floating-point operations: its parameter count was only 28.6% of UNet's and 19.5% of the original DeepLab v3+'s, while its floating-point operations were only 1/5 of those of UNet and DeepLab v3+. This not only improved computational efficiency but also made the enhanced model more suitable for resource-limited or computationally less powerful environments. The lightweighting of the DeepLab v3+ network led to improvements in accuracy and speed; however, the selection of the NirRG band combination during fallow periods significantly impacted the model's generalization ability. [Conclusions] The research findings highlight the significant contribution of the near-infrared (NIR) band in enhancing the model's ability to learn terraced field features. Comparing different band combinations, the NirRG combination yielded the highest overall recognition performance and precision metrics for terraced fields. In contrast to PSPNet, UNet, and the original DeepLab v3+, the proposed model showed superior accuracy and performance on the terraced field dataset, with noteworthy improvements in total parameter count, floating-point operations, and the number of epochs needed to reach optimal performance, outperforming UNet and DeepLab v3+. This study underscores the heightened accuracy of deep learning in identifying terraced fields from high-resolution remote sensing imagery, providing valuable insights for enhanced monitoring and management of terraced landscapes.
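    The sliding-window cropping step referenced above uses a 256-pixel window and a 192-pixel stride; the short Python sketch below shows one way to implement that tiling (array shapes and the edge-alignment behaviour are illustrative).

    ```python
    import numpy as np

    def sliding_window_tiles(image, window=256, stride=192):
        """Crop overlapping tiles from an (H, W, C) image so terraced fields appear at
        varied positions; the last row/column of tiles is aligned to the image border."""
        h, w = image.shape[:2]
        ys = list(range(0, max(h - window, 0) + 1, stride))
        xs = list(range(0, max(w - window, 0) + 1, stride))
        if ys[-1] != h - window:                 # ensure the bottom edge is covered
            ys.append(h - window)
        if xs[-1] != w - window:                 # ensure the right edge is covered
            xs.append(w - window)
        return [image[y:y + window, x:x + window] for y in ys for x in xs]

    tiles = sliding_window_tiles(np.zeros((1024, 1024, 3), dtype=np.uint8))
    print(len(tiles), tiles[0].shape)            # 25 (256, 256, 3)
    ```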

  • Special Issue--Agricultural Information Perception and Models
    SHEN Yanyan, ZHAO Yutao, CHEN Gengshen, LYU Zhengang, ZHAO Feng, YANG Wanneng, MENG Ran
    Smart Agriculture. 2024, 6(2): 28-39. https://doi.org/10.12133/j.smartag.SA202310016

    [Objective] In recent years, leaf diseases of maize have become markedly more severe, with a noticeable trend of mixed occurrence. This poses a serious threat to maize yield and quality. However, few studies combine the identification of different leaf disease types with severity classification, which cannot meet the needs of disease prevention and control when different diseases and severities occur together in actual maize fields. [Methods] A method was proposed for identifying the types of typical maize leaf diseases and classifying their severity using hyperspectral technology. Hyperspectral data for three maize leaf diseases, northern corn leaf blight (NCLB), southern corn leaf blight (SCLB) and southern corn rust (SCR), were obtained through greenhouse pathogen inoculation and natural inoculation. The spectral data were preprocessed by spectral standardization, SG filtering, sensitive band extraction, and vegetation index calculation to explore the spectral characteristics of the three maize leaf diseases. Then, the inverse frequency weighting method was used to balance the number of samples and reduce overfitting caused by sample imbalance. Relief-F and variable selection using random forests (VSURF) were employed to optimize the sensitive spectral features, including band features and vegetation index features, to construct disease type identification models based on the full course of disease development (all disease severities) and on individual disease severities using several representative machine learning approaches, demonstrating the effectiveness of the research method. Furthermore, severity classification models were also constructed for each individual maize leaf disease, namely NCLB, SCLB, and SCR severity classification models, aiming to achieve full-process recognition and severity classification for the different leaf diseases. Overall accuracy (OA) and Macro F1 were used to evaluate model accuracy. [Results and Discussions] The results showed that significant spectral changes for the three maize leaf diseases were concentrated mainly in the visible (550-680 nm), red edge (740-760 nm), near-infrared (760-1 000 nm) and shortwave infrared (1 300-1 800 nm) bands. Disease-specific spectral features, optimized based on the diseases' spectral response patterns, effectively identified the disease species and classified their severity. Moreover, vegetation index features were more effective than sensitive band features in identifying disease-specific information. This was primarily because the selected hyperspectral sensitive bands contained noise and redundant information, whereas vegetation indices, by integrating relevant spectral signals through band calculations, can reduce the influence of background and atmospheric noise to a certain extent, yielding higher model precision. Among the machine learning algorithms tested, the support vector machine (SVM) exhibited better robustness than random forest (RF) and decision tree (DT). For the full course of disease development, the optimal overall accuracy (OA) of the disease classification model constructed by SVM based on vegetation indices reached 77.51%, with a Macro F1 of 0.77, representing a 28.75% increase in OA and a 0.30 increase in Macro F1 compared to the model based on sensitive bands.
    Additionally, the accuracy of disease classification models built for a single disease severity increased with severity. The accuracy of disease classification at the early stage of disease development (OA=70.31%) closely approached that for the full course of disease development (OA=77.51%). In the moderate severity stage, the optimal classification accuracy (OA=80.00%) surpassed the optimum for the full course of disease development, and under severe severity the optimal classification accuracy reached 95.06%, with a Macro F1 of 0.94. This heightened accuracy at the severe stage can be attributed to significant changes in pigment content, water content, and cell structure of the diseased leaves, which intensify the spectral response of each disease and enhance the differentiation between diseases. For disease severity classification, the optimal accuracy of the three models for maize leaf disease severity all exceeded 70%. Among the three severity classification results, the NCLB model performed best: using SVM with the optimal vegetation index features, it achieved an OA of 86.25% and a Macro F1 of 0.85. In comparison, the accuracy of the SCLB severity classification model (OA=70.35%, Macro F1=0.70) and the SCR severity classification model (OA=71.39%, Macro F1=0.69) were lower than that of NCLB. [Conclusions] These results demonstrate the potential to effectively identify and classify the types and severity of common maize leaf diseases using hyperspectral data. This lays the groundwork for further research and provides a theoretical basis for large-scale crop disease monitoring, contributing to precision prevention and control and promoting green agriculture.
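    The classification pipeline described above couples inverse-frequency class weighting with an SVM over vegetation-index features. A scikit-learn sketch of that step follows; the feature matrix, labels, and kernel choice are placeholders, with class_weight="balanced" standing in for the inverse frequency weighting named in the abstract.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score, f1_score

    # Placeholder data: rows = leaf samples, columns = vegetation-index features,
    # labels = disease type (0: NCLB, 1: SCLB, 2: SCR), deliberately imbalanced.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(300, 12))
    y = rng.choice([0, 1, 2], size=300, p=[0.6, 0.25, 0.15])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    # class_weight="balanced" applies inverse-frequency weights to counter sample imbalance.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print("OA:", accuracy_score(y_te, pred), "Macro F1:", f1_score(y_te, pred, average="macro"))
    ```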

  • Topic--Smart Agricultural Technology and Machinery in Hilly and Mountainous Areas
    QI Jiangtao, CHENG Panting, GAO Fangfang, GUO Li, ZHANG Ruirui
    Smart Agriculture. 2024, 6(3): 17-33. https://doi.org/10.12133/j.smartag.SA202404003

    [Significance] Soil is the fundamental pillar of agricultural production, and its quality is intrinsically linked to the efficiency and sustainability of farming practices. Historically, intensive cultivation and soil erosion have led to a marked deterioration of some arable land, characterized by a sharp decrease in soil organic matter, diminished fertility, and a decline in the soil's structural integrity and ecological functions. Within the strategic framework of safeguarding national food security and advancing smart and precision agriculture, agricultural modernization continues apace, intensifying the need for meticulous soil quality management. Consequently, there is an urgent need for the rapid acquisition of soil physical and chemical parameters. Interdisciplinary scholars have delved into soil monitoring research, achieving notable advances that promise to change how soil resources are understood and managed. [Progress] Using the Web of Science platform, a comprehensive literature search was conducted on the topic of "soil", refined with supplementary keywords such as "electrochemistry", "spectroscopy", "electromagnetic", "ground-penetrating radar", and "satellite". The resulting literature was screened, synthesized, and imported into the CiteSpace visualization tool, yielding a keyword emergence map that delineates the trajectory of research in soil physical and chemical parameter detection technology. Analysis of the keyword emergence map reveals a paradigm shift in the acquisition of soil physical and chemical parameters, transitioning from conventional indoor chemical and spectrometric analyses to outdoor, real-time detection methods. Notably, soil sensors integrated into drones and satellites have garnered considerable interest, and emerging monitoring technologies, including biosensing and terahertz spectroscopy, have also appeared in recent years. Drawing from this analysis, the prevailing technologies for acquiring soil physical and chemical parameter information in agricultural fields are categorized and summarized as follows: 1. Rapid laboratory testing techniques: primarily based on electrochemical and spectrometric analysis, these methods offer the dual benefits of time and resource efficiency alongside high precision; 2. Rapid near-ground sensing techniques: leveraging electromagnetic induction, ground-penetrating radar, and various spectral sensors (multispectral, hyperspectral, and thermal infrared), these techniques are characterized by high detection accuracy and swift operation; 3. Satellite remote sensing techniques: employing direct inversion, indirect inversion, and combined analysis methods, these approaches are prized for their efficiency and extensive coverage; 4. Innovative rapid acquisition technologies: stemming from interdisciplinary research, these include biosensing, environmental magnetism, terahertz spectroscopy, and gamma spectroscopy, each offering novel avenues for soil parameter detection. An in-depth examination and synthesis of the principles, applications, merits, and limitations of each technology is provided. Moreover, a forward-looking perspective on the future trajectory of soil physical and chemical parameter acquisition technology is offered, taking into account current research trends and hotspots.
[Conclusions and Prospects] Current advancements in the technology for rapidly acquiring soil physical and chemical parameters in agricultural fields have been commendable, yet certain challenges persist. The development of near-ground monitoring sensors is constrained by cost, and their reliability, adaptability, and specialization require enhancement to effectively contend with the intricate and varied conditions of farmland environments. Additionally, remote sensing inversion techniques are confronted with existing limitations in data acquisition, processing, and application. To further develop soil physical and chemical parameter acquisition technology and foster the evolution of smart agriculture, future research could beneficially delve into the following four areas: Designing portable, intelligent, and cost-effective near-ground soil information acquisition systems and equipment to facilitate rapid on-site soil information detection; Enhancing the performance of low-altitude soil information acquisition platforms and refining the methods for data interpretation to ensure more accurate insights; Integrating multifactorial considerations to construct robust satellite remote sensing inversion models, leveraging diverse and open cloud computing platforms for in-depth data analysis and mining; Engaging in thorough research on the fusion of multi-source data in the acquisition of soil physical and chemical parameter information, developing soil information sensing algorithms and models with strong generalizability and high reliability to achieve rapid, precise, and intelligent acquisition of soil parameters.
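
As a purely illustrative sketch of the "indirect inversion" idea mentioned in the review (fitting an empirical model between laboratory-measured soil properties and spectral reflectance, then applying it pixel by pixel), the following Python snippet uses synthetic data and a plain linear regression; the band set, sample values, and model choice are assumptions, not methods reported by the cited studies.

```python
# Hedged sketch of an empirical (indirect) inversion: regress a measured soil
# property on band reflectance, then map it over an image. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
reflectance = rng.uniform(0.05, 0.45, size=(120, 4))          # [blue, green, red, NIR]
som = 40 - 55 * reflectance[:, 2] + 20 * reflectance[:, 3] + rng.normal(0, 1.5, 120)

X_train, X_test, y_train, y_test = train_test_split(reflectance, som, test_size=0.3, random_state=42)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out samples:", round(model.score(X_test, y_test), 3))

# Apply the fitted model to every pixel of a (rows, cols, bands) reflectance image
image = rng.uniform(0.05, 0.45, size=(100, 100, 4))
som_map = model.predict(image.reshape(-1, 4)).reshape(100, 100)
```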

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    LI Zhengkai, YU Jiahui, PAN Shijia, JIA Zefeng, NIU Zijie
    Smart Agriculture. 2023, 5(4): 92-104. https://doi.org/10.12133/j.smartag.SA202308015

    [Objective] The canopies of kiwifruit trees overlap severely, resulting in a complex canopy structure and rendering it impossible to extract their skeletons or predict their canopies using conventional methods. The objective of this research is to propose a crown segmentation method that integrates skeleton information by optimizing image processing algorithms and developing a new scheme for fusing winter and summer information. In cases where fruit trees are densely distributed, achieving accurate segmentation of fruit tree canopies in orchard drone images can efficiently and cost-effectively obtain canopy information, providing a foundation for determining summer kiwifruit growth size, spatial distribution, and other data. Furthermore, it facilitates the automation and intelligent development of orchard management. [Methods] Kiwifruit trees aged 4 to 8 years were chosen, and winter and summer remote sensing images captured by unmanned aerial vehicles (UAVs) were obtained as the primary analysis visuals. To tackle the challenge of branch extraction in winter remote sensing images, a convolutional attention mechanism was integrated into the PSP-Net network, along with a joint attention loss function. This was designed to boost the network's focus on branches, enhance the recognition and targeting capabilities of the target area, and ultimately improve the accuracy of semantic segmentation for fruit tree branches. For the generation of the skeleton, digital image processing technology was employed for screening. The discrete information of tree branches was transformed into the skeleton data of a single fruit tree using growth seed points. Subsequently, the semantic segmentation results were optimized through mathematical morphology calculations, enabling smooth connection of the branches. In response to the issue of single tree canopy segmentation in summer, the growth characteristics of kiwifruit trees were taken into account, utilizing the outward expansion of branches growing from the trunk. The growth of tree branches was simulated by using morphological dilation to predict the summer canopy. The canopy prediction results were analyzed under different operators and parameters, and the appropriate dilation operators along with their corresponding operation lengths were selected. The skeleton of a single tree was extracted from summer images. By combining deep learning with mathematical morphology methods through the above steps, the optimized single tree skeleton was used as a prior condition to achieve canopy segmentation. [Results and Discussions] In comparison to traditional methods, the accuracy of extracting kiwifruit tree canopy information at each stage of the process was significantly enhanced. The enhanced PSP-Net was evaluated using three primary evaluation metrics: pixel accuracy (PA), mean intersection over union (MIoU), and weighted F1 score (WF1). The PA, MIoU and WF1 of the improved PSP-Net were 95.84%, 95.76% and 95.69% respectively, which were increased by 12.30%, 22.22% and 17.96% compared with U-Net, and 21.39%, 21.51% and 18.12% compared with traditional PSP-Net, respectively. By implementing this approach, the skeleton extraction function for a single fruit tree was realized, with the predicted PA of the canopy surpassing 95%, an MIoU value of 95.76%, and a WF1 of canopy segmentation at approximately 94.07%. The average segmentation precision of the approach surpassed 95%, noticeably surpassing the original skeleton's 81.5%. 
The average conformity between the predicted skeleton and the actual summer skeleton stood at 87%, showcasing the method's strong prediction performance. Compared with the original skeleton, the PA, MIoU and WF1 of the optimized skeleton increased by 13.2%, 10.9% and 18.4%, respectively. The continuity of the predicted skeleton was improved, resulting in a significant improvement of the canopy segmentation indices. The solution effectively addresses the issue of semantic segmentation fracture, and a single tree canopy segmentation scheme that incorporates skeleton information can effectively tackle the problem of single fruit tree canopy segmentation in complex field environments. This provides a novel technical solution for efficient and low-cost orchard fine management. [Conclusions] A method for extracting individual kiwifruit plant skeletons and predicting canopies based on skeleton information was proposed. This demonstrates the enormous potential of drone remote sensing images for fine orchard management from the perspectives of method innovation, data collection, and problem solving. Compared with manual statistics, the overall efficiency and accuracy of kiwifruit skeleton extraction and crown prediction have significantly improved, effectively solving the problem of instance segmentation in the crown segmentation process. The issue of semantic segmentation fragmentation has been effectively addressed, resulting in the development of a single tree canopy segmentation method that incorporates skeleton information. This approach can effectively tackle the challenges of single fruit tree canopy segmentation in complex field environments, thereby offering a novel technical solution for efficient and cost-effective orchard fine management. While the research is primarily centered on kiwifruit trees, the methodology possesses strong universality. With appropriate modifications, it can be utilized to monitor canopy changes in other fruit trees, thereby showcasing vast application potential.
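
To make the canopy-prediction step concrete, the sketch below applies morphological dilation to a binary skeleton mask with OpenCV, which is the general operation the abstract describes; the elliptical kernel, its size, and the iteration count are illustrative assumptions, since the study selects operators and operation lengths experimentally.

```python
# Minimal sketch: expand a single-tree skeleton mask by morphological dilation
# to approximate the summer canopy, then smooth the result with a closing.
import cv2
import numpy as np

skeleton = np.zeros((512, 512), dtype=np.uint8)     # placeholder skeleton mask
cv2.line(skeleton, (256, 400), (256, 120), 255, 3)  # toy "trunk"
cv2.line(skeleton, (256, 220), (360, 160), 255, 3)  # toy "branch"

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))   # assumed operator
predicted_canopy = cv2.dilate(skeleton, kernel, iterations=4)     # assumed length
predicted_canopy = cv2.morphologyEx(predicted_canopy, cv2.MORPH_CLOSE, kernel)
cv2.imwrite("predicted_canopy.png", predicted_canopy)
```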

  • Special Issue--Agricultural Information Perception and Models
    WU Xiaoyan, GUO Wei, ZHU Yiping, ZHU Huaji, WU Huarui
    Smart Agriculture. 2024, 6(2): 107-117. https://doi.org/10.12133/j.smartag.SA202401008

    [Objective] Currently, the lack of computerized systems to monitor the quality of cabbage transplants is a notable shortcoming in the agricultural industry, where transplanting operations play a crucial role in determining the overall yield and quality of the crop. To address this problem, a lightweight and efficient algorithm was developed to monitor the status of cabbage transplants in a natural environment. [Methods] First, the cabbage image dataset was established: cabbage images in the natural environment were collected, the collected image data were filtered, and the transplanting status of the cabbage was defined as normal seedling (upright and intact seedling), buried seedling (whose stems and leaves were buried by the soil) and exposed seedling (whose roots were exposed). The dataset was manually categorized and labelled using a graphical image annotation tool (LabelImg) so that corresponding XML files could be generated. The dataset was then pre-processed with data augmentation methods such as flipping, cropping, blurring and random brightness adjustment to eliminate the scale and position differences between the cabbages in the test and training sets and to reduce data imbalance. Then, a cabbage transplantation state detection model based on YOLOv8s (You Only Look Once Version 8s) was designed. To address the problem that light and soil have a large influence on the identification of the transplantation state of cabbage in the natural environment, a multi-scale attention mechanism was embedded to increase the algorithm's attention to the target region and improve the network's attention to target features at different scales, thereby improving the model's detection efficiency and target recognition accuracy and reducing the missed-detection rate. By combining with deformable convolution, more useful target information was captured to improve the model's target recognition and convergence, and the additional complexity introduced by the C3-layer convolution was reduced, further lowering overall model complexity. Because the localization performance of the algorithm was unsatisfactory, the focal extended intersection over union loss (Focal-EIoU Loss) was introduced to solve the problem of severe oscillation of the loss value caused by low-quality samples: the influence weight of high-quality samples on the loss value was increased while the influence of low-quality samples was suppressed, so as to improve the convergence speed and localization accuracy of the algorithm. [Results and Discussions] Eventually, the algorithm was put through a stringent testing phase, yielding a remarkable recognition accuracy of 96.2% for the task of cabbage transplantation state detection. This was an improvement of 2.8% over the widely used YOLOv8s. Moreover, when benchmarked against other prominent target detection models, the algorithm emerged as a clear winner. It showcased a notable enhancement of 3% and 8.9% in detection performance compared to YOLOv3-tiny. Simultaneously, it also managed to achieve a 3.7% increase in the recall rate, a metric that measures the proportion of actual targets the algorithm correctly identifies. On a comparative note, the algorithm outperformed YOLOv5 in terms of recall rate by 1.1%, 2% and 1.5%, respectively. 
When pitted against the robust faster region-based convolutional neural network (Faster R-CNN), the algorithm demonstrated a significant boost in recall rate by 20.8% and 11.4%, resulting in an overall improvement of 13%. A similar trend was observed when the algorithm was compared to the single shot multibox detector (SSD) model, with a notable 9.4% and 6.1% improvement in recall rate. The final experimental results show that when the enhanced model was compared with YOLOv7-tiny, the recognition accuracy was increased by 3%, and the recall rate was increased by 3.5%. These results validated the superiority of the algorithm in terms of accuracy and localization ability within the target area. The algorithm effectively suppresses interfering factors such as soil and background impurities, thereby enhancing its performance and making it an ideal choice for tasks such as cabbage transplantation state recognition. [Conclusions] The experimental results show that the proposed cabbage transplantation state detection method can meet the accuracy and real-time requirements for the identification of cabbage transplantation state, and the detection accuracy and localization accuracy of the improved model perform better when the target is small and there are weeds and other interferences in the background. Therefore, the method proposed in this study can improve the efficiency of cabbage transplantation quality measurement, reduce time and labor, and improve the automation of field transplantation quality surveys.
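
The pre-processing step described in the Methods (flipping, cropping, blurring, random brightness) can be sketched as below; the probabilities, crop ratio, and brightness range are illustrative assumptions rather than the settings used in the study, and the sample image is synthetic.

```python
# Hedged sketch of the described augmentation pipeline using OpenCV/NumPy.
import cv2
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    if rng.random() < 0.5:                            # random horizontal flip
        image = cv2.flip(image, 1)
    h, w = image.shape[:2]                            # random 90% crop, resized back
    ch, cw = int(0.9 * h), int(0.9 * w)
    y0, x0 = rng.integers(0, h - ch), rng.integers(0, w - cw)
    image = cv2.resize(image[y0:y0 + ch, x0:x0 + cw], (w, h))
    if rng.random() < 0.3:                            # random Gaussian blur
        image = cv2.GaussianBlur(image, (5, 5), 0)
    factor = rng.uniform(0.7, 1.3)                    # random brightness scaling
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

sample = (rng.random((480, 640, 3)) * 255).astype(np.uint8)   # stand-in field image
augmented = augment(sample)
```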

  • Topic--Smart Agricultural Technology and Machinery in Hilly and Mountainous Areas
    LI Hao, DU Yuqiu, XIAO Xingzhu, CHEN Yanxi
    Smart Agriculture. 2024, 6(3): 34-45. https://doi.org/10.12133/j.smartag.SA202308002

    [Objective] To fully utilize and protect farmland and lay a solid foundation for the sustainable use of land, it is particularly important to obtain real-time and precise information regarding farmland area, distribution, and other factors. Leveraging remote sensing technology to obtain farmland data can meet the requirements of large-scale coverage and timeliness. However, the current research and application of deep learning methods in remote sensing for cultivated land identification still require further improvement in terms of depth and accuracy. The objective of this study is to investigate the potential application of deep learning methods in remote sensing for identifying cultivated land in the hilly areas of Southwest China, to provide insights for enhancing agricultural land utilization and regulation, and for harmonizing the relationship between cultivated land and the economy and ecology. [Methods] Santai county, Mianyang city, Sichuan province, China (30°42'34"~31°26'35"N, 104°43'04"~105°18'13"E) was selected as the study area. High-resolution imagery from two scenes captured by the Gaofen-6 (GF-6) satellite served as the primary image data source. Additionally, 30-meter resolution DEM data from the United States National Aeronautics and Space Administration (NASA) in 2020 was utilized. A land cover data product, SinoLC-1, was also incorporated for comparative evaluation of the accuracy of the various extraction methods' results. Four deep learning models, namely UNet, PSPNet, DeepLabV3+, and UNet++, were utilized for remote sensing identification of cultivated land. The study also analyzed the identification accuracy of cultivated land in high-resolution satellite images by comparing the results of the random forest (RF) algorithm with those of the deep learning models. A validation dataset was constructed by randomly generating 1 000 vector validation points within the research area. Concurrently, Google Earth satellite images with a resolution of 0.3 m were used for manual visual interpretation to determine the land cover type of the pixels where the validation points are located. The identification results of each model were compared using a confusion matrix to compute five accuracy evaluation metrics: overall accuracy (OA), intersection over union (IoU), mean intersection over union (MIoU), F1-Score, and Kappa coefficient, to assess the cultivated land identification accuracy of the different models and data products. [Results and Discussions] The deep learning models displayed significant advantages in the accuracy evaluation metrics, surpassing the performance of traditional machine learning approaches like RF and the latest land cover product, SinoLC-1 Landcover. Among the models assessed, the UNet++ model performed the best; its F1-Score, IoU, MIoU, OA, and Kappa coefficient values were 0.92, 85.93%, 81.93%, 90.60%, and 0.80, respectively. The DeepLabV3+, UNet, and PSPNet methods followed. These performance metrics underscored the superior accuracy of the UNet++ model in precisely identifying and segmenting cultivated land, with a remarkable increase in accuracy of nearly 20% over the machine learning methods and 50% over the land cover product. Four typical areas of town, water body, forest land and contiguous cultivated land were selected to visually compare the cultivated land identification results. 
It could be observed that the deep learning models generally exhibited distribution patterns consistent with the satellite imagery, accurately delineating the boundaries of cultivated land and demonstrating overall satisfactory performance. However, due to the complex features in remote sensing images, the deep learning models still encountered certain challenges of omission and misclassification in extracting cultivated land. Among them, the UNet++ model showed extraction results closest to the ground truth overall and exhibited advantages in terms of completeness of cultivated land extraction, discrimination between cultivated land and other land classes, and boundary extraction compared to the other models. Using the UNet++ model with the highest recognition accuracy, two types of inputs constructed with different features—solely spectral features, and spectral combined with terrain features—were utilized for cultivated land extraction. Based on the three metrics of IoU, OA, and Kappa, the model incorporating both spectral and terrain features showed improvements of 0.98%, 1.10%, and 0.01% compared to the model using only spectral features. This indicated that fusing spectral and terrain features can achieve information complementarity, further enhancing the identification effectiveness of cultivated land. [Conclusions] This study examined the practicality and reliability of automatic cultivated land extraction using four different deep learning models, based on high-resolution GF-6 satellite imagery of Santai county in China. Based on the cultivated land extraction results in Santai county and the differences in network structures among the four deep learning models, it was found that the UNet++ model, built on UNet, can effectively improve the accuracy of cultivated land extraction by introducing nested skip connections. Overall, this study demonstrates the effectiveness and practical value of deep learning methods in obtaining accurate farmland information from high-resolution remote sensing imagery.
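
The accuracy assessment described above can be reproduced mechanically from a binary (cultivated vs. non-cultivated) confusion matrix; the snippet below shows the computation of OA, IoU, MIoU, F1, and Kappa, with randomly generated labels standing in for the 1 000 visually interpreted validation points.

```python
# Sketch of the five evaluation metrics from a 2x2 confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score, f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                          # 1 = cultivated land
y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)   # ~90% agreement (synthetic)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()       # rows: truth, cols: prediction
oa = (tp + tn) / (tn + fp + fn + tp)
iou_cultivated = tp / (tp + fp + fn)
iou_background = tn / (tn + fp + fn)
miou = (iou_cultivated + iou_background) / 2
f1 = f1_score(y_true, y_pred)
kappa = cohen_kappa_score(y_true, y_pred)
print(f"OA={oa:.3f} IoU={iou_cultivated:.3f} MIoU={miou:.3f} F1={f1:.3f} Kappa={kappa:.3f}")
```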

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    LU Bang, DONG Wanjing, DING Youchun, SUN Yang, LI Haopeng, ZHANG Chaoyu
    Smart Agriculture. 2023, 5(4): 33-44. https://doi.org/10.12133/j.smartag.SA202310004

    [Objective] Unmanned seeding of rapeseed is an important link in constructing unmanned rapeseed farms. Aiming to solve the problems of cumbersome manual collection of boundary information for small and medium-sized fields in southern China, the low efficiency of headland turning of autonomous tractors, and the large unsown area left at the turning points, this study proposes building an unmanned rapeseed seeding operation system based on cloud-terminal high-precision maps to improve the efficiency of the turning operation and the coverage of the operation. [Methods] The system was mainly divided into two parts: the unmanned seeding control cloud platform for oilseed rape, mainly composed of a path planning module, an operation monitoring module and a real-time control module; and the navigation and control platform for the rapeseed direct seeding unit, mainly composed of a Case TM1404 tractor, an intelligent seeding and fertilizing machine, an angle sensor, a high-precision Beidou positioning system, an electric steering wheel, a navigation control terminal and an on-board controller terminal. The process of constructing the high-precision map was as follows: determining the operating field and laying the ground control points; collecting the position data of the ground control points and the orthophoto data from the unmanned aerial vehicle (UAV); processing the image data and constructing the complete map; and slicing the map, correcting the deviation and transmitting it to the webpage. The field boundary information was obtained through the high-precision map. The equal spacing reduction algorithm and the scanning line filling algorithm were adopted, and the spiral seeding operation path outside the shuttle rows was automatically generated. According to the tractor geometry and kinematics model and the distance between the tractor position and the field boundary, the specific parameters of the one-back-two-cut turning model were calculated, and based on the agronomic requirements of rapeseed sowing, the one-back-two-cut turn operation control strategy was designed to realize the rapeseed direct seeding unit's sowing operation in the omitted areas at the field edges and corners. The tests included a map accuracy test, an operation area simulation test and an unmanned seeding operation field test. For the map accuracy test, the test field at the edge of Lake Yezhi of Huazhong Agricultural University was selected as the test site, where high-precision maps were constructed; the image and position (POS) data collected by the UAV were processed, synthesized, and sliced, and then corrected according to the actual coordinates of the correction points and the coordinates of the image. Three rectangular fields of different sizes were selected for the operation area simulation test to compare the operation area and coverage rate of three operation modes: set row, shuttle row, and shuttle row with an outer spiral. The Case TM1404 tractor equipped with an intelligent seeding and fertilizer application machine was used as the test platform for the unmanned seeding operation test, and data such as tracking error and operation speed were recorded in real time. 
During the flowering period of the rapeseed, a series of color images of the operation fields was obtained by drone aerial photography; the color images were stitched together, and the seedling and non-seedling areas were then mapped using surveying and mapping software. [Results and Discussions] The results of the map accuracy test showed that the maximum error of the high-precision map at the ground verification points was 3.23 cm. The results of the operation area simulation test showed that the full-coverage path combining shuttle rows with an outer spiral reduced the missed-area rate by 18.58%-26.01% compared with the shuttle-row and set-row paths. The results of the unmanned seeding field test showed that the average speed of the unmanned seeding operation was 1.46 m/s, the maximum lateral deviation was 7.94 cm, and the maximum average absolute deviation was 1.85 cm. The field test results showed that the measured field area was 1 018.61 m2, the total area without growing oilseed rape was 69.63 m2, the operating area was 948.98 m2, and the operating coverage rate was 93.16%. [Conclusions] The effectiveness and feasibility of the constructed unmanned seeding operation system for rapeseed were demonstrated. This study can provide a technical reference for unmanned seeding of rapeseed in small and medium-sized fields in southern China. In the future, the unmanned seeding operation mode of rapeseed will be explored in irregular fields to further improve the applicability of the system.
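
The shuttle-row portion of the planned path can be illustrated with a very small scanning-line style sketch: parallel swaths spaced by the implement working width are laid across a rectangular field and connected in alternating directions. Field size and working width below are assumptions, and the sketch deliberately omits the outer spiral pass and the one-back-two-cut turning strategy described in the abstract.

```python
# Hedged sketch of shuttle-row (boustrophedon) coverage path generation.
import numpy as np

def shuttle_rows(width_m, length_m, working_width_m):
    """Return ordered (x, y) waypoints covering a rectangular field."""
    xs = np.arange(working_width_m / 2, width_m, working_width_m)  # swath centre lines
    waypoints = []
    for i, x in enumerate(xs):
        ys = (0.0, length_m) if i % 2 == 0 else (length_m, 0.0)    # alternate direction
        waypoints.append((float(x), ys[0]))
        waypoints.append((float(x), ys[1]))
    return waypoints

path = shuttle_rows(width_m=40.0, length_m=80.0, working_width_m=2.3)
print(len(path), "waypoints; first three:", path[:3])
```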

  • Overview Article
    YANG Yinsheng, WEI Xin
    Smart Agriculture. 2023, 5(4): 150-159. https://doi.org/10.12133/j.smartag.SA202304008

    [Significance] With the escalating global climate change and ecological pollution issues, the "dual carbon" target of Carbon Peak and Carbon Neutrality has been incorporated into various sectors of China's social development. To ensure the green and sustainable development of agriculture, it is imperative to minimize energy consumption and reduce pollution emissions at every stage of agricultural mechanization, meet the diversified needs of agricultural machinery and equipment in the era of intelligent information, and develop low-carbon agricultural mechanization. The development of low-carbon agricultural mechanization is not only an important part of the transformation and upgrading of agricultural mechanization in China but also an objective requirement for the sustainable development of agriculture under the "dual carbon" target. [Progress] The connotation and objectives of low-carbon agricultural mechanization are clarified, and its development logic is expounded from three dimensions: theoretical, practical, and systemic. The "triple-win" of life, production, and ecology is proposed as an important criterion, from a theoretical perspective, for judging whether the functions of a low-carbon agricultural mechanization system are realized. The necessity and urgency of low-carbon agricultural mechanization development are revealed from a practical perspective. The "human-machine-environment" system of low-carbon agricultural mechanization development is analyzed, and the principles and feasibility of the coordinated development of low-carbon agricultural mechanization are explained from a systemic perspective. Furthermore, the deep-rooted reasons affecting the development of low-carbon agricultural mechanization are analyzed from six aspects: factor conditions, demand conditions, related and supporting industries, production entities, government, and opportunities. [Conclusion and Prospects] Four approaches are proposed for realizing low-carbon agricultural mechanization development: (1) Encouraging enterprises to implement agricultural machinery ecological design and green manufacturing throughout the life cycle through key and core technology research, government policies, and financial support; (2) Guiding agricultural entities to implement clean production operations in agricultural mechanization, including but not limited to innovative models of intensive agricultural land, exploration and promotion of new models of clean production in agricultural mechanization, and the construction of a carbon emission measurement system for agricultural low-carbonization; (3) Strengthening the guidance and implementation of the concept of socialized services for low-carbon agricultural machinery by government departments, constructing and improving an "8S" system of agricultural machinery operation services mainly consisting of Sale, Spare part, Service, Survey, Show, School, Service, and Scrap, to achieve the long-term development of dematerialized agricultural machinery socialized services and a green shared operation system; (4) Starting from concept guidance, policy promotion, and financial support, comprehensively advancing the process of low-carbon disposal and green remanufacturing of retired and waste agricultural machinery by government departments.

  • Special Issue--Agricultural Information Perception and Models
    XU Ruifeng, WANG Yaohua, DING Wenyong, YU Junqi, YAN Maocang, CHEN Chen
    Smart Agriculture. 2024, 6(2): 62-71. https://doi.org/10.12133/j.smartag.SA201311014

    [Objective] In recent years, there has been a steady increase in the occurrence and fatality rates of shrimp diseases, causing substantial impacts on shrimp aquaculture. These diseases are marked by their swift onset, high infectivity, complex control requirements, and elevated mortality rates. With the continuous growth of factory-based shrimp farming, traditional manual detection approaches are no longer able to keep pace with current requirements. Hence, there is an urgent necessity for an automated solution to identify shrimp diseases. The main goal of this research is to create a cost-effective inspection method using computer vision that achieves a harmonious balance between cost efficiency and detection accuracy. The improved YOLOv8 (You Only Look Once) network and multiple features were employed to detect shrimp diseases. [Methods] To address the issue of surface foam interference, the improved YOLOv8 network was applied to detect and extract shrimps near the surface as the primary focus of the image. This target detection approach accurately recognizes objects of interest in the image, determining their category and location, with extraction results surpassing those of threshold segmentation. Taking into account the cost limitations of platform computing power in practical production settings, the network was optimized by reducing parameters and computations, thereby improving detection speed and deployment efficiency. Additionally, the Farneback optical flow method and the gray level co-occurrence matrix (GLCM) were employed to capture the movement and image texture features of shrimp video clips. A dataset was created using these extracted feature parameters, and a support vector machine (SVM) classifier was trained to categorize the feature parameters of the video clips, facilitating the detection of shrimp health. [Results and Discussions] The improved YOLOv8 in this study effectively enhanced detection accuracy without increasing the number of parameters and FLOPs. According to the results of the ablation experiment, replacing the backbone with the lightweight FasterNet backbone network significantly reduced the number of parameters and computations, albeit at the cost of decreased accuracy. However, after integrating the efficient multi-scale attention (EMA) on the neck, the mAP0.5 increased by 0.3% compared to YOLOv8s, while mAP0.95 only decreased by 2.1%. Furthermore, the parameter count decreased by 45%, and FLOPs decreased by 42%. The improved YOLOv8 exhibits remarkable performance, ranking second only to YOLOv7 in terms of mAP0.5 and mAP0.95, with respective reductions of 0.4% and 0.6%. Additionally, it possesses a significantly reduced parameter count and FLOPs compared to YOLOv7, matching those of YOLOv5. Despite the YOLOv7-tiny and YOLOv8-VanillaNet models having fewer parameters and FLOPs, their accuracy lags behind that of the improved YOLOv8. The mAP0.5 and mAP0.95 of YOLOv7-tiny and YOLOv8-VanillaNet are 22.4%, 36.2%, 2.3%, and 4.7% lower than those of the improved YOLOv8, respectively. Using a support vector machine (SVM) trained on a comprehensive dataset incorporating multiple features, the classifier achieved an accuracy rate of 97.625%. A total of 150 normal clips and 150 diseased clips were randomly selected as test samples, and the classifier exhibited a detection accuracy of 89% on this dataset of 300 samples. 
This result indicates that the combination of features extracted using the Farneback optical flow method and GLCM can effectively capture the distinguishing dynamics of movement speed and direction between infected and healthy shrimp. In this research, the majority of errors stem from the incorrect recognition of diseased segments as normal segments, accounting for 88.2% of the total error. These errors can be categorized into three main types: 1) The first type occurs when floating foam obstructs the water surface, resulting in only a small number of shrimp being extracted from the image. 2) The second type is attributed to changes in water movement. In this study, nano-tube aeration was used for oxygenation, leading to the generation of sprays on the water surface, which affected the movement of the shrimp. 3) The third type of error is linked to video quality. When the video resolution is low, the difference in optical flow between diseased shrimp and normal shrimp becomes relatively small. Therefore, it is advisable to adjust the collection area based on the actual production environment and enhance video quality. [Conclusions] The multiple features introduced in this study effectively capture the movement of shrimp and can be employed for disease detection. The improved YOLOv8 is particularly well suited to platforms with limited computational resources and is feasible for deployment in actual production settings. However, the experiment was conducted in a factory farming environment, limiting the applicability of the method to other farming environments. Overall, this method only requires consumer-grade cameras as image acquisition equipment, places low demands on the detection platform, and can provide a theoretical basis and methodological support for the future application of aquatic disease detection methods.
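
A hedged sketch of the multi-feature pipeline described above is given below: Farneback optical flow statistics capture motion between consecutive frames, GLCM statistics capture texture, and an SVM separates healthy from diseased clips. The specific feature choices, flow parameters, and SVM settings are illustrative assumptions, and the synthetic clips merely stand in for real video segments.

```python
# Sketch: motion (optical flow) + texture (GLCM) features feeding an SVM.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def clip_features(frames):
    """frames: list of grayscale uint8 images from one video clip."""
    flows = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        flows.append([mag.mean(), mag.std()])            # clip motion statistics
    glcm = graycomatrix(frames[-1], distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([np.mean(flows, axis=0), texture])

rng = np.random.default_rng(0)
clips = [[rng.integers(0, 256, (120, 160), dtype=np.uint8) for _ in range(5)] for _ in range(4)]
X = np.array([clip_features(c) for c in clips])
y = np.array([0, 0, 1, 1])                                # 0 = healthy, 1 = diseased
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:1]))
```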

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    LIU Zhiyong, WEN Changkai, XIAO Yuejin, FU Weiqiang, WANG Hao, MENG Zhijun
    Smart Agriculture. 2023, 5(4): 58-67. https://doi.org/10.12133/j.smartag.SA202308012

    [Objective] Conventional agricultural machinery navigation focuses on the tracking accuracy of the tractor, whereas for tractor-trailed vehicles the tracking performance of the trailed implement is the core of work quality. The connection between the tractor and the implement is non-rigid, and the implement can rotate around the hinge joint. In path tracking, this non-rigid structure leads to non-overlapping trajectories of the tractor and the implement, reducing path tracking accuracy. In addition, problems such as large hysteresis and poor anti-interference ability are also very obvious. In order to solve the above problems, a tractor-implement path tracking control method based on variable-structure sliding mode control was proposed, taking the tractor front wheel angle as the control variable and the trailed implement as the control target. [Methods] Firstly, the linear deviation model was established. Based on the structural relationship between the tractor and the trailed agricultural implement, the overall kinematics model of the vehicle was established by considering four degrees of freedom: lateral, longitudinal, heading and articulation angle, ignoring the lateral forces on the vehicle and slip during forward motion. The geometric relationship between the vehicle and the reference path was integrated to establish the linear vehicle-road deviation model based on the vehicle kinematic model and an approximate linearization method. Then, the control algorithm was designed. The switching function was designed considering three evaluation indexes: lateral deviation, heading deviation and hinge-angle deviation. The exponential reaching law was used as the reaching mode, the saturation function was used instead of the sign function to reduce control-variable chattering, and the convergence of the control law was verified using a Lyapunov function. Since the system was three-dimensional, in order to improve its dynamic response and steady-state characteristics, the two conjugate dominant poles were assigned within the required range, and the third pole was placed far from the two dominant poles to reduce its interference with system performance. The coefficient matrix of the switching function was solved based on the Ackermann formula, the calculation formula of the tractor front wheel angle was then obtained, and the whole control algorithm was designed. Finally, the path tracking control simulation experiments were carried out. The sliding mode controller was built in the MATLAB/Simulink environment and was composed of a deviation calculation module and a control output calculation module. The tractor-implement model in Carsim software was selected, with the front vehicle as the tractor and the rear vehicle as a single-axle implement, and tracking control simulation tests of different reference paths were conducted in the MATLAB/Carsim co-simulation environment. [Results and Discussions] Based on the co-simulation environment, tracking simulation experiments on three reference paths were carried out. When tracking the double lane change path, the lateral deviation and heading deviation of the agricultural implement converged to 0 m and 0° after 8 s. When the reference heading changed, the lateral deviation and heading deviation were less than 0.1 m and less than 7°. 
When tracking the circular reference path, the lateral deviation of the agricultural machinery stabilized after 7 s and was always less than 0.03 m, and the heading deviation stabilized after 7 s and remained at 0°. The simulation results for the double lane change path and the circular path showed that the controller could maintain good performance when tracking constant-curvature reference paths. When tracking the S-shaped curve reference path, the tracking performance of the agricultural machinery on the sections with constant curvature was the same as in the previous two road conditions, and the maximum lateral deviation of the agricultural machinery at the curvature changes was less than 0.05 m; the controller still maintained good tracking performance when tracking the variable-curvature path. [Conclusions] The sliding mode variable structure controller designed in this study can effectively track linear and circular reference paths, and still maintains a good tracking effect when tracking variable-curvature paths. The agricultural machinery can converge to the reference path in a short time, which meets the requirement of rapid response. In the tracking simulation tests, the tractor front wheel angle and the articulation angle between the tractor and the agricultural implement were kept within a small range, which meets the needs of actual production and reduces the possibility of safety accidents. In summary, the agricultural implement can effectively track the reference path and meet the requirements of precision, rapidity and safety. The model and method proposed in this study provide a reference for the automatic navigation of trailed agricultural implements. In future research, special attention will be paid to the tracking performance of the control algorithm in actual field operations and under large speed changes.
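
The structure of the control law described above can be sketched as follows: a switching function over the lateral, heading, and hinge-angle deviations, an exponential reaching law, and a saturation function in place of sign() to suppress chattering. The gains, boundary-layer width, and weights below are illustrative assumptions and not the values obtained via pole assignment and the Ackermann formula in the study; the sketch returns only the reaching-law term, not the full mapping to the front wheel angle.

```python
# Hedged sketch of a sliding-mode switching function with an exponential reaching law.
import numpy as np

C = np.array([1.2, 0.8, 0.5])      # assumed weights on [lateral, heading, hinge-angle] deviations
K, EPS, PHI = 2.0, 0.3, 0.1        # assumed reaching-law gains and boundary-layer width

def sat(x, phi=PHI):
    """Saturation function used instead of sign() to reduce control chattering."""
    return np.clip(x / phi, -1.0, 1.0)

def reaching_law(deviations):
    """deviations: np.array([e_lateral, e_heading, e_hinge]); returns the commanded s_dot."""
    s = C @ deviations              # switching (sliding-surface) function
    return -EPS * sat(s) - K * s    # exponential reaching law

print(reaching_law(np.array([0.15, np.deg2rad(5.0), np.deg2rad(2.0)])))
```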

  • Topic--Technological Innovation and Sustainable Development of Smart Animal Husbandry
    LI Minghuang, SU Lide, ZHANG Yong, ZONG Zheying, ZHANG Shun
    Smart Agriculture. 2024, 6(4): 91-102. https://doi.org/10.12133/j.smartag.SA202312027

    [Objective] There exists a high genetic correlation among various morphological characteristics of Mongolian horses. Utilizing advanced technology to obtain body structure parameters related to athletic performance could provide data support for breeding institutions to develop scientific breeding plans and establish the groundwork for further improvement of Mongolian horse breeds. However, traditional manual measurement methods are time-consuming, labor-intensive, and may cause certain stress responses in horses. Therefore, ensuring precise and effective measurement of Mongolian horse body dimensions is crucial for formulating early breeding plans. [Methods] Video images of 50 adult Mongolian horses at the suitable breeding stage at the Inner Mongolia Agricultural University Horse Breeding Technical Center were first collected. Fifty images per horse were captured to construct the training and validation sets, resulting in a total of 2 500 high-definition RGB images of Mongolian horses, with an equal ratio of images depicting horses in motion and at rest. To ensure the model's robustness and considering issues such as angles, lighting, and image blurring during actual image capture, a series of enhancement algorithms were applied to the original dataset, expanding the Mongolian horse image dataset to 4 000 images. YOLOv8n-pose was employed as the foundational keypoint detection model. Through the design of the C2f_DCN module, deformable convolution (DCNv2) was integrated into the C2f module of the backbone network to enhance the model's adaptability to different horse poses in real-world scenes. In addition, an SA attention module was added to the neck network to improve the model's focus on critical features. The original loss function was replaced with SCYLLA-IoU (SIoU) to prioritize major image regions, and a cosine annealing method was employed to dynamically adjust the learning rate during model training. The improved model was named the DSS-YOLO (DCNv2-SA-SIoU-YOLO) network model. Additionally, a test set comprising 30 RGB-D images of mature Mongolian horses was selected for constructing the body dimension measurement tasks. DSS-YOLO was used for keypoint detection of body dimensions. The 2D keypoint coordinates from the RGB images were fused with the corresponding depth values from the depth images to obtain 3D keypoint coordinates, and the Mongolian horses' point cloud information was obtained. Point cloud processing and analysis were performed using pass-through filtering, random sample consensus (RANSAC) shape fitting, statistical outlier filtering, and principal component analysis (PCA) coordinate system correction. Finally, body height, body oblique length, croup height, chest circumference, and croup circumference were automatically computed based on the keypoint spatial coordinates. [Results and Discussion] The proposed DSS-YOLO model exhibited parameter and computational costs of 3.48 M and 9.1 G, respectively, with an average accuracy mAP0.5:0.95 reaching 92.5% and a dDSS of 7.2 pixels. Compared to Hourglass, HRNet, and SimCC, mAP0.5:0.95 increased by 3.6%, 2.8%, and 1.6%, respectively. Relying on keypoint coordinates for the automatic calculation of body dimensions, and using a moving least squares curve fitting method to complete the horse's hip point cloud, experiments involving 30 Mongolian horses showed a mean absolute error (MAE) of 3.77 cm and a mean relative error (MRE) of 2.29% for the automatic measurements. 
[Conclusions] The results of this study showed that the DSS-YOLO model, combined with three-dimensional point cloud processing methods, can achieve automatic measurement of Mongolian horse body dimensions with high accuracy. The proposed measurement method can also be extended to different breeds of horses, providing technical support for horse breeding plans and possessing practical application value.
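
The fusion of 2D keypoints with depth values can be illustrated with a standard pinhole back-projection: a detected keypoint (u, v) plus its aligned depth and the camera intrinsics yield a metric 3D coordinate, from which distances such as body height follow. The intrinsics and coordinates below are placeholder assumptions, not calibration values from the study.

```python
# Sketch: back-project RGB keypoints with depth to 3D and measure a distance.
import numpy as np

FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0   # assumed pinhole intrinsics (pixels)

def keypoint_to_3d(u, v, depth_m):
    """Back-project an image keypoint and its depth (metres) into camera coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

withers = keypoint_to_3d(350, 120, 2.41)      # hypothetical highest point of the withers
hoof = keypoint_to_3d(360, 430, 2.52)         # hypothetical ground contact below it
body_height = np.linalg.norm(withers - hoof)  # Euclidean distance between the two keypoints
print(f"estimated body height: {body_height:.2f} m")
```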

  • Topic--Intelligent Agricultural Sensor Technology
    HONG Yujiao, ZHANG Shuo, LI Li
    Smart Agriculture. 2024, 6(1): 46-62. https://doi.org/10.12133/j.smartag.SA202308019

    [Significance] Crop production is related to national food security, economic development and social stability, so timely information on the growth of major crops is of great significance for strengthening crop production management and ensuring food security. Traditional crop growth monitoring mainly judges crop growth by manually observing the shape, color and other appearance characteristics of crops in the field; this has good reliability and authenticity, but it consumes a lot of manpower, is inefficient, and is difficult to apply over large areas. With the development of space technology, satellite remote sensing technology provides an opportunity for large-area crop growth monitoring. However, the acquisition of optical remote sensing data is often limited by the weather during the peak crop growth season, when rain and heat coincide. Synthetic aperture radar (SAR) compensates well for the shortcomings of optical remote sensing, and has wide demand and great potential for application in crop growth monitoring. However, current research on crop growth monitoring using SAR data is still relatively limited and lacks systematic review and summarization. In this paper, the research progress of SAR inversion of crop growth parameters was summarized through a comprehensive analysis of the existing literature, the main technical methods and applications of SAR monitoring of crop growth were clarified, and the existing problems and future research directions were explored. [Progress] The current research status of SAR crop growth monitoring was reviewed: the application of SAR technology has gone through several development stages, from the early single-polarization, single-band stage, gradually evolving to the mid-term multi-polarization, multi-band stage, and then to the stage of the joint application of compact polarimetry and optical remote sensing. Then, the research progress and milestone achievements of crop growth monitoring based on SAR data were summarized in three aspects, namely, SAR remote sensing monitoring indexes, SAR remote sensing monitoring data, and SAR remote sensing monitoring methods for crop growth. First, the key parameters of crop growth were summarized, and the crop growth monitoring indexes were divided into morphological indicators, physiological and biochemical indicators, yield indicators and stress indicators. Secondly, the core principle of SAR monitoring of crop growth parameters was introduced, which is based on the interaction between SAR signals and vegetation, with specific scattering models and inversion algorithms then used to estimate the crop growth parameters. Then, a detailed summary and analysis of the radar indicators mainly applied to crop growth monitoring was presented. Finally, SAR remote sensing methods for crop growth monitoring, including mechanistic modeling, empirical modeling, semi-empirical modeling, direct monitoring, and assimilation monitoring with crop growth models, were described, and their applicability and applications in growth monitoring were analyzed. [Conclusions and Prospects] Four challenges in SAR crop growth monitoring are identified: 1) Compared with crop growth monitoring methods that use optical remote sensing data, methods using SAR data are still relatively few. 
The reason may be that SAR remote sensing itself has some inherent shortcomings. 2) Microwave scattering characteristics are insufficiently exploited: at present, a large number of studies have applied backscattering intensity and polarization characteristics to crop growth monitoring, but few have applied phase information, and research on applying polarization decomposition parameters to growth monitoring still needs to be deepened. 3) Compared with optical vegetation indices, relatively few radar vegetation indices have been applied to crop growth monitoring. 4) Crop growth monitoring based on SAR backscattering intensity relies mainly on empirical models, which are difficult to extend to different regions and crop types; this limitation prevents SAR backscattering-intensity-based techniques from effectively realizing their potential in crop growth monitoring. Finally, future research should focus on mining microwave scattering features, utilizing SAR polarization decomposition parameters, developing and optimizing radar vegetation indices, and deepening scattering models for crop growth monitoring.
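
As a small illustration of the radar vegetation index family mentioned above, the sketch below computes one widely used quad-pol form, RVI = 8*sigma_HV / (sigma_HH + sigma_VV + 2*sigma_HV), from backscatter values in linear power units; the dB-to-linear conversion and the sample values are illustrative assumptions, and other RVI formulations exist for dual-pol data.

```python
# Hedged sketch of a quad-pol radar vegetation index (RVI) computation.
import numpy as np

def db_to_linear(sigma_db):
    return 10.0 ** (np.asarray(sigma_db, dtype=float) / 10.0)

def radar_vegetation_index(hh_db, vv_db, hv_db):
    hh, vv, hv = db_to_linear(hh_db), db_to_linear(vv_db), db_to_linear(hv_db)
    return 8.0 * hv / (hh + vv + 2.0 * hv)

# Per-pixel backscatter (dB) for a toy 2x2 scene
hh = [[-7.1, -6.8], [-8.0, -7.5]]
vv = [[-8.3, -8.0], [-9.1, -8.6]]
hv = [[-15.2, -14.8], [-16.0, -15.5]]
print(radar_vegetation_index(hh, vv, hv))
```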

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    WANG Herong, CHEN Yingyi, CHAI Yingqian, XU Ling, YU Huihui
    Smart Agriculture. 2023, 5(4): 137-149. https://doi.org/10.12133/j.smartag.SA202310003

    [Objective] Intelligent feeding methods are significant for improving breeding efficiency and reducing water quality pollution in current aquaculture. Feeding image segmentation of fish schools is a critical step in extracting the distribution characteristics of fish schools and quantifying their feeding behavior for intelligent feeding method development. However, an applicable approach is lacking due to image challenges caused by blurred boundaries and similar-looking individuals in practical aquaculture environments. In this study, a high-precision segmentation method for fish school feeding images was proposed to provide technical support for the quantitative analysis of fish school feeding behavior. [Methods] The proposed method for fish school feeding image segmentation combined VoVNetv2 with an attention mechanism named shuffle attention. Firstly, a fish feeding segmentation dataset was presented. The dataset was collected at the intensive aquaculture base of Laizhou Mingbo Company in Shandong province, with a focus on Oplegnathus punctatus as the research target. Cameras were used to capture videos of the fish school before, during, and after feeding. The images were annotated at the pixel level using Labelme software. According to the distribution characteristics of the fish feeding and non-feeding stages, the data were classified into two semantic categories: non-occluded and non-aggregated fish (fish1), and occluded or aggregated fish (fish2). In the preprocessing stage, data cleaning and image augmentation were employed to further enhance the quality and diversity of the dataset. Initially, data cleaning rules were established based on the distribution of annotated areas within the dataset, and images with outlier annotations were removed, improving the overall quality of the dataset. Subsequently, to prevent the risk of overfitting, five data augmentation techniques (random translation, random flip, brightness variation, random noise injection, random point addition) were applied for mixed augmentation of the dataset, contributing to its increased diversity. Through these data augmentation operations, the dataset was expanded to three times its original size. Eventually, the dataset was divided into a training dataset and a testing dataset at a ratio of 8:2. Thus, the final dataset consisted of 1 612 training images and 404 testing images. In detail, there were a total of 116 328 instances of fish1 and 20 924 instances of fish2. Secondly, a fish feeding image segmentation method was proposed. Specifically, VoVNetv2 was used as the backbone network of the Mask R-CNN model to extract image features. VoVNetv2 is a backbone network with strong computational capabilities. Its unique feature aggregation structure enables effective fusion of features at different levels, extracting diverse feature representations. This facilitates better capturing of fish schools of different sizes and shapes in fish feeding images, achieving accurate identification and segmentation of targets within the images. To maximize feature mappings with limited resources, the experiment replaced the channel attention mechanism in the one-shot aggregation (OSA) module of VoVNetv2 with a more lightweight and efficient attention mechanism named shuffle attention. This improvement allowed the network to concentrate more on the location of fish in the image, thus reducing the impact of irrelevant information, such as noise, on the segmentation results. 
Finally, experiments were conducted on the fish segmentation dataset to test the performance of the proposed method. [Results and Discussions] The results showed that the average segmentation accuracy of the Mask R-CNN network reached 63.218% after data cleaning, representing an improvement of 7.018% compared to the original dataset. With both data cleaning and augmentation, the network achieved an average segmentation accuracy of 67.284%, an enhancement of 11.084% over the original dataset and an improvement of 4.066% over the dataset after cleaning alone. These results demonstrated that data preprocessing had a positive effect on improving the accuracy of image segmentation. The ablation experiments on the backbone network revealed that replacing the ResNet50 backbone with VoVNetv2-39 in Mask R-CNN led to a 2.511% improvement in model accuracy. After improving VoVNetv2 with the shuffle attention mechanism, the accuracy of the model was further improved by 1.219%. Simultaneously, the parameters of the model decreased by 7.9%, achieving a balance between accuracy and lightweight design. Compared with the classic segmentation networks SOLOv2, BlendMask and CondInst, the proposed model achieved the highest segmentation accuracy across various target scales. For the fish feeding segmentation dataset, the average segmentation accuracy of the proposed model surpassed BlendMask, CondInst, and SOLOv2 by 3.982%, 12.068%, and 18.258%, respectively. Although the proposed method demonstrated effective segmentation of fish feeding images, it still exhibited certain limitations, such as missed detections, segmentation errors, and misclassification. [Conclusions] The proposed instance segmentation algorithm (SA_VoVNetv2_RCNN) effectively achieved accurate segmentation of fish feeding images. It can be utilized for counting the numbers and pixel areas of the two types of fish in fish feeding videos, facilitating quantitative analysis of fish feeding behavior, and can therefore provide technical support for the analysis of fish feeding behavior. In future research, the remaining issues will be addressed to further enhance the accuracy of fish feeding image segmentation.
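
The data-cleaning idea described in the Methods (dropping images whose annotated area is an outlier of the overall distribution) could be implemented along the following lines; the IQR rule and the synthetic area values are illustrative assumptions, since the study defines its own rules from the annotated-area distribution.

```python
# Hedged sketch: filter images by annotated-area outliers using an IQR rule.
import numpy as np

rng = np.random.default_rng(0)
annotated_area = rng.normal(30000, 6000, size=800)         # annotated pixels per image (synthetic)
annotated_area[:10] = rng.uniform(200, 1500, size=10)       # a few nearly empty annotations

q1, q3 = np.percentile(annotated_area, [25, 75])
iqr = q3 - q1
keep = (annotated_area >= q1 - 1.5 * iqr) & (annotated_area <= q3 + 1.5 * iqr)
print(f"kept {keep.sum()} of {keep.size} images after area-based cleaning")
```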

  • Topic--Intelligent Agricultural Sensor Technology
    LI Lu, GE Yuqing, ZHAO Jianlong
    Smart Agriculture. 2024, 6(1): 28-35. https://doi.org/10.12133/j.smartag.SA202309020

    [Objective] The soil moisture content is a crucial factor that directly affects the growth and yield of crops. Measuring soil moisture content with soil measurement instruments can provide powerful data support for the development of agriculture. Furthermore, these data have guiding significance for the implementation of scientific irrigation and water-saving irrigation in farmland. In order to develop a reliable and efficient soil moisture sensor, a new capacitive soil moisture sensor based on microfabrication technology was proposed in this study. Capacitive moisture sensors have the advantages of low power consumption, good performance, long-term stability, and easy industrialization. [Methods] The interdigitated (forked) electrode array consists of multiple capacitors connected in parallel on the same plane. The ideal design parameters of 10 μm spacing and 75 pairs of fingers were obtained by analyzing the number of finger pairs, the finger spacing, width, and length, and the electrode thickness, and by studying the influence of the electrode parameters on capacitance sensitivity using COMSOL Multiphysics software. This geometry yielded an initial capacitance on the order of picofarads and was not prone to breakdown or failure. The sensor was constructed using microelectromechanical systems (MEMS) technology, where a 30 nm titanium adhesion layer was sputtered onto a glass substrate, followed by sputtering a 100 nm gold electrode to form a symmetrical interdigitated electrode structure. Due to the strong water-molecule adsorption capacity of the MoS2 (molybdenum disulfide) layer, the device exhibited high sensitivity to soil moisture and demonstrated excellent soil moisture sensing performance. The molybdenum disulfide was coated onto the completed electrodes as the humidity-sensitive material to create a humidity sensing layer. When the humidity changed, the dielectric constant of the sensing layer varied due to the moisture-absorbing characteristics of molybdenum disulfide, and the capacitance value of the device changed accordingly, thus enabling the measurement of soil moisture. Subsequently, the electrode was encapsulated with a polytetrafluoroethylene (PTFE) polymer film. The electrode encapsulated with the microporous film could be placed directly in the soil, which avoided direct contact between the soil/sand particles and the molybdenum disulfide on the device and allowed the humidity sensing unit to capture only the moisture in the soil for measuring humidity. This ensured the device's sensitivity to soil moisture and improved its long-term stability. The method greatly reduced the size of the sensor, making it an ideal choice for on-site dynamic monitoring of soil moisture. [Results and Discussions] The surface morphology of the molybdenum disulfide was characterized and analyzed using a scanning electron microscope (SEM). It was observed that the molybdenum disulfide nanomaterial exhibited a sheet-like two-dimensional structure, with smooth surfaces on the nanosheets. Some nanosheets displayed sharp edges or irregular shapes along the edges, and they were irregularly arranged with numerous gaps in between. The capacitive soil moisture sensor, which utilized molybdenum disulfide as the humidity-sensitive layer, exhibited excellent performance under varying levels of environmental humidity and soil moisture. At room temperature, a humidity generator was constructed using saturated salt solutions. 
Saturated solutions of lithium chloride, potassium acetate, magnesium chloride, copper chloride, sodium chloride, potassium chloride, and potassium sulfate were used to generate relative humidity levels of 11%, 23%, 33%, 66%, 75%, 84%, and 96%, respectively. The capacitance values of the sensor were measured at different humidity levels using an LCR meter (Agilent E4980A). The capacitance output of the sensor at a frequency of 200 Hz ranged from 12.13 pF to 187.42 nF as the relative humidity varied between 11% and 96%. The sensor exhibited high sensitivity and a wide humidity sensing range. Additionally, the frequency of the input voltage signal had a significant impact on the capacitance output of the sensor: as the testing frequency increased, the response of the sensor decreased. The humidity sensing performance of the sensor was tested in soil samples with moisture contents of 8.66%, 13.91%, 22.02%, 31.11%, and 42.75%, respectively. As the moisture content in the soil increased from 8.66% to 42.75%, the capacitance output of the sensor at a frequency of 200 Hz increased from 119.51 nF to 377.98 nF, demonstrating a relatively high sensitivity. Similarly, as the frequency of the input voltage increased, the capacitance output of the sensor decreased. Additionally, the electrode exhibited good repeatability, and the sensitivity of the sensor increased significantly as the testing frequency decreased. [Conclusions] The capacitive soil moisture sensor holds promise for effective and accurate monitoring of soil moisture levels, with its excellent performance, sensitivity, repeatability, and responsiveness to changes in humidity and soil moisture. The ultimate goal of this study is to achieve long-term monitoring of the capacitance changes of capacitive soil moisture sensors, enabling monitoring of long-term changes in soil moisture. This will enable farmers to optimize irrigation systems, improve crop yields, and reduce water usage. In conclusion, the development of this innovative soil moisture sensor has the potential to promote agricultural modernization by providing accurate and reliable monitoring of soil moisture levels.
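
In practice, such a capacitance response would be turned into a soil moisture estimate through a calibration curve. The sketch below interpolates between calibration points; only the endpoint capacitances (119.51 nF at 8.66% and 377.98 nF at 42.75%) are taken from the abstract, while the intermediate capacitance values and the use of linear interpolation are stated assumptions for illustration only.

```python
# Hedged sketch: estimate soil moisture from a 200 Hz capacitance reading by
# interpolating an assumed monotonic calibration curve.
import numpy as np

moisture_pct = np.array([8.66, 13.91, 22.02, 31.11, 42.75])       # soil moisture content (%)
capacitance_nF = np.array([119.51, 150.0, 210.0, 290.0, 377.98])  # middle values are assumed

def estimate_moisture(c_nF):
    return float(np.interp(c_nF, capacitance_nF, moisture_pct))

print(f"estimated moisture at 250 nF: {estimate_moisture(250.0):.1f} %")
```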

  • Special Issue--Agricultural Information Perception and Models
    ZHANG Jing, ZHAO Zexuan, ZHAO Yanru, BU Hongchao, WU Xingyu
    Smart Agriculture. 2024, 6(2): 40-48. https://doi.org/10.12133/j.smartag.SA202310010

    [Objective] The widespread prevalence of sclerotinia disease poses a significant challenge to the cultivation and supply of oilseed rape: it not only results in substantial yield losses and decreased oil content in infected plant seeds but also severely impacts crop productivity and quality, leading to significant economic losses. To solve the problems of complex operation, environmental pollution, sample destruction and low detection efficiency of traditional chemical detection methods, a Bidirectional Gated Recurrent Unit (Bi-GRU) model based on space-spectrum feature fusion was constructed to achieve segmentation of sclerotinia-infected areas of oilseed rape in hyperspectral images (HSIs). [Methods] The spectral characteristics of sclerotinia disease were explored first. Significantly varying spectral reflectance was notably observed around 550 nm and within the wavelength range of 750-1 000 nm at different locations on rapeseed leaves. As the severity of sclerotinia infection increased, the differences in reflectance at these wavelengths became more pronounced. Subsequently, a rapeseed leaf sclerotinia disease dataset comprising 400 HSIs was curated using an intelligent data annotation tool. This dataset was divided into three subsets: a training set with 280 HSIs, a validation set with 40 HSIs, and a test set with 80 HSIs. Expanding on this, a 7×7 pixel neighborhood was extracted as the spatial feature of the target pixel, incorporating both spatial and spectral features effectively. Leveraging the Bi-GRU model enabled simultaneous feature extraction at any point within the sequence data, eliminating the impact of the order of spatial-spectral data fusion on the model's performance. The model comprised four key components: an input layer, hidden layers, fully connected layers, and an output layer. The Bi-GRU model in this study consisted of two hidden layers, each housing 512 GRU neurons. The forward hidden layer computed sequence information at the current time step, while the backward hidden layer retrieved the sequence in reverse, incorporating reversed-order information. These two hidden layers were linked to a fully connected layer, providing both forward and reversed-order information to all neurons during training. The Bi-GRU model included two fully connected layers, each with 1 000 neurons, and an output layer with two neurons representing the healthy and diseased classes, respectively. [Results and Discussions] To thoroughly validate the comprehensive performance of the proposed Bi-GRU model and assess the effectiveness of the spatial-spectral information fusion mechanism, relevant comparative analysis experiments were conducted. These experiments primarily focused on five key parameters, namely ClassAP(1), ClassAP(2), mean average precision (mAP), mean intersection over union (mIoU), and the Kappa coefficient, to provide a comprehensive evaluation of the Bi-GRU model's performance. The comprehensive performance analysis revealed that the Bi-GRU model, when compared to mainstream convolutional neural network (CNN) and long short-term memory (LSTM) models, demonstrated superior overall performance in detecting rapeseed sclerotinia disease. Notably, the proposed Bi-GRU model achieved an mAP of 93.7%, showcasing a 7.1% precision improvement over the CNN model. The bidirectional architecture, coupled with spatial-spectral fusion data, effectively enhanced detection accuracy.
Furthermore, the study visually presented the segmentation results of sclerotinia disease-infected areas using CNN, Bi-LSTM, and Bi-GRU models. A comparison with the ground-truth data revealed that the Bi-GRU model outperformed the CNN and Bi-LSTM models in detecting sclerotinia disease at various infection stages. Additionally, the Dice coefficient was employed to comprehensively assess the actual detection performance of different models at early, middle, and late infection stages. The Dice coefficients for the Bi-GRU model at these stages were 83.8%, 89.4% and 89.2%, respectively. While early infection detection accuracy was relatively lower, the spatial-spectral data fusion mechanism significantly enhanced the effectiveness of detecting early sclerotinia infections in oilseed rape. [Conclusions] This study introduces a Bi-GRU model that integrates spatial and spectral information to accurately and efficiently identify the infected areas of oilseed rape sclerotinia disease. This approach not only addresses the challenge of detecting early stages of sclerotinia infection but also establishes a basis for high-throughput non-destructive detection of the disease.
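    For readers who want a concrete picture of the network described above, the following PyTorch sketch assembles a pixel classifier with the stated layer sizes (two bidirectional GRU layers of 512 units, two fully connected layers of 1 000 neurons, and a two-class output). Treating the 49 pixels of the 7×7 neighborhood as the input sequence, and the band count of 128, are assumptions made for illustration; this is a minimal sketch, not the authors' implementation.

```python
# Minimal Bi-GRU pixel classifier sketch; layer sizes follow the abstract,
# the sequence layout of the 7x7 spatial neighborhood is an assumption.
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    def __init__(self, n_bands: int, hidden: int = 512, n_classes: int = 2):
        super().__init__()
        # bidirectional GRU over the 49 pixels of a 7x7 neighborhood,
        # each step carrying the full spectrum of one pixel
        self.bigru = nn.GRU(input_size=n_bands, hidden_size=hidden,
                            num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, 1000), nn.ReLU(),
            nn.Linear(1000, 1000), nn.ReLU(),
            nn.Linear(1000, n_classes),
        )

    def forward(self, x):                 # x: (batch, 49, n_bands)
        out, _ = self.bigru(x)            # (batch, 49, 2*hidden)
        return self.head(out[:, -1, :])   # classify from the final step

if __name__ == "__main__":
    model = BiGRUClassifier(n_bands=128)        # 128 bands is a placeholder
    logits = model(torch.randn(4, 49, 128))     # 4 sample neighborhoods
    print(logits.shape)                         # torch.Size([4, 2])
```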

  • Special Issue--Agricultural Information Perception and Models
    WANG Tong, WANG Chunshan, LI Jiuxi, ZHU Huaji, MIAO Yisheng, WU Huarui
    Smart Agriculture. 2024, 6(2): 85-94. https://doi.org/10.12133/j.smartag.SA202311021

    [Objective] With the development of agricultural informatization, a large amount of information about agricultural diseases exists in the form of text. However, due to problems such as nested entities and confusion of entity types, traditional named entity recognition (NER) methods often face challenges of low accuracy when processing agricultural disease text. To address this issue, this study proposes a new agricultural disease NER method called RoFormer-PointerNet, which combines the RoFormer pre-trained model with the PointerNet baseline model. The aim of this method is to improve the accuracy of entity recognition in agricultural disease text, providing more accurate data support for intelligent analysis, early warning, and prevention of agricultural diseases. [Methods] This method first utilized the RoFormer pre-trained model to perform deep vectorization processing on the input agricultural disease text. This step was a crucial foundation for the subsequent entity extraction task. As an advanced natural language processing model, the RoFormer pre-trained model's unique rotary position embedding approach endowed it with powerful capabilities in capturing textual positional information. In agricultural disease text, due to the diversity of terminology and the existence of polysemy, traditional entity recognition methods often faced challenges in confusing entity types. However, through its unique positional embedding mechanism, the RoFormer model was able to incorporate more positional information into the vector representation, effectively enriching the feature information of words. This characteristic enabled the model to more accurately distinguish between different entity types in subsequent entity extraction tasks, reducing the possibility of type confusion. After completing the vectorization representation of the text, this study further employed a pointer network for entity extraction. The pointer network was an advanced sequence labeling approach that utilized head and tail pointers to annotate entities within sentences. This labeling method was more flexible than traditional sequence labeling methods because it was not restricted by fixed entity structures, enabling the accurate extraction of all types of entities within sentences, including complex entities with nested relationships. In agricultural disease text, entity extraction often faced the challenge of nesting, such as when multiple different entity types were nested within a single disease symptom description. By introducing the pointer network, this study effectively addressed this issue of entity nesting, improving the accuracy and completeness of entity extraction. [Results and Discussions] To validate the performance of the RoFormer-PointerNet method, this study constructed an agricultural disease dataset, which comprised 2 867 annotated corpora and a total of 10 282 entities, covering eight entity types: disease names, crop names, disease characteristics, pathogens, infected areas, disease factors, prevention and control methods, and disease stages. In comparative experiments with other pre-trained models such as Word2Vec, BERT, and RoBERTa, RoFormer-PointerNet demonstrated superiority in model precision, recall, and F1-Score, achieving 87.49%, 85.76% and 86.62%, respectively. This result demonstrated the effectiveness of the RoFormer pre-trained model.
Additionally, to verify the advantage of RoFormer-PointerNet in mitigating the issue of nested entities, this study compared it with the widely used bidirectional long short-term memory neural network (BiLSTM) and conditional random field (CRF) models combined with the RoFormer pre-trained model as decoding methods. RoFormer-PointerNet outperformed the RoFormer-BiLSTM, RoFormer-CRF, and RoFormer-BiLSTM-CRF models by 4.8%, 5.67% and 3.87%, respectively. The experimental results indicated that RoFormer-PointerNet significantly outperformed the other models in entity recognition performance, confirming the effectiveness of the pointer network model in addressing nested entity issues. To validate the superiority of the RoFormer-PointerNet method in agricultural disease NER, a comparative experiment was conducted with eight mainstream NER models such as BiLSTM-CRF, BERT-BiLSTM-CRF, and W2NER. The experimental results showed that the RoFormer-PointerNet method achieved precision, recall, and F1-Score of 87.49%, 85.76% and 86.62%, respectively, on the agricultural disease dataset, reaching the optimal level among similar methods. This result further verified the superior performance of the RoFormer-PointerNet method in agricultural disease NER tasks. [Conclusions] The agricultural disease NER method RoFormer-PointerNet, proposed in this study and based on the RoFormer pre-trained model, demonstrates significant advantages in addressing issues such as nested entities and type confusion during the entity extraction process. This method effectively identifies entities in Chinese agricultural disease texts, enhancing the accuracy of entity recognition and providing robust data support for intelligent analysis, early warning, and prevention of agricultural diseases. This research outcome holds significant importance for promoting the development of agricultural informatization and intelligence.
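    To make the head/tail pointer idea concrete, the short Python sketch below decodes entity spans from per-token start and end scores, pairing each head pointer with the nearest admissible tail pointer so that nested spans of different types can coexist. The thresholding scheme, score shapes, and the random inputs are assumptions for illustration and do not reproduce the paper's decoder.

```python
# Illustrative pointer-network span decoding (not the paper's code): tokens
# whose start/end scores exceed a threshold are paired into spans per type,
# which naturally allows nested entities.
import numpy as np

def decode_pointers(start_scores, end_scores, threshold=0.5, max_len=20):
    """start_scores, end_scores: arrays of shape (n_types, seq_len) in [0, 1].
    Returns a list of (type_id, start_idx, end_idx) spans."""
    spans = []
    n_types, seq_len = start_scores.shape
    for t in range(n_types):
        starts = np.where(start_scores[t] > threshold)[0]
        ends = np.where(end_scores[t] > threshold)[0]
        for s in starts:
            # pair each head pointer with the nearest admissible tail pointer
            cand = ends[(ends >= s) & (ends < s + max_len)]
            if cand.size:
                spans.append((t, int(s), int(cand[0])))
    return spans

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    start = rng.random((8, 30))   # 8 entity types, 30 tokens (hypothetical)
    end = rng.random((8, 30))
    print(decode_pointers(start, end, threshold=0.9))
```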

  • Topic--Intelligent Agricultural Sensor Technology
    HONG Yan, WANG Le, WANG Rujing, SU Jingming, LI Hao, ZHANG Jiabao, GUO Hongyan, CHEN Xiangyu
    Smart Agriculture. 2024, 6(1): 18-27. https://doi.org/10.12133/j.smartag.SA202309022

    [Objective] The content of nitrogen (N) and potassium (K) in the soil directly affects crop yield, making it a crucial indicator in agricultural production processes. Insufficient levels of the two nutrients can impede crop growth and reduce yield, while excessive levels can result in environmental pollution. Rapidly quantifying the N and K content in soil is of great importance for agricultural production and environmental protection. [Methods] A rapid and quantitative method was proposed for detecting N and K nutrient ions in soil based on polydimethylsiloxane (PDMS) microfluidic chip electrophoresis and capacitively coupled contactless conductivity detection (C4D). Microfluidic chip electrophoresis enables rapid separation of multiple ions in soil. The electrophoresis microfluidic chips have a cross-shaped channel layout and were fabricated using soft lithography technology. The sample was introduced into the microfluidic chip by applying the appropriate injection voltage at both ends of the injection channel. This simple and efficient procedure ensured accurate sample introduction. Subsequently, an electrophoretic voltage was applied at both ends of the separation channel, creating a capillary zone electrophoresis that enabled the rapid separation of different ions. This process offered high separation efficiency, required a short processing time, and had a small sample volume requirement, enabling the rapid processing and analysis of many samples. C4D enabled precise measurement of changes in conductivity. The sensing electrodes were separated from the microfluidic chips and printed onto a printed circuit board (PCB) using an immersion gold process. The ions separated under the action of the electric field and sequentially reached the sensing electrodes. The detection circuit, connected to the sensing electrodes, received and regulated the conductivity signal to reflect the difference in conductivity between the sample and the buffer solution. The sensing electrodes were isolated from the sample solution to prevent interference from the high-voltage electric field used for electrophoresis. [Results and Discussions] The voltage used for electrophoresis, as well as the operating frequency and excitation voltage of the excitation signal in the detection system, had a significant effect on separation and detection performance. Based on the response characteristics of the system output, an optimal operating frequency of 1 000 kHz, an excitation voltage of 50 V, and an electrophoresis voltage of 1.5 kV were determined. A peak overshoot was observed in the electrophoresis spectrum, which was associated with the operating frequency of the system. The total noise level of the system was approximately 0.091 mV. The detection limit (S/N = 3) for soil nutrient ions was determined by analyzing a series of standard sample solutions with varying concentrations. The detection limits for potassium (K+), ammonium (NH4+), and nitrate (NO3−) standard solutions were 0.5, 0.1 and 0.4 mg/L, respectively. For the quantitative determination of soil nutrient ion concentration, the linear relationship between peak area and the corresponding concentration was investigated under optimal experimental conditions. K+, NH4+, and NO3− exhibited a strong linear relationship in the range of 0.5~40 mg/L, with linear correlation coefficients (R2) of 0.994, 0.997, and 0.990, respectively, indicating that this method could accurately quantify N and K ions in soil.
At the same time, to evaluate the repeatability of the system, peak height, peak area, and peak time were used as evaluation indicators in repeatability experiments. The relative standard deviation (RSD) was less than 4.4%, indicating that the method shows good repeatability. In addition, to assess the ability of the C4D microfluidic system to detect actual soil samples, four collected soil samples were tested using MES/His and PVP/PTAE as running buffers. K+, NH4+, Na+, chloride (Cl−), NO3−, and sulfate (SO42−) were separated sequentially within 1 min, and the detection efficiency was significantly improved. To evaluate the accuracy of this method, spiked recovery experiments were performed on the four soil samples. The recovery rates ranged from 81.74% to 127.76%, indicating the good accuracy of the method. [Conclusions] This study provides a simple and effective method for the rapid detection of N and K nutrient ions in soil. The method is highly accurate and reliable, and it can quickly and efficiently detect the contents of N and K nutrient ions in soil. This contactless measurement method reduces costs and improves economic efficiency while extending the service life of the sensing electrodes and reducing the frequency of maintenance and replacement. It provides strong support for long-term, continuous conductivity monitoring.
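    The quantification steps reported above (a peak-area calibration line and a detection limit taken at S/N = 3) can be written down compactly. The Python sketch below is an illustration under assumptions: the calibration series and the sensitivity value are hypothetical placeholders, and only the 0.091 mV noise figure is taken from the abstract.

```python
# Hedged sketch of peak-area calibration and an S/N = 3 detection limit.
import numpy as np

def linear_calibration(conc_mg_L, peak_area):
    """Least-squares line peak_area = a*conc + b; returns (a, b, R^2)."""
    a, b = np.polyfit(conc_mg_L, peak_area, 1)
    pred = a * conc_mg_L + b
    ss_res = np.sum((peak_area - pred) ** 2)
    ss_tot = np.sum((peak_area - peak_area.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

def detection_limit(noise_mv, sensitivity_mv_per_mg_L):
    """Concentration giving a signal of 3x the baseline noise (S/N = 3)."""
    return 3.0 * noise_mv / sensitivity_mv_per_mg_L

if __name__ == "__main__":
    conc = np.array([0.5, 5, 10, 20, 40])            # mg/L (placeholder series)
    area = np.array([1.1, 10.5, 21.0, 41.8, 83.0])   # arbitrary units (placeholder)
    a, b, r2 = linear_calibration(conc, area)
    print(f"slope={a:.2f}, intercept={b:.2f}, R2={r2:.3f}")
    print("LOD ~", detection_limit(noise_mv=0.091, sensitivity_mv_per_mg_L=0.55), "mg/L")
```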

  • Topic--Smart Agricultural Technology and Machinery in Hilly and Mountainous Areas
    LUAN Shijie, SUN Yefeng, GONG Liang, ZHANG Kai
    Smart Agriculture. 2024, 6(3): 69-81. https://doi.org/10.12133/j.smartag.SA202306013

    [Objective] The technology of multi-machine convoy driving has emerged as a focal point in the field of agricultural mechanization. By organizing multiple agricultural machinery units into convoys, unified control and collaborative operations can be achieved. This not only enhances operational efficiency and reduces costs, but also minimizes human labor input, thereby maximizing the operational potential of agricultural machinery. In order to solve the problem of communication delay in the cooperative control of multi-vehicle formations and to design its compensation strategy, a trajectory control method for multi-vehicle formation was proposed based on a model predictive control (MPC) delay compensator. [Methods] The multi-vehicle formation cooperative control strategy was designed: a four-vehicle formation cooperative scenario in three lanes was introduced first, followed by the design of the multi-vehicle formation cooperative control architecture; the kinematics and dynamics models and equations of the agricultural machine were then established, laying a solid foundation for solving the formation-following problem. Testing and optimizing automatic driving algorithms on real vehicles requires substantial time and economic cost and is subject to difficulties concerning laws and regulations, scene reproduction, and safety; simulation platform testing could effectively address these problems. For agricultural automatic driving multi-machine formation scenarios, a joint CarSim and Simulink simulation platform was used to simulate and validate the formation driving control of agricultural machines. Based on the single-machine dynamics model of the agricultural machine, a delay compensation controller based on MPC was designed. Feedback correction first detected the actual output of the object, then corrected the model-based predicted output with the actual output and performed a new optimization. Based on the above model, the nonlinear system of kinematics and dynamics was linearized and discretized in order to ensure a real-time solution. The objective function was designed so that the agricultural machine tracked the desired trajectory as closely as possible. Because the operating range of the actuator was limited, corresponding constraints were imposed on the control increment and the control input. Finally, the control increment constraints were solved based on the front wheel angle constraints, front wheel angle increments, and control input constraints of the agricultural machine. [Results and Discussions] CarSim and MATLAB/Simulink were effectively compatible, enabling joint simulation with external solvers. When a delay step size of d = 5 was applied with delay compensation, the MPC response was faster and smoother; the speed error curve responded faster and gradually stabilized to zero error without oscillations. Vehicle 1 effectively changed lanes in a short time and maintained the same lane as the lead vehicle. In the case of a longer delay step size of d = 10, controllers without delay compensation showed more significant performance degradation. Even under higher delay conditions, MPC with delay compensation applied could still respond quickly, with the speed error and longitudinal acceleration gradually stabilizing to zero error and avoiding oscillations. The trajectory of Vehicle 1 indicated that the effectiveness of the delay compensation mechanism decreased under extreme delay conditions.
The simulation results validated the effectiveness of the proposed formation control algorithm, ensuring that multiple vehicles could successfully change lanes to form queues while maintaining specific distances and speeds. Furthermore, the communication delay compensation control algorithm enabled vehicles with added delay to effectively complete formation tasks, achieving stable longitudinal and lateral control. This confirmed the feasibility of the proposed model predictive controller with delay compensation. [Conclusions] At present, most multi-machine formation coordination is verified on simulation platforms, which offer advantages in safety, economy, and speed; however, there is still a certain gap between the idealized model in the simulation platform and real machine experiments. Therefore, multi-machine formation operation of agricultural equipment still needs to be tested on real machines under sound laws and regulations.
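    One common way to realize the kind of delay compensation described above is to roll the discretized kinematic model forward through the control inputs issued during the delay, and hand the predicted state to the MPC solver instead of the stale measurement. The Python sketch below illustrates that idea only; the bicycle-model parameters, time step, and buffered inputs are hypothetical, and the controller in the paper is formulated differently in detail.

```python
# Conceptual delay-compensation sketch (assumptions, not the authors' code):
# forward-simulate the discretized kinematics with the last d control inputs.
import math

def step_kinematics(state, v, delta, L=2.5, dt=0.05):
    """One Euler step of a bicycle model. state = (x, y, yaw); L is wheelbase."""
    x, y, yaw = state
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += v / L * math.tan(delta) * dt
    return (x, y, yaw)

def compensate_delay(measured_state, buffered_inputs):
    """Roll the model forward through the inputs issued during the delay."""
    state = measured_state
    for v, delta in buffered_inputs:          # d most recent (speed, steer) pairs
        state = step_kinematics(state, v, delta)
    return state

if __name__ == "__main__":
    delayed_measurement = (0.0, 0.0, 0.0)
    last_inputs = [(2.0, 0.05)] * 5           # delay step size d = 5 (hypothetical inputs)
    print("predicted current state:", compensate_delay(delayed_measurement, last_inputs))
```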

  • Topic--Technological Innovation and Sustainable Development of Smart Animal Husbandry
    WENG Zhi, FAN Qi, ZHENG Zhiqiang
    Smart Agriculture. 2024, 6(4): 64-75. https://doi.org/10.12133/j.smartag.SA202310007

    [Objective] The body size parameters of cattle are key indicators reflecting their physical development, and are also key factors in the cattle selection and breeding process. In order to meet the demand for measuring the body size of beef cattle in the complex environment of large-scale beef cattle ranches, an image acquisition device and an automatic body size measurement algorithm were designed. [Methods] Firstly, a walking channel for the beef cattle was established, and when a beef cattle entered the restraining device through the channel, the RGB and depth images of the right side of the animal were acquired using an Intel RealSense D455 camera. Secondly, in order to avoid the influence of the complex environmental background, an improved instance segmentation network based on Mask2Former was proposed, adding a CBAM module and a CA module, respectively, to improve the model's ability to extract key features from different perspectives; the foreground contour was extracted from the 2D image of the cattle, the contour was partitioned and compared with other segmentation algorithms, and curvature calculation and other mathematical methods were used to find the required body size measurement points. Thirdly, in the processing of the 3D data, in order to solve the problem that the pixel point to be measured in the 2D RGB image was null when projected to the corresponding pixel coordinates in the depth image, making it impossible to calculate the 3D coordinates of the point, a series of processing steps was performed on the point cloud data: a suitable point cloud filtering and point cloud segmentation algorithm was selected to effectively retain the point cloud data of the region of the cattle's body to be measured, and the null values in the depth map were then filled within a 16-neighborhood to preserve the integrity of the point cloud in the cattle body region, so that the required measurement points could be found and mapped back to the 2D data. Finally, an extraction algorithm was designed to combine the 2D and 3D data, projecting the extracted 2D pixel points into the 3D point cloud, and the camera parameters were used to calculate the world coordinates of the projected points, thus automatically calculating the body measurements of the beef cattle. [Results and Discussions] Firstly, in the instance segmentation part, compared with the classical Mask R-CNN and the recent instance segmentation networks PointRend and QueryInst, the improved network could extract foreground images of cattle with higher precision and smoother contours, in terms of both segmentation accuracy and segmentation effect, whether for cases of occlusion or of multiple cattle. Secondly, in the three-dimensional data processing, the method proposed in the study could effectively extract the three-dimensional data of the target area. Thirdly, the measurement error of body size was analyzed: among the four body size parameters, the smallest average relative error was for the cross section height, because the cross section is more prominent and different standing positions of the cattle have less influence on its position, while the largest average relative error was for the tube girth, owing to the greater overlap of the two front legs and the higher requirements on the standing position.
Finally, automatic body size measurements were carried out on 137 beef cattle on the ranch, and the automatic measurements of the four body size parameters were compared with manual measurements. The results showed that the average relative errors of body height, cross section height, body slant length, and tube girth were 4.32%, 3.71%, 5.58% and 6.25%, respectively, which met the needs of the ranch. The shortcomings were that relatively few body size parameters were measured, and the error in measuring circumference-type parameters was relatively large. Later studies could use a multi-view approach to increase the number of body size parameters measured and improve the accuracy of the circumference-type parameters. [Conclusions] This article designed an automatic measurement method based on two-dimensional and three-dimensional contactless body measurements of beef cattle. Moreover, the innovatively proposed method of measuring tube girth has higher accuracy and better implementability compared with current research on body measurements in beef cattle. The average relative errors of the four body size parameters meet the needs of pasture measurements and provide theoretical and practical guidance for the automatic measurement of body size in beef cattle.
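    The core geometric step above, turning a pixel chosen on the RGB image plus its depth value into a 3D point and then into a length, can be sketched in a few lines. The pinhole deprojection below is standard; the intrinsic parameters, pixel coordinates, and depths are hypothetical placeholders rather than values from the paper.

```python
# Hedged sketch: deproject two measurement pixels with depth into camera
# coordinates and take the Euclidean distance as a body measurement.
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole deprojection of pixel (u, v) with depth in metres."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def distance_3d(p1, p2):
    return float(np.linalg.norm(p1 - p2))

if __name__ == "__main__":
    fx, fy, cx, cy = 636.0, 636.0, 640.0, 360.0      # placeholder intrinsics
    withers = deproject(512, 200, 2.35, fx, fy, cx, cy)   # hypothetical points
    ground = deproject(516, 620, 2.40, fx, fy, cx, cy)
    print("approximate body height: %.3f m" % distance_3d(withers, ground))
```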

  • Topic--Intelligent Agricultural Sensor Technology
    SHU Hongwei, WANG Yuwei, RAO Yuan, ZHU Haojie, HOU Wenhui, WANG Tan
    Smart Agriculture. 2024, 6(1): 63-75. https://doi.org/10.12133/j.smartag.SA202311018

    [Objective] The investigation of plant photosynthetic phenotypes is essential for unlocking insights into plant physiological characteristics and dissecting morphological traits. However, traditional two-dimensional chlorophyll fluorescence imaging methods struggle to capture the complex three-dimensional spatial variations inherent in plant photosynthetic processes. To boost the efficacy of plant phenotyping and meet the increasing demand for high-throughput analysis of photosynthetic phenotypes, the development and validation of a novel plant photosynthetic phenotype imaging system was explored, which uniquely combines three-dimensional structured light techniques with chlorophyll fluorescence technology. [Methods] The plant photosynthetic phenotype imaging system was composed of three primary parts: a tailored light source and projector, a camera, and a motorized filter wheel fitted with filters of various bandwidths, in addition to a terminal unit equipped with a development board and a touchscreen interface. The system was based on the principles and unique characteristics of chlorophyll fluorescence and structured-light phase-shifted fringe 3D reconstruction techniques. It utilized the custom-designed light source and projector, together with the camera's capability to choose specific wavelength bands, to their full potential. The system employed low-intensity white light within the 400–700 nm spectrum to elicit stable fluorescence, with blue light in the 440–450 nm range optimally triggering the fluorescence response. A projector was used to project dual-frequency, twelve-step phase-shifted stripes onto the plant, enabling the capture of both planar and stripe images, which were essential for the reconstruction of the plant's three-dimensional structure. A motorized filter wheel containing filters for red, green, blue, and near-infrared light, together with a filter-free position used in coordination with the camera, facilitated the collection of images of plants at different wavelengths under varying lighting conditions. When illuminated with white light, filters corresponding to the red, green, and blue bands were applied to capture multiband images, resulting in color photographs that provided comprehensive documentation of the plant's visual features. Upon exposure to blue light, the near-infrared filter was employed to capture near-infrared images, yielding data on chlorophyll fluorescence intensity. During the structured-light stripe projection, no filter was applied, so as to obtain both planar and stripe images of the plant, which were then employed in the 3D morphological reconstruction of the plant. The terminal, incorporating a development board and a touch screen, served as the control hub for the data acquisition and subsequent image processing within the plant photosynthetic phenotypic imaging system. It enabled the switching of light sources and the selection of camera bands through a combination of command and serial port control circuits. Following image acquisition, the data were transmitted back to the development board for analysis, processing, storage, and presentation. To validate the accuracy of 3D reconstruction and the reliability of photosynthetic efficiency assessments by the system, a prototype of the plant photosynthetic phenotypic imaging system was developed using 3D structured light and chlorophyll fluorescence technology, in accordance with the aforementioned methods, serving as an experimental validation platform.
The accuracy of 3D reconstruction and the effectiveness of the photosynthetic analysis capabilities of this imaging system were further confirmed through the analysis and processing of the experimental results, with comparative evaluations conducted against conventional 3D reconstruction methods and traditional chlorophyll fluorescence-based photosynthetic efficiency analyses. [Results and Discussions] The imaging system utilized for plant photosynthetic phenotypes incorporates a dual-frequency phase-shift algorithm to facilitate the reconstruction of three-dimensional (3D) plant phenotypes. Simultaneously, plant chlorophyll fluorescence images were employed to evaluate the plant's photosynthetic efficiency. This method enabled the analysis of the distribution of photosynthetic efficiency within a 3D space, offering a significant advancement over traditional plant photosynthetic imaging techniques. The 3D phenotype reconstructed using this method exhibits high precision, with an overall reconstruction accuracy of 96.69%. The total error was merely 3.31%, and the time required for 3D reconstruction was only 1.11 s. A comprehensive comparison of the presented 3D reconstruction approach with conventional methods validated the accuracy of this technique, laying a robust foundation for the precise estimation of a plant's 3D photosynthetic efficiency. In the realm of photosynthetic efficiency analysis, the correlation coefficient between the photosynthetic efficiency values inferred from the chlorophyll fluorescence image analysis and those determined by conventional analysis exceeded 0.9. The experimental findings suggest a significant correlation between the photosynthetic efficiency values obtained using the proposed method and those from traditional methods, which could be characterized by a linear relationship, thereby providing a basis for more precise predictions of plant photosynthetic efficiency. [Conclusions] The method melds the 3D phenotype of plants with an analysis of photosynthetic efficiency, allowing for a more holistic assessment of the spatial heterogeneity in photosynthetic efficiency among plants by examining the pseudo-color images of the spatial distribution of chlorophyll fluorescence. This approach elucidates the discrepancies in photosynthetic efficiency across various regions. The plant photosynthetic phenotype imaging system affords an intuitive and comprehensive view of the photosynthetic efficiency in plants under diverse stress conditions. Additionally, it provides technical support for the analysis of the spatial heterogeneity of high-throughput photosynthetic efficiency in plants.
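    The wrapped-phase step at the heart of the N-step phase-shifting reconstruction mentioned above can be written in a few lines. The sketch below recovers the wrapped phase from N fringe images shifted by 2πk/N (the paper uses twelve steps per frequency); dual-frequency phase unwrapping and the projector-camera calibration are omitted, and the synthetic test data are only a self-check, not the authors' pipeline.

```python
# Minimal N-step phase-shifting sketch: recover the wrapped phase from
# fringe images I_k = A + B*cos(phi + 2*pi*k/N).
import numpy as np

def wrapped_phase(images):
    """images: array (N, H, W) of fringe images with phase shifts 2*pi*k/N."""
    n = images.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * k / n), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * k / n), axis=0)
    return -np.arctan2(num, den)          # wrapped phase in (-pi, pi]

if __name__ == "__main__":
    # synthesize 12 phase-shifted fringes of a known phase ramp as a check
    h, w, n = 4, 8, 12
    true_phase = np.linspace(-3.0, 3.0, w).reshape(1, w).repeat(h, axis=0)
    k = np.arange(n).reshape(-1, 1, 1)
    fringes = 0.5 + 0.5 * np.cos(true_phase + 2 * np.pi * k / n)
    print(np.allclose(wrapped_phase(fringes), true_phase, atol=1e-6))  # True
```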

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    ZHOU Huamao, WANG Jing, YIN Hua, CHEN Qi
    Smart Agriculture. 2023, 5(4): 117-126. https://doi.org/10.12133/j.smartag.SA202309024

    [Objective] Pleurotus geesteranus is a rare edible mushroom that is popular among consumers, cherished not only for its fresh taste and unique palate but also for its abundant nutritional elements. The phenotype of Pleurotus geesteranus is an important determinant of its overall quality, a specific expression of its intrinsic characteristics and its adaptation to various cultivated environments. It is crucial to select varieties with excellent shape, integrity, and resistance to cracking in the breeding process. However, there is still a lack of automated methods to measure these phenotype parameters. Manual measurement is not only time-consuming and labor-intensive but also subjective, which leads to inconsistent and inaccurate results. Thus, the traditional approach is unable to meet the demands of the rapidly developing Pleurotus geesteranus industry. [Methods] To solve the problems mentioned above, this study first utilized an industrial-grade camera (Daheng MER-500-14GM) and a commonly available smartphone (Redmi K40) to capture high-resolution images at the DongSheng mushroom industry (Jiujiang, Jiangxi province). After discarding blurred and repetitive images, a total of 344 images were collected, which included two common but distinct varieties, specifically Taixiu 57 and Gaoyou 818. A series of data augmentation algorithms, including rotation, flipping, mirroring, and blurring, were employed to construct a comprehensive Pleurotus geesteranus image dataset. In the end, the dataset consisted of 3 440 images and provided a robust foundation for the proposed phenotype recognition model. All images were divided into training and testing sets at a ratio of 8:2, ensuring a balanced distribution for effective model training. In the second part, based upon the foundational structure of the classical Mask R-CNN, an enhanced version specifically tailored for Pleurotus geesteranus phenotype recognition, aptly named PG-Mask R-CNN (Pleurotus geesteranus-Mask Region-based Convolutional Neural Network), was designed. The PG-Mask R-CNN network was refined through three approaches: 1) To take advantage of the attention mechanism, the SimAM attention mechanism was integrated into the third layer of the ResNet101 feature extraction network after careful analysis and comparison, which made it possible to enhance the network's performance without increasing the original network parameters. 2) To avoid the problem that the overly long feature pyramid path in Mask R-CNN separates low-level and high-level features, which may impair the semantic information of the high-level features and lose the positioning information of the low-level features, an improved feature pyramid network was used for multiscale fusion, allowing information from multiple levels to be amalgamated for prediction. 3) To address the limitation of the IoU (Intersection over Union) bounding box metric, which only considers the overlapping area between the prediction box and the target box while ignoring the non-overlapping area, a more advanced loss function based on GIoU (Generalized Intersection over Union) was introduced. This replacement improved the calculation of image overlap and enhanced the performance of the model.
Furthermore, to evaluate the crack state of Pleurotus geesteranus more scientifically, reasonably and accurately, the damage rate was introduced as a new crack quantification metric, calculated as the proportion of cracks in the complete pileus of the mushroom, and the mean relative error (MRE) was used to quantify the error of the measured damage rate. Thirdly, the PG-Mask R-CNN network was trained and tested on the Pleurotus geesteranus image dataset, and measurement and accuracy verification were conducted according to the detection and segmentation results. Finally, considering that it was difficult to determine the ground truth for the different shapes of Pleurotus geesteranus, the same method was used to test four standard blocks of different specifications, and the rationality of the proposed method was verified. [Results and Discussions] In the comparative analysis, the PG-Mask R-CNN model was superior to the GrabCut algorithm and four other instance segmentation models, namely YOLACT (You Only Look At CoefficienTs), InstaBoost, QueryInst, and Mask R-CNN. In object detection tasks, the experimental results showed that the PG-Mask R-CNN model achieved an mAP of 84.8% and an mAR (mean average recall) of 87.7%, respectively, higher than those of the five methods mentioned above. Furthermore, the MRE of the instance segmentation results was 0.90%, which was consistently lower than that of the other instance segmentation models. In addition, from a model size perspective, the PG-Mask R-CNN model had a parameter count of 51.75 M, which was slightly larger than that of the unimproved Mask R-CNN model but smaller than those of the other instance segmentation models. With the instance segmentation results on the pileus and crack, the MREs were 1.30% and 7.54%, respectively, while the MAE of the measured damage rate was 0.14%. [Conclusions] The proposed PG-Mask R-CNN model demonstrates high accuracy in identifying and segmenting the stipe, pileus, and cracks of Pleurotus geesteranus. Thus, it can support automated phenotype measurement of Pleurotus geesteranus, which lays a technical foundation for subsequent intelligent breeding, smart cultivation and grading of Pleurotus geesteranus.
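    The damage-rate metric described above is simple to state in code: the crack area is expressed as a proportion of the complete pileus area taken from the segmentation masks, and MRE compares predicted values against references. The sketch below is illustrative; the mask contents and example values are hypothetical rather than taken from the dataset.

```python
# Hedged sketch of the damage-rate metric and mean relative error (MRE).
import numpy as np

def damage_rate(crack_mask, pileus_mask):
    """Fraction of the pileus area occupied by cracks (both masks boolean)."""
    pileus_area = np.count_nonzero(pileus_mask)
    return np.count_nonzero(crack_mask & pileus_mask) / max(pileus_area, 1)

def mean_relative_error(pred, true):
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.mean(np.abs(pred - true) / true))

if __name__ == "__main__":
    pileus = np.zeros((64, 64), bool); pileus[8:56, 8:56] = True   # toy pileus mask
    crack = np.zeros((64, 64), bool); crack[30:34, 8:56] = True    # toy crack mask
    print("damage rate:", damage_rate(crack, pileus))
    print("MRE:", mean_relative_error([0.084], [0.083]))
```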

  • Topic--Smart Agricultural Technology and Machinery in Hilly and Mountainous Areas
    HE Qing, JI Jie, FENG Wei, ZHAO Lijun, ZHANG Bohan
    Smart Agriculture. 2024, 6(3): 82-93. https://doi.org/10.12133/j.smartag.SA202401010

    [Objective] The traditional predictive control approach usually employs a fixed time horizon and often overlooks the impact of changes in curvature and road bends. This oversight leads to subpar tracking performance and inadequate adaptability of robots when navigating curves and paths. Although extending the time horizon of the standard fixed-time-horizon model predictive control (MPC) can improve curve path tracking accuracy, it comes with high computational costs, making it impractical in situations with restricted computing resources. Consequently, an adaptive time horizon MPC controller was developed to meet the requirements of complex tasks such as autonomous mowing. [Methods] Initially, it was crucial to establish a kinematic model for the mowing robot, which required employing Taylor linearization and Euler method discretization techniques to ensure accurate path tracking. The prediction equation for the error model was derived after conducting a comprehensive analysis of the robot's kinematics model employed in mowing. Second, the size of the preview area was determined by utilizing the speed data and reference path information gathered from the mowing robot. The region located a certain distance ahead of the robot's current position was designated as the preview region, enabling a more accurate prediction of the robot's future traveling conditions. Calculations for both the curvature factor and the curvature change factor were carried out within this preview region. The curvature factor represented the initial curvature of the path, while the curvature change factor indicated the extent of curvature variation in this region. These two variables were then fed into a fuzzy controller, which adjusted the prediction time horizon of the MPC. This integration enabled the mowing robot to promptly adjust to changes in the path's curvature, thereby improving its accuracy in tracking the desired trajectory. Additionally, a novel technique for triggering MPC execution was developed to reduce computational load and improve real-time performance. This approach ensured that MPC activation occurred only when needed, rather than at every time step, resulting in reduced computational expense, especially during periods of smooth robot motion where unnecessary computation overhead could be minimized. By meeting kinematic and dynamic constraints, the optimization algorithm successfully identified an optimal control sequence, ultimately enhancing the stability and reliability of the control system. Consequently, this set of control algorithms facilitated precise path tracking while considering both kinematic and dynamic limitations in complex environments. [Results and Discussions] The adaptive time-horizon MPC controller effectively limited the maximum absolute heading error and maximum absolute lateral error to within 0.13 rad and 11 cm, respectively, surpassing the performance of the MPC controller in the control group. Moreover, the adaptive time-horizon MPC controller reduced the mean lateral error and heading error by 75.39% and 57.83% relative to the first group, and by 38.38% and 31.84% relative to the fourth group, respectively. Additionally, it demonstrated superior tracking accuracy, as evidenced by its significantly smaller absolute standard deviations of lateral error (0.025 6 m) and heading error (0.025 5 rad), outperforming all four fixed time-horizon MPC controllers tested in the study.
Furthermore, this adaptive approach ensured precise tracking and control capabilities for the mowing robot while maintaining a remarkably low average solution time of only 0.004 9 s, notably faster than that observed with the other control groups, reducing the computational load by approximately 10.9 ms compared with the maximum time-horizon MPC. [Conclusions] The experimental results demonstrated that the adaptive time-horizon MPC tracking approach effectively addressed the trade-off between control accuracy and computational complexity encountered in fixed time-horizon MPC. By dynamically adjusting the time horizon length and performing MPC calculations on an event-triggered basis, this approach can more effectively handle scenarios with restricted computational resources, ensuring superior control precision and stability. Furthermore, it achieves a balance between control precision and real-time performance in curve route tracking for mowing robots, offering a more practical and reliable solution for their practical application.
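    The preview-region idea described above, curvature statistics computed ahead of the robot and mapped to a prediction horizon, can be illustrated compactly. In the Python sketch below a simple clipped interpolation stands in for the paper's fuzzy controller, and all thresholds, horizon bounds, and the example path are hypothetical.

```python
# Hedged sketch of an adaptive prediction horizon driven by preview curvature.
# A clipped linear mapping replaces the fuzzy controller used in the paper.
import numpy as np

def path_curvature(xs, ys):
    """Discrete curvature kappa along a path given by x/y samples."""
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-9)

def adaptive_horizon(kappa_preview, n_min=10, n_max=30):
    """Longer horizon on curvier, faster-changing previews; shorter on straights."""
    curve_factor = np.mean(np.abs(kappa_preview))          # mean preview curvature
    change_factor = np.mean(np.abs(np.diff(kappa_preview)))  # curvature variation
    score = np.clip(curve_factor / 0.2 + change_factor / 0.05, 0.0, 1.0)
    return int(round(n_min + score * (n_max - n_min)))

if __name__ == "__main__":
    s = np.linspace(0, 10, 200)
    xs, ys = s, np.sin(0.5 * s)          # hypothetical reference path
    kappa = path_curvature(xs, ys)
    print("horizon on preview:", adaptive_horizon(kappa[:40]))
```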

  • Topic--Smart Agricultural Technology and Machinery in Hilly and Mountainous Areas
    ZHANG Xingshan, YANG Heng, MA Wenqiu, YANG Minli, WANG Haiyi, YOU Yong, HUI Yunting, GONG Zeqi, WANG Tianyi
    Smart Agriculture. 2024, 6(3): 58-68. https://doi.org/10.12133/j.smartag.SA202306008

    [Objective] Farmland consolidation for agricultural mechanization in hilly and mountainous areas can alter the landscape pattern, elevation, slope and microgeomorphology of cultivated land. It is of great significance to assess the ecological risk of cultivated land to provide data references for subsequent farmland consolidation for agricultural mechanization. This study aims to assess the ecological risk of cultivated land before and after farmland consolidation for agricultural mechanization in hilly and mountainous areas, and to explore the relationship between cultivated land ecological risk and cultivated land slope. [Methods] Twenty counties in Tongnan district of Chongqing city were selected as the assessment units. Based on land use data for the two periods of 2010 and 2020, ArcGIS 10.8 and Excel software were used to calculate landscape pattern indices. The weights for each index were determined by the entropy weight method, and an ecological risk assessment model was constructed and used to reveal the temporal and spatial change characteristics of ecological risk. Based on the principles of mathematical statistics, a correlation analysis between cultivated land ecological risk and cultivated land slope was carried out. [Results and Discussions] Compared with 2010, the patch density (PD), division (D), fractal dimension (FD), and edge density (ED) of cultivated land all decreased in 2020, while the mean patch size (MPS) increased, indicating an increase in the contiguity of cultivated land. The mean shape index (MSI) of cultivated land increased, indicating that the shape of cultivated land tended to become more complicated. The landscape disturbance index (U) decreased from 0.97 to 0.94, indicating that the overall resistance of cultivated land to disturbances increased. The landscape vulnerability index (V) increased from 2.96 to 3.20, indicating that the structure of cultivated land became more fragile. The ecological risk value of cultivated land decreased from 3.10 to 3.01, indicating that the farmland consolidation for agricultural mechanization effectively improved the landscape pattern of cultivated land and enhanced the safety of the agricultural ecosystem. During the two periods, the ecological risk areas were primarily composed of low-risk and relatively low-risk zones. The area of low-risk zones increased by 6.44%, mainly expanding towards the northern part, while the area of relatively low-risk zones increased by 6.17%, primarily spreading towards the central-eastern and southeastern parts. The area of moderate-risk zones increased by 24.4%, mainly extending towards the western and northwestern parts, while the area of relatively high-risk zones decreased by 60.70%, with some new additions spreading towards the northeastern part. The area of high-risk zones increased by 16.30%, with some new additions extending towards the northwestern part. Overall, the ecologically safe zones of cultivated land increased in relative terms. The cultivated land slope was primarily concentrated in the range of 2° to 25°. When the cultivated land slope was less than 15°, the proportion of the slope area was negatively correlated with the ecological risk value; when the slope was above 15°, the proportion of the slope area was positively correlated with the ecological risk value.
In 2010, there was a highly significant correlation between the proportion of slope area and the ecological risk value for cultivated land slopes within the ranges of 5° to 8°, 15° to 25°, and above 25°, with corresponding correlation coefficients of 0.592, 0.609, and 0.849, respectively. In 2020, there was a highly significant correlation between the proportion of slope area and the ecological risk value for cultivated land slopes within the ranges of 2° to 5°, 5° to 8°, 15° to 25°, and above 25°, with corresponding correlation coefficients of 0.534, 0.667, 0.729, and 0.839, respectively. [Conclusions] The assessment of cultivated land ecological risk in Tongnan district of Chongqing city before and after the farmland consolidation for agricultural mechanization, together with the analysis of the correlation between ecological risk and cultivated land slope, demonstrates that farmland consolidation for agricultural mechanization can reduce cultivated land ecological risk, and that the proportion of cultivated land in each slope class can be an important basis for precision guidance in farmland consolidation for agricultural mechanization. Considering the occurrence of moderate sheet erosion from a slope of 5° and intense erosion from a slope of 10° to 15°, and taking into account the reduction of the ecological risk value and the actual topographic conditions, subsequent farmland consolidation for agricultural mechanization in Tongnan district should focus on areas with cultivated land slopes ranging from 5° to 8° and 15° to 25°.
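    The entropy weight method used above to weight the landscape indices follows a standard recipe: normalize each index across assessment units, compute its information entropy, and assign weights proportional to one minus the entropy. The Python sketch below illustrates that recipe on a hypothetical matrix of 20 units by 5 indices; it is not the study's data or exact preprocessing.

```python
# Hedged sketch of the entropy weight method for indicator weighting.
import numpy as np

def entropy_weights(X):
    """X: (n_units, n_indices) matrix of indicator values."""
    X = np.asarray(X, float)
    norm = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12) + 1e-12  # min-max scaling
    p = norm / norm.sum(axis=0)                        # share of each unit per index
    entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(X))
    d = 1.0 - entropy                                  # degree of divergence
    return d / d.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    indices = rng.random((20, 5))                      # 20 units, 5 landscape indices
    print("weights:", np.round(entropy_weights(indices), 3))
```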

  • Special Issue--Agricultural Information Perception and Models
    ZHANG Yue, LI Weijia, HAN Zhiping, ZHANG Kun, LIU Jiawen, HENKE Michael
    Smart Agriculture. 2024, 6(2): 140-153. https://doi.org/10.12133/j.smartag.SA202310011

    [Objective] The daylily, a perennial herb in the lily family, boasts a rich nutritional profile. Given its economic importance, enhancing its yield is a crucial objective. However, current research on daylily cultivation is limited, especially regarding three-dimensional dynamic growth simulation of daylily plants. In order to establish a technological foundation for improved cultivation management, growth dynamics prediction, and the development of plant variety types in daylily crops, this study introduces an innovative three-dimensional dynamic growth and yield simulation model for daylily plants. [Methods] The open-source GroIMP software platform was used to simulate and visualize three-dimensional scenes. With Datong daylily, the primary cultivated variety of daylily in the Datong area, as the research subject, a field experiment was conducted from March to September 2022, covering the growth season of daylily. Through measurements in an actual cultivation experiment, morphological data and leaf photosynthetic physiological parameters of daylily leaves, flower stems, flower buds, and other organs were collected. The three-dimensional modeling technology of the functional-structural plant model (FSPM) platform was employed to establish the cloud cover-based solar radiation models (CSRMs) and the Farquhar, von Caemmerer, and Berry model (FvCB model) suitable for daylily. Moreover, based on the source-sink relationship of daylily, a carbon allocation model of daylily photosynthetic products was developed. By using the β growth function, the growth simulation model of daylily organs was constructed, and the daily morphological data of daylily during the growth period were calculated, achieving the three-dimensional dynamic growth and yield simulation of daylily plants. Finally, the model was validated with measured data. [Results and Discussions] The coefficient of determination (R2) between the measured and simulated outdoor surface solar radiation was 0.87, accompanied by a root mean squared error (RMSE) of 28.52 W/m2. For the simulation model of each organ of the daylily plant, the R2 of the measured against the predicted values ranged from 0.896 to 0.984, with an RMSE varying between 1.4 and 17.7 cm. The R2 of the average flower bud yield simulation was 0.880, accompanied by an RMSE of 0.5 g. The overall F-value spanned from 82.244 to 1 168.533, while the Sig. value was consistently below the 0.05 significance level, suggesting a robust fit and statistical significance for the aforementioned models. Subsequently, a thorough examination of the light interaction, temperature influences, and photosynthetic attributes of daylily leaves throughout their growth cycle was carried out. The findings revealed that vegetative growth of the leaves played a pivotal role in the early phase of daylily's growth, followed by the contribution of the leaves and flower stems in the middle stage, and finally the growth of daylily flower buds, the crucial period for yield formation, in the later stages. Analyzing the photosynthetic traits of daylily leaves comprehensively, it was observed that the photosynthetic rate was relatively low in the early spring, as the new leaves were just emerging, and reached a plateau during the summer. Considering real-world climate conditions, the actual net photosynthetic rate was marginally lower than the rate verified under optimal conditions, with the simulated net assimilation rate typically ranging from 2 to 4 μmol CO2/(m2·s).
[Conclusions] The three-dimensional dynamic growth model of daylily plants proposed in this study can faithfully articulate the growth laws and morphological traits of daylily plants across the three primary growth stages. This model not only illustrates the three-dimensional dynamic growth of daylily plants but also effectively mimics the yield data of daylily flower buds. The simulation outcomes concur with actual conditions, demonstrating a high level of reliability.
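    The β growth function mentioned in the methods is often written, in FSPM work, as W(t) = Wmax·(1 + (te − t)/(te − tm))·(t/te)^(te/(te − tm)), where te is the time at which growth ends and tm the time of maximum growth rate. The sketch below evaluates that common form; whether the paper uses exactly this parameterisation is an assumption, and the parameter values are placeholders.

```python
# Hedged sketch of a beta growth function of the form commonly used in
# functional-structural plant modelling; parameter values are hypothetical.
import numpy as np

def beta_growth(t, w_max, t_e, t_m):
    """Organ size/weight at time t; t_e = end of growth, t_m = max growth rate."""
    t = np.clip(np.asarray(t, float), 0.0, t_e)
    return w_max * (1.0 + (t_e - t) / (t_e - t_m)) * (t / t_e) ** (t_e / (t_e - t_m))

if __name__ == "__main__":
    days = np.arange(0, 121, 10)                       # days after emergence
    print(np.round(beta_growth(days, w_max=35.0, t_e=120.0, t_m=70.0), 2))
```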

  • Topic--Intelligent Agricultural Sensor Technology
    DONG Shanshan, ZHANG Fengqiu, XIA Qi, LI Jialin, LIU Chao, LIU Shaowei, CHEN Xiangyu, WANG Rujing, HUANG Qing
    Smart Agriculture. 2024, 6(1): 101-110. https://doi.org/10.12133/j.smartag.SA202311010

    [Objective] The use of pesticides is one of the root causes of food safety problems. Pesticide exposure and pesticide residues can not only lead to environmental pollution issues but also seriously affect human health. In order to meet the need for rapid and sensitive detection of pesticide residues in agricultural products, a method for preparing silver nanoparticles (AgNPs) based on lemon juice reduction is reported in this research. [Methods] First, fresh lemon juice was filtered through filter paper and diluted to a 2% lemon juice aqueous solution. Then, an AgNO3 solution of a certain concentration and a 50 mM NaOH solution were prepared and stored at room temperature. Next, 10 mL of ddH2O, 2 mL of NaOH solution, 2 mL of 2% lemon juice, and 5 mL of AgNO3 solution were mixed. When the solution turned a clear yellow color, it was centrifuged to obtain AgNPs. The morphology and structure of the AgNPs were observed by transmission electron microscopy (TEM). In order to verify the successful synthesis of the nanoparticles and their distribution characteristics, ultraviolet spectroscopy was used for measurement and analysis, and 4-ATP was used as a surface-enhanced Raman scattering (SERS) probe to preliminarily verify the SERS enhancement performance of the AgNPs. Furthermore, the content of the main reducing components in lemon juice, namely ascorbic acid, glucose, and fructose, was analyzed. The content of ascorbic acid in lemon juice was determined by high-performance liquid chromatography, and the contents of glucose and fructose in lemon juice were determined by UV-visible spectrophotometry. To verify the stability and uniformity of the SERS signal of the nanoparticles, 4-ATP was used as the SERS probe for detection and analysis. The stability of the SERS performance of the colloidal substrate over 41 days and the SERS performance at temperatures ranging from 0~80 °C were analyzed. Using 4-ATP as the SERS probe, the experimental conditions for the preparation of AgNPs by the lemon juice method, including pH and AgNO3 concentration, were optimized. To validate the practical usability of the nanoparticles, solutions of paraquat and carbendazim were detected by SERS, and the detection limits of pesticide residues on different fruits and vegetables were determined. [Results and Discussions] The method for preparing AgNPs has the advantages of simple operation and green, easy synthesis. The particle morphology and size of the prepared AgNPs were basically uniform, with a size of about 20 nm. The ultraviolet-visible spectrum of the AgNPs solution showed that the absorption peak was at about 400 nm and the peak shape was narrow, indicating that the colloidal solution had good homogeneity. The detection limit of 4-ATP as the SERS probe was 10⁻¹⁴ M, indicating that the nanoparticles had a good SERS enhancement effect. In addition, the content of ascorbic acid, the main reducing ingredient in lemon juice, measured by high-performance liquid chromatography (HPLC) was 395.76 μg/mL. The contents of glucose and fructose, which were also main reducing components in lemon juice, were 5.95 and 5.90 mg/mL, respectively. Furthermore, the characterization and analysis results of the AgNPs prepared with a mixed reducing solution formulated according to the measured concentration of each component showed that the AgNPs obtained were also uniform in morphology and size, with a diameter of about 20 nm, but their SERS enhancement performance was not as good as that of the AgNPs reduced by lemon juice.
Analysis of the SERS signal uniformity of the AgNPs reduced by lemon juice showed that the peak intensity of the SERS spectra of 4-ATP measured 15 times at different sites at the same concentration did not change significantly, with a relative standard deviation (RSD) of 5.03%, which was much lower than the inter-substrate RSD value (<16%) required of a qualified new SERS-active substrate for quantitative analysis. Analysis of the temporal and temperature stability of the AgNPs showed that the nanoparticles still exhibited SERS enhancement after 41 days of storage and retained stable SERS enhancement performance over a wide temperature range (0~80 °C). In addition, the optimization of the experimental conditions showed that the optimal pH for the preparation of AgNPs was around 7.5, and the optimal range of AgNO3 concentration was 1.76×10⁻⁴~3.33×10⁻⁴ mol/L. Finally, using the AgNPs prepared by the lemon juice reduction method for SERS detection of pesticide residues on the surfaces of fruits and vegetables, the detection limits for paraquat and carbendazim in solution were as low as 10⁻¹⁴ and 10⁻¹⁰ M, respectively, and the concentration of the pesticides showed a good linear relationship with the Raman spectral intensity. The lowest detection limits for paraquat and carbendazim residues on different fruits and vegetables were as low as 3.90 ng/kg and 0.22 µg/kg, respectively. [Conclusions] This work provides a green and convenient method for preparing SERS materials for the rapid detection of pesticide residues on fruits and vegetables. The method has practical value and is easy to operate universally. The prepared AgNPs can be used for trace pesticide residue detection, providing a pathway for rapid and sensitive detection of pesticide residues in agricultural products.
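    The substrate-uniformity figure quoted above is a relative standard deviation over repeated peak-intensity measurements, which is straightforward to compute. The sketch below shows the calculation on 15 hypothetical peak intensities; the values are placeholders, not the paper's measurements.

```python
# Hedged sketch of the RSD uniformity check on repeated SERS peak intensities.
import numpy as np

def relative_standard_deviation(values):
    """RSD (%) = 100 * sample standard deviation / mean."""
    values = np.asarray(values, float)
    return 100.0 * values.std(ddof=1) / values.mean()

if __name__ == "__main__":
    peak_intensity = np.array([1010, 985, 1042, 998, 1021, 975, 1033, 1008,
                               992, 1019, 1005, 987, 1026, 1001, 1014])  # 15 sites (hypothetical)
    print(f"RSD = {relative_standard_deviation(peak_intensity):.2f} %")
```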

  • Topic--Intelligent Agricultural Sensor Technology
    ZHANG Qing, LI Yang, YOU Yong, WANG Decheng, HUI Yunting
    Smart Agriculture. 2024, 6(1): 111-122. https://doi.org/10.12133/j.smartag.SA202306010

    [Objective] During the operation of a silage machine, the inclusion of ferrous metal foreign objects such as stray iron wires can inflict severe damage on the machine's critical components and on livestock organs. To safeguard against this, a high-performance metal detection system was developed in this research to enable precise and efficient identification of metal foreign bodies during field operations, ensuring the integrity of the silage process and the well-being of the animals. [Methods] The ferrous metal detection principle of the silage machine was analyzed first. The detection coil served as the probe of the metal detection system; after being connected in parallel with a capacitor, it was connected to the detection module. The detection coil received the alternating signal generated by the detection module and produced an alternating magnetic field. When a metal object entered the magnetic field, it changed the equivalent resistance and equivalent inductance of the detection coil. The detection module detected these changes and transmitted the signal to the control module through the serial peripheral interface (SPI). The control module filtered the signal and transmitted it to the display terminal through the serial port. The display terminal could set the alarm threshold; when the data exceeded the threshold, the system triggered an audible and visual alarm and other protective actions (a sketch of this threshold logic follows this abstract). The hardware of the metal detection system was designed first. Calculations for the planar spiral coil and the cylindrical coil were carried out, and the planar spiral coil was selected as the research object. Using the non-dominated sorting genetic algorithm II (NSGA-II) combined with finite element simulation analysis, the wire diameter, inner diameter, outer diameter, number of layers, and operating frequency of the coil were determined, and calculations for the bent coil, the unbent coil, and the array coil were carried out. The hardware system was then integrated. The software for the metal detection system was also designed, using an STM32 microcontroller as the control module and LabVIEW to write the main program on the upper computer. The system continuously displayed the read data and the time-equivalent-impedance graph in real time and allowed upper and lower alarm thresholds to be set. When a metal foreign object was detected, the warning light turned red and an alarm sounded, causing the feed roll to stop. To simulate metal detection during the operation of a silage machine, a test bench was set up to validate the performance of the metal detection system. [Results and Discussions] The metal detection tests showed that, for a metal wire with a diameter of 0.6 mm and a length of 20 mm, the maximum alarm distance first increased and then decreased as the inner diameter of the detection coil increased. The maximum alarm distance occurred at an inner diameter of 35 mm, which was consistent with the optimization result. The maximum alarm distance was largest when the detection coil had two layers, and no data could be read out with three layers; therefore, the optimal coil thickness for this system was two layers. When the detection distance exceeded 80 mm, the alarm rate began to decrease and the detection effect weakened.
When the detection distance was within 70 mm, the metal detection system achieved a 100% alarm rate. The response time tests showed that the average system response time was 0.105 0 s, less than the safe transport time of 0.202 0 s, so the system could give an alarm before a metal foreign object reached the cutter; the system was therefore safe and effective. [Conclusion] In this study, a metal detection system for silage machines was designed. A set of optimization methods for the detection coil was proposed, the corresponding hardware and software were developed, and the functions of the metal detection system were verified through experiments, providing strong technical support for the safe operation of silage machines.
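To make the alarm logic concrete, the sketch below models the threshold check on the coil's equivalent impedance. The SPI read is represented by a placeholder stub, and the baseline values and thresholds are illustrative assumptions, not the system's actual settings.

    # Hypothetical sketch of the threshold alarm logic: the detection module reports the coil's
    # equivalent resistance and inductance over SPI; the controller alarms when either value
    # deviates beyond user-set thresholds. read_equivalent_impedance() is a placeholder stub.
    import random
    import time

    def read_equivalent_impedance():
        """Placeholder for the SPI read: (equivalent resistance in ohm, equivalent inductance in uH)."""
        return 12.0 + random.gauss(0, 0.05), 220.0 + random.gauss(0, 0.5)

    # Baseline values with no metal present, and alarm thresholds set from the display terminal
    R_BASE, L_BASE = 12.0, 220.0
    R_THRESH, L_THRESH = 0.5, 3.0   # allowed deviation before alarming (illustrative)

    def check_for_metal() -> bool:
        r, l = read_equivalent_impedance()
        if abs(r - R_BASE) > R_THRESH or abs(l - L_BASE) > L_THRESH:
            # In the real system this would drive the warning light, sound the buzzer,
            # and stop the feed roll.
            print(f"ALARM: dR={r - R_BASE:+.2f} ohm, dL={l - L_BASE:+.2f} uH")
            return True
        return False

    if __name__ == "__main__":
        for _ in range(5):
            check_for_metal()
            time.sleep(0.1)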

  • Topic--Technological Innovation and Sustainable Development of Smart Animal Husbandry
    FAN Mingshuo, ZHOU Ping, LI Miao, LI Hualong, LIU Xianwang, MA Zhirun
    Smart Agriculture. 2024, 6(4): 103-115. https://doi.org/10.12133/j.smartag.SA202312016

    [Objective] Manual disinfection in large-scale sheep farms is laborious and time-consuming, and often results in incomplete coverage and inadequate disinfection. With the rapid development of artificial intelligence and automation technology, automatic navigation and spraying robots for livestock and poultry breeding have become a research hotspot. To maintain shed hygiene and ensure sheep health, an automatic navigation and spraying robot for sheep sheds was proposed. [Methods] The robot was designed with a focus on three aspects: hardware, the semantic segmentation model, and the control algorithms. The hardware consisted of a tracked chassis, cameras, and a collapsible spraying device. For the semantic segmentation model, the lightweight ENet model was enhanced: residual structures were added to prevent network degradation, and a squeeze-and-excitation network (SENet) attention mechanism was incorporated into the initialization module to capture global features while the feature map resolution was still high, addressing precision issues. The original 6-layer ENet network was reduced to 5 layers to balance the encoder and decoder. Drawing inspiration from dilated spatial pyramid pooling, a context convolution module (CCM) was introduced to improve scene understanding, and a criss-cross attention (CCA) mechanism was adapted to acquire global context features at different scales without cascading, reducing information loss. The result was the double attention ENet (DAENet) semantic segmentation model, which achieves real-time and accurate segmentation of sheep shed surfaces. Regarding the control algorithms, a method was devised to address the robot's difficulty in controlling its direction at junctions. Lane recognition and lane center-point identification algorithms were proposed to identify and mark navigation points during the robot's movement outside the sheep shed by simulating real roads. Two cameras were employed, and a camera switching algorithm was developed to enable seamless switching between them while also controlling the spraying device. Additionally, an offset and velocity calculation algorithm was proposed to control the speeds of the robot's left and right tracks, enabling control over the robot's movement, stopping, and turning (a sketch of this offset-to-track-speed rule follows this abstract). [Results and Discussions] The DAENet model achieved a mean intersection over union (mIoU) of 0.945 3 in image segmentation tasks, meeting the required segmentation accuracy. During testing of the camera switching algorithm, the time taken for the complete transition from camera switch to spraying device action did not exceed 15 s when road conditions changed. Testing of the center-point and offset calculation algorithm revealed that, when processing multiple frames of video streams, the algorithm averaged 0.04 to 0.055 s per frame, achieving frame rates of 20 to 24 frames per second and meeting real-time operational requirements. In field experiments conducted on a sheep farm, the robot successfully completed automatic navigation and spraying tasks in two sheds without colliding with roadside troughs. The deviation from the road and lane centerlines did not exceed 0.3 m. Operating at a travel speed of 0.2 m/s, the liquid in the medicine tank was adequate to complete the spraying tasks for both sheds.
Additionally, the time taken for the complete transition from camera switch to spraying device action did not exceed 15 s when road conditions changed. The robot maintained an average frame rate of 22.4 frames per second during operation, meeting the experimental requirements for accurate and real-time information processing. Observation indicated that the spraying coverage rate of the robot exceeded 90%, meeting the experimental coverage requirements. [Conclusions] The proposed automatic navigation and spraying robot, based on the DAENet semantic segmentation model and the center-point recognition algorithm, combined with the hardware design and control algorithms, achieves comprehensive disinfection within sheep sheds while ensuring safety and real-time operation.
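The sketch below illustrates one possible offset-to-track-speed rule for a differential (tracked) chassis: the detected lane center point yields a lateral offset, and the left and right track speeds are adjusted in opposite directions to steer back to the centerline. The proportional gain and speed limit are assumptions for illustration; only the 0.2 m/s base speed comes from the abstract.

    # Hypothetical sketch of an offset-to-track-speed rule for a differential (tracked) chassis.
    # Gains and limits are illustrative assumptions, not the paper's actual parameters.

    BASE_SPEED = 0.2      # m/s, nominal travel speed reported in the field test
    K_OFFSET = 0.4        # proportional steering gain (assumed)
    MAX_SPEED = 0.3       # m/s, per-track speed limit (assumed)

    def track_speeds(offset_m: float) -> tuple[float, float]:
        """Map a lateral offset (m, positive = robot is right of the centerline)
        to (left_track_speed, right_track_speed) in m/s."""
        correction = K_OFFSET * offset_m
        left = BASE_SPEED - correction
        right = BASE_SPEED + correction
        # Clamp to the drivable range; equal speeds mean straight travel.
        clamp = lambda v: max(-MAX_SPEED, min(MAX_SPEED, v))
        return clamp(left), clamp(right)

    print(track_speeds(0.0))    # on the centerline: both tracks at the base speed
    print(track_speeds(0.15))   # right of the centerline: left track slows, right speeds up -> steer left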

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    XU Jishuang, JIAO Jun, LI Miao, LI Hualong, YANG Xuanjiang, LIU Xianwang, GUO Panpan, MA Zhirun
    Smart Agriculture. 2023, 5(4): 127-136. https://doi.org/10.12133/j.smartag.SA202308001

    [Objective] A key challenge for harmless treatment centers for sick and dead animals is to prevent secondary environmental pollution, especially during the transport of the animals from cold storage to intelligent treatment facilities. To solve this problem and realize intelligent equipment for transporting sick and dead animals from the storage cold store to the harmless treatment equipment, it is necessary to conduct in-depth research on the key technical problems of path planning and autonomous walking for transport robots. [Methods] The A* algorithm is widely adopted for robot path planning in indoor environments, but the traditional A* algorithm has several problems, such as many inflection points, poor smoothness, long calculation time, and many traversed nodes. To solve these problems, a path planning method for transport robots in the harmless treatment of sick and dead animals based on an improved A* algorithm was constructed, together with a motion control method based on fuzzy proportional-integral-derivative (PID) control. The Manhattan distance was used as the heuristic function of the A* algorithm, improving the efficiency of estimating the distance between the current node and the goal during path planning. Referring to the actual layout of the harmless treatment site, a vector cross product was computed between the vector from the starting point to the goal and the vector from the current position to the goal; this value was added to the heuristic and adjusted dynamically, thereby changing the value of the heuristic function. To further improve planning efficiency and reduce the number of nodes searched, a weight was applied to the heuristic function according to the local situation: when the current node was in a relatively open area, the weight was increased to speed up the search, and near corners and similar situations the weight was reduced to improve the reliability of the path (a sketch of this weighted-heuristic idea follows this abstract). With the improved heuristic, a path from the start to the goal was obtained quickly, but the resulting path was not smooth enough; moreover, during tracking the robot would need to accelerate and decelerate frequently to follow it, wasting energy. Therefore, Bézier curves of different orders were introduced at the inflection points and control points of the path to smooth it for practical use. By analyzing the kinematics of the robot, the differential motion of the tracked chassis was clarified. On this basis, a walking control algorithm based on fuzzy PID control was studied and proposed. Based on the actual operating state of the robot, the fuzzy rule conditions were recorded in a fuzzy control rule table, achieving online identification of the robot's characteristic parameters and adjustment of its angular velocity deviation. When the robot controller received a fuzzy PID control signal, the commanded angular velocity was converted into motor rotation signals that changed the speeds of the motors on both sides of the robot, achieving differential control and adjusting the robot's steering.
[Results and Discussions] Simulation experiments were conducted on the constructed environment map, verifying the effectiveness of the path planning method based on the improved A* algorithm for the harmless treatment of sick and dead animals. Comparative experiments between the traditional A* algorithm and the improved algorithm were conducted. The results showed that the average number of traversed nodes of the improved A* algorithm decreased from 3 067 to 1 968, and the average running time decreased from 20.34 s to 7.26 s. On-site experiments further verified the effectiveness and reliability of the algorithm. Different colors were used to mark the planned paths, and optimization comparison experiments were conducted on large-angle inflection points, U-shaped inflection points, and continuous inflection points, verifying the smoothing effect of the Bézier curves. The results showed that the Bézier-optimized path was smoother and more suitable for the robot to follow in practical scenarios. Fuzzy PID path-tracking experiments showed that the transport robot stayed close to the planned route during both straight-line and turning driving, demonstrating the good path-tracking performance of fuzzy PID control. Further experiments at the harmless treatment center verified the effectiveness and practical applicability of the improved algorithm. Based on the path planning algorithm, the robot's driving path was planned quickly, and the fuzzy PID control algorithm accurately output the angular velocity that drove the robot. The transport robot quickly planned the transport path and, during driving, always stayed close to the planned path, with the deviation error kept within a controllable range. [Conclusions] A path planning method for transport robots in the harmless treatment of sick and dead animals, based on an improved A* algorithm combined with fuzzy PID motion control, was proposed in this study. The method effectively shortens path planning time, reduces traversed nodes, and improves the efficiency and smoothness of path planning.
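The sketch below is a minimal grid A* with a weighted Manhattan heuristic and a cross-product tie-breaker, illustrating the two ideas described above. The grid, weight value, and tie-breaking factor are illustrative assumptions, not the paper's exact settings, and the Bézier smoothing step is omitted.

    # Minimal sketch: grid A* with a weighted Manhattan heuristic and a cross-product tie-breaker.
    import heapq

    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def a_star(grid, start, goal, weight=1.2):
        """grid: 2D list, 0 = free, 1 = obstacle. Returns a list of cells or None."""
        rows, cols = len(grid), len(grid[0])
        open_heap = [(0.0, start)]
        g = {start: 0.0}
        parent = {start: None}
        while open_heap:
            _, cur = heapq.heappop(open_heap)
            if cur == goal:
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = parent[cur]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                    continue
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    parent[nxt] = cur
                    # Cross product of (start->goal) and (nxt->goal); a small penalty
                    # keeps the search close to the straight start-goal line.
                    cross = abs((goal[0] - start[0]) * (goal[1] - nxt[1])
                                - (goal[1] - start[1]) * (goal[0] - nxt[0]))
                    h = weight * manhattan(nxt, goal) + 0.001 * cross
                    heapq.heappush(open_heap, (ng + h, nxt))
        return None

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 3)))

Raising the weight above 1 makes the search greedier (fewer expanded nodes, possibly slightly longer paths), which mirrors the open-area/corner weighting strategy described in the abstract.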

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    CHEN Dandan, ZHANG Lijie, JIANG Shuangfeng, ZHANG En, ZHANG Jie, ZHAO Qing, ZHENG Guoqing, LI Guoqiang
    Smart Agriculture. 2023, 5(4): 68-78. https://doi.org/10.12133/j.smartag.SA202307004

    [Objective] The InterPlanetary File System (IPFS) is a peer-to-peer distributed file system that aims to establish a global, open, and decentralized network for storage and sharing. Combining IPFS with blockchain technology can alleviate the storage pressure on the blockchain. The supply chain for plantation agricultural products has distinct features, including long production cycles, multiple heterogeneous data sources, and relatively fragmented production, which readily lead to information gaps and opacity throughout the supply chain. In the traceability process, sensitive data are prone to leakage and lack security; moreover, because the supply chain is long, traceability data are often stored across multiple blocks, requiring frequent block-tracing operations during a query and resulting in low efficiency. Consequently, the aim of this study was to fully exploit the decentralized nature of blockchain, safeguard the privacy of sensitive data, and alleviate the storage strain on the blockchain. [Methods] A traceability model for plantation agricultural products was developed, leveraging the Hyperledger Fabric consortium chain and IPFS. Based on data type, traceability data were categorized into structured and unstructured data. Since blockchain ledgers are not well suited to directly storing unstructured data such as images and videos, unstructured data were persisted in IPFS to relieve the storage strain on the blockchain, while structured data remained within the blockchain ledger. Based on data privacy, traceability data were categorized into public data and sensitive data: public data were stored in the public ledger of Hyperledger Fabric, while sensitive data were stored in a Hyperledger Fabric private data collection. This approach allowed efficient data access while maintaining data security, enhancing traceability efficiency. Hyperledger Fabric was the foundational platform for developing the prototype system. The front-end website was based on the TCP/IP protocol stack, the website visualization was implemented with the React framework, and the smart contracts were written in Java. The performance of the application-layer interfaces was tested with Postman. [Results and Discussions] The blockchain-based traceability system for plantation agricultural products was structured into a five-tier architecture consisting, from the top, of the application layer, gateway layer, contract layer, consensus layer, and data storage layer. The primary users at the application layer were the enterprises involved in each stage of the traceability process and consumers. The gateway layer served as middleware between users and the blockchain, mainly providing interface support for the application-layer front end. The contract layer included smart contracts for planting, processing, warehousing, transportation, and sales. The consensus layer used the EtcdRaft consensus algorithm. The data storage layer was divided into on-chain storage in the blockchain ledger and off-chain storage in the IPFS cluster. In terms of data type, each piece of traceability data was categorized into structured and unstructured data items.
Unstructured data were stored in the IPFS cluster, and the returned content identifiers (CIDs) were written into the blockchain nodes of the traceability system together with the structured data items. For data privacy, smart contracts were employed to separate public and sensitive data, with public data written directly onto the blockchain and sensitive data, following predefined sharing policies, stored in the designated Hyperledger Fabric private data collection (this partition logic is sketched after this abstract). For user queries, consumers could retrieve product traceability information via a traceability system overseen by a reputable authority. The developed website consisted of three parts: a login section; a section for enterprises in each link to manage agricultural product circulation information and user data; and a traceability data query section for consumers. When using the synchronous and asynchronous application programming interfaces (APIs), the average on-chain data latency was 2 138.9 and 37.6 ms, respectively, and the average data query latency was 12.3 ms. Blockchain, as the foundational data storage technology, enhances the credibility and transaction efficiency of agricultural product traceability. [Conclusions] This study designed and implemented a traceability model for plantation agricultural products leveraging Hyperledger Fabric private data collections and an IPFS cluster. The model ensured secure sharing and storage of traceability data, particularly sensitive data, across all stages. Compared with traditional centralized traceability models, it enhanced the reliability of the traceability data. Evaluation with the experimental system showed that the proposed model effectively safeguarded the privacy of enterprises' sensitive data and offered high efficiency in data on-chaining and querying. Applicable to real-world traceability of plantation agricultural products, it shows potential for wide application and promotion and offers fresh insights for designing blockchain traceability models in this sector. The model is still in its experimental phase and has not yet been applied across various types of crops. The next step is to apply the model in real-world scenarios, continually improve its efficiency, refine the model, advance the practical application of blockchain technology, and lay a foundation for agricultural modernization.
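The sketch below illustrates the data-partition logic described above: unstructured files go to IPFS and only their CIDs go on-chain, while structured fields are split into public data (shared ledger) and sensitive data (a private data collection). The functions ipfs_add, put_public_state, and put_private_data are placeholder stubs standing in for the real IPFS and Fabric chaincode calls; the field names and collection name are illustrative.

    # Hypothetical sketch of the traceability data partition; all stubs are placeholders,
    # not real IPFS or Hyperledger Fabric client calls.
    from dataclasses import dataclass, field

    def ipfs_add(file_path: str) -> str:
        """Placeholder: upload a file to the IPFS cluster and return its CID."""
        return "Qm..." + file_path  # illustrative only

    def put_public_state(key: str, value: dict) -> None:
        """Placeholder: write public structured data to the Fabric world state via chaincode."""
        print("public on-chain:", key, value)

    def put_private_data(collection: str, key: str, value: dict) -> None:
        """Placeholder: write sensitive data to a Fabric private data collection."""
        print(f"private [{collection}]:", key, value)

    @dataclass
    class TraceRecord:
        batch_id: str
        public_fields: dict                               # e.g. planting date, variety, location
        sensitive_fields: dict                            # e.g. purchase price, supplier contract
        media_files: list = field(default_factory=list)   # images / videos of the batch

    def submit_record(rec: TraceRecord) -> None:
        # 1. Unstructured data: store in IPFS, keep only the CIDs for the ledger.
        cids = [ipfs_add(p) for p in rec.media_files]
        # 2. Public structured data (plus CIDs) goes to the shared ledger.
        put_public_state(rec.batch_id, {**rec.public_fields, "media_cids": cids})
        # 3. Sensitive data goes to the private data collection defined by the sharing policy.
        put_private_data("collectionSensitive", rec.batch_id, rec.sensitive_fields)

    submit_record(TraceRecord("BATCH-001",
                              {"crop": "wheat", "planting_date": "2023-03-10"},
                              {"purchase_price": 2.35},
                              ["field_photo.jpg"]))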

  • Topic--Smart Agricultural Technology and Machinery in Hilly and Mountainous Areas
    MU Xiaodong, YANG Fuzeng, DUAN Luojia, LIU Zhijie, SONG Zhuoying, LI Zonglin, GUAN Shouqing
    Smart Agriculture. 2024, 6(3): 1-16. https://doi.org/10.12133/j.smartag.SA202312015

    [Significance] The mechanization, automation, and intelligentization of agricultural equipment are key to improving operating efficiency, freeing up labor, and promoting the sustainable development of agriculture, and they are a future focus of research and development in the agricultural machinery industry. In China, hilly and mountainous areas serve as vital production bases for agricultural products, accounting for about 70% of the country's land area. These regions face environmental constraints such as steep slopes, narrow roads, small plots, complex terrain and landforms, and harsh working conditions. Moreover, reliable agricultural machinery support is lacking across various production stages, as are theoretical frameworks to guide the research and development of agricultural machinery tailored to hilly and mountainous locales. [Progress] This article reviews the research progress of tractor leveling and anti-overturning systems in hilly and mountainous areas, including tractor body, cab, and seat leveling technology; slope-adaptive technology for tractor rear suspension and implement leveling; and progress on tractor rollover protection devices and warning technology. Body leveling mechanisms can be roughly divided into five types according to their working mode: parallel four-bar, adjustable center of gravity, hydraulic differential-height, folding/twisting waist, and omnidirectional leveling. These mechanisms address vehicle tilting and the risk of overturning when traversing or working on sloping or rugged terrain; by keeping the body posture horizontal or keeping the center of gravity within a stable range, they improve overall driving safety and ensure operating accuracy. Leveling the cab and seats mitigates the lateral jolting experienced by the driver on rough or sloping ground, reducing fatigue and strain on the lumbar and cervical spine and thereby enhancing driving comfort. Slope-adaptive technology for the rear suspension and implement leveling keeps the implement in consistent horizontal contact with the ground in hilly and mountainous terrain, preventing the posture of the mounted implement from changing with the swing of the body or the driving path and degrading the quality of the operation. Tractor rollover protection devices and warning technology have attracted significant attention in recent years. Prioritizing driver safety, a rollover warning system can alert the driver to a dangerous tractor state in advance, automatically adjust the vehicle before a rollover, automatically deploy the rollover protection device when a rollover is imminent, and promptly send accident reports to emergency contacts, ensuring driver safety to the greatest extent possible.
[Conclusions and Prospects] Future development directions for tractor leveling, anti-overturning early warning, and unmanned and automated operation in hilly and mountainous areas are outlined: research on mountain tractor leveling systems with optimized structures, high sensitivity, and good stability; study of ground-following (profiling) systems for agricultural machinery with good slope adaptability; research on anti-rollover early warning technology with environment perception and automatic intervention; research on precision navigation, intelligent monitoring, and remote scheduling and management technologies for agricultural machinery; and theoretical study of longitudinal stability on sloping land. This review could provide a reference for the research and development of highly reliable and safe mountain tractors suited to the complex working environments of hilly and mountainous areas.
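As background for the rollover warning topic, the sketch below evaluates a commonly used first-order static criterion for lateral rollover: on a side slope, a rigid vehicle is statically stable while the slope angle stays below arctan(B / (2h)), where B is the track width and h is the center-of-gravity height. This is a textbook criterion, not necessarily the one used in the works reviewed, and the dimensions are illustrative.

    # Hypothetical sketch: static lateral rollover threshold check (illustrative dimensions).
    import math

    def critical_side_slope_deg(track_width_m: float, cog_height_m: float) -> float:
        """Static lateral rollover threshold angle, in degrees: arctan(B / (2h))."""
        return math.degrees(math.atan(track_width_m / (2.0 * cog_height_m)))

    def rollover_warning(slope_deg: float, track_width_m: float, cog_height_m: float,
                         safety_margin_deg: float = 5.0) -> bool:
        """Return True if the current side slope is within the safety margin of the threshold."""
        return slope_deg >= critical_side_slope_deg(track_width_m, cog_height_m) - safety_margin_deg

    theta_c = critical_side_slope_deg(track_width_m=1.6, cog_height_m=0.9)
    print(f"critical side slope: {theta_c:.1f} deg")
    print("warn at 30 deg:", rollover_warning(30.0, 1.6, 0.9))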

  • Information Processing and Decision Making
    MAO Yongwen, HAN Junying, LIU Chengzhong
    Smart Agriculture. 2024, 6(1): 135-146. https://doi.org/10.12133/j.smartag.SA202309011

    [Objective] Flax, characterized by a short growth cycle and strong adaptability, is one of the major cash crops in northern China. Owing to its versatile uses and distinctive quality, it holds a significant position among China's oil and fiber crops. The quality of flax seeds directly affects the yield of the flax plant, and seed evaluation is a crucial step in the flax breeding process. Common parameters used in flax seed evaluation include circumference, area, major and minor axis lengths, and 1 000-seed weight. To ensure high-quality flax production, it is of great significance to understand the phenotypic characteristics of flax seeds, select different resources as parents according to breeding objectives, and adopt other methods for the breeding, cultivation, and evaluation of flax seed quality and traits. [Methods] In response to the high error rates and low efficiency observed in automated seed testing of flax, measurement methods for flax seed contours based on machine vision were explored. The flax seed images were preprocessed: the collected color images were converted to grayscale, and filtering and smoothing were applied to obtain binary images. To address overlapping and adhering flax seeds, a contour-fitting image segmentation method based on fused corner features was proposed. The method incorporated adaptive threshold selection during edge detection of the image contour. Only multi-seed target regions that met certain criteria underwent image segmentation, while single-seed regions bypassed this step and were summarized directly into the seed testing data. After the multi-seed adhesion regions were obtained, the flax seeds underwent contour approximation, corner extraction, and contour fitting (a simplified version of this processing chain is sketched after this abstract). Based on the image contour information, the contour shape was approximated by another contour with fewer vertices, simplifying the original contour curve into more regular and compact line segments or polygons, minimizing computational complexity while marking as many linear features in the image as possible. Because the pixel intensity of image corners varies strongly in different directions, a second-derivative matrix of pixel gray values was used to detect corners, and contour corner detection based on the contour approximation algorithm yielded the coordinates of each corner. The resulting contour points and corners were then used to further improve the accuracy and precision of the subsequent contour fitting, producing a two-dimensional discrete point dataset of the image contour. Using this point dataset as input, the geometric moments of the contour were calculated, and the optimal ellipse parameters were obtained through numerical optimization based on least squares and the geometric properties of the ellipse. The optimal contour was finally fitted to the image, achieving segmentation and counting of the flax seed images. Meanwhile, because each pixel in a digital image is a uniform small square, the circumference, area, and major and minor axes of the flax seeds could be represented by the number of pixels occupied by the seeds in the image.
The weight of a single seed could be obtained by dividing the total weight of the seeds by the total number of seeds detected from the contours, and converted accordingly. Using the 1 yuan and 1 jiao coins of the 2019 fifth series of the Renminbi as pixel-size calibration references, the circumference, area, major axis, minor axis, and 1 000-seed weight of the flax seeds were summarized. In addition, based on the above method, an automated real-time analysis system for flax seed testing data was designed, realizing automated flax seed testing. Experiments were conducted on flax seed images captured by an industrial camera. [Results and Discussions] The proposed automated seed testing method achieved an accuracy of 97.28% in statistics across different flax varieties, with an average processing time of 69.58 ms for 100 seeds. Compared with the extreme erosion algorithm and the distance-transform-based watershed algorithm, the proposed method improved the average calculation accuracy by 19.6% over the extreme erosion algorithm and required a shorter average computation time than direct use of the watershed algorithm. Considering the practical needs of automated seed testing, the method did not employ morphological operations such as dilation or erosion, thereby preserving the original image features to the greatest extent possible. Additionally, the real-time analysis system could process image information in batches; by executing its data summarization functions, it automatically generated the corresponding data table folders and stored the corresponding image data summary tables. [Conclusions] The proposed method exhibits good computational accuracy and processing speed, short operation time, and robustness. It is highly adaptable and can accurately acquire the morphological feature parameters of flax seeds in bulk while keeping measurement errors within 10%, providing technical support for future flax seed evaluation and related industrial development.
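The sketch below, assuming OpenCV, shows a simplified version of the contour-processing chain: binarize the image, extract seed contours, simplify each contour by polygon approximation, and fit an ellipse by least squares to obtain the axis lengths in pixels. The file name, threshold choice, and minimum-area filter are illustrative assumptions; the corner-feature fusion and adhesion-splitting steps of the paper are omitted.

    # Minimal sketch of contour extraction, approximation, and least-squares ellipse fitting with OpenCV.
    # "flax_seeds.jpg" is a placeholder file name; thresholds are illustrative.
    import cv2

    img = cv2.imread("flax_seeds.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                # filtering and smoothing
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    seeds = []
    for cnt in contours:
        if cv2.contourArea(cnt) < 50 or len(cnt) < 5:       # skip noise; fitEllipse needs >= 5 points
            continue
        # Contour approximation step: reduce vertices to a simpler polygon (not used further here).
        approx = cv2.approxPolyDP(cnt, 0.01 * cv2.arcLength(cnt, True), True)
        # Least-squares ellipse fit: center, axis lengths (px), rotation angle.
        (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(cnt)
        seeds.append({
            "perimeter_px": cv2.arcLength(cnt, True),
            "area_px": cv2.contourArea(cnt),
            "major_axis_px": max(ax1, ax2),
            "minor_axis_px": min(ax1, ax2),
        })

    print(f"{len(seeds)} seeds detected")

Pixel measurements such as these would then be converted to physical units with a calibration reference of known size, as the abstract describes for the coin-based calibration.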