Most Downloaded
  • ZHAO Ruixue, YANG Chenxue, ZHENG Jianhua, LI Jiao, WANG Jian
    Smart Agriculture. 2022, 4(4): 105-125. https://doi.org/10.12133/j.smartag.SA202207009

    The wide application of advanced information technologies such as big data, the Internet of Things (IoT) and artificial intelligence in agriculture has promoted agricultural modernization in rural areas and the development of smart agriculture. This trend has also driven a surge in demand for technology and knowledge from a large number of agricultural business entities. Faced with problems such as scattered knowledge, lagging knowledge updates, inadequate agricultural information services and a prominent contradiction between knowledge supply and demand, agricultural knowledge service has become an important engine for the transformation, upgrading and high-quality development of agriculture. To better facilitate agricultural modernization in China, the research and application perspectives of agricultural knowledge services were summarized and analyzed. Following the whole life cycle of agricultural data and grounded in the whole agricultural industry chain, a systematic framework for building agricultural intelligent knowledge service systems oriented to the requirements of agricultural business entities was proposed. Three necessary technical layers were designed, ranging from AIoT-based agricultural situation perception to big data aggregation and governance, from agricultural knowledge organization to knowledge-graph-based computation and mining, and finally to multi-scenario agricultural intelligent knowledge service. A wide range of key technologies was summarized, with a comprehensive discussion of their applications in agricultural intelligent knowledge service, including aerial-ground integrated Artificial Intelligence & Internet of Things (AIoT) full-dimensional perception of agricultural conditions, aggregation and governance of multi-source heterogeneous agricultural big data, knowledge modeling, knowledge extraction, knowledge fusion, knowledge reasoning, cross-media retrieval, intelligent question answering, personalized recommendation and decision support. Finally, future development trends and countermeasures were discussed from the aspects of agricultural data acquisition, model construction, knowledge organization, intelligent knowledge service technology and application promotion. It can be concluded that agricultural intelligent knowledge service is the key to resolving the contradiction between the supply and demand of agricultural knowledge services, can support the advance from agricultural cross-media data analytics to knowledge reasoning, and can promote the upgrading of agricultural knowledge services to become more personalized, precise and intelligent. Agricultural knowledge service is also important support for greater self-reliance and modernization of agricultural science and technology, and facilitates their substantial development and upgrading in a more effective manner.
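As a toy illustration of the knowledge reasoning layer mentioned above, the sketch below closes a hypothetical subtype relation over a few agricultural triples. The entities, relation name and inference rule are invented for illustration, not drawn from the reviewed systems.

```python
# Rule-based reasoning over (subject, relation, object) knowledge triples:
# if (a, r, b) and (b, r, c) hold for a transitive relation r, infer (a, r, c).

def infer_transitive(triples, relation):
    """Return the transitive closure of `relation` over the triple set."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(facts):
            for (b2, r2, c) in list(facts):
                if r1 == r2 == relation and b == b2 and (a, relation, c) not in facts:
                    facts.add((a, relation, c))
                    changed = True
    return facts

# Hypothetical mini knowledge graph about crop diseases.
kg = [
    ("late_blight", "subtype_of", "fungal_disease"),
    ("fungal_disease", "subtype_of", "crop_disease"),
]
closed = infer_transitive(kg, "subtype_of")
print(("late_blight", "subtype_of", "crop_disease") in closed)  # True
```

Real knowledge-graph systems express such rules declaratively and scale them with graph databases; the fixed-point loop here only shows the reasoning idea.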

  • Topic--Crop Growth and Its Environmental Monitoring
    SHAO Mingyue, ZHANG Jianhua, FENG Quan, CHAI Xiujuan, ZHANG Ning, ZHANG Wenrong
    Smart Agriculture. 2022, 4(1): 29-46. https://doi.org/10.12133/j.smartag.SA202202005

    Accurate detection and recognition of plant diseases is the key technology for early diagnosis and intelligent monitoring of plant diseases, and is the core of precise control and information management of plant diseases and insect pests. Deep learning can overcome the disadvantages of traditional diagnosis methods and greatly improve the accuracy of disease detection and recognition, and has therefore attracted wide attention from researchers. This paper collected the main public plant disease image datasets worldwide and briefly introduced the basic information and websites of each dataset for convenient download and use. Then, the application of deep learning to plant disease detection and recognition in recent years was systematically reviewed. Plant disease target detection is the premise of accurate classification and recognition of plant diseases and of evaluating disease hazard levels; it is also the key to accurately locating the diseased area and guiding the spray device of plant protection equipment to apply chemicals on target. Plant disease recognition refers to the processing, analysis and understanding of disease images to identify different kinds of disease objects, and is the main basis for timely and effective prevention and control of plant diseases. The research progress of deep learning-based early disease detection and recognition algorithms was expounded, and the advantages and remaining problems of the various algorithms were described. This review shows that detection and recognition algorithms based on deep learning are superior to traditional algorithms in all aspects. Based on an investigation of existing research results, it was pointed out that illumination, occlusion, complex backgrounds, different disorders with similar symptoms, changes in disease symptoms across different periods, and the overlapping coexistence of multiple diseases are the main challenges for plant disease detection and recognition. Establishing large-scale, more complex datasets that meet specific research needs is also a shared difficulty. Furthermore, combining better-performing neural networks, large-scale datasets and agricultural theory is a major trend for future development. Multimodal data can also be used to identify early plant diseases, which is another future development direction.
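As a small aside on detection evaluation, the sketch below computes intersection-over-union (IoU), the standard overlap score used to judge whether a predicted disease-region box matches a ground-truth box. The box coordinates are invented for illustration.

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2).

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted lesion box vs. a ground-truth box sharing half their area:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```

Detectors are typically scored by counting a prediction as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.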

  • Topic--Smart Farming of Field Crops
    YIN Yanxin, MENG Zhijun, ZHAO Chunjiang, WANG Hao, WEN Changkai, CHEN Jingping, LI Liwei, DU Jingwei, WANG Pei, AN Xiaofei, SHANG Yehua, ZHANG Anqi, YAN Bingxin, WU Guangwei
    Smart Agriculture. 2022, 4(4): 1-25. https://doi.org/10.12133/j.smartag.SA202212005

    As an important way of realizing smart agriculture, unmanned farms are currently highly attractive and have been explored in many countries. Generally, data, knowledge and intelligent equipment are the core elements of unmanned farms. An unmanned farm deeply integrates modern information technologies such as the Internet of Things, big data, cloud computing, edge computing and artificial intelligence with agriculture to realize information perception, quantitative decision-making, intelligent control, precise input and personalized services in agricultural production. In this paper, the overall technical architecture of unmanned farms is introduced, and five kinds of key technologies are proposed: information perception and intelligent decision-making technology, precision control technology and key equipment for agriculture, automatic driving technology in agriculture, unmanned agricultural operation equipment, and management and remote-control systems for unmanned farms. The latest research progress of these technologies worldwide is then analyzed. On this basis, the critical scientific and technological issues to be solved for developing unmanned farms in China are proposed, including perception of unstructured farmland environments, automatic driving of agricultural machinery in complex and changeable farmland environments, autonomous task assignment and path planning of unmanned agricultural machinery, and autonomous cooperative operation control of unmanned agricultural machinery groups. These technologies are challenging and will be the most competitive commanding heights in the future. The maize unmanned farm constructed in the city of Gongzhuling, Jilin province, China, is also introduced in detail. The unmanned farm is mainly composed of an information perception system, unmanned agricultural equipment, and a management and control system. The perception system obtains and provides farmland information and maize growth, pest and disease information for the farm. The unmanned agricultural machinery can complete the whole process of maize mechanization under unattended conditions. The management and control system includes a basic GIS, a remote-control subsystem, a precision operation management subsystem and a working display system for the unmanned agricultural machinery. The application of the maize unmanned farm has improved maize production efficiency (harvesting efficiency has been increased by 3-4 times) and reduced labor. Finally, the important role of unmanned farm technology in solving problems such as labor shortages is summarized, the opportunities and challenges of developing unmanned farms in China are analyzed, and strategic goals and ideas for developing unmanned farms in China are put forward.
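Path planning for unmanned machinery, listed above as an open issue, can be sketched in its simplest form as a breadth-first search over a field occupancy grid. The grid and coordinates below are hypothetical, and real planners use far richer vehicle, terrain and cost models.

```python
# Shortest path on a 2D occupancy grid (0 = free, 1 = obstacle) via BFS.

from collections import deque

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    parent = {start: None}  # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk parents back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

# A field with an obstacle row that forces a detour through (1, 2).
field = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
route = plan_path(field, (0, 0), (2, 0))
print(route)
```

BFS guarantees a shortest path in steps; field-scale planners typically substitute A* with headland and turning-radius constraints.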

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    ZHAO Chunjiang, FAN Beibei, LI Jin, FENG Qingchun
    Smart Agriculture. 2023, 5(4): 1-15. https://doi.org/10.12133/j.smartag.SA202312030

    [Significance] Autonomous and intelligent agricultural machinery, characterized by green intelligence, energy efficiency and reduced emissions, as well as high intelligence and man-machine collaboration, will drive global agricultural technology advancement and the transformation of production methods in the context of smart agriculture development. Agricultural robots, which utilize intelligent control and information technology, have the unique advantage of replacing manual labor. They occupy the strategic commanding heights and competitive focus of global agricultural equipment, and are also one of the key development directions for accelerating the construction of China's agricultural power. The world's agricultural powers, including China, have incorporated the research, development, manufacturing and promotion of agricultural robots into their national strategies, each strengthening agricultural robot policy and planning in line with its own agricultural development characteristics, thus driving the agricultural robot industry into a period of stable growth. [Progress] This paper first examines the concept and defining features of agricultural robots, together with global agricultural robot development policies and strategic planning, and sheds light on the growth of the global agricultural robotics industry. It then analyzes the industrial backdrop, cutting-edge advancements, developmental challenges and crucial technology aspects of three representative agricultural robots: farmland robots, orchard picking robots and indoor vegetable production robots. Finally, it summarizes the disparity between Chinese agricultural robots and their foreign counterparts in terms of advanced technologies. (1) An agricultural robot is a multi-degree-of-freedom autonomous piece of operating equipment with accurate perception, autonomous decision-making, intelligent control and automatic execution capabilities designed specifically for agricultural environments. Combined with artificial intelligence, big data, cloud computing and the Internet of Things, agricultural robots form an agricultural robot application system. This system has relatively mature applications in key crop processes such as field planting, fertilization, pest control, yield estimation, inspection, harvesting, grafting, pruning and transportation, and in livestock and poultry breeding processes such as feeding, inspection, disinfection and milking. Globally, agricultural robots, represented by plant protection robots, have entered the industrial application phase and are gradually being commercialized, with vast market potential. (2) Compared with traditional agricultural machinery and equipment, agricultural robots have advantages in performing hazardous tasks, executing repetitive batch work, managing complex field operations and supporting livestock breeding. In contrast to industrial robots, agricultural robots face technical challenges in three respects: first, the complexity and unstructured nature of the operating environment; second, the flexibility, mobility and commoditization of the operation objects; third, the high level of technology and investment required. (3) Despite the increasing demand for unmanned and lightly manned operations in farmland production, China's agricultural robot research, development and application started late and have progressed slowly. Existing agricultural operation equipment still has a significant gap to close before achieving precision operation, digital perception, intelligent management and intelligent decision-making. The comprehensive performance of domestic products lags behind that of advanced foreign counterparts, indicating that industrial development and application still have a long way to go. First, current agricultural robots predominantly use single actuators and operate as single machines, with multi-arm cooperative robots only just emerging; most engage primarily in rigid operations, with limited flexibility, adaptability and functionality. Second, multi-source environment perception in agricultural settings and the autonomous operation of agricultural robot equipment still rely heavily on human input. Third, progress on new teaching methods and technologies for natural human-computer interaction is rather slow. Lastly, operational infrastructure remains insufficient, resulting in a relatively low degree of "mechanization". [Conclusions and Prospects] The paper anticipates the opportunities arising from the rapid growth of the agricultural robotics industry in response to the escalating global shortage of agricultural labor. It outlines emerging trends in agricultural robot technology, including autonomous navigation, self-learning, real-time monitoring and operation control. In the future, path planning and navigation information perception for autonomous agricultural robots are expected to become more refined, and autonomous learning and cross-scenario operation performance will improve. Real-time operation monitoring of agricultural robots through digital twins will also progress, and cloud-based management and control of agricultural robots for comprehensive operations will grow significantly. Steady advances will be made in the innovation and integration of agricultural machinery and agronomic techniques.

  • Special Issue--Agricultural Information Perception and Models
    GUO Wang, YANG Yusen, WU Huarui, ZHU Huaji, MIAO Yisheng, GU Jingqiu
    Smart Agriculture. 2024, 6(2): 1-13. https://doi.org/10.12133/j.smartag.SA202403015

    [Significance] Big models, or foundation models, have offered a new paradigm for smart agriculture. These models, built on the Transformer architecture, incorporate vast numbers of parameters and have undergone extensive training, often showing excellent performance and adaptability, which makes them effective for agricultural problems where data is limited. Integrating big models into agriculture promises to pave the way for a more comprehensive form of agricultural intelligence, capable of processing diverse inputs, making informed decisions, and potentially overseeing entire farming systems autonomously. [Progress] The fundamental concepts and core technologies of big models are first elaborated from five aspects: the origin and core principles of the Transformer architecture, the scaling laws governing model extension, large-scale self-supervised learning, the general capabilities and adaptations of big models, and their emergent capabilities. Subsequently, possible application scenarios of big models in agriculture are analyzed in detail, and their development status is described for three types of models: large language models (LLMs), large vision models (LVMs), and large multi-modal models (LMMs). The progress of applying big models in agriculture is discussed, and the achievements are presented. [Conclusions and Prospects] The challenges and key tasks of applying big model technology in agriculture are analyzed. Firstly, the current datasets used for agricultural big models are somewhat limited, and constructing them can be both expensive and problematic in terms of copyright; more extensive and more openly accessible datasets are called for to facilitate future advancements. Secondly, the complexity of big models, due to their extensive parameter counts, poses significant challenges for training and deployment.
However, there is optimism that future methodological improvements will streamline these processes by optimizing memory and computational efficiency, thereby enhancing the performance of big models in agriculture. Thirdly, these advanced models demonstrate strong proficiency in analyzing image and text data, suggesting potential future applications in integrating real-time data from IoT devices and the Internet to make informed decisions, manage multi-modal data, and potentially operate machinery within autonomous agricultural systems. Finally, the dissemination and implementation of these big models in the public agricultural sphere are deemed crucial. The public availability of these models is expected to refine their capabilities through user feedback and alleviate the workload on humans by providing sophisticated and accurate agricultural advice, which could revolutionize agricultural practices.
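The Transformer architecture underlying these big models centers on scaled dot-product attention, which can be sketched in plain Python as follows. Real models use batched tensor implementations over learned query/key/value projections; the vectors here are invented toy values.

```python
# Scaled dot-product attention: each query attends to all keys, and the
# resulting softmax weights mix the value vectors.

import math

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """queries/keys/values: lists of equal-length vectors; one output per query."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of the query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # Weighted mix of the value vectors.
        out = [sum(w * v[j] for w, v in zip(weights, values))
               for j in range(len(values[0]))]
        outputs.append(out)
    return outputs

q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))  # the output leans toward the first value vector
```

Because the query aligns with the first key, its value receives the larger weight; stacking many such attention layers with learned projections is what gives Transformers their capacity.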

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    WANG Ting, WANG Na, CUI Yunpeng, LIU Juan
    Smart Agriculture. 2023, 5(4): 105-116. https://doi.org/10.12133/j.smartag.SA202311005

    [Objective] The rural revitalization strategy imposes new requirements on agricultural technology extension, but the conventional approach suffers from a contradiction between supply and demand, so further innovation in the supply of agricultural knowledge is needed. Recent advances in artificial intelligence, such as deep learning and large-scale neural networks, and particularly the advent of large language models (LLMs), make anthropomorphic, intelligent agricultural technology extension feasible. Oriented to the demand for fruit and vegetable agricultural technology knowledge services, an intelligent agricultural technology question-answering system was built in this research based on an LLM, providing extension services that include guidance on new agricultural knowledge and question-and-answer sessions, so that farmers can access high-quality agricultural knowledge at their convenience. [Methods] Through an analysis of strawberry farmers' demands, agricultural technology knowledge related to strawberry cultivation was categorized into six themes: basic production knowledge, variety screening, interplanting knowledge, pest diagnosis and control, disease diagnosis and control, and chemical damage diagnosis and control. Considering the current state of agricultural technology, two primary tasks were formulated: named entity recognition and question answering over agricultural knowledge. A training corpus comprising entity-type annotations and question-answer pairs was constructed using a combination of automatic machine annotation and manual annotation, ensuring a small but high-quality sample. After comparing four existing LLMs (Baichuan2-13B-Chat, ChatGLM2-6B, Llama 2-13B-Chat and ChatGPT), the best-performing model was chosen as the base LLM for the intelligent question-answering system for agricultural technology knowledge. Using a high-quality corpus together with LLM pre-training and fine-tuning, a deep neural network with semantic analysis, context association and content generation capabilities was trained. This model served as an LLM for named entity recognition and question answering over agricultural knowledge, adaptable to various downstream tasks. For named entity recognition, the LoRA fine-tuning method was employed, updating only essential parameters to speed up training and enhance performance. For question answering, prompt tuning was used to fine-tune the LLM, with adjustments made based on the model's generated content to achieve iterative optimization. Model performance was optimized from two perspectives: data and model design. On the data side, redundant or unclear items were manually removed from the labeled corpus. On the model side, a strategy based on retrieval-augmented generation (RAG) was employed to deepen the LLM's understanding of agricultural knowledge and keep its knowledge synchronized in real time, alleviating the problem of LLM hallucination. Drawing on the constructed LLM, an intelligent question-answering system for agricultural technology knowledge was developed. The system can generate high-precision, unambiguous answers, and also supports multi-round question answering and retrieval of information sources.
    [Results and Discussions] Accuracy and recall served as indicators for evaluating the named entity recognition performance of the LLMs. The results indicated that performance was closely related to factors such as model structure, the scale of the labeled corpus and the number of entity types. After fine-tuning, ChatGLM demonstrated the highest accuracy and recall. With the same number of entity types, a larger annotated corpus yielded higher accuracy. Fine-tuning affected different models differently, but overall it improved the average accuracy of all models across knowledge topics, with ChatGLM, Llama and Baichuan all surpassing 85%. The average recall rate increased only slightly, and in some cases was even lower than before fine-tuning. Assessing the question-answering task using hallucination rate and semantic similarity as indicators, data optimization and retrieval-augmented generation reduced the hallucination rate by 10% to 40% and improved semantic similarity by more than 15%, significantly enhancing the correctness, logic and comprehensiveness of the generated content. [Conclusion] The pre-trained ChatGLM model exhibited superior performance on named entity recognition and question answering in the agricultural field. Fine-tuning pre-trained LLMs for downstream tasks and optimizing them with retrieval-augmented generation mitigated language hallucination and markedly improved model performance. LLM technology has the potential to innovate agricultural technology knowledge service modes and optimize agricultural knowledge extension. This can effectively reduce the time farmers spend obtaining high-quality, effective knowledge, guiding more farmers toward agricultural technology innovation and transformation. However, given challenges such as unstable performance, further research is needed on optimization methods for LLMs and their application in specific scenarios.
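The retrieval step of a retrieval-augmented generation (RAG) pipeline like the one described can be sketched minimally as word-overlap retrieval plus prompt assembly. The passages and question below are invented examples, and production systems use dense vector retrieval rather than word overlap.

```python
# Minimal RAG retrieval: pick the passage sharing the most words with the
# question, then prepend it to the prompt so the model answers from it.

def tokens(text):
    """Lowercased words with trailing punctuation stripped."""
    return {w.strip('?.,!').lower() for w in text.split()}

def retrieve(question, passages):
    q = tokens(question)
    return max(passages, key=lambda p: len(q & tokens(p)))

# Hypothetical knowledge-base passages about strawberry cultivation.
passages = [
    "Gray mold on strawberry is controlled by reducing humidity in the greenhouse.",
    "Strawberry varieties differ in cold tolerance and fruit firmness.",
]
question = "How can gray mold on strawberry be controlled?"
context = retrieve(question, passages)
prompt = f"Answer using only this context:\n{context}\nQuestion: {question}"
print(context)
```

Grounding the answer in a retrieved passage is what lets RAG reduce hallucination: the model is asked to restate retrieved facts rather than recall them from its weights.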

  • CHEN Feng, SUN Chuanheng, XING Bin, LUO Na, LIU Haishen
    Smart Agriculture. 2022, 4(4): 126-137. https://doi.org/10.12133/j.smartag.SA202206006

    As an emerging concept, the metaverse has attracted extensive attention from industry, academia and scientific research. Combining agriculture with the metaverse will greatly promote the development of agricultural informatization and intelligence and provide new impetus for agricultural transformation and upgrading. First, to demonstrate the feasibility of applying the metaverse in agriculture, the basic principles and key technologies of the agricultural metaverse are briefly described, including blockchain, non-fungible tokens, 5G/6G, artificial intelligence, the Internet of Things, 3D reconstruction, cloud computing, edge computing, augmented reality, virtual reality, mixed reality, brain-computer interfaces, digital twins and parallel systems. Then, three main application scenarios of the metaverse in agriculture are discussed: virtual farms, agricultural teaching systems and agricultural product traceability systems. Among them, the virtual farm is one of the most important applications of the agricultural metaverse. The agricultural metaverse can support crop growing and livestock and poultry raising in agricultural production, provide a three-dimensional, visualized virtual leisure-agriculture experience, and provide virtual characters for agricultural product promotion. The agricultural metaverse teaching system can provide virtual agricultural teaching similar to natural scenes, saving training time and improving training efficiency through fragmented learning. Traceability of agricultural products lets consumers know the production information of agricultural products and feel more confident about enterprises and their products. Finally, the challenges in developing the agricultural metaverse are summarized: the difficulty of establishing an agricultural metaverse system, the weak communication foundation, immature hardware equipment and uncertain operation models. Future development directions are also prospected: research on metaverse applications, agricultural growth mechanisms and low-power wireless communication technologies should be carried out; a rural broadband network covering households should be established; and industrial application of the agricultural metaverse should be promoted. This review can provide theoretical references and technical support for the development of the metaverse in agriculture.

  • Topic--Smart Farming of Field Crops
    LI Li, LI Minzan, LIU Gang, ZHANG Man, WANG Maohua
    Smart Agriculture. 2022, 4(4): 26-34. https://doi.org/10.12133/j.smartag.SA202207003

    Smart farming for field crops is a significant part of smart agriculture. Aiming at crop production, it integrates modern sensing technology, new-generation mobile communication technology, computer and network technology, the Internet of Things (IoT), big data, cloud computing, blockchain, and expert wisdom and knowledge. Through the deeply integrated application of biotechnology, engineering technology, information technology and management technology, it realizes accurate perception, quantitative decision-making, intelligent operation and intelligent service in the crop production process, so as to significantly improve land output, resource utilization and labor productivity, comprehensively improve the quality of agricultural products, and raise efficiency. To promote the sustainable development of smart farming, the overall objectives and key tasks of the development strategy were clarified through an analysis of the development process of smart agriculture, and the key technologies of smart farming were condensed. Analyzing and breaking through these key technologies is crucial to the industrial development strategy. The main problems of smart farming for field crops include: the lack of in-situ accurate measurement technology and specialized agricultural sensors; the large gap between crop models and actual production; the instantaneity, reliability, universality and stability of information transmission technologies; and the integration of intelligent agricultural equipment with agronomy. Based on the above analysis, five primary technologies and eighteen corresponding secondary technologies of smart farming for field crops were proposed: sensing technologies for environmental and biological information in the field, agricultural IoT and mobile internet technologies, cloud computing and cloud service technologies in agriculture, big data analysis and decision-making technology in agriculture, and intelligent agricultural machinery and agricultural robots for field production. According to the characteristics of China's cropping regions, corresponding smart farming development strategies were proposed: a large-scale smart production development zone in the Northeast and Inner Mongolia regions; a smart urban agriculture and water-saving agriculture development zone in the Beijing, Tianjin, Hebei and Shandong region; a large-scale smart cotton farming and smart dry-farming green development comprehensive test zone in the arid Northwest; a smart rice farming comprehensive development test zone along the Southeast coast; and a characteristic smart farming development zone in the Southwest mountain region. Finally, suggestions were given from the perspectives of infrastructure, key technologies, talent and policy.

  • Topic--Smart Animal Husbandry Key Technologies and Equipment
    MA Weihong, LI Jiawei, WANG Zhiquan, GAO Ronghua, DING Luyu, YU Qinyang, YU Ligen, LAI Chengrong, LI Qifeng
    Smart Agriculture. 2022, 4(2): 99-109. https://doi.org/10.12133/j.smartag.SA202203005

    Focusing on the low level of management, informatization and intelligence in China's beef cattle industry, a big data platform for commercial beef cattle breeding was proposed in this research, drawing on the experience of internationally advanced beef cattle breeding countries. The functions of the platform include integrating beef cattle germplasm resources, automatically collecting key beef cattle breeding traits, providing full-service support for the breeding process, forming a big data analysis and decision-making system for beef cattle germplasm resources, and establishing a joint breeding innovation model. To address the backward storage and sharing methods for beef cattle breeding data and incomplete information records in China, an information resource integration platform and an information database for beef cattle germplasm were established. Given the vagueness and subjectivity of breeding performance evaluation standards, a scientific online evaluation technology for beef cattle breeding traits and a non-contact automatic acquisition and intelligent calculation method were proposed. Considering the lack of scientific and systematic breeding planning and guidance for farmers in China, a full-service support system for beef cattle breeding, with hands-on guidance throughout the breeding process, was developed, and an interactive, progressive decision-making method for beef cattle breeding was proposed to address the lack of accumulated beef cattle germplasm data. Because breeding bodies and farming enterprises were not closely integrated, an innovative regional-integration breeding model was explored. The design of the commercial beef cattle breeding big data software platform and its technological and model innovations were also introduced. The technological innovations included deep mining of germplasm resource data and improved pedigree-based breed management, automatic acquisition and evaluation of non-contact breeding traits, and fusion of multi-source heterogeneous information to provide intelligent decision support. The future goal is to form a sustainable information solution for China's beef cattle breeding industry and improve the overall level of the industry.

  • Overview Article
    MAO Kebiao, ZHANG Chenyang, SHI Jiancheng, WANG Xuming, GUO Zhonghua, LI Chunshu, DONG Lixin, WU Menxin, SUN Ruijing, WU Shengli, JI Dabin, JIANG Lingmei, ZHAO Tianjie, QIU Yubao, DU Yongming, XU Tongren
    Smart Agriculture. 2023, 5(2): 161-171. https://doi.org/10.12133/j.smartag.SA202304013

Deep learning is one of the most important technologies in the field of artificial intelligence and has sparked a research boom in academic and engineering applications. It also shows strong application potential in the remote sensing retrieval of geophysical parameters. This cross-disciplinary research is just beginning: most deep learning applications in geosciences remain "black boxes", lacking physical significance, interpretability, and universality. A paradigm theory for geophysical parameter retrieval, coupling physical and statistical methods through artificial intelligence, was proposed in this research. Firstly, physical logic deduction was performed based on the physical energy balance equation, and the inversion equation system was constructed theoretically. Then, a fuzzy statistical method was constructed based on the physical deduction: representative solutions of the physical method were obtained through physical model simulation, and further representative solutions were obtained from multi-source data, together forming the training and testing database for deep learning. Finally, the deep learning solution was optimized. The conditions for a universal and physically interpretable paradigm are: (1) there must be a causal relationship between the input and output variables (parameters); (2) in theory, a closed system of equations (with the number of unknowns less than or equal to the number of equations) can be constructed between the input and output variables, meaning the output parameters can be uniquely determined by the inputs. If there is a strong causal relationship between the input and output parameters, deep learning can be used directly for inversion; if the correlation is weak, prior knowledge needs to be added to improve the inversion accuracy of the output parameters.
Thermal infrared remote sensing data were used to retrieve land surface temperature, emissivity, near-surface air temperature and atmospheric water vapor content as a case study to validate the theory. The analysis results show that the proposed theory and conditions are feasible, with better accuracy and applicability than traditional methods. The theory and judgment conditions of the retrieval paradigm also apply to target recognition tasks such as remote sensing classification, though they must be interpreted from a different perspective: for example, the feature information extracted by different convolutional kernels must be able to uniquely determine the target. When the conditions of the paradigm theory are satisfied, artificial-intelligence-based inversion of geophysical parameters is the best choice. The proposal of this theory is of milestone significance in the history of geophysical parameter retrieval.
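Condition (2) above, the closed equation system, can be made concrete: count the unknowns that remain once the inputs are fixed and compare against the number of equations. A minimal Python sketch, using illustrative variable names rather than the paper's actual symbols:

```python
def is_closed_system(equations, known_inputs):
    """Check the closure condition: the unknowns appearing across the
    equation system must not outnumber the equations, so that the
    outputs can be uniquely determined by the inputs.

    equations: list of sets, each holding the variable names in one equation.
    known_inputs: set of variable names supplied as inputs.
    """
    all_vars = set().union(*equations)
    unknowns = all_vars - known_inputs
    return len(unknowns) <= len(equations)

# Toy two-channel thermal-infrared system (names are illustrative):
# two brightness temperatures T10, T11 constrain surface temperature Ts
# and emissivity e, giving two unknowns for two equations.
eqs = [{"T10", "Ts", "e"}, {"T11", "Ts", "e"}]
closed = is_closed_system(eqs, {"T10", "T11"})        # True: system closes
underdetermined = is_closed_system(eqs[:1], {"T10"})  # False: 2 unknowns, 1 equation
```

When the check fails, the paradigm calls for adding prior knowledge (extra constraints) before attempting a deep learning inversion.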

  • Information Processing and Decision Making
    YANG Feng, YAO Xiaotong
    Smart Agriculture. 2024, 6(1): 147-157. https://doi.org/10.12133/j.smartag.SA202309010

    [Objective] To address the characteristics of wheat leaf pests and diseases in their natural environment, a high-accuracy, efficient detection model named YOLOv8-SS (You Only Look Once Version 8-SS) was proposed. The model is designed to identify pests accurately, providing a scientific foundation for prevention and management strategies. [Methods] A total of 3 639 raw images of wheat leaf pests and diseases, covering 6 pest and disease types, were collected with mobile phones at different times from farmlands in the Yuchong County area of Gansu Province. The dataset was constructed using the LabelImg software to label the images with the targeted pest species. To ensure good generalization, the dataset was divided into a training set and a test set in an 8:2 ratio. It records the appearance, texture and color of the wheat leaf blade, as well as other variables that could influence these characteristics, making it suitable for both training and validation. Based on the YOLOv8 algorithm, the lightweight convolutional neural network ShuffleNetV2 was selected as the base network for feature extraction, enhanced with a 3×3 Depthwise Convolution (DWConv) kernel, the h-swish activation function, and a Squeeze-and-Excitation Network (SENet) attention mechanism. These enhancements streamlined the model by reducing the parameter count and computational demands while sustaining high detection precision.
The YOLOv8-SS model employs the SENet attention mechanism module within both its Backbone and Neck components, significantly reducing computational load while improving accuracy. A dedicated small-target detection layer was integrated, enabling more efficient and precise pest and disease detection: a new 160×160 pixel detection feature map lets the network concentrate on small-target pests and diseases, enhancing recognition accuracy. [Results and Discussions] The YOLOv8-SS model was evaluated through experiments comparing model performance, classification prediction, and accuracy in complex detection environments. Replacing the conventional YOLOv8 backbone with the refined ShuffleNetV2 within the DarkNet-53 framework, under identical experimental settings, yielded a 4.53% increase in recognition accuracy and a 4.91% improvement in F1-Score over the initial model. Adding the dedicated small-target detection layer raised accuracy and F1-Score by a further 2.31% and 2.16%, respectively, with only a minimal increase in parameters and computation. Integrating the SENet attention module into YOLOv8 increased detection accuracy by 1.85% and F1-Score by 2.72%. With the enhanced ShuffleNetV2 backbone and the added small-object detection layer combined (YOLOv8-SS), the model reached a recognition accuracy of 89.41% and an F1-Score of 88.12%, exceeding the standard YOLOv8 by 10.11% and 9.92%, respectively.
These results show that YOLOv8-SS balances speed with precision. It also converges faster, requiring approximately 40 training epochs to surpass models such as Faster R-CNN, MobileNetV2, SSD, YOLOv5, YOLOX, and the original YOLOv8 in accuracy: its average accuracy was 23.01%, 15.13%, 11%, 25.21%, 27.52%, and 10.11% higher than these competing models, respectively. In a head-to-head trial on the public LWDCD 2020 dataset and the custom-built dataset using the same YOLOv8-SS architecture, LWDCD 2020 yielded an accuracy of 91.30%, 1.89% above the custom-built dataset, while the AI Challenger 2018-6 and Plant-Village-5 datasets reached lower accuracies of 86.90% and 86.78%, respectively. YOLOv8-SS shows substantial improvements in feature extraction and learning over the original YOLOv8, particularly in natural environments with complex, unstructured backgrounds. [Conclusions] The YOLOv8-SS model delivers high recognition accuracy with a small storage footprint. Compared with conventional detection models, it offers better detection accuracy and speed while maintaining a lean architecture, making it a useful reference for research on crop pest and disease detection in natural environments and well suited to real-world, large-scale deployment.
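As a plain-Python sketch of the channel recalibration that the SENet modules above perform (the weights w1 and w2 are hand-set stand-ins for the learned fully connected layers; a real model would implement this in a deep learning framework):

```python
import math

def se_block(feature_maps, w1, w2):
    """Squeeze-and-Excitation channel recalibration (illustrative sketch).

    feature_maps: list of channels, each a 2D list (H x W).
    w1: squeeze-to-hidden weights, shape (hidden, C).
    w2: hidden-to-gate weights, shape (C, hidden).
    """
    C = len(feature_maps)
    # Squeeze: global average pooling, one scalar per channel.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid, producing one gate per channel.
    hidden = [max(0.0, sum(w1[j][c] * z[c] for c in range(C))) for j in range(len(w1))]
    gates = [1.0 / (1.0 + math.exp(-sum(w2[c][j] * hidden[j] for j in range(len(hidden)))))
             for c in range(C)]
    # Scale: reweight every activation in each channel by its gate.
    return [[[v * gates[c] for v in row] for row in feature_maps[c]] for c in range(C)]

# Two 1x2 channels; gates fall in (0, 1), so outputs are damped copies.
out = se_block([[[1.0, 3.0]], [[2.0, 2.0]]], w1=[[0.5, 0.5]], w2=[[1.0], [1.0]])
```

Each channel is summarized to one scalar (squeeze), passed through a small bottleneck ending in a sigmoid (excitation), and the resulting per-channel gate rescales the feature map, which is how the mechanism emphasizes informative channels at low computational cost.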

  • Overview Article
    ZHAO Chunjiang
    Smart Agriculture. 2023, 5(2): 126-148. https://doi.org/10.12133/j.smartag.SA202306002

    [Significance] The agricultural environment is dynamic and variable, with numerous interacting factors affecting the growth of animals and plants, such as air temperature, air humidity, illumination, soil temperature, soil humidity, diseases, pests and weeds. Farmers therefore need agricultural knowledge to solve production problems. With the rapid development of internet technology, a vast amount of agricultural information and knowledge is available online, but because it lacks effective organization, its utilization rate is low. How to distill production knowledge or decision cases from scattered, disordered information is a worldwide challenge. Agricultural knowledge intelligent service technology is a good way to resolve agricultural data problems such as low rank, low correlation, and poor interpretability of reasoning, and it is the key technology for improving comprehensive prediction and decision-making across the entire agricultural production process. It can eliminate information barriers between agricultural knowledge, farmers, and consumers, helping to improve the yield and quality of agricultural products and provide effective information services. [Progress] The definition, scope, and technical applications of agricultural knowledge intelligence services are introduced in this paper. The demand for agricultural knowledge services is analyzed in combination with artificial intelligence technology, and agricultural knowledge intelligent service technologies such as perceptual recognition, knowledge coupling, and inference decision-making are reviewed.
The characteristics of agricultural knowledge services are analyzed and summarized from multiple perspectives, including industrial demand, industrial upgrading, and technological development. The development history of agricultural knowledge services is introduced, and current problems and future trends in the field are discussed. Key issues in agricultural knowledge intelligence services are examined, such as animal and plant state recognition in complex and uncertain environments, multimodal data association knowledge extraction, and collaborative reasoning across multiple agricultural application scenarios. Combining practical experience and theoretical research, an intelligent agricultural situation analysis service framework is proposed that covers the entire life cycle of agricultural animals and plants and incorporates knowledge cases. An agricultural situation perception framework has been built on a satellite-air-ground multi-channel perception platform and real-time Internet data. Multimodal knowledge coupling, multimodal knowledge graph construction and natural language processing technologies have been used to aggregate and manage agricultural big data. Through knowledge reasoning and decision-making, agricultural information mining and early warning have been carried out to provide users with multi-scenario agricultural knowledge services. Intelligent agricultural knowledge services have been designed, including multimodal fusion feature extraction, unified cross-domain knowledge representation and graph construction, and reasoning and decision-making under complex and uncertain agricultural conditions. An agricultural knowledge intelligent service platform has been built, composed of a cloud computing support environment, a big data processing framework, knowledge organization and management tools, and knowledge service application scenarios.
The platform supports rapid assembly and configuration management of agricultural knowledge services and lowers the threshold for applying artificial intelligence technology in agricultural knowledge services, so that agricultural users' problems can be solved. A novel method for agricultural situation analysis and production decision-making is proposed, and a full-chain intelligent knowledge application scenario is constructed, covering planning, management, harvest and operations across the pre-production, production and post-production stages. [Conclusions and Prospects] The technology trends of agricultural knowledge intelligent services are summarized in five aspects. (1) Multi-scale sparse feature discovery and spatiotemporal situation recognition of agricultural conditions: the application effects of small-sample transfer discovery and target tracking in uncertain agricultural information acquisition and situation recognition are discussed. (2) The construction and self-evolution of agricultural cross-media knowledge graphs, using a robust knowledge base and knowledge graph to analyze and aggregate high-level semantic information from cross-media content. (3) In response to the difficulty of tracing the origin of complex agricultural conditions and the low accuracy of comprehensive prediction, multi-granularity correlation and multi-mode collaborative inversion prediction of complex agricultural conditions are discussed. (4) Large language models (LLMs) in the agricultural field based on generative artificial intelligence: ChatGPT and other LLMs can accurately mine agricultural data and automatically generate questions through large-scale computing power, solving the problems of understanding user intent and delivering precise services under conditions of dispersed, multi-source, heterogeneous agricultural data with high noise, low information density, and strong uncertainty.
In addition, an agricultural LLM can significantly improve the accuracy of intelligent algorithms such as identification, prediction and decision-making by combining strong algorithms with big data and super computing power, bringing important opportunities for large-scale intelligent agricultural production. (5) The construction of knowledge intelligence service platforms and a new paradigm of knowledge service, integrating and innovating a self-evolving agricultural knowledge intelligence service cloud platform. Agricultural knowledge intelligent service technology will enhance control over the whole agricultural production chain and provide technical support for transforming agricultural production from "observing the sky and working" to "knowing the sky and working". The "knowledge empowerment" model of intelligent agriculture provides strong support for improving the quality and efficiency of the agricultural industry and for its modernization, transformation and upgrading.

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    CHEN Ruiyun, TIAN Wenbin, BAO Haibo, LI Duan, XIE Xinhao, ZHENG Yongjun, TAN Yu
    Smart Agriculture. 2023, 5(4): 16-32. https://doi.org/10.12133/j.smartag.SA202308006

    [Significance] As the research focus of future agricultural machinery, agricultural wheeled robots are developing in the direction of intelligence and multi-functionality. Advanced environmental perception technologies serve as a crucial foundation and key components to promote intelligent operations of agricultural wheeled robots. However, considering the non-structured and complex environments in agricultural on-field operational processes, the environmental information obtained through conventional 2D perception technologies is limited. Therefore, 3D environmental perception technologies are highlighted as they can provide more dimensional information such as depth, among others, thereby directly enhancing the precision and efficiency of unmanned agricultural machinery operation. This paper aims to provide a detailed analysis and summary of 3D environmental perception technologies, investigate the issues in the development of agricultural environmental perception technologies, and clarify the future key development directions of 3D environmental perception technologies regarding agricultural machinery, especially the agricultural wheeled robot. [Progress] Firstly, an overview of the general status of wheeled robots was introduced, considering their dominant influence in environmental perception technologies. It was concluded that multi-wheel robots, especially four-wheel robots, were more suitable for the agricultural environment due to their favorable adaptability and robustness in various agricultural scenarios. In recent years, multi-wheel agricultural robots have gained widespread adoption and application globally. The further improvement of the universality, operation efficiency, and intelligence of agricultural wheeled robots is determined by the employed perception systems and control systems. 
Therefore, agricultural wheeled robots equipped with novel 3D environmental perception technologies can obtain high-dimensional environmental information, which is significant for improving the accuracy of decision-making and control. Moreover, it enables them to explore effective ways to address the challenges in intelligent environmental perception technology. Secondly, the recent development status of 3D environmental perception technologies in the agriculture field was briefly reviewed. Meanwhile, sensing equipment and the corresponding key technologies were also introduced. For the wheeled robots reported in the agriculture area, it was noted that the applied technologies of environmental perception, in terms of the primary employed sensor solutions, were divided into three categories: LiDAR, vision sensors, and multi-sensor fusion-based solutions. Multi-line LiDAR had better performance on many tasks when employing point cloud processing algorithms. Compared with LiDAR, depth cameras such as binocular cameras, TOF cameras, and structured light cameras have been comprehensively investigated for their application in agricultural robots. Depth camera-based perception systems have shown superiority in cost and providing abundant point cloud information. This study has investigated and summarized the latest research on 3D environmental perception technologies employed by wheeled robots in agricultural machinery. In the reported application scenarios of agricultural environmental perception, the state-of-the-art 3D environmental perception approaches have mainly focused on obstacle recognition, path recognition, and plant phenotyping. 3D environmental perception technologies have the potential to enhance the ability of agricultural robot systems to understand and adapt to the complex, unstructured agricultural environment. 
Furthermore, they can effectively address several challenges that traditional environmental perception technologies have struggled to overcome, such as partial sensor information loss, adverse weather conditions, and poor lighting conditions. Current research results have indicated that multi-sensor fusion-based 3D environmental perception systems outperform single-sensor-based systems. This superiority arises from the amalgamation of advantages from various sensors, which concurrently serve to mitigate individual shortcomings. [Conclusions and Prospects] The potential of 3D environmental perception technology for agricultural wheeled robots was discussed in light of the evolving demands of smart agriculture. Suggestions were made to improve sensor applicability, develop deep learning-based agricultural environmental perception technology, and explore intelligent high-speed online multi-sensor fusion strategies. Currently, the employed sensors in agricultural wheeled robots may not fully meet practical requirements, and the system's cost remains a barrier to widespread deployment of 3D environmental perception technologies in agriculture. Therefore, there is an urgent need to enhance the agricultural applicability of 3D sensors and reduce production costs. Deep learning methods were highlighted as a powerful tool for processing information obtained from 3D environmental perception sensors, improving response speed and accuracy. However, the limited datasets in the agriculture field remain a key issue that needs to be addressed. Additionally, multi-sensor fusion has been recognized for its potential to enhance perception performance in complex and changeable environments. As a result, it is clear that 3D environmental perception technology based on multi-sensor fusion is the future development direction of smart agriculture. 
To overcome challenges such as slow data processing, latency in delivering processed data, and limited memory for storing data, it is essential to investigate effective fusion schemes that achieve faster, more intelligent online multi-source information fusion.
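One simple fusion scheme in the spirit of the multi-sensor systems discussed above is inverse-variance weighting, where each sensor's estimate of a shared quantity is weighted by its confidence. A minimal sketch (the sensor values and noise levels are illustrative, not taken from any cited system):

```python
def fuse_estimates(measurements):
    """Fuse independent sensor estimates of the same quantity (e.g. the
    distance to an obstacle from LiDAR and a depth camera) by
    inverse-variance weighting: more certain sensors get more weight.

    measurements: list of (value, variance) pairs, variance > 0.
    Returns (fused_value, fused_variance).
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(measurements, weights)) / total
    variance = 1.0 / total
    return value, variance

# LiDAR reports 4.0 m with low noise; the depth camera reports 4.6 m
# with higher noise, so the fused distance lands close to the LiDAR value.
dist, var = fuse_estimates([(4.0, 0.01), (4.6, 0.09)])
```

The fused variance is always smaller than that of the best single sensor, which is the formal sense in which fusion combines the advantages of different sensors while mitigating their individual shortcomings.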

  • Overview Article
    GUO Yangyang, DU Shuzeng, QIAO Yongliang, LIANG Dong
    Smart Agriculture. 2023, 5(1): 52-65. https://doi.org/10.12133/j.smartag.SA202205009

    Accurate and efficient monitoring of animal information, timely analysis of animal physiological and physical health conditions, and automatic feeding and farming management combined with intelligent technologies are of great significance for large-scale livestock farming. Deep learning techniques, with automatic feature extraction and powerful image representation capabilities, solve many visual challenges and are well suited to monitoring animal information in complex livestock farming environments. To further analyze the research and application of artificial intelligence technology in intelligent animal farming, this paper presents the current state of research on deep learning techniques for target detection and recognition, body condition evaluation and weight estimation, and behavior recognition and quantitative analysis for cattle, sheep and pigs. Among these, target detection and recognition supports the construction of electronic archives of individual animals, through which body condition and weight, behavior, and health status can be linked; this linkage is the trend of intelligent animal farming. At present, intelligent animal farming still faces many problems and challenges, such as multiple perspectives, multiple scales, multiple scenarios and even small sample sizes for certain behaviors in data samples, which greatly increase detection difficulty and limit the generalization of intelligent technologies. In addition, animal breeding and animal habits develop over long periods; how to accurately monitor animal health information in real time and effectively feed it back to producers is also a technical difficulty. Based on the actual feeding and management needs of animal farming, development suggestions for intelligent animal farming are put forward.
First, enrich the samples, build multi-perspective datasets, and combine semi-supervised or few-shot learning methods to improve the generalization ability of deep learning models, so as to realize perception and analysis of animals and their physical environment. Second, coordinated cooperation and harmonious development of humans, intelligent equipment and farmed animals will improve breeding efficiency and management overall. Third, the deep integration of big data, deep learning technology and animal farming will greatly promote the development of intelligent animal farming. Last, research is needed on the interpretability and security of artificial intelligence technologies, represented by deep learning models, in the breeding field. This review of deep learning applications in livestock smart farming provides a reference for the modernization and intelligent development of livestock farming.

  • Topic--Machine Vision and Agricultural Intelligent Perception
    LI Yangde, MA Xiaohui, WANG Ji
    Smart Agriculture. 2023, 5(2): 35-44. https://doi.org/10.12133/j.smartag.SA202211007

    [Objective] Pineapple is a common tropical fruit, and its ripeness has an important impact on storage and marketing, so analyzing the maturity of pineapple fruit before picking is particularly important. Deep learning technology can be an effective method for automatic recognition of pineapple maturity. To improve the accuracy and speed of this recognition, a new network model named MobileNet V3-YOLOv4 was proposed in this study. [Methods] Firstly, a pineapple maturity analysis dataset was constructed: 1580 images in total, with 1264 selected as the training set, 158 as the validation set, and 158 as the test set. Pineapple photos were taken in natural environments. To ensure the diversity of the dataset and improve the robustness and generalization of the network, photos were taken under different conditions, such as branch and leaf occlusion, uneven lighting, and overlapping shadows, and at different locations, in different weather, and in different growing environments. Then, according to the pineapple maturity index, photos of different maturities were labeled as either yellow ripeness or green ripeness, and the annotated images were input into the network for training. To address the problems of the traditional YOLOv4 network, such as a large number of parameters, a complex network structure and slow inference, a more optimized, lightweight MobileNet V3-YOLOv4 network model was proposed. The model uses the bneck structure to replace the Resblock in the CSPDarknet backbone of YOLOv4. To verify the effectiveness of MobileNet V3-YOLOv4, MobileNet V1-YOLOv4 and MobileNet V2-YOLOv4 models were also trained.
Five other single-stage and two-stage network models, including R-CNN, YOLOv3, SSD300, RetinaNet and CenterNet, were compared on each evaluation index to analyze the performance of the MobileNet V3-YOLOv4 model. [Results and Discussions] MobileNet V3-YOLOv4 was validated for pineapple maturity detection through experiments comparing model performance, classification prediction, and accuracy in complex detection environments. In terms of model performance, the training time of MobileNet V3-YOLOv4 was 11,924 s, averaging 39.75 s per round; saturation time was reduced by 25.59% compared to YOLOv4, and its 53.7 MB of parameters accounted for only 22% of YOLOv4's. To validate classification prediction performance, four metrics, namely Recall, F1-Score, Precision, and average precision (AP), were used to classify and recognize pineapples of different maturities. The experimental results demonstrate that MobileNet V3-YOLOv4 exhibited significantly higher Precision, AP, and F1-Score than the other models. For the semi-ripe stage, compared with YOLOv4 it achieved a 4.49% increase in AP, a 0.07 improvement in F1-Score, a 1% increase in Recall, and a 3.34% increase in Precision; for the ripe stage, a 6.06% increase in AP, a 0.13 improvement in F1-Score, a 16.55% increase in Recall, and a 6.25% increase in Precision. Because ripe pineapples have distinct color features and are easy to distinguish from the background, the improved network achieved a precision rate of 100.00%. Additionally, the mAP and inference speed (Frames Per Second, FPS) of nine algorithms were examined.
The results showed that MobileNet V3-YOLOv4 achieved an mAP of 90.92%, which was 5.28% higher than YOLOv4 and 3.67% higher than YOLOv3, and an FPS of 80.85 img/s, which was 40.28 img/s higher than YOLOv4 and 8.91 img/s higher than SSD300. In complex environments, MobileNet V3-YOLOv4 detected pineapples of both the semi-ripe and ripe stages with a 100% success rate, while YOLOv4, MobileNet V1-YOLOv4, and MobileNet V2-YOLOv4 exhibited varying degrees of missed detections. [Conclusions] Based on these results, the MobileNet V3-YOLOv4 model proposed in this study not only reduces training time and parameter count but also improves the accuracy and inference speed of pineapple maturity recognition, so it has promising application prospects in smart orchards. The pineapple photo dataset collected in this research can also provide a valuable data resource for related research and applications.
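The Recall, Precision and F1-Score metrics used in the comparison above follow their standard definitions and can be computed directly from detection counts. A short sketch with made-up counts (not figures from the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class detection metrics from raw counts.

    tp: true positives (correct detections)
    fp: false positives (spurious detections)
    fn: false negatives (missed objects)
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: 90 ripe pineapples found, 10 false alarms, 5 missed.
p, r, f1 = precision_recall_f1(90, 10, 5)
```

AP additionally averages precision over the recall curve at varying confidence thresholds, and mAP averages AP over all classes.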

  • Special Issue--Key Technologies and Equipment for Smart Orchard
    LIU Limin, HE Xiongkui, LIU Weihong, LIU Ziyan, HAN Hu, LI Yangfan
    Smart Agriculture. 2022, 4(3): 63-74. https://doi.org/10.12133/j.smartag.SA202207008

    To realize the autonomous navigation and automatic target spraying of intelligent plant protect machinery in orchard, in this study, an autonomous navigation and automatic target spraying robot for orchards was developed. Firstly, a single 3D light detection and ranging (LiDAR) was used to collect fruit trees and other information around the robot. The region of interest (ROI) was determined using information on the fruit trees in the orchard (plant spacing, plant height, and row spacing), as well as the fundamental LiDAR parameters. Additionally, it must be ensured that LiDAR was used to detect the canopy information of a whole fruit tree in the ROI. Secondly, the point clouds within the ROI was two-dimension processing to obtain the fruit tree center of mass coordinates. The coordinate was the location of the fruit trees. Based on the location of the fruit trees, the row lines of fruit tree were obtained by random sample consensus (RANSAC) algorithm. The center line (navigation line) of the fruit tree row within ROI was obtained through the fruit tree row lines. The robot was controlled to drive along the center line by the angular velocity signal transmitted from the computer. Next, the ATRS's body speed and position were determined by encoders and the inertial measurement unit (IMU). And the collected fruit tree zoned canopy information was corrected by IMU. The presence or absence of fruit tree zoned canopy was judged by the logical algorithm designed. Finally, the nozzles were controlled to spray or not according to the presence or absence of corresponding zoned canopy. The conclusions were obtained. The maximum lateral deviation of the robot during autonomous navigation was 21.8 cm, and the maximum course deviation angle was 4.02°. Compared with traditional spraying, the automatic target spraying designed in this study reduced pesticide volume, air drift and ground loss by 20.06%, 38.68% and 51.40%, respectively. 
There was no significant difference between automatic target spraying and traditional spraying in the percentage of air drift. In terms of the percentage of ground loss, automatic target spraying gave 43% at the bottom of the test fruit trees, and 29% and 28% at the middle of the test fruit trees and at the left and right neighboring fruit trees, respectively, whereas in traditional spraying the corresponding percentages were 25%, 38%, and 37%. The robot developed can realize autonomous navigation while ensuring the spraying effect, reducing pesticide volume and loss.
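The RANSAC row-line fitting step described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the iteration count, inlier tolerance and the trunk-centroid coordinates below are all hypothetical.

```python
import math
import random

def fit_line(p, q):
    # Line through two points as (a, b, c) with a*x + b*y + c = 0, unit-normalized.
    a, b = q[1] - p[1], p[0] - q[0]
    c = -(a * p[0] + b * p[1])
    n = math.hypot(a, b)
    return a / n, b / n, c / n

def ransac_line(points, n_iters=200, tol=0.15, seed=0):
    """Fit one tree-row line to 2D trunk centroids, tolerating outliers."""
    rng = random.Random(seed)
    best_line, best_inliers = None, []
    for _ in range(n_iters):
        p, q = rng.sample(points, 2)
        if p == q:
            continue  # degenerate sample (duplicate centroids)
        a, b, c = fit_line(p, q)
        # Inliers lie within `tol` (m) of the candidate line.
        inliers = [pt for pt in points if abs(a * pt[0] + b * pt[1] + c) < tol]
        if len(inliers) > len(best_inliers):
            best_line, best_inliers = (a, b, c), inliers
    return best_line, best_inliers
```

Running this once per tree row yields the left and right row lines; the navigation line is then the midline between the two, as in the paper's pipeline.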

  • Overview Article
    GUI Zechun, ZHAO Sijian
    Smart Agriculture. 2023, 5(1): 82-98. https://doi.org/10.12133/j.smartag.SA202211004

    Agriculture is a basic industry deeply related to the national economy and people's livelihood, yet it is also a vulnerable one. Traditional agricultural risk management research methods suffer from problems such as insufficient mining of nonlinear information, low accuracy and poor robustness. Artificial intelligence (AI) has powerful capabilities such as strong nonlinear fitting, end-to-end modeling, and feature self-learning from big data, which can solve the above problems well. The research progress of AI technology in agricultural vulnerability assessment, agricultural risk prediction and agricultural damage assessment was first analyzed in this paper, and the following conclusions were obtained: 1. The feature importance assessment of AI in agricultural vulnerability assessment lacks scientific and effective verification indicators, and the way it is applied makes it impossible to compare the advantages and disadvantages of multiple AI models; it is therefore suggested to combine subjective and objective methods for evaluation. 2. In risk prediction, it is found that the prediction ability of machine learning models tends to decline as the prediction horizon grows; overfitting is a common problem, and there is little research on mining the spatial information of graph data. 3. The complex agricultural production environment and varied application scenarios are important factors affecting the accuracy of damage assessment; improving the feature extraction ability and robustness of deep learning models is a key and difficult issue to be overcome in future technological development. Then, corresponding solutions were put forward for the performance improvement problem and the small-sample problem encountered when applying AI technology. 
For the performance improvement problem, depending on the user's familiarity with artificial intelligence, multi-model comparison methods, model combination methods and neural network structure optimization methods can be used to improve model performance. For the small-sample problem, data augmentation, generative adversarial networks (GAN) and transfer learning can often be combined to increase the amount of input data, enhance model robustness, accelerate training and improve recognition accuracy. Finally, the applications of AI in agricultural risk management were prospected: in the future, AI algorithms could be considered in the construction of agricultural vulnerability curves; in view of the relationships between the upstream and downstream of the agricultural industry chain and agriculture-related industries, graph neural networks could be used more widely to further study agricultural price risk prediction; and in future damage assessment modeling, more professional knowledge related to the assessment target could be introduced to enhance feature learning, while expanding small-sample data remains a key subject of future research.
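As a toy illustration of the data augmentation mentioned for the small-sample problem, simple geometric transforms can multiply each labeled image several-fold before training. Here an image is a nested list standing in for a real pixel array; real pipelines would operate on arrays and add photometric variants as well.

```python
def hflip(img):
    """Horizontal flip: mirror each row."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Return simple geometric variants of one sample (original included)."""
    return [img, hflip(img), rot90(img), hflip(rot90(img))]
```

Each variant keeps the original class label, so a small dataset grows fourfold at no labeling cost.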

  • Overview Article
    BAI Geng, GE Yufeng
    Smart Agriculture. 2023, 5(1): 66-81. https://doi.org/10.12133/j.smartag.SA202211001

    Enhancing resource use efficiency in agricultural field management and breeding high-performance crop varieties are crucial approaches for securing crop yield and mitigating the negative environmental impact of crop production. Crop stress sensing and plant phenotyping systems are integral to variable-rate (VR) field management and high-throughput plant phenotyping (HTPP), with both sharing similarities in hardware and data processing techniques. Crop stress sensing systems for VR field management have been studied for decades, aiming to establish more sustainable management practices. Concurrently, significant advancements in HTPP system development have provided a technological foundation for reducing conventional phenotyping costs. In this paper, we present a systematic review of crop stress sensing systems employed in VR field management, followed by an introduction to the sensors and data pipelines commonly used in field HTPP systems. State-of-the-art sensing and decision-making methodologies for irrigation scheduling, nitrogen application, and pesticide spraying are categorized based on the degree of modern sensor and model integration. We highlight the data processing pipelines of three ground-based field HTPP systems developed at the University of Nebraska-Lincoln. Furthermore, we discuss current challenges and propose potential solutions for field HTPP research. Recent progress in artificial intelligence, robotic platforms, and innovative instruments is expected to significantly enhance system performance, encouraging broader adoption by breeders. Direct quantification of major plant physiological processes may represent one of the next research frontiers in field HTPP, offering valuable phenotypic data for crop breeding under increasingly unpredictable weather conditions. This review can offer a distinct perspective, benefiting both research communities in a novel manner.

  • Topic--Machine Vision and Agricultural Intelligent Perception
    ZHU Yanjun, DU Wensheng, WANG Chunying, LIU Ping, LI Xiang
    Smart Agriculture. 2023, 5(2): 23-34. https://doi.org/10.12133/j.smartag.SA202304001

    Objective Rapid recognition and automatic positioning of table grapes in the natural environment is the prerequisite for automatic picking by a harvesting robot. Methods A rapid recognition and automatic picking-point positioning method based on an improved K-means clustering algorithm and contour analysis was proposed. First, the Euclidean distance was replaced by a weighted gray threshold as the similarity criterion of K-means. Then the table grape images were rasterized according to the K value to obtain the initial clustering centers. Next, the average gray value of each cluster and the percentage of its pixels in the total pixels were calculated, and the weighted gray threshold was obtained from the average gray values and percentages of adjacent clusters. Clustering was considered complete when the weighted gray threshold no longer changed, yielding the cluster image of the table grapes. The improved clustering algorithm not only saved clustering time but also allowed the K value to change adaptively. Moreover, the adaptive Otsu algorithm was used to extract grape cluster information, so that the initial binary image of the table grapes was obtained. In order to reduce the interference of redundant noise on recognition accuracy, morphological algorithms (opening, closing, hole filling and maximum connected domain) were used to remove noise, giving an accurate binary image of the table grapes. The contours of the table grapes were then obtained with the Sobel operator. Furthermore, since table grape clusters grow perpendicular to the ground due to gravity in the natural environment, the extreme points and the center-of-gravity point of the grape cluster were obtained based on contour analysis. 
In addition, the line bundle on which the extreme point and the center-of-gravity point were located was taken as the carrier, and the similarity of pixel points on both sides of each line was taken as the judgment basis; the line with the lowest similarity value was taken as the grape stem, thereby locating the stem axis. Moreover, according to the agronomic picking requirements of table grapes and combined with contour analysis, the region of interest (ROI) containing the picking point was obtained: the intersection of the grape stem and the contour was regarded as the midpoint of the bottom edge of the ROI, 0.8 times the distance between the left and right extreme points as the length of the ROI, and 0.25 times the distance between the center-of-gravity point and the intersection of the grape stem and the contour as the height of the ROI. After that, the central point of the ROI was captured, the nearest point on the grape stem to this center point was determined, and that point was taken as the picking point of the table grapes. Finally, 917 grape images (including Summer Black, Moldova, and Youyong) taken by the rear camera of an MI8 mobile phone at the Jinniu Mountain Base of Shandong Fruit and Vegetable Research Institute were verified experimentally. Results and Discussions The results showed that the success rate was 90.51% when the error between the table grape picking points and the optimal points was less than 12 pixels, and the average positioning time was 0.87 s; the method realized fast and accurate localization of table grape picking points. On top of that, according to the two cultivation modes of table grapes (hedgerow planting and trellis planting), a simulation test platform based on the Dense mechanical arm and a single-chip computer was set up in the study. 
Fifty simulation tests were carried out for each of the four conditions. The success rate of picking-point localization for purple grapes under hedgerow planting was 86.00%, with an average localization time of 0.89 s; for purple grapes under trellis planting, 92.00% and 0.67 s; for green grapes under hedgerow planting, 78.00% and 0.72 s; and for green grapes under trellis planting, 80.00% and 0.71 s. Conclusions The experimental results showed that the method proposed in the study can meet the requirements of table grape picking and can provide technical support for the development of grape picking robots.
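The ROI geometry described above (length = 0.8 times the extreme-point distance, height = 0.25 times the gravity-to-intersection distance, picking point = nearest stem pixel to the ROI center) can be sketched as follows. The coordinates and the assumption that the stem extends upward (smaller y in image coordinates) are illustrative, not taken from the paper.

```python
import math

def picking_roi(left_ext, right_ext, gravity, stem_contour_pt):
    """ROI for the picking point, following the paper's geometric rules:
    bottom-edge midpoint at the stem/contour intersection,
    length = 0.8 * |left_ext - right_ext|,
    height = 0.25 * |gravity - stem_contour_pt|."""
    length = 0.8 * math.dist(left_ext, right_ext)
    height = 0.25 * math.dist(gravity, stem_contour_pt)
    # Image y grows downward, so the ROI center sits height/2 above the intersection.
    center = (stem_contour_pt[0], stem_contour_pt[1] - height / 2)
    return center, length, height

def picking_point(roi_center, stem_pts):
    """The stem pixel nearest the ROI center is taken as the picking point."""
    return min(stem_pts, key=lambda p: math.dist(p, roi_center))
```

For example, with extreme points 100 px apart and a 40 px gravity-to-intersection distance, the ROI is 80 px long and 10 px high, and the picking point is searched only along the already-located stem axis.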

  • Intelligent Equipment and Systems
    QIN Yingdong, JIA Wenshen
    Smart Agriculture. 2023, 5(1): 155-165. https://doi.org/10.12133/j.smartag.SA202211008

    To meet the needs of environmental monitoring and regulation in rabbit houses, a real-time environmental monitoring system for rabbit houses was proposed based on the narrow band Internet of Things (NB-IoT). The system overcomes the limitations of traditional wired networks and keeps network costs and circuit component expenses low. An Arduino development board and the Quectel BC260Y NB-IoT network module were used, along with the message queuing telemetry transport (MQTT) protocol for remote telemetry, enabling network connectivity and communication with an IoT cloud platform. Multiple sensors, including the SGP30, MQ137, and 5516 photoresistors, were integrated into the system to achieve real-time monitoring of various environmental parameters within the rabbit house, such as sound decibels, light intensity, humidity, temperature, and gas concentrations. The collected data were stored both locally and in the cloud for further analysis and could be used to inform environmental regulation and monitoring in rabbit houses. Signal alerts based on circuit principles were triggered when thresholds were exceeded, helping create an optimal living environment for the rabbits. The advantages of NB-IoT networks relative to other networks, such as Wi-Fi and LoRa, were compared, and the technology and process of building a system based on the three-layer Internet of Things architecture were introduced. The prices of circuit components were analyzed, and the total cost of the entire system was less than 400 RMB. The system underwent network and energy consumption tests; transmission stability, reliability, and energy consumption were reasonable and consistent across different time periods, locations, and network connection methods. An average of 0.57 transactions per second (TPS) was processed by the NB-IoT network using the MQTT communication protocol, and 34.2 messages per minute were sent and received with a fluctuation of 1 message. 
The monitored device was found to have an average voltage of approximately 12.5 V, a current of approximately 0.42 A, and an average power of 5.3 W after continuous monitoring with an electricity meter; no additional power consumption was observed during communication. The performance of the various sensors was tested in a 24-hour indoor test, during which temperature and lighting conditions showed variations corresponding to the day and night cycle. The readings were captured stably and accurately by the environmental sensors, demonstrating their suitability for long-term monitoring. This system can serve as a reference on equipment cost and network selection for remote or large-scale livestock monitoring devices.
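A minimal sketch of the kind of telemetry message such a node could publish over MQTT is shown below. The field names, device ID and alarm thresholds are hypothetical illustrations, not the paper's actual message format.

```python
import json

# Hypothetical alarm thresholds; the paper triggers alerts when limits are exceeded.
THRESHOLDS = {"temp_c": 28.0, "humidity": 80.0, "nh3_ppm": 20.0, "noise_db": 70.0}

def make_payload(readings, device_id="rabbit-house-01"):
    """Build the JSON message a node would publish over MQTT,
    flagging every reading that exceeds its threshold."""
    alarms = [k for k, v in readings.items()
              if k in THRESHOLDS and v > THRESHOLDS[k]]
    return json.dumps({"device": device_id,
                       "readings": readings,
                       "alarms": alarms})
```

With a client library such as paho-mqtt, the string returned here would be passed to `client.publish(topic, payload)`; on the microcontroller side the same JSON structure would be assembled in C.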

  • Topic--Intelligent Agricultural Sensor Technology
    WANG Rujing
    Smart Agriculture. 2024, 6(1): 1-17. https://doi.org/10.12133/j.smartag.SA202401017

    [Significance] Agricultural sensors are a key technology for developing modern agriculture. An agricultural sensor is a detection device that can sense a physical signal related to the agricultural environment, plants or animals and convert it into an electrical signal. Agricultural sensors can be applied to monitor crops and livestock in different agricultural environments, including weather, water, atmosphere and soil. They are also an important driving force to promote the iterative upgrading of agricultural technology and change agricultural production methods. [Progress] The different agricultural sensors are categorized, the cutting-edge research trends of agricultural sensors are analyzed, and the current research status of agricultural sensors in different application scenarios is summarized. Moreover, a deep analysis and discussion of four major categories is conducted: agricultural environment sensors, animal and plant life information sensors, agricultural product quality and safety sensors, and agricultural machinery sensors. The research and development process, as well as the universality and limitations of the application of these four types of agricultural sensors, are summarized. Agricultural environment sensors are mainly used for real-time monitoring of key parameters in agricultural production environments, such as the quality of water, gas, and soil. Soil sensors provide data support for precision irrigation, rational fertilization, and soil management by monitoring indicators such as soil humidity, pH, temperature, nutrients, microorganisms, pests and diseases, heavy metals and agricultural pollution. Monitoring of dissolved oxygen, pH, nitrate content, and organophosphorus pesticides in irrigation and aquaculture water through water sensors ensures the rational use of water resources and water quality safety. 
Gas sensors monitor atmospheric CO2, NH3, C2H2 and CH4 concentrations and other information, providing appropriate environmental conditions for the growth of crops in greenhouses. Animal life information sensors can obtain an animal's growth, movement, and physiological and biochemical status, including movement trajectory, food intake, heart rate, body temperature, blood pressure and blood glucose. Plant life information sensors monitor plant health and growth, such as volatile organic compounds of the leaves, surface temperature and humidity, phytohormones, and other parameters. In particular, flexible wearable plant sensors provide a new way to measure plant physiological characteristics accurately and to monitor the water status and physiological activities of plants non-destructively and continuously. Agricultural product quality and safety sensors are mainly used to detect various indicators in agricultural products, such as temperature and humidity, freshness, nutrients, and potentially hazardous substances (e.g., bacteria, pesticide residues, heavy metals). Agricultural machinery sensors can achieve real-time monitoring and control of agricultural machinery, supporting cultivation, planting, management and harvesting, automated operation of agricultural machinery, and accurate application of pesticides and fertilizers. [Conclusions and Prospects] Regarding the challenges and prospects of agricultural sensors, the core bottlenecks of their large-scale application at the present stage are analyzed in detail, including low cost, specialization, high stability, and adaptive intelligence. Furthermore, the concept of "ubiquitous sensing in agriculture" is proposed, which provides ideas and references for the research and development of agricultural sensor technology.

  • Topic--Machine Vision and Agricultural Intelligent Perception
    SHI Jiefeng, HUANG Wei, FAN Xieyang, LI Xiuhua, LU Yangxu, JIANG Zhuhui, WANG Zeping, LUO Wei, ZHANG Muqing
    Smart Agriculture. 2023, 5(2): 82-92. https://doi.org/10.12133/j.smartag.SA202304004

    Objective Accurate prediction of changes in sugarcane yield in Guangxi can provide an important reference for the government to formulate relevant policies and a decision-making basis for farmers to guide sugarcane planting, thereby improving sugarcane yield and quality and promoting the development of the sugarcane industry. This research was conducted to provide scientific data support for sugar factories and related management departments, and to explore the relationship between sugarcane yield and meteorological factors in the main sugarcane producing areas of Guangxi Zhuang Autonomous Region. Methods The study area included five sugarcane planting regions located in five different counties in Guangxi, China. The average yield per hectare of each planting region was provided by Guangxi Sugar Industry Group, which controls the sugar refineries of each planting region. Daily meteorological data covering 14 meteorological factors from 2002 to 2019 were acquired from the National Data Center for Meteorological Sciences to analyze their influence on sugarcane yield. Since meteorological factors can pose different influences on sugarcane growth during different time spans, a new kind of factor combining a meteorological factor with a time span was defined, such as the average precipitation in August or the average temperature from February to April. The inter-correlations of all the meteorological factors over different time spans and their correlations with yields were then analyzed to screen out the key meteorological factors of sensitive time spans. After that, four algorithms, BP neural network (BPNN), support vector machine (SVM), random forest (RF), and long short-term memory (LSTM), were employed to establish sugarcane apparent yield prediction models for each planting region; corresponding reference models based on annual meteorological factors were also built. 
Additionally, the meteorological yields of every planting region were extracted by HP filtering, and a general meteorological yield prediction model was built on the data of all five planting regions by using RF, SVM, BPNN, and LSTM, respectively. Results and Discussions The correlation analysis showed that different planting regions have different sensitive meteorological factors and key time spans. The highly representative meteorological factors mainly included sunshine hours, precipitation, and atmospheric pressure. According to the results of the correlation analysis, in Region 1 the highest negative correlation coefficient with yield was observed for the sunshine hours during October and November, while the highest positive correlation coefficient was found for the minimum relative humidity in November. In Region 2, the maximum positive correlation coefficient with yield was observed for the average vapor pressure during February and March, whereas the maximum negative correlation coefficient was associated with the precipitation in August and September. In Region 3, the maximum positive correlation coefficient with yield was found for the 20‒20 precipitation during August and September, while the maximum negative correlation coefficient was related to the sunshine hours in the same period. In Region 4, the maximum positive correlation coefficient with yield was observed for the 20‒20 precipitation from March to December, whereas the maximum negative correlation coefficient was associated with the highest atmospheric pressure from August to December. In Region 5, the maximum positive correlation coefficient with yield was found for the average vapor pressure from June to August, whereas the maximum negative correlation coefficient was related to the lowest atmospheric pressure in February and March. 
For each specific planting region, the accuracy of the apparent yield prediction model based on sensitive meteorological factors during key time spans was obviously better than that based on annual average meteorological values. The LSTM model performed significantly better than the widely used classic BPNN, SVM, and RF models for both kinds of meteorological factors (under sensitive time spans or annually). The overall root mean square error (RMSE) and mean absolute percentage error (MAPE) of the LSTM model under key time spans were 10.34 t/ha and 6.85%, respectively, with a coefficient of determination Rv2 of 0.8489 between the predicted and true values. For the general meteorological yield prediction models across the multiple sugarcane planting regions, the RF, SVM, and BPNN models achieved good results, and the best prediction performance went to the BPNN model, with an RMSE of 0.98 t/ha, a MAPE of 9.59%, and an Rv2 of 0.965. The RMSE and MAPE of the LSTM model were 0.25 t/ha and 39.99%, respectively, and its Rv2 was 0.77. Conclusions Sensitive meteorological factors under key time spans were found to be more significantly correlated with yields than annual average meteorological factors. The LSTM model shows better performance in apparent yield prediction for specific planting regions than the classic BPNN, SVM, and RF models, but the BPNN model showed better results than the other models in predicting meteorological yield over multiple sugarcane planting regions.
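The screening step above, correlating each candidate (factor, time-span) series with the yield series and keeping the strongest, can be sketched as follows. The series names and values are illustrative, not data from the paper.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def screen_factors(factor_series, yields, top_k=3):
    """Rank candidate (factor, time-span) series by |r| against the yields
    and keep the top_k most sensitive ones."""
    ranked = sorted(factor_series.items(),
                    key=lambda kv: abs(pearson(kv[1], yields)),
                    reverse=True)
    return ranked[:top_k]
```

In the paper this screening is done per region, which is why each region ends up with its own sensitive factors and key time spans.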

  • Information Processing and Decision Making
    Zhu Yeping, Li Shijuan, Li Shuqin
    Smart Agriculture. 2019, 1(1): 53-66. https://doi.org/10.12133/j.smartag.2019.1.1.201901-SA005

    According to the demand for digitized analysis and visual representation of crop yield formation and variety adaptability analysis, and aiming at improving the timeliness, coordination and sense of reality of crop simulation models, key technologies of crop growth process simulation modeling and morphological 3D visualization were studied in this research. Internet of Things technology was applied to collect the field data, and multi-agent technology was used to study the co-simulation method and design the crop model framework. Winter wheat (Triticum aestivum L.) was taken as an example to conduct field tests, and a 3D morphology visualization system was developed and validated. Taking three wheat varieties, Hengguan35 (Hg35), Jimai22 (Jm22) and Heng4399 (H4399), as research objects, logistic equations were constructed to simulate the change of leaf length, maximum leaf width, leaf height and plant height. A parametric modeling method and the 3D graphics library OpenGL were used to build the wheat organ geometry model and draw the wheat morphological structure model. The R2 values of leaf length, maximum leaf width, leaf height and plant height were between 0.772 and 0.999, indicating that the model has a high fitting degree; the F values of the regression equations (between 10.153 and 4359.236) and Sig. values (under 0.05) show that the model has good significance. Taking wheat as an example, this research combined the wheat growth model and structure model effectively to realize 3D morphology visualization of crop growth processes under different conditions. It provides references for developing crop simulation visualization systems, and the method and related technologies are suitable for other field crops such as corn and rice.
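The logistic growth equation used for organ dimensions has the general form y(t) = K / (1 + e^(-k(t - t0))). A minimal sketch, with hypothetical parameter values (the paper's fitted coefficients are not reproduced here), together with the R² measure used to judge fitting degree:

```python
import math

def logistic(t, K, k, t0):
    """Logistic growth curve: value at time t, with upper asymptote K,
    growth rate k, and inflection time t0 (where y = K/2)."""
    return K / (1.0 + math.exp(-k * (t - t0)))

def r_squared(obs, pred):
    """Coefficient of determination between observed and fitted values."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

For example, `logistic(t, K=20, k=0.5, t0=10)` traces a leaf length rising toward 20 cm, reaching half that value at day 10; comparing such fitted curves against measurements yields the R² range (0.772 to 0.999) reported above.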

  • Special Issue--Key Technologies and Equipment for Smart Orchard
    SHANG Fengnan, ZHOU Xuecheng, LIANG Yingkai, XIAO Mingwei, CHEN Qiao, LUO Chendi
    Smart Agriculture. 2022, 4(3): 120-131. https://doi.org/10.12133/j.smartag.SA202207001

    Dragon fruit detection in the natural environment is the prerequisite for fruit harvesting robots to perform harvesting. In order to improve harvesting efficiency, a target detection network with an attention module was proposed in this research by improving the YOLOX (You Only Look Once X) network. The YOLOX-Nano network was chosen as the benchmark to facilitate deployment on embedded devices, and the convolutional block attention module (CBAM) was added to the backbone feature extraction network of YOLOX-Nano, which improved the robustness of the model for dragon fruit target detection to a certain extent. The correlation of features between different channels was learned through weight allocation coefficients for the features of different scales extracted by the backbone network. Moreover, the transmission of deep information in the network structure was strengthened, aiming to reduce the interference affecting dragon fruit recognition in the natural environment and to improve the accuracy and speed of detection significantly. Performance evaluation and comparison tests of the method were carried out. The results showed that, after training, the dragon fruit target detection network achieved an AP0.5 value of 98.9% on the test set, an AP0.5:0.95 value of 72.4%, and an F1 score of 0.99. Compared with other YOLO network models under the same experimental conditions, the improved YOLOX-Nano network model proposed in this research was more lightweight, while its detection accuracy surpassed that of YOLOv3, YOLOv4-Tiny and YOLOv5-S: the average detection accuracy of the improved YOLOX-Nano target detection network was the highest, reaching 98.9%, which was 26.2 percentage points higher than YOLOv3, 9.8 percentage points higher than YOLOv4-Tiny, and 7.9 percentage points higher than YOLOv5-S. Finally, real-time tests were performed on videos with different input resolutions. 
The improved YOLOX-Nano target detection network proposed in this research had an average detection time of 21.72 ms for a single image, and the network model size was only 3.76 MB, which is convenient for deployment on embedded devices. In conclusion, not only did the improved YOLOX-Nano target detection network model accurately detect dragon fruit under different lighting and occlusion conditions, but its detection speed and accuracy could also meet the requirements of dragon fruit harvesting in the natural environment, providing guidance for the design of dragon fruit harvesting robots.
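CBAM combines a channel-attention branch with a spatial-attention branch. A heavily simplified, pure-Python sketch of the channel-attention idea is shown below: per-channel average- and max-pooled descriptors pass through a shared transform, are summed and squashed by a sigmoid, and the resulting weight rescales the channel. The real module uses a shared two-layer MLP plus the spatial branch; the single scalar weight here is a hypothetical stand-in for illustration only.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps, w=1.0, b=0.0):
    """Toy CBAM-style channel attention on a list of 2D feature maps.
    `w` and `b` stand in for the learned shared transform."""
    weights = []
    for fm in feature_maps:
        flat = [v for row in fm for v in row]
        avg_pool = sum(flat) / len(flat)   # global average pooling
        max_pool = max(flat)               # global max pooling
        weights.append(sigmoid(w * avg_pool + w * max_pool + b))
    # Rescale every channel by its attention weight.
    return [[[v * wt for v in row] for row in fm]
            for fm, wt in zip(feature_maps, weights)]
```

Channels with stronger responses receive weights closer to 1 and so pass through nearly unchanged, while weakly responding channels are suppressed, which is the mechanism the paper relies on to emphasize fruit-relevant features.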

  • Topic--Agricultural Sensor and Internet of Things
    YANG XuanJiang, LI Hualong, LI Miao, HU Zelin, LIAO Jianjun, LIU Xianwang, GUO Panpan, YUE Xudong
    Smart Agriculture. 2020, 2(2): 115-125. https://doi.org/10.12133/j.smartag.2020.2.2.202004-SA001

    With the development of information technology, using big data analysis, Internet of Things monitoring, sensor perception, wireless communication and other technologies to build a real-time online beehive monitoring system is a feasible way to reduce the stress response of bee colonies caused by manual hive inspection. Addressing the difficulty of real-time monitoring in the closed environment of the beehive, an STM32F103VBT6 32-bit microcontroller integrated with a temperature and humidity sensor, a microphone, and laser beam sensors was used in this study to develop a low-power, continuously working online monitoring system for multi-parameter information acquisition and monitoring of key beehive parameters. The system mainly includes a core processing module, a data acquisition module, a data sending module and a database server. The data acquisition module includes a temperature and humidity collection unit inside the beehive, a bee colony sound collection unit, and a counting unit for bees entering and leaving the hive, and transfers data over the mobile communication network. The performance test results of the on-site deployment showed that the developed system could monitor the temperature and humidity in the beehive in real time, effectively distinguish bees entering from bees leaving the beehive, and record the numbers of bees entering and leaving the hive entrance, and that the automatically obtained bee colony sounds were consistent with the standard sound distribution of a bee colony. The results indicate that this system meets the design requirements, can accurately and reliably collect beehive parameter data, and can be used as a data collection method for related research on bee colonies.
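One common way to distinguish entering from leaving bees with beam sensors is to pair the break order of two beams mounted at the entrance. The paper does not detail its counting logic, so the two-beam state machine below ('A' = outer beam broken, 'B' = inner beam broken) is an illustrative sketch, not the system's actual algorithm.

```python
def count_bees(events):
    """Direction counting from a sequence of beam-break events:
    outer-then-inner ('A' then 'B') counts as entering,
    inner-then-outer ('B' then 'A') counts as leaving."""
    entered = left = 0
    last = None
    for ev in events:
        if last == "A" and ev == "B":
            entered += 1
            last = None
        elif last == "B" and ev == "A":
            left += 1
            last = None
        else:
            # Unmatched or repeated breaks simply restart the pairing.
            last = ev
    return entered, left
```

On the microcontroller the same logic would run in the beam-sensor interrupt handler, with a timeout to discard stale half-pairs.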

  • Overview Article
    HUANG Zichen, SUGIYAMA Saki
    Smart Agriculture. 2022, 4(2): 135-149. https://doi.org/10.12133/j.smartag.SA202202008

    Intelligent equipment is necessary to ensure stable, high-quality, and efficient production in facility agriculture. Intelligent harvesting equipment in particular needs to be designed and developed according to the characteristics of specific fruits and vegetables, so large-scale mechanization remains limited. Intelligent harvesting equipment in Japan has nearly 40 years of research and development history since the 1980s, and a review of its research and development products has specific inspiration and reference significance. First, the preferential policies applicable to harvesting robots among the support policies of the government and banks to promote the development of facility agriculture were introduced. Then, the development of agricultural robots in Japan was reviewed: the top ten greenhouse fruits and vegetables were selected, and harvesting research on tomato, eggplant, green pepper, cucumber, melon, asparagus, and strawberry harvesting robots based on the combination of agricultural machinery and agronomy was analyzed. Next, the commercialized solutions for tomato, green pepper, and strawberry harvesting systems were detailed and reviewed. Among them, taking the green pepper harvesting robot developed in recent years by the start-up company AGRIST Ltd. as an example, the harvesting robot the company developed based on Internet of Things technology and artificial intelligence algorithms was explained; this harvesting robot can work 24 h a day, and its operation can be controlled through the network. Then, a typical strawberry harvesting robot that had undergone four generations of prototype development was reviewed. The fourth-generation system was a systematic solution developed by the company and researchers, consisting of high-density movable seedbeds and a harvesting robot, with the advantages of high space utilization, all-day work, and intelligent quality grading. 
The strengths, weaknesses, challenges, and future trends of prototype and industrialized solutions developed by universities were also summarized. Finally, suggestions for accelerating the development of intelligent, smart, and industrialized harvesting robots in China's facility agriculture were provided.

  • Special Issue--Monitoring Technology of Crop Information
    GUAN Bolun, ZHANG Liping, ZHU Jingbo, LI Runmei, KONG Juanjuan, WANG Yan, DONG Wei
    Smart Agriculture. 2023, 5(3): 17-34. https://doi.org/10.12133/j.smartag.SA202306012

    [Significance] The scientific dataset of agricultural pests and diseases is the foundation for monitoring and early warning of agricultural pests and diseases. It is of great significance for the development of agricultural pest control and is an important component of developing smart agriculture. As deep learning technology has proven important in intelligent monitoring of agricultural pests and diseases, the quality of the dataset directly affects the effectiveness of image recognition algorithms, and the construction of high-quality agricultural pest and disease datasets is gradually attracting attention from scholars in this field. In the task of image recognition, the recognition effect depends on the one hand on the improvement strategy of the algorithm and on the other hand on the quality of the dataset. The same recognition algorithm learns different features from datasets of different quality, so its recognition performance also varies. In order to propose a dataset evaluation index to measure the quality of agricultural pest and disease datasets, this article analyzes existing datasets and, taking the challenges faced in constructing agricultural pest and disease image datasets as the starting point, reviews the construction of agricultural pest and disease datasets. [Progress] Firstly, pest and disease datasets are divided into two categories: private datasets and public datasets. Private datasets are characterized by high annotation quality, high image quality, and a large number of inter-class samples, but are not publicly available. Public datasets are characterized by multiple types, low image quality, and poor annotation quality. Secondly, the problems faced in the construction process are summarized, including imbalanced categories at the dataset level, difficulty in feature extraction at the sample level, and difficulty in determining the dataset size at the usage level. 
These include imbalanced inter-class and intra-class samples, selection bias, multi-scale targets, dense targets, uneven data distribution, uneven image quality, insufficient dataset size, and dataset availability. The main causes of these problems are analyzed through two key aspects of dataset construction, image acquisition and annotation methods, and improvement strategies and suggestions for algorithms to address the above issues are summarized. The collection devices can be divided into handheld devices, drone platforms, and fixed collection devices. Collection with handheld devices is flexible and convenient, but inefficient and demanding of photography skills. Drone platforms are suitable for data collection over contiguous areas, but the detailed features captured are not clear enough. Fixed devices offer higher efficiency, but the shooting scene is often relatively fixed. Image annotation is divided into rectangular annotation and polygonal annotation; in image recognition and detection, rectangular annotation is generally used more frequently. Images in which the target is difficult to separate from the background are hard to annotate, and improper annotation can introduce noise or lead to incomplete feature extraction by the algorithm. In response to the problems in these three aspects, evaluation methods for data distribution consistency, dataset size, and image annotation quality are summarized at the end of the article. [Conclusions and Prospects] Future research and development suggestions for constructing high-quality agricultural pest and disease image datasets are proposed based on the actual needs of agricultural pest and disease image recognition: (1) Construct agricultural pest and disease datasets combined with practical usage scenarios. 
To enable the algorithm to extract richer target features, image data can be collected from multiple perspectives and environments. According to actual needs, data categories can be divided scientifically and reasonably from the perspective of algorithm feature extraction, avoiding unreasonable inter-class and intra-class distances, and thus constructing a dataset that meets task requirements for classification and has a balanced feature distribution. (2) Balance the relationship between datasets and algorithms. When improving algorithms, consider a sufficient distribution of categories and features in the dataset, as well as a dataset size that matches the model, to improve algorithm accuracy, robustness, and practicality. Ensure that comparative experiments on algorithm improvements are conducted on the same benchmark dataset, and improve pest and disease image recognition algorithms accordingly. Research the correlation between the scale of agricultural pest and disease image data and algorithm performance, study the relationship between annotation methods and algorithms for pest and disease images that are difficult to annotate, integrate recognition algorithms for fuzzy, dense, and occluded targets, and propose evaluation indicators for agricultural pest and disease datasets. (3) Enhance the use value of datasets. Datasets can be used not only for research on image recognition but also for other business needs. The identification, collection, and annotation of target images is a challenging task in the construction of pest and disease datasets. In the process of collecting image data, attention can also be paid to collecting surrounding environmental information and host information, so as to construct a multimodal agricultural pest and disease dataset and fully leverage the value of the dataset. 
To let researchers focus on business innovation research, it is necessary to innovate the organizational form of data collection, develop a big data platform for agricultural diseases and pests, explore the correlations among multimodal data, improve the accessibility and convenience of data, and provide efficient services for application implementation and business innovation.
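The inter-class imbalance problem discussed in this review can be quantified with a simple ratio. The sketch below is our own illustration, not an index proposed by the article; the label names are hypothetical.

```python
from collections import Counter

def imbalance_ratio(labels):
    """Ratio of the largest to the smallest class frequency.

    1.0 means a perfectly balanced dataset; larger values indicate
    stronger inter-class imbalance.
    """
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

# Hypothetical pest/disease label list: 90 'aphid' images vs. 10 'rust' images
ratio = imbalance_ratio(["aphid"] * 90 + ["rust"] * 10)  # 9.0
```

A ratio like this could serve as one component of the dataset evaluation index the article calls for, alongside measures of annotation quality and dataset size.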

  • Special Issue--Monitoring Technology of Crop Information
    WANG Jingyong, ZHANG Mingzhen, LING Huarong, WANG Ziting, GAI Jingyao
    Smart Agriculture. 2023, 5(3): 142-153. https://doi.org/10.12133/j.smartag.SA202308018

    [Objective] Chlorophyll content and water content are key physiological indicators of crop growth, and their non-destructive detection is a key technology for monitoring crop growth status such as drought stress. This study took maize as the research object to develop a hyperspectral-based approach for the rapid and non-destructive acquisition of leaf chlorophyll content and water content for drought stress assessment. [Methods] Drought treatment experiments were carried out in a greenhouse of the College of Agriculture, Guangxi University. Maize plants were subjected to drought stress at the seedling stage (four leaves). Four treatments were set up: normal water (CK), mild drought (W1), moderate drought (W2), and severe drought (W3). Leaf samples were collected on the 3rd, 6th, and 9th days after treatment, 288 leaf samples in total, with the corresponding chlorophyll content and water content measured using a standard laboratory protocol. A pair of push-broom hyperspectral cameras were used to collect images of the 288 seedling maize leaf samples, and image processing techniques were used to extract the mean spectra of the leaf lamina. The extracted spectral data were processed following a "pre-processing - feature extraction - machine learning inversion" framework. The effects of different pre-processing methods, feature wavelength extraction methods, and machine learning regression models on the prediction performance for chlorophyll content and water content were analyzed systematically, and the optimal chlorophyll content and water content inversion models were constructed accordingly. Firstly, 70% of the spectral data was randomly sampled as the training dataset for the inversion model, and the remaining 30% was used as the testing dataset to evaluate its performance. 
Subsequently, the effects of different spectral pre-processing methods on the prediction performance were compared. Different feature wavelengths were extracted from the optimally pre-processed spectra using different algorithms, and their capabilities in preserving the information useful for the inversion of leaf chlorophyll content and water content were compared. Finally, the performances of different machine learning regression models were compared, and the optimal inversion model was constructed and used to visualize the chlorophyll content and water content. Additionally, the construction of vegetation indices for the inversion of chlorophyll content and water content was explored and their inversion ability evaluated. The performance evaluation indexes used were the coefficient of determination (R2) and the root mean square error (RMSE). [Results and Discussions] The reflectivity of leaves in the wavelength range of 400~1700 nm gradually increased with the degree of drought stress. For the inversion of leaf chlorophyll content, combining stepwise regression (SR) feature extraction with Stacking regression achieved the optimal performance, with an R2 of 0.878 and an RMSE of 0.317 mg/g. Compared with the full-band Stacking model, SR-Stacking not only improved R2 by 2.9% and reduced RMSE by 0.0356 mg/g, but also reduced the number of model input variables from 1301 to 9. For water content, combining the successive projections algorithm (SPA) feature extraction with Stacking regression achieved the optimal performance, with an R2 of 0.859 and an RMSE of 3.75%. Compared with the full-band Stacking model, SPA-Stacking not only increased R2 by 0.2% and reduced RMSE by 0.03%, but also reduced the number of model input variables from 1301 to 16. 
Among the newly constructed vegetation indices, the normalized difference vegetation index (NDVI) [(R410-R559)/(R410+R559)] and the ratio index (RI) (R400/R1171) had the highest accuracy and were significantly better than traditional vegetation indices for chlorophyll content and water content inversion, respectively. Their R2 values were 0.803 and 0.827, and their RMSE values were 0.403 mg/g and 3.28%, respectively. The chlorophyll content and water content of leaves were visualized; the results showed that leaf physiological parameters can be visualized, allowing differences between regions of the same leaf to be seen more intuitively and in detail. [Conclusions] The inversion models and vegetation indices constructed from hyperspectral information can achieve accurate and non-destructive measurement of chlorophyll content and water content in maize leaves. This study provides a theoretical basis and technical support for real-time monitoring of maize growth status. From the leaf spectral information, the optimal model can predict the water content and chlorophyll content of each pixel of the hyperspectral image, and their distributions can be displayed intuitively in color. Because field environments are more complex, transfer learning will be carried out in future work to improve generalization across environments, with the aim of developing an online monitoring system for field drought and nutrient stress.
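The two best-performing indices reported above are simple band ratios, so they can be computed directly from a leaf spectrum. The sketch below is our own illustration (the helper names and the nearest-band lookup are assumptions, not the paper's code); the band count matches the 1301 input variables mentioned in the abstract.

```python
import numpy as np

def band(spectrum, wavelengths, target_nm):
    """Reflectance at the sampled wavelength closest to target_nm."""
    idx = int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))
    return spectrum[idx]

def ndvi_chlorophyll(spectrum, wavelengths):
    # (R410 - R559) / (R410 + R559), the index reported for chlorophyll
    r410 = band(spectrum, wavelengths, 410)
    r559 = band(spectrum, wavelengths, 559)
    return (r410 - r559) / (r410 + r559)

def ri_water(spectrum, wavelengths):
    # R400 / R1171, the index reported for water content
    return band(spectrum, wavelengths, 400) / band(spectrum, wavelengths, 1171)

# 1301 bands over the 400~1700 nm range used in the study
wl = np.linspace(400, 1700, 1301)
```

Applied per pixel of a hyperspectral image, these functions would produce the color-coded distribution maps described in the conclusions.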

  • Special Issue--Key Technologies and Equipment for Smart Orchard
    HAN Leng, HE Xiongkui, WANG Changling, LIU Yajia, SONG Jianli, QI Peng, LIU Limin, LI Tian, ZHENG Yi, LIN Guihai, ZHOU Zhan, HUANG Kang, WANG Zhong, ZHA Hainie, ZHANG Guoshan, ZHOU Guotao, MA Yong, FU Hao, NIE Hongyuan, ZENG Aijun, ZHANG Wei
    Smart Agriculture. 2022, 4(3): 1-11. https://doi.org/10.12133/j.smartag.SA200201014

    Traditional orchard production faces problems of labor shortage due to aging, difficulties in managing agricultural equipment and production materials, and low production efficiency, which can be expected to be solved by building a smart orchard that integrates technologies such as the Internet of Things (IoT), big data, and intelligent equipment. In this study, based on the objectives of full mechanization and intelligent management, a smart orchard was built in Pinggu district, an important peach and pear producing area of Beijing. The orchard covers an area of more than 30 hm2 in Xiying village, Yukou town. In the orchard, more than 10 kinds of information acquisition sensors for pests, diseases, water, fertilizers, and pesticides are applied, and 28 kinds of agricultural machinery with intelligent technical support are equipped. The key technologies used include an intelligent information acquisition system, an integrated water and fertilizer management system, and an intelligent pest management system. The intelligent operation equipment includes an unmanned lawn mower, intelligent anti-freeze machine, trenching and fertilizing machine, self-driving crawler, intelligent profiling variable-rate sprayer, six-rotor branch-targeting drone, multi-functional picking platform, trimming and pruning machine, etc. At the same time, an intelligent management platform has been built for the smart orchard. Comparison results showed that smart orchard production can reduce labor costs by more than 50%, pesticide dosage by 30% ~ 40%, fertilizer dosage by 25% ~ 35%, and irrigation water consumption by 60% ~ 70%, while comprehensive economic benefits increased by 32.5%. The popularization and application of smart orchards will further raise China's fruit production level and facilitate the development of smart agriculture in China.

  • Topic--Machine Vision and Agricultural Intelligent Perception
    WEI Yongkang, YANG Tiancong, DING Xinyao, GAO Yuezhi, YUAN Xinru, HE Li, WANG Yonghua, DUAN Jianzhao, FENG Wei
    Smart Agriculture. 2023, 5(2): 56-67. https://doi.org/10.12133/j.smartag.SA202304014

    [Objective] To quickly and accurately assess crop lodging disasters, it is necessary to promptly obtain information such as the location and area of lodging occurrences. Currently, there are no corresponding technical standards for identifying crop lodging based on UAV remote sensing, which is not conducive to standardizing the UAV data acquisition process or proposing solutions to problems. This study aims to explore the impact of remote sensing images of different spatial resolutions and of feature optimization methods on the accuracy of identifying wheat lodging areas. [Methods] Digital orthophoto maps (DOM) and digital surface models (DSM) were collected by UAVs with high-resolution sensors at different flight altitudes after wheat lodging. The spatial resolutions of these image data were 1.05, 2.09, and 3.26 cm. A full feature set was constructed by extracting 5 spectral features, 2 height features, 5 vegetation indices, and 40 texture features from the pre-processed data. Three feature selection methods, the ReliefF algorithm, the RF-RFE algorithm, and the Boruta-Shap algorithm, were then used to construct optimized feature subsets at different flight altitudes and to select the best feature selection method. The ReliefF algorithm retained features with weights greater than a threshold of 0.2; the RF-RFE algorithm quantitatively evaluated the importance of each feature and introduced variables in descending order of importance to determine classification accuracy; the Boruta-Shap algorithm screened feature subsets from the full feature set, labeling a feature as green and defining it as an important variable for model construction when its importance score was higher than that of the shadow features. Based on these feature subsets, object-oriented classification of the remote sensing images was conducted using eCognition 9.0 software. 
Firstly, after several experiments, the parameters for multi-scale segmentation in the object-oriented classification were determined: a segmentation scale of 1, a shape factor of 0.1, and a compactness of 0.5. Three object-oriented supervised classification algorithms, support vector machine (SVM), random forest (RF), and K-nearest neighbor (KNN), were selected to construct wheat lodging classification models. The overall classification accuracy and the Kappa coefficient were used to evaluate the accuracy of wheat lodging identification. By constructing wheat lodging classification models, the appropriate classification strategy was clarified and a technical path for lodging classification was established, which can be used for wheat lodging monitoring, providing a scientific basis for agricultural production and improving production efficiency. [Results and Discussions] The results showed that increasing the UAV altitude to 90 m significantly improved the efficiency of monitoring wheat lodging areas. Compared with flying at 30 m over the same monitoring range, data acquisition time was reduced to approximately one sixth, and the number of photos needed decreased from 62 to 6. In terms of classification accuracy, the overall classification effect of SVM was better than that of RF and KNN. Additionally, across image spatial resolutions of 1.05 to 3.26 cm, the full feature set and all three optimized feature subsets achieved their highest classification accuracy at a resolution of 1.05 cm, better than at 2.09 and 3.26 cm. As the image spatial resolution decreased, the overall classification effect gradually deteriorated and positioning accuracy decreased, resulting in poor spatial consistency of the classification results. Further analysis found that the Boruta-Shap feature selection method can reduce data dimensionality and improve computational speed while maintaining high classification accuracy. 
Among the three tested spatial resolutions (1.05, 2.09, and 3.26 cm), the combination of the SVM and Boruta-Shap algorithms demonstrated the highest overall classification accuracy, with accuracy rates of 95.6%, 94.6%, and 93.9%, respectively. These results highlight the superior performance of this combination in accurately classifying the data and adapting to changes in spatial resolution. At an image resolution of 3.26 cm, the overall classification accuracy decreased by 1.81% and 0.75% compared with 1.05 and 2.09 cm; at 2.09 cm, it decreased by 1.06% compared with 1.05 cm, showing relatively small differences in classification accuracy across flight altitudes. The overall classification accuracy at an altitude of 90 m reached 95.6%, with a Kappa coefficient of 0.914, meeting the requirements for classification accuracy. [Conclusions] The study shows that the object-oriented SVM classifier and the Boruta-Shap feature optimization algorithm have strong application and extension advantages in identifying lodging areas in remote sensing images at multiple flight altitudes. These methods can achieve high-precision identification of crop lodging areas and reduce the influence of image spatial resolution on model stability. This helps to increase flight altitude, expand the monitoring range, improve UAV operation efficiency, and reduce flight costs. In practical applications, a balance can be struck between classification accuracy and efficiency according to specific requirements and the actual scenario, providing guidance and support for developing strategies for acquiring crop lodging information and evaluating wheat disasters.
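The two evaluation metrics reported above, overall classification accuracy and the Kappa coefficient, can both be computed from a confusion matrix. The sketch below is our own illustration; the example matrix is hypothetical, not the study's data.

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of correctly classified samples (trace over total)."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

def kappa_coefficient(cm):
    """Cohen's Kappa: agreement corrected for chance, (p_o - p_e) / (1 - p_e)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                   # observed agreement
    p_e = float((cm.sum(axis=0) * cm.sum(axis=1)).sum()) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2-class (lodged / not lodged) confusion matrix
cm = [[46, 4],
      [3, 47]]
```

For this hypothetical matrix the overall accuracy is 0.93 and Kappa is 0.86, illustrating how Kappa discounts the agreement expected by chance.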

  • Special Issue--Monitoring Technology of Crop Information
    MA Yujing, WU Shangrong, YANG Peng, CAO Hong, TAN Jieyang, ZHAO Rongkun
    Smart Agriculture. 2023, 5(3): 1-16. https://doi.org/10.12133/j.smartag.SA202303002

    [Significance] Oil crops play a significant role in the food supply and are an important source of edible vegetable oils and plant proteins. Real-time, dynamic, and large-scale monitoring of oil crop growth is essential for guiding agricultural production, stabilizing markets, and maintaining health. Previous studies have made considerable progress in regional-scale yield simulation of staple crops based on remote sensing methods, but regional-scale yield simulation of oil crops remains poor because of the complexity of their plant traits and structural characteristics. Therefore, regional oil crop yield estimation based on remote sensing technology urgently needs to be studied. [Progress] This paper summarizes remote sensing technology in oil crop monitoring from three aspects: background, progress, and opportunities and challenges. Firstly, the significance and advantages of using remote sensing technology to estimate the yield of oil crops are expounded, and it is pointed out that both parameter inversion and crop area monitoring are vital components of yield estimation. Secondly, the current situation of remote sensing-based oil crop monitoring is summarized from the three aspects of parameter inversion, crop area monitoring, and yield estimation. For parameter inversion, it is noted that optical remote sensors have been used more than other sensors in oil crop inversion in previous studies. The advantages and disadvantages of empirical and physical model inversion methods are analyzed, those of optical and microwave data are further illustrated with respect to oil crop structure and trait characteristics, and optimal choices of data and methods for oil crop parameter inversion are given. For crop area monitoring, this paper mainly elaborates on optical and microwave remote sensing data. 
Combined with the structure of oil crops and the characteristics of their planting areas, research on area monitoring of oil crops based on different types of remote sensing data sources is reviewed, including the advantages and limitations of each data source. Then, two yield estimation methods are introduced: direct remote sensing estimation and data assimilation estimation. The phenological periods used for oil crop yield estimation, remote sensing data sources, and modeling methods are summarized. Next, data assimilation technology is introduced, and it is proposed that it has great potential in oil crop yield estimation; assimilation research on oil crops is expounded in terms of assimilation methods and grid selection. All of this indicates that data assimilation technology could improve the accuracy of regional yield estimation of oil crops. Thirdly, this paper points out the opportunities for remote sensing technology in oil crop monitoring, raises problems and challenges in crop feature selection, spatial scale determination, and remote sensing data source selection for oil crop yield estimation, and forecasts future development trends of this research. [Conclusions and Prospects] The paper puts forward the following suggestions on the three aspects: (1) Regarding crop feature selection, when estimating yields for oil crops such as rapeseed and soybean, whose siliques or pods photosynthesize actively, relying solely on canopy leaf area index (LAI) as the assimilation state variable may result in significant underestimation of yields, thereby impacting the accuracy of regional crop yield simulation. Therefore, it is necessary to consider the crop plant characteristics and the agronomic mechanism of yield formation through siliques or pods when estimating yields for oil crops. 
(2) In determining the spatial scale, some oil crops are distributed in hilly and mountainous areas with mixed land cover. Using regular yield simulation grids may mix in numerous background objects, introducing additional errors and affecting the assimilation accuracy of yield estimation, which poses a challenge to yield estimation research. Thus, it is necessary to choose appropriate methods to divide irregular unit grids and determine the optimal scale for yield estimation, thereby improving its accuracy. (3) In terms of remote sensing data selection, the monitoring of oil crops can be influenced by crop structure and meteorological conditions, and depending solely on spectral data may affect yield estimation results. It is important to incorporate radar off-nadir remote sensing measurement techniques to perceive the response relationship between crop leaves, siliques or pods, and remote sensing data parameters. This can bridge the gap between crop characteristics and remote sensing information for crop yield simulation. This paper can serve as a valuable reference and stimulus for further research on regional yield estimation and growth monitoring of oil crops, supplementing existing knowledge and providing insightful considerations for enhancing the accuracy and efficiency of oil crop production monitoring and management.

  • Information Processing and Decision Making
    XU Yulin, KANG Mengzhen, WANG Xiujuan, HUA Jing, WANG Haoyu, SHEN Zhen
    Smart Agriculture. 2022, 4(4): 156-163. https://doi.org/10.12133/j.smartag.SA20220712

    Corn and soybean are upland grain crops grown in the same season, and competition for land between them is prominent in China, so it is necessary to explore the price relations between corn and soybean. In addition, agricultural futures have a price discovery function relative to the spot market. Therefore, the analysis and prediction of corn and soybean futures prices are of great significance for management departments adjusting the planting structure and for farmers selecting crop varieties. In this study, the correlation between corn and soybean futures prices was analyzed: a correlation test found a strong correlation between the two, and a Granger causality test found that the soybean futures price Granger-causes the corn futures price. Then, the corn and soybean futures prices were predicted using a long short-term memory (LSTM) model. To optimize prediction performance, an attention mechanism was introduced (Attention-LSTM) to assign weights to the outputs of the LSTM model at different time steps. Specifically, the LSTM model processed the input sequence of futures prices, the attention layer assigned different weights to the outputs, and the model then output the prediction results after a linear layer. The experimental results showed that the Attention-LSTM model significantly improved the prediction performance for both corn and soybean futures prices compared with the autoregressive integrated moving average (ARIMA) model, the support vector regression (SVR) model, and LSTM. For example, compared with a single LSTM, mean absolute error (MAE) was improved by 3.8% and 3.3%, root mean square error (RMSE) by 0.6% and 1.8%, and mean absolute percentage error (MAPE) by 4.8% and 2.9%, respectively. Finally, the corn futures prices were forecast using historical corn and soybean futures prices together. 
Specifically, two LSTM models processed the input sequences of corn and soybean futures prices respectively, two trained parameters performed a weighted summation of the outputs of the two LSTM models, and the model output the prediction results after a linear layer. The experimental results showed that MAE was improved by 6.9%, RMSE by 1.1%, and MAPE by 5.3% compared with the LSTM model using only corn futures prices, which also confirms the strong correlation between corn and soybean futures prices. In conclusion, the results verify that the Attention-LSTM model can improve the performance of soybean and corn futures price forecasting compared with general prediction models, and that combining the price data of related agricultural futures can improve the prediction performance of agricultural futures forecasting models.
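The attention step described above, weighting the LSTM outputs across time steps before the final linear layer, can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the scoring vector `w` stands in for the learned attention parameters.

```python
import numpy as np

def attention_pool(hidden_states, w):
    """Softmax-weighted sum of per-time-step LSTM outputs.

    hidden_states: (T, d) array of LSTM outputs over T time steps
    w: (d,) scoring vector (a stand-in for learned attention weights)
    Returns the (d,) context vector and the (T,) attention weights.
    """
    scores = hidden_states @ w
    scores = scores - scores.max()                   # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()    # softmax over time steps
    return alpha @ hidden_states, alpha

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))            # 5 time steps of 8-dim LSTM outputs
context, alpha = attention_pool(h, rng.normal(size=8))
```

The context vector would then be passed through a linear layer to produce the price prediction, as the abstract describes.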

  • Topic--Smart Farming of Field Crops
    LUO Qing, RAO Yuan, JIN Xiu, JIANG Zhaohui, WANG Tan, WANG Fengyi, ZHANG Wu
    Smart Agriculture. 2022, 4(4): 84-104. https://doi.org/10.12133/j.smartag.SA202210004

    Accurate peach detection is a prerequisite for automated agronomic management, e.g., mechanical peach harvesting. However, due to uneven illumination and ubiquitous occlusion, it is challenging to detect peaches, especially when the peaches are bagged in orchards. To this end, this paper proposed an accurate multi-class peach detection method for mechanical harvesting by improving YOLOv5s and using multi-modal visual data. An RGB-D dataset with multi-class annotations of naked and bagging peaches was constructed, including 4127 multi-modal images of pixel-aligned color, depth, and infrared images acquired with a consumer-level RGB-D camera. Subsequently, an improved lightweight YOLOv5s (small depth) model was put forward by introducing a direction-aware and position-sensitive attention mechanism, which could capture long-range dependencies along one spatial direction while preserving precise positional information along the other, helping the network accurately detect peach targets. Meanwhile, depthwise separable convolution was employed to reduce model computation by decomposing the standard convolution into a per-channel spatial convolution and a pointwise convolution, which helped to speed up the training and inference of the network while maintaining accuracy. The comparison experiments demonstrated that the improved YOLOv5s using multi-modal visual data recorded detection mAPs of 98.6% and 88.9% on naked and bagging peaches with 5.05 M model parameters under complex illumination and severe occlusion, increases of 5.3% and 16.5% over using RGB images only, and of 2.8% and 6.2% over YOLOv5s. In detecting bagging peaches, the improved YOLOv5s performed best among the compared networks in terms of mAP, which was 16.3%, 8.1%, and 4.5% higher than YOLOX-Nano, PP-YOLO-Tiny, and EfficientDet-D0, respectively. 
In addition, the improved YOLOv5s model offered better results to varying degrees than other methods in detecting Fuji apples and Hayward kiwifruit, verifying its effectiveness on different fruit detection tasks. Further investigation revealed the contribution of each imaging modality, as well as of the proposed improvements to YOLOv5s, to the favorable detection results for both naked and bagging peaches in natural orchards. Additionally, on a popular mobile hardware platform, the improved YOLOv5s model could perform 19 detections per second with the considered five-channel multi-modal images, offering real-time peach detection. These promising results demonstrate the potential of the improved YOLOv5s and multi-modal visual data with multi-class annotations for achieving visual intelligence in automated fruit harvesting systems.
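The computation savings from depthwise separable convolution mentioned in the abstract are easy to verify by counting weights. The sketch below is our own illustration; the layer sizes are hypothetical, not taken from the paper's network.

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution plus pointwise 1 x 1 convolution."""
    return c_in * k * k + c_in * c_out

# Illustrative 3x3 layer mapping 128 channels to 256 channels
standard = conv_params(128, 256, 3)           # 294912 weights
separable = dw_separable_params(128, 256, 3)  # 33920 weights, roughly 8.7x fewer
```

The same factorization applies at every convolutional layer, which is how the improved model stays at 5.05 M parameters while maintaining accuracy.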

  • Topic--Machine Vision and Agricultural Intelligent Perception
    XIA Xue, CHAI Xiujuan, ZHANG Ning, ZHOU Shuo, SUN Qixin, SUN Tan
    Smart Agriculture. 2023, 5(2): 1-12. https://doi.org/10.12133/j.smartag.SA202305004

    Objective The fruit load estimation of fruit trees is essential for horticulture management. The traditional estimation method of manual sampling is not only labor-intensive and time-consuming but also prone to errors. Most existing models cannot be applied to edge computing equipment with limited computing resources because of their high model complexity. This study aims to develop a lightweight model for edge computing equipment to estimate fruit load automatically in the orchard. Methods The experimental data were captured using a smartphone in a citrus orchard in Jiangnan district, Nanning city, Guangxi province. In the dataset, 30 videos were randomly selected for model training and the other 10 for testing. The proposed algorithm was divided into two parts: detecting fruits and extracting their ReID features in each frame of the video, then tracking fruits and estimating the fruit load. Specifically, the CSPDarknet53 network was used as the backbone of the model for feature extraction because it consumes less hardware computing resources, making it suitable for edge computing equipment. The path aggregation feature pyramid network (PAFPN) was introduced as the neck part for feature fusion via skip connections between low-level and high-level features. The fused features from the PAFPN were fed into two parallel branches: a fruit detection branch and an identity embedding branch. The fruit detection branch consisted of three prediction heads, each of which performed a 3×3 convolution and a 1×1 convolution on the feature map output by the PAFPN to predict the fruit keypoint heat map, local offset and bounding box size, respectively. The identity embedding branch distinguished between different fruit identity features. 
In the fruit tracking stage, the byte mechanism from the ByteTrack algorithm was introduced to improve the data association of the FairMOT method, enhancing the performance of fruit load estimation in the video. The Byte algorithm considered both high-score and low-score detection boxes when associating fruit motion trajectories, then matched the similarity of fruit identity features between frames. The number of fruit IDs whose tracking duration was longer than five frames was counted as the amount of citrus fruit in the video. Results and Discussions All experiments were conducted on edge computing equipment. The fruit detection experiment was conducted on the same test dataset containing 211 citrus tree images. The experimental results showed that applying the CSPDarkNet53+PAFPN structure in the proposed model achieved a precision of 83.6%, recall of 89.2% and F1 score of 86.3%, superior to the same indexes of the FairMOT (ResNet34), FairMOT (HRNet18) and Faster RCNN models. The CSPDarkNet53+PAFPN structure could better detect the fruits in the images, laying a foundation for estimating the amount of citrus fruit on trees. The model complexity experiments showed that the number of parameters, FLOPs (Floating Point Operations) and size of the proposed model were 5.01 M, 36.44 G and 70.2 MB, respectively. The number of parameters of the proposed model was 20.19% of the FairMOT (ResNet34) model's and 41.51% of the FairMOT (HRNet18) model's. The FLOPs of the proposed model were 78.31% less than the FairMOT (ResNet34) model's and 87.63% less than the FairMOT (HRNet18) model's. The model size of the proposed model was 23.96% of the FairMOT (ResNet34) model's and 45.00% of the FairMOT (HRNet18) model's. Compared with Faster RCNN, the model built in this study also showed advantages in the number of parameters, FLOPs and model size. This low complexity proved that the proposed model was more friendly to edge computing equipment. 
Compared with the lightweight backbone network EfficientNet-Lite, the CSPDarkNet53 backbone applied in the proposed model performed better in both fruit detection and model complexity. For fruit load estimation, the improved tracking strategy that integrated the Byte algorithm into FairMOT positively boosted the estimation accuracy. The experimental results on the test videos showed that the AEP (Average Estimating Precision) and FPS (Frames Per Second) of the proposed model reached 91.61% and 14.76 f/s, indicating that the proposed model could maintain high estimation accuracy while its FPS was 2.4 times and 4.7 times that of the comparison models, respectively. The RMSE (Root Mean Square Error) of the proposed model was 4.1713, which was 47.61% less than the FairMOT (ResNet34) model's and 22.94% less than the FairMOT (HRNet18) model's. The coefficient of determination (R2) between the algorithm-measured values and the manually counted values was 0.9858, superior to the other comparison models. The proposed model thus revealed better performance in estimating fruit load and lower model complexity than its comparatives. Conclusions The experimental results proved the validity of the proposed model for fruit load estimation on edge computing equipment. This research could provide technical references for the automatic monitoring and analysis of orchard productivity. Future research will continue to enrich the data resources, further improve the model's performance, and explore more efficient methods to serve more fruit tree varieties.
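The two-stage association idea borrowed from ByteTrack can be sketched as follows: existing tracks are matched to high-score detections first, and only the tracks still unmatched are offered the low-score detections. This is a simplified, IoU-only illustration; the paper's tracker also uses ReID feature similarity and motion prediction, and all thresholds and boxes below are invented:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def associate(tracks, detections, score_thr=0.5, iou_thr=0.3):
    """Byte-style matching: high-score detections first, then low-score ones."""
    high = [d for d in detections if d["score"] >= score_thr]
    low = [d for d in detections if d["score"] < score_thr]
    matches, unmatched = {}, list(tracks)
    for pool in (high, low):  # stage 1: high-score; stage 2: low-score
        for det in pool:
            if not unmatched:
                break
            best = max(unmatched, key=lambda t: iou(t["box"], det["box"]))
            if iou(best["box"], det["box"]) >= iou_thr:
                matches[best["id"]] = det
                unmatched.remove(best)
    return matches

def count_fruit(track_durations, min_frames=5):
    """Count IDs tracked for more than min_frames frames, as in the paper."""
    return sum(1 for n in track_durations.values() if n > min_frames)

tracks = [{"id": 1, "box": (0, 0, 10, 10)}, {"id": 2, "box": (20, 20, 30, 30)}]
detections = [{"box": (1, 1, 11, 11), "score": 0.9},    # confident detection
              {"box": (21, 21, 31, 31), "score": 0.3}]  # occluded, low score
print(associate(tracks, detections))      # both tracks keep their fruit
print(count_fruit({1: 30, 2: 12, 3: 4}))  # 2
```

Keeping the low-score pool lets an occluded fruit (like the second detection above) extend its existing trajectory instead of being discarded, which is what improves the count in dense canopies.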

  • Topic--Smart Farming of Field Crops
    LIU Xiaohang, ZHANG Zhao, LIU Jiaying, ZHANG Man, LI Han, FLORES Paulo, HAN Xiongzhe
    Smart Agriculture. 2022, 4(4): 49-60. https://doi.org/10.12133/j.smartag.SA202207004

    Machine vision has been increasingly used for agricultural sensing tasks, and deep learning-based detection of in-field corn kernels can improve detection accuracy. In order to obtain the number of lost corn kernels quickly and accurately after the corn harvest, and to evaluate combine harvester performance on grain loss, a method of directly counting corn kernels in the field with deep learning was developed and evaluated. Firstly, an RGB camera was used to collect images with different backgrounds and illuminations, and the datasets were generated. Secondly, different target detection networks for kernel recognition were constructed, including Mask R-CNN, EfficientDet-D5, YOLOv5-L and YOLOX-L, and the 420 effective images collected were used to train, validate and test each model; the numbers of images in the training, validation and test sets were 200, 40 and 180, respectively. Finally, the counting performances of the models were evaluated and compared according to the recognition results on the test set. The experimental results showed that among the four models, YOLOv5-L had the best overall performance and could reliably identify corn kernels under different scenes and light conditions. The average precision (AP) of the model on the test set was 78.3%, and the size of the model was 89.3 MB. The correct rates of kernel count detection in the four scenes of non-occlusion, surface mid-level occlusion, surface severe occlusion and aggregation were 98.2%, 95.5%, 76.1% and 83.3%, respectively, and the F1 values were 94.7%, 93.8%, 82.8% and 87.0%, respectively. The overall detection correct rate and F1 value on the test set were 90.7% and 91.1%, respectively. The frame rate was 55.55 f/s, and the detection and counting performance were better than those of the Mask R-CNN, EfficientDet-D5 and YOLOX-L networks. The detection accuracy was about 5% higher than that of the second-best model, Mask R-CNN. 
With good precision, high throughput, and proven generalization, YOLOv5-L can realize real-time monitoring of corn harvest loss in practical operation.
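The per-scene correct rates and F1 values reported above follow the usual detection bookkeeping: precision and recall from true positives, false positives and false negatives, then their harmonic mean, with an overall score obtained by pooling counts across scenes. A small sketch (the counts below are invented, not the paper's data):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def micro_f1(scenes):
    """Pool (tp, fp, fn) counts across scenes for an overall F1."""
    tp = sum(s[0] for s in scenes)
    fp = sum(s[1] for s in scenes)
    fn = sum(s[2] for s in scenes)
    return detection_metrics(tp, fp, fn)[2]

# hypothetical per-scene (tp, fp, fn) counts for four occlusion scenes
scenes = [(196, 2, 4), (191, 5, 9), (152, 18, 48), (166, 16, 34)]
for s in scenes:
    print(detection_metrics(*s))
print(round(micro_f1(scenes), 3))
```

Micro-averaging (summing counts before dividing) weights each kernel equally, so heavily occluded scenes with many kernels pull the overall F1 down more than a simple average of per-scene F1 values would.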

  • Information Processing and Decision Making
    LI Zhijun, YANG Shenghui, SHI Deshuai, LIU Xingxing, ZHENG Yongjun
    Smart Agriculture. 2021, 3(2): 100-114. https://doi.org/10.12133/j.smartag.2021.3.2.202105-SA005

    Yield estimation of fruit trees is one of the important tasks in orchard management. In order to improve the accuracy of in-situ yield estimation of apple trees in the orchard, a method for the yield estimation of a single apple tree was proposed, comprising an improved YOLOv5 fruit detection network and a yield fitting network. In-situ images of unbagged apples at different periods were acquired using an unmanned aerial vehicle and a Raspberry Pi camera, forming an image sample dataset. To address the lack of attention preference and the parameter redundancy in feature extraction, the YOLOv5 network was improved in two ways: 1) replacing standard convolutions with depthwise separable convolutions, and 2) adding an attention mechanism module, so that the computation cost was decreased. Based on these improvements, the quantity of fruit and the total area of the apple bounding boxes were obtained as outputs. These results were then used as the inputs of the yield fitting network, with actual yields as the outputs, to train the yield fitting network. The final fruit tree yield estimation model was obtained by combining the improved YOLOv5 network and the yield fitting network. Yield estimation experiments showed that the improved YOLOv5 fruit detection algorithm could improve recognition accuracy and the degree of lightweighting. Compared with the previous algorithm, the detection speed of the algorithm proposed in this research was increased by up to 15.37%, while the mean average precision (mAP) was raised to 96.79%. Test results on different datasets showed that lighting conditions, coloring period and the presence of white cloth in the background had a certain impact on the accuracy of the algorithm. In addition, the yield fitting network performed well in predicting the yield of apple trees; the coefficients of determination on the training set and test set were 0.7967 and 0.7982, respectively. 
The prediction accuracy across different yield samples was generally stable. Meanwhile, with and without white cloth in the background, the relative error of the fruit tree yield measurement model was within 7% and 13%, respectively. The apple tree yield estimation method based on the improved lightweight YOLOv5 had good accuracy and effectiveness, could achieve yield estimation of apples in the natural environment, and would provide a technical reference for intelligent agricultural equipment in the modern orchard environment.
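The paper fits yield with a small neural network that takes the detected fruit count and total bounding-box area as inputs. As an illustrative baseline with the same interface, here is an ordinary least-squares fit of yield on those two features via the normal equations; the solver, feature names and all data values are invented for the sketch, not taken from the paper:

```python
def solve(a_mat, b_vec):
    """Solve a small linear system by Gauss-Jordan elimination with pivoting."""
    n = len(a_mat)
    m = [row[:] + [b_vec[i]] for i, row in enumerate(a_mat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_yield(counts, areas, yields):
    """Least-squares coefficients (intercept, per-fruit, per-unit-area)."""
    rows = [[1.0, c, a] for c, a in zip(counts, areas)]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * y for r, y in zip(rows, yields)) for i in range(3)]
    return solve(ata, atb)  # normal equations: (X^T X) b = X^T y

# synthetic trees generated from yield = 2 + 0.5*count + 0.001*area
counts = [50, 80, 120, 60, 100]
areas = [1000, 1700, 2600, 1150, 1900]
yields = [2 + 0.5 * c + 0.001 * a for c, a in zip(counts, areas)]
b0, b1, b2 = fit_yield(counts, areas, yields)
print(round(b0, 3), round(b1, 3), round(b2, 5))  # recovers 2, 0.5, 0.001
```

On noise-free synthetic data the linear fit is exact; the paper's R² of about 0.8 on real trees reflects occlusion and sizing noise that a richer fitting network is meant to absorb.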

  • Topic--Machine Vision and Agricultural Intelligent Perception
    LIU Yongbo, GAO Wenbo, HE Peng, TANG Jiangyun, HU Liang
    Smart Agriculture. 2023, 5(2): 13-22. https://doi.org/10.12133/j.smartag.SA202304009

    Objective Aiming at the problems of low accuracy and incomplete coverage in traditional image recognition of apple phenological periods in the natural environment, an improved ResNet50 model was proposed for phenological period recognition of apples. Methods With images of eight phenological periods of Red Fuji apples in the Sichuan plateau area as the research objects and three sets of spherical cameras installed in the apple orchard as acquisition equipment, an original dataset of 9800 apple phenological period images was obtained and labeled by fruit tree experts. Because the duration of each phenological period differs, there were certain imbalances in the quantities collected. In order to avoid a decrease in model accuracy due to quantity imbalance, the dataset was augmented by random cropping, random rotation, horizontal flipping and brightness adjustment, expanding the original dataset to 32,000 images. It was divided into a training set (25,600 images), validation set (3200 images) and test set (3200 images) at a ratio of 8:1:1. Based on the ResNet50 model, the SE (Squeeze and Excitation Network) channel attention mechanism and the Adam optimizer were integrated. SE channel attention was introduced at the end of each residual module of the baseline model to improve the model's ability to extract features from plateau apple tree images. To achieve fast convergence, the Adam optimizer was combined with a cosine annealing learning rate decay, and an ImageNet pre-trained model was selected to realize intelligent recognition of plateau Red Fuji apple phenological periods in the natural environment. An "Intelligent Monitoring and Production Management Platform for Fruit Tree Growth Periods" was developed using the apple phenology identification model. 
In order to reduce the probability of model misjudgment, improve recognition accuracy, and ensure precise control of the apple orchard by the platform, the three sets of cameras deployed in the orchard were set to capture along motion trajectories, and images were collected three times a day (morning, midday and evening), for a total of 27 images per day. The model computed the recognition results for the 27 images and took the category recognized most often as the output, correcting the recognition rate and improving the reliability of the platform. Results and Discussions Experiments were carried out on the 32,000 apple tree images. The results showed that when the initial learning rate of the Adam optimizer was set to 0.0001, the test accuracy of the model approached its optimum and the loss curve converged fastest. With the initial learning rate set to 0.0001 and the number of iteration rounds set to 30, 50 and 70, the best validation set accuracies obtained by the model were 0.9354, 0.9635 and 0.9528, respectively. Therefore, the improved ResNet50 model used a learning rate of 0.0001 and 50 iteration rounds as the training parameters of the Adam optimizer. Ablation experiments showed that the validation and test set accuracies increased by 0.8% and 2.99%, respectively, for the ResNet50 model with the SE attention mechanism added, and by 2.19% and 1.42%, respectively, with the Adam optimizer added; with both added, they increased by 2.33% and 3.65%, respectively. The final validation set accuracy was 96.35%, the test set accuracy was 91.94%, and the average detection time was 2.19 ms. Compared with the AlexNet, VGG16, ResNet18, ResNet34 and ResNet101 models, the improved ResNet50 model improved the best validation set accuracy by 9.63%, 5.07%, 5.81%, 4.55% and 0.96%, respectively. 
The accuracy on the test set increased by 12.31%, 6.88%, 8.53%, 8.67% and 5.58%, respectively. The confusion matrix results showed that the overall recognition rate of the improved ResNet50 model for apple tree phenological period images was more than 90%. Accuracy was lowest for the bud stage and the dormancy stage, which had a high probability of being misjudged as each other, with test accuracies of 89.50% and 87.44%, respectively. There were also a few misjudgments during the young fruit, fruit enlargement and fruit coloring stages, owing to the similarity in characteristics between adjacent stages. The external characteristics of the Red Fuji apple tree are most distinctive during the flowering and fruit ripening stages, and the model had the highest recognition rates for these stages, with test accuracies of 97.50% and 97.49%, respectively. Conclusions The improved ResNet50 can effectively identify apple phenology, and the research results can provide a reference for the identification of orchard phenological periods. After integration into the intelligent monitoring and production management platform for fruit tree growth periods, intelligent management and control of apple orchards can be realized.
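The platform's daily correction step described above is a plain majority vote over the day's 27 per-image predictions. A minimal sketch (the stage labels and vote counts are illustrative):

```python
from collections import Counter

def daily_stage(predictions):
    """Return the phenological stage predicted most often during the day."""
    return Counter(predictions).most_common(1)[0][0]

# e.g. 27 per-image predictions accumulated over one day
day = ["flowering"] * 20 + ["bud"] * 5 + ["young fruit"] * 2
print(daily_stage(day))  # flowering
```

Aggregating many captures this way suppresses occasional per-image misjudgments (e.g. a bud/dormancy confusion in poor light), since a single wrong frame cannot outvote the rest of the day.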

  • Special Issue--Key Technologies and Equipment for Smart Orchard
    FENG Han, ZHANG Hao, WANG Zi, JIANG Shijie, LIU Weihong, ZHOU Linghui, WANG Yaxiong, KANG Feng, LIU Xingxing, ZHENG Yongjun
    Smart Agriculture. 2022, 4(3): 12-23. https://doi.org/10.12133/j.smartag.SA202207002

    To solve the problems of the low level of digitalization in orchard management and the relatively limited construction methods available, a three-dimensional virtual orchard construction method based on laser point clouds was proposed in this research. First, a hand-held 3D point cloud acquisition device (3D-BOX) combined with the lidar odometry and mapping (LOAM) SLAM algorithm was used to acquire the orchard point cloud dataset. Then the outliers and noise points were removed with a statistical filtering algorithm based on K-neighbor distance statistics: a distance threshold model was established, and when a point's mean neighbor distance exceeded the threshold, the point was marked as an outlier and separated from the point cloud dataset, achieving discrete point filtering. A VoxelGrid filter was used for downsampling; the cloth simulation filtering (CSF) algorithm was used to calculate the distance between cloth grid points and the corresponding laser points, and ground points were distinguished from non-ground points by a distance threshold. Combined with the density-based spatial clustering of applications with noise (DBSCAN) algorithm, ground removal and cluster segmentation of the orchard were realized. Finally, the Unity3D engine was used to build a virtual orchard roaming scene, and the real-time GPS data of the operating equipment were converted from the WGS-84 coordinate system to the Gauss projection plane coordinate system through the Gauss forward projection. The real-time trajectory of the equipment was displayed through a LineRenderer component, realizing the visual display of the motion and operation trajectories of the working machine. 
In order to verify the effectiveness of the virtual orchard construction method, tests were carried out in a begonia fruit orchard and a mango orchard. The results showed that the proposed point cloud processing method achieved cluster segmentation accuracies of 95.3% and 98.2% for begonia fruit trees and mango trees, respectively. Compared with the actual row spacing and plant spacing of the fruit trees in the mango orchard, the average inter-row error of the virtual mango orchard was about 3.5%, and the average inter-plant error was about 6.6%. Comparison of the virtual orchard constructed in Unity3D with the actual orchard showed that the proposed method can effectively reproduce the actual three-dimensional situation of the orchard with a good visualization effect, providing a technical solution for the digital modeling and management of orchards.
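The K-neighbor statistical filter described above can be sketched as follows: compute each point's mean distance to its k nearest neighbors, then discard points whose mean distance exceeds the global mean plus s standard deviations. Brute-force neighbor search is used here for clarity (production pipelines such as PCL's StatisticalOutlierRemoval use a KD-tree), and the point cloud and parameters are illustrative:

```python
import math

def statistical_filter(points, k=3, s=1.0):
    """Remove points whose mean k-NN distance exceeds mean + s * std."""
    mean_knn = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        mean_knn.append(sum(dists[:k]) / k)  # mean distance to k nearest
    mu = sum(mean_knn) / len(mean_knn)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_knn) / len(mean_knn))
    threshold = mu + s * sigma  # the distance threshold model
    return [p for p, d in zip(points, mean_knn) if d <= threshold]

# five clustered points plus one far-away noise return
cloud = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1), (50, 50, 50)]
print(statistical_filter(cloud))  # the (50, 50, 50) outlier is dropped
```

The assumption behind the threshold is that mean neighbor distances in a lidar scan are roughly Gaussian, so points several deviations out are isolated returns (dust, multi-path) rather than canopy or trunk surface.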

  • Topic--Smart Supply Chain of Agricultural Products
    WANG Xiang, ZOU Jingui, LI You, SUN Yun, ZHANG Xiaoshuan
    Smart Agriculture. 2023, 5(1): 1-21. https://doi.org/10.12133/j.smartag.SA202301007

    Global energy supplies are increasingly tight, and global temperatures are gradually rising. Energy efficiency assessment and carbon emission accounting can provide theoretical tools and practical support for formulating energy conservation and emission reduction strategies for the food cold chain, and are also a prerequisite for its sustainable development. In this paper, the relationship and differences between energy consumption and carbon emissions in the general food cold chain are first described, and the principles, advantages and disadvantages of three energy consumption conversion standards (solar emergy, standard coal and equivalent electricity) are discussed. The possibilities of applying these three conversion standards to the energy consumption analysis and energy efficiency evaluation of the food cold chain are then explored. Next, for a batch of fresh agricultural products, the energy consumption of six links of the food cold chain, including the first transportation, the manufacturer, the second transportation, the distribution center, the third transportation, and the retailer, is systematically and comprehensively analyzed at the product level, and the comprehensive energy consumption level of the food cold chain is obtained. On this basis, ten energy efficiency indicators covering five aspects (macro energy efficiency, micro energy efficiency, energy economy, environmental energy efficiency and comprehensive energy efficiency) are proposed, and an energy efficiency evaluation index system for the food cold chain is constructed. Other energy efficiency evaluation indicators and methods are also summarized. In addition, the carbon emission conversion standard for the food cold chain, namely carbon dioxide equivalent, is introduced, the boundary of carbon emission accounting is determined, and the carbon emission factors of China's electricity are mainly discussed. 
Furthermore, the origins, principles, advantages and disadvantages of the emission factor method, the life cycle assessment method, the input-output analysis method and the hybrid life cycle assessment method are reviewed, along with the basic process of the life cycle assessment method in calculating the food cold chain carbon footprint. In order to improve the energy efficiency of the food cold chain and reduce the carbon emissions of each of its links, energy conservation and emission reduction methods are proposed from five aspects: refrigerants, distribution paths, energy, phase-change cool storage technology and digital twin technology. Finally, the outlook for energy efficiency assessment and carbon emission accounting of the food cold chain is briefly discussed, in order to provide a reference for promoting the sustainable development of China's food cold chain.
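At its simplest, the emission factor method reviewed above multiplies each cold-chain link's activity data (here, electricity use) by a CO2-equivalent factor and sums over the six links. A minimal sketch; the grid factor and per-link energy figures below are placeholders, not values from the paper:

```python
# Illustrative grid emission factor, kg CO2e per kWh (assumed, not the paper's value)
GRID_FACTOR = 0.58

def chain_emissions(energy_kwh_by_link, factor=GRID_FACTOR):
    """Per-link and total CO2-equivalent emissions (kg) for the cold chain."""
    per_link = {link: kwh * factor for link, kwh in energy_kwh_by_link.items()}
    return per_link, sum(per_link.values())

# hypothetical electricity use of the six cold-chain links, kWh per batch
links = {
    "first transportation": 120.0,
    "manufacturer": 300.0,
    "second transportation": 90.0,
    "distribution center": 450.0,
    "third transportation": 80.0,
    "retailer": 260.0,
}
per_link, total = chain_emissions(links)
print(round(total, 1))  # 1300 kWh x 0.58 = 754.0 kg CO2e
```

In practice each link may draw on different energy carriers (diesel for transport, grid electricity for storage), so a per-carrier factor table replaces the single grid factor used in this sketch.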

  • Overview Article
    GUO Dafang, DU Yuefeng, WU Xiuheng, HOU Siyu, LI Xiaoyu, ZHANG Yan'an, CHEN Du
    Smart Agriculture. 2023, 5(2): 149-160. https://doi.org/10.12133/j.smartag.SA202305007

    Significance Agricultural machinery serves as the fundamental support for implementing advanced agricultural production concepts. The key challenge for the future development of smart agriculture lies in how to enhance the design, manufacturing, operation, and maintenance of these machines to fully leverage their capabilities. To address this, the concept of the digital twin has emerged as an innovative approach that integrates various information technologies and facilitates the integration of virtual and real-world interactions. By providing a deeper understanding of agricultural machinery and its operational processes, the digital twin offers solutions to the complexity encountered throughout the entire lifecycle, from design to recycling. Consequently, it contributes to an all-encompassing enhancement of the quality of agricultural machinery operations, enabling them to better meet the demands of agricultural production. Nevertheless, despite its significant potential, the adoption of the digital twin for agricultural machinery is still at an early stage, lacking the necessary theoretical guidance and methodological frameworks to inform its practical implementation. Progress Drawing upon the successful experiences of the author's team in the digital twin for agricultural machinery, this paper presents an overview of the research progress made in digital twin. It covers three main areas: The digital twin in a general sense, the digital twin in agriculture, and the digital twin for agricultural machinery. The digital twin is conceptualized as an abstract notion that combines model-based system engineering and cyber-physical systems, facilitating the integration of virtual and real-world environments. This paper elucidates the relevant concepts and implications of digital twin in the context of agricultural machinery. 
It points out that the digital twin for agricultural machinery aims to leverage advanced information technology to create virtual models that accurately describe agricultural machinery and its operational processes. These virtual models act as a carrier, driven by data, to facilitate interaction and integration between physical agricultural machinery and their digital counterparts, consequently yielding enhanced value. Additionally, it proposes a comprehensive framework comprising five key components: Physical entities, virtual models, data and connectivity, system services, and business applications. Each component's functions, operational mechanism, and organizational structure are elucidated. The development of the digital twin for agricultural machinery is still in its conceptual phase, and it will require substantial time and effort to gradually enhance its capabilities. In order to advance further research and application of the digital twin in this domain, this paper integrates relevant theories and practical experiences to propose an implementation plan for the digital twin for agricultural machinery. The macroscopic development process encompasses three stages: Theoretical exploration, practical application, and summarization. The specific implementation process entails four key steps: Intelligent upgrading of agricultural machinery, establishment of information exchange channels, construction of virtual models, and development of digital twin business applications. The implementation of the digital twin for agricultural machinery comprises four stages: Pre-research, planning, implementation, and evaluation. The digital twin serves as a crucial link and bridge between agricultural machinery and smart agriculture. 
It not only facilitates the design and manufacturing of agricultural machinery, aligning them with the realities of agricultural production and supporting the advancement of advanced manufacturing capabilities, but also enhances the operation, maintenance, and management of agricultural production to better meet practical requirements. This, in turn, expedites the practical implementation of smart agriculture. To fully showcase the value of the digital twin for agricultural machinery, this paper addresses the existing challenges in the design, manufacturing, operation, and management of agricultural machinery. It expounds the methods by which the digital twin can address these challenges and provides a technical roadmap for empowering the design, manufacturing, operation, and management of agricultural machinery through the use of the digital twin. In tackling the critical issue of leveraging the digital twin to enhance the operational quality of agricultural machinery, this paper presents two research cases focusing on high-powered tractors and large combine harvesters. These cases validate the feasibility of the digital twin in improving the quality of plowing operations for high-powered tractors and the quality of grain harvesting for large combine harvesters. Conclusions and Prospects This paper serves as a reference for the development of research on digital twin for agricultural machinery, laying a theoretical foundation for empowering smart agriculture and intelligent equipment with the digital twin. The digital twin provides a new approach for the transformation and upgrade of agricultural machinery, offering a new path for enhancing the level of agricultural mechanization and presenting new ideas for realizing smart agriculture. However, existing digital twin for agricultural machinery is still in its early stages, and there are a series of issues that need to be explored. 
It is necessary to involve more professionals from relevant fields to advance the research in this area.