Most Downloaded
  • Special Issue--Agricultural Information Perception and Models
    GUO Wang, YANG Yusen, WU Huarui, ZHU Huaji, MIAO Yisheng, GU Jingqiu
    Smart Agriculture. 2024, 6(2): 1-13. https://doi.org/10.12133/j.smartag.SA202403015

    [Significance] Big Models, or Foundation Models, offer a new paradigm for smart agriculture. These models, built on the Transformer architecture, incorporate numerous parameters and have undergone extensive training, often showing excellent performance and adaptability, making them effective in addressing agricultural issues where data is limited. Integrating big models in agriculture promises to pave the way for a more comprehensive form of agricultural intelligence, capable of processing diverse inputs, making informed decisions, and potentially overseeing entire farming systems autonomously. [Progress] The fundamental concepts and core technologies of big models are initially elaborated from five aspects: the generation and core principles of the Transformer architecture, the scaling laws of extending big models, large-scale self-supervised learning, the general capabilities and adaptation of big models, and the emergent capabilities of big models. Subsequently, the possible application scenarios of big models in the agricultural field are analyzed in detail, and the development status of big models is described based on three types of models: large language models (LLMs), large vision models (LVMs), and large multi-modal models (LMMs). The progress of applying big models in agriculture is discussed, and the achievements are presented. [Conclusions and Prospects] The challenges and key tasks of applying big model technology in agriculture are analyzed. Firstly, the current datasets used for agricultural big models are somewhat limited, and the process of constructing these datasets can be both expensive and potentially problematic in terms of copyright issues. There is a call for creating more extensive, more openly accessible datasets to facilitate future advancements. Secondly, the complexity of big models, due to their extensive parameter counts, poses significant challenges in terms of training and deployment. However, there is optimism that future methodological improvements will streamline these processes by optimizing memory and computational efficiency, thereby enhancing the performance of big models in agriculture. Thirdly, these advanced models demonstrate strong proficiency in analyzing image and text data, suggesting potential future applications in integrating real-time data from IoT devices and the Internet to make informed decisions, manage multi-modal data, and potentially operate machinery within autonomous agricultural systems. Finally, the dissemination and implementation of these big models in the public agricultural sphere are deemed crucial. The public availability of these models is expected to refine their capabilities through user feedback and alleviate the workload on humans by providing sophisticated and accurate agricultural advice, which could revolutionize agricultural practices.
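    As a point of reference for the scaling laws mentioned above (a generic form widely reported in the deep learning literature, not taken from this article), test loss typically falls as a power law in model parameters N, dataset size D, and training compute C:

```latex
L(N) \approx \left(\tfrac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\tfrac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\tfrac{C_c}{C}\right)^{\alpha_C}
```

    where the constants N_c, D_c, C_c and the exponents are empirical fits for a given model family.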

  • ZHAO Ruixue, YANG Chenxue, ZHENG Jianhua, LI Jiao, WANG Jian
    Smart Agriculture. 2022, 4(4): 105-125. https://doi.org/10.12133/j.smartag.SA202207009

    The wide application of advanced information technologies such as big data, the Internet of Things and artificial intelligence in agriculture has promoted the modernization of agriculture in rural areas and the development of smart agriculture. This trend has also boosted the demand for technology and knowledge from a large number of agricultural business entities. Faced with problems such as scattered knowledge, lagging knowledge updates, inadequate agricultural information services and a prominent contradiction between the supply of and demand for knowledge, agricultural knowledge service has become an important engine for the transformation, upgrading and high-quality development of agriculture. To better facilitate agricultural modernization in China, the research and application perspectives of agricultural knowledge services were summarized and analyzed. According to the whole life cycle of agricultural data and based on the whole agricultural industry chain, a systematic framework for the construction of agricultural intelligent knowledge service systems oriented toward the requirements of agricultural business entities was proposed. Three necessary layers of techniques were designed, ranging from AIoT-based agricultural situation perception to big data aggregation and governance, from agricultural knowledge organization to computation and mining based on knowledge graphs, and then to multi-scenario agricultural intelligent knowledge service. A wide range of key technologies was summarized with comprehensive discussion of their applications in agricultural intelligent knowledge service, including aerial-ground integrated Artificial Intelligence & Internet-of-Things (AIoT) full-dimensional agricultural condition perception, multi-source heterogeneous agricultural big data aggregation and governance, knowledge modeling, knowledge extraction, knowledge fusion, knowledge reasoning, cross-media retrieval, intelligent question answering, personalized recommendation, and decision support. At the end, the future development trends and countermeasures were discussed from the aspects of agricultural data acquisition, model construction, knowledge organization, intelligent knowledge service technology and application promotion. It can be concluded that agricultural intelligent knowledge service is the key to resolving the contradiction between the supply of and demand for agricultural knowledge services, can support the advance from agricultural cross-media data analytics to knowledge reasoning, and can promote the upgrading of agricultural knowledge services to be more personalized, more precise and more intelligent. Agricultural knowledge service is also an important support for making agricultural science and technology more self-reliant and modernized, and facilitates their substantial development and upgrading in a more effective manner.

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    ZHAO Chunjiang, FAN Beibei, LI Jin, FENG Qingchun
    Smart Agriculture. 2023, 5(4): 1-15. https://doi.org/10.12133/j.smartag.SA202312030

    [Significance] Autonomous and intelligent agricultural machinery, characterized by green intelligence, energy efficiency, and reduced emissions, as well as high intelligence and man-machine collaboration, will serve as the driving force behind global agricultural technology advancements and the transformation of production methods in the context of smart agriculture development. Agricultural robots, which utilize intelligent control and information technology, have the unique advantage of replacing manual labor. They occupy the strategic commanding heights and competitive focus of global agricultural equipment and are also one of the key development directions for accelerating the construction of China's agricultural power. World agricultural powers and China have incorporated the research, development, manufacturing, and promotion of agricultural robots into their national strategies, respectively strengthening agricultural robot policy and planning based on their own agricultural development characteristics, thus driving the agricultural robot industry into a stable growth period. [Progress] This paper firstly delves into the concept and defining features of agricultural robots, alongside an exploration of the global agricultural robot development policy and strategic planning blueprint. Furthermore, it sheds light on the growth and development of the global agricultural robotics industry. It then proceeds to analyze the industrial backdrop, cutting-edge advancements, developmental challenges, and crucial technology aspects of three representative agricultural robots: farmland robots, orchard picking robots, and indoor vegetable production robots. Finally, it summarizes the disparity between Chinese agricultural robots and their foreign counterparts in terms of advanced technologies. (1) An agricultural robot is a multi-degree-of-freedom autonomous operating equipment that possesses accurate perception, autonomous decision-making, intelligent control, and automatic execution capabilities specifically designed for agricultural environments. When combined with artificial intelligence, big data, cloud computing, and the Internet of Things, agricultural robots form an agricultural robot application system. This system has relatively mature applications in key processes such as field planting, fertilization, pest control, yield estimation, inspection and harvesting; grafting, pruning, inspection, harvesting and transportation; and feeding, inspection, disinfection, and milking in livestock and poultry breeding. Globally, agricultural robots, represented by plant protection robots, have entered the industrial application phase and are gradually realizing commercialization with vast market potential. (2) Compared to traditional agricultural machinery and equipment, agricultural robots possess advantages in performing hazardous tasks, executing batch repetitive work, managing complex field operations, and livestock breeding. In contrast to industrial robots, agricultural robots face technical challenges in three aspects: firstly, the complexity and unstructured nature of the operating environment; secondly, the flexibility, mobility, and commoditization of the operation object; and thirdly, the high level of technology and investment required. (3) Given the increasing demand for unmanned and less-manned operations in farmland production, China's agricultural robot research, development, and application have started late and progressed slowly.
The existing agricultural operation equipment still has a significant gap from achieving precision operation, digital perception, intelligent management, and intelligent decision-making. The comprehensive performance of domestic products lags behind foreign advanced counterparts, indicating that there is still a long way to go for industrial development and application. Firstly, the current agricultural robots predominantly utilize single actuators and operate as single machines, with the development of multi-arm cooperative robots just emerging. Most of these robots primarily engage in rigid operations, exhibiting limited flexibility, adaptability, and functionality. Secondly, the perception of multi-source environments in agricultural settings, as well as the autonomous operation of agricultural robot equipment, relies heavily on human input. Thirdly, the progress of new teaching methods and technologies for human-computer natural interaction is rather slow. Lastly, the development of operational infrastructure is insufficient, resulting in a relatively low degree of "mechanization". [Conclusions and Prospects] The paper anticipates the opportunities that arise from the rapid growth of the agricultural robotics industry in response to the escalating global shortage of agricultural labor. It outlines the emerging trends in agricultural robot technology, including autonomous navigation, self-learning, real-time monitoring, and operation control. In the future, the path planning and navigation information perception of agricultural robot autonomy are expected to become more refined. Furthermore, improvements in autonomous learning and cross-scenario operation performance will be achieved. The development of real-time operation monitoring of agricultural robots through digital twinning will also progress. Additionally, cloud-based management and control of agricultural robots for comprehensive operations will experience significant growth. Steady advancements will be made in the innovation and integration of agricultural machinery and techniques.

  • Topic--Intelligent Agricultural Sensor Technology
    WANG Rujing
    Smart Agriculture. 2024, 6(1): 1-17. https://doi.org/10.12133/j.smartag.SA202401017

    [Significance] Agricultural sensors are a key technology for developing modern agriculture. An agricultural sensor is a detection device that senses a physical signal related to the agricultural environment, plants or animals and converts it into an electrical signal. Agricultural sensors can be applied to monitor crops and livestock in different agricultural environments, including weather, water, atmosphere and soil. They are also an important driving force to promote the iterative upgrading of agricultural technology and change agricultural production methods. [Progress] The different agricultural sensors are categorized, the cutting-edge research trends of agricultural sensors are analyzed, and the current research status of agricultural sensors in different application scenarios is summarized. Moreover, a deep analysis and discussion of four major categories is conducted, which include agricultural environment sensors, animal and plant life information sensors, agricultural product quality and safety sensors, and agricultural machinery sensors. The research and development process, as well as the universality and limitations of the application of the four types of agricultural sensors, are summarized. Agricultural environment sensors are mainly used for real-time monitoring of key parameters in agricultural production environments, such as the quality of water, gas, and soil. Soil sensors provide data support for precision irrigation, rational fertilization, and soil management by monitoring indicators such as soil humidity, pH, temperature, nutrients, microorganisms, pests and diseases, heavy metals and agricultural pollution. Monitoring of dissolved oxygen, pH, nitrate content, and organophosphorus pesticides in irrigation and aquaculture water through water sensors ensures the rational use of water resources and water quality safety. Gas sensors monitor atmospheric CO2, NH3, C2H2 and CH4 concentrations and other information, providing appropriate environmental conditions for the growth of crops in greenhouses. Animal life information sensors can obtain an animal's growth, movement, and physiological and biochemical status, including movement trajectory, food intake, heart rate, body temperature, blood pressure, blood glucose, etc. Plant life information sensors monitor the plant's health and growth, such as volatile organic compounds of the leaves, surface temperature and humidity, phytohormones, and other parameters. In particular, flexible wearable plant sensors provide a new way to measure plant physiological characteristics accurately and to monitor the water status and physiological activities of plants non-destructively and continuously. Agricultural product quality and safety sensors are mainly used to detect various indicators of agricultural products, such as temperature and humidity, freshness, nutrients, and potentially hazardous substances (e.g., bacteria, pesticide residues, and heavy metals). Agricultural machinery sensors enable real-time monitoring and control of agricultural machinery for cultivation, planting, management and harvesting, automated operation of agricultural machinery, and accurate application of pesticide and fertilizer. [Conclusions and Prospects] Regarding the challenges and prospects of agricultural sensors, the core bottlenecks of the large-scale application of agricultural sensors at the present stage are analyzed in detail.
These include the low cost, specialization, high stability, and adaptive intelligence of agricultural sensors. Furthermore, the concept of "ubiquitous sensing in agriculture" is proposed, which provides ideas and references for the research and development of agricultural sensor technology.

  • Topic--Crop Growth and Its Environmental Monitoring
    SHAO Mingyue, ZHANG Jianhua, FENG Quan, CHAI Xiujuan, ZHANG Ning, ZHANG Wenrong
    Smart Agriculture. 2022, 4(1): 29-46. https://doi.org/10.12133/j.smartag.SA202202005

    Accurate detection and recognition of plant diseases is the key technology for early diagnosis and intelligent monitoring of plant diseases, and is the core of accurate control and information management of plant diseases and insect pests. Deep learning can overcome the disadvantages of traditional diagnosis methods, greatly improve the accuracy of disease detection and recognition, and has attracted much attention from researchers. This paper collected the main public plant disease image datasets from around the world and briefly introduced the basic information of each dataset and its website, which is convenient for downloading and use. Then, the application of deep learning in plant disease detection and recognition in recent years was systematically reviewed. Plant disease target detection is the premise of accurate classification and recognition of plant diseases and evaluation of disease hazard levels. It is also the key to accurately locating the plant disease area and guiding the spray device of plant protection equipment to spray on target. Plant disease recognition refers to the processing, analysis and understanding of disease images to identify different kinds of disease objects, which is the main basis for the timely and effective prevention and control of plant diseases. The research progress in early disease detection and recognition algorithms based on deep learning was expounded, and the advantages and existing problems of various algorithms were described. It can be seen from this review that detection and recognition algorithms based on deep learning are superior to traditional detection and recognition algorithms in all aspects. Based on the investigation of research results, it was pointed out that illumination, occlusion, complex backgrounds, different disorders with similar symptoms, changes in disease symptoms over different periods, and the overlapping coexistence of multiple diseases are the main challenges for the detection and recognition of plant diseases. At the same time, the establishment of large-scale and more complex datasets that meet specific research needs is also a difficulty that needs to be faced together. Furthermore, the combination of better-performing neural networks, large-scale datasets and an agricultural theoretical basis is pointed out as a major trend of future development. It is also pointed out that multimodal data can be used to identify early plant diseases, which is also one of the future development directions.

  • Topic--Smart Farming of Field Crops
    YIN Yanxin, MENG Zhijun, ZHAO Chunjiang, WANG Hao, WEN Changkai, CHEN Jingping, LI Liwei, DU Jingwei, WANG Pei, AN Xiaofei, SHANG Yehua, ZHANG Anqi, YAN Bingxin, WU Guangwei
    Smart Agriculture. 2022, 4(4): 1-25. https://doi.org/10.12133/j.smartag.SA202212005

    As an important way of constructing smart agriculture, unmanned farms are highly attractive nowadays and have been explored in many countries. Generally, data, knowledge and intelligent equipment are the core elements of unmanned farms. An unmanned farm deeply integrates modern information technologies such as the Internet of Things, big data, cloud computing, edge computing, and artificial intelligence with agriculture to realize agricultural production information perception, quantitative decision-making, intelligent control, precise input and personalized services. In this paper, the overall technical architecture of unmanned farms is introduced, and five kinds of key technologies of unmanned farms are proposed, which include information perception and intelligent decision-making technology, precision control technology and key equipment for agriculture, automatic driving technology in agriculture, unmanned-operation agricultural equipment, and management and remote control systems for unmanned farms. Furthermore, the latest research progress of the above technologies worldwide is analyzed. Based on this, critical scientific and technological issues to be solved for developing unmanned farms in China are proposed, including unstructured environment perception of farmland, automatic driving of agricultural machinery in complex and changeable farmland environments, autonomous task assignment and path planning of unmanned agricultural machinery, and autonomous cooperative operation control of unmanned agricultural machinery groups. These technologies are highly challenging and will be the most competitive commanding heights in the future. The maize unmanned farm constructed in the city of Gongzhuling, Jilin province, China, is also introduced in detail. The unmanned farm is mainly composed of an information perception system, unmanned agricultural equipment, and a management and control system. The perception system obtains and provides the farmland information, maize growth, and pest and disease information of the farm. The unmanned agricultural machinery can complete the whole mechanized maize production process under unattended conditions. The management and control system includes the basic GIS, a remote control subsystem, a precision operation management subsystem and a working display system for the unmanned agricultural machinery. The application of the maize unmanned farm has improved maize production efficiency (harvesting efficiency has been increased by 3-4 times) and reduced labor. Finally, the important role of unmanned farm technology in solving problems such as labor shortages is summarized, the opportunities and challenges of developing unmanned farms in China are analyzed, and the strategic goals and ideas for developing unmanned farms in China are put forward.

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    WANG Ting, WANG Na, CUI Yunpeng, LIU Juan
    Smart Agriculture. 2023, 5(4): 105-116. https://doi.org/10.12133/j.smartag.SA202311005

    [Objective] The rural revitalization strategy presents novel requisites for the extension of agricultural technology. However, the conventional method encounters the issue of a contradiction between supply and demand. Therefore, there is a need for further innovation in the supply form of agricultural knowledge. Recent advancements in artificial intelligence technologies, such as deep learning and large-scale neural networks, particularly the advent of large language models (LLMs), render anthropomorphic and intelligent agricultural technology extension feasible. With the agricultural technology knowledge service of fruit and vegetable as the demand orientation, the intelligent agricultural technology question answering system was built in this research based on LLM, providing agricultural technology extension services, including guidance on new agricultural knowledge and question-and-answer sessions. This facilitates farmers in accessing high-quality agricultural knowledge at their convenience. [Methods] Through an analysis of the demands of strawberry farmers, the agricultural technology knowledge related to strawberry cultivation was categorized into six themes: basic production knowledge, variety screening, interplanting knowledge, pest diagnosis and control, disease diagnosis and control, and drug damage diagnosis and control. Considering the current situation of agricultural technology, two primary tasks were formulated: named entity recognition and question answering related to agricultural knowledge. A training corpus comprising entity type annotations and question-answer pairs was constructed using a combination of automatic machine annotation and manual annotation, ensuring a small yet high-quality sample. After comparing four existing Large Language Models (Baichuan2-13B-Chat, ChatGLM2-6B, Llama 2-13B-Chat, and ChatGPT), the model exhibiting the best performance was chosen as the base LLM to develop the intelligent question-answering system for agricultural technology knowledge. Utilizing a high-quality corpus, pre-training of a Large Language Model and the fine-tuning method, a deep neural network with semantic analysis, context association, and content generation capabilities was trained. This model served as a Large Language Model for named entity recognition and question answering of agricultural knowledge, adaptable to various downstream tasks. For the task of named entity recognition, the fine-tuning method of Lora was employed, fine-tuning only essential parameters to expedite model training and enhance performance. Regarding the question-answering task, the Prompt-tuning method was used to fine-tune the Large Language Model, where adjustments were made based on the generated content of the model, achieving iterative optimization. Model performance optimization was conducted from two perspectives: data and model design. In terms of data, redundant or unclear data was manually removed from the labeled corpus. In terms of the model, a strategy based on retrieval enhancement generation technology was employed to deepen the understanding of agricultural knowledge in the Large Language Model and maintain real-time synchronization of knowledge, alleviating the problem of LLM hallucination. Drawing upon the constructed Large Language Model, an intelligent question-answering system was developed for agricultural technology knowledge. 
This system demonstrates the capability to generate high-precision and unambiguous answers, while also supporting the functionalities of multi-round question answering and retrieval of information sources. [Results and Discussions] Accuracy rate and recall rate served as indicators to evaluate the named entity recognition task performance of the Large Language Models. The results indicated that the performance of Large Language Models was closely related to factors such as model structure, the scale of the labeled corpus, and the number of entity types. After fine-tuning, the ChatGLM Large Language Model demonstrated the highest accuracy and recall rate. With the same number of entity types, a higher number of annotated corpora resulted in a higher accuracy rate. Fine-tuning had different effects on different models, and overall, it improved the average accuracy of all models under different knowledge topics, with ChatGLM, Llama, and Baichuan values all surpassing 85%. The average recall rate saw limited increase, and in some cases, it was even lower than the values before fine-tuning. Assessing the question-answering task of Large Language Models using hallucination rate and semantic similarity as indicators, data optimization and retrieval enhancement generation techniques effectively reduced the hallucination rate by 10% to 40% and improved semantic similarity by more than 15%. These optimizations significantly enhanced the generated content of the models in terms of correctness, logic, and comprehensiveness. [Conclusion] The pre-trained Large Language Model of ChatGLM exhibited superior performance in named entity recognition and question answering tasks in the agricultural field. Fine-tuning pre-trained Large Language Models for downstream tasks and optimizing based on retrieval enhancement generation technology mitigated the problem of language hallucination, markedly improving model performance. Large Language Model technology has the potential to innovate agricultural technology knowledge service modes and optimize agricultural knowledge extension. This can effectively reduce the time cost for farmers to obtain high-quality and effective knowledge, guiding more farmers towards agricultural technology innovation and transformation. However, due to challenges such as unstable performance, further research is needed to explore optimization methods for Large Language Models and their application in specific scenarios.
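    To make the LoRA fine-tuning strategy described above concrete, the sketch below shows a minimal low-rank adapter wrapped around a frozen linear layer. It is illustrative only and assumes PyTorch; the rank, scaling, and layer sizes are arbitrary examples, not the settings used by the authors.

```python
# Minimal sketch of a LoRA-style adapter (illustrative; not the authors' code).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():            # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank correction; only A and B receive gradients.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Example: adapt one (hypothetical) projection layer of a transformer block.
layer = LoRALinear(nn.Linear(4096, 4096))
out = layer(torch.randn(2, 16, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, "trainable params:", trainable)
```

    Only the small A and B matrices are updated during fine-tuning, which is what allows essential parameters to be adapted quickly, as the abstract describes.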

  • CHEN Feng, SUN Chuanheng, XING Bin, LUO Na, LIU Haishen
    Smart Agriculture. 2022, 4(4): 126-137. https://doi.org/10.12133/j.smartag.SA202206006

    As an emerging concept, the metaverse has attracted extensive attention from industry, academia and the scientific research community. The combination of agriculture and the metaverse will greatly promote the development of agricultural informatization and agricultural intelligence, and provide new impetus for the transformation and upgrading of agricultural intelligence. Firstly, to expound the feasibility of applying the metaverse in agriculture, the basic principles and key technologies of the agricultural metaverse were briefly described, such as blockchain, non-fungible tokens, 5G/6G, artificial intelligence, the Internet of Things, 3D reconstruction, cloud computing, edge computing, augmented reality, virtual reality, mixed reality, brain-computer interfaces, digital twins and parallel systems. Then, three main agricultural application scenarios of the metaverse were discussed: virtual farms, agricultural teaching systems and agricultural product traceability systems. Among them, the virtual farm is one of the most important applications of the agricultural metaverse. The agricultural metaverse can assist crop growth and livestock and poultry raising in agricultural production, provide a three-dimensional, visualized virtual leisure-agriculture experience, and provide virtual characters for agricultural product promotion. The agricultural metaverse teaching system can provide virtual agricultural teaching similar to natural scenes, saving training time and improving training efficiency by means of fragmented learning. Traceability of agricultural products can let consumers know the production information of agricultural products and feel more confident about enterprises and products. Finally, the challenges in the development of the agricultural metaverse were summarized in terms of the difficulty of establishing agricultural metaverse systems, the weak communication foundation of the agricultural metaverse, immature agricultural metaverse hardware equipment and uncertain agricultural metaverse operation, and the future development directions of the agricultural metaverse were prospected. In the future, research on metaverse applications, agricultural growth mechanisms, and low-power wireless communication technologies is suggested to be carried out. A rural broadband network covering households can be established, and the industrialized application of the agricultural metaverse can be promoted. This review can provide theoretical references and technical support for the development of the metaverse in the field of agriculture.

  • Overview Article
    ZHAO Chunjiang
    Smart Agriculture. 2023, 5(2): 126-148. https://doi.org/10.12133/j.smartag.SA202306002

    [Significance] The agricultural environment is dynamic and variable, and numerous factors affect the growth of animals and plants through complex interactions. There is a close but complex correlation between factors such as air temperature, air humidity, illumination, soil temperature, soil humidity, diseases, pests and weeds. Thus, farmers need agricultural knowledge to solve production problems. With the rapid development of internet technology, a vast amount of agricultural information and knowledge is available on the internet. However, due to the lack of effective organization, the utilization rate of this agricultural information and knowledge is relatively low. How to analyze and generate production knowledge or decision cases from scattered and disordered information is a big challenge all over the world. Agricultural knowledge intelligent service technology is a good way to resolve agricultural data problems such as low rank, low correlation, and poor interpretability of reasoning. It is also the key technology for improving the comprehensive prediction and decision-making analysis capabilities of the entire agricultural production process. It can eliminate the information barriers between agricultural knowledge, farmers, and consumers, and is conducive to improving the production and quality of agricultural products and providing effective information services. [Progress] The definition, scope, and technical application of agricultural knowledge intelligence services are introduced in this paper. The demand for agricultural knowledge services is analyzed in combination with artificial intelligence technology. Agricultural knowledge intelligent service technologies such as perceptual recognition, knowledge coupling, and inference decision-making are discussed. The characteristics of agricultural knowledge services are analyzed and summarized from multiple perspectives such as industrial demand, industrial upgrading, and technological development. The development history of agricultural knowledge services is introduced. Current problems and future trends in the agricultural knowledge services field are also discussed. Key issues in agricultural knowledge intelligence services, such as animal and plant state recognition in complex and uncertain environments, multimodal data association knowledge extraction, and collaborative reasoning in multiple agricultural application scenarios, are discussed. Combining practical experience and theoretical research, an intelligent agricultural situation analysis service framework that covers the entire life cycle of agricultural animals and plants and combines knowledge cases is proposed. An agricultural situation perception framework has been built based on a satellite-air-ground multi-channel perception platform and real-time Internet data. Multimodal knowledge coupling, multimodal knowledge graph construction and natural language processing technology have been used to converge and manage agricultural big data. Through knowledge reasoning and decision-making, agricultural information mining and early warning have been carried out to provide users with multi-scenario agricultural knowledge services. Intelligent agricultural knowledge services have been designed, such as multimodal fusion feature extraction, unified cross-domain knowledge representation and graph construction, and reasoning and decision-making under complex and uncertain agricultural conditions.
An agricultural knowledge intelligent service platform composed of a cloud computing support environment, a big data processing framework, knowledge organization and management tools, and knowledge service application scenarios has been built. Rapid assembly and configuration management of agricultural knowledge services can be provided by the platform, and the application threshold of artificial intelligence technology in agricultural knowledge services can be reduced. In this way, the problems of agricultural users can be solved. A novel method for agricultural situation analysis and production decision-making is proposed, and a full-chain intelligent knowledge application scenario is constructed. The scenarios cover planning, management, harvesting and other operations across the whole pre-production, production and post-production agricultural process. [Conclusions and Prospects] The technology trends of agricultural knowledge intelligent services are summarized in five aspects. (1) Multi-scale sparse feature discovery and spatiotemporal situation recognition of agricultural conditions. The application effects of small-sample transfer discovery and target tracking in uncertain agricultural information acquisition and situation recognition are discussed. (2) The construction and self-evolution of agricultural cross-media knowledge graphs, which use a robust knowledge base and knowledge graph to analyze and aggregate the high-level semantic information of cross-media content. (3) In response to the difficulties in tracing the origin of complex agricultural conditions and the low accuracy of comprehensive prediction, multi-granularity correlation and multi-modal collaborative inversion prediction of complex agricultural conditions are discussed. (4) Large language models (LLMs) in the agricultural field based on generative artificial intelligence. ChatGPT and other LLMs can accurately mine agricultural data and automatically generate questions through large-scale computing power, solving the problems of user intention understanding and precise service under conditions of dispersed agricultural data, multi-source heterogeneity, high noise, low information density, and strong uncertainty. In addition, agricultural LLMs can also significantly improve the accuracy of intelligent algorithms such as identification, prediction and decision-making by combining strong algorithms with big data and super computing power. These could bring important opportunities for large-scale intelligent agricultural production. (5) The construction of knowledge intelligence service platforms and a new paradigm of knowledge service, integrating and innovating a self-evolving agricultural knowledge intelligence service cloud platform. Agricultural knowledge intelligent service technology will enhance the control ability over the whole agricultural production chain. It plays a technical support role in achieving the transformation of agricultural production from "observing the sky and working" to "knowing the sky and working". The "knowledge empowerment" model of intelligent agricultural application provides strong support for improving the quality and efficiency of the agricultural industry, as well as for its modernization, transformation and upgrading.

  • Topic--Smart Farming of Field Crops
    LI Li, LI Minzan, LIU Gang, ZHANG Man, WANG Maohua
    Smart Agriculture. 2022, 4(4): 26-34. https://doi.org/10.12133/j.smartag.SA202207003

    Smart farming for field crops is a significant part of smart agriculture. Aimed at crop production, it integrates modern sensing technology, new-generation mobile communication technology, computer and network technology, the Internet of Things (IoT), big data, cloud computing, blockchain, and expert wisdom and knowledge. Through the deeply integrated application of biotechnology, engineering technology, information technology and management technology, it realizes accurate perception, quantitative decision-making, intelligent operation and intelligent service in the process of crop production, so as to significantly improve land output, resource utilization and labor productivity, and comprehensively improve the quality and efficiency of agricultural products. In order to promote the sustainable development of smart farming, through analysis of the development process of smart agriculture, the overall objectives and key tasks of the development strategy were clarified, and the key technologies in smart farming were condensed. Analysis of and breakthroughs in the key technologies of smart farming are crucial to the industrial development strategy. The main problems of smart farming for field crops include: the lack of in-situ accurate measurement technology and special agricultural sensors, the large difference between crop models and actual production, the instantaneity, reliability, universality and stability of information transmission technologies, and the combination of intelligent agricultural equipment with agronomy. Based on the above analysis, five primary technologies and eighteen corresponding secondary technologies of smart farming for field crops were proposed, including: sensing technologies for environmental and biological information in the field, agricultural IoT and mobile internet technologies, cloud computing and cloud service technologies in agriculture, big data analysis and decision-making technology in agriculture, and intelligent agricultural machinery and agricultural robots for field production. According to the characteristics of China's cropping regions, the corresponding smart farming development strategies were proposed: a large-scale smart production development zone in the Northeast and Inner Mongolia regions, a smart urban agriculture and water-saving agriculture development zone in the region of Beijing, Tianjin, Hebei and Shandong, a large-scale smart cotton farming and smart dry-farming green development comprehensive test zone in the Northwest arid region, a smart rice farming comprehensive development test zone in the Southeast coastal region, and a characteristic smart farming development zone in the Southwest mountain region. Finally, suggestions were given from the perspectives of infrastructure, key technologies, talent and policy.

  • Special Issue--Key Technologies and Equipment for Smart Orchard
    LIU Limin, HE Xiongkui, LIU Weihong, LIU Ziyan, HAN Hu, LI Yangfan
    Smart Agriculture. 2022, 4(3): 63-74. https://doi.org/10.12133/j.smartag.SA202207008

    To realize the autonomous navigation and automatic target spraying of intelligent plant protection machinery in orchards, an autonomous navigation and automatic target spraying robot for orchards was developed in this study. Firstly, a single 3D light detection and ranging (LiDAR) sensor was used to collect information on fruit trees and other objects around the robot. The region of interest (ROI) was determined using information on the fruit trees in the orchard (plant spacing, plant height, and row spacing), as well as the fundamental LiDAR parameters. Additionally, it was ensured that the LiDAR detected the canopy information of a whole fruit tree within the ROI. Secondly, the point clouds within the ROI were processed in two dimensions to obtain the center-of-mass coordinates of the fruit trees, which were taken as the locations of the fruit trees. Based on the locations of the fruit trees, the fruit tree row lines were obtained by the random sample consensus (RANSAC) algorithm. The center line (navigation line) of the fruit tree row within the ROI was obtained from the fruit tree row lines. The robot was controlled to drive along the center line by the angular velocity signal transmitted from the computer. Next, the ATRS's body speed and position were determined by encoders and the inertial measurement unit (IMU), and the collected zoned canopy information of the fruit trees was corrected by the IMU. The presence or absence of zoned fruit tree canopy was judged by the designed logical algorithm. Finally, the nozzles were controlled to spray or not according to the presence or absence of the corresponding zoned canopy. The following conclusions were obtained. The maximum lateral deviation of the robot during autonomous navigation was 21.8 cm, and the maximum course deviation angle was 4.02°. Compared with traditional spraying, the automatic target spraying designed in this study reduced pesticide volume, air drift and ground loss by 20.06%, 38.68% and 51.40%, respectively. There was no significant difference between automatic target spraying and traditional spraying in terms of the percentage of air drift. In terms of the percentage of ground loss, automatic target spraying had 43% at the bottom of the test fruit trees and 29% and 28% at the middle of the test fruit trees and the left and right neighboring fruit trees; in traditional spraying, the corresponding percentages were 25%, 38%, and 37%. The robot developed can realize autonomous navigation while ensuring the spraying effect and reducing pesticide volume and loss.
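    As an illustration of the row-line fitting step described above, the sketch below runs a basic RANSAC line fit over 2-D fruit-tree centroid coordinates. It is a generic implementation under assumed thresholds and iteration counts, not the authors' code; in practice a line would be fitted to each tree row and the bisector of the two row lines taken as the navigation line.

```python
# Minimal RANSAC line fit on 2-D tree centroids (illustrative; thresholds assumed).
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.15, seed=0):
    """Fit a line to noisy centroids; returns (point_on_line, unit_direction, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p0, p1 = points[i], points[j]
        d = p1 - p0
        norm = np.linalg.norm(d)
        if norm < 1e-6:
            continue
        d = d / norm
        # Perpendicular distance of every centroid to the candidate line.
        dist = np.abs(np.cross(points - p0, d))
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p0, d)
    return best_model[0], best_model[1], best_inliers

# Example: one row of tree centroids (metres) plus an outlier point.
pts = np.array([[0.0, 0.0], [1.5, 0.05], [3.0, -0.04], [4.5, 0.08], [2.0, 1.2]])
p0, d, mask = ransac_line(pts)
print("direction:", d, "inliers:", mask)
```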

  • Information Processing and Decision Making
    YANG Feng, YAO Xiaotong
    Smart Agriculture. 2024, 6(1): 147-157. https://doi.org/10.12133/j.smartag.SA202309010

    [Objective] To effectively tackle the unique attributes of wheat leaf pests and diseases in their native environment, a high-caliber and efficient pest detection model named YOLOv8-SS (You Only Look Once Version 8-SS) was proposed. This innovative model is engineered to accurately identify pests, thereby providing a solid scientific foundation for their prevention and management strategies. [Methods] A total of 3 639 raw images of wheat leaf pests and diseases, covering 6 different wheat pests and diseases, were collected with mobile phones from various farmlands in the Yuchong County area of Gansu Province at different periods of time. This collection demonstrated the team's proficiency and commitment to advancing agricultural research. The dataset was meticulously constructed using the LabelImg software to accurately label the images with the targeted pest species. To guarantee the model's superior generalization capabilities, the dataset was strategically divided into a training set and a test set in an 8:2 ratio. The dataset includes thorough observations and recordings of the wheat leaf blade's appearance, texture and color, as well as other variables that could influence these characteristics. The compiled dataset proved to be an invaluable asset for both training and validation activities. Leveraging the YOLOv8 algorithm, an enhanced lightweight convolutional neural network, ShuffleNetv2, was selected as the backbone network for feature extraction from images. This was accomplished by integrating a 3×3 Depthwise Convolution (DWConv) kernel, the h-swish activation function, and a Squeeze-and-Excitation Network (SENet) attention mechanism. These enhancements streamlined the model by diminishing the parameter count and computational demands, all while sustaining high detection precision. The deployment of these sophisticated methodologies exemplified the researchers' commitment and passion for innovation. The YOLOv8 model employs the SENet attention mechanism module within both its Backbone and Neck components, significantly reducing computational load while bolstering accuracy. This method exemplifies the model's exceptional performance, distinguishing it from other models in the domain. By integrating a dedicated small target detection layer, the model's capabilities have been augmented, enabling more efficient and precise pest and disease detection. The introduction of a new detection feature map, sized 160×160 pixels, enables the network to concentrate on identifying small-target pests and diseases, thereby enhancing the accuracy of pest and disease recognition. [Results and Discussions] The YOLOv8-SS wheat leaf pest and disease detection model has been significantly improved to accurately detect wheat leaf pests and diseases in their natural environment. By employing the refined ShuffleNet V2 within the DarkNet-53 framework, as opposed to the conventional YOLOv8, under identical experimental settings, the model exhibited a 4.53% increase in recognition accuracy and a 4.91% improvement in F1-Score, compared to the initial model. Furthermore, the incorporation of a dedicated small target detection layer led to a subsequent rise in accuracy and F1-Score of 2.31% and 2.16%, respectively, despite a minimal upsurge in the number of parameters and computational requirements. The integration of the SENet attention mechanism module into the YOLOv8 model resulted in a detection accuracy rate increase of 1.85% and an F1-Score enhancement of 2.72%.
Furthermore, by replacing the original backbone with the enhanced ShuffleNet V2 and appending a compact object detection sublayer (namely YOLOv8-SS), the resulting model exhibited a heightened recognition accuracy of 89.41% and an F1-Score of 88.12%. The YOLOv8-SS variant substantially outperformed the standard YOLOv8, showing a remarkable enhancement of 10.11% in accuracy and 9.92% in F1-Score. This outcome strikingly illustrates YOLOv8-SS's prowess in balancing speed with precision. Moreover, it converges at a more rapid pace, requiring approximately 40 training epochs to surpass other renowned models such as Faster R-CNN, MobileNetV2, SSD, YOLOv5, YOLOX, and the original YOLOv8 in accuracy. Specifically, YOLOv8-SS boasted an average accuracy 23.01%, 15.13%, 11%, 25.21%, 27.52%, and 10.11% greater than that of the competing models, respectively. In a head-to-head trial involving a public dataset (LWDCD 2020) and a custom-built dataset, the LWDCD 2020 dataset yielded a striking accuracy of 91.30%, outperforming the custom-built dataset by a margin of 1.89% when utilizing the same network architecture, YOLOv8-SS. The AI Challenger 2018-6 and Plant-Village-5 datasets did not perform as robustly, achieving accuracy rates of 86.90% and 86.78%, respectively. The YOLOv8-SS model has shown substantial improvements in both feature extraction and learning capabilities over the original YOLOv8, particularly excelling in natural environments with intricate, unstructured backdrops. [Conclusion] The YOLOv8-SS model is meticulously designed to deliver unmatched recognition accuracy while consuming a minimal amount of storage space. In contrast to conventional detection models, this groundbreaking model exhibits superior detection accuracy and speed, rendering it exceedingly valuable across various applications. This breakthrough serves as an invaluable resource for cutting-edge research on crop pest and disease detection within natural environments featuring complex, unstructured backgrounds. The method is versatile and yields significantly enhanced detection performance, all while maintaining a lean model architecture. This renders it highly appropriate for real-world scenarios involving large-scale crop pest and disease detection.
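    For reference, the sketch below shows a generic Squeeze-and-Excitation (SE) channel-attention block of the kind the abstract describes inserting into the backbone and neck. It assumes PyTorch, and the reduction ratio and feature-map sizes are arbitrary examples rather than the paper's settings.

```python
# Minimal SE channel-attention block (illustrative; not the paper's implementation).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial average
        self.fc = nn.Sequential(                       # excitation: per-channel weights
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                    # reweight feature channels

# Example: reweight a 256-channel feature map such as one feeding a detection head.
feat = torch.randn(1, 256, 40, 40)
print(SEBlock(256)(feat).shape)
```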

  • Overview Article
    GUO Yangyang, DU Shuzeng, QIAO Yongliang, LIANG Dong
    Smart Agriculture. 2023, 5(1): 52-65. https://doi.org/10.12133/j.smartag.SA202205009

    Accurate and efficient monitoring of animal information, timely analysis of animal physiological and physical health conditions, and automatic feeding and farming management combined with intelligent technologies are of great significance for large-scale livestock farming. Deep learning techniques, with automatic feature extraction and powerful image representation capabilities, solve many visual challenges and are well suited to monitoring animal information in complex livestock farming environments. In order to further analyze the research and application of artificial intelligence technology in intelligent animal farming, this paper presents the current state of research on deep learning techniques for target detection and recognition, body condition evaluation and weight estimation, and behavior recognition and quantitative analysis for cattle, sheep and pigs. Among them, target detection and recognition is conducive to the construction of electronic archives of individual animals, on the basis of which the body condition and weight information, behavior information and health status of animals can be related, which is also the trend of intelligent animal farming. At present, intelligent animal farming still faces many problems and challenges, such as multiple perspectives, multiple scales, multiple scenarios and even small sample sizes of certain behaviors in data samples, which greatly increases the detection difficulty and challenges the generalization of intelligent technology applications. In addition, animal breeding and animal habits are a long-term process. How to accurately monitor animal health information in real time and effectively feed it back to the producer is also a technical difficulty. According to the actual feeding and management needs of animal farming, development suggestions for intelligent animal farming are put forward. First, enrich the samples and build multi-perspective datasets, and combine semi-supervised or small-sample learning methods to improve the generalization ability of deep learning models, so as to realize the perception and analysis of the animal's physical environment. Secondly, the unified cooperation and harmonious development of humans, intelligent equipment and breeding animals will improve the breeding efficiency and management level as a whole. Third, the deep integration of big data, deep learning technology and animal farming will greatly promote the development of intelligent animal farming. Last, the interpretability and security of artificial intelligence technology, represented by deep learning models, in the breeding field should be studied. These and other development suggestions can further promote intelligent animal farming. This review of the research and application progress of deep learning in smart livestock farming provides a reference for the modernization and intelligent development of livestock farming.

  • Overview Article
    MAO Kebiao, ZHANG Chenyang, SHI Jiancheng, WANG Xuming, GUO Zhonghua, LI Chunshu, DONG Lixin, WU Menxin, SUN Ruijing, WU Shengli, JI Dabin, JIANG Lingmei, ZHAO Tianjie, QIU Yubao, DU Yongming, XU Tongren
    Smart Agriculture. 2023, 5(2): 161-171. https://doi.org/10.12133/j.smartag.SA202304013

    [Objective] Deep learning is one of the most important technologies in the field of artificial intelligence, and it has sparked a research boom in academic and engineering applications. It also shows strong application potential in the remote sensing retrieval of geophysical parameters. This cross-disciplinary research is just beginning, and most deep learning applications in geosciences are still "black boxes", with most applications lacking physical significance, interpretability, and universality. In order to promote the application of artificial intelligence in geosciences and agriculture and to cultivate interdisciplinary talents, a paradigm theory for geophysical parameter retrieval based on artificial intelligence coupling physical and statistical methods was proposed in this research. [Methods] The construction of the retrieval paradigm theory for geophysical parameters mainly included three parts. Firstly, physical logic deduction was performed based on the physical energy balance equation, and the inversion equation system was constructed theoretically, which eliminated the ill-conditioned problem of insufficient equations. Then, a fuzzy statistical method was constructed based on the physical deduction. Representative solutions of the physical method were obtained through physical model simulation, and other representative solutions serving as the training and testing database for deep learning were obtained using multi-source data. Finally, deep learning achieved the goal of coupling physical and statistical methods through the use of representative solutions from physical and statistical methods as training and testing databases. Deep learning training and testing were aimed at fitting the curves of solutions from the physical and statistical methods, thereby making deep learning physically meaningful and interpretable. [Results and Discussions] The conditions for determining the formation of a universal and physically interpretable paradigm were: (1) There must be a causal relationship between input and output variables (parameters); (2) In theory, a closed system of equations (with unknowns less than or equal to the number of equations) can be constructed between input and output variables (parameters), which means that the output parameters can be uniquely determined by the input parameters. If there is a strong causal relationship between input parameters (variables) and output parameters (variables), deep learning can be directly used for inversion. If there is a weak correlation between the input and output parameters, prior knowledge needs to be added to improve the inversion accuracy of the output parameters. MODIS thermal infrared remote sensing data were used to retrieve land surface temperature, emissivity, near-surface air temperature and atmospheric water vapor content as a case to demonstrate the theory. When there was a strong correlation between the output parameters (LST and LSE) and the input variables (BTi), using deep learning coupled with physical and statistical methods could obtain very high accuracy. When there was a weak correlation between the output parameter (NSAT) and the input variable (BTi), adding prior knowledge (LST and LSE) could improve the inversion accuracy and stability of the output parameter (NSAT). When there was a partial strong correlation (WVC and BTi), adding prior knowledge (LST and LSE) could slightly improve accuracy and stability, but the error of the prior knowledge (LST and LSE) may bring uncertainty, so prior knowledge could also be omitted.
According to the inversion analysis of geophysical parameters in the MODIS thermal infrared bands, bands 27, 28, 29 and 31 were more suitable for inversion of atmospheric water vapor content, and bands 28, 29, 31 and 32 were more suitable for inversion of surface temperature, emissivity and near-surface air temperature. If one wants to achieve the highest accuracy for all four parameters, it is recommended to design the instrument with the five most suitable bands (27, 28, 29, 31, 32). If only four thermal infrared bands are designed, bands 27, 28, 31, and 32 should be given priority. The land surface temperature, emissivity, near-surface air temperature and atmospheric water vapor content retrieved from MODIS data using this theory were not only more accurate than those of traditional methods, but the approach could also reduce the number of bands required, reduce satellite load and extend satellite life. In particular, this theoretical method overcomes the problems of the official MODIS algorithm (the day/night algorithm), whose inversion product accuracy is unstable under sudden changes in surface type and long-term gaps in continuous data. The analysis results showed that the proposed theory and conditions are feasible, and the accuracy and applicability were better than those of traditional methods. The theory and judgment conditions of geophysical parameter retrieval paradigms are also applicable to target recognition tasks such as remote sensing classification, but they need to be interpreted from a different perspective. For example, the feature information extracted by different convolutional kernels must be able to uniquely determine the target. When the conditions of the paradigm theory are satisfied, the inversion of geophysical parameters based on artificial intelligence is the best choice. [Conclusions] The geophysical parameter retrieval paradigm theory based on artificial intelligence proposed in this study can overcome the shortcomings of traditional retrieval methods, especially in remote sensing parameter retrieval, as it simplifies the inversion process and improves the inversion accuracy. At the same time, it can optimize the design of satellite sensors. The proposal of this theory is of milestone significance in the history of geophysical parameter retrieval.
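    The sketch below illustrates, in schematic form, the coupling idea described above: a physical model generates representative (input, output) pairs, and a network is trained to reproduce that solution set. The simulate_brightness_temps function is a synthetic stand-in, not a real radiative-transfer model, and all network sizes and value ranges are arbitrary assumptions.

```python
# Schematic sketch: train a network on physics-generated representative solutions
# (illustrative only; the "simulation" below is synthetic, not a radiative-transfer model).
import numpy as np
import torch
import torch.nn as nn

def simulate_brightness_temps(lst, emissivity):
    # Placeholder for physical-model simulation of band brightness temperatures.
    bands = np.stack([lst * emissivity - 3.0, lst * emissivity - 1.5,
                      lst * emissivity - 0.5, lst * emissivity - 2.2], axis=1)
    return bands + np.random.normal(0, 0.3, bands.shape)

lst = np.random.uniform(270, 320, 5000)            # representative LST solutions (K)
emis = np.random.uniform(0.95, 0.99, 5000)         # representative emissivity solutions
x = torch.tensor(simulate_brightness_temps(lst, emis), dtype=torch.float32)
y = torch.tensor(np.stack([lst, emis], axis=1), dtype=torch.float32)

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(200):                           # fit the physics-generated solution set
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```

    Because the training targets come from a physical equation system rather than arbitrary labels, the fitted network can be interpreted as approximating the physical solution curve, which is the sense in which the paradigm couples physics with statistics.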

  • Topic--Smart Animal Husbandry Key Technologies and Equipment
    MA Weihong, LI Jiawei, WANG Zhiquan, GAO Ronghua, DING Luyu, YU Qinyang, YU Ligen, LAI Chengrong, LI Qifeng
    Smart Agriculture. 2022, 4(2): 99-109. https://doi.org/10.12133/j.smartag.SA202203005

    Focusing on the low level of management, informatization and intelligence in the beef cattle industry in China, a big data platform for commercial beef cattle breeding was proposed in this research, drawing on the experience of internationally advanced beef cattle breeding countries. The functions of the platform include integrating beef cattle germplasm resources, automatic collection of key beef cattle breeding traits, full-service support for the beef cattle breeding process, formation of a big data analysis and decision-making system for beef cattle germplasm resources, and a joint breeding innovation model. Aiming at the backward storage and sharing methods for beef cattle breeding data and incomplete information records in China, an information resource integration platform and an information database for beef cattle germplasm were established. To address the vagueness and subjectivity of the breeding performance evaluation standard, a scientific online evaluation technology for beef cattle breeding traits and a non-contact automatic acquisition and intelligent calculation method were proposed. Considering the lack of scientific and systematic breeding planning and guidance for farmers in China, a full-service support system for beef cattle breeding with nanny-style guidance during the breeding process was developed, and an interactive, progressive decision-making method for beef cattle breeding was proposed to address the lack of accumulated beef cattle germplasm data. Because breeding organizations and farming enterprises were not closely integrated, an innovative regionally integrated breeding model was explored. The concept of the commercial beef cattle breeding big data software platform and its technological and model innovations were also introduced. The technological innovations included deep mining of germplasm resource data and improved breed pedigree management, automatic acquisition and evaluation of non-contact breeding traits, and the fusion of multi-source heterogeneous information to provide intelligent decision support. The future goal is to form a sustainable information solution for China's beef cattle breeding industry and improve its overall level.

  • Special Issue--Monitoring Technology of Crop Information
    GUAN Bolun, ZHANG Liping, ZHU Jingbo, LI Runmei, KONG Juanjuan, WANG Yan, DONG Wei
    Smart Agriculture. 2023, 5(3): 17-34. https://doi.org/10.12133/j.smartag.SA202306012

    [Significance] The scientific dataset of agricultural pests and diseases is the foundation for monitoring and early warning of agricultural pests and diseases. It is of great significance for the development of agricultural pest control and is an important component of developing smart agriculture. With the recognized importance of deep learning technology in the intelligent monitoring of agricultural pests and diseases, the quality of the dataset directly affects the effectiveness of image recognition algorithms, and the construction of high-quality agricultural pest and disease datasets is gradually attracting attention from scholars in this field. In image recognition tasks, the recognition effect depends on the one hand on the improvement strategy of the algorithm and on the other hand on the quality of the dataset: the same recognition algorithm learns different features from datasets of different quality, so its recognition performance also varies. In order to propose dataset evaluation indices for measuring the quality of agricultural pest and disease datasets, this article analyzes existing datasets and, taking the challenges faced in constructing agricultural pest and disease image datasets as the starting point, reviews the construction of agricultural pest and disease datasets. [Progress] Firstly, pest and disease datasets are divided into two categories: private datasets and public datasets. Private datasets are characterized by high annotation quality, high image quality and large numbers of samples across classes, but they are not publicly available. Public datasets cover many categories but tend to have low image quality and poor annotation quality. Secondly, the problems faced in dataset construction are summarized, including imbalanced categories at the dataset level, difficulty in feature extraction at the sample level, and difficulty in determining the appropriate dataset size at the usage level. These problems include imbalanced inter-class and intra-class samples, selection bias, multi-scale targets, dense targets, uneven data distribution, uneven image quality, insufficient dataset size, and limited dataset availability. The main causes of these problems are analyzed from the two key aspects of dataset construction, image acquisition and annotation methods, and improvement strategies and suggestions for algorithms to address the above issues are summarized. The collection devices can be divided into handheld devices, drone platforms, and fixed collection devices. Collection with handheld devices is flexible and convenient, but inefficient and demanding in photography skills. Collection with drone platforms is suitable for contiguous areas, but the detailed features captured are not clear enough. Collection with fixed devices is more efficient, but the shooting scene is often relatively fixed. Image annotation is divided into rectangular annotation and polygonal annotation; rectangular annotation is generally used more frequently in image recognition and detection. Images in which the target is difficult to separate from the background are hard to annotate, and improper annotation can introduce noise or lead to incomplete feature extraction by the algorithm. In response to the problems in these three aspects, evaluation methods for data distribution consistency, dataset size, and image annotation quality are summarized at the end of the article.
[Conclusions and Prospects] Suggestions for future research and development of high-quality agricultural pest and disease image datasets are proposed based on the actual needs of agricultural pest and disease image recognition: (1) Construct agricultural pest and disease datasets in combination with practical usage scenarios. To enable the algorithm to extract richer target features, image data can be collected from multiple perspectives and environments. According to actual needs, data categories can be divided scientifically and reasonably from the perspective of algorithm feature extraction, avoiding unreasonable inter-class and intra-class distances, so as to construct a dataset with task-appropriate classification and balanced feature distribution. (2) Balance the relationship between datasets and algorithms. When improving algorithms, the distribution of categories and features in the dataset and a dataset size that matches the model should be considered to improve algorithm accuracy, robustness, and practicality. Comparative experiments on algorithm improvements should be conducted on the same benchmark dataset to improve pest and disease image recognition algorithms. Future work should investigate the correlation between the scale of agricultural pest and disease image data and algorithm performance, study annotation methods for pest and disease images that are difficult to annotate, integrate recognition algorithms for blurred, dense and occluded targets, and propose evaluation indicators for agricultural pest and disease datasets. (3) Enhance the use value of datasets. Datasets can be used not only for research on image recognition but also for other business needs. The identification, collection, and annotation of target images is a challenging task in the construction of pest and disease datasets. In the process of collecting image data, attention can also be paid to collecting surrounding environmental information and host information, so as to construct multimodal agricultural pest and disease datasets and fully leverage the value of the data. To allow researchers to focus on business innovation, it is necessary to innovate the organizational form of data collection, develop a big data platform for agricultural pests and diseases, explore the correlations between multimodal data, improve the accessibility and convenience of data, and provide efficient services for application implementation and business innovation.
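As a small, hedged illustration of one dataset-quality check discussed above (inter-class balance), the sketch below counts images per class and reports an imbalance ratio. The folder-per-class layout and the `pest_dataset` path are assumptions made for demonstration, not a structure prescribed by the article.

```python
# Hedged sketch: measure inter-class imbalance in an image dataset organized as
# <root>/<class_name>/*.jpg. The layout and path are illustrative assumptions.
from collections import Counter
from pathlib import Path

def class_distribution(root: str) -> Counter:
    """Count .jpg images per class directory under root."""
    counts = Counter()
    root_path = Path(root)
    if not root_path.exists():
        return counts
    for class_dir in root_path.iterdir():
        if class_dir.is_dir():
            counts[class_dir.name] = sum(1 for _ in class_dir.glob("*.jpg"))
    return counts

counts = class_distribution("pest_dataset")   # hypothetical dataset root
if counts and min(counts.values()) > 0:
    ratio = max(counts.values()) / min(counts.values())
    print(counts, f"imbalance ratio = {ratio:.1f}")
```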

  • Special Issue--Artificial Intelligence and Robot Technology for Smart Agriculture
    CHEN Ruiyun, TIAN Wenbin, BAO Haibo, LI Duan, XIE Xinhao, ZHENG Yongjun, TAN Yu
    Smart Agriculture. 2023, 5(4): 16-32. https://doi.org/10.12133/j.smartag.SA202308006

    [Significance] As the research focus of future agricultural machinery, agricultural wheeled robots are developing in the direction of intelligence and multi-functionality. Advanced environmental perception technologies serve as a crucial foundation and key component in promoting the intelligent operation of agricultural wheeled robots. However, in the unstructured and complex environments of on-field agricultural operations, the environmental information obtained through conventional 2D perception technologies is limited. Therefore, 3D environmental perception technologies are highlighted, as they can provide additional information such as depth, thereby directly enhancing the precision and efficiency of unmanned agricultural machinery operations. This paper aims to provide a detailed analysis and summary of 3D environmental perception technologies, investigate the issues in the development of agricultural environmental perception technologies, and clarify the future key development directions of 3D environmental perception technologies for agricultural machinery, especially agricultural wheeled robots. [Progress] Firstly, an overview of the general status of wheeled robots was given, considering their dominant position as carriers of environmental perception technologies. It was concluded that multi-wheel robots, especially four-wheel robots, are more suitable for the agricultural environment due to their favorable adaptability and robustness in various agricultural scenarios. In recent years, multi-wheel agricultural robots have gained widespread adoption and application globally. Further improvement of the universality, operational efficiency, and intelligence of agricultural wheeled robots is determined by the perception and control systems employed. Therefore, agricultural wheeled robots equipped with novel 3D environmental perception technologies can obtain high-dimensional environmental information, which is significant for improving the accuracy of decision-making and control, and enables them to explore effective ways to address the challenges in intelligent environmental perception technology. Secondly, the recent development status of 3D environmental perception technologies in agriculture was briefly reviewed, and the sensing equipment and corresponding key technologies were introduced. For the wheeled robots reported in agriculture, the applied environmental perception technologies, in terms of the primary sensor solutions employed, were divided into three categories: LiDAR, vision sensors, and multi-sensor fusion-based solutions. Multi-line LiDAR showed better performance on many tasks when employing point cloud processing algorithms. Compared with LiDAR, depth cameras such as binocular cameras, TOF cameras, and structured light cameras have been comprehensively investigated for their application in agricultural robots, and depth camera-based perception systems have shown advantages in cost and in providing abundant point cloud information. This study investigated and summarized the latest research on 3D environmental perception technologies employed by wheeled robots in agricultural machinery. In the reported application scenarios of agricultural environmental perception, state-of-the-art 3D environmental perception approaches have mainly focused on obstacle recognition, path recognition, and plant phenotyping.
3D environmental perception technologies have the potential to enhance the ability of agricultural robot systems to understand and adapt to complex, unstructured agricultural environments. Furthermore, they can effectively address several challenges that traditional environmental perception technologies have struggled to overcome, such as partial loss of sensor information, adverse weather conditions, and poor lighting conditions. Current research results indicate that multi-sensor fusion-based 3D environmental perception systems outperform single-sensor-based systems. This superiority arises from combining the advantages of various sensors, which concurrently serves to mitigate their individual shortcomings. [Conclusions and Prospects] The potential of 3D environmental perception technology for agricultural wheeled robots was discussed in light of the evolving demands of smart agriculture. Suggestions were made to improve sensor applicability, develop deep learning-based agricultural environmental perception technology, and explore intelligent, high-speed online multi-sensor fusion strategies. Currently, the sensors employed in agricultural wheeled robots may not fully meet practical requirements, and system cost remains a barrier to the widespread deployment of 3D environmental perception technologies in agriculture. Therefore, there is an urgent need to enhance the agricultural applicability of 3D sensors and reduce production costs. Deep learning methods were highlighted as a powerful tool for processing the information obtained from 3D environmental perception sensors, improving response speed and accuracy; however, the limited datasets in the agriculture field remain a key issue to be addressed. Additionally, multi-sensor fusion has been recognized for its potential to enhance perception performance in complex and changeable environments. As a result, 3D environmental perception technology based on multi-sensor fusion is clearly the future development direction of smart agriculture. To overcome challenges such as slow data processing, processing latency, and limited memory for data storage, it is essential to investigate effective fusion schemes to achieve online multi-source information fusion with greater intelligence and speed.
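As a hedged illustration of one elementary building block behind the LiDAR-camera fusion discussed above, the numpy sketch below projects 3D LiDAR points into a camera image so that depth can be associated with pixels. The intrinsic matrix, extrinsic transform, and point cloud are invented placeholders rather than parameters from any system reviewed here.

```python
# Hedged sketch: project LiDAR points into the image plane (one step of LiDAR-camera
# fusion). The camera intrinsics K, extrinsics T, and points are illustrative only.
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """points_lidar: (N, 3) in the LiDAR frame; returns (N, 2) pixel coords and depths."""
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])  # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]        # transform into the camera frame
    depths = pts_cam[:, 2]                            # depth along the optical axis
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                       # perspective division -> pixel coords
    return uv, depths

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])   # assumed intrinsics
T = np.eye(4)                                                       # assumed extrinsics
pts = np.random.rand(100, 3) * [10, 10, 1] + [0, 0, 5]              # fake points in front of camera
uv, d = project_lidar_to_image(pts, T, K)
```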

  • Special Issue--Agricultural Information Perception and Models
    ZHANG Ronghua, BAI Xue, FAN Jiangchuan
    Smart Agriculture. 2024, 6(2): 49-61. https://doi.org/10.12133/j.smartag.SA202311007

    [Objective] It is of great significance to improve the efficiency and accuracy of crop pest detection in complex natural environments and to reduce the current reliance on manual identification by experts in agricultural production. Targeting the problems of small target size, mimicry with crops, low detection accuracy, and slow inference speed in crop pest detection, a complex-scene crop pest detection algorithm named YOLOv8-Extend was proposed in this research. [Methods] Firstly, GSConv was introduced to enhance the model's receptive field and allow global feature aggregation. This mechanism enables feature aggregation at both the node and global levels simultaneously, obtaining local features from neighboring nodes through neighbor sampling and aggregation operations, thereby enhancing the model's receptive field and semantic understanding ability. Additionally, some convolutions were replaced with lightweight Ghost convolutions, and HorBlock was utilized to capture longer-term feature dependencies; its recursive gated convolution employs gating mechanisms to remember and transmit previous information and capture long-term correlations. Furthermore, Concat was replaced with BiFPN for richer feature fusion: the bidirectional top-down and bottom-up fusion of deep features enhances the transmission of feature information across different network layers. Using the VoVGSCSP module, feature maps of different scales were connected to create longer feature map vectors, increasing model diversity and enhancing small object detection. The convolutional block attention module (CBAM) was introduced to strengthen the features of field pests and reduce the background weights caused by scene complexity. Next, the Wise-IoU dynamic non-monotonic focusing mechanism was implemented to evaluate the quality of anchor boxes using an "outlier" degree instead of IoU. This mechanism also included a gradient gain allocation strategy, which reduced the competitiveness of high-quality anchor boxes and minimized harmful gradients from low-quality examples, allowing WIoU to concentrate on anchor boxes of average quality and improving the network's generalization ability and overall performance. Subsequently, the improved YOLOv8-Extend model was compared with the original YOLOv8 model, YOLOv5, YOLOv8-GSCONV, YOLOv8-BiFPN, and YOLOv8-CBAM to validate the accuracy and precision of model detection. Finally, the model was deployed on edge devices for inference verification to confirm its effectiveness in practical application scenarios. [Results and Discussions] The results indicated that the improved YOLOv8-Extend model achieved notable improvements in the accuracy, recall, mAP@0.5, and mAP@0.5:0.95 evaluation indices, with increases of 2.6%, 3.6%, 2.4% and 7.2%, respectively, showcasing superior detection performance. When YOLOv8-Extend and YOLOv8 were run on the edge computing device Jetson Orin NX 16 GB and accelerated with TensorRT, mAP@0.5 improved by 4.6% and the FPS reached 57.6, meeting real-time detection requirements. The YOLOv8-Extend model demonstrated better adaptability in complex agricultural scenarios and exhibited clear advantages in detecting small pests and pests sharing similar growth environments in practical data collection, with a notable increase of 11.9% in accuracy on challenging data.
Through these algorithmic refinements, the model showed improved capability to extract and focus on features in crop pest detection, addressing issues such as small targets, similar background textures, and difficult feature extraction. [Conclusions] The YOLOv8-Extend model introduced in this study significantly boosts detection accuracy and recognition rates while maintaining high operational efficiency. It is suitable for deployment on edge computing devices to facilitate real-time detection of crop pests, offering technical references and methods for the development of cost-effective, terminal-based automatic pest recognition systems. This research can also serve as a valuable resource for the intelligent detection of other small targets and for optimizing model structures.
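As a hedged illustration of the CBAM attention mechanism mentioned above, the PyTorch sketch below implements the standard channel-then-spatial attention form. The channel count, reduction ratio and kernel size are illustrative defaults, not the configuration used in YOLOv8-Extend.

```python
# Hedged sketch of CBAM: channel attention (shared MLP over pooled descriptors) followed
# by spatial attention (7x7 conv over channel-wise mean and max maps). Sizes are illustrative.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over global average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: convolution over concatenated channel-wise mean and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(1, 64, 40, 40)
out = CBAM(64)(feat)       # output keeps the input shape: (1, 64, 40, 40)
```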

  • Topic--Machine Vision and Agricultural Intelligent Perception
    ZHU Yanjun, DU Wensheng, WANG Chunying, LIU Ping, LI Xiang
    Smart Agriculture. 2023, 5(2): 23-34. https://doi.org/10.12133/j.smartag.SA202304001

    [Objective] Rapid recognition and automatic positioning of table grapes in the natural environment is the prerequisite for automatic picking by a grape-picking robot. [Methods] A rapid recognition and automatic picking-point positioning method based on an improved K-means clustering algorithm and contour analysis was proposed. First, the Euclidean distance was replaced by a weighted gray threshold as the similarity criterion of K-means. The table grape images were then rasterized according to the K value to obtain the initial clustering centers. Next, the average gray value of each cluster and the percentage of its pixels in the total pixels were calculated, and the weighted gray threshold was obtained from the average gray values and percentages of adjacent clusters. Clustering was considered complete once the weighted gray threshold no longer changed, yielding the cluster image of the table grapes. The improved clustering algorithm not only saved clustering time but also allowed the K value to change adaptively. Moreover, the adaptive Otsu algorithm was used to extract grape cluster information, giving the initial binary image of the table grapes. To reduce the interference of redundant noise on recognition accuracy, morphological operations (opening, closing, hole filling and extraction of the maximum connected domain) were used to remove noise, producing an accurate binary image of the table grapes, and the contours of the table grapes were then obtained with the Sobel operator. Because table grape clusters grow perpendicular to the ground under gravity in the natural environment, the extreme points and the center of gravity of a grape cluster were obtained based on contour analysis. The bundle of lines through the extreme point and the center of gravity was taken as the carrier, and the similarity of pixels on both sides of each line was taken as the judgment basis; the line with the lowest similarity value was taken as the grape stem, thereby locating the stem axis of the grape. According to the agronomic picking requirements of table grapes and combined with contour analysis, the region of interest (ROI) containing the picking point was obtained: the intersection of the grape stem and the contour was regarded as the middle point of the bottom edge of the ROI, 0.8 times the distance between the left and right extreme points was taken as the length of the ROI, and 0.25 times the distance between the center of gravity and the intersection of the grape stem and the contour was taken as the height of the ROI. The central point of the ROI was then determined, and the point on the grape stem nearest to the ROI center was taken as the picking point of the table grapes. Finally, the method was verified experimentally on 917 grape images (including Summer Black, Moldova, and Youyong varieties) taken with the rear camera of a MI8 mobile phone at the Jinniu Mountain Base of Shandong Fruit and Vegetable Research Institute. [Results and Discussions] The results showed that the success rate was 90.51% when the error between the located picking points and the optimal points was less than 12 pixels, and the average positioning time was 0.87 s. The method realized fast and accurate localization of table grape picking points.
In addition, according to the two cultivation modes of table grapes (hedgerow planting and trellis planting), a simulation test platform based on the Dense mechanical arm and a single-chip microcomputer was set up in the study, and 50 simulation tests were carried out for each of the four conditions. The success rate of picking-point localization for purple grapes under hedgerow planting was 86.00%, with an average localization time of 0.89 s; for purple grapes under trellis planting it was 92.00%, with an average localization time of 0.67 s; for green grapes under hedgerow planting it was 78.00%, with an average localization time of 0.72 s; and for green grapes under trellis planting it was 80.00%, with an average localization time of 0.71 s. [Conclusions] The experimental results showed that the method proposed in this study can meet the requirements of table grape picking and can provide technical support for the development of grape-picking robots.
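As a hedged illustration of the ROI and picking-point geometry described above, the numpy sketch below applies the 0.8x and 0.25x factors quoted in the abstract to invented pixel coordinates. The stem is represented as a sampled polyline, and the assumption that the ROI extends upward from the stem-contour intersection (image y axis pointing down) is ours, not stated in the paper.

```python
# Hedged sketch of the picking-point geometry: ROI length = 0.8 x extreme-point distance,
# ROI height = 0.25 x (gravity point to stem-contour intersection), picking point = stem
# pixel closest to the ROI center. All coordinates below are invented for illustration.
import numpy as np

def picking_point(left_ext, right_ext, gravity, stem_contour_intersect, stem_pts):
    width = 0.8 * np.linalg.norm(np.asarray(right_ext) - np.asarray(left_ext))
    height = 0.25 * np.linalg.norm(np.asarray(gravity) - np.asarray(stem_contour_intersect))
    bx, by = stem_contour_intersect                 # middle point of the ROI's bottom edge
    roi_center = np.array([bx, by - height / 2])    # ROI assumed to extend upward (y axis down)
    stem_pts = np.asarray(stem_pts, dtype=float)
    nearest = stem_pts[np.argmin(np.linalg.norm(stem_pts - roi_center, axis=1))]
    return nearest, (width, height)

stem = [(200, y) for y in range(50, 120)]           # fake vertical stem pixels above the cluster
pt, roi = picking_point((150, 200), (250, 200), (200, 260), (200, 120), stem)
```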

  • Topic--Machine Vision and Agricultural Intelligent Perception
    LI Yangde, MA Xiaohui, WANG Ji
    Smart Agriculture. 2023, 5(2): 35-44. https://doi.org/10.12133/j.smartag.SA202211007

    [Objective] Pineapple is a common tropical fruit, and its ripeness has an important impact on storage and marketing, so it is particularly important to analyze the maturity of pineapple fruit before picking. Deep learning technology can be an effective means of automatic pineapple maturity recognition. To improve the accuracy and speed of automatic recognition of pineapple maturity, a new network model named MobileNet V3-YOLOv4 was proposed in this study. [Methods] Firstly, a pineapple maturity analysis dataset was constructed. A total of 1580 images were obtained, with 1264 images selected as the training set, 158 as the validation set, and 158 as the test set. The pineapple photos were taken in the natural environment. To ensure the diversity of the dataset and improve the robustness and generalization of the network, the photos were taken under different conditions such as occlusion by branches and leaves, uneven lighting and overlapping shadows, and the locations, weather and growing environments of the collection also differed. Then, according to the maturity index of pineapple, the photos of pineapples with different maturities were annotated, with labels divided into yellow ripeness and green ripeness. The annotated images were used as the dataset and input into the network for training. To address the problems of the traditional YOLOv4 network, such as a large number of parameters, a complex network structure and slow inference speed, a more optimized lightweight MobileNet V3-YOLOv4 network model was proposed. The model uses the bneck structure to replace the ResBlock in the CSPDarknet backbone network of YOLOv4. Meanwhile, to verify the effectiveness of the MobileNet V3-YOLOv4 network, MobileNet V1-YOLOv4 and MobileNet V2-YOLOv4 models were also trained, and five different single-stage and two-stage network models, including R-CNN, YOLOv3, SSD300, RetinaNet and CenterNet, were compared on each evaluation index to analyze the performance advantages of the MobileNet V3-YOLOv4 model. [Results and Discussions] MobileNet V3-YOLOv4 was validated for its effectiveness in pineapple maturity detection through experiments comparing model performance, model classification prediction, and accuracy tests in complex pineapple detection environments. The experimental results showed that, in terms of model performance, the training time of MobileNet V3-YOLOv4 was 11,924 s, with an average training time of 39.75 s per round; the number of parameters was 53.7 MB, a 25.59% reduction in the saturation time compared with YOLOv4, with the parameter count accounting for only 22%. To validate the classification prediction performance of the MobileNet V3-YOLOv4 model, four metrics, Recall, F1 Score, Precision, and average precision (AP), were used to classify and recognize pineapples of different maturities. The experimental results demonstrated that MobileNet V3-YOLOv4 exhibited significantly higher Precision, AP, and F1 Score than the other models. For the semi-ripe stage, compared with YOLOv4, there was a 4.49% increase in AP, a 0.07 improvement in F1 Score, a 1% increase in Recall, and a 3.34% increase in Precision. For the ripe stage, there was a 6.06% increase in AP, a 0.13 improvement in F1 Score, a 16.55% increase in Recall, and a 6.25% increase in Precision.
Because ripe pineapples have distinct color features and are easy to distinguish from the background, the improved network achieved a precision rate of 100.00% for them. Additionally, the mAP and inference speed (frames per second, FPS) of nine algorithms were examined. The results showed that MobileNet V3-YOLOv4 achieved an mAP of 90.92%, which was 5.28% higher than YOLOv4 and 3.67% higher than YOLOv3. The FPS was 80.85 img/s, which was 40.28 img/s higher than YOLOv4 and 8.91 img/s higher than SSD300. The detection results for pineapples of different maturities in complex environments indicated a 100% detection success rate for both the semi-ripe and ripe stages with MobileNet V3-YOLOv4, while YOLOv4, MobileNet V1-YOLOv4, and MobileNet V2-YOLOv4 exhibited varying degrees of missed detections. [Conclusions] Based on the above experimental results, the MobileNet V3-YOLOv4 model proposed in this study not only reduces the training time and the number of parameters, but also improves the accuracy and inference speed of pineapple maturity recognition, so it has promising application prospects in smart orchards. At the same time, the pineapple photo dataset collected in this research can provide valuable data resources for research and applications in related fields.
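As a hedged illustration of the MobileNetV3-style bneck structure mentioned above, the PyTorch sketch below shows a typical inverted-residual block with a depthwise convolution and hard-swish activation. Channel sizes are illustrative, and the squeeze-and-excitation branch of the full MobileNetV3 block is omitted for brevity; this is not the exact configuration used in the paper.

```python
# Hedged sketch of a MobileNetV3-style bneck block: 1x1 expand -> depthwise 3x3 -> 1x1 project,
# with a residual connection when the stride is 1 and channel counts match. Sizes are illustrative.
import torch
import torch.nn as nn

class Bneck(nn.Module):
    def __init__(self, c_in, c_exp, c_out, stride=1):
        super().__init__()
        self.use_res = stride == 1 and c_in == c_out
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_exp, 1, bias=False), nn.BatchNorm2d(c_exp), nn.Hardswish(),
            nn.Conv2d(c_exp, c_exp, 3, stride, 1, groups=c_exp, bias=False),  # depthwise conv
            nn.BatchNorm2d(c_exp), nn.Hardswish(),
            nn.Conv2d(c_exp, c_out, 1, bias=False), nn.BatchNorm2d(c_out),    # pointwise projection
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_res else y

x = torch.randn(1, 16, 56, 56)
y = Bneck(16, 64, 16)(x)    # residual path active: output shape (1, 16, 56, 56)
```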

  • Special Issue--Key Technologies and Equipment for Smart Orchard
    HAN Leng, HE Xiongkui, WANG Changling, LIU Yajia, SONG Jianli, QI Peng, LIU Limin, LI Tian, ZHENG Yi, LIN Guihai, ZHOU Zhan, HUANG Kang, WANG Zhong, ZHA Hainie, ZHANG Guoshan, ZHOU Guotao, MA Yong, FU Hao, NIE Hongyuan, ZENG Aijun, ZHANG Wei
    Smart Agriculture. 2022, 4(3): 1-11. https://doi.org/10.12133/j.smartag.SA200201014

    Traditional orchard production faces problems such as labor shortages due to an aging workforce, difficulties in managing agricultural equipment and production materials, and low production efficiency, which can be expected to be solved by building a smart orchard that integrates technologies such as the Internet of Things (IoT), big data and intelligent equipment. In this study, based on the objectives of full mechanization and intelligent management, a smart orchard was built in Pinggu district, an important peach and pear producing area of Beijing. The orchard covers an area of more than 30 hm2 in Xiying village, Yukou town. In the orchard, more than 10 kinds of information acquisition sensors for pests, diseases, water, fertilizers and pesticides are applied, and 28 kinds of agricultural machinery with intelligent technical support are equipped. The key technologies used include an intelligent information acquisition system, an integrated water and fertilizer management system and an intelligent pest management system. The intelligent operation equipment includes an unmanned lawn mower, an intelligent anti-freeze machine, a trenching and fertilizing machine, an automatic driving crawler, an intelligent profiling variable-rate sprayer, a six-rotor branch-to-target drone, a multi-functional picking platform and a trimming and pruning machine, etc. At the same time, an intelligent management platform has been built in the smart orchard. The comparison results showed that smart orchard production can reduce labor costs by more than 50%, pesticide dosage by 30%~40%, fertilizer dosage by 25%~35% and irrigation water consumption by 60%~70%, while comprehensive economic benefits increased by 32.5%. The popularization and application of smart orchards will further improve China's fruit production level and facilitate the development of smart agriculture in China.

  • Information Processing and Decision Making
    XU Yulin, KANG Mengzhen, WANG Xiujuan, HUA Jing, WANG Haoyu, SHEN Zhen
    Smart Agriculture. 2022, 4(4): 156-163. https://doi.org/10.12133/j.smartag.SA20220712

    Corn and soybean are upland grain crops grown in the same season, and competition for land between them is prominent in China, so it is necessary to explore the price relationship between corn and soybean. In addition, agricultural futures have a price discovery function compared with the spot market. Therefore, the analysis and prediction of corn and soybean futures prices are of great significance for management departments to adjust the planting structure and for farmers to select crop varieties. In this study, the correlation between corn and soybean futures prices was analyzed: a correlation test showed that corn and soybean futures prices are strongly correlated, and a Granger causality test showed that the soybean futures price is a Granger cause of the corn futures price. Then, the corn and soybean futures prices were predicted using a long short-term memory (LSTM) model. To optimize prediction performance, an attention mechanism was introduced (Attention-LSTM) to assign weights to the outputs of the LSTM model at different time steps. Specifically, the LSTM processed the input sequence of futures prices, the attention layer assigned different weights to its outputs, and the model then output the prediction through a linear layer. The experimental results showed that the Attention-LSTM model significantly improved the prediction performance for both corn and soybean futures prices compared with the autoregressive integrated moving average (ARIMA) model, the support vector regression (SVR) model, and LSTM. For example, compared with a single LSTM, the mean absolute error (MAE) was improved by 3.8% and 3.3%, the root mean square error (RMSE) by 0.6% and 1.8%, and the mean absolute percentage error (MAPE) by 4.8% and 2.9%, respectively. Finally, the corn futures price was forecast using the historical corn and soybean futures prices together. Specifically, two LSTM models processed the input sequences of corn and soybean futures prices respectively, two trained parameters performed a weighted summation of the two LSTM outputs, and the prediction was output through a linear layer. The experimental results showed that MAE was improved by 6.9%, RMSE by 1.1% and MAPE by 5.3% compared with the LSTM model using only corn futures prices, which further verifies the strong correlation between corn and soybean futures prices. In conclusion, the results verify that the Attention-LSTM model can improve the performance of soybean and corn futures price forecasting compared with general prediction models, and that combining the price data of related agricultural futures can improve the prediction performance of agricultural futures forecasting models.
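As a hedged illustration of the Attention-LSTM structure described above (an LSTM over the price sequence, attention weights over its outputs at different time steps, then a linear output layer), the PyTorch sketch below shows one plausible minimal form. The hidden size, sequence length and the single learned scoring layer are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of Attention-LSTM: LSTM encodes the price sequence, softmax attention
# weights its per-step outputs, and a linear layer produces the forecast. Sizes are illustrative.
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)       # one attention score per time step
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.lstm(x)                      # (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)  # attention weights over the time axis
        context = (w * h).sum(dim=1)             # weighted sum of LSTM outputs
        return self.out(context)                 # next-step price forecast

model = AttentionLSTM()
pred = model(torch.randn(8, 30, 1))              # e.g. 8 sequences of 30 daily prices
```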

  • Overview Article
    GUO Dafang, DU Yuefeng, WU Xiuheng, HOU Siyu, LI Xiaoyu, ZHANG Yan'an, CHEN Du
    Smart Agriculture. 2023, 5(2): 149-160. https://doi.org/10.12133/j.smartag.SA202305007

    [Significance] Agricultural machinery serves as the fundamental support for implementing advanced agricultural production concepts. The key challenge for the future development of smart agriculture lies in how to enhance the design, manufacturing, operation, and maintenance of these machines to fully leverage their capabilities. To address this, the concept of the digital twin has emerged as an innovative approach that integrates various information technologies and facilitates the integration of virtual and real-world interactions. By providing a deeper understanding of agricultural machinery and its operational processes, the digital twin offers solutions to the complexity encountered throughout the entire lifecycle, from design to recycling. Consequently, it contributes to an all-encompassing enhancement of the quality of agricultural machinery operations, enabling them to better meet the demands of agricultural production. Nevertheless, despite its significant potential, the adoption of the digital twin for agricultural machinery is still at an early stage, lacking the necessary theoretical guidance and methodological frameworks to inform its practical implementation. [Progress] Drawing upon the successful experiences of the authors' team with the digital twin for agricultural machinery, this paper presents an overview of the research progress made in digital twins. It covers three main areas: The digital twin in a general sense, the digital twin in agriculture, and the digital twin for agricultural machinery. The digital twin is conceptualized as an abstract notion that combines model-based systems engineering and cyber-physical systems, facilitating the integration of virtual and real-world environments. This paper elucidates the relevant concepts and implications of the digital twin in the context of agricultural machinery. It points out that the digital twin for agricultural machinery aims to leverage advanced information technology to create virtual models that accurately describe agricultural machinery and its operational processes. These virtual models act as a carrier, driven by data, to facilitate interaction and integration between physical agricultural machinery and its digital counterparts, consequently yielding enhanced value. Additionally, it proposes a comprehensive framework comprising five key components: Physical entities, virtual models, data and connectivity, system services, and business applications. Each component's functions, operational mechanism, and organizational structure are elucidated. The development of the digital twin for agricultural machinery is still in its conceptual phase, and it will require substantial time and effort to gradually enhance its capabilities. In order to advance further research and application of the digital twin in this domain, this paper integrates relevant theories and practical experiences to propose an implementation plan for the digital twin for agricultural machinery. The macroscopic development process encompasses three stages: Theoretical exploration, practical application, and summarization. The specific implementation process entails four key steps: Intelligent upgrading of agricultural machinery, establishment of information exchange channels, construction of virtual models, and development of digital twin business applications. The implementation of the digital twin for agricultural machinery comprises four stages: Pre-research, planning, implementation, and evaluation.
The digital twin serves as a crucial link and bridge between agricultural machinery and smart agriculture. It not only facilitates the design and manufacturing of agricultural machinery, aligning them with the realities of agricultural production and supporting the advancement of advanced manufacturing capabilities, but also enhances the operation, maintenance, and management of agricultural production to better meet practical requirements. This, in turn, expedites the practical implementation of smart agriculture. To fully showcase the value of the digital twin for agricultural machinery, this paper addresses the existing challenges in the design, manufacturing, operation, and management of agricultural machinery. It expounds the methods by which the digital twin can address these challenges and provides a technical roadmap for empowering the design, manufacturing, operation, and management of agricultural machinery through the use of the digital twin. In tackling the critical issue of leveraging the digital twin to enhance the operational quality of agricultural machinery, this paper presents two research cases focusing on high-powered tractors and large combine harvesters. These cases validate the feasibility of the digital twin in improving the quality of plowing operations for high-powered tractors and the quality of grain harvesting for large combine harvesters. [Conclusions and Prospects] This paper serves as a reference for the development of research on the digital twin for agricultural machinery, laying a theoretical foundation for empowering smart agriculture and intelligent equipment with the digital twin. The digital twin provides a new approach for the transformation and upgrading of agricultural machinery, offers a new path for enhancing the level of agricultural mechanization, and presents new ideas for realizing smart agriculture. However, existing digital twins for agricultural machinery are still in the early stages, and a series of issues remains to be explored. More professionals from relevant fields need to be involved to advance research in this area.

  • Overview Article
    GUI Zechun, ZHAO Sijian
    Smart Agriculture. 2023, 5(1): 82-98. https://doi.org/10.12133/j.smartag.SA202211004

    Agriculture is a basic industry deeply related to the national economy and people's livelihood, yet it is also a vulnerable industry. Traditional agricultural risk management research methods have problems such as insufficient mining of nonlinear information, low accuracy and poor robustness. Artificial intelligence (AI) has powerful capabilities such as strong nonlinear fitting, end-to-end modeling and feature self-learning based on big data, which can address these problems well. The research progress of artificial intelligence technology in agricultural vulnerability assessment, agricultural risk prediction and agricultural damage assessment was first analyzed in this paper, and the following conclusions were obtained: (1) Feature importance assessment with AI in agricultural vulnerability assessment lacks scientific and effective verification indicators, and the way it is applied makes it impossible to compare the advantages and disadvantages of multiple AI models; it is therefore suggested to use both subjective and objective methods for evaluation. (2) In risk prediction, the prediction ability of machine learning models tends to decline as the prediction horizon increases; overfitting is a common problem, and there is little research on mining the spatial information of graph data. (3) The complex agricultural production environment and varied application scenarios are important factors affecting the accuracy of damage assessment; improving the feature extraction ability and robustness of deep learning models is a key and difficult issue to be overcome in future technological development. Then, in view of the performance improvement problem and the small sample problem existing in the application of AI technology, corresponding solutions were put forward. For performance improvement, depending on the user's familiarity with artificial intelligence, multi-model comparison, model combination and neural network structure optimization methods can be used to improve model performance. For the small sample problem, data augmentation, generative adversarial networks (GAN) and transfer learning can often be combined to increase the amount of input data, enhance model robustness, accelerate training and improve recognition accuracy. Finally, the applications of AI in agricultural risk management were prospected: in the future, AI algorithms could be considered in the construction of agricultural vulnerability curves; in view of the relationships between the upstream and downstream of the agricultural industry chain and agriculture-related industries, graph neural networks could be used more widely to further study agricultural price risk prediction; and in future damage assessment modeling, more professional knowledge related to the assessment target could be introduced to enhance feature learning, while expanding small sample data also remains a key subject of future research.
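As a hedged illustration of the data augmentation remedy for small samples mentioned above, the sketch below composes a few standard torchvision transforms; the particular transforms and parameter values are illustrative, not a pipeline prescribed by the reviewed studies.

```python
# Hedged sketch: a simple image augmentation pipeline to enlarge the effective sample size.
# The chosen transforms and their parameters are illustrative defaults only.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ToTensor(),
])
# augmented = augment(pil_image)   # apply to each PIL image (pil_image is a hypothetical input)
```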

  • Overview Article
    HUANG Zichen, SUGIYAMA Saki
    Smart Agriculture. 2022, 4(2): 135-149. https://doi.org/10.12133/j.smartag.SA202202008

    Intelligent equipment is necessary to ensure stable, high-quality and efficient production in facility agriculture. Intelligent harvesting equipment in particular needs to be designed and developed according to the characteristics of the fruits and vegetables concerned, so large-scale mechanization remains rare. Japan has nearly 40 years of research and development history in intelligent harvesting equipment since the 1980s, and a review of its R&D products offers useful inspiration and reference. First, the preferential policies for harvesting robots within the support policies of the government and banks for promoting the development of facility agriculture were introduced. Then, the development of agricultural robots in Japan was reviewed: the top ten greenhouse fruits and vegetables were selected, and research on tomato, eggplant, green pepper, cucumber, melon, asparagus and strawberry harvesting robots based on the combination of agricultural machinery and agronomy was analyzed. Next, the commercialized solutions for tomato, green pepper and strawberry harvesting systems were reviewed in detail. Among them, taking the green pepper harvesting robot developed in recent years by the start-up company AGRIST Ltd. as an example, the harvesting robot developed by the company based on Internet of Things technology and artificial intelligence algorithms was explained; this robot can work 24 h a day and its operation can be controlled over the network. Then, a typical strawberry harvesting robot that had undergone four generations of prototype development was reviewed. The fourth-generation system was a systematic solution developed jointly by a company and researchers; it consisted of high-density movable seedbeds and a harvesting robot, with the advantages of high space utilization, all-day operation and intelligent quality grading. The strengths, weaknesses, challenges and future trends of the prototypes developed by universities and of the industrialized solutions were also summarized. Finally, suggestions for accelerating the development of intelligent, smart and industrialized harvesting robots in China's facility agriculture were provided.

  • Information Processing and Decision Making
    ZHU Yeping, LI Shijuan, LI Shuqin
    Smart Agriculture. 2019, 1(1): 53-66. https://doi.org/10.12133/j.smartag.2019.1.1.201901-SA005

    To meet the demand for digitized analysis and visual representation of crop yield formation and variety adaptability analysis, and to improve the timeliness, coordination and realism of crop simulation models, key technologies of crop growth process simulation and 3D morphological visualization were studied in this research. Internet of Things technology was applied to collect the field data, and multi-agent technology was used to study the co-simulation method and design the crop model framework. Winter wheat (Triticum aestivum L.) was taken as an example to conduct field tests, and a 3D morphology visualization system was developed and validated. Taking three wheat varieties, Hengguan35 (Hg35), Jimai22 (Jm22) and Heng4399 (H4399), as research objects, logistic equations were constructed to simulate the changes in leaf length, maximum leaf width, leaf height and plant height. A parametric modeling method and the 3D graphics library OpenGL were used to build wheat organ geometry models and draw the wheat morphological structure model. The R2 values for leaf length, maximum leaf width, leaf height and plant height ranged from 0.772 to 0.999, indicating that the models fit well. The F values of the regression equations (between 10.153 and 4359.236) and the Sig. values (below 0.05) show that the models are statistically significant. Taking wheat as an example, this research effectively combined the wheat growth model and structure model to realize 3D morphological visualization of crop growth processes under different conditions. It provides a reference for developing crop simulation visualization systems, and the method and related technologies are also suitable for other field crops such as corn and rice.
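For reference, a standard three-parameter logistic growth curve of the kind fitted above can be written as follows; the exact parameterization used in the paper is not given in the abstract, so this form is only indicative.

```latex
% Indicative logistic growth form (the paper's exact parameterization may differ):
% y(t): simulated organ dimension (e.g., leaf length) at time t
% K: asymptotic maximum; a, b: fitted shape parameters
y(t) = \frac{K}{1 + e^{\,a - b\,t}}
```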

  • Overview Article
    BAI Geng, GE Yufeng
    Smart Agriculture. 2023, 5(1): 66-81. https://doi.org/10.12133/j.smartag.SA202211001

    Enhancing resource use efficiency in agricultural field management and breeding high-performance crop varieties are crucial approaches for securing crop yield and mitigating negative environmental impact of crop production. Crop stress sensing and plant phenotyping systems are integral to variable-rate (VR) field management and high-throughput plant phenotyping (HTPP), with both sharing similarities in hardware and data processing techniques. Crop stress sensing systems for VR field management have been studied for decades, aiming to establish more sustainable management practices. Concurrently, significant advancements in HTPP system development have provided a technological foundation for reducing conventional phenotyping costs. In this paper, we present a systematic review of crop stress sensing systems employed in VR field management, followed by an introduction to the sensors and data pipelines commonly used in field HTPP systems. State-of-the-art sensing and decision-making methodologies for irrigation scheduling, nitrogen application, and pesticide spraying are categorized based on the degree of modern sensor and model integration. We highlight the data processing pipelines of three ground-based field HTPP systems developed at the University of Nebraska-Lincoln. Furthermore, we discuss current challenges and propose potential solutions for field HTPP research. Recent progress in artificial intelligence, robotic platforms, and innovative instruments is expected to significantly enhance system performance, encouraging broader adoption by breeders. Direct quantification of major plant physiological processes may represent one of next research frontiers in field HTPP, offering valuable phenotypic data for crop breeding under increasingly unpredictable weather conditions. This review can offer a distinct perspective, benefiting both research communities in a novel manner.

  • Topic--Machine Vision and Agricultural Intelligent Perception
    SHI Jiefeng, HUANG Wei, FAN Xieyang, LI Xiuhua, LU Yangxu, JIANG Zhuhui, WANG Zeping, LUO Wei, ZHANG Muqing
    Smart Agriculture. 2023, 5(2): 82-92. https://doi.org/10.12133/j.smartag.SA202304004

    [Objective] Accurate prediction of changes in sugarcane yield in Guangxi can provide an important reference for the government in formulating relevant policies and a decision-making basis for farmers in guiding sugarcane planting, thereby improving sugarcane yield and quality and promoting the development of the sugarcane industry. This research was conducted to provide scientific data support for sugar factories and related management departments and to explore the relationship between sugarcane yield and meteorological factors in the main sugarcane-producing areas of Guangxi Zhuang Autonomous Region. [Methods] The study area included five sugarcane planting regions located in five different counties in Guangxi, China. The average yield per hectare of each planting region was provided by Guangxi Sugar Industry Group, which controls the sugar refineries of each planting region. Daily meteorological data covering 14 meteorological factors from 2002 to 2019 were acquired from the National Data Center for Meteorological Sciences to analyze their influence on sugarcane yield. Since meteorological factors can influence sugarcane growth differently during different time spans, a new kind of factor combining a meteorological variable with a time span was defined, such as the average precipitation in August or the average temperature from February to April. The inter-correlations of all the meteorological factors over different time spans and their correlations with yield were then analyzed to screen out the key meteorological factors over sensitive time spans. After that, four algorithms, BP neural network (BPNN), support vector machine (SVM), random forest (RF), and long short-term memory (LSTM), were employed to establish sugarcane apparent yield prediction models for each planting region, and corresponding reference models based on annual meteorological factors were also built. Additionally, the meteorological yield of every planting region was extracted by HP filtering, and a general meteorological yield prediction model was built on the data of all five planting regions using RF, SVM, BPNN, and LSTM, respectively. [Results and Discussions] The correlation analysis showed that different planting regions have different sensitive meteorological factors and key time spans. The most representative meteorological factors mainly included sunshine hours, precipitation, and atmospheric pressure. According to the results of the correlation analysis, in Region 1, the strongest negative correlation with yield was observed for sunshine hours during October and November, while the strongest positive correlation was found for the minimum relative humidity in November. In Region 2, the strongest positive correlation with yield was observed for the average vapor pressure during February and March, whereas the strongest negative correlation was associated with precipitation in August and September. In Region 3, the strongest positive correlation with yield was found for the 20‒20 precipitation during August and September, while the strongest negative correlation was related to sunshine hours in the same period. In Region 4, the strongest positive correlation with yield was observed for the 20‒20 precipitation from March to December, whereas the strongest negative correlation was associated with the highest atmospheric pressure from August to December.
In Region 5, the strongest positive correlation with yield was found for the average vapor pressure from June to August, whereas the strongest negative correlation was related to the lowest atmospheric pressure in February and March. For each planting region, the accuracy of the apparent yield prediction model based on sensitive meteorological factors during key time spans was clearly better than that based on annual average meteorological values. The LSTM model performed significantly better than the widely used classic BPNN, SVM, and RF models for both kinds of meteorological factors (over sensitive time spans or annually). The overall root mean square error (RMSE) and mean absolute percentage error (MAPE) of the LSTM model under key time spans were 10.34 t/ha and 6.85%, respectively, with a coefficient of determination Rv2 of 0.8489 between predicted and true values. For the general prediction models of meteorological yield across the multiple sugarcane planting regions, the RF, SVM, and BPNN models achieved good results, and the best prediction performance was achieved by the BPNN model, with an RMSE of 0.98 t/ha, a MAPE of 9.59%, and an Rv2 of 0.965. The RMSE and MAPE of the LSTM model were 0.25 t/ha and 39.99%, respectively, and its Rv2 was 0.77. [Conclusions] Sensitive meteorological factors over key time spans were found to be more significantly correlated with yield than annual average meteorological factors. The LSTM model showed better performance on apparent yield prediction for a specific planting region than the classic BPNN, SVM, and RF models, while the BPNN model performed better than the other models in predicting meteorological yield over multiple sugarcane planting regions.
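As a hedged illustration of the HP-filtering step used above to separate the trend component from the meteorological (cyclical) yield component, the sketch below applies statsmodels' hpfilter to an invented annual yield series; the series values and the smoothing parameter are placeholders, not data or settings from the study.

```python
# Hedged sketch: decompose an annual yield series into trend and cyclical parts with the
# Hodrick-Prescott filter; the yield values and lamb below are illustrative placeholders.
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

yields = pd.Series([68.5, 70.2, 66.9, 72.1, 74.0, 69.8, 75.3, 73.6, 76.4, 71.9],
                   index=range(2002, 2012), name="yield_t_per_ha")   # fake annual yields
meteo_component, trend_component = hpfilter(yields, lamb=100)  # cycle ~ meteorological yield
print(trend_component.round(2))
print(meteo_component.round(2))
```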

  • ZHANG Jianhua, YAO Qiong, ZHOU Guomin, WU Wendi, XIU Xiaojie, WANG Jian
    Online available: 2024-04-30

    [Significance] The crop phenotype represents the external expression of the interaction between crop genes and the environment. It is the manifestation of the physiological, ecological and dynamic characteristics of crop growth and development and represents a core link within the field of intelligent breeding. Systematic analysis of crop phenotypes can not only provide insight into gene function and reveal the genetic factors that affect key crop characteristics, but can also be used to effectively utilize germplasm resources and breed varieties with major breakthroughs. Data-driven, intelligent, dynamic and non-contact crop phenotypic measurement enables the acquisition of key traits and phenotypic parameters of crop growth, thereby furnishing crucial data support for breeding and for the identification of breeding materials throughout the entire growth cycle of crops. [Progress] Crop phenotype acquisition equipment is the fundamental basis for the acquisition, analysis, measurement and identification of crop phenotypes, and can be employed to monitor the growth status of crops in detail. The functions, performance and applications of the dominant high-throughput crop phenotyping platforms are presented, along with an analysis of the characteristics of the various sensing and imaging devices employed to obtain crop phenotypic information. The rapid development of high-throughput crop phenotyping platforms and perceptual imaging equipment has led to the integration of advanced imaging technology, spectroscopy technology and deep learning algorithms. These technologies enable the automatic, high-throughput acquisition of yield, resistance, quality and other related traits of large-scale crops, as well as the generation of large-scale multi-dimensional, multi-scale and multi-modal crop phenotypic data, supporting the rapid development of crop phenomics. The research progress of deep learning in the intelligent perception of crop agronomic traits and morphological structure is presented with respect to various phenotypes such as plant height, leaf area index, and crop organ detection. The main challenges associated with this field are also outlined, namely the complexity of environmental influences, the difficulty of large-scale data processing due to data diversity, model generalization issues, and the need for lightweight algorithms. The analysis of crop phenotypes and morphological characteristics based on three-dimensional reconstruction is considered more accurate than that based on two-dimensional images. Three-dimensional reconstruction methods for crops are summarized and discussed, and the main challenges encountered are outlined, including the complexity of crop structures, the necessity of algorithm optimization, and the cost and practicability of the methods. [Conclusions and Prospects] The difficulties and challenges of deep learning-based intelligent identification of crop phenotypes are examined from the perspectives of developing innovative field equipment for acquiring and analyzing phenotypic data, establishing a unified data acquisition and sharing platform to improve the efficiency of data utilization, and enhancing the generality of the aforementioned approaches. A field crop phenotype intelligent identification model must consider multiple perspectives, modalities and points in time.
This necessitates a continuous, multi-faceted analysis. It is achieved to identify characteristics in a spatiotemporal context through the fusion of various data sources, such as images, spectral data and weather information. In terms of interpretability models, it explores the potential of deep learning in crop phenotype intelligent recognition. It will be necessary for future research to break through the current bottleneck of high-throughput crop phenomics technology. It is vital to conduct further research into the field of visual perception and deep learning methods. This will allow for the realization of the intelligent acquisition of crop phenotypic information, as well as an intelligent management of phenotypic data.

  • Topic--Smart Farming of Field Crops
    LUO Qing, RAO Yuan, JIN Xiu, JIANG Zhaohui, WANG Tan, WANG Fengyi, ZHANG Wu
    Smart Agriculture. 2022, 4(4): 84-104. https://doi.org/10.12133/j.smartag.SA202210004

    Accurate peach detection is a prerequisite for automated agronomic management, e.g., peach mechanical harvesting. However, due to uneven illumination and ubiquitous occlusion, it is challenging to detect peaches, especially when they are bagged in orchards. To this end, an accurate multi-class peach detection method for mechanical harvesting was proposed in this paper by improving YOLOv5s and using multi-modal visual data. An RGB-D dataset with multi-class annotations of naked and bagging peaches was constructed, including 4127 multi-modal images of pixel-aligned color, depth and infrared images acquired with a consumer-level RGB-D camera. Subsequently, an improved lightweight YOLOv5s (small depth) model was put forward by introducing a direction-aware and position-sensitive attention mechanism, which could capture long-range dependencies along one spatial direction and preserve precise positional information along the other, helping the network accurately detect peach targets. Meanwhile, depthwise separable convolution was employed to reduce the model computation by decomposing the convolution operation into a convolution in the depth (channel) direction and a convolution in the width and height directions, which helped speed up training and inference while maintaining accuracy. The comparison experiments demonstrated that the improved YOLOv5s using multi-modal visual data recorded detection mAPs of 98.6% and 88.9% on naked and bagging peaches with 5.05 M model parameters under complex illumination and severe occlusion, an increase of 5.3% and 16.5% over using RGB images alone, and of 2.8% and 6.2% compared with YOLOv5s. Compared with other networks in detecting bagging peaches, the improved YOLOv5s performed best in terms of mAP, which was 16.3%, 8.1% and 4.5% higher than YOLOX-Nano, PP-YOLO-Tiny and EfficientDet-D0, respectively. In addition, the proposed improved YOLOv5s model offered better results, to different degrees, than other methods in detecting Fuji apple and Hayward kiwifruit, verifying its effectiveness on different fruit detection tasks. Further investigation revealed the contribution of each imaging modality, as well as of the proposed improvements to YOLOv5s, to the favorable detection of both naked and bagging peaches in natural orchards. Additionally, on a popular mobile hardware platform, the improved YOLOv5s model could perform 19 detections per second with the considered five-channel multi-modal images, offering real-time peach detection. These promising results demonstrate the potential of the improved YOLOv5s and multi-modal visual data with multi-class annotations to achieve visual intelligence in automated fruit harvesting systems.
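
    The depthwise separable convolution mentioned above factors a standard convolution into a per-channel (depthwise) convolution followed by a 1×1 pointwise convolution. The PyTorch sketch below is purely illustrative and is not taken from the paper; the block structure, activation choice and the five-channel input (color + depth + infrared stacked) are assumptions made for demonstration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Minimal sketch: a KxK convolution factored into a depthwise convolution
    (one filter per input channel) followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Quick check on a hypothetical five-channel input (e.g., RGB + depth + infrared stacked).
x = torch.randn(1, 5, 224, 224)
print(DepthwiseSeparableConv(5, 32)(x).shape)  # torch.Size([1, 32, 224, 224])
```

    For a 3×3 kernel, the factored form costs roughly 1/out_ch + 1/9 of the multiply-accumulate operations of a standard convolution with the same output width, which is the source of the speed-up described in the abstract.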

  • Topic--Agricultural Sensor and Internet of Things
    YANG XuanJiang, LI Hualong, LI Miao, HU Zelin, LIAO Jianjun, LIU Xianwang, GUO Panpan, YUE Xudong
    Smart Agriculture. 2020, 2(2): 115-125. https://doi.org/10.12133/j.smartag.2020.2.2.202004-SA001

    With the development of information technology, using big data analysis, Internet of Things monitoring, sensor perception, wireless communication and other technologies to build a real-time online monitoring system for beehives is a feasible way to reduce the stress response of bee colonies caused by manual inspection of the hive. Focusing on the difficulty of real-time monitoring in the closed environment of the beehive, an STM32F103VBT6 32-bit microcontroller integrated with a temperature and humidity sensor, a microphone and a laser beam sensor was used in this study to develop a low-power, continuously operating online system for multi-parameter information acquisition and monitoring of key beehive parameters. The system mainly includes a core processing module, a data acquisition module, a data sending module and a database server. The data acquisition module includes a temperature and humidity acquisition unit inside the beehive, a bee colony sound acquisition unit and a unit for counting bees entering and leaving the hive, and transfers data over the mobile communication network. Field deployment performance tests showed that the developed system could monitor the temperature and humidity in the beehive in real time, effectively distinguish bees entering the hive from those leaving it, and record the number of bees passing the hive entrance, and that the automatically acquired bee colony sounds were consistent with the standard sound distribution of a bee colony. The results indicate that this system meets the design requirements, can accurately and reliably collect beehive parameter data, and can serve as a data collection method for related research on bee colonies.

  • Intelligent Equipment and Systems
    QIN Yingdong, JIA Wenshen
    Smart Agriculture. 2023, 5(1): 155-165. https://doi.org/10.12133/j.smartag.SA202211008

    To meet the needs of environmental monitoring and regulation in rabbit houses, a real-time environmental monitoring system for rabbit houses was proposed based on narrowband Internet of Things (NB-IoT). The system overcomes the limitations of traditional wired networks and reduces network costs and circuit components, keeping overall expenses low. An Arduino development board and the Quectel BC260Y NB-IoT network module were used, along with the message queuing telemetry transport (MQTT) protocol for remote telemetry, enabling network connectivity and communication with an IoT cloud platform. Multiple sensors, including the SGP30, MQ137 and 5516 photoresistors, were integrated into the system to achieve real-time monitoring of various environmental parameters within the rabbit house, such as sound level, light intensity, humidity, temperature and gas concentrations. The collected data were stored both locally and in the cloud for further analysis and could be used to inform environmental regulation and monitoring in rabbit houses. Signal alerts based on circuit principles were triggered when thresholds were exceeded, helping to create an optimal living environment for the rabbits. The advantages of NB-IoT networks were compared with those of other networks such as Wi-Fi and LoRa. The technology and process of building a system based on the three-layer IoT architecture were introduced. The prices of circuit components were analyzed, and the total cost of the entire system was less than 400 RMB. The system underwent network and energy consumption tests; transmission stability, reliability and energy consumption were reasonable and consistent across different time periods, locations and network connection methods. An average of 0.57 transactions per second (TPS) was processed by the NB-IoT network using the MQTT communication protocol, and 34.2 messages per minute were sent and received with a fluctuation of 1 message. After continuous monitoring with an electricity meter, the monitored device was found to have an average voltage of approximately 12.5 V, a current of approximately 0.42 A and an average power of 5.3 W, with no additional power consumption observed during communication. The performance of the various sensors was tested in a 24-hour indoor test, during which temperature and lighting showed variations corresponding to day and night cycles. The readings were captured stably and accurately by the environmental sensors, demonstrating their suitability for long-term monitoring. This system can provide reference values on equipment cost and network selection for remote or large-scale livestock monitoring devices.
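
    As an illustration of the telemetry path described above, the short Python sketch below publishes one JSON-encoded reading set over MQTT. It is not code from the paper: the broker address, topic name and field names are hypothetical, and the paho-mqtt library stands in for whatever client runs on the actual gateway.

```python
import json
import time
import paho.mqtt.publish as publish

# Hypothetical broker endpoint and topic; the real NB-IoT cloud platform and topic naming would differ.
BROKER, PORT, TOPIC = "iot.example.com", 1883, "rabbitry/house01/env"

# Placeholder readings; on the actual device these would come from the SGP30, MQ137,
# photoresistor, temperature/humidity sensor and sound sensor described above.
payload = {
    "temperature_c": 21.4,
    "humidity_pct": 55.0,
    "nh3_ppm": 8.2,
    "light_lux": 120,
    "sound_db": 42.5,
    "timestamp": int(time.time()),
}

# Publish one telemetry message over MQTT with QoS 1 (at-least-once delivery).
publish.single(TOPIC, json.dumps(payload), qos=1, hostname=BROKER, port=PORT)
```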

  • Overview Article
    Cao Hongxin, Ge Daokuo, Zhang Wenyu, Zhang Weixin, Cao Jing, Liang Wanjie, Xuan Shouli, Liu Yan, Wu Qian, Sun Chuanliang, Zhang Lingling, Xia Ji'an, Liu Yongxia, Chen Yuli, Yue Yanbin, Zhang Zhiyou, Wan Qian, Pan Yue, Han Xujie, Wu Fei
    Smart Agriculture. 2020, 2(1): 147-162. https://doi.org/10.12133/j.smartag.2020.2.1.202002-SA006

    Agricultural models, agricultural artificial intelligence, data analysis technology and related tools run through the whole process of information perception, transmission, processing and control in smart agriculture, and are therefore core technologies of smart agriculture. To further clarify the substance and functions of agricultural models, facilitate their research and application, and drive the healthy, steady and sustainable development of smart agriculture, methods such as systematic analysis, comparison and relationship charting were used in this research. The definition, classification and functions of agricultural models were analyzed theoretically. The relationships between agricultural models and the elements and processes of smart agriculture were expounded, clarifying the functions of agricultural models and providing examples of agricultural models applied in smart agriculture. Important studies and application progress of agricultural models were reviewed. The comparison of agricultural models showed that the 4 levels of agricultural biological elements, 6 scales of agricultural environmental elements, 6 administrative levels of agricultural technological and economic elements, and the relevant approaches for modeling agricultural systems need to be considered. Research and application of multiple spatial scales of environmental elements in agricultural models show larger potential. The combination of agricultural models with molecular genetics, perception and artificial intelligence, collaboration among public and private researchers, and food security challenges have become important driving forces for the further development of agricultural models; linking agricultural models with various agricultural system models, databases, harmonized and open data, and decision support systems (DSS) will be a focus. Research and application of agricultural models in China have formed a crop model series with Chinese characteristics and joined world trends such as the Agricultural Model Intercomparison and Improvement Project (AgMIP) and smart agriculture; opportunities should be grasped quickly and development accelerated. Agricultural models are a quantitative expression of the relationships within or among agricultural system elements. They are an important method, with epistemological value, for quantifying and synthesizing the agricultural sciences; combined with perception techniques, they will play an indispensable role in data acquisition and processing for smart agriculture and become a significant bridge and bond.

  • Special Issue--Monitoring Technology of Crop Information
    MA Yujing, WU Shangrong, YANG Peng, CAO Hong, TAN Jieyang, ZHAO Rongkun
    Smart Agriculture. 2023, 5(3): 1-16. https://doi.org/10.12133/j.smartag.SA202303002

    [Significance] Oil crops play a significant role in the food supply and are an important source of edible vegetable oils and plant proteins. Real-time, dynamic and large-scale monitoring of oil crop growth is essential for guiding agricultural production, stabilizing markets and maintaining the health of the food supply. Previous studies have made considerable progress in regional-scale yield simulation of staple crops based on remote sensing methods, but regional-scale yield simulation of oil crops remains poor owing to the complexity of their plant traits and structural characteristics. Therefore, there is an urgent need to study regional oil crop yield estimation based on remote sensing technology. [Progress] This paper summarizes remote sensing technology for oil crop monitoring from three aspects: background, progress, and opportunities and challenges. Firstly, the significance and advantages of using remote sensing technology to estimate the yield of oil crops are expounded, and it is pointed out that both parameter inversion and crop area monitoring are vital components of yield estimation. Secondly, the current state of remote sensing-based oil crop monitoring is summarized from the three aspects of parameter inversion, crop area monitoring and yield estimation. For parameter inversion, optical sensors have been used more than other sensors for oil crop inversion in previous studies. The advantages and disadvantages of empirical and physical inversion models are analyzed, those of optical and microwave data are further illustrated from the perspective of oil crop structure and trait characteristics, and optimal choices of data and methods for oil crop parameter inversion are given. For crop area monitoring, this paper mainly elaborates on optical and microwave remote sensing data. Combined with the structure of oil crops and the characteristics of their planting areas, research on area monitoring of oil crops based on different remote sensing data sources is reviewed, including the advantages and limitations of different data sources. Two yield estimation approaches are then introduced: direct remote sensing yield estimation and data assimilation yield estimation. The phenological periods used for oil crop yield estimation, remote sensing data sources and modeling methods are summarized. Data assimilation technology is introduced, and it is proposed that it has great potential in oil crop yield estimation; assimilation research for oil crops is expounded from the aspects of assimilation method and grid selection. All of these indicate that data assimilation technology could improve the accuracy of regional yield estimation of oil crops. Thirdly, this paper points out the opportunities of remote sensing technology in oil crop monitoring, puts forward problems and challenges in crop feature selection, spatial scale determination and remote sensing data source selection for oil crop yield estimation, and forecasts future development trends of oil crop yield estimation research. [Conclusions and Prospects] The paper puts forward the following suggestions for these three aspects. (1) Regarding crop feature selection, when estimating yields for oil crops such as rapeseed and soybean, whose siliques or pods photosynthesize actively, relying solely on canopy leaf area index (LAI) as the assimilation state variable may result in significant underestimation of yield, thereby impacting the accuracy of regional crop yield simulation. Therefore, the plant characteristics of oil crops and the agronomic mechanism of yield formation through siliques or pods should be considered when estimating yield. (2) In determining the spatial scale, some oil crops are distributed in hilly and mountainous areas with mixed land cover. Using regular yield simulation grids may mix in numerous background objects, introducing additional errors and affecting the assimilation accuracy of yield estimation, which poses a challenge to yield estimation research. Thus, appropriate methods should be chosen to divide irregular unit grids and determine the optimal scale for yield estimation, thereby improving its accuracy. (3) In terms of remote sensing data selection, the monitoring of oil crops can be influenced by crop structure and meteorological conditions, and depending solely on spectral data may affect yield estimation results. It is important to incorporate radar off-nadir remote sensing measurement techniques to capture the response relationships between remote sensing parameters and crop leaves, siliques or pods, bridging the gap between crop characteristics and remote sensing information for crop yield simulation. This paper can serve as a valuable reference and stimulus for further research on regional yield estimation and growth monitoring of oil crops, supplementing existing knowledge and providing insightful considerations for enhancing the accuracy and efficiency of oil crop production monitoring and management.
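
    As a minimal illustration of the empirical inversion approach discussed above, the sketch below fits a simple linear relationship between a vegetation index and field-measured LAI and applies it to a new observation. It is not drawn from any study reviewed here; the NDVI and LAI values are invented for demonstration, and a real workflow would use much larger calibration datasets and independent validation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical calibration pairs: NDVI from optical imagery vs. field-measured LAI (values illustrative only).
ndvi = np.array([[0.35], [0.48], [0.55], [0.62], [0.70], [0.78]])
lai = np.array([0.9, 1.6, 2.1, 2.7, 3.4, 4.0])

# Fit the empirical relationship, then apply it to a new observation.
model = LinearRegression().fit(ndvi, lai)
print("R^2 on calibration data:", round(model.score(ndvi, lai), 3))
print("Predicted LAI at NDVI = 0.66:", round(float(model.predict([[0.66]])[0]), 2))
```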

  • CAO Bingxue, LI Jin, ZHAO Chunjiang, LI Hongfei
    Online available: 2024-06-12

    [Significance] Building new agricultural quality productive forces is of great significance. New agricultural quality productive forces are advanced productive forces that realize the transformation, upgrading and deep integration of substantive, penetrating, operational and media factors, and have outstanding characteristics such as intelligence, greenness, integration and organization. As another technological revolution in agriculture, smart agricultural technology transforms the agricultural production mode by integrating agricultural biotechnology, agricultural information technology, and smart agricultural machinery and equipment, with information and knowledge as core elements. The inherent characteristics of "high-tech, high-efficiency, high-quality, and sustainable" in new agricultural quality productive forces are fully reflected in the practice of smart agricultural technology innovation, which has become an important core and engine for promoting new agricultural quality productive forces. [Progress] Through literature review and theoretical analysis, this article conducts a systematic study of the practical foundation, internal logic and challenges of smart agricultural technology innovation leading the development of new agricultural quality productive forces. The conclusions show that: (1) The global innovation capability of smart agriculture technology is constantly increasing, and significant technological breakthroughs have been made in fields such as smart breeding, agricultural information perception, agricultural big data and artificial intelligence, and smart agricultural machinery and equipment, providing a practical foundation for leading the development of new agricultural quality productive forces. Among them, 'Phenotype+Genotype+Environmental type' smart breeding has entered the fast lane, the technology system for sky, air and ground agricultural information sensing is gradually maturing, research and exploration on agricultural big data and intelligent decision-making technology continue to advance, and the creation of smart agricultural machinery and equipment for different fields has achieved fruitful results. (2) Smart agricultural technology innovation provides basic resources for the development of new agricultural quality productive forces by empowering agricultural factor innovation, provides a sustainable driving force by empowering agricultural technology innovation, provides practical paradigms by empowering agricultural scenario innovation, provides intellectual support by empowering agricultural entity innovation, and provides important guidelines by empowering agricultural value innovation. (3) Compared with the development requirements of new agricultural quality productive forces in China and the advanced international level of smart agriculture technology, China's smart agriculture technology innovation is generally in the initial stage of multi-point breakthroughs, system integration and commercial application. It still faces major challenges such as an incomplete policy system for scientific and technological innovation; bottlenecks, blockages and breakpoints in key technologies; difficulties in the transformation and implementation of scientific and technological achievements; and incomplete support systems for technological innovation. [Conclusions and Prospects] Regarding technological innovation in smart agriculture, this article proposes the "Four Highs" path of smart agriculture technology innovation to fill the gaps and accelerate the formation of new agricultural quality productive forces in China: the construction of high-energy smart agricultural science and technology innovation platforms, breakthroughs in high-precision and cutting-edge smart agricultural technology products, the creation of high-level smart agricultural application scenarios, and the cultivation of high-level smart agricultural innovation talents. Finally, this article proposes four strategic suggestions: deepening the understanding of smart agriculture technology innovation and new agricultural quality productive forces, optimizing the supply of smart agriculture technology innovation policies, building national smart agriculture innovation development pilot zones, and improving the smart agriculture science and technology innovation ecosystem.

  • Invited Article
    Lan Yubin, Deng Xiaoling, Zeng Guoliang
    Smart Agriculture. 2019, 1(2): 1-19. https://doi.org/10.12133/j.smartag.2019.1.2.201904-SA003

    Rapid acquisition and analysis of crop information is the precondition and basis for carrying out precision agriculture practice. Variable-rate spraying and agricultural operation management based on the actual severity of crop diseases, pests and weeds can reduce the cost of agricultural production, optimize crop cultivation, and improve crop yield and quality, thus achieving precise agricultural management. In recent years, with the rapid development of the UAV industry, UAV agricultural remote sensing technologies have played an important role in monitoring crop diseases, pests and weeds because of their high spatial resolution, strong timeliness and low cost. Firstly, this research introduces the basic idea and system composition of precision agricultural aviation, and the status of UAV remote sensing within it. Then, common UAV remote sensing imaging and interpretation methods are discussed, and the progress of UAV agricultural remote sensing technologies in detecting crop diseases, pests and weeds is expounded. Finally, the current challenges in the development of UAV agricultural remote sensing technologies are summarized, and future development directions are prospected. This research can provide theoretical references and technical support for the development of UAV remote sensing technology in the field of precision agricultural aviation.

  • Topic--Technological Innovation and Sustainable Development of Smart Animal Husbandry
    ZHANG Yanqi, ZHOU Shuo, ZHANG Ning, CHAI Xiujuan, SUN Tan
    Smart Agriculture. 2024, 6(4): 53-63. https://doi.org/10.12133/j.smartag.SA202310001

    [Objective] Currently, pig farming facilities mainly rely on manual counting to track slaughtered and stocked pigs. This is not only time-consuming and labor-intensive, but also prone to counting errors due to pig movement and potential cheating. As breeding operations expand, periodic live-asset inventories put significant strain on human, material and financial resources. Although methods based on electronic ear tags can assist in pig counting, these ear tags are prone to breaking and falling off in group housing environments. Most existing computer vision-based pig counting methods require images captured from a top-down perspective, necessitating the installation of cameras above each hogpen or even the use of drones, resulting in high installation and maintenance costs. To address these challenges in the group pig counting task, a high-efficiency, low-cost pig counting method was proposed based on an improved instance segmentation algorithm and the WeChat public platform. [Methods] Firstly, a smartphone was used to collect pig image data in the pen area from a human viewing perspective, and each pig's outline in the image was annotated to establish a pig counting dataset. The training set contains 606 images and the test set contains 65 images. Secondly, an efficient global attention module was proposed by improving the convolutional block attention module (CBAM). The efficient global attention module first performs a dimension permutation on the input feature map to obtain the interaction between its channel and spatial dimensions. The permuted features are aggregated using global average pooling (GAP), and a one-dimensional convolution replaces the fully connected operation in CBAM, eliminating dimensionality reduction and significantly reducing the model's parameter count. This module was integrated into the YOLOv8 single-stage instance segmentation network to build the pig counting model YOLOv8x-Ours. By adding the efficient global attention module into each C2f layer of the YOLOv8 backbone network, the dimensional dependencies and feature information in the image could be extracted more effectively, thereby achieving high-accuracy pig counting. Lastly, with a focus on user experience and outreach, a pig counting WeChat mini program was developed based on the WeChat public platform and the Django web framework, and the counting model was deployed to count pigs from images captured by smartphones. [Results and Discussions] Compared with the existing methods Mask R-CNN, YOLACT (real-time instance segmentation), PolarMask, SOLO and YOLOv5x, the proposed pig counting model YOLOv8x-Ours exhibited superior accuracy and stability. Notably, YOLOv8x-Ours achieved the highest counting accuracy, with counting errors within 2 and 3 pigs on the test set; 93.8% of the test images had counting errors of fewer than 3 pigs. Compared with the two-stage instance segmentation algorithm Mask R-CNN and with a YOLOv8x model using the CBAM attention mechanism, YOLOv8x-Ours showed performance improvements of 7.6% and 3%, respectively. Owing to the single-stage design and anchor-free architecture of YOLOv8, the processing time for a single image was only 64 ms, one-eighth that of Mask R-CNN. By embedding the model into the WeChat mini program platform, pig counting was conducted using smartphone images. In cases where the model detected pigs incorrectly, users could click on the erroneous location in the result image to adjust the statistical outcome, thereby further improving counting accuracy. [Conclusions] The feasibility of deep learning technology for the pig counting task was demonstrated. The proposed method eliminates the need for installing hardware equipment in the breeding area of the pig farm, enabling pig counting to be carried out with just a smartphone. Users can promptly spot any errors in the counting results through the image segmentation visualization and easily rectify inaccuracies. This human-machine collaborative mode not only reduces the need for extensive manpower but also guarantees the precision and user-friendliness of the counting outcomes.
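
    The attention module described in [Methods] (dimension permutation, global average pooling, then a one-dimensional convolution in place of CBAM's fully connected layers) could be sketched in PyTorch roughly as follows. This is an illustrative reconstruction under assumptions, not the authors' implementation: the exact permutation scheme, kernel size and gating choice are guesses made for demonstration.

```python
import torch
import torch.nn as nn

class EfficientGlobalAttention(nn.Module):
    """Hedged sketch of a channel attention block in the spirit described above:
    dimension permutation -> global average pooling -> 1D convolution -> sigmoid gate."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        # 1D convolution across the channel descriptor, replacing CBAM's fully connected
        # layers and avoiding any channel dimensionality reduction.
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        y = x.permute(0, 2, 3, 1).reshape(b, h * w, c)   # permute so spatial positions index channel vectors
        y = y.mean(dim=1)                                # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)         # 1D conv over the channel dimension
        gate = torch.sigmoid(y).view(b, c, 1, 1)
        return x * gate                                  # re-weight the input feature map

# Quick shape check on a hypothetical C2f feature map.
x = torch.randn(2, 64, 80, 80)
print(EfficientGlobalAttention()(x).shape)  # torch.Size([2, 64, 80, 80])
```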

  • Topic--Smart Farming of Field Crops
    LIU Xiaohang, ZHANG Zhao, LIU Jiaying, ZHANG Man, LI Han, FLORES Paulo, HAN Xiongzhe
    Smart Agriculture. 2022, 4(4): 49-60. https://doi.org/10.12133/j.smartag.SA202207004

    Machine vision has been increasingly used for agricultural sensing tasks, and deep learning-based detection of in-field corn kernels can improve detection accuracy. In order to obtain the number of lost corn kernels quickly and accurately after harvest, and to evaluate combine harvester performance with respect to grain loss, a method of directly using deep learning to count corn kernels in the field was developed and evaluated. Firstly, an RGB camera was used to collect images with different backgrounds and illuminations, and the datasets were generated. Secondly, different target detection networks for kernel recognition were constructed, including Mask R-CNN, EfficientDet-D5, YOLOv5-L and YOLOX-L, and the 420 collected effective images were used to train, validate and test each model, with 200, 40 and 180 images in the training, validation and test sets, respectively. Finally, the counting performance of the different models was evaluated and compared according to the recognition results on the test set images. The experimental results showed that among the four models, YOLOv5-L had the best overall performance and could reliably identify corn kernels under different scenes and light conditions. The average precision (AP) of the model on the test set was 78.3%, and the model size was 89.3 MB. The kernel counting accuracies in the four scenes of non-occlusion, surface mid-level occlusion, surface severe occlusion and aggregation were 98.2%, 95.5%, 76.1% and 83.3%, respectively, with F1 values of 94.7%, 93.8%, 82.8% and 87%, respectively. The overall detection accuracy and F1 value on the test set were 90.7% and 91.1%, respectively, at a frame rate of 55.55 f/s, outperforming the Mask R-CNN, EfficientDet-D5 and YOLOX-L networks; the detection accuracy was improved by about 5% compared with the second-best model, Mask R-CNN. With good precision, high throughput and proven generalization, YOLOv5-L can realize real-time monitoring of corn harvest loss in practical operation.
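
    A counting pipeline of the kind evaluated above reduces, in outline, to running a trained detector and counting its detections. The snippet below uses the publicly documented Ultralytics YOLOv5 hub interface as a stand-in; the weight file name, image name and confidence threshold are hypothetical, and the study's actual training and evaluation code is not reproduced here.

```python
import torch

# Hypothetical weights: a YOLOv5-L model fine-tuned on field images of corn kernels.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='kernel_yolov5l.pt')
model.conf = 0.25                                # confidence threshold applied when counting

results = model('field_image.jpg')               # inference on one post-harvest field image
detections = results.xyxy[0]                     # tensor of (x1, y1, x2, y2, conf, class) rows
print('estimated kernels lost in frame:', len(detections))
```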

  • Special Issue--Key Technologies and Equipment for Smart Orchard
    SHANG Fengnan, ZHOU Xuecheng, LIANG Yingkai, XIAO Mingwei, CHEN Qiao, LUO Chendi
    Smart Agriculture. 2022, 4(3): 120-131. https://doi.org/10.12133/j.smartag.SA202207001

    Dragon fruit detection in the natural environment is a prerequisite for fruit harvesting robots to perform harvesting. In order to improve harvesting efficiency, a target detection network with an attention module was proposed in this research by improving the YOLOX (You Only Look Once X) network. The YOLOX-Nano network was chosen as the benchmark to facilitate deployment on embedded devices, and the convolutional block attention module (CBAM) was added to its backbone feature extraction network, which improved the robustness of the model for dragon fruit target detection to a certain extent. The correlation between features of different channels was learned through weight allocation coefficients applied to the features of different scales extracted by the backbone network. Moreover, the transmission of deep information through the network structure was strengthened, aiming to reduce interference with dragon fruit recognition in the natural environment and to significantly improve detection accuracy and speed. Performance evaluation and comparison tests of the method were carried out. The results showed that, after training, the dragon fruit target detection network achieved an AP0.5 of 98.9%, an AP0.5:0.95 of 72.4% and an F1 score of 0.99 on the test set. Compared with other YOLO network models under the same experimental conditions, the improved YOLOX-Nano network model was more lightweight, and its detection accuracy surpassed that of YOLOv3, YOLOv4 and YOLOv5. The average detection accuracy of the improved YOLOX-Nano target detection network was the highest, reaching 98.9%, which was 26.2 percentage points higher than YOLOv3, 9.8 percentage points higher than YOLOv4-Tiny and 7.9 percentage points higher than YOLOv5-S. Finally, real-time tests were performed on videos with different input resolutions; the improved YOLOX-Nano target detection network had an average detection time of 21.72 ms per image. The size of the network model was only 3.76 MB, which is convenient for deployment on embedded devices. In conclusion, the improved YOLOX-Nano target detection network model not only accurately detected dragon fruit under different lighting and occlusion conditions, but its detection speed and accuracy also met the requirements of dragon fruit harvesting in the natural environment, providing guidance for the design of dragon fruit harvesting robots.
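
    For reference, CBAM, the attention module added to the YOLOX-Nano backbone above, combines a channel attention branch and a spatial attention branch. The sketch below follows the commonly published formulation of CBAM rather than the authors' code; the reduction ratio and spatial kernel size are typical default choices, not values reported in the paper.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal sketch of the convolutional block attention module:
    channel attention (shared MLP over pooled descriptors) followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention: shared MLP applied to average- and max-pooled channel descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: convolution over the channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

# Quick shape check on a hypothetical backbone feature map.
print(CBAM(128)(torch.randn(1, 128, 40, 40)).shape)  # torch.Size([1, 128, 40, 40])
```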

  • Topic--Smart Farming of Field Crops
    FU Hongyu, WANG Wei, LIAO Ao, YUE Yunkai, XU Mingzhi, WANG Ziwei, CHEN Jianfu, SHE Wei, CUI Guoxian
    Smart Agriculture. 2022, 4(4): 74-83. https://doi.org/10.12133/j.smartag.SA202209001

    Ramie is an important fiber crop. Due to the shortage of land resources and the promotion of elite varieties, the genetic variation and diversity of ramie have decreased, increasing the need to investigate and protect the diversity of ramie germplasm resources. Crop phenotype measurement based on UAV remote sensing can provide frequent, rapid, non-destructive and accurate monitoring of different genotypes, supporting the investigation of crop germplasm resources and the screening of distinctive, high-quality varieties. In order to realize efficient comprehensive evaluation of ramie germplasm phenotypes and assist in screening dominant ramie varieties, a method for monitoring and screening ramie germplasm phenotypes was proposed based on UAV remote sensing images. Firstly, the digital surface model (DSM) and orthophoto of the test area were generated from UAV remote sensing images with Pix4Dmapper. Then, key phenotypic parameters of the ramie germplasm resources (plant height, plant number, leaf area index, leaf chlorophyll content and water content) were estimated: a DSM subtraction method was used to extract ramie plant height, a target detection algorithm was applied to extract plant number from the orthophotos, and four machine learning methods were used to estimate the leaf area index (LAI), leaf chlorophyll content (SPAD value) and water content. Finally, based on the extracted remote sensing phenotypic parameters, the genetic diversity of the ramie germplasm was analyzed using variability analysis and principal component analysis. The results showed that: (1) Ramie phenotype estimation based on UAV remote sensing was effective, with a fitting accuracy of 0.93 for plant height and a root mean square error (RMSE) of 5.654 cm; the fitting indexes for SPAD value, water content and LAI were 0.66, 0.79 and 0.74, with RMSEs of 2.03, 2.21 and 0.63, respectively. (2) The remote sensing phenotypes of the ramie germplasm differed significantly, with coefficients of variation of LAI, plant height and plant number reaching 20.82%, 24.61% and 35.48%, respectively. (3) Principal component analysis clustered the remote sensing phenotypes into factor 1 (plant height and LAI) and factor 2 (LAI and SPAD value); factor 1 can be used to evaluate the structural characteristics of ramie germplasm resources, and factor 2 can be used as a screening index for high light-efficiency ramie resources. This study provides a reference for crop germplasm phenotypic monitoring and breeding correlation analysis.
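
    The variability and principal component analysis steps described above can be illustrated with a few lines of Python. The sketch below is purely demonstrative: the trait table is invented, and a real analysis would use the estimated phenotypes of all accessions together with appropriate factor interpretation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical table: rows = germplasm accessions, columns = remote-sensing phenotypes
# (plant height in cm, plant number, LAI, SPAD value, water content in %); values are invented.
X = np.array([
    [175.2, 42, 3.1, 38.5, 71.2],
    [160.4, 55, 2.6, 41.0, 69.8],
    [182.9, 38, 3.4, 36.2, 72.5],
    [158.1, 61, 2.4, 42.3, 68.9],
])

# Coefficient of variation per trait, as used in the variability analysis.
cv = X.std(axis=0, ddof=1) / X.mean(axis=0) * 100
print("CV (%):", np.round(cv, 2))

# Standardize, then extract the first two principal components (candidate 'factors').
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(X_std)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("loadings (components x traits):\n", np.round(pca.components_, 3))
```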