Three-Dimensional Environment Perception Technology for Agricultural Wheeled Robots: A Review

CHEN Ruiyun, TIAN Wenbin, BAO Haibo, LI Duan, XIE Xinhao, ZHENG Yongjun, TAN Yu

Smart Agriculture ›› 2023, Vol. 5 ›› Issue (4): 16-32. DOI: 10.12133/j.smartag.SA202308006

Special Topic: Artificial Intelligence and Robotics for Smart Agriculture

Abstract

[Objective/Significance] As a research focus of future agricultural machinery, agricultural wheeled robots are developing toward intelligence and multi-functionality. Because it captures rich information and remains robust and adaptable in complex environments, 3D environment perception technology has become the foundation and key to intelligent unmanned operation of agricultural wheeled robots, and its maturity directly affects the operation quality and efficiency of unmanned agricultural machinery, wheeled robots included. [Progress] This paper first summarizes the development status of agricultural wheeled robots and agricultural environment perception technologies, analyzing the characteristics and applications of different types of agricultural wheeled robots. It then analyzes the main sensing devices used to realize 3D environment perception on agricultural wheeled robots and their corresponding key technologies, focusing on the research progress of 3D environment perception based on LiDAR, vision sensors, and multi-sensor fusion. [Conclusions/Prospects] Considering the characteristics and practical needs of agricultural operations, the paper points out problems of current 3D environment perception technology in applicability, environmental information processing, and perception performance, and offers three suggestions: improving the agricultural applicability of sensors, developing deep-learning-based agricultural environment perception, and developing intelligent high-speed online multi-sensor information fusion, with a view to providing a reference for the development of 3D environment perception technology for agricultural wheeled robots.

Extended Abstract

[Significance] As the research focus of future agricultural machinery, agricultural wheeled robots are developing in the direction of intelligence and multi-functionality. Advanced environmental perception technologies serve as a crucial foundation and key components to promote intelligent operations of agricultural wheeled robots. However, considering the non-structured and complex environments in agricultural on-field operational processes, the environmental information obtained through conventional 2D perception technologies is limited. Therefore, 3D environmental perception technologies are highlighted as they can provide more dimensional information such as depth, among others, thereby directly enhancing the precision and efficiency of unmanned agricultural machinery operation. This paper aims to provide a detailed analysis and summary of 3D environmental perception technologies, investigate the issues in the development of agricultural environmental perception technologies, and clarify the future key development directions of 3D environmental perception technologies regarding agricultural machinery, especially the agricultural wheeled robot. [Progress] Firstly, an overview of the general status of wheeled robots was introduced, considering their dominant influence in environmental perception technologies. It was concluded that multi-wheel robots, especially four-wheel robots, were more suitable for the agricultural environment due to their favorable adaptability and robustness in various agricultural scenarios. In recent years, multi-wheel agricultural robots have gained widespread adoption and application globally. The further improvement of the universality, operation efficiency, and intelligence of agricultural wheeled robots is determined by the employed perception systems and control systems. Therefore, agricultural wheeled robots equipped with novel 3D environmental perception technologies can obtain high-dimensional environmental information, which is significant for improving the accuracy of decision-making and control. Moreover, it enables them to explore effective ways to address the challenges in intelligent environmental perception technology. Secondly, the recent development status of 3D environmental perception technologies in the agriculture field was briefly reviewed. Meanwhile, sensing equipment and the corresponding key technologies were also introduced. For the wheeled robots reported in the agriculture area, it was noted that the applied technologies of environmental perception, in terms of the primary employed sensor solutions, were divided into three categories: LiDAR, vision sensors, and multi-sensor fusion-based solutions. Multi-line LiDAR had better performance on many tasks when employing point cloud processing algorithms. Compared with LiDAR, depth cameras such as binocular cameras, TOF cameras, and structured light cameras have been comprehensively investigated for their application in agricultural robots. Depth camera-based perception systems have shown superiority in cost and providing abundant point cloud information. This study has investigated and summarized the latest research on 3D environmental perception technologies employed by wheeled robots in agricultural machinery. In the reported application scenarios of agricultural environmental perception, the state-of-the-art 3D environmental perception approaches have mainly focused on obstacle recognition, path recognition, and plant phenotyping. 
3D environmental perception technologies have the potential to enhance the ability of agricultural robot systems to understand and adapt to the complex, unstructured agricultural environment. Furthermore, they can effectively address several challenges that traditional environmental perception technologies have struggled to overcome, such as partial sensor information loss, adverse weather conditions, and poor lighting conditions. Current research results have indicated that multi-sensor fusion-based 3D environmental perception systems outperform single-sensor-based systems. This superiority arises from the amalgamation of advantages from various sensors, which concurrently serve to mitigate individual shortcomings. [Conclusions and Prospects] The potential of 3D environmental perception technology for agricultural wheeled robots was discussed in light of the evolving demands of smart agriculture. Suggestions were made to improve sensor applicability, develop deep learning-based agricultural environmental perception technology, and explore intelligent high-speed online multi-sensor fusion strategies. Currently, the employed sensors in agricultural wheeled robots may not fully meet practical requirements, and the system's cost remains a barrier to widespread deployment of 3D environmental perception technologies in agriculture. Therefore, there is an urgent need to enhance the agricultural applicability of 3D sensors and reduce production costs. Deep learning methods were highlighted as a powerful tool for processing information obtained from 3D environmental perception sensors, improving response speed and accuracy. However, the limited datasets in the agriculture field remain a key issue that needs to be addressed. Additionally, multi-sensor fusion has been recognized for its potential to enhance perception performance in complex and changeable environments. As a result, it is clear that 3D environmental perception technology based on multi-sensor fusion is the future development direction of smart agriculture. To overcome challenges such as slow data processing speed, delayed processed data, and limited memory space for storing data, it is essential to investigate effective fusion schemes to achieve online multi-source information fusion with greater intelligence and speed.


Key words

wheeled robot / 3D environment perception / laser radar / vision sensors / multi-sensor fusion / autonomous navigation

Cite this article

CHEN Ruiyun , TIAN Wenbin , BAO Haibo , LI Duan , XIE Xinhao , ZHENG Yongjun , TAN Yu. Three-Dimensional Environment Perception Technology for Agricultural Wheeled Robots: A Review. Smart Agriculture. 2023, 5(4): 16-32 https://doi.org/10.12133/j.smartag.SA202308006

1 Introduction

At present, more than half of the orchard plant-protection sprayers in China have no cab, leaving operators fully exposed to the spraying environment[1, 2], while remote-controlled orchard sprayers have a limited control range and do not fully separate operator from machine, posing safety hazards[3]. Plant-protection machines with autonomous navigation isolate the operator completely, ensuring personnel safety during operation, and will take a large share of the plant-protection equipment market[4]. Among them, autonomous navigation solutions based on the Global Navigation Satellite System (GNSS) have already been widely applied in open-field environments[5].
Modern orchards adopt a wide-row, dense planting pattern. Compared with traditional orchards they feature wide row spacing and narrow plant spacing, with the branches and leaves of adjacent trees interlocking into a "tree wall" structure, so the navigation scene is relatively simple[1, 5, 6]. Orchard autonomous navigation currently relies mainly on visual and laser navigation[7, 8]. Visual navigation captures images and video of the orchard with cameras, extracts the navigation line from features such as trunk color, texture, and shape, and calibrates the camera's intrinsic and extrinsic parameters to determine the robot's position relative to the fruit trees, finally realizing autonomous navigation control[9, 10]. Although vision sensors are cheap and information-rich, their imaging quality is easily disturbed by lighting, they cannot directly measure depth, and they cannot work around the clock[11].
Laser navigation uses LiDAR (light detection and ranging) to scan the orchard environment in real time and localizes the robot relative to the detected positions of the fruit trees. Lasers provide active illumination and high ranging accuracy, work around the clock, and adapt well to the environment[12]. Santos et al.[13] developed VineSlam, a localization and mapping method based on simultaneous localization and mapping (SLAM) that does not depend on GNSS positioning, together with "AgRobPP", a path planner dedicated to vineyards; the method maps and plans well and can control a robot to perform multiple tasks. SLAM based on 3D LiDAR senses the surroundings comprehensively and makes robot movement safer, but it also increases the computational burden and demands more computing performance. Saike et al.[14] flattened 3D LiDAR data to 2D and achieved autonomous navigation inside greenhouses. Liu et al.[15] used 3D LiDAR for navigation-line extraction and autonomous navigation but did not process canopy features. These studies achieved autonomous navigation with 2D LiDAR or with 3D LiDAR data reduced to 2D, but did not obtain complete fruit-tree feature information.
In orchard precision variable-rate spraying, the above navigation sensors can also collect fruit-tree canopy features (canopy volume, leaf area index, etc.). LiDAR in particular can serve autonomous navigation while simultaneously acquiring tree features for precision variable-rate spraying[16-20], where it is the most widely used sensor[21]. Li et al.[22] and Dou et al.[23] used vertically mounted 2D LiDAR to obtain tree features and realized profiling variable-rate spraying and automatic target spraying, respectively (spraying where canopy is present and withholding spray where it is absent is one form of precision variable-rate spraying[23, 24]), reducing pesticide drift and saving spray liquid. When detecting tree information, a vertically mounted LiDAR captures as much of the tree's outline as possible, but the direction of travel lies in its detection blind zone, so it cannot support autonomous navigation[18]. Compared with 2D LiDAR, a horizontally mounted 3D LiDAR detects relatively complete information about the surroundings[16].
The above studies show that LiDAR enables all-weather data collection and that 3D LiDAR can capture three-dimensional information fore-and-aft and (within a certain angle) above and below the sensor, yet none of them used a single 3D LiDAR to perform both autonomous navigation and automatic target spraying. This study therefore uses a horizontally mounted 3D LiDAR to acquire fruit-tree point clouds, extracts a suitable region of interest (ROI), and applies cropping, filtering, Euclidean clustering, and 2D projection to the points in the ROI to obtain tree centroid positions. The Random Sample Consensus (RANSAC) algorithm then fits the tree-row lines on both sides of the body, from which the navigation line is derived, and the body is controlled to travel along it. Meanwhile, the presence or absence of canopy in the upper, middle, and lower zones of each tree is obtained by logical computation over the zoned canopy information of the same tree seen in seven different ROIs, and automatic target spraying is performed accordingly. Finally, the robot's navigation and spraying performance are tested and the data plotted, providing a reference for research on intelligent orchard plant-protection machinery.

2 Materials and Methods

2.1 Hardware design

Fig. 1 shows the hardware of the robot system, comprising sensor, control, drive, target-actuation, and power modules; green lines denote power supply and blue lines information flow. The sensor module mainly consists of an encoder, an inertial measurement unit (IMU), a LiDAR, and an RTK GNSS receiver. Because the LiDAR serves both autonomous navigation and canopy-feature detection, its selection is critical. This study uses a 16-beam mechanical 3D LiDAR (RoboSense, Shenzhen, China) with a 360° horizontal and 30° vertical field of view, a 150 m detection range, ±2 cm ranging accuracy, 2° vertical angular resolution, and 0.18° horizontal angular resolution at 10 Hz; it accepts DC 9-32 V, communicates with the industrial PC over 100 Mbit Ethernet, and outputs 320,000 points per second.
Fig. 1 Hardware design of the robot

An E6B2-CWZ6C encoder (resolution 1,000 p/r, pulses per revolution) provides speed information and is coaxially connected through a shaft to the robot's driving wheel (24 cm diameter). Because the robot drifts, tilts, and slips while driving, an ICM-20948 IMU supplies more accurate speed and body-state data. The IMU has a built-in Kalman filter and outputs stable, precise data: static accuracy 0.05°/s, dynamic accuracy 0.1°/s in the X and Y directions, Z-axis accuracy 1°/s without magnetic interference, acceleration accuracy 0.02 g, gyroscope accuracy 0.06°/s, and a maximum output rate of 200 Hz (100 Hz used here). To verify navigation performance, the trajectory measurement system must be sufficiently accurate, so a "P3-DU" RTK GNSS positioning system compatible with the six major global satellite constellations was adopted, with ±1 cm horizontal positioning accuracy, a maximum data output rate of 20 Hz (20 Hz used), and a DC 9-36 V supply range.
Handling 320,000 points per second requires a powerful central processing unit (CPU), and the harsh orchard operating environment demands an adequate protection rating[13]. A domestic industrial PC was selected, with an i7-10510U processor, 16 GB RAM, and a 1 TB solid-state drive, preinstalled with Ubuntu 18.04 Linux, offering RS232, Ethernet, USB, and RS485 interfaces and a DC 9-36 V supply range. A microcontroller drives the body's motion and the target-spraying actuators while reading the encoder: an M3S-type STM32 board with an STM32F103ZET6 chip (Cortex-M3 core, 144 pins, 512 KB flash, 72 MHz main frequency), providing CAN, USB, and RS232 interfaces.
Two 800 W DC brushless servo motors (SDGA08C11AB, 48 V) power the robot, each with a 1:30 gearbox (60TDF-147050-L2H, Jiaxing, China) and an SDGA-21A servo driver controlling motor speed; the drivers receive angular-velocity commands from the microcontroller over the CAN bus (500 kbps, Intel frame format). Unloaded, the robot reaches 1.5 m/s and climbs slopes of up to 30°; different speeds of the left and right driving wheels give differential steering, including 360° turns on the spot. The target-spraying actuators are mounted on both sides of the axial-flow fan outlet and open or close individual nozzles according to canopy presence: 2W150-15 solenoid valves (24 V, ADEC, Ningbo, China) switch each liquid line, an N-channel MOSFET (metal-oxide-semiconductor field-effect transistor) switches each valve's power supply, and the MOSFET gates are driven by high/low levels on the microcontroller's I/O (input/output) pins.
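As a minimal sketch of this valve-switching scheme (assuming the STM32 HAL library for the F1 series; the port and pin below are hypothetical assignments for one MOSFET gate), a single spray line could be toggled as follows:

```cpp
// Minimal sketch, assuming STM32 HAL; VALVE_PORT/VALVE_PIN are hypothetical
// assignments for the I/O pin wired to one MOSFET gate.
#include "stm32f1xx_hal.h"

#define VALVE_PORT GPIOB
#define VALVE_PIN  GPIO_PIN_0

// Open or close one spray line according to canopy presence.
void set_valve(uint8_t canopy_present)
{
    HAL_GPIO_WritePin(VALVE_PORT, VALVE_PIN,
                      canopy_present ? GPIO_PIN_SET : GPIO_PIN_RESET);
}
```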
Besides these functional modules, a power module supplies the system: a 48 V lithium iron phosphate battery (SK-48V100Ah, capacity 100 Ah, about 55.2 V fully charged), plus a bank of voltage regulators providing stable 5, 12, 24, and 48 V rails for the different modules.
A 26A plunger pump pressurizes the spray circuit, and a pressure valve regulates and stabilizes the system pressure during spraying. The pump is driven by a V-belt on the output shaft of a 170F gasoline engine; the fan at the rear of the machine is shaft-connected to a pulley at the front that the same engine drives. Liquid from the tank passes through the pump into a three-way distributor whose branches lead 3, 4, and 3 lines to the nozzles, with a solenoid valve between each nozzle and its line.

2.2 Workflow

The robot realizes autonomous navigation and automatic target spraying through the workflow in Fig. 2; the individual steps are detailed in the following sections.
Fig. 2 Implementation flow chart of autonomous navigation and automatic target spraying

2.3 Implementation of the navigation function

2.3.1 Coordinate determination

The autonomous navigation and automatic target-spraying robot of this study is shown in Fig. 3.
Fig. 3 Physical picture of the autonomous navigation and automatic target spraying robot
Note: 1. Rubber track; 2. Driving wheel; 3. RTK GNSS mobile station; 4. Gasoline engine; 5. LiDAR; 6. Plunger pump; 7. Pesticide tank; 8. Nozzle; 9. Axial-flow fan

From the robot's basic parameters and kinematic theory, the Cartesian coordinates and yaw angle (x, y, θ) of the body in the world frame at the next instant are given by Equation (1):
$$\begin{bmatrix} x_{w1} \\ y_{w1} \\ \theta_{w1} \end{bmatrix} = \begin{bmatrix} x_{w0} \\ y_{w0} \\ \theta_{w0} \end{bmatrix} + \begin{bmatrix} r\cos\theta_{w0} \\ r\sin\theta_{w0} \\ 1 \end{bmatrix} \omega \, \Delta t$$
(1)
where x_w0 and y_w0 are the current Cartesian coordinates; θ_w0 is the yaw angle, (°); r is the driving-wheel radius, 12 cm; ω is the angular velocity of the robot's motion, rad/s; and Δt is the duration of one LiDAR frame, 0.1 s.
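A minimal sketch of this dead-reckoning update (names illustrative; r and Δt as given above):

```cpp
// Minimal sketch of the pose update in Equation (1).
#include <cmath>

struct Pose { double x, y, theta; };  // world-frame position (m) and yaw (rad)

// One LiDAR-frame update: r = driving-wheel radius (m), dt = frame period (s).
Pose updatePose(const Pose& p, double omega /* rad/s */,
                double r = 0.12, double dt = 0.1)
{
    return { p.x + std::cos(p.theta) * r * omega * dt,
             p.y + std::sin(p.theta) * r * omega * dt,
             p.theta + omega * dt };
}
```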

2.3.2 ROI extraction

As shown in Fig. 4, the LiDAR is mounted horizontally at the front of the robot at a height of 1.3 m. With a maximum range of 150 m it can in theory sense a circular orchard area of about 7 hm², and at 10 Hz each frame contains 32,000 points. The robot only needs the points within a certain range fore and aft of the body; excess points cause data redundancy and degrade the real-time performance of the system, so they are cropped away to extract a suitable ROI.
Fig. 4 Measurement range of LiDAR

Localizing the robot and navigating in an orchard only requires a few trees fore and aft in the rows on both sides of the body[15], but the ROI in this study must capture the complete outline features of the fruit trees in addition to providing localization information. Fig. 4 shows the 3D LiDAR's view of one tree row during operation: for trees near the LiDAR only part of the canopy is detected, while farther trees are detected completely. The red line denotes the laser beam; the black line is the perpendicular distance from the LiDAR to the tree row (half the row spacing, D/2, with row spacing 4 m); the blue line is the straight-line distance d from the LiDAR to the target tree. The blue line, the red line, and the tree (average height H = 4.05 m) form a right triangle. The target tree is the nearest one whose full canopy can be detected, i.e., the n-th tree ahead of the body (plant spacing R = 1.5 m). With LiDAR mounting height h, the geometry of Fig. 4 gives Equation (2); substituting the values yields n = 7.
$$d = \sqrt{n^2 R^2 + \frac{D^2}{4}}, \qquad H - h \le d \tan 15^{\circ}$$
(2)
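Substituting the stated values shows how n = 7 arises:

$$H - h = 4.05 - 1.3 = 2.75\ \text{m} \;\Rightarrow\; d \ge \frac{2.75}{\tan 15^{\circ}} \approx 10.26\ \text{m}$$

$$n^2 \times 1.5^2 + \frac{4^2}{4} \ge 10.26^2 \;\Rightarrow\; n^2 \ge \frac{105.3 - 4}{2.25} \approx 45 \;\Rightarrow\; n \ge 6.7$$

so the smallest integer satisfying the geometry is n = 7.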
Accordingly, the ROI is set to X ∈ [-3.0 m, 10.5 m], Y ∈ [-2.5 m, 2.5 m], and Z ∈ [-1.9 m, 4.8 m], with the direction of travel along the positive X-axis of the world frame. A pass-through filter from the Point Cloud Library (PCL) crops the cloud along the x, y, and z dimensions to obtain the final ROI. The pass-through filter is simple and efficient: for the specified dimension it traverses every point, checks whether its value lies within the given interval, and removes points outside it.
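A minimal sketch of this cropping step (assuming PCL; the limits are those stated above):

```cpp
// Minimal sketch (assuming PCL): crop the raw cloud to the ROI by chaining
// pass-through filters over the x, y, and z fields.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/passthrough.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

Cloud::Ptr cropROI(const Cloud::Ptr& in)
{
    pcl::PassThrough<pcl::PointXYZ> pass;
    Cloud::Ptr x(new Cloud), xy(new Cloud), roi(new Cloud);

    pass.setInputCloud(in);
    pass.setFilterFieldName("x");        // forward direction = +X
    pass.setFilterLimits(-3.0f, 10.5f);
    pass.filter(*x);

    pass.setInputCloud(x);
    pass.setFilterFieldName("y");
    pass.setFilterLimits(-2.5f, 2.5f);
    pass.filter(*xy);

    pass.setInputCloud(xy);
    pass.setFilterFieldName("z");
    pass.setFilterLimits(-1.9f, 4.8f);
    pass.filter(*roi);
    return roi;
}
```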

2.3.3 Fruit-tree localization and navigation within the ROI

A PCL voxel-grid downsampling function with a voxel size of (0.05 m, 0.05 m, 0.05 m) further reduces the number of points to speed up computation; the downsampled cloud is stored, and the robot's world coordinates within that ROI are kept in the same memory for fast access.
PCL's statistical filter removes noise, and PCL's Euclidean clustering algorithm groups the points into clusters P_i, with a minimum distance threshold of h/3 and minimum and maximum cluster sizes of 10 and 5,500 points. Each cluster is projected onto the XOY plane, and the centroid (X_i, Y_i, 0) of the projected points is taken as the tree position. Based on the IMU, encoder, LiDAR, and tree positions, the ROI is updated each time the robot reaches the midpoint between adjacent trees. Tree rows are fitted with the RANSAC algorithm; navigation-line extraction and tracking follow reference [15], and the resulting angular-velocity command is finally passed to the lower-level controller.
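A minimal sketch of this per-ROI pipeline (assuming PCL; the statistical-filter parameters MeanK and StddevMulThresh are assumptions, as the text does not state them):

```cpp
// Minimal sketch (assuming PCL): voxel downsampling, statistical outlier
// removal, Euclidean clustering, then projecting each cluster to the XOY
// plane to obtain a tree centroid (Xi, Yi).
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/search/kdtree.h>
#include <pcl/common/centroid.h>
#include <vector>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

std::vector<Eigen::Vector2f> treeCentroids(const Cloud::Ptr& roi,
                                           double h /* LiDAR height, m */)
{
    Cloud::Ptr down(new Cloud), clean(new Cloud);

    pcl::VoxelGrid<pcl::PointXYZ> vg;                   // 5 cm voxels
    vg.setInputCloud(roi);
    vg.setLeafSize(0.05f, 0.05f, 0.05f);
    vg.filter(*down);

    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;  // noise removal
    sor.setInputCloud(down);
    sor.setMeanK(50);                                   // assumed neighborhood size
    sor.setStddevMulThresh(1.0);                        // assumed threshold
    sor.filter(*clean);

    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(clean);

    std::vector<pcl::PointIndices> clusters;
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(h / 3.0);                    // minimum distance threshold h/3
    ec.setMinClusterSize(10);
    ec.setMaxClusterSize(5500);
    ec.setSearchMethod(tree);
    ec.setInputCloud(clean);
    ec.extract(clusters);

    std::vector<Eigen::Vector2f> centroids;             // (Xi, Yi): projection onto XOY
    for (const auto& c : clusters) {
        Eigen::Vector4f cen;
        pcl::compute3DCentroid(*clean, c, cen);
        centroids.emplace_back(cen[0], cen[1]);         // drop Z
    }
    return centroids;
}
```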

2.4 Software system design

To improve development efficiency and avoid duplicated work, the software was developed mainly on the popular Robot Operating System (ROS) platform, with C/C++ as the main language. Packages for data acquisition and processing, tree-row recognition, navigation-line extraction, motion control, zoned canopy-presence logic, and target-spraying decisions were developed on ROS Melodic under Ubuntu 18.04, as shown in Fig. 5. The system comprises an application layer, a control layer, a driver layer, and an automatic target-spraying layer, of which the control and target-spraying layers are the most important.
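A minimal sketch of how these packages could be wired together on ROS Melodic (topic names are illustrative; the point-cloud processing chain is elided):

```cpp
// Minimal sketch (assuming ROS Melodic): subscribe to the LiDAR cloud, run
// the processing chain, publish the angular-velocity command that the
// microcontroller executes. Topic names are illustrative only.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <geometry_msgs/Twist.h>

ros::Publisher cmd_pub;

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
    // ... ROI cropping, clustering, RANSAC row fitting, line tracking ...
    geometry_msgs::Twist cmd;
    cmd.angular.z = 0.0;  // placeholder: angular velocity from the tracker
    cmd_pub.publish(cmd);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "navigation_node");
    ros::NodeHandle nh;
    cmd_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);
    ros::Subscriber sub = nh.subscribe("points_raw", 1, cloudCallback);
    ros::spin();  // LiDAR frames arrive at 10 Hz
    return 0;
}
```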
Fig. 5 Software composition of the automatic target spraying system

2.5 Implementation of automatic target spraying

2.5.1 Determining canopy presence in each tree zone

Determining whether canopy is present in each zone of a tree is the key to automatic target spraying. During normal operation the ROI contains nine trees. The LiDAR is 1.5 m from the nozzles, and the system needs some reaction time between data collection and spraying, so the point clouds of the four trees behind the LiDAR are discarded. The roll and yaw angles from the IMU are used to correct the canopy point clouds of the remaining trees in the ROI and to recompute each tree's center (trunk) position. The distance D_s from a canopy point to the LiDAR is then compared with the distance D_g from the trunk to the LiDAR: if D_s is greater than D_g the location is a gap (no canopy), otherwise canopy is present. Meanwhile, the trees on both sides of the 10.5 m long ROI are divided into 35 sub-zones 30 cm long, and, matching the ROI update frequency, the canopy is divided vertically into 7 layers, as shown in Fig. 6.
Fig. 6 Diagram of fruit tree canopy zoning

The LiDAR dynamically scans the trees on both sides while the distance traveled between trees is fused into the point cloud, partitioning the ROI fairly precisely. When the body reaches the midpoint between adjacent trees, the ROI is updated and the body position corrected, eliminating the accumulated error of the IMU and encoder trajectory prediction.
Owing to the limited vertical field of view of the horizontally mounted 3D LiDAR, only the 7th tree ahead of the LiDAR is captured completely within an ROI. The dash-dotted box in Fig. 7 marks the canopy detectable in the first ROI; after the first ROI update the canopy in the solid box becomes detectable, after the second update the canopy in the dashed box, and so on, until after the seventh update the body reaches the tree itself. When the LiDAR is far from a tree, the measurement is easily affected by the body state (tilting, etc.), so the canopy-presence information in the nearer black box is used to correct that in the dash-dotted box, and the still nearer dashed box corrects the black box. This correction is realized by the logical computation in Equation (3).
$$\left[ n_1(k)\ \mathrm{or}\ n_2(k)\ \mathrm{or}\ \cdots\ \mathrm{or}\ n_{k-1}(k) \right]\ \mathrm{and}\ n_k(k), \qquad 3 \le k \le 7$$
(3)
where n_k denotes the k-th ROI; k is the k-th canopy zone counted from the top; "or" is logical OR and "and" is logical AND. When k ≤ 2, the value of n_k(k) is used directly.
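A minimal sketch of this correction logic (array layout illustrative; the presence flags come from the D_s > D_g gap test above):

```cpp
// Minimal sketch of Equation (3). n[i][k] is the canopy-presence flag for
// vertical zone k of a tree as observed in ROI i (1-indexed to match the text).
#include <array>

bool zonePresent(const std::array<std::array<bool, 8>, 8>& n, int k)
{
    if (k <= 2) return n[k][k];   // too few earlier ROIs: use n_k(k) directly
    bool earlier = false;         // n_1(k) or n_2(k) or ... or n_{k-1}(k)
    for (int i = 1; i < k; ++i)
        earlier = earlier || n[i][k];
    return earlier && n[k][k];    // ... and n_k(k)
}
```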
Fig. 7 Zoning of fruit trees
Note: 1. Canopy range detectable by the LiDAR before any ROI update; 2. Canopy range detectable after the first ROI update; 3. Canopy range detectable after the second ROI update

A tree is thus divided into 7 slanted zones: the top two form the upper layer, the bottom two the lower layer, and the remaining three the middle layer.

2.5.2 Adjustment of nozzles and deflectors

As shown in Fig. 3, there are 10 nozzles, five per side, positioned and oriented to spray toward the canopy: one nozzle serves the upper layer of a tree and two each serve the middle and lower layers. Glass-fiber strings were fixed to the deflectors; with the fan running, the direction in which the strings float shows whether the airflow is aligned with the canopy, and the deflectors were adjusted until alignment was achieved.

2.6 Data plotting

OriginPro 2020 was used to plot the experimental results.

3 Experimental Design

To verify performance, autonomous navigation and spraying tests were carried out on October 11, 2021 in a modern Fojianxi pear orchard in Xiying Village, Yukou Town, Pinggu District, Beijing. The weather was clear, with temperatures of 17.2-18.5 °C and wind speeds of 0.8-1.3 m/s. Fig. 8 is a schematic of the tests: a weather station 10 m from the base station recorded meteorological conditions, and the RTK GNSS base station stood on open ground 15.5 m from the test area.
Fig. 8 Test schematic of autonomous navigation and automatic target spraying

3.1 Navigation test

Before the tests, the RTK GNSS mobile station surveyed the coordinates of the first and last trees of the rows on both sides of the test area, giving the tree-row equations and from them the navigation-line equation. The robot was placed between the tree rows at the headland; the power and gasoline engine were started, then the automatic target-spraying program, and the robot drove into the test area along the black arrow in Fig. 8. The test area begins 10.5 m from the headland and is 100 m long and 4 m wide; the robot decelerated to a stop after traveling 2 m beyond the test area, and the test was repeated three times. A trajectory to the right of the navigation line is defined as positive lateral deviation, and a heading angle inclined to the right of the navigation line as positive.
During a test, the RTK GNSS mobile station on the robot recorded the body's position to obtain its trajectory; the perpendicular distance from the trajectory point to the navigation line at each moment is the lateral deviation of autonomous navigation. With the navigation line as the reference, the heading deviation at a given moment is the angle between the trajectory and the navigation line.
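Writing the fitted navigation line as ax + by + c = 0, the lateral deviation of a trajectory point (x_0, y_0) is the standard perpendicular point-to-line distance:

$$e = \frac{\left| a x_0 + b y_0 + c \right|}{\sqrt{a^2 + b^2}}$$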

3.2 Spray comparison test

To verify spraying performance, three trees in the test area were selected as test plants, as shown in Fig. 8. The sampling layout is shown in Fig. 9. Each test tree was divided into upper, middle, and lower layers at 3.2, 2.4, and 1.6 m above the ground (average trunk height 1.15 m), and in each layer five filter papers 7 cm in diameter (held with double-ended clips) were placed at the east, west, south, north, and center positions to collect droplets. To measure ground loss during application, three 7 cm filter papers (0.5 m apart) were laid at the base of each test tree and at positions 0.75 m to its left and right. A 5 m vertical pole was erected 1.5 m from each test trunk, and nine rectangular 400-mesh metal screens (2.5 cm × 7.5 cm) were clipped vertically at heights of 0.2, 0.8, 1.4, ..., 5.0 m (0.6 m spacing) to collect pesticide drift; grouped in threes, the screens form upper, middle, and lower layers.
Fig. 9 Spray test sample layout

A 3.0 g/L tartrazine solution served as the tracer; before the tests a sample of the stock solution was taken from the tank and kept in a dark box. Automatic target spraying and traditional continuous spraying (hereafter, traditional spraying) used the same machine, the only difference being whether the automatic target-spraying program was enabled. In each spray test the machine traveled 100 m, and the samples were collected into No. 6 zip-lock bags.
Deionized water was used as the eluent: 50 mL was added to each zip-lock bag, which was sealed and shaken on an NMY-100A horizontal shaker to fully dissolve the deposited droplets. A 100 μL pipette transferred the solution into a cuvette, and the absorbance was read in a 722s visible spectrophotometer at 426 nm. The sample deposition V_s was obtained by the method of reference [25], and the deposition per unit area was derived from the filter-paper area. The two spray modes drew on the same tank but applied different volumes, so to compare them the in-canopy deposition was normalized following references [25, 26], as in Equation (4).
$$d_g = \frac{V_s \times 10^2}{V_z \times S}$$
(4)
where d_g is the normalized deposition per unit area, μL/cm²; V_s is the sample deposition, μL; V_z is the application volume, L/hm²; and S is the sample area, cm².
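As a purely illustrative check of Equation (4) (the deposition value below is hypothetical; the 404.5 L/hm² application volume and 7 cm filter paper are from the text), a sample of V_s = 2 μL on S = π × 3.5² ≈ 38.5 cm² would give

$$d_g = \frac{2 \times 10^2}{404.5 \times 38.5} \approx 0.013\ \mu\text{L/cm}^2$$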

4 Results and Analysis

4.1 Autonomous navigation results and analysis

Fig. 10 shows violin-box plots of the robot's lateral deviation during autonomous navigation, depicting its distribution and probability density. Fig. 10(a) shows that the means and medians of all three runs are negative, i.e., the body traveled to the left of the navigation line most of the time; the mean and median differ noticeably in run 2 but not in runs 1 and 3. Fig. 10(b) shows that the absolute deviations are mostly below 15 cm and often below 10 cm, with means and medians below 10 cm, indicating high navigation accuracy relative to the 4 m row spacing; the means and medians of runs 2 and 3 are similar while run 1 differs noticeably, suggesting that run 1 spent the largest share of time to the left of the navigation line, followed by run 3 and then run 2. The maximum absolute lateral deviation over the three runs was 21.8 cm.
Fig. 10 Test results of lateral deviation of the robot

Fig. 11 shows box plots of the absolute heading deviation in the three runs; the maximum value is 4.02°. The mean heading deviation decreased from run to run, possibly because the robot gradually flattened the orchard path in the test area, so the initially rough ground became smoother, the body tilted less, and the heading deviation became smaller and more stable. These results indicate that the autonomous navigation performance meets the basic requirements.
Fig. 11 Heading angle deviation test results of the robot

4.2 Spray comparison results and analysis

The liquid consumption of the two spray modes over the test area was measured: traditional spraying and automatic target spraying consumed 20.24 and 16.18 L respectively, equivalent to application volumes of 506 and 404.5 L/hm². Compared with traditional spraying, the automatic target-spraying technique designed in this study saves 20.06% of the pesticide volume.
Fig. 12 visualizes the actual droplet deposition inside the canopy, the share of each sampling point, and the normalized data. Figs. 12(a) and 12(b) show that the per-point deposition shares of the two modes are nearly identical: the total deposition of automatic target spraying is only 10.19% lower than that of traditional spraying, yet after normalization it is 12.38% higher. Thus the liquid saved by target spraying slightly reduces the absolute deposition, but its spraying effectiveness is better, because traditional spraying also sprays at canopy gaps and wastes considerable pesticide. In theory the two modes should deposit equally inside the canopy; the somewhat lower measured deposition of target spraying may stem from delay functions in the control program and occasional misjudgment of canopy presence.
Fig. 12 Comparison of deposition test results between automatic target spraying and traditional spraying
Note: ATS: automatic target spraying; TS: traditional continuous spraying

Fig. 13 shows the actual airborne drift and its share in the upper, middle, and lower layers for the two modes. For both, the middle and lower layers account for the largest shares of drift (47% and 42% versus 41% and 49%), while the upper layer accounts for only 11% and 10%. For traditional spraying, drift decreases as collector height increases: the spray distance to the middle and upper canopy is long, large droplets cannot reach it, and the droplets that do arrive penetrate weakly, depositing inside the canopy and on the ground rather than drifting onto the screens, so the upper layer's share is smallest. Automatic target spraying shows a larger middle-layer share, possibly because of the adjusted nozzle and deflector angles. Compared with traditional spraying, automatic target spraying reduces the average airborne drift by 38.36%, greatly lowering the risk of pesticide drift.
Fig. 13 Airborne drift of automatic target spraying and traditional spraying

Fig. 14 shows the ground loss and its share at each sampling point, where point B is the base of the test tree and points A and C lie to its left and right. Ground loss under automatic target spraying is far lower than under traditional spraying, a reduction of 51.36%, which greatly decreases soil contamination by pesticide. Whereas the two modes show similar drift-share patterns, their ground-loss shares are reversed: under automatic target spraying point B has the largest share at 43% and points A and C are nearly equal, while under traditional spraying point B has the smallest share at 25% and points A and C are larger and nearly equal. Points A and C lie in the canopy gaps between the test tree and its neighbors: traditional spraying sprays into these gaps regardless of the target, causing heavy ground loss there, while target spraying withholds spray at gaps and sprays the canopy normally, which probably explains why the tree base accounts for the largest share of its ground loss. Overall, compared with the traditional sprayer, the robot reduces airborne pesticide drift by 38.68% and ground loss by 51.40%, lowering environmental pollution during application.
Fig. 14 Ground loss of automatic target spraying and traditional spraying

5 Conclusions

This study used a single 3D LiDAR together with an encoder and an IMU to realize autonomous navigation and automatic target spraying for an orchard robot. Navigation and spraying performance tests were designed, and the results show that the robot fully meets the requirements of orchard autonomous navigation and automatic target spraying. The main conclusions are as follows.
(1) During autonomous navigation the robot's maximum lateral deviation was 21.8 cm and its maximum heading deviation 4.02°, fully meeting basic orchard navigation requirements.
(2) Compared with a traditional sprayer, the robot reduces the applied liquid volume by 20.06%; although the actual in-canopy deposition fell slightly, the normalized deposition rose by 12.38%, indicating better spraying performance, while airborne drift fell by 38.68% and ground loss by 51.40%, reducing the environmental pollution caused by spraying equipment.
(3) The robot and the traditional sprayer show no clear difference in the distribution of airborne pesticide drift, but their ground-loss distributions are opposite: for the robot the largest share, 43%, occurs at the base of the test tree, while the gaps between the test tree and its left and right neighbors account for smaller shares of 29% and 28%; the traditional sprayer shows the reverse pattern.

References

1
Central Committee of the Communist Party of China, State Council. Opinions on key work for comprehensively promoting rural revitalization in 2022[EB/OL]. (2022-02-22)[2023-07-29]
2
LUO X W, LIAO J, HU L, et al. Research progress of intelligent agricultural machinery and practice of unmanned farm in China[J]. Journal of South China agricultural university, 2021, 42(6): 8-17, 5.
3
LI D L, LI Z. System analysis and development prospect of unmanned farming[J]. Transactions of the Chinese society for agricultural machinery, 2020, 51(7): 1-12.
4
OUYANG A, CUI T, LIN L. Development status and countermeasures of intelligent agricultural machinery equipment industry[J]. Science & technology review, 2022, 40(11): 55-66.
5
LIU C L, GONG L, YUAN J, et al. Current status and development trends of agricultural robots[J]. Transactions of the Chinese society for agricultural machinery, 2022, 53(7): 1-22, 55.
6
HAJJAJ S S H, SAHARI K S M. Review of research in the area of agriculture mobile robots[M]// The 8th international conference on robotic, vision, signal processing & power applications. Singapore: Springer Singapore, 2014: 107-117.
7
TAO Y, WANG T M, LIU H, et al. Insights and suggestions on the current situation and development trend of intelligent robots[J]. Chinese high technology letters, 2019, 29(2): 149-163.
8
TAN M, WANG S. Research progress on robotics[J]. Acta automatica Sinica, 2013, 39(7): 963-972.
9
HU B, XING J P, WANG Y. Research on key technologies of wheeled robot[C]// Proceedings of the 3rd Annual International Conference on Mechanics and Mechanical Engineering (MME 2016). Paris, France: Atlantis Press, 2017: 664-668.
10
JIN Y C, LIU J Z, XU Z J, et al. Development status and trend of agricultural robot technology[J]. International journal of agricultural and biological engineering, 2021, 14(3): 1-19.
11
RAMIN SHAMSHIRI R, WELTZIEN C, HAMEED I A, et al. Research and development in agricultural robotics: A perspective of digital farming[J]. International journal of agricultural and biological engineering, 2018, 11(4): 1-11.
12
CHENG C, FU J, SU H, et al. Recent advancements in agriculture robots: Benefits and challenges[J]. Machines, 2023, 11(1): ID 48.
13
BROWN H B, XU Y S. A single wheel, gyroscopically stabilized robot[J]. IEEE robotics & automation magazine, 1997, 4(3): 39-44.
14
LIU Y B. Research summarization of riderless bicycles[J]. Machine design & research, 2007, 23(5): 113-115.
15
GAO X Y, LI J H, FAN L F, et al. Review of wheeled mobile robots' navigation problems and application prospects in agriculture[J]. IEEE access, 2018, 6: 49248-49268.
16
CNH Industrial. Newsroom[EB/OL]. [2023-08-29]
17
WANG R J, SUN B Y. Development status and expectation of agricultural robot[J]. Bulletin of Chinese academy of sciences, 2015, 30(6): 803-809.
18
LIN H, XU L Y. The development and prospect of agricultural robots in China[J]. Acta agriculturae Zhejiangensis, 2015, 27(5): 865-871.
19
WANG J R. Research on key technologies of multi-sensor 3D environment perception system for autonomous driving[D]. Beijing: University of Chinese Academy of Sciences, 2020.
20
WANG S F, DAI X, XU N, et al. Overview on environment perception technology for unmanned ground vehicle[J]. Journal of Changchun university of science and technology (natural science edition), 2017, 40(1): 1-6.
21
BECHAR A, VIGNEAULT C. Agricultural robots for field operations. Part 2: Operations and systems[J]. Biosystems engineering, 2017, 153: 110-128.
22
BECHAR A, VIGNEAULT C. Agricultural robots for field operations: Concepts and components[J]. Biosystems engineering, 2016, 149: 94-111.
23
HUAMANCHAHUA D, YALLI-VILLA D, BELLO-MERLO A, et al. Ground robots for inspection and monitoring: A state-of-the-art review[C]// 2021 IEEE 12th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON). Piscataway, New Jersey, USA: IEEE, 2021: 768-774.
24
WANG B L, SHEN W L, ZHANG B Y. Development status and trend of environmental perception technology for unmanned agricultural vehicles[J]. Journal of Chinese agricultural mechanization, 2021, 42(11): 214-221.
25
XIE D B, CHEN L A, LIU L C, et al. Actuators and sensors for application in agricultural robots: A review[J]. Machines, 2022, 10(10): ID 913.
26
LIU B, ZHANG J, LU M, et al. Research progress of laser radar applications[J]. Laser & infrared, 2015, 45(2): 117-122.
27
WANG R S, PEETHAMBARAN J, CHEN D. LiDAR point clouds to 3-D urban models: A review[J]. IEEE journal of selected topics in applied earth observations and remote sensing, 2018, 11(2): 606-627.
28
CHEN X D, ZHANG J C, PANG W S, et al. Key technology and application algorithm of intelligent driving vehicle LiDAR[J]. Opto-electronic engineering, 2019, 46(7): 34-46.
29
YU Y J. The main technical branches and development trend of vehicle LiDAR[J]. E-science technology & application, 2018, 9(6): 16-24.
30
ZHAO T. Development of automatic navigation method for combine harvester based on laser scanner[D]. Yangling: Northwest A & F University, 2017.
31
HE Y, JIANG H, FANG H, et al. Research progress of intelligent obstacle detection methods of vehicles and their application on agriculture[J]. Transactions of the Chinese society of agricultural engineering, 2018, 34(9): 21-32.
32
YAO R T. Research on environment perception and tracking control of forest mobile robots[D]. Beijing: Beijing Forestry University, 2021.
33
SUN Y F, SUN J T, ZHAO R, et al. Design and system performance analysis of fruit picking robot[J]. Transactions of the Chinese society for agricultural machinery, 2019, 50(S1): 8-14.
34
GUO C Y. Key technologies of automatic vehicles navigation system in orchard[D]. Yangling: Northwest A & F University, 2020.
35
MALAVAZI F B P, GUYONNEAU R, FASQUEL J B, et al. LiDAR-only based navigation algorithm for an autonomous agricultural robot[J]. Computers and electronics in agriculture, 2018, 154: 71-79.
36
ZHU H, YUEN K V, MIHAYLOVA L, et al. Overview of environment perception for intelligent vehicles[J]. IEEE transactions on intelligent transportation systems, 2017, 18(10): 2584-2601.
37
ZHU Y, LING Z G, ZHANG Y Q. Research progress and prospect of machine vision technology[J]. Journal of graphics, 2020, 41(6): 871-890.
38
XIANG X Q, PAN Z G, TONG J. Depth camera in computer vision and computer graphics: An overview[J]. Journal of frontiers of computer science and technology, 2011, 5(6): 481-492.
39
WANG T H, CHEN B, ZHANG Z Q, et al. Applications of machine vision in agricultural robot navigation: A review[J]. Computers and electronics in agriculture, 2022, 198: ID 107085.
40
ZHOU X, GAO Z J. Application and future development of stereo vision technology[J]. Journal of engineering graphics, 2010, 31(4): 50-55.
41
CHEN S Y. Study on deep learning based stereo matching technologies[D]. Hangzhou: Zhejiang University, 2022.
42
KOLAR P, BENAVIDEZ P, JAMSHIDI M. Survey of data fusion techniques for laser and vision based sensor integration for autonomous navigation[J]. Sensors, 2020, 20(8): ID 2180.
43
ZHAO Z J. Research on image processing based on depth camera[D]. Hefei: University of Science and Technology of China, 2022.
44
BAI Y H, ZHANG B H, XU N M, et al. Vision-based navigation and guidance for agricultural autonomous vehicles and robots: A review[J]. Computers and electronics in agriculture, 2023, 205: ID 107584.
45
ZHANG J, CHAMBERS A, MAETA S, et al. 3D perception for accurate row following: Methodology and results[C]// 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway, New Jersey, USA: IEEE, 2013: 5306-5313.
46
JIANG S K, WANG S L, YI Z Y, et al. Autonomous navigation system of greenhouse mobile robot based on 3D LiDAR and 2D LiDAR SLAM[J]. Frontiers in plant science, 2022, 13: ID 815218.
47
JONES M H, BELL J, DREDGE D, et al. Design and testing of a heavy-duty platform for autonomous navigation in kiwifruit orchards[J]. Biosystems engineering, 2019, 187: 129-146.
48
LIU W H, HE X K, LIU Y J, et al. Navigation method between rows for orchard based on 3D LiDAR[J]. Transactions of the Chinese society of agricultural engineering, 2021, 37(9): 165-174.
49
XIONG J K. Research on navigation control of orchard sprayer based on LiDAR and satellite positioning[D]. Zhenjiang: Jiangsu University, 2022.
50
GENG L J. Research and design of vineyard autonomous navigation platform based on LiDAR and RTK[D]. Zibo: Shandong University of Technology, 2022.
51
JI Y H, XU H Z, ZHANG M, et al. Design of point cloud acquisition system for farmland environment based on LiDAR[J]. Transactions of the Chinese society for agricultural machinery, 2019, 50(S1): 1-7.
52
KRAGH M, JØRGENSEN R N, PEDERSEN H. Object detection and terrain classification in agricultural fields using 3D lidar data[C]// Proceedings of the 10th International Conference on Computer Vision Systems. New York, USA: ACM, 2015: 188-197.
53
SHANG Y H, ZHANG G Q, MENG Z J, et al. Field obstacle detection method of 3D LiDAR point cloud based on Euclidean clustering[J]. Transactions of the Chinese society for agricultural machinery, 2022, 53(1): 23-32.
54
HU G R, KONG W Y, QI C, et al. Optimization of the navigation path for a mobile harvesting robot in orchard environment[J]. Transactions of the Chinese society of agricultural engineering, 2021, 37(9): 175-184.
55
RIVERA G, PORRAS R, FLORENCIA R, et al. LiDAR applications in precision agriculture for cultivating crops: A review of recent advances[J]. Computers and electronics in agriculture, 2023, 207: ID 107737.
56
OMASA K, HOSOI F, KONISHI A. 3D LiDAR imaging for detecting and understanding plant responses and canopy structure[J]. Journal of experimental botany, 2007, 58(4): 881-898.
57
SANZ R, ROSELL J R, LLORENS J, et al. Relationship between tree row LIDAR-volume and leaf area density for fruit orchards and vineyards obtained with a LIDAR 3D Dynamic Measurement System[J]. Agricultural and forest meteorology, 2013, 171/172: 153-162.
58
ZHANG M, MIAO Y L, QIU R C, et al. Maize leaf area index measurement based on vehicle 3D LiDAR[J]. Transactions of the Chinese society for agricultural machinery, 2019, 50(6): 12-21.
59
GENÉ-MOLA J, GREGORIO E, AUAT CHEEIN F, et al. Fruit detection, yield prediction and canopy geometric characterization using LiDAR with forced air flow[J]. Computers and electronics in agriculture, 2020, 168: ID 105121.
60
WEISS U, BIBER P, LAIBLE S, et al. Plant species classification using a 3D LIDAR sensor and machine learning[C]// 2010 Ninth International Conference on Machine Learning and Applications. Piscataway, New Jersey, USA: IEEE, 2010: 339-345.
61
PRETTO A, ARAVECCHIA S, BURGARD W, et al. Building an aerial-ground robotics system for precision farming: An adaptable solution[J]. IEEE robotics & automation magazine, 2021, 28(3): 29-49.
62
WANG F T, FAN C C, LI Z D, et al. Application status and development trend of robots in the field of facility agriculture[J]. Journal of Chinese agricultural mechanization, 2020, 41(3): 93-98, 120.
63
PEZZEMENTI Z, TABOR T, HU P Y, et al. Comparing apples and oranges: Off-road pedestrian detection on the National Robotics Engineering Center agricultural person-detection dataset[J]. Journal of field robotics, 2018, 35(4): 545-563.
64
DOS SANTOS E B, MENDES C C T, SANTOS OSORIO F, et al. Bayesian networks for obstacle classification in agricultural environments[C]// 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). Piscataway, New Jersey, USA: IEEE, 2013: 1416-1421.
65
BALL D, UPCROFT B, WYETH G, et al. Vision-based obstacle detection and navigation for an agricultural robot[J]. Journal of field robotics, 2016, 33(8): 1107-1130.
66
FLEISCHMANN P, BERNS K. A stereo vision based obstacle detection system for agricultural applications[M]// Springer tracts in advanced robotics. Cham: Springer International Publishing, 2016: 217-231.
67
NISSIMOV S, GOLDBERGER J, ALCHANATIS V. Obstacle detection in a greenhouse environment using the Kinect sensor[J]. Computers and electronics in agriculture, 2015, 113: 104-115.
68
YANG F Z, LIU S, CHEN L P, et al. Detection method of various obstacles in farmland based on stereovision technology[J]. Transactions of the Chinese society for agricultural machinery, 2012, 43(5): 168-172, 202.
69
JI C Y, SHEN Z Y, GU B X, et al. Obstacle detection based on point clouds in application of agricultural navigation[J]. Transactions of the Chinese society of agricultural engineering, 2015, 31(7): 173-179.
70
XU J J. Research for field road scene recognition and obstacle detection in hilly areas based on vision[D]. Chongqing: Southwest University, 2019.
71
YAGUCHI H, NAGAHAMA K, HASEGAWA T, et al. Development of an autonomous tomato harvesting robot with rotational plucking gripper[C]// 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). New York, USA: ACM, 2016: 652-657.
72
SILWAL A, DAVIDSON J R, KARKEE M, et al. Design, integration, and field evaluation of a robotic apple harvester[J]. Journal of field robotics, 2017, 34(6): 1140-1159.
73
BACHCHE S, OKA K. Distinction of green sweet peppers by using various color space models and computation of 3-dimensional location coordinates of recognized green sweet peppers based on parallel stereovision system[J]. Journal of system design and dynamics, 2013, 7(2): 178-196.
74
LE X L. Research and application of robot harvesting method for tomato bunches[D]. Guangzhou: South China University of Technology, 2021.
75
LING X, ZHAO Y S, GONG L, et al. Dual-arm cooperation and implementing for robotic harvesting tomato using binocular vision[J]. Robotics and autonomous systems, 2019, 114: 134-143.
76
KISE M, ZHANG Q, ROVIRA MÁS F. A stereovision-based crop row detection method for tractor-automated guidance[J]. Biosystems engineering, 2005, 90(4): 357-367.
77
WANG Q, ZHANG Q, ROVIRA-MÁS F, et al. Stereovision-based lateral offset measurement for vehicle navigation in cultivated stubble fields[J]. Biosystems engineering, 2011, 109(4): 258-265.
78
YUN C, KIM H J, JEON C W, et al. Stereovision-based guidance line detection method for auto-guidance system on furrow irrigated fields[J]. IFAC-PapersOnLine, 2018, 51(17): 157-161.
79
YU Y. Research on autonomous obstacle avoidance method of field intelligent agricultural vehicle based on fusion navigation and reinforcement learning algorithm[D]. Hangzhou: Zhejiang University, 2022.
80
HE K. Research on global path planning of autonomous mobile robot in strawberry greenhouse based on ROS[D]. Wuhan: Wuhan Polytechnic University, 2020.
81
LI Y, ZHAO M, XU M Y, et al. A survey of research on multi-source information fusion technology[J]. Intelligent computer and applications, 2019, 9(5): 186-189.
82
XU B W, MA Z Y, LI Y. Research progress and application of multi-sensor information fusion technology in environmental perception[J]. Computer measurement & control, 2022, 30(9): 1-7, 21.
83
UNDERWOOD J P, HUNG C, WHELAN B, et al. Mapping almond orchard canopy volume, flowers, fruit and yield using lidar and vision sensors[J]. Computers and electronics in agriculture, 2016, 130: 83-96.
84
REINA G, MILELLA A, ROUVEURE R, et al. Ambient awareness for agricultural robotic vehicles[J]. Biosystems engineering, 2016, 146: 114-132.
85
YASUKAWA S, LI B, SONODA T, et al. Development of a tomato harvesting robot[J]. Proceedings of international conference on artificial life and robotics, 2017, 22: 408-411.
86
BENET B, LENAIN R, ROUSSEAU V. Development of a sensor fusion method for crop row tracking operations[J]. Advances in animal biosciences, 2017, 8(2): 583-589.
87
YAN Y X, ZHANG B H, ZHOU J, et al. Real-time localization and mapping utilizing multi-sensor fusion and visual-IMU-wheel odometry for agricultural robots in unstructured, dynamic and GPS-denied greenhouse environments[J]. Agronomy, 2022, 12(8): ID 1740.
88
CHU F C. Research on unstructured environment navigation technology of agricultural robot based on multi-sensor fusion[D]. Zibo: Shandong University of Technology, 2022.
89
LIU Y F. Obstacle avoidance path planning of autonomous navigation agricultural machinery based on machine vision[D]. Nanjing: Nanjing Agricultural University, 2020.
90
HE M. Autonomous perception and navigation in complex scenes of trellis orchard based on 2D-3D information combination[D]. Zhenjiang: Jiangsu University, 2021.
91
LIN Y H. Path planning and path tracking of intelligent agricultural vehicle based on multi-source information fusion[D]. Nanjing: Southeast University, 2018.