
Automatic Measurement of Mongolian Horse Body Dimensions Based on Improved YOLOv8n-pose and 3D Point Cloud Analysis
[Objective/Significance] Accurately and efficiently obtaining horse body dimension information is a key step in the modernization of the horse industry. Traditional manual measurement is time-consuming and labor-intensive, and causes a certain degree of stress to the horses. Therefore, accurate and efficient automatic measurement of body dimension parameters is essential for formulating early breeding plans for Mongolian horses. [Method] An Azure Kinect depth camera was used to acquire RGB-D data of both sides of each Mongolian horse. Taking YOLOv8n-pose as the baseline, deformable convolution (Deformable Convolution v2, DCNv2) was introduced into the C2f module, a shuffle attention (SA) module was added, the loss function was replaced with SCYLLA-IoU (SIoU), and the learning rate was adjusted dynamically with cosine annealing, yielding a model named DSS-YOLO (DCNv2-SA-SIoU-YOLO) for detecting the body dimension keypoints of Mongolian horses. The 2D keypoint coordinates in the RGB images were then combined with the corresponding depth values in the depth images to obtain 3D keypoint coordinates, and the data were converted into Mongolian horse point clouds. Pass-through filtering, random sample consensus (RANSAC), statistical outlier filtering, and principal component analysis (PCA) were used for point cloud processing and analysis. Finally, five body dimension parameters, namely body height, body oblique length, croup height, chest circumference, and croup circumference, were computed automatically from the keypoint coordinates. [Results and Discussion] The average keypoint detection precision of DSS-YOLO was 92.5%, its d_DSS was 7.2 pixels, and its parameter and computational costs were only 3.48 M and 9.1 G, respectively. Compared with manual measurements, the overall mean absolute error of the automatically measured body dimensions was 3.77 cm and the mean relative error was 2.29%. [Conclusion] The results can provide technical support for determining genetic parameters related to the athletic performance of Mongolian horses.
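To make the keypoint-to-point-cloud fusion step concrete, the following is a minimal sketch of back-projecting one detected 2D keypoint to a camera-space 3D coordinate using the aligned depth map and a standard pinhole model; the intrinsic parameters and image size shown are illustrative placeholders, not the calibrated values of the Azure Kinect used in the study.

```python
import numpy as np

def keypoint_to_3d(u, v, depth_map, fx, fy, cx, cy):
    """Back-project a detected 2D keypoint (u, v) to camera-space 3D coordinates
    using the aligned depth map (depth in millimetres) and pinhole intrinsics."""
    z = float(depth_map[int(v), int(u)])   # depth value sampled at the keypoint
    if z == 0:                             # 0 indicates no valid depth reading
        return None
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z]) / 1000.0    # millimetres to metres

# Illustrative (uncalibrated) 720p intrinsics and a dummy 2.5 m depth map
fx, fy, cx, cy = 605.0, 605.0, 640.0, 360.0
depth = np.full((720, 1280), 2500, dtype=np.uint16)
print(keypoint_to_3d(700, 400, depth, fx, fy, cx, cy))
```

Repeating this for every detected keypoint, and for every pixel of the horse region, yields the 3D keypoint coordinates and the horse point cloud described above.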
[Objective] There is a high genetic correlation among the various morphological characteristics of Mongolian horses. Using advanced technology to obtain body structure parameters related to athletic performance could provide data support for breeding institutions to develop scientific breeding plans and lay the groundwork for further improvement of Mongolian horse breeds. However, traditional manual measurement methods are time-consuming, labor-intensive, and may cause stress responses in horses. Therefore, accurate and efficient measurement of Mongolian horse body dimensions is crucial for formulating early breeding plans. [Method] Video images of 50 adult Mongolian horses at a suitable breeding age were first collected at the Inner Mongolia Agricultural University Horse Breeding Technical Center. Fifty images per horse were captured to construct the training and validation sets, resulting in a total of 2 500 high-definition RGB images of Mongolian horses, with an equal ratio of images depicting horses in motion and at rest. To ensure the model's robustness and to account for issues such as viewing angle, lighting, and image blurring during actual image capture, a series of augmentation algorithms was applied to the original dataset, expanding the Mongolian horse image dataset to 4 000 images. YOLOv8n-pose was employed as the baseline keypoint detection model. Through the design of a C2f_DCN module, deformable convolution (DCNv2) was integrated into the C2f module of the Backbone network to enhance the model's adaptability to different horse poses in real-world scenes. In addition, a shuffle attention (SA) module was added to the Neck network to improve the model's focus on critical features. The original loss function was replaced with SCYLLA-IoU (SIoU) to prioritize major image regions, and a cosine annealing method was employed to dynamically adjust the learning rate during training. The improved model was named DSS-YOLO (DCNv2-SA-SIoU-YOLO). Additionally, a test set comprising 30 RGB-D images of mature Mongolian horses was constructed for the body dimension measurement task. DSS-YOLO was used to detect the body dimension keypoints. The 2D keypoint coordinates from the RGB images were fused with the corresponding depth values from the depth images to obtain 3D keypoint coordinates, and the data were converted into Mongolian horse point clouds. Point cloud processing and analysis were performed using pass-through filtering, random sample consensus (RANSAC) shape fitting, statistical outlier filtering, and principal component analysis (PCA) coordinate system correction. Finally, body height, body oblique length, croup height, chest circumference, and croup circumference were computed automatically from the keypoint spatial coordinates. [Results and Discussion] The proposed DSS-YOLO model had parameter and computational costs of only 3.48 M and 9.1 G, respectively, with an average accuracy (mAP0.5:0.95) of 92.5% and a d_DSS of 7.2 pixels. Compared with Hourglass, HRNet, and SimCC, mAP0.5:0.95 increased by 3.6%, 2.8%, and 1.6%, respectively. Body dimensions were calculated automatically from the keypoint coordinates, and a moving least squares (MLS) curve fitting method was used to complete the point cloud of the horse's hindquarters; in experiments on 30 Mongolian horses, the automatic measurements achieved a mean absolute error (MAE) of 3.77 cm and a mean relative error (MRE) of 2.29%.
[Conclusions] The results of this study showed that the DSS-YOLO model, combined with three-dimensional point cloud processing, can achieve accurate automatic measurement of Mongolian horse body dimensions. The proposed measurement method can also be extended to other horse breeds, providing technical support for horse breeding programs and offering practical application value.
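The point cloud processing chain described in the abstract (pass-through filtering, RANSAC plane fitting, statistical outlier removal, and PCA-based coordinate correction) can be sketched as follows. Open3D is not named in the paper, and the working volume and thresholds below are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
import open3d as o3d  # one possible toolkit for the described steps; not specified in the paper

def preprocess_horse_cloud(points_xyz):
    """Illustrative reproduction of the described pipeline: pass-through filtering,
    RANSAC ground-plane removal, statistical outlier removal, and PCA alignment."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)

    # 1. Pass-through filter: keep points inside an assumed working volume (metres)
    box = o3d.geometry.AxisAlignedBoundingBox(np.array([-3.0, -2.0, 0.5]),
                                              np.array([3.0, 2.0, 5.0]))
    pcd = pcd.crop(box)

    # 2. RANSAC plane fitting to detect and drop the ground plane
    _, ground_idx = pcd.segment_plane(distance_threshold=0.02,
                                      ransac_n=3, num_iterations=1000)
    pcd = pcd.select_by_index(ground_idx, invert=True)

    # 3. Statistical outlier removal to suppress depth noise
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # 4. PCA: rotate the cloud so its principal axes align with the coordinate axes
    pts = np.asarray(pcd.points)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pcd.points = o3d.utility.Vector3dVector(centered @ vt.T)
    return pcd
```

After PCA alignment, distances between keypoints along the aligned axes give body height, body oblique length, and croup height directly, while the circumferences require the slicing step illustrated with Fig. 10 below.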
Keywords: Mongolian horse / body measurements / convolutional neural network / attention mechanism / 3D point cloud processing / YOLOv8n-pose
Fig. 10 Schematic diagram of point cloud slicing and curve fitting of Mongolian horse chest and croup circumference
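As an illustration of the slicing idea in Fig. 10, the sketch below cuts a thin slab through the PCA-aligned point cloud at the chest or croup keypoint and measures the perimeter of the projected cross-section. The convex-hull perimeter used here is a simplification that stands in for the paper's curve fitting and moving least squares completion of occluded regions; the function and parameter names are hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull  # simplification; the paper fits curves (e.g. MLS) to the slice

def girth_from_slice(points_xyz, x_keypoint, slab=0.02):
    """Estimate chest or croup girth from a PCA-aligned cloud: take a thin slab
    perpendicular to the body axis (x) at the keypoint, project it onto the y-z
    plane, and measure the perimeter of the resulting cross-section."""
    mask = np.abs(points_xyz[:, 0] - x_keypoint) < slab / 2
    cross_section = points_xyz[mask][:, 1:3]   # keep the y and z coordinates of the slab
    if len(cross_section) < 3:
        return None
    hull = ConvexHull(cross_section)
    return hull.area                           # for 2D input, .area is the hull perimeter
```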
Table 1 Configuration parameters of the training platform

| Configuration | Version |
| --- | --- |
| Operating system | Ubuntu 20.04 |
| Deep learning framework | PyTorch 1.11.0 |
| CPU | Intel Core i7-9700K |
| GPU | Nvidia GeForce RTX 2080 Ti |
| CUDA | 11.3 |
| cuDNN | 8.2.1 |
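The abstract notes that a cosine annealing method dynamically adjusts the learning rate during training. Under the PyTorch 1.11 environment of Table 1, that schedule could look like the minimal sketch below, where the model, initial learning rate, epoch count, and eta_min are placeholders rather than the paper's hyperparameters.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

# Placeholder model and optimizer; DSS-YOLO and its training loop would go here.
model = torch.nn.Conv2d(3, 16, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.937)
scheduler = CosineAnnealingLR(optimizer, T_max=300, eta_min=1e-4)

for epoch in range(300):
    # ... one training epoch would run here ...
    optimizer.step()   # placeholder step so the example runs end to end
    scheduler.step()   # learning rate follows a cosine curve from 0.01 down to 1e-4
```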
Fig. 12 Comparison of the YOLOv8n-pose and DSS-YOLO model training processes
Table 2 Performance comparison of YOLOv8n-pose and DSS-YOLO

| Model | mAP0.5:0.95/% | Parameters/M | FLOPs/G |
| --- | --- | --- | --- |
| YOLOv8n-pose | 88.2 | 3.15 | 8.7 |
| DSS-YOLO | 92.5 | 3.48 | 9.1 |
Table 3 DSS-YOLO model detection results on RGB images under different conditions in the test set

| Model | Condition | d_DSS/pixel |
| --- | --- | --- |
| YOLOv8n-pose | High quality | 12.4 |
| YOLOv8n-pose | Blurred | 13.9 |
| YOLOv8n-pose | Overexposed | 15.2 |
| DSS-YOLO | High quality | 7.2 |
| DSS-YOLO | Blurred | 7.9 |
| DSS-YOLO | Overexposed | 8.3 |
Table 4 Performance comparison of different keypoint detection models

| Model | mAP0.5:0.95/% | d_DSS/pixel | Parameters/M | FLOPs/G |
| --- | --- | --- | --- | --- |
| Hourglass | 88.9 | 15.4 | 94.84 | 28.7 |
| HRNet | 89.7 | 12.6 | 28.52 | 16.8 |
| SimCC | 90.9 | 10.4 | 26.74 | 6.8 |
| DSS-YOLO | 92.5 | 7.2 | 3.48 | 9.1 |
Table 5 MAE and MRE of Mongolian horse body measurements

| Metric | Body height | Body oblique length | Croup height | Chest circumference | Croup circumference |
| --- | --- | --- | --- | --- | --- |
| MAE/cm | 2.98 | 3.91 | 3.12 | 4.49 | 4.35 |
| MRE/% | 1.93 | 2.51 | 2.02 | 2.46 | 2.51 |
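For reference, the overall errors quoted in the abstract are consistent with the simple mean of the per-parameter values in Table 5 (each parameter was measured on the same 30 horses): MAE = (2.98 + 3.91 + 3.12 + 4.49 + 4.35) / 5 = 3.77 cm, and MRE = (1.93 + 2.51 + 2.02 + 2.46 + 2.51) / 5 ≈ 2.29%.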