

PLVO: Plane-Line-based RGB-D Visual Odometry

Sun Qin-Xuan, Yuan Jing, Zhang Xue-Bo, Gao Yuan-Xi

Sun Qin-Xuan, Yuan Jing, Zhang Xue-Bo, Gao Yuan-Xi. PLVO: plane-line-based RGB-D visual odometry. Acta Automatica Sinica, 2021, x(x): 1−13 doi: 10.16383/j.aas.c200878


doi: 10.16383/j.aas.c200878
Funds: Supported by the National Natural Science Foundation of China (62073178), the Tianjin Science Fund for Distinguished Young Scholars (20JCJQJC00140, 19JCJQJC62100), the Tianjin Natural Science Foundation (20JCYBJC01470, 19JCYBJC18500), and the Major Basic Research Projects of the Natural Science Foundation of Shandong Province (ZR2019ZD07)
More Information
    Author Bio:

    SUN Qin-Xuan  Ph.D. candidate at the College of Artificial Intelligence, Nankai University. Her research interests include mobile robot navigation and simultaneous localization and mapping (SLAM). E-mail: sunqinxuan@outlook.com

    YUAN Jing  Professor at the College of Artificial Intelligence, Nankai University. His research interests include robot control, target tracking, and SLAM. Corresponding author of this paper. E-mail: nkyuanjing@gmail.com

    ZHANG Xue-Bo  Professor at the College of Artificial Intelligence, Nankai University. His research interests include motion planning, visual servoing, and SLAM. E-mail: zhangxb@robot.nankai.edu.cn

    GAO Yuan-Xi  Master's degree candidate at the College of Artificial Intelligence, Nankai University. His research interests include SLAM for UAVs and mobile robots. E-mail: gyx0801@163.com

  • Abstract: To address the degeneracy problem that arises when the pose of an RGB-D camera is computed from plane features alone, a plane-line-based RGB-D visual odometry (PLVO) is proposed. First, a multi-feature association method based on a plane-line hybrid association graph (PLHAG) is proposed, which fully exploits the geometric relationships between planes and between planes and lines to associate the two types of geometric features in a unified way. Then, an RGB-D camera pose estimation method is proposed that adaptively fuses planes and lines in a primary-auxiliary manner. Specifically, since plane features are usually more accurate and stable than line features, an adaptive weighting scheme ensures that the plane features dominate the pose computation, while line features supply the pose degrees of freedom that the planes cannot constrain. The two types of features are thereby fused, resolving the degeneracy of pose estimation from plane features alone. Finally, quantitative experiments on public datasets and robot experiments in a real indoor environment verify the effectiveness of the proposed method.
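The degeneracy discussed in the abstract can be made concrete: a matched plane pair constrains translation only along the plane normal, while a matched line constrains translation only perpendicular to its direction. Stacking these constraint directions and looking for near-zero singular values reveals which translational degrees of freedom remain free. The following is a minimal illustrative sketch of that idea, not the paper's implementation; all function names are invented here:

```python
import numpy as np

def line_constraint_rows(d):
    """A line with direction d constrains translation only in the plane
    orthogonal to d; return an orthonormal basis of that plane."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper)
    u = u / np.linalg.norm(u)
    v = np.cross(d, u)
    return np.stack([u, v])

def unconstrained_translation(plane_normals, line_dirs=(), tol=1e-6):
    """Return the number of translational DoF left unconstrained by the
    given unit plane normals and line directions, plus those directions."""
    rows = [np.asarray(n, dtype=float) for n in plane_normals]
    for d in line_dirs:
        rows.extend(line_constraint_rows(d))
    A = np.stack(rows)
    _, s, Vt = np.linalg.svd(A)   # Vt is always 3x3 for 3-vector rows
    s3 = np.zeros(3)
    s3[:min(len(s), 3)] = s[:3]
    mask = s3 < tol               # near-zero singular values = free directions
    return int(mask.sum()), Vt[mask]
```

For example, a single visible floor plane leaves two translational DoF free, and adding one vertical line removes both, which mirrors how PLVO lets line features supply the DoF that planes cannot constrain.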
    Fig.  1  System overview of PLVO

    Fig.  2  Illustration of a PLHAG

    Fig.  3  The weights of lines in 5DoF constraint cases. (a) RGB images; (b) the extracted plane and line features, where the lines are colored according to the values of their weights ${w_j}$, as shown in the color bar on the right

    Fig.  4  Illustration of angles $\theta $, $\beta $ and $\phi $

    Fig.  5  Shape of the function $\frac{{\partial \theta }}{{\partial \beta }}$ w.r.t. $\beta $ as the value of $\phi $ changes

    Fig.  6  The weights of lines in the 3DoF constraint case. (a) RGB images; (b), (c) the extracted plane and line features, where the lines are colored according to the values of the (b) rotation weights ${w_{Rj}}$ and (c) translation weights ${w_j}$, as shown in the color bar on the right

    Fig.  7  Comparison of feature association algorithms

    Fig.  8  Runtime comparison of PLHAG-based and PAG-based feature association

    Fig.  9  Visualization of ATE and RPE for PLVO

    Fig.  10  Boxplots of per-frame runtime for PLVO, P-VO, and L-VO on each image sequence

    Fig.  11  Real-world localization and incremental mapping with a mobile robot in a laboratory. (Left) panoramic view of the constructed point-cloud map; (right) the (top) RGB images and (bottom) zoom-in views of the map at locations ①, ② and ③ marked in the panoramic view
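The PLHAG in Fig. 2 exploits the fact that geometric relations between features, such as the angle between two plane normals, are invariant under rigid motion. As a generic illustration of that idea, not the paper's actual algorithm, candidate plane matches between two frames can be filtered by pairwise angle consistency (all names here are illustrative):

```python
import numpy as np

def consistent_matches(normals_a, normals_b, candidates, ang_tol_deg=5.0):
    """Greedily keep candidate plane matches (i, j) whose pairwise
    normal-to-normal angles agree between the two frames.
    normals_a/normals_b: unit normals per frame; candidates: (i, j) pairs."""
    na = np.asarray(normals_a, dtype=float)
    nb = np.asarray(normals_b, dtype=float)
    tol = np.deg2rad(ang_tol_deg)
    kept = []
    for (i, j) in candidates:
        ok = True
        for (p, q) in kept:
            # angle between planes i and p in frame A ...
            ang_a = np.arccos(np.clip(na[i] @ na[p], -1.0, 1.0))
            # ... must match the angle between their matches in frame B
            ang_b = np.arccos(np.clip(nb[j] @ nb[q], -1.0, 1.0))
            if abs(ang_a - ang_b) > tol:
                ok = False
                break
        if ok:
            kept.append((i, j))
    return kept
```

A match that would pair a wall with a floor is rejected because it breaks the inter-plane angles already accepted; the same consistency idea extends to plane-line relations.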

    Table  1  Comparison of the RMSE of RPE for different VO algorithms

    Sequence              plane-seg-VO     Prob-RGBD-VO     Canny-VO         STING-VO         PLVO
    fr1/desk              –                0.023 m/1.70°    0.031 m/1.92°    0.025 m/1.90°    0.021 m/1.37°
    fr2/desk              –                –                0.008 m/0.45°    0.048 m/1.75°    0.008 m/0.42°
    fr2/xyz               0.005 m/0.36°    –                0.004 m/0.31°    0.004 m/0.34°    0.004 m/0.30°
    fr2/360_hemisphere    –                0.069 m/1.10°    0.108 m/1.09°    0.092 m/1.47°    0.066 m/0.99°
    fr3/cabinet           0.034 m/2.04°    0.039 m/1.80°    0.036 m/1.63°    0.011 m/1.02°    0.029 m/1.24°
    fr3/str_ntex          –                0.019 m/0.70°    0.027 m/0.59°    0.014 m/0.83°    0.012 m/0.49°
    fr3/str_tex           –                –                0.013 m/0.48°    0.021 m/0.59°    0.013 m/0.45°
    fr3/office            –                –                0.010 m/0.50°    0.009 m/0.50°    0.007 m/0.47°
    "–" indicates that no result is reported for that method on that sequence.

    Table  2  Comparison of the RMSE of ATE for different VO algorithms

    Sequence              Prob-RGBD-VO    Canny-VO    STING-VO    PLVO
    fr1/desk              0.040 m         0.044 m     0.041 m     0.038 m
    fr2/desk              –               0.037 m     0.098 m     0.044 m
    fr2/xyz               –               0.008 m     0.010 m     0.008 m
    fr2/360_hemisphere    0.203 m         0.079 m     0.122 m     0.105 m
    fr3/cabinet           0.200 m         0.057 m     0.070 m     0.052 m
    fr3/str_ntex          0.054 m         0.031 m     0.040 m     0.030 m
    fr3/str_tex           –               0.013 m     0.028 m     0.013 m
    fr3/office            –               0.085 m     0.089 m     0.081 m
    "–" indicates that no result is reported for that method on that sequence.
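The ATE and RPE values in Tables 1 and 2 follow the standard TUM RGB-D benchmark definitions: ATE measures global trajectory consistency (after alignment), while RPE measures local drift over a fixed frame offset. A minimal sketch for trajectories given as lists of 4x4 homogeneous poses, illustrative rather than the benchmark's official evaluation script:

```python
import numpy as np

def rmse_ate(gt, est):
    """RMSE of the absolute trajectory error (translation only), assuming
    the estimated trajectory is already aligned to the ground truth."""
    err = [np.linalg.norm(g[:3, 3] - e[:3, 3]) for g, e in zip(gt, est)]
    return float(np.sqrt(np.mean(np.square(err))))

def rmse_rpe(gt, est, delta=1):
    """RMSE of the relative pose error over a fixed frame offset delta.
    Returns (translational RMSE in m, rotational RMSE in degrees)."""
    t_err, r_err = [], []
    for i in range(len(gt) - delta):
        dg = np.linalg.inv(gt[i]) @ gt[i + delta]     # ground-truth motion
        de = np.linalg.inv(est[i]) @ est[i + delta]   # estimated motion
        E = np.linalg.inv(dg) @ de                    # relative error
        t_err.append(np.linalg.norm(E[:3, 3]))
        # rotation angle recovered from the trace of the rotation part
        c = np.clip((np.trace(E[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        r_err.append(np.degrees(np.arccos(c)))
    return (float(np.sqrt(np.mean(np.square(t_err)))),
            float(np.sqrt(np.mean(np.square(r_err)))))
```

The "0.021 m/1.37°" entries in Table 1 correspond to the two values returned by `rmse_rpe`; Table 2 reports only the translational `rmse_ate`.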

    Table  3  Ablation results in terms of the RMSE of RPE

    Sequence              PLVO             PLVO (w/o weighting)    L-VO             P-VO*
    fr1/desk              0.021 m/1.37°    0.041 m/1.52°           0.039 m/1.56°    0.042 m/1.95°
    fr2/desk              0.008 m/0.42°    0.011 m/0.42°           0.018 m/0.52°    0.016 m/0.55°
    fr2/xyz               0.004 m/0.30°    0.005 m/0.34°           0.007 m/0.37°    0.004 m/0.27°
    fr2/360_hemisphere    0.066 m/0.99°    0.096 m/1.20°           0.162 m/1.22°    0.118 m/1.42°
    fr3/cabinet           0.029 m/1.24°    0.054 m/1.44°           0.097 m/1.70°    0.029 m/1.71°
    fr3/str_ntex          0.012 m/0.49°    0.013 m/0.55°           0.015 m/0.48°    0.013 m/0.53°
    fr3/str_tex           0.013 m/0.45°    0.015 m/0.49°           0.016 m/0.47°    0.023 m/0.75°
    fr3/office            0.007 m/0.47°    0.012 m/0.57°           0.016 m/0.59°    0.014 m/0.62°
    * In the P-VO experiment, pose estimates for which the plane-only solution was degenerate were excluded from the RPE computation.

    Table  4  Ratio of degenerate cases in P-VO

    Sequence    fr1/desk    fr2/desk    fr2/xyz    fr2/360_hemisphere    fr3/cabinet    fr3/str_ntex    fr3/str_tex    fr3/office
    Ratio       73.3%       60.3%       46.3%      91.9%                 83.4%          37.9%           40.4%          17.3%
Publication history
  • Received: 2020-10-20
  • Revised: 2021-04-16
  • Available online: 2021-05-24
