
Viewpoint Planning for Robot Photogrammetry Based on Initial Pose Estimation via Deep Learning

Jiang Tao, Cui Hai-Hua, Cheng Xiao-Sheng, Tian Wei

Citation: Jiang Tao, Cui Hai-Hua, Cheng Xiao-Sheng, Tian Wei. Viewpoint planning for robot photogrammetry based on initial pose estimation via deep learning. Acta Automatica Sinica, 2020, 46(x): 1−12. doi: 10.16383/j.aas.c200255


doi: 10.16383/j.aas.c200255
Author biographies:

    Jiang Tao: Ph.D. candidate at the College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics. His main research interest is vision measurement technology and its applications for robotic assembly. E-mail: jtmaster1@163.com

    Cui Hai-Hua: Associate professor at the College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics. His main research interest is optical precision measurement. Corresponding author of this paper. E-mail: cuihh@nuaa.edu.cn

    Cheng Xiao-Sheng: Professor at the College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics. His main research interest is digital technology and equipment. E-mail: smcadme@nuaa.edu.cn

    Tian Wei: Professor at the College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics. His main research interest is robotic assembly technology and equipment. E-mail: tw_nj@nuaa.edu.cn

Viewpoint Planning for Robot Photogrammetry Based on Initial Pose Estimation via Deep Learning

Funds: Supported by the National Key Research and Development Program of China (2019YFB2006100), the Fundamental Research Funds for the Central Universities (NS2020030), the Natural Science Foundation of Jiangsu Province (BK20191280), the National Science and Technology Major Project (2018ZX04014001), and the Jiangsu Innovation Program for Graduate Education (KYCX19_0161)
  • Abstract: To address the problem that offline planning in robot photogrammetry is affected by the calibration of the initial pose, a viewpoint planning method for robot photogrammetry systems that incorporates initial pose estimation is proposed. First, a YOLO-based deep learning network is built to estimate the 3D bounding box of the measured object, and the PnP algorithm is used to solve the object pose quickly. Then, singularity-free and collision-free robot viewpoints are randomly generated, and a target visibility matrix for each view is computed from the forward and inverse 2D-3D mapping of camera imaging according to a depth criterion. Finally, the entropy weight method is introduced to build an optimization model that minimizes the reconstruction information entropy, and the robot path is planned with a TSP model. The results show that the translation error of the pose estimated by deep learning is below 5 mm and the angular error is below 2°. The viewpoint planning method with entropy weights improves photogrammetry quality, and the photogrammetry system incorporating the deep-learning initial pose improves reconstruction efficiency. The algorithm is validated on typical parts in terms of photogrammetry quality and efficiency, and obtains excellent pose estimation and reconstruction results in all cases. The proposed algorithm is suitable for practical engineering applications, especially fast sparse photogrammetric reconstruction, and improves the speed and automation of industrial photogrammetry.
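The entropy-weight objective described in the abstract, $x^* = \min \sum_i w_i x_i$, which Table 1 compares against the unweighted form, can be sketched with the standard entropy weight method. The following is a minimal illustration on toy viewpoint scores, not the paper's implementation; the matrix `X` and all names are hypothetical:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: rows of X are candidate viewpoints,
    columns are evaluation criteria. Returns one weight per criterion."""
    m = X.shape[0]
    P = X / X.sum(axis=0, keepdims=True)       # normalize each criterion column
    plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)         # information entropy per criterion
    d = 1.0 - e                                # degree of divergence
    return d / d.sum()                         # weights sum to 1

# Toy example: 4 candidate viewpoints scored on 3 criteria;
# the third criterion is constant, so it carries no information.
X = np.array([[0.9, 0.2, 0.5],
              [0.8, 0.3, 0.5],
              [0.1, 0.9, 0.5],
              [0.2, 0.8, 0.5]])
w = entropy_weights(X)
scores = X @ w             # weighted cost x_i = sum_j w_j * x_ij
best = int(np.argmin(scores))  # viewpoint minimizing the weighted objective
```

A criterion that does not discriminate between viewpoints (the constant third column) receives zero weight, which is the property the weighted objective in Table 1 exploits relative to the unweighted one.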
  • Fig. 1 Diagram of the robotic photogrammetric system

    Fig. 2 Viewpoint planning strategy with estimated initial pose

    Fig. 3 Outline of single-shot pose estimation with deep learning

    Fig. 4 Results of training and testing

    Fig. 5 Visualization of pose estimation

    Fig. 6 Comparison of model visibility in different views: (a) image projected directly from the STL model; (b) image projected by the proposed method; (c) visible part of the STL model; (d) visualization of the estimated pose

    Fig. 7 Simulation interface (a) and field experiment (b) of the optimal scanning planning

    Fig. 8 Comparison of scanning paths with different candidate viewpoints: (a) 3D view of the path; (b) 2D view of the path; (c) 3D point cloud

    Fig. 9 Pose estimation and viewpoint planning for photogrammetry of parts with typical features: (a) sphere, (b) cylinder, (c) recess

    Table 1 Effectiveness test for entropy weight

    Experiment                    | A                                         | B                                     | C
    Objective function            | $x^* = \min \sum_{i=1}^{N} w_i x_i$       | $x^* = \min \sum_{i=1}^{N} x_i$       | $x^* = \min \sum_{i=1}^{N} x_i$
    Optimized viewpoints (count)  | 21                                        | 20                                    | 21 = 20 (B) + 1 (288)
    Point cloud size (points)     | 21509                                     | 18360                                 | 15344

    Table 2 Comparison of reconstruction quality with and without entropy weight and initial pose constraint

    With weight, with initial pose constraint:
      Viewpoint indices: 1, 13, 100, 113, 143, 149, 173, 189, 190, 196, 207, 269, 272, 280
      3D point cloud size: 10584
    With weight, without initial pose constraint:
      Viewpoint indices: 13, 100, 113, 143, 149, 173, 189, 190, 196, 207, 269, 272, 280
      3D point cloud size: 11451
    Without weight, with initial pose constraint:
      Viewpoint indices: 1, 17, 28, 38, 45, 61, 66, 74, 89, 91, 92, 107, 113, 127, 185, 189, 207, 249, 269, 280
      3D point cloud size: 7703
    Without weight, without initial pose constraint:
      Viewpoint indices: 14, 35, 43, 45, 56, 59, 73, 75, 89, 111, 127, 149, 162, 185, 189, 207, 249, 256, 274, 281
      3D point cloud size: 9571

    Table 3 Comparison of photogrammetry efficiency with deep-learning pose estimation

    Number of points
              | With initial pose constraint | Without initial pose constraint | Change
    Sphere    | 15635                        | 16489                           | 5.18%
    Cylinder  | 10138                        | 11503                           | 11.87%
    Recess    | 11472                        | 12640                           | 9.24%

    Reconstruction time (s)
              | Without pose estimation      | With pose estimation            | Efficiency gain
    Sphere    | 157                          | 133                             | 15.29%
    Cylinder  | 182                          | 155                             | 14.84%
    Recess    | 102                          | 83                              | 18.63%
  • [1] Zheng Tai-Xiong, Huang Shuai, Li Yong-Fu, Feng Ming-Chi. Key techniques for vision-based 3D reconstruction: a review. Acta Automatica Sinica, 2020, 46(4): 631−652. doi: 10.16383/j.aas.2017.c170502
    [2] Lee Jay, Li Xiang, Xu Yuan-Ming, Yang Shao-Jie, Sun Ke-Yi. Recent advances and prospects in industrial AI and applications. Acta Automatica Sinica, 2020, 46(10): 2031−2044. doi: 10.16383/j.aas.200501
    [3] Chai Tian-You. Development directions of industrial artificial intelligence. Acta Automatica Sinica, 2020, 46(10): 2005−2012. doi: 10.16383/j.aas.c200796
    [4] Sims-Waterhouse D, Bointon P, Piano S, et al. Experimental comparison of photogrammetry for additive manufactured parts with and without laser speckle projection. In: Optical Measurement Systems for Industrial Inspection X. International Society for Optics and Photonics, 2017, 10329: 103290W
    [5] Xu Jie, Jiang Shan-Ping, Yang Lin-Hua, Zhang Jing-Chuan. Digital photogrammetry for thermal deformation of satellite structures in normal environment. Optics and Precision Engineering, 2012, 20(12): 2667−2673. doi: 10.3788/OPE.20122012.2667
    [6] Filion A, Joubair A, Tahan A S, et al. Robot calibration using a portable photogrammetry system. Robotics and Computer-Integrated Manufacturing, 2018, 49: 77−87. doi: 10.1016/j.rcim.2017.05.004
    [7] Kinnell P, Rymer T, Hodgson J, et al. Autonomous metrology for robot mounted 3D vision systems. CIRP Annals, 2017, 66(1): 483−486. doi: 10.1016/j.cirp.2017.04.069
    [8] Kwon H, Na M, Song J B. Rescan strategy for time efficient view and path planning in automated inspection system. International Journal of Precision Engineering and Manufacturing, 2019, 20(10): 1747−1756. doi: 10.1007/s12541-019-00186-x
    [9] Raffaeli R, Mengoni M, Germani M, et al. Off-line view planning for the inspection of mechanical parts. International Journal on Interactive Design and Manufacturing, 2013, 7: 1−12. doi: 10.1007/s12008-012-0160-1
    [10] Li L, Xu D, Niu L, et al. A path planning method for a surface inspection system based on two-dimensional laser profile scanner. International Journal of Advanced Robotic Systems, 2019, 16(4): 1729881419862463
    [11] Alsadik B, Gerke M, Vosselman G. Visibility analysis of point cloud in close range photogrammetry. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2014, 2(5): 9
    [12] Tarbox G H, Gottschlich S N. Planning for complete sensor coverage in inspection. Computer Vision and Image Understanding, 1995, 61(1): 84−111. doi: 10.1006/cviu.1995.1007
    [13] Jing W, Polden J, Lin W, et al. Sampling-based view planning for 3D visual coverage task with unmanned aerial vehicle. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016: 1808−1815
    [14] Jing W, Polden J, Tao P Y, et al. Model-based coverage motion planning for industrial 3D shape inspection applications. In: 2017 13th IEEE Conference on Automation Science and Engineering (CASE). IEEE, 2017: 1293−1300
    [15] Jiang Tao, Cheng Xiao-Sheng, Cui Hai-Hua, Tian Wei. Large field of view vision method for robot pose measurement based on zoom lens. Acta Optica Sinica, 2018, 38(8): 0815012. doi: 10.3788/AOS201838.0815012
    [16] Jiang T, Cheng X, Cui H, et al. Combined shape measurement based on locating and tracking of an optical scanner. Journal of Instrumentation, 2019, 14(1): P01006. doi: 10.1088/1748-0221/14/01/P01006
    [17] Tekin B, Sinha S N, Fua P. Real-time seamless single shot 6D object pose prediction. In: CVPR, 2018
    [18] Su H, Qi C R, Li Y, Guibas L J. Render for CNN: viewpoint estimation in images using CNNs trained with rendered 3D model views. In: ICCV, 2015
    [19] Tulsiani S, Malik J. Viewpoints and keypoints. In: CVPR, 2015
    [20] Kendall A, Grimes M, Cipolla R. PoseNet: a convolutional network for real-time 6-DOF camera relocalization. In: ICCV, 2015
    [21] Xiang Y, Schmidt T, Narayanan V, Fox D. PoseCNN: a convolutional neural network for 6D object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199, 2017
    [22] Kehl W, Manhardt F, Tombari F, Ilic S, Navab N. SSD-6D: making RGB-based 3D detection and 6D pose estimation great again. In: ICCV, 2017
    [23] Rad M, Lepetit V. BB8: a scalable, accurate, robust to partial occlusion method for predicting the 3D poses of challenging objects without using depth. In: ICCV, 2017
    [24] Garrido-Jurado S, Muñoz-Salinas R, Madrid-Cuevas F J, et al. Generation of fiducial marker dictionaries using mixed integer linear programming. Pattern Recognition, 2016, 51: 481−491. doi: 10.1016/j.patcog.2015.09.023
    [25] Carpin S, Pillonetto G. Motion planning using adaptive random walks. IEEE Transactions on Robotics, 2005
    [26] Cortsen J, Petersen H G. Advanced off-line simulation framework with deformation compensation for high speed machining with robot manipulators. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics. IEEE, 2012
Publication history
  • Received: 2020-04-26
  • Revised: 2020-11-04
  • Published online: 2020-12-08
