A Depth Measurement Method by Omni Directional Image and Structured Light (一种基于全向结构光的深度测量方法)

JIA Tong, WU Cheng-Dong, CHEN Dong-Yue, WANG Bing-Nan, GAO Hai-Hong, FANG Zhuo-Qun

Citation: JIA Tong, WU Cheng-Dong, CHEN Dong-Yue, WANG Bing-Nan, GAO Hai-Hong, FANG Zhuo-Qun. A Depth Measurement Method by Omni Directional Image and Structured Light. ACTA AUTOMATICA SINICA, 2015, 41(9): 1553-1562. doi: 10.16383/j.aas.2015.c140857

doi: 10.16383/j.aas.2015.c140857
Funds: Supported by National Natural Science Foundation of China (61273078), Doctoral Foundation of Ministry of Education of China (20110042120030), and Fundamental Research Funds for the Central Universities of China (130404012)

More information
    About the authors:

    WU Cheng-Dong  Ph.D., professor at the College of Information Science and Engineering, Northeastern University. Research interests cover image processing, wireless sensor networks, intelligent building technology, and robot control. E-mail: wuchengdong@ise.neu.edu.cn

    CHEN Dong-Yue  Ph.D., associate professor at the College of Information Science and Engineering, Northeastern University. Research interests cover image processing, computer vision, and pattern recognition. E-mail: chendongyue@ise.neu.edu.cn

    WANG Bing-Nan  Master student at Shenyang Jianzhu University. Research interests cover image processing and computer vision. E-mail: wangbingnan@163.com

    GAO Hai-Hong  Master student at Northeastern University. Research interests cover image processing and computer vision. E-mail: dzgaohaihong@163.com

    FANG Zhuo-Qun  Ph.D. candidate at Northeastern University. Research interests cover image processing and computer vision. E-mail: fangzhuoqun@163.com

    Corresponding author:

    JIA Tong  Ph.D., associate professor at the College of Information Science and Engineering, Northeastern University. Research interests cover image processing, computer vision, and pattern recognition. Corresponding author of this paper. E-mail: jiatong@ise.neu.edu.cn


Abstract: Depth measurement is an important problem in stereo vision research. This paper proposes a depth measurement method based on omnidirectional images and structured light. First, according to the characteristics of the measurement system, a projector calibration algorithm based on multiple reference planes is adopted. Then, a set of "four-direction hourglass-shaped" coded structured light patterns is designed to establish point correspondences between the measured image and the reference image. Finally, for the case of a moving platform, a depth point cloud registration algorithm based on prior-constrained iterative closest point (ICP) is studied. Experimental results show that the proposed method can accurately measure the depth of indoor scenes and is robust to interference.
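The registration stage mentioned in the abstract (ICP alignment of depth point clouds acquired while the platform moves) can be made concrete with a small sketch. The code below is a minimal, generic ICP implementation in Python with NumPy/SciPy, not the paper's prior-constrained variant: the function name `icp`, its parameters, and the plain nearest-neighbour correspondence step are illustrative assumptions only.

```python
# Minimal, generic ICP sketch (illustrative only; the paper's
# prior-constrained ICP is not reproduced here).
import numpy as np
from scipy.spatial import cKDTree


def icp(source, target, max_iter=50, tol=1e-6):
    """Rigidly align `source` (N x 3) to `target` (M x 3) with plain ICP."""
    src = source.copy()
    tree = cKDTree(target)             # nearest-neighbour index over the target cloud
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        # 1. Correspondences: closest target point for every source point.
        dist, idx = tree.query(src)
        matched = target[idx]
        # 2. Closed-form rigid transform between the matched sets (Kabsch / SVD).
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T             # rotation, reflection-corrected
        t = tgt_c - R @ src_c
        # 3. Apply the step and accumulate the total transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        # 4. Stop when the mean closest-point residual no longer improves.
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, src
```

In the paper's setting, `source` and `target` would be depth point clouds recovered from consecutive structured-light measurements, and the prior constraint would additionally seed the initial transform and restrict which correspondences are admissible; this sketch simply starts from the identity and accepts every closest-point match.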
Metrics
  • Article views:  1759
  • HTML full-text views:  96
  • PDF downloads:  1841
  • Citations: 0
Publication history
  • Received:  2014-12-09
  • Revised:  2015-05-28
  • Published:  2015-09-20
