

A Real-time Method for Motion Blur Detection in Visual Navigation with a Humanoid Robot

WU Jun-Jun, GUAN Yi-Sheng, ZHANG Hong, ZHOU Xue-Feng, SU Man-Jia

Citation: WU Jun-Jun, GUAN Yi-Sheng, ZHANG Hong, ZHOU Xue-Feng, SU Man-Jia. A Real-time Method for Motion Blur Detection in Visual Navigation with a Humanoid Robot. ACTA AUTOMATICA SINICA, 2014, 40(2): 267-276. doi: 10.3724/SP.J.1004.2014.00267


doi: 10.3724/SP.J.1004.2014.00267
Funds:

Supported by National Natural Science Foundation of China (50975089) and China Postdoctoral Science Foundation (2012M521600)

Details
    Biography:

    WU Jun-Jun  Ph.D. candidate at South China University of Technology. His research interest covers visual navigation for mobile robots, and simultaneous localization and mapping. E-mail: junjun-wu@hotmail.com


  • Abstract: Motion blur constrains the robustness of the visual navigation system of a humanoid robot. To address this problem, a real-time anomaly detection method based on motion-blur features is proposed. The negative effect of motion blur on a visual navigation system is first analyzed quantitatively, and the pattern of image motion blur on a humanoid robot is studied. On this basis, the motion-blur feature of each image is measured without reference, and an unsupervised anomaly-detection technique then clusters the motion-blur features arriving in time sequence, recalling blur anomalies from the data stream in real time so as to strengthen the robustness of the robot's visual navigation system against motion blur. Simulation experiments and experiments on a humanoid robot show that, on public benchmark datasets and on a dataset from the humanoid robot NAO, the method is both fast (0.1 s per detection) and effective (98.5% recall, 90.7% precision). The detection framework also generalizes well to ground mobile robots, is easy to integrate, and can readily work in concert with a visual navigation system.
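The pipeline the abstract describes — a no-reference blur measurement per frame, followed by unsupervised anomaly detection over the time series of blur features — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the variance-of-Laplacian sharpness proxy and the sliding-window robust z-score detector stand in for the paper's own blur feature and clustering-based detector, and the function names are hypothetical.

```python
import numpy as np

def blur_score(img: np.ndarray) -> float:
    """No-reference sharpness proxy: variance of the 3x3 Laplacian response.
    Sharp images have strong high-frequency content, so motion blur makes
    this score drop. (Illustrative stand-in for the paper's blur feature.)"""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    h, w = img.shape
    lap = np.zeros((h - 2, w - 2))
    for dy in range(3):               # valid 3x3 convolution via shifted views
        for dx in range(3):
            lap += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return float(lap.var())

def detect_blur_anomalies(scores, window=20, n_sigma=2.0):
    """Unsupervised streaming detector: flag frames whose sharpness falls far
    below the recent median (one-sided robust z-score over a sliding window).
    Returns the indices of suspected motion-blurred frames."""
    anomalies = []
    for i, s in enumerate(scores):
        hist = np.array(scores[max(0, i - window):i] or [s])
        med = np.median(hist)
        mad = np.median(np.abs(hist - med)) + 1e-9   # avoid division by zero
        if (med - s) / (1.4826 * mad) > n_sigma:     # only low sharpness is anomalous
            anomalies.append(i)
    return anomalies

# A stream of sharpness scores in which frame 30 is suddenly blurred:
scores = [10.0] * 30 + [1.0] + [10.0] * 5
print(detect_blur_anomalies(scores))  # [30]
```

Both stages are cheap (one linear filter plus order statistics per frame), which is consistent with the real-time budget the paper reports (about 0.1 s per detection); the recall/precision figures above, of course, refer to the authors' own feature and detector, not to this sketch.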
Publication history
  • Received:  2013-03-12
  • Revised:  2013-08-01
  • Published:  2014-02-20
