
An Improved YOLO Feature Extraction Algorithm and Its Application to Privacy Situation Detection of Social Robots

YANG Guan-Ci, YANG Jing, SU Zhi-Dong, CHEN Zhan-Jie

Citation: YANG Guan-Ci, YANG Jing, SU Zhi-Dong, CHEN Zhan-Jie. An Improved YOLO Feature Extraction Algorithm and Its Application to Privacy Situation Detection of Social Robots. ACTA AUTOMATICA SINICA, 2018, 44(12): 2238-2249. doi: 10.16383/j.aas.2018.c170265


doi: 10.16383/j.aas.2018.c170265

Details
    Author Bio:

    YANG Guan-Ci   Professor at the Key Laboratory of Advanced Manufacturing Technology of Ministry of Education, Guizhou University. His research interests cover intelligent autonomous robots, computational intelligence, and intelligent systems. E-mail: guanci_yang@163.com

    SU Zhi-Dong   Master student at the Key Laboratory of Advanced Manufacturing Technology of Ministry of Education, Guizhou University. His research interests cover natural language processing and intelligent autonomous social robots. E-mail: suzhidong2016@163.com

    CHEN Zhan-Jie   Master student at the Key Laboratory of Advanced Manufacturing Technology of Ministry of Education, Guizhou University. His research interests cover robot mapping and navigation, and intelligent autonomous social robots. E-mail: chenzhanjie0320@163.com

    Corresponding author:

    YANG Jing   Master student at the Key Laboratory of Advanced Manufacturing Technology of Ministry of Education, Guizhou University. Research interests cover intelligent vision computing and intelligent autonomous social robots. Corresponding author of this paper. E-mail: yang_jing0903@163.com

An Improved YOLO Feature Extraction Algorithm and Its Application to Privacy Situation Detection of Social Robots

Funds: 

Science and Technology Foundation of Guizhou Province (2015) 13

Science and Technology Foundation of Guizhou Province JZ [2014] 2004

Graduate Education Reform Fund of Education Bureau of Guizhou Province JG [2015] 002

Science and Technology Foundation of Guizhou Province LH [2016] 7433

National Natural Science Foundation of China 61863005

National Natural Science Foundation of China 61640209

Science and Technology Foundation of Guizhou Province PTRC [2018] 5702

  • Abstract: To improve YOLO's ability to recognize small objects and to address the loss of information during its feature-extraction process, an improved YOLO feature extraction algorithm is proposed. The object-detection methods DPM and R-FCN are integrated into YOLO, and an improved neural network structure is designed that contains a fully connected layer and a pool-first, then-convolve feature-extraction pattern to reduce the loss of feature information. A sliding-window merging algorithm based on RPN is then designed, yielding the feature extraction algorithm based on the improved YOLO. A situation-detection platform for social robots is built, and the overall workflow of situation detection is given. Six classes of home situations are designed, and a training set, a validation set, and four classes of test sets are established. The relationships between training steps and predicted probability estimates, and between learning rate and recognition accuracy, are tested and analyzed to find empirical values of the training steps and learning rate suited to the proposed algorithm. Test results show that the proposed algorithm achieves a privacy-situation detection accuracy of 94.48% with strong recognition robustness. Finally, comparison with YOLO shows that the proposed algorithm outperforms YOLO in recognition accuracy.
    1)  Recommended by Associate Editor HU Qing-Hua
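The "pool first, then convolve" feature-extraction pattern described in the abstract can be illustrated with a minimal sketch. This is a simplified, assumed illustration of the pattern only, not the authors' network: a 2x2 max-pool halves the spatial grid before a 3x3 convolution runs, so the convolution operates on a coarser map.

```python
# Minimal sketch of the pool-then-convolve pattern (illustrative only;
# the paper's actual layer configuration is not reproduced here).
import numpy as np

def max_pool2x2(x):
    """2x2 max pooling with stride 2 on a 2-D feature map."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def conv2d_valid(x, k):
    """'valid' 2-D cross-correlation of map x with kernel k."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def pool_then_conv(x, k):
    """Pooling applied before convolution, as in the abstract's pattern."""
    return conv2d_valid(max_pool2x2(x), k)

x = np.arange(64, dtype=float).reshape(8, 8)
k = np.ones((3, 3)) / 9.0   # 3x3 averaging kernel, chosen only for illustration
y = pool_then_conv(x, k)
print(y.shape)  # 8x8 map pools to 4x4; a 'valid' 3x3 conv then gives 2x2
```

Pooling first shrinks the spatial grid before convolution, which trades spatial resolution for a denser receptive field per convolution window.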
  • Fig. 1  Improved YOLO neural network structure

    Fig. 2  Comparison of object recognition with different grid scales

    Fig. 3  Social robot platform

    Fig. 4  Overall flow chart of the privacy situation detection system

    Fig. 5  Samples of the collected dataset

    Fig. 6  Variation trends of the proposed model under different training steps

    Fig. 7  Boxplot of prediction probability estimates under different training steps

    Fig. 8  Trend of model performance under different learning rates

    Fig. 9  Boxplot of prediction probability estimates with different learning rates

    Fig. 10  Boxplot of prediction probability estimates
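The RPN-based sliding-window merging step mentioned in the abstract can be sketched as an IoU-driven fusion of overlapping detection windows. The greedy strategy and the 0.5 threshold below are assumptions for illustration, not the paper's exact algorithm:

```python
# Hedged sketch: fuse overlapping sliding windows by IoU (assumed logic,
# not the paper's exact RPN-based merging algorithm).
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_windows(boxes, thr=0.5):
    """Greedily fuse boxes whose IoU exceeds thr into their bounding union."""
    merged = []
    for b in boxes:
        for i, m in enumerate(merged):
            if iou(m, b) > thr:
                merged[i] = (min(m[0], b[0]), min(m[1], b[1]),
                             max(m[2], b[2]), max(m[3], b[3]))
                break
        else:
            merged.append(tuple(b))
    return merged

windows = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(merge_windows(windows))  # the two overlapping windows fuse into one box
```

Merging strongly overlapping windows before classification reduces redundant proposals for the same object, which is the role the window-merging step plays in the detection pipeline.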

    Table 1  Model performance with different training steps

    Mnemonic  Steps    Mean prediction probability estimate  Mean recognition accuracy  Mean time per image (ms)
    L1        1 000    0.588                                 0.733                      2.46
    L2        2 000    0.627                                 0.750                      2.50
    L3        3 000    0.629                                 0.717                      2.51
    L4        4 000    0.642                                 0.700                      2.53
    L5        5 000    0.729                                 0.800                      2.55
    L6        6 000    0.731                                 0.817                      2.52
    L7        7 000    0.782                                 0.850                      2.45
    L8        8 000    0.803                                 0.883                      2.17
    L9        9 000    0.830                                 0.967                      2.16
    L10       10 000   0.804                                 0.900                      2.21
    L11       20 000   0.569                                 0.417                      2.27

    Table 2  Statistical results of model performance with different learning rates

    Mnemonic  Learning rate  Mean prediction probability estimate  Mean recognition accuracy
    R1        1              0.670                                 0.817
    R2        10^-1          0.911                                 1.000
    R3        10^-2          0.843                                 0.933
    R4        10^-3          0.805                                 0.950
    R5        10^-4          0.801                                 0.950
    R6        10^-5          0.672                                 0.933
    R7        10^-6          0.626                                 0.900
    R8        10^-7          0.565                                 0.880
    R9        10^-8          0.569                                 0.867
    R10       10^-9          0.391                                 0.800
    R11       10^-10         0.315                                 0.417

    Table 3  Privacy situation recognition accuracy of the proposed system on different test sets

    Experiment    Test set   C1     C2     C3     C4     C5     C6
    Experiment 1  Class a    0.900  0.975  0.975  0.975  1.000  0.975
                  Class b    0.850  0.950  0.975  0.925  1.000  0.950
    Experiment 2  Class c    0.850  0.850  0.950  1.000  1.000  0.925
    Experiment 3  Class d    0.850  0.850  0.850  0.900  0.975  0.875

    Table 4  Statistics of privacy category prediction estimates of the proposed system on different test data (mean/variance)

    Test set   C1           C2           C3           C4           C5           C6
    Class a    0.820/0.275  0.968/0.006  0.971/0.168  0.972/0.038  0.920/0.141  0.972/0.152
    Class b    0.789/0.276  0.849/0.192  0.922/0.096  0.997/0.003  0.918/0.216  0.869/0.191
    Class c    0.751/0.359  0.774/0.253  0.937/0.272  0.974/0.047  0.854/0.212  0.864/0.214
    Class d    0.742/0.304  0.713/0.274  0.854/0.292  0.890/0.186  0.768/0.332  0.807/0.311

    Recognition time per image (ms): C1 3.32, C2 1.62, C3 3.13, C4 2.87, C5 2.69, C6 3.15

    Table 5  Privacy situation recognition accuracy by applying YOLO

    Experiment    Test set   C1     C2     C3     C4     C5     C6
    Experiment 1  Class a    0.750  0.975  0.950  1.000  0.975  0.950
                  Class b    0.725  0.975  0.875  0.875  0.825  0.750
    Experiment 2  Class c    0.625  0.850  0.675  0.675  0.600  0.750
    Experiment 3  Class d    0.600  0.600  0.600  0.600  0.600  0.725

    Table 6  Statistical results of privacy category prediction estimates by applying YOLO (mean/variance)

    Category  Class a      Class b      Class c      Class d
    C1        0.644/0.366  0.568/0.465  0.540/0.381  0.501/0.413
    C2        0.939/0.182  0.923/0.149  0.693/0.317  0.305/0.433
    C3        0.873/0.244  0.867/0.302  0.866/0.290  0.851/0.313
    C4        0.999/0.001  0.963/0.017  0.647/0.439  0.513/0.399
    C5        0.972/0.133  0.815/0.228  0.570/0.381  0.568/0.465
    C6        0.936/0.223  0.725/0.339  0.674/0.386  0.622/0.345
Publication history
  • Received:  2017-05-15
  • Accepted:  2017-08-29
  • Published:  2018-12-20
