
基于状态识别的高炉料面视频关键帧提取方法

黄建才 蒋朝辉 桂卫华 潘冬 许川 周科

黄建才, 蒋朝辉, 桂卫华, 潘冬, 许川, 周科. 基于状态识别的高炉料面视频关键帧提取方法. 自动化学报, 2023, 49(11): 2257−2271 doi: 10.16383/j.aas.c220969
Huang Jian-Cai, Jiang Zhao-Hui, Gui Wei-Hua, Pan Dong, Xu Chuan, Zhou Ke. Key frame extraction method of blast furnace burden surface video based on state recognition. Acta Automatica Sinica, 2023, 49(11): 2257−2271 doi: 10.16383/j.aas.c220969

基于状态识别的高炉料面视频关键帧提取方法

doi: 10.16383/j.aas.c220969
基金项目: 国家重大科研仪器研制项目(61927803), 国家自然科学基金基础科学中心项目(61988101), 湖南省科技创新计划(2021RC4054), 湖南省研究生科研创新项目(CX20210243), 中南大学中央高校基本科研业务费专项资金(2021zzts0184)资助
详细信息
    作者简介:

    黄建才:中南大学自动化学院博士研究生. 2018年获重庆大学学士学位. 主要研究方向为图像处理, 计算机视觉和复杂工业过程参数检测. E-mail: huangjiancai@csu.edu.cn

    蒋朝辉:中南大学自动化学院教授. 2011年获中南大学博士学位. 主要研究方向为智能传感与检测技术, 图像处理与智能识别和人工智能与机器学习. 本文通信作者. E-mail: jzh0903@csu.edu.cn

    桂卫华:中南大学自动化学院教授. 1981年获得中南矿冶学院硕士学位. 主要研究方向为复杂工业过程建模, 优化与控制应用和故障诊断与分布式鲁棒控制. E-mail: gwh@csu.edu.cn

    潘冬:中南大学自动化学院讲师. 分别于2015年和2021年获中南大学学士学位和博士学位. 主要研究方向为红外热成像, 视觉检测, 图像处理和深度学习. E-mail: pandong@csu.edu.cn

    许川:中南大学自动化学院博士研究生. 2021年获中南大学硕士学位. 主要研究方向为数据分析, 机器学习和复杂工业过程建模. E-mail: csuxuchuan@csu.edu.cn

    周科:中南大学自动化学院博士研究生. 2018年获重庆大学学士学位. 主要研究方向为复杂工业过程的数据挖掘、建模与优化控制. E-mail: zhouke95@csu.edu.cn

Key Frame Extraction Method of Blast Furnace Burden Surface Video Based on State Recognition

Funds: Supported by the National Major Scientific Research Instrument Development Project of China (61927803), the National Natural Science Foundation of China Basic Science Center Project (61988101), the Science and Technology Innovation Program of Hunan Province (2021RC4054), the Hunan Provincial Innovation Foundation for Postgraduate (CX20210243), and the Fundamental Research Funds for the Central Universities of Central South University (2021zzts0184)
More Information
    Author Bio:

    HUANG Jian-Cai Ph.D. candidate at the School of Automation, Central South University. He received his bachelor's degree from Chongqing University in 2018. His research interest covers image processing, computer vision, and parameter detection of complex industrial processes

    JIANG Zhao-Hui Professor at the School of Automation, Central South University. He received his Ph.D. degree from Central South University in 2011. His research interest covers intelligent sensing and detection technology, image processing and intelligent recognition, and artificial intelligence and machine learning. Corresponding author of this paper

    GUI Wei-Hua Professor at the School of Automation, Central South University. He received his master's degree from Central South Institute of Mining and Metallurgy in 1981. His research interest covers complex industrial process modeling, optimization and control applications, and fault diagnosis and distributed robust control

    PAN Dong Lecturer at the School of Automation, Central South University. He received his bachelor's degree and Ph.D. degree from Central South University in 2015 and 2021, respectively. His research interest covers infrared thermography, vision-based measurement, image processing, and deep learning

    XU Chuan Ph.D. candidate at the School of Automation, Central South University. He received his master's degree from Central South University in 2021. His research interest covers data analysis, machine learning, and complex industrial process modeling

    ZHOU Ke Ph.D. candidate at the School of Automation, Central South University. He received his bachelor's degree from Chongqing University in 2018. His research interest covers data mining, modeling, and optimal control of complex industrial processes

  • Abstract: Key frames of a blast furnace burden surface video are the image sequences in which the central gas flow is stable and the burden surface is clear, unoccluded by falling burden or dust, and rich in distinct features; they are important for obtaining the in-furnace operating state in time and for guiding the burden distribution operation at the furnace top. However, because of the harsh smelting environment inside the furnace and the periodic, intermittent nature of charging, burden surface videos suffer from information redundancy, uneven image quality, and frequently changing states, and cannot be analyzed directly. To automatically and accurately select clear and stable burden surface images from the large volume of video recorded during smelting, a key frame extraction method based on state recognition is proposed. First, burden surface videos are collected during smelting with a high-temperature industrial endoscope, and newly observed burden surface reaction phenomena and morphology changes are presented clearly and completely. Then, two features that characterize the motion state of the burden surface are extracted from its salient regions, namely the density of feature points and their pixel displacement, and a clustering method based on a local density maxima-based Gaussian mixture model (LDGMM) is proposed to recognize the burden surface state. Finally, key frames under the different states of each burden cycle are extracted based on the state recognition results. Experimental results show that the method accurately recognizes burden surface states, removes redundant information from the video, and extracts key frames under different states, providing guidance for optimizing the burden distribution operation.
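The LDGMM named in the abstract seeds a Gaussian mixture model at local maxima of the feature density rather than at random, so expectation maximization starts near the true modes. The paper's features are two-dimensional (feature-point density and mean optical flow) and its exact formulation is not reproduced here; the pure-Python sketch below only illustrates the density-maxima seeding idea on synthetic 1-D data.

```python
import math
import random

def density_maxima(samples, bins=20):
    """Return the centers of local maxima of a histogram of the samples."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in samples:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    centers = []
    for i, c in enumerate(counts):
        left = counts[i - 1] if i > 0 else -1
        right = counts[i + 1] if i < bins - 1 else -1
        if c > left and c >= right:
            centers.append(lo + (i + 0.5) * width)
    return centers

def fit_gmm(samples, means, iters=50):
    """Plain EM for a 1-D Gaussian mixture; means are seeded by the caller."""
    k = len(means)
    weights = [1.0 / k] * k
    variances = [0.01] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        resp = []
        for x in samples:
            p = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                 for w, m, v in zip(weights, means, variances)]
            s = sum(p) or 1e-300
            resp.append([pi / s for pi in p])
        # M-step: re-estimate weights, means, and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-300
            means[j] = sum(r[j] * x for r, x in zip(resp, samples)) / nj
            variances[j] = max(sum(r[j] * (x - means[j]) ** 2
                                   for r, x in zip(resp, samples)) / nj, 1e-6)
            weights[j] = nj / len(samples)
    return means, variances, weights

# Two synthetic motion states: a "stable" mode near 0.2 and a "charging"
# mode near 0.8, standing in for a 1-D motion feature.
random.seed(0)
data = ([random.gauss(0.2, 0.05) for _ in range(200)]
        + [random.gauss(0.8, 0.05) for _ in range(200)])
means, variances, weights = fit_gmm(data, density_maxima(data))
print(sorted(round(m, 2) for m in means))
```

Because every seed starts inside a dense region, EM is unlikely to place a component between the two modes, the kind of poor local optimum that randomly initialized GMM training can reach (compare the likelihood curves in Fig. 9).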
  • 图  1  料面视频采集系统

    Fig.  1  Burden surface video acquisition system

    图  2  设备安装图

    Fig.  2  Equipment installation diagram

    图  3  一个布料周期内的料面反应现象

    Fig.  3  Burden surface reaction phenomenon in a burden cycle

    图  4  一个布料周期内料面视频帧

    Fig.  4  Burden surface video frames in a burden cycle

    图  5  料面显著性区域提取

    Fig.  5  Salient region extraction of burden surface

    图  6  提取的特征点及其光流矢量

    Fig.  6  Extracted feature points and their corresponding optical flow vectors

    图  7  特征点密集程度和平均光流矢量的概率密度分布图

    Fig.  7  Probability density distribution map of feature point density and average optical flow vectors

    图  8  特征点平均光流矢量概率分布直方图

    Fig.  8  Probability distribution histogram of average feature point optical flow vectors

    图  9  不同方法不同训练过程似然函数变化情况

    Fig.  9  Change of likelihood function of different methods in multiple training processes

    图  10  料面视频差异度曲线

    Fig.  10  Difference curve of burden surface video

    图  11  不同布料周期不同状态的识别结果

    Fig.  11  Various state recognition results in different burden cycles

    图  12  相同料面图像不同方法的聚类结果

    Fig.  12  Clustering results of different methods for the same burden surface image

    图  13  不同方法识别不同料面状态的对比结果

    Fig.  13  Comparison results of different burden surface states identified by different methods

    图  14  不同方法识别的不同状态

    Fig.  14  Different states identified by different methods

    图  15  所提方法的混淆矩阵

    Fig.  15  Confusion matrix of the proposed method

    图  16  采用不同方法提取的同一布料周期的部分关键帧

    Fig.  16  Extracted partial key frames of the same burden cycle using different methods

    图  17  提取的不同布料周期的关键帧

    Fig.  17  Extracted key frames of different burden cycles

    图  18  不同形貌料面视频关键帧提取精度

    Fig.  18  Key frame extraction accuracy for burden surface videos with different morphologies

    表  1  不同方法的聚类效果比较

    Table  1  Comparison of clustering performance of different methods

    Method                      DB      CH       SC      SP
    LK optical flow             0.2603  474.41   0.9826  0.6013
    Feature-point optical flow  0.2867  5392.80  0.9949  0.7129
    GMM                         0.1376  1347.30  0.9816  0.8018
    Proposed method             0.0010  7762.36  0.9989  0.9537

    表  2  不同方法的识别精度比较

    Table  2  Accuracy comparison of recognition results of different methods

    Method           ARI     NMI     E       P
    DIFlow           0.4731  0.5105  1.0125  0.7666
    SelFlow          0.4133  0.4276  1.0629  0.7344
    Proposed method  0.7669  0.7602  0.5212  0.9083

    表  3  不同方法提取的关键帧精度比较

    Table  3  Accuracy comparison of key frames extracted by different methods

    Method             Key frames  Recall  Precision  F1
    DSBD               679         60.3%   28.9%      0.3904
    DeepReS            451         76.0%   54.8%      0.6366
    DSN                394         85.2%   70.3%      0.7705
    Manual annotation  325         -       -          -
    Proposed method    338         92.0%   88.5%      0.9020
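The recall, precision, and F1 values in Table 3 score each method's extracted key frames against the manually labeled set (325 frames). A minimal sketch of such an evaluation, assuming a within-tolerance frame-matching rule that the paper may define differently:

```python
# Evaluate extracted key frames against ground truth. The tolerance-based
# matching rule (an extracted frame counts as correct if it falls within
# `tol` frames of an unmatched ground-truth frame) is an assumption here.

def evaluate(extracted, ground_truth, tol=1):
    """Return (precision, recall, F1) for the extracted key frame indices."""
    unmatched = sorted(ground_truth)
    hits = 0
    for f in sorted(extracted):
        match = next((g for g in unmatched if abs(g - f) <= tol), None)
        if match is not None:
            unmatched.remove(match)  # each ground-truth frame matches once
            hits += 1
    precision = hits / len(extracted) if extracted else 0.0
    recall = hits / len(ground_truth) if ground_truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 10 ground-truth key frames, 8 extracted, 7 of them correct.
gt = [3, 12, 20, 31, 40, 52, 63, 75, 81, 90]
ex = [3, 13, 20, 31, 40, 52, 63, 77]
p, r, f1 = evaluate(ex, gt)
print(round(p, 3), round(r, 3), round(f1, 3))  # → 0.875 0.7 0.778
```

As a consistency check on Table 3: the proposed method's precision 88.5% and recall 92.0% give F1 = 2 x 0.885 x 0.920 / (0.885 + 0.920) ≈ 0.902, matching the reported 0.9020.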
Publication history
  • Received: 2022-12-12
  • Accepted: 2023-02-26
  • Published online: 2023-08-22
  • Issue date: 2023-11-22
