

WCGAN-based Illumination-invariant Color Measuring of Mineral Flotation Froth Images

Liu Jin-Ping, He Jie-Zhou, Tang Zhao-Hui, Xie Yong-Fang, Ma Tian-Yu

Citation: Liu Jin-Ping, He Jie-Zhou, Tang Zhao-Hui, Xie Yong-Fang, Ma Tian-Yu. WCGAN-based illumination-invariant color measuring of mineral flotation froth images. Acta Automatica Sinica, 2022, 48(9): 2301−2315. doi: 10.16383/j.aas.c190330

doi: 10.16383/j.aas.c190330


Funds: Supported by National Natural Science Foundation of China (61971188, 61771492), National Science Fund for Distinguished Young Scholars (61725306), Joint Fund of National Natural Science Foundation of China and Guangdong Provincial Government (U1701261), Hunan Natural Science Foundation (2018JJ3349), and Hunan Postgraduate Research Innovation Project (CX2018B312, CX20190415)
More Information
    Author Bio:

    LIU Jin-Ping Associate professor at the College of Information Science and Engineering, Hunan Normal University. His research interest covers intelligent information processing, digital signal processing, and pattern recognition. Corresponding author of this paper. E-mail: ljp202518@163.com

    HE Jie-Zhou Master student at the College of Information Science and Engineering, Hunan Normal University. His research interest covers computer vision and pattern recognition. E-mail: hdc@smail.hunnu.edu.cn

    TANG Zhao-Hui Professor at the School of Automation, Central South University. He was a visiting scholar at the University of Duisburg-Essen, Germany, from 2005 to 2006. His research interest covers signal processing and industrial process fault diagnosis. E-mail: zhtang@csu.edu.cn

    XIE Yong-Fang Professor at the School of Automation, Central South University. His research interest covers modeling and control of complex industrial processes, decentralized robust control, and fault diagnosis. E-mail: yfxie@csu.edu.cn

    MA Tian-Yu Ph.D., lecturer at the College of Physics and Electronics, Hunan Normal University. His research interest covers complex industrial process modeling and optimal control. E-mail: mty@hunnu.edu.cn

  • Abstract: The surface color of flotation froth is the fastest and most convenient direct indicator of the mineral processing production index (concentrate grade). However, froth image signals inevitably suffer severe color cast caused by the cross-interference of multiple variable illumination sources, which makes the flotation indices difficult to evaluate accurately. This paper recasts the conventional illumination-estimation-based image color constancy problem as a structure-preserving image-to-image color (style) transfer problem, and proposes a Wasserstein distance-based cycle generative adversarial network (WCGAN) for online measurement of illumination-invariant color features of froth images. Experiments on standard color constancy datasets and on an industrial bauxite flotation process show that WCGAN effectively translates images captured under various unknown (color-casted) illumination conditions to the reference illumination condition, with fast translation speed and support for online model updating. Compared with conventional color transfer models based on generative adversarial learning, WCGAN better preserves structural information such as froth contours and surface texture, and thus provides effective objective evaluation information for machine-vision-based online monitoring of production indices in mineral flotation processes.
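Conceptually, the WCGAN described above is a CycleGAN-style pair of generators trained against Wasserstein critics, with a cycle-consistency term that keeps the froth structure intact during the color transfer. The following is a minimal sketch of that training objective, not the authors' implementation; the toy network sizes, the weight lambda_cyc, and the omission of the Lipschitz constraint (weight clipping or a gradient penalty would be added in practice) are all assumptions.

```python
# Sketch only: cycle-consistent generators + Wasserstein critics for mapping froth
# images under unknown illumination (domain X) to the reference illumination (domain Y).
import torch
import torch.nn as nn

class SmallGenerator(nn.Module):
    """Toy encoder-decoder generator; the paper's generator is deeper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Critic(nn.Module):
    """Wasserstein critic: outputs an unbounded scalar score per image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x)

G, F = SmallGenerator(), SmallGenerator()   # G: X -> Y (to reference light), F: Y -> X
D_X, D_Y = Critic(), Critic()
l1 = nn.L1Loss()

def critic_loss(D, real, fake):
    # Wasserstein critic objective (to be minimized); a Lipschitz constraint
    # (weight clipping or gradient penalty) would be enforced separately.
    return D(fake).mean() - D(real).mean()

def generator_loss(x, y, lambda_cyc=10.0):
    fake_y, fake_x = G(x), F(y)
    adv = -D_Y(fake_y).mean() - D_X(fake_x).mean()   # fool both critics
    cyc = l1(F(fake_y), x) + l1(G(fake_x), y)        # structure-preserving cycle term
    return adv + lambda_cyc * cyc

# Dummy batch just to show the shapes involved.
x = torch.rand(2, 3, 64, 64) * 2 - 1   # images under unknown illumination
y = torch.rand(2, 3, 64, 64) * 2 - 1   # images under the reference illumination
print(critic_loss(D_Y, y, G(x).detach()).item(), generator_loss(x, y).item())
```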
  • Fig.  1  Scheme of the color translation of froth images

    Fig.  2  CycleGAN structure

    Fig.  3  Generator structure of WCGAN

    Fig.  4  Image color correction results

    Fig.  5  Bauxite flotation circuit

    Fig.  6  Reference-illumination froth image and its Lab color distribution

    Fig.  7  Color correction results of flotation froth images

    Fig.  8  Correlation between froth image color features and A/S ((a1) and (a2): correlation between the H-channel mean and A/S after and before correction, respectively; (b1) and (b2): correlation between the a-channel standard deviation and A/S after and before correction; (c1) and (c2): correlation between the normalized R-channel mean and A/S after and before correction)

    Fig.  9  Prediction of concentrate grade based on color features of froth images
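For reference, the color features named in Fig. 8 (H-channel mean, a-channel standard deviation in Lab, and normalized R-channel mean) could be extracted from a froth image roughly as sketched below with OpenCV. The exact feature definitions used in the paper (for example, how the R channel is normalized) are assumptions, and the image path is hypothetical.

```python
# Hedged sketch of the per-image color features correlated with A/S in Fig. 8.
import cv2
import numpy as np

def froth_color_features(bgr_image: np.ndarray) -> dict:
    """Mean hue, a-channel std (Lab), and normalized-R mean for one froth image."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2Lab)
    b, g, r = [bgr_image[:, :, i].astype(np.float64) for i in range(3)]
    r_norm = r / (b + g + r + 1e-6)                    # assumed normalization: R/(R+G+B)
    return {
        "h_mean": float(hsv[:, :, 0].mean()),          # mean of the H channel
        "a_std": float(lab[:, :, 1].std()),            # std of the a channel in Lab
        "r_norm_mean": float(r_norm.mean()),
    }

img = cv2.imread("froth_frame.png")                    # hypothetical image path
if img is not None:
    print(froth_color_features(img))
```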

    Table  1  Comparison of statistics-based color constancy methods on the Gehler-Shi 568 data

    Method           | Chromaticity error (Median / Max / RMS) | Angular error (Mean / Max / RMS) | Test time (s)
    Gray-Edge [30]   | 0.62 / 1.35 / 0.73 | 6.3 / 10.4 / 6.5  | 0.9
    MAX-RGB [28]     | 1.17 / 2.55 / 1.26 | 9.9 / 18.6 / 10.3 | 0.7
    Gray-World [29]  | 0.78 / 1.47 / 0.88 | 7.6 / 17.9 / 8.4  | 0.8
    White-patch [31] | 0.73 / 1.56 / 0.81 | 7.5 / 14.7 / 8.3  | 0.9
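The tables compare methods by chromaticity error, angular error (in degrees), and run time. The angular error between an estimated and a ground-truth illuminant is conventionally the angle between the two RGB vectors; a sketch follows, where the rg-chromaticity distance used for the chromaticity error is an assumption rather than the paper's exact definition.

```python
# Conventional angular error plus an assumed rg-chromaticity distance.
import numpy as np

def angular_error_deg(est: np.ndarray, gt: np.ndarray) -> float:
    """Angle (degrees) between estimated and ground-truth illuminant RGB vectors."""
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def rg_chromaticity_error(est: np.ndarray, gt: np.ndarray) -> float:
    """Assumed definition: Euclidean distance between normalized (r, g) chromaticities."""
    est_c = est[:2] / est.sum()
    gt_c = gt[:2] / gt.sum()
    return float(np.linalg.norm(est_c - gt_c))

print(angular_error_deg(np.array([0.9, 1.0, 1.1]), np.array([1.0, 1.0, 1.0])))
```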

    Table  2  Comparison of machine learning-based color constancy methods on the Gehler-Shi 568 data

    Method           | SSIM   | Chromaticity error (Median / Max / RMS) | Angular error (Mean / Max / RMS) | Training time (s) | Test time (s)
    FC4 [13]         | 0.8576 | 0.57 / 1.39 / 0.65 | 4.7 / 11.3 / 5.6 | 1.7 | 0.9
    Neural Gray [33] | 0.9166 | 0.69 / 1.92 / 0.77 | 5.7 / 13.4 / 6.5 | 1.4 | 0.5
    Based-SVR [34]   | 0.8945 | 0.61 / 1.88 / 0.70 | 5.4 / 12.6 / 6.3 | 1.6 | 1.2
    CycleGAN [35]    | 0.6918 | 0.98 / 3.11 / 1.07 | 6.3 / 16.5 / 7.4 | 3.0 | 0.12
    WD + CycleGAN    | 0.8399 | 0.76 / 1.84 / 0.69 | 5.1 / 14.3 / 5.9 | 3.0 | 0.12
    WCGAN            | 0.9897 | 0.42 / 1.31 / 0.50 | 4.3 / 10.5 / 5.4 | 1.5 | 0.06
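The SSIM column in Tables 2 and 4 (reference [32]) quantifies how well the corrected image preserves the structure of the input. A minimal usage sketch with scikit-image, assuming version 0.19 or later for the channel_axis argument, on placeholder images:

```python
# SSIM between an original image and its color-corrected version; 1.0 means identical
# structure, and values closer to 1 indicate better structure preservation.
import numpy as np
from skimage.metrics import structural_similarity

original = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)   # placeholder images
corrected = original.copy()
score = structural_similarity(original, corrected, channel_axis=-1)
print(score)
```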

    Table  3  Comparison of statistics-based color constancy methods on the SFU 321 lab images

    Method           | Chromaticity error (Median / Max / RMS) | Angular error (Mean / Max / RMS) | Test time (s)
    Gray-Edge [30]   | 0.54 / 1.26 / 0.62 | 5.9 / 12.7 / 6.8   | 0.9
    MAX-RGB [28]     | 1.16 / 2.46 / 1.24 | 10.5 / 17.6 / 11.4 | 0.7
    Gray-World [29]  | 0.74 / 1.43 / 0.83 | 7.9 / 18.2 / 8.7   | 0.8
    White-patch [31] | 0.64 / 1.49 / 0.72 | 7.1 / 15.3 / 7.9   | 0.9

    Table  4  Comparison of machine learning-based color constancy methods on the SFU 321 lab images

    Method           | SSIM   | Chromaticity error (Median / Max / RMS) | Angular error (Mean / Max / RMS) | Training time (s) | Test time (s)
    FC4 [13]         | 0.8791 | 0.61 / 1.45 / 0.69 | 5.2 / 9.0 / 6.0  | 1.1 | 0.7
    Neural Gray [33] | 0.9286 | 0.71 / 1.87 / 0.80 | 6.4 / 12.1 / 7.3 | 0.9 | 0.4
    Based-SVR [34]   | 0.9139 | 0.63 / 1.84 / 0.72 | 5.8 / 12.1 / 6.5 | 1.3 | 0.9
    CycleGAN [35]    | 0.7347 | 0.84 / 2.11 / 0.92 | 6.2 / 15.7 / 7.9 | 2.7 | 0.09
    WD + CycleGAN    | 0.9145 | 0.70 / 1.75 / 0.66 | 4.7 / 13.9 / 6.9 | 2.7 | 0.09
    WCGAN            | 0.9936 | 0.39 / 1.28 / 0.45 | 3.1 / 12.2 / 4.1 | 1.2 | 0.05
  • [1] Szczerkowska S, Wiertel-Pochopien A, Zawala J, Larsen E, Kowalczuk P B. Kinetics of froth flotation of naturally hydrophobic solids with different shapes. Minerals Engineering, 2018, 121: 90-99 doi: 10.1016/j.mineng.2018.03.006
    [2] Jiang Yi, Fan Jia-Lu, Jia Yao, Chai Tian-You. Data-driven flotation process operational feedback decoupling control. Acta Automatica Sinica, 2019, 45(4): 759-770 (in Chinese) doi: 10.16383/j.aas.2018.c170552
    [3] Gui Wei-Hua, Yang Chun-Hua, Xu De-Gang, Lu Ming, Xie Yong-Fang. Machine-vision-based online measuring and controlling technologies for mineral flotation: a review. Acta Automatica Sinica, 2013, 39(11): 1879-1888 (in Chinese) doi: 10.3724/SP.J.1004.2013.01879
    [4] Jahedsaravani A, Massinaei M, Marhaban M H. Development of a machine vision system for real-time monitoring and control of batch flotation process. International Journal of Mineral Processing, 2017, 167: 16-26 doi: 10.1016/j.minpro.2017.07.011
    [5] Popli K, Maries V, Afacan A, Liu Q, Prasad V. Development of a vision-based online soft sensor for oil sands flotation using support vector regression and its application in the dynamic monitoring of bitumen extraction. The Canadian Journal of Chemical Engineering, 2018, 96(7): 1532-1540 doi: 10.1002/cjce.23164
    [6] Xie Y F, Wu J, Xu D G, Yang C H, Gui W H. Reagent addition control for stibium rougher flotation based on sensitive froth image features. IEEE Transactions on Industrial Electronics, 2017, 64(5): 4199-4206 doi: 10.1109/TIE.2016.2613499
    [7] Liu J P, Zhou J M, Tang Z H, Gui W H, Xie Y F, He J Z, et al. Toward flotation process operation-state identification via statistical modeling of biologically inspired Gabor filtering responses. IEEE Transactions on Cybernetics, 2020, 50(10): 4242-4255 doi: 10.1109/TCYB.2019.2909763
    [8] Reddick J F, Hesketh A H, Morar S H, Bradshaw D J. An evaluation of factors affecting the robustness of colour measurement and its potential to predict the grade of flotation concentrate. Minerals Engineering, 2009, 22(1): 64-69 doi: 10.1016/j.mineng.2008.03.018
    [9] Gijsenij A, Gevers T, Weijer J V D. Computational color constancy: Survey and experiments. IEEE Transactions on Image Processing, 2011, 20(9): 2475-2489 doi: 10.1109/TIP.2011.2118224
    [10] Oh S W, Kim S J. Approaching the computational color constancy as a classification problem through deep learning. Pattern Recognition, 2017, 61: 405-416 doi: 10.1016/j.patcog.2016.08.013
    [11] Gatta C, Farup I. Gamut mapping in RGB colour spaces with the iterative ratios diffusion algorithm. In: Proceedings of the 2017 S&T International Symposium on Electronic Imaging: Color Imaging XXII: Displaying, Processing, Hardcopy, and Applications. Burlingame, USA: Ingenta, 2017. 12−20
    [12] Bianco S, Cusano C, Schettini R. Single and multiple illuminant estimation using convolutional neural networks. IEEE Transactions on Image Processing, 2017, 26(9): 4347-4362 doi: 10.1109/TIP.2017.2713044
    [13] Hu Y M, Wang B Y, Lin S. FC4: Fully convolutional color constancy with confidence-weighted pooling. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 330−339
    [14] Wang C, Gao R, Wei W, Shafie-khah M, Bi T S, Catalão J P S. Risk-based distributionally robust optimal gas-power flow with Wasserstein distance. IEEE Transactions on Power Systems, 2019, 34(3): 2190-2204 doi: 10.1109/TPWRS.2018.2889942
    [15] Zhao Hong-Wei, Xie Yong-Fang, Jiang Zhao-Hui, Xu De-Gang, Yang Chun-Hua, Gui Wei-Hua. An intelligent optimal setting approach based on froth features for level of flotation cells. Acta Automatica Sinica, 2014, 40(6): 1086-1097 (in Chinese)
    [16] Goodfellow I J, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. In: Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press, 2014. 2672−2680
    [17] Isola P, Zhu J Y, Zhou T H, Efros A A. Image-to-image translation with conditional adversarial networks. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 5967−5976
    [18] Kim T, Cha M, Kim H, Lee J K, Kim J. Learning to discover cross-domain relations with generative adversarial networks. In: Proceedings of the 34th International Conference on Machine Learning. Sydney, Australia: JMLR.org, 2017. 1857−1865
    [19] Arjovsky M, Chintala S, Bottou L. Wasserstein generative adversarial networks. In: Proceedings of the 34th International Conference on Machine Learning. Sydney, Australia: JMLR.org, 2017. 214−223
    [20] Nowozin S, Cseke B, Tomioka R. f-GAN: Training generative neural samplers using variational divergence minimization. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: Curran Associates Inc., 2016. 271−279
    [21] Arjovsky M, Bottou L. Towards principled methods for training generative adversarial networks. In: Proceedings of the 5th International Conference on Learning Representations (ICLR). Toulon, France: OpenReview.net, 2017. 1−17
    [22] Yao Nai-Ming, Guo Qing-Pei, Qiao Feng-Chun, Chen Hui, Wang Hong-An. Robust facial expression recognition with generative adversarial networks. Acta Automatica Sinica, 2018, 44(5): 865-877 (in Chinese) doi: 10.16383/j.aas.2018.c170477
    [23] Mukkamala M C, Hein M. Variants of RMSProp and Adagrad with logarithmic regret bounds. In: Proceedings of the 34th International Conference on Machine Learning. Sydney, Australia: JMLR.org, 2017. 2545−2553
    [24] Sultana N N, Mandal B, Puhan N B. Deep residual network with regularised fisher framework for detection of melanoma. IET Computer Vision, 2018, 12(8): 1096-1104 doi: 10.1049/iet-cvi.2018.5238
    [25] Chen G, Chacón L, Barnes D C. An efficient mixed-precision, hybrid CPU–GPU implementation of a nonlinearly implicit one-dimensional particle-in-cell algorithm. Journal of Computational Physics, 2012, 231(16): 5374-5388 doi: 10.1016/j.jcp.2012.04.040
    [26] Gehler P V, Rother C, Blake A, Minka T, Sharp T. Bayesian color constancy revisited. In: Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition. Anchorage, USA: IEEE, 2008. 1−8
    [27] Barnard K, Martin L, Funt B, Coath A. A data set for color research. Color Research & Application, 2002, 27(3): 147-151
    [28] Hussain A, Akbari A S. Color constancy algorithm for mixed-illuminant scene images. IEEE Access, 2018, 6: 8964-8976 doi: 10.1109/ACCESS.2018.2808502
    [29] Sulistyo S B, Woo W L, Dlay S S. Regularized neural networks fusion and genetic algorithm based on-field nitrogen status estimation of wheat plants. IEEE Transactions on Industrial Informatics, 2017, 13(1): 103-114 doi: 10.1109/TII.2016.2628439
    [30] Yoo J H, Kyung W J, Choi J S, Ha Y H. Color image enhancement using weighted multi-scale compensation based on the gray world assumption. Journal of Imaging Science and Technology, 2017, 61(3): Article No. 030507
    [31] Joze H R V, Drew M S. White patch gamut mapping colour constancy. In: Proceedings of the 19th IEEE International Conference on Image Processing. Orlando, USA: IEEE, 2012. 801−804
    [32] Wang Z, Bovik A C, Sheikh H R, Simoncelli E P. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 2004, 13(4): 600-612 doi: 10.1109/TIP.2003.819861
    [33] Faghih M M, Moghaddam M E. Neural Gray: A color constancy technique using neural network. Color Research & Application, 2014, 39(6): 571-581
    [34] Zhang J X, Zhang P, Wu X L, Zhou Z Y, Yang C. Illumination compensation in textile colour constancy, based on an improved least-squares support vector regression and an improved GM(1,1) model of grey theory. Coloration Technology, 2017, 133(2): 128-134 doi: 10.1111/cote.12243
    [35] Zhu J Y, Park T, Isola P, Efros A A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 2242−2251
    [36] Yuan X F, Ge Z Q, Song Z H. Soft sensor model development in multiphase/multimode processes based on Gaussian mixture regression. Chemometrics and Intelligent Laboratory Systems, 2014, 138: 97-109 doi: 10.1016/j.chemolab.2014.07.013
Publication history
  • Received: 2019-05-05
  • Accepted: 2019-09-02
  • Published online: 2022-08-08
  • Issue date: 2022-09-16
