

Non-cascade Dual-rate Composite Decentralized Operational Optimal Control for Complex Industrial Processes

Zhao Jian-Guo, Yang Chun-Yu

Citation: Zhao Jian-Guo, Yang Chun-Yu. Non-cascade dual-rate composite decentralized operational optimal control for complex industrial processes. Acta Automatica Sinica, 2022, 45(x): 1−13. doi: 10.16383/j.aas.c210897

doi: 10.16383/j.aas.c210897


Funds: Supported by National Natural Science Foundation of China (61873272, 62073327), and Open Project Foundation of State Key Laboratory of Synthetical Automation for Process Industries of Northeastern University (2019-KF-23-04)
    Author Bio:

    ZHAO Jian-Guo  Ph.D. candidate at the School of Information and Control Engineering, China University of Mining and Technology. He received his master's degree from China University of Mining and Technology in 2020. His research interest covers multi-time scale systems and reinforcement learning based optimal control. E-mail: jianguozhao@cumt.edu.cn

    YANG Chun-Yu  Professor at the School of Information and Control Engineering, China University of Mining and Technology. He received his Ph.D. degree from Northeastern University in 2009. His research interest covers intelligent control and optimization of multi-time scale systems. Corresponding author of this paper. E-mail: chunyuyang@cumt.edu.cn

  • Abstract: Complex industrial processes feature high-dimensional models, multi-time-scale coupling, and dynamic uncertainty, which have made their operational optimal control (OOC) a persistent difficulty and research focus in the control community. This paper considers a class of industrial processes in which multiple fast, interconnected unit devices are connected in series with a slow operational process whose model is unknown, and proposes a non-cascade dual-rate composite decentralized OOC method driven jointly by data and models. Using singular perturbation theory, the non-cascade dual-rate OOC problem is cast as optimal set-point tracking for the slow subsystem and optimal regulation for the fast subsystem under asynchronous sampling. For the slow subsystem, an optimal tracking policy is designed from industrial operating data by a Q-learning algorithm that does not depend on the system dynamics, handling the case where a model of the operational process is hard to establish. For the fast subsystem, a model-based decentralized suboptimal control policy is designed, and a lower bound on the convergence factor is given, which resolves the effect of the device-layer interconnection terms on system stability. Simulations of a flotation process verify the effectiveness of the proposed control method.
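The slow-subsystem design relies on Q-learning precisely because it needs no model of the operational dynamics. As a minimal sketch of that idea (not the paper's algorithm: the 2-state plant, cost weights, and plain LQ-regulation setting below are invented for illustration, and the paper's dual-rate set-point-tracking formulation is richer), model-free policy iteration can recover the LQ-optimal feedback gain from sampled data alone:

```python
import numpy as np

# Illustrative 2-state, 1-input plant; the learner never reads A or B,
# they are used only to simulate data (all values hypothetical).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.5]])
Qc, Rc = np.eye(2), np.eye(1)          # quadratic stage-cost weights
n, m = 2, 1
p = n + m                              # size of the augmented vector z = [x; u]

def phi(z):
    """Quadratic features z_i z_j (i <= j) parameterizing z' H z."""
    M = np.outer(z, z)
    return np.concatenate([M[i, i:] for i in range(p)])

rng = np.random.default_rng(0)
K = np.zeros((m, n))                   # initial policy (stabilizing since A is stable)
for _ in range(8):                     # policy iteration
    rows, costs = [], []
    x = rng.standard_normal(n)
    for k in range(80):                # collect policy-evaluation data
        u = -K @ x + 0.5 * rng.standard_normal(m)   # exploration noise
        x1 = A @ x + B @ u
        z, z1 = np.concatenate([x, u]), np.concatenate([x1, -K @ x1])
        rows.append(phi(z) - phi(z1))  # Bellman-equation regressor
        costs.append(x @ Qc @ x + u @ Rc @ u)
        x = x1
    # Least-squares fit of the Q-function kernel H from data
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(costs), rcond=None)
    H = np.zeros((p, p)); i = 0        # unpack symmetric H (off-diagonals counted twice)
    for r in range(p):
        for c in range(r, p):
            H[r, c] = H[c, r] = theta[i] if r == c else theta[i] / 2
            i += 1
    K = np.linalg.solve(H[n:, n:], H[n:, :n])   # greedy improvement: u = -K x
```

Here A and B appear only inside the simulator; the learner sees just states, inputs, and stage costs, which is what makes this style of design attractive when the operational-layer model is unavailable.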
  • Fig. 1  The cascade structure of operational optimal control in industrial processes

    Fig. 2  The non-cascade structure of operational optimal control in industrial processes

    Fig. 3  Industrial process with multiple interconnected unit devices

    Fig. 4  Configuration of a single flotation cell

    Fig. 5  Convergence of ${\tilde H}$ to its ideal value ${H}$

    Fig. 6  Convergence of ${\tilde K_s}$ to its ideal value ${K_s}$

    Fig. 7  The tracking performance of the concentrate grade to its set-point

    Fig. 8  The tracking performance of the tail grade to its set-point

    Fig. 9  Evolution of the ore grade tracking error

    Fig. 10  Evolution of the disturbance

    Fig. 11  The tracking performance of the concentrate grade to its set-point under disturbance

    Fig. 12  The tracking performance of the tail grade to its set-point under disturbance

    Fig. 13  The tracking performance of the concentrate grade to its set-point using the method in reference [5]

    Fig. 14  The tracking performance of the tail grade to its set-point using the method in reference [5]

    Table 1  Performance index of the comparison simulation

                          IAE       MSE
    This paper, $ r_1 $   0.0734    0.0383
    This paper, $ r_2 $   0.0624    0.0353
    Ref. [18], $ r_1 $    19.3290   0.6218
    Ref. [18], $ r_2 $    15.7166   0.5607
  • [1] Chai Tian-You. Industrial process control systems: Research status and development direction. Scientia Sinica Informationis, 2016, 46: 1003-1015 doi: 10.1360/N112016-00062
    [2] Chai Tian-You. Development directions of industrial artificial intelligence. Acta Automatica Sinica, 2020, 46(10): 2005-2012
    [3] Jiang Y, Fan J L, Chai T Y, Lewis F L. Dual-rate operational optimal control for flotation industrial process with unknown operational model. IEEE Transactions on Industrial Electronics, 2019, 66(6): 4587-4599 doi: 10.1109/TIE.2018.2856198
    [4] Dai Wei, Lu Wen-Jie, Fu Jun, Ma Xiao-Ping. Multi-rate layered optimal operational control of industrial processes. Acta Automatica Sinica, 2019, 45(10): 1946-1959
    [5] Xue W Q, Fan J L, Lopez V G, Li J N, Jiang Y, Chai T Y. New methods for optimal operational control of industrial processes using reinforcement learning on two time scales. IEEE Transactions on Industrial Informatics, 2020, 16(5): 3085-3099 doi: 10.1109/TII.2019.2912018
    [6] Chai Tian-You, Liu Qiang, Ding Jin-Liang, Lu Shao-Wen, Song Yan-Jie, Zhang Yi-Jie. Perspectives on industrial-internet-driven intelligent optimized manufacturing mode for process industries. Scientia Sinica Technologica, 2022, 52(1): 14-25 doi: 10.1360/SST-2021-0405
    [7] Yang Y R, Zou Y Y, Li S Y. Economic model predictive control of enhanced operation performance for industrial hierarchical systems. IEEE Transactions on Industrial Electronics, 2022, 69(6): 6080-6089 doi: 10.1109/TIE.2021.3088334
    [8] Fu Yue, Du Qiong. Multi-model adaptive control method for a class of industrial operational processes. Acta Automatica Sinica, 2018, 44(7): 1250-1259
    [9] Wang L Y, Jia Y, Chai T Y, Xie W F. Dual-rate adaptive control for mixed separation thickening process using compensation signal based approach. IEEE Transactions on Industrial Electronics, 2018, 65(4): 3621-3632 doi: 10.1109/TIE.2017.2752144
    [10] Lu X L, Kiumarsi B, Chai T Y, Jiang Y, Lewis F L. Operational control of mineral grinding processes using adaptive dynamic programming and reference governor. IEEE Transactions on Industrial Informatics, 2019, 15(4): 2210-2221 doi: 10.1109/TII.2018.2868473
    [11] Sutton R S, Barto A G. Reinforcement Learning: An Introduction. Cambridge: MIT Press, 2nd edition, 2018
    [12] Bradtke S J, Ydstie B E, Barto A G. Adaptive linear quadratic control using policy iteration. In: Proceedings of the 1994 American Control Conference. Baltimore, USA: IEEE, 1994. 3475−3479
    [13] Kiumarsi B, Lewis F L, Modares H, Karimpour A, Naghibi-Sistani M B. Reinforcement $Q$-learning for optimal tracking control of linear discrete-time systems with unknown dynamics. Automatica, 2014, 50(4): 1167-1175 doi: 10.1016/j.automatica.2014.02.015
    [14] Wu Qian, Fan Jia-Lu, Jiang Yi, Chai Tian-You. Data-driven dual-rate control for mixed separation thickening process in a wireless network environment. Acta Automatica Sinica, 2019, 45(6): 1122-1135 doi: 10.16383/j.aas.c180202
    [15] Dai W, Zhang L Z, Fu J, Chai T Y, Ma X P. Dual-rate adaptive optimal tracking control for dense medium separation process using neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(9): 4202-4216 doi: 10.1109/TNNLS.2020.3017184
    [16] Li J N, Kiumarsi B, Chai T Y, Fan J L. Off-policy reinforcement learning: Optimal operational control for two-time-scale industrial processes. IEEE Transactions on Cybernetics, 2017, 47(12): 4547-4558 doi: 10.1109/TCYB.2017.2761841
    [17] Li J N, Chai T Y, Lewis F L, Fan J L, Ding Z T, Ding J L. Off-policy $Q$-learning: Set-point design for optimizing dual-rate rougher flotation operational processes. IEEE Transactions on Industrial Electronics, 2018, 65(5): 4092-4102 doi: 10.1109/TIE.2017.2760245
    [18] Zhao J G, Yang C Y, Dai W, Gao W N. Reinforcement learning-based composite optimal operational control of industrial systems with multiple unit devices. IEEE Transactions on Industrial Informatics, 2022, 18(2): 1091-1101 doi: 10.1109/TII.2021.3076471
    [19] Kokotovic P V, Khalil H K, O'Reilly J. Singular Perturbation Methods in Control: Analysis and Design. Philadelphia: SIAM, 1999
    [20] Zhao J G, Yang C Y, Gao W N. Reinforcement learning based optimal control of linear singularly perturbed systems. IEEE Transactions on Circuits and Systems Ⅱ: Express Briefs, 2022, 69(3): 1362-1366 doi: 10.1109/TCSII.2021.3105652
    [21] Litkouhi B, Khalil H K. Multirate and composite control of two-time-scale discrete-time systems. IEEE Transactions on Automatic Control, 1985, 30(7): 645-651 doi: 10.1109/TAC.1985.1104024
    [22] Zhou L N, Zhao J G, Ma L, Yang C Y. Decentralized composite suboptimal control for a class of two-time-scale interconnected networks with unknown slow dynamics. Neurocomputing, 2020, 382: 71-79 doi: 10.1016/j.neucom.2019.11.057
    [23] Li J N, Ding J L, Chai T Y, Lewis F L. Nonzero-sum game reinforcement learning for performance optimization in large-scale industrial processes. IEEE Transactions on Cybernetics, 2020, 50(9): 4132-4145 doi: 10.1109/TCYB.2019.2950262
    [24] Yuan Zhao-Lin, He Run-Zi, Yao Chao, Li Jia, Ban Xiao-Juan. Online reinforcement learning control algorithm for concentration of thickener underflow. Acta Automatica Sinica, 2021, 47(7): 1558-1571 doi: 10.16383/j.aas.c190348
    [25] Granzotto M, Postoyan R, Busoniu L, Nešić D, Daafouz J. Finite-horizon discounted optimal control: Stability and performance. IEEE Transactions on Automatic Control, 2021, 66(2): 550-565 doi: 10.1109/TAC.2020.2985904
    [26] Zhao J G, Yang C Y, Gao W N, Zhou L N. Reinforcement learning and optimal setpoint tracking control of linear systems with external disturbances. IEEE Transactions on Industrial Informatics, to be published, 2022, DOI: 10.1109/TII.2022.3151797
    [27] Li Yan-Rui, Yang Chun-Jie, Zhang Han-Wen, Li Jun-Fang. Discussion on key technologies of digital twin in process industry. Acta Automatica Sinica, 2021, 47(3): 501-514 doi: 10.16383/j.aas.c200147
    [28] Jiang Yi, Fan Jia-Lu, Chai Tian-You. Data-driven optimal output regulation with assured convergence rate. Acta Automatica Sinica, 2022, 48(4): 980-991 doi: 10.16383/j.aas.c200932
    [29] Jiang Y, Gao W N, Na J, Zhang D, Hämäläinen T T, Stojanovic V, et al. Value iteration and adaptive optimal output regulation with assured convergence rate. Control Engineering Practice, 2022, 121: Article No. 105042
    [30] Huang J. Nonlinear Output Regulation: Theory and Applications. Philadelphia: SIAM, 2004
    [31] Lewis F L, Vrabie D, Syrmos V L. Optimal Control. New York: John Wiley and Sons, 3rd edition, 2012
    [32] Mukherjee S, Bai H, Chakrabortty A. Reduced-dimensional reinforcement learning control using singular perturbation approximations. Automatica, 2021, 126: Article No. 109451
    [33] Rizvi S A A, Lin Z L. Output feedback $Q$-learning for discrete-time linear zero-sum games with application to the $H$-infinity control. Automatica, 2018, 95: 213-221 doi: 10.1016/j.automatica.2018.05.027
    [34] Liu D R, Xue S, Zhao B, Luo B, Wei Q L. Adaptive dynamic programming for control: A survey and recent advances. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2021, 51(1): 142-160 doi: 10.1109/TSMC.2020.3042876
    [35] Jiang Y, Kiumarsi B, Fan J L, Chai T Y, Li J N, Lewis F L. Optimal output regulation of linear discrete-time systems with unknown dynamics using reinforcement learning. IEEE Transactions on Cybernetics, 2020, 50(7): 3147-3156 doi: 10.1109/TCYB.2018.2890046
    [36] Liu X M, Yang C Y, Luo B, Dai W. Suboptimal control for nonlinear slow-fast coupled systems using reinforcement learning and Takagi-Sugeno fuzzy methods. International Journal of Adaptive Control and Signal Processing, 2021, 35(6): 1017-1038 doi: 10.1002/acs.3234
Publication history
  • Received: 2021-09-17
  • Accepted: 2022-08-07
  • Published online: 2022-09-13
