[1]
|
刘全, 翟建伟, 章宗长, 钟珊, 周倩, 章鹏, 等. 深度强化学习综述. 计算机学报, 2018, 41(1): 1−27 (Liu Quan, Zhai Jian-Wei, Zhang Zong-Zhang, Zhong Shan, Zhou Qian, Zhang Peng, et al. A survey on deep reinforcement learning. Chinese Journal of Computers, 2018, 41(1): 1−27) doi: 10.11897/SP.J.1016.2018.00001
|
[2]
|
Zhou F, Luo B, Wu Z, Huang T. SMONAC: Supervised multiobjective negative actor-critic for sequential recommendation. IEEE Transactions on Neural Networks and Learning Systems, 2023 doi: 10.1109/TNNLS.2023.3317353
|
[3]
|
Silver D, Huang A, Maddison C J, Guez A, Sifre L, Van Den Driessche G, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016, 529(7587): 484−489 doi: 10.1038/nature16961
|
[4]
|
Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, et al. Mastering the game of Go without human knowledge. Nature, 2017, 550(7676): 354−359 doi: 10.1038/nature24270
|
[5]
|
Huang X, Li Z, Xiang Y, Ni Y, Chi Y, Li Y, et al. Creating a dynamic quadrupedal robotic goalkeeper with reinforcement learning. In: Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems. Detroit, USA: IEEE, 2023. 2715–2722
|
[6]
|
Perolat J, De Vylder B, Hennes D, Tarassov E, Strub F, De Boer V, et al. Mastering the game of Stratego with model-free multiagent reinforcement learning. Science, 2022, 378(6623): 990−996 doi: 10.1126/science.add4679
|
[7]
|
Fawzi A, Balog M, Huang A, Hubert T, Romera-Paredes B, Barekatain M, et al. Discovering faster matrix multiplication algorithms with reinforcement learning. Nature, 2022, 610(7930): 47−53 doi: 10.1038/s41586-022-05172-4
|
[8]
|
Zhou Q, Wang W, Liang H, Basin M V, Wang B. Observer-based event-triggered fuzzy adaptive bipartite containment control of multiagent systems with input quantization. IEEE Transactions on Fuzzy Systems, 2021, 29(2): 372−384 doi: 10.1109/TFUZZ.2019.2953573
|
[9]
|
Wu H, Luo B. Neural network based online simultaneous policy update algorithm for solving the HJI equation in nonlinear H∞ control. IEEE Transactions on Neural Networks and Learning Systems, 2012, 23(12): 1884−1895 doi: 10.1109/TNNLS.2012.2217349
|
[10]
|
Vrabie D, Vamvoudakis K G, Lewis F L. Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles. Stevenage: Institution of Engineering and Technology, 2012
|
[11]
|
Luo B, Wu H, Huang T. Off-policy reinforcement learning for H∞ control design. IEEE Transactions on Cybernetics, 2015, 45(1): 65−76 doi: 10.1109/TCYB.2014.2319577
|
[12]
|
Luo B, Huang T, Wu H, Yang X. Data-driven H∞ control for nonlinear distributed parameter systems. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(11): 2949−2961 doi: 10.1109/TNNLS.2015.2461023
|
[13]
|
Fu Y, Fu J, Chai T. Robust adaptive dynamic programming of two-player zero-sum games for continuous-time linear systems. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(12): 3314−3319 doi: 10.1109/TNNLS.2015.2461452
|
[14]
|
Liu Q, Wang Z, He X, Zhou D. Event-based H∞ consensus control of multi-agent systems with relative output feedback: The finite-horizon case. IEEE Transactions on Automatic Control, 2015, 60(9): 2553−2558 doi: 10.1109/TAC.2015.2394872
|
[15]
|
Vamvoudakis K G, Lewis F L. Online solution of nonlinear two-player zero-sum games using synchronous policy iteration. International Journal of Robust and Nonlinear Control, 2012, 22(13): 1460−1483 doi: 10.1002/rnc.1760
|
[16]
|
Luo B, Yang Y, Liu D. Policy iteration Q-learning for data-based two-player zero-sum game of linear discrete-time systems. IEEE Transactions on Cybernetics, 2021, 51(7): 3630−3640 doi: 10.1109/TCYB.2020.2970969
|
[17]
|
Van der Schaft A. L2-gain analysis of nonlinear systems and nonlinear state-feedback H∞ control. IEEE Transactions on Automatic Control, 1992, 37(6): 770−784 doi: 10.1109/9.256331
|
[18]
|
Luo B, Wu H. Computationally efficient simultaneous policy update algorithm for nonlinear H∞ state feedback control with Galerkin's method. International Journal of Robust and Nonlinear Control, 2013, 23(9): 991−1012 doi: 10.1002/rnc.2814
|
[19]
|
Sun J, Long T. Event-triggered distributed zero-sum differential game for nonlinear multi-agent systems using adaptive dynamic programming. ISA Transactions, 2021, 110: 39−52 doi: 10.1016/j.isatra.2020.10.043
|
[20]
|
Zhou Y, Zhou J, Wen G, Gan M, Yang T. Distributed minmax strategy for consensus tracking in differential graphical games: a model-free approach. IEEE Systems, Man, and Cybernetics Magazine, 2023, 9(4): 53−68 doi: 10.1109/MSMC.2023.3282774
|
[21]
|
Sun J, Liu C. Distributed zero-sum differential game for multi-agent systems in strict-feedback form with input saturation and output constraint. Neural Networks, 2018, 106: 8−19 doi: 10.1016/j.neunet.2018.06.007
|
[22]
|
Li M, Wang D, Qiao J. Neural critic learning for tracking control design of constrained nonlinear multi-person zero-sum games. Neurocomputing, 2022, 512: 456−465 doi: 10.1016/j.neucom.2022.09.103
|
[23]
|
Jiao Q, Modares H, Xu S, Lewis F L, Vamvoudakis K G. Multi-agent zero-sum differential graphical games for disturbance rejection in distributed control. Automatica, 2016, 69: 24−34 doi: 10.1016/j.automatica.2016.02.002
|
[24]
|
Chen C, Lewis F L, Xie K, Lyu Y, Xie S. Distributed output data-driven optimal robust synchronization of heterogeneous multi-agent systems. Automatica, 2023, 153: Article No. 111030 doi: 10.1016/j.automatica.2023.111030
|
[25]
|
Zhang H, Li Y, Wang Z, Ding Y, Yang H. Distributed optimal control of nonlinear system based on policy gradient with external disturbance. IEEE Transactions on Network Science and Engineering, 2024, 11(1): 872−885 doi: 10.1109/TNSE.2023.3309816
|
[26]
|
An C, Su H, Chen S. H∞ consensus for discrete-time fractional-order multi-agent systems with disturbance via Q-learning in zero-sum games. IEEE Transactions on Network Science and Engineering, 2022, 9(4): 2803−2814 doi: 10.1109/TNSE.2022.3169792
|
[27]
|
Ma Y, Meng Q, Jiang B, Ren H. Fault-tolerant control for second-order nonlinear systems with actuator faults via zero-sum differential game. Engineering Applications of Artificial Intelligence, 2023, 123: Article No. 106342 doi: 10.1016/j.engappai.2023.106342
|
[28]
|
Wu Y, Chen M, Li H, Chadli M. Mixed-zero-sum-game-based memory event-triggered cooperative control of heterogeneous MASs against DoS attacks. IEEE Transactions on Cybernetics, 2024 doi: 10.1109/TCYB.2024.3369975
|
[29]
|
李梦花, 王鼎, 乔俊飞. 不对称约束多人非零和博弈的自适应评判控制. 控制理论与应用, 2023, 40(9): 1562−1568 (Li Meng-Hua, Wang Ding, Qiao Jun-Fei. Adaptive critic control for multi-player non-zero-sum games with asymmetric constraints. Control Theory & Applications, 2023, 40(9): 1562−1568)
|
[30]
|
吕永峰, 田建艳, 菅垄, 任雪梅. 非线性多输入系统的近似动态规划H∞控制. 控制理论与应用, 2021, 38(10): 1662−1670 (Lv Yong-Feng, Tian Jian-Yan, Jian Long, Ren Xue-Mei. Approximate-dynamic-programming H∞ controls for multi-input nonlinear system. Control Theory & Applications, 2021, 38(10): 1662−1670) doi: 10.7641/CTA.2021.00559
|
[31]
|
洪成文, 富月. 基于自适应动态规划的非线性鲁棒近似最优跟踪控制. 控制理论与应用, 2018, 35(9): 1285−1292 (Hong Cheng-Wen, Fu Yue. Nonlinear robust approximate optimal tracking control based on adaptive dynamic programming. Control Theory & Applications, 2018, 35(9): 1285−1292) doi: 10.7641/CTA.2018.80075
|
[32]
|
Vamvoudakis K G, Lewis F L. Multi-player non-zero-sum games: online adaptive learning solution of coupled Hamilton-Jacobi equations. Automatica, 2011, 47(8): 1556−1569 doi: 10.1016/j.automatica.2011.03.005
|
[33]
|
Song R, Lewis F L, Wei Q. Off-policy integral reinforcement learning method to solve nonlinear continuous-time multiplayer nonzero-sum games. IEEE Transactions on Neural Networks and Learning Systems, 2017, 28(3): 704−713
|
[34]
|
Kamalapurkar R, Klotz J R, Dixon W E. Concurrent learning-based approximate feedback-Nash equilibrium solution of N-player nonzero-sum differential games. IEEE/CAA Journal of Automatica Sinica, 2014, 1(3): 239−247 doi: 10.1109/JAS.2014.7004681
|
[35]
|
Zhao Q, Sun J, Wang G, Chen J. Event-triggered ADP for nonzero-sum games of unknown nonlinear systems. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(5): 1905−1913
|
[36]
|
Yang X, Zhang H, Wang Z. Data-based optimal consensus control for multiagent systems with policy gradient reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(8): 3872−3883
|
[37]
|
Abouheaf M I, Lewis F L, Vamvoudakis K G, Haesaert S, Babuska R. Multi-agent discrete-time graphical games and reinforcement learning solutions. Automatica, 2014, 50(12): 3038−3053 doi: 10.1016/j.automatica.2014.10.047
|
[38]
|
Vamvoudakis K G, Lewis F L, Hudas G R. Multi-agent differential graphical games: Online adaptive learning solution for synchronization with optimality. Automatica, 2012, 48(8): 1598−1611 doi: 10.1016/j.automatica.2012.05.074
|
[39]
|
Yang N, Xiao J, Xiao L, Wang Y. Non-zero sum differential graphical game: cluster synchronisation for multi-agents with partially unknown dynamics. International Journal of Control, 2019, 92(10): 2408−2419 doi: 10.1080/00207179.2018.1441550
|
[40]
|
Odekunle A, Gao W, Davari M, Jiang Z-P. Reinforcement learning and non-zero-sum game output regulation for multi-player linear uncertain systems. Automatica, 2020, 112: Article No. 108672 doi: 10.1016/j.automatica.2019.108672
|
[41]
|
Wang Y, Xue H, Wen J, Liu J, Luan X. Efficient off-policy Q-learning for multi-agent systems by solving dual games. International Journal of Robust and Nonlinear Control, 2024, 34(6): 4193−4212 doi: 10.1002/rnc.7189
|
[42]
|
Su H, Zhang H, Liang Y, Mu Y. Online event-triggered adaptive critic design for non-zero-sum games of partially unknown networked systems. Neurocomputing, 2019, 368: 84−98 doi: 10.1016/j.neucom.2019.07.029
|
[43]
|
Yu M, Hong S H. A real-time demand-response algorithm for smart grids: A Stackelberg game approach. IEEE Transactions on Smart Grid, 2016, 7(2): 879−888
|
[44]
|
Yang B, Li Z, Chen S, Wang T, Li K. Stackelberg game approach for energy-aware resource allocation in data centers. IEEE Transactions on Parallel and Distributed Systems, 2016, 27(12): 3646−3658 doi: 10.1109/TPDS.2016.2537809
|
[45]
|
Yoon S-G, Choi Y-J, Park J-K, Bahk S. Stackelberg-game-based demand response for at-home electric vehicle charging. IEEE Transactions on Vehicular Technology, 2016, 65(6): 4172−4184
|
[46]
|
Lin M, Zhao B, Liu D. Event-triggered robust adaptive dynamic programming for multiplayer Stackelberg-Nash games of uncertain nonlinear systems. IEEE Transactions on Cybernetics, 2024, 54(1): 273−286 doi: 10.1109/TCYB.2023.3251653
|
[47]
|
Li M, Qin J, Ma Q, Zheng W, Kang Y. Hierarchical optimal synchronization for linear systems via reinforcement learning: A Stackelberg-Nash game perspective. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(4): 1600−1611 doi: 10.1109/TNNLS.2020.2985738
|
[48]
|
Yan L, Liu J, Lai G, Chen C L P, Wu Z, Liu Z. Adaptive optimal output-feedback consensus tracking control of nonlinear multiagent systems using two-player Stackelberg game. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024 doi: 10.1109/TSMC.2024.3404147
|
[49]
|
Li D, Dong J. Output-feedback optimized consensus for directed graph multi-agent systems based on reinforcement learning and subsystem error derivatives. Information Sciences, 2023, 649: Article No. 119577 doi: 10.1016/j.ins.2023.119577
|
[50]
|
Zhang D, Yao Y, Wu Z. Reinforcement learning based optimal synchronization control for multi-agent systems with input constraints using vanishing viscosity method. Information Sciences, 2023, 637: Article No. 118949 doi: 10.1016/j.ins.2023.118949
|
[51]
|
Li Q, Xia L, Song R, Liu J. Leader-follower bipartite output synchronization on signed digraphs under adversarial factors via data-based reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 2020, 31(10): 4185−4195
|
[52]
|
Luo A, Zhou Q, Ren H, Ma H, Lu R. Reinforcement learning-based consensus control for MASs with intermittent constraints. Neural Networks, 2024, 172: Article No. 106105 doi: 10.1016/j.neunet.2024.106105
|
[53]
|
Yu J, Dong X, Li Q, Lv J, Ren Z. Adaptive practical optimal time-varying formation tracking control for disturbed high-order multi-agent systems. IEEE Transactions on Circuits and Systems I: Regular Papers, 2022, 69(6): 2567−2578 doi: 10.1109/TCSI.2022.3151464
|
[54]
|
Lan J, Liu Y, Yu D, Wen G, Tong S, Liu L. Time-varying optimal formation control for second-order multiagent systems based on neural network observer and reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(3): 3144−3155 doi: 10.1109/TNNLS.2022.3158085
|
[55]
|
Wang Z, Zhang L. Distributed optimal formation tracking control based on reinforcement learning for underactuated AUVs with asymmetric constraints. Ocean Engineering, 2023, 280: Article No. 114491 doi: 10.1016/j.oceaneng.2023.114491
|
[56]
|
Cheng M, Liu H, Gao Q, Lv J, Xia X. Optimal containment control of a quadrotor team with active leaders via reinforcement learning. IEEE Transactions on Cybernetics, 2023: 1−11
|
[57]
|
Zuo S, Song Y, Lewis F L, Davoudi A. Optimal robust output containment of unknown heterogeneous multiagent system using off-policy reinforcement learning. IEEE Transactions on Cybernetics, 2018, 48(11): 3197−3207
|
[58]
|
Wang F, Cao A, Yin Y, Liu Z. Model-free containment control of fully heterogeneous linear multiagent systems. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024, 54(4): 2551−2562 doi: 10.1109/TSMC.2023.3344786
|
[59]
|
Qin J, Li M, Shi Y, Ma Q, Zheng W. Optimal synchronization control of multiagent systems with input saturation via off-policy reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(1): 85−96
|
[60]
|
Mu C, Zhao Q, Gao Z, Sun C. Q-learning solution for optimal consensus control of discrete-time multiagent systems using reinforcement learning. Journal of the Franklin Institute, 2019, 356(13): 6946−6967 doi: 10.1016/j.jfranklin.2019.06.007
|
[61]
|
Bai W, Li T, Long Y, Chen C L P. Event-triggered multigradient recursive reinforcement learning tracking control for multiagent systems. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(1): 366−379
|
[62]
|
Sun J, Ming Z. Cooperative differential game-based distributed optimal synchronization control of heterogeneous nonlinear multiagent systems. IEEE Transactions on Cybernetics, 2023, 53(12): 7933−7942 doi: 10.1109/TCYB.2023.3240983
|
[63]
|
Ji L, Wang C, Zhang C, Wang H, Li H. Optimal consensus model-free control for multi-agent systems subject to input delays and switching topologies. Information Sciences, 2022, 589: 497−515 doi: 10.1016/j.ins.2021.12.125
|
[64]
|
Qin J, Ma Q, Yu X, Kang Y. Output containment control for heterogeneous linear multiagent systems with fixed and switching topologies. IEEE Transactions on Cybernetics, 2019, 49(12): 4117−4128 doi: 10.1109/TCYB.2018.2859159
|
[65]
|
Wang Z, Liu Y, Zhang H. Two-layer reinforcement learning for output consensus of multiagent systems under switching topology. IEEE Transactions on Cybernetics, 2024, 54(9): 5463−5472 doi: 10.1109/TCYB.2024.3380001
|
[66]
|
Liu D, Liu H, Lü J, Lewis F L. Time-varying formation of heterogeneous multiagent systems via reinforcement learning subject to switching topologies. IEEE Transactions on Circuits and Systems I: Regular Papers, 2023, 70(6): 2550−2560 doi: 10.1109/TCSI.2023.3250516
|
[67]
|
Qin J, Ma Q, Yu X, Kang Y. Output containment control for heterogeneous linear multiagent systems with fixed and switching topologies. IEEE Transactions on Cybernetics, 2019, 49(12): 4117−4128 doi: 10.1109/TCYB.2018.2859159
|
[68]
|
Li H, Wu Y, Chen M. Adaptive fault-tolerant tracking control for discrete-time multiagent systems via reinforcement learning algorithm. IEEE Transactions on Cybernetics, 2021, 51(3): 1163−1174
|
[69]
|
Zhao W, Liu H, Valavanis K P, Lewis F L. Fault-tolerant formation control for heterogeneous vehicles via reinforcement learning. IEEE Transactions on Aerospace and Electronic Systems, 2022, 58(4): 2796−2806
|
[70]
|
Li T, Bai W, Liu Q, Long Y, Chen C L P. Distributed fault-tolerant containment control protocols for the discrete-time multiagent systems via reinforcement learning method. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(8): 3979−3991
|
[71]
|
Liu D, Mao Z, Jiang B, Xu L. Simplified ADP-based distributed event-triggered fault-tolerant control of heterogeneous nonlinear multiagent systems with full-state constraints. IEEE Transactions on Circuits and Systems I: Regular Papers, 2024, 71(8): 3820−3832 doi: 10.1109/TCSI.2024.3389740
|
[72]
|
Zhang Y, Zhao B, Liu D, Zhang S. Distributed fault tolerant consensus control of nonlinear multiagent systems via adaptive dynamic programming. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(7): 9041−9053
|
[73]
|
Xu Y, Wu Z. Data-based collaborative learning for multiagent systems under distributed denial-of-service attacks. IEEE Transactions on Cognitive and Developmental Systems, 2024, 16(1): 75−85 doi: 10.1109/TCDS.2022.3172937
|
[74]
|
Zhang L, Chen Y. Distributed finite-time ADP-based optimal control for nonlinear multiagent systems. IEEE Transactions on Circuits and Systems II: Express Briefs, 2023, 70(12): 4534−4538
|
[75]
|
Wang P, Yu C, Lv M, Cao J. Adaptive fixed-time optimal formation control for uncertain nonlinear multiagent systems using reinforcement learning. IEEE Transactions on Network Science and Engineering, 2024, 11(2): 1729−1743 doi: 10.1109/TNSE.2023.3330266
|
[76]
|
Zhang Y, Chadli M, Xiang Z. Prescribed-time formation control for a class of multiagent systems via fuzzy reinforcement learning. IEEE Transactions on Fuzzy Systems, 2023, 31(12): 4195−4204 doi: 10.1109/TFUZZ.2023.3277480
|
[77]
|
Peng Z, Luo R, Hu J, Shi K, Ghosh B K. Distributed optimal tracking control of discrete-time multiagent systems via event-triggered reinforcement learning. IEEE Transactions on Circuits and Systems I: Regular Papers, 2022, 69(9): 3689−3700 doi: 10.1109/TCSI.2022.3177407
|
[78]
|
Xu Y, Sun J, Pan Y, Wu Z. Optimal tracking control of heterogeneous MASs using event-driven adaptive observer and reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(4): 5577−5587 doi: 10.1109/TNNLS.2022.3208237
|
[79]
|
Tan M, Liu Z, Chen C L P, Zhang Y, Wu Z. Optimized adaptive consensus tracking control for uncertain nonlinear multiagent systems using a new event-triggered communication mechanism. Information Sciences, 2022, 605: 301−316 doi: 10.1016/j.ins.2022.05.030
|
[80]
|
Li H, Wu Y, Chen M, Lu R. Adaptive multigradient recursive reinforcement learning event-triggered tracking control for multiagent systems. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(1): 144−156
|
[81]
|
Zhao H, Shan J, Peng L, Yu H. Adaptive event-triggered bipartite formation for multiagent systems via reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 2023 doi: 10.1109/TNNLS.2023.3309326
|
[82]
|
Xiao W, Zhou Q, Liu Y, Li H, Lu R. Distributed reinforcement learning containment control for multiple nonholonomic mobile robots. IEEE Transactions on Circuits and Systems I: Regular Papers, 2022, 69(2): 896−907 doi: 10.1109/TCSI.2021.3121809
|
[83]
|
Xiong C, Ma Q, Guo J, Lewis F L. Data-based optimal synchronization of heterogeneous multiagent systems in graphical games via reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 2023 doi: 10.1109/TNNLS.2023.3291542
|
[84]
|
Zhang Q, Zhao D, Lewis F L. Model-free reinforcement learning for fully cooperative multi-agent graphical games. In: Proceedings of the 2018 International Joint Conference on Neural Networks. Rio de Janeiro, Brazil: IEEE, 2018. 1–6
|
[85]
|
Li J, Modares H, Chai T, Lewis F L, Xie L. Off-policy reinforcement learning for synchronization in multiagent graphical games. IEEE Transactions on Neural Networks and Learning Systems, 2017, 28(10): 2434−2445 doi: 10.1109/TNNLS.2016.2609500
|
[86]
|
Wang H, Li M. Model-free reinforcement learning for fully cooperative consensus problem of nonlinear multiagent systems. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(4): 1482−1491
|
[87]
|
Ming Z, Zhang H, Zhang J, Xie X. A novel actor-critic-identifier architecture for nonlinear multiagent systems with gradient descent method. Automatica, 2023, 155: Article No. 111128 doi: 10.1016/j.automatica.2023.111128
|
[88]
|
梁星星, 冯旸赫, 马扬, 程光权, 黄金才, 王琦, 等. 多Agent深度强化学习综述. 自动化学报, 2020, 46(12): 2537−2557 (Liang Xing-Xing, Feng Yang-He, Ma Yang, Cheng Guang-Quan, Huang Jin-Cai, Wang Qi, et al. Deep multi-agent reinforcement learning: A survey. Acta Automatica Sinica, 2020, 46(12): 2537−2557)
|
[89]
|
Bellman R. A Markovian decision process. Journal of Mathematics and Mechanics, 1957, 6(4): 679−684
|
[90]
|
Howard R A. Dynamic Programming and Markov Processes. Cambridge, USA: MIT Press, 1960
|
[91]
|
Watkins C J, Dayan P. Q-learning. Machine Learning, 1992, 8(3): 279−292
|
[92]
|
Rummery G A, Niranjan M. On-line Q-learning using connectionist systems. Cambridge, UK: University of Cambridge, 1994
|
[93]
|
Sutton R S, McAllester D, Singh S, Mansour Y. Policy gradient methods for reinforcement learning with function approximation. In: Proceedings of the 12th International Conference on Neural Information Processing Systems. Denver, USA: Curran Associates Inc., 1999. 1057–1063
|
[94]
|
Silver D, Lever G, Heess N, Degris T, Wierstra D, Riedmiller M. Deterministic policy gradient algorithms. In: Proceedings of the 31st International Conference on Machine Learning. Beijing, China: ACM, 2014. 387–395
|
[95]
|
Luo B, Wu Z, Zhou F, Wang B. Human-in-the-loop reinforcement learning in continuous-action space. IEEE Transactions on Neural Networks and Learning Systems, 2023 doi: 10.1109/TNNLS.2023.3289315
|
[96]
|
Shapley L S. Stochastic games. Proceedings of the National Academy of Sciences, 1953, 39(10): 1095−1100 doi: 10.1073/pnas.39.10.1095
|
[97]
|
Hansen E A, Bernstein D S, Zilberstein S. Dynamic programming for partially observable stochastic games. In: Proceedings of the 19th AAAI Conference on Artificial Intelligence. San Jose, USA: AAAI, 2004. 709–715
|
[98]
|
Smallwood R D, Sondik E J. The optimal control of partially observable Markov processes over a finite horizon. Operations Research, 1973, 21(5): 1071−1088 doi: 10.1287/opre.21.5.1071
|
[99]
|
Tampuu A, Matiisen T, Kodelja D, Kuzovkin I, Korjus K, Aru J, et al. Multiagent cooperation and competition with deep reinforcement learning. PLOS ONE, 2017, 12(4): 1−15
|
[100]
|
Mnih V, Kavukcuoglu K, Silver D, Rusu A, Veness J, Bellemare M, et al. Human-level control through deep reinforcement learning. Nature, 2015, 518(7540): 529−533 doi: 10.1038/nature14236
|
[101]
|
Chen Y F, Liu M, Everett M, How J. Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning. In: Proceedings of the 2017 IEEE International Conference on Robotics and Automation. Singapore: IEEE, 2017. 285–292
|
[102]
|
孙长银, 穆朝絮. 多智能体深度强化学习的若干关键科学问题. 自动化学报, 2020, 46(7): 1301−1312 (Sun Chang-Yin, Mu Chao-Xu. Important scientific problems of multi-agent deep reinforcement learning. Acta Automatica Sinica, 2020, 46(7): 1301−1312)
|
[103]
|
Lillicrap T P, Hunt J J, Pritzel A, Heess N, Erez T, Tassa Y, et al. Continuous control with deep reinforcement learning. In: Proceedings of the 4th International Conference on Learning Representations. San Juan, Puerto Rico: CoRR, 2016. 1–10
|
[104]
|
Gupta J K, Egorov M, Kochenderfer M. Cooperative multi-agent control using deep reinforcement learning. In: Proceedings of the 16th Conference on Autonomous Agents and Multiagent Systems: AAMAS 2017 Workshops. São Paulo, Brazil: Springer, 2017. 66–83
|
[105]
|
Kraemer L, Banerjee B. Multi-agent reinforcement learning as a rehearsal for decentralized planning. Neurocomputing, 2016, 190: 82−94 doi: 10.1016/j.neucom.2016.01.031
|
[106]
|
Bhatnagar S, Sutton R S, Ghavamzadeh M, Lee M. Natural actor-critic algorithms. Automatica, 2009, 45(11): 2471−2482 doi: 10.1016/j.automatica.2009.07.008
|
[107]
|
Degris T, White M, Sutton R S. Off-policy actor-critic. In: Proceedings of the 29th International Conference on Machine Learning. Edinburgh, UK: ACM, 2012. 179–186
|
[108]
|
Sunehag P, Lever G, Gruslys A, Czarnecki W, Zambaldi V, Jaderberg M, et al. Value-decomposition networks for cooperative multi-agent learning based on team reward. In: Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems. Stockholm, Sweden: AAMAS, 2018. 2085–2087
|
[109]
|
Lowe R, Wu Y, Tamar A, Harb J, Abbeel P, Mordatch I. Multi-agent actor-critic for mixed cooperative-competitive environments. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 6382–6393
|
[110]
|
Gronauer S, Diepold K. Multi-agent deep reinforcement learning: a survey. Artificial Intelligence Review, 2022, 55(2): 895−943 doi: 10.1007/s10462-021-09996-w
|
[111]
|
Iqbal S, Sha F. Actor-attention-critic for multi-agent reinforcement learning. In: Proceedings of the 36th International Conference on Machine Learning. California, USA: ACM, 2019. 2961–2970
|
[112]
|
Yu C, Velu A, Vinitsky E, Gao J, Wang Y, Bayen A, et al. The surprising effectiveness of PPO in cooperative multi-agent games. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. New Orleans, USA: Curran Associates Inc., 2022. 24611–24624
|
[113]
|
Mnih V, Heess N, Graves A, Kavukcuoglu K. Recurrent models of visual attention. In: Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal, Canada: Curran Associates Inc., 2014. 2204–2212
|
[114]
|
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A, et al. Attention is all you need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 5998–6008
|
[115]
|
Tan M. Multi-agent reinforcement learning: Independent vs. cooperative agents. In: Proceedings of the 10th International Conference on Machine Learning. Amherst, USA: ACM, 1993. 330–337
|
[116]
|
Sen S, Sekaran M, Hale J. Learning to coordinate without sharing information. In: Proceedings of the 12th AAAI Conference on Artificial Intelligence. Seattle, USA: AAAI, 1994. 426–431
|
[117]
|
Matignon L, Laurent G J, Le Fort-Piat N. Independent reinforcement learners in cooperative Markov games: A survey regarding coordination problems. The Knowledge Engineering Review, 2012, 27(1): 1−31 doi: 10.1017/S0269888912000057
|
[118]
|
Foerster J, Nardelli N, Farquhar G, Afouras T, Torr P, Kohli P. Stabilising experience replay for deep multi-agent reinforcement learning. In: Proceedings of the 34th International Conference on Machine Learning. Sydney NSW: ACM, 2017. 1146–1155
|
[119]
|
Raileanu R, Denton E, Szlam A, Fergus R. Modeling others using oneself in multi-agent reinforcement learning. In: Proceedings of the 35th International Conference on Machine Learning. Stockholm, Sweden: ACM, 2018. 4257–4266
|
[120]
|
Kaelbling L P, Littman M L, Cassandra A R. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 1998, 101(1-2): 99−134 doi: 10.1016/S0004-3702(98)00023-X
|
[121]
|
Hochreiter S, Schmidhuber J. Long short-term memory. Neural Computation, 1997, 9(8): 1735−1780 doi: 10.1162/neco.1997.9.8.1735
|
[122]
|
Cho K, van Merriënboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Doha, Qatar: ACL, 2014. 1724–1734
|
[123]
|
Hausknecht M, Stone P. Deep recurrent Q-learning for partially observable MDPs. In: Proceedings of the 2015 AAAI Fall Symposium Series. Arlington, USA: AAAI, 2015. 29–37
|
[124]
|
Matignon L, Laurent G J, Le Fort-Piat N. Hysteretic Q-learning: An algorithm for decentralized reinforcement learning in cooperative multi-agent teams. In: Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems. San Diego, USA: IEEE, 2007. 64–69
|
[125]
|
Omidshafiei S, Pazis J, Amato C, How J P, Vian J. Deep decentralized multi-task multi-agent reinforcement learning under partial observability. In: Proceedings of the 34th International Conference on Machine Learning. Sydney NSW: ACM, 2017. 2681–2690
|
[126]
|
Foerster J N, Assael Y M, de Freitas N, Whiteson S. Learning to communicate to solve riddles with deep distributed recurrent Q-networks. arXiv preprint arXiv: 1602.02672, 2016
|
[127]
|
Sukhbaatar S, Szlam A, Fergus R. Learning multiagent communication with backpropagation. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: Curran Associates Inc., 2016. 2244–2252
|
[128]
|
Singh A, Jain T, Sukhbaatar S. Individualized controlled continuous communication model for multiagent cooperative and competitive tasks. In: Proceedings of the 7th International Conference on Learning Representations. New Orleans, USA: OpenReview, 2019. 1–16
|
[129]
|
Chen J, Lan T, Joe-Wong C. RGMComm: Return gap minimization via discrete communications in multi-agent reinforcement learning. In: Proceedings of the 38th AAAI Conference on Artificial Intelligence. Vancouver, Canada: AAAI, 2024. 17327–17336
|
[130]
|
Sheng J, Wang X, Jin B, Yan J, Li W, Chang T, et al. Learning structured communication for multi-agent reinforcement learning. Autonomous Agents and Multi-Agent Systems, 2022, 36(2): 1−31
|
[131]
|
Foerster J, Assael I A, De Freitas N, Whiteson S. Learning to communicate with deep multi-agent reinforcement learning. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: Curran Associates Inc., 2016. 2137–2145
|
[132]
|
Peng P, Wen Y, Yang Y, Yuan Q, Tang Z, Long H, et al. Multiagent bidirectionally-coordinated nets: Emergence of human-level coordination in learning to play StarCraft combat games. arXiv preprint arXiv: 1703.10069, 2017
|
[133]
|
吴俊锋, 王文, 汪亮, 陶先平, 胡昊, 吴海军. 基于两阶段意图共享的多智能体强化学习方法. 计算机学报, 2023, 46(9): 1820−1837 (Wu Jun-Feng, Wang Wen, Wang Liang, Tao Xian-Ping, Hu Hao, Wu Hai-Jun. Multi-agent reinforcement learning with two step intention sharing. Chinese Journal of Computers, 2023, 46(9): 1820−1837) doi: 10.11897/SP.J.1016.2023.01820
|
[134]
|
Mao H, Zhang Z, Xiao Z, Gong Z, Ni Y. Learning agent communication under limited bandwidth by message pruning. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020. 5142–5149
|
[135]
|
Kim D, Moon S, Hostallero D, Kang W J, Lee T, Son K, et al. Learning to schedule communication in multi-agent reinforcement learning. In: Proceedings of the 7th International Conference on Learning Representations. New Orleans, USA: OpenReview, 2019. 1–11
|
[136]
|
Das A, Gervet T, Romoff J, Batra D, Parikh D, Rabbat M, et al. TarMAC: Targeted multi-agent communication. In: Proceedings of the 36th International Conference on Machine Learning. California, USA: ACM, 2019. 1538–1546
|
[137]
|
Niu Y, Paleja R R, Gombolay M C. Multi-agent graph-attention communication and teaming. In: Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems. Virtual Event: AAMAS, 2021. 964–973
|
[138]
|
Lhaksmana K M, Murakami Y, Ishida T. Role-based modeling for designing agent behavior in self-organizing multi-agent systems. International Journal of Software Engineering and Knowledge Engineering, 2018, 28(1): 79−96 doi: 10.1142/S0218194018500043
|
[139]
|
Wang T, Dong H, Lesser V, Zhang C. ROMA: Multi-agent reinforcement learning with emergent roles. In: Proceedings of the 37th International Conference on Machine Learning. Vienna, Austria: ACM, 2020. 9876–9886
|
[140]
|
Wang T, Gupta T, Mahajan A, Peng B, Whiteson S, Zhang C. RODE: Learning roles to decompose multi-agent tasks. In: Proceedings of the 9th International Conference on Learning Representations. Vienna, Austria: OpenReview, 2021. 1–24
|
[141]
|
Hu Z, Zhang Z, Li H, Chen C, Ding H, Wang Z. Attention-guided contrastive role representations for multi-agent reinforcement learning. In: Proceedings of the 12th International Conference on Learning Representations. Vienna, Austria: OpenReview, 2024. 1–23
|
[142]
|
Zambaldi V, Raposo D, Santoro A, Bapst V, Li Y, Babuschkin I. Relational deep reinforcement learning. arXiv preprint arXiv: 1806.01830, 2018
|
[143]
|
Jiang H, Liu Y, Li S, et al. Diverse effective relationship exploration for cooperative multi-agent reinforcement learning. In: Proceedings of the 31st ACM International Conference on Information and Knowledge Management. Atlanta, USA: CEUR-WS, 2022. 842–851
|
[144]
|
Wang W, Yang T, Liu Y, Hao J, Hao X, Hu Y. Action semantics network: considering the effects of actions in multiagent systems. In: Proceedings of the 8th International Conference on Learning Representations. Addis Ababa, Ethiopia: OpenReview, 2020. 1–18
|
[145]
|
Ackermann J, Gabler V, Osa T, Sugiyama M. Reducing overestimation bias in multi-agent domains using double centralized critics. arXiv preprint arXiv: 1910.01465, 2019
|
[146]
|
Van Hasselt H, Guez A, Silver D. Deep reinforcement learning with double Q-learning. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence. Phoenix, USA: AAAI, 2016. 2094–2100
|
[147]
|
Pan L, Rashid T, Peng B, Huang L, Whiteson S. Regularized softmax deep multi-agent Q-learning. In: Proceedings of the 35th International Conference on Neural Information Processing Systems. Virtual Event: Curran Associates Inc., 2021. 1365–1377
|
[148]
|
Liu J, Zhong Y, Hu S, Fu H, Fu Q, Chang X, et al. Maximum entropy heterogeneous-agent reinforcement learning. In: Proceedings of the 12th International Conference on Learning Representations. Vienna, Austria: OpenReview, 2024. 1–12
|
[149]
|
Na H, Seo Y, Moon I. Efficient episodic memory utilization of cooperative multi-agent reinforcement learning. In: Proceedings of the 12th International Conference on Learning Representations. Vienna, Austria: OpenReview, 2024. 1–13
|
[150]
|
Mahajan A, Rashid T, Samvelyan M, Whiteson S. MAVEN: Multi-agent variational exploration. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2019. 7613–7624
|
[151]
|
Liu I-J, Jain U, Yeh R A, Schwing A. Cooperative exploration for multi-agent deep reinforcement learning. In: Proceedings of the 38th International Conference on Machine Learning. Virtual Event: ACM, 2021. 6826–6836
|
[152]
|
Chen Z, Luo B, Hu T, Xu X. LJIR: Learning joint-action intrinsic reward in cooperative multi-agent reinforcement learning. Neural Networks, 2023, 167: 450−459 doi: 10.1016/j.neunet.2023.08.016
|
[153]
|
Hao J, Hao X, Mao H, Wang W, Yang Y, Li D. Boosting multiagent reinforcement learning via permutation invariant and permutation equivariant networks. In: Proceedings of the 11th International Conference on Learning Representations. Kigali, Rwanda: OpenReview, 2023. 1–12
|
[154]
|
Yang Y, Luo R, Li M, Zhou M, Zhang W, Wang J. Mean field multi-agent reinforcement learning. In: Proceedings of the 35th International Conference on Machine Learning. Stockholm, Sweden: ACM, 2018. 5571–5580
|
[155]
|
Ganapathi Subramanian S, Poupart P, Taylor M E, Hegde N. Multi type mean field reinforcement learning. In: Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems. Auckland, New Zealand: AAMAS, 2020. 411–419
|
[156]
|
Mondal W U, Agarwal M, Aggarwal V, Ukkusuri S V. On the approximation of cooperative heterogeneous multi-agent reinforcement learning (MARL) using mean field control (MFC). Journal of Machine Learning Research, 2022, 23(129): 1−46
|
[157]
|
Chang Y-H, Ho T, Kaelbling L. All learning is local: Multi-agent learning in global reward games. In: Proceedings of the 16th International Conference on Neural Information Processing Systems. Whistler, Canada: Curran Associates Inc., 2003. 807–814
|
[158]
|
Rashid T, Samvelyan M, Schroeder C, Farquhar G, Foerster J, Whiteson S. QMIX: Monotonic value function factorisation for deep multi-agent reinforcement learning. In: Proceedings of the 35th International Conference on Machine Learning. Stockholm, Sweden: ACM, 2018. 4295–4304
|
[159]
|
Zhou M, Liu Z, Sui P, Li Y, Chung Y. Learning implicit credit assignment for cooperative multi-agent reinforcement learning. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2020. 11853–11864
|
[160]
|
Rashid T, Farquhar G, Peng B, Whiteson S. Weighted QMIX: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2020. 10199–10210
|
[161]
|
Yang Y, Hao J, Liao B, Shao K, Chen G, Liu W. Qatten: A general framework for cooperative multiagent reinforcement learning. arXiv preprint arXiv: 2002.03939, 2020
|
[162]
|
Son K, Kim D, Kang W J, Hostallero D, Yi Y. QTRAN: Learning to factorize with transformation for cooperative multi-agent reinforcement learning. In: Proceedings of the 36th International Conference on Machine Learning. California, USA: ACM, 2019. 5887–5896
|
[163]
|
Foerster J, Farquhar G, Afouras T, Nardelli N, Whiteson S. Counterfactual multi-agent policy gradients. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence. New Orleans, USA: AAAI, 2018. 2974–2982
|
[164]
|
Wolpert D H, Tumer K. Optimal payoff functions for members of collectives. Advances in Complex Systems, 2001, 4(2−3): 265−279 doi: 10.1142/S0219525901000188
|
[165]
|
Wang J, Zhang Y, Kim T-K, Gu Y. Shapley Q-value: A local reward approach to solve global reward games. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020. 7285–7292
|
[166]
|
Shapley L S. A value for n-person games. Contributions to the Theory of Games, 1953, 2: 307−317
|
[167]
|
Li J, Kuang K, Wang B, Liu F, Chen L, Wu F, et al. Shapley counterfactual credits for multi-agent reinforcement learning. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. Virtual Event: ACM, 2021. 934–942
|
[168]
|
徐诚, 殷楠, 段世红, 何昊, 王然. 基于奖励滤波信用分配的多智能体深度强化学习算法. 计算机学报, 2022, 45: 2306−2320 (Xu Cheng, Yin Nan, Duan Shi-Hong, He Hao, Wang Ran. Reward-filtering-based credit assignment for multi-agent deep reinforcement learning. Chinese Journal of Computers, 2022, 45: 2306−2320) doi: 10.11897/SP.J.1016.2022.02306
|
[169]
|
Chen S, Zhang Z, Yang Y, Du Y. STAS: Spatial-temporal return decomposition for multi-agent reinforcement learning. In: Proceedings of the 38th AAAI Conference on Artificial Intelligence. Vancouver, Canada: AAAI, 2024. 17336–17345
|
[170]
|
Littman M L. Markov games as a framework for multi-agent reinforcement learning. In: Proceedings of the 11th International Conference on Machine Learning. New Brunswick, USA: ACM, 1994. 157–163
|
[171]
|
Zhang K, Yang Z, Liu H, Zhang T, Basar T. Finite-sample analysis for decentralized batch multiagent reinforcement learning with networked agents. IEEE Transactions on Automatic Control, 2021, 66(12): 5925−5940 doi: 10.1109/TAC.2021.3049345
|
[172]
|
Fan J, Wang Z, Xie Y, Yang Z. A theoretical analysis of deep Q-learning. In: Proceedings of the 2nd Annual Conference on Learning for Dynamics and Control. Berkeley, USA: PMLR, 2020. 486–489
|
[173]
|
Heinrich J, Lanctot M, Silver D. Fictitious self-play in extensive-form games. In: Proceedings of the 32nd International Conference on Machine Learning. Lille, France: ACM, 2015. 805–813
|
[174]
|
Berger U. Brown's original fictitious play. Journal of Economic Theory, 2007, 135(1): 572−578 doi: 10.1016/j.jet.2005.12.010
|
[175]
|
Heinrich J, Silver D. Deep reinforcement learning from self-play in imperfect-information games. arXiv preprint arXiv: 1603.01121, 2016
|
[176]
|
Zhang L, Chen Y, Wang W, Han Z, Li S, Pan Z, et al. A Monte Carlo Neural Fictitious Self-Play approach to approximate Nash Equilibrium in imperfect-information dynamic games. Frontiers of Computer Science, 2021, 15: 1−14
|
[177]
|
Lanctot M, Zambaldi V, Gruslys A, Lazaridou A, Tuyls K, Pérolat J, et al. A unified game-theoretic approach to multiagent reinforcement learning. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 4190–4203
|
[178]
|
McAleer S, Lanier J B, Fox R, Baldi P. Pipeline PSRO: A scalable approach for finding approximate Nash equilibria in large games. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2020. 20238–20248
|
[179]
|
Muller P, Omidshafiei S, Rowland M, Tuyls K, Perolat J, Liu S, et al. A generalized training approach for multiagent learning. In: Proceedings of the 8th International Conference on Learning Representations. Addis Ababa, Ethiopia: OpenReview, 2020
|
[180]
|
徐浩添, 秦龙, 曾俊杰, 胡越, 张琪. 基于深度强化学习的对手建模方法研究综述. 系统仿真学报, 2023, 35(4): 671−694 (Xu Hao-Tian, Qin Long, Zeng Jun-Jie, Hu Yue, Zhang Qi. Research progress of opponent modeling based on deep reinforcement learning. Journal of System Simulation, 2023, 35(4): 671−694)
|
[181]
|
He H, Boyd-Graber J, Kwok K, Daumé III H. Opponent modeling in deep reinforcement learning. In: Proceedings of the 33rd International Conference on Machine Learning. New York, USA: ACM, 2016. 1804–1813
|
[182]
|
Everett R, Roberts S. Learning against non-stationary agents with opponent modelling and deep reinforcement learning. In: Proceedings of the 2018 AAAI Spring Symposium Series. Palo Alto, USA: AAAI, 2018. 621–626
|
[183]
|
Foerster J, Chen R, Al-Shedivat M, Whiteson S, Abbeel P, Mordatch I. Learning with opponent-learning awareness. In: Proceedings of the 17th International Conference on Autonomous Agents and Multi-Agent Systems. Stockholm, Sweden: AAMAS, 2018. 122–130
|
[184]
|
Long P, Fan T, Liao X, Liu W, Zhang H, Pan J. Towards optimally decentralized multi-robot collision avoidance via deep reinforcement learning. In: Proceedings of the 2018 IEEE International Conference on Robotics and Automation. Brisbane, Australia: IEEE, 2018. 6252–6259
|
[185]
|
Willemsen D, Coppola M, de Croon G C. MAMBPO: Sample-efficient multi-robot reinforcement learning using learned world models. In: Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems. Prague, Czech Republic: IEEE, 2021. 5635–5640
|
[186]
|
Yue L, Lv M, Yan M, Zhao X, Wu A, Li L, et al. Improving cooperative multi-target tracking control for UAV swarm using multi-agent reinforcement learning. In: Proceedings of the 9th International Conference on Control, Automation and Robotics. Beijing, China: IEEE, 2023. 179–186
|
[187]
|
Xue Y, Chen W. Multi-agent deep reinforcement learning for UAVs navigation in unknown complex environment. IEEE Transactions on Intelligent Vehicles, 2024, 9(1): 2290−2303 doi: 10.1109/TIV.2023.3298292
|
[188]
|
Mou Z, Zhang Y, Gao F, Wang H, Zhang T, Han Z. Deep reinforcement learning based three-dimensional area coverage with UAV swarm. IEEE Journal on Selected Areas in Communications, 2021, 39(10): 3160−3176 doi: 10.1109/JSAC.2021.3088718
|
[189]
|
Hou Y, Zhao J, Zhang R, Cheng X, Yang L. UAV swarm cooperative target search: A multi-agent reinforcement learning approach. IEEE Transactions on Intelligent Vehicles, 2024, 9(1): 568−578 doi: 10.1109/TIV.2023.3316196
|
[190]
|
Cui J, Liu Y, Nallanathan A. Multi-agent reinforcement learning-based resource allocation for UAV networks. IEEE Transactions on Wireless Communications, 2020, 19(2): 729−743
|
[191]
|
Wang Z, Gombolay M. Learning scheduling policies for multi-robot coordination with graph attention networks. IEEE Robotics and Automation Letters, 2020, 5(3): 4509−4516 doi: 10.1109/LRA.2020.3002198
|
[192]
|
Johnson D, Chen G, Lu Y. Multi-agent reinforcement learning for real-time dynamic production scheduling in a robot assembly cell. IEEE Robotics and Automation Letters, 2022, 7(3): 7684−7691 doi: 10.1109/LRA.2022.3184795
|
[193]
|
Paul S, Ghassemi P, Chowdhury S. Learning scalable policies over graphs for multi-robot task allocation using capsule attention networks. In: Proceedings of the 2022 International Conference on Robotics and Automation. Philadelphia, USA: IEEE, 2022. 8815–8822
|
[194]
|
Shalev-Shwartz S, Shammah S, Shashua A. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv: 1610.03295, 2016
|
[195]
|
Yu C, Wang X, Xu X, Zhang M, Ge H, Ren J, et al. Distributed multiagent coordinated learning for autonomous driving in highways based on dynamic coordination graphs. IEEE Transactions on Intelligent Transportation Systems, 2020, 21(2): 735−748
|
[196]
|
Liu B, Ding Z, Lv C. Platoon control of connected autonomous vehicles: A distributed reinforcement learning method by consensus. In: Proceedings of the 21st IFAC World Congress. Berlin, Germany: IFAC, 2020. 15241–15246
|
[197]
|
Liang Z, Cao J, Jiang S, Saxena D, Xu H. Hierarchical reinforcement learning with opponent modeling for distributed multi-agent cooperation. In: Proceedings of the 42nd IEEE International Conference on Distributed Computing Systems. Bologna, Italy: IEEE, 2022. 884–894
|
[198]
|
Candela E, Parada L, Marques L, Georgescu T, Demiris Y, Angeloudis P. Transferring multi-agent reinforcement learning policies for autonomous driving using sim-to-real. In: Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems. Kyoto, Japan: IEEE, 2022. 8814–8820
|
[199]
|
Chu T, Wang J, Codecà L, Li Z. Multi-agent deep reinforcement learning for large-scale traffic signal control. IEEE Transactions on Intelligent Transportation Systems, 2020, 21(3): 1086−1095
|
[200]
|
Jiang S, Huang Y, Jafari M, Jalayer M. A distributed multi-agent reinforcement learning with graph decomposition approach for large-scale adaptive traffic signal control. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(9): 14689−14701
|
[201]
|
Wang X, Ke L, Qiao Z, Chai X. Large-scale traffic signal control using a novel multiagent reinforcement learning. IEEE Transactions on Cybernetics, 2021, 51(1): 174−187
|
[202]
|
Wang K, Mu C. Learning-based control with decentralized dynamic event-triggering for vehicle systems. IEEE Transactions on Industrial Informatics, 2023, 19(3): 2629−2639
|
[203]
|
Xu Y, Wu Z, Pan Y. Perceptual interaction-based path tracking control of autonomous vehicles under DoS attacks: A reinforcement learning approach. IEEE Transactions on Vehicular Technology, 2023, 72(11): 14028−14039
|
[204]
|
Liu D, Liu H, Lü J, Lewis F L. Time-varying formation of heterogeneous multiagent systems via reinforcement learning subject to switching topologies. IEEE Transactions on Circuits and Systems I: Regular Papers, 2023, 70(6): 2550−2560 doi: 10.1109/TCSI.2023.3250516
|
[205]
|
Cheng M, Liu H, Wen G, Lü J, Lewis F L. Data-driven time-varying formation-containment control for a heterogeneous air-ground vehicle team subject to active leaders and switching topologies. Automatica, 2023, 153: Article No. 111029 doi: 10.1016/j.automatica.2023.111029
|
[206]
|
Zhao W, Liu H, Wan Y, Lin Z. Data-driven formation control for multiple heterogeneous vehicles in air-ground coordination. IEEE Transactions on Control of Network Systems, 2022, 9(4): 1851−1862 doi: 10.1109/TCNS.2022.3181254
|
[207]
|
Zhao J, Yang C, Wang W, Xu B, Li Y, Yang L. A game-learning-based smooth path planning strategy for intelligent air-ground vehicle considering mode switching. IEEE Transactions on Transportation Electrification, 2022, 8(3): 3349−3366 doi: 10.1109/TTE.2022.3142150
|
[208]
|
Song W, Tong S. Fuzzy optimal tracking control for nonlinear underactuated unmanned surface vehicles. Ocean Engineering, 2023, 287: Article No. 115700 doi: 10.1016/j.oceaneng.2023.115700
|
[209]
|
Chen L, Dong C, He S, Dai S. Adaptive optimal formation control for unmanned surface vehicles with guaranteed performance using actor-critic learning architecture. International Journal of Robust and Nonlinear Control, 2023, 33(8): 4504−4522 doi: 10.1002/rnc.6623
|
[210]
|
Bai W, Zhang W, Cao L, Liu Q. Adaptive control for multi-agent systems with actuator fault via reinforcement learning and its application on multi-unmanned surface vehicle. Ocean Engineering, 2023, 280: Article No. 114545 doi: 10.1016/j.oceaneng.2023.114545
|
[211]
|
Chen H, Yan H, Wang Y, Xie S, Zhang D. Reinforcement learning-based close formation control for underactuated surface vehicle with prescribed performance and time-varying state constraints. Ocean Engineering, 2022, 256: Article No. 111361 doi: 10.1016/j.oceaneng.2022.111361
|
[212]
|
Weng P, Tian X, Liu H, Mai Q. Distributed edge-based event-triggered optimal formation control for air-sea heterogeneous multiagent systems. Ocean Engineering, 2023, 288: Article No. 116066 doi: 10.1016/j.oceaneng.2023.116066
|
[213]
|
Jaderberg M, Czarnecki W M, Dunning I, Marris L, Lever G, Castaneda A G, et al. Human-level performance in 3D multiplayer games with population-based reinforcement learning. Science, 2019, 364(6443): 859−865 doi: 10.1126/science.aau6249
|
[214]
|
Xu X, Jia Y, Xu Y, Xu Z, Chai S, Lai C. A multi-agent reinforcement learning-based data-driven method for home energy management. IEEE Transactions on Smart Grid, 2020, 11(4): 3201−3211 doi: 10.1109/TSG.2020.2971427
|
[215]
|
Ahrarinouri M, Rastegar M, Seifi A R. Multiagent reinforcement learning for energy management in residential buildings. IEEE Transactions on Industrial Informatics, 2021, 17(1): 659−666 doi: 10.1109/TII.2020.2977104
|
[216]
|
Zhang Y, Yang Q, An D, Li D, Wu Z. Multistep multiagent reinforcement learning for optimal energy schedule strategy of charging stations in smart grid. IEEE Transactions on Cybernetics, 2023, 53(7): 4292−4305 doi: 10.1109/TCYB.2022.3165074
|
[217]
|
Zhao X, Wu C. Large-scale machine learning cluster scheduling via multi-agent graph reinforcement learning. IEEE Transactions on Network and Service Management, 2022, 19(4): 4962−4974
|
[218]
|
Yu T, Huang J, Chang Q. Optimizing task scheduling in human-robot collaboration with deep multi-agent reinforcement learning. Journal of Manufacturing Systems, 2021, 60: 487−499 doi: 10.1016/j.jmsy.2021.07.015
|
[219]
|
Jing X, Yao X, Liu M, Zhou J. Multi-agent reinforcement learning based on graph convolutional network for flexible job shop scheduling. Journal of Intelligent Manufacturing, 2024, 35(1): 75−93 doi: 10.1007/s10845-022-02037-5
|
[220]
|
邝祝芳, 陈清林, 李林峰, 邓晓衡, 陈志刚. 基于深度强化学习的多用户边缘计算任务卸载调度与资源分配算法. 计算机学报, 2022, 45(4): 812−824 (Kuang Zhu-Fang, Chen Qing-Lin, Li Lin-Feng, Deng Xiao-Heng, Chen Zhi-Gang. Multi-user edge computing task offloading scheduling and resource allocation based on deep reinforcement learning. Chinese Journal of Computers, 2022, 45(4): 812−824) doi: 10.11897/SP.J.1016.2022.00812
|
[221]
|
Adibi M, Van Der Woude J. Secondary frequency control of microgrids: An online reinforcement learning approach. IEEE Transactions on Automatic Control, 2022, 67(9): 4824−4831 doi: 10.1109/TAC.2022.3162550
|
[222]
|
Liu Y, Qie T, Yu Y, Wang Y, Chau T, Zhang X. A novel integral reinforcement learning-based H∞ control strategy for proton exchange membrane fuel cell in DC microgrids. IEEE Transactions on Smart Grid, 2023, 14(3): 1668−1681
|
[223]
|
Zhang H, Yue D, Dou C, Xie X, Li K, Hancke G P. Resilient optimal defensive strategy of TSK fuzzy-model-based microgrids' system via a novel reinforcement learning approach. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(4): 1921−1931
|
[224]
|
Duan J, Yi Z, Shi D, Lin C, Lu X, Wang Z. Reinforcement-learning-based optimal control of hybrid energy storage systems in hybrid AC-DC microgrids. IEEE Transactions on Industrial Informatics, 2019, 15(9): 5355−5364 doi: 10.1109/TII.2019.2896618
|
[225]
|
Dong X, Zhang H, Xie X, Ming Z. Data-driven distributed H∞ current sharing consensus optimal control of DC microgrids via reinforcement learning. IEEE Transactions on Circuits and Systems I: Regular Papers, 2024, 71(6): 2824−2834 doi: 10.1109/TCSI.2024.3366942
|
[226]
|
Fang H, Zhang M, He S, Luan X, Liu F, Ding Z. Solving the zero-sum control problem for tidal turbine system: An online reinforcement learning approach. IEEE Transactions on Cybernetics, 2023, 53(12): 7635−7647 doi: 10.1109/TCYB.2022.3186886
|
[227]
|
Dong H, Zhao X. Wind-farm power tracking via preview-based robust reinforcement learning. IEEE Transactions on Industrial Informatics, 2022, 18(3): 1706−1715 doi: 10.1109/TII.2021.3093300
|
[228]
|
Xie J, Dong H, Zhao X, Lin S. Wind turbine fault-tolerant control via incremental model-based reinforcement learning. IEEE Transactions on Automation Science and Engineering, 2024 doi: 10.1109/TASE.2024.3372713
|
[229]
|
Park J S, O'Brien J, Cai C J, Morris M R, Liang P, Bernstein M S. Generative agents: Interactive simulacra of human behavior. In: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. San Francisco, USA: ACM, 2023. 1–22
|
[230]
|
Wang H, Chen J, Huang W, Ben Q, Wang T, Mi B. GRUtopia: Dream general robots in a city at scale. arXiv preprint arXiv: 2407.10943, 2024
|
[231]
|
王涵, 俞扬, 姜远. 基于通信的多智能体强化学习进展综述. 中国科学: 信息科学, 2022, 52(5): 742−764 (Wang Han, Yu Yang, Jiang Yuan. Review of the progress of communication-based multi-agent reinforcement learning. Scientia Sinica Informationis, 2022, 52(5): 742−764) doi: 10.1360/SSI-2020-0180
|
[232]
|
Hu T, Luo B. PA2D-MORL: Pareto ascent directional decomposition based multi-objective reinforcement learning. In: Proceedings of the 38th AAAI Conference on Artificial Intelligence. Vancouver, Canada: AAAI, 2024. 12547–12555
|
[233]
|
Xu M, Song Y, Wang J, Qiao M, Huo L, Wang Z. Predicting head movement in panoramic video: A deep reinforcement learning approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(11): 2693−2708 doi: 10.1109/TPAMI.2018.2858783
|
[234]
|
Skalse J, Hammond L, Griffin C, Abate A. Lexicographic multi-objective reinforcement learning. In: Proceedings of the 31st International Joint Conference on Artificial Intelligence. Vienna, Austria: IJCAI, 2022. 3430–3436
|
[235]
|
Hu T, Luo B, Yang C, Huang T. MO-MIX: Multi-objective multi-agent cooperative decision-making with deep reinforcement learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(10): 12098−12112 doi: 10.1109/TPAMI.2023.3283537
|
[236]
|
王雪松, 王荣荣, 程玉虎. 安全强化学习综述. 自动化学报, 2023, 49(9): 1813−1835 (Wang Xue-Song, Wang Rong-Rong, Cheng Yu-Hu. Safe reinforcement learning: A survey. Acta Automatica Sinica, 2023, 49(9): 1813−1835)
|