
The ChatGPT After: Opportunities and Challenges of Very Large Scale Pre-trained Models

Lu Jing-Wei, Guo Chao, Dai Xing-Yuan, Miao Qing-Hai, Wang Xing-Xia, Yang Jing, Wang Fei-Yue

Citation: Lu Jing-Wei, Guo Chao, Dai Xing-Yuan, Miao Qing-Hai, Wang Xing-Xia, Yang Jing, Wang Fei-Yue. The ChatGPT after: Opportunities and challenges of very large scale pre-trained models. Acta Automatica Sinica, 2023, 49(4): 705−717 doi: 10.16383/j.aas.c230107


doi: 10.16383/j.aas.c230107



Funds: Supported by the National Natural Science Foundation of China (U1811463) and the Motion G, Inc. Collaborative Research Project for Foundation Modeling and Parallel Driving/Control for Servo-Drive Systems
More Information
    Author Bio:

    LU Jing-Wei Associate professor at the Qingdao Academy of Intelligent Industries. He received his Ph.D. degree in computer application technology from University of Chinese Academy of Sciences in 2022. His research interest covers optimal control, adaptive dynamic programming, deep reinforcement learning, and autonomous driving. E-mail: lujingweihh@gmail.com

    GUO Chao Assistant professor at the State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences. His research interest covers AI art, intelligent robotic systems, deep learning, and reinforcement learning. E-mail: guochao2014@ia.ac.cn

    DAI Xing-Yuan Assistant professor at the State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences. He received his Ph.D. degree in control theory and control engineering from the University of Chinese Academy of Sciences in 2022. His research interest covers artificial intelligence, reinforcement learning, and intelligent transportation systems. E-mail: xingyuan.dai@ia.ac.cn

    MIAO Qing-Hai Associate professor at the School of Artificial Intelligence, University of Chinese Academy of Sciences. He received his Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences in 2007. His research interest covers intelligent systems, machine learning, and computer vision. E-mail: miaoqh@ucas.ac.cn

    WANG Xing-Xia Ph.D. candidate at the State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences. She received her master's degree in engineering from Nankai University in 2021. Her research interest covers parallel control, parallel oilfields, and multi-agent systems. E-mail: wangxingxia2022@ia.ac.cn

    YANG Jing Ph.D. candidate at the State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences. She received her bachelor's degree in automation from Beijing University of Chemical Technology in 2020. Her research interest covers parallel manufacturing, social manufacturing, artificial intelligence, and cyber-physical-social systems. E-mail: yangjing2020@ia.ac.cn

    WANG Fei-Yue Professor at the State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences. His research interest covers modeling, analysis, and control of intelligent systems and complex systems. Corresponding author of this paper. E-mail: feiyue.wang@ia.ac.cn

  • Abstract: Very large scale pre-trained models (PTMs) have risen rapidly in recent years as a research direction in artificial intelligence, achieving the best performance to date on a wide range of tasks in natural language processing (NLP), computer vision, and other fields, and advancing the development and deployment of artificial intelligence-generated content (AIGC). ChatGPT, currently the most prominent PTM, has drawn broad attention from many communities with its outstanding performance. This paper is organized around ChatGPT. It first summarizes the basic idea of PTMs and reviews their development; it then examines the technical details of ChatGPT and interprets ChatGPT from the perspective of parallel intelligence; finally, it discusses the development trends of PTMs in terms of technology, paradigms, and applications.
    1) The quoted text was generated by ChatGPT (https://chat.openai.com/chat/)
    2) https://openai.com/blog/chatgpt/
    3) https://openai.com/blog/chatgpt/
    4) https://openai.com/research/language-model-safety-and-misuse
    5) https://openai.com/blog/ai-and-compute
  • Fig.  1  The development of typical very large scale PTMs

    Fig.  2  The functions of ChatGPT

    Fig.  3  The Transformer decoder structure adopted by ChatGPT

    Fig.  4  The implementation process of ChatGPT

    Fig.  5  ChatGPT from the perspective of reinforcement learning (RL)

    Fig.  6  ChatGPT in the grand socialization closed loop

    Fig.  7  Research paradigms of PTMs
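
For readers unfamiliar with the decoder-only architecture referred to in Fig. 3, the sketch below shows a single masked self-attention decoder block in PyTorch. It is a minimal illustration only: the class name, layer sizes, and hyperparameters are assumptions made for exposition, not details taken from the paper or from ChatGPT's actual implementation.

```python
import torch
import torch.nn as nn


class DecoderBlock(nn.Module):
    """One pre-norm, decoder-only Transformer block (masked self-attention + MLP)."""

    def __init__(self, d_model: int = 768, n_heads: int = 12,
                 d_ff: int = 3072, dropout: float = 0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                nn.Linear(d_ff, d_model))
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Causal mask: position i may only attend to positions <= i.
        seq_len = x.size(1)
        causal_mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device),
            diagonal=1)
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=causal_mask,
                                need_weights=False)
        x = x + attn_out                 # residual connection around attention
        x = x + self.ff(self.ln2(x))     # residual connection around the MLP
        return x


if __name__ == "__main__":
    block = DecoderBlock()
    tokens = torch.randn(2, 16, 768)     # (batch, sequence length, embedding size)
    print(block(tokens).shape)           # torch.Size([2, 16, 768])
```

Stacking many such blocks, together with token and position embeddings and a final language-model head, yields a GPT-style decoder of the kind Fig. 3 depicts; the fine-tuning pipeline outlined in Fig. 4 operates on such a model and is not shown here.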

Publication History
  • Received: 2023-03-05
  • Available online: 2023-03-29
  • Published in issue: 2023-04-20
