

具身智能研究的关键问题: 自主感知、行动与进化

沈甜雨 陶子锐 王亚东 张庭祯 刘宇航 王兴霞 杨静 李志伟 陈龙 王坤峰 王飞跃

引用本文: 沈甜雨, 陶子锐, 王亚东, 张庭祯, 刘宇航, 王兴霞, 杨静, 李志伟, 陈龙, 王坤峰, 王飞跃. 具身智能研究的关键问题: 自主感知、行动与进化. 自动化学报, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240364
Citation: Shen Tian-Yu, Tao Zi-Rui, Wang Ya-Dong, Zhang Ting-Zhen, Liu Yu-Hang, Wang Xing-Xia, Yang Jing, Li Zhi-Wei, Chen Long, Wang Kun-Feng, Wang Fei-Yue. Key problems of embodied intelligence: autonomous perception, action, and evolution. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c240364


doi: 10.16383/j.aas.c240364 cstr: 32138.14.j.aas.c240364
基金项目: 国家自然科学基金项目(No.62302047; No.62076020), 中央高校基本科研业务费专项资金(buctrc202413)资助
详细信息
    作者简介:

    沈甜雨:北京化工大学信息科学与技术学院副教授. 2021年获得中国科学院自动化研究所工学博士学位. 主要研究方向为智能感知与智能机器人系统. E-mail: tianyu.shen@buct.edu.cn

    陶子锐:北京化工大学信息科学与技术学院硕士研究生. 2023年获北京化工大学学士学位. 主要研究方向为多任务学习和增量学习. E-mail: taozirui@126.com

    王亚东:北京化工大学信息科学与技术学院博士研究生. 主要研究方向为计算机视觉与智能交通系统. E-mail: 2021400212@buct.edu.cn

    张庭祯:北京化工大学信息科学与技术学院硕士研究生. 2018年获北京化工大学学士学位. 主要研究方向为计算机视觉与具身智能. E-mail: ztz1733565287@163.com

    刘宇航:中国科学院自动化研究所博士研究生. 2021年获清华大学学士学位. 主要研究方向为三维感知和具身智能. E-mail: liuyuhang2021@ia.ac.cn

    王兴霞:中国科学院自动化研究所博士研究生. 2021年获得南开大学工学硕士学位. 主要研究方向为平行智能, 平行油田, 故障诊断, 多智能体系统. E-mail: wangxingxia2022@ia.ac.cn

    杨静:中国科学院自动化研究所博士研究生. 2020年获得北京化工大学学士学位. 主要研究方向为平行制造, 社会制造, 人工智能和社会物理信息系统. E-mail: yangjing2020@ia.ac.cn

    李志伟:北京化工大学信息科学与技术学院副教授. 2020年获中国矿业大学(北京)博士学位. 主要研究方向为自动驾驶, 具身智能机器人, 视觉语言大模型. E-mail: lizw@buct.edu.cn

    陈龙:中国科学院自动化研究所研究员. 2013年获得武汉大学博士学位. 主要研究方向为自动驾驶, 机器人, 智慧矿山和平行智能. E-mail: long.chen@ia.ac.cn

    王坤峰:北京化工大学信息科学与技术学院教授. 主要研究方向为计算机视觉, 多模态感知和智能无人系统. 本文通信作者. E-mail: wangkf@buct.edu.cn

    王飞跃:中国科学院自动化研究所研究员. 主要研究方向为智能系统和复杂系统的建模, 分析与控制. E-mail: feiyue.wang@ia.ac.cn

  • 中图分类号: Y

Key Problems of Embodied Intelligence: Autonomous Perception, Action, and Evolution

Funds: Supported by the National Natural Science Foundation of China under Grants 62302047 and 62076020, and the Fundamental Research Funds for the Central Universities (buctrc202413)
More Information
    Author Bio:

    SHEN Tian-Yu Associate Professor at College of Information Science and Technology, Beijing University of Chemical Technology. She received the Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences in 2021. Her research interest covers intelligent perception and intelligent unmanned systems

    TAO Zi-Rui Master candidate at the College of Information Science and Technology, Beijing University of Chemical Technology. He received the B.S. degree from Beijing University of Chemical Technology in 2023. His research interest covers multi-task learning and incremental learning

    WANG Ya-Dong Ph.D. candidate at the College of Information Science and Technology, Beijing University of Chemical Technology. His research interest covers computer vision and intelligent transportation systems

    ZHANG Ting-Zhen Master candidate at the College of Information Science and Technology, Beijing University of Chemical Technology. He received the B.S. degree from Beijing University of Chemical Technology in 2022. His research interest covers computer vision and embodied intelligence

    LIU Yu-Hang Ph.D. candidate at Institute of Automation, Chinese Academy of Sciences. He received the B.S. degree from Tsinghua University in 2021. His research interest covers 3D perception and embodied artificial intelligence

    WANG Xing-Xia Ph.D. candidate at Institute of Automation, Chinese Academy of Sciences. She received the master's degree in engineering from Nankai University in 2021. Her research interest covers parallel control, parallel oilfields, and multi-agent systems

    YANG Jing Ph.D. candidate at Institute of Automation, Chinese Academy of Sciences. She received the B.S. degree from Beijing University of Chemical Technology in 2020. Her research interest covers parallel manufacturing, social manufacturing, artificial intelligence, and cyber-physical-social systems

    LI Zhi-Wei Associate Professor at College of Information Science and Technology, Beijing University of Chemical Technology. He received the Ph.D. degree from China University of Mining and Technology (Beijing) in 2020. His research interest covers autonomous driving, embodied intelligent robots, and large visual-language models

    CHEN Long Professor at Institute of Automation, Chinese Academy of Sciences. He received the Ph.D. degree from Wuhan University in 2013. His research interest covers autonomous driving, robotics, smart mining, and parallel intelligence

    WANG Kun-Feng Professor at the College of Information Science and Technology, Beijing University of Chemical Technology. His research interest covers computer vision, multi-modal perception, and intelligent unmanned systems. Corresponding author of this paper

    WANG Fei-Yue Professor at Institute of Automation, Chinese Academy of Sciences. His research interest covers modeling, analysis, and control of intelligent systems and complex systems

  • 摘要: 具身智能强调了大脑、身体及环境三者的相互作用, 旨在基于机器与物理世界的交互, 创建软硬件结合、可自主学习进化的智能体. 当前, 机器学习、机器人学、认知科学等多学科技术的快速发展极大地推动了具身智能的研究与应用. 不同于已有的具身智能文献更多从技术和方法分类的角度入手, 本文以具身智能在研究和应用过程中面临的关键挑战为切入点, 分析了具身智能研究的一般性框架, 围绕具身感知与执行、具身学习与进化两个方面提出了具体的研究思路, 并针对其中涉及的关键问题详细梳理了相关技术及研究进展. 此外, 本文以移动机器人、仿生机器人、平行机器人三方面应用为例, 介绍了具身智能在感知与理解、控制与决策、交互与学习等方面给实际机器人系统设计带来的启发. 最后, 对具身智能的未来发展方向进行了展望, 探索了虚实融合数据智能、基础模型与基础智能、数字孪生与平行智能在其中的重要作用和应用潜力, 希望为相关领域学者和从业人员提供一定的借鉴和思路. 论文相关项目详见 https://github.com/BUCT-IUSRC/Survey__EmbodiedAI.
  • 图  1  具身智能与智能体发展历程

    Fig.  1  A historical overview of embodied intelligence and agent development

    图  2  具身智能研究的一般性框架图

    Fig.  2  General framework diagram of embodied intelligence research

    图  3  “感知-模拟-执行”一体化机制框架

    Fig.  3  The framework of the integrated perception-simulation-execution mechanism

    图  4  典型的端到端自动驾驶框架图[18]

    Fig.  4  Typical end-to-end autonomous driving framework[18]

    图  5  典型的多模态融合感知框架图[19]

    Fig.  5  Typical multi-modal fusion perception framework[19]

    图  6  具身智能进化与学习框架

    Fig.  6  The research framework of embodied intelligence evolution and learning

    图  7  EWC方法梯度下降方向的可视化图[71]

    Fig.  7  Visualization of gradient descent direction of EWC method[71]
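
To make the figure's point concrete, the snippet below sketches the EWC quadratic penalty from [71], which biases the gradient-descent direction against forgetting a previous task. It is a minimal PyTorch-style illustration, not the paper's code; `fisher`, `old_params` and the weight `lam` are assumed to be supplied by the caller.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """Quadratic penalty pulling parameters toward the previous task's optimum,
    weighted by each parameter's (diagonal) Fisher information."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# total_loss = task_loss + ewc_penalty(model, fisher, old_params)
# total_loss.backward()  # descent direction is biased against forgetting the old task
```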

    图  8  蒸馏损失POD通过约束中间层输出防止模型过度漂移, 从而避免灾难性遗忘现象发生[78]

    Fig.  8  The distillation loss POD prevents excessive model drift by constraining intermediate outputs, thereby avoiding catastrophic forgetting phenomena[78]
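
As a rough sketch of the idea in this caption, the function below computes a pooled-output distillation term between intermediate feature maps of a frozen old model and the current model. It is an illustrative PyTorch-style approximation of POD [78, 79]; the inputs `feats_old` and `feats_new` (lists of feature maps) are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pod_spatial_loss(feats_old, feats_new):
    """Match width- and height-pooled intermediate feature maps of the frozen old
    model and the current model, limiting how far representations may drift."""
    loss = torch.zeros(())
    for a, b in zip(feats_old, feats_new):          # each tensor: (B, C, H, W)
        for dim in (2, 3):                          # pool over H, then over W
            pa = F.normalize(a.sum(dim=dim).flatten(1), dim=1)
            pb = F.normalize(b.sum(dim=dim).flatten(1), dim=1)
            loss = loss + ((pa - pb) ** 2).sum(dim=1).sqrt().mean()
    return loss / len(feats_new)
```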

    图  9  以观察图像和目标图像为输入的执行器-评价器网络结构[94]

    Fig.  9  An actor-critic model with observation images and target images as inputs[94]
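
A minimal sketch of such a target-driven actor-critic head is given below, assuming the observation and goal images have already been encoded into feature vectors (e.g., by a shared CNN); the class name and dimensions are illustrative placeholders, not those used in [94].

```python
import torch
import torch.nn as nn

class TargetDrivenActorCritic(nn.Module):
    """The current observation and the goal image are embedded (here assumed to be
    pre-extracted feature vectors), fused, and mapped to a policy and a value."""
    def __init__(self, feat_dim=2048, hidden=512, n_actions=4):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)   # action logits
        self.critic = nn.Linear(hidden, 1)          # state-value estimate

    def forward(self, obs_feat, goal_feat):
        h = self.fuse(torch.cat([obs_feat, goal_feat], dim=-1))
        return self.actor(h), self.critic(h)
```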

    图  10  NerveNet从每个节点的观测向量中获取信息, 通过多次计算相邻节点间的信息更新节点的隐藏状态, 最后在输出模型中收集每个控制器的输出形成优化策略[96]

    Fig.  10  NerveNet fetches the information from the observation vectors of each node, updates the hidden state of the nodes by calculating the information between adjacent nodes multiple times, and finally collects the output of each controller in the output model to form an optimization strategy[96]
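
The message-passing loop described in this caption can be sketched as follows. This is an illustrative PyTorch-style approximation of a NerveNet-like structured policy [96]; `adj` is an assumed adjacency matrix over the robot's morphology graph, and all layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class GraphPolicy(nn.Module):
    """Each joint/body part is a graph node; hidden states are updated by exchanging
    messages along the morphology's edges, then decoded into per-node controls."""
    def __init__(self, obs_dim, hidden=64, steps=3):
        super().__init__()
        self.encode = nn.Linear(obs_dim, hidden)
        self.message = nn.Linear(hidden, hidden)
        self.update = nn.GRUCell(hidden, hidden)
        self.decode = nn.Linear(hidden, 1)          # one control signal per node
        self.steps = steps

    def forward(self, node_obs, adj):               # node_obs: (N, obs_dim), adj: (N, N)
        h = torch.tanh(self.encode(node_obs))
        for _ in range(self.steps):                 # several rounds of propagation
            msgs = adj @ self.message(h)            # aggregate messages from neighbours
            h = self.update(msgs, h)
        return self.decode(h).squeeze(-1)
```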

    图  11  通过使用学习到的Q函数和策略网络进行评估优化, 有效地减少了优化计算过程中代表物理原型的参数量[99]

    Fig.  11  By using the learned Q-function and policy network for evaluation and optimization, the number of parameters representing the physical prototype in the optimization process is effectively reduced[99]
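
In the spirit of this caption, the sketch below scores a candidate morphology by querying a learned critic instead of evaluating a physical prototype. Conditioning the policy and Q-function on a design vector in this way is an assumption made for illustration and does not reproduce the exact formulation of [99].

```python
import torch

def design_value(q_net, policy, init_obs, design_params):
    """Score a candidate morphology with the learned critic instead of building and
    testing a physical prototype; the design vector is appended to the observation."""
    obs = torch.cat([init_obs, design_params], dim=-1)
    with torch.no_grad():
        action = policy(obs)
        return q_net(obs, action).item()

# best_design = max(candidates, key=lambda d: design_value(q_net, policy, init_obs, d))
```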

    图  12  具身智能增强的机器人系统研究框架

    Fig.  12  The research framework of robot systems with enhanced embodied intelligence

    图  13  具身智能增强的自动驾驶系统框架

    Fig.  13  The framework of autonomous driving systems with enhanced embodied intelligence

    图  14  典型的仿生机器人

    Fig.  14  Typical bionic robots

    图  15  平行机器人框架[143]

    Fig.  15  The framework of parallel robot[143]

    表  1  具身智能研究现状

    Table  1  The current status of embodied intelligence research

    名称 | 年份 | 特点 | 优劣
    BigDog | 2009 | 由波士顿动力公司制造, 能够在崎岖不平的地形上行走, 并保持稳定, 展示了在复杂环境中移动的能力 | 具有强大的越野能力和高负载能力, 能适应复杂环境, 但采用噪音较大的内燃机动力源
    Atlas | 2013 | 由波士顿动力公司制造, 具备高度灵活性和稳定性的人形机器人, 能够进行跑步、跳跃和攀爬等复杂动作, 标志着人形机器人在运动控制和灵活性方面的显著进步 | 具备高度灵活性和稳定性, 能够执行复杂动作. 但开发和制造成本较高
    DQN算法 | 2014 | DeepMind公司开发的DQN(Deep Q-Network)算法首次将深度学习与强化学习相结合, 使智能体在多种视频游戏中超越人类表现. 这一算法为具身智能提供了新的学习和决策方法 | 可在无监督环境中通过与环境的互动进行学习, 提高了适应性. 但需要大量数据和计算资源进行训练, 运行成本高
    AlphaGo | 2016 | DeepMind的AlphaGo战胜了围棋世界冠军李世石, 这一里程碑事件展示了智能体在复杂策略游戏中的超人表现, 推动了具身智能在复杂决策问题上的研究 | 结合深度学习和蒙特卡罗树搜索, 实现高效决策和自我优化. 但高计算成本和领域局限性限制了其广泛应用的可能
    Walker | 2018 | 优必选公司发布了Walker机器人, 这是一款双足仿人服务机器人, 展示了在家居和服务领域的应用潜力 | 具备双足行走能力和多功能性, 但高成本和续航时间有限, 限制了长时间工作和普及应用
    Stretch | 2021 | 波士顿动力公司推出的Stretch机器人, 专为仓库操作设计, 展示了在物流和仓储领域的巨大应用前景 | 专为仓库操作设计, 提升了仓库内搬运任务的效率. 但泛化到其他领域工作的能力较低
    Optimus | 2024 | 特斯拉公司发布了Optimus人形机器人, 旨在解决劳动力短缺问题, 展示了未来具身智能在生产和日常生活中的广泛应用潜力 | 具备高度自主性和广泛应用前景, 但高成本和复杂技术性限制了普及性
  • [1] 张钹, 朱军, 苏航. 迈向第三代人工智能. 中国科学: 信息科学, 2020, 50(9): 1281−1302

    Zhang Bo, Zhu Jun, Su Hang. Toward the third generation artificial intelligence. Science China Information Sciences, 2020, 50(9): 1281−1302
    [2] Jin D, Zhang L. Embodied intelligence weaves a better future. Nature Machine Intelligence, 2020, 2(11): 663−664 doi: 10.1038/s42256-020-00250-6
    [3] Gupta A, Savarese S, Ganguli S, et al. Embodied intelligence via learning and evolution. Nature Communications, 2021, 12(1): 5721 doi: 10.1038/s41467-021-25874-z
    [4] Turing A M. Computing machinery and intelligence. Creative Computing, 1980, 6(1): 44−53
    [5] Howard D, Eiben A E, Kennedy D F, et al. Evolving embodied intelligence from materials to machines. Nature Machine Intelligence, 2019, 1(1): 12−19 doi: 10.1038/s42256-018-0009-9
    [6] Shen T, Sun J, Kong S, et al. The journey/DAO/TAO of embodied intelligence: From large models to foundation intelligence and parallel intelligence. IEEE/CAA Journal of Automatica Sinica, 2024, 11(6): 1313−1316 doi: 10.1109/JAS.2024.124407
    [7] 沈甜雨, 李志伟, 范丽丽, 张庭祯, 唐丹丹, 周美华, 刘华平, 王坤峰. 具身智能驾驶:概念、方法、现状与展望. 智能科学与技术学报, 2024, 6(1): 17−32 doi: 10.11959/j.issn.2096-6652.202404

    Shen Tian-Yu, Li Zhi-Wei, Fan Li-Li, Zhang Ting-Zhen, Tang Dan-Dan, Zhou Mei-Hua, Liu Hua-Ping, Wang Kun-Feng. Embodied intelligent driving: Concepts, methods, the state of the art and beyond. Chinese Journal of Intelligent Science and Technology, 2024, 6(1): 17−32 doi: 10.11959/j.issn.2096-6652.202404
    [8] Brohan A, Chebotar Y, Finn C, et al. Do as I can, not as I say: Grounding language in robotic affordances. In: Proceeding of the Conference on Robot Learning, 2023: 287−318.
    [9] Shah D, Osiński B, Levine S. LM-Nav: Robotic navigation with large pre-trained models of language, vision, and action. In: Proceeding of the Conference on Robot Learning, 2023: 492−504.
    [10] Qiao H, Zhong S, Chen Z, et al. Improving performance of robots using human-inspired approaches: A survey. Science China Information Sciences, 2022, 65(12): 221201 doi: 10.1007/s11432-022-3606-1
    [11] Cao L. AI robots and humanoid AI: Review, perspectives and directions. arXiv preprint arXiv: 2405.15775, 2024.
    [12] Duan J, Yu S, Tan H L, et al. A survey of embodied AI: From simulators to research tasks. IEEE Transactions on Emerging Topics in Computational Intelligence, 2022, 6(2): 230−244 doi: 10.1109/TETCI.2022.3141105
    [13] 刘华平, 郭迪, 孙富春, 张新钰. 基于形态的具身智能研究: 历史回顾与前沿进展. 自动化学报, 2023, 49(6): 1131−1154

    Liu Hua-Ping, Guo Di, Sun Fu-Chun, Zhang Xin-Yu. Morphological embodied intelligence research: historical review and cutting-edge progress. Acta Automatica Sinica, 2023, 49(6): 1131−1154
    [14] Minsky M. Society of mind. Simon and Schuster, 1988.
    [15] Dennett D C. The embodied mind: Cognitive science and human experience, 1993.
    [16] Pfeifer R, Bongard J. How the body shapes the way we think: A new view of intelligence. MIT Press, 2006.
    [17] Deisenroth M P, Fox D, Rasmussen C E. Gaussian processes for data-efficient learning in robotics and control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 37(2): 408−423
    [18] Hu Y, Yang J, Chen L, et al. Planning-oriented autonomous driving. In: Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 17853−17862.
    [19] Hu C, Zheng H, Li K, et al. FusionFormer: A multi-sensory fusion in bird's-eye-view and temporal consistent Transformer for 3D object detection. arXiv preprint arXiv: 2309.05257, 2023.
    [20] Yin T, Zhou X, Krähenbühl P. Multimodal virtual point 3D detection. In: Proceeding of the Advances in Neural Information Processing Systems, 2021, 34: 16494−16507.
    [21] Wu X, Peng L, Yang H, et al. Sparse fuse dense: Towards high quality 3D detection with depth completion. In: Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 5418−5427.
    [22] Wu H, Wen C, Shi S, et al. Virtual sparse convolution for multimodal 3D object detection. In: Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 21653−21662.
    [23] Liu Z, Tang H, Amini A, et al. Bevfusion: Multi-task multi-sensor fusion with unified bird's-eye view representation. In: Proceeding of the IEEE International Conference on Robotics and Automation, 2023: 2774−2781.
    [24] Wei M, Li J, Kang H, et al. BEV-CFKT: A LiDAR-camera cross-modality-interaction fusion and knowledge transfer framework with transformer for BEV 3D object detection. Neurocomputing, 2024, 582: 127527 doi: 10.1016/j.neucom.2024.127527
    [25] Drews F, Feng D, Faion F, et al. DeepFusion: A robust and modular 3D object detector for LiDARs, cameras and radars. In: Proceeding of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022: 560−567.
    [26] Xie B, Yang Z, Yang L, et al. AMMF: Attention-based multi-phase multi-task fusion for small contour object 3D detection. IEEE Transactions on Intelligent Transportation Systems, 2022, 24(2): 1692−1701
    [27] Chiu H, Li J, Ambruş R, et al. Probabilistic 3D multi-modal, multi-object tracking for autonomous driving. In: Proceeding of the IEEE International Conference on Robotics and Automation, 2021: 14227−14233.
    [28] Tian Y, Zhang X, Wang X, et al. ACF-Net: Asymmetric cascade fusion for 3D detection with LiDAR point clouds and images. IEEE Transactions on Intelligent Vehicles (Early Access), 2023: 1−12
    [29] Zhang P, Zhang B, Zhang T, et al. Prototypical pseudo label denoising and target structure learning for domain adaptive semantic segmentation. In: Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 12414−12424.
    [30] Lee S, Cho S, Im S. Dranet: Disentangling representation and adaptation networks for unsupervised cross-domain adaptation. In: Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 15252−15261.
    [31] Oza P, Sindagi V A, Sharmini V V, Patel V M. Unsupervised domain adaptation of object detectors: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(6): 4018−4040 doi: 10.1109/TPAMI.2022.3217046
    [32] Ganin Y, Ustinova E, Ajakan H, et al. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 2016, 17(59): 1−35
    [33] Cai Q, Pan Y, Ngo C W, Tian X, et al. Exploring object relation in mean teacher for cross-domain detection. In: Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 11457−11466.
    [34] Li C, Chan S H, Chen Y T. Droid: Driver-centric risk object identification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(11): 13683−13698
    [35] Cheng Z, Lu J, Ding H, et al. A superposition assessment framework of multi-source traffic risks for mega-events using risk field model and time-series generative adversarial networks. IEEE Transactions on Intelligent Transportation Systems, 2023, 24(11): 12736−12753 doi: 10.1109/TITS.2023.3290165
    [36] Wang X, Alonso M J, Wang M. Probabilistic risk metric for highway driving leveraging multi-modal trajectory predictions. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(10): 19399−19412 doi: 10.1109/TITS.2022.3164469
    [37] Caleffi F, Anzanello M J, Cybis H B B. A multivariate-based conflict prediction model for a brazilian freeway. Accident Analysis & Prevention, 2017, 98: 295−302
    [38] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models. In: Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 10684−10695.
    [39] Saharia C, Chan W, Saxena S, et al. Photorealistic text-to-image diffusion models with deep language understanding. In: Proceeding of the Advances in Neural Information Processing Systems, 2022, 35: 36479−36494.
    [40] Ramesh A, Pavlov M, Goh G, et al. Zero-shot text-to-image generation. In: Proceeding of the International Conference on Machine Learning, 2021: 8821−8831.
    [41] Ramesh A, Dhariwal P, Nichol A, et al. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv: 2204.06125, 2022, 1(2): 3.
    [42] Reed S, Zolna K, Parisotto E, et al. A generalist agent. arXiv preprint arXiv: 2205.06175, 2022.
    [43] Niemeyer M, Geiger A. Giraffe: Representing scenes as compositional generative neural feature fields. In: Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 11453−11464.
    [44] Abou-Chakra J, Rana K, Dayoub F, et al. Physically embodied gaussian splatting: Embedding physical priors into a visual 3D world model for robotics. Conference on Robot Learning, 2023.
    [45] Hart P E, Nilsson N J, Raphael B. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 1968, 4(2): 100−107 doi: 10.1109/TSSC.1968.300136
    [46] Karaman S, Frazzoli E. Sampling-based algorithms for optimal motion planning. The International Journal of Robotics Research, 2011, 30(7): 846−894 doi: 10.1177/0278364911406761
    [47] Koenig S, Likhachev M. D* lite. In: Proceeding of the Eighteenth National Conference on Artificial Intelligence, 2002: 476−483.
    [48] Holland J H. Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence. MIT press, 1992.
    [49] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceeding of the ICNN'95-International Conference on Neural Networks, 1995: 1942−1948.
    [50] Dolgov D, Thrun S, Montemerlo M, et al. Practical search techniques in path planning for autonomous driving. Ann Arbor, 2008, 1001(48105): 18−80
    [51] Webb D J, Van Den Berg J. Kinodynamic RRT*: Asymptotically optimal motion planning for robots with linear dynamics. In: Proceeding of the 2013 IEEE international conference on robotics and automation, 2013: 5054−5061.
    [52] Konda V R, Tsitsiklis J N. On actor-critic algorithms. SIAM Journal on Control and Optimization, 2003, 42(4): 1143−1166 doi: 10.1137/S0363012901385691
    [53] Watkins C J C H, Dayan P. Technical note: Q-learning. Machine Learning, 1992, 8(3-4): 279−292
    [54] Mnih V, Badia A P, Mirza M, et al. Asynchronous methods for deep reinforcement learning. In: Proceeding of the International Conference on Machine Learning, 2016: 1928−1937.
    [55] Deisenroth M P, Rasmussen C E. PILCO: A model-based and data-efficient approach to policy search. In: Proceeding of the 28th International Conference on Machine Learning, 2011.
    [56] Ho J, Ermon S. Generative adversarial imitation learning. In: Proceeding of the Advances in Neural Information Processing Systems, 2016, 29.
    [57] Borase R P, Maghade D K, Sondkar S Y, et al. A review of PID control, tuning methods and applications. International Journal of Dynamics and Control, 2021, 9: 818−827 doi: 10.1007/s40435-020-00665-4
    [58] Dörfler F, Tesi P, De Persis C. On the role of regularization in direct data-driven LQR control. In: Proceeding of the 2022 IEEE 61st Conference on Decision and Control, 2022: 1091−1098.
    [59] Berberich J, Koch A, Scherer C W, et al. Robust data-driven state-feedback design. In: Proceeding of the 2020 American Control Conference. IEEE, 2020: 1532−1538.
    [60] Nubert J, Köhler J, Berenz V, et al. Safe and fast tracking on a robot manipulator: Robust mpc and neural network control. IEEE Robotics and Automation Letters, 2020, 5(2): 3050−3057 doi: 10.1109/LRA.2020.2975727
    [61] Liu Y J, Zhao W, Liu L, et al. Adaptive neural network control for a class of nonlinear systems with function constraints on states. IEEE Transactions on Neural Networks and Learning Systems, 2021, 34(6): 2732−2741
    [62] Phan D, Bab-Hadiashar A, Fayyazi M, et al. Interval type 2 fuzzy logic control for energy management of hybrid electric autonomous vehicles. IEEE Transactions on Intelligent Vehicles, 2020, 6(2): 210−220
    [63] Omidvar M N, Li X, Mei Y, et al. Cooperative co-evolution with differential grouping for large scale optimization. IEEE Transactions on Evolutionary Computation, 2013, 18(3): 378−393
    [64] Gad A G. Particle swarm optimization algorithm and its applications: a systematic review. Archives of Computational Methods in Engineering, 2022, 29(5): 2531−2561 doi: 10.1007/s11831-021-09694-4
    [65] Foderaro G, Ferrari S, Wettergren T A. Distributed optimal control for multi-agent trajectory optimization. Automatica, 2014, 50(1): 149−154 doi: 10.1016/j.automatica.2013.09.014
    [66] Bellemare M G, Dabney W, Munos R. A distributional perspective on reinforcement learning. In: Proceeding of the International Conference on Machine Learning, 2017: 449−458.
    [67] Dong G, Li H, Ma H, et al. Finite-time consensus tracking neural network FTC of multi-agent systems. IEEE Transactions on Neural Networks and Learning Systems, 2020, 32(2): 653−662
    [68] Wang Q, Liu K, Wang X, Wu L, Lü J. Leader-following consensus of multi-agent systems under antagonistic networks. Neurocomputing, 2020, 413: 339−347 doi: 10.1016/j.neucom.2020.07.006
    [69] Zhihao C, Longhong W, Jiang Z, Kun W, Yingxun W. Virtual target guidance-based distributed model predictive control for formation control of multiple UAVs. Chinese Journal of Aeronautics, 2020, 33(3): 1037−1056 doi: 10.1016/j.cja.2019.07.016
    [70] Harvey I, Husbands P, Cliff D, et al. Evolutionary robotics: the sussex approach. Robotics and Autonomous Systems, 1997, 20(2-4): 205−224 doi: 10.1016/S0921-8890(96)00067-X
    [71] Kirkpatrick J, Pascanu R, Rabinowitz N, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 2017, 114(13): 3521−3526 doi: 10.1073/pnas.1611835114
    [72] Liu X, Masana M, Herranz L, et al. Rotate your networks: Better weight consolidation and less catastrophic forgetting. In: Proceeding of the International Conference on Pattern Recognition, 2018: 2262−2268.
    [73] Hsu Y C, Liu Y C, Ramasamy A, et al. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. arXiv preprint arXiv: 1810.12488, 2018.
    [74] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. arXiv preprint arXiv: 1503.02531, 2015.
    [75] Li Z, Hoiem D. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(12): 2935−2947
    [76] Castro F M, Marín-Jiménez M J, Guil N, et al. End-to-end incremental learning. In: Proceeding of the European Conference on Computer Vision, 2018: 233−248.
    [77] Zhang J, Zhang J, Ghosh S, et al. Class-incremental learning via deep model consolidation. In: Proceeding of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020: 1131−1140.
    [78] Douillard A, Cord M, Ollion C, et al. Small-task incremental learning. arXiv preprint arXiv: 2004.13513, 2020.
    [79] Douillard A, Cord M, Ollion C, et al. PODNet: Pooled outputs distillation for small-tasks incremental learning. In: Proceeding of the European Conference on Computer Vision, 2020: 86−102.
    [80] 朱飞, 张煦尧, 刘成林. 类别增量学习研究进展和性能评价. 自动化学报, 2023, 49(3): 635−660

    Zhu Fei, Zhang Xu-Yao, Liu Cheng-Lin. Research progress and performance evaluation of category incremental learning. Acta Automatica Sinica, 2023, 49(3): 635−660
    [81] Tao X, Chang X, Hong X, et al. Topology-preserving class-incremental learning. In: Proceeding of the European Conference on Computer Vision, 2020: 254−270.
    [82] Rebuffi S A, Kolesnikov A, Sperl G, et al. Icarl: Incremental classifier and representation learning. In: Proceeding of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 2001−2010.
    [83] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks. Communications of the ACM, 2020, 63(11): 139−144 doi: 10.1145/3422622
    [84] Pellegrini L, Graffieti G, Lomonaco V, et al. Latent replay for real-time continual learning. In: Proceeding of the International Conference on Intelligent Robots and Systems, 2020: 10203−10209.
    [85] Wu Y, Chen Y, Wang L, et al. Large scale incremental learning. In: Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 374−382.
    [86] Serra J, Suris D, Miron M, et al. Overcoming catastrophic forgetting with hard attention to the task. In: Proceeding of the International Conference on Machine Learning, 2018: 4548−4557.
    [87] Zhu K, Zhai W, Cao Y, et al. Self-sustaining representation expansion for non-exemplar class-incremental learning. In: Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 9296−9305.
    [88] Schrauwen B, Verstraeten D, Van Campenhout J. An overview of reservoir computing: Theory, applications and implementations. In: Proceedings of the 15th European Symposium on Artificial Neural Networks, 2007: 471−482.
    [89] Hauser H, Ijspeert A J, Füchslin R M, et al. The role of feedback in morphological computation with compliant bodies. Biological Cybernetics, 2012, 106: 595−613 doi: 10.1007/s00422-012-0516-4
    [90] Caluwaerts K, Despraz J, Işçen A, et al. Design and control of compliant tensegrity robots through simulation and hardware validation. Journal of the Royal Society Interface, 2014, 11(98): 20140520 doi: 10.1098/rsif.2014.0520
    [91] Degrave J, Caluwaerts K, Dambre J, et al. Developing an embodied gait on a compliant quadrupedal robot. In: Proceeding of the International Conference on Intelligent Robots and Systems, 2015: 4486−4491.
    [92] Rückert E A, Neumann G. Stochastic optimal control methods for investigating the power of morphological computation. Artificial Life, 2013, 19(1): 115−131 doi: 10.1162/ARTL_a_00085
    [93] Pervan A, Murphey T D. Algorithmic design for embodied intelligence in synthetic cells. IEEE Transactions on Automation Science and Engineering, 2020, 18(3): 864−875
    [94] Zhu Y, Mottaghi R, Kolve E, et al. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In: Proceeding of the 2017 IEEE International Conference on Robotics and Automation, 2017: 3357−3364.
    [95] Chen T, Murali A, Gupta A. Hardware conditioned policies for multi-robot transfer learning. In: Proceeding of the Advances in Neural Information Processing Systems, 2018, 31.
    [96] Wang T, Liao R, Ba J, et al. Nervenet: Learning structured policy with graph neural networks. In: Proceeding of the International Conference on Learning Representations, 2018.
    [97] Blake C, Kurin V, Igl M, et al. Snowflake: Scaling GNNs to high-dimensional continuous control via parameter freezing. In: Proceeding of the Advances in Neural Information Processing Systems, 2021, 34: 23983−23992.
    [98] Pathak D, Lu C, Darrell T, et al. Learning to control self-assembling morphologies: A study of generalization via modularity. In: Proceeding of the Advances in Neural Information Processing Systems, 2019, 32.
    [99] Luck K S, Amor H B, Calandra R. Data-efficient co-adaptation of morphology and behaviour with deep reinforcement learning. arXiv preprint arXiv: 1911.06832, 2019.
    [100] Luck K S, Amor H B, Calandra R. Data-efficient co-adaptation of morphology and behaviour with deep reinforcement learning. In: Proceeding of the Conference on Robot Learning, 2020: 854−869.
    [101] Ha D. Reinforcement learning for improving agent design. Artificial Life, 2019, 25(4): 352−365 doi: 10.1162/artl_a_00301
    [102] Nguyen T T, Nguyen N D, Vamplew P, et al. A multi-objective deep reinforcement learning framework. Engineering Applications of Artificial Intelligence, 2020, 96: 103915 doi: 10.1016/j.engappai.2020.103915
    [103] Clouse J A. Learning from an automated training agent. Adaptation and Learning in Multiagent Systems. Springer Verlag, 1996: 195.
    [104] Price B, Boutilier C. Accelerating reinforcement learning through implicit imitation. Journal of Artificial Intelligence Research, 2003, 19: 569−629 doi: 10.1613/jair.898
    [105] Hasselt H. Double Q-learning. In: Proceeding of the Advances in Neural Information Processing Systems, 2010, 23.
    [106] Sorokin I, Seleznev A, Pavlov M, et al. Deep attention recurrent Q-network. arXiv preprint arXiv: 1512.01693, 2015.
    [107] Bowling M, Veloso M. Multiagent learning using a variable learning rate. Artificial Intelligence, 2002, 136(2): 215−250 doi: 10.1016/S0004-3702(02)00121-2
    [108] Lauer M, Riedmiller M A. An algorithm for distributed reinforcement learning in cooperative multi-agent systems. In: Proceeding of the Seventeenth International Conference on Machine Learning, 2000: 535−542.
    [109] Leibo J Z, Zambaldi V, Lanctot M, et al. Multi-agent reinforcement learning in sequential social dilemmas. arXiv preprint arXiv: 1702.03037, 2017.
    [110] Macenski S, Foote T, Gerkey B, et al. Robot operating system 2: Design, architecture, and uses in the wild. Science Robotics, 2022, 7(66): eabm6074 doi: 10.1126/scirobotics.abm6074
    [111] Huang W, Wang C, Zhang R, et al. Voxposer: Composable 3D value maps for robotic manipulation with language models. arXiv preprint arXiv: 2307.05973, 2023.
    [112] Li C, Zhang R, Wong J, Gokmen C, et al. BEHAVIOR-1K: A human-centered, embodied AI benchmark with 1,000 everyday activities and realistic simulation. arXiv preprint arXiv: 2403.09227, 2024.
    [113] Li B, Zhang Y, Zhang T, et al. Embodied footprints: A safety-guaranteed collision-avoidance model for numerical optimization-based trajectory planning. IEEE Transactions on Intelligent Transportation Systems, 2024, 25(2): 2046−2060 doi: 10.1109/TITS.2023.3316175
    [114] Ma C, Provost J. Design-to-test approach for programmable controllers in safety-critical automation systems. IEEE Transactions on Industrial Informatics, 2020, 16(10): 6499−6508 doi: 10.1109/TII.2020.2968480
    [115] Cai L, Zhou C, Wang Y, et al. Binocular vision-based pole-shaped obstacle detection and ranging study. Applied Sciences, 2023, 13(23): 12617 doi: 10.3390/app132312617
    [116] Hawke J, Badrinarayanan V, Kendall A, et al. Reimagining an autonomous vehicle. arXiv preprint arXiv: 2108.05805, 2021.
    [117] Sun P, Kretzschmar H, Dotiwalla X, et al. Scalability in perception for autonomous driving: Waymo open dataset. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 2446−2454.
    [118] Chen L, Wu P, Chitta K, et al. End-to-end autonomous driving: Challenges and frontiers. arXiv preprint arXiv: 2306.16927, 2023.
    [119] Fu Z, Zhao T Z, Finn C. Mobile aloha: Learning bimanual mobile manipulation with low-cost whole-body teleoperation. arXiv preprint arXiv: 2401.02117, 2024.
    [120] Fager P, Calzavara M, Sgarbossa F. Modelling time efficiency of cobot-supported kit preparation. The International Journal of Advanced Manufacturing Technology, 2020, 106: 2227−2241 doi: 10.1007/s00170-019-04679-x
    [121] 兰沣卜, 赵文博, 朱凯, 等. 基于具身智能的移动操作机器人系统发展研究. 中国工程科学, 2024, 26(01): 139−148 doi: 10.15302/J-SSCAE-2024.01.010

    Lan Feng-Bu, Zhao Wen-Bo, Zhu Kai, et al. Development of mobile manipulator robot systems with embodied intelligence. Strategic Study of CAE, 2024, 26(01): 139−148 doi: 10.15302/J-SSCAE-2024.01.010
    [122] Zhao H, Pan F, Ping H, et al. Agent as cerebrum, controller as cerebellum: Implementing an embodied LMM-based agent on drones. arXiv preprint arXiv: 2311.15033, 2023.
    [123] Wang J, Wu Z, Zhang Y, et al. Integrated tracking control of an underwater bionic robot based on multimodal motions. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2023, 54(3): 1599−1610
    [124] Egan D, Cosker D, McDonnell R. NeuroDog: Quadruped embodiment using neural networks. In: Proceedings of the ACM on Computer Graphics and Interactive Techniques, 2023: 1−19.
    [125] Tong Y, Liu H, Zhang Z. Advancements in humanoid robots: A comprehensive review and future prospects. IEEE/CAA Journal of Automatica Sinica, 2024, 11(2): 301−328 doi: 10.1109/JAS.2023.124140
    [126] Haarnoja T, Moran B, Lever G, et al. Learning agile soccer skills for a bipedal robot with deep reinforcement learning. Science Robotics, 2024, 9(89): eadi8022 doi: 10.1126/scirobotics.adi8022
    [127] Dong H, Liu Y, Chu T, Saddik A E. Bringing robots home: The rise of AI robots in consumer electronics. arXiv preprint arXiv: 2403.14449, 2024.
    [128] Haldar A I, Pagar N D. Predictive control of zero moment point for terrain robot kinematics. Materials Today: Proceedings, 2023, 80: 122−127
    [129] Qiao H, Chen J, Huang X. A survey of brain-inspired intelligent robots: Integration of vision, decision, motion control, and musculoskeletal systems. IEEE Transactions on Cybernetics, 2021, 52(10): 11267−11280
    [130] Qiao H, Wu Y X, Zhong S L, et al. Brain-inspired intelligent robotics: Theoretical analysis and systematic application. Machine Intelligence Research, 2023, 20(1): 1−18 doi: 10.1007/s11633-022-1390-8
    [131] Sun Y, Zong C, Pancheri F, et al. Design of topology optimized compliant legs for bio-inspired quadruped robots. Scientific Reports, 2023, 13(1): 4875 doi: 10.1038/s41598-023-32106-5
    [132] Taheri H, Mozayani N. A study on quadruped mobile robots. Mechanism and Machine Theory, 2023, 190: 105448 doi: 10.1016/j.mechmachtheory.2023.105448
    [133] Idée A, Mosca M, Pin D. Skin barrier reinforcement effect assessment of a spot-on based on natural ingredients in a dog model of tape stripping. Veterinary Sciences, 2023, 9(8): 390
    [134] Yang C, Yuan K, Zhu Q, et al. Multi-expert learning of adaptive legged locomotion. Science Robotics, 2020, 5(49): eabb2174 doi: 10.1126/scirobotics.abb2174
    [135] Dai B, Khorrambakht R, Krishnamurthy P, et al. Sailing through point clouds: Safe navigation using point cloud based control barrier functions. arXiv preprint arXiv: 2403.18206, 2024.
    [136] Huang K, Yang B, Gao W. Modality plug-and-play: Elastic modality adaptation in multimodal llms for embodied AI. arXiv preprint arXiv: 2312.07886, 2023.
    [137] Mutlu R, Alici G, Li W. Three-dimensional kinematic modeling of helix-forming lamina-emergent soft smart actuators based on electroactive polymers. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2017, 47(9): 2562−2573
    [138] Goncalves A, Kuppuswamy N, Beaulieu A, et al. Punyo-1: Soft tactile-sensing upper-body robot for large object manipulation and physical human interaction. In: Proceedings of the 2022 IEEE 5th International Conference on Soft Robotics, 2022: 844−851.
    [139] Becker K, Teeple C, Charles N, et al. Active entanglement enables stochastic, topological grasping. Proceedings of the National Academy of Sciences, 2022, 119(42): e2209819119 doi: 10.1073/pnas.2209819119
    [140] Xie Z, Yuan F, Liu J, et al. Octopus-inspired sensorized soft arm for environmental interaction. Science Robotics, 2023, 8(84): eadh7852 doi: 10.1126/scirobotics.adh7852
    [141] Mengaldo G, Renda F, Brunton S L, et al. A concise guide to modelling the physics of embodied intelligence in soft robotics. Nature Reviews Physics, 2022, 4(9): 595−610 doi: 10.1038/s42254-022-00481-z
    [142] 白天翔, 王帅, 沈震, 曹东璞, 郑南宁, 王飞跃. 平行机器人与平行无人系统: 框架、结构、过程、平台及其应用. 自动化学报, 2017, 43(2): 161−175

    Bai Tian-Xiang, Wang Shuai, Shen Zhen, Cao Dong-Pu, Zheng Nan-Ning, Wang Fei-Yue. Parallel robots and parallel unmanned systems: framework, structure, process, platform, and their applications. Acta Automatica Sinica, 2017, 43(2): 161−175
    [143] 王飞跃. 机器人的未来发展: 从工业自动化到知识自动化. 科技导报, 2015, 33(21): 39−44
    [144] 王飞跃. 软件定义的系统与知识自动化: 从牛顿到默顿的平行升华. 自动化学报, 2015, 41(1): 1−8

    Wang Fei-Yue. Software defined systems and knowledge automation: parallel sublimation from Newton to Merton. Acta Automatica Sinica, 2015, 41(1): 1−8
    [145] Driess D, Xia F, Sajjadi M S M, et al. Palm-e: An embodied multimodal language model. arXiv preprint arXiv: 2303.03378, 2023.
    [146] Shridhar M, Thomason J, Gordon D, et al. Alfred: A benchmark for interpreting grounded instructions for everyday tasks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 10740−10749.
    [147] Weihs L, Deitke M, Kembhavi A, et al. Visual room rearrangement. In: Proceeding of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 5922−5931.
    [148] Batra D, Chang A X, Chernova S, et al. Rearrangement: A challenge for embodied AI. arXiv preprint arXiv: 2011.01975, 2020.
    [149] Shen B, Xia F, Li C, et al. Igibson 1.0: A simulation environment for interactive tasks in large realistic scenes. In: Proceeding of the International Conference on Intelligent Robots and Systems, 2021: 7520−7527.
    [150] Li C, Xia F, Martín-Martín R, et al. Igibson 2.0: Object-centric simulation for robot learning of everyday household tasks. arXiv preprint arXiv: 2108.03272, 2021.
    [151] Savva M, Kadian A, Maksymets O, et al. Habitat: A platform for embodied AI research. In: Proceeding of the IEEE/CVF International Conference on Computer Vision, 2019: 9339−9347.
    [152] Ramakrishnan S K, Gokaslan A, Wijmans E, et al. Habitat-matterport 3D dataset (HM3D): 1000 large-scale 3D environments for embodied AI. arXiv preprint arXiv: 2109.08238, 2021.
    [153] Wani S, Patel S, Jain U, et al. Multion: Benchmarking semantic map memory using multi-object navigation. In: Proceeding of the Advances in Neural Information Processing Systems, 2020, 33: 9700−9712.
    [154] Li C, Zhang R, Wong J, et al. Behavior-1k: A benchmark for embodied AI with 1,000 everyday activities and realistic simulation. In: Proceeding of the Conference on Robot Learning, 2023: 80−93.
    [155] Srivastava S, Li C, Lingelbach M, et al. Behavior: Benchmark for everyday household activities in virtual, interactive, and ecological environments. In: Proceeding of the Conference on Robot Learning, 2022: 477−490.
    [156] Kolve E, Mottaghi R, Han W, et al. Ai2-thor: An interactive 3d environment for visual AI. arXiv preprint arXiv: 1712.05474, 2017.
    [157] Gan C, Schwartz J, Alter S, et al. Threedworld: A platform for interactive multi-modal physical simulation. arXiv preprint arXiv: 2007.04954, 2020.
    [158] Gan C, Zhou S, Schwartz J, et al. The threedworld transport challenge: A visually guided task-and-motion planning benchmark towards physically realistic embodied AI. In: Proceeding of the 2022 International Conference on Robotics and Automation, 2022: 8847−8854.
    [159] Szot A, Clegg A, Undersander E, et al. Habitat 2.0: Training home assistants to rearrange their habitat. In: Proceeding of the Advances in Neural Information Processing Systems, 2021, 34: 251−266.
    [160] Bi K, Xie L, Zhang H, et al. Accurate medium-range global weather forecasting with 3D neural networks. Nature, 2023, 619(7970): 533−538 doi: 10.1038/s41586-023-06185-3
    [161] Wang F Y. The emergence of intelligent enterprises: From CPS to CPSS. IEEE Intelligent Systems, 2010, 25(4): 85−88 doi: 10.1109/MIS.2010.104
    [162] Zhang J J, Wang F Y, Wang X, et al. Cyber-physical-social systems: The state of the art and perspectives. IEEE Transactions on Computational Social Systems, 2018, 5(3): 829−840 doi: 10.1109/TCSS.2018.2861224
    [163] Wang F Y, Wang X, Li L, et al. Steps toward parallel intelligence. IEEE/CAA Journal of Automatica Sinica, 2016, 3(4): 345−348 doi: 10.1109/JAS.2016.7510067
    [164] Wang F Y. Parallel intelligence in metaverses: Welcome to Hanoi!. IEEE Intelligent Systems, 2022, 37(1): 16−20 doi: 10.1109/MIS.2022.3154541
    [165] Wang X, Yang J, Han J, et al. Metaverses and deMetaverses: From digital twins in CPS to parallel intelligence in CPSS. IEEE Intelligent Systems, 2022, 37(4): 97−102 doi: 10.1109/MIS.2022.3196592
    [166] Wang F Y. Parallel control and management for intelligent transportation systems: Concepts, architectures, and applications. IEEE Transactions on Intelligent Transportation Systems, 2010, 11(3): 630−638 doi: 10.1109/TITS.2010.2060218
    [167] Wang Z, Lv C, Wang F Y. A new era of intelligent vehicles and intelligent transportation systems: Digital twins and parallel intelligence. IEEE Transactions on Intelligent Vehicles, 2023
    [168] Yang J, Wang X, Zhao Y. Parallel manufacturing for industrial metaverses: A new paradigm in smart manufacturing. IEEE/CAA Journal of Automatica Sinica, 2022, 9(12): 2063−2070 doi: 10.1109/JAS.2022.106097
    [169] Liu Y, Sun B, Tian Y, et al. Software-defined active lidars for autonomous driving: A parallel intelligence-based adaptive model. IEEE Transactions on Intelligent Vehicles, 2023, 8(8): 4047−4056 doi: 10.1109/TIV.2023.3289540
    [170] Wang S, Wang J, Wang X, et al. Blockchain-powered parallel healthcare systems based on the ACP approach. IEEE Transactions on Computational Social Systems, 2018, 5(4): 942−950 doi: 10.1109/TCSS.2018.2865526
    [171] Wang X, Kang M, Sun H, et al. DeCASA in agriverse: Parallel agriculture for smart villages in metaverses. IEEE/CAA Journal of Automatica Sinica, 2022, 9(12): 2055−2062 doi: 10.1109/JAS.2022.106103
    [172] Wang X, Li J, Fan L, et al. Advancing vehicular healthcare: The DAO-based parallel maintenance for intelligent vehicles. IEEE Transactions on Intelligent Vehicles, 2023, 8(12): 4671−4673 doi: 10.1109/TIV.2023.3341855
    [173] Wang X, Yang J, Wang Y, et al. Steps toward industry 5.0: Building “6S” parallel industries with cyber-physical-social intelligence. IEEE/CAA Journal of Automatica Sinica, 2023, 10(8): 1692−1703 doi: 10.1109/JAS.2023.123753
    [174] 王飞跃. 关于复杂系统研究的计算理论与方法. 中国基础科学, 2004, 6(5): 3−10 doi: 10.3969/j.issn.1009-2412.2004.05.001

    Wang Fei-Yue. Computational theory and method on complex system. China Basic Science, 2004, 6(5): 3−10 doi: 10.3969/j.issn.1009-2412.2004.05.001
    [175] Zhao Y, Zhu Z, Chen B, et al. Towards parallel intelligence: An interdisciplinary solution for complex systems. The Innovation, 2023.
    [176] Li X, Ye P J, Li J J, et al. From features engineering to scenarios engineering for trustworthy AI: I&I, C&C, and V&V. IEEE Intelligent Systems, 2022, 37(4): 18−26 doi: 10.1109/MIS.2022.3197950
    [177] Li X, Tian Y L, Ye P J, et al. A novel scenarios engineering methodology for foundation models in metaverse. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2022, 53(4): 2148−2159
    [178] 杨静, 王晓, 王雨桐, 刘忠民, 李小双, 王飞跃. 平行智能与CPSS: 三十年发展的回顾与展望. 自动化学报, 2023, 49(03): 614−634

    Yang Jing, Wang Xiao, Wang Yu-Tong, Liu Zhong-Min, Li Xiao-Shuang, Wang Fei-Yue. Parallel intelligence and CPSS in 30 years: An ACP approach. Acta Automatica Sinica, 2023, 49(03): 614−634
出版历程
  • 收稿日期:  2024-06-19
  • 录用日期:  2024-09-22
  • 网络出版日期:  2024-10-16
