具身智能研究的关键问题: 自主感知、行动与进化

沈甜雨 陶子锐 王亚东 张庭祯 刘宇航 王兴霞 杨静 李志伟 陈龙 王坤峰 王飞跃

引用本文: 沈甜雨, 陶子锐, 王亚东, 张庭祯, 刘宇航, 王兴霞, 杨静, 李志伟, 陈龙, 王坤峰, 王飞跃. 具身智能研究的关键问题: 自主感知、行动与进化. 自动化学报, 2025, 51(1): 1−29 doi: 10.16383/j.aas.c240364
Citation: Shen Tian-Yu, Tao Zi-Rui, Wang Ya-Dong, Zhang Ting-Zhen, Liu Yu-Hang, Wang Xing-Xia, Yang Jing, Li Zhi-Wei, Chen Long, Wang Kun-Feng, Wang Fei-Yue. Key problems of embodied intelligence research: Autonomous perception, action, and evolution. Acta Automatica Sinica, 2025, 51(1): 1−29 doi: 10.16383/j.aas.c240364

具身智能研究的关键问题: 自主感知、行动与进化

doi: 10.16383/j.aas.c240364 cstr: 32138.14.j.aas.c240364
基金项目: 国家自然科学基金(62302047, 62076020), 中央高校基本科研业务费专项资金(buctrc202413)资助
详细信息
    作者简介:

    沈甜雨:北京化工大学信息科学与技术学院副教授. 2021年获中国科学院自动化研究所博士学位. 主要研究方向为智能感知, 智能机器人系统. E-mail: tianyu.shen@buct.edu.cn

    陶子锐:北京化工大学信息科学与技术学院硕士研究生. 2023年获北京化工大学学士学位. 主要研究方向为多任务学习, 增量学习. E-mail: taozirui@126.com

    王亚东:北京化工大学信息科学与技术学院博士研究生. 主要研究方向为计算机视觉, 智能交通系统. E-mail: 2021400212@buct.edu.cn

    张庭祯:北京化工大学信息科学与技术学院硕士研究生. 2018年获北京化工大学学士学位. 主要研究方向为计算机视觉, 具身智能. E-mail: ztz1733565287@163.com

    刘宇航:中国科学院自动化研究所博士研究生. 2021年获清华大学学士学位. 主要研究方向为三维感知, 具身智能. E-mail: liuyuhang2021@ia.ac.cn

    王兴霞:中国科学院自动化研究所博士研究生. 2021年获南开大学硕士学位. 主要研究方向为平行智能, 平行油田, 故障诊断和多智能体系统. E-mail: wangxingxia2022@ia.ac.cn

    杨静:中国科学院自动化研究所博士研究生. 2020年获北京化工大学学士学位. 主要研究方向为平行制造, 社会制造, 人工智能和社会物理信息系统. E-mail: yangjing2020@ia.ac.cn

    李志伟:北京化工大学信息科学与技术学院副教授. 2020年获中国矿业大学(北京)博士学位. 主要研究方向为自动驾驶, 具身智能机器人和视觉语言大模型. E-mail: lizw@buct.edu.cn

    陈龙:中国科学院自动化研究所研究员. 2013年获武汉大学博士学位. 主要研究方向为自动驾驶, 机器人, 智慧矿山和平行智能. E-mail: long.chen@ia.ac.cn

    王坤峰:北京化工大学信息科学与技术学院教授. 主要研究方向为计算机视觉, 多模态感知和智能无人系统. 本文通信作者. E-mail: wangkf@buct.edu.cn

    王飞跃:中国科学院自动化研究所研究员. 主要研究方向为智能系统和复杂系统的建模、分析与控制. E-mail: feiyue.wang@ia.ac.cn

  • 中图分类号: Y

Key Problems of Embodied Intelligence Research: Autonomous Perception, Action, and Evolution

Funds: Supported by National Natural Science Foundation of China (62302047, 62076020) and Fundamental Research Funds for the Central Universities (buctrc202413)
More Information
    Author Bio:

    SHEN Tian-Yu Associate professor at the College of Information Science and Technology, Beijing University of Chemical Technology. She received her Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences in 2021. Her research interest covers intelligent perception and intelligent robot systems

    TAO Zi-Rui Master student at the College of Information Science and Technology, Beijing University of Chemical Technology. He received his bachelor degree from Beijing University of Chemical Technology in 2023. His research interest covers multi-task learning and incremental learning

    WANG Ya-Dong Ph.D. candidate at the College of Information Science and Technology, Beijing University of Chemical Technology. His research interest covers computer vision and intelligent transportation systems

    ZHANG Ting-Zhen Master student at the College of Information Science and Technology, Beijing University of Chemical Technology. He received his bachelor degree from Beijing University of Chemical Technology in 2018. His research interest covers computer vision and embodied intelligence

    LIU Yu-Hang Ph.D. candidate at the Institute of Automation, Chinese Academy of Sciences. He received his bachelor degree from Tsinghua University in 2021. His research interest covers 3D perception and embodied artificial intelligence

    WANG Xing-Xia Ph.D. candidate at the Institute of Automation, Chinese Academy of Sciences. She received her master degree from Nankai University in 2021. Her research interest covers parallel intelligence, parallel oilfields, fault diagnosis, and multi-agent systems

    YANG Jing Ph.D. candidate at the Institute of Automation, Chinese Academy of Sciences. She received her bachelor degree from Beijing University of Chemical Technology in 2020. Her research interest covers parallel manufacturing, social manufacturing, artificial intelligence, and cyber-physical-social systems

    LI Zhi-Wei Associate professor at the College of Information Science and Technology, Beijing University of Chemical Technology. He received his Ph.D. degree from China University of Mining and Technology (Beijing) in 2020. His research interest covers autonomous driving, embodied intelligent robots, and large visual-language models

    CHEN Long Researcher at the Institute of Automation, Chinese Academy of Sciences. He received his Ph.D. degree from Wuhan University in 2013. His research interest covers autonomous driving, robotics, smart mining, and parallel intelligence

    WANG Kun-Feng Professor at the College of Information Science and Technology, Beijing University of Chemical Technology. His research interest covers computer vision, multi-modal perception, and intelligent unmanned systems. Corresponding author of this paper

    WANG Fei-Yue Researcher at the Institute of Automation, Chinese Academy of Sciences. His research interest covers modeling, analysis and control of intelligent systems and complex systems

  • Abstract: Embodied intelligence emphasizes the interaction among the brain, the body, and the environment, and aims to create agents that integrate software and hardware and can learn and evolve autonomously through interaction between machines and the physical world. The rapid progress of machine learning, robotics, cognitive science, and other disciplines is currently driving the research and application of embodied intelligence. Whereas existing surveys of embodied intelligence mostly proceed from a taxonomy of techniques and methods, this paper starts from the key challenges faced in the research and application of embodied intelligence, analyzes a general framework for embodied intelligence research, puts forward concrete research ideas around two aspects, embodied perception and execution and embodied learning and evolution, and reviews in detail the relevant techniques and research progress on the key problems involved. In addition, taking mobile robots, bionic robots, and parallel robots as application examples, it describes the inspiration that embodied intelligence brings to the design of practical robot systems in perception and understanding, control and decision-making, and interaction and learning. Finally, the paper looks ahead to the development of embodied intelligence and explores the important roles and application potential of virtual-real fused data intelligence, foundation models and foundation intelligence, and digital twins and parallel intelligence, in the hope of providing new insights and ideas for researchers and practitioners in related fields. The project accompanying this paper is available at https://github.com/BUCT-IUSRC/Survey__EmbodiedAI.
  • 图  1  具身智能与智能体发展历程

    Fig.  1  The development history of embodied intelligence and agents

    图  2  具身智能研究的一般性框架图

    Fig.  2  General framework diagram of embodied intelligence research

    图  3  “感知−模拟−执行”一体化机制框架

    Fig.  3  The framework of the integrated perception-simulation-execution mechanism

    图  4  典型的端到端自动驾驶框架[18]

    Fig.  4  Typical end-to-end autonomous driving framework[18]

    图  5  典型的多模态融合感知框架[19]

    Fig.  5  Typical multi-modal fusion perception framework[19]

    图  6  具身智能学习与进化框架

    Fig.  6  The framework of embodied intelligence learning and evolution

    图  7  EWC方法梯度下降方向的可视化图[71]

    Fig.  7  Visualization diagram of gradient descent direction of EWC method[71]
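
    For reference, the regularization behind Fig. 7 is the elastic weight consolidation (EWC) objective of [71]: while training on a new task B, parameters that were important for a previously learned task A are anchored by a quadratic penalty weighted by the Fisher information, which is what bends the gradient descent direction shown in the figure:

```latex
\mathcal{L}(\theta) = \mathcal{L}_B(\theta) + \sum_i \frac{\lambda}{2} F_i \left( \theta_i - \theta_{A,i}^{*} \right)^2
```

    Here \(\mathcal{L}_B\) is the loss on the new task, \(F_i\) is the \(i\)-th diagonal entry of the Fisher information matrix estimated on task A, \(\theta_{A,i}^{*}\) are the parameters learned for task A, and \(\lambda\) trades off retention of the old task against learning the new one.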

    图  8  蒸馏损失POD通过约束中间层输出防止模型过度漂移, 从而避免灾难性遗忘现象发生[78]

    Fig.  8  The distillation loss POD prevents excessive model drift by constraining intermediate-layer outputs, thereby avoiding catastrophic forgetting[78]
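
    The constraint sketched in Fig. 8 can be illustrated with a short snippet. The following is a minimal PyTorch sketch of a pooled-output distillation term, assuming matching lists of intermediate feature maps from the frozen old model and the current model; it conveys the idea of constraining pooled intermediate statistics and is not the exact PODNet loss of [78].

```python
import torch
import torch.nn.functional as F

def pooled_distillation_loss(feats_old, feats_new):
    # feats_old / feats_new: matching lists of intermediate feature maps,
    # each of shape (B, C, H, W), from the frozen old model and the new model.
    loss = 0.0
    for h_old, h_new in zip(feats_old, feats_new):
        # Pool the activations along height and along width separately,
        # then penalize the L2 distance between old- and new-model statistics.
        for dim in (2, 3):
            p_old = F.normalize(h_old.sum(dim=dim).flatten(1), dim=1)
            p_new = F.normalize(h_new.sum(dim=dim).flatten(1), dim=1)
            loss = loss + ((p_old - p_new) ** 2).sum(dim=1).mean()
    return loss / len(feats_old)
```

    Adding such a term to the classification loss penalizes drift of the intermediate representations between incremental steps, which is the mechanism the caption describes.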

    图  9  以观察图像和目标图像为输入的执行器−评价器网络结构[93]

    Fig.  9  The actor-critic network structure with observation images and target images as inputs[93]
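
    As a concrete illustration of the architecture in Fig. 9, the following is a minimal PyTorch sketch of a goal-conditioned actor-critic: the observation image and the target image share one encoder, their embeddings are fused, and separate heads output the policy logits and the state value. Layer sizes and names are illustrative assumptions, not the configuration used in [93].

```python
import torch
import torch.nn as nn

class GoalConditionedActorCritic(nn.Module):
    """Actor-critic that takes the current observation image and a target
    (goal) image, embeds both with a shared encoder, and predicts an action
    distribution (actor) and a state value (critic) from the fused code."""
    def __init__(self, num_actions, emb_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(            # shared (siamese) image encoder
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim), nn.ReLU(),
        )
        self.fuse = nn.Sequential(nn.Linear(2 * emb_dim, emb_dim), nn.ReLU())
        self.actor = nn.Linear(emb_dim, num_actions)   # policy logits
        self.critic = nn.Linear(emb_dim, 1)            # state value

    def forward(self, obs_img, goal_img):
        z = self.fuse(torch.cat([self.encoder(obs_img),
                                 self.encoder(goal_img)], dim=1))
        return self.actor(z), self.critic(z)
```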

    图  10  NerveNet从每个节点的观测向量中获取信息, 通过多次计算相邻节点间的信息更新节点的隐藏状态, 最后在输出模型中收集每个控制器的输出形成优化策略[95]

    Fig.  10  NerveNet obtains information from the observation vector of each node, updates the hidden states of the nodes through several rounds of message passing between adjacent nodes, and finally collects the output of each controller in the output model to form the optimized policy[95]
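
    The message-passing loop described in the caption of Fig. 10 can be sketched as below. This is a minimal PyTorch illustration of a structured, graph-based policy, assuming the robot is given as per-node observations plus an adjacency (edge) list; it follows the input/message/update/output decomposition in spirit and is not the original NerveNet implementation of [95].

```python
import torch
import torch.nn as nn

class GraphPolicy(nn.Module):
    """Structured policy: each joint is a graph node with its own observation;
    hidden states are updated by several rounds of message passing between
    adjacent nodes, and a shared output model maps each node's final hidden
    state to that node's controller output (e.g. a torque)."""
    def __init__(self, obs_dim, hidden_dim=64, steps=3):
        super().__init__()
        self.steps = steps
        self.embed = nn.Linear(obs_dim, hidden_dim)       # input model
        self.message = nn.Linear(hidden_dim, hidden_dim)  # message model
        self.update = nn.GRUCell(hidden_dim, hidden_dim)  # update model
        self.out = nn.Linear(hidden_dim, 1)               # output model

    def forward(self, node_obs, edges):
        # node_obs: (N, obs_dim); edges: list of (i, j) pairs of adjacent nodes.
        h = torch.tanh(self.embed(node_obs))
        for _ in range(self.steps):
            msg = torch.zeros_like(h)
            for i, j in edges:                 # aggregate messages from neighbours
                msg[j] = msg[j] + self.message(h[i])
                msg[i] = msg[i] + self.message(h[j])
            h = self.update(msg, h)
        return self.out(h).squeeze(-1)         # one control output per node
```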

    图  11  通过使用学习到的Q函数和策略网络进行评估优化, 有效地减少了优化计算过程中代表物理原型的参数量[98]

    Fig.  11  By using the learned Q-function and policy network for evaluation and optimization, the number of parameters representing the physical prototype in the optimization calculation process has been effectively reduced[98]
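
    A schematic sketch of the evaluation loop suggested by Fig. 11 is given below: candidate design (morphology) parameters are scored by querying the learned, design-conditioned Q-function and policy on sampled initial states instead of building and testing physical prototypes. The helper names (evaluate_design, select_best_design) and the exact conditioning are illustrative assumptions; the actual algorithm in [98] alternates such design evaluation with reinforcement learning of the design-conditioned policy.

```python
import torch

def evaluate_design(q_function, policy, design_params, initial_states):
    # Score a candidate morphology without building it: condition the learned
    # policy and Q-function on the design parameters and average the predicted
    # return over sampled initial states (a cheap surrogate for real rollouts).
    xi = design_params.unsqueeze(0).expand(initial_states.shape[0], -1)
    actions = policy(initial_states, xi)
    return q_function(initial_states, actions, xi).mean()

def select_best_design(q_function, policy, candidate_designs, initial_states):
    # Keep only the most promising candidate for (real or simulated) validation,
    # which is where the saving in physical prototypes comes from.
    scores = [evaluate_design(q_function, policy, d, initial_states)
              for d in candidate_designs]
    return candidate_designs[int(torch.stack(scores).argmax())]
```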

    图  12  具身智能增强的机器人系统研究框架

    Fig.  12  The research framework of robot systems enhanced by embodied intelligence

    图  13  具身智能增强的自动驾驶系统框架

    Fig.  13  The framework of autonomous driving systems enhanced by embodied intelligence

    图  14  典型的仿生机器人

    Fig.  14  Typical bionic robots

    图  15  平行机器人框架[141]

    Fig.  15  The framework of parallel robot[141]

    表  1  具身智能研究现状

    Table  1  The current status of embodied intelligence research

    Name | Year | Characteristics | Strengths and weaknesses
    BigDog | 2009 | Built by Boston Dynamics; able to walk over rugged terrain while staying stable, demonstrating mobility in complex environments | Strong off-road capability and high payload, adapts to complex environments, but uses a noisy internal-combustion power source
    Atlas | 2013 | Built by Boston Dynamics; a highly agile and stable humanoid robot able to perform complex motions such as running, jumping, and climbing, marking notable progress of humanoid robots in motion control and agility | Highly agile and stable and able to execute complex motions, but development and manufacturing costs are high
    DQN algorithm | 2014 | DeepMind's DQN (Deep Q-Network) algorithm combined deep learning with reinforcement learning for the first time, enabling agents to surpass human performance in many video games and providing new learning and decision-making methods for embodied intelligence | Can learn through interaction with the environment without supervision, improving adaptability, but training requires large amounts of data and compute, making it expensive to run
    AlphaGo | 2016 | DeepMind's AlphaGo defeated the world Go champion Lee Sedol; this milestone showed the superhuman performance of agents in complex strategy games and advanced research on embodied intelligence for complex decision-making problems | Combines deep learning with Monte Carlo tree search for efficient decision-making and self-optimization, but high computational cost and domain specificity limit wider application
    Walker | 2018 | UBTECH released the Walker robot, a bipedal humanoid service robot that showed application potential in household and service scenarios | Offers bipedal locomotion and versatility, but high cost and limited battery life restrict long-duration operation and broad adoption
    Stretch | 2021 | Boston Dynamics launched the Stretch robot, designed specifically for warehouse operations and showing great promise in logistics and warehousing | Purpose-built for warehouse handling and improves the efficiency of in-warehouse transport tasks, but generalizes poorly to other domains
    Optimus | 2024 | Tesla unveiled the Optimus humanoid robot, aimed at easing labor shortages and showing the broad application potential of embodied intelligence in production and daily life | Highly autonomous with broad application prospects, but high cost and technical complexity limit its adoption
  • [1] 张钹, 朱军, 苏航. 迈向第三代人工智能. 中国科学: 信息科学, 2020, 50(9): 1281−1302 doi: 10.1360/SSI-2020-0204

    Zhang Bo, Zhu Jun, Su Hang. Toward the third generation of artificial intelligence. Scientia Sinica Informationis, 2020, 50(9): 1281−1302 doi: 10.1360/SSI-2020-0204
    [2] Jin D D, Zhang L. Embodied intelligence weaves a better future. Nature Machine Intelligence, 2020, 2(11): 663−664 doi: 10.1038/s42256-020-00250-6
    [3] Gupta A, Savarese S, Ganguli S, Li F F. Embodied intelligence via learning and evolution. Nature Communications, 2021, 12(1): Article No. 5721 doi: 10.1038/s41467-021-25874-z
    [4] Turing A M. Computing machinery and intelligence. Creative Computing, 1980, 6(1): 44−53
    [5] Howard D, Eiben A E, Kennedy D F, Mouret J B, Valencia P, Winkler D. Evolving embodied intelligence from materials to machines. Nature Machine Intelligence, 2019, 1(1): 12−19 doi: 10.1038/s42256-018-0009-9
    [6] Shen T Y, Sun J L, Kong S H, Wang Y T, Li J J, Li X, et al. The journey/DAO/TAO of embodied intelligence: From large models to foundation intelligence and parallel intelligence. IEEE/CAA Journal of Automatica Sinica, 2024, 11(6): 1313−1316 doi: 10.1109/JAS.2024.124407
    [7] 沈甜雨, 李志伟, 范丽丽, 张庭祯, 唐丹丹, 周美华, 等. 具身智能驾驶: 概念、方法、现状与展望. 智能科学与技术学报, 2024, 6(1): 17−32 doi: 10.11959/j.issn.2096-6652.202404

    Shen Tian-Yu, Li Zhi-Wei, Fan Li-Li, Zhang Ting-Zhen, Tang Dan-Dan, Zhou Mei-Hua, et al. Embodied intelligent driving: Concept, methods, the state of the art and beyond. Chinese Journal of Intelligent Science and Technology, 2024, 6(1): 17−32 doi: 10.11959/j.issn.2096-6652.202404
    [8] Ichter B, Brohan A, Chebotar Y, Finn C, Hausman K, Herzog A, et al. Do as i can, not as i say: Grounding language in robotic affordances. In: Proceedings of the 6th Conference on Robot Learning. Auckland, New Zealand: PMLR, 2023. 287–318
    [9] Shah D, Osiński B, Ichter B, Levine S. LM-Nav: Robotic navigation with large pre-trained models of language, vision, and action. In: Proceedings of the 6th Conference on Robot Learning. Auckland, New Zealand: PMLR, 2023. 492–504
    [10] Qiao H, Zhong S L, Chen Z Y, Wang H Z. Improving performance of robots using human-inspired approaches: A survey. Science China Information Sciences, 2022, 65(12): Article No. 221201 doi: 10.1007/s11432-022-3606-1
    [11] Cao L B. AI robots and humanoid AI: Review, perspectives and directions. arXiv preprint arXiv: 2405.15775, 2024.

    [12] Duan J F, Yu S, Tan H L, Zhu H Y, Tan C. A survey of embodied AI: From simulators to research tasks. IEEE Transactions on Emerging Topics in Computational Intelligence, 2022, 6(2): 230−244 doi: 10.1109/TETCI.2022.3141105
    [13] 刘华平, 郭迪, 孙富春, 张新钰. 基于形态的具身智能研究: 历史回顾与前沿进展. 自动化学报, 2023, 49(6): 1131−1154

    Liu Hua-Ping, Guo Di, Sun Fu-Chun, Zhang Xin-Yu. Morphology-based embodied intelligence: Historical retrospect and research progress. Acta Automatica Sinica, 2023, 49(6): 1131−1154
    [14] Minsky M. Society of Mind. New York: Simon and Schuster, 1988.
    [15] Varela F J, Thompson E, Rosch E. The Embodied Mind: Cognitive Science and Human Experience. Cambridge: MIT Press, 1993.
    [16] Pfeifer R, Bongard J. How the Body Shapes the Way We Think: A New View of Intelligence. Cambridge: MIT Press, 2006.
    [17] Deisenroth M P, Fox D, Rasmussen C E. Gaussian processes for data-efficient learning in robotics and control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(2): 408−423 doi: 10.1109/TPAMI.2013.218
    [18] Hu Y H, Yang J Z, Chen L, Li K Y, Sima C, Zhu X Z. Planning-oriented autonomous driving. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023. 17853–17862
    [19] Hu C Y, Zheng H, Li K, Xu J Y, Mao W B, Luo M C, et al. FusionFormer: A multi-sensory fusion in bird's-eye-view and temporal consistent Transformer for 3D objection. arXiv preprint arXiv: 2309.05257, 2023.

    [20] Yin T W, Zhou X Y, Krähenbühl P. Multimodal virtual point 3D detection. arXiv preprint arXiv: 2111.06881, 2021.

    [21] Wu X P, Peng L, Yang H H, Xie L, Huang C X, Deng C Q. Sparse fuse dense: Towards high quality 3D detection with depth completion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 5408–5417
    [22] Wu H, Wen C L, Shi S S, Li X, Wang C. Virtual sparse convolution for multimodal 3D object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023. 21653–21662
    [23] Liu Z J, Tang H T, Amini A, Yang X Y, Mao H Z, Rus D L. BEVFusion: Multi-task multi-sensor fusion with unified bird's-eye view representation. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). London, UK: IEEE, 2023. 2774–2781
    [24] Wei M, Li J C, Kang H Y, Huang Y J, Lu J G. BEV-CFKT: A LiDAR-camera cross-modality-interaction fusion and knowledge transfer framework with Transformer for BEV 3D object detection. Neurocomputing, 2024, 582: Article No. 127527 doi: 10.1016/j.neucom.2024.127527
    [25] Drews F, Feng D, Faion F, Rosenbaum L, Ulrich M, Gläser C. DeepFusion: A robust and modular 3D object detector for LiDARs, cameras and radars. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Kyoto, Japan: IEEE, 2022. 560–567
    [26] Xie B Q, Yang Z M, Yang L, Wei A L, Weng X X, Li B. AMMF: Attention-based multi-phase multi-task fusion for small contour object 3D detection. IEEE Transactions on Intelligent Transportation Systems, 2023, 24(2): 1692−1701
    [27] Chiu H K, Li J, Ambruş R, Bohg J. Probabilistic 3D multi-modal, multi-object tracking for autonomous driving. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Xi'an, China: IEEE, 2021. 14227–14233
    [28] Tian Y L, Zhang X J, Wang X, Xu J T, Wang J G, Ai R. ACF-Net: Asymmetric cascade fusion for 3D detection with LiDAR point clouds and images. IEEE Transactions on Intelligent Vehicles, 2024, 9(2): 3360−3371 doi: 10.1109/TIV.2023.3341223
    [29] Zhang P, Zhang B, Zhang T, Chen D, Wang Y, Wen F. Prototypical pseudo label denoising and target structure learning for domain adaptive semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 12409−12419
    [30] Lee S, Cho S, Im S. DRANet: Disentangling representation and adaptation networks for unsupervised cross-domain adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 15247–15256
    [31] Oza P, Sindagi V A, Vs V, Patel V M. Unsupervised domain adaptation of object detectors: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(6): 4018−4040 doi: 10.1109/TPAMI.2022.3217046
    [32] Ganin Y, Ustinova E, Ajakan H, Germain P, Larochelle H, Laviolette F, et al. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 2016, 17(59): 1−35
    [33] Cai Q, Pan Y W, Ngo C W, Tian X M, Duan L Y, Yao T. Exploring object relation in mean teacher for cross-domain detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 11449–11458
    [34] Li C X, Chan S H, Chen Y T. DROID: Driver-centric risk object identification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(11): 13683−13698
    [35] Cheng Z Y, Lu J, Ding H, Li Y X, Bai H J, Zhang W H. A superposition assessment framework of multi-source traffic risks for mega-events using risk field model and time-series generative adversarial networks. IEEE Transactions on Intelligent Transportation Systems, 2023, 24(11): 12736−12753 doi: 10.1109/TITS.2023.3290165
    [36] Wang X W, Alonso-Mora J, Wang M. Probabilistic risk metric for highway driving leveraging multi-modal trajectory predictions. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(10): 19399−19412 doi: 10.1109/TITS.2022.3164469
    [37] Caleffi F, Anzanello M J, Cybis H B B. A multivariate-based conflict prediction model for a Brazilian freeway. Accident Analysis & Prevention, 2017, 98: 295−302
    [38] Rombach R, Blattmann A, Lorenz D, Esser P, Ommer B. High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 10674–10685
    [39] Saharia C, Chan W, Saxena S, Lit L, Whang J, Denton E, et al. Photorealistic text-to-image diffusion models with deep language understanding. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. New Orleans, USA: Curran Associates Inc., 2022. Article No. 2643
    [40] Ramesh A, Pavlov M, Goh G, Gray S, Voss C, Radford A, et al. Zero-shot text-to-image generation. arXiv preprint arXiv: 2102.12092, 2021.

    [41] Ramesh A, Dhariwal P, Nichol A, Chu C, Chen M. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv: 2204.06125, 2022.

    [42] Reed S, Zolna K, Parisotto E, Colmenarejo S G, Novikov A, Barth-Maron G, et al. A generalist agent. arXiv preprint arXiv: 2205.06175, 2022.
    [43] Niemeyer M, Geiger A. GIRAFFE: Representing scenes as compositional generative neural feature fields. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 11448–11459
    [44] Abou-Chakra J, Rana K, Dayoub F, Sünderhauf N. Physically embodied Gaussian splatting: Embedding physical priors into a visual 3D world model for robotics [Online], available: https://eprints.qut.edu.au/247354/, November 9, 2024

    [45] Hart P E, Nilsson N J, Raphael B. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 1968, 4(2): 100−107 doi: 10.1109/TSSC.1968.300136
    [46] Karaman S, Frazzoli E. Sampling-based algorithms for optimal motion planning. The International Journal of Robotics Research, 2011, 30(7): 846−894 doi: 10.1177/0278364911406761
    [47] Koenig S, Likhachev M. D* lite. In: Proceedings of the 18th National Conference on Artificial Intelligence. Edmonton, Canada: AAAI, 2002. 476–483
    [48] Holland J H. Adaptation in Natural and Artificial Systems: An Introductory Analysis With Applications to Biology, Control, and Artificial Intelligence. Cambridge: MIT Press, 1992.
    [49] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the International Conference on Neural Networks. Perth, Australia: IEEE, 1995. 1942–1948
    [50] Dolgov D, Thrun S, Montemerlo M, Diebel J. Practical search techniques in path planning for autonomous driving. Ann Arbor, 2008, 1001(48105): 18−80
    [51] Webb D J, van den Berg J. Kinodynamic RRT*: Asymptotically optimal motion planning for robots with linear dynamics. In: Proceedings of the IEEE International Conference on Robotics and Automation. Karlsruhe, Germany: IEEE, 2013. 5054–5061
    [52] Konda V R, Tsitsiklis J N. On actor-critic algorithms. SIAM Journal on Control and Optimization, 2003, 42(4): 1143−1166 doi: 10.1137/S0363012901385691
    [53] Watkins C J C H, Dayan P. Technical note: Q-learning. Machine Learning, 1992, 8(3−4): 279−292 doi: 10.1007/BF00992698
    [54] Mnih V, Badia A P, Mirza M, Graves A, Lillicrap T, Harley T, et al. Asynchronous methods for deep reinforcement learning. In: Proceedings of the 33rd International Conference on Machine Learning. New York City, USA: JMLR.org, 2016. 1928–1937
    [55] Deisenroth M P, Rasmussen C E. PILCO: A model-based and data-efficient approach to policy search. In: Proceedings of the 28th International Conference on Machine Learning. Bellevue, USA: Omnipress, 2011. 465–472
    [56] Ho J, Ermon S. Generative adversarial imitation learning. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: Curran Associates Inc., 2016. 4572–4580
    [57] Borase R P, Maghade D K, Sondkar S Y, Pawar S N. A review of PID control, tuning methods and applications. International Journal of Dynamics and Control, 2021, 9(2): 818−827 doi: 10.1007/s40435-020-00665-4
    [58] Dörfler F, Tesi P, de Persis C. On the role of regularization in direct data-driven LQR control. In: Proceedings of the 61st Conference on Decision and Control (CDC). Cancun, Mexico: IEEE, 2022. 1091–1098
    [59] Berberich J, Koch A, Scherer C W, Allgöwer F. Robust data-driven state-feedback design. In: Proceedings of the American Control Conference (ACC). Denver, USA: IEEE, 2020. 1532–1538
    [60] Nubert J, Köhler J, Berenz V, Allgower F, Trimpe S. Safe and fast tracking on a robot manipulator: Robust MPC and neural network control. IEEE Robotics and Automation Letters, 2020, 5(2): 3050−3057 doi: 10.1109/LRA.2020.2975727
    [61] Liu Y J, Zhao W, Liu L, Li D P, Tong S C, Chen C L P. Adaptive neural network control for a class of nonlinear systems with function constraints on states. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(6): 2732−2741 doi: 10.1109/TNNLS.2021.3107600
    [62] Phan D, Bab-Hadiashar A, Fayyazi M, Hoseinnezhad R, Jazar R N, Khayyam H. Interval type 2 fuzzy logic control for energy management of hybrid electric autonomous vehicles. IEEE Transactions on Intelligent Vehicles, 2021, 6(2): 210−220 doi: 10.1109/TIV.2020.3011954
    [63] Omidvar M N, Li X D, Mei Y, Yao X. Cooperative co-evolution with differential grouping for large scale optimization. IEEE Transactions on Evolutionary Computation, 2014, 18(3): 378−393 doi: 10.1109/TEVC.2013.2281543
    [64] Gad A G. Particle swarm optimization algorithm and its applications: A systematic review. Archives of Computational Methods in Engineering, 2022, 29(5): 2531−2561 doi: 10.1007/s11831-021-09694-4
    [65] Foderaro G, Ferrari S, Wettergren T A. Distributed optimal control for multi-agent trajectory optimization. Automatica, 2014, 50(1): 149−154 doi: 10.1016/j.automatica.2013.09.014
    [66] Bellemare M G, Dabney W, Munos R. A distributional perspective on reinforcement learning. In: Proceedings of the 34th International Conference on Machine Learning. Sydney, Australia: PMLR, 2017. 449–458
    [67] Dong G W, Li H Y, Ma H, Lu R Q. Finite-time consensus tracking neural network FTC of multi-agent systems. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(2): 653−662 doi: 10.1109/TNNLS.2020.2978898
    [68] Wang Q Y, Liu K X, Wang X, Wu L L, Lü J H. Leader-following consensus of multi-agent systems under antagonistic networks. Neurocomputing, 2020, 413: 339−347 doi: 10.1016/j.neucom.2020.07.006
    [69] Cai Z H, Wang L H, Zhao J, Wu K, Wang Y X. Virtual target guidance-based distributed model predictive control for formation control of multiple UAVs. Chinese Journal of Aeronautics, 2020, 33(3): 1037−1056 doi: 10.1016/j.cja.2019.07.016
    [70] Harvey I, Husbands P, Cliff D, Thompson A, Jakobi N. Evolutionary robotics: The Sussex approach. Robotics and Autonomous Systems, 1997, 20(2−4): 205−224 doi: 10.1016/S0921-8890(96)00067-X
    [71] Kirkpatrick J, Pascanu R, Rabinowitz N, Veness J, Desjardins G, Rusu A A, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences of the United States of America, 2017, 114(13): 3521−3526
    [72] Liu X L, Masana M, Herranz L, van de Weijer J, López A M, Bagdanov A D. Rotate your networks: Better weight consolidation and less catastrophic forgetting. In: Proceedings of the 24th International Conference on Pattern Recognition (ICPR). Beijing, China: IEEE, 2018. 2262–2268
    [73] Hsu Y C, Liu Y C, Ramasamy A, Kira Z. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. arXiv preprint arXiv: 1810.12488, 2019.

    [74] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. arXiv preprint arXiv: 1503.02531, 2015.

    [75] Li Z Z, Hoiem D. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(12): 2935−2947 doi: 10.1109/TPAMI.2017.2773081
    [76] Castro F M, Marín-Jiménez M J, Guil N, Schmid C, Alahari K. End-to-end incremental learning. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 241–257
    [77] Zhang J T, Zhang J, Ghosh S, Li D W, Tasci S, Heck L. Class-incremental learning via deep model consolidation. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV). Snowmass, USA: IEEE, 2020. 1120–1129
    [78] Douillard A, Cord M, Ollion C, Robert T, Valle E. PODNet: Pooled outputs distillation for small-tasks incremental learning. In: Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 86–102.

    [79] 朱飞, 张煦尧, 刘成林. 类别增量学习研究进展和性能评价. 自动化学报, 2023, 49(3): 635−660

    Zhu Fei, Zhang Xu-Yao, Liu Cheng-Lin. Class incremental learning: A review and performance evaluation. Acta Automatica Sinica, 2023, 49(3): 635−660
    [80] Tao X Y, Chang X Y, Hong X P, Wei X, Gong Y H. Topology-preserving class-incremental learning. In: Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 254–270
    [81] Rebuffi S A, Kolesnikov A, Sperl G, Lampert C H. iCaRL: Incremental classifier and representation learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 5533–5542
    [82] Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial networks. Communications of the ACM, 2020, 63(11): 139−144 doi: 10.1145/3422622
    [83] Pellegrini L, Graffieti G, Lomonaco V, Maltoni D. Latent replay for real-time continual learning. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, USA: IEEE, 2020. 10203–10209
    [84] Wu Y, Chen Y P, Wang L J, Ye Y C, Liu Z C, Guo Y D. Large scale incremental learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 374–382
    [85] Serra J, Suris D, Miron M, Karatzoglou A. Overcoming catastrophic forgetting with hard attention to the task. In: Proceedings of the 35th International Conference on Machine Learning. Stockholm, Sweden: PMLR, 2018. 4548–4557
    [86] Zhu K, Zhai W, Cao Y, Luo J B, Zha Z J. Self-sustaining representation expansion for non-exemplar class-incremental learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 9286–9295
    [87] Schrauwen B, Verstraeten D, van Campenhout J. An overview of reservoir computing: Theory, applications and implementations. In: Proceedings of the 15th European Symposium on Artificial Neural Networks. Bruges, Belgium: ESANN, 2007. 471–482
    [88] Hauser H, Ijspeert A J, Füchslin R M, Pfeifer R, Maass W. The role of feedback in morphological computation with compliant bodies. Biological Cybernetics, 2012, 106(10): 595−613 doi: 10.1007/s00422-012-0516-4
    [89] Caluwaerts K, Despraz J, Işçen A, Sabelhaus A P, Bruce J, Schrauwen B, et al. Design and control of compliant tensegrity robots through simulation and hardware validation. Journal of the Royal Society Interface, 2014, 11(98): Article No. 20140520 doi: 10.1098/rsif.2014.0520
    [90] Degrave J, Caluwaerts K, Dambre J, Wyffels F. Developing an embodied gait on a compliant quadrupedal robot. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Hamburg, Germany: IEEE, 2015. 4486–4491
    [91] Rückert E A, Neumann G. Stochastic optimal control methods for investigating the power of morphological computation. Artificial Life, 2013, 19(1): 115−131 doi: 10.1162/ARTL_a_00085
    [92] Pervan A, Murphey T D. Algorithmic design for embodied intelligence in synthetic cells. IEEE Transactions on Automation Science and Engineering, 2021, 18(3): 864−875 doi: 10.1109/TASE.2020.3042492
    [93] Zhu Y K, Mottaghi R, Kolve E, Lim J J, Gupta A, Li F F. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Singapore, Singapore: IEEE, 2017. 3357–3364

    [94] Chen T, Murali A, Gupta A. Hardware conditioned policies for multi-robot transfer learning. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. Montréal, Canada: Curran Associates Inc., 2018. 9355–9366
    [95] Wang T W, Liao R J, Ba J, Fidler S. NerveNet: Learning structured policy with graph neural networks. In: Proceedings of the 6th International Conference on Learning Representations. Vancouver, Canada: OpenReview.net, 2018.
    [96] Blake C, Kurin V, Igl M, Whiteson S. Snowflake: Scaling GNNs to high-dimensional continuous control via parameter freezing. arXiv preprint arXiv: 2103.01009, 2021.

    [97] Pathak D, Lu C, Darrell T, Isola P, Efros A A. Learning to control self-assembling morphologies: A study of generalization via modularity. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2019. Article No. 206
    [98] Luck K S, Amor H B, Calandra R. Data-efficient co-adaptation of morphology and behaviour with deep reinforcement learning. In: Proceedings of the Conference on Robot Learning. Osaka, Japan: PMLR, 2020. 854–869
    [99] Ha D. Reinforcement learning for improving agent design. Artificial Life, 2019, 25(4): 352−365 doi: 10.1162/artl_a_00301
    [100] Nguyen T T, Nguyen N D, Vamplew P, Nahavandi S, Dazeley R, Lim C P. A multi-objective deep reinforcement learning framework. Engineering Applications of Artificial Intelligence, 2020, 96: Article No. 103915 doi: 10.1016/j.engappai.2020.103915
    [101] Clouse J A. Learning from an automated training agent [Online], available: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=a46bc2e3a41aa419895ecb766900d7ba878aac1e, November 9, 2024
    [102] Price B, Boutilier C. Accelerating reinforcement learning through implicit imitation. Journal of Artificial Intelligence Research, 2003, 19: 569−629 doi: 10.1613/jair.898
    [103] van Hasselt H. Double Q-learning. In: Proceedings of the 23rd International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2010. 2613–2621
    [104] Sorokin I, Seleznev A, Pavlov M, Fedorov A, Ignateva A. Deep attention recurrent Q-network. arXiv preprint arXiv: 1512.01693, 2015.

    [105] Bowling M, Veloso M. Multiagent learning using a variable learning rate. Artificial Intelligence, 2002, 136(2): 215−250 doi: 10.1016/S0004-3702(02)00121-2
    [106] Lauer M, Riedmiller M A. An algorithm for distributed reinforcement learning in cooperative multi-agent systems. In: Proceedings of the 17th International Conference on Machine Learning. Stanford, USA: Morgan Kaufmann, 2000. 535–542
    [107] Leibo J Z, Zambaldi V, Lanctot M, Marecki J, Graepel T. Multi-agent reinforcement learning in sequential social dilemmas. In: Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems. São Paulo, Brazil: International Foundation for Autonomous Agents and Multiagent Systems, 2017. 464–473
    [108] Macenski S, Foote T, Gerkey B, Lalancette C, Woodall W. Robot operating system 2: Design, architecture, and uses in the wild. Science Robotics, 2022, 7(66): Article No. eabm6074 doi: 10.1126/scirobotics.abm6074
    [109] Huang W L, Wang C, Zhang R H, Li Y Z, Wu J J, Li F F. VoxPoser: Composable 3D value maps for robotic manipulation with language models. In: Proceedings of the 7th Conference on Robot Learning. Atlanta, USA: PMLR, 2023. 540–562
    [110] Li C S, Zhang R H, Wong J, Gokmen C, Srivastava S, Martín-Martín R, et al. BEHAVIOR-1K: A human-centered, embodied AI benchmark with 1000 everyday activities and realistic simulation. arXiv preprint arXiv: 2403.09227, 2024.

    [111] Li B, Zhang Y M, Zhang T T, Acarman T, Ouyang Y K, Li L, et al. Embodied footprints: A safety-guaranteed collision-avoidance model for numerical optimization-based trajectory planning. IEEE Transactions on Intelligent Transportation Systems, 2024, 25(2): 2046−2060 doi: 10.1109/TITS.2023.3316175
    [112] Ma C L, Provost J. Design-to-test approach for programmable controllers in safety-critical automation systems. IEEE Transactions on Industrial Informatics, 2020, 16(10): 6499−6508 doi: 10.1109/TII.2020.2968480
    [113] Cai L, Zhou C L, Wang Y Q, Wang H, Liu B Y. Binocular vision-based pole-shaped obstacle detection and ranging study. Applied Sciences, 2023, 13(23): Article No. 12617 doi: 10.3390/app132312617
    [114] Hawke J, E H B, Badrinarayanan V, Kendall A. Reimagining an autonomous vehicle. arXiv preprint arXiv: 2108.05805, 2021.

    [115] Sun P, Kretzschmar H, Dotiwalla X, Chouard A, Patnaik V, Tsui P. Scalability in perception for autonomous driving: Waymo open dataset. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 2443–2451
    [116] Chen L, Wu P H, Chitta K, Jaeger B, Geiger A, Li H Y. End-to-end autonomous driving: Challenges and frontiers. arXiv preprint arXiv: 2306.16927, 2024.

    [117] Fu Z P, Zhao T Z, Finn C. Mobile ALOHA: Learning bimanual mobile manipulation with low-cost whole-body teleoperation. arXiv preprint arXiv: 2401.02117, 2024.

    [118] Fager P, Calzavara M, Sgarbossa F. Modelling time efficiency of cobot-supported kit preparation. The International Journal of Advanced Manufacturing Technology, 2020, 106(5): 2227−2241
    [119] 兰沣卜, 赵文博, 朱凯, 张涛. 基于具身智能的移动操作机器人系统发展研究. 中国工程科学, 2024, 26(1): 139−148 doi: 10.15302/J-SSCAE-2024.01.010

    Lan Feng-Bo, Zhao Wen-Bo, Zhu Kai, Zhang Tao. Development of mobile manipulator robot systems with embodied intelligence. Strategic Study of CAE, 2024, 26(1): 139−148 doi: 10.15302/J-SSCAE-2024.01.010
    [120] Zhao H R, Pan F X, Ping H Q Y, Zhou Y M. Agent as cerebrum, controller as cerebellum: Implementing an embodied LMM-based agent on drones. arXiv preprint arXiv: 2311.15033, 2023.

    [121] Wang J, Wu Z X, Zhang Y, Kong S H, Tan M, Yu J Z. Integrated tracking control of an underwater bionic robot based on multimodal motions. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024, 54(3): 1599−1610 doi: 10.1109/TSMC.2023.3328010
    [122] Egan D, Cosker D, McDonnell R. NeuroDog: Quadruped embodiment using neural networks. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 2023, 6(3): Article No. 38
    [123] Tong Y C, Liu H T, Zhang Z T. Advancements in humanoid robots: A comprehensive review and future prospects. IEEE/CAA Journal of Automatica Sinica, 2024, 11(2): 301−328 doi: 10.1109/JAS.2023.124140
    [124] Haarnoja T, Moran B, Lever G, Huang S H, Tirumala D, Humplik J, et al. Learning agile soccer skills for a bipedal robot with deep reinforcement learning. Science Robotics, 2024, 9(89): Article No. eadi8022 doi: 10.1126/scirobotics.adi8022
    [125] Dong H W, Liu Y, Chu T, Saddik A E. Bringing robots home: The rise of AI robots in consumer electronics. arXiv preprint arXiv: 2403.14449, 2024.

    [126] Haldar A I, Pagar N D. Predictive control of zero moment point (ZMP) for terrain robot kinematics. Materials Today: Proceedings, 2023, 80: 122−127 doi: 10.1016/j.matpr.2022.10.286
    [127] Qiao H, Chen J H, Huang X. A survey of brain-inspired intelligent robots: Integration of vision, decision, motion control, and musculoskeletal systems. IEEE Transactions on Cybernetics, 2022, 52(10): 11267−11280 doi: 10.1109/TCYB.2021.3071312
    [128] Qiao H, Wu Y X, Zhong S L, Yin P J, Chen J H. Brain-inspired intelligent robotics: Theoretical analysis and systematic application. Machine Intelligence Research, 2023, 20(1): 1−18 doi: 10.1007/s11633-022-1390-8
    [129] Sun Y L, Zong C J, Pancheri F, Chen T, Lueth T C. Design of topology optimized compliant legs for bio-inspired quadruped robots. Scientific Reports, 2023, 13(1): Article No. 4875 doi: 10.1038/s41598-023-32106-5
    [130] Taheri H, Mozayani N. A study on quadruped mobile robots. Mechanism and Machine Theory, 2023, 190: Article No. 105448 doi: 10.1016/j.mechmachtheory.2023.105448
    [131] Idée A, Mosca M, Pin D. Skin barrier reinforcement effect assessment of a spot-on based on natural ingredients in a dog model of tape stripping. Veterinary Sciences, 2022, 9(8): Article No. 390 doi: 10.3390/vetsci9080390
    [132] Yang C Y, Yuan K, Zhu Q G, Yu W M, Li Z B. Multi-expert learning of adaptive legged locomotion. Science Robotics, 2020, 5(49): Article No. eabb2174 doi: 10.1126/scirobotics.abb2174
    [133] Dai B L, Khorrambakht R, Krishnamurthy P, Khorrami F. Sailing through point clouds: Safe navigation using point cloud based control barrier functions. IEEE Robotics and Automation Letters, 2024, 9(9): 7731−7738 doi: 10.1109/LRA.2024.3431870
    [134] Huang K, Yang B Y, Gao W. Modality plug-and-play: Elastic modality adaptation in multimodal LLMs for embodied AI. arXiv preprint arXiv: 2312.07886, 2023.

    [135] Mutlu R, Alici G, Li W H. Three-dimensional kinematic modeling of helix-forming lamina-emergent soft smart actuators based on electroactive polymers. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2017, 47(9): 2562−2573
    [136] Goncalves A, Kuppuswamy N, Beaulieu A, Uttamchandani A, Tsui K M, Alspach A. Punyo-1: Soft tactile-sensing upper-body robot for large object manipulation and physical human interaction. In: Proceedings of the 5th International Conference on Soft Robotics (RoboSoft), Edinburgh, UK: IEEE, 2022. 844–851
    [137] Becker K, Teeple C, Charles N, Jung Y, Baum D, Weaver J C, et al. Active entanglement enables stochastic, topological grasping. Proceedings of the National Academy of Sciences of the United States of America, 2022, 119(42): Article No. e2209819119
    [138] Xie Z X, Yuan F Y, Liu J Q, Tian L F, Chen B H, Fu Z Q, et al. Octopus-inspired sensorized soft arm for environmental interaction. Science Robotics, 2023, 8(84): Article No. eadh7852 doi: 10.1126/scirobotics.adh7852
    [139] Mengaldo G, Renda F, Brunton S L, Bächer M, Calisti M, Duriez C, et al. A concise guide to modelling the physics of embodied intelligence in soft robotics. Nature Reviews Physics, 2022, 4(9): 595−610 doi: 10.1038/s42254-022-00481-z
    [140] 白天翔, 王帅, 沈震, 曹东璞, 郑南宁, 王飞跃. 平行机器人与平行无人系统: 框架、结构、过程、平台及其应用. 自动化学报, 2017, 43(2): 161−175

    Bai Tian-Xiang, Wang Shuai, Shen Zhen, Cao Dong-Pu, Zheng Nan-Ning, Wang Fei-Yue. Parallel robotics and parallel unmanned systems: Framework, structure, process, platform and applications. Acta Automatica Sinica, 2017, 43(2): 161−175
    [141] 王飞跃. 机器人的未来发展: 从工业自动化到知识自动化. 科技导报, 2015, 33(21): 39−44

    Wang Fei-Yue. On future development of robotics: From industrial automation to knowledge automation. Science & Technology Review, 2015, 33(21): 39−44
    [142] 王飞跃. 软件定义的系统与知识自动化: 从牛顿到默顿的平行升华. 自动化学报, 2015, 41(1): 1−8

    Wang Fei-Yue. Software-defined systems and knowledge automation: A parallel paradigm shift from Newton to Merton. Acta Automatica Sinica, 2015, 41(1): 1−8
    [143] Driess D, Xia F, Sajjadi M S M, Lynch C, Chowdhery A, Ichter B, et al. PaLM-E: An embodied multimodal language model. In: Proceedings of the 40th International Conference on Machine Learning. Honolulu, USA: PMLR, 2023. 8469–8488
    [144] Shridhar M, Thomason J, Gordon D, Bisk Y, Han W, Mottaghi R. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 10737–10746
    [145] Weihs L, Deitke M, Kembhavi A, Mottaghi R. Visual room rearrangement. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 5918–5927
    [146] Batra D, Chang A X, Chernova S, Davison A J, Deng J, Koltun V, et al. Rearrangement: A challenge for embodied AI. arXiv preprint arXiv: 2011.01975, 2020.

    [147] Shen B K, Xia F, Li C S, Martín-Martín R, Fan L X, Wang G Z. iGibson 1.0: A simulation environment for interactive tasks in large realistic scenes. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Prague, Czech Republic: IEEE, 2021. 7520–7527
    [148] Li C S, Xia F, Martín-Martín R, Lingelbach M, Srivastava S, Shen B K, et al. iGibson 2.0: Object-centric simulation for robot learning of everyday household tasks. In: Proceedings of the 5th Conference on Robot Learning. London, UK: PMLR, 2021. 455–465
    [149] Savva M, Kadian A, Maksymets O, Zhao Y L, Wijmans E, Jain B. Habitat: A platform for embodied AI research. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, South Korea: IEEE, 2019. 9338–9346
    [150] Ramakrishnan S K, Gokaslan A, Wijmans E, Maksymets O, Clegg A, Turner J, et al. Habitat-Matterport 3D dataset (HM3D): 1000 large-scale 3D environments for embodied AI. arXiv preprint arXiv: 2109.08238, 2021.

    [151] Wani S, Patel S, Jain U, Chang A X, Savva M. MultiON: Benchmarking semantic map memory using multi-object navigation. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2020. Article No. 813
    [152] Srivastava S, Li C S, Lingelbach M, Martín-Martín R, Xia F, Vainio K E, et al. BEHAVIOR: Benchmark for everyday household activities in virtual, interactive, and ecological environments. In: Proceedings of the 5th Conference on Robot Learning. London, UK: PMLR, 2022. 477–490
    [153] Kolve E, Mottaghi R, Han W, VanderBilt E, Weihs L, Herrasti A, et al. AI2-THOR: An interactive 3D environment for visual AI. arXiv preprint arXiv: 1712.05474, 2022.

    [154] Gan C, Schwartz J, Alter S, Mrowca D, Schrimpf M, Traer J, et al. ThreeDWorld: A platform for interactive multi-modal physical simulation. arXiv preprint arXiv: 2007.04954, 2021.

    [155] Gan C, Zhou S Y, Schwartz J, Alter S, Bhandwaldar A, Gutfreund D. The ThreeDWorld transport challenge: A visually guided task-and-motion planning benchmark towards physically realistic embodied AI. In: Proceedings of the International Conference on Robotics and Automation (ICRA). Philadelphia, USA: IEEE, 2022. 8847–8854
    [156] Szot A, Clegg A, Undersander E, Wijmans E, Zhao Y L, Turner J, et al. Habitat 2.0: Training home assistants to rearrange their habitat. arXiv preprint arXiv: 2106.14405, 2022.

    [157] Bi K F, Xie L X, Zhang H H, Chen X, Gu X T, Tian Q. Accurate medium-range global weather forecasting with 3D neural networks. Nature, 2023, 619(7970): 533−538 doi: 10.1038/s41586-023-06185-3
    [158] Wang F Y. The emergence of intelligent enterprises: From CPS to CPSS. IEEE Intelligent Systems, 2010, 25(4): 85−88 doi: 10.1109/MIS.2010.104
    [159] Zhang J J, Wang F Y, Wang X, Xiong G, Zhu F H, Lv Y S, et al. Cyber-physical-social systems: The state of the art and perspectives. IEEE Transactions on Computational Social Systems, 2018, 5(3): 829−840 doi: 10.1109/TCSS.2018.2861224
    [160] Wang F Y, Wang X, Li L X, Li L. Steps toward parallel intelligence. IEEE/CAA Journal of Automatica Sinica, 2016, 3(4): 345−348 doi: 10.1109/JAS.2016.7510067
    [161] Wang F Y. Parallel intelligence in metaverses: Welcome to Hanoi!. IEEE Intelligent Systems, 2022, 37(1): 16−20 doi: 10.1109/MIS.2022.3154541
    [162] Wang X, Yang J, Han J P, Wang W, Wang F Y. Metaverses and DeMetaverses: From digital twins in CPS to parallel intelligence in CPSS. IEEE Intelligent Systems, 2022, 37(4): 97−102 doi: 10.1109/MIS.2022.3196592
    [163] Wang F Y. Parallel control and management for intelligent transportation systems: Concepts, architectures, and applications. IEEE Transactions on Intelligent Transportation Systems, 2010, 11(3): 630−638 doi: 10.1109/TITS.2010.2060218
    [164] Wang Z R, Lv C, Wang F Y. A new era of intelligent vehicles and intelligent transportation systems: Digital twins and parallel intelligence. IEEE Transactions on Intelligent Vehicles, 2023, 8(4): 2619−2627 doi: 10.1109/TIV.2023.3264812
    [165] Yang J, Wang X X, Zhao Y D. Parallel manufacturing for industrial metaverses: A new paradigm in smart manufacturing. IEEE/CAA Journal of Automatica Sinica, 2022, 9(12): 2063−2070 doi: 10.1109/JAS.2022.106097
    [166] Liu Y H, Sun B Y, Tian Y L, Wang X X, Zhu Y, Huai R X, et al. Software-defined active lidars for autonomous driving: A parallel intelligence-based adaptive model. IEEE Transactions on Intelligent Vehicles, 2023, 8(8): 4047−4056 doi: 10.1109/TIV.2023.3289540
    [167] Wang S, Wang J, Wang X, Qiu T Y, Yuan Y, Ouyang L W, et al. Blockchain-powered parallel healthcare systems based on the ACP approach. IEEE Transactions on Computational Social Systems, 2018, 5(4): 942−950 doi: 10.1109/TCSS.2018.2865526
    [168] Wang X J, Kang M Z, Sun H Q, de Reffye P, Wang F Y. DeCASA in AgriVerse: Parallel agriculture for smart villages in metaverses. IEEE/CAA Journal of Automatica Sinica, 2022, 9(12): 2055−2062 doi: 10.1109/JAS.2022.106103
    [169] Wang X X, Li J J, Fan L L, Wang Y T, Li Y K. Advancing vehicular healthcare: The DAO-based parallel maintenance for intelligent vehicles. IEEE Transactions on Intelligent Vehicles, 2023, 8(12): 4671−4673 doi: 10.1109/TIV.2023.3341855
    [170] Wang X X, Yang J, Wang Y T, Miao Q H, Wang F Y, Zhao A J, et al. Steps toward Industry 5.0: Building “6S” parallel industries with cyber-physical-social intelligence. IEEE/CAA Journal of Automatica Sinica, 2023, 10(8): 1692−1703 doi: 10.1109/JAS.2023.123753
    [171] 王飞跃. 关于复杂系统研究的计算理论与方法. 中国基础科学, 2004, 6(5): 3−10 doi: 10.3969/j.issn.1009-2412.2004.05.001

    Wang Fei-Yue. Computational theory and method on complex system. China Basic Science, 2004, 6(5): 3−10 doi: 10.3969/j.issn.1009-2412.2004.05.001
    [172] Zhao Y, Zhu Z Q, Chen B, Qiu S H, Huang J C, Lu X, et al. Toward parallel intelligence: An interdisciplinary solution for complex systems. The Innovation, 2023, 4(6): Article No. 100521 doi: 10.1016/j.xinn.2023.100521
    [173] Li X, Ye P J, Li J J, Liu Z M, Cao L B, Wang F Y. From features engineering to scenarios engineering for trustworthy AI: I&I, C&C, and V&V. IEEE Intelligent Systems, 2022, 37(4): 18−26 doi: 10.1109/MIS.2022.3197950
    [174] Li X, Tian Y L, Ye P J, Duan H B, Wang F Y. A novel scenarios engineering methodology for foundation models in metaverse. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2023, 53(4): 2148−2159 doi: 10.1109/TSMC.2022.3228594
    [175] 杨静, 王晓, 王雨桐, 刘忠民, 李小双, 王飞跃. 平行智能与CPSS: 三十年发展的回顾与展望. 自动化学报, 2023, 49(3): 614−634

    Yang Jing, Wang Xiao, Wang Yu-Tong, Liu Zhong-Min, Li Xiao-Shuang, Wang Fei-Yue. Parallel intelligence and CPSS in 30 years: An ACP approach. Acta Automatica Sinica, 2023, 49(3): 614−634
Publication history
  • Received: 2024-06-19
  • Accepted: 2024-09-22
  • Available online: 2024-10-16
