

Data Glove-mediated Grasping Skill Transfer From Human Hands to Robotic Hands

Guo Ce, Guo Zi-Rui, Chen Si-Hao, Li Yi-Hong, Xiao Hao-Ran, Li Jin-Zhe, Chen Xie-Yuan-Li, Zeng Zhi-Wen, Lu Hui-Min

Citation: Guo Ce, Guo Zi-Rui, Chen Si-Hao, Li Yi-Hong, Xiao Hao-Ran, Li Jin-Zhe, Chen Xie-Yuan-Li, Zeng Zhi-Wen, Lu Hui-Min. Data glove-mediated grasping skill transfer from human hands to robotic hands. Acta Automatica Sinica, xxxx, xx(x): x−xx doi: 10.16383/j.aas.c250512


doi: 10.16383/j.aas.c250512
cstr: 32138.14.j.aas.c250512


Funds: Supported by National Natural Science Foundation of China (U22A2059, 62203460, 62403478, T2521006), Young Elite Scientists Sponsorship Program by CAST (2023QNRC001), and the Innovation Research Foundation of National University of Defense Technology
More Information
    Author Bio:

    GUO Ce Ph. D. candidate at the College of Intelligence Science and Technology, National University of Defense Technology. His research interests include imitation learning and robot manipulation

    GUO Zi-Rui Ph. D. candidate at the College of Intelligence Science and Technology, National University of Defense Technology. His research interests include robot perception and manipulation

    CHEN Si-Hao Assistant engineer at China Ordnance Equipment Group Automation Research Institute Co., Ltd. He received his master's degree from National University of Defense Technology in 2025. His main research interest is visual grasping with dexterous hands

    LI Yi-Hong Employed by Zoomlion Heavy Industry Science & Technology Co., Ltd. She received her master's degree from National University of Defense Technology in 2025. Her main research interest is grasping control algorithms for anthropomorphic dexterous hands

    XIAO Hao-Ran Ph. D. candidate at the College of Intelligence Science and Technology, National University of Defense Technology. His research interests include robotic manipulation and robotic task and motion planning

    LI Jin-Zhe Ph.D. candidate at the College of Intelligence Science and Technology, National University of Defense Technology. His research interests include human-computer interaction devices and imitation learning

    CHEN Xie-Yuan-Li Associate professor at the College of Intelligence Science and Technology, National University of Defense Technology. He received his Ph.D. degree from University of Bonn in 2022. His research interests include robotics technology, robot perception and navigation

    ZENG Zhi-Wen Associate professor at the College of Intelligence Science and Technology, National University of Defense Technology. He received his Ph.D. degree from National University of Defense Technology in 2016. His research interests include path planning, human-robot interaction, distributed coordination control of multi-robot systems, estimation of networked systems, and intelligent robot applications. Corresponding author of this paper

    LU Hui-Min Professor at the College of Intelligence Science and Technology, National University of Defense Technology. He received his Ph.D. degree from National University of Defense Technology in 2010. His research interests include humanoid robots, dexterous robot manipulation, and human-robot hybrid intelligent control

  • Abstract: Imitation learning is an effective way to transfer skills from the human hand to robotic hands. Traditional demonstration methods face several problems: the demonstration interface is not intuitive, demonstration data are hard to reuse, and tactile and kinesthetic perceptual features are difficult to transfer effectively. To address these problems, this work designs a data glove that simultaneously collects tactile and kinesthetic features and proposes a grasping skill transfer scheme mediated by this glove, comprising a multimodal feature representation based on a graph structure and polar coordinates, estimation of unknown contact forces under a static equilibrium assumption, and a dynamic remapping method based on desired joint angles and contact force distributions. Experiments show that, for objects with diverse properties such as deformability and irregular shapes, the scheme achieves a high grasp success rate while maintaining reasonable contact force control, and comes closer to direct human-hand grasping than other baseline schemes.
    1) https://www.noitom.com.cn/perception-neuron-3-pro.html/
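The unknown contact force estimation mentioned in the abstract rests on a static equilibrium argument: once the object is held still, the sensed contact forces, the unsensed contact forces, and gravity must sum to zero, so contacts not covered by tactile pads can be solved for from the sensed ones. Below is a minimal NumPy sketch of that idea; the function name, the pure force-balance formulation, and the least-squares solve are illustrative assumptions for this page, not the paper's exact method (which also exploits the hand geometry shown in Figs. 10 and 11).

```python
import numpy as np

def estimate_unknown_forces(known_dirs, known_mags, unknown_dirs, gravity):
    """Estimate unsensed contact force magnitudes from static force balance.

    Static equilibrium requires  K @ m + U @ x + g = 0,  where the columns
    of K and U are unit contact-normal directions for the sensed and
    unsensed contacts, m holds the sensed magnitudes, and g is gravity.
    Solving for x is a linear least-squares problem.
    (Torque balance is omitted here for brevity.)
    """
    K = np.asarray(known_dirs, dtype=float)    # (3, n_known) unit vectors
    U = np.asarray(unknown_dirs, dtype=float)  # (3, n_unknown) unit vectors
    m = np.asarray(known_mags, dtype=float)    # (n_known,) magnitudes in N
    g = np.asarray(gravity, dtype=float)       # (3,) gravity force in N
    x, *_ = np.linalg.lstsq(U, -(K @ m + g), rcond=None)
    return x

# One sensed contact pressing along -x with 2 N, object weight 1 N along -z;
# the two unsensed contacts along +z and +x must supply 1 N and 2 N.
K = np.array([[-1.0], [0.0], [0.0]])
U = np.array([[0.0, 1.0], [0.0, 0.0], [1.0, 0.0]])
print(estimate_unknown_forces(K, [2.0], U, [0.0, 0.0, -1.0]))  # ≈ [1. 2.]
```

With more unknown contacts than equations the least-squares solve returns the minimum-norm solution, so in practice additional constraints (e.g. non-negative normal forces) would be needed.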
  • Fig.  1  Multimodal data acquisition glove

    Fig.  2  Dimensional compatibility of the tactile sampling array

    Fig.  3  Distribution of hand joints and tactile sampling pads

    Fig.  4  Example of human hand grasping demonstration: (a) reconstruction of grasping gestures and tactile visualization; (b) variations in kinesthetic/tactile raw data

    Fig.  5  The grasped objects

    Fig.  6  Framework for data glove-mediated skill transfer

    Fig.  7  Hand node graph

    Fig.  8  Description of motion characteristics in polar coordinates

    Fig.  9  Imitation learning framework based on TK-GCN

    Fig.  10  Cross-section examples in static force equilibrium

    Fig.  11  Diagram of force analysis

    Fig.  12  Dynamic remapping scheme

    Fig.  13  The grasping testing system

    Fig.  14  Estimation results of unknown contact forces

    Fig.  15  Partial successful grasping processes with the TK-GCN approach

    Fig.  16  Comparison of grasping effects of different approaches

    Table  1  Properties of grasped objects

    Label | Mass (g) | Hardness | Shape (size, mm) | Material | Elastic modulus (GPa) | Grasp type
    bottleS | 201.1 | >90 HA | Cylinder (65, 194) | Stainless steel | 196.000 | Side grasp
    nailong | 48.6 | 15 HC | Near-cylinder (70, 115) | TPE | 0.204 | Side grasp
    pitaya | 185.5 | 32 HC | Sphere (83) | TPE | 0.204 | Top-down
    carambola | 18.7 | 39~50 HA | Near-cylinder (66, 103) | MDPE | 0.648 | Top-down
    football | 10.7 | 16 HC | Sphere (58) | TPE | 0.204 | Top-down
    pineapple | 26.9 | 75~93 HA | Near-cylinder (70, 133) | MDPE | 0.648 | Side grasp
    jar | 49.1 | >90 HA | Cylinder (68, 118) | PET | 3.250 | Side grasp
    citrus | 167.4 | 31 HC | Sphere (78) | TPE | 0.204 | Side grasp
    pomegrante | 133.8 | 32 HC | Sphere (74) | TPE | 0.204 | Top-down
    bottleP | 103.9 | >90 HA | Cylinder (60, 228) | PC | 2.600 | Side grasp

    Table  2  Evaluation results of ablation experiments

    Scheme | Success rate (%) ↑ | FEM-AT (N) ↓ | FEM-KE (N) ↓ | FEM-MAE (N) ↓ | S-Time (s) ↓ | P-Time (ms) ↓
    T-GCN + hybrid force-position mapping | 73.33 | 285.27 ± 79.48 | 2.68 | 19.32 | 2.51 | 1.32
    K-GCN + hybrid force-position mapping | 77.33 | 323.83 ± 81.06 | 2.91 | 19.32 | 3.13 | 1.95
    TK-GCN + hybrid force-position mapping | 88.00 | 267.81 ± 68.27 | 1.73 | 8.29 | 3.31 | 2.83
    TK-GCN + unknown contact force estimation + hybrid force-position mapping | 88.67 | 245.78 ± 61.42 | 1.70 | 8.32 | 3.26 | 7.63
    TK-GCN + unknown contact force estimation + dynamic remapping | 90.67 | 243.13 ± 62.47 | 1.72 | 8.27 | 3.54 | 11.96
    Note: ↑ means higher is better; ↓ means lower is better.

    Table  3  Evaluation results of different approaches

    Approach | Success rate (%) ↑ | FEM-AT (N) ↓ | FEM-KE (N) ↓ | FEM-MAE (N) ↓ | S-Time (s) ↓ | P-Time (ms) ↓
    Force-feedback teleoperation | 70.00 | 351.31 ± 109.09 | 1.92 | 19.32 | 5.18 | —
    Admittance control (70%) | 73.33 | 476.91 ± 113.81 | 4.67 | 19.33 | 1.97 | —
    Admittance control (80%) | 86.00 | 563.51 ± 139.21 | 5.02 | 19.33 | 2.05 | —
    Admittance control (90%) | 86.67 | 775.47 ± 151.65 | 7.13 | 19.33 | 2.39 | —
    Improved ACT | 70.00 | 356.68 ± 88.60 | 3.52 | 19.32 | 7.11 | 16.53
    Improved MULSA | 50.67 | 302.51 ± 48.74 | 2.22 | 8.65 | 6.37 | 31.94
    TK-GCN (ours) | 90.67 | 243.13 ± 62.47 | 1.72 | 8.27 | 3.54 | 11.96
    Human-hand grasping | 100.00 | 268.64 ± 50.26 | 1.44 | 6.28 | 2.07 | —
    Note: ↑ means higher is better; ↓ means lower is better.
Publication history
  • Received:  2025-09-30
  • Accepted:  2026-02-13
  • Available online:  2026-03-30
