Emerging Trends in Pedestrian Inertial Positioning: Neural Network-based Methods, Performance and Prospects

Li Yan, Shi Zhong-Chen, Hou Yan-Qing, Qi Yu-Hua, Xie Liang, Chen Wei, Chen Hong-Bo, Yan Ye, Yin Er-Wei

Citation: Li Yan, Shi Zhong-Chen, Hou Yan-Qing, Qi Yu-Hua, Xie Liang, Chen Wei, Chen Hong-Bo, Yan Ye, Yin Er-Wei. Emerging trends in pedestrian inertial positioning: Neural network-based methods, performance and prospects. Acta Automatica Sinica, 2025, 51(2): 1−16. doi: 10.16383/j.aas.c240221

doi: 10.16383/j.aas.c240221 cstr: 32138.14.j.aas.c240221

Emerging Trends in Pedestrian Inertial Positioning: Neural Network-based Methods, Performance and Prospects

Funds: Supported by National Natural Science Foundation of China (62332019, 62076250) and National Key Research and Development Program of China (2023YFF1203900, 2020YFA0713502)
More Information
    Author Bio:

    LI Yan Ph.D. candidate at the School of Systems Science and Engineering, Sun Yat-sen University. He received his M.S. degree from Kunming University of Science and Technology in 2022. His research interest covers neural inertial positioning and pose estimation. E-mail: liyan377@mail2.sysu.edu.cn

    SHI Zhong-Chen Research assistant at the Defense Innovation Institute, Academy of Military Sciences. He received his Ph.D. degree from the National University of Defense Technology in 2022. His research interest covers robot vision, 3D computer vision, and pose estimation. E-mail: shizhongchen@buaa.edu.cn

    HOU Yan-Qing Associate professor at the School of Systems Science and Engineering, Sun Yat-sen University. He received his Ph.D. degree from the National University of Defense Technology in 2016. His research interest covers satellite navigation and positioning and multi-source fusion navigation. E-mail: houyq9@mail.sysu.edu.cn

    QI Yu-Hua Associate researcher at the School of Systems Science and Engineering, Sun Yat-sen University. He received his Ph.D. degree from Beijing Institute of Technology in 2020. His main research interest is simultaneous localization and mapping. E-mail: qiyh8@mail.sysu.edu.cn

    XIE Liang Assistant researcher at the Defense Innovation Institute, Academy of Military Sciences. He received his Ph.D. degree from the National University of Defense Technology in 2018. His research interest covers computer vision, human-machine interaction, and mixed reality. E-mail: xielnudt@gmail.com

    CHEN Wei Research assistant at the Defense Innovation Institute, Academy of Military Sciences. He received his Ph.D. degree from the University of Birmingham in 2022. His main research interest is pose estimation. E-mail: wei.chen.ai@outlook.com

    CHEN Hong-Bo Professor at the School of Systems Science and Engineering, Sun Yat-sen University. He received his Ph.D. degree from the Harbin Institute of Technology in 2007. His research interest covers modeling, simulation, and analysis of complex systems, intelligent unmanned systems, and the design of aerospace vehicles. E-mail: chenhongbo@mail.sysu.edu.cn

    YAN Ye Researcher at the Defense Innovation Institute, Academy of Military Sciences. He received his Ph.D. degree from the National University of Defense Technology in 2000. His research interest covers human-machine interaction and mixed reality. Corresponding author of this paper. E-mail: yanye1971@sohu.com

    YIN Er-Wei Associate researcher at the Defense Innovation Institute, Academy of Military Sciences. He received his Ph.D. degree from the National University of Defense Technology in 2015. His research interest covers brain-computer interfaces and intelligent human-machine interaction technologies. E-mail: yinerwei1985@gmail.com

  • Abstract: Pedestrian inertial positioning estimates a pedestrian's position from the measurement sequence of an inertial measurement unit (IMU), and has recently become an important means of autonomous pedestrian positioning indoors and in environments where satellite signals are blocked. Traditional inertial positioning methods, however, are vulnerable to error sources during double integration and therefore drift, which limits their use in long-duration, long-distance real-world motion. Fortunately, methods based on neural network (NN) learning can learn pedestrian motion patterns from IMU history alone and correct the drift that integrating inertial measurements introduces. This paper therefore presents a comprehensive survey of recent deep neural network (DNN) based pedestrian inertial positioning. We first briefly review traditional inertial positioning methods; next, according to whether domain knowledge is incorporated, we survey research on end-to-end neural inertial positioning and on neural inertial positioning fused with domain knowledge; we then summarize the benchmark datasets and evaluation metrics for pedestrian inertial positioning and compare the strengths and weaknesses of representative methods; finally, we summarize the key open problems in this field and discuss the challenges and development trends facing DNN-based pedestrian inertial positioning, in the hope of providing a useful reference for future research.
    1)  https://www.dropbox.com/s/9zzaj3h3u4bta23/ridi_data_publish_v2.zip?dl=0
    2)  https://github.com/higerra/TangoIMURecorder
    3)  http://deepio.cs.ox.ac.uk/
    4)  https://ronin.cs.sfu.ca/
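The drift mechanism described in the abstract can be made concrete. The sketch below (illustrative only; the bias value is assumed, and a real strapdown system would also rotate measurements into the navigation frame before integrating) double-integrates a biased accelerometer reading and shows the quadratic position error a stationary pedestrian would accumulate:

```python
import numpy as np

def double_integrate(acc, dt):
    """Naive strapdown update: integrate acceleration twice to get position."""
    vel = np.cumsum(acc * dt)   # velocity = integral of acceleration
    return np.cumsum(vel * dt)  # position = integral of velocity

dt = 0.005                      # 200 Hz IMU, as in several datasets below
t = np.arange(0, 60, dt)        # one minute of data
true_acc = np.zeros_like(t)     # the pedestrian is actually stationary
bias = 0.02                     # assumed 0.02 m/s^2 accelerometer bias

drift = double_integrate(true_acc + bias, dt)
# A constant bias b produces roughly 0.5*b*t^2 of position error:
print(f"drift after 60 s: {drift[-1]:.1f} m")  # → drift after 60 s: 36.0 m
```

This is why pure SINS baselines diverge by hundreds of meters in the comparison tables while learned methods stay within a few meters.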
  • Fig. 1  Pedestrian inertial positioning paradigm

    Fig. 2  The organization structure of this paper

    Fig. 3  Strapdown inertial navigation system

    Fig. 4  Pedestrian dead reckoning

    Fig. 5  Zero velocity update

    Fig. 6  Paradigm of pedestrian inertial positioning based on neural network

    Fig. 7  Neural inertial positioning algorithm flowchart

    Fig. 8  Flowchart of PDR + NN

    Fig. 9  Flowchart of ZUPT + NN
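The pedestrian dead reckoning of Fig. 4 advances the position by one step length along the current heading whenever a step is detected. A schematic numpy version of that recurrence (a generic PDR update, not the exact implementation of any surveyed method; the 0.7 m step length is an assumed constant rather than an estimated one):

```python
import numpy as np

def pdr_update(pos, step_length, heading):
    """One PDR step: move step_length meters along the heading (radians)."""
    return pos + step_length * np.array([np.cos(heading), np.sin(heading)])

# Four detected steps heading east (0 rad), then two heading north (pi/2).
pos = np.zeros(2)
for heading in [0.0] * 4 + [np.pi / 2] * 2:
    pos = pdr_update(pos, step_length=0.7, heading=heading)
print(pos)  # ≈ [2.8, 1.4]
```

In the PDR + NN variants of Fig. 8, networks replace the hand-tuned step detector, step-length model, and heading estimator that feed this update.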

    Table 1  Overview of neural network-based pedestrian inertial positioning methods

    Method | Year | Model | Learning | Characteristics
    IONet[43] | 2018 | LSTM | Supervised | Casts inertial positioning as a sequence-learning problem; learns displacements with an LSTM to build an inertial odometry
    L-IONet[8] | 2020 | WaveNet | Supervised | Replaces the LSTM with an autoregressive model to handle long inertial sequences and predict displacement in polar coordinates
    MotionTransformer[48] | 2019 | LSTM | Supervised | Learns a domain-invariant semantic representation via generative adversarial networks and domain adaptation
    TLIO[51] | 2020 | CNN | Supervised | Regresses relative displacement and its uncertainty with a CNN and fuses both into a Kalman filter for state estimation
    RoNIN[27] | 2020 | CNN/LSTM | Supervised | Predicts the pedestrian's 2D velocity vector from inertial data with CNN/LSTM backbones
    Wang et al.[53] | 2021 | CNN | Supervised | Regresses speed magnitude and movement direction with a ResNet
    IMUNet[54] | 2024 | CNN | Supervised | Replaces standard convolutions with depthwise and pointwise convolutions to speed up model inference
    RIO[55] | 2022 | CNN | Self-supervised | Introduces rotation equivariance as a strong self-supervision signal for training inertial positioning models
    HNNTA[57] | 2022 | CNN/LSTM | Supervised | Weights the hidden states produced by the LSTM with a temporal attention mechanism
    RBCN[58] | 2023 | CNN/LSTM | Supervised | Strengthens channel and spatial feature learning with multiple hybrid attention mechanisms
    Res2Net[59] | 2022 | CNN | Supervised | Incorporates the Res2Net module to extract finer-grained feature representations
    CTIN[26] | 2022 | Transformer | Supervised | First Transformer-based model to fuse spatial representations with temporal knowledge
    RIOT[62] | 2023 | Transformer | Supervised | Recursively learns motion features and systematic error bias by incorporating true position priors
    NILoc[63] | 2022 | Transformer | Supervised | Maps distinctive human motion patterns to pedestrian locations
    IDOL[64] | 2021 | LSTM | Supervised | Splits pedestrian inertial positioning into separate orientation-estimation and position-estimation stages
    Shao et al.[66] | 2018 | CNN | Supervised | Deep-CNN step-detection scheme that improves pedometer robustness
    Ren et al.[67] | 2021 | LSTM | Supervised | Designs an LSTM-based step counter
    WAIT[68] | 2023 | CNN | Supervised | Uses an autoencoder to convert IMU measurements into error-free waveforms and extract mobility-related information
    Gu et al.[69] | 2018 | Autoencoder | Supervised | Step-length estimation model built on stacked autoencoders
    StepNet[71] | 2020 | CNN | Supervised | Dynamically regresses step length or change in distance with a CNN
    Wang et al.[72] | 2019 | LSTM | Supervised | Adds a variational autoencoder to the step-length model to automatically remove the inherent noise in feature vectors
    Manos et al.[78] | 2022 | CNN | Supervised | Extracts motion vectors for heading estimation with temporal convolutions and multi-scale attention layers
    PDRNet[79] | 2022 | CNN | Supervised | ResNet-based design with a placement-recognition network and a regression network for distance and heading changes
    Wagstaff[80] | 2018 | LSTM | Supervised | Replaces the standard zero-velocity detector with an LSTM to aid the inertial navigation system
    Yu et al.[81] | 2019 | CNN | Supervised | CNN-based zero-velocity detector
    Bo et al.[73] | 2022 | ResNet/GRU | Unsupervised | Builds a multi-source unsupervised domain adaptation network with adversarial training and subclass classifiers

    *Note: The methods above fall into two classes according to whether domain knowledge is incorporated.
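Most end-to-end entries above (RoNIN, CTIN, and relatives) follow the flow of Fig. 7: slice the IMU stream into windows, regress a 2D velocity per window, and integrate. The sketch below captures only that skeleton; the window size is an assumption, and predict_velocity is a placeholder that a trained network from Table 1 would replace:

```python
import numpy as np

WINDOW = 200  # one second at 200 Hz (assumed window size)

def predict_velocity(imu_window):
    """Placeholder for a trained regressor mapping a (WINDOW, 6) block of
    accelerometer + gyroscope samples to a 2D velocity estimate."""
    return np.array([1.0, 0.0])  # pretend: walking at 1 m/s along x

def integrate_trajectory(imu, dt=1.0):
    """Slide a window over the IMU stream and integrate predicted velocities."""
    pos, traj = np.zeros(2), []
    for start in range(0, len(imu) - WINDOW + 1, WINDOW):
        vel = predict_velocity(imu[start:start + WINDOW])
        pos = pos + vel * dt  # displacement accumulated over one window
        traj.append(pos.copy())
    return np.array(traj)

imu = np.zeros((1000, 6))             # 5 s of dummy 6-axis samples
print(integrate_trajectory(imu)[-1])  # → [5. 0.]
```

Because the network outputs velocity rather than raw double-integrated acceleration, sensor bias no longer compounds quadratically, which is the core advantage these methods exploit.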

    Table 2  Pedestrian inertial positioning datasets

    Dataset | Year | Sampling rate | IMU carrier | Ground truth | Size (trajectories) | Device placement
    RIDI[82] | 2017 | 200 Hz | Lenovo Phab2 Pro | Tango phone | 74 | Trouser pocket, bag, handheld, body
    TUM VI[83] | 2018 | 200 Hz | — | Motion capture system | 28 | Handheld
    OXIOD[84] | 2018 | 100 Hz | iPhone 5/6/7 Plus, Nexus 5 | Motion capture system | 158 | Handheld, pocket, handbag, trolley
    RoNIN[27] | 2019 | 200 Hz | Galaxy S9, Pixel 2 XL | AR device | 276 | Naturally carried
    IDOL[64] | 2020 | 100 Hz | iPhone 8 | Kaarta Stencil | 84 | Naturally carried
    CTIN[26] | 2021 | 200 Hz | Samsung Note, Galaxy | Google ARCore | 100 | Naturally carried
    SIMD[85] | 2023 | 50 Hz | Various smartphone models | GPS/IMU | 4562 | Naturally carried

    Table 3  Comparison of pedestrian inertial positioning methods on the RIDI test dataset (unit: m)

    Model | seen-AE | seen-RE | unseen-AE | unseen-RE
    SINS[25] | 31.06 | 37.53 | 32.01 | 38.04
    PDR[29] | 3.52 | 4.56 | 1.94 | 1.81
    RIDI[82] | 1.88 | 2.38 | 1.71 | 1.79
    R-LSTM[27] | 2.00 | 2.64 | 2.08 | 2.10
    R-ResNet[27] | 1.63 | 1.91 | 1.67 | 1.62
    R-TCN[27] | 1.66 | 2.16 | 1.66 | 2.26

    Table 4  Comparison of pedestrian inertial positioning methods on the OXIOD test dataset (unit: m)

    Model | seen-ATE | seen-RTE | unseen-ATE | unseen-RTE
    SINS[25] | 716.31 | 606.75 | 1941.41 | 848.55
    PDR[29] | 2.12 | 2.11 | 3.26 | 2.32
    RIDI[82] | 4.12 | 3.45 | 4.50 | 2.70
    R-LSTM[27] | 2.02 | 2.33 | 7.12 | 5.42
    R-ResNet[27] | 2.40 | 1.77 | 6.71 | 3.04
    R-TCN[27] | 2.26 | 2.63 | 7.76 | 5.78

    Table 5  Comparison of pedestrian inertial positioning methods on the RoNIN test dataset (unit: m)

    Model | seen-ATE | seen-RTE | unseen-ATE | unseen-RTE
    SINS[25] | 675.21 | 169.48 | 458.06 | 117.06
    PDR[29] | 29.54 | 21.36 | 27.67 | 23.17
    RIDI[82] | 17.06 | 17.50 | 15.66 | 18.91
    R-LSTM[27] | 4.18 | 2.63 | 5.32 | 3.58
    R-ResNet[27] | 3.54 | 2.67 | 5.14 | 4.37
    R-TCN[27] | 4.38 | 2.90 | 5.70 | 4.07

    *Note: "seen" means the test subjects also appear in the training set; "unseen" means they do not; ATE denotes absolute trajectory error and RTE relative trajectory error.
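The ATE and RTE columns above can be computed from estimated and ground-truth trajectories as follows — one common formulation (RMSE over per-sample errors for ATE; end-point drift over fixed-length sub-trajectories for RTE), sketched here without the trajectory alignment some benchmarks apply first:

```python
import numpy as np

def ate(est, gt):
    """Absolute trajectory error: RMSE of the per-sample position error."""
    return float(np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1))))

def rte(est, gt, window):
    """Relative trajectory error: mean end-point drift of fixed-length
    sub-trajectories, with each segment's starting offset removed."""
    errs = []
    for s in range(0, len(est) - window, window):
        d_est = est[s + window] - est[s]  # estimated displacement
        d_gt = gt[s + window] - gt[s]     # ground-truth displacement
        errs.append(np.linalg.norm(d_est - d_gt))
    return float(np.mean(errs))

# A straight 10 m ground-truth walk and an estimate offset 1 m sideways:
gt = np.stack([np.linspace(0, 10, 101), np.zeros(101)], axis=1)
est = gt + np.array([0.0, 1.0])
print(ate(est, gt))             # → 1.0
print(rte(est, gt, window=20))  # → 0.0 (a constant offset cancels in deltas)
```

ATE penalizes a whole-trajectory offset while RTE ignores it, which is why the two columns can rank methods differently.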
  • [1] Gao R P, Xiao X, Zhu S L, Xing W W, Li C, Liu L, et al. Glow in the dark: Smartphone inertial odometry for vehicle tracking in GPS blocked environments. IEEE Internet of Things Journal, 2021, 8(16): 12955−12967 doi: 10.1109/JIOT.2021.3064342
    [2] Herrera E, Kaufmann H, Secue J, Quirós R, Fabregat G. Improving data fusion in personal positioning systems for outdoor environments. Information Fusion, 2013, 14(1): 45−56 doi: 10.1016/j.inffus.2012.01.009
    [3] Roy P, Chowdhury C. A survey on ubiquitous WiFi-based indoor localization system for smartphone users from implementation perspectives. CCF Transactions on Pervasive Computing and Interaction, 2022, 4(3): 298−318 doi: 10.1007/s42486-022-00089-3
    [4] Wang R R, Li Z H, Luo H Y, Zhao F, Shao W H, Wang Q. A robust Wi-Fi fingerprint positioning algorithm using stacked denoising autoencoder and multi-layer perceptron. Remote Sensing, 2019, 11(11): Article No. 1293 doi: 10.3390/rs11111293
    [5] Mur-Artal R, Montiel J, Tardós J. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 2015, 31(5): 1147−1163 doi: 10.1109/TRO.2015.2463671
    [6] Szyc K, Nikodem M, Zdunek M. Bluetooth low energy indoor localization for large industrial areas and limited infrastructure. Ad Hoc Networks, 2023, 139: Article No. 103024 doi: 10.1016/j.adhoc.2022.103024
    [7] Yu N, Zhan X H, Zhao S N, Wu Y F, Feng R J. A precise dead reckoning algorithm based on Bluetooth and multiple sensors. IEEE Internet of Things Journal, 2018, 5(1): 336−351 doi: 10.1109/JIOT.2017.2784386
    [8] Chen C H, Zhao P J, Lu C X, Wang W, Markham A, Trigoni N. Deep-learning-based pedestrian inertial navigation: Methods, data set, and on-device inference. IEEE Internet of Things Journal, 2020, 7(5): 4431−4441 doi: 10.1109/JIOT.2020.2966773
    [9] Gowda M, Dhekne A, Shen S, Choudhury R, Yang X, Yang L, et al. Bringing IoT to sports analytics. In: Proceedings of the 14th USENIX Conference on Networked Systems Design and Implementation. Boston, USA: USENIX, 2017. 499−513
[10] Pan Xian-Fei, Mu Hua, Hu Xiao-Ping. A survey of autonomous navigation technology for individual soldier. Navigation Positioning and Timing, 2018, 5(1): 1−11 (in Chinese)
    [11] Guo Xiao-Kuan, Yue Pi-Yu, An Wei-Lian. Distance-inertial guidance of the launch vehicle. Acta Automatica Sinica, 1984, 10(4): 361−364 (in Chinese)
    [12] Zhou B D, Wu P, Zhang X, Zhang D J, Li Q Q. Activity semantics-based indoor localization using smartphones. IEEE Sensors Journal, 2024, 24(7): 11069−11079 doi: 10.1109/JSEN.2024.3357718
    [13] Barshan B, Durrant-Whyte H F. Inertial navigation systems for mobile robots. IEEE Transactions on Robotics and Automation, 1995, 11 (3): 328−342
    [14] Wang Wei. Status and development trend of inertial technology. Acta Automatica Sinica, 2013, 39(6): 723−729 (in Chinese)
    [15] Dong Ming-Tao, Cheng Jian-Hua, Zhao Lin, Liu Ping. Perspectives on performance evaluation method for inertial integrated navigation system. Acta Automatica Sinica, 2022, 48(10): 2361−2373 (in Chinese)
    [16] Xu R. Research and Application on Navigation Algorithm of Pedestrian Navigation System [Master thesis], Nanjing University of Aeronautics and Astronautics, China, 2008 (in Chinese)
    [17] Puyol M, Bobkov D, Robertson P, Jost T. Pedestrian simultaneous localization and mapping in multistory buildings using inertial sensors. IEEE Transactions on Intelligent Transportation Systems, 2014, 15(4): 1714−1727 doi: 10.1109/TITS.2014.2303115
    [18] El-Sheimy N, Hou H Y, Niu X J. Analysis and modeling of inertial sensors using Allan variance. IEEE Transactions on Instrumentation and Measurement, 2008, 57(1): 140−149 doi: 10.1109/TIM.2007.908635
    [19] Zhuo W P, Li S J, He T L, Liu M Y, Chan S, Ha S, et al. Online path description learning based on IMU signals from IoT devices. IEEE Transactions on Mobile Computing, DOI: 10.1109/TMC.2024.3406436
    [20] Otter D, Medina J, Kalita J. A survey of the usages of deep learning for natural language processing. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(2): 604−624 doi: 10.1109/TNNLS.2020.2979670
    [21] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature, 2015, 521(7553): 436−444 doi: 10.1038/nature14539
    [22] Ru X, Gu N, Shang H, Zhang H. MEMS inertial sensor calibration technology: Current status and future trends. Micromachines, 2022, 13(6): Article No. 879 doi: 10.3390/mi13060879
    [23] Yang H. Research of High-accuracy Pedestrian Navigation Algorithm Based on MEMS Sensors [Master thesis], Xiamen University, China, 2014 (in Chinese)
    [24] Savage P. Strapdown inertial navigation integration algorithm design part 1: Attitude algorithms. Journal of Guidance, Control, and Dynamics, 1998, 21(1): 19−28 doi: 10.2514/2.4228
    [25] Savage P. Strapdown inertial navigation integration algorithm design part 2: Velocity and position algorithms. Journal of Guidance, Control, and Dynamics, 1998, 21(2): 208−221 doi: 10.2514/2.4242
    [26] Rao B, Kazemi E, Ding Y, Shila D, Tucker F, Wang L. CTIN: Robust contextual transformer network for inertial navigation. In: Proceedings of the 36th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI, 2022. 5413−5421
    [27] Herath S, Yan H, Furukawa Y. RoNIN: Robust neural inertial navigation in the wild: Benchmark, evaluations, & new methods. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Paris, France: IEEE, 2020. 3146−3152
    [28] Wang B, Liu X, Yu B, Jia R, Gan X. Pedestrian dead reckoning based on motion mode recognition using a smartphone. Sensors, 2018, 18(6): Article No. 1811 doi: 10.3390/s18061811
    [29] Li W, Wang Y, Shao Y, Hu G, Li D. TrackPuzzle: Efficient registration of unlabeled PDR trajectories for learning indoor route graph. Future Generation Computer Systems, 2023, 149: 171−183
    [30] Skog I, Handel P, Nilsson J, Rantakokko J. Zero-velocity detection — An algorithm evaluation. IEEE Transactions on Biomedical Engineering, 2010, 57(11): 2657−2666 doi: 10.1109/TBME.2010.2060723
    [31] Zhang Lun-Dong, Lu Xiao-Hui, Li Jun-Zheng, He Mai-Hang. The key technologies and development of pedestrian navigation based on ZUPT. Navigation Positioning and Timing, 2020, 7(3): 141−149 (in Chinese)
    [32] Harle R. A survey of indoor inertial positioning systems for pedestrians. IEEE Communications Surveys & Tutorials, 2013, 15(3): 1281−1293
    [33] Qian J C, Ma J B, Ying R D, Liu P L, Pei L. An improved indoor localization method using smartphone inertial sensors. In: Proceedings of the International Conference on Indoor Positioning and Indoor Navigation. Montbeliard, France: IEEE, 2013. 1−7
    [34] Ao B K, Wang Y C, Liu H N, Li D Y, Song L, Li J Q. Context impacts in accelerometer-based walk detection and step counting. Sensors, 2018, 18(11): Article No. 3604 doi: 10.3390/s18113604
    [35] Kang X M, Huang B Q, Qi G D. A novel walking detection and step counting algorithm using unconstrained smartphones. Sensors, 2018, 18(1): Article No. 297 doi: 10.3390/s18010297
    [36] Weinberg H. Using the ADXL202 in pedometer and personal navigation applications. Analog Devices AN-602 Application Note, 2002, 2 (2): 1−6
    [37] Nilsson J, Skog I, Händel P, Hari K. Foot-mounted INS for everybody-an open-source embedded implementation. In: Proceedings of the IEEE/ION Position, Location and Navigation Symposium. Myrtle Beach, USA: IEEE, 2012. 140−145
    [38] Fang L, Antsaklis P J, Montestruque L A, McMickell M B, Lemmon M, Sun Y S, et al. Design of a wireless assisted pedestrian dead reckoning system-the NavMote experience. IEEE Transactions on Instrumentation and Measurement, 2005, 54(6): 2342−2358 doi: 10.1109/TIM.2005.858557
    [39] Goyal P, Ribeiro V J, Saran H, Kumar A. Strap-down pedestrian dead-reckoning system. In: Proceedings of the International Conference on Indoor Positioning and Indoor Navigation. Guimaraes, Portugal: IEEE, 2011. 1−7
    [40] Huang B Q, Qi G D, Yang X K, Zhao L, Zou H. Exploiting cyclic features of walking for pedestrian dead reckoning with unconstrained smartphones. In: Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing. Heidelberg, Germany: ACM, 2016. 374−385
    [41] Wahlström J, Skog I. Fifteen years of progress at zero velocity: A review. IEEE Sensors Journal, 2021, 21(2): 1139−1151 doi: 10.1109/JSEN.2020.3018880
    [42] Chen C H, Pan X F. Deep learning for inertial positioning: A survey. IEEE Transactions on Intelligent Transportation Systems, 2024, 25(9): 10506−10523 doi: 10.1109/TITS.2024.3381161
    [43] Chen C H, Lu X X, Markham A, Trigoni N. IONet: Learning to cure the curse of drift in inertial odometry. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence. New Orleans, USA: AAAI, 2018. 6468−6476
    [44] Yu Y, Si X S, Hu C H, Zhang J X. A review of recurrent neural networks: LSTM cells and network architectures. Neural Computation, 2019, 31(7): 1235−1270 doi: 10.1162/neco_a_01199
    [45] Yao S C, Zhao Y R, Shao H J, Liu S Z, Liu D X, Su L, et al. FastDeepioT: Towards understanding and optimizing neural network execution time on mobile and embedded devices. In: Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems. Shenzhen, China: ACM, 2018. 278−291
    [46] Oord A, Dieleman S, Zen H, Simonyan K, Vinyals O, Graves A, et al. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
    [47] Jayanth R K, Xu Y S, Wang Z Y, Chatzipantazis E, Gehrig D, Daniilidis K. EqNIO: Subequivariant neural inertial odometry. arXiv preprint arXiv: 2408.06321, 2024.
    [48] Chen C H, Miao Y S, Lu C X, Xie L H, Blunsom P, Markham A, et al. MotionTransformer: Transferring neural inertial tracking between domains. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Honolulu, USA: AAAI, 2019. 8009−8016
    [49] Yu L T, Zhang W N, Wang J, Yu Y. SeqGAN: Sequence generative adversarial nets with policy gradient. In: Proceedings of the 31st AAAI Conference on Artificial Intelligence. San Francisco, USA: AAAI, 2017. 2852−2858
    [50] Tzeng E, Hoffman J, Saenko K, Darrell T. Adversarial discriminative domain adaptation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 7167−7176
    [51] Liu W X, Caruso D, Ilg E, Dong J, Mourikis A I, Daniilidis K, et al. TLIO: Tight learned inertial odometry. IEEE Robotics and Automation Letters, 2020, 5(4): 5653−5660 doi: 10.1109/LRA.2020.3007421
    [52] Li M Y, Mourikis A I. High-precision, consistent EKF-based visual-inertial odometry. The International Journal of Robotics Research, 2013, 32(6): 690−711 doi: 10.1177/0278364913481251
    [53] Wang Y, Cheng H, Wang C, Meng M. Pose-invariant inertial odometry for pedestrian localization. IEEE Transactions on Instrumentation and Measurement, 2021, 70: Article No. 8503512
    [54] Zeinali B, Zanddizari H, Chang M J. IMUNet: Efficient regression architecture for inertial IMU navigation and positioning. IEEE Transactions on Instrumentation and Measurement, 2024, 73: Article No. 2516213
    [55] Cao X Y, Zhou C F, Zeng D D, Wang Y L. RIO: Rotation-equivariance supervised learning of robust inertial odometry. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 6614−6623
    [56] Lai R C, Tian Y, Tian J D, Wang J, Li N, Jiang Y. ResMixer: A lightweight residual mixer deep inertial odometry for indoor positioning. IEEE Sensors Journal, DOI: 10.1109/JSEN.2024.3443311
    [57] Wang Y, Cheng H, Meng M. Inertial odometry using hybrid neural network with temporal attention for pedestrian localization. IEEE Transactions on Instrumentation and Measurement, 2022, 71: Article No. 7503610
    [58] Zhu Y Q, Zhang J L, Zhu Y P, Zhang B, Ma W Z. RBCN-Net: A data-driven inertial navigation algorithm for pedestrians. Applied Sciences, 2023, 13(5): Article No. 2969 doi: 10.3390/app13052969
    [59] Chen B X, Zhang R F, Wang S C, Zhang L Q, Liu Y. Deep-learning-based inertial odometry for pedestrian tracking using attention mechanism and Res2Net module. IEEE Sensors Letters, 2022, 6(11): Article No. 6003804
    [60] Gao S H, Cheng M M, Zhao K, Zhang X Y, Yang M H, Torr P. Res2Net: A new multi-scale backbone architecture. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(2): 652−662 doi: 10.1109/TPAMI.2019.2938758
    [61] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A, et al. Attention is all you need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 5998−6008
    [62] Brotchie J, Li W C, Greentree A D, Kealy A. RIOT: Recursive inertial odometry transformer for localisation from low-cost IMU measurements. Sensors, 2023, 23(6): Article No. 3217 doi: 10.3390/s23063217
    [63] Herath S, Caruso D, Liu C, Chen Y F, Furukawa Y. Neural inertial localization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 6604−6613
    [64] Sun S, Melamed D, Kitani K. IDOL: Inertial deep orientation-estimation and localization. In: Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI, 2021. 6128−6137
    [65] Wang Y, Cheng H, Meng M. Pedestrian motion tracking by using inertial sensors on the smartphone. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, USA: IEEE, 2020. 4426−4431
    [66] Shao W H, Luo H Y, Zhao F, Wang C, Crivello A, Tunio M. DePedo: Anti periodic negative-step movement pedometer with deep convolutional neural networks. In: Proceedings of the IEEE International Conference on Communications (ICC). Kansas City, USA: IEEE, 2018. 1−6
    [67] Ren P, Elyasi F, Manduchi R. Smartphone-based inertial odometry for blind walkers. Sensors, 2021, 21(12): Article No. 4033 doi: 10.3390/s21124033
    [68] Han K, Yu S M, Ko S W, Kim S L. Waveform-guide transformation of IMU measurements for smartphone-based localization. IEEE Sensors Journal, 2023, 23(17): 20379−20389 doi: 10.1109/JSEN.2023.3298713
    [69] Gu F Q, Khoshelham K, Yu C Y, Shang J G. Accurate step length estimation for pedestrian dead reckoning localization using stacked autoencoders. IEEE Transactions on Instrumentation and Measurement, 2019, 68(8): 2705−2713 doi: 10.1109/TIM.2018.2871808
    [70] Gehring J, Miao Y J, Metze F, Waibel A. Extracting deep bottleneck features using stacked auto-encoders. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. Vancouver, Canada: IEEE, 2013. 3377−3381
    [71] Klein I, Asraf O. StepNet-deep learning approaches for step length estimation. IEEE Access, 2020, 8: 85706−85713 doi: 10.1109/ACCESS.2020.2993534
    [72] Wang Q, Ye L L, Luo H Y, Men A D, Zhao F, Huang Y. Pedestrian stride-length estimation based on LSTM and denoising autoencoders. Sensors, 2019, 19(4): Article No. 840 doi: 10.3390/s19040840
    [73] Bo F, Li J, Wang W B. Mode-independent stride length estimation with IMUs in smartphones. IEEE Sensors Journal, 2022, 22(6): 5824−5833 doi: 10.1109/JSEN.2022.3148313
    [74] Im C, Eom C, Lee H, Jang S, Lee C. Deep LSTM-based multimode pedestrian dead reckoning system for indoor localization. In: Proceedings of the International Conference on Electronics, Information, and Communication (ICEIC). Jeju, South Korea: IEEE, 2022. 1−2
    [75] Huang Y, Zeng Q H, Lei Q Y, Chen Z J, Sun K C. Smartphone heading correction method based on LSTM neural network. In: Proceedings of China Satellite Navigation Conference. Beijing, China: Springer, 2022. 415−425
    [76] Wang Q, Luo H Y, Ye L L, Men A D, Zhao F, Huang Y, et al. Pedestrian heading estimation based on spatial transformer networks and hierarchical LSTM. IEEE Access, 2019, 7: 162309−162322 doi: 10.1109/ACCESS.2019.2950728
    [77] Jaderberg M, Simonyan K, Zisserman A, Kavukcuoglu K. Spatial transformer networks. In: Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press, 2015. 2017−2025
    [78] Manos A, Hazan T, Klein I. Walking direction estimation using smartphone sensors: A deep network-based framework. IEEE Transactions on Instrumentation and Measurement, 2022, 71: Article No. 2501112
    [79] Asraf O, Shama F, Klein I. PDRNet: A deep-learning pedestrian dead reckoning framework. IEEE Sensors Journal, 2022, 22(6): 4932−4939 doi: 10.1109/JSEN.2021.3066840
    [80] Wagstaff B, Kelly J. LSTM-based zero-velocity detection for robust inertial navigation. In: Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN). Nantes, France: IEEE, 2018. 1−8
    [81] Yu X G, Liu B, Lan X Y, Xiao Z L, Lin S S, Yan B, et al. AZUPT: Adaptive zero velocity update based on neural networks for pedestrian tracking. In: Proceedings of the IEEE Global Communications Conference (GLOBECOM). Waikoloa, USA: IEEE, 2019. 1−6
    [82] Yan H, Shan Q, Furukawa Y. RIDI: Robust IMU double integration. In: Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich, Germany: Springer, 2018. 621−636
    [83] Schubert D, Goll T, Demmel N, Usenko V, Stückler J, Cremers D. The TUM VI benchmark for evaluating visual-inertial odometry. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid, Spain: IEEE, 2018. 1680−1687
    [84] Chen C H, Zhao P J, Lu C X, Wang W, Markham A, Trigoni N. OxIOD: The dataset for deep inertial odometry. arXiv preprint arXiv: 1809.07491, 2018.
    [85] Liu F, Ge H Y, Tao D, Gao R P, Zhang Z. Smartphone-based pedestrian inertial tracking: Dataset, model, and deployment. IEEE Transactions on Instrumentation and Measurement, 2024, 73: Article No. 2504713
    [86] Shorten C, Khoshgoftaar T M. A survey on image data augmentation for deep learning. Journal of Big Data, 2019, 6(1): Article No. 60 doi: 10.1186/s40537-019-0197-0
    [87] Shorten C, Khoshgoftaar T M, Furht B. Text data augmentation for deep learning. Journal of Big Data, 2021, 8(1): Article No. 101 doi: 10.1186/s40537-021-00492-0
    [88] Wu Y, Chen Y P, Wang L J, Ye Y C, Liu Z C, Guo Y D, et al. Large scale incremental learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 374−382
    [89] Zou H, Lu X X, Jiang H, Xie L H. A fast and precise indoor localization algorithm based on an online sequential extreme learning machine. Sensors, 2015, 15(1): 1804−1824 doi: 10.3390/s150101804
    [90] Iman M, Arabnia H R, Rasheed K. A review of deep transfer learning and recent advancements. Technologies, 2023, 11(2): Article No. 40 doi: 10.3390/technologies11020040
    [91] Chen Kang-Xin, Zhao Jie-Yu, Chen Hao. A vector spherical convolutional network based on self-supervised learning. Acta Automatica Sinica, 2023, 49(6): 1354−1368 (in Chinese)
    [92] Gou J P, Yu B S, Maybank S J, Tao D C. Knowledge distillation: A survey. International Journal of Computer Vision, 2021, 129(6): 1789−1819 doi: 10.1007/s11263-021-01453-z
    [93] Blalock D W, Gonzalez Ortiz J J, Frankle J, Guttag J V. What is the state of neural network pruning? In: Proceedings of the 3rd Machine Learning and Systems. Austin, USA: mlsys.org, 2020. 129−146
    [94] Zhao W Y, Zhou D, Cao B Q, Zhang K, Chen J J. Adversarial modality alignment network for cross-modal molecule retrieval. IEEE Transactions on Artificial Intelligence, 2024, 5(1): 278−289 doi: 10.1109/TAI.2023.3254518
    [95] Freydin M, Segol N, Sfaradi N, Eweida A, Or B. Deep learning for inertial sensor alignment. IEEE Sensors Journal, 2024, 24(10): 17282−17290 doi: 10.1109/JSEN.2024.3384302
    [96] Aslan M F, Durdu A, Sabanci K. Visual-Inertial Image-Odometry Network (VIIONet): A Gaussian process regression-based deep architecture proposal for UAV pose estimation. Measurement, 2022, 194: Article No. 111030 doi: 10.1016/j.measurement.2022.111030
    [97] Nilsson J O, Händel P. Time synchronization and temporal ordering of asynchronous sensor measurements of a multi-sensor navigation system. In: Proceedings of the IEEE/ION Position, Location and Navigation Symposium. Indian Wells, USA: IEEE, 2010. 897−902
    [98] Chen C H, Rosa S, Miao Y S, Lu C X, Wu W, Markham A, et al. Selective sensor fusion for neural visual-inertial odometry. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 10542−10551
    [99] Li Shuai-Xin, Li Guang-Yun, Wang Li, Yang Xiao-Tian. LiDAR/IMU tightly coupled real-time localization method. Acta Automatica Sinica, 2021, 47(6): 1377−1389 (in Chinese)
    [100] Almalioglu Y, Turan M, Saputra M, Gusmão P, Markham A, Trigoni N. SelfVIO: Self-supervised deep monocular visual-inertial odometry and depth estimation. Neural Networks, 2022, 150: 119−136 doi: 10.1016/j.neunet.2022.03.005
    [101] Zhou P, Wang H, Gravina R, Sun F M. WIO-EKF: Extended Kalman filtering-based Wi-Fi and inertial odometry fusion method for indoor localization. IEEE Internet of Things Journal, 2024, 11(13): 23592−23603 doi: 10.1109/JIOT.2024.3386889
    [102] Li J Y, Pan X K, Huang G, Zhang Z Y, Wang N, Bao H J, et al. RD-VIO: Robust visual-inertial odometry for mobile augmented reality in dynamic environments. IEEE Transactions on Visualization and Computer Graphics, 2024, 30(10): 6941−6955 doi: 10.1109/TVCG.2024.3353263
Figures (9) / Tables (5)

Publication history
  • Received: 2024-04-22
  • Accepted: 2024-08-27
  • Published online: 2024-09-27