2020 Impact Factor (CJCR): 2.624

Indexed in:
  • Chinese Core Journals
  • EI
  • China Science and Technology Core Journals
  • Scopus
  • CSCD
  • Science Abstracts (UK)


Event-based Continuous Optical Flow Estimation

Fu Jing-Yi, Yu Lei, Yang Wen, Lu Xin

Citation: Fu Jing-Yi, Yu Lei, Yang Wen, Lu Xin. Event-based Continuous Optical Flow Estimation. Acta Automatica Sinica, 2021, 47(x): 1−12. doi: 10.16383/j.aas.c210242


doi: 10.16383/j.aas.c210242

Event-based Continuous Optical Flow Estimation

Funds: Supported by the National Natural Science Foundation of China (61871297), the Fundamental Research Funds for the Central Universities of China (2042020kf0019), and a project of the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing
More Information
    Author Bio:

    FU Jing-Yi Master student at the School of Electronic Information, Wuhan University. Her research interest covers digital image processing.

    YU Lei Associate Professor at the School of Electronic Information, Wuhan University. His research interests include sparse signal processing, image processing, and neuromorphic vision. Corresponding author of this paper.

    YANG Wen Professor at the School of Electronic Information, Wuhan University. His research interests include image processing and machine vision, and multi-modal information sensing and fusion.

    LU Xin Lecturer at the School of Electronic Information, Wuhan University. Her research interests include SAR image processing and interpretation.

  • Abstract: Event cameras image brightness changes in a scene and output an asynchronous event stream with extremely low latency, and are therefore little affected by motion blur. They can thus be used to estimate optical flow in high-speed motion scenes. Based on the brightness constancy assumption and the event generation model, this paper exploits the low latency of the event stream and fuses it with motion-blurred intensity frames to propose an event-camera-based continuous optical flow estimation algorithm, improving optical flow accuracy in high-speed scenes. Experimental results show that, compared with existing event-based optical flow algorithms, the proposed algorithm improves the average endpoint error (AEE), average angular error (AAE), and mean squared error (MSE) by 11%, 45%, and 8%, respectively. In high-speed scenes, the proposed algorithm accurately reconstructs the continuous optical flow of fast-moving targets, preserving estimation accuracy in the presence of motion blur.
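The three error metrics reported in the abstract can be sketched as follows. This is an illustrative NumPy implementation, not the authors' evaluation code; in particular, the exact normalization behind the AEE figures reported in % may differ.

```python
import numpy as np

def flow_errors(flow_est, flow_gt):
    """Illustrative AEE / AAE / MSE between two dense flow fields of shape (H, W, 2).

    AEE: mean Euclidean distance between flow vectors (endpoint error).
    AAE: mean angular error in degrees, using the common 3D formulation
         in which each 2D flow (u, v) is lifted to the vector (u, v, 1).
    MSE: mean squared error over all flow components.
    """
    diff = flow_est - flow_gt
    aee = np.mean(np.linalg.norm(diff, axis=-1))
    u1, v1 = flow_est[..., 0], flow_est[..., 1]
    u2, v2 = flow_gt[..., 0], flow_gt[..., 1]
    cos_ang = (u1 * u2 + v1 * v2 + 1.0) / np.sqrt(
        (u1 ** 2 + v1 ** 2 + 1.0) * (u2 ** 2 + v2 ** 2 + 1.0))
    aae = np.degrees(np.mean(np.arccos(np.clip(cos_ang, -1.0, 1.0))))
    mse = np.mean(diff ** 2)
    return aee, aae, mse
```

For example, a constant one-pixel horizontal offset between estimate and ground truth yields AEE = 1 and AAE = 45° under this formulation.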
  • Fig.  1  Comparison of traditional camera based optical flow (OF) and event camera based OF. (a) Sample image frames acquired by a traditional camera; (b) Optical flow estimated by the classical Horn-Schunck algorithm [10]; (c) The event stream output by an event camera; (d) Optical flow estimated by the proposed EDI-CLG algorithm.
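For reference, the frame-based Horn-Schunck baseline [10] shown in (b) can be sketched as below. This is a minimal NumPy version with illustrative parameter choices, not the implementation used in the experiments.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames.

    Iterates u = u_bar - Ix (Ix u_bar + Iy v_bar + It) / (alpha^2 + Ix^2 + Iy^2)
    (and likewise for v), where u_bar, v_bar are local flow averages and
    alpha weighs the smoothness term.
    """
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    Ix = np.gradient(I1, axis=1)   # horizontal spatial derivative
    Iy = np.gradient(I1, axis=0)   # vertical spatial derivative
    It = I2 - I1                   # temporal derivative

    def local_mean(f):
        # Standard Horn-Schunck weighted neighbourhood average
        # (1/6 for the 4-neighbours, 1/12 for the diagonals).
        p = np.pad(f, 1, mode='edge')
        return ((p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 6.0
                + (p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:]) / 12.0)

    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar, v_bar = local_mean(u), local_mean(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

As the figure illustrates, this purely frame-based scheme degrades when the input frames are motion-blurred, which is what motivates fusing event data.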

    Fig.  2  DAVIS240 datasets. (a) Brightness image and corresponding event frame of translBoxes dataset; (b) Brightness image and corresponding event frame of rotDisk dataset; (c) Brightness image and corresponding event frame of translSin dataset.

    Fig.  3  Relationship between optical flow error and regularisation parameter $ \alpha $ on (a) the translBoxes dataset; (b) the rotDisk dataset; (c) the translSin dataset.

    Fig.  4  Comparison of optical flow results on DAVIS240 datasets. (a) Groundtruth; (b) The proposed EDI-HS method; (c) The proposed EDI-CLG method; (d) The DAVIS-OF method; (e) The DVS-CM method; (f) The DVS-LP method.

    Fig.  5  Comparison of optical flow results on motion blur datasets. (a) Brightness image with motion blur; (b) Reconstructed clear brightness image using EDI method; (c) The proposed EDI-HS method; (d) The proposed EDI-CLG method; (e) The DVS-CM method; (f) The DVS-LP method.
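The EDI reconstruction shown in (b) follows the Event-based Double Integral model of Pan et al. [38]: the blurry frame is the temporal average of the latent images over the exposure, and each latent image is related to the one at a reference time by the exponentiated integral of events. A schematic NumPy version (function name and discretization are illustrative, and the contrast threshold c is assumed known):

```python
import numpy as np

def edi_latent_image(blurry, event_integrals, c=0.2):
    """Schematic EDI (Event-based Double Integral) latent-image recovery.

    blurry:          (H, W) frame, modelled as the average of the latent
                     images over the exposure time.
    event_integrals: (N, H, W) per-pixel sums of signed event polarities
                     from the reference time f to N sample times t_i that
                     discretize the exposure.
    c:               event contrast threshold.

    Model: L(t_i) = L(f) * exp(c * E_i), hence
           blurry  = mean_i L(t_i) = L(f) * mean_i exp(c * E_i),
    so the sharp frame L(f) follows by division.
    """
    ratios = np.exp(c * event_integrals)     # (N, H, W)
    denom = np.mean(ratios, axis=0)          # discretized double integral
    return blurry / np.maximum(denom, 1e-8)  # latent frame at time f
```

With no events the denominator is 1 and the blurry frame is returned unchanged, consistent with a static scene producing no blur.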

    Fig.  6  Continuous optical flow error comparison. (a) The average endpoint error of EDI-CLG before improvement; (b) The average angular error of EDI-CLG before improvement; (c) Comparison of the average endpoint error between the improved EDI-CLG and DAVIS-OF; (d) Comparison of the average angular error between the improved EDI-CLG and DAVIS-OF.

    Fig.  7  Comparison of continuous optical flow results between EDI-CLG algorithm and DAVIS-OF algorithm. (a) Groundtruth; (b) The DAVIS-OF method; (c) The result of four continuous optical flow calculations within the exposure time of a frame using the proposed EDI-CLG method.

    Table  1  Optical flow error on DAVIS240 datasets (the most accurate and second most accurate algorithms are bolded and underlined, respectively)

    Algorithm   AEE (%)                                   AAE (°)                                  MSE

    Dataset: translBoxes
    DVS-CM      43.65±27.15                               21.46±32.86                              39.94
    DVS-LP      124.78±92.05                              19.66±13.71                              81.03
    DAVIS-OF    31.20±3.18                                17.29±7.18                               15.57
    EDI-HS      $\underline{18.65}\pm\underline{2.92}$    $\underline{5.13}\pm\underline{4.72}$    17.86
    EDI-CLG     ${\bf{18.01}}\pm{\bf{2.65}}$              ${\bf{4.79}}\pm{\bf{3.05}}$              $\underline{16.77}$

    Dataset: rotDisk
    DVS-CM      54.26±28.30                               34.39±25.88                              40.75
    DVS-LP      104.63±97.15                              20.76±14.17                              77.25
    DAVIS-OF    ${\bf{33.94}}\pm{\bf{17.02}}$             ${\bf{13.07}}\pm{\bf{8.58}}$             14.30
    EDI-HS      42.93±20.91                               14.87±12.83                              33.10
    EDI-CLG     $\underline{42.44}\pm\underline{20.86}$   $\underline{13.79}\pm\underline{10.52}$  $\underline{33.02}$

    Dataset: translSin
    DVS-CM      91.96±9.95                                43.16±39.09                              85.41
    DVS-LP      107.68±70.04                              69.53±30.82                              94.53
    DAVIS-OF    84.78±61.22                               56.75±41.53                              $\underline{62.61}$
    EDI-HS      $\underline{75.74}\pm51.69$               $\underline{30.14}\pm\underline{9.98}$   72.96
    EDI-CLG     ${\bf{72.45}}\pm\underline{44.12}$        ${\bf{28.53}}\pm{\bf{4.97}}$             35.28

    Table  2  The comparison of running time (the fastest and second fastest algorithms are bolded and underlined, respectively)

    Algorithm   Average runtime per frame (s)
    DVS-CM      206.85
    DVS-LP      5.29
    DAVIS-OF    ${\bf{0.52}}$
    EDI-HS      $\underline{0.61}$
    EDI-CLG     0.63
  • [1] Mitrokhin A, Fermüller C, Parameshwara C, et al. Event-based moving object detection and tracking. In: Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid, Spain: IEEE, 2018. 1−9.
    [2] Jiang Zhi-Jun, Yi Hua-Rong. A feature tracking method based on image pyramid optical flow. Geomatics and Information Science of Wuhan University, 2007, 32(8): 680-683.
    [3] Liu X, Zhao G, Yao J, et al. Background subtraction based on low-rank and structured sparse decomposition. IEEE Transactions on Image Processing, 2015, 24(8): 2502-2514. doi: 10.1109/TIP.2015.2419084
    [4] Zhu Xuan, Wang Lei, Zhang Chao, et al. Moving object detection based on continuous constraint background model subtraction. Computer Science, 2019(6): 317-321.
    [5] Vidal A R, Rebecq H, Horstschaefer T, et al. Ultimate SLAM? Combining events, images, and IMU for robust visual SLAM in HDR and high speed scenarios. IEEE Robotics and Automation Letters, 2018, 3(2): 994-1001. doi: 10.1109/LRA.2018.2793357
    [6] Leutenegger S, Lynen S, Bosse M, et al. Keyframe-based visual–inertial odometry using nonlinear optimization. The International Journal of Robotics Research, 2014, 34(3): 314-334.
    [7] Antink C H, Singh T, Singla P, et al. Advanced Lucas Kanade optical flow for deformable image registration. Journal of Critical Care, 2012, 27(3): e14.
    [8] Kostler H, Ruhnau K, Wienands R. Multigrid solution of the optical flow system using a combined diffusion and curvature based regularizer. Numerical Linear Algebra with Applications, 2010, 15(2-3): 201-218.
    [9] Ishiyama H, Okatani T, Deguchi K. High-speed and high-precision optical flow detection for real-time motion segmentation. In: Proceedings of SICE 2004 Annual Conference. Sapporo, Japan: IEEE, 2004, 2: 1202−1205.
    [10] Horn B, Schunck B G. Determining optical flow. Artificial Intelligence, 1981, 17(1-3): 185-203. doi: 10.1016/0004-3702(81)90024-2
    [11] Delbruck T. Neuromorphic vision sensing and processing. In: Proceedings of 2016 46th European Solid State Device Research Conference. Lausanne, Switzerland: IEEE, 2016. 7-14.
    [12] Ma Yan-Yang, Ye Zi-Hao, Liu Kun-Hua, et al. Location and mapping algorithms based on event cameras: a survey. Acta Automatica Sinica, 2020, 46: 1-11.
    [13] Hu Y, Liu S C, Delbruck T. v2e: From video frames to realistic DVS events. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). New York, USA: IEEE, 2021.
    [14] Benosman R, Ieng S H, Clercq C, et al. Asynchronous frameless event-based optical flow. Neural Networks, 2012, 27: 32-37.
    [15] Gallego G, Rebecq H, Scaramuzza D. A unifying contrast maximization framework for event cameras, with applications to motion, depth, and optical flow estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, Utah: IEEE, 2018. 3867−3876.
    [16] Benosman R, Clercq C, Lagorce X, et al. Event-based visual flow. IEEE Transactions on Neural Networks and Learning Systems, 2014, 25(2): 407-417. doi: 10.1109/TNNLS.2013.2273537
    [17] Jiang Meng, Liu Zhou, Yu Lei. Event camera denoising algorithm with low dimensional manifold constraints. Signal Processing, 2019, 35(10): 1753-1761.
    [18] Berner R, Brandli C, Yang M, et al. A 240×180 10mW 12us latency sparse-output vision sensor for mobile applications. In: Proceedings of 2013 Symposium on VLSI Circuits. Kyoto, Japan: IEEE, 2013. 186−187.
    [19] Lichtsteiner P, Posch C, Delbruck T. A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 2008, 43(2): 566-576. doi: 10.1109/JSSC.2007.914337
    [20] Son B, Suh Y, Kim S, et al. A 640×480 dynamic vision sensor with a 9μm pixel and 300Meps address-event representation. In: Proceedings of the 2017 IEEE International Solid-state Circuits Conference(ISSCC). San Francisco, CA, USA: IEEE, 2017. 66−67.
    [21] Almatrafi M, Hirakawa K. DAViS camera optical flow. IEEE Transactions on Computational Imaging, 2020, 6: 396-407. doi: 10.1109/TCI.2019.2948787
    [22] Lucas B D, Kanade T. An iterative image registration technique with an application to stereo vision. In: Proceedings of International Joint Conference on Artificial Intelligence. San Francisco, CA, USA: IEEE, 1981, 81(3): 674−679.
    [23] Black M J, Anandan P. The robust estimation of multiple motions: parametric and piecewise smooth flow fields. Computer Vision and Image Understanding, 1996, 3(1): 75-104.
    [24] Huang Bo, Yang Yong. An adaptive optical flow estimation method. Journal of Circuits and Systems, 2001, 6(4): 92-96.
    [25] Fortun D, Bouthemy P, Kervrann C. Optical flow modeling and computation: A survey. Computer Vision and Image Understanding, 2015, 134: 1-21. doi: 10.1016/j.cviu.2015.02.008
    [26] Gong D, Yang J, et al. From motion blur to motion flow: A deep learning solution for removing heterogeneous motion blur. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017. 2319−2328.
    [27] Jin M G, Hu Z, Favaro P. Learning to extract flawless slow motion from blurry videos. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Los Angeles, USA: IEEE, 2019. 8112−8121.
    [28] Liu P D, Janai J, et al. Self-supervised linear motion deblurring. IEEE Robotics and Automation Letters, 2020, 5(2): 2475-2482. doi: 10.1109/LRA.2020.2972873
    [29] Maqueda A I, Loquercio A, Gallego G, et al. Event-based vision meets deep learning on steering prediction for self-driving cars. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, Utah, USA: IEEE, 2018. 5419−5427.
    [30] Yang G, Ye Q, et al. Live demonstration: Real-time VI-SLAM with high-resolution event camera. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Los Angeles, USA: IEEE, 2019. 1707-1708.
    [31] Rueckauer B, Delbruck T. Evaluation of event-based algorithms for optical flow with ground-truth from inertial measurement sensor. Frontiers in Neuroscience, 2016, 10: 176-192.
    [32] Brosch T, Tschechne S, Neumann H. On event-based optical flow detection. Frontiers in Neuroscience, 2015, 9: 137-151.
    [33] Liu M, Delbruck T. Block-matching optical flow for dynamic vision sensors: algorithm and FPGA implementation. In: Proceedings of the 2017 IEEE International Symposium on Circuits and Systems (ISCAS). Baltimore, MD, USA: IEEE, 2017. 1−4.
    [34] Liu M, Delbruck T. ABMOF: A novel optical flow algorithm for dynamic vision sensors. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, Utah: IEEE, 2018. 1805−1815.
    [35] Barranco F, Fermuller C, Aloimonos Y. Bio-inspired motion estimation with event-driven sensors. In: Proceedings of International Work Conference on Artificial Neural Networks. Palma de Mallorca, Spain: IEEE, 2015, 2: 309−321.
    [36] Bardow P, Davison A J, Leutenegger S. Simultaneous optical flow and intensity estimation from an event camera. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016. 884−892.
    [37] Gehrig D, Rebecq H, Gallego G, et al. Asynchronous, photometric feature tracking using events and frames. In: Proceedings of Computer Vision – ECCV 2018. Lecture Notes in Computer Science. Munich, Germany: IEEE, 2018, 128: 601−618.
    [38] Pan L, Scheerlinck C, Yu X, et al. Bringing a blurry frame alive at high framerate with an event camera. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Los Angeles, USA: IEEE, 2019. 6813−6822.
    [39] Pan L, Liu M, Hartley R. Single image optical flow estimation with an event camera. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). New York, USA: IEEE, 2020. 1669−1678.
    [40] Baker S, Matthews I. Lucas-Kanade 20 years on: A unifying framework. International Journal of Computer Vision, 2004, 56(3): 221-255. doi: 10.1023/B:VISI.0000011205.11775.fd
    [41] Bruhn A, Weickert J, Schnorr C. Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods. International Journal of Computer Vision, 2005, 61(3): 211-231.
    [42] Niklaus S, Long M, Liu F. Video frame interpolation via adaptive separable convolution. In: Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 261−270.
Publication history
  • Received:  2021-03-26
  • Accepted:  2021-09-17
  • Published online:  2021-11-04
