
Event Camera Based Synthetic Aperture Imaging

Yu Lei, Liao Wei, Zhou You-Long, Yang Wen, Xia Gui-Song

Citation: Yu Lei, Liao Wei, Zhou You-Long, Yang Wen, Xia Gui-Song. Event camera based synthetic aperture imaging. Acta Automatica Sinica, 2023, 49(7): 1393−1406 doi: 10.16383/j.aas.c200388

doi: 10.16383/j.aas.c200388


Funds: Supported by National Natural Science Foundation of China (62271345, 61871297) and Fundamental Research Funds for the Central Universities of China (2042020kf0019)
More Information
    Author Bio:

    YU Lei  Associate professor at the School of Electronic Information, Wuhan University. His research interest covers sparse signal processing, image processing, and neuromorphic vision. Corresponding author of this paper. E-mail: ly.wd@whu.edu.cn

    LIAO Wei  Master student at the School of Electronic Information, Wuhan University. His main research interest is digital image processing. E-mail: 2016301200164@whu.edu.cn

    ZHOU You-Long  Master student at the School of Electronic Information, Wuhan University. His main research interest is intensity-image reconstruction based on event cameras. E-mail: zhouyl2019@whu.edu.cn

    YANG Wen  Professor at the School of Electronic Information, Wuhan University. His research interest covers image processing and machine vision, and multi-modal information sensing and fusion. E-mail: yangwen@whu.edu.cn

    XIA Gui-Song  Professor at the School of Computer Science, Wuhan University. His research interest covers computer vision, pattern recognition and intelligent systems, and their applications in remote sensing imaging. E-mail: guisong.xia@whu.edu.cn

  • Abstract: Synthetic aperture imaging (SAI) acquires information about a target from multiple viewing angles to emulate a camera with a large aperture and a shallow depth of field. The technique can therefore blur out occluders and image an occluded target. However, under dense occlusion and extreme lighting conditions, SAI with conventional cameras (SAI-C) cannot image the occluded target effectively, owing to the dense interference from the occluder and the limited dynamic range of the camera. Exploiting the low latency and high dynamic range of event cameras, this paper proposes an event camera based synthetic aperture imaging method. An event camera outputs asynchronous event data with extremely low latency, so the scene can be observed from an essentially continuous set of viewpoints, which removes the effect of dense interference, while its high dynamic range allows imaging under extreme lighting conditions. By analyzing the relationship between scene brightness changes and the events output by the event camera, the occluded target is reconstructed from the refocused events, realizing event camera based synthetic aperture imaging. Experimental results show that, compared with the conventional method, the proposed method markedly improves the contrast, sharpness, peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) of the reconstructed images under dense occlusion. Under extreme lighting conditions, the proposed method also effectively resolves over-/under-exposure and reconstructs clear images of the occluded target.
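    The method sketched in the abstract rests on two steps: refocusing every event onto a common reference view for a chosen focal depth, and accumulating the refocused events into an image in which the occluded target reinforces while the occluder's events scatter. The snippet below is a minimal illustration of that idea under simplifying assumptions (linear camera motion along the x-axis, a pinhole model, and a user-supplied camera_x(t) trajectory); it is not the authors' implementation.

        # Minimal sketch of event refocusing and accumulation (illustrative only).
        import numpy as np

        def refocus_and_accumulate(events, camera_x, focal_px, depth_m, height, width):
            """events: iterable of (x, y, t, polarity); camera_x(t): lateral camera
            position in metres relative to the reference view at time t."""
            acc = np.zeros((height, width), dtype=np.float32)
            for x, y, t, p in events:
                # Planar-parallax shift: a point at depth_m seen from camera offset b
                # appears at x + focal_px * b / depth_m in the reference view.
                b = camera_x(t)
                x_ref = int(round(x + focal_px * b / depth_m))
                if 0 <= x_ref < width:
                    acc[int(y), x_ref] += 1.0 if p > 0 else -1.0
            return acc

    Events fired by scene structure at the chosen focal depth align at the same reference pixel and add up, whereas events triggered by the occluder spread over many pixels, which is what allows the occluded target to be reconstructed from the refocused events.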
  • Fig.  1  Comparison of conventional camera based SAI and event camera based SAI. The first column illustrates the experimental scene and the object images. Columns 2, 3, and 4 compare conventional camera based SAI results with the proposed event camera based SAI results under dense occlusion, extremely high light, and extremely low light conditions

    Fig.  2  Conventional camera based SAI[4]
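    Figure 2 depicts the classical shift-and-average formulation of camera-array SAI [4]: each view is warped to a chosen focal plane and the warped frames are averaged, so structure on that plane stays sharp while occluders in front of it blur out. In the notation below (ours, for illustration), $I_i$ is the image of the $i$-th camera and $\Delta\mathbf{p}_i(d)$ its parallax-induced shift for focal depth $d$:

        $\hat{I}_d(\mathbf{x}) = \frac{1}{N} \sum_{i=1}^{N} I_i\big(\mathbf{x} + \Delta\mathbf{p}_i(d)\big)$

    Under dense occlusion, only a few of the $N$ samples at a given pixel actually see the target, which is one way to see why the averaged image loses contrast; this is the failure mode the event-based method is designed to avoid.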

    Fig.  3  Prototype of the synthetic aperture imaging system based on an event camera

    Fig.  4  An event camera generates events when brightness changes. The optical imaging system is equivalent to a low-pass filter: a sudden change of scene brightness is converted into a continuous brightness change inside the camera, and the event camera responds to this change by generating positive (bottom-right inset) or negative (top-right inset) events
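    Figure 4 reflects the standard event-generation model of a dynamic vision sensor (cf. [13]), which is also what makes intensity reconstruction from events possible: a pixel fires an event of polarity $p_k = \pm 1$ at time $t_k$ whenever its log-intensity has changed by a contrast threshold $C$ since its previous event,

        $\big|\log I(\mathbf{x}_k, t_k) - \log I(\mathbf{x}_k, t_{k-1})\big| \ge C, \qquad p_k = \operatorname{sign}\big(\log I(\mathbf{x}_k, t_k) - \log I(\mathbf{x}_k, t_{k-1})\big)$

    so that, conversely, the log-intensity at a pixel can be approximated from a starting value by summing the signed thresholds of its events:

        $\log \hat{I}(\mathbf{x}, t) \approx \log I(\mathbf{x}, t_0) + C \sum_{t_0 < t_k \le t} p_k$

    This textbook model is given here for orientation only; the fixed-threshold and adaptive-threshold reconstructions compared in Figs. 9 ~ 12 presumably differ in how the (possibly polarity-dependent, cf. Fig. 7) threshold used in this inverse step is chosen.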

    Fig.  5  Event stream generated by an event camera under extreme light conditions

    Fig.  6  Two different types of occluders (Compared with the dense bushes, the cardboard has very few gaps, and the spacing between the gaps is large)

    Fig.  7  Relationship between PSNR and the negative reconstruction threshold

    Fig.  8  Relationship between PSNR and the scale factor

    Fig.  9  Comparison of SAI results at different focus depths under dense bushes occlusion (The first row shows SAI-C results; the second row, reconstruction with fixed thresholds; the third row, reconstruction with adaptive thresholds)

    Fig.  10  Comparison of SAI results under dense bushes occlusion (The first and second rows correspond to the geometric object and the teddy bear, respectively)

    Fig.  11  Comparison of SAI results under occlusions of different densities (The first row corresponds to extremely dense occlusion, the second row to normal dense occlusion, and the third row to sparse occlusion)

    Fig.  12  Comparison of SAI results under extreme light conditions (The first and second rows correspond to the geometric object and the teddy bear under extremely high light; the third and fourth rows correspond to the geometric object and the teddy bear under extremely low light)

    Table  1  Quantitative comparison of multi-depth SAI results under dense bushes occlusion

    Focus depth (m) | Method                            | PSNR (dB) | SSIM
    0.4             | Conventional method               | 16.72     | 0.2167
    0.4             | Fixed-threshold reconstruction    | 17.73     | 0.2919
    0.4             | Adaptive-threshold reconstruction | 21.13     | 0.3061
    0.8             | Conventional method               | 15.41     | 0.2693
    0.8             | Fixed-threshold reconstruction    | 15.42     | 0.2782
    0.8             | Adaptive-threshold reconstruction | 21.73     | 0.3136
    1.2             | Conventional method               | 17.62     | 0.0916
    1.2             | Fixed-threshold reconstruction    | 18.03     | 0.1759
    1.2             | Adaptive-threshold reconstruction | 25.22     | 0.4518

    Table  2  Quantitative comparison of SAI results under dense bushes occlusion

    Object type      | Method                            | PSNR (dB) | SSIM
    Geometric object | Conventional method               | 13.45     | 0.2313
    Geometric object | Fixed-threshold reconstruction    | 17.57     | 0.2671
    Geometric object | Adaptive-threshold reconstruction | 18.03     | 0.2646
    Teddy bear       | Conventional method               | 6.952     | 0.1439
    Teddy bear       | Fixed-threshold reconstruction    | 7.795     | 0.2175
    Teddy bear       | Adaptive-threshold reconstruction | 9.199     | 0.2334

    Table  3  Quantitative comparison of SAI results under occlusions of different densities

    Occlusion density          | Method                            | PSNR (dB) | SSIM
    Extremely dense occlusion  | Conventional method               | 11.20     | 0.1476
    Extremely dense occlusion  | Fixed-threshold reconstruction    | 17.14     | 0.1884
    Extremely dense occlusion  | Adaptive-threshold reconstruction | 18.52     | 0.1741
    Normal dense occlusion     | Conventional method               | 13.37     | 0.4028
    Normal dense occlusion     | Fixed-threshold reconstruction    | 18.54     | 0.1840
    Normal dense occlusion     | Adaptive-threshold reconstruction | 19.51     | 0.2037
    Sparse occlusion           | Conventional method               | 14.35     | 0.5508
    Sparse occlusion           | Fixed-threshold reconstruction    | 11.22     | 0.3384
    Sparse occlusion           | Adaptive-threshold reconstruction | 14.19     | 0.3912
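    For reference, the PSNR and SSIM values in Tables 1 ~ 3 are standard full-reference image-quality metrics computed against the un-occluded ground-truth image. A typical way to obtain them (using scikit-image; the paper does not state its tooling) is sketched below.

        # Hypothetical evaluation snippet; function and variable names are ours.
        import numpy as np
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        def evaluate(reconstruction: np.ndarray, ground_truth: np.ndarray):
            """Both images as float arrays in [0, 1] with identical shapes."""
            psnr = peak_signal_noise_ratio(ground_truth, reconstruction, data_range=1.0)
            ssim = structural_similarity(ground_truth, reconstruction, data_range=1.0)
            return psnr, ssim  # PSNR in dB; SSIM closer to 1 is better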
  • [1] Gershun A. The light field. Journal of Mathematics and Physics, 1939, 18(1-4): 51-151 doi: 10.1002/sapm193918151
    [2] Levoy M, Hanrahan P. Light field rendering. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques. New Orleans, USA: ACM Press, 1996. 31−42
    [3] Zhang X Q, Zhang Y N, Yang T, Yang Y H. Synthetic aperture photography using a moving camera-IMU system. Pattern Recognition, 2017, 62: 175-188 doi: 10.1016/j.patcog.2016.07.019
    [4] Vaish V, Wilburn B, Joshi N, Levoy M. Using plane + parallax for calibrating dense camera arrays. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington, USA: IEEE, 2004. 1: 2−9
    [5] Vaish V, Garg G, Talvala E, Antunez E, Wilburn B, Horowitz M, et al. Synthetic aperture focusing using a shear-warp factorization of the viewing transform. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. San Diego, USA: IEEE, 2005. 129
    [6] Xiang Yi-Yi, Liu Bin, Li Yan-Yan. Synthetic aperture imaging method based on confocal illumination. Acta Optica Sinica, 2020, 40(8): Article No. 0811003 (in Chinese)
    [7] Yang T, Zhang Y N, Tong X M, Zhang X Q, Yu R. A new hybrid synthetic aperture imaging model for tracking and seeing people through occlusion. IEEE Transactions on Circuits and Systems for Video Technology, 2013, 23(9): 1461-1475 doi: 10.1109/TCSVT.2013.2242553
    [8] Yang T, Zhang Y N, Tong X M, Zhang X Q, Yu R. Continuously tracking and see-through occlusion based on a new hybrid synthetic aperture imaging model. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Colorado Springs, USA: IEEE, 2011. 3409−3416
    [9] Joshi N, Avidan S, Matusik W, Kriegman D J. Synthetic aperture tracking: Tracking through occlusions. In: Proceedings of the 11th IEEE International Conference on Computer Vision. Rio de Janeiro, Brazil: IEEE, 2007. 1−8
    [10] Zhou Cheng-Hao, Wang Zhi-Le, Zhu Feng. Review on optical synthetic aperture imaging technique. Chinese Optics, 2017, 10(1): 25-38 doi: 10.3788/co.20171001.0025 (in Chinese)
    [11] Pei Z, Li Y W, Ma M, Li J, Leng C C, Zhang X Q, et al. Occluded-object 3D reconstruction using camera array synthetic aperture imaging. Sensors, 2019, 19(3): Article No. 607
    [12] Pei Z, Zhang Y N, Chen X D, Yang Y H. Synthetic aperture imaging using pixel labeling via energy minimization. Pattern Recognition, 2013, 46(1): 174-187 doi: 10.1016/j.patcog.2012.06.014
    [13] Lichtsteiner P, Posch C, Delbruck T. A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 2008, 43(2): 566-576 doi: 10.1109/JSSC.2007.914337
    [14] Brandli C, Berner R, Yang M H, Liu S C, Delbruck T. A 240×180 130 dB 3 μs latency global shutter spatiotemporal vision sensor. IEEE Journal of Solid-State Circuits, 2014, 49(10): 2333-2341 doi: 10.1109/JSSC.2014.2342715
    [15] Isaksen A, McMillan L, Gortler S J. Dynamically reparameterized light fields. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. New Orleans, USA: ACM Press, 2000. 297−306
    [16] Yang J C, Everett M, Buehler C, McMillan L. A real-time distributed light field camera. In: Proceedings of the 13th Eurographics Workshop on Rendering. Pisa, Italy: Eurographics Association, 2002. 77−86
    [17] Wilburn B, Joshi N, Vaish V, Talvala E V, Antunez E, Barth A, et al. High performance imaging using large camera arrays. In: Proceedings of ACM SIGGRAPH Papers. Los Angeles, USA: Association for Computing Machinery, 2005. 765−776
    [18] Vaish V, Levoy M, Szeliski R, Zitnick C L, Kang S B. Reconstructing occluded surfaces using synthetic apertures: Stereo, focus and robust measures. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. New York, USA: IEEE, 2006. 2331−2338
    [19] Pei Z, Zhang Y N, Yang T, Zhang X W, Yang Y H. A novel multi-object detection method in complex scene using synthetic aperture imaging. Pattern Recognition, 2012, 45(4): 1637-1658 doi: 10.1016/j.patcog.2011.10.003
    [20] Maqueda A I, Loquercio A, Gallego G, García N, Scaramuzza D. Event-based vision meets deep learning on steering prediction for self-driving cars. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 5419−5427
    [21] Zhu A Z, Atanasov N, Daniilidis K. Event-based visual inertial odometry. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 5816−5824
    [22] Vidal A R, Rebecq H, Horstschaefer T, Scaramuzza D. Ultimate SLAM? Combining events, images, and IMU for robust visual SLAM in HDR and high-speed scenarios. IEEE Robotics and Automation Letters, 2018, 3(2): 994-1001 doi: 10.1109/LRA.2018.2793357
    [23] Kim H, Leutenegger S, Davison A J. Real-time 3D reconstruction and 6-DoF tracking with an event camera. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, the Netherlands: Springer, 2016. 349−364
    [24] Cohen G, Afshar S, Morreale B, Bessell T, Wabnitz A, Rutten M, et al. Event-based sensing for space situational awareness. The Journal of the Astronautical Sciences, 2019, 66(2): 125-141 doi: 10.1007/s40295-018-00140-5
    [25] Barua S, Miyatani Y, Veeraraghavan A. Direct face detection and video reconstruction from event cameras. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision. Lake Placid, USA: IEEE, 2016. 1−9
    [26] Watkins Y, Thresher A, Mascarenas D, Kenyon G T. Sparse coding enables the reconstruction of high-fidelity images and video from retinal spike trains. In: Proceedings of the International Conference on Neuromorphic Systems. Knoxville, USA: ACM, 2018. Article No. 8
    [27] Scheerlinck C, Barnes N, Mahony R. Continuous-time intensity estimation using event cameras. In: Proceedings of the 14th Asian Conference on Computer Vision. Perth, Australia: Springer, 2018. 308−324
    [28] Rebecq H, Ranftl R, Koltun V, Scaramuzza D. Events-to-video: Bringing modern computer vision to event cameras. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 3852−3861
    [29] Scheerlinck C, Rebecq H, Gehrig D, Barnes N, Mahony R E, Scaramuzza D. Fast image reconstruction with an event camera. In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision. Snowmass, USA: IEEE, 2020. 156−163
    [30] Wang L, Mostafavi I S M, Ho Y S, Yoon K J. Event-based high dynamic range image and very high frame rate video generation using conditional generative adversarial networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 10073−10082
    [31] Goodman J W. Introduction to Fourier Optics. Colorado: Roberts and Company Publishers, 2005.
    [32] Hartley R, Zisserman A. Multiple View Geometry in Computer Vision. Cambridge: Cambridge University Press, 2003.
    [33] Zhang Z. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334 doi: 10.1109/34.888718
    [34] Wang L, Kim T K, Yoon K J. EventSR: From asynchronous events to image reconstruction, restoration, and super-resolution via end-to-end adversarial learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 8312−8322
    [35] Li H M, Li G Q, Shi L P. Super-resolution of spatiotemporal event-stream image. Neurocomputing, 2019, 335: 206-214 doi: 10.1016/j.neucom.2018.12.048
    [36] Mostafavi I S M, Choi J, Yoon K J. Learning to super resolve intensity images from events. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 2765−2773
Publication History
  • Received: 2020-06-17
  • Accepted: 2020-09-14
  • Published online: 2023-05-31
  • Issue date: 2023-07-20
