A Vector Spherical Convolutional Network Based on Self-supervised Learning

Chen Kang-Xin, Zhao Jie-Yu, Chen Hao

Citation: Chen Kang-Xin, Zhao Jie-Yu, Chen Hao. A vector spherical convolutional network based on self-supervised learning. Acta Automatica Sinica, 2023, 49(6): 1354−1368 doi: 10.16383/j.aas.c220694

doi: 10.16383/j.aas.c220694
Funds: Supported by National Natural Science Foundation of China (62071260, 62006131) and Natural Science Foundation of Zhejiang Province (LZ22F020001, LQ21F020009)
    Author Bio:

    CHEN Kang-Xin Master student at the Faculty of Electrical Engineering and Computer Science, Ningbo University. His research interest covers deep learning and computer vision. E-mail: kxchenxy@outlook.com

    ZHAO Jie-Yu Professor at the Faculty of Electrical Engineering and Computer Science, Ningbo University. He received his bachelor and master degrees from Zhejiang University in 1985 and 1988, respectively, and his Ph.D. degree from Royal Holloway, University of London in 1995. His research interest covers deep learning and computer vision. Corresponding author of this paper. E-mail: zhao_jieyu@nbu.edu.cn

    CHEN Hao Ph.D. candidate at the Faculty of Electrical Engineering and Computer Science, Ningbo University. His research interest covers 3D reconstruction, pattern recognition, and machine learning. E-mail: 1901100014@nbu.edu.cn

  • Abstract: In 3D vision tasks, unknown rotations of 3D objects pose a challenge: many existing neural network frameworks have difficulty recognizing or segmenting a 3D object that has undergone an unknown rotation. To address this problem, a vector spherical convolutional network based on self-supervised learning is proposed to learn the rotation information of 3D objects and thereby improve performance on classification and segmentation tasks. First, the 3D point cloud signal is sampled spherically and mapped onto the unit sphere. Then, a vector spherical convolutional network extracts rotation features, while a randomly rotated copy of the point cloud is fed into a second vector spherical convolutional network of identical structure; the rotation information is learned through self-supervised training of the two networks. Finally, object classification and part segmentation experiments are conducted on randomly rotated 3D objects. The experiments show that, with randomly rotated test data, the proposed network improves classification accuracy on the ModelNet40 dataset by 75.75%, and segments markedly better on the ShapeNet dataset, with intersection over union (IoU) improved by 51.48%. A minimal sketch of the self-supervised training scheme is given below.
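The pipeline described in the abstract (a point cloud and a randomly rotated copy pass through two weight-sharing feature networks, and training regresses the applied rotation) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `FeatureNet` is a plain stand-in for the vector spherical convolutional network, and `random_rotation`, the regression head, and all hyperparameters are invented for the example.

```python
# Minimal self-supervised rotation-learning loop (illustrative sketch).
import torch
import torch.nn as nn

def random_rotation(batch: int) -> torch.Tensor:
    """Sample random 3D rotation matrices via QR decomposition of Gaussian matrices."""
    a = torch.randn(batch, 3, 3)
    q, r = torch.linalg.qr(a)
    # Fix column signs so Q is uniformly distributed, then force det(Q) = +1.
    q = q * torch.sign(torch.diagonal(r, dim1=-2, dim2=-1)).unsqueeze(1)
    return q * torch.linalg.det(q).sign().reshape(-1, 1, 1)

class FeatureNet(nn.Module):
    """Stand-in for the vector spherical convolutional network:
    maps a point cloud (B, N, 3) to a global feature (B, dim)."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        return self.mlp(pts).max(dim=1).values

net = FeatureNet()                      # shared by both branches (Siamese setup)
head = nn.Linear(2 * 128, 9)            # predicts the applied rotation, flattened
opt = torch.optim.Adam(list(net.parameters()) + list(head.parameters()), lr=1e-3)

for step in range(100):
    pts = torch.randn(8, 1024, 3)       # placeholder batch of point clouds
    R = random_rotation(8)              # the "unknown" rotation, used as supervision
    rotated = pts @ R.transpose(1, 2)   # rotate every point of every cloud
    feats = torch.cat([net(pts), net(rotated)], dim=-1)
    pred = head(feats).reshape(-1, 3, 3)
    loss = ((pred - R) ** 2).mean()     # regress the rotation (not projected to SO(3))
    opt.zero_grad(); loss.backward(); opt.step()
```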
  • Fig.  1  Training pipeline of the self-supervised vector spherical convolutional network

    Fig.  2  Vector spherical convolution interlayer calculation method

    Fig.  3  Canonical orientation experiment framework

    Fig.  4  PointNet architecture

    Fig.  5  Classification and segmentation experiment framework

    Fig.  6  Visualization results of the canonical orientation experiment on ModelNet40

    Fig.  7  Visualization results of the canonical orientation experiment on ShapeNet

    Fig.  8  Visualization results of the part segmentation experiment

    Table  1  Table of common symbols

    No. | Symbol | Description
    1 | $(a_i, b_j, c_k)$ | Spherical grid coordinates
    2 | $(\alpha_n, \beta_n, h_n)$ | Point cloud expressed in spherical coordinates
    3 | $S^2$ | Unit sphere
    4 | $SO(3)$ | 3D rotation group
    5 | $g$ | Operation of the CON network
    6 | $f$ | Signal on $S^2$ or $SO(3)$
    7 | $L_R$ | Rotation operator
    8 | $\psi$ | Convolution kernel
    9 | ${\boldsymbol{h}}$ | Vector neuron
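Using the notation of Table 1, the spherical sampling step can be illustrated as below: each point is converted to spherical coordinates $(\alpha_n, \beta_n, h_n)$ and assigned to a grid cell $(a_i, b_j)$ on $S^2$. The equiangular grid, the bandwidth parameter, and the choice of keeping the largest radius per cell are assumptions made for this sketch, not details taken from the paper.

```python
# Hedged sketch: map a point cloud to a discretized signal on the unit sphere.
import numpy as np

def spherical_sample(points: np.ndarray, bw: int = 32) -> np.ndarray:
    """Map an (N, 3) point cloud to a (2*bw, 2*bw) signal on S^2."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    h = np.linalg.norm(points, axis=1)                               # radius h_n
    beta = np.arccos(np.clip(z / np.maximum(h, 1e-12), -1.0, 1.0))   # polar angle beta_n
    alpha = np.mod(np.arctan2(y, x), 2 * np.pi)                      # azimuth alpha_n
    i = np.minimum((alpha / (2 * np.pi) * 2 * bw).astype(int), 2 * bw - 1)
    j = np.minimum((beta / np.pi * 2 * bw).astype(int), 2 * bw - 1)
    grid = np.zeros((2 * bw, 2 * bw))
    np.maximum.at(grid, (i, j), h)                                   # keep farthest point per cell
    return grid

signal = spherical_sample(np.random.randn(1024, 3))
print(signal.shape)  # (64, 64)
```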

    Table  2  Classification accuracy (%) (NR: no rotation; AR: arbitrary rotation; settings are "train/test", so NR/AR means trained without rotation and tested under arbitrary rotations; "–" means not reported)

    Method | NR/NR | NR/AR | AR/AR
    PointNet [43] | 88.45 | 12.47 | 21.92
    PointNet++ [46] | 89.82 | 21.35 | 31.72
    Spherical CNN [11] | 81.73 | 55.62 | –
    LDGCNN [47] | 92.91 | 17.82 | –
    SO-Net [48] | 94.44 | 9.64 | –
    PRIN [49] | 80.13 | 70.35 | 73.32
    SPRIN [14] | 86.01 | 86.13 | 86.21
    CON+PointNet | 86.79 | 88.22 | 88.27

    Table  3  Part segmentation experimental results, IoUs (%) (all columns are measured under random rotation except the last two, which report the averages without rotation)

    Method | avg. inst. | avg. cls. | airplane | bag | cap | car | chair | earphone | guitar | knife | lamp | laptop | motorbike | mug | pistol | rocket | skateboard | table | avg. inst. (no rot.) | avg. cls. (no rot.)
    PointNet [43] | 31.30 | 29.38 | 19.90 | 46.25 | 43.27 | 20.81 | 27.04 | 15.63 | 34.72 | 34.64 | 42.10 | 36.40 | 19.25 | 49.88 | 33.30 | 22.07 | 25.71 | 29.74 | 83.15 | 78.95
    PointNet++ [46] | 36.66 | 35.00 | 21.90 | 51.70 | 40.06 | 23.13 | 43.03 | 9.65 | 38.51 | 40.91 | 45.56 | 41.75 | 18.18 | 53.42 | 42.19 | 28.51 | 38.92 | 36.57 | 84.63 | 81.52
    RS-Net [50] | 50.38 | 32.99 | 38.29 | 15.45 | 53.78 | 33.49 | 60.83 | 31.27 | 9.50 | 43.48 | 57.37 | 9.86 | 20.37 | 25.74 | 20.63 | 11.51 | 30.14 | 66.11 | 84.92 | 81.41
    PCNN [51] | 28.80 | 31.72 | 23.46 | 46.55 | 35.25 | 22.62 | 24.27 | 16.67 | 32.89 | 39.80 | 52.18 | 38.60 | 18.54 | 48.90 | 27.83 | 27.46 | 27.60 | 24.88 | 85.13 | 81.80
    SPLATNet [52] | 32.21 | 38.25 | 34.58 | 68.10 | 46.96 | 19.36 | 16.25 | 24.72 | 88.39 | 52.99 | 49.21 | 31.83 | 17.06 | 48.56 | 21.20 | 34.98 | 28.99 | 28.86 | 84.97 | 82.34
    DGCNN [53] | 43.79 | 30.87 | 24.84 | 51.29 | 36.69 | 20.33 | 30.07 | 27.86 | 38.00 | 45.50 | 42.29 | 34.84 | 20.51 | 48.74 | 26.25 | 26.88 | 26.95 | 28.85 | 85.15 | 82.33
    SO-Net [48] | 26.21 | 14.37 | 21.08 | 8.46 | 1.87 | 11.78 | 27.81 | 11.99 | 8.34 | 15.01 | 43.98 | 1.81 | 7.05 | 8.78 | 4.41 | 6.38 | 16.10 | 34.98 | 84.83 | 81.16
    SpiderCNN [54] | 31.81 | 35.46 | 22.28 | 53.07 | 54.22 | 2.57 | 28.86 | 23.17 | 35.85 | 42.72 | 44.09 | 55.44 | 19.23 | 48.93 | 28.65 | 25.61 | 31.36 | 31.32 | 85.33 | 82.40
    SHOT+PointNet [55] | 32.88 | 31.46 | 37.42 | 47.30 | 49.53 | 27.71 | 28.09 | 16.34 | 9.79 | 27.66 | 37.33 | 25.22 | 16.31 | 50.91 | 25.07 | 21.29 | 43.10 | 40.27 | 32.75 | 31.25
    CGF+PointNet [56] | 50.13 | 46.26 | 50.97 | 70.34 | 60.44 | 25.51 | 59.08 | 33.29 | 50.92 | 71.64 | 40.77 | 31.91 | 23.93 | 63.17 | 27.73 | 30.99 | 47.25 | 52.06 | 50.13 | 46.31
    RIConv [57] | 79.31 | 74.60 | 78.64 | 78.70 | 73.19 | 68.03 | 86.82 | 71.87 | 89.36 | 82.95 | 74.70 | 76.42 | 56.58 | 88.44 | 72.16 | 51.63 | 66.65 | 77.47 | 79.55 | 74.43
    Kim et al. [58] | 79.56 | 74.41 | 77.53 | 73.43 | 76.95 | 66.13 | 87.22 | 75.44 | 87.42 | 80.71 | 78.44 | 71.21 | 51.09 | 90.76 | 73.69 | 53.86 | 68.10 | 78.62 | 79.92 | 74.69
    Li et al. [59] | 82.17 | 78.78 | 81.49 | 80.07 | 85.55 | 74.83 | 88.62 | 71.34 | 90.38 | 82.82 | 80.34 | 81.64 | 68.87 | 92.23 | 74.51 | 54.08 | 74.59 | 79.11 | 82.47 | 79.40
    PRIN [49] | 71.20 | 66.75 | 69.29 | 55.90 | 71.49 | 56.31 | 78.44 | 65.92 | 86.01 | 73.58 | 66.97 | 59.29 | 47.56 | 81.47 | 71.99 | 49.02 | 64.70 | 70.12 | 72.04 | 68.39
    SPRIN [14] | 82.67 | 79.50 | 82.07 | 82.01 | 76.48 | 75.53 | 88.17 | 71.45 | 90.51 | 83.95 | 79.22 | 83.83 | 72.59 | 93.24 | 78.99 | 58.85 | 74.77 | 80.31 | 82.59 | 79.31
    CON+PointNet | 84.39 | 80.86 | 82.27 | 79.14 | 85.88 | 76.44 | 90.42 | 73.24 | 90.96 | 82.81 | 82.99 | 95.64 | 69.51 | 91.93 | 79.74 | 55.60 | 75.33 | 81.81 | 84.06 | 81.22
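The two summary columns in Tables 3 and 5 follow the usual ShapeNet convention (assumed here for clarity, since the evaluation code is not shown in this excerpt): "avg. inst." averages IoU over all shape instances, while "avg. cls." first averages within each of the 16 categories and then across categories.

```python
# Illustration of the two IoU averages used in the part segmentation tables.
import numpy as np

def summarize_ious(ious: np.ndarray, labels: np.ndarray) -> tuple:
    """ious: per-shape IoU; labels: category index of each shape."""
    avg_inst = ious.mean()                       # instance average
    avg_cls = np.mean([ious[labels == c].mean()  # class average
                       for c in np.unique(labels)])
    return float(avg_inst), float(avg_cls)

ious = np.array([0.80, 0.60, 0.90])    # three shapes
labels = np.array([0, 0, 1])           # two categories
print(summarize_ious(ious, labels))    # approximately (0.767, 0.800)
```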

    Table  4  Classification accuracy in combination with mainstream networks (%) (NR/AR notation as in Table 2)

    Method | NR/NR | NR/AR | AR/AR
    PointNet [43] | 88.45 | 12.47 | 21.92
    PointNet++ [46] | 89.82 | 21.35 | 31.72
    DGCNN [53] | 90.20 | 16.36 | 29.73
    CON+DGCNN | 88.32 | 89.86 | 89.93
    CON+PointNet++ | 87.27 | 89.21 | 89.30
    CON+PointNet | 86.79 | 88.22 | 88.27
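The "CON+X" rows pair the proposed network with an off-the-shelf backbone. One plausible reading, consistent with the canonical orientation experiments of Figs. 3 and 5 but still an assumption about the paper's pipeline, is that the rotation estimated by the self-supervised CON network is undone before the backbone sees the cloud:

```python
# Hypothetical composition behind "CON+PointNet" and friends.
import torch
import torch.nn as nn

def con_plus_backbone(pts: torch.Tensor, con: nn.Module, backbone: nn.Module) -> torch.Tensor:
    """pts: (B, N, 3); con returns (B, 3, 3) rotation estimates."""
    R = con(pts)          # estimate the pose of the rotated input
    canonical = pts @ R   # re-orient into the canonical pose (row-vector convention;
                          # whether R or its transpose applies depends on training)
    return backbone(canonical)
```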

    Table  5  Experimental results of part segmentation combined with mainstream networks, IoUs (%) (columns as in Table 3)

    Method | avg. inst. | avg. cls. | airplane | bag | cap | car | chair | earphone | guitar | knife | lamp | laptop | motorbike | mug | pistol | rocket | skateboard | table | avg. inst. (no rot.) | avg. cls. (no rot.)
    PointNet [43] | 31.30 | 29.38 | 19.90 | 46.25 | 43.27 | 20.81 | 27.04 | 15.63 | 34.72 | 34.64 | 42.10 | 36.40 | 19.25 | 49.88 | 33.30 | 22.07 | 25.71 | 29.74 | 83.15 | 78.95
    PointNet++ [46] | 36.66 | 35.00 | 21.90 | 51.70 | 40.06 | 23.13 | 43.03 | 9.65 | 38.51 | 40.91 | 45.56 | 41.75 | 18.18 | 53.42 | 42.19 | 28.51 | 38.92 | 36.57 | 84.63 | 81.52
    DGCNN [53] | 43.79 | 30.87 | 24.84 | 51.29 | 36.69 | 20.33 | 30.07 | 27.86 | 38.00 | 45.50 | 42.29 | 34.84 | 20.51 | 48.74 | 26.25 | 26.88 | 26.95 | 28.85 | 85.15 | 82.33
    CON+PointNet | 84.39 | 80.86 | 82.27 | 79.14 | 85.88 | 76.44 | 90.42 | 73.24 | 90.96 | 82.81 | 82.99 | 95.64 | 69.51 | 91.93 | 79.74 | 55.60 | 75.33 | 81.81 | 84.06 | 81.22
    CON+PointNet++ | 85.77 | 82.30 | 84.12 | 80.66 | 88.90 | 76.51 | 90.37 | 78.65 | 90.15 | 83.01 | 83.62 | 95.45 | 71.26 | 91.67 | 80.77 | 60.36 | 77.23 | 84.01 | 86.02 | 83.41
    CON+DGCNN | 85.21 | 81.36 | 83.71 | 79.02 | 86.91 | 74.21 | 93.22 | 74.43 | 91.90 | 82.31 | 84.24 | 96.53 | 70.22 | 90.86 | 81.37 | 58.28 | 76.96 | 83.27 | 85.73 | 82.62
  • [1] Piga N A, Onyshchuk Y, Pasquale G, Pattacini U, Natale L. ROFT: Real-time optical flow-aided 6D object pose and velocity tracking. IEEE Robotics and Automation Letters, 2022, 7(1): 159-166 doi: 10.1109/LRA.2021.3119379
    [2] Gao F, Sun Q, Li S, Li W, Li Y, Yu J, et al. Efficient 6D object pose estimation based on attentive multi-scale contextual information. IET Computer Vision, 2022, 16(7): 596-606 doi: 10.1049/cvi2.12101
    [3] Peng W, Yan J, Wen H, Sun Y. Self-supervised category-level 6D object pose estimation with deep implicit shape representation. In: Proceedings of AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI, 2022. 2082−2090
    [4] Huang W L, Hung C Y, Lin I C. Confidence-based 6D object pose estimation. IEEE Transactions on Multimedia, 2022, 24: 3025-3035 doi: 10.1109/TMM.2021.3092149
    [5] Li X, Weng Y, Yi L, Guibas L J, Abbott A L, Song S, et al. Leveraging SE(3) equivariance for self-supervised category-level object pose estimation from point clouds. In: Proceedings of Annual Conference on Neural Information Processing Systems. New York, USA: MIT Press, 2021. 15370−15381
    [6] Melzi S, Spezialetti R, Tombari F, Bronstein M M, Stefano L D, Rodolà E. GFrames: Gradient-based local reference frame for 3D shape matching. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2019. 4629−4638
    [7] Gojcic Z, Zhou C, Wegner J D, Wieser A. The perfect match: 3D point cloud matching with smoothed densities. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2019. 5545−5554
    [8] Hao Z, Zhang T, Chen M, Zhou K. RRL: Regional rotate layer in convolutional neural networks. In: Proceedings of AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI, 2022. 826−833
    [9] Esteves C, Allen-Blanchette C, Makadia A, Daniilidis K. Learning SO(3) equivariant representations with spherical CNNs. In: Proceedings of European Conference on Computer Vision. Berlin, DE: Springer, 2018. 52−68
    [10] Chen Y, Zhao J Y, Shi C W. Mesh convolution: A novel feature extraction method for 3D nonrigid object classification. IEEE Transactions on Multimedia, 2021, 23: 3098-3111 doi: 10.1109/TMM.2020.3020693
    [11] Cohen T S, Geiger M, Köhler J, Welling M. Spherical CNNs. In: Proceedings of International Conference on Learning Representations. Vancouver, CA: 2018. 1−15
    [12] Gerken J E, Carlsson O, Linander H, Ohlsson F, Petersson C, Persson D. Equivariance versus augmentation for spherical images. In: Proceedings of International Conference on Machine Learning. New York, USA: PMLR, 2022. 7404−7421
    [13] Cohen T, Welling M. Group equivariant convolutional networks. In: Proceedings of International Conference on Machine Learning. New York, USA: PMLR, 2016. 2990−2999
    [14] You Y, Lou Y, Shi R, Liu Q, Tai Y W, Ma L Z, et al. PRIN/SPRIN: On extracting point-wise rotation invariant features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(12): 9489-9502 doi: 10.1109/TPAMI.2021.3130590
    [15] Mitchel T W, Aigerman N, Kim V G, Kazhdan M. Möbius convolutions for spherical CNNs. In: Proceedings of ACM SIGGRAPH Annual Conference. New York, USA: ACM, 2022. 1−9
    [16] Mazzia V, Salvetti F, Chiaberge M. Efficient-CapsNet: Capsule network with self-attention routing. Scientific Reports, 2021, 11(1): 1-13 doi: 10.1038/s41598-020-79139-8
    [17] Hinton G E, Krizhevsky A, Wang S D. Transforming auto-encoders. In: Proceedings of International Conference on Artificial Neural Networks. Berlin, DE: Springer, 2011. 44−51
    [18] Sabour S, Frosst N, Hinton G E. Dynamic routing between capsules. In: Proceedings of Annual Conference on Neural Information Processing Systems. New York, USA: MIT Press, 2017. 3856−3866
    [19] Hinton G E, Sabour S, Frosst N. Matrix capsules with EM routing. In: Proceedings of International Conference on Learning Representations. Vancouver, CA: 2018. 16−30
    [20] Zhang Z, Xu Y, Yu J, Gao S H. Saliency detection in 360 videos. In: Proceedings of European Conference on Computer Vision. Berlin, DE: Springer, 2018. 488−503
    [21] Iqbal T, Xu Y, Kong Q Q, Wang W W. Capsule routing for sound event detection. In: Proceedings of European Signal Processing Conference. Piscataway, USA: IEEE, 2018. 2255−2259
    [22] Gu J, Tresp V. Improving the robustness of capsule networks to image affine transformations. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2020. 7285−7293
    [23] Gu J, Tresp V, Hu H. Capsule network is not more robust than convolutional network. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2021. 14309−14317
    [24] Venkataraman S R, Balasubramanian S, Sarma R R. Building deep equivariant capsule networks. In: Proceedings of International Conference on Learning Representations. Vancouver, CA: 2020. 1−12
    [25] Yao Hong-Ge, Dong Ze-Hao, Yu Jun, Bai Xiao-Jun. Fully overlapped handwritten number recognition and separation based on deep EM capsule network. Acta Automatica Sinica, 2022, 48(12): 2996-3005 doi: 10.16383/j.aas.c190849
    [26] Saha S, Ebel P, Zhu X X. Self-supervised multisensor change detection. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 1-10
    [27] Gong Y, Lai C I, Chung Y A, Glass J R. SSAST: Self-supervised audio spectrogram transformer. In: Proceedings of AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI, 2022. 10699−10709
    [28] Sun L, Zhang Z, Ye J, Peng H, Zhang J W, Su S, et al. A self-supervised mixed-curvature graph neural network. In: Proceedings of AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI, 2022. 4146−4155
    [29] Zbontar J, Jing L, Misra I, LeCun Y, Deny S. Barlow twins: Self-supervised learning via redundancy reduction. In: Proceedings of International Conference on Machine Learning. New York, USA: PMLR, 2021. 12310−12320
    [30] Becker S, Hinton G E. Self-organizing neural network that discovers surfaces in random-dot stereograms. Nature, 1992, 355(6356): 161-163 doi: 10.1038/355161a0
    [31] Goldberger J, Hinton G E, Roweis S, Salakhutdinov R. Neighbourhood components analysis. In: Proceedings of Annual Conference on Neural Information Processing Systems. New York, USA: MIT Press, 2004. 513−520
    [32] Bromley J, Bentz J W, Bottou L, Guyon I, LeCun Y, Moore C, et al. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 1993, 07(04): 669-688 doi: 10.1142/S0218001493000339
    [33] Hadsell R, Chopra S, Lecun Y. Dimensionality reduction by learning an invariant mapping. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2006. 1735−1742
    [34] Chopra S, Hadsell R, Lecun Y. Learning a similarity metric discriminatively, with application to face verification. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2005. 539−546
    [35] Spezialetti R, Salti S, Stefano L D. Learning an effective equivariant 3D descriptor without supervision. In: Proceedings of International Conference on Computer Vision. Piscataway, USA: IEEE, 2019. 6400−6409
    [36] Driscoll J R, Healy D M. Computing Fourier transforms and convolutions on the 2-sphere. Advances in Applied Mathematics, 1994, 15(2): 202-250 doi: 10.1006/aama.1994.1008
    [37] Thomas F. Approaching dual quaternions from matrix algebra. IEEE Transactions on Robotics, 2014, 30(05): 1037-1048 doi: 10.1109/TRO.2014.2341312
    [38] Busam B, Birdal T, Navab N. Camera pose filtering with local regression geodesics on the Riemannian manifold of dual quaternions. In: Proceedings of International Conference on Computer Vision Workshops. Piscataway, USA: IEEE, 2017. 2436−2445
    [39] Cohen T S, Geiger M, Weiler M. A general theory of equivariant CNNs on homogeneous spaces. In: Proceedings of Annual Conference on Neural Information Processing Systems. New York, USA: MIT Press, 2019. 9142−9153
    [40] Zhao Y, Birdal T, Lenssen J E, Menegatti E, Guibas L J, Tombari F, et al. Quaternion equivariant capsule networks for 3D point clouds. In: Proceedings of European Conference on Computer Vision. Berlin, DE: Springer, 2020. 1−19
    [41] Kondor R, Trivedi S. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In: Proceedings of International Conference on Machine Learning. New York, USA: PMLR, 2018. 2747−2755
    [42] Lenssen J E, Fey M, Libuschewski P. Group equivariant capsule networks. In: Proceedings of Annual Conference on Neural Information Processing Systems. New York, USA: MIT Press, 2018. 8858−8867
    [43] Qi C R, Su H, Mo K, Guibas L J. PointNet: Deep learning on point sets for 3D classification and segmentation. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2017. 77−85
    [44] Chang A X, Funkhouser T A, Guibas L J, Hanrahan P, Huang Q X, Li Z, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv: 1512.03012, 2015.
    [45] Kingma D P, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980, 2014.
    [46] Qi C R, Yi L, Su H, Guibas L J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In: Proceedings of Annual Conference on Neural Information Processing Systems. New York, USA: MIT Press, 2017. 5099−5108
    [47] Zhang K, Hao M, Wang J, Silva C W, Fu C L. Linked dynamic graph CNN: Learning on point cloud via linking hierarchical features. arXiv preprint arXiv: 1904.10014, 2019.
    [48] Li J, Chen B M, Lee G H. SO-Net: Self-organizing network for point cloud analysis. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2018. 9397−9406
    [49] You Y, Lou Y, Liu Q, Tai Y W, Ma L Z, Lu C W, et al. Pointwise rotation-invariant network with adaptive sampling and 3D spherical voxel convolution. In: Proceedings of AAAI Conference on Artificial Intelligence. Palo Alto, USA: AAAI, 2020. 12717−12724
    [50] Huang Q, Wang W, Neumann U. Recurrent slice networks for 3D segmentation of point clouds. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2018. 2626−2635
    [51] Atzmon M, Maron H, Lipman Y. Point convolutional neural networks by extension operators. ACM Transactions on Graphics, 2018, 37(4): 71
    [52] Su H, Jampani V, Sun D, Maji S, Kalogerakis E, Yang M H, et al. SPLATNet: Sparse lattice networks for point cloud processing. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2018. 2530−2539
    [53] Wang Y, Sun Y, Liu Z, Sarma S E, Bronstein M M, Solomon J M. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics, 2019, 38(5): 146:1-146:12
    [54] Xu Y, Fan T, Xu M, Zeng L, Qiao Y. SpiderCNN: Deep learning on point sets with parameterized convolutional filters. In: Proceedings of European Conference on Computer Vision. Berlin, DE: Springer, 2018. 90−105
    [55] Tombari F, Salti S, Stefano L D. Unique signatures of histograms for local surface description. In: Proceedings of European Conference on Computer Vision. Berlin, DE: Springer, 2010. 356−369
    [56] Khoury M, Zhou Q Y, Koltun V. Learning compact geometric features. In: Proceedings of International Conference on Computer Vision. Piscataway, USA: IEEE, 2017. 153−161
    [57] Zhang Z, Hua B S, Rosen D W, Yeung S K. Rotation invariant convolutions for 3D point clouds deep learning. In: Proceedings of International Conference on 3D Vision. Piscataway, USA: IEEE, 2019. 204−213
    [58] Kim S, Park J, Han B. Rotation-invariant local-to-global representation learning for 3D point cloud. In: Proceedings of Annual Conference on Neural Information Processing Systems. New York, USA: MIT Press, 2020. 8174−8185
    [59] Li X Z, Li R H, Chen G Y, Fu C W, Cohen-Or D, Heng P A. A rotation-invariant framework for deep point cloud analysis. IEEE Transactions on Visualization and Computer Graphics, 2021, 28(12): 4503-4514
Publication History
  • Received:  2022-09-02
  • Accepted:  2022-12-27
  • Available online:  2023-05-05
  • Issue published:  2023-06-20
