

Semantic Segmentation of Distribution Network Point Clouds Based on a Structure Spectrum-Aware Framework

Tang You-Yuan, Zhang Hui, Du Rui, Zhang Kai-Ning, Cao Yun-Kang, Biekezati Baheti, Chen Hou-Quan, Wang Yao-Nan

Citation: Tang You-Yuan, Zhang Hui, Du Rui, Zhang Kai-Ning, Cao Yun-Kang, Biekezati Baheti, Chen Hou-Quan, Wang Yao-Nan. Semantic segmentation of distribution network point clouds based on a structure spectrum-aware framework. Acta Automatica Sinica, 2026, 52(4): 833−845 doi: 10.16383/j.aas.c250540


doi: 10.16383/j.aas.c250540 cstr: 32138.14.j.aas.c250540

  • CLC number: Y

Semantic Segmentation of Distribution Network Point Clouds Based on a Structure Spectrum-Aware Framework

Funds: Supported by Major Program of National Natural Science Foundation of China (62595801), Ten Technical Research Projects of Hunan Province (2024GK1010), Natural Science Foundation of Hunan Province (2025JJ30024), and Science and Technology Project of State Grid Hunan Electric Power Co., Ltd. (5216A522001Y, 5216A5240003, 5216AJ250008)
    Author Bio:

    TANG You-Yuan Master's student at the School of Artificial Intelligence, Changsha University of Science and Technology. He received his bachelor's degree from Xiangnan University in 2023. His main research interest is intelligent point cloud perception

    ZHANG Hui Professor at the School of Artificial Intelligence and Robotics, Hunan University. He received his Ph.D. degree from Hunan University in 2007. His research interests include data analysis, image processing, and robot control. Corresponding author of this paper

    DU Rui Ph.D. candidate at the School of Artificial Intelligence and Robotics, Hunan University. He received his master's degree from Xiangtan University in 2020. His research interests include data analysis and image processing

    ZHANG Kai-Ning Assistant professor at the School of Artificial Intelligence and Robotics, Hunan University. She received her Ph.D. degree in communication and information systems from Wuhan University in 2025. Her research interests include 3D vision and multimodal perception

    CAO Yun-Kang Assistant professor at the School of Artificial Intelligence and Robotics, Hunan University. He received his Ph.D. degree in mechanical engineering from Huazhong University of Science and Technology in 2025. His research interests include industrial vision detection and recognition in complex scenarios, industrial foundation models, and industrial embodied intelligence

    BIEKEZATI Baheti Ph.D. candidate at the School of Artificial Intelligence and Robotics, Hunan University. He received his master's degree from Cranfield University in 2020. His research interests include signal analysis and time series analysis

    CHEN Hou-Quan Ph.D. candidate at the School of Artificial Intelligence and Robotics, Hunan University. He received his bachelor's degree from the School of Computer and Information Engineering, Central South University of Forestry and Technology in 2022. His research interests include multimodal technologies for power-system applications

    WANG Yao-Nan Academician of the Chinese Academy of Engineering and professor at the School of Artificial Intelligence and Robotics, Hunan University. He received his Ph.D. degree from Hunan University in 1995. His research interests include data analysis, intelligent control, and image processing

  • Abstract: Semantic segmentation of distribution network point clouds is of great significance for unmanned inspection and intelligent grid operation and maintenance. Although existing methods have made progress in spatial modeling and structural enhancement, they still face prominent challenges in spectral feature mining and in the processing efficiency of large-scale point clouds. To this end, a structure spectrum-aware framework (SSAF) is proposed to improve point cloud representation in long-range distribution network scenarios. In the data preprocessing stage, a structure-guided hierarchical filtering strategy combined with a structure-aware sample partitioning method is proposed, which compresses redundant background points while effectively preserving the structural integrity and continuity of key targets such as power towers and power lines. In the semantic segmentation stage, a spectral-spatial collaborative semantic segmentation network is constructed: a local polar coordinate system is introduced to strengthen the modeling of directional features, and an attention-map-based dynamic fusion mechanism is designed to realize adaptive interaction and mutual enhancement between spatial and spectral features. Experimental results show that SSAF achieves higher segmentation accuracy and inference efficiency on a point cloud dataset of real distribution network scenes and outperforms representative existing methods on several key metrics, verifying its practicality in complex scenarios and its potential for engineering deployment.
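The local polar coordinate idea mentioned in the abstract can be sketched as follows. This is a hypothetical illustration of encoding neighbor offsets as (radius, azimuth, elevation); the function name, neighbor layout, and feature choice are assumptions, not the paper's actual implementation.

```python
import numpy as np

def local_polar(center, neighbors):
    """Encode k neighbor offsets of one point in a local polar/spherical
    frame: (radius, azimuth, elevation). Hypothetical sketch of a
    direction-aware local coordinate encoding; not the paper's exact design.
    center: (3,), neighbors: (k, 3) -> returns (k, 3) features."""
    d = neighbors - center                        # offsets relative to the center point
    r = np.linalg.norm(d, axis=1)                 # radial distance
    azim = np.arctan2(d[:, 1], d[:, 0])           # angle in the xy-plane
    elev = np.arctan2(d[:, 2], np.linalg.norm(d[:, :2], axis=1))  # tilt out of the xy-plane
    return np.stack([r, azim, elev], axis=1)
```

The appeal for power-line-like geometry is that points sampled along an elongated structure yield near-constant azimuth and elevation across a neighborhood, making direction an explicit, learnable cue.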
  • Fig. 1  The structure spectrum-aware framework

    Fig. 2  Data preprocessing

    Fig. 3  Spectral-spatial collaborative semantic segmentation network

    Fig. 4  Bidirectional gated fusion

    Fig. 5  Visualization of the constructed distribution network scene dataset

    Fig. 6  Visualization of point cloud semantic segmentation results for distribution network scenes

    Fig. 7  Comparison of segmentation accuracy and runtime at different processing stages

    Fig. 8  Feature activation heat maps under different configurations
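The bidirectional gated fusion of Fig. 4 can be sketched minimally as two cross-gated residual updates. The gating form and the weights `W_s`, `W_f` are hypothetical stand-ins for learned parameters; this is not the paper's actual module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bidirectional_gated_fusion(f_spatial, f_spectral, W_s, W_f):
    """Minimal sketch of bidirectional gated fusion between spatial and
    spectral point features. Each branch predicts a gate from the other
    branch and uses it to modulate the injected information.
    f_*: (n, c) feature matrices; W_*: (c, c) hypothetical learned weights."""
    g_spatial = sigmoid(f_spectral @ W_s)    # spectral branch gates the spatial update
    g_spectral = sigmoid(f_spatial @ W_f)    # spatial branch gates the spectral update
    out_spatial = f_spatial + g_spatial * f_spectral    # spectral info into spatial branch
    out_spectral = f_spectral + g_spectral * f_spatial  # spatial info into spectral branch
    return out_spatial, out_spectral
```

With zero weights both gates sit at 0.5, so each branch receives half of the other branch's features; training would move the gates toward passing or blocking cross-branch information per channel.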

    Table 1  Experiment on semantic segmentation of distribution network point cloud (%)

    Method                 OA     mAcc   mIoU   Background IoU  Power line IoU  Power tower IoU  Params (M)
    DeepGCN[19]            98.26  71.29  78.92  98.20           63.10           75.46            3.60
    DGCNN[20]              98.89  78.88  83.36  99.03           62.13           88.92            1.30
    PointNet++[14]         98.40  86.04  84.75  99.62           67.48           87.14            1.00
    KPConv[31]             98.90  87.91  85.51  99.90           65.71           90.91            15.00
    PointNext-XL[32]       98.27  91.40  88.54  98.65           74.72           92.25            41.60
    PTv2[22]               98.36  92.63  88.42  99.28           80.04           85.93            11.30
    PointMetaBase-XL[33]   98.38  94.21  93.62  99.11           92.35           89.40            15.30
    DeLA[34]               96.87  95.34  93.65  96.89           92.69           91.36            7.00
    DeLA + X-3D[35]        98.59  96.81  94.29  96.36           94.30           92.22            8.00
    PTv3[23]               98.64  97.11  94.37  96.56           94.42           92.12            —
    PCM[36]                98.79  97.27  95.69  96.96           96.17           93.93            34.20
    SSCNet (ours)          98.74  97.99  96.20  97.85           94.36           96.40            12.54
    Note: Bold indicates the best result for each metric.
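The OA, mAcc, and mIoU columns reported in the tables follow the standard confusion-matrix definitions for semantic segmentation; the sketch below computes them from an illustrative 2-class matrix (the numbers are made up, not the paper's).

```python
import numpy as np

def segmentation_metrics(conf):
    """OA, mAcc, and mIoU from a (C, C) confusion matrix where
    conf[i, j] counts points of true class i predicted as class j.
    Standard definitions; the input here is illustrative only."""
    tp = np.diag(conf).astype(float)
    oa = tp.sum() / conf.sum()                              # overall point accuracy
    acc = tp / conf.sum(axis=1)                             # per-class recall
    iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)   # per-class intersection over union
    return oa, acc.mean(), iou.mean()

# Illustrative 2-class case: 8/10 and 9/10 points correct per class
oa, macc, miou = segmentation_metrics(np.array([[8, 2], [1, 9]]))
```

mIoU penalizes false positives as well as misses, which is why it is the stricter headline metric in Tables 1, 3, and 4.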

    Table 2  Category-wise point cloud statistics before and after filtering (Unit: 10^4 points)

    Scene  Original (line / tower / background)  Filtered (line / tower / background)  Background removal rate (%)
    S1     86.9 / 16.4 / 4890.0                  86.1 / 15.7 / 1415.0                  71.1
    S2     67.9 / 1625.0 / 4042.0                65.7 / 1622.1 / 45.9                  98.9
    S3     117.6 / 1090.4 / 3162.0               116.8 / 1087.5 / 397.0                87.5
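The background removal rate in Table 2 is simply the share of background points dropped by the filtering stage; a quick check against the table's values for scenes S1 and S2:

```python
def removal_rate(before, after):
    """Background removal rate in percent: fraction of background
    points eliminated between the original and filtered clouds."""
    return 100.0 * (before - after) / before

# Table 2, scene S1: background shrinks from 4890.0 to 1415.0 (x10^4 points)
rate_s1 = removal_rate(4890.0, 1415.0)   # about 71.1 %
# Scene S2: 4042.0 -> 45.9, a near-total background purge
rate_s2 = removal_rate(4042.0, 45.9)     # about 98.9 %
```

Power-line and tower counts change by well under 1% in every scene, consistent with the claim that filtering compresses background while preserving the key targets.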

    Table 3  The ablation experiment results (%)

    Method  PRST  BGF  OA     mIoU   mAcc
    1                  98.12  94.13  91.74
    2       ✓          98.57  95.86  96.84
    3             ✓    98.78  95.47  93.06
    4       ✓     ✓    98.74  96.20  97.99

    Table 4  Comparison of mainstream methods on the S3DIS dataset

    Method                 Params (M)  OA (%)  mAcc (%)  mIoU (%)
    PointNet++[14]         1.0         83.0    —         53.5
    DGCNN[20]              1.3         —       —         47.9
    KPConv[31]             15.0        —       72.8      67.1
    PointNext-XL[32]       41.0        91.0    77.2      71.1
    PTv1[37]               —           90.8    76.5      70.4
    PTv2[22]               11.3        91.6    78.0      72.7
    PointMetaBase-XL[33]   15.3        90.6    —         71.5
    DeLA[34]               7.0         92.2    80.0      74.1
    DeLA + X-3D[35]        8.0         92.2    80.1      74.3
    PTv3[23]               —           —       —         74.7
    PCM[36]                34.2        92.9    81.6      74.1
    SSCNet                 12.5        92.3    82.1      75.1
  • [1] Shen Y Q, Huang J J, Wang J G, Jiang J D, Li J X, Ferreira V. A review and future directions of techniques for extracting powerlines and pylons from LiDAR point clouds. International Journal of Applied Earth Observation and Geoinformation, 2024, 132: Article No. 104056
    [2] Jung J, Che E Z, Olsen M J, Shafer K C. Automated and efficient powerline extraction from laser scanning data using a voxel-based subsampling with hierarchical approach. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 163: 343−361
    [3] Liu X Y, Miao X R, Jiang H, Chen J, Wu M, Chen Z H. Tower masking MIM: A self-supervised pretraining method for power line inspection. IEEE Transactions on Industrial Informatics, 2024, 20(1): 513−523 doi: 10.1109/TII.2023.3268479
    [4] Wang Fei-Ran, Han Geng, Guo Xin-Yang, Shi Chao-Yang, Wang Jin. Segmentation and clearance inspections on overhead transmission powerline corridor based on LiDAR point clouds. Bulletin of Surveying and Mapping, 2024(5): 133−137 doi: 10.13474/j.cnki.11-2246.2024.0523 (in Chinese)
    [5] Kim H B, Sohn G. 3D classification of power-line scene from airborne laser scanning data using random forests. In: Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Saint-Mandé, France: ISPRS, 2010. 126−132
    [6] Lehtomäki M, Kukko A, Matikainen L, Hyyppä J, Kaartinen H, Jaakkola A. Power line mapping technique using all-terrain mobile laser scanning. Automation in Construction, 2019, 105: Article No. 102802
    [7] Fischler M A, Bolles R C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 1981, 24(6): 381−395 doi: 10.1016/b978-0-08-051581-6.50070-2
    [8] Shen X J, Qin C, Du Y, Yu X L, Zhang R. An automatic extraction algorithm of high voltage transmission lines from airborne LiDAR point cloud data. Turkish Journal of Electrical Engineering and Computer Sciences, 2018, 26(4): 2043−2055 doi: 10.3906/elk-1801-23
    [9] Zhu S, Li Q, Zhao J W, Zhang C G, Zhao G, Li L, et al. A deep-learning-based method for extracting an arbitrary number of individual power lines from UAV-mounted laser scanning point clouds. Remote Sensing, 2024, 16(2): Article No. 393 doi: 10.3390/rs16020393
    [10] Maturana D, Scherer S. VoxNet: A 3D convolutional neural network for real-time object recognition. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Hamburg, Germany: IEEE, 2015. 922−928
    [11] Shan Xuan-Yang, Sun Zhan-Li, Zeng Zhi-Gang. RFNet: Convolutional neural network for 3D point cloud classification. Acta Automatica Sinica, 2023, 49(11): 2350−2359 doi: 10.16383/j.aas.c210532 (in Chinese)
    [12] Lu Bin, Fan Xiao-Ming. Research on 3D point cloud skeleton extraction based on improved adaptive K-means clustering. Acta Automatica Sinica, 2022, 48(8): 1994−2006 doi: 10.16383/j.aas.c200284 (in Chinese)
    [13] Qi C R, Su H, Mo K C, Guibas L J. PointNet: Deep learning on point sets for 3D classification and segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 77−85
    [14] Qi C R, Yi L, Su H, Guibas L J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 5105−5114
    [15] Dong J H, Chen H, Chen S H, Zhao Y G, Yang N. PSFE-Net: Semantic segmentation network for airborne LiDAR transmission corridor scenes inspection. In: Proceedings of the 9th Asia Conference on Power and Electrical Engineering (ACPEE). Shanghai, China: IEEE, 2024. 1538−1542
    [16] Liu X N, Shuang F, Li Y, Zhang L Q, Huang X W, Qin J C. SS-IPLE: Semantic segmentation of electric power corridor scene and individual power line extraction from UAV-based LiDAR point cloud. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2023, 16: 38−50
    [17] Huang Zheng, Gu Xu, Wang Hong-Xing, Zhang Xing-Wei, Zhang Xin. Semantic segmentation model for transmission tower point cloud based on improved PointNet++. Electric Power, 2023, 56(3): 77−85 doi: 10.11930/j.issn.1004-9649.202206087 (in Chinese)
    [18] Li W, Luo Z P, Xiao Z L, Chen Y P, Wang C, Li J. A GCN-based method for extracting power lines and pylons from airborne LiDAR data. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: Article No. 5700614
    [19] Li G H, Müller M, Thabet A, Ghanem B. DeepGCNs: Can GCNs go as deep as CNNs? In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, South Korea: IEEE, 2019. 9266−9275
    [20] Wang Y, Sun Y B, Liu Z W, Sarma S E, Bronstein M M, Solomon J M. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics, 2019, 38(5): Article No. 146
    [21] Li Jian, Wang Jian, Wang Lei, Li Min, Yang Li-Ke, Zhao Yi-Long. Dual attention for power corridor point cloud semantic segmentation. Bulletin of Surveying and Mapping, 2025(4): 127−133 doi: 10.13474/j.cnki.11-2246.2025.0421 (in Chinese)
    [22] Wu X Y, Lao Y X, Jiang L, Liu X H, Zhao H S. Point transformer V2: Grouped vector attention and partition-based pooling. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. New Orleans, USA: Curran Associates Inc., 2022. Article No. 2415
    [23] Wu X Y, Jiang L, Wang P S, Liu Z J, Liu X H, Qiao Y, et al. Point transformer V3: Simpler, faster, stronger. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2024. 4840−4851
    [24] Bu L B, Wang Y F, Ma Q M, Hou Z W, Wang R, Bu F L. Deep hierarchical learning on point clouds in feature space. Neurocomputing, 2025, 630: Article No. 129647
    [25] Liu D Z, Hu W, Li X. Point cloud attacks in graph spectral domain: When 3D geometry meets graph signal processing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(5): 3079−3095 doi: 10.1109/TPAMI.2023.3339130
    [26] Wen C, Long J Z, Yu B S, Tao D C. PointWavelet: Learning in spectral domain for 3-D point cloud analysis. IEEE Transactions on Neural Networks and Learning Systems, 2025, 36(3): 4400−4412 doi: 10.1109/TNNLS.2024.3363244
    [27] Rizaldy A, Gloaguen R, Fassnacht F E, Ghamisi P. HyperPointFormer: Multimodal fusion in 3-D space with dual-branch cross-attention transformers. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2025, 18: 21254−21274
    [28] Liang D K, Feng T R, Zhou X, Zhang Y M, Zou Z K, Bai X. Parameter-efficient fine-tuning in spectral domain for point cloud learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, 47(12): 10949−10966 doi: 10.1109/TPAMI.2025.3594749
    [29] Yang Y Y, Li W, Ao S, Xu Q S, Yu S S, Guo Y, et al. RALoc: Enhancing outdoor LiDAR localization via rotation awareness. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Honolulu, USA: IEEE, 2025. 3304−3313
    [30] Zhang W M, Qi J B, Wan P, Wang H T, Xie D H, Wang X Y, et al. An easy-to-use airborne LiDAR data filtering method based on cloth simulation. Remote Sensing, 2016, 8(6): Article No. 501 doi: 10.3390/rs8060501
    [31] Thomas H, Qi C R, Deschaud J E, Marcotegui B, Goulette F, Guibas L J. KPConv: Flexible and deformable convolution for point clouds. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, South Korea: IEEE, 2019. 6410−6419
    [32] Qian G C, Li Y C, Peng H W, Mai J J, Hammoud H A A K, Elhoseiny M, et al. PointNeXt: Revisiting PointNet++ with improved training and scaling strategies. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. New Orleans, USA: Curran Associates Inc., 2022. Article No. 1685
    [33] Lin H J, Zheng X W, Li L J, Chao F, Wang S S, Wang Y, et al. Meta architecture for point cloud analysis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023. 17682−17691
    [34] Yang W K, Lu X H, Chen B J, Lin C L, Bao X Y, Liu W Q, et al. DeLA: An extremely faster network with decoupled local aggregation for large scale point cloud learning. International Journal of Applied Earth Observation and Geoinformation, 2024, 135: Article No. 104255
    [35] Sun S F, Rao Y M, Lu J W, Yan H B. X-3D: Explicit 3D structure modeling for point cloud recognition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2024. 5074−5083
    [36] Zhang T, Yuan H B, Qi L, Zhang J N, Zhou Q Y, Ji S P, et al. Point cloud mamba: Point cloud learning via state space model. In: Proceedings of the 39th AAAI Conference on Artificial Intelligence. Philadelphia, USA: AAAI Press, 2025. 10121−10130
    [37] Zhao H S, Jiang L, Jia J Y, Torr P, Koltun V. Point transformer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE, 2021. 16239−16248
Publication history
  • Received: 2025-10-13
  • Accepted: 2025-12-31
  • Available online: 2026-03-14
  • Issue published: 2026-04-20
