Semantic Segmentation of Distribution Network Point Clouds Based on a Structure Spectrum-Aware Framework
Abstract: The semantic segmentation of point clouds in power distribution networks is of great significance for enabling unmanned inspection and intelligent grid operation and maintenance. Although existing methods have made progress in spatial modeling and structural enhancement, they still face prominent challenges in spectral feature mining and in the efficiency of large-scale point cloud processing. To address these issues, this paper proposes a structure spectrum-aware framework (SSAF) to enhance the representational capability of point clouds in long-distance distribution network scenarios. In the data preprocessing stage, a structure-guided hierarchical filtering strategy and a structure-aware sample partitioning method are designed to compress redundant background points while preserving the structural integrity and continuity of key objects such as pylons and power lines. In the semantic segmentation stage, a spatial-spectral collaborative semantic segmentation network is constructed, in which a local polar coordinate system is introduced to enhance the modeling of direction-sensitive features, and a dynamic fusion mechanism based on attention maps is designed to enable adaptive interaction and mutual enhancement between spatial and spectral features. Experimental results show that SSAF achieves higher segmentation accuracy and inference efficiency on a real-world distribution network point cloud dataset, outperforming existing representative methods across multiple key metrics and demonstrating its practicality and engineering generalization potential in complex scenarios.
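To make the local polar coordinate idea in the abstract concrete, the sketch below converts the offsets of a point's k nearest neighbors into (radius, azimuth, elevation) triples. This is a minimal illustration under assumed conventions; the function name `local_polar` and the exact parameterization are hypothetical and not necessarily the paper's actual design.

```python
import numpy as np

def local_polar(center, neighbors):
    """Express neighbor offsets around a center point in local polar
    coordinates (radius, azimuth, elevation), a direction-sensitive
    encoding; illustrative sketch only."""
    d = neighbors - center                         # relative offsets, shape (k, 3)
    r = np.linalg.norm(d, axis=1)                  # radial distance
    azimuth = np.arctan2(d[:, 1], d[:, 0])         # angle in the xy-plane, [-pi, pi]
    # elevation above the xy-plane; clip guards against rounding outside [-1, 1]
    elevation = np.arcsin(np.clip(d[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))
    return np.stack([r, azimuth, elevation], axis=1)
```

Such an encoding makes near-vertical structures (pylons) and near-horizontal ones (power lines) separable by elevation angle alone, which plain xyz offsets do not expose directly.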
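The attention-map-based dynamic fusion can likewise be sketched as a learned convex combination of the two feature branches. Everything here (the scoring projection `w`, the per-point two-way softmax) is an assumed, plausible realization for illustration, not the network's published architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax (subtract the row maximum before exponentiating)
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_fuse(f_spatial, f_spectral, w):
    """Score each branch per point from the concatenated features, then
    take a convex combination weighted by the resulting attention map.
    f_spatial, f_spectral: (N, C); w: (2C, 2) learned projection (assumed)."""
    scores = np.concatenate([f_spatial, f_spectral], axis=1) @ w  # (N, 2)
    a = softmax(scores, axis=1)                                   # attention map, rows sum to 1
    return a[:, :1] * f_spatial + a[:, 1:] * f_spectral
```

With untrained (zero) weights the two branches are simply averaged; training lets the attention map shift weight point by point toward whichever branch is more informative.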
Table 1 Semantic segmentation experiments on distribution network point clouds (%)

| Method | OA | mAcc | mIoU | Background IoU | Power line IoU | Pylon IoU | Params (M) |
|---|---|---|---|---|---|---|---|
| DeepGCN[19] | 98.26 | 71.29 | 78.92 | 98.20 | 63.10 | 75.46 | 3.60 |
| DGCNN[20] | 98.89 | 78.88 | 83.36 | 99.03 | 62.13 | 88.92 | 1.30 |
| PointNet++[14] | 98.40 | 86.04 | 84.75 | 99.62 | 67.48 | 87.14 | 1.00 |
| KPConv[31] | 98.90 | 87.91 | 85.51 | 99.90 | 65.71 | 90.91 | 15.00 |
| PointNext-XL[32] | 98.27 | 91.40 | 88.54 | 98.65 | 74.72 | 92.25 | 41.60 |
| PTv2[22] | 98.36 | 92.63 | 88.42 | 99.28 | 80.04 | 85.93 | 11.30 |
| PointMetaBase-XL[33] | 98.38 | 94.21 | 93.62 | 99.11 | 92.35 | 89.40 | 15.30 |
| DeLA[34] | 96.87 | 95.34 | 93.65 | 96.89 | 92.69 | 91.36 | 7.00 |
| DeLA + X-3D[35] | 98.59 | 96.81 | 94.29 | 96.36 | 94.30 | 92.22 | 8.00 |
| PTv3[23] | 98.64 | 97.11 | 94.37 | 96.56 | 94.42 | 92.12 | — |
| PCM[36] | 98.79 | 97.27 | 95.69 | 96.96 | 96.17 | 93.93 | 34.20 |
| SSCNet (ours) | 98.74 | 97.99 | 96.20 | 97.85 | 94.36 | 96.40 | 12.54 |

Note: in the original typeset table, bold indicates the best result for each metric.
Table 2 Category-wise point cloud statistics before and after filtering (unit: 10^4 points)

| Scene | Original: power line | Original: pylon | Original: background | Filtered: power line | Filtered: pylon | Filtered: background | Background removal rate (%) |
|---|---|---|---|---|---|---|---|
| S1 | 86.9 | 16.4 | 4890.0 | 86.1 | 15.7 | 1415.0 | 71.1 |
| S2 | 67.9 | 1625.0 | 4042.0 | 65.7 | 1622.1 | 45.9 | 98.9 |
| S3 | 117.6 | 1090.4 | 3162.0 | 116.8 | 1087.5 | 397.0 | 87.5 |
Table 3 Ablation experiment results (%)

| Method | PRST | BGF | OA | mIoU | mAcc |
|---|---|---|---|---|---|
| 1 | | | 98.12 | 94.13 | 91.74 |
| 2 | ✓ | | 98.57 | 95.86 | 96.84 |
| 3 | | ✓ | 98.78 | 95.47 | 93.06 |
| 4 | ✓ | ✓ | 98.74 | 96.20 | 97.99 |
Table 4 Comparison of mainstream methods on the S3DIS dataset
| Method | Params (M) | OA (%) | mAcc (%) | mIoU (%) |
|---|---|---|---|---|
| PointNet++[14] | 1.0 | 83.0 | — | 53.5 |
| DGCNN[20] | 1.3 | — | — | 47.9 |
| KPConv[31] | 15.0 | — | 72.8 | 67.1 |
| PointNext-XL[32] | 41.0 | 91.0 | 77.2 | 71.1 |
| PTv1[37] | — | 90.8 | 76.5 | 70.4 |
| PTv2[22] | 11.3 | 91.6 | 78.0 | 72.7 |
| PointMetaBase-XL[33] | 15.3 | 90.6 | — | 71.5 |
| DeLA[34] | 7.0 | 92.2 | 80.0 | 74.1 |
| DeLA + X-3D[35] | 8.0 | 92.2 | 80.1 | 74.3 |
| PTv3[23] | — | — | — | 74.7 |
| PCM[36] | 34.2 | 92.9 | 81.6 | 74.1 |
| SSCNet | 12.5 | 92.3 | 82.1 | 75.1 |
[1] Shen Y Q, Huang J J, Wang J G, Jiang J D, Li J X, Ferreira V. A review and future directions of techniques for extracting powerlines and pylons from LiDAR point clouds. International Journal of Applied Earth Observation and Geoinformation, 2024, 132: Article No. 104056
[2] Jung J, Che E Z, Olsen M J, Shafer K C. Automated and efficient powerline extraction from laser scanning data using a voxel-based subsampling with hierarchical approach. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 163: 343−361
[3] Liu X Y, Miao X R, Jiang H, Chen J, Wu M, Chen Z H. Tower masking MIM: A self-supervised pretraining method for power line inspection. IEEE Transactions on Industrial Informatics, 2024, 20(1): 513−523 doi: 10.1109/TII.2023.3268479
[4] Wang Fei-Ran, Han Geng, Guo Xin-Yang, Shi Chao-Yang, Wang Jin. Segmentation and clearance inspections on overhead transmission powerline corridor based on LiDAR point clouds. Bulletin of Surveying and Mapping, 2024(5): 133−137 (in Chinese) doi: 10.13474/j.cnki.11-2246.2024.0523
[5] Kim H B, Sohn G. 3D classification of power-line scene from airborne laser scanning data using random forests. In: Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Saint-Mandé, France: ISPRS, 2010. 126−132
[6] Lehtomäki M, Kukko A, Matikainen L, Hyyppä J, Kaartinen H, Jaakkola A. Power line mapping technique using all-terrain mobile laser scanning. Automation in Construction, 2019, 105: Article No. 102802
[7] Fischler M A, Bolles R C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 1981, 24(6): 381−395 doi: 10.1145/358669.358692
[8] Shen X J, Qin C, Du Y, Yu X L, Zhang R. An automatic extraction algorithm of high voltage transmission lines from airborne LiDAR point cloud data. Turkish Journal of Electrical Engineering and Computer Sciences, 2018, 26(4): 2043−2055 doi: 10.3906/elk-1801-23
[9] Zhu S, Li Q, Zhao J W, Zhang C G, Zhao G, Li L, et al. A deep-learning-based method for extracting an arbitrary number of individual power lines from UAV-mounted laser scanning point clouds. Remote Sensing, 2024, 16(2): Article No. 393 doi: 10.3390/rs16020393
[10] Maturana D, Scherer S. VoxNet: A 3D convolutional neural network for real-time object recognition. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Hamburg, Germany: IEEE, 2015. 922−928
[11] Shan Xuan-Yang, Sun Zhan-Li, Zeng Zhi-Gang. RFNet: Convolutional neural network for 3D point cloud classification. Acta Automatica Sinica, 2023, 49(11): 2350−2359 (in Chinese) doi: 10.16383/j.aas.c210532
[12] Lu Bin, Fan Xiao-Ming. Research on 3D point cloud skeleton extraction based on improved adaptive K-means clustering. Acta Automatica Sinica, 2022, 48(8): 1994−2006 (in Chinese) doi: 10.16383/j.aas.c200284
[13] Qi C R, Su H, Mo K C, Guibas L J. PointNet: Deep learning on point sets for 3D classification and segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 77−85
[14] Qi C R, Yi L, Su H, Guibas L J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 5105−5114
[15] Dong J H, Chen H, Chen S H, Zhao Y G, Yang N. PSFE-Net: Semantic segmentation network for airborne LiDAR transmission corridor scenes inspection. In: Proceedings of the 9th Asia Conference on Power and Electrical Engineering (ACPEE). Shanghai, China: IEEE, 2024. 1538−1542
[16] Liu X N, Shuang F, Li Y, Zhang L Q, Huang X W, Qin J C. SS-IPLE: Semantic segmentation of electric power corridor scene and individual power line extraction from UAV-based LiDAR point cloud. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2023, 16: 38−50
[17] Huang Zheng, Gu Xu, Wang Hong-Xing, Zhang Xing-Wei, Zhang Xin. Semantic segmentation model for transmission tower point cloud based on improved PointNet++. Electric Power, 2023, 56(3): 77−85 (in Chinese) doi: 10.11930/j.issn.1004-9649.202206087
[18] Li W, Luo Z P, Xiao Z L, Chen Y P, Wang C, Li J. A GCN-based method for extracting power lines and pylons from airborne LiDAR data. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: Article No. 5700614
[19] Li G H, Müller M, Thabet A, Ghanem B. DeepGCNs: Can GCNs go as deep as CNNs? In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, South Korea: IEEE, 2019. 9266−9275
[20] Wang Y, Sun Y B, Liu Z W, Sarma S E, Bronstein M M, Solomon J M. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics, 2019, 38(5): Article No. 146
[21] Li Jian, Wang Jian, Wang Lei, Li Min, Yang Li-Ke, Zhao Yi-Long. Dual attention for power corridor point cloud semantic segmentation. Bulletin of Surveying and Mapping, 2025(4): 127−133 (in Chinese) doi: 10.13474/j.cnki.11-2246.2025.0421
[22] Wu X Y, Lao Y X, Jiang L, Liu X H, Zhao H S. Point transformer V2: Grouped vector attention and partition-based pooling. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. New Orleans, USA: Curran Associates Inc., 2022. Article No. 2415
[23] Wu X Y, Jiang L, Wang P S, Liu Z J, Liu X H, Qiao Y, et al. Point transformer V3: Simpler, faster, stronger. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2024. 4840−4851
[24] Bu L B, Wang Y F, Ma Q M, Hou Z W, Wang R, Bu F L. Deep hierarchical learning on point clouds in feature space. Neurocomputing, 2025, 630: Article No. 129647
[25] Liu D Z, Hu W, Li X. Point cloud attacks in graph spectral domain: When 3D geometry meets graph signal processing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(5): 3079−3095 doi: 10.1109/TPAMI.2023.3339130
[26] Wen C, Long J Z, Yu B S, Tao D C. PointWavelet: Learning in spectral domain for 3-D point cloud analysis. IEEE Transactions on Neural Networks and Learning Systems, 2025, 36(3): 4400−4412 doi: 10.1109/TNNLS.2024.3363244
[27] Rizaldy A, Gloaguen R, Fassnacht F E, Ghamisi P. HyperPointFormer: Multimodal fusion in 3-D space with dual-branch cross-attention transformers. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2025, 18: 21254−21274
[28] Liang D K, Feng T R, Zhou X, Zhang Y M, Zou Z K, Bai X. Parameter-efficient fine-tuning in spectral domain for point cloud learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025, 47(12): 10949−10966 doi: 10.1109/TPAMI.2025.3594749
[29] Yang Y Y, Li W, Ao S, Xu Q S, Yu S S, Guo Y, et al. RALoc: Enhancing outdoor LiDAR localization via rotation awareness. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Honolulu, USA: IEEE, 2025. 3304−3313
[30] Zhang W M, Qi J B, Wan P, Wang H T, Xie D H, Wang X Y, et al. An easy-to-use airborne LiDAR data filtering method based on cloth simulation. Remote Sensing, 2016, 8(6): Article No. 501 doi: 10.3390/rs8060501
[31] Thomas H, Qi C R, Deschaud J E, Marcotegui B, Goulette F, Guibas L J. KPConv: Flexible and deformable convolution for point clouds. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, South Korea: IEEE, 2019. 6410−6419
[32] Qian G C, Li Y C, Peng H W, Mai J J, Hammoud H A A K, Elhoseiny M, et al. PointNeXt: Revisiting PointNet++ with improved training and scaling strategies. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. New Orleans, USA: Curran Associates Inc., 2022. Article No. 1685
[33] Lin H J, Zheng X W, Li L J, Chao F, Wang S S, Wang Y, et al. Meta architecture for point cloud analysis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, Canada: IEEE, 2023. 17682−17691
[34] Yang W K, Lu X H, Chen B J, Lin C L, Bao X Y, Liu W Q, et al. DeLA: An extremely faster network with decoupled local aggregation for large scale point cloud learning. International Journal of Applied Earth Observation and Geoinformation, 2024, 135: Article No. 104255
[35] Sun S F, Rao Y M, Lu J W, Yan H B. X-3D: Explicit 3D structure modeling for point cloud recognition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2024. 5074−5083
[36] Zhang T, Yuan H B, Qi L, Zhang J N, Zhou Q Y, Ji S P, et al. Point cloud mamba: Point cloud learning via state space model. In: Proceedings of the 39th AAAI Conference on Artificial Intelligence. Philadelphia, USA: AAAI Press, 2025. 10121−10130
[37] Zhao H S, Jiang L, Jia J Y, Torr P, Koltun V. Point transformer. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE, 2021. 16239−16248