

Density Clustering Based on the Border-peeling Using Space Vector Decomposition

Zhang Rui-Lin, Zheng Hai-Yang, Miao Zhen-Guo, Wang Hong-Peng

Citation: Zhang Rui-Lin, Zheng Hai-Yang, Miao Zhen-Guo, Wang Hong-Peng. Density clustering based on the border-peeling using space vector decomposition. Acta Automatica Sinica, 2023, 49(6): 1195−1213. doi: 10.16383/j.aas.c220208

doi: 10.16383/j.aas.c220208
Funds: Supported by Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies (2022B1212010005) and Shenzhen Fundamental Research Fund (JCYJ20210324132212030)

Author Bios:

    ZHANG Rui-Lin  Ph.D. candidate at the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen. His research interests cover deep learning, computer vision, and data mining. E-mail: zzurlz@163.com

    ZHENG Hai-Yang  Master student at the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen. His main research interest is deep learning. E-mail: 21S151085@stu.hit.edu.cn

    MIAO Zhen-Guo  Master student at the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen. His main research interest is deep learning. E-mail: 20S051017@stu.hit.edu.cn

    WANG Hong-Peng  Professor at the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen. His research interests cover computer vision, intelligent robots, and artificial intelligence. Corresponding author of this paper. E-mail: wanghp@hit.edu.cn

  • Abstract: As an important component of clustering, border points play a key role in guiding clustering convergence and improving pattern-recognition capability. Border-peeling clustering, most recently represented by BP (border-peeling clustering), exploits latent border information to ensure the spatial isolation of cluster core regions, which improves the representativeness of cluster skeletons and resolves the border-membership problem. However, existing border-peeling clustering is still constrained by incomplete discriminative features, a single discrimination mode, and nested iteration. To address this, density clustering based on the border-peeling using space vector decomposition (CBPVD) is proposed. Taking the projected subspace and the original data space as references, it strengthens fine-grained border features from two perspectives, distribution sparsity (compactness) and directional skewness (symmetry), and then, through active border peeling, inversely builds the cluster skeleton and guides border membership. Compared with related algorithms, experimental results on 40 datasets (synthetic, UCI, video and image) and theoretical analysis from four perspectives show that CBPVD achieves good overall performance in high-dimensional clustering and border pattern recognition.
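    The abstract's core idea, scoring how likely each point is to lie on a cluster border from neighborhood sparsity and the directional skewness of its neighbors, peeling those points off, and then attaching them back to the core clusters, can be illustrated with a small sketch. The code below is a hypothetical, simplified illustration of this generic border-peeling scheme, not the authors' CBPVD; the function name border_peeling_sketch and its scoring rule are assumptions for illustration only.

```python
# Hypothetical sketch of generic border-peeling clustering; NOT the authors' CBPVD.
# It scores each point by (a) neighborhood sparsity and (b) directional skewness of
# its k nearest neighbors, peels the highest-scoring fraction tau as border points,
# clusters the remaining core points via mutual-kNN connectivity, and finally assigns
# each peeled point to the cluster of its nearest core point.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import NearestNeighbors

def border_peeling_sketch(X, k=10, tau=0.3):
    X = np.asarray(X, dtype=float)
    n = len(X)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)
    dist, idx = dist[:, 1:], idx[:, 1:]          # drop each point itself

    # (a) sparsity: mean kNN distance (larger => sparser neighborhood)
    sparsity = dist.mean(axis=1)
    # (b) skewness: norm of the mean unit vector towards the kNN; interior points
    #     have neighbors all around (norm near 0), border points do not (norm near 1)
    vecs = X[idx] - X[:, None, :]
    vecs /= np.linalg.norm(vecs, axis=2, keepdims=True) + 1e-12
    skewness = np.linalg.norm(vecs.mean(axis=1), axis=1)

    # combine both views into a border confidence and peel the top tau fraction
    score = 0.5 * sparsity / sparsity.max() + 0.5 * skewness
    border = score >= np.quantile(score, 1.0 - tau)
    core = np.flatnonzero(~border)

    # cluster core points: connected components of the mutual-kNN graph
    core_set = set(core.tolist())
    rows, cols = [], []
    for i in core:
        for j in idx[i]:
            if j in core_set and i in idx[j]:
                rows.append(i)
                cols.append(j)
    graph = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    _, labels = connected_components(graph, directed=False)

    # attach each border point to the cluster of its nearest core point
    if border.any():
        _, nearest = NearestNeighbors(n_neighbors=1).fit(X[core]).kneighbors(X[border])
        labels[border] = labels[core[nearest[:, 0]]]
    return labels
```

    A call such as labels = border_peeling_sketch(X, k=10, tau=0.3) returns one label per row of X, with the peeled border points inheriting the label of their nearest core point.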
  • Fig. 1  Illustration of boundary confidence calculation

    Fig. 2  The algorithm flow of CBPVD

    Fig. 3  Visualized results of algorithms on synthetic datasets

    Fig. 4  The clustering results on Olivetti faces by CBPVD

    Fig. 5  Running time test

    Fig. 6  The boundary information extracted on MNIST

    Fig. 7  The Nemenyi test result (see the sketch after this list)

    Fig. 8  Robustness analysis
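    Fig. 7 summarizes a Nemenyi test over the benchmark results. As a hedged illustration of how such a Friedman-plus-Nemenyi comparison is typically carried out (this is not the authors' script, and the scores matrix below is a random placeholder rather than data from the paper), SciPy can be combined with the third-party scikit-posthocs package:

```python
# Hedged sketch of a Friedman + Nemenyi comparison such as the one reported in Fig. 7.
# The scores matrix is a random placeholder (rows = datasets, columns = algorithms).
import numpy as np
import pandas as pd
import scikit_posthocs as sp
from scipy.stats import friedmanchisquare

algorithms = ["K-means", "DPC", "GB-DPC", "SNN-DPC", "EC", "BP", "CBPVD"]
rng = np.random.default_rng(0)
scores = rng.uniform(0.4, 1.0, size=(40, len(algorithms)))  # placeholder ACC values

# Friedman test: do the algorithms' average ranks differ at all?
stat, p = friedmanchisquare(*scores.T)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Nemenyi post-hoc test: pairwise significance between algorithms
print(sp.posthoc_nemenyi_friedman(pd.DataFrame(scores, columns=algorithms)))
```

    A critical-difference diagram such as Fig. 7 is the standard way of visualizing these pairwise results.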

    Table 1  Hyperparameter configuration

    Algorithm | Parameter setting
    K-means | $k$ = the actual number of clusters
    DPC | $dc \in [0.1, 20]$
    SNN-DPC | $k \in [3, 70]$
    GB-DPC | $dc \in [0.1, 20]$
    EC | $dc \in [0.1, 20]$ or $dc \in [100, 300]$
    BP | $k \in [3, 70]$, $b \in [0.1, 0.5]$, $\epsilon \in [0.1, 0.5]$, $T \in [100, 120]$, $C = 2$
    CBPVD | $k \in [3, 70]$, $\tau \in [0.1, 0.4]$
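    The ranges in Table 1 are search intervals rather than fixed values. A hedged illustration of how they could be swept, using the hypothetical border_peeling_sketch defined after the abstract as a stand-in for CBPVD and ARI as the selection criterion (this is not the authors' tuning procedure):

```python
# Hedged parameter sweep over the CBPVD-style ranges from Table 1 (k in [3, 70],
# tau in [0.1, 0.4]); border_peeling_sketch is the hypothetical stand-in defined above.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, y_true = make_blobs(n_samples=500, centers=5, random_state=0)

best_params, best_ari = None, -1.0
for k in range(3, 71, 4):
    for tau in np.arange(0.1, 0.41, 0.05):
        labels = border_peeling_sketch(X, k=k, tau=tau)
        ari = adjusted_rand_score(y_true, labels)
        if ari > best_ari:
            best_params, best_ari = (k, round(float(tau), 2)), ari
print("best (k, tau):", best_params, "ARI:", round(best_ari, 3))
```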

    Table 2  Basic information of datasets

    Dataset | Size | Dimensions | Clusters | Characteristics
    Compound | 399 | 2 | 6 | Multi-density, multi-scale
    R15 | 600 | 2 | 15 | Micro, adjoining
    Flame | 240 | 2 | 2 | Overlapping
    Parabolic | 2000 | 2 | 2 | Cross-winding, multi-density
    Jain | 373 | 2 | 2 | Cross-winding, multi-density
    4k2-far | 400 | 2 | 4 | Noise, convex
    D31 | 3100 | 2 | 31 | Multiple micro clusters
    Aggregation | 788 | 2 | 7 | Bridging
    Spiral | 240 | 2 | 3 | Manifold
    Heart disease | 303 | 13 | 2 | UCI, clinical medicine
    Hepatitis | 155 | 19 | 2 | UCI, clinical medicine
    German Credit | 1000 | 20 | 2 | UCI, financial
    Voting | 435 | 16 | 2 | UCI, political election
    Credit Approval | 690 | 15 | 2 | UCI, credit record
    Bank | 4521 | 16 | 2 | UCI, financial credit
    Sonar | 208 | 60 | 2 | UCI, geology exploration
    Zoo | 101 | 16 | 7 | UCI, biological species
    Parkinson | 195 | 22 | 2 | UCI, clinical medicine
    Post | 90 | 8 | 3 | UCI, postoperative recovery
    Spectheart | 267 | 22 | 2 | UCI, clinical medicine
    Wine | 178 | 13 | 3 | UCI, wine ingredients
    Ionosphere | 351 | 34 | 2 | UCI, atmospheric structure
    WDBC | 569 | 30 | 2 | UCI, cancer
    Optical Recognition | 5620 | 64 | 10 | OCR, handwritten digits
    Olivetti Face | 400 | 10304 | 40 | Face, high-dimensional
    You-Tube Faces | 10000 | 10000 | 41 | Video stream, face
    RNA-seq | 801 | 20531 | 5 | Gene expression, nonlinear
    REUTERS | 10000 | 10000 | 4 | Word, news, text
    G2-20 | 2048 | 2 | 2 | Noise 20%
    G2-30 | 2048 | 2 | 2 | Noise 30%
    G2-40 | 2048 | 2 | 2 | Noise 40%
    Size500 | 500 | 2 | 5 | Gaussian
    Size2500 | 2500 | 2 | 5 | Gaussian
    Size5000 | 5000 | 2 | 5 | Gaussian
    Size10000 | 10000 | 2 | 5 | Gaussian
    Dim128 | 1024 | 128 | 16 | High-dimensional
    Dim256 | 1024 | 256 | 16 | High-dimensional
    Dim512 | 1024 | 512 | 16 | High-dimensional
    Dim1024 | 1024 | 1024 | 16 | High-dimensional
    MNIST | 10000 | 784 | 10 | OCR, high-dimensional

    Table 3  Performance comparison of algorithms on all synthetic datasets

    Dataset | Algorithm | Parameter | ACC | Purity | JC | ARI | FMI
    4k2-far | K-means | $k$ = 4 | 1 | 1 | 0.13 | 1 | 1
     | DPC | $dc$ = 0.2168 | 1 | 1 | 1 | 1 | 1
     | GB-DPC | $dc$ = 0.5 | 1 | 1 | 0.26 | 1 | 1
     | SNN-DPC | $k$ = 10 | 1 | 1 | 1 | 1 | 1
     | EC | $\sigma$ = 1 | 1 | 1 | 1 | 1 | 1
     | BP |  | 0.98 | 0.99 | 0.01 | 0.97 | 0.98
     | CBPVD | 10, 0.1 | 1 | 1 | 1 | 1 | 1
    Aggregation | K-means | $k$ = 7 | 0.78 | 0.94 | 0 | 0.76 | 0.81
     | DPC | $k$ = 7, $dc$ = 2.5 | 0.91 | 0.95 | 0.22 | 0.84 | 0.87
     | GB-DPC | $dc$ = 2.5 | 0.64 | 0.99 | 0.09 | 0.57 | 0.68
     | SNN-DPC | $k$ = 40 | 0.98 | 0.98 | 0 | 0.96 | 0.97
     | EC | $\sigma$ = 5.5 | 1 | 1 | 0 | 1 | 1
     | BP |  | 1 | 0.95 | 0.72 | 0.99 | 0.99
     | CBPVD | 16, 0.24 | 1 | 1 | 1 | 1 | 1
    Compound | K-means | $k$ = 6 | 0.63 | 0.83 | 0.23 | 0.53 | 0.63
     | DPC | $dc$ = 1.25 | 0.64 | 0.83 | 0.15 | 0.54 | 0.64
     | GB-DPC | $dc$ = 1.8 | 0.68 | 0.83 | 0.23 | 0.54 | 0.64
     | SNN-DPC | $k$ = 12 | 0.76 | 0.84 | 0.24 | 0.63 | 0.74
     | EC | $\sigma$ = 5.8 | 0.68 | 0.86 | 0.68 | 0.59 | 0.69
     | BP |  | 0.77 | 0.91 | 0.77 | 0.65 | 0.73
     | CBPVD | 9, 0.08 | 0.90 | 0.91 | 0.13 | 0.94 | 0.96
    Flame | K-means | $k$ = 2 | 0.83 | 0.83 | 0.83 | 0.43 | 0.73
     | DPC | $dc$ = 0.93 | 0.84 | 0.84 | 0.16 | 0.45 | 0.74
     | GB-DPC | $dc$ = 2 | 0.99 | 0.99 | 0.99 | 0.97 | 0.98
     | SNN-DPC | $k$ = 5 | 0.99 | 0.99 | 0.01 | 0.95 | 0.98
     | EC | $\sigma$ = 5.4 | 0.80 | 0.93 | 0.14 | 0.51 | 0.74
     | BP |  | 0.98 | 0.99 | 0.65 | 0.96 | 0.98
     | CBPVD | 3, 0.11 | 1 | 1 | 1 | 1 | 1
    Spiral | K-means | $k$ = 3 | 0.35 | 0.35 | 0.33 | −0.01 | 0.33
     | DPC | $dc$ = 1.74 | 0.49 | 0.49 | 0.35 | 0.06 | 0.38
     | GB-DPC | $dc$ = 2.95 | 0.44 | 0.44 | 0.36 | 0.02 | 0.35
     | SNN-DPC | $k$ = 10 | 1 | 1 | 0 | 1 | 1
     | EC | $\sigma$ = 10 | 0.34 | 0.34 | 0.32 | 0 | 0.58
     | BP |  | 0.50 | 0.56 | 0.50 | 0.17 | 0.49
     | CBPVD | 5, 0.32 | 1 | 1 | 1 | 1 | 1
    Jain | K-means | $k$ = 2 | 0.79 | 0.79 | 0.21 | 0.32 | 0.70
     | DPC | $dc$ = 1.35 | 0.86 | 0.86 | 0.86 | 0.52 | 0.79
     | GB-DPC | $dc$ = 1.35 | 0.35 | 0.94 | 0.18 | 0.15 | 0.44
     | SNN-DPC | $k$ = 10 | 0.86 | 0.86 | 0.14 | 0.52 | 0.79
     | EC | $\sigma$ = 7.65 | 0.79 | 0.86 | 0.19 | 0.51 | 0.78
     | BP |  | 0.42 | 0.98 | 0.09 | 0.23 | 0.53
     | CBPVD | 13, 0.16 | 1 | 1 | 0 | 1 | 1
    R15 | K-means | $k$ = 15 | 0.81 | 0.86 | 0.03 | 0.80 | 0.81
     | DPC | $dc$ = 0.95 | 0.99 | 0.99 | 0 | 0.98 | 0.98
     | GB-DPC | $dc$ = 0.2 | 0.99 | 0.99 | 0.07 | 0.99 | 0.99
     | SNN-DPC | $k$ = 15 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99
     | EC | $\sigma$ = 1.45 | 0.98 | 0.98 | 0.98 | 0.97 | 0.97
     | BP |  | 0.99 | 0.99 | 0 | 0.99 | 0.99
     | CBPVD | 9, 0.13 | 1 | 1 | 1 | 1 | 1
    Parabolic | K-means | $k$ = 2 | 0.81 | 0.81 | 0.81 | 0.39 | 0.69
     | DPC | $dc$ = 1.5 | 0.82 | 0.82 | 0.82 | 0.41 | 0.71
     | GB-DPC | $dc$ = 0.5 | 0.94 | 0.94 | 0.06 | 0.77 | 0.89
     | SNN-DPC | $k$ = 9 | 0.95 | 0.95 | 0.95 | 0.81 | 0.91
     | EC | $\sigma$ = 3.05 | 0.73 | 0.73 | 0.73 | 0.21 | 0.66
     | BP |  | 0.19 | 0.98 | 0.03 | 0.13 | 0.36
     | CBPVD | 33, 0.27 | 1 | 1 | 1 | 1 | 1
    D31 | K-means | $k$ = 31 | 0.88 | 0.91 | 0 | 0.87 | 0.87
     | DPC | $dc$ = 1.8 | 0.97 | 0.97 | 0 | 0.94 | 0.94
     | GB-DPC | $dc$ = 4 | 0.46 | 0.46 | 0.02 | 0.32 | 0.45
     | SNN-DPC | $k$ = 40 | 0.97 | 0.97 | 0 | 0.94 | 0.94
     | EC | $\sigma$ = 4 | 0.91 | 0.91 | 0.06 | 0.88 | 0.89
     | BP |  | 0.94 | 0.95 | 0 | 0.90 | 0.91
     | CBPVD | 13, 0.15 | 0.97 | 0.97 | 0.07 | 0.94 | 0.94
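    Tables 3 to 5 report ACC, Purity, JC, ARI and FMI. As a hedged illustration (not the authors' evaluation code, and the paper's exact metric definitions may differ slightly), these five scores are commonly computed from ground-truth labels y_true and predicted cluster labels y_pred as follows, with ACC obtained via Hungarian matching:

```python
# Hedged illustration of the five reported clustering metrics: ACC, Purity, JC,
# ARI and FMI, computed with scikit-learn and SciPy.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, fowlkes_mallows_score
from sklearn.metrics.cluster import contingency_matrix, pair_confusion_matrix

def clustering_metrics(y_true, y_pred):
    C = contingency_matrix(y_true, y_pred)       # rows: true classes, cols: clusters

    # ACC: best one-to-one matching between clusters and classes (Hungarian algorithm)
    row, col = linear_sum_assignment(-C)
    acc = C[row, col].sum() / C.sum()

    # Purity: each cluster is credited with its majority class
    purity = C.max(axis=0).sum() / C.sum()

    # JC (Jaccard coefficient over point pairs): TP / (TP + FP + FN)
    (tn, fp), (fn, tp) = pair_confusion_matrix(y_true, y_pred)
    jc = tp / (tp + fp + fn)

    ari = adjusted_rand_score(y_true, y_pred)
    fmi = fowlkes_mallows_score(y_true, y_pred)
    return acc, purity, jc, ari, fmi
```

    For example, clustering_metrics(y_true, labels) returns the five values in the same order as the table columns.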

    Table 4  Performance comparison of algorithms on 16 real-world datasets (UCI)

    Dataset | Algorithm | Parameter | ACC | Purity | JC | ARI | FMI
    Heart disease | K-means | $k$ = 2 | 0.57 | 0.57 | 0.57 | 0.02 | 0.52
     | DPC | $dc$ = 19.4424 | 0.55 | 0.55 | 0.45 | 0.01 | 0.51
     | GB-DPC | $dc$ = 19.4424 | 0.54 | 0.54 | 0.54 | 0 | 0.71
     | SNN-DPC | $k$ = 65 | 0.59 | 0.59 | 0.41 | 0.03 | 0.54
     | EC | $\sigma$ = 100 | 0.54 | 0.54 | 0.46 | −0.001 | 0.71
     | BP |  | 0.53 | 0.54 | 0.47 | −0.002 | 0.68
     | CBPVD | 0.27, 26 | 0.68 | 0.68 | 0.32 | 0.12 | 0.77
    Hepatitis | K-means | $k$ = 2 | 0.66 | 0.84 | 0.66 | −0.02 | 0.67
     | DPC | $dc$ = 1 | 0.63 | 0.84 | 0.01 | −0.11 | 0.61
     | GB-DPC | $dc$ = 10.2 | 0.73 | 0.70 | 0.28 | −0.01 | 0.72
     | SNN-DPC | $k$ = 45 | 0.70 | 0.84 | 0.30 | −0.07 | 0.71
     | EC | $\sigma$ = 5.8 | 0.01 | 1 | 0.01 | 0 | 0.01
     | BP |  | 0.83 | 0.84 | 0.83 | −0.02 | 0.84
     | CBPVD | 10, 0.2 | 0.84 | 0.84 | 0.76 | 0 | 0.85
    German | K-means | $k$ = 2 | 0.67 | 0.70 | 0.33 | 0.05 | 0.66
     | DPC | $dc$ = 53.9814 | 0.61 | 0.70 | 0.61 | 0.03 | 0.58
     | GB-DPC | $dc$ = 53.9814 | 0.61 | 0.70 | 0.61 | 0.03 | 0.58
     | SNN-DPC | $k$ = 30 | 0.62 | 0.70 | 0.39 | 0.01 | 0.61
     | EC | $\sigma$ = 100 | 0.15 | 0.72 | 0.01 | 0.01 | 0.20
     | BP |  | 0.14 | 0.70 | 0.07 | 0.001 | 0.20
     | CBPVD | 4, 0.39 | 0.83 | 0.83 | 0.83 | 0.43 | 0.74
    Voting | K-means | $k$ = 2 | 0.51 | 0.61 | 0.51 | −0.002 | 0.51
     | DPC | $dc$ = 1 | 0.81 | 0.81 | 0.19 | 0.39 | 0.7
     | GB-DPC | $dc$ = 1.7 | 0.87 | 0.87 | 0.87 | 0.54 | 0.78
     | SNN-DPC | $k$ = 60 | 0.88 | 0.88 | 0.12 | 0.57 | 0.79
     | EC | $\sigma$ = 2 | 0.75 | 0.89 | 0.75 | 0.42 | 0.68
     | BP |  | 0.86 | 0.91 | 0.05 | 0.59 | 0.79
     | CBPVD | 66, 0.33 | 0.88 | 0.88 | 0.12 | 0.68 | 0.79
    Credit | K-means | $k$ = 2 | 0.55 | 0.55 | 0.45 | 0.003 | 0.71
     | DPC | $dc$ = 1 | 0.68 | 0.68 | 0.68 | 0.13 | 0.60
     | GB-DPC | $dc$ = 7 | 0.55 | 0.55 | 0.45 | 0 | 0.71
     | SNN-DPC | $k$ = 50 | 0.61 | 0.61 | 0.61 | 0.05 | 0.53
     | EC | $\sigma$ = 800 | 0.56 | 0.59 | 0 | 0.02 | 0.68
     | BP |  | 0.33 | 0.69 | 0.26 | 0.06 | 0.35
     | CBPVD | 31, 0.33 | 0.85 | 0.85 | 0.85 | 0.49 | 0.74
    Bank | K-means | $k$ = 2 | 0.82 | 0.88 | 0.11 | −0.002 | 0.82
     | DPC | $dc$ = 2.39 | 0.64 | 0.88 | 0.14 | 0.04 | 0.65
     | GB-DPC | $dc$ = 10 | 0.76 | 0.74 | 0.24 | −0.02 | 0.76
     | SNN-DPC | $k$ = 3 | 0.81 | 0.88 | 0.81 | 0.01 | 0.81
     | EC | $\sigma$ = 300 | 0.82 | 0.82 | 0 | 0.02 | 0.82
     | BP |  | 0.24 | 0.88 | 0.09 | 0.01 | 0.29
     | CBPVD | 24, 0.2 | 0.88 | 0.88 | 0.12 | 0 | 0.89
    Sonar | K-means | $k$ = 2 | 0.54 | 0.54 | 0.34 | 0.50 | 0.50
     | DPC | $dc$ = 2.82 | 0.58 | 0.58 | 0.42 | 0.02 | 0.66
     | GB-DPC | $dc$ = 1.4 | 0.51 | 0.53 | 0.51 | −0.004 | 0.51
     | SNN-DPC | $k$ = 19 | 0.50 | 0.53 | 0.50 | −0.01 | 0.51
     | EC | $\sigma$ = 1.6 | 0.54 | 0.57 | 0.07 | 0.01 | 0.66
     | BP |  | 0.51 | 0.53 | 0.51 | −0.004 | 0.68
     | CBPVD | 9, 0.66 | 0.66 | 0.66 | 0.66 | 0.10 | 0.60
    Zoo | K-means | $k$ = 7 | 0.76 | 0.84 | 0.62 | 0.6 | 0.69
     | DPC | $dc$ = 2.4 | 0.70 | 0.79 | 0.36 | 0.59 | 0.68
     | GB-DPC | $dc$ = 3.6 | 0.66 | 0.75 | 0.03 | 0.48 | 0.60
     | SNN-DPC | $k$ = 5 | 0.56 | 0.56 | 0.12 | 0.31 | 0.53
     | EC | $\sigma$ = 2.3 | 0.80 | 0.81 | 0.08 | 0.65 | 0.73
     | BP |  | 0.59 | 0.59 | 0.23 | 0.4 | 0.62
     | CBPVD | 10, 0.15 | 0.86 | 0.86 | 0.01 | 0.93 | 0.94
    Parkinson | K-means | $k$ = 2 | 0.72 | 0.75 | 0.28 | 0 | 0.74
     | DPC | $dc$ = 1.3 | 0.66 | 0.75 | 0.34 | 0.05 | 0.63
     | GB-DPC | $dc$ = 3 | 0.71 | 0.71 | 0.29 | −0.05 | 0.75
     | SNN-DPC | $k$ = 80 | 0.72 | 0.75 | 0.28 | 0.11 | 0.69
     | EC | $\sigma$ = 135 | 0.70 | 0.75 | 0.7 | 0.14 | 0.66
     | BP |  | 0.19 | 0.98 | 0.03 | 0.13 | 0.36
     | CBPVD | 13, 0.16 | 0.82 | 0.82 | 0.82 | 0.25 | 0.81
    Post | K-means | $k$ = 3 | 0.43 | 0.71 | 0.43 | −0.002 | 0.45
     | DPC | $dc$ = 1 | 0.53 | 0.71 | 0.53 | −0.01 | 0.52
     | GB-DPC | $dc$ = 2.7 | 0.61 | 0.71 | 0.38 | −0.03 | 0.62
     | SNN-DPC | $k$ = 60 | 0.61 | 0.71 | 0.61 | 0.02 | 0.60
     | EC | $\sigma$ = 6 | 0.70 | 0.72 | 0.05 | 0.04 | 0.74
     | BP |  | 0.62 | 0.72 | 0.09 | 0.04 | 0.61
     | CBPVD | 10, 0.01 | 0.79 | 0.79 | 0.79 | 0.25 | 0.78
    Spectheart | K-means | $k$ = 2 | 0.64 | 0.92 | 0.64 | −0.05 | 0.69
     | DPC | $dc$ = 1.4142 | 0.52 | 0.92 | 0.48 | −0.01 | 0.65
     | GB-DPC | $dc$ = 1.1 | 0.52 | 0.92 | 0.08 | 0 | 0.92
     | SNN-DPC | $k$ = 80 | 0.87 | 0.92 | 0.13 | 0.11 | 0.87
     | EC | $\sigma$ = 4 | 0.92 | 0.92 | 0.08 | 0 | 0.92
     | BP |  | 0.91 | 0.92 | 0.91 | −0.01 | 0.91
     | CBPVD | 15, 0.26 | 0.92 | 0.92 | 0.08 | 0 | 0.92
    Wine | K-means | $k$ = 4 | 0.66 | 0.70 | 0.11 | 0.32 | 0.54
     | DPC | $dc$ = 0.5 | 0.55 | 0.58 | 0.43 | 0.15 | 0.57
     | GB-DPC | $dc$ = 5.6 | 0.60 | 0.71 | 0.35 | 0.27 | 0.50
     | SNN-DPC | $k$ = 3 | 0.62 | 0.66 | 0.51 | 0.34 | 0.63
     | EC | $\sigma$ = 250 | 0.66 | 0.66 | 0.66 | 0.37 | 0.66
     | BP |  | 0.68 | 0.71 | 0.21 | 0.34 | 0.56
     | CBPVD | 4, 0.03 | 0.91 | 0.95 | 0.75 | 0.8 | 0.87
    Ionosphere | K-means | $k$ = 2 | 0.71 | 0.71 | 0.71 | 0.18 | 0.61
     | DPC | $dc$ = 3.7 | 0.65 | 0.65 | 0.35 | 0.02 | 0.73
     | GB-DPC | $dc$ = 3.7 | 0.65 | 0.65 | 0.35 | 0.02 | 0.73
     | SNN-DPC | $k$ = 34 | 0.67 | 0.67 | 0.67 | 0.11 | 0.57
     | EC | $\sigma$ = 5 | 0.65 | 0.67 | 0 | 0.05 | 0.73
     | BP |  | 0.80 | 0.80 | 0.80 | 0.34 | 0.76
     | CBPVD | 6, 0.51 | 0.83 | 0.83 | 0.87 | 0.42 | 0.77
    WDBC | K-means | $k$ = 2 | 0.74 | 0.89 | 0.22 | 0.54 | 0.76
     | DPC | $dc$ = 5 | 0.67 | 0.67 | 0.67 | 0.10 | 0.60
     | GB-DPC | $dc$ = 3.9 | 0.63 | 0.63 | 0.63 | 0 | 0.73
     | SNN-DPC | $k$ = 3 | 0.81 | 0.81 | 0.19 | 0.36 | 0.75
     | EC | $\sigma$ = 350 | 0.82 | 0.87 | 0 | 0.49 | 0.78
     | BP |  | 0.44 | 0.88 | 0.12 | 0.25 | 0.52
     | CBPVD | 3, 0.6 | 0.95 | 0.95 | 0.05 | 0.81 | 0.91
    RNA-seq | K-means | $k$ = 5 | 0.75 | 0.75 | 0.17 | 0.72 | 0.79
     | DPC | $dc$ = 159.6 | 0.70 | 0.73 | 0.39 | 0.62 | 0.76
     | GB-DPC | $dc$ = 159.6 | 0.73 | 0.73 | 0.54 | 0.63 | 0.77
     | SNN-DPC | $k$ = 30 | 0.73 | 0.73 | 0.001 | 0.51 | 0.71
     | EC | $\sigma$ = 240 | 0.38 | 0.38 | 0.17 | 0 | 0.49
     | BP |  | 0.78 | 0.74 | 0.002 | 0.63 | 0.72
     | CBPVD | 10, 0.4 | 0.996 | 0.996 | 0.81 | 0.99 | 0.99
    REUTERS | K-means | $k$ = 4 | 0.50 | 0.58 | 0.22 | 0.15 | 0.41
     | DPC | $dc$ = 3.5 | 0.43 | 0.43 | 0.28 | 0.10 | 0.46
     | GB-DPC | $dc$ = 3.5 | 0.35 | 0.55 | 0 | 0.14 | 0.41
     | SNN-DPC | $k$ = 40 | 0.49 | 0.50 | 0.49 | 0.24 | 0.54
     | EC | $\sigma$ = 300 | 0.40 | 0.40 | 0.40 | 0 | 0.55
     | BP |  | 0.39 | 0.41 | 0.38 | 0.01 | 0.50
     | CBPVD | 20, 0.1 | 0.61 | 0.61 | 0.61 | 0.23 | 0.47

    Table 5  Performance comparison of algorithms on image datasets

    Dataset | Algorithm | Parameter | ACC | Purity | JC | ARI | FMI
    Olivetti | K-means | $k$ = 40 | 0.64 | 0.67 | 0.01 | 0.517 | 0.54
     | DPC | $dc$ = 0.922 | 0.59 | 0.65 | 0.02 | 0.523 | 0.56
     | GB-DPC | $dc$ = 0.65 | 0.65 | 0.73 | 0.05 | 0.577 | 0.59
     | SNN-DPC | $k$ = 40 | 0.66 | 0.74 | 0 | 0.585 | 0.61
     | EC | $\sigma$ = 3700 | 0.44 | 0.58 | 0.02 | 0.22 | 0.32
     | BP |  | 0.03 | 0.03 | 0.03 | 0 | 0.15
     | CBPVD | 4, 0.14 | 0.75 | 0.78 | 0 | 0.646 | 0.68
    Optical | K-means | $k$ = 10 | 0.71 | 0.73 | 0.04 | 0.58 | 0.63
     | DPC | $dc$ = 1.1 | 0.60 | 0.62 | 0.09 | 0.475 | 0.56
     | GB-DPC | $dc$ = 10.5 | 0.61 | 0.62 | 0.02 | 0.468 | 0.56
     | SNN-DPC | $k$ = 10 | 0.71 | 0.73 | 0.20 | 0.629 | 0.69
     | EC | $\sigma$ = 30 | 0.69 | 0.69 | 0.17 | 0.596 | 0.67
     | BP |  | 0.80 | 0.85 | 0 | 0.717 | 0.75
     | CBPVD | 4, 0.45 | 0.93 | 0.95 | 0.30 | 0.889 | 0.90
    You-Tube Faces | K-means | $k$ = 41 | 0.52 | 0.63 | 0.02 | 0.51 | 0.53
     | DPC | $dc$ = 6.5 | 0.53 | 0.62 | 0.02 | 0.48 | 0.51
     | GB-DPC | $dc$ = 6.5 | 0.31 | 0.31 | 0 | 0.25 | 0.35
     | SNN-DPC | $k$ = 59 | 0.57 | 0.69 | 0.03 | 0.47 | 0.50
     | EC | $\sigma$ = 100 | 0.51 | 0.56 | 0.01 | 0.40 | 0.46
     | BP |  | 0.52 | 0.62 | 0.04 | 0.19 | 0.32
     | CBPVD | 20, 0.1 | 0.66 | 0.88 | 0.01 | 0.62 | 0.64

    Table 6  The time complexity of algorithms

    Algorithm | Time complexity
    DBSCAN | $\text{O}(n^2)$
    DPC | $\text{O}(n^2)$
    GB-DPC | $\text{O}(n\log_2 n)$
    SNN-DPC | $\text{O}(n^2)$
    DPC-RDE | $\text{O}(n^2)$
    RA-Clust | $\text{O}(n\sqrt{n})$
    EC | $\text{O}(n^2)$
    BP | $\text{O}(n^2)$
    CBPVD | $\text{O}(n^2)$
Publication history
  • Received: 2022-03-21
  • Accepted: 2022-10-14
  • Published online: 2023-04-17
  • Issue date: 2023-06-20
