[1] Fumera G, Roli F, Serrau A. A theoretical analysis of bagging as a linear combination of classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(7): 1293-1299
[2] Hao H W, Liu C L, Sako H. Comparison of genetic algorithm and sequential search methods for classifier subset selection. In: Proceedings of the 7th International Conference on Document Analysis and Recognition. Edinburgh, Scotland: IEEE, 2003. 765-769
[3] Brown G. An information theoretic perspective on multiple classifier systems. In: Proceedings of the 8th International Workshop on Multiple Classifier Systems. Reykjavik, Iceland: Springer-Verlag, 2009. 344-353
[4] Ruta D, Gabrys B. Classifier selection for majority voting. Information Fusion, 2005, 6(1): 63-81
[5] Zhou Z H, Wu J X, Tang W. Ensembling neural networks: many could be better than all. Artificial Intelligence, 2002, 137(1-2): 239-263
[6] Kang H J, Doermann D. Selection of classifiers for the construction of multiple classifier systems. In: Proceedings of the 8th International Conference on Document Analysis and Recognition. Seoul, Korea: IEEE, 2005. 1194-1198
[7] Ko A H R, Sabourin R, Britto A S. From dynamic classifier selection to dynamic ensemble selection. Pattern Recognition, 2008, 41(5): 1735-1748
[8] Chen L, Kamel M S. A generalized adaptive ensemble generation and aggregation approach for multiple classifier systems. Pattern Recognition, 2009, 42(5): 629-644
[9] Liu R, Yuan B. Multiple classifier combination by clustering and selection. Information Fusion, 2001, 2(3): 163-168
[10] Li Guo-Zheng, Yang Jie, Kong An-Sheng, Chen Nian-Yi. Clustering algorithm based selective ensemble. Journal of Fudan University (Natural Science), 2004, 43(5): 689-691 (李国正, 杨杰, 孔安生, 陈念贻. 基于聚类算法的选择性神经网络集成. 复旦学报 (自然科学版), 2004, 43(5): 689-691)
[11] Kim Y W, Oh I S. Classifier ensemble selection using hybrid genetic algorithms. Pattern Recognition Letters, 2008, 29(6): 796-802
[12] Santos E M, Sabourin R, Maupin P. Overfitting cautious selection of classifier ensembles with genetic algorithms. Information Fusion, 2009, 10(2): 150-162
[13] Jackowski K, Wozniak M. Method of classifier selection using the genetic approach. Expert Systems, 2010, 27(2): 114-128
[14] Banfield R E, Hall L O, Bowyer K W, Kegelmeyer W P. Ensemble diversity measures and their application to thinning. Information Fusion, 2005, 6(1): 49-62
[15] Didaci L, Giacinto G, Roli F, Marcialis G L. A study on the performances of dynamic classifier selection based on local accuracy estimation. Pattern Recognition, 2005, 38(11): 2188-2191
[16] Didaci L, Giacinto G. Dynamic classifier selection by adaptive K-nearest neighborhood rule. In: Proceedings of the 5th International Workshop on Multiple Classifier Systems. Cagliari, Italy: Springer-Verlag, 2004. 174-183
[17] Kuncheva L I, Whitaker C J. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning, 2003, 51(2): 181-207
[18] Liu C L, Hao H W, Sako H. Confidence transformation for combining classifiers. Pattern Analysis and Applications, 2004, 7(1): 2-17
[19] Kherallah M, Haddad L, Alimi A M, Mitiche A. On-line handwritten digit recognition based on trajectory and velocity modeling. Pattern Recognition Letters, 2008, 29(5): 580-594
[20] Chen Z. Handwritten digits recognition. In: Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition. Las Vegas, USA: CSREA, 2009. 690-694
[21] Hao Hong-Wei, Jiang Rong-Rong. Training sample selection method for neural networks based on nearest neighbor rule. Acta Automatica Sinica, 2007, 33(12): 1247-1251 (郝红卫, 蒋蓉蓉. 基于最近邻规则的神经网络训练样本选择方法. 自动化学报, 2007, 33(12): 1247-1251)