TY  - BOOK
AU  - Rokach, Lior
ED  - World Scientific (Firm)
TI  - Pattern classification using ensemble methods
T2  - Series in machine perception and artificial intelligence
SN  - 9789814271073
AV  - TK7882.P3 R65 2010eb
U1  - 006.4 22
PY  - 2010///
CY  - Singapore ; Hackensack, N.J.
PB  - World Scientific Pub. Co.
KW  - Pattern recognition systems
KW  - Algorithms
KW  - Machine learning
KW  - Classification
KW  - Set theory
KW  - Pattern Recognition, Automated
KW  - Reconnaissance des formes (Informatique)
KW  - Théorie des ensembles
KW  - Algorithmes
KW  - Apprentissage automatique
KW  - COMPUTERS
KW  - Optical Data Processing
KW  - Mustererkennung
KW  - Electronic book
N1  - Includes bibliographical references (pages 185-222) and index
N1  - 1. Introduction to pattern classification. 1.1. Pattern classification. 1.2. Induction algorithms. 1.3. Rule induction. 1.4. Decision trees. 1.5. Bayesian methods. 1.6. Other induction methods -- 2. Introduction to ensemble learning. 2.1. Back to the roots. 2.2. The wisdom of crowds. 2.3. The bagging algorithm. 2.4. The boosting algorithm. 2.5. The AdaBoost algorithm. 2.6. No free lunch theorem and ensemble learning. 2.7. Bias-variance decomposition and ensemble learning. 2.8. Occam's razor and ensemble learning. 2.9. Classifier dependency. 2.10. Ensemble methods for advanced classification tasks -- 3. Ensemble classification. 3.1. Fusion methods. 3.2. Selecting classification. 3.3. Mixture of experts and meta learning -- 4. Ensemble diversity. 4.1. Overview. 4.2. Manipulating the inducer. 4.3. Manipulating the training samples. 4.4. Manipulating the target attribute representation. 4.5. Partitioning the search space. 4.6. Multi-inducers. 4.7. Measuring the diversity -- 5. Ensemble selection. 5.1. Ensemble selection. 5.2. Pre-selection of the ensemble size. 5.3. Selection of the ensemble size while training. 5.4. Pruning -- post-selection of the ensemble size -- 6. Error correcting output codes. 6.1. Code-matrix decomposition of multiclass problems. 6.2. Type I -- training an ensemble given a code-matrix. 6.3. Type II -- adapting code-matrices to the multiclass problems -- 7. Evaluating ensembles of classifiers. 7.1. Generalization error. 7.2. Computational complexity. 7.3. Interpretability of the resulting ensemble. 7.4. Scalability to large datasets. 7.5. Robustness. 7.6. Stability. 7.7. Flexibility. 7.8. Usability. 7.9. Software availability. 7.10. Which ensemble method should be used?
N2  - Researchers from disciplines such as pattern recognition, statistics, and machine learning have explored the use of ensemble methodology since the late 1970s. Given the growing interest in the field, they now face a wide variety of methods. This book aims to impose a degree of order on this diversity by presenting a coherent and unified repository of ensemble methods, theories, trends, challenges, and applications. It describes the classical methods in detail, as well as the extensions and novel approaches developed more recently. Along with an algorithmic description of each method, it explains the circumstances in which the method is applicable, and the consequences and trade-offs incurred by using it.
UR  - https://search.ebscohost.com/login.aspx?direct=true&scope=site&db=nlebk&db=nlabk&AN=340641
ER  - 