The elements of statistical learning: data mining, inference, and prediction
Saved in:
Contributors: | Hastie, Trevor 1953-; Tibshirani, Robert 1956-; Friedman, Jerome H. 1939- |
---|---|
Format: | Book |
Language: | English |
Published: | New York, NY : Springer, [2017] |
Edition: | Second edition, corrected at 12th printing |
Series: | Springer series in statistics |
Subjects: | Maschinelles Lernen; Anwendung; Datenanalyse; Statistik |
Links: | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=029812524&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
Note: | Edition statement on verso of title page: "Corrected at 12th printing 2017" |
Physical description: | xxii, 745 pages, illustrations, diagrams |
ISBN: | 9780387848570; 0387848576 |
Internal format
MARC
LEADER | 00000nam a2200000 c 4500 | ||
---|---|---|---|
001 | BV044410726 | ||
003 | DE-604 | ||
005 | 20230411 | ||
007 | t| | ||
008 | 170714s2017 xx a||| s||| 00||| eng d | ||
020 | |a 9780387848570 |c print : hbk. : ca. EUR 80.24 (DE) |9 978-0-387-84857-0 | ||
020 | |a 0387848576 |9 0-387-84857-6 | ||
035 | |a (OCoLC)989956421 | ||
035 | |a (DE-599)BVBBV044410726 | ||
040 | |a DE-604 |b ger |e rda | ||
041 | 0 | |a eng | |
049 | |a DE-19 |a DE-898 |a DE-83 |a DE-91G |a DE-573 |a DE-706 |a DE-20 |a DE-739 |a DE-29T |a DE-M347 |a DE-861 |a DE-N2 |a DE-355 |a DE-703 |a DE-521 |a DE-188 |a DE-945 |a DE-860 |a DE-Er8 |a DE-1043 |a DE-92 | ||
050 | 0 | |a Q325.75 | |
082 | 0 | |a 006.3'1 22 |2 22 | |
084 | |a QH 231 |0 (DE-625)141546: |2 rvk | ||
084 | |a ST 530 |0 (DE-625)143679: |2 rvk | ||
084 | |a SK 830 |0 (DE-625)143259: |2 rvk | ||
084 | |a SK 840 |0 (DE-625)143261: |2 rvk | ||
084 | |a CM 4000 |0 (DE-625)18951: |2 rvk | ||
084 | |a 510 |2 23sdnb | ||
084 | |a DAT 708f |2 stub | ||
084 | |a 65Hxx |2 msc | ||
084 | |a MAT 620f |2 stub | ||
100 | 1 | |a Hastie, Trevor |d 1953- |e Verfasser |0 (DE-588)172128242 |4 aut | |
245 | 1 | 0 | |a The elements of statistical learning |b data mining, inference, and prediction |c Trevor Hastie ; Robert Tibshirani ; Jerome Friedman |
250 | |a Second edition, corrected at 12th printing | ||
264 | 1 | |a New York, NY |b Springer |c [2017] | |
264 | 4 | |c © 2017 | |
300 | |a xxii, 745 Seiten |b Illustrationen, Diagramme | ||
336 | |b txt |2 rdacontent | ||
337 | |b n |2 rdamedia | ||
338 | |b nc |2 rdacarrier | ||
490 | 0 | |a Springer series in statistics | |
500 | |a Ausgabebezeichnung auf Titelrückseite: "Corrected at 12th printing 2017" | ||
650 | 0 | 7 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Anwendung |0 (DE-588)4196864-5 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Datenanalyse |0 (DE-588)4123037-1 |2 gnd |9 rswk-swf |
650 | 0 | 7 | |a Statistik |0 (DE-588)4056995-0 |2 gnd |9 rswk-swf |
655 | 7 | |0 (DE-588)4056995-0 |a Statistik |2 gnd-content | |
689 | 0 | 0 | |a Statistik |0 (DE-588)4056995-0 |D s |
689 | 0 | 1 | |a Anwendung |0 (DE-588)4196864-5 |D s |
689 | 0 | 2 | |a Datenanalyse |0 (DE-588)4123037-1 |D s |
689 | 0 | |5 DE-604 | |
689 | 1 | 0 | |a Statistik |0 (DE-588)4056995-0 |D s |
689 | 1 | 1 | |a Maschinelles Lernen |0 (DE-588)4193754-5 |D s |
689 | 1 | |5 DE-604 | |
700 | 1 | |a Tibshirani, Robert |d 1956- |e Verfasser |0 (DE-588)172417740 |4 aut | |
700 | 1 | |a Friedman, Jerome H. |d 1939- |e Verfasser |0 (DE-588)134071484 |4 aut | |
776 | 0 | 8 | |i Erscheint auch als |n Online-Ausgabe |z 978-0-387-84858-7 |
856 | 4 | 2 | |m Digitalisierung UB Regensburg - ADAM Catalogue Enrichment |q application/pdf |u http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=029812524&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |3 Inhaltsverzeichnis |
259 | |a 2,12 | ||
943 | 1 | |a oai:aleph.bib-bvb.de:BVB01-029812524 |
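The MARC fields above use the catalogue's pipe-delimited subfield display (`|a`, `|b`, `|9`, ...). As an illustration only, not part of the record and not the library system's actual code, here is a minimal Python sketch that splits such a field body into subfields and checks that the two ISBNs in the 020 fields (ISBN-13 9780387848570 and ISBN-10 0387848576) encode the same book:

```python
# Illustrative sketch: parse the "|code value" subfield notation shown in
# the MARC listing above, and cross-check the two 020 (ISBN) fields.

def parse_subfields(field_body: str) -> dict:
    """Split a field body like '|a 9780387848570 |9 978-0-387-84857-0'
    into a {code: value} dict (first occurrence wins for repeated codes)."""
    subfields = {}
    for chunk in field_body.split("|")[1:]:
        code, _, value = chunk.partition(" ")
        subfields.setdefault(code, value.strip())
    return subfields

def isbn10_to_isbn13(isbn10: str) -> str:
    """Re-encode an ISBN-10 as ISBN-13: drop the old check digit,
    prefix 978, recompute the check digit with 1-3-1-3... weights."""
    core = "978" + isbn10[:9]
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(core))
    return core + str((10 - total % 10) % 10)

f245 = parse_subfields("|a The elements of statistical learning "
                       "|b data mining, inference, and prediction")
print(f245["a"])  # -> The elements of statistical learning

# The two 020 fields above agree once the ISBN-10 is re-encoded:
assert isbn10_to_isbn13("0387848576") == "9780387848570"
```

Both values used here are taken verbatim from fields 245 and 020 of the record above.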
Record in the search index
DE-BY-TUM_call_number | 0048 MAT 620f 2001 A 17531(2,2017); 0303 MAT 620f 2017 L 727(2,2017) |
---|---|
DE-BY-TUM_katkey | 2293810 |
DE-BY-TUM_location | LSB 03 |
DE-BY-TUM_media_number | 040008855529 040008164830 040008164885 040008164874 040008164705 040008164794 040008503126 040008164829 040008164807 040008164783 040008164818 040008164727 040008164750 040008164841 040008164863 040008164909 040008164761 040008164852 040008164772 040008164749 040008164738 040008164716 040008164896 |
_version_ | 1821934230270115840 |
adam_text | Contents

Preface to the Second Edition
Preface to the First Edition

1 Introduction

2 Overview of Supervised Learning
  2.1 Introduction
  2.2 Variable Types and Terminology
  2.3 Two Simple Approaches to Prediction: Least Squares and Nearest Neighbors
    2.3.1 Linear Models and Least Squares
    2.3.2 Nearest-Neighbor Methods
    2.3.3 From Least Squares to Nearest Neighbors
  2.4 Statistical Decision Theory
  2.5 Local Methods in High Dimensions
  2.6 Statistical Models, Supervised Learning and Function Approximation
    2.6.1 A Statistical Model for the Joint Distribution Pr(X, Y)
    2.6.2 Supervised Learning
    2.6.3 Function Approximation
  2.7 Structured Regression Models
    2.7.1 Difficulty of the Problem
  2.8 Classes of Restricted Estimators
    2.8.1 Roughness Penalty and Bayesian Methods
    2.8.2 Kernel Methods and Local Regression
    2.8.3 Basis Functions and Dictionary Methods
  2.9 Model Selection and the Bias-Variance Tradeoff
  Bibliographic Notes
  Exercises

3 Linear Methods for Regression
  3.1 Introduction
  3.2 Linear Regression Models and Least Squares
    3.2.1 Example: Prostate Cancer
    3.2.2 The Gauss-Markov Theorem
    3.2.3 Multiple Regression from Simple Univariate Regression
    3.2.4 Multiple Outputs
  3.3 Subset Selection
    3.3.1 Best-Subset Selection
    3.3.2 Forward- and Backward-Stepwise Selection
    3.3.3 Forward-Stagewise Regression
    3.3.4 Prostate Cancer Data Example (Continued)
  3.4 Shrinkage Methods
    3.4.1 Ridge Regression
    3.4.2 The Lasso
    3.4.3 Discussion: Subset Selection, Ridge Regression and the Lasso
    3.4.4 Least Angle Regression
  3.5 Methods Using Derived Input Directions
    3.5.1 Principal Components Regression
    3.5.2 Partial Least Squares
  3.6 Discussion: A Comparison of the Selection and Shrinkage Methods
  3.7 Multiple Outcome Shrinkage and Selection
  3.8 More on the Lasso and Related Path Algorithms
    3.8.1 Incremental Forward Stagewise Regression
    3.8.2 Piecewise-Linear Path Algorithms
    3.8.3 The Dantzig Selector
    3.8.4 The Grouped Lasso
    3.8.5 Further Properties of the Lasso
    3.8.6 Pathwise Coordinate Optimization
  3.9 Computational Considerations
  Bibliographic Notes
  Exercises

4 Linear Methods for Classification
  4.1 Introduction
  4.2 Linear Regression of an Indicator Matrix
  4.3 Linear Discriminant Analysis
    4.3.1 Regularized Discriminant Analysis
    4.3.2 Computations for LDA
    4.3.3 Reduced-Rank Linear Discriminant Analysis
  4.4 Logistic Regression
    4.4.1 Fitting Logistic Regression Models
    4.4.2 Example: South African Heart Disease
    4.4.3 Quadratic Approximations and Inference
    4.4.4 L1 Regularized Logistic Regression
    4.4.5 Logistic Regression or LDA?
  4.5 Separating Hyperplanes
    4.5.1 Rosenblatt's Perceptron Learning Algorithm
    4.5.2 Optimal Separating Hyperplanes
  Bibliographic Notes
  Exercises

5 Basis Expansions and Regularization
  5.1 Introduction
  5.2 Piecewise Polynomials and Splines
    5.2.1 Natural Cubic Splines
    5.2.2 Example: South African Heart Disease (Continued)
    5.2.3 Example: Phoneme Recognition
  5.3 Filtering and Feature Extraction
  5.4 Smoothing Splines
    5.4.1 Degrees of Freedom and Smoother Matrices
  5.5 Automatic Selection of the Smoothing Parameters
    5.5.1 Fixing the Degrees of Freedom
    5.5.2 The Bias-Variance Tradeoff
  5.6 Nonparametric Logistic Regression
  5.7 Multidimensional Splines
  5.8 Regularization and Reproducing Kernel Hilbert Spaces
    5.8.1 Spaces of Functions Generated by Kernels
    5.8.2 Examples of RKHS
  5.9 Wavelet Smoothing
    5.9.1 Wavelet Bases and the Wavelet Transform
    5.9.2 Adaptive Wavelet Filtering
  Bibliographic Notes
  Exercises
  Appendix: Computational Considerations for Splines
    Appendix: B-splines
    Appendix: Computations for Smoothing Splines

6 Kernel Smoothing Methods
  6.1 One-Dimensional Kernel Smoothers
    6.1.1 Local Linear Regression
    6.1.2 Local Polynomial Regression
  6.2 Selecting the Width of the Kernel
  6.3 Local Regression in ℝ^p
  6.4 Structured Local Regression Models in ℝ^p
    6.4.1 Structured Kernels
    6.4.2 Structured Regression Functions
  6.5 Local Likelihood and Other Models
  6.6 Kernel Density Estimation and Classification
    6.6.1 Kernel Density Estimation
    6.6.2 Kernel Density Classification
    6.6.3 The Naive Bayes Classifier
  6.7 Radial Basis Functions and Kernels
  6.8 Mixture Models for Density Estimation and Classification
  6.9 Computational Considerations
  Bibliographic Notes
  Exercises

7 Model Assessment and Selection
  7.1 Introduction
  7.2 Bias, Variance and Model Complexity
  7.3 The Bias-Variance Decomposition
    7.3.1 Example: Bias-Variance Tradeoff
  7.4 Optimism of the Training Error Rate
  7.5 Estimates of In-Sample Prediction Error
  7.6 The Effective Number of Parameters
  7.7 The Bayesian Approach and BIC
  7.8 Minimum Description Length
  7.9 Vapnik-Chervonenkis Dimension
    7.9.1 Example (Continued)
  7.10 Cross-Validation
    7.10.1 K-Fold Cross-Validation
    7.10.2 The Wrong and Right Way to Do Cross-validation
    7.10.3 Does Cross-Validation Really Work?
  7.11 Bootstrap Methods
    7.11.1 Example (Continued)
  7.12 Conditional or Expected Test Error?
  Bibliographic Notes
  Exercises

8 Model Inference and Averaging
  8.1 Introduction
  8.2 The Bootstrap and Maximum Likelihood Methods
    8.2.1 A Smoothing Example
    8.2.2 Maximum Likelihood Inference
    8.2.3 Bootstrap versus Maximum Likelihood
  8.3 Bayesian Methods
  8.4 Relationship Between the Bootstrap and Bayesian Inference
  8.5 The EM Algorithm
    8.5.1 Two-Component Mixture Model
    8.5.2 The EM Algorithm in General
    8.5.3 EM as a Maximization-Maximization Procedure
  8.6 MCMC for Sampling from the Posterior
  8.7 Bagging
    8.7.1 Example: Trees with Simulated Data
  8.8 Model Averaging and Stacking
  8.9 Stochastic Search: Bumping
  Bibliographic Notes
  Exercises

9 Additive Models, Trees, and Related Methods
  9.1 Generalized Additive Models
    9.1.1 Fitting Additive Models
    9.1.2 Example: Additive Logistic Regression
    9.1.3 Summary
  9.2 Tree-Based Methods
    9.2.1 Background
    9.2.2 Regression Trees
    9.2.3 Classification Trees
    9.2.4 Other Issues
    9.2.5 Spam Example (Continued)
  9.3 PRIM: Bump Hunting
    9.3.1 Spam Example (Continued)
  9.4 MARS: Multivariate Adaptive Regression Splines
    9.4.1 Spam Example (Continued)
    9.4.2 Example (Simulated Data)
    9.4.3 Other Issues
  9.5 Hierarchical Mixtures of Experts
  9.6 Missing Data
  9.7 Computational Considerations
  Bibliographic Notes
  Exercises

10 Boosting and Additive Trees
  10.1 Boosting Methods
    10.1.1 Outline of This Chapter
  10.2 Boosting Fits an Additive Model
  10.3 Forward Stagewise Additive Modeling
  10.4 Exponential Loss and AdaBoost
  10.5 Why Exponential Loss?
  10.6 Loss Functions and Robustness
  10.7 "Off-the-Shelf" Procedures for Data Mining
  10.8 Example: Spam Data
  10.9 Boosting Trees
  10.10 Numerical Optimization via Gradient Boosting
    10.10.1 Steepest Descent
    10.10.2 Gradient Boosting
    10.10.3 Implementations of Gradient Boosting
  10.11 Right-Sized Trees for Boosting
  10.12 Regularization
    10.12.1 Shrinkage
    10.12.2 Subsampling
  10.13 Interpretation
    10.13.1 Relative Importance of Predictor Variables
    10.13.2 Partial Dependence Plots
  10.14 Illustrations
    10.14.1 California Housing
    10.14.2 New Zealand Fish
    10.14.3 Demographics Data
  Bibliographic Notes
  Exercises

11 Neural Networks
  11.1 Introduction
  11.2 Projection Pursuit Regression
  11.3 Neural Networks
  11.4 Fitting Neural Networks
  11.5 Some Issues in Training Neural Networks
    11.5.1 Starting Values
    11.5.2 Overfitting
    11.5.3 Scaling of the Inputs
    11.5.4 Number of Hidden Units and Layers
    11.5.5 Multiple Minima
  11.6 Example: Simulated Data
  11.7 Example: ZIP Code Data
  11.8 Discussion
  11.9 Bayesian Neural Nets and the NIPS 2003 Challenge
    11.9.1 Bayes, Boosting and Bagging
    11.9.2 Performance Comparisons
  11.10 Computational Considerations
  Bibliographic Notes
  Exercises

12 Support Vector Machines and Flexible Discriminants
  12.1 Introduction
  12.2 The Support Vector Classifier
    12.2.1 Computing the Support Vector Classifier
    12.2.2 Mixture Example (Continued)
  12.3 Support Vector Machines and Kernels
    12.3.1 Computing the SVM for Classification
    12.3.2 The SVM as a Penalization Method
    12.3.3 Function Estimation and Reproducing Kernels
    12.3.4 SVMs and the Curse of Dimensionality
    12.3.5 A Path Algorithm for the SVM Classifier
    12.3.6 Support Vector Machines for Regression
    12.3.7 Regression and Kernels
    12.3.8 Discussion
  12.4 Generalizing Linear Discriminant Analysis
  12.5 Flexible Discriminant Analysis
    12.5.1 Computing the FDA Estimates
  12.6 Penalized Discriminant Analysis
  12.7 Mixture Discriminant Analysis
    12.7.1 Example: Waveform Data
  Bibliographic Notes
  Exercises

13 Prototype Methods and Nearest-Neighbors
  13.1 Introduction
  13.2 Prototype Methods
    13.2.1 K-means Clustering
    13.2.2 Learning Vector Quantization
    13.2.3 Gaussian Mixtures
  13.3 k-Nearest-Neighbor Classifiers
    13.3.1 Example: A Comparative Study
    13.3.2 Example: k-Nearest-Neighbors and Image Scene Classification
    13.3.3 Invariant Metrics and Tangent Distance
  13.4 Adaptive Nearest-Neighbor Methods
    13.4.1 Example
    13.4.2 Global Dimension Reduction for Nearest-Neighbors
  13.5 Computational Considerations
  Bibliographic Notes
  Exercises

14 Unsupervised Learning
  14.1 Introduction
  14.2 Association Rules
    14.2.1 Market Basket Analysis
    14.2.2 The Apriori Algorithm
    14.2.3 Example: Market Basket Analysis
    14.2.4 Unsupervised as Supervised Learning
    14.2.5 Generalized Association Rules
    14.2.6 Choice of Supervised Learning Method
    14.2.7 Example: Market Basket Analysis (Continued)
  14.3 Cluster Analysis
    14.3.1 Proximity Matrices
    14.3.2 Dissimilarities Based on Attributes
    14.3.3 Object Dissimilarity
    14.3.4 Clustering Algorithms
    14.3.5 Combinatorial Algorithms
    14.3.6 K-means
    14.3.7 Gaussian Mixtures as Soft K-means Clustering
    14.3.8 Example: Human Tumor Microarray Data
    14.3.9 Vector Quantization
    14.3.10 K-medoids
    14.3.11 Practical Issues
    14.3.12 Hierarchical Clustering
  14.4 Self-Organizing Maps
  14.5 Principal Components, Curves and Surfaces
    14.5.1 Principal Components
    14.5.2 Principal Curves and Surfaces
    14.5.3 Spectral Clustering
    14.5.4 Kernel Principal Components
    14.5.5 Sparse Principal Components
  14.6 Non-negative Matrix Factorization
    14.6.1 Archetypal Analysis
  14.7 Independent Component Analysis and Exploratory Projection Pursuit
    14.7.1 Latent Variables and Factor Analysis
    14.7.2 Independent Component Analysis
    14.7.3 Exploratory Projection Pursuit
    14.7.4 A Direct Approach to ICA
  14.8 Multidimensional Scaling
  14.9 Nonlinear Dimension Reduction and Local Multidimensional Scaling
  14.10 The Google PageRank Algorithm
  Bibliographic Notes
  Exercises

15 Random Forests
  15.1 Introduction
  15.2 Definition of Random Forests
  15.3 Details of Random Forests
    15.3.1 Out of Bag Samples
    15.3.2 Variable Importance
    15.3.3 Proximity Plots
    15.3.4 Random Forests and Overfitting
  15.4 Analysis of Random Forests
    15.4.1 Variance and the De-Correlation Effect
    15.4.2 Bias
    15.4.3 Adaptive Nearest Neighbors
  Bibliographic Notes
  Exercises

16 Ensemble Learning
  16.1 Introduction
  16.2 Boosting and Regularization Paths
    16.2.1 Penalized Regression
    16.2.2 The "Bet on Sparsity" Principle
    16.2.3 Regularization Paths, Over-fitting and Margins
  16.3 Learning Ensembles
    16.3.1 Learning a Good Ensemble
    16.3.2 Rule Ensembles
  Bibliographic Notes
  Exercises

17 Undirected Graphical Models
  17.1 Introduction
  17.2 Markov Graphs and Their Properties
  17.3 Undirected Graphical Models for Continuous Variables
    17.3.1 Estimation of the Parameters when the Graph Structure is Known
    17.3.2 Estimation of the Graph Structure
  17.4 Undirected Graphical Models for Discrete Variables
    17.4.1 Estimation of the Parameters when the Graph Structure is Known
    17.4.2 Hidden Nodes
    17.4.3 Estimation of the Graph Structure
    17.4.4 Restricted Boltzmann Machines
  Exercises

18 High-Dimensional Problems: p ≫ N
  18.1 When p is Much Bigger than N
  18.2 Diagonal Linear Discriminant Analysis and Nearest Shrunken Centroids
  18.3 Linear Classifiers with Quadratic Regularization
    18.3.1 Regularized Discriminant Analysis
    18.3.2 Logistic Regression with Quadratic Regularization
    18.3.3 The Support Vector Classifier
    18.3.4 Feature Selection
    18.3.5 Computational Shortcuts When p ≫ N
  18.4 Linear Classifiers with L1 Regularization
    18.4.1 Application of Lasso to Protein Mass Spectroscopy
    18.4.2 The Fused Lasso for Functional Data
  18.5 Classification When Features are Unavailable
    18.5.1 Example: String Kernels and Protein Classification
    18.5.2 Classification and Other Models Using Inner-Product Kernels and Pairwise Distances
    18.5.3 Example: Abstracts Classification
  18.6 High-Dimensional Regression: Supervised Principal Components
    18.6.1 Connection to Latent-Variable Modeling
    18.6.2 Relationship with Partial Least Squares
    18.6.3 Pre-Conditioning for Feature Selection
  18.7 Feature Assessment and the Multiple-Testing Problem
    18.7.1 The False Discovery Rate
    18.7.2 Asymmetric Cutpoints and the SAM Procedure
    18.7.3 A Bayesian Interpretation of the FDR
  18.8 Bibliographic Notes
  Exercises

References
Author Index
Index
|
any_adam_object | 1 |
author | Hastie, Trevor 1953- Tibshirani, Robert 1956- Friedman, Jerome H. 1939- |
author_GND | (DE-588)172128242 (DE-588)172417740 (DE-588)134071484 |
author_facet | Hastie, Trevor 1953- Tibshirani, Robert 1956- Friedman, Jerome H. 1939- |
author_role | aut aut aut |
author_sort | Hastie, Trevor 1953- |
author_variant | t h th r t rt j h f jh jhf |
building | Verbundindex |
bvnumber | BV044410726 |
callnumber-first | Q - Science |
callnumber-label | Q325 |
callnumber-raw | Q325.75 |
callnumber-search | Q325.75 |
callnumber-sort | Q 3325.75 |
callnumber-subject | Q - General Science |
classification_rvk | QH 231 ST 530 SK 830 SK 840 CM 4000 |
classification_tum | DAT 708f MAT 620f |
ctrlnum | (OCoLC)989956421 (DE-599)BVBBV044410726 |
dewey-full | 006.3'122 |
dewey-hundreds | 000 - Computer science, information, general works |
dewey-ones | 006 - Special computer methods |
dewey-raw | 006.3'1 22 |
dewey-search | 006.3'1 22 |
dewey-sort | 16.3 11 222 |
dewey-tens | 000 - Computer science, information, general works |
discipline | Informatik Psychologie Mathematik Wirtschaftswissenschaften |
edition | Second edition, corrected at 12th printing |
format | Book |
genre | (DE-588)4056995-0 Statistik gnd-content |
genre_facet | Statistik |
id | DE-604.BV044410726 |
illustrated | Illustrated |
indexdate | 2024-12-20T18:02:21Z |
institution | BVB |
isbn | 9780387848570 0387848576 |
language | English |
oai_aleph_id | oai:aleph.bib-bvb.de:BVB01-029812524 |
oclc_num | 989956421 |
open_access_boolean | |
owner | DE-19 DE-BY-UBM DE-898 DE-BY-UBR DE-83 DE-91G DE-BY-TUM DE-573 DE-706 DE-20 DE-739 DE-29T DE-M347 DE-861 DE-N2 DE-355 DE-BY-UBR DE-703 DE-521 DE-188 DE-945 DE-860 DE-Er8 DE-1043 DE-92 |
owner_facet | DE-19 DE-BY-UBM DE-898 DE-BY-UBR DE-83 DE-91G DE-BY-TUM DE-573 DE-706 DE-20 DE-739 DE-29T DE-M347 DE-861 DE-N2 DE-355 DE-BY-UBR DE-703 DE-521 DE-188 DE-945 DE-860 DE-Er8 DE-1043 DE-92 |
physical | xxii, 745 Seiten Illustrationen, Diagramme |
publishDate | 2017 |
publishDateSearch | 2017 |
publishDateSort | 2017 |
publisher | Springer |
record_format | marc |
series2 | Springer series in statistics |
spellingShingle | Hastie, Trevor 1953- Tibshirani, Robert 1956- Friedman, Jerome H. 1939- The elements of statistical learning data mining, inference, and prediction Maschinelles Lernen (DE-588)4193754-5 gnd Anwendung (DE-588)4196864-5 gnd Datenanalyse (DE-588)4123037-1 gnd Statistik (DE-588)4056995-0 gnd |
subject_GND | (DE-588)4193754-5 (DE-588)4196864-5 (DE-588)4123037-1 (DE-588)4056995-0 |
title | The elements of statistical learning data mining, inference, and prediction |
title_auth | The elements of statistical learning data mining, inference, and prediction |
title_exact_search | The elements of statistical learning data mining, inference, and prediction |
title_full | The elements of statistical learning data mining, inference, and prediction Trevor Hastie ; Robert Tibshirani ; Jerome Friedman |
title_fullStr | The elements of statistical learning data mining, inference, and prediction Trevor Hastie ; Robert Tibshirani ; Jerome Friedman |
title_full_unstemmed | The elements of statistical learning data mining, inference, and prediction Trevor Hastie ; Robert Tibshirani ; Jerome Friedman |
title_short | The elements of statistical learning |
title_sort | the elements of statistical learning data mining inference and prediction |
title_sub | data mining, inference, and prediction |
topic | Maschinelles Lernen (DE-588)4193754-5 gnd Anwendung (DE-588)4196864-5 gnd Datenanalyse (DE-588)4123037-1 gnd Statistik (DE-588)4056995-0 gnd |
topic_facet | Maschinelles Lernen Anwendung Datenanalyse Statistik |
url | http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&local_base=BVB01&doc_number=029812524&sequence=000001&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA |
work_keys_str_mv | AT hastietrevor theelementsofstatisticallearningdatamininginferenceandprediction AT tibshiranirobert theelementsofstatisticallearningdatamininginferenceandprediction AT friedmanjeromeh theelementsofstatisticallearningdatamininginferenceandprediction |
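The derived keys above (`title_sort`, `work_keys_str_mv`) look like plain normalizations of the author and title strings elsewhere in the record. The Python sketch below reproduces that visible pattern; it is a guess at the indexing rule for illustration, not the discovery system's actual code:

```python
# Illustrative sketch: reproduce the search-index keys shown above by
# lowercasing and stripping punctuation from the record's title/authors.
import re

def sort_key(s: str) -> str:
    """Lowercase, drop punctuation, keep spaces (cf. title_sort)."""
    return re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()

def squash(s: str) -> str:
    """Lowercase, keep letters and digits only (cf. work_keys_str_mv)."""
    return re.sub(r"[^a-z0-9]", "", s.lower())

title = ("The elements of statistical learning: "
         "data mining, inference, and prediction")
print(sort_key(title))
# the elements of statistical learning data mining inference and prediction

for author in ["Hastie, Trevor", "Tibshirani, Robert", "Friedman, Jerome H."]:
    print("AT", squash(author), squash(title))
# AT hastietrevor theelementsofstatisticallearningdatamininginferenceandprediction
# (and likewise for tibshiranirobert and friedmanjeromeh)
```

The printed lines match the `title_sort` and `work_keys_str_mv` values shown in this listing.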
Table of contents
Course reserve collections (not available)
Call number: |
0048 MAT 620f 2001 A 17531(2,2017) |
---|---|
Copy 1 | On permanent loan – due back: 31.12.9999 |
Chemistry branch library, textbook collection
Call number: |
0303 MAT 620f 2017 L 727(2,2017) |
---|---|
Copy 1 | Loanable – On shelf |
Copy 2 | Loanable – Checked out, due back: 13.03.2025 |
Copy 3 | Loanable – On shelf |
Copy 4 | Loanable – On shelf |
Copy 5 | Loanable – On shelf |
Copy 6 | Loanable – On shelf |
Copy 7 | Loanable – On shelf |
Copy 8 | Loanable – On shelf |
Copy 9 | Loanable – On shelf |
Copy 10 | Loanable – On shelf |
Copy 11 | Loanable – On shelf |
Copy 12 | Loanable – On shelf |
Copy 13 | Loanable – On shelf |
Copy 14 | Loanable – On shelf |
Copy 15 | Loanable – On shelf |
Copy 16 | Loanable – On shelf |
Copy 17 | Loanable – On shelf |
Copy 18 | Loanable – On shelf |
Copy 19 | Loanable – On shelf |
Copy 20 | Loanable – On shelf |
Copy 21 | Loanable – Checked out, due back: 07.04.2025 |
Copy 22 | Loanable – Checked out, due back: 19.03.2025 |