The default method for optimizing tuning parameters in train is to use a grid search. This approach is usually effective but, in cases when there are many tuning parameters, it can be inefficient. An alternative is to use a combination of grid search and racing. Another is to use a random selection of tuning parameter combinations to cover the parameter space to a lesser extent.

There are a number of models where this can be beneficial in finding reasonable values of the tuning parameters in a relatively short time. However, there are some models where the efficiency of a small search field can cancel out other optimizations. For example, a number of models in caret utilize the “sub-model trick”, where M tuning parameter combinations are evaluated but potentially far fewer than M model fits are required. This approach is best leveraged when a simple grid search is used. For this reason, it may be inefficient to use random search for the following model codes: ada, AdaBag, AdaBoost.M1, bagEarth, blackboost, blasso, BstLm, bstSm, bstTree, C5.0, C5.0Cost, cubist, earth, enet, foba, gamboost, gbm, glmboost, glmnet, kernelpls, lars, lars2, lasso, lda2, leapBackward, leapForward, leapSeq, LogitBoost, pam, partDSA, pcr, PenalizedLDA, pls, relaxo, rfRules, rotationForest, rotationForestCp, rpart, rpart2, rpartCost, simpls, spikeslab, superpc, widekernelpls, xgbDART, xgbTree.

Finally, many of the models wrapped by train have a small number of parameters. To use random search, another option is available in trainControl called search. Possible values of this argument are "grid" and "random". The built-in models contained in caret contain code to generate random tuning parameter combinations.
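As a minimal sketch of the search option described above: trainControl takes search = "random", and tuneLength then controls how many random tuning parameter combinations train evaluates. The model (svmRadial), data set, and resampling settings here are illustrative choices, not prescribed by the text; svmRadial is used because it is not on the list of model codes above where random search may be inefficient.

```r
library(caret)

# Request random search instead of the default grid search.
ctrl <- trainControl(method = "cv",
                     number = 5,
                     search = "random")

# With search = "random", tuneLength is the total number of random
# tuning parameter combinations to evaluate (here, 8 random
# (sigma, C) pairs for a radial-basis SVM).
set.seed(1)
fit <- train(Species ~ .,
             data = iris,
             method = "svmRadial",
             tuneLength = 8,
             trControl = ctrl)

# fit$results holds one row per random parameter combination.
fit$results
```

Under the default search = "grid", tuneLength instead sets the number of levels per parameter in the grid, so the same argument has a different meaning depending on the search method.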