library(mlr3verse)
learner = lrn("classif.ranger",
  mtry.ratio = to_tune(0, 1),
  replace = to_tune(),
  sample.fraction = to_tune(1e-1, 1),
  num.trees = to_tune(1, 2000)
)
Scope
The predictive performance of modern machine learning algorithms depends heavily on the choice of their hyperparameter configuration. Hyperparameters can be set by tuning, by manual selection, or by simply using the algorithm's default configuration. Default configurations are chosen to work across a wide range of data sets, but they usually do not achieve the best predictive performance. When tuning a learner in mlr3, we can run the default configuration as a baseline. Seeing how well it performs tells us whether tuning pays off. If the optimized configurations perform worse, we could expand the search space or try a different optimization algorithm. Of course, it could also be that tuning on the given data set is simply not worth it.
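For illustration, here is a minimal sketch of how a default configuration could be scored as a standalone baseline with resample(). The example below does not need this step, because the evaluate_default option evaluates the default configuration automatically within the tuning run.
# Minimal sketch: evaluate the default ranger configuration as a baseline
# (assumes the spam task and holdout resampling used later in the example)
rr = resample(
  task = tsk("spam"),
  learner = lrn("classif.ranger"),  # default hyperparameters
  resampling = rsmp("holdout")
)
rr$aggregate(msr("classif.ce"))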
Probst, Boulesteix, and Bischl (2019) studied the tunability of machine learning algorithms and found that it varies widely: algorithms like glmnet and XGBoost are highly tunable, while algorithms like random forests work well with their default configurations. The highly tunable algorithms should thus beat their baselines more easily with optimized hyperparameters. In this article, we tune the hyperparameters of a random forest and compare the performance of the default configuration with the optimized configurations.
Example
We tune the hyperparameters of the ranger learner defined above on the spam data set. The search space is taken from Bischl et al. (2021).
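Before tuning, the search space implied by the to_tune() tokens can be inspected via the learner's parameter set (a brief sketch; the printed output is omitted here):
# Show the search space defined by the to_tune() tokens above
learner$param_set$search_space()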
When creating the tuning instance, we set evaluate_default = TRUE to evaluate the default hyperparameter configuration. It is evaluated in the first batch of the tuning run; the remaining batches contain configurations proposed by the chosen tuner, in this example randomly drawn configurations.
instance = tune(
  tuner = tnr("random_search", batch_size = 5),
  task = tsk("spam"),
  learner = learner,
  resampling = rsmp("holdout"),
  measures = msr("classif.ce"),
  term_evals = 51,
  evaluate_default = TRUE
)
The default configuration is recorded in the first row of the archive. The other rows contain the results of the random search.
as.data.table(instance$archive)[, .(batch_nr, mtry.ratio, replace, sample.fraction, num.trees, classif.ce)]
batch_nr mtry.ratio replace sample.fraction num.trees classif.ce
1: 1 0.122807018 TRUE 1.0000000 500 0.04954368
2: 2 0.285388074 TRUE 0.1794772 204 0.06584094
3: 2 0.097424099 FALSE 0.9475526 1441 0.04237288
4: 2 0.008888587 FALSE 0.3216562 1868 0.08409387
5: 2 0.335543330 TRUE 0.8122653 106 0.05345502
---
47: 11 0.788995735 FALSE 0.3692454 344 0.06258149
48: 11 0.459305038 TRUE 0.3153485 1354 0.06258149
49: 11 0.220334408 TRUE 0.9357554 817 0.05345502
50: 11 0.868385877 TRUE 0.6743246 1040 0.06127771
51: 11 0.015417312 FALSE 0.5627943 1836 0.08213820
We plot the performances of the evaluated hyperparameter configurations. The blue line connects the best configuration of each batch. We see that the default configuration already performs well and that most of the optimized configurations do not improve on it.
library(mlr3viz)
autoplot(instance, type = "performance")
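As a complement to the plot, the default score can be compared with the best tuned configuration directly from the archive (a short sketch using the columns shown above):
# Compare the default configuration (first batch) with the best
# configuration found by the random search
archive = as.data.table(instance$archive)
archive[batch_nr == 1, classif.ce]       # default configuration
min(archive[batch_nr > 1, classif.ce])   # best tuned configuration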
Conclusion
The time required to test the default configuration is negligible compared to the time required to run the hyperparameter optimization. It gives us a valuable indication of whether our tuning is properly configured. Running the default configuration as a baseline is a good practice that should be used in every tuning run.