Optimize the hyperparameters of a support vector machine.
Marc Becker
Theresa Ullmann
Michel Lang
Bernd Bischl
Jakob Richter
Martin Binder
March 9, 2021
This is the first part of the practical tuning series. The other parts can be found here:
In this post, we demonstrate how to optimize the hyperparameters of a support vector machine (SVM). We are using the mlr3 machine learning framework with the mlr3tuning extension package.
First, we show the basic building blocks of mlr3tuning and tune the cost and gamma hyperparameters of an SVM with a radial basis function kernel on the Iris data set. After that, we use transformations to tune both hyperparameters on the logarithmic scale. Next, we explain the importance of dependencies for tuning hyperparameters like degree, which are dependent on the choice of kernel. After that, we fit an SVM with optimized hyperparameters on the full data set. Finally, nested resampling is used to compute an unbiased performance estimate of our tuned SVM.
We load the mlr3verse package which pulls in the most important packages for this example.
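library(mlr3verse)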
We initialize the random number generator with a fixed seed for reproducibility and reduce the verbosity of the logger to keep the output concise. The lgr package is used for logging in all mlr3 packages. The mlr3 logger prints the logging messages of the base package, whereas the bbotk logger is responsible for the logging messages of the optimization packages (e.g. mlr3tuning).
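A minimal sketch of this setup; the specific seed value is an arbitrary choice:

set.seed(7832)
# raise the logging thresholds to suppress info messages during tuning
lgr::get_logger("mlr3")$set_threshold("warn")
lgr::get_logger("bbotk")$set_threshold("warn")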
In the example, we use the Iris data set, which classifies 150 flowers into three species of Iris. The flowers are characterized by sepal length and width and petal length and width. The Iris data set allows us to fit models quickly. However, the influence of hyperparameter tuning on the predictive performance might be minor. Other data sets might give more meaningful tuning results.
# retrieve the task from mlr3
task = tsk("iris")
# generate a quick textual overview using the skimr package
skimr::skim(task$data())
Name | task$data() |
Number of rows | 150 |
Number of columns | 5 |
Key | NULL |
Column type frequency: | |
factor | 1 |
numeric | 4 |
Group variables | None |
Variable type: factor
skim_variable | n_missing | complete_rate | ordered | n_unique | top_counts |
---|---|---|---|---|---|
Species | 0 | 1 | FALSE | 3 | set: 50, ver: 50, vir: 50 |
Variable type: numeric
skim_variable | n_missing | complete_rate | mean | sd | p0 | p25 | p50 | p75 | p100 | hist |
---|---|---|---|---|---|---|---|---|---|---|
Petal.Length | 0 | 1 | 3.76 | 1.77 | 1.0 | 1.6 | 4.35 | 5.1 | 6.9 | ▇▁▆▇▂ |
Petal.Width | 0 | 1 | 1.20 | 0.76 | 0.1 | 0.3 | 1.30 | 1.8 | 2.5 | ▇▁▇▅▃ |
Sepal.Length | 0 | 1 | 5.84 | 0.83 | 4.3 | 5.1 | 5.80 | 6.4 | 7.9 | ▆▇▇▅▂ |
Sepal.Width | 0 | 1 | 3.06 | 0.44 | 2.0 | 2.8 | 3.00 | 3.3 | 4.4 | ▁▆▇▂▁ |
We choose the support vector machine implementation from the e1071 package (which is based on LIBSVM) and use it as a classification machine by setting type to "C-classification".
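learner = lrn("classif.svm", type = "C-classification", kernel = "radial")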
For tuning, it is important to create a search space that defines the type and range of the hyperparameters. A learner stores all information about its hyperparameters in the slot $param_set. Not all parameters are tunable. We have to choose a subset of the hyperparameters we want to tune.
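The available hyperparameters, their types and ranges can be listed with:

as.data.table(learner$param_set)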
We use the to_tune() function to define the range over which the hyperparameter should be tuned. We opt for the cost and gamma hyperparameters of the radial kernel and set the tuning ranges with lower and upper bounds.
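# tune cost from 0.1 to 10 and gamma from 0 to 5, both on the linear scale
learner$param_set$values$cost = to_tune(0.1, 10)
learner$param_set$values$gamma = to_tune(0, 5)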
We specify how to evaluate the performance of the different hyperparameter configurations. For this, we choose 3-fold cross-validation as the resampling strategy and the classification error as the performance measure.
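resampling = rsmp("cv", folds = 3)
measure = msr("classif.ce")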
Usually, we have to select a budget for the tuning. This is done by choosing a Terminator, which stops the tuning, e.g., after a given performance level is reached or after a certain amount of time. However, some tuners like grid search terminate themselves. In this case, we choose a terminator that never stops, so the tuning does not end before all grid points are evaluated.
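terminator = trm("none")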
At this point, we can construct a TuningInstanceBatchSingleCrit that describes the tuning problem.
instance = ti(
task = task,
learner = learner,
resampling = resampling,
measure = measure,
terminator = terminator
)
print(instance)
<TuningInstanceBatchSingleCrit>
* State: Not optimized
* Objective: <ObjectiveTuningBatch:classif.svm_on_iris>
* Search Space:
id class lower upper nlevels
<char> <char> <num> <num> <num>
1: cost ParamDbl 0.1 10 Inf
2: gamma ParamDbl 0.0 5 Inf
* Terminator: <TerminatorNone>
Finally, we have to choose a Tuner. Grid search discretizes numeric parameters into a given resolution and constructs a grid from the Cartesian product of these sets. Categorical parameters produce a grid over all levels specified in the search space. In this example, we use a resolution of only 5 to keep the runtime low. Usually, a higher resolution is used to create a denser grid.
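tuner = tnr("grid_search", resolution = 5)
print(tuner)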
<TunerBatchGridSearch>: Grid Search
* Parameters: batch_size=1, resolution=5
* Parameter classes: ParamLgl, ParamInt, ParamDbl, ParamFct
* Properties: dependencies, single-crit, multi-crit
* Packages: mlr3tuning, bbotk
We can preview the proposed configurations by using generate_design_grid(). This function is internally executed by TunerBatchGridSearch.
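# preview the 5 x 5 grid over the search space of the instance above
generate_design_grid(instance$search_space, resolution = 5)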
<Design> with 25 rows:
cost gamma
<num> <num>
1: 0.100 0.00
2: 0.100 1.25
3: 0.100 2.50
4: 0.100 3.75
5: 0.100 5.00
6: 2.575 0.00
7: 2.575 1.25
8: 2.575 2.50
9: 2.575 3.75
10: 2.575 5.00
11: 5.050 0.00
12: 5.050 1.25
13: 5.050 2.50
14: 5.050 3.75
15: 5.050 5.00
16: 7.525 0.00
17: 7.525 1.25
18: 7.525 2.50
19: 7.525 3.75
20: 7.525 5.00
21: 10.000 0.00
22: 10.000 1.25
23: 10.000 2.50
24: 10.000 3.75
25: 10.000 5.00
cost gamma
We trigger the tuning by passing the TuningInstanceBatchSingleCrit to the $optimize() method of the Tuner. The instance is modified in-place.
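tuner$optimize(instance)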
cost gamma learner_param_vals x_domain classif.ce
<num> <num> <list> <list> <num>
1: 2.575 2.5 <list[4]> <list[2]> 0.04666667
We plot the performances depending on the evaluated cost and gamma values.
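A sketch using mlr3viz; the "surface" plot type interpolates the performance between the evaluated grid points:

autoplot(instance, type = "surface", cols_x = c("cost", "gamma"))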
The points mark the evaluated cost and gamma values. We should not infer the performance of new values from the heatmap since it is only an interpolation. However, we can see the general interaction between the hyperparameters.
Tuning a learner can be shortened by using the tune()-shortcut.
learner = lrn("classif.svm", type = "C-classification", kernel = "radial")
learner$param_set$values$cost = to_tune(0.1, 10)
learner$param_set$values$gamma = to_tune(0, 5)
instance = tune(
tuner = tnr("grid_search", resolution = 5),
task = tsk("iris"),
learner = learner,
resampling = rsmp("holdout"),
measure = msr("classif.ce")
)
Next, we want to tune the cost and gamma hyperparameters more efficiently. It is recommended to tune cost and gamma on the logarithmic scale (Hsu, Chang, and Lin 2003). On a linear scale, a grid places most of its points on large values; on the log scale, the points are spread evenly across orders of magnitude. Therefore, we use a log transformation to cover the region of smaller cost and gamma values with a denser grid while still reaching large values.
Generally speaking, transformations can be used to convert hyperparameters to a new scale. These transformations are applied before the proposed configuration is passed to the Learner. We can define the transformation directly in the to_tune() function. In this variant, the lower and upper bounds are set on the original scale on which the tuner searches (here, the exponents); the trafo maps each proposed value to the scale seen by the learner.
learner = lrn("classif.svm", type = "C-classification", kernel = "radial")
# tune from 2^-15 to 2^15 on a log scale
learner$param_set$values$cost = to_tune(p_dbl(-15, 15, trafo = function(x) 2^x))
# tune from 2^-15 to 2^5 on a log scale
learner$param_set$values$gamma = to_tune(p_dbl(-15, 5, trafo = function(x) 2^x))
Transformations to the log scale are the ones most commonly used, and to_tune() provides a shortcut for them. With this shortcut, the lower and upper bounds are set on the transformed scale, i.e. the scale seen by the learner, while the tuner internally operates on the log scale.
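A sketch of the shortcut, matching the transformed values in the archive shown below:

learner = lrn("classif.svm", type = "C-classification", kernel = "radial")
# bounds on the hyperparameter scale; the tuner operates on the log scale
learner$param_set$values$cost = to_tune(p_dbl(1e-5, 1e5, logscale = TRUE))
learner$param_set$values$gamma = to_tune(p_dbl(1e-5, 1e5, logscale = TRUE))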
We use the tune()-shortcut to run the tuning.
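A sketch of the call; the resampling strategy is an assumption:

instance = tune(
  tuner = tnr("grid_search", resolution = 5),
  task = tsk("iris"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce")
)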
The hyperparameter values after the transformation are stored in the x_domain column as lists. We can expand these lists into multiple columns by using as.data.table(). The hyperparameter names are prefixed by x_domain.
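as.data.table(instance$archive)[, .(cost, gamma, x_domain_cost, x_domain_gamma)]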
cost gamma x_domain_cost x_domain_gamma
<num> <num> <num> <num>
1: 11.512925 5.756463 1.000000e+05 3.162278e+02
2: 0.000000 -11.512925 1.000000e+00 1.000000e-05
3: -5.756463 -5.756463 3.162278e-03 3.162278e-03
4: 11.512925 -5.756463 1.000000e+05 3.162278e-03
5: 0.000000 -5.756463 1.000000e+00 3.162278e-03
6: -5.756463 0.000000 3.162278e-03 1.000000e+00
7: -11.512925 0.000000 1.000000e-05 1.000000e+00
8: 5.756463 0.000000 3.162278e+02 1.000000e+00
9: 0.000000 0.000000 1.000000e+00 1.000000e+00
10: 11.512925 11.512925 1.000000e+05 1.000000e+05
11: 0.000000 5.756463 1.000000e+00 3.162278e+02
12: 5.756463 -11.512925 3.162278e+02 1.000000e-05
13: 5.756463 11.512925 3.162278e+02 1.000000e+05
14: -11.512925 5.756463 1.000000e-05 3.162278e+02
15: 5.756463 5.756463 3.162278e+02 3.162278e+02
16: 5.756463 -5.756463 3.162278e+02 3.162278e-03
17: 11.512925 -11.512925 1.000000e+05 1.000000e-05
18: -5.756463 5.756463 3.162278e-03 3.162278e+02
19: -11.512925 -11.512925 1.000000e-05 1.000000e-05
20: 11.512925 0.000000 1.000000e+05 1.000000e+00
21: -5.756463 11.512925 3.162278e-03 1.000000e+05
22: -11.512925 11.512925 1.000000e-05 1.000000e+05
23: -5.756463 -11.512925 3.162278e-03 1.000000e-05
24: -11.512925 -5.756463 1.000000e-05 3.162278e-03
25: 0.000000 11.512925 1.000000e+00 1.000000e+05
cost gamma x_domain_cost x_domain_gamma
We plot the performances depending on the evaluated cost and gamma values.
library(ggplot2)
library(scales)
autoplot(instance, type = "points", cols_x = c("x_domain_cost", "x_domain_gamma")) +
scale_x_continuous(
trans = log2_trans(),
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x))) +
scale_y_continuous(
trans = log2_trans(),
breaks = trans_breaks("log10", function(x) 10^x),
labels = trans_format("log10", math_format(10^.x)))
Dependencies ensure that certain parameters are only proposed when the values of other hyperparameters allow them. We want to tune the degree hyperparameter, which is only needed for the polynomial kernel.
learner = lrn("classif.svm", type = "C-classification")
learner$param_set$values$cost = to_tune(p_dbl(1e-5, 1e5, logscale = TRUE))
learner$param_set$values$gamma = to_tune(p_dbl(1e-5, 1e5, logscale = TRUE))
learner$param_set$values$kernel = to_tune(c("polynomial", "radial"))
learner$param_set$values$degree = to_tune(1, 4)
The dependencies are already stored in the learner parameter set.
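learner$param_set$deps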
Indices: <id>, <on__id>
id on cond
<char> <char> <list>
1: coef0 kernel <Condition:CondAnyOf>
2: cost type <Condition:CondEqual>
3: degree kernel <Condition:CondEqual>
4: gamma kernel <Condition:CondAnyOf>
5: nu type <Condition:CondEqual>
The gamma hyperparameter depends on the kernel being polynomial, radial or sigmoid, whereas the degree hyperparameter is solely used by the polynomial kernel.
We preview the grid to show the effect of the dependencies.
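A sketch of the preview; resolution = 2 matches the 12-row design shown below:

generate_design_grid(learner$param_set$search_space(), resolution = 2)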
<Design> with 12 rows:
cost degree gamma kernel
<num> <int> <num> <char>
1: -11.51293 1 -11.51293 polynomial
2: -11.51293 NA -11.51293 radial
3: -11.51293 1 11.51293 polynomial
4: -11.51293 NA 11.51293 radial
5: -11.51293 4 -11.51293 polynomial
6: -11.51293 4 11.51293 polynomial
7: 11.51293 1 -11.51293 polynomial
8: 11.51293 NA -11.51293 radial
9: 11.51293 1 11.51293 polynomial
10: 11.51293 NA 11.51293 radial
11: 11.51293 4 -11.51293 polynomial
12: 11.51293 4 11.51293 polynomial
The value for degree is NA if the dependency on the kernel is not satisfied.
We use the tune()-shortcut to run the tuning.
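A sketch of the call; the resampling strategy and grid resolution are assumptions, mirroring the nested resampling example below:

instance = tune(
  tuner = tnr("grid_search", resolution = 3),
  task = tsk("iris"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce")
)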
We add the optimized hyperparameters to the learner and train the learner on the full dataset.
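A minimal sketch of this step; instance$result_learner_param_vals holds the optimized hyperparameter values:

learner$param_set$values = instance$result_learner_param_vals
learner$train(task)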
The trained model can now be used to make predictions on new data. A common mistake is to report the performance estimated on the resampling sets on which the tuning was performed (instance$result_y) as the model's performance. The tuning already selected the configuration that scores best on these sets, so the scores might be biased and overestimate the ability of the fitted model to predict on new data. Instead, we have to use nested resampling to get an unbiased performance estimate.
Tuning should not be performed on the same resampling sets that are used for evaluating the model itself, since this would result in a biased performance estimate. Nested resampling uses an outer and an inner resampling to separate the tuning from the performance estimation of the model. We can use the AutoTuner class for running nested resampling. The AutoTuner wraps a Learner and tunes the learner's hyperparameters during $train(). This is our inner resampling loop.
learner = lrn("classif.svm", type = "C-classification")
learner$param_set$values$cost = to_tune(p_dbl(1e-5, 1e5, logscale = TRUE))
learner$param_set$values$gamma = to_tune(p_dbl(1e-5, 1e5, logscale = TRUE))
learner$param_set$values$kernel = to_tune(c("polynomial", "radial"))
learner$param_set$values$degree = to_tune(1, 4)
resampling_inner = rsmp("cv", folds = 3)
terminator = trm("none")
tuner = tnr("grid_search", resolution = 3)
at = auto_tuner(
learner = learner,
resampling = resampling_inner,
measure = measure,
terminator = terminator,
tuner = tuner,
store_models = TRUE)
We put the AutoTuner into a resample() call to get the outer resampling loop.
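# outer 3-fold cross-validation; store models to access the inner results
resampling_outer = rsmp("cv", folds = 3)
rr = resample(task, at, resampling_outer, store_models = TRUE)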
We check the inner tuning results for stable hyperparameters. This means that the selected hyperparameters should not vary too much. We might observe unstable models in this example because the small data set and the low number of resampling iterations might introduce too much randomness. Usually, we aim for the selection of stable hyperparameters for all outer training sets.
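extract_inner_tuning_results(rr)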
iteration cost degree gamma kernel classif.ce task_id learner_id resampling_id
<int> <num> <int> <num> <char> <num> <char> <char> <char>
1: 1 0.00000 3 0.00000 polynomial 0.03980986 iris classif.svm.tuned cv
2: 2 0.00000 NA 0.00000 radial 0.02970885 iris classif.svm.tuned cv
3: 3 -11.51293 1 11.51293 polynomial 0.03000594 iris classif.svm.tuned cv
Next, we want to compare the predictive performances estimated on the outer resampling to those estimated on the inner resampling (extract_inner_tuning_results(rr)). Significantly lower predictive performances on the outer resampling indicate that the models with the optimized hyperparameters overfit the data.
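# classification error on the outer test sets
rr$score()[, .(iteration, task_id, learner_id, resampling_id, classif.ce)]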
iteration task_id learner_id resampling_id classif.ce
<int> <char> <char> <char> <num>
1: 1 iris classif.svm.tuned cv 0.06
2: 2 iris classif.svm.tuned cv 0.08
3: 3 iris classif.svm.tuned cv 0.00
The archives of the AutoTuners allow us to inspect all evaluated hyperparameter configurations with the associated predictive performances.
extract_inner_tuning_archives(rr, unnest = NULL, exclude_columns = c("resample_result", "uhash", "x_domain", "timestamp"))
iteration cost degree gamma kernel classif.ce runtime_learners warnings errors batch_nr task_id
<int> <num> <int> <num> <char> <num> <num> <int> <int> <int> <char>
1: 1 0.00000 NA -11.51293 radial 0.69994058 0.013 0 0 1 iris
2: 1 11.51293 NA -11.51293 radial 0.04991087 0.008 0 0 2 iris
3: 1 0.00000 4 11.51293 polynomial 0.22846108 0.064 0 0 3 iris
4: 1 0.00000 3 0.00000 polynomial 0.03980986 0.009 0 0 4 iris
5: 1 0.00000 1 -11.51293 polynomial 0.69994058 0.009 0 0 5 iris
---
104: 3 11.51293 NA 11.51293 radial 0.64022579 0.009 0 0 32 iris
105: 3 11.51293 1 -11.51293 polynomial 0.03000594 0.011 0 0 33 iris
106: 3 -11.51293 3 -11.51293 polynomial 0.58140226 0.007 0 0 34 iris
107: 3 11.51293 3 11.51293 polynomial 0.09031491 0.008 0 0 35 iris
108: 3 11.51293 1 0.00000 polynomial 0.07040998 0.015 0 0 36 iris
learner_id resampling_id
<char> <char>
1: classif.svm.tuned cv
2: classif.svm.tuned cv
3: classif.svm.tuned cv
4: classif.svm.tuned cv
5: classif.svm.tuned cv
---
104: classif.svm.tuned cv
105: classif.svm.tuned cv
106: classif.svm.tuned cv
107: classif.svm.tuned cv
108: classif.svm.tuned cv
The aggregated performance of all outer resampling iterations is essentially the unbiased performance of an SVM with optimal hyperparameters found by grid search.
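rr$aggregate()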
Applying nested resampling can be shortened by using the tune_nested()-shortcut.
learner = lrn("classif.svm", type = "C-classification")
learner$param_set$values$cost = to_tune(p_dbl(1e-5, 1e5, logscale = TRUE))
learner$param_set$values$gamma = to_tune(p_dbl(1e-5, 1e5, logscale = TRUE))
learner$param_set$values$kernel = to_tune(c("polynomial", "radial"))
learner$param_set$values$degree = to_tune(1, 4)
rr = tune_nested(
tuner = tnr("grid_search", resolution = 3),
task = tsk("iris"),
learner = learner,
inner_resampling = rsmp("cv", folds = 3),
outer_resampling = rsmp("cv", folds = 3),
measure = msr("classif.ce")
)
The mlr3book includes chapters on tuning spaces and hyperparameter tuning. The mlr3cheatsheets contain frequently used commands and workflows of mlr3.
═ Session info ═══════════════════════════════════════════════════════════════════════════════════════════════════════
─ Packages ───────────────────────────────────────────────────────────────────────────────────────────────────────────
! package * version date (UTC) lib source
backports 1.5.0 2024-05-23 [1] CRAN (R 4.4.1)
base64enc 0.1-3 2015-07-28 [1] CRAN (R 4.4.1)
bbotk 1.1.1 2024-10-15 [1] CRAN (R 4.4.1)
checkmate 2.3.2 2024-07-29 [1] CRAN (R 4.4.1)
P class 7.3-22 2023-05-03 [?] CRAN (R 4.4.0)
cli 3.6.3 2024-06-21 [1] CRAN (R 4.4.1)
clue 0.3-65 2023-09-23 [1] CRAN (R 4.4.1)
P cluster 2.1.6 2023-12-01 [?] CRAN (R 4.4.0)
P codetools 0.2-20 2024-03-31 [?] CRAN (R 4.4.0)
colorspace 2.1-1 2024-07-26 [1] CRAN (R 4.4.1)
crayon 1.5.3 2024-06-20 [1] CRAN (R 4.4.1)
data.table * 1.16.2 2024-10-10 [1] CRAN (R 4.4.1)
DEoptimR 1.1-3 2023-10-07 [1] CRAN (R 4.4.1)
DiceKriging 1.6.0 2021-02-23 [1] CRAN (R 4.4.1)
digest 0.6.37 2024-08-19 [1] CRAN (R 4.4.1)
diptest 0.77-1 2024-04-10 [1] CRAN (R 4.4.1)
dplyr 1.1.4 2023-11-17 [1] CRAN (R 4.4.1)
e1071 1.7-16 2024-09-16 [1] CRAN (R 4.4.1)
evaluate 1.0.1 2024-10-10 [1] CRAN (R 4.4.1)
fansi 1.0.6 2023-12-08 [1] CRAN (R 4.4.1)
farver 2.1.2 2024-05-13 [1] CRAN (R 4.4.1)
fastmap 1.2.0 2024-05-15 [1] CRAN (R 4.4.1)
flexmix 2.3-19 2023-03-16 [1] CRAN (R 4.4.1)
fpc 2.2-13 2024-09-24 [1] CRAN (R 4.4.1)
future 1.34.0 2024-07-29 [1] CRAN (R 4.4.1)
future.apply 1.11.2 2024-03-28 [1] CRAN (R 4.4.1)
generics 0.1.3 2022-07-05 [1] CRAN (R 4.4.1)
ggplot2 * 3.5.1 2024-04-23 [1] CRAN (R 4.4.1)
globals 0.16.3 2024-03-08 [1] CRAN (R 4.4.1)
glue 1.8.0 2024-09-30 [1] CRAN (R 4.4.1)
gtable 0.3.5 2024-04-22 [1] CRAN (R 4.4.1)
htmltools 0.5.8.1 2024-04-04 [1] CRAN (R 4.4.1)
htmlwidgets 1.6.4 2023-12-06 [1] CRAN (R 4.4.1)
jsonlite 1.8.9 2024-09-20 [1] CRAN (R 4.4.1)
kernlab 0.9-33 2024-08-13 [1] CRAN (R 4.4.1)
knitr 1.48 2024-07-07 [1] CRAN (R 4.4.1)
labeling 0.4.3 2023-08-29 [1] CRAN (R 4.4.1)
P lattice 0.22-5 2023-10-24 [?] CRAN (R 4.3.3)
lgr 0.4.4 2022-09-05 [1] CRAN (R 4.4.1)
lifecycle 1.0.4 2023-11-07 [1] CRAN (R 4.4.1)
listenv 0.9.1 2024-01-29 [1] CRAN (R 4.4.1)
magrittr 2.0.3 2022-03-30 [1] CRAN (R 4.4.1)
P MASS 7.3-61 2024-06-13 [?] CRAN (R 4.4.1)
mclust 6.1.1 2024-04-29 [1] CRAN (R 4.4.1)
mlr3 * 0.21.1 2024-10-18 [1] CRAN (R 4.4.1)
mlr3cluster 0.1.10 2024-10-03 [1] CRAN (R 4.4.1)
mlr3data 0.7.0 2023-06-29 [1] CRAN (R 4.4.1)
mlr3extralearners 0.9.0-9000 2024-10-18 [1] Github (mlr-org/mlr3extralearners@a622524)
mlr3filters 0.8.0 2024-04-10 [1] CRAN (R 4.4.1)
mlr3fselect 1.1.1.9000 2024-10-18 [1] Github (mlr-org/mlr3fselect@e917a02)
mlr3hyperband 0.6.0 2024-06-29 [1] CRAN (R 4.4.1)
mlr3learners 0.7.0 2024-06-28 [1] CRAN (R 4.4.1)
mlr3mbo 0.2.6 2024-10-16 [1] CRAN (R 4.4.1)
mlr3measures 1.0.0 2024-09-11 [1] CRAN (R 4.4.1)
mlr3misc 0.15.1 2024-06-24 [1] CRAN (R 4.4.1)
mlr3pipelines 0.7.0 2024-09-24 [1] CRAN (R 4.4.1)
mlr3tuning 1.0.2 2024-10-14 [1] CRAN (R 4.4.1)
mlr3tuningspaces 0.5.1 2024-06-21 [1] CRAN (R 4.4.1)
mlr3verse * 0.3.0 2024-06-30 [1] CRAN (R 4.4.1)
mlr3viz 0.9.0 2024-07-01 [1] CRAN (R 4.4.1)
mlr3website * 0.0.0.9000 2024-10-18 [1] Github (mlr-org/mlr3website@20d1ddf)
modeltools 0.2-23 2020-03-05 [1] CRAN (R 4.4.1)
munsell 0.5.1 2024-04-01 [1] CRAN (R 4.4.1)
P nnet 7.3-19 2023-05-03 [?] CRAN (R 4.3.3)
palmerpenguins 0.1.1 2022-08-15 [1] CRAN (R 4.4.1)
paradox 1.0.1 2024-07-09 [1] CRAN (R 4.4.1)
parallelly 1.38.0 2024-07-27 [1] CRAN (R 4.4.1)
pillar 1.9.0 2023-03-22 [1] CRAN (R 4.4.1)
pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.4.1)
prabclus 2.3-4 2024-09-24 [1] CRAN (R 4.4.1)
proxy 0.4-27 2022-06-09 [1] CRAN (R 4.4.1)
purrr 1.0.2 2023-08-10 [1] CRAN (R 4.4.1)
R6 2.5.1 2021-08-19 [1] CRAN (R 4.4.1)
Rcpp 1.0.13 2024-07-17 [1] CRAN (R 4.4.1)
renv 1.0.11 2024-10-12 [1] CRAN (R 4.4.1)
repr 1.1.7 2024-03-22 [1] CRAN (R 4.4.1)
rlang 1.1.4 2024-06-04 [1] CRAN (R 4.4.1)
rmarkdown 2.28 2024-08-17 [1] CRAN (R 4.4.1)
robustbase 0.99-4-1 2024-09-27 [1] CRAN (R 4.4.1)
scales * 1.3.0 2023-11-28 [1] CRAN (R 4.4.1)
sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.4.1)
skimr 2.1.5 2022-12-23 [1] CRAN (R 4.4.1)
spacefillr 0.3.3 2024-05-22 [1] CRAN (R 4.4.1)
stringi 1.8.4 2024-05-06 [1] CRAN (R 4.4.1)
stringr 1.5.1 2023-11-14 [1] CRAN (R 4.4.1)
tibble 3.2.1 2023-03-20 [1] CRAN (R 4.4.1)
tidyr 1.3.1 2024-01-24 [1] CRAN (R 4.4.1)
tidyselect 1.2.1 2024-03-11 [1] CRAN (R 4.4.1)
utf8 1.2.4 2023-10-22 [1] CRAN (R 4.4.1)
uuid 1.2-1 2024-07-29 [1] CRAN (R 4.4.1)
vctrs 0.6.5 2023-12-01 [1] CRAN (R 4.4.1)
viridisLite 0.4.2 2023-05-02 [1] CRAN (R 4.4.1)
withr 3.0.1 2024-07-31 [1] CRAN (R 4.4.1)
xfun 0.48 2024-10-03 [1] CRAN (R 4.4.1)
yaml 2.3.10 2024-07-26 [1] CRAN (R 4.4.1)
[1] /home/marc/repositories/mlr3website/mlr-org/renv/library/linux-ubuntu-noble/R-4.4/x86_64-pc-linux-gnu
[2] /home/marc/.cache/R/renv/sandbox/linux-ubuntu-noble/R-4.4/x86_64-pc-linux-gnu/9a444a72
P ── Loaded and on-disk path mismatch.
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────