- Why is there only the rpart learner?
- How can I use parallelization?
- Why is the parallelization with the future package slow?
- Why is the parallelization of tuning slow?
- Why are the CPUs on my system not fully utilized when using parallelization?
- How can I use time constraints when tuning?
- Why is method X slower when used via mlr3?
- Preprocessing factor levels
- Memory Problems
- How can I suppress logging output of learners on the R console?
- A learner trained with an old mlr3 version does not work anymore
- Caching of knitr/rmarkdown chunks does not work with mlr3
- How to keep all mlr3 packages up-to-date?
Why is there only the rpart learner?
The base package mlr3 ships only with regression and classification trees from the rpart package and some learners for debugging. A selection of popular learners can be found in the extension package mlr3learners. Survival learners are provided by mlr3proba, cluster learners via mlr3cluster. Additional learners can be found in the extension packages mlr3extralearners. If your favorite learner is missing, please open a learner request. An overview of all learners can be found on our website.
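As a small sketch of how this looks in practice (assuming the mlr3learners package is installed): learners are constructed with lrn(), and the available keys can be browsed via the mlr_learners dictionary.

```r
library(mlr3)
library(mlr3learners)  # registers additional learners, e.g. ranger, xgboost, glmnet

# the base package only ships rpart-based learners plus debug learners
learner = lrn("classif.rpart")

# list all learners currently registered, with the packages they require
head(as.data.table(mlr_learners)[, c("key", "packages")])
```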
How can I use parallelization?
Parallelization is supported when training learners, resampling, tuning and predicting. We recommend reading the section about Parallelization in the mlr3book.
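A minimal sketch of parallelized resampling with the future backend (assuming the future package is installed) might look like this:

```r
library(mlr3)
library(future)

# use a multisession backend with 4 workers;
# mlr3 picks the active plan up automatically
plan("multisession", workers = 4)

task = tsk("sonar")
learner = lrn("classif.rpart")

# the 10 resampling iterations are distributed across the workers
rr = resample(task, learner, rsmp("cv", folds = 10))
rr$aggregate()
```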
Why is the parallelization with the future package slow?
Starting and terminating workers, as well as communication between workers, comes at a price in the form of additionally required runtime, called parallelization overhead. This overhead varies strongly between parallelization backends and must be carefully weighed against the runtime of the sequential execution to determine whether parallelization is worth the effort. When resampling or tuning a fast-fitting learner, it helps to chunk multiple resampling iterations into a single computational job. The option mlr3.exec_chunk_bins determines the number of chunks to split the resampling iterations into. For example, when running a benchmark with 100 resampling iterations, options("mlr3.exec_chunk_bins" = 4) creates 4 computational jobs with 25 resampling iterations each. This reduces the parallelization overhead and speeds up the execution. The parallelization of the BLAS library can also interfere with future parallelization due to over-utilization of the available cores. Install RhpcBLASctl so that mlr3 can turn off the parallelization of BLAS; RhpcBLASctl can only be included as an optional dependency due to licensing issues.
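A hedged sketch of the chunking option in a benchmark (numbers chosen purely for illustration): here 2 learners times 10 folds give 20 resampling iterations, packed into 4 jobs of 5 iterations each.

```r
library(mlr3)
library(future)

plan("multisession")

# pack the 20 resampling iterations below into 4 computational jobs
options("mlr3.exec_chunk_bins" = 4)

design = benchmark_grid(
  tasks = tsk("iris"),
  learners = lrns(c("classif.rpart", "classif.featureless")),
  resamplings = rsmp("cv", folds = 10)
)
bmr = benchmark(design)
```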
Why is the parallelization of tuning slow?
Tuning can also suffer from the parallelization overhead described above. Additionally, the batch size of the tuner can have a large impact on the runtime. Setting an optimal batch size is explained in the section Parallelization of Tuning of the mlr3book.
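As an illustration (assuming the mlr3tuning package is installed), the batch size is a parameter of the tuner; with a multisession plan and 4 workers, a batch_size of 4 evaluates one configuration per worker at a time:

```r
library(mlr3)
library(mlr3tuning)
library(future)

plan("multisession", workers = 4)

# evaluate 4 configurations per batch, one per worker
instance = tune(
  tuner = tnr("random_search", batch_size = 4),
  task = tsk("iris"),
  learner = lrn("classif.rpart", cp = to_tune(0.001, 0.1)),
  resampling = rsmp("cv", folds = 3),
  term_evals = 20
)
instance$result
```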
Why are the CPUs on my system not fully utilized when using parallelization?
If there are few jobs with dissimilar runtimes, the system may end up waiting for the last chunk to finish, while other resources are idle. This is referred to as synchronization overhead. When minimizing the synchronization overhead, a too large chunk size can lead to a situation where the last chunk takes much longer than the others. This can be avoided by setting mlr3.exec_chunk_bins
to a smaller value than the number of cores available on the system.
How can I use time constraints when tuning?
Time constraints can be set for individual learners, tuning processes, and nested resampling. The gallery post Time constraints in the mlr3 ecosystem provides an overview of the different options.
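As a small sketch of the learner-level option: learners accept a timeout (in seconds) for their train and predict steps. Combined with encapsulation in a separate process and a fallback learner, a fit that exceeds the limit fails gracefully instead of blocking the tuning run.

```r
library(mlr3)

learner = lrn("classif.rpart")

# run train/predict in a separate R process so they can be interrupted
learner$encapsulate(method = "callr", fallback = lrn("classif.featureless"))

# abort training after 60 seconds and prediction after 30 seconds
learner$timeout = c(train = 60, predict = 30)
```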
Why is method X slower when used via mlr3?
By default, we set the number of threads of learners to 1 to avoid conflicts with parallelization. Therefore, the default configuration of a learner may be significantly slower than the default configuration of the method when used directly.
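If the method itself supports multithreading, the threads can be handed back to the learner, e.g. (assuming the ranger package is installed):

```r
library(mlr3)
library(mlr3learners)

# learners default to 1 thread inside mlr3;
# set_threads() re-enables the method's internal parallelization
learner = set_threads(lrn("classif.ranger"), n = 4)
```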
Preprocessing factor levels
When working with mlr3, it is important to avoid using special characters in the levels of factor variables. The presence of symbols such as +, -, <, >, =, or spaces in factor levels can cause errors during model training (depending on the learner used and whether the formula interface is utilized, e.g. as in the surv.parametric learner). While underscores (_) and dots (.) are generally safe to use, other special characters should be avoided. To ensure smooth operation and prevent errors, please follow these guidelines:
- Use descriptive labels with no special characters: Assign meaningful and descriptive labels to factor levels that do not include special characters. For example, instead of 60+ for a level of an age feature, use 60_above.
- Use factor encoding: Incorporate a pre-processing step in your data pipeline (e.g. see mlr_pipeops_encode) to make sure factors are one-hot encoded, alleviating problems that may arise from factor levels that contain unusual symbols.
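A sketch of the encoding approach with mlr3pipelines (assuming that package is installed): the encode PipeOp converts factor features to numeric indicator columns before the learner ever sees them.

```r
library(mlr3)
library(mlr3pipelines)

# one-hot encode all factor features, then fit the tree
graph = po("encode", method = "one-hot") %>>% lrn("classif.rpart")
glearner = as_learner(graph)
glearner$train(tsk("german_credit"))
```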
Memory Problems
One explanation for why mlr3 might in some cases use an unusual amount of memory is that packages were installed with the --with-keep.source flag. This configuration option is enabled by default when managing dependencies via renv, see issue #1713. To opt out of this default, run the code below, e.g. by adding it to your .Rprofile:

```r
options("install.opts" = "--without-keep.source")
```
How can I suppress logging output of learners on the R console?
Some learners are quite verbose during their train or predict step, and this clutters the R console. Note that this is different from controlling the generic mlr3 logger, which is covered under Logging. Most of these learners provide some option in their paramset to control output behavior. Another option is to simply use Encapsulation, likely in the evaluate mode, running the learner in the same R session but with caught exceptions and redirected output.
```r
library(mlr3)
library(mlr3learners)

mytask = tsk("iris")

# manual option: nnet's trace parameter controls its console output
mylearner = lrn("classif.nnet", trace = FALSE)

# generic option: encapsulate train/predict and redirect output
mylearner$encapsulate(method = "evaluate", fallback = lrn("classif.featureless"))
```

    Warning: The fallback learner 'classif.featureless' and the base learner
    'classif.nnet' have different predict types: 'response' != 'prob'.

```r
mylearner$train(mytask)
```

    INFO [09:45:14.892] [mlr3] Calling train method of fallback 'classif.featureless' on task 'iris' with 150 observations {learner: <LearnerClassifFeatureless/LearnerClassif/Learner/R6>}
A learner trained with an old mlr3 version does not work anymore
It is possible that a saved Learner that was trained with an old mlr3 version does not work with a different version of mlr3. In general, we recommend saving the computational environment using a tool like renv, so it can later be restored, avoiding such situations altogether. If this is not an option, a possible workaround is to construct the same learner in the currently used mlr3 version and manually set its $state to the one of the saved learner. This is illustrated below:
Using an old mlr3 version:

```r
learner = lrn("classif.rpart")
learner$train(tsk("iris"))
saveRDS(learner, "learner.rds")
```

With a subsequent mlr3 version:

```r
learner = lrn("classif.rpart")
learner_old = readRDS("learner.rds")
learner$state = learner_old$state
```
Note that this is risky and not guaranteed to work, for various reasons:

- You might have loaded a different version of the learner library (in this case the rpart package).
- The internals (such as the structure of the internal $state) might have changed between versions.
Therefore, be careful when attempting this solution and double-check that the learner behaves sensibly.
Caching of knitr/rmarkdown chunks does not work with mlr3
{knitr} by default uses R's lazy-load database to store the results of individual chunks. The lazy-load database is an internal feature of R and has issues handling active bindings (https://github.com/r-lib/R6/issues/152). Fortunately, it is possible to disable lazy-loading by setting the chunk option cache.lazy to FALSE:
```r
knitr::opts_chunk$set(cache = TRUE, cache.lazy = FALSE)
```
How to keep all mlr3 packages up-to-date?
Either run R’s update.packages()
to update all installed packages, or run
```r
devtools::update_packages("mlr3verse", dependencies = TRUE)
```
to update only packages from the mlr3verse. Note that this also updates recursive dependencies not listed as a direct import.