Frequently Asked Questions

Why is there only the rpart learner?

The base package mlr3 ships only with regression and classification trees from the rpart package and some learners for debugging. A selection of popular learners can be found in the extension package mlr3learners. Survival learners are provided by mlr3proba and cluster learners by mlr3cluster. Additional learners can be found in the extension package mlr3extralearners. If your favorite learner is missing, please open a learner request. An overview of all learners can be found on our website.
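
To browse the learners available in your current R session, you can convert the mlr_learners dictionary to a table; which learners appear depends on the extension packages you have loaded.

library(mlr3)
library(mlr3learners)

# list all learners registered by the currently loaded packages
as.data.table(mlr_learners)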

How can I use parallelization?

Parallelization is supported when training learners, resampling, tuning and predicting. We recommend reading the section about Parallelization in the mlr3book.
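
For example, a minimal sketch that runs the iterations of a cross-validation on multiple local R sessions; the multisession backend, the number of workers, and the sonar task (from the mlbench package) are arbitrary choices for illustration.

library(mlr3)
library(future)

# run the resampling iterations on 4 local R sessions
plan("multisession", workers = 4)

task = tsk("sonar")
learner = lrn("classif.rpart")
rr = resample(task, learner, rsmp("cv", folds = 10))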

Why is the parallelization with the future package slow?

Starting and terminating workers, as well as possible communication between workers, comes at a price in the form of additional runtime, the so-called parallelization overhead. This overhead varies strongly between parallelization backends and must be carefully weighed against the runtime of the sequential execution to determine whether parallelization is worth the effort.

When resampling or tuning a fast-fitting learner, it helps to chunk multiple resampling iterations into a single computational job. The option mlr3.exec_chunk_bins determines the number of chunks to split the resampling iterations into. For example, when running a benchmark with 100 resampling iterations, options("mlr3.exec_chunk_bins" = 4) creates 4 computational jobs with 25 resampling iterations each. This reduces the parallelization overhead and speeds up the execution.

Additionally, the parallelization of the BLAS library can interfere with future parallelization due to over-utilization of the available cores. Install RhpcBLASctl so that mlr3 can turn off the parallelization of BLAS. RhpcBLASctl can only be included as an optional dependency due to licensing issues.
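
The following sketch illustrates the chunking example above; the multisession backend, the sonar task (from the mlbench package), and the 10 times repeated 10-fold cross-validation are assumptions chosen to produce 100 fast resampling iterations.

library(mlr3)
library(future)

plan("multisession")

# group the 100 resampling iterations into 4 computational jobs of 25 iterations each
options("mlr3.exec_chunk_bins" = 4)

design = benchmark_grid(
  tasks = tsk("sonar"),
  learners = lrn("classif.rpart"),
  resamplings = rsmp("repeated_cv", folds = 10, repeats = 10)
)
bmr = benchmark(design)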

Why is the parallelization of tuning slow?

Tuning can also suffer from the parallelization overhead described above. Additionally, the batch size of the tuner can have a large impact on the runtime. Setting an optimal batch size is explained in the section Parallelization of Tuning of the mlr3book.
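
As a sketch, setting the batch size of the tuner to (a multiple of) the number of workers keeps all workers busy within a batch; the learner, search space, and evaluation budget below are assumptions for illustration.

library(mlr3)
library(mlr3tuning)
library(future)

plan("multisession", workers = 4)

instance = tune(
  # evaluate 4 configurations per batch, matching the number of workers
  tuner = tnr("random_search", batch_size = 4),
  task = tsk("sonar"),
  learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1, logscale = TRUE)),
  resampling = rsmp("cv", folds = 3),
  term_evals = 20
)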

Why are the CPUs on my system not fully utilized when using parallelization?

If there are only a few jobs with dissimilar runtimes, the system may end up waiting for the last job to finish while other resources are idle. This is referred to as synchronization overhead. When chunking resampling iterations to reduce the parallelization overhead, a too large chunk size can lead to a situation where the last chunk takes much longer than the others. This can be avoided by setting mlr3.exec_chunk_bins to a value larger than the number of cores available on the system, so that the work is split into more, smaller chunks.
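
A rough rule of thumb (an assumption for illustration, not an official recommendation) is to create a few chunks per worker so that faster workers can pick up the remaining chunks:

# assumption: two chunks per available core, detected via the parallelly package
options("mlr3.exec_chunk_bins" = 2 * parallelly::availableCores())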

How can I use time constraints when tuning?

Time constraints can be set for individual learners, tuning processes, and nested resampling. The gallery post Time constraints in the mlr3 ecosystem provides an overview of the different options.
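
A minimal sketch combining both mechanisms, with the concrete limits, learner, and task being assumptions for illustration: the learner is given a timeout for its train and predict steps (which requires encapsulation in a separate process), and the tuning itself is stopped by a run-time terminator.

library(mlr3)
library(mlr3tuning)

# limit the time the learner may spend in train and predict (in seconds)
learner = lrn("classif.rpart", cp = to_tune(1e-4, 1e-1))
learner$timeout = c(train = 60, predict = 60)
learner$encapsulate(method = "callr", fallback = lrn("classif.featureless"))

# stop the tuning after at most 5 minutes
instance = ti(
  task = tsk("sonar"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measures = msr("classif.ce"),
  terminator = trm("run_time", secs = 300)
)
tnr("random_search")$optimize(instance)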

Why is method X slower when used via mlr3?

By default, we set the number of threads of learners to 1 to avoid conflicts with parallelization. Therefore, the default configuration of a learner may be significantly slower than the default configuration of the method when used directly.
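
If the method parallelizes internally, you can give it back its threads via set_threads(), which sets all parameters tagged as threads. A minimal sketch, assuming mlr3learners and the ranger package are installed:

library(mlr3)
library(mlr3learners)

# allow the random forest to use 4 threads internally instead of the single-threaded default
learner = lrn("classif.ranger")
set_threads(learner, n = 4)
learner$train(tsk("sonar"))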

Preprocessing factor levels

When working with mlr3, it is important to avoid special characters in the levels of factor variables. Symbols such as +, -, <, >, = or spaces in factor levels can cause errors during model training (depending on the learner used and whether the formula interface is utilized, e.g. as in the surv.parametric learner). While underscores (_) and dots (.) are generally safe to use, other special characters should be avoided. To ensure smooth operation and prevent errors, please follow these guidelines:

  1. Use descriptive labels with no special characters: Assign meaningful and descriptive labels to factor levels that do not include special characters. For example, instead of 60+ for a factor level of an age feature, use 60_above.
  2. Use factor encoding: Incorporate a pre-processing step in your data pipeline (e.g. see mlr_pipeops_encode) to make sure factors are one-hot encoded, alleviating problems that may arise from factor levels containing special characters, as sketched below.
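
A minimal sketch of such a pipeline, assuming mlr3pipelines is installed; the tree learner and the german_credit task (which contains many factor features) are arbitrary choices for illustration.

library(mlr3)
library(mlr3pipelines)

# one-hot encode all factor features before they reach the learner
graph_learner = as_learner(po("encode", method = "one-hot") %>>% lrn("classif.rpart"))
graph_learner$train(tsk("german_credit"))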

Memory Problems

One reason why mlr3 might in some cases use an unusually large amount of memory is that packages were installed with the --with-keep.source flag. This configuration option is enabled by default when managing dependencies via renv, see issue #1713. To opt out of this default, run the code below, e.g. by adding it to your .Rprofile:

options("install.opts" = "--without-keep.source")

How can I suppress logging output of learners on the R console?

Some learners are quite verbose during their train or predict step, which clutters the R console. Note that this is different from controlling the generic mlr3 logger, which is covered under Logging. Most of these learners provide an option in their parameter set to control the output behavior. Another option is to use Encapsulation, most likely in the "evaluate" mode, which runs the learner in the same R session but catches exceptions and redirects output.

library(mlr3)
library(mlr3learners)
mytask = tsk("iris")
# manual option: turn off the learner's own console output via its parameter set
mylearner = lrn("classif.nnet", trace = FALSE)
# generic option: encapsulate the learner to catch exceptions and redirect output
mylearner$encapsulate(method = "evaluate", fallback = lrn("classif.featureless"))
Warning: The fallback learner 'classif.featureless' and the base learner
'classif.nnet' have different predict types: 'response' != 'prob'.
mylearner$train(mytask)
INFO  [09:45:14.892] [mlr3] Calling train method of fallback 'classif.featureless' on task 'iris' with 150 observations {learner: <LearnerClassifFeatureless/LearnerClassif/Learner/R6>}

A learner trained with an old mlr3 version does not work anymore

It is possible that a saved Learner that was trained with an old mlr3 version does not work with a newer version of mlr3. In general, we recommend saving the computational environment with a tool like renv so that it can later be restored, avoiding such situations altogether. If this is not an option, a possible workaround is to construct the same learner in the currently used mlr3 version and manually set its $state to that of the saved learner. This is illustrated below:

  1. Using an old mlr3 version:

    learner = lrn("classif.rpart")
    learner$train(tsk("iris"))
    saveRDS(learner, "learner.rds")
  2. With a subsequent mlr3 version:

    learner = lrn("classif.rpart")
    learner_old = readRDS("learner.rds")
    learner$state = learner_old$state

Note that this is risky and not guaranteed to work, for various reasons:

  * You might now have loaded a different version of the learner's package (in this case the rpart package).
  * The internals (such as the structure of the internal $state) might have changed between versions.

Therefore, be careful when attempting this solution and double-check that the learner behaves sensibly.

Caching of knitr/rmarkdown chunks does not work with mlr3

By default, {knitr} uses R’s lazy-load database to store the results of individual chunks. The lazy-load database is an internal feature of R and has issues handling active bindings (https://github.com/r-lib/R6/issues/152). Fortunately, lazy-loading can be disabled by setting the chunk option cache.lazy to FALSE:

knitr::opts_chunk$set(cache = TRUE, cache.lazy = FALSE)

How to keep all mlr3 packages up-to-date?

Either run R’s update.packages() to update all installed packages, or run

devtools::update_packages("mlr3verse", dependencies = TRUE)

to update only packages from the mlr3verse. Note that this also updates recursive dependencies not listed as a direct import.