Hotstarting

Resume the training of learners.

Published

January 16, 2023

Scope

Hotstarting a learner means resuming training from an already fitted model. An example would be training an already fitted XGBoost model for an additional 500 boosting iterations. In mlr3, we call this process hotstarting: a learner has access to a cache of already trained models, which is called a mlr3::HotstartStack. We distinguish between forward and backward hotstarting. We start this post with backward hotstarting and then discuss the less efficient forward hotstarting.

Backward Hotstarting

In this example, we optimize the hyperparameters of a random forest and use hotstarting to reduce the runtime. Hotstarting a random forest backwards is very simple: the model remains unchanged and only a subset of the trees is used for prediction, i.e., a new model is not fitted. For example, suppose a random forest is trained with 1000 trees and a specific hyperparameter configuration. If another random forest with 500 trees but the same hyperparameter configuration has to be trained, the model with 1000 trees is copied and only 500 of its trees are used for prediction.
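
Under the hood, this can also be done manually. The following is a minimal sketch (the task and the numbers of trees are chosen for illustration only): we train a forest with 1000 trees, put it on a mlr3::HotstartStack, and then "train" a forest with 500 trees, which simply reuses the fitted model.

library(mlr3)
library(mlr3learners)

task = tsk("spam")

# fit the larger forest once
learner_1000 = lrn("classif.ranger", num.trees = 1000)
learner_1000$train(task)

# the smaller forest hotstarts backwards from the larger one,
# i.e. no new model is fitted and only 500 trees are used for prediction
learner_500 = lrn("classif.ranger", num.trees = 500)
learner_500$hotstart_stack = HotstartStack$new(learner_1000)
learner_500$train(task)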

We load the ranger learner and set the search space from the Bischl et al. (2021) article.

library(mlr3verse)

learner = lrn("classif.ranger",
  mtry.ratio      = to_tune(0, 1),
  replace         = to_tune(),
  sample.fraction = to_tune(1e-1, 1),
  num.trees       = to_tune(1, 2000)
)

We activate hotstarting with the allow_hotstart option. When running a grid search with hotstarting, the grid is sorted by the hotstart parameter (num.trees), so the models with 2000 trees are trained first. The models with fewer than 2000 trees then hotstart on the 2000-tree models, which allows their training to be completed immediately.

instance = tune(
  tuner = tnr("grid_search", resolution = 5, batch_size = 5),
  task = tsk("spam"),
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  allow_hotstart = TRUE
)

For comparison, we perform the same tuning without hotstarting.

instance_2 = tune(
  tuner = tnr("grid_search", resolution = 5, batch_size = 5),
  task = tsk("spam"),
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  allow_hotstart = FALSE
)

We plot the time of completion of each batch (see Figure 1). Each batch contains 5 configurations. We can see that tuning with hotstarting is slower at first. As soon as all models with 2000 trees are fitted, the tuning runs much faster and overtakes the tuning without hotstarting.

Figure 1: Time of completion of each batch with and without hotstarting.

Forward Hotstarting

Forward hotstarting is currently only supported by XGBoost. However, we have observed that hotstarting only provides a speed advantage for very large datasets and models with more than 5000 boosting rounds. The reason is that copying the models from the main process to the workers is a major bottleneck. The parallelization package future copies the models sequentially to the workers, so it takes a long time until the last worker can even start. Moreover, copying itself consumes a lot of time, and copying the models back from the workers blocks the main process again. During development, we overestimated the speed benefits of hotstarting and underestimated the overhead of parallelization. We therefore advise against using forward hotstarting during tuning. It is much more efficient to use the internal early-stopping mechanism of XGBoost, which eliminates the need to copy models to the workers. See the gallery post on early stopping for an example. We might improve the efficiency of the hotstarting mechanism in the future if there are convincing use cases.
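
As a hedged sketch of the recommended alternative (assuming your versions of mlr3 and mlr3learners expose the $validate field and the early_stopping_rounds parameter of the XGBoost learner), internal early stopping can look like this:

library(mlr3verse)

learner = lrn("classif.xgboost",
  nrounds = 5000,              # upper bound on the number of boosting rounds
  eta = 0.1,
  early_stopping_rounds = 20   # stop when the validation score stalls for 20 rounds
)
learner$validate = 0.2         # hold out 20% of the training data for validation

learner$train(tsk("spam"))

# boosting rounds actually used (if early stopping was triggered)
learner$model$best_iteration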

Manual Hotstarting

Nevertheless, forward hotstarting can be useful without parallelization, for example if you have an already trained model and want to add more boosting iterations to it. In this example, learner_5000 is the already trained model. We create a new learner with the same hyperparameters but double the number of boosting iterations. To activate hotstarting, we create a HotstartStack and assign it to the $hotstart_stack slot of the new learner.

task = tsk("spam")

learner_5000 = lrn("classif.xgboost", nrounds = 5000, eta = 0.1)
learner_5000$train(task)

learner_10000 = lrn("classif.xgboost", nrounds = 10000, eta = 0.1)
learner_10000$hotstart_stack = HotstartStack$new(learner_5000)
learner_10000$train(task)

Training the initial model took 59.885 seconds.

learner_5000$state$train_time
[1] 59.885

Adding 5000 boosting rounds took 46.837 seconds.

learner_10000$state$train_time - learner_5000$state$train_time
[1] 46.837

Training the model from the beginning would have taken about two minutes. This means, without parallelization, we get the expected speed advantage.
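
The hotstarted learner behaves like any other trained learner. As a small usage example (scoring on the training task only, for illustration), we can predict with it directly:

learner_10000$predict(task)$score(msr("classif.ce"))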

Conclusion

We have seen how mlr3 can reduce training time by building on a hotstart stack of already trained learners. One has to be careful, however, when using forward hotstarting during tuning because of the high parallelization overhead that arises from copying models between processes. If a learner has an internal early-stopping implementation, it should usually be preferred over the mlr3 hotstarting mechanism. However, manual forward hotstarting can be helpful when we do not want to train a large model from scratch.

Session Information

sessioninfo::session_info(info = "packages")
═ Session info ═══════════════════════════════════════════════════════════════════════════════════════════════════════
─ Packages ───────────────────────────────────────────────────────────────────────────────────────────────────────────
 ! package           * version    date (UTC) lib source
   backports           1.5.0      2024-05-23 [1] CRAN (R 4.4.1)
   bbotk               1.1.1      2024-10-15 [1] CRAN (R 4.4.1)
   checkmate           2.3.2      2024-07-29 [1] CRAN (R 4.4.1)
 P class               7.3-22     2023-05-03 [?] CRAN (R 4.4.0)
   cli                 3.6.3      2024-06-21 [1] CRAN (R 4.4.1)
   clue                0.3-65     2023-09-23 [1] CRAN (R 4.4.1)
 P cluster             2.1.6      2023-12-01 [?] CRAN (R 4.4.0)
 P codetools           0.2-20     2024-03-31 [?] CRAN (R 4.4.0)
   colorspace          2.1-1      2024-07-26 [1] CRAN (R 4.4.1)
   crayon              1.5.3      2024-06-20 [1] CRAN (R 4.4.1)
   data.table        * 1.16.2     2024-10-10 [1] CRAN (R 4.4.1)
   DEoptimR            1.1-3      2023-10-07 [1] CRAN (R 4.4.1)
   digest              0.6.37     2024-08-19 [1] CRAN (R 4.4.1)
   diptest             0.77-1     2024-04-10 [1] CRAN (R 4.4.1)
   dplyr               1.1.4      2023-11-17 [1] CRAN (R 4.4.1)
   evaluate            1.0.1      2024-10-10 [1] CRAN (R 4.4.1)
   fansi               1.0.6      2023-12-08 [1] CRAN (R 4.4.1)
   farver              2.1.2      2024-05-13 [1] CRAN (R 4.4.1)
   fastmap             1.2.0      2024-05-15 [1] CRAN (R 4.4.1)
   flexmix             2.3-19     2023-03-16 [1] CRAN (R 4.4.1)
   fpc                 2.2-13     2024-09-24 [1] CRAN (R 4.4.1)
   future              1.34.0     2024-07-29 [1] CRAN (R 4.4.1)
   generics            0.1.3      2022-07-05 [1] CRAN (R 4.4.1)
   ggplot2           * 3.5.1      2024-04-23 [1] CRAN (R 4.4.1)
   globals             0.16.3     2024-03-08 [1] CRAN (R 4.4.1)
   glue                1.8.0      2024-09-30 [1] CRAN (R 4.4.1)
   gtable              0.3.5      2024-04-22 [1] CRAN (R 4.4.1)
   htmltools           0.5.8.1    2024-04-04 [1] CRAN (R 4.4.1)
   htmlwidgets         1.6.4      2023-12-06 [1] CRAN (R 4.4.1)
   jsonlite            1.8.9      2024-09-20 [1] CRAN (R 4.4.1)
   kernlab             0.9-33     2024-08-13 [1] CRAN (R 4.4.1)
   knitr               1.48       2024-07-07 [1] CRAN (R 4.4.1)
   labeling            0.4.3      2023-08-29 [1] CRAN (R 4.4.1)
 P lattice             0.22-5     2023-10-24 [?] CRAN (R 4.3.3)
   lgr                 0.4.4      2022-09-05 [1] CRAN (R 4.4.1)
   lifecycle           1.0.4      2023-11-07 [1] CRAN (R 4.4.1)
   listenv             0.9.1      2024-01-29 [1] CRAN (R 4.4.1)
   magrittr            2.0.3      2022-03-30 [1] CRAN (R 4.4.1)
 P MASS                7.3-61     2024-06-13 [?] CRAN (R 4.4.1)
   mclust              6.1.1      2024-04-29 [1] CRAN (R 4.4.1)
   mlr3              * 0.21.1     2024-10-18 [1] CRAN (R 4.4.1)
   mlr3cluster         0.1.10     2024-10-03 [1] CRAN (R 4.4.1)
   mlr3data            0.7.0      2023-06-29 [1] CRAN (R 4.4.1)
   mlr3extralearners   0.9.0-9000 2024-10-18 [1] Github (mlr-org/mlr3extralearners@a622524)
   mlr3filters         0.8.0      2024-04-10 [1] CRAN (R 4.4.1)
   mlr3fselect         1.1.1.9000 2024-10-18 [1] Github (mlr-org/mlr3fselect@e917a02)
   mlr3hyperband       0.6.0      2024-06-29 [1] CRAN (R 4.4.1)
   mlr3learners        0.7.0      2024-06-28 [1] CRAN (R 4.4.1)
   mlr3mbo             0.2.6      2024-10-16 [1] CRAN (R 4.4.1)
   mlr3misc            0.15.1     2024-06-24 [1] CRAN (R 4.4.1)
   mlr3pipelines       0.7.0      2024-09-24 [1] CRAN (R 4.4.1)
   mlr3tuning          1.0.2      2024-10-14 [1] CRAN (R 4.4.1)
   mlr3tuningspaces    0.5.1      2024-06-21 [1] CRAN (R 4.4.1)
   mlr3verse         * 0.3.0      2024-06-30 [1] CRAN (R 4.4.1)
   mlr3viz             0.9.0      2024-07-01 [1] CRAN (R 4.4.1)
   mlr3website       * 0.0.0.9000 2024-10-18 [1] Github (mlr-org/mlr3website@20d1ddf)
   modeltools          0.2-23     2020-03-05 [1] CRAN (R 4.4.1)
   munsell             0.5.1      2024-04-01 [1] CRAN (R 4.4.1)
 P nnet                7.3-19     2023-05-03 [?] CRAN (R 4.3.3)
   palmerpenguins      0.1.1      2022-08-15 [1] CRAN (R 4.4.1)
   paradox             1.0.1      2024-07-09 [1] CRAN (R 4.4.1)
   parallelly          1.38.0     2024-07-27 [1] CRAN (R 4.4.1)
   pillar              1.9.0      2023-03-22 [1] CRAN (R 4.4.1)
   pkgconfig           2.0.3      2019-09-22 [1] CRAN (R 4.4.1)
   prabclus            2.3-4      2024-09-24 [1] CRAN (R 4.4.1)
   R6                  2.5.1      2021-08-19 [1] CRAN (R 4.4.1)
   Rcpp                1.0.13     2024-07-17 [1] CRAN (R 4.4.1)
   renv                1.0.11     2024-10-12 [1] CRAN (R 4.4.1)
   rlang               1.1.4      2024-06-04 [1] CRAN (R 4.4.1)
   rmarkdown           2.28       2024-08-17 [1] CRAN (R 4.4.1)
   robustbase          0.99-4-1   2024-09-27 [1] CRAN (R 4.4.1)
   scales              1.3.0      2023-11-28 [1] CRAN (R 4.4.1)
   sessioninfo         1.2.2      2021-12-06 [1] CRAN (R 4.4.1)
   spacefillr          0.3.3      2024-05-22 [1] CRAN (R 4.4.1)
   stringi             1.8.4      2024-05-06 [1] CRAN (R 4.4.1)
   tibble              3.2.1      2023-03-20 [1] CRAN (R 4.4.1)
   tidyselect          1.2.1      2024-03-11 [1] CRAN (R 4.4.1)
   utf8                1.2.4      2023-10-22 [1] CRAN (R 4.4.1)
   uuid                1.2-1      2024-07-29 [1] CRAN (R 4.4.1)
   vctrs               0.6.5      2023-12-01 [1] CRAN (R 4.4.1)
   viridisLite         0.4.2      2023-05-02 [1] CRAN (R 4.4.1)
   withr               3.0.1      2024-07-31 [1] CRAN (R 4.4.1)
   xfun                0.48       2024-10-03 [1] CRAN (R 4.4.1)
   yaml                2.3.10     2024-07-26 [1] CRAN (R 4.4.1)

 [1] /home/marc/repositories/mlr3website/mlr-org/renv/library/linux-ubuntu-noble/R-4.4/x86_64-pc-linux-gnu
 [2] /home/marc/.cache/R/renv/sandbox/linux-ubuntu-noble/R-4.4/x86_64-pc-linux-gnu/9a444a72

 P ── Loaded and on-disk path mismatch.

──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

References

Bischl, Bernd, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, Stefan Coors, Janek Thomas, et al. 2021. “Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges.” arXiv:2107.05847 [Cs, Stat], July. http://arxiv.org/abs/2107.05847.