First release of mlrMBO - the toolbox for Bayesian Black-Box Optimization


Author

Jakob Richter

Published

March 13, 2017

We are happy to finally announce the first release of mlrMBO on CRAN after a quite long development time. For the theoretical background and a nearly complete overview of mlrMBO's capabilities you can check our paper on mlrMBO, which we have submitted to arXiv as a preprint.

The key features of mlrMBO are:

- Global optimization of expensive black-box functions.
- Multi-objective optimization.
- Parallelization through multi-point proposals.
- Support for numeric, categorical, mixed, and hierarchical parameter spaces.
- Usage of any regression learner integrated in mlr as the surrogate model.

For examples covering different scenarios we have vignettes that are also available as online documentation. For mlr users, mlrMBO is especially interesting for hyperparameter optimization.
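To give an idea of what this looks like, here is a minimal, purely illustrative sketch of tuning an SVM with mlrMBO through mlr's makeTuneControlMBO interface; the learner, task, and parameter ranges are placeholder choices and not taken from this post:

# Illustrative only: tune C and sigma of an RBF-SVM on a log2 scale.
library(mlr)
library(mlrMBO)
ps = makeParamSet(
  makeNumericParam("C", lower = -5, upper = 5, trafo = function(x) 2^x),
  makeNumericParam("sigma", lower = -5, upper = 5, trafo = function(x) 2^x)
)
# makeTuneControlMBO runs mlrMBO under the hood; budget = total evaluations.
tune.ctrl = makeTuneControlMBO(budget = 25L)
tune.res = tuneParams("classif.ksvm", task = iris.task, resampling = cv3,
  par.set = ps, control = tune.ctrl, measures = acc)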

mlrMBO for mlr hyperparameter tuning was already used in an earlier blog post. Nonetheless, we want to provide a small toy example in this post to demonstrate the workflow of mlrMBO.

Example

First, we define an objective function that we are going to minimize:

set.seed(1)
library(mlrMBO)
## Loading required package: mlr
## Loading required package: ParamHelpers
## Loading required package: smoof
## Loading required package: checkmate
fun = makeSingleObjectiveFunction(
  name = "SineMixture",
  fn = function(x) sin(x[1]) * cos(x[2]) / 2 + 0.04 * sum(x^2),
  par.set = makeNumericParamSet(id = "x", len = 2, lower = -5, upper = 5)
)

To define the objective function we use makeSingleObjectiveFunction from the neat package smoof, which, among other benefits, lets us visualize the function directly. If you happen to need functions to optimize or to benchmark your optimization algorithm, I recommend having a look at that package!

library(plot3D)
plot3D(fun, contour = TRUE, lighting = TRUE)
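As a side note (not part of the original example), smoof also ships a large collection of ready-made single-objective test functions that are handy for benchmarking optimizers; a quick sketch, assuming the smoof helpers makeBraninFunction and getGlobalOptimum:

# Illustrative only: one of the built-in smoof test functions.
branin = makeBraninFunction()
print(branin)
# For many test functions smoof also knows the global optimum:
getGlobalOptimum(branin)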

Let’s start with the configuration of the optimization:

# In this simple example we construct the control object with the defaults:
ctrl = makeMBOControl()
# For this numeric optimization we are going to use the Expected
# Improvement as infill criterion:
ctrl = setMBOControlInfill(ctrl, crit = crit.ei)
# We will allow for exactly 25 evaluations of the objective function:
ctrl = setMBOControlTermination(ctrl, max.evals = 25L)
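Other termination criteria are possible as well; a sketch, not used in this run, assuming the iters and time.budget arguments of setMBOControlTermination:

# Alternatives (not run): limit the number of sequential iterations or set a
# time budget in seconds instead of a fixed number of evaluations.
# ctrl = setMBOControlTermination(ctrl, iters = 17L)
# ctrl = setMBOControlTermination(ctrl, time.budget = 60)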

The optimization has to start with an initial design. mlrMBO can automatically create one, but here we are going to use a randomly sampled LHS design of our own:

library(ggplot2)
des = generateDesign(n = 8L, par.set = getParamSet(fun),
  fun = lhs::randomLHS)
autoplot(fun, render.levels = TRUE) + geom_point(data = des)

The points show how the initial design already covers the search space but misses the area of the global minimum. Before we can start the Bayesian optimization we have to set the surrogate learner to Kriging, for which we use an mlr regression learner. In fact, with mlrMBO you can use any regression learner integrated in mlr as a surrogate, which allows for many special optimization applications.

sur.lrn = makeLearner("regr.km", predict.type = "se",
  config = list(show.learner.output = FALSE))

Note: mlrMBO can automatically determine a good surrogate learner based on the search space defined for the objective function. For a purely numeric domain it would have chosen Kriging as well with some slight modifications to make it a bit more stable against numerical problems that can occur during optimization.
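For search spaces that are not purely numeric, e.g. ones with categorical parameters, a random forest is a common surrogate choice. A sketch, not used in this run, assuming mlr's regr.randomForest learner supports standard-error prediction:

# Illustrative only: a random forest surrogate for mixed parameter spaces.
# It would be passed to mbo() via the learner argument, just like sur.lrn.
rf.lrn = makeLearner("regr.randomForest", predict.type = "se")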

Finally, we can start the optimization run:

res = mbo(fun = fun, design = des, learner = sur.lrn, control = ctrl,
  show.info = TRUE)
## Computing y column(s) for design. Not provided.
## [mbo] 0: x=0.897,-2.51 : y = -0.0312 : 0.0 secs : initdesign
## [mbo] 0: x=-4.52,-0.278 : y = 1.29 : 0.0 secs : initdesign
## [mbo] 0: x=-2.58,-2.23 : y = 0.63 : 0.0 secs : initdesign
## [mbo] 0: x=-1.69,1.41 : y = 0.112 : 0.0 secs : initdesign
## [mbo] 0: x=4.08,4.23 : y = 1.57 : 0.0 secs : initdesign
## [mbo] 0: x=1.27,-4.52 : y = 0.792 : 0.0 secs : initdesign
## [mbo] 0: x=-0.163,0.425 : y = -0.0656 : 0.0 secs : initdesign
## [mbo] 0: x=3.1,3.25 : y = 0.788 : 0.0 secs : initdesign
## [mbo] 1: x=0.483,-1.18 : y = 0.153 : 0.0 secs : infill_ei
## [mbo] 2: x=-0.0918,1.51 : y = 0.0885 : 0.0 secs : infill_ei
## [mbo] 3: x=-0.856,0.593 : y = -0.27 : 0.0 secs : infill_ei
## [mbo] 4: x=-1.05,-0.239 : y = -0.375 : 0.0 secs : infill_ei
## [mbo] 5: x=-0.694,-2.25 : y = 0.423 : 0.0 secs : infill_ei
## [mbo] 6: x=-1.34,0.00144 : y = -0.415 : 0.0 secs : infill_ei
## [mbo] 7: x=2.3,-2.06 : y = 0.206 : 0.0 secs : infill_ei
## [mbo] 8: x=-1.55,-0.343 : y = -0.37 : 0.0 secs : infill_ei
## [mbo] 9: x=1.84,1.01 : y = 0.433 : 0.0 secs : infill_ei
## [mbo] 10: x=-0.408,4.41 : y = 0.844 : 0.0 secs : infill_ei
## [mbo] 11: x=5,-2.85 : y = 1.78 : 0.0 secs : infill_ei
## [mbo] 12: x=-1.29,-0.0751 : y = -0.412 : 0.0 secs : infill_ei
## [mbo] 13: x=-2.01,-0.0272 : y = -0.291 : 0.0 secs : infill_ei
## [mbo] 14: x=-5,5 : y = 2.14 : 0.0 secs : infill_ei
## [mbo] 15: x=-5,-5 : y = 2.14 : 0.0 secs : infill_ei
## [mbo] 16: x=1.21,3.12 : y = -0.0186 : 0.0 secs : infill_ei
## [mbo] 17: x=-1.28,0.0491 : y = -0.413 : 0.0 secs : infill_ei
res$x
## $x
## [1] -1.342495094  0.001436926
res$y
## [1] -0.4149338

We can see that we have found the global optimum of \(y = -0.414964\) at \(x = (-1.35265, 0)\) quite accurately. Let’s have a look at the points mlrMBO evaluated. For that we can use the OptPath, which stores all information about all evaluations during the optimization run:

opdf = as.data.frame(res$opt.path)
autoplot(fun, render.levels = TRUE, render.contours = FALSE) +
  geom_text(data = opdf, aes(label = dob))

It is interesting to see that for this run the algorithm first went to the local minimum on the top right in the 6th and 7th iterations but later, thanks to the explorative character of the Expected Improvement, found the true global minimum.
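Beyond the coordinates and the dob (iteration of proposal) used for the plot, the OptPath data frame records further per-evaluation information. A quick, hedged look at a few columns (the exact column names, e.g. prop.type, may differ between mlrMBO versions):

# Parameter values, objective value, iteration, and how each point was proposed.
head(opdf[, c("x1", "x2", "y", "dob", "prop.type")])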

Comparison

That is all good, but how do other optimization strategies perform?

A fair comparison

… for stochastic optimization algorithms can only be achieved by repeating the runs. mlrMBO is stochastic, as the initial design is generated randomly and the fit of the Kriging surrogate is also not deterministic. Furthermore, we should include other optimization strategies like a genetic algorithm and direct competitors like rBayesianOptimization. An extensive benchmark is available in our mlrMBO paper. The examples here are just meant to demonstrate the package.
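To illustrate what such a repetition could look like in principle, here is a rough sketch (assuming the objects fun, sur.lrn, and ctrl from above are still in scope; this is in no way a proper benchmark) that compares repeated MBO runs against plain random search with the same budget of 25 evaluations:

set.seed(2)
reps = 10L
# Best y found by MBO in each repetition, each time with a fresh random LHS design.
best.mbo = replicate(reps, {
  des = generateDesign(n = 8L, par.set = getParamSet(fun), fun = lhs::randomLHS)
  mbo(fun = fun, design = des, learner = sur.lrn, control = ctrl,
    show.info = FALSE)$y
})
# Best y found by plain random search with 25 evaluations per repetition.
best.rs = replicate(reps, {
  xs = generateRandomDesign(n = 25L, par.set = getParamSet(fun))
  min(apply(xs, 1, fun))
})
summary(best.mbo)
summary(best.rs)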

Engage

If you want to contribute to mlrMBO, we are always open to suggestions and pull requests on GitHub. You are also invited to fork the repository and build and extend your own optimizer based on our toolbox.