Feature Selection on the Titanic Data Set

Run a feature selection with different algorithms and use nested resampling.

Author

Marc Becker

Published

January 8, 2021

Introduction

In this tutorial, we introduce the mlr3fselect package by comparing feature selection methods on the Titanic disaster data set. The objective of feature selection is to enhance the interpretability of models, speed up the learning process, and increase predictive performance.

We load the mlr3verse package which pulls in the most important packages for this example.

library(mlr3verse)
library(mlr3fselect)

We initialize the random number generator with a fixed seed for reproducibility, and decrease the verbosity of the logger to keep the output concise.

set.seed(7832)
lgr::get_logger("mlr3")$set_threshold("warn")
lgr::get_logger("bbotk")$set_threshold("warn")

Titanic Data Set

The Titanic data set contains data for 887 Titanic passengers, including whether they survived when the Titanic sank. Our goal will be to predict the survival of the Titanic passengers.

After loading the data set from the mlr3data package, we impute the missing age values with the median age of the passengers, set missing embarked values to "S" and remove character features. We could use feature engineering to create new features from the character features; however, we want to focus on feature selection in this tutorial.

In addition to the survived column, the reduced data set contains the following attributes for each passenger:

Feature   Description
age       Age
sex       Sex
sib_sp    Number of siblings / spouses aboard
parch     Number of parents / children aboard
fare      Amount paid for the ticket
pclass    Passenger class
embarked  Port of embarkation

library(mlr3data)

data("titanic", package = "mlr3data")

# Impute missing age values with the median age
titanic$age[is.na(titanic$age)] = median(titanic$age, na.rm = TRUE)

# Set missing embarked values to "S"
titanic$embarked[is.na(titanic$embarked)] = "S"

# Remove character features
titanic$ticket = NULL
titanic$name = NULL
titanic$cabin = NULL

# Keep only passengers with a known outcome
titanic = titanic[!is.na(titanic$survived), ]

We construct a binary classification task.

task = as_task_classif(titanic, target = "survived", positive = "yes")
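
As a quick sanity check, we can inspect the features and target classes of the task (output omitted here):

# The task should list the seven features and the two classes of "survived"
task$feature_names
task$class_names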

Model

We use the logistic regression learner provided by the mlr3learners package.

library(mlr3learners)

learner = lrn("classif.log_reg")

To evaluate the predictive performance, we choose 3-fold cross-validation and the classification error as the measure. We instantiate the resampling on the task so that all feature sets are evaluated on the same data splits.

resampling = rsmp("cv", folds = 3)
measure = msr("classif.ce")

resampling$instantiate(task)

Classes

The FSelectInstanceSingleCrit class specifies a general feature selection scenario. It includes the ObjectiveFSelect object that encodes the black box objective function which is optimized by a feature selection algorithm. The evaluated feature sets are stored in an ArchiveFSelect object. The archive provides a method for querying the best performing feature set.

The Terminator classes determine when to stop the feature selection. In this example we choose a terminator that stops the feature selection after 10 seconds. The sugar functions trm() and trms() can be used to retrieve terminators from the mlr_terminators dictionary.

terminator = trm("run_time", secs = 10)
FSelectInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measure = measure,
  terminator = terminator)
<FSelectInstanceSingleCrit>
* State:  Not optimized
* Objective: <ObjectiveFSelect:classif.log_reg_on_titanic>
* Terminator: <TerminatorRunTime>
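
Several stopping rules can also be combined. As a sketch, the combo terminator from the bbotk package stops as soon as any of its sub-terminators fires:

# Stop after 20 evaluations or 10 seconds, whichever happens first
terminator = trm("combo",
  terminators = list(
    trm("evals", n_evals = 20),
    trm("run_time", secs = 10)
  )
)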

The FSelector subclasses describe the feature selection strategy. The sugar function fs() can be used to retrieve feature selection algorithms from the mlr_fselectors dictionary.

mlr_fselectors
<DictionaryFSelector> with 8 stored values
Keys: design_points, exhaustive_search, genetic_search, random_search, rfe, rfecv, sequential,
  shadow_variable_search
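
Retrieving an algorithm works analogously to the terminators. As a sketch, the optional batch_size parameter controls how many feature sets are evaluated per batch:

# Random search draws feature subsets at random
fselector = fs("random_search", batch_size = 5)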

Sequential forward selection

We try sequential forward selection. We choose TerminatorStagnation, which stops the feature selection once the predictive performance has not improved for a given number of batches.

terminator = trm("stagnation", iters = 5)
instance = FSelectInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measure = measure,
  terminator = terminator)

fselector = fs("sequential")
fselector$optimize(instance)
     age embarked  fare parch pclass  sex sib_sp                features classif.ce
1: FALSE    FALSE FALSE  TRUE   TRUE TRUE   TRUE parch,pclass,sex,sib_sp  0.1964085
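
After the optimization, the best feature set and its performance are stored in the instance and can be queried directly; a minimal sketch:

# Best feature set found by the forward selection
instance$result_feature_set

# Classification error of the best feature set
instance$result_y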

The FSelectorSequential object has a special method for displaying the optimization path of the sequential feature selection.

fselector$optimization_path(instance)
    age embarked  fare parch pclass   sex sib_sp classif.ce batch_nr
1: TRUE    FALSE FALSE FALSE  FALSE FALSE  FALSE  0.3838384        1
2: TRUE    FALSE FALSE FALSE  FALSE  TRUE  FALSE  0.2132435        2
3: TRUE    FALSE FALSE FALSE  FALSE  TRUE   TRUE  0.2087542        3
4: TRUE    FALSE FALSE FALSE   TRUE  TRUE   TRUE  0.2143659        4
5: TRUE    FALSE FALSE  TRUE   TRUE  TRUE   TRUE  0.2065095        5
6: TRUE    FALSE  TRUE  TRUE   TRUE  TRUE   TRUE  0.2020202        6

Recursive feature elimination

Recursive feature elimination utilizes the $importance() method of learners. In each iteration, the feature(s) with the lowest importance score are dropped. We choose the non-recursive algorithm (recursive = FALSE), which calculates the feature importance once on the complete feature set. The recursive version (recursive = TRUE) recomputes the feature importance on the reduced feature set in every iteration; a sketch of the recursive variant follows after the results below.

learner = lrn("classif.ranger", importance = "impurity")
terminator = trm("none")
instance = FSelectInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measure = measure,
  terminator = terminator,
  store_models = TRUE)

fselector = fs("rfe", recursive = FALSE)
fselector$optimize(instance)
    age embarked fare parch pclass  sex sib_sp                               features classif.ce
1: TRUE     TRUE TRUE  TRUE   TRUE TRUE   TRUE age,embarked,fare,parch,pclass,sex,...  0.1694725

We access the results stored in the archive.

as.data.table(instance$archive, exclude_columns = c("runtime_learners", "timestamp", "batch_nr", "resample_result", "uhash"))
    age embarked fare parch pclass  sex sib_sp classif.ce warnings errors      importance
1: TRUE     TRUE TRUE  TRUE   TRUE TRUE   TRUE  0.1694725        0      0 7,6,5,4,3,2,...
2: TRUE    FALSE TRUE FALSE  FALSE TRUE  FALSE  0.2132435        0      0           7,6,5
                                 features
1: age,embarked,fare,parch,pclass,sex,...
2:                           age,fare,sex
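
For comparison, the recursive variant can be run on a fresh instance; a minimal sketch (results will differ from the non-recursive run):

instance = FSelectInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = resampling,
  measure = measure,
  terminator = trm("none"),
  store_models = TRUE)

# Importance scores are recomputed after each elimination step
fselector = fs("rfe", recursive = TRUE)
fselector$optimize(instance)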

Nested resampling

It is a common mistake to report the predictive performance estimated on resampling sets during the feature selection as the performance that can be expected from the combined feature selection and model training. The repeated evaluation of the model might leak information about the test sets into the model and thus lead to over-fitting and over-optimistic performance results.

Nested resampling uses an outer and an inner resampling to separate the feature selection from the performance estimation of the model. We can use the AutoFSelector class for running nested resampling. The AutoFSelector essentially combines a given Learner and a feature selection method into a Learner with internal automatic feature selection. The inner resampling loop that is used to determine the best feature set is conducted internally each time the AutoFSelector Learner object is trained.

resampling_inner = rsmp("cv", folds = 5)
measure = msr("classif.ce")

at = AutoFSelector$new(
  learner = learner,
  resampling = resampling_inner,
  measure = measure,
  terminator = terminator,
  fselect = fs("sequential"),
  store_models = TRUE)

We put the AutoFSelector into a resample() call to get the outer resampling loop.

resampling_outer = rsmp("cv", folds = 3)

rr = resample(task, at, resampling_outer, store_models = TRUE)

The aggregated performance of all outer resampling iterations is the unbiased predictive performance we can expect from the random forest model with an optimized feature set found by sequential forward selection.

rr$aggregate()
classif.ce 
 0.1829405 

We check whether the feature sets that were selected in the inner resampling are stable. The selected feature sets should not differ too much. We might observe unstable models in this example because the small data set and the low number of resampling iterations might introduce too much randomness. Usually, we aim for the selection of similar feature sets for all outer training sets.

extract_inner_fselect_results(rr)
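
To quantify the stability, we can count how often each feature was selected across the outer folds; a minimal sketch, assuming the logical feature columns returned by extract_inner_fselect_results():

results = extract_inner_fselect_results(rr)

# Number of outer folds in which each feature was selected
colSums(results[, c("age", "embarked", "fare", "parch", "pclass", "sex", "sib_sp"), with = FALSE])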

Next, we compare the predictive performances estimated on the outer resampling to those estimated on the inner resampling. Significantly lower performance on the outer resampling indicates that the models with the optimized feature sets overfit the data.

rr$score()[, .(iteration, task_id, learner_id, resampling_id, classif.ce)]
   iteration task_id               learner_id resampling_id classif.ce
1:         1 titanic classif.ranger.fselector            cv  0.1515152
2:         2 titanic classif.ranger.fselector            cv  0.1952862
3:         3 titanic classif.ranger.fselector            cv  0.2020202

The archives of the AutoFSelector give us all evaluated feature sets together with their associated predictive performances.

extract_inner_fselect_archives(rr)

Shortcuts

The creation of the feature selection instance and the optimization can be shortened with the fselect() shortcut.

instance = fselect(
  fselector = fs("random_search"),
  task = task,
  learner = lrn("classif.log_reg"),
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  term_evals = 10
)

Applying nested resampling can be shortened with the fselect_nested() shortcut.

rr = fselect_nested(
  fselector = fs("random_search"),
  task = task,
  learner = lrn("classif.log_reg"),
  inner_resampling = rsmp("cv", folds = 3),
  outer_resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  term_evals = 10
)
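
As before, the returned ResampleResult can be aggregated to obtain the unbiased performance estimate:

# Unbiased classification error of the nested feature selection
rr$aggregate()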