
Recommended hyperparameter tuning approach

Open · shaddyab opened this issue 5 years ago · 1 comment

Are there any recommended guidelines or tutorials that should be followed when tuning the hyperparameters of R or X learners? For example, for the X learner, should all four learners (i.e., control_outcome_learner, treatment_outcome_learner, control_effect_learner, and treatment_effect_learner) be tuned separately or combined?
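For context, causalml's BaseXRegressor can be constructed either with a single shared base learner or with the four learners listed above, so whichever tuning scheme is chosen can be expressed directly. A minimal sketch, assuming a recent causalml release:

```python
# Minimal sketch (assuming a recent causalml release): BaseXRegressor accepts
# either one shared base learner, which is copied into all four roles, or the
# four learners named above, each configured independently.
from sklearn.linear_model import Lasso
from causalml.inference.meta import BaseXRegressor

# All four roles share one Lasso configuration
xl_shared = BaseXRegressor(learner=Lasso(alpha=0.1))

# Each role gets its own Lasso configuration
xl_separate = BaseXRegressor(
    control_outcome_learner=Lasso(alpha=0.2),
    treatment_outcome_learner=Lasso(alpha=0.1),
    control_effect_learner=Lasso(alpha=0.4),
    treatment_effect_learner=Lasso(alpha=0.3),
)
```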

Assuming 1) a grid search approach, 2) the X-learner, 3) Lasso as the base learner, 4) alpha as the parameter to be tuned, and 5) a single treatment, here are some of the approaches I was considering:

  1. In each grid search iteration, evaluate the X-learner where all four algorithms (2 outcome learners + 2 effect learners) share the same hyperparameters. For example, use Lasso(alpha=0.1) for all four algorithms.

  2. In each grid search iteration, evaluate the X-learner where the learners at each stage share the same hyperparameters. For example, use Lasso(alpha=0.1) for the two effect algorithms (i.e., tau_) and Lasso(alpha=0.3) for the two outcome algorithms (i.e., mu_).

  3. In each grid search iteration, evaluate the X-learner where the learners in each group share the same hyperparameters. For example, use Lasso(alpha=0.1) for the two control algorithms (i.e., _c) and Lasso(alpha=0.3) for the two treatment algorithms (i.e., _t).

  4. In each grid search iteration, evaluate the X-learner where all four algorithms (2 outcome learners + 2 effect learners) have separate hyperparameters. For example, mu_t Lasso(alpha=0.1), mu_c Lasso(alpha=0.2), tau_t Lasso(alpha=0.3), tau_c Lasso(alpha=0.4).

I am aware that option #4 will increase the search space significantly, but is it necessary for appropriate tuning, or can I 'compromise' by using one of the other options? A sketch of what option #2 could look like is included below.
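For concreteness, here is a rough sketch of option #2 as a grid search: one alpha shared by the two outcome (mu_) learners and another shared by the two effect (tau_) learners. It uses causalml's synthetic_data helper, so the true CATE and propensity scores are available for scoring; with real data a surrogate validation metric (e.g., an uplift/qini-style score on a held-out split) would be needed. The names and signatures assume a recent causalml release.

```python
# Sketch of tuning option #2: one alpha for both outcome (mu_) learners and
# another for both effect (tau_) learners, scored against the known CATE of
# synthetic data. With real data, swap the MSE line for a surrogate metric.
import itertools

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

from causalml.dataset import synthetic_data
from causalml.inference.meta import BaseXRegressor

# Synthetic data: true CATE (tau) and propensity (e) are known
y, X, treatment, tau, b, e = synthetic_data(mode=1, n=2000, p=5, sigma=1.0)
(X_tr, X_val, y_tr, y_val, w_tr, w_val,
 tau_tr, tau_val, e_tr, e_val) = train_test_split(
    X, y, treatment, tau, e, test_size=0.3, random_state=42)

alphas = [0.01, 0.1, 0.3, 1.0]
results = []
for alpha_mu, alpha_tau in itertools.product(alphas, alphas):
    xl = BaseXRegressor(
        control_outcome_learner=Lasso(alpha=alpha_mu),
        treatment_outcome_learner=Lasso(alpha=alpha_mu),
        control_effect_learner=Lasso(alpha=alpha_tau),
        treatment_effect_learner=Lasso(alpha=alpha_tau),
    )
    xl.fit(X_tr, w_tr, y_tr, p=e_tr)                # single binary treatment
    tau_hat = xl.predict(X_val, p=e_val).flatten()  # CATE estimates on the validation split
    mse = np.mean((tau_hat - tau_val) ** 2)         # possible only because true tau is known here
    results.append((alpha_mu, alpha_tau, mse))

best_alpha_mu, best_alpha_tau, best_mse = min(results, key=lambda r: r[2])
print(f"best alpha_mu={best_alpha_mu}, alpha_tau={best_alpha_tau}, MSE={best_mse:.4f}")
```

Option #4 would follow the same pattern with a four-dimensional grid (one alpha per learner), which is why the search space grows so quickly.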

shaddyab · Nov 04 '20 21:11