
Bundling XGBoost objects removes variable names when applying xgb.importance()

Open joranE opened this issue 1 year ago • 2 comments

I'm aware of the extended discussion in #50, but I was still somewhat surprised that an xgboost model object, after bundling and unbundling, returns garbled variable names when calling xgb.importance(). I understand that the primary purpose of bundling is to preserve the ability to make new predictions, but I wasn't expecting to lose functionality from non-tidymodels functions from the xgboost package itself.

I understand why various tidymodels functionality wouldn't be preserved, but I would have expected that the object could be passed to functions in the xgboost package that are designed to work on an xgb.Booster object and give the same results.

Is this the intended behavior of bundling? If so, should I save the variable importance information prior to bundling whenever I need it?

> set.seed(1)
> 
> data(agaricus.train)
> data(agaricus.test)
> 
> xgb <- xgboost(data = agaricus.train$data, label = agaricus.train$label,
+                max_depth = 2, eta = 1, nthread = 2, nrounds = 2,
+                objective = "binary:logistic")
[1]	train-logloss:0.233376 
[2]	train-logloss:0.136658 
> 
> xgboost::xgb.importance(model = xgb)
                   Feature       Gain     Cover Frequency
                    <char>      <num>     <num>     <num>
1:               odor=none 0.67615470 0.4978746       0.4
2:         stalk-root=club 0.17135376 0.1920543       0.2
3:       stalk-root=rooted 0.12317236 0.1638750       0.2
4: spore-print-color=green 0.02931918 0.1461960       0.2
> 
> xgb_bundle <- bundle(xgb)
> 
> xgboost::xgb.importance(model = unbundle(xgb_bundle))
   Feature       Gain     Cover Frequency
    <char>      <num>     <num>     <num>
1:     f28 0.67615470 0.4978746       0.4
2:     f55 0.17135376 0.1920543       0.2
3:     f59 0.12317236 0.1638750       0.2
4:    f108 0.02931918 0.1461960       0.2
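
If that is the intended behaviour, I suppose the workaround is roughly this (an untested sketch, just storing the importance table, computed before bundling, alongside the bundle; the file path is only illustrative):

# compute importance while the original fit is still around, so names are intact
imp <- xgboost::xgb.importance(model = xgb)
saveRDS(list(model = bundle(xgb), importance = imp), "xgb_bundle.rds")

# later, in a fresh session
obj <- readRDS("xgb_bundle.rds")
fit <- unbundle(obj$model)   # use for predictions
obj$importance               # importance table with readable feature names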

joranE · Jul 22 '24 19:07

Oh, that's interesting! 👀 Notice that you can observe the same problem without using bundle at all, by instead using xgboost::xgb.save.raw() and xgboost::xgb.load.raw() (which is how bundle stores xgboost models):

library(xgboost)

set.seed(1)

data(agaricus.train)
data(agaricus.test)

mod <- xgboost(
  data = agaricus.train$data, label = agaricus.train$label,
  max_depth = 2, eta = 1, nthread = 2, nrounds = 2,
  objective = "binary:logistic"
)
#> [1]  train-logloss:0.233376 
#> [2]  train-logloss:0.136658

object <- xgboost::xgb.save.raw(mod, raw_format = "ubj")
res <- xgboost::xgb.load.raw(object, as_booster = TRUE)
xgboost::xgb.importance(model = res)
#>    Feature       Gain     Cover Frequency
#>     <char>      <num>     <num>     <num>
#> 1:     f28 0.67615470 0.4978746       0.4
#> 2:     f55 0.17135376 0.1920543       0.2
#> 3:     f59 0.12317236 0.1638750       0.2
#> 4:    f108 0.02931918 0.1461960       0.2

Created on 2024-07-29 with reprex v2.1.0

I think the source of this problem is probably the same as https://github.com/dmlc/xgboost/issues/5018 (the feature names need to be stored in the booster itself for raw_format = "ubj"); you may want to share the problem you are running into over there.
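
In the meantime, one possible workaround (a minimal sketch, assuming your installed xgboost version still exposes the feature_names argument of xgb.importance()) is to pass the original column names explicitly:

xgboost::xgb.importance(
  feature_names = colnames(agaricus.train$data),  # supply the names the raw format drops
  model = res
)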

Thanks!

juliasilge · Jul 29 '24 22:07

Hi there,

I've had a similar issue, which really tripped me up today. Bundling and unbundling an object doesn't seem to preserve variable names, e.g. for use with vip::vip():

library(bundle)
library(parsnip)
library(callr)
library(waldo)

# fit a boosted tree with xgboost via parsnip
mod <-
  boost_tree(trees = 5, mtry = 3) %>%
  set_mode("regression") %>%
  set_engine("xgboost") %>%
  fit(mpg ~ ., data = mtcars[1:25,])

# vip works for original mod:
vip::vip(mod)

temp_file <- tempfile()
saveRDS(mod, temp_file)


# Bundle the model
bundled_mod <-
  bundle(mod)

unbundled_mod <-
  unbundle(bundled_mod)

# However it fails for the unbundled mod (strange variable names)
vip::vip(unbundled_mod)


# But it works okay for the normally saved model
read_back_normal_mod <- readRDS(temp_file)

vip::vip(read_back_normal_mod)

I find this behaviour quite surprising.

I understand that you have reiterated that the intention is to reload a model workflow for prediction, but this would seem to suggest you always need to store at least two versions of your models, one not bundled and one bundled: the former so that you can go back and look at things like the variable importances easily.

I guess I assumed that bundle() followed by saveRDS() would simply replace saveRDS() on its own. Maybe this could be made clearer, because the package description suggests to me that I should be able to read a model back in and do (all of) the things I could do with it before.
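
For now my plan is something along these lines (an untested sketch): compute the importances with vip::vi() while the original fit is still in memory and save them next to the bundle, rather than keeping a second unbundled copy of the whole model:

# store precomputed importance scores alongside the bundled model
importances <- vip::vi(mod)
saveRDS(list(model = bundle(mod), importances = importances), temp_file)

# later: unbundle for predictions, read the importances from the saved table
obj <- readRDS(temp_file)
unbundled <- unbundle(obj$model)
obj$importances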

Thanks for this otherwise great package.

All the best, Seb

sebsilas · Sep 09 '24 20:09