Francesca M
Hi, the relevant code in gradient_boosting.py is the part with the variadic arguments (`*x`), where boosting is performed:

```python
def forward(self, *x):
    output = [estimator(*x) for estimator in self.estimators_]
    output = op.sum_with_multiplicative(output, self.shrinkage_rate)
    output...
```
Exactly! Thank you
Also, the built-in function `sum` is not scriptable, but this could be bypassed using `@torch.jit.ignore`.
My suggestion for the indexed variable is to use a for loop instead.
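A sketch of that suggestion, under the same simplified ensemble as above: accumulating with an explicit for loop removes the need for the built-in `sum`, so the whole `forward` scripts without `@torch.jit.ignore`:

```python
import torch
import torch.nn as nn

class LoopEnsemble(nn.Module):
    def __init__(self):
        super().__init__()
        # Hypothetical stand-in estimators for the boosted ensemble
        self.estimators_ = nn.ModuleList([nn.Linear(4, 2) for _ in range(3)])
        self.shrinkage_rate = 0.5

    def forward(self, x):
        # Explicit loop instead of sum() over an indexed/list variable;
        # every operation here is scriptable.
        output = torch.zeros(1)
        for estimator in self.estimators_:
            output = output + estimator(x)
        return output * self.shrinkage_rate

scripted = torch.jit.script(LoopEnsemble())
out = scripted(torch.randn(1, 4))  # shape (1, 2) via broadcasting
```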
Is there a way to get the hazard ratio with DeepHit (single-risk)?
Yes, of course.
Thank you! In case I want to apply `F.softmax` in the forward step of the model, is it OK to do so right after `y_pred = self.leaf_nodes(mu)`?
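A minimal sketch of what that forward step could look like. The names `leaf_nodes` and `mu` come from the question; the trunk layer, sizes, and `dim=1` choice are assumptions for illustration (for DeepHit-style output, the softmax would run over the duration axis):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, in_features: int = 8, num_durations: int = 10):
        super().__init__()
        self.shared = nn.Linear(in_features, 16)        # hypothetical shared trunk
        self.leaf_nodes = nn.Linear(16, num_durations)  # hypothetical output layer

    def forward(self, x):
        mu = torch.relu(self.shared(x))
        y_pred = self.leaf_nodes(mu)
        # Softmax applied immediately after the leaf nodes,
        # normalizing over the duration dimension.
        return F.softmax(y_pred, dim=1)

net = Net()
probs = net(torch.randn(2, 8))  # each row sums to 1
```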