Francesca M

7 comments by Francesca M

Hi, the relevant code in gradient_boosting.py is the part that uses varargs (`*x`) when boosting is performed:

```python
def forward(self, *x):
    output = [estimator(*x) for estimator in self.estimators_]
    output = op.sum_with_multiplicative(output, self.shrinkage_rate)
    output...
```

Also, the built-in function `sum` is not scriptable, but this could be bypassed using the `@torch.jit.ignore` decorator.
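For instance, a minimal sketch of how `@torch.jit.ignore` lets a scripted module fall back to plain Python for the `sum` call (the module and method names here are hypothetical, not the actual gradient_boosting.py code):

```python
from typing import List

import torch
import torch.nn as nn


class SumWrapper(nn.Module):
    @torch.jit.ignore
    def py_sum(self, outputs: List[torch.Tensor]) -> torch.Tensor:
        # This method runs in the Python interpreter, so the
        # non-scriptable builtin sum() is fine here.
        return sum(outputs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = [x, 2 * x]
        return self.py_sum(outputs)


# Scripting succeeds because the ignored method is not compiled.
scripted = torch.jit.script(SumWrapper())
```

Note that a module relying on `torch.jit.ignore` can still be run from Python, but the ignored method cannot be serialized with `torch.jit.save`.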

My suggestion for the indexed variable is to use a for loop instead.
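A sketch of that suggestion, assuming a toy ensemble (the names `estimators_` and `shrinkage_rate` follow the snippet above, but this is not the library's actual implementation): accumulating in a for loop over an `nn.ModuleList`, with a single tensor argument instead of `*x`, keeps the forward pass scriptable.

```python
import torch
import torch.nn as nn


class TinyBoosting(nn.Module):
    def __init__(self, n_estimators: int = 3, shrinkage_rate: float = 0.1):
        super().__init__()
        # nn.ModuleList can be iterated inside a scripted forward.
        self.estimators_ = nn.ModuleList(
            nn.Linear(4, 2) for _ in range(n_estimators)
        )
        self.shrinkage_rate = shrinkage_rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Accumulate with a plain for loop instead of building a list
        # and calling sum(), which TorchScript cannot compile.
        output = torch.zeros(x.size(0), 2, dtype=x.dtype)
        for estimator in self.estimators_:
            output = output + self.shrinkage_rate * estimator(x)
        return output


# Scripts cleanly, and the scripted module matches eager mode.
scripted = torch.jit.script(TinyBoosting())
```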

Is there a way to get the hazard ratio with DeepHitSingle?

Yes, of course. ![image](https://user-images.githubusercontent.com/83947886/133089688-dbfaf260-1b74-402b-8576-8d93bbfe374d.png)

Thank you! In case I want to apply `F.softmax` in the forward step of the model, is it OK to do so right after `y_pred = self.leaf_nodes(mu)`?
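A sketch of what that could look like, with hypothetical layer names and sizes (assuming `leaf_nodes` is the final linear layer mapping to discrete time bins). One caveat: if the survival loss used during training already applies a softmax internally, the model should output raw logits for training and only normalize at prediction time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self, in_features: int = 8, num_durations: int = 10):
        super().__init__()
        self.body = nn.Linear(in_features, 16)
        self.leaf_nodes = nn.Linear(16, num_durations)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu = torch.relu(self.body(x))
        y_pred = self.leaf_nodes(mu)
        # Normalize logits into a probability mass function
        # over the discrete time bins.
        return F.softmax(y_pred, dim=1)
```

With this placement each row of the output sums to one, so it can be read directly as a discrete-time probability distribution.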