Zhenyu Hou

8 comments by Zhenyu Hou

Hi @userTogrul. Thanks for your contribution. GBP is an interesting model and it would be exciting if you could reimplement it in Python. But it seems that the "precomputing the...

Thanks for your contribution. I think it would be better to add a `trainer` for `CapsGNN` instead of modifying `graph_classification.py` in tasks. For how to add a `trainer` for a...
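
For illustration, here is a rough, framework-agnostic sketch of what a standalone trainer could look like; the class and method names are hypothetical and are not CogDL's actual trainer interface, so please follow the project's own trainer conventions for the real implementation.

```python
# Hypothetical outline of a graph-classification trainer, kept separate from
# graph_classification.py; names are illustrative, not CogDL's real API.
import torch
import torch.nn as nn


class CapsGNNTrainer:
    """Encapsulates the CapsGNN training loop instead of patching the task."""

    def __init__(self, model: nn.Module, lr: float = 1e-3, epochs: int = 100):
        self.model = model
        self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        self.epochs = epochs
        self.loss_fn = nn.CrossEntropyLoss()

    def fit(self, train_loader, val_loader=None):
        for epoch in range(self.epochs):
            self.model.train()
            for batch, labels in train_loader:
                self.optimizer.zero_grad()
                logits = self.model(batch)
                loss = self.loss_fn(logits, labels)
                loss.backward()
                self.optimizer.step()
        return self.model
```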

The first time it is imported, the low-level operators need to be compiled, so it takes a few minutes. Subsequent imports should be much faster.

Hi @jiaruHithub, thanks for your interest. The hyperparameters used in transfer learning are exactly the default parameters in the files `pretraining.py` and `finetune.py` in this [directory](https://github.com/THUDM/GraphMAE/tree/main/chem). Or you can...

These datasets share most hyper-parameters. Specifically, we use `learning_rate decay` for `bace` and `early-stopping` for `muv/hiv`; both operations have been implemented in our code. All other hyper-parameters are...
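
For reference, a minimal sketch of the two operations in a plain PyTorch loop; the decay schedule, patience, and metric below are illustrative placeholders, not the exact settings used in the repository.

```python
# Sketch of learning-rate decay and early stopping around a training loop.
import torch

model = torch.nn.Linear(16, 2)           # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Learning-rate decay (used for bace): shrink the lr on a fixed schedule.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.5)

# Early stopping (used for muv/hiv): stop when the validation metric plateaus.
best_val, patience, bad_epochs = float("-inf"), 10, 0
for epoch in range(100):
    # ... train one epoch, then evaluate on the validation set ...
    val_metric = 0.0                      # replace with the real validation AUC
    scheduler.step()
    if val_metric > best_val:
        best_val, bad_epochs = val_metric, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```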

Hi, have you ever compared the speed of OpenRLHF and nemo-aligner?

Hi @beckvision, sorry for the late response. We use seeds 0-19 just for 20 random experiments to obtain the average performance, following the setting of previous works, rather than...
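
Roughly, the protocol looks like the sketch below; `run_experiment` is a hypothetical stand-in for the actual training and evaluation script.

```python
# Run the same experiment with seeds 0-19 and report the averaged metric.
import random
import statistics

import numpy as np
import torch


def run_experiment(seed: int) -> float:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    # ... train and evaluate the model here ...
    return 0.0  # placeholder for the evaluation metric


scores = [run_experiment(seed) for seed in range(20)]  # seeds 0-19
print(f"mean={statistics.mean(scores):.4f}, std={statistics.pstdev(scores):.4f}")
```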

@Zheng0428 Thanks for your reply. My question is: when using the dataset for training, should I calculate the loss for the response tokens in each turn, or only...
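
To make the question concrete, here is a rough sketch of the first option (loss only on the response tokens of every turn), assuming the common convention that label positions set to -100 are ignored; the token ids below are placeholders, not a real chat template.

```python
# Mask the loss so that only response tokens in each turn contribute.
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100

# Hypothetical multi-turn sequence: prompt and response token ids per turn.
turns = [
    {"prompt_ids": [1, 2, 3], "response_ids": [4, 5]},
    {"prompt_ids": [6, 7],    "response_ids": [8, 9, 10]},
]

input_ids, labels = [], []
for turn in turns:
    # Prompt tokens are masked out; only response tokens are supervised.
    input_ids += turn["prompt_ids"] + turn["response_ids"]
    labels += [IGNORE_INDEX] * len(turn["prompt_ids"]) + turn["response_ids"]

input_ids = torch.tensor([input_ids])
labels = torch.tensor([labels])

# With logits of shape (batch, seq_len, vocab), the usual shifted LM loss:
vocab_size = 32
logits = torch.randn(1, input_ids.size(1), vocab_size)  # stand-in for model output
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),
    labels[:, 1:].reshape(-1),
    ignore_index=IGNORE_INDEX,
)
print(loss.item())
```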