Zhenyu He
I ran 'python ./model/Twitter/BiGCN_Twitter.py Twitter15 10' and got the result "Total_Test_Accuracy: 0.8276|NR F1: 0.7847|FR F1: 0.8360|TR F1: 0.8838|UR F1: 0.7941".
For the Twitter16 dataset, I ran 'python ./model/Twitter/BiGCN_Twitter.py Twitter16 10' and got the result "Total_Test_Accuracy: 0.8642|NR F1: 0.7875|FR F1: 0.8585|TR F1: 0.9318|UR F1: 0.8635".
> > Hi, I ran your code with 'python ./model/Twitter/BiGCN_Twitter.py Twitter15 10' to get the average results of 10 iterations of the BiGCN model on Twitter15 (running 100 iterations takes too...
> The SMILES have not changed, but the trained models depend on the underlying numpy/pytorch/tensorflow versions, which can change dramatically over a year. There is also randomness due to choice...
> Thanks for making your code public.
> I have been wondering: why is it `b = self.embedding.weight[1:]` in model.py rather than `b = self.embedding.weight[:]`? (in pytorch_code/model.py, line 87)
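One plausible reason for the `[1:]` slice (not confirmed by the thread, just a common pattern) is that row 0 of the embedding matrix is reserved for a padding token, so it is dropped when the weights are reused elsewhere. A minimal numpy stand-in for the embedding weight, with hypothetical shapes:

```python
import numpy as np

# Stand-in for an embedding weight matrix where row 0 is the padding token.
# Shapes here are hypothetical, chosen only to show the effect of the slice.
vocab_size, dim = 5, 4
weight = np.arange(vocab_size * dim, dtype=float).reshape(vocab_size, dim)

b_all = weight[:]      # shape (5, 4): includes the padding row
b_no_pad = weight[1:]  # shape (4, 4): skips row 0 (the padding row)

print(b_all.shape, b_no_pad.shape)
```

If index 0 is indeed `padding_idx` in the model's `nn.Embedding`, its row stays near its initial value during training, so excluding it when reusing the matrix is a deliberate choice rather than an off-by-one error.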
> Stacked across all layers: are they added directly to the pair embeddings initialized by the position embeddings, or through an MLP layer?
For example, **the sequence of T1084 is** "MAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH", **which has a length of 73.** **The PDB file of T1084 begins:**

```
REMARK T1084
REMARK 3 RESOLUTION RANGE HIGH (ANGSTROMS) : 1.93
REMARK...
```
Thanks!
> I'd expect only a very minor performance drop. If you use fixed weights to compute the weighted sum, you can avoid the additional memory consumption by just summing into a...
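The suggestion above can be sketched as follows: when the combination weights are fixed, each layer's output can be scaled and accumulated into a single buffer as it is produced, so the full stack of layer outputs never needs to be held in memory at once. This is a hedged sketch with hypothetical shapes and placeholder layer outputs, not the repository's actual code:

```python
import numpy as np

# Fixed, known-in-advance combination weights over num_layers layer outputs.
num_layers, batch, dim = 4, 2, 8
weights = np.array([0.1, 0.2, 0.3, 0.4])

# Accumulate the weighted sum layer by layer: only one (batch, dim) buffer
# is ever held, instead of a (num_layers, batch, dim) stack.
acc = np.zeros((batch, dim))
for i in range(num_layers):
    layer_out = np.full((batch, dim), float(i))  # stand-in for layer i's output
    acc += weights[i] * layer_out

# Sanity check: identical to stacking all layers and contracting afterwards.
stacked = np.stack([np.full((batch, dim), float(i)) for i in range(num_layers)])
assert np.allclose(acc, np.einsum('l,lbd->bd', weights, stacked))
```

With learned (trainable) weights the stacked tensor is usually kept for the backward pass, which is why fixing the weights is what unlocks the memory saving.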