ffgcc

5 comments of ffgcc

In train_retrieval.py, the ITM score does not go through a softmax. Did you consider applying softmax at that point? And when filtering, is the 0.5 threshold applied to the raw score without softmax? Thanks!
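
To illustrate the distinction I'm asking about, here is a minimal sketch assuming a two-class ITM head; the `itm_match_prob` helper and the column layout are hypothetical, not taken from train_retrieval.py:

```python
# Hypothetical sketch: thresholding a softmax probability at 0.5 is not the
# same as thresholding the raw "matched" logit at 0.5.
import torch
import torch.nn.functional as F

def itm_match_prob(itm_logits: torch.Tensor) -> torch.Tensor:
    """Convert raw 2-class ITM logits of shape (B, 2) into match probabilities in [0, 1]."""
    # Column 1 is assumed to be the "matched" class.
    return F.softmax(itm_logits, dim=-1)[:, 1]

logits = torch.tensor([[0.2, 0.4], [1.5, -0.3]])
probs = itm_match_prob(logits)       # ~[0.550, 0.142]
keep_softmax = probs > 0.5           # [True, False]
keep_raw = logits[:, 1] > 0.5        # [False, False]  -- the two rules can disagree
```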

Thanks for your reply. At the same time, I only found the base version of the model. I would like to know how PromptBERT performs with bert-large and...

Here is my result using the parameters:

| STS12 | STS13 | STS14 | STS15 | STS16 | STSb | SICK-R | Avg. |
| -- | -- | -- | -- | -- | -- | -- | -- |
...

Dear @wenhui0924, I wonder how you mixed the three datasets when they are not of equal length. Thanks!
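
For context, here is a sketch of one common way to mix loaders of unequal length, sampling a source per step and restarting shorter loaders when they run out. This is only a guess at what the mixing might look like, not the method from the paper; all names are hypothetical.

```python
# Hypothetical mixing scheme: pick a source loader at every step (with
# size-proportional probabilities) and restart exhausted loaders so the
# shorter datasets keep cycling.
import random
from torch.utils.data import DataLoader

def _restarting_iter(loader: DataLoader):
    """Iterate a DataLoader forever, restarting (and reshuffling) when exhausted."""
    while True:
        for batch in loader:
            yield batch

def mixed_batches(loaders, weights, num_steps, seed=0):
    """Yield (source_index, batch) pairs, sampling a loader at each step."""
    rng = random.Random(seed)
    iters = [_restarting_iter(dl) for dl in loaders]
    for _ in range(num_steps):
        i = rng.choices(range(len(loaders)), weights=weights, k=1)[0]
        yield i, next(iters[i])

# Usage (hypothetical): weights proportional to dataset sizes so each example
# is visited roughly once per "virtual epoch".
# for src, batch in mixed_batches([dl_a, dl_b, dl_c],
#                                 [len(ds_a), len(ds_b), len(ds_c)],
#                                 num_steps=10_000):
#     ...
```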

My deployment and testing setup is the same as in the example.

Deployment:

```
vllm serve ${model_path} --api-key token-abc123 --tensor-parallel-size 4 --gpu-memory-utilization 0.95 --max_model_len 131072 --trust-remote-code
```

Testing:

```
python pred.py --model ${model_path}
```