In Table 3 of the paper "LoRA: Low-Rank Adaptation of Large Language Models", it says that fine-tuning the top 2 layers of GPT-2 Medium requires 25.19M trainable parameters. How to...
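For reference, a minimal sketch of where that number could come from, assuming the standard GPT-2 architecture (hidden size 1024, 4x MLP expansion, two LayerNorms per block); the per-block breakdown below is my reconstruction, not something stated in the paper:

```python
# Hedged sketch: counting the parameters of the top 2 transformer blocks
# of GPT-2 Medium (d_model = 1024), assuming the standard GPT-2 layout.
d = 1024  # GPT-2 Medium hidden size

per_block = (
    3 * d * d + 3 * d    # attention QKV projection (c_attn): weight + bias
    + d * d + d          # attention output projection (c_proj)
    + 4 * d * d + 4 * d  # MLP up-projection (c_fc), 4x expansion
    + 4 * d * d + d      # MLP down-projection (c_proj)
    + 2 * 2 * d          # two LayerNorms, each with weight + bias
)

print(2 * per_block)  # 25_192_448, i.e. ~25.19M, matching Table 3
```

In short, each block contributes roughly 12 d^2 + 13 d parameters, and two blocks of GPT-2 Medium land almost exactly on the paper's 25.19M figure.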
I think the server is built locally. Why do we need an extra key for usage permission?
In `bigcodebench/provider/openai.py`, all models process prompts sequentially: the function `_codegen_batch_via_concurrency` processes each prompt n times in parallel, but still iterates over the list of prompts one at a time. This looks weird, and significantly slows down...
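A minimal sketch of the pattern this report points at: fan out across prompts as well as across the n samples per prompt. `codegen_one` here is a hypothetical stand-in for whatever issues the OpenAI API calls for one prompt, not the actual BigCodeBench API:

```python
# Hedged sketch: parallelize across prompts, not only across the n samples
# per prompt. `codegen_one` is a hypothetical placeholder.
from concurrent.futures import ThreadPoolExecutor

def codegen_one(prompt: str, n: int) -> list[str]:
    # Placeholder: in the real provider this would issue n API calls
    # (themselves possibly concurrent) and return n completions.
    return [f"completion {i} for: {prompt}" for i in range(n)]

def codegen_all(prompts: list[str], n: int, max_workers: int = 8) -> list[list[str]]:
    # Fan out over prompts; pool.map returns results in the original order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda p: codegen_one(p, n), prompts))
```

With this shape, the wall-clock time is bounded by the slowest batch of `max_workers` prompts rather than by the sum over all prompts.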