[FT] Add System Prompt field in LightevalTaskConfig that can be used by model clients
Issue encountered
My models currently don't follow the template I give them. I would like to pass a system prompt that nudges the models to format their output the way I expect.
Solution/Feature
We could add a new field to LightevalTaskConfig that model clients like LiteLLMClient consume and send to the API.
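Concretely, something like this (a rough sketch; the field name `system_prompt` and how clients consume it are only my proposal, not existing lighteval API):

```python
# Rough sketch of the proposed field; only the relevant part of the config
# is shown, and `system_prompt` is a suggested name, not an existing field.
from dataclasses import dataclass


@dataclass
class LightevalTaskConfig:
    name: str
    # ... existing task fields ...
    system_prompt: str | None = None  # proposed: picked up by model clients


# A client such as LiteLLMClient could then send it as a proper system
# message, e.g. {"role": "system", "content": config.system_prompt}.
```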
Possible alternatives
Alternatively, we could use generation_grammar, but I think this would be overkill and harder to implement across different API providers.
@clefourrier Happy to open a PR for this. Please let me know if you see a different solution.
Hi! I'm adding auto-scaling for inference endpoints at the moment, and will rework the generation mechanisms after that, possibly into a single homogeneous system. If you need this urgently, feel free to add it before then, but I'll probably edit the system next week :)
Ok, I see. Thanks for letting me know. In that case I will use a workaround in the meantime.
Just double-checking: will this edit to the generation mechanism also affect the openai/litellm model loaders?
Yes, the aim is to have one better centralised system.
Need to add this to #428
Hi @JoelNiklaus, from what I can see, it's already available for all models with the latest CLI refactor: there's a system prompt param, which is then used when creating the requests from the samples. Do you need something else with regard to this?
I saw that, thanks. However, it is still not clear to me how to use the system prompt in the model_loader (e.g., litellm). The system prompt is not part of GreedyUntilRequest, and it does not come through with the model_config either.
Because you should not use it in the model_loader ^^ The system prompt is added to the request by the PromptManager, which creates the request from the sample, the number of few-shot examples, and the system prompt if needed. (It's in lighteval/tasks/prompt_manager.)
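Roughly, the flow looks like this (a simplified illustration with a stub type and hypothetical names, not lighteval's actual code):

```python
# Simplified illustration of how the PromptManager bakes the system prompt
# into the request context (stub type and helper names are hypothetical).
from dataclasses import dataclass


@dataclass
class Request:
    context: str  # the full prompt that is eventually sent to the model


def build_context(query: str, fewshot_block: str, system_prompt: str | None) -> Request:
    parts = []
    if system_prompt is not None:
        parts.append(system_prompt)  # system prompt is prepended here
    if fewshot_block:
        parts.append(fewshot_block)  # few-shot examples follow
    parts.append(query)              # then the actual sample query
    return Request(context="\n\n".join(parts))
```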
The system prompt will appear in the details if you want to check this
Haha ok. Sorry if this is a stupid question, but how can I access the system prompt before I make a call to the API? https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/openai_model.py#L82
As discussed on Slack, the PromptManager might need to be adapted to make it possible to pass the system prompt through.
As of now there is no way to grab only the system prompt. One solution would be to add a system_prompt field to the requests when creating them in the PromptManager, so that you can access it in litellm_model.
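Something along these lines (a sketch with assumed names; the stub only shows the relevant fields, and the model name is a placeholder):

```python
# Sketch of the proposed change, not final lighteval code: the PromptManager
# would set `system_prompt` on the request, and the litellm model would read it.
from dataclasses import dataclass

import litellm


@dataclass
class GreedyUntilRequest:  # stub with only the fields relevant here
    context: str
    system_prompt: str | None = None  # proposed new field, set by the PromptManager


def call_api(request: GreedyUntilRequest, model: str = "gpt-4o-mini"):
    messages = []
    if request.system_prompt is not None:
        # Send the system prompt as a real system message instead of
        # concatenating it into the user prompt.
        messages.append({"role": "system", "content": request.system_prompt})
    messages.append({"role": "user", "content": request.context})
    return litellm.completion(model=model, messages=messages)
```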
Sounds good. I enabled it in PR #385.