
[FT] Add System Prompt field in LightevalTaskConfig that can be used by model clients

Open JoelNiklaus opened this issue 1 year ago • 14 comments

Issue encountered

My models currently don't follow the template I give them. I want to provide a system prompt that nudges the models to produce output in the format I want.

Solution/Feature

We could add a new field to LightevalTaskConfig that model clients like LiteLLMClient can consume and send to the API.
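
A minimal sketch of the idea, assuming LightevalTaskConfig stays a dataclass; the `system_prompt` field and the `build_messages` helper are the proposal/illustration here, not existing lighteval API:

```python
# Sketch of the proposal. The `system_prompt` field and the `build_messages`
# helper are hypothetical illustrations, not existing lighteval API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LightevalTaskConfig:
    name: str
    # ... existing task fields ...
    system_prompt: Optional[str] = None  # proposed new field

def build_messages(config: LightevalTaskConfig, prompt: str) -> list[dict]:
    """What a client like LiteLLMClient could send to a chat-style API."""
    messages = []
    if config.system_prompt:
        messages.append({"role": "system", "content": config.system_prompt})
    messages.append({"role": "user", "content": prompt})
    return messages
```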

Possible alternatives

Alternatively, we could use the generation_grammar, but I think that would be overkill and more difficult to implement across different API providers.

JoelNiklaus avatar Nov 28 '24 14:11 JoelNiklaus

@clefourrier Happy to open a PR for this. Please let me know if you see a different solution.

JoelNiklaus avatar Nov 29 '24 08:11 JoelNiklaus

Hi! I'm adding auto-scaling for inference endpoints at the moment and will rework the generation mechanisms after that, possibly to get a single homogeneous system. If you need this super fast, feel free to add it before then, but I'll probably rework the system next week :)

clefourrier avatar Nov 29 '24 09:11 clefourrier

Ok, I see. Thanks for letting me know. I will use a hack in the meantime in that case.

JoelNiklaus avatar Nov 29 '24 09:11 JoelNiklaus

Just briefly asking to double-check: will this edit to the generation mechanism also affect the openai/litellm model loaders?

JoelNiklaus avatar Nov 29 '24 10:11 JoelNiklaus

Yes, the aim is to have one better centralised system.

clefourrier avatar Nov 29 '24 17:11 clefourrier

Need to add to #428

clefourrier avatar Dec 09 '24 18:12 clefourrier

Hi @JoelNiklaus, from what I can see, it's already available for all models with the latest CLI refactor: there's a system prompt param, which is then used when creating the requests from the samples. Do you need something else with regard to this?

clefourrier avatar Dec 10 '24 10:12 clefourrier

I saw that, thanks. However, it is still not clear to me how to use the system prompt in the model_loader (e.g., litellm). The system prompt is not part of GreedyUntilRequest, and it does not come through with the model_config either.

JoelNiklaus avatar Dec 10 '24 10:12 JoelNiklaus

Because you should not use it in the model_loader ^^ The system prompt is added to the request by the PromptManager, which creates the request from the sample, the number of few-shot examples, and the system prompt if needed. (It's in lighteval/tasks/prompt_manager.)
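
Conceptually, it does something like this (a rough sketch with illustrative names, not the actual lighteval source):

```python
# Rough sketch of what the PromptManager does conceptually: assemble the system
# prompt, the few-shot examples, and the current sample into the final context.
# Names are illustrative, not the real implementation.
def build_context(system_prompt, fewshot_examples, sample):
    parts = []
    if system_prompt:
        parts.append(system_prompt)    # prepended once, if configured
    parts.extend(fewshot_examples)     # formatted few-shot examples
    parts.append(sample)               # the current sample's prompt
    return "\n\n".join(parts)
```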

clefourrier avatar Dec 10 '24 10:12 clefourrier

The system prompt will appear in the details if you want to check this.

clefourrier avatar Dec 10 '24 10:12 clefourrier

Haha ok. Sorry if this is a stupid question, but how can I access the system prompt before I make a call to the API? https://github.com/huggingface/lighteval/blob/main/src/lighteval/models/openai_model.py#L82
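
Concretely, what I want to end up sending is the system prompt as its own message rather than flattened into the user prompt, e.g. with the standard OpenAI SDK (the model name is just a placeholder):

```python
# Goal: send the system prompt as a separate chat message. Standard OpenAI SDK
# call; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[
        {"role": "system", "content": "Answer with a single letter."},
        {"role": "user", "content": "Which option is correct: A, B, or C?"},
    ],
)
print(response.choices[0].message.content)
```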

JoelNiklaus avatar Dec 10 '24 10:12 JoelNiklaus

As discussed on Slack, the PromptManager might need to be adapted to make it possible to pass the system prompt through.

JoelNiklaus avatar Dec 12 '24 15:12 JoelNiklaus

As of now there is no way to grab only the system prompt. One solution would be to add a system_prompt field to the requests when creating them in the PromptManager, so that you can access it in litellm_model.
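
Roughly like this (a sketch of the idea; the system_prompt field is the proposed addition, not the current request API):

```python
# Sketch of the proposal: the PromptManager sets a new field on the request,
# and the litellm-based client reads it. `system_prompt` is the proposed
# addition; the other fields stand in for the existing request definition.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GreedyUntilRequest:
    context: str
    # ... existing request fields ...
    system_prompt: Optional[str] = None  # set by the PromptManager

def to_messages(request: GreedyUntilRequest) -> list[dict]:
    messages = []
    if request.system_prompt:
        messages.append({"role": "system", "content": request.system_prompt})
    messages.append({"role": "user", "content": request.context})
    return messages
```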

NathanHB avatar Dec 16 '24 13:12 NathanHB

Sounds good. I enabled it in PR #385.

JoelNiklaus avatar Dec 16 '24 15:12 JoelNiklaus