Nathan Habib

Results 133 comments of Nathan Habib

hey @Wauplin ! any news on the state of the release ? :)

hey, thanks for the PR ! the model config arg was deprecated in favor of a file; we forgot to remove it in the constructor.

hey ! thanks for the PR, lgtm, i will test it locally next week and merge asap :)

Now loads openai lazily. The dep is no longer required when using lighteval.
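The lazy-loading pattern mentioned here can be sketched as follows. This is a minimal illustration, not lighteval's actual code: the helper name `lazy_import` and its error message are hypothetical. The idea is that the dependency is only imported on the code path that actually uses it, so users who never touch the judge metric don't need `openai` installed.

```python
import importlib


def lazy_import(name: str, hint: str = ""):
    """Import a module only when first needed, with a friendly error.

    Deferring the import means the package is only required on the
    code path that actually uses it (e.g. `openai` for llm-as-judge).
    """
    try:
        return importlib.import_module(name)
    except ImportError as exc:
        # Surface a clear install hint instead of a bare ImportError
        # raised at module load time.
        raise ImportError(
            f"The `{name}` package is required for this feature. {hint}"
        ) from exc
```

A metric would then call something like `openai = lazy_import("openai", "Install it with `pip install openai`.")` inside the scoring function rather than at the top of the module, so importing the metrics package never fails on a missing optional dep.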

> You could add a warning/error when loading metrics at the task level, to check there the deps if needed. you mean like I did in task.py to check that...

hi ! the first issue you encountered happens because the model does not have a `sequence_length` in the model config. this bug is being fixed in #185. the second issue...

hi ! thanks for your PR, the lazy loading of openai has been implemented in #173. However, you can keep the lazy loading of BERT scorer.

hi! since the `llm-as-judge` metric is an official metric, we will be adding `openai` as a required dependency. like Clementine said, a PR has been opened :)

As we now allow using transformers for llm-as-judge, we no longer require openai.

Hi ! Thanks for your interest in lighteval ! It seems like integrating `alpaca_eval` would require a custom function in the `JudgeOpenAi` class, as it is not as simple as...