
feat: add JGLUE tasks

Open ryan-minato opened this issue 1 year ago • 6 comments

JGLUE is a widely used benchmark in the Japanese LLM research community, originally consisting of five sub-tests. With MARC-ja removed (at Amazon's request), this PR adds the remaining four:

- JSTS
- JNLI
- JSQuAD
- JCommonsenseQA

Finished Issue #455
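As a rough illustration of how one of these sub-tests can be wired up as a lighteval community task (the dataset repo, column names, and metric below are assumptions based on lighteval's existing community-task examples, not necessarily what this PR implements):

```python
# Illustrative sketch only, not the exact code in this PR. Field names follow
# lighteval's community-task examples and may differ between versions; the
# dataset repo and column names are assumptions.
from lighteval.metrics.metrics import Metrics
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.requests import Doc


def jcommonsenseqa_prompt(line, task_name: str = None) -> Doc:
    """Map one raw JCommonsenseQA row to a multiple-choice Doc."""
    return Doc(
        task_name=task_name,
        query=f"質問: {line['question']}\n回答:",
        choices=[line[f"choice{i}"] for i in range(5)],
        gold_index=int(line["label"]),
    )


jcommonsenseqa = LightevalTaskConfig(
    name="jcommonsenseqa",
    prompt_function=jcommonsenseqa_prompt,
    suite=["community"],
    hf_repo="shunk031/JGLUE",          # assumed HF mirror of JGLUE
    hf_subset="JCommonsenseQA",
    evaluation_splits=["validation"],
    metric=[Metrics.loglikelihood_acc],
)

TASKS_TABLE = [jcommonsenseqa]
```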

ryan-minato · Dec 19 '24 16:12

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Hi! This looks good to me at a glance, thanks for the very detailed work! Did you try to reproduce the results obtained with llm-jp-eval using this implementation, to make sure it is correct?

Sorry for the delay—I was on vacation last week and didn't check anything on GitHub. Wishing you a Happy New Year!

It seems my earlier explanation may have caused some confusion. The tasks created here do not entirely follow the llm-jp-eval approach; they are instead based on the Stability-AI/lm-evaluation-harness method, which has been unmaintained for over a year and is currently non-functional.

I am in the process of creating the llm-jp-eval task set, but I plan to submit it in a new PR.

ryan-minato · Jan 07 '25 00:01

Thanks for the explanation! Do you have any other implementation against which you could check your results?

You'll need to run the code quality checks too :)

clefourrier · Jan 07 '25 07:01

You can also add the 2 metrics I highlighted to the core metrics file if you want, as they are very valuable

clefourrier · Jan 07 '25 07:01
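The two metrics are not named in this thread, so purely as an illustration of what moving a task-local metric into the core metrics file involves, here is a minimal sketch using lighteval's SampleLevelMetric helper (import path, dataclass fields, and function signature follow lighteval's custom-metric docs and may differ between versions):

```python
# Illustrative only: the two metrics mentioned above are not named in this thread.
# Import path and fields follow lighteval's "adding a custom metric" docs and may
# differ slightly between versions.
import numpy as np

from lighteval.metrics.utils.metric_utils import (
    MetricCategory,
    MetricUseCase,
    SampleLevelMetric,
)


def character_f1(golds: list[str], predictions: list[str], **kwargs) -> float:
    """Bag-of-characters F1 between the first prediction and the first gold answer."""
    pred, gold = predictions[0], golds[0]
    overlap = sum(min(pred.count(c), gold.count(c)) for c in set(pred))
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)


character_f1_metric = SampleLevelMetric(
    metric_name="character_f1",
    higher_is_better=True,
    category=MetricCategory.GENERATIVE,
    use_case=MetricUseCase.ACCURACY,
    sample_level_fn=character_f1,
    corpus_level_fn=np.mean,
)
```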

> Thanks for the explanation! Do you have any other implementation against which you could check your results?
>
> You'll need to run the code quality checks too :)

I'll start by fixing the CI tonight and then move the metrics into the core metrics file.

I might also fix Stability-AI/lm-evaluation-harness to validate the results. That library relies on an outdated Transformers API (it was forked from lm-evaluation-harness back when quantization wasn't supported), and some of its datasets have been deprecated, so there could be other unforeseen errors. This could take some time.

ryan-minato · Jan 07 '25 07:01

Hm, would you have a simpler way to make sure your results are within range? Maybe a paper reported results with their implementation that you could try to reproduce?

clefourrier · Jan 07 '25 07:01
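One lightweight way to do that kind of sanity check is to compare the scores lighteval produces against numbers reported for the same model elsewhere (for example in the JGLUE paper), allowing a small tolerance. A minimal sketch with placeholder values:

```python
# Placeholder reference scores -- substitute the numbers actually reported for the
# model being evaluated; these values are hypothetical.
REPORTED = {"jcommonsenseqa": 0.80, "jsquad": 0.86}


def within_range(obtained: dict[str, float], reported: dict[str, float], tol: float = 0.02) -> bool:
    """Return True if every obtained score is within `tol` of its reported counterpart."""
    return all(abs(obtained[task] - score) <= tol for task, score in reported.items())


# e.g. within_range({"jcommonsenseqa": 0.79, "jsquad": 0.87}, REPORTED)
```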