
Buggy Benchmark Generation

Open Adarsh321123 opened this issue 1 year ago • 10 comments

Description

I believe scripts/generate-benchmark-lean4.ipynb is buggy. I evaluated the ReProver premise retriever on a dataset I generated from Mathlib4; according to the paper, the recall should be around 34.7. When I generated the dataset at commit 29dcec074de168ac2bf835a77ef68bbe069194c5, my recall was absurdly high at around 70, and other commits I tried with this generation code gave similar results. However, the downloaded v9 dataset at https://zenodo.org/records/10929138 gave a recall of 36, as expected. I am unsure which commit v9 was generated from, but regardless, this recall disparity suggests buggy generation code.

Detailed Steps to Reproduce the Behavior

  1. Run scripts/generate-benchmark-lean4.ipynb on commit 29dcec074de168ac2bf835a77ef68bbe069194c5 to reproduce LeanDojo Benchmark 4 version v9 (a rough sketch of this step follows these steps).
  2. Clone ReProver and follow its steps for the evaluation of the premise retriever checkpoint on that dataset.
  3. Evaluate the same checkpoint on https://zenodo.org/records/10929138.
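For reference, step 1 essentially traces the pinned Mathlib4 commit with LeanDojo and then exports the corpus and splits. The following is only a minimal sketch of that tracing step, not the notebook itself:

from lean_dojo import LeanGitRepo, trace

# Pin Mathlib4 to the exact commit so the generated benchmark is reproducible.
repo = LeanGitRepo(
    "https://github.com/leanprover-community/mathlib4",
    "29dcec074de168ac2bf835a77ef68bbe069194c5",
)

# Trace the repo; the notebook then turns the traced theorems and premises into
# corpus.jsonl and the train/val/test splits.
traced_repo = trace(repo)
print(len(traced_repo.get_traced_theorems()), "traced theorems")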

Logs

Downloaded dataset:

Average R@1 = 13.057546943693088 %, R@10 = 35.968246863464174 %, MRR = 0.31899176730322715

Generated dataset:

Average R@1 = 28.444059618579143 %, R@10 = 69.65759521602048 %, MRR = 0.5975602059714626
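(For clarity on what these numbers measure, here is a minimal sketch of how R@k and MRR are computed per proof state from a ranked retrieval list; it is illustrative only, not ReProver's actual evaluation code.)

# Illustrative only. `retrieved` is a ranked list of premise names;
# `ground_truth` is the set of premises actually used at that proof state.
def recall_at_k(retrieved, ground_truth, k):
    if not ground_truth:
        return None  # states with no ground-truth premises are typically skipped
    return 100.0 * len(set(retrieved[:k]) & set(ground_truth)) / len(ground_truth)

def reciprocal_rank(retrieved, ground_truth):
    for rank, premise in enumerate(retrieved, start=1):
        if premise in ground_truth:
            return 1.0 / rank
    return 0.0

# The reported R@1, R@10, and MRR are averages of these per-state values.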

Also, there are some differences between the two datasets. With this code,

import json

downloaded_dataset_path = 'leandojo_benchmark_4_downloaded/random'
new_commit_dataset_path = 'leandojo_benchmark_4_new_commit/random'
downloaded_corpus_path = 'leandojo_benchmark_4_downloaded/corpus.jsonl'
new_commit_corpus_path = 'leandojo_benchmark_4_new_commit/corpus.jsonl'

def load_jsonl(file_path):
    """Load a corpus.jsonl file (one JSON object per Lean source file)."""
    with open(file_path, 'r') as file:
        print(f"Loading {file_path}")
        return [json.loads(line) for line in file]

def load_json(file_path):
    """Load the train/val/test splits from a benchmark directory."""
    datasets = {}
    for split in ["train", "val", "test"]:
        new_file_path = f"{file_path}/{split}.json"
        with open(new_file_path, 'r') as file:
            print(f"Loading {new_file_path}")
            datasets[split] = json.load(file)
    return datasets

downloaded_dataset = load_json(downloaded_dataset_path)
downloaded_dataset_train, downloaded_dataset_val, downloaded_dataset_test = downloaded_dataset["train"], downloaded_dataset["val"], downloaded_dataset["test"]
new_commit_dataset = load_json(new_commit_dataset_path)
new_commit_dataset_train, new_commit_dataset_val, new_commit_dataset_test = new_commit_dataset["train"], new_commit_dataset["val"], new_commit_dataset["test"]
downloaded_corpus = load_jsonl(downloaded_corpus_path)
new_commit_corpus = load_jsonl(new_commit_corpus_path)

# Compare split sizes, corpus sizes, and the premise count of the first corpus file.
analysis_results = {
    "downloaded_dataset_train_size": len(downloaded_dataset_train),
    "downloaded_dataset_val_size": len(downloaded_dataset_val),
    "downloaded_dataset_test_size": len(downloaded_dataset_test),
    "new_commit_dataset_train_size": len(new_commit_dataset_train),
    "new_commit_dataset_val_size": len(new_commit_dataset_val),
    "new_commit_dataset_test_size": len(new_commit_dataset_test),
    "downloaded_corpus_size": len(downloaded_corpus),
    "new_commit_corpus_size": len(new_commit_corpus),
    "downloaded_corpus_file_premises_size": len(downloaded_corpus[0]['premises']),
    "new_commit_corpus_file_premises_size": len(new_commit_corpus[0]['premises']),
}

print(analysis_results)

I get this final output:

{
    'downloaded_dataset_train_size': 112729,
    'downloaded_dataset_val_size': 2000,
    'downloaded_dataset_test_size': 2000,
    'new_commit_dataset_train_size': 118517,
    'new_commit_dataset_val_size': 2000,
    'new_commit_dataset_test_size': 2000,
    'downloaded_corpus_size': 5192,
    'new_commit_corpus_size': 5674,
    'downloaded_corpus_file_premises_size': 465,
    'new_commit_corpus_file_premises_size': 478
}
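As a further check on top of the script above (reusing the corpus lists it loads, and assuming each corpus.jsonl entry has a 'path' field, as in LeanDojo's corpus format), one could see which files account for the corpus size difference:

# Reuses downloaded_corpus / new_commit_corpus from the script above.
# Assumes each corpus.jsonl entry has a 'path' field; adjust if the schema differs.
downloaded_paths = {entry['path'] for entry in downloaded_corpus}
new_commit_paths = {entry['path'] for entry in new_commit_corpus}
print("files only in downloaded corpus:", len(downloaded_paths - new_commit_paths))
print("files only in new-commit corpus:", len(new_commit_paths - downloaded_paths))
print("files in both:", len(downloaded_paths & new_commit_paths))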

Platform Information

  • OS: Ubuntu
  • LeanDojo Version: 1.8.4

Adarsh321123 avatar Jul 16 '24 06:07 Adarsh321123

It's probably because the model checkpoint you're evaluating was trained on a different version of the dataset. The train/val/test split differs for each version, so you're essentially testing on the training data.
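One quick way to sanity-check this leakage hypothesis, reusing the splits your comparison script above already loads (a rough sketch; it assumes each theorem entry has 'full_name' and 'file_path' fields), is to count how many theorems in the newly generated test split also appear in the downloaded train split:

# Rough sketch; reuses the splits loaded by the comparison script above and
# assumes each theorem entry has 'full_name' and 'file_path' fields.
def theorem_keys(split):
    return {(thm['full_name'], thm['file_path']) for thm in split}

overlap = theorem_keys(new_commit_dataset_test) & theorem_keys(downloaded_dataset_train)
print(f"{len(overlap)} / {len(new_commit_dataset_test)} generated test theorems "
      "also appear in the downloaded train split")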

yangky11 avatar Jul 20 '24 15:07 yangky11

I am using the same model checkpoint from HuggingFace for both datasets, so shouldn't the performance still be the same assuming that both datasets are on commit 29dcec074de168ac2bf835a77ef68bbe069194c5?

Adarsh321123 avatar Jul 20 '24 16:07 Adarsh321123

Do you know which Mathlib commit was used to generate the dataset you downloaded from Zenodo? You can find it in metadata.json.
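For example (just a sketch; it assumes metadata.json sits next to corpus.jsonl in the downloaded dataset, and the exact keys may vary across versions):

import json

# Print whatever the downloaded benchmark's metadata.json records; it includes
# the source repo and commit the dataset was traced from.
with open("leandojo_benchmark_4_downloaded/metadata.json") as f:
    print(json.dumps(json.load(f), indent=2))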

yangky11 avatar Jul 27 '24 13:07 yangky11

So it seems that v9 was generated from commit fe4454af900584467d21f4fd4fe951d29d9332a7. Crucially, I tried generating a dataset from that commit and reproduced the same bug mentioned in the original post. Can you please let me know if you can reproduce it?

Adarsh321123 avatar Jul 28 '24 02:07 Adarsh321123

So you generated a new dataset from fe4454af900584467d21f4fd4fe951d29d9332a7, evaluated the retriever model, and got a Recall@10 of 36 or 70?

What model checkpoint are you using?

yangky11 avatar Jul 28 '24 03:07 yangky11

So you generated a new dataset from fe4454af900584467d21f4fd4fe951d29d9332a7, evaluated the retriever model, and got a Recall@10 of 36 or 70?

On the generated dataset, I got a Recall@10 of ~70. However, when I evaluated on the downloaded v9 dataset, I got a Recall@10 of 36, as expected.

I am using the retriever trained on the random split that used to be at https://huggingface.co/kaiyuy/leandojo-pl-ckpts.

Adarsh321123 avatar Jul 28 '24 18:07 Adarsh321123

That's expected. You should get ~36% only if using exactly the same dataset used for training the model.

yangky11 avatar Aug 03 '24 22:08 yangky11

I agree. However, I don't understand why I get ~70% when using a generated dataset from the same commit (fe4454af900584467d21f4fd4fe951d29d9332a7) used to train the model. Shouldn't I get ~36%? The generated dataset should be identical to the v9 dataset, right?

Adarsh321123 avatar Aug 04 '24 06:08 Adarsh321123

Any thoughts @yangky11? Thank you so much for your help so far!

Adarsh321123 avatar Aug 10 '24 22:08 Adarsh321123

I don't think it's necessarily going to be the same. If you run the current benchmark generation code twice, does it give you exactly the same dataset?
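A quick way to check (just a sketch; the run1/run2 paths below are placeholders for two separate runs of the notebook) is to hash every emitted file and compare:

import hashlib
from pathlib import Path

# Placeholder paths for two separate runs of the generation notebook.
def digests(root):
    root = Path(root)
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*.json*"))}

run1, run2 = digests("run1/leandojo_benchmark_4"), digests("run2/leandojo_benchmark_4")
changed = [k for k in run1.keys() | run2.keys() if run1.get(k) != run2.get(k)]
print("identical" if not changed else f"files that differ: {changed}")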

yangky11 avatar Aug 11 '24 15:08 yangky11

Thank you for your response, @yangky11. It doesn't produce exactly the same dataset, but the two generated datasets are close enough that their R@10 values are essentially the same, so run-to-run variation doesn't explain the discrepancy.

Adarsh321123 avatar Sep 14 '24 19:09 Adarsh321123

@yangky11 ?

Adarsh321123 avatar Sep 19 '24 20:09 Adarsh321123