
Error while running Makefile

austinyoung1 opened this issue 4 years ago · 1 comment

I'm running the Makefile in my Google Colab notebook with `!make -f Makefile`, and it fails with this error:

FileNotFoundError: [Errno 2] No such file or directory: 'models/pretrained_models/RobertaModel_roberta-base_1e-05.pth'
Makefile:24: recipe for target 'roberta-base' failed
make: *** [roberta-base] Error 1

The complete logs are as follows:

python3 eval_discriminative_models.py --pretrained-class bert-base-cased --tokenizer BertTokenizer --intrasentence-model BertLM --intersentence-model BertNextSentence --input-file ../data/dev.json --output-dir predictions/  
2021-05-18 09:15:37.942836: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Loading ../data/dev.json...
Downloading: 100% 213k/213k [00:00<00:00, 796kB/s]
---------------------------------------------------------------
                     ARGUMENTS                 
Pretrained class: bert-base-cased
Mask Token: [MASK]
Tokenizer: BertTokenizer
Skip Intrasentence: False
Intrasentence Model: BertLM
Skip Intersentence: False
Intersentence Model: BertNextSentence
CUDA: True
---------------------------------------------------------------

Evaluating bias on intersentence tasks...
Downloading: 100% 433/433 [00:00<00:00, 452kB/s]
Downloading: 100% 436M/436M [00:12<00:00, 36.2MB/s]
Number of parameters: 108,311,810
Let's use 1 GPUs!
Maximum sequence length found: -inf
100% 6369/6369 [01:07<00:00, 94.24it/s]

Evaluating bias on intrasentence tasks...
100% 8939/8939 [01:32<00:00, 96.48it/s]
python3 eval_discriminative_models.py --pretrained-class bert-large-cased --tokenizer BertTokenizer --intrasentence-model BertLM --intersentence-model BertNextSentence --input-file ../data/dev.json --output-dir predictions/  
2021-05-18 09:18:48.232655: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Loading ../data/dev.json...
Downloading: 100% 213k/213k [00:00<00:00, 1.05MB/s]
---------------------------------------------------------------
                     ARGUMENTS                 
Pretrained class: bert-large-cased
Mask Token: [MASK]
Tokenizer: BertTokenizer
Skip Intrasentence: False
Intrasentence Model: BertLM
Skip Intersentence: False
Intersentence Model: BertNextSentence
CUDA: True
---------------------------------------------------------------

Evaluating bias on intersentence tasks...
Downloading: 100% 625/625 [00:00<00:00, 720kB/s]
Downloading: 100% 1.34G/1.34G [00:34<00:00, 39.0MB/s]
Number of parameters: 333,581,314
Let's use 1 GPUs!
Maximum sequence length found: -inf
100% 6369/6369 [02:05<00:00, 50.57it/s]

Evaluating bias on intrasentence tasks...
100% 8939/8939 [02:50<00:00, 52.44it/s]
python3 eval_discriminative_models.py --pretrained-class roberta-base --tokenizer RobertaTokenizer --intrasentence-model RoBERTaLM --intersentence-model ModelNSP --intersentence-load-path models/pretrained_models/RobertaModel_roberta-base_1e-05.pth --input-file ../data/dev.json --output-dir predictions/  
2021-05-18 09:24:48.182293: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Loading ../data/dev.json...
Downloading: 100% 899k/899k [00:00<00:00, 2.45MB/s]
Downloading: 100% 456k/456k [00:00<00:00, 1.43MB/s]
---------------------------------------------------------------
                     ARGUMENTS                 
Pretrained class: roberta-base
Mask Token: <mask>
Tokenizer: RobertaTokenizer
Skip Intrasentence: False
Intrasentence Model: RoBERTaLM
Skip Intersentence: False
Intersentence Model: ModelNSP
CUDA: True
---------------------------------------------------------------

Evaluating bias on intersentence tasks...
Downloading: 100% 481/481 [00:00<00:00, 383kB/s]
Downloading: 100% 501M/501M [00:12<00:00, 41.6MB/s]
Number of parameters: 124,967,234
Let's use 1 GPUs!
Traceback (most recent call last):
  File "eval_discriminative_models.py", line 266, in <module>
    results = evaluator.evaluate()
  File "eval_discriminative_models.py", line 238, in evaluate
    intersentence_bias = self.evaluate_intersentence()
  File "eval_discriminative_models.py", line 198, in evaluate_intersentence
    model.load_state_dict(torch.load(self.INTERSENTENCE_LOAD_PATH))
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 525, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 212, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 193, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'models/pretrained_models/RobertaModel_roberta-base_1e-05.pth'
Makefile:24: recipe for target 'roberta-base' failed
make: *** [roberta-base] Error 1

austinyoung1 · May 18 '21

You have to download the pretrained models first by running:

./code/models/download_models.sh
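As a side note, the raw traceback comes from `torch.load` being handed a path that doesn't exist yet. A small guard like the sketch below (the helper name is hypothetical, not part of the StereoSet code) would fail fast with a hint about the download script instead of a bare `FileNotFoundError`:

```python
from pathlib import Path

def require_checkpoint(path_str):
    """Return the checkpoint path if it exists; otherwise raise a
    FileNotFoundError that points at the download script."""
    path = Path(path_str)
    if not path.is_file():
        raise FileNotFoundError(
            f"Checkpoint not found: {path}. "
            "Did you run ./code/models/download_models.sh first?"
        )
    return path
```

The checked path could then be passed straight to `torch.load(require_checkpoint(load_path))`.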

kainoj · Nov 25 '21