
Vocab.txt - No such file or directory

bakszero opened this issue on Feb 02 '21 · 0 comments

Hi, I tried to run the demo in interactive mode and got the following error:

  File "interactive.py", line 137, in <module>
    main()
    └ <function main at 0x7fc95cc85170>
  File "interactive.py", line 86, in main
    train_dataset, iters_in_train = reader.read('train', mirrored_strategy)
                                    │                    └ None
                                    └ <data.wizard_of_wikipedia.WowDatasetReader object at 0x7fc95cbdacd0>
  File "/mnt/disks/disk-huge/bakhtiyar/sequential-knowledge-transformer/data/wizard_of_wikipedia.py", line 92, in read
    return self._read(mode, self._batch_size)
           │          │     └ <data.wizard_of_wikipedia.WowDatasetReader object at 0x7fc95cbdacd0>
           │          └ 'train'
           └ <data.wizard_of_wikipedia.WowDatasetReader object at 0x7fc95cbdacd0>
  File "/mnt/disks/disk-huge/bakhtiyar/sequential-knowledge-transformer/data/wizard_of_wikipedia.py", line 95, in _read
    episodes, dictionary = self._load_and_preprocess_all(mode)
                           │                             └ 'train'
                           └ <data.wizard_of_wikipedia.WowDatasetReader object at 0x7fc95cbdacd0>
  File "/mnt/disks/disk-huge/bakhtiyar/sequential-knowledge-transformer/data/wizard_of_wikipedia.py", line 300, in _load_and_preprocess_all
    dictionary = tokenization.FullTokenizer(self._vocab_fname)
                 │                          └ <data.wizard_of_wikipedia.WowDatasetReader object at 0x7fc95cbdacd0>
                 └ <module 'official.bert.tokenization' from '/mnt/disks/disk-huge/bakhtiyar/sequential-knowledge-transformer/official/bert/tokeniz...
  File "/mnt/disks/disk-huge/bakhtiyar/sequential-knowledge-transformer/official/bert/tokenization.py", line 170, in __init__
    self.vocab = load_vocab(vocab_file)
    │            │          └ 'bert_pretrained/uncased_L-12_H-768_A-12/vocab.txt'
    │            └ <function load_vocab at 0x7fc963e787a0>
    └ <official.bert.tokenization.FullTokenizer object at 0x7fc8d7f441d0>
  File "/mnt/disks/disk-huge/bakhtiyar/sequential-knowledge-transformer/official/bert/tokenization.py", line 132, in load_vocab
    token = convert_to_unicode(reader.readline())
            │                  └ <tensorflow.python.platform.gfile.GFile object at 0x7fc8d7f44e90>
            └ <function convert_to_unicode at 0x7fc963e78680>
  File "/home/bakhtiyar/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/lib/io/file_io.py", line 178, in readline
    self._preread_check()
    └ <tensorflow.python.platform.gfile.GFile object at 0x7fc8d7f44e90>
  File "/home/bakhtiyar/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/lib/io/file_io.py", line 84, in _preread_check
    compat.as_bytes(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.NotFoundError: bert_pretrained/uncased_L-12_H-768_A-12/vocab.txt; No such file or directory

However, upon checking, I find that bert_pretrained/uncased_L-12_H-768_A-12/uncased_L-12_H-768_A-12/vocab.txt does exist, i.e. the archive appears to have been extracted into a nested subdirectory of the same name, one level deeper than the reader expects. Any workarounds would be really helpful! Thanks!
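One workaround that should unblock this, sketched below under the assumption that the archive was simply unpacked into a same-named subdirectory, is to flatten the nested folder so vocab.txt lands where the reader looks for it (paths are taken from the traceback above):

```python
import shutil
from pathlib import Path

# The reader looks for bert_pretrained/uncased_L-12_H-768_A-12/vocab.txt,
# but the files live one directory deeper in a same-named subfolder.
base = Path("bert_pretrained/uncased_L-12_H-768_A-12")
nested = base / "uncased_L-12_H-768_A-12"

if nested.is_dir():
    # Move everything from the doubly nested folder up one level,
    # then remove the now-empty inner directory.
    for item in nested.iterdir():
        shutil.move(str(item), str(base / item.name))
    nested.rmdir()
```

Alternatively, pointing the configured vocab path (or a symlink) at the nested location should have the same effect.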
