Shane Neeley
Any idea what would cause this? It seems like it gets through most of the examples before failing.

```
INFO:transformers.tokenization_utils:loading file https://s3.amazonaws.com/models.huggingface.co/bert/adamlin/NCBI_BERT_pubmed_mimic_uncased_base_transformers/tokenizer_config.json from cache at /home/ubuntu/.cache/torch/transformers/6389e7150ee74c4594a9117c0b9f0f23db49b25f47d55b7c07c8f32025238a45.1ade4e0ac224a06d83f2cb9821a6656b6b59974d6552e8c728f2657e4ba445d9
INFO:root:Writing example 0 of...
```
If I want to save and run generation on the model later on, I assume I do something like this:

After training: `tokenizer.save_pretrained('./results/')`

Later generation:

```
weights = "./results/"
tokenizer...
```
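A minimal sketch of that save-then-reload flow, assuming a causal LM trained with the `transformers` library and its standard `save_pretrained`/`from_pretrained` API; the `./results/` directory comes from the post, the prompt text and `Auto*` classes are illustrative:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

# After training (assumes `model` and `tokenizer` are the trained objects):
#   model.save_pretrained("./results/")
#   tokenizer.save_pretrained("./results/")

# Later generation: reload both from the same directory
weights = "./results/"
tokenizer = AutoTokenizer.from_pretrained(weights)
model = AutoModelForCausalLM.from_pretrained(weights)

# Encode a prompt, generate, and decode back to text
inputs = tokenizer("Example prompt", return_tensors="pt")
output_ids = model.generate(inputs["input_ids"], max_length=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```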
It would be awesome to have the already-visited URLs saveable, so that you can restart a crawl later without revisiting links and pick up where you left off.
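A rough sketch of what this could look like, assuming the crawler keeps a `visited` set of URLs in memory; the state-file name and helper functions here are hypothetical, not part of the tool:

```
import json
from pathlib import Path

STATE_FILE = Path("visited_urls.json")  # hypothetical state file

def load_visited() -> set:
    """Restore previously visited URLs if a prior crawl saved them."""
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()))
    return set()

def save_visited(visited: set) -> None:
    """Persist the visited set so a later crawl can skip these URLs."""
    STATE_FILE.write_text(json.dumps(sorted(visited)))

# Usage: load at startup, check before fetching, save on exit
visited = load_visited()
url = "https://example.com/page"
if url not in visited:
    # ... fetch and parse the page here ...
    visited.add(url)
save_visited(visited)
```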
### Project Abstract

**Bubblemint** is a "lazy minting" or "gumball machine" strategy that will be an open-source asset for anyone launching collections with Mintbase.io contracts. The initial user will...
Shouldn't it be c = s[i-1], **not** c = s[-i]? -i is the negative of i, not the index you want.
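A quick illustration of the difference, assuming `s` is a Python string and `i` a 1-based position (the values are made up for the example):

```
s = "abcde"
i = 2

# s[i-1] is the i-th character counting from the front (1-based i)
print(s[i - 1])  # 'b'

# s[-i] counts from the *end* of the string: a different character
print(s[-i])     # 'd'
```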
See new features: https://cloud.google.com/speech-to-text/docs/reference/rest/v1p1beta1/RecognitionConfig