transformers
support loading model without config.json file
We already support loading a fast tokenizer from a tokenizer.model file alone. However, we still require config.json to exist in the model folder (on the Hub or locally), even though it is not needed to load the tokenizer itself. This PR removes that dependency on config.json, allowing users to load from only a tokenizer.model file.
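As an illustration of the resolution behavior this PR enables, here is a minimal, self-contained sketch (not the actual transformers code; `resolve_tokenizer_files` is a hypothetical helper) of treating config.json as optional when tokenizer.model is present:

```python
import os
import tempfile


def resolve_tokenizer_files(model_dir):
    """Sketch of the file-resolution logic: config.json is optional
    as long as tokenizer.model exists in the model directory."""
    has_config = os.path.isfile(os.path.join(model_dir, "config.json"))
    has_model = os.path.isfile(os.path.join(model_dir, "tokenizer.model"))
    if not (has_config or has_model):
        raise FileNotFoundError(
            "neither config.json nor tokenizer.model found in " + model_dir
        )
    return {"config": has_config, "tokenizer_model": has_model}


# A folder containing only tokenizer.model now resolves successfully.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "tokenizer.model"), "w").close()
    print(resolve_tokenizer_files(d))  # {'config': False, 'tokenizer_model': True}
```

In user-facing terms, the effect is that `AutoTokenizer.from_pretrained(...)` pointed at such a folder should no longer fail just because config.json is absent.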
TESTS:
- updated stale tests
- updated existing test to work without a config file
- added a test in llama to load with only a tokenizer.model file
Reviewer: @ArthurZucker