Fix quantize-nbits flag
Thank you for your interest in contributing to Core ML Stable Diffusion! Please review CONTRIBUTING.md first. If you would like to proceed with making a pull request, please indicate your agreement to the terms outlined in CONTRIBUTING.md by checking the box below. If not, please go ahead and fork this repo and make your updates.
We appreciate your interest in the project!
Do not erase the below when submitting your pull request: #########
- [x] I agree to the terms outlined in CONTRIBUTING.md
With transformers 4.29 (the currently pinned version) it is not possible to use the --quantize-nbits flag (like this). With version 4.34.1 the issue is gone.
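For anyone who wants to fail fast before running a long conversion, here is a minimal sketch of a local version guard. It only encodes what is reported in this thread (4.34.1 works; 4.29.x and 4.39.3 do not); it is not an official compatibility check, and the function names are made up for illustration:

```python
# Sketch of a local pre-flight check before using --quantize-nbits.
# The "known good" pin of 4.34.1 comes from this thread, not project docs.

def parse_version(v: str) -> tuple:
    """Parse a 'major.minor.patch' string into an int tuple for comparison."""
    return tuple(int(part) for part in v.split(".")[:3])

def is_known_good(v: str) -> bool:
    """True only for the transformers version reported working in this thread."""
    return parse_version(v) == (4, 34, 1)
```

One could call `is_known_good(transformers.__version__)` before invoking the conversion script, or simply pin the dependency directly with `pip install transformers==4.34.1`.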
@nosferatu500 I still get:
ValueError: Input X contains infinity or a value too large for dtype('float64').
I created a new mamba Python 3.8 environment, but pip install -e . installs 4.39.3.
What is your environment like?
@SpiraMira The version I mentioned is the only one that doesn't have problems with ml-stable-diffusion.
@nosferatu500 Yes, you're right (hope this gets merged quickly).
I had to rebuild a mamba Python 3.8 environment for transformers to "stick" at 4.34.1. I think my issue was my Python 3.10 environment defaulting to 4.39.3; downgrading from there to 4.34.1 was problematic. Are you running a Python 3.10+ environment?
Thanks.
@SpiraMira I'm on 3.8