sanjeev-bhandari
@NickyDark1, I ran that model in Colab and it works.

#### Without quantizing

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube-1.8b-chat")
model = AutoModelForCausalLM.from_pretrained("h2oai/h2o-danube-1.8b-chat")
# ...
```
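(The original comment is truncated after the loading code. A minimal generation sketch that could follow it, assuming the model ships a chat template; the example prompt and `max_new_tokens` value are my assumptions, not from the original comment:)

```python
# Assumed continuation: build a chat prompt and generate a reply,
# reusing the tokenizer/model loaded above.
messages = [{"role": "user", "content": "Why is the sky blue?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```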
This drama is still going on. It seems @stevesimmons is getting away with harming the community. I don't know when this drama will be resolved.
Has this issue been resolved, or do we still need to rely on the workaround in https://github.com/astral-sh/uv/issues/7036#issuecomment-2440145724? It's quite frustrating, especially since I rely on multiple Python versions, which is one...