Quentin
@vinceliuice Would you be OK with me opening a PR on Flathub to add matcha-dark-azul?
I need to dive into Flatpak a little first, but it seems extremely easy, especially since you already did it for one. If it works for the one I'm interested...
I tried something [here](https://github.com/flathub/flathub/pull/1523). Not sure I did it correctly, though; it seems to build locally.
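For anyone who wants to reproduce the local build check, this is roughly what it looks like; a minimal sketch, assuming the standard Flathub tooling (the manifest filename below is a hypothetical placeholder, not the actual one from the PR):

```shell
# One-time setup: add the Flathub remote so flatpak-builder can fetch the
# runtime/SDK the manifest depends on.
flatpak remote-add --if-not-exists --user flathub https://flathub.org/repo/flathub.flatpakrepo

# Build the package from its manifest and install it into the per-user
# installation. "com.example.Manifest.json" stands in for the real manifest name.
flatpak-builder --user --install --force-clean build-dir com.example.Manifest.json
```

`--force-clean` wipes `build-dir` between runs, which keeps repeated local builds reproducible.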
Maybe I am missing something, but from what I can read, using `model_path=None` and loading the model from `CondenserForPretraining` does exactly the same thing as `trainer.train(resume_from_checkpoint=model_args.model_name_or_path)`. You will...
I haven't seen this solution here, so I'll post mine: [wslgit](https://github.com/andy-5/wslgit). I installed that and then pointed Sublime Merge's git binary setting at wslgit.exe. It seems to work like a charm...
I can confirm I get the same issue with this model, on T4 and L4 GPUs, with TEI 1.5.0 and 1.4.0. I tried setting DTYPE=float32 as an environment variable in...
The default; I think that's what HF sets up automatically. But I can double-check and try 89 as well.
OK, so it seems my container was still the Turing one when I switched from T4 to L4. I changed the container to the default 1.5 and I don't have...
So what I did was:
- Use a T4 GPU with turing-1.5.0 text-embeddings-inference: I had numerical issues
- Switch GPU to L4 and keep turing-1.5.0 text-embeddings-inference: I had numerical issues...
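For reference, switching containers boils down to changing the image tag; a minimal sketch, assuming the published TEI Docker images (the exact tag names and the `<model-id>` placeholder are assumptions, not copied from my actual setup):

```shell
# Turing-specific image (the one that showed numerical issues on both GPUs).
# DTYPE is passed as an environment variable, as mentioned above.
docker run --gpus all -p 8080:80 -e DTYPE=float32 \
  ghcr.io/huggingface/text-embeddings-inference:turing-1.5 \
  --model-id <model-id>

# Default image, used after switching to the L4.
docker run --gpus all -p 8080:80 \
  ghcr.io/huggingface/text-embeddings-inference:1.5 \
  --model-id <model-id>
```

The point being: changing only the GPU is not enough; the image has to match the GPU architecture too.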
I can confirm, libfreeaptx is here now, but it feels like pipewire/something_in_the_stack has not been compiled against it.