Martin Popovski

Results: 21 comments by Martin Popovski

I can see the function `draw_rectangle` has a positional argument `bbox_color` which takes an RGB (or possibly BGR) tuple.
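A minimal sketch of the RGB-vs-BGR ambiguity mentioned above: OpenCV-style APIs expect colors in BGR order, while most other image libraries use RGB, so passing a tuple in the wrong order silently swaps red and blue. The helper below is hypothetical, not part of the library being discussed.

```python
# Hypothetical helper: converting between RGB and BGR is just a reversal
# of the channel order, so the same function works in both directions.

def rgb_to_bgr(color):
    """Reverse an (R, G, B) tuple into the (B, G, R) order OpenCV-style APIs expect."""
    r, g, b = color
    return (b, g, r)

red_rgb = (255, 0, 0)
red_bgr = rgb_to_bgr(red_rgb)
print(red_bgr)  # -> (0, 0, 255)
```

If a rectangle that should be red renders as blue, the function almost certainly expects the other channel order.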

Hi, we did more runs, this time with `xlm-roberta-base` as the tokenizer and `per_device_train_batch_size` up to 12 (as much as we could fit in 24GB of VRAM). Here is the exact config we...

We managed to find the issue and improve the F1 score to **~0.94**.

I can't really send the real images I was working with, so I tried to recreate the issue with an image I found on the internet: ![renditionDownload](https://user-images.githubusercontent.com/48385621/203264215-a7f3ce33-bfea-4213-8cf6-05b997f5b31a.jpg) With these options:...

Line 65 is the one that actually needs to be removed to skip the safetensors file-size check. And here is a temporary patch for Docker...
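The patch technique described above amounts to deleting a single line by number, which `sed` can do in one command. The actual target file and line 65 come from the comment; the file below is a stand-in created just for the demo.

```shell
# Create a stand-in file and delete one line by number with sed, the same
# technique a Dockerfile patch step would use against the real source file.
printf 'line1\ncheck_size()\nline3\n' > /tmp/demo_patch.py
sed -i '2d' /tmp/demo_patch.py   # drop line 2, analogous to removing line 65
cat /tmp/demo_patch.py
```

In a Dockerfile this would typically run as a `RUN sed -i '65d' <path-to-file>` step after the source is copied in, so the check is gone before the server starts.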

We just tried the latest Docker image on the llama-30b-supercot model and we still get this error on the very first bin, stopping the conversion: ``` 2023-06-19T08:20:19.093178Z INFO download: text_generation_launcher:...

We managed to get llama-30b-supercot to work; here are some key findings in case they are useful. Contents of the `pytorch_model-00001-of-00243.bin` file (it's a binary file, of course, but still): ``` PK%=pytorch_model-00001-of-00243/data.pklFB9ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ�}q.P��PK$(pytorch_model-00001-of-00243/versionFB$ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ3...
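The `PK` bytes at the start of the dump above are the ZIP file magic: modern PyTorch `.bin` checkpoints (the zip-based serialization format introduced in PyTorch 1.6) are ZIP archives containing a `data.pkl` plus tensor blobs, which matches the entry names visible in the dump. A quick stdlib-only way to check whether a checkpoint is in this format:

```python
import io
import zipfile

def looks_like_zip(blob: bytes) -> bool:
    """True if the blob is a ZIP archive (the modern PyTorch checkpoint format)."""
    return zipfile.is_zipfile(io.BytesIO(blob))

# Build a tiny in-memory ZIP to stand in for a real checkpoint file;
# the entry name mimics what the dump above shows.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("pytorch_model-00001-of-00243/data.pkl", b"\x80\x02")
zip_bytes = buf.getvalue()

print(zip_bytes[:2])                           # -> b'PK' (the magic seen in the dump)
print(looks_like_zip(zip_bytes))               # -> True
print(looks_like_zip(b"legacy pickle bytes"))  # -> False
```

For a real file you would pass `open(path, "rb").read()` (or just the path) to `zipfile.is_zipfile`; a `False` result suggests the older pure-pickle format, which some conversion tools handle differently.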

> Running into the same issue with Vicuna 13B which is also technically Llama 13B Tried these two: > > * `TheBloke/vicuna-13B-1.1-HF` > > * `eachadea/vicuna-13b-1.1` > > > ```...

I've been using and it is pretty good. I think is newer and a little bit better.

I am using [Text Embeddings Inference from HuggingFace](https://github.com/huggingface/text-embeddings-inference). But it has its differences, like having separate images for different hardware acceleration, or that the model can't be changed dynamically at...